Dynamic programming



About

NOTE: This page is a daughter page of: Programming interviews


Dynamic programming is a way of breaking a large problem into smaller subproblems and minimizing the amount of re-calculation needed through the "memoization" of answers to those subproblems. This page is still under construction, but the links below give some good examples of how dynamic programming works.


Steps

Solving dynamic programming problems typically involves four steps (a small worked sketch follows this list):

  1. Characterize the structure and/or a recursive definition of an optimal solution.
  2. Initialize the structure (e.g. a table of subproblem answers).
  3. Compute the value of an optimal solution in a bottom-up fashion.
  4. Construct an optimal solution from the computed information.
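
As a rough illustration of these four steps (this sketch is not from the original page; the problem choice, function name and coin values are just placeholders), here is a bottom-up Python solution to a classic optimization problem: the fewest coins needed to make a given amount of change.

  def min_coins(coins, amount):
      """Fewest coins summing to `amount`, bottom-up DP (illustrative sketch)."""
      INF = float('inf')
      # Step 2: initialize the structure; best[a] = fewest coins to make amount a.
      best = [0] + [INF] * amount
      choice = [None] * (amount + 1)   # last coin used for each amount (for step 4)
      # Step 3: fill the table bottom-up using the recursive structure from step 1:
      #   best[a] = 1 + min(best[a - c]) over all coins c <= a
      for a in range(1, amount + 1):
          for c in coins:
              if c <= a and best[a - c] + 1 < best[a]:
                  best[a] = best[a - c] + 1
                  choice[a] = c
      # Step 4: construct one optimal solution from the recorded choices.
      used, a = [], amount
      while a > 0 and choice[a] is not None:
          used.append(choice[a])
          a -= choice[a]
      return (best[amount], used) if best[amount] != INF else (None, [])

  print(min_coins([1, 2, 5], 11))   # -> (3, [1, 5, 5]): 11 = 5 + 5 + 1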


Knapsack Problem

The Knapsack problem is a canonical dynamic programming question. The question is:

Given the weights and values (profits) of N items, put a subset of these items in a knapsack of maximum capacity W so as to maximize the total value (profit) in the knapsack.

The solution is explained fairly well here: 0/1 Knapsack Problem solved using Iterative and Dynamic Programming. Each item is either selected or not selected (binary), and trying every combination of items would be O(2^N), so instead the solution is to break the problem into its smallest subproblems and build up a table of answers.

Say we have 4 items:

  1. shirt (1kg) = $2.00
  2. shoe (3kg) = $2.40
  3. pants (2kg) = $1.40
  4. trophy (5kg) = $1.50

If our capacity is 1 kg and we only have the shirt, the answer is obvious: there is only one choice. From there we expand the table one item and one unit of capacity at a time, reusing the smaller answers (a rough code sketch of this table-filling follows).
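
To make that concrete, below is a minimal bottom-up sketch of the O(N*W) table-filling approach in Python (my own illustration, not code from the linked article); the weights and values are the four items above, and the 6 kg capacity is an arbitrary choice for the example.

  def knapsack(weights, values, capacity):
      """0/1 knapsack, bottom-up DP: dp[i][w] = best total value using the
      first i items with total weight <= w (illustrative sketch)."""
      n = len(weights)
      dp = [[0] * (capacity + 1) for _ in range(n + 1)]
      for i in range(1, n + 1):
          for w in range(capacity + 1):
              dp[i][w] = dp[i - 1][w]                     # option 1: skip item i
              if weights[i - 1] <= w:                     # option 2: take item i
                  dp[i][w] = max(dp[i][w],
                                 dp[i - 1][w - weights[i - 1]] + values[i - 1])
      return dp[n][capacity]

  # The four items from the example above: shirt, shoe, pants, trophy.
  weights = [1, 3, 2, 5]               # kg
  values  = [200, 240, 140, 150]       # cents, to keep the arithmetic exact
  print(knapsack(weights, values, 6))  # -> 580 cents ($5.80): shirt + shoe + pants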


Links