Two Strategies for Hard Optimization Problems
Most optimization problems ask you to find the best solution among an exponentially large set of candidates. Exhaustive search is correct but infeasible; the challenge is to prune the search space without missing the optimum. Dynamic programming (DP) does this by identifying overlapping subproblems and caching their solutions, turning exponential recursion into polynomial time. Greedy algorithms do it by making a locally optimal choice at each step and proving — or hoping — that local optima compose into a global optimum. Both strategies appear everywhere in production systems, but they apply to different structural properties of the problem.
Dynamic Programming: Trading Memory for Speed
DP is applicable when a problem has optimal substructure (the optimal solution contains optimal solutions to subproblems) and overlapping subproblems (the same subproblem is solved many times in naive recursion). The edit distance algorithm used in spell checkers, diff tools, and DNA sequence alignment fills an n×m table where each cell depends only on its left, top, and diagonal neighbors — O(nm) time, reducible to O(min(n, m)) space with the rolling-array optimization, since only one previous row is ever needed. The Viterbi algorithm, used in speech recognition and error-correcting codes, is DP over a trellis graph of hidden states. Knapsack variants appear in cloud resource allocation, ad auction optimization, and compiler register spilling decisions. The key implementation skill is identifying the recurrence relation and choosing between top-down memoization and bottom-up tabulation based on which subproblems are actually needed.
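As a concrete instance of the recurrence-plus-rolling-array pattern, here is a minimal edit distance (Levenshtein) sketch. It keeps only one previous row of the table, so the space bound is the length of the shorter string:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: O(len(a) * len(b)) time, O(min(len(a), len(b))) space."""
    if len(b) > len(a):
        a, b = b, a  # roll over the shorter dimension
    # prev[j] = distance from the empty string to b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]  # distance from a[:i] to the empty string
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca
                            curr[j - 1] + 1,      # insert cb
                            prev[j - 1] + cost))  # substitute (or match)
        prev = curr
    return prev[-1]
```

Each cell reads only its left, top, and diagonal neighbors, exactly the dependency structure described above.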
Greedy Algorithms: When Local Beats Global
A greedy algorithm works when the problem has the greedy choice property: a locally optimal choice can always be extended to a globally optimal solution without revisiting earlier decisions. Dijkstra's algorithm is greedy — it always expands the nearest unvisited node — and is provably optimal for non-negative edge weights. Huffman coding builds an optimal prefix-free code by greedily merging the two lowest-frequency symbols, producing the compression codes used in gzip, PNG, and JPEG. Prim's and Kruskal's algorithms for minimum spanning trees are both greedy and provably optimal. The danger is applying greedy thinking to problems that lack the greedy choice property: the fractional knapsack is solvable greedily, but the 0/1 knapsack requires DP.
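The Huffman construction mentioned above is short enough to sketch in full. This version uses a binary heap and an integer tiebreak so unlike payloads are never compared directly; the tree representation (nested tuples) is an illustrative choice, not the one any particular codec uses:

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict:
    """Greedily merge the two lowest-frequency nodes until one tree remains."""
    freq = Counter(text)
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate single-symbol input still needs one bit
        return {heap[0][2]: "0"}
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # lowest frequency
        f2, _, right = heapq.heappop(heap)  # second lowest
        heapq.heappush(heap, (f1 + f2, tiebreak, (left, right)))
        tiebreak += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):         # internal node: recurse
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                               # leaf: record the code
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes
```

Because every symbol sits at a leaf, no code is a prefix of another, which is what makes the output decodable without separators.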
Recognizing Which Strategy to Use
The practical skill is diagnosing which strategy applies before writing any code. If you can prove an exchange argument — swapping any non-greedy choice for a greedy one never worsens the solution — then greedy works. If the problem requires combining solutions to overlapping subproblems and you can write a recurrence relation, DP applies. Problems on intervals (scheduling, merging) often admit greedy solutions; problems on sequences (longest common subsequence, matrix chain multiplication) almost always need DP. When neither applies cleanly, branch-and-bound or approximation algorithms come into play. Practicing the reduction — reformulating a new problem as a known DP or greedy problem — is what makes these strategies generalize.
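Interval scheduling is the standard worked example of the exchange argument. A minimal sketch: sort by finish time and accept every interval that starts after the last accepted one finishes. Swapping any optimal schedule's first interval for the earliest-finishing interval never reduces feasibility, so the greedy choice is safe:

```python
def max_nonoverlapping(intervals):
    """Select a maximum set of pairwise non-overlapping (start, end) intervals.

    Exchange argument: the earliest-finishing interval leaves the most room
    for the rest, so it can replace the first interval of any optimal schedule.
    """
    chosen = []
    last_end = float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:   # compatible with everything chosen so far
            chosen.append((start, end))
            last_end = end
    return chosen
```

By contrast, the sequence problems listed above (longest common subsequence, matrix chain multiplication) have no such safe first choice: the best decision at one position depends on decisions made elsewhere, which is why they fall to DP instead.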
Real Systems That Use These Patterns
Git's diff algorithm (Myers diff) is a DP shortest-path problem on an edit graph, minimizing the number of insertions and deletions to transform one file into another. Google Maps and Uber's routing engines use Dijkstra and A* (a greedy best-first search guided by a heuristic) for turn-by-turn navigation, with contraction hierarchies to precompute shortcut edges so continent-scale queries stay fast. The Needleman-Wunsch algorithm used in bioinformatics for genome alignment is global sequence DP. TCP congestion control uses a greedy additive-increase / multiplicative-decrease strategy to maximize throughput while avoiding collapse. Linux's Completely Fair Scheduler greedily selects the task with minimum vruntime, implemented efficiently with a red-black tree.
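The greedy core shared by the routing examples is Dijkstra's algorithm itself: always settle the closest unsettled node. A minimal sketch over an adjacency-list graph, using lazy deletion to handle stale heap entries (the graph shape and function name here are illustrative, not any particular engine's API):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source; graph maps node -> [(neighbor, weight)].

    Correct only for non-negative weights: once a node is popped with the
    smallest tentative distance, no later path can improve on it.
    """
    dist = {source: 0}
    heap = [(0, source)]
    settled = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue  # stale entry superseded by a shorter path
        settled.add(u)
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

A* is the same loop with the pop ordered by distance-so-far plus a heuristic estimate to the goal; with a heuristic of zero it degenerates back to Dijkstra.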