A sample proposal
In an earlier posting I described, slightly abstractly, what we’re looking for in article proposals for Code Quarterly. Below is a sample proposal I wrote for an article that I’m in fact planning to write. Keep in mind that this is just one example; there are as many ways to write a successful proposal as there are articles to be written. Another thing to keep in mind is that this is as finished a proposal as I could make; it’s not necessary to write something like this before you get in touch with us. As I’ve said before, we’re happy to help writers develop their ideas. With all that in mind, here it is.
“If you don’t know what dynamic programming is, then you’re at a severe disadvantage. It’s going to come up time and time again.” —Peter Norvig, Coders at Work
Dynamic programming is, as the text Algorithms by Dasgupta, Papadimitriou, and Vazirani describes it, one of the “sledgehammers of the algorithms craft”. When it is applicable, dynamic programming can solve problems in polynomial time where the running time of a naïve brute-force solution would grow exponentially. As Peter Norvig says, a programmer who doesn’t understand dynamic programming is at a disadvantage because it shows up again and again. Not only will such a programmer be missing an important tool from their own toolbox, but they will also have a hard time understanding many interesting and important bits of code that use dynamic programming, such as Knuth’s famous line-breaking algorithm or the core algorithm of the Unix diff program.
Unfortunately, dynamic programming is a topic that seems to inspire tutorials that either are understandable only by people who already understand dynamic programming or go so far in their attempts to simplify things that they veer into oversimplification. (Programmers who came through a good university computer science program will probably have been taught about dynamic programming, though even the academics are still sorting out the best way to teach it: the third edition of the widely used algorithms text, Introduction to Algorithms, published in 2009, advertises “improved treatment of dynamic programming” as one of its features.)
My piece will aim to provide an in-depth introduction to dynamic programming understandable by programmers who’ve never heard of it as well as by those who’ve already been confused by the tutorials that Googling “dynamic programming” will turn up. I will explain the origins of dynamic programming as a mathematical technique for solving planning problems, demonstrate its use as a computer programming technique with a few simple examples, and then look at several famous algorithms that use dynamic programming.
For my simple examples, I plan to use the problems of computing the nth Fibonacci number and counting the number of ways of making change for a given amount of money with certain denominations of coins. Computing Fibonacci numbers is an appealing first example—several existing tutorials use it as well—because it is a very simple problem that has both of the characteristics necessary for dynamic programming to be applicable: optimal substructure and overlapping sub-problems. However, it can be a tricky pedagogical device because it is so simple; I will use it to establish what optimal substructure and overlapping sub-problems are but will then be very careful to make clear the connection between that example and the more complex ones to follow.
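To give a flavor of the starting point, here is a sketch of the naïve recursive Fibonacci function (the article’s actual code may differ in detail). Instrumenting it with a call counter makes the overlapping sub-problems, and the resulting exponential blowup, visible:

```python
def fib(n):
    """Naive recursion: fib(n - 2) is recomputed inside fib(n - 1),
    so the same sub-problems are solved over and over."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def fib_counted(n, counter):
    """Same recursion, but counter[0] tallies the total number of calls,
    which grows exponentially with n."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_counted(n - 1, counter) + fib_counted(n - 2, counter)
```

Running `fib_counted(20, counter)` makes over twenty thousand calls to compute a number a human could work out by hand, which is exactly the redundancy dynamic programming eliminates.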
The change counting problem, which comes from one of the early chapters of the classic Structure and Interpretation of Computer Programs, is slightly more complex. Unlike with the Fibonacci problem, it’s not entirely obvious how to turn the simple recursive function into an iterative one, one of the exercises posed in the book. However, as I’ll show, it’s easy if you use dynamic programming.
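One iterative formulation of the change counting problem looks like this (a sketch, not necessarily the exact presentation the article will use):

```python
def count_change(amount, coins):
    """Bottom-up DP: ways[a] holds the number of ways to make amount a.
    Processing one coin denomination at a time ensures each combination
    of coins is counted exactly once, regardless of order."""
    ways = [0] * (amount + 1)
    ways[0] = 1  # there is exactly one way to make zero: use no coins
    for coin in coins:
        for a in range(coin, amount + 1):
            ways[a] += ways[a - coin]
    return ways[amount]
```

With US coin denominations, `count_change(100, [50, 25, 10, 5, 1])` yields 292, the answer SICP gives for making change for a dollar.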
While working through these two examples I will discuss the difference between simple recursion (not dynamic programming), recursion with memoization (a way of doing top-down dynamic programming), and classic bottom-up dynamic programming and show how to start with simple recursion and convert it to the other two forms.
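For Fibonacci, the two dynamic programming forms can be sketched as follows (illustrative code; the article will derive these step by step from the simple recursion):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down DP: the same recursion as the naive version, but each
    sub-problem's answer is cached the first time it is computed."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_bottom_up(n):
    """Bottom-up DP: fill in answers from the smallest sub-problems up,
    keeping only the two most recent values."""
    if n < 2:
        return n
    prev, cur = 0, 1
    for _ in range(2, n + 1):
        prev, cur = cur, prev + cur
    return cur
```

Both compute fib(50) instantly, where the naive recursion would take minutes; the difference between them is purely in the order in which sub-problems get solved.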
This last distinction—between top-down and bottom-up—is particularly important. Some authors try to simplify the presentation of dynamic programming by limiting their explanation to recursion with memoization. While that is one way to implement a dynamic programming algorithm, it leaves the reader unprepared to understand code that uses bottom-up dynamic programming. Also, such top-down algorithms are often less efficient than bottom-up versions due to the overhead of recursion. And for problems where the maximum depth of the recursion is linear with the size of the input, recursive solutions won’t work for large inputs if the language or runtime limits how deeply function calls can recurse.
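The recursion-depth problem is easy to demonstrate in Python, whose default call-stack limit is modest (a small illustrative experiment, assuming CPython’s default settings):

```python
import sys
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_bottom_up(n):
    prev, cur = 0, 1
    for _ in range(n):
        prev, cur = cur, prev + cur
    return prev

# The memoized version must recurse n levels deep before the cache
# can help, so a sufficiently large n blows the call stack.
try:
    fib_memo(sys.getrecursionlimit() * 2)
    deep_recursion_ok = True
except RecursionError:
    deep_recursion_ok = False

# The bottom-up version uses constant stack space and just works.
big = fib_bottom_up(sys.getrecursionlimit() * 2)
```

The top-down version raises `RecursionError` while the bottom-up loop computes the same (enormous) number without complaint.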
After presenting the simple examples, I will examine a few historically important and interesting algorithms that use dynamic programming: the TeX line breaking algorithm described in Knuth’s “Breaking Paragraphs into Lines”, the longest common subsequence algorithm described in Hunt and McIlroy’s paper about the original Unix diff program, “An Algorithm for Differential File Comparison”, and perhaps one more.
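For orientation, the textbook dynamic programming formulation of longest common subsequence looks like this. (Hunt and McIlroy’s actual algorithm uses a cleverer candidate-based method, which is exactly the kind of refinement the article will explain; this sketch is only the straightforward starting point.)

```python
def lcs_length(a, b):
    """Classic O(len(a) * len(b)) DP for longest common subsequence,
    the problem at the heart of diff. table[i][j] holds the LCS length
    of the prefixes a[:i] and b[:j]."""
    m, n = len(a), len(b)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[m][n]
```

For diff, `a` and `b` would be sequences of lines rather than characters, but the recurrence is identical.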
I’ll explain these algorithms by reimplementing the key parts in Python, showing first a straightforward implementation using dynamic programming and then explaining some of the tricks described in those papers to make the naive dynamic programming implementation more efficient.
After reading my article, a reader should be well prepared to use dynamic programming in their own code and to recognize and understand its use in other algorithms.