Linear programming in polynomial time

There are quite a few interesting, incredible, fascinating things here:

Simplex is not a polynomial time algorithm. Certain rare kinds of linear programs cause it to go from one corner of the feasible region to a better corner and then to a still better one, and so on for an exponential number of steps. For a long time, linear programming was considered a paradox, a problem that can be solved in practice, but not in theory!

Then, in 1979, a young Soviet mathematician called Leonid Khachiyan came up with
the ellipsoid algorithm, one that is very different from simplex, extremely simple in its conception (but sophisticated in its proof) and yet one that solves any linear program in polynomial time. Instead of chasing the solution from one corner of the polyhedron to the next, Khachiyan’s algorithm confines it to smaller and smaller ellipsoids (skewed high-dimensional balls). When this algorithm was announced, it became a kind of “mathematical Sputnik,” a splashy achievement that had the U.S. establishment worried, at the height of the Cold War, about the possible scientific superiority of the Soviet Union. The ellipsoid algorithm turned out to be an important theoretical advance, but did not compete well with simplex in practice. The paradox of linear programming deepened: A problem with two algorithms, one that is efficient in theory, and one that is efficient in practice!

A few years later Narendra Karmarkar, a graduate student at UC Berkeley, came up with a completely different idea, which led to another provably polynomial algorithm for linear programming. Karmarkar’s algorithm is known as the interior point method, because it does just that: it dashes to the optimum corner not by hopping from corner to corner on the surface of the polyhedron like simplex does, but by cutting a clever path in the interior of the polyhedron. And it does perform well in practice.

But perhaps the greatest advance in linear programming algorithms was not
Khachiyan’s theoretical breakthrough or Karmarkar’s novel approach, but an unexpected consequence of the latter: the fierce competition between the two approaches, simplex and interior point, resulted in the development of very fast code for linear programming.

From Linear Programming in Algorithms by S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani, available at http://www.cs.berkeley.edu/~vazirani/algorithms/chap7.pdf
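And that very fast code is everywhere now. If you want to poke at it yourself, here's the little two-variable LP from that same chapter (maximize x1 + 6x2 subject to x1 ≤ 200, x2 ≤ 300, x1 + x2 ≤ 400), fed to SciPy's linprog. Which solver actually runs under the hood depends on your SciPy version, so take this as a sketch:

    # linprog minimizes, so we flip the sign of the objective.
    from scipy.optimize import linprog

    c = [-1, -6]                       # minimize -(x1 + 6*x2)
    A_ub = [[1, 0], [0, 1], [1, 1]]    # coefficients of the <= constraints
    b_ub = [200, 300, 400]             # their right-hand sides

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)             # should print roughly [100. 300.] 1900.0

However the solver gets there, hopping along corners or cutting through the interior, it lands on the same corner: x1 = 100, x2 = 300, for a value of 1900.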

Worst part of writing code: debugging.

That will probably never change, and what makes it worse is that, with whatever piece of code you write, you will spend most of your time debugging. At least, that’s how it seems.

The slow spread of a fast algorithm

In 1963, during a meeting of President Kennedy’s scientific advisors, John Tukey, a mathematician from Princeton, explained to IBM’s Dick Garwin a fast method for computing Fourier transforms. Garwin listened carefully, because he was at the time working on ways to detect nuclear explosions from seismographic data, and Fourier transforms were the bottleneck of his method. When he went back to IBM, he asked John Cooley to implement Tukey’s algorithm; they decided that a paper should be published so that the idea could not be patented.

Tukey was not very keen to write a paper on the subject, so Cooley took the initiative. And this is how one of the most famous and most cited scientific papers was published in 1965, co-authored by Cooley and Tukey. The reason Tukey was reluctant to publish the FFT was not secretiveness or pursuit of profit via patents. He just felt that this was a simple observation that was probably already known. This was typical of the period: back then (and for some time later) algorithms were considered second-class mathematical objects, devoid of depth and elegance, and unworthy of serious attention.

But Tukey was right about one thing: it was later discovered that British engineers had used the FFT for hand calculations during the late 1930s. And – to end this chapter with the same great mathematician who started it – a paper by Gauss in the early 1800s on (what else?) interpolation contained essentially the same idea! Gauss’s paper had remained a secret for so long because it was protected by an old-fashioned cryptographic technique: like most scientific papers of its era, it was written in Latin.

Yes, my friends, Latin beat them.

From Algorithms by S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani

“An algorithm for designing algorithms”

I love such cyclical things. Oh, and here it is:

If the upper and lower bounds match,
    then stop,
else if the bounds are close or the problem isn’t important,
    then stop,
    else if the problem definition focuses on the wrong thing,
        then restate it,
        else if the algorithm is too slow,
            then find a faster algorithm,
            else if the lower bound is too weak,
                then generate a stronger bound.

We can repeat this process until we are satisfied or exhausted.

From A Practical Introduction to Data Structures and Algorithm Analysis by Clifford A. Shaffer
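I couldn't resist making it actually run. Here's a toy version in Python where the bounds are just numbers and every research move tightens one of them; all the names and the alternating heuristic are mine, and the “restate the problem” branch doesn't really survive being mechanized:

    # "Until we are satisfied or exhausted": cap the effort, tighten one
    # bound per step, and stop when the bounds meet or get close enough.
    def design_algorithm(upper, lower, max_effort=100):
        for effort in range(max_effort):
            if upper == lower:
                return f"stop after {effort} steps: bounds match at {upper}"
            if upper - lower <= 1:
                return f"stop after {effort} steps: bounds are close ({lower}..{upper})"
            if effort % 2 == 0:
                upper -= 1   # "find a faster algorithm": the upper bound drops
            else:
                lower += 1   # "generate a stronger bound": the lower bound rises
        return "exhausted"

    print(design_algorithm(upper=10, lower=3))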

The thing about recursive functions:

I once tried to run a recursive Fibonacci sequence[1] code[2] with a relatively small[3] number on my computer[4], and I eventually had to take pity on the machine and kill the program, because it was starting to send out some weird messages and the run just wasn’t stopping.

Yeah, it gets ugly pretty quickly. A lot of the time recursion is the easiest solution to code and it’s genuinely helpful, but you’ve really got to know how to code recursive functions properly and, of course, when to use them. (I’ve put a little sketch of both versions after the footnotes below.)

[1] 0, 1, 1, 2, 3, 5, 8, 13 …
[2] I mean, the code to compute the Fibonacci numbers from 1 to n.
[3] “Relatively small” because the iterative version of the code did just fine with much bigger numbers.
[4] My Asus EeePC 701 running Ubuntu 10.04; you can judge its limits for yourself.
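For the curious, here's roughly what the two versions look like in Python; the post doesn't include the original code, so the function names and the cutoff of 35 are my own guesses:

    # Naive recursion: every call spawns two more, so the same subproblems
    # get recomputed over and over and the running time blows up
    # exponentially in n. Presumably something like this is what brought
    # the EeePC to its knees.
    def fib_recursive(n):
        if n < 2:
            return n
        return fib_recursive(n - 1) + fib_recursive(n - 2)

    # Iterative version: one pass, two variables, linear time.
    def fib_iterative(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    print(fib_iterative(100))  # instantaneous
    print(fib_recursive(35))   # already takes a noticeable while

The recursive one recomputes the same Fibonacci values over and over again, which is exactly why the run “just wasn’t stopping.”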

Laziness, Impatience, Hubris

Before the clock ticks 12:00 and the date changes, I’ll just present my output for the night:
[image: principal virtues, white]

Well, one of the outputs, hopefully. But given the time, I’m not counting on finishing the other one tonight. The goal here is to make some sort of series out of this theme.

It’s not exactly great, but I like it. And I haven’t done this sort of thing in quite some time. I was wondering how many of the techniques I’ve learned I could still remember, but I settled for a plain background anyway.

It’s one of the first techniques I ever learned with The GIMP.

It comes in black, too:
[image: principal virtues, black]

If you don’t know yet where it’s from: I got it from Perl. I’m not taking any credit for it myself. You can find it in Perl’s man pages, where it says,

The three principal virtues of a programmer are Laziness, Impatience, and Hubris.

I’m not quite a fan of the “programmer” bit, but I like the rest.

If you don’t get it, “See the Camel Book for why.”