Oleg Zabluda's blog
Friday, December 02, 2016
Twenty Questions for Donald Knuth
17. Andrew Binstock, Dr. Dobb's: At the ACM Turing Centennial in 2012, you stated that you were becoming convinced that P = NP. Would you be kind enough to explain your current thinking on this question, how you came to it, and whether this growing conviction came as a surprise to you?

Don Knuth: As you say, I've come to believe that P = NP, namely that there does exist an integer M and an algorithm that will solve every n-bit problem belonging to the class NP in n^M elementary steps.

Some of my reasoning is admittedly naïve: It's hard to believe that P ≠ NP and that so many brilliant people have failed to discover why. On the other hand, if you imagine a number M that's finite but incredibly large—like, say, the number 10↑↑↑↑3 discussed in my paper on "coping with finiteness"—then there's a humongous number of possible algorithms that do n^M bitwise or addition or shift operations on n given bits, and it's really hard to believe that all of those algorithms fail.
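To make the up-arrow notation concrete, here is a minimal Python sketch of Knuth's recursion (a↑b = a^b, and a↑ⁿb = a↑ⁿ⁻¹(a↑ⁿ(b−1))). Only tiny inputs are actually computable; a number like 10↑↑↑↑3 dwarfs anything any machine could ever evaluate, which is the point of the example.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow notation: a up-n b.

    n == 1 is ordinary exponentiation; higher n iterates the
    level below: a up-n b = a up-(n-1) (a up-n (b-1)).
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # empty iteration at every level
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# Small sanity checks (anything much bigger is hopeless to compute):
# 2 up-1 3 = 2^3 = 8
# 2 up-2 3 = 2^(2^2) = 16
# 2 up-3 3 = 2 up-2 4 = 65536
```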

My main point, however, is that I don't believe that the equality P = NP will turn out to be helpful even if it is proved, because such a proof will almost surely be nonconstructive. Although I think M probably exists, I also think human beings will never know such a value. I even suspect that nobody will even know an upper bound on M.

Mathematics is full of examples where something is proved to exist, yet the proof tells us nothing about how to find it. Knowledge of the mere existence of an algorithm is completely different from the knowledge of an actual algorithm.

For example, RSA cryptography relies on the fact that one party knows the factors of a number, but the other party knows only that factors exist. Another example is that the game of N × N Hex has a winning strategy for the first player, for all N. John Nash found a beautiful and extremely simple proof of this theorem in 1952: Hex can never end in a draw, and if the second player had a winning strategy, the first player could make an arbitrary opening move and then "steal" that strategy, since an extra stone never hurts in Hex. But Wikipedia tells me that such a strategy is still unknown when N = 9, despite many attempts. I can't believe anyone will ever know it when N is 100.

More to the point, Robertson and Seymour have proved a famous theorem in graph theory: Any class 𝒜 of graphs that is closed under taking minors has a finite number of minor-minimal graphs. (A minor of a graph is any graph obtainable by deleting vertices, deleting edges, or shrinking edges to a point. A minor-minimal graph H for 𝒜 is a graph whose smaller minors all belong to 𝒜 although H itself doesn't.) Therefore there exists a polynomial-time algorithm to decide whether or not a given graph G belongs to 𝒜: The algorithm checks that G doesn't contain any of 𝒜's minor-minimal graphs as a minor.

But we don't know what that algorithm is, except for a few special classes, because the set of minor-minimal graphs is often unknown. The algorithm exists, but it's not known to be discoverable in finite time.
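The flavor of such an obstruction-set algorithm can be seen in a toy case where the minor-minimal graphs happen to be known: the class of forests is closed under taking minors, and its only minor-minimal non-member is the triangle K3, since any cycle contracts down to a triangle. A hedged Python sketch (the function names are mine, not from the interview) implements the membership test "reject iff the graph has a K3 minor," which here reduces to cycle detection via union-find:

```python
def has_k3_minor(n, edges):
    """True iff the graph on vertices 0..n-1 has a K3 minor.

    A graph has a K3 minor exactly when it contains a cycle
    (contract the cycle down to a triangle), so a union-find
    cycle check suffices for this particular obstruction.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return True  # this edge closes a cycle => K3 minor
        parent[ru] = rv
    return False

def in_forest_class(n, edges):
    # Membership in the minor-closed class of forests,
    # decided by checking the obstruction set {K3}.
    return not has_k3_minor(n, edges)

# A path is a forest; a triangle is not.
# in_forest_class(3, [(0, 1), (1, 2)])          -> True
# in_forest_class(3, [(0, 1), (1, 2), (2, 0)])  -> False
```

For forests the obstruction set is trivial to write down; Robertson and Seymour's theorem guarantees such a finite set exists for every minor-closed class, but gives no way to find it, which is exactly the nonconstructive gap Knuth is pointing at.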

This consequence of Robertson and Seymour's theorem definitely surprised me, when I learned about it while reading a paper by Lovász. And it tipped the balance, in my mind, toward the hypothesis that P = NP.

The moral is that people should distinguish between known (or knowable) polynomial-time algorithms and arbitrary polynomial-time algorithms. People might never be able to implement a polynomial-time-worst-case algorithm for satisfiability, even though P happens to equal NP.



