A week ago or so, Norbert Blum posted a paper in which he claims to have proven P != NP. He’s hardly the first to try, and he surely won’t be the last. But I’ve seen surprisingly little analysis of this, so I figured I’d give my novice opinion of the subject.

I’ve now read a couple of blog posts regarding the paper in question. They explain some things, but I was still left confused.

Without further ado, here’s my read of Blum’s paper, with the caveat that I don’t usually dabble in complexity any more complicated than “exponential vs polynomial,” and the other caveat that I’ve probably spent a grand total of five hours on this, which is not a lot of time. I could have made (major) mistakes in this, and I appreciate any input or corrections. Email ‘em to me.

Here we go.

P is the set of languages accepted in polynomial time (with regard to the input length) by a deterministic Turing machine (if you don’t know what I’m talking about, think of a TM as a very clunky approximation of a computer). NP is the set of languages accepted in polynomial time wrt the input length by a non-deterministic TM (meaning you can sort of “guess” and assume you’ll get the right answer).

To prove P != NP, we need to show that there is a problem that is in NP but not in P. To do that, we take a problem we know is in NP, and attempt to establish a lower bound on its runtime on a deterministic TM. If that lower bound is superpolynomial (meaning, strictly bigger than polynomial), then the problem can’t be in P.

This paper uses somewhat complex (ha-ha) notions of complexity. First of all, we’re primarily talking about circuit complexity here (whereas until now we’ve been talking about time complexity). So we’re talking about minimum circuit size, represented in numbers of AND, OR, and NOT gates. But we’re not just going to count the total number of AND, OR, and NOT gates; we’re going to use slightly more complicated notions:

  • Monotone complexity: What is the number of AND and OR gates in the smallest monotone circuit (no NOT gates allowed) we can build to compute this function?
  • Non-monotone complexity: What is the number of AND and OR gates in the smallest non-monotone circuit we can build to compute this function? (Now we’re allowed to use NOT, but we don’t count them.)
  • Standard Complexity: What is the number of AND and OR gates in the smallest non-monotone circuit we can build to compute this function, where all the NOT gates happen at the very beginning (on input wires)?

(You can convert any non-monotone circuit to a standard circuit pretty easily using De Morgan’s Laws, and it multiplies the circuit size by at most 2, which is a constant factor that we don’t care about. So proving statements about standard circuit complexity also proves them about non-monotone circuit complexity, and vice versa.)
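To make that De Morgan conversion concrete, here’s a toy sketch (my own construction, nothing from the paper) that pushes every NOT gate down to the input wires. One caveat on the factor of 2: on tree-shaped formulas like this one the AND/OR count doesn’t change at all; the doubling only shows up in general circuits where a gate’s output is shared and used both positively and negatively, so you need the gate and its dual.

```python
# Toy circuits as nested tuples: ("IN", name), ("NOT", c),
# ("AND", c1, c2), ("OR", c1, c2). My own encoding, just for illustration.

def to_standard(circuit, negate=False):
    """Return an equivalent circuit whose NOT gates sit only on input wires."""
    op = circuit[0]
    if op == "IN":
        return ("NOT", circuit) if negate else circuit
    if op == "NOT":
        return to_standard(circuit[1], not negate)  # absorb or cancel the NOT
    # De Morgan: NOT(a AND b) = NOT(a) OR NOT(b), and dually for OR
    if negate:
        op = "OR" if op == "AND" else "AND"
    return (op, to_standard(circuit[1], negate), to_standard(circuit[2], negate))

def gate_count(circuit):
    """Count only AND/OR gates; NOTs are free under this cost measure."""
    op = circuit[0]
    if op == "IN":
        return 0
    if op == "NOT":
        return gate_count(circuit[1])
    return 1 + gate_count(circuit[1]) + gate_count(circuit[2])

# NOT((x AND y) OR NOT z) becomes (NOT x OR NOT y) AND z
c = ("NOT", ("OR", ("AND", ("IN", "x"), ("IN", "y")), ("NOT", ("IN", "z"))))
std = to_standard(c)
print(std)                             # NOTs now only on input wires
print(gate_count(c), gate_count(std))  # 2 2 -- same AND/OR count on a tree
```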

Without getting into details, there’s a nice theorem that shows that if a function has a large non-monotone circuit size, then the time required to compute it on a TM is also large. So we often use circuit complexity in place of time complexity, because it can be easier to work with. I thought this chapter presented this concept well.

Now, unfortunately these results only apply to non-monotone circuits. Just because a problem has a superpolynomial monotone lower bound doesn’t mean it’s not in P. Razborov showed that the Perfect Matching problem (is there a subset of the edges in a graph that touches every vertex of the graph without any two edges sharing a vertex?) has a superpolynomial monotone lower bound, even though it’s solvable in polynomial time (via Edmonds’ blossom algorithm). So a superpolynomial monotone lower bound isn’t sufficient.
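Just to pin down what the Perfect Matching decision problem asks, here’s a brute-force checker I wrote (this version is exponential, of course; membership in P comes from Edmonds’ blossom algorithm, which I’m not reproducing here):

```python
# Brute-force Perfect Matching: try every set of n/2 edges and check whether
# it covers all n vertices with no vertex used twice. Exponential on purpose --
# this only illustrates the definition, not the polynomial-time algorithm.
from itertools import combinations

def has_perfect_matching(n, edges):
    """n vertices labeled 0..n-1; edges is a list of (u, v) pairs."""
    if n % 2:
        return False  # an odd number of vertices can never be fully paired up
    for subset in combinations(edges, n // 2):
        touched = [v for e in subset for v in e]
        if len(set(touched)) == n:  # every vertex covered, none shared
            return True
    return False

print(has_perfect_matching(4, [(0, 1), (1, 2), (2, 3)]))  # True: (0,1) and (2,3)
print(has_perfect_matching(4, [(0, 1), (0, 2), (0, 3)]))  # False: every edge shares vertex 0
```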

But a superpolynomial non-monotone lower bound would be sufficient. The goal of Blum’s paper is to take superpolynomial lower bounds of monotone circuits for problems in NP from Razborov, Alon, and Boppana, and show how to apply them to non-monotone circuits. If we do that, we show P != NP.

How do we show lower bounds? We use approximators. What’s an approximator? It’s a smaller circuit that we “want” to have the same functionality as a specific larger circuit. Basically, if a “good” small approximator can be built, then the smallest circuit computing the function is at most as big as the approximator. If, on the other hand, we can show that no good approximator of a certain size exists, then we know that any circuit computing the function must be bigger than that size: a lower bound.

So our goal is to show that no good polynomial-size approximator exists for the (non-monotone) circuit of a problem in NP. That would show that the problem is in NP but not in P.

How do we know an approximator is good? We use test inputs: positive test inputs, which we know are in the language, and negative test inputs, which we know are not. If an approximator is good, it will accept all the positive test inputs and reject all the negative test inputs. It’s also pretty easy to show that the test inputs are sufficient: an approximator that succeeds on all of them is extremely likely to have the same exact functionality as the circuit we’re approximating. So it’s an if-and-only-if kind of thing. There exists a good approximator of a certain size for a circuit if and only if that circuit can be written in as many gates as the approximator.
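Here’s the “good approximator” test as I understand it, sketched in code with my own hypothetical target function and hand-picked test inputs standing in for the real circuits:

```python
# A candidate is "good" iff it accepts every positive test input and rejects
# every negative one. Plain Python functions stand in for circuits here;
# inputs are bit tuples. The target function and tests are made up by me.

def is_good_approximator(approx, positive_tests, negative_tests):
    return (all(approx(x) for x in positive_tests)
            and not any(approx(x) for x in negative_tests))

# Hypothetical target: "at least two of the three input bits are set"
# (a monotone function), with a few hand-picked test inputs.
positives = [(1, 1, 0), (0, 1, 1), (1, 1, 1)]
negatives = [(0, 0, 0), (1, 0, 0), (0, 0, 1)]

maj = lambda x: x[0] + x[1] + x[2] >= 2   # matches the target
just_first_bit = lambda x: x[0] == 1      # a bad, too-simple "approximator"

print(is_good_approximator(maj, positives, negatives))             # True
print(is_good_approximator(just_first_bit, positives, negatives))  # False
```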

These two sources do a good job of talking about Razborov’s original approximator idea, which uses approximators in disjunctive normal form (DNF: basically a bunch of monomials connected by OR gates). They bound the size of the monomials and the number of ORs. Then they prove that no approximator of polynomial (monotone) size can approximate the Clique problem (which is in NP), even on just these test inputs. (Remember, this was all for monotone circuits, which isn’t sufficient to show anything about P vs NP. If we had a similar result for non-monotone circuits, we’d have something to talk about.)

Berg and Ulfberg created a slightly different method for Clique that uses “CNF-DNF approximators.” Same idea, but now we use both DNF and CNF formulas (CNF is conjunctive normal form: a bunch of clauses connected by ANDs). But still, we’re only talking about monotone circuits. We can make CNF/DNF switches and DNF/CNF switches to convert between the two formats whenever each is most convenient, with well-understood effects on the number of gates.
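To make the “switch” idea concrete, here’s a naive DNF-to-CNF conversion via distributivity. To be clear, this is my own toy version, not the Berg-Ulfberg construction (their switches are approximate and carefully bound the blowup); everything here is monotone, so literals are just variable names:

```python
# A monotone DNF is a list of monomials, each a frozenset of variable names.
# Distributivity: each CNF clause picks one literal out of every monomial.
from itertools import product

def dnf_to_cnf(monomials):
    clauses = {frozenset(pick) for pick in product(*monomials)}
    # drop clauses that strictly contain another clause (they're redundant)
    return [c for c in clauses if not any(d < c for d in clauses)]

# (x AND y) OR (z AND w)  ==  (x OR z) AND (x OR w) AND (y OR z) AND (y OR w)
dnf = [frozenset({"x", "y"}), frozenset({"z", "w"})]
cnf = dnf_to_cnf(dnf)
print(len(cnf))  # 4 -- two size-2 monomials become 2*2 clauses; sizes blow up fast
```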

Okay, here we go: Blum’s paper, released last week.

Blum’s main claim - Theorem 6 - is that if you can use CNF-DNF approximators to prove a lower bound on a monotone circuit of a (monotone) function, then you can translate that approximator into an approximator to prove a lower bound for a standard circuit of the same (still monotone) function.

Equivalently,* if you can show that no sufficiently small CNF-DNF approximator exists for monotone complexity, you can convert that argument into one that shows that no sufficiently small CNF-DNF approximator exists for standard complexity.

* To me, this is the least convincing claim by Blum. None of the blog posts I’ve read analyzing this paper seem to think this is an issue, so I might be missing something about this argument - it might be that you can prove a negation in this way. But it seems to me that the argument here is:

  1. CNF-DNF approximators of small (polynomial) size cannot approximate this function in monotone circuit-land.
  2. We convert these approximators to standard circuit-land.
  3. These approximators cannot be used to approximate the function in standard circuit-land.
  4. ??? Therefore, no CNF-DNF approximator exists that can approximate this function in standard circuit-land. ???

I don’t buy step 4; it seems to me that we’ve just proven that CNF-DNF approximators made by converting monotone CNF-DNF approximators can’t approximate the function. That’s not to say you can’t create a CNF-DNF approximator in a different way that can approximate the circuit. But like I said, no one else seems to think this is an issue, and I had literally never heard of an approximator until last Friday, so it’s possible I’m misunderstanding something about how they work.

A very simple thing that would eliminate this concern would be to show some bound on the error of the translation. If I could see that there is no better circuit than the translated one, then my whole concern goes out the window. I didn’t see it in there, but again, I didn’t spend that long on this and I’m not an expert.

You might also say “hey didn’t you say Razborov showed there was no polynomial monotone approximator for a problem in P? What’s up with that?” Well, it turns out the perfect matching problem never admitted a CNF-DNF approximator for the monotone version, so there’s no way to convert it upward. (I’m puzzled by this, because surely you ought to be able to encode Perfect Matching as a problem that does have a CNF-DNF approximator? Or else we’ve unlocked some weird property of P? But hey idk.)

Razborov’s other (stronger) statement, that you can’t prove better than quadratic lower bounds for non-monotone circuits using approximations, was also challenged by Blum, who says that Razborov made an assumption that doesn’t actually hold; the distance metric was too strong. Blum explains this in an earlier paper, which I couldn’t find a non-paywalled version of. To be honest, I don’t fully understand the minutiae of that argument either; explaining it would have added like five more paragraphs to this post, and I’m tired.

Anyway, then Blum goes on to show this result concretely for the Clique problem, just like Razborov did for monotone circuits.

Woohoo, we made it! We’ve proven (?) a superpolynomial lower bound of standard circuits for a problem in NP, which implies a superpolynomial lower bound of a non-monotone circuit for it, which implies superpolynomial time of computation for a deterministic TM. Thus, there is a problem in NP that is not in P, and therefore P != NP. Assuming nothing went wrong along the way.

So yeah, basically, to my novice eye, this result doesn’t not look promising. I’m sort of surprised I haven’t seen more hubbub about it. I’m not fully convinced that the argument about converting negative statements holds (no approximator exists in this context, and we convert it to another context and claim that no approximator exists in that new context either). Maybe there’s some fact about how conversions of monotone approximators cover the space of standard approximators, I don’t know. Maybe there’s a flaw in the construction. Maybe Blum is wrong when he says this isn’t a natural proof (a type of proof that, under standard cryptographic assumptions, provably cannot establish P != NP). Or maybe he’s right, who knows! Exciting times.

Anyway, I hope you enjoyed my attempted explanation. Again, I’m not a complexity theorist; I welcome any comments and corrections by email.