Here are the notes for the project presentations (in the order the talks were presented):

## May 9, 2011

## April 29, 2011

## April 26, 2011

### Project presentation schedule

Below is the schedule for the project presentations. You should target a presentation of about 30 minutes, with 5-10 minutes for questions. The first 5-10 minutes should clearly introduce the context of the topic, and even though the time is too short to present many technical details, you should strive to teach the audience some technical aspect, e.g., the proof of some key lemma that may be at the heart of the proof method. Please email me your slides and lecture notes after your presentation, and I will post them here.

Apr 27: (3-4:20pm)

- Carol Wang: Low-degree testing
- Terry Zhou: Satisfiability threshold of random $k$-SAT

Apr 29: (3-4:20pm)

- Yuan Zhou: Extractors and pseudorandom generators
- John Wright:

Note that at 4:30pm, Scott Aaronson will deliver the 2011 Buhl lecture on “*Quantum Computing and the Limits of the Efficiently Computable*” at the Mellon Institute auditorium. I encourage the class to attend this lecture after the project presentations.

May 2: (3-5pm)

- Srivatsan Narayanan: Cryptography in
- Richard Peng: Expanders/extractors from algebraic list-decodable codes
- Ankit Sharma: Complexity of Boolean formula minimization.

### Lecture 28: Resolution width lower bound; Course summary

After a recap of the resolution proof system and the “short proofs are narrow” theorem statement, we stated the lower bound of $\epsilon s/2$ on the width of resolution refutations of unsatisfiable $(s,\epsilon)$-expanding CNFs. We proved that if a CNF $G$ with $m$ clauses and $n$ variables minimally implies a clause $D$ with $w$ literals, then $w \ge n - m$. We then used this to prove that any resolution refutation of an $(s,\epsilon)$-expanding unsatisfiable CNF $F$ must contain a clause $D$ such that the minimal sub-formula of $F$ that implies $D$ has between $s/2$ and $s$ clauses. This clause must have at least $\epsilon s/2$ literals, giving the desired width lower bound. Finally, we proved the short proofs are narrow claim $w(F) \le k + \log_2 S_T(F)$ (here $F$ is a $k$-CNF) for tree-like resolution refutations.

To wrap up the course, we summarized the topics covered during the semester, loosely organized into the following categories. (Here is the summary of all the course lectures.)

- Identifying computational themes and organizing them by complexity classes
- Basic complexity classes and their complete problems (Formula-Value for P, SAT for NP, TQBF for PSPACE, ST-CONN for NL, QSAT$_i$ for $\Sigma_i^p$, Permanent for #P, etc.)
- Associated non-uniform models: circuits, formulae, branching programs.
- Computational notions of proofs (PCPs, interactive proofs) and randomness (pseudorandomness)
- Hierarchy of difficulty of problems, based on conjectured inclusions of complexity classes (SAT harder than Graph Isomorphism, Min-Formula harder than SAT, Permanent harder than Min-Formula, SAT not much harder than Unique-SAT, etc.)
- Relationships between complexity classes:
- Various inclusions $L \subseteq NL \subseteq P \subseteq NP \subseteq PH \subseteq PSPACE \subseteq EXP$.
- Outside the above chain: $BPP \subseteq \Sigma_2^p \cap \Pi_2^p$; and $BPP \subseteq P/\mathrm{poly}$; without proof, Toda’s theorem $PH \subseteq P^{\#P}$.
- (Surprising?) equalities: NL=coNL; IP=PSPACE; AM[k]=AM; (without proof): SL=L.
- Non-uniform vs uniform classes, and collapse results
- $NP \subseteq P/\mathrm{poly}$ implies $PH = \Sigma_2^p$ (Karp–Lipton).
- $PSPACE \subseteq P/\mathrm{poly}$ implies $PSPACE = \Sigma_2^p$.
- $EXP \subseteq P/\mathrm{poly}$ implies $EXP = \Sigma_2^p$.
- $P^{\#P} \subseteq P/\mathrm{poly}$ implies $P^{\#P} = MA$.

- Lower bounds and separations
- Hierarchy theorems: $P \subsetneq EXP$, $NL \subsetneq PSPACE$.
- Time-space trade-offs: e.g., $SAT \notin TISP(n^{1.1}, n^{0.1})$.
- Non-uniform lower bounds:
- Parity not in $AC^0$ (proofs based on the polynomial method and Håstad’s switching lemma)
- Branching program and formula lower bounds (Nečiporuk and Khrapchenko)
- Resolution lower bounds in proof complexity

- Pseudorandomness
- Randomized complexity classes BPP and BPL
- Nisan’s pseudorandom generator for BPL (and connection to randomness extractors)
- Hardness vs Randomness: Nisan-Wigderson generator for BPP
- Converting worst-case hardness to average-case hardness: connection to coding theory
- Hardness amplification: Yao’s XOR lemma, and proof via hardcore sets.
- Probabilistically checkable proofs
- Connections to approximation
- A polylog query PCP using polynomial based codes
- A constant query PCP with exponential sized proofs based on linearity testing
- Robust PCPs of proximity and composing them
- Connections to coding theory and property testing

We covered a variety of topics in a fair bit of depth; however, the richness of the field is such that there are many important topics which we did not get a chance to touch upon during the course. A partial list includes:

- Quantum complexity theory
- Communication complexity (and its applications to circuit complexity and streaming and data structure lower bounds)
- Average-case complexity (DistNP and DistNP-completeness, etc.)
- Cryptography: one-way functions, cryptographic PRGs, zero knowledge proofs, etc.
- List decoding and its complexity-theoretic applications
- Arithmetic circuits and algebraic complexity theory
- Parameterized complexity theory
- Descriptive complexity

## April 20, 2011

### Lecture 27: Expanders; proof complexity

We gave the proof of the following theorem concerning random walks in expanders, which is very useful for randomness-efficient error-reduction of one-sided error randomized algorithms: If $G$ is an $(n,d,\lambda)$-algebraic expander (which means the spectral gap of $G$ is $1-\lambda$), and $B \subseteq V(G)$ has size at most $\beta n$, then the probability that all vertices of a $t$-step random walk that starts at a random vertex of $G$ fall inside $B$ is at most $(\beta + \lambda)^t$. Our presentation followed the one in these notes.
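
As a sanity check on this hitting bound, here is a small Python sketch. It uses the complete graph $K_n$ as a stand-in expander (with $\lambda = 1/(n-1)$); the graph and all parameters are illustrative choices, not from the lecture. It compares the empirical probability that a walk stays inside a bad set $B$ against $(\beta + \lambda)^t$:

```python
import random

random.seed(0)

# Toy "expander": the complete graph K_n, whose random walk has
# second eigenvalue lambda = 1/(n-1) in absolute value.
n = 16
lam = 1 / (n - 1)

B = set(range(4))          # the "bad" set B; beta = |B|/n = 0.25
beta = len(B) / n
t = 4                      # number of walk steps
trials = 20000

stuck = 0
for _ in range(trials):
    v = random.randrange(n)            # uniformly random start vertex
    ok = v in B
    for _ in range(t):
        # step to a uniformly random *other* vertex (walk on K_n)
        w = random.randrange(n - 1)
        v = w if w < v else w + 1
        ok = ok and (v in B)
    if ok:
        stuck += 1

empirical = stuck / trials
bound = (beta + lam) ** t              # the (beta + lambda)^t hitting bound
print(empirical, bound)
```

The empirical frequency comes out well below the bound, as expected; for a real constant-degree expander the simulation would look the same with $\lambda$ a fixed constant.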

**Readings:** Prompted by a question, we talked briefly (and without any proofs) about Chernoff bounds for expanders. Here are two references for a proof: Gillman’s original paper, and a recent paper on constructive proofs of Chernoff bounds (Section 3.4 deals with expander random walks).

Since we did not cover any explicit construction of expanders, I also promised to give pointers to (short) notes for both the algebraic and combinatorial approaches. So, here is a proof for the spectral gap of the Margulis expander construction that we mentioned in class, and here is a description of the zig-zag product and its analysis.

---

We then gave an introduction to the subject of proof complexity, which relates to the NP vs. coNP question in much the same way that circuit complexity relates to the P vs. NP question. We talked about sound and complete proof systems, and defined the resolution proof system for refuting unsatisfiable formulae. We proved the soundness and completeness properties of the resolution proof system. For an unsatisfiable CNF formula $F$, we defined the quantities $S(F)$, $S_T(F)$, and $w(F)$ for the length/size of the shortest resolution refutation, the length of the shortest tree-like resolution refutation, and the smallest width of a resolution refutation of $F$, respectively, where the width of a proof is defined as the maximum number of literals in any clause appearing in the proof. We made some comments on small width proofs and automatizability.
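
To make the resolution rule concrete, here is a minimal Python sketch (the encoding of clauses as sets of signed integers is a hypothetical choice for illustration, not from the lecture) that derives the empty clause from a tiny unsatisfiable CNF and measures the width of the resulting refutation:

```python
def resolve(c1, c2, v):
    # Resolution rule: from (A or x_v) and (B or not-x_v), derive (A or B).
    assert v in c1 and -v in c2
    return frozenset((c1 - {v}) | (c2 - {-v}))

# Clauses as frozensets of signed ints: +i means x_i, -i means its negation.
# A small unsatisfiable CNF: x1 AND (not-x1 OR x2) AND not-x2
f = [frozenset({1}), frozenset({-1, 2}), frozenset({-2})]

d1 = resolve(f[0], f[1], 1)      # derives the clause {x2}
d2 = resolve(d1, f[2], 2)        # derives the empty clause: a refutation

proof = f + [d1, d2]
width = max(len(c) for c in proof)   # max number of literals in any clause
print(d2 == frozenset(), width)
```

Deriving the empty clause certifies unsatisfiability (soundness), and here the refutation has width 2.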

We then stated the *Short proofs are Narrow* theorem, which upper bounds the width as a function of proof size for $k$-CNF formulae $F$ on $n$ variables as follows:

- $w(F) \le k + O(\sqrt{n \log S(F)})$.
- $w(F) \le k + \log_2 S_T(F)$.

We deferred the proof of the above theorem, and talked about the notion of $(s,\epsilon)$-expansion for bipartite graphs, and for CNF formulae (based on their clause-variable incidence graph). We stated that random $k$-CNF formulae with $\Delta n$ clauses, for $\Delta$ a large enough constant compared to $k$, are unsatisfiable with high probability, and are also $(s,\epsilon)$-expanding w.h.p. for $s = \Omega(n)$.

We then stated the width lower bound theorem (which we will prove next lecture): If $F$ is an unsatisfiable CNF formula that is $(s,\epsilon)$-expanding, then $w(F) \ge \epsilon s/2$. Note that this implies that random $k$-CNF formulae need a width of $\Omega(n)$ for refutation, and by the “short proofs are narrow” theorem, do not have refutations of length less than $2^{cn}$ for some constant $c > 0$. We concluded with the following exercise: A minimally unsatisfiable CNF formula must always have more clauses than variables. (Hint: Matching.)
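
The matching hint can be illustrated in code: if every clause can be matched to a distinct variable occurring in it, then setting each matched variable to satisfy its matched clause yields a satisfying assignment, so an unsatisfiable formula cannot admit such a matching. Below is a small Python sketch of this observation (the augmenting-path matcher and the clause encoding are illustrative choices, not from the lecture):

```python
# Clauses as lists of signed ints: +i means x_i, -i means not-x_i.
def match_clauses(cnf):
    """Match each clause to a distinct variable it contains (Kuhn's algorithm)."""
    match = {}                      # variable -> index of its matched clause

    def try_assign(ci, seen):
        for lit in cnf[ci]:
            v = abs(lit)
            if v in seen:
                continue
            seen.add(v)
            # Take v if it is free, or re-route its current clause elsewhere.
            if v not in match or try_assign(match[v], seen):
                match[v] = ci
                return True
        return False

    return match if all(try_assign(i, set()) for i in range(len(cnf))) else None

def assignment_from_matching(cnf, match, n_vars):
    """Satisfy each clause via its matched variable; other variables default to True."""
    assign = {v: True for v in range(1, n_vars + 1)}
    for v, ci in match.items():
        lit = next(l for l in cnf[ci] if abs(l) == v)
        assign[v] = lit > 0
    return assign

cnf = [[1, -2], [2, 3], [-1, -3, 2]]
m = match_clauses(cnf)
a = assignment_from_matching(cnf, m, 3)
sat = all(any((l > 0) == a[abs(l)] for l in c) for c in cnf)
print(sat)
```

By Hall's theorem, a clause-saturating matching fails only when some set of clauses touches fewer variables than its size, which is where the counting in the exercise comes from.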

There was a question on whether there were separations known between general and tree-like resolution. I answered yes, but did not give a reference. Here is a paper that gives a near-optimal exponential separation.

**Readings:** Chapter 15 of Arora-Barak has a brief discussion of proof complexity. Our presentation of the width lower bound will follow these notes of Trevisan.

## April 19, 2011

### Lecture 26: Expanders

We defined a combinatorial notion of expanders, and saw several simple examples and why they weren’t good expanders in the sense we were interested in. We then gave two well-known examples of expanders, without proofs of their claimed expansion properties. Turning to spectral properties of the adjacency matrix and Laplacian of a graph, we defined the spectral gap of a graph, and proved that a good spectral gap implies that a random walk mixes fast. We used the spectral gap as a criterion to define algebraic expanders, and proved that algebraic expansion implies combinatorial expansion. We also stated the converse (the “harder” direction of Cheeger’s inequality) without proof. We stated the theorem detailing the use of random walks on expanders to perform error-reduction in a randomness-efficient way, which we referred to in the low-degree polynomial based PCP construction; we will prove this theorem next lecture.
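
As an illustration of the spectral gap criterion, the following Python sketch (the graphs and parameters are illustrative choices) estimates the second eigenvalue of the random-walk matrix by power iteration, contrasting a poor expander (an odd cycle) with an excellent one (the complete graph):

```python
import math
import random

random.seed(1)

def walk_matrix(adj):
    """Row-stochastic random-walk matrix of an undirected graph (adjacency sets)."""
    n = len(adj)
    return [[(1 / len(adj[i]) if j in adj[i] else 0.0) for j in range(n)]
            for i in range(n)]

def second_eigenvalue(P, iters=500):
    """Estimate |lambda_2| of a regular graph's walk matrix by power iteration
    on the subspace orthogonal to the all-ones (stationary) eigenvector."""
    n = len(P)
    v = [random.random() for _ in range(n)]
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]               # project out the trivial eigenvector
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
        v = [sum(P[i][j] * v[j] for j in range(n)) for i in range(n)]
    return math.sqrt(sum(x * x for x in v))     # ||P v|| with ||v|| = 1

n = 9
cycle = [{(i - 1) % n, (i + 1) % n} for i in range(n)]   # odd cycle C_9
complete = [set(range(n)) - {i} for i in range(n)]       # complete graph K_9

lam_cycle = second_eigenvalue(walk_matrix(cycle))
lam_complete = second_eigenvalue(walk_matrix(complete))
print(lam_cycle, lam_complete)   # large lambda = small spectral gap = slow mixing
```

The cycle's second eigenvalue is close to 1 (tiny spectral gap, slow mixing), while the complete graph's is tiny; of course, $K_n$ is not a bounded-degree expander, which is what makes explicit constructions interesting.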

**Readings:** These notes from Spring 2009, or Sections 21.1 and 21.2 of Arora-Barak. If you are interested in learning more about expanders, you can also take a look at notes from this recent course at Stanford.

### Lecture 25: Exponential-sized PCPs

We continued our discussion of PCPs, focusing on the construction of a PCP with constant query complexity, but which uses polynomially many random bits to index into an exponential-sized proof.

The PCP will be for the NP-hard problem of determining if quadratic equations $Q_1, \dots, Q_m$ over $\mathbb{F}_2$, in variables $x_1, \dots, x_n$, have a common satisfying assignment.

The PCP proof will expect the Hadamard code of such a satisfying assignment $x \in \mathbb{F}_2^n$ (which includes the dot product $\langle x, a \rangle$ for every $a \in \mathbb{F}_2^n$) and also its quadratic function code (which includes the value $x^T M x$ for every matrix $M \in \mathbb{F}_2^{n \times n}$).

The verifier can pick a random linear combination of the $Q_i$’s and check that it equals 0 with the help of just two queries into the Hadamard and QF encodings. As with the PCP of last time, the challenge is to ensure that the proof is indeed close to Hadamard and QF encodings of some $x$. In this case, this ends up being technically easier (though still very interesting and influential).

We stated the linearity test to check proximity to the Hadamard code, and proved its soundness properties via Fourier analysis. A similar test can check that the QF code is close to a linear function of $n^2$ bits, i.e., that it encodes some matrix $N$ with the inner product $\langle N, M \rangle$ for every $M$. Finally, we saw a consistency check that ensures that $N = xx^T$. We saw the self-correction method (which is just the local decoding of Hadamard codes which we saw earlier) to make all these checks in a robust way when we only know that the tables are close to legal Hadamard codewords.
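
The linearity (BLR) test is easy to experiment with. The following Python sketch (parameters are illustrative choices) runs the test on an exact Hadamard codeword and on a copy with about 10% of its entries flipped:

```python
import random

random.seed(2)
n = 8
N = 1 << n          # vectors in F_2^n packed as n-bit integers

def dot(a, x):
    """Inner product over F_2 of bitmask-packed vectors."""
    return bin(a & x).count("1") & 1

a = 0b10110101
table = [dot(a, x) for x in range(N)]          # Hadamard codeword of a

corrupted = table[:]
for x in random.sample(range(N), N // 10):     # flip ~10% of the entries
    corrupted[x] ^= 1

def blr_reject_count(f, trials):
    """BLR linearity test: check f(x) + f(y) = f(x + y) at random x, y
    (addition in F_2^n is bitwise XOR)."""
    rejects = 0
    for _ in range(trials):
        x, y = random.randrange(N), random.randrange(N)
        if f[x] ^ f[y] != f[x ^ y]:
            rejects += 1
    return rejects

honest_rejects = blr_reject_count(table, 500)
corrupt_rejects = blr_reject_count(corrupted, 500)
print(honest_rejects, corrupt_rejects)
```

A genuinely linear table never rejects, while the corrupted table is caught in a constant fraction of the trials, in line with the Fourier-analytic soundness bound.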

Putting everything together, we concluded that $NP \subseteq PCP(\mathrm{poly}(n), O(1))$.

**Readings:** The above PCP is described in Section 11.5 of Arora-Barak, with the analysis of the linearity test appearing in Section 22.5.

Given our two PCP constructions, one (which we called the outer PCP) with log randomness and polylog query complexity, and another (which we called the inner PCP) with constant query complexity but polynomial amount of randomness, we talked (at a high level) about the method of *proof composition*, aimed at combining the best features of both these constructions. The idea is an intuitive one — we run the outer PCP, but instead of reading the polylog bits needed to verify the predicate, the verifier recursively expects a PCP proof that the predicate is satisfied, in the inner PCP format. It then calls the inner PCP to make this check. The big trouble with this, of course, is that the inner PCPs also need to ensure that the various predicates they are called upon are satisfied by a single global proof at the outer level. The semantics of the inner PCP, however, only talks about checking that a single predicate in isolation is satisfied.

Indeed, in the original proof of the PCP theorem, getting the details (or even the definition!) of the proof composition right was non-trivial. (In fact, for me personally, this was the “hardest” part of the proof to swallow, since one can take technically difficult statements like the soundness of the low-degree test as an interesting Math theorem and understand everything around it, but this is not quite possible with composition since it is integral to the whole structure of the proof.) Fortunately, a very clean way to do proof composition was discovered around 2005. The key to this is to work with two variants of PCPs, which we briefly discussed in lecture along with ideas of how they compose seamlessly and syntactically:

- *Robust* PCPs — e.g., for the outer PCP, not only are half the checks violated by any proof, but say half of them are violated by a good margin, in the sense that say a constant fraction of the polylog bits read in the proof need to be changed to cause the concerned check to accept.
- PCPs of proximity (PCPP), aka Assignment Testers — here the PCP verifier checks the claim that the input formula $\varphi$ is satisfied by a particular assignment (to which it has oracle access). The verifier must only read a small number of bits of the assignment and an auxiliary proof of proximity, and reject strings that are far from every satisfying assignment to $\varphi$ with good probability.

## April 17, 2011

### Lecture 24: Poly-log query PCPs

We discussed a PCP construction for the gap PCSP problem discussed in the previous lecture. Recall that a solution to a PCSP instance was a polynomial $P$ of total degree $d$ in $m$ variables over a field $\mathbb{F}$, and its value was the fraction of constraints satisfied by $P$ (where each constraint stipulated that the values of $P$ at a tuple of points in $\mathbb{F}^m$ satisfied some predicate).

The idea in the PCP is to expect the polynomial $P$ as the proof. We would then pick a random constraint of the PCSP instance, and check that it is satisfied by the values $P$ takes at the points specified by the constraint. The large gap in the value of the optimum between the satisfiable and unsatisfiable cases would hopefully imply the soundness of the PCP.

One could expect that the proof consist of all the coefficients of $P$. This has the advantage that we know the prover is by fiat giving us a low-degree polynomial. However, the trouble is that evaluating $P$ at any point then requires reading all the coefficients, so this is bad in terms of query complexity. So instead we will expect that $P$ be given as the table of its values, i.e., a function $f : \mathbb{F}^m \to \mathbb{F}$, as the proof. Now querying the value of $f$ at any point is trivial.

The challenge now is to ensure that $f$ indeed represents the values of a low-degree polynomial. For this purpose, we discussed the *low-degree test*: pick two random points $a, b \in \mathbb{F}^m$ and check that the restriction of $f$ to the *line* $\{a + tb : t \in \mathbb{F}\}$ is a univariate polynomial of degree at most $d$ in $t$. This test makes $|\mathbb{F}|$ queries and uses $O(m \log |\mathbb{F}|)$ random bits. We stated, without proof, the low-degree testing theorem that if $f$ is $\delta$-far from every degree $d$ polynomial, then this test rejects with probability $\Omega(\delta)$.
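
A toy version of the line test can be coded directly. The sketch below (the field size, degree, and corruption rate are illustrative choices, not from the lecture) restricts a value table to a random line and checks the restriction against its degree-$d$ interpolant:

```python
import random

random.seed(3)
p, m, d = 13, 2, 3        # field F_13, bivariate polynomials, total degree 3

def poly(x, y):
    """An honest total-degree-3 polynomial over F_p."""
    return (2 * x**3 + x * y**2 + 5 * y + 7) % p

table = {(x, y): poly(x, y) for x in range(p) for y in range(p)}

def interpolate_eval(pts, t):
    """Lagrange-evaluate at t the unique degree <= len(pts)-1 poly through pts, mod p."""
    total = 0
    for i, (ti, vi) in enumerate(pts):
        num, den = 1, 1
        for j, (tj, _) in enumerate(pts):
            if i != j:
                num = num * (t - tj) % p
                den = den * (ti - tj) % p
        total = (total + vi * num * pow(den, p - 2, p)) % p
    return total

def line_test(f):
    """Pick a random line a + t*b and check f restricted to it has degree <= d."""
    a = (random.randrange(p), random.randrange(p))
    b = (0, 0)
    while b == (0, 0):
        b = (random.randrange(p), random.randrange(p))
    g = [f[((a[0] + t * b[0]) % p, (a[1] + t * b[1]) % p)] for t in range(p)]
    pts = list(enumerate(g[: d + 1]))      # fit the first d+1 values...
    return all(interpolate_eval(pts, t) == g[t] for t in range(d + 1, p))  # ...check the rest

honest_pass = sum(line_test(table) for _ in range(50))

bad = dict(table)
for key in random.sample(list(bad), 30):   # corrupt ~18% of the table
    bad[key] = (bad[key] + 1) % p
bad_pass = sum(line_test(bad) for _ in range(50))
print(honest_pass, bad_pass)
```

A true degree-$d$ polynomial restricts to a degree-$d$ univariate polynomial on every line, so the honest table always passes, while most random lines through the corrupted table fail the check.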

We first saw an attempted construction with the choice $|\mathbb{F}| = \mathrm{polylog}(n)$. Naively repeating the low-degree test $O(\log n)$ times would boost the rejection probability of $\delta$-far tables to $1 - 1/\mathrm{poly}(n)$. However, this requires fresh random bits for each repetition, about $\log^2 n$ in total. Fortunately, one can do randomness-efficient error reduction (something we will see how to do using expanders after the PCP segment), and accomplish the same thing with $O(\log n)$ randomness.

After ensuring (with high confidence) that $f$ is $\delta$-close to some (uniquely specified) low-degree polynomial $P$, one can check a constraint on the values of $P$ at some tuple $(x_1, \dots, x_w)$ as follows: shoot an independent random line through each $x_i$, and if the restriction of $f$ to each line is a univariate degree $d$ polynomial, then we trust the values $f(x_i)$ as the values of $P$ at these points. We proved that this works, namely if $f$ differs from $P$ at any of these points, then at least one of the line checks fails with constant probability. However, we noticed the problem that this uses too much randomness: about $w \cdot m \log |\mathbb{F}|$ random bits.

To circumvent this problem, we introduced the idea of *parallelization*, where we fit a random degree $w$ curve $C$ through the points $x_1, \dots, x_w$ and check that $f$ restricted to $C$ is a degree $dw$ univariate polynomial. If not, we reject; if so, we trust the values of $f$ at the $x_i$ as being correct. We proved that if $f$ differs from $P$ at any of the points $x_i$, then the curve check fails with constant probability. (In particular, there is no need to union bound over the $w$ points, and we can work with a fixed constant $\delta$.)

With this, we completed the proof that $NP \subseteq PCP(O(\log n), \mathrm{polylog}(n))$, modulo the analysis of the low-degree test (which we skipped, but shall see a glimpse of in one of the final project presentations). We ended the lecture with a very brief glimpse into another PCP construction using exponential sized proofs, to be discussed in detail in the next lecture.

## April 7, 2011

### Problem set 5

The fifth (and final) problem set is now posted. Since I was a day late in getting it out, the due date will be *Thursday*, April 21 (5pm).

## April 6, 2011

### Lecture 23: PCPs

We stated the celebrated PCP theorem: There are absolute constants $c$ and $q$, a system for encoding proofs of satisfiability of 3SAT formulae in which $n$-variable formulae $\varphi$ have proofs of size $n^c$, and a probabilistic verifier for checking proofs in this format, such that

- The verifier tosses $O(\log n)$ random coins based on which it decides $q$ locations where to “spot-check” the proof;
- (completeness) if $\varphi$ is satisfiable, then there is a proof which the verifier accepts with probability 1; and
- (soundness) if $\varphi$ is unsatisfiable, then for every proof, the verifier accepts with probability at most 1/2.

We mentioned that strong forms of the PCP theorem known today allow us to take proofs of nearly linear size, and $q = 3$ (for the latter, the soundness error needs to be bounded away from $1/2$, say $1/2 + \epsilon$; in fact one can take soundness error $1/2$ if the queries can be adaptive, or if we relax the completeness requirement and allow proofs to be rejected with a small (say $\epsilon$) probability even when $\varphi$ is satisfiable).

We stated that another equivalent form of the PCP theorem is in terms of the existence of a polynomial-time gap producing reduction from 3SAT to NAE-M3SAT (Monotone Not-All-Equal 3SAT), which maps unsatisfiable 3SAT formulae into NAE-M3SAT formulae which are at most $(1 - \epsilon)$-satisfiable for some constant $\epsilon > 0$. We sketched the proof of the equivalence of these two forms.

We had a very brief discussion of the historic context of the two different proofs (see here for a detailed history, nicely illustrated with pictures of the researchers involved) — the original algebraic one, and the recent proof via gap amplification. We then embarked on the first step of our goal, namely a gap producing reduction from SAT to an algebraic problem called polynomial constraint satisfaction problem (PCSP).

The advantage of this step is that it is “just” an old-school NP-hardness reduction, and it produces a huge gap (much better than what we want in the PCP theorem). The negative is that the soundness (or gap) guarantee only holds against solutions which are promised to be low-degree polynomials, something which we can’t assume or take for granted in our final PCP. Nevertheless, it is a great first step, which highlights the origin of the gap in a modular way.

We then defined the PCSP problem, where $n$ is the size parameter, and the remaining parameters are functions of $n$: the field size $|\mathbb{F}|$; the number of variables $m$ of the polynomial and its total degree $d$; and the arity $w$ of the constraints.

The rest of the lecture was devoted to presenting and analyzing a gap-producing reduction from NAE-M3SAT to PCSP, that mapped instances with $n$ variables to instances of PCSP of size $\mathrm{poly}(n)$. The reduction mapped satisfiable instances of NAE-M3SAT to satisfiable instances of PCSP, and unsatisfiable instances of NAE-M3SAT to PCSP instances in which at most an $o(1)$ fraction of the constraints are satisfiable.

**Readings:** Our presentation via PCSP follows the outline in these lecture notes from a summer school on PCPs (from 2000). The gap producing hardness for PCSP is described in Chapter 2; the details in our proof were slightly different, using the characterization of multivariate polynomials vanishing on a subcube.