February 13

4:00-5:00 pm, STAR D463
Amnon Ta-Shma: Parity Samplers and Explicit, epsilon-Balanced Codes Close to the GV Bound

Abstract: I will show an explicit construction of a binary error-correcting code with relative distance 1/2 - epsilon and relative rate about epsilon^2. This comes close to the Gilbert-Varshamov bound and the LP lower bound. Previous explicit constructions had rate about epsilon^3, and this is the first explicit construction to get this close to the Gilbert-Varshamov bound.

Technically, we define parity samplers. A parity sampler is a collection of sets with the property that for every "test" set A of a given constant density, the probability that a set from the collection falls into A an even number of times is about half. A sparse parity sampler immediately implies a good code with distance close to half. The complete t-complex of all sequences of cardinality t is a good parity sampler, but with too many sets in the collection. Rozenman and Wigderson, and independently Alon, used random walks over expanders to explicitly construct sparse parity samplers, and their construction implies explicit codes with relative rate epsilon^4.

I will show how one can get better explicit parity samplers (and therefore also better explicit codes) using a variant of the zig-zag product. In the random-walk sampler, there exist many sets with substantial overlap. One way to look at the zig-zag product is that it takes a sub-collection of the random-walk sampler, and this sub-collection has a smaller overlap between sets in the collection. The zig-zag product achieves this by keeping a small internal state. I will show that by enlarging the internal state one can further reduce the overlap and, as a result, improve the quality of the parity sampler.

Contact: sarah_donahue@hks.harvard.edu (Belfer)
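The defining property of a parity sampler can be checked empirically. The sketch below (a toy Monte Carlo of my own; the function name and parameters are not from the talk) estimates, for random t-sequences, the probability of hitting a density-1/4 test set an even number of times. The exact value is (1 + (1 - 2*density)^t)/2, which tends to 1/2 as t grows, as the abstract requires.

```python
import random

def even_intersection_prob(n, t, density, trials=20000, seed=0):
    """Estimate Pr[a uniform t-sequence hits the test set an even number of times]."""
    rng = random.Random(seed)
    A = set(rng.sample(range(n), int(density * n)))  # "test" set of the given density
    even = 0
    for _ in range(trials):
        seq = [rng.randrange(n) for _ in range(t)]  # a set from the complete t-complex
        if sum(x in A for x in seq) % 2 == 0:
            even += 1
    return even / trials

# For t = 1 the probability is simply 1 - density; for larger t it
# converges exponentially fast to 1/2 (rate (1 - 2*density)^t).
```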

March 20

4:15-5:15 pm, 32-155
Ohad Shamir: Is Depth Needed for Deep Learning? Circuit Complexity in Neural Networks

Please note that refreshments will be held in the G5 lounge.

Abstract: Deep learning, as its name indicates, is based on training artificial neural networks with many layers. A key theoretical question is to understand why such depth is beneficial, and when it is provably necessary to express certain types of functions. In fact, this question is closely related to circuit complexity, which has long been studied in theoretical computer science, albeit for different reasons and for circuits that differ in some important ways from modern neural networks. Despite this similarity, the interaction between the circuit-complexity and machine-learning research communities is currently quite limited. In this talk, I'll survey some of the recent depth-separation results developed in the machine-learning community and discuss open questions where insights from circuit complexity might help. The talk is aimed at a general theoretical computer science audience, and no prior knowledge about deep learning will be assumed.

April 10

4:00-5:00 pm, Patil/Kiva G449
Benny Applebaum: Exponentially-Hard Gap-CSP and Local PRG via Local Hardcore Functions

Abstract: Is it possible to approximate the value of a 3-CNF formula to within a small constant factor in sub-exponential time? This question was recently raised by (Dinur 2016; Manurangsi and Raghavendra 2016), who conjectured that the answer is negative. We will explore the validity of this new Gap-ETH assumption and show that it follows from the exponential hardness of finding a satisfying assignment for smooth 3-CNFs, where smoothness means that the number of satisfying assignments is not much smaller than the number of "almost-satisfying" assignments. We further show that the latter ("smooth-ETH") assumption follows from the exponential hardness of solving constraint satisfaction problems over well-studied distributions and, more generally, from the existence of any exponentially-hard locally-computable one-way function, confirming a conjecture of Dinur (ECCC 2016).

Our reductions are inspired by classical cryptographic transformations from one-wayness to pseudorandomness. In particular, our main result is based on a new construction of general (Goldreich-Levin type) hardcore functions that, for any exponentially-hard one-way function, output linearly many hardcore bits, can be locally computed, and consume only a linear amount of random bits. We also show that such hardcore functions have several other useful applications in cryptography and complexity theory.
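The hardcore functions mentioned above generalize the classic Goldreich-Levin predicate b(x, r) = <x, r> mod 2. A minimal sketch of that textbook predicate and a naive multi-output variant (my own illustration; the talk's construction is sharper: linearly many hardcore bits, locally computable, with only a linear amount of randomness):

```python
def inner_product_bit(x_bits, r_bits):
    """The classic Goldreich-Levin hardcore predicate: <x, r> mod 2."""
    return sum(a & b for a, b in zip(x_bits, r_bits)) % 2

def gl_hardcore(x_bits, seed_rows):
    """Naive multi-output variant: one inner-product bit per seed row.
    (Purely illustrative; this does not have the locality/randomness
    properties of the construction in the talk.)"""
    return [inner_product_bit(x_bits, r) for r in seed_rows]
```

Hardness of predicting these bits from f(x) and r is what the classical transformations from one-wayness to pseudorandomness exploit.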

April 11

4:15-5:15 pm, STAR D463
Dor Minzer: 2-to-2 Games is NP-hard

Abstract: The PCP theorem characterizes the computational class NP, so as to facilitate proofs that approximation problems are NP-hard. It can be viewed as a significant strengthening of the famous Cook-Levin theorem, which states that deciding the satisfiability of a given CNF formula is NP-complete.

A fundamental open question in PCP theory is whether a special type of PCP, namely 2-to-2 Games, is still NP-hard. This conjecture is a variant of Khot's well-known Unique Games Conjecture. A recent line of study pursued a new attack on the 2-to-2 Games Conjecture (albeit with imperfect completeness). The approach is based on a mathematical object called the Grassmann graph, and relies on the study of edge expansion in this graph; more specifically, of sets of vertices in the Grassmann graph that contain even a tiny fraction of the edges incident to the set. A characterization of such sets was recently accomplished, completing a proof of the 2-to-2 Games Conjecture (with imperfect completeness).

The talk discusses the 2-to-2 Games Conjecture, its implications, and this line of study. Based on joint works with Irit Dinur, Subhash Khot, Guy Kindler, and Muli Safra.

September 18

4:00-5:00 pm, Patil/Kiva G449
Costis Daskalakis: Improving Generative Adversarial Networks Using Game Theory and Statistics

Abstract: Generative Adversarial Networks (GANs) are a recently proposed approach for learning samplers of high-dimensional distributions with intricate structure, such as distributions over natural images, given samples from these distributions. They are obtained by setting up a two-player zero-sum game between two neural networks, which learn statistics of a target distribution by adapting their strategies in the game using gradient descent. Despite their intriguing performance in practice, GANs pose great challenges to both optimization and statistics. Their training suffers from oscillations, and they are difficult to scale to high-dimensional settings. We study how game-theoretic and statistical techniques can be brought to bear on these important challenges. We use game theory to improve GAN training, and statistics to scale up the dimensionality of the generated distributions.

Bio: http://people.csail.mit.edu/costis/shortbio.html
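The training oscillations mentioned in the abstract already appear in the simplest zero-sum game. A toy sketch (a standard example, not taken from the talk): simultaneous gradient descent/ascent on the bilinear game f(x, y) = x*y spirals away from the unique equilibrium (0, 0), with the squared distance growing by a factor (1 + lr^2) per step.

```python
def simultaneous_gd(x, y, lr=0.1, steps=100):
    """Simultaneous gradient descent/ascent on f(x, y) = x * y.
    The min player updates x (gradient of f in x is y);
    the max player updates y (gradient of f in y is x)."""
    traj = [(x, y)]
    for _ in range(steps):
        x, y = x - lr * y, y + lr * x
        traj.append((x, y))
    return traj

# Each step multiplies x**2 + y**2 by exactly (1 + lr**2): the iterates
# spiral outward instead of converging to the equilibrium (0, 0).
```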

September 25

4:00-5:00 pm, Patil/Kiva G449
Sasha Razborov: Grand Challenges in Complexity Theory through the Lens of Proof Theory

Abstract: Given our current inability to even formulate a coherent program towards solving grand challenges in computational complexity (such as P vs. NP), it becomes increasingly important to at least understand this state of affairs; such attempts are often called "barriers". The proof-theoretic approach tries to rule out the existence of solutions within a class of methods that is both rigorously defined and reasonably large, in the hope that this will give at least some insight into the nature of the difficulties. It turns out that this approach becomes most successful when the class of methods inherently involves, explicitly or implicitly, computational concepts similar to those it intends to study. One framework where this hope has materialized is called Natural Proofs, and the parallel framework in which the corresponding problems are largely open is that of (propositional) proof complexity.

The main purpose of this talk is to give an accessible introduction to this kind of problem, in the hope of attracting a new generation of researchers to it.

October 02

4:00-5:00 pm, Patil/Kiva G449
Suresh Venkatasubramanian: Towards a Theory (or Theories) of Fairness in Automated Decision-Making

Abstract: With the advent of automated decision-making has come a whole new vocabulary for algorithm design. We say that automated decision-making should be "fair" or "unbiased" or "nondiscriminatory". But our algorithms speak only mathematics, and these words need translation (if they can be translated at all). Over the last few years, the research area of fairness, accountability, and transparency has made attempts to formalize what it means for an algorithm to be fair. In this talk I'll discuss what has become a conversation between society and belief systems on the one hand, and mathematical formalisms on the other. I'll explain the different formalizations of fairness and how they reflect differing perspectives on how society understands fairness and bias. I'll also describe how the formalizations lead to conclusions about the possibility (or not) of fair decision-making.

Bio: Suresh Venkatasubramanian is a professor at the University of Utah. His background is in algorithms and computational geometry, as well as data mining and machine learning. His current research interests lie in algorithmic fairness and, more generally, the problem of understanding and explaining the results of black-box decision procedures. Suresh was the John and Marva Warnock Assistant Professor at the U, and has received a CAREER award from the NSF for his work in the geometry of probability, as well as a test-of-time award at ICDE 2017 for his work in privacy. His research on algorithmic fairness has received press coverage across North America and Europe, including NPR's Science Friday, NBC, and CNN, as well as other media outlets. He is a member of the Computing Community Consortium (CCC) Council of the CRA, a member of the board of the ACLU in Utah, and a member of New York City's Failure to Appear Tool (FTA) Research Advisory Council.
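One of the competing formalizations alluded to in the abstract is demographic parity: positive-decision rates should be (approximately) equal across groups. A minimal sketch of measuring the gap (my own illustration; the talk surveys several such definitions and the tensions between them):

```python
def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between groups 0 and 1.
    decisions: 0/1 outcomes; groups: 0/1 group membership, aligned by index."""
    rate = {}
    for g in (0, 1):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return abs(rate[0] - rate[1])

# Group 0 gets a positive decision half the time, group 1 a quarter
# of the time, so the demographic-parity gap is 0.25.
```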

October 23

4:00-5:00 pm, Patil/Kiva G449
Urmila Mahadev: Classical Verification of Quantum Computation

Abstract: We present the first protocol allowing a classical computer to interactively verify the result of an efficient quantum computation. We achieve this by constructing a measurement protocol, which enables a classical verifier to use a quantum prover as a trusted measurement device. The protocol forces the prover to behave as follows: the prover must construct an n-qubit state of his choice, measure each qubit in the Hadamard or standard basis as directed by the verifier, and report the measurement results to the verifier. The soundness of this protocol is enforced based on the assumption that the learning with errors problem is computationally intractable for efficient quantum machines.
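The prover's role above, measuring each qubit in the standard or Hadamard basis as directed, can be mimicked classically for a single qubit. A toy simulation (my own sketch of basic quantum measurement; the protocol's cryptographic enforcement via learning with errors is not modeled here):

```python
import math
import random

def measure(state, basis, rng):
    """Measure a single-qubit state (a0, a1) in the 'standard' or 'hadamard'
    basis and return the observed bit."""
    a0, a1 = state
    if basis == "hadamard":
        s = 1 / math.sqrt(2)
        a0, a1 = s * (a0 + a1), s * (a0 - a1)  # rotate |+>/|-> onto |0>/|1>
    p0 = abs(a0) ** 2  # Born rule: probability of outcome 0
    return 0 if rng.random() < p0 else 1

plus = (1 / math.sqrt(2), 1 / math.sqrt(2))  # the |+> state
# Measured in the Hadamard basis, |+> yields 0 with certainty;
# measured in the standard basis, it is a fair coin.
```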

November 06

4:00-5:00 pm, Patil/Kiva G449
Vijay Vazirani: Planar Graph Perfect Matching is in NC

Abstract: Is matching in NC, i.e., is there a deterministic fast parallel algorithm for it? This has been an outstanding open question in TCS for over three decades, ever since the discovery of Random NC matching algorithms. Within this question, the case of planar graphs has remained an enigma: on the one hand, counting the number of perfect matchings is far harder than finding one (the former is #P-complete and the latter is in P), and on the other, for planar graphs, counting has long been known to be in NC whereas finding one has resisted a solution!

The case of bipartite planar graphs was solved by Miller and Naor in 1989 via a flow-based algorithm. In 2000, Mahajan and Varadarajan gave an elegant way of using counting matchings to find one, hence giving a different NC algorithm. However, non-bipartite planar graphs still didn't yield, the stumbling block being odd tight cuts. Interestingly enough, these are also a key to the solution: a balanced tight odd cut leads to a straightforward divide-and-conquer NC algorithm. However, a number of ideas are needed to find such a cut in NC, the central one being an NC algorithm for finding a face of the perfect matching polytope at which $\Omega(n)$ new conditions, involving constraints of the polytope, are simultaneously satisfied.

Paper available at: https://arxiv.org/pdf/1709.07822.pdf

Joint work with Nima Anari.
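The counting-versus-finding contrast in the abstract can be made concrete on small graphs, where brute-force enumeration handles both at once (purely an illustration of the objects involved; the talk's NC algorithm is entirely different):

```python
def perfect_matchings(n, edges):
    """Enumerate all perfect matchings of a graph on vertices 0..n-1
    by brute-force recursion: match the smallest unmatched vertex, recurse."""
    edge_set = set(map(frozenset, edges))

    def extend(remaining):
        if not remaining:
            yield []
            return
        v = min(remaining)
        for u in remaining:
            if u != v and frozenset((u, v)) in edge_set:
                for rest in extend(remaining - {u, v}):
                    yield [(v, u)] + rest

    yield from extend(frozenset(range(n)))

# The 4-cycle (a planar graph) has exactly two perfect matchings:
# {01, 23} and {12, 30}.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
```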

November 20

4:00-5:00 pm, Kiva G449
Scott Aaronson: Gentle Measurement of Quantum States and Differential Privacy

Abstract: We prove a surprising connection between gentle measurement (where one wants to measure n quantum states, in a way that damages the states only by a little) and differential privacy (where one wants to query a database about n users, in a way that reveals only a little about any individual user). The connection is bidirectional, though with loss of parameters in going from DP to gentle measurement. By exploiting this connection, together with the Private Multiplicative Weights algorithm of Hardt and Rothblum, we're able to give a new protocol for so-called "shadow tomography" of quantum states, which improves over the parameters of a previous protocol for that task due to Aaronson, and which has the additional advantage of being "online" (that is, the measurements are processed one at a time).

Joint work with Guy Rothblum (Weizmann Institute); paper still in preparation.
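On the differential-privacy side of the connection, the textbook mechanism for answering a counting query over n users adds Laplace noise calibrated to the query's sensitivity. A minimal sketch (standard material, not the Private Multiplicative Weights algorithm the paper actually uses):

```python
import random

def private_count(true_count, epsilon, rng):
    """Answer a sensitivity-1 counting query with Laplace(1/epsilon) noise,
    the textbook epsilon-differentially-private mechanism.  The Laplace
    sample is drawn as the difference of two i.i.d. exponentials."""
    noise = rng.expovariate(epsilon) - rng.expovariate(epsilon)
    return true_count + noise

# Smaller epsilon (stronger privacy) means noisier answers; the noise has
# mean 0, so averages of repeated answers concentrate on the true count.
```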

November 27

4:00-5:00 pm, Patil/Kiva G449
Avishay Tal: Oracle Separation of BQP and the Polynomial Hierarchy

Abstract: In their seminal paper, Bennett, Bernstein, Brassard, and Vazirani [SICOMP, 1997] showed that, relative to an oracle, quantum algorithms are unable to solve NP-complete problems in sub-exponential time (i.e., that Grover's search is optimal in this setting).

In this work, we show a strong converse to their result. Namely, we show that, relative to an oracle, there exist computational tasks that can be solved efficiently by a quantum algorithm, but require exponential time for any algorithm in the polynomial hierarchy. (The polynomial hierarchy is a hierarchy of complexity classes that captures P, NP, coNP, and their generalizations.)

The tasks that exhibit this "quantum advantage" arise from a pseudo-randomness approach initiated by Aaronson [STOC, 2010]. Our core technical result is constructing a distribution over Boolean strings that "looks random" to constant-depth circuits of quasi-polynomial size, but can be distinguished from the uniform distribution by very efficient quantum algorithms.

Joint work with Ran Raz.

December 04

4:00-5:00 pm, Patil/Kiva G449
Michael Saks: Approximating the Edit Distance to within a Constant Factor in Truly Subquadratic Time

Abstract: Edit distance is a widely used measure of the similarity of two strings, based on the minimum number of character insertions, deletions, and substitutions required to transform one string into the other. The edit distance can be computed exactly using a classical dynamic programming algorithm that runs in quadratic time. The best algorithm known for exact edit distance improves on the quadratic running time by only a polylogarithmic factor.

I will describe recent work from FOCS 2018, joint with Diptarka Chakraborty, Debarati Das, Elazar Goldenberg, and Michal Koucký, that gives a randomized constant-factor approximation to edit distance in time $\tilde{O}(n^{12/7})$. This is the first constant-factor approximation algorithm that runs in truly subquadratic time.
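The classical quadratic-time dynamic program mentioned in the abstract is short enough to state in full. A standard implementation using two rolling rows (so memory is linear even though time is quadratic):

```python
def edit_distance(s, t):
    """Exact edit distance via the classical O(len(s) * len(t)) dynamic program:
    dp[i][j] = edit distance between s[:i] and t[:j], kept as rolling rows."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))  # distances from the empty prefix of s
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,        # delete s[i-1]
                         cur[j - 1] + 1,     # insert t[j-1]
                         prev[j - 1] + (s[i - 1] != t[j - 1]))  # substitute
        prev = cur
    return prev[n]
```

This is the baseline the FOCS 2018 result approximates in truly subquadratic time.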

December 11

4:00-5:00 pm, Patil/Kiva G449
Dean Doron: Probabilistic Logspace Algorithms for Laplacian Solvers

Abstract: A series of breakthroughs initiated by Spielman and Teng culminated in the construction of nearly-linear-time Laplacian solvers, approximating the solution of a linear system Lx = b, where L is the Laplacian of an undirected graph. In this talk we study the space complexity of the problem and give a probabilistic logspace algorithm solving it. We further extend the algorithm to other families of graphs, such as Eulerian graphs and graphs that mix in polynomial time.

We also briefly discuss complexity-theoretic motivations for studying linear-algebraic problems in small space, and relations to local algorithms.
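For intuition on the underlying problem, here is a small dense stand-in: Richardson iteration solving Lx = b on a path graph (my own toy example; the talk's solver works in logarithmic space via random-walk techniques, which this sketch does not model):

```python
def laplacian_matvec(adj, x):
    """Apply the graph Laplacian L = D - A to a vector x:
    (Lx)[u] = sum over neighbors v of (x[u] - x[v])."""
    return [sum(x[u] - x[v] for v in adj[u]) for u in range(len(adj))]

def solve_laplacian(adj, b, step=0.4, iters=500):
    """Approximate a solution of L x = b by Richardson iteration
    x <- x + step * (b - L x).  Converges when b sums to zero (b is in
    the range of L for a connected graph) and step < 2 / lambda_max(L)."""
    x = [0.0] * len(b)
    for _ in range(iters):
        r = laplacian_matvec(adj, x)
        x = [xi + step * (bi - ri) for xi, bi, ri in zip(x, b, r)]
    return x

# Path graph on 3 vertices; for b = (1, 0, -1) the sum-zero solution
# of L x = b is x = (1, 0, -1).
path3 = {0: [1], 1: [0, 2], 2: [1]}
```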