Haotian Jiang: Classical and New Tools for Discrepancy Theory, and What They Say About the Matrix Spencer Conjecture
Host: Noah Golowich (MIT)
Abstract: This talk is centered around the Matrix Spencer Conjecture, which states that given symmetric n by n matrices A_1, ..., A_n, each with spectral norm at most 1, one can efficiently find \pm 1 signs x_1, ..., x_n such that their signed sum has spectral norm, or discrepancy, \|\sum_{i=1}^n x_i A_i\|_{op} = O(\sqrt{n}). Choosing the signs x_i uniformly at random results in discrepancy O(\sqrt{n \log n}) with high probability (by standard matrix concentration), and the conjecture says that this extraneous \sqrt{\log n} factor can be removed, generalizing one of the most foundational results in discrepancy theory [Spencer, 1985].
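As a quick numerical illustration (an addition to this announcement, not part of the talk), the following Python sketch compares the discrepancy of a uniformly random signing against the \sqrt{n \log n} benchmark. The choice of rank-1 projections as the matrices A_i is purely illustrative; any family with spectral norm at most 1 would do.

import numpy as np

rng = np.random.default_rng(0)

def random_signing_discrepancy(n):
    # Build n symmetric n x n matrices A_i with ||A_i||_op = 1:
    # rank-1 projections v v^T for unit vectors v (illustrative choice).
    mats = []
    for _ in range(n):
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)
        mats.append(np.outer(v, v))
    # Uniformly random signing x in {-1, +1}^n.
    signs = rng.choice([-1.0, 1.0], size=n)
    # Discrepancy = spectral (operator) norm of the signed sum.
    return np.linalg.norm(sum(x * A for x, A in zip(signs, mats)), ord=2)

for n in [50, 100, 200, 400]:
    print(n, random_signing_discrepancy(n), np.sqrt(n * np.log(n)))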
In this talk, I will present a proof of this conjecture when the matrices A_i have rank at most n/\log^3 n. This resolves the Matrix Spencer Conjecture up to poly-logarithmic rank, and implies a nearly optimal qubit lower bound for quantum random access codes encoding n classical bits with advantage \gg 1/\sqrt{n}.
I will introduce the two central tools behind the proof of this result: the classical partial coloring method for convex sets of [Gluskin, 1989] and [Giannopoulos, 1997], which is widely used in discrepancy theory, and the new sharp matrix concentration inequalities of [Bandeira, Boedihardjo, van Handel, 2022] for random matrices with correlated Gaussian entries. This talk is based on joint work with Nikhil Bansal and Raghu Meka, available at https://arxiv.org/abs/2208.11286.
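For context, here is a rough statement of the partial coloring lemma; the constants are indicative rather than the sharp ones from the cited papers. If K \subseteq \mathbb{R}^n is a symmetric convex body with Gaussian measure \gamma_n(K) \geq 2^{-\varepsilon n} for a sufficiently small constant \varepsilon > 0, then K \cap [-1,1]^n contains a point x with |x_i| = 1 for at least a constant fraction of the coordinates. Roughly speaking, one applies such a partial coloring step iteratively to a convex set of fractional signings with small spectral discrepancy, using the matrix concentration inequalities to lower bound that set's Gaussian measure.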