Continuous LWE is as Hard as LWE & Applications to Learning Gaussian Mixtures

Speaker

Aparna Gupte
MIT

Host

Rahul Ilango
MIT
Abstract: The Learning With Errors (LWE) problem, introduced by Regev in 2005, is the average-case problem of solving a noisy system of random linear equations over a finite ring. Regev showed that LWE is computationally hard assuming certain worst-case lattice problems are hard. Beyond its connections to lattices, LWE has been seminal in cryptography, enabling public-key encryption, fully homomorphic encryption, and much more. In 2021, Bruna, Regev, Song and Tang introduced a continuous variant of LWE, called Continuous LWE (CLWE), and gave a quantum reduction showing that CLWE is computationally hard assuming some worst-case lattice problems are hard. Moreover, assuming the hardness of CLWE, they showed that learning mixtures of \sqrt{n} Gaussians in n dimensions is not possible in polynomial time. Several works have since shown the computational hardness of a number of other statistical learning problems, assuming the hardness of CLWE.
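
For reference, here is a sketch of the two distributions, using standard (illustrative) parameter names; see the papers cited above for the precise definitions and parameter regimes. An LWE sample, for a secret s \in \mathbb{Z}_q^n, has the form
\[ (a, \; b = \langle a, s \rangle + e \bmod q), \qquad a \sim \mathcal{U}(\mathbb{Z}_q^n), \]
where e is drawn from a narrow error distribution (e.g., a discrete Gaussian). A CLWE sample, for a secret unit vector w \in \mathbb{R}^n and parameters \gamma, \beta > 0, has the form
\[ (y, \; z = \gamma \langle y, w \rangle + e \bmod 1), \qquad y \sim \mathcal{N}(0, I_n), \; e \sim \mathcal{N}(0, \beta^2). \]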

In this talk, I will present a direct (classical) reduction from LWE to CLWE, obtained through a series of conceptually simple transformations. This reduction allows us to bring the well-developed machinery of LWE-based cryptography to CLWE and its applications. For example, it lets us obtain the hardness of CLWE under the classical hardness of worst-case lattice problems; previously, the hardness of CLWE was known only under the quantum hardness of worst-case lattice problems. Moreover, assuming the polynomial hardness of LWE, the reduction shows the hardness of learning mixtures of n^\epsilon Gaussians for any \epsilon > 0, improving on Bruna et al., who show hardness for at least \sqrt{n} Gaussian components under polynomial (quantum) worst-case hardness assumptions. Under the subexponential hardness of LWE, our reduction implies the hardness of learning roughly \log n Gaussian components. This talk is based on joint work with Neekon Vafa and Vinod Vaikuntanathan; the paper is available at https://arxiv.org/abs/2204.02550.
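
To make the connection to Gaussian mixtures concrete: samples from the homogeneous variant of CLWE look like a mixture of Gaussians concentrated near equally spaced hyperplanes orthogonal to a hidden direction ("Gaussian pancakes"). The sketch below is an illustrative rejection-sampling approximation of that distribution, not the construction from the talk or the paper; the function name sample_hclwe and the parameter values gamma and beta are assumptions chosen only for illustration.

# Illustrative sketch (not from the talk): approximate samples from a
# homogeneous-CLWE-style distribution, whose marginal resembles a mixture of
# Gaussians concentrated near parallel hyperplanes orthogonal to a secret
# direction w ("Gaussian pancakes").
import numpy as np

def sample_hclwe(n, num_samples, gamma=2.0, beta=0.01, rng=None):
    """Draw approximate homogeneous-CLWE samples via rejection sampling (illustrative)."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.standard_normal(n)
    w /= np.linalg.norm(w)            # secret unit direction
    samples = []
    while len(samples) < num_samples:
        y = rng.standard_normal(n)    # candidate from N(0, I_n)
        # Accept y with probability that decays in the distance of
        # gamma * <y, w> from the nearest integer; accepted samples therefore
        # concentrate near equally spaced "pancakes" orthogonal to w.
        frac = (gamma * (y @ w)) % 1.0
        dist = min(frac, 1.0 - frac)
        if rng.random() < np.exp(-dist**2 / (2 * beta**2)):
            samples.append(y)
    return w, np.array(samples)

if __name__ == "__main__":
    w, xs = sample_hclwe(n=10, num_samples=200)
    # Projections onto the secret direction cluster around multiples of 1/gamma;
    # without knowing w, the samples are hard to distinguish from N(0, I_n).
    print(np.round(np.sort(xs @ w)[:10], 3))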