May 09

Add to Calendar 2025-05-09 10:30:00 2025-05-09 12:00:00 America/New_York

Quantum One-Time Programs, Revisited

One-time programs (Goldwasser, Kalai and Rothblum, CRYPTO 2008) are functions that can be run on any single input of a user's choice, but not on a second input. Classically, they are unachievable without trusted hardware, but the destructive nature of quantum measurements seems to provide a quantum path to constructing them. Unfortunately, Broadbent, Gutoski and Stebila showed that even with quantum techniques, a strong notion of one-time programs, similar to ideal obfuscation, cannot be achieved for any non-trivial quantum function. On the positive side, Ben-David and Sattath (Quantum, 2023) showed how to construct a one-time program for a certain (probabilistic) digital signature scheme, under a weaker notion of one-time program security. There is a vast gap between achievable and provably impossible notions of one-time program security, and it is unclear what functionalities are one-time programmable under the achievable notions of security.

In this work, we present new, meaningful, yet achievable definitions of one-time program security for probabilistic classical functions. We show how to construct one-time programs satisfying these definitions for all functions in the classical oracle model and for constrained pseudorandom functions in the plain model. Finally, we examine the limits of these notions: we show a class of functions which cannot be one-time programmed in the plain model, as well as a class of functions which appears to be highly random given a single query, but whose one-time program form leaks the entire function even in the oracle model.

This talk is based on joint work with Jiahui Liu, Justin Raizes, Bhaskar Roberts and Vinod Vaikuntanathan.

TBD

May 02

Add to Calendar 2025-05-02 11:00:00 2025-05-02 12:00:00 America/New_York

[Thesis Defense] Succinct Cryptography via Propositional Proofs

The goal in modern cryptography is to obtain security while minimizing the use of computational resources. In recent years, we have been incredibly successful in our pursuit of efficiency, even for cryptographic tasks that were thought to be “science fiction”. For example, we have constructions of fully homomorphic encryption and indistinguishability obfuscation for circuits from standard, falsifiable cryptographic assumptions. However, there are still some tasks in cryptography where achieving the “ideal” efficiency from standard assumptions has evaded us. In this thesis, we study the problem of achieving efficiency, in the form of succinctness, in two such settings:

1. Can we construct succinct non-interactive arguments (SNARGs) for all of NP?
2. Can we construct indistinguishability obfuscation (IO) for Turing machines? In particular, can we achieve an obfuscation size which is independent of the input length?

While the problems seem unrelated at first glance, the root difficulty seems to stem from a similar place: both primitives have non-falsifiable security notions. In fact, this type of barrier exists for many other cryptographic primitives, including witness encryption. This leads to a central question: how can we construct non-falsifiable primitives from falsifiable assumptions?

In this talk, we show how to leverage “propositional proofs” to overcome the non-falsifiability barrier and make substantial progress toward achieving succinctness in both of these settings. Our results include universal constructions of both SNARGs and IO for Turing machines from standard assumptions. We then show that these constructions are secure as long as there exists some construction of the respective primitive that has a “propositional proof” of its correctness property. We also demonstrate a general template for constructing adaptively sound SNARGs for NP, and give an instantiation via subexponential IO and LWE. We further apply our techniques to improve succinctness in other cryptographic settings such as secret sharing. Our results establish propositional proofs as a foundational tool for achieving succinctness across a broad range of cryptographic settings.

https://mit.zoom.us/j/95528753473

TBD

March 14

Add to Calendar 2025-03-14 10:30:00 2025-03-14 12:00:00 America/New_York

Xiao Wang, (Authenticated) BitGC for (Active) Rate-One 2PC

This talk introduces BitGC, a computationally efficient rate-one garbling scheme based on ring-LWE with key-dependent message security. The garbling consists of an SWHE-encrypted seed and one bit per gate of stitching information. The computation requires homomorphically expanding the seed using a low-depth PRG and then two additional levels of multiplication to assemble the garbled tables. As a result, it does not require the bootstrapping operations needed for FHE.

Second, I will describe a rate-one constant-round active two-party computation protocol obtained by authenticating BitGC. We show efficient ways to distributively garble BitGC and to authenticate its correctness. In practice, the total cost in addition to BitGC is sublinear for layered circuits and one bit per gate for generic circuits.

Finally, I will discuss some ongoing efforts and remarks on their concrete efficiency.

BitGC will appear in Eurocrypt 2025 and is available at https://eprint.iacr.org/2024/1988. Authenticated BitGC is available at https://eprint.iacr.org/2025/232. Both results are joint work with Hanlin Liu, Kang Yang, and Yu Yu.

TBD

March 07

Add to Calendar 2025-03-07 10:30:00 2025-03-07 12:00:00 America/New_York

Noah Stephens-Davidowitz, Recursive Lattice Reduction: A Simple Framework for Finding Short Lattice Vectors

We'll present a new framework, called recursive lattice reduction, for finding short non-zero vectors in a lattice. This gives new algorithms for solving the computational problem whose hardness underlies the security of lattice-based cryptography. These new algorithms are much simpler than prior work, and they provably match the state of the art. The analysis of the algorithms is also quite simple; in particular, it provides a much clearer explanation of why the algorithms perform as they do (i.e., the amount of time needed for these algorithms to find vectors of a given length, which is the key quantity that governs the security of lattice-based cryptography in practice).

The framework is based entirely on the following idea: in order to find a short non-zero vector in an n-dimensional lattice, one should first find a dense \ell-dimensional sublattice for some \ell < n and then recursively solve the \ell-dimensional problem of finding short non-zero vectors in this sublattice. After doing this repeatedly, we are eventually left with the problem of finding short non-zero vectors in a k-dimensional sublattice for k small enough that we can simply find an optimal solution. One obtains a family of algorithms depending on how k and \ell are chosen.

Based on joint work with Divesh Aggarwal, Thomas Espitau, and Spencer Peters, which appeared in SOSA 2025. https://arxiv.org/abs/2311.15064

TBD
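The recursive control flow described in the abstract can be sketched in a few lines of Python. This is an illustrative toy, not the paper's algorithm: the dense-sublattice search is replaced by a trivial stand-in (taking a prefix of the basis), and the base case brute-forces small integer combinations in place of an exact SVP solver.

```python
import numpy as np

def shortest_in_small_dim(basis):
    # Base case: brute-force small coefficient vectors (toy stand-in for
    # an exact shortest-vector solver in small dimension k).
    k = basis.shape[0]
    best = None
    for coeffs in np.ndindex(*([5] * k)):
        c = np.array(coeffs) - 2          # coefficients in {-2, ..., 2}
        if not c.any():
            continue                      # skip the zero vector
        v = c @ basis
        if best is None or v @ v < best @ best:
            best = v
    return best

def recursive_reduce(basis, k=2, ell_of=lambda n: n - 1):
    # Recursive framework: pick an ell-dimensional sublattice, recurse on
    # it, and bottom out in dimension k.  Here the "dense sublattice" step
    # is a trivial stand-in (first ell basis vectors) to show the shape.
    n = basis.shape[0]
    if n <= k:
        return shortest_in_small_dim(basis)
    ell = ell_of(n)
    sublattice = basis[:ell]              # stand-in for dense-sublattice search
    return recursive_reduce(sublattice, k, ell_of)
```

With the identity basis in dimension 4, the recursion descends 4 → 3 → 2 and the base case returns a unit vector.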

February 21

Add to Calendar 2025-02-21 10:30:00 2025-02-21 12:00:00 America/New_York

Near-Optimal Time-Sparsity Trade-Offs for Solving Noisy Linear Equations

We present a polynomial-time reduction from solving noisy linear equations over a finite field F with a uniformly random coefficient matrix to noisy linear equations over F where each row of the coefficient matrix has uniformly random support of size k. This allows us to deduce the hardness of sparse problems from their dense counterparts. In particular, under popular cryptographic assumptions, we demonstrate that learning with errors and learning parity with noise with uniformly random k-sparse public matrices in dimension n are hard in time n^{o(k)}. These results are nearly tight, as both sparse problems can be solved in time n^k given sufficiently many samples.

Our reduction allows us to derive several consequences in cryptography and the computational complexity of statistical problems. In addition, as a new application, we give a reduction from k-sparse LWE to noisy tensor completion of a low-rank order-k tensor. As a result, we obtain a nearly optimal lower bound for the problem based on standard lattice-theoretic assumptions.

This talk is based on joint work with Stefan Tiegel, Vinod Vaikuntanathan, and Guy Bresler.

TBD
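For concreteness, here is a toy sampler of k-sparse LPN instances over F_2 of the kind the abstract discusses: each row of the public matrix has uniformly random support of exactly size k. The function name and parameters are illustrative, not from the paper.

```python
import random

def sparse_lpn_samples(n, m, k, noise_rate, secret=None, rng=random):
    # Toy sampler: m noisy equations <a_i, s> + e_i over F_2, where each
    # coefficient row a_i has uniformly random support of size exactly k.
    if secret is None:
        secret = [rng.randrange(2) for _ in range(n)]
    rows, labels = [], []
    for _ in range(m):
        support = rng.sample(range(n), k)      # uniformly random k-subset
        a = [0] * n
        for j in support:
            a[j] = 1
        e = 1 if rng.random() < noise_rate else 0
        b = (sum(a[j] * secret[j] for j in range(n)) + e) % 2
        rows.append(a)
        labels.append(b)
    return secret, rows, labels
```

With noise_rate = 0 the labels are exact inner products, which makes the sparsity of each row easy to check.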

January 30

Charles River Crypto Day @ MIT

Srini Devadas, Salil Vadhan and Nadia Heninger
MIT, Harvard and UC San Diego
Add to Calendar 2025-01-30 9:30:00 2025-01-30 16:30:00 America/New_York

Charles River Crypto Day @ MIT

MIT’s Schwarzman College of Computing is pleased to announce a day-long workshop surveying recent developments in cryptography and computer security. The event will feature invited talks by Florian Tramer (ETH Zurich), Salil Vadhan (Harvard), and Nadia Heninger (UC San Diego). In addition, MIT faculty member Srini Devadas and three MIT PhD students (Noah Golowich, Alexandra Henzinger, and Seyoon Ragavan) will give an overview of research activity in these areas on campus.

The event is open to the public, so please share this invitation widely. *Please register at the link below so that we can make sure that there is enough coffee and food for everyone.*

~~~ Logistical Information ~~~

Date: Thursday, January 30, 2025
Time: 9:30 am–4:30 pm
Location: MIT Building 45, 8th Floor, 51 Vassar Street, Cambridge, MA 02139

Please register here: https://forms.gle/BRjVYFrkuS9ym9VM9

~~~ Program ~~~

9:30-10:30am: Srini Devadas (MIT): “Designing Hardware for Cryptography and Cryptography for Hardware”
10:30-11:00am: Coffee break
11:00am-12:00pm: Invited talk: Salil Vadhan (Harvard): “Multicalibration: A New Tool for Security Proofs in Cryptography”
12:00-1:30pm: Lunch (provided)
1:30-3:00pm: MIT student talks
  1:30-2:00pm: Noah Golowich: “Edit Distance Robust Watermarking”
  2:00-2:30pm: Alexandra Henzinger: “Somewhat Homomorphic Encryption from Sparse LPN”
  2:30-3:00pm: Seyoon Ragavan: “Factoring with a Quantum Computer: The State of the Art”
3:00-3:30pm: Coffee break
3:30-4:30pm: Invited talk: Nadia Heninger (UC San Diego): “Cryptanalynomics”

~~~ Abstracts for Hour-Long Talks ~~~

Title: “Designing Hardware for Cryptography and Cryptography for Hardware”
Speaker: Srini Devadas (MIT)

Abstract: There have been few high-impact deployments of hardware implementations of cryptographic primitives. We present the benefits and challenges of hardware acceleration of sophisticated cryptographic primitives and protocols, and describe our past work on accelerating fully homomorphic encryption. We argue the significant potential for synergistic codesign of cryptography and hardware, where customized hardware accelerates cryptographic protocols that are designed with hardware acceleration in mind. As a concrete example, we present a new design of a zero-knowledge proof (ZKP) accelerator that leverages hardware-algorithm co-design to generate proofs 500 times faster than a 32-core CPU.

This work was done in collaboration with Simon Langowski, Nikola Samardzic, and Daniel Sanchez.

***

Title: "Multicalibration: A New Tool for Security Proofs in Cryptography"
Speaker: Salil Vadhan (Harvard)

Abstract: In this talk, I will describe how Multicalibration, a new concept arising from the algorithmic fairness literature, is a powerful tool for security proofs in cryptography. Specifically, the Multicalibration Theorem of [HébertJohnson-Kim-Reingold-Rothblum `18] asserts that every boolean function g, no matter how complex, is "indistinguishable" from a "simple" randomized function. Specifically, there is a "low-complexity" partition of the domain of g into a small number of pieces such that on almost every piece P_i, if we choose an input X uniformly at random from P_i, (X, g(X)) is computationally indistinguishable from (X, Bernoulli(p_i)), where p_i is the expectation of g on P_i. As shown by [Dwork-Lee-Lin-Tankala `23], this is a complexity-theoretic analogue of Szemeredi's Regularity Lemma in graph theory, which partitions the vertex set of every graph G into a small number of pieces P_i, such that on almost all pairs P_i x P_j, the graph G is, in a certain sense, indistinguishable from a random bipartite graph with edge density matching that of G on P_i x P_j.

The Multicalibration Theorem allows us to reduce many questions about computational hardness and computational indistinguishability to their information-theoretic analogues. Thus, it can be viewed as a qualitative strengthening of several complexity-theoretic results that were already known to have many applications to security proofs in cryptography, such as Impagliazzo's Hardcore Lemma [Impagliazzo `95, Holenstein `06], the Complexity-Theoretic Dense Model Theorem [Reingold-Trevisan-Tulsiani-Vadhan `08], and the Weak Complexity-Theoretic Regularity/Leakage Simulation Lemma of [Trevisan-Tulsiani-Vadhan `09, Jetchev-Pietrzak `14]. In particular, we show that these latter results all follow easily as corollaries of the Multicalibration Theorem. Furthermore, we also use it to obtain new results characterizing how many samples are required to efficiently distinguish two distributions X and Y in terms of their "pseudo-Hellinger-distance" (or the "pseudo-Renyi-1/2 entropy" of X in case Y is uniform).

Joint works with Sílvia Casacuberta and Cynthia Dwork, and with Cassandra Marcussen and Louie Putterman.

***

Title: "Cryptanalynomics"
Speaker: Nadia Heninger (UC San Diego)

Abstract: This talk is a meditation on the current state of cryptanalysis research in public-key cryptography. I will explore the incentives for and against cryptanalysis in the academic community, and how this is reflected in the current state of classical and post-quantum cryptanalysis research. This discussion is informed by my own experience, as well as a pseudorandomly chosen selection of unscientific personal discussions with a variety of researchers across our community.

MIT Building 45, 8th Floor, 51 Vassar Street

November 22

Add to Calendar 2024-11-22 10:30:00 2024-11-22 12:00:00 America/New_York

Lossy Cryptography from Code-Based Assumptions

Over the past few decades, we have seen a proliferation of advanced cryptographic primitives with lossy or homomorphic properties built from various assumptions such as Quadratic Residuosity, Decisional Diffie-Hellman, and Learning with Errors. These primitives imply hard problems in the complexity class SZK (statistical zero-knowledge); as a consequence, they can only be based on assumptions that are broken in BPP^SZK. This poses a barrier for building advanced primitives from code-based assumptions, as the only known such assumption is Learning Parity with Noise (LPN) with an extremely low noise rate of log^2(n)/n, which is broken in quasi-polynomial time.

In this work, we propose a new code-based assumption, Dense-Sparse LPN, that falls in the complexity class BPP^SZK and is conjectured to be secure against subexponential-time adversaries. Our assumption is a variant of LPN that is inspired by McEliece’s cryptosystem and random k-XOR in average-case complexity. Roughly, the assumption states that (T⋅M, s⋅T⋅M + e) is indistinguishable from (T⋅M, u), for a random (dense) matrix T, random sparse matrix M, and sparse noise vector e drawn from the Bernoulli distribution with inverse-polynomial noise probability.

We leverage our assumption to build lossy trapdoor functions (Peikert-Waters, STOC 08). This gives the first post-quantum alternative to the lattice-based construction in the original paper. Lossy trapdoor functions, being a fundamental cryptographic tool, are known to enable a broad spectrum of both lossy and non-lossy cryptographic primitives; our construction thus implies these primitives in a generic manner. In particular, we achieve collision-resistant hash functions with plausible subexponential security, improving over a prior construction from LPN with noise rate log^2(n)/n that is only quasi-polynomially secure.

This is joint work with Aayush Jain. Paper: https://eprint.iacr.org/2024/175

32-G882, Hewlett Room
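The (T⋅M, s⋅T⋅M + e) distribution from the abstract is easy to sample in a toy setting. The sketch below is illustrative only: the dimensions, the fixed column weight for M, and the function name are my own choices, not the paper's parameterization.

```python
import numpy as np

def dense_sparse_lpn_sample(n, m, col_weight, noise_rate, rng):
    # Toy sample of the Dense-Sparse LPN distribution over F_2:
    # T is a random dense n x n matrix, M is a random sparse n x m matrix
    # (each column has `col_weight` ones), e is Bernoulli noise.
    T = rng.integers(0, 2, size=(n, n))
    M = np.zeros((n, m), dtype=np.int64)
    for j in range(m):
        M[rng.choice(n, size=col_weight, replace=False), j] = 1
    s = rng.integers(0, 2, size=n)
    e = (rng.random(m) < noise_rate).astype(np.int64)
    A = (T @ M) % 2                 # public matrix T*M
    b = (s @ A + e) % 2             # labels s*T*M + e
    return A, b, s, e
```

The assumption then says (A, b) is hard to distinguish from (A, u) for uniform u.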

November 15

Add to Calendar 2024-11-15 10:30:00 2024-11-15 12:00:00 America/New_York

Dot-Product Proofs and Their Applications

We introduce a new type of probabilistic proof system, called a dot-product proof (DPP). In a DPP, the input statement x and the proof \pi are vectors over a finite field F, and the proof is verified by making a single dot-product query to the concatenation of x and \pi. We will discuss constructions of DPPs and their applications, including:

1. Exponential hardness of approximation for MAXLIN.
2. Extremely succinct argument systems.

Joint work with Nir Bitansky, Prahladh Harsha, Yuval Ishai and David Wu.

32-G882, Hewlett Room
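The verifier's access pattern is easy to picture in code: its entire view of the statement and proof is one dot product. The example below is my own degenerate toy (an empty proof for a linear claim), meant only to show the query interface, not any construction from the paper.

```python
# Toy illustration of the DPP interface: statement x and proof pi are
# vectors over a field F, and the verifier makes a single dot-product
# query to the concatenation x || pi.

P = 101  # a small prime field F_p (arbitrary choice for the toy)

def verify(x, pi, query, target):
    # The verifier's entire access to (x, pi): one dot product mod P.
    concat = x + pi
    return sum(a * b for a, b in zip(query, concat)) % P == target

# Degenerate example: the claim "the entries of x sum to t" needs no
# proof at all; the query is the all-ones vector on the x-part.
x = [3, 7, 91]
t = sum(x) % P
assert verify(x, [], [1] * len(x), t)
```

Nontrivial DPPs make the same single query span both x and a carefully structured pi; the toy only fixes the shape of the interface.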

November 08

Error Detection and Correction in a Computationally Bounded World

Daniel Wichs (Northeastern University & NTT Research)
Add to Calendar 2024-11-08 10:30:00 2024-11-08 12:00:00 America/New_York

Error Detection and Correction in a Computationally Bounded World

We study error detection and error correction in a computationally bounded world, where errors are introduced by an arbitrary polynomial-time adversarial channel. To take advantage of this restriction, we consider randomized codes that rely on a public random seed, and consider different variants depending on (1) whether the seed needs to be known only to the encoder (self-seeded) or to both the encoder and decoder (seeded), and (2) whether the adversary can choose the messages being encoded adaptively depending on the seed or only selectively ahead of time.

In the case of a large constant-size alphabet, we essentially give a complete characterization of such codes. Recall that the parameters of standard codes for worst-case (inefficient) errors are limited by the Singleton bound: for rate $R$ it is not possible to detect more than a $1-R$ fraction of errors, or to uniquely correct more than a $(1-R)/2$ fraction of errors. In the computationally bounded setting, we show that going beyond the Singleton bound implies one-way functions for selective security, or collision-resistant hash functions for adaptive security. We construct self-seeded codes under these respective minimal assumptions with essentially optimal parameters:

* Detection: the codes have rate $R \approx 1$ while detecting a $\rho \approx 1$ fraction of errors.
* Correction: for any $\rho < 1/2$, the codes uniquely correct a $\rho$ fraction of errors with rate $R \approx 1-\rho$.

In the case of the binary alphabet, we construct selectively secure seeded codes under LWE. For error detection, our codes achieve essentially optimal rate $R \approx 1$ and relative error tolerance $\rho \approx 1/2$. For error correction, they can uniquely correct $\rho < 1/4$ relative errors with a rate $R$ that essentially matches that of the best list-decodable codes with error tolerance $\rho$. We also construct adaptively secure seeded codes with similar parameters based on a "crypto dark matter" hardness assumption that mixes linear functions over different moduli. Giving a full characterization of what parameters are possible, and under what assumptions, in the binary alphabet setting remains a fascinating open problem.

Based on joint works with Jad Silbak.

32-G882, Hewlett Room
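The Singleton-bound limits quoted above are simple arithmetic in the rate, which a two-line helper makes concrete (the helper itself is just for illustration):

```python
# The classical Singleton-bound limits for worst-case errors: at rate R,
# detection tops out at a (1 - R) fraction of errors and unique correction
# at (1 - R) / 2.  The computationally bounded codes above beat both.

def singleton_limits(R):
    assert 0 < R <= 1
    return {"detect": 1 - R, "correct": (1 - R) / 2}

# e.g. a rate-1/2 code: detect at most 1/2 of positions, uniquely correct
# at most 1/4, versus R ~ 1 with detection ~ 1 in the bounded setting.
limits = singleton_limits(0.5)
```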

November 01

Security Seminar

Mikko Hypponen, Ben Adida
WithSecure, VotingWorks
Add to Calendar 2024-11-01 14:00:00 2024-11-01 16:30:00 America/New_York

Security Seminar

Program:

2:00pm - 3:00pm
TECHNOLOGY REVOLUTIONS
Mikko Hypponen, WithSecure

Abstract: All new technical innovations come with both advantages and disadvantages; we cannot simply select the benefits without also encountering the challenges. Once something is invented, we can't make it go away. This applies to things like artificial intelligence, the Tor Network, cryptocurrencies, strong encryption, quantum computing - even the internet itself.

Bio: Mikko Hypponen is one of the most recognized cyber security experts worldwide, a keynote speaker and a best-selling author. Mikko works as the Chief Research Officer for WithSecure in Finland. He has served as an advisor for EUROPOL and the Monetary Authority of Singapore. Mikko's TED Talk has been seen by 2 million people, and his latest book has been translated into 5 languages.

3:00pm - 3:30pm: Coffee Break

3:30pm - 4:30pm
A Voting Machine Everyone Can Trust
Ben Adida, VotingWorks (SB '98, MEng '99, PhD '06)

Abstract: What does it take to build a voting machine that every voter can trust? Over the last 5 years, at VotingWorks, we've been working in the open on exactly that problem. We've concluded that the broadly accepted expert recommendations – paper ballots and post-election audits – are certainly necessary but far from sufficient. With public scrutiny into voting systems at an all-time high, and confidence in the outcome of our elections becoming increasingly partisan, we propose a new standard for voting machines – a standard that includes real transparency, strong system integrity, and a focus on simplicity. In this talk, we'll cover the why and the how, and we'll have voting equipment for attendees to try.

Bio: Ben Adida is the Founder and Executive Director of VotingWorks, the only non-profit maker of voting equipment in the United States.
Previously, Ben was VP of Engineering at Clever, Director of Engineering at Square, Director of Engineering at Mozilla, Research Faculty at Harvard Medical School / Children’s Hospital Boston, and research fellow with the Center for Research on Computation and Society at Harvard. Ben received his PhD from MIT’s Cryptography and Information Security group.

32-155
Add to Calendar 2024-11-01 10:30:00 2024-11-01 12:00:00 America/New_York

More Efficient Approximate $k$-wise Independent Permutations from Random Reversible Circuits via Log-Sobolev Inequalities

We prove that the permutation computed by a reversible circuit with $\widetilde{O}(nk\cdot \log(1/\epsilon))$ random $3$-bit gates is $\epsilon$-approximately $k$-wise independent. Our bound improves on currently known bounds in the regime where the approximation error $\epsilon$ is not too small, and is optimal up to logarithmic factors when $\epsilon$ is a constant. We obtain our results by analyzing the log-Sobolev constants of appropriate Markov chains rather than their spectral gaps.

A corollary of our result concerns the \emph{incompressibility of random reversible circuits}, as pointed out by concurrent work of Chen et al., who showed that a linear-in-$k$ bound for a multiplicative approximation to a $k$-wise independent permutation implies the linear growth of circuit complexity (a generalization of Shannon's argument).

32-G882 Hewlett Room
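The object being analyzed — the permutation of $\{0,1\}^n$ computed by a circuit of random $3$-bit gates — can be sampled directly for small $n$. The sketch below is an illustrative toy (function names and the tabular gate representation are my own): each gate picks 3 distinct wires and applies a uniformly random permutation of the 8 values on those wires.

```python
import random

def random_reversible_circuit(n, num_gates, rng):
    # Toy sampler: a circuit is a list of random 3-bit gates, each given
    # by 3 distinct wire positions and a random permutation of the 8
    # values those wires can take.
    gates = []
    for _ in range(num_gates):
        wires = rng.sample(range(n), 3)
        table = list(range(8))
        rng.shuffle(table)
        gates.append((wires, table))
    return gates

def apply_circuit(gates, n):
    # Return the permutation of {0,1}^n (as a list over 2^n values)
    # computed by applying the gates in order.
    perm = list(range(2 ** n))
    for wires, table in gates:
        for i, x in enumerate(perm):
            # Extract the 3 bits on the gate's wires, map them through
            # the gate's table, and write them back.
            bits = (x >> wires[0] & 1) | ((x >> wires[1] & 1) << 1) | ((x >> wires[2] & 1) << 2)
            new = table[bits]
            y = x
            for b, w in enumerate(wires):
                y = (y & ~(1 << w)) | (((new >> b) & 1) << w)
            perm[i] = y
    return perm
```

Since every gate acts bijectively on the 3 selected bits, the composition is always a permutation of $\{0,1\}^n$, which is easy to check on small instances.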