Led by Web inventor and Director Tim Berners-Lee and CEO Jeff Jaffe, the W3C focuses on leading the World Wide Web to its full potential by developing standards, protocols, and guidelines that ensure the long-term growth of the Web.
We work on a wide range of problems in distributed computing theory. We study algorithms and lower bounds for typical problems that arise in distributed systems, such as resource allocation, implementing shared-memory abstractions, and reliable communication.
Alloy is a language for describing structures and a tool for exploring them. It has been used in a wide range of applications, from finding holes in security mechanisms to designing telephone switching networks. Hundreds of projects have used Alloy for design analysis, for verification, for simulation, and as a backend for many other kinds of analysis and synthesis tools; Alloy is also taught in courses worldwide.
The challenge that motivates the ANA group is to foster a healthy future for the Internet. The interplay of private sector investment, public sector regulation and public interest advocacy, as well as the global diversity in drivers and aspirations, makes for an uncertain future.
We focus on finding novel approaches to improve the performance of modern computer systems without unduly increasing the complexity faced by application developers, compiler writers, or computer architects.
Our interests span theoretical computer science broadly, including quantum complexity theory, barriers to resolving P versus NP, probabilistically checkable proofs (PCPs), pseudorandomness, coding theory, and algorithms.
To further parallelize co-prime-sampling-based sparse sensing, we introduce Diophantine equations over different algebraic structures to build generalized lattice arrays.
Exploiting a close relationship with the generalized Chinese Remainder Theorem, we explore the geometric properties of the remainder code space, a special lattice space.
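To make the role of the Chinese Remainder Theorem concrete, here is a minimal Python sketch (our illustration, not the project's implementation) of reconstructing an integer from its residues modulo pairwise co-prime moduli, the basic operation underlying co-prime sampling:

```python
from math import prod

def crt_reconstruct(remainders, moduli):
    """Recover x mod prod(moduli) from the residues x mod m_i,
    for pairwise co-prime moduli m_i (classical CRT).

    Co-prime sampling relies on the same principle: several sparse,
    co-prime sampling lattices each constrain the unknown modulo one
    lattice, and the CRT stitches the constraints together.
    """
    M = prod(moduli)
    x = 0
    for r, m in zip(remainders, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi mod m (Python 3.8+).
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# Example: x = 23 observed only through co-prime moduli 5 and 7.
print(crt_reconstruct([23 % 5, 23 % 7], [5, 7]))  # -> 23
```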
We aim to understand the theory and applications of diversity-inducing probability distributions (and, more generally, "negative dependence") in machine learning, and to develop fast algorithms based on their mathematical properties.
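As one concrete instance, determinantal point processes (DPPs) are a standard family of such negatively dependent distributions. The following NumPy sketch (with made-up feature vectors, purely for illustration) shows how a DPP assigns lower probability to sets of similar items:

```python
import numpy as np

def dpp_subset_prob(L, subset):
    """Probability of `subset` under the L-ensemble DPP,
    P(S) = det(L_S) / det(L + I)."""
    n = L.shape[0]
    L_S = L[np.ix_(subset, subset)]
    return np.linalg.det(L_S) / np.linalg.det(L + np.eye(n))

# Toy kernel from three feature vectors; items 0 and 1 are nearly identical.
X = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]])
L = X @ X.T
# Near-parallel rows make det(L_S) small, so similar pairs are unlikely:
print(dpp_subset_prob(L, [0, 1]))  # similar pair: low probability
print(dpp_subset_prob(L, [0, 2]))  # diverse pair: much higher probability
```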
Our main goal is to develop fact-checking algorithms that can assess the credibility of claims made in textual statements and provide interpretable, valid evidence explaining why a given claim is considered factually true or fake.
Data often has geometric structure that can enable better inference; this project aims to scale up geometry-aware techniques to machine learning settings with large amounts of data, so that this structure can be exploited in practice.
The MOOC Learner Project provides learning scientists, instructional designers, and online-education specialists with open-source software that enables them to efficiently extract teaching and learning insights from the data collected when students learn on the edX or Open edX platform.
Differential privacy is the most commonly used approach for preserving privacy in decentralized optimization. In such settings, however, a trade-off between accuracy (and even efficiency) and privacy is inevitable. In this project, we explore distributed numerical-optimization schemes that incorporate lightweight cryptographic information sharing, and we study how the network topology affects the convergence rate.
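For illustration, here is a minimal sketch of the standard differentially private baseline mentioned above: each node perturbs its local gradient with Laplace noise before sharing it. The sensitivity, epsilon, and toy gradients are assumptions for the example, not the project's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_local_gradient(grad, sensitivity, epsilon):
    """Perturb a local gradient with Laplace noise of scale
    sensitivity / epsilon, the standard epsilon-DP mechanism.
    Smaller epsilon means stronger privacy but noisier gradients,
    which is exactly the accuracy/privacy trade-off noted above.
    """
    return grad + rng.laplace(scale=sensitivity / epsilon, size=grad.shape)

# Toy decentralized step: three nodes share noisy gradients and average them.
grads = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.9, 2.1])]
noisy = [dp_local_gradient(g, sensitivity=1.0, epsilon=0.5) for g in grads]
print(np.mean(noisy, axis=0))  # consensus gradient, corrupted by DP noise
```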
Predicting the number of clock cycles a processor takes to execute a block of assembly instructions in steady-state (the throughput) is important for both compiler designers and performance engineers.
However, building an analytical model to do so is especially complicated for modern x86-64 Complex Instruction Set Computer (CISC) machines with sophisticated processor microarchitectures: the work is tedious, error-prone, and must be redone from scratch for each processor generation.
Ithemal is the first tool that learns to predict the throughput of a set of instructions. It does so more accurately than state-of-the-art hand-written tools currently used in compiler backends and static machine code analyzers. In particular, Ithemal has less than half the error of state-of-the-art analytical models (LLVM's llvm-mca and Intel's IACA).
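For intuition, the sketch below shows the general shape of such a learned predictor: tokenize a basic block, feed the token sequence to an LSTM, and regress a throughput value. It is a simplified stand-in, not Ithemal itself (which uses a hierarchical LSTM); the vocabulary size, dimensions, and PyTorch usage here are our assumptions.

```python
import torch
import torch.nn as nn

class ThroughputPredictor(nn.Module):
    """Embed the tokens of a basic block, run them through an LSTM,
    and regress a single throughput value. Ithemal's actual model is
    a hierarchical LSTM; the sizes here are illustrative."""

    def __init__(self, vocab_size=1024, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids):  # token_ids: (batch, seq_len) of token IDs
        _, (h, _) = self.lstm(self.embed(token_ids))
        return self.head(h[-1]).squeeze(-1)  # predicted cycles per iteration

model = ThroughputPredictor()
block = torch.randint(0, 1024, (1, 10))  # one fake tokenized basic block
print(model(block))  # untrained; train with MSE against measured throughputs
```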
Many optimization problems in machine learning rely on noisy, estimated parameters. Neglecting this uncertainty can lead to large fluctuations in performance. We are developing algorithms for these already-nonconvex problems that are robust to such errors.
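As a simple illustration of one standard approach (scenario-based robust optimization, not necessarily this group's algorithms), the sketch below runs subgradient descent on the worst-case loss over several noisy estimates of a parameter matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

def worst_case_grad(x, A_samples, b):
    """Subgradient of max_i ||A_i x - b||^2, taken at the worst scenario.
    The A_i model several noisy estimates of the true parameter matrix;
    descending the max makes x robust to which estimate is correct.
    """
    losses = [np.sum((A @ x - b) ** 2) for A in A_samples]
    A = A_samples[int(np.argmax(losses))]
    return 2 * A.T @ (A @ x - b), max(losses)

# Several noisy estimates of an unknown parameter matrix (illustrative).
A_true = rng.normal(size=(20, 5))
b = rng.normal(size=20)
A_samples = [A_true + 0.1 * rng.normal(size=A_true.shape) for _ in range(5)]

x = np.zeros(5)
for _ in range(500):  # subgradient descent on the worst-case loss
    g, _ = worst_case_grad(x, A_samples, b)
    x -= 0.005 * g
_, loss = worst_case_grad(x, A_samples, b)
print(f"worst-case loss after optimization: {loss:.3f}")
```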
New privacy laws like Europe’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have spawned a new industry of companies and platforms advertising that they can anonymize your data and keep you compliant with the law.
Last week MIT’s Institute for Foundations of Data Science (MIFODS) held an interdisciplinary workshop aimed at tackling the theory underlying deep learning. Led by MIT professor Aleksander Madry, the event focused on a number of research discussions at the intersection of math, statistics, and theoretical computer science.
On Wednesday, October 31, MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will host a special one-day conference with BT to convene security professionals, government officials, and academic experts to discuss key issues in cybersecurity.
Google AI’s Jeff Dean has a seemingly straightforward objective: he wants to use a collection of trainable mathematical units organized in layers to solve complicated tasks that will ultimately benefit many parts of society.
Every spring, engineering students from MIT and law students from Georgetown University overcome the distance between their institutions and disciplines in a semester-long flurry of virtual classroom meetings and late-night Google Hangouts sessions, culminating in presentations to policy experts in DC.