Imagine a world where computing systems are both trustworthy and as natural to interact with as a colleague. That dream may seem a long way off: today’s computers are woefully insecure, with industrial infrastructure, financial systems, and military networks compromised regularly, and they are woefully unnatural in the way they interact with us. But thanks to research at the intersection of cybersecurity and artificial intelligence (AI), the main weakness underlying these problems — that computers don’t really know what they’re doing — is being addressed.
Dr. Howard Shrobe, a Principal Research Scientist at MIT CSAIL, heads this research in the Computation Structures Group. His goal is to create and deploy high-performance, reliable, and secure computing systems that are easy to interact with, and he and his team pursue it in a number of ways across two intersecting research areas: systems and security, and AI.
When Dr. Shrobe first came to MIT, he was working in the systems area on a project called Multics. When Multics was finished, he wanted to continue his work in systems, so he drifted into the Artificial Intelligence Lab (now CSAIL) because there was more systems work available there. As he worked in the lab, he became more and more interested in AI, particularly cognitive-style AI, which focuses on reasoning and representation. In the 1990s, he began to see that these two areas of interest were starting to meet in the field of cybersecurity.
Some of Dr. Shrobe’s work is in pure systems, but much of his approach to cybersecurity is to bring in AI-style reasoning as part of the solution. For example, he and his group are designing computing systems that actively manage metadata in order to prevent attacks and help computers understand what they are doing at different levels, including hardware and low-level systems software. While one current effort — designing a new processor that tags every word in memory and in registers with extra metadata — is pure systems work, in other areas he uses AI planning and reasoning techniques to provide a second layer of defense and tools for analyzing systems.
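The tagged-word idea can be illustrated with a minimal software sketch. The code below is not the actual processor design; the `Word` type, the tag names, and the dereference policy are all invented for illustration. It simulates a memory in which every word carries a metadata tag, and a policy check blocks dereferencing anything not tagged as a pointer.

```python
# Illustrative sketch of tagged-word memory: every word carries a metadata
# tag, and a policy check runs on each memory operation. The tags and the
# policy here are hypothetical, not the real hardware's.
from dataclasses import dataclass

@dataclass
class Word:
    value: int
    tag: str  # e.g. "int" or "ptr"

class TagViolation(Exception):
    """Raised when an operation violates the metadata policy."""

class TaggedMemory:
    def __init__(self, size):
        self.cells = [Word(0, "int") for _ in range(size)]

    def load(self, addr_word):
        # Policy: only words tagged as pointers may be dereferenced.
        if addr_word.tag != "ptr":
            raise TagViolation(f"dereference of non-pointer (tag={addr_word.tag})")
        return self.cells[addr_word.value]

    def store(self, addr_word, value_word):
        if addr_word.tag != "ptr":
            raise TagViolation(f"store through non-pointer (tag={addr_word.tag})")
        self.cells[addr_word.value] = value_word

mem = TaggedMemory(16)
p = Word(3, "ptr")
mem.store(p, Word(42, "int"))
print(mem.load(p).value)  # 42

# An integer forged into an address is caught by the tag check:
forged = Word(3, "int")
try:
    mem.load(forged)
except TagViolation as e:
    print("blocked:", e)
```

The point of the sketch is that the check travels with the data itself, so an entire class of pointer-forging attacks is ruled out regardless of which program is running.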
Attack Planner is an example of one such system built by Dr. Shrobe and his team. It uses AI at a tactical, psychological level to model the way attackers think about attacking systems. Typically, malicious actors follow an overall plan known as the Cyber Kill Chain, which involves the high-level steps of the strategy: initial penetration, lateral movement from one machine to another, privilege escalation, and finally exploitation (and, in many cases, obfuscation). The reasoning part of the system constructs as many plans for the attacker as it possibly can, serving as a kind of auditing tool, so that users can be better prepared for all of the ways attackers could penetrate a system and the various consequences of an attack.
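At its simplest, enumerating attacker plans over the kill-chain stages is a combinatorial search. The toy sketch below is not the Attack Planner itself: the stage actions and network assumptions are invented. It simply lists every sequence of one candidate action per stage, which is the kind of exhaustive audit output the paragraph describes.

```python
# Toy attack-plan enumerator: given candidate actions for each kill-chain
# stage (all hypothetical), list every stage-by-stage plan an attacker
# could follow. A real planner would also check preconditions between steps.
from itertools import product

stages = [
    ("initial_penetration",  ["phish_employee", "exploit_web_server"]),
    ("lateral_movement",     ["reuse_credentials", "exploit_smb"]),
    ("privilege_escalation", ["kernel_exploit", "misconfigured_sudo"]),
    ("exploitation",         ["exfiltrate_data", "corrupt_database"]),
]

def enumerate_plans(stages):
    """Yield every combination of one action per stage, in kill-chain order."""
    names = [name for name, _ in stages]
    for combo in product(*[actions for _, actions in stages]):
        yield list(zip(names, combo))

plans = list(enumerate_plans(stages))
print(len(plans))  # 2 * 2 * 2 * 2 = 16 candidate plans
for stage, action in plans[0]:
    print(f"{stage}: {action}")
```

Even this tiny model yields sixteen distinct plans, which hints at why automated enumeration beats manual red-team brainstorming as systems grow.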
Another blend of AI and cybersecurity Dr. Shrobe and his group are working on involves building monitors that watch a system while it is executing and compare what the system is doing to a symbolic model of what it is supposed to do. If the system’s actual behavior goes outside the envelope of what the model sanctions, something is wrong. The model can also be used to reason backward and figure out where the violation originally occurred.
Recently, Dr. Shrobe has begun a project with DARPA called Assist that is purely AI and also advances natural interaction between computers and humans. In the Assist project, he pursues cognitive questions that deal with what is known in psychology as “theory of mind,” which is essential to modeling other minds and the oftentimes subtle ways humans interact with one another. People nod, use hand gestures, and read facial expressions. Computers, on the other hand, don’t, so the goal is to develop a kind of reasoning that lets a system reflect on what the people it’s interacting with are actually trying to do, what they’re thinking, what their plans might be, and how the computing system can interact with them in a constructive way.
Using both systems and AI, Dr. Shrobe’s approach to defense combines a fundamental style and a heuristic style. Fundamentally, if there is a general category of weaknesses that attackers exploit, he works to remove that entire category from the system architecture. Heuristically, he works on randomization to make it much harder for an attacker to break into the system.
Staying steps ahead of cyberattacks is a constant challenge and requires creative thinking, because it involves conflict with intelligent, planning adversaries. The research Dr. Shrobe is doing to combine AI and systems addresses this challenge, making computing systems more trustworthy and better at anticipating what humans want them to do in an ever-changing digital landscape.
Using AI methods, we are developing an attack tree generator that automatically enumerates cyberattack vectors for industrial control systems in critical infrastructure (electric grids, water networks, and transportation systems). The generator can quickly assess cyber risk for a system at scale.
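Once an attack tree is generated, assessing risk typically means evaluating it bottom-up. The sketch below is a generic AND/OR evaluation, not the project's actual generator: the node structure, goal names, and difficulty scores are invented. An OR node costs as little as its easiest child; an AND node requires all of its children.

```python
# Hedged sketch of scoring a generated attack tree. Leaf "cost" values are
# invented attacker-difficulty scores; OR nodes take the cheapest option,
# AND nodes require every sub-goal.
def attack_cost(node):
    """Return the cheapest attacker cost to achieve this node's goal."""
    if node["kind"] == "leaf":
        return node["cost"]
    costs = [attack_cost(child) for child in node["children"]]
    return min(costs) if node["kind"] == "or" else sum(costs)

# Hypothetical tree for disrupting an industrial control process:
tree = {
    "kind": "or", "children": [
        {"kind": "and", "children": [           # spoof readings AND override control
            {"kind": "leaf", "name": "spoof_sensor", "cost": 4},
            {"kind": "leaf", "name": "override_plc", "cost": 6},
        ]},
        {"kind": "leaf", "name": "compromise_hmi", "cost": 8},
    ],
}
print(attack_cost(tree))  # min(4 + 6, 8) = 8
```

Scoring every path this way is what makes "risk at scale" tractable: the cheapest path surfaces automatically, no matter how many branches the generator produces.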