In machine learning, a system can effectively make predictions from raw data by learning representations, e.g., of objects in the world. Many types of data, such as images, language, molecules, or interactions, can be viewed as graphs. For such data, researchers are increasingly harnessing the power of Graph Neural Networks (GNNs), a structured framework for representation learning on graphs.
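As a concrete illustration of viewing data as a graph, a molecule can be encoded with atoms as nodes and bonds as edges. Below is a minimal sketch in Python; the fragment and its labels are hypothetical, chosen only to show the encoding:

```python
# Hypothetical molecular fragment: node labels are atom types, edges are bonds.
nodes = {0: "C", 1: "N", 2: "C", 3: "O"}
edges = [(0, 1), (1, 2), (2, 3)]

# Build an adjacency list: for each node, the list of its neighbors.
adjacency = {n: [] for n in nodes}
for u, v in edges:
    adjacency[u].append(v)
    adjacency[v].append(u)

print(adjacency)  # {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

A GNN would take such a graph, together with initial node features, as its input.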
This project focuses on the theoretical foundations for analyzing the expressive power of GNNs, i.e., understanding and improving what kinds of things these networks can learn to predict, in two settings: representing objects and reasoning about those representations.
Representations of objects in the world help us make predictions or classifications about them. Deep-learning methods such as GNNs can learn effective representations of data that can be modeled as graphs, and these representations are used in a variety of applications. In the pharmaceutical industry, for example, GNNs can learn suitable representations of molecules to predict their properties and to help design new drugs. GNNs are also used to generate personalized recommendations for products or for content provided by streaming media services.
After developing representations of objects, we look for ways a GNN can improve reasoning about these representations for machine learning and artificial intelligence (AI). Given a character's representation in a game, for example, a system can try to predict how that character will behave. Similarly, if a machine-learning system is provided with representations of interacting objects, it can learn to reason about those representations in a series of steps.
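One way to picture this stepwise computation is message passing, the core operation of a GNN: in each round, every node aggregates its neighbors' representations and updates its own, so that after k rounds a node's representation reflects its k-hop neighborhood. The following is a minimal NumPy sketch, not any particular published architecture; the graph, features, and random weights are made up for illustration:

```python
import numpy as np

# Hypothetical 4-node path graph as an adjacency matrix with self-loops.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
X = np.eye(4)  # initial node features: one-hot node identities

# Weight matrix would be learned in practice; random here for the sketch.
W = np.random.default_rng(0).normal(size=(4, 4))

def message_passing_round(A, H, W):
    # Each node sums features over its neighborhood (A @ H),
    # then applies a linear map and a ReLU nonlinearity.
    return np.maximum(A @ H @ W, 0.0)

H = X
for _ in range(2):  # two rounds: each node "sees" its 2-hop neighborhood
    H = message_passing_round(A, H, W)
print(H.shape)  # (4, 4)
```

Each round of message passing corresponds to one step of reasoning over the graph's local structure.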
What is often key to the success of a learning method is structure in the data. The structure we consider for the reasoning process is the structure of a procedure that can answer the reasoning question. Specifically, we show that many such tasks can be solved by dynamic programming, a general algorithmic framework for efficiently solving multi-stage problems by combining the solutions to their subproblems.
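A classic instance of dynamic programming with this multi-stage flavor is single-source shortest paths, computed Bellman-Ford style: after k stages, the table entry for a node holds the length of the shortest path to it using at most k edges. A minimal sketch, on a hypothetical weighted graph:

```python
import math

# Hypothetical weighted graph as (u, v, weight) triples; source is node 0.
edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 1.0), (2, 3, 5.0)]
n = 4

# DP table: d[v] = best known distance from the source to v.
d = [math.inf] * n
d[0] = 0.0

for _ in range(n - 1):  # each stage extends optimal paths by one more edge
    for u, v, w in edges:
        d[v] = min(d[v], d[u] + w)  # relax: reuse the subproblem solution d[u]

print(d)  # [0.0, 3.0, 1.0, 4.0]
```

The per-stage update, which combines each node's value with values flowing in along its edges, has the same shape as a round of GNN message passing, which is one reason graph networks are a natural fit for such reasoning tasks.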
By developing the theoretical foundations for reasoning about the expressive power of GNNs and expanding their representational capacity, we aim to build increasingly powerful architectures for machine learning with graphs.