CSAIL Event Calendar: Previous Series
Probabilistic Relational Models: Representation, Inference, & Learning
Speaker: Daphne Koller, Stanford University
Bayesian networks are a compact and natural representation for complex probabilistic models. They use graphical notation to encode domain structure: the direct probabilistic dependencies between variables in the domain. Bayesian networks have been applied successfully in a variety of applications; they can also be learned directly from data, allowing us to automatically extract the most significant correlations in the domain.

Bayesian networks, however, are an unsuitable representation for complex domains involving many entities that interact with each other. In the first part of the talk, I will describe probabilistic relational models (PRMs), which extend the language of Bayesian networks with the expressive power of object-relational languages. A PRM models the uncertainty over the attributes of objects in the domain and uncertainty over the relations between the objects. This language can compactly model domains that are substantially more complex than those for which standard Bayesian networks are appropriate. The same structure that allows compact representation can also support substantially faster inference than in equivalent Bayesian network models.

In the second part of the talk, I will show how we can use probabilistic relational models to learn the probabilistic dependency structure in a relational domain, using a relational database as our starting point. I will discuss applications of this new learning technology to various domains, including complex biological data sets.
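As background for the abstract's claim that Bayesian networks compactly encode direct dependencies, the following is a minimal illustrative sketch (not taken from the talk) using the classic "sprinkler" network. The joint distribution over four binary variables factorizes into one local conditional probability table (CPT) per variable, so only 9 parameters are needed instead of the 15 a full joint table would require; the CPT values here are arbitrary example numbers.

```python
from itertools import product

# Classic "sprinkler" Bayesian network (illustrative example):
#   Cloudy -> Sprinkler, Cloudy -> Rain, (Sprinkler, Rain) -> WetGrass.
# Each table gives the probability the variable is True, given its parents.
p_cloudy = 0.5
p_sprinkler = {True: 0.1, False: 0.5}                  # keyed by Cloudy
p_rain = {True: 0.8, False: 0.2}                       # keyed by Cloudy
p_wet = {(True, True): 0.99, (True, False): 0.90,      # keyed by (Sprinkler, Rain)
         (False, True): 0.90, (False, False): 0.0}

def bernoulli(p, value):
    """P(X = value) for a binary variable with P(X = True) = p."""
    return p if value else 1.0 - p

def joint(c, s, r, w):
    """P(C=c, S=s, R=r, W=w) as the product of the four local CPTs."""
    return (bernoulli(p_cloudy, c)
            * bernoulli(p_sprinkler[c], s)
            * bernoulli(p_rain[c], r)
            * bernoulli(p_wet[(s, r)], w))

# The factorization defines a valid distribution: all assignments sum to 1.
total = sum(joint(c, s, r, w)
            for c, s, r, w in product([True, False], repeat=4))
print(total)                          # 1.0 (up to floating-point error)
print(joint(True, False, True, True)) # 0.5 * 0.9 * 0.8 * 0.9 = 0.324
```

A PRM generalizes exactly this factorization: instead of one CPT per fixed variable, it attaches CPTs to attributes of object classes in a relational schema, so the same parameters are shared across all objects of a class.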