Compositional Approaches to Modelling Language and Concepts

Speaker

University of Bristol

Host

Dr. Una-May O'Reilly
CSAIL, MIT
Zoom Link: https://mit.zoom.us/j/95408129253

Abstract:
Recent neural approaches to modelling language and concepts have proven quite effective, with a proliferation of large models trained on correspondingly massive datasets. However, these models still fail on some tasks that humans, and symbolic approaches, can easily solve. Large neural models are also, to a certain extent, black boxes, particularly those that are proprietary. There is therefore a need to integrate compositional and neural approaches: firstly, to potentially improve the performance of large neural models, and secondly, to analyze and explain the representations that these systems use. In this talk I will present two compositional approaches to modelling language and concepts, and describe applications to reasoning and to aspects of language such as ambiguity and metaphor. I will then describe extensions from a textual to a multimodal setting, and show how composition can be used to generalize to unseen concepts. Finally, I will present some future directions in modelling analogy and in understanding the types of reasoning or symbol manipulation that large neural models may be performing.

Bio:
Martha is a Lecturer in the School of Engineering Mathematics and Technology at the University of Bristol, UK. Prior to Bristol, she held a Veni fellowship at the University of Amsterdam and was a postdoc in the Quantum Group at the University of Oxford. She is currently visiting the Santa Fe Institute, working on approaches to modelling analogy. Her research interests are in compositional approaches to modelling language and concepts, and in how these can be integrated with neural or distributed systems.