November 17

Machine Learning and AI for the Sciences — Towards Understanding
November 17, 2017, 2:00 pm – 3:00 pm (America/New_York)
Location: 46-3002 Belfer
Contact: sarah_donahue@hks.harvard.edu

Abstract: In recent years, machine learning (ML) and artificial intelligence (AI) methods have come to play an increasingly enabling role in the sciences and in industry. In particular, the advent of large and/or complex data corpora has given rise to new technological challenges and possibilities. In his talk, Müller will touch on ML applications in the sciences, in particular in neuroscience, medicine, and physics. He will also discuss possibilities for extracting information from machine learning models to further our understanding by explaining nonlinear ML models. For example, machine learning models for quantum chemistry can, through interpretable ML, contribute to furthering chemical understanding. Finally, Müller will briefly outline perspectives and limitations.
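The abstract's point about extracting insight by explaining nonlinear ML models can be made concrete with a small, generic sketch. The code below computes a gradient × input attribution for a tiny feed-forward regressor with made-up weights; it illustrates the general idea only, not the specific explanation method used in Müller's work, and every name and number in it is hypothetical.

```python
import numpy as np

# Hypothetical tiny regressor: one tanh hidden layer, scalar output.
# All weights and the example input are made up for illustration only.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))    # hidden x input weights
b1 = np.zeros(4)
w2 = rng.normal(size=4)         # output weights
x = np.array([0.5, -1.2, 0.3])  # a made-up input "descriptor" vector

# Forward pass: y = w2 . tanh(W1 x + b1)
h = np.tanh(W1 @ x + b1)
y = w2 @ h

# Input gradient dy/dx, computed analytically for this small net:
# dy/dh = w2,  dh/d(pre-activation) = 1 - tanh^2,  d(pre-activation)/dx = W1
dy_dx = (w2 * (1.0 - h**2)) @ W1

# Gradient x input: a crude per-feature "relevance" score for the prediction.
relevance = dy_dx * x
print("prediction:", y)
print("per-feature attribution:", relevance)
```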

December 15

Have We Missed Half of What the Neocortex Does? Allocentric Location as the Basis of Perception
December 15, 2017, 4:00 pm – 5:00 pm (America/New_York)
Location: 46-3002 Belfer
Contact: sarah_donahue@hks.harvard.edu

Abstract: In this talk I will describe a theory in which sensory regions of the neocortex process two inputs. One input is the well-known sensory data arriving via thalamic relay cells. We propose that the second input is a representation of allocentric location: where the sensed feature is relative to the object being sensed, in an object-centric reference frame. As the sensors move, cortical columns learn complete models of objects by integrating sensory features and location representations over time. Lateral projections allow columns to rapidly reach a consensus on what object is being sensed. We propose that the representation of allocentric location is derived locally, in layer 6 of each column, using the same tiling principles as grid cells in the entorhinal cortex. Because individual cortical columns are able to model complete complex objects, cortical regions are far more powerful than currently believed. The inclusion of allocentric location offers the possibility of rapid progress in understanding the function of numerous aspects of cortical anatomy.

I will be discussing material from these two papers; others can be found at www.Numenta.com/papers:
A Theory of How Columns in the Neocortex Enable Learning the Structure of the World, https://doi.org/10.3389/fncir.2017.00081
Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in the Neocortex, https://doi.org/10.3389/fncir.2016.00023

Speaker Biography: Jeff Hawkins is a scientist and co-founder at Numenta, an independent research company focused on neocortical theory. His research focuses on how the cortex learns predictive models of the world through sensation and movement. In 2002 he founded the Redwood Neuroscience Institute, where he served as Director for three years; the institute is currently located at U.C. Berkeley. Previously, he co-founded two companies, Palm and Handspring, where he designed products such as the PalmPilot and the Treo smartphone. In 2004 he wrote "On Intelligence", a book about cortical theory. Hawkins earned his B.S. in electrical engineering from Cornell University in 1979. He was elected to the National Academy of Engineering in 2003.
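The consensus mechanism sketched in the abstract, in which columns with partial views converge on a single object through lateral connections, can be illustrated with a toy intersection-of-candidates model. The sketch below is a loose paraphrase of that idea, not the published Numenta algorithm; all object names and features are invented.

```python
# Toy illustration of columns narrowing down an object by "voting":
# each column keeps the set of known objects consistent with its own
# (feature, location) observation, and intersecting those sets across
# columns stands in for the lateral consensus described in the abstract.
# All objects and features here are invented for illustration.
objects = {
    "mug":     {("handle", "side"), ("rim", "top"), ("flat", "bottom")},
    "ball":    {("curved", "side"), ("curved", "top"), ("curved", "bottom")},
    "stapler": {("hinge", "side"), ("flat", "top"), ("flat", "bottom")},
}

def candidates(observation):
    """Objects whose stored model contains this (feature, location) pair."""
    return {name for name, pairs in objects.items() if observation in pairs}

# Three columns, each sensing a different part of the same object.
column_observations = [("rim", "top"), ("flat", "bottom"), ("handle", "side")]

consensus = set(objects)
for obs in column_observations:
    consensus &= candidates(obs)

print("object consistent with all columns:", consensus)  # {'mug'}
```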

February 16

How can the brain efficiently build an understanding of the natural world?
February 16, 2018, 4:00 pm – 5:00 pm (America/New_York)
Location: 46-2001 Belfer
Contact: sarah_donahue@hks.harvard.edu

Abstract: The brain exploits the statistical regularities of the natural world. In the visual system, an efficient representation of light intensity begins in the retina, where statistical redundancies are removed via spatiotemporal decorrelation. Much less is known, however, about the efficient representation of complex features in higher visual areas. I will discuss how the central visual system, operating with different goals and under different constraints, makes efficient use of resources to extract meaningful features from complex visual stimuli. I will then highlight how these same principles can be generalized to dynamic situations, where both the environment and the goals of the system are in flux. Together, these principles have implications for understanding a broad range of phenomena across animals and sensory modalities.

This event is organized by the CBMM Trainee Leadership Council.
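The redundancy-removal idea mentioned in the abstract is closely related to whitening: transforming correlated inputs so that their covariance becomes (approximately) the identity. The sketch below applies a generic ZCA-style whitening to synthetic correlated data; it illustrates decorrelation in general, not the retinal model discussed in the talk, and all numbers are made up.

```python
import numpy as np

# Synthetic "pixel" data with strong correlations between channels,
# a stand-in for the spatial redundancy of natural light intensities.
rng = np.random.default_rng(1)
latent = rng.normal(size=(5000, 1))
signals = latent + 0.3 * rng.normal(size=(5000, 3))  # 3 correlated channels

# Center the data, then whiten with a ZCA transform so the output
# covariance is close to the identity, i.e. the channels are decorrelated.
X = signals - signals.mean(axis=0)
cov = X.T @ X / X.shape[0]
eigvals, eigvecs = np.linalg.eigh(cov)
zca = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + 1e-8)) @ eigvecs.T
X_white = X @ zca

print("covariance before:\n", np.round(cov, 2))
print("covariance after:\n", np.round(X_white.T @ X_white / X.shape[0], 2))
```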