November 17

2017-11-17, 2:00pm–3:00pm (America/New_York)
Machine Learning and AI for the Sciences — Towards Understanding

Abstract: In recent years, machine learning (ML) and artificial intelligence (AI) methods have come to play an increasingly enabling role in the sciences and in industry. In particular, the advent of large and/or complex data corpora has given rise to new technological challenges and possibilities. In his talk, Müller will touch upon ML applications in the sciences, in particular in neuroscience, medicine and physics. He will also discuss possibilities for extracting information from machine learning models to further our understanding, by explaining nonlinear ML models. For example, machine learning models for quantum chemistry can, through interpretable ML, contribute to furthering chemical understanding. Finally, Müller will briefly outline perspectives and limitations.

Location: 46-3002, Belfer
Contact: sarah_donahue@hks.harvard.edu
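As a concrete illustration of what "explaining nonlinear ML models" can mean in practice, the sketch below computes a simple gradient-times-input attribution for a tiny two-layer network. This is a minimal, hypothetical example: random weights stand in for a trained model, and gradient-times-input is only one of many attribution techniques, not necessarily the method discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network with a tanh nonlinearity; the random weights
# are stand-ins for a trained model.
W1 = rng.standard_normal((4, 3))   # input dim 3 -> hidden dim 4
W2 = rng.standard_normal((1, 4))   # hidden dim 4 -> scalar output

def forward(x):
    h = np.tanh(W1 @ x)
    return (W2 @ h)[0], h

def gradient(x):
    # d(output)/d(input) via the chain rule: W2 * tanh'(W1 x) * W1
    _, h = forward(x)
    return (W2 * (1 - h ** 2)) @ W1   # shape (1, 3)

x = np.array([0.5, -1.0, 2.0])
pred, _ = forward(x)
relevance = gradient(x).ravel() * x    # gradient-times-input attribution

print("prediction:", pred)
print("per-feature relevance:", relevance)
```

Features with large positive or negative relevance are those the (nonlinear) model relied on most for this particular prediction, which is the kind of per-example insight interpretable ML aims to provide.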

December 15

2017-12-15, 4:00pm–5:00pm (America/New_York)
Have We Missed Half of What the Neocortex Does? Allocentric Location as the Basis of Perception

Abstract: In this talk I will describe a theory in which sensory regions of the neocortex process two inputs. One input is the well-known sensory data arriving via thalamic relay cells. We propose that the second input is a representation of allocentric location, which represents where the sensed feature is relative to the object being sensed, in an object-centric reference frame. As the sensors move, cortical columns learn complete models of objects by integrating sensory features and location representations over time. Lateral projections allow columns to rapidly reach a consensus on what object is being sensed. We propose that the representation of allocentric location is derived locally, in layer 6 of each column, using the same tiling principles as grid cells in the entorhinal cortex. Because individual cortical columns are able to model complete complex objects, cortical regions are far more powerful than currently believed. The inclusion of allocentric location offers the possibility of rapid progress in understanding the function of numerous aspects of cortical anatomy.

I will be discussing material from these two papers; others can be found at www.Numenta.com/papers:
- A Theory of How Columns in the Neocortex Enable Learning the Structure of the World: https://doi.org/10.3389/fncir.2017.00081
- Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in the Neocortex: https://doi.org/10.3389/fncir.2016.00023

Speaker Biography: Jeff Hawkins is a scientist and co-founder at Numenta, an independent research company focused on neocortical theory. His research focuses on how the cortex learns predictive models of the world through sensation and movement. In 2002, he founded the Redwood Neuroscience Institute, where he served as Director for three years; the institute is now located at U.C. Berkeley. Previously, he co-founded two companies, Palm and Handspring, where he designed products such as the PalmPilot and the Treo smartphone. In 2004 he wrote "On Intelligence", a book about cortical theory. Hawkins earned his B.S. in electrical engineering from Cornell University in 1979. He was elected to the National Academy of Engineering in 2003.

Location: 46-3002

February 16

2018-02-16, 4:00pm–5:00pm (America/New_York)
How can the brain efficiently build an understanding of the natural world?

Abstract: The brain exploits the statistical regularities of the natural world. In the visual system, an efficient representation of light intensity begins in the retina, where statistical redundancies are removed via spatiotemporal decorrelation. Much less is known, however, about the efficient representation of complex features in higher visual areas. I will discuss how the central visual system, operating with different goals and under different constraints, makes efficient use of resources to extract meaningful features from complex visual stimuli. I will then highlight how these same principles can be generalized to dynamic situations, where both the environment and the goals of the system are in flux. Together, these principles have implications for understanding a broad range of phenomena across animals and sensory modalities.

This event is organized by the CBMM Trainee Leadership Council.

Location: 46-2001

April 13

2018-04-13, 4:30pm–5:30pm (America/New_York)
Is a Turing Test for Intelligence Equivalent to a Turing Test for Consciousness?

Abstract: Rapid advances in convolutional networks and other machine learning techniques, in combination with large databases and the relentless hardware advances due to Moore’s Law, have brought us closer to the day when we will be able to have extended conversations with programmable systems, such as advanced versions of Alexa or Siri, without being able to tell their siren voices from those of humans. This raises the question of the extent to which systems that can pass a non-trivial version of the Turing test will also feel anything, that is, be conscious. I shall argue against this possibility for three reasons. Firstly, intelligent behavior, including speech, is conceptually radically different from subjective experience. Secondly, clinical case studies demonstrate that the neural basis of intelligence, self-monitoring, insight and other higher-order cognitive processes in the frontal regions of the neocortex is distinct from the neural correlates of conscious experience in the posterior cortex. Thirdly, Integrated Information Theory (IIT), a fundamental theory of consciousness, predicts that conventional computers, even though they will be able, at least in principle, to simulate human-level behavior, will not experience anything. Building human-level consciousness requires neuromorphic computer architectures.

Speaker Bio: Christof Koch is an American neuroscientist best known for his studies and writings exploring the basis of consciousness. Trained as a physicist, Koch was for 27 years a professor of biology and engineering at the California Institute of Technology. He is now Chief Scientist and President of the Allen Institute for Brain Science in Seattle, leading a ten-year, large-scale, high-throughput effort to build brain observatories to map, analyze and understand the mouse and human cerebral cortex. On a quest to understand the physical roots of consciousness, he published his first paper on the neural correlates of consciousness with the molecular biologist Francis Crick more than a quarter of a century ago. He is a frequent public speaker and writes a regular column for Scientific American. Christof is a vegetarian and cyclist who lives in Seattle and loves big dogs, climbing and rowing.

Location: MIT 54-100

April 18

2018-04-18, 2:00pm–3:00pm (America/New_York)
Fit without fear: an over-fitting perspective on modern deep and shallow learning
CBMM Special Seminar

Abstract: A striking feature of modern supervised machine learning is its pervasive over-parametrization. Deep networks contain millions of parameters, often exceeding the number of data points by orders of magnitude. These networks are trained to nearly interpolate the data by driving the training error to zero. Yet, at odds with most theory, they show excellent test performance. It has become accepted wisdom that these properties are special to deep networks and require non-convex analysis to understand.

In this talk, I will show that classical (convex) kernel methods do, in fact, exhibit these unusual properties. Moreover, kernel methods provide a competitive practical alternative to deep learning, once we address the non-trivial challenges of scaling to modern big data. I will also present theoretical and empirical results indicating that we are unlikely to make progress on understanding deep learning until we develop a fundamental understanding of classical "shallow" kernel classifiers in the "modern" over-fitted setting. Finally, I will show that the ubiquitously used stochastic gradient descent (SGD) is very effective at driving the training error to zero in the interpolated regime, a finding that sheds light on the effectiveness of modern methods and provides specific guidance for parameter selection.

These results present a perspective and a challenge. Much of the success of modern learning comes into focus when considered from an over-parametrization and interpolation point of view. The next step is to address the basic question of why classifiers in the "modern" interpolated setting generalize so well to unseen data. Kernel methods provide both a compelling set of practical algorithms and an analytical platform for resolving this fundamental issue.

Based on joint work with Siyuan Ma, Raef Bassily, Chaoyue Liu and Soumik Mandal.

Location: Singleton Auditorium (46-3002)
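The interpolation regime the abstract describes can be illustrated with a toy "ridgeless" kernel regressor: it fits noisy training data essentially exactly (near-zero training error), yet its held-out predictions remain sensible. This is a minimal sketch with an assumed Gaussian kernel and synthetic data, not the speaker's actual experiments.

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_kernel(A, B, bw=0.3):
    # Pairwise Gaussian (RBF) kernel between two sets of 1-D points.
    return np.exp(-(A[:, None] - B[None, :]) ** 2 / (2 * bw ** 2))

# Noisy 1-D training data: y = sin(x) + noise.
X = np.linspace(-3, 3, 30)
y = np.sin(X) + 0.1 * rng.standard_normal(30)

# "Ridgeless" kernel regression: solve K a = y (the tiny jitter is only
# for numerical conditioning), so the fit interpolates the training set.
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)

train_pred = K @ alpha
print("max train error:", np.abs(train_pred - y).max())       # near zero

# Despite fitting the noise exactly, held-out predictions stay close
# to the underlying function.
X_test = np.linspace(-2.5, 2.5, 101)
test_pred = gaussian_kernel(X_test, X) @ alpha
print("mean test error:", np.abs(test_pred - np.sin(X_test)).mean())
```

The training error is driven to (numerically) zero, noise included, yet the test error stays small: a small-scale instance of the interpolation-without-catastrophic-overfitting phenomenon the talk examines at scale.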

April 20

2018-04-20, 4:00pm–5:00pm (America/New_York)
Accelerating Bio Discovery with Machine Learning

Abstract: Google Accelerated Sciences is a translational research team that brings Google's technological expertise to the scientific community. Recent advances in machine learning have delivered incredible results in consumer applications (e.g. photo recognition, language translation) and are now beginning to play an important role in the life sciences. Taking examples from active collaborations in the biochemical, biological, and biomedical fields, I will focus on how our team transforms science problems into data problems and applies Google's scaled computation, data-driven engineering, and machine learning to accelerate discovery.

Speaker Bio: Philip Nelson is a Director of Engineering in Google Research. He joined Google in 2008 and was previously responsible for a range of Google applications and geo services. In 2013, he helped found, and currently leads, the Google Accelerated Science team, which collaborates with academic and commercial scientists to apply Google's knowledge and experience running complex algorithms over large data sets to important scientific problems. Philip graduated from MIT in 1985, where he did award-winning research on hip prosthetics at Harvard Medical School. Before Google, Philip helped found and lead several Silicon Valley start-ups in search (Verity), optimization (Impresse), and genome sequencing (Complete Genomics), and was also an Entrepreneur in Residence at Accel Partners.

Location: Singleton Auditorium (46-3002)

May 04

2018-05-04, 2:00pm–3:00pm (America/New_York)
Learning representations of the visual world

Abstract: Recent advances in machine learning have profoundly influenced the study of computer vision. Successes in this field have demonstrated the expressive power of learning representations directly from visual imagery, both in terms of practical utility and unexpected expressive abilities. In this talk I will discuss several contributions that have helped improve our ability to learn representations of images. First, I will describe recent advances in constructing models for extracting semantic information from images by leveraging transfer learning and meta-learning techniques. Such learned models outperform human-invented architectures and are readily scalable across a range of computational budgets. Second, I will highlight recent efforts focused on the converse problem of synthesizing images through the rich visual vocabulary of painting styles and visual textures. This work permits a unique exploration of visual space and offers a window onto the structure of the learned representation of visual imagery. My hope is that these works will highlight common threads in machine and human vision and point towards opportunities for future research.

Speaker Bio: Jon Shlens has been a senior research scientist at Google since 2010. Prior to joining Google Research he was a research fellow at the Howard Hughes Medical Institute and a Miller Fellow at UC Berkeley. His research interests include machine perception, statistical signal processing, machine learning and biological neuroscience.

Location: Singleton Auditorium (46-3002)