I study the computational basis of human learning and inference. Through a combination of mathematical modeling, computer simulation, and behavioral experiments, I try to uncover the logic behind our everyday inductive leaps: constructing perceptual representations, separating "style" and "content" in perception, learning concepts and words, judging similarity or representativeness, inferring causal connections, noticing coincidences, and predicting the future. I approach these topics with a range of empirical methods (primarily behavioral testing of adults, children, and machines) and formal tools (drawn chiefly from Bayesian statistics and probability theory, but also from geometry, graph theory, and linear algebra). My work is driven by two complementary goals: achieving a better understanding of human learning in computational terms, and building computational systems that come closer to the capacities of human learners.

Josh Tenenbaum is the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences, and a member of the Computer Science and Artificial Intelligence Laboratory. He received his Ph.D. from MIT in 1999 and after a brief postdoc with the MIT AI Lab, he joined the Stanford University faculty as Assistant Professor of Psychology and (by courtesy) Computer Science. He returned to MIT as a faculty member in 2002. He currently serves as Associate Editor of the journal Cognitive Science, and he has been active on the program committees of the Neural Information Processing Systems (NIPS) and Cognitive Science (CogSci) conferences.



Neural Physics Engine

We've developed an object-based neural network architecture for learning predictive models of intuitive physics. Because it uses only spatially and temporally local computation, the learned model extrapolates to scenes with variable numbers of objects and variable configurations.
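The key property (generalization across object counts via shared, local pairwise computation) can be sketched as follows. This is an illustrative toy in NumPy, not the project's actual code: the dimensions, weight matrices, and the simple residual update are all our own assumptions here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch of an object-centric predictor: the SAME pairwise
# function is applied to every (focus, neighbor) object pair, and the
# resulting "effects" are summed. Since all parameters are shared across
# pairs, the model applies unchanged to scenes with any number of objects.

D, H = 4, 16                          # state dim (x, y, vx, vy), hidden dim
W1 = rng.normal(0, 0.1, (2 * D, H))   # pairwise encoder weights (assumed)
W2 = rng.normal(0, 0.1, (H, D))       # effect decoder weights (assumed)

def pairwise_effect(focus, neighbor):
    """Effect of one neighbor object on the focus object (shared weights)."""
    h = np.tanh(np.concatenate([focus, neighbor]) @ W1)
    return h @ W2

def predict_next_state(states):
    """Predict each object's next state from its summed local effects."""
    n = len(states)
    nxt = np.empty_like(states)
    for i in range(n):
        effect = sum(pairwise_effect(states[i], states[j])
                     for j in range(n) if j != i)
        nxt[i] = states[i] + effect   # residual update of the object state
    return nxt

# The same (untrained) model handles any object count without modification:
print(predict_next_state(rng.normal(size=(3, D))).shape)  # (3, 4)
print(predict_next_state(rng.normal(size=(7, D))).shape)  # (7, 4)
```

In a trained system the pairwise function would be learned from observed trajectories; the point of the sketch is only that nothing in the architecture depends on the total number of objects in the scene.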