Fast-learning neural models that maximize Shannon information storage
Speaker: David Staelin, EECS, MIT
The human brain surpasses our finest computers while consuming only ~30 watts, switching at millisecond speeds, and relying on unreliable neurons that each signal via isolated voltage spikes sent roughly once per second to perhaps 10,000 other neurons. We currently have no theoretical understanding of how this could work or what plausible performance bounds might be. Compounding the problem, Blum and Rivest have shown that training even three canonical neurons in a classic reward-based fashion is arguably NP-complete. Illustrative open theoretical questions include: 1) the proper definition of Shannon information in a neural context, 2) its practical upper bounds, and 3) identification of an explanatory neural model, even if biologically oversimplified, that permits these questions to be addressed with precision.
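
For concreteness, questions (1) and (2) can be made quantitative under strong simplifying assumptions. The short Python sketch below (not part of the talk) treats a neuron's output in each 1 ms bin as an independent Bernoulli spike/no-spike variable, with the millisecond switching time and roughly one spike per second cited above, and computes the resulting Shannon entropy rate, which upper-bounds the information such a spike train can carry.

    from math import log2

    def binary_entropy(p: float) -> float:
        """Shannon entropy (bits) of a Bernoulli(p) spike/no-spike variable."""
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -(p * log2(p) + (1 - p) * log2(1 - p))

    # Illustrative assumed figures, taken from the abstract's rough numbers:
    bin_width_s = 1e-3              # "millisecond switching speeds"
    rate_hz = 1.0                   # spikes sent "roughly every second"
    p_spike = rate_hz * bin_width_s # probability of a spike in one bin

    bits_per_bin = binary_entropy(p_spike)
    bits_per_second = bits_per_bin / bin_width_s

    print(f"entropy per 1 ms bin: {bits_per_bin:.4f} bits")
    print(f"entropy-rate upper bound: {bits_per_second:.1f} bits/s per neuron")

Under these assumptions the bound comes out to roughly 11 bits/s per neuron; different bin widths, firing rates, or correlation assumptions give very different figures, which is exactly why a precise neural definition of Shannon information and its practical bounds remain open questions.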