We are developing an algorithmic theory for brain networks, based on simple synchronized stochastic graph-based neural network models. Inspired by tasks that actual brains solve, we define abstract problems to be solved by these networks. We design particular algorithms (networks) that solve the problems and analyze them in terms of static costs, such as the number of neurons, and dynamic costs, such as the time to converge to a solution. We also prove lower bounds (e.g., that a certain network size is required to achieve a certain convergence time) and tradeoffs between different cost measures. We study how noise and uncertainty affect the costs of solving problems, how networks that solve simple problems can be combined into larger networks that solve more complex problems, and how to model brain networks at different levels of abstraction. With all of this, we hope to contribute to a greatly improved understanding of computation in the brain.
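To make the setting concrete, below is a minimal sketch of a synchronized stochastic graph-based network of the general kind described above, run on a toy winner-take-all task. Everything in it is an illustrative assumption rather than a model taken from our papers: two self-exciting output neurons compete through a shared inhibitor, neurons fire stochastically via a sigmoid of their potential, and "convergence time" is measured as the number of synchronous rounds until exactly one output neuron fires (a real analysis would also require the winner to remain stable).

```python
import math
import random

def fire_probability(potential, temperature=1.0):
    """Stochastic firing rule: sigmoid of the neuron's potential.

    The temperature parameter is an illustrative knob for the amount of
    noise in the firing decision; it is not a parameter from the source.
    """
    return 1.0 / (1.0 + math.exp(-potential / temperature))

def simulate_wta(w_self=3.0, w_inhib=-2.0, temperature=1.0,
                 max_rounds=1000, seed=0):
    """Run synchronous rounds until a single output neuron fires.

    Returns (rounds_to_converge, final_firing_pattern); rounds is None
    if the network fails to converge within max_rounds.
    """
    rng = random.Random(seed)
    fired = [1, 1]   # both output neurons start active (both inputs present)
    inhib_fired = 0  # inhibitor is initially silent
    for round_num in range(1, max_rounds + 1):
        # Inhibitor fires (deterministically, for simplicity) whenever
        # at least one output neuron fired in the previous round.
        new_inhib = 1 if any(fired) else 0
        # Each output neuron sums self-excitation and inhibition, then
        # fires stochastically according to the sigmoid rule.
        new_fired = []
        for f in fired:
            potential = w_self * f + w_inhib * inhib_fired
            new_fired.append(1 if rng.random() < fire_probability(potential, temperature) else 0)
        fired, inhib_fired = new_fired, new_inhib
        # Toy convergence criterion: exactly one output neuron fires.
        if sum(fired) == 1:
            return round_num, fired
    return None, fired

rounds, winners = simulate_wta()
print(f"converged in {rounds} rounds; firing pattern = {winners}")
```

Varying the temperature parameter in this sketch gives a hands-on feel for the kinds of questions described above: more noise in the firing rule changes the expected number of rounds to converge, illustrating the tradeoff between uncertainty and dynamic cost.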