Abstract: The space-time dynamics of interactions in neural systems are often described using the terminology of information processing, or computation, in particular with reference to information being stored, transferred and modified in these systems. In this talk, we describe an information-theoretic framework -- information dynamics -- that we have used to quantify each of these operations on information, and their dynamics in space and time. Not only does this framework quantitatively align with natural qualitative descriptions of neural information processing, it also provides multiple complementary perspectives on how, where and why a system exhibits complexity. We will review the application of this framework in computational neuroscience, describing what it can reveal, and indeed has revealed, in this domain. First, we discuss examples of characterising behavioural regimes and responses in terms of information processing, including under different neural conditions and around critical states. Next, we show how the space-time dynamics of information storage, transfer and modification directly reveal how distributed computation is implemented in a system, highlighting information-processing hot-spots and emergent computational structures, and providing evidence for conjectures on neural information processing such as predictive coding theory. Finally, via applications to several models of dynamical networks and to human brain imaging data, we demonstrate how information dynamics relates the structure of complex networks to their function, and how such analyses can be inverted to infer structure from dynamics.