
Gatsby Computational Neuroscience Unit


Research

Our research focuses on the mathematical principles of learning, perception and action in brains and machines.

We aim to uncover the mathematical basis of intelligent behaviour in natural and artificial systems. In neuroscience, we work with experimentalists at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour and beyond to reveal computational principles from neural activity and pursue theories that connect principles of learning and computation to neural circuits. Our work in machine learning is similarly directed to the understanding of fundamental computational principles, elaborating the mathematics that underlie data-based discovery of structure, predictability and causality.

Here we provide a brief overview of some common research themes in the unit; please see each faculty member's individual page for more information.

Theoretical Neuroscience


Neural representations

How activity in neural populations reflects properties of stimuli, actions and internal cognitive variables is one of the most fundamental questions in neuroscience. We tackle this question in two ways: we work with empirical data (particularly from large neuronal populations) to understand, process and formalise the information available within them, and we address theoretical issues associated with sophisticated versions of population codes.
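As a toy illustration of how a population of neurons can encode a stimulus variable, here is a sketch of the classical population-vector decoder applied to noise-free, cosine-tuned neurons. All tuning parameters here are our own, chosen purely for illustration:

```python
import math

def firing_rate(preferred, stimulus, baseline=10.0, gain=8.0):
    """Mean rate of a neuron with cosine tuning around its preferred direction."""
    return baseline + gain * math.cos(stimulus - preferred)

def population_vector(preferred_dirs, rates):
    """Estimate the stimulus as the rate-weighted vector sum of preferred directions."""
    x = sum(r * math.cos(p) for p, r in zip(preferred_dirs, rates))
    y = sum(r * math.sin(p) for p, r in zip(preferred_dirs, rates))
    return math.atan2(y, x)

preferred = [2 * math.pi * i / 16 for i in range(16)]   # 16 evenly spaced neurons
stimulus = 0.9                                          # true direction (radians)
rates = [firing_rate(p, stimulus) for p in preferred]
estimate = population_vector(preferred, rates)
print(round(estimate, 3))   # recovers 0.9 exactly for noise-free rates
```

With evenly spaced preferred directions and no noise the decoder is exact; adding Poisson spiking noise to the rates is the natural next step for studying the robustness questions discussed above.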

A common thread in much of our work is the robustness of perceptual and motor systems in the presence of unexpected noise, non-stationary environments and the concomitant uncertainty: a robustness that sets them apart from even the best artificial systems. We also study how neural representations may account for uncertainty in internal variables to achieve such robustness.

Computation and dynamics

Biological neural networks exhibit rich dynamical behaviours whose importance for computation is under constant debate. We study computations achieved by recurrent dynamical systems at varying degrees of biological realism, looking for general principles of computation-through-dynamics. Examples include data-driven models of motor cortex, dynamics in coupled excitatory-inhibitory systems, and models of olfactory processing. We also study the dynamical properties of active membrane processes associated with spiking.
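The idea of computation through dynamics can be made concrete with a minimal example. The sketch below (weights and inputs are ours, chosen only for illustration) integrates a tiny linear recurrent network, tau * dx/dt = -x + W x + h, with Euler steps; for a stable weight matrix the activity relaxes to the fixed point x* = (I - W)^{-1} h:

```python
W = [[0.0, 0.4],
     [0.4, 0.0]]        # symmetric recurrent weights (spectral radius 0.4 < 1)
h = [1.0, 0.5]          # constant external input
dt, tau = 0.01, 1.0

x = [0.0, 0.0]
for _ in range(5000):
    dx = [(-x[i] + sum(W[i][j] * x[j] for j in range(2)) + h[i]) / tau
          for i in range(2)]
    x = [x[i] + dt * dx[i] for i in range(2)]

# Fixed point of this 2x2 system, solved by hand: (I - W) x* = h
det = 1 - 0.4 * 0.4
x_star = [(h[0] + 0.4 * h[1]) / det, (h[1] + 0.4 * h[0]) / det]
print([round(v, 3) for v in x])       # converges to x_star
```

Nonlinear rate functions, separate excitatory and inhibitory populations, or time-varying inputs turn this skeleton into the kinds of models discussed above.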

Learning

Neural systems are remarkable in their ability to adapt to and learn from experience. We seek to understand the principles that guide this learning in many settings: based on sparse reinforcement or on rich teaching signals, or from the structure of the environment alone. Behavioural studies help identify the capabilities and limitations of biological learning. Theoretical work, cross-referenced to experimental data, addresses difficult problems in learning such as credit assignment (which synapse should adapt to improve prediction) and structure identification (how is the environment best parsed into its constituent causal components) by looking for biologically plausible algorithmic solutions. 

At the circuit level, learning has measurable physiological correlates in terms of changes at individual synapses and modifications of the stimulus-response properties of individual neurons. We study the theoretical significance of these changes at several levels, including the interpretation of spike-timing update rules for synaptic strength, the interaction of reinforcement and neuromodulation with receptive field plasticity, and the consequences of plastic changes on perceptual learning.
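As a concrete instance of a spike-timing update rule, here is a minimal pair-based STDP sketch: pre-before-post spike pairs potentiate a synapse, post-before-pre pairs depress it, with exponential time windows. The time constants and amplitudes are illustrative, not drawn from any particular study:

```python
import math

def stdp_update(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for a single spike pair; dt_ms = t_post - t_pre."""
    if dt_ms > 0:    # pre leads post: potentiation
        return a_plus * math.exp(-dt_ms / tau_ms)
    else:            # post leads pre (or coincident): depression
        return -a_minus * math.exp(dt_ms / tau_ms)

w = 0.5
pairs = [10.0, 5.0, -15.0]          # t_post - t_pre for three spike pairs
for dt in pairs:
    w += stdp_update(dt)
print(round(w, 4))                  # net potentiation from these three pairs
```

Even this caricature raises the theoretical questions studied here, such as what objective such a local rule optimises and how it interacts with neuromodulatory signals.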

Neural systems

Although the principles of neural computation may apply broadly, theories can only be evaluated experimentally by considering specific neural systems. We develop theory and data analysis methods to investigate the organisational and computational principles that lie behind physiological, anatomical and psychophysical observations in many different subsystems of the brain. These include sensory and perceptual systems (vision, audition and olfaction), control systems underlying motor action, systems that effect choices and learning from reinforcement signals, and systems that underlie more elaborate cognition such as context-driven decision making, mapping and contextual awareness, attention, and planning. See also Neural data analysis below.
 

Machine Learning


Graphical models

Realistic models often require representing the dependencies between many random variables. Graphical models provide an elegant formalism for representing these dependencies and for implementing efficient probabilistic inference and decision making. We study novel algorithms for approximate inference and methods for learning both parameters and the structure of graphical models from data.
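A minimal example of inference exploiting a graphical model's factorisation: for a binary chain A -> B -> C the joint distribution factorises as P(A) P(B|A) P(C|B), and a query such as P(A = 1 | C = 1) can be answered by summing out B. The conditional probability tables below are made up for illustration:

```python
p_a = {1: 0.3, 0: 0.7}
p_b_given_a = {(1, 1): 0.8, (0, 1): 0.2, (1, 0): 0.1, (0, 0): 0.9}  # key: (b, a)
p_c_given_b = {(1, 1): 0.9, (0, 1): 0.1, (1, 0): 0.2, (0, 0): 0.8}  # key: (c, b)

def joint(a, b, c):
    """Joint probability from the chain factorisation P(A) P(B|A) P(C|B)."""
    return p_a[a] * p_b_given_a[(b, a)] * p_c_given_b[(c, b)]

# Sum out B for each value of A with the evidence C = 1, then normalise.
unnorm = {a: sum(joint(a, b, 1) for b in (0, 1)) for a in (0, 1)}
posterior = unnorm[1] / (unnorm[0] + unnorm[1])
print(round(posterior, 4))
```

Enumeration like this is exponential in the number of variables; message-passing algorithms such as belief propagation exploit the same factorisation to make inference efficient, and approximate variants of them are one of the research topics described here.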

Kernel methods

Difficult real-world pattern recognition and function learning problems require that the learning system be highly flexible. Kernel methods such as Gaussian processes and support vector machines are one way of defining highly flexible non-parametric models based on similarities between data points. Gaussian processes, which correspond to neural networks with infinitely many hidden neurons, have proved powerful at avoiding some of the common pitfalls of learning such as 'overfitting'. We focus on how to make kernel methods even more flexible and efficient, how to learn the kernel from data, and how to use them in a variety of applications. 
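To make this concrete, here is a bare-bones Gaussian-process regression sketch; the kernel, noise level and data are all illustrative. With an RBF kernel k(x, x') = exp(-(x - x')^2 / 2) and noisy observations y, the posterior mean at a test point is k_*^T (K + sigma^2 I)^{-1} y, computed here for two training points with the 2x2 system solved by hand:

```python
import math

def k(x1, x2):
    """RBF kernel with unit length-scale: similarity between two inputs."""
    return math.exp(-0.5 * (x1 - x2) ** 2)

xs, ys = [0.0, 1.0], [0.5, 1.5]        # training inputs and targets
noise = 0.1                            # observation noise variance sigma^2

# Entries of K + sigma^2 I for the two training points
a, b = k(xs[0], xs[0]) + noise, k(xs[0], xs[1])
c, d = k(xs[1], xs[0]), k(xs[1], xs[1]) + noise
det = a * d - b * c
alpha = [(d * ys[0] - b * ys[1]) / det,     # alpha = (K + sigma^2 I)^{-1} y
         (a * ys[1] - c * ys[0]) / det]

def posterior_mean(x_test):
    """GP posterior mean: weighted sum of kernel similarities to training points."""
    return sum(k(x_test, xi) * ai for xi, ai in zip(xs, alpha))

print(round(posterior_mean(0.5), 3))   # smooth interpolation between the points
```

The prediction depends on the data only through kernel evaluations, which is what makes the choice of kernel, and the possibility of learning it from data, so central.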

Bayesian statistics

Bayesian statistics is a framework for doing inference by combining prior knowledge and data, and as such has been influential in the understanding of intelligent learning systems. We work on many areas of Bayesian statistics, including using variational methods to do inference efficiently in complex domains, model selection and non-parametric modelling, novel Markov chain methods, semi-supervised learning and modelling temporal sequences.
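The simplest case of combining prior knowledge and data is conjugate updating, sketched below for a Beta prior over a coin's bias (all numbers are illustrative): observing the flips turns a Beta(a, b) prior into a Beta(a + heads, b + tails) posterior in closed form.

```python
a, b = 2.0, 2.0                   # prior pseudo-counts: mildly favours fairness
flips = [1, 1, 0, 1, 1, 1, 0, 1]  # observed data: 6 heads, 2 tails

heads = sum(flips)
tails = len(flips) - heads
a_post, b_post = a + heads, b + tails

posterior_mean = a_post / (a_post + b_post)
print(round(posterior_mean, 3))   # shrunk toward 0.5 relative to the raw 6/8
```

Closed-form updates like this are the exception; for the complex models mentioned above, the posterior is intractable, which is what motivates variational methods and Markov chain Monte Carlo.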

Reinforcement learning

Reinforcement learning studies how systems can actively learn about the transition and reward structure of their environments and come to choose appropriate actions. Apart from the links with conditioning and neuromodulation, we have studied various aspects of the trade-off between exploration and exploitation, the effects of approximation, and the discovery of hierarchical structure.
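The exploration-exploitation trade-off is visible in even the simplest reinforcement learning setting, the multi-armed bandit. In this sketch (arm probabilities and epsilon are illustrative) the agent explores a random arm with probability epsilon and otherwise exploits its current best estimate:

```python
import random

random.seed(0)
true_means = [0.2, 0.5, 0.8]          # reward probabilities, unknown to the agent
counts = [0, 0, 0]
estimates = [0.0, 0.0, 0.0]
epsilon = 0.1

for t in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(3)                        # explore
    else:
        arm = max(range(3), key=lambda i: estimates[i])  # exploit
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print(counts)   # pulls concentrate on the best arm over time
```

With epsilon fixed the agent never stops exploring; schedules that decay exploration, or Bayesian approaches that explore in proportion to uncertainty, are among the refinements studied in this area.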

Network and relational data

Someone hands you a dataset that represents a small part of a large network, say a social network or a synaptic network. What can you learn about the network as a whole from this dataset? In order to be informative, how should sample data be selected from a network in the first place? Such questions are fundamental, but much harder than one might expect. And where we have answers, they are often far from obvious. They lead to a rich nexus at the intersection of machine learning, statistics and probability. Ingredients range from Bayesian modelling and empirical risk minimisation, through old favourites like sufficient statistics and convex analysis, to symmetry properties and dynamical systems.
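A toy version of these questions: the sketch below (the graph model and sizes are ours) generates an Erdos-Renyi random graph, observes only the subgraph induced by a random sample of nodes, and estimates the global edge density from the sampled pairs. For this particular model and sampling scheme the estimate is unbiased; for many realistic sampling schemes, such as following edges from seed nodes, it is not, which is part of what makes the area subtle.

```python
import random

random.seed(1)
n, p = 200, 0.1
# Erdos-Renyi graph: each of the n*(n-1)/2 pairs is an edge with probability p
edges = {(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < p}

sample = random.sample(range(n), 40)               # observe 40 of the 200 nodes
pairs = [(min(u, v), max(u, v)) for k, u in enumerate(sample)
         for v in sample[k + 1:]]
sampled_edges = sum(1 for pair in pairs if pair in edges)
estimate = sampled_edges / len(pairs)

print(round(estimate, 3))   # close to the true density p = 0.1
```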

Neural data analysis

The brain is perhaps the most complex subject of empirical investigation in scientific history. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterise this system. However, understanding and interpreting these data will also require substantial strides in inferential and statistical techniques. In collaboration with experimentalists, we have adapted machine learning techniques to characterise data from multiple extracellular electrodes and identified single cells, as well as from local field potential and magnetoencephalographic recordings. These studies have the potential to introduce powerful new, theoretically motivated ways of looking at neural data.
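One of the simplest such analyses is the peristimulus time histogram (PSTH): spike times from repeated trials are binned and averaged to estimate a time-varying firing rate. A sketch with made-up spike times:

```python
trials = [                                # spike times (ms) on three trials
    [12.0, 55.0, 61.0, 140.0],
    [10.0, 58.0, 65.0, 150.0],
    [14.0, 52.0, 130.0, 160.0],
]
bin_ms, t_max = 50.0, 200.0
n_bins = int(t_max / bin_ms)

counts = [0] * n_bins
for trial in trials:
    for t in trial:
        counts[int(t // bin_ms)] += 1     # assign each spike to its time bin

# Rate in spikes/s: counts per trial per bin, converted from ms to seconds.
rate = [c / len(trials) / (bin_ms / 1000.0) for c in counts]
print(rate)
```

Modern methods go far beyond this, for example by modelling trial-to-trial variability with latent dynamical variables rather than averaging it away, but the PSTH illustrates the basic move from raw spike times to an interpretable rate estimate.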