2016 CCN Workshop: Predictive Coding

Organizers: James Haxby, Hervé Abdi, Lars Muckli, Sam Nastase, and Adina Roskies

Sponsored by the Center for Cognitive Neuroscience

Dates: August 15 and August 16

Location: The Hanover Inn, Hanover, NH




Luc Arnal, University of Geneva

How to maximise surprise in others' brains


Current neurophysiological accounts of predictive coding suggest that distinct oscillatory channels might subserve the asymmetric propagation of predictions and errors in the brain. Although recent experimental observations have supported this hypothesis, it remains difficult to establish a causal relationship between the neural function (generating and testing predictions) and the hypothesised underlying oscillatory phenomena. I will propose an alternative approach to investigating sensory surprise by studying alarm communication signals (screams), whose aim is to impede the brain's natural goal of minimising sensory surprise. The proposed heuristic rests on the dual assumption that (i) screaming is, evolutionarily, the most important means of communication for promoting survival, and (ii) these communication signals evolved to fit neural processing constraints and maximise sensory surprise in the receiver's brain, thereby ensuring that the receiver unconditionally perceives the vocal warning. I will argue that the frequency band (30-150 Hz) used in natural and artificial alarm signals maximises neural responses in the so-called gamma band, which we previously hypothesised to be the carrier of sensory surprise in the brain. Although this link remains indirect and speculative, these results suggest that studying these sounds and their effects on the brain may provide useful insights into the cerebral architecture and its neurophysiological constraints.

André Bastos, MIT

Laminar-specific coding of working memory in frontal cortex


All cortical areas have some degree of laminar anatomical organization, which is characterized by different local and long-range inputs and outputs, expression levels of different molecular markers, and cell types. These distinctions have inspired many theories, such as predictive coding, about the putative functions of different layers, but little physiological data exists to support these claims. In particular, the role of the different cortical layers in cognition remains relatively unexplored. To address this, we recorded spike and LFP data from the frontal cortex (areas PMd and SEF) of a macaque monkey with multiple 16- and 24-channel linearly spaced multicontact probes as the animal performed a visual working memory (WM) task. As previously observed in visual cortex, LFP power in gamma frequencies (40-100 Hz) was strongest in superficial layers (L1-3), and alpha frequencies (8-12 Hz) predominated in deep layers (L5-6), suggesting some degree of functional compartmentalization by layer. We next examined the role of different frequency bands and layers in encoding WM information during the delay period of the task. We found that brief, punctate bursts of gamma-band activity in superficial layers reliably encoded the spatial position held in WM, but deep layers and other frequencies carried little or no information about WM contents. Deep and superficial layers synchronized their LFPs at sub-gamma frequencies, with spectral peaks in the alpha and beta (~20-25 Hz) bands. Granger causality analysis revealed that this alpha-beta interaction was primarily unidirectional, with deep layers driving superficial ones. Finally, cross-frequency coupling analysis showed that the phase of delay-period alpha oscillations in deep layers modulated the gamma amplitude of superficial layers.
These analyses suggest a modulatory role for deep layers in WM maintenance, and an active role for superficial layers, which encode WM contents in information-rich bursts of high-frequency gamma activity.

Jim DiCarlo, MIT

Neural mechanisms underlying visual object perception: the convergence of machine learning and neuroscience

Karl Friston, University College London

Predictive coding, active inference and belief propagation


I will consider prediction and choice based upon the minimisation of expected free energy. Crucially, (negative) free energy can always be decomposed into pragmatic (extrinsic) and epistemic (intrinsic) value. Minimising expected free energy is therefore equivalent to maximising extrinsic value while also maximising information gain or intrinsic value, i.e., reducing uncertainty about the causes of sensory samples. This decomposition resolves the exploration-exploitation dilemma: epistemic value is maximised until there is no further resolution of uncertainty, after which exploitation is assured through maximisation of extrinsic value. This is formally consistent with the principle of maximum mutual information, generalising formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (KL) control, using Hamilton's principle of least action. I will briefly review the normative theory, illustrating the minimisation of expected free energy with simulations, and then turn to neuronal process theories. In brief, the implicit (neuronally plausible) belief propagation offers a form of predictive coding when hidden causes and outcomes are treated as discrete states.
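As a sketch of the decomposition described above, in one common notation (symbols and details vary across formulations): the expected free energy G of a policy π at time τ splits into a pragmatic term (expected log-probability of preferred outcomes C) and an epistemic term (expected information gain about hidden states s), so minimising G maximises both extrinsic and epistemic value.

```latex
% Expected free energy of policy \pi at time \tau (one common notation):
G(\pi,\tau) \;\approx\;
  \underbrace{-\,\mathbb{E}_{Q}\!\left[\ln P(o_\tau \mid C)\right]}
    _{\text{expected cost (negative extrinsic value)}}
  \;-\;
  \underbrace{\mathbb{E}_{Q}\!\left[D_{\mathrm{KL}}\!\left[Q(s_\tau \mid o_\tau,\pi)\,\big\|\,Q(s_\tau \mid \pi)\right]\right]}
    _{\text{epistemic value (expected information gain)}}
```

The second term vanishes once observations no longer change beliefs about hidden states, at which point only the pragmatic term drives behaviour — the exploration-to-exploitation transition described in the abstract.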

Jakob Hohwy, Monash University

Better believe the free energy principle


Many believe that an important part of brain function is to form predictions of sensory input. Fewer believe the free energy principle, which is an extreme version of the idea that the brain is predictive. So it ought to be reasonable to believe that prediction is an important part of brain function while not believing the free energy principle. Using simple considerations from philosophy of science, I argue that if one begins with the assumption that prediction is an important part of brain function, then it is reasonable to also believe the free energy principle.

Elias Issa, MIT

Evidence that the ventral visual stream codes the errors used in hierarchical inference and learning


Hierarchical feedforward processing makes object identity explicit at the highest stages of the ventral visual stream. We leveraged this computational goal to study the fine-scale temporal dynamics of neural populations in posterior and anterior inferior temporal cortex (pIT and aIT) during face detection. As expected, we found that a neural spiking preference for natural over distorted face images was rapidly produced, first in pIT and then in aIT. Strikingly, in the next 30 milliseconds of processing, this pattern of selectivity in pIT completely reversed, while selectivity in aIT remained unchanged. Although these dynamics were difficult to explain using a pure feedforward model or extensions implementing adaptation, lateral inhibition, or normalization, a model class computing errors through feedback closely matched the observed neural data and parsimoniously explained a range of seemingly disparate IT neural response phenomena. This new perspective on neural dynamics in IT augments the standard model of online vision by suggesting that neural signals of states (e.g. likelihood of a face being present) are intermixed with the error signals produced during inference and learning in deep hierarchical networks.
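The "model class computing errors through feedback" can be illustrated with a minimal predictive-coding loop in the style of Rao and Ballard: a higher-level state generates a top-down prediction, the bottom-up prediction error is fed back, and the state is refined by gradient descent on the squared error. The weight, learning rate, and input below are hypothetical toy values, not the authors' model.

```python
# Minimal predictive-coding loop: a state estimate r is refined by
# feedback of prediction errors until the top-down prediction w*r
# matches the input x.  All parameter values are illustrative only.

def predictive_coding(x, w=0.8, eta=0.1, steps=200):
    r = 0.0                       # higher-level state estimate
    for _ in range(steps):
        prediction = w * r        # top-down prediction of the input
        error = x - prediction    # bottom-up prediction error
        r += eta * w * error      # gradient step on squared error
    return r, error

r, err = predictive_coding(x=1.0)  # converges toward r = x / w = 1.25
```

In such models, neural responses mix state signals (r) with error signals (x - w*r), which is one way to read the selectivity reversal in pIT described above: early responses are dominated by the feedforward drive, later ones by the residual error after feedback.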

Nikolaus Kriegeskorte, University of Cambridge

Testing complex brain-computational models to understand how the brain works


Recent advances in neural network modelling have enabled major strides in computer vision and other artificial intelligence applications. This brain-inspired technology provides the basis for tomorrow's computational neuroscience. Deep convolutional neural nets trained for visual object recognition have internal representational spaces remarkably similar to those of the human and monkey ventral visual pathway. High-resolution functional imaging is providing increasingly rich measurements of brain activity in animals and humans, but a major challenge is to leverage such data to gain insight into the brain's computational mechanisms. We are only beginning to develop statistical inference for adjudicating between alternative brain-computational models (BCMs). I will share first steps with a new method called probabilistic representational similarity analysis (pRSA), which accounts for the distorted reflection of representational spaces in activity measurements that subsample the representation (e.g. by local averaging in fMRI and by sparse sampling in array recordings). We are entering an exciting new era, in which we will be able to build neurobiologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence.

Kriegeskorte N, Diedrichsen J (in press) Inferring brain-computational mechanisms with models of activity measurements. Philosophical Transactions of the Royal Society B.

Kriegeskorte N (2015) Deep neural networks: A new framework for modeling biological vision and brain information processing. Annual Review of Vision Science 1:417-446.

Khaligh-Razavi SM, Kriegeskorte N (2014) Deep supervised, but not unsupervised, models may explain IT cortical representation. PLoS Computational Biology 10(11): e1003915.
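The core of representational similarity analysis, on which pRSA builds, can be sketched briefly: compute a representational dissimilarity matrix (RDM) from activity patterns, then compare it to a model RDM. The toy patterns below are hypothetical; real analyses use measured voxel or neuron responses, and pRSA additionally models the measurement distortion that this sketch omits.

```python
# Sketch of classical RSA: build an RDM from activity patterns and
# correlate it with a model RDM.  Patterns here are hypothetical.
import math

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def rdm(patterns):
    """Upper triangle of the RDM, using correlation distance (1 - r)."""
    n = len(patterns)
    return [1 - pearson(patterns[i], patterns[j])
            for i in range(n) for j in range(i + 1, n)]

# Identical patterns give distance 0; anticorrelated patterns give 2.
brain = rdm([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0], [3.0, 2.0, 1.0]])
model = rdm([[0.0, 1.0, 2.0], [0.0, 1.0, 2.0], [2.0, 1.0, 0.0]])
fit = pearson(brain, model)  # second-order correlation between RDMs
```

Because RDMs abstract away from the individual measurement channels, they allow the same comparison across fMRI, array recordings, and model layers — the subsampling distortions that pRSA corrects for enter exactly at this abstraction step.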

Lucia Melloni, NYU School of Medicine

Knowns and unknowns of predictive computations in the human brain


Predictive Coding, a novel information coding framework that rests upon predictive computations, has become increasingly popular in recent years, partly because it aims to explain brain function as a whole on the basis of a small number of coding principles. Despite its appeal, direct experimental evidence for Predictive Coding computations in the human brain is scarce, and even more so for the putative “canonical microcircuit” implementing Predictive Coding. Moreover, while predictions play a crucial role in Predictive Coding, it is puzzling how and via which mechanism they affect perception, given that some priors stabilize perception while others have the opposite, repulsive effect.

We have used a unique combination of functional magnetic resonance imaging, invasive electrocorticographic and intralaminar recordings, as well as lesion studies and modelling to understand how predictions are implemented and tested in the human brain. We have found two distinct brain networks that stabilize or reverse perception, respectively. The former localizes to a network of higher-order visual and fronto-parietal areas, while the latter is confined to early sensory areas. This areal and hierarchical segregation may explain how the brain maintains the balance between exploiting redundancies and staying sensitive to new information. Electrocorticographic and intralaminar recordings in epilepsy patients have revealed that detecting deviations from predicted patterns arises from two distinct but interacting processes: i) differential adaptation of sensory responses, and ii) an explicit deviance detection system. These two processes cooperate, and in functional terms, fit well with Predictive Coding. However, contrary to most existing models that assume a hierarchical organization, our data reveal an anatomical interdigitation of the two systems. At the laminar scale, deviance signals are largest in superficial cortical layers. Together, our findings provide important evidence for the mechanistic implementation of Predictive Coding, but they also call for a radical reassessment of current models to accommodate our novel results. 

Lars Muckli, University of Glasgow

Visual predictions in different layers of visual cortex


Our brain imaging research has contributed to what is now seen as a paradigm shift in cognitive neuroscience. Many agree that the brain can be conceptualized as a prediction machine; internal models predict future states, which are then compared to the incoming stream of sensory information. This new conceptual framework opens a number of essential empirical questions: How are predictions communicated? How precise are top-down projected predictions? How are prediction errors signalled upstream, and how are they used to update internal models? We have pioneered several empirical approaches, the most recent utilizing ultra-high-field fMRI, to investigate layer-specific information content in cortical feedback (Muckli et al., 2015, Curr Biol). We use paradigms in which direct feedforward inputs to retinotopic visual areas are occluded (Muckli & Petro 2013 Curr Opin Neurobiol), including visual illusions (apparent motion, Alink et al. 2010, JNS; Petro & Muckli 2016, PNAS comment), auditory contextual scene stimulation in blindfolded subjects (Vetter et al. 2014 Curr Biol), and variations on our occlusion paradigm (Smith & Muckli 2010, PNAS) to uncover contextual feedback information to superficial layers of primary visual cortex. These paradigms allow us to measure the spatial precision of feedback, the temporal unfolding of feedback during saccadic eye movements (Edwards et al., under review, Curr Biol), and other abstract categorical and task-dependent feedback information.

We are extending our framework to reconstruct and visualize cortical feedback, an approach that can be conceptualized as a day-dream reader: i.e. visualizing internal models during mental imagery. We are planning extensions into long-term temporal predictions and mental time travel. In collaboration with rodent research labs, we are investigating the dendritic contribution to processing in superficial layers. Research on predictive processing also informs brain-scale simulations (HBP) and conceptual and philosophical collaborations (Andy Clark, Jakob Hohwy).

Nick Turk-Browne, Princeton University

Learning and prediction in the hippocampus


Last Updated: 8/10/16