2014 CCN Workshop
Decoding Population Responses

Organizers: James Haxby, Hervé Abdi, Hany Farid, Swaroop Guntupalli, Peter Tse

Co-sponsored by the CCN and the Neukom Institute for Computational Science

Dates: August 25 and 26, 2014

Location: The Hanover Inn, Hanover, NH

Space is limited, so registration is required to attend. Please contact Courtney Rogers (courtney.rogers@dartmouth.edu) if you would like to register.

Program

Speakers

Stefano Fusi, Columbia University

High dimensional neural representations in complex tasks

Abstract:

Single-neuron activity in prefrontal cortex (PFC) is often tuned to mixtures of multiple task-related aspects. Such mixed selectivity is highly heterogeneous, seemingly disordered and difficult to interpret. Because of its prominence in PFC, it is natural to ask whether such heterogeneity plays a role in subserving the cognitive functions ascribed to this area. We addressed this question by analyzing the neural activity recorded in PFC during an object sequence memory task. We show that the recorded mixed-selectivity neurons offer a significant computational advantage over specialized cells in terms of the repertoire of input-output functions implementable by readout neurons. The superior performance arises because the recorded mixed-selectivity neurons respond to highly diverse non-linear mixtures of the task-relevant variables. This property of the responses is a signature of the high dimensionality of the neural representations. We report that the recorded neural representations in fact have maximal dimensionality. Crucially, we also observed that this dimensionality is predictive of animal behavior: in error trials, the measured dimensionality of the neural representations collapses. Surprisingly, in these trials it was still possible to decode all task-relevant aspects, indicating that the errors are due not to a failure in coding or remembering sensory stimuli, but rather to a failure in the way the information about the stimuli is mixed in the neuronal responses. Our findings suggest that the focus of attention should move from neurons that exhibit easily interpretable response tuning to the widely observed, but rarely analyzed, mixed-selectivity neurons.
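For readers unfamiliar with the dimensionality measure referred to above, the toy sketch below illustrates the underlying idea: a representation is higher-dimensional when a linear readout can implement more binary groupings ("dichotomies") of the task conditions. All data, shapes and parameters here are hypothetical placeholders, not the recorded PFC data or the authors' exact analysis.

```python
# Toy illustration: count how many balanced dichotomies of the conditions a
# linear readout can realize on condition-averaged population responses.
import itertools
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

n_conditions, n_neurons = 8, 100                 # hypothetical conditions and recorded cells
X = rng.normal(size=(n_conditions, n_neurons))   # stand-in for condition-averaged firing rates

def separable_dichotomies(X):
    """Fraction of balanced binary labelings of conditions that a linear
    classifier can implement perfectly on these population responses."""
    n = X.shape[0]
    count, total = 0, 0
    for positive_set in itertools.combinations(range(n), n // 2):
        y = np.zeros(n, dtype=int)
        y[list(positive_set)] = 1
        clf = LinearSVC(C=1e3, max_iter=10000).fit(X, y)
        count += clf.score(X, y) == 1.0          # perfectly separable?
        total += 1
    return count / total

print(f"Separable dichotomies: {separable_dichotomies(X):.2f}")
```

A low-dimensional (e.g. purely specialized) code supports only a small fraction of these dichotomies; a maximally high-dimensional code supports nearly all of them.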

Work done in collaboration with: M. Rigotti, O. Barak, M. Warden, X-J. Wang, N. Daw, E.K. Miller

Lisa Giocomo, Stanford University

Identifying the Ionic Algorithms for Calculating Spatial Maps

Abstract:

Our lab is interested in understanding how ion channels control neural coding and behavior. Throughout the nervous system, neural inputs and outputs are shaped, tuned and integrated by highly diversified sets of ion channels. Remarkably, how ion channels control neural coding, and how these codes translate into accurate behavior, remain central mysteries of neural processing. To address these questions, we take advantage of the tractability of spatial coding by non-sensory medial entorhinal cortex neurons and my recent discovery of an ion channel that maps directly onto specific features of functionally defined medial entorhinal cells. Neural circuits in the medial entorhinal cortex translate sensory input about the external environment into an internal map of space. Within this circuit, grid cells provide a neural metric for distance traveled and are proposed to underlie path integration and the cognitive process of self-localization. In this talk, I will discuss the contributions of single-cell intrinsic dynamics to neural coding and computation by entorhinal grid cells. The striking periodicity of the grid firing pattern has spurred multiple computational proposals for the emergence of grid cell firing properties, highlighting grid cells as an ideal system for investigating mechanisms of high-order cortical circuit computation.

Swaroop Guntupalli, Dartmouth College

A common linear model of representational spaces in human cortex

Abstract:

Information represented in neural populations can be modeled as a high-dimensional space in which each dimension represents a local measure of neural activity and each point is an activation pattern. While this principle may be common across different domains of information, building models of representational spaces that are common across brains presents a challenge. Common models based on anatomical features find correspondence for coarse-scale topographies but not for fine-scale pattern differences. Here we present a common computational framework that aligns these representational spaces in different regions across brains into a common model representational space. To derive the common model, we used a broad sample of response vectors measured during complex, dynamic stimulation that adequately samples a rich variety of visual, auditory, and social percepts. Our results show that population codes for the same information in different brains can be accounted for in this high-dimensional common model representational space, which is based on shared tuning functions and is valid across many cortical fields. The framework facilitates mapping cortical responses from different individuals to a common template while preserving fine-scale information, and it provides an explicit, computational account of their topographic variability across individual brains. A common representational space populated by response patterns pertaining to different perceptual and cognitive states, aggregated across subjects from different studies, has the potential to serve as a functional brain atlas.
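As a rough illustration of the core alignment step, the sketch below rotates each subject's response matrix into a shared space with an orthogonal (Procrustes) transformation computed against an arbitrary reference subject. The full method described in the abstract iterates and refines such transformations; the data, shapes and reference choice here are placeholders, not the actual procedure or dataset.

```python
# Minimal sketch: align subjects' (time points x voxels) response matrices
# into a shared space via orthogonal Procrustes rotations.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(1)
n_timepoints, n_voxels, n_subjects = 500, 300, 3

# Stand-in response matrices, e.g. responses to the same movie in each subject.
subjects = [rng.normal(size=(n_timepoints, n_voxels)) for _ in range(n_subjects)]

reference = subjects[0]
aligned = []
for data in subjects:
    R, _ = orthogonal_procrustes(data, reference)   # rotation mapping data -> reference
    aligned.append(data @ R)

# In the common space, the same time point should evoke more similar patterns
# across subjects than it does in anatomical (voxel) space.
common_model = np.mean(aligned, axis=0)
print(common_model.shape)   # (500, 300)
```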

Nikolaus Kriegeskorte, MRC Cognition and Brain Sciences Unit

Vision as transformation of representational geometry

Abstract:

Vision can be understood as the transformation of representational geometry from one visual area to the next, and across time, as recurrent dynamics converge within a single area. The geometry of a representation can be usefully characterized by a representational distance matrix computed by comparing the patterns of brain activity elicited by a set of visual stimuli. This approach enables us to compare representations between brain areas, between different latencies after stimulus onset, between different individuals, and between brains and computational models. Results from fMRI suggest that the early visual image representation is transformed into an object representation that emphasizes behaviorally important categorical divisions more strongly than is accounted for by visual-feature computational models that are not explicitly optimized to distinguish categories. Twenty-eight computational model representations, ranging from classical computer-vision features to neuroscientifically motivated models like HMAX, failed to fully explain the strong categorical divisions in IT. A deep convolutional neural network trained by supervised techniques on over a million category-labeled images came closest to explaining the IT representation. The categorical clusters appear to be consistent across individual human brains. However, the continuous representational space is unique to each individual and predicts individual idiosyncrasies in object similarity judgements. The representation flexibly emphasizes task-relevant category divisions through subtle distortions of the representational geometry. MEG results further suggest that the categorical divisions emerge dynamically, with the latency of categoricality peaks suggesting a role for recurrent processing.
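The representational-distance comparison described above can be sketched in a few lines. The example below uses random placeholder data; in a real analysis the brain patterns would be measured responses (e.g. IT voxel or source patterns) and the model features would come from a computational model such as a deep network layer.

```python
# Hedged sketch of the core RSA computation: build representational distance
# matrices (RDMs) from activity patterns and compare brain and model RDMs.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_stimuli = 96

brain_patterns = rng.normal(size=(n_stimuli, 200))    # placeholder brain activity patterns
model_features = rng.normal(size=(n_stimuli, 4096))   # placeholder model-layer activations

# Condensed RDMs: pairwise correlation distance between stimulus patterns.
brain_rdm = pdist(brain_patterns, metric="correlation")
model_rdm = pdist(model_features, metric="correlation")

rho, p = spearmanr(brain_rdm, model_rdm)
print(f"Brain-model RDM correlation (Spearman rho) = {rho:.3f}")
```

Because only the pairwise distances are compared, the same computation works across brain areas, latencies, individuals and models, as the abstract describes.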

Valerio Mante, University of Zürich

A new look at gating: selective integration of sensory signals through network dynamics

Abstract:

A hallmark of decision-making in primates is contextual sensitivity: a given stimulus can lead to different decisions depending on the context in which it is presented. This kind of flexible decision-making depends critically upon gating and integration of context-appropriate information sources within the brain. We have analyzed neural mechanisms underlying gating and integration in animals trained to perform a context-sensitive decision task. Surprisingly, both relevant and irrelevant sensory signals are present within frontal lobe circuits that form decisions, implying that gating occurs very late in the process. Dynamical systems analysis of the neural data, combined with a recurrent network model, suggests a novel mechanism in which gating and integration are combined in a single dynamical process.
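One ingredient of this style of population analysis is to regress each unit's firing rate on the task variables and use the resulting coefficient vectors as axes in population state space, along which relevant and irrelevant signals can be compared. The sketch below illustrates only that regression step, with synthetic data and hypothetical variable names; it is not the authors' full analysis or network model.

```python
# Sketch: define task-variable axes in population state space by linear regression.
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_units = 1000, 80

# Hypothetical per-trial task variables: choice, relevant evidence, irrelevant evidence, context.
task_vars = rng.normal(size=(n_trials, 4))
design = np.column_stack([task_vars, np.ones(n_trials)])   # add intercept column

rates = rng.normal(size=(n_trials, n_units))               # stand-in single-trial firing rates

# Least-squares coefficients: one row per task variable, one column per unit.
betas, *_ = np.linalg.lstsq(design, rates, rcond=None)
choice_axis = betas[0] / np.linalg.norm(betas[0])          # population axis for "choice"

# Projecting population activity onto such axes lets one ask whether irrelevant
# sensory signals are present in the circuit and how they evolve over time.
projection = rates @ choice_axis
print(projection.shape)   # (1000,)
```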

Federico de Martino, University of Maastricht

The computational architecture of the auditory pathway - human fMRI investigations

Abstract:

The complex circuitry of the human "auditory brain" allows us to make sense of the air pressure waves that enter our ears and, as a consequence, to react to them. Brain processing of sounds proceeds from lower levels of the pathway to higher ones through transformations of (topographic) information and is mediated by both feed-forward and feedback processes. Understanding how the brain extracts behaviourally relevant information from sounds requires a "system" approach that maps this computational process with great precision.


In this talk I will present a series of high-field (7 Tesla) MRI experiments aimed at defining the functional and anatomical properties of key sub-cortical and cortical areas of the human auditory pathway. In particular, the studies presented here focus on the human inferior colliculus, the medial geniculate body and the human (primary) cortical areas. Functional characteristics will be described on the basis of basic acoustical properties of sounds such as frequency and temporal and spectral modulations. High-resolution maps will be presented in both sub-cortical and cortical areas, where our most recent data highlight the presence of tonotopic columns. Together with high-resolution investigations of cortical myelination, the results presented here represent an effort to characterize the human auditory pathway at the sub-millimeter level.
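For context, tonotopic mapping of the kind described above is, at its simplest, an assignment of a "best frequency" to each voxel. The toy sketch below shows that step only, with random numbers standing in for estimated voxel responses to a set of frequency bins; actual analyses involve careful stimulus design, response estimation and smoothing.

```python
# Toy best-frequency (tonotopy) map: assign each voxel the frequency bin
# that drives it most strongly.
import numpy as np

rng = np.random.default_rng(6)
frequencies_hz = np.array([200, 400, 800, 1600, 3200, 6400])   # hypothetical frequency bins
n_voxels = 5000

# Stand-in voxel responses to each frequency bin (n_frequencies x n_voxels).
responses = rng.normal(size=(len(frequencies_hz), n_voxels))

best_frequency = frequencies_hz[responses.argmax(axis=0)]      # Hz, one value per voxel
print(best_frequency[:10])
```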

Lars Muckli, University of Glasgow

Layer-specific coding using ultra-high field (7T) fMRI to investigate feedback in the visual cortex

Abstract:

David Mumford (1991) proposed a role for reciprocal topographic cortical pathways in which higher areas send abstract predictions of the world to lower cortical areas. At the lower cortical areas, these top-down predictions are then compared to the incoming sensory stimulation. Several questions arise within this framework: (1) Do descending predictions remain abstract, or do they translate into concrete-level predictions, the 'language' of lower visual areas? (2) How is incoming sensory information compared to top-down predictions? Are input signals subtracted from the prediction (as proposed in the predictive coding framework), or are they multiplied (as proposed by other models, e.g. biased competition or adaptive resonance theory)?

Contributing to the debate over abstract versus concrete-level information, we aim to investigate the information content of feedback projections with functional MRI. We have exploited a strategy in which feedforward information is occluded in parts of visual cortex: along the non-stimulated apparent-motion path, behind a white square used to occlude natural visual scenes, or by blindfolding our subjects (Muckli & Petro 2013). By presenting visual illusions or contextual scene information, or by playing sounds, we were able to capture feedback signals within the occluded areas of the visual cortex. Multivariate pattern analysis (MVPA) of the feedback signals reveals that they are more abstract than the feedforward signals. Furthermore, using high-resolution MRI we found that feedback is sent to the outer cortical layers of V1. We also show that feedback to V1 can originate from auditory information processing (Vetter, Smith & Muckli 2014). We are currently developing strategies to reveal the precision and potential functions of cortical feedback. Our results connect to the emerging paradigm shift that portrays the brain as a 'prediction machine' (Clark 2013).

References:

Mumford (1991) On the computational architecture of the neocortex – the role of the thalamocortical loop. Biol Cybernetics

Muckli & Petro (2013) Network interactions: non-geniculate input to V1. Curr Opin Neurobiol

Vetter, Smith & Muckli (2014) Decoding Sound and Imagery Content in Early Visual Cortex. Current Biology

Clark (2013) Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science. Behav Brain Sci
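As a point of orientation on the MVPA approach described in this abstract, the sketch below shows a cross-validated linear classifier trained to discriminate stimulus identity from voxel patterns in a non-stimulated (occluded) region. The trial counts, class labels and data are hypothetical placeholders, not the actual experimental design.

```python
# Sketch: cross-validated decoding of scene identity from occluded-region voxel patterns.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n_trials_per_scene, n_scenes, n_voxels = 24, 3, 150

X = rng.normal(size=(n_trials_per_scene * n_scenes, n_voxels))   # occluded-region voxel patterns
y = np.repeat(np.arange(n_scenes), n_trials_per_scene)           # scene identity per trial

scores = cross_val_score(LinearSVC(max_iter=10000), X, y, cv=6)
print(f"Decoding accuracy: {scores.mean():.2f} (chance = {1 / n_scenes:.2f})")
```

Above-chance decoding in a region that receives no feedforward stimulation is taken as evidence that the decoded information arrives via cortical feedback.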

Affiliation: Prof. Dr. Lars Muckli, Professor of Visual and Cognitive Neurosciences, Centre for Cognitive Neuroimaging (CCNi), Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB

Bill Newsome, Stanford University

What's the deal with the Obama BRAIN Initiative?

Abstract:

In April of 2013, President Obama announced a grand challenge - the BRAIN Initiative - for US scientists to unlock the mysteries of the human brain. Dr. William Newsome, co-chair of the BRAIN planning committee appointed by NIH Director Francis Collins, will describe the project: what it is, why it is important, and how it can be achieved. The committee's report is now available on-line at the NIH BRAIN website: http://nih.gov/science/brain/2025.index.htm

Aude Oliva, MIT

Visualizing Human Mental Representations in Time and Space

Abstract:

When we open our eyes, visual information flows into various parts of our brain, with each region interpreting different aspects of what we are seeing. Using representational similarity analysis (RSA; Kriegeskorte et al., 2008), we combine ms-resolution magnetoencephalography (MEG), mm-resolution functional Magnetic Resonance Imaging (fMRI) and convolutional neural network (CNN) representations to identify stages of visual recognition processes happening at the millisecond and millimeter scales. This approach opens the door to large-scale views of the dynamics and algorithms of recognition at the scale of processing steps across the whole human brain. Work in collaboration with R. Cichy, D. Pantazis, A. Khosla, and A. Torralba (MIT).
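The MEG-fMRI fusion idea can be sketched compactly: a time-resolved MEG representational distance matrix (RDM) is compared with a region's fMRI RDM at each time point, giving a time course of when that region's representational geometry emerges. The example below uses synthetic data and arbitrary shapes; it is an illustration of the RSA-based fusion logic, not the actual pipeline.

```python
# Sketch of MEG-fMRI fusion via RSA: correlate a time-resolved MEG RDM with a
# region's fMRI RDM at every time point.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_stimuli, n_sensors, n_timepoints = 92, 306, 200

meg = rng.normal(size=(n_timepoints, n_stimuli, n_sensors))   # per-time-point sensor patterns
fmri_patterns = rng.normal(size=(n_stimuli, 500))             # placeholder regional voxel patterns

fmri_rdm = pdist(fmri_patterns, metric="correlation")

fusion = np.array([
    spearmanr(pdist(meg[t], metric="correlation"), fmri_rdm)[0]
    for t in range(n_timepoints)
])
print("Peak MEG-fMRI correspondence at time index", int(fusion.argmax()))
```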