CBB talk series

The Cognitive Brown Bag (CBB) is a graduate-student-organized talk series, attended primarily by faculty, graduate students, and staff from the cognitive labs at Dartmouth College. The talks are typically held on Thursdays from 12:15 to 1 pm in Moore 302.

Spring 2020

Due to the COVID-19 outbreak we are facing locally and globally, and to comply with Dartmouth's guidance, we will not be hosting the CBB talk series during the spring of 2020. We hope to be back to our regularly scheduled programming in the fall of 2020.

Winter 2020





Thurs, Jan 30


Rotem Botvinik-Nezer, Dartmouth College

Variability in the analysis of a single fMRI dataset by many teams

The "replication crisis" in many scientific fields has raised concerns regarding the reliability of published results. One reason for the high rate of false positive results is the large number of "researcher degrees of freedom", where the process of data analysis can be performed in multiple ways. This is especially apparent in neuroimaging, where there is a thriving "garden of forking analysis paths". In the Neuroimaging Analysis Replication and Prediction Study (NARPS: https://www.biorxiv.org/content/10.1101/843193v1.full), we tested the variability of fMRI results across analysis pipelines that are used in practice in research laboratories. Seventy analysis teams independently analyzed the same fMRI dataset to test the same ex-ante hypotheses. Overall, our findings show that analytic flexibility substantially affects reported results. In this talk, I will present the background for this unique project, highlight the main findings, and discuss implications and potential solutions.

Thurs, Feb 13


Tommy Botch, Dartmouth College

How colorful is visual experience? Evidence from gaze-contingent virtual reality

Color ignites visual experience, imbuing the world with meaning, emotion, and richness. Intuitively, it feels that we are immersed in a colorful world that extends to the farthest limits of our periphery. In this talk, I will present a series of studies in which we show that this impression is surprisingly inaccurate. We used gaze-contingent rendering in immersive virtual reality (VR) to reveal the limits of color awareness during active, real-world visual experience, systematically altering observers' visual environments such that only the part of the scene they were looking at was presented in color, while the rest of the scene (i.e., the visual periphery) was entirely desaturated.

In Study 1, we found that observers were often completely unaware of these drastic alterations to their visual world. In the most extreme case, almost a third of observers failed to notice when less than 5% of the visual display was presented in color. In Study 2, we measured color detection thresholds using a staircase procedure while a new set of observers explicitly attended to the periphery. Still, we found that observers were unaware when a large portion of their field of view was desaturated. In Study 3, we confirmed that individual differences in detection thresholds are test-retest reliable within participants.

In brief, our results provide the first measurements of color awareness during active, naturalistic viewing conditions and show that our intuitive sense of a rich, colorful visual world is largely incorrect. I’ll conclude by briefly describing several applications of this approach to the study of psychiatric conditions such as autism.

Thurs, Feb 20


Giancarlo La Camera, Stony Brook University

Cortical computations via metastable activity

Metastable brain dynamics are characterized by abrupt, jump-like modulations, so that neural activity in single trials appears to unfold as a sequence of discrete, quasi-stationary ‘states’. Metastable activity occurs both in response to an external stimulus and during ongoing, self-generated activity. These spontaneous metastable states are increasingly found to subserve internal representations that are not locked to external triggers, including states of deliberation, attention, and expectation. Focusing on metastability allows us to shift our perspective on neural coding from traditional concepts based on trial-averaging to models based on dynamic ensemble representations. In this talk, I will present some recently established links between metastable dynamics, expectation, and decision making, together with a theoretical proposal of how these links may emerge in networks of spiking neurons.

Thurs, Feb 27



Mohsen Rakhshan, Dartmouth College

Neural substrate underlying computations of volatility and adaptive learning in a complex environment

In value-based decision making, integration of reward outcomes over time is essential. When an unpredicted reward outcome occurs, a decision-maker must be able to distinguish between uncertainty due to the stochastic nature of events (variability) and uncertainty due to changes in the environment (volatility). Whereas variability should not lead to behavioral adjustments, volatility should prompt adjustments in learning. While volatility can be specific to a stimulus (or action), it is currently unclear how humans perceive the volatility of a stimulus (or action) in a complex environment where multiple stimuli (actions) with different levels of volatility interact. Here, we developed a novel concurrent probabilistic reversal learning task to study how learning from reward feedback is influenced by the interaction of the volatility of different stimuli and their associated actions. Furthermore, we propose that a network of reward-dependent metaplasticity can provide a plausible mechanism for both the integration of reward under uncertainty and the estimation of uncertainty in a complex environment.


Bogdan Petre, Dartmouth College

Evoked pain intensity representation is distributed throughout the brain 

Information is coded in brain activity at different scales: locally, distributed across regions and networks, and globally. For pain, the scale of representation is controversial, and quantitative characterizations of spatial information distribution are lacking. Although generally believed to be an integrated cognitive and sensory phenomenon implicating diverse brain systems, both local and global representations are invoked to explain pain physiology. In this person-level meta-analysis (or mega-analysis) of data from 289 participants across 10 studies, we use model comparison combined with multivariate predictive models to investigate the spatial scale and location of acute pain representation. We compare models based on (a) a single most pain-predictive region, identified in a data-driven manner; (b) a single best large-scale cortical resting-state network; (c) selected cortical-subcortical systems related to evoked pain in prior literature (‘multi-system models’, including Neurosynth.org); and (d) a model spanning the full brain. We estimate the accuracy of pain intensity predictions using cross-validation (7 studies) and subsequently validate in three independent holdout studies. All spatial scales convey information about pain intensity, but distributed, multi-system models better characterize pain representations than individual regions or networks. Full brain models showed no predictive advantage over multi-system models with feature selection guided by previous literature. These findings suggest that the representation of evoked pain experience is distributed across multiple cortical and subcortical systems. They also provide a blueprint for identifying the spatial scale of information in other domains.


Fall 2019





Thurs, Sep 19


Graduate student data blitz


Thurs, Oct 17


Mehran Moradi, Dartmouth College

Multiple timescales of neural dynamics and integration of task-relevant signals across cortex

Recent studies have proposed the orderly progression in the time constants of neural dynamics as an organizational principle of cortical computations. However, relationships between these timescales and their dependence on response properties of individual neurons are unknown. We developed a comprehensive model to simultaneously estimate multiple timescales in neuronal dynamics and integration of task-relevant signals along with selectivity to those signals. We found that most neurons exhibited multiple timescales in their response, which consistently increased from parietal to prefrontal to cingulate cortex. However, there was no correlation between these timescales across individual neurons in any cortical area, resulting in independent parallel hierarchies of timescales. Additionally, none of these timescales depended on selectivity to task-relevant signals. Our results not only suggest multiple canonical mechanisms for an increase in timescales of neural dynamics across cortex but also point to additional mechanisms that allow decorrelation of these timescales to enable more flexibility.

Thurs, Oct 31


Mariam Aly, Columbia University

How hippocampal memory shapes, and is shaped by, attention


Attention modulates what we see and what we remember. Memory affects what we attend to and perceive. Despite this connection in behavior, little is known about the mechanisms that link attention and memory in the brain. One key structure that may be at the interface between attention and memory is the hippocampus. Here, I’ll explore the hypothesis that the relational representations of the hippocampus allow it to critically contribute to bidirectional interactions between attention and memory. First, I’ll show — in a series of human fMRI studies — that attention creates state-dependent patterns of activity in the hippocampus, and that these representations predict both online attentional behavior and memory formation. Then, I’ll provide neuropsychological evidence that the hippocampus is necessary for attention in tasks that recruit relational representations, particularly those that involve spatial processing. Finally, I’ll demonstrate that hippocampal memories enable preparation for upcoming attentional states. Together, this line of work highlights the tight links between attention and memory — links that are established, at least in part, by the hippocampus.

Thurs, Nov 7



Ratan Murty, MIT

Is visual experience necessary for the development of face selectivity in the lateral fusiform gyrus?

The fusiform face area (FFA) responds selectively to faces and is causally involved in face perception. How does the FFA arise in development, and why does it develop so systematically in the same location across individuals? Preferential fMRI responses to faces arise early, by around 6 months of age in humans (Deen et al., 2017). Arcaro et al. (2017) have further shown in monkeys that regions that later become face selective are correlated in resting fMRI with foveal retinotopic cortex in newborns, and that monkeys reared without ever seeing a face show no face-selective patches. These findings have been taken to argue that 1) seeing faces is necessary for the development of face-selective patches and 2) face patches arise in previously fovea-biased cortex because early experience with faces is foveally biased.

I will present evidence against both these hypotheses. We scanned congenitally blind subjects with fMRI while they performed a one-back haptic shape discrimination task, sequentially palpating 3D-printed photorealistic models of faces, hands, mazes, and chairs in a blocked design. We observed robust face selectivity in the lateral fusiform gyrus of most congenitally blind subjects during haptic exploration of 3D-printed stimuli, indicating that neither visual experience, nor fovea-biased input, nor visual expertise is necessary for face selectivity to arise in its characteristic location. Similar resting fMRI correlation fingerprints in individual blind and sighted participants suggest a role for long-range connectivity in the specification of the cortical locus of face selectivity.


Thurs, Nov 21


AJ Haskins, Dartmouth College

Active vision in immersive, 360° real-world environments: Methods and applications

Eye-tracking studies offer substantial insight into cognition, revealing which visual features viewers prioritize over others as they construct a sense of place in an environment. Such studies suggest that robust individual differences characterize gaze behavior, flagging the tool as a potential window into understanding psychiatric conditions such as autism. Yet, one key feature of real-world experience is overlooked by traditional eye-tracking paradigms. Everyday visual environments are actively explored: we gain rich information about a place by shifting our eyes, turning our heads, and moving our bodies. Little is known about how active exploration impacts the way humans encode the rich information available in a real-world scene.  
In this study, we sought to understand the impact of active viewing conditions on gaze behavior. We exploited recent developments in immersive Virtual Reality (iVR) and custom in-headset eye-tracking to monitor participants’ gaze while they naturally explored real-world, 360° environments via self-directed motion (saccades and head turns). In half of the trials, photospheres were passively displayed to participants while they were head-fixed, thus enabling us to perform quantitative, in-depth comparisons of gaze behavior and attentional deployment as subjects encoded novel real-world environments during self-generated (active exploration) versus image-generated (passive viewing) study conditions.
In brief, our results show that active viewing influences every aspect of gaze behavior, from the way we move our eyes to what we choose to attend to. In addition to highlighting the importance of studying vision in active contexts, I’ll conclude by briefly describing several applications of this approach to the study of psychiatric conditions such as autism.


Spring 2019





Wed, Apr 10





Shih-Wei Wu, National Yang-Ming University

Probability estimation and its neurocomputational substrates

Many decisions we make depend on how we evaluate potential outcomes and estimate their probabilities of occurrence. Outcome valuation is subjective – it requires consulting the decision maker’s internal preferences and is sensitive to context. Probability estimation is also subjective – but requires the decision maker to first extract statistics from the environment before using them to estimate probability. Currently, it is unclear whether the two computations share similar algorithms and neural-algorithmic implementations.

I will present our recent work on context-dependent probability estimation, in which we identified both similarities and differences in computational mechanisms between valuation and probability estimation. I will also talk about work on modeling probability estimation as Bayesian inference, which focuses on examining how, and how well, people estimate the probability of reward in the presence of prior and likelihood information. Here we found suboptimal performance similar to base-rate neglect, which, surprisingly, is robust across a wide variety of setups designed to eliminate this behavior. Together, these results suggest many interesting aspects of probability estimation that have yet to be fully understood at the behavioral, computational, and neural-algorithmic levels.

Thurs, Apr 18





Sarah Herald, Dartmouth College


What is the role of the left-hemisphere face areas?

Over the past two decades, neuroimaging studies have revealed a bilateral network of face-selective areas. Despite the presence of left hemisphere face areas, only a few cases of acquired prosopagnosia (AP) resulting from left hemisphere damage have been reported, and most of those cases involved left-handed individuals. Indeed, almost all cases of AP result from unilateral right or bilateral hemisphere damage. Given the apparent right-hemisphere dominance of face processing in the lesion literature, what might be the role of the left hemisphere face areas? I will review the lesion, neuroimaging, microstimulation, and intracranial recording literature to summarize our current understanding, or lack thereof, of the left hemisphere face areas. Additionally, I will provide suggestions for how future face perception studies can better address the shortcomings of prior studies and fill in the gaps in our knowledge.

Thurs, May 2





Vassiki Chauhan, Dartmouth College


Acquisition of person knowledge is pivotal for carrying out successful social interactions. Not only do we need to recognize people in different environments and circumstances, we also need to efficiently integrate information about them across different modalities. In my presentation, I will go over a range of approaches we have employed to investigate the system for recognizing familiar individuals. First, I will discuss the dominant theories about person knowledge and share some empirical evidence for prioritized processing of the faces of familiar individuals. I will also share some recent neuroimaging results probing the recognition of identities across different modalities. Then, I will present preliminary neuroimaging results from a sample of children who were born blind but whose sight has recently been restored, allowing us to investigate how the face processing network evolves over time. Finally, I will go over the possibility of using naturalistic stimuli to identify common face-selective regions in the brain across different participants.


Thurs, May 23




Kay Alfred, Dartmouth College




Shiva Ghaanifarashahi, Dartmouth College


Thurs, May 30




Malinda McPherson, Harvard University

Multiple pitch mechanisms in music and speech perception

Pitch conveys critical information in speech, music, and other natural sounds, and is conventionally defined as the perceptual correlate of a sound's fundamental frequency (F0). Although pitch perception is widely assumed to rely on a single F0 estimation process, real-world pitch tasks vary enormously, raising the possibility of underlying mechanistic diversity. I will present evidence that at least two different pitch mechanisms can be dissociated across tasks. One mechanism appears to help listeners summarize the frequencies of sounds with their F0, creating a compact code for memory storage. I will also discuss the use of singing to confirm and extend these results in populations where traditional psychophysical judgments may be difficult to elicit (e.g., young children or remote cultures without formal educational systems).

Winter 2019




Feb 21




Sarah Oh, Dartmouth College


Mar 14


Jonathan Freeman, NYU

More than meets the eye: Split-second social perception

Mar 21


Lucy Owen, Dartmouth College

Decrypting the neural code