CBB talk series

The Cognitive Brown Bag (CBB) is a graduate-student-organized talk series, primarily attended by faculty, graduate students, and staff from the cognitive labs at Dartmouth College. The talks are typically held on Thursdays from 12:15 to 1 pm in Moore 302.

Fall 2019

 


Thurs, Sep 19

Graduate student data blitz

Thurs, Oct 17

Mehran Moradi, Dartmouth College

Multiple timescales of neural dynamics and integration of task-relevant signals across cortex

Recent studies have proposed an orderly progression in the time constants of neural dynamics as an organizational principle of cortical computations. However, the relationships between these timescales, and their dependence on the response properties of individual neurons, are unknown. We developed a comprehensive model to simultaneously estimate multiple timescales in neuronal dynamics and integration of task-relevant signals, along with selectivity to those signals. We found that most neurons exhibited multiple timescales in their responses, which consistently increased from parietal to prefrontal to cingulate cortex. However, there was no correlation between these timescales across individual neurons in any cortical area, resulting in independent, parallel hierarchies of timescales. Additionally, none of these timescales depended on selectivity to task-relevant signals. Our results not only suggest multiple canonical mechanisms for an increase in timescales of neural dynamics across cortex but also point to additional mechanisms that allow decorrelation of these timescales to enable more flexibility.
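For readers unfamiliar with how an "intrinsic timescale" is estimated, the sketch below shows the common single-timescale baseline (in the spirit of Murray et al., 2014), not the speaker's multi-timescale model: fit an exponential decay to the across-trial autocorrelation of binned spike counts. All variable names and numbers are hypothetical, and the synthetic data is a Gaussian AR(1) process rather than real spike counts.

```python
# Minimal sketch: single intrinsic timescale via autocorrelation decay.
import numpy as np
from scipy.optimize import curve_fit

def estimate_timescale(spike_counts, bin_ms=50.0):
    """Estimate an intrinsic timescale (ms) from a (n_trials, n_bins) array."""
    n_bins = spike_counts.shape[1]
    lags, autocorrs = [], []
    for i in range(n_bins):
        for j in range(i + 1, n_bins):
            # Correlate counts in bin i with counts in bin j across trials.
            r = np.corrcoef(spike_counts[:, i], spike_counts[:, j])[0, 1]
            lags.append((j - i) * bin_ms)
            autocorrs.append(r)

    # Fit A * exp(-lag / tau) + B to the autocorrelation-vs-lag points.
    decay = lambda lag, A, tau, B: A * np.exp(-lag / tau) + B
    (A, tau, B), _ = curve_fit(decay, np.array(lags), np.array(autocorrs),
                               p0=[0.5, 100.0, 0.0], maxfev=10000)
    return tau

# Synthetic check: an AR(1) process whose true timescale is 200 ms.
rng = np.random.default_rng(0)
bin_ms, n_trials, n_bins, true_tau = 50.0, 500, 20, 200.0
phi = np.exp(-bin_ms / true_tau)              # AR(1) coefficient for that tau
counts = np.zeros((n_trials, n_bins))
counts[:, 0] = rng.normal(size=n_trials) / np.sqrt(1 - phi**2)  # stationary start
for t in range(1, n_bins):
    counts[:, t] = phi * counts[:, t - 1] + rng.normal(size=n_trials)
print(f"estimated tau ~ {estimate_timescale(counts, bin_ms):.0f} ms")
```

The fitted tau should land near 200 ms; the speaker's contribution is estimating several such timescales per neuron simultaneously, which this baseline does not attempt.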

Thurs, Oct 31

Mariam Aly, Columbia University

How hippocampal memory shapes, and is shaped by, attention

Attention modulates what we see and what we remember. Memory affects what we attend to and perceive. Despite this connection in behavior, little is known about the mechanisms that link attention and memory in the brain. One key structure that may be at the interface between attention and memory is the hippocampus. Here, I’ll explore the hypothesis that the relational representations of the hippocampus allow it to critically contribute to bidirectional interactions between attention and memory. First, I’ll show — in a series of human fMRI studies — that attention creates state-dependent patterns of activity in the hippocampus, and that these representations predict both online attentional behavior and memory formation. Then, I’ll provide neuropsychological evidence that the hippocampus is necessary for attention in tasks that recruit relational representations, particularly those that involve spatial processing. Finally, I’ll demonstrate that hippocampal memories enable preparation for upcoming attentional states. Together, this line of work highlights the tight links between attention and memory — links that are established, at least in part, by the hippocampus.

Thurs, Nov 7

Ratan Murty, MIT

Is visual experience necessary for the development of face selectivity in the lateral fusiform gyrus?

The fusiform face area (FFA) responds selectively to faces and is causally involved in face perception. How does the FFA arise in development, and why does it develop so systematically in the same location across individuals? Preferential fMRI responses to faces arise early, by around 6 months of age in humans (Deen et al., 2017). Arcaro et al. (2017) have further shown in monkeys that regions that later become face selective are correlated in resting fMRI with foveal retinotopic cortex in newborns, and that monkeys reared without ever seeing a face show no face-selective patches. These findings have been taken to argue that 1) seeing faces is necessary for the development of face-selective patches and 2) face patches arise in previously fovea-biased cortex because early experience with faces is foveally biased.

I will present evidence against both these hypotheses. We scanned congenitally blind subjects with fMRI while they performed a one-back haptic shape discrimination task, sequentially palpating 3D-printed photorealistic models of faces, hands, mazes, and chairs in a blocked design. We observed robust face selectivity in the lateral fusiform gyrus of most congenitally blind subjects during haptic exploration of 3D-printed stimuli, indicating that neither visual experience, nor fovea-biased input, nor visual expertise is necessary for face selectivity to arise in its characteristic location. Similar resting fMRI correlation fingerprints in individual blind and sighted participants suggest a role for long-range connectivity in the specification of the cortical locus of face selectivity.

 

Thurs, Nov 21

AJ Haskins, Dartmouth College

Active vision in immersive, 360° real-world environments: Methods and applications

Eye-tracking studies offer substantial insight into cognition, revealing which visual features viewers prioritize over others as they construct a sense of place in an environment. Such studies suggest that robust individual differences characterize gaze behavior, flagging the tool as a potential window into understanding psychiatric conditions such as autism. Yet, one key feature of real-world experience is overlooked by traditional eye-tracking paradigms. Everyday visual environments are actively explored: we gain rich information about a place by shifting our eyes, turning our heads, and moving our bodies. Little is known about how active exploration impacts the way humans encode the rich information available in a real-world scene.  
 
In this study, we sought to understand the impact of active viewing conditions on gaze behavior. We exploited recent developments in immersive Virtual Reality (iVR) and custom in-headset eye-tracking to monitor participants' gaze while they naturally explored real-world, 360° environments via self-directed motion (saccades and head turns). In half of the trials, photospheres were passively displayed to head-fixed participants. This design enabled us to perform quantitative, in-depth comparisons of gaze behavior and attentional deployment as subjects encoded novel real-world environments under self-generated (active exploration) versus image-generated (passive viewing) study conditions.
 
In brief, our results show that active viewing influences every aspect of gaze behavior, from the way we move our eyes to what we choose to attend to. In addition to highlighting the importance of studying vision in active contexts, I’ll conclude by briefly describing several applications of this approach to the study of psychiatric conditions such as autism.

 

Spring 2019

Wed, Apr 10

Shih-Wei Wu, National Yang-Ming University

Probability estimation and its neurocomputational substrates

Many decisions we make depend on how we evaluate potential outcomes and estimate their probabilities of occurrence. Outcome valuation is subjective – it requires consulting the decision maker’s internal preferences and is sensitive to context. Probability estimation is also subjective – but requires the decision maker to first extract statistics from the environment before using them to estimate probability. Currently, it is unclear whether the two computations share similar algorithms and neural-algorithmic implementations.

I will present our recent work on context-dependent probability estimation, in which we identified both similarities and differences in computational mechanisms between valuation and probability estimation. I will also talk about work on modeling probability estimation as Bayesian inference, which focuses on examining how, and how well, people estimate the probability of reward in the presence of prior and likelihood information. Here we found suboptimal performance similar to base-rate neglect, which is surprisingly robust across a wide variety of setups designed to eliminate this behavior. Together, these results suggest many interesting aspects of probability estimation that have yet to be fully understood at the behavioral, computational, and neural-algorithmic levels.
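The abstract does not spell out the computation, so as an illustration only, here is a toy contrast between a normative Bayesian estimate and a base-rate-neglecting one; the cue-and-reward numbers are invented for the example and are not from the speaker's task.

```python
# Toy example: Bayes' rule vs. base-rate neglect for reward estimation.
prior_reward = 0.10            # base rate: 10% of trials are rewarded
p_cue_given_reward = 0.80      # likelihood of the cue when reward is present
p_cue_given_no_reward = 0.20   # likelihood of the cue when reward is absent

# Normative posterior: P(reward | cue) via Bayes' rule.
evidence = (p_cue_given_reward * prior_reward
            + p_cue_given_no_reward * (1 - prior_reward))
posterior = p_cue_given_reward * prior_reward / evidence
print(f"Bayesian posterior:       {posterior:.2f}")   # ~0.31

# Base-rate neglect: judge from the likelihoods as if the prior were flat.
neglect = p_cue_given_reward / (p_cue_given_reward + p_cue_given_no_reward)
print(f"Base-rate-neglect answer: {neglect:.2f}")     # 0.80
```

The gap between 0.31 and 0.80 is the signature of ignoring the prior, which is the kind of systematic deviation the talk describes.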

Thurs, Apr 18

Sarah Herald, Dartmouth College

What is the role of the left-hemisphere face areas?

Over the past two decades, neuroimaging studies have revealed a bilateral network of face-selective areas. Despite the presence of left-hemisphere face areas, only a few cases of acquired prosopagnosia (AP) resulting from left-hemisphere damage have been reported, and most of those cases involved left-handed individuals. Indeed, almost all cases of AP result from unilateral right-hemisphere or bilateral damage. Given the apparent right-hemisphere dominance of face processing in the lesion literature, what might be the role of the left-hemisphere face areas? I will review the lesion, neuroimaging, microstimulation, and intracranial recording literature to summarize our current understanding, or lack thereof, of the left-hemisphere face areas. Additionally, I will provide suggestions for how future face perception studies can better address the shortcomings of prior studies and fill in the gaps in our knowledge.

Thurs, May 2

Vassiki Chauhan, Dartmouth College

Acquisition of person knowledge is pivotal for carrying out successful social interactions. Not only do we need to recognize people in different environments and circumstances, but we also need to efficiently integrate information about them across different modalities. In my presentation, I will go over a range of approaches we have employed to investigate the system for recognizing familiar individuals. First, I will discuss the dominant theories about person knowledge and share some empirical evidence for prioritized processing of the faces of familiar individuals. I will also share some recent neuroimaging results probing the recognition of identities across different modalities. Then, I will present preliminary neuroimaging results from a sample of children who were born blind but whose sight was recently restored, allowing us to investigate how the face processing network evolves over time. Finally, I will go over the possibility of using naturalistic stimuli to identify common face-selective regions in the brain across different participants.

Thurs, May 23

Kay Alfred, Dartmouth College

Shiva Ghaanifarashahi, Dartmouth College

Thurs, May 30

Malinda McPherson, Harvard University

Multiple pitch mechanisms in music and speech perception

Pitch conveys critical information in speech, music, and other natural sounds, and is conventionally defined as the perceptual correlate of a sound's fundamental frequency (F0). Although pitch perception is widely assumed to rely on a single F0 estimation process, real-world pitch tasks vary enormously, raising the possibility of underlying mechanistic diversity. I will present evidence that at least two different pitch mechanisms can be dissociated across tasks. One mechanism appears to help listeners summarize the frequencies of sounds with their F0, creating a compact code for memory storage. I will also discuss the use of singing to confirm and extend these results in populations where traditional psychophysical judgments may be difficult to elicit (e.g., young children or remote cultures without formal educational systems).
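As background for the F0 definition above, here is a minimal sketch, not the speaker's method, of the textbook autocorrelation approach to F0 estimation. It also illustrates the classic "missing fundamental" effect: the tone below contains only harmonics 2-5 of 220 Hz, yet the autocorrelation peak still falls at the 220 Hz period. The signal parameters are invented for the example.

```python
# Minimal sketch: F0 estimation by autocorrelation of a harmonic complex.
import numpy as np

fs = 16000                     # sample rate (Hz)
f0 = 220.0                     # fundamental to recover (Hz)
t = np.arange(int(0.1 * fs)) / fs

# Harmonics 2-5 only: the fundamental component itself is absent
# ("missing F0"), yet the waveform still repeats at the F0 period.
x = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(2, 6))

ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # autocorrelation, lags >= 0
min_lag = int(fs / 500)        # search only for pitches below 500 Hz
peak_lag = min_lag + np.argmax(ac[min_lag:])
print(f"estimated F0 ~ {fs / peak_lag:.1f} Hz")    # ~220 Hz
```

A single such F0 estimator is the conventional assumption the talk argues against; the dissociation evidence concerns tasks where listeners' behavior departs from any one shared estimate.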

 Winter 2019

 

 Speaker

 Title

Feb 21

Sarah Oh, Dartmouth College

 

Mar 14

Jonathan Freeman, NYU

More than meets the eye: Split-second social perception

Mar 21

Lucy Owen, Dartmouth College

Decrypting the neural code

Last Updated: 11/19/19