CBB talk series

The Cognitive Brown Bag (CBB) is a graduate-student-organized talk series, attended primarily by faculty, graduate students, and staff from the cognitive labs at Dartmouth College.

Spring 2020 on Zoom


Thurs, May 7


Marvin Maechler, Dartmouth College

Attentional tracking takes place over perceived rather than physical locations

Illusions can induce striking differences between perception and retinal input. For instance, a static Gabor with a moving internal texture appears to be shifted in the direction of its internal motion, a shift that increases dramatically when the Gabor itself is also in motion. Here we ask whether attention operates on the perceptual or physical location of this stimulus. To do so, we designed an attentional tracking task in which participants (N=15) had to keep track of a single target among three Gabors that rotated around a common center in the periphery. During tracking, the illusion was used to make the three Gabors appear either shifted away from or toward one another while maintaining the same physical separation. Tracking performance depends to a large degree on target-to-distractor spacing, so if attention selects targets from perceived positions, performance should be better when the Gabors appear further apart and worse when they appear closer together. Results showed that tracking performance was superior with greater perceived separations, implying that attentional tracking operates over perceived rather than physical positions.

 

Lucy Owen, Dartmouth College

What is the dimensionality of human thought?

Naturalistic processing requires coordinated activity patterns across the brain. To understand the dimensionality of neural activity patterns, and changes in the complexity of brain activity patterns over time, we used an fMRI dataset collected by Simony et al. (2016) in which cognitive richness was manipulated. Specifically, participants listened to an audio recording of a story, as well as scrambled versions of the same story (where the scrambling was applied at different temporal scales). We applied dimensionality reduction algorithms to the activity patterns in each experimental condition. We sought to understand the 'dimensionality' of the neural patterns that were sufficient to decode participants' listening times (our approach was similar to that of Mack et al. 2017). We trained classifiers on the same neuroimaging dataset using increasing numbers of principal components to decode the precise time when a given neural pattern was recorded. We found that even low-dimensional embeddings of the data were sufficient to accurately decode listening times from the intact story recording, whereas finer temporal scramblings of the story required higher-dimensional embeddings of the data to reach peak decoding accuracy.
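
As a rough illustration of this kind of analysis (a minimal sketch, not the speaker's code; the data, shapes, and labels below are invented stand-ins), one can embed voxel patterns with increasing numbers of principal components and ask how many dimensions a classifier needs to decode when each pattern occurred:

```python
# Hypothetical sketch of a dimensionality-sweep decoding analysis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

n_timepoints, n_voxels = 300, 1000
X = np.random.randn(n_timepoints, n_voxels)   # stand-in for real fMRI patterns
y = np.arange(n_timepoints) // 10             # coarse "listening time" labels

for k in (2, 5, 10, 25, 50, 100):
    Z = PCA(n_components=k).fit_transform(X)              # k-dimensional embedding
    acc = cross_val_score(LinearSVC(), Z, y, cv=5).mean()
    print(f"{k} components: decoding accuracy = {acc:.2f}")
# The number of components at which accuracy plateaus estimates the
# dimensionality sufficient to decode listening times.
```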

Thurs, May 21


Sarah Herald, Dartmouth College

How are faces represented in visual areas?

I will be discussing three of my studies. First, I will discuss a series of experiments in which I found that category-selective areas in visual cortex respond primarily to images in the contralateral visual field, with the exception of the right Fusiform Face Area (rFFA), which responds to faces nearly equally across the entire visual field. The rFFA's outsized role in integrating face information across both visual fields may explain why a single unilateral lesion to the area is enough to cause large deficits in face perception. Second, I will discuss a case study of patient A.D., who perceives features on the right side of faces as if they had melted, a rare condition known as hemi-prosopometamorphopsia (hemi-PMO). A.D.'s results indicate that faces are aligned to a view- and orientation-independent face template, that the representations of the left and right halves of a face are dissociable, and that these representations exist in both the right and left hemispheres. Finally, I will discuss a series of experiments I have planned to investigate the relationship between visual and haptic face recognition and to determine whether the representation of faces in visual areas is multimodal.


Thurs, Jun 4


Kirsten Ziman, Dartmouth College

 

Mental illness has traditionally been diagnosed based on the number and severity of symptoms from a disorder-specific list. This low-resolution approach allows two patients with virtually no common symptoms to receive the same diagnosis, and individual patients to be diagnosed with multiple comorbid disorders. As such, the NIMH deems it critical to move toward a holistic approach to mental illness, and has proposed a research framework, called RDoC, to this end. RDoC emphasizes comprehensive neural, cognitive, and behavioral analysis across the lifespan to understand how mental illness unfolds. In line with this objective, we are conducting a far-reaching analysis of cognitive traits as they relate to psychiatric tendencies. Specifically, we are analyzing rich cognitive profiles (aggregated perceptual and cognitive task data) as they map onto high-dimensional psychiatric space (aggregated psychiatric survey data). We will use a machine learning approach to fully leverage the most psychiatrically informative behavioral nuances in classic psychological data (from perception, memory, and cognitive control tasks). In doing so, we hope to facilitate the shift toward a comprehensive, data-driven approach to psychiatry.

 

Heejung Jung, Dartmouth College

 

Conformity, the act of changing one’s behavior to align with social influence, is a robust phenomenon, persisting even in settings of subjective choice, where no right or wrong answers exist. However, understanding the neural representation of normative conformity in subjective choices and socially modulated values has been a challenge, as it is difficult to differentiate when an individual is acting on their own preferences versus the majority opinion. To address these questions, we devised a paradigm that orthogonalizes one’s original preference (“value”) and the choice of the majority (“social”). In addition, by using a model-based approach, we were able to quantify both the value and social factors that drive choices on a trial-by-trial basis, instead of using binary conditions of conformed versus non-conformed choices, as in previous studies. Our preliminary results show that value regressors recruit the ventromedial prefrontal cortex and temporoparietal junction (TPJ), while social regressors tend to recruit the TPJ. These findings suggest that conformed choices may go beyond the value computation in the medial prefrontal cortex, and may engage other social regions in the process of deciding to conform.

Fri, Jun 12


Mary Kieseler, Dartmouth College

 

When driving, we use mirrors to localize objects that would otherwise be invisible to us because they lie outside our field of view. Various species of vertebrates can learn to use a mirror to localize objects hidden from their view (monkeys: Anderson & Gallup, 2011; chimpanzees: Menzel et al., 1985; a gorilla: Nicholson & Gould, 1995; elephants: Povinelli, 1989; pigs: Howell & Bennett, 2011; African gray parrots: Pepperberg et al., 1995; crows: Medina et al., 2011). Octopuses are highly capable visual hunters that prey on live crabs. In the present study, we tested the hypothesis that Octopus bimaculoides could learn to use the mirror image of a visual scene to localize a predictor of food reward. Three octopuses were tested. At the beginning of each trial, the animal was placed in an opaque box facing a mirror. A virtual crab was projected on a back screen hidden from the subject's view, but its image was reflected in the mirror. The animal’s task was to move out of the box, turn around, and go to the side where the virtual crab was projected. This required using the mirror as a tool to locate the side of the projected but 'hidden' crab. The octopuses made significantly more correct choices than would be expected had they guessed at chance level. Our results show that octopuses are capable of learning to use a mirror to infer where their prey is located. This requires the cognitive capacity to use a complex visual representation of the environment to drive goal-oriented behavior.



Winter 2020


Thurs, Jan 30


Rotem Botvinik-Nezer, Dartmouth College

Variability in the analysis of a single fMRI dataset by many teams

The "replication crisis" in many scientific fields has raised concerns regarding the reliability of published results. One reason for the high rate of false positive results is the large number of "researcher degrees of freedom", where the process of data analysis can be performed in multiple ways. This is specifically apparent in neuroimaging, where there is a thriving "garden of forking analysis paths". In the Neuroimaging Analysis Replication and Prediction Study (NARPS: https://www.biorxiv.org/content/10.1101/843193v1.full), we tested the variability fMRI results across  analysis pipelines that are used in practice in research laboratories. Seventy analysis teams independently analyzed the same fMRI dataset to test the same ex-ante hypotheses. Overall, our findings show that analytic flexibility substantially affect reported results. In this talk, I will present the background for this unique project, highlight the main findings and discuss implications and potential solutions.

Thurs, Feb 13


Tommy Botch, Dartmouth College

How colorful is visual experience? Evidence from gaze-contingent virtual reality

Color ignites visual experience, imbuing the world with meaning, emotion, and richness. Intuitively, it feels that we are immersed in a colorful world that extends to the farthest limits of our periphery. In this talk, I will present a series of studies in which we show that this impression is surprisingly inaccurate. We used gaze-contingent rendering in immersive virtual reality (VR) to reveal the limits of color awareness during active, real-world visual experience, systematically altering participants' visual environments so that only the part of the scene they were looking at was presented in color while the rest of the scene (i.e., the visual periphery) was entirely desaturated.

In Study 1, we found that observers were often completely unaware of these drastic alterations to their visual world. In the most extreme case, almost a third of observers failed to notice when less than 5% of the visual display was presented in color. In Study 2, we measured color detection thresholds using a staircase procedure (sketched in code below) while a new set of observers explicitly attended to the periphery. Still, we found that observers were unaware when a large portion of their field of view was desaturated. In Study 3, we confirmed that individual differences in detection thresholds are test-retest reliable within participants.

In brief, our results provide the first measurements of color awareness during active, naturalistic viewing conditions and show that our intuitive sense of a rich, colorful visual world is largely incorrect. I’ll conclude by briefly describing several applications of this approach to the study of psychiatric conditions such as autism.
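
For readers unfamiliar with adaptive staircases, here is a minimal sketch of a generic 2-down/1-up procedure (the actual parameters and implementation used in Study 2 are not specified here; the simulated observer is hypothetical):

```python
def run_staircase(detects, start=0.5, step=0.05, n_reversals=8):
    """Adapt a stimulus level (e.g., peripheral saturation) toward threshold.

    detects(level) -> True if the observer notices the manipulation.
    A 2-down/1-up rule converges near 70.7% detection.
    """
    level, direction, streak, reversals = start, -1, 0, []
    while len(reversals) < n_reversals:
        if detects(level):
            streak += 1
            if streak == 2:              # two detections in a row -> harder
                streak = 0
                if direction == +1:      # direction flip = a reversal
                    reversals.append(level)
                direction = -1
                level = max(0.0, level - step)
        else:                            # one miss -> easier
            streak = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level = min(1.0, level + step)
    return sum(reversals) / len(reversals)   # threshold estimate

# Toy deterministic observer with a true threshold of 0.3:
print(run_staircase(lambda level: level > 0.3))   # ~0.3
```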

Thurs, Feb 20


Giancarlo La Camera, Stony Brook University

Cortical computations via metastable activity

Metastable brain dynamics are characterized by abrupt, jump-like modulations, so that neural activity in single trials appears to unfold as a sequence of discrete, quasi-stationary ‘states’. Metastable activity occurs both in response to an external stimulus and during ongoing, self-generated activity. These spontaneous metastable states are increasingly found to subserve internal representations that are not locked to external triggers, including states of deliberation, attention, and expectation. Focusing on metastability allows us to shift our perspective on neural coding from traditional concepts based on trial-averaging to models based on dynamic ensemble representations. In this talk, I will present some recently established links between metastable dynamics, expectation, and decision making, together with a theoretical proposal of how these links may emerge in networks of spiking neurons.

Thurs, Feb 27


Mohsen Rakhshan, Dartmouth College

Neural substrate underlying computations of volatility and adaptive learning in a complex environment

In value-based decision making, integration of reward outcomes over time is essential. When an unpredicted reward outcome occurs, a decision-maker must be able to distinguish between uncertainty due to the stochastic nature of events (variability) and uncertainty due to changes in the environment (volatility). Whereas variability should not lead to behavioral adjustments, volatility should drive adjustments in learning. While volatility can be specific to a stimulus (or action), it is currently unclear how humans perceive the volatility of a given stimulus (or action) in a complex environment where multiple stimuli (or actions) with different levels of volatility interact. Here, we developed a novel concurrent probabilistic reversal learning task to study how learning from reward feedback is influenced by the interaction of the volatility of different stimuli and their associated actions. Furthermore, we propose that a network with reward-dependent metaplasticity can provide a plausible mechanism both for integration of reward under uncertainty and for estimation of uncertainty in a complex environment.
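
As a toy illustration of the variability/volatility distinction (this is not the reward-dependent metaplasticity model proposed in the talk; the learning rule and parameters below are invented for illustration), a delta-rule learner can scale its learning rate by a running estimate of volatility:

```python
import numpy as np

def volatile_delta_rule(rewards, base_alpha=0.1):
    """Value learning whose learning rate grows with estimated volatility."""
    v, vol, history = 0.5, 0.0, []
    for r in rewards:
        pe = r - v                              # prediction error
        vol += base_alpha * (abs(pe) - vol)     # crude running "surprise" tracker
        alpha = min(1.0, base_alpha * (1.0 + 2.0 * vol))
        v += alpha * pe                         # larger updates when volatile
        history.append(v)
    return np.array(history)

# Toy usage: a stable block (p(reward)=0.8) followed by a reversal (p=0.2).
rng = np.random.default_rng(0)
rewards = np.concatenate([rng.random(100) < 0.8,
                          rng.random(100) < 0.2]).astype(float)
values = volatile_delta_rule(rewards)
```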

 

Bogdan Petre, Dartmouth College

Evoked pain intensity representation is distributed throughout the brain 

Information is coded in brain activity at different scales: locally, distributed across regions and networks, and globally. For pain, the scale of representation is controversial, and quantitative characterizations of the spatial distribution of information are lacking. Although pain is generally believed to be an integrated cognitive and sensory phenomenon implicating diverse brain systems, both local and global representations have been invoked to explain pain physiology. In this person-level meta-analysis (or mega-analysis) of data from 289 participants across 10 studies, we use model comparison combined with multivariate predictive models to investigate the spatial scale and location of acute pain representation. We compare models based on (a) the single most pain-predictive region, identified in a data-driven manner; (b) the single best large-scale cortical resting-state network; (c) selected cortical-subcortical systems related to evoked pain in the prior literature (‘multi-system models’, including Neurosynth.org); and (d) a model spanning the full brain. We estimate the accuracy of pain intensity predictions using cross-validation (7 studies) and subsequently validate in three independent holdout studies. All spatial scales convey information about pain intensity, but distributed, multi-system models characterize pain representations better than individual regions or networks do. Full-brain models showed no predictive advantage over multi-system models with feature selection guided by the previous literature. These findings suggest that the representation of evoked pain experience is distributed across multiple cortical and subcortical systems, and they provide a blueprint for identifying the spatial scale of information in other domains.
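
A minimal sketch of the model-comparison logic (with hypothetical data and masks; this is not the authors' pipeline, which cross-validated across studies rather than using simple k-fold splits): fit a cross-validated predictive model within each candidate spatial scale and compare held-out prediction of pain intensity.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

X = np.random.randn(289, 5000)   # stand-in for voxel features (289 participants)
y = np.random.randn(289)         # stand-in for evoked pain intensity ratings

masks = {                        # hypothetical feature sets at each spatial scale
    "single_region": np.arange(200),
    "single_network": np.arange(1000),
    "multi_system": np.arange(3000),
    "full_brain": np.arange(5000),
}
for name, cols in masks.items():
    pred = cross_val_predict(RidgeCV(), X[:, cols], y, cv=7)
    r = np.corrcoef(pred, y)[0, 1]       # prediction-outcome correlation
    print(f"{name}: r = {r:.2f}")
```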

 

Fall 2019


Thurs, Sep 19


Graduate student data blitz


Thurs, Oct 17


Mehran Moradi, Dartmouth College

Multiple timescales of neural dynamics and integration of task-relevant signals across cortex

Recent studies have proposed the orderly progression in the time constants of neural dynamics as an organizational principle of cortical computations. However, relationships between these timescales and their dependence on response properties of individual neurons are unknown. We developed a comprehensive model to simultaneously estimate multiple timescales in neuronal dynamics and integration of task-relevant signals along with selectivity to those signals. We found that most neurons exhibited multiple timescales in their response, which consistently increased from parietal to prefrontal to cingulate cortex. However, there was no correlation between these timescales across individual neurons in any cortical area, resulting in independent parallel hierarchies of timescales. Additionally, none of these timescales depended on selectivity to task-relevant signals. Our results not only suggest multiple canonical mechanisms for an increase in timescales of neural dynamics across cortex but also point to additional mechanisms that allow decorrelation of these timescales to enable more flexibility.
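
For context, a common way to estimate a single intrinsic timescale is to fit an exponential decay to the autocorrelation of a neuron's spike counts; the speaker's model estimates multiple timescales jointly, which this hedged sketch does not attempt:

```python
import numpy as np
from scipy.optimize import curve_fit

def intrinsic_timescale(counts, dt, max_lag=20):
    """Fit A*exp(-lag/tau) + B to the spike-count autocorrelation; return tau."""
    x = counts - counts.mean()
    n = len(x)
    ac = np.correlate(x, x, "full")[n - 1:] / (x.var() * n)  # ac[0] == 1
    lags = np.arange(1, max_lag) * dt
    decay = lambda t, A, tau, B: A * np.exp(-t / tau) + B
    (A, tau, B), _ = curve_fit(decay, lags, ac[1:max_lag], p0=(0.5, 5 * dt, 0.0))
    return tau

# Toy usage: an AR(1) series with true timescale -dt/ln(0.8) ≈ 0.22 s at dt = 50 ms.
rng = np.random.default_rng(0)
x = np.zeros(4000)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()
print(intrinsic_timescale(x, dt=0.05))
```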

Thurs, Oct 31


Mariam Aly, Columbia University

How hippocampal memory shapes, and is shaped by, attention

Attention modulates what we see and what we remember. Memory affects what we attend to and perceive. Despite this connection in behavior, little is known about the mechanisms that link attention and memory in the brain. One key structure that may be at the interface between attention and memory is the hippocampus. Here, I’ll explore the hypothesis that the relational representations of the hippocampus allow it to critically contribute to bidirectional interactions between attention and memory. First, I’ll show — in a series of human fMRI studies — that attention creates state-dependent patterns of activity in the hippocampus, and that these representations predict both online attentional behavior and memory formation. Then, I’ll provide neuropsychological evidence that the hippocampus is necessary for attention in tasks that recruit relational representations, particularly those that involve spatial processing. Finally, I’ll demonstrate that hippocampal memories enable preparation for upcoming attentional states. Together, this line of work highlights the tight links between attention and memory — links that are established, at least in part, by the hippocampus.

Thurs, Nov 7


Ratan Murty, MIT

Is visual experience necessary for the development of face selectivity in the lateral fusiform gyrus?

The fusiform face area (FFA) responds selectively to faces and is causally involved in face perception. How does the FFA arise in development, and why does it develop so systematically in the same location across individuals? Preferential fMRI responses to faces arise early, by around 6 months of age in humans (Deen et al., 2017). Arcaro et al. (2017) have further shown in monkeys that regions that later become face-selective are correlated in resting-state fMRI with foveal retinotopic cortex in newborns, and that monkeys reared without ever seeing a face show no face-selective patches. These findings have been taken to argue that 1) seeing faces is necessary for the development of face-selective patches and 2) face patches arise in previously fovea-biased cortex because early experience with faces is foveally biased.

I will present evidence against both of these hypotheses. We scanned congenitally blind subjects with fMRI while they performed a one-back haptic shape discrimination task, sequentially palpating 3D-printed photorealistic models of faces, hands, mazes, and chairs in a blocked design. We observed robust face selectivity in the lateral fusiform gyrus of most congenitally blind subjects during haptic exploration of the 3D-printed stimuli, indicating that neither visual experience, nor fovea-biased input, nor visual expertise is necessary for face selectivity to arise in its characteristic location. Similar resting-state fMRI correlation fingerprints in individual blind and sighted participants suggest a role for long-range connectivity in the specification of the cortical locus of face selectivity.

 

Thurs, Nov 21


AJ Haskins, Dartmouth College

Active vision in immersive, 360° real-world environments: Methods and applications

Eye-tracking studies offer substantial insight into cognition, revealing which visual features viewers prioritize over others as they construct a sense of place in an environment. Such studies suggest that robust individual differences characterize gaze behavior, flagging the tool as a potential window into understanding psychiatric conditions such as autism. Yet, one key feature of real-world experience is overlooked by traditional eye-tracking paradigms. Everyday visual environments are actively explored: we gain rich information about a place by shifting our eyes, turning our heads, and moving our bodies. Little is known about how active exploration impacts the way humans encode the rich information available in a real-world scene.  
 
In this study, we sought to understand the impact of active viewing conditions on gaze behavior. We exploited recent developments in immersive Virtual Reality (iVR) and custom in-headset eye-tracking to monitor participants’ gaze while they naturally explored real-world, 360° environments via self-directed motion (saccades and head turns). In half of the trials, photospheres were passively displayed to participants while they were head-fixed. This enabled us to perform quantitative, in-depth comparisons of gaze behavior and attentional deployment as subjects encoded novel real-world environments under self-generated (active exploration) versus image-generated (passive viewing) study conditions.
 
In brief, our results show that active viewing influences every aspect of gaze behavior, from the way we move our eyes to what we choose to attend to. In addition to highlighting the importance of studying vision in active contexts, I’ll conclude by briefly describing several applications of this approach to the study of psychiatric conditions such as autism.

 

Spring 2019


Wed, Apr 10

Shih-Wei Wu, National Yang-Ming University

Probability estimation and its neurocomputational substrates

Many decisions we make depend on how we evaluate potential outcomes and estimate their probabilities of occurrence. Outcome valuation is subjective – it requires consulting the decision maker’s internal preferences and is sensitive to context. Probability estimation is also subjective – but requires the decision maker to first extract statistics from the environment before using them to estimate probability. Currently, it is unclear whether the two computations share similar algorithms and neural-algorithmic implementations.

I will present our recent work on context-dependent probability estimation, in which we identified both similarities and differences in computational mechanisms between valuation and probability estimation. I will also talk about work on modeling probability estimation as Bayesian inference, which focuses on examining how, and how well, people estimate the probability of reward in the presence of prior and likelihood information. Here we found suboptimal performance similar to base-rate neglect, which, surprisingly, is robust across a wide variety of setups designed to eliminate this behavior. Together, these results suggest many interesting aspects of probability estimation that have yet to be fully understood at the behavioral, computational, and neural-algorithmic levels.
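
As a worked toy example of the Bayesian benchmark involved (numbers invented for illustration): given a reward base rate (prior) and a cue's likelihoods, the normative estimate combines both, whereas base-rate neglect tracks the likelihood alone.

```python
prior = 0.2               # base rate: P(reward)
p_cue_given_reward = 0.8  # likelihood of the cue when reward is coming
p_cue_given_none = 0.3    # likelihood of the cue otherwise

posterior = (p_cue_given_reward * prior) / (
    p_cue_given_reward * prior + p_cue_given_none * (1 - prior))
print(posterior)            # 0.40: normative Bayesian estimate
print(p_cue_given_reward)   # 0.80: what pure base-rate neglect would report
```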

Thurs, Apr 18

Sarah Herald, Dartmouth College

What is the role of the left-hemisphere face areas?

Over the past two decades, neuroimaging studies have revealed a bilateral network of face-selective areas. Despite the presence of left-hemisphere face areas, only a few cases of acquired prosopagnosia (AP) resulting from left-hemisphere damage have been reported, and most of those cases involved left-handed individuals. Indeed, almost all cases of AP result from unilateral right-hemisphere or bilateral damage. Given the apparent right-hemisphere dominance of face processing in the lesion literature, what might be the role of the left-hemisphere face areas? I will review the lesion, neuroimaging, microstimulation, and intracranial recording literature to summarize our current understanding, or lack thereof, of the left-hemisphere face areas. Additionally, I will provide suggestions for how future face perception studies can better address the shortcomings of prior studies and fill in the gaps in our knowledge.

Thurs, May 2

Vassiki Chauhan, Dartmouth College

 

Acquisition of person knowledge is pivotal for carrying out successful social interactions. Not only do we need to recognize people in different environments and circumstances, but we also need to efficiently integrate information about them across different modalities. In my presentation, I will go over a range of approaches we have employed to investigate the system for recognizing familiar individuals. First, I will discuss the dominant theories about person knowledge and share some empirical evidence for prioritized processing of the faces of familiar individuals. I will also share some recent neuroimaging results probing the recognition of identities across different modalities. Then, I will present preliminary neuroimaging results from a sample of children who were born blind but whose sight was recently restored, allowing us to investigate how the face processing network evolves over time. Finally, I will go over the possibility of using naturalistic stimuli to identify common face-selective regions in the brain across different participants.

 

Thurs, May 23

Kay Alfred, Dartmouth College

Shiva Ghaanifarashahi, Dartmouth College

Thurs, May 30

Malinda McPherson, Harvard University

Multiple pitch mechanisms in music and speech perception

Pitch conveys critical information in speech, music, and other natural sounds, and is conventionally defined as the perceptual correlate of a sound's fundamental frequency (F0). Although pitch perception is widely assumed to rely on a single F0 estimation process, real-world pitch tasks vary enormously, raising the possibility of underlying mechanistic diversity. I will present evidence that at least two different pitch mechanisms can be dissociated across tasks. One mechanism appears to help listeners summarize the frequencies of sounds with their F0, creating a compact code for memory storage. I will also discuss the use of singing to confirm and extend these results in populations where traditional psychophysical judgments may be difficult to elicit (e.g., young children or remote cultures without formal educational systems).
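
For readers unfamiliar with F0, a minimal autocorrelation-based estimator illustrates the conventional definition (this is not the speaker's method; parameters are illustrative):

```python
import numpy as np

def estimate_f0(signal, sr, fmin=50.0, fmax=500.0):
    """Estimate fundamental frequency (Hz) from the autocorrelation peak."""
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)    # search plausible pitch periods
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)             # a 220 Hz test tone
print(estimate_f0(tone, sr))                   # ≈ 220
```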

Winter 2019


Feb 21

Sarah Oh, Dartmouth College

 

Mar 14


Jonathan Freeman, NYU

More than meets the eye: Split-second social perception

Mar 21


Lucy Owen, Dartmouth College

Decrypting the neural code