CBB talk series
The Cognitive Brown Bag (CBB) is a graduate-student-organized talk series, primarily attended by faculty, graduate students, and staff from the cognitive labs at Dartmouth College. A list of past CBB talks can be found here.
The 2023-2024 talk series is organized by Yong Hoon Chung (Stoermer Lab), Clara Sava-Segal (Finn Lab), and Jae Hyung Woo (Soltani Lab). The time and location of the talks will be announced in late summer/early fall.
Fall 2023
| Date | Speaker | Talk title | Abstract |
|---|---|---|---|
| Tuesday, October 3 | Mary Kieseler (Dartmouth College) | Tracking the emergence of hyperfamiliarity for faces: Lengthy covert discrimination followed by hyperfamiliarity due to disrupted post-perceptual processes | Imagine walking down the promenade near the beach in Maine and feeling happy because you feel you are surrounded by familiar faces, only to then realize that you can't possibly know all these people so far from your home. Nell is a 49-year-old woman who, following a severe migraine in August 2020, has experienced these sorts of situations ever since: All faces she sees feel familiar to her. In this talk, I will present data from behavioral tests as well as two EEG studies that shed light on the emergence of Nell's hyperfamiliarity for faces. While Nell's behavioral results clearly reflect the feelings of hyperfamiliarity she reports, her ERP results indicate that her hyperfamiliarity arises at post-perceptual stages: Nell shows covert discrimination between familiar and unfamiliar faces up to 600 milliseconds after stimulus presentation, and her visual identity face matching is intact. In the discussion, I am especially interested in suggestions regarding the EEG findings showing discrimination between faces, even as late as the P600, that fails to reach awareness: How do these data fit with research on awareness? What is the relation to consciousness and/or the timing of consciousness? |
| | Nate Heller (Dartmouth College) | Indexing proneness to visual hallucinations with high-confidence false-alarm rates | Visual hallucinations are experienced in a wide range of pathological and non-pathological contexts. Psychosis, psychedelics, and mystical experiences can all result in visual hallucinations. Due to their unpredictability, it is hard to measure the temporal and perceptual structure of hallucinations in these real-world contexts. Therefore, researchers have started turning to the use of model hallucinations. These are perceptual effects that are "hallucination-like" and that can be induced systematically in the lab. One example of an effective model hallucination is high-confidence false alarms: the vivid experience of a structured percept in a random stimulus. In the auditory domain, high-confidence false alarms have proved especially effective. In auditory signal detection tasks, high-confidence false-alarm rates have been shown to correlate with hallucination proneness and have led to the identification of important neural correlates of auditory hallucination-like perception (Schmack et al., 2021). The success of this paradigm lends support to the view that auditory hallucinations result from an overweighting of prior expectations, which in turn shape perception through top-down processes (Corlett et al., 2019). Unfortunately, in the visual domain, no comparable task exists that gives researchers the same degree of experimental control needed to test the role of prior expectations and top-down processes in the production of visual hallucinations. In this talk, I will present two visual tasks that are designed to measure high-confidence false alarms: a face signal detection task and a motion signal detection task. I will describe a planned online study (n ≈ 400) in which I will test whether high-confidence false-alarm rates in these two tasks correlate with proneness to visual hallucinations in the general population. I will then invite the audience to join me in 1) speculating about novel applications of these tasks in translational research, and 2) imagining additional analyses of the data. |
| Tuesday, October 17 | Lorella Battelli (Harvard University) | | |
| Tuesday, October 31 | Kevin Ortego (Dartmouth College) | | |
| | AJ Haskins (Dartmouth College) | | |
| Tuesday, November 7 | Juliet Davidow (Northeastern University) | | |
| Tuesday, November 14 | Yong Hoon Chung (Dartmouth College) | | |
| | Byeol Kim (Dartmouth College) | | |
Winter 2024
| Date | Speaker |
|---|---|
| Tuesday, January 9 | Michael Wang (Dartmouth College) |
| | Mert Ozkan (Dartmouth College) |
| Tuesday, January 23 | Jae Hyung Woo (Dartmouth College) |
| | Tommy Botch (Dartmouth College) |
| Tuesday, February 6 | Megan Hillis (Dartmouth College) |
| | Bogdan Petre (Dartmouth College) |
| Tuesday, February 27 | Jane Han (Dartmouth College) |
| | Anna Mynick (Dartmouth College) |
| Tuesday, March 12 | Jeongho Park (Harvard University) |
Spring 2024
| Date | Speaker |
|---|---|
| Tuesday, March 26 | Sam McDougle (Yale University) |
| Tuesday, April 9 | Stefano Anzellotti (Boston College) |
| Tuesday, April 23 | Alexis Kidder (Dartmouth College) |
| | Clara Sava-Segal (Dartmouth College) |
| Tuesday, May 7 | Xinming Xu (Dartmouth College) |
| | Mijin Kwon (Dartmouth College) |
| Tuesday, May 14 | Mehrdad Jazayeri (MIT) |
| Tuesday, May 28 | Yeo Bi Choi (Dartmouth College) |
| | Heejung Jung (Dartmouth College) |
| Tuesday, June 4 | Yeongji Lee (Dartmouth College) |
| | Paxton Fitzpatrick (Dartmouth College) |