Virtual fMRI brown bag: February 18, 2022
Please join us for a talk given by Ilker Yildirim, Assistant Professor of Psychology, Yale University.
Reverse-engineering the brain’s world models in the language of objects and generative models
Abstract: When we open our eyes, we do not see a jumble of light or colorful patterns. A great distance lies between the raw inputs sensed at our retinas and what we experience as the contents of our perception: in the brain, incoming sensory inputs are transformed into rich, discrete structures that we can think about and plan with. These structured representations are what we call “world models,” and they include representations of objects with 3D shapes and physical properties, scenes and surfaces with navigational affordances, and events with temporally demarcated dynamics. Real-world scenes are complex, yet these world models are formed efficiently and selectively, driving action as task-driven, simulatable representations at ecologically relevant scales of space and time. How, in the mind and brain, do we build and use such internal models of the world? In this talk, I will begin to answer this question by presenting a novel approach that synthesizes a diverse range of tools, including generative models, simulation engines, deep neural networks, and methods from information theory. For two core domains of high-level vision, the perception of faces and bodies, I will show that this approach explains both human behavioral data and multiple levels of neural processing in non-human primates, as well as a classic illusion, the “hollow face” effect. I will then present ongoing work on a novel account of attention that situates vision in the broader context of an agent with goals; using objective behavioral measurements, I will show how this computational account explains implicit goals and internal representations underlying scene perception.