
fMRI brown bag: October 19, 2022

Colin Conwell

Postdoctoral Researcher, Harvard University

Vision Sciences Laboratory

Opportunistic experiments on a large-scale survey of diverse artificial vision models in prediction of 7T fMRI data

Abstract: What can we learn from large-scale comparisons between publicly available deep neural network models and brain responses? Model-to-brain benchmarking approaches (e.g. BrainScore) typically seek the most predictive model of a designated cortical system. Here, we take a different approach, performing targeted comparisons ('opportunistic experiments') over open-source models to examine whether controlled variation in learning pressures from architecture, task, and input yields better or worse correspondence to brain data. We survey the accuracy of 215 deep neural network models in predicting the responses of ventral stream voxels from the 7T fMRI Natural Scenes Dataset, performing targeted comparisons in architecture (e.g. CNNs versus Transformers), task (e.g. CLIP-style language alignment versus SimCLR-style self-supervision), and input (e.g. ImageNet versus VGGFace training), with both weighted and unweighted representational similarity analysis. Counter-intuitively, we find that brain predictivity levels are often broadly unaffected by even substantial changes in inductive biases (e.g. architecture or training), and instead depend most strongly on the brain-to-model mapping method employed, as well as the apparent diversity of the input data used for training. Taken together, these results can be considered a lay of the land for the current state of model-to-brain correspondences, and a potential roadmap for the factors that might drive the next generation of brain-predictive models.
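For readers unfamiliar with the unweighted representational similarity analysis mentioned in the abstract, the idea can be sketched in a few lines: build a representational dissimilarity matrix (RDM) over stimuli for both the model features and the brain responses, then correlate their upper triangles. This is a generic illustration with invented toy data, not the talk's actual pipeline or the Natural Scenes Dataset; all array shapes and variable names here are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(features):
    """Representational dissimilarity matrix over stimuli:
    1 - Pearson correlation between each pair of row patterns."""
    return 1.0 - np.corrcoef(features)

def unweighted_rsa(model_features, brain_responses):
    """Spearman correlation between the upper triangles of the
    model RDM and the brain RDM (unweighted RSA)."""
    m, b = rdm(model_features), rdm(brain_responses)
    iu = np.triu_indices_from(m, k=1)  # off-diagonal pairs only
    return spearmanr(m[iu], b[iu]).correlation

# Toy example: 50 stimuli, hypothetical dimensions.
rng = np.random.default_rng(0)
model_features = rng.normal(size=(50, 128))            # model layer activations
voxels = model_features @ rng.normal(size=(128, 40))   # "voxels" linearly related to features
voxels += rng.normal(scale=5.0, size=voxels.shape)     # plus measurement noise
score = unweighted_rsa(model_features, voxels)
```

The weighted variant referenced in the abstract additionally fits weights (e.g. per feature or per voxel) before comparing geometries, which is one reason the mapping method can matter so much for the resulting predictivity scores.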