
2018 CCN Workshop: Multidisciplinary Approaches to Understanding Face Perception

Organizers: Ida Gobbini, Hervé Abdi, Brad Duchaine, Jim Haxby, Sharon Gilad-Gutnick, Pawan Sinha, Lorenzo Torresani, and Matteo Visconti di Oleggio Castello

Sponsored by the Center for Cognitive Neuroscience and the Neukom Institute

Dates: August 29 and August 30

Location: The Hanover Inn, Hanover, NH

Registration is now closed. 

Confirmed speakers

Carlos Castillo, University of Maryland

Brad Duchaine, Dartmouth College

Developmental prosopagnosics have widespread selectivity reductions across category-selective visual cortex


It is unclear which cortical areas contribute to face processing deficits in developmental prosopagnosia (DP), and no previous studies have investigated whether other category-selective areas function normally in DP. To address these issues, we scanned 22 DPs and 27 controls using a dynamic localizer consisting of video clips of faces, scenes, bodies, objects, and scrambled objects.  DPs exhibited reduced face selectivity in all 12 face areas, and the reductions were significant in three posterior and two anterior areas. DPs and controls showed similar responses to faces in other category-selective areas, which suggests the DPs’ behavioral deficits with faces result from problems restricted to the face network. DPs also had pronounced scene-selectivity reductions in four of six scene-selective areas and marginal body-selectivity reductions in two of four body-selective areas. Our results demonstrate that DPs have widespread deficits throughout the face network, and they are inconsistent with a leading account of DP which proposes that posterior face-selective areas are normal in DP. The selectivity reductions in other category-selective areas indicate many DPs have deficits spread across high-level visual cortex.

M. Ida Gobbini, Dartmouth College

Neural mechanism for face recognition


People are highly skilled at determining that a face is unfamiliar and at reading social cues from the faces of strangers (e.g., facial expressions of emotion conveyed by unfamiliar faces). Face processing becomes more complicated, though, when considering the recognition of unique, view-invariant identity. Data show that, despite the subjective impression of high efficiency for recognizing unfamiliar face identity, performance is vastly superior for familiar faces. Recognition of familiar faces is remarkably effortless and robust. Automatic activation of knowledge about familiar individuals and emotional responses play crucial roles in familiar face recognition. I will present data showing how familiarity affects the earliest stages of face processing to facilitate rapid, even preconscious detection of these highly socially salient stimuli, and how the representation of identity is disentangled from low-level information along the visual pathways. I will present data supporting the hypothesis that the representation of personally familiar faces develops in a hierarchical fashion through the engagement of multiple levels of the distributed neural system, from early visual processes to higher levels of social cognition and emotion.

Kalanit Grill-Spector, Stanford University

Neural mechanisms of the development of face perception


How do brain mechanisms develop from childhood to adulthood to yield better face recognition? An extensive debate in the field of neurodevelopment concerns whether brain development is driven by pruning or by growth. Here I will describe results from a series of recent experiments using new MRI methods in children and adults, together with analyses of postmortem histology, that tested these competing theories. Anatomically, we examined whether there are developmental increases or decreases in macromolecular tissue in the gray matter and how anatomical development impacts function and behavior. Functionally, we examined if and how neural sensitivity to faces, as well as spatial computations by population receptive fields, develop from childhood to adulthood. Critically, we tested how these neural developments relate to perceptual discriminability of face identity and to looking behavior, respectively. Together, our data reveal a tripartite relationship between anatomical, functional, and behavioral development and suggest that emergent brain function and behavior during childhood result from cortical tissue growth rather than pruning.

Rob Jenkins, University of York, UK

How many faces do people know (and how many others do they differentiate)?


Despite decades of psychological research into face perception, some very basic metrics have never been estimated experimentally. Here I will attempt to estimate two of them: the number of faces that people know (familiar faces) and the number of unknown faces that people differentiate (unfamiliar faces). In linguistics, vocabulary size has been intensively studied and has clear implications for word reading and other verbal abilities. By analogy, the number of faces that people know may explain documented variation in face recognition ability.
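The analogy to vocabulary-size estimation suggests sampling-based estimators. As a purely illustrative sketch (not necessarily the method used in this talk), a Lincoln–Petersen capture–recapture estimate infers the total number of known faces from two independent samples of faces a participant recognizes:

```python
def lincoln_petersen(n1, n2, overlap):
    """Estimate total population size from two independent samples.

    n1, n2: number of distinct faces recognized in each sample
    overlap: number of faces recognized in both samples
    """
    if overlap == 0:
        raise ValueError("need at least one face recognized in both samples")
    return n1 * n2 / overlap

# Toy, made-up numbers: 300 and 280 faces recognized across two samples,
# with 170 faces recognized in both.
estimate = lincoln_petersen(300, 280, 170)  # ~494 known faces
```

The intuition: the proportion of the second sample that was also recognized in the first estimates the fraction of the person's full "face vocabulary" captured by the first sample.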

Margaret Livingstone, Harvard Medical School

The development of specialized modules for recognizing faces, scenes, text, and bodies: what you see is what you get


There are distinct regions of the brain, reproducible from one person to the next, specialized for processing the most universal forms of human expertise.  What is the relationship between behavioral expertise and dedicated brain structures?  Do reproducible brain structures mean only certain abilities are innate, or easily learned, or does intensive early experience influence the emergence of expertise and/or dedicated brain circuits?  We found that intensive early, but not late, experience influences the formation of category-selective modules in macaque temporal lobe, both for natural stimuli and for stimuli never naturally encountered by monkeys.  This suggests that, as in early sensory areas, experience can drive functional segregation and that this segregation may determine how that information is processed.  The pattern of novel domain formation in symbol-trained monkeys indicates the existence of a proto-architecture that governs where experience can exert its effects on brain organization.  Our most recent work addresses the questions of what that proto-architecture is and what happens if monkeys never see what they would normally develop domains for.

Aleix Martinez, The Ohio State University

The face of emotion: From faces and emotion to the visual recognition of intent


We now have computer vision algorithms that can successfully segment regions of interest in images and video; recognize faces, objects, and scenes; and even create accurate 3D models of them. But what about people's intent? The recognition of non-verbal behavior, including emotions, is fundamental to humans. Without it, we would constantly misinterpret one another, and human societies would be impossible. Yet computers are unable to read people's emotion and intent. In this talk, we will address this problem. First, we will identify the mechanisms humans use to communicate emotion and intent. Second, we will show how to design computer vision and machine learning algorithms that can visually interpret these signals quickly and accurately. A typical mechanism people use to express emotion is the movement of their facial muscles. Thus, we will design algorithms that can identify muscle articulations in face images and videos filmed “in the wild.” Specifically, we will derive algorithms that can recognize more than 8,000 facial configurations and dozens of emotion categories. We will also show an algorithm that can edit a single image of a face to make it express any of these facial configurations, and see that these images are indistinguishable from real pictures of facial expressions. We will then note that facial muscle articulations are not the sole mechanism by which people express emotion. When one experiences an emotion, the central nervous system releases hormones. These hormones change the flow and composition of one's blood, and these changes are visible as small variations in facial color because many facial blood vessels lie close to the surface of the skin. We will thus show algorithms that can read emotion from these color variations, even in the absence of any facial muscle movement, e.g., when people attempt to suppress an expression.
Finally, we will summarize ongoing work on other behavioral signals that communicate emotion and intent, including body pose and kinematics.
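The color-variation idea above can be sketched in a minimal, hypothetical form (this is not the talk's actual algorithm): average the color channels within a few facial regions and use the concatenated means as a feature vector for a downstream emotion classifier.

```python
import numpy as np

def color_features(face_img, regions):
    """Mean RGB per facial region, concatenated into one feature vector.

    face_img: H x W x 3 array of color values
    regions:  list of (row_slice, col_slice) pairs, e.g. cheeks, forehead
    """
    feats = []
    for rs, cs in regions:
        patch = face_img[rs, cs, :]
        feats.extend(patch.reshape(-1, 3).mean(axis=0))
    return np.array(feats)

# Toy synthetic 8x8 "face" with two regions (upper and lower halves).
img = np.zeros((8, 8, 3))
img[:4] = [0.8, 0.5, 0.5]   # upper region slightly reddish
img[4:] = [0.6, 0.6, 0.6]   # lower region neutral gray
regions = [(slice(0, 4), slice(0, 8)), (slice(4, 8), slice(0, 8))]
x = color_features(img, regions)  # 6-dimensional feature vector
```

In this framing, subtle blood-flow-driven color shifts would appear as small differences between such feature vectors even when no muscle moves.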

Alice O'Toole, The University of Texas at Dallas

Understanding face representations in deep convolutional neural networks: Face Space Theory evolves


Computer-based face recognition has improved substantially in recent years. Machines, circa 2005, competed favorably with humans at recognizing faces in images taken under variable illumination and across changes in facial expression and appearance. By 2010, machines performed nearly as well as humans in all but the most challenging cases. However, these early algorithms were incapable of recognizing faces that were not frontally posed. The development of deep convolutional neural networks (DCNNs) in 2012 abruptly changed the state of the art for machine-based face recognition, making recognition possible even across large changes of viewpoint. These networks are commonly trained with millions of images of thousands of people, and the number of computations between an image and the “top-level” face representation in a DCNN is typically on the order of tens of millions. It is not surprising, therefore, that researchers do not yet understand the nature of the face representations computed by DCNNs. In this talk, I will briefly review the evolution of computational models of face recognition and show how DCNNs address critical flaws in previous-generation models. I will present computational studies from my lab aimed at understanding how the feature codes at the top layers of state-of-the-art DCNNs support face recognition across a wide range of photometric and person variations, including changes in view. These codes may offer interesting insight into how people recognize familiar faces.
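A common way to use the top-layer codes the abstract describes (a generic practice, not necessarily this lab's pipeline) is to compare embedding vectors with cosine similarity: two images of the same identity should yield more similar top-layer codes than images of different identities. A minimal sketch with toy stand-in vectors:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (range [-1, 1])."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-D vectors standing in for real DCNN "top-level" face codes.
same_id_a = [0.9, 0.1, 0.0, 0.4]   # image 1 of identity A
same_id_b = [0.8, 0.2, 0.1, 0.5]   # image 2 of identity A (new viewpoint)
other_id  = [0.1, 0.9, 0.7, 0.0]   # image of identity B

s_same = cosine_similarity(same_id_a, same_id_b)
s_diff = cosine_similarity(same_id_a, other_id)
# A verification decision thresholds s_same vs. s_diff.
```

Real systems apply the same comparison to embeddings hundreds or thousands of dimensions long, extracted from the network's final layers.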

Bruno Rossion, CNRS, University of Lorraine, France

Pawan Sinha, Massachusetts Institute of Technology

Project Prakash: Merging science and service


'Project Prakash' is an initiative launched over a decade ago with the goal of providing sight surgeries to blind children from medically underserved communities in the developing world. In pursuing this humanitarian mission, the project is helping address questions regarding brain plasticity and learning. Through a combination of behavioral and brain-imaging studies, the effort has provided a picture of the landscape of visual learning late in childhood and has illuminated some of the processes that might underlie aspects of such learning.

Doris Tsao, California Institute of Technology

Faces: A neural Rosetta Stone


The specialized system for processing faces in the macaque brain has sometimes been thought of as a unique result of the evolutionary importance of face recognition to primates. I will discuss the organization and coding principles used by the face patch system, and then discuss how these principles generalize across all of IT cortex.

Galit Yovel, Tel Aviv University

Beyond faces: A comprehensive framework for person recognition


Humans are experts at person recognition. However, the study of person recognition has primarily focused on static images of unfamiliar faces, whereas in real life we typically recognize familiar people who are often seen in motion. Thus, to understand person recognition, we need to consider additional sources of information, including familiarity and motion, as well as the body and voice. In the first part of my talk, I will show that to account for familiar person recognition, both perceptual and person-related conceptual information should be considered, suggesting that studying unfamiliar faces may provide only a partial understanding of the process of person recognition. In the second part of my talk, I will explore the conditions under which the body and motion contribute to person recognition beyond the face and present a neural model for whole-person perception in face- and body-selective cortex. Overall, our studies move beyond the static image of an unfamiliar face and provide a comprehensive framework for the investigation of person recognition as it happens in real life.


Hervé Abdi, The University of Texas at Dallas

Gary Cottrell, University of California San Diego

Sharon Gilad-Gutnick, Massachusetts Institute of Technology

Swaroop Guntupalli, Vicarious

Lorenzo Torresani, Dartmouth College

Matteo Visconti di Oleggio Castello, Dartmouth College

Last Updated: 8/16/18