
CCN talk January 24, 2014

MICHAEL CASEY

Department of Music and Department of Computer Science, Dartmouth College, Hanover, NH

Title: Decoding absolute and relative pitch imagery in the auditory pathway 

Michael Casey and Jessica Thompson

Time: 4:00-5:00

Place: Moore Hall, Room B3

Abstract

Our previous work (Casey, Thompson, Kang, Raizada, and Wheatley 2012) investigated decoding of hemodynamic brain activity in the feed-forward pathways involved in listening to rich musical stimuli. Our current work investigates top-down music processing via auditory imagery, using an imagined-music task. Most previous work on auditory imagery (e.g., Zatorre 2000; Zatorre, Halpern, and Bouffard 2010) used familiar tunes, such as nursery rhymes, whose associated lyrics elicit activation of language areas in the brain. We instead required stimuli that were clearly pitched and musical, yet wordless and easy to imagine. This led us to choose musical scales, which are accurately imagined by most trained musicians.

Our pilot experiment compared hemodynamic responses to heard and imagined musical tones at two levels of the pitch hierarchy: absolute pitch and relative scale degree. We used a continuous scanner acquisition paradigm with a two-second TR. Twenty-four major scales were heard and imagined in ascending and descending order, with two-second notes aligned to the scanner TR. A total of thirty-six distinct pitches were used in the experiment. The data were labeled in two ways, by absolute pitch and by relative scale degree, so that neither variable confounded the other. We applied masks for primary auditory cortex, which contains a tonotopic organization of spectro-temporal receptive fields, and for secondary auditory cortex (STS/STG), which is implicated in the cognition of hierarchical pitch structures.
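To make the dual labeling scheme concrete, here is a minimal sketch in Python; the MIDI numbering, helper functions, and example keys are illustrative assumptions, not details from the study.

```python
# Minimal sketch of the dual labeling scheme: each note of a heard or
# imagined scale receives both an absolute-pitch label and a relative
# scale-degree label. MIDI numbering and example keys are illustrative.

def major_scale(tonic_midi):
    """Return the eight MIDI pitches of an ascending major scale."""
    steps = [0, 2, 4, 5, 7, 9, 11, 12]  # major-scale interval pattern
    return [tonic_midi + s for s in steps]

def label_scale(tonic_midi):
    """Pair each note with (absolute pitch, scale degree 1-8)."""
    return [(pitch, degree)
            for degree, pitch in enumerate(major_scale(tonic_midi), start=1)]

# C major (MIDI 60) and D major (MIDI 62) share scale-degree labels but
# differ in absolute-pitch labels, so the two labelings are not confounded.
for tonic in (60, 62):
    print(label_scale(tonic))
```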

Using a multivariate pattern analysis (MVPA) approach, we tested both spectral clustering (unsupervised) and support vector machine (supervised) classification of high-vs.-low and pitch-category discrimination for the heard and imagined BOLD responses under both the absolute and relative pitch labelings. Preliminary supervised results for the absolute-pitch heard condition yielded Acc = 0.57, SE = 0.023 (null: Acc = 0.50, SE = 0.006), and the absolute-pitch imagined condition yielded Acc = 0.60, SE = 0.04. We will present further results of these preliminary experiments and give an overview of future directions for this work.
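As a rough illustration of the supervised step, the sketch below runs a linear SVM on trial-wise voxel patterns with cross-validated accuracy and a permutation null; the data, variable names, and parameters are placeholders, not the study's actual pipeline.

```python
# Sketch of supervised MVPA with scikit-learn: a linear SVM classifies
# trial-wise voxel patterns, with accuracy and standard error estimated
# by cross-validation and chance level estimated by label permutation.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.utils import shuffle

rng = np.random.default_rng(0)
X = rng.standard_normal((240, 500))  # trials x voxels (placeholder data)
y = rng.integers(0, 2, size=240)     # binary labels, e.g. high vs. low pitch

scores = cross_val_score(LinearSVC(dual=False), X, y, cv=10)
print(f"Acc={scores.mean():.2f} "
      f"SE={scores.std(ddof=1) / np.sqrt(len(scores)):.3f}")

# Empirical null: repeat the classification with permuted labels.
null = [cross_val_score(LinearSVC(dual=False), X,
                        shuffle(y, random_state=i), cv=10).mean()
        for i in range(20)]
print(f"NULL Acc={np.mean(null):.2f} "
      f"SE={np.std(null, ddof=1) / np.sqrt(len(null)):.3f}")
```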

Bio

Michael Casey is the James Wright Professor of Music and Professor of Computer Science at Dartmouth College, where he directs the Bregman Music and Auditory Research Lab, an interdisciplinary laboratory investigating the links between music, image, information, and neuroscience. He received his doctorate from the MIT Media Laboratory (1998) and has since held positions as Research Scientist at MERL, Cambridge, and Professor of Computer Science at Goldsmiths, University of London. He made significant contributions to the MPEG standards (ISO Moving Picture Experts Group), was PI/Co-PI for three EPSRC-funded projects in the UK, and is currently PI of an NEH-funded Digital Humanities project at Dartmouth. He has received faculty research awards from Yahoo! Research Inc. and Google Inc. His current work on neural decoding of auditory imagery is sponsored by the Neukom Institute for Computational Science at Dartmouth and a Dean of Faculty Scholarship and Innovation Award.
