ICMPC Day Two – Memory

The first memory session I attended at this year’s ICMPC only saw me catch two talks (truth be told, I skipped the third and fourth as I wanted a quick break before my own talk to do some last-minute practice!), but the first of them featured one of the greats of the field, Carol Krumhansl. The second featured Kat Agres, who presented an interesting EEG study of musical memory.

1) Carol is interested in the power of our musical memory: both how much music we can store and how easily we can recall tunes we have heard in the past. For this she has taken inspiration from the phenomenon of accurate judgements made on the basis of only minimal information about a stimulus. Studies of visual memory, for example, have shown very accurate memory for arrays of up to 10,000 pictures, with presentations as short as 1 second and retention intervals of days – memory under these circumstances can reach 85% accuracy…an incredible feat! What is the musical analogue?

Carol presented participants in her study with excerpts of 28 songs that were either 300 or 400 ms long and were drawn from 5 different decades. She asked people to try to recall the artist, title, decade, emotion, and style of each music slice. Accuracy at 400 ms was an astounding 26%! People were recognising a quarter of these very disparate song selections based on less than half a second of information. She also showed that our identification of emotion in the songs correlates at .93 with the judgements given when we hear 15 seconds of the same song. These results show that there is a huge amount of information available in every instant of music, even if people don’t recognise the tune. Also, as a nice aside, people tended to prefer the older songs, even though these were recognised best. I completely agree…they don’t write them like they used to!! And as a final lovely number, Carol suggested that if people hold around 100 songs of roughly 3 minutes each in memory (a rough average based on questioning her participants), then there is the potential to recognise 10,000,000 distinct sound slices.
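Just to see how that big number might hang together, here is a quick back-of-the-envelope sketch. The assumptions (100 known songs, 3 minutes each, a recognisable 400 ms slice able to start at roughly any 2 ms offset) are my own reconstruction, not Carol’s exact figures, but they land the count in the right ballpark.

```python
# Back-of-the-envelope arithmetic for the "10,000,000 sound slices" figure.
# Assumptions are illustrative, not taken from the talk itself.

songs_known = 100                   # rough average from questioning participants
song_length_ms = 3 * 60 * 1000      # 3 minutes per song, in milliseconds
slice_length_ms = 400               # length of a recognisable slice
onset_step_ms = 2                   # assumed resolution of distinct slice onsets

slices_per_song = (song_length_ms - slice_length_ms) // onset_step_ms
total_slices = songs_known * slices_per_song

print(f"{slices_per_song:,} slices per song, {total_slices:,} in total")
# ~89,800 per song, ~8,980,000 overall -> on the order of 10 million slices
```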

2) Next, Kat stepped up with her EEG study. She started with a really nice presentation of the background to expectation and prediction, and how both play a crucial role in auditory perception. She then showed evidence of how our neural responses change as a memory representation begins to emerge. Specifically, she tracked the evolution of an ERP component known as the N1, which appears strongly in highly predictive contexts and then decreases in amplitude once that predictive information has been learned. Kat used this phenomenon to test memory and learning of Irish reels. For her experimental condition she created slowed-down melodies from the reels (slowed down so she could time-stamp the ERPs), and for her control condition she created random melodies by scrambling the tones. She then played both conditions across two blocks.

For the Irish reels, Kat found greater N1 amplitude in block 1, which then decreased as predicted in block 2. For the random melodies the N1 actually increased across the two blocks. Kat suggested this reflected the increasing difficulty of trying to assign structure to, and therefore learn about, the random tunes. She backed up her conclusions with a time-frequency analysis showing increased activation in the alpha and beta bands for the random tunes – activations associated with working memory and attention. In conclusion, she showed that when we learn the structure of melodies in new music there is a decrease in obligatory sensory responses, a decrease which doesn’t happen if the sounds have no discernible structure. Nice work!
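For anyone curious what that kind of time-frequency analysis looks like in practice, here is a minimal sketch of computing alpha- and beta-band power from EEG epochs using Morlet wavelets in MNE-Python. This is my own illustration of the general technique, not Kat’s actual pipeline, and it uses simulated data in place of the real recordings.

```python
import numpy as np
import mne

# Simulate a single-channel epochs object standing in for real EEG data
sfreq = 250.0
info = mne.create_info(ch_names=["Cz"], sfreq=sfreq, ch_types="eeg")
rng = np.random.default_rng(0)
data = rng.standard_normal((40, 1, int(sfreq)))   # 40 epochs, 1 channel, 1 s each
epochs = mne.EpochsArray(data, info, tmin=-0.2)

# Morlet-wavelet time-frequency decomposition over 8-30 Hz
freqs = np.arange(8, 31, 1)
power = mne.time_frequency.tfr_morlet(
    epochs, freqs=freqs, n_cycles=freqs / 2.0, return_itc=False, decim=2
)

# Average power within the alpha (8-12 Hz) and beta (13-30 Hz) bands
alpha = power.data[:, (freqs >= 8) & (freqs <= 12), :].mean()
beta = power.data[:, (freqs >= 13) & (freqs <= 30), :].mean()
print(f"mean alpha power: {alpha:.3e}, mean beta power: {beta:.3e}")
```

In a real study the band power for the reels and the random melodies would then be compared across the two blocks, which is the kind of contrast Kat reported.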
