Rapid musical memory
When you are in a bar, restaurant or gym and you hear a familiar musical sound, how long is it before you can ‘name that tune’?
A new study suggests that we are remarkably quick at identifying at least the genre of the music we hear. Apparently it might take as little as 125 milliseconds! The speed and accuracy with which we can recognise music speaks to the depth and rapid processing capabilities of musical memory.
The authors of the new paper make a very good point to illustrate the speed at which we use our musical memory in everyday life. Imagine you are flicking through radio stations, trying to select something to listen to while you carry out a household chore. According to the authors, the scene probably sounds something like this:
“Amazingly, most listeners can perform this search mission very deftly, needing only the merest snippet to determine, first, whether a station is playing music and, second, whether the genre being broadcast is one that is desirable at the moment. Most listeners are able to make a choice in a much shorter time period than that provided by the scan function of most radios” (p2).
So how do we do this? There are two main pathways at work when we listen to music: a fast one and a slow one. The fast one deals with the unexpected sounds in music, like a sudden change in beat, volume or key. This pathway is built on our emergency body response system, which reacts to sudden changes in the environment and alerts us to potential danger. The slower, cortical system registers changes consciously, so we can more clearly identify what happens as a sound develops. So, roughly speaking, these pathways say, in order: 1) ‘something has happened!’ and then 2) ‘what is it?’
But calling the second pathway ‘slow’ has turned out to be a little unfair. Many studies, over the past 5 years in particular, have suggested that we need only tiny snippets of music to identify the genre, the emotion and even the performing artist. I remember seeing a presentation given by Carol Krumhansl last year where she showed that nearly 100% of listeners could identify Britney Spears’ ‘Baby One More Time’ from 100ms or less. It is that first, deep note: distinctive and apparently unmistakable!
The new study frames this slower listening response in an even more impressive, broad-scale context. Both musicians and nonmusicians (347 participants in total) showed an impressive ability to identify classical, jazz, country, metal, and rap/hip hop excerpts (10 examples of each genre, based on the most popular downloads from iTunes). None of the excerpts contained vocals.
Results
1) Participants recognised 76% of the tunes overall; at 250ms accuracy was 54%, and at 500ms it was already at 77%.
2) Classical music was the genre most accurately identified at 125ms whereas rap/hip hop was the worst, but this trend had reversed by 500ms.
3) Musicians were better at recognising jazz overall and nonmusicians were better at rap/hip hop.
4) There were no overall differences between men and women, although men were slightly better at recognising metal at 125ms and women were better at identifying rap/hip hop at 250ms.
Anecdotally, the experimenters say that their participants were often heard whispering the names of the artists or congratulating the experimenters on their choice of music. So a person had often not only categorised the genre, perhaps by matching it to a prototype schema in memory, but had also matched the short sound to a memory of an exact recording.
That is just a sample of results from the study – there are many! Overall, the paper paints a fascinating picture of how well we are able to identify the type of music we hear, with only milliseconds’ worth of auditory information upon which to base our decision. Biologically, this speed is supported by rapid auditory responses in the brain, led by the very oldest areas such as the brainstem. The beauty of adaptation is that the system which evolved to help us pick out the sounds of dangerous animals in our once hostile environment now helps us choose between Britney and Bach, Black Sabbath and Big Country.
Mace, S. T., Wagoner, C. L., Hodges, D., & Teachout, D. J. (2011). Genre identification of very brief musical excerpts. Psychology of Music. Published online 23 February 2011.
3 Comments
Luc Duval
Very good, concise review! I’m new to your blog and have a lot to catch up on.
With the genres given, it seems that instrumentation would be the most instantly differentiable characteristic. I would be very interested in seeing a study that used non-musical bursts of sound with genre-specific instruments as a comparison in order to figure out if listeners are picking up on other information from real music. I can’t get the full article; did the authors discuss this sort of thing at all?
vicky
You make a very good point Luc. The authors only looked at the direct effects of real snippets of music on memory performance. So it is for future studies to look at the roots of the effect, including the effects of timbre, as you state. Thanks for reading the blog – I really appreciate your comments!
Charles Ross
Hi, I’m a composer/musicologist doing a PhD at Glasgow Uni.
One of the mnemonic music models I have put forward is one which I believe to be used in a rather little-known but characteristic music found in pockets of the world, from Baffin Island to Burkina Faso to Papua New Guinea, in which a limited number of micromotifs are juxtaposed and performed in continuous chains of phase repetitions.
The music is often played at a fast tempo, making conventional frontal-lobe organisational decisions seemingly impossible. Musicians have been recorded practising the various motifs in isolation and then in connection with other motifs until a motif map is created (a little like a neural net), probably using a system similar to chunking. The modular nature of this kind of music means that a quite different kind of musical memory is in operation here.
I am now extending this theory into my own musical compositions and have been working on the possibilities of extending our awareness of earworms into pliable musical entities that begin to generate phase-iterative sets. Noting your interest in earworms, I would be very interested in knowing more about your research, and perhaps my own findings might be useful to you. Best wishes -charles-