Musicians’ brains – locked in from the start
Musicians have different brains – that much we have known for a long time. The study of musician and non-musician brains is probably one of the first stories in the science of neural (brain) plasticity: the idea that our brains respond to, and are modified by, the things we experience in everyday life. Nowadays the existence of neural plasticity is beyond doubt: we see regular, remarkable examples of how the human brain, at any age though particularly in childhood, is able to re-organise itself in response to circumstances. For example, we know the brain can adapt after a stroke or serious injury, after the loss of any of the senses, and even as a result of our career choices. As for the latter, my favourite example is that of London taxi drivers. Dr. Eleanor Maguire and her team found that the drivers show an enlarged posterior hippocampus (the memory centre of the brain) which correlates with their possession of ‘the Knowledge’, the mental map of London streets that they use to navigate. As a result of such evidence we take it as a given that our brains will adapt to the world around us and to the demands that we make of them every day. It therefore makes sense that musicians’ brains would adapt as a result of their exposure to and engagement with music.
But the ease with which we today accept that brain plasticity results from musical practice is itself the product of over a century of research, which at first did not have the benefit of sophisticated brain-imaging tools. In fact the evidence goes back to Victorian-era scientists. Sigmund Auerbach (1860–1923) was a popular German surgeon and diagnostician who contributed numerous works on the operative treatment of tumours of the brain and spinal cord, nerve injuries, and epilepsy. At the beginning of the twentieth century he conducted a series of post-mortem brain dissections and reported that parts of the temporal and parietal lobes (in particular the superior temporal gyrus) were larger than normal in the brains of five famous musicians of the time (Auerbach, 1911). However, the problem with simply noting differences between musicians’ and non-musicians’ brains in this way is that you have no evidence for causation – how do you know their musical practice caused these changes? Maybe their brains were different to start with, and that is the reason they became successful musicians?
The only way to solve this kind of riddle is with longitudinal, developmental studies. You measure kids’ brains before they start music (or choose not to – that is your control group) and then you determine whether the changes that occur to their brains as they learn match those that we see in adult musicians. I know of only one group braving this kind of study. Results from Gottfried Schlaug’s lab are starting to confirm that the neural differences we see in adult musicians are not present when children start learning – so logic suggests they must be a response to their environment. It is not conclusive yet, but it is a good indicator that musician/non-musician brain differences are largely the result of neural plasticity.
So what are the neural differences between musicians and non-musicians? Well, there are quite a few of them, and I want to focus on just one recent study in today’s blog. So you will forgive me, I hope, if I say that if you want to know more about the differences in general, I can recommend an article by Dr. Lauren Stewart which gives a great summary of the subject. Today we are interested in the brainstem. This is the oldest part of the brain and the part that is largely in charge of pre-conscious processing.
I first heard about brainstem studies about four years ago when I saw talks by Dr Nina Kraus and Dr Patrick Wong. Up until that point I had heard a lot about studying the higher centres of the brain with fMRI, PET and EEG, but I had not been introduced to subcortical measures of musical processing. I found it fascinating. Both researchers had perfected the technique of measuring the Frequency Following Response (FFR), an evoked potential generated in the upper portion of the brainstem. In an FFR experiment a small number of electrodes are placed on the scalp (nowhere near as many as in a typical EEG scan) and a series of simple sounds is played to one ear. As a participant you don’t have to do anything; in fact you can even fall asleep! Your brainstem follows the frequency of the sounds that it hears, even when you are unconscious. It becomes ‘phase locked’, meaning that it displays a characteristic waveform that follows the individual cycles of the stimulus (i.e. its frequency). Cool stuff, eh?
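To get an intuition for what ‘phase locked’ means, here is a toy Python sketch (using only NumPy). It is my own illustration, not how FFR data are actually analysed in the lab: it simulates one response that follows the cycles of a stimulus tone and one that does not, and quantifies the locking as the share of spectral energy sitting at the stimulus frequency.

```python
import numpy as np

def phase_locking_strength(response, fs, f0):
    """Fraction of spectral energy at the stimulus frequency f0.
    A response that follows the stimulus cycles scores high here."""
    n = len(response)
    spectrum = np.abs(np.fft.rfft(response)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - f0))   # FFT bin nearest the stimulus frequency
    return spectrum[idx] / spectrum.sum()

fs = 8000                       # sampling rate in Hz
t = np.arange(0, 0.5, 1.0 / fs) # half a second of 'recording'
f0 = 220.0                      # stimulus frequency in Hz

rng = np.random.default_rng(0)
noise = rng.normal(size=t.size)

# A 'phase-locked' response tracks the stimulus cycles; an unlocked one is just noise.
locked = np.sin(2 * np.pi * f0 * t) + 0.5 * noise
unlocked = noise

print(phase_locking_strength(locked, fs, f0) >
      phase_locking_strength(unlocked, fs, f0))   # prints True
```

The real FFR analysis works on averaged evoked potentials, but the underlying idea is the same: the more faithfully the waveform tracks the stimulus cycles, the stronger the energy at the stimulus frequency.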
Before the FFR paradigm came along we knew that musicians could unconsciously detect smaller changes in pitch than non-musicians (see work by Stefan Koelsch), but we didn’t know where this ability came from: was it coming from the lower pre-conscious levels of the cortex, or from the much older brainstem regions? Use of the FFR paradigm has shown that long-term musical experience changes how the brainstem responds to sounds in the environment, and that this correlates with performance in behavioural tasks. For example, Dr Patrick Wong (Wong et al., 2007) showed that musicians have enhanced brainstem responses to the tones within speech (in Mandarin Chinese). But what about skills that are critical to performing musicians, such as detecting minute pitch variations and thereby being able to tell whether you are in tune?
A paper recently out in the European Journal of Neuroscience by Gavin Bidelman and his team looked at this question using the FFR paradigm. They examined the properties of the FFR in response to tuned (major and minor) and detuned arpeggiated triads in eleven musicians (vs. eleven non-musician controls). Detuning was accomplished by sharpening or flattening the pitch of the chord’s third. Following each note onset the authors took a ‘snapshot’ of the phase locking in the FFR occurring 15–20 ms post-stimulus onset. Peaks in the FFR were identified by the researchers and confirmed by independent observers. FFR peaks were then quantified and segmented into three sections corresponding to the three notes heard. Participants also completed a separate, standard pitch-discrimination task to determine whether the musicians had better responses at the perceptual level. What they found was amazing.
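Sharpening or flattening a chord’s third is simple arithmetic in cents (100 cents = one semitone, so a shift of c cents multiplies the frequency by 2^(c/1200)). The sketch below builds an equal-tempered major arpeggio and detunes its third; note that the root note and the 25-cent shift are my own illustrative choices, not the values used in the paper.

```python
def detune(freq_hz, cents):
    """Shift a frequency by a number of cents (100 cents = 1 semitone)."""
    return freq_hz * 2.0 ** (cents / 1200.0)

root = 220.0                     # A3 (illustrative choice)
major_third = detune(root, 400)  # 4 equal-tempered semitones above the root
fifth = detune(root, 700)        # 7 semitones above the root

# An in-tune major arpeggio, plus versions with the third sharpened/flattened.
in_tune   = [root, major_third, fifth]
sharpened = [root, detune(major_third, +25), fifth]  # illustrative 25-cent mistuning
flattened = [root, detune(major_third, -25), fifth]

for name, chord in [("in tune", in_tune), ("sharp 3rd", sharpened), ("flat 3rd", flattened)]:
    print(name, [round(f, 2) for f in chord])
```

Shifts of a few tens of cents are small enough that only the chord’s third moves, leaving the root and fifth as anchors against which the mistuning can (or cannot) be heard.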
Results
1) For the perception test: musicians showed better discrimination performance, and their enhanced ability was the same for major vs. minor distinctions as well as for tuned-up vs. tuned-down manipulations of pitch. The non-musicians could distinguish major from minor, but could not reliably detect the detunings.
2) For the FFR data: musicians showed faster synchronisation and stronger brainstem encoding for the third of the arpeggios, whether the sequence was in or out of tune (notice the enhanced peak size and regularity in the image above). Non-musicians, on the other hand, had much stronger encoding for the in-tune major/minor chords than for the detuned chords.
The close correspondence between these two results supports the theory that musicians’ enhanced ability to detect out-of-tune pitches is rooted in pre-conscious processing of pitch that occurs in the brainstem, and specifically in the enhancement of phase-locked activity.
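To make the idea of ‘close correspondence’ concrete, here is a minimal sketch of how one would test whether brainstem encoding predicts perceptual performance across participants, via a Pearson correlation. The per-participant numbers are entirely made up for illustration – they are not the paper’s data.

```python
import numpy as np

# Hypothetical per-participant scores (NOT from the paper): brainstem
# encoding strength vs. pitch-discrimination accuracy.
encoding = np.array([0.42, 0.55, 0.61, 0.48, 0.70, 0.66, 0.53, 0.59])
accuracy = np.array([0.61, 0.72, 0.80, 0.65, 0.88, 0.83, 0.70, 0.77])

# Pearson correlation: how tightly accuracy tracks encoding strength.
r = np.corrcoef(encoding, accuracy)[0, 1]
print(f"Pearson r = {r:.2f}")
```

A strongly positive r on real data is what licenses the claim that the perceptual advantage is ‘rooted in’ the brainstem response, rather than the two findings being unrelated.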
Conclusion
What fascinates me is that this kind of evidence fills in some much-needed gaps in our knowledge about how the so-called ‘lower’ centres of the brain are involved in processing jobs that are all too easy to attribute casually to the ‘higher’ centres, namely the cortex. In reality our perception of music starts at the level of the ear, and all the way along its journey to our conscious minds it is carefully dissected, pre-processed and shaped. And it seems that our experience of the world can shape destinations all the way along this pathway, contributing to the overall behavioural differences we see between musicians and non-musicians when they listen to music.
Thanks to Gavin (the lead author) for sending me the paper, which you can read for free (along with his other work in this area) at his website.
Paper: Bidelman, G.M., Krishnan, A., & Gandour, J.T. (2011). Enhanced brainstem encoding predicts musicians’ perceptual advantages with pitch. European Journal of Neuroscience, 1–9.
6 Comments
matthew avello
Hello,
I would like to speak with you about these tests on brain development. When I was a young child, before I could even speak, I would sing. My father was very musical so we just chalked it up to being influenced by him singing to me as a baby. When I was four years old I taught myself how to play the piano and began writing my own songs and singing melodies at the same time. When I turned 10 years old I picked up a guitar and knew how to play it even though I’d never tried before. When I hear speech or music, anything audible, I can actually section off each instrument or vibration and analyse them individually and simultaneously all at once… It’s like I can raise the volume and break it down to the point of actually seeing the vibrations in my mind. I’m not sure this is a normal thing but I’ve always been interested in digging deeper to find out what’s responsible for this. I believe this was something I was born with and not something learned.
Thanks,
Matthew Avello
Erika
Wow! I suspect that you were born with greater musical ability than a good 99% of the population. That said, studies with autistic savants suggest that learning, repetition and practice play a huge role in the development of super abilities.
While many scientists wouldn’t accept the idea I also wonder if you were not a musician in a previous life. Some theories of reincarnation would hold that your spirit chose the parents and genes that you have, so that your spirit can continue learning and express itself. (For me, Latin and ancient Greek are not so much languages to be learnt as remembered, something that you recognise and remember when you see (or hear) it again after several years.)
Bernd Willimek
Why do Minor Chords Sound Sad?
The Theory of Musical Equilibration states that in contrast to previous hypotheses, music does not directly describe emotions: instead, it evokes processes of will which the listener identifies with.
A major chord is something we generally identify with the message, “I want to!” The experience of listening to a minor chord can be compared to the message conveyed when someone says, “No more.” If someone were to say the words “no more” slowly and quietly, they would create the impression of being sad, whereas if they were to scream it quickly and loudly, they would come across as furious. This distinction also applies to the emotional character of a minor chord: if a minor harmony is repeated faster and at greater volume, its sad nature appears suddenly to turn into fury.
The Theory of Musical Equilibration applies this principle as it constructs a system which outlines and explains the emotional nature of musical harmonies. For more information you can google Theory of Musical Equilibration.
Bernd Willimek