
ICMPC Day Three – Computational Modelling

I ventured into the computational modelling seminar for one talk at the start of Day Three. Normally this would not be my choice of session; modelling is interesting, but it is usually too hard for me to follow, and when there are four sessions running in parallel there is always something more pressing to see. But today was an exception: my friend Martin Rohrmeier was presenting his PhD work on computational modelling of melodies, and I wouldn't miss that for the world!

Martin’s work looks into the nature of implicit memory for melodies. The perception of music depends to a large extent on our background of acquired musical structure, and priming studies have proved extremely useful in the past in demonstrating how we acquire that structure without conscious knowledge. For his PhD, Martin wanted to develop a new implicit learning behavioural paradigm, one that promoted as much effective learning as possible, and then to test known computer models of musical implicit memory/expectation to see which best matched the human results.

His baseline (human) experiment looks like it worked a treat. I am hoping to see the details of the paradigm, which he will send me soon, but basically, using a finite-state grammar, he managed to achieve learning effects of over 70% in both musicians and non-musicians (in fact, learning did not differ between the two groups). He also showed that participants had acquired generalised knowledge about the new musical grammar, by using test materials that were not directly related to those presented in the exposure phase. Believe me – that is some impressive learning!
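For anyone who, like me, finds “finite-state grammar” a bit abstract: the idea is that melodies are generated by walking through a small network of states, so that only certain pitch continuations count as “grammatical”. A toy sketch in Python might look like the following – the states, transitions and pitches here are entirely made up for illustration and are not the grammar Martin actually used.

    import random

    # Hypothetical finite-state grammar, invented purely for illustration.
    # Each state maps to a list of (pitch, next_state) transitions.
    GRAMMAR = {
        "S": [("C4", "A"), ("E4", "B")],
        "A": [("G4", "B"), ("D4", "C")],
        "B": [("F4", "C"), ("A4", "END")],
        "C": [("C5", "END"), ("E4", "A")],
    }

    def generate_melody(max_len=12):
        """Walk the grammar from the start state, emitting one pitch per transition."""
        state, melody = "S", []
        while state != "END" and len(melody) < max_len:
            pitch, state = random.choice(GRAMMAR[state])
            melody.append(pitch)
        return melody

    for _ in range(3):
        print(generate_melody())

Because every melody has to follow the arrows, listeners exposed to lots of them can pick up the regularities without ever being told the rules – which is exactly what the implicit learning test then probes.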

He then ran a second human experiment, this time using a grammar that violated Narmour’s principles of melodic expectancy. In these circumstances he found that learning dropped significantly, to 67% in both groups. So although participants were still learning, violating Narmour’s principles reduced the implicit learning effect.

Now the main question: can any of the established computer models of musical learning/expectancy match the human results across those two experiments? Martin chose to test Wiggins and Pearce’s N-gram model (developed at Goldsmiths!), a competitive chunking model and a simple recurrent network (SRN). Unfortunately I am not blessed with a thorough knowledge of how those models all work, but I do know that the N-gram model is based on measuring the frequency of co-occurrence across pairs of items (known as bi-grams; tri-grams and quad-grams can also be used), the chunking model is based on building up a hierarchy of probabilistic chunks, and the SRN is a feedforward network with a feedback loop that builds up context units. Martin varied many of the free parameters within these models, leading him to complete over 100,000 simulations. Wow!
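To give a flavour of the bi-gram idea (and only a flavour – this is a made-up toy sketch, not the actual Pearce and Wiggins model): during exposure you count how often each pitch follows each other pitch, and at test you score a melody by how probable its transitions are under those counts.

    from collections import defaultdict

    class BigramModel:
        """Toy bi-gram sketch: count pitch-to-pitch transitions during exposure,
        then score test melodies by their average transition probability."""

        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(int))

        def expose(self, melody):
            # Tally how often each pitch follows each other pitch.
            for prev, nxt in zip(melody, melody[1:]):
                self.counts[prev][nxt] += 1

        def transition_prob(self, prev, nxt):
            total = sum(self.counts[prev].values())
            return self.counts[prev][nxt] / total if total else 0.0

        def score(self, melody):
            # Mean transition probability: higher means more "grammatical" to the model.
            probs = [self.transition_prob(p, n) for p, n in zip(melody, melody[1:])]
            return sum(probs) / len(probs) if probs else 0.0

    model = BigramModel()
    model.expose(["C4", "E4", "G4", "E4", "C4"])
    model.expose(["C4", "E4", "G4", "C5"])
    print(model.score(["C4", "E4", "G4"]))   # familiar continuations score high
    print(model.score(["C4", "B3", "F#5"]))  # unseen transitions score zero

A real model would of course need smoothing, longer contexts and so on, but counting transitions and then predicting from them is the general idea, which is also why the free parameters (context length, weighting and the like) matter so much.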

Overall, none of the models exactly matched the characteristics of the human performance, mostly because they failed to spot non-grammatical items. For example, the N-gram models outperformed humans in Experiment 1, while the chunker didn’t get above chance. After a second round of simulations, in which Martin pre-exposed the models to music, he found that only the N-gram model (bi-grams and tri-grams) performed in the human range for Experiments 1 and 2.

Martin’s work is exceedingly clever stuff. If anyone can make modelling easy to comprehend (even if we don’t fully grasp the workings), it is him!

One Comment

  • Elisa Carrus

    I was at this talk too, and found it amazing. Such great work, and he made this stuff very easy to understand. You’ve done a great job at summing it up Vicky!