Why is music?
Music can move us like nothing else.
But how exactly does it work?
Scientists are seeking the answers.
“Music doesn’t really make any sense, from an evolutionary point of view,” says Jean-Julien Aucouturier, a brain researcher in Paris. “It doesn’t fit with the evolutionary view of why we have emotions.”
Darwin on music
“As neither the enjoyment nor the capacity of producing musical notes are faculties of the least use to man in reference to his daily habits of life, they must be ranked amongst the most mysterious with which he is endowed. They are present, though in a very rude condition, in men of all races, even the most savage; but so different is the taste of the several races, that our music gives no pleasure to savages, and their music is to us in most cases hideous and unmeaning.
“Whether or not the half-human progenitors of man possessed, like the singing gibbons, the capacity of producing, and therefore no doubt of appreciating, musical notes, we know that man possessed these faculties at a very remote period. M. Lartet has described two flutes made out of the bones and horns of the reindeer, found in caves together with flint tools and the remains of extinct animals. The arts of singing and of dancing are also very ancient, and are now practised by all or nearly all the lowest races of man. Poetry, which may be considered as the offspring of song, is likewise so ancient, that many persons have felt astonished that it should have arisen during the earliest ages of which we have any record.”
For some kinds of communication, it’s not difficult to understand the survival value. Screams can frighten a foe or warn others. Language can bond social groups. But why would we evolve to create and appreciate something as elaborate as music?
It’s a long-standing riddle. Since Charles Darwin’s time, the emotional power of music has been a matter of great debate. Darwin himself called the capacity to make or be moved by music one of “the most mysterious” of mankind’s endowments.
These days, big thinkers like Steven Pinker have waded in to argue that music is a nice-to-have by-product of the development of language. It is, in Pinker’s view, no more than an ‘auditory cheesecake’ – something we are glad of but could probably live without.
We react to a lion’s roar for good reason. The lion may be a threat. But the effect that a low cello note has on us is not a matter of life and death; whether someone plays a C or a C# is not significant from an existential point of view, yet it can still evoke great emotion. Like our emotional palate itself, music is immensely versatile: it can be melancholic and foreboding, or overwhelmingly euphoric – witness Beatlemania and its successors.
Neys to ocarinas
In our day, unraveling this mystery has become a fast-growing sub-discipline of science – involving thousands of researchers around the world, armed with brain scanners, acoustics equipment, psychological tests and musical instruments from Turkish neys to Mayan ocarinas. The European Research Council has been funding some of this work, in projects proposed by the researchers themselves. The ultimate answer to “why?” is still out of reach, but here we sample some of the musical themes they’re working on.
Jean-Julien Aucouturier is a researcher for France’s national research agency, CNRS, at the Institute for Research and Coordination in Acoustics/Music at Paris’ Pompidou Centre. He was trained in computer science, and has held several postdoctoral positions in cognitive neuroscience at the RIKEN Brain Science Institute in Tokyo and the University of Dijon. He now heads the CREAM neuroscience lab at IRCAM, and uses audio signal processing technologies to understand how sound and music create emotions.
By understanding emotional responses to sound, scientists hope music could become a therapeutic tool
Ever hear a happy piano? Jean-Julien Aucouturier can digitally tweak the sound of a musical instrument – or even your own voice – to trigger emotions. From a research unit in Paris’ Pompidou Centre, he has been exploring how subtle changes to the timbre of sound can spark fear or joy, sometimes without the listener being consciously aware that their emotional buttons are being pressed.
Aucouturier, a computer scientist turned brain researcher, says advances in technology are opening doors to studying the link between music and emotions. The upshot could be musical therapies and new ways to communicate with non-verbal people, including those with autism, stroke victims and coma patients. It may even equip companies with audio manipulation tools that would filter the voices of helpline staff to make them sound smilier.
“We want to crack the emotional code of music,” he says. “Sound can become a clinical technology – using algorithms that sculpt sound to activate certain brain areas, just as pharmaceutical molecules target certain parts of the body.”
What is auto-tune?
Good news for those whose imperfect pitch prompts them to skip karaoke nights or mime the chorus of Happy Birthday: technology can make you sound like a star.
Auto-tune measures – and alters – the pitch in vocal and instrumental music recordings. It works by slightly shifting the pitch to the nearest correct semitone.
This kindly covers up any bum notes, even during live performances. The technology has become a standard feature of professional recording studios, as suggested in this how-to video from a UK firm, Music Radio Creative. For his own research, however, Aucouturier developed a program tuned, not for pitch, but for emotion.
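The snap-to-semitone operation at the heart of that correction can be sketched in a few lines. This is a deliberately simplified model – real auto-tune also tracks pitch over time and resynthesises the audio – and the function name here is ours:

```python
import math

A4 = 440.0  # standard reference pitch in Hz

def snap_to_semitone(freq_hz: float) -> float:
    """Snap a frequency to the nearest equal-tempered semitone."""
    if freq_hz <= 0:
        raise ValueError("frequency must be positive")
    # Distance from A4 in (fractional) semitones, in log-frequency space
    semitones = 12 * math.log2(freq_hz / A4)
    # Round to the nearest whole semitone and convert back to Hz
    return A4 * 2 ** (round(semitones) / 12)

# A slightly flat A4 (434 Hz) gets pulled up to 440 Hz
print(round(snap_to_semitone(434.0), 1))  # 440.0
```

A sharp note is pulled down in the same way: 466 Hz, just off A#4, snaps to roughly 466.2 Hz.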
Is this the dawn of the music-aceuticals era? Access to large datasets on facial expressions, musical extracts, and emotions – along with digital tools for manipulating sound – has attracted engineers, computer scientists and data analysts to a field once dominated by musicologists and the occasional psychologist. This isn’t just shopping mall muzak; the goal is precise understanding of sound and emotion. So far, this has led researchers to categorise music as triggering “basic emotions” such as anger or fear; to identify the emotion-controlling amygdala deep inside the brain as involved in the sensation; and to observe how musical perception works across cultures – or doesn’t work at all, in people with brain injuries.
Aucouturier’s team, which leads the ERC-funded CREAM project, has found a way to look at what’s happening subconsciously when we process sound – by studying the tiny muscle twitches associated with smiling.
“We sat people at a computer and attached electrodes to their face,” he explains. “This allows us to measure activity of the muscles we use to smile.”
Next they played voices that had been manipulated to make them sound happier, sadder or more fearful by adjusting the timbre – a technique described as an auto-tune for emotion.
“We built a tool that changes your voice to make it smilier than it actually is,” Aucouturier says.
Finally, they watched the electrodes. They picked up more contractions of ‘smile’ muscles when people were played voices tweaked to sound happier. The effect was very subtle. The researchers couldn’t always see the smiles, and the subjects couldn’t always consciously tell a ‘happy’ voice from an unmanipulated one. But the electrodes didn’t lie: smiling, they were.
“Sometimes their muscles were more accurate than their conscious judgements,” he explains. “Even when they were able to tell which voice was happier, their facial muscles knew it before they did.”
Smiling comes with all kinds of physiological and psychological benefits. Even a forced smile – or a smile that forms when you hold a pencil between your teeth – can improve your mood. Using sound to fire our smile muscles, without telling a joke or a heart-warming story, could trigger positive emotions without us even realising it.
But what have smiles got to do with music?
DAVID and his electronic lyre
As part of their research, Aucouturier and his colleagues developed some software to study how voices convey emotion. They call it DAVID, partly as an acronym for Da Amazing Voice Inflection Device, but also as a tribute to Talking Heads frontman David Byrne, who was one of its first users.
The software subtly tweaks the sound properties of a spoken sentence to make it a bit more sad, happy, or scared. Often the impact is subconscious: listeners may not detect much difference, but measurements of their facial muscles in the lab tell a different story.
Here are a few samples – can you tell which emotions are conveyed?
Clip 1 - Happier
Clip 2 - Sadder
Listen to someone speaking on the phone and you can often tell whether they are smiling. The key to this is timbre: the shape of the lips alters the quality of the sound without changing the pitch or amplitude.
“It works in a similar way with a guitar,” says Aucouturier. “If the strings of two guitars are the same length they play the same notes, but a larger-bodied guitar has a different timbre to a smaller guitar, and variation in shape can also have an effect.”
Guitars, flutes, pianos – his software can be applied to any sound. “We are now testing whether people can pick up a happy signal from manipulated musical sounds. The question is do we get a smiley response to a smiley piano?”
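Timbre differences of this kind show up in the balance of a sound’s harmonics. A standard, if crude, proxy is the spectral centroid – the amplitude-weighted average frequency of the spectrum. This is not necessarily the measure Aucouturier’s team uses, but it illustrates how two sounds with the same pitch can still be told apart:

```python
import numpy as np

def spectral_centroid(signal: np.ndarray, sample_rate: float) -> float:
    """Amplitude-weighted mean frequency of a signal's spectrum, in Hz."""
    magnitudes = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(np.sum(freqs * magnitudes) / np.sum(magnitudes))

sr = 16000
t = np.arange(sr) / sr  # one second of audio
# Same fundamental (220 Hz), different harmonic balance = different timbre
mellow = np.sin(2 * np.pi * 220 * t) + 0.1 * np.sin(2 * np.pi * 880 * t)
bright = np.sin(2 * np.pi * 220 * t) + 0.9 * np.sin(2 * np.pi * 880 * t)
print(spectral_centroid(mellow, sr) < spectral_centroid(bright, sr))  # brighter sound, higher centroid
```

Both signals play the same note, but the “brighter” one puts more energy into the upper harmonic and so has a higher centroid – much as a smile brightens a voice without changing its pitch.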
One of the potential applications of the research is with coma patients. It can be difficult to know whether someone in an apparent vegetative state is processing information and sounds in their environment. But what if those sounds could be manipulated to make them stronger and more emotionally resonant? “If you could use sound to trigger fear, for example, you could test whether they have an emotional response,” explains Aucouturier.
People with autistic spectrum disorders could also benefit. The French team plans to work with doctors to see whether sounds with added ‘smiles’ or enhanced trustworthiness could be used to communicate with people who have difficulty maintaining eye contact or who have problems reading facial expressions. You don’t need to see a smile to feel it.
As with many technologies, some of the more sci-fi applications are borderline dystopian. The power to manipulate emotions through altered sounds is the power to manipulate people – often without their knowledge.
How do you play ‘domineering’?
Can you really tell what someone is thinking when they’re playing an instrument? French composer Erik Satie often wrote ‘thought’ instructions into his scores: “Obey…Settle down…Don’t worry…Tired…” are some of the enigmatic markings on his 1897 piano pieces Pièces froides.
Aucouturier’s lab decided to test that, by asking some professional musicians to improvise with an emotion in mind: domineering, insolent, disdainful, conciliatory or caring. Then others listened – some musicians, some not.
Surprisingly, it worked: more often than chance would allow, the listeners picked the right emotion. Not so surprisingly, the musician-listeners were better at it than the non-musicians. You can read the details in this 2017 paper from the journal Cognition.
In one experiment, CREAM researchers asked people to tell a story about being late for work on their first day at a new job. Participants were wearing earphones that allowed them to hear their own voice as they spoke. But, unknown to them, the sound had been manipulated to make it seem emotionally positive or negative. This influenced how people felt about the story they were telling, and even affected their word choices. Those who were hearing their own voice layered with a subtle worried timbre began to feel more stressed and view the episode in a more catastrophic light. People who heard their voices laced with happiness became more philosophical about the idea of being late for work.
“They didn’t know that they were hearing their own voice with effects that we had added in real-time,” he says. “The more they heard the happy-tuned version of their voice, the happier they felt. They even began to change the words they used depending on what they were hearing.”
Ongoing experiments on how earbuds could modify incoming sound may pave the way for devices that filter the world to make it less – or more – stressful. There are other possibilities too: the entertainment industry could offer emotionally souped-up music; workplaces could pipe in sounds that boost productivity or wellbeing; politicians could speak through a trust-laden microphone; and, when you call your mobile phone operator to complain, the customer service operator could auto-tune their voice to something warmer and more caring. They may be playing you like a fiddle.
“This is not even science fiction,” says Aucouturier. “We have some of these technologies now. We want to make socially useful tools but also raise the flag to the public about what can be done with sound technologies. It’s best to know what’s possible.”
Michael Ellison is a composer and reader in the School of Arts at the University of Bristol. He combines contemporary and traditional influences into a personal musical idiom. His first opera, Mevlâna-Say I am You (Rotterdam Operadagen and Istanbul Music Festivals, 2012) integrated Turkish traditional instruments into contemporary music—a direction his second opera, Deniz Küstü (Istanbul Music Festival, 2016, Jones/Tanbay/NOHlab) extends. Ellison has been commissioned by BBC Symphony Orchestra, Acht Brücken Festival, Radio France, Grenoble Festival, New York Youth Symphony, Siemens Foundation, Nova Chamber Music Series, amongst others. He is also co-director of Istanbul’s Hezarfen Ensemble.
Researchers in Britain and Turkey try to find a common musical language
Michael Ellison knows that globalisation causes cultures to collide – creating problems, but also possibilities. Can music bridge the gap?
Practice makes perfect
What happens in the rehearsal room when East meets West.
Ellison is a soft-spoken American composer, now at the University of Bristol, leading an ERC project that morphs music from two disparate musical worlds: Turkish makam and Western classical and contemporary music. With Istanbul Technical University, he and his research team are working with Turkish instrumentalists, developing new notation systems and managing workshops to combine the two.
It can be difficult. For starters, he says, the Turkish tradition requires good improvisers: “a lot of what they do is by ear,” he says. Western classical musicians generally want it all written down. When they come together, “musicians from both sides get out of their comfort zone. It’s a slow process.”
At one workshop, the Western musicians tried to make a piece of makam music sound more ‘together’, rather than improvised – but the Turkish musicians wouldn’t buy it. Ellison relates:
“One of the Western musicians finally gave the Berlin Philharmonic as an example of the pinnacle of playing exactly together to an amazing degree.
“Our ney player, Bülent Özbek, then asked, ‘So the most interesting thing about the concert is that they all play exactly together?’”
What is makam?
“A makam (maqam in Arabic) is a series of trichords, tetrachords, and/or pentachords that make up a ‘scale’, with a particular tonic, particular dominant notes, and a melodic progression (ascending, descending, or a combination of the two). The scale of a makam is not a scale in the Western sense, as it may not repeat above and below the primary octave, and notes may shift at certain points in the progression of the makam.
“There are more than 200 makams, more than 50 of which are relatively well known and commonly practised today. In Turkish music a greater distinction is often made regarding the melodic direction or path of a makam than in Arabic music.”
“Yes, they are so amazing,” was the answer.
“To which Özbek replied, ‘Well, I wouldn’t be interested in going to that concert.’”
Ellison is very nearly fluent in both types of music. Classical Turkish music is called makam, and dates to at least the 15th century and the court of Ottoman Sultan Murad II. The word denotes a place – referring to the way each of its musical modes has a starting place and final note (much as Western modes, from ancient Greece onward, have tones around which the melody moves). The instruments are delicate and varied – from the lute-like oud to the ney, a flute made from a reed and blown from one end.
Ellison is hoping his research into East-West musical pairing will result in the birth of a new strand of contemporary music and opera. This has so far resulted in one chamber opera as well as new technologies and strategies to bring the musicians of different traditions closer together, such as a souped-up Western keyboard that can play Turkish sounds. He is studying the voice in detail, with spectrographs to analyse a singer’s tone colour and (coming soon) an ‘electroglottograph’ to measure the vibrating vocal folds. These efforts, together with videos of traditional singers, may help teach certain types of Anatolian singing. Just how do Yörük and Kurdish singers use their throats when they perform?
“The voice is the primary instrument in the music of many cultures,” says Ellison. “In Turkish music, for example, instrumentalists will listen to the great singers in the tradition to learn how to play.”
Inevitably there are also politics involved. Ellison says that when he first got to Turkey some 20 years ago he was surprised by “the amount of political information an average person tended to ascribe based on whether one was ‘Westward-leaning’ musically speaking or …’Eastern leaning’. I've had to explain to people a number of times that my interest in Ottoman or makam music doesn’t have anything to do with a political agenda; it simply had to do with the music.”
His ambition is to get more Western composers and musicians thinking Turkish; a book is in the works, for instance. But he has already written an opera combining the musical traditions. It is called Deniz Küstü: The Sea-Crossed Fisherman, based on Turkish writer and human rights activist Yashar Kemal’s 1978 novel by the same name. The 70-minute piece integrates Turkish and Western instruments, and includes contemporary choreography and video. The music varies from harsh to playful, from dissonant to dreamlike. A reviewer in Opera magazine described it: “Instruments break out of their traditional roles and mix to form novel sonorities, so that the sea music shimmers.”
There’s a second, large music theatre work in the pipeline, based on another novel by the same author, Legend of 1000 Bulls, about the end of nomadism of the Yörük tribes in Eastern Turkey.
“It’s a sad, but beautiful novel about people forced to settle in another place,” says Ellison. “There are interesting parallels with migration today.”
Natalie Sebanz is a CEU professor whose research revolves around the cognitive and neural basis of social interaction, with a special focus on how we coordinate our actions with others. She obtained her PhD at the Max Planck Institute for Psychological Research, Munich, and has held appointments at Rutgers University, the University of Birmingham, and Radboud University. She is a recipient of the European Science Foundation’s Young Investigator Award and the Young Mind and Brain Prize.
OK, piano duets are dull. But some researchers are studying the way they work. Could this help us understand collaboration – from humans to robots?
How do duets really work? In Natalie Sebanz’s lab, an expert and an amateur pianist try to keep in step.
Want to listen in? Here, an amateur plays the (very simple) piano part alone while the expert listens.
Here, the expert joins in and tries to compensate for the amateur.
What makes a great musical duet? Is it just another kind of performance, or is there something special about the collaboration involved? Answering those questions will not only make for better piano recitals; it also promises to improve all kinds of human interactions – and, possibly, give robots the kind of ‘intuition’ essential to working effectively with humans.
What happens during a show – between the performers and the audience? Another form of collaboration was put under the microscope by a group of scientists and artists during a 2013 Paris workshop involving Sebanz and other researchers.
As a cognitive scientist with an interest in music, Natalie Sebanz at Central European University in Budapest is delving deep into what happens in the brain when two people play a piano duet.
“We want to understand how people coordinate their actions to achieve a shared outcome,” says Sebanz. “In musical duets, timing matters on a millisecond scale.” What’s more, duet players also make predictions about what their partner is likely to do next. And what happens if your partner is a novice, or their timing is off? “It’s not about reacting. You have to anticipate the other pianist’s next actions.”
Sebanz leads JAXPERTISE, an ERC project looking at joint action learning. Her team invited pairs of skilled pianists to memorise both parts of a piano duet and then asked them to play together – each taking one part while their partner took the other. Both wore an electroencephalography (EEG) skull cap – a device that monitors electrical brain activity.
Birds do it, too
Duets also happen in nature – as this video of two plain-tailed wrens suggests. According to researcher Eric S. Fortune of Johns Hopkins University, and colleagues, the top bird in this video is the male; below is the female. An oscillogram traces the male and female parts in blue and magenta, respectively.
The first performance was flawless. After all, these were two professional and well-rehearsed musicians. Then Sebanz had them wear headphones, and began to play with the pitch of what they were hearing to see how the pianists would respond.
Some of the notes were changed in a way that deviated from the score but didn’t affect the joint auditory outcome. For example, although the pianists could hear the “wrong” note, it still fit the harmony of a chord produced by the pianists’ combined pitches; no dissonance. For other notes, the pitch was changed in a way that did jangle the resulting sound.
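The distinction between the two kinds of altered notes can be made concrete with pitch classes – the twelve note names that repeat in every octave. This is a toy formalisation, not the study’s actual analysis:

```python
# Pitch classes: C=0, C#=1, ..., B=11 (MIDI note number modulo 12).
# A changed note still "fits" the joint harmony if its pitch class is
# already part of the chord the two pianists produce together.
def fits_harmony(note_midi: int, chord_midi: list[int]) -> bool:
    return note_midi % 12 in {n % 12 for n in chord_midi}

C_MAJOR = [60, 64, 67]  # C4, E4, G4

print(fits_harmony(76, C_MAJOR))  # E5: wrong octave, but no dissonance -> True
print(fits_harmony(61, C_MAJOR))  # C#4: jangles against the chord -> False
```

A substituted E an octave up deviates from the score but keeps the chord consonant; a C# clashes with it, which is the kind of error that produced the strongest brain response.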
The study showed that duet players constantly monitor not only their own playing but also that of their partner in order to make predictions about what they will do next. When mistakes were built into the resulting music, it sparked a strong reaction in the brain. This was most pronounced when the errors affected the overall result. “People are very sensitive to errors, especially their own,” says Sebanz.
Knowing the score
In a separate study, the JAXPERTISE team looked at the dynamics of duets in which one partner is an expert and the other is a novice. How good, they wondered, were experts at predicting – and accounting for – the faulty timing of amateurs? In some instances, they paired novices with skilled pianists who were familiar with both parts of the duet; in others, the expert only knew their own score. They found that where the experts knew both parts, they were better able to coordinate in ways that improved the overall outcome of the duet.
This chimes with research showing that there are two ways to achieve a harmonious duet: have a clear leader and follower, or have two equal partners. By contrast, she says, “when there’s a mix of the two – where partners are neither equal nor clear about who is taking the lead – you have a problem. Outside music, you can see it in companies where there is a hierarchy but leadership is lacking.”
This is one of several ways that studying duets can have applications outside music. It has implications for dance – where it takes two to tango – as well as in doubles tennis. Teacher-student relationships can also be about working towards a shared outcome. “Our next project will look at what makes a good teacher. Is it better to take turns playing a piece of music so that the student imitates the teacher, or do you gain something by playing together – giving the student the comfort of a scaffold within which to operate?”
There is also growing interest in cooperative robots – or cobots – which would work alongside humans in factories, offices and domestic situations. For instance, when people move in synchrony they like each other more; so if robots could tune in to how humans move and speak, that could make for more natural human-machine interaction.
“A lot of research from our field is being picked up by people in robotics,” Sebanz says. “Some of it is looking at relatively simple cooperative tasks – like a robot handing a cup to a human. Just like a duet, this requires the robot to anticipate what their human partner will do next. It’s all about making predictions.”
Gerhard Widmer is professor and head of the Department of Computational Perception at Johannes Kepler University, Linz, and leads the Intelligent Music Processing and Machine Learning Group at the Austrian Research Institute for Artificial Intelligence (OFAI), Vienna. His research interests include AI, machine learning, and intelligent music processing, and his work is published in a wide range of scientific fields, from AI and machine learning to audio, multimedia, musicology, and music psychology. He is a fellow of the European Association for Artificial Intelligence (EurAI), and has been awarded Austria’s highest research awards, the START Prize (1998) and the Wittgenstein Award (2009). He currently holds an ERC Advanced Grant for research on computational models of expressivity in music.
Tolstoy was always deeply interested in the philosophical grounds of music: What is music? What does it do? Why was it made? Here, the British Library summarises.
“Music is the shorthand of emotion,” Leo Tolstoy once wrote in a letter to his wife. If so, how exactly does that emotional language work?
Gerhard Widmer leads a project funded by the European Research Council that uses computers as tools to help decode this very human art. With them, he is investigating musical expressivity, especially in musical performance: the art, he says, of shaping “parameters such as tempo, timing, dynamics, or articulation so that the resulting music sounds natural and musical to human listeners, and conveys intended expressive or emotional qualities.”
Citizen scientists: Join the study!
Here are two music listening games that you can play which will contribute some empirical data to Widmer’s project, about how humans recognise and categorise expressive qualities.
Both games are completely anonymous, and no personal data are stored.
The Short Con Espressione Game
(~ 5-10 minutes)
Here are five recordings of the same piece of classical music (by Mozart), as played by five different pianists. Listen and describe.
The Long Con Espressione Game
(~ 30 minutes)
Listen to different performances of the same classical piano piece – nine in total. Describe each performance.
Widmer’s research already has some fans. Using his team’s computer code, a group of Australian and Italian researchers found that most people couldn’t distinguish between samples of human and computer-generated piano music; the computer performances came from Widmer’s lab and from research teams in Italy, Sweden and Japan. They presented a panel of 172 human listeners with recordings of seven performances of a piece of piano music by German-Danish classical composer Friedrich Kuhlau. One performance was by a human – an internationally experienced concert pianist; the other six were generated by various algorithms.
Surprisingly, the experiment showed that the human performance was not rated better, at a statistically significant level, than the computer-generated performances. Best among the computer algorithms used in the study - and the only one to surpass the human pianist in terms of mean perceived “humanness” - was by two of Widmer’s team members, Maarten Grachten and Carlos Cancino.
It uses a neural network to analyse the sheet music of a piece, extract a large number of so-called ‘basis functions’ (features capturing structural aspects of the piece), and predict musically reasonable timing, dynamics and articulation values for every note to be played.
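As a much-simplified sketch of the idea – the real system is a trained neural network, and the features and weights below are invented purely for illustration – a model of this kind maps per-note score features to expressive parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'basis functions' for each of 8 notes: stand-ins for the real
# score features (pitch height, metrical strength, phrase position).
n_notes = 8
features = np.column_stack([
    rng.uniform(0, 1, n_notes),          # pitch height (normalised)
    np.tile([1.0, 0.25], n_notes // 2),  # strong/weak beat pattern
    np.linspace(0, 1, n_notes),          # position within the phrase
])

# Made-up 'learned' weights: e.g. play higher notes and strong beats a
# little louder, and get gradually softer towards the end of the phrase.
weights = np.array([0.3, 0.4, -0.2])
loudness = 0.5 + features @ weights  # one predicted dynamic value per note

print(loudness.shape)
```

The actual project learns such mappings from recordings of expert performances, and predicts timing and articulation as well as dynamics; this linear stand-in only shows the score-features-in, expression-out shape of the problem.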
Widmer is pleased, but cautious. "Much as I like to hear that our algorithm did well, I would be careful not to over-interpret this result," he says. "Music and expressive performance are extremely complex and multi-faceted art forms. This study was based on a single piece of music, of a particular style, with particular properties; it is too early to draw strong and general conclusions. But the result does indicate that we may be moving in the right direction."
Fang Liu is associate professor at the University of Reading and her research aims to understand how the human brain processes pitch information for linguistic and musical purposes during production and perception.
Around 4% of the population suffers from ‘amusia’ – a disorder affecting the perception and production of pitch in music. Understanding this phenomenon could help a seemingly unrelated group: people with autism.
How's your sense of pitch?
We all know people whose singing is awful – but for many, the problem is more than a minor case of tin ear. One in every 25 people suffers from some form of amusia, a deficit in how their brains process pitch. Not only do people with amusia have difficulty producing and recognising melody; they can also struggle to tune into subtle tonal differences in language.
Pitch plays an essential role in encoding speech prosody and musical melody, and in conveying emotion through both speech and music. “Having precision in pitch matters more in music than in speech,” says Fang Liu, an ERC-funded researcher from the University of Reading. “But in speech, pitch can change the meaning of a sentence or a word.” For example, the statement ‘It’s from Emily.’ and the question ‘It’s from Emily?’ might not sound substantially different to someone whose pitch sensitivity is off.
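The statement-versus-question cue can be caricatured as a rule about the final pitch contour. This is a deliberately crude sketch – real prosody models are far richer, and the pitch values below are invented:

```python
def intonation(pitch_hz: list[float]) -> str:
    """Crude rule: rising pitch at the end of an utterance suggests a question."""
    tail = pitch_hz[-3:]  # the last few pitch estimates of the utterance
    return "question" if tail[-1] > tail[0] else "statement"

# "It's from Emily." -- pitch falls towards the end
print(intonation([220.0, 215.0, 210.0, 200.0, 190.0]))  # statement
# "It's from Emily?" -- pitch rises towards the end
print(intonation([220.0, 215.0, 210.0, 230.0, 250.0]))  # question
```

For a listener with amusia, the difficulty is not the rule itself but detecting the pitch movement when the rise or fall is small.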
For the most part, amusia does not affect everyday communication: people who are tone deaf can usually tell a statement from a question when the pitch contrasts involved are sufficiently large, and they are also able to produce speech in a normal way. But a separate neurodevelopmental condition, autism spectrum disorder, can have the opposite effect: impaired speech with improved pitch. Around half of children with autism are non-verbal; those who can speak often have exaggerated emotional expression and difficulty differentiating statements from questions, but show enhanced musical abilities.
While language problems in autism have been studied since the 1940s, research on how the amusic brain processes music and language is only recent, says Liu. And no research has looked at the link between the two. The ERC-funded project led by Liu is looking at both conditions to deepen understanding and figure out how to help those affected.
Cooperating brain areas
‘Don’t use that tone with me’
In intonation languages such as English, pitch is commonly used to differentiate a statement from a question, emphasise a particular word in a sentence, and to express different emotions such as happiness and sadness. In tone languages, such as Mandarin, apart from fulfilling all those roles, pitch is also used to differentiate word meaning at the syllable level, and thus plays a much bigger part in communicating meaning.
In Mandarin Chinese, for example, the meaning of a syllable can be changed by any of four tones: a high level tone, a rising tone, a falling-rising tone, and a falling tone.
The classic example is the word/sound ‘ma’ which means ‘mother’, ‘hemp’, ‘horse’ or ‘scold’ depending on which tone you use. And, just for good measure, the short toneless ‘ma’ is often added to the end of a sentence when asking a question.
| mā | 媽 (trad) / 妈 (simp) | mother |
| má | 麻 | hemp |
| mǎ | 馬 / 马 | horse |
| mà | 罵 / 骂 | scold |
Despite the pitch sensitivity required to master tone languages, Mandarin speakers with amusia tend not to have too many problems with their native tongue. “We have tested Mandarin speakers with amusia and they have no problem with speech communication in everyday life,” explains Liu. “This is because in speech, we use huge pitch contrasts to denote different meanings. It is only when it comes to more subtle differences that problems arise.”
Music and language share similar properties and are processed in overlapping brain regions, which sometimes combine to help us understand or appreciate speech or music. Liu suspects that the key to normal understanding of music and language is having a good balance in recognising and producing form and function in the two domains.
When people with amusia are played two tones and asked which is higher or lower than the other, they struggle if the difference is subtle. People with autism, on the other hand, generally have very good low-level pitch processing – “but this seems to inhibit them from having normal speech perception and production”, says Liu. “It actually causes problems with generating categories in speech. We think this is due to an impaired ability to separate form from function in language.”
To test the theory, her team is running a series of behavioural experiments to examine whether and to what extent people with amusia and autism differ in pitch processing abilities, memory capacities, and cognitive processing skills, and how this is linked to form and function processing in music and language. The team will also try to pinpoint the neurophysiological origins of speech and musical processing deficits in amusia and autism, and to find out whether speaking a tone language such as Mandarin would affect communicative abilities of people with autism.
“My hypothesis is that people with autism are confusing form with function,” says Liu. “People with amusia on the other hand have the opposite problem: they can struggle with form but not with function. In both cases it’s a problem of misbalancing between form and function in music and language.”
If the project leads to greater understanding of what lies behind the imperfections of both groups, it could pave the way for the design of new treatments.
“We are hoping to design some interventions that would improve language abilities of people with autism and musical abilities of people with amusia,” says Liu. “We plan to train autistic people to understand functions in language, and amusic people to process form in music.”