How do you feel?
The body is a sense machine. We are about to unlock its full power.
How humans perceive the world is stranger than you think. ERC researchers are unravelling the mysteries of the senses, helping us to see with our ears, feel sound and explore new doors of perception. Their insights are already spinning off into robotics, wearable technologies and ‘9D-TVs’.
We stand on the brink of breakthroughs that will restore lost senses, give us new abilities, and accelerate the development of human-like robots. Behind it all is a boom in basic research into how the brain perceives the world – studies that include those by Israeli neuroscientist Amir Amedi and others funded by the European Research Council, the EU’s frontier research agency. The impact, these researchers predict, will run from the profound to the, well, less profound: prepare yourself for Hollywood smell-tracks.
Can you see with your ears? Take the EyeMusic test
To help the blind, Israeli researcher Amir Amedi has developed a system to convert images into sound – an aural form of Braille. But this version works with a smartphone, so you can take a picture of anything and convert it to sound.
How does it work?
EyeMusic looks at shapes, in a picture on-screen or taken with your smartphone camera, and translates them into Soundscapes - auditory representations. It ‘plays’ objects at the top of a picture with a high pitch; at the bottom, with a low pitch. It plays different colours – red, yellow, green, blue and black – with the sound of different musical instruments.
For any image, then, EyeMusic scans from left to right, just like reading a book. Click on the four images below to see how it works.
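The mapping described above can be captured in a small sketch. This is purely illustrative – the instrument table and pitch range below are invented for demonstration, not taken from the actual EyeMusic software: each column of a tiny image grid becomes one time step in the scan, row position sets pitch, and colour picks the instrument.

```python
# Illustrative sketch of an EyeMusic-style image-to-sound mapping.
# Not the real EyeMusic code: the instrument choices and pitch range
# here are hypothetical.

INSTRUMENTS = {"red": "piano", "yellow": "flute", "green": "guitar",
               "blue": "trumpet", "black": "drum"}  # invented mapping

def image_to_soundscape(grid, low_hz=220.0, high_hz=880.0):
    """grid: list of rows (top row first); each cell is a colour name
    or None for empty. Returns one list per time step, each holding
    (frequency, instrument) notes played simultaneously."""
    n_rows = len(grid)
    steps = []
    for col in range(len(grid[0])):        # scan left to right, like reading
        notes = []
        for row in range(n_rows):
            cell = grid[row][col]
            if cell is None:
                continue
            # top of the image (row 0) -> high pitch, bottom -> low pitch
            frac = 1.0 - row / (n_rows - 1) if n_rows > 1 else 1.0
            freq = low_hz + frac * (high_hz - low_hz)
            notes.append((round(freq, 1), INSTRUMENTS.get(cell, "piano")))
        steps.append(notes)
    return steps

# A 3x3 "image": a red dot top-left, a blue bar along the bottom
grid = [["red", None, None],
        [None, None, None],
        ["blue", "blue", "blue"]]
print(image_to_soundscape(grid))
```

Playing the first time step, a listener would hear a high piano note (the red dot, high in the image) over a low trumpet note (the blue bar at the bottom), then the trumpet alone as the scan moves right.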
Sounding out the real world
You can ‘hear’ a table or a tree with EyeMusic. For that, the EyeMusic app, on Android or iPhone, converts a photo you take into a simplified, pixelated representation. Then it converts that into its sound codes. For instance, you can see here – step by step – how it can build an image of a tree in a field.
Want to play with it? You can upload your own image to this site, and listen to it.
There is a scientific point to all this, of course. By scanning the brains of blind people using EyeMusic, Amedi has seen how the visual cortex – the part that a sighted person uses in interpreting what the eyes see – is stimulated by EyeMusic. In short, the mind is ‘plastic’ – able to adapt.
Amir Amedi is a brain scientist with 15 years of experience in the field of brain plasticity and multisensory integration. He has a particular interest in visual rehabilitation. He is an associate professor at the Department of Medical Neurobiology at the Hebrew University and the ELSC brain centre. He is an adjunct research professor in the Institute of Vision at Université Pierre et Marie Curie. He holds a PhD in computational neuroscience (ICNC, Hebrew University) and was a postdoctoral instructor of neurology at Harvard Medical School.
He has won several international awards and fellowships, such as the Krill Prize for Excellence in Scientific Research from the Wolf Foundation (2011), the international Human Frontiers Science Program Organization postdoctoral fellowship and later a Career Development award (2004, 2009), and the JSMF Scholar Award in Understanding Human Cognition (2011), and was recently selected as a European Research Council (ERC) fellow (2013).
Blind people can use sound and touch to learn to ‘see’
How do you see the world? For most of human history, the answer was ‘with my eyes (you idiot)’. As science progressed, this appeared to be confirmed. Brain scans showing which areas of the brain are activated when we perform specific tasks helped to map out sections responsible for vision, hearing, smell, movement – and much else besides.
The idea was that our eyes detect light and pass this information to the brain along the optic nerve, where the visual centre recreates the image in our mind’s eye. It seemed obvious that people whose eyes don’t work would never see because they cannot detect light. And it was presumed that the visual centres in their brains were either dormant because they never received any information from the eye, or would be irreversibly reassigned to do other tasks.
Paul Bach-y-Rita, M.D. (1934–2006)
Then came an American neuroscientist, Paul Bach-y-Rita. In the 1960s, he had a radical idea: what if ‘visual’ information about the world around us could be perceived by another of our senses? Bach-y-Rita became a pioneer in ‘sensory substitution’.
One of his best-known attempts connected a black-and-white camera, worn on the head, with electrodes that touch the tongue. The camera sent stronger electric pulses to the tongue when faced with dark objects, and a gentle tingle when it saw lighter shades. With the right training, a blind person could learn to build a picture of the world around them and describe the image they were ‘seeing’.
Tongue Display Unit
The first Tongue Display Unit (TDU) was designed as a portable, battery-powered device that could display static 12 × 12 tactile patterns on the tongue in a stand-alone mode. When the device is connected to a computer, a custom command language can create real-time, controllable tactile images on the tongue.
It was a revolutionary finding. The more scientists looked, the more they learned that one sense could be swapped for another. The original theory of how we see wasn’t entirely wrong, but it was incomplete. The brain turned out to be much more flexible – or plastic – than anyone ever expected. As Bach-y-Rita often put it: ‘We don’t see with our eyes, we see with our brains.’
In the late 1960s Bach-y-Rita even tried to train people to see based on vibrations. Using an old dentist’s chair and a large TV camera, he aimed to build a machine that would translate images into a series of vibrations that could be perceived as an image. He never quite cracked it, but he laid the groundwork for a whole field of brain research.
‘Seeing’ with sound?
Fast-forward half a century, and with the benefit of huge advances in technology, sensory-substitution devices are enjoying a renaissance.
Some have tried to mimic the echolocation system used by bats to navigate the world in darkness – emitting ultrasonic waves and calculating the size and shape of objects based on how long it takes for the ultrasound waves to bounce back. Others are turning images into sounds. Amir Amedi at the Hebrew University of Jerusalem has taken this to a new level – using his ERC grant to reveal new insights into brain development in the process.
Using Amedi’s EyeMusic device, blind people can describe faces, read other people’s emotions, recognise body postures and detect colour. Objects high in the image are represented by high-pitched sounds, while lower objects get lower pitches. The width of an object is represented by the duration of a sound; colour is conveyed by using different musical instruments.
While the sound is far from musical, 10 hours of training is enough to teach someone the basics of this new language – although fluency takes longer. The potential to restore sight to those whose eyes have failed is enormous.
The ear teacher at work
In Amedi’s Jerusalem lab, a man born blind shows he can recognise cartoon images from sound. (In Hebrew with English subtitles)
But perhaps the biggest surprise is what happened when Amedi’s team began scanning the brains of people using EyeMusic. It turns out that the visual centres of the brains of people blind since birth – once thought by some to have withered like an unused muscle – were activated along with the auditory centre. Yes, the sound was coming in through the ears but the brain was interpreting it partly in the area considered to be for vision.
Amedi is not done yet. His lab has also developed the EyeCane – a device that vibrates when a blind person waves it in front of them, providing information about the distance between them and nearby objects. Users can navigate their way around a maze blindfolded. Again, it is the visual centres that light up even though the input is tactile. The device is relatively cheap to make and could be commercially available in a few years.
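The EyeCane’s principle – distance in, vibration out – is simple enough to sketch in a few lines. The mapping below is invented for illustration; the real device’s calibration is not described here. Nearer obstacles produce a stronger buzz, and anything beyond the sensing range produces silence.

```python
def distance_to_vibration(distance_m, max_range_m=5.0):
    """Map a measured obstacle distance to a vibration intensity in [0, 1].
    Illustrative only - the actual EyeCane's mapping and range are not
    published here. Objects at or beyond max_range_m give no vibration;
    the closer the obstacle, the stronger the buzz."""
    if distance_m >= max_range_m:
        return 0.0
    return round(1.0 - distance_m / max_range_m, 2)

print(distance_to_vibration(0.5))   # nearby obstacle -> strong vibration
print(distance_to_vibration(6.0))   # out of range -> silence
```

A linear ramp like this is the simplest choice; a real device might use pulse rate rather than amplitude, or a non-linear curve that exaggerates differences at close range.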
Rethinking brain organisation
Can the ears see?
In a TEDx Jerusalem talk, ERC grantee Amedi explains the science behind his work in sensory substitution.
This kind of research is challenging established theories of how the brain organises, or reorganises, itself. The orthodox view has been that there are critical periods in our early development during which the organisation of the brain is set for life. If mice had one eye covered from birth, the brain cells normally used to interpret information coming from that eye would be reassigned to do something else. Babies’ brains were seen as highly adaptable, but adult brains were more rigid, with specific zones permanently committed to specific jobs.
“It was thought that if you don’t experience vision in the first weeks or months of life, the brain’s visual centre wouldn’t develop normally,” Amedi says. “This was extended to thinking that if you don’t see faces during that critical period of early life, you would not recognise them in adulthood.”
Experiments using the EyeMusic sensory substitution device in adults reveal this was not quite true. While the window of greatest opportunity for learning and brain development is early in life, blind people aged from 40 to 60 can learn to ‘see’ with just 10 hours of training.
Not everyone is convinced. Kevin O’Regan at the Laboratory of Perception Psychology, University of Paris Descartes, says the fact that the visual cortex lights up following auditory stimulation may not be as conclusive as it first appears.
“What Amedi has shown is that when you’re blind it frees up the area of your brain connected to the optical nerve,” he says. “I wouldn’t jump to the conclusion that people are seeing sounds: the visual cortex may be used because that’s the part that is available rather than because it is inherently related to visual input.”
So, what’s next for sensory substitution? Armed with knowledge of the adult brain’s flexibility, Amedi is working on a system that combines his EyeMusic algorithm with the latest in computer vision and sensor technology to give a more complete picture. “Artificial intelligence is becoming increasingly adept at recognising objects. We want to marry this with EyeMusic and maybe add a tactile glove to give an additional sensory input.”
The computer could tell you what an object is and where it is, possibly communicating this through a combination of sound and vibration to create an image in the mind’s eye.
All of which leads to questions about what vision – or hearing or touch – really is.
Marianna is a Reader (Multisensory Experiences) in the Department of Informatics at the University of Sussex. She leads the SCHI Lab - Sussex Computer Human Interaction Lab established within the School of Engineering and Informatics, as part of the Creative Technology Research group. Before joining Sussex, she was a Marie Curie Fellow at Newcastle University and prior to this an Assistant Professor for Human-Computer Interaction and Usability at the University of Salzburg. Her vision and ambition is to gain a rich and integrated understanding of people’s tactile, gustatory, and olfactory experiences in order to provide designers and developers with the ability to create truly compelling multisensory experiences and novel interactions with technology.
Move over sound and vision: 9D-TV is on the way
Black and white silent movies were revolutionary in their time. Then came sound. Then colour. Now 3D cinema is all the rage, while 4DX theatres – complete with moving seats and wind machines – promise to make the experience even more realistic.
But why stop there? Marianna Obrist says 9D-TV is on the horizon. That’s vision, hearing, smell and touch, plus the five tastes (sweet, sour, bitter, salty and umami). Just as soundtracks help to build suspense and convey emotion, ‘smell-tracks’ will bring us into movie scenes or video games in a totally new way.
The idea is not new. In the early 20th century, theatres experimented with adding perfumes that complemented movies but it never took off. And in 1965, BBC television unveiled new technology that would allow viewers to smell fresh coffee or chopped onions – as part of an April Fools’ Day hoax. More recently, Japanese researchers have hit the headlines with prototype ‘smelling screens’ that pump out gas corresponding to what’s on the screen – opening the door to more persuasive advertising or museum exhibits.
“It would be a more immersive, multisensory experience,” says Obrist, an ERC grantee who leads the Sussex Computer Human Interaction Lab (SCHI) at Sussex University. “But it has to be more than a gimmick. We need to think about integrating smell, touch and taste into the narrative from the beginning of the process.”
Sensorama, an early attempt at giving movies more ‘pow’. Morton Heilig, an American cinematographer, patented a machine in 1962 that gave the user the sights, sounds, and feel of riding a motorcycle through Brooklyn, NY. It caused a stir, but he couldn’t raise the capital to bring it to market.
The big challenges include figuring out how smell maps onto experiences; which tastes trigger what emotions; and how touch influences our experience. And, for each of these, what level of intensity is required to have the desired effect?
“We are only beginning to understand the chemical senses, but there has been a lot of progress over the past 10 years,” Obrist explains. “We can learn a lot from how scientists characterised colour in the 20th Century. As mathematical models were developed to describe the subtleties and contrasts of colour, cinematographers applied this to create powerful visual effects.”
If they can do the same for touch, smell and taste, designers would have toolkits to help them select the perfect scent for the mood they want to create – just as Photoshop gives graphic designers access to the full palette of colours when they are creating movie sets or advertising posters.
How much touch?
What’s the meaning of touch?
Want to spread a warm glow? Touch somebody around the index finger. That’s among the odd facts of touch that Obrist and other researchers are now exploring.
Interactive technologies have already gone beyond sound and vision as sensors have become cheaper and more sensitive. For example, your smartphone has a touchscreen and alerts you with vibrations. But this is just the beginning. Obrist’s SenseX project is part of a global effort to unlock the full power of human-computer interaction, particularly when it comes to haptic technologies that manipulate the sense of touch. For instance, she says, “when it comes to touch, we want to know where on the hand to stimulate and with what intensity.”
The Sussex University team has already shown that the way we touch can elicit emotional responses: a gentle brush against the edge of the hand can generate a negative emotion, while a touch around the index finger is positive.
Now they are working with a Bristol-based start-up called Ultrahaptics that uses ultrasound to project sensations onto the hand. An array of small, computer-driven ultrasound emitters creates what they call an acoustic radiation force – analogous to the feeling you get if you put your hand in front of a powerful loudspeaker; only now the sound is so carefully controlled that it can feel like you are grasping a real object (the company says it is safe because only a tiny fraction of the energy gets beyond the surface of your hand). This paves the way for consumer technologies featuring invisible buttons and dials that feel ‘real’ – and respond to manipulation.
“Mid-air haptic technology is quite momentous,” says Obrist. “It was only really developed a few years ago and is set to enter the consumer market very soon.”
How do you touch a sound?
Ultrahaptics, a UK start-up working with Obrist of Sussex University, has developed an ultrasound projector. Here’s how it works.
While academics continue working to understand more about tactile input and how to optimise these technologies, Ultrahaptics is working with big electronics companies and automakers to figure out how to integrate the technology into cookers and cars.
“PlayStation and Xbox have already blended visual, audio and haptics [through vibrations] but our ultrasound technology allows you to interact naturally with virtual objects,” explains Anders Hakfelt, an Ultrahaptics vice president. “I think in just a few years’ time VR headsets will become a routine add-on for games consoles and perhaps televisions as they become cheaper and better. Then there will be a tipping point, probably sparked by a product that really captures the public imagination.”
Other applications include operating theatres, where surgeons could interact with technologies and devices without touching them – thus reducing the risk of spreading infection. And large flight simulators for pilot training could be reduced to a mobile VR headset with an ultrasound haptic device that feels realistic.
Sense of direction
Obrist’s team is also looking at the automobile – how scents might be used to convey information without requiring drivers to take their eyes off the road. The idea: “Our cars are already packed with lights and beeps,” she says. “The olfactory system could be used to give messages by priming the driver to stay alert for interesting landmarks.”
There are still many challenges. Smell is a slow medium for passing messages and is unsuitable as a warning system. In tests, people tend to get used to smells quickly – a process known as habituation – which dims their effect. And there is the practical problem of clearing away a smell to make way for new perfume-packed messages.
And then there’s romance. Maintaining long-distance relationships is rarely easy, but Skype helps. Now add a signature scent of a loved one and a virtual sense of touch, and the experience becomes increasingly realistic. “If we can add smell and tactile stimulation it would vastly improve the physical and emotional experience of communicating online,” says Obrist. The question is: can we handle it?
Kevin O'Regan is ex-director of the Laboratoire Psychologie de la Perception, CNRS, Université Paris Descartes. After early work on eye movements in reading, he was led to question established notions of the nature of visual perception, and to discover, with collaborators, the phenomenon of "change blindness". In 2011 he published a book with Oxford University Press: "Why red doesn't sound like a bell: Understanding the feel of consciousness". In 2013 he obtained a five year Advanced ERC grant to explore his "sensorimotor" approach to consciousness in relation to sensory substitution, pain, color, space perception, developmental psychology and robotics.
Our sense of superiority notwithstanding, the human senses are not the most sophisticated in the animal kingdom. Dogs can hear sounds at a pitch well out of our range. Dolphins use echolocation as a simple and effective SatNav device.
So could humans incorporate new senses? Yes, says Kevin O’Regan, an ERC-funded researcher at the University of Paris Descartes. One project on his list is fine-tuning our internal GPS system so that finding magnetic north becomes second nature.
In the lab with Dr. O’Regan (buckle up!)
To test whether people can learn to ‘feel’ North on a compass, Frank Schumann in O’Regan’s team seated volunteers blindfolded in a special chair, gave them headphones (linked to an iPhone) that played a waterfall sound when pointed North, and then began rotating the chair. He doesn’t say, in his published Nature Scientific Reports paper, whether anyone got carsick.
This line of research was once seen as quackery, but has been attracting attention in recent years. Five years ago, scientists in the US identified neurons in the inner ear of pigeons which respond to the direction and intensity of magnetic fields. Then teams in the USA and the UK reported that humans, too, had a built-in homing device; we’re just not as adept at using it as pigeons are.
Now Frank Schumann and Christoph Witzel in O’Regan’s team have trained people to integrate a sense of magnetic north into their perceptual system using two smartphone apps called “hearSpace” and “naviEar”.
One group of people was given earphones enhanced with a geomagnetic compass. When they turned north, the pleasant sound of a waterfall could be heard from in front of them; the sound moved to the side and back as they turned away. And here’s the surprise: Soon people were so attuned to this new sense of direction that it became an integral part of their sense of orientation. Or, as O’Regan puts it: “We successfully integrated magnetic north into the neural system in the inner ear that underlies spatial orientation.”
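The cue described above – a sound that sits straight ahead when you face north and slides away as you turn – can be sketched as a simple heading-to-audio mapping. The parameters here are invented for illustration, not taken from the hearSpace app itself.

```python
def north_sound(heading_deg, falloff_deg=90.0):
    """Sketch of a hearSpace-style audio cue (parameters invented for
    illustration). heading_deg is the listener's compass heading
    (0 = facing north). Returns (volume, pan) for the waterfall sound:
    pan -1 = hard left, 0 = centre, +1 = hard right."""
    # signed offset from north, folded into (-180, 180]
    off = (heading_deg + 180.0) % 360.0 - 180.0
    # loudest when facing north, fading to silence when facing south
    volume = max(0.0, 1.0 - abs(off) / 180.0)
    # turning right (positive heading) moves the sound to the left,
    # telling the listener which way to turn back toward north
    pan = max(-1.0, min(1.0, -off / falloff_deg))
    return round(volume, 2), round(pan, 2)

print(north_sound(0))     # facing north: full volume, centred
print(north_sound(90))    # facing east: half volume, sound hard left
```

The key property is continuity: as the listener rotates, the cue changes smoothly, which is what lets the brain fold it into its existing sense of orientation rather than treating it as an intermittent alert.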
Just to prove the point, the researchers then recalibrated the equipment. “We cheated by changing the direction of north as people turned. After 20 minutes of training, when we took off the earphones, people’s sense of space was all mucked up,” the researchers say, “and it even remained mucked up when we retested them a few days later.” This means that people not only integrated the north signal, but quickly came to trust the artificial magnetic sense more than their natural vestibular sensations.
Buzz for North
Companies are already getting in on the act. Cyborg Nest is selling a product called North Sense. It can be attached to the skin and gently vibrates when the user faces north. Rivals include wearable anklets that buzz when heading north – along with a host of smartphone apps that offer less invasive ways to find your way home.
And it needn’t stop with a compass. O’Regan thinks we could potentially find other ways to augment the suite of senses that comes preloaded in our heads. That, he says, would be “a first step towards the development of cyborgs.”
And that opens some interesting philosophical questions, about the difference between humans and robots. After all, O’Regan works at the university named after the man who wrote: “I think, therefore I am.” So if you’re up for some meditations, read on.
O’Regan starts with a simple question: What is it about a red patch of colour that makes us sense ‘red’ the way we do? Is it the way red patches excite the light receptors in our retinas? Yes, but why that particular experience? Is it the way nerves carry the signals to the brain? Yes, but what is it about those signals that mean ‘red?’ Is it the way the synapses in the brain cells respond to the signals from the retinas? Yes, but what is it about the synapses or the excitation patterns that would have anything to do with what we mean by ‘red?’
The more you think about it, the harder it is to say why red things feel red. There is, O’Regan says, “an infinite regress of questions” that get you nowhere. There’s “no way of making the link between physics and experience.”
What does it mean, to ‘feel’?
Trying to make machines that can feel the world – or emotions – is bound to involve some heavy philosophy. Here’s how O’Regan starts to explain it, in one of his academic presentations.
When Arnold Schwarzenegger, playing the role of a very advanced robot in the film Terminator, ends up being consumed in a bath of burning oil and fire, he goes on steadfastly till the last, fighting to protect his human friends. As a very intelligent robot, able to communicate and reason, he knows that what's happening to him is a BAD THING, but he doesn’t FEEL THE PAIN.
This is the classic view of robots today: people believe that robots could be very sophisticated, able to speak, understand, and even have the notion of "self" and use the word "I" appropriately. But as humans we have difficulty accepting the idea that robots should ever be able to FEEL anything. After all, they are mere MACHINES!
Philosophers also have difficulty with the problem of FEEL, which they often refer to as the problem of QUALIA – that is, the perceived quality of sensory experience, the basic "what it's like" of, say, red, or the touch of a feather, or the prick of a pin. Understanding qualia, or feel, is what the philosophers David Chalmers and Daniel Dennett call the "hard problem" of consciousness…
What’s needed if we want sentient robots, O’Regan goes on to explain, is a new way to think about sensation – a discussion that often echoes the long history of philosophy from Descartes and Locke to Kant and beyond.
How to make a robot see red
So he takes a different approach. He notes that what we really mean by a sensory experience is what we do when we interact with the world while we have that experience. So in the case of ‘red’, what we mean by the experience of ‘red’ is the particular way we interact with light reflected off a red object. That pattern can be described mathematically. It can be programmed into a robot as easily as you can teach a child the word for ‘red’. The robot would see red, as defined in this precise manner.
In the same way, you could teach a robot to smell flowers, or sense the space around it. So long as the robot interacts in the correct way with its environment, it is feeling. The only thing lacking is that it be self-aware that it is feeling – but that, too, could be programmed.
O’Regan’s ERC grant allows him to explore what he calls this sensorimotor theory of sensation, on which he began work more than 15 years ago. According to this approach, our experience of the world is a product of how we interact with it, and it obeys a series of laws known as sensorimotor contingencies.
“Most people are looking for something in the brain that generates consciousness,” O’Regan says. “Perhaps they are looking for comfort in the idea that robots would never be able to feel in the way we feel.”
He believes this is a waste of time: there is nothing sufficiently special about humans that makes conscious robots an impossibility. Instead, we should focus on understanding how we really experience the world so that we can understand and enhance our ability to feel.
O’Regan says robots will soon eclipse human intelligence and will be able to perceive the world just as we do. “If we can explain feel – as a way of interacting with the environment – we can explain everything including emotions,” he says. “Will robots have emotions one day? Yes, it’s coming in the next 20 years.”
Antonio Bicchi is a scientist interested in automatic control (the science and engineering of systems), in haptics (the science and technology for the sense of touch), and in robotics (i.e., the machine that is not here yet).
After graduating from the University of Bologna, he was a researcher at MIT’s artificial intelligence lab, and is now chair of robotics at the University of Pisa. Since 2009 he has led the Soft Robotics Lab at the Italian Institute of Technology in Genoa, and since 2013 he has been an adjunct professor at Arizona State University in Tempe, Arizona.
His 2012-2017 ERC Advanced Grant 'SoftHands' established the basis for the theory of soft synergies in human and robot hands, which led to the design of new robotic and prosthetic hands.
A new wave of robotic hands can sense what they’re grasping – and handle things just right
Robots are great at building cars on factory assembly lines. But what about picking grapes or packing eggs?
Such tasks – once seen as too delicate and complex for machines – would require a bionic hand that can not only grasp things mechanically, but also feel them and respond appropriately. The human hand is astonishing: it has 27 degrees of freedom – meaning its joints can move in a total of 27 different ways (count them yourself). It is sensitive to pressure, temperature and the flow of air over its surfaces.
Of course, designing robotic hands has already come a long way. Heavy industries routinely use machines that can grip, while US researchers have designed devices that mimic every bone and muscle of the human hand. But “each of these comes with its own set of limitations,” says Antonio Bicchi, a robotics scientist at the IIT-Italian Institute of Technology and the University of Pisa, and an ERC grantee. “Simple grippers have problems with manipulation; more sophisticated machines can be fragile, costly and complicated to programme.”
The amazing headless hand
Antonio Bicchi’s team at the University of Pisa have designed a flexible robotic hand that can grasp anything – just right.
So, drawing on neuroscience and studies of motor control, Bicchi’s team on the SoftHands project analysed human movements to identify which ones were vital to grabbing, holding and manipulating. They found that instead of using individual motors to control each finger joint, a single motor could be used to control a whole set of movements. Complex tasks, such as grasping, could be achieved using relatively simple combinations known as ‘synergies’. Synergies are found in human hands, where groups of muscles work together to perform complex tasks.
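The idea of synergies lends itself to a tiny numerical sketch. The coupling weights below are invented for illustration – they are not the SoftHands design – but they show the principle: a handful of activation values drive all the joints at once through a fixed matrix, so a single number can close the whole hand.

```python
# Illustrative sketch of synergy-based hand control. The weights are
# hypothetical, not taken from the SoftHands project.

# Four finger joints driven by two synergies: column 0 closes the
# whole hand evenly; column 1 curls the fingertip joints more than
# the base joints.
SYNERGIES = [
    [1.0, 0.2],
    [1.0, 0.5],
    [1.0, 0.8],
    [1.0, 1.0],
]

def joint_angles(activations):
    """Each joint angle is a weighted sum of the synergy activations,
    so a 2-element command controls all four joints."""
    return [round(sum(w * a for w, a in zip(row, activations)), 2)
            for row in SYNERGIES]

# One number - "how closed is the hand" - moves all four joints together:
print(joint_angles([0.5, 0.0]))   # -> [0.5, 0.5, 0.5, 0.5]
```

Controlling four joints with two numbers is the whole point: the motor (or the amputee’s muscle signal) only has to supply a coarse command, and the coupling does the rest.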
The second challenge was to have a hand that could adapt its grip depending on whether it was firmly holding a hammer or gently cradling a ripe peach – something the researchers call “variable impedance actuation.”
”Human hands are adaptable,” explains Bicchi. “But we do not consciously control the shape of our hand in fine detail. We very often let the hand take its own shape based on information from the environment that it is manipulating. Again, we used synergies as our guide and designed a hand by observing humans and studying the complex movements behind soft grip.”
The result is a bionic hand that moves through 19 degrees of freedom, is simple to control and robust enough to survive in the real world. The work was taken forward by the SoMa project that saw the Italian group team up with researchers in Germany and partners from industry to explore practical applications. But which industries are ready to embrace the technology?
One company keen to embrace the robotics revolution is Ocado, an online supermarket based in the UK that is also working with Bicchi. Their enormous warehouses stock 50,000 items and are already highly automated, with miles of conveyor belts and scores of robots roaming around on errands – such as fetching boxes for humans who will pack up a customer’s order. To take the next step – packing the boxes themselves – the robots would need hands able to recognise and manipulate objects which may not always be the same shape.
But can it bake a pie?
UK online grocer Ocado and Bicchi’s team have the perfect apple picker.
“Control of picking items up is extremely complex,” says Duncan Russell, research coordinator at Ocado. “Some items are delicate; others may change shape when the hand interacts with them. In addition to the design of the hand, we are looking at how machine learning can develop strategies for picking up and manipulating products.” The next step: getting robotic hands that can also perform quality control – rejecting overripe melons and bruised apples. And after that: getting robots to work alongside the maintenance teams.
Of course, many companies are now experimenting with these technologies – most famously, Amazon. So is it only a matter of time before the robots take over?
Alexandru Voica, head of technology communications at Ocado, says supermarkets of the future will become much more productive, curbing costs for consumers and reducing the scope for human error. But he expects the future to be a collaboration between humans and robots rather than a total takeover by the droids.
“Our purpose has always been to boost productivity rather than terminate our workforce,” he says. “The idea is for robots to work alongside humans. They can take over tasks that are dull or physically difficult, freeing people to focus on more meaningful work.”
Bicchi wants to use the hand as a prosthetic for amputees. But this presents new problems. The hand must be controlled by electrical signals from muscles in the patient’s arm. And it must provide feedback, so the users feel as though they are ‘touching’ objects in their environment. Of course, sensors can be packed onto the fingertips for pressure or temperature; but the more sensors added, the more complex and fragile the hand. The answer, Bicchi says, is to focus on the task at, well, hand:
“For some tasks, it could be useful to have information about the shape of an object. For others, vibrations or impact may be a priority.”
The solution: several gloves with different sets of sensors – some for using tools, for example, others for interacting with people. “The gloves allow you to customise the kind of feedback you want to bring back to the patient,” he explains. “It’s like the gloves are interchangeable skins that fit over the skeleton.”
Tamar Makin is a neuroscientist at the Institute of Cognitive Neuroscience at University College London. She heads the Plasticity Lab there. Her main interest is in understanding the key drivers and limitations of reorganisation in the adult brain. The primary model for this work is studying individuals who have lost a hand. A particular focus is on how habitual behaviour, such as prosthesis usage, shapes brain reorganisation.
She graduated from the Brain and Behavioural Sciences programme at the Hebrew University of Jerusalem in 2009. She was then awarded several career development fellowships to establish her research programme on brain plasticity in amputees at the neuroimaging centre of the University of Oxford, first as research fellow and later as a principal investigator. She recently joined the faculty of UCL to continue this work.
Scientists want to know how the brain maps an arm – or three
‘I only have one pair of hands!’ It’s the exasperated refrain of busy people, overloaded with tasks. But what if you had two pairs? Could your brain cope?
This is not science fiction: Researchers at MIT and Boeing have made headlines with their work on giving their engineers additional arms. Aeroplane maintenance crews sometimes carry out repairs in confined spaces where two hands may be needed to hold something in place, while two more hands secure a screw or replace a fuse. Instead of sending in two engineers to work together – leading to communication and access challenges – wouldn’t it be easier to send in one person with four hands? The engineer could control the extra hands with the flex of a muscle.
At MIT, researchers experiment with ways to add extra limbs for workers.
But can this really work? The answer depends on how the brain is organised. ’Embodiment’ – where the brain controls external objects in the same way as one’s own body parts – is central to integrating a new limb into the body. Contrary to popular belief, even professional tennis players or skilled sculptors do not truly feel that their tools are extensions of themselves. And it has long been thought that people with artificial limbs reject prosthetics if they continue to feel foreign and awkward.
Neuroscientist Tamar Makin has been studying amputees and people born without a hand to understand how the brain is organised – and how it might incorporate additional limbs. Makin leads an ERC-funded project studying how humans interact with wearable robotic tech. Her team at University College London has found that in people born with one hand, the brain region that would normally control the missing hand lights up for other body parts – including the arm, foot and mouth – that are used to perform tasks that the hand would normally do.
'Having one hand has made me more stubborn'
Loss of a hand, from birth or through accident, has a profound effect on how the brain organises itself. Exactly how is among the questions that Makin and others are trying to answer.
For example, when tying a shoelace or opening a bottle, someone born without a hand might use one hand or their mouth. But the ‘hand’ areas of the brain are in control. The finding suggests more flexibility in the brain than was previously thought. It also raises the possibility that the brain might adapt to new tasks, such as controlling extra arms, if the new limb is treated as a part of the body – helping with gripping, holding, feeding and so on. “If that brain area can respond to the feet or the mouth, perhaps it can also adapt to a prosthetic,” says Makin.
People born with one hand develop their own workaround solutions to daily tasks such as feeding and dressing at an early age, when the brain is known to be super-flexible. Amputees, who suddenly lose a limb and then try to adapt to an artificial replacement, may be an even better model for studying how the brain responds to extra limbs. But the results so far are mixed.
How does the brain map the body?
Researchers have been able to deconstruct how the brain represents parts of the body. In this image, the nerve signals start in the hand (yellow pathway) or the face (orange pathway). The signals go to the brain stem, which in turn relays them to other parts before projecting them up to the cortex (top inset). There, specific parts of the brain map the face and hand. Makin and colleagues are now studying how those pathways work when there is no limb – due to an accident, for instance. Astonishingly, they find the cortex is still mapping a limb. Perhaps the origin of phantom-limb pain? (From Makin, Tamar & Bensmaia, Sliman. (2017). Stability of Sensory Topographies in Adult Cortex. Trends in Cognitive Sciences, 21.)
Prosthetic technology has become increasingly sophisticated, but people do not always make use of the high-tech opportunities. Only 45 per cent of all arm amputees choose to use their prosthesis throughout the day. Of these, many prefer simpler, low-tech devices such as hook-like grippers. Just 20 per cent avail themselves of the most technologically advanced robotic arms, complaining that they are awkward to control, require complex training, and lack ‘feedback’.
Makin thinks the answer isn’t necessarily better robotics or artificial intelligence. Rather, we need to “understand how the brain represents artificial body parts.” She hopes that this will allow for better prosthetics design and use. “If we could harness the resources of the brain pre-programmed to control artificial limbs, then we could offer real opportunities to improve prosthesis usage. The trick is to understand how to minimise the conflict between existing representations that occupy these brain areas – for example, of phantom hands in amputees – and the new representation of the artificial limb.”
Beyond rehabilitation, understanding how to improve the embodiment of technology might help future efforts to incorporate additional limbs.
But there may be costs, as well. “There are some scary scenarios where changes to the brain can be maladaptive,” says Makin. “By adding new limbs, it could jumble the brain’s representation of the rest of the body.” For example, people might get clumsier: “If you work for eight hours with four arms and then take them off, will the brain ‘know’ how to switch back to ‘two-arms mode’? Can you safely drive home?”
David Melcher is an Associate Professor at the Center for Mind/Brain Sciences (CIMeC), University of Trento, where he leads the Active Perception Lab. His research focuses on the intersection of perception, attention, action and memory, including both basic science and applications to clinical groups. He has published more than 70 scientific articles, many in leading scientific journals such as Nature, Nature Neuroscience, Current Biology, PNAS and Neuron.
He serves on the Editorial Boards of the Journal of Vision and Perception/i-Perception and as a reviewer for numerous journals, funding agencies and conferences. He has also been active in research, workshops and public outreach projects bringing together art and science, including several museum exhibition projects and the co-edited book “Art and the Senses” (Oxford University Press, 2011, 2013).
Melcher’s research has been funded by, among others, the European Research Council, the British Academy, the Royal Society, the Italian Ministry of Research and Education and the National Institutes of Health. In 2011, he received the American Psychological Association Distinguished Scientific Award for Early Career Contribution to Psychology in the area of perception/motor performance.
Researchers study how we sense time – and create a feeling of ‘now’
One of the most complex and mysterious jobs our brain does is create a sense of time and space. It’s also essential. We need to know where we are in the world – and to have a sense of the past, present and future – to survive. Hunter-gatherers needed to find food and avoid predators; we need to get to work on time, play sports and read articles like this one.
Philosophers have long pondered the meaning of ‘now’: the sense of what is happening at the present moment in time. The idea that we are simply watching the world unfold at a steady speed became untenable as science began uncovering the complexity of perception.
It turns out that we are constantly sampling the world in short bursts, using our eyes, ears and movement to gather information about the world around us. Our brains then reassemble this rapidly to create a smooth sense of time. “The brain uses these different temporal windows and spatial coordinate systems to give a coherent, continuous and seamless perception of our environment,” says David Melcher of the University of Trento, Italy.
Melcher leads an ERC-funded project which has used modern neuroscience tools to work out how long these windows last. The answer: 2 to 3 seconds.
If the shutter speed is just right, a video of a helicopter rotor can look like the blades aren’t moving. That’s a clue to how our minds perceive the world around us, says Melcher.
Researchers chopped short movies into clips lasting between a few milliseconds and several seconds, shuffled the clips randomly, showed the result to volunteers and asked whether they could follow the story. If the segments were shorter than 2.5 seconds, the story was impossible to follow; if they were longer than that, subjects could work out the correct order.
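The scrambling procedure can be sketched in a few lines of code. This is only an illustration of the idea, not the study’s actual protocol – the segment length and the frame representation are stand-ins:

```python
import random

def scramble(frames, segment_len, seed=0):
    """Chop a frame sequence into fixed-length segments, then shuffle
    the segments while keeping each one internally intact."""
    segments = [frames[i:i + segment_len]
                for i in range(0, len(frames), segment_len)]
    random.Random(seed).shuffle(segments)
    return [frame for seg in segments for frame in seg]

# A toy 10-frame 'movie' cut into 2-frame segments: the order of events
# is destroyed between segments but preserved within each one.
movie = list(range(10))
scrambled = scramble(movie, segment_len=2)
```

The key property is that events stay in order inside each segment while the segments themselves are jumbled – which is why, in the experiment, comprehension survived only when each segment was long enough to fill one of the brain’s sampling windows.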
This suggests that our brains can piece together the jigsaw of events, provided the pieces of the puzzle are big enough. Melcher sees this as an illustration of how our brains work: we take bites of time 2 or 3 seconds long, process this information, and go back for more – not unlike how electronic communication works.
“If you talk on Skype or a cell phone, it seems to be a continuous stream of sound or vision,” he explains. “In fact, the system is taking advantage of a particular refresh rate in the brain. We don’t notice that the sound is sampled at a certain rate, packed up and sent in small clusters.”
Skype someone who is watching television and you’ll see what happens when the sampling is not in sync. Or watch what happens when the shutter speed of a video camera exactly matches the rotation of a helicopter’s blades.
“Our brains need time to process information into a 3D version of our environment,” says Melcher. “Most of the time we don’t notice that it’s not continuous. But if you see a TV in the background of a live broadcast, you see the cracks in the system. This is known as aliasing.”
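The helicopter effect is easy to reproduce numerically. A minimal sketch of aliasing (the rates below are arbitrary illustrations, not real camera or rotor specifications):

```python
def apparent_angles(rotor_hz, frame_hz, n_frames):
    """Blade angle (degrees, mod 360) captured in each frame when a
    rotor spinning at rotor_hz is filmed at frame_hz."""
    return [(360.0 * rotor_hz * (i / frame_hz)) % 360.0
            for i in range(n_frames)]

# Frame rate exactly matching the rotation rate: every frame catches
# the blade at (almost) the same angle, so it appears frozen.
frozen = apparent_angles(rotor_hz=24.0, frame_hz=24.0, n_frames=5)

# A slight mismatch makes the blade appear to creep slowly backwards --
# the visible 'crack in the system' Melcher describes.
drift = apparent_angles(rotor_hz=24.0, frame_hz=25.0, n_frames=5)
```

The same arithmetic applies whether the sampler is a camera shutter, a Skype audio codec or, on this theory, the brain’s own refresh rate.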
Sensing, fast and slow
This 2 to 3 second ‘sampling’ approach is imperfect, but it resulted from a grand evolutionary compromise between the need to perform very fast and very slow sampling. Constantly fast and constantly slow sampling each come with pros and cons. Consider the need to be alive to the threat of rattlesnakes: if your senses were always too sensitive, you would be plagued by false alarms; too slow, and our species would be a mere footnote in biological history.
However, some of Melcher’s work suggests our brains are running on two sets of sampling timescales simultaneously. “It’s a trade-off,” explains Melcher. “We cannot have a perfectly fast or perfectly slow system. It may be that we’re capable of both, depending on the task at hand.”
Sensorimotor show gathers speed
Many of the tests Melcher’s team run involve hooking volunteers up to brain scanners and exposing them to a variety of stimuli – sound, touch and visual – to see if they can keep track.
For example, they might put a flash of light on a screen or elsewhere in the room and ask participants to say how many flashes they saw, where they occurred and in what order. “If there are two flashes close together, you might just see one because of this inbuilt sampling system,” he explains.
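That fusion effect can be expressed as a toy model: flashes landing inside the same sampling window merge into a single percept. The window length here is a made-up placeholder, not a measured value:

```python
def perceived_flashes(onsets_ms, window_ms=100.0):
    """Count distinct percepts, assuming flashes that fall inside the
    same sampling window merge into one (toy model; the window length
    is a hypothetical parameter)."""
    count = 0
    window_end = float('-inf')
    for t in sorted(onsets_ms):
        if t >= window_end:  # this flash opens a fresh window
            count += 1
            window_end = t + window_ms
    return count

# Two flashes 40 ms apart fall in one window and merge into a single
# percept; 250 ms apart, they are seen as two.
```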
But we’re not just sampling with our eyes and ears. “As we move around the world our brain is taking that movement into account,” says Melcher. “We don’t just perceive passively; it’s a sensorimotor system.” That concept – that sensation isn’t passive, but is shaped by how we interact with the world – has been gaining fans in the scientific world; it’s the same theory that O’Regan of Paris Descartes is working on, for instance.
Says Melcher: “Every time you move your head or eye or body you are asking questions and you perceive an answer. Over time you build a 3D vision of the world.”
Perhaps the quirkiest finding from his project is the way sampling can vary from one person to the next. Some people are generally a little faster than average, while others are a fraction slower. “We can show certain stimuli that you would interpret one way while others would interpret differently,” he says. “In theory, a video could be optimal for you but not for somebody who is looking over your shoulder.”
This has big implications for media and entertainment – which is why Apple and Facebook have taken an interest in this field. In the future, our smartphones and the videos we watch online could be personalised to match the sampling speed at which we are most comfortable. It may also have a clinical application if movies could be adapted for people with autism spectrum disorders, for whom sensory processing can be a major challenge.
There are commercial applications too. As some people are slightly faster than others at processing sensory information, it would pay to know which ones are the best. Companies keen on hiring pilots or uncovering an exceptional race-car driver would surely add this criterion to the battery of tests they put candidates through during selection.
As scientists better understand how our brain samples our environment, self-improvement types will want to train themselves to speed up their sampling rates. But if you prefer to relax rather than speed up, syncing your brain to the beat of a metronome can have a calming effect. Of course, that’s nothing new: “People already do this when they put on relaxing music and pour a glass of wine,” Melcher says.