What happens when computers program themselves?

Artificial intelligence could transform our world. But first, ERC researchers are trying to answer the basic questions – about life and the universe – that AI poses

 

Smart robots can work together to get a job done – better than humans? Mauro Birattari’s lab at Université Libre de Bruxelles is on it.

Can computers learn to code?

Software is eating the world, venture capitalist Marc Andreessen noted in 2011. That was six years after renowned American software engineer Grady Booch estimated that more than one trillion lines of code had already been written, with a further 30 billion lines added each year. Today, those figures are likely to be much bigger still. Whereas Microsoft Office 2001 had about 25 million lines of code, Microsoft Office 2013 has about 45 million.

Source: Information is Beautiful

The steady rise in the amount of code reflects the growing complexity and richness of software – with more programming languages and libraries of code. For developers, that also means headaches. “As it becomes harder to utilise the (code) libraries, people make more mistakes,” says Martin Vechev, a researcher at ETH Zurich.

But help is at hand. Much of the software developed in recent years is open source, meaning that the code is publicly available and viewable. This has ushered in the era of “Big Code”, a shorthand term for this vast and growing library of open source code. This library can be used to train computers to identify patterns in software, just as advanced web services, such as Google Photos or Google Translate, process millions of samples to learn to identify particular images or phrases.
 
In a project funded by the ERC, Vechev and his team are harnessing Big Code. “We are building an AI system that can learn from lots of data showing how existing programmers and developers write code and how they fix code.”

What’s it take to make a computer smart?

Brave new world: A computer can even bore itself... (Courtesy: Cornell Creative Machines Lab)

Science often advances in trends – and artificial intelligence is today an uber-trend among eager computer scientists and engineers around the world. One measure: AI patenting is way up – with an average annual growth rate of 43 per cent, according to the European Patent Office.

But what’s involved in creating AI? Experts in artificial intelligence have broken the job of computerised thinking into several related tasks. First, a computer needs information to work on and a way to store it. Then it needs a way to reason about the information, and to learn how to perform tasks (the “machine learning” part of AI). And, of course, it needs a way to talk to the bothersome humans who keep asking it questions.

Want to dive deeper into the subject? Here’s a good blog by Sridhar Mahadevan, a senior researcher at software firm Adobe Systems, with a reading list.

To do this, the computer also has to estimate how good a particular piece of code might be. “It is more complex than simply capturing patterns” in the software, notes Vechev. “The solution also uses probabilistic methods: we can attach a probability that this is likely to be good code for a particular task.” That could lead to software solutions that are difficult or impossible to develop through traditional approaches.
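
To give a flavour of what attaching probabilities to code can mean, here is a minimal toy sketch – an illustration of the general idea, not Vechev’s actual system. A crude statistical model learns which code tokens tend to follow which others in a tiny “Big Code” corpus, then uses those statistics to rank candidate snippets; the corpus, tokenisation and scoring are all simplifying assumptions.

```python
from collections import Counter, defaultdict

# Toy "Big Code" corpus: token sequences taken from open-source snippets.
corpus = [
    "for item in items : process ( item )",
    "for line in lines : print ( line )",
    "for item in items : print ( item )",
]

# Count how often each token follows another (a simple bigram model over code).
bigrams = defaultdict(Counter)
for snippet in corpus:
    tokens = snippet.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        bigrams[prev][nxt] += 1

def score(snippet):
    """Rough probability that a snippet looks like the code in the corpus."""
    tokens = snippet.split()
    p = 1.0
    for prev, nxt in zip(tokens, tokens[1:]):
        total = sum(bigrams[prev].values()) or 1
        p *= bigrams[prev][nxt] / total
    return p

# Rank two candidate completions: the idiomatic one gets the higher probability.
print(score("for item in items : print ( item )"))
print(score("for items in item : ( print item )"))
```

Real systems learn far richer models from millions of programs, but the principle is the same: code that looks like the code people actually write gets a higher probability.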

Vechev says his ERC-funded project takes advantage of a confluence of three factors: the growing body of Big Code, the evolution of machine learning algorithms and recent progress in automated reasoning engines – software a computer uses to think its way through a problem. Because there is no such thing as complete information or data, machines need to be able to reason to fill in the blanks. For example, if a machine were asked in what country Dom Perignon is made, it would need to reason that Dom Perignon is a type of champagne, that champagne is produced only in the Champagne region, and that Champagne lies in France – an example given by the AI startup Grakn AI.
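
That chain of reasoning can be made concrete in a few lines of code. The sketch below is purely illustrative – it is not how Grakn AI or Vechev’s system actually represents knowledge – but it shows how a handful of hand-written facts and three simple rules can answer the Dom Perignon question.

```python
# A hand-written mini knowledge base: (subject, relation, object) facts.
facts = {
    ("Dom Perignon", "is_a", "champagne"),
    ("champagne", "produced_in", "Champagne"),
    ("Champagne", "located_in", "France"),
}

def lookup(subject, relation):
    """Return the object of the first fact matching (subject, relation)."""
    return next((o for s, r, o in facts if s == subject and r == relation), None)

def country_of(product):
    """Chain three simple reasoning steps to infer where a product comes from."""
    category = lookup(product, "is_a")        # Dom Perignon is a champagne
    region = lookup(category, "produced_in")  # champagne is produced in Champagne
    return lookup(region, "located_in")       # Champagne lies in France

print(country_of("Dom Perignon"))  # -> France
```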

The specific tasks Vechev’s team is working on right now are fairly basic. “Within the timeframe of this project, we think this is achievable for small low-level tasks, such as a search for a graph, sorting a set of numbers in a very efficient way or writing a program that manipulates pictures, modifying or cropping an image and saving the result,” Vechev notes. In the long term, he is hoping his team can create a system that is able to write sophisticated code from high-level descriptions. “If the system could beat an expert human in a programming competition, that would be success,” he says.
 
From there, managers might be able to give instructions to computers and robots, just as they do to human beings, rather than meticulously writing precise code. This could trigger a paradigm shift in the way software is created, with implications for millions of developers worldwide. It could also help to streamline and improve the development of machine learning, natural language processing, programming languages and software engineering.

To commercialise his research, Vechev and his former PhD student Veselin Raychev have co-founded an ETH spin-off, called DeepCode, which is developing a new AI-based system for automating code review.

With work like this, some analysts see AI entering a “third wave” of development. In the first wave, in the 1970s, hard-coded AI systems generated decisions that could be interpreted and explained by human beings, but couldn’t deal with uncertainty and noise. The second, current wave of AI solutions, involving deep learning, has its own limitations. “They can be used to learn how to drive a car, for example, but they are not explainable,” says Vechev. “But the system also needs to be interpretable: Code development needs to be transparent. Our Big Code project is an example of a third generation system in that it is a step towards a general AI system, more like the human brain.”

Can robots be delicate?

École polytechnique fédérale de Lausanne

Swiss researchers visit a watchmaking school to teach robots to think like a craftsman

Could robots put Swiss watchmakers out of business? Not for a long time. In fact, robots really struggle to emulate the kind of delicate, fine-tuned manipulation to be found in industries like watchmaking.

 

Researchers wire up the skilled hands of a watchmaker.

It takes four years for human apprentices to learn the intricate skills required to assemble a hand-made Swiss watch. That’s because they are working at a scale where their normal sense of touch alone isn’t sufficient to guide the precise manoeuvres and fine manipulation required to build the tiny mechanisms. Today, that kind of delicacy is beyond any robot. Robots don’t, for example, typically manipulate flexible materials, pack fragile goods such as wine glasses, or perform other tasks that require humans to “feel” their way.

How to close the dexterity gap between humans and robots is the subject of an ERC-funded research project being run by Aude Billard, a professor at the Swiss Federal Institute of Technology in Lausanne. “We roboticists need to understand how humans learn to do tasks that go beyond your natural sense of touch,” says Billard. “In the case of watchmaking, humans manage to control micro-level displacements and forces with precision, which is challenging given that the human sensory system is incredibly noisy,” she adds, referring to the continual input we get from our senses. The questions puzzling Billard are: How do humans learn to place their arms, fingers and joints and constrain them to overcome this sensory-motor noise? How do they manage to model the world in which they’re working?

One hypothesis is that we simplify. For instance, we may learn to forbid our joints to move in a certain way, only allowing the tactile impact to happen in specific directions. And we appear to filter our sensory inputs, focusing on what’s important for the task at hand.  Billard notes that beginner drivers tend to pay attention to everything that is happening around the vehicle, while experienced drivers have learnt that not everything matters; they subconsciously apply the right amount of pressure to the brake and accelerator pedals to adapt to each situation.

The art of watches

What’s so hard about making a watch? The graph here, based on Billard’s studies, shows the difference between a senior and a junior watchmaker: the skilled worker makes fewer time-consuming mistakes.

Here, Billard’s team explains their work in more detail (in French).

This diagram breaks down the task of assembling a watch into different actions. Most of the time goes into preparation.

So that brings her to the watch industry. Her research team is following a cohort of apprentices at one of the most famous watchmaking schools in Switzerland, as they gradually learn, through repetition, to achieve the right level of precision in each task without breaking the delicate mechanisms in the watch. Billard’s team has mounted sensors on the apprentices to measure the position of the fingers and the forces applied. It also plans to employ a device to monitor the electrical activity in the muscles of the apprentices’ forearms.

Of course, her aim is not to disrupt her country’s most famous industry. “We are not interested in teaching robots how to make watches,” Billard stresses. “We are most interested in how robots can learn to implement that kind of process and which variables you need to learn to control precisely.”

Billard’s research team is also developing software that can enable robots to interact with moving objects, so they can, for example, learn to catch items by achieving the right orientation. Although assembling a watch and catching a ball may seem like very different skills, Billard says they both rely heavily on preparation and assuming the right body posture – something human beings learn with practice.

“Skilled watchmakers spend about 80 per cent of the time on a task just placing the finger correctly,” she explains. “Doing the task itself is very quick. The same is true of catching an object. Preparation is important.”

The ultimate goal is to create robots that can learn many different tasks, rather than being hard-wired to perform a specific job. “We want to reach a point where robots have a flexible brain, so that when they get out of the factory, they learn whatever task you want them to do,” Billard says.

Can computers really do math?

Czech Technical University

Sure, computers can do sums. But can they think like a mathematician? In Prague, Josef Urban wants to make computers prove theorems

In 1606, British privateer Sir Walter Raleigh had a puzzle: What’s the best way to stack cannonballs on the deck of a ship? Through a mutual friend, the problem eventually got to Prague and Johannes Kepler, the famous mathematician and astronomer. Kepler’s conjecture: stack the balls in a hexagonal pyramid, with each layer nestled in the spaces created by the balls below.

 

Of oranges and cannonballs: Bradley Moore, of Mayville State University in North Dakota, prepared this explanation of the Kepler Conjecture.

Over the past 400 years, generations of mathematicians have tried to prove Kepler was right. In 1998, an American mathematician, Thomas Hales, developed a proof that other mathematicians certified as 99 per cent certain; but it took nearly two more decades to create a formal proof that was rock solid – and that required the help of computers to check the enormous number of cases involved.

To remove the need for such arduous legwork, AI researchers are trying to get computers routinely to verify and replicate the feats of reasoning exhibited by mathematicians. Josef Urban, principal researcher at the Czech Institute of Informatics, Robotics and Cybernetics in Prague, and Cezary Kaliszyk, at the University of Innsbruck, are seeking to develop computer systems that help to automate the verification of complex mathematical theories and mission-critical software and hardware systems.

Credit: Gerwin Klein

Proving that a huge software program is error-free is no easy task. Those who do it call themselves “proof engineers.”

The value of this work was highlighted in 2009 when a team at National ICT Australia (NICTA) headed by Gerwin Klein completed a ground-breaking formal proof of the correctness of an operating-system kernel – the general-purpose software at the core of an operating system. The NICTA team was then able to state categorically that the kernel will never crash or perform an unsafe operation, and to predict precisely how it will behave in every possible situation. However, the kernel in question is relatively small, comprising just 8,700 lines of code, together with 600 lines of assembler. The challenge for Urban and his team is to create systems that can help with automated verification of much larger programs.

Some mathematical developments, such as the formal proof of the Kepler Conjecture, can involve billions of steps, Urban explains. “Ten years ago, people would have said using computers to verify such theorems is an interesting idea, but you can’t do it in practice,” he says. “Now many more people than just me are thinking that this is something that can be done to a great extent.”

Supported by funding from the ERC, Urban is trying to teach computers to figure out for themselves how to create proofs of mathematical theorems. “We need a big corpus of mathematics for a machine to learn from. (Hales’) project is one of the best examples, so we are using this data to try to train machine reasoning.”

Urban’s main idea is to build feedback loops between what the computer learns by studying the mathematical data and new reasoning techniques. The system does this repeatedly, continually improving its algorithms. The trained networks guide the logical reasoning engines, which in turn produce more and more proofs and reasoning data to learn from – a virtuous circle. In this way, the system gradually gets smarter at applying the reasoning rules that lead to a conclusion. The goal is to create powerful automation services for mathematicians.
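
The virtuous circle is easier to see in a toy example. The sketch below illustrates the feedback loop in general, not Urban’s actual system: a dummy “prover” can only find a proof when the right lemma is suggested to it, and every success is fed back into the statistics that guide the next round of suggestions, so the number of theorems proved grows round by round. The lemmas, theorems and prover are all invented for the illustration.

```python
import random
from collections import Counter

# Toy setup: each "theorem" secretly needs one key lemma, and most theorems
# depend on a small core of widely useful lemmas (a deliberately contrived world).
premises = [f"lemma_{i}" for i in range(50)]
theorems = [{"needs": random.choice(premises[:10])} for _ in range(200)]

usefulness = Counter()  # learned statistic: how often each lemma closed a proof

for rnd in range(5):
    proved = 0
    for thm in theorems:
        # Premise selection: mostly trust the learned ranking, sometimes explore.
        guesses = [p for p, _ in usefulness.most_common(5)] + random.sample(premises, 5)
        if thm["needs"] in guesses:        # the dummy "prover" finds a proof
            usefulness[thm["needs"]] += 1  # feed the proof back as training data
            proved += 1
    print(f"round {rnd}: proved {proved} of {len(theorems)} theorems")
```

Run it and the count of proved theorems rises from round to round: the proofs found in one pass become the training data that makes the next pass smarter.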

Urban’s project is the culmination of years of work focused on bridging the divide between mathematics and computer programming. For that to happen, mathematical theories need to be written in a way that computers can understand and verify. “Then they can read the mathematics and report any mistakes,” Urban explains. “They might even produce some deep elaboration, making millions and millions of steps very clear, so they can be verified by a simple proof checker.”  This is an old dream, first expressed 25 years ago in the QED Manifesto, a proposal by a group of mathematicians to create a computer database of all mathematical knowledge.

The QED Manifesto

“Quod erat demonstrandum” is a mathematician’s equivalent of “The End”: the conclusion of a proof, or “that which was to be demonstrated,” in Latin. But it’s also in the title of a famous project proposed in 1993 to capture all of mathematical knowledge in one huge computer database – in all its rigour, formality and glory. In essence, a silicon god of mathematics. Needless to say, it hasn’t yet been accomplished.

Urban’s research could also make it more straightforward to verify complex software and hardware designs, on which today's information society critically depends. Complete formal verification is the only known way to guarantee that a system is free of programming errors. Such guarantees will become increasingly valuable as computers begin to drive cars, land aeroplanes and take over activities that require high levels of safety.

Can computers figure out what’s important?

Delft University of Technology

The real world is too complex to be modelled in detail, so a Dutch researcher is helping computers prioritise what’s important to get things done

In 2017, a computer beat the world’s top-ranked human at the ancient Chinese game of Go. It was a landmark in artificial intelligence, and used a process known as sequential decision making: a decision is made, an event occurs, another decision follows, another event, and so on. But can such techniques be scaled to address the highly complex challenges confronting human beings in the real world, such as optimising the use of limited road space in dense urban areas? For Frans Oliehoek, the answer is no.

“The God of Go”: That’s how Ke Jie, a grandmaster at the game, described the Google computer that beat him 3 games to nil.

With today’s AI methods, says the associate professor at Delft University of Technology, “you are still using a model which requires training on millions and millions of games. The extent to which current algorithms can learn in the real world is very questionable. If you dropped a robot with these algorithms into a real world situation, it wouldn’t necessarily work.”

For instance, there’s a big gap between a computer learning from the primitive 84 by 84 pixel images of old-fashioned Atari games, and learning from the much higher resolution images required to capture real-world situations, such as the movement of cars around Manhattan. “There are real limits on how large the neural networks can be,” he explains. “We have found you are able to scale the model to two intersections. We have tried using 168 by 168 images, so you see a larger area. But the training becomes very slow already. It is not going to work.”

So Oliehoek is trying a different approach: ignore the detail. His solution is to abstract what is happening across a city, such as Manhattan, to focus on the most important “approximate influence points” – for instance, what enables the traffic lights at a particular intersection to respond correctly.

This is a pragmatic approach. Even if a computer were able to process data tracking the actual movements of Manhattan traffic in detail, much of that information wouldn’t be relevant to predicting the inflow of vehicles into a particular intersection. An enormous amount of computing power would be wasted. “But there are certain things that are going to increase the inflow of the cars, which we can model,” says Oliehoek. “I am fairly confident that we can use the recent advances in artificial intelligence to create the approximate influence points.”
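
A toy example shows the spirit of the idea. In the sketch below – an illustration of influence-based abstraction in general, not Oliehoek’s actual algorithms – a single intersection is controlled using only a rough estimate of how many cars will flow in each minute, instead of a detailed simulation of the rest of the city. The inflow model and the light-switching policy are both invented for the illustration.

```python
import random

def approximate_inflow(hour):
    """Stand-in for a learned influence model: cars expected to arrive per minute."""
    rush_hour = hour in (8, 9, 17, 18)
    return max(0.0, random.gauss(12 if rush_hour else 4, 1.5))

def choose_green(queue_ns, queue_ew):
    """Trivial local policy: give the green light to the longer queue."""
    return "NS" if queue_ns >= queue_ew else "EW"

queue_ns = queue_ew = 0.0
for minute in range(60):                    # simulate one rush-hour
    queue_ns += approximate_inflow(hour=8)  # predicted north-south arrivals
    queue_ew += approximate_inflow(hour=8)  # predicted east-west arrivals
    if choose_green(queue_ns, queue_ew) == "NS":
        queue_ns = max(0.0, queue_ns - 15)  # cars released during the green phase
    else:
        queue_ew = max(0.0, queue_ew - 15)

print(f"after an hour: NS queue {queue_ns:.0f} cars, EW queue {queue_ew:.0f} cars")
```

The controller never sees the rest of the city; it only sees the approximate inflow, which is exactly the kind of detail-discarding shortcut the research is about.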

Oliehoek intends to demonstrate his methods in two domains: traffic light control in an entire city, and robotic order-picking in a big warehouse. In the latter case, the goal is to optimise the actions of multiple robots working in warehouses. Unlike traffic lights, the robots in a warehouse will be moving from location to location, further increasing the complexity of the system.

In essence, Oliehoek is looking to create software that makes decisions by piecing together approximations of what is going on in each part of the system, rather than trying to build an exact simulation of the entire system. Such an approach mimics the way human beings try to anticipate what will happen next. “Do I have a model of your brain when I talk to you?” asks Oliehoek rhetorically. “No, as that would be too complex. But I can still predict the questions you might ask by using approximations.”

One of the most impressive things about the human brain is its ability to prioritise, quickly filtering relevant information out of the enormous amount of data being collected by the body’s sensors. “Humans are very good at picking the right level of abstraction to think about a problem,” says Oliehoek. “Abstraction is going to get a lot more attention in the field of AI. Greater use of abstraction would be getting to the basis of intelligence, which is to establish a hierarchy of problems.”

The traffic network: 4 roads, 3 stop lights

The competing AI programs: Which learns faster?

The AI race: 2 computer programs compete at directing traffic on a simple road network

Next time you’re stuck in city traffic, you may be blaming the stupid traffic lights. And you’re probably right. Oliehoek’s lab is teaching computers to do the job better.

This video, from his lab, shows lots of little cars trying to get from one side of a road map to another, as quickly as possible. The problem: they’re coming from eight different directions, and piling up at three intersections. How to control the traffic lights so they can all pass as efficiently as possible?

The video shows two different programs at work, and the graph charts the average time it takes a car to cross the city. At first, both programs do badly at managing the lights, and the average time soars. But after around 700 cars have passed, the systems start doing it better. They are teaching themselves.

Of course, real cities have more than three intersections, but the research is still underway. Until they finish, better ride your bike.


Read next on ERC=Science2: What can robots learn from ants?

Sign up for the newsletter to get the next instalment – or follow us on Twitter and Facebook.