When machines learn
Big data, artificial intelligence, Industry 4.0 – the new opportunities offered by information technologies will change the world. Take a glimpse inside the world of researchers who teach machines how to think.
A Google search for Roger Federer, the Swiss tennis star, yields some 28,900,000 hits. International football star Lionel Messi even has as many as 61,300,000 entries. But there’s one name that beats both of them hands down: searching for AlphaGo, the computer that defeated a master player of the strategy game Go in March of this year, returns no fewer than 313,000,000 hits. AlphaGo dominated the headlines this spring: machine triumphs over man. For some, AlphaGo’s victory was the ultimate horror scenario, while others saw it as a breakthrough for artificial intelligence.
The master players
Joachim Buhmann, Professor of Computer Science and Head of the Institute for Machine Learning at ETH Zurich, offers a more sober assessment of the situation: “The Go player’s algorithm has, of course, set a milestone in machine learning, but it’s a milestone in a very limited, artificial field,” he says. Since the early days of computer science as a scientific discipline, strategy games have served as a benchmark against which progress is relatively easy to measure. It started with simple games such as Nine Men’s Morris and Draughts. In 1997, IBM’s computer Deep Blue beat the reigning chess world champion Garry Kasparov. Soon thereafter, programmers set their sights on the considerably more complex game of Go as the next milestone.
What is interesting, however, is not the fact that AlphaGo has now claimed victory, but rather how it did so: unlike Deep Blue, it didn’t rely on sheer computing speed, but rather on enormous computing power “combined with a kind of clever learning,” explains Buhmann. But he qualifies this by adding: “Successfully solving such game problems isn’t the major breakthrough, because real intelligence is characterised by making a decision in the face of great uncertainty. And the game setting drastically reduces uncertainty.” His research colleague holds a similar view: Thomas Hofmann is Co-Director of the new Center for Learning Systems, a joint endeavour between ETH and the Max Planck Society. In his words, “We want to build machines that succeed in the real world. Self-driving cars, for instance, are confronted with far more complex and consequential decisions.”
Training in the sea of data
Nevertheless, the approach taken by the creators of AlphaGo to lead their computer to the championship is typical of many other areas of machine learning as well. AlphaGo’s designers first fed the machine 150,000 matches that had been battled out by good players, and used an artificial neural network to identify typical patterns in these matches. In particular, the computer learned to predict which move a human player would make in a given position. The designers then optimised the neural network by repeatedly having it play against earlier versions of itself. In this way, through small but constant adjustments, the network gradually improved its chances of winning. “There are two ingredients that enable this type of learning,” explains Hofmann. “You need a lot of data as learning material,” he says, “and sufficient computing speed.” Both are available today in many areas.
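In rough outline, those two stages can be sketched in a few lines of code. The snippet below is not AlphaGo’s actual programme: the tiny board, the network architecture and the play_game helper are illustrative assumptions, and the real system added tree search and vastly more computing power. But the recipe – imitation learning first, then improvement through self-play – is the same.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

BOARD_CELLS = 9 * 9  # assume a small 9x9 board, purely for illustration

# A toy "move predictor": a flattened board goes in, one score per cell comes out.
policy = nn.Sequential(
    nn.Linear(BOARD_CELLS, 128),
    nn.ReLU(),
    nn.Linear(128, BOARD_CELLS),
)
optimiser = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stage 1: learn to imitate recorded human games (supervised learning).
def imitation_step(position, human_move):
    """position: float tensor of shape [BOARD_CELLS]; human_move: index of the move a human played."""
    logits = policy(position)
    loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([human_move]))
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# Stage 2: improve by playing against a frozen, earlier copy of itself.
def self_play_step(play_game):
    """play_game(current, frozen) must return the log-probabilities of the moves the
    current network chose and the final outcome (+1 for a win, -1 for a loss)."""
    frozen_opponent = copy.deepcopy(policy).eval()
    move_log_probs, outcome = play_game(policy, frozen_opponent)
    # Nudge the network towards move choices that ended in a win (a REINFORCE-style update).
    loss = -outcome * torch.stack(move_log_probs).sum()
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()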
This abundance of data and computing power dramatically changed the approach of developers in the field of artificial intelligence. Buhmann illustrates this with the example of image recognition: previously, image experts had to tell the computer in detail which features it should use to categorise an image as, say, a face. “This meant that we had to rely on the knowledge of experts, and also that we had to describe vast amounts of rules in code,” he recalls. Today, it is sufficient to write a meta-programme that merely defines the basic principles of learning. The computer then works out for itself, from numerous sample images, which features characterise a face. Thanks to Facebook, Instagram, etc., there is no shortage of learning material: “Today we can easily use millions of pictures or more as practice material,” says Buhmann.
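A minimal sketch of what such a meta-programme might look like is shown below, written in Python with the PyTorch library. It only specifies a generic network and a learning rule; at no point does it state what a face looks like. The network layout, the image size and the face/no-face labels are assumptions made purely for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

# A generic learner: nothing below describes what a face looks like.
# The convolutional filters that end up responding to eyes, noses and so on
# are learned entirely from the labelled example images.
classifier = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),  # assumes 64x64 greyscale images and two classes: face / no face
)
optimiser = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def training_step(images, labels):
    """images: tensor [batch, 1, 64, 64]; labels: tensor [batch], 0 = no face, 1 = face."""
    loss = F.cross_entropy(classifier(images), labels)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()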
Computers as doctors
He specialises in image recognition in the medical field. As he explains, this field is precisely where the advantage of machine learning is clearly evident: “We used to try to ask doctors about their specialist knowledge and then implement it in detailed rules,” he recalls, “but that endeavour ended in spectacular failure, because even good doctors often cannot provide clear explanations for their actions.” Today, computer programmes independently trawl through large volumes of image data for statistically relevant patterns. One specific area in which Buhmann and his colleagues use this type of method is cancer research, but the approach is also useful in studying psychiatric illnesses such as schizophrenia, or neurodegenerative diseases such as dementia or Parkinson’s disease.
They have, for instance, developed a programme that helps pathologists assess more accurately how a certain form of kidney cancer is likely to develop. The process involves obtaining patient biopsies and preparing histological sections, using certain dyes to make relevant features visible. The sections are digitised and analysed using machine image analysis methods, for example to count the dividing cancer cells made visible by the staining. The computer then combines such counts with additional data to develop prognoses for specific patient groups. In another project, computers were used to analyse magnetic resonance images of the brains of schizophrenia patients. The image analysis yielded three groups of patients with significantly different activity patterns in the brain. “We learned that there are different kinds of schizophrenia,” explains Buhmann, adding: “Now it’s up to pharmacists and doctors to find the right treatment for each patient type.” It is quite possible that automated analyses of brain images will help with this, too.
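The exact analysis pipeline behind these findings is not reproduced here, but one common way of finding such patient groups is to cluster per-patient activity features, roughly as in the sketch below. The feature matrix is random stand-in data, and k-means clustering is just one of several methods that could be used for the grouping.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per patient, one column per activity feature extracted from the brain scans.
# Random numbers stand in for the real measurements here.
activity = np.random.rand(120, 40)  # 120 patients, 40 imaging features (illustrative)

# Standardise the features, then look for three groups of patients whose activity
# patterns resemble each other more than they resemble the rest of the cohort.
scaled = StandardScaler().fit_transform(activity)
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

for g in range(3):
    print(f"Patient group {g}: {np.sum(groups == g)} patients")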
Language and meaning
Image recognition is to Buhmann what language is to his research colleague Hofmann. “Speech recognition as a branch of artificial intelligence is in particular demand when it comes to human-machine interaction,” explains Hofmann. He hopes that one day he will no longer have to tediously type his destination into a keyboard to tell a self-driving car where he wants to go, but will instead be able to give it spoken instructions on the spot. Hofmann is convinced that it won’t be long now: “Today, we can approach the problem of getting machines to understand text in a completely different way from how we could before.”
Here, too, big data supplies the material the machines use to practise understanding texts. The web is a vast treasure trove of language, a gigantic training ground that helps machines filter out statistical regularities that show them relationships between words. “And it does a much better job of it than we could ever have done with abstract linguistic or phonetic rules,” says Hofmann. This kind of method can also be used to optimise translation programmes or search engines. Hofmann and his team are developing a programme that uses all Wikipedia entries (there are more than 5 million English-language articles) as a basis for learning to link texts and words in a way that makes sense. The links and cross references to other articles, which Wikipedia authors currently still create manually, will in future be added by a computer – faster and more comprehensively than any author would be able to manage. “It starts with the fundamental meanings of words. But then our goal is to get our programmes to understand the meaning of complete sentences and, ultimately, entire discourses,” says Hofmann.
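As a toy illustration of the principle – and emphatically not Hofmann’s system – the sketch below links a snippet of text to the statistically most similar of a few stand-in Wikipedia articles, using nothing but word statistics. The article texts are invented placeholders, and TF-IDF with cosine similarity is simply one standard way of measuring textual similarity.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A handful of stand-in "Wikipedia articles" (titles and snippets are invented
# placeholders; a real system would index all 5+ million English articles).
articles = {
    "Go (game)": "Go is an abstract strategy board game for two players",
    "Roger Federer": "Roger Federer is a Swiss former professional tennis player",
    "Machine learning": "Machine learning is the study of algorithms that improve through experience",
}

vectoriser = TfidfVectorizer(stop_words="english")
article_vectors = vectoriser.fit_transform(articles.values())

def link(mention_context):
    """Return the article whose wording is statistically most similar to the mention's context."""
    scores = cosine_similarity(vectoriser.transform([mention_context]), article_vectors)[0]
    return list(articles)[scores.argmax()]

print(link("the computer that defeated a master player of the strategy game"))
# With this toy corpus, the closest match is "Go (game)".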
On an equal footing with machines
Pie in the sky? Only partly. Translation programmes have already made tremendous progress in recent years. Search engines are constantly improving, and computer programmes are now writing sports updates. Hofmann himself was involved in founding a company called Recommind in the US. Its programmes analyse and sort texts according to their legally relevant content. “We automate document review, which used to take lawyers endless hours,” he explains. Today the company employs 300 staff worldwide and is the market leader in its field.
Recommind is just one example of how new technologies will change even jobs in highly qualified professions. Hofmann is convinced that only a few occupations will escape the impact of this technological change. “To date, machines have taken over repetitive, mechanical jobs. In future, they will also take intelligent decisions,” he says. Buhmann takes a similar view: “The new intelligent technologies will in future supplement or even replace activities performed by well-trained specialists.” For instance, the new possibilities in image analysis will no doubt massively change the work of pathologists. As Buhmann points out: “We will need far fewer pathologists in future – but that means doctors could spend more time on psychological care for the sick.” His colleague Hofmann adds: “In terms of technology, everything is possible. It’s a question of society’s willingness to find creative solutions for dealing with this technological change.”