“Medicine is becoming model-driven”
ETH Professor and computer scientist Joachim Buhmann works intensively on healthcare issues. In an interview with ETH News, he explains how computer models will make their way into the world of medicine, and talks about models that are so complicated that humans can no longer process them alone.
ETH News: Professor Buhmann, our society is currently going through a fundamental transformation process as the result of digitalisation. To what extent is this also changing medicine?
Joachim Buhmann: Forecasting models will play a much greater role in medicine than they do today. Using computers, we can now develop far more complex models than we could in the past. And nowhere is the need for these models more urgent than in medicine. We can collect data from a large number of medical cases and use it to learn about the mechanisms of a disease. At the same time, however, there are so many model parameters to consider that we need machines to process them. Before digitalisation, models whose sheer complexity exceeded what a human could hold in their head were simply out of reach. Today, however, we can work with them – not by designing the models ourselves, but by devising “learning” algorithms, which are then responsible for generating the models. This is known as machine learning.
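To make the idea concrete, here is a minimal sketch (our illustration, not from the interview, using entirely synthetic data): rather than a human writing down diagnostic rules, a learning algorithm estimates a model with many parameters from labelled cases.

```python
# Illustrative sketch only: synthetic data standing in for medical cases.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))   # 1,000 hypothetical cases, 50 measurements each
w_true = rng.normal(size=50)      # a hidden "disease mechanism" for the toy data
y = (X @ w_true + rng.normal(size=1000) > 0).astype(int)  # diagnosis labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# The algorithm, not a human, fits the 50 model parameters from the cases.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```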
Could you give us an example?
One of our projects focusses on cardiology. Cardiologists and radiologists consider the heart from different perspectives and generate different data in the form of ECGs and ultrasound echoes. Computers are already outstandingly good at combining and managing disparate data sets. There is real hope that, in future, machine learning algorithms applied to data of this kind will yield reliable statements and prognoses – probably more reliable than those currently obtained when two overworked doctors sit down together in a meeting room and work out a consensus view.
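One way to picture this combination of disparate data sets – a hedged sketch of our own, with invented feature names and random data – is to concatenate features derived from an ECG and from an ultrasound echo and let a single classifier learn from both views:

```python
# Hedged illustration of multimodal data fusion; all values are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 500
ecg = rng.normal(size=(n, 12))    # hypothetical ECG features (intervals, variability, ...)
echo = rng.normal(size=(n, 8))    # hypothetical echo features (ejection fraction, ...)
outcome = rng.integers(0, 2, n)   # synthetic prognosis labels

X = np.hstack([ecg, echo])        # simple "early fusion": one column block per view
clf = RandomForestClassifier(random_state=0).fit(X, outcome)
risk = clf.predict_proba(X[:1])[0, 1]   # combined prognosis for one patient
print(f"predicted risk: {risk:.2f}")
```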
What are the challenges facing this kind of model-driven medicine?
Initially, it will take a great deal of effort to actually collect the data. On the one hand, this relates to measurement data regarding patients; we call this primary data. At least as important, however, is annotated data: the contents of a patient’s case history, which a doctor has formulated on the basis of the primary data. Medicine is probably one of the most fundamental empirical sciences, and annotated data of this kind represents an invaluable body of knowledge. Access to this data – and how it is used – is a key element of our research. Of course, it is also absolutely vital that we standardise data acquisition across the various medical disciplines, and that hospitals collate their data in a well-specified way to obtain a critical mass of comparable cases. Welcome efforts to achieve this are currently underway as part of the Swiss Personalized Health Network.
This involves collecting personal and sensitive data about your own body, and people want to protect this data from improper use.
It goes without saying that we need to regulate how the data are handled. I don’t want my life insurance premiums to go up based on a genome analysis either. But it is possible to protect data effectively. Data protection issues get a huge amount of publicity these days, perhaps in part because it’s an easy message to get across in the media. The far greater challenge, however – and this is harder to get across – is to understand the data and to put it to use in the first place. That’s something we’re working on. Nobody needs to protect noise.
Doctors also express reservations about passing case histories on to researchers. By doing so, the doctors would provide an insight into their working methods and open themselves up to criticism.
Of course, a doctor will only be prepared to make their data available if they receive assurance that the data will not be used to bring legal claims against them. This must and can be regulated in law.
To what extent will the medical profession change with digitalisation?
Allow me to put it in an exaggerated and simplified way: a doctor is, to a large extent, a moderately well-organised database. A computer will never be a better doctor – doctors are also present at the bedside, and computers can’t communicate empathy satisfactorily – but information systems are far more reliable repositories of knowledge. The capabilities of a doctor that stem from access to knowledge and knowledge generation are undergoing massive changes due to the digital transformation. In our group, we’ve built systems that analyse biopsies from cancer patients. In some cases, computers are now as good as the pathologists, and sometimes even better. What’s more, computers work 24/7 and don’t suffer a performance drop after public holidays.
In addition to your role as a professor, you are also Vice-Rector for Study Programmes at ETH. Is there a need to adapt medical training?
Yes, absolutely. We need to dramatically improve the mathematical education given to doctors. In the new Bachelor’s degree in medicine at ETH, we are trying to diversify the teaching we offer our students. We’re reinforcing the technological aspects and offering medical informatics as a subject – something I’ve pushed hard for.
Do doctors need to become specialists in medical informatics?
A doctor must be able to use computers as a tool. They needn’t necessarily be able to write programs, although it would obviously be great if they could. However, they definitely need to be able to recognise when the computer tells them something nonsensical.
In your research, you work on data processing chains. What does this mean?
Take cancer patients, for example. First, a biopsy is taken from them. This is dissected, and then other doctors annotate it and draw their conclusions. Later, additional sources of information arrive and must be incorporated. And doctors everywhere are already using software tools. A chain of this kind involves applying a variety of complex algorithms to data and ultimately produces a diagnosis, along with a prognosis of the disease’s progression and perhaps a suggested treatment. These conclusions are all predictions. The chain starts with an unbelievably large volume of data, and at the end only a few bits are left. Today, no theory exists for checking whether and where essential bits have been lost along such long data processing chains, and therefore where the chain could be improved. Developing a theory of this kind for robust algorithm design is a key part of my research.
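As a toy illustration of the information-theoretic point (our sketch, not Buhmann’s theory): by the data-processing inequality, each stage of such a chain can at best preserve, and may destroy, bits of information about the raw data – it can never create them. The script below measures the surviving bits empirically on a synthetic chain:

```python
# Toy illustration of bits lost along a processing chain (synthetic data).
import numpy as np
from sklearn.metrics import mutual_info_score  # empirical mutual information, in nats

rng = np.random.default_rng(2)
raw = rng.integers(0, 16, 100_000)      # "raw data": 4 bits per case
stage1 = raw // 2                       # coarser measurement: 3 bits remain
stage2 = stage1 // 2                    # further summary: 2 bits remain
diagnosis = (stage2 >= 2).astype(int)   # final 1-bit decision

for name, z in [("stage1", stage1), ("stage2", stage2), ("diagnosis", diagnosis)]:
    bits = mutual_info_score(raw, z) / np.log(2)  # convert nats to bits
    print(f"I(raw; {name}) ≈ {bits:.2f} bits")
```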
About Joachim Buhmann
Joachim Buhmann (58) is a Professor of Computer Science at ETH Zurich. Originally from southern Germany, he leads the Pattern Recognition and Machine Learning working group. His research focus includes pattern recognition and data analysis, with a special emphasis on methodological questions of machine learning, statistical learning theory and applied statistics. He also serves as ETH Zurich’s Vice-Rector for Study Programmes.
Data in the spotlight
Data is playing an increasingly important role in our society, and is an issue on which ETH Zurich will focus more closely in the coming years. In a series of interviews, ETH News asks researchers at ETH Zurich about the specific topics they are focussed on, and how they see societal development in their field.
Previous interviews in this series:
- Lino Guzzella: “We have to seize this opportunity” (ETH News 20.06.2017)
- Srdjan Capkun: “It’s always a compromise” (ETH News 19.07.2017)