Protecting sensitive data, providing valuable insights.
Text: Tim Schröder
Medical details are strictly confidential. However, analysis of this information can reveal complex interrelations and therefore help patients. Led by medical ethicist Bernice Elger, an interdisciplinary team is looking at how this valuable information can be used safely and wisely in the future.
For many joggers today, the pulse watch is something they take for granted as much as a good pair of running shoes. Many people want to know how fit they are and whether their training is paying off. Now, industry is even developing fabrics that can measure lactate levels from sweat – a key health indicator that tells athletes, as well as older people, how well their metabolism is working.
The amount of health data of this kind has increased significantly in recent years thanks to ever-smaller sensors and advances in microelectronics. This data is a treasure trove of detailed information on the state of patients’ health – especially when it is analyzed using artificial intelligence, because computers can detect previously unknown connections within the data that humans could not spot on their own.
In the interdisciplinary collaborative project Explain, a team led by the University of Basel is investigating how blood pressure and heart rate readings or patients’ oxygen saturation levels during an operation can be used to monitor anesthesia and assist anesthetists.
It is therefore conceivable that, in future, computers will be able to use such readings to identify complications more quickly – for instance, whether a heart attack is imminent. “Today, quite a lot of patient data is stored in hospitals that could be used to develop assistance functions involving artificial intelligence,” says Professor Bernice Elger, Director of the Institute for Biomedical Ethics at the University of Basel. Often, however, this data remains untouched because of data protection concerns.
“In our project, we are exploring from an ethical perspective whether and in what way this information could be utilized in future for digital assistance functions,” she explains. One thing is clear: on the one hand, data protection is highly valued; on the other, it makes sense to use the data if it can ultimately benefit patients.
An interdisciplinary team
Bernice Elger has brought together computer scientists, doctors and legal scholars for the project, which is funded by the Swiss National Science Foundation. For example, the computer scientist Carlos Andrés Peña and his team at the Haute Ecole d'Ingénierie et de Gestion du Canton de Vaud (HEIG-VD) are trying to teach computers and algorithms how to explain their decisions to people.
Until now, algorithms and so-called neural networks have operated like a black box. They are trained using data to tackle a particular problem and then suggest solutions. How they have arrived at a solution remains unclear. This is problematic when algorithms present incorrect results – and people trust them blindly.
A few years ago, there was the widely publicized case of the IBM software Watson for Oncology. The software had been trained to recognize different types of cancer and suggest treatments. However, because the training data was deficient, the software often drew the wrong conclusions. When the problem became public knowledge, there was an outcry among experts across the world.
A more benign example is that of an algorithm that was trained to recognize footballs in images. A closer analysis of the software showed that the algorithm identified footballs based on the criteria “black and white”, “hexagonal”, and “green”, as turf was visible in many of the photos. This example demonstrates how quickly such spurious correlations can creep in. At the HEIG-VD, experts are now working on solutions to provide “explainability”.
Recommendations rather than diagnoses
“Studies show that doctors often rely on what a software package is telling them and doubt their own judgment,” Bernice Elger says. “This is disastrous when the computer is providing incorrect results.” It would be safer if algorithms were just an aid to decision-making and could give reasons for their suggestions.
That is why the Explain project includes a team from the Department of Computer Science at ETH Zurich, which is developing approaches to support decision-making. For example, an algorithm could guide doctors through a diagnosis with red and green arrows, working through criteria one after another to produce the right result.
Good and bad hackers?
Bernice Elger says that the Explain project is special because it is open to tackling new questions that arise during the course of the project. “We can see firsthand how new problems crop up and then think about how to approach them from an ethical perspective.” Data protection, for example, was an issue from the start.
After a while, the question arose of how to deal with hackers. Not only is there a class of “bad” hackers who steal data or disable systems, there are also “ethical hackers” who look for weak points in computer systems in order to expose the problems – and plug the gaps. “In our project, we are also addressing the question of how ethical hacking should be treated in law. In addition, together with the Faculty of Law at the University of Zurich, we are thinking about how data protection law provisions can be adapted to enable us to use data for research without reducing protection for the people concerned.”
Bernice Elger explains that the idea for the project emerged from a long-term collaboration with Professor Luzius Steiner, the head of anesthesiology at University Hospital Basel. Having worked previously as a specialist in internal medicine, she is very familiar with how hospitals operate. “Luzius Steiner and I came to the conclusion that we can’t afford to leave patient data lying idle on hospital servers, as there are many ways in which it could be used. But that raises technical and ethical questions.”
The purpose of Explain is to bring the different disciplines together to resolve those questions. “When doctors and specialists in artificial intelligence work together closely, that helps us to develop technology that is easy to understand and to put it to good use in day-to-day clinical practice.”
More articles in the current issue of UNI NOVA.