Who makes the final decision?

Protecting citizens in the face of disaster often requires far-reaching decisions to be made. Any assistance is welcome – including from AI.

(Illustration of a wildfire: Ray Oranges)

Wildfires are increasingly getting out of control, as shown by recent events in California and Australia. Yet firefighters continue to battle tirelessly against the flames – and nowadays they have more at their disposal than just water and controlled burns. Digitisation has long been part of their arsenal in the form of geoinformation systems, webcams and drones. These have become key tools in predicting and controlling wildfires, yet the huge quantities of data they produce quickly push human expertise to its limits. “AI is always useful when you’re dealing with masses of data,” says Benjamin Scharte, who heads the Risk and Resilience Research Team at the ETH Center for Security Studies (CSS). Recently, he and his colleague Kevin Kohler teamed up to analyse the use of AI in civil protection.

“Being able to use algorithms to make predictions is pretty exciting,” says Kohler. Which direction is the fire front heading? Where should we set the next controlled burns? By crunching all the available data, AI-based modelling tools can help answer these questions. This data might include weather forecasts, drought duration, wind direction – and even the potential amount of fuel available to the fire. The resulting predictions can make disaster response more efficient. In the best-case scenario, they can even act as a form of prevention.
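To make this concrete, the sketch below shows, in Python, what such a data-driven prediction might look like in its simplest form. It is purely illustrative: the features (wind speed and alignment, drought duration, fuel load), the synthetic training data and the random-forest model are assumptions chosen for the example, not the tools actually used by civil protection agencies.

```python
# Illustrative sketch only: predict whether a fire front will spread towards
# a given sector, from the kinds of inputs mentioned in the article
# (weather, drought duration, wind direction, available fuel).
# All feature names and data here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical training data: one row per past fire situation.
# Columns: wind speed (km/h), wind alignment with the sector (-1..1),
# days of drought, fuel load (t/ha).
X = rng.uniform(
    low=[0, -1, 0, 0],
    high=[80, 1, 60, 30],
    size=(500, 4),
)

# Toy stand-in for historical outcomes: spread is more likely when the wind
# is aligned with the sector, the drought is long and fuel is plentiful.
risk = 0.02 * X[:, 0] * np.clip(X[:, 1], 0, None) + 0.03 * X[:, 2] + 0.05 * X[:, 3]
y = (risk + rng.normal(scale=0.5, size=500) > 2.0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# A new situation: strong, well-aligned wind, long drought, heavy fuel load.
situation = np.array([[45.0, 0.8, 35.0, 20.0]])
print("Estimated probability of spread into sector:",
      model.predict_proba(situation)[0, 1])
```

Operational systems combine statistical models like this with physical fire-spread simulations and live sensor feeds; the sketch only shows the basic pattern of turning situational data into a probability that can support a decision.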

Civil protection is particularly receptive to the use of AI because, all too often, it is a matter of life and death – and every minute counts. Experts are often expected to make snap decisions with far-reaching consequences, so they are grateful for any assistance that can underpin those decisions with more robust data. Ultimately, however, the quality of a decision always depends on the quality of the data. “However smart my algorithm, it will be of little use in an emergency if I can’t supply it with the right data for the disaster,” Kohler cautions.

Even the highest quality data can never fully replace the experience gained by experts over many years, so the question of whether a human or a machine should make the final decision is highly complex. On aggregate, an algorithm might conceivably cause less economic damage or fewer casualties than its human counterpart, but it may also make decisions in individual cases that we find unacceptable. “It’s clear to me that we, as a society, will continue to struggle with the idea of leaving decisions to autonomous machines,” Scharte says.

A matter of trust

So at what point might we be willing to let a machine make its own decisions? Scharte and Kohler agree that this depends on the context: “Civil protection is sometimes a matter of life or death. Humans should play a part in making those decisions – it’s not the place for machines to make fully autonomous decisions.”

A crucial factor is how much faith people have in the algorithm. Trust paves the way for acceptance, and both are enhanced when we are able to clearly follow what an algorithm is doing. For example, when doctors understand the decision logic of an algorithm, they are more likely to trust it and incorporate it in their work. Numerous studies have confirmed this – but Scharte sounds a note of caution: “Transparency and explainability don’t always increase security.” There are even cases where transparency might be a disadvantage, including man-made hazards such as cybercrime and terrorism. “If you reveal exactly how an algorithm detects suspicious patterns of behaviour, then adversarial actors have better odds of deliberately outsmarting it,” warns Scharte.

This text has been published in the current issue of the Globe magazine.

Further information

Center for Security Studies (CSS)
