Ethics guidelines galore for AI – so now what?

Together with researchers from the Health Ethics and Policy Lab, Anna Jobin has investigated which ethics guidelines for artificial intelligence already exist, and finds that ethical AI is by no means merely a technical matter.

Anna Jobin

Artificial intelligence, or AI for short, has been receiving heightened public attention for only a few years, yet the issue has already been overtaken by a new topic: today everything revolves around “ethical AI”. Numerous, very diverse organisations have issued ethics guidelines for AI or statements on the subject. Amid such a plethora of publications, it’s not easy to get a clear picture. What are these documents actually saying? Is the wheel being reinvented every time? What understanding of ethics prompts these recommendations on ethical AI? And who determines this? With new guidelines being issued each month, an inventory of sorts seems indispensable.

No single common ethical principle

The results of our comprehensive study “The global landscape of AI ethics guidelines” were recently published in Nature Machine Intelligence. The first thing to surprise me was that no single ethical principle was common to all of the 84 documents on ethical AI we reviewed. Still, five principles are mentioned in more than half the sources: transparency, justice and fairness, non-maleficence (doing no harm), responsibility, and privacy. Overall, we identified eleven ethical principles.

AI has an impact on our society: when it comes to ethical issues, the human being should be at the centre. (Image: Colourbox)

The second thing to surprise me, or rather to disillusion me: sustainability is mentioned in only one sixth of all guidelines and recommendations; human dignity and solidarity occur even less often. This is particularly astonishing considering that two of the major challenges related to AI – energy consumption and climate change on the one hand, and restructuring of the labour market and feared job losses on the other – involve precisely these principles. From an ethical point of view, this marginalisation of sustainability, dignity and solidarity is disturbing. In my view, ethics guidelines should not ignore the major concerns people have, but instead must grapple with them.

Same term, diverging interpretations

When it comes to policy on a more granular level, there was also little agreement on what “ethical AI” means: all ethical principles referred to in the documents are interpreted in different, sometimes conflicting ways, and point to different courses of action. For example, whereas some guidelines endorse fostering trust in artificial intelligence, others warn against placing too much trust in AI systems.

“Ethical artificial intelligence is not merely a technical issue; ethics must also be part of the governance of AI.” – Anna Jobin

Varying interpretations of ethical principles are not necessarily problematic per se. AI is a general term covering various techniques, and it is both plausible and sensible that different organisations and sectors may wish to highlight different issues. But some unspecific, vague statements on ethical AI lead me to suspect that they may be more akin to a PR campaign, or “ethics-washing” – a way of getting around regulation – rather than a serious examination of ethical issues.

Inclusive governance

A few months ago, the Swiss Digital Initiative (SDI) was launched in Switzerland with the official aim of implementing ethical standards in a digital world. In light of our study, this prompts the question: which standards are meant here? In a first draft paper, the SDI alludes to the key principles of the UN Report on Digital Cooperation, the collaboratively developed Ethically Aligned Design (EAD) standards, and “applicable laws and regulations” more broadly.

Taking account of what already exists is a good start, but there’s a lot of work ahead. The EAD document, for example, repeatedly underscores the importance of involving all stakeholders. This, of course, relates to social and political structures and processes rather than to technology strictly speaking. As I see it, everything boils down to this: ethical artificial intelligence is not merely a technical issue; ethics must also be part of the governance of AI. This entails a participatory approach that includes stakeholders and civil society. Given the impact AI has on many areas of our society, I believe civil society must have a say in which values we apply to AI and how these are implemented.

The article also appears as an opinion piece in the Neue Zürcher Zeitung.

Reference

Jobin A, Ienca M, Vayena E: The global landscape of AI ethics guidelines. Nature Machine Intelligence, vol. 1, pages 389–399 (2019).