“A development freeze would jeopardise transparency”

In an open letter, tech luminaries from the worlds of science and industry are calling for a freeze on training new AI models that are more powerful than GPT-4. AI experts Andreas Krause and Alexander Ilic from the ETH Zurich AI Center consider this demand difficult to enforce and associated with risks of its own.

Andreas Krause (left) and Alexander Ilic (right). (Photograph: Nicola Pitaro / ETH Zürich)

ETH News: Mr Krause, the letter calls for a freeze on the training of artificial intelligence (AI) systems that are more powerful than GPT-4. Is such a drastic measure necessary?
Andreas Krause: I doubt that this demand can be enforced, because there are huge commercial and strategic interests behind the development of these models. What’s more, it is difficult to specify exactly what should be restricted without distorting competition and jeopardising innovation in applications. Even if such a moratorium were declared, nobody could prevent work on training such models from continuing covertly.

That would lead to less transparency.

Krause: Exactly. This creates the danger that development which was previously largely open and transparent becomes inaccessible and opaque. It would then be virtually impossible, for example, to understand which datasets current models have been trained on and what biases or flaws they carry. There are already indications of this happening today.

So putting a freeze on development is not a good idea.
Alexander Ilic: No, because there are big question marks over the trustworthiness, reliability and interpretability of the language models currently in use. These questions are of crucial importance, and answering them calls for more open research and critical scrutiny across disciplines, not less.

What do you suggest as an alternative?
Krause: While fundamental research is needed to develop the next generation of safer and more trustworthy AI technology, we should also push interdisciplinary research forward and show how these technologies can be used for the benefit of humankind. Only when AI is reliable and trustworthy can it be used appropriately in healthcare, for instance, and serve as a useful tool for society.

What role does the ETH Zurich AI Center play in all this?
Krause: At the ETH AI Center we combine fundamental and interdisciplinary research. Our goal is to promote technologies and areas of application that benefit society. What’s more, our research is open and transparent.

Ilic: We want to counteract the trend we are witnessing of AI research being increasingly conducted behind closed doors, and focus instead on open and interdisciplinary cooperation between research, industry and start-ups. We believe that important contributions will emerge especially at the interfaces between disciplines (e.g. AI and medicine, AI and the humanities). We have therefore created a fellowship programme to attract the world’s best talent and bring them together at the ETH Zurich AI Center. With women making up 50% of our staff and employees from over 26 countries, we have also been able to create a culture right from the start that critically discusses the opportunities and risks of AI and helps shape it responsibly.

The authors are also calling for the creation of an independent review body to develop safety protocols for AI design and development during the moratorium. What do you think of that?

Ilic: The development of testing procedures and the certification of AI-based technology is certainly an important issue and must be pursued in the context of specific applications. But it is also important that we train new language models transparently and actively shape research, rather than devoting ourselves entirely to auditing and reviewing existing models. This is the only way we can ensure that systems become more trustworthy, safe and reliable. The tech giants pursue commercial interests and will therefore tend to focus on the biggest markets and the largest cultural and linguistic regions. For this reason, we have joined the European AI research network ELLIS to help shape the AI world according to European values. But there is still a lot of potential to promote diversity even further. For example, we could specifically build open datasets for different cultural and linguistic groups, or, when collecting feedback from humans, researchers could take respondents’ cultural backgrounds into account and thereby reduce bias in the resulting models. Commercial providers cannot be forced to do this themselves. But by handling its own data openly and transparently, research could make it easier for companies to make their systems more trustworthy.
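How that last point could look in practice is sketched below: each piece of annotator feedback is stored together with a self-reported cultural or linguistic group, and the preference data is then rebalanced so that no single group dominates later training. This is a minimal illustration under assumed names (FeedbackRecord and rebalance_by_group are hypothetical), not a description of any existing pipeline.

```python
# Hypothetical sketch: recording annotators' cultural/linguistic
# backgrounds alongside their feedback and rebalancing the data so
# every group contributes equally to later training. Illustrative only.
from collections import defaultdict
from dataclasses import dataclass
import random

@dataclass
class FeedbackRecord:          # hypothetical data structure
    prompt: str
    preferred_answer: str
    annotator_group: str       # e.g. self-reported language/cultural region

def rebalance_by_group(records: list[FeedbackRecord], seed: int = 0) -> list[FeedbackRecord]:
    """Downsample so every annotator group contributes equally,
    preventing the largest group from dominating the feedback signal."""
    by_group: dict[str, list[FeedbackRecord]] = defaultdict(list)
    for r in records:
        by_group[r.annotator_group].append(r)
    n = min(len(group) for group in by_group.values())  # size of the smallest group
    rng = random.Random(seed)
    balanced: list[FeedbackRecord] = []
    for group in by_group.values():
        balanced.extend(rng.sample(group, n))
    rng.shuffle(balanced)
    return balanced
```

Equal-per-group downsampling is only one possible strategy; reweighting, or collecting additional data for underrepresented groups, would serve the same goal.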

The open letter also warns that new language models could spread propaganda and lies. Do you agree with this?
Krause: Generative AI models have developed rapidly in recent months and can now produce ever more realistic text and images. These can indeed be used for misinformation campaigns. Although research is also being carried out into how such text and images can be recognised, this development does pose a real risk.
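One strand of the detection research Krause mentions scores a passage with an open language model: text the model finds unusually predictable (low perplexity) is somewhat more likely to be machine-generated. The sketch below illustrates that signal, assuming the Hugging Face transformers library and GPT-2 as the scoring model; it is a toy heuristic, not a dependable detector, and the model choice and any decision threshold are assumptions.

```python
# Sketch of perplexity-based scoring as a (weak) signal for
# machine-generated text. Assumes torch and transformers are installed;
# GPT-2 is used purely as an example scoring model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean
        # cross-entropy loss over all predicted tokens.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# Lower perplexity means the text is more "predictable" to the model,
# which is one weak indication that it may itself be model-generated.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```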

The authors also see the danger of people losing their jobs through the use of AI, or even of being automated away by machines at some point. Isn't that exaggerated?
Krause: It annoys me that no distinction is being made between risks we need to take seriously, such as the worry about misinformation, and science fiction, such as our world being taken over by machines. This makes it difficult to engage in an informed discussion of the actual risks. AI will certainly change the professional world permanently. It is always more difficult to imagine which new jobs and occupational fields will emerge than which existing ones might be automated away.

Ilic: There were similar concerns in the past with other new technologies (industrialisation, digitalisation, etc.). People are more likely to be replaced by others who can work with AI than to have their jobs taken over entirely by AI itself. For this reason, it will be essential to support the population and industry through this transformation.

Andreas Krause is Chair of the ETH Zurich AI Center and Professor of Computer Science at ETH Zurich, where he heads the Learning & Adaptive Systems group.

Alexander Ilic is Executive Director of the ETH Zurich AI Center.
