"It's time to get rid of the CEO"

Luisa Bomke

Jan 23, 2026

Munich. Stuart Russell is one of the world's most influential researchers in the field of artificial intelligence (AI). The professor has been teaching at the University of California, Berkeley, for 40 years and has shaped the debate on the opportunities and risks of the technology.

Russell is co-author of one of the most important textbooks in the discipline. More than 1500 universities use the standard work "Artificial Intelligence: A Modern Approach" in their teaching.

In an interview at the DLD conference in Munich, Russell explains to Handelsblatt why he has gone from technology optimist to cautionary voice, why he thinks current regulation is misguided and how high he believes the risk of AI getting out of control is.

Read the full interview with Berkeley professor Stuart Russell here:

Mr Russell, you co-wrote the world's most influential AI textbook and are now regarded as one of the most prominent warning voices. When exactly did the optimist become a sceptic?
I don't think "sceptic" is the right word. It's more of a compliment to AI when I say that it could become so powerful that it becomes a threat - unless it is developed properly. Someone who makes sure that nuclear reactors don't explode is not a sceptic about nuclear energy. That is simply common sense.

When did you realise the dangers?
I have been working on AI for around 50 years. The first edition of the textbook was published in 1995, and even then there was a chapter at the end entitled: "What if we succeed?" At the time, it seemed to me, firstly, that we were still a very, very long way from AI systems that could pose any risk at all. And secondly, that as soon as the risks became apparent, we would find ways to prevent them, i.e. develop approaches that came with real guarantees.

And when did that change?
From 2012 to 2014, I was on sabbatical in Paris and had a lot of time to think about where the field was heading. I realised that we had no real roadmap for how we would achieve systems - let's call it artificial general intelligence (AGI) for simplicity's sake - that are at least equal to humans in every dimension. But I could see that such a roadmap could emerge.

And your reaction at the time?
I started to take this question really seriously: What if we succeed? How do we stay in control? So I started working on exactly that. Since then, we have developed initial answers and a theoretical framework that can be used to demonstrably build AI for the benefit of people.
