Artificial Intelligence – Between Hype and Hysteria
Prof. Bert Heinrichs, head of the research group “Neuroethics and Ethics in AI” at the Institute of Neuroscience and Medicine – Brain and Behaviour (INM-7), part of the Helmholtz Research Field Information, provides insights into current ethical issues in Artificial Intelligence. (Source: Forschungszentrum Jülich – Press Releases)
Hardly a week goes by without warnings of the dramatic impacts of the rapid development in the field of Artificial Intelligence (AI). Recently, a group of prominent figures, among them Sam Altman, head of the company OpenAI, issued a widely publicized warning: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Mr. Heinrichs, what is the current debate about the dangers of AI essentially about?
Almost ten years ago, the philosopher Nick Bostrom, who teaches at the University of Oxford and directs the Future of Humanity Institute there, warned of a global catastrophe in his book Superintelligence: Paths, Dangers, Strategies. Bostrom devised a scenario in which an AI, tasked with producing paperclips as efficiently as possible, ultimately wipes out humanity because it consumes all available resources in pursuit of its given goal. The AI also resists attempts to stop the process, because they are incompatible with its original task. This inevitably calls to mind HAL 9000, the onboard computer of the spaceship Discovery in Stanley Kubrick’s 1968 film 2001: A Space Odyssey. HAL likewise resists being shut down by the crew, who suspect a malfunction, and begins to kill them. In the end, one crew member manages to outwit HAL and deactivate it.
The motif is the same in the film as in reality: loss of control. Are we not in danger of developing an overpowering technology that we can no longer control?
And do you share this assessment?
Viewed soberly, such a horror scenario is rather unlikely in the near future – and the same applies to overly euphoric utopias, such as those described by the inventor and publicist Ray Kurzweil. In his 2005 book The Singularity Is Near: When Humans Transcend Biology, he rhapsodizes about the impending fusion of technology and biology and prophesies the imminent emergence of the singularity – a techno-biological superintelligence that will solve all earthly problems. The fear of loss of control and the extinction of humanity is thus countered by the hope of a technological paradise.
Why do you consider these scenarios unlikely?
For that, AI systems would have to develop entirely new properties that they do not yet have, such as independent goals and desires, or consciousness. Of course, it cannot be ruled out that it will someday be possible to equip artificial systems with these properties, or that they may “emerge” without targeted programming, as some critical voices warn. Former Google employee Blake Lemoine caused a stir last year when he claimed that the Google AI system LaMDA had consciousness. Independent checks, however, could not confirm his claim. What is much more important: we still understand properties such as independent goals, desires, and consciousness far too poorly to make reliable predictions. We therefore need more interdisciplinary basic research in order to better assess possible development paths.
Even though it is far less entertaining than the overblown scenarios à la Bostrom and Kurzweil, a solid assessment of the risks can only be achieved by looking more closely at the details of current research and development. One quickly realizes that there are quite a few serious problems that should be addressed through policy measures.
Which ones do you specifically have in mind?
A first important step is to achieve conceptual clarity. Speaking of “the AI” is misleading. In fact, the term “artificial intelligence” today covers a variety of different approaches in computer science. Particularly relevant is so-called deep learning, which three of the world’s leading researchers in this field, Yann LeCun, Yoshua Bengio, and Geoffrey Hinton, have described as follows: “Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction.” As you can tell, things quickly become very technical here. But it is also clear that this is not about an artificial being that has consciousness and, comparable to humans, a very general ability to solve problems. It is a class of techniques in modern computer science – nothing more and nothing less. This finding should not obscure the fact that deep learning can be very powerful and often produces surprising results.
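To make the quoted definition a little more tangible, here is a minimal sketch – not part of the interview – of what “multiple processing layers” look like in code. It uses the PyTorch library; the layer sizes and the dummy input are illustrative assumptions only.

    # Minimal illustration of "multiple processing layers" (PyTorch).
    # All sizes and the random input are illustrative assumptions.
    import torch
    import torch.nn as nn

    # A small feed-forward network: each Linear + activation pair is one
    # processing layer; stacking them lets the model learn increasingly
    # abstract representations of the input data.
    model = nn.Sequential(
        nn.Linear(784, 256),  # layer 1: raw input -> low-level features
        nn.ReLU(),
        nn.Linear(256, 64),   # layer 2: low-level -> higher-level features
        nn.ReLU(),
        nn.Linear(64, 10),    # layer 3: features -> class scores
    )

    x = torch.randn(1, 784)   # dummy input, e.g. a flattened 28x28 image
    scores = model(x)         # forward pass through all layers
    print(scores.shape)       # torch.Size([1, 10])

The point is purely structural: a “deep” model is a stack of such layers whose numerical parameters are adjusted during training – nothing in it amounts to goals, desires, or consciousness.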
What relevance does AI have for everyday life?
In fact, we encounter AI at every turn in everyday life. Whether in Internet search engines, in the recommendation systems of providers like Amazon or Netflix, in semi-automated driving and navigation aids, or in smart home systems – AI is used everywhere. Some people may find these services eerie, but they are not really threatening. There are, however, other contexts in which AI has already made its entrance, and where its use could have far more dramatic consequences than an unsuitable movie recommendation – in medicine, for example. AI is, or very soon will be, used to support diagnosis and therapy decisions. But what if an AI-based diagnostic system overlooks a dangerous brain tumor or makes a fatal therapy decision? These are real problems that need to be addressed. There is now a broad consensus among experts that the automation of processes should not be taken too far where important decisions are concerned. Specifically, this means that diagnostic systems may be used for support, but they must not completely replace the decision of a doctor. As a tool in human hands, AI systems can undoubtedly lead to significant improvements.
So you also see great potential?
Of course, but AI will not turn the world into a paradise. It is a technology that, like all technologies, can certainly help solve some of humanity’s problems, but not all of them. In addition, it threatens to create new problems, or at least to exacerbate some existing ones. These include, among other things, the high energy requirements of deep learning applications, which have only recently begun to be discussed intensively but which must be taken very seriously in view of climate change.
What approach do you recommend for classifying and evaluating this technology?
Some time ago, the Canadian philosopher Jocelyn Maclure advised in a journal article to adopt a “deflationary view” of AI, one that aims at a realistic assessment of the opportunities and risks. He also warned that “inflationary views”, which one-sidedly dramatize either the opportunities or the risks, distract from the real problems. That currently seems to be a real danger. Instead of discussing extinction scenarios, we should talk more about implementation rules and focus on the very real disadvantages that AI applications can have for particular groups. What if biases in training data cause medical diagnostic systems not to work for ethnic minorities? What if fully automated processes result in some people no longer getting bank loans, without knowing why? What if the clever use of AI makes it impossible to recognize fake news? These are real dangers that we urgently need to counter with policy measures.
A recent editorial in the prestigious scientific journal Nature argues along similar lines: “Stop talking about tomorrow’s AI doomsday when AI poses risks today,” it says. The horror scenarios allow a small group of very influential tech companies to dominate the discourse on AI. In fact, however, it is precisely this group that is responsible for ensuring that its products are safe in every respect. If there really are indications of emergent properties in systems like GPT-4 or LaMDA, then it is the manufacturers of these systems who must take effective measures, rather than indulging in headline-grabbing warnings.
What could such measures look like?
The much-criticized European Union initiated a legislative process in April 2021 that will result in the world’s first comprehensive AI regulation. On June 14, 2023, the members of the European Parliament adopted their negotiating position on the AI Act. Negotiations with the EU member states in the Council now follow. At the heart of the draft is a differentiation by risk classes: depending on how risky an AI application is, different measures apply, up to and including a complete ban. Regardless of whether one finds every detail of the draft convincing, the EU’s approach is on the right track, because it follows neither the hype nor the hysteria around AI. It looks at AI for what it is: a multifaceted new technology that offers opportunities and risks, which need to be analyzed soberly in order to enact appropriate regulatory measures. Instead of lurid warnings, we need concrete rules that counteract discrimination and social injustice caused by AI, rule out fatal wrong decisions, prevent disinformation and manipulation, and promote a self-determined handling of this technology.
The original press release can be found at:
Künstliche Intelligenz – zwischen Hype und Hysterie (in German only)
Classification within the Helmholtz Research Field Information:
Helmholtz Research Field Information, Program 2: Natural, Artificial and Cognitive Information Processing, Topic 5: Decoding Brain Organization and Dysfunction
Contact:
Prof. Dr. Bert Heinrichs
Group Leader at the Institute of Neuroscience and Medicine (INM)
Brain and Behaviour (INM-7)
Phone: +49 2461/61-96431
E-Mail: b.heinrichs@fz-juelich.de
