ChatGPT and other AI models: We need to talk about artificial intelligence
The hype surrounding the AI chatbot ChatGPT has triggered an unprecedented public debate about artificial intelligence – and that’s a good thing.
When ChatGPT was released in November 2022, the service recorded around one million registrations within five days. Just two months later, over 100 million people worldwide were using the AI chatbot. This makes it the fastest-growing consumer app in history.
Microsoft is now using the AI chatbot developed by OpenAI in its Bing search engine and selected Office applications. Meanwhile, Google and Meta, as well as Chinese tech giants such as Baidu, Tencent and Alibaba, have announced that they are working on their own AI chatbots.
Within a very short time, considerable hype developed, fueled in part by media outlets that portrayed AI chatbots as a revolutionary technology on a par with the invention of the printing press or the steam engine. By the time the first schoolchildren and students began having their homework and presentations written by AI, the topic had reached the mainstream for good.
Artificial intelligence heats things up
It is clear that AI chatbots bring both opportunities and risks. The opportunities certainly include efficiency gains in almost all industries as well as entirely new ways of supporting people in their everyday lives – for example, when AI is used in search engines, household appliances, cars or the service sector. At the same time, AI chatbots also harbor risks, such as the potential misuse of user data, ethical concerns about discrimination and bias, and the threat of job losses. Moreover, an AI is only as good as its training data: outdated or insufficiently verified data can lead to the spread of misinformation.
As a result, two particularly vocal camps have formed in the public debate: those who see artificial intelligence as a panacea and would like to throw all concerns overboard, and those who see the new AI models as an existential threat to humanity and want to halt their development and use immediately.
The Future of Life Institute has been particularly prominent in this regard with an open letter in which well-known signatories such as Elon Musk and Apple co-founder Steve Wozniak call for an immediate pause in the development of new AI models. Eliezer Yudkowsky goes a step further, demanding: “Pausing AI Developments Isn’t Enough. We Need to Shut it All Down”.
Artificial intelligence: voices from Karlsruhe
But what do AI experts in Karlsruhe think about current developments? We asked them.
“Any critical and objective discourse on developments in AI is to be welcomed in principle. One may question whether a six-month moratorium is a feasible measure – and the suggestion of an emergent AI system that could pursue its own agenda seems inappropriate to me. However, some of the risks mentioned in the letter are very real. For example, in addition to undesirable effects on the labor market, negative effects on online debates and opinion-forming processes are also to be expected. In general, we need to weigh up how far we are willing to become dependent on a technology with negative side effects and the potential for human manipulation. Calling for a general halt to AI development seems to go too far, though. But it is advisable to think now about how to use it sensibly and to involve the population in the process.”
Prof. Dr. Achim Rettinger, Director of Information Process Engineering at the FZI Research Center for Information Technology
“It is important that we in Germany and Europe start doing our homework now. The LEAM consortium, the ‘Large European AI Models’ initiative of the German AI Association, focuses on providing the computing capacity needed to create and train large language models in Europe. This is an important step, particularly with regard to data protection and data security, in order to avoid becoming completely dependent on the providers now in the spotlight.”
Tobias Joch, Managing Director of inovex GmbH
“I think it’s far-fetched to see AI systems like ChatGPT as an existential threat to humanity in the near future. In my view, the big, very immediate risk is that fact-based social debates will become even more difficult as the flood of false information, whether intentional or unintentional, continues to grow. At the same time, AI systems have huge potential to make interaction with computer systems more human-friendly and to relieve us of routine tasks. We should take advantage of this.”
Christoph Amma, Kinemic
“The development of powerful artificial intelligence has great potential, but it must be advanced more slowly in order to maintain ethical standards. This, however, requires ensuring that players worldwide join the effort – if necessary by means of state regulation. For Europe to rush ahead with regulation on its own would be not only useless but counterproductive.”
Carsten Kraus, AI & Data Science Expert, CK Holding GmbH
Artificial intelligence: open discourse is important
An open discourse on the pros and cons of artificial intelligence (AI) is crucial to ensuring a balanced understanding of this groundbreaking technology. By discussing different perspectives, potential risks can be identified, ethical considerations taken into account, and solutions developed that meet both societal needs and technological opportunities.
At a time when AI is becoming increasingly integrated into our everyday lives and key decision-making processes, we need to ensure that all stakeholders – researchers, companies, policymakers and, above all, the public – are actively involved in shaping responsible and sustainable AI development.
Only in this way can we exploit AI’s immense potential while minimizing its negative impact on people and society.