ChatGPT: Why an AI chatbot is causing a stir
Everyone has been talking about ChatGPT for a few weeks now. Some are even calling the AI chatbot revolutionary. What’s behind it?
Artificial intelligence (AI) has been part of many areas of our everyday lives for years: cars, smartphones, televisions, kitchen appliances, voice assistants – AI-based technologies are now everywhere in one form or another.
However, these technologies are often “invisible”. Or to put it another way: if you ask a voice assistant about the weather and receive an answer, you don’t even think about the fact that the speaker on the shelf can only understand you through the use of AI and machine learning.
ChatGPT is a different story. The AI chatbot developed by the US company OpenAI shows us what it is capable of the moment we use it for the first time. And many people are – quite rightly – impressed by it.
ChatGPT makes the limitless potential of AI visible and usable for everyone
“GPT” stands for “Generative Pre-trained Transformer” – or, to put it more simply, an AI that can generate human-like text. The basis for this is a neural network for processing text. ChatGPT is, if you like, constantly fed with training data from various sources: social networks, books, published writing, online forums and much more.
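The core idea – learn from training text which token tends to follow which, then generate new text one token at a time – can be illustrated with a deliberately tiny sketch. A real transformer like GPT uses a deep neural network with billions of parameters; the bigram table below is only a minimal stand-in for the same “predict the next word” principle:

```python
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict:
    """Record which word follows which in the training text."""
    words = corpus.split()
    table = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        table[current].append(nxt)
    return table

def generate(table: dict, start: str, length: int = 10, seed: int = 0) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = table.get(out[-1])
        if not candidates:
            break  # no known continuation for this word
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
table = train_bigrams(corpus)
print(generate(table, "the", length=5))
```

Where the toy model samples from a simple frequency table, GPT conditions each next-token prediction on the entire preceding context – which is what makes the output coherent over whole paragraphs rather than just word pairs.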
ChatGPT can then interact with people via a chat window and solve a variety of tasks. This ranges from simple answers to short questions to the creation of entire texts and the writing of small programs. Yes, you read that right. If you ask ChatGPT to program an app for a simple task, the chatbot will spit out the corresponding code a short time later. Compose a children’s song or write an essay on the history of artificial intelligence? No problem at all. Of course, ChatGPT also handles translations effortlessly.
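To give a concrete sense of the “small programs” mentioned above: here is a hypothetical example of the kind of code ChatGPT might return for a prompt such as “write a Celsius-to-Fahrenheit converter”. The prompt and output are illustrative, not an actual transcript:

```python
# Illustrative example of the kind of small program a chatbot
# might generate for a simple request (not a real ChatGPT reply).
def celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a temperature from degrees Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

print(celsius_to_fahrenheit(100))  # 212.0
```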
Of course, none of this works 100 percent reliably yet. That is why OpenAI is careful to emphasize that ChatGPT is currently a free research preview that generates incorrect information from time to time.
But the hype has long been there. Students are using ChatGPT to write papers, publishing houses are experimenting with news articles written by AI – and numerous other industries are also trying to find out to what extent ChatGPT can make their work easier in the future. Especially in times of a shortage of skilled workers, people are naturally trying to automate as much as possible.
ChatGPT: AI-based chatbots also bring problems with them
There is no doubt that AI-based chatbots such as ChatGPT will be able to write entire books and plays in the future, and that they will also contribute a great deal to automation in marketing and coding.
However, we must not forget that the results of an AI are only as good as the data it is fed. ChatGPT also consumes an immense amount of energy, is simply unaware of certain things, or provides completely incorrect information. Since some of its sources are unknown, its statements can only be checked for accuracy to a limited extent, especially when it comes to scientific facts – all the more so as there are already reports that the AI can, on request, fabricate scientific sources almost perfectly.
In addition to the fake-news problem, there is also the issue of copyright. After all, the AI always draws on texts and works that already exist. Even during data mining, the question arises as to whether the respective data may be used for training the AI at all. For ChatGPT, for example, texts published on the internet were used as training data. For this reason, companies should also carefully consider whether AI-generated works should actually be published.
Discussion about use in education and science
The first schools have already banned the use of ChatGPT. “The new artificial intelligence ChatGPT has the potential to plunge the school system into a deep crisis – at least if it continues to be based on the idea that performance is defined by an output whose creation plays no role. The emergence of performance as a process can be understood as learning. In this respect, the question of learning at school must be asked anew,” writes Bob Blume, aka “Netzlehrer”, on his blog.
Robert Lepenies, President of Karlshochschule International University in Karlsruhe, sees not only the risks but above all the opportunities of the new language AI ChatGPT in the university environment.
“Homework – as we know it – no longer works, especially not at the big universities. In such a standardized system, which is purely optimized for output, it’s easy to cheat with the software. However, the tool is more likely to uncover what is fundamentally wrong in the academic world. At best, an AI like this is a creative collaborator that you can take along with you – I would like to see more discussions on this. We should ask ourselves: what do we actually want to achieve as a university? I am firmly convinced that students who are keen to learn can benefit enormously from an AI tool like this,” he explains in an interview with WirtschaftsWoche.
1. #GPT-3 makes many forms of examination unthinkable as of today, since machine learning with GPT-3 generates texts that are qualitatively indistinguishable from the work of students in the social sciences. Just tested for you – our professors don’t recognize the difference.
– Robert Lepenies (@RobertLepenies) December 7, 2022
Further ethical challenges hidden behind the scenes
Of course, work is continuing at full speed to optimize ChatGPT. However, the conditions under which this is happening already cast a shadow over the new technology. In addition to lines of code and databases, a great deal of painstaking manual work goes into such chatbots. Workers in Kenya optimize the application – and, as a Time Magazine investigation published last week reveals, they do so under unsafe and poor conditions. Hundreds of pages of internal documents from Samasource and OpenAI, along with four interviews with employees who worked on the project, indicate that behind the hype surrounding artificial intelligence lies yet another story of exploitation in the Global South. According to the research, the workers have to sift through and sort out sometimes disturbing content in far too long shifts, so that users are spared similar content – for between 1.32 and 2 US dollars an hour.
Cover picture: Netzoptimisten
The first part of the text first appeared on TECHTAG, the magazine for the digital economy in Baden-Württemberg. The author is Frank Feil.