HfG research group KIM investigates the opportunities and risks of AI
The opportunities and risks of artificial intelligence (AI) are the subject of controversial debate in science and society. On the one hand, there is technical progress and the development of innovative systems such as autonomous driving. On the other hand, there is the fear of data misuse and of humans becoming increasingly dependent on machines. At the Karlsruhe University of Arts and Design (HfG), scientists in the Artificial Intelligence and Media Theory research group (KIM) investigate such questions.
The HfG formulates the reason for the research activities in relatively unknown territory as follows: “In the face of the rapid spread of AI in business and industry, the humanities and art institutions have a special responsibility to explore the limits and effects of these new technologies on social and aesthetic issues.”
“The myth of AI as an omniscient entity has cracked”
In an interview with karlsruhe.digital, KIM coordinator Ariana Dongus is more direct: “The myth of AI as an omniscient entity has already cracked. That’s why we can question the hype surrounding the great successes of the new technologies,” says the HfG academic. After all, artificial intelligence is “neither witchcraft nor magic”, but merely a means of analyzing extremely large amounts of data very quickly. Within the research group, the topic is approached in a completely open-ended manner. “We have a very broad base,” emphasizes Dongus. The sociological effects of AI are examined, as are the various areas of application for innovative algorithms.
Some applications promote racism and inequality
When it comes to potential problems with the use of AI, the first thing that comes to mind for Ariana Dongus is racism. Purely statistical analysis of data often produces an extremely undifferentiated picture of society. For example, many algorithms still classify migrants as less creditworthy or more prone to violence, and facial recognition software is far more error-prone for non-white people. For Dongus, the consequences of such misjudgements are obvious: when artificial intelligence reinforces prejudices through statistical analysis, inequality in society deepens further. “Diversity doesn’t really play a role in AI yet,” says Dongus.
Too few women in the tech industry
For Dongus, one reason for such negative effects is the current structure of the tech scene. White men still form the majority in the digitalization industry: most of the leading AI companies were founded by men, and the boardrooms of major tech firms are also male-dominated. “There are now efforts to ensure that tech companies represent all ethnicities and genders,” says Dongus. At the moment, however, such initiatives mainly serve to polish companies’ image and to attract qualified employees; the industry is still a long way from achieving a balance of ethnicities and genders.
And what would the world look like if more women were responsible for the development of AI? “Definitely different,” emphasizes Dongus. So far, AI has mainly been deployed in traditionally male-dominated areas such as the development of new driving systems or increasing productivity in factories. “Women think differently. They focus more on the impact of a development on society as a whole,” says Dongus. A higher proportion of women in AI companies could therefore encourage the development of innovative medical technology or sustainable systems for energy generation and agriculture.