
A challenge for teaching

h_da researchers on the opportunities, risks and limits of ChatGPT & Co.


How ingenious or dangerous are AI tools such as ChatGPT, which are currently generating hype and hysteria in equal measure? How are researchers at Darmstadt University of Applied Sciences working with these new applications, and how will they change teaching in the future? In any case, ChatGPT & Co. are popular among students: two thirds of those surveyed already use them in their studies, as a nationwide survey by h_da researchers has revealed.

By Astrid Ludwig, 21.6.2023

Answering the questions didn’t take long: “Do you use AI tools in your studies and, if yes, which ones and for what?” The aim was for students to complete the questionnaire in no more than ten minutes. “We wanted to reach as many students as possible for our short study,” says Jörg von Garrel, Professor of Process and Production Innovation with a Focus on Quantitative Social Research at h_da. What started with a few students at his own university quickly spread to the whole of Germany. The team led by Professor von Garrel wrote to 400 universities and 4,000 teachers across the country. The survey was also announced on platforms such as “Studieren.de”. The result: a deluge of replies. By the beginning of June, over 6,300 students had completed the questionnaire. “We hadn’t expected such an enormous response.” The study is not only one of the first to produce such extensive feedback, it also shows “how topical and controversial the subject is,” says von Garrel.

ChatGPT clearly in the lead

The survey was anonymous. Two thirds of respondents said that they used AI for their studies, one third said they did not. The h_da team had asked for honest answers, “but the number of users is probably even higher,” von Garrel presumes. The most used tool by far is ChatGPT, he reports. Also mentioned was the German app DeepL, which is used for translations. For the researchers from the Faculty of Social Sciences, it was important to draw a broad picture across all disciplines, because there are indeed differences – not only in how often but also in the ways AI is used. According to the survey, it is engineering and computer science students who most frequently resort to AI, followed by mathematics and natural science students – in both these fields above all for programming and simulations. Humanities students as well as economics, law and social science students stated that they get AI to generate texts and analyses for them or use the tools for research and literature searches. In art and architecture degree programmes, AI often serves as a source of ideas and helps with concept development and design.

For von Garrel, who deals primarily with people’s behaviour towards innovative technologies such as AI and self-learning systems, and with the question of how digitalisation, sustainability and demographic change are transforming the world of work, one thing is certain: “ChatGPT & Co. are social gamechangers.” The study has shown, he says, how quickly these tools have diffused into society, and he expects them to bring significant changes to university teaching as well. The apps are intelligent, but they are based on statistical methods and trained data. “ChatGPT does not satisfy our academic requirements,” says social scientist von Garrel.

Significant changes in teaching

Von Garrel has tried out the AI himself. “The text was good, but the references to my academic work were wrong or non-existent.” Experts speak of ‘hallucinations’ when an AI simply invents answers because it has not been trained on appropriate data. For good teaching in the future, then, one principle must apply: “AI tools are an instrument but not a substitute for academic work or thorough research.” He advocates critical reflection. Handling ChatGPT & Co. properly “must become part of our teaching”. AI can enhance efficiency, generate ideas or provide assistance, “but deciding, classifying, reflecting, that is still our task, now as before.”

He weighs up the pros and cons: “Innovations always have a creative and a destructive element.” He finds the development exciting. But how will teaching staff be able to distinguish in the future whether it was the student or AI that wrote the term paper or Bachelor’s thesis? “That will undoubtedly be difficult.” Von Garrel says that didactics and exam formats will have to change. “We cannot award grades solely on the basis of written work, and in the future we should also ask more often in oral tests whether students have understood the content.”

New exam formats

Markus Döhring, Professor of Data Science and Foundations of Computer Science at h_da, takes a similar view. He recommends a departure from purely written exams. “Personal contact will become more and more important, such as in the form of colloquia, oral defence of papers and discussions which prove that students are familiar with the topic in question.” However, with large numbers of students, such as in computer science, individual assessment will not be easy: “To ensure quality, a better supervision ratio would be necessary in the future,” says Döhring. In his opinion, combining several modules to enable fewer but more comprehensive exams would also be conceivable. “AI can’t handle complex contexts and requirements,” he says. At least not yet.

Döhring is fascinated by the AI tools’ speed, quality and possible applications. “Three or four years ago, the situation today would have been unimaginable.” He has been working in applied research on artificial intelligence for many years. For example, Steinbeis Transfer GmbH and R+V Versicherung have been running an innovation lab at h_da since May, where researchers are looking at how AI can be used to improve customer support. Among other things, Döhring is working together with Professor Oliver Skroch to make processes for handling emails and enquiries from insurance customers more efficient. They are utilising the possibilities offered by artificial intelligence and self-learning systems, “but within a technical framework that provides transparency and ensures data protection.”
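What such an email-handling process might look like in code can only be sketched here – the article does not disclose the lab’s actual methods. The following is a minimal, purely illustrative Python baseline for routing customer enquiries, assuming the scikit-learn library; the categories and training emails are invented:

    # Illustrative sketch only: routing insurance emails with a simple
    # text classifier. Categories and example emails are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_emails = [
        "I would like to report water damage in my kitchen.",
        "Please update my bank details for the premium payment.",
        "How do I cancel my policy at the end of the year?",
        "My cellar flooded after the storm, what should I do?",
    ]
    train_labels = ["claim", "contract", "contract", "claim"]

    # TF-IDF features plus logistic regression: simple, inspectable, and
    # runnable on local servers, so customer data never leaves the house.
    router = make_pipeline(TfidfVectorizer(), LogisticRegression())
    router.fit(train_emails, train_labels)

    # New enquiry; with this toy training set, the overlap with the
    # damage examples should route it to "claim".
    print(router.predict(["There is water damage in my bathroom."]))

The point of such a transparent baseline is exactly the framework Döhring describes: every step can be audited and the data stays under the researchers’ control.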

Fine-tuning their own AI models

Why? Because that is precisely the problem with closed systems such as ChatGPT & Co. ChatGPT was developed by the American company OpenAI, in which Microsoft has invested tens of billions of dollars. GPT stands for “Generative Pre-trained Transformer”. To enable the system to answer questions universally and to generate texts, code or images, the AI was trained on billions of data items, websites and other information from the internet. “We’re unable to evaluate how the results come about because, for example, we don’t know the training data,” says Döhring. He also has reservations about data protection: people who use the tools have no control over their data because the servers are in the US.

However, there are alternatives for which the training material is known: initiatives such as “Common Crawl”, which compile web data openly, or similar systems based on Wikipedia. So far, though, only large corporations like Microsoft or Google can afford training data on the scale of ChatGPT or Bard. “But it’s dangerous to leave the definition of what is the truth or a good answer only to the big providers,” warns Döhring. That offers too much room for manipulation. That is why he uses open-source systems for his research work, supplementing them with his own code and with domain-specific elements, characteristics or language features. Customising pre-trained models to user needs in this way is called fine-tuning. In the “Data Science” Master’s degree programme, Döhring teaches his students this kind of text and web mining, showing them which data and pre-trained models can be used in which ways. Together with his colleague Michael Braun, he will soon be adding a lecture on modern neural network architectures.
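By way of illustration, fine-tuning an open-source model might start from a sketch like the following, assuming the Hugging Face “transformers” and “datasets” libraries; the model name “gpt2” and the corpus file are placeholders, not details from Döhring’s course or research:

    # Minimal sketch: fine-tuning an open causal language model on a
    # domain corpus. Model name and corpus file are placeholders.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "gpt2"  # any open model with known training data
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Hypothetical text file with domain-specific material, one example per line.
    dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset["train"].map(tokenize, batched=True,
                                     remove_columns=["text"])

    # Standard causal language-modelling objective (mlm=False).
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="finetuned-model",
                               num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

The same pattern scales from this toy setup to the language features mentioned above; what changes is the base model, the corpus and the training budget.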

Development of “AI Test Management”

However, hallucinations – the chatbots’ weak spot – occur in all GPT systems. “This can only be remedied by systematically feeding queries into the databases and adding even more information,” says Döhring. Gaps and imbalances in the training data, which lead to what is known as “algorithmic bias”, are also the reason for the frequently criticised lack of diversity and for racial discrimination. Döhring therefore urges anyone using the tools to check texts and sources every time.
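One common interpretation of “feeding queries into the databases” is retrieval-augmented generation: relevant documents are fetched first and attached to the prompt, so the model answers from known sources instead of inventing them. The following Python sketch is a toy assumption, not the method from the article – the two-entry document store and the word-overlap scoring stand in for the vector search a real system would use:

    # Sketch: ground a query in retrieved documents before the model
    # answers. Store and scoring are toys; real systems use vector search.
    from collections import Counter

    documents = {
        "policy_42": "Household insurance covers water damage from burst pipes.",
        "policy_17": "Liability insurance excludes intentional damage.",
    }

    def retrieve(query, k=1):
        # Rank stored documents by simple word overlap with the query.
        q_words = Counter(query.lower().split())
        ranked = sorted(documents.values(),
                        key=lambda text: -sum(q_words[w]
                                              for w in text.lower().split()))
        return ranked[:k]

    query = "Does my insurance cover a burst pipe?"
    context = "\n".join(retrieve(query))
    prompt = (f"Answer only from the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {query}")
    print(prompt)  # this grounded prompt is then sent to the language model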

The European Parliament wants to rein in ChatGPT & Co. The aim of the new “AI Act” is to oblige development companies not only to document which training data they use but also to check the systems in advance for possible risks to health, fundamental rights, the environment, democracy or security – and to make them safe against cyberattacks. Döhring too believes that AI systems can cause damage: for example, if they were to gain access to the internet and were able to exploit security gaps there, or if unexpected and undesirable side effects occur – like hallucinations. “Developing AI test management will be a major task for computer science,” says Döhring.


CONTACT

Christina Janssen
Science Editor
University Communication
Tel.: +49.6151.16-30112
Email: christina.janssen@h-da.de


Professor Jörg von Garrel’s website: fbgw.h-da.de/fachbereich/personen/professorinnen/prof-dr-joerg-von-garrel

Professor Markus Döhring’s website: fbi.h-da.de/personen/markus-doehring

Interview with Markus Döhring in Main-Echo: www.main-echo.de/region/rhein-main-hessen/daten-sind-der-rohstoff-der-zukunft-art-6705715

Report about the AI tutorial at h_da in March 2023: https://dgi-info.de/die-modelle-hinter-chatgpt-ki-tutorial-an-der-hochschule-darmstadt/