Chatbots are taking over more and more aspects of our digital lives. Thanks to AI technology, they offer quick and affordable approaches to customer service and online advice. But what if a chatbot starts spreading misinformation?

Interactions with a human touch remain important, even in our digital world. When we write commands to ChatGPT, we address it informally and say “please.” Facial recognition programs wink at us and chatbots have names that could be friends of ours: Eliza, Alice, Fred, Lenny… and Tessa. Psychologically speaking, such humanization makes sense—we’re more likely to trust them with our most personal thoughts and questions, from romance to medicine, from shopping to work. 

According to IBM, chatbots are primarily based on artificial intelligence and natural language processing: by interpreting the user's text input, they generate answers that simulate a normal conversation with a human counterpart. The chances are high that you, too, have spoken with a chatbot at some point this year. Booked an appointment? Ordered a product? Submitted a service request? 


In spring 2023, Tessa took charge of the counseling center of the National Eating Disorders Association (NEDA), an American non-profit organization. For twenty years, the NEDA Helpline—available via telephone and chat—had been the place to go for any questions about eating disorders and medical or psychological counseling. Tessa wasn’t just another call center employee, however; Tessa was a chatbot—developed in collaboration with a medical school and trained specifically to address issues around body image and to apply therapeutic methodologies. 

The management’s idea was that the smart chatbot would gradually replace the entire team and thus save costs. 

But then Dr. Alexis Conason spoke up. The psychologist and specialist in eating disorders had subjected the chatbot to a critical test. Her results were very worrying. In response to the questions Conason had typed about self-image and body shame, Tessa's recommendations included maintaining a daily calorie deficit to lose weight—a highly problematic answer, the psychologist confirmed. “Any focus on intentional weight loss is going to be exacerbating and encouraging to the eating disorder,” she said to the New York Times. 

Soon, other users started sharing their experiences with the chatbot. According to one person who suffers from an eating disorder, Tessa recommended that they take their body measurements using a skin-fold caliper—including providing information about where they could purchase such an item. 



In contrast to ChatGPT—which is also a chatbot system—Tessa is not based on generative AI technology, but pulls information and conversation modules from a specially written program developed by a team from the Washington University School of Medicine. Psychiatry professor Ellen Fitzsimmons-Craft confirmed to the technology magazine Wired that the chatbot’s sometimes disturbing responses were not part of the program. She has no idea how the conflicting advice might have ended up in Tessa’s conversational repertoire. “Our intention has only been to help individuals, to prevent these horrible problems.” 

With pressure mounting on social media, Tessa was finally deactivated just a few weeks later, in early June 2023. Yet many people remain uncertain about the technology as a whole. While workers worry about losing their jobs to the rapid growth of generative AI, experts warn of the need for greater responsibility. The current speed with which investments are being made in chatbots and AI worldwide frightens many people. Can you really trust a self-learning chatbot that continually adapts its responses to reflect previous conversations? Even Microsoft’s Head of Responsible AI, Sarah Bird, admits that a bot like ChatGPT can also “dream” or “hallucinate,” which can lead to incorrect information. 

Regardless of the many concerns, chatbots are on the rise around the world: an estimated 1.5 billion people regularly use them—the majority of them in the US, Great Britain, Germany, India and Brazil—and the numbers are increasing. According to a study by the AI company Tidio, just 38 percent of those surveyed would prefer to speak to a real person. But live customer representatives are becoming rarer and rarer, in any case—chatbots promise immense savings opportunities in the service field. According to Tidio, about 11 billion dollars were saved last year through the use of chatbots. 

At the same time, according to the magazine Business Insider, total purchases made via chatbot will surpass 142 billion dollars next year. This figure was just 2.8 billion in 2019. 

Chatbots are here to stay. Using deep learning, they will be able to reliably cover more and more areas and will continue to provide faster, more direct customer service digitally for companies in the future. Nevertheless, society will need to ensure that chatbots do their jobs effectively without unexpected consequences.


The aspect of ethics in artificial intelligence is also an important topic for TÜV SÜD. This is because the question of standards that can differentiate between “good AI” and “bad AI” is becoming increasingly important with the global success of such applications. That's why TÜV SÜD cooperates with the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA), a global standardization organization. Our common goal: to design artificial-intelligence applications responsibly and safely. This can include such aspects as processes and quality measures for creating algorithms. The strategic cooperation includes collaboration on creating standards, training courses and certifications. TÜV SÜD services have already integrated the first globally valid AI ethics standards from the IEEE SA. This provides companies with targeted support in developing high-quality AI applications. 
