Artificial intelligence (AI) has enormous potential to change society. Whether we perceive these changes as positive will strongly depend on how we regulate AI and its development. The EU has agreed on the world’s first comprehensive governmental regulatory framework for AI. This will set the standards for the future.

It was one of the hottest topics of 2023. Generative artificial intelligence, which can create images or texts from a simple user prompt, seems to be advancing much faster than previously thought possible. Since the hype surrounding ChatGPT, the biggest question is no longer if artificial intelligence will change our lives—but when and to what extent.

As early as April 2021, the European Union began examining the question of which legal framework conditions are necessary to ensure that the new technology doesn’t continue to advance unchecked. The TÜV Association (TÜV-Verband), and TÜV SÜD with it, has been working intensively on the implementation of quality and security standards for AI. Binding standards are particularly necessary where AI systems pose a risk to humans and the environment, and compliance must be monitored by independent testing organizations. Only when AI is proven safe will society fully accept its use.

The European Union’s task in this regard is to strike the right balance between protecting personal rights, on the one hand, while not stifling the opportunities that innovative AI systems offer for economic development, on the other. The EU has now adopted the long-awaited EU AI Act, which attempts to walk this tightrope between caution and progress.

One problem is the practice of collecting data across the web. Artificial intelligence is fed and trained with massive amounts of data (text and images)—sometimes without any regard for copyright issues. Under the newly adopted AI Act, AI developers will in future have to demonstrate, through greater transparency, that their systems do not violate applicable laws. AI creations, whether texts, images or videos, must also be identified as such. This is intended, among other things, to combat the spread of AI-generated misinformation. However, the regulations include simplifications for smaller companies, which are intended to prevent European AI startups from being unable to compete on the global market.

At the same time, the act calls for particular caution to be exercised in what it defines as risk areas. Wherever critical infrastructure may be affected (water or power supplies, personnel administration or security and safety agencies), artificial intelligence must never be given complete decision-making powers. The use of AI for law enforcement has been highly controversial from the outset. The current plan allows the application of biometric techniques such as facial recognition for prosecuting specific crimes, but prohibits the use of AI for so-called warrantless surveillance.

Specific requirements have now been established for AI development. But how will these new regulations affect commercial applications? This is one of the questions being investigated by the TÜV Association in its TÜV AI.Lab. One goal of this joint venture of the various TÜV companies is to translate the new regulatory requirements into practice. To this end, employees of the TÜV AI.Lab are developing testing criteria and procedures that are tailored to particular use cases and their specific risk profiles.

Because one thing is clear: the question of how effectively the EU will be able to enforce its regulations—without unduly restricting a new technology—will determine whether Europe with its AI Act becomes a pioneer of AI regulation or is relegated to the status of a technological outsider.