EU AI Act: Sensible guardrail or innovation killer?

AI regulation in the European Union is getting serious. The text of the EU AI Act was published in the Official Journal of the EU on July 12, 2024, and the rulebook for the development and use of AI tools officially entered into force at the beginning of August. The measures take effect in stages: affected companies must comply with the first rules after just six months.

In the industry, AI regulation is viewed with mixed feelings. While some welcome guardrails for the proper use of AI tools, others warn of a bureaucratic monster and fear that the rules could slow further innovation. Here are the first reactions to the AI Act taking effect.

Bitkom: AI Act must not become a waiting game

For Ralf Wintergerst, president of the German digital association Bitkom, many questions remain unanswered at both the national and European level, even now that the Act has entered into force. “Whether Germany and Europe become hubs of innovation in artificial intelligence or fall behind depends crucially on how the AI Act is fleshed out and implemented. Implementation must not become a waiting game for companies: prolonged legal uncertainty, unclear responsibilities, and complex bureaucratic processes in implementing the AI Act would hinder European AI innovation. The goal must be to consistently advance the use of AI in business, public administration, and society alike. That can only succeed if implementation is pragmatic and keeps bureaucracy to a minimum,” he said.

Wintergerst calls on the German federal government to present a proposal for a national law implementing the AI Act soon; companies need to know what to expect. Specifically, the Bitkom president demands: “In addition to the appointment of a central national authority, there need to be equally clearly defined responsibilities among the national market surveillance and conformity assessment bodies. All competent authorities must also be given sufficient staff and resources to carry out their tasks. Last but not least, SMEs and startups in particular should be supported through a tailor-made design of the planned AI regulatory sandboxes and practical assistance from the authorities in dealing with the AI Act.”

TÜV: Better opportunities for AI made in Europe

The TÜV Association, which focuses on the development of digital safety standards, welcomes the entry into force of the EU AI Act, emphasizing that the rules create a legal framework for safe and trustworthy AI. “The AI Act offers the opportunity to protect against the negative effects of artificial intelligence while at the same time promoting innovation. It can help establish a global lead market for safe ‘AI made in Europe’,” says Joachim Bühler, managing director of the TÜV Association. “It is now important to make the implementation efficient and unbureaucratic. Independent bodies have an essential role to play here, not only with regard to the binding requirements but also in the voluntary AI testing market.”

According to Bühler, companies would be well advised to familiarize themselves with the requirements now, especially the transition periods. “It is important to assess how and where the AI Act affects their activities.” In Bühler’s view, a uniform interpretation and consistent application of the risk-based approach are also crucial for the AI Act to be effective in practice: “This is where the member states are called upon,” he said.

Like Bitkom, the TÜV Association calls for an implementation of the rules that is as efficient and unbureaucratic as possible. That requires clear responsibilities and designated bodies to put the rules into practice. The AI Office should therefore publish implementation guidelines for the classification of high-risk AI systems as soon as possible to give small and medium-sized enterprises (SMEs) legal certainty. Beyond that, it is important to keep an eye on emerging AI risks and the systemic risks of particularly powerful general-purpose AI models, and to push ahead with systematic reporting of AI-related harm.

GI: New societal challenges posed by AI

The Gesellschaft für Informatik e.V. (GI), a professional society for computer scientists, plans to examine the consequences of the AI Act from different perspectives in a series of position papers. GI President Christine Regitz emphasized that the society welcomes regulation in the field of artificial intelligence in principle, noting that computer science bears a social responsibility. “In the field of artificial intelligence, we observe that those affected by such systems need support and that rules are needed for the development and use of AI systems,” she said. “Many good and necessary rules for the safe use of AI come into force with the AI Act. Computer science now has an important role to play in implementing these rules and addressing the gaps that remain.”

The GI’s representatives point above all to the ethical dimension of the AI Act. “AI systems pose qualitatively new challenges for our society,” states Christine Hennig, spokesperson for the GI’s Department of Computer Science and Society. These challenges must be discussed not only in technical but also in legal, ethical, and social terms. “The question of technology assessment is central here,” says Hennig, asking: “In what kind of digitally transformed world do we want to live in the future?”

There is no easy answer. The social context in which humans use an AI system still has to be analyzed and evaluated, concludes Ralf Möller, spokesman for the GI’s AI department. Möller is quite critical of the regulators’ role here: “Limiting the performance of technology or making the architecture of a system the basis of regulation does not seem expedient, or even universally possible.”

The AI Act roadmap

Staggered transition periods apply to the implementation of the AI Act. From February 2, 2025, AI systems that use manipulative or deceptive techniques, for example, are banned. From August 2, 2025, the obligations for certain general-purpose AI models, flanked by codes of practice, take effect, and EU member states must have designated national authorities for market surveillance. Mandatory conformity assessments for high-risk AI in areas such as lending, human resources, or law enforcement will be required from August 2026; they affect not only AI developers but also providers and deployers. From August 2027, the requirements for AI in products subject to third-party conformity assessment take effect.
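For readers tracking these deadlines, the staged timeline can be summarized in a small lookup. The Python sketch below is purely illustrative: the dates follow the milestones described above, while the data structure and function name are our own invention, not anything defined by the Act.

```python
from datetime import date

# Key AI Act milestones as described above; an illustrative summary,
# not an authoritative compliance calendar.
AI_ACT_MILESTONES = [
    (date(2025, 2, 2), "Bans on manipulative or deceptive AI practices apply"),
    (date(2025, 8, 2), "Rules for general-purpose AI models apply; member "
                       "states must have designated market surveillance authorities"),
    (date(2026, 8, 2), "Obligations for high-risk AI (e.g. lending, HR, law "
                       "enforcement) apply to providers and deployers"),
    (date(2027, 8, 2), "Requirements for AI in products subject to "
                       "third-party conformity assessment apply"),
]

def milestones_in_effect(today: date) -> list[str]:
    """Return the milestones already applicable on the given date."""
    return [text for deadline, text in AI_ACT_MILESTONES if deadline <= today]

# Example: which stages apply at the start of 2026?
for item in milestones_in_effect(date(2026, 1, 1)):
    print(item)
```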

The EU’s regulatory framework divides AI applications into risk classes. High-risk systems used in areas such as medicine, critical infrastructure, or human resources management are subject to strict rules and must meet comprehensive requirements for transparency, security, and oversight. Limited-risk systems, such as chatbots, only need to meet transparency requirements, while minimal-risk systems, such as simple video games, are not regulated at all. Violations of most obligations can draw fines of up to €15 million or three percent of global annual turnover, whichever is higher; breaches of the outright bans can cost up to €35 million or seven percent of turnover.
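To make the “whichever is higher” logic concrete, here is a minimal Python sketch based solely on the figures cited above. The two-tier structure is simplified and the function name is hypothetical; real enforcement weighs many more factors, and this is not legal advice.

```python
# Illustrative sketch of the AI Act's fine caps, using only the
# figures cited above. Simplified to two tiers for illustration.

def max_fine_eur(global_annual_turnover_eur: float,
                 prohibited_practice: bool = False) -> float:
    """Return the maximum possible fine: a fixed amount or a share of
    global annual turnover, whichever is higher."""
    if prohibited_practice:
        # Violating the outright bans: up to EUR 35 million or 7% of turnover.
        return max(35_000_000, 0.07 * global_annual_turnover_eur)
    # Most other obligations: up to EUR 15 million or 3% of turnover.
    return max(15_000_000, 0.03 * global_annual_turnover_eur)

# Example: a provider with EUR 2 billion in global annual turnover.
print(max_fine_eur(2_000_000_000))                            # 60000000.0
print(max_fine_eur(2_000_000_000, prohibited_practice=True))  # 140000000.0
```

For large companies, the turnover-based percentage dominates, as the example shows; the fixed amounts matter mainly for smaller providers.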


