Ways IT leaders can meet the EU AI Act head on

The AI Act focuses mostly on systems in the unacceptable-risk and high-risk categories. The former includes banned AI applications, such as those assessing individuals based on socioeconomic status. The EU also prohibits law enforcement from performing real-time remote biometric identification in public spaces, as well as emotion recognition in the workplace and at school. The latter category covers areas like critical infrastructure, exam scoring, robot-assisted surgery, credit scoring that could deny loans, and resume-sorting software.

Organizations that work with high-risk systems and know they’ll be affected by this law should start preparing. “If it’s a company that develops AI systems, then all of those obligations that have to do with technical documentation, with transparency for data sets, can be anticipated,” Tudorache says.

Additionally, companies looking to incorporate AI into their business models should thoroughly understand the systems they plan to deploy before integrating them, so they can trust the technology and prevent complications down the line.

The biggest mistake organizations can make is failing to take the AI Act seriously: it is disruptive and will massively affect many business models. “I expect the AI Act to create bigger ripples than the GDPR,” says Tim Wybitul, head of privacy and partner at Latham & Watkins in Germany.

Adapting to a moving target

As the AI Act begins to reshape the landscape of European technology, industry leaders are trying to navigate its implications. Danielle Jacobs, CEO of Beltug, the largest Belgian association of CIOs and digital technology leaders, has been discussing the AI Act with her colleagues, and they’ve identified several key challenges and actions.

Many Belgian CIOs, for instance, want to educate their employees and set up awareness programs focused on the most effective ways to use gen AI.
