EU’s AI Act challenge: balance innovation and consumer protection
LAION demands that open-source AI models in particular should not be over-regulated, arguing that open-source systems allow greater transparency and security in the use of AI. Open-source AI would also prevent a few corporations from controlling and dominating the technology. In this way, moderate regulation could help advance Europe’s digital sovereignty.
Too little regulation weakens consumer rights
On the other hand, the Federation of German Consumer Organizations (VZBV) calls for stronger rights for consumers. According to a statement by the consumer advocates, consumer decisions will increasingly be influenced by AI-based recommendation systems. To reduce the risks of generative AI, the planned European AI Act should ensure strong consumer rights and the possibility of independent risk assessment.
“The risk that AI systems lead to false or manipulative purchase recommendations, ratings and consumer information is high,” said Ramona Pop, board member of VZBV. “Artificial intelligence is not always as intelligent as the name suggests. It must be ensured that consumers are adequately protected against manipulation and deception, for example, through AI-controlled recommendation systems. Independent scientists must be given access to the systems to assess their risks and functionality. We also need enforceable individual rights of those affected against AI operators.” The VZBV adds that people must be given the right to correction and deletion if systems such as ChatGPT cause disadvantages through reputational damage, and that the AI Act must ensure that AI applications comply with European laws and correspond to European values.
Self-assessment by manufacturers is not enough
Although the Technical Inspection Association (TÜV) broadly welcomes the agreement of groups in the EU Parliament on a common position for the AI Act, it sees further room for improvement. “A clear legal basis is needed to protect people from the negative consequences of the technology, and at the same time, to promote the use of AI in business,” said Joachim Bühler, managing director of TÜV.
Bühler says it must be ensured that the specifications are also observed, particularly with regard to the transparency of algorithms. However, an independent review is intended only for a small portion of high-risk AI systems. “Most critical AI applications such as facial recognition, recruiting software or credit checks should continue to be allowed to be launched on the market with a pure manufacturer’s self-declaration,” said Bühler. In addition, the classification as a high-risk application is to be based in part on a self-assessment by the providers. “Misjudgments are inevitable,” he adds.
According to TÜV, it would be better to have all high-risk AI systems tested independently before launch to ensure the applications meet security requirements. “This is especially true when AI applications are used in critical areas such as medicine, vehicles, energy infrastructure, or in certain machines,” said Bühler.