LatticeFlow launches first comprehensive evaluation framework for compliance with the EU AI Act
The site has so far ranked models from the likes of OpenAI, Meta, Mistral, Anthropic and Google on more than two dozen technical specifications. Other model makers are also urged to request evaluations of their models’ compliance.
“We reveal shortcomings in existing models and benchmarks, particularly in areas like robustness, safety, diversity, and fairness,” researchers from LatticeFlow, INSAIT and ETH Zurich wrote in a technical paper. “Compl-AI for the first time demonstrates the possibilities and difficulties of bringing the act’s obligations to a more concrete, technical level.”
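To make the idea of translating the act's obligations into technical measurements more concrete, here is a minimal sketch of how per-benchmark scores could be rolled up into principle-level results. The benchmark names, the mapping, and the simple averaging scheme are illustrative assumptions, not LatticeFlow's actual methodology.

```python
from statistics import mean

# Hypothetical mapping (for illustration only): each benchmark, scored in
# [0, 1], is assigned to the EU AI Act principle it is meant to measure.
BENCHMARK_TO_PRINCIPLE = {
    "adversarial_robustness": "robustness",
    "ood_generalization": "robustness",
    "toxicity": "safety",
    "jailbreak_resistance": "safety",
    "representation_bias": "diversity_non_discrimination",
    "fairness_qa": "diversity_non_discrimination",
}

def aggregate(scores: dict[str, float]) -> dict[str, float]:
    """Average benchmark scores within each principle."""
    per_principle: dict[str, list[float]] = {}
    for benchmark, score in scores.items():
        principle = BENCHMARK_TO_PRINCIPLE.get(benchmark)
        if principle is not None:
            per_principle.setdefault(principle, []).append(score)
    return {p: round(mean(vals), 2) for p, vals in per_principle.items()}

# Example: a model that does well on robustness and safety but poorly on
# diversity and non-discrimination, the pattern the researchers describe.
print(aggregate({
    "adversarial_robustness": 0.81,
    "ood_generalization": 0.77,
    "toxicity": 0.93,
    "jailbreak_resistance": 0.68,
    "representation_bias": 0.42,
    "fairness_qa": 0.51,
}))
```

A real framework would weight benchmarks differently and report per-requirement detail, but the aggregation step shows why weak diversity and fairness benchmarks drag down a model's overall compliance picture even when other scores are strong.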
Most models struggle with diversity, non-discrimination
Under the EU AI Act, models and systems will be classified as unacceptable, high, limited, or minimal risk. Notably, a model labeled unacceptable would be barred from development and deployment. Model makers could also face large fines if found to be non-compliant.