LatticeFlow launches first comprehensive evaluation framework for compliance with the EU AI Act

The site has so far ranked models from the likes of OpenAI, Meta, Mistral, Anthropic and Google on more than two dozen technical specifications. Other model makers are encouraged to request evaluations of their own models' compliance.
“We reveal shortcomings in existing models and benchmarks, particularly in areas like robustness, safety, diversity, and fairness,” researchers from LatticeFlow, INSAIT and ETH Zurich wrote in a technical paper. “Compl-AI for the first time demonstrates the possibilities and difficulties of bringing the act’s obligations to a more concrete, technical level.”
Most models struggle with diversity, non-discrimination
Under the EU AI Act, models and systems will be classified into one of four risk tiers: unacceptable, high, limited, and minimal. Notably, an unacceptable rating bans a model's development and deployment outright, and model makers found out of compliance could face large fines.