- La colaboración entre Seguridad y FinOps puede generar beneficios ocultos en la nube
- El papel del CIO en 2024: una retrospectiva del año en clave TI
- How control rooms help organizations and security management
- ITDM 2025 전망 | “효율경영 시대의 핵심 동력 ‘데이터 조직’··· 내년도 활약 무대 더 커진다” 쏘카 김상우 본부장
- 세일포인트 기고 | 2025년을 맞이하며… 머신 아이덴티티의 부상이 울리는 경종
Governments Eye Disclosure Requirements for AI Development Labs
AI laboratories will be compelled to disclose their development of general-purpose AI as governments look to have more oversight over this rapidly evolving technology.
This is according to AI scientist Inma Martinez, chair of the Multi-stakeholder Experts Group at Global Partnership on Artificial Intelligence (GPAI).
As governments develop regulation relating to generative AI, almost all will eventually require AI labs located within their territory to disclose what problems their tools are supposed to solve, she claimed during the Palo Alto Networks Ignite London event on March 7, 2024.
Private and Open Source AI Models Under Scrutiny
Generative AI tools like OpenAI’s ChatGPT disrupted the AI narrative in 2022 by introducing a new paradigm, Martinez argued in a conversation with Haider Pasha, Palo Alto’s CSO for EMEA and Latin America.
For the first time with GenAI it is now up to the receiver to assess whether the AI model’s output was successful or at least satisfactory.
Although she said she believes some of these tools will revolutionize sectors, starting with supply chain, logistics, healthcare and education, they will also bring many risks.
“Throughout 2023, we began to see the holes in the cheese,” she said.
GenAI started to be used for malicious purposes, including developing convincing phishing campaigns and creating code for polymorphic malware.
Large language models (LLMs) also started to be hacked with techniques like direct and indirect prompt injections.
While private generative AI models (OpenAI’s ChatGPT, Google’s Bard/Gemini, Anthropic’s Claude…) have been increasingly scrutinized, she argued that open-source models should no less be kept under close watch.
“I’m a big promoter of open source software, and I call myself a Linux lady, but some use cases showcased through open source LLMs were aberrations,” she continued.
‘Frontier’ AI Needs Full Transparency, Like Nuclear Technologies
Over the past few months, she praised governments for drafting policy strategies to crack down on some of those GenAI risks, albeit in a haphazard way.
“We’ve realized that there is no consensus on what the values supporting AI regulations should be. For instance, the International Organization for Standardization (ISO), which is trying to develop the standards that future AI regulations will be based on, recently told me they realized that the concept of ‘safety’ has a very different meaning in the UK and Spain,” Martinez explained.
Moreover, even like-minded countries take different approaches regarding AI regulation, with the US and the UK taking a vertical, sector-focused regulatory stance, while the EU chose to go the horizontal route with its AI Act.
Martinez predicted that most governments will align on one regulatory requirement. This requirement will demand that AI labs and firms developing general-purpose AI models – sometimes called ‘frontier models’ – disclose what exactly they are developing and for which purposes.
“We wouldn’t imagine a lab in the UK, for example, saying it’s developing nuclear power technologies without having to explain its goal,” she concluded.