- 기업에 'AI 윤리 전문가'가 필요할까?
- This 'lifelike' AI granny is infuriating phone scammers. Here's how - and why
- Upgrade to Windows 11 Pro for $18 - the lowest price this year
- One of the most reliable power banks I've tested can even inflate car tires (and get 50% off in this Black Friday deal)
- This is the smartest electronic precision screwdriver I've ever tested (and now get 10% off for Black Friday)
Governments Eye Disclosure Requirements for AI Development Labs
AI laboratories will be compelled to disclose their development of general-purpose AI as governments look to have more oversight over this rapidly evolving technology.
This is according to AI scientist Inma Martinez, chair of the Multi-stakeholder Experts Group at Global Partnership on Artificial Intelligence (GPAI).
As governments develop regulation relating to generative AI, almost all will eventually require AI labs located within their territory to disclose what problems their tools are supposed to solve, she claimed during the Palo Alto Networks Ignite London event on March 7, 2024.
Private and Open Source AI Models Under Scrutiny
Generative AI tools like OpenAI’s ChatGPT disrupted the AI narrative in 2022 by introducing a new paradigm, Martinez argued in a conversation with Haider Pasha, Palo Alto’s CSO for EMEA and Latin America.
For the first time with GenAI it is now up to the receiver to assess whether the AI model’s output was successful or at least satisfactory.
Although she said she believes some of these tools will revolutionize sectors, starting with supply chain, logistics, healthcare and education, they will also bring many risks.
“Throughout 2023, we began to see the holes in the cheese,” she said.
GenAI started to be used for malicious purposes, including developing convincing phishing campaigns and creating code for polymorphic malware.
Large language models (LLMs) also started to be hacked with techniques like direct and indirect prompt injections.
While private generative AI models (OpenAI’s ChatGPT, Google’s Bard/Gemini, Anthropic’s Claude…) have been increasingly scrutinized, she argued that open-source models should no less be kept under close watch.
“I’m a big promoter of open source software, and I call myself a Linux lady, but some use cases showcased through open source LLMs were aberrations,” she continued.
‘Frontier’ AI Needs Full Transparency, Like Nuclear Technologies
Over the past few months, she praised governments for drafting policy strategies to crack down on some of those GenAI risks, albeit in a haphazard way.
“We’ve realized that there is no consensus on what the values supporting AI regulations should be. For instance, the International Organization for Standardization (ISO), which is trying to develop the standards that future AI regulations will be based on, recently told me they realized that the concept of ‘safety’ has a very different meaning in the UK and Spain,” Martinez explained.
Moreover, even like-minded countries take different approaches regarding AI regulation, with the US and the UK taking a vertical, sector-focused regulatory stance, while the EU chose to go the horizontal route with its AI Act.
Martinez predicted that most governments will align on one regulatory requirement. This requirement will demand that AI labs and firms developing general-purpose AI models – sometimes called ‘frontier models’ – disclose what exactly they are developing and for which purposes.
“We wouldn’t imagine a lab in the UK, for example, saying it’s developing nuclear power technologies without having to explain its goal,” she concluded.