AI risks are everywhere – and now MIT is adding them all to one database



By now, the risks of artificial intelligence (AI) are well documented across applications, but they are difficult to consult in one place when making regulatory, policy, or business decisions. An MIT lab aims to fix that.

On Wednesday, MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) launched the AI Risk Repository, a database of more than 700 documented AI risks. According to CSAIL’s release, the database is the first of its kind and will be updated regularly so it remains an active resource.


The project was prompted by concerns that global adoption of AI is outrunning how well people and organizations understand the risks of implementing it. US Census Bureau data indicates that AI usage in US industries climbed from 3.7% to 5.45% (a 47% increase) between September 2023 and February 2024. Researchers from CSAIL and MIT’s FutureTech Lab found that “even the most thorough individual framework overlooks approximately 30% of the risks identified across all reviewed frameworks,” the release states.


Fragmented literature on AI risks can make it difficult for policymakers, risk evaluators, and others to get a full picture of the issues in front of them. “It is hard to find specific studies of risk in some niche domains where AI is used, such as weapons and military decision support systems,” said Taniel Yusef, a Cambridge research affiliate who was not involved in the project. “Without referring to these studies, it can be difficult to speak about technical aspects of AI risk to non-technical experts. This repository helps us do that.”

Without such a database, the team explained in the release, some risks can fly under the radar and never receive adequate consideration.


“Since the AI risk literature is scattered across peer-reviewed journals, preprints, and industry reports, and quite varied, I worry that decision-makers may unwittingly consult incomplete overviews, miss important concerns, and develop collective blind spots,” Dr. Peter Slattery, a project lead and incoming FutureTech Lab postdoc, said in the release. 

To address this, researchers at MIT worked with colleagues from other institutions, including the University of Queensland, the Future of Life Institute, KU Leuven, and Harmony Intelligence, to create the database. The Repository aims to provide “an accessible overview of the AI risk landscape,” according to the site, and to act as a universal frame of reference for researchers, developers, businesses, and policymakers alike.

To create it, the researchers identified 43 existing risk classification frameworks by reviewing academic records and databases and consulting several experts. After distilling more than 700 risks from those 43 frameworks, they categorized each by cause (when or why it occurs), by domain, and by subdomain (such as “Misinformation” and “False or misleading information,” respectively).
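To make that structure concrete, here is a minimal sketch of how a single entry in such a database might be represented. The field names are hypothetical; the actual Repository is distributed as a spreadsheet, and its exact columns may differ.

    from dataclasses import dataclass
    from typing import Literal

    # Hypothetical layout for one risk entry; field names are illustrative,
    # not the Repository's actual column headers.
    @dataclass
    class RiskEntry:
        description: str
        entity: Literal["AI", "Human", "Other"]    # causal entity behind the risk
        timing: Literal["Pre-deployment", "Post-deployment", "Other"]  # when it occurs
        domain: str       # e.g., "Misinformation"
        subdomain: str    # e.g., "False or misleading information"
        source: str       # framework the risk was drawn from

    # Example entry using the domain/subdomain pair cited above.
    entry = RiskEntry(
        description="An AI system produces content that misleads users",
        entity="AI",
        timing="Post-deployment",
        domain="Misinformation",
        subdomain="False or misleading information",
        source="(hypothetical citation)",
    )

Fields like entity and timing mirror the causal attributes the researchers report on below, such as whether a risk is attributed to an AI system or a human, and whether it emerges before or after deployment.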


The risks range from discrimination and misrepresentation to fraud, targeted manipulation, and unsafe use. “The most frequently addressed risk domains,” the release explains, “included ‘AI system safety, failures, and limitations’ (76% of documents); ‘Socioeconomic and environmental harms’ (73%); ‘Discrimination and toxicity’ (71%); ‘Privacy and security’ (68%); and ‘Malicious actors and misuse’ (68%).” 

Researchers found that human-computer interaction and misinformation were the least-addressed concerns across risk frameworks. Fifty-one percent of the risks analyzed were attributed to AI systems, compared with 34% attributed to humans, and 65% of risks emerged after AI was deployed rather than during development.

Discrimination, privacy breaches, and lack of capability were the most discussed issues, each appearing in over 50% of the documents the researchers reviewed. Concerns that AI damages our information ecosystems were mentioned far less often, in only 12% of documents.

MIT hopes the Repository will help decision-makers better navigate and address the risks posed by AI, especially with so many AI governance initiatives emerging rapidly worldwide.  


The Repository “is part of a larger effort to understand how we are responding to AI risks and to identify if there are gaps in our current approaches,” said Dr. Neil Thompson, researcher and head of the FutureTech Lab. “We are starting with a comprehensive checklist, to help us understand the breadth of potential risks. We plan to use this to identify shortcomings in organizational responses. For instance, if everyone focuses on one type of risk while overlooking others of similar importance, that’s something we should notice and address.”


Next, the researchers plan to use the Repository to analyze public documents from AI companies and developers, comparing how different sectors approach risk.

The AI Risk Repository is available to download and copy for free, and users can submit feedback and suggestions to the team through the project’s website.




