RSAC: Researchers Share Lessons from the World’s First AISIRT


As the use of AI explodes in sensitive sectors like infrastructure and national security, a team at Carnegie Mellon University is pioneering the field of AI security response.

In the summer of 2023, researchers at the University’s Software Engineering Institute, the birthplace of the first Computer Emergency and Response Team (CERT), believed there was an urgent need to charter a new entity to lead research and development efforts to define incident response tactics, techniques, and procedures for AI and machine learning (ML) systems and coordinate community response actions.

Just over six months later, Lauren McIlvenny and Gregory Touhill shared the lessons they learned running the world’s first AI Security Incident Response Team (AISIRT) during the RSA Conference 2024.

Explaining the Need for an AISIRT

The AISIRT was launched because McIlvenny and Touhill’s research data showed a continuous increase in AI-powered attacks and attacks on AI systems.

“We continue to see a lot of activity associated with AI-related systems and technologies now being targeted in the wild,” Touhill said.

The pair mentioned the numerous threats posed to generative AI tools like AI chatbots and large language model (LLM) systems, as well as attacks targeting the engines powering AI models, Graphics processing unit (GPU) kernels, whose implementations can be susceptible to memory leaks and can be leveraged to access sensitive information.

The AISIRT was developed in collaboration between Carnegie Mellon University and CERT Division’s partner network.

It became partly operational after it first launched in August 2023 and has been fully operational since October 2023.

It is focused on identifying, understanding, and mitigating ‘vulnerabilities’ for AI systems that are of interest to and used by defense and national security organizations.

In this context, McIlvenny explained that ‘vulnerabilities’ include traditional software vulnerabilities, adversarial machine learning weaknesses, and flaws leading to joint cyber-AI attacks.

How the AISIRT Functions

The AISIRT leverages existing rules of engagement from cyber incident response and its structure is inspired by a traditional Computer Security Incident and Response Team (CSIRT).

It consists of four main components: an AI incident response element, an AI vulnerability discovery toolset, an AI vulnerability management framework, and an AI situational awareness service.

The AISIRT involves a variety of stakeholders, including:

  • A team lead who can translate the technical aspects in an understandable way to affected parties
  • System/database administrators
  • Network engineers
  • AI/ML practitioners
  • Threat intelligence researchers
  • Specialists from Carnegie Mellon University and other trusted industry/academic partners as needed

In the future, McIlvenny and Touhill said they see the AISIRT as a hub for updating and sharing best practices, standards, and guidelines around AI for defense and national security organizations.

They plan to establish an AI community of practice across academia, industry, defense and national security organizations, as well as legislative bodies.

“At least 20% of what we’re showing here in the AISIRT structure will need to evolve in the future,” McIlvenny estimated.

Lessons Learned After Six Months Running the AISIRT

McIlvenny and Touhill shared some of the lessons learned after running the AISIRT for over six months.

These are:

  • AI vulnerabilities are cyber vulnerabilities
  • AI vulnerabilities are occurring throughout the entire system
  • Cybersecurity processes are mature and should continue to evolve to support AI
  • AI systems are different than today’s traditional IT in several interesting ways
  • Complexity in AI systems complicates triage, diagnostics and troubleshooting
  • Tools to identify vulnerabilities aren’t there yet
  • There is a need for secure development training (i.e. DevSecOps) tailored for AI developers
  • Red team pentesting of AI systems throughout the development cycle can identify material weaknesses early in the development cycle

However, they insisted that the AISIRT – and AI security as a whole – was still in its infancy and that organizations using AI and stakeholders trying to protect against AI threats still have countless unanswered questions, including the following:

  • Emerging regulatory regimes: What is the standard of care for using AI systems, and what is the standard of care as we develop AI systems?
  • Evolving privacy impacts: How will AI systems affect maintaining the privacy rights of citizens? How will AI systems weaken existing privacy protection protocols?
  • Threats to intellectual property: What do I do if our valuable intellectual property is leaked into a generative AI system? What do I do if our valuable intellectual property is discovered in an AI system?
  • Governance and oversight: What are the best practices in AI governance and oversight? Do I need to establish separate governance models for European and North American lines of business due to different regulatory environments?

“We’re at a stage where questions around AI security still greatly outnumber answers, so please reach out and share your experience using and securing AI,” Touhill concluded.



Source link