UK AI Research Under Threat From Nation-State Hackers


AI research in the UK is vulnerable to nation-state hackers seeking to steal sensitive data and insights, a new report by the Alan Turing Institute has warned.

The Institute urged the government and academia to develop a long-term strategy to address systemic cultural and structural security barriers to effective AI research security.

The researchers noted that the UK’s “world-leading AI research ecosystem” is a high-priority target for state threat actors looking to use the technology for malicious purposes.

Access to the underlying sensitive datasets used to train AI models could also provide strategic insights that impact defense planning and intelligence efforts.

China, Russia, North Korea and Iran are the states that pose the greatest threat to AI academic research, the Institute said.

Barriers to AI Research Security

Despite these risks, there are currently major constraints on AI research cybersecurity, which are creating opportunities for state threat actors to acquire knowledge or steal intellectual property (IP).

Much of this stems from a “fundamental tension” between academic freedom and research security, according to the researchers.

Cultural Resistance in Academia

The researchers noted that academics are under significant pressure to be transparent about the data and methods underpinning their findings. Academic journals will often reject submissions where data and code are not made available.

These transparency practices “embed an inherent vulnerability in academic culture,” as threat actors can use this underlying data and described techniques for malicious purposes.

Informal peer-to-peer collaborations compound this issue, given academia’s culture of information sharing in early-stage research, the researchers added.

Existing Procedures Are Restrictive

The report also found that academic research security can be more resource-intensive than other forms of due diligence because of the myriad considerations required to understand the potential risks.

This includes the large number of government departments involved in research security, which creates friction for academics and professional services staff seeking guidance.

This friction has resulted in a lack of incentives for researchers to follow non-binding government-issued security guidance.

Lack of Security Awareness

Another major barrier is the lack of awareness of the security threat to AI research within the academic community.

Individual academics often have to make personal judgements on the risks of their research, which can be challenging to do in practice.

“It is difficult for researchers to foresee and quantify the risks stemming from early-stage research – and understanding how research may be exploited by adversaries is not an easy task,” the report noted.

Academia’s Funding and Talent Shortage

A lack of access to funding and poor talent retention in academia also introduce new vulnerabilities relating to research security.

Academics are sometimes incentivized to accept funding from dubious sources, or to take higher-paid roles at organizations that can then exploit their insight and expertise.

These organizations may be linked to nation-states with malicious intentions around AI research.

Striking the Balance Between Security and Academic Freedom

The report provided several recommendations for the UK government and academia to strike a balance between the open nature of academic AI research and effective research security practices.

These include:

  • The government should provide grant funding for research security activities to encourage institutions to invest in research security training
  • The government should prioritize efforts to plug the AI skills gap and encourage young people to take up academic research roles in the UK
  • The sharing of case studies of relevant threats that have been intercepted or disrupted
  • The government should standardize the T&Cs of its grants, providing researchers with greater clarity about the guidance and legal provisions they need to follow
  • All academic institutions should be required to deliver research security training for new staff and postgraduate research students as a prerequisite for grant funding
  • The academic sector should develop a centralized due diligence repository to document risks and inform decision-making on AI research partnerships and collaboration
  • The government should work with academia and journal publishers to mitigate the bias towards publishing publicly available research
  • The development of scrutiny committees in academia to help researchers identify and mitigate risks


