UK IT Pros Express Concerns About C-Suite’s Generative AI Ambitions
Ahead of the UK’s AI Safety Summit, new research has revealed significant concerns among UK IT professionals regarding the deployment of generative artificial intelligence (AI) applications.
The study highlights that 93% of UK IT professionals express concerns about their organization’s C-suite ambitions for generative AI. Their foremost worry is inadequate training and comprehension at the executive level (28%), followed by a lack of risk assessments (23%) and an insufficient grasp of operational aspects (22%). O’Reilly conducted the study in September 2023, surveying 500 IT professionals.
UK Government AI Ambitions
While the UK government endeavors to create a favorable regulatory environment through the upcoming Global AI Safety Summit, 25% of IT professionals lack confidence in their organization’s current capabilities to ensure compliance with evolving AI regulations.
An additional 51% feel ‘somewhat’ confident that their organization possesses the skills necessary to keep pace with the changing regulatory landscape.
“It is all well and good to see the Government recognizing that the threat of AI is increasing. However, rather than worrying about the future, we also need to realize that the danger is already here,” commented Trevor Dearing, director of critical infrastructure at Illumio.
“If we’re serious about protecting the nation against AI, then we must echo the US strategies of mandating the implementation of security strategies like Zero Trust. This will allow organizations to restrict the ability of an AI attack to spread and reduce the ‘learning surface’ that AI attacks are so heavily reliant on.”
Are We Prepared for AI?
The O’Reilly report also underscores a potential gap between the UK’s aspirations to lead in AI and the skills and preparedness of IT professionals. Approximately 71% of IT teams believe the current digital skills gap could impede the UK government’s ambition to become a global AI leader.
The study also highlights that despite substantial investments in generative AI, workplace policies and staff training have not kept pace.
In particular, employees outside of IT departments have received limited training (32%) or no training at all (36%) on the impact of generative AI on the workplace. This lack of training is cited as a significant concern by 27% of IT professionals, on a par with their concerns about the advanced cybersecurity threats posed by these technologies.
Andy Patel, senior researcher at WithSecure, noted: “We didn’t regulate the recommendation algorithms that drive social networks. They’ve caused significant harm: extremism, the spread of disinformation and dangerous conspiracy theories, mass influence operations and an increasingly polarized society.”
“Open-source AI models are already out there. They’re already being modified to do harmful things […] Different regions are approaching the problem [differently]. However, the only thing that should matter is regulations that properly address threats that are possible now, and that may be possible in the near future.”
Lack of AI Policy in Businesses
The O’Reilly study shows that 41% of IT professionals report the absence of a workplace policy for the use of generative AI technologies, with an additional 11% unsure of their organization’s policy status.
A recent ISACA study of 2,300 digital trust professionals found that a mere 10% of organizations have formal, comprehensive policies in place governing the use of AI technology.
In response to these challenges, 82% of IT professionals desire more learning and development opportunities related to generative AI. Notably, 61% are considering changing employers next year if their organization fails to provide upskilling opportunities in generative AI.
Some 70% of those who took part in the ISACA Generative AI 2023 Survey said AI will have a positive impact on their jobs. However, 81% of them said they will need additional training to retain their job or advance their career.
“Organizations should continue to invest in generative AI to remain innovative and competitive. At the same time, they must also ensure that staff are adequately trained and that robust workplace policies are in place,” said Alexia Pedersen, VP of EMEA at O’Reilly.
“This is not only a strategy for improved recruitment and retention in the face of a widening skills gap, but also a necessary step to guarantee ethical and safe AI deployments if Britain wants to fulfill its global ambitions.”