Most Cyber Leaders Fear AI-Generated Code Will Increase Security Risks
Developers in almost all (83%) organizations use AI to generate code, and security leaders fear this could fuel a major security incident, according to a new Venafi survey.
In a report published on September 17, the machine identity management provider shared findings highlighting that the divide between programming and security teams is being widened by AI-generated code.
The report, Organizations Struggle to Secure AI-Generated and Open Source Code, showed that while nearly three-quarters (72%) of security leaders feel they have no choice but to allow developers to use AI in order to remain competitive, nearly all (92%) have concerns about this use.
Almost two-thirds (63%) have even considered banning AI in coding due to the security risks.
AI Over-Reliance and Lack of AI Code Quality Top Concerns
Because AI, and particularly generative AI, is evolving at such a fast pace, 66% of security leaders feel they cannot keep up.
An even larger share (78%) are convinced that AI-generated code will lead their organization to a security reckoning, and 59% are losing sleep over the security implications of AI.
The three concerns most cited by survey respondents were:
- Developers becoming over-reliant on AI, leading to lower standards
- AI-written code not being effectively quality-checked
- AI drawing on dated open-source libraries that have not been well maintained (a staleness check is sketched below)
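The third concern is straightforward to probe in practice. As a minimal sketch of what such a staleness check could look like (assuming Python, the third-party `requests` library, and the public PyPI JSON API; the package names and the two-year cutoff are illustrative, not from the report):

```python
# Sketch: flag dependencies whose latest PyPI release is older than a cutoff,
# a rough proxy for "dated and not well maintained".
from datetime import datetime, timedelta, timezone

import requests

STALENESS_CUTOFF = timedelta(days=2 * 365)  # illustrative: >2 years without a release


def latest_release_date(package: str) -> datetime | None:
    """Return the upload time of the newest release file for `package` on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10)
    resp.raise_for_status()
    uploads = [
        # Normalize the trailing "Z" so fromisoformat() accepts it on older Pythons.
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in resp.json()["releases"].values()
        for f in files
    ]
    return max(uploads, default=None)


def audit(packages: list[str]) -> None:
    now = datetime.now(timezone.utc)
    for name in packages:
        latest = latest_release_date(name)
        if latest is None:
            print(f"{name}: no releases found")
        elif now - latest > STALENESS_CUTOFF:
            print(f"{name}: last release {latest:%Y-%m-%d} -- possibly unmaintained")
        else:
            print(f"{name}: last release {latest:%Y-%m-%d} -- active")


if __name__ == "__main__":
    audit(["requests", "flask"])  # illustrative package names
```

Release recency is only a heuristic, of course; a stable library can go years without needing a release, so a check like this flags candidates for review rather than delivering verdicts.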
Kevin Bocek, Chief Innovation Officer at Venafi, commented: “Developers are already supercharged by AI and won’t give up their superpowers. And attackers are infiltrating our ranks – recent examples of long-term meddling in open source projects and North Korean infiltration of IT are just the tip of the iceberg.”
The recent CrowdStrike-induced IT outage showed everyone how quickly code can go from a developer's keyboard to a worldwide meltdown, he added.
Lack of AI Visibility Leads to Tech Governance Concerns
Additionally, the Venafi survey shows that AI-generated code creates not only technology concerns but also tech governance challenges.
For instance, almost two-thirds (63%) of security leaders think it is impossible to govern the safe use of AI in their organization, as they do not have visibility into where AI is being used.
Despite these concerns, fewer than half of companies (47%) have policies in place to ensure the safe use of AI within development environments.
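The visibility gap is partly a tooling problem. One lightweight approach some teams take is a commit-trailer convention that marks AI-assisted changes, which can then be tallied from history. The trailer name and the whole convention below are hypothetical, not something the survey describes; this is only a sketch of how visibility could be recovered, assuming Python and a local git checkout:

```python
# Sketch: gain rough visibility into AI-assisted commits by scanning git
# history for a team-agreed commit trailer ("AI-Assisted: yes").
# The trailer convention itself is hypothetical.
import subprocess


def ai_assisted_commits(repo_path: str = ".") -> list[str]:
    """Return short hashes of commits whose message carries the trailer."""
    # %x00 / %x01 insert NUL and SOH bytes as unambiguous field/record separators.
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%h%x00%B%x01"],
        capture_output=True, text=True, check=True,
    ).stdout
    flagged = []
    for entry in log.split("\x01"):
        if not entry.strip():
            continue
        short_hash, _, message = entry.partition("\x00")
        if "ai-assisted: yes" in message.lower():
            flagged.append(short_hash.strip())
    return flagged


if __name__ == "__main__":
    hits = ai_assisted_commits()
    print(f"{len(hits)} AI-assisted commits: {hits}")
```

Self-reporting conventions like this only work if developers actually apply them, which loops back to the policy gap the survey identifies.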
“Anyone today with an LLM can write code, opening an entirely new front. It’s the code that matters, whether it is your developers hyper-coding with AI, infiltrating foreign agents or someone in finance getting code from an LLM trained on who knows what. We have to authenticate code from wherever it comes,” Bocek concluded.
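Bocek's call to "authenticate code from wherever it comes" is, in practice, a code-signing problem: accept an artifact only if it is signed by a key you trust. The article does not prescribe a mechanism, so the following is a minimal sketch assuming Python's third-party `cryptography` library and Ed25519 keys, with key handling simplified for illustration:

```python
# Sketch: sign a code artifact and verify the signature before use, so only
# code attributable to a trusted key is accepted.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the private key stays with the publisher and only the public
# key is distributed; generating both here keeps the sketch self-contained.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

artifact = b"print('hello from a build artifact')"  # stand-in for real code
signature = signing_key.sign(artifact)


def is_trusted(code: bytes, sig: bytes) -> bool:
    """Accept the artifact only if the signature verifies against the trusted key."""
    try:
        verify_key.verify(sig, code)
        return True
    except InvalidSignature:
        return False


print(is_trusted(artifact, signature))         # True: untampered
print(is_trusted(artifact + b"#", signature))  # False: modified after signing
```

The design point is that verification is origin-agnostic, which matches Bocek's framing: it does not matter whether a developer, an LLM, or an outsider produced the code; only artifacts that carry a valid signature from a trusted identity get through.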
The Venafi report results from a survey of 800 security decision-makers across the US, UK, Germany and France.