Forrester: GenAI Will Lead to Breaches and Privacy Fines in 2024
Rampant generative AI (GenAI) use next year will lead to some major data breaches and fines for application developers using the technology, according to a leading analyst.
Forrester made the claims in its 2024 predictions for cybersecurity, risk, privacy and trust.
Senior analyst Alla Valente warned of the indiscriminate use of “TuringBots” – GenAI assistants that help to create code – especially if developers don’t scan that code for vulnerabilities once it is generated.
“Without proper guardrails around TuringBot-generated code, Forrester predicts that in 2024 at least three data breaches will be publicly blamed on insecure AI-generated code – either due to security flaws in the generated code itself or vulnerabilities in AI-suggested dependencies,” she added in a blog post.
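The guardrails Valente describes typically mean automated checks that run before AI-generated code is merged. As a minimal illustration only (not Forrester's recommendation, and no substitute for a full SAST tool plus dependency scanning), a sketch of such a check might flag obviously dangerous calls in generated Python:

```python
import ast

# Hypothetical minimal guardrail: flag a few obviously risky calls in
# AI-generated Python before it is merged. A real pipeline would combine
# a full static analysis tool with dependency scanning; this only
# illustrates the idea of an automated pre-merge check.
RISKY_CALLS = {"eval", "exec"}

def flag_risky_calls(source: str) -> list[str]:
    """Return a finding for each call to a name in RISKY_CALLS."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name) and func.id in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {func.id}()")
    return findings

# Example: AI-suggested code that evaluates raw user input.
generated = "user_input = input()\nresult = eval(user_input)\n"
for finding in flag_risky_calls(generated):
    print(finding)
```

A check like this would sit in CI alongside scanners that also audit AI-suggested dependencies, the second breach vector Valente highlights.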
There may also be regulatory trouble ahead for applications that rely on GenAI products like ChatGPT to surface information to users.
Valente predicted that at least one such app would be fined for its handling of personally identifiable information (PII).
“While OpenAI has the technical and financial resources to defend itself against these regulators, other third-party apps running on ChatGPT likely do not,” she noted.
“In fact, some apps introduce risks via their third-party tech provider but lack the resources and expertise to mitigate them appropriately. In 2024, companies must identify apps that could potentially increase their risk exposure and double down on third-party risk management.”
Read more on GenAI risks: Generative AI Can Save Phishers Two Days of Work
The European Data Protection Board has already launched a task force to coordinate enforcement action against ChatGPT, following a decision by the Italian Data Protection Authority in March to suspend use of the product in the country.
In the US, the FTC is investigating OpenAI.
GenAI may also play a part in Valente’s third prediction: that 90% of data breaches in 2024 will feature a human element. According to Verizon’s Data Breach Investigations Report, the figure already stands at 74%.
Security experts have warned multiple times that GenAI can supercharge social engineering by enabling threat actors to scale highly convincing phishing campaigns.
“This increase [in people-centric risk] will expose one of the touted silver bullets for mitigating human breaches: security awareness and training,” argued Valente.
“As a result, more CISOs will shift their focus to an adaptive human protection approach in 2024 as NIST updates its guidance on awareness and training and as more human quantification vendors emerge.”