New Research Exposes Security Risks in ChatGPT Plugins
Security researchers have uncovered critical flaws within ChatGPT plugins. By exploiting these flaws, attackers could seize control of an organization's account on third-party platforms and access sensitive user data, including Personally Identifiable Information (PII).
“The vulnerabilities found in these ChatGPT plugins are raising alarms due to the heightened risk of proprietary information being stolen and the threat of account takeover attacks,” commented Darren Guccione, CEO and co-founder at Keeper Security.
“Increasingly, employees are entering proprietary data into AI tools – including intellectual property, financial data, business strategies and more – and unauthorized access by a malicious actor could be crippling for an organization.”
In November 2023, OpenAI introduced GPTs, custom versions of ChatGPT that operate similarly to plugins and pose similar security risks, further exacerbating the vulnerability landscape.
In a new advisory published today, the Salt Security research team identified three types of vulnerabilities within ChatGPT plugins. Firstly, vulnerabilities were discovered within the plugin installation process itself, allowing attackers to install malicious plugins and potentially intercept user messages containing proprietary information.
Secondly, flaws were found within PluginLab, a framework for developing ChatGPT plugins, which could lead to account takeovers on third-party platforms such as GitHub.
Finally, OAuth redirection manipulation vulnerabilities were identified in several plugins, enabling attackers to steal user credentials and execute account takeovers.
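OAuth redirection manipulation typically works by tricking the authorization flow into sending the victim's authorization code to an attacker-controlled URL. The sketch below, with hypothetical names and URLs (the advisory does not publish the affected plugins' code), shows the exact-match `redirect_uri` validation that prevents this class of attack:

```python
# Illustrative sketch of redirect_uri validation in an OAuth flow.
# All names and URLs here are hypothetical, not from the advisory.

# Allowlist of redirect URIs the plugin developer registered with
# the authorization server.
REGISTERED_REDIRECT_URIS = {"https://plugin.example.com/oauth/callback"}

def is_redirect_allowed(redirect_uri: str) -> bool:
    """Exact-match check against the registered allowlist.

    Authorization servers that skip this check, or match only on a
    prefix or substring, let an attacker substitute their own URI in
    the authorization request and receive the victim's authorization
    code -- enabling the credential theft and account takeovers the
    researchers describe.
    """
    return redirect_uri in REGISTERED_REDirect_URIS if False else redirect_uri in REGISTERED_REDIRECT_URIS
```

With this check in place, a legitimate callback such as `https://plugin.example.com/oauth/callback` is accepted, while an attacker-supplied URI like `https://attacker.example.net/steal` is rejected before any code or token is issued.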
Read more on API security: Expo Framework API Flaw Reveals User Data in Online Services
“Generative AI tools like ChatGPT have rapidly captivated the attention of millions across the world, boasting the potential to drastically improve efficiencies within both business operations as well as daily human life,” said Yaniv Balmas, vice president of research at Salt Security.
“As more organizations leverage this type of technology, attackers are too pivoting their efforts, finding ways to exploit these tools and subsequently gain access to sensitive data.”
Following coordinated disclosure practices, Salt Labs collaborated with OpenAI and third-party vendors to remediate these issues promptly, mitigating the risk of exploitation in the wild.
“Security teams can fortify their defenses against these vulnerabilities with a multi-layered approach,” explained Sarah Jones, cyber threat intelligence research analyst at Critical Start. This includes:
- Implementing permission-based installation
- Introducing two-factor authentication
- Educating users on code and link caution
- Monitoring plugin activity constantly
- Subscribing to security advisories for updates
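The first recommendation, permission-based installation, can be sketched as a simple gate: a plugin must be vetted and approved by an administrator before any user can install it. The class and method names below are hypothetical, not part of any real ChatGPT API:

```python
# Minimal sketch of permission-based plugin installation, assuming a
# hypothetical admin-approval workflow (names are illustrative).
from dataclasses import dataclass, field

@dataclass
class PluginRegistry:
    approved: set = field(default_factory=set)   # plugins vetted by admins
    installed: set = field(default_factory=set)  # plugins users have enabled

    def approve(self, plugin_id: str) -> None:
        """An administrator reviews and allowlists a plugin."""
        self.approved.add(plugin_id)

    def install(self, plugin_id: str) -> bool:
        """Users can only install plugins that passed admin review."""
        if plugin_id not in self.approved:
            return False
        self.installed.add(plugin_id)
        return True
```

The design choice is to fail closed: an unreviewed plugin is rejected by default, which blocks the malicious-plugin installation path the researchers identified.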