New Research Exposes Security Risks in ChatGPT Plugins
Security researchers have uncovered critical flaws in ChatGPT plugins. By exploiting these flaws, attackers could seize control of an organization’s account on third-party platforms and access sensitive user data, including personally identifiable information (PII).
“The vulnerabilities found in these ChatGPT plugins are raising alarms due to the heightened risk of proprietary information being stolen and the threat of account takeover attacks,” commented Darren Guccione, CEO and co-founder at Keeper Security.
“Increasingly, employees are entering proprietary data into AI tools – including intellectual property, financial data, business strategies and more – and unauthorized access by a malicious actor could be crippling for an organization.”
In November 2023, OpenAI introduced GPTs, a ChatGPT feature that operates similarly to plugins and poses similar security risks, further widening the attack surface.
In a new advisory published today, the Salt Security research team identified three types of vulnerabilities within ChatGPT plugins. Firstly, vulnerabilities were discovered within the plugin installation process itself, allowing attackers to install malicious plugins and potentially intercept user messages containing proprietary information.
Secondly, flaws were found within PluginLab, a framework for developing ChatGPT plugins, which could lead to account takeovers on third-party platforms such as GitHub.
Finally, OAuth redirection manipulation vulnerabilities were identified in several plugins, enabling attackers to steal user credentials and execute account takeovers.
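OAuth redirection manipulation typically works because a service accepts an attacker-supplied `redirect_uri` and sends the authorization code there. The standard defense is exact-match validation against a pre-registered allowlist. The sketch below is illustrative only, not taken from the advisory; the allowlist entries and function names are assumptions.

```python
from urllib.parse import urlsplit

# Hypothetical allowlist of redirect URIs registered for an OAuth client.
ALLOWED_REDIRECTS = {
    "https://plugin.example.com/oauth/callback",
}

def is_safe_redirect(redirect_uri: str) -> bool:
    """Accept a redirect_uri only if it exactly matches a registered value.

    Prefix or substring checks can be bypassed (e.g. via an
    attacker-controlled lookalike domain), which is the class of flaw
    the advisory describes: an unvalidated redirect lets the attacker
    receive the authorization code and complete an account takeover.
    """
    # Exact-match comparison, never a startswith()/contains() check.
    if redirect_uri not in ALLOWED_REDIRECTS:
        return False
    # Defense in depth: require HTTPS so codes are not sent in cleartext.
    return urlsplit(redirect_uri).scheme == "https"
```

An exact-match set lookup deliberately rejects even plausible-looking variants, such as the same path on a different subdomain.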
“Generative AI tools like ChatGPT have rapidly captivated the attention of millions across the world, boasting the potential to drastically improve efficiencies within both business operations as well as daily human life,” said Yaniv Balmas, vice president of research at Salt Security.
“As more organizations leverage this type of technology, attackers are too pivoting their efforts, finding ways to exploit these tools and subsequently gain access to sensitive data.”
Following coordinated disclosure practices, Salt Labs collaborated with OpenAI and third-party vendors to remediate these issues promptly, mitigating the risk of exploitation in the wild.
“Security teams can fortify their defenses against these vulnerabilities with a multi-layered approach,” explained Sarah Jones, cyber threat intelligence research analyst at Critical Start. This includes:
- Implementing permission-based installation
- Introducing two-factor authentication
- Educating users on code and link caution
- Monitoring plugin activity constantly
- Subscribing to security advisories for updates