Ethical Hackers Could Earn up to $20,000 Uncovering ChatGPT Vulnerabilities
OpenAI is offering white hat hackers up to $20,000 to find security flaws as part of its bug bounty program launched on April 11, 2023.
The ChatGPT developer announced the initiative as part of its commitment to secure artificial intelligence (AI). The company has been under scrutiny by security experts since the launch of the ChatGPT prototype in November 2022.
Speaking to Infosecurity, Mike Thompson, information security manager at Zen Internet, said, “It is important that OpenAI runs a bug bounty scheme as a matter of priority. Since the technology launched in November 2022, the insane giddiness that has ensued has completely overshadowed the potential risk.”
Vulnerabilities in the Library
In its announcement, OpenAI acknowledged that despite its heavy investment in research and engineering to ensure its AI systems are safe and secure, vulnerabilities and flaws can emerge.
“We believe that transparency and collaboration are crucial to addressing this reality. That’s why we are inviting the global community of security researchers, ethical hackers and technology enthusiasts to help us identify and address vulnerabilities in our systems,” the company said.
On March 23, OpenAI announced it had fixed a vulnerability in ChatGPT that had allowed users to view the titles of other users’ chats during a nine-hour period on March 20. Concerns were raised that the bug, which originated in an open-source library used by ChatGPT, could compromise user privacy.
Read more: ChatGPT Vulnerability May Have Exposed Users’ Payment Information
“This is not the limit of vulnerabilities found, nor of what will ever exist. One of the most efficient steps companies can take to ensure the security posture of their products is to launch a bug bounty program. This approach has been time-tested since 1995, when Netscape launched the first bug bounty program. I’m glad OpenAI sees this,” Zaira Pirzada, cybersecurity advisor at Lionfish Tech, told Infosecurity.
She added that Sam Altman, CEO of OpenAI, is likely realizing that the public is as much a necessary part of testing as it is of consuming.
The company has partnered with Bugcrowd to manage the submission and reward process.
Casey Ellis, founder and CTO of Bugcrowd, told Infosecurity, “OpenAI’s decision to actively solicit feedback from the hacker community on the security of their products is huge and continuing validation of hackers as ‘the Internet’s Immune System’, and the transparency and accountability of the approach will go a long way to continuing to build user trust in a relatively new market. I think all emerging technology companies and categories can learn from this.”
Nikki Webb, global channel manager at Custodian360, highlighted, “Bug bounties’ collaborative approach fosters continuous improvement, protects user data, and bolsters overall security in the digital landscape.”
The rewards range from $200 for low-severity findings up to $20,000 for exceptional discoveries. At the time of writing, over 10 vulnerabilities had been rewarded. As part of the program, ethical hackers are not permitted to publicly disclose information about the vulnerabilities they find.
The scope of the program includes OpenAI’s APIs and API keys, ChatGPT, third-party corporate targets related to OpenAI, the OpenAI research organization and the OpenAI.com website. The bug bounty program covers traditional software issues, not AI model issues.
Jake Moore, global security advisor at ESET, noted that while the bug bounty program won’t address all possible attack vectors, it acts as another tool in the cybersecurity toolkit for preventing a new wave of threats.
Recent research by BlackBerry found that 51% of security leaders expect ChatGPT to be at the heart of a successful cyber-attack within a year. The biggest security concerns centre around how the large language model could be leveraged by cyber-threat actors to launch attacks, including malware development and convincing social engineering scams.
Image credit: Koshiro K / Shutterstock.com