ChatGPT: The Next Wave of Innovation or Your Biggest Security Threat?
By Craig Burland, CISO, Inversion6
ChatGPT, an artificial intelligence language model developed by OpenAI, is in the midst of the hype cycle, where every success or failure is shouted from the rooftops. Its millions of users generate millions of queries per day, along with what is probably an equal number of posts, comments, and articles. Since its public launch in November 2022, it has garnered significant attention and dramatically exceeded expectations, surpassing the AI models that came before it. The platform uses deep learning algorithms and vast amounts of text data to generate human-like responses to natural language inputs. It has a wide range of capabilities, including text generation, language translation, and answering complex questions in human-like speech. Potential applications include customer service, content creation, data analysis, and education. Unfortunately, it has also been used to write malware, craft phishing emails, cheat on term papers, and plan fictitious crimes.
Across the recent history of innovation, there’s a tendency to react with excitement, uncertainty, and even fear. The steam engine sparked fears of job loss alongside ideas of revolutionizing transportation. The first airplanes generated wonder and amazement at human accomplishment and fears of military applications. Computerphobia reached its peak in the mid-1980s, marked by fears about humans losing jobs or becoming dependent on devices for critical thinking. Organ transplantation, space travel, DNA manipulation, etc. have all elicited strong reactions that emphasized both amazing potential and dreadful consequences, finally settling into an equilibrium that is nuanced and complex.
As a cyber defender and risk manager, it's vital to see both sides of this innovation and develop an approach that balances the potential threats against the business opportunities. Uncertainty and fear would demand blocking all access to ChatGPT and its API. Excitement and wonder would propose feeding terabytes of data into the platform for its near-prescient insights. Before taking either approach, further consideration is warranted. Let's take a few examples of ChatGPT's more high-profile concerns and the counterbalancing opportunities:
A few weeks ago, ChatGPT was criticized in numerous articles for enabling the creation of advanced, polymorphic malware. While most of the articles left out key facts, such as the fact that the web version didn't actually produce the malware and that ample human intervention was required, using ChatGPT as a malware engine is theoretically possible. However, one must also consider the potential benefit of using ChatGPT to stub out software for developers, speeding new product development. The promise of low-code or no-code applications built with the assistance of a tool like ChatGPT is now more than marketing hype. Take a more specific use: writing a routine to encrypt a large store of content, as sketched below. The resulting code could be used to secure an important transaction or to build ransomware. ChatGPT doesn't know or understand the difference.
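To make the dual-use point concrete, here's a minimal sketch of the kind of encryption routine a tool like ChatGPT might stub out for a developer, written here by hand against Python's cryptography library. The directory path and function name are illustrative; whether such a routine protects a backup or powers ransomware depends entirely on how and where it is run.

```python
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_directory(source: Path, key: bytes) -> None:
    """Encrypt every file under `source`, writing a .enc copy alongside each original."""
    cipher = Fernet(key)
    for file in source.rglob("*"):
        if file.is_file() and not file.name.endswith(".enc"):
            encrypted = cipher.encrypt(file.read_bytes())
            file.with_name(file.name + ".enc").write_bytes(encrypted)

if __name__ == "__main__":
    key = Fernet.generate_key()                # losing this key means losing the data
    encrypt_directory(Path("./archive"), key)  # "./archive" is an illustrative path
    print(key.decode())                        # in practice, store the key somewhere safe
```

The same dozen lines read as sound key management in one context and as an extortion tool in another; the code itself is neutral.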
More recently, the internet was abuzz with news that hackers had bypassed the ChatGPT controls to create new service offerings like phish email automation. The original article (it has since been updated for clarity) left out the key point that the bypass was merely use of the API, which currently doesn't have all the constraints of the web version. (API abuse is currently prohibited by OpenAI policy, not by a technical control.) As in the scenario above, while the API can be used to generate phish, at least until OpenAI detects and terminates the access, it can also be used to generate phishing test campaigns and awareness posts that vary in content and tone, keeping the message fresh. The sketch below shows what that defensive use might look like.
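A hedged sketch of that defensive use, assuming the OpenAI Python SDK and an OPENAI_API_KEY environment variable. The model name and helper function are illustrative assumptions, not a prescription:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_awareness_post(theme: str, tone: str) -> str:
    """Ask the model for a short internal awareness post on a phishing theme."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model should work
        messages=[
            {"role": "system",
             "content": "You write short, friendly security-awareness posts for employees."},
            {"role": "user",
             "content": f"Write a 100-word post warning about {theme}. Tone: {tone}."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Vary the theme and tone each month to keep the message fresh.
    print(draft_awareness_post("gift-card scams during the holidays", "light and practical"))
```

Swap the theme and tone parameters each cycle and the same API that could churn out phish instead keeps a security-awareness program from going stale.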
Currently, there are numerous articles about how to turn off the response controls, letting ChatGPT answer without the filters that screen out responses that could enable unlawful activity. While these claims are proving dubious upon further scrutiny, consider the potential uses in threat modeling or role playing. Ask ChatGPT to act as an insider threat and it will decline. Ask ChatGPT to help you, as a CISO, brainstorm ways an insider could harm your organization, and it becomes an insightful method of verifying your defenses. In a 10-minute span, ChatGPT can guide you from a 10,000-foot view down into the weeds. In this example, ChatGPT could walk you from high-level threats, like monitoring cloud storage, down to a user-awareness quiz about social engineering attacks, complete with answers. Extending the use case to role playing, think about the tremendous value of interacting with an AI programmed to emulate malicious behavior as a way of preparing people for real scenarios. A short sketch of that threat-modeling conversation follows.
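A minimal sketch of that conversation, again assuming the OpenAI Python SDK with an OPENAI_API_KEY set. The system prompt, model name, and questions are illustrative of the framing that works: ask from the defender's point of view rather than asking the model to play the attacker.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # same assumptions as the previous sketch

SYSTEM = ("You are assisting a CISO with an insider-threat tabletop exercise. "
          "Stay at the level of risks, detection, and mitigation, "
          "not step-by-step attack instructions.")

def brainstorm(question: str, history: list[dict]) -> str:
    """One turn of the exercise; the running history lets follow-ups drill from 10,000 feet into the weeds."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        messages=[{"role": "system", "content": SYSTEM}] + history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    history: list[dict] = []
    print(brainstorm("What are the top ways an insider could misuse our cloud storage?", history))
    print(brainstorm("Draft a five-question awareness quiz on the social engineering angle, with answers.", history))
```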
The impact of ChatGPT may mark the beginning of an AI race as the big players – Microsoft, Google, Baidu, Meta, Amazon – invest millions upon millions to build the most complete AI platform. They’ll push the envelope of innovation, adding features and functionality as desired, then following with mitigations and controls as required. Like the innovations before it, we’ll be enthralled with the excitement and possibility as we simultaneously wrestle with the uncertainty and fear. We’ll climb and climb until we reach the peak of expectations, then slowly slide into disillusionment. Finally, our perceptions will evolve from black and white to nuanced and complex. Like the innovations before it, we will come to understand that ChatGPT is just a tool. A complex and intriguing tool, but a tool nonetheless. We should not fear it. We should not revere it. We should consider it, understand it, and then use it.
About the Author
Craig Burland, CISO of Inversion6, is a veteran cybersecurity leader who works directly with the firm's clients, building and managing security programs and advising them on cybersecurity strategy and best practices. He has decades of pertinent industry experience, including leading information security operations for a Fortune 200 company. He is also a former Technical Co-Chair of the Northeast Ohio Cyber Consortium and a former Customer Advisory Board Member for Solutionary MSSP, NTT Global Security, and Oracle Web Center. Craig can be reached online at LinkedIn https://www.linkedin.com/in/craig-burland/ and at our company website www.inversion6.com.