ChatGPT Leveraged to Enhance Software Supply Chain Security
OX Security has leveraged ChatGPT to enhance its software supply chain security offerings, the firm has announced.
The cybersecurity vendor has integrated the famous AI chatbot to create ‘OX-GPT’ – a program designed to help developers quickly remediate security vulnerabilities during software development.
The platform can rapidly inform developers how a particular piece of code can be exploited by threat actors and the possible impact of such an attack.
Additionally, OX-GPT presents developers with customized fix recommendations and cut-and-paste code fixes, allowing security issues to be resolved quickly before code reaches production.
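As an illustration of what such a cut-and-paste fix might look like (a hypothetical sketch, not OX-GPT's actual output; the function names and schema below are invented), a flagged SQL injection could be remediated by switching to a parameterized query:

```python
import sqlite3

# Hypothetical flagged code: user input is concatenated straight into the SQL
# string, so a payload like "x' OR '1'='1" rewrites the query's logic.
def find_user_insecure(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

# Hypothetical suggested fix: a parameterized query keeps the input as data,
# so the database driver never interprets it as SQL.
def find_user_fixed(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    # The injection payload dumps every row from the insecure version...
    print(find_user_insecure(conn, "x' OR '1'='1"))
    # ...but returns nothing from the fix, where it is treated as a literal name.
    print(find_user_fixed(conn, "x' OR '1'='1"))
```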
Many software developers are not sufficiently trained in cybersecurity, so vast amounts of vulnerable code are written, necessitating a continuous cycle of patch management.
While experts have highlighted how ChatGPT can be used for nefarious purposes, such as launching more sophisticated cyber-attacks, others have outlined its potential to help create code that is secure by design, significantly reducing the risk of software supply chain incidents like SolarWinds and Log4j.
Speaking to Infosecurity, Neatsun Ziv, CEO and co-founder of OX Security, said the integration will provide developers with faster and more accurate data than other tools, allowing them to repair security issues far more easily.
“It starts with potential exploitations, the full context of where the security issue exists (which application, some code related to it) and possible damage to the application and the organization. So when an issue is identified as ‘critical,’ developers can confirm that they are not just chasing another false positive,” he explained.
Ziv added that OX-GPT is able to eliminate the vast majority of false positives thanks to the datasets it has been trained on – tens of thousands of real-world cases containing vulnerabilities, exploits, code fixes and recommendations gathered and generated by OX’s platform.
However, he noted that this is an ongoing process and “it is essential that we continue to train it on the newest vulnerabilities, newest findings, latest best-practices and newest attacks discovered, especially in the fast-paced domain of securing the software supply chain.”
Ziv also emphasized that the platform allows developers to retain control over their code while “also saving them weeks of manual work.”
Harman Singh, managing director and consultant at Cyphere, said he expects ChatGPT and other generative AI models to improve the accuracy, speed and quality of the vulnerability management process.
“Repetitive and time-consuming processes such as looking for patterns in log files (in terms of logging and monitoring), finding vulnerabilities from vulnerability assessment data and helping with triage are some of the vulnerability management tasks that will be most likely utilized this year [by the technology],” he outlined.
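As a minimal sketch of the kind of repetitive log-pattern work Singh describes (the log format, regex and threshold here are all assumptions, not any product's actual logic), a brute-force login check reduces to counting failed attempts per source address:

```python
import re
from collections import Counter

# Hypothetical auth-log lines; real formats vary by system and logger.
LOG_LINES = [
    "2023-05-04T10:01:12 sshd failed password for admin from 203.0.113.7",
    "2023-05-04T10:01:15 sshd failed password for admin from 203.0.113.7",
    "2023-05-04T10:01:19 sshd failed password for root from 203.0.113.7",
    "2023-05-04T10:02:01 sshd accepted password for alice from 198.51.100.2",
]

FAILED = re.compile(r"failed password for \S+ from (?P<ip>\d+\.\d+\.\d+\.\d+)")

def flag_brute_force(lines, threshold=3):
    """Count failed logins per source IP and flag any at or above the threshold."""
    counts = Counter()
    for line in lines:
        match = FAILED.search(line)
        if match:
            counts[match.group("ip")] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}

print(flag_brute_force(LOG_LINES))  # {'203.0.113.7': 3}
```

Automating triage like this is exactly the repetitive, pattern-matching work Singh suggests generative AI is suited to take over from analysts.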
Don’t Rely on Generative AI to Write Code Yet
However, Singh cautioned that while AI models can be trained to help develop secure code, they should not be used to generate code by themselves as they are not a “like-for-like” replacement for human developers.
“If you ask me whether AI systems can produce end-to-end secure code, I doubt that, because code-generating AI systems are likely to cause security vulnerabilities in the applications,” he said.
Singh pointed to a study published last year on Cornell University’s arXiv, in which researchers recruited 47 developers to complete various coding tasks. Notably, the developers who were given access to an AI code assistant were found to be significantly more likely to write insecure code than the group that worked without one.
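The study does not reproduce participants’ code, but a representative illustration of the gap it measured is symmetric encryption, the sort of task it covered: AES in ECB mode leaks plaintext structure and lacks integrity checks, while an authenticated construction such as Fernet avoids both problems. The snippet below is an invented contrast, not code from the study:

```python
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Insecure pattern: AES-ECB encrypts identical plaintext blocks to identical
# ciphertext blocks and provides no integrity check, so tampering is silent.
def encrypt_ecb(key: bytes, plaintext: bytes) -> bytes:
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

key = os.urandom(32)
ct = encrypt_ecb(key, b"A" * 16 + b"A" * 16)
assert ct[:16] == ct[16:32]  # repeated blocks leak structure

# Safer default: Fernet combines AES-CBC with HMAC authentication and a random
# IV, so equal plaintexts encrypt differently and tampered tokens are rejected.
f = Fernet(Fernet.generate_key())
token = f.encrypt(b"A" * 32)
assert f.decrypt(token) == b"A" * 32
```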
He added: “AI coding is here to stay; however, it is yet to mature and relying on it completely to help us solve problems would be a naive idea.”