Lessons from the field: How Generative AI is shaping software development in 2023
Since ChatGPT’s release in November 2022, there have been countless conversations about the impact of large language models. Generative AI has forced organizations to rethink how they work and what can and should change. In particular, organizations are weighing Generative AI’s impact on software development. While the technology’s potential in software development is exciting, there are still risks to manage and guardrails to put in place.
Members of VMware’s Tanzu Vanguard community, who are expert practitioners at companies across different industries, shared their perspectives on how technologies such as Generative AI are shaping software development and technology decisions. Their insights answer some questions and raise new ones for companies to consider when evaluating their AI investments.
AI won’t replace developers
Generative AI has introduced a level of software development speed that didn’t exist before. It increases developer productivity and efficiency by helping developers shortcut the work of writing code. Solutions like the ChatGPT chatbot, along with tools such as GitHub Copilot, can help developers focus on generating value instead of writing boilerplate code. By multiplying developer productivity, it opens up new possibilities for what developers can do with the time they save. However, despite its intelligence and its benefits for automating pipelines, the technology is still far from replacing human developers.
Generative AI should not be seen as able to work independently; it still needs supervision, both to ensure the code it produces is correct and to ensure it is secure. Developers still need to understand the context and meaning of AI’s answers, as they are sometimes not entirely correct, says Thomas Rudrof, DevOps Engineer at DATEV eG. Rudrof believes AI is better suited to simple, repetitive tasks and acts as an assistant rather than a replacement for the developer role.
Risks of AI in software development
Despite Generative AI’s ability to make developers more efficient, it is not error free. Finding and fixing bugs may be more challenging with AI, as developers still need to carefully review any code it produces. There is also added risk in the software development itself, since the output follows both the logic someone defined and the dataset that was available, says Lukasz Piotrowski, developer at Atos Global Services. The technology will therefore only be as good as the data it was given.
On an individual level, AI creates security issues: attackers will try to exploit the capabilities of AI tools, while security professionals employ the same technology to defend against such attacks. Developers need to be extremely careful to follow best practices and not include credentials and tokens directly in their code. Anything sensitive, or containing intellectual property that could be revealed to other users, should not be uploaded. Even with safeguards in place, AI might be capable of breaking security. If care is not taken in the intake process, there could be huge risks if a security scheme or other sensitive information is inadvertently pushed to generative AI, says Jim Kohl, DevOps Consultant at GAIG.
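One concrete way to follow the advice above is to keep secrets out of source code entirely, so a file pasted into an AI tool never contains them. A minimal sketch in Python, reading a token from an environment variable (the variable name `MYAPP_API_TOKEN` is illustrative, not from the article; use whatever your secret manager or deployment pipeline provides):

```python
import os

def get_api_token() -> str:
    """Read the API token from the environment instead of hardcoding it.

    MYAPP_API_TOKEN is a hypothetical variable name for illustration.
    Because the secret lives outside the source file, code shared with
    an AI assistant never exposes the credential itself.
    """
    token = os.environ.get("MYAPP_API_TOKEN")
    if token is None:
        # Fail loudly rather than fall back to a hardcoded default.
        raise RuntimeError("MYAPP_API_TOKEN is not set")
    return token
```

The same pattern extends to configuration files excluded from version control or to a dedicated secret store; the key design choice is that the repository, and anything copied from it, stays free of credentials.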
Best practices and education
Currently, there are no established best practices for leveraging AI in software development. The use of AI-generated code is still in an experimental phase for many organizations due to numerous uncertainties such as its impact on security, data privacy, copyright, and more.
However, organizations already using AI need to use it wisely and should not trust the technology blindly. Juergen Sussner, Lead Cloud Platform Engineer at DATEV eG, advises organizations to implement small use cases and test them well: if they work, scale them; if not, try another use case. Through small experiments, organizations can determine the technology’s risks and limitations for themselves.
Guardrails are necessary for the use of AI and can help individuals use the technology safely. Leaving AI usage unaddressed in your organization can lead to security, ethical, and legal issues. Some companies have already faced severe penalties over AI tools used for research and code, so acting quickly is necessary. For example, litigation has surfaced against companies for training AI tools on data lakes containing thousands of unlicensed works.
Getting an AI to understand context is one of the larger problems with leveraging AI in software development, says Scot Kreienkamp, Senior Systems Engineer at La-Z-Boy. Engineers need to understand how to phrase prompts for AIs. Educational programs and training courses can help teach this skill set. Organizations serious about AI technologies should upskill appropriate personnel to make them capable of prompt engineering.
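The prompt-engineering skill Kreienkamp describes largely comes down to supplying context the model would otherwise lack. A minimal sketch of the idea, assuming a hypothetical helper that assembles a context-rich code-review prompt (the structure is one illustrative convention, not a standard template):

```python
def build_review_prompt(language: str, snippet: str, goal: str) -> str:
    """Assemble a context-rich prompt for a code-review request.

    A bare "fix this code" prompt gives the model little to work with;
    stating the language, the goal, and the priorities tends to produce
    a more useful answer. All field names here are illustrative.
    """
    return (
        f"You are reviewing {language} code.\n"
        f"Goal: {goal}\n"
        f"Code:\n{snippet}\n"
        "List concrete bugs and security issues before style suggestions."
    )

# Example: the same snippet, framed with its intent and review priorities.
prompt = build_review_prompt(
    language="Python",
    snippet="def load(path): return open(path).read()",
    goal="safely read a user-supplied config file",
)
```

Training developers to state language, intent, and constraints up front, rather than iterating on vague requests, is exactly the kind of skill the educational programs mentioned above can teach.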
As organizations grapple with the implications of Generative AI, a paradigm shift is underway in software development. AI is going to change the way developers work. At a minimum, developers who leverage the technology will become more efficient at coding and at building software platform foundations. However, AI will need an operator working alongside it and should not be trusted to act independently. The insights shared by VMware’s Vanguards underscore the need for cautious integration and for guardrails that mitigate risk in software development.
To learn more, visit us here.