AI coding agents come with legal risk
Without some review of the AI-generated code, organizations may be exposed to lawsuits, he adds. “There’s a lot of work that’s going on behind the scenes there, getting beyond maybe just those individual snippets of code that may be borrowed,” he says. “Is that getting all borrowed from one source; are there multiple sources? You can maybe sense that there’s something going on there.”
While human-written code can also infringe on copyright or violate open-source licenses, the risk with AI-generated code stems from the data the model was trained on, says Ilia Badeev, head of data science at Trevolution Group, a travel technology company. There's a good chance many AI agents are trained on code protected by IP rights.
“This means the AI might spit out code that’s identical to proprietary code from its training data, which is a huge risk,” Badeev adds. “The same goes for open-source stuff. A lot of open-source programs are meant for non-commercial use only. When an AI generates code, it doesn’t know how that code will be used, so you might end up accidentally violating license terms.”
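The review Badeev describes can be partly automated. As a minimal sketch (not a real tool; the marker list and function name here are assumptions for illustration), a team could flag AI-generated code that carries license or copyright markers so a human can check its provenance before it is merged:

```python
import re

# Hypothetical illustration: a lightweight pre-review filter that flags
# AI-generated code containing license or copyright markers so a human
# can check provenance before merging. The marker list is an assumption
# for this sketch, not an exhaustive or authoritative set.
LICENSE_MARKERS = [
    r"GNU General Public License",
    r"\bGPL(?:v[23])?\b",
    r"\bAGPL\b",
    r"Apache License",
    r"Mozilla Public License",
    r"All rights reserved",
    r"Copyright \(c\)",
    r"SPDX-License-Identifier",
]

def flag_for_license_review(generated_code: str) -> list[str]:
    """Return any license/copyright markers found in the generated code."""
    hits = []
    for pattern in LICENSE_MARKERS:
        if re.search(pattern, generated_code, flags=re.IGNORECASE):
            hits.append(pattern)
    return hits

if __name__ == "__main__":
    snippet = "# Copyright (c) 2009 Example Corp. All rights reserved.\ndef f(): pass"
    findings = flag_for_license_review(snippet)
    if findings:
        print("Flag for human review; possible borrowed code:", findings)
```

A check like this only catches code that carries explicit notices; it would not detect verbatim copies with the headers stripped, which is why the experts quoted above still call for human review of AI-generated code.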