Risks of Artificial Intelligence for Organizations
Artificial Intelligence is no longer science fiction. AI tools such as OpenAI’s ChatGPT and GitHub’s Copilot are taking the world by storm. Employees are using them for everything from writing emails and proofreading reports to software development.
AI tools often come in two flavors: Q&A style, where a user submits a “prompt” and gets a response (e.g., ChatGPT), and autocomplete, where the AI is installed as a plugin for another tool and suggests completions much like autocomplete for text messages (e.g., Copilot). While these new technologies are quite incredible, they are evolving rapidly and introducing new risks that organizations need to consider.
Let’s imagine that you are an employee in a business’ audit department. One of your recurring tasks is to run some database queries and put the results in an Excel spreadsheet. You decide that this task could be automated, but you don’t know how. So, you ask an AI for help.
The AI asks for the details of the job so it can give you some tips. You give it the details.
You quickly get a recommendation to use the Python programming language to connect to the database and do the work for you. You follow the recommendation to install Python on your work computer, but you’re not a developer, so you ask the AI to help you write the code.
It is happy to do so and quickly gives you some code that you download to your work computer and begin to use. In ten minutes, you’ve become a developer and automated a task that likely took you several hours a week. Perhaps you will keep this new tool to yourself; you wouldn’t want your boss to fill up your newfound free time with even more responsibilities.
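The script handed back might look something like the sketch below. The connection string, query, and file name are purely illustrative, but the pattern is typical: a few lines of Python, with production database credentials pasted directly into the source.

```python
import pandas as pd
import sqlalchemy

# Illustrative only: the kind of script an AI assistant might hand back.
# Note the database credentials embedded directly in the source file.
engine = sqlalchemy.create_engine(
    "postgresql://audit_user:Sup3rSecret@prod-db.internal:5432/finance"
)

# Run the recurring report query and dump the results to a spreadsheet.
df = pd.read_sql("SELECT * FROM monthly_transactions", engine)
df.to_excel("audit_report.xlsx", index=False)
```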
Now imagine you are a security stakeholder at the same business who heard this story and is trying to understand the risks. You have someone with no developer training or programming experience installing developer tools, sharing confidential information with an uncontrolled cloud service, copying code from the Internet, and allowing internet-sourced code to communicate with your production databases. Since this employee doesn’t have any development experience, they can’t understand what their code is doing, let alone apply any of your organization’s software policies and procedures. They certainly won’t be able to find any security vulnerabilities in the code. You know that if the code doesn’t work, they’ll likely return to the AI for a solution, or worse, a broad internet search. That means more copied-and-pasted code from the internet will be running on your network. Additionally, you probably won’t have any idea this new software is running in your environment, so you won’t know where to find it for review. Software and dependency upgrades are also very unlikely, since that employee won’t understand the risks that outdated software poses.
The risks identified can be simplified to a few core issues:
- There is untrusted code running on your corporate network that is evading security controls and review.
- Confidential information is being sent to an untrusted third party.
These concerns aren’t limited to AI-assisted programming. Any time an employee sends business data to an AI, such as the context needed to help write an email or the contents of a sensitive report that needs review, confidential data might be leaked. These AI tools could also be used to generate document templates, spreadsheet formulas, and other potentially flawed content that can be downloaded and used across an organization. Organizations need to understand and address the risks posed by AI before these tools can be safely used. Here is a breakdown of the top risks:
1. You don’t control the service
Today’s popular tools are third-party services operated by the AI’s maintainers. They should be treated like any other untrusted external service. Unless specific business agreements are made with these organizations, they can access and use all data sent to them. Future versions of the AI may even be trained on this data, indirectly exposing it to additional parties. Further, vulnerabilities in the AI or data breaches affecting its maintainers can lead to malicious actors gaining access to your data. This has already occurred, with a data-exposure bug in ChatGPT and with sensitive data leaked to the tool by Samsung employees.
2. You can’t (fully) control its usage
While organizations have many ways to limit which websites and programs are used by employees on their work devices, personal devices are not so easily restricted. If employees are using unmanaged personal devices to access these tools on their home networks, it will be very difficult, or even impossible, to reliably block access.
3. AI generated content can contain flaws and vulnerabilities
Creators of these AI tools go to great lengths to make them accurate and unbiased, but there is no guarantee that their efforts are completely successful. This means that any output from an AI needs to be reviewed and verified. The reason people often don’t treat it that way is the bespoke nature of the AI’s responses; it uses the context of your conversation to make each response seem written just for you.
It’s hard for humans to avoid creating bugs when writing software, especially when integrating code from AI tools. Sometimes these bugs introduce vulnerabilities that are exploitable by attackers. This is true even if the user is smart enough to ask the AI to find vulnerabilities in the code.
One example that is likely to be among the most common AI-introduced vulnerabilities is hardcoded credentials. This is not limited to AI; it is one of the most common flaws in human-authored code. Since an AI doesn’t understand a specific organization’s environment and policies, it won’t know how to follow best practices unless specifically asked to implement them. To continue the hardcoded-credentials example, an AI won’t know that an organization uses a service to manage secrets such as passwords. Even if it is told to write code that works with a secret management system, it wouldn’t be wise to provide configuration details to a third-party service.
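As a minimal sketch of the difference (the database driver, host names, and variable names below are hypothetical), AI-generated code often looks like the first snippet, while organizational policy usually calls for something closer to the second, where secrets are injected at runtime:

```python
import os
import psycopg2  # hypothetical choice of database driver

# What AI-generated code frequently looks like: credentials embedded in source.
conn = psycopg2.connect(
    host="db.internal.example.com",
    user="audit_reader",
    password="SuperSecret123!",  # hardcoded credential
)

# What policy usually requires: secrets supplied at runtime, for example via
# environment variables populated by the organization's secret manager.
conn = psycopg2.connect(
    host=os.environ["AUDIT_DB_HOST"],
    user=os.environ["AUDIT_DB_USER"],
    password=os.environ["AUDIT_DB_PASSWORD"],
)
```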
4. People will use AI content they don’t understand
There will be individuals who put their faith in AI to do things they don’t understand. It is like trusting a translator to accurately convey a message to someone who speaks a different language. This is especially risky on the software side of things.
Reading and understanding unfamiliar code is a key skill for any developer. However, there is a large difference between understanding the gist of a body of code and grasping the finer implementation details and intentions. This is often evident in code snippets that are considered “clever” or “elegant” as opposed to being explicit.
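As a small illustration (the data and field names are made up), the two snippets below compute the same per-department totals, but only the second is easy for a reviewer to follow line by line:

```python
from collections import defaultdict
from functools import reduce

rows = [
    {"dept": "audit", "amount": 120.0},
    {"dept": "audit", "amount": 80.0},
    {"dept": "it", "amount": 50.0},
]

# "Clever": compact, but the intent takes real effort to verify.
totals = reduce(
    lambda acc, r: {**acc, r["dept"]: acc.get(r["dept"], 0) + r["amount"]}, rows, {}
)

# Explicit: the same aggregation, written so each step is obvious.
totals = defaultdict(float)
for row in rows:
    totals[row["dept"]] += row["amount"]
```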
When an AI tool generates software, there is a chance that the individual requesting it will not fully grasp the code that is produced. This can lead to unexpected behavior that manifests as logic errors and security vulnerabilities. If large portions of a codebase are generated by an AI in one go, there could be entire products that aren’t truly understood by their owners.
All of this isn’t to say that AI tools are dangerous and should be avoided. Here are a few things for you and your organization to consider that will make their use safer:
Set policies & make them known
Your first course of action should be to set a policy on the use of AI, including a list of allowed and disallowed AI tools. Once a direction has been set, notify your employees. If you allow AI tools, provide restrictions and tips, such as reminders that confidential information must not be shared with third parties. Additionally, re-emphasize your organization’s software development policies to remind developers that they still need to follow industry best practices when using AI-generated code.
Provide guidance to all
You should assume your non-technical employees will automate tasks using these new technologies, and provide training and resources on how to do it safely. For example, set the expectation that all code lives in code repositories that are scanned for vulnerabilities. Non-technical employees will need training in those areas, especially in addressing vulnerable code. Code and dependency reviews are key, especially given recent critical vulnerabilities caused by common third-party dependencies (CVE-2021-44228).
Use Defense in Depth
If you’re worried about AI-generated vulnerabilities, or about what will happen if non-developers start writing code, take steps to prevent common issues from magnifying in severity. For example, using Multi-Factor Authentication lessens the risk posed by hardcoded credentials. Strong network security, monitoring, and access control mechanisms are key here. Additionally, frequent penetration testing can help identify vulnerable and unmanaged software before it is discovered by attackers.
If you’re a developer that is interested in using AI tools to accelerate your workflow, here are a few tips to help you do it safely:
Generate functions, not projects
Use these tools to generate code in small chunks, such as one function at a time. Avoid using them broadly to create entire projects or large portions of your codebase at once, as this increases the likelihood of introducing vulnerabilities and makes flaws harder to detect. Smaller chunks are also easier to understand, and understanding generated code is mandatory before using it. Perform strict format and type validations on the function’s arguments, side effects, and output, as in the sketch below. This helps sandbox the generated code and keeps it from negatively impacting the system or accessing unnecessary data.
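One way to apply that advice (the function and field names here are hypothetical) is to keep the AI-generated function small and wrap it in a validation layer that you write and understand yourself:

```python
def summarize_transactions(rows):
    """Placeholder standing in for a small AI-generated function."""
    return {"count": len(rows), "total": sum(r["amount"] for r in rows)}


def summarize_transactions_checked(rows):
    """Hand-written wrapper: validate inputs and outputs before and after
    the generated code runs, so bad data or surprising results fail loudly."""
    if not isinstance(rows, list) or not all(isinstance(r, dict) for r in rows):
        raise TypeError("rows must be a list of dicts")
    for r in rows:
        if not isinstance(r.get("amount"), (int, float)):
            raise ValueError("each row needs a numeric 'amount' field")

    result = summarize_transactions(rows)

    if set(result) != {"count", "total"} or not isinstance(result["count"], int):
        raise ValueError("unexpected output shape from generated code")
    return result
```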
Use Test-Driven Development
One of the advantages of test-driven development (TDD) is that you specify the expected inputs and outputs of a function before implementing it. This helps you decide what the expected behavior of a block of code should be. Using TDD in conjunction with AI code generation leads to more understandable code and verifies that it fits your assumptions. TDD lets you explicitly control the API and enforce your assumptions while still gaining the productivity benefits.
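A minimal sketch of that workflow, with a hypothetical function and module name: write the tests first, then ask the AI for an implementation and accept it only once every case passes.

```python
# test_masking.py -- written *before* asking the AI for an implementation.
import pytest

from masking import mask_account  # hypothetical module the AI will be asked to fill in


def test_masks_all_but_last_four_digits():
    assert mask_account("1234567890") == "******7890"


def test_short_values_are_fully_masked():
    assert mask_account("123") == "***"


def test_rejects_non_strings():
    with pytest.raises(TypeError):
        mask_account(1234567890)
```

Running the suite with pytest then becomes the gate: an AI-generated mask_account is only merged once all three tests pass, and the tests double as documentation of the behavior you actually asked for.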
These risks and recommendations are nothing new, but the recent emergence and popularity of AI is cause for a reminder. As these tools continue to evolve, many of these risks will diminish. For example, these tools won’t be cloud-hosted forever, and their response and code quality will improve. There may even be additional controls added to perform automatic code audits and security reviews before providing code to a user. Self-hosted AI utilities will become widely available, and in the near term there will likely be more options for business agreements with AI creators.
I am excited about the future of AI and believe that it will have a large positive impact on business and technology; in fact, it already has begun to. We have yet to see what impact it will have on society at large, but I don’t think it will be minor.
If you are looking for help navigating the security implications of AI, let Cisco be your partner. With experts in AI and SDLC, and decades of experience designing and securing the most complex technologies and networks, Cisco CX is well positioned to be a trusted advisor for all your security needs.