Generative AI and IGA: Three considerations
The introduction of ChatGPT and generative artificial intelligence (AI) models is one of the most significant technology developments security leaders have seen. It is affecting almost every sector of business, including identity governance, where it can greatly lower the barrier to entry for adopting AI capabilities within a governance solution. With the help of AI, businesses can improve efficiency by automating the routine tasks and repetitive processes currently performed by humans, and even take on work that today only humans handle, reducing error along the way.
However, before adopting this technology, it’s important to understand not only how it can help with identity governance but also the potential risks and limitations. These pertain not just to generative AI itself, but also to an organization’s policies, governance requirements and willingness to cede control.
Generative AI can help tackle governance challenges
A common challenge in governance is the decisions users need to make. Let’s say an employee wants to look at purchase orders within Salesforce, but they can’t determine which role maps to that request and grants the access. Another example might be employee B, who needs access to a SharePoint site containing five-year projected business models and acquisition plans, but doesn’t know who can grant that access or even which job role owns the information.
Generative AI can help here by looking at who these types of requests have been approved for in the past, including what department those users were in, what job function they held, and the reasons they gave for needing access. That information and metadata can enable the AI to point the employee in the right direction for getting the materials they seek, in a way that’s simple and easy to understand.
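To make this concrete, here is a minimal sketch of the approach, assuming a hypothetical `ApprovedRequest` record and an LLM endpoint provided elsewhere by the governance platform; none of these names come from a specific product:

```python
from dataclasses import dataclass

# Hypothetical record of a past approved access request; field names are
# illustrative, not taken from any specific IGA product.
@dataclass
class ApprovedRequest:
    role: str
    department: str
    job_function: str
    justification: str

def build_guidance_prompt(question: str, history: list[ApprovedRequest]) -> str:
    """Assemble historical approval metadata into a prompt an LLM can use
    to point the requester toward the right role."""
    precedents = "\n".join(
        f"- role={r.role}; dept={r.department}; "
        f"function={r.job_function}; reason={r.justification}"
        for r in history
    )
    return (
        f"A user asks: {question}\n"
        f"Past approved requests for similar access:\n{precedents}\n"
        "Based on these precedents, suggest which role the user should "
        "request and explain why in plain language."
    )

history = [
    ApprovedRequest("SFDC_PO_Viewer", "Procurement", "Buyer",
                    "Reviews purchase orders weekly"),
    ApprovedRequest("SFDC_PO_Viewer", "Finance", "Analyst",
                    "Quarterly spend reporting"),
]
prompt = build_guidance_prompt(
    "How do I get access to purchase orders in Salesforce?", history)
# `prompt` would then be sent to whatever LLM endpoint the platform uses.
```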
Generative AI brings the ability to help by training models to address these situations. GPT provides an extensively pre-trained large language model that can be layered with additional training to support some very good use cases and scenarios.
Another example: a prospect is interacting with a product, and the organization wants to surface information that lives in product documentation portals and customer support tickets. The model can amalgamate much of that information, so when the prospect asks for product information, the AI can pull the different pieces together to provide recommendations or guidance specific to the request. The beauty of a large language model (LLM) is that it can understand the user’s intent and provide relevant results in natural language.
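A minimal sketch of that amalgamation step follows, using plain keyword overlap to pick relevant snippets; a production system would more likely use embedding-based retrieval, and the documents here are invented:

```python
# Score documentation snippets against the prospect's question by keyword
# overlap, then hand the best matches to the LLM as context.
def retrieve(question: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
    q_terms = set(question.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

# Invented snippets standing in for documentation portals and support tickets.
docs = {
    "install_guide": "Install the agent and register it with the console.",
    "sso_faq": "The product supports SAML and OIDC single sign-on.",
    "ticket_1234": "Customer fixed single sign-on errors by rotating the IdP cert.",
}
context = "\n".join(retrieve("Does the product support single sign-on?", docs))
# `context` plus the question would form the prompt sent to the LLM.
```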
Or take AI in identity security, which can recommend access privileges based on the access rights of peer groups. Similar to how automated role mining identifies logical groupings, AI can base recommendations on what other users are doing throughout the network. Let’s say a user requests the same access as her colleague, who is working in the same role. Generative AI can be used not just to recommend the right items but also to complete the request on the user’s behalf, saving the user time and confusion.
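Here is a minimal sketch of the peer-group idea, assuming invented entitlement data and Jaccard set similarity as the peer metric; real role-mining features may use other clustering techniques:

```python
# Suggest entitlements that a user's closest peers share but the user lacks.
def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user: str, access: dict[str, set[str]], top_k: int = 2) -> set[str]:
    peers = sorted(
        (u for u in access if u != user),
        key=lambda u: jaccard(access[user], access[u]),
        reverse=True,
    )[:top_k]
    # Entitlements held by every close peer become candidate suggestions.
    shared = set.intersection(*(access[p] for p in peers)) if peers else set()
    return shared - access[user]

# Invented entitlement data for illustration.
access = {
    "alice": {"salesforce:read", "sharepoint:finance", "confluence:read"},
    "bob":   {"salesforce:read", "sharepoint:finance"},
    "carol": {"salesforce:read", "sharepoint:finance", "confluence:read", "jira:dev"},
    "dave":  {"jira:dev"},
}
print(recommend("bob", access))  # -> {'confluence:read'}
```

In a fulfillment workflow, the resulting suggestion could then be submitted on the user’s behalf, subject to the controls discussed next.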
That said, depending on the task, some organizations will still need human involvement in the process to verify that a request is appropriate. Low-risk, highly repetitive tasks are ideal for AI. For higher-risk or more sensitive tasks, such as granting access to approve wire transfers over $1 million, internal controls shaped by risk assessment and audit mandates may require a human decision-maker, while AI offers suggestions and advice. This approach ensures that during audits, there’s an individual responsible for satisfying compliance and security controls.
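A sketch of such a gate might look like the following; the high-risk list, risk scores and threshold are illustrative assumptions that would in practice come from the organization’s risk assessments and audit mandates:

```python
from enum import Enum

class Decision(Enum):
    AUTO_FULFILL = "AI may complete the request directly"
    HUMAN_REVIEW = "AI may only suggest; an accountable person decides"

# Hypothetical high-risk entitlements that always require a human approver.
HIGH_RISK_ENTITLEMENTS = {"wire_transfer_approval", "domain_admin"}

def gate(entitlement: str, risk_score: float, threshold: float = 0.7) -> Decision:
    """Route a request by risk: repetitive low-risk items are auto-fulfilled,
    sensitive ones go to a human decision-maker."""
    if entitlement in HIGH_RISK_ENTITLEMENTS or risk_score >= threshold:
        return Decision.HUMAN_REVIEW
    return Decision.AUTO_FULFILL

print(gate("sharepoint:finance", risk_score=0.2))      # Decision.AUTO_FULFILL
print(gate("wire_transfer_approval", risk_score=0.9))  # Decision.HUMAN_REVIEW
```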
Understand the potential risks — and an organization’s unique needs
To really succeed with generative AI, security leaders need to understand their organization’s audit control requirements, risk tolerance and any concerns about decision automation. This foundational understanding will help security leaders mitigate those concerns and ensure the organization benefits from the technology in a meaningful way.
For instance, identity governance involves personal data that security leaders can’t afford to leak. This is a real concern for data privacy, data processing agreements, data ownership and data usage, especially when leveraging a technology created by an outside organization.
It’s essential to approach AI deployment with a security-first mindset and a commitment to safeguarding sensitive data. Is the LLM exclusive to the organization’s environment or its SaaS provider, i.e., not delivered by a public service? Security leaders need a clear understanding of how their data is used to provide the service; for example, ensuring it’s not used or retained to improve the LLM.
Environments for deploying and using LLMs should be carefully controlled, limiting access to authorized personnel and ensuring robust security measures are in place. Only provide the specific data necessary for the AI model’s task, and minimize the exposure of sensitive or personal data to reduce the risk of unintentional leakage. Whenever possible, use anonymized or masked data for model training and testing, which further reduces the risk of exposing personal information.
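As a minimal sketch of that minimization step, the allow-list and field names below are hypothetical, and hashing is only one possible masking technique:

```python
import hashlib

# Keep only the fields the AI task needs and pseudonymize direct identifiers.
ALLOWED_FIELDS = {"department", "job_function", "requested_role"}  # assumption

def pseudonymize(value: str) -> str:
    # One-way token so usage patterns survive without real names; a production
    # system would add a keyed salt (e.g., HMAC) to resist dictionary attacks.
    return "user_" + hashlib.sha256(value.encode()).hexdigest()[:8]

def minimize(record: dict) -> dict:
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    if "employee_name" in record:
        out["subject"] = pseudonymize(record["employee_name"])
    return out

record = {
    "employee_name": "Jane Doe",
    "email": "jane.doe@example.com",  # dropped: not needed for the task
    "department": "Finance",
    "job_function": "Analyst",
    "requested_role": "SFDC_PO_Viewer",
}
print(minimize(record))
# {'department': 'Finance', 'job_function': 'Analyst',
#  'requested_role': 'SFDC_PO_Viewer', 'subject': 'user_...'}
```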
Generative AI is not a panacea
Generative AI is a powerful tool, but it won’t solve every problem. To be fully effective, in many circumstances it should be implemented in tandem with human input as a way of improving decision-making and processes.
There are many use cases that LLMs can tackle today in governance with minimal need for additional training. As we progress into more specialized use cases or areas where the needed decisions have a security impact, customization may be required to improve accuracy.
In other words, while AI can accelerate many identity workflows, there will be areas where it must be paired with humans who take the recommendations generative AI provides and make the final decision. At the end of the day, AI is very effective at automating manual and repetitive processes, freeing employees to focus on the higher-risk tasks where security leaders need a person who can ultimately be accountable for the decisions and results.
The most important lesson from recent AI developments is that for AI to be most beneficial, it must be built upon solid underlying processes.
Successfully pairing IGA and AI
Generative AI is revolutionizing identity governance, offering efficiency gains and error reduction by automating routine tasks. However, organizations must tread carefully and make sure they understand their tolerance for risk and automation. For instance, some organizations have business processes that require an accountable human; they won’t pass a compliance audit if they can’t fully justify those processes. It’s important to understand these kinds of business restrictions when determining how and where to apply generative AI. Generative AI is a potent tool, but it’s not a one-size-fits-all solution. It can thrive in an area like identity management, where extensive automation is required, yet organizations must carefully consider their own risk, find ways to mitigate it and move forward.