AI Governance: Act now, thrive later

We all know technology moves fast, and it is only moving faster. Artificial intelligence (AI) technologies are advancing more rapidly than previous technologies and are transforming companies and industries at an extraordinary rate. There is such excitement about these technologies and their use cases that we are starting to see implementations everywhere. Employees are experimenting with, developing, and moving AI technologies into production, whether or not their organization has AI policies in place. We must recognize that AI technologies are now part of modern life and an integral part of our technology portfolios.

With the rapid advancement and deployment of AI technologies comes a threat: adoption has outpaced many organizations’ governance policies. These gaps can expose businesses to risks and vulnerabilities such as security breaches, data privacy issues and harm to the company’s reputation.

When organizations build and follow governance policies, they can realize significant benefits, including faster time to value, better business outcomes, reduced risk, clear guidance and direction, and stronger trust.

However, people will complain that governance hinders creativity, progress and forward momentum, adds extra steps and processes, and increases total cost because of those added steps. Governance is also seen as a roadblock to the agility needed to deploy into production quickly.

In reality, the opposite is true. Companies need to create and implement AI governance policies so that AI can deliver benefits to the organization and the customer through a fair, safe and inclusive system that users trust. AI governance provides the direction and guardrails for implementing the technologies consistently across the organization, and it outlines the degree of oversight needed.

Many organizations argue that AI governance should come from governments first. While a great deal of such effort and content is now available, it tends to be high level, and work will still be required to turn it into a governance model for your organization. You need to create your own, tailored to your organization, its needs, requirements and operational designs.

It is easy to see how these objections can get in the way. Many things can block the creation and implementation of an AI governance framework, but by methodically addressing each blocker, organizations can implement a framework that fits their company’s mission and values. As with anything, something is better than nothing.

Why is something better than nothing? Gartner surveyed IT and data analytics leaders and found that only 46% had implemented an AI governance framework: 12% said they had a dedicated AI governance framework in place, and 34% had extended other governance frameworks to include AI-specific policies. That left 55% saying their organization had not yet implemented an AI governance framework at all.

So, as you develop or update your governance process, remember that governance is action. It comprises the activities that cover planning, approvals, security, process, monitoring, remediation and, of course, auditing. It needs to be embedded in every AI project. Create processes that stay close to the actual practice of developing, deploying, operationalizing and maintaining AI solutions. The benefits far outweigh the alternative.

What is governance?   

Let’s start by defining governance and setting the foundation. This will help get past any naysayers.   

When defining governance, it is best to start at the top with business governance, then move to IT governance and then AI governance. Starting with business governance sets the foundation because it helps leadership strengthen the organization’s long-term competitiveness and remain competitive in a constantly changing world.

  • Business governance. This is the system of rules, practices and processes by which a company is directed and controlled, helping guide the organization in the right direction and get consistently great outcomes. 
  • IT governance. This is the process that ensures the effective and efficient use of IT resources and the effective evaluation, selection, prioritization and funding of competing IT investments to achieve measurable business benefits. Essentially, it aligns IT goals with business goals. 
  • AI governance. This is the set of processes that outline and guide the use of AI in your organization. The goal is to ensure that AI is ethical, transparent, responsible and fair, as well as compliant with legal and regulatory standards. It also includes managing the risks, quality and accountability of AI systems and their outcomes.

AI governance is critical and should never be treated as just a regulatory requirement. It is a strategic imperative that mitigates risks, ensures ethical AI usage, builds trust with stakeholders and drives better business outcomes. AI governance should absolutely be part of your AI strategy from the beginning, not an afterthought.

Governance is action, and there are many actions an organization can take to create and implement an effective AI governance model. Start with:

  • An AI culture. The more people feel included, the more they will be engaged and play a part in the adoption. The AI culture should also include training and a learn-it-all mindset. AI is moving so fast that it isn’t about being a know-it-all but about being a learn-it-all.
  • Creating an AI governance committee. Include people with expertise as well as representatives from each project area. You want people who are close to the implementation so that policies fit correctly and are not shoe-horned in. This committee should have oversight across all AI activities in the organization. It should also put in place the policies around human oversight and control, reviewing each implementation to understand where a human is in the loop and what controls have been built into the system. The committee will also need to determine which events require the human in the loop to act, so there is no confusion about who does what, when and in response to what. This team should also keep track of all AI technologies used and deployed across the organization.
  • Continual communication. Share the policies and the activities of the AI governance committee. Communicate what others across the organization are doing with AI and connect the dots so that groups can learn from each other and share preferred practices and lessons learned.
  • Metrics. Start by identifying key performance indicators (KPIs) that outline the goals and objectives. Metrics should include system downtime and reliability, security incidents, incident response times, data quality issues and system performance. You can also measure user AI skills, adoption rates and even the maturity level of the governance model itself. Set goals and report metrics to determine whether you are achieving the goals set out by the organization or the AI governance committee (see the sketch after this list).
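To make those KPIs concrete, here is a minimal sketch of how a reporting period could be captured and checked against targets. The field names and thresholds are illustrative placeholders, not a standard; each organization should set its own.

```python
from dataclasses import dataclass

@dataclass
class GovernanceKpis:
    period: str              # reporting period, e.g., "2025-Q1"
    uptime_pct: float        # reliability of AI-backed services
    security_incidents: int  # incidents involving AI systems
    response_hours: float    # mean time to respond to incidents
    data_quality_issues: int # issues found in data feeding the models
    adoption_rate_pct: float # share of target users actively using AI

def missed_targets(kpis: GovernanceKpis) -> list[str]:
    """Report which example targets a period missed; every threshold
    here is a placeholder for the organization's own goals."""
    misses = []
    if kpis.uptime_pct < 99.5:
        misses.append(f"{kpis.period}: uptime below target")
    if kpis.security_incidents > 0:
        misses.append(f"{kpis.period}: security incidents reported")
    if kpis.response_hours > 24:
        misses.append(f"{kpis.period}: incident response too slow")
    return misses
```

Even a simple report like this gives the governance committee something measurable to review each period.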

This is certainly not a complete or exhaustive list, and it isn’t meant to be. It is a starting point, and there should be continual improvement based on the identified results and issues encountered. Modify the policies as needed and continue to monitor and strive for improvement.   

As mentioned above, don’t let the challenges of creating and implementing an AI governance process slow you down or get in the way. It is important to identify the most common challenges as they can quickly derail any positive efforts. Let’s talk about a few of them:  

  • Lack of data governance. Organizations need to have a data governance policy in place. Without one, it can feel like you are creating two governance models simultaneously, and the hurdle to implement each one is even greater. Policies around data quality, data bias, data privacy and usage, and data ownership will be the groundwork for ensuring the trustworthiness of the organization and of any AI system built upon the data.
  • Lack of clarity on AI’s business impact. The business and IT need to be in lockstep. IT will require the resources to build and productionize the system. If there is a struggle to identify the benefits, reduce the risks and see a clear business advantage, the projects will never succeed.
  • Poor collaboration between the business and IT. This directly follows the lack-of-clarity challenge. When there is poor collaboration and a lack of communication, it directly impedes the effectiveness of both IT and the business and quickly leads to failed projects. This becomes a snowballing issue that keeps growing, as AI projects involve multiple stakeholders across the organization.
  • Organizational silos. When there are communication issues or infighting, each team ends up operating in a silo with respect to technology usage, ownership and direction. This increases risk and makes it next to impossible to deal with legal, compliance or security issues, or to maintain consistent operations across the business.
  • Skill gap. While AI was once considered an up-and-coming technology, it isn’t any longer. Organizations may feel that they don’t have the necessary understanding or expertise to build and implement an AI governance policy or framework. However, there are so many resources available now, including the NIST AI Risk Management Framework, that organizations should no longer see this as a challenge. As you utilize such frameworks, make sure you customize the policies to fit your business and its implementations.

While these challenges can feel overwhelming, think about what happens when you don’t, or won’t, implement a corporate-wide AI governance model.

In the same Gartner study mentioned earlier, leaders identified negative impacts their organizations had experienced due to a lack of governance: increased costs (47% of respondents), failed AI initiatives (36%), decreased revenue (34%), negative customer experiences (29%) and more. The cost of lacking governance is too high to ignore.

Start small, then grow, expand and work across departments. Decide what is required for risk mitigation, legal compliance, accountability, roles and responsibilities for escalations, and responsibility for accepting outcomes. Keeping in mind that governance is action, let’s focus on two activities that need to be part of every governance process.

Take stock of your AI inventory  

One of the things that happens inside organizations is a race to incorporate AI models and tools across different teams. Teams might compete to be in production first, to be the first to incorporate new technology, or to be the first to play with the latest technologies dominating the news. The result is that organizations can end up with a wide array of AI models, versions of those models and AI tools, especially when no governance policies were in place to guide the selection and use of AI technologies.

We all know that once a solution is built and promoted to a production environment, the odds of going back and changing the technologies used are quite low. What organizations are left with is a management nightmare: a disorganized array of technology that leads to ever-worse tech debt.

In this article, I have talked about the need for governance and planning, not only to help counter the negative impacts outlined in the Gartner study mentioned earlier, but also to guide the organization, architects, management and operations so that you don’t end up in a situation where you are adding more complexity. One essential activity every organization must perform is keeping an AI inventory.

Start by keeping an inventory of all products and services that currently use AI technology. Then, for each product or service, track the large language models, model versions, and AI frameworks and tools in use. With this inventory you can see which tools and models are in use across the organization, and you can manage your activities around the model lifecycle. Typically, a model’s retirement date arrives while the solution is still in use, before the next application version ships and usually well before the application’s own retirement date. That forces a new release of the solution just to keep an actively supported model underneath it, which brings us to the activity of model management.
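What might an inventory entry look like in practice? Here is a minimal sketch in Python; the structure, field names and example values are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelDependency:
    family: str            # e.g., "gpt-35-turbo"
    version: str           # the exact pinned version, e.g., "0125"
    retirement_date: date  # published retirement date for this version
    auto_update: bool      # can the platform swap versions underneath you?

@dataclass
class AIInventoryEntry:
    solution: str          # the product or service using AI
    owner: str             # accountable team or contact
    frameworks: list[str] = field(default_factory=list)  # tools in use
    models: list[ModelDependency] = field(default_factory=list)
```

Even a record this small answers the questions that matter during an upgrade cycle: which solutions use which model version, who owns them and when the clock runs out.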

Manage your models   

Knowing where large language models are used across the organization is only one part of model management. You also need to keep in mind, and plan for, the retirement timeframes of the models in use, and set aside enough time to test and evaluate each new model.

Let’s start with the lifecycle of the model itself. If we look at GPT models (OpenAI and Azure OpenAI), each model is released with a retirement date. The documentation provides additional information, including which model is the suggested replacement for each.

If we look at the documentation and pick a model as an example, we can see that GPT-3.5 version 0613 has a retirement date of February 13, 2025. If you are using this model, you need to determine whether you will upgrade manually or have auto-update enabled. With auto-update enabled, you don’t have to do anything: your model will be updated to a model within the same model family (in this example, GPT-3.5 version 0125). But how comfortable are you with your model being updated underneath your solution without having tested the new model?

If you choose to have your model auto-updated to GPT-3.5 version 0125, that model also has a retirement date: May 31, 2025. That buys you roughly three months before you are upgraded again. The suggested replacement for GPT-3.5 0125 is GPT-4o mini, and there is no auto-update path from one model family to another. So you may decide that going directly to the GPT-4 family will give you the longest lifecycle path.
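Reusing the hypothetical inventory structures sketched earlier, a simple check can turn retirement dates into planned work instead of surprises. A minimal sketch, with the 180-day planning window as an arbitrary example:

```python
from datetime import date, timedelta

def models_needing_attention(inventory: list[AIInventoryEntry],
                             window: timedelta = timedelta(days=180)):
    """Flag pinned models retiring within the planning window, plus any
    deployment that can auto-update without a test pass."""
    horizon = date.today() + window
    flagged = []
    for entry in inventory:
        for model in entry.models:
            if model.auto_update or model.retirement_date <= horizon:
                flagged.append((entry.solution, model.family,
                                model.version, model.retirement_date))
    return flagged
```

Run against the Feb. 13, 2025 example above, a check like this would have flagged the upgrade well before the auto-update window closed.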

However, you now need to decide between GPT-4o and o1. o1 is focused on reasoning and solving complex problems, whereas GPT-4o is a multimodal model for text and image processing. Both GPT-4o and o1 have mini variants as well.

By keeping an inventory of the models being used across your organization, you can manage the activities and timelines of model upgrades organization-wide, as well as determine which model best fits each application’s needs. It is far easier to keep models consistent across implementations and perform the updates together.

Create a policy that outlines the model upgrade process, keeps you in control of updates and determines the exact model version you will move to. Make sure you test the new model rather than letting it update automatically and hoping each version behaves the same as its predecessor.
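One way to make such a policy enforceable rather than aspirational is to encode it as a check against the inventory. A minimal sketch, again assuming the hypothetical structures above:

```python
def pinned_version_violations(inventory: list[AIInventoryEntry]) -> list[str]:
    """A governance gate: report every deployment that relies on automatic
    upgrades instead of an explicitly pinned, tested model version."""
    return [
        f"{entry.solution}: {model.family} {model.version} allows auto-update; "
        "pin the version and schedule a tested upgrade instead."
        for entry in inventory
        for model in entry.models
        if model.auto_update
    ]
```

A check like this can run in a pipeline or a periodic review, so violations surface as actionable items rather than production surprises.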

The next part of any model update is the testing that must take place.

Keep the evaluation harnesses, tests and testing metrics created for the old model available for model upgrades. As you test the new model, you don’t want to have to recreate the evaluation tests. By keeping the test harness, it is easier and faster to update it and have it ready to compare the new model’s output against what the application or solution requires. Many organizations tune their prompts for a specific model version, and those prompts now need to be tested with the new model. To maintain trust across the user base, you need to make sure the solution behaves the same. The new model will introduce changes in responses, and the existing prompts will need to be re-tuned, so make sure you incorporate time for testing and tuning in the upgrade process.

Therefore, have the original test harness ready for any model upgrade so that you can put the new model through the same tests and ensure that your tuned prompts elicit the same quality of output the application requires. In addition to tuning the prompts, test and collect metrics for relevance, coherence and groundedness, and compare them against the metrics collected when you tested the original model.
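As an illustration of what such a comparison can look like, the sketch below runs the same prompt suite and metric scorers (relevance, coherence, groundedness or whatever your harness already measures) against both model versions and flags regressions. The function names, the 0-to-1 scoring convention and the tolerance value are illustrative assumptions:

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable

# A metric scores one (prompt, response) pair from 0.0 to 1.0.
Metric = Callable[[str, str], float]

@dataclass
class EvalResult:
    model: str
    scores: dict[str, float]  # metric name -> mean score across the suite

def run_suite(model_name: str,
              generate: Callable[[str], str],  # wraps the model under test
              prompts: list[str],
              metrics: dict[str, Metric]) -> EvalResult:
    """Run the same prompt suite and metrics against any model version."""
    responses = [(p, generate(p)) for p in prompts]
    return EvalResult(
        model=model_name,
        scores={name: mean(m(p, r) for p, r in responses)
                for name, m in metrics.items()})

def regressions(baseline: EvalResult, candidate: EvalResult,
                tolerance: float = 0.05) -> list[str]:
    """List every metric where the candidate drops beyond the tolerance."""
    return [f"{name}: {baseline.scores[name]:.2f} -> {candidate.scores[name]:.2f}"
            for name in baseline.scores
            if candidate.scores[name] < baseline.scores[name] - tolerance]
```

Because the same suite runs unchanged against old and new models, the resulting numbers are directly comparable and easy to share.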

As you test the new model, compare its results to the old model’s output and expected results, and have the metrics ready to share. There will be times when you need to prove that the new model is in line with the results and metrics of the previous model, and the only convincing way to do that is to share the output metrics and test results.

Pull in the same direction

Don’t wait. Get involved now in your organization’s governance planning, whether it is being created from scratch specifically for AI or extending an existing governance plan.

Having a governance model is a clear way to ensure that everyone is pulling in the same direction, understands the guidelines across the company and implements solutions according to the same plan. But a governance plan and model doesn’t need to be created centrally. Every part of the organization should have input and help create the governance model. Each team should proactively brainstorm the policies and taxonomy relevant to its implementations, so those ideas can be reviewed and feed into the multitude of policies that will fairly cover the full organization.

But even with teams in place to create and implement the governance process and policies, if the teams doing development and deployment are not following those policies, there isn’t really governance. If teams aren’t following the policies, there is a governance gap, and the goals set out by the organization will never be realized. Without governance, the organization will not be in a position to respond to legal issues or issues of trust and fairness, or to provide transparency and risk mitigation. Proper development, implementation and leadership oversight of governance policies is the only way for an organization to realize the desired business outcomes.

Stephen Kaufman serves as a chief architect in the Microsoft Customer Success Unit Office of the CTO focusing on AI and cloud computing. He brings more than 30 years of experience across some of the largest enterprise customers, helping them understand and utilize AI ranging from initial concepts to specific application architectures, design, development and delivery.  

This article was made possible by our partnership with the IASA Chief Architect Forum. The CAF’s purpose is to test, challenge and support the art and science of Business Technology Architecture and its evolution over time as well as grow the influence and leadership of chief architects both inside and outside the profession. The CAF is a leadership community of the IASA, the leading non-profit professional association for business technology architects. 


