SaaS governance is improving, but AI presents new challenges
After hitting a high in 2022, the share of apps identified as “shadow IT” dropped from 53% to 48% in 2023. This drop signals an increase in SaaS governance actions: enterprises are getting better at knowing which apps employees are using, and better at enforcing policies around SaaS use. Anecdotally, I’m seeing the creation of SaaS governance councils become the norm; businesses are responding to the need for repeatable processes that let teams collaborate cross-functionally and make decisions on their SaaS portfolios, for both renewals and purchases.
It’s also clear that we still have a long way to go when it comes to shadow IT. Innovators and early adopters within a company continue to seek out AI-native applications and AI solutions for unmet needs, and vendors embedding AI within their platforms present a new set of challenges for IT and compliance teams. The reality is that even as the use of shadow IT trends down overall, AI adoption continues to rise, and businesses need to consider new strategies to ensure governance and compliance as employees seek out innovative solutions. In fact, research from Accenture found that while 80% of companies plan to increase investment in responsible AI, only 6% of organizations felt they had the appropriate governance structures in place.
So what can leaders within the enterprise do to ensure they don’t lose their grip on governance?
Bring teams together
Shadow IT is no longer just an IT problem: collaboration with finance and procurement shows that better business outcomes happen when teams align around data. Establishing and enforcing policies around AI use is also a team sport, and it requires a truly multidisciplinary approach. By forming cross-functional teams with representatives from IT, legal, compliance, data science, and business units, your leadership team sets the tone for collaboration and ensures that diverse perspectives are considered when developing AI governance policies and procedures.
Get clear on policies and procedures
To ensure good governance of any SaaS, AI or otherwise, your org needs comprehensive policies governing the procurement, deployment, and use of specific technologies. These policies should cover the basics: defining “Approved Usage” and “Not Approved Usage,” and providing clarity on how you handle data privacy, security protocols, algorithm transparency, and more. And don’t let them fade away in your intranet or employee handbook: make sure these policies are communicated effectively to all stakeholders and enforced consistently throughout the organization (and automated where possible).
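To make “automated where possible” concrete, here is a minimal sketch of policy-as-code enforcement. It assumes a hypothetical policy file (saas_policy.yaml) and a CSV export of discovered apps (discovered_apps.csv) from whatever discovery tooling you already run; the file names and fields are illustrative, not tied to any specific vendor.

```python
# Minimal sketch: check discovered SaaS/AI apps against a policy-as-code file.
# saas_policy.yaml and discovered_apps.csv are hypothetical inputs used for
# illustration only.
import csv
import yaml  # pip install pyyaml


def load_policy(path="saas_policy.yaml"):
    """Load approved / not-approved app lists from a policy file."""
    with open(path) as f:
        policy = yaml.safe_load(f)
    return set(policy.get("approved", [])), set(policy.get("not_approved", []))


def classify_apps(discovered_csv, approved, not_approved):
    """Bucket each discovered app as approved, blocked, or needs-review."""
    results = {"approved": [], "blocked": [], "needs_review": []}
    with open(discovered_csv, newline="") as f:
        for row in csv.DictReader(f):
            app = row["app_name"]
            if app in approved:
                results["approved"].append(app)
            elif app in not_approved:
                results["blocked"].append(app)
            else:
                # Unknown apps are shadow IT candidates: route them to the
                # governance council for review rather than silently allowing.
                results["needs_review"].append(app)
    return results


if __name__ == "__main__":
    approved, not_approved = load_policy()
    report = classify_apps("discovered_apps.csv", approved, not_approved)
    for bucket, apps in report.items():
        print(f"{bucket}: {len(apps)}")
```

A check like this can run on a schedule, so the “needs review” bucket becomes a standing agenda item for the governance council instead of a surprise at renewal time.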
Let data drive
An AI governance initiative without proper data to support it is a recipe for inefficiency. There are plenty of tools and technologies specifically designed for monitoring, managing, and governing AI applications within the enterprise. These tools can facilitate real-time monitoring of AI systems, detect anomalies or deviations from established policies, and automate compliance workflows to streamline governance processes. Armed with the data these platforms provide, your organization can conduct regular audits, risk assessments, and performance evaluations of AI systems to proactively identify and mitigate potential risks or compliance issues.
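As a toy illustration of the kind of signal these platforms surface, the sketch below flags apps whose usage jumps sharply month over month, often the first hint that an unsanctioned AI tool is spreading through a team. The data shape and the growth threshold are assumptions for the example, not any particular vendor’s API.

```python
# Toy anomaly check: flag apps whose usage grew sharply month over month.
# The input shape (app -> list of monthly active-user counts) and the 2x
# threshold are assumptions made for this illustration.
from typing import Dict, List


def flag_usage_spikes(usage: Dict[str, List[int]], growth_threshold: float = 2.0) -> List[str]:
    """Return apps whose latest monthly usage is at least `growth_threshold`
    times the previous month's usage -- candidates for a governance review."""
    flagged = []
    for app, monthly_counts in usage.items():
        if len(monthly_counts) < 2:
            continue  # not enough history to compare
        prev, latest = monthly_counts[-2], monthly_counts[-1]
        if prev > 0 and latest / prev >= growth_threshold:
            flagged.append(app)
    return flagged


if __name__ == "__main__":
    sample = {
        "approved-crm": [120, 125, 130],
        "new-ai-notetaker": [5, 12, 60],  # sudden adoption spike
    }
    print(flag_usage_spikes(sample))  # ['new-ai-notetaker']
```

Real platforms go much further (user-level attribution, data-egress signals, contract data), but even a simple trend check like this turns anecdotes about “everyone suddenly using that new AI tool” into something the governance council can act on.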
Keep communication open
AI use in the enterprise is still nascent and evolving, so continued education and training programs around the risks of shadow IT and unsanctioned AI use, and the importance of complying with policies, are well worth the investment.
By embracing a proactive approach to AI governance and fostering collaboration across functional boundaries, enterprise leaders can effectively navigate the complexities of AI adoption while safeguarding against potential risks and ensuring compliance with regulatory requirements. In doing so, we can all harness the incredible potential that AI-equipped tools and platforms bring to our daily work — while making sure no one is putting the organization in a compromising position.