Briefing the board on AI: Educate to tee up investment
Beginning with a broad overview of what IT is doing about AI, including the opportunities and challenges unique to the company, is the best way to introduce the topic. From there, you can move into greater detail on what AI is, how it works, and how it can help the business.
Effective education on AI
Another guideline for briefing the board on AI is to aim to educate board members to the point where they can comfortably discuss AI with highly skilled associates in their day-to-day roles.
It’s important for board members to feel they have sufficient command of the topic so that, in those conversations, they can ask questions and follow threads relevant to the business. A foundational understanding of AI and what it does also helps them develop confidence in the technology, which can make board approvals for AI projects easier down the line.
If you’re the CIO introducing the topic of AI in a boardroom presentation, your primary goal should be that each board member leaves the room with a basic understanding of what AI is, what it does, and how it can help the company — at a level where they can conduct further productive discussions on the topic with colleagues and company leaders.
Highlight benefits and risks
Because company boards are tasked with overseeing corporate compliance and social responsibility as well as reviewing revenues, expenses, and operational and financial health, CIOs must continue to educate board members about both AI benefits and AI risks. Doing so also helps set board expectations for AI initiatives because the board will understand upfront that implementing AI is not just a technical undertaking; it involves compliance, policy-making, and risk management as well.
For example, what if a company’s AI is biased, or the data the company is using is flawed, and the company makes a faulty strategic decision based on its output? How do you prevent that? What about potential privacy issues that can arise with customers? How do you mitigate those, and what privacy guardrails can you put in place so they never happen? What about your AI workforce? Is it sufficiently skilled and diverse to ensure that the AI algorithms it develops are inclusive? And what about your employees? Will the introduction of AI cause layoffs or create the risk of losing some of your most valuable people?