6 tough AI discussions every IT leader must have
To do that, Perez assesses AI projects as he would other projects built with other, less-hyped technologies, working with teams to evaluate the use case, governance needs, anticipated business benefits, and expected returns.
“You can’t dilute your efforts on projects that won’t provide value,” Perez adds. “So pick those few key areas where you can see the value and the great purpose that AI can deliver to the organization and put your bets on those, focus on those, learn from those and expand into other areas as you learn.”
3. What can we reasonably achieve given resource constraints?
After generative AI burst onto the scene, Nicholas Colisto, senior vice president and CIO of multinational manufacturer Avery Dennison, worked to get his company to embrace its potential.
“AI has been around for a long time,” he says, “but when gen AI came out and really exploded in early 2023, a lot of firms — including ours — said no to it. [Our firm’s leaders] wanted to make sure there were guidelines in place to protect the company, its data, and its people.”
To get past those points, Colisto worked to educate leaders about the capabilities and risks of AI, seeking to move the company from “no to know.” His efforts paid off, as Avery Dennison has since moved into gen AI ideation, pilots, and production.
Still, he acknowledges he has had to temper some of the forward momentum with reality checks regarding resources.
“I’ve had to remind people that there wasn’t a new bag of money that came along with AI,” Colisto says, adding that some mistakenly thought that the company’s longstanding funding practices didn’t apply to AI-related initiatives. “People thought AI was different in terms of how companies are investing in it; they thought corporate would have all this money for AI. But I don’t hear a lot of companies saying they’re going to forgo their investment cycles and go all in with AI.”
Other CIOs say they, too, are initiating conversations about the limits on money, talent, and time.
“Talent is an issue right now,” says Sreekanth Menon, vice president and global AI/ML services leader at business transformation services firm Genpact. “Companies need a different type of talent to work with AI. They need a massive upskilling, or they need to bring in new talent. And everybody is trying to work with their partners to build an [AI talent] ecosystem. But all that takes time.”
Many organizations are impacted by a shortage of the right talent. The Microsoft/IDC study found that 52% of survey respondents listed a lack of skilled workers as their biggest barrier to implementing and scaling AI.
CIOs are also reminding colleagues that their IT teams have other work to do, too. “We have normal project cycles. We have regular projects in the pipeline. We weren’t going to drop everything for AI,” Colisto says.
Rather than argue over those points, CIOs say they shepherd conversations back to business priorities.
“We’re asking our business units to identify ideas that align with strategic objectives and then talk about the costs of those ideas, the cost-efficiency or revenue-generating impact of those ideas, and the feasibility of achieving those gains,” Colisto says. “With AI, we still need business units to identify the most value-based programs. We have hundreds of use cases across all the major functions, and we’re funding those that are highly prioritized.”
4. Can the current state of our data operations deliver the results we seek?
Another tough topic that CIOs are having to surface to their colleagues: how problems with enterprise data quality stymie their AI ambitions.
A 2023 poll of 1,500-plus AI practitioners and decision-makers, conducted by S&P Global Market Intelligence for data platform maker WEKA, found that data management was the most frequently cited technological inhibitor to AI and machine learning deployments. Similarly, the 2023 US AI Risk Survey Report from professional services firm KPMG found that data integrity was No. 1 among the top three risks — followed by statistical validity and model accuracy.
“The data conversation is very real, and often it’s the CIOs saying, ‘If you don’t fix it, the results you want won’t happen,’” says Krishna Prasad, chief strategy officer and CIO at UST, a digital transformation solutions company.
Prasad says he has heard the conversation in his own organization as well as the firm’s client companies.
CIOs say it’s a particularly difficult discussion because they generally don’t have responsibility for the state of business data; all they can do is share their observations and help devise remedies.
On the other hand, CIOs say their calls for more action on data quality are getting more traction today because of AI. “AI is elevating the conversation around data, because data is now more critical for the enterprise than ever before,” Salesforce’s Perez says.
5. What is our appetite for risk and how do we address it?
The risks and security concerns around AI initiatives also dominate many of the conversations that CIOs are having with their executive colleagues and teams.
There’s good reason for that: Companies have seen both their proprietary and regulatory-protected data fed into publicly available AI tools, such as ChatGPT. They’ve seen AI turn out biased answers and outright fabricated results (known as AI hallucinations). And they’ve gotten AI outputs they cannot authenticate or validate, due to a lack of explainability.
“There are examples left and right of people who are using this technology in ways that put themselves and their organization at risk, and they don’t realize that they’re doing it,” Crawford says.
CIOs stress that blocking the use of AI isn’t the answer. Prohibiting its use won’t stop it, as some employees will likely continue to experiment with it. Plus, enterprise software makers are incorporating AI into the products and services they sell, so AI is entering the enterprise anyway. Furthermore, prohibiting or limiting its use due to fear of risks disadvantages the organization against competitors that are moving forward with AI initiatives.
CIOs need to bring those points to the table, Crawford and others say. CIOs must also now follow legislative discussions about possible regulations and assess how any new rules would affect their organization’s AI agenda.
“It’s a hard conversation to have for CIOs as they encourage innovation but at the same time try to protect customer data and intellectual property, because if CIOs are not careful with what they say, they can come across as someone who is a naysayer or not supportive of this new technology and people will work around you,” Crawford adds.
6. How does our AI strategy address ethical concerns?
Amanda Crawford, executive director of the Texas Department of Information Resources and CIO for the State of Texas, says she’s talking about the ethical and acceptable parameters of AI use as part of her conversations about the technology’s use within state government.
“We don’t want to be bleeding edge; that is not something we aspire to do, because of the obligations and responsibilities that come with being the government. That comes up with other emerging technologies, and that’s certainly true for a technology that’s a massive disruptor like AI,” she says. “So the pace at which we move in government is a little bit slower, it’s a little bit more thoughtful, intentional, and deliberate because it has to be, because of the nature of what we do. We have to maintain trust.”
That’s not to suggest that the State of Texas does not use AI, Crawford says. In fact, like many private entities, the Texas state government has deployed chatbots, intelligent automation, and intelligent systems throughout its operations and is studying where generative AI and other newer AI technologies could be used.
“But the conversations we’re having are around the ethical and privacy challenges and the obligations we have in the government for constitutional rights and the privacy rights of our constituents. Those things have to factor into our decisions,” Crawford explains.
For example, those conversations might focus on whether some tasks must be performed by humans rather than intelligent systems due to law, public policy, best practices or citizen expectations, she says.
“Frequently I’m seeing that as the CIO it’s my role to ask these questions,” Crawford says, adding that she and many other IT leaders within the State of Texas are looking for “leadership at the executive and legislative level to come back to help us with the guardrails as we roll this out.”
Crawford and government entities aren’t the only ones asking about the ethical parameters of AI.
CIOs are talking to executive colleagues about their responsibilities should their AI systems produce biased results or hallucinations, UST’s Prasad says. They’re talking about how to ensure they can trace and explain AI outputs and what should happen if they can’t.
“AI enables you to do all kinds of things, but the question is do you really want to. That’s a topic that requires a conversation with the executive team and even the board,” he says.