CIOs must beware committing ‘AI washing’ themselves
The IT industry has a long history of vendors exaggerating the functionality of their products, and it’s certainly happening in the current AI hype cycle. Now, even user companies appear to be overstating their AI capabilities, potentially leading to major headaches.
IT leaders at organizations considering AI face intense pressure from boards, other executives, and the market itself to roll out major AI initiatives. Confusion about the definition of AI may also lead to unintentional overhyping of AI capabilities, observers say.
With the current AI gold rush, companies may be tempted to exaggerate their AI implementations to lure investors and customers, a practice called “AI washing,” but they should think twice before doing so, says David Shargel, a regulatory compliance lawyer with law firm Bracewell.
US regulatory agencies are watching for exaggerated AI claims: in March, the US Securities and Exchange Commission announced settlements with two investment advisers it had charged with making misleading statements about their use of AI for investment advice. The firms paid a combined $400,000 to settle the charges.
Pressure from above
Some companies may overstate their AI use because they don’t understand what AI encompasses, says Shargel, who co-authored a blog post in January about AI washing. But there appears to be some intentional exaggeration happening as well, he says.
“AI washing is a new phenomenon, but it’s really just a different kind of fraud,” he says. “Companies always commit fraud, and they’ll find new ways to do it, based on new technology.”
Beyond regulatory problems, companies that overstate their AI use could expose themselves to shareholder lawsuits and a loss of customer trust, Shargel adds.
In other words, the call for AI ingenuity at some organizations may turn out to be a siren song.
That’s because there’s heavy pressure on CIOs and other IT leaders to adopt and successfully deploy AI, creating some incentive for exaggeration, says Kjell Carlsson, head of AI strategy at Domino Data Lab, provider of an enterprise AI platform.
“There’s this incredible demand and desire for AI, and when they’re saying AI, they are referring to the mandate for generative AI,” he says. “It’s, ‘We’ve seen the power of OpenAI—tell me how we’re going to be using large language models in order to transform our business.’”
Companies have always followed technology trends and tried to jump on the bandwagon, he says.
“The moment that ChatGPT hit, it was amazing how instantly, mostly the business intelligence vendors, went in and dusted off their chatbots so that they could say, ‘We are an AI-enabled business intelligence center,’” Carlsson adds.
Some BI vendors rebranded their simple chatbots as AI tools, and while they weren’t technically wrong, they were using rules-based systems that don’t rely on generative AI, which is driving the current market hype, he adds.
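To make that distinction concrete, the sketch below (illustrative only, with made-up intents and canned replies) shows how little is going on inside such a rules-based chatbot: deterministic keyword matching, with no model and nothing generated.

```python
# A minimal rules-based "chatbot": deterministic keyword matching against
# canned replies -- no model, no learning, nothing generated.
RULES = {
    ("price", "cost", "pricing"): "Our pricing starts at $20 per user per month.",
    ("refund", "cancel"): "You can cancel any time from the billing page.",
    ("hours", "support"): "Support is available 9am-5pm ET, Monday through Friday.",
}
FALLBACK = "Sorry, I didn't catch that. Please contact support@example.com."

def respond(message: str) -> str:
    """Return the canned reply for the first rule whose keyword appears in the message."""
    text = message.lower()
    for keywords, reply in RULES.items():
        if any(keyword in text for keyword in keywords):
            return reply
    return FALLBACK  # identical input always yields identical output

if __name__ == "__main__":
    print(respond("How much does this cost?"))  # matches the pricing rule
    print(respond("Write me a haiku"))          # falls through: the system cannot generate anything
```

A generative system, by contrast, produces novel text from a model rather than returning one of a fixed set of replies, which is the capability driving the current hype.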
Now, user companies, rather than technology vendors, may be tempted to do the same.
A record 199 S&P 500 companies mentioned AI in their earnings calls covering the first quarter of 2024, according to an analysis by financial data firm FactSet. The five-year average for AI mentions during earnings calls was 80, with mentions rising sharply since the first quarter of 2023, shortly after the debut of ChatGPT, the company wrote in a blog post.
Confusion and inflation
This rush to associate with AI has companies positioning themselves around future revenue growth rather than short-term profits, an attractive stance for investors, says Toby Coulthard, CPO of Jacquard, provider of an AI tool for brand messaging. Confusion about the definition of AI, and whether it includes large language models (LLMs), neural networks, machine learning, or simply a data science application, gives companies “a lot of latitude” when claiming to use AI, he says.
249%: Increase in the number of S&P 500 companies mentioning AI in earnings calls from Q1 2023 to Q1 2024
“Intrinsic motivation to do something to preserve or inflate market capitalization, combined with an under-defined concept, leads to a big grey area on what is appropriate or not,” Coulthard adds. “Until the marketplace really defines AI in a meaningful way, or until investors weigh AI in a more balanced way, I don’t expect it to slow down.”
Lack of use cases
One difficulty for CIOs and other IT leaders pressured to explore AI is that boards and CEOs are excited about gen AI and LLMs, but more traditional AI and machine learning tools can still deliver substantial benefits to many organizations, Domino Data Lab’s Carlsson says.
While gen AI shows major potential, few use cases have bubbled up so far: some are specific to industries such as biopharma, and only a handful apply to most other companies, he says. Intelligent chatbots to handle customer complaints are taking hold, as are virtual assistants with enterprise search features, according to Carlsson.
Few other use cases for gen AI have emerged, he adds.
“When I’m talking to organizations, even ones that have had a couple of success stories, the pipeline of further use cases is kind of slim, and you’re getting to seemingly marginal projects, from a business value point of view, quite quickly,” he says. “One company I was talking to said, ‘We had 500 suggestions for ways we could use generative AI, and it was really five suggestions repeated 100 times.’”
In some cases, companies are using gen AI for things that more traditional AI tools can do, like basic analysis and process automation, and that can create problems for IT leaders, he says. Gen AI can still hallucinate, even when tuned, introducing uncertainty where more traditional tools would deliver consistent results.
“This is a new technology, working with new data, going after new use cases, and that’s challenging,” he says. “You run into the fact that these models just don’t behave like your traditional models. It’s a scary level of uncertainty and risk, and that makes it difficult to use as a rip and replace for existing technologies.”
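As a rough illustration of that trade-off (a hypothetical example; call_llm stands in for whatever model API an organization actually uses, not a specific vendor SDK), the sketch below contrasts a deterministic, rules-based extraction with a generative call whose output has to be validated before it can be trusted.

```python
import re

def extract_total_rules(text: str) -> float | None:
    """Traditional approach: a fixed regex. The same input always gives the
    same output, and failure is explicit (None) rather than a plausible guess."""
    match = re.search(r"Total:\s*\$([\d,]+\.\d{2})", text)
    return float(match.group(1).replace(",", "")) if match else None

def call_llm(prompt: str) -> str:
    """Placeholder for whichever model API an organization uses (assumed interface)."""
    raise NotImplementedError

def extract_total_genai(text: str) -> float | None:
    """Generative approach: can handle formats the regex misses, but the answer
    may be malformed or hallucinated, so it must be validated before use."""
    answer = call_llm(f"Return only the invoice total as a plain number:\n\n{text}")
    try:
        value = float(answer.strip().lstrip("$").replace(",", ""))
    except ValueError:
        return None  # the model did not return a number at all
    # Guard against hallucination: accept the value only if it appears in the source text.
    return value if f"{value:,.2f}" in text or str(value) in text else None
```

The extra validation step is the price of the flexibility: the rules-based path fails loudly and predictably, while the generative path can return a plausible but wrong answer.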
Take care
Bracewell’s Shargel advises companies to be careful about making broad claims about their AI capabilities. To limit regulatory risk, companies should settle on a definition of AI for use internally and in regulatory filings.
CIOs and other executives should also consider reviewing their AI capabilities before making claims, he adds.
“What companies really need to do is think about AI in the same way they think about any other type of disclosure,” Shargel says. “Companies don’t make disclosures about their financial condition without a review or compliance audit. Companies need to treat disclosures about technology and AI in the same way, and they need to have the ability to assess the accuracy.”