9 ways to avoid falling prey to AI washing
Request evidence to back the claims
While buzzwords can be seductive, it pays to ask for evidence. “Asking the right questions and demanding proof of product claims is critically important to peel away the marketing and sales-speak to determine if a product is truly powered by AI,” Ammanath says.
CIOs who evaluate a specific product or service that appears to be AI-powered can ask how the model was trained, what algorithms were used, and how the AI system will adapt to new data.
“You should ask the vendor what libraries or AI models they use,” says Tkachenko. “They may have everything built on just a simple OpenAI API call.”
Matthias Roeser, partner and global leader of technology at management and technology consulting firm BearingPoint, agrees. He adds that components and frameworks should be thoroughly understood, and that the assessment should include “ethics, biases, feasibility, intellectual property, and sustainability.”
This line of inquiry can help CIOs learn the true capabilities and limitations of a product, and decide whether or not it is worth purchasing.
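Tkachenko's warning is easy to make concrete. A hypothetical sketch of how little code an "AI-powered" product may actually contain: the entire backend below is one chat-completion request to OpenAI's public HTTP API (the product name, prompt, and model choice here are illustrative, not taken from any real vendor).

```python
# Hypothetical "AI-powered summarizer" whose whole backend is a single
# OpenAI chat-completion call. Illustrates how thin an "AI product"
# wrapper can be; all names and prompts are made up for this sketch.
import json
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_request(text: str, api_key: str) -> urllib.request.Request:
    """Assemble the one API call the entire 'product' rests on."""
    payload = {
        "model": "gpt-4o-mini",  # assumed model name for illustration
        "messages": [
            {"role": "system", "content": "Summarize the user's text."},
            {"role": "user", "content": text},
        ],
    }
    return urllib.request.Request(
        OPENAI_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

def summarize(text: str, api_key: str) -> str:
    """Send the request and return the model's reply."""
    with urllib.request.urlopen(build_request(text, api_key)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

A vendor whose stack looks like this has no proprietary model, no training data, and no adaptation to new data, which is exactly what the questions above are designed to surface.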
Pay attention to startups
Startups position themselves at the forefront of innovation. However, while many of them push the boundaries of what’s possible in the field of AI, some may simply exaggerate their capabilities to gain attention and money.
“As a CTO of a machine learning company myself, I often encounter cases of AI washing, especially in the startup community,” says Vlad Pranskevičius, co-founder and CTO of Ukrainian-American startup Claid.ai by Let’s Enhance. He has noticed the situation becoming more acute recently, adding that the phenomenon is especially dangerous during hype cycles like the current one, in which AI is perceived as a new gold rush.
Pranskevičius believes, though, that AI washing will be kept in check in the near future as regulations around AI become more stringent.
Build the tech team’s reputation
It’s not uncommon for a company to acquire dubious AI solutions, and in such situations, the CIO may not necessarily be at fault. It could be “a symptom of poor company leadership,” says Welch. “The business falls for marketing hype and overrules the IT team, which is left to pick up the pieces.”
To prevent such situations, organizations need to foster a collaborative culture in which the opinions of tech professionals are valued and their arguments are listened to thoroughly.
At the same time, CIOs and tech teams should build their reputation within the company so their opinion is more easily incorporated into decision-making processes. To achieve that, they should demonstrate expertise, professionalism, and soft skills.
“I don’t feel there’s a problem with detecting AI washing for the CIO,” says Max Kovtun, chief innovation officer at Sigma Software Group. “The bigger problem might be the push from business stakeholders or entrepreneurs to use AI in any form because they want to look innovative and cutting edge. So the right question would be how not to become an AI washer under the pressure of entrepreneurship.”
Go beyond the buzzwords
When comparing products and services, it’s essential to evaluate them with an open mind, looking at their attributes thoroughly.
“If the only advantage a product or service has for you is AI, you should think carefully before subscribing,” Tkachenko says. “It’s better to study its value proposition and features and only start cooperation when you understand the program’s benefits beyond AI.”
Welch agrees: “Am I going to buy a system because they wrote it in C, C++, or Java?” he asks. “I might want to understand that as part of my due diligence on whether they’re going to be able to maintain the code, company viability, and things like that.”
A thorough evaluation can help organizations determine whether the product or service they plan to purchase aligns with their objectives and can deliver the expected results.
“The more complex the technology, the harder it is for non-specialists to understand it to the extent it enables you to verify that the application of that technology is correct and makes sense,” Kovtun says. “If you’ve decided to utilize AI tech for your company, you better onboard knowledgeable specialists with experience in the AI domain. Otherwise, your efforts might not result in the benefits you expect to receive.”
Being up to date on AI-related products and the issues surrounding them can help CIOs make informed decisions as well. This way, they can identify potential mistakes they could make and, at the same time, leverage new ideas and technologies.
“I don’t think there’s enough education yet,” says Art Thompson, CIO at the City of Detroit.
He recommends CIOs do enough research to avoid falling into a trap with new or experimental technology that promises more than it can deliver. If that happens, “the amount of time to rebid and sort out replacing a product can really harm staff from being able to get behind any change,” he says. “Not to mention the difficulty in people investing time to learn new technologies.”
In addition, being informed on the latest AI-related matters can help CIOs anticipate regulatory changes and emerging industry standards, which can help them be compliant and maintain a competitive edge.
And it’s more than just the CIO who needs to stay up to date. “Educate your team or hire experts to add the relevant capabilities to your portfolio,” says BearingPoint’s Roeser.
Additional regulatory action around AI
New regulations on the way could simplify the task of CIOs seeking to determine whether a product or service employs real AI technology. The White House recently issued the Blueprint for an AI Bill of Rights, with guidelines for designing AI systems responsibly, and more regulations may follow in the coming years.
“The premise behind these actions is to protect consumer rights and humans from potential harm from technology,” Ammanath says. “We need to anticipate the potential negative impacts of technology in order to mitigate risks.”
Ethics shouldn’t be an afterthought
Corporations tend to influence the discourse on new technology, highlighting the potential benefits while often downplaying the potential negative consequences.
“When a technology becomes a buzzword, we tend to lose focus on the potentially harmful impacts it can have in society,” says Philip Di Salvo, a post-doctoral researcher at the University of St. Gallen in Switzerland. “Research shows that corporations are driving the discourse around AI, and that techno-deterministic arguments are still dominant.”
This belief that tech is the main driving force behind social and cultural change can obscure discussions around ethical and political implications in favor of more marketing-oriented arguments. As Di Salvo puts it, this creates “a form of argumentative fog that makes these technologies and their producers even more obscure and non-accountable.”
To address this, he says, a crucial challenge is communicating to the public what AI actually isn’t and what it can’t do.
“Most AI applications we see today — including ChatGPT — are basically constructed around the application of statistics and data analysis at scale,” says Di Salvo. “This may sound like a boring definition, but it helps to avoid any misrepresentation of what ‘intelligent’ refers to in the ‘artificial intelligence’ definition. We need to focus on real problems such as biases, social sorting, and other issues, not hypothetical, speculative longtermist scenarios.”