Successful AI Implementations Hinge on Trust
Business spending on artificial intelligence (AI) technologies is growing by leaps and bounds as organizations strive to improve efficiency, simplify and automate processes, and build more proactive capabilities. Global spending on AI-centric systems is on track to surpass $300 billion in 2026, with the U.S. accounting for more than 50% of the total, according to IDC forecasts. But the success of these investments may ultimately hinge on trust.
A National Institute of Standards and Technology (NIST) study points out that “determining that the AI system is trustworthy because it meets its system requirements won’t ensure widespread adoption of AI. It is the user, the human affected by the AI, who ultimately places their trust in the system.”
We polled the CIO Experts Network of IT professionals and industry analysts for insights into what is needed to create trust in AI-based systems. Many of them stressed the need for transparency and communication, as well as understanding user fears about the potential for intrusive technology.
“The not-so-simple solution for building trust in AI is transparency,” says Peter Nichol (@PeterBNichol), Chief Technology Officer at OROCA Innovations. “Consumers don’t trust decisions based on variables they can’t see and calculations they don’t understand. So, we must shine a light inside the ‘black box’ of AI. Coupling time-tested open-source technology with trusted industry partners is an excellent step towards transparency, whether a company is looking at firewalls, SDN orchestration, or cloud services driven by AI.”
Ramprakash Ramamoorthy (@ramprakashr), Director of AI Research at Zoho Corporation, contends that staffers and customers both need the means to understand how the AI model arrives at its conclusions and recommendations. “Most AI models are just black-box models today, but it’s becoming increasingly important to explain why a particular decision has been arrived at, given AI is being deployed in mission-critical systems like credit assessment, recruitment screening, etc. Explainable AI is the key to building trust.”
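To make the explainability idea concrete, here is a minimal sketch of one common, model-agnostic approach: permutation feature importance, which reports how much a model's accuracy degrades when each input is scrambled. The "credit assessment" feature names and the synthetic data are illustrative assumptions, not a reference to any vendor's actual model.

```python
# Minimal sketch: surfacing which inputs drive a model's decisions,
# using scikit-learn's permutation importance on a synthetic
# "credit assessment"-style dataset (feature names are illustrative).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["income", "debt_ratio", "payment_history", "account_age"]
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does held-out accuracy drop when each
# feature is shuffled? A simple explanation that can be shared with the
# people affected by the decision.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Ranked outputs like these give staff and customers a starting point for asking why a decision came out the way it did, which is the trust-building behavior Ramamoorthy describes.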
Overcoming fears
It’s also important for planners and decision makers to understand the fears and concerns of customers and employees.
“A lack of trust in AI seems to come from both ends of the knowledge spectrum: those with little or no knowledge of AI fear it, as do those with an in-depth knowledge of its current and potential capabilities,” says Emily Gray-Fow (@Emily_Gray_Fow), a B2B tech and engineering content writer. “Thus, organizations need to make sure that everyone has a basic grounding in what AI is and what it can and can’t do. This must be coupled with clear-cut boundaries and rules as to what it will be allowed to do [versus not].”
Several influencers stressed the need to educate employees on strategies and benefits of AI implementations:
“Building trust in AI is a crucial step towards successful digital transformation. Companies should invest in building trust in AI internally by educating and training staff and management; developing an understanding of how predictions and models work and how these can benefit and empower employees and/or processes; pointing out what the purpose of AI insight is and where human logic and execution come in; and finally, by engaging teams in the process and collecting regular feedback.”
— Elitsa Krumova (@Eli_Krumova), a global thought leader and tech influencer
“When the entire team clearly understands how AI will ultimately benefit their organization, there is a much better chance that trust can be built. IT leaders must provide a level of transparency so all team members understand the AI’s role and how it can be trusted. When employees understand their respective roles and how AI can bolster their results, they become further engaged and part of the solution.”
— Scott Schober (@ScottBVS), President/CEO at Berkeley Varitronics Systems, Inc.
“Employees want to review supporting information and know when the AI has sufficient context to make an automated decision. They want controls and AI-augmented decision-making tools so they can act responsively to address outlying conditions or rapid spikes in demand. Leaders should debunk myths and set realistic expectations on where AI helps the business and where employees will benefit from AI-enabled workflows.”
— Isaac Sacolick (@nyike), President of StarCIO and author of Digital Trailblazer
There are also generational differences to consider. “Like many technology trends, older stakeholders tend to be more cautious about advanced technologies than younger generations; this is especially the case for AI,” says Frank Cutitta (@fcutitta), CEO & Founder of HealthTech Decisions Lab. “For example, mature physicians feel that intuition can trump technologies like AI. On the other hand, millennial physicians tend to embrace AI in greater proportions.”
Kieran Gilmurray (@KieranGilmurray), CEO at Digital Automation and Robotics Limited, says that building trust requires organizations to do a variety of things very well. “They must provide context and transparency around their AI models, gain workforce buy-in through education, provide the necessary skills to guide people to make the right decisions using AI, and construct a robust governance framework to effectively scale their organization’s use of AI. Each individual action builds on top of the other to help gain and retain trust in AI.”
How to move forward
Setting reasonable expectations and taking a measured approach to implementation are also important. “Be realistic about what you hope to achieve, know what it can and can’t do, and stagger the time between the stages of rollout,” advises Nicki Doble, Chief Transformation Officer with AIA Philippines. “Start with using it to baseline your networks comprehensively. No one is going to trust a system that gives too many false positives, so take your time.”
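As a rough illustration of the "baseline first, mind the false positives" advice, the sketch below learns what normal traffic looks like before flagging anything. The feature columns, numbers, and threshold are illustrative assumptions, not a description of AIA's environment or any specific product.

```python
# Minimal sketch of baselining: fit an anomaly detector on known-good
# network metrics only, then flag deviations. The low contamination
# setting keeps the detector conservative to limit false positives.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Pretend each row is [bytes_per_min, connections_per_min, failed_logins].
normal_traffic = rng.normal(loc=[500, 40, 1], scale=[50, 5, 1], size=(5000, 3))

# Establish the baseline from normal traffic.
baseline = IsolationForest(contamination=0.01, random_state=0)
baseline.fit(normal_traffic)

# Score new observations; negative scores are treated as anomalous.
new_traffic = np.array([[520, 42, 0],      # looks normal
                        [5000, 400, 30]])  # clear spike
scores = baseline.decision_function(new_traffic)
for row, score in zip(new_traffic, scores):
    verdict = "alert" if score < 0 else "ok"
    print(row, f"score={score:.3f}", verdict)
```

Tuning parameters like the contamination rate against a trusted baseline, rather than alerting on every deviation, is one practical way to avoid the flood of false positives that erodes user trust.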
Start small, urges Will Kelly (@willkelly), a content and product marketing manager focused on the cloud and DevOps. “Start with small AI projects to build trust across your organization. Use these small projects as live internal demos to help set appropriate expectations for what AI is and isn’t for your organization,” he says. Kelly also advocates for finding internal champions “who can use those small projects as proofs of concept” – this helps explain the benefits in business terms that all stakeholders can understand.
Building trust in AI is based on fundamentals, says David Guzman (@drguzman), President at DrGuzman, LLC. “Organizations require an AI governance strategy and protocols. AI needs to be embedded in every aspect of the technology base, from networks to systems monitoring to application stacks to business intelligence platforms. AI cannot be a one-off implementation. It needs to permeate every aspect of the technology landscape.”
Are you ready to move forward with implementation of AI-based systems? For more insights that may illuminate your choices, visit.
Copyright © 2022 IDG Communications, Inc.