Why You Need the Ability to Explain AI
Trust is a critical factor in most aspects of life, and this is especially true with complex technologies like artificial intelligence (AI). Put simply, everyday users need to trust that these systems will work as intended.
“AI is so complicated that it can be difficult for operators and users to have confidence that the system will do what it’s supposed to do,” said Andrew Burt, managing partner at BNH.AI.
Without trust, individuals will remain uncertain, doubtful, and possibly even fearful of AI solutions, and those concerns can seep into implementations.
Explaining the how and why
“The capacity to reveal or deduce the ‘why’ and the ‘how’ is pivotal for the trust, adoption, and evolution of AI technologies,” said Bob Friday, Chief AI Officer and CTO of Enterprise Business at Juniper Networks. “Like hiring a new employee, a new AI assistant must earn trust and get progressively better at its job while humans teach it.”
So, how do you explain AI?
Start by educating yourself. There are plenty of guidance tools available, but as a primer, start with this series of videos and blogs. They not only help define AI technologies but also explain the business applications and use cases for these solutions.
Next, be sure you can explain the benefits that users will gain from AI. For example, AI technologies can reduce the need for manual, repetitive tasks such as scanning code for vulnerabilities. These responsibilities can be draining for IT and network teams, who would rather spend their time on interesting or impactful projects.
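To make that benefit concrete, here is a minimal sketch of how a repetitive task like vulnerability scanning might be automated. It assumes the open-source Bandit scanner is installed and that `src/` is the directory to check; both are illustrative choices, not tools named in this article.

```python
import json
import subprocess

def scan_for_vulnerabilities(path: str) -> list[dict]:
    """Run the Bandit static analyzer over a source tree and
    return its findings as a list of dictionaries."""
    result = subprocess.run(
        ["bandit", "-r", path, "-f", "json"],
        capture_output=True,
        text=True,
    )
    # Bandit exits non-zero when it finds issues, so inspect the
    # JSON report rather than the return code.
    report = json.loads(result.stdout)
    return report.get("results", [])

# Example: surface only high-severity findings for human review.
for issue in scan_for_vulnerabilities("src/"):
    if issue["issue_severity"] == "HIGH":
        print(issue["filename"], issue["issue_text"])
```

Automating the scan itself while routing only the serious findings to people is exactly the kind of division of labor that frees IT and network teams for higher-value work.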
At the same time, it’s important to explain that humans are required in the AI decision-making loop. They can ensure the system’s accountability and help interpret and apply the insights that AI delivers.
“The relationship between human and machine agents continues to grow in importance and revolves around the topic of trust and its relationship to transparency and explainability,” Friday said.
Additional trust considerations
Developing AI trust takes time. In addition to focusing on explainability, Friday recommended that IT leaders do their due diligence before deploying AI solutions. Ask questions such as:
- What are the algorithms that contribute to the solution?
- What data is ingested and how is it cleaned?
- Can the system itself explain its reasoning, recommendations, or actions? (A minimal sketch of this kind of check follows this list.)
- How does the solution improve and evolve automatically?
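One practical way to probe the third question is to ask a model which inputs drive its predictions. The sketch below uses scikit-learn’s permutation importance on a toy classifier; the public dataset and random-forest model are stand-ins for illustration, not anything specific to the solutions discussed here.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a simple classifier on a public dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance shuffles each feature and measures how much
# accuracy drops, giving a rough answer to "why did the model decide?"
scores = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
ranked = sorted(
    zip(X.columns, scores.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Even a rough ranking like this gives operators something concrete to verify against domain knowledge, which is the heart of explainability.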
Burt from BNH.AI also suggested incorporating controls that bring IT teams into the AI deployment process and increase the likelihood that the solution does what it’s supposed to do.
For example, incorporate appeal and override functionality to create a feedback loop, Burt said. “Make sure users can flag when things go wrong, and operators can override any decisions that might create potential incidents.”
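Here is what such a feedback loop might look like in practice: a sketch that wraps each AI decision with a confidence threshold, a user appeal flag, and an operator override. The threshold value, action names, and review queue are illustrative assumptions, not part of Burt’s description.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Decision:
    action: str
    confidence: float
    flagged: bool = False            # set when a user appeals the decision
    override: Optional[str] = None   # set when an operator intervenes

@dataclass
class ReviewQueue:
    """Collects decisions that need a human in the loop."""
    pending: list = field(default_factory=list)

    def submit(self, decision: Decision, threshold: float = 0.9) -> Decision:
        # Low-confidence or user-flagged decisions go to an operator.
        if decision.confidence < threshold or decision.flagged:
            self.pending.append(decision)
        return decision

    def operator_override(self, decision: Decision, new_action: str) -> None:
        # Operators can replace any decision before it takes effect.
        decision.override = new_action

queue = ReviewQueue()
d = queue.submit(Decision(action="block_traffic", confidence=0.72))
queue.operator_override(d, "allow_traffic")  # the human has the final say
print(d.override or d.action)                # -> allow_traffic
```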
Another control is standardization. Documentation across data science teams is typically quite fragmented. Standardizing how AI systems are documented can help reduce risks of errors, as well as build AI trustworthiness, Burt said.
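A lightweight way to standardize that documentation is to require the same structured record, often called a model card, for every deployed system. The fields below are a hypothetical minimum set, not a standard Burt prescribes.

```python
from dataclasses import asdict, dataclass
import json

@dataclass
class ModelCard:
    """A uniform documentation record for every AI system a team ships."""
    name: str
    owner: str
    training_data: str      # where the data came from and how it was cleaned
    intended_use: str
    known_limitations: str
    last_reviewed: str      # ISO date of the most recent human review

card = ModelCard(
    name="anomaly-detector-v2",
    owner="network-ops",
    training_data="90 days of NetFlow logs, deduplicated and anonymized",
    intended_use="Flag unusual traffic patterns for operator review",
    known_limitations="Not validated on IPv6-only segments",
    last_reviewed="2023-06-01",
)
print(json.dumps(asdict(card), indent=2))
```

Because every team fills in the same fields, reviewers can compare systems at a glance, which reduces the fragmentation Burt describes.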
Lean on experts
Finally, seek guidance from experts. For example, Juniper has developed its AI solutions around core principles that help build trust. The company also offers extensive resources, including blogs, support, and training materials.
“Our ongoing innovations in AI will make your teams’, users’, and customers’ lives easier,” Friday said. “And explainable AI helps you start your AI adoption journey.”
Explore what Mist AI can do – watch a demo, take a tour of the platform in action, or listen to a webinar.