- "기밀 VM의 빈틈을 메운다" 마이크로소프트의 오픈소스 파라바이저 '오픈HCL'란?
- The best early Black Friday AirPods deals: Shop early deals
- The 19 best Black Friday headphone deals 2024: Early sales live now
- I tested the iPad Mini 7 for a week, and its the ultraportable tablet to beat at $100 off
- The best Black Friday deals 2024: Early sales live now
AI for Decision-Makers: How to Win Trust from the Outset
In business, data science and artificial intelligence are usually geared towards efficiency and growth, while user trust is often overlooked. This can quickly become a major problem, particularly when AI is introduced to support strategic choices.
Data science and AI teams focus constantly on methodology and accuracy. This is critical to ensuring algorithms deliver valuable insights and analytics, and support increased automation.
Nevertheless, most organizations face growing problems around users’ trust in algorithms. On the one hand, the quality of automated analysis is not clearly understood, and on the other, there is a perceived threat of machines making people’s own expertise redundant. This has become a particular difficulty in a crucial area of AI: decision support.
“The moment that models start guiding strategic decisions, there is a shift in requirements,” explains René Traue, senior data scientist at the market intelligence and consultancy firm GfK. “Users must be able to deeply trust the applications. They have to find them indispensable when making major choices. If not, they can end up walking away from them.”
Building confidence
In order to overcome this issue, the applications running AI algorithms must be designed to build confidence in the outcomes. “Think of a decision support system as being like an assisted driving car. That car might automatically brake if you get too close to the car in front, or correct the steering if you drift out of your lane. However, many people would not be happy to go straight into trusting the automation to take control in this way: first they need to gain confidence in the quality of the support system,” Traue explains.
Carmakers have acted by adding warnings when their cars are about to self-brake, or ensuring drivers keep ultimate control through the steering wheel when any correction is being made.
“Drivers can then increasingly trust the car to make the right decisions. They can stop instinctively ‘fighting it’ and allow the automation to work,” Traue says. “It’s the same idea in business. Decision support must be applied in a very transparent way, allowing the user to keep a key level of control at first, while the system proves itself to be consistently good and helpful.” There is an additional key requirement: company strategists expect to receive clear evidence from the system to back up any actions advised.
Respecting limits
GfK’s own decision support system, gfknewron, informs decisions in contexts including forecasting sales, setting prices, making brand decisions, and scenario testing, to name just a few. “We remain acutely aware of the importance of getting our solutions right, so we are completely focused on what works and what the limitations are,” Traue explains. This includes ensuring any analytical conclusions are not only built on extensive data, but also run through a rigorous quality assurance process. GfK’s system examines all results, and flags or even suppresses any that have possible quality problems – allowing GfK’s human experts to review and accept or correct, as necessary. This is a critical area of investment, to avoid any risk of sending out potentially misleading guidance.
“gfknewron is designed so that people can understand the rationale for the recommendations it gives them. We constantly assess the algorithms, using not only our data scientists but also – and increasingly – our specialist MLOps analysts, who continuously monitor the validity and accuracy of our models,” he says. “We want to help decision makers trust the utter reliability of gfknewron in accelerating good choices and freeing up their time.” In addition, the company encourages radically transparent feedback from users.
Eliminating complexity
Just as one negative experience can make people avoid an AI-powered decision support system altogether, a beneficial experience tends to result in increased trust. There is enormous potential for AI to support more and more decision areas, when users see it working well. Traue concludes: “The world is becoming so complex. Tech and consumer brands may be managing multiple products, distribution channels, promotion campaigns, and marketing channels at any one time. When decision-makers have trustworthy AI to cut through this complexity and data, they can focus their time on identifying the best option from the recommendations, to develop a competitive advantage in the market.”
To find out more about gfknewron, visit www.gfk.com/products/gfknewron