A CIO primer on addressing perceived AI risks
Ask your average schmo what the biggest risks of artificial intelligence are, and their answers will likely include: (1) AI will make us humans obsolete; (2) Skynet will become real, making us humans extinct; and maybe (3) deepfake authoring tools will be used by bad people to do bad things.
Ask your average CEO what the biggest risks of artificial intelligence are and they’ll more likely talk about missed opportunities — of AI-based business capabilities competitors are able to deploy sooner than they can.
As CIO you need to anticipate not only actual AI risks but perceived ones as well. Here’s how to go about it.
Risks perceived by an average schmo
1. Will AI make humans obsolete? Answer: This isn’t a risk; it’s a choice. Personal computers, then the internet, and then smartphones all led to opportunities for computer-augmented humanity. AI can do the same. Business leaders can focus on building a stronger, more competitive business by using AI capabilities to augment and empower their employees.
They can, and some will. Others will use AI to automate tasks currently performed by the humans they employ.
Or, more likely, they’ll do both. Neither will be better in an absolute sense. But they will be different. As CIO you’ll have to help communicate the company’s intentions, whether AI is used for employee augmentation or replacement.
2. Skynet. This, the most chilling of the possible AI futures, is, as it happens, the least likely. It’s the least likely, not because killer robots aren’t possible, but because a volitional AI would have no reason to produce and deploy them.
In nature, organisms that hunt and kill other organisms are either predators that want food or competitors for the same resources. Other than those of our fellow humans who hunt for sport, it’s rare for members of one species to harm members of another just for the heck of it.
Except for electricity and semiconductors, it’s doubtful we and a volitional AI would find ourselves competing for resources intensely enough for the killer robot scenario to become a problem for us.
That’s especially true because an AI that is competing with us for electricity and semiconductors would be unlikely to squander those very resources on building killer robots.
3. Deepfakes. Yes, deepfakes are a problem and, as the pointy end of the war-on-reality spear, they’re a problem that will only get worse. Especially worrisome is the false sense of security purveyors of here’s-how-to-spot-deepfakes guidance provide (for example, this). Such guidance is worrisome because, to the extent its techniques work, it’s an instruction manual for producing harder-to-detect deepfakes. It also contributes to a Lewis Carroll-esque “red queen” scenario: deepfake-creation AIs and deepfake-detection AIs will have to improve faster and faster just to stay in one place with respect to each other.
And so, just as malware countermeasures evolved from standalone antivirus measures to cybersecurity as a whole industry, we can expect a similar trajectory for deepfake countermeasures as the war on reality heats up.
AI risks as perceived by the CEO
CEOs who don’t want to quickly become former CEOs expend quite a lot of their time and attention on some form of “TOWS” analysis (threats, opportunities, weaknesses, and strengths).
As CIO, one of your most important responsibilities has, for quite some time, been to help drive business strategy by connecting the dots, from IT-based capabilities to business opportunities (if your business exploits them first) or threats (if a competitor exploits them first).
That was the case before the current wave of AI enthusiasm washed over the IT industry. It’s what “digital” was all about and is even more the case now.
Add AI to the mix and CIOs have another layer of responsibility, namely, how to integrate its new capabilities into the business as a whole.
The silent AI-based threat: Artificial human frailties
There’s one more class of risk to worry about, one that receives little attention. Call it “artificial human frailties.”
Start with Daniel Kahneman’s Thinking, Fast and Slow. In it, Kahneman identifies two ways we go about thinking. When we think fast, we use the cerebral circuitry that lets us identify each other at a glance, with no delay and little effort. Fast thinking is also what we do when we “trust our guts.”
When we think slow, we use the circuitry that lets us multiply 17 by 53 — a process that takes considerable concentration, time, and mental effort.
In AI terms, thinking slow is what expert systems, and for that matter, old-fashioned computer programming, do. Thinking fast is where all the excitement is in AI. It’s what neural networks do.
In its current state of development, AI’s form of thinking fast is also what’s prone to the same cognitive errors as trusting our guts. For example:
Inferring causation from correlation: We all know we aren’t supposed to do this. And yet, it’s awfully hard to stop ourselves from inferring causality when all we have as evidence is juxtaposition.
As it happens, a whole lot of what’s called AI these days consists of machine learning on the part of neural networks, whose learning consists of inferring causation from correlation.
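A toy simulation makes the trap concrete. In this sketch (the scenario and all numbers are hypothetical, chosen only for illustration), hot weather drives both ice-cream sales and drowning incidents. Neither causes the other, yet the two series correlate strongly, which is exactly the kind of juxtaposition a pattern-matching system will happily pick up:

```python
import random

random.seed(0)

# Hypothetical confounder: temperature drives both series.
temps = [random.gauss(25, 8) for _ in range(1000)]
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temps]   # sales
drownings = [0.5 * t + random.gauss(0, 3) for t in temps]   # incidents

def pearson(xs, ys):
    """Pearson correlation coefficient, computed by hand."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Strong correlation between two variables with no causal link:
# a model trained on this data "learns" the juxtaposition,
# not the underlying cause (temperature).
r = pearson(ice_cream, drownings)
```

The correlation comes out well above 0.5, even though removing the shared driver (temperature) would make it vanish. A machine-learning model given only the two observed series has no way to tell the difference.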
Regression to the mean: You watch The Great British Baking Show. You notice that whoever wins the Star Baker award in one episode tends to bake more poorly in the next episode. It’s the Curse of the Star Baker.
Only it isn’t a curse. It’s just randomness in action. Each baker’s performance falls on a bell curve. Winning Star Baker means performing at the curve’s upper tail. The next time they bake, they’re most likely to perform near the mean, simply because the mean is where any baker is most likely to perform on any given bake.
And yet, we infer causation — the Curse!
There’s no reason to expect a machine-learning AI to be immune from this fallacy. Quite the opposite. Faced with performance data from a random process, we should expect an AI to predict improvement following each poor outcome.
And then to conclude a causal relationship is at work.
Failure to ‘show your work’: Well, not your work; the AI’s work. There’s active research into developing what’s called “explainable AI.” And it’s needed.
Imagine you assign a human staff member to assess a possible business opportunity and recommend a course of action to you. They do, and you ask, “Why do you think so?” Any competent employee expects the question and is ready to answer.
Until “Explainable AI” is a feature and not a wish-list item, AIs are, in this respect, less competent than the employees many businesses want them to replace — they can’t explain their thinking.
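What “showing your work” looks like in practice can be sketched with a deliberately transparent model. In this illustration (the feature names and weights are entirely hypothetical), a linear scorer decomposes its recommendation into per-factor contributions, which is precisely the answer to “Why do you think so?” that a black-box neural network can’t readily give:

```python
# Hypothetical opportunity-scoring weights, for illustration only.
WEIGHTS = {"market_size": 0.6, "competition": -0.3, "fit_with_strategy": 0.5}

def score_opportunity(features):
    """Return an overall score plus each factor's contribution,
    so the recommendation can explain itself."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    return sum(contributions.values()), contributions

total, why = score_opportunity(
    {"market_size": 0.8, "competition": 0.4, "fit_with_strategy": 0.9}
)
# 'why' itemizes the reasoning factor by factor; 'total' is the
# bottom-line recommendation.
```

A model this simple trades predictive power for transparency. Explainable-AI research aims to get the second without giving up the first.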
The phrase to ignore
You’ve undoubtedly heard someone claim, in the context of AI, that “Computers will never x,” where x is something the most proficient humans are good at.
They’re wrong. It’s been a popular assertion since I first started in this business, and it’s been clear ever since that no matter which x you choose, computers will be able to do whatever it is, and do it better than we can.
The only question is how long we’ll all have to wait for the future to get here.
Artificial Intelligence, IT Leadership, Risk Management