10 ways to prevent shadow AI disaster
For example, Chandrasekaran says, sensitive data could easily be exposed, and proprietary data could help an AI model (particularly an open-source one) get smarter, thereby aiding competitors who use the same model.
At the same time, many workers lack the skills required to use AI effectively, further raising the risk level. They may not be skilled enough to feed the model the right data, prompt it with inputs that produce optimal outputs, or verify the accuracy of what it generates. For example, workers can use generative AI to create computer code, but they can't effectively check that code for problems if they don't understand its syntax or logic. "That could be quite detrimental," Chandrasekaran says.
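To see why unverified AI-generated code is risky, consider a hypothetical sketch: the snippet below runs without errors and looks plausible at a glance, yet it silently drops data. The function name and scenario are illustrative, not from the article; they stand in for the kind of output a worker might paste from an AI assistant without review.

```python
# Hypothetical example: plausible-looking AI-generated code with a subtle flaw.
# A worker who can't read Python syntax or logic would likely miss the bug.

def average_discounted_price(prices, discounts):
    """Return the average discounted price for a list of items."""
    total = 0.0
    for price, discount in zip(prices, discounts):
        total += price * (1 - discount)
    # Bug: zip() silently stops at the shorter list, but the division
    # still uses len(prices), so missing discounts quietly skew the result
    # instead of raising an error.
    return total / len(prices)

# Runs cleanly and looks reasonable, but the third item is dropped:
print(average_discounted_price([100, 200, 300], [0.10, 0.20]))  # 83.33..., not the true average
```

Nothing crashes and no warning appears, which is precisely the problem: only someone who understands the logic would notice that the output is wrong.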
Meanwhile, shadow AI could disrupt the workforce, he says, as workers who surreptitiously use AI could gain an unfair advantage over colleagues who have not adopted such tools. "It is not a dominant trend yet, but it is a concern we hear in our discussions [with organizational leaders]," Chandrasekaran says.