6 generative AI hazards IT leaders should avoid
The key is not to expect a result you can use straight away, and to stay alert to the ways generative AI can be usefully wrong. Treat it as a brainstorming discussion that stimulates new ideas rather than something that will produce the perfect idea for you, fully baked.
That’s why Microsoft has adopted Copilot rather than autopilot for most of its generative AI tools. “It’s about putting humans in the loop and designing it in such a way that the human is always in control with a copilot that’s powerful and helping them with every task,” CEO Satya Nadella said at the Inspire conference this summer. Learning to experiment with prompts to get better results is a key part of adopting generative AI, so tools like Copilot Lab can help employees gain these skills.
Similarly, rather than attempting to automate processes, create workflows for your own generative AI tools that encourage staff to experiment and evaluate what the AI produces. Remember to account for what information the human reviewing the AI suggestions will have about the situation — and what incentive they have to vet the results and check any cited sources, rather than just save time by accepting the first option they’re given without making sure it’s accurate and appropriate.
Users need to understand the suggestions and decisions they accept from generative AI well enough to know what the consequences could be and justify them to someone else. “If your organization doesn’t explain AI-assisted decisions, it could face regulatory action, reputational damage and disengagement by the public,” warns the UK’s Information Commissioner’s Office.
Offering multiple alternatives every time and showing how to interpret suggestions can help, as well as using prompts that instruct an LLM to explain why it’s giving a particular response. And in addition to having generative AI cite the sources of key information, consider ways to highlight elements that are important to double check, like dates, statistics, policies, or precedents that are being relied on.
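To make that concrete, here is a minimal sketch of a prompt built along those lines, assuming the OpenAI Python client; the model name, prompt wording, and function are illustrative placeholders rather than a recommended implementation.

```python
# Minimal sketch of a review-friendly prompt: ask for alternatives, reasons,
# sources, and items to double-check. Assumes the OpenAI Python client
# (openai >= 1.0); the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REVIEW_PROMPT = """Draft three alternative answers to the question below.
For each alternative:
- explain in one sentence why you are suggesting it,
- cite the source document or passage it is based on,
- flag any dates, statistics, policies, or precedents the reviewer should
  double-check before using the answer.
Question: {question}
"""

def draft_for_review(question: str) -> str:
    """Return alternatives formatted for a human reviewer, not a final answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": REVIEW_PROMPT.format(question=question)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_for_review("What is our policy on refunds for late deliveries?"))
```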
But ultimately, this is about building a culture where generative AI is seen as a useful tool that still needs to be verified, not a replacement for human creativity or judgement.
“Generative AI or any other form of AI should be used to augment human decision-making, not replace it in contexts where its limitations could cause harm,” Daga points out. “Human reviewers should be trained to critically assess AI output, not just accept it at face value.”
Alongside a process that includes human review and encourages experimentation and thorough evaluation of AI suggestions, guardrails need to be put in place to stop tasks from being fully automated when that’s not appropriate. “For instance, AI might generate company press briefings, but only a human editor can approve the sharing of content with selected journalists and publications,” he adds.
Generative AI can certainly make developers more productive, too, whether exploring a new code base, filling in boilerplate code, autocompleting functions, or generating unit tests. You can take advantage of that extra productivity but still decide code won’t be released into a production environment without human review.
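One way to encode that decision is a simple release gate. The sketch below is illustrative only: the metadata fields and approval threshold are hypothetical stand-ins for whatever your code review or CI system actually records.

```python
# Minimal sketch of a release gate: AI-assisted changes need a recorded human
# approval before they can be promoted. The metadata structure is hypothetical
# and would normally come from your code review or CI tooling.
from dataclasses import dataclass

@dataclass
class ChangeMetadata:
    change_id: str
    ai_assisted: bool     # e.g. flagged by the author or by tooling
    human_approvals: int  # approvals from reviewers other than the author

def can_release(change: ChangeMetadata, required_approvals: int = 1) -> bool:
    """AI-assisted code is only releasable after the required human reviews."""
    if change.ai_assisted and change.human_approvals < required_approvals:
        return False
    return True

print(can_release(ChangeMetadata("PR-123", ai_assisted=True, human_approvals=0)))  # False
print(can_release(ChangeMetadata("PR-124", ai_assisted=True, human_approvals=1)))  # True
```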
Businesses are accountable for the consequences of their choices, and that includes deploying AI in inappropriate areas, says Andi Mann, global CTO and founder of Colorado-based consultancy Sageable. “The customer will not let you off the hook for a data breach just because, ‘It was our AI’s fault.’”
Hide the AI
It’s crucial to ensure responsible use of the system, whether that’s by employees or customers, and transparency is a big part of that. An embarrassing number of publications use AI-generated content that’s easy to spot because of its poor quality, but you should be clear about when even good-quality content is being produced by an AI system, whether it’s an internal meeting summary, marketing message, or chatbot response. And provide an ‘off-ramp’ for automated systems like chatbots, so users can escalate their question to a human.
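As an illustration, an off-ramp can be as simple as checking for an escalation request (or low model confidence) before sending an AI-generated reply. Everything in this sketch, from the trigger phrases to the confidence threshold, is a placeholder.

```python
# Minimal sketch of a chatbot 'off-ramp': disclose that the answer is
# AI-generated and hand the conversation to a human when the user asks for one
# or when the model's confidence is low. All values here are illustrative.
ESCALATION_PHRASES = ("human", "agent", "representative", "speak to a person")

def handle_message(user_message: str, ai_reply: str, ai_confidence: float) -> str:
    wants_human = any(p in user_message.lower() for p in ESCALATION_PHRASES)
    if wants_human or ai_confidence < 0.5:
        # Route to a human queue instead of sending the AI reply.
        return "I'm connecting you with a member of our support team now."
    # Be transparent that the reply is AI-generated.
    return (f"{ai_reply}\n\n(This answer was generated by an AI assistant. "
            "Reply 'agent' to reach a person.)")

print(handle_message("Can I speak to a person?", "Our returns window is 30 days.", 0.9))
```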
“Customers should have the option to opt out of interactions with generative AI, particularly in sensitive areas,” says Daga.
Assume AI can solve every problem
As generative AI usage increases in business, so does the awareness that people need to use their own judgment on what the AI suggests. That’s what eight out of 10 IT staff said in last year’s State of DevOps Automation Report, and the figure rose to just over 90% in the 2023 State of DevOps Automation and AI study.
That caution is justified, says Mann, especially where the domain-specific training data needed to generate predictable, desirable, and verifiable outputs is limited. IT operations is one such area, where insufficient training data makes inaccurate results more likely.
“GenAI will be less meaningful for any use case dealing with novel problems and unknown causes with missing or undocumented knowledge,” he warns. “Training an LLM is impossible if undisclosed human tribal knowledge is your only potential input.”
He does see opportunities, though, to use GenAI as a sidekick. “It can be an advisor or active expert by training an engine to learn what ‘known good’ IT operations look like across defined disciplines and knowledge stores, and recognize known problems, diagnose known causes, identify known inefficiencies, and respond with known remediations,” he says. But while some IT problems that may seem new can be tackled with familiar processes and solutions, it won’t be clear in advance which those are.
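A rough sketch of that ‘known good’ approach: match incoming alerts against documented problem signatures and respond with the documented remediation, escalating anything unrecognized to a human. The signatures and runbook text below are invented examples, not real operational data.

```python
# Minimal sketch of a 'known good' advisor: map documented problem signatures
# to documented remediations, and escalate anything that doesn't match.
# Signatures and runbooks are illustrative placeholders.
KNOWN_PROBLEMS = {
    "disk_full": {
        "signature": "No space left on device",
        "runbook": "Rotate logs and expand the volume.",
    },
    "cert_expired": {
        "signature": "certificate has expired",
        "runbook": "Renew the TLS certificate and redeploy.",
    },
}

def suggest_remediation(alert_text: str) -> str:
    for name, entry in KNOWN_PROBLEMS.items():
        if entry["signature"].lower() in alert_text.lower():
            return f"Known problem '{name}': {entry['runbook']}"
    return "Unknown problem: escalate to the on-call engineer."

print(suggest_remediation("ERROR: No space left on device on /var/log"))
print(suggest_remediation("Intermittent packet loss between regions"))
```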
“We know gen AI almost never says it doesn’t know something, but instead will throw out misleading, spurious, wrong, and even malicious results when you try to get it to solve ‘unknown unknowns,’” says Mann.
Make more work for humans
Content produced by generative AI can be helpful, of course, but because it’s so easy to create, it can also end up making a lot more work for those who need to vet it and take action based on it.
Fiction magazines report receiving so many low-quality AI-written stories that it’s effectively a denial of service attack. Publishers have been experimenting with AI to copy edit manuscripts, but writers and editors alike report that suggested edits are frequently unhelpful, irrelevant, or just plain wrong — running into problems with technical terms, house style, complex sentence structures, and words used in correct but unusual ways, for starters. Be honest when you assess what areas generative AI is actually able to contribute to.
A key part of adopting any AI tool is having a process for dealing with errors beyond correcting them individually each time. Don’t assume generative AI learns from its mistakes, or that it’ll give you the same result every time. If that matters, you need to use prompt engineering and filters to constrain results in the most important areas.
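For example, a minimal output filter might require the model to return JSON and reject anything that doesn’t match the expected fields or an allow-list of values before it reaches anyone downstream; the field names and categories below are hypothetical.

```python
# Minimal sketch of constraining and filtering model output: require JSON,
# validate the structure, and reject values outside an allow-list.
# Field names and categories are illustrative.
import json

ALLOWED_CATEGORIES = {"billing", "shipping", "returns"}

def validate_reply(raw_reply: str) -> dict:
    """Raise ValueError unless the model output fits the expected shape."""
    data = json.loads(raw_reply)  # raises if the reply is not valid JSON
    if set(data) != {"category", "summary"}:
        raise ValueError(f"Unexpected fields: {sorted(data)}")
    if data["category"] not in ALLOWED_CATEGORIES:
        raise ValueError(f"Category {data['category']!r} is not on the allow-list")
    return data

print(validate_reply('{"category": "returns", "summary": "Customer wants a refund."}'))
```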
Also be prepared for generative AI use in areas and processes you hadn’t planned for, where it may be less accurate. Again, transparency is key. Staff need to know the company policy on when they can use generative AI and how to disclose they’re using it. You may also want to include generative AI usage in audits and eDiscovery the same way you do with enterprise chat systems.
Organizations may need to start setting these policies with more urgency. Out of a thousand US businesses surveyed by TECHnalysis Research in spring 2023, 88% were already using generative AI, but only 7% of those early adopters had formal policies.
And in a recent IDC study on AI opportunity, over a quarter of business leaders said lack of AI governance and risk management was a challenge for implementing and scaling the technology. Initial concerns have been about the confidentiality of enterprise data, but reputational damage should also be a priority. In addition, over half called a lack of skilled workers their biggest barrier, which usually means developers and data engineers. But less technical business users will also need the skills to carefully frame the questions they put to an AI tool, and to assess and verify the results.