How CIOs navigate generative AI in the enterprise
One concern is how malicious actors might capitalize on gen AI to further their efforts. Rajavel notes that cybercriminals are already using the technology to execute attacks at scale, exploiting its ability to draft convincing phishing campaigns and spread disinformation. Attackers could also target gen AI tools and the models themselves, leading to data leakage or poisoned outputs.
“It’s possible that generative systems could accelerate and enable attackers,” adds O’Grady. “Arguably, the biggest concern for many enterprises, however, is the exfiltration of private data from closed vendor systems.”
These technologies can produce very convincing results that are nonetheless riddled with inaccuracies. Beyond bugs within the models, there are also cost implications to consider: it’s very easy to unknowingly or unnecessarily spend a lot on gen AI, whether from using the wrong models, lacking visibility into consumption costs, or not using the tools effectively.
“AI is not without risk,” says Perez. “It needs to be built from the ground up with humans in control of the areas that ensure anyone can trust its outcomes — from the most basic user to the most experienced engineer.” Another open question for Perez is who owns AI development and maintenance, since the technology is also putting pressure on IT teams to keep up with the demand for innovation, and many IT workers lack the time to implement and train AI models and algorithms.
The elephant in the room: employment
Then there’s the outcome that’s stirred up the mainstream media: the replacement of human labor by AI. But how gen AI will affect employment in IT groups has yet to be determined. “Impacts on employment are, at present, difficult to forecast, so that’s a potential concern,” says O’Grady.
While there’s undoubtedly a mix of opinions in this debate, Walgreens’ Sample doesn’t believe AI poses an existential threat to humanity. Instead, he’s optimistic about the potential for gen AI to improve the lives of employees. “The glass-half-empty viewpoint is AI will impact a lot of jobs, but the glass-half-full viewpoint is it’ll make humans better at what they do,” he says. “Ultimately, I think AI will eliminate people from having to do repetitive tasks, which can be automated, and allow them to focus on higher level jobs.”
How to soothe AI concerns
Responding to the deluge of concerns AI poses will take a multipronged approach. For Perez, the quality of gen AI hinges on the data these models ingest. “If you want quality, trusted AI, you need quality, trusted data,” he says. The problem, however, is that data is often riddled with errors, requiring tooling to integrate unstructured data in disparate formats from various sources. He also stresses going beyond “human in the loop” approaches to put humans more firmly in the driver’s seat. “I see AI as a trusted advisor but not the sole decision maker,” he adds.
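To make that stance concrete, here is a minimal sketch, in Python, of what “AI as advisor, human as decision maker” can look like in practice: the model only produces a recommendation, and nothing executes without an explicit human decision. The recommend() function is a hypothetical placeholder for a real model call, not any particular product’s API.

```python
# advisor_gate.py -- illustrative only; recommend() is a stand-in
# for a real model call, and the workflow is deliberately simple.

def recommend(request: str) -> str:
    """Placeholder for a model-generated recommendation."""
    return f"Suggested action for '{request}': approve a $40 refund"


def decide(request: str) -> str:
    """Show the AI suggestion, then let a human make the actual call."""
    suggestion = recommend(request)
    print(suggestion)
    choice = input("Accept this recommendation? [y/N] ").strip().lower()
    if choice == "y":
        return "Action executed with explicit human approval."
    return "Recommendation discarded; no action taken."


if __name__ == "__main__":
    print(decide("Customer reports a duplicate charge"))
```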
To uphold software quality, rigorous testing will also be required to check that AI-generated code is accurate and bug-free. To that end, Malagodi encourages companies to adopt a “clean as you code” approach that involves static analysis and unit testing to ensure proper quality checks. “When developers focus on clean code best practices, they can be confident their code and software are secure, maintainable, reliable, and accessible,” he says.
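As a rough illustration of what such a quality gate can look like, the sketch below pairs a hypothetical AI-generated helper with the pytest unit tests a team might require before merging it. The function and test names are invented for this example, and a static analyzer of the team’s choice would run alongside the tests.

```python
# test_order_parsing.py -- illustrative "clean as you code" check for a
# hypothetical AI-generated helper. Run with: pytest test_order_parsing.py
import pytest


def parse_order_total(raw: str) -> float:
    """Parse a currency string such as '$1,234.56' into a float."""
    cleaned = raw.strip().lstrip("$").replace(",", "")
    if not cleaned:
        raise ValueError("empty order total")
    return float(cleaned)


def test_parses_plain_number():
    assert parse_order_total("42.50") == 42.50


def test_strips_currency_symbol_and_commas():
    assert parse_order_total("$1,234.56") == 1234.56


def test_rejects_empty_input():
    with pytest.raises(ValueError):
        parse_order_total("   ")
```

Running checks like these on every change, rather than auditing after the fact, holds AI-suggested code to the same bar as human-written code.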
As with any new technology, adds Bedi, the initial enthusiasm needs to be tempered with proportionate caution. As such, IT leaders should consider steps to use AI assistants effectively, such as pairing them with observability tools that can detect architectural drift and support preparation for application requirements.
Applying governance around AI adoption
“Generative AI represents a new era in technological advancement with the potential to bring substantial benefits if properly managed,” says Pooley. However, he advises CIOs to balance innovation with the inherent risks. In particular, controls and guidelines must be applied to limit data exposure from uncontrolled use of these tools. “As with many technology opportunities, CIOs will find themselves accountable should it go wrong,” he adds.
For Sample, the onus partially lies on regulators to adequately address the risks AI poses to society. For instance, he references a recent executive order from the Biden administration to establish new AI safety and security standards. The other aspect is spearheading corporate guidelines to govern this fast-paced technology. Walgreens, for example, has embarked on a journey to define a governance framework around AI that includes considerations like fairness, transparency, security, and explainability, he says.
Busse at Workato similarly advocates for setting internal directives that prioritize security and governance as AI adoption accelerates. He advises educating employees with training, developing internal playbooks, and implementing an approval process for AI experimentation. Pooley notes that many firms have established an AI working group to help navigate the risks and harness the benefits of gen AI. Some security-aware organizations are taking even more stringent measures: to combat exfiltration, many buyers prioritize on-premises systems, adds O’Grady.
“CIOs should be leading the charge to ensure their teams have the right training and skills to identify, build, implement, and use generative AI in a way that benefits the organization,” says Perez. He describes how at Salesforce, product and engineering teams have implemented a trust layer between AI inputs and outputs to minimize the risks that come from using this powerful technology.
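Salesforce has not published the internals described here, so the following is only a minimal sketch of the general idea of a trust layer: mask sensitive values before a prompt leaves the organization, screen the response, and keep an audit trail. The call_model function and the deny-list terms are hypothetical stand-ins for whatever model API and policies a company actually uses.

```python
# trust_layer.py -- illustrative sketch of a "trust layer" around a model call:
# sensitive values are masked on the way in, and outputs are logged and
# screened on the way out. call_model() is a stand-in for a real LLM API.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")
BLOCKED_OUTPUT_TERMS = ("ssn", "password")  # illustrative deny-list


def mask_pii(text: str) -> str:
    """Replace obvious personal data before the prompt leaves the company."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)


def call_model(prompt: str) -> str:
    """Stand-in for a real model call."""
    return f"Draft reply based on: {prompt}"


def trusted_completion(prompt: str, audit_log: list) -> str:
    """Mask the input, screen the output, and record both for audit."""
    safe_prompt = mask_pii(prompt)
    output = call_model(safe_prompt)
    if any(term in output.lower() for term in BLOCKED_OUTPUT_TERMS):
        output = "[response withheld pending human review]"
    audit_log.append({"prompt": safe_prompt, "output": output})
    return output


if __name__ == "__main__":
    log: list = []
    print(trusted_completion("Email jane.doe@example.com about her order", log))
```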
That said, being intentional with AI is just as important as governing it. “Organizations are rushing to implement AI without a clear understanding of what it does and how it’ll benefit their business the most,” says Hyland’s Watt. AI won’t fix every problem, so understanding which problems the technology can and can’t fix is fundamental to getting the most out of it, he says.
Positively impacting the business
With the proper checks in place, gen AI is set to catalyze greater agility across countless areas, and CIOs foresee it being used to realize tangible business outcomes, such as improved user experiences. “Generative AI is going to allow companies to create experiences for their customers that once felt impossible,” says Perez. “AI is no longer just a tool for niche teams. Everyone will have opportunities to use it to be more productive and efficient.”
But UX benefits don’t end with external customers. Internal employee experience will benefit as well, adds Rajavel. AI copilots trained on internal data could cut IT ticket requests in half, she predicts, simply by instantly sourcing answers already found on internal company pages.
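Results will vary by organization, but a toy sketch of the deflection pattern she describes looks something like this: before a ticket is filed, the copilot checks whether an existing internal page already answers the question. The page list and word-overlap scoring below are deliberately simplistic stand-ins for a real search or retrieval-augmented pipeline.

```python
# it_copilot_sketch.py -- rough sketch of ticket deflection: look for an
# existing answer on internal pages before a ticket is opened.

INTERNAL_PAGES = [
    {"title": "Reset your VPN password",
     "body": "Go to the self-service portal and choose 'Reset VPN password'."},
    {"title": "Request a new laptop",
     "body": "Submit the hardware request form; approval takes two business days."},
]


def best_match(question: str):
    """Return the internal page sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = [
        (len(q_words & set((p["title"] + " " + p["body"]).lower().split())), p)
        for p in INTERNAL_PAGES
    ]
    score, page = max(scored, key=lambda pair: pair[0])
    return page if score >= 2 else None


def answer_or_escalate(question: str) -> str:
    """Answer from an internal page if possible, otherwise open a ticket."""
    page = best_match(question)
    if page:
        return f"From '{page['title']}': {page['body']}"
    return "No existing answer found -- opening an IT ticket."


if __name__ == "__main__":
    print(answer_or_escalate("How do I reset my VPN password?"))
```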
Walgreens is also improving customer experience with gen AI-driven voice assistants, chatbots, and text messaging, says Sample. By reducing call volume and improving customer satisfaction, these tools free team members to focus on their in-store customers. The company is also deploying gen AI to optimize in-store operations, such as supply chain, floor space, and inventory management, helping leaders make decisions regarding the top and bottom lines of the business. But vigilance is key.
“As with all prior technical waves, AI is undoubtedly going to be accompanied by significant downsides and collateral damage,” says O’Grady. “Overall, it will accelerate development and augment human abilities while dramatically expanding the scope of problems.”