IT leaders grapple with shadow AI


“Our focus is embracing and accelerating the use of smart artificial intelligence, while managing it with DLP [data loss prevention] tools to ensure security,” says Wright.

Education will also play a critical role in taking control of generative AI at Parsons. “Our focus is educating employees on the best practices and tools to accomplish their goals while protecting the company,” Wright says.

Insurers understand risk

As a global insurer with a presence in many countries, TMG has seen its international units experimenting with generative AI. “We did see a tremendous amount of personal experimentation going on. But because we are risk-aware, there was not a rush to put everything on ChatGPT. The reaction was quick and clear: education and monitoring,” says Pick.

TMG has set up working groups within its various companies to examine use cases such as drafting letters and marketing content to give humans a head start on the process, according to Pick. Another prospective generative AI use case is having the various business units draft reports on market conditions and performance.

“Any company with many business units can benefit from generative AI’s ability to summarize information,” notes Pick. “To take an underwriting manual and summarize it in plain language could take seconds or minutes to get to a first draft, rather than days or weeks,” he says. “That will enable us to focus our people resources more efficiently in the future.”

In addition to ingesting and generating written content, generative AI shows great potential in application development, according to Pick. The ability to translate a stored procedure from one language into another in near real time, with an accuracy rate of perhaps 60% and with comments included, will greatly increase developer efficiency, he asserts. “It could take weeks for a programmer to do the same thing. That will pay dividends for years,” Pick says.
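As a rough illustration of the kind of workflow Pick describes, the sketch below asks a general-purpose LLM for a first-draft translation of a stored procedure. It is an assumption-laden example, not TMG’s implementation: the model name, SQL dialects, and prompt are placeholders, and it uses the public openai Python SDK.

```python
# Hedged sketch of LLM-assisted stored-procedure translation.
# Assumptions: the "gpt-4o" model name, the T-SQL -> PL/pgSQL dialect pair,
# and the prompt wording are placeholders, not TMG's actual setup.
# Requires the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def translate_procedure(source_sql: str, source_dialect: str, target_dialect: str) -> str:
    """Request a first-draft translation that preserves the original comments."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute whatever deployment is approved
        messages=[
            {
                "role": "system",
                "content": (
                    f"You translate {source_dialect} stored procedures into "
                    f"{target_dialect}. Keep the original comments and add a note "
                    "wherever the translation is uncertain."
                ),
            },
            {"role": "user", "content": source_sql},
        ],
    )
    return response.choices[0].message.content

# Illustrative input only: a tiny T-SQL procedure translated to PL/pgSQL.
draft = translate_procedure(
    "CREATE PROCEDURE dbo.GetPolicyCount AS SELECT COUNT(*) FROM Policies;",
    "T-SQL",
    "PL/pgSQL",
)
print(draft)
```

As Pick notes, the output is only a first draft at perhaps 60% accuracy, so a developer still reviews and tests it before it goes anywhere near production.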

The use of private LLMs is also immediately attractive for an insurance provider such as TMG. “There is the hope that it might find things humans would not notice. We’re also interested in ‘little LLMs,’ if we can get to that state, because you would not need a cloud data center. Instead, we would use sandboxes that are cordoned off so that we are stewarding the data,” says Pick.

But even with private LLMs, regulation comes into play, says the CIO. “For a global company such as TMG to use a private LLM, the data would need to be loaded into a tenant system that is within the area governed by specific regulations, such as GDPR in Europe,” he explains.

Building on POCs

Chan’s pursuit of both safety and opportunity shows promise in several POCs. “We are training Azure OpenAI with all the product information we have, so a business person can do a quick search to find a particular connector and can get back several examples, including which ones are in stock. It saves time because people no longer need to call the materials team,” Chan says.
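A minimal sketch of how such a grounded product lookup might be wired up with Azure OpenAI appears below. The endpoint, deployment name, and product snippets are placeholders, and the retrieval step is reduced to a hard-coded string; a production system would pull context from an indexed product catalog (for example, via Azure AI Search) rather than inline text.

```python
# Hedged sketch of a grounded product lookup along the lines Chan describes.
# Assumptions: endpoint, deployment name, and the product records are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# Stand-in for retrieved catalog records (a real system would query a search index).
product_context = """
CON-4412: 12-pin waterproof connector, 5A, in stock (1,240 units)
CON-5531: 8-pin board-to-board connector, 3A, backordered
"""

question = "Which 12-pin connectors rated for at least 5A are in stock?"

response = client.chat.completions.create(
    model="product-search-gpt",  # assumed deployment name
    messages=[
        {
            "role": "system",
            "content": "Answer only from the product data provided. "
                       "List matching part numbers and stock status.",
        },
        {"role": "user", "content": f"Product data:\n{product_context}\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```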

Azure OpenAI also generates custom contracts quickly. “By loading the last 10 years of contracts into the repository, we can say, ‘I need a contract for a particular project with such and such terms,’ and it comes up with a full contract within seconds,” says Chan. Sales executives can then review and tweak the contract before sending it to the customer. The fast turnaround is expected to result in quicker conversion of prospects to sales as well as happier customers.

The process is similar with requests for proposals (RFPs), in which business analysts specify what they need and generative AI creates the RFP within seconds. “The business analyst just reviews and makes changes. This is a huge productivity gain,” says Chan. Engineers can also call upon generative AI to come up with possible solutions to customer demands, such as reducing the physical footprint of a circuit board by replacing certain components in the bill of materials, while shortening the go-to-market lead time. “It will return options. That is huge in terms of value,” Chan says.  

A challenge worth taking on

In general, CIOs are finding that the productivity upside of generative AI justifies grappling with the challenges of controlling it. “We make sure the company data is safe, yet the AI is not lacking in capabilities for IT and business employees to innovate,” says Chan.

According to Pick, generative AI will not make human workers obsolete, just more productive. “We don’t view it as a people replacement technology. It still needs a human caretaker,” he says. “But it can accelerate work, eliminate drudgery, and enable our employees to do things of a higher order, so we can focus people resources more acutely in the future.”

Most important, Pick says, generative AI has much more potential than earlier much-hyped technologies. “This is not the next blockchain, but something that will really be valuable.”

To extract that value, Goetz of Forrester says setting policies for generative AI is a matter of establishing clear dos and don’ts. Like Chan, she recommends a two-track strategy in which approved generative AI applications and data sets are made available to employees, while AI applications and use cases that might put data in jeopardy are prohibited. Following those guidelines, Goetz says, makes safe, self-service use of generative AI possible across an organization.

In the meantime, when developing or deploying generative AI capabilities, Saroff of IDC recommends assessing the controls that generative AI tools implement, as well as the unintended risks that might arise from using them.
