How to get gen AI spend under control

“You had to set it up,” he says. “And the compute needed to run a 70 billion model is significant. We set it up ourselves, provisioned a server, deployed the model, and then there was usage on top of that.”

Azure now offers a pay-as-you-go option where customers just pay the token costs, but for enterprises looking to deploy on-prem models, the set-up costs still exist.

“In an ideal world, that would be the best scenario because you’re no longer constrained by token costs,” he says. “The only cost you pay is for infrastructure. But you still need to have the compute capacity and other things, like networking.”
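The tradeoff he describes reduces to simple break-even arithmetic: pay-as-you-go token costs scale with usage, while self-hosted infrastructure is a fixed cost. A minimal sketch, with entirely hypothetical prices (not real Azure or hardware rates):

```python
# Illustrative break-even: pay-per-token API vs. self-hosted model.
# All prices below are hypothetical assumptions, not vendor quotes.

TOKEN_PRICE_PER_M = 2.00        # assumed $ per 1M tokens on a pay-as-you-go API
MONTHLY_INFRA_COST = 12_000.00  # assumed $/month for GPU server, power, networking

def monthly_api_cost(tokens_per_month: float) -> float:
    """Cost of the pay-per-token option at a given monthly volume."""
    return tokens_per_month / 1_000_000 * TOKEN_PRICE_PER_M

def breakeven_tokens() -> float:
    """Monthly token volume above which self-hosting becomes cheaper."""
    return MONTHLY_INFRA_COST / TOKEN_PRICE_PER_M * 1_000_000

if __name__ == "__main__":
    volume = 2_000_000_000  # 2B tokens/month
    print(f"API cost at that volume: ${monthly_api_cost(volume):,.0f}")
    print(f"Break-even volume: {breakeven_tokens():,.0f} tokens/month")
```

Under these assumed numbers, self-hosting only wins at very high sustained volume, which is why the setup and compute costs he mentions matter so much for on-prem deployments.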

Oversight costs

When gen AI is moved into production, another unexpected cost might be the required oversight. Many systems require humans in the loop or expensive technical guardrails to check for accuracy, reduce risk, or meet compliance requirements.
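The human-in-the-loop pattern described above can be sketched as a thin routing layer between model and user. The confidence score, threshold, and review queue here are illustrative assumptions, not any specific product's API:

```python
# Minimal human-in-the-loop sketch: low-confidence model answers are
# routed to a review queue instead of being returned directly.
# The 0.85 threshold is an assumed policy, purely for illustration.

from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.85  # assumed policy: below this, a human reviews

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, answer: str) -> str:
        self.pending.append(answer)
        return "Your request is being reviewed by a specialist."

def answer_with_oversight(answer: str, confidence: float, queue: ReviewQueue) -> str:
    """Return the model answer directly only when confidence clears the bar."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return queue.submit(answer)
```

The oversight cost shows up in the `pending` list: every item in it is human labor the project budget has to cover.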

“I don’t think we expected the regulations to come so soon,” says Sreekanth Menon, global head of AI at Genpact. “Once generative AI came in, it became a leadership top topic, and all the governments woke up and said we need regulations.”

The EU AI Act is already in place, and similar work is in progress in the US. “Now companies have to accommodate that when developing AI, and that’s a cost,” he says. But the regulations aren’t a bad thing, he adds. “We need regulations for the AI decisions to be good and fair,” he says.

Adding regulatory compliance after systems are built is expensive, too, but companies can plan ahead by putting good AI governance systems in place. Ensuring the security of gen AI models and associated systems is also a cost companies might not be prepared for. Running a small-scale production test will not only help enterprises identify compliance and security issues, he says, but will help them better calculate other ancillary costs like those associated with additional infrastructure, search, databases, APIs, and more. “Think big, test small, and scale quick,” he says.

AI sprawl

In the past, with traditional AI, it might have taken a year or two of experimenting before an AI model was ready for use, but gen AI projects move quickly.

“The foundation models available today are allowing enterprises to quickly think of use cases,” says Menon. “Now we’re in a stage where we can think of an experiment and then go into production quickly.” He suggests that enterprises refrain from launching all their AI projects at once, put a cost mechanism and clear objectives in place for each project, then start small, scale wisely, and continuously invest in upskilling.

“Upskilling is a cost, but it will help you to save on other costs,” he says.

Matthew Mettenheimer, associate director at S-RM Intelligence and Risk Consulting, says he often sees gen AI sprawl within companies.

“A CIO or a board of directors wants to enable AI across their business, and before they know it, there’s quite a bit of spending and use cases,” he says.

For example, S-RM recently worked with a large consumer manufacturer that decided to push AI enablement through their business without first building a governance structure. “And every single department went off to the races and started trying to implement generative AI,” he says. “You had overlapping contracts with different tools for different parts of the organization, which really started to bloat their spend. Their marketing department was using one tool, their IT team was using another. Even within the same department, different teams used different tools.”

As a result, the company was paying for similar services over and over again, with each group having its own contracts, and no efficiencies from doing things at scale. And people were getting subscriptions to gen AI products they didn’t know how to use.
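A first step toward reining in that kind of sprawl is simply inventorying contracts and grouping them by capability, so duplicate purchases surface. The departments, tools, and costs below are entirely hypothetical:

```python
# Sketch: surface overlapping gen AI subscriptions across departments.
# The department/tool/cost data is hypothetical, for illustration only.

from collections import defaultdict

subscriptions = [
    {"dept": "Marketing", "capability": "copywriting", "tool": "ToolA", "monthly_cost": 4000},
    {"dept": "Sales",     "capability": "copywriting", "tool": "ToolB", "monthly_cost": 3500},
    {"dept": "IT",        "capability": "code-assist", "tool": "ToolC", "monthly_cost": 6000},
    {"dept": "Support",   "capability": "copywriting", "tool": "ToolA", "monthly_cost": 4200},
]

def find_overlaps(subs):
    """Group contracts by capability; flag capabilities bought more than once."""
    by_capability = defaultdict(list)
    for s in subs:
        by_capability[s["capability"]].append(s)
    return {cap: rows for cap, rows in by_capability.items() if len(rows) > 1}

for cap, rows in find_overlaps(subscriptions).items():
    total = sum(r["monthly_cost"] for r in rows)
    print(f"{cap}: {len(rows)} contracts, ${total:,}/month across departments")
```

Each flagged capability is a candidate for consolidating onto a single contract, which is also where the scale efficiencies the article mentions come from.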

“There were a lot of great intentions and half-baked ideas,” he says, and the result was a massive uptick in IT spending. Enterprises need to start by understanding where gen AI can really make an impact, then build their projects step by step, in a sustainable way, rather than going out and buying as much as possible. Some areas of particular concern, where companies might want to hold off on spending, are use cases that could create liability for the organization.

“If you’re an insurance provider, using AI to determine if a claim will be paid or not can land you in a bit of liability if the AI mechanism isn’t used or calibrated properly,” Mettenheimer says. Instead, prioritize use cases where workers can be freed up to handle more complex tasks.

“If someone is spending five hours a week updating the same spreadsheet and you can reduce that time to zero hours per week, that really frees up that individual to be more productive,” he adds. But if it takes as much time to check the AI’s work product as it saves, it’s not really making the job more efficient.
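That efficiency test is simple arithmetic: the automation only pays off if the time spent checking the AI's output stays below the manual time saved. The figures here are illustrative, using the article's five-hour spreadsheet example with an assumed review overhead:

```python
# Net weekly time saved by automating a task, minus the time spent
# validating the AI's output. Review-hour figures are assumptions.

def net_hours_saved(manual_hours: float, review_hours: float) -> float:
    """Positive: the automation is a net win. Zero or negative: it isn't."""
    return manual_hours - review_hours

print(net_hours_saved(5.0, 1.0))  # assumed 1 hour of checking: clear win
print(net_hours_saved(5.0, 5.0))  # checking takes as long as the task did: no gain
```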

“Generative AI is a really powerful and incredible tool, but it’s not magic,” he says. “There’s a misconception that AI will be able to do everything without the need for any manual processes or validation, but we’re not at that point yet.”

He also recommends against doing AI projects where there are already perfectly good solutions in place.

“I know of a few cases where people want to use AI so they can feel like they’re getting a competitive edge and can say that they’re using AI for their product,” he says. “So they lay AI on top of it, but they’re not getting any benefits other than just saying they’re using AI.”

Senior executives are eager to get going on gen AI, says Megan Amdahl, SVP of partner alliances and operations at Insight.

“But without a firm destination in mind, they can spend a lot of time on cycles that don’t achieve the outcomes they’re hoping for,” she says. For example, clients often go after small use cases that improve efficiency for a small number of people. It can sound like a great project, but if there’s no way for it to be expanded, you can easily wind up with a sea of point solutions, none of which produces real business impact.

“Here at Insight, we were selecting which team to go after in order to improve help desk feedback,” she says. One strong use case had a team size of 50 who were checking the status of customer orders. But not only was the team small, the people were located in low-cost locations. Improving their efficiency with gen AI would have some impact, but not a significant one. Another team was creating bills of materials for clients, and it was much larger. “We went after the team size of 850 instead so it would have a broader impact,” she says.
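The prioritization logic behind that choice can be made explicit: expected impact scales with team size, labor cost, and hours saved per person. The hourly rates and hours below are hypothetical; only the team sizes come from the article:

```python
# Sketch of the team-selection logic: impact = headcount x labor cost
# x hours saved. Hourly rates and hours saved are assumed figures.

def weekly_impact(team_size: int, hourly_cost: float, hours_saved_per_person: float) -> float:
    """Estimated dollar value of time freed up per week across a team."""
    return team_size * hourly_cost * hours_saved_per_person

order_status_team = weekly_impact(50, 20.0, 2.0)   # small team, low-cost location (assumed rate)
bom_team = weekly_impact(850, 35.0, 2.0)           # larger bill-of-materials team (assumed rate)

print(f"Order-status team: ${order_status_team:,.0f}/week")
print(f"Bill-of-materials team: ${bom_team:,.0f}/week")
```

Even with identical hours saved per person, the larger team dominates, which is the article's point about choosing breadth of impact over convenience.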

In addition to selecting projects with the widest possible impact, she also recommends looking for those that have a narrower scope, as far as data requirements are concerned. Take for example a gen AI help desk assistant.

“Don’t go after every type of question that the company can get,” she says. “Narrow it down, and monitor the responses you get back. Then the amount of data you need to pull in is reduced as well.”

Organizing data is a significant challenge for companies deploying AI, and an expensive one as well. The data should be clean and in a structured format to reduce inaccuracy. She recommends that companies deciding which gen AI projects to tackle first look at those focused on revenue generation, cost reduction, and improving brand affinity.


