Cost concerns put CIOs’ AI strategies on edge
Questionable outcomes and a lack of confidence in generative AI’s promised benefits are proving to be key barriers to enterprise adoption of the technology.
But according to a recent survey from IDC, cost concerns are another top gen AI roadblock, with 46% of 1,000-plus IT pros surveyed saying the lack of predictability in pricing is a primary obstacle to implementing gen AI at their organizations.
To assuage those concerns, IDC survey respondents “report a preference for pay-as-you-go consumption, which is out of sync with most vendors that want a commitment in advance,” according to an executive summary based on the October survey of IT pros and line-of-business executives.
Sastry Durvasula, chief operating, information, and digital officer at TIAA, firmly believes consumption-based pricing is the best model for business organizations’ AI strategies.
“Most organizations are still figuring out their AI usage patterns, so committing to large upfront costs is risky. Pay-as-you-go offers better cost visibility and control, plus the flexibility to scale based on actual usage,” he says. “We’re less concerned about one-time training/fine-tuning costs and more worried about managing ongoing operational expenses. This way, we can directly tie costs to value and adjust as needed.”
Chris Nardecchia, CIO of Rockwell Automation, agrees that pay-as-you-go is the preferred pricing model for CIOs.
“Most enterprises, especially outside of tech, face significant barriers to implementing in-house AI infrastructures that are capable of running highly advanced models,” he says. “While building from scratch is out of reach for most, consumption-based models allow CIOs to implement AI incrementally with more measurable ROI.”
IT leaders are gaining a better understanding of vendors’ gen AI pricing approaches, but by and large they don’t like it. Dave McCarthy, research vice president at IDC and one of the survey’s authors, points out that CIOs are still grappling with how best to manage unexpected costs in the cloud and have learned that estimating costs for new workloads is challenging without historical data.
“Since AI is new for most companies, this creates a budgeting challenge for their AI initiatives. To make matters worse, many vendors are still experimenting with varying pricing models that are subject to change,” McCarthy says. “That uncertainty creates a challenge for risk-averse companies that must work within budget constraints. Pay-as-you-go pricing is a way to reduce financial risk by not locking into long-term contracts.”
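To make that trade-off concrete, one option is a back-of-the-envelope comparison of committed versus pay-as-you-go spend under uncertain usage. The sketch below is purely illustrative: the per-token rates, commitment size, and usage scenarios are hypothetical placeholders, not any vendor’s actual pricing.

```python
# Illustrative only: hypothetical rates and usage scenarios, not real vendor pricing.
# Compares an upfront annual commitment against pay-as-you-go when usage is uncertain.

PAYG_RATE = 15.00           # hypothetical $ per 1M tokens, pay-as-you-go
COMMIT_RATE = 10.00         # hypothetical discounted $ per 1M tokens with a commitment
COMMITTED_TOKENS_M = 1_200  # millions of tokens committed for the year

def annual_cost_payg(tokens_m: float) -> float:
    """Pay only for what is actually used."""
    return tokens_m * PAYG_RATE

def annual_cost_committed(tokens_m: float) -> float:
    """Pay for the full commitment even if usage falls short; overage billed at the PAYG rate."""
    overage = max(0.0, tokens_m - COMMITTED_TOKENS_M)
    return COMMITTED_TOKENS_M * COMMIT_RATE + overage * PAYG_RATE

for scenario, usage_m in [("low adoption", 300), ("expected", 1_200), ("high adoption", 2_500)]:
    payg = annual_cost_payg(usage_m)
    committed = annual_cost_committed(usage_m)
    print(f"{scenario:>14}: pay-as-you-go ${payg:>9,.0f} | committed ${committed:>9,.0f}")
```

Under the low-adoption scenario the pay-as-you-go bill shrinks with usage, while the committed contract keeps billing for capacity that never gets used, which is precisely the financial risk McCarthy describes.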
Adnan Masood, chief AI architect at UST, says “unpredictable pricing” makes it tough even for CFOs to manage AI spending.
“Costs that fluctuate in ways even a CFO using advanced data-driven strategy can’t fully forecast, … that’s a massive threat to solvency and can derail the core competencies these executives must protect,” he says.
As Masood sees it, “the real fear is not the technology’s power, but the lack of real-time cost control and clear performance metrics to justify audacious AI investments.”
Questionable outcomes, dubious benefits
In addition to pricing fears, IDC found concerns about bad outcomes (51.3%) — including unintended bias, unauthorized usage of someone else’s intellectual property, or unintentional leakage of confidential information — and lack of confidence in the benefits (46.1%) of generative AI as top roadblocks to adoption.
Here, an antidote may be using SaaS agents and pursuing basic gen AI use cases, such as automated document summarization, rather than attempting to build and train a foundation model, says Paul Beswick, CIO of Marsh McLennan. Doing so can also be a cost-conscious inroad to AI, he adds.
“There is absolutely a sweet spot of relatively easy-to-access capability at a modest price that many technology organizations are perfectly capable of reaching. I think the bigger risk is that they get distracted by trying to shoot for things that are less likely to be successful or buying into technologies that don’t offer a good price/performance trade-off,” he says.
“Most organizations should avoid trying to build their own bespoke generative AI models unless they work in very high-value and very niche use cases,” Beswick adds. “For most companies, I think there’s far better return in taking advantage of the ecosystem that’s being built and that is relatively easy to buy or rent your way into.”
UST’s Masood agrees that the potential costs of model training aren’t for the faint of heart.
IT leaders “seem most alarmed by the specter of runaway training bills: Once you press ‘go’ on a large-scale generative model, it can be a bottomless pit without operational transparency and robust risk mitigation strategies,” he says. “At the same time, a daily sticker shock from incremental charges wreaks havoc on institutional legitimacy — no one wants to explain last night’s spike in AI usage to the board without a strong governance innovation framework.”
Budget constraints also play a role in preventing the building out of AI infrastructure, given the cost of GPUs, Rockwell’s Nardecchia says. A shortage of experienced AI architects and data scientists, technical complexity, and data readiness are also key roadblocks, he adds.
“Foundational models require vast, clean, and structured data — and most organizations are still battling legacy silos and low-quality data. This is largely the No. 1 constraint I hear from peers,” he says, regarding concerns about bad outcomes.
Vendors are working to overcome these obstacles by addressing pricing concerns and trying to improve outcomes. For example, Microsoft this week introduced consumption-based pricing for Copilot Chat. And Amazon recently unveiled features for its Bedrock generative AI platform designed to improve outcomes.
At AWS re:Invent, DoorDash’s Chaitanya Hari said Amazon Bedrock’s new Knowledge Bases feature allowed the company to implement the entire retrieval-augmented generation (RAG) workflow, from ingestion to retrieval, without the need for a lot of custom data integrations or complex data back-end management.
“Even if a model is fast and fairly accurate, how do we ensure that it’s pulling information from the context that we’ve provided and not just making things up? We went through multiple iterations of prompt engineering and fine-tuning to ensure our AI models reliably referenced only the knowledge bases that we provided with Amazon Bedrock,” said Hari, product owner of enterprise AI solutions at DoorDash.
“We were able to mitigate a large portion of our hallucinations, prevent things like prompt-injection attacks, and detect things like abusive language,” Hari said. “This gave us the confidence to scale without compromising on quality or trust.”
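For teams weighing a similar managed-RAG route, the query side can be fairly thin. The following is a minimal sketch, not DoorDash’s implementation, using the boto3 bedrock-agent-runtime client’s retrieve_and_generate call; the knowledge base ID, model ARN, region, and example question are placeholders.

```python
# Minimal sketch of querying an Amazon Bedrock Knowledge Base (managed RAG).
# The knowledge base ID, model ARN, region, and question below are placeholders.
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

def ask(question: str, kb_id: str, model_arn: str) -> dict:
    """Retrieve relevant chunks from the knowledge base and generate a grounded answer."""
    response = client.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    )
    return {
        "answer": response["output"]["text"],
        # Citations point back to the ingested documents, so callers can verify
        # the answer is grounded in provided context rather than invented.
        "citations": response.get("citations", []),
    }

if __name__ == "__main__":
    result = ask(
        "What is our refund policy for late deliveries?",  # placeholder question
        kb_id="KB_ID_PLACEHOLDER",
        model_arn="arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
    )
    print(result["answer"])
```

Because the response carries citations back to the knowledge base, callers can check that an answer is grounded in the ingested documents, which is the behavior Hari describes relying on.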
Data exchange costs, gen AI premiums
IDC’s survey also revealed additional pricing worries that are hindering gen AI adoption, including the often-hidden costs associated with exchanging data between systems.
Most organizations expect public cloud IaaS to be their primary source for gen AI infrastructure, IDC’s survey shows. But many may want to use on-premises systems in conjunction with IaaS for greater privacy, the report writers note. This preference for a hybrid gen AI architecture “will require well-defined pricing models that account for costs associated with data transfer between deployment locations,” according to IDC’s executive summary.
Premium pricing for gen AI services is another CIO concern, IDC and CIOs note.
“Premium costs for agentic AI — sophisticated AI agents acting autonomously — are rationally terrifying when the ROI is fuzzy,” UST’s Masood says.
How agentic AI use will ultimately be priced by vendors is a matter of debate and confusion. Salesforce, for instance, which recently announced Agentforce 2.0, is taking a per-conversation approach to pricing. The platform is being used, for example, by FedEx to streamline operations and by Saks Fifth Avenue to answer customer questions about retail items.
Such advanced capabilities may not be affordable for all businesses for some time. According to IDC’s survey, varied pricing models for gen AI-infused services are a given — but stabilization is anticipated within a few years.
“Most buyers of GenAI-infused services show expectations of premium pricing for services delivery using GenAI today,” the executive summary notes. “However, in three years, organizations expect balanced and varied pricing models for GenAI-infused services delivery.”
Overcoming gen AI roadblocks
While almost every company is considering or implementing some form of AI, few do it right the first time, as evidenced by high AI pilot failure rates. But it doesn’t have to be that way.
“CIOs and business owners need to take a different approach to implementing new AI-driven processes and there are multiple strategies to increase the success of AI pilots,” says Chris Stephenson, managing director of intelligent automation and AI for Alliant.
“Sometimes, even with ideal functionality, an AI pilot can fail from lack of buy-in from key stakeholders funding the project or the employees meant to use it,” he adds. “At the outset of an AI pilot, project leaders should … identify key measurements for ROI from the project early to show stakeholders how the project is tracking at every step.”
Data center provider Digital Realty advises CIOs to start small with targeted pilots that prove ROI, building trust and confidence across the organization by aligning AI with business goals and using clear metrics to show how it drives revenue, cuts costs, or mitigates risk.
“We advise enterprise customers to maintain visibility across their entire infrastructure stack. A simple yet effective approach is to track the relationship between tokens, watts, and dollars,” says Chris Sharp, Digital Realty CTO. “This model monitors token production in AI deployments, the power required to support infrastructure — accounting for density and capacity dynamics — and the associated operational costs over time.”
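A minimal sketch of that tokens-watts-dollars relationship might look like the following; all figures are hypothetical placeholders rather than Digital Realty’s numbers, and a real deployment would pull them from metering and billing systems.

```python
# Hypothetical tokens/watts/dollars tracker; every figure here is a placeholder input,
# not real metering or billing data.
from dataclasses import dataclass

@dataclass
class InferenceWindow:
    tokens_generated: int       # tokens produced in the window
    avg_power_kw: float         # average power draw of the serving infrastructure
    hours: float                # length of the window
    electricity_usd_per_kwh: float
    amortized_infra_usd: float  # amortized hardware/hosting cost for the window

    @property
    def energy_kwh(self) -> float:
        return self.avg_power_kw * self.hours

    @property
    def total_cost_usd(self) -> float:
        return self.energy_kwh * self.electricity_usd_per_kwh + self.amortized_infra_usd

    def report(self) -> str:
        tokens_per_kwh = self.tokens_generated / self.energy_kwh
        usd_per_m_tokens = self.total_cost_usd / (self.tokens_generated / 1_000_000)
        return (f"tokens/kWh: {tokens_per_kwh:,.0f} | "
                f"$ per 1M tokens: {usd_per_m_tokens:,.2f} | "
                f"total: ${self.total_cost_usd:,.2f}")

# Example 24-hour window with made-up numbers.
window = InferenceWindow(
    tokens_generated=250_000_000,
    avg_power_kw=40.0,
    hours=24.0,
    electricity_usd_per_kwh=0.12,
    amortized_infra_usd=1_800.0,
)
print(window.report())
```

Tracking dollars per million tokens alongside tokens per kilowatt-hour gives both a cost signal and an efficiency signal as density and capacity change over time.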
Bryan Muehlberger, CIO at Lumiyo and former CIO and CTO at Vuori and Red Bull, advises CIOs to factor all costs related to AI, including uncertain pricing models, power costs, and economic conditions, into any equation before moving ahead.
“Right now, the rising costs of chips, the power consumption related to them, and the macro-economic tensions with China and within the supply chain [are key concerns],” he says. “These will be very impactful to the future of AI in the coming one to two years. Even OpenAI is experiencing some issues deploying their latest versions due to these complexities.”