Generative AI is a make-or-break moment for CIOs
There are two common approaches for Shapers. One is to "bring the model to the data," hosting the model on the organization's own infrastructure, either on-premises or in its cloud environment. The other is to "bring data to the model," deploying a copy of the large model on hyperscaler cloud infrastructure and moving the organization's data to it. In either case, CIOs need to develop pipelines that connect gen AI models to internal data sources. Training a model on internal data makes its outputs that much better and more specific to company needs. Companies will also need to store far more interaction data, such as conversations with customer service agents, and continually feed gen AI systems large volumes of data to keep them effective.
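To make the "connect the model to internal data" step concrete, the following is a minimal sketch of that pipeline pattern in Python. It is illustrative only: `InternalDocStore`, the keyword scoring, and `call_hosted_model` are hypothetical stand-ins rather than any vendor's API, and a production pipeline would typically use embeddings, a vector index, and access controls.

```python
# Illustrative sketch: retrieve relevant internal records for a query and pass
# them to a hosted model as context. All names here are hypothetical stand-ins.
from dataclasses import dataclass


@dataclass
class Document:
    source: str   # e.g. "crm", "support_transcripts"
    text: str


class InternalDocStore:
    """Placeholder for an index over internal enterprise data sources."""

    def __init__(self, documents: list[Document]):
        self.documents = documents

    def search(self, query: str, top_k: int = 3) -> list[Document]:
        # Naive keyword-overlap scoring; a real pipeline would use embeddings
        # and a vector store, but the overall flow has the same shape.
        terms = set(query.lower().split())
        scored = sorted(
            self.documents,
            key=lambda d: len(terms & set(d.text.lower().split())),
            reverse=True,
        )
        return scored[:top_k]


def call_hosted_model(prompt: str) -> str:
    # Stand-in for a call to a model hosted on-premises or with a hyperscaler.
    return f"[model response to {len(prompt)} chars of prompt]"


def answer_with_internal_context(store: InternalDocStore, question: str) -> str:
    context = "\n".join(d.text for d in store.search(question))
    prompt = f"Answer using only this internal context:\n{context}\n\nQuestion: {question}"
    return call_hosted_model(prompt)


if __name__ == "__main__":
    store = InternalDocStore([
        Document("support_transcripts", "Customer reported billing error on premium plan"),
        Document("crm", "Premium plan renewal window is 30 days before expiry"),
    ])
    print(answer_with_internal_context(store, "How long is the premium plan renewal window?"))
```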
Makers build a foundation model from scratch. This is expensive and complex, requiring huge volumes of data, internal AI expertise, and computing power. The one-off investment to build the model and train employees is substantial: it starts at $5 million and can run to hundreds of millions, depending on factors such as training infrastructure, model parameters, and choice of model architecture. Because of the cost and complexity, this will be the least common archetype.
Getting gen AI strategy right
Experimenting with gen AI use cases is relatively easy; scaling them up in a way that unlocks value is much more challenging. Without the right internal organization, even the most promising gen AI programs can fall short. Business processes and workflows must be redesigned, and users retrained, to take advantage of gen AI capabilities. Upgrading the enterprise technology architecture to integrate and manage gen AI models is equally important, orchestrating how they operate alongside existing AI and machine learning (ML) models, applications, and data sources.
The CIO's first move should be to centralize gen AI capabilities in order to coordinate activities, build expertise, and allocate those capabilities to priority initiatives. This team, which should include data engineers, MLOps engineers, and risk and legal experts, collaborates on building gen AI for the first few use cases, with a focus on connecting gen AI models to internal systems, enterprise applications, and tools. Only by doing this structural work at the tech-stack level can a business move past a handful of isolated use cases to industrialized deployment that captures substantial value. The guiding principle is to manage and deploy gen AI as a foundational platform service, ready for use by product and application teams.
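One way to picture the "platform service" idea is a thin internal gateway that application teams call instead of integrating models individually. The Python sketch below is a hypothetical illustration of that shape, showing a model registry, a central hook for guardrails, and per-team usage tracking; the class and method names are assumptions, not a reference implementation.

```python
# Minimal sketch of gen AI as a shared platform service: product teams call one
# internal gateway rather than wiring up models themselves. Names are hypothetical.
from collections import Counter
from typing import Callable


class GenAIGateway:
    def __init__(self) -> None:
        self._models: dict[str, Callable[[str], str]] = {}
        self.usage = Counter()  # per-team call counts for monitoring and chargeback

    def register_model(self, name: str, handler: Callable[[str], str]) -> None:
        self._models[name] = handler

    def complete(self, team: str, model: str, prompt: str) -> str:
        if model not in self._models:
            raise ValueError(f"Unknown model: {model}")
        # Central guardrail hook: policy checks, PII redaction, and prompt logging
        # would live here, so every application team inherits them automatically.
        self.usage[team] += 1
        return self._models[model](prompt)


if __name__ == "__main__":
    gateway = GenAIGateway()
    gateway.register_model("summarizer", lambda p: f"[summary of {len(p)} chars]")
    print(gateway.complete(team="customer-service", model="summarizer",
                           prompt="Long support transcript ..."))
    print(gateway.usage)
```

The design choice this illustrates is that routing, guardrails, and usage accounting sit in one place, which is what lets isolated experiments become a managed capability shared across product and application teams.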
In the best-case scenario, all of the above would be in place as an organization begins its gen AI journey. In the absence of such ideal conditions, CIOs should still begin developing a platform for a set of priority use cases, adapting and adding as they learn.
The buzz around gen AI is that it has the potential to transform business as we know it. Potential, though, is not certainty, or even probability. CIOs and CTOs will be on the front lines to ensure that organizations execute with strategic intent and focus, and don’t get trapped in endless, and expensive, pilot purgatory.