Generative AI is hot, but predictive AI remains the workhorse

Since the release of ChatGPT in November 2022, generative AI (genAI) has become a high priority for enterprise CEOs and boards of directors. A PwC report, for instance, found that 84% of CIOs expect to use genAI to support a new business model in 2024. There's no doubt that genAI is a truly transformative technology, but it's worth remembering that it is just one flavor of AI, and it isn't the best technology for every use case.

The concept of what qualifies as AI changes over time. Fifty years ago, a tic-tac-toe-playing program would have been thought of as a type of AI; today, not so much. But generally speaking, the history of AI falls into three different categories.

  • Traditional Analytics: Organizations have used analytical business intelligence (BI) for the last four decades, though the name shifted to analytics as the technology grew more sophisticated. Generally speaking, analytics looks backward to unearth insights about what happened in the past.
  • Predictive AI: This technology is forward-looking. It analyzes past data to unearth predictive patterns, then applies those patterns to current data to forecast what is likely to happen next.
  • Generative AI: GenAI analyzes existing content (text, images, audio, and video) to generate new content according to the user's specifications.

“We work with a lot of chief data and artificial intelligence officers (CAIOs),” said Thomas Robinson, COO at Domino, “and, at most, they see generative AI accounting for 15% of use cases and models. Predictive AI is still the workhorse in model-driven businesses, and future models are likely to combine predictive and generative AI.”

In fact, there are already use cases where predictive and generative AI work in concert, such as analyzing radiology images to produce preliminary diagnostic reports, or mining stock data to generate reports on which stocks are most likely to rise in the near future. For CIOs and CTOs, this means organizations will need a common platform for developing complete AI.
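To make that pattern concrete, here is a minimal sketch of a predictive-plus-generative pipeline in Python. It assumes a scikit-learn classifier as the predictive model, uses synthetic data, and stubs out generate_report() in place of whatever LLM endpoint a team actually calls; none of these names come from Domino's platform, they are illustrative only.

```python
# Minimal sketch: a predictive model scores stocks, a generative step drafts the report.
# Synthetic data and a placeholder LLM call; swap in real features and a real client.
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- Predictive step: estimate the probability that each stock rises ---
rng = np.random.default_rng(0)
X_hist = rng.normal(size=(500, 4))                               # historical feature vectors (synthetic)
y_hist = (X_hist @ np.array([0.8, -0.5, 0.3, 0.1])
          + rng.normal(scale=0.5, size=500)) > 0                 # "rose" / "did not rise" labels

model = LogisticRegression().fit(X_hist, y_hist)                 # train on past data

tickers = ["AAA", "BBB", "CCC"]
X_today = rng.normal(size=(3, 4))                                # current feature vectors (synthetic)
p_rise = model.predict_proba(X_today)[:, 1]                      # predictive AI output

# --- Generative step: turn the scores into a narrative report ---
def generate_report(prompt: str) -> str:
    """Placeholder for a call to whichever LLM the platform exposes."""
    return f"[LLM draft based on prompt]\n{prompt}"

prompt = "Write a short outlook summary for these probabilities of a near-term rise:\n"
prompt += "\n".join(f"- {t}: {p:.0%}" for t, p in zip(tickers, p_rise))

print(generate_report(prompt))
```

In practice, the generative step would call whichever hosted or self-managed model the organization has approved; the point is that the predictive score and the narrative output come out of one pipeline rather than two disconnected stacks.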

Complete AI development and deployment doesn't treat each of these types of AI as a separate animal with its own stack. True, genAI may demand more compute in the form of GPUs, and networking may need to be beefed up in parts of the environment, but unless an organization is running a truly gigantic genAI deployment on the scale of Meta or Microsoft, there's no need to build a new stack from the ground up.

Processes for governance and testing also don't need to be completely reinvented. For example, mortgage risk models powered by predictive AI require rigorous testing, validation, and constant monitoring, just as genAI's large language models (LLMs) do. Again, there are differences, such as genAI's well-known problem with "hallucinations," but broadly, the processes for managing genAI risk will resemble those already in place for predictive AI.
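As an illustration of that overlap, below is a minimal sketch of a shared validation gate. The drift threshold, the grounding test, and the check names are hypothetical in-house examples, not features of any specific governance product; the point is that one "run checks, log results, block release on failure" loop can cover both a predictive risk model and an LLM.

```python
# Minimal sketch of a shared validation gate covering both model types.
# All thresholds and checks are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Check:
    name: str
    passed: bool
    detail: str

def check_score_drift(train_scores, live_scores, tolerance=0.05) -> Check:
    """Predictive-model check: flag a shift in mean predicted risk beyond a tolerance."""
    shift = abs(sum(live_scores) / len(live_scores) - sum(train_scores) / len(train_scores))
    return Check("score_drift", shift <= tolerance, f"mean shift = {shift:.3f}")

def check_grounding(generated: str, source_facts: list[str]) -> Check:
    """GenAI check (crude hallucination proxy): key source figures must appear verbatim in the report."""
    missing = [f for f in source_facts if f not in generated]
    return Check("grounding", not missing, f"missing facts: {missing or 'none'}")

def gate(checks: list[Check]) -> bool:
    """Shared governance step: log every check and block release if any fail."""
    for c in checks:
        print(f"[{'PASS' if c.passed else 'FAIL'}] {c.name}: {c.detail}")
    return all(c.passed for c in checks)

# Example run mixing both model types in one gate
ok = gate([
    check_score_drift(train_scores=[0.10, 0.12, 0.11], live_scores=[0.11, 0.13, 0.12]),
    check_grounding("Applicant DTI is 34% with a 720 credit score.", ["34%", "720"]),
])
print("Cleared for deployment" if ok else "Held for review")
```

The individual checks differ, but the surrounding workflow (logging, thresholds, a deployment gate) is the part that can be shared across predictive and generative models.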

Domino’s Enterprise AI platform is trusted by one in five Fortune 100 companies to manage AI tools, data, training, and deployment. With it, AI and MLOps teams can manage complete AI – predictive and generative – from a single control center, unifying development, deployment, and management under one platform.

Learn how to reap the rewards and manage the risk of your genAI projects with Domino’s free whitepaper on responsible genAI.
