Cloudera jockeys for AI platform prominence
The product has been in preview since June and is now generally available, Ricky says. “We already have a massive amount of customers, including a tier-one bank, a large manufacturer, and a large oil and gas company.” However, Cloudera declined to disclose customer names at this time.
For enterprises that rely on proprietary AI from providers such as OpenAI or Anthropic rather than open-source models, Cloudera offers connectors to those services as well, Ricky says. “We have customers using Nvidia and customers using OpenAI and Azure.”
Enterprises can even create model gardens: curated selections of vetted AI models that their developers can use.
“In an organization, I could have 10 or 15 different use cases for AI,” says Sanjeev Mohan, principal at SanjMo consultancy and former Gartner research vice president for data and analytics. “One could be translating into French. There, Google’s LLM for translation has an advantage. Another use case could be developer productivity. If I want to write code that would be a different model. Another model could be for customer service. Another use case could be to convert COBOL to Java. So, I want a model garden so I can pick and choose the model for my use case.”
With the added support for Nvidia NIM microservices, some of those models will now perform dramatically better. For example, a common deployment option is to download Llama 3.2 and run the base model on Nvidia GPUs. But running the optimized NIM version of the same model will cut the cost in half, Mohan says.
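Part of what makes that switch low-friction is that NIM microservices expose an OpenAI-compatible REST API, so retargeting an application from a hosted provider to a self-hosted NIM is largely a matter of changing the base URL and model name. A minimal sketch, with the caveat that the endpoint, port, and model identifier below are illustrative assumptions, not Cloudera-specific details:

```python
import json
from urllib.request import Request

# Assumption: a NIM container serving Llama 3.2 locally on its default port.
# NIM exposes an OpenAI-compatible /v1/chat/completions route, so the same
# request shape works against OpenAI, Azure OpenAI, or a self-hosted NIM.
NIM_BASE_URL = "http://localhost:8000/v1"  # hypothetical local deployment

def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-compatible chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request(
    "meta/llama-3.2-3b-instruct",  # hypothetical NIM model identifier
    "Translate 'hello' into French.",
)
req = Request(
    f"{NIM_BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would send this to a running NIM; it is omitted
# here because it requires a live endpoint.
```

The same `build_chat_request` payload could be pointed at a different entry in a model garden (a translation model, a code model, a customer-service model) by swapping only the `model` string.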
Nvidia is a good choice of hardware partner for generative AI, says Ari Lightman, professor at Carnegie Mellon University. “Right now, they’re the dominant player and they have everyone’s attention,” he says.