Nvidia AI Enterprise adds generative AI microservices
Version 5.0 of Nvidia AI Enterprise, the company’s end-to-end AI software platform, will feature a smorgasbord of microservices designed to speed application development and accelerate deployments, the company announced today at its GPU Technology Conference.
These microservices are provided as downloadable software containers used to deploy enterprise applications, Nvidia said in an official blog post. They’re split into two main categories — Nvidia NIM, which covers microservices related to deploying production AI models, and CUDA-X, for microservices like cuOpt, the company’s optimization engine.
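Because each microservice ships as a container exposing a standard HTTP interface, a deployed service can be queried with ordinary tooling. As a rough sketch, assuming a NIM inference container is already running locally on port 8000 and follows NIM’s OpenAI-compatible chat API (the host, port, and model name here are illustrative, not confirmed deployment details):

```python
import requests

# Assumed local NIM deployment; host, port, and model name are illustrative.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama3-8b-instruct",  # hypothetical deployed model
    "messages": [
        {"role": "user", "content": "Summarize our Q3 support tickets."}
    ],
    "max_tokens": 256,
}

resp = requests.post(NIM_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

The point of the OpenAI-compatible surface is that existing client code can be pointed at a self-hosted container by swapping the base URL, with no SDK change.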
For NIM microservices, the focus is on cutting deployment times for generative AI apps, which the company said can be reduced “from weeks to minutes” with its services. The microservices include Triton Inference Server, which standardizes AI model deployment, and TensorRT-LLM, which helps define and optimize large language models so that companies can experiment with LLMs without having to delve into C++ or Nvidia CUDA. They’ll be accessible via Amazon SageMaker, Google Kubernetes Engine, and Microsoft Azure AI, and integrations with AI frameworks such as Deepset, LangChain, and LlamaIndex are also supported.
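The LangChain integration, for instance, is published as a connector package. A minimal sketch, assuming the langchain-nvidia-ai-endpoints package is installed and an NVIDIA API key is set in the environment (the model name is illustrative):

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

# Assumes NVIDIA_API_KEY is set in the environment; model name is illustrative.
llm = ChatNVIDIA(model="meta/llama3-8b-instruct")

response = llm.invoke("Explain what a NIM microservice is in one sentence.")
print(response.content)
```

From there, the model drops into standard LangChain chains and agents like any other chat model.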
CUDA-X microservices, by contrast, focus on data preparation and model training, along with tools that let developers tie their generative AI apps to business data, whether numerical information, text, or images. Other microservices in this category are nearly applications in their own right, such as Nvidia Riva for translation and speech AI, the aforementioned cuOpt for process and routing optimization, and Earth-2 for climate and weather simulations.
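To make the “business data” angle concrete, here is a minimal retrieval sketch. It assumes an embedding microservice exposing an OpenAI-compatible /v1/embeddings endpoint; the URL and model name are assumptions for illustration, not confirmed service names:

```python
import requests
import numpy as np

EMBED_URL = "http://localhost:8001/v1/embeddings"  # assumed embedding service
MODEL = "nvidia/nv-embedqa-e5-v5"                  # illustrative model name

def embed(texts):
    """Fetch embeddings for a list of strings from the assumed endpoint."""
    resp = requests.post(EMBED_URL, json={"model": MODEL, "input": texts}, timeout=60)
    resp.raise_for_status()
    return np.array([d["embedding"] for d in resp.json()["data"]])

# Toy "business data": a few internal documents.
docs = [
    "Q3 revenue grew 12% on datacenter demand.",
    "The Berlin office migrates to the new ERP system in May.",
    "Support ticket backlog dropped 30% after the chatbot rollout.",
]

doc_vecs = embed(docs)
query_vec = embed(["How did the chatbot affect support load?"])[0]

# Cosine similarity picks the document most relevant to the query;
# that text would then be passed to an LLM as grounding context.
scores = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)
print(docs[int(scores.argmax())])
```

The retrieved passage would typically be injected into the prompt of a generative model, which is the pattern the framework integrations above are built around.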
A host of further integrations is also coming to AI Enterprise 5.0, the company said. Business data hosted on Box, Cloudera, Cohesity, DataStax, and the like can be used in AI applications as of version 5.0, and Nvidia-powered hardware can be found in servers and PCs from most major vendors, including Dell, HPE, and Lenovo.
Nvidia described the microservices as a new layer in its full-stack computing platform, connecting model developers with platform providers and enterprises and providing a standardized path for running custom AI models across clouds, data centers, workstations and PCs.