First combined AI-RAN network from Nvidia and SoftBank supports inferencing, claims return of $5 for every $1 invested

Bringing AI as close as possible to enterprise

SoftBank performed an outdoor trial in Japan’s Kanagawa prefecture in which its AI-RAN infrastructure built on Nvidia AI Enterprise achieved carrier-grade 5G performance while using excess capacity to concurrently run AI inference workloads. These workloads included multimodal retrieval-augmented generation (RAG) at the edge, robotics control, and autonomous vehicle remote support. SoftBank is calling the trial ‘AITRAS.’

In inferencing, pre-trained AI models process previously unseen data to make predictions and decisions. Edge computing moves this work closer to the data sources to reduce latency.
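The distinction matters: at inference time the model's parameters are fixed, and only predictions are produced. A minimal sketch of this idea (a hypothetical linear classifier, not SoftBank's or Nvidia's stack) looks like:

```python
# Hypothetical "pre-trained" linear model: the weights were fixed
# during training and are never updated at inference time.
weights = [0.8, -0.3, 0.5]
bias = 0.1

def infer(features):
    """Score one previously unseen sample; no learning occurs here."""
    score = bias + sum(w * x for w, x in zip(weights, features))
    return "positive" if score > 0 else "negative"

# Inference on data the model has never seen before.
print(infer([1.0, 2.0, 0.5]))   # prediction only; weights unchanged
```

Running this at the edge, next to where `features` are generated, is what removes the round trip to a distant data center.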

Garcia pointed out that the concept of edge intelligence has emerged in the last 18 months following the launch of ChatGPT. It pulls together enterprise edge (data centers), operational edge (physical branches), engagement edge (where enterprises interact with consumers) and provider edge (where AI-RAN sits).

This new partnership reflects a market trend toward “bringing AI as close as possible to the enterprise. Enterprises rely on providers for infrastructure for not only running model training, but also inferencing,” Garcia said.

Converting from cost center to revenue-generating asset

Traditional RAN infrastructure is built around custom application-specific integrated circuits (ASICs) designed solely to run RAN workloads. By contrast, as Nvidia’s Vasishta explained, RAN and AI workloads built on Nvidia infrastructure are software-defined, and can be orchestrated or provisioned according to need.

This allows a standards-compliant 5G software stack to be accelerated to match, and in some cases exceed, the performance per watt of traditional RAN infrastructure, he said.



