Nvidia aims to bring AI to wireless

Key features of ARC-Compact include:

  • Energy Efficiency: Utilizing the L4 GPU (72-watt power footprint) and an energy-efficient ARM CPU, ARC-Compact aims for a total system power comparable to custom baseband unit (BBU) solutions currently in use.
  • 5G vRAN support: It fully supports 5G TDD, FDD, massive MIMO, and all O-RAN splits (inline and lookaside architectures) using Nvidia’s Aerial L1+ libraries and full stack components.
  • AI-native capabilities: The L4 GPU enables the execution of AI for RAN algorithms, neural networks, and agile AI applications such as video processing, which are typically not possible on custom BBUs.
  • Software upgradeability: Consistent with the homogeneous architecture principle, the same software runs on both cell sites and aggregated sites, allowing for future upgrades, including to 6G.

Velayutham emphasized the power of Nvidia’s homogeneous platform, likening it to iOS on the iPhone. The CUDA and DOCA software platforms abstract the underlying hardware (ARC-Compact, ARC-1, discrete GPUs, DPUs) from the applications. This means that vRAN and AI application developers can write their software once and run it seamlessly across different Nvidia hardware configurations, which future-proofs deployments.

Power-efficient and cost-competitive

There has been some skepticism around whether GPU-powered vRAN can match the power and cost efficiency of custom BBUs. Nvidia asserts that it has crossed a tipping point with ARC-Compact, achieving comparable or better performance per watt. The company didn’t disclose pricing details, but the L4 GPU is relatively inexpensive (sub-$2,000), suggesting a competitive total system cost (estimated to be sub-$10,000).

The path to AI-native RAN and 6G

Nvidia envisions the transition to AI-native RAN as a multi-step process:

  • Software-defined RAN: Moving RAN workloads to a software-defined architecture.
  • Performance baseline: Ensuring current performance is comparable to traditional architectures.
  • AI integration: Building on this foundation to integrate AI for RAN algorithms for spectral efficiency gains.

Nvidia believes AI is ideally suited for radio signal processing, as traditional mathematical models from the 1950s and 60s are often static and not optimized for dynamic wireless conditions. AI-driven neural networks, on the other hand, can learn individual site conditions and adapt, resulting in significant throughput improvements and spectral efficiency gains. This is crucial given the hundreds of billions of dollars providers spend on spectrum acquisition. Nvidia has said it aims for an order-of-magnitude gain in spectral efficiency within the next two years, potentially a 40x improvement from the last decade.
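The adaptation argument can be illustrated with a toy sketch. The example below is not Nvidia's method: it stands in a simple learned linear model for a neural network, and models "site conditions" as a per-site gain and offset on the received signal. All names and parameters are hypothetical. A receiver calibrated once on nominal conditions (the "static 1950s-style model") is compared with one fitted to measurements from the actual site.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_site_data(gain, offset, n=200):
    """Hypothetical site: received signal y is a distorted copy of symbols x."""
    x = rng.uniform(-1, 1, n)                               # transmitted symbols
    y = gain * x + offset + 0.05 * rng.standard_normal(n)   # received, with noise
    return x, y

# "Static" receiver: calibrated once on nominal conditions (gain=1, offset=0)
# and never adapted afterwards.
x0, y0 = make_site_data(1.0, 0.0)
static = np.polyfit(y0, x0, 1)      # linear inverse mapping y -> x

# A specific site with different propagation conditions; the adapted model
# is fitted on that site's own measurements.
xs, ys = make_site_data(0.7, 0.2)
adapted = np.polyfit(ys, xs, 1)

def mse(coef, x, y):
    """Mean squared symbol-recovery error of a fitted inverse mapping."""
    return float(np.mean((np.polyval(coef, y) - x) ** 2))

err_static = mse(static, xs, ys)
err_adapted = mse(adapted, xs, ys)
print(err_static, err_adapted)      # adapted model recovers symbols more accurately
```

A real AI-for-RAN pipeline would replace the linear fit with a neural network trained on channel measurements, but the mechanism is the same: a model fitted to local conditions outperforms a fixed one when those conditions drift from the nominal assumptions.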

To make this possible, Nvidia’s tools, including Sionna and the Aerial AI Radio Frameworks, support rapid development and training of AI-native algorithms. The Aerial Omniverse Digital Twin enables simulation and fine-tuning of algorithms before deployment, mirroring the approach used in autonomous driving, another area of focus for Nvidia.
