Nscale offers AMD AI chips-as-a-service in green data center

Lee also discussed the implications for various stakeholders in the AI industry. “Developers must understand more precisely the nature of the AI computation. General data processing, training, and inference are different types of computation that stress different types of hardware in a data center server. Increasingly, developers might use one type of server for model training and another for inference,” he said. “In addition to AMD’s offering, AI developers will increasingly see data centers deploy other custom chips for inference from Intel, Microsoft, Google, Meta, and others.”

When asked how this move differs from other AI-focused computing infrastructures provided by hyperscalers like Azure, AWS, or GCP, Lee pointed to AMD’s long-standing efforts in creating and popularizing the ROCm software ecosystem. “Whether AMD’s chips will gain traction depends on whether ROCm provides sufficient support for inference computations compared to hyperscaler alternatives,” he noted.

Olivier Blanchard, Research Director at The Futurum Group, suggested several factors that may have influenced Nscale’s decision to work with AMD. “Nscale already has a good working relationship with AMD and decided to strengthen it by choosing their GPUs over NVIDIA’s,” he explained. Additionally, Blanchard pointed out that there might be a cost benefit, as “NVIDIA GPUs tend to price high.”

Supply chain considerations could also have played a role in Nscale’s choice. “It could also be a supply chain decision: NVIDIA GPUs can come with six-month lead times, and high demand could create supply bottlenecks. It is possible that AMD can provide better lead times and a lower risk of supply chain disruptions,” Blanchard noted.
