Microsoft’s Maia AI, Azure Cobalt chips to rev up efficiency, performance
Additionally, analysts believe that the new chips provide a window of opportunity for Microsoft to build its own AI accelerator software frameworks as demand for AI or generative AI grows further.
“Building accelerators for AI workloads will be a way to improve performance while using less power than other chips such as graphics processing units (GPUs). Increasing performance while being energy efficient will continue to be more important for vendors and enterprises as they attempt to meet sustainability goals and benefit from the potential of AI,” Newman said.
Custom chips to give Nvidia, AMD and Intel a run for their money
Microsoft’s new custom chips are not powerful enough to replace GPUs from Nvidia for the purposes of developing large language models. But they are well-suited for inferencing — running operational AI workloads — and as they roll out they will reduce the company’s need for chips from Nvidia, AMD and Intel, analysts said, adding that custom chips from AWS and Google will also challenge chipmakers in the future.
“Intel, NVIDIA, and AMD are all seeing the rise of Arm-based instances and should see them as a competitive threat in certain instances,” Newman said.
The migration of workloads from x86 architecture chips to Arm isn’t yet plug-and-play — software is often written for specific chip architectures — but it has become less of a sticking point as developers continue to make progress in running more and more workloads on Arm, the Futurum Group’s Newman said.
Analysts say that with cloud service providers using custom silicon at varying levels, the data center market will see a “more meaningful shift” to Arm in the coming years despite x86 chips currently dominating market share by a substantial margin.
Among all chipmakers, Newman believes that Nvidia will be the least impacted, at least in the near term, as demand for its GPUs is set to remain elevated.
However, in some use cases the custom chips from cloud service providers may complement rather than compete with Nvidia’s offerings, especially the Grace Hopper chips, which are targeted at developing and training large language models.
Microsoft’s new custom chips are expected to start rolling out to its data centers early next year. Since Microsoft does not plan to sell the chips to third parties, it will not have to deal with the restrictions imposed by the administration of US President Joe Biden on tech exports to China.