Server spending is at unprecedented levels as enterprises rush to adopt AI

The non-x86 growth is largely being fueled by high demand for Nvidia Blackwell chips, Fernandez explained, as well as by Chinese government moves to support local manufacturers such as Huawei, now one of the main suppliers of ARM processors for the Chinese market.
“We’re thinking the ARM pipeline will continue growing, because the appetite for adopting Blackwell chips that are ARM-based and have GPUs embedded in them is really high,” said Fernandez.
Longer-lasting infrastructure
Server-embedded GPUs became more broadly available in 2024, contributing to the market's growth. So far, most of that inventory has been consumed by cloud service providers (CSPs) and large hyperscalers, which means availability will open up for other enterprises in 2025, Fernandez noted.
“We can probably expect 2025 to be a year of strong investment,” she said, adding that access to supply is “getting much better.”
At the beginning of 2024, the wait time for GPUs was a lengthy 36 weeks. But, she pointed out, “the largest buyers in the world already have their infrastructure, so there’s going to be more availability.”
However, the industry will eventually reach a point where it begins to digest all the infrastructure that's been acquired, so investment will likely start to slow in 2026, she said. Enterprises are beginning to move into phases of AI deployment that don't require as many resources: initial model training, for instance, demands heavy up-front infrastructure investment, but that same infrastructure can later be repurposed for inferencing and fine-tuning.