Seven important trends in the server sphere

Server technology is changing at a rapid pace, driven by hyperscalers but spilling over into the on-premises world as well. Experts point to several overarching trends, including:

  • AI everything: AI mania is everywhere, and without high-powered hardware to run it, it’s just vapor. But AI is more than a buzzword; it is a real and measurable trend. AI servers are notable because they are decked out with high-end CPUs, GPU accelerators, and often a SmartNIC. All the major players — Nvidia, Supermicro, Google, Asus, Dell, Intel, HPE — as well as smaller vendors are offering purpose-built AI hardware, according to a recent Network World article.
  • AI edge server growth: There is also a trend toward deploying AI edge servers. The global edge AI server market is expected to be worth around $26.6 billion by 2034, up from $2.7 billion in 2024, according to a Market.US report. Considerable amounts of data are collected at the edge. Edge servers cull the useless data and send only the necessary data back to data centers for processing. The market is rapidly expanding as industries such as manufacturing, automotive, healthcare, and retail increasingly deploy IoT devices and require immediate data processing for decision-making and operational efficiency, according to the report.
  • Liquid cooling gains ground: Liquid cooling is inching its way from the fringes into the mainstream of data center infrastructure. What was once a difficult add-on is becoming a standard feature, says Jeffrey Hewitt, vice president and analyst with Gartner. “Server providers are working on developing the internal chassis plumbing for direct-to-chip cooling with the goal of supporting the next generation of AI CPUs and GPUs that will produce high amounts of heat within their servers,” he said.
  • New data center structures: Not so much a server trend as a data center trend, but data center layouts are changing to accommodate AI server hardware. AI hardware is extremely dense and runs very hot, more so than typical server systems. Data center operators of every type deploying AI hardware have to be mindful of where they place it, says Naveen Chhabra, senior analyst with Forrester Research.

“You need to identify the zones in which you can put that power,” he said. “You can’t simply concentrate the power into a particular zone in the data center and say, ‘Here is where I’m going to run all my AI applications.’ That may not be the most pragmatic architecture.”

  • Virtualization land grab: Broadcom’s handling of the VMware acquisition has soured many potential customers, and they are looking elsewhere, says Hewitt. “I would say that some server OEMs have been moving to support additional server virtualization options since the acquisition of VMware by Broadcom. This last trend is intended to support other virtualization choices if their clients are seeking those,” he said.
  • InfiniBand starts to fade: InfiniBand will begin to fade as an option for high-speed interconnects in favor of Ethernet, Chhabra said. “The way Ethernet is evolving, expectations are that in two to three years it would have the capability to handle high-speed interconnect. Organizations would not want to maintain two different stacks of connectivity when one would be able to do the job,” he said.
  • Component shortages drive people to the cloud: Chhabra says the current component shortage, coupled with high demand for data center equipment, may drive people to the cloud rather than on-premises deployments. “I can tell you that if you want, let’s say, 20 server units with Nvidia GPUs, you are going to wait for at least a year, year and a half, to effectively get that shipped to your doors. And that is forcing companies to think about, for that interim, can I go source it from somewhere? And people are exploring all those options,” he said.


