HPE beats Dell and Supermicro in $1B AI server deal with X
Analysts see HPE’s landmark $1 billion deal with X as a major endorsement of its AI capabilities, though competition remains fierce in the high-growth AI server market.
“HPE’s $1 billion deal with X not only enhances its credibility but also highlights its recognition for high-performance computing capabilities,” said Rachita Rao, senior analyst at Everest Group. “While HPE focuses on software-integrated AI infrastructure, Dell’s strong partnership with Nvidia and scalable AI solutions tailored for mid-market customers position it as a major competitor. Supermicro, despite its robust AI server capabilities, is currently grappling with challenges stemming from accounting irregularities, which may hinder its market position.”
Musk’s expanding AI ambitions
Elon Musk’s well-documented commitment to AI advancements adds context to the scale of this agreement.
Musk’s ventures, including Tesla and xAI, have been among the most prominent adopters of AI infrastructure. Notably, xAI’s Colossus supercomputer, built by Dell with 100,000 Nvidia H100 GPUs, already ranks among the largest AI training clusters in operation.
This new partnership between X and HPE could fuel Musk’s vision of integrating advanced AI capabilities across his companies. For instance, Musk recently revealed plans to incorporate xAI’s Grok chatbot into Tesla vehicles, allowing users to interact with their cars using conversational AI.
The deal also underscores evolving technology needs. “Nvidia’s Blackwell technology is likely to drive demand for liquid cooling solutions to manage heat-intensive AI workloads. While scalable infrastructure and compute power are critical now, future growth will hinge on customized AI models requiring specialized chips like ASICs and FPGAs for optimized performance,” Rao noted.