Nvidia's HPC/AI chip is actually six chips
The H200 NVL connects four cards, double the number of its predecessor, the H100 NVL. It also offers the option of liquid cooling, which the H100 did not. Instead of communicating over PCIe, the H200 NVL uses an NVLink interconnect bridge, which enables bidirectional throughput of 900 GB/s per GPU, seven times that of PCIe 5.
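A quick back-of-envelope check of the "seven times PCIe 5" figure, assuming a PCIe 5.0 x16 link at roughly 64 GB/s per direction (about 128 GB/s bidirectional):

```python
# Sanity check of the NVLink-vs-PCIe bandwidth claim.
# Assumption: PCIe 5.0 x16 ~= 64 GB/s each way, ~128 GB/s bidirectional.
nvlink_bidir_gbps = 900          # NVLink bridge, per GPU, bidirectional
pcie5_x16_bidir_gbps = 2 * 64    # PCIe 5.0 x16, bidirectional

ratio = nvlink_bidir_gbps / pcie5_x16_bidir_gbps
print(f"NVLink is ~{ratio:.1f}x PCIe 5.0 x16")  # ~7.0x
```

The ratio comes out to roughly 7x, consistent with the stated comparison.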
The H200 NVL is intended for enterprises accelerating AI and HPC applications while also improving energy efficiency through reduced power consumption. It offers 1.5x the memory and 1.2x the bandwidth of the H100 NVL. For HPC workloads, performance is boosted by up to 1.3x over the H100 NVL and 2.5x over the NVIDIA Ampere architecture generation.
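The memory and bandwidth multipliers can be checked against the publicly stated per-GPU specs; the figures below (H200 NVL at 141 GB / 4.8 TB/s versus H100 NVL at 94 GB / 3.9 TB/s) are assumed from Nvidia's published spec sheets, not from this article:

```python
# Back-of-envelope check of the 1.5x memory / 1.2x bandwidth claims.
# Assumed specs: H200 NVL = 141 GB HBM3e, 4.8 TB/s;
#                H100 NVL = 94 GB HBM3,  3.9 TB/s.
h200_mem_gb, h100_mem_gb = 141, 94
h200_bw_tbs, h100_bw_tbs = 4.8, 3.9

print(f"memory:    {h200_mem_gb / h100_mem_gb:.1f}x")  # ~1.5x
print(f"bandwidth: {h200_bw_tbs / h100_bw_tbs:.1f}x")  # ~1.2x
```

Both ratios line up with the multipliers quoted above.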
Dell Technologies, Hewlett Packard Enterprise, Lenovo, and Supermicro are all expected to deliver a wide range of configurations supporting the H200 NVL. Additionally, the H200 NVL will be available in platforms from Aivres, ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, MSI, Pegatron, QCT, Wistron, and Wiwynn.