Intel flexes AI chops with Gaudi 3 accelerator, new networking for AI fabrics

The Xeon 6 processors offer a 4x performance improvement and nearly 3x better rack density compared with second-generation Intel Xeon processors, Intel stated.

Taking aim at Nvidia and targeting large-scale AI processing needs, Intel announced the Gaudi 3 AI accelerator chip, which it says is on average 40% more power efficient than comparable Nvidia H100 chips.

“The Intel Gaudi 3 AI accelerator will power AI systems with up to tens of thousands of accelerators connected through the common standard of Ethernet,” Intel stated. For example, 24 200-gigabit Ethernet ports are integrated into every Intel Gaudi 3 accelerator, providing flexible and open-standard networking.
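As a rough illustration of what that port count implies, the sketch below multiplies out the per-accelerator Ethernet bandwidth. The per-port figures come from Intel's stated spec; the aggregate total is simple arithmetic, not a number Intel published:

```python
# Back-of-the-envelope aggregate Ethernet bandwidth for one Gaudi 3
# accelerator, based on Intel's stated 24 integrated 200GbE ports.
ports_per_accelerator = 24
port_speed_gbps = 200  # 200-gigabit Ethernet per port

aggregate_gbps = ports_per_accelerator * port_speed_gbps
aggregate_tbps = aggregate_gbps / 1000

print(f"Per-accelerator Ethernet bandwidth: {aggregate_gbps} Gb/s "
      f"({aggregate_tbps:.1f} Tb/s)")
```

That headroom, spread across many standard Ethernet links rather than a proprietary fabric, is what underpins Intel's claim of scaling to tens of thousands of accelerators.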

Intel Gaudi 3 promises 4x more AI compute and a 1.5x increase in memory bandwidth over its predecessor, the Gaudi 2, to allow efficient scaling to support large compute clusters and eliminate vendor lock-in from proprietary networking fabrics, Intel stated.

The idea is that the accelerator delivers a leap in performance for AI training and inference, giving enterprises a choice of systems to deploy when scaling generative AI, Katti said.

The Intel Gaudi 3 accelerator will be available to original equipment manufacturers in the second quarter of 2024 in industry-standard configurations of Universal Baseboard and Open Accelerator Module (OAM). Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro are among the vendors that will implement Gaudi 3 in servers and other hardware. General availability of Intel Gaudi 3 accelerators is set for the third quarter of 2024.


