Intel flexes AI chops with Gaudi 3 accelerator, new networking for AI fabrics
The Xeon 6 processors offer a 4x performance improvement and nearly 3x better rack density compared with second-generation Intel Xeon processors, Intel stated.
Taking aim at Nvidia and targeting large-scale AI processing needs, Intel announced the Gaudi 3 AI accelerator chip, which it says is on average 40% more power efficient than comparable Nvidia H100 chips.
“The Intel Gaudi 3 AI accelerator will power AI systems with up to tens of thousands of accelerators connected through the common standard of Ethernet,” Intel stated. For example, twenty-four 200-gigabit Ethernet ports are integrated into every Intel Gaudi 3 accelerator, providing flexible, open-standard networking.
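As a rough illustration of the networking headroom those figures imply, the per-accelerator aggregate Ethernet bandwidth can be worked out from the stated port count and speed (a back-of-the-envelope sketch based on the numbers above, not an Intel specification):

```python
# Aggregate Ethernet bandwidth per Gaudi 3 accelerator,
# based on Intel's stated 24 integrated 200GbE ports.
ports_per_accelerator = 24
port_speed_gbps = 200  # 200-gigabit Ethernet per port

aggregate_gbps = ports_per_accelerator * port_speed_gbps
print(f"{aggregate_gbps} Gb/s per accelerator")   # 4800 Gb/s
print(f"{aggregate_gbps / 1000} Tb/s")            # 4.8 Tb/s
```

That roughly 4.8 Tb/s of per-device connectivity is what allows clusters to scale to the "tens of thousands of accelerators" Intel describes over standard Ethernet rather than a proprietary fabric.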
Intel Gaudi 3 promises 4x more AI compute and 1.5x greater memory bandwidth than its predecessor, the Gaudi 2, enabling efficient scaling across large compute clusters while eliminating the vendor lock-in of proprietary networking fabrics, Intel stated.
The idea is that the accelerator can deliver a leap in performance for AI training and inference, giving enterprises a choice of systems when deploying generative AI at scale, Katti said.
The Intel Gaudi 3 accelerator will be available to original equipment manufacturers in the second quarter of 2024 in industry-standard configurations of Universal Baseboard and open accelerator module (OAM). Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro are among the vendors that will implement Gaudi 3 in servers and other hardware. General availability of Intel Gaudi 3 accelerators is set for the third quarter of 2024.