IBM launches a software-defined storage server for AI
IBM has added a new member to its Spectrum Scale Enterprise Storage Server (ESS) portfolio that features a faster controller CPU and higher throughput, and is designed to work with Nvidia’s DGX dense compute servers for AI training.
The new ESS 3500 is a 2U design with 24 drive bays and a maximum raw capacity of 368TB, but it can reach up to 1PB of effective capacity through LZ4 compression, a feature earlier ESS models lacked. The ESS 3500 delivers up to 91GB/s of throughput, up from the 80GB/s of the older models.
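The raw and effective capacity figures above imply a particular compression ratio. The arithmetic below is a sketch based only on the numbers in the article; the ratio itself is an inference, not an IBM specification, and real-world LZ4 ratios vary heavily with the data being stored.

```python
# Implied LZ4 compression ratio for the ESS 3500, using the
# article's figures (368TB raw, ~1PB effective). Illustrative only.
RAW_TB = 368          # maximum raw capacity
EFFECTIVE_TB = 1000   # ~1PB effective capacity with LZ4

ratio = EFFECTIVE_TB / RAW_TB
print(f"implied compression ratio: {ratio:.2f}:1")  # -> about 2.72:1
```

In practice, compressible data (logs, text, sparse tensors) approaches or exceeds such ratios, while already-compressed data (images, video) gains little.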
The 3500 runs Spectrum Scale, IBM’s scale-out parallel file system that spans on-premises, cloud, and edge networks. It uses dual active controllers, each with a 48-core AMD Epyc processor and either 100Gbit Ethernet or 200Gbit HDR InfiniBand ports.
The 3500 directly targets Nvidia’s DGX dense compute systems, which are all GPUs and memory but no storage. It does this through use of Nvidia’s GPUDirect Storage technology, which creates a direct data path between GPUs and storage via NVMe or NVMe over Fabrics (NVMe-oF).
Normally, data needs to be loaded into the CPU and main memory before being moved to the GPU for processing. GPUDirect allows the system to bypass the CPU and main memory completely and provides a direct connection between storage and GPU memory.
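The difference between the two data paths can be sketched as a toy model. This is not real GPUDirect code; a Python list stands in for each memory region purely to make the copy counts explicit.

```python
# Toy model of the two data paths described above. Each list() call
# stands in for one data copy between memory regions; no real GPU or
# storage APIs are involved.

def conventional_path(storage_block):
    host_buffer = list(storage_block)   # copy 1: storage -> host RAM, CPU in the loop
    gpu_memory = list(host_buffer)      # copy 2: host RAM -> GPU memory
    return gpu_memory, 2                # two copies total

def gpudirect_path(storage_block):
    gpu_memory = list(storage_block)    # single DMA: storage -> GPU memory directly
    return gpu_memory, 1                # one copy; host CPU and RAM bypassed

data = [1, 2, 3, 4]
_, copies = conventional_path(data)
_, direct_copies = gpudirect_path(data)
print(copies, direct_copies)  # -> 2 1
```

Eliminating the intermediate host copy is where the bandwidth and latency gains come from when feeding GPU-dense systems like DGX.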
IBM claims that auto-parts maker Continental, using IBM Spectrum Scale and the ESS 3500 with a DGX system, was able to cut AI training time for self-driving vehicles by as much as 70%.
The ESS 3500 is available now.
Copyright © 2022 IDG Communications, Inc.