IBM launches a software-defined storage server for AI
IBM has added a new member to its Spectrum Scale Enterprise Storage Server (ESS) portfolio that features a faster controller CPU and higher throughput, and that is designed to work with Nvidia's DGX dense compute servers for AI training.
The new ESS 3500 is a 2U design with 24 drive bays and a maximum raw capacity of 368TB. With LZ4 compression, a first for the series, it can deliver up to 1PB of effective capacity. The ESS 3500 achieves up to 91GB/s of throughput, up from the 80GB/s of older models.
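To put the compression claim in perspective, going from 368TB raw to roughly 1PB effective implies an average compression ratio of about 2.7:1. A quick sanity check (the ratio here is back-solved from the article's own figures, not a published spec; real-world LZ4 ratios vary heavily with the data being stored):

```python
# Effective capacity from raw capacity and an assumed compression ratio.
# The ~2.7:1 ratio is inferred from the article's 368TB -> ~1PB figures.

def effective_capacity_tb(raw_tb: float, compression_ratio: float) -> float:
    """Effective (post-compression) capacity in TB."""
    return raw_tb * compression_ratio

raw = 368.0                  # TB, ESS 3500 maximum raw capacity
ratio = 1000.0 / raw         # ~2.72:1, back-solved from the 1PB claim
print(round(effective_capacity_tb(raw, ratio)))   # -> 1000, i.e. 1PB
```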
The 3500 runs Spectrum Scale, IBM's scale-out parallel file system that spans on-premises, cloud, and edge networks. It uses dual active controllers with either 100Gbit Ethernet or 200Gbit HDR InfiniBand ports and a 48-core AMD Epyc processor on each controller.
The 3500 directly targets Nvidia’s DGX dense compute systems, which are all GPUs and memory but no storage. It does this through use of Nvidia’s GPUDirect Storage technology, which creates a direct data path between GPUs and storage via NVMe or NVMe over Fabrics (NVMe-oF).
Normally, data needs to be loaded into the CPU and main memory before being moved to the GPU for processing. GPUDirect allows the system to bypass the CPU and main memory completely and provides a direct connection between storage and GPU memory.
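In practice, applications reach GPUDirect Storage through NVIDIA's cuFile API or its Python wrapper, KvikIO. A minimal sketch of a direct storage-to-GPU read follows; it assumes a GDS-capable NVMe setup with the `kvikio` and `cupy` packages installed, and the file name is a hypothetical example:

```python
# Sketch: GPUDirect Storage read via KvikIO (NVIDIA's Python cuFile wrapper).
# Assumes a GDS-capable system with the kvikio and cupy packages installed;
# "training_shard.bin" is a hypothetical example file.
import cupy
import kvikio

# Destination buffer allocated directly in GPU memory (1 MiB of bytes).
buf = cupy.empty(1 << 20, dtype=cupy.uint8)

f = kvikio.CuFile("training_shard.bin", "r")
# read() moves data from storage into GPU memory, bypassing the CPU
# bounce buffer that an ordinary POSIX read would go through.
nbytes = f.read(buf)
f.close()

print(f"read {nbytes} bytes directly into GPU memory")
```

On hardware without GDS support, KvikIO falls back to a compatibility path that stages data through host memory, so the same code runs either way.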
IBM claims that auto parts maker Continental was able to cut AI training time for self-driving vehicles by as much as 70% using IBM Spectrum Scale and the ESS 3500 with a DGX system.
The ESS 3500 is available now.
Copyright © 2022 IDG Communications, Inc.