AMD unveils exascale data-center accelerator at CES
The Consumer Electronics Show (CES) might be the last place you’d expect an enterprise product to debut, but AMD unveiled a new server accelerator among the slew of consumer CPUs and GPUs it launched at the Las Vegas show.
AMD took the wraps off its Instinct MI300 accelerator, and it’s a doozy.
The accelerated processing unit (APU) is a mix of 13 chiplets, including CPU cores, GPU cores, and high bandwidth memory (HBM). Tallied together, AMD’s Instinct MI300 accelerator comes in at 146 billion transistors. For comparison, Intel’s ambitious Ponte Vecchio processor will be around 100 billion transistors, and Nvidia’s Hopper H100 GPU is a mere 80 billion transistors.
The Instinct MI300 has 24 Zen 4 CPU cores and six CDNA chiplets. CDNA is the data center version of AMD’s RDNA consumer graphics technology. AMD has not said how many GPU cores each chiplet contains. Rounding out the Instinct MI300 is 128GB of HBM3 memory stacked in a 3D design.
The 3D design allows for tremendous data throughput between the CPU, GPU, and memory dies. Data doesn’t need to leave the package to reach external DRAM; it goes to the on-package HBM stack, drastically reducing latency. The design also lets the CPU and GPU work on the same data in memory simultaneously, which speeds up processing.
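As a rough analogy only (this is plain Python, not AMD code, and real hardware differs greatly), the difference between a discrete CPU-plus-GPU design and a unified-memory APU can be sketched as eliminating the explicit host-to-device copies:

```python
# Analogy: "device memory" is modeled as a separate Python list.
# In a discrete design, data is copied to the device, processed, and copied back.
# In a unified design, CPU and GPU operate on one copy of the data in place.

def discrete_flow(host_data: list[int]) -> list[int]:
    """Discrete CPU + GPU: data crosses to device memory and back."""
    device_data = list(host_data)              # host -> device transfer (copy)
    device_data = [x * 2 for x in device_data]  # "kernel" runs on the copy
    return list(device_data)                   # device -> host transfer (copy)

def unified_flow(shared_data: list[int]) -> list[int]:
    """Unified memory (MI300-style): one copy of the data, no transfers."""
    for i in range(len(shared_data)):
        shared_data[i] *= 2                    # "kernel" operates in place
    return shared_data

data = [1, 2, 3, 4]
assert discrete_flow(list(data)) == unified_flow(data)  # same result, fewer copies
```

The point of the sketch is the deleted copy steps, not the arithmetic: removing the transfers is where the latency saving comes from.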
AMD CEO Lisa Su announced the chip at the end of her 90-minute CES keynote, saying MI300 is “the first chip that brings together a CPU, GPU, and memory into a single integrated design. What this allows us to do is share system resources for the memory and IO, and it results in a significant increase in performance and efficiency as well as [being] much easier to program.”
Su said the MI300 delivers eight times the AI performance and five times the performance per watt of its predecessor, the Instinct MI250. Noting the much-hyped AI chatbot ChatGPT, she said models like it currently take months to train; the MI300 will cut that training time from months to weeks, which could save millions of dollars in electricity, Su said.
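Su didn’t share the assumptions behind that savings figure, but a back-of-envelope calculation shows how shortening a training run translates into electricity dollars. The cluster power draw and price per kilowatt-hour below are purely illustrative assumptions, not AMD figures:

```python
# Illustrative only: electricity cost of a long training run.
# Assumed numbers (not from AMD): a 10 MW training cluster at $0.10/kWh.

def training_energy_cost(power_mw: float, days: float, price_per_kwh: float) -> float:
    """Dollar cost of running a cluster drawing `power_mw` megawatts
    for `days` days at `price_per_kwh` dollars per kilowatt-hour."""
    kwh = power_mw * 1_000 * 24 * days  # MW -> kW, days -> hours
    return kwh * price_per_kwh

three_months = training_energy_cost(10, 90, 0.10)  # ~$2.16M
three_weeks = training_energy_cost(10, 21, 0.10)   # ~$0.50M

print(f"90-day run: ${three_months:,.0f}")
print(f"21-day run: ${three_weeks:,.0f}")
print(f"Savings:    ${three_months - three_weeks:,.0f}")
```

Under those assumptions, cutting a run from three months to three weeks saves on the order of $1.7 million per training run, so the “millions of dollars” claim is plausible at the scale of a large model trained repeatedly.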
Mind you, AMD’s MI250 is an impressive piece of silicon, used in the first exascale supercomputer, Frontier, at the Oak Ridge National Lab.
AMD’s MI300 chip is similar to what Intel is doing with Falcon Shores, due in 2024, and what Nvidia is doing with its Grace Hopper Superchip, due later this year. Su said the chip is in AMD’s labs now and sampling to select customers, with a launch expected in the second half of the year.
New AI accelerator on tap from AMD
The Instinct wasn’t AMD’s only enterprise announcement at CES. Su also introduced the Alveo V70 AI inference accelerator. Alveo comes from the Xilinx FPGA line AMD acquired last year, and the V70 is built on AMD’s XDNA AI engine technology. It can deliver 400 trillion AI operations per second (400 TOPS) across a variety of AI models, including video analytics and customer recommendation engines, according to AMD.
Su said that in video analytics, the Alveo V70 delivers 70% more street coverage for smart-city applications, 72% more hospital-bed coverage for patient monitoring, and 80% more checkout-lane coverage in smart retail stores than the competition, though she didn’t name that competition.
All of this fits within a 75-watt power envelope and a small form factor. AMD is taking pre-orders for the V70 cards starting today, with availability expected this spring.
Copyright © 2023 IDG Communications, Inc.