AMD launches Instinct AI accelerator to compete with Nvidia
AMD has run a distant second to Nvidia in the GPU-accelerated HPC market, even though its accelerators power the world's fastest supercomputer. It's looking to gain ground with the launch of the Instinct MI300X data center GPU.
AMD CEO Lisa Su kicked off the launch event and compared the AI revolution to the Internet revolution that began 30 years ago. “But what’s different about AI is that the adoption rate is just much, much faster. So although so much has happened, the truth is, right now we’re just at the very beginning of the AI era. And we can see how it’s so capable of touching every aspect of our lives,” she said.
The company first introduced the Instinct MI300 family at CES earlier this year. It formally launched the Instinct MI300X along with its CPU-GPU hybrid chip, the Instinct MI300A, at its Advancing AI event in San Jose, Calif., on Thursday, marking its biggest challenge yet to Nvidia’s dominance in the HPC acceleration race.
OEM support is not in question. Several OEMs, including Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro, said they would ship servers with the MI300X accelerator card. In addition, HPE, Supermicro, Gigabyte, and Atos subsidiary Eviden will ship servers with the MI300A card next year.
On the cloud side, the MI300X will be used to power upcoming virtual machine instances from Microsoft Azure and bare metal instances from Oracle Cloud Infrastructure. In addition, smaller cloud service providers like Aligned, Akron Energy, Cirrascale, Crusoe and Denvr Dataworks said that they would also support MI300X.
AMD also announced ROCm 6, an update to its GPU programming platform, which it promotes as an alternative to Nvidia's CUDA. The update features optimizations for generative AI, particularly large language models, along with support for new data types, advanced graph and kernel optimizations, optimized libraries and advanced attention algorithms.
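For context on why ROCm is pitched as a CUDA alternative: its HIP layer mirrors CUDA's C++ kernel syntax closely, so familiar GPU code can often be compiled for AMD hardware with few changes. The minimal vector-add sketch below, built with AMD's `hipcc` compiler, is an illustration of that portability, not code from AMD's announcement.

```cpp
// Minimal HIP example: add two vectors on the GPU.
// Build (on a ROCm-enabled system): hipcc vector_add.cpp -o vector_add
#include <hip/hip_runtime.h>
#include <cstdio>

// Kernel syntax is the same as CUDA: __global__, blockIdx, threadIdx, etc.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers
    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers: hipMalloc/hipMemcpy parallel cudaMalloc/cudaMemcpy
    float *da = nullptr, *db = nullptr, *dc = nullptr;
    hipMalloc(&da, bytes);
    hipMalloc(&db, bytes);
    hipMalloc(&dc, bytes);
    hipMemcpy(da, ha, bytes, hipMemcpyHostToDevice);
    hipMemcpy(db, hb, bytes, hipMemcpyHostToDevice);

    // CUDA-style triple-chevron launch, which hipcc also accepts
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(da, db, dc, n);

    hipMemcpy(hc, dc, bytes, hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    hipFree(da); hipFree(db); hipFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}
```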