High-bandwidth memory nearly sold out until 2026
While it is easy to blame Nvidia for this shortage, it’s not alone in driving demand for high-performance computing and the memory that goes with it. AMD is making a run, Intel is trying, and many major cloud service providers — including Amazon, Facebook, Google, and Microsoft — are building their own processors. All of them are designing custom silicon, and all of it needs HBM.
That leaves the smaller players on the outside looking in, says Jim Handy, principal analyst with Objective Analysis. “It’s a much bigger challenge for the smaller companies. In chip shortages the suppliers usually satisfy their biggest customers’ orders and send their regrets to the smaller companies. This would include companies like SambaNova, a start-up with an HBM-based AI processor,” he said.
DRAM fabs can be shifted rapidly from one product to another, as long as every product uses the same manufacturing process. That means they can move easily from DDR4 to DDR5, or from DDR to LPDDR, or to the GDDR used on graphics cards.
That’s not the case with HBM, because HBM alone relies on a complex and highly technical manufacturing step called through-silicon vias (TSVs) that is not used anywhere else. The wafers also need to be modified differently from standard DRAM, and that can make shifting manufacturing priorities very difficult, said Handy.
So if you recently placed an order for an HPC GPU, you may have to wait, and the wait could stretch to 18 months.