High-bandwidth memory nearly sold out until 2026
While it is easy to blame Nvidia for this shortage, it is not alone in driving demand for high-performance computing and the memory that goes with it. AMD is making a run, Intel is trying, and major cloud service providers, including Amazon, Facebook, Google, and Microsoft, are building their own custom silicon. All of it needs HBM.
That leaves the smaller players on the outside looking in, says Jim Handy, principal analyst with Objective Analysis. “It’s a much bigger challenge for the smaller companies. In chip shortages the suppliers usually satisfy their biggest customers’ orders and send their regrets to the smaller companies. This would include companies like SambaNova, a start-up with an HBM-based AI processor,” he said.
DRAM fabs can be shifted rapidly from one product to another, as long as the products use the same process. That means they can move easily from DDR4 to DDR5, or from DDR to the LPDDR or GDDR used in graphics cards.
That’s not the case with HBM. It relies on a complex, highly technical manufacturing step called through-silicon vias (TSV) that is not used anywhere else, and its wafers must be modified differently from standard DRAM, which makes shifting manufacturing priorities over to HBM very difficult, said Handy.
So if you recently placed an order for an HPC GPU, you may have to wait up to 18 months.