Lambda raises $320 million for GPU cloud
AI cloud service provider Lambda has scored a $320 million cash infusion to build out its GPU-based services, which provide AI training clusters made up of thousands of Nvidia accelerators.
Lambda is the latest cloud company to offer GPU-based compute – rather than standard CPU instances – dedicated to AI workloads, particularly training and inference. Vultr, CoreWeave, and Voltage Park offer similar GPU cloud services.
Lambda is preparing to deploy “tens of thousands” of Nvidia GPUs, including the current top-of-the-line H100 Hopper accelerators as well as Nvidia’s forthcoming H200 GPU accelerators, which are set to double the performance of the H100. Lambda is also looking to deploy Nvidia’s hybrid GH200 CPU/GPU superchips.
Lambda’s stated mission is to build “the #1 AI compute platform in the world,” and to accomplish this, “we’ll need lots of Nvidia GPUs, ultra-fast networking, lots of data center space, and lots of great new software to delight you and your AI engineering team,” it said in a statement announcing the funding.
The $320 million Series C round is led by a number of venture funds, including B Capital, SK Telecom, and T. Rowe Price Associates, Inc., alongside existing investors Crescent Cove, Mercato Partners, 1517 Fund, Bloomberg Beta, and Gradient Ventures, among others.
“With this new financing, Lambda will accelerate the growth of our GPU cloud, ensuring AI engineering teams have access to thousands of Nvidia GPUs with high-speed Nvidia Quantum-2 InfiniBand networking,” the company said.
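For readers curious what access to a multi-node Nvidia GPU cluster looks like in practice, below is a minimal sketch, not specific to Lambda's platform, of initializing distributed training with PyTorch's NCCL backend, which can use an InfiniBand fabric such as Quantum-2 when the host exposes it. The launcher (e.g. torchrun) and the environment variables it sets (RANK, WORLD_SIZE, LOCAL_RANK, MASTER_ADDR, MASTER_PORT) are assumptions here, not details from the announcement.

```python
# Minimal sketch: verify that every GPU in a multi-node job can communicate.
# Assumes the job is launched with torchrun (or a similar launcher) that sets
# the standard distributed environment variables.
import os
import torch
import torch.distributed as dist

def init_distributed() -> int:
    # NCCL handles GPU-to-GPU communication; across nodes it will use
    # RDMA/InfiniBand transports if the fabric and drivers are available.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)
    return local_rank

if __name__ == "__main__":
    local_rank = init_distributed()
    # A simple all-reduce: the result equals the total number of processes,
    # confirming the collective path works across all GPUs in the job.
    t = torch.ones(1, device=f"cuda:{local_rank}")
    dist.all_reduce(t)
    print(f"rank {dist.get_rank()}: sum across GPUs = {t.item()}")
    dist.destroy_process_group()
```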