Nvidia introduces ‘ridesharing for AI’ with DGX Cloud Lepton

The platform is currently in early access, but CoreWeave, Crusoe, Firmus, Foxconn, GMI Cloud, Lambda, Nscale, SoftBank, and Yotta have already agreed to make “tens of thousands of GPUs” available to customers.
Developers can tap GPU compute capacity in specific regions for both on-demand and long-term use, supporting strategic and sovereign AI requirements. Nvidia also expects leading cloud service providers and GPU marketplaces to participate in the DGX Cloud Lepton marketplace.
The platform uses the Nvidia AI software stack, including NIM and NeMo microservices, Nvidia Blueprints, and Nvidia Cloud Functions, to accelerate and simplify the development and deployment of AI applications.
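For a sense of what building on this stack looks like: NIM microservices expose an OpenAI-compatible HTTP API. The sketch below, in Python's standard library only, shows how a developer might call such an endpoint; the URL and model name are placeholders for illustration, not details from Nvidia's announcement, and assume a NIM container is already serving locally.

```python
import json
import urllib.request

# Hypothetical local NIM endpoint; NIM containers expose an
# OpenAI-compatible chat completions API under /v1/.
NIM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "meta/llama-3.1-8b-instruct"  # placeholder model name


def build_request(prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for a NIM endpoint."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }


def ask(prompt: str) -> str:
    """POST the payload to the NIM endpoint and return the reply text."""
    body = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        NIM_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Summarize DGX Cloud Lepton in one sentence."))
```

Because the request format follows the OpenAI convention, the same client code can target any GPU provider in the marketplace that serves the model through NIM.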