HPE, Nvidia broaden AI infrastructure lineup

“Accelerated by 2 NVIDIA H100 NVL, [HPE Private Cloud AI Developer System] includes an integrated control node, end-to-end AI software that includes NVIDIA AI Enterprise and HPE AI Essentials, and 32TB of integrated storage providing everything a developer needs to prove and scale AI workloads,” Corrado wrote.
In addition, HPE Private Cloud AI includes support for new Nvidia GPUs and blueprints that deliver proven and functioning AI workloads like data extraction with a single click, Corrado wrote.
HPE data fabric software
HPE has also extended support for its Data Fabric technology across the Private Cloud offering. Data Fabric aims to create a unified, consistent data layer that spans diverse locations, including on-premises data centers, public clouds, and edge environments, to provide a single, logical view of data regardless of where it resides, HPE said.
“The new release of Data Fabric Software Fabric is the data backbone of the HPE Private Cloud AI data lakehouse and provides an Iceberg interface for PC-AI users to data housed throughout their enterprise. This unified data layer allows data scientists to connect to external stores and query that data as Iceberg-compliant data without moving the data,” wrote HPE’s Ashwin Shetty in a blog post. “Apache Iceberg is the emerging format for AI and analytical workloads. With this new release Data Fabric becomes an Iceberg endpoint for AI engineering. This makes it simple for AI engineering data scientists to easily point to the data lakehouse data source and run a query directly against it. Data Fabric takes care of metadata management, secure access, joining files or objects across any source on-premises or in the cloud in the global namespace.”
In addition, HPE Private Cloud AI now supports pre-validated Nvidia blueprints to help customers implement AI workloads.
AI infrastructure optimization
Aiming to help customers manage their AI infrastructure, HPE enhanced its OpsRamp management package, which monitors servers, networks, storage, databases, and applications. The company added GPU optimization support to OpsRamp, meaning the platform can now manage AI-native software stacks and deliver full-stack observability into the performance of training and inference workloads running on large Nvidia accelerated computing clusters, HPE stated.