Red Hat reveals major enhancements to Red Hat Enterprise Linux AI
Well, that was fast. It was only back in early September that Red Hat released Red Hat Enterprise Linux AI (RHEL AI) 1.0. Now, Red Hat has followed up by announcing the general availability of RHEL AI 1.2, a release with several key improvements for working with large language models (LLMs).
Before jumping into what’s new and improved, here’s a bit about what RHEL AI brings. It’s designed to streamline generative AI (gen AI) model development, testing, and deployment. RHEL AI is also meant to make training LLMs affordable.
This new platform combines IBM Research’s open-source-licensed Granite LLM family, the InstructLab alignment tools based on the LAB (Large-scale Alignment for chatBots) methodology, and a collaborative approach to model development via the open-source InstructLab project.
RHEL AI also uses Retrieval-Augmented Generation (RAG) to enable LLMs to access approved external knowledge stored in databases, documents, and other data sources. This approach improves RHEL AI’s ability to deliver a correct answer rather than one that merely sounds correct.
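To make the pattern concrete, here's a minimal sketch of the RAG flow in Python. The document store, scoring function, and prompt format are illustrative assumptions rather than RHEL AI's actual interfaces; a real deployment would use vector embeddings and a proper retriever.

```python
# Toy sketch of the RAG pattern: retrieve the approved document most
# relevant to a question, then ground the model's prompt in it. The
# document store and scoring here are stand-ins, not RHEL AI's API.
from collections import Counter
import math

APPROVED_DOCS = {
    "vacation-policy": "Employees accrue 1.5 vacation days per month.",
    "expense-policy": "Expenses over $500 require manager approval.",
}

def score(query: str, doc: str) -> float:
    """Cosine similarity over raw word counts; real systems use embeddings."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum(q[w] * d[w] for w in q)
    norms = math.sqrt(sum(v * v for v in q.values())) * \
            math.sqrt(sum(v * v for v in d.values()))
    return overlap / norms if norms else 0.0

def build_prompt(question: str) -> str:
    # Inject the best-matching approved document as context, so the model
    # answers from vetted knowledge instead of improvising.
    best = max(APPROVED_DOCS.values(), key=lambda doc: score(question, doc))
    return f"Answer using only this context:\n{best}\n\nQuestion: {question}"

print(build_prompt("How many vacation days do employees get?"))
```

The grounded prompt is then handed to the LLM, which is why answers track approved sources rather than whatever the model half-remembers from pretraining.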
This next generation of RHEL AI boasts expanded hardware support. The new version supports Lenovo ThinkSystem SR675 V3 servers, with factory preload options for faster, easier deployment. As a technology preview, RHEL AI 1.2 also introduces support for AMD Instinct accelerators: MI300X GPUs for training and inference, and MI210 GPUs for inference only.
RHEL AI 1.2 has also extended its reach to major cloud platforms. Users can now deploy RHEL AI on Azure and Google Cloud Platform (GCP), in addition to the existing support for AWS and IBM Cloud.
The software has also seen significant improvements. The new “Periodic Checkpointing” feature saves model checkpoints at regular intervals during long fine-tuning runs, so users can resume training from the last saved checkpoint instead of starting over, saving valuable time and computational resources.
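The general technique is easy to sketch in plain PyTorch. Everything below (the toy model, the interval, and the file paths) is an illustrative assumption, not RHEL AI's actual implementation:

```python
# Sketch of periodic checkpointing in a plain PyTorch training loop.
# The model, interval, and paths are illustrative, not RHEL AI's own.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
CHECKPOINT_EVERY = 100  # steps between saves (assumed value)

for step in range(1, 1001):
    optimizer.zero_grad()
    loss = model(torch.randn(32, 10)).pow(2).mean()
    loss.backward()
    optimizer.step()
    if step % CHECKPOINT_EVERY == 0:
        # Persist everything needed to resume: weights, optimizer state, step.
        torch.save({"step": step,
                    "model": model.state_dict(),
                    "optimizer": optimizer.state_dict()},
                   f"checkpoint_{step}.pt")

# After an interruption, resume from the latest checkpoint instead of step 0.
ckpt = torch.load("checkpoint_1000.pt")
model.load_state_dict(ckpt["model"])
optimizer.load_state_dict(ckpt["optimizer"])
start_step = ckpt["step"] + 1
```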
RHEL AI 1.2 also ships PyTorch Fully Sharded Data Parallel (FSDP) as a technology preview. FSDP shards a model’s parameters, gradients, and optimizer states across parallel workers (for example, GPUs), substantially cutting training times for multi-phase training of models with synthetic data.
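FSDP itself is upstream PyTorch, so the wrapping step can be sketched generically. The model, hyperparameters, and backend below are illustrative assumptions, and none of this reflects how RHEL AI wires FSDP into its own training stack:

```python
# Generic PyTorch FSDP sketch: shard parameters, gradients, and optimizer
# state across GPU workers. Launch with, e.g.:
#   torchrun --nproc_per_node=8 fsdp_sketch.py
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    dist.init_process_group("nccl")  # torchrun sets RANK/WORLD_SIZE per worker
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

    model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(),
                          nn.Linear(4096, 4096)).cuda()
    # After wrapping, each worker holds only a shard of the training state.
    model = FSDP(model)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):  # stand-in for a real training loop
        optimizer.zero_grad()
        loss = model(torch.randn(8, 4096, device="cuda")).mean()
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

With eight workers, each GPU carries roughly an eighth of the parameters, gradients, and optimizer state, which is what lets bigger models fit in memory and multi-phase training runs finish sooner.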
RHEL AI continues Red Hat’s move to make LLM training more accessible to programmers and subject matter experts, not just data scientists.
As Joe Fernandes, vice president of Red Hat’s Foundation Model Platform, said: “RHEL AI provides the ability for domain experts, not just data scientists, to contribute to a built-for-purpose gen AI model across the hybrid cloud while also enabling IT organizations to scale these models for production through Red Hat OpenShift AI.”
In other words, with RHEL AI, making AI useful for your particular needs is becoming even easier.
Finally, with the release of RHEL AI 1.2, Red Hat is also deprecating support for version 1.1, giving users 30 days to upgrade to the latest release. This rapid iteration underscores Red Hat’s aggressive push into the enterprise AI market. For better or for worse, AI development is accelerating at an ever-increasing pace.