Red Hat delivers AI-optimized Linux platform
At launch, RHEL AI includes support for the Granite 7-billion-parameter English language model. Another Granite model, the 8-billion-parameter coding model, is in preview and is expected to become generally available late this year or in early 2025.
RHEL AI also comes with InstructLab, an open-source project that helps enterprises fine-tune and customize the Granite models or other open-source AI models.
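To give a sense of how a tuned model is consumed afterward, here is a minimal sketch, assuming the fine-tuned Granite model is served locally behind an OpenAI-compatible chat endpoint (the pattern InstructLab's local serving mode follows); the base URL, port, and model name are placeholders, not values from the article.

```python
# Minimal sketch: query a locally served, fine-tuned Granite model through an
# OpenAI-compatible chat completions endpoint. URL and model name are placeholders.
import requests

BASE_URL = "http://localhost:8000/v1"   # assumed local serving endpoint
MODEL = "granite-7b-custom"             # hypothetical name of the fine-tuned model

response = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": MODEL,
        "messages": [
            {"role": "user", "content": "Summarize our internal support-ticket triage policy."}
        ],
        "temperature": 0.2,
    },
    timeout=60,
)
response.raise_for_status()

# Print the model's reply from the first returned choice.
print(response.json()["choices"][0]["message"]["content"])
```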
Finally, RHEL AI also comes with all the underlying platform infrastructure, Katarki says. That includes immediate support for Nvidia hardware. Support for AMD and Intel hardware is expected to arrive in the next few weeks.
Everything is packaged up as a container image, Katarki adds, so that enterprises can use their existing container management tools to administer it. On top of the software, there’s also support and legal indemnification for both the open-source software and the Granite model.
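As a sketch of what "use your existing container tooling" can look like in practice, the snippet below pulls and lists an image with the podman Python bindings (podman-py); the registry path is a placeholder, not Red Hat's published RHEL AI image reference.

```python
# Minimal sketch: handle the RHEL AI container image with existing container
# tooling, here the podman Python bindings. The image reference is a placeholder.
from podman import PodmanClient

IMAGE = "registry.example.com/rhel-ai/appliance"  # hypothetical image reference
TAG = "latest"

# Assumes a rootful Podman socket; adjust the URL for rootless setups.
with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
    image = client.images.pull(IMAGE, tag=TAG)
    print(f"Pulled {image.tags} ({image.id})")

    # List local images so the new appliance image shows up alongside the rest.
    for img in client.images.list():
        print(img.tags)
```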
“Think of it as an appliance,” Katarki says. “It’s got everything. The Granite model, InstructLab, all the underlying platform software you need, and the operating system underneath it all.”
RHEL AI helps enterprises get away from the “one model to rule them all” approach to generative AI, which is not only expensive but can lock enterprises into a single vendor. Open-source large language models are now available that rival commercial offerings in performance.