- Dell Technologies and Red Hat partner to integrate RHEL AI into PowerEdge servers.
- The collaboration supports AI and machine learning development, enabling organizations to scale their AI infrastructure without relying on cloud hosting.
- PowerEdge servers can be deployed on-premises or in a hybrid cloud environment.
- RHEL AI is optimized for developing foundational AI models and serves as the go-to platform for Dell’s PowerEdge R760xa AI server.
- RHEL AI provides access to IBM’s open-source Granite large language models, which excel in generative AI tasks such as coding.
- InstructLab tools for model alignment are included in RHEL AI and designed to scale across distributed environments.
- Continuous testing and validation for AI workloads, including Nvidia-accelerated computing, enhances stability and scalability.
- The collaboration aims to give customers greater flexibility in managing AI workloads in on-premises and hybrid cloud environments.
Main AI News:
Dell Technologies has partnered with Red Hat to integrate Red Hat Enterprise Linux AI (RHEL AI) into its widely used PowerEdge servers, making its hardware an essential tool for AI and machine learning development. This collaboration is designed to help organizations scale their IT infrastructure efficiently, supporting AI strategies without requiring cloud hosting. Dell’s PowerEdge servers can be deployed in either on-premises or hybrid cloud environments, depending on the organization’s preferences.
RHEL AI is a tailored version of the Red Hat Enterprise Linux OS built for developing foundational AI models. Now the go-to platform for Dell’s PowerEdge R760xa AI server, RHEL AI enables developers to quickly build, test, and deploy AI models in production. The platform features open-source Granite language models from IBM Research, offering an alternative to models such as Meta’s Llama and OpenAI’s GPT. These models excel in generative AI tasks such as coding.
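The article does not describe a specific programming interface for Granite, but since the models are released as open weights, a minimal sketch of prompting one for a coding task through the Hugging Face transformers library might look like the following. The checkpoint name is an assumption for illustration; substitute whichever Granite model your RHEL AI deployment provides.

```python
# Illustrative sketch: generating code with an open Granite model via
# Hugging Face transformers. The model name below is an assumed example,
# not the specific checkpoint shipped with RHEL AI.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ibm-granite/granite-3.0-8b-instruct"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Ask the model for a small coding task.
prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```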
In addition to Granite, RHEL AI includes InstructLab tools for model alignment, built on the Large-scale Alignment for chatBots (LAB) framework. RHEL AI is available as a bootable RHEL image for standalone server deployments and is also included in Red Hat OpenShift AI, a hybrid cloud platform designed to scale machine learning operations across distributed environments.
Deploying RHEL AI on Dell’s PowerEdge servers simplifies the AI user experience, as the platform is continuously tested and validated for AI workloads, including those leveraging Nvidia’s accelerated computing. Dell Senior Vice President Arun Narayanan underscored the importance of this validation, noting that it improves the stability and scalability of infrastructure investments while speeding up the deployment of essential AI workloads on a trusted software platform.
Joe Fernandes, Red Hat’s general manager for generative AI platforms, pointed out that AI projects require extensive computing resources that can scale as projects grow. Optimizing RHEL AI for PowerEdge servers, he said, gives customers the confidence and flexibility to manage generative AI workloads across hybrid cloud environments and helps drive their business forward through innovation.
Conclusion:
This partnership between Dell and Red Hat marks a significant step in simplifying AI deployment and providing organizations with the flexibility to scale AI and machine learning efforts in diverse environments. By optimizing RHEL AI for PowerEdge servers, Dell is strengthening its position in the AI infrastructure market, enabling businesses to invest in on-premises solutions with cloud-level capabilities. This move responds to the growing demand for scalable AI infrastructure and is likely to intensify competition in the AI hardware space, benefiting companies looking for more versatile, validated, and reliable options. Integrating open-source models like Granite promotes innovation while reducing reliance on proprietary AI technologies.