TL;DR:
- HPE introduces an AI cloud for large language models, aiming to leverage its supercomputing capabilities for sustained growth in the high-performance computing (HPC) business.
- HPE partners with Aleph Alpha GmbH, a German startup specializing in large language models with a focus on explainability, to offer domain-specific AI applications.
- The company’s offering, powered by Cray supercomputing infrastructure, provides large language models on-demand as a multitenant service.
- Analysts acknowledge HPE’s innovative approach but suggest the company needs to provide more specifics, particularly regarding machine learning operations (MLOps), for a comprehensive assessment.
- HPE’s strategy has the potential to revolutionize the handling of large workloads and high-performance computing tasks, but successful execution and addressing remaining questions will be critical.
Main AI News:
Hewlett Packard Enterprise Co. (HPE) recently made headlines with its groundbreaking announcement of an artificial intelligence (AI) cloud tailored for large language models (LLMs). This move underscores HPE’s distinctive approach, aimed at achieving sustained growth in its high-performance computing (HPC) business.
While HPE boasts a clear advantage in supercomputing intellectual property, the public cloud giants currently dominate the AI landscape. The prevailing view is that generative AI, exemplified by OpenAI LP’s ChatGPT, is inherently reliant on the cloud and its immense computational capabilities. The burning question, therefore, is whether HPE can bring unique capabilities and a focused strategy to the table, ultimately yielding a competitive advantage and, of course, substantial profits in this ever-evolving space.
In this insightful Breaking Analysis, we delve into HPE’s recent Discover conference and dissect their LLM-as-a-service announcements. Our aim is to answer a crucial question: Does HPE’s strategy represent a viable alternative to existing public and private cloud-based gen AI deployment models, or is it destined to become a niche player in this burgeoning market? To shed light on this matter, we have the pleasure of hosting CUBE analyst Rob Strechay and Andy Thurai, Vice President and Principal Analyst at Constellation Research Inc.
HPE’s Latest Offering: The AI Cloud Unveiled
Back in 2014, prior to the HP and HPE split, HP unveiled the Helion public cloud. The project was discontinued two years later: lacking the scale and differentiation to compete, HP ceded the public cloud throne to Amazon Web Services Inc.
This time, HPE is determined to chart a different course. At the recent Discover event, HPE officially entered the AI cloud arena by expanding its GreenLake as-a-service platform. The company now offers large language models on-demand, providing a multitenant service fueled by the computational might of HPE supercomputers.
Partnering with Aleph Alpha GmbH, a Germany-based startup specializing in large language models with a keen focus on explainability, HPE aims to advance its strategy of delivering domain-specific AI applications. To kickstart this initiative, HPE’s inaugural offering features Luminous, a pretrained LLM developed by Aleph Alpha. The solution allows enterprises to train and fine-tune custom models on their own proprietary data.
Analysts Strechay and Thurai share their insights on the announcement below.
Unleashing the Power of Cray Supercomputing Infrastructure
At the heart of the discussion lies HPE’s plan to leverage Cray supercomputing infrastructure in an “as-a-service” model, thereby democratizing access to high-performance computing.
Here are the key takeaways from their conversation:
- Strechay commends HPE’s innovative approach of providing supercomputing power as a service, harnessing the capabilities of Cray technology. However, he highlights that the announcement precedes the actual general availability by approximately six months. Strechay acknowledges that HPE might be playing catch-up in the LLM market, but their unique angle of incorporating high-performance computing sets them apart.
- Thurai concurs with Strechay’s assessment and adds a touch of optimism, suggesting that HPE’s proposed model holds promise for managing large workloads effectively. He finds the concept of seamlessly transferring substantial workloads to HPE without the need for extensive fine-tuning quite compelling, especially for high-performance computing tasks.
- Nevertheless, Thurai raises some valid concerns. He underscores the lack of concrete details about critical aspects like machine learning operations (MLOps) in HPE’s announcement. Thurai emphasizes the necessity of obtaining these specifics to form a well-founded opinion on the viability of HPE’s strategy.
- Strechay also emphasizes that this offering should be viewed more as a platform as a service (PaaS) rather than infrastructure as a service (IaaS).
Both analysts express cautious optimism about HPE’s strategy, noting its potential to revolutionize the handling of large workloads and high-performance computing tasks. However, they agree that HPE must provide more specific information about its execution plan, particularly concerning MLOps, before substantial conclusions can be drawn. Ultimately, success hinges on effective execution.
Conclusion:
HPE’s strategic move to offer an AI cloud for large language models demonstrates its ambition to capitalize on its supercomputing expertise and generate profits in the AI market. While the company faces challenges and requires more detailed information, its differentiated approach has the potential to reshape how large workloads are managed. HPE must focus on execution and address lingering concerns to secure a strong position in this rapidly evolving market.