TL;DR:
- AMD aims for a resurgence in the second half of the year with upcoming product launches.
- Q1 2023 revenues for AMD’s Data Center group declined, but cash reserves remained strong.
- CPU demand slows as hyperscalers await the Bergamo Epycs, having already secured Genoa chips early, while rising interest in large language models reshapes the demand landscape.
- Intel faces challenges in the data center market, while AMD positions itself as an alternative.
- Lisa Su prioritizes AI, combining AMD’s AI teams under a single organization managed by Victor Peng.
- AMD anticipates increased demand for AI hardware, including the forthcoming Instinct MI300 GPUs.
- The allocation of CPU-GPU resources in El Capitan raises questions amid high demand.
- Nvidia’s Hopper GPUs and Grace CPUs face uncertain manufacturing capacity, driving up prices.
- AMD is well-positioned to capitalize on the AI computing era with its compute engines and software capabilities.
- AMD has made progress in GPU acceleration for HPC simulation and modeling.
- AMD’s expansion in the AI market poses a challenge to Nvidia and Intel.
- AI exhibits recession-proof characteristics, fueling growth opportunities for companies like Nvidia, AMD, and Intel.
Main AI News:
When it comes to the economy, our collective expectations play a significant role in shaping its trajectory. The unusual events and discussions that unfolded in early 2023 have left companies cautious about making excessive investments in systems. Even AMD, which has been steadily expanding its presence in the server market for several years, is beginning to experience the effects of the prevailing macroeconomic conditions.
However, Lisa Su and her team at AMD have devised a plan that aims to revive the company’s fortunes in the second half of the year. They anticipate a remarkable resurgence with the upcoming “Genoa” Epyc 9004 ramp, the introduction of the “Bergamo” hyperscaler and cloud CPUs, and the revelation and installation of the “Antares” Instinct MI300 hybrid CPU-GPU in the powerful “El Capitan” exascale-class supercomputer at Lawrence Livermore National Laboratory.
(Unofficially, we have named the MI300 “Antares,” since AMD has not carried its star-themed codenames forward to this part. The MI100 was based on the “Arcturus” GPU, while the MI200 series drew on the “Aldebaran” GPU. Hence, it seems fitting to christen the MI300 “Antares,” after one of the largest stars in the night sky, a red supergiant in the constellation Scorpius.)
During a call with Wall Street analysts, Su confirmed AMD’s expectation of achieving over 50 percent growth in the second half of this year compared to the same period in 2022—a remarkable feat considering the company’s already impressive performance in the latter half of the previous year. Unfortunately, the first quarter of 2023 was lackluster, and it appears that the second quarter will follow suit.
In the March quarter, AMD experienced a 9.1 percent decline in overall revenues, amounting to $5.35 billion. This decline can be attributed to substantial investments in future data center roadmaps and the lingering effects of the PC market’s CPU and GPU inventory from the previous year. Consequently, the company reported a loss of $139 million. Despite these challenges, AMD’s cash reserves reached $5.94 billion, representing a 1.4 percent sequential increase but a 9.1 percent decrease from the $6.53 billion held in the bank a year ago.
Considering that many hyperscalers and cloud builders are eagerly anticipating the release of the 128-core Bergamo Epycs, equipped with Zen 4c cores designed specifically for their workloads, it is unsurprising to witness a slowdown in CPU demand. These industry players already secured a substantial number of Genoa Epyc 9004s even before the official launch in November 2022. Moreover, the rising popularity of large language models has led to increased consumption among cloud builders and further influenced the demand landscape.
With Intel’s comparable offering, the “Sierra Forest,” not scheduled for release until the first half of 2024, AMD faces minimal competitive pressure in the hyperscaler and cloud builder segment regarding the Bergamo CPUs. This time, AMD could enjoy roughly a three-quarter head start over Intel in many-cored server CPUs, a rare advantage in recent years, even though Sierra Forest will boast 144 cores to Bergamo’s 128.
The confluence of declining volumes and the investment in CPU, GPU, and DPU roadmaps has put AMD’s Data Center group in a challenging position. In the first quarter of this year, data center product sales inched up by a mere 0.2 percent, reaching just under $1.3 billion, while operating income plummeted by 65.3 percent to $148 million. Our proprietary model, infused with numerical wizardry akin to that employed by our counterparts on Wall Street, reveals that AMD’s Epyc line generated $1.22 billion in revenues, marking a modest 0.6 percent year-on-year growth.
However, sales of Instinct GPUs experienced an 18.8 percent decline, amounting to $65 million, while Pensando DPUs contributed approximately $10 million, largely driven by a substantial Microsoft installation that has been in progress for some time. Furthermore, our estimates indicate a sequential decrease of 21.8 percent in Data Center group sales.
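The component estimates above can be sanity-checked with simple arithmetic. This sketch restates the article’s own model figures (Epyc, Instinct, and Pensando revenue estimates) and confirms they sum back to the reported group total:

```python
# Cross-check of the Data Center group revenue estimates cited above.
# All figures are the article's own model output, restated for arithmetic only.
epyc_cpu = 1.220e9      # Epyc CPU revenue estimate ($1.22 billion)
instinct_gpu = 0.065e9  # Instinct GPU revenue estimate ($65 million)
pensando_dpu = 0.010e9  # Pensando DPU revenue estimate (~$10 million)

total = epyc_cpu + instinct_gpu + pensando_dpu
print(f"Estimated Data Center group total: ${total / 1e9:.3f} billion")
```

The pieces sum to roughly $1.295 billion, consistent with the “just under $1.3 billion” figure reported for the quarter.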
In stark contrast, as we reported recently, Intel’s Data Center & AI group witnessed a significant decline in revenues, plummeting by 38.4 percent year on year to $3.72 billion, with shipments of Xeon SP processors experiencing a staggering 50 percent drop. This resulted in an operating loss of $518 million for the group. Intel’s Network and Edge group, which has a foothold in the data center market, reported sales of $1.49 billion, down 32.7 percent year on year, and incurred an operating loss of $300 million.
Returning to AMD’s financial performance, while we lack specific volume figures, it is evident that average selling prices were negatively impacted as more CPUs were allocated to cloud builders and hyperscalers. We hypothesize that CPU shipments declined at a faster rate than revenues due to AMD’s ability to command higher average selling prices each quarter by enhancing architectural features and selling more Genoa chips than the previous generation, “Milan” Epyc 7003s.
Our analysis suggests that sales of Epyc CPUs to cloud builders and hyperscalers amounted to $952 million, reflecting a 15.4 percent year-on-year increase but a significant sequential decline of 22.5 percent compared to Q4 2022. Consequently, sales of CPUs destined for enterprises, telcos, smaller service providers, governments, and academia were estimated at $268 million, a substantial decline of 30.9 percent.
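The split between cloud/hyperscaler and enterprise sales can be checked the same way: the two segment estimates should reconstruct the $1.22 billion Epyc total cited earlier. A minimal check, using only the article’s own figures:

```python
# Sanity check: the two Epyc customer-segment estimates should sum
# back to the ~$1.22 billion Epyc revenue estimate cited above.
cloud_hyperscaler = 952e6      # cloud builders and hyperscalers
enterprise_and_other = 268e6   # enterprises, telcos, governments, academia

epyc_total = cloud_hyperscaler + enterprise_and_other
print(f"Estimated Epyc total: ${epyc_total / 1e9:.2f} billion")
```

The segments sum to $1.22 billion, matching the Epyc revenue estimate from the model.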
However, Lisa Su, AMD’s President and CEO, is keenly focused on establishing a competitive presence in the AI space. She highlighted the successful porting of the PyTorch AI framework to the ROCm environment for Instinct GPU accelerators. Additionally, Su mentioned the utilization of the LUMI supercomputer in Finland, powered by Instinct MI250X GPUs and Epyc 7003 CPUs, for training a large language model in Finnish.
Su elaborated on the call with Wall Street analysts, emphasizing the heightened customer interest in the forthcoming Instinct MI300 GPUs designed for AI training and large language model inference. She acknowledged the significant progress made in achieving key MI300 silicon and software milestones during the quarter. The company remains on track to launch the MI300 later this year, supporting the El Capitan exascale supercomputer project at Lawrence Livermore National Laboratory, as well as catering to the needs of prominent cloud-based AI customers.
The prospects of El Capitan, with its CPU-GPU complexes, raise intriguing questions about the allocation of these resources. It remains uncertain how many of Nvidia’s “Hopper” H100 GPUs, potentially augmented by “Grace” Arm server CPUs, can be manufactured. However, given the heightened interest in large language models, it is highly likely that demand will outstrip supply. This scenario is expected to drive up prices for Nvidia’s GPUs and CPUs, compelling some customers to explore AMD’s alternatives.
Furthermore, Lisa Su revealed that all of AMD’s AI teams across different divisions and groups had been consolidated into a single organization, headed by Victor Peng, the former CEO of FPGA maker Xilinx and currently general manager of AMD’s Embedded group. This new AI group, likely a virtual and cross-group entity, will steer AMD’s comprehensive AI hardware strategy and shape its AI software ecosystem. This ecosystem encompasses optimized libraries, models, and frameworks that span all of the company’s compute engines.
Su emphasized that we are still in the nascent stages of the AI computing era, witnessing a rate of adoption and growth that surpasses any other recent technology. As the recent surge of interest in generative AI demonstrates, the widespread deployment of large language models and other AI capabilities across cloud, edge, and endpoints necessitates substantial advancements in compute performance.
AMD is well positioned to capitalize on this surge in demand due to its diverse portfolio of high-performance and adaptive compute engines, strong customer relationships across various large markets, and expanding software capabilities. Su expressed great enthusiasm for the company’s prospects in AI, identifying it as their top strategic priority and highlighting their deep engagement with customers to deliver joint solutions to the market.
While AMD initially lagged in the first wave of GPU acceleration for HPC simulation and modeling, it has made considerable progress with its current CPUs, GPUs, and ROCm stack. In the second wave of GPU acceleration focused on AI training, Nvidia still holds a significant advantage.
However, given the demand-supply dynamics and the collaborative efforts of the HPC community, which is also striving to catch up with the hyperscalers, AMD stands a strong chance of securing a significant share of the AI market. Additionally, AMD enjoys an advantage over Intel, with its GPUs and oneAPI stack, because AMD has proven to be a reliable supplier of CPUs and is now expanding into GPUs.
Significantly, AI is currently exhibiting recession-proof characteristics, making it a resilient and thriving sector. This phenomenon can be likened to a “Dot Chat Boom,” underscoring the enduring demand and growth potential of AI for companies such as Nvidia, AMD, and Intel.
Conclusion:
AMD’s focus on AI as its top strategic priority, consolidation of AI teams, and progress in its CPU, GPU, and DPU roadmaps show its commitment to expanding its presence in the data center market. With the forthcoming “Genoa” Epyc 9004 ramp, “Bergamo” hyperscaler and cloud CPUs, and the Instinct MI300 hybrid CPU-GPU, AMD is well-positioned to take advantage of the increasing demand for compute performance driven by the rise of large language models and other AI capabilities. While Intel faces significant challenges in this space, AMD’s reliable supply of CPUs and expanding GPU portfolio make it a strong contender. The resilience and growth potential of the AI market further solidify the importance of AI for companies such as Nvidia, AMD, and Intel.