- Celestial AI secures significant funding to commercialize its Photonic Fabric technology, aiming to revolutionize memory integration in AI processing.
- The Photonic Fabric suite offers silicon photonics interconnects, interposers, and chiplets, enabling the decoupling of AI compute from memory.
- Chiplets provide versatile solutions for enhancing HBM memory capacity and facilitating chip-to-chip interconnectivity.
- Celestial’s second-gen Photonic Fabric promises doubled bandwidth and quadrupled lanes, signaling significant advancements.
- Innovative memory expansion modules integrate HBM with DDR5, leveraging silicon photonics interposer technology to mitigate latency and combine the benefits of both memory types.
- Competitors like Ayar Labs and Lightmatter also pursue photonics-centric solutions, indicating a transformative shift towards co-packaged optics and silicon photonic interposers.
Main AI News:
In 2024, the interconnect landscape offers an abundance of options for weaving together accelerators, whether in tens, hundreds, thousands, or even tens of thousands. From Nvidia's NVLink and InfiniBand to Google's TPU pods communicating via optical circuit switches (OCS), and AMD's Infinity Fabric carrying various traffic types, connectivity solutions are plentiful. Even traditional Ethernet, favored by Intel for Gaudi2 and Gaudi3, remains relevant.
Yet the crux lies not in constructing expansive meshes but in mitigating the significant performance penalties and bandwidth constraints inherent in off-package communication. Moreover, HBM memory, a linchpin of AI processing, is tightly bound to compute in a fixed ratio: getting more memory capacity means buying more accelerators.
“This industry effectively treats Nvidia GPUs as exorbitantly priced memory controllers,” remarked Dave Lazovsky, CEO of Celestial AI, whose firm recently secured a hefty $175 million in Series C funding from USIT and several other notable venture capitalists to advance its Photonic Fabric technology.
When we examined Celestial's Photonic Fabric last summer, a suite encompassing silicon photonics interconnects, interposers, and chiplets designed to decouple AI compute from memory, it showed promising potential. Now, nearly a year later, Celestial asserts it is engaged with multiple hyperscale clients and a prominent processor manufacturer on integrating its technology. While specifics remain undisclosed, the involvement of AMD Ventures as a backer and hints dropped by AMD senior vice president Sam Naffziger suggest intriguing prospects for collaboration.
The primary focus of Celestial's efforts appears to be chiplets, which offer versatility in expanding HBM memory capacity or in providing chip-to-chip interconnectivity akin to an optical NVLink or Infinity Fabric. These chiplets, slightly smaller than HBM stacks, provide robust opto-electrical interconnects delivering 14.4 Tb/sec, or 1.8 TB/sec, of total off-chip bandwidth.
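As a quick sanity check, the two quoted figures are the same number in different units (assuming the standard 8 bits per byte):

```python
# Gen-1 Photonic Fabric chiplet: quoted total off-chip bandwidth.
total_tbps = 14.4                  # terabits per second
total_tBps = total_tbps / 8        # 8 bits per byte
print(f"{total_tBps} TB/sec")      # 1.8 TB/sec
```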
Moreover, Celestial’s forthcoming second-gen Photonic Fabric promises substantial advancements, doubling the per-lane signaling rate to 112 Gb/sec with PAM4 SerDes and quadrupling the number of lanes, thereby multiplying aggregate throughput.
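Taken at face value, doubling the per-lane rate while quadrupling the lane count would multiply aggregate bandwidth roughly eightfold. A back-of-the-envelope sketch, with the caveat that the 56 Gb/sec gen-1 baseline is inferred from the quoted doubling and that Celestial has not confirmed both factors apply to a single chiplet:

```python
# Inferred gen-1 per-lane rate: 112 Gb/s PAM4 is described as "doubled".
gen1_lane_gbps = 56
gen2_lane_gbps = gen1_lane_gbps * 2   # 112 Gb/s PAM4
lane_multiplier = 4                   # gen 2 quadruples the lane count

gen1_total_tbps = 14.4                # gen-1 chiplet's quoted off-chip bandwidth
gen2_total_tbps = gen1_total_tbps * 2 * lane_multiplier
print(f"{gen2_total_tbps} Tb/sec")    # 115.2 Tb/sec, if both factors stack
```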
However, despite these strides, challenges persist. While chip-to-chip connectivity is comparatively straightforward, expanding memory capacity presents unique hurdles. Celestial's solution is a memory expansion module housing two HBM stacks complemented by DDR5 DIMMs. Leveraging silicon photonics interposer technology, the module acts as an interface, turning HBM into a write-through cache for DDR5, effectively combining HBM's bandwidth with DDR5's capacity while mitigating latency.
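To illustrate the write-through caching pattern in the abstract (a generic sketch of the technique, not Celestial's controller design), every write lands in both the fast tier and the backing store, so a cached line is always consistent with the larger, slower memory behind it:

```python
class WriteThroughCache:
    """Generic write-through cache: a small fast tier (standing in for HBM)
    in front of a large backing store (standing in for DDR5).
    Illustrative only; hypothetical names, not Celestial's design."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.fast = {}     # fast tier (HBM stand-in), insertion-ordered
        self.backing = {}  # backing store (DDR5 stand-in)

    def _fill(self, addr, value):
        # Place a line in the fast tier, evicting the oldest entry if full.
        if addr not in self.fast and len(self.fast) >= self.capacity:
            self.fast.pop(next(iter(self.fast)))
        self.fast[addr] = value

    def write(self, addr, value):
        # Write-through: update the backing store on every write...
        self.backing[addr] = value
        # ...and cache the line as well.
        self._fill(addr, value)

    def read(self, addr):
        # Hit in the fast tier if present; otherwise fill from backing store.
        if addr in self.fast:
            return self.fast[addr]
        value = self.backing[addr]
        self._fill(addr, value)
        return value
```

Because writes always reach the backing store, an eviction from the fast tier never loses data, which is what makes the HBM tier safe to treat purely as a bandwidth accelerator in front of the capacity tier.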
The implications extend beyond mere technical prowess, hinting at a paradigm shift in memory architecture. Celestial envisions a future where compute, storage, and management networks seamlessly converge, fostering efficient machine learning operations without necessitating switches.
Yet, despite the ambitious vision, timing remains a critical factor. Lazovsky anticipates sampling Photonic Fabric chiplets to clients by the second half of 2025, with product availability slated for at least a year thereafter, reaching volume production by 2027.
Nevertheless, Celestial isn’t alone in its pursuits. Competitors such as Ayar Labs and Lightmatter, each with their own photonics-centric solutions, vie for dominance in the burgeoning market. The emergence of such contenders underscores the inevitability of a transition towards co-packaged optics and silicon photonic interposers, signaling a transformative era in computing architecture.
Conclusion:
The emergence of Celestial AI’s Photonic Fabric, alongside competitors in the photonics-centric space, marks a significant advancement in memory integration for AI computing. By decoupling AI compute from memory and leveraging silicon photonics interconnects, these innovations promise enhanced performance and efficiency, shaping the future landscape of computing architecture. Market players must adapt to this transformative shift towards co-packaged optics and silicon photonic interposers to remain competitive in the evolving market.