Marvell Teralynx 10 Switch Enters Production to Meet Surge in AI Cloud Demands

  • Marvell’s Teralynx 10 Ethernet switch is now in volume production.
  • The switch delivers 51.2 Tbps throughput with the industry’s lowest latency.
  • Designed to support AI training, inference, and general-purpose computing.
  • The switch addresses the growing need for high-bandwidth connectivity in AI clusters and data centers.
  • It supports open network platforms such as the Linux Foundation’s SONiC and SAI.
  • Shipments of 51.2 Tbps switches are projected to grow at a 120% CAGR through 2028.
  • Key features include low latency (500 ns), industry-best radix (512), low power consumption, and full programmability.
  • The switch is supported by major OEMs, ODMs, and ISVs.

Main AI News:

Marvell Technology, Inc., a prominent player in data infrastructure semiconductor solutions, has announced that its Marvell® Teralynx® 10 Ethernet switch is now in volume production, with customer deployments already in progress. The Teralynx 10 is a programmable, low-power Ethernet switch with a groundbreaking 51.2 terabits per second (Tbps) throughput and the industry’s lowest latency, designed to support a wide range of workloads including AI training, inference, and general-purpose computing.

As the demand for high-bandwidth connectivity surges with the expansion of AI clusters and data centers, the Teralynx 10 switch addresses this need with advanced capabilities. Its production coincides with a broader industry shift towards open network platforms, such as the Linux Foundation’s SONiC (Software for Open Networking in the Cloud) and SAI (Switch Abstraction Interface), and away from proprietary network operating systems. This transition allows for the rapid deployment of multi-manufacturer switching solutions, accelerating development and optimizing performance for diverse use cases.

According to 650 Group, shipments of 51.2 Tbps switches are projected to soar from approximately 77,000 units in 2024 to 1.8 million by 2028, marking a 120% compound annual growth rate (CAGR). Alan Weckel, co-founder of 650 Group, emphasized that this capacity milestone is a game-changer for data center switching, predicting that cloud service providers will increasingly adopt this switch to build next-generation infrastructure while prioritizing multi-vendor diversity.
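The 650 Group figures above can be sanity-checked with the standard CAGR formula; the unit counts and years are taken from the article itself:

```python
# Sanity-check the 650 Group projection: ~77,000 units (2024)
# growing to 1.8 million units (2028).
start_units = 77_000       # 2024 shipments (article)
end_units = 1_800_000      # projected 2028 shipments (article)
years = 2028 - 2024

# Compound annual growth rate over the 4-year span.
cagr = (end_units / start_units) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")  # ~120%, matching the article
```

The implied rate comes out at roughly 120% per year, consistent with the CAGR the article cites.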

The Teralynx 10 leverages a new switch architecture to deliver exceptional bandwidth, ultra-low latency, and programmability, making it versatile for various network segments including top-of-rack (ToR), leaf, spine, AI clusters, and network edge. Its architecture supports in-field feature updates and new protocol integrations, ensuring future-proofing as network demands evolve.

Marvell’s Teralynx 10 is also positioned to benefit from the industry’s shift to open networking. The switch supports the open-source SONiC platform, facilitating a unified networking system across multiple manufacturers and enhancing supply chain stability. This open-source approach mirrors the disruption of proprietary server operating systems by Linux, promising greater silicon diversity and accelerated development for data center operators.

Nick Kucharewski, senior vice president and GM of Marvell’s Network Switching Business Unit, highlighted the Teralynx 10 as an optimal solution for the AI and cloud infrastructure market. He noted, “This is the right product at the right time, as the market turns to innovative solutions from customer-focused suppliers like Marvell to meet the explosive demand for high-bandwidth switching in AI data center buildouts.”

Key Features and Benefits of the Teralynx 10 Switch:

  • Lowest Latency: Provides 51.2 Tbps throughput with latency as low as 500 nanoseconds, staying below 600 nanoseconds across all packet sizes — crucial for AI, machine learning, and distributed workloads, where lower latency shortens job completion times and improves operational efficiency.
  • Industry-Best Radix: The 512 switching radix reduces the number of switch tiers in large clusters, leading to significant power and total cost of ownership (TCO) reductions.
  • Low Power Consumption: Operates at 1 watt per 100 gigabits per second of bandwidth, optimizing energy use.
  • Programmable Architecture: Fully programmable with no impact on packet processing capacity or latency, supporting various use cases and future-proofing as networking technologies evolve.
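To make the radix and power bullets concrete, here is a rough back-of-the-envelope sketch. The fabric-scale formula is the standard two-tier leaf-spine (folded-Clos) approximation, not a Marvell specification; only the radix, throughput, and watts-per-100G figures come from the article:

```python
# Back-of-the-envelope figures implied by the bullets above.
RADIX = 512                # ports per switch (article)
W_PER_100G = 1.0           # watts per 100 Gbit/s (article)
THROUGHPUT_GBPS = 51_200   # 51.2 Tbps expressed in Gbit/s

# Power of one fully loaded switch at 1 W per 100 Gbit/s:
switch_power_w = THROUGHPUT_GBPS / 100 * W_PER_100G
print(f"Full-load switch power: {switch_power_w:.0f} W")  # 512 W

# Hosts reachable in a two-tier leaf-spine fabric of radix R:
# each leaf splits ports evenly between hosts and spines, and each
# spine fans out to up to R leaves, so hosts ~= R * (R / 2) = R^2 / 2.
two_tier_hosts = RADIX**2 // 2
print(f"Two-tier hosts at radix 512: {two_tier_hosts}")   # 131072

# A radix-256 switch tops out at a quarter of that in two tiers,
# which is why a higher radix can eliminate an entire switch tier:
print(f"Two-tier hosts at radix 256: {256**2 // 2}")      # 32768
```

Fewer tiers means fewer switch hops, fewer optics, and less power per host, which is the TCO argument the radix bullet is making.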

Marvell’s Teralynx 10 switch has garnered support from major OEMs, ODMs, and ISVs, expediting adoption and offering customers extensive choice and optimization capabilities. Industry partners have praised the switch’s innovative features and performance, underscoring its role in advancing AI and cloud infrastructure.

Gavin Cato, head of Portfolio Solutions and CTO at Celestica, remarked, “The proliferation of AI and machine learning in data centers represents a transformative shift, and Marvell’s Teralynx 10 is at the forefront of this change.” Similarly, David Tsai of Wistron Neweb Corporation and Ram Periakaruppan of Keysight emphasized the switch’s impact on high-performance and reliable AI cloud infrastructure.

Michel Haddad of MultiLane and Jacob Christensen of Teledyne LeCroy Xena also lauded the Teralynx 10, noting its integration into interoperability testing and performance benchmarking, further solidifying its position in the market.

As Marvell continues to innovate in data infrastructure, the Teralynx 10 switch sets a new standard for high-performance networking in AI cloud deployments, meeting the increasing demands of modern data centers with efficiency and cutting-edge technology.

Conclusion:

The introduction of Marvell’s Teralynx 10 switch represents a significant advancement in data center networking, particularly for AI applications. As AI clusters and data centers expand, the demand for high-bandwidth, low-latency solutions grows. The Teralynx 10 meets these needs with its high throughput and cutting-edge features, positioning it as a critical component for future infrastructure. The industry’s shift towards open networking platforms enhances its relevance, giving data center operators increased flexibility and reduced dependency on proprietary systems. This development underscores the increasing emphasis on high-performance, scalable solutions in the rapidly evolving AI and cloud markets.
