
Marvell Expands CXL Portfolio with Next-Gen AI Switch

In response to growing memory constraints in AI data centers, Marvell Technology, Inc. has unveiled a new CXL-based switching solution aimed at enabling scalable, high-performance memory architectures. The company’s latest device, the Structera S 30260 CXL switch, is designed to facilitate rack-level memory pooling, allowing data center operators to dynamically access and allocate memory resources beyond traditional server boundaries.

The development reflects a broader shift in infrastructure design as AI workloads become increasingly memory-intensive. Larger language models (LLMs), longer context windows, and growing key-value cache demands are placing significant pressure on memory capacity and bandwidth. Conventional approaches, such as scaling within individual servers or relying heavily on high-bandwidth memory, are proving both costly and insufficient for next-generation requirements.

The Structera S 30260 leverages Compute Express Link (CXL) 3.0 technology and features 260 lanes, delivering aggregate bandwidth of up to 4 TB/s. It operates alongside other components in Marvell's ecosystem, including Structera A near-memory accelerators, Structera X memory-expansion controllers, and Alaska P PCIe/CXL retimers. Together, these elements enable a disaggregated memory architecture that supports efficient communication between CPUs, GPUs, XPUs, and other accelerators.
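The quoted aggregate figure is broadly consistent with CXL 3.x signaling over PCIe 6.x-class lanes. The back-of-envelope check below is a rough sketch, not a Marvell specification: the per-lane throughput and the number of lanes carrying data are our assumptions.

```python
# Rough sanity check of the ~4 TB/s aggregate bandwidth figure.
# Assumptions (not from Marvell): CXL 3.x on PCIe 6.x signaling delivers
# roughly 8 GB/s of usable throughput per lane per direction, and 256 of
# the device's 260 lanes carry data.
GB_PER_LANE_PER_DIR = 8     # approx. usable GB/s per lane, one direction
DATA_LANES = 256            # assumed data-carrying lanes (of 260 total)
DIRECTIONS = 2              # aggregate figures typically count both directions

aggregate_tb_s = DATA_LANES * GB_PER_LANE_PER_DIR * DIRECTIONS / 1000
print(aggregate_tb_s)  # → 4.096
```

The result lands at roughly 4.1 TB/s, matching the announced headline number; vendors commonly quote such figures bidirectionally.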

The solution builds on Marvell’s integration of technologies from XConn Technologies, strengthening its position in the emerging CXL ecosystem. By enabling composable memory across the rack, the switch allows hyperscalers and enterprise operators to improve memory utilization, enhance system flexibility, and optimize overall infrastructure performance.

Rishi Chugh, Vice President and General Manager of Marvell’s Data Center Switch Business Unit, indicated that addressing the AI “memory wall” requires a fundamental architectural shift, with memory pooling emerging as a key enabler of scalable AI systems.

The technology also aims to improve latency and throughput by providing near-local shared memory access, reducing reliance on multi-hop data transfers. Gerry Fan, Senior Vice President of Engineering at Marvell, noted that such capabilities are increasingly critical for efficient LLM inference and overall GPU utilization.

Industry analysts highlight that ongoing DRAM supply constraints and pricing pressures are accelerating interest in alternative memory architectures. James Sanders of TechInsights pointed to CXL-based solutions as a way to provide greater flexibility and scalability in evolving AI environments.

The Structera S 30260 is scheduled to begin sampling in the third quarter of 2026, further expanding Marvell’s portfolio of CXL-based solutions for next-generation data center infrastructure.
