
Salience Labs Launches 32-Port Optical Switch for AI Data Centers

Salience Labs Limited has introduced a new all-optical 32-port silicon photonic switch designed to improve networking performance and energy efficiency in AI data centers, as operators seek faster and more scalable infrastructure to support rapidly expanding artificial intelligence workloads.

The company said the switch is the highest-performing device of its kind currently available and represents the first commercial offering in its optical circuit switch (OCS) product family. The portfolio is expected to expand to include 64- and 128-port systems, enabling data center operators to scale networking capacity as demand for high-performance computing infrastructure grows.

AI data centers are experiencing unprecedented expansion as organizations deploy larger GPU clusters for training and running generative AI models. According to industry research firm ABI Research, global data center capacity is projected to grow nearly sixfold between 2025 and 2035, rising from 24.4 gigawatts to 147.1 gigawatts. The growth is being driven by enterprise AI adoption, hyperscale cloud expansion, and increasing power density within server racks.

However, as AI clusters grow larger and more distributed, the networking layer has emerged as a critical bottleneck. Traditional electronic packet switching architectures rely on repeated optical-electrical-optical (OEO) conversions, which introduce latency, increase energy consumption, and can limit the scalability of GPU interconnects.

Salience Labs’ newly launched optical switch is designed to address these limitations by enabling direct optical connectivity between compute resources. The integrated silicon photonics architecture removes the need for conventional electronic switching elements, allowing data to move through optical paths with lower latency and higher efficiency.

The company said the technology is designed to connect thousands of GPUs across multiple racks, supporting both scale-up and scale-out network architectures used in modern AI data centers. By improving network throughput and reliability while reducing power requirements, optical switching could help data center operators manage the rapidly growing demands of AI workloads.

“Optical switching is moving networks from electronic packet routing to highly predictable, energy-efficient optical connectivity,” said Vaysh Kewada, chief executive officer and co-founder of Salience Labs. “We are transforming the networking layer, unlocking the ability to extend scale-up and scale-out networks across the data center. Improving latency and performance ultimately helps organizations extract greater value from AI applications.”

According to the company, the optical switch architecture offers several performance and efficiency advantages. By eliminating the optical transceivers required in conventional optical-electrical-optical switching systems, the solution can cut power consumption to as little as one-eighth that of traditional approaches.

The technology also addresses network latency challenges that affect AI model training and inference performance. Salience Labs said its optical switching architecture reduces overall latency and removes tail-latency effects that can slow down distributed workloads. These improvements can increase tokens per second per user—a key metric used to evaluate AI model responsiveness—by as much as 80 percent.

The switch supports high-bandwidth communication for modern AI workloads and has been tested at data rates of up to 200 gigabits per second using 100-Gbaud PAM4 encoding. In addition, the system’s compact design occupies only a fraction of a standard 1RU rack footprint, helping data centers add networking capacity without expanding physical infrastructure.
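The 200 Gb/s figure follows directly from the modulation format: PAM4 uses four amplitude levels, so each symbol carries two bits, and a 100-Gbaud symbol rate therefore yields a 200 Gb/s line rate. A minimal sketch of that arithmetic (the variable names are illustrative, not from the article):

```python
import math

# PAM4 signals four distinct amplitude levels per symbol,
# so each symbol encodes log2(4) = 2 bits.
symbol_rate_gbaud = 100               # 100 Gbaud, as stated in the article
bits_per_symbol = math.log2(4)        # PAM4 -> 2 bits per symbol
line_rate_gbps = symbol_rate_gbaud * bits_per_symbol

print(line_rate_gbps)  # 200.0 (Gb/s)
```

By comparison, simple on-off keying (NRZ) at the same symbol rate would deliver only 100 Gb/s, which is why PAM4 is the common choice for doubling throughput per lane in data center optics.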

Analysts expect networking investments to increase significantly as AI infrastructure scales globally. According to Dell’Oro Group, spending on switching technologies for AI back-end networks could exceed $100 billion by 2030 as hyperscale operators expand scale-up, scale-out, and scale-across networking architectures.

To accelerate development and deployment of its optical switching platform, Salience Labs is collaborating with several technology partners across the semiconductor and networking ecosystem. These include Tower Semiconductor, which is supporting the development of photonic integrated circuits using its silicon photonics manufacturing platform.

“Advances in networking technologies such as optical circuit switching are essential as AI workloads continue to grow,” said Dr. Ed Preisler, vice president and general manager of the RF Business Unit at Tower Semiconductor. “Our collaboration with Salience combines photonic integrated circuit design with a scalable silicon photonics platform that can support the transition from development to high-volume production.”

Salience Labs is also working with Keysight Technologies to demonstrate and validate optical circuit switching technologies using advanced AI networking test platforms.

“The unprecedented growth of AI is driving the industry to advance optical technologies capable of meeting massive bandwidth demands,” said Ram Periakaruppan, vice president and general manager of Network Test and Security Solutions at Keysight. “Our collaboration with Salience demonstrates how optical circuit switching can improve bandwidth efficiency and reduce latency in AI networking environments.”

