Seven Key Innovations Shaping AI Connectivity Showcased at DesignCon 2025

Paroma Sen, VP, Corporate Marketing

Astera Labs will be at DesignCon 2025, taking place January 28-30 at the Santa Clara Convention Center, to showcase our latest chip, board, and system design innovations for AI and cloud infrastructure.

Join us at Booth #755 to see our Intelligent Connectivity Platform of PCIe®, Ethernet, and CXL® connectivity solutions in action and learn how we are unleashing the full potential of next-generation AI platforms at cloud-scale.

Here’s a quick preview of seven highlights at this year’s DesignCon:

1. PCIe 6.x Ecosystem Gains Momentum

The PCI Express® (PCIe) 6.x ecosystem is emerging, and our expanding portfolio of PCIe 6.x solutions sets the stage for a smart, scalable connectivity backbone in accelerated compute platforms.

We have teamed up with Micron to accelerate the PCIe 6.x ecosystem. We are showcasing the first public demonstration of end-to-end PCIe 6.x interoperability between our Scorpio P-Series Fabric Switch and Micron’s PCIe 6.x SSD, currently available for ecosystem enablement.

As GPU clusters grow in size and complexity, PCIe cabling solutions are required to scale clusters rack-to-rack and across the data center. We will demonstrate how our Aries 6 Smart Cable Modules with Active Electrical Cables (AECs) and our Scorpio P-Series Fabric Switches can deliver PCIe 6.x connectivity rack-to-rack.

2. Better Together: Aries + Scorpio

Our PCIe 6.x connectivity portfolio of software-defined Retimers and Switches is enhanced by our COnnectivity System Management and Optimization Software (COSMOS) suite, which unlocks unprecedented data center observability, stronger security, and extensive fleet management capabilities.

In a new demonstration, we will highlight how combining telemetry from Aries 6 Smart DSP Retimers and Scorpio P-Series Fabric Switches delivers top-down insight into low-level PCIe 6.x connectivity issues throughout the entire system.
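To make the idea of cross-device telemetry correlation concrete, here is a minimal, purely illustrative Python sketch. COSMOS interfaces are not described in this post, so every name below (LinkSample, flag_marginal_segments, the device and segment labels) is hypothetical; the sketch only shows the general pattern of flagging a link segment when counters reported along a path exceed a threshold on any device.

```python
# Hypothetical illustration only: COSMOS APIs are not public, so all names,
# fields, and thresholds below are invented for this sketch. The concept shown
# is correlating per-hop telemetry (Retimer + Switch) to localize a marginal
# PCIe link segment.

from dataclasses import dataclass
from typing import List


@dataclass
class LinkSample:
    device: str            # e.g. "aries6-retimer-0" or "scorpio-p-port-12" (hypothetical IDs)
    segment: str           # which physical hop the sample describes
    correctable_errs: int  # correctable error count since last poll
    recovery_events: int   # link recovery events since last poll


def flag_marginal_segments(samples: List[LinkSample],
                           err_threshold: int = 100,
                           recovery_threshold: int = 5) -> List[str]:
    """Return segments whose counters exceed thresholds on any reporting device."""
    flagged = set()
    for s in samples:
        if s.correctable_errs > err_threshold or s.recovery_events > recovery_threshold:
            flagged.add(s.segment)
    return sorted(flagged)


if __name__ == "__main__":
    # Synthetic data standing in for telemetry a fleet manager might collect.
    samples = [
        LinkSample("aries6-retimer-0", "gpu0<->retimer0", 12, 0),
        LinkSample("aries6-retimer-0", "retimer0<->switch-port12", 340, 9),
        LinkSample("scorpio-p-port-12", "retimer0<->switch-port12", 295, 7),
    ]
    print("Segments needing attention:", flag_marginal_segments(samples))
```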

3. COSMOS: Predictive Analytics at Scale

Our COSMOS software suite goes beyond traditional fleet management by integrating advanced predictive analytics to enhance overall AI system reliability at scale.

Stop by for a showcase of COSMOS’ ability to isolate specific link failures and proactively resolve issues, ensuring seamless operation and minimizing downtime across the data center.

4. Optimizing AI Inferencing with CXL-attached Memory

Our Leo CXL Smart Memory Controllers are proven to optimize AI inferencing applications by expanding memory capacity and bandwidth to handle larger datasets with greater efficiency.

Connect with our CXL experts at the show to learn how Leo:

  • Accelerates token generation for faster time-to-insights
  • Lowers CPU utilization per query to increase the number of LLM instances per server
  • Maximizes ROI of expensive GPU resources 
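As background on how CXL-attached memory is typically consumed: on Linux hosts it commonly surfaces as a CPU-less (memory-only) NUMA node that applications can target with standard placement tools. The short Python sketch below is illustrative only and not an Astera Labs tool; it simply enumerates memory-only NUMA nodes, and the exact topology depends on the platform and BIOS configuration.

```python
# Minimal sketch, assuming a Linux host where CXL-attached memory (for example,
# behind a CXL memory controller) is exposed as a CPU-less NUMA node, which is a
# common presentation. This only enumerates nodes; actual placement policy would
# be set with tools such as numactl or libnuma, not shown here.

from pathlib import Path


def cpuless_numa_nodes():
    """Return NUMA node IDs that expose memory but no CPUs (often CXL memory)."""
    nodes = []
    for node_dir in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
        cpulist = (node_dir / "cpulist").read_text().strip()
        if cpulist == "":  # no CPUs attached -> memory-only node
            nodes.append(int(node_dir.name[len("node"):]))
    return nodes


if __name__ == "__main__":
    nodes = cpuless_numa_nodes()
    if nodes:
        print("Memory-only NUMA nodes (candidate CXL expansion):", nodes)
        print("Example placement: numactl --interleave=all <inference_workload>")
    else:
        print("No memory-only NUMA nodes detected on this host.")
```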

5. Unleashing High Density AI Connectivity with Taurus Ethernet SCMs

Our Taurus Ethernet Smart Cable Modules power Active Electrical Cables that extend 200/400/800G Ethernet signal reach in high-density AI racks and support scale-out hyperscale networks.

As part of our rack demonstration, we will show real-time telemetry via our COSMOS software suite. When deployed in high-density AI racks, COSMOS offers deep diagnostics features that enable continuous monitoring of critical high-speed front-end and back-end fabrics, maximizing uptime and ROI.

6. Spotlight on Connectivity: See the Accelerated AI Platform

Our Accelerated AI Platform is based on the OCP DC-MHS specification and OCP-inspired hardware designs, demonstrating how all of our PCIe, CXL, and Ethernet-based connectivity solutions can be integrated into a single AI system.

See the platform integrated in our rack demo and learn how Astera Labs is supporting open specifications to advance AI and cloud infrastructure innovation.

7. UALink Expands with New Board Members

As a promoter member of the UALink™ Consortium, we are spearheading efforts alongside other industry leaders to deliver the UALink specifications and create an ecosystem around a new optimized scale-up interconnect for AI clusters.

We are delighted to welcome Alibaba, Apple, and Synopsys as fellow board members, joining us along with Arm, AWS, Cisco, Google, HPE, Intel, Meta, and Microsoft. Together, we are working closely to deliver a power-efficient, low-latency interconnect that optimizes AI accelerator performance, utilization, and uptime. Stop by our booth to learn more about the growing momentum behind UALink technology.

Email us to schedule a meeting with Astera Labs and learn more about how we are unlocking innovative solutions for impactful AI connectivity. We look forward to seeing you at DesignCon 2025!