The year 2024 has been extremely exciting for Astera Labs, with several major milestones including a successful IPO in March and the expansion of our Intelligent Connectivity Platform with the introduction of our Scorpio Smart Fabric Switch family. The company continues to scale aggressively with the goal of becoming the premier provider of purpose-built connectivity solutions for the AI and cloud infrastructure market, and we are just getting started.
Driving AI and Cloud Innovation: How Astera Labs is Expanding its Market Opportunity
Jitendra Mohan, Chief Executive Officer, Co-Founder

Astera Labs’ Vision for the AI Era
Astera Labs’ co-founders share their vision for delivering purpose-built connectivity for AI and cloud infrastructure and expanded market opportunities with AI fabric switches.
Expanding the AI and Cloud Infrastructure Market
Astera Labs’ strong growth is being fueled by the rapid evolution of AI models, which are now scaling up to trillions of parameters. To keep pace with burgeoning AI model sizes, next-generation GPUs and AI accelerators are ramping computational capacity at an incredible pace. Unfortunately, higher bandwidth requirements and increasing architectural complexity are creating acute bottlenecks around the delivery of data, networking, and memory connectivity to these accelerators. System-level utilization and computational efficiency are therefore constrained, leaving GPUs and AI accelerators underutilized and operating at roughly 50% of peak capacity. These developments present an opportunity for Astera Labs’ Intelligent Connectivity Platform to enhance system performance and overall productivity with differentiated hardware and software solutions, purpose-built to address the specific requirements of AI infrastructure.
Purpose-built Connectivity Solutions for AI and Cloud Infrastructure
The diagram below shows the anatomy of an AI server, highlighting the high-speed connectivity channels between the key compute elements and their resources.
On the left is the scale-up back-end fabric of the AI server, consisting of a dense mesh of interconnected AI accelerators or GPUs. Our new Scorpio X-Series Fabric Switches are architected to deliver the highest back-end GPU-to-GPU bandwidth with platform-specific customization through our software-defined architecture and COSMOS software stack. We also see a large and growing opportunity for back-end connectivity solutions across our Aries PCIe®/CXL® Smart DSP Retimer, Smart Cable Module™ (SCM), and Taurus Ethernet SCM product lines for high-speed interconnect over copper today, complemented by longer-reach optical connections over time.
The right side of the AI server diagram illustrates the high-speed connectivity links moving data from the GPU cluster to its various resources, such as the AI head-node, which includes CPUs, networking, memory, and storage. Data exchange between AI accelerators and these resources is achieved predominantly through PCIe connectivity, which is now scaling to its 6th generation to support 64 GT/s per lane throughput. Given the increases in GPU processing power and the growing complexity of system topologies, moving data within the back-end GPU cluster as well as to and between the front-end resources with robust signal integrity and proven interoperability becomes increasingly critical to system performance. We address these higher speeds and increased complexity with our new Scorpio P-Series Fabric Switches, which are specifically architected for mixed-traffic head-node connectivity across a diverse ecosystem of PCIe hosts and endpoints. Additionally, the varied topologies and resource mixtures present increased opportunities for additional Aries PCIe/CXL Smart DSP Retimer attach and emerging use cases for Leo CXL Smart Memory Controllers.
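To put the generational transition in concrete terms, the back-of-the-envelope sketch below works through the raw link bandwidth implied by the per-lane rates. This is an illustrative first-order calculation of publicly specified PCIe signaling rates, not Astera Labs product data; the function name and the decision to ignore FLIT/protocol overhead are our own simplifications.

```python
# Hedged illustration: raw unidirectional PCIe link bandwidth from per-lane
# transfer rate. PCIe Gen 5 signals at 32 GT/s per lane and Gen 6 at 64 GT/s;
# each transfer carries one bit per lane, so GB/s ~= GT/s * lanes / 8.
# Protocol (FLIT) overhead is deliberately ignored in this sketch.

def pcie_raw_bandwidth_gbps(transfer_rate_gt: float, lanes: int) -> float:
    """Raw unidirectional bandwidth in GB/s for a PCIe link (no overhead)."""
    return transfer_rate_gt * lanes / 8  # divide by 8 to convert bits to bytes

gen5_x16 = pcie_raw_bandwidth_gbps(32, 16)  # PCIe Gen 5 x16
gen6_x16 = pcie_raw_bandwidth_gbps(64, 16)  # PCIe Gen 6 x16

print(gen5_x16)  # 64.0 GB/s per direction
print(gen6_x16)  # 128.0 GB/s per direction
```

The doubling from 64 GB/s to 128 GB/s per direction on an x16 link is the front-end bandwidth step-up that Gen 6-capable switches and retimers are built to carry.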
Finally, in the center of the illustration we see the high-speed connectivity pipe that allows the AI server to scale out data to the broader network using both PCIe and Ethernet. Once again, robust and reliable high-speed connectivity is essential to optimize AI accelerator utilization, as AI compute outputs, or tokens, need to be pushed back out to the end user for consumption with minimal latency. We believe the trend of higher bandwidth requirements and a diverse set of scale-out topologies presents growing opportunities for our Taurus Ethernet Retimer and Smart Cable Module product families.
Capitalizing on Growth Opportunities
Astera Labs is well-positioned to outpace industry growth rates through a combination of strong secular tailwinds and the expansion of our silicon content opportunity for AI platforms.
Scorpio Smart Fabric Switches represent our latest product family and elevate Astera Labs to a new level of value delivered to AI platform providers and hyperscalers. These purpose-built smart fabric switches enable essential GPU-to-GPU clustering and mixed-traffic interconnectivity between GPUs and head-nodes, addressing the increasing speeds and complexities present in next-generation AI systems. Our Scorpio Fabric Switch solutions are designed using our differentiated software-defined architecture, which provides our hyperscaler customers the flexibility to customize their architectures to maximize performance per watt while accelerating time to market. Within the Scorpio family is the industry’s first PCIe Gen 6 Smart Fabric Switch, which will support the doubling of front-end connectivity bandwidth as CSPs look to drive higher system-level performance by keeping their highly performant GPUs and AI accelerators fed with data. Scorpio increases our overall silicon dollar content opportunity per AI platform and will unlock a significant new addressable market for Astera Labs over the coming years.
Aries PCIe/CXL Smart DSP Retimers continue to be the industry’s connectivity workhorse, addressing a wide range of AI server deployments. Diverse interoperability characteristics allow for seamless integration with third-party GPUs from leading providers like NVIDIA and AMD, while also addressing the internal ASICs and AI accelerators developed by hyperscalers. Furthermore, we believe the higher bandwidth requirements driven by the transition to PCIe Gen 6 will not only create demand for the additional signal reach extension capabilities of our Aries products, but also support higher value per device as we provide double the performance. In these custom AI servers, Aries plays a critical role in ensuring signal integrity, not only in head-node connectivity but also in back-end GPU-to-GPU applications, where each GPU must connect with every other GPU. This functionality presents a substantial growth opportunity for Astera Labs, enabling deeper integration, broader market penetration, and higher average unit attach rates. Longer term, our hyperscaler customers are looking to drive reliable PCIe connectivity to increase AI accelerator cluster scale and utilization. Our industry-first end-to-end PCIe over Optics technology demonstration showcases our ability to drive signals over fiber at lengths of 50 meters and beyond. We look to broaden our Aries product family over time to support our customers’ efforts to scale AI clusters across the data center, which will further expand our market opportunity.
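The "each GPU must connect with every other GPU" point is why attach rates grow so quickly with cluster size: a fully connected mesh of n accelerators needs n(n-1)/2 point-to-point links. The sketch below is purely illustrative combinatorics; the function name and example cluster sizes are our own and do not describe any specific Astera Labs or customer platform.

```python
# Illustrative sketch: link count in a fully connected (full-mesh) back-end
# fabric, where every GPU has a direct link to every other GPU.
# The number of unique pairs among n nodes is n * (n - 1) / 2, so the link
# count -- and with it the signal-integrity component opportunity -- grows
# quadratically with cluster size.

def full_mesh_links(n_gpus: int) -> int:
    """Number of unique GPU-to-GPU links in a full mesh of n_gpus nodes."""
    return n_gpus * (n_gpus - 1) // 2

for n in (4, 8, 16):
    print(n, full_mesh_links(n))  # 4 -> 6, 8 -> 28, 16 -> 120
```

Doubling the accelerator count roughly quadruples the number of back-end links, which is the scaling dynamic behind growing retimer and fabric-switch content per platform.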
Taurus Ethernet Smart Cable Modules are now being deployed in 400Gb Ethernet applications, serving general compute and AI server needs across platforms leveraging both third-party GPUs and internal AI accelerators. This diverse set of deployments for 400G applications will broaden further as the industry adopts 800G Ethernet port switching over the next couple of years. Once again, increasing data rates and additional system-level and architectural complexity will drive demand for robust signal integrity technologies, and we expect Active Electrical Cable (AEC) penetration across next-generation AI server racks to increase. We expect additional attach for Ethernet AECs at 800G to help drive increased unit demand for Taurus SCMs over time, with higher value per solution due to higher device-level functionality.
Leo Smart CXL Memory Controllers are poised to address a large market opportunity as hyperscaler customers look to solve the memory bandwidth and capacity challenges that are becoming more prevalent with next-generation data center server CPUs, with new deployments and expanding use cases benefiting from their capabilities. Our Leo product family is shipping in pre-production volumes for rack-level testing at our hyperscaler customers to support memory and database applications within general-purpose compute systems, while also being assessed for additional use cases. We expect our Leo CXL memory controllers to see broadened adoption across the ecosystem as CXL-capable data center server CPUs begin volume deployment into new infrastructure.
Lastly, our COSMOS software suite is integrated across our entire product portfolio and our hyperscaler customers’ fleet management stacks to leverage our software-defined architecture. With AI architectures evolving at a rapid pace, the flexibility afforded by this differentiated approach allows hyperscalers to seamlessly customize, optimize, monitor, and manage their valuable infrastructure. This flexibility is becoming even more critical as the industry moves from heterogeneous front-end connectivity architectures to platform-specific, and sometimes proprietary, back-end clustering protocols designed to deliver the highest performance and tightest reliability.
Looking Ahead
Astera Labs is strategically positioned for long-term growth, driven by a combination of strong secular tailwinds and our expanding product portfolio, which will drive higher dollar content opportunity per AI platform. We have become a trusted and strategic partner to our hyperscaler and system OEM customers, with tens of millions of smart connectivity solutions widely deployed and field-tested across nearly all AI infrastructure programs globally. This close collaboration with hyperscalers and AI platform suppliers has given us a front-row seat to the direction of compute technologies and the connectivity topologies required to support next-generation AI applications. While the introduction of the Scorpio Smart Fabric Switch family is the next critical step in our corporate journey, we are hard at work identifying and developing new technologies that will expand Astera Labs’ footprint from I/O devices within the rack to fabric-class solutions that connect AI accelerators across data centers.
With a strategic focus on key markets, a talented team, and innovative products, Astera Labs is well equipped to provide exceptional value to both customers and investors.
There is much more on the horizon—stay tuned!
About Jitendra Mohan, Chief Executive Officer, Co-Founder
Jitendra co-founded Astera Labs in 2017 with a vision to remove performance bottlenecks in data-centric systems. He has more than two decades of engineering and general management experience identifying and solving complex technical problems in the data center and server markets. Prior to Astera Labs, he served as General Manager for Texas Instruments’ High Speed Interface and Clocking businesses. Earlier, at National Semiconductor Corp., Jitendra led engineering teams in various technical leadership roles. Jitendra holds a BSEE from IIT-Bombay, an MSEE from Stanford University, and over 35 granted patents. Outside of work, Jitendra enjoys outdoor activities and reading about the origins of the Universe.