AI Accelerator Baseboard

Deliver high-performance, robust PCIe® link connectivity with optimized thermal efficiency in Universal Baseboard (UBB) designs


UBB Specification Background: As AI models grow in complexity, the demand for larger-scale AI systems that distribute processing across multiple GPUs intensifies. To meet this challenge, the Open Compute Project (OCP) community introduced a modular UBB to enable flexible, scalable GPU system configurations while addressing the need for efficient heat dissipation.

Challenges

  • Connecting multiple AI accelerators using PCIe® can cause signal integrity issues, making it difficult to maintain reliable connections.
  • Complex, large-scale clusters with multiple devices located far apart introduce stringent PCB layout, power, and thermal design constraints.
  • Cloud-scale deployments require flexible solutions with software configurability for customized integration into diagnostics and infrastructure management.
  • Dense PCIe links between multiple GPUs, CPU root complexes, switches, and other endpoints present risks in system design and data center deployment.
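The distance and signal-integrity constraints above come down to a channel loss budget: every inch of PCB trace, every via, and every connector consumes part of a fixed end-to-end allowance. The sketch below illustrates the arithmetic with assumed numbers (a ~36 dB PCIe 5.0 budget at the 16 GHz Nyquist frequency, ~1.5 dB/inch trace loss, and a lumped fixed loss for packages, vias, and connectors); actual budgets depend on the specification revision, laminate, and topology.

```python
# Illustrative channel-budget arithmetic for a long UBB PCIe reach.
# All numbers are assumptions for the example, not Astera Labs figures.

CHANNEL_BUDGET_DB = 36.0       # assumed end-to-end insertion-loss budget (PCIe 5.0 class)
TRACE_LOSS_DB_PER_IN = 1.5     # assumed trace loss for a mid-grade laminate at Nyquist
FIXED_LOSS_DB = 12.0           # assumed packages + vias + two connectors

def max_trace_reach_in(budget_db: float = CHANNEL_BUDGET_DB,
                       trace_db_per_in: float = TRACE_LOSS_DB_PER_IN,
                       fixed_db: float = FIXED_LOSS_DB) -> float:
    """Longest PCB trace (inches) that still fits within the loss budget."""
    return (budget_db - fixed_db) / trace_db_per_in

print(f"Max unretimed trace: {max_trace_reach_in():.1f} in")
# A retimer retransmits a clean signal mid-channel, so each segment gets
# its own budget, roughly doubling the reachable trace length.
print(f"With one retimer mid-channel: ~{2 * max_trace_reach_in():.1f} in")
```

With these assumed numbers the unretimed reach is 16 inches, which is why dense baseboards routing GPUs to distant connectors typically need retimers in the path.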

Solution: Aries PCIe®/CXL® Smart DSP Retimer

  • Purpose-built for demanding AI server channels, delivering robust signal integrity and link stability over long distances for dense GPU baseboards with multiple high-speed connectors
  • Low latency with a compact footprint for optimized performance, power consumption and thermal efficiency
  • Real-time lane, link and device health monitoring and management features with Astera Labs’ COSMOS suite that provides comprehensive diagnostics and telemetry
  • Rigorous interop testing in our Cloud-Scale Interop Lab with leading GPU, CPU, PCIe switch, network, and storage endpoints to minimize interoperability risk and accelerate time-to-market
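To make the lane/link health monitoring concrete, the sketch below decodes a hypothetical per-lane status word as a management agent might. The register layout, field names, and bit positions are invented for illustration only; the actual COSMOS interfaces are documented by Astera Labs and are not described on this page.

```python
# Hypothetical per-lane telemetry decoding. The 32-bit word layout below
# (bit 31 = link up, bits 30:16 = eye margin, bits 15:0 = CRC error count)
# is an assumption for the example, not the real COSMOS register map.

from dataclasses import dataclass

@dataclass
class LaneHealth:
    lane: int
    link_up: bool
    eye_margin: int   # arbitrary units; larger means more signal margin
    crc_errors: int

def decode_lane_word(lane: int, word: int) -> LaneHealth:
    """Unpack one hypothetical 32-bit lane-status word into named fields."""
    return LaneHealth(
        lane=lane,
        link_up=bool((word >> 31) & 0x1),
        eye_margin=(word >> 16) & 0x7FFF,
        crc_errors=word & 0xFFFF,
    )

# Example: lane 3 reporting link up, eye margin 0x42, and 2 CRC errors.
status = decode_lane_word(3, (1 << 31) | (0x42 << 16) | 2)
print(status)
```

A fleet-management service would poll such words across all lanes and devices, flagging links whose margin trends down or whose error counters climb, which is the kind of diagnostics-and-telemetry integration the bullet above describes.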