The upgrade from PCIe® 4.0 to PCIe 5.0 doubles the data rate from 16 GT/s to 32 GT/s, but the signal also suffers greater attenuation per unit distance, even though the PCIe 5.0 specification increases the total insertion loss budget to 36 dB. After deducting the loss budgets for the CPU package, add-in card (AIC), and CEM connector, only a 16 dB system board budget remains. Within this remaining budget, engineers must also reserve safety margin for board loss variations due to temperature and humidity.
By Liang Liu, Systems and Applications Engineer, Astera Labs – PCI-SIG® Member
PCI Express® (PCIe®) technology is the most important high-speed serial bus in servers. Due to its high bandwidth and low latency characteristics, PCI Express architecture is widely used in various server interconnect scenarios, such as:
- Within a Server: CPU to GPU, CPU to Network Interface Card, CPU to Accelerator, CPU to SSD
- Within a Rack: CPU to JBOG and JBOF through board-to-board connector or cable
- Emerging GPUs-to-GPUs or Accelerators-to-Accelerators interconnects
At the same time, with the rapid development of heterogeneous computing, data throughput requirements in server systems keep rising. Two years after the release of the PCIe 4.0 specification, the PCIe 5.0 specification was officially released in May 2019. PCIe 5.0 technology retains the same 128b/130b encoding scheme, while the data rate doubles from 16 GT/s to 32 GT/s. In keeping with tradition, the PCIe 5.0 specification is backwards compatible with lower-speed PCIe generations.
The Challenge of PCIe 5.0 Technology Design
For other serial standards running above 30 GT/s, PAM-4 modulation is typically used so that the signal's Nyquist frequency is one-quarter of the data rate, at the cost of about 9.5 dB of signal-to-noise ratio (SNR). However, PCIe 5.0 architecture continues to use the non-return-to-zero (NRZ) signaling scheme, so the Nyquist frequency of the signal is one-half of the data rate, which is 16 GHz. The higher the frequency, the greater the attenuation: the signal attenuation caused by channel insertion loss (IL) is the biggest challenge of PCIe 5.0 technology system design.
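The frequency and SNR trade-off above can be checked with a few lines of arithmetic. This is an illustrative sketch (the 9.5 dB figure follows from PAM-4's eye height being one-third of NRZ's for the same voltage swing):

```python
import math

def nyquist_ghz(data_rate_gt_s: float, levels: int) -> float:
    """Nyquist frequency in GHz: half the symbol rate.

    NRZ (2 levels) carries 1 bit per symbol; PAM-4 (4 levels) carries 2,
    so PAM-4 halves the symbol rate for the same data rate.
    """
    bits_per_symbol = math.log2(levels)
    symbol_rate = data_rate_gt_s / bits_per_symbol
    return symbol_rate / 2

# PCIe 5.0 at 32 GT/s with NRZ signaling
print(nyquist_ghz(32, 2))   # 16.0 (GHz)
# Hypothetical PAM-4 at the same data rate
print(nyquist_ghz(32, 4))   # 8.0 (GHz)

# PAM-4 eye height is 1/3 of NRZ for the same swing: 20*log10(3) dB SNR cost
print(round(20 * math.log10(3), 1))  # 9.5
```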
The PCIe 5.0 specification outlines a bump-to-bump IL budget of 36 dB for 32 GT/s, with a required bit error rate (BER) below 10⁻¹². To address the problem of high signal attenuation, the PCIe 5.0 specification defines the reference receiver such that the continuous-time linear equalizer (CTLE) model includes an adjustable DC gain as low as -15 dB, whereas the reference receiver for 16 GT/s only reaches -12 dB. The reference decision feedback equalizer (DFE) model includes three taps for 32 GT/s versus two taps for 16 GT/s.
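To illustrate what the feedback taps do, here is a minimal direct-feedback DFE sketch. This is not the specification's reference model; the tap value and the toy channel below are invented for illustration:

```python
def dfe_slice(received, taps):
    """Minimal DFE: subtract weighted past decisions, then slice.

    `received` holds equalized samples of +/-1 NRZ symbols plus residual
    post-cursor ISI; `taps` are the feedback coefficients (three for the
    32 GT/s reference receiver, two for 16 GT/s). Illustrative only.
    """
    decisions = []
    for r in received:
        # cancel ISI contributed by the last len(taps) decisions
        fb = sum(t * d for t, d in zip(taps, reversed(decisions[-len(taps):])))
        z = r - fb
        decisions.append(1 if z >= 0 else -1)
    return decisions

# toy channel: each symbol drags 30% of the previous one along (post-cursor ISI)
tx = [1, -1, 1, 1, -1, -1, 1]
rx = [tx[n] + 0.3 * (tx[n - 1] if n else 0) for n in range(len(tx))]
print(dfe_slice(rx, [0.3, 0.0, 0.0]) == tx)  # True: ISI fully cancelled
```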
In addition, the possibility of errors on the serial link becomes higher as the data rate reaches 32 GT/s. Due to the significant role the DFE circuit plays in the receiver's overall equalization, burst errors are more likely to occur than at 16 GT/s: a single wrong decision feeds back into subsequent decisions. To counteract this risk, PCIe 5.0 architecture introduces precoding in the protocol. With precoding enabled at the transmitter side and decoding at the receiver side, the chance of burst errors is greatly reduced, thereby enhancing the robustness of the PCIe 5.0 specification 32 GT/s Link.
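The burst-collapsing effect of precoding can be demonstrated with a simplified XOR-based 1/(1+D) scheme. This sketch is an illustration of the general technique, not the exact encoding the PCIe 5.0 specification defines; the data pattern and burst position are made up:

```python
def precode(bits):
    """Transmit-side 1/(1+D) precoding: y[n] = x[n] XOR y[n-1], y[-1] = 0."""
    out, prev = [], 0
    for b in bits:
        prev = b ^ prev
        out.append(prev)
    return out

def decode(bits):
    """Receive-side (1+D) decoding: x[n] = y[n] XOR y[n-1]."""
    out, prev = [], 0
    for b in bits:
        out.append(b ^ prev)
        prev = b
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0]
sent = precode(data)
# model DFE error propagation: a 4-bit burst of flipped line bits
corrupted = [b ^ 1 if 2 <= i <= 5 else b for i, b in enumerate(sent)]
errors = sum(a != b for a, b in zip(decode(corrupted), data))
print(errors)  # 2: the burst collapses to errors only at its two edges
```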
PCIe 5.0 Technology Channel Insertion Loss Budget
Table 1 uses a typical system base board plus add-in card (AIC) application as an example to list the insertion loss budgets for PCIe 4.0 architecture (16 GT/s) and PCIe 5.0 architecture (32 GT/s). At 32 GT/s, after deducting 9 dB for the CPU package, 9.5 dB for the AIC, and 1.5 dB for the CEM connector, only 16 dB remains for the system base board.
| PCIe® Rev | Total Channel Insertion Loss Budget | Root Package | CEM Connector | Add-in Card (AIC) | Remaining Budget for System Base Board |
|---|---|---|---|---|---|
| 3.0 (8 GT/s) | 22 dB | 3.5 dB | 1.7 dB | 6.5 dB | 10.3 dB |
| 4.0 (16 GT/s) | 28 dB | 5.0 dB | 1.5 dB | 8.0 dB | 13.5 dB |
| 5.0 (32 GT/s) | 36 dB | 9.0 dB | 1.5 dB | 9.5 dB | 16.0 dB |
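The last column of Table 1 is simply what is left after the other allocations are subtracted; a quick arithmetic check using the table's own numbers:

```python
# bump-to-bump IL budgets (dB) from Table 1, per PCIe generation
budgets = {
    "3.0 (8 GT/s)":  {"total": 22.0, "package": 3.5, "connector": 1.7, "aic": 6.5},
    "4.0 (16 GT/s)": {"total": 28.0, "package": 5.0, "connector": 1.5, "aic": 8.0},
    "5.0 (32 GT/s)": {"total": 36.0, "package": 9.0, "connector": 1.5, "aic": 9.5},
}

for rev, b in budgets.items():
    board = b["total"] - b["package"] - b["connector"] - b["aic"]
    print(f"{rev}: {board:.1f} dB remaining for the system base board")
```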
However, when looking at the 16-dB system base board budget, engineers need to consider the following factors:
- As the PCB temperature rises, the IL of the PCB trace becomes higher
- Process fluctuation during PCB manufacturing can result in slightly narrower or wider line widths, which can lead to fluctuations in IL
- The amplitude of the Nyquist frequency signal (16-GHz sine wave in the case of 32 GT/s NRZ signaling) at the source side is 800 mV pk-pk, which will reduce to about 12.7 mV after 36 dB of attenuation. This underscores the need to leave some IL margin for the receiver to account for reflections, crosstalk, and power supply noise that all potentially will degrade the SNR.
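The amplitude figure in the last bullet follows directly from the 20·log10 voltage definition of dB, as this short check shows:

```python
def amplitude_after_loss(v_pkpk_mv: float, loss_db: float) -> float:
    """Peak-to-peak amplitude after `loss_db` of insertion loss.

    Insertion loss in dB is 20*log10(Vin/Vout), so the surviving
    amplitude is Vin * 10^(-loss_db/20).
    """
    return v_pkpk_mv * 10 ** (-loss_db / 20)

# 800 mV pk-pk source swing through the full 36 dB budget
print(round(amplitude_after_loss(800, 36), 1))  # 12.7 (mV)
```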
Thus, the IL budget reserved for the PCB trace on the system base board should be 16 dB minus the margin reserved for the above factors. Many hardware engineers and system designers leave 10-20% of the overall channel IL budget as margin for such factors; in the case of a 36-dB budget, this amounts to roughly 4-7 dB.
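Working out that margin rule explicitly (the 10% and 20% endpoints are the guideline quoted above; the exact split any team chooses will vary):

```python
total_budget_db = 36.0   # bump-to-bump IL budget at 32 GT/s
board_budget_db = 16.0   # left for the system base board per Table 1

for frac in (0.10, 0.20):
    margin = total_budget_db * frac          # reserved for temperature,
    trace = board_budget_db - margin         # process, and SNR degradation
    print(f"{int(frac * 100)}% margin -> {margin:.1f} dB reserved, "
          f"{trace:.1f} dB left for the board trace")
```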
As demand for artificial intelligence and machine learning grows, PCIe 5.0 technology will enable more and more system topologies. The change from PCIe 4.0 architecture to PCIe 5.0 architecture raises the channel IL budget from 28 dB to 36 dB, yet also introduces new design challenges. By leveraging advanced PCB materials and/or PCIe 5.0 Retimers to ensure sufficient end-to-end design margin, system designers can ensure a smooth upgrade to PCIe 5.0 architecture.