
DDR5 DIMM Design and Verification Considerations

DDR5 is the latest generation of DDR server memory, capable of supporting data rates of up to 8800 Mbps, which is quite a leap over previous generations of DDR memories. It is used in a wide variety of applications, with the server and data center market being a key driver behind the adoption of DDR5-based memory systems. As systems move towards more CPU cores, higher bandwidth, and more capacity, DDR5 is expected to overtake DDR4 in usage this year.

DDR memories are used as part of Dual In-line Memory Module (DIMM) cards. DIMMs are a JEDEC-defined standard way of getting higher density and bus width by using several DRAM memories on a single DIMM card. Traditionally, DIMMs can be categorized as Small Outline DIMM/Unbuffered DIMM (using just the DRAM memories), Registered DIMM (using a Registering Clock Driver (RCD) plus the DRAMs), and Load Reduced DIMM (using an RCD plus DRAMs plus Data Buffers (DBs)), among other types.

This blog covers some of the most common considerations for design and verification engineers working with DDR5 SDRAM and DDR5 DIMM-based memory subsystems.

Reset and Power On Initialization

Successful power-on and reset sequencing of the DDR5 DIMM and its components is the first step in bringing up a DDR5-based memory subsystem. While the procedure is described in the JEDEC DDR5 SDRAM and RCD/DB specs, here are a few things to consider:

  • Setting the operating speed control words (CWs) for the RCD (and DB, if required) via the sideband bus, since in-band operations cannot be performed until these CWs are set
  • DRAM MR and RCD/DB CW programming: Many of the DDR5 SDRAM, RCD, and DB mode registers either have no default values or have defaults that must be explicitly programmed to something else for normal operation. The host needs to set these as part of power-on initialization. Some of the DRAM MRs that have to be programmed include MR8 (Read/Write Preamble and Postamble), MR13 (tCCD_L value), etc. Similarly, some of the RCD CWs that should be programmed include CH-A/B RW09 (Output Address and Control Enable), CH-A RW00 (Global Features), CH-A RW01 (Parity, CMD Blocking, and Alert), CH-A RW04 (Command Space), etc.
  • Device training: The host needs to train the different components of the DIMM, including the RCD, DRAM, and DB (if applicable), during DIMM card bring-up. Some of the training steps include CSTM/CATM, internal/external write leveling, write pattern training, etc., for the DRAM; DCSTM/DCATM, QCATM, QCSTM, etc., for the RCD; and DWL, HWL, MRW, MRD, MWD, etc., for the DB. Some of these training steps need to be repeated over the course of simulation as device PVT variables change. A rough sketch of this bring-up flow follows the list.
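As a rough illustration of the ordering above, here is a minimal bring-up skeleton for an RDIMM in Python. Everything in it is a placeholder: the host object, its sideband_write/mrw/train helpers, and all register values are hypothetical stand-ins, not Cadence VIP or JEDEC-defined APIs.

```python
# Hypothetical DDR5 RDIMM bring-up skeleton. Function names and register
# values are illustrative placeholders, not a real controller API.

def bring_up_rdimm(host):
    # 1. Sideband (I3C/SMBus) writes first: the RCD's operating-speed CWs
    #    must be set before any in-band (CA bus) traffic is legal.
    host.sideband_write(dev="RCD", cw="RW05", value=0x02)   # assumed speed CW

    # 2. RCD control words with no usable defaults (values are placeholders).
    for cw, val in [("RW00", 0x01),   # Global Features
                    ("RW01", 0x00),   # Parity, CMD Blocking, and Alert
                    ("RW04", 0x00),   # Command Space
                    ("RW09", 0x0F)]:  # Output Address and Control Enable
        host.sideband_write(dev="RCD", cw=cw, value=val)

    # 3. DRAM mode registers the host must program explicitly.
    host.mrw(mr=8,  value=0x08)   # Read/Write Preamble and Postamble
    host.mrw(mr=13, value=0x00)   # tCCD_L

    # 4. Training, front-to-back: host->RCD, then RCD->DRAM, then data path.
    host.train("DCSTM"); host.train("DCATM")   # RCD CS/CA training
    host.train("QCSTM"); host.train("QCATM")   # RCD-to-DRAM CS/CA training
    host.train("CSTM");  host.train("CATM")    # DRAM CS/CA training
    host.train("WRITE_LEVELING")               # internal + external
    host.train("WRITE_PATTERN")
```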

Speed Bin Compliance

The DDR5 SDRAM specification defines very specific latencies that are allowed for each operating speed. This is also true for other timing parameters like tAA, tRCD, etc. Refer to the DDR5 SDRAM Speed Bin Tables for planar and 3DS devices for more details. The DRAM spec also defines some tolerance, which is helpful for boundary conditions where jitter can push the device slightly over or under the defined range.
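As an example, a testbench-side checker might confirm that the programmed CAS latency lands tAA inside the speed-bin window, with the spec's tolerance applied at the boundaries. The sketch below is a minimal Python illustration; the bin limits in it are placeholders and must be replaced with the actual JEDEC Speed Bin Table values.

```python
# Illustrative speed-bin check; the limits below are placeholders,
# not JEDEC values -- consult the DDR5 SDRAM Speed Bin Tables.
SPEED_BINS = {
    # data rate (MT/s): (tCK_avg ns, tAA min ns, tAA max ns)
    4800: (0.416, 13.75, 20.00),
    5600: (0.357, 13.75, 20.00),
}

def check_taa(data_rate_mts, cas_latency, tolerance_ns=0.0):
    tck, taa_min, taa_max = SPEED_BINS[data_rate_mts]
    taa = cas_latency * tck                 # tAA = CL x tCK(avg)
    lo = taa_min - tolerance_ns             # spec tolerance at the boundaries
    hi = taa_max + tolerance_ns
    assert lo <= taa <= hi, (
        f"CL={cas_latency} gives tAA={taa:.2f} ns, outside [{lo}, {hi}] ns")

check_taa(4800, cas_latency=40)             # 16.64 ns, inside the window
```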

Refresh, RFM, and Temperature Requirements

Meeting the DRAM refresh requirement is the most important consideration for avoiding any loss of data in the DRAM. This can be challenging because PVT variation can cause the refresh requirement to change over time, and the host needs to keep track of the current state of the device. Another important consideration is the Refresh Management (RFM) requirement, especially when the device is operating at high temperatures. The host should stay in sync with the number of active rows that the DRAM can support within a given window. The host can also use Directed RFM (DRFM) and Adaptive RFM (ARFM) to further mitigate potential risks to data stored in the DRAM.
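One simple way to model the host's side of this RFM bookkeeping is a per-bank Rolling Accumulated ACT (RAA) counter, as in the Python sketch below. The threshold values are illustrative assumptions; the real RAAIMT/RAAMMT settings come from the device's mode registers.

```python
# Minimal per-bank RFM bookkeeping sketch. Threshold values are
# illustrative assumptions, not values from any datasheet.
class RfmTracker:
    def __init__(self, raaimt=32, raammt=192):
        self.raaimt = raaimt     # initial management threshold (assumed)
        self.raammt = raammt     # maximum management threshold (assumed)
        self.raa = {}            # bank -> rolling ACT count

    def on_activate(self, bank):
        self.raa[bank] = self.raa.get(bank, 0) + 1
        # The host must never let a bank's RAA exceed the maximum threshold.
        assert self.raa[bank] <= self.raammt, f"bank {bank} needs RFM now"

    def needs_rfm(self, bank):
        # Past the initial threshold, the host should schedule an RFM.
        return self.raa.get(bank, 0) >= self.raaimt

    def on_rfm(self, bank):
        # In this simplified model, an RFM pays down RAAIMT worth of ACTs.
        self.raa[bank] = max(0, self.raa.get(bank, 0) - self.raaimt)
```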

ZQ Calibration and Target/Non Target On-Die Termination

DDR5 supports both Target (T) and Non-Target (NT) ODT to improve signal integrity, among other things. The host is required to program the T-ODT and NT-ODT register settings properly. The host is also responsible for managing command spacing and the programmable ODT latency (ODTLon/ODTLoff) times for read/write command data phases. Similarly, there are several things the host should consider when a ZQ calibration is started or latched, including the requirement to keep signals like DQ/DQS at high impedance and to not issue any commands to the DRAM while a ZQ latch operation is in progress.
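These blocking rules can be captured in a small checker, sketched below in Python. The tZQCAL/tZQLAT durations used here are assumptions for illustration; the actual values must be taken from the DDR5 SDRAM spec or the device datasheet.

```python
# Sketch of a ZQ calibration window checker. Timing values are assumptions;
# take the real tZQCAL/tZQLAT from the DDR5 SDRAM spec.
T_ZQCAL_NS = 1000    # assumed time needed after ZQCAL Start
T_ZQLAT_NS = 30      # assumed time needed after ZQCAL Latch

class ZqChecker:
    def __init__(self):
        self.cal_end = -1.0
        self.latch_end = -1.0

    def on_zqcal_start(self, now_ns):
        self.cal_end = now_ns + T_ZQCAL_NS

    def on_zqcal_latch(self, now_ns):
        assert now_ns >= self.cal_end, "ZQCAL Latch before tZQCAL elapsed"
        self.latch_end = now_ns + T_ZQLAT_NS

    def on_command(self, cmd, now_ns):
        # No commands to this DRAM while the latch operation is in flight;
        # DQ/DQS must also stay at high impedance during calibration.
        assert now_ns >= self.latch_end, f"{cmd} issued in ZQ latch window"
```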

DRAM/DIMM Reliability Features

DDR5 supports a number of data reliability features like on-die Error Correction Code (ECC), Error Check and Scrub (ECS), CRC, parity, MBIST, etc., that the host needs to keep track of. For example, the host is recommended to run a full error scrub cycle once every 24 hours and is required to keep issuing regular refresh commands during manual ECS operation. Similarly, transmission errors resulting in CRC or parity errors need to be handled carefully, and features like sPPR/hPPR can be used when required.
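For example, a host relying on manual ECS can budget its average scrub rate as in the sketch below, where the codeword count is a made-up example for a hypothetical device and the 24-hour window comes from the recommendation above.

```python
# Rough manual-ECS budgeting sketch; the codeword count is a hypothetical
# example, not a real device's density.
def ecs_interval_us(total_codewords, scrub_window_hours=24):
    """Average spacing between manual ECS commands so that every codeword
    is scrubbed once per window, assuming one codeword per ECS command."""
    window_us = scrub_window_hours * 3600 * 1_000_000
    return window_us / total_codewords

# e.g. a hypothetical device with 2**26 ECC codewords:
interval = ecs_interval_us(total_codewords=2**26)
print(f"issue one ECS roughly every {interval:.0f} us")   # ~1287 us
```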

Rank to Rank Spacing Constraints

The host needs to consider the rank-to-rank command separation requirements explicitly or implicitly mentioned in the specifications. This is important to avoid potential data clobbering and damage to the DRAM caused by different DRAMs/hosts driving different logic values at the same time. Other rank-to-rank considerations include accounting for ZQ resistor sharing between different DRAMs within the same rank or across ranks, and proper target and non-target ODT settings for DRAMs in different ranks, among other things.
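As a toy illustration, the read-to-read gap between different ranks can be estimated as the data burst time on the bus plus a turnaround allowance so the two ranks never drive DQS simultaneously. The parameter names and values below are generic placeholders, not JEDEC timing symbols.

```python
# Toy rank-to-rank read spacing estimate; parameters are generic
# placeholders, not exact JEDEC symbols or values.
def rd_to_rd_gap_ck(burst_length=16, turnaround_ck=2, preamble_ck=2):
    # Data occupies BL/2 clocks on the double-data-rate bus; add turnaround
    # so the previous rank's DQS driver is off before the next preamble.
    return burst_length // 2 + turnaround_ck + preamble_ck

print(rd_to_rd_gap_ck())   # e.g. 12 tCK between READs to different ranks
```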

Cadence VIP supports DDR5 SDRAM and all types of DDR5 DIMMs, like DDR5 SO-DIMM/UDIMM, RDIMM, and LRDIMM, among others. Cadence Online Support (COS) also has extensive DDR5 VIP articles explaining the many features of the DDR5 and DDR5 DIMM memory models.

More information on Cadence DDR5 VIP is available at Cadence VIP Memory Models Website.

