The Love/Hate Relationship with DDR SDRAM Controllers
by Graham Allan, MOSAID
Almost everyone knows that the bulk of DRAMs produced end up in desktop and laptop computers just like the one used to write this article. In fact, approximately 90% of all DRAMs produced are used in computers, leaving the remaining 10% as square pegs pounded into round holes. An increasing number of SoC designs require an interface to some form of external memory. The modern DDR2 SDRAM offers security of supply, high storage capacity, low cost, and reasonable channel bandwidth, but it comes with an awkward interface and complicated controller issues.
When that awkward interface is combined with the unique command structure arising from the internal DRAM arrays, SoC designers face a daunting task in incorporating a modern DRAM interface into their designs.
A Brief History of the SDRAM
The evolution of the commodity DRAM over the past 15 years has seen peak interface bandwidth increase by a factor of well over 20. While no one has been able to bend the fundamentals of physics and make similar improvements to the latency of a basic random row access, the increase in bandwidth at the pins and the ability to access data in bursts have helped close the gap with a typical processor's insatiable appetite for memory bandwidth. Throughout this period, the Joint Electron Device Engineering Council (JEDEC) committee known as JC42 has been the primary breeding ground for commodity DRAM industry standards.
In late 1993, JEDEC issued the original SDRAM standard, which eventually became known as the PC100 SDRAM standard. Pushing the SDRAM timing parameters to their practical limit produced PC133 SDRAM, which increased the channel frequency to 133MHz.
By the late 1990s, JEDEC had a solid DRAM roadmap. Beginning in 1996 and concluding in June 2000, JEDEC developed the DDR (Double Data Rate) SDRAM specification (JESD79). To offer a significant improvement for systems requiring higher bandwidth, DDR SDRAMs incorporated major advances over PC100 and PC133 SDRAMs, including dual-edge clocking (a.k.a. Double Data Rate, or DDR, clocking), source-synchronous data strobes, SSTL_2 low-voltage signaling, and an internal delay-locked loop (DLL). DDR2 SDRAMs were subsequently specified by 2003 (JESD79-2), offering per-pin bandwidth of up to 800Mb/s, double that of DDR SDRAMs.
During the development of the DDR and DDR2 SDRAM standards, engineers focused more attention on overall system timing budgets and on the critical areas that were limiting performance. DDR clocking was a proven way to improve bandwidth while avoiding higher-frequency clocks (although it did place new emphasis on clock duty-cycle requirements). Perhaps the most noteworthy elements of the DDR and DDR2 SDRAM standards were the adoption of source-synchronous clocking and the incorporation of an on-chip DLL (or equivalent circuit).
Keep the DRAMs Simple, Put Complexity in the Controller
Three critical decisions forever complicated the DDR SDRAM memory controller. DLLs or equivalent circuits first appeared in some single-data-rate SDRAMs in the late 1990s to eliminate some of the clock insertion delay between the clock pin and the data output buffers. Using a DLL to reduce the data access time from clock (tAC) significantly improved timing budgets. However, most DRAM vendors were able to get by without a DLL, so those that had relied on one quickly revised their designs to manage without it. By the time DDR SDRAM was being developed, the DLL had become a design requirement, as the clock insertion delay was insurmountable at the clock frequencies DDR SDRAM demanded. Incorporating a DLL or equivalent circuitry into the DDR SDRAM also required a logical specification relating the edges of the output data eye to the edges of the input clock. This was the first of the three critical decisions that, as we will see, complicated the memory controller design: JEDEC decided to align the edge of the output data with the edge of the clock.
To facilitate high-bandwidth operation, DDR SDRAMs use a source-synchronous design in which one or more data strobes (DQS) are generated by the same SDRAM chip that is transmitting data. The advantage of this scheme is that the data signals and the DQS strobe(s) have similar loading and physical characteristics, so the DDR SDRAM can easily drive the DQS strobe with minimal skew relative to the data pins. Using the DQS strobe to sample the read data at the memory controller enables higher bandwidth. However, the adoption of data strobes forced the second critical decision affecting the memory controller: where to place the edges of the data strobes in relation to the data eye. In an ideal world, the most logical alignment places the data strobe edges exactly in the middle of the data eye, easing data capture at the controller. However, this would have significantly complicated the DLL used on the SDRAM, which was only there to eliminate clock insertion delay; centering the data strobe edges in the DDR data eyes would have required the SDRAM DLL to shift the strobe edges by precisely 90 degrees. Logically, it makes little sense to add a cost burden to multiple memory components when they typically interface to a single memory controller. Thus, the decision was made to make life easier (and cheaper) for the DDR SDRAM, and more complicated for the controller, by aligning the edges of the read data eye and the data strobes. The burden of shifting the data strobe into the center of the read data eye to properly sample the data was left to the controller. Conversely, for write data sent to the DDR SDRAM, the decision was made to require that the data strobe be centered within the write data eyes, making it easy for the SDRAM to sample the data. Again, this requires the DDR controller to incorporate the complex circuitry needed to precisely time the placement of the data strobe edges.
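To put a rough number on that 90-degree shift (the figures below are assumed for a DDR2-800 channel and are not taken from this article): with a 400MHz memory clock, the required shift is a quarter of the clock period, and it must hold against a data eye no wider than half a clock period:

$$ t_{\text{shift}} = \frac{T_{CK}}{4} = \frac{2.5\ \text{ns}}{4} = 625\ \text{ps}, \qquad t_{\text{eye}} \le \frac{T_{CK}}{2} = 1.25\ \text{ns} $$

Even a 100ps error in placing the strobe edge consumes a meaningful fraction of that eye, which is why the controller-side timing circuitry must be so precise.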
The final critical decision affecting the memory controller concerned the data strobes themselves: should they be unidirectional (one strobe signal for reads and another for writes) or bi-directional (one strobe that is turned around between reads and writes)? Ultimately, to conserve pins and for other reasons, JEDEC adopted a bi-directional data strobe. As a result, the data strobes are not free-running clocks; they are driven by the DDR SDRAM only when data is being output and must be driven by the memory controller when write data is presented to the DDR SDRAM.
In hindsight, these key decisions were entirely valid when looking at a memory subsystem from a total-cost point of view: keep the complicated elements in the fewest chips. The result, however, is that these three critical decisions placed all of the heavy lifting onto the shoulders of the memory controller. For write operations to the DDR SDRAM, the memory controller must place the data strobe in the middle of the data eye. For read operations from the DDR SDRAM, the memory controller must shift the data strobe into the middle of the data eye to properly capture the data. Add to this that the data strobe is not a free-running periodic signal, and the requirement for a master/slave DLL within the memory controller emerges. Typically, a memory controller uses a master DLL to lock to the free-running, periodic system clock and a slave DLL to shift the non-continuous data strobe such that its edges are centered in the DDR data eyes.
DDR2 SDRAMs have further complicated the data strobe functionality by offering the option of a differential strobe. Meant to track the single-ended data signals, differential data strobes introduce a different logical threshold, making the system more sensitive to slew rates. This has largely been addressed with extensive derating tables based on signal slew rates.
The bi-directional data strobe pins are tristated (undriven, they are pulled to the termination voltage level, VTT) when neither the memory controller nor the SDRAM is driving data. To prevent noise on a tristated DQS from generating false DQS edges, the data strobe input buffers within the memory controller are typically enabled only during read cycles. The DQS input buffer enable scheme implemented within a memory controller must compensate for various delays and uncertainties, such as I/O delays, board delays, CAS latency, additive CAS latency, and general timing uncertainties. Typically, a data training sequence is performed at startup to find the optimum position for the DQS input buffer enable signal. This can be accomplished by performing reads of deterministic patterns while sweeping through the possible system latency values.
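As a minimal sketch of such a training sequence, the loop below sweeps a hypothetical gate-delay setting, reads back a pre-written pattern at each step, and centers the final setting in the passing window. All hardware hooks (phy_set_dqs_gate_delay, ctrl_read_burst) are invented for illustration; here they are stubbed to model a window that passes between taps 10 and 20.

```c
/* Sketch of DQS input buffer enable (gate) training: sweep the candidate
 * enable delay, read a known pattern at each setting, and centre the
 * final setting in the passing window.  The phy_/ctrl_ hooks are
 * hypothetical; the stubs below simply model a pass window at taps 10-20. */
#include <stdint.h>
#include <stdio.h>

#define PATTERN  0xA5A5A5A5u   /* deterministic pattern written beforehand */
#define MAX_TAPS 32            /* sweep range, in delay-line taps          */

static unsigned g_gate;        /* stub state: current gate delay           */
static void phy_set_dqs_gate_delay(unsigned t) { g_gate = t; }
static uint32_t ctrl_read_burst(uint32_t addr)
{
    (void)addr;                /* stub: reads succeed only inside window   */
    return (g_gate >= 10 && g_gate <= 20) ? PATTERN : 0u;
}

/* Returns the centred gate delay, or -1 if no passing setting was found. */
static int train_dqs_gate(uint32_t test_addr)
{
    int first = -1, last = -1;
    for (unsigned t = 0; t < MAX_TAPS; t++) {
        phy_set_dqs_gate_delay(t);
        if (ctrl_read_burst(test_addr) == PATTERN) {
            if (first < 0) first = (int)t;
            last = (int)t;
        } else if (first >= 0) {
            break;             /* the passing window has closed            */
        }
    }
    return (first < 0) ? -1 : (first + last) / 2;
}

int main(void)
{
    printf("DQS gate delay = %d\n", train_dqs_gate(0));  /* prints 15 */
    return 0;
}
```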
The DDR PHY – More Than Just I/Os
For SoCs that require an interface to an external DDR (DDR or DDR2) SDRAM, the physical interface (PHY) requirement includes, at a minimum, application-specific SSTL I/Os and some means of handling the timing requirements of the data strobes. DDR2 SDRAM PHYs use SSTL I/Os that incorporate programmable on-die termination (ODT) resistors, replacing those previously required as external components. In addition, some form of PLL, DLL, or calibrated delay circuitry is required to shift the data strobes into the center of the data eyes, as previously outlined.
Solutions that use a calibrated delay circuit typically rely on a training sequence in which the delay line is swept from minimum to maximum with expected data to find the edges of the fail and pass regions, with the final setting placed in the middle of the pass region. This approach is more sensitive to temperature and voltage variation, as the delay line drift is not self-correcting. Periodic recalibration is one way to address the problem, but it can consume precious memory channel bandwidth. In addition, calibrated delay circuits do not accommodate spread-spectrum clocking, as the delay remains fixed while the clock frequency is modulated.
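Periodic recalibration can be pictured as a narrower version of the same sweep, re-centering around the current tap rather than scanning the full range. The sketch below is an assumption-laden illustration; set_delay_tap and read_test_pattern_ok are hypothetical hooks, and each probe read is a channel slot stolen from normal traffic, which is the bandwidth cost noted above.

```c
/* Hypothetical periodic recalibration of a calibrated delay line: probe a
 * small band around the current tap, find where reads fail on each side,
 * and re-centre in the surviving pass region. */
#include <stdbool.h>

extern void set_delay_tap(int tap);        /* assumed PHY hook            */
extern bool read_test_pattern_ok(void);    /* assumed: one probe read     */

int recalibrate(int current_tap, int band)
{
    int lo = current_tap, hi = current_tap;

    while (lo > current_tap - band) {      /* walk down until a read fails */
        set_delay_tap(lo - 1);
        if (!read_test_pattern_ok()) break;
        lo--;
    }
    while (hi < current_tap + band) {      /* walk up until a read fails   */
        set_delay_tap(hi + 1);
        if (!read_test_pattern_ok()) break;
        hi++;
    }

    int centered = (lo + hi) / 2;          /* midpoint of the pass region  */
    set_delay_tap(centered);
    return centered;
}
```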
PHYs incorporating DLLs or PLLs do not require external calibration, as they are entirely self-calibrating. The PLL/DLL is locked to the clock frequency and is therefore immune to temperature and voltage variation, because the delay line or VCO is constantly adjusted to match the clock frequency. PLLs and DLLs also track the frequency changes of spread-spectrum clocking and self-correct their delays. Using a master DLL that generates precise 90-degree phases of the input clock, along with a slave (mirror) delay line controlling the strobes, the edges of the data strobes can be accurately shifted into the center of the data eyes. The mirror or slave delay line is required because the data strobes are not free-running clocks. Using a PLL often requires that the memory channel clock be multiplied by 4 to generate the 90-degree phases of the clock, and a PLL still requires some form of slave delay line to time the data strobe edges.
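A toy model may help make the master/slave (mirror) relationship concrete. In the sketch below, an idealized master "locks" a digital delay line to one full clock period, and the slave line reuses a quarter of the master's tap code to realize the 90-degree strobe shift. The tap resolution and clock figures are assumptions, and a real DLL tracks the lock point continuously rather than computing it once.

```c
/* Toy model of a master/slave DLL: the master finds the tap count that
 * spans one clock period (360 degrees); the slave mirrors a quarter of
 * that code to delay the non-free-running DQS by 90 degrees. */
#include <stdio.h>

#define TAP_PS 25  /* assumed delay per tap, in picoseconds */

/* Idealised, noise-free "lock": taps needed to span one clock period. */
static unsigned master_lock(unsigned clock_period_ps)
{
    return clock_period_ps / TAP_PS;
}

int main(void)
{
    unsigned tck_ps = 2500;                  /* 400MHz clock (DDR2-800)   */
    unsigned master = master_lock(tck_ps);   /* taps spanning 360 degrees */
    unsigned slave  = master / 4;            /* mirrored taps: 90 degrees */

    printf("master = %u taps, slave = %u taps (~%u ps shift)\n",
           master, slave, slave * TAP_PS);   /* 100 taps, 25 taps, 625 ps */
    return 0;
}
```

Because the master relocks continuously as voltage and temperature drift, the mirrored quarter-code stays at 90 degrees without any explicit recalibration.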
The DDR Controller – More Brains than Brawn
The brains behind any DRAM controller lie in the logic associated with command timing and execution. DDR SDRAMs are not straightforward devices. They contain multiple independent banks, and every random read or write access must be preceded by a bank activate command and ultimately followed by a bank precharge command. Once a bank has been activated, the result is an open page of data that permits more than one read or write operation to a small subset of the bank.
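To see why open pages matter, consider some illustrative timings (assumed values, not from this article) of roughly 15ns each for tRP (precharge), tRCD (activate-to-read), and CL (CAS latency). A read that lands in a bank with the wrong row open must precharge, activate, and then read, while a read that hits the already-open page pays only the CAS latency:

$$ t_{\text{miss}} \approx t_{RP} + t_{RCD} + CL = 15 + 15 + 15 = 45\ \text{ns}, \qquad t_{\text{hit}} \approx CL = 15\ \text{ns} $$

Under these assumptions a page hit returns data roughly three times sooner, which is exactly the gap the scheduling techniques below try to exploit.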
To maximize memory channel bandwidth, it is advantageous to look ahead into the queue of commands and group together those that access an open page in an open bank. Reducing the overhead of bank activate and precharge "downtime" via command reordering and scheduling, as sketched below, can significantly improve the performance of the SoC-to-memory channel.
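A minimal sketch of such open-page-first reordering follows, assuming a simple request queue and per-bank open-row state (all structures invented for illustration): scan the pending requests, issue one that hits an already-open row if possible, and otherwise fall back to the oldest request.

```c
/* Hedged sketch of open-page-first command reordering: prefer a request
 * whose row is already open in its bank; otherwise issue the oldest
 * request (which will first need a precharge and an activate). */
#include <stddef.h>
#include <stdio.h>

#define NBANKS 8

typedef struct { unsigned bank, row, valid; } Req;
typedef struct { int open_row[NBANKS]; } BankState;   /* -1 = precharged */

/* Returns the queue index to issue next, or -1 if nothing is pending. */
static int pick_next(const Req *q, size_t n, const BankState *bs)
{
    int oldest = -1;
    for (size_t i = 0; i < n; i++) {
        if (!q[i].valid) continue;
        if (oldest < 0) oldest = (int)i;               /* oldest pending  */
        if (bs->open_row[q[i].bank] == (int)q[i].row)
            return (int)i;                             /* page hit: issue */
    }
    return oldest;
}

int main(void)
{
    Req q[3] = { {0, 7, 1}, {2, 4, 1}, {0, 9, 1} };
    BankState bs = { { -1, -1, 4, -1, -1, -1, -1, -1 } };
    printf("issue index %d\n", pick_next(q, 3, &bs));  /* 1: bank 2 row 4 */
    return 0;
}
```

A production scheduler would also bound how far a page hit may jump ahead of older requests, for the starvation reasons discussed below.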
The memory controller should also make every attempt to “hide” the bank activate and precharge commands in command slots that would otherwise go unused. Minimizing command contention also optimizes the channel performance.
The DDR SDRAM controller logic must also handle the refresh requirements of the DRAMs. Arbitrating between a latency-intolerant command and an overdue refresh requires complex prioritization within the controller. The controller must also frequently arbitrate among multiple sub-blocks in the SoC that use the memory resource. Such arbitration requires the ability to prioritize traffic in the memory channel without starving low-priority commands behind an endless queue of high-priority ones. Ultimately, this process can never be perfect and is frequently tailored to specific applications.
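One common way to reconcile priority with starvation avoidance is ageing: each waiting requester's effective priority grows with time, so even low-priority traffic eventually wins, and an overdue refresh can be modeled as a requester whose priority ramps as its deadline approaches. The sketch below is an illustrative model only; the masters, priorities, and ageing rate are all assumed.

```c
/* Sketch of priority arbitration with ageing: static priority plus a
 * boost per waiting cycle, so low-priority requests cannot be starved
 * by an endless stream of high-priority ones. */
#include <stdio.h>

#define NREQ      4
#define AGE_BOOST 1          /* effective-priority gain per waiting cycle */

typedef struct {
    int pending;             /* request outstanding this cycle            */
    int prio;                /* static priority: higher wins              */
    int wait;                /* cycles spent waiting so far               */
} Master;

/* Grant one requester per cycle; returns -1 if nothing is pending. */
static int arbitrate(Master *m, int n)
{
    int best = -1, best_eff = -1;
    for (int i = 0; i < n; i++) {
        if (!m[i].pending) continue;
        int eff = m[i].prio + m[i].wait * AGE_BOOST;
        if (eff > best_eff) { best_eff = eff; best = i; }
    }
    for (int i = 0; i < n; i++)          /* age everyone left waiting */
        if (m[i].pending && i != best) m[i].wait++;
    if (best >= 0) { m[best].pending = 0; m[best].wait = 0; }
    return best;
}

int main(void)
{
    Master m[NREQ] = { {1, 1, 0}, {1, 8, 0}, {0, 5, 0}, {1, 2, 0} };
    for (int c = 0; c < 3; c++)
        printf("cycle %d: grant %d\n", c, arbitrate(m, NREQ));
    return 0;                /* grants 1, then 3, then 0: nobody starves */
}
```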
IP to the Rescue
Developing a DDR SDRAM interface requires multiple engineering disciplines. The brains of the memory controller are developed with a typical ASIC design flow (RTL, logic synthesis, place and route), while the brawn of the PHY is developed in a full-custom, mixed-signal design environment (schematic capture, analog simulation, custom layout). Few modern SoC teams have access to the expertise and EDA tools needed to satisfy both fields. Fortunately, today's SoC designers no longer have to dread the memory controller and interface challenges, as semiconductor IP is now available, reducing total development cost and time to market.
Graham Allan is an electrical engineer and director of marketing for the Semiconductor IP group at MOSAID. With its MOSAID Memorize™ DDR PHY and memory controller IP, MOSAID provides complete memory interface solutions for SoC designers facing the challenge of interfacing to external DDR SDRAMs. Memorize IP products combine hard and soft IP to address both the complications of the DDR SDRAM PHY and the logistical idiosyncrasies of the DDR SDRAM command structures, and they are silicon-proven in major 130nm and 90nm processes.