Common I/O design strategies for high-speed interfaces
By Jason Baumbach, Julian Jenkins, Jon Withrington, Applications Engineers, Cypress Semiconductor, San Jose, Calif.
January 27, 2003 (11:23 a.m. EST)
URL: http://www.eetimes.com/story/OEG20030124S0030
High-speed serial interfaces are proliferating in chips used in the metro communications application space. Various standards have developed around the evolving common methodology of implementing high-speed I/O and millions of logic gates on the same monolithic IC. However, different standards have different requirements, and from a silicon design perspective, creating a single high-speed I/O cell that meets the requirements of several standards becomes an attractive design proposition.

The "single-I/O-meets-multiple-standards" approach is fraught with pitfalls for those who neglect the details. For example, major hurdles must be overcome when creating a single I/O for WAN/metro line-card interfaces, including OC-48/STM-16 CML optical modules, SFI-4.2, SPI-5, SFI-5, GbE, VSR-4.3, InfiniBand, and XAUI. Only by understanding the differences among emerging high-speed interface standards, and the tradeoffs involved in a common I/O implementation, will the system designer be able to choose the right device for his application.

The advantages of using a single I/O architecture for multiple standards are those normally expected from IP re-use: shorter development and debug time, shorter verification time, and faster time-to-market for products that use the I/O. The advantages are not all "free," however; one of the first requirements to be addressed with any common-I/O strategy is the wide range of data rates a given I/O must support.

A given serial link can be modeled with three elements: a transmitter, a channel that propagates the signal, and a receiver. The channel may be as simple as a pc-board trace interconnecting two chips (SFI-4.2, which is chip-to-chip, has simple channels), or it may be much more complicated; for a WAN backplane application, the "channel" may have multiple lengths of pc-board trace joined by connectors.
For long-reach standards the channel may also include optics.

In an ideal system, the edges of a digital signal always occur at integer multiples of the signal period. In a real system, the edges occur in a distribution around that ideal point. Jitter is defined as the variation in the edge placement of a digital signal. Three jitter components are usually specified: jitter generation, jitter tolerance, and jitter transfer. Jitter generation is the amount of jitter created by a device, assuming the device's reference clock is jitter-free. Jitter tolerance is the maximum amount of jitter a device can withstand and still reliably receive data. Jitter transfer is a measure of the amount of jitter passed from the receive side of a device to its transmit side. Jitter requirements for high-speed I/O standards vary widely.

Deterministic jitter (DJ) is generated either by insufficient channel bandwidth, leading to inter-symbol interference, or by duty-cycle distortion, which leads to timing errors in data clocking. Random jitter (RJ) is usually assumed to have a Gaussian distribution and is generated by physical noise sources such as thermal noise. Sinusoidal jitter (SJ) is used to test the jitter tolerance of a receiver across a range of jitter frequencies; it is not a jitter type that would be encountered in a deployed system. Sinusoidal jitter is artificially injected into the receive side of a circuit to measure the receiver's performance in the presence of the user-defined sinusoidal noise source. With this SJ technique, the receiver's jitter tolerance versus frequency can be measured. RJ is usually not explicitly specified; it is calculated as TJ - DJ, the total jitter minus the deterministic jitter present. Multiple approaches can be taken to meet the jitter requirements.
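The RJ = TJ - DJ relation described above can be sketched as a small budget calculation. This is a minimal illustration; the 0.749 UI figure echoes the GbE TJ spec quoted later in the article, while the DJ value is hypothetical.

```python
# Sketch of the jitter budget arithmetic described above. TJ and DJ are
# typically given in unit intervals (UI) by the standard; RJ is then
# inferred as TJ - DJ. The DJ number below is illustrative only.

def random_jitter(tj_ui: float, dj_ui: float) -> float:
    """Infer the random-jitter budget (UI) from total and deterministic jitter."""
    if dj_ui > tj_ui:
        raise ValueError("DJ cannot exceed TJ")
    return tj_ui - dj_ui

# Hypothetical budget: TJ = 0.749 UI, DJ = 0.462 UI
rj = random_jitter(0.749, 0.462)
print(f"RJ budget: {rj:.3f} UI")
```

In practice TJ is what the standard bounds at a target bit-error rate, so the receiver designer works backward from it to see how much Gaussian noise the circuit can tolerate.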
Since many of these high-bandwidth interfaces use source-synchronous clocks, jitter in the generated clock is a concern. Such systems benefit from using a high-quality crystal and PLL to generate the board clock that drives most of the system logic, since clocks recovered from received data usually have high jitter relative to a quality crystal oscillator. Pre-emphasis may be applied to the output signals to ensure the received signal has a well-defined shape after the frequency-dependent deleterious effects of the channel are taken into consideration. The PLLs required by the clock-and-data-recovery circuits in the receivers must be able to accurately track the input data. The receivers may also use equalization to reshape the received pulse and "open the eye" of the received signal.

Pulse-shaping
The pre-emphasis and equalization techniques described above are methods of pulse-shaping, in which the shape of the waveform is modified to "open up" the eye diagram. Pre-emphasis boosts the high-frequency content of the output waveform and is done by the transmitter. Equalization boosts the high-frequency content of the input waveform and is done by the receiver. The emphasis on high-frequency content is required because the channel's frequency response is low-pass.

Pre-emphasis and equalization settings for different standards may not be compatible, however. For example, since the TJ spec for GbE is so large (0.749 UI), pre-emphasis or equalization may be employed. But a pre-emphasis or equalization curve tuned for GbE will have no beneficial effect when the I/O is used in a XAUI application, since the XAUI data rate is almost 3x that of GbE. Now consider the minimum rise-time and fall-time requirements of GbE: an I/O that adheres to the minimum rise and fall times of GbE will have an edge rate too slow for a faster standard, such as XAUI.
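Transmit pre-emphasis of the kind described above is often modeled as a short FIR filter that subtracts a fraction of the previous symbol from the current one, boosting transitions relative to long runs. The sketch below is illustrative only; the tap weight `alpha` is not taken from any of the standards named in this article.

```python
# Minimal sketch of 2-tap FIR pre-emphasis (transmit de-emphasis).
# Each output level is the current symbol minus a fraction of the previous
# one, which boosts the high-frequency (transition) content relative to
# repeated symbols. The tap weight alpha is a hypothetical value.

def pre_emphasize(symbols, alpha=0.25):
    """symbols: sequence of +1/-1 values; returns pre-emphasized levels."""
    out = []
    prev = 0.0
    for s in symbols:
        out.append(s - alpha * prev)  # full swing on a transition,
        prev = s                      # reduced swing during a run
    return out

levels = pre_emphasize([+1, +1, -1, -1, -1, +1])
# Transitions get the largest swing; repeated symbols settle to (1 - alpha).
```

The filtered waveform has extra amplitude exactly where the low-pass channel attenuates most, which is why the received eye opens up.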
One simpler common pre-emphasis technique is to temporarily increase the rail voltage of the transmitter for 0-1 or 1-0 transitions. With this technique the rise and fall times of the circuit are accelerated, since after the transition the output is allowed to "settle" to a voltage closer to the common-mode voltage for a continuous run of identical symbols. This technique has the advantage of requiring minimal circuit area to implement, since it can be done using digital logic; complex analog filters are not required.

Signal coupling
Consider an example differential I/O architecture of the kind used by many CMOS differential circuits. The transmitter may be AC- or DC-coupled to the receiver. For DC coupling, the transmitter output lines are directly connected to the receiver input lines, so any DC voltage on a transmitter output line is presented to the receiver input line. The common-mode voltage of a DC-coupled receiver will therefore vary as the common-mode voltage of the transmitter varies. For an AC-coupled link, the transmitter output lines are connected to the receiver input lines through series capacitors, which serve as DC blockers. An AC-coupled receiver can control its own common-mode voltage, since the AC-coupling capacitor blocks DC: the transmitter cannot vary the common-mode voltage of the receiver. AC coupling is possible only because the maximum run length (number of consecutive 1s or 0s) of the subject protocol is limited (the pattern must be DC-balanced). When the maximum run length of a protocol is too large, AC coupling is not possible.

The differential transmitter is paired with a differential receiver. While the differential transmitter architecture is relatively standardized, there are many different differential receiver architectures in use. Consider a DC-coupled example receiver architecture (based on OIF SxI-5) coupled to the differential transmitter: it must be able to tolerate a range of common-mode voltages.
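The run-length and DC-balance conditions for AC coupling described above are easy to check for a candidate bit stream. The sketch below is a simple illustration; the run-length limit of 5 mirrors 8b/10b-style line codes and is chosen here only as an example, not from any of the cited specifications.

```python
# AC coupling is only viable when the line code bounds run length and keeps
# the stream DC-balanced. These helpers check both properties for a bit
# sequence; the limit of 5 is illustrative (8b/10b-style).

def max_run_length(bits):
    """Longest run of consecutive identical bits."""
    longest = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def running_disparity(bits):
    """Excess of 1s over 0s; a value near zero means the stream is DC-balanced."""
    return sum(1 if b else -1 for b in bits)

stream = [1, 0, 1, 1, 0, 0, 1, 0]
ok_for_ac = max_run_length(stream) <= 5 and abs(running_disparity(stream)) <= 2
```

A long run of identical bits would let the coupling capacitor charge toward one rail, drooping the received signal; bounding the run length bounds that droop.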
For example, consider SFI-5, which allows VTXDD and VRXDD to vary by up to 10%. In this case, VTXDD may be 1.32V and VRXDD may be 1.08V. Also, the variance in ground potentials may be up to 50mV, so VRXSS may be 50mV less than VTXSS. In this case, the common-mode voltage of the inputs to the receiver will be very close to the rail voltage of the receiver (VRXDD), making the design of the DC-coupled receiver difficult. However, the transmitter design is simplified: since the common-mode voltage of the data lines is pulled to the supply rail by RRXDD, VTXS will also be higher than in a design without this characteristic. The design of ISOURCE is therefore easier, since the greater the potential difference across the current source, the easier it is to keep the current-source transistors operating in the saturated region. To understand the benefits of the receiver architecture, consider the following:
The presence of RRXDD in the receiver allows the I/O designer to concentrate on the design of the transmitter, specifically the current source.
One of the advantages of an AC-coupled high-speed link is the control the receiver designer has over the common-mode voltage: the designer can optimize the receive circuit for a specific common-mode voltage, because the input signals will not have any DC component. As a result, the jitter requirements of a particular specification can potentially be met with more margin with an AC-coupled receiver than with a DC-coupled receiver.
The result is that a DC-coupled transmitter may be easier to design than an AC-coupled transmitter for the same set of specifications, while an AC-coupled receiver will be easier to design than a DC-coupled receiver for the same set of specifications.
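The worst-case supply arithmetic in the SFI-5 example above can be laid out explicitly. The supply tolerance and ground-offset figures come from the text; the nominal 1.2 V supply and the single-ended swing are assumptions for illustration.

```python
# Worst-case supply numbers for the SFI-5 example above: a 1.2 V nominal
# supply (assumed) with +/-10% tolerance and up to 50 mV of ground offset,
# as quoted in the text. The CML swing figure is hypothetical.

v_nominal, tol = 1.2, 0.10
vtxdd_max = round(v_nominal * (1 + tol), 2)   # 1.32 V: transmitter rail high
vrxdd_min = round(v_nominal * (1 - tol), 2)   # 1.08 V: receiver rail low
ground_offset = 0.050                         # VRXSS may sit 50 mV below VTXSS

# With RRXDD pulling the lines toward the receiver rail, the input
# common-mode sits roughly half the single-ended swing below VRXDD.
swing = 0.4                                   # hypothetical single-ended swing (V)
v_cm = vrxdd_min - swing / 2                  # close to the receiver rail
```

With the common-mode this near the rail, the DC-coupled input stage has little headroom, which is the difficulty the article describes.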
Reliability/durability
The primary concern for reliability is ESD protection. Since the I/Os for multi-gigabit standards are by definition high data rate, the I/O must have low capacitance. The requirement for low capacitance leads to novel ESD structures that fully protect the I/Os without introducing the deleterious effects of high capacitance: slower rise times, a decrease in supported bandwidth, and increases in jitter and power consumption.
Some multi-gigabit standards, such as SFI-5 and SPI-5, use source-synchronous clocking. However, due to the high data rates, the clocks do not run as fast as the data rate of an individual receiver/transmitter pair. For example, SFI-5 requires a quarter-rate clock, so for the SFI-5 OC-768 application the clock frequency will be 622MHz, since the SFI-5 interface is a sixteen-bit interface (39.8Gbit/s / 16 / 4 = 622MHz).
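The clock-rate arithmetic from the SFI-5/OC-768 example above works out as follows; the figures are the ones given in the text.

```python
# Quarter-rate clock arithmetic for the SFI-5/OC-768 example above:
# 39.8 Gbit/s aggregate payload, striped across 16 data lanes, with the
# source-synchronous clock at one quarter of the per-lane rate.

aggregate_bps = 39.8e9
lanes = 16
clock_divider = 4

lane_rate = aggregate_bps / lanes        # ~2.49 Gbit/s per lane
clock_hz = lane_rate / clock_divider     # ~622 MHz
print(f"per-lane rate: {lane_rate / 1e9:.3f} Gbit/s, clock: {clock_hz / 1e6:.0f} MHz")
```

Running the clock at a fraction of the lane rate eases clock distribution on the board while still giving each lane's sampling circuit a frequency reference.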
Due to the high frequencies, one channel in a multi-gigabit parallel interface may exhibit time-varying skew, called wander. A source-synchronous link such as SFI-5 requires a clock-and-data-acquisition (CDA) circuit, which uses a multiple of the source-synchronous clock to sample the data. Each lane therefore requires its own independent CDA circuit. The PLL in each CDA circuit uses the input clock as a reference to help establish phase lock.
Other standards, such as SFI-4.2, do not have source-synchronous clocks. They rely wholly on their clock-and-data-recovery (CDR) circuits to source the clocks required to extract the data and to establish both phase and frequency lock.
There are two means to implement multiple differential I/O standards on a single device. One is to share a single I/O pin among the multiple modes: in one configuration the I/O may be used for XAUI/2xFC, and in another configuration for OC-48/SFI-5. The other approach is to separate the I/Os and use different pins for different standards.
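The "one pin, multiple modes" approach above amounts to a per-pin mode register that selects rate and pulse-shaping settings. The sketch below is purely illustrative: the mode names follow the configurations mentioned in the text, but the table values and the idea that pre-emphasis is enabled per mode are assumptions, not taken from any datasheet.

```python
# Sketch of per-pin mode configuration for a multi-standard I/O.
# Parameter values are hypothetical; a real device would derive them
# from each standard's jitter and rise/fall-time requirements.

from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    XAUI_2XFC = "xaui/2xfc"
    OC48_SFI5 = "oc-48/sfi-5"

@dataclass(frozen=True)
class IoConfig:
    rate_gbps: float   # per-lane line rate
    pre_emphasis: bool # whether transmit pre-emphasis is enabled

MODE_TABLE = {
    Mode.XAUI_2XFC: IoConfig(rate_gbps=3.125, pre_emphasis=False),
    Mode.OC48_SFI5: IoConfig(rate_gbps=2.488, pre_emphasis=True),
}

cfg = MODE_TABLE[Mode.XAUI_2XFC]
```

The alternative (separate pins per standard) removes the table but spends package pins and die area on I/Os that sit idle in any given configuration.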
Other problems with implementing multiple differential standards with a single I/O may be encountered. For example, some specifications may be mutually exclusive, preventing the design of a single I/O adherent to both specifications.
Another possible problem is voltage tolerance: a high-bandwidth, low-voltage I/O will not have the voltage tolerance of a low-bandwidth, high-voltage I/O.