Signal skew managed dynamically
By Hansel Collins, Co-founder, Chief Science Officer, TriCN Inc., San Francisco, EE Times
January 27, 2003 (11:37 a.m. EST)
URL: http://www.eetimes.com/story/OEG20030124S0024
In the evolution of chip design, developers have continually sought to design smaller, faster chips that consume less power. As demand for bandwidth increases, developers face the added difficulty of managing skew and signal integrity. Skew, in particular, has severe effects on the performance of a chip's multilane interface, limiting its maximum length for a given bandwidth.

Traditionally, the approach to skew has relied on passive techniques that carefully match and balance the delays of each data path of the interface. As the bandwidth of multilane interfaces increases, however, that approach loses effectiveness, and more-complex, dynamic techniques are required. In general, the solutions to skew problems have been based on analog designs and techniques that carry a costly penalty in both power dissipation and size. Newer digital design methods have been shown to offer superior performance in size and power over their analog equivalents. This article addresses the advantages of digital design techniques in the implementation of these new high-performance interfaces.

As the data-transfer speed increases, the amount of time that a particular bit remains valid (known as a "unit interval" or "bit time") decreases. At gigabit speeds, the skew that exists between any two lanes becomes a significant fraction of the bit time of the data being transferred. Furthermore, the skew between the lanes of the interface is largely unknown. Hence, when data is received at the destination node, the bits are no longer "word-aligned" as they were when transmitted, preventing the use of a simple register to receive the data. To solve that problem, interfaces such as PCI Express, XAUI and the SPI standards make use of serializer/deserializers (serdes) to transfer the data and "re-collate," or word-align, it at the destination.
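The relationship between line rate, unit interval and skew can be made concrete with a small sketch. The figures below (200 ps of lane skew, 1 and 5 Gbit/s rates) are illustrative assumptions, not numbers from the article:

```python
# Unit interval (bit time) shrinks as the lane rate rises, so a fixed
# amount of lane-to-lane skew consumes a growing fraction of each bit.

def unit_interval_ps(gbps: float) -> float:
    """Bit time in picoseconds for a given line rate in Gbit/s."""
    return 1e12 / (gbps * 1e9)

def skew_in_ui(skew_ps: float, gbps: float) -> float:
    """Express a fixed skew as a fraction of one unit interval."""
    return skew_ps / unit_interval_ps(gbps)

# A fixed 200 ps of lane skew:
# at 1 Gbit/s the UI is 1000 ps, so the skew is only 0.2 UI;
# at 5 Gbit/s the UI is 200 ps, so the same skew spans a full UI
# and received bits are no longer word-aligned.
ui_1g = unit_interval_ps(1.0)      # 1000.0 ps
frac_5g = skew_in_ui(200.0, 5.0)   # 1.0 UI
```

This is why passive trace matching runs out of headroom: the physical skew stays roughly constant while the unit interval keeps shrinking.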
Typically, the serdes are analog-based designs that employ phase-locked loops (PLLs) or delay-locked loops (DLLs). The challenge then becomes how to implement a large number of PLL/DLL blocks for a multigigabit interface in close proximity to high-speed digital circuit blocks. That task manifests itself in multiple ways.

Electrical noise in electronic systems can be classified as either device/component noise or man-made noise, the latter of which is generally coupled or conducted into a circuit from external sources. While electrical noise appears in both analog and digital systems, its effects on each differ. For small-signal circuits in analog systems, noise is a major concern because the desired signal and the noise signal are processed identically, which creates the potential for corrupted data. Hence, it is highly preferable to maintain a large signal-to-noise ratio. In the implementation of an analog circuit such as a PLL or DLL, conducted noise is always an issue. In particular, when placing multiple PLL and DLL blocks in close proximity to digital blocks or to each other, careful attention must be paid to noise conducted between the various blocks. Supply-rail noise, often generated by switching logic, I/O circuits or other PLL and DLL blocks, is another form of conducted noise that must be addressed.

Threshold voltages

In digital circuits, electrical noise usually results in timing variation (i.e., jitter or push-out), and the extent is highly dependent on the amount of noise present and the noise margin of the particular technology. In a binary circuit, recognition of a signal as either a logic 0 or a logic 1 depends on whether its voltage is above or below a given set of thresholds. The values of those threshold voltages determine the noise margin and the amount of noise that the system can tolerate.
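The noise margin mentioned above follows directly from the output and input threshold voltages of a logic family. The sketch below uses conventional static-noise-margin definitions with illustrative voltage levels; none of the figures are from the article or any particular datasheet:

```python
# Static noise margins of a binary logic family, derived from its
# output levels (V_OH, V_OL) and input thresholds (V_IH, V_IL).

def noise_margins(v_oh: float, v_ol: float,
                  v_ih: float, v_il: float) -> tuple[float, float]:
    """Return (NM_high, NM_low): the noise a driven signal can absorb
    before a logic 1 or logic 0 is misinterpreted."""
    nm_high = v_oh - v_ih   # headroom on a driven logic 1
    nm_low = v_il - v_ol    # headroom on a driven logic 0
    return nm_high, nm_low

# Illustrative 2.5 V-class levels: a driven 1 at 2.4 V must stay above
# the 1.7 V input threshold; a driven 0 at 0.2 V must stay below 0.8 V.
nm_h, nm_l = noise_margins(v_oh=2.4, v_ol=0.2, v_ih=1.7, v_il=0.8)
# nm_h ≈ 0.7 V, nm_l ≈ 0.6 V
```

Larger margins mean the digital circuit tolerates more conducted and supply-rail noise before a bit flips, whereas an analog small-signal node has no such quantization to hide behind.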
In analog systems, a significant percentage of the power is dissipated in the biasing networks, which draw a dc current whether the circuit is active or not. Biasing networks also carry an area penalty, which grows if a PVT (process, voltage, temperature) compensation circuit is required. In most of the digital technologies in use today, the transistors are designed to operate as switches, residing in either saturation (fully on) or cutoff (fully off, or high-impedance) mode. While operating in either of those states, the amount of power dissipated is significantly less than for devices operating in the linear region. Hence, the majority of the power dissipated in digital circuits occurs when they switch from one state to the other.

Each semiconductor process and geometry has characteristics (supply voltage, resistance, capacitance, transistor gain and leakage currents) that determine the performance of analog and digital circuits alike. Digital circuits and systems are generally more tolerant of process changes because their transistors operate as switches. Conversely, analog circuits are very sensitive to all of the above-mentioned process characteristics. For PLL or DLL designs, process has a significant effect on key subcircuits; in the case of the PLL, these are the low-pass filter and the voltage-controlled oscillator. As an example, the reduced supply voltage and increased leakage current of the 130-nanometer and 90-nm processes require that a PLL/DLL taken from a previous process be redesigned to account for the performance and operational shifts. In that case, achieving the desired performance generally requires an increase in the bandwidth and gain of the circuits, which translates into larger devices and area.

Testability is a key issue for designers of both analog and digital circuits. The ease of testing analog systems is highly dependent on the application and the type of analog system.
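The contrast between switching-only dissipation and always-on bias current can be sketched with the standard dynamic-power expression P = C·V²·f·α against a constant dc bias. All component values below are illustrative assumptions, not figures from the article:

```python
# Digital blocks dissipate mainly while switching (C·V²·f·alpha);
# an analog bias network draws dc current even when idle.

def dynamic_power_mw(c_farads: float, vdd: float,
                     freq_hz: float, activity: float) -> float:
    """Dynamic power of a switched capacitive load, in milliwatts."""
    return c_farads * vdd ** 2 * freq_hz * activity * 1e3

def bias_power_mw(vdd: float, i_bias_amps: float) -> float:
    """Static power of an always-on bias network, in milliwatts."""
    return vdd * i_bias_amps * 1e3

# 10 pF of switched capacitance at 1.2 V, 500 MHz, 10% activity:
p_dig = dynamic_power_mw(10e-12, 1.2, 500e6, 0.10)   # ~0.72 mW, only while clocked
# A 2 mA dc bias at 1.2 V burns power whether the circuit is active or not:
p_ana = bias_power_mw(1.2, 2e-3)                     # ~2.4 mW, continuously
```

The point is not the absolute numbers but the scaling: gating the clock drops the digital term toward zero, while the bias term persists regardless of activity.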
For some analog circuits, such as PLL and DLL blocks, system bring-up and testability become problematic with both the application and its target operation. PLLs and DLLs generally don't lend themselves to common debugging techniques such as reducing the operating frequency. Hence, workaround strategies and/or circuits must be included for that purpose, further affecting size and power usage. In contrast, digital designs offer inherent testability, including bring-up strategies such as scan, reduced clock frequency and single-step procedures. Although not all of those strategies apply to all digital circuits and architectures, there is usually a minimum set that provides adequate test coverage.

While some designers prefer to stick with the more traditional analog approach, many others are finding the advantages of the digital approach too compelling to ignore.