High-speed transceivers require systems modeling
In approaching data rates of 6 Gbits/s and higher, the quality of the communications channel is becoming a limiting factor in transceiver performance. An intimate understanding of the channel is necessary to select a signal processing architecture that will perform well on such a channel. LSI Logic Inc. developed a new design methodology for transceivers as part of its Ultra HyperPHY 2.5-/3.2-Gbit/s transceiver for applications in 110-nanometer technology. This design flow uses system modeling of the communication channel in its entirety.

The new approach takes advantage of the fact that high-speed transceiver development has reached the point where the limit no longer lies exclusively in transistor performance, but also in the quality of the communication channels used for data transfer. This necessitates a greater understanding of the channel in order to design a transceiver architecture that will function reliably in such an environment.

As high-speed link integration has evolved, two milestones identify where the competencies and responsibilities shifted from the ASIC designer to the ASIC supplier. Through the 155-Mbit/s data rate point, the transceiver intellectual property was mostly independent of technology and, in general, was developed directly by the ASIC architect. The ASIC supplier's primary responsibility was the design of the buffers, the only part of the transceiver design strongly dependent on technology. Beginning with integrated 622-Mbit/s transceivers, the dependency of transceiver performance on the transistors increased, which meant that the silicon supplier needed to optimize the transceiver design for its particular technology.

In developing the new generation of transceiver for 3.2-Gbit/s applications, LSI Logic introduced an improved methodology for transceiver IP design once it became clear that the limits of the previous design flow were rapidly approaching: it was not adequate to reliably design and integrate 6- and 12-Gbit/s links. The new methodology considers the transceiver as a part of the entire application environment. Designing silicon in isolation is no longer sufficient; the limitations now lie not only in the silicon process technology but also in the channel. Designers must evaluate performance with a variety of board materials and connectors, in several ASIC environments, while handling an array of data traffic characteristics. Leveraging techniques from other areas of communications, LSI Logic has developed a system model to enable reliable design of transceivers running at 12 Gbits/s and beyond.

A communication channel is composed of the active devices such as transceivers, the physical medium and the data traffic. It encompasses both the physical implementation of the silicon in the system architecture and the application data it will need to transport. A challenge for an ASIC transceiver is to support different applications and their corresponding transmission media. Three main types of communication channels exist: chip-to-chip on-board connections that link different ASICs or standard parts on the same board; board-to-board connections through a backplane for system connectivity inside the rack; and chassis-to-chassis connections through a cable to connect or expand equipment.

In addition to the physical communication channel, the data characteristics can directly affect transceiver performance. The data traffic can use a dc-balanced physical coding sublayer (PCS, like 8B/10B coding), can be scrambled to prevent long sequences without transitions, or can be direct data, for example in clock-synchronous links. The ASIC transceiver needs to address each of these conditions.
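To make the run-length issue concrete, below is a minimal sketch of scrambling with the x^7 + x^6 + 1 polynomial of the SONET frame-synchronous scrambler. The polynomial choice, the all-ones seed and the all-zeros payload are illustrative assumptions, not a description of LSI's PCS; the point is simply that XORing the data with a maximal-length LFSR sequence guarantees frequent transitions even when the payload itself has none.

```c
#include <stdio.h>
#include <stdint.h>

/* Minimal sketch of a frame-synchronous scrambler (polynomial
 * x^7 + x^6 + 1, as used in SONET).  XORing the data with the LFSR
 * output breaks up long runs of identical bits so the CDR keeps
 * seeing transitions; the receiver runs the same LFSR to descramble.
 * Illustrative only -- not LSI's PCS implementation. */
static uint8_t lfsr_next(uint8_t *state)
{
    uint8_t out = (*state >> 6) & 1;                 /* x^7 tap        */
    uint8_t fb  = out ^ ((*state >> 5) & 1);         /* x^7 ^ x^6      */
    *state = (uint8_t)(((*state << 1) | fb) & 0x7F); /* 7-bit register */
    return out;
}

int main(void)
{
    uint8_t tx_state = 0x7F, rx_state = 0x7F; /* both sides seeded alike */
    /* Worst case for an unscrambled link: a long run of zeros. */
    for (int i = 0; i < 32; i++) {
        int data      = 0;
        int scrambled = data ^ lfsr_next(&tx_state);
        int recovered = scrambled ^ lfsr_next(&rx_state);
        printf("%d", scrambled);
        if (recovered != data) return 1; /* descrambling must be exact */
    }
    printf("\n"); /* the printed line contains frequent transitions */
    return 0;
}
```

Because the transmitter and receiver run identical LFSRs from the same seed, descrambling is exact; in a frame-synchronous scheme, both sides reload the register at a known point in each frame.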
System modeling

Spice and tools like it provide transistor-level analysis, which is the most accurate way to perform silicon verification. In terms of overall system performance, the result of this type of analysis is typically an eye diagram that does not sufficiently account for effects like random jitter and crosstalk. Ideally, a tool for obtaining a measure of performance would be able to simulate a large number of bits to fully capture the consequences of random effects. The output should be not only more accurate eye diagrams, but also parameters even more indicative of overall performance, such as jitter tolerance and bit error rate. Unfortunately, because Spice models at such a low level, today's processor speeds make it impractical for this kind of overall performance evaluation.

LSI has defined a new model to bridge this gap by addressing the overall system from a higher level. As such, the model is able to run at a higher speed, simulate more bits and generate the desired metrics. This system model is a C program that can emulate the random and deterministic characteristics of the system. The model does not completely replace the highly accurate Spice simulations, but rather uses results generated by them, along with lab measurements, as input.

The model is tuned for a specific architecture and environment through a number of parameters. First, parameters linked to the backplane and connectors are included. These are deterministic parameters obtained through lab measurements or Spice simulation; their most prominent consequence is deterministic jitter, or intersymbol interference. The flow starts from a time domain reflectometry measurement of the step response for the system under consideration. Other parameters are obtained through simulation, measurement or architectural design.

These parameters are then used as input to the system model to generate an eye diagram, which is compared with an eye diagram measured in the lab. A good match here helps to verify the model of the transmitter and the communication medium. Then, jitter tolerance plots are generated with the model. These are also compared with lab measurements to confirm the accuracy of the receiver model. Once the model has been validated, it can be applied in design to ensure that a transceiver architecture will work with sufficient margin in a particular backplane/connector environment at the desired speeds. In addition, the system model can find the effect on margins of modifying any of the input parameters.
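To show the kind of computation such a model performs, the sketch below derives the single-bit pulse response as the difference of two step responses spaced one unit interval apart, superposes shifted copies of it over random data, and reads off a worst-case vertical eye opening. The exponential step response, the one-sample-per-unit-interval resolution and every constant are illustrative assumptions; the real flow would start from the measured TDR step response and also model random jitter and crosstalk, which this fragment omits.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define NSAMP 32    /* pulse-response length in unit intervals (assumed) */
#define NBITS 20000 /* random bits to superpose */

int main(void)
{
    double step[NSAMP], pulse[NSAMP];
    int    bits[NBITS];

    /* Placeholder first-order step response; a real flow would load the
     * TDR measurement of the actual backplane/connector path here. */
    for (int n = 0; n < NSAMP; n++)
        step[n] = 1.0 - exp(-(double)n / 0.8);

    /* Pulse response of a single bit: step minus step delayed one UI.
     * The largest tap is the main cursor (the sampling instant). */
    int cursor = 0;
    for (int n = 0; n < NSAMP; n++) {
        pulse[n] = step[n] - (n ? step[n - 1] : 0.0);
        if (pulse[n] > pulse[cursor]) cursor = n;
    }

    for (int k = 0; k < NBITS; k++) bits[k] = rand() & 1;

    /* Superpose shifted pulses and record the worst-case levels seen at
     * the sampling instant: the gap between them is the eye opening. */
    double worst_one = 1e9, worst_zero = -1e9;
    for (int k = NSAMP; k < NBITS; k++) {
        double y = 0.0;
        for (int m = 0; m < NSAMP; m++)
            y += pulse[m] * bits[k - m];
        if (bits[k - cursor]) { if (y < worst_one)  worst_one  = y; }
        else                  { if (y > worst_zero) worst_zero = y; }
    }
    printf("vertical eye opening ~ %.3f (of 1.0 swing)\n",
           worst_one - worst_zero);
    return 0;
}
```

Lengthening the channel time constant in this fragment makes the eye collapse, which is precisely the degradation that transmitter pre-emphasis, discussed next, is intended to counter.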
Applying the model

One key aspect of transceivers at 2.5 Gbits/s and beyond is the capability to compensate signal attenuation with pre-emphasis on the transmitter. Pre-emphasis is a predistortion of the TX waveform to partially compensate for high-frequency attenuation in the signal path. For any string of ones or zeroes, only the first bit is transmitted at full amplitude. Succeeding bits are lower in amplitude, and the overall effect is to boost the energy of the high-frequency components of the signal, which suffer more of the dielectric attenuation. The optimal level of pre-emphasis is a function of the dielectric characteristics and of the signal path length. Without the capability to run system simulations, the amount of pre-emphasis specified in a chip might not be sufficient, which could result in reduced performance or a redesign of the transceiver.

While eye diagrams are excellent tools, the ultimate measure of performance is bit error rate, and system models can be used to generate such estimates. The model can produce bit error rate curves as a function of the sampling position within the eye. The actual sampling position used in the receiver can be expressed as a probability density function determined by the CDR and channel properties. Using these bit error rate curves and the probability density function of the sampling point, the total bit error rate can be calculated. This can give a very good indication of the robustness of the link before silicon availability.

Another important capability of system modeling is assisting in the specification of the CDR bandwidth. This information permits the architects to determine which bandwidth settings best meet the spec while allowing the least amount of additional noise into the loop. Another factor in the choice of the CDR bandwidth is the potential for frequency offsets. Such an offset requires the CDR to continually update its phase selection to track the offset. Because phase information is obtained only through transitions in the data, a higher bandwidth setting is required to track frequency offsets when the data contains longer runs without transitions. The system model enables simulations to specify the required minimum bandwidth settings for the expected frequency offsets and run lengths.
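A back-of-the-envelope version of that bandwidth trade-off can be written down directly by assuming a simple first-order CDR loop: the steady-state tracking error is roughly the frequency offset divided by the loop bandwidth, and the phase drifts a further offset-times-run-length unit intervals during each transition-free run. All numbers below, including the 200-ppm offset and the 0.30-UI error budget, are illustrative assumptions rather than HyperPHY specifications.

```c
#include <stdio.h>

/* Rough sizing of a first-order CDR loop bandwidth against a TX/RX
 * frequency offset and the longest transition-free run in the data.
 * All parameter values are illustrative assumptions. */
int main(void)
{
    double baud    = 3.2e9; /* 3.2-Gbit/s line rate                 */
    double ppm     = 200.0; /* assumed TX/RX frequency offset, ppm  */
    double run_len = 72.0;  /* longest transition-free run, bits    */
    double margin  = 0.30;  /* tolerable phase error, UI (assumed)  */

    double offset_hz = baud * ppm * 1e-6;
    double drift_ui  = run_len * ppm * 1e-6; /* drift during one run */

    /* Solve offset_hz / B + drift_ui <= margin for the bandwidth B. */
    double bw_min = offset_hz / (margin - drift_ui);

    printf("offset: %.0f kHz, drift over run: %.4f UI\n",
           offset_hz / 1e3, drift_ui);
    printf("minimum first-order loop bandwidth ~ %.2f MHz\n",
           bw_min / 1e6);
    return 0;
}
```

With these numbers, a 200-ppm offset at 3.2 Gbits/s produces a 640-kHz frequency error and costs about 0.014 UI per 72-bit run, pushing the minimum loop bandwidth above 2 MHz.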
Joe Caroselli is HyperPHY systems architecture manager, High-Speed Interface Engineering. The article's co-authors are Leo Fang, HyperPHY development director, High-Speed Interface Engineering, and Marco Accomazzo, senior FAE Communications, at LSI Logic Inc. (Milpitas, Calif.).