Testable SoCs: How systems-level considerations impact cost-effective Gigabit Ethernet PHYs
By Phil Callahan, Eric Kimball, Vivek Telang, Mile Milisavljevic, EE Times
July 18, 2002 (11:34 a.m. EST)
URL: http://www.eetimes.com/story/OEG20020718S0019
By Phil Callahan, Senior Product Marketing Manager; Eric Kimball, Mixed Signal Design Engineer; Vivek Telang, Design Manager; and Mile Milisavljevic, DSP System Design Engineer, Cicada Semiconductor, Austin, Texas, pac@cicada-semi.com

The trigger that enabled the transition from Ethernet (10 Mbits/s) to Fast Ethernet (100 Mbits/s) throughout the data center, and eventually to the desktop, occurred when IT decision-makers could purchase "10x the performance for 2-3x the price." For Gigabit Ethernet (GbE) over copper, or 1000BASE-T, to experience the same transition and become pervasive in cost-sensitive data-center LANs and SANs as well as on the desktop, two major obstacles must be overcome: high semiconductor component costs, especially for the physical-layer transceiver; and trouble-free installation and robust performance over the ubiquitous Category-5 copper cable plants found in over 90 percent of all corporate LANs.

The 1000BASE-T transceiver is the critical technological component that will enable this transition. It is the most complex wire-line transceiver ever standardized, requiring a combination of digital signal processing (DSP) and analog circuit design skills to achieve the ambitious goal of reliable gigabit-per-second data transmission over the limited bandwidth of Category-5 cable. Yet even if the design task alone were not a Herculean obstacle, a comprehensive system-level approach must be used to tackle the high costs of manufacturing and testing such a complex DSP and mixed-signal device, the PHY. Innovative use of structural as well as the more traditional functional test techniques must be employed to eliminate reliance on expensive analog ATE. Making the Gigabit Ethernet system designer's job easier and meeting time-to-market objectives also dictate careful attention to minimizing the number of external components, such as termination resistors and heat sinks.
So, what are the systems-level design considerations involved in developing these very high performance Gigabit PHYs? A typical 1000BASE-T switch system has the PHYs located between a multiport switch controller and the magnetic modules that connect each PHY to the Category-5 cable. To meet the economic goals of the "10X/2X" rule as well as trouble-free installation and operation, the 1000BASE-T switch system must simultaneously provide low component cost, low power for high port density, and robust operation over the installed Category-5 cable plant.
There are two main functional blocks within a 10/100/1000BASE-T PHY. The digital section comprises a DSP data pump, the PCS (Physical Coding Sublayer), control functions for initialization and test, and the primary system interface. These functions typically require up to one million digital gates, or four million transistors, and can be implemented in virtually any mainstream CMOS process.
The second functional block, the analog front end (AFE), includes ADCs, DACs, a variable-gain amplifier (VGA) and a hybrid. Each PHY normally contains four ADCs, four DACs and four hybrids, together requiring approximately 250,000 analog elements (transistors, capacitors and resistors). This analog signal processing can be realized only with highly area-efficient and accurate analog design techniques that are typically beyond the reach of traditional analog design approaches.
The analog portion of a Gigabit over copper Ethernet PHY poses many design challenges. It must be low power, occupy a small area, and yield well to meet the cost goals of Gigabit Ethernet, or 1000BASE-T. One of the most fundamental decisions in meeting these requirements is the choice of process technology. While CMOS is the obvious choice because of the sheer complexity of the DSP functions needed to implement the 1000BASE-T standard, there is still the question of whether a mainstream digital-only CMOS process, or a digital process with analog options, would yield the optimal cost structure. Although most of today's wafer foundries offer analog process options, such as double-poly capacitors or bipolar transistors, the analog options typically become available only some time after the mainstream digital processes are in place. Avoiding these options provides significant benefits.
For example, it provides the flexibility to port a Gigabit PHY product to a more aggressive process without waiting for the analog options to become available. The additional costs the analog options incur, through extra masks and more complex processing, are also avoided. Finally, it becomes easier to run the same product at multiple wafer fabrication facilities, which is desirable from a supply-chain risk-management point of view. For these reasons, designing in mainstream digital 0.18-micron and 0.15-micron processes produces the most cost-effective PHYs.
Process variations
In a large, complex chip, the analog portion could compromise the yield without careful planning; thus, one of the goals for the analog as well as the rest of the chip was to architect the design so the wafer test yields would be limited only by the process's inherent defect density. This means that the AFE must be insensitive to the full range of the normal process variations.
In order to achieve this goal, circuits that are sensitive to process variations should be calibrated; the flash ADC is an example. In a flash ADC, the comparators must match each other well so that random transistor variations do not adversely affect yield. To ensure this, each comparator is individually calibrated. The result is that the ADC's performance is nearly identical across the die and from part to part. In addition, this calibration provides a powerful test tool that helps minimize the ATE resources required to test the device in production: by reading out the calibration value for each comparator, a given part can be compared statistically with other parts, screening for process problems while optimizing process yields.
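The calibrate-then-screen idea can be sketched in a few lines. The trim-DAC step size, code range and 3-sigma screening threshold below are illustrative assumptions, not Cicada's actual values:

```python
import statistics

def calibrate_comparator(offset_mv, trim_step_mv=0.5, trim_range=32):
    """Pick the trim-DAC code that best cancels a comparator's input offset.

    Hypothetical trim DAC: signed codes in [-trim_range, trim_range),
    each step cancelling trim_step_mv of offset.
    """
    return min(range(-trim_range, trim_range),
               key=lambda code: abs(offset_mv - code * trim_step_mv))

def screen_part(trim_codes, population_mean, population_sigma, k=3.0):
    """Pass a part whose mean trim code sits within k sigma of the
    population mean; an outlier suggests a process excursion rather
    than ordinary random mismatch."""
    part_mean = statistics.mean(trim_codes)
    return abs(part_mean - population_mean) <= k * population_sigma

# Example: a comparator with +3.2 mV of offset is trimmed to within
# half a step (0.25 mV) of zero.
code = calibrate_comparator(3.2)
residual_mv = 3.2 - code * 0.5
```

Reading back the trim codes effectively turns the calibration hardware into a free test instrument: a part whose codes sit far from the population mean is flagged before any analog performance test is run.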
Another calibrated function is the GMII interface, which connects the PHY to a Layer 2 function, typically a media-access controller (MAC) in a switch or a network interface card (NIC). Internal source-termination resistors significantly lower the external component count. For example, the GMII interface requires 24 signals per port, so for a 48-port switch line card, over 500 series termination resistors can be eliminated from the pc board when source terminations are included in the PHY device. However, this internal termination must be well matched to the nominal 50-ohm external transmission lines. To accomplish this calibration, the PHY uses a single external resistor. Placing a fixed voltage across this external resistor provides accurate currents to all of the analog blocks and, in particular, the GMII calibration block. Placing the same fixed voltage across an internal resistor that matches the GMII termination resistor produces a current that can be compared with the current through the external resistor; the internal resistor is then adjusted until it matches 50 ohms. By employing these calibration techniques, the yield of the PHY is maximized; in fact, actual production yields fall within a couple of percentage points of a crucial goal: a yield limited only by the defect density of the process.
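The single-external-resistor scheme amounts to a feedback search: adjust the internal trim code until the on-chip resistor, under the same fixed voltage, carries the same current as the external 50-ohm reference. A minimal sketch, with a hypothetical linear trim ladder standing in for the real on-chip resistor array:

```python
def calibrate_termination(r_internal_of_code, v_ref=1.0, r_external=50.0, bits=5):
    """Binary-search the trim code at which the internal resistor's current
    under v_ref first drops to the current through the external 50-ohm
    reference (i.e. the first code with R_internal >= R_external).

    r_internal_of_code: trim code -> on-chip resistance in ohms
    (assumed monotonically increasing with code).
    """
    i_target = v_ref / r_external
    lo, hi = 0, (1 << bits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if v_ref / r_internal_of_code(mid) > i_target:  # R too low -> raise code
            lo = mid + 1
        else:
            hi = mid
    return lo

# Hypothetical trim ladder: 40 ohms plus 0.7 ohm per code.
code = calibrate_termination(lambda c: 40.0 + 0.7 * c)
# The search lands on the first code whose resistance reaches 50 ohms,
# matching the line impedance to within one trim step.
```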
Because of the high port-density requirements of today's Ethernet switches, power is another very important factor in the design of a Gigabit PHY. One of the original design goals was a part that consumes a watt or less, to allow the use of plastic packaging and to keep the silicon junction temperature as low as possible. To highlight this issue, keep in mind that for a 10°C decrease in junction temperature, the MTBF of a component improves by a factor of two.
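That rule of thumb compounds: if MTBF doubles per 10°C drop in junction temperature, the improvement factor for an arbitrary temperature reduction is simply a power of two:

```python
def mtbf_improvement(delta_t_c):
    """Rule of thumb from the text: MTBF doubles for every 10 C drop
    in junction temperature, so a delta of T degrees gives 2**(T/10)."""
    return 2.0 ** (delta_t_c / 10.0)

mtbf_improvement(10)   # 2x
mtbf_improvement(20)   # 4x: running 20 C cooler quadruples expected life
```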
Achieving power consumption under 1 W is significantly aided by the calibration techniques mentioned earlier. Transistor mismatch is inversely proportional to the square root of the area the transistor occupies, so if calibration is not used, the transistors must be made larger to prevent yield loss from on-die mismatches. This in turn increases the capacitance in the analog circuitry, which increases power. By employing calibration, both the area and the power of the analog circuits are reduced.
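This area-power tradeoff follows directly from Pelgrom's matching law, under which mismatch sigma scales as A_VT/sqrt(WL): halving mismatch without calibration costs four times the area, and hence roughly four times the capacitance. The matching coefficient below is a typical textbook figure for a 0.18-micron-class process, not a process-specific value:

```python
import math

def offset_sigma_mv(area_um2, a_vt_mv_um=3.5):
    """Pelgrom's law: threshold-mismatch sigma = A_VT / sqrt(W*L).
    a_vt_mv_um is an assumed, representative matching coefficient."""
    return a_vt_mv_um / math.sqrt(area_um2)

# Quadrupling device area only halves the mismatch sigma:
assert math.isclose(offset_sigma_mv(4.0), offset_sigma_mv(1.0) / 2)
```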
Even more fundamental than calibration to achieving the aggressive power consumption goal is being able to model the entire PHY architecture and the cable. Accurate modeling enables the optimal analog and digital architectural balance, resulting in the lowest power, best performance, and optimal die size in the lowest cost, least aggressive digital CMOS process technology.
For instance, Cicada's proprietary C-based, bit-level accurate system models and simulators allow rapid architectural and design tradeoffs to be analyzed from the systems level down to the gate level, without sacrificing accuracy anywhere in the process. This results in bit-level optimization of filter lengths and fixed-point precision. One area where power is often wasted in a communications chip is the ADC. It is fairly common to see an ADC whose effective number of bits (ENOB) is around one bit less than its ideal resolution; in a flash ADC, that means half of the comparators are wasted. For example, if an 8-bit ADC is required and one bit is typically lost to CMOS process mismatch, then a 9-bit ADC must be used, at an area and power sacrifice of 256 comparators. With calibration, by contrast, the ADC in the Cicada PHY provides a resolution of 7.2 bits with an ENOB of 7 bits.
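The comparator arithmetic above can be checked with the standard formulas: an N-bit flash ADC needs 2^N - 1 comparators, and ENOB is derived from measured SINAD by the usual (SINAD - 1.76)/6.02 relation:

```python
def flash_comparators(bits):
    """A flash ADC needs one comparator per decision level: 2**bits - 1."""
    return 2 ** bits - 1

def enob(sinad_db):
    """Effective number of bits from measured SINAD (standard formula:
    ENOB = (SINAD - 1.76 dB) / 6.02 dB per bit)."""
    return (sinad_db - 1.76) / 6.02

# Going from 8 to 9 bits to recover one mismatch-lost bit costs
# 511 - 255 = 256 extra comparators, as the text states.
extra = flash_comparators(9) - flash_comparators(8)
```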
Line driver power
Another place where power was saved is the transmitter architecture. The power that must be delivered to the load is fixed, but the power consumed internally is not. For a Gigabit driver, the best internal power consumption is achieved with a voltage-mode line driver. However, to maintain backward compatibility with the more than 350 million Ethernet ports installed today, a triple-speed Gigabit Ethernet PHY must remain compatible with the existing 10BASE-T standard. Using a voltage-mode line driver for 10BASE-T would require a supply voltage of more than 5 V, which is unrealistic given that the most prevalent I/O voltage today is 3.3 V. To use a voltage-mode line driver for 1000BASE-T and a current-mode line driver for 10BASE-T, Cicada uses its voltage-mode line driver to terminate the line while the 10BASE-T driver connects to the other side of the termination. By employing these techniques to minimize line driver power, we achieved the goal of a part that consumes slightly less than a watt per port.
Given the mixed signal complexity of a 10/100/1000BASE-T PHY and the cost challenge of providing 10X the performance for 2X the price of a 10/100BASE-T solution, some creative use of Design-for-Testability (DFT) techniques must be used.
Consequently, concurrent use of advanced DFT techniques is absolutely essential to minimize test time while providing the highest overall coverage for high quality. Some of the DFT techniques required for cost-effective Gigabit Ethernet PHYs are:
- Built-in self-test (BIST)
- Scan
- Loopback (analog and digital)
- On-chip PRBS generators for BER testing
- JTAG
- Iddq
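Of these, the on-chip PRBS generator is simple enough to sketch in full. A PRBS-7 pattern (polynomial x^7 + x^6 + 1) is a common choice for serial BER tests; whether Cicada's silicon uses this particular polynomial is an assumption here:

```python
def prbs7(n, seed=0x7F):
    """Generate n bits of the PRBS-7 sequence (polynomial x^7 + x^6 + 1)
    with a Fibonacci LFSR. Any nonzero seed gives the maximal-length
    sequence, which repeats with period 2**7 - 1 = 127."""
    state = seed & 0x7F
    out = []
    for _ in range(n):
        bit = ((state >> 6) ^ (state >> 5)) & 1   # taps at stages 7 and 6
        state = ((state << 1) | bit) & 0x7F
        out.append(bit)
    return out

def bit_errors(tx, rx):
    """Compare transmitted and received bit streams for a BER measurement."""
    return sum(a != b for a, b in zip(tx, rx))

# Two full periods of the pattern are identical:
seq = prbs7(254)
```

In silicon, the same LFSR sits at both ends of a loopback path; the receiver self-synchronizes to the incoming pattern and counts mismatches, so no pattern memory on the tester is needed.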
Extensive use of these structural test methodologies enables a standard, widely available digital ATE platform to be used instead of expensive, less widely available analog ATE, minimizing manufacturing costs without compromising device quality or overall test coverage.
In order to accomplish this, the analog portion had to rely heavily on a BIST strategy. In addition, structural level testing was employed as opposed to extensive functional level testing because it is difficult to achieve adequate and efficient test coverage with functional testing.
As an example, consider the variable gain amplifier. One way to functionally test the VGA is to link the part over various cable lengths, but each test would be mostly redundant and a large amount of time would be wasted. A better approach is to connect the line driver, VGA and ADC in a signal chain. Then VGA gains and DAC codes are selected to cover most of the ADC range. This is a quick test that allows examination of all VGA gains, all DAC settings and substantially all ADC codes at the same time.
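That signal-chain test can be modeled as a coverage measurement: sweep the DAC codes at each VGA gain setting and count which ADC output codes were exercised. The idealized chain model below is a stand-in for captured silicon data, not a model of the actual part:

```python
def signal_chain_coverage(dac_codes, vga_gains, chain, adc_bits=8):
    """Exercise the DAC -> VGA -> ADC loopback and report the fraction of
    ADC output codes that were hit. 'chain' maps (dac_code, gain) to an
    ADC output code."""
    hit = set()
    for gain in vga_gains:
        for code in dac_codes:
            hit.add(chain(code, gain))
    return len(hit) / (1 << adc_bits)

def ideal_chain(code, gain, dac_bits=8, adc_bits=8):
    """Idealized chain: DAC output scaled by the VGA gain (<= 1),
    then quantized by the ADC."""
    v = (code / ((1 << dac_bits) - 1)) * gain          # normalized 0..1
    return min(round(v * ((1 << adc_bits) - 1)), (1 << adc_bits) - 1)

# Full-scale gain alone already hits every ADC code in this ideal model;
# the extra gain settings verify the VGA steps themselves.
coverage = signal_chain_coverage(range(256), [1.0, 0.5, 0.25], ideal_chain)
```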
Structural testing is sufficient as long as there is very little interaction between analog blocks. Our lab testing has shown that there are no measurable noise interactions between blocks. Therefore, testing that each block meets its performance specifications also guarantees that the product meets its overall performance specifications. This strategy has been verified through extensive lab correlation. The analog tests were also selected to maximize pin coverage in the analog. A pin is considered covered when it is tested both functionally and parametrically. Analog pin coverage is greater than 95 percent.
The last type of testing done in the analog is testing the digital control blocks with scan. These methods achieve the goal of a complete analog test that does not require specialized analog testers.
In addition, the system designer gains easy access to advanced in-system testability, such as the BER test, providing a low-cost way to improve and simplify manufacturing quality. A comprehensive, systems-level approach is required to deliver physical-layer ICs consuming less than 1 W per port for Gigabit Ethernet over copper, the most complex wire-line communications standard ever developed and deployed.