In-circuit SoC verification controls costs
Adrian M. Hernandez (07/12/2004 9:00 AM EDT)
URL: http://www.eetimes.com/showArticle.jhtml?articleID=22104443
Both of the key benefits of a system-on-chip are related to cost. The first involves manufacturing: an SoC integrates many board components onto a single chip, which lowers the overall bill of materials. The second is design reuse. SoC designs rely on intellectual property, or cores, to build up a chip, so the R&D cost of developing a single core can be spread across many projects. Similarly, the R&D cost of developing an SoC itself is lower, since the integration team can quickly create a design by simply connecting cores.

However, although the design-reuse cost benefit may look achievable on paper, it often goes unrealized in practice. That is because SoC R&D carries a hidden cost: design verification.

SoC verification is complex because it must exercise both the core and the system. At the core level, every line of hardware description language is scrutinized during verification. The goal of the core verification team is to ensure that the core specifications are met and that every line of HDL works as expected under every supported condition. System verification checks the core-to-core connections as well as such top-level interconnects as clocks and pads. At the system level the cores are assumed to be functionally correct, so they are tested against only a subset of possible conditions. Even with this smaller subset, however, the task of verifying an SoC can be long.

To verify the SoC, both core and system designers make extensive use of electronic design automation tools, and EDA vendors are actively developing novel tools for SoC verification. The HDL simulator is still the most widely used: not only is it familiar, but it is effective for testing and debugging HDL designs. Yet HDL simulation is not a practical verification solution for large designs such as SoCs. As cores get more complex and SoCs use more cores, the simulator's limits become more apparent. The reasons are twofold.
First, the verification test harness for a complex core or an SoC takes more time to develop, even if only a small subset of the overall functionality is tested. Second, it may take hours or days to run the entire harness, because today's state-of-the-art simulator may achieve an effective throughput on the order of hertz for a complex HDL design; that is, a handful of simulated clock cycles per wall-clock second. This has led SoC design teams to develop alternative methods and tools for validating their chips.

One such method, used by many SoC designers, is validation with FPGAs. FPGAs are attractive for SoC verification because they now come in sizes offering tens of thousands of flops and function blocks, as well as embedded hard-core processors, so you may be able to fit your entire SoC in one FPGA. The other attraction is that the design runs at or near target speed and can be reprogrammed when errors are found. To test the SoC on the FPGA, real-time data is fed into the target SoC or core under test. As a result, both developing the test harness and running it are accelerated. A further benefit of the FPGA method is that it can uncover defects that were not found in simulation.

Recently, I was part of a double-data-rate (DDR) memory controller design team. I spent about two months developing a simulation test harness for the team's memory controller core, analyzing not only the core but also the board on which it was targeted. Acting as the printed-circuit-board designer, I created Spice models and studied the signal-integrity effects on the design's timing budget. The simulation models I created for the memories, the target device with the core and the pc board were accurate, to the best of my knowledge. In simulation the core worked, so the pc board was built. And since the DDR core targeted an FPGA, no ASIC spin was necessary.
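As an aside, the throughput gap behind those hours-or-days runtimes can be made concrete with a little arithmetic. All figures below are illustrative assumptions, not measurements from the article:

```python
# Back-of-the-envelope estimate of HDL-simulation wall-clock time.
# The clock rate, device time and simulator throughput are all
# hypothetical, chosen only to show the scale of the slowdown.

def sim_wall_clock_seconds(design_clock_hz, device_seconds, sim_cycles_per_second):
    """Wall-clock seconds needed to simulate `device_seconds` of
    real device time, given the simulator's effective throughput."""
    cycles_needed = design_clock_hz * device_seconds
    return cycles_needed / sim_cycles_per_second

# A 100 MHz SoC clock, just 10 ms of device activity, and a simulator
# retiring ~10 design cycles per wall-clock second:
wall = sim_wall_clock_seconds(100e6, 10e-3, 10)
print(f"{wall:.0f} s (~{wall / 3600:.1f} hours)")   # 100000 s (~27.8 hours)

# The same 10 ms on an FPGA running at the design's own clock takes,
# by definition, 10 ms.
```

The point is not the exact numbers but the ratio: at hertz-class simulator throughput, even milliseconds of device time cost hours of wall-clock time, while the FPGA runs them in real time.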
When the boards came back loaded and ready to use (so I thought), I loaded the DDR core design and found that nothing worked. After an hour of confusion, a logic analyzer was connected to the DDR memories. It turned out that the timing on the board was different from the timing in simulation. The problem was that the Spice model did not account for a pc-board noise source that was eating into the timing budget. Had it not been for the FPGA implementation, this defect would have gone undetected; simulation alone would never have uncovered it.

Limited vision

There is a problem with observation, however, when an FPGA is used for in-circuit verification. While simulation gives as much internal design observability as needed, in-circuit verification limits observability to the periphery of the chip. To address that problem, designers can use on-chip logic analyzers from FPGA providers as well as third parties such as Synplicity Inc. These on-chip analyzers are essentially drop-in cores that let the designer probe any internal SoC signal.

An embedded debug or test mux can also be used to increase SoC observability. This mux can be created by the verification team or, if you are using Xilinx Inc. FPGAs, you can use Agilent Technologies Inc.'s trace core 2, or ATC2. With the mux, you connect as many signals of interest as needed and route them to unused pins. The pins are then routed to probing points or trace connectors to which a logic analyzer is connected. Now the logic analyzer can see inside the chip; a familiar and reliable debug tool regains observability of the SoC design.

By mixing on-chip logic analyzers with the observation mux, numerous SoC observation possibilities unfold (see figure). The on-chip logic analyzer can be used for viewing wide buses of signals with a shallow buffer, while for deep traces the observation mux can route groups of those same signals to an external logic analyzer.
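The mux arrangement described above can be sketched as a small behavioral model, with plain Python standing in for the HDL. The signal-group names, widths and select values are invented for illustration and are not from any real core:

```python
# Behavioral model of an embedded debug/test mux: a select register
# chooses one group of internal signals to drive a bank of spare pins,
# where an external logic analyzer captures them. Group names are
# hypothetical examples for a DDR-controller-like design.

DEBUG_GROUPS = {
    0: ["dram_cmd", "dram_addr", "dram_ba"],       # memory command bus
    1: ["wr_fifo_level", "rd_fifo_level"],         # internal FIFO status
    2: ["ctrl_state", "refresh_req", "stall"],     # controller state
}

def debug_mux(select, internal_signals):
    """Route only the selected signal group to the debug pins."""
    group = DEBUG_GROUPS[select]
    # Everything outside the chosen group stays invisible at the pins.
    return {name: internal_signals[name] for name in group}

# One captured "cycle" of internal state, as the analyzer would see it
# with the select register set to 1:
snapshot = {"dram_cmd": 0b011, "dram_addr": 0x1A2, "dram_ba": 1,
            "wr_fifo_level": 12, "rd_fifo_level": 3,
            "ctrl_state": 4, "refresh_req": 0, "stall": 1}
print(debug_mux(1, snapshot))   # {'wr_fifo_level': 12, 'rd_fifo_level': 3}
```

In hardware the analyzer, not software, does the capturing; the model shows only the routing trade the article describes: one select value at a time buys wide internal visibility at the cost of a handful of pins.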
So by working with both the on-chip and off-chip analyzers, an SoC system or core verification team can regain simulation-like observability while running the SoC at the target speed.

Adrian M. Hernandez (adrian_hernandez@agilent.com) is a digital design engineer at Agilent Technologies Inc. (Colorado Springs, Colo.).
All material on this site Copyright © 2005 CMP Media LLC. All rights reserved.