Remote Testing and Diagnosis of System-on-Chips Using Network Management Frameworks

By Oussama Laouamri & Chouki Aktouf, DeFacTo Technologies

Abstract: This paper describes a novel approach for remotely testing and diagnosing hardware IP cores embedded within an SoC, using the networking infrastructure available between the testing machine and the SoC under test. To this end, classical network management protocols are used to remotely manage SoCs, including their embedded cores, in a cost-effective way. Existing on-chip design-for-test (DFT) architectures are extended to allow remote and cost-effective distributed testing of multiple SoCs. Intensive experimentation on several ITC'99 and ITC'02 design benchmarks shows that the testing methodology is cost-effective.

1. Introduction

The System-on-Chip (SoC) design paradigm has been widely accepted and implemented in practice during the last few years as an enabling design methodology for improving productivity and increasing design functionality. The main issues faced when testing an SoC are essentially related to the new challenges posed by the growing complexity of the designs concerned. Semiconductor technology feature sizes are moving deep into the submicron range, allowing the integration of many complex cores within a single design. In this Deep Sub-Micron (DSM) arena, testing and diagnosis become a major concern for quality and cost.

Testing costs are already a significant part of a chip's total production cost and represent a bottleneck in the successful completion of a design project and its final market success. Unlike other silicon manufacturing costs, the cost of test has not benefited as much from the overall downward trends over time. The price of large automated test equipment (ATE) with cutting-edge capabilities has steadily risen into the multi-millions of dollars, and the length and number of test vectors per design are mounting, so each design consumes more time on the testers. The International Technology Roadmap for Semiconductors (ITRS) reported in its 2001 analysis that, for some products in certain market segments, test may account for more than 70% of the total manufacturing cost. The solution to this predicament is to reduce the reliance on "big iron" testers and instead to use more low-cost testers that are DFT-aware.

Besides traditional manufacturing test needs, diagnosis and silicon debug may be required at various stages in the lifecycle of a product (ASIC, system): during development, qualification, production ramp-up, or in maintenance-like activities during the field life of the product. The root cause of the problem to be debugged can lie in the technology used, in the design implementation, or in a combination of the two; the debugging process has to cover both aspects. Although it is clear that software tools and DFT are gaining in importance with the advent of Systems-on-Chip, diagnosis and physical debug methods still have an important role to play.

Given an SoC that is already integrated within an electronic system, any mechanism that allows (i) remote application of test vectors to any of the SoC blocks and (ii) gathering of the related test results is very beneficial. Beyond easing access to DUT blocks from external I/O pins, remote access should be possible at any point in the SoC's lifetime. Only a networking infrastructure can provide such easy access. Making an SoC TCP/IP compliant therefore extends its testing and diagnosis possibilities.
Such compliance fits naturally with TCP/IP network management protocols such as SNMP. Today, available networking infrastructures allow secure data transfer; such an infrastructure can serve as a powerful vehicle to drive test vectors from a test engine to the chips and to gather test results for deeper analysis. Furthermore, the large body of experience in network management of electronic and computer systems can serve the testing community in better monitoring the behavior of chips during the execution of real-life applications. Indeed, network management of local TCP/IP networks is mainstream: to maintain and deliver high service quality to end users, network performance and reliability must be constantly monitored.

To date, several research works have addressed hardware-based solutions using network protocols and applications. In the Applied Research Lab (ARL) at Washington University, a set of hardware components for research in the fields of networking, switching, routing and active networking has been developed [1]. In particular, hardware components of layered protocol wrappers (UDP/IP wrappers) [1] have been proposed which process Internet packets in reconfigurable hardware. Several network applications using this wrapper library have since been developed; for instance, an Internet router and a firewall are important applications that use the wrapper library to route and filter packets [2, 3], and a single chip has been used to filter Internet spam and to guard against several types of network intrusion. However, these research works have not addressed a hardware-based SNMP solution at the application layer. This is important, since such a feature has to be considered at the chip level. SNMP is an application-layer protocol which relies on the TCP/IP suite (in practice, UDP is used). In this work, an SNMP agent is developed on top of the wrapper library described in [1]; this agent is implemented within an SoC to support external testing of the overall SoC.

In this work, a new DFT methodology named SNMP/1500, which makes complex SoCs easily testable and diagnosable, is presented. By using existing networking infrastructure, the proposed methodology can interoperate with existing DFT methodologies and networking technologies. Using the IEEE 1500 DFT methodology [5, 6], the test logic is extended and made compliant with the SNMP (Simple Network Management Protocol) TCP/IP management protocol. SNMP is known as a simple yet very powerful management protocol; it embeds a set of features that allow the management of heterogeneous and complex networks. In this research work, SNMP is used within SoCs to enable either remote testing or real-time monitoring of embedded cores. The IEEE 1500 standard [5, 6] enhances the testability of SoCs: it helps isolate blocks or IP cores, which allows their individual test or monitoring to be targeted.

The proposed architecture embeds two kinds of interfaces. The first interface extends a traditional IEEE 1500 wrapper. The second interface embeds an SNMP proxy agent at the SoC level. Starting from SNMP requests sent by a test engine such as an ATE (Automatic Test Equipment) over a TCP/IP network, the on-chip SNMP agent is made capable of performing IEEE 1500 boundary-scan operations at the level of an embedded core. This supports testing, diagnosis and monitoring of an SoC, or of a block of an SoC, in full compliance with both the IEEE 1500 and SNMP standards. It is noteworthy that the proposed architecture reuses existing Test Access Mechanisms (TAMs).
The rest of the paper is organized as follows. Section 2 presents the SNMP/1500 compliant testing architecture. Section 3 presents software considerations that help in implementing the proposed approach. Next, in Section 4, the main approach, which combines both management and testing standards, is presented. Section 5 summarizes the implementation results and a performance evaluation focused on testing operations. Finally, conclusions are given in Section 6.

2. SNMP/1500 Test architecture

In this architecture, IEEE 1500 wrappers are extended and act as SNMP agents. An SNMP agent, illustrated in Figure 1.a, is managed by a proxy agent. Test data are carried between the IP cores and the proxy agent via a Test Access Mechanism (TAM); the proposed SNMP/1500 architecture is compliant with existing TAM structures. As previously detailed in [7, 8], the architecture adds new registers such as IDIP (IDentifier of IP core), TECTEST (TECHnique of TEST) and OID (Object Identifier). The Wrapper Instruction Register (WIR) controls the wrapper operations; given the control inputs, WIR operations are directly controlled by the proxy agent through the Wrapper Interface Port (WIP). Furthermore, the WIR is extended with new instructions such as WS_GETREQUEST and WS_SETREQUEST. Hence, an SNMP operation is indicated by two parameters: the PDU (kind of operation: get-request, set-request, ...) and the OID. The same behavior is extrapolated to the test interfaces of the IP cores (extended 1500 wrappers), which ensures full compliance with the IEEE 1500 standard. Indeed, during an SNMP operation, the semantics of an IEEE 1500 instruction is completed by a flattened OID, which is equivalent to a hierarchical one.

Through the use of the network layered protocol UDP/IP wrappers [1], a test operator has the ability to manage the SoC infrastructure by using an SNMP proxy agent module (Fig. 1.a). The proxy agent monitors and controls the embedded cores under test. It translates information between the SNMP and IEEE std. 1500 protocols, providing a protocol conversion function which allows a management station to apply a consistent management framework to all SoC and IP core infrastructures. Starting from SNMP requests, the proxy agent performs IEEE 1500 wrapper boundary-scan operations. This supports testing and monitoring strategies based on the IEEE 1500 standard. The proxy agent can be considered as an IP core which receives the SNMP requests coming from the management station. Such requests are converted into instructions compliant with the extended 1500 standard. In a similar way, answers from the IP cores are converted into an SNMP representation (response). Finally, the test results are sent back to the ATE as SNMP messages.

Figure 1.b shows various types of IP cores that are compliant with the proposed SNMP/P1500 architecture. Figure 1.b.i shows an IP core including an internal Built-In Self-Test (BIST) structure. The SNMP/P1500 architecture also operates with IP cores using a scan test infrastructure (Fig. 1.b.ii). To ensure hierarchical testing, any IP core can itself be considered as an SoC, in full compliance with the proposed SNMP/P1500 architecture (Fig. 1.b.iii).

Fig. 1 - Proposed SNMP/P1500 architecture
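As a rough illustration of this conversion step, the sketch below models the proxy agent's translation in Python. It is purely illustrative software: the real proxy agent is a hardware block, and apart from the WS_GETREQUEST/WS_SETREQUEST instructions and the TECTEST mapping (flattened OID 7 for the hierarchical OID X.2.2.1.3, taken from the example in Section 4.2 below), the encodings used here are assumptions rather than the actual register maps.

# Illustrative software model of the proxy agent's SNMP-to-IEEE-1500 conversion.
# Instruction names come from the extended WIR described above; the flattened-OID
# table is hypothetical except for the TECTEST entry used in Section 4.2.

PDU_TO_WIR = {
    "get-request": "WS_GETREQUEST",
    "set-request": "WS_SETREQUEST",
}

# Hypothetical mapping from hierarchical OIDs (without the trailing ipCoreIndex)
# to the flattened OIDs decoded by each wrapped core.
FLATTENED_OID = {
    "X.2.2.1.3": 7,   # TECTEST register (example from Section 4.2)
}

def convert(pdu: str, oid: str, value=None) -> dict:
    """Split the hierarchical OID, flatten it, and build the wrapper command."""
    prefix, ip_core_index = oid.rsplit(".", 1)   # last sub-identifier = ipCoreIndex
    command = {
        "wir_instruction": PDU_TO_WIR[pdu],
        "flattened_oid": FLATTENED_OID[prefix],
        "ip_core_index": int(ip_core_index),
    }
    if pdu == "set-request":
        command["payload"] = value               # e.g. a test vector to shift in
    return command

# Read the TECTEST register of the IP core whose logical address is 5.
print(convert("get-request", "X.2.2.1.3.5"))
# -> {'wir_instruction': 'WS_GETREQUEST', 'flattened_oid': 7, 'ip_core_index': 5}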
3. Software design considerations

A MIB (Management Information Base) data model represents the software interface between the SNMP framework and the design under test. Hence, there must be a correspondence between the MIB knowledge available to the manager (the ATE) and what is actually implemented within the agent (the SoC). The manager can only carry out the operations that are provided for in the MIB.
The MIB base element [7, 8] is called "mibSoCTest". Any new module is identified by the OID "1.3.6.1.4.1.X", where X is a reference assigned by the Internet Engineering Task Force (IETF). Given an IP core or an SoC under test, the MIB describes both the features of the implemented test techniques associated with the IEEE 1500 wrapper and the information related to the testing process. The MIB is divided into two parts: information at the SoC level and information at the level of the IP cores. The first part of the MIB is dedicated to the SoC: SoC identifier, configuration of basic components, etc. The second part is dedicated to the IP cores. For instance, the table called "ipCoresWrappedP1500Table" holds the information regarding the IEEE 1500 test architecture of each IP core. The index of this table, called "ipCoreIndex", represents the logical address of an IP core in the SoC environment. The following table shows examples of managed objects.

4. Remote test protocol

4.1. Overview

The proposed SNMP/1500 interface (UDP/IP wrapper and proxy agent) controls the internal and wrapper boundary scan (the WBR of IEEE std. 1500) over a TCP/IP network. This interface assumes the connection of the input and output terminals of the scan chains (including the boundary-scan chain), the test control pins, and the clock pins to ATE channels, managed over the existing TCP/IP bandwidth. At the level of every IP core, access to the remaining functional pins is achieved via the IEEE 1500 wrapper boundary-scan chain. In this work, the UDP/IP wrapper and the proxy agent together constitute an SNMP/1500 wrapper around the SoC (Fig. 1). In this case, the DFT for the SoC can be designed without even knowing the target ATE; later, by using re-configurable logic, the number of scan chains and their lengths can be modified in the embedded cores according to the ATE specification.

The basic idea behind combining a DFT technique with the SNMP standard is to provide remote access not only to the functional terminals but also to the internal scan chains via the boundary-scan architecture (the IEEE 1500 wrapper), in order to enable even further scalability of the SoC-ATE interface. Given such an extension, the SoC becomes able to understand SNMP requests. SNMP requests (get-request, set-request, ...) retrieve or modify the value of any managed object (e.g. IP core identifier, SoC identifier, test vector, test technique, etc.) at the SoC level. The proposed SNMP/1500 wrapper around an IC effectively converts a fixed number of external test inputs and outputs, which form a TCP/IP network interface, into IEEE std. 1500 dedicated internal test inputs and outputs. The SNMP set-request message (set-request OID TV) applies test vectors to the IP cores, where both the object identifier (OID) and the test vector (TV) are specified; the OID distinguishes the type of the applied test. The SNMP get-request message (get-request OID) retrieves test or monitoring information (e.g. IP core identifier, SoC identifier, test techniques, monitoring registers, etc.) from either the IP core or the SoC by specifying the identity of an instance of a managed object.
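To make these message forms concrete, the following Python sketch shows how a test station might build the two requests described above. Only the set-request OID TV and get-request OID shapes, the mibSoCTest root and the TECTEST object (detailed in Section 4.2 below) come from the paper; the helper names and the sub-identifier chosen for the test-vector object are hypothetical.

# Illustrative ATE-side construction of the two SNMP message forms used here.
# "X" abbreviates the registered mibSoCTest root "1.3.6.1.4.1.<enterprise>"
# (Section 3); the test-vector sub-identifier below is an assumption, only the
# TECTEST object X.2.2.1.3.<ipCoreIndex> appears in the text.

MIB_ROOT = "X"   # stands for the IETF-assigned mibSoCTest subtree

def oid(*sub_ids) -> str:
    """Join sub-identifiers under the mibSoCTest root."""
    return ".".join([MIB_ROOT, *map(str, sub_ids)])

def set_request(object_oid: str, test_vector: str) -> dict:
    """set-request OID TV: apply a test vector to the addressed managed object."""
    return {"pdu": "set-request", "oid": object_oid, "value": test_vector}

def get_request(object_oid: str) -> dict:
    """get-request OID: retrieve test or monitoring information."""
    return {"pdu": "get-request", "oid": object_oid}

# Read the TECTEST register of the IP core whose ipCoreIndex is 5.
read_tectest = get_request(oid(2, 2, 1, 3, 5))          # "X.2.2.1.3.5"

# Hypothetical test-vector column (here column 4) of ipCoresWrappedP1500Table.
apply_vector = set_request(oid(2, 2, 1, 4, 5), "10110010111000")

print(read_tectest)
print(apply_vector)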
4.2. SNMP-IEEE 1500 relationships

The relationship between SNMP requests and IEEE 1500 instructions is implemented at the level of the proxy agent, which converts SNMP requests into IEEE 1500 instructions. For example, the SNMP request "get-request X.2.2.1.3.5" is used to recover the contents of the TECTEST register (4 bits which identify the test technique used). This request is converted into the WS_GETREQUEST IEEE 1500 instruction (Fig. 2) with a flattened OID equal to "7". This flattened OID corresponds to the hierarchical OID "X.2.2.1.3". The last sub-identifier of the hierarchical OID (here "5", the ipCoreIndex) gives the logical address of the targeted IP core. To distinguish the type of test applied at the IP core level, the flattened OID is used instead of the hierarchical OID; this choice is motivated by the need to minimize the logic required to process hierarchical OIDs in each IP core. Figure 2 illustrates this example. In this figure, the data enable signals (DataEn_IN and DataEn_OUT) indicate whether the 32-bit data buses Data_IN and Data_OUT carry a valid SNMP message payload or not.

Fig. 2 - Functioning of the SNMP/1500 architecture

5. Simulation results

The considered design flow is based on Synopsys® tools. Using a 0.18 µm CMOS technology, the implementation of a 200 MHz proxy agent requires 16369 gates. For the million-gate SoC f2126 of the ITC'02 benchmark set [9], which embeds four cores, the area overhead of the SNMP/1500 architecture adapted to this SoC is estimated at 2%. This seems reasonable given the number of added features. The proxy agent can operate at 33 MHz, 100 MHz or 200 MHz, the higher frequencies being best fitted for the production lines of electronic component manufacturers, who require extra-high throughput; the proxy agent can therefore operate even on high-speed networks. The theoretical maximum network throughput that can be supported is 3.2 Gb/s for a 100 MHz proxy agent. In addition, several experiments at the IP core level have been conducted on the twenty-two design benchmarks known as the ITC'99 benchmarks (b01 to b22) in order to estimate the area overhead of the extended 1500 wrapper. In summary, the area cost of the extended wrapper depends on the size of the core as well as on the number of core terminals. We observed 1% additional silicon area for the extended wrapper at the IP core level, on top of the 4.5% area cost needed to make all cores fully testable with internal full scan.

A queueing model is used in the performance analysis of the proposed approach. OMNeT++® [10] is used to analyze the performance of both the monitoring and the testing parts of the approach. The implemented queueing network (Fig. 3) is an M/G/1/N FIFO queue system: it has a Poisson arrival distribution, a general service time, a single server and a finite queue (N: system capacity). The service time is computed for each received SNMP request from its contents. The parameters involved in the calculation of the service time are: service type, test type, size of the monitoring registers, size of the test patterns, number of tests per message, chip speed (clock frequency of the test architecture), length of each core-internal scan chain, etc. These parameters determine the number of clock cycles needed to apply one test pattern or to recover one monitoring register.
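The paper does not give the exact cost function of its service-time calculation, so the following sketch is only a plausible first-order model built from the parameter list above; the cycle counts (for instance, one scan pattern costing the longest scan-chain length plus one cycle) are assumptions.

# First-order service-time model for one SNMP request, following the parameter
# list above. The real OMNeT++ model's cost function is not given in the paper,
# so the cycle counts here are assumptions for illustration only.

def service_time(service_type: str,
                 clock_hz: float,
                 patterns_per_message: int = 0,
                 max_scan_chain_length: int = 0,
                 monitor_register_bits: int = 0) -> float:
    """Return an assumed service time in seconds for one SNMP test/monitoring request."""
    if service_type == "scan_test":
        # Assumed: each pattern needs (longest chain + 1) cycles with overlapped
        # shift-in/capture/shift-out.
        cycles = patterns_per_message * (max_scan_chain_length + 1)
    elif service_type == "monitoring":
        cycles = monitor_register_bits          # shift out one monitoring register
    else:
        raise ValueError("unknown service type")
    return cycles / clock_hz

# Example: a set-request carrying 4 scan patterns for a core whose longest
# internal scan chain has 200 flip-flops, with a 100 MHz test clock.
print(service_time("scan_test", 100e6, patterns_per_message=4,
                   max_scan_chain_length=200))   # ~8.04e-6 s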
As shown in Figure 3, three main components are considered in the proposed model: the system under test or monitoring, the generator of SNMP requests, and the system (sink) that processes and analyzes the SNMP responses. The last two components constitute the testing station (ATE). An extensive simulation campaign has been conducted to show how the main modeling parameters influence the performance of the testing operations. For instance, let us assume a mean arrival rate of λ = 10000 SNMP messages per second, i.e. on average one message arrives every 1/λ = 1/10000 = 0.0001 second. This implies that the interarrival times follow an exponential distribution with an average interarrival time of 0.0001 second. The system capacity (N) represents the number of SNMP messages that can be held in the system; Cq is the capacity of the queue, so N = Cq + 1. Several simulations have been conducted using several ITC'02 SoC Test Benchmarks [9]. The results also show the influence of system parameters such as the queueing capacity (Cq), the chip speed (clock frequency of the test architecture) and the interarrival time (1/λ, the inverse of the arrival rate λ).

In Figure 4, the instantaneous testing time is shown according to the characteristics of the SoC test benchmark d695 (number of test patterns, number and length of scan chains, etc.). The test of the SoC d695 is not complex, and its traffic intensity converges towards zero because it does not use very long scan chains. The simulations of Figure 4 show peaks of different intensities; such peaks express the latency of the messages during operation of the on-chip DFT. Indeed, the clock frequency of the architecture considerably influences the performance of the test processes (Fig. 4.a and 4.b).

Fig. 4 - Instantaneous testing time for SoC d695 (Duke University)
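A minimal way to reproduce the arrival side of this model is sketched below in plain Python (not OMNeT++): exponential interarrival times with λ = 10000 messages per second and a single-server FIFO queue of capacity N = Cq + 1 that drops messages when full. The service-time distribution and its 60 µs mean are hypothetical placeholders standing in for the content-dependent service times discussed above.

# Minimal M/G/1/N-style sketch: Poisson arrivals, one server, finite capacity N,
# lost arrivals when the system is full. Illustrative only; service times are
# hypothetical here, whereas the paper derives them from each message's contents.
import random

def simulate_mg1n(arrival_rate, service_time_s, capacity_n, n_messages, seed=1):
    """Return the fraction of SNMP messages processed (not dropped)."""
    random.seed(seed)
    t = 0.0
    server_free_at = 0.0      # time the server finishes its current backlog
    in_system_until = []      # departure times of messages currently in the system
    processed = dropped = 0
    for _ in range(n_messages):
        t += random.expovariate(arrival_rate)            # next arrival (mean 1/lambda)
        in_system_until = [d for d in in_system_until if d > t]   # completed ones leave
        if len(in_system_until) >= capacity_n:
            dropped += 1                                  # queue and server are full
            continue
        start = max(t, server_free_at)
        service = service_time_s() if callable(service_time_s) else service_time_s
        server_free_at = start + service
        in_system_until.append(server_free_at)
        processed += 1
    return processed / n_messages

# lambda = 10000 messages/s (mean interarrival 0.0001 s), a hypothetical
# exponential service time with 60 us mean, and a queue of capacity Cq = 1 (N = 2).
rate = simulate_mg1n(10000, lambda: random.expovariate(1 / 60e-6), 2, 100_000)
print(f"fraction of SNMP test messages processed: {rate:.2%}")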
Table 2 gives simulation results for three ITC'02 SoC Test Benchmarks: u226, p22810 and p34392. The table is organized as follows. Column 1 gives the name of the SoC. Column 2 lists the number of modules. Column 3 shows the total number of input/output terminals in the SoC, i.e. the sum of the input/output counts of all modules. Column 4 shows the total number of scan flip-flops in the SoC, i.e. the sum of the scan chain lengths of all modules. Column 5 lists the sum of the test pattern counts of all tests. Column 6 gives the test data volume (in Kbytes) generated by the ATE; this is the data volume needed for 100% fault coverage. The next columns give the results for two cases: the SoC without a queue (Cq = 0) and the SoC with a queue whose capacity is a single SNMP message (Cq = 1). Column 7 presents the rate of test vectors processed by the SoC under test; the fault coverage can be deduced from the number of test vectors processed. The testing time (in ms) is given in the last column.
Tab. 2 - Performance analysis of testing operations for λ = 10000, clock frequency = 100 MHz

It is noteworthy that the benchmarks used have different test complexities. Clearly, a high pattern count does not directly imply a long testing time: the testing time depends on the number of test patterns multiplied by the number of clock cycles it takes to load and unload one test pattern. Some tests have few test patterns but use very long scan chains, whereas other tests have many patterns but do not use scan chains at all.

6. Conclusion

A new approach for the remote testing, diagnosis and monitoring of Systems-on-Chip and their embedded IP cores has been presented. The approach is based on the implementation of a hardware-based network management application called the proxy agent. The proxy agent is part of a hybrid testing/management solution based on the combination of SNMP and the IEEE 1500 design-for-test standard. Through the use of network layered protocol wrappers, a test operator has the ability to manage and precisely test the activities of embedded IP cores by using existing TCP/IP networks. The approach was analyzed at the level of both the IP core and the SoC. In future research, SNMPv3 capabilities will be used, where the approach will consider authentication and privacy features to better manage critical hardware applications. A more extensive validation of the approach is also planned.

References

[1] F. Braun, J. W. Lockwood and M. Waldvogel, "Layered Protocol Wrappers for Internet Packet Processing in Reconfigurable Hardware", Proc. of Hot Interconnects 9 (HotI-9), pp. 93-98, California, USA, August 2001.
[2] J. W. Lockwood, C. E. Neely, C. K. Zuver, J. Moscola, S. Dharmapurikar and D. Lim, "An Extensible, System-On-Programmable-Chip, Content-Aware Internet Firewall", Field-Programmable Logic and Applications (FPL'03), pp. 859-868, Lisbon, Portugal, October 2003.
[3] J. Moscola, J. W. Lockwood, R. P. Loui and M. Pachos, "Implementation of a Content-Scanning Module for an Internet Firewall", 11th Annual IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM'03), pp. 31-38, California, USA, April 2003.
[4] L. Miclea, Sz. Enyedi, G. Toderean, A. Benso and P. Prinetto, "Agent Based DBIST/DBISR and its Web/Wireless Management", International Test Conference 2003, Charlotte, NC, USA, pp. 952-960, October 2003.
[5] E. J. Marinissen and Y. Zorian, "Challenges in Testing Core-Based System ICs", IEEE Communications Magazine, Vol. 37, No. 6, pp. 104-109, June 1999.
[6] E. J. Marinissen, R. Kapur, M. Lousberg, T. McLaurin, M. Ricchetti and Y. Zorian, "On IEEE P1500's Standard for Embedded Core Test", Journal of Electronic Testing: Theory and Applications, vol. 18, no. 4-5, pp. 365-383, August-October 2002.
[7] O. Laouamri and C. Aktouf, "Enhancing Testability of System on Chips Using Network Management Protocols", Proc. of IEEE Design Automation and Test in Europe (DATE'04), pp. 1370-1371, Paris, France, February 2004.
[8] O. Laouamri and C. Aktouf, "Towards a More Precise Network Management Through Electronic Design", Proc. of the IEEE 3rd International Conference on Networking (ICN'04), pp. 45-49, Guadeloupe, France, February 2004.
[9] E. J. Marinissen, V. Iyengar and K. Chakrabarty, "A Set of Benchmarks for Modular Testing of SOCs", Proc. of IEEE International Test Conference (ITC'02), pp. 519-528, Baltimore, MD, October 2002.
[10] OMNeT++ object-oriented discrete event simulation system. URL: http://www.omnetpp.org, 2004.