How FPGAs empower system-level design
Nick Martin (09/26/2004 5:22 PM EDT) URL: http://www.eetimes.com/showArticle.jhtml?articleID=47902922
In the 1970s, 8-bit microprocessors became a commodity technology and electronics design changed very rapidly in response. The ability to partition part of the design into a "soft" medium, by programming the processor, had a profound effect on the industry. Now, any part of the design that could be moved into software could be changed, even after the product was manufactured.

Over the last few decades, companies such as Xilinx, Altera, and Actel have poured hundreds of millions of dollars into the ongoing development of programmable hardware devices in the form of the FPGA. Throughout the late 1990s, the FPGA arms race between the main players continually drove the capacity of these devices up and the relative price down. That race is now focused on low-cost, high-capacity FPGAs built on 90nm/300mm technology, and this creates an opportunity closely analogous to the 8-bit microprocessor revolution: suddenly you can put a whole embedded system on an FPGA for less than $20.

The advent of these cheap and immensely capable FPGAs allows us to move much of a design into a "soft," re-programmable hardware environment. However, for programmable systems to emerge as a mainstream technology delivery platform, engineers must overcome formidable obstacles that currently inhibit the migration of system-level complexity into this new programmable "nano-space."

Benefits of systems design with FPGAs

In my own experience, one thing that stands out when working with an FPGA-based design methodology is that you can proceed a long way down the design process before final decisions have to be made about processor choice, peripheral choices, and software-versus-hardware implementation of a given function. In some ways this is the polar opposite of a traditional, very top-down systems-specification discipline. Even after the design is committed to hardware, the balance between software and hardware can be changed, and the hardware design can be updated. With this model, the goal is generally to get as much of the design onto the FPGA as possible, giving maximum flexibility over when design changes can be made.

Another significant advantage of FPGA-based embedded systems is the ability to change the definition of the processor core so that tasks can be moved from the main processor, where they are implemented in software, to a co-processing hardware implementation. These trade-offs can be revisited as the system develops, allowing performance to be optimized based on experimental feedback from the real running system.
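To make the software-to-hardware migration concrete, below is a minimal Verilog sketch of the sort of task that might be offloaded: a multiply-accumulate loop recast as a small memory-mapped co-processor. The bus signals and register map here are illustrative assumptions, not the interface of any particular soft core.

    // Hypothetical multiply-accumulate co-processor. A software loop
    // such as "for (i = 0; i < n; i++) acc += a[i] * b[i];" becomes
    // two bus writes per element, with the arithmetic done in hardware.
    module mac_coprocessor (
        input  wire        clk,
        input  wire        rst,
        input  wire        sel,     // asserted by the system address decoder
        input  wire        we,      // bus write strobe
        input  wire [1:0]  addr,    // 0: operand A, 1: operand B (runs MAC), 2: clear
        input  wire [15:0] wdata,   // operand written by the processor
        output wire [31:0] rdata    // accumulated result, read by the processor
    );
        reg [15:0] op_a;
        reg [31:0] acc;

        always @(posedge clk) begin
            if (rst) begin
                op_a <= 16'd0;
                acc  <= 32'd0;
            end else if (sel && we) begin
                case (addr)
                    2'd0: op_a <= wdata;              // latch first operand
                    2'd1: acc  <= acc + op_a * wdata; // second operand triggers MAC
                    2'd2: acc  <= 32'd0;              // clear the accumulator
                    default: ;
                endcase
            end
        end

        assign rdata = acc;
    endmodule

Because the FPGA is re-programmable, a module like this can be added, removed or widened late in the project without touching the board.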
The ability to shift to a different variant of a processor, or to a completely different processor, provides another potential advantage of re-programmable platforms. This process can be made mostly transparent from the software perspective, and completely transparent from the hardware perspective.

Once the FPGA-based embedded application moves to 32-bit, cost can emerge as a potential driver. As large FPGAs become cheaper, both hybrid and soft processor cores move into the same general cost range as dedicated processors.

Risk in FPGA systems design

Risk is often about perceived danger rather than real danger. Perceived risks are a barrier to adoption, and real risks are a barrier to success after adoption has occurred. For example, a perceived risk may be that adopting FPGAs for system-level design requires a level of HDL expertise that the customer lacks. A real and significant risk is that the customer fails to get the current system design working in the new FPGA-based paradigm, or that the new system fails to meet the performance requirements of the application.

Additional risk will be perceived if the customer is required to use unknown (and therefore unproven) processor architectures, or new and unproven tools, in order to use FPGAs for system implementation. The new system may end up costing more than projected because the customer does not fully understand the new process, or because performance requirements were not met with the originally specified devices. If the customer does not have a high level of expertise in the FPGA area, this is a real issue that needs to be resolved.

I'd like to explore a pathway that exchanges the formidable complexities of verification-driven ASIC/SoC design methodologies for a much simpler model, one that works intuitively for the mainstream engineer. In this alternate approach, the risks of programmable-hosted system development fall away dramatically, allowing the benefits of a reconfigurable development and product-delivery platform to stand in clear relief.

An FPGA system-level design methodology

System components (processors, peripheral devices and other discrete parts) are well understood. However, when it comes time to move these components into nano-space, inside an ASIC or FPGA, system assembly becomes more difficult.

First, there is the problem of the design itself. This is typically rendered at the RTL level in languages like VHDL or Verilog, which describe component-level behavior efficiently but quickly grow cumbersome and verbose when the description spans an entire system. Designing at this level is traditionally driven by the need to simulate and verify a device-level design as a single entity with verification tools. Building and debugging these designs is notoriously difficult and time-consuming once a relatively low complexity threshold is crossed.

At the component level, the designer faces another set of challenges. Attempts to provide practical industry-wide standards for the required blocks of supporting IP, processor cores and peripherals, have been notably unsuccessful to date. Even within the FPGA world, each family of devices has its own functional architecture that must be individually targeted, so a processor core fashioned for one application may be problematic for the next.

The solution is to render these soft components back into entities that behave like, and can be manipulated like, their hardware counterparts. Once this is accomplished, they can be used the way traditional components are used, without the need to know about or deal with their "internals." These soft components can then be represented symbolically in a design system and assembled schematically, using the same skills engineers already apply when designing systems with conventional hardware.
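In HDL terms, this style of assembly reduces to structural instantiation: pre-verified cores wired together like parts on a schematic, with none of their internals exposed. The sketch below assumes three hypothetical library components (soft_cpu, uart_core and gpio_port) and a deliberately simplified shared bus; it illustrates the wiring style rather than any particular vendor's core library.

    // Structural system assembly: black-box cores wired together like
    // schematic parts. soft_cpu, uart_core and gpio_port are assumed
    // to come pre-verified from a component library, so only their
    // ports matter at this level.
    module system_top (
        input  wire       clk,
        input  wire       rst,
        input  wire       uart_rx,
        output wire       uart_tx,
        inout  wire [7:0] port_a
    );
        // A deliberately simplified shared processor bus
        wire [15:0] addr;
        wire [7:0]  wdata, rdata;
        wire        we;

        soft_cpu u_cpu (
            .clk(clk), .rst(rst),
            .addr(addr), .wdata(wdata), .rdata(rdata), .we(we)
        );

        uart_core u_uart (
            .clk(clk), .rst(rst),
            .addr(addr), .wdata(wdata), .we(we),
            .rx(uart_rx), .tx(uart_tx)
        );

        gpio_port u_gpio (
            .clk(clk), .rst(rst),
            .addr(addr), .wdata(wdata), .we(we),
            .pins(port_a)
        );
    endmodule

Nothing in this top level depends on the internals of the three cores; swapping the processor for a different variant changes one instantiation, not the system description.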
The problem of debugging the design remains, however, and this gives rise to another novel solution that exploits re-programmability. In the past, you could either attempt to simulate your design or build a prototype to verify its performance. Taking advantage of the re-programmability of FPGA devices, we can try any number of variants without risk and simply "re-burn" as we learn. We can take this a step further by integrating the design flow all the way from the schematic through to programming the device and delivering it on a PCB, with any required off-chip functionality designed in hardware.

Hooked up to an FPGA development board, we can use virtual instruments to probe our design via live JTAG connections and debug on the fly. We can extend this further by integrating embedded software development into the same environment. Now we can program and run our embedded processors in real time, long before a prototype is committed.
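The instruments themselves are tool-specific, but the idea behind this kind of on-chip probing can be sketched generically: a small capture module, synthesized alongside the user design, records a window of internal signals when a trigger condition fires. The module below is a hypothetical, minimal illustration; the JTAG transport a real tool would attach for readout is vendor-specific and omitted.

    // Minimal sketch of an on-chip probe. Once armed, it waits for a
    // trigger, records DEPTH samples of the observed signals, then
    // holds them for readout by the debug host. rd_addr/rd_data stand
    // in for the vendor-specific JTAG readout path.
    module capture_probe #(
        parameter WIDTH = 8,
        parameter DEPTH = 256
    ) (
        input  wire                     clk,
        input  wire                     arm,      // pulse to re-arm the probe
        input  wire                     trigger,  // user-chosen trigger condition
        input  wire [WIDTH-1:0]         probe_in, // internal signals under observation
        input  wire [$clog2(DEPTH)-1:0] rd_addr,  // readout address from the host
        output reg  [WIDTH-1:0]         rd_data,  // captured sample
        output reg                      done      // capture window is complete
    );
        reg [WIDTH-1:0] mem [0:DEPTH-1];
        reg [$clog2(DEPTH)-1:0] wr_addr;
        reg armed, running;

        always @(posedge clk) begin
            if (arm) begin
                armed   <= 1'b1;
                running <= 1'b0;
                done    <= 1'b0;
                wr_addr <= 0;
            end else if (armed && trigger) begin
                armed   <= 1'b0;   // trigger seen: start recording
                running <= 1'b1;
            end else if (running) begin
                mem[wr_addr] <= probe_in;
                wr_addr      <= wr_addr + 1'b1;
                if (wr_addr == DEPTH - 1) begin
                    running <= 1'b0;
                    done    <= 1'b1;   // buffer full, hold for readout
                end
            end
        end

        // Registered read port for the debug host
        always @(posedge clk)
            rd_data <= mem[rd_addr];
    endmodule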
When we take all of these elements and combine them in a single design environment, it creates a completely fresh approach to electronics that we like to call "live design." I think this "live" approach is exactly what is needed to allow these exciting new "system-capable" FPGAs to serve the widest possible range of applications.

But there is something else about "live design" that I find even more compelling. Once users experience the "liveness" of working with real hardware in real time on a virtually instrumented nano-level breadboard, it changes their whole view of design. There is a kind of freedom, creativity and fun that can transform a grown man back into that grinning kid with a hot soldering iron. Being able to move between concept and circuit, unfettered by "fixed" hardware, means that many of the constraints that would otherwise drive the process can simply be abandoned. The goal may remain the same (better design in less time) but the experience can now be something completely new.

Nick Martin is founder and joint CEO of EDA tools provider Altium Ltd.