The future of programmable logic
By Bob Zeidman, Courtesy of Embedded Systems Programming
Oct 2 2003 (15:00)
URL: http://www.embedded.com/showArticle.jhtml?articleID=15201141
Before long, platform FPGAs containing fixed or configurable processors and custom hardware will dominate the field of hardware design. By then, hardware/software codesign will be the norm.

Fifteen years ago, Xilinx and Altera, now the elders of the FPGA industry, were four and five years old, respectively; Actel was just three. In those days, programmable devices consisted of PALs (programmable array logic devices) and CPLDs (complex programmable logic devices), which were essentially small sets of AND-OR planes with crosspoint switches to connect them, plus a few registers with which to create something useful, like a state machine. These devices contained the equivalent of hundreds of gates of logic and were used primarily to replace glue logic. Well-placed PALs could be reprogrammed to correct design mistakes quickly and easily, without management ever knowing.
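That AND-OR-plane-plus-registers recipe really is all a small state machine needs. As a minimal sketch (a generic illustration of my own, not the programming model of any particular PAL), here is a Verilog sequence detector whose next-state logic is pure sum-of-products:

// Detects the serial bit pattern 1,0,1: 'found' is asserted when
// din completes the pattern. The next-state equations are plain
// sum-of-products terms (what an AND-OR plane computes), and two
// flip-flops hold the state.
module seq101 (
    input  wire clk,
    input  wire rst,   // synchronous reset, active high
    input  wire din,   // serial data in
    output wire found
);
    // state[0]: last bit was 1; state[1]: last two bits were 1,0
    reg [1:0] state;

    always @(posedge clk) begin
        if (rst)
            state <= 2'b00;
        else begin
            state[0] <= din;             // product term: din
            state[1] <= state[0] & ~din; // product term: state[0] AND NOT din
        end
    end

    assign found = state[1] & din;       // 1,0 seen and the current bit is 1
endmodule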
Then Xilinx came up with the SRAM-based field programmable gate array (FPGA), which could hold from 1,000 to more than 5,000 logic gates. Unfortunately, using all those gates while still connecting them and getting them to do something useful was another story. Engineers found that 60% utilization was good, 70% great, and 80% a practical impossibility.

Actel quickly followed with its antifuse technology. Antifuse technology produced nonvolatile parts, making designs more secure from reverse engineering than SRAM-based devices. The process was inherently faster than SRAM technology too: no delay occurred on startup while the FPGA loaded its design from a PROM. Other advantages of antifuses included higher densities (and thus lower costs per gate) and the elimination of the extra PROM from the board. At the time, I thought antifuse would surely dominate FPGA architectures. So much for my ability to prognosticate. For all practical purposes, SRAM-based FPGAs won that war. It turned out the antifuse process was nonstandard and more difficult than SRAM, leading to delays in getting new parts to market and leaving it generations behind SRAM in process development.

Altera came next, following its success in CPLDs with an SRAM-based FPGA. Altera's initial advantage over Xilinx was not in its hardware as much as its development tools. Altera developed a toolset that included support for schematics and hardware description languages, a simulator, timing analysis, synthesis, and place-and-route. And these tools were nearly free. Many of us were still poking around inside FPGA layouts, connecting one configurable logic block (CLB) to a specific long line to get better timing. We took a good look at the price and capabilities of the Altera tools. Suddenly, Xilinx had to fight for dominance in the market it had created. The competition produced better hardware, better development tools, and generally better solutions.

Current technology

But that's all in the past. Zooming ahead to the present day, there are still just a handful of FPGA companies. Xilinx and Altera dominate, while Actel, QuickLogic, Lattice, and Atmel share the remainder of the market with products aimed at specific applications and needs. SRAM is the dominant technology, though antifuse is used for applications where the protection of intellectual property is paramount. Antifuse also has some power consumption advantages over SRAM. Actel has introduced flash memory-based FPGAs that promise to have the speed, size, and nonvolatility advantages of antifuse technology while using a more standard process that's easier to manufacture, though still not as widely used as an SRAM process.

Software tools for FPGA development have greatly increased in functionality and further decreased in price over the years. Xilinx, pressured by Altera, now offers a great tool set. One great equalizer is that independent tool vendors have sprung up to support all device families from all FPGA vendors. Synplicity was a pioneer in this area. Previously, Synopsys, the original synthesis company, provided synthesis tools for application-specific integrated circuits (ASICs) that could be "adjusted" for FPGAs. Synplicity, however, focused its technology solely on FPGAs, fine-tuning its synthesis algorithms for the specific FPGA architectures of different vendors. This approach has enabled Synplicity to capture the majority share of the FPGA synthesis market today. Since FPGA vendors can resell the Synplicity tools, the playing field is evening out somewhat as tool vendors focus on developing software while FPGA vendors focus on developing hardware.

The advent of cores

The latest trend in FPGAs is the inclusion of specialized hardware in the form of hard cores. Vendors realize that if large numbers of their customers need a particular function, it's cost effective to include fixed cells inside the FPGA. For example, the hard-core version of an 8-bit microcontroller takes up far less real estate than the same design loaded into bare gates, the latter approach being called a soft core.

Hard-core options range from simple standard I/O interfaces like PCI, to networking interfaces, to specialized RISC processors and DSPs. The upside of these hard cores is that they reduce costs and development time. The downside is that the FPGA vendors are gambling that the particular features they choose to embed in their devices are the ones their customers want now and in the future. For example, including a network interface inside an FPGA seemed like a good idea in the roaring '90s, but with the collapse of the communications industry, some vendors may be regretting that decision today.

Customers need to decide between a fixed hard-core processor that has been characterized and tested and a soft core that is more flexible and can be tailored to their specific needs. Designers seem to prefer soft-core processors. The large vendors, Xilinx and Altera, can afford to put the wrong hard core in their chips and change their minds midstream. The smaller vendors face more of an all-or-nothing proposition; the ones that choose the popular hard cores will find great success.

Platform FPGA to dominate

Platform FPGAs, those containing either soft- or hard-core processors, will dominate embedded system designs 15 years from now. Within the next few years, these platforms will come down significantly in price as process features shrink. For many designs, the advantages of using a single, programmable device that may include multiple processors, interfaces, and glue logic will make it the preferred choice over using today's discrete devices on a printed circuit board.

Platform FPGAs will have a mix of soft- and hard-core processors. Soft cores will be the choice for the least complex designs and for new designs that don't have legacy code to support. Software tools that enable easy configuration of soft-core processors will be necessary to drive their acceptance. Hard-core processors will be the choice for complex designs and for designs that need to run legacy code. High-end designs will use multiple processors, perhaps some soft, others hard.

The ability to achieve such levels of integration with complete hardware reprogrammability will put pressure on a large number of would-be ASIC designers to use FPGAs instead. In the near future, all but the most performance-sensitive high-end and the most cost-sensitive high-volume system-on-chip designs will be done in FPGAs.

Interestingly, as FPGAs become more widely used, the costs will come down even more. This is because the costs of one set of semiconductor masks for a particular FPGA device can be amortized over all the designs that use that FPGA. If you design an ASIC, your mask costs are spread only over the hundred thousand chips you've manufactured. If you design the same part in an FPGA, your mask costs are spread over the hundred million chips that the FPGA vendor has manufactured.
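To put rough numbers on that argument (illustrative round figures of my choosing, not actual vendor costs), suppose a mask set costs $1 million:

    per-chip mask cost = mask-set cost / units built on those masks
    ASIC: $1,000,000 / 100,000 chips     = $10.00 per chip
    FPGA: $1,000,000 / 100,000,000 chips = $0.01 per chip

On these assumed numbers, the mask overhead per chip drops by three orders of magnitude.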
Vendors have begun toying with embedding FPGA logic inside an ASIC. This hybrid device enables the majority of the design to be optimized and frozen while smaller sections of the design can be changed in the field. For example, you can change communication protocols on the chip and also debug the state machines during in-system testing. These hybrids can be a platform for reconfigurable computing, where the computer hardware adapts to the specific program being executed. I believe that hybrid devices will have some success in the short term. However, given that most designs will migrate from ASIC to FPGA, there's little room in the long run for such hybrid devices.

As the market for fixed-plus-programmable platforms grows, perhaps today's processor vendors, including Intel, will add programmable logic to their chips. That will signal true convergence and be the next step in blurring the boundary between hardware and software.

New architectures

Internal FPGA architectures will continue to evolve, but not in drastic ways. Routing, which is still the most significant problem, will be addressed with multiple layers of metal, new kinds of crosspoint switching, and new kinds of point-to-point connections. The CLBs, however, will remain similar to those available today, though the number of inputs, outputs, and registers will vary.

Traditional CLBs use lookup tables (LUTs) to implement Boolean equations. They also include muxes to combine signals and flip-flops to register the outputs.
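That structure is simple enough to capture in a few lines of Verilog. The following is a generic behavioral sketch of the idea, not any vendor's actual cell: a 4-input LUT acts as a 16x1 truth table, and a configuration bit selects the registered or the combinational output.

// Generic model of a traditional CLB: a 4-input lookup table,
// one flip-flop, and an output mux. Real CLBs add carry chains,
// clock enables, and richer routing; this is illustrative only.
module clb (
    input  wire        clk,
    input  wire [3:0]  in,        // LUT address inputs
    input  wire [15:0] lut_init,  // truth table, set at configuration time
    input  wire        use_ff,    // configuration bit: register the output?
    output wire        out
);
    wire lut_out = lut_init[in];  // the LUT is just a 16x1 ROM

    reg ff;
    always @(posedge clk)
        ff <= lut_out;            // optional registered path

    assign out = use_ff ? ff : lut_out;  // output mux
endmodule

Loading lut_init with 16'h8000, for instance, makes the cell a 4-input AND gate; 16'hFFFE makes it a 4-input OR.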
Some FPGA vendors are experimenting with new CLB structures. The Altera Stratix, for example, includes CLBs with LUTs in which the muxes have been replaced with various forms of multipliers, adders, and subtractors to implement DSP applications more effectively. I have doubts about whether these new CLB structures will see success in anything but very specialized applications. The history of digital computing shows that new logic structures, such as neural networks, multi-valued logic, and fuzzy logic, come along often. But with all the tools that have been developed for plain old Boolean logic, and its success in implementing any kind of functionality, logic gates remain the structure of choice. For that reason, the simple CLB consisting of an LUT and registers will probably remain strong for most future FPGA devices.

We need new tools

The most significant area for the future, I believe, lies in the creation of new development tools for FPGAs. As programmable devices become larger and more complex and include one or more processors, a huge need will open up for tools that take advantage of these features and optimize the designs.

Hardware designers can use hardware description languages like Verilog to design their chips at a high level. They then run synthesis and layout tools that optimize the design. As FPGAs come to incorporate processors, the development tools will need to take software into account and optimize at a higher level of abstraction. Hardware/software codesign tools will be a necessity rather than a luxury. Ultimately, hardware and software expertise must be melded in the FPGA designer, who must understand system-level issues, though perhaps not the particulars of FPGA routing resources or operating-system task switching. Intelligent tools will be needed to synthesize and optimize software just as they are now used to synthesize and optimize hardware. These intelligent tools will work with libraries of pretested hardware objects and software functions, leaving "low-level" C and Verilog design necessary only for unique, specialized sections of hardware or software.

Software developers and their tools will also be affected by this integration. To take full advantage of the hardware components in programmable devices, compilers and real-time operating systems will need to make such integration more seamless. If dynamic reconfigurability ever becomes commonplace, future real-time operating systems may even get into the business of scheduling, placement, and routing of hardware objects, perhaps treating them as distinct tasks with communication mechanisms not unlike those of software tasks.

Essentially, platform FPGAs with embedded processors will take market share away from ASICs and become the dominant platform for embedded system design. And it's this dominance that will force further development of tools to help us fulfill the promise of hardware/software codesign.
Bob Zeidman is a consultant specializing in contract design of hardware and software. He is the author of the books Designing with FPGAs and CPLDs, Verilog Designer's Library, and Introduction to Verilog. Bob holds an MSEE degree from Stanford and a BSEE and a BA in physics from Cornell. His e-mail address is bob@zeidmanconsulting.com.
Copyright 2005 © CMP Media LLC