Reconfigurable illogic
Some say the microprocessor is dead. Not so fast, says industry expert Rich Belgard. The microprocessor isn't going away. As with the laws of motion, you can't just shift energy or complexity around without paying the price. Here's a more optimistic prognosis for the microprocessor.

For at least a decade, some people have been predicting that reconfigurable logic will replace the microprocessor. That hasn't happened yet, and I'm convinced it won't. Although reconfigurable logic has its place, I believe it's in addition to, or an extension of, the microprocessor.

Nick Tredennick and Brian Shimamoto conclude in their article ("The Death of Microprocessors") that microprocessors will no longer be capable of handling the new age of applications: what the authors call untethered systems. They define an untethered system as battery operated and not plugged into a wall.

Fittingly, I'm writing this counterpoint on a new Intel ULV Pentium M laptop, somewhere over Kansas. Weighing around three pounds, my laptop says I have about four and a half hours of battery left. On this particular trip, my Blackberry has been powered on for the past three days, has received my e-mail, and still has 85% battery life. I have my trusty cell phone with me, and its battery lasts about a week if I'm circumspect. Each of these untethered devices uses at least one microprocessor, and they all work pretty well. So I find it quite strange that Tredennick and Shimamoto suggest I'll need to throw out everything I know and our industry knows and switch the entire model to reconfigurable systems.

Before we throw out all those billions of microprocessors, however, let's examine how Tredennick and Shimamoto define a "reconfigurable system." I always wonder what a reconfigurable system is when I hear my colleagues bandy the term about.

Defining a reconfigurable system
Apparently, it's much more easily described by what it is not. Tredennick and Shimamoto claim it's not a PLD or an FPGA as we know them; they say these are too expensive, too slow, and too power hungry. It's not an extensible microprocessor, such as those ARC and Tensilica have created, because those are configured at design time. And, clearly, it's not a standard microprocessor. So exactly what is it?

First, a reconfigurable system sounds to me as if it's something that relies on, or is enabled by, a new nonvolatile memory. Again, the authors tell us what this new enabling memory is not, but they're short on what they claim it is. The character of this new memory is that it's fast, cheap, dense, nonvolatile, and, processwise, compatible with logic. Tell me again where this comes from?

Given this miracle memory, we'll page the hardware configurations into and out of a programmable-logic area on the chip. We can do this on application boundaries, and when we get proficient (as Tredennick and Shimamoto say we will), we'll even do it cycle by cycle. Well, I can't speak to reconfiguring logic on a cycle-by-cycle basis, but as far as reconfiguring logic goes: I think we've already been there.

Been there, done that
Carefully reading through Tredennick and Shimamoto, I find myself back in 1974, working on my second processor, the Burroughs B1700. This processor could change its instruction set based on the program it was about to run, and it remains one of the most interesting machines I've ever known. (For more information on the B1700, see "Design of the Burroughs B1700," Wayne T. Wilner, Proceedings of the Fall Joint Computer Conference, 1972.)

This machine, although hardly a microprocessor, had a different instruction set for each of a system development language (SDL) and the application languages of the time: Fortran, Cobol, and others. The microarchitecture of the machine was generic, in that the programmers' model, for example, could assume any word length; had no predefined condition codes; had a variable arithmetic-logic-unit width; allowed stack and flat memory; and, through microprogramming, supported soft instruction sets (called S-languages) as the interface to each of the programming languages.
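To make the soft-instruction idea concrete, here's a minimal sketch in C of the fetch-decode-dispatch loop an S-language interpreter performs for every instruction. The opcodes and encoding are hypothetical, not the B1700's actual S-language format:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical soft-instruction set -- illustrative only, not the
   actual B1700 S-language encoding. */
enum { OP_LOAD, OP_ADD, OP_STORE, OP_HALT };

typedef struct { uint8_t op; uint8_t reg; uint16_t addr; } SInstr;

/* Every soft instruction pays for a fetch, a decode, a vector to its
   implementation, execution, and a loop back -- overhead that
   dedicated microcode avoids. */
static void interpret(const SInstr *ip, uint32_t *mem, uint32_t *regs)
{
    for (;; ip++) {                               /* fetch the next instruction */
        switch (ip->op) {                         /* decode and vector          */
        case OP_LOAD:  regs[ip->reg] = mem[ip->addr];  break;
        case OP_ADD:   regs[ip->reg] += mem[ip->addr]; break;
        case OP_STORE: mem[ip->addr] = regs[ip->reg];  break;
        case OP_HALT:  return;
        }
        /* a real interpreter would also check for interrupts here */
    }
}

int main(void)
{
    uint32_t mem[8] = { 40, 2 };
    uint32_t regs[4] = { 0 };
    const SInstr prog[] = {
        { OP_LOAD, 0, 0 }, { OP_ADD, 0, 1 }, { OP_STORE, 0, 2 }, { OP_HALT, 0, 0 }
    };
    interpret(prog, mem, regs);
    printf("mem[2] = %u\n", (unsigned)mem[2]);    /* prints 42 */
    return 0;
}
```

Keep the per-instruction overhead in that loop in mind; it's exactly what the experiments described next were able to eliminate.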
Some of the experimentation we did at Burroughs, and that was carried out at a few universities that had B1700s, was to build microprograms dedicated to applications, not just S-languages. Not surprisingly (at least to computer scientists older than 30), these applications ran an order of magnitude, sometimes multiple orders of magnitude, faster than the S-language versions of the programs to which the applications were compiled. Why? Because we didn't have to go to memory to pick up an instruction, decode it, get the operands, vector to its implementation in microcode, perform its steps, check for interrupts, and do it all over again.

This experimentation led to conference papers on subjects like "vertical migration," where experimenters moved sections of applications into microcode and back to decide on the "proper" level of performance versus complexity of implementation. Some people even suggested migrating whole applications into in-line microcode, but the microcode was difficult to write. Remember this.

Of course, the B1700 reconfigured itself at the start of each program and again at task swaps. At the time, we didn't call it reconfigurable logic, but I think that's what it was. We've had reconfigurable systems, such as the Burroughs B1700 and others. We've had them before, and if we built them today, in today's technology, we'd call them microprocessors, even though they would include reconfigurable logic.

Cause of death
A key premise of "The Death of Microprocessors" is that today's microprocessors and digital signal processors can't satisfy the combined performance and power requirements of untethered systems. Maybe some future requirements don't exist yet, but as of today, I just plain disagree. To the extent that untethered devices need better performance and lower power, I think these requirements argue for evolutionary engineering solutions, not for a paradigm shift away from microprocessors.

Another premise is that we, as an industry, will be unable to make smaller, faster transistors for microprocessors. This has nothing to do with replacing microprocessors with reconfigurable systems. Whether we use transistors to build microprocessors or reconfigurable logic, we still need fast, small transistors. To achieve the same densities we have (or will have) in microprocessors, we'll also need to put these smaller, faster transistors into reconfigurable logic. The ultimate technology that makes reconfigurable logic work will also make microprocessors work.

Many Japanese semiconductor companies have experimented for almost a decade with reducing leakage current in transistors by biasing the transistors' substrates. And what of the newly announced LongRun 2 system Transmeta has developed for microprocessors? As I understand it, Transmeta's LongRun 2 reduces leakage current by programmatically adjusting the bias voltage of the transistor substrate to effect a larger threshold voltage when the transistors aren't required to run at their fastest. I understand Transmeta can do this across different parts of the substrate, so total leakage current can be substantially reduced.
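To see why raising the threshold voltage helps so much, consider the standard first-order model of subthreshold leakage, which falls off exponentially as the threshold voltage rises. The C sketch below works one such example; the voltages are illustrative round numbers, not Transmeta's actual figures:

```c
#include <math.h>
#include <stdio.h>

/* First-order subthreshold leakage model: I_leak is proportional to
   exp(-Vth / (n * vT)). The specific voltages below are illustrative,
   not measured values from any real process. */
int main(void)
{
    const double vT = 0.026;      /* thermal voltage kT/q at ~300 K, in volts */
    const double n  = 1.5;        /* typical subthreshold slope factor        */
    const double vth_fast = 0.30; /* low threshold: fast but leaky            */
    const double vth_idle = 0.40; /* raised ~100 mV by reverse body bias      */

    /* Ratio of leakage at the raised threshold to leakage at the low one. */
    double ratio = exp(-(vth_idle - vth_fast) / (n * vT));
    printf("leakage at raised Vth: %.1f%% of the fast setting (about %.0fx lower)\n",
           100.0 * ratio, 1.0 / ratio);
    return 0;
}
```

A threshold shift of only about 100 mV cuts leakage by roughly an order of magnitude in this model, which is the kind of lever substrate biasing provides. Note that the lever is equally available to microprocessors and to reconfigurable logic.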
Today, virtually every semiconductor manufacturer is working on new materials, new techniques, and new processes to solve the leakage problem.

Finally, a key premise of the article is that reconfigurable logic will replace microprocessors with the advent of memory that is fast, cheap, dense, nonvolatile, and logic-process compatible. It's difficult to argue that such a memory won't change the industry. But I don't believe this memory necessarily argues for replacing microprocessors with reconfigurable logic. I can imagine using this memory in microprocessors as easily as I can imagine using it in reconfigurable logic.

No free lunch
I'm a proponent of what I call conservation of complexity: given a specific algorithm to solve a problem, we can move complexity around the implementation but can't get rid of it entirely. We can build RISC microprocessors that have (or are alleged to have) less complexity than CISC microprocessors, but what we're actually doing is shifting the complexity of instruction scheduling from the hardware to the compiler. And because instructions in a RISC processor do less work, we need more instructions to do the same work, more memory to hold the instructions, more sophisticated caches, and on and on. The overall system complexity remains.

We can move the inherent complexity of a problem's solution from microprocessors into reconfigurable logic, but where does the complexity we see in contemporary microprocessors go? Does it go into the design tools? Does it go into the control of the devices? Does it go into the interconnects? I don't know. But my experience tells me it goes somewhere.

Microprocessors take much of the complexity of the problem domain and encapsulate it in the microprocessor design. The hundreds of designers and engineers in the engineering groups of the microprocessor companies shoulder the weight of much of that complexity. Thousands of person-years of compiler and other software design absorb more of it. Ultimately, the tools and the microprocessors give the customer a platform with a lot of the complexity abstracted out by experts in microarchitecture, the programmers' model, and tools.

Allowing full customization of a design via reconfigurable systems merely transfers all the solution complexity to the user. We have microprocessors because we're willing to let those hundreds of engineers who design them "own" a lot of the complexity. We have thousands of person-years invested in tools, and those tools offload a lot of complexity as well. Tools for reconfigurable systems will have to be created before such systems can be designed and deployed, and those tools will probably require hundreds of person-years of development.

ARC and Tensilica are creating microprocessors that allow a degree of configurability by enabling extensions to their instruction sets. I would imagine that the companies creating these extensions do so to increase the performance of operations specific to their own problem domains. These companies factor specific complexities out of the programs that run on the microprocessors and into the microprocessors themselves. DSP-like extensions, SIMD extensions, and other extensions important to specific customers are created to increase the performance of the algorithms that matter to those customers. General-purpose microprocessors can't anticipate these specific algorithms and therefore leave the complexity to the user. Moving application-specific complexities into the design of the underlying microprocessor once, as these companies allow, makes sense. (A sketch of what this looks like from the programmer's side follows below.)

Altera, Atmel, Xilinx, and others have begun embedding microprocessor cores in their programmable logic devices. They come at the microprocessor-extensions solution from the opposite end: start with the application-specific portions and let designers use microprocessor software for the rest. This solution is similar to the ones from ARC and Tensilica, but it comes from a different worldview. It also makes sense.
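Here's a rough C sketch of what factoring an inner loop into the processor looks like from the programmer's side. The intrinsic is hypothetical, not an actual ARC or Tensilica API, but extensible processors expose customer-defined instructions to the compiler in roughly this fashion:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical intrinsic for a customer-defined multiply-accumulate
   extension instruction. This is just a C model of what the custom
   hardware would do in a single instruction; real vendors supply the
   intrinsic (and the matching hardware) through their toolchains. */
static inline int32_t my_mac(int32_t acc, int16_t a, int16_t b)
{
    return acc + (int32_t)a * (int32_t)b;
}

/* Dot product: a general-purpose core spends a multiply, an add, and
   loop overhead per element; an extended core issues one fused MAC
   instruction per element instead. */
static int32_t dot(const int16_t *x, const int16_t *y, int n)
{
    int32_t acc = 0;
    for (int i = 0; i < n; i++)
        acc = my_mac(acc, x[i], y[i]);
    return acc;
}

int main(void)
{
    int16_t a[4] = { 1, 2, 3, 4 };
    int16_t b[4] = { 5, 6, 7, 8 };
    printf("dot = %ld\n", (long)dot(a, b, 4));   /* 1*5 + 2*6 + 3*7 + 4*8 = 70 */
    return 0;
}
```

The complexity hasn't disappeared; it has moved into the extension hardware and into the compiler that knows about it. That's the trade these vendors let their customers make once, at design time.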
Tools and infrastructure
With all of a reconfigurable system's design complexity visible to the user, how are these things to be controlled, configured, programmed, and debugged? For microprocessors, we've invested thousands of person-years in tools, at both the design level and the programming level, and we have a paradigm that works.

When there were user-microprogrammable systems, such as the B1700, few people actually microprogrammed them, because the complexity was too hard to handle and the tools were just too primitive. Implementing reconfigurable systems will require sophisticated tools and an infrastructure that doesn't yet exist. Some weeks ago, I attended a panel where one of the speakers said something like, "I couldn't imagine having to write Microsoft Word in VHDL." Neither can I.

MPU lives on
Sure, we trade off efficiency by using microprocessors instead of full-custom reconfigurable systems. But we trade it for time to market, for economy of scale, for cost, and for abstracting out implementation complexity, so that problem solvers can think in their own domain rather than in the domain of the implementation.

We now see microprocessors that have custom extensions, and we see traditional PLDs that incorporate standard microprocessor cores. These hybrid approaches, which combine reconfigurable (or reconfigured) logic as an adjunct to standard microprocessor cores, make sense. Are fully reconfigurable systems the completely wrong way to go? No, certainly not. In the future, there will be a niche for systems that can use reconfigurable logic, but reconfigurable systems will never replace the microprocessor.

Rich Belgard is a contributing editor of The Microprocessor Report and a consultant who has been active in the computer industry for more than 30 years. He designed and managed the development of computer architectures, including hardware, software, and microarchitecture, at Burroughs, Data General, Tandem Computers, and Rational Software. He has served as chairman and vice chairman of the Association for Computing Machinery's Special Interest Group on Microarchitectures and as vice chairman of the Institute of Electrical and Electronics Engineers' Technical Committee on Microprogramming and Microarchitectures.

Copyright © 2005 CMP Media LLC
By Rich Belgard, Courtesy of Embedded Systems Programming
Aug 19, 2004
URL: http://www.embedded.com/showArticle.jhtml?articleID=29111964