Design reuse expands across industry
By Ron Wilson, EE Times
March 27, 2003 (5:03 p.m. EST)
URL: http://www.eetimes.com/story/OEG20030324S0039
It would be pretty difficult to uncover the coining of "platform" in a system-on-chip design context, or to be sure what it meant to that original user. But it is possible to get the original sense.
In the SoC community, a platform is a means to increase the level of abstraction, and therefore reduce the amount of detail, in the design process, specifically by reusing previously designed components in a black-box fashion. That is a pretty general definition. What it means in practice depends on which parts of the design process are being abstracted.
In today's market, the term platform may refer to an abstraction of detail in the physical design process, in verification or in mapping a logical design into a technology. It may also refer to a near elimination of the chip design process, or even to a simplification of the non-design-related tasks within a design flow. Each of these concepts needs examination.
An ASIC vendor will often refer to its back-end flow, from sign-off to samples, together with its extended libraries, as a platform. In some cases a vendor is so impressed with its libraries that it will refer simply to its intellectual-property library as a platform. In either case, the point is that by drawing on the libraries and using them in conjunction with an approved tool flow, the design team avoids having to know much of anything about the IP blocks within the library beyond their function.
At this level of abstraction, the design team is spared the creation of significant, large blocks of IP, usually including CPU and DSP cores, memories, peripheral controllers, I/O cells and major functional blocks. The team can work with behavioral or cycle-accurate models of the blocks without getting involved in their gate-level behavior, trusting either to hard macros or to the synthesis flow to deal with that part. This is still far from plug-and-play design, in that it leaves the problems of interconnecting the blocks and of creating any customer-specific logic up to the design team. But it is clearly a reduction in complexity.
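The black-box idea described above can be illustrated with a toy sketch: the design team exercises a library block purely through its functional interface, never touching its internals. The class below is entirely hypothetical, a minimal behavioral model of a FIFO peripheral, not any vendor's actual library model.

```python
from collections import deque

class BehavioralFifo:
    """Toy behavioral model of a FIFO block: it captures what the block
    does (ordering, capacity, full/empty status) while hiding all
    gate-level and timing detail from the team integrating it."""

    def __init__(self, depth):
        self.depth = depth
        self._q = deque()

    def push(self, word):
        if self.full():
            return False          # a write to a full FIFO is rejected
        self._q.append(word)
        return True

    def pop(self):
        return self._q.popleft() if self._q else None

    def full(self):
        return len(self._q) == self.depth

    def empty(self):
        return not self._q

# The integrating team needs only the interface above, nothing below it:
fifo = BehavioralFifo(depth=2)
assert fifo.push(0xA) and fifo.push(0xB)
assert not fifo.push(0xC)         # third write rejected: FIFO is full
assert fifo.pop() == 0xA          # first-in, first-out ordering
```

Whether the block behind this interface is a hard macro or synthesized gates is exactly the detail the platform abstracts away.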
System-level platforms
With the addition of a few more elements, this notion of a platform takes on much more of a system feel. Some IP vendors provide extensive libraries but add the IP necessary to interconnect the blocks. Examples would include CPU IP vendor ARM Ltd. (Cambridge, U.K.), with its nearly ubiquitous Amba bus architecture; Palmchip (San Jose, Calif.); and Sonics Inc. (Mountain View, Calif.). In ARM's case the scheme is very much CPU-centric. In the latter two cases the IP and methodology go beyond simply an on-chip bus, approaching a general framework for interconnecting large blocks of IP into a system. But both offer the additional abstraction of being able to treat much of the on-chip interconnect at a behavioral level for much of the design.
Of course, the ultimate expression of this philosophy would be to anticipate what IP blocks the customer would need in the design, fabricate the chip ahead of time and simply let the design team customize the details to its needs through software, field programmability or perhaps one-time programmability. And that, also, is an emerging technology trying to gain attention under the crowded platform umbrella.
There are many paths toward this end. One is simply to add a very general programmable platform to the tool chain, methodology, rich IP library and interconnect schemes mentioned above. That, in a sense, is the current business plan for FPGA vendors. But given the limitations of FPGAs, it takes a little more than that; vendors have learned that for some widely used but performance-critical IP blocks, you simply can't toss them into the programmable logic array and hope for the best. You have to provide them in silicon alongside or embedded in the array.
That thinking has led to a range of "platform FPGA" products. The first moves in this direction were incremental, with Altera Corp. (San Jose) and Xilinx Inc. (San Jose) embedding sophisticated memory blocks, then hardware multipliers and then high-speed I/O modules in their devices, primarily in response to the needs of the networking business. Early in the game QuickLogic Corp. (Sunnyvale, Calif.) took the concept to a different set of markets by embedding a MIPS CPU, memory and common peripheral blocks into one of its families, making, in effect, a 32-bit microcontroller with a moderately sized FPGA inside.
ASIC vendors have come at the issue from a different angle. More or less at the same time, but independently, AMI Semiconductor (Pocatello, Idaho), Chip Express Corp. (Santa Clara, Calif.), NEC Electronics (Santa Clara) and Lightspeed Semiconductor (Sunnyvale) evolved what might be called the structured ASIC. These devices are like gate arrays, in that they are customized to a particular design by means of the last few metal layers from a common base wafer. But rather than being huge seas of transistor quads or NAND gates, they are regular arrays of more complex logic cells, register cells, memory structures and sometimes other structures. In that sense they resemble the internal structure of FPGAs, but with metal-mask interconnect rather than field-programmable interconnect.
These devices promise the fast turnaround time and low nonrecurring expense of gate arrays, but with performance approaching that of full cell-based ASIC designs. And these offerings, together with their libraries and back-end design services, are also being referred to as platforms.
An interesting variant on the theme is fabless ASIC start-up Telairity, which offers the same concept of larger building blocks but without the prediffused wafers. Instead, Telairity permits customers to design in a very hierarchical manner, starting with macroblocks that comprise a few hundred to a few thousand gates each. These small pieces are the building materials from which larger digital functions are composed. And those functions, in turn, are combined into blocks that are larger still, until eventually the chip is fully implemented.
The blocks carry their own clocking, test and interconnect conventions, eliminating much of the back-end design necessary for a conventional cell-based flow. But Telairity takes in the customer's design as a network of these macroblocks, completes the back-end design tasks and submits the result to the foundry. The company claims to be closer to full cell-based performance than structured-ASIC approaches, but with only slightly longer turnaround.
In recent weeks there has been another wrinkle in this story, as NEC and Telairity have reached out to embrace front-end design-planning and, in NEC's case, synthesis tools as well. This moves them in the direction of recommending, if not exactly offering, a full tool flow to go with their libraries and implementation vehicles.
Toward the future
All of these structured, user-definable approaches can obviously benefit from having big blocks of IP such as CPUs, DSP cores and dual-port memories available. One can take that argument a step further and say that for a particular range of applications, one could make a particular set of these blocks hard macros, diffuse them onto the die and have, QuickLogic fashion, a platform already optimized for a particular range of applications: a kind of single-board computer on a chip. It would only be necessary to add user-specific blocks to the design to create a differentiated SoC for a particular customer.
That pretty much defines the approach of LSI Logic's RapidChip program. LSI (Milpitas, Calif.) will offer platform chip designs that combine critical blocks already optimized and interconnected on the die with a substantial gate array area, allowing the user to move to a completed SoC in gate array turnaround times. The selection of predefined cores will be made to meet the needs of particular application areas.
In other companies similar concepts are being discussed employing an area of embedded FPGA on the die instead of an embedded gate array. IBM Microelectronics (Essex Junction, Vt.) is discussing an embeddable version of a small Xilinx FPGA as simply another element in its ASIC library. And STMicroelectronics has discussed a number of ways of adding field-programmable logic to its own application-directed platform chips.
Meanwhile the key concept for configurable-CPU rivals ARC International (San Jose) and Tensilica Corp. (Santa Clara) is that by making relatively minor extensions to the CPU configuration or instruction set, a design team can enormously increase the CPU throughput on particular tasks, pulling jobs that normally would demand dedicated hardware back into the realm of software solutions.
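A toy cost model makes the throughput argument concrete. The cycle counts below are invented for illustration; they are not ARC's or Tensilica's figures. The point is only the shape of the win: fusing a multiply and an add into one instruction removes cycles from every iteration of a hot loop.

```python
def mac_software_cycles(n_samples, mul_cycles=3, add_cycles=1, load_cycles=2):
    """Cycles for an n-sample multiply-accumulate loop done with generic
    load/multiply/add instructions (hypothetical per-operation costs)."""
    return n_samples * (2 * load_cycles + mul_cycles + add_cycles)

def mac_custom_cycles(n_samples, mac_cycles=1, load_cycles=2):
    """The same loop once a fused multiply-accumulate instruction has
    been added to the core's instruction set."""
    return n_samples * (2 * load_cycles + mac_cycles)

n = 256                        # e.g. one block of a FIR filter
sw = mac_software_cycles(n)    # 256 * (4 + 3 + 1) = 2048 cycles
hw = mac_custom_cycles(n)      # 256 * (4 + 1)     = 1280 cycles
speedup = sw / hw              # 1.6x from a single added instruction
```

Compounded across every critical inner loop, modest per-instruction savings like this are what let a configured core displace dedicated hardware.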
ARC has taken perhaps the more familiar approach, assembling a configured CPU core, peripheral cores, software development tools, operating software and application code for a particular application area into a platform. Thus the company offers packages for such applications as 802.11 and USB On-the-Go.
Tensilica's view, as expressed by CEO Chris Rowen, is more radical. Rowen starts with the observation that configurable CPU cores in modern processes are very small and can range from modestly general-purpose to enormously fast special-purpose devices. Why not, then, simply write the application in C++, decompose the resulting code into tasks and start assigning tasks to processors? Noncritical tasks get lumped into a general-purpose CPU. Critical tasks get one or more CPUs configured to handle them at the necessary throughput and latency. The application maps into an array of configured CPU cores, with virtually no reference to conventional ASIC methodology.
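The decomposition Rowen describes can be sketched as a simple partitioning pass: tasks whose combined throughput demand fits a general-purpose core are lumped together, and everything else gets a configured core of its own. The function, task names and throughput numbers below are all illustrative assumptions, not Tensilica's actual flow.

```python
def partition_tasks(tasks, general_cpu_budget):
    """Toy task-to-processor assignment: pack tasks into one
    general-purpose core until its throughput budget (in MOPS) is
    exhausted; every task that does not fit gets a dedicated,
    task-configured core instead."""
    general, dedicated = [], []
    load = 0
    for task in sorted(tasks, key=lambda t: t["mops"], reverse=True):
        if load + task["mops"] <= general_cpu_budget:
            general.append(task["name"])
            load += task["mops"]
        else:
            dedicated.append(task["name"])  # gets its own configured core
    return general, dedicated

tasks = [
    {"name": "ui",        "mops": 20},
    {"name": "protocol",  "mops": 60},
    {"name": "video_dec", "mops": 900},  # critical: far beyond one generic core
    {"name": "audio",     "mops": 150},
]
general, dedicated = partition_tasks(tasks, general_cpu_budget=250)
# video_dec lands on its own configured core; the rest share one CPU
```

A real flow would weigh latency and data movement as well as raw throughput, but the sketch captures the mapping from application code to an array of cores.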
Both of these views abstract not the back-end design tasks but the front end. In principle, systems-on-chip using either ARC's or Tensilica's cores could be implemented using any of the platform approaches mentioned earlier. In practice today, however, they are used with a conventional COT flow.
Even that might be eliminated if technology puts a sufficient grounding of feasibility under yet another new concept: the array of programmable processors. This idea is in effect to make an FPGA-like structure composed not of logic cells but of configurable processing elements. The exact size, power and composition of the elements vary widely from thinker to thinker.
PACT (San Jose) and Quicksilver Technology (San Jose) are both pursuing such architectures. Once again, the idea is to abstract away the front end of the design process by moving directly from application code to a processing-element array. But this time the array would be predefined off the shelf, much like an FPGA.
What process generation will be necessary to make these ideas practical, and in which sorts of algorithms they will prosper, remain to be seen. But the ideas cannot simply be dismissed.