Design system compiles silicon straight from C code
By Bernard Cole, EE Times
March 26, 2001 (2:52 p.m. EST)
URL: http://www.eetimes.com/story/OEG20010326S0050
Proceler Inc. is about to unfurl a compiler technology that makes writing and compiling code the first and defining step in the development process. The two-year-old startup will unveil its approach at the Embedded Systems Conference, April 9-13 in San Francisco.

The new methodology will enable the compiled code for an application to dictate all other decisions in the design, including the choice of hardware, bus structure, clock rate, register design and memory architecture. The Dynamically Variable Instruction-set Architecture, or Dvaita, will allow embedded developers to write their application code in standard C or C++ with no special extensions and then use it to automatically generate high-performance, application-specific soft processors, said Naren Nachiappan, president and chief executive officer at Proceler (Berkeley, Calif.). Dvaita is expected to provide high-performance acceleration for compute-intensive embedded applications while cutting months off design cycles.

"What this methodology allows the developer to do is identify computationally intensive code blocks, such as loops, that are candidates for acceleration," Nachiappan said. "These blocks can then be implemented in reconfigurable logic while compiling the remaining code as a standard executable for the system microprocessor."

The beta version is expected this summer, with full product release in the fall.

In development for almost two years, the technology is the culmination of nearly 20 years of friendship between Nachiappan and Krishna Palem, chief technology officer, who first met as engineering students at the University of Texas. Nachiappan wound up as senior vice president of business development at VenturCom, a developer of software for Windows-based embedded systems, where he was chief architect of the industry's first embedded Unix product.
Palem became a professor and senior research leader at the Georgia Institute of Technology, and director of the Center for Research in Embedded Systems and Technologies. There he did the work on parallel and optimizing compiler technologies that serves as the basis of the Proceler scheme.

"In conversations we had over the years, we felt that eventually, as densities increased and as the needs of end users diversified beyond just a few architectures, the current approaches, which depended on a lot of technology and expertise unfamiliar to the embedded designer, would have to give way to a more straightforward approach," Palem said.

Most approaches to configurability create optimized code late in the design cycle, he said. "Configurability is thus limited, because developers must keep in mind the need to provide an appropriate tool chain to compile, debug and execute the code on the resulting architecture," said Palem. "The most important part of the embedded application, the actual code and its compilation into its most efficiently executable form, came last."

To turn the development process on its head, Proceler uses Dvaita as the basis of a soft microarchitecture: a set of modular, presynthesized components that implement an instruction set, as well as data-flow and control elements. Through an associated set of tools in the Dvaita software platform, application-specific soft processors are generated from C/C++ source code by optimizing the performance of compute-intensive portions of the code. Also provided will be an application programming interface and run-time interface, which seamlessly integrate the soft processors on a reconfigurable-computing system.

Dvaita's instruction-set architecture differs from the norm as well. A typical ISA, said Nachiappan, abstracts hardware functional units into instructions and storage resources into registers. More complex instruction-set architectures expose other aspects of the data path that can be scheduled by the compiler.
He called these "hard ISAs that require the implementation of the instructions to be fixed at design time and presented to the compiler in the form of a fixed microprocessor data path implementation." Proceler, by contrast, "uses a soft ISA that can include instructions and components customized to a particular application," he said.

The run-time interface is portable to alternative real-time operating systems that reside on the microprocessor. The API lets third parties integrate Proceler's development capabilities into their own development schemes. The soft processors can then be automatically implemented on reconfigurable-computing systems. These might consist of a standard microprocessor paired with a reconfigurable-logic device such as an FPGA in a bus-based configuration, or they might be integrated into one of the new generation of processor architectures called configurable system-on-chip devices.

The manner in which soft ISAs and microarchitecture customization are used to construct application-specific soft processors is in many ways familiar from the standard methodology. With the Dvaita suite of tools, the components of the microarchitecture are designed, synthesized and archived off-line. For example, an ADD instruction may lead to the design of a 16-bit ripple-carry adder with registered inputs. That design can be implemented via schematic capture or synthesis and translated into a netlist. The design is implemented on the reconfigurable logic to create a compact layout with known placement and geometric properties. That information can be used to annotate the netlist, which can then be archived for use by the compiler. All microarchitecture components, such as registers, operators and controller elements, are created and archived in this manner.

The compute-intensive segments of an application program are compiled to the soft ISA. This program representation is then implemented via a soft microarchitecture.
Alternative options range from optimization of an existing microarchitecture to generation of a customized one. In the Dvaita methodology, said Nachiappan, such a microarchitecture implementation is called a processing engine. Multiple processing engines can be created for a program, corresponding to distinct code segments, distinct loops, for example. Certain system-level functions are necessary for computations to be performed, including access to I/O, memory and the host microprocessor.

"At this point, all of the elements to construct a soft processor are available," said Nachiappan. "Vendor-specific tools now create hardware configuration information for a specific device."

Besides its role as a generator of actual silicon, he said, Dvaita could be used with a hardware/software co-development system to give a developer a way to run "what if" scenarios long before a design is committed to hardware. "Conceivably, a developer could generate the C or C++ code for the application and then run simulations of various configurations of hardware and software until the best configuration is achieved," he said.

Because Proceler's soft processors accelerate the compute-intensive functions of the embedded application, the Dvaita methodology will probably find its first uses in high-performance applications such as network processing. In a network router, for example, said Nachiappan, the technology could be used to automatically partition the application code, implementing iterative, compute-intensive operations such as packet processing as an application-specific soft processor in reconfigurable logic. Dvaita could also implement the non-compute-intensive functions of the router, such as protocol processing, on a RISC processor. Other standard software components, such as the administrative user interface, could be quickly and seamlessly integrated into the network router design.
To overcome the performance hit that even an optimized architecture incurs in an FPGA design, Dvaita uses a number of parallelizing algorithms and methodologies based on Palem's research. "An important issue is the performance relative to execution of the compute-intensive code blocks on the microprocessor," said Palem. "Our approach is based on extracting two sources of parallelism. Loops are first analyzed for instruction-level parallelism within the loop body. The second is the parallel execution of loop iterations."

Proceler is a privately held, venture-capital-backed company.