MARGAUX, France -- Programming models must improve to make full use of next-generation systems-on-chip (SoCs), according to presenters at the Multi-Processor SoC (MPSoC) workshop here Thursday.

A programming model defines the way in which SoC platforms are developed; it uses abstractions to hide the underlying execution platform. Traditionally, programming models have come from the software world, but Ahmed Jerraya, MPSoC organizer and leader of the system-level synthesis group at TIMA Laboratory in Grenoble, France, said that new thinking is needed for multi-processor SoCs.

"The programming model as defined by the software community cannot work for SoCs," Jerraya said. "When we develop hardware/software interfaces, we need a different programming model. It must abstract both hardware and software."

Further, Jerraya said, the programming model must abstract the CPU, which defines the hardware/software interface. Compilation today generally ignores the CPU environment, he noted. Parallel programming models for SoCs must therefore change, he said, to account for CPU organization, bus-functional model interfaces, instruction-set simulation, transaction-level models, and ultimately RTL hardware as the programming models become more explicit. "Hardware/software interface co-design requires a unified model that represents hardware, software, and CPUs together," Jerraya said.

In a later session, Jerraya's colleague Frederic Petrot of TIMA's system-level synthesis group presented a possible answer. He said TIMA is developing a "service-based component model" that can handle both hardware and software elements. "We separate functional service from the way it is physically accessed," said Petrot. The models have two parts: interfaces, which include service declarations and data structures, and parameterized implementations. The goal is a single model that represents system design from abstract specification to RTL, with automated generation of wrappers and transactors.

Jan Madsen, professor of informatics and mathematical modeling at the Technical University of Denmark, presented a system-level modeling framework called ARTS. "The whole idea is that it should be easy to try out new things, like new processors and new scheduling," he said. ARTS is a SystemC-based simulation environment that makes it possible to examine the consequences of selecting a given real-time operating system, processor, network topology, and task mapping. Madsen showed how it can examine processor utilization, bus contention, and memory usage. ARTS has been used by university researchers for network-on-chip (NoC) design.

What happens when SoCs start to include MEMS, optical, or electro-biological components? Gabriela Nicolescu, a professor at Ecole Polytechnique de Montreal, discussed the specification and validation requirements for such "heterogeneous SoCs." The key, she said, will be modeling and simulation that can alleviate the need for physical prototypes. One possibility is extending languages, as demonstrated by Verilog-AMS; another is an environment that allows heterogeneous models of computation, such as Ptolemy from the University of California at Berkeley. Nicolescu said her university has a working prototype of a SystemC/Simulink environment that provides integrated continuous-time and discrete-event simulation. Inter-simulation communication adds roughly 20 percent overhead to simulation time, she noted.
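The coupling Nicolescu described can be pictured with a small, self-contained sketch: a discrete-event controller and a continuous-time solver advance in lock step and exchange values at fixed synchronization points. The listing below is a hypothetical illustration in plain C++, not the Polytechnique prototype or its SystemC/Simulink interface; the plant model, the controller, the synchronization interval, and all numeric values are invented for the example. The boundary crossing at step 2 is where the inter-simulator communication she cited as overhead would occur in a real coupled setup.

    // Hypothetical lock-step co-simulation of a discrete-event controller and
    // a continuous-time model. Purely illustrative; all values are assumed.
    #include <cstdio>

    // Continuous-time side: a first-order RC low-pass filter integrated with
    // forward Euler between synchronization points.
    struct ContinuousModel {
        double v_out = 0.0;          // capacitor voltage (state)
        const double tau = 1e-3;     // RC time constant, assumed value

        void advance(double v_in, double dt, int substeps) {
            double h = dt / substeps;
            for (int i = 0; i < substeps; ++i)
                v_out += h * (v_in - v_out) / tau;
        }
    };

    // Discrete-event side: a controller that samples v_out and toggles its
    // drive signal around a threshold (a crude bang-bang regulator).
    struct DiscreteModel {
        double drive = 1.0;
        void react(double sampled_v) {
            drive = (sampled_v > 0.5) ? 0.0 : 1.0;   // event: switch the input
        }
    };

    int main() {
        ContinuousModel plant;
        DiscreteModel ctrl;
        const double sync_step = 1e-4;   // synchronization interval between the two sides
        for (int k = 0; k < 50; ++k) {
            // 1. Continuous solver runs up to the next synchronization point.
            plant.advance(ctrl.drive, sync_step, 100);
            // 2. Values cross the simulator boundary; in a real coupled setup this
            //    exchange is the inter-simulator communication that costs time.
            ctrl.react(plant.v_out);
            std::printf("t=%.4f  v_out=%.3f  drive=%.0f\n",
                        (k + 1) * sync_step, plant.v_out, ctrl.drive);
        }
        return 0;
    }

In a real environment the two sides would be separate simulation kernels rather than two structs in one process, and each exchange would involve data marshaling and kernel synchronization, which is what drives the overhead figure Nicolescu reported.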
One institution that's done extensive research in parallel programming models for multi-processor SoCs is the St. Petersburg State University of Aerospace Instrumentation (Russia), according to Yuriy Sheynin, director of the institute for high-performance computer and network technologies there. Programming heterogeneous multi-processor SoCs is a challenging task, he said, one that requires a new approach to parallel programming along with computation models for procedure-level and task-level parallelism. What's needed, he said, are an efficient language, a methodology, and tools.

Sheynin said his group's research focuses on task-level parallelism, used for algorithmic development, and procedure-level parallelism, used for source-code programming and optimization; a generic sketch of the two levels appears below. The group has developed a set of "asynchronous growing processes" (AGP) models for parallel computations. These graphical models, based on asynchronous distributed control, explicitly represent all interactions between processes. The group has also developed Visa, a parallel programming language that is formally specified in terms of AGP models. It is, Sheynin noted, an interactive and graphical language. "With the right language and the right tools, parallel programming is not more complicated than sequential programming," he said.

MPSoC is an international forum focused on application-specific multi-processor SoCs.
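As a point of reference for the distinction Sheynin drew, the listing below is a generic sketch in standard C++ of the two levels: task-level parallelism, where independent application tasks run concurrently, and procedure-level parallelism, where a single procedure is split across workers. It uses ordinary std::async and invented workloads; it is not the AGP model, the Visa language, or anything from the St. Petersburg group's toolchain.

    // Generic sketch of task-level vs. procedure-level parallelism in plain C++.
    // Illustrative only; unrelated to the AGP models or the Visa language.
    #include <cstdio>
    #include <future>
    #include <numeric>
    #include <vector>

    // Procedure-level parallelism: one procedure (a reduction) split across
    // workers operating on disjoint halves of the data.
    double parallel_sum(const std::vector<double>& v) {
        auto mid = v.begin() + v.size() / 2;
        auto lo = std::async(std::launch::async,
                             [&] { return std::accumulate(v.begin(), mid, 0.0); });
        double hi = std::accumulate(mid, v.end(), 0.0);
        return lo.get() + hi;
    }

    int main() {
        std::vector<double> samples(1000000, 0.5);

        // Task-level parallelism: independent application tasks (a filter stage
        // and a statistics stage) launched as concurrent units of work.
        auto filter_task = std::async(std::launch::async, [&] {
            double acc = 0.0;
            for (double s : samples) acc = 0.9 * acc + 0.1 * s;  // toy IIR filter
            return acc;
        });
        auto stats_task = std::async(std::launch::async,
                                     [&] { return parallel_sum(samples); });

        std::printf("filter=%.3f  sum=%.1f\n", filter_task.get(), stats_task.get());
        return 0;
    }

The point of the example is only the structural difference: the outer async calls partition the application into tasks, while parallel_sum partitions the work inside a single procedure.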