Network performance requirements push up microcontroller ante
By William Peise, Chief Technology Officer, NetSilicon Inc., Waltham, Mass., EE Times
September 5, 2002 (11:53 a.m. EST)
URL: http://www.eetimes.com/story/OEG20020828S0038
For OEMs of intelligent electronic devices, the demand for network connectivity and increased functionality creates new challenges for design engineers, while introducing many new elements and choices to the design process. Design teams must select hardware platforms that can handle several connectivity media, I/O, and internal controllers, while still having sufficient processor overhead and bandwidth to run an application.
Engineers working on an embedded application must not only take cost, performance and issues of real-time deterministic operation into account when choosing a microprocessor, but also build a network-connected device with sufficient flexibility and scalability.
What, exactly, are the performance requirements, the necessary benchmarks and feature sets, of a successful network-attached processor or microcontroller? Among the important features are the ability to perform amid heavy network traffic; direct memory access (DMA) channels to move data on and off the chip efficiently; and support for 100Base-T Ethernet networks.
Some design projects (light switches or sensors, for example) demand a very low bill-of-materials (BOM) cost and as little development time as possible, and may have very low-level connectivity needs requiring nothing more than a basic TCP/IP stack and a stripped-down RTOS ported to the chip. In this scenario, an 8- or 16-bit microcontroller hardware platform is sufficient for a design team's needs. Solid designs based on 8- and 16-bit microcontrollers have been deployed for years and perform many jobs well.
In today's market, as more applications require some form of connectivity, some 8- and 16-bit microcontroller providers are attempting to move their products into the networked device arena, to capture greater market share. While the allure of lower upfront BOM costs appears compelling, there are key factors an engineer must consider when evaluating a low-cost design.
Hardware redesigns are costly, and differentiation is found in intellectual property and future-proof design. It is critical to have a flexible platform that provides enough headroom for software enhancements or even simple modifications to the base hardware offering. The foundation of this approach is 32-bit processors.
Beginning a network-enabled device project takes careful planning. Design engineers must address concerns often overlooked during the product development cycle and the component selection process.
Most 8- and 16-bit microcontrollers support only 10Base-T Ethernet. Specific design requirements may not call for 100Base-T support, but because 100Base-T Ethernet is the most prevalent installed network, using a 10Base-T device on a 100Base-T network requires a 10/100 Ethernet switch port. In many cases the switch port will be more expensive than the device itself.
Another factor to consider is whether an 8- or 16-bit microcontroller has the performance required to be attached to and operate on a busy network. Many 8- and 16-bit chips do not support contemporary memory controllers allowing the addition of SDRAM, forcing an engineer to add glue logic to support the chosen memory chip. SDRAM is the highest-performance, lowest-cost RAM, and it is readily available in the marketplace. Without an integrated controller, the engineer has to be concerned with refreshing the RAM and generating the row and column address selection signals, with some timing issues involved.
The bandwidth example illustrates the careful consideration needed when determining what networking architecture to use. The days of discrete logic designs utilizing processor cores with external hardware ranging from memory and DMA to standard UARTs or Ethernet MACs have passed for reasons such as board space, cost, and development time.
SoC updates
Most new designs, even control-oriented ones, are based on SoC technology. These specialized ASICs provide the baseline for a complete product. The critical choice is determining what SoC technology is right for the immediate task, as well as for the software enhancements that will occur in later versions of the product design.
Next, many 8- and 16-bit microcontrollers have severe address space limitations. Even if memory can be added, it may have to be reached through some form of paging design, or in some cases even swap-outs, to get at the active instruction area. This becomes a costly proposition.
Most engineers also end up forced into using some form of assembly language, or excruciatingly tight C coding, to try to fit into the memory space. This kind of exercise increases a product's time-to-market, adding to product development costs and to "functionality creep": the phenomenon that takes place when the functionality necessary to the product design fits so tightly on the chip that support and updates are difficult, if not impossible, to execute.
As device connectivity evolves and companies develop more electronic products with Ethernet/Internet networking, additional features and functions are demanded. With 8- and 16-bit microprocessors, every time product marketing asks for additional features, a significant functional rewrite is required. A company can end up with numerous versions of a product because, over time, for each feature added another must be deleted, since the microcontroller does not have the necessary address space. Some ASICs may require that the processor core physically move all traffic from the MAC to memory. On a 10Base-T network, there may be sufficient capacity in the processor to handle the Ethernet connection and the primary application on an 8- or 16-bit platform.
With 100Base-T network speeds, however, this is not the case. If a dedicated subsystem, such as a DMA controller, is not responsible for the repetitive data movement from the Ethernet front end to the memory system, and vice versa, application performance suffers. Higher-performance processors, such as 32-bit processors with DMA channels, can transfer data to and from devices more quickly than those in which the data path goes through the main processor.
When a DMA controller handles the interface communication and the processor core focuses on the application, there still needs to be some method of getting the respective data and operands to and from a common memory architecture. This is accomplished with a system bus. In the case of an internal memory and bus controller, the system bus can be viewed as two different, but functionally similar, buses. In the case of memory-mapped peripherals, when application code accesses a device, the memory controller decodes which external device the access is actually addressed to.
The 32-bit microcontroller is ideal for this scenario: if the processor core needs the bus at the same time the DMA controller is moving data from the Ethernet receive channel to memory, the result is bandwidth resource contention. This contention directly impacts product performance, and 8- and 16-bit processors cannot handle these demands.
Another potential performance issue relates directly to bus width. For 10Base-T embedded applications, 8- or 16-bit devices may be perfectly suitable for the network speed combined with the application at hand. Most of these devices are geared toward providing network connectivity to low-level embedded devices, where the intensity of the applications running on the device, as well as the real-time requirements, may not be as rigid.
Limited bus bandwidth, horsepower, and lack of an integrated development environment (IDE) cause system architects to take a closer look at the more powerful 32-bit SoC architectures. They have enough bandwidth to support high-speed interfaces, such as USB, Ethernet, and high-speed serial for Bluetooth, and they are generally built around an industry-standard processing core, such as ARM or PowerPC.
Today, an engineer will find many 32-bit SoC designs with the resources needed to meet the requirements of contemporary networks. These will be a little more costly, especially when an engineer adds the memory needed for them to work, but they enable the design to be carried forward into the future.