How to find the "right" embedded computing platform
Historically, designers and manufacturers resorted to proprietary architectures to build their compute platforms. As microprocessors became prevalent and their economy of scale made them a compelling choice, standard microprocessor architectures penetrated embedded systems. This is perhaps the best explanation for the prevalence of PCI within the communications infrastructure. However, PCI did not fit all needs. PICMG, a consortium standards organization, successfully standardized the PICMG 2.x CompactPCI standard for industrial (communications infrastructure, industrial automation, military, medical) applications. Over time, several improvements were implemented in PICMG 2.x for functions such as hot-swap, platform management, switch fabrics, and CPU redundancy. Many of these improvements were originally implemented as proprietary features and became de facto standards before they were adopted officially. The progression continues as PICMG puts the finishing touches on AdvancedTCA (a.k.a. PICMG 3.x).

On a parallel path, desktop technology through PCI- or PCI-X-based motherboards found its way into infrastructure architecture as "Rack Mounted Servers." This adoption was driven by cost-conscious organizations trying to maximize the economy of scale enjoyed by high-volume desktop technology. Though Rack Mount Servers (RMS) are not as fault resilient or scalable as peer architectures such as CompactPCI or AdvancedTCA, the cost effectiveness of the RMS is compelling for enterprise use and some limited infrastructure applications. In effect, PCI has been a prime foundation for compute platform architecture for a few decades now, both in RMS and PICMG 2.x approaches. Moving forward, newer switch fabric architectures, including Ethernet, InfiniBand, StarFabric and PCI Express, are vying for the coveted de facto standardization spot.

Applications can be categorized into groups based on their functionality: compute intensive (server cluster), port I/O intensive, and storage intensive, among others. For example, a Softswitch platform requires a compute intensive solution that scales easily in terms of compute power. On the other hand, a medical imaging platform requires specialized compute power for image processing, but may not have the same scalability requirement as the Softswitch. For each application category, the characteristics of a specific compute platform make it the best choice. It is quite likely that the architect will have to choose between conflicting requirements based on some kind of prioritization.

Within each application category there are several factors that dictate the compute platform architecture. It is impossible to find one architecture for all applications. At best, one can find an architectural approach that meets (or comes close to meeting) the requirements for a group of applications. The attributes and requirements for the product typically correlate to a target market segment; for example, an enterprise edge product has lower density than a core network element product. Hence its processing capacity and interconnect fabric speed requirements will be far lower than those of a core network element. Also, the cost pressures on the enterprise edge product are much greater than on the core network element. Industry standards and acceptance have evolved accordingly.
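One common way to handle that prioritization is a simple weighted scoring matrix. The sketch below, in C, is purely illustrative: the criteria, weights, and 1-5 fitness scores are hypothetical placeholders that an architect would replace with the actual requirements and measured data for the target market segment.

    /* Illustrative sketch only: weighted scoring of candidate platforms
     * against prioritized requirements. Weights and scores are hypothetical. */
    #include <stdio.h>

    #define NUM_CRITERIA  4
    #define NUM_PLATFORMS 3

    int main(void)
    {
        /* Relative priority of each requirement (higher = more important). */
        const int weights[NUM_CRITERIA] = { 3, 2, 4, 1 };
        /* Criteria order: compute density, I/O capacity, scalability, cost. */

        const char *platforms[NUM_PLATFORMS] = { "Rack Mount Server",
                                                 "CompactPCI (PICMG 2.16)",
                                                 "AdvancedTCA" };
        /* Hypothetical 1-5 fitness scores per platform, per criterion. */
        const int scores[NUM_PLATFORMS][NUM_CRITERIA] = {
            { 4, 2, 2, 5 },  /* RMS: strong compute, limited scalability, low cost */
            { 3, 4, 4, 3 },  /* cPCI: balanced, good rear I/O and modularity       */
            { 5, 5, 5, 2 },  /* ATCA: highest headroom, higher platform cost       */
        };

        for (int p = 0; p < NUM_PLATFORMS; p++) {
            int total = 0;
            for (int c = 0; c < NUM_CRITERIA; c++)
                total += weights[c] * scores[p][c];
            printf("%-26s weighted score: %d\n", platforms[p], total);
        }
        return 0;
    }

However the weighting is done, the value of the exercise is that conflicting requirements are traded off explicitly rather than by default.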
While PCI architectures have been common in enterprise-level solutions such as voicemail servers, higher-performance CompactPCI has gained popularity in carrier access infrastructures. AdvancedTCA is providing even higher performance for core network elements. Where the standards have not yet evolved, proprietary architecture has ruled. Most of the wireless infrastructure has been proprietary, since its requirements for processing density and scalability have not yet been met by any standard.

Functional demands

Each time a new platform architecture is chosen, the choice of technology for each functional block is determined by the product requirements and other factors such as ruggedization. For example, when choosing form factor, all the functional blocks for a hand-held ultrasound imaging device may reside on a single board, while each of the functional blocks may span multiple boards for a more sophisticated ultrasound imaging system.
The same integration trade-off applies to a "pico-cell" base station versus a 2.5G BSC. This selection process is repeated for the other functional blocks. The choice of interconnect fabric will vary depending on the amount of data movement between the functional blocks: some applications may warrant a Gigabit Ethernet fabric, while for others it will be overkill. As for processing power, a Xeon processor may be needed for a Radio Network Controller, while a Pentium III, XScale, or PowerPC may be sufficient for a pico-cell controller. Depending on the requirements and constraints for the platform, the architect can choose an architecture that enhances its capabilities.

PCI is very conducive to enterprise applications and provides adequate performance for mid-range solutions such as an Internet appliance. PCI-X is suitable where higher-performance PCI is required. Rack Mount Servers are best in both enterprise and carrier applications where the processing scalability needs are limited to fewer than 10 nodes. RMS solutions are also ideal for compute intensive applications like those found in database servers. Switch fabric CompactPCI (PICMG 2.16) has emerged as a favorite choice for carrier-grade and other ruggedized solutions. ATCA promises to be a popular high-performance solution for carrier-grade use, including server clusters. Both CompactPCI and ATCA are optimal for I/O intensive carrier applications where rear-I/O support and interface modularity are required.

Given the technology churn that exists today, choosing and maintaining a competitive platform is an ongoing challenge. As the saying goes, "An ounce of prevention is worth a pound of cure": dealing with platform migration is best done by designing it in from the beginning. Designing-in migration options requires modular elements in the architecture that are conducive to next-generation technology upgrades. Perhaps the strongest argument for an open standard solution comes from the fact that migration is easier when open, standardized interfaces are used. Many of the architecture choices, such as CompactPCI and ATCA, are very conducive to modular designs. Standards organizations and industry forums have taken great pains to study this issue and come up with "reference" architectures that are modular, open, and standard, and that help deal with platform migration.

This modularity concept is not new; it has been used by designers for quite some time. Memory modules (DIMMs) are a good example of modularity in computers, and PCI Mezzanine Cards (PMCs) brought modularity to the PCI architecture. PMCs brought a great degree of modularity and flexibility to I/O interfaces such as E1/T1 and DS3/E3. By taking advantage of these standards, designers are able to migrate their design - hardware and software - to the next generation without reengineering the interface. Only the base card requires a redesign, thus saving a great deal of design and validation time. Such modularity also saves money by minimizing costly field spares and the training necessary for service and field support personnel. The PMC modularity concept has now been extended to processor modules (PrPMC), enabling changes to processor speed, memory, and to some extent interfaces, in a single step. The concept can be carried even further to the full "carrier" card approach, where a complete module is embedded into a different form factor and architecture. For example, a PCI card can be self-contained on a CompactPCI or ATCA carrier card, enabling easy migration between architectures.
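To make the migration argument concrete, the sketch below shows, in software terms, why a standardized module interface leaves the rest of the design untouched when a module is swapped. It is only an illustrative analogy: the structures and function names are hypothetical and are not drawn from any PICMG specification.

    /* Illustrative sketch only: a fixed module contract (in the spirit of
     * PMC/PrPMC mezzanines) lets the base-card software stay unchanged
     * when the plug-in module is upgraded. All names are hypothetical. */
    #include <stdio.h>

    /* Contract every interface module must satisfy, whether it carries
     * E1/T1, DS3/E3, or a next-generation link. */
    struct line_module_ops {
        const char *name;
        int (*init)(void);
        int (*ports)(void);
    };

    /* Today's module ... */
    static int t1_init(void)  { printf("bringing up E1/T1 framer\n"); return 0; }
    static int t1_ports(void) { return 8; }
    static const struct line_module_ops t1_module = { "E1/T1 PMC", t1_init, t1_ports };

    /* ... and tomorrow's replacement plugs into the same contract. */
    static int ds3_init(void)  { printf("bringing up DS3/E3 framer\n"); return 0; }
    static int ds3_ports(void) { return 3; }
    static const struct line_module_ops ds3_module = { "DS3/E3 PMC", ds3_init, ds3_ports };

    /* Base-card code is written once against the interface, not the module. */
    static void bring_up(const struct line_module_ops *mod)
    {
        mod->init();
        printf("%s ready, %d ports\n", mod->name, mod->ports());
    }

    int main(void)
    {
        bring_up(&t1_module);   /* current generation */
        bring_up(&ds3_module);  /* migration: swap module, base code untouched */
        return 0;
    }

The hardware parallel is the same: as long as the mezzanine connector and its protocol are standardized, only the module itself changes from one generation to the next.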
There are potential downsides to this modular approach. It can add cost to the architecture by way of additional connectors, and it can hurt performance, since high-speed signals tend to degrade through each connector.