Enter the Inner Sanctum of RapidIO: Part 2

As designers begin evaluating new interconnect options, the RapidIO specification has once again emerged as one of the front-runner technologies for next-generation communication architectures. But to successfully implement the technology, designers must first understand the key technical elements that make this specification come to life.

This is the second installment in our detailed inside look at the RapidIO specification. In Part 1, we examined the three main layers that make RapidIO work. We also looked at the serial and parallel interfaces. Now, in Part 2, we'll further the discussion by detailing the bandwidth requirements and flow control mechanisms. We'll also look at the key requirements for building RapidIO switches and endpoints. Let's kick off the discussion by looking at the bandwidth requirements for both the serial and parallel interfaces.

Bandwidth

The RapidIO serial and parallel interfaces offer a range of bandwidth options. The 8-/16-bit parallel interface peak bandwidth ranges from 4 to 32 Gbit/s in each direction, depending on width and applied clock rate. The 1X/4X serial interface offers a peak bandwidth of 1 to 10 Gbit/s in each direction, depending on link speed and lane width.

An early goal for the protocol was to minimize overhead. The parallel interface efficiency ranges from 48 to 87% for data payload sizes between 32 and 256 bytes. Over a similar payload size range, the serial interface efficiency ranges from 53 to 88%, not counting the 8B/10B encoding. These numbers include acknowledgement overhead. Given that PCI-64 reaches an efficiency of only 49 to 69% over a similar transfer size, there is evidence to suggest the efficiency goal was successfully met, an impressive feat for a system-level packet-oriented protocol.

Ordering, Flows, and Deadlock Avoidance

For ordering purposes, the concept of flows is defined at the logical layer of the specification. A flow is a sequence of ordered non-maintenance requests between a specific endpoint pair. Request transactions from the same source but targeting different destinations exist in unrelated flows and have no ordering requirements between them. Response transactions are not part of any flow, and there is no ordering between them.

Multiple prioritized flows may exist between a source and destination pair. Request packets in higher priority flows may pass those in lower priority flows. Packets in a lower priority flow must never pass those of a higher priority. Prioritized flows are defined to allow different classes of service between endpoint pairs. The degree of differentiation in service between prioritized flows depends upon the implementation of the endpoints and switches along the flow.

Within a flow, strict ordering of request transactions is required. This means writes may not pass writes, and reads push writes ahead. Because responses are not part of any flow, read and write responses may be serviced by an endpoint out of order. In practice, this means read requests may be performed out of order (though a read request must still push writes ahead).

Ordering and flows are defined at the logical layer but implemented at the physical layer. Both physical layers define three flows through the use of a 2-bit packet priority field. Each packet is assigned one of four priorities. Request packets are assigned a priority based on flow level: requests in the lowest priority flow are assigned the lowest priority, the next higher priority flow is assigned the next priority, and so on.
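To make this mapping concrete, the following minimal C sketch models the flow-to-priority assignment and the passing rule just described. The names, encodings, and function signatures are illustrative assumptions, not constructs taken from the RapidIO specification.

/* A minimal sketch of flow-to-priority mapping and passing rules.
 * Names and encodings here are illustrative assumptions only. */
#include <stdbool.h>
#include <stdio.h>

/* Three request flows per endpoint pair, lowest to highest priority. */
enum flow_level { FLOW_LOW = 0, FLOW_MED = 1, FLOW_HIGH = 2 };

/* Requests take the 2-bit priority of their flow; the fourth (highest)
 * level is left for packets such as promoted responses and CCPs. */
static unsigned request_priority(enum flow_level flow)
{
    return (unsigned)flow;
}

/* Between a given endpoint pair, a packet may pass another only if it
 * carries strictly higher priority; within a flow (equal priority),
 * strict ordering holds and no passing is allowed. */
static bool may_pass(unsigned prio_a, unsigned prio_b)
{
    return prio_a > prio_b;
}

int main(void)
{
    printf("high-flow request passes low-flow request: %s\n",
           may_pass(request_priority(FLOW_HIGH),
                    request_priority(FLOW_LOW)) ? "yes" : "no");
    printf("low-flow request passes low-flow request: %s\n",
           may_pass(request_priority(FLOW_LOW),
                    request_priority(FLOW_LOW)) ? "yes" : "no");
    return 0;
}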
Maintenance transactions have a priority field but exist outside of other request flows. When routed through the network, maintenance packets sharing the same path may never pass maintenance packets of equal or higher priority. This effectively imposes strict ordering on maintenance packets between source/destination pairs.

Deadlocks occur when a dependence loop exists in the system. Deadlock exists when forward progress at any point in the loop requires progress to be made ahead of it, and no place in the loop can make forward progress. While network topologies in which loops exist for response-less transactions are forbidden, some transactions do require responses and thus have the potential for creating dependency loops. Provision for deadlock avoidance for requests with responses must therefore be made. The approach can be summarized as creating the circumstance in which responses can always make forward progress in the system, regardless of the presence of other transactions. This is accomplished at the physical layer by assigning responses a priority one higher than the priority of the associated request, and optionally allowing endpoints to promote the priority of their response even higher until the packet can make forward progress. For this approach to work, all devices in the system must implement buffer management schemes that prevent higher priority packets from ever becoming blocked by lower priority packets.

Controlling Flows

End-to-end flow control is supported at the logical layer and is used to control congestion when it occurs in the network. As traffic sources increase the amount of data they inject into the network, the capacity of some links can be exceeded, causing buffers behind those links to fill. This not only causes congestion along the primary pathways, but head-of-line blocking can also cause congestion in unrelated paths that share common links.

Flow control is accomplished using a congestion control packet (CCP) that is generated by a switch or endpoint experiencing congestion. This packet functions as an XON/XOFF: it is sent backward to turn off the source of packets and, later, to re-enable that source as congestion abates. CCP packets exist within their own flow and are independent of request and maintenance flows. CCP packets are ordered within their flow and are always sent at the highest physical priority, allowing them to pass requests from all other logical flows.

CCP packets control logical flows, not physical priorities. Typically, CCP packets are generated for congestion caused by non-maintenance requests and not, for example, responses, since responses resolve congestion by releasing resources at their destination. Because logical flows are encoded in physical priority bits, and promotion of priority can complicate matters, a reverse mapping of priority to flow must be done.

Unlike other packets, a CCP packet may be dropped by switches should buffers be filled. If this occurs for an XOFF packet, subsequent congestion backward from the original congestion point will cause additional CCP packets to be generated. If an XON packet is dropped, a timeout mechanism is provided to turn disabled flows back on.
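As a rough illustration of the endpoint-side behavior this implies (and which the endpoint notes below describe), here is a C sketch of per-flow XOFF counting with a lost-XON timeout. The structure, names, and timeout value are assumptions for illustration; the specification defines the XON/XOFF semantics, not this code.

/* Sketch of how an endpoint might track congestion control packets
 * (CCPs) per flow; all names and values are illustrative. */
#include <stdbool.h>

#define MAX_FLOWS 3
#define XON_TIMEOUT_TICKS 1000   /* assumed recovery timeout */

struct flow_state {
    int xoff_count;              /* outstanding XOFFs for this flow */
    int ticks_since_xoff;        /* drives the lost-XON timeout */
};

static struct flow_state flows[MAX_FLOWS];

/* An XOFF CCP disables transmission on the flow; XOFFs are counted so
 * the flow is re-enabled only when a matching number of XONs arrive. */
void ccp_receive(int flow, bool is_xoff)
{
    if (is_xoff) {
        flows[flow].xoff_count++;
        flows[flow].ticks_since_xoff = 0;
    } else if (flows[flow].xoff_count > 0) {
        flows[flow].xoff_count--;
    }
}

/* Because switches may drop CCPs, a timer re-enables a disabled flow
 * if the expected XON never arrives. */
void ccp_tick(int flow)
{
    if (flows[flow].xoff_count > 0 &&
        ++flows[flow].ticks_since_xoff >= XON_TIMEOUT_TICKS)
        flows[flow].xoff_count = 0;  /* assume the XON was lost */
}

bool flow_enabled(int flow)
{
    return flows[flow].xoff_count == 0;
}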
Notes on Endpoints

Each endpoint in a system is assigned a unique device ID at initialization. This ID represents the routing address used as packets make their way through the network to the desired endpoint. Associated with each device ID is a set of capability, command, and status registers. Similar to those defined in PCI, these registers allow system software to identify the capabilities of the device as well as give access to control and status information. In addition, register space is set aside for extended and implementation-specific features. A set of required registers is associated with each device ID in the system. When endpoints have more than one device ID associated with them, they are required to duplicate the required registers. As a result, it is likely most implementations will allocate one device ID per endpoint.

Buffer sizing and management is implementation dependent but must in general follow the deadlock avoidance rules, which require that packets and their associated operations cannot be blocked by lower priority packets and their associated operations. Endpoint designs that support end-to-end flow control at a minimum disable packet transmission for flows turned off by incoming CCP packets and associate a time-out counter in case the corresponding XON is lost in the network. XOFF CCP packets for a given flow are counted, and the flow is turned back on only when the corresponding number of XONs has been received. Endpoints may also issue CCP packets when internal buffers reach critical levels, much as switches would.

Notes on Switch Designs

Switches are not endpoints and thus have no device ID. In general, they do not source or sink packets. The only exceptions to this rule are that switches must source and sink maintenance transactions and may optionally generate congestion control packets.

The transport layer defines a destination-based routing scheme in which each switch examines the destination ID of an incoming packet, finds the ID in a routing table, and routes the packet to the output port indicated. With the exception of the link-specific AckID bits, packets proceed through the switch unmodified. No modification is necessary because the AckID bits are not covered by the packet CRC.

Switch implementations can vary widely in complexity. Both store-and-forward and cut-through operation are supported. Cut-through operation is aided by the early CRC, which allows a switch to have confidence in the integrity of the header (and thus the priority and destination ID) before the rest of the packet has arrived. The amount of buffering, the arbitration policies, and the level of service each flow receives are implementation specific.

Some switches may elect to support end-to-end flow control. When supported, switches monitor the state of internal buffering and, when selected watermarks are reached, issue congestion control packets to turn off the associated flows at their sources. Switches keep track of outstanding XOFF packets on a per-flow basis and turn a flow on again when the buffer falls below the relevant watermarks.

Hot Swap

PCI Compatibility

Wrap Up

RapidIO technology addresses each of these requirements while offering the lowest overhead and widest functionality. RapidIO technology presents for the first time the opportunity for designers to leverage an open industry standard at the system level for both control and data plane applications.

Author's Note: To find out more information on the RapidIO specifications, visit the Trade Association's web site at www.rapidio.org.

Editor's Note: To view Part 1 of this article, click here.