Steps for Delivering Multimedia Over 5 GHz WLANs
In recent years, wireless LAN (WLAN) networks have gained popularity for data networking within the home and small office. However, the much larger market of delivering multimedia content (high-quality video such as HDTV and SDTV, as well as audio) has been left essentially untapped. Many semiconductor and system companies have attempted to serve this market with standards-based solutions, all of which are hampered by their data-centric roots. The channel reliability required to faithfully deliver a feature-length HDTV movie, for example, is orders of magnitude more demanding than that of traditional data applications. A new generation of devices is now available that has been designed from the start to meet these market requirements, leading to some breakthroughs in system design and implementation.

In this paper, we explain the requirements of a multimedia home network and the key elements required to successfully implement a system that delivers this high-quality content with guaranteed quality of service (QoS). The paper goes on to explain the importance of MAC/PHY/RF co-design to optimize performance in multipath environments, cost, and power consumption. We also present real-world testing results showing that when a 5 GHz wireless system is designed correctly, it can cover the entire home with guaranteed throughputs for delivery of multimedia content.

Multimedia vs. Data

A true multimedia networking technology must take the following realities into account. The 802.11a and 802.11g standards provide for physical-layer bit rates of up to 54 Mbit/s. At first glance, this looks like a large improvement over the existing 802.11b standard's 11 Mbit/s. Unfortunately, due to limitations at the MAC layer, achievable throughputs of only 20 Mbit/s have been demonstrated. 802.11a and 802.11g are highly inefficient communication systems, built on the carrier sense multiple access (CSMA) scheme used in all 802.11x networks.
Within the constraints of this standard, large overheads are incurred due to preambles and carrier-sensing periods. In an attempt to reduce this overhead as a percentage of the data payload, longer packets are sent. The problem with longer packets is that they are more prone to errors over the wireless channel, so more retransmissions occur, which in turn reduces overall throughput. Throughput efficiencies of only 40 to 50 percent are achievable in point-to-point CSMA systems.

Synchronous MAC architectures, used in systems such as HiSWAN and HiperLAN, achieve higher throughputs. As an example, one proposed system achieves throughput efficiencies of 80 percent through the use of a synchronous, schedule-based TD/TDMA MAC. In this system, intra-frame carrier sensing is not required, and shorter packet sizes can be used. This removes the packet-error-rate problem mentioned above and yields high throughput with the added benefit of very low latency (on the order of a few milliseconds).

Security and QoS

The higher levels (application layer and session layer) must be preserved; these are the systems that exist "above" the wireless distribution system within the home. The link-layer security portion should offer the following elements in order to prevent data capture and possible attacks:
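The overhead-versus-packet-length trade-off described above can be sketched numerically. All constants below (per-packet overhead, PHY rate, bit error rate) are illustrative assumptions, not values taken from any 802.11 specification:

```python
# Illustrative model: fixed per-packet overhead (preamble + carrier sensing)
# amortizes better over longer packets, but longer packets fail more often.

def throughput_efficiency(payload_bytes, overhead_us, rate_mbps, ber):
    """Normalized throughput: airtime efficiency times packet success rate."""
    payload_us = payload_bytes * 8 / rate_mbps        # payload time on air
    p_success = (1 - ber) ** (payload_bytes * 8)      # no bit error in the packet
    return (payload_us / (payload_us + overhead_us)) * p_success

# Assumed: 60 us overhead, 54 Mbit/s PHY rate, bit error rate 1e-5.
for size in (256, 1500, 4000):
    print(size, round(throughput_efficiency(size, 60.0, 54, 1e-5), 3))
```

With these assumed numbers, efficiency peaks at an intermediate packet length: too short and the fixed overhead dominates, too long and retransmissions do.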
While security is important, it is not the only key feature when developing a multimedia-enabled WLAN network. Quality of service (QoS) is another key parameter that must be provided. Table 1 summarizes the typical QoS parameters applicable to multimedia content distribution within a home entertainment network. As the table shows, the latency and jitter requirements of these systems pose a significant design challenge for any network carrying this content, especially a wireless one.
A wireless network based on IP protocols will not meet all of the requirements shown in Table 1, for the reasons mentioned in the previous sections. Synchronous networks can achieve these requirements provided they are designed with the proper constraints and attention to the key latency, jitter, and packet error rate (PER) performance parameters.

Video Interface: Isochronous vs. Asynchronous

The video decoder must decode at the rate at which data is being sent to it. This is necessary because, among other reasons, the content may be live and must therefore be sent and decoded at the rate at which it is being encoded. This implies that the decoder is a "slave" to the source. Both the MPEG-2 encoder and the decoder operate at a nominal clock of 27 MHz; however, these clocks are only accurate to within 810 Hz, which means the encoder's clock will drift with respect to the decoder's clock. Most hardware MPEG-2 decoders contain very little buffering (i.e., memory), usually just enough for one or two frames of video, so the decoder must maintain fairly tight synchronization with the encoder.

There are two means of slaving the decoder to the encoder. For the purposes of this discussion they will be referred to as (i) embedded timing techniques and (ii) buffer management techniques. Most data-centric systems (802.11x) are forced to use buffer management techniques, in which transport stream (TS) packets arriving at the decoder are placed into a buffer from which the decoder reads. Before decoding starts, the buffer must fill to a watermark; once the watermark is reached, the decoder begins to pull TS packets from the buffer and decode them. A microprocessor or other intelligent device must monitor this buffer for underflow/overflow and adjust the decoder's speed, or alternatively insert or drop packets.
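The buffer-management technique just described can be sketched as follows. The watermark value and packet representation are illustrative assumptions:

```python
# Sketch of the watermark-based buffer management technique: decoding starts
# only after an initial fill, and an empty pull signals underflow to a
# supervising processor.

from collections import deque

class DecoderBuffer:
    def __init__(self, watermark=64):
        self.q = deque()
        self.watermark = watermark     # TS packets buffered before decode starts
        self.decoding = False

    def push(self, ts_packet):
        self.q.append(ts_packet)
        if not self.decoding and len(self.q) >= self.watermark:
            self.decoding = True       # initial fill reached: decoding may begin

    def pull(self):
        if not self.decoding or not self.q:
            return None                # underflow: supervisor must intervene
        return self.q.popleft()

buf = DecoderBuffer(watermark=3)
for i in range(3):
    buf.push(i)
print(buf.pull())                      # first packet, released after the fill
```

The supervisor's job (monitoring fill level, adjusting decoder speed, inserting or dropping packets) is exactly the complexity that the embedded-timing approach described next avoids.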
A system design challenge arises because the required buffer size depends upon the jitter introduced by the channel, the variability of the TS itself, and the jitter introduced by the interfaces between the channel and the decoder (e.g., PCI or other asynchronous data bus latencies). The buffer must be made large enough to absorb all three sources of jitter as well as the very subtle effect of clock drift between the encoder and decoder. Existing attempts to use data-centric 802.11x protocols to deliver video require large buffers (on the order of 32 MB), which can result in delays of between 2 and 8 seconds on products in the market today. This is an unacceptable latency, and it leads to the inability to support interactive feedback or channel changing in a timely manner.

Isochronous data transport systems using embedded timing techniques have been shown to overcome the inherent system issues caused by the use of large buffers. In the isochronous approach, certain packets within a transport stream contain a program clock reference (PCR), which is a sample of the encoder's system clock. When the decoder receives a TS packet containing a PCR, it measures the number of ticks of its own system clock that have transpired since the last PCR packet. This "error term" can be used to feed a tracking circuit that corrects the clock differences; in this way the decoder is slaved to the encoder. For this process to work, the time between consecutive MPEG TS packets (the inter-packet spacing) must be preserved across the channel. If the TS packets incur jitter along the path from encoder to decoder, this jitter must be removed to within 500 ns [1]. Communication systems such as FireWire (IEEE 1394) perform this inter-packet timing preservation. The end result is guaranteed delivery with less than 8 ms of latency, a 250- to 1000-times improvement over the data-centric approaches.
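A minimal sketch of the embedded-timing idea, assuming an idealized first-order software loop in place of the hardware tracking circuit; the loop gain and PCR interval are illustrative:

```python
# The decoder counts its own 27 MHz ticks between PCR arrivals, compares the
# count with the ticks implied by the PCR delta, and nudges its clock
# frequency by the error term.

NOMINAL_HZ = 27_000_000

def track_pcr(pcr_intervals_s, start_offset_hz=810.0, gain=0.2):
    """Return the decoder clock frequency after slaving it to the encoder."""
    freq = NOMINAL_HZ + start_offset_hz               # decoder starts drifted
    for dt in pcr_intervals_s:                        # encoder time between PCRs
        encoder_ticks = dt * NOMINAL_HZ               # ticks the encoder counted
        local_ticks = dt * freq                       # ticks the decoder counted
        error = local_ticks - encoder_ticks           # > 0: decoder runs fast
        freq -= gain * error / dt                     # first-order correction
    return freq

# 50 PCRs spaced 100 ms apart pull the worst-case 810 Hz offset mentioned
# above down to well under 1 Hz.
print(abs(track_pcr([0.1] * 50) - NOMINAL_HZ) < 1.0)
```

Each iteration shrinks the frequency offset by the loop gain, which is why the decoder needs almost no buffering: it never lets the clocks diverge in the first place.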
The systems discussed here are designed to interface directly to MPEG encoders and decoders, without the need for bus translation from the PC world (e.g., PCI) to the A/V world (parallel or serial TS interfaces). This reduces the total bill of materials, power consumption, size, and complexity.

Understanding the Indoor Channel

Contrary to the predictions of the popular power-law propagation models, which are based upon continuous-wave (CW) measurements with substantial spatial averaging, typical indoor frequency-selective fading can exhibit frequency nulls greater than 15 dB in depth, flat fades across the entire modulation bandwidth, and fade durations that can last seconds or even minutes at a time. Unless these channel realities are recognized and dealt with aggressively, it is impossible to deliver high-QoS system performance, even with a fully synchronous MAC layer. Indoor multipath and the frequency-selective fading it produces are so complicated that a probabilistic system design approach is crucial; simple buffering techniques are inadequate to deal with the difficulties presented by the indoor wireless channel.

The true overall system QoS depends upon (i) the reliability with which the next time-slot is available without collision for the link in question and (ii) the PER for that time-slot. One proposed system solves the time-slot availability issue by using a truly synchronous medium access control (MAC) layer, while low time-slot PER is achieved through careful co-design of the MAC, PHY, and radio, as described next.

MAC/PHY Co-Design
As a consequence of these and other factors, the (coded) residual BER deliverable by a radio transceiver system is not necessarily low at the higher signaling rates (e.g., 36, 48, 54 Mbit/s), and it may of course get considerably worse when frequency-selective multipath is introduced into the picture. Everything possible must be done to ease the radio design requirements, because the minimum requirements are already quite challenging and manufacturing device yields will drive the overall product cost if excessive requirements are levied. One of the more significant factors driving the required performance level (i.e., residual BER) of an OFDM radio is the data packet length used in the system. Assuming statistically independent bit errors and a 32-bit CRC field for each packet, the normalized payload throughput versus data packet length and bit error rate including FEC (CBER) is as shown in Figure 2.
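The Figure 2 relation follows directly from the stated assumptions (independent bit errors, a 32-bit CRC per packet); the specific curve values in the figure are not reproduced here:

```python
# Normalized payload throughput for one packet: the payload's share of the
# packet (after the 32-bit CRC) times the probability that every bit survives.

def normalized_throughput(payload_bytes, cber, crc_bytes=4):
    total_bits = (payload_bytes + crc_bytes) * 8
    p_packet_ok = (1 - cber) ** total_bits            # independent bit errors
    return payload_bytes / (payload_bytes + crc_bytes) * p_packet_ok

# Effect of the coded BER on a 1500-byte packet.
for cber in (1e-5, 1e-4, 1e-3):
    print(cber, round(normalized_throughput(1500, cber), 3))
```

At a CBER of 10^-3 a 1500-byte packet almost never arrives intact, while much shorter packets retain useful throughput, which is why the text argues for packet lengths well below 1500 bytes at these error rates.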
In 64-QAM OFDM operation, a CBER of 10^-4 to 10^-3 is desirable based upon radio-related design issues, but these CBER levels clearly require packet lengths substantially less than 1500 bytes in order to avoid a large throughput-efficiency penalty. As shown in Figure 3 for 64-QAM rate-3/4 operation in an otherwise perfect 802.11a OFDM system, phase-noise performance is crucial in achieving CBER levels below roughly 10^-3.
As Figure 3 shows, overall system phase noise (transmitter plus receiver) must clearly be less than 2 degrees rms if the needed CBER performance level is to be approached, given the allowable system PER. The system penalty for a non-zero PER is an increased throughput requirement of approximately 1/(1 - PER) to cover the packet retransmissions involved, but in the QoS sense the penalty is much more severe. If the PER is 15 percent, for example, and only three transmission attempts are allowed in order to bound the QoS-related time-jitter, the probability that the packet is never delivered without error is still 0.34 percent, which is generally not acceptable for video and audio. In a synchronous MAC system, more transmission attempts can generally be used because the time-jitter can be acceptably bounded, whereas in asynchronous systems like 802.11x, the question of next time-slot availability (without collisions) makes the argument for allowing more attempts much more dubious for high-QoS streams.

Many other factors favor synchronous MAC operation over asynchronous operation when it comes to radio design and digital signal processing. Since all network users are dynamically assigned time-slots and power levels, every receiver in the wireless network knows the power level, modulation type, and more a priori. This substantially reduces linearity and estimation-theory issues in the receiver compared to an asynchronous, collision-based system, where no one knows a priori which terminal will grab the next time-slot. A synchronous MAC structure that allows periodic assessment of the RF channel (i.e., channel estimation) is also invaluable for dealing with frequency-selective multipath, a very major issue in indoor environments.

RF/PHY Issues

OFDM alone is insufficient to deal with the indoor wireless channel when it comes to high-QoS applications like audio and video. Some of the reasons for this were mentioned above.
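The retry arithmetic from the PER discussion above can be checked directly, assuming independent transmission attempts:

```python
# With PER = 15% and a three-attempt limit (to bound time-jitter), a packet is
# lost only when all three independent transmission attempts fail.

per, max_attempts = 0.15, 3
p_lost = per ** max_attempts              # 0.15^3 = 0.003375
throughput_inflation = 1 / (1 - per)      # mean extra airtime from retries

print(f"{p_lost:.2%}")                    # 0.34%, as stated in the text
print(round(throughput_inflation, 3))     # 1.176
```

A 0.34 percent residual loss rate sounds small, but at typical TS packet rates it means visible video artifacts many times per second, which is why a synchronous MAC's ability to afford more bounded-jitter attempts matters.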
Realistically, it is impractical for a low-cost transmitter-receiver pair to deliver a recovered SNR in excess of approximately 30 dB, due to phase noise and linearity issues. Therefore, if the system requires a minimum SNR of perhaps 24 dB at sensitivity for 54-Mbit/s operation, it is impossible to operate the link with video-adequate performance if there are any frequency-selective fading nulls deeper than about 6 dB.

A large amount of recent research has focused on space-time coding (STC), a technique that seeks to increase the throughput capacity of a system from a bits/s/Hz perspective. Although this perspective is valuable in many venues, in the A/V consumer realm, where large data buffers translate into cost and time latency, it is difficult at best to use the additional throughput offered by STC systems if the throughput rate is unreliable over time, as is generally the case for an indoor channel. STC systems exploit the increased channel dimensionality that multiple-input, multiple-output (MIMO) multipath channels exhibit in order to increase throughput capacity. However, a measure of this same MIMO channel dimensionality can instead be exploited to deliver dramatically more reliable link communications using OFDM signaling that is almost identical to IEEE 802.11a. This probabilistic system design methodology, which focuses on link reliability rather than on just average or peak throughput, is extremely well suited to the A/V delivery needs of most applications.

Computer simulations based upon information-theory metrics rather than specific design implementations have confirmed that a minimum of three receive antennas is required in order to deliver acceptable link throughput reliability for the 36-Mbit/s (16-QAM, R = 3/4) mode. This study assumed a random three-ray multipath model and a linear array of receive antennas; several thousand random directions-of-arrival were assumed for the multipath rays for each data point computed.
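The link-budget arithmetic at the top of this section, under its stated assumptions (roughly 30 dB achievable SNR, roughly 24 dB required for 54 Mbit/s), leaves very little room for fading:

```python
# Fade margin: the depth of frequency-selective null the link can absorb
# before recovered SNR drops below what 54 Mbit/s operation requires.
# Both dB figures are the assumptions quoted in the text, not measurements.

achievable_snr_db = 30.0    # practical ceiling for a low-cost Tx/Rx pair
required_snr_db = 24.0      # assumed minimum SNR at sensitivity for 54 Mbit/s
fade_margin_db = achievable_snr_db - required_snr_db

print(fade_margin_db)       # any null deeper than this breaks the link
```

Since the text notes indoor nulls routinely exceed 15 dB, a 6 dB margin makes clear why single-antenna OFDM alone cannot guarantee video-grade links indoors.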
The limited k = 7 Viterbi FEC used in IEEE 802.11a, along with other practical implementation factors, forces the number of receive antennas to four. When other real-world factors like outage probability and finite SNR are included, a larger number of antennas is still advantageous. The computer analysis discussed above was performed on cutoff rate per subcarrier, and the results for the two-antenna and five-antenna cases are shown in Figures 4 and 5, respectively.
As shown in Figures 4 and 5, the single-antenna system delivers a wide span of throughput levels in 16-QAM 3/4 mode (Ro = 2.3 to 3.5 bits/subcarrier/channel-use) because it has no means of dealing with the frequency-selective fading introduced by the three-ray multipath model. The use of two antennas improves the worst-case Ro to approximately 3 bits/subcarrier, whereas the results dictated that five antennas be used in order to deliver the outstanding Ro performance shown in Figure 5 [3].

Real-World Results

The spatial wavefront processing advocated here avoids the multipath-related losses that are commonplace in indoor environments. Signal absorption losses are very environment-dependent, and Magis has therefore tested these concepts in a variety of homes, offices, and business locations. Two home-testing trials are reported separately in References 4 to 6. As reported in Reference 4, full US-HDTV video was delivered throughout a two-story, 2,400-square-foot home as well as a second two-story, 3,500-square-foot home. For data-networking applications, the throughput was consistently 40 Mbit/s or more in all but the most challenging areas of the homes.

When running tests and developing WLAN equipment, spectrum etiquette is an important subject in the context of supporting high-QoS A/V links. While contention-based competition for time-slots per the existing 802.11x standards has been adequate for data-centric applications, it takes only one non-QoS user to deny everyone on a given channel any chance of high-QoS performance. Unfortunately, 802.11x systems generally transmit at full power regardless of the distance between network nodes, thereby preventing re-use of the RF channel over very large distances except possibly by other non-QoS users. Everyone trying to deliver A/V content in the home should care about QoS, whether they are networking with 802.11x systems or with other, more QoS-centric systems.
The FCC is moving rapidly to expand the usable 5 GHz spectrum in the US by 255 MHz. Usage of this spectrum carries with it certain caveats regarding other primary users, such as weather radar, forcing yet another flavor of spectrum etiquette [7]. More stringent transmit spectrum mask requirements for all 5 GHz devices need to be advocated, as well as other recommendations that will enhance everyone's opportunities in the 5 GHz U-NII bands.

Wrap Up

References
About the Authors

David Critchlow is the vice president of engineering at Sequoia Communications, Inc. Prior to holding this position, he served as the vice president of engineering at Magis Networks. David can be reached at dcritchlow@sequoia-communications.com.