Tutorial on PLLs: Part 2

In Part 1 of this article, we looked at a number of phase-locked loop (PLL) concepts ranging from continuous and sampled control systems to estimation-theory-based perspectives. Now, in Part 2, we continue our examination of the PLL concept in the estimation theory sense by looking at maximum a posteriori (MAP)-based PLLs and the fundamental performance limits described by the Cramer-Rao bound. Our theoretical treatment will culminate with a Kalman filtering perspective on the PLL. The balance of the article applies the PLL concept to several real-world applications.

MAP-Based Estimators

Equation 1 below can be re-written in logarithmic form as shown in Equation 2 below. This log probability may be maximized by setting its derivative with respect to θ to zero, thereby creating the necessary condition shown in Equation 3 below. If the density p(θ) is not known, we are forced to ignore the second term in Equation 3, which leads naturally to the maximum-likelihood form shown in Equation 4 below. Although the MAP and ML estimators are not the same in the strict sense, the MAP estimator takes on the form of the maximum-likelihood (ML) estimator in the absence of sufficient prior knowledge of θ. The similarities between the minimum mean-square error (MMSE), ML, and MAP estimators should not go unnoticed. In the Gaussian noise case, where the observed signal is given by: in which v(t) is the noise and θ is the nonrandom parameter of interest, it can be shown that the ML estimate for θ is given by: In the jointly Gaussian case where θ is assumed to be a random parameter having a variance of σθ2, use of Equation 3 leads to the result that: These two results illustrate how similar the ML and MAP estimates can appear.
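Because the ML phase estimate for a tone in Gaussian noise amounts to driving the quadrature component of the error to zero, it is easy to sketch numerically. The following minimal Python sketch (all signal parameters here are hypothetical choices, not values from the article) correlates a noisy tone against in-phase and quadrature references and recovers the phase with a four-quadrant arctangent:

```python
import math, random

def ml_phase_estimate(r, omega):
    """ML phase estimate for a tone in white Gaussian noise: correlate against
    in-phase and quadrature references; the resulting angle is the one that
    drives the quadrature component of the error to zero."""
    i_corr = sum(x * math.cos(omega * n) for n, x in enumerate(r))
    q_corr = sum(x * math.sin(omega * n) for n, x in enumerate(r))
    return math.atan2(-q_corr, i_corr)

# Hypothetical noisy tone r[n] = A cos(omega*n + theta) + v[n]
random.seed(1)
omega, theta, amp = 2.0 * math.pi * 0.05, 0.7, 1.0
r = [amp * math.cos(omega * n + theta) + random.gauss(0.0, 0.1) for n in range(1000)]
print(ml_phase_estimate(r, omega))    # close to the true phase of 0.7 rad
```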
The Fundamental Theorem of Estimation Theory22 states that the estimator that minimizes the mean-square error is given by: It can also be shown that the best linear unbiased estimator (BLUE) takes on the form of the weighted least-squares estimator, provided that the proper sample weighting is applied. In Part 1 of this article, we found that the ML estimator for the signal phase utilized a gradient error metric that sought to drive any quadrature (or orthogonal) signal components to zero. This is not unlike the Orthogonality Principle in estimation theory, which stipulates that any residual estimation error should be orthogonal (i.e., uncorrelated) with the observations as: More information on the fundamentals of estimation theory can be found in References 20, 21, and 22.

Performance Limits From the Cramer-Rao Bound

In the single-parameter case, the Cramer-Rao bound (CRB) is usually presented in two equivalent forms as:20,21,22 When multiple parameters are being estimated (e.g., amplitude, phase, frequency), the CRB takes the form of the Fisher information matrix. The interested reader should consult References 19, 20, 21, and 22 for additional information on this topic. It is of interest to compare the MAP and PLL phase estimators in terms of some performance measures. In order to do this, we first obtain the variance of each estimator. As developed in Part 1, the steady-state first-order (and second-order) PLL phase estimator probability density is taken to be the Tikhonov probability density function given by: where I0() is the zeroth-order modified Bessel function of the first kind and α is the SNR within the PLL. The variance for the PLL estimator is given by:13 These results, along with the linear result σθ2 = 1/α, are plotted for comparison purposes in Figure 1.

Figure 1: Tracking variance for MAP and PLL phase estimators versus SNR.
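The Tikhonov density lends itself to direct numerical treatment. The sketch below (a simple midpoint-rule integration; the grid size is an arbitrary choice) computes the PLL phase-error variance from the Tikhonov density and compares it with the linear result 1/α, which it approaches at high loop SNR:

```python
import math

def tikhonov_variance(alpha, n=20000):
    """Phase-error variance under the Tikhonov density
    p(theta) = exp(alpha*cos(theta)) / (2*pi*I0(alpha)) on [-pi, pi],
    via midpoint-rule integration; the normalisation integral equals
    2*pi*I0(alpha), so no Bessel routine is needed."""
    dth = 2.0 * math.pi / n
    thetas = [-math.pi + (k + 0.5) * dth for k in range(n)]
    w = [math.exp(alpha * math.cos(t)) for t in thetas]   # unnormalised density
    norm = sum(w) * dth
    return sum(t * t * wt for t, wt in zip(thetas, w)) * dth / norm

# At high loop SNR the Tikhonov variance approaches the linear result 1/alpha.
for alpha in (2.0, 5.0, 10.0):
    print(alpha, tikhonov_variance(alpha), 1.0 / alpha)
```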
Kalman Filtering

The Kalman filter addresses the general problem of estimating the state x of a discrete-time controlled process that is governed by a linear stochastic difference equation of the form: with measurements given by: The A, B, and H matrices can be time-variable but are shown here as constant. The random vectors wk and vk represent the process and measurement noise respectively, and they are assumed to be statistically independent with covariance matrices Q and R respectively. Vector uk is the input to the system. The minimum mean-square filtered estimate of the system state at time k+1 can be written in predictor-corrector form as shown in Figure 2.

Figure 2: Organization of Kalman filter as a predictor-corrector sequence.10

The similarities of the Kalman filter with other time-stationary results are striking. In the case of the best linear unbiased estimate (BLUE), its recursive structure may be written in an almost identical form, except that there are no time-prediction steps since we have access to no system state information for the BLUE. Its formulation is shown in Figure 3.

Figure 3: Recursive equation formulation for BLUE.

The recursive Kalman equations lend themselves very easily to implementation within a second-order digital phase-locked loop (DPLL), as done in References 11 and 12. Although the Kalman filter requires current information about the noise covariances Q and R during its execution, it is particularly adept at improving tracking performance in situations that are not time-stationary. As the structure content of the signal being tracked increases, the Kalman filter can deliver substantial performance gains over other methods that do not exploit state estimation.

PLL Applications

1. Classic Type-2 Charge-Pump Implementation

Figure 4: Circuit diagram for type-2 fourth-order PLL using charge-pump phase detector.
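Before analyzing the circuit of Figure 4, the predictor-corrector recursion of Figure 2 can be illustrated in miniature. The sketch below (hypothetical scalar noise levels q and r, a two-element phase/frequency state, and the small matrix algebra written out by hand) tracks the phase of a signal with a constant frequency offset, much as a second-order DPLL would:

```python
import random

def kalman_pll(meas, T=1.0, q=1e-4, r=0.01):
    """Predictor-corrector Kalman filter for state x = [phase, freq] with model
    phase[k+1] = phase[k] + T*freq[k] and measurement z = phase + v.
    Q = q*I and scalar R = r are simplifying assumptions."""
    x = [0.0, 0.0]
    P = [[1.0, 0.0], [0.0, 1.0]]
    history = []
    for z in meas:
        # Time update (predict): x = A x,  P = A P A' + Q,  A = [[1, T], [0, 1]]
        x = [x[0] + T * x[1], x[1]]
        P = [[P[0][0] + T * (P[0][1] + P[1][0]) + T * T * P[1][1] + q,
              P[0][1] + T * P[1][1]],
             [P[1][0] + T * P[1][1],
              P[1][1] + q]]
        # Measurement update (correct): H = [1, 0]
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        innovation = z - x[0]
        x = [x[0] + K[0] * innovation, x[1] + K[1] * innovation]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        history.append(list(x))
    return history

random.seed(0)
true_freq = 0.05                      # rad per sample
meas = [true_freq * k + random.gauss(0.0, 0.1) for k in range(200)]
estimates = kalman_pll(meas)
print(estimates[-1])                  # final [phase, freq] estimate
```

The frequency state converges to the true offset even though only noisy phase is observed, which is exactly the behavior exploited by the Kalman-based DPLLs of References 11 and 12.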
The normal approach taken to analyze this kind of system is to solve the nodal equations for the appropriate transfer functions algebraically.4 A streamlined approach is taken here in which the same nodal equations are used, but the customary algebraic manipulations are avoided by using matrix methods. The matrix equation that describes the circuit in Figure 4 can be quickly written down in Laplace transform form as: where Gi = (Ri)-1 and the Ij represent the Johnson current noise sources associated with each resistor. Analysis tools like Matlab and Mathcad can be used to numerically solve this equation for the transfer functions of interest and for closed-loop noise performance quantities. The noise current for the jth resistor is given as: and all of the noise sources are assumed to be statistically independent. The phase-detector-referenced phase noise floor for the National Semiconductor Platinum series devices is given by: where LFloor = -205/-210/-211/-218 dBc/Hz for the LMX2315/LMX2306/LMX2330/LMX2346 devices respectively. This model or another can be used for the reference noise level represented in Equation 15 by θrn. Leeson's model can similarly be used for the VCO self-noise term represented by θvn in Equation 15, recognizing that this noise contribution is frequency-dependent as given by: in which F is the noise factor, k is Boltzmann's constant, T0 = 290 K, P0 is the power extracted from the resonator in watts, Fc is the VCO center frequency in Hz, and QL is the loaded resonator Q-factor. Additional terms are often added within the parentheses to account for 1/f noise, etc., but these rarely survive the closed-loop action of the PLL and have consequently not been included here. The transient response of the PLL to a step change in phase or frequency can be similarly computed using numerical techniques.
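The matrix-methods idea and the Johnson-noise model are both easy to exercise numerically. The sketch below (a hypothetical two-node RC ladder with arbitrary component values, not the Figure 4 circuit) solves a small complex nodal system at one frequency and evaluates the thermal noise current density of a resistor:

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K

def johnson_current_density(R, T=290.0):
    """One-sided Johnson noise current density of resistor R, in A/sqrt(Hz):
    i_n = sqrt(4*k*T/R)."""
    return math.sqrt(4.0 * k_B * T / R)

def solve2(A, b):
    """Cramer's-rule solution of a 2x2 complex nodal system A v = b."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return ((b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det)

# Hypothetical two-node RC ladder: Vin - R1 - node1 - R2 - node2, shunt C1, C2
R1, C1, R2, C2 = 1e3, 1e-9, 10e3, 100e-12
G1, G2 = 1.0 / R1, 1.0 / R2
f = 1e3                                  # evaluate at 1 kHz
s = 2j * math.pi * f
A = [[G1 + G2 + s * C1, -G2],
     [-G2, G2 + s * C2]]
b = [G1 * 1.0, 0.0]                      # unit drive through R1
v1, v2 = solve2(A, b)
print(abs(v2), johnson_current_density(R1))
```

The same pattern scales to the full nodal matrix of Figure 4: build the complex matrix at each frequency on a grid and solve it, rather than deriving the transfer functions symbolically.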
The approach taken here is to substitute the Laplace transform of a step-frequency error, given by 2πΔF/s2, in for θvn in Equation 15, and then compute the Fourier transform for θVCO at an equally-spaced grid of frequencies from which the inverse FFT provides the time-domain response. An example time-domain response is shown in Figure 5. The details for both of these example results have purposely been omitted for brevity, but they can be found online.6

Figure 5: Example time-domain response to step-frequency change using FFTs.

Several design procedures are available for designing "optimal" PLL loop filters. Whenever the word optimal is used, however, designers should ask the question, "Optimal with respect to what criteria?" Some communication systems are primarily concerned with frequency error whereas others are concerned with phase error. If the wrong criterion is adopted, the design can turn out to be more difficult than necessary. It is therefore very attractive to have an interactive tool that permits simultaneous examination of both the time-domain response and the output spectrum.

Phase Noise Impact on Communication Systems

In frequency-modulated systems, phase noise can be equivalently expressed as residual FM noise, and it similarly creates confusion in the receiver as to which frequency was actually sent by the transmitter. Far-out phase noise degrades channel selectivity, adjacent channel occupancy, and receiver third-order intercept point due to reciprocal mixing. Only a few of the most common digital communication waveforms will be considered here due to space limitations. The close-in phase noise impairment to (uncoded) bit error rate performance is most often computed using the Tikhonov probability density function for the noise.
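The FFT-based transient computation described above can be sketched directly. The example below uses hypothetical loop numbers (a 1 Hz natural frequency, damping 0.707, a 1 Hz frequency step) and a naive inverse DFT in place of the FFT: it samples the phase-error spectrum of a high-gain second-order loop on a frequency grid and inverts it to the time domain.

```python
import cmath, math

def step_freq_transient(dF=1.0, fn=1.0, zeta=0.707, N=512, df=0.1):
    """Phase-error transient of a high-gain 2nd-order loop to a frequency
    step dF (Hz): Theta_e(jw) = 2*pi*dF / ((jw)^2 + 2*zeta*wn*(jw) + wn^2),
    sampled on a frequency grid and inverted with a naive inverse DFT
    (conjugate symmetry supplies the negative frequencies)."""
    wn = 2.0 * math.pi * fn
    X = []
    for k in range(N // 2 + 1):
        s = 2j * math.pi * k * df
        X.append(2.0 * math.pi * dF / (s * s + 2.0 * zeta * wn * s + wn * wn))
    dt = 1.0 / (N * df)
    out = []
    for n in range(N):
        acc = X[0]
        for k in range(1, N // 2 + 1):
            acc += 2.0 * (X[k] * cmath.exp(2j * math.pi * k * n / N)).real
        out.append(df * acc.real)
    return dt, out

dt, theta_e = step_freq_transient()
print(max(theta_e))   # the analytic peak for these loop values is about 0.456 rad
```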
For large signal-to-noise ratio (SNR) arguments, numerical evaluation of the zeroth-order Bessel function can become problematic, and it is more convenient to closely approximate this density function as: in which σθ2 is the variance of the phase noise process involved. This variance is normally calculated as: where L(f) is the phase noise power spectral density of the local oscillators involved in rad2/Hz, FS is the symbol rate, and FL is a lower frequency limit normally given as 0.01 FS or thereabouts, depending upon the carrier-recovery baseband processing that may be present in the complete system. These definitions apply to a single-carrier system but require some extension in the case of multi-carrier systems like orthogonal frequency division multiplexing (OFDM). In the case of QAM-style digital modulations, the (uncoded) symbol error rate can then be computed as: Some results computed in this manner may be found in Reference 8. The conditional symbol-error-rate formulas for coherent binary phase-shift keying (BPSK) and quadrature PSK (QPSK) are respectively given as: The conditional symbol error rate relationships for other square-QAM signal constellations like 64-QAM can be found in Reference 7. It should come as no surprise that BPSK shows little susceptibility to phase-noise-related performance loss, as shown in Table 1, since it is essentially an antipodal, amplitude-based modulation type. Performance degrades significantly as the signal constellation size increases, culminating in sub-one-degree rms phase noise being desirable for 64-QAM in order to avoid appreciable Eb/No loss.

Table 1: QAM Uncoded Symbol Error Rate with Phase Noise

In the case of carrier recovery in which coherent demodulation is to be performed on QAM-type signals, the Costas loop has found widespread use as an unbiased, low-variance, practical solution.
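Averaging a conditional error rate over the phase-noise density is straightforward numerically. The sketch below uses the Gaussian approximation to the Tikhonov density discussed above, with a hypothetical 5-degree rms phase noise value, and averages the conditional BPSK error rate over it at 9.6 dB Eb/No:

```python
import math

def Q(x):
    """Gaussian tail probability."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_error_rate_with_phase_noise(ebno_db, sigma_theta, n=2001):
    """Average the conditional BPSK error rate P(e|theta) = Q(sqrt(2*Eb/No)*cos(theta))
    over a zero-mean Gaussian phase density with rms sigma_theta (radians),
    i.e. the large-SNR approximation to the Tikhonov density."""
    ebno = 10.0 ** (ebno_db / 10.0)
    lim = 6.0 * sigma_theta                 # integrate over +/- 6 sigma
    dth = 2.0 * lim / (n - 1)
    total = 0.0
    for i in range(n):
        th = -lim + i * dth
        pdf = math.exp(-th * th / (2.0 * sigma_theta ** 2)) \
              / (sigma_theta * math.sqrt(2.0 * math.pi))
        total += Q(math.sqrt(2.0 * ebno) * math.cos(th)) * pdf * dth
    return total

ideal = Q(math.sqrt(2.0 * 10.0 ** 0.96))                   # 9.6 dB, no phase noise
noisy = bpsk_error_rate_with_phase_noise(9.6, math.radians(5.0))
print(ideal, noisy)   # BPSK is only mildly degraded by 5 degrees rms phase noise
```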
It can be shown that the Costas loops for BPSK and QPSK are equivalent to 2nd- and 4th-power nonlinearities followed by a PLL. Block diagrams for the BPSK and QPSK Costas loops are shown in Figure 6 and Figure 7 respectively.

Figure 6: MAP-based Costas carrier recovery PLL for BPSK.14

Figure 7: MAP-based carrier recovery PLL for QPSK.15

Symbol Timing Recovery

In the example that we will consider, the transmit pulse shape is assumed to be a square-root raised-cosine pulse having an excess bandwidth parameter α of 0.50. The product of filter bandwidth and symbol time duration for the other filters used in this article is referred to as BT. The Fourier transform for such a pulse is given by: The receiver is assumed to use a continuous-time N=3 Butterworth filter as a close approximation to the ideal matched filter, and its Fourier transform may be written as: where ωc is the -3 dB corner frequency in rad/sec. In the absence of noise, the individual pulse shape observed at the output of Hrx() may be directly computed by multiplying Equations 23 and 24 together in the frequency domain and performing an inverse FFT. In the case where Hrx has BT = 0.50, this pulse shape is shown in Figure 8. In the absence of any timing error, the desired signal sample occurs coincident with the peak of the pulse as shown in Figure 8. ISI is clearly present, however, because sample values at time instants offset by +/-kTsym (k a nonzero integer) are not all zero. For random data, these nonzero adjacent-symbol samples create data-dependent noise at the receiver's decision-making hardware, reducing the signal eye-opening and thereby degrading system performance.

Figure 8: Individual pulse shape at receive filter output.

Insight into the ISI matter can be had by considering all 16 possible four-symbol sequences and the eye diagram that is observed at the receive filter output. Eye diagrams for the α = 0.50 and 0.40 cases are shown in Figures 9 and 10 respectively.
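The frequency-domain pulse computation just described can be sketched as follows. The unit symbol time, the grid sizes, and the use of a direct inverse Fourier sum in place of the inverse FFT are all choices made here for compactness. The samples one symbol time away from the pulse peak come out nonzero, which is exactly the ISI discussed above:

```python
import cmath, math

def srrc_spectrum(f, T=1.0, alpha=0.5):
    """Square-root raised-cosine spectrum with excess bandwidth alpha."""
    af, f1, f2 = abs(f), (1 - alpha) / (2 * T), (1 + alpha) / (2 * T)
    if af <= f1:
        return math.sqrt(T)
    if af <= f2:
        return math.sqrt(T * 0.5 * (1 + math.cos(math.pi * T / alpha * (af - f1))))
    return 0.0

def butterworth3(f, fc=0.5):
    """Complex response of a 3rd-order Butterworth low-pass, -3 dB at fc."""
    s = 1j * f / fc                                   # normalised jw/wc
    poles = [cmath.exp(1j * math.pi * (2 * k + 4) / 6) for k in range(3)]
    h = 1.0 + 0j
    for p in poles:
        h /= (s - p)
    return h          # H(0) = 1 because the product of (0 - p_k) equals 1

df = 0.005
K = int(0.75 / df) + 1   # SRRC spectrum is zero beyond (1+alpha)/(2T) = 0.75
H = [srrc_spectrum(k * df) * butterworth3(k * df) for k in range(K)]

def pulse(t):
    """Receive-filter output pulse via a direct inverse Fourier sum."""
    acc = H[0].real
    for k in range(1, K):
        acc += 2.0 * (H[k] * cmath.exp(2j * math.pi * k * df * t)).real
    return acc * df

# The Butterworth group delay shifts the peak past t=0; the nonzero samples
# one symbol time either side of the peak are the ISI.
t_peak = max((i * 0.01 for i in range(600)), key=pulse)
print(t_peak, pulse(t_peak), pulse(t_peak - 1.0), pulse(t_peak + 1.0))
```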
The ISI and eye-closure are substantially worse for the α = 0.40 case as shown.

Figure 9: ISI pattern for 16 possible four-symbol sequences with excess bandwidth parameter α = 0.50.

Figure 10: ISI pattern for 16 possible four-symbol sequences with excess bandwidth parameter α = 0.40.

The symbol error rate for random +/-1 data can be computed mathematically by recognizing that the decision statistic consists of three components: (1) the desired signal, (2) additive Gaussian noise, and (3) data-pattern-dependent ISI. Characteristic function methods may be used to combine the effects of the ISI and Gaussian noise as described in Reference 16, leading to the symbol error rate expression with static timing error τe being given as: where r(t) is the noise-free single-pulse shape at the receive filter output, σ2 is the variance of the Gaussian noise at the receive filter output, and C(ω) is the characteristic function of the ISI noise, which can be shown to be given by: The variance specified in Equation 26 can be found from the equivalent noise bandwidth of the receive filter as: where No is the one-sided Gaussian white noise power spectral density. The foregoing results were used to compute the effect of timing error on symbol error rate performance as a function of the receive filter BT product, as shown in Figure 11. The optimal value for best performance is BT = 0.50, which leads to a performance loss of only about 0.25 dB at an input Es/No value of 9.6 dB, which is quite remarkable.

Figure 11: Symbol error rate versus static timing error and filter BT product.

The curves shown in Figure 11 can indirectly provide the needed conditional error probability like that used in Equation 21, thereby allowing the complete impact of imperfect PLL time-tracking behavior to be assessed.
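For a finite number of ISI taps, the characteristic-function average of Reference 16 reduces to a direct enumeration over the equiprobable data patterns. The sketch below computes the exact symbol error rate for random +/-1 data this way; the main-sample and ISI tap values are hypothetical round numbers, not values read from Figure 11:

```python
import itertools, math

def Q(x):
    """Gaussian tail probability."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ser_with_isi(r0, isi_taps, sigma):
    """Exact symbol error rate for random +/-1 data: average the Gaussian
    tail over every equiprobable ISI pattern (the finite-tap equivalent of
    the characteristic-function average)."""
    patterns = list(itertools.product((-1.0, 1.0), repeat=len(isi_taps)))
    total = 0.0
    for bits in patterns:
        isi = sum(b * h for b, h in zip(bits, isi_taps))
        total += Q((r0 + isi) / sigma)
    return total / len(patterns)

# Hypothetical numbers: main sample r0, two ISI taps, noise sigma at Es/No = 9.6 dB
r0, taps = 1.0, [-0.08, 0.05]
sigma = 1.0 / math.sqrt(2.0 * 10.0 ** 0.96)
with_isi, without_isi = ser_with_isi(r0, taps, sigma), Q(r0 / sigma)
print(with_isi, without_isi)   # the ISI raises the error rate above the ideal value
```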
In the context of hardware-based symbol timing recovery, many different types of timing error metrics are available, but one stands out in particular for very high-speed data applications in which most other detectors fall prey to metastability problems. This detector type was first patented by Hogge18 and is shown here in Figure 12. It is uniquely well suited to extremely high-speed data applications and is shown in this figure being used within a type-2 third-order PLL structure.

Figure 12: Hogge clock-recovery circuit within PLL structure.

Wrap Up

References
All material on this site Copyright © 2017 Design And Reuse S.A. All rights reserved.