Lockdown! Random Numbers Secure Network SoC Designs
By Alex Soohoo , Communication Systems Design
April 1, 2003 (9:37 a.m. EST)
URL: http://www.eetimes.com/story/OEG20030401S0027
The demand for networking equipment that incorporates virtual private networking (VPN) functionality is growing, fueled by companies' use of the public Internet as an extension of their networks. To serve that market, semiconductor vendors are offering specialized products that integrate all the required security functions on one device. Some of these functions, such as the AES and 3DES encryption/decryption and SHA-1 and MD5 hashing algorithms dedicated to Internet Protocol security (IPsec), are well-known and generate a great deal of attention. The often-overlooked ability to create random numbers, however, is the critical block for maintaining a secure VPN system.

Random numbers are essential to many cryptographic applications. The function is used to generate public/private key pairs for such algorithms as Diffie-Hellman, Rivest-Shamir-Adleman and the Digital Signature Algorithm, as well as to generate initialization vectors for bulk-encryption algorithms and nonces (salts) for IPsec. Numerous other security protocols rely on the unpredictability of the random-number generator to prevent a system from being compromised.

Complex mathematical functions are often used to generate high-quality pseudorandom-number-generator (PRNG) bit streams. There are many well-documented ways, however, to attack systems that use the PRNG approach. A cryptographically secure system demands a high-quality random-number generator. But having established that, which true random-number generator, one that produces values from a nondeterministic noise source, is best optimized for a system-on-chip (SoC) design? The many options generally fall into three popular categories: direct amplification, discrete-time chaos and oscillator sampling.
While direct amplification and discrete-time chaos are more suitable for custom-cell designs, where the designer controls the layout of the actual circuits, oscillator-sampling techniques can be implemented as part of a standard-cell design flow and, for that reason, are popular for SoC designs. But once a designer decides to go with the oscillator technique, there are still many implementation issues to take into account.

Generation techniques
Direct amplification uses a high-gain, high-bandwidth amplifier to process voltage changes produced by noise sources such as thermal or shot noise. For example, n-well resistor pairs can be used to transform thermal noise into a voltage-variation signal that seeds the random-number generator microarchitecture in the form of jitter (Figure 1).1 Designers using this approach must take additional factors into consideration. The thermal noise is typically coupled with local characteristics such as substrate noise and power supply fluctuations. If the circuit is not properly shielded, those factors will dominate and impair the designer's ability to capture the randomness of the thermal-noise source. The designer can overcome this by sampling an adjacent pair of resistors and using the differential to minimize the effect of the other sources.
Discrete-time chaos approaches use analog signal-processing techniques to generate random bit streams. Rather than thermal noise, randomness is obtained from robust chaotic dynamics. Designs that realize these systems are similar in nature to algorithmic analog-to-digital converters. An example is a classical A/D converter whose residue signal is sampled, held and fed back into the A/D input (Figure 2).2 Typically, this technique alone is not enough to produce random sequences, because the circuit inaccuracies that limit A/D resolution can reduce the system's ability to create random sequences. For that reason, this approach to capturing nondeterministic randomness is often used in tandem with other techniques.
The most popular approach by far in RNG designs is the oscillator-sampling method (Figure 3).3 This design exploits the relationship between two independent free-running oscillators, one centered at a low frequency and the other at a high frequency, to capture a nondeterministic noise source. The basic concept is that a low-frequency oscillator with high jitter samples the high-frequency oscillator to produce the random-number sequences. In a digital implementation, the low-frequency square wave clocks a positive-edge-triggered D flip-flop, while the high-frequency square wave drives the flip-flop's data input and is sampled on each rising edge of the clock.
Two properties underlie the creation of random values in this system: The low-frequency oscillator is designed to have some frequency instability (that is, jitter), and the ratio of the low and high frequencies is carefully selected to meet certain conditions. The amount of jitter in the low-frequency oscillator is the source of randomness. This frequency instability can arise from the type of oscillator chosen (discussed in the next section) or can be seeded directly by another nondeterministic noise source. It is this variation in the sampling-clock phase with respect to the high-frequency data input that provides the mechanism for capturing a random bit stream.
If the two oscillators ran without drift, the sampled bits would exhibit predictable periodicity tied to the frequency ratio, commonly known as beat frequencies. The ratio of the two oscillator frequencies also has a critical effect on the resulting bit stream. Several studies have shown that to ensure a high degree of randomness, the ratio of twice the standard deviation of the low-frequency period variation to the high-frequency oscillator period should be greater than 3:2. Otherwise, there is significant bit-to-bit correlation, and a given bit becomes predictable from the bits that precede it.
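The sampling mechanism just described can be modeled behaviorally in a few lines of Python. The frequency ratio, the Gaussian jitter model and all parameter values below are illustrative assumptions rather than figures from the cited designs; the point is only that once the slow clock's jitter spans many fast-oscillator periods, the sampled phase becomes effectively uniform and the output bits lose their periodicity.

```python
import random

def oscillator_sample(n_bits, ratio=200.0, jitter_sigma=0.2):
    """Behavioral model: a jittery slow clock (nominal period = ratio, in
    units of the fast-oscillator period) samples a fast square wave at
    each rising edge. All parameters are illustrative assumptions."""
    t = 0.0
    bits = []
    for _ in range(n_bits):
        # Cycle-to-cycle jitter: each slow-clock period varies randomly.
        t += ratio * (1.0 + random.gauss(0.0, jitter_sigma))
        # The fast oscillator (period 1.0) is high for its first half-period.
        bits.append(1 if (t % 1.0) < 0.5 else 0)
    return bits

random.seed(0)
bits = oscillator_sample(10000)
print(sum(bits) / len(bits))  # hovers near 0.5: jitter spans ~40 fast periods
```

With `jitter_sigma` set near zero, the same model reproduces the beat-frequency periodicity described above, which is why the jitter magnitude, not the oscillators themselves, is the real entropy source.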
Oscillator implementation
Designers who choose to use the oscillator-sampling method as part of the random-number generator must consider several additional implementation issues. The choice of oscillator type will also affect the inherent randomness of the overall system design. Further, this choice of oscillator presents additional complications during circuit layout of the device, since the designer must make sure that correlated noise sources will not remove the characteristic randomness of the system. To compensate for some of the design risks, digital postprocessing techniques may be utilized to mitigate the risks and preserve the level of system randomness.
Designers can choose from several different oscillator families when considering the implementation of the oscillator-sampling method. These include differential, single-ended and their hybrids. Different types of oscillator designs are sensitive to different types of noise sources. Clearly, there is a large body of knowledge behind the comparative study of oscillator characteristics and the discussion here will only attempt to scratch the surface.
Generally, differential-oscillator circuits are less sensitive to power supply and substrate noise than single-ended oscillator circuits. Because the differential pair's power and ground nodes swing together, the difference between the two inputs, and hence the output, stays the same; the circuit exhibits a high common-mode rejection ratio (CMRR). Differential logic is mostly used in analog voltage-controlled-oscillator designs, such as the one in a phase-locked loop, that require a high CMRR. That makes differential-oscillator solutions less than ideal for designs that actually want a nondeterministic noise source. Single-ended, inverting-stage oscillators, on the other hand, are highly affected by voltage swings or a dc component at the input; any noise-induced disturbance in the voltage level ultimately affects the oscillator's jitter. Moreover, differential, inductor-capacitor and relaxation-oscillator designs require custom layout and cannot be integrated into standard-cell SoC designs. As a result, the most straightforward choice for an SoC will typically be the single-ended ring oscillator.
That said, the choice of a single-ended ring oscillator brings additional complications. In high-speed digital systems, thermal noise is typically negligible compared with the supply and substrate noise generated by switching activity.4 Supply and substrate noise will therefore be the primary source of noise coupling. An oscillator that picks up this noise experiences delay variation in its inverting stages: Supply-voltage variation, or noise coupled through the substrate, alters the capacitance on each stage's output node and hence shifts the oscillator's overall frequency from cycle to cycle. Additionally, supply and substrate noise is correlated across all of a ring oscillator's delay stages, while thermal noise is not.4 Designers therefore should not place two oscillator circuits close together without a heavy ground ring to protect them; such placement, without appropriate shielding, can produce correlated randomness between the two output streams. All of those factors must be considered in the final oscillator design.
Still, despite the best intentions, the resulting implementation may not produce a truly random bit stream, and the designer may have to take extra, more expensive measures to guarantee that the random-number generator produces the desired results. As mentioned earlier, the main source of randomness is the power supply and substrate noise coupling onto the oscillator stages. Oscillators placed too close together may couple to the same noise source; worse, they may lock to that common source and intercouple with each other, increasing the correlation between the two output streams. By strategically locating oscillators away from each other in the final layout, the effect of correlated power supply and substrate noise can be mitigated.
One common practice in the oscillator-sampling method is to design in additional sets of oscillator pairs. If the primary source of randomness fails, this redundancy limits the risk that the RNG system will be left without a nondeterministic noise source. The sampled bit streams should then be combined using a strong mixing function (see "Mixing functions," below) to preserve the inherent randomness of the individual sources. For better randomness in the mixed bit stream, each oscillator should also be assigned a unique prime nominal frequency or be made tunable, which helps ensure that the multiple sources have minimal correlation with one another. Of course, the designer will have to weigh the extra cost against the risk of not being able to generate random values.
Bias correctors
The oscillator-sampling method relies on the high-frequency oscillator maintaining a 50 percent duty cycle over time and on the low-frequency oscillator having significant cycle-to-cycle variation. When these conditions are not met, as is almost always the case in practice, the resulting bit stream will be biased toward either "1" or "0" bits, a defect known as eccentricity. Fortunately, there are effective postprocessing methods for achieving bias correction deterministically, yielding a more uniform distribution. Two of the simplest techniques are parity generation and transition mapping. Other, more complex techniques for bias correction include using the fast Fourier transform function and bit-shuffling/scrambling schemes that use longer combinations of delay elements and feedback paths to remove bit-to-bit correlation.
The goal of the bias correction is to uniformly distribute the bit stream so it produces statistically even 1s and 0s, essentially extracting more of the randomness in a skewed bit sequence. This postprocessing function is not specific to the oscillator-sampling technique. It can be applied regardless of the original noise source. Nor does the function need to be complex. One simple method that can serve this purpose is parity generation. This method has the advantage of being robust for large ranges of skew distribution. It is typically trivial to implement in hardware for a defined block length of bit samples. For example, a simple cascaded XOR chain can effectively serve as a parity generator and perform adequate bias correction (Figure 5).
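As an illustration, the parity-generation idea can be sketched in software; the block length and the 60 percent skew below are arbitrary assumptions, not values from the article. For independent bits with bias p, the parity of n bits is a 1 with probability (1 - (1 - 2p)^n)/2, so even a heavily skewed stream collapses toward an even split.

```python
import random

def parity_correct(bits, block_len=8):
    """Collapse each block of raw samples to one bit via a cascaded XOR
    (parity chain), shrinking any fixed 1/0 bias in the stream."""
    out = []
    for i in range(0, len(bits) - block_len + 1, block_len):
        p = 0
        for b in bits[i:i + block_len]:
            p ^= b  # same operation a hardware XOR chain performs
        out.append(p)
    return out

random.seed(1)
# Hypothetical skewed source: roughly 60 percent ones.
raw = [1 if random.random() < 0.6 else 0 for _ in range(20000)]
corrected = parity_correct(raw)
print(round(sum(raw) / len(raw), 2), round(sum(corrected) / len(corrected), 2))
```

The trade-off is throughput: every eight raw samples yield only one corrected bit, which mirrors the area-versus-rate decisions the hardware designer faces.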
Transition mapping, also known as a von Neumann corrector, converts pairs of sample input bits into output bits by converting the bit pairs [0,1] to produce output 1, [1,0] to produce output 0, and outputting nothing for [0,0] or [1,1]. This technique will completely eliminate bias but at the expense of creating an indeterminate amount of delay between inputting bits and the production of an arbitrary number of output bits.
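A minimal software sketch of the von Neumann corrector, following the bit-pair convention described above, makes both the bias elimination and the variable output rate concrete:

```python
def von_neumann(bits):
    """Transition mapping: pair up input bits; [0,1] emits 1, [1,0] emits 0,
    and [0,0] or [1,1] emits nothing, so the output rate is indeterminate."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(b)  # second bit of the pair encodes the transition
    return out

print(von_neumann([0, 1, 1, 0, 1, 1, 0, 0]))  # [1, 0]
```

For independent input bits with bias p, pairs [0,1] and [1,0] are equally likely, so the output is exactly unbiased, but only a fraction 2p(1 - p) of the pairs produce a bit at all.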
Mixing functions
Even after all the aforementioned techniques have been applied, there may still be concern that the inherent randomness of the system has been compromised by a mixture of nonrandom sources, such as the combined multiple-oscillator sources mentioned earlier. One way to ensure that the randomness of a source is maintained is to use strong mixing functions. These functions combine two or more sampled bit inputs and produce output bits that are a complex nonlinear function of the input bits. Of course, it is impossible to get more bits of randomness out than are put in.5 The desired behavior is that changing any single input bit changes about half the output bits. Mixing functions are also useful as more complex bias correctors, removing the eccentricity of a bit stream as discussed in the previous section.
Typically, a stronger mixing function comes at the cost of silicon real estate. A trivial version is the cascaded-XOR example discussed already. The DES encryption/decryption algorithm is a more complex example: It takes up to 120 bits of input (a 64-bit data block plus a 56-bit key) and produces 64 bits of output, each of which depends on a complex nonlinear function of all input bits. Other encryption/decryption algorithms might work equally well. Hashing functions can also serve as strong mixing functions, taking an arbitrary-size input and producing a message digest of fixed length. Again, the design engineer will have to weigh the additional expense against the possibility of generating bit streams that lack the desired level of randomness.
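As a software analogy for a hash-based mixing function, the sketch below feeds two hypothetical sample streams through SHA-1 and checks the avalanche behavior; SHA-1 simply stands in for whatever hash or cipher block the SoC already integrates, and the input streams are made up for the demonstration.

```python
import hashlib

def mix_sources(*streams):
    """Hash several raw sample streams into one 160-bit output; SHA-1 here
    stands in for any strong mixing function the design already has."""
    h = hashlib.sha1()
    for stream in streams:
        h.update(bytes(stream))  # each stream is a list of 0/1 sample values
    return [(byte >> i) & 1 for byte in h.digest() for i in range(8)]

a = [0, 1, 1, 0] * 8  # two hypothetical sampled bit streams
b = [1, 0, 0, 1] * 8
out1 = mix_sources(a, b)
a[0] ^= 1             # flip a single input bit
out2 = mix_sources(a, b)
print(sum(x != y for x, y in zip(out1, out2)))  # roughly half of 160 bits flip
```

The single-bit flip changing about half the digest bits is exactly the "complex nonlinear function of the previous input bits" property described above.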
Statistical evaluation
The U.S. Department of Commerce has created a number of metrics to evaluate the randomness of random-number generators used in cryptographic applications. The National Institute of Standards and Technology (NIST) document Special Publication 800-22 recommends a comprehensive statistical test suite for random-number generators and outlines stringent conditions that must be satisfied to meet defined levels of randomness.6 Test engineers can execute this suite, or similar tests that check for the desired level of randomness, during the verification process to detect any nonrandom characteristics in the design. The NIST statistical suite specifies a battery of 16 different types of tests that try to find defects in the random-number generator (Figure 6).
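The first and simplest test in the NIST suite, the frequency (monobit) test, is easy to sketch in software; the formula below follows the SP 800-22 definition, with the suite's usual 0.01 significance threshold.

```python
import math

def nist_frequency_test(bits):
    """Frequency (monobit) test per NIST SP 800-22: map bits to +/-1,
    normalize the partial sum and return a p-value; p >= 0.01 passes."""
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)  # map 0 -> -1, 1 -> +1 and sum
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2.0))

print(nist_frequency_test([1, 0] * 5000))  # perfectly balanced: p = 1.0
print(nist_frequency_test([1] * 10000))    # all ones: p is essentially 0
```

The remaining tests in the suite probe subtler defects (runs, spectral structure, template matches) that a stream can exhibit even when its 1/0 balance is perfect.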
FIPS 140-2, another document issued by the U.S. Department of Commerce, defines a set of requirements for cryptographic devices employed in security applications. One of the most important states that any real-time cryptographic module employing a random-number generator must provide power-up and continuous real-time testing of the RNG function to ensure that there are no failures during operation.7 If any of the defined statistical tests fail, the RNG module must enter an error state. For an SoC design, this means the tester block must be integrated on the device itself and meet the required conditions if the end product is to be certified to this U.S. government security standard.
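A minimal sketch of one such self-check is the monobit power-up test as defined in the original FIPS 140-2 statistical tests; the 9,725/10,275 bounds below are the standard's published acceptance limits, and a hardware tester block would implement the same comparison with a simple counter.

```python
def fips_monobit(bits):
    """Monobit power-up test from the original FIPS 140-2 statistical
    tests: in a 20,000-bit sample, the count of ones must satisfy
    9,725 < count < 10,275, or the RNG module must enter an error state."""
    if len(bits) != 20000:
        raise ValueError("test is defined over a 20,000-bit sample")
    ones = sum(bits)
    return 9725 < ones < 10275

print(fips_monobit([1, 0] * 10000))  # exactly 10,000 ones -> True
print(fips_monobit([1] * 20000))     # stuck-at fault -> False
```

Because the bounds are fixed integer comparisons on a counter value, the on-chip cost of this particular test is small compared with the redundancy and mixing hardware discussed earlier.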
Some techniques for generating a nondeterministic noise source may not be applicable, depending on whether a standard cell or a custom layout will be utilized. A design can be guaranteed to achieve the desired level of randomness, but that guarantee may be made at the cost of the additional silicon real estate needed to implement redundant structures or more complex postprocessing functions. Even a meticulously planned design must be verified at the end of the day via statistical test suites before it can be concluded that the desired level of randomness has been reached.
Related Articles
- "Taking a Stateful Approach to Firewall Design"; www.commsdesign.com/story/OEG20020404S0030
- "Secure Flow Pr ocessing Enhances QoS in Routers"; www.commsdesign.com/story/OEG20020611S0009
References
1. Jun and Kocher, "Intel Random Number Generator," Cryptography Research Inc. white paper, April 1999.
2. Craig Petrie and J. Alvin Connelly, "A Noise-Based IC Random Number Generator for Applications in Cryptography," IEEE TCAS I, Vol. 47, No. 5, May 2000, pp. 615-621.
3. Fairfield, Mortenson and Coulthart, "An LSI Random Number Generator," Advances in Cryptology: Proceedings of Crypto 84, pp. 203-230.
4. Frank Herzel and Behzad Razavi, "A Study of Oscillator Jitter Due to Supply and Substrate Noise," IEEE TCAS II, Vol. 46, No. 1, January 1999, pp. 56-62.
5. Eastlake, Crocker and Schiller, "Randomness Recommendations for Security," RFC 1750.
6. NIST Special Publication 800-22, "A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications," Information Technology Laboratory, National Institute of Standards and Technology, Gaithersburg, Md., September 2000.
7. Federal Information Processing Standards Publication 140-2, "Security Requirements for Cryptographic Modules," Information Technology Laboratory, National Institute of Standards and Technology, Gaithersburg, Md., May 2001.
About the Author
Alex Soohoo (alex.soohoo@idt.com) is a systems applications manager for the IDT Internetworking Products Division, where he oversees application strategies and directs customer support for integrated communications processors for gateway applications. Soohoo also manages new integrated processor introductions within the gateway space. He holds a BSEE from the University of California at Berkeley and an MSEE from UC Davis.