Why Using Single Root I/O Virtualization (SR-IOV) Can Help Improve I/O Performance and Reduce Costs

By Philippe Legros, Product Design Architect, PLDA

Introduction

While server virtualization is being widely deployed in an effort to reduce costs and optimize data center resource usage, an additional area where virtualization can shine is I/O performance and its role in enabling more efficient application execution. The advent of Single Root I/O Virtualization (SR-IOV), defined by the PCI-SIG, is a step forward in making it easier to implement virtualization within the PCI bus itself. SR-IOV adds definitions to the PCI Express® (PCIe®) specification that enable multiple Virtual Machines (VMs) to share PCIe hardware resources. Used this way, virtualization provides several important benefits to system designers.
In this paper, we will explore why systems built natively on SR-IOV-enabled hardware may be the most cost-effective way to improve I/O performance, and how to implement SR-IOV in PCIe devices.

Traditional Virtualized System Overview

A virtualized system (Figure 1) is a discrete system that contains physical hardware, a Supervisor (hypervisor), and one or more virtual machines.
Within this virtualized system, the Supervisor plays a crucial role: it provides the interface between the hardware and the virtual machines, it is responsible for security, and it ensures that virtual machines cannot interact with one another.
Any system can be virtualized without SR-IOV technology. In a more traditional virtualization scenario, the Supervisor must emulate virtual devices and perform resource sharing on their behalf, instantiating a virtual Ethernet controller for each virtual machine (Figure 2). This creates an I/O bottleneck and often results in poor performance. It also creates a tradeoff between the number of virtual machines a physical server can realistically support and the system's I/O performance: adding more VMs aggravates the bottleneck.

Providing a Better Way – The Benefits of SR-IOV Hardware Implementation

Designing systems with hardware that incorporates SR-IOV support allows virtual devices to be implemented in hardware and enables resource sharing to be handled by a PCI Express device such as an Ethernet controller (Figure 3). The benefit of SR-IOV over more traditional network virtualization is that each VM talks directly to the network adapter through Direct Memory Access (DMA). Using DMA allows the VM to bypass virtualization transports such as the VM Bus and avoids any processing in the management partition. By avoiding a software switch, the best possible system performance is attained, close to "bare-metal" performance.

Implementing SR-IOV in an Adapter Card

The SR-IOV specification enables a hardware provider to design a PCI card that presents itself as multiple independent devices to a VMM. To achieve this, the SR-IOV architecture distinguishes two types of functions (Figure 4):

Physical Functions (PFs): full-featured PCIe functions that include the SR-IOV capability and are discovered, managed, and configured like any other PCIe function.

Virtual Functions (VFs): lightweight functions that contain the resources needed for data movement; each VF is associated with a PF.

To implement SR-IOV effectively, it must be possible to address each VF's configuration space. Traditional Routing ID interpretation allows only 8 functions per device, and for a PCIe endpoint the device number is always 0. Alternative Routing-ID Interpretation (ARI) therefore reuses the device-number bits of the standard Routing ID as additional function-number bits, extending the limit to 256 functions per bus.
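The two Routing ID interpretations can be sketched in a few lines of C. This is an illustrative decomposition of the 16-bit Routing ID only, assuming the standard field layout (bus in the high byte, then a 5-bit device and 3-bit function traditionally, or an 8-bit function under ARI); the function names are ours, not part of any API.

```c
#include <stdint.h>

/* A 16-bit PCIe Routing ID is bus[15:8], device[7:3], function[2:0] under
 * traditional interpretation (at most 8 functions per device). With ARI,
 * the device field is merged into the function field: bus[15:8],
 * function[7:0], allowing up to 256 functions. */

static inline uint8_t rid_bus(uint16_t rid)              { return (uint8_t)(rid >> 8); }

/* Traditional interpretation: 3-bit function number (device is 0 for a
 * PCIe endpoint). */
static inline uint8_t rid_func_traditional(uint16_t rid) { return (uint8_t)(rid & 0x7); }

/* ARI interpretation: the whole low byte is the function number. */
static inline uint8_t rid_func_ari(uint16_t rid)         { return (uint8_t)(rid & 0xFF); }
```

For example, Routing ID 0x0112 names bus 0x01 and, under ARI, function 0x12 — a function number that the traditional 3-bit field could not express.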
SR-IOV extends this further (Figure 5) by allowing a single device to use several consecutive bus numbers, enabling more than 256 functions. VFs are lightweight functions whose configuration space differs significantly from that of PFs: a VF's own BAR registers are read-only and return zero, with VF memory instead mapped through the VF BARs in the PF's SR-IOV capability, and the Device ID presented by VFs is supplied by the VF Device ID field of that capability.
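The way one device spans consecutive bus numbers follows from how VF Routing IDs are computed: per the SR-IOV capability, VF n's Routing ID is derived from the PF's Routing ID plus the First VF Offset and VF Stride fields. A minimal sketch of that arithmetic (the function name is illustrative):

```c
#include <stdint.h>

/* Compute the Routing ID of the n-th VF (1-based) from the PF's Routing ID
 * and the First VF Offset / VF Stride fields of the PF's SR-IOV capability.
 * The addition is plain 16-bit arithmetic: a carry out of the low
 * (function) byte lands in the bus-number byte, which is how a single
 * device can occupy several consecutive bus numbers and expose more than
 * 256 functions. */
static inline uint16_t vf_routing_id(uint16_t pf_rid,
                                     uint16_t first_vf_offset,
                                     uint16_t vf_stride,
                                     unsigned n /* 1-based VF index */)
{
    return (uint16_t)(pf_rid + first_vf_offset + (n - 1u) * vf_stride);
}
```

For instance, with a PF at Routing ID 0x01F0, a First VF Offset of 0x10, and a stride of 1, the fifth VF lands at 0x0204 — on the next bus number.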
To access the VF memory spaces, up to 6 VF BARs are implemented in the PF's SR-IOV capability. They are similar to normal BAR registers, except that their settings apply to all VFs. During initial SR-IOV set-up and initialization, VFs are not enabled and are invisible. The Supervisor then detects the device and configures the PFs. If the host system and device driver detect SR-IOV capability, they will program the number of VFs to enable (NumVFs) and set the VF Enable bit in the SR-IOV Control register, at which point the VFs become visible in PCI configuration space and can be assigned to virtual machines.
Once set up, each VM can be assigned a virtual device and can access it directly via its VF driver. There must be at least one VF per VM; otherwise the Supervisor must still perform some or all of the sharing management, reducing the benefits of SR-IOV. The current market trend is toward 64 VMs per server, so an SR-IOV-capable device should support 64 VFs.

Choosing the Right Partners for SR-IOV Success

Many PCI card designers are realizing that the PCIe IP they choose is crucial to the success of their SR-IOV implementation. By choosing IP vendors who understand and design for SR-IOV natively, their PCI cards integrate more seamlessly with the host system.
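When a VM's VF driver accesses its device, the memory it touches is that VF's slice of the aperture defined by a VF BAR in the PF's SR-IOV capability: each enabled VF owns one equally sized, consecutive slice. A minimal sketch of locating VF n's slice, assuming the per-VF aperture size has already been determined by the usual BAR sizing protocol (names and parameters are illustrative):

```c
#include <stdint.h>

/* Address of VF n's memory space within a VF BAR aperture. The VF BAR in
 * the PF's SR-IOV capability gives one base address covering all VFs;
 * VF n (1-based) owns the n-th slice of `vf_aperture_size` bytes. */
static inline uint64_t vf_bar_address(uint64_t vf_bar_base,
                                      uint64_t vf_aperture_size,
                                      unsigned n /* 1-based VF index */)
{
    return vf_bar_base + (uint64_t)(n - 1u) * vf_aperture_size;
}
```

This is the mapping the Supervisor hands to each VM, after which the VF driver reads and writes its slice directly, with no Supervisor involvement on the data path.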
PLDA, a long-time leader in PCIe IP innovation, provides its XpressRICH3® PCIe Gen 3 IP (Figure 6) to many of the leading PCI Express hardware vendors. The IP provides native SR-IOV support and delivers industry-leading specifications that fully enable SR-IOV functionality.
Because PLDA is the industry leader in PCI Express IP, with over 2,500 designs in working silicon, its IP also offers assurance of easy integration and first-time-right functionality. In addition, PLDA offers a free evaluation for a hands-on trial of its IP before purchase, and provides a comprehensive SR-IOV Reference Design that enables quick implementation and reduces time-to-market. To schedule a demo of the XpressRICH3 IP running SR-IOV, in which the IP performs both "Read DMA" and "Write DMA", visit PLDA at www.plda.com.

Summary

In summary, the key benefits of using SR-IOV to achieve virtualization include hardware-based resource sharing, direct DMA between each VM and the device that bypasses the Supervisor, close to "bare-metal" I/O performance, and support for more VMs per physical server without an I/O bottleneck.
PLDA's XpressRICH3 IP for PCIe fully supports SR-IOV functionality and is an easy, cost-effective way to integrate SR-IOV support into PCI devices. For more information, please visit www.plda.com.

About the Author

Philippe Legros is a PCI Express Product Design Architect at PLDA; he has more than 12 years' experience in the PCI, PCI-X, and PCI Express bus protocols and related ecosystems. Prior to joining PLDA, Philippe worked as an ASIC design engineer in digital video broadcasting. Philippe holds an MSc from Middlesex University, London, UK.

About PLDA

PLDA designs and sells intellectual property (IP) cores and prototyping tools for ASIC and FPGA that accelerate time-to-market for embedded electronic designers. The company specializes in high-speed interface protocols and technologies such as PCIe and Ethernet. Visit them online at www.plda.com.
All material on this site Copyright © 2017 Design And Reuse S.A. All rights reserved.