Secure Virtualization as an Enabler of Trusted Execution Environments in Embedded Computing
Update: Synopsys Expands Security Solutions with Acquisition of Elliptic Technologies (June 29, 2015)
Mike Borza, CTO, Elliptic Technologies Inc.
Use of virtualization in embedded environments is growing in areas such as consumer electronics, automotive and IoT applications. Virtualization can provide secure separation of resources, and this secure virtualization can aid in the creation of multiple independent security domains. Secure virtualization can be made simpler and more secure by the addition of hardware that extends the virtual machine to incorporate subsystems beyond the CPU.
Introduction
Virtualization has become a major part of the data center environment, and is one of the key enablers of cloud computing. In these environments, resource-rich physical servers host virtual servers that are sized to meet the needs of specific applications. In addition to servers, network functions and storage can be virtualized. The resources allocated to virtual servers include units of computing power, RAM space, network bandwidth and disk storage. The benefits of virtualization in this environment are numerous, including:
- Standardization of hardware configurations in the data center that allows capital and operating expense optimization and reduces the number of variants of spares that must be maintained
- Improved reliability as failure of one application environment does not directly affect applications running on neighboring virtual servers
- Improved availability for applications as virtual servers can be automatically migrated to a redundant physical server when hardware fails
- The ability to deploy, resize and withdraw services and applications on an as-needed basis
- The ability to host mixed operating system environments using just the right OS for a particular application
- Separation of applications from each other
In the embedded space, there has been relatively little use of virtualization, but this is starting to change. Embedded virtualization technology is being developed for consumer electronics, automotive and IoT applications. The reasons to consider virtualization in embedded environments are different from those in the data center. One reason, and the focus of this whitepaper, is the use of virtualization to provide secure separation of resources from each other. For our purposes here, we will simply refer to this as secure virtualization. The use of virtualization as a means to achieve secure separation has implications throughout the entire SoC architecture.
A bonus of secure separation of environments is that it also has positive implications for robustness, reliability and availability. Virtualization that protects against rogue applications trying to violate the separation of environments on the same SoC also protects against accidental separation violations through, for example, software coding errors. This makes secure virtualization immediately applicable to other environments that are not necessarily predominantly concerned with security, for example some automotive applications.
Secure virtualization can be seen as a generalization of the Trusted Execution Environment (TEE) popularized by GlobalPlatform and others. The payoff is large: secure virtualization allows TEEs to be developed for specific applications, isolated from other TEEs that may be present in a device. This allows the creation of multiple security domains that are independent of each other, which has benefits for security and robustness such as the following:
- Independent domains need only provide the functionality required for their specific application, not a general set of functions suitable for all applications
- A security weakness or breach in one domain does not compromise the security of other domains or the entire device
- It is possible to add new security domains to devices that are already in the field
The remainder of this whitepaper discusses what is involved in developing a secure virtualization environment.
The Security Model
For simplicity, consider the overall platform in terms of a hardware and software system that allows for the creation and simultaneous execution of multiple virtual machines (VMs). Each virtual machine can also be called an "environment" in which applications may run. A VM may have an operating system (OS), but it need not have one: it is perfectly fine for an application to run on "bare metal", that is, directly in the virtual machine. In general, different virtual machines can run different OSes if desired. VMs may run independently of each other, or they may work together to solve particular computing tasks. It is common for some VMs to exist solely to provide services to client applications in other VMs, or to a client VM itself.
The basic objectives for the secure virtualization system are the following:
- Only software that is authorized runs on the platform
- Operation of one VM is independent of other VMs in the system except when VMs must synchronize their operation to cooperate on a task
- Resources in one environment should only be accessible to authorized clients in a different environment
The first objective requires that only authorized software is allowed to run in any VM on the system. This model is fairly abstract, and ignores a number of underlying questions. For example, the system is assumed to exist because it performs useful work for a person or people, but the question of who authorizes software – its user or its designer – is left unanswered. In part, the answer is context specific. Although everybody agrees that it is desirable to prevent malware from taking over personal devices such as tablet computers, some users would prefer to have unfettered access to downloaded or streamed movies, while distributors of those movies may object. In the above example, the distributor delegates to the system designer the authorization of software that protects the distributor's interests.
The second and third objectives assure separation of VMs from each other and that VMs do not interfere with each other's operation. VMs themselves may or may not be trusted, so a VM and its applications should be designed in such a way as to allow them to keep operating when possible, in spite of what other VMs are doing. Faults or failures in an environment should affect only that environment (and possibly client applications in other environments), but should not affect other unrelated VMs or applications. In particular, failures in one environment should not make it possible to access the protected resources of another environment.
If these objectives are achieved, it is possible for individual VMs to establish their own security objectives. Secure separation creates a barrier around a VM that it can rely on to protect its internal state and user data from adjacent environments. If secure separation is provable and enforceable, it becomes the basis for creating a TEE in a VM.
The Software Stack
Figure 1: Multiple secure VMs provide separate security domains for different applications
Most "larger" embedded systems (that is, those based on 32- or 64-bit processors with hardware memory management units) have a similar software structure: applications run on top of an operating system or task scheduler. A virtualization system allows several of these systems to run in parallel on the same hardware, similar to the example shown in Figure 1. Low level system software called a hypervisor chooses which VM(s) run at any instant of time. The hypervisor in these embedded environments typically provides a few basic facilities:
- Thread creation, with each VM instance allocated to a thread
- Thread scheduling
- Memory allocation and management for threads
- Inter-process communications (IPC) mechanism
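The four facilities above can be pictured with a small sketch. This is a purely hypothetical model, not any real hypervisor's API: a fixed thread table in which each VM instance occupies a slot, a round-robin scheduler, and a single-slot mailbox as the IPC primitive.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of the minimal hypervisor facilities listed above:
 * a thread (VM) table, a round-robin scheduler, and a mailbox-style IPC
 * mechanism. All names and structures are illustrative. */

#define MAX_VMS  4
#define MSG_SIZE 64

typedef struct {
    int active;                  /* slot in use */
    unsigned char msg[MSG_SIZE]; /* single-slot IPC mailbox */
    int msg_pending;
} vm_thread_t;

static vm_thread_t vm_table[MAX_VMS];
static int current = -1;

/* Thread creation: each VM instance is allocated to a thread slot. */
int vm_create(void) {
    for (int i = 0; i < MAX_VMS; i++) {
        if (!vm_table[i].active) { vm_table[i].active = 1; return i; }
    }
    return -1; /* no free slot */
}

/* Thread scheduling: pick the next active VM round-robin. */
int vm_schedule(void) {
    for (int step = 1; step <= MAX_VMS; step++) {
        int i = (current + step) % MAX_VMS;
        if (vm_table[i].active) { current = i; return i; }
    }
    return -1; /* nothing runnable */
}

/* IPC: copy a message into the destination VM's mailbox. */
int vm_send(int dst, const void *buf, size_t len) {
    if (dst < 0 || dst >= MAX_VMS || !vm_table[dst].active || len > MSG_SIZE)
        return -1;
    memcpy(vm_table[dst].msg, buf, len);
    vm_table[dst].msg_pending = 1;
    return 0;
}
```

A real microkernel-style hypervisor adds priorities, capabilities, and memory management on top of this skeleton, but the basic division of labor is the same.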
Correctness of the hypervisor is critical to the integrity of the system. Faults in the hypervisor can lead to separation violations between VMs in the same way that faults in an operating system can lead to memory protection violations between user applications in a traditional OS. For this reason, hypervisors are designed to be very small and easy to inspect, to allow the correctness of the code to be validated.
When the hypervisor selects a VM for execution, it must first save the state of any other VM that was executing using the same resources as the selected VM will use. If resources such as cache memory could be accessed from the new VM to leak information from the previous VM's state, they must be cleared before transferring control to the next VM. For both performance and security reasons, it is preferable to provide hardware support for some aspects of virtualization. This is discussed further in the next section.
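The save-scrub-restore sequence can be sketched as follows. The struct and buffer here are stand-ins, not real hardware state: the point is only the ordering, in which shared state is scrubbed after saving the outgoing VM and before loading the incoming one.

```c
#include <string.h>

/* Illustrative sketch of a VM switch: save the outgoing VM's state, scrub
 * shared resources the next VM could read, then load the incoming VM.
 * The register file and scratch buffer are hypothetical stand-ins. */

typedef struct {
    unsigned long regs[8];   /* stand-in for the CPU register file */
} vm_state_t;

static unsigned long live_regs[8];       /* "hardware" registers */
static unsigned char shared_scratch[32]; /* stand-in for a leaky shared
                                          * resource such as a cache */

void vm_switch(vm_state_t *out, const vm_state_t *in) {
    /* 1. Save the outgoing VM's state. */
    memcpy(out->regs, live_regs, sizeof live_regs);
    /* 2. Scrub shared state the incoming VM could observe. */
    memset(shared_scratch, 0, sizeof shared_scratch);
    /* 3. Load the incoming VM's state and transfer control. */
    memcpy(live_regs, in->regs, sizeof live_regs);
}
```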
For hardware peripherals that do not provide direct support for virtualization, the hypervisor must intervene to control access to hardware resources from VMs. This intervention may take the form of dedicating a particular hardware resource to a single VM, or access to the hardware may be shared among multiple VMs.
Hardware resources for which access is mediated by the hypervisor are said to be "paravirtualized". Paravirtualization is expensive: every access to a paravirtualized device requires that the hypervisor be invoked, which can involve changing processor states and saving a significant amount of VM state. In addition, paravirtualized devices require that special device drivers be written for each VM operating system in which the device is accessed.
Hardware Support
As mentioned above, embedded virtualization can benefit from hardware virtualization support, both for performance and security reasons. Desirable hardware support includes:
- Secure separation of VM memory spaces from those of other VMs
- One or more additional privilege levels that allow the hypervisor to run at a higher privilege level than VM operating systems
- I/O virtualization to associate device I/O activity with VMs that are using those devices
Consider the simplified tablet computer applications processor shown in Figure 2. This processor has the following major subsystems:
- A general purpose CPU
- A CPU memory management unit (MMU) closely coupled to level 2 cache
- A graphics processing unit (GPU)
- A GPU MMU
- A system bus to interconnect the components
- Unified off-chip DDRx RAM
Figure 2: Simplified tablet applications processor
Typically when people think of virtualizing a system, they first think of the CPU, since this is where most user code runs. In today's embedded applications, the CPU and its MMU separate memory access into two levels: user space for application code, and kernel space for the operating system. Every memory access is flagged with a hardware bit that identifies whether the access is user space or kernel space.¹ Memory pages are marked with a user or kernel space identifier when they are allocated, and the CPU MMU ensures that user space programs (those executing in user privilege level) are not allowed to access kernel space pages. The MMU is also responsible for translating user or kernel space virtual addresses into physical memory addresses.
When the system is virtualized, the hypervisor runs at the highest privilege level², followed by VM OSes at the next highest privilege level, and finally VM user space applications at the lowest privilege level. Each VM in the system is assigned a unique VM identifier (VMID; by convention, ID 0 is used for the hypervisor) that is used to track memory page allocations to the appropriate VM.
The CPU MMU provides two levels of address space translation. The top layer appears to VMs as a conventional Translation Lookaside Buffer (TLB) based MMU, translating VM virtual addresses to what appears to the VM to be a physical address space. The VM physical address space is actually a virtual address space created for it by the hypervisor when the VM was started. The lower layer of the MMU performs translation from the VM physical address space to actual physical addresses (referred to as root physical address space). This second layer of translation enforces VM operation within a memory space allocated to it by the hypervisor. Any attempt by a VM to access memory outside of its assigned range generates a protection violation fault.
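The two-stage walk can be modeled with a toy example. Real MMUs use multi-level, TLB-backed page tables; the flat tables below are hypothetical and exist only to show the containment property: a page mapped by the guest but not granted by the hypervisor still faults.

```c
#include <stdint.h>

/* Hypothetical two-stage translation model. Stage 1 maps a VM virtual
 * address to a VM "physical" address; stage 2 maps that to a root physical
 * address and enforces the range the hypervisor granted the VM. */

#define PAGE  4096u
#define FAULT UINT32_MAX
#define NPAGES 16

/* Stage 1: per-VM guest table (guest virtual page -> guest physical page). */
static uint32_t stage1[NPAGES];
/* Stage 2: hypervisor-owned table (guest physical page -> root physical
 * page); FAULT marks pages not granted to this VM. */
static uint32_t stage2[NPAGES];

uint32_t translate(uint32_t guest_va) {
    uint32_t vpn = guest_va / PAGE, off = guest_va % PAGE;
    if (vpn >= NPAGES || stage1[vpn] == FAULT)
        return FAULT;                 /* guest-level page fault */
    uint32_t gppn = stage1[vpn];
    /* Stage 2 enforces the memory range assigned by the hypervisor: any
     * guest-physical page outside it is a protection violation. */
    if (gppn >= NPAGES || stage2[gppn] == FAULT)
        return FAULT;
    return stage2[gppn] * PAGE + off; /* root physical address */
}
```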
Virtualization beyond the CPU
In our example, the main application environment includes a GPU. The GPU is often thought of as a hardware peripheral that does the CPU's bidding. However, the GPU is a complex processor-based system in its own right. GPUs typically run firmware that is loaded at boot time, and operate on data that is in the CPU's address space. When the system is virtualized, it is logical that the GPU be virtualized as well, with an instance created as part of the VM for those VMs that need it.
As part of this approach, the GPU MMU should enforce that GPU memory accesses are constrained to the address space of the VM that instantiated it. This constraint prevents the GPU from being used to obtain access to memory regions allocated to the hypervisor or other VMs in the system.
An alternative is that the GPU is treated as a shared peripheral that performs services on behalf of client VMs. The difficulty with this approach is two-fold. First, the GPU operates on data designated by the CPU using linked lists of command and data structures. These data structures point to data in the CPU's physical address space. When the CPU has been virtualized, this "physical" address space is in fact a virtual address space allocated by the hypervisor. Thus, part of the driver for the GPU must translate these addresses, meaning that the GPU must be paravirtualized to access the lower level TLBs. This has potentially negative performance implications on both the GPU and the system as a whole.
Second, because the GPU is a software programmable element, its ability to operate on physical memory makes it a desirable target for crackers and malware to exploit in system attacks. Note that the GPU can operate on memory in any region, even physical memory that is part of VMs that do not incorporate a virtual GPU, making the GPU a potential vector to attack any VM in the system.
In modern GPUs, a single instance of firmware is used by the GPU, regardless of whether the GPU has been virtualized. This means that the GPU firmware should be controlled and loaded by the hypervisor, and is a root of trust element. Any crack in the system that allows the GPU code to be modified creates an opportunity to craft an attack using the GPU. Eliminating the ability for the GPU to operate on physical memory across VMs eliminates this threat for VMs that are not GPU users.
These same considerations apply to other subsystems within the SoC, such as video encoders and decoders, camera processors, and network and encryption subsystems. Particular attention needs to be given to devices that incorporate firmware programmable processors and that operate on memory shared with CPUs. Devices with programmable DMA engines are also potential attack vectors. The key consideration for all these devices is that leakage across VMs must be prevented. Techniques to provide this protection include segmenting memory into regions associated with individual VMs, or associating bus transactions with individual VMs. The tradeoffs to be made are the performance impacts of paravirtualization versus the silicon area devoted to hardware units such as MMUs that translate from VM address space to physical address space while enforcing VM separation. These tradeoffs should be assessed as part of a comprehensive threat and risk assessment (TRA) during the architecture and design phases.
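The second technique mentioned, associating bus transactions with individual VMs, can be sketched as a region check. The region table, field names, and addresses below are hypothetical; the idea is that hardware tags each transaction from a DMA-capable device with the requesting VM's VMID and checks it against regions the hypervisor programmed.

```c
#include <stdint.h>

/* Illustrative sketch of per-VM bus transaction filtering: a DMA-capable
 * device's accesses are tagged with a VMID and checked against a region
 * table programmed by the hypervisor. Structure and values are hypothetical. */

typedef struct {
    int      vmid;
    uint32_t base;   /* start of the region granted to this VM */
    uint32_t size;
} region_t;

static const region_t regions[] = {
    { 1, 0x1000, 0x1000 },   /* VM 1: [0x1000, 0x2000) */
    { 2, 0x2000, 0x2000 },   /* VM 2: [0x2000, 0x4000) */
};

/* Returns 1 if a transaction tagged with 'vmid' may access
 * [addr, addr + len); 0 means the access is blocked. */
int bus_access_ok(int vmid, uint32_t addr, uint32_t len) {
    for (unsigned i = 0; i < sizeof regions / sizeof regions[0]; i++) {
        const region_t *r = &regions[i];
        if (r->vmid == vmid && addr >= r->base &&
            len <= r->size && addr - r->base <= r->size - len)
            return 1;        /* fully contained in a granted region */
    }
    return 0;
}
```

A hardware implementation performs this comparison in parallel against a handful of region registers, which is one side of the silicon-area-versus-paravirtualization tradeoff discussed above.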
Creating the Trusted Execution Environment
Once securely separated virtual machines exist, trusted execution environments can be created and customized for the particular applications running on the processor. In this case, a TEE is a VM that provides cryptographic functions, secure storage and access to hardware features such as an embedded Device Unique Key stored securely in on-chip one-time-programmable (OTP) memory. TEE VMs will also typically take advantage of cryptographic hardware that is available in the SoC.
Application VMs communicate with the TEE through an API and device driver that connects to the TEE using a messaging layer, which provides secure services to client application VMs. This messaging layer is implemented on the hypervisor's IPC mechanism, which provides a more efficient means to communicate between VMs than the virtual network abstraction typically used in server virtualization implementations.
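A minimal sketch of such a messaging layer follows. The opcodes, message format, and the trivial XOR "cipher" are entirely invented for illustration; a real TEE would dispatch to hardware-backed crypto services and never expose key material.

```c
#include <stdint.h>

/* Hypothetical TEE messaging sketch: a client VM marshals a request into a
 * fixed-format message, the hypervisor's IPC carries it across the VM
 * boundary, and a dispatcher inside the TEE VM routes it to a service. */

enum { TEE_OP_ENCRYPT = 1, TEE_OP_GET_RANDOM = 2 };

typedef struct {
    uint8_t op;           /* requested service */
    uint8_t len;          /* payload length */
    uint8_t payload[32];
} tee_msg_t;

/* TEE-side dispatcher, running behind the IPC boundary.
 * Returns 0 on success, -1 for unknown or malformed requests. */
int tee_dispatch(tee_msg_t *m) {
    if (m->len > sizeof m->payload)
        return -1;        /* reject malformed input at the boundary */
    switch (m->op) {
    case TEE_OP_ENCRYPT:
        /* Stand-in transform: a real TEE would use its crypto hardware
         * and the on-chip Device Unique Key. */
        for (uint8_t i = 0; i < m->len; i++) m->payload[i] ^= 0xA5;
        return 0;
    case TEE_OP_GET_RANDOM:
        for (uint8_t i = 0; i < m->len; i++) m->payload[i] = 4; /* placeholder */
        return 0;
    default:
        return -1;
    }
}
```

Because requests cross the VM boundary as plain messages, the TEE can validate every field before acting on it, which is what keeps a compromised client VM from corrupting TEE state.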
The nice thing about a system like this is that it can be readily extended with new applications. Applications that need it can be distributed with their own secure VM and TEE to provide security features customized to the application.
TEE design and development is still a highly specialized niche, and not for the faint of heart. But the specialization of TEEs and the provisioning of those TEEs in separate VMs reduce the attack surface of any individual TEE, and prevent damage from a successful attack from spreading to the rest of the system.
Final Thoughts
Secure virtualization for embedded devices offers the prospect of developing more robust, reliable and secure systems. The approach builds on, and is compatible with, well established principles for secure system design promulgated by organizations like GlobalPlatform. Adding hardware support to extend the VM to incorporate subsystems beyond the CPU provides an efficient means to reliably separate VMs from each other, and is the basis for extending the security of the system using special purpose VMs to support particular applications. This, in turn, enhances the prospects of maintaining a higher average level of security system-wide, and limits the damage done in a successful attack. In the end, making systems more secure and reliable for their users while enabling a great user experience is a primary objective of system designers everywhere.
About Elliptic Technologies
Elliptic Technologies is a leader in the virtualization space, working with the top global technology partners to enable an ecosystem of more securely connected devices. Elliptic's highly integrated solutions enable the most efficient silicon design and highest security levels for some of the world's most popular products in markets such as mobile, networking, home entertainment, smart grid and automotive. Elliptic is leading the world in DRM and link protection solutions with flagship technology tVault™ for downloading and sharing premium content between multiple devices, including Microsoft® PlayReady®, DTCP-IP and HDCP SDKs built for "trusted execution environments" used in consumer electronics.
Learn more at www.elliptictech.com or contact us: info@elliptictech.com
¹ The real situation is much more complicated than depicted. The MMU also enforces access controls on user space memory accesses by process ID.
² In practice, the hypervisor execution level may also be split into two levels as is done, for example, by the MIPS implementation of virtualization. The highest privilege level is used for the hypervisor microkernel; the next highest level provides a hypervisor user level.