.. _introduction:

What Is ACRN
############

Introduction
************

IoT and Edge system developers face mounting demands on the systems they
build, as connected devices are increasingly expected to support a range of
hardware resources, operating systems, and software tools and applications.
Virtualization is key to meeting these broad needs. Most existing hypervisor
and Virtual Machine Manager solutions don't offer the right size, boot speed,
real-time support, and flexibility for IoT and Edge systems. Data center
hypervisor code is too big, doesn't offer safety or hard real-time
capabilities, and imposes too much performance overhead for embedded
development. The ACRN hypervisor was built to fill this need.

ACRN is a Type 1 reference hypervisor stack that runs on bare-metal hardware,
boots quickly, and is configurable for a variety of IoT, Edge, and embedded
device solutions. It provides a flexible, lightweight hypervisor, built with
real-time and safety-criticality in mind, optimized to streamline embedded
development through an open-source, scalable reference platform. Its
architecture can run multiple OSs and VMs, managed securely, on a
consolidated system by means of efficient virtualization. Resource
partitioning ensures that co-existing heterogeneous workloads on one system
hardware platform do not interfere with each other.

ACRN defines a reference framework implementation for virtual device
emulation, called the ACRN Device Model or DM, with rich I/O mediators. It
also supports non-emulated device passthrough access to satisfy the
time-sensitive and low-latency access needs of real-time applications. To
keep the hypervisor code base as small and efficient as possible, the bulk
of the Device Model implementation resides in the Service VM to provide
sharing and other capabilities.

ACRN is built to virtualize embedded IoT and Edge functions (camera, audio,
graphics, storage, networking, and more), so it's ideal for a broad range of
IoT and Edge uses, including industrial, automotive, and retail
applications.

Licensing
*********

.. _BSD-3-Clause: https://opensource.org/licenses/BSD-3-Clause

The ACRN hypervisor and ACRN Device Model software are provided
under the permissive `BSD-3-Clause`_ license, which allows
*"redistribution and use in source and binary forms, with or without
modification"* together with the intact copyright notice and
disclaimers noted in the license.


Key Capabilities
****************

ACRN has these key capabilities and benefits:

* **Small Footprint**: The hypervisor is optimized for resource-constrained
  devices, with significantly fewer lines of code (about 40K) than
  datacenter-centric hypervisors (over 150K).
* **Built with Real-time in Mind**: Low latency, fast boot times, and
  responsive hardware device communication support near bare-metal
  performance. Both soft and hard real-time VM needs are met, including no
  VMExit during runtime operations, LAPIC and PCI passthrough, static CPU
  assignment, and more.
* **Built for Embedded IoT and Edge Virtualization**: ACRN supports
  virtualization beyond the basics, including CPU, I/O, and networking
  virtualization of embedded IoT and Edge device functions and a rich set of
  I/O mediators to share devices across multiple VMs. The Service VM
  communicates directly with the system hardware and devices, ensuring
  low-latency access. The hypervisor is booted directly by the bootloader
  for fast and secure booting.
* **Built with Safety-Critical Virtualization in Mind**: Safety-critical
  workloads can be isolated from the rest of the VMs and given priority to
  meet their design needs. Partitioning of resources lets safety-critical
  and non-safety-critical domains coexist on one SoC using Intel VT-backed
  isolation.
* **Adaptable and Flexible**: ACRN offers multi-OS support with efficient
  virtualization for VM OSs including Linux, Zephyr, and Windows. ACRN
  scenario configurations support shared, partitioned, and hybrid VM models
  to match a variety of application use cases.
* **Truly Open Source**: With its permissive BSD licensing and reference
  implementation, ACRN offers scalable support with significant up-front R&D
  cost savings, code transparency, and collaborative software development
  with industry leaders.

.. include:: ../../../../README.rst
   :start-after: start_include_here


Background
**********

The ACRN architecture has evolved since its initial v0.1 release in July
2018. Beginning with the v1.1 release, the ACRN architecture has the
flexibility to support VMs with shared hardware resources, partitioned
hardware resources, and a hybrid VM model that simultaneously supports both
shared and partitioned resources. This enables a workload consolidation
solution: multiple separate systems are combined onto a single compute
platform that runs heterogeneous workloads, with hard and soft real-time
support.

Workload management and orchestration are also enabled with ACRN, allowing
open-source orchestrators such as OpenStack to manage ACRN VMs. ACRN supports
secure container runtimes such as Kata Containers orchestrated via Docker or
Kubernetes.


High-Level Architecture
***********************

ACRN is a Type 1 hypervisor, meaning it runs directly on bare-metal
hardware. It implements a hybrid Virtual Machine Manager (VMM) architecture,
using a privileged Service VM that manages the I/O devices and provides I/O
mediation. Multiple User VMs are supported, each potentially running a
different OS. By running systems in separate VMs, you can isolate VMs and
their applications, reducing potential attack surfaces and minimizing
interference, though potentially introducing additional latency for
applications.

ACRN relies on Intel Virtualization Technology (Intel VT) and runs in
Virtual Machine Extension (VMX) root operation, also called host mode or VMM
mode. All the User VMs and the Service VM run in VMX non-root operation, or
guest mode.

The Service VM runs with the system's highest virtual machine priority to
help meet time-sensitive device requirements and system quality of service
(QoS). Service VM tasks run with mixed priority. Upon a callback servicing a
particular User VM request, the corresponding software (or mediator) in the
Service VM inherits the User VM priority.

As mentioned earlier, hardware resources used by VMs can be configured into
two parts, as shown in this hybrid VM sample configuration:

.. figure:: images/ACRN-V2-high-level-arch-1-0.75x.png
   :align: center
   :name: V2-hl-arch

   ACRN High-Level Architecture Hybrid Example

Shown on the left of :numref:`V2-hl-arch` are the partitioned resources
dedicated to a User VM that the hypervisor launches before the Service VM is
started. This pre-launched VM runs independently of other virtual machines
and owns dedicated hardware resources, such as a CPU core, memory, and I/O
devices. Other VMs may not even be aware of the pre-launched VM's existence.
Because of this, it can be used as a Safety VM that runs hardware failure
detection code and can take emergency actions when system-critical failures
occur. Failures in other VMs or rebooting the Service VM will not directly
impact execution of this pre-launched Safety VM.

Shown on the right of :numref:`V2-hl-arch`, the remaining hardware resources
are shared among the Service VM and User VMs. The Service VM is launched by
the hypervisor after any pre-launched VMs are launched. The Service VM can
access remaining hardware resources directly by running native drivers and
provides device sharing services to the User VMs through the Device Model.
These post-launched User VMs can run one of many OSs, including Ubuntu or
Windows, or a real-time OS such as Zephyr, VxWorks, or Xenomai. Because of
its real-time capability, a real-time VM (RTVM) can be used for software
programmable logic controller (PLC), inter-process communication (IPC), or
robotics applications. These shared User VMs could be impacted by a failure
in the Service VM since they may rely on its mediation services for device
access.

The Service VM owns most of the devices, including the platform devices, and
provides I/O mediation. The notable exceptions are the devices assigned to
the pre-launched User VM. Some PCIe devices may be passed through to the
post-launched User VMs via the VM configuration.

The ACRN hypervisor also runs the ACRN VM manager to collect running
information of the User VMs and to control them, such as starting, stopping,
and pausing a VM, and pausing or resuming a virtual CPU.

See the :ref:`hld-overview` developer reference material for more in-depth
information.

.. _static-configuration-scenarios:

Static Configuration Based on Scenarios
***************************************

Scenarios are a way to describe the system configuration settings of the
ACRN hypervisor, the VMs, and the resources they have access to, matched to
your specific application's needs for compute, memory, storage, graphics,
networking, and other devices. Scenario configurations are stored in an XML
file and edited using the ACRN Configurator.

Following a general embedded-system programming model, the ACRN hypervisor
is designed to be statically customized at build time per hardware platform
and scenario, rather than providing one binary for all scenarios. Dynamic
configuration parsing is not used in the ACRN hypervisor for these reasons:

* **Reduce complexity**. ACRN is a lightweight reference hypervisor, built
  for embedded IoT and Edge. As new platforms for embedded systems are
  rapidly introduced, support for one binary could require more and more
  complexity in the hypervisor, which is something we strive to avoid.
* **Maintain small footprint**. Implementing dynamic parsing introduces
  hundreds or thousands of lines of code. Avoiding dynamic parsing helps
  keep the hypervisor's lines of code (LOC) in a desirable range (less than
  40K).
* **Improve boot time**. Dynamic parsing at runtime increases the boot
  time. Using a static build-time configuration instead helps improve the
  boot time of the hypervisor.

The scenario XML file and a target board XML file are used together to build
an ACRN hypervisor image tailored to your hardware and application needs.
The ACRN project provides the Board Inspector tool to automatically create
the board XML file by inspecting the target hardware. ACRN also provides the
:ref:`ACRN Configurator tool <acrn_configuration_tool>` to create and edit a
tailored scenario XML file based on predefined sample scenario
configurations.

.. _usage-scenarios:

Scenario Types
**************

Here are three sample scenario types and diagrams to illustrate how you can
define your own configuration scenarios.

* **Shared** is a traditional computing, memory, and device resource sharing
  model among VMs. The ACRN hypervisor launches the Service VM. The Service
  VM then launches any post-launched User VMs and provides device and
  resource sharing mediation through the Device Model. The Service VM runs
  the native device drivers to access the hardware and provides I/O
  mediation to the User VMs.

  .. figure:: images/ACRN-industry-example-1-0.75x.png
     :align: center
     :name: arch-shared-example

     ACRN High-Level Architecture Shared Example

  Virtualization is especially important in industrial environments because
  of device and application longevity. Virtualization enables factories to
  modernize their control system hardware by using VMs to run older control
  systems and operating systems far beyond their intended retirement dates.

  The ACRN hypervisor needs to run different workloads with little-to-no
  interference, increase security functions that safeguard the system, run
  hard real-time sensitive workloads together with general computing
  workloads, and conduct data analytics for timely actions and predictive
  maintenance.

  In this example, one post-launched User VM provides Human Machine
  Interface (HMI) capability, another provides Artificial Intelligence (AI)
  capability, some compute function runs in a Kata Container, and the RTVM
  runs the soft Programmable Logic Controller (PLC) that requires hard
  real-time characteristics.

  - The Service VM provides device sharing functionality, such as disk and
    network mediation, to other virtual machines. It can also run an
    orchestration agent allowing User VM orchestration with tools such as
    Kubernetes.
  - The HMI application OS can be Windows or Linux. Windows is dominant in
    industrial HMI environments.
  - ACRN can support a soft real-time OS such as preempt-rt Linux for
    soft-PLC control, or a hard real-time OS that offers less jitter.

* **Partitioned** is a VM resource partitioning model for User VMs that
  require independence and isolation from other VMs. A partitioned VM's
  resources are statically configured and are not shared with other VMs.
  Partitioned User VMs can be Real-Time VMs, Safety VMs, or standard VMs,
  and are launched at boot time by the hypervisor. There is no need for the
  Service VM or Device Model since all partitioned VMs run native device
  drivers and directly access their configured resources.

  .. figure:: images/ACRN-partitioned-example-1-0.75x.png
     :align: center
     :name: arch-partitioned-example

     ACRN High-Level Architecture Partitioned Example

  This scenario is a simplified configuration showing VM partitioning: both
  User VMs are independent and isolated, they do not share resources, and
  both are automatically launched at boot time by the hypervisor. The User
  VMs can be Real-Time VMs (RTVMs), Safety VMs, or standard User VMs.

* **Hybrid** simultaneously supports both sharing and partitioning on the
  consolidated system. The pre-launched (partitioned) User VMs, with their
  statically configured and unshared resources, are started by the
  hypervisor. The hypervisor then launches the Service VM. The
  post-launched (shared) User VMs are started by the Device Model in the
  Service VM and share the remaining resources.

  .. figure:: images/ACRN-hybrid-rt-example-1-0.75x.png
     :align: center
     :name: arch-hybrid-rt-example

     ACRN High-Level Architecture Hybrid-RT Example

  In this Hybrid real-time (RT) scenario, a pre-launched RTVM is started by
  the hypervisor. The Service VM runs a post-launched User VM that runs
  non-safety or non-real-time tasks.

The :ref:`acrn_configuration_tool` tutorial explains how to use the ACRN
Configurator to create your own scenario, or to view and modify an existing
one.

.. _dm_architecture_intro:

ACRN Device Model Architecture
******************************

Because devices may need to be shared between VMs, device emulation is used
to give VM applications (and their OSs) access to these shared devices.
Traditionally there are three architectural approaches to device emulation:

* **Device emulation within the hypervisor**: a common method, implemented
  for example in the VMware Workstation product (an operating system-based
  hypervisor). In this method, the hypervisor includes emulations of common
  devices that the various guest operating systems can share, including
  virtual disks, virtual network adapters, and other necessary platform
  elements.

* **User space device emulation**: rather than being embedded within the
  hypervisor, device emulation is implemented in a separate user space
  application. QEMU, for example, provides this kind of device emulation,
  which is also used by other hypervisors. This model is advantageous
  because the device emulation is independent of the hypervisor and can
  therefore be shared across hypervisors. It also permits arbitrary device
  emulation without having to burden the hypervisor (which operates in a
  privileged state) with this functionality.

* **Paravirtualized (PV) drivers**: a hypervisor-based device emulation
  model introduced by the `XEN Project`_. In this model, the hypervisor
  includes the physical device drivers, and each guest operating system
  includes a hypervisor-aware driver that works in concert with the
  hypervisor drivers.

.. _XEN Project:
   https://wiki.xenproject.org/wiki/Understanding_the_Virtualization_Spectrum

There's a price to pay for sharing devices. Whether device emulation is
performed in the hypervisor, or in user space within an independent VM,
overhead exists. This overhead is worthwhile as long as the devices need to
be shared by multiple guest operating systems. If sharing is not necessary,
then there are more efficient methods for accessing devices, such as
passthrough.

Emulation, paravirtualization, and passthrough are all used in the ACRN
project. ACRN defines a device emulation model where the Service VM owns all
devices not previously partitioned to pre-launched User VMs and emulates
these devices for the User VM via the ACRN Device Model. The ACRN Device
Model thereby acts as the placeholder for the User VM: it allocates memory
for the User VM OS, configures and initializes the devices used by the User
VM, loads the virtual firmware, initializes the virtual CPU state, and
invokes the ACRN hypervisor service to execute the guest instructions. The
ACRN Device Model is an application running in the Service VM that emulates
devices based on command-line configuration.
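
To make this sequence concrete, here is a conceptual C sketch of the startup
flow just described. This is not the actual ``acrn-dm`` source; all function
names here are hypothetical placeholders standing in for the real
implementation:

.. code-block:: c

   #include <stdio.h>
   #include <stdlib.h>

   /* Hypothetical stand-ins for the Device Model's startup duties. */
   static void *alloc_guest_memory(size_t bytes)
   {
       /* Backs the User VM's physical address space. */
       return calloc(1, bytes);
   }

   static void init_emulated_devices(void)  { puts("devices configured"); }
   static void load_virtual_firmware(void)  { puts("firmware loaded"); }
   static void init_vcpu_state(void)        { puts("vCPU state set"); }

   static void run_guest(void)
   {
       /* The real Device Model invokes an ACRN hypervisor service here
        * to begin executing guest instructions. */
       puts("guest running");
   }

   int main(void)
   {
       void *guest_ram = alloc_guest_memory(64UL * 1024 * 1024);

       if (guest_ram == NULL)
           return 1;
       init_emulated_devices();
       load_virtual_firmware();
       init_vcpu_state();
       run_guest();
       free(guest_ram);
       return 0;
   }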

See the :ref:`hld-devicemodel` developer reference for more information.

Device Passthrough
******************

At the highest level, device passthrough is about providing isolation of a
device to a given guest operating system so that the device can be used
exclusively by that User VM.

.. figure:: images/device-passthrough.png
   :align: center
   :name: device-passthrough

   Device Passthrough

Near-native performance can be achieved by using device passthrough. This
is ideal for networking applications (or those with high disk I/O needs)
that have not adopted virtualization because of contention and performance
degradation through the hypervisor (using a driver in the hypervisor or
through the hypervisor to a user space emulation). Assigning devices to
specific User VMs is also useful when those devices inherently wouldn't be
shared. For example, if a system includes multiple video adapters, those
adapters could be passed through to unique User VM domains.

Finally, there may be specialized PCI devices that only one User VM uses, so
they should be passed through to that User VM. Individual USB ports could be
isolated to a given domain too, or a serial port (which is itself not
shareable) could be isolated to a particular User VM. The ACRN hypervisor
supports USB controller passthrough only and does not support passthrough
for a legacy serial port (for example, ``0x3f8``).

Hardware Support for Device Passthrough
=======================================

Intel's processor architectures provide support for device passthrough with
Intel Virtualization Technology for Directed I/O (VT-d). VT-d maps User VM
physical addresses to machine physical addresses, so devices can use User VM
physical addresses directly. When this mapping occurs, the hardware takes
care of access (and protection), and the User VM OS can use the device as if
it were running on a non-virtualized system. In addition to mapping User VM
memory to physical memory, isolation prevents the device from accessing
memory belonging to other VMs or the hypervisor.

Another innovation that helps interrupts scale to large numbers of VMs is
Message Signaled Interrupts (MSI). Rather than relying on physical interrupt
pins to be associated with a User VM, MSI transforms interrupts into
messages that are more easily virtualized, scaling to thousands of
individual interrupts. MSI has been available since PCI version 2.2 and is
also available in PCI Express (PCIe). MSI is ideal for I/O virtualization,
as it allows isolation of interrupt sources (as opposed to physical pins
that must be multiplexed or routed through software).
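
MSI support is advertised through a device's PCI capability list. As a brief
illustration of the mechanics, the sketch below walks that list looking for
the MSI capability (ID ``0x05``). The ``pci_cfg_read8``/``pci_cfg_read16``
accessors are assumed helpers for reading PCI configuration space, not an
ACRN API:

.. code-block:: c

   #include <stdint.h>

   #define PCI_STATUS_REG       0x06U  /* status register offset */
   #define PCI_STATUS_CAP_LIST  0x10U  /* bit 4: capability list present */
   #define PCI_CAP_PTR_REG      0x34U  /* offset of first capability pointer */
   #define PCI_CAP_ID_MSI       0x05U  /* MSI capability ID */

   /* Assumed helpers for PCI configuration space access. */
   uint8_t  pci_cfg_read8(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);
   uint16_t pci_cfg_read16(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);

   /* Return the config-space offset of the MSI capability, or 0 if absent. */
   static uint8_t find_msi_cap(uint8_t bus, uint8_t dev, uint8_t fn)
   {
       uint16_t status = pci_cfg_read16(bus, dev, fn, PCI_STATUS_REG);

       if ((status & PCI_STATUS_CAP_LIST) == 0U)
           return 0U;  /* device has no capability list */

       /* The low two bits of capability pointers are reserved; mask them. */
       uint8_t pos = pci_cfg_read8(bus, dev, fn, PCI_CAP_PTR_REG) & 0xFCU;

       while (pos != 0U) {
           uint8_t cap_id = pci_cfg_read8(bus, dev, fn, pos);

           if (cap_id == PCI_CAP_ID_MSI)
               return pos;
           pos = pci_cfg_read8(bus, dev, fn, pos + 1U) & 0xFCU;
       }
       return 0U;
   }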

Hypervisor Support for Device Passthrough
=========================================

Using the latest virtualization-enhanced processor architectures,
hypervisors and virtualization solutions, including Xen, KVM, and the ACRN
hypervisor, can support device passthrough (using VT-d). In most cases, the
User VM OS must be compiled to support passthrough by using kernel
build-time options.


Boot Sequence
*************

.. _grub: https://www.gnu.org/software/grub/manual/grub/
.. _Slim Bootloader: https://www.intel.com/content/www/us/en/design/products-and-solutions/technologies/slim-bootloader/overview.html

The ACRN hypervisor can be booted directly by a third-party bootloader. A
popular choice is `grub`_, which is also widely used by Linux distributions.

:ref:`using_grub` has an introduction on how to boot the ACRN hypervisor
with GRUB.

In :numref:`boot-flow-2`, we show the boot sequence:

.. graphviz:: images/boot-flow-2.dot
  :name: boot-flow-2
  :align: center
  :caption: ACRN Hypervisor Boot Flow

The boot process proceeds as follows:

#. UEFI boots GRUB.
#. GRUB boots the ACRN hypervisor and loads the VM kernels as Multiboot
   modules.
#. The ACRN hypervisor verifies and boots the kernels of the pre-launched VM
   and the Service VM.
#. In the Service VM launch path, the Service VM kernel verifies and loads
   the ACRN Device Model and virtual bootloader through ``dm-verity``.
#. The virtual bootloader starts the User VM's verified boot process.

In this boot mode, the boot options of a pre-launched VM and the Service VM
are defined by default in the ``bootargs`` member of the struct
``vm_configs[vm id].os_config`` in the source file
``configs/scenarios/$(SCENARIO)/vm_configurations.c`` (which resides under
the hypervisor build directory). These boot options can be overridden by the
GRUB menu. See :ref:`using_grub` for details. The boot options of a
post-launched VM are not covered by hypervisor source code or a GRUB menu;
they are defined in the User VM's OS image file or specified by launch
scripts. A simplified sketch of this configuration table is shown below.
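
The following C sketch illustrates the shape of that statically built table.
The types are simplified, illustrative stand-ins rather than ACRN's actual
headers, and the ``bootargs`` strings are example values only:

.. code-block:: c

   /* Simplified, illustrative types (not ACRN's actual definitions). */
   struct acrn_vm_os_config {
       const char *kernel_mod_tag;  /* ties the kernel to a GRUB module */
       const char *bootargs;        /* the VM's kernel boot options */
   };

   struct acrn_vm_config {
       const char *name;
       struct acrn_vm_os_config os_config;
   };

   /* Built statically per scenario; vm_configs[vm id].os_config.bootargs
    * holds the boot options referenced in the text above. */
   static const struct acrn_vm_config vm_configs[] = {
       [0] = {
           .name = "PRE_LAUNCHED_VM0",
           .os_config = {
               .kernel_mod_tag = "Linux_bzImage",
               .bootargs = "console=ttyS0 root=/dev/sda2 rw",
           },
       },
       [1] = {
           .name = "SERVICE_VM",
           .os_config = {
               .kernel_mod_tag = "Linux_bzImage",
               .bootargs = "console=ttyS0 root=/dev/sda3 rw",
           },
       },
   };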

`Slim Bootloader`_ is an alternative boot firmware that can be used to boot
ACRN. The `Boot ACRN Hypervisor
<https://slimbootloader.github.io/how-tos/boot-acrn.html>`_ tutorial
provides more information on how to use SBL with ACRN.

Learn More
**********

The ACRN documentation offers more details on topics found in this
introduction, including the ACRN hypervisor architecture, Device Model,
Service VM, and more.

These documents provide introductory information about development with
ACRN:

* :ref:`overview_dev`
* :ref:`gsg`
* :ref:`acrn_configuration_tool`

These documents provide more details and in-depth discussions of the ACRN
hypervisor architecture and high-level design, and a collection of advanced
guides and tutorials:

* :ref:`hld`
* :ref:`develop_acrn`