.. _acrn_on_qemu:

Enable ACRN Over QEMU/KVM
#########################

This document shows how to bring up ACRN as a nested hypervisor on top of
QEMU/KVM with basic functionality such as running a Service VM and User VM.
Running ACRN as a nested hypervisor gives you an easy way to evaluate ACRN in an
emulated environment instead of setting up a separate hardware platform
configuration.

This setup was tested with the following configuration:

- ACRN hypervisor: ``v3.0`` tag
- ACRN kernel: ``acrn-v3.0`` tag
- QEMU emulator version: 4.2.1
- Host OS: Ubuntu 20.04
- Service VM/User VM OS: Ubuntu 20.04
- Platforms tested: Kaby Lake, Skylake, Whiskey Lake, Tiger Lake

Prerequisites
*************

1. Make sure the platform supports Intel VMX as well as VT-d technologies. On
   Ubuntu 20.04, this can be checked by installing the ``kvm-ok`` tool found in
   the ``cpu-checker`` package.

   .. code-block:: bash

      sudo apt install cpu-checker

   Run the ``kvm-ok`` tool. If the output displays **KVM acceleration can be used**,
   the platform supports Intel VMX and VT-d technologies.

   .. code-block:: console

      kvm-ok
      INFO: /dev/kvm exists
      KVM acceleration can be used

2. The host kernel version must be **5.3.0 or later**.
   Ubuntu 20.04 ships with a 5.8.0 kernel (or later),
   so no changes are needed if you are using it.
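
   A quick way to check the running kernel version:

   .. code-block:: none

      uname -r
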

3. Make sure KVM and the following utilities are installed.

   .. code-block:: none

      sudo apt update && sudo apt upgrade -y
      sudo apt install qemu-kvm virtinst libvirt-daemon-system -y
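
   Optionally, confirm that the libvirt daemon is running before creating VMs
   (a quick sanity check; not required by the remaining steps):

   .. code-block:: none

      systemctl is-active libvirtd
      virsh list --all
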

Prepare Service VM (L1 Guest)
*****************************

1. Use the ``virt-install`` command to create the Service VM.

   .. code-block:: none

      virt-install \
      --connect qemu:///system \
      --name ServiceVM \
      --machine q35 \
      --ram 4096 \
      --disk path=/var/lib/libvirt/images/servicevm.img,size=32 \
      --vcpus 4 \
      --virt-type kvm \
      --os-type linux \
      --os-variant ubuntu20.04 \
      --graphics none \
      --clock offset=utc,tsc_present=yes,kvmclock_present=no \
      --qemu-commandline="-machine kernel-irqchip=split -cpu Denverton,+invtsc,+lm,+nx,+smep,+smap,+mtrr,+clflushopt,+vmx,+x2apic,+popcnt,-xsave,+sse,+rdrand,-vmx-apicv-vid,+vmx-apicv-xapic,+vmx-apicv-x2apic,+vmx-flexpriority,+tsc-deadline,+pdpe1gb -device intel-iommu,intremap=on,caching-mode=on,aw-bits=48" \
      --location 'http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/' \
      --extra-args "console=tty0 console=ttyS0,115200n8"

#. Walk through the installation steps as prompted. Here are a few things to note:

   a. Make sure to install an OpenSSH server so that once the installation is
      complete, you can SSH into the system.

      .. figure:: images/acrn_qemu_1.png
         :align: center

   b. We use GRUB to boot ACRN, so make sure you install it when prompted.

      .. figure:: images/acrn_qemu_2.png
         :align: center

   c. After the installation is complete, the Service VM (guest) will restart.

#. Log in to the Service VM guest. Find the IP address of the guest and use it
   to connect via SSH. The IP address can be retrieved using the ``virsh``
   command as shown below.

   .. code-block:: console

      virsh domifaddr ServiceVM
       Name       MAC address          Protocol     Address
      -------------------------------------------------------------------------------
       vnet0      52:54:00:72:4e:71    ipv4         192.168.122.31/24
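
   For example, to connect over SSH using the address shown above (the user
   name below is only a placeholder; use the account you created during
   installation):

   .. code-block:: none

      ssh acrn@192.168.122.31
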

#. Once logged into the Service VM, enable the serial console. After ACRN is
   enabled, the ``virsh`` command will no longer show the IP address.

   .. code-block:: none

      sudo systemctl enable serial-getty@ttyS0.service
      sudo systemctl start serial-getty@ttyS0.service

#. Enable the GRUB menu to choose between Ubuntu and the ACRN hypervisor.
   Modify :file:`/etc/default/grub` and edit these entries:

   .. code-block:: none

      GRUB_TIMEOUT_STYLE=menu
      GRUB_TIMEOUT=5
      GRUB_CMDLINE_LINUX_DEFAULT=""
      GRUB_GFXMODE=text
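
   These settings take effect the next time the GRUB configuration is
   regenerated. A later step runs ``update-grub`` anyway, but you can apply
   them immediately if you prefer:

   .. code-block:: none

      sudo update-grub
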

#. Check the rootfs partition with ``lsblk``; it is ``vda5`` in this example.
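
   A minimal check (the exact device name may differ on your system):

   .. code-block:: none

      lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
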

#. The Service VM guest can also be launched again later using
   ``virsh start ServiceVM --console``. Make sure to use the domain name you
   chose while creating the VM if it is different from ``ServiceVM``.

This concludes the initial configuration of the Service VM. The next steps will
install ACRN in it.

.. _install_acrn_hypervisor:

Install ACRN Hypervisor
***********************

1. Launch the ``ServiceVM`` Service VM guest and log into it (SSH is recommended
   but the console is available too).

   .. important:: All the steps below are performed **inside** the Service VM
      guest that we built in the previous section.

#. Install the ACRN build tools and dependencies following the :ref:`gsg`. Note
   again that we're doing these steps within the Service VM and not on a development
   system as described in the Getting Started Guide.
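
   An abbreviated sketch of that installation is shown below; the package list
   here is illustrative only, and the :ref:`gsg` has the authoritative,
   complete list for the ``v3.0`` release.

   .. code-block:: none

      sudo apt install -y gcc git make vim libssl-dev libpciaccess-dev uuid-dev \
           libsystemd-dev libxml2-dev libusb-1.0-0-dev python3 python3-pip \
           libblkid-dev e2fslibs-dev pkg-config libnuma-dev
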
#. Clone the ACRN hypervisor repository and check out the ``v3.0`` tag.

   .. code-block:: none

      cd ~
      git clone https://github.com/projectacrn/acrn-hypervisor.git
      cd acrn-hypervisor
      git checkout v3.0

#. Build ACRN for QEMU:

   We're using the qemu board XML and shared scenario XML files supplied in the
   repo (``misc/config_tools/data/qemu``) rather than files generated by the
   board inspector or configurator tools.

   .. code-block:: none

      make BOARD=qemu SCENARIO=shared

   For more details, refer to the :ref:`gsg`.

#. Install the ACRN Device Model and tools:

   .. code-block:: none

      sudo make install

#. Copy ``acrn.32.out`` to the Service VM guest ``/boot`` directory.

   .. code-block:: none

      sudo cp build/hypervisor/acrn.32.out /boot

#. Clone and configure the Service VM kernel repository following the
   instructions in the :ref:`gsg` and using the ``acrn-v3.0`` tag. The User VM (L2
   guest) uses the ``virtio-blk`` driver to mount the rootfs. This driver is
   included in the default kernel configuration as of the ``acrn-v3.0`` tag.
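
   A condensed sketch of that flow is shown below; it assumes the
   ``kernel_config_service_vm`` configuration file and the resulting
   ``5.10.115-acrn-service-vm`` kernel version from the ``acrn-v3.0`` tag
   (check the :ref:`gsg` for the authoritative steps).

   .. code-block:: none

      cd ~
      git clone https://github.com/projectacrn/acrn-kernel.git
      cd acrn-kernel
      git checkout acrn-v3.0
      cp kernel_config_service_vm .config
      make olddefconfig
      make -j $(nproc)
      sudo make modules_install
      sudo cp arch/x86/boot/bzImage /boot/vmlinuz-5.10.115-acrn-service-vm
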

#. Update GRUB to boot the ACRN hypervisor and load the Service VM kernel.
   Append the following configuration to :file:`/etc/grub.d/40_custom`.

   .. code-block:: none

      menuentry 'ACRN hypervisor' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-e23c76ae-b06d-4a6e-ad42-46b8eedfd7d3' {
         recordfail
         load_video
         gfxmode $linux_gfx_mode
         insmod gzio
         insmod part_msdos
         insmod ext2

         echo 'Loading ACRN hypervisor ...'
         multiboot --quirk-modules-after-kernel /boot/acrn.32.out  root=/dev/vda5
         module /boot/vmlinuz-5.10.115-acrn-service-vm Linux_bzImage
      }

   .. note::
      If your rootfs partition isn't ``vda5``, change the ``root=`` parameter to
      match your partition. ``vmlinuz-5.10.115-acrn-service-vm`` is the Service
      VM kernel image built in the previous step.

#. Update GRUB:

   .. code-block:: none

      sudo update-grub

#. Enable networking for the User VMs:

   .. code-block:: none

      sudo systemctl enable systemd-networkd
      sudo systemctl start systemd-networkd

#. Shut down the guest and relaunch it using
   ``virsh start ServiceVM --console``.
   Select the ``ACRN hypervisor`` entry from the GRUB menu.

   .. note::
      You may occasionally run into the following error: ``Assertion failed in
      file arch/x86/vtd.c,line 256 : fatal error``. This is a transient issue;
      try to restart the VM when that happens. If you need a more stable setup,
      you can work around the problem by switching your native host to a
      non-graphical environment (``sudo systemctl set-default
      multi-user.target``).

#. Use ``dmesg`` to verify that you are now running ACRN.

   .. code-block:: console

      dmesg | grep ACRN
      [    0.000000] Hypervisor detected: ACRN
      [    2.337176] ACRNTrace: Initialized acrn trace module with 4 cpu
      [    2.368358] ACRN HVLog: Initialized hvlog module with 4 cpu
      [    2.727905] systemd[1]: Set hostname to <ServiceVM>.

   .. note::
      When shutting down the Service VM, make sure to cleanly destroy it with
      the following command to prevent crashes in subsequent boots.

      .. code-block:: none

         virsh destroy ServiceVM # where ServiceVM is the virsh domain name.

Bring Up User VM (L2 Guest)
***************************

1. Build the User VM disk image (``UserVM.img``) following
   :ref:`build-the-ubuntu-kvm-image` and copy it to the Service VM (L1 guest).
   Alternatively, you can use an
   `Ubuntu Desktop ISO image <https://ubuntu.com/#download>`_.
   Rename the downloaded ISO image to ``UserVM.iso``.

#. Transfer the ``UserVM.img`` or ``UserVM.iso`` User VM disk image to the
   Service VM (L1 guest).
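
   For example, you can copy the image from the host with ``scp`` (the user
   name and IP address below are placeholders for your Service VM's account and
   address):

   .. code-block:: none

      scp UserVM.img acrn@192.168.122.31:~/
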

#. Copy ``OVMF.fd`` to your home directory; it is needed to launch the User VM.

   .. code-block:: none

      cp ~/acrn-hypervisor/devicemodel/bios/OVMF.fd ~/

#. Save the launch script below in the Service VM and update it to use your
   disk image (``UserVM.img`` or ``UserVM.iso``).

   .. code-block:: none

      #!/bin/bash
      # Copyright (C) 2020-2022 Intel Corporation.
      # SPDX-License-Identifier: BSD-3-Clause

      function launch_ubuntu()
      {
        vm_name=ubuntu_vm$1
        logger_setting="--logger_setting console,level=5;kmsg,level=6;disk,level=5"
        # check if the vm is running or not
        vm_ps=$(pgrep -a -f acrn-dm)
        result=$(echo $vm_ps | grep "${vm_name}")
        if [[ "$result" != "" ]]; then
          echo "$vm_name is running, can't create twice!"
          exit
        fi
        # for memsize setting
        mem_size=1024M
        acrn-dm -m $mem_size -s 0:0,hostbridge \
          -s 3,virtio-blk,~/UserVM.img \
          -s 4,virtio-net,tap=tap0 \
          --cpu_affinity 1 \
          -s 5,virtio-console,@stdio:stdio_port \
          --ovmf ~/OVMF.fd \
          $logger_setting \
          $vm_name
      }

      # offline Service VM CPUs except BSP before launching User VM
      for i in `ls -d /sys/devices/system/cpu/cpu[1-99]`; do
        online=`cat $i/online`
        idx=`echo $i | tr -cd "[1-99]"`
        echo cpu$idx online=$online
        if [ "$online" = "1" ]; then
          echo 0 > $i/online
          # during boot time, cpu hotplug may be disabled by pci_device_probe during a pci module insmod
          while [ "$online" = "1" ]; do
            sleep 1
            echo 0 > $i/online
            online=`cat $i/online`
          done
          echo $idx > /sys/devices/virtual/misc/acrn_hsm/remove_cpu
        fi
      done

      launch_ubuntu 1
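
#. Run the launch script as root to start the User VM (a minimal usage sketch;
   the file name ``launch_ubuntu.sh`` is only an assumed name for the script
   saved in the previous step):

   .. code-block:: none

      chmod +x ~/launch_ubuntu.sh
      sudo ~/launch_ubuntu.sh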