.. _GSG_sample_app:

Sample Application User Guide
#############################

This sample application shows how to create two VMs that are launched on
your target system running ACRN. These VMs communicate with each other
via inter-VM shared memory (IVSHMEM). One VM is a real-time VM running
`cyclictest <https://wiki.linuxfoundation.org/realtime/documentation/howto/tools/cyclictest/start>`__,
an open source application commonly used to measure latencies in
real-time systems. This real-time VM (RT_VM) uses inter-VM shared memory
(IVSHMEM) to send data to a second Human-Machine Interface VM (HMI_VM).
The HMI_VM formats and presents the collected data as a histogram on a web
page shown by a browser. This guide shows how to configure, create, and
launch the two VM images that make up this application.

.. figure:: images/samp-image001.png
   :class: drop-shadow
   :align: center
   :width: 900px

   Sample Application Overview

We build these two VM images on your development computer using scripts
in the ACRN source code. Once we have the two VM images, we follow steps
similar to those in the *Getting Started Guide* to define a new ACRN
scenario with two post-launched user VMs and their IVSHMEM connection.
We build a Service VM image and the hypervisor image based on the
scenario configuration (as we did in the Getting Started Guide).
Finally, we put this all together on the target system, launch the
sample application VMs on ACRN from the Service VM, run the application
parts in each VM, and view the cyclictest histogram results in a browser
running on the HMI VM (or on the development computer).

While this sample application uses cyclictest to generate data about
performance latency in the RT_VM, we aren't doing any configuration
optimization in this sample to get the best real-time performance.

Prerequisite Environment and Images
***********************************

Before beginning, use the ``df`` command on your development computer and
verify there's at least 30 GB of free disk space for building the ACRN
sample application. You may see different filesystem names and sizes:

.. code-block:: console

   $ df -h /

   Filesystem      Size  Used Avail Use% Mounted on
   /dev/sda5       109G   42G   63G  41% /


.. rst-class:: numbered-step

Prepare the ACRN Development and Target Environment
***************************************************

.. important::
   Before building the sample application, you must complete the :ref:`gsg`
   instructions and leave the development and target systems with the files
   created by those instructions.

The :ref:`gsg` instructions install all the tools and packages on your
development and target systems that we'll also use to build and run this
sample application.

After following the Getting Started Guide, you'll have a directory
``~/acrn-work`` on your development computer containing directories with the
``acrn-hypervisor`` and ``acrn-kernel`` source code and build output. You'll
also have the board XML file that's needed by the ACRN Configurator to
configure the ACRN hypervisor and set up the VM launch scripts for this sample
application.
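If you want to confirm these prerequisites before continuing, a quick listing
such as the following shows whether the expected directories and board file are
present. This is only an optional sanity check; the paths and the
``my_board.xml`` file name are the examples used in the Getting Started Guide
and in this guide, so adjust them if you used different names.

.. code-block:: bash

   # Optional sanity check: these directories and the board XML file are
   # created by following the Getting Started Guide (my_board.xml is the
   # example board file name used in these guides).
   ls -d ~/acrn-work/acrn-hypervisor ~/acrn-work/acrn-kernel
   ls ~/acrn-work/my_board.xml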
Preparing the Target System
===========================

On the target system, reboot and choose the regular Ubuntu image (not the
Ubuntu-ACRN Board Inspector choice created when following the Getting Started
Guide).

1. Log in as the **acrn** user. We'll be making ssh connections to the target
   system later in these steps, so install the ssh server on the target system
   using::

      sudo apt install -y openssh-server

#. We'll need to know the IP address of the target system later. Use the
   ``hostname -I`` command and look at the first IP address mentioned. You'll
   likely see a different IP address than shown in this example:

   .. code-block:: console

      $ hostname -I | cut -d ' ' -f 1
      10.0.0.200

.. rst-class:: numbered-step

Make the Sample Application
***************************

On your development computer, build the applications used by the sample. The
``rtApp`` app in the RT VM reads the output from the cyclictest program and
sends it via inter-VM shared memory (IVSHMEM) to another regular HMI VM where
the ``userApp`` app receives the data and formats it for presentation using the
``histapp.py`` Python app.

As a normal (e.g., **acrn**) user, follow these steps:

1. Install some additional packages on your development computer used for
   building the sample application::

      sudo apt install -y cloud-guest-utils schroot kpartx qemu-utils

#. Check out the ``acrn-hypervisor`` source code branch (already cloned from the
   ``acrn-hypervisor`` repo when you followed the :ref:`gsg`). We've tagged a
   specific version of the hypervisor that you should use for the sample app's
   HMI VM::

      cd ~/acrn-work/acrn-hypervisor
      git fetch --all
      git checkout release_3.3

#. Build the ACRN sample application source code::

      cd misc/sample_application/
      make all

   This builds the ``histapp.py``, ``userApp``, and ``rtApp`` programs used by
   the sample application.

.. rst-class:: numbered-step

Make the HMI_VM Image
*********************

1. Make the HMI VM image. This script runs for about 10 minutes total and will
   prompt you to input the passwords for the **acrn** and **root** users in the
   HMI_VM image::

      cd ~/acrn-work/acrn-hypervisor/misc/sample_application/image_builder
      ./create_image.sh hmi-vm

   After the script is finished, the ``hmi_vm.img`` image file is created in the
   ``build`` directory. You should see a final message from the script that
   looks like this:

   .. code-block:: console

      2022-08-18T09:53:06+08:00 [ Info ] VM image created at /home/acrn/acrn-work/acrn-hypervisor/misc/sample_application/image_builder/build/hmi_vm.img.

   If you don't see such a message, look back through the output to see what
   errors are indicated. For example, there could have been a network error
   while retrieving packages from the Internet. In such a case, simply trying
   the ``create_image.sh`` command again might work.

   The HMI VM image is a configured Ubuntu desktop image ready to launch as an
   ACRN user VM with the HMI parts of the sample app installed. (An optional
   check of the generated image file follows this step.)
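You can inspect the generated image with ``qemu-img``, part of the
``qemu-utils`` package installed earlier. This is purely an optional check; the
path below is the default build output location reported by the script:

.. code-block:: bash

   # Optional check: report the format and virtual size of the HMI VM image.
   qemu-img info ~/acrn-work/acrn-hypervisor/misc/sample_application/image_builder/build/hmi_vm.img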
.. rst-class:: numbered-step

Make the RT_VM Image
********************

1. Check out the ``acrn-kernel`` source code branch (already cloned from the
   ``acrn-kernel`` repo when you followed the :ref:`gsg`). We use the preempt-rt
   branch of ``acrn-kernel`` for the sample app's RT VM::

      cd ~/acrn-work/acrn-kernel
      git fetch --all
      git checkout -b sample_rt origin/5.15/preempt-rt

#. Build the preempt-rt patched kernel used by the RT VM::

      make mrproper
      cp kernel_config .config
      make olddefconfig
      make -j $(nproc) deb-pkg

   The kernel build can take 15 minutes on a fast computer but could
   take two to three hours depending on the performance of your development
   computer. When done, the build generates four Debian packages in the
   directory above the build root directory, as shown by this command::

      ls ../*rtvm*.deb

   You will see rtvm Debian packages for ``linux-headers``, ``linux-image``
   (normal and debug), and ``linux-libc-dev`` (your file names might look a
   bit different):

   .. code-block:: console

      linux-headers-5.15.71-rt46-acrn-kernel-rtvm+_5.15.71-rt46-acrn-kernel-rtvm+-1_amd64.deb
      linux-image-5.15.71-rt46-acrn-kernel-rtvm+-dbg_5.15.71-rt46-acrn-kernel-rtvm+-1_amd64.deb
      linux-image-5.15.71-rt46-acrn-kernel-rtvm+_5.15.71-rt46-acrn-kernel-rtvm+-1_amd64.deb
      linux-libc-dev_5.15.71-rt46-acrn-kernel-rtvm+-1_amd64.deb

#. Make the RT VM image::

      cd ~/acrn-work/acrn-hypervisor/misc/sample_application/image_builder
      ./create_image.sh rt-vm

   After the script is finished, the ``rt_vm.img`` image file is created in the
   ``build`` directory. The RT VM image is a configured Ubuntu image with a
   preempt-rt patched kernel used for real-time VMs.
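At this point both VM images should exist in the ``build`` directory. A quick
listing (using the same build path as in the previous steps) confirms this
before we move on to the scenario configuration:

.. code-block:: bash

   # Optional check: list the HMI and RT VM images built in the previous steps.
   ls -lh ~/acrn-work/acrn-hypervisor/misc/sample_application/image_builder/build/*_vm.img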
.. rst-class:: numbered-step

Create and Configure the ACRN Scenario
**************************************

Now we turn to building the hypervisor based on the board and scenario
configuration for our sample application. We'll use the board XML file
and ACRN Configurator already on your development computer from when you
followed the :ref:`gsg`.

Use the ACRN Configurator to define a new scenario for our two VMs
and generate new launch scripts for this sample application.

1. On your development computer, launch the ACRN Configurator::

      cd ~/acrn-work
      acrn-configurator

#. Under **Start a new configuration**, confirm that the working folder is
   ``/home/acrn/acrn-work/MyConfiguration``. Click **Use This Folder**. (If
   prompted, confirm it's **OK** to overwrite an existing configuration.)

   .. image:: images/samp-image002.png
      :class: drop-shadow
      :align: center

#. Import your board configuration file as follows:

   a. In the **1. Import a board configuration file** panel, click **Browse
      for file**.

   #. Browse to ``/home/acrn/acrn-work/my_board.xml`` and click **Open**.
      Then click **Import Board File**.

   .. image:: images/samp-image003.png
      :class: drop-shadow
      :align: center

#. **Create a new scenario**: select a shared scenario type with a Service VM
   and two post-launched VMs. Click **OK**.

   .. image:: images/samp-image004.png
      :class: drop-shadow
      :align: center

   The ACRN Configurator will report some problems with the initial scenario
   configuration that we'll resolve as we make updates. (Notice the error
   indicators on the settings tabs and above the parameters tabs.) The
   ACRN Configurator verifies the scenario when you open a saved
   scenario and when you click the **Save Scenario and Launch Scripts**
   button.

   .. image:: images/samp-image004a.png
      :class: drop-shadow
      :align: center

#. Select the VM0 (Service VM) tab and set the **Console virtual UART type** to
   ``COM Port 1``. Edit the **Basic Parameters > Kernel command-line
   parameters** by appending ``i915.modeset=0 3`` to the existing parameters
   (to disable loading of the GPU driver for the Intel GPU device).

   .. image:: images/samp-image005.png
      :class: drop-shadow
      :align: center

#. Select the VM1 tab and change the VM name to HMI_VM. Configure the **Console
   virtual UART type** to ``COM Port 1``, set the **Memory size** to ``2048``,
   and add the **physical CPU affinity** to pCPU ``0`` and ``1`` (click the
   **+** button to create the additional affinity setting), as shown below:

   .. image:: images/samp-image006.png
      :class: drop-shadow
      :align: center

#. Enable GVT-d configuration by clicking the **+** within the **PCI device
   setting** options and selecting the VGA compatible controller. Click the
   **+** button again to add the USB controller to pass through to the HMI_VM.

   .. image:: images/samp-image007.png
      :class: drop-shadow
      :align: center

#. Configure the HMI_VM's **virtio console devices** and **virtio network
   devices** by clicking the **+** button in the section and setting the values
   as shown here (note that the **Network interface name** must be ``tap0``):

   .. image:: images/samp-image008.png
      :class: drop-shadow
      :align: center

#. Configure the HMI_VM **virtio block device**. Add the absolute path of your
   ``hmi_vm.img`` on the target system (we'll copy the generated ``hmi_vm.img``
   to this directory in a later step):

   .. image:: images/samp-image009.png
      :class: drop-shadow
      :align: center

   That completes the HMI_VM settings.

#. Next, select the VM2 tab and change the **VM name** to RT_VM, change the
   **VM type** to ``Real-time``, set the **Console virtual UART type** to
   ``COM Port 1``, set the **Memory size** to ``1024``, set the **physical CPU
   affinity** to pCPU IDs ``2`` and ``3``, and check the **Real-time vCPU** box
   for pCPU ID ``2``, as shown below:

   .. image:: images/samp-image010.png
      :class: drop-shadow
      :align: center

#. Configure the **virtio console device** for the RT_VM (unlike the HMI_VM, we
   don't use a **virtio network device** for this RT_VM):

   .. image:: images/samp-image011.png
      :class: drop-shadow
      :align: center

#. Add the absolute path of your ``rt_vm.img`` on the target system (we'll copy
   the ``rt_vm.img`` file we generated earlier to this directory in a later
   step):

   .. image:: images/samp-image012.png
      :class: drop-shadow
      :align: center

#. Select the Hypervisor tab: Verify that the **build type** is ``Debug``, and
   define the **InterVM shared memory region** settings as shown below, adding
   the HMI_VM and RT_VM as the VMs sharing this region. (The missing
   **Virtual BDF** values will be supplied by the ACRN Configurator
   when you save the configuration.)

   .. image:: images/samp-image013.png
      :class: drop-shadow
      :align: center

   In the **Debug options**, set the **Serial console port** to
   ``/dev/ttyS0``, as shown below (this will resolve the message about the
   missing serial port configuration):

   .. image:: images/samp-image014.png
      :class: drop-shadow
      :align: center

#. Click the **Save Scenario and Launch Scripts** button to validate and save
   this configuration and the launch scripts. You should see a dialog box
   saying that the scenario is saved and validated, the launch scripts are
   generated, and all files are successfully saved. Click **OK**.

   .. image:: images/samp-image015.png
      :class: drop-shadow
      :align: center
      :width: 400px

#. We're done configuring the sample application scenario. When you saved the
   scenario, the ACRN Configurator re-verified all the option settings and
   found no issues, so all the error indicators are now cleared.

   Exit the ACRN Configurator by clicking the **X** in the top right corner.

   .. image:: images/samp-image015a.png
      :class: drop-shadow
      :align: center

You can see the saved scenario and launch scripts in the working
directory:

.. code-block:: console

   $ ls MyConfiguration

   launch_user_vm_id1.sh launch_user_vm_id2.sh scenario.xml myboard.board.xml

You'll see the two VM launch scripts (id1 for the HMI_VM, and id2 for
the RT_VM) and the scenario XML file for your sample application (as
well as your board XML file).
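If you'd like to confirm that the shared memory region was recorded in the
saved scenario, you can search for it in ``scenario.xml``. Treat this as an
informal check only; the exact element names in the file may vary between ACRN
releases:

.. code-block:: bash

   # Optional check: show the inter-VM shared memory settings saved in the
   # scenario file (element names may differ between ACRN releases).
   grep -i -A 3 ivshmem ~/acrn-work/MyConfiguration/scenario.xml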
.. rst-class:: numbered-step

Build the ACRN Hypervisor and Service VM Images
***********************************************

1. On the development computer, build the ACRN hypervisor using the
   board XML and the scenario XML file we just generated::

      cd ~/acrn-work/acrn-hypervisor

      make clean
      debian/debian_build.sh clean && debian/debian_build.sh -c ~/acrn-work/MyConfiguration

   The build typically takes about a minute. When done, the build
   generates several Debian packages in the build directory. Among these
   packages, only the one named with your board and working folder name
   differs from the packages generated in the Getting Started Guide, so we
   only need to copy and reinstall that one Debian package on the target
   system.

   This Debian package contains the ACRN hypervisor and tools for
   installing ACRN on the target.

#. Use the ACRN Service VM kernel already on your development computer from
   when you followed the Getting Started Guide (the sample application uses
   the same Service VM kernel as the one generated there, so there's no need
   to generate it again).

.. rst-class:: numbered-step

Copy Files from the Development Computer to Your Target System
***************************************************************

1. Copy all the files generated on the development computer to the
   target system. This includes the sample application executable files,
   the HMI_VM and RT_VM images, the ACRN hypervisor Debian package,
   and the launch scripts.

   Use ``scp`` to copy files from your development computer to the
   ``~/acrn-work`` directory on the target (replace the IP address used in
   this example with the target system's IP address you found earlier)::

      cd ~/acrn-work

      scp acrn-hypervisor/misc/sample_application/image_builder/build/*_vm.img \
          acrn-hypervisor*.deb \
          MyConfiguration/launch_user_vm_id*.sh \
          acrn@10.0.0.200:~/acrn-work
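After the copy completes, you can list the files on the target to confirm
everything arrived. Run this from the development computer, replacing the
example IP address with your target system's address:

.. code-block:: bash

   # Optional check: list the copied files in the target's ~/acrn-work
   # directory (10.0.0.200 is the example target IP address used in this guide).
   ssh acrn@10.0.0.200 'ls -lh ~/acrn-work/*_vm.img ~/acrn-work/acrn-hypervisor*.deb ~/acrn-work/launch_user_vm_id*.sh'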
.. rst-class:: numbered-step

Install and Run ACRN on the Target System
*****************************************

1. On the target system, configure a network bridge by following the
   instructions at this link:

   https://www.ubuntupit.com/how-to-configure-and-use-network-bridge-in-ubuntu-linux/

#. On your target system, install the ACRN Debian package and ACRN
   kernel Debian packages using these commands::

      cd ~/acrn-work
      cp ./acrn-hypervisor*.deb ./*acrn-service-vm*.deb /tmp
      sudo apt purge acrn-hypervisor
      sudo apt install /tmp/acrn-hypervisor*.deb /tmp/*acrn-service-vm*.deb

#. Enable networking services for sharing with the HMI User VM:

   .. warning::
      The IP address of the Service VM may change after executing the following
      commands.

   .. code-block:: bash

      sudo cp /usr/share/doc/acrnd/examples/* /etc/systemd/network
      sudo systemctl enable --now systemd-networkd

#. Reboot the system::

      reboot

#. The target system will boot automatically into the ACRN hypervisor and
   launch the Service VM.

   Log in to the Service VM (using the target's keyboard and HDMI monitor)
   using the **acrn** username.

#. Find the Service VM's IP address (the first IP address shown by this
   command):

   .. code-block:: console

      $ hostname -I | cut -d ' ' -f 1
      10.0.0.200

#. From your development computer, ssh to your target system's Service VM
   using that IP address::

      ssh acrn@10.0.0.200

#. In that ssh session, launch the HMI_VM by using the ``launch_user_vm_id1.sh``
   launch script::

      sudo chmod +x ~/acrn-work/launch_user_vm_id1.sh
      sudo ~/acrn-work/launch_user_vm_id1.sh

#. The launch script will start up the HMI_VM and show an Ubuntu login
   prompt in your ssh session (and a graphical login on your target's HDMI
   monitor).

   Log in to the HMI_VM as the **root** user (not **acrn**) using your
   development computer's ssh session:

   .. code-block:: console
      :emphasize-lines: 1

      ubuntu login: root
      Password:
      Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.15.0-52-generic x86_64)

      . . .

      (acrn-guest)root@ubuntu:~#

#. Find the HMI_VM's IP address:

   .. code-block:: console

      (acrn-guest)root@ubuntu:~# hostname -I | cut -d ' ' -f 1
      10.0.0.100

   If no IP address is reported, run this command to request an IP address and
   check again::

      dhclient

#. Run the HMI VM sample application ``userApp`` (in the background)::

      sudo /root/userApp &

   and then install the Python dependency and start the ``histapp.py``
   application::

      pip install "numpy<2"
      sudo python3 /root/histapp.py

   At this point, the HMI_VM is running and we've started the HMI parts of
   the sample application. Next, we will launch the RT_VM and its parts of
   the sample application.

#. On your development computer, open a new terminal window and start a
   new ssh connection to your target system's Service VM::

      ssh acrn@10.0.0.200

#. In this ssh session, launch the RT_VM by using the ``launch_user_vm_id2.sh``
   launch script::

      sudo chmod +x ~/acrn-work/launch_user_vm_id2.sh
      sudo ~/acrn-work/launch_user_vm_id2.sh

#. The launch script will start up the RT_VM. Lots of system messages will go
   by, ending with an Ubuntu login prompt.

   Log in to the RT_VM as the **root** user (not **acrn**) in this ssh session:

   .. code-block:: console
      :emphasize-lines: 1

      ubuntu login: root
      Password:
      Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.15.71-rt46-acrn-kernel-rtvm+ x86_64)

      . . .

      (acrn-guest)root@ubuntu:~#

#. Run cyclictest in this RT_VM (in the background)::

      cyclictest -p 80 --fifo="./data_pipe" -q &

   and then run the rtApp in this RT_VM::

      sudo /root/rtApp

   An optional check that the HMI_VM web server is responding follows this
   list.
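With both parts of the sample application started, you can optionally confirm
from your development computer that the HMI_VM's web server is responding. This
assumes ``curl`` is installed on your development computer; replace the example
address with the HMI_VM IP address you found earlier (expect a ``200`` response
code):

.. code-block:: bash

   # Optional check: request the histogram page served by histapp.py and
   # print only the HTTP status code (10.0.0.100 is the example HMI_VM
   # address used in this guide).
   curl -s -o /dev/null -w "%{http_code}\n" http://10.0.0.100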
Now the two parts of the sample application are running:

* The RT_VM is running cyclictest, which generates latency data, and the rtApp
  sends this data via IVSHMEM to the HMI_VM.
* In the HMI_VM, the userApp receives the cyclictest data and provides it to the
  histapp.py Python application that is running a web server.

We can view this data displayed as a histogram:

Option 1: Use a browser on your development computer
   Open a web browser on your development computer and go to the
   HMI_VM IP address that we found in an earlier step (e.g., http://10.0.0.100).

Option 2: Use a browser on the HMI_VM using the target system console
   Log in to the HMI_VM on the target system's console. (If you want to
   log in as root, click the "Not listed?" link under the username choices
   shown and enter the root username and password.) Open the web browser to
   http://localhost.

Refresh the browser. You'll see a histogram showing the
percentage of latency time intervals reported by cyclictest. The histogram
updates every time you refresh the browser. (Notice that the sample count
reported in the vertical axis label increases.)

.. figure:: images/samp-image018.png
   :class: drop-shadow
   :align: center

   Example Histogram Output from Cyclictest as Reported by the Sample App

The horizontal axis represents the latency values in microseconds, and the
vertical axis represents the percentage of occurrences of those values.

Congratulations
***************

That completes the building and running of this sample application. You
can view the application's code in the
``~/acrn-work/acrn-hypervisor/misc/sample_application`` directory on your
development computer (cloned from the ``acrn-hypervisor`` repo).

.. note:: As mentioned at the beginning, while this sample application uses
   cyclictest to generate data about performance latency in the RT_VM, we
   haven't done any configuration optimization in this sample to get the
   best real-time performance.