.. _hld-virtio-devices:
.. _virtio-hld:

Virtio Devices High-Level Design
################################

The ACRN hypervisor follows the `Virtual I/O Device (virtio)
specification
<http://docs.oasis-open.org/virtio/virtio/v1.0/virtio-v1.0.html>`_ to
realize I/O virtualization for many performance-critical devices
supported in the ACRN project. Adopting the virtio specification lets us
reuse many frontend virtio drivers already available in a Linux-based
User VM, drastically reducing potential development effort for frontend
virtio drivers. To further reduce the development effort of backend
virtio drivers, the hypervisor provides the virtio backend service
(VBS) APIs, which make it very straightforward to implement a virtio
device in the hypervisor.

The virtio APIs can be divided into 3 groups: DM APIs, virtio backend
service (VBS) APIs, and virtqueue (VQ) APIs, as shown in
:numref:`be-interface`.

.. figure:: images/virtio-hld-image0.png
   :width: 900px
   :align: center
   :name: be-interface

   ACRN Virtio Backend Service Interface

-  **DM APIs** are exported by the DM, and are mainly used during the
   device initialization phase and runtime. The DM APIs also include
   PCIe emulation APIs because each virtio device is a PCIe device in
   the Service VM and User VM.
-  **VBS APIs** are mainly exported by the VBS and related modules.
   Generally they are callbacks to be registered into the DM.
-  **VQ APIs** are used by a virtio backend device to access and parse
   information from the shared memory between the frontend and backend
   device drivers.

The virtio framework is the para-virtualization specification that ACRN
follows to implement I/O virtualization of performance-critical
devices such as audio, eAVB/TSN, IPU, and CSMU devices. This section
first gives an overview of virtio history, motivation, and advantages,
and then highlights the key virtio concepts. It then describes ACRN's
virtio architecture and elaborates on the ACRN virtio APIs. Finally, it
introduces the virtio devices supported by ACRN.

Virtio Introduction
*******************

Virtio is an abstraction layer over devices in a para-virtualized
hypervisor. Virtio was developed by Rusty Russell when he worked at IBM
Research to support his lguest hypervisor in 2007, and it quickly became
the de facto standard for KVM's para-virtualized I/O devices.

Virtio is very popular for virtual I/O devices because it provides a
straightforward, efficient, standard, and extensible mechanism, and
eliminates the need for boutique, per-environment, or per-OS mechanisms.
For example, rather than having a variety of device emulation
mechanisms, virtio provides a common frontend driver framework that
standardizes device interfaces, and increases code reuse across
different virtualization platforms.

Given the advantages of virtio, ACRN also follows the virtio
specification.

Key Concepts
************

To better understand virtio, especially its usage in ACRN, we'll
highlight several key virtio concepts important to ACRN:

Frontend virtio driver (FE)
  Virtio adopts a frontend-backend architecture that enables a simple but
  flexible framework for both frontend and backend virtio drivers. The FE
  driver merely needs to offer services that configure the interface,
  pass messages, produce requests, and kick the backend virtio driver.
  As a result, the FE driver is easy to implement and the performance
  overhead of emulating a device is eliminated.

Backend virtio driver (BE)
  Similar to the FE driver, the BE driver, running either in userland or
  kernel-land of the host OS, consumes requests from the FE driver and
  sends them to the host native device driver. Once the requests are done
  by the host native device driver, the BE driver notifies the FE driver
  that the request is complete.

  Note: To distinguish the BE driver from the host native device driver, the
  host native device driver is called "native driver" in this document.

Straightforward: virtio devices as standard devices on existing buses
  Instead of creating new device buses from scratch, virtio devices are
  built on existing buses. This gives both FE and BE drivers a
  straightforward way to interact with each other. For example, the FE
  driver can read and write registers of the device, and the virtual
  device can interrupt the FE driver, on behalf of the BE driver, when
  something of interest happens.

  Virtio supports the PCI/PCIe bus and the MMIO bus. In ACRN, only the
  PCI/PCIe bus is supported, and all the virtio devices share the same
  vendor ID 0x1AF4.

  Note: For MMIO, "bus" is an overstatement since it is essentially just
  a few descriptors describing the devices.

Efficient: batching operations are encouraged
  Batching operations and deferring notifications are important for
  achieving high-performance I/O, since a notification between the FE
  driver and BE driver usually involves an expensive exit of the guest.
  Therefore, batching operations and notification suppression are highly
  encouraged where possible. This allows an efficient implementation of
  performance-critical devices.

Standard: virtqueue
  All virtio devices share a standard ring buffer and descriptor
  mechanism, called a virtqueue, shown in :numref:`virtqueue`. A
  virtqueue is a queue of scatter-gather buffers. There are three
  important methods on virtqueues:

  - **add_buf** is for adding a request/response buffer to a virtqueue,
  - **get_buf** is for getting a response/request from a virtqueue, and
  - **kick** is for notifying the other side that a virtqueue has buffers
    to consume.

  The virtqueues are created in guest physical memory by the FE drivers.
  BE drivers only need to parse the virtqueue structures to obtain
  the requests and process them. The virtqueue organization is
  specific to the Guest OS. In the Linux implementation of virtio, the
  virtqueue is implemented as a ring buffer structure called ``vring``.
  (A simplified sketch of the vring layout is shown at the end of this
  section.)

  In ACRN, the virtqueue APIs can be leveraged directly so that users
  don't need to worry about the details of the virtqueue. (Refer to the
  guest OS documentation for more details about the virtqueue
  implementation.)

.. figure:: images/virtio-hld-image2.png
   :width: 900px
   :align: center
   :name: virtqueue

   Virtqueue

Extensible: feature bits
  A simple, extensible feature negotiation mechanism exists for each
  virtual device and its driver. Each virtual device can claim its
  device-specific features, while the corresponding driver responds to
  the device with the subset of features it understands. The feature
  mechanism enables forward and backward compatibility for the virtual
  device and driver.

Virtio Device Modes
  The virtio specification defines three modes of virtio devices:
  a legacy mode device, a transitional mode device, and a modern mode
  device. A legacy mode device is compliant with virtio specification
  version 0.95, a transitional mode device is compliant with both the
  0.95 and 1.0 versions of the specification, and a modern mode device is
  compliant with version 1.0 of the specification only.

  In ACRN, all the virtio devices are transitional devices, meaning that
  they should be compatible with both the 0.95 and 1.0 versions of the
  virtio specification.

Virtio Device Discovery
  Virtio devices are commonly implemented as PCI/PCIe devices. A
  virtio device using virtio over a PCI/PCIe bus must expose an interface to
  the Guest OS that meets the PCI/PCIe specifications.

  Conventionally, any PCI device with Vendor ID 0x1AF4,
  PCI_VENDOR_ID_REDHAT_QUMRANET, and Device ID 0x1000 through 0x107F
  inclusive is a virtio device. Among the Device IDs, the
  legacy/transitional mode virtio devices occupy the first 64 IDs ranging
  from 0x1000 to 0x103F, while the range 0x1040-0x107F belongs to
  virtio modern devices. In addition, the Subsystem Vendor ID should
  reflect the PCI/PCIe vendor ID of the environment, and the Subsystem
  Device ID indicates which virtio device is supported by the device.

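The following is a minimal sketch of the split-ring (vring) layout that
underlies the virtqueue concept described above. It is simplified from the
virtio 1.0 specification for readability; the exact definitions used by Linux
and ACRN live in their respective virtio headers.

.. code-block:: c

   /* Simplified split-virtqueue (vring) layout, per the virtio 1.0 spec.
    * The FE driver fills the descriptor table and available ring; the BE
    * driver consumes them and fills the used ring. */
   #include <stdint.h>

   struct vring_desc {           /* one scatter-gather buffer */
      uint64_t addr;             /* guest-physical buffer address */
      uint32_t len;              /* buffer length in bytes */
      uint16_t flags;            /* NEXT / WRITE / INDIRECT */
      uint16_t next;             /* index of the chained descriptor */
   };

   struct vring_avail {          /* written by the FE driver (add_buf) */
      uint16_t flags;
      uint16_t idx;              /* next free slot in ring[] */
      uint16_t ring[];           /* heads of available descriptor chains */
   };

   struct vring_used_elem {
      uint32_t id;               /* head of the descriptor chain consumed */
      uint32_t len;              /* bytes the BE driver wrote back */
   };

   struct vring_used {           /* written by the BE driver */
      uint16_t flags;
      uint16_t idx;
      struct vring_used_elem ring[];
   };

The **kick** and interrupt notifications described above simply tell the
other side that ``idx`` in one of these rings has advanced.
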
Virtio Frameworks
*****************

This section describes the overall architecture of virtio, and
introduces the ACRN-specific implementations of the virtio framework.

Architecture
============

Virtio adopts a frontend-backend architecture, as shown in
:numref:`virtio-arch`. Basically the FE driver and BE driver
communicate with each other through shared memory, via the virtqueues.
The FE driver talks to the BE driver in the same way it would talk to a
real PCIe device. The BE driver handles requests from the FE driver, and
notifies the FE driver if the request has been processed.

.. figure:: images/virtio-hld-image1.png
   :width: 900px
   :align: center
   :name: virtio-arch

   Virtio Architecture

In addition to virtio's frontend-backend architecture, both FE and BE
drivers follow a layered architecture, as shown in
:numref:`virtio-fe-be`. Each side has three layers: transports, core
models, and device types. All virtio devices share the same virtio
infrastructure, including virtqueues, feature mechanisms, configuration
space, and buses.

.. figure:: images/virtio-hld-image4.png
   :width: 900px
   :align: center
   :name: virtio-fe-be

   Virtio Frontend/Backend Layered Architecture

Userland Virtio Framework
=========================

The architecture of the ACRN userland virtio framework (VBS-U) is shown
in :numref:`virtio-userland`.

The FE driver talks to the BE driver as if it were talking with a PCIe
device. This means that for the "control plane", the FE driver can poke
device registers through PIO or MMIO, and the device interrupts the FE
driver when something happens. For the "data plane", the FE driver and
BE driver communicate through shared memory, in the form of virtqueues.

On the Service VM side, where the BE driver is located, there are
several key components in ACRN, including the Device Model (DM), the
Hypervisor Service Module (HSM), VBS-U, and the user-level vring service
API helpers.

The DM bridges the FE driver and BE driver since each VBS-U module
emulates a PCIe virtio device. The HSM bridges the DM and the hypervisor
by providing remote memory map APIs and notification APIs. VBS-U
accesses the virtqueue through the user-level vring service API helpers.

.. figure:: images/virtio-hld-image3.png
   :width: 900px
   :align: center
   :name: virtio-userland

   ACRN Userland Virtio Framework

Kernel-Land Virtio Framework
============================

ACRN supports one kernel-land virtio framework:

* Vhost, compatible with Linux Vhost

Vhost Framework
---------------

Vhost is a common solution upstreamed in the Linux kernel,
with several kernel mediators based on it.

Architecture
~~~~~~~~~~~~

Vhost/virtio is a semi-virtualized device abstraction interface
specification that has been widely applied in various virtualization
solutions. Vhost is a specific kind of virtio in which the data plane is
put into the host kernel space to reduce context switches while
processing I/O requests. It is usually called "virtio" when used as a
frontend driver in a guest operating system or "vhost" when used as a
backend driver in a host. Compared with a pure virtio solution on a
host, vhost uses the same frontend driver and can achieve better
performance. :numref:`vhost-arch` shows the vhost architecture on ACRN.

.. figure:: images/virtio-hld-image71.png
   :align: center
   :name: vhost-arch

   Vhost Architecture on ACRN

Compared with a userspace virtio solution, vhost moves the data plane
from user space to kernel space. The general vhost data plane workflow
can be described as follows (a sketch of the eventfd setup appears after
the list):

1. The vhost proxy creates two eventfds per virtqueue: one for kick
   (an ioeventfd) and the other for call (an irqfd).
2. The vhost proxy registers the two eventfds to the HSM through the HSM
   character device:

   a) The ioeventfd is bound to a PIO/MMIO range. If it is a PIO, it is
      registered with ``(fd, port, len, value)``. If it is an MMIO, it is
      registered with ``(fd, addr, len)``.
   b) The irqfd is registered with the MSI vector.

3. The vhost proxy sets the two fds in the vhost kernel driver through
   ioctls of the vhost device.
4. The vhost kernel driver starts polling the kick fd and wakes up when
   the guest kicks a virtqueue, which results in an event_signal on the
   kick fd by the HSM ioeventfd.
5. The vhost device in the kernel signals on the irqfd to notify the
   guest.

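The sketch below shows what steps 1 and 3 look like from the vhost proxy's
point of view, using only the standard Linux eventfd and vhost calls. Error
handling is omitted, and the ACRN-specific HSM registration of step 2 is
represented by hypothetical ``hsm_register_*`` placeholders (see the
`HSM Eventfd IOCTLs`_ section later in this document).

.. code-block:: c

   /* Sketch: create the kick/call eventfds for one virtqueue and hand them
    * to the vhost kernel driver.  The hsm_register_*() calls are placeholders
    * for the ACRN HSM registration in step 2. */
   #include <sys/eventfd.h>
   #include <sys/ioctl.h>
   #include <linux/vhost.h>

   static void setup_vq_eventfds(int vhost_fd, unsigned int vq_index)
   {
      struct vhost_vring_file file = { .index = vq_index };

      int kick_fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); /* ioeventfd */
      int call_fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); /* irqfd */

      /* Step 2 (ACRN-specific, placeholders):
       *   hsm_register_ioeventfd(kick_fd, ...);  bind to a PIO/MMIO range
       *   hsm_register_irqfd(call_fd, ...);      bind to an MSI vector
       */

      /* Step 3: pass both fds to the vhost kernel driver. */
      file.fd = kick_fd;
      ioctl(vhost_fd, VHOST_SET_VRING_KICK, &file);

      file.fd = call_fd;
      ioctl(vhost_fd, VHOST_SET_VRING_CALL, &file);
   }
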
Ioeventfd Implementation
~~~~~~~~~~~~~~~~~~~~~~~~

The ioeventfd module is implemented in the HSM. It enhances a registered
eventfd to listen for I/O requests (PIO/MMIO) from the HSM ioreq module
and signals the eventfd when needed. :numref:`ioeventfd-workflow` shows
the general workflow of ioeventfd.

.. figure:: images/virtio-hld-image58.png
   :align: center
   :name: ioeventfd-workflow

   Ioeventfd General Workflow

The workflow can be summarized as:

1. The vhost device initializes. The vhost proxy creates two eventfds for
   ioeventfd and irqfd.
2. The vhost proxy passes the ioeventfd to the vhost kernel driver.
3. The vhost proxy passes the ioeventfd to the HSM driver.
4. The User VM FE driver triggers an ioreq, which is forwarded through the
   hypervisor to the Service VM.
5. The HSM driver dispatches the ioreq to the related HSM client.
6. The ioeventfd HSM client traverses the io_range list and finds the
   corresponding eventfd.
7. The ioeventfd HSM client triggers the signal to the related eventfd.

Irqfd Implementation
~~~~~~~~~~~~~~~~~~~~

The irqfd module is implemented in the HSM, and can enhance a registered
eventfd to inject an interrupt into a guest OS when the eventfd gets
signaled. :numref:`irqfd-workflow` shows the general flow for irqfd.

.. figure:: images/virtio-hld-image60.png
   :align: center
   :name: irqfd-workflow

   Irqfd General Flow

The workflow can be summarized as:

1. The vhost device initializes. The vhost proxy creates two eventfds for
   ioeventfd and irqfd.
2. The vhost proxy passes the irqfd to the vhost kernel driver.
3. The vhost proxy passes the irqfd to the HSM driver.
4. The vhost device driver triggers an IRQ eventfd signal once the related
   native transfer is completed.
5. The irqfd related logic traverses the irqfd list to retrieve related irq
   information.
6. The irqfd related logic injects an interrupt through the HSM interrupt API.
7. The interrupt is delivered to the User VM FE driver through the hypervisor.

.. _virtio-APIs:

Virtio APIs
***********

This section provides details on the ACRN virtio APIs. As outlined
previously, the ACRN virtio APIs can be divided into three groups: DM
APIs, VBS APIs, and VQ APIs. The following sections elaborate on these
APIs.

VBS-U Key Data Structures
=========================

The key data structures for VBS-U are listed as follows, and their
relationships are shown in :numref:`VBS-U-data`.

``struct pci_virtio_blk``
  An example virtio device, such as virtio-blk.
``struct virtio_common``
  A common component of any virtio device.
``struct virtio_ops``
  Virtio-specific operation functions for this type of virtio device.
``struct pci_vdev``
  Instance of a virtual PCIe device; any virtio device is a virtual
  PCIe device.
``struct pci_vdev_ops``
  PCIe device operation functions for this type of device.
``struct vqueue_info``
  Instance of a virtqueue.

.. figure:: images/virtio-hld-image5.png
   :width: 900px
   :align: center
   :name: VBS-U-data

   VBS-U Key Data Structures

Each virtio device is a PCIe device. In addition, each virtio device
can have zero or more virtqueues, depending on the device type.
The ``struct virtio_common`` is a key data structure to be manipulated by
the DM, and the DM finds other key data structures through it. The
``struct virtio_ops`` abstracts a series of virtio callbacks to be
provided by the device owner.

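To make these containment relationships concrete, the sketch below shows how
a hypothetical BE device might lay out its state. Only the structure types
listed above come from ACRN; the device name and the remaining fields are
illustrative.

.. code-block:: c

   /* Hypothetical device layout mirroring the figure above.  Only the
    * structure types listed in this section are ACRN names. */
   #include <pthread.h>

   struct pci_virtio_xxx {
      struct virtio_common common;    /* shared virtio state; the DM finds
                                         the other structures through it */
      struct vqueue_info   queues[2]; /* zero or more virtqueues, depending
                                         on the device type */
      pthread_mutex_t      mtx;       /* protects device state */
      /* device-specific backend state follows (fds, buffers, ...) */
   };

The device owner also fills a ``struct virtio_ops`` with its callbacks (such
as queue notify, configuration-space access, and reset) and registers them
with the DM, which creates the corresponding ``struct pci_vdev`` instance for
the virtual PCIe device.
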
VHOST Key Data Structures
=========================

The key data structures for vhost are listed as follows.

.. doxygenstruct:: vhost_dev
   :project: Project ACRN

.. doxygenstruct:: vhost_vq
   :project: Project ACRN

DM APIs
=======

The DM APIs are exported by DM, and they should be used when implementing
BE device drivers on ACRN.

.. doxygenfunction:: paddr_guest2host
   :project: Project ACRN

.. doxygenfunction:: pci_set_cfgdata8
   :project: Project ACRN

.. doxygenfunction:: pci_set_cfgdata16
   :project: Project ACRN

.. doxygenfunction:: pci_set_cfgdata32
   :project: Project ACRN

.. doxygenfunction:: pci_get_cfgdata8
   :project: Project ACRN

.. doxygenfunction:: pci_get_cfgdata16
   :project: Project ACRN

.. doxygenfunction:: pci_get_cfgdata32
   :project: Project ACRN

.. doxygenfunction:: pci_lintr_assert
   :project: Project ACRN

.. doxygenfunction:: pci_lintr_deassert
   :project: Project ACRN

.. doxygenfunction:: pci_generate_msi
   :project: Project ACRN

.. doxygenfunction:: pci_generate_msix
   :project: Project ACRN

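As an illustration of how these APIs fit together, the sketch below fills in
the virtio identity of an emulated device and raises an MSI toward the FE
driver. The register offsets and the 0x1AF4 vendor ID come from the PCI and
virtio specifications; the wrapper function names, the choice of virtio-blk
IDs, and the assumption that the config-space accessors take a device
instance, an offset, and a value are illustrative.

.. code-block:: c

   /* Sketch: identify an emulated device as a transitional virtio-blk device
    * and notify the FE driver with an MSI.  dev is the struct pci_vdev the
    * DM created for this device; offsets are standard PCI configuration-space
    * offsets.  Wrapper function names are illustrative. */
   static void virtio_xxx_set_identity(struct pci_vdev *dev)
   {
      pci_set_cfgdata16(dev, 0x00, 0x1AF4); /* Vendor ID: virtio           */
      pci_set_cfgdata16(dev, 0x02, 0x1001); /* Device ID: transitional blk */
      pci_set_cfgdata16(dev, 0x2C, 0x8086); /* Subsystem Vendor ID         */
      pci_set_cfgdata16(dev, 0x2E, 0x0002); /* Subsystem Device ID: block  */
   }

   static void virtio_xxx_notify_guest(struct pci_vdev *dev)
   {
      /* Assumes the FE driver has enabled MSI; raise vector 0 of this device. */
      pci_generate_msi(dev, 0);
   }
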
VBS APIs
========

The VBS APIs are exported by VBS-related modules, including VBS, the DM,
and Service VM kernel modules.

VBS-U APIs
----------

These APIs provided by VBS-U are callbacks to be registered with the DM,
and the virtio framework within the DM invokes them appropriately.

.. doxygenstruct:: virtio_ops
   :project: Project ACRN

.. doxygenfunction:: virtio_pci_read
   :project: Project ACRN

.. doxygenfunction:: virtio_pci_write
   :project: Project ACRN

.. doxygenfunction:: virtio_interrupt_init
   :project: Project ACRN

.. doxygenfunction:: virtio_linkup
   :project: Project ACRN

.. doxygenfunction:: virtio_reset_dev
   :project: Project ACRN

.. doxygenfunction:: virtio_set_io_bar
   :project: Project ACRN

.. doxygenfunction:: virtio_set_modern_bar
   :project: Project ACRN

.. doxygenfunction:: virtio_config_changed
   :project: Project ACRN

APIs Provided by DM
~~~~~~~~~~~~~~~~~~~

.. doxygenfunction:: vbs_kernel_reset
   :project: Project ACRN

.. doxygenfunction:: vbs_kernel_start
   :project: Project ACRN

.. doxygenfunction:: vbs_kernel_stop
   :project: Project ACRN

VHOST APIs
==========

APIs Provided by DM
-------------------

.. doxygenfunction:: vhost_dev_init
   :project: Project ACRN

.. doxygenfunction:: vhost_dev_deinit
   :project: Project ACRN

.. doxygenfunction:: vhost_dev_start
   :project: Project ACRN

.. doxygenfunction:: vhost_dev_stop
   :project: Project ACRN

Linux Vhost IOCTLs
------------------

``#define VHOST_GET_FEATURES      _IOR(VHOST_VIRTIO, 0x00, __u64)``
  This IOCTL is used to get the feature flags supported by the vhost
  kernel driver.
``#define VHOST_SET_FEATURES      _IOW(VHOST_VIRTIO, 0x00, __u64)``
  This IOCTL is used to set the negotiated feature flags in the vhost
  kernel driver.
``#define VHOST_SET_OWNER _IO(VHOST_VIRTIO, 0x01)``
  This IOCTL is used to set the current process as the exclusive owner of
  the vhost char device. It must be called before any other vhost
  command.
``#define VHOST_RESET_OWNER _IO(VHOST_VIRTIO, 0x02)``
  This IOCTL is used to give up the ownership of the vhost char device.
``#define VHOST_SET_MEM_TABLE     _IOW(VHOST_VIRTIO, 0x03, struct vhost_memory)``
  This IOCTL is used to convey the guest OS memory layout to the vhost
  kernel driver.
``#define VHOST_SET_VRING_NUM _IOW(VHOST_VIRTIO, 0x10, struct vhost_vring_state)``
  This IOCTL is used to set the number of descriptors in the virtio ring.
  It cannot be modified while the virtio ring is running.
``#define VHOST_SET_VRING_ADDR _IOW(VHOST_VIRTIO, 0x11, struct vhost_vring_addr)``
  This IOCTL is used to set the addresses of the virtio ring.
``#define VHOST_SET_VRING_BASE _IOW(VHOST_VIRTIO, 0x12, struct vhost_vring_state)``
  This IOCTL is used to set the base value where the virtqueue looks for
  available descriptors.
``#define VHOST_GET_VRING_BASE _IOWR(VHOST_VIRTIO, 0x12, struct vhost_vring_state)``
  This IOCTL is used to get the base value where the virtqueue looks for
  available descriptors.
``#define VHOST_SET_VRING_KICK _IOW(VHOST_VIRTIO, 0x20, struct vhost_vring_file)``
  This IOCTL is used to set the eventfd on which vhost can poll for guest
  virtqueue kicks.
``#define VHOST_SET_VRING_CALL _IOW(VHOST_VIRTIO, 0x21, struct vhost_vring_file)``
  This IOCTL is used to set the eventfd that is used by vhost to inject
  virtual interrupts.

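Putting these IOCTLs together, a backend typically brings up a vhost device in
the order sketched below. This is a minimal, error-handling-free sketch of the
standard Linux vhost flow; the ``/dev/vhost-net`` path, the ring size, and the
``mem``/``addr`` arguments are placeholders that a real backend derives from
the guest memory layout and the virtqueue parameters negotiated with the FE
driver.

.. code-block:: c

   /* Sketch: typical vhost initialization order using the IOCTLs above.
    * The mem and addr arguments are placeholders filled in by the caller. */
   #include <fcntl.h>
   #include <stdint.h>
   #include <sys/ioctl.h>
   #include <linux/vhost.h>

   static int vhost_bringup(struct vhost_memory *mem,
                            struct vhost_vring_addr *addr,
                            int kick_fd, int call_fd)
   {
      struct vhost_vring_state state = { .index = 0 };
      struct vhost_vring_file file = { .index = 0 };
      uint64_t features;

      int fd = open("/dev/vhost-net", O_RDWR);  /* vhost char device  */

      ioctl(fd, VHOST_SET_OWNER, NULL);         /* claim the device   */
      ioctl(fd, VHOST_GET_FEATURES, &features); /* negotiate features */
      ioctl(fd, VHOST_SET_FEATURES, &features);
      ioctl(fd, VHOST_SET_MEM_TABLE, mem);      /* guest memory layout */

      state.num = 256;                          /* ring size           */
      ioctl(fd, VHOST_SET_VRING_NUM, &state);
      state.num = 0;                            /* first avail index   */
      ioctl(fd, VHOST_SET_VRING_BASE, &state);
      ioctl(fd, VHOST_SET_VRING_ADDR, addr);    /* vring addresses     */

      file.fd = kick_fd;                        /* eventfds, see above */
      ioctl(fd, VHOST_SET_VRING_KICK, &file);
      file.fd = call_fd;
      ioctl(fd, VHOST_SET_VRING_CALL, &file);

      return fd;
   }
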
HSM Eventfd IOCTLs
------------------

.. doxygenstruct:: acrn_ioeventfd
   :project: Project ACRN

``#define IC_EVENT_IOEVENTFD              _IC_ID(IC_ID, IC_ID_EVENT_BASE + 0x00)``
  This IOCTL is used to register or unregister an ioeventfd with the
  appropriate address, length, and data value.

.. doxygenstruct:: acrn_irqfd
   :project: Project ACRN

``#define IC_EVENT_IRQFD                  _IC_ID(IC_ID, IC_ID_EVENT_BASE + 0x01)``
  This IOCTL is used to register or unregister an irqfd with the
  appropriate MSI information.

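For completeness, here is a rough sketch of how the vhost proxy might use
these IOCTLs to implement step 2 of the vhost workflow described earlier. The
``acrn_ioeventfd`` and ``acrn_irqfd`` field names used below are assumptions
inferred from the descriptions above, not the authoritative ABI; consult the
ACRN HSM headers for the actual layout and flag values.

.. code-block:: c

   /* Rough sketch only: the fields used below (fd, addr, len) are assumptions
    * based on the descriptions above; check the ACRN HSM headers for the real
    * definitions. */
   #include <stdint.h>
   #include <sys/ioctl.h>

   static void hsm_register_vq_eventfds(int hsm_fd, int kick_fd, int call_fd,
                                        uint64_t doorbell_gpa)
   {
      struct acrn_ioeventfd io = {
         .fd   = kick_fd,
         .addr = doorbell_gpa,  /* MMIO address the FE driver kicks */
         .len  = 4,             /* access width to match            */
      };
      struct acrn_irqfd irq = {
         .fd = call_fd,
         /* MSI address/data for the interrupt to inject would also be
          * filled in here. */
      };

      ioctl(hsm_fd, IC_EVENT_IOEVENTFD, &io); /* HSM signals kick_fd on ioreq */
      ioctl(hsm_fd, IC_EVENT_IRQFD, &irq);    /* HSM injects MSI on call_fd   */
   }
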
VQ APIs
=======

The virtqueue APIs, or VQ APIs, are used by a BE device driver to
access the virtqueues shared by the FE driver. The VQ APIs abstract the
details of virtqueues so that users don't need to worry about the data
structures within the virtqueues.

.. doxygenfunction:: vq_interrupt
   :project: Project ACRN

.. doxygenfunction:: vq_getchain
   :project: Project ACRN

.. doxygenfunction:: vq_retchain
   :project: Project ACRN

.. doxygenfunction:: vq_relchain
   :project: Project ACRN

.. doxygenfunction:: vq_endchains
   :project: Project ACRN

Below is an example showing the typical logic of how a BE driver handles
requests from an FE driver.

.. code-block:: c

   static void BE_callback(struct pci_virtio_xxx *pv, struct vqueue_info *vq)
   {
      struct iovec iov;
      uint16_t idx;
      uint32_t len;

      while (vq_has_descs(vq)) {
         vq_getchain(vq, &idx, &iov, 1, NULL);
         /* handle the request described by iov; request_handle_proc() is a
          * placeholder that returns the number of bytes written back */
         len = request_handle_proc(&iov);
         /* release this chain and handle more */
         vq_relchain(vq, idx, len);
      }
      /* generate an interrupt if appropriate; 1 means ring empty */
      vq_endchains(vq, 1);
   }

Supported Virtio Devices
************************

All the BE virtio drivers are implemented using the ACRN virtio APIs,
and the FE drivers reuse the standard Linux FE virtio drivers. Devices
with FE drivers available in the Linux kernel should use the standard
virtio Vendor ID/Device ID and Subsystem Vendor ID/Subsystem Device ID.
For other devices within ACRN, their temporary IDs are listed in the
following table.

.. table:: Virtio Devices without Existing FE Drivers in Linux
   :align: center
   :name: virtio-device-table

   +--------------+-------------+-------------+-------------+-------------+
   | virtio       | Vendor ID   | Device ID   | Subvendor   | Subdevice   |
   | device       |             |             | ID          | ID          |
   +--------------+-------------+-------------+-------------+-------------+
   | RPMB         | 0x8086      | 0x8601      | 0x8086      | 0xFFFF      |
   +--------------+-------------+-------------+-------------+-------------+
   | HECI         | 0x8086      | 0x8602      | 0x8086      | 0xFFFE      |
   +--------------+-------------+-------------+-------------+-------------+
   | audio        | 0x8086      | 0x8603      | 0x8086      | 0xFFFD      |
   +--------------+-------------+-------------+-------------+-------------+
   | IPU          | 0x8086      | 0x8604      | 0x8086      | 0xFFFC      |
   +--------------+-------------+-------------+-------------+-------------+
   | TSN/AVB      | 0x8086      | 0x8605      | 0x8086      | 0xFFFB      |
   +--------------+-------------+-------------+-------------+-------------+
   | hyper_dmabuf | 0x8086      | 0x8606      | 0x8086      | 0xFFFA      |
   +--------------+-------------+-------------+-------------+-------------+
   | HDCP         | 0x8086      | 0x8607      | 0x8086      | 0xFFF9      |
   +--------------+-------------+-------------+-------------+-------------+
   | COREU        | 0x8086      | 0x8608      | 0x8086      | 0xFFF8      |
   +--------------+-------------+-------------+-------------+-------------+
   | I2C          | 0x8086      | 0x860a      | 0x8086      | 0xFFF6      |
   +--------------+-------------+-------------+-------------+-------------+
   | GPIO         | 0x8086      | 0x8609      | 0x8086      | 0xFFF7      |
   +--------------+-------------+-------------+-------------+-------------+

The following sections introduce the status of virtio devices
supported in ACRN.

.. toctree::
   :maxdepth: 1

   virtio-blk
   virtio-net
   virtio-input
   virtio-console
   virtio-rnd
   virtio-i2c
   virtio-gpio
657