.. _hv-device-passthrough:

Device Passthrough
##################

A critical part of virtualization is virtualizing devices: exposing all
aspects of a device, including its I/O, interrupts, DMA, and
configuration. There are three typical device virtualization methods:
emulation, para-virtualization, and passthrough. All three methods are used
in the ACRN project. Device emulation is discussed in
:ref:`hld-io-emulation`, para-virtualization is discussed in
:ref:`hld-virtio-devices`, and device passthrough is discussed here.

.. rst-class:: rst-columns2

.. contents::
   :depth: 1
   :local:

--------

In the ACRN project, device emulation means emulating all existing hardware
resources through the Device Model, a software component running in the
Service VM. Device emulation must maintain the same software interface as a
native device, providing transparency to the VM software stack. Passthrough,
implemented in the hypervisor, assigns a physical device to a VM so that the
VM can access the hardware device directly with minimal (if any) VMM
involvement.

The difference between device emulation and passthrough is shown in
:numref:`emu-passthru-diff`. Note that device emulation has a longer access
path, which results in worse performance than passthrough. Passthrough can
deliver near-native performance, but can't support device sharing.

.. figure:: images/passthru-image30.png
   :align: center
   :name: emu-passthru-diff

   Difference Between Emulation and Passthrough

Passthrough in the hypervisor provides the following functionality to allow
a VM to access PCI devices directly:

-  VT-d DMA remapping for PCI devices: the hypervisor sets up DMA
   remapping during the VM initialization phase.
-  VT-d interrupt-remapping for PCI devices: the hypervisor enables
   VT-d interrupt-remapping for PCI devices for security reasons.
-  MMIO remapping between virtual and physical BARs
-  Device configuration emulation
-  Remapping interrupts for PCI devices
-  ACPI configuration virtualization
-  GSI sharing violation check

The following diagram details the passthrough initialization control flow in
ACRN for a post-launched VM:

.. figure:: images/passthru-image22.png
   :align: center

   Passthrough Devices Initialization Control Flow

Passthrough Device Status
*************************

Most common devices on supported platforms are enabled for
passthrough, as detailed here:

.. figure:: images/passthru-image77.png
   :align: center

   Passthrough Device Status

Owner of Passthrough Devices
****************************

The ACRN hypervisor performs PCI enumeration to discover the PCI devices on
the platform. Depending on the hypervisor/VM configuration, the owner of a
PCI device is one of the following:

- **Hypervisor**: The debug version of the hypervisor uses a UART device as
  its console, so the UART device is owned by the hypervisor and is not
  visible to any VM. For now, the UART is the only PCI device that can be
  owned by the hypervisor.
- **Pre-launched VM**: The passthrough devices that will be used in a
  pre-launched VM are predefined in the VM configuration. These passthrough
  devices are owned by the pre-launched VM after the VM is created, and they
  are never removed from it. There can be pre-launched VMs in partitioned
  mode and hybrid mode.
- **Service VM**: All passthrough devices except those described above (owned
  by the hypervisor or pre-launched VMs) are assigned to the Service VM. Some
  of these devices can be assigned to a post-launched VM according to the
  passthrough device list specified in the parameters of the ACRN Device
  Model.
- **Post-launched VM**: A list of passthrough devices can be specified in the
  parameters of the ACRN Device Model. When a post-launched VM is created,
  the specified devices are moved from the Service VM domain to the
  post-launched VM domain. After the post-launched VM is powered off, the
  devices are moved back to the Service VM domain.


VT-d DMA Remapping
******************

To enable passthrough, DMA from a VM must be translated: the VM issues DMA
requests with guest physical addresses (GPA), while physical DMA requires
host physical addresses (HPA). One workaround is to build an identity mapping
so that GPA equals HPA, but this is not recommended because some VMs don't
support relocation well. To address this issue, Intel introduced VT-d in the
chipset, adding a remapping engine to translate GPA to HPA for DMA
operations.

Each VT-d engine (DMAR unit) maintains a remapping structure similar to a
page table, taking a device BDF (Bus/Dev/Func) as input and producing the
page table used for GPA-to-HPA translation as output. That translation page
table is similar to a normal multi-level page table.
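
The lookup can be pictured as follows. This is an illustrative sketch, not
ACRN's code, and the entry layouts are simplified relative to the VT-d
specification:

.. code-block:: c

   #include <stdint.h>

   #define BDF_BUS(bdf)   (((bdf) >> 8) & 0xFFU)   /* bits 15:8 */
   #define BDF_DEVFN(bdf) ((bdf) & 0xFFU)          /* bits 7:0  */

   /* Simplified 128-bit root- and context-table entries; in both, the low
    * quadword carries a 4KB-aligned pointer to the next-level structure. */
   struct root_entry    { uint64_t lo; uint64_t hi; };
   struct context_entry { uint64_t lo; uint64_t hi; };

   /* Device BDF in, base of the multi-level GPA->HPA page table out. */
   static uint64_t translation_table_base(const struct root_entry *root,
                                          uint16_t bdf)
   {
       const struct root_entry *re = &root[BDF_BUS(bdf)];
       const struct context_entry *ctx =
           (const struct context_entry *)(uintptr_t)(re->lo & ~0xFFFULL);

       return ctx[BDF_DEVFN(bdf)].lo & ~0xFFFULL;
   }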

VM DMA depends on Intel VT-d to translate GPA to HPA, so the VT-d IOMMU
engine must be enabled in ACRN before any device can be passed through. The
Service VM in ACRN is a VM running in non-root mode that also depends on VT-d
to access a device. In the Service VM's DMA remapping engine settings, GPA is
equal to HPA.

The ACRN hypervisor checks the DMA-Remapping Hardware unit Definition (DRHD)
structures in the host DMAR ACPI table to get basic information, then sets up
each DMAR unit. For simplicity, ACRN reuses the EPT table as the translation
table in the DMAR unit for each passthrough device. The control flow of
assigning and deassigning a passthrough device to/from a post-launched VM is
shown in the following figures:

.. figure:: images/passthru-image86.png
   :align: center

   Ptdev Assignment Control Flow

.. figure:: images/passthru-image42.png
   :align: center

   Ptdev Deassignment Control Flow

.. _vtd-posted-interrupt:


VT-d Interrupt-Remapping
************************

The VT-d interrupt-remapping architecture enables system software to control
and censor external interrupt requests generated by all sources, including
interrupt controllers (I/OxAPICs) and MSI/MSI-X capable devices such as
endpoints, root ports, and Root-Complex integrated endpoints. ACRN requires
the VT-d interrupt-remapping feature to be enabled for security reasons; if
the VT-d hardware doesn't support interrupt-remapping, ACRN refuses to boot
VMs. VT-d interrupt-remapping is NOT related to the translation from physical
interrupts to virtual interrupts or vice versa. Rather, it remaps the
interrupt index in the VT-d interrupt-remapping table to the physical
interrupt vector after verifying that the external interrupt request is
valid. The hypervisor still needs to translate the physical vector to the
virtual vector, as described below in :ref:`interrupt-remapping`.

VT-d posted interrupt (PI) enables direct delivery of external interrupts
from passthrough devices to VMs without having to exit to the hypervisor,
thereby improving interrupt performance. ACRN uses VT-d posted interrupts if
the platform supports them. VT-d distinguishes between remapped and posted
interrupt modes by bit 15 of the low 64 bits of the interrupt-remapping table
entry: if the bit is clear, the entry is remapped; if set, it's posted. The
idea is to keep a Posted Interrupt Descriptor (PID) in memory. The PID is a
64-byte data structure that contains several fields:

Posted Interrupt Request (PIR):
   a 256-bit field, one bit per request vector;
   this is where the interrupts are posted.

Suppress Notification (SN):
   determines whether to notify (``SN=0``) or not notify (``SN=1``) the CPU
   for non-urgent interrupts. For ACRN, all interrupts are treated as
   non-urgent. ACRN sets SN=0 during initialization and then never changes it
   at runtime.

Notification Vector (NV):
   the CPU must be notified with an interrupt and this
   field specifies the vector for notification.

Notification Destination (NDST):
   the physical APIC-ID of the destination.
   ACRN does not support vCPU migration. One vCPU always runs on the same
   pCPU, so for ACRN, NDST is never changed after initialization.

Outstanding Notification (ON):
   indicates if a notification event is outstanding.

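The layout can be summarized in C. This is a hedged sketch based on the field
descriptions above and the VT-d specification, not ACRN's actual definition:

.. code-block:: c

   #include <stdint.h>

   /* 64-byte Posted Interrupt Descriptor (PID), 64-byte aligned. */
   struct pi_desc {
       uint64_t pir[4];              /* PIR: 256 bits, one per vector */
       union {
           struct {
               uint16_t on   : 1;    /* Outstanding Notification */
               uint16_t sn   : 1;    /* Suppress Notification (ACRN: 0) */
               uint16_t rsvd : 14;
               uint8_t  nv;          /* Notification Vector */
               uint8_t  rsvd2;
               uint32_t ndst;        /* Notification Destination (APIC ID) */
           } bits;
           uint64_t value;
       } control;
       uint64_t reserved[3];         /* pad the descriptor to 64 bytes */
   } __attribute__((aligned(64)));
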
The ACRN scheduler supports vCPU scheduling, where two or more vCPUs can
share the same pCPU using a time-sharing technique. One issue emerges here
for the VT-d posted interrupt handling process: IRQs can arrive while the
target vCPU is in a halted state, so we must handle the case where the
running vCPU interrupted by the external interrupt is not the target vCPU
that should have received it.

Consider this scenario:

* vCPU0 runs on pCPU0 and then enters a halted state,
* the ACRN scheduler now chooses vCPU1 to run on pCPU0.

If an external interrupt from an assigned device destined to vCPU0
arrives at this time, we do not want this interrupt to be incorrectly
consumed by vCPU1 running on pCPU0. This would happen if we
allocated the same Activation Notification Vector (ANV) to all vCPUs.

To circumvent this issue, ACRN allocates unique ANVs for the vCPUs that
belong to the same pCPU. The ANVs need only be unique within each pCPU,
not across all vCPUs. Since vCPU0's ANV differs from vCPU1's ANV, if vCPU0
is in a halted state, external interrupts from an assigned device destined
to vCPU0 and delivered through the PID will not trigger posted interrupt
processing. Instead, a VMExit to ACRN happens, which can then process the
event, for example by waking up the halted vCPU0 and kicking it to run on
pCPU0.

For ACRN, up to ``CONFIG_MAX_VM_NUM`` vCPUs may be running on top of one
pCPU. ACRN does not support two vCPUs of the same VM running on top of the
same pCPU. This reduces the number of pre-allocated ANVs for posted
interrupts to ``CONFIG_MAX_VM_NUM``, and enables ACRN to avoid switching
between active and wake-up vector values in the posted interrupt descriptor
on vCPU scheduling state changes. ACRN uses the following formula to assign
posted interrupt vectors to vCPUs::

   NV = POSTED_INTR_VECTOR + vcpu->vm->vm_id

where ``POSTED_INTR_VECTOR`` is the starting vector (0xe3) for posted
interrupts. For example, a vCPU of the VM with ``vm_id`` 2 is notified with
vector 0xe5.

ACRN maintains a per-pCPU vCPU array that stores the pointers to the
assigned vCPUs for each pCPU and is indexed by ``vcpu->vm->vm_id``.
When a vCPU is created, ACRN adds it to the containing pCPU's vCPU array.
When a vCPU goes offline, ACRN removes it from the related vCPU array.
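
A minimal sketch of this bookkeeping, with illustrative names
(``MAX_PCPU_NUM`` and the function names are assumptions, not ACRN's
identifiers):

.. code-block:: c

   #include <stddef.h>
   #include <stdint.h>

   #define MAX_PCPU_NUM         8U      /* assumed platform constant */
   #define CONFIG_MAX_VM_NUM    8U      /* assumed build-time constant */
   #define POSTED_INTR_VECTOR   0xE3U

   struct acrn_vm   { uint16_t vm_id; };
   struct acrn_vcpu { struct acrn_vm *vm; };

   /* Per-pCPU array of assigned vCPUs, indexed by vcpu->vm->vm_id. */
   static struct acrn_vcpu *per_pcpu_vcpu[MAX_PCPU_NUM][CONFIG_MAX_VM_NUM];

   /* On vCPU creation: record the vCPU in its pCPU's array. */
   static void pi_vcpu_added(uint16_t pcpu_id, struct acrn_vcpu *vcpu)
   {
       per_pcpu_vcpu[pcpu_id][vcpu->vm->vm_id] = vcpu;
   }

   /* On vCPU offline: remove the vCPU from the array. */
   static void pi_vcpu_removed(uint16_t pcpu_id, const struct acrn_vcpu *vcpu)
   {
       per_pcpu_vcpu[pcpu_id][vcpu->vm->vm_id] = NULL;
   }

   /* Notification vector, unique per VM within one pCPU (formula above). */
   static uint32_t pi_notification_vector(const struct acrn_vcpu *vcpu)
   {
       return POSTED_INTR_VECTOR + vcpu->vm->vm_id;
   }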

An example illustrating this solution:

.. figure:: images/passthru-image50.png
   :align: center

ACRN sets ``SN=0`` during initialization and then never changes it at
runtime. This means posted interrupt notification is never suppressed.
After posting the interrupt in the Posted Interrupt Request (PIR) field,
VT-d always notifies the CPU using the interrupt vector NV, in both root and
non-root mode. With this scheme, if the target vCPU is running in VMX
non-root mode, it receives the interrupts coming from the passthrough device
without a VMExit (and therefore without any intervention by the ACRN
hypervisor).

If the target vCPU is in a halted state (in VMX non-root mode), a scheduling
request is raised to wake it up. This is needed to achieve real-time
behavior: if an RT VM is waiting for an event, it must be woken up
immediately when the event fires (a PI interrupt arrives).


MMIO Remapping
**************

For a PCI MMIO BAR, the hypervisor builds an EPT mapping between the virtual
BAR and the physical BAR so the VM can access the MMIO directly. There is
one exception: an MSI-X table may also reside in an MMIO BAR. The hypervisor
needs to trap accesses to the MSI-X table, so the pages that contain an
MSI-X table must not be accessed by the VM directly; no EPT mapping is built
for those pages.
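
A sketch of the page math (the helper name and signature are illustrative,
not ACRN APIs):

.. code-block:: c

   #include <stdint.h>

   #define PAGE_SIZE 0x1000ULL
   #define PAGE_MASK (~(PAGE_SIZE - 1ULL))

   /* Compute the 4KB-aligned GPA range covering the MSI-X table; EPT maps
    * the rest of the BAR directly and leaves [*start, *end) unmapped so
    * that guest accesses to the table trap into the hypervisor. */
   static void msix_trap_range(uint64_t bar_base, uint64_t table_offset,
                               uint64_t table_size,
                               uint64_t *start, uint64_t *end)
   {
       *start = (bar_base + table_offset) & PAGE_MASK;    /* round down */
       *end   = (bar_base + table_offset + table_size
                 + PAGE_SIZE - 1ULL) & PAGE_MASK;         /* round up   */
   }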

Device Configuration Emulation
******************************

The PCI configuration space can be accessed through the PCI-compatible
Configuration Mechanism (I/O ports 0xCF8/0xCFC) and the PCI Express Enhanced
Configuration Access Mechanism (PCI MMCONFIG). The ACRN hypervisor traps
these PCI configuration space accesses and emulates them. Refer to
:ref:`split-device-model` for details.
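
For the port-based mechanism, the guest writes an address to port 0xCF8 and
then accesses data through port 0xCFC. A sketch of decoding that address
(a standard PCI format; the struct and function names are illustrative):

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>

   struct pci_cfg_addr {
       bool    enable;   /* bit 31: configuration access enabled */
       uint8_t bus;      /* bits 23:16 */
       uint8_t dev;      /* bits 15:11 */
       uint8_t func;     /* bits 10:8  */
       uint8_t reg;      /* bits 7:2, dword-aligned register offset */
   };

   static struct pci_cfg_addr decode_cf8(uint32_t val)
   {
       struct pci_cfg_addr a;

       a.enable = ((val >> 31) & 0x1U) != 0U;
       a.bus    = (uint8_t)((val >> 16) & 0xFFU);
       a.dev    = (uint8_t)((val >> 11) & 0x1FU);
       a.func   = (uint8_t)((val >> 8) & 0x7U);
       a.reg    = (uint8_t)(val & 0xFCU);
       return a;
   }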

MSI-X Table Emulation
*********************

VM accesses to an MSI-X table must be trapped so that the hypervisor has the
information needed to map between virtual and physical vectors. EPT mapping
is skipped for the 4KB pages that contain an MSI-X table.

There are three situations for the emulation of MSI-X tables:

- **Service VM**: Accesses to an MSI-X table are handled by the hypervisor
  MMIO handler (covering the containing 4KB pages, rounded down and up). The
  hypervisor remaps the interrupts.
- **Post-launched VM**: Accesses to an MSI-X table are handled by the Device
  Model MMIO handler (covering the containing 4KB pages, rounded down and
  up). When the Device Model (in the Service VM) writes to the table, the
  write is intercepted by the hypervisor MMIO handler, and the hypervisor
  remaps the interrupts.
- **Pre-launched VM**: Writes to the MMIO region of an MSI-X table BAR are
  handled by the hypervisor MMIO handler. If the access offset falls within
  the MSI-X table, i.e., within [offset, offset + table_size), the hypervisor
  remaps the interrupts; see the sketch after this list.
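
A trivial sketch of that range check (the function name is illustrative):

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>

   /* True if an MMIO access at `offset` into the BAR hits the MSI-X table. */
   static bool in_msix_table(uint64_t offset, uint64_t table_offset,
                             uint64_t table_size)
   {
       return (offset >= table_offset) &&
              (offset < (table_offset + table_size));
   }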


.. _interrupt-remapping:

Interrupt Remapping
*******************

When the physical interrupt of a passthrough device arrives, the hypervisor
has to distribute it to the relevant VM according to the interrupt remapping
relationships. The structure ``ptirq_remapping_info`` defines the
subordination relationship between the physical interrupt and the VM, the
virtual destination, and related information. See the following figure for
details:

.. figure:: images/passthru-image91.png
   :align: center

   Remapping of Physical Interrupts

There are two types of interrupt sources: IOAPIC and MSI. The hypervisor
records different information for interrupt distribution: the physical and
virtual IOAPIC pin for an IOAPIC source, and the physical and virtual BDF
plus other information for an MSI source.
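
The per-source bookkeeping might be pictured as follows. This is an
illustrative shape only, not the actual layout of ``ptirq_remapping_info``:

.. code-block:: c

   #include <stdint.h>

   enum ptirq_intr_type { PTDEV_INTR_INTX, PTDEV_INTR_MSI };

   struct ptirq_info {
       enum ptirq_intr_type type;
       union {
           struct {                   /* IOAPIC source */
               uint32_t phys_pin;
               uint32_t virt_pin;
           } intx;
           struct {                   /* MSI/MSI-X source */
               uint16_t phys_bdf;
               uint16_t virt_bdf;
               uint32_t entry_nr;     /* MSI-X table entry index */
           } msi;
       } source;
       uint16_t vm_id;                /* VM the interrupt is routed to */
       uint32_t phys_vector;          /* physical vector on the pCPU */
       uint32_t virt_vector;          /* vector injected into the VM */
   };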

Service VM passthrough is also within the scope of interrupt remapping; it is
done on demand rather than at hypervisor initialization.

.. figure:: images/passthru-image102.png
   :align: center
   :name: init-remapping

   Initialization of Remapping of Virtual IOAPIC Interrupts for Service VM

:numref:`init-remapping` above illustrates how (virtual) IOAPIC interrupts
are remapped for the Service VM. A VM exit occurs whenever the Service VM
tries to unmask an interrupt in the (virtual) IOAPIC by writing to the
Redirection Table Entry (RTE). The hypervisor then invokes the IOAPIC
emulation handler (refer to :ref:`hld-io-emulation` for details on I/O
emulation), which calls APIs to set up a remapping for the to-be-unmasked
interrupt.

Remapping of (virtual) MSI interrupts is set up in a similar sequence:

.. figure:: images/passthru-image98.png
   :align: center

   Initialization of Remapping of Virtual MSI for Service VM

This figure illustrates how mappings of MSI or MSI-X are set up for the
Service VM. The Service VM is responsible for issuing a hypercall to notify
the hypervisor before it configures the PCI configuration space to enable an
MSI. The hypervisor takes this opportunity to set up a remapping for the
given MSI or MSI-X before it is actually enabled by the Service VM.

When the User VM accesses the physical device through passthrough, the
following steps occur:

-  The User VM gets a virtual interrupt.
-  A VM exit happens, and the trapped vCPU is the target where the interrupt
   will be injected.
-  The hypervisor handles the interrupt and translates the vector
   according to ``ptirq_remapping_info``.
-  The hypervisor delivers the interrupt to the User VM.

When the Service VM needs to use the physical device, passthrough is also
active because the Service VM is the first VM. The detailed steps are:

-  The Service VM gets all physical interrupts. It assigns different
   interrupts to different VMs during initialization and reassigns them when
   a VM is created or deleted.
-  When a physical interrupt is trapped, an exception happens after the VMCS
   has been set.
-  The hypervisor handles the VM exit according to
   ``ptirq_remapping_info`` and translates the vector.
-  The interrupt is injected the same way as a virtual interrupt.

ACPI Virtualization
*******************

ACPI virtualization is designed in ACRN with these assumptions:

-  The hypervisor has no knowledge of ACPI,
-  The Service VM owns all physical ACPI resources,
-  The User VM sees virtual ACPI resources emulated by the Device Model.

Some passthrough devices require a physical ACPI table entry for
initialization. The Device Model creates such a device entry based on the
physical one, matched by vendor ID and device ID. This virtualization is
implemented in the Service VM Device Model and is not in the scope of the
hypervisor. For pre-launched VMs, the ACRN hypervisor doesn't support ACPI
virtualization, so devices relying on ACPI tables are not supported.

GSI Sharing Violation Check
***************************

All PCI devices that share the same GSI should be assigned to the same VM to
avoid physical GSI sharing between multiple VMs. In partitioned mode or
hybrid mode, the PCI devices assigned to a pre-launched VM are statically
predefined; developers must take care not to violate this rule. For a
post-launched VM, the ACRN Device Model puts the devices sharing the same
GSI pin into a GSI sharing group (devices that don't support MSI). The
devices in the same group should be assigned together to the current VM;
otherwise, none of them should be assigned to the current VM. A device that
violates the rule is rejected for passthrough. The checking logic is
implemented in the Device Model and is not in the scope of the hypervisor.
The platform-specific GSI information must be filled in
``devicemodel/hw/pci/platform_gsi_info.c`` for the target platform to
activate the checking of GSI sharing violations.
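
A hypothetical sketch of the rule itself (names and types are illustrative,
not the Device Model's actual checking code):

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>

   struct gsi_dev {
       uint32_t gsi;        /* GSI pin wired to the device */
       bool     assigned;   /* device is assigned to the VM being launched */
   };

   /* Every device sharing a GSI must have the same assignment status as
    * all other devices on that GSI: all assigned, or none assigned. */
   static bool gsi_sharing_ok(const struct gsi_dev *devs, int n)
   {
       for (int i = 0; i < n; i++) {
           for (int j = i + 1; j < n; j++) {
               if ((devs[i].gsi == devs[j].gsi) &&
                   (devs[i].assigned != devs[j].assigned)) {
                   return false;   /* sharing violation */
               }
           }
       }
       return true;
   }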

.. _PCIe PTM implementation:

PCIe Precision Time Measurement (PTM)
*************************************

The PCI Express (PCIe) specification defines a Precision Time Measurement
(PTM) mechanism that enables time coordination and synchronization of events
across multiple PCIe components with independent local time clocks within
the same system. Intel supports PTM on several of its systems and devices,
such as PTM root capabilities on Whiskey Lake and Tiger Lake PCIe root
ports, and PTM device support on the Intel I225-V/I225-LM family of Ethernet
controllers. For further details on PTM, refer to the `PCIe specification
<https://pcisig.com/specifications>`_.

ACRN adds PCIe root port emulation in the hypervisor to support the PTM
feature and emulates a simple PTM hierarchy. ACRN enables PTM in a
post-launched VM if the user sets the ``enable_ptm`` option when passing
through a device to the post-launched VM. When PTM is enabled, the
passthrough device is connected to a virtual root port instead of the host
bridge.

By default, the :ref:`vm.PTM` option is disabled in ACRN VMs. Use the
:ref:`acrn_configurator_tool` to enable PTM in the scenario XML file that
configures the VM.

Here is an example launch script that configures a supported Ethernet card
for passthrough and enables PTM on it:

.. code-block:: bash
   :emphasize-lines: 9-11,17

   declare -A passthru_vpid
   declare -A passthru_bdf
   passthru_vpid=(
    ["ethptm"]="8086 15f2"
    )
   passthru_bdf=(
    ["ethptm"]="0000:aa:00.0"
    )
   echo ${passthru_vpid["ethptm"]} > /sys/bus/pci/drivers/pci-stub/new_id
   echo ${passthru_bdf["ethptm"]} > /sys/bus/pci/devices/${passthru_bdf["ethptm"]}/driver/unbind
   echo ${passthru_bdf["ethptm"]} > /sys/bus/pci/drivers/pci-stub/bind

   acrn-dm -m $mem_size -s 0:0,hostbridge \
      -s 3,virtio-blk,user-vm-test.img \
      -s 4,virtio-net,tap=tap0 \
      -s 5,virtio-console,@stdio:stdio_port \
      -s 6,passthru,a9/00/0,enable_ptm \
      --ovmf /usr/share/acrn/bios/OVMF.fd

And here is the bus hierarchy from the User VM (as shown by the ``lspci``
command)::

   lspci -tv
   -[0000:00]-+-00.0  Network Appliance Corporation Device 1275
              +-03.0  Red Hat, Inc. Virtio block device
              +-04.0  Red Hat, Inc. Virtio network device
              +-05.0  Red Hat, Inc. Virtio console
              \-06.0-[01]----00.0  Intel Corporation Device 15f2


PTM Implementation Notes
========================

To simplify the PTM support implementation, the virtual root port supports
only the most basic PCIe configuration and operation, in addition to the PTM
capabilities.

For a post-launched VM, you enable PTM by setting the ``enable_ptm`` option
for the passthrough device (as shown above).

.. figure:: images/PTM-hld-PTM-flow.png
   :align: center
   :width: 700
   :name: ptm-flow

   PTM-enabling Workflow in Post-launched VM

As shown in :numref:`ptm-flow`, PTM is enabled in the root port during
hypervisor startup. The Device Model (DM) then checks whether the
passthrough device supports PTM requestor capabilities and whether the
corresponding root port supports PTM root capabilities, along with some
other sanity checks. If an error is detected during these checks, the error
is reported and ACRN does not enable PTM in the post-launched VM. This
doesn't prevent the user from launching the post-launched VM and passing
through the device to the VM. If no error is detected, the Device Model uses
the ``add_vdev`` hypercall to add a virtual root port (VRP), acting as the
PTM root, to the post-launched VM before passing through the device to the
post-launched VM.

.. figure:: images/PTM-hld-PTM-passthru.png
   :align: center
   :width: 700
   :name: ptm-vrp

   PTM-enabled PCI Device Passthrough to Post-launched VM

:numref:`ptm-vrp` shows that, after enabling PTM, the passthrough device
connects to the virtual root port instead of the virtual host bridge.

To use PTM in a virtualized environment, you may want to first verify that
PTM is supported by the device and is enabled on the bare metal machine.
If supported, follow these steps to enable PTM in the post-launched VM:

1. Make sure that PTM is enabled in the guest kernel. In the Linux kernel,
   for example, set ``CONFIG_PCIE_PTM=y``.
2. Not every PCI device supports PTM. One example that does is the Intel
   I225-V Ethernet controller. If you pass through this card to the
   post-launched VM, make sure the post-launched VM uses a version of the
   IGC driver that supports PTM.
3. In the Device Model launch script, add the ``enable_ptm`` option to the
   passthrough device. For example:

   .. code-block:: bash
      :emphasize-lines: 5

      $ acrn-dm -m $mem_size -s 0:0,hostbridge \
          -s 3,virtio-blk,user-vm-test.img \
          -s 4,virtio-net,tap=tap0 \
          -s 5,virtio-console,@stdio:stdio_port \
          -s 6,passthru,a9/00/0,enable_ptm \
          --ovmf /usr/share/acrn/bios/OVMF.fd

4. You can check that PTM is correctly enabled on the post-launched VM by
   displaying the PCI bus hierarchy on the post-launched VM using the
   ``lspci`` command:

   .. code-block:: bash
      :emphasize-lines: 12,20

      lspci -tv
        -[0000:00]-+-00.0  Network Appliance Corporation Device 1275
                   +-03.0  Red Hat, Inc. Virtio block device
                   +-04.0  Red Hat, Inc. Virtio network device
                   +-05.0  Red Hat, Inc. Virtio console
                   \-06.0-[01]----00.0  Intel Corporation Device 15f2

      sudo lspci -vv # (Only relevant output is shown)
        00:00.0 Host bridge: Network Appliance Corporation Device 1275
        00:06.0 PCI bridge: Intel Corporation Sunrise Point-LP PCI Express Root Port #5 (rev 02) (prog-if 00 [Normal decode])
        . . .
                Capabilities: [100 v1] Precision Time Measurement
                        PTMCap: Requester:- Responder:+ Root:+
                        PTMClockGranularity: 4ns
                        PTMControl: Enabled:+ RootSelected:+
                        PTMEffectiveGranularity: 4ns
                Kernel driver in use: pcieport
        01:00.0 Ethernet controller: Intel Corporation Device 15f2 (rev 01)
        . . .
                Capabilities: [1f0 v1] Precision Time Measurement
                        PTMCap: Requester:+ Responder:- Root:-
                        PTMClockGranularity: 4ns
                        PTMControl: Enabled:+ RootSelected:-
                        PTMEffectiveGranularity: 4ns
                Kernel driver in use: igc


API Data Structures and Interfaces
**********************************

The following common APIs are provided to initialize interrupt remapping for
VMs:

.. doxygenfunction:: ptirq_intx_pin_remap
   :project: Project ACRN

.. doxygenfunction:: ptirq_prepare_msix_remap
   :project: Project ACRN

Post-launched VMs need to pre-allocate interrupt entries during VM
initialization and free them during VM de-initialization. The following APIs
are provided to pre-allocate and free interrupt entries for post-launched
VMs:

.. doxygenfunction:: ptirq_add_intx_remapping
   :project: Project ACRN

.. doxygenfunction:: ptirq_remove_intx_remapping
   :project: Project ACRN

.. doxygenfunction:: ptirq_remove_msix_remapping
   :project: Project ACRN

The following API is provided to acknowledge a virtual interrupt:

.. doxygenfunction:: ptirq_intx_ack
   :project: Project ACRN

The following APIs are provided to handle a ptdev interrupt:

.. doxygenfunction:: ptdev_init
   :project: Project ACRN

.. doxygenfunction:: ptirq_softirq
   :project: Project ACRN

.. doxygenfunction:: ptirq_alloc_entry
   :project: Project ACRN

.. doxygenfunction:: ptirq_release_entry
   :project: Project ACRN

.. doxygenfunction:: ptdev_release_all_entries
   :project: Project ACRN

.. doxygenfunction:: ptirq_activate_entry
   :project: Project ACRN

.. doxygenfunction:: ptirq_deactivate_entry
   :project: Project ACRN

.. doxygenfunction:: ptirq_dequeue_softirq
   :project: Project ACRN

.. doxygenfunction:: ptirq_get_intr_data
   :project: Project ACRN