
Searched refs:IPI (Results 1 – 25 of 36) sorted by relevance

/linux/Documentation/devicetree/bindings/mailbox/
xlnx,zynqmp-ipi-mailbox.txt
1 Xilinx IPI Mailbox Controller
5 messaging between two Xilinx Zynq UltraScale+ MPSoC IPI agents. Each IPI
9 | Xilinx ZynqMP IPI Controller |
21 Hardware | | IPI Agent | | IPI Buffers | |
26 | Xilinx IPI Agent Block |
34 IPI agent node:
39 - xlnx,ipi-id: local Xilinx IPI agent ID
43 Internal IPI mailbox node:
44 - reg: IPI buffers address ranges
89 /* APU<->RPU0 IPI mailbox controller */
[all …]
/linux/Documentation/virt/kvm/
vcpu-requests.rst
47 order to perform some KVM maintenance. To do so, an IPI is sent, forcing
53 1) Send an IPI. This forces a guest mode exit.
68 as well as to avoid sending unnecessary IPIs (see "IPI Reduction"), and
69 even to ensure IPI acknowledgements are waited upon (see "Waiting for
162 If, for example, the VCPU is sleeping, so no IPI is necessary, then
198 Reduction") must be certain when it's safe to not send the IPI. One
222 ...abort guest entry... ...send IPI...
233 IPI Reduction
249 KVM_REQUEST_WAIT flag changes the condition for sending an IPI from
256 As the determination of whether or not to send an IPI depends on the
[all …]
hypercalls.rst
166 :Purpose: Hypercall used to yield if the IPI target vCPU is preempted
170 :Usage example: When sending a call-function IPI-many to vCPUs, yield if
171 any of the IPI target vCPUs was preempted.
/linux/Documentation/features/sched/membarrier-sync-core/
arch-support.txt
11 # when returning from IPI handler, and when returning to user-space.
15 # x86-32 uses IRET as return from interrupt, which takes care of the IPI.
19 # x86-64 uses IRET as return from interrupt, which takes care of the IPI.
/linux/Documentation/translations/zh_CN/virt/
guest-halt-polling.rst
25 the IPI (and the associated cost of handling the IPI).
/linux/Documentation/admin-guide/hw-vuln/
core-scheduling.rst
112 Once a task has been selected for all the siblings in the core, an IPI is sent to
113 siblings for whom a new task was selected. Siblings on receiving the IPI will
130 When the highest priority task is selected to run, a reschedule-IPI is sent to
142 (victim) to enter idle mode. This is because the sending of the IPI would bring
145 which may not be worth protecting. It is also possible that the IPI is received
171 IPI processing delays
173 Core scheduling selects only trusted tasks to run together. IPI is used to notify
175 receiving of the IPI on some arch (on x86, this has not been observed). This may
177 IPI. Even though cache is flushed on entry to user mode, victim tasks on siblings
/linux/Documentation/virt/kvm/devices/
xics.rst
50 * Pending IPI (inter-processor interrupt) priority, 8 bits
51 Zero is the highest priority, 255 means no IPI is pending.
54 Zero means no interrupt pending, 2 means an IPI is pending
xive.rst
61 interrupt of the device being passed-through or the initial IPI ESB
/linux/Documentation/devicetree/bindings/powerpc/fsl/
mpic.txt
71 non-IPI interrupts to a single CPU at a time (EG: Freescale MPIC).
127 2 = MPIC inter-processor interrupt (IPI)
130 the MPIC IPI number. The type-specific
193 * MPIC IPI interrupts. Note the interrupt
/linux/Documentation/RCU/Design/Expedited-Grace-Periods/
Expedited-Grace-Periods.rst
27 each of which results in an IPI to the target CPU.
48 The dotted arrows denote indirect action, for example, an IPI
55 ``smp_call_function_single()`` to send the CPU an IPI, which
96 | IPI the CPU to safely interact with the upcoming |
128 that the CPU went idle while the IPI was in flight. If the CPU is idle,
142 grace periods. In addition, attempting to IPI offline CPUs will result
143 in splats, but failing to IPI online CPUs can result in too-short grace
231 For RCU-sched, there is an additional check: If the IPI has interrupted
245 bitmask of CPUs that must be IPIed, just before sending each IPI, and
246 (either explicitly or implicitly) within the IPI handler.
[all …]
/linux/Documentation/virt/
guest-halt-polling.rst
12 a remote vCPU to avoid sending an IPI (and the associated
13 cost of handling the IPI) when performing a wakeup.
/linux/arch/sh/kernel/cpu/sh4a/
setup-sh7770.c
339 HAC, IPI, SPDIF, HUDI, I2C, enumerator
365 INTC_VECT(HAC, 0x580), INTC_VECT(IPI, 0x5c0),
425 DMAC, I2C, HUDI, SPDIF, IPI, HAC, TMU, GPIO } },
430 { 0xffe00004, 0, 32, 8, /* INT2PRI1 */ { IPI, SPDIF, HUDI, I2C } },
/linux/drivers/mailbox/
Kconfig
252 bool "Xilinx ZynqMP IPI Mailbox"
255 Say yes here to add support for Xilinx IPI mailbox driver.
257 between processors with Xilinx ZynqMP IPI. It will place the
258 message to the IPI buffer and will access the IPI control
/linux/arch/arc/kernel/
entry-arcv2.S
51 VECTOR handle_interrupt ; (19) Inter core Interrupt (IPI)
53 VECTOR handle_interrupt ; (21) Software Triggered Intr (Self IPI)
/linux/Documentation/devicetree/bindings/power/reset/
xlnx,zynqmp-power.yaml
69 // Example with IPI mailbox method:
/linux/drivers/rpmsg/
Kconfig
33 This uses IPI and IPC to communicate with remote processors.
/linux/Documentation/translations/zh_CN/core-api/
local_ops.rst
147 /* IPI called on each CPU. */
/linux/kernel/irq/
Kconfig
82 # Generic IRQ IPI support
/linux/lib/
Kconfig.kfence
58 enabling and disabling static keys invoke IPI broadcasts, the latency
/linux/Documentation/block/
null_blk.rst
57 1 Soft-irq. Uses IPI to complete IOs across CPU nodes. Simulates the overhead
/linux/Documentation/core-api/
local_ops.rst
153 /* IPI called on each CPU. */
this_cpu_ops.rst
288 unless absolutely necessary. Please consider using an IPI to wake up
/linux/Documentation/timers/
highres.rst
160 global clock event devices. The support of such hardware would involve IPI
/linux/Documentation/admin-guide/
kernel-per-CPU-kthreads.rst
161 CPU awakens, the scheduler will send an IPI that can result in
/linux/Documentation/RCU/
stallwarn.rst
354 This indicates that CPU 7 has failed to respond to a reschedule IPI.
