Searched refs:IPI (Results 1 – 25 of 36) sorted by relevance
1 Xilinx IPI Mailbox Controller
5 messaging between two Xilinx Zynq UltraScale+ MPSoC IPI agents. Each IPI
9 | Xilinx ZynqMP IPI Controller |
21 Hardware | | IPI Agent | | IPI Buffers | |
26 | Xilinx IPI Agent Block |
34 IPI agent node:
39 - xlnx,ipi-id: local Xilinx IPI agent ID
43 Internal IPI mailbox node:
44 - reg: IPI buffers address ranges
89 /* APU<->RPU0 IPI mailbox controller */
[all …]
47 order to perform some KVM maintenance. To do so, an IPI is sent, forcing
53 1) Send an IPI. This forces a guest mode exit.
68 as well as to avoid sending unnecessary IPIs (see "IPI Reduction"), and
69 even to ensure IPI acknowledgements are waited upon (see "Waiting for
162 If, for example, the VCPU is sleeping, so no IPI is necessary, then
198 Reduction") must be certain when it's safe to not send the IPI. One
222 ...abort guest entry... ...send IPI...
233 IPI Reduction
249 KVM_REQUEST_WAIT flag changes the condition for sending an IPI from
256 As the determination of whether or not to send an IPI depends on the
[all …]
166 :Purpose: Hypercall used to yield if the IPI target vCPU is preempted
170 :Usage example: When sending a call-function IPI-many to vCPUs, yield if
171 any of the IPI target vCPUs was preempted.
11 # when returning from IPI handler, and when returning to user-space.
15 # x86-32 uses IRET as return from interrupt, which takes care of the IPI.
19 # x86-64 uses IRET as return from interrupt, which takes care of the IPI.
27 the IPI (and the associated cost of handling the IPI).
112 Once a task has been selected for all the siblings in the core, an IPI is sent to
113 siblings for whom a new task was selected. Siblings on receiving the IPI will
130 When the highest priority task is selected to run, a reschedule-IPI is sent to
142 (victim) to enter idle mode. This is because the sending of the IPI would bring
145 which may not be worth protecting. It is also possible that the IPI is received
171 IPI processing delays
173 Core scheduling selects only trusted tasks to run together. IPI is used to notify
175 receiving of the IPI on some arch (on x86, this has not been observed). This may
177 IPI. Even though cache is flushed on entry to user mode, victim tasks on siblings
50 * Pending IPI (inter-processor interrupt) priority, 8 bits
51 Zero is the highest priority, 255 means no IPI is pending.
54 Zero means no interrupt pending, 2 means an IPI is pending
61 interrupt of the device being passed-through or the initial IPI ESB
71 non-IPI interrupts to a single CPU at a time (EG: Freescale MPIC).
127 2 = MPIC inter-processor interrupt (IPI)
130 the MPIC IPI number. The type-specific
193 * MPIC IPI interrupts. Note the interrupt
27 each of which results in an IPI to the target CPU.
48 The dotted arrows denote indirect action, for example, an IPI
55 ``smp_call_function_single()`` to send the CPU an IPI, which
96 | IPI the CPU to safely interact with the upcoming |
128 that the CPU went idle while the IPI was in flight. If the CPU is idle,
142 grace periods. In addition, attempting to IPI offline CPUs will result
143 in splats, but failing to IPI online CPUs can result in too-short grace
231 For RCU-sched, there is an additional check: If the IPI has interrupted
245 bitmask of CPUs that must be IPIed, just before sending each IPI, and
246 (either explicitly or implicitly) within the IPI handler.
[all …]
12 a remote vCPU to avoid sending an IPI (and the associated
13 cost of handling the IPI) when performing a wakeup.
339 HAC, IPI, SPDIF, HUDI, I2C, enumerator
365 INTC_VECT(HAC, 0x580), INTC_VECT(IPI, 0x5c0),
425 DMAC, I2C, HUDI, SPDIF, IPI, HAC, TMU, GPIO } },
430 { 0xffe00004, 0, 32, 8, /* INT2PRI1 */ { IPI, SPDIF, HUDI, I2C } },
252 bool "Xilinx ZynqMP IPI Mailbox"
255 Say yes here to add support for Xilinx IPI mailbox driver.
257 between processors with Xilinx ZynqMP IPI. It will place the
258 message to the IPI buffer and will access the IPI control
51 VECTOR handle_interrupt ; (19) Inter core Interrupt (IPI)
53 VECTOR handle_interrupt ; (21) Software Triggered Intr (Self IPI)
69 // Example with IPI mailbox method:
33 This uses IPI and IPC to communicate with remote processors.
147 /* IPI called on each CPU. */
82 # Generic IRQ IPI support
58 enabling and disabling static keys invoke IPI broadcasts, the latency
57 1 Soft-irq. Uses IPI to complete IOs across CPU nodes. Simulates the overhead
153 /* IPI called on each CPU. */
288 unless absolutely necessary. Please consider using an IPI to wake up
160 global clock event devices. The support of such hardware would involve IPI
161 CPU awakens, the scheduler will send an IPI that can result in
354 This indicates that CPU 7 has failed to respond to a reschedule IPI.
Completed in 32 milliseconds