DMA Fences
----------

.. kernel-doc:: drivers/dma-buf/dma-fence.c
   :doc: DMA fences overview
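
To orient readers before the rules below, here is a minimal sketch of the
driver side of this API. The example_* names are hypothetical; dma_fence_init(),
dma_fence_signal() and dma_fence_put() are the real exported interfaces, and
only .get_driver_name and .get_timeline_name are mandatory ops.

.. code-block:: c

   #include <linux/dma-fence.h>
   #include <linux/slab.h>
   #include <linux/spinlock.h>

   static const char *example_driver_name(struct dma_fence *fence)
   {
           return "example";
   }

   static const char *example_timeline_name(struct dma_fence *fence)
   {
           return "example-timeline";
   }

   static const struct dma_fence_ops example_fence_ops = {
           .get_driver_name = example_driver_name,
           .get_timeline_name = example_timeline_name,
   };

   /* Fences on the same timeline share one lock. */
   static DEFINE_SPINLOCK(example_fence_lock);

   /* Create a fence; context would come from dma_fence_context_alloc(). */
   static struct dma_fence *example_fence_create(u64 context, u64 seqno)
   {
           struct dma_fence *fence = kzalloc(sizeof(*fence), GFP_KERNEL);

           if (!fence)
                   return NULL;

           dma_fence_init(fence, &example_fence_ops, &example_fence_lock,
                          context, seqno);
           return fence;
   }

   /* Completion side, typically called from the driver's IRQ handler. */
   static void example_fence_complete(struct dma_fence *fence)
   {
           dma_fence_signal(fence);
           dma_fence_put(fence);
   }

Waiters block with dma_fence_wait(), which is why everything below revolves
around guaranteeing that dma_fence_signal() is reached in bounded time.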
Indefinite DMA Fences
~~~~~~~~~~~~~~~~~~~~~

At various times struct dma_fence with an indefinite time until dma_fence_wait()
finishes have been proposed. Examples include:

* Future fences, used in HWC1 to signal when a buffer isn't used by the display
  any longer, and created with the screen update that makes the buffer visible.
* Proxy fences, proposed to handle &drm_syncobj for which the fence has not yet
  materialized, used to asynchronously delay command submission.
* Userspace fences or gpu futexes, fine-grained locking within a command buffer
  that userspace uses for synchronization across engines or with the CPU, which
  are then imported as a DMA fence for integration into existing winsys
  protocols. A sketch of this scheme follows the list.
* Long-running compute command buffers, while still using traditional end of
  batch DMA fences for memory management instead of context preemption DMA
  fences which get reattached when the compute job is rescheduled.
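
To make the userspace fence example above concrete, here is a hedged
illustration; the struct and its fields are invented for this document and are
not an existing interface:

.. code-block:: c

   /*
    * Illustration only: a userspace fence / gpu futex boils down to a
    * memory location that userspace or a GPU program eventually advances.
    */
   struct userspace_fence {
           u64 *value;        /* mapped into userspace and onto the GPU */
           u64 wait_value;    /* "signaled" once *value >= wait_value */
   };

The kernel cannot tell when, or even whether, *value will ever reach
wait_value, which is exactly what makes such a fence indefinite.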
Common to all these schemes is that userspace controls the dependencies of these
fences and controls when they fire. Mixing indefinite fences with normal
in-kernel DMA fences does not work, even when a fallback timeout is included to
protect against malicious userspace:
* Only the kernel knows about all DMA fence dependencies, userspace is not aware
  of dependencies injected due to memory management or scheduler decisions.

* Only userspace knows about all dependencies in indefinite fences and when
  exactly they will complete, the kernel has no visibility.
Furthermore the kernel has to be able to hold up userspace command submission
for memory management needs, which means we must support indefinite fences being
dependent upon DMA fences. If the kernel also treats indefinite fences like DMA
fences, as any of the above proposals would, there is the potential for a
deadlock:
.. kernel-render:: DOT
   :alt: Indefinite Fencing Dependency Cycle
   :caption: Indefinite Fencing Dependency Cycle

   digraph "Fencing Cycle" {
      node [shape=box bgcolor=grey style=filled]
      kernel [label="Kernel DMA Fences"]
      userspace [label="userspace controlled fences"]
      kernel -> userspace [label="memory management"]
      userspace -> kernel [label="Future fence, fence proxy, ..."]

      { rank=same; kernel userspace }
   }
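
The same cycle in code form. This is a hedged sketch of the forbidden pattern,
built around an invented proxy_fence structure; dma_fence_signal() is the only
real API used:

.. code-block:: c

   #include <linux/dma-fence.h>
   #include <linux/wait.h>
   #include <linux/workqueue.h>

   /* Hypothetical proxy that completes on a userspace-controlled value. */
   struct proxy_fence {
           struct dma_fence base;
           struct work_struct work;
           wait_queue_head_t wq;
           u64 *value;
           u64 wait_value;
   };

   static void proxy_fence_work(struct work_struct *work)
   {
           struct proxy_fence *p = container_of(work, struct proxy_fence, work);

           /*
            * BAD: dma_fence_signal() is gated on userspace. If that
            * userspace thread in turn waits on a kernel DMA fence held up
            * by memory management, the cycle from the diagram closes and
            * nothing ever completes.
            */
           wait_event(p->wq, READ_ONCE(*p->value) >= p->wait_value);
           dma_fence_signal(&p->base);
   }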
The only solution to avoid such dependency loops is not allowing indefinite
fences in the kernel. This means:
* No future fences, proxy fences or userspace fences imported as DMA fences,
  with or without a timeout.
* No DMA fences that signal end of batchbuffer for command submission where
  userspace is allowed to use userspace fencing or long running compute
  workloads. This also means no implicit fencing for shared buffers in these
  cases.
Recoverable Hardware Page Faults Implications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Modern hardware supports recoverable page faults, which has a lot of
implications for DMA fences.
First, a pending page fault obviously holds up the work that faulted, and
repairing the fault requires the kernel to allocate memory, either system
memory or, through migration, GPU memory. But memory allocations are not
allowed to gate completion of DMA fences, which means any workload using
recoverable page faults cannot use DMA fences for synchronization.
Synchronization fences controlled by userspace must be used instead.
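
The kernel can catch violations of this rule at runtime: the
dma_fence_begin_signalling() / dma_fence_end_signalling() lockdep annotations
mark the code that must run between publishing a fence and signalling it. A
sketch, with example_submit_and_signal() as a hypothetical driver function:

.. code-block:: c

   static void example_submit_and_signal(struct dma_fence *fence)
   {
           bool cookie = dma_fence_begin_signalling();

           /*
            * Everything from here to dma_fence_signal() is on the fence's
            * critical path. A GFP_KERNEL allocation in this section can
            * recurse into memory reclaim, which may wait on this very
            * fence, so lockdep flags it - enforcing exactly the rule
            * stated above.
            */

           dma_fence_signal(fence);
           dma_fence_end_signalling(cookie);
   }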
On GPUs this poses a problem, because current desktop compositor protocols on
Linux rely on DMA fences, which means without an entirely new userspace stack
built on top of userspace fences, they cannot benefit from recoverable page
faults.
Where DMA fence workloads and page-fault-based workloads share hardware
resources, drivers must keep the two strictly apart. As a last resort, if the
hardware provides no useful reservation mechanics, this means switching cleanly
between jobs requiring DMA fences and jobs requiring page fault handling: all
DMA fences must complete before a compute job with page fault handling can be
inserted into the scheduler queue, and vice versa, before a DMA fence can be
made visible anywhere in the system, all compute workloads must be preempted to
guarantee that all pending GPU page faults are flushed.
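
A sketch of such a hard mode switch. All names here are hypothetical driver
helpers invented for this document, not an existing API:

.. code-block:: c

   enum gpu_mode {
           GPU_MODE_DMA_FENCE,
           GPU_MODE_PAGE_FAULT,
   };

   struct gpu_device {
           enum gpu_mode mode;
           /* ... */
   };

   struct gpu_job;

   /* Hypothetical: block until every in-flight DMA fence has signalled. */
   int gpu_wait_all_dma_fences(struct gpu_device *gpu);
   /* Hypothetical: hand a job to the scheduler. */
   int gpu_scheduler_push(struct gpu_device *gpu, struct gpu_job *job);

   static int gpu_queue_page_fault_job(struct gpu_device *gpu,
                                       struct gpu_job *job)
   {
           int ret;

           if (gpu->mode != GPU_MODE_PAGE_FAULT) {
                   /* All DMA fences must complete before switching modes. */
                   ret = gpu_wait_all_dma_fences(gpu);
                   if (ret)
                           return ret;
                   gpu->mode = GPU_MODE_PAGE_FAULT;
           }
           return gpu_scheduler_push(gpu, job);
   }

The opposite transition is symmetric: preempt all compute jobs and flush
pending page faults before the first new DMA fence becomes visible.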
A fairly theoretical alternative would be to untangle these dependencies when
allocating memory to repair hardware page faults, either through separate
memory blocks or runtime tracking of the full dependency graph of all DMA
fences. This results in a very wide impact on the kernel, since resolving a
page fault on the GPU side can require waiting for free memory on the CPU side.
Note that workloads that run on independent hardware like copy engines or other
GPUs do not have any impact. This allows us to keep using DMA fences internally
in the kernel even for resolving hardware page faults, e.g. by using copy
engines to clear or copy memory needed to resolve the page fault.
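
For example, a fault handler might safely block on a copy-engine fence, since
the copy engine runs no page-faulting work of its own. The gpu_* names below
are hypothetical; dma_fence_wait() and dma_fence_put() are the real API:

.. code-block:: c

   #include <linux/dma-fence.h>
   #include <linux/err.h>

   /* Hypothetical: clear backing pages via the copy engine, returning a
    * DMA fence that completes in bounded time. */
   struct dma_fence *gpu_copy_engine_clear(struct gpu_device *gpu,
                                           u64 gpu_addr, unsigned long npages);

   static int gpu_resolve_fault_via_copy_engine(struct gpu_device *gpu,
                                                u64 gpu_addr,
                                                unsigned long npages)
   {
           struct dma_fence *fence;
           long ret;

           fence = gpu_copy_engine_clear(gpu, gpu_addr, npages);
           if (IS_ERR(fence))
                   return PTR_ERR(fence);

           /* Safe: this fence never depends on page fault handling. */
           ret = dma_fence_wait(fence, false);
           dma_fence_put(fence);
           return ret < 0 ? ret : 0;
   }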
In some ways this page fault problem is a special case of the `Indefinite DMA
Fences` discussions: Indefinite fences from compute workloads are allowed to
depend on DMA fences, but not the other way around. And not even the page fault
problem is new, because a CPU thread in userspace can already hit a page fault
while holding up a userspace fence, so supporting page faults on GPUs adds
nothing fundamentally new.