/* SPDX-License-Identifier: MIT */
 * Allocate a physical page for root of the page table structure, create default
 * bind engine, and return a handle to the user.
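 *
 * A minimal userspace sketch of VM creation. The ioctl and struct names below
 * exist in recent xe_drm.h, but treat the exact field layout as an assumption
 * rather than a reference:
 *
 * .. code-block:: c
 *
 *      #include <stdint.h>
 *      #include <sys/ioctl.h>
 *      #include <drm/xe_drm.h>
 *
 *      // fd is an open render node, e.g. /dev/dri/renderD128
 *      static uint32_t create_vm(int fd)
 *      {
 *              struct drm_xe_vm_create create = {
 *                      // back invalid accesses with the scratch page
 *                      .flags = DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE,
 *              };
 *
 *              if (ioctl(fd, DRM_IOCTL_XE_VM_CREATE, &create))
 *                      return 0; // errno holds the failure
 *              return create.vm_id; // the handle returned to the user
 *      }
 *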
 * Scratch page
 * ------------
 * Operations
 * ----------
 * DRM_XE_VM_BIND_OP_MAP         - Create mapping for a BO
 * DRM_XE_VM_BIND_OP_UNMAP       - Destroy mapping for a BO / userptr
 * DRM_XE_VM_BIND_OP_MAP_USERPTR - Create mapping for userptr
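 *
 * A sketch of a single map operation, with the same hedge that the field names
 * (obj, obj_offset, range, addr, op) follow recent xe_drm.h and are not
 * authoritative here:
 *
 * .. code-block:: c
 *
 *      struct drm_xe_vm_bind bind = {
 *              .vm_id = vm_id,
 *              .num_binds = 1,
 *              .bind = {
 *                      .obj = bo_handle,   // GEM handle to map
 *                      .obj_offset = 0,    // offset into the BO
 *                      .range = 0x1000,    // size of the mapping
 *                      .addr = 0x100000,   // GPU virtual address
 *                      .op = DRM_XE_VM_BIND_OP_MAP,
 *              },
 *      };
 *
 *      ioctl(fd, DRM_IOCTL_XE_VM_BIND, &bind);
 *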
 * .. code-block::
 *
 *      bind BO0 0x0-0x1000
 *      bind BO1 0x201000-0x202000
 *      bind BO2 0x1ff000-0x201000
 *
 * bind can be done immediately (all in-fences satisfied, VM dma-resv kernel
 * slot is idle).
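 *
 * Binds take the same struct drm_xe_sync in / out fence array as execs, so a
 * bind's completion can, for example, signal a syncobj. A sketch (field names
 * assumed from recent xe_drm.h):
 *
 * .. code-block:: c
 *
 *      struct drm_xe_sync sync = {
 *              .type = DRM_XE_SYNC_TYPE_SYNCOBJ,
 *              .flags = DRM_XE_SYNC_FLAG_SIGNAL, // out fence
 *              .handle = syncobj_handle,
 *      };
 *
 *      bind.num_syncs = 1;
 *      bind.syncs = (uintptr_t)&sync; // signaled when the bind completes
 *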
 * queue, B is now blocked on A's in fences even though it is ready to run.
 * Submitted to a different bind queue, B is free to pass A. If the engine ID
 * field is omitted, the default bind queue for the VM is used.
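 *
 * A sketch of that workaround: submit B on its own bind queue so it is not
 * ordered behind A (treating exec_queue_id as the engine ID field is an
 * assumption based on recent xe_drm.h):
 *
 * .. code-block:: c
 *
 *      bind_a.exec_queue_id = queue_a; // A's in-fences are still pending
 *      bind_b.exec_queue_id = queue_b; // different queue, B not behind A
 *
 *      ioctl(fd, DRM_IOCTL_XE_VM_BIND, &bind_a);
 *      ioctl(fd, DRM_IOCTL_XE_VM_BIND, &bind_b); // free to pass A
 *
 *      // exec_queue_id == 0 selects the VM's default bind queue, restoring
 *      // the ordered behavior described above
 *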
 * Munmap semantics for unbinds
 * ----------------------------
 * .. code-block::
 *
 *      0x0000-0x2000 and 0x3000-0x5000 have mappings
 *      Munmap 0x1000-0x4000, results in mappings 0x0000-0x1000 and 0x4000-0x5000
 * .. code-block::
 *
 *      unbind 0x0000-0x2000
 *      unbind 0x3000-0x5000
 *      rebind 0x0000-0x1000
 *      rebind 0x4000-0x5000
 * Why not just do a partial unbind of 0x1000-0x2000 and 0x3000-0x4000?
 *
 * In this example there is a window of time where 0x0000-0x1000 and
 * 0x4000-0x5000 are invalid but the user didn't ask for these addresses to be
 * invalid.
 * VM). The caveat is all dma-resv slots must be updated atomically with respect
 * to execs and the compute mode rebind worker; this is done by holding
 * vm->lock in write mode from the first operation until the last.
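 *
 * A kernel-side sketch of that rule for the munmap example above; unbind() and
 * rebind() are illustrative helpers, not xe's real entry points:
 *
 * .. code-block:: c
 *
 *      down_write(&vm->lock);          // blocks execs and the rebind worker
 *      unbind(vm, 0x0000, 0x2000);
 *      unbind(vm, 0x3000, 0x5000);
 *      rebind(vm, 0x0000, 0x1000);
 *      rebind(vm, 0x4000, 0x5000);
 *      up_write(&vm->lock);            // all dma-resv slot updates appear
 *                                      // as one atomic step
 *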
 * Deferred binds in fault mode
 * ----------------------------
 * create mappings are by default deferred to the page fault handler (first
 * use).
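 *
 * A userspace sketch of opting in at create time; the flag names exist in
 * recent xe_drm.h (fault mode requires an LR VM there), but verify against
 * your header:
 *
 * .. code-block:: c
 *
 *      struct drm_xe_vm_create create = {
 *              .flags = DRM_XE_VM_CREATE_FLAG_LR_MODE |   // no dma-fences
 *                       DRM_XE_VM_CREATE_FLAG_FAULT_MODE, // defer to faults
 *      };
 *
 *      ioctl(fd, DRM_IOCTL_XE_VM_CREATE, &create);
 *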
 * Invalidation
 * ------------
 * idle to ensure no faults. This is done by waiting on all of VM's dma-resv
 * slots.
 * Rebinds
 * -------
 * Either the next exec (non-compute) or rebind worker (compute mode) will
 * rebind the userptr, after the VM dma-resv wait if the VM is in compute mode.
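 *
 * A kernel-side sketch of the "wait on all of the VM's dma-resv slots" step.
 * dma_resv_wait_timeout() is the real helper; DMA_RESV_USAGE_BOOKKEEP is the
 * superset usage, so waiting on it waits on every slot. xe_vm_resv() is
 * assumed to return the VM's dma-resv:
 *
 * .. code-block:: c
 *
 *      long timeout;
 *
 *      timeout = dma_resv_wait_timeout(xe_vm_resv(vm),
 *                                      DMA_RESV_USAGE_BOOKKEEP, // all fences
 *                                      false, MAX_SCHEDULE_TIMEOUT);
 *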
 * Preempt fences
 * --------------
 * dma-resv DMA_RESV_USAGE_PREEMPT_FENCE slot. The same preempt fence, for every
 * engine using the VM, is also installed into the same dma-resv slot of every
 * external BO mapped in the VM.
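 *
 * A sketch of the installation, keeping this document's PREEMPT_FENCE slot
 * naming (dma_resv_add_fence() is real; the external BO list iteration is
 * illustrative):
 *
 * .. code-block:: c
 *
 *      // one preempt fence per engine, same fence into every dma-resv
 *      dma_resv_add_fence(xe_vm_resv(vm), pfence,
 *                         DMA_RESV_USAGE_PREEMPT_FENCE);
 *      list_for_each_entry(bo, &vm->external_bos, vm_link) // assumed list
 *              dma_resv_add_fence(bo->ttm.base.resv, pfence,
 *                                 DMA_RESV_USAGE_PREEMPT_FENCE);
 *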
 * Rebind worker
 * -------------
 * .. code-block::
 *
 *      <----------------------------------------------------------------------|
 *      Lock VM dma-resv and external BOs dma-resv                             |
 *      Wait VM's DMA_RESV_USAGE_KERNEL dma-resv slot                          |
 *      Wait all VM's dma-resv slots                                           |
 *      Retry ----------------------------------------------------------
 * When VM is created, a default bind engine and PT table structure are created
 * for each GT.
 *
 * if this mask is zero then default to all the GTs where the VM has page
 * tables.
 *
 * various places plus exporting a composite fence for multi-GT binds to the
 * user.
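 *
 * The composite fence can be built with the real dma_fence_array helper; the
 * per-GT fence gathering below is illustrative:
 *
 * .. code-block:: c
 *
 *      #include <linux/dma-fence-array.h>
 *
 *      struct dma_fence *fences[2]; // one bind fence per GT (illustrative)
 *      struct dma_fence_array *array;
 *
 *      array = dma_fence_array_create(num_gt, fences,
 *                                     dma_fence_context_alloc(1), 1,
 *                                     false); // signals once all GTs bind
 *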
 * such, dma fences are not allowed when VM is in fault mode.
 * By default, on a faulting VM binds just allocate the VMA and the actual
 * binding is deferred to the page fault handler (first use).
 * Page fault handler
 * ------------------
 * .. code-block::
 *
 *      <----------------------------------------------------------------------|
 *      Lock VM & BO dma-resv locks                                            |
 *      Drop VM & BO dma-resv locks                                            |
 *      Retry ----------------------------------------------------------
 * Access counters
 * ---------------
 * .. code-block::
 *
 *      Lock VM & BO dma-resv locks
 * Locks
 * -----
 * VM global lock (vm->lock) - rw semaphore lock. Outermost lock which protects
 * the VM's state.
 * VM dma-resv lock (vm->gpuvm.r_obj->resv->lock) - WW lock. Protects VM dma-resv
 * slots.
 *
 * external BO dma-resv lock (bo->ttm.base.resv->lock) - WW lock. Protects
 * external BO dma-resv slots. Expected to be acquired during VM binds (in
 * addition to the VM dma-resv lock). All external BO dma-resv locks within a VM
 * are expected to be acquired (in addition to the VM dma-resv lock) during execs
 * and the compute mode rebind worker.
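 *
 * A sketch of the acquisition order during a bind, using the real drm_exec
 * helpers; xe_vm_obj() (the VM's GEM object) and the error handling are
 * simplified:
 *
 * .. code-block:: c
 *
 *      struct drm_exec exec;
 *      int err = 0;
 *
 *      down_write(&vm->lock);                          // outermost
 *      drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
 *      drm_exec_until_all_locked(&exec) {
 *              err = drm_exec_lock_obj(&exec, xe_vm_obj(vm)); // VM dma-resv
 *              drm_exec_retry_on_contention(&exec);
 *              if (!err)                               // external BO dma-resv
 *                      err = drm_exec_lock_obj(&exec, &bo->ttm.base);
 *              drm_exec_retry_on_contention(&exec);
 *      }
 *      // on success: modify page tables, update dma-resv slots
 *      drm_exec_fini(&exec);
 *      up_write(&vm->lock);
 *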
 * time (vm->lock).
 *
 * executing at the same time (vm->lock).
 *
 * the same VM is executing (vm->lock).
 *
 * compute mode rebind worker with the same VM is executing (vm->lock).
 *
 * executing (dma-resv locks).
 *
 * with the same VM is executing (dma-resv locks).
 * dma-resv usage
 * ==============
 *
 * external BOs dma-resv slots. Let's try to make this as clear as possible.
 * 2. In non-compute mode, jobs from execs install themselves into the
 *    DMA_RESV_USAGE_BOOKKEEP slot of the VM's dma-resv.
 *
 * 3. In non-compute mode, jobs from execs install themselves into the
 *    DMA_RESV_USAGE_WRITE slot of all external BOs' dma-resv.
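 *
 * A sketch of rules 2 and 3, with the slot choices above (BOOKKEEP so kernel
 * ops do not wait on execs, WRITE so other dma-resv users see the access)
 * carried through as assumptions:
 *
 * .. code-block:: c
 *
 *      dma_resv_add_fence(xe_vm_resv(vm), job_fence,
 *                         DMA_RESV_USAGE_BOOKKEEP);
 *      dma_resv_add_fence(ext_bo->ttm.base.resv, job_fence,
 *                         DMA_RESV_USAGE_WRITE);
 *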
 * 2. In non-compute mode, the execution of all jobs from rebinds in execs shall
 *    wait on the VM's DMA_RESV_USAGE_KERNEL dma-resv slot.
 *
 * 3. In non-compute mode, the execution of all jobs from execs shall wait on the
 *    DMA_RESV_USAGE_KERNEL dma-resv slots of the VM and all external BOs used.
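 *
 * "Execution waits on a slot" maps onto scheduler dependencies; a sketch using
 * the real drm_sched helper (the job type is illustrative):
 *
 * .. code-block:: c
 *
 *      err = drm_sched_job_add_resv_dependencies(&job->drm,
 *                                                xe_vm_resv(vm),
 *                                                DMA_RESV_USAGE_KERNEL);
 *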
 * Putting it all together
 * -----------------------
 * 1. New jobs from kernel ops are blocked behind any existing jobs from
 *    non-compute mode execs
 *
 * 2. New jobs from non-compute mode execs are blocked behind any existing jobs
 *    from kernel ops
 *
 * 3. New jobs from kernel ops are blocked behind all preempt fences signaling in
 *    compute mode
 *
 * 4. Compute mode engine resumes are blocked behind any existing jobs from
 *    kernel ops
 *
 * wait on the dma-resv kernel slots of VM or BO, technically we only have to
 * wait on the kernel slot of the object a given job actually touches.