
Searched full:invalidation (Results 1 – 25 of 328) sorted by relevance


/linux/drivers/gpu/drm/xe/
xe_gt_tlb_invalidation.c
26 * invalidation time. Double up the time to process full CT queue
95 xe_gt_err(gt, "TLB invalidation fence timeout, seqno=%d recv=%d", in xe_gt_tlb_fence_timeout()
109 * xe_gt_tlb_invalidation_init_early - Initialize GT TLB invalidation state
112 * Initialize GT TLB invalidation state, purely software initialization, should
130 * xe_gt_tlb_invalidation_reset - Initialize GT TLB invalidation reset
133 * Signal any pending invalidation fences, should be called during a GT reset
201 * XXX: The seqno algorithm relies on TLB invalidation being processed in send_tlb_invalidation()
254 * xe_gt_tlb_invalidation_guc - Issue a TLB invalidation on this GT for the GuC
256 * @fence: invalidation fence which will be signal on TLB invalidation
259 * Issue a TLB invalidation for the GuC. Completion of TLB is asynchronous and
[all …]
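
The xe_gt_tlb_invalidation.c hits above centre on a seqno-ordered fence scheme: each invalidation request takes the next sequence number, and a fence is signalled once the GT reports a seqno at or beyond it (hence the comment that the algorithm relies on in-order processing). A minimal stand-alone sketch of that idea, with hypothetical names rather than the driver's actual API:

/* Hypothetical sketch of seqno-ordered invalidation fences (not the xe API). */
#include <stdbool.h>
#include <stdio.h>

#define MAX_PENDING 8

struct inval_fence {
    int  seqno;     /* seqno of the invalidation this fence tracks */
    bool signalled; /* set once that seqno has been acknowledged */
};

static struct inval_fence pending[MAX_PENDING];
static int next_seqno = 1;

/* Issue an invalidation: allocate the next seqno for the request. */
static struct inval_fence *issue_invalidation(int slot)
{
    pending[slot].seqno = next_seqno++;
    pending[slot].signalled = false;
    return &pending[slot];
}

/*
 * Completion handler: because invalidations complete in order, receiving
 * seqno_recv lets us signal every fence with seqno <= seqno_recv.
 */
static void on_seqno_received(int seqno_recv)
{
    for (int i = 0; i < MAX_PENDING; i++)
        if (pending[i].seqno && pending[i].seqno <= seqno_recv)
            pending[i].signalled = true;
}

int main(void)
{
    struct inval_fence *a = issue_invalidation(0);
    struct inval_fence *b = issue_invalidation(1);

    on_seqno_received(1); /* acknowledges only the first request */
    printf("a=%d b=%d\n", a->signalled, b->signalled); /* a=1 b=0 */
    return 0;
}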
xe_gt_tlb_invalidation_types.h
14 * struct xe_gt_tlb_invalidation_fence - XE GT TLB invalidation fence
17 * invalidation completion.
26 /** @seqno: seqno of TLB invalidation to signal fence one */
28 /** @invalidation_time: time of TLB invalidation */
xe_vm_doc.h
182 * invalidation). The first operation waits on the VM's
207 * Invalidation
211 * whenever it wants. We register an invalidation MMU notifier to alert XE when
212 * a user pointer is about to move. The invalidation notifier needs to block
220 * rebind the userptr. The invalidation MMU notifier kicks the rebind worker
346 * invalidation responses are also in the critical path so these can also be
371 * Issue blocking TLB invalidation |
402 * Caveats with eviction / user pointer invalidation
405 * In the case of eviction and user pointer invalidation on a faulting VM, there
409 * needed. In both the case of eviction and user pointer invalidation locks are
[all …]
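
The xe_vm_doc.h hits outline the userptr flow: an MMU invalidation notifier marks the mapping stale and kicks a rebind worker, which re-pins the pages and rebinds them before new GPU work runs. A rough, hypothetical sketch of that handshake (not the driver's real structures):

/* Hypothetical sketch of the userptr invalidate -> rebind handshake. */
#include <stdbool.h>
#include <stdio.h>

struct userptr_vma {
    bool stale;   /* set by the invalidation notifier */
    bool bound;   /* GPU page tables currently point at the pages */
};

/* Called from the (hypothetical) MMU invalidation notifier. */
static void userptr_invalidate(struct userptr_vma *vma)
{
    vma->stale = true;
    vma->bound = false;  /* the GPU mapping is zapped here */
    /* ...kick the rebind worker... */
}

/* Rebind worker: re-pin the pages and rebind before new jobs run. */
static void rebind_worker(struct userptr_vma *vma)
{
    if (!vma->stale)
        return;
    /* re-pin user pages, then update the GPU page tables */
    vma->stale = false;
    vma->bound = true;
}

int main(void)
{
    struct userptr_vma vma = { .bound = true };

    userptr_invalidate(&vma); /* core mm is about to move the pages */
    rebind_worker(&vma);      /* runs before the next job is submitted */
    printf("bound=%d stale=%d\n", vma.bound, vma.stale);
    return 0;
}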
xe_gt_types.h
188 /** @tlb_invalidation: TLB invalidation state */
190 /** @tlb_invalidation.seqno: TLB invalidation seqno, protected by CT lock */
194 * @tlb_invalidation.seqno_recv: last received TLB invalidation seqno,
213 /** @tlb_invalidation.lock: protects TLB invalidation fences */
398 * of a global invalidation of l2 cache
/linux/Documentation/ABI/testing/
debugfs-intel-iommu
121 This file exports invalidation queue internals of each
130 Invalidation queue on IOMMU: dmar0
144 Invalidation queue on IOMMU: dmar1
168 * 1 - enable sampling IOTLB invalidation latency data
170 * 2 - enable sampling devTLB invalidation latency data
172 * 3 - enable sampling intr entry cache invalidation latency data
185 2) Enable sampling IOTLB invalidation latency data
207 3) Enable sampling devTLB invalidation latency data
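
The debugfs-intel-iommu entry documents a latency-sampling control where writing 1, 2 or 3 enables sampling of IOTLB, devTLB or interrupt-entry-cache invalidation latency. A small sketch of driving such a debugfs attribute from user space; the path below is a placeholder, not the documented node name:

/*
 * Sketch: enable IOTLB invalidation latency sampling via a debugfs file.
 * The path is a placeholder; consult the ABI document for the real node.
 */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/kernel/debug/iommu/intel/<latency-control>"; /* placeholder */
    FILE *f = fopen(path, "w");

    if (!f) {
        perror("fopen");
        return 1;
    }
    fputs("1\n", f); /* 1 = enable sampling IOTLB invalidation latency data */
    fclose(f);
    return 0;
}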
/linux/drivers/gpu/drm/i915/gt/
intel_tlb.c
18 * HW architecture suggest typical invalidation time at 40us,
26 * On Xe_HP the TLB invalidation registers are located at the same MMIO offsets
100 "%s TLB invalidation did not complete in %ums!\n", in mmio_invalidate_full()
143 * Only perform GuC TLB invalidation if GuC is ready. in intel_gt_invalidate_tlb_full()
146 * any TLB invalidation path here unnecessary. in intel_gt_invalidate_tlb_full()
/linux/drivers/gpu/drm/xe/abi/
guc_actions_abi.h
225 /* Flush PPC or SMRO caches along with TLB invalidation request */
236 * 0: Heavy mode of Invalidation:
237 * The pipeline of the engine(s) for which the invalidation is targeted to is
239 * Observed before completing the TLB invalidation
240 * 1: Lite mode of Invalidation:
243 * completing TLB invalidation.
244 * Light Invalidation Mode is to be used only when
246 * for the in-flight transactions across the TLB invalidation. In other words,
247 * this mode can be used when the TLB invalidation is intended to clear out the
248 * stale cached translations that are no longer in use. Light Invalidation Mode
[all …]
/linux/drivers/gpu/drm/i915/gt/uc/abi/
guc_actions_abi.h
196 * 0: Heavy mode of Invalidation:
197 * The pipeline of the engine(s) for which the invalidation is targeted to is
199 * Observed before completing the TLB invalidation
200 * 1: Lite mode of Invalidation:
203 * completing TLB invalidation.
204 * Light Invalidation Mode is to be used only when
206 * for the in-flight transactions across the TLB invalidation. In other words,
207 * this mode can be used when the TLB invalidation is intended to clear out the
208 * stale cached translations that are no longer in use. Light Invalidation Mode
209 * is much faster than the Heavy Invalidation Mode, as it does not wait for the
/linux/drivers/iommu/intel/
pasid.c
329 * VT-d spec 5.0 table28 states guides for cache invalidation: in intel_pasid_flush_present()
331 * - PASID-selective-within-Domain PASID-cache invalidation in intel_pasid_flush_present()
332 * - PASID-selective PASID-based IOTLB invalidation in intel_pasid_flush_present()
334 * - Global Device-TLB invalidation to affected functions in intel_pasid_flush_present()
336 * - PASID-based Device-TLB invalidation (with S=1 and in intel_pasid_flush_present()
626 * - PASID-selective-within-Domain PASID-cache invalidation in intel_pasid_setup_dirty_tracking()
628 * - Domain-selective IOTLB invalidation in intel_pasid_setup_dirty_tracking()
630 * - PASID-selective PASID-based IOTLB invalidation in intel_pasid_setup_dirty_tracking()
632 * - Global Device-TLB invalidation to affected functions in intel_pasid_setup_dirty_tracking()
634 * - PASID-based Device-TLB invalidation (with S=1 and in intel_pasid_setup_dirty_tracking()
[all …]
dmar.c
1218 return "Context-cache Invalidation"; in qi_type_string()
1220 return "IOTLB Invalidation"; in qi_type_string()
1222 return "Device-TLB Invalidation"; in qi_type_string()
1224 return "Interrupt Entry Cache Invalidation"; in qi_type_string()
1226 return "Invalidation Wait"; in qi_type_string()
1228 return "PASID-based IOTLB Invalidation"; in qi_type_string()
1230 return "PASID-cache Invalidation"; in qi_type_string()
1232 return "PASID-based Device-TLB Invalidation"; in qi_type_string()
1247 pr_err("VT-d detected Invalidation Queue Error: Reason %llx", in qi_dump_fault()
1250 pr_err("VT-d detected Invalidation Time-out Error: SID %llx", in qi_dump_fault()
[all …]
cache.c
3 * cache.c - Intel VT-d cache invalidation
324 * invalidation requests while address remapping hardware is disabled. in qi_batch_add_dev_iotlb()
338 * npages == -1 means a PASID-selective invalidation, otherwise, in qi_batch_add_piotlb()
339 * a positive value for Page-selective-within-PASID invalidation. in qi_batch_add_piotlb()
355 * Device-TLB invalidation requests while address remapping hardware in qi_batch_add_pasid_dev_iotlb()
493 * stage mapping requires explicit invalidation of the caches.
496 * flushing, if cache invalidation is not required.
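
The cache.c hits note the qi_batch_add_piotlb() convention that npages == -1 requests a PASID-selective invalidation while a positive value requests a page-selective-within-PASID invalidation. A hypothetical helper illustrating that branch (not the driver's code):

/* Hypothetical illustration of the npages convention noted above. */
#include <stdio.h>

static void flush_piotlb(long long npages, unsigned long long addr)
{
    if (npages == -1)
        printf("PASID-selective invalidation\n");
    else
        printf("page-selective invalidation: %llu pages at 0x%llx\n",
               (unsigned long long)npages, addr);
}

int main(void)
{
    flush_piotlb(-1, 0);           /* whole PASID */
    flush_piotlb(16, 0x100000ULL); /* 16 pages within the PASID */
    return 0;
}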
/linux/drivers/infiniband/ulp/rtrs/
README
54 The procedure is the default behaviour of the driver. This invalidation and
165 the user header, flags (specifying if memory invalidation is necessary) and the
169 attaches an invalidation message if requested and finally an "empty" rdma
176 or in case client requested invalidation:
184 the user header, flags (specifying if memory invalidation is necessary) and the
190 attaches an invalidation message if requested and finally an "empty" rdma
201 or in case client requested invalidation:
/linux/arch/arm64/kvm/hyp/nvhe/
tlb.c
36 * being either ish or nsh, depending on the invalidation in enter_vmid_context()
165 * We have to ensure completion of the invalidation at Stage-2, in __kvm_tlb_flush_vmid_ipa()
168 * the Stage-1 invalidation happened first. in __kvm_tlb_flush_vmid_ipa()
195 * We have to ensure completion of the invalidation at Stage-2, in __kvm_tlb_flush_vmid_ipa_nsh()
198 * the Stage-1 invalidation happened first. in __kvm_tlb_flush_vmid_ipa_nsh()
/linux/arch/powerpc/include/asm/
pnv-ocxl.h
19 /* Radix Invalidation Control
28 /* Invalidation Criteria
35 /* Invalidation Flag */
/linux/include/uapi/linux/
iommufd.h
793 * enum iommu_hwpt_invalidate_data_type - IOMMU HWPT Cache Invalidation
795 * @IOMMU_HWPT_INVALIDATE_DATA_VTD_S1: Invalidation data for VTD_S1
796 * @IOMMU_VIOMMU_INVALIDATE_DATA_ARM_SMMUV3: Invalidation data for ARM SMMUv3
805 * stage-1 cache invalidation
806 * @IOMMU_VTD_INV_FLAGS_LEAF: Indicates whether the invalidation applies
815 * struct iommu_hwpt_vtd_s1_invalidate - Intel VT-d cache invalidation
823 * The Intel VT-d specific invalidation data for user-managed stage-1 cache
824 * invalidation in nested translation. Userspace uses this structure to
840 * struct iommu_viommu_arm_smmuv3_invalidate - ARM SMMUv3 cache invalidation
842 * @cmd: 128-bit cache invalidation command that runs in SMMU CMDQ.
[all …]
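
The iommufd.h hits describe the user-managed stage-1 cache invalidation uAPI. The sketch below fills a single struct iommu_hwpt_vtd_s1_invalidate entry; the field layout (addr, npages, flags) is assumed from the header's documentation, and the surrounding IOMMU_HWPT_INVALIDATE request plumbing is only outlined in comments:

/*
 * Sketch: describe one VT-d stage-1 invalidation entry for IOMMU_HWPT_INVALIDATE.
 * The field layout is assumed from the iommufd.h documentation quoted above.
 */
#include <linux/iommufd.h>
#include <stdio.h>

int main(void)
{
    struct iommu_hwpt_vtd_s1_invalidate inv = {
        .addr   = 0x100000,                 /* start IOVA of the range */
        .npages = 16,                       /* number of pages to invalidate */
        .flags  = IOMMU_VTD_INV_FLAGS_LEAF, /* only leaf entries changed */
    };

    /*
     * User space would place one or more such entries in an array and
     * reference it from an IOMMU_HWPT_INVALIDATE request with
     * data_type = IOMMU_HWPT_INVALIDATE_DATA_VTD_S1.
     */
    printf("invalidate %llu pages at 0x%llx\n",
           (unsigned long long)inv.npages, (unsigned long long)inv.addr);
    return 0;
}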
/linux/Documentation/filesystems/caching/
netfs-api.rst
36 (8) Data file invalidation
39 (11) Page release and invalidation
285 The read operation will fail with ESTALE if invalidation occurred whilst the
302 Data File Invalidation
319 This increases the invalidation counter in the cookie to cause outstanding
324 Invalidation runs asynchronously in a worker thread so that it doesn't block
427 Page Release and Invalidation
442 Page release and page invalidation should also wait for any mark left on the
/linux/Documentation/gpu/
drm-vm-bind-locking.rst
87 notifier invalidation. This is not a real seqlock but described in
95 invalidation notifiers.
103 invalidation. The userptr notifier lock is per gpu_vm.
406 <Invalidation example>` below). Note that when the core mm decides to
435 // invalidation notifier running anymore.
449 // of the MMU invalidation notifier. Hence the
476 The userptr gpu_vma MMU invalidation notifier might be called from
495 // invalidation callbacks, the mmu notifier core will flip
504 When this invalidation notifier returns, the GPU can no longer be
564 invalidation notifier where zapping happens. Hence, if the
/linux/include/linux/
memregion.h
41 * contents while performing the invalidation. It is only exported for
59 WARN_ON_ONCE("CPU cache invalidation required"); in cpu_cache_invalidate_memregion()
/linux/drivers/gpu/drm/amd/amdgpu/
gmc_v12_0.c
216 * off cycle, add semaphore acquire before invalidation and semaphore in gmc_v12_0_flush_vm_hub()
217 * release after invalidation to avoid entering power gated state in gmc_v12_0_flush_vm_hub()
253 * add semaphore release after invalidation, in gmc_v12_0_flush_vm_hub()
259 /* Issue additional private vm invalidation to MMHUB */ in gmc_v12_0_flush_vm_hub()
266 /* Issue private invalidation */ in gmc_v12_0_flush_vm_hub()
268 /* Read back to ensure invalidation is done*/ in gmc_v12_0_flush_vm_hub()
328 * @inst: is used to select which instance of KIQ to use for the invalidation
369 * off cycle, add semaphore acquire before invalidation and semaphore in gmc_v12_0_emit_flush_gpu_tlb()
370 * release after invalidation to avoid entering power gated state in gmc_v12_0_emit_flush_gpu_tlb()
398 * add semaphore release after invalidation, in gmc_v12_0_emit_flush_gpu_tlb()
/linux/include/vdso/
helpers.h
54 /* Ensure the sequence invalidation is visible before data is modified */ in vdso_write_begin_clock()
71 /* Ensure the sequence invalidation is visible before data is modified */ in vdso_write_begin()
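
The vdso helpers quoted above follow the classic seqcount pattern: the writer bumps the sequence to an odd value, invalidating concurrent readers, before touching the data, then bumps it back to even when done; readers retry until they see the same even count on both sides of the read. A stand-alone C11 sketch of that pattern (not the kernel's vdso code, which uses plain stores plus explicit write barriers rather than these sequentially consistent atomics):

/* Stand-alone sketch of a seqcount-style writer/reader (not the vdso code). */
#include <stdatomic.h>
#include <stdio.h>

static atomic_uint seq;  /* odd while a write is in progress */
static atomic_int  value;

static void write_begin(void)
{
    /* Bump to an odd count first: this "invalidates" concurrent readers. */
    atomic_fetch_add(&seq, 1);
}

static void write_end(void)
{
    /* Back to an even count: the data is consistent again. */
    atomic_fetch_add(&seq, 1);
}

static int read_value(void)
{
    unsigned int s1, s2;
    int v;

    do {
        s1 = atomic_load(&seq);
        v  = atomic_load(&value);
        s2 = atomic_load(&seq);
    } while (s1 != s2 || (s1 & 1)); /* retry if a writer was active */
    return v;
}

int main(void)
{
    write_begin();
    atomic_store(&value, 42);
    write_end();
    printf("%d\n", read_value());
    return 0;
}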
/linux/arch/powerpc/kernel/
l2cr_6xx.S
60 - L2I set to perform a global invalidation
111 /* Before we perform the global invalidation, we must disable dynamic
207 /* Perform a global invalidation */
223 /* Wait for the invalidation to complete */
342 /* Perform a global invalidation */
/linux/tools/perf/pmu-events/arch/arm64/arm/neoverse-n1/
l2_cache.json
12 …he L2 (from other CPUs) which return data even if the snoops cause an invalidation. L2 cache line …
44 …"PublicDescription": "Counts each explicit invalidation of a cache line in the level 2 cache by ca…
/linux/tools/perf/pmu-events/arch/arm64/arm/neoverse-v1/
l2_cache.json
12 …he L2 (from other CPUs) which return data even if the snoops cause an invalidation. L2 cache line …
44 …"PublicDescription": "Counts each explicit invalidation of a cache line in the level 2 cache by ca…
/linux/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/
l2_cache.json
12 …he L2 (from other CPUs) which return data even if the snoops cause an invalidation. L2 cache line …
44 …"PublicDescription": "Counts each explicit invalidation of a cache line in the level 2 cache by ca…
/linux/drivers/cxl/
Kconfig
221 to invalidate caches when those events occur. If that invalidation
223 invalidation failure are due to the CPU not providing a cache
224 invalidation mechanism. For example usage of wbinvd is restricted to
