/* Excerpt: lines matching "pre-determined" in arch/arm64/include/asm/tlbflush.h */

/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 1999-2003 Russell King */
/*
 * Get translation granule of the system, which is decided by PAGE_SIZE.
 * Used by TTL:
 *  - 4KB  : 1
 *  - 16KB : 2
 *  - 64KB : 3
 */
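/*
 * A minimal sketch of how PAGE_SIZE could map to the TG encoding listed
 * above. The function name is hypothetical (the file's actual helper may
 * differ); SZ_4K/SZ_16K/SZ_64K are the usual kernel size constants from
 * <linux/sizes.h>.
 */
static inline unsigned long example_trans_granule(void)
{
	switch (PAGE_SIZE) {
	case SZ_4K:
		return 1;	/* 4KB granule */
	case SZ_16K:
		return 2;	/* 16KB granule */
	case SZ_64K:
		return 3;	/* 64KB granule */
	default:
		return 0;	/* unknown: no hint */
	}
}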
/*
 * Level-based TLBI operations.
 *
 * When ARMv8.4-TTL exists, TLBI operations take an additional hint for
 * the level at which the invalidation must take place. If the level is
 * wrong, no invalidation may take place. In the case where the level
 * cannot be easily determined, the value TLBI_TTL_UNKNOWN will perform
 * a non-hinted invalidation. Any provided level outside the hint range
 * will also cause fall-back to non-hinted invalidation.
 *
 * For Stage-2 invalidation, use the level values provided to that effect
 * in asm/stage2_pgtable.h.
 */
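/*
 * Sketch of folding a level hint into a TLBI operand, assuming a 4-bit
 * TTL field at bits [47:44] of the operand (2 bits of translation
 * granule above 2 bits of level). The helper name and field position
 * are assumptions for illustration, not taken from this excerpt.
 */
static inline unsigned long example_add_ttl_hint(unsigned long arg, int level)
{
	/* Out-of-range levels fall back to a non-hinted invalidation. */
	if (level < 0 || level > 3)
		return arg;

	return arg | ((example_trans_granule() << 2 | (unsigned long)level) << 44);
}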
/*
 * +----------+------+-------+-------+-------+----------------------+
 * |   ASID   |  TG  | SCALE |  NUM  |  TTL  |        BADDR         |
 * +----------+------+-------+-------+-------+----------------------+
 * |63      48|47  46|45   44|43   39|38   37|36                   0|
 * +----------+------+-------+-------+-------+----------------------+
 *
 * The address range is determined by the formula below:
 * [BADDR, BADDR + (NUM + 1) * 2^(5 * SCALE + 1) * PAGESIZE)
 *
 * Note that the first argument, baddr, is pre-shifted; if LPA2 is in use,
 * BADDR holds addr[52:16], else BADDR holds the page number.
 */
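/*
 * Sketch of packing the range-operand fields from the layout above.
 * The macro name is hypothetical, and 'baddr' must already be pre-shifted
 * as described; GENMASK_ULL is the kernel's bit-mask helper from
 * <linux/bits.h>.
 */
#define EXAMPLE_TLBI_VADDR_RANGE(baddr, asid, scale, num, ttl)		\
	({								\
		unsigned long __ta = (baddr) & GENMASK_ULL(36, 0);	\
		__ta |= (unsigned long)(ttl)   << 37;			\
		__ta |= (unsigned long)(num)   << 39;			\
		__ta |= (unsigned long)(scale) << 44;			\
		__ta |= example_trans_granule() << 46;			\
		__ta |= (unsigned long)(asid)  << 48;			\
		__ta;							\
	})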
/*
 * Generate 'num' values from -1 to 30, with -1 rejected by the
 * __flush_tlb_range() loop below.
 */
#define __TLBI_RANGE_NUM(pages, scale)	\
	((((pages) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK) - 1)
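/*
 * Worked example: pages = 512 at scale = 1 gives
 * num = ((512 >> 6) & 0x1f) - 1 = 7, so a single range operation covers
 * (7 + 1) * 2^(5*1 + 1) = 512 pages, exactly the requested amount.
 */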
/*
 * This header file implements the low-level TLB invalidation routines
 * for arm64. Every invalidation operation uses the following template:
 *
 *	DSB ISHST	// Ensure prior page-table updates have completed
 *	TLBI ...	// Invalidate the TLB
 *	DSB ISH		// Ensure the TLB invalidation has completed
 *	if (invalidated kernel mappings)
 *		ISB	// Discard any instructions fetched from the old mapping
 */
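/*
 * The same template expressed with the kernel's barrier and TLBI helpers;
 * a sketch for orientation, not code from this excerpt. The 'vmalle1is'
 * operation invalidates all stage-1 entries for the current VMID, inner
 * shareable.
 */
static inline void example_invalidate_template(void)
{
	dsb(ishst);		/* prior page-table updates are visible */
	__tlbi(vmalle1is);	/* invalidate the TLB */
	dsb(ish);		/* wait for completion on all CPUs */
	isb();			/* only needed if kernel mappings changed */
}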
/*
 * The following functions form part of the "core" TLB invalidation API,
 * as documented in Documentation/core-api/cachetlb.rst:
 *
 *	flush_tlb_range(vma, start, end)
 *		Invalidate the virtual-address range '[start, end)' on all
 *		CPUs for the user address space corresponding to 'vma->vm_mm'.
 *		Note that this operation also invalidates any walk-cache
 *		entries associated with translations for the specified address
 *		range.
 *
 *	flush_tlb_page(vma, addr)
 *		Invalidate a single user mapping for address 'addr' in the
 *		address space corresponding to 'vma->vm_mm'. Note that this
 *		operation only invalidates a single, last-level page-table
 *		entry and therefore does not affect any walk-caches.
 */
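/*
 * Illustration of the distinction drawn above (the example_* wrapper
 * names are made up): a single PTE update needs only the leaf
 * invalidation, while a range remap must also drop walk-cache entries.
 */
static inline void example_after_pte_update(struct vm_area_struct *vma,
					    unsigned long addr)
{
	flush_tlb_page(vma, addr);		/* last-level entry only */
}

static inline void example_after_range_remap(struct vm_area_struct *vma,
					     unsigned long start,
					     unsigned long end)
{
	flush_tlb_range(vma, start, end);	/* includes walk-caches */
}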
/*
 *	__flush_tlb_kernel_pgtable(addr)
 *		Invalidate a single kernel mapping for address 'addr' on all
 *		CPUs, ensuring that any walk-cache entries associated with the
 *		translation are also invalidated.
 *
 *	__flush_tlb_range(vma, start, end, stride, last_level, tlb_level)
 *		Invalidate the virtual-address range '[start, end)' on all
 *		CPUs for the user address space corresponding to 'vma->vm_mm'.
 *		The invalidation operations are issued at a granularity
 *		determined by 'stride' and only affect any walk-cache entries
 *		if 'last_level' is false. 'tlb_level' is the level at which
 *		the invalidation must take place. If the level is wrong, no
 *		invalidation may take place. In the case where the level
 *		cannot be easily determined, the value TLBI_TTL_UNKNOWN will
 *		perform a non-hinted invalidation.
 */
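/*
 * Usage sketch (wrapper name made up): invalidate a range of last-level
 * entries at a PAGE_SIZE stride, passing the level-3 hint because only
 * PTEs were changed.
 */
static inline void example_flush_pte_range(struct vm_area_struct *vma,
					   unsigned long start,
					   unsigned long end)
{
	__flush_tlb_range(vma, start, end, PAGE_SIZE, true, 3);
}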
/* in flush_tlb_mm(): */
	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
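	/*
	 * The (0, -1UL) arguments span the whole address space: any
	 * secondary MMU sharing these page tables (an SMMU, for example)
	 * is told to drop all of its entries for 'mm'.
	 */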
/* in flush_tlb_page_nosync(): */
	return __flush_tlb_page_nosync(vma->vm_mm, uaddr);
/*
 * This is meant to avoid soft lock-ups on large TLB flushing ranges and not
 * necessarily a performance improvement.
 */

/*
 * __flush_tlb_range_op - Perform TLBI operation upon a range
 *
 * 1. If FEAT_LPA2 is in use, the start address of a range operation must be
 *    64KB aligned, so flush pages one by one until the alignment is reached
 *    using the non-range operations. This step is skipped if LPA2 is not in
 *    use.
 *
 * 2. The minimum range granularity is decided by 'scale', so multiple range
 *    TLBI operations may be required. Start from scale = 3, flush the largest
 *    possible number of pages ((num + 1) * 2^(5 * scale + 1)) that fit into
 *    the requested range, then decrement scale and continue until one or zero
 *    pages are left.
 *
 * 3. If there is 1 page remaining, flush it through non-range operations.
 *    Range operations can only span an even number of pages. We save this for
 *    last to ensure 64KB start alignment is maintained for the LPA2 case.
 */
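/*
 * A simplified sketch of steps 2 and 3 above, reusing __TLBI_RANGE_NUM
 * and __TLBI_RANGE_PAGES from this file. The LPA2 alignment step and the
 * actual TLBI instructions are omitted; issue_single_op() and
 * issue_range_op() are hypothetical stand-ins.
 */
static inline void example_range_walk(unsigned long pages)
{
	int num;
	int scale = 3;

	while (pages > 0) {
		/* Step 3: a single trailing page uses a non-range op. */
		if (pages == 1) {
			issue_single_op();
			pages -= 1;
			continue;
		}

		/* Step 2: peel off the largest range op at this scale. */
		num = __TLBI_RANGE_NUM(pages, scale);
		if (num >= 0) {
			issue_range_op(scale, num);
			pages -= __TLBI_RANGE_PAGES(num, scale);
		}
		scale--;
	}
}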
			pages -= stride >> PAGE_SHIFT;			\
			pages -= __TLBI_RANGE_PAGES(num, scale);	\
		scale--;						\
/* in __flush_tlb_range(): */
	pages = (end - start) >> PAGE_SHIFT;

	/*
	 * When not using TLB range ops, we can handle up to
	 * (MAX_DVM_OPS - 1) pages;
	 * when using TLB range ops, we can handle up to
	 * (MAX_TLBI_RANGE_PAGES - 1) pages.
	 */
	if ((!system_supports_tlb_range() &&
	     (end - start) >= (MAX_DVM_OPS * stride)) ||
	    pages > MAX_TLBI_RANGE_PAGES) {
		flush_tlb_mm(vma->vm_mm);
		return;
	}

	asid = ASID(vma->vm_mm);

	mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end);
/* in flush_tlb_range(): */
	/*
	 * We cannot use leaf-only invalidation here, since we may be
	 * invalidating table entries as part of collapsing hugepages or
	 * moving page tables.
	 */
/* in flush_tlb_kernel_range(): */
	if ((end - start) > (MAX_DVM_OPS * PAGE_SIZE)) {
		flush_tlb_all();
		return;
	}

	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
		__tlbi(vaale1is, addr);
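	/*
	 * The loop step '1 << (PAGE_SHIFT - 12)' reflects the TLBI operand
	 * encoding: the address is carried as a 4KB-granule page number
	 * (VA >> 12), so advancing by one PAGE_SIZE page means adding
	 * PAGE_SIZE >> 12 to the already-encoded 'start'/'end' values.
	 */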