Lines Matching full:fault

38  * Returns 0 if mmiotrace is disabled, or if the fault is not
120 * If it was an exec (instruction fetch) fault on an NX page, then in is_prefetch()
121 * do not ignore the fault: in is_prefetch()
194 * Handle a fault on the vmalloc or module mapping area
205 * unhandled page-fault when they are accessed.
416 * The OS sees this as a page fault with the upper 32 bits of RIP cleared.
450 * We catch this in the page fault handler because these addresses
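The two hits above describe an AMD erratum workaround: a fault whose address equals RIP but with the upper 32 bits cleared. Below is a hedged, illustrative C sketch of that detect-and-patch idea; fixup_cleared_upper_rip is a hypothetical name, and the real kernel additionally verifies that the repaired address lands in kernel text before resuming.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: if the faulting address equals RIP and its upper
 * 32 bits are zero, restore the upper bits and resume. The kernel's
 * actual workaround also checks the result is a plausible kernel
 * address. */
static bool fixup_cleared_upper_rip(uint64_t address, uint64_t *rip)
{
	if (address != *rip || (address >> 32) != 0)
		return false;			/* not this erratum */
	*rip = address | (0xffffffffull << 32);	/* restore upper RIP bits */
	return true;
}

int main(void)
{
	uint64_t rip = 0x004005d0ull;	/* hypothetical truncated RIP */
	return fixup_cleared_upper_rip(rip, &rip) ? 0 : 1;
}
```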
536 pr_alert("BUG: unable to handle page fault for address: %px\n", in show_fault_oops()
559 * contributory exception from user code and gets a page fault in show_fault_oops()
560 * during delivery, the page fault can be delivered as though in show_fault_oops()
644 /* Are we prepared to handle this kernel fault? */ in no_context()
647 * Any interrupt that takes a fault gets the fixup. This makes in no_context()
648 * the below recursive fault logic only apply to faults from in no_context()
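The no_context() hits above refer to the kernel's exception-table fixup, where a faulting kernel access resumes at a registered recovery address. As a loose userspace analogy (not the kernel mechanism itself), a SIGSEGV handler plus sigsetjmp/siglongjmp can recover from a bad access; a minimal sketch:

```c
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>

static sigjmp_buf fixup;

static void segv_handler(int sig)
{
	(void)sig;
	siglongjmp(fixup, 1);	/* "fixup": resume at the recovery point */
}

int main(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = segv_handler;
	sigaction(SIGSEGV, &sa, NULL);

	if (sigsetjmp(fixup, 1) == 0) {
		volatile int *bad = NULL;	/* hypothetical bad pointer */
		*bad = 42;			/* faults; handler jumps back */
	}
	printf("recovered from the fault\n");
	return 0;
}
```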
675 * Stack overflow? During boot, we can fault near the initial in no_context()
686 * double-fault even before we get this far, in which case in no_context()
687 * we're fine: the double-fault handler will deal with it. in no_context()
690 * and then double-fault, though, because we're likely to in no_context()
697 : "D" ("kernel stack overflow (page fault)"), in no_context()
707 * Valid to do another page fault here, because if this fault in no_context()
722 * Buggy firmware could access regions which might page fault, try to in no_context()
800 * Valid to do another page fault here because this one came in __bad_area_nosemaphore()
893 * A protection key fault means that the PKRU value did not allow in bad_area_access_error()
900 * fault and that there was a VMA once we got in the fault in bad_area_access_error()
908 * 5. T1 : enters fault handler, takes mmap_lock, etc... in bad_area_access_error()
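The bad_area_access_error() hits describe faults where the paging permissions were fine but the PKRU register forbade the access. A hedged userspace sketch of provoking such a fault with the pkey API (Linux on x86 with memory protection keys, glibc 2.27+; error handling trimmed):

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
	int pkey = pkey_alloc(0, PKEY_DISABLE_ACCESS);
	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (pkey < 0 || p == MAP_FAILED)
		return 1;

	/* Page-table permissions allow R/W, but PKRU denies this pkey. */
	pkey_mprotect(p, 4096, PROT_READ | PROT_WRITE, pkey);
	printf("about to fault: PKRU forbids access via this pkey\n");
	p[0] = 1;	/* delivered as SIGSEGV with si_code = SEGV_PKUERR */
	return 0;
}
```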
922 vm_fault_t fault) in do_sigbus() argument
930 /* User-space => ok to do another page fault: */ in do_sigbus()
937 if (fault & (VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE)) { in do_sigbus()
942 "MCE: Killing %s:%d due to hardware memory corruption fault at %lx\n", in do_sigbus()
944 if (fault & VM_FAULT_HWPOISON_LARGE) in do_sigbus()
945 lsb = hstate_index_to_shift(VM_FAULT_GET_HINDEX(fault)); in do_sigbus()
946 if (fault & VM_FAULT_HWPOISON) in do_sigbus()
957 unsigned long address, vm_fault_t fault) in mm_fault_error() argument
964 if (fault & VM_FAULT_OOM) { in mm_fault_error()
974 * userspace (which will retry the fault, or kill us if we got in mm_fault_error()
979 if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON| in mm_fault_error()
981 do_sigbus(regs, error_code, address, fault); in mm_fault_error()
982 else if (fault & VM_FAULT_SIGSEGV) in mm_fault_error()
1001 * Handle a spurious fault caused by a stale TLB entry.
1016 * Returns non-zero if a spurious fault was handled, zero otherwise.
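A spurious fault means the page tables already permit the access and only a stale TLB entry faulted. A hedged sketch of that permission comparison (flag values are illustrative stand-ins, not the kernel's):

```c
#include <stdbool.h>
#include <stdio.h>

#define PTE_PRESENT	0x1u
#define PTE_WRITE	0x2u
#define PTE_USER	0x4u

#define PF_WRITE	0x2u	/* fault was a write */
#define PF_USER		0x4u	/* fault came from user mode */

static bool spurious_fault(unsigned int error_code, unsigned int pte)
{
	if (!(pte & PTE_PRESENT))
		return false;	/* genuinely not mapped */
	if ((error_code & PF_WRITE) && !(pte & PTE_WRITE))
		return false;	/* write to read-only page: real fault */
	if ((error_code & PF_USER) && !(pte & PTE_USER))
		return false;	/* user access to kernel page: real fault */
	return true;		/* PTE allows it: stale TLB, just return */
}

int main(void)
{
	printf("%d\n", spurious_fault(PF_WRITE, PTE_PRESENT | PTE_WRITE));
	return 0;
}
```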
1099 * a follow-up action to resolve the fault, like a COW. in access_error()
1162 * We can fault-in kernel-space virtual memory on-demand. The in do_kern_addr_fault()
1171 * fault is not any of the following: in do_kern_addr_fault()
1172 * 1. A fault on a PTE with a reserved bit set. in do_kern_addr_fault()
1173 * 2. A fault caused by a user-mode access. (Do not demand- in do_kern_addr_fault()
1174 * fault kernel memory due to user-mode accesses). in do_kern_addr_fault()
1175 * 3. A fault caused by a page-level protection violation. in do_kern_addr_fault()
1176 * (A demand fault would be on a non-present page which in do_kern_addr_fault()
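The three conditions above map directly onto the hardware error-code bits. A small illustrative check (bit values as in arch/x86/include/asm/trap_pf.h; may_demand_fault_kernel_addr is a hypothetical name, not the kernel's):

```c
#include <stdbool.h>
#include <stdio.h>

#define X86_PF_PROT	(1u << 0)	/* protection violation */
#define X86_PF_USER	(1u << 2)	/* fault came from user mode */
#define X86_PF_RSVD	(1u << 3)	/* reserved PTE bit was set */

static bool may_demand_fault_kernel_addr(unsigned long hw_error_code)
{
	/* None of the three disqualifying conditions may hold. */
	return !(hw_error_code & (X86_PF_RSVD | X86_PF_USER | X86_PF_PROT));
}

int main(void)
{
	printf("%d\n", may_demand_fault_kernel_addr(0));	   /* 1 */
	printf("%d\n", may_demand_fault_kernel_addr(X86_PF_USER)); /* 0 */
	return 0;
}
```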
1191 /* Was the fault spurious, caused by lazy TLB invalidation? */ in do_kern_addr_fault()
1202 * and handling kernel code that can fault, like get_user(). in do_kern_addr_fault()
1205 * fault we could otherwise deadlock: in do_kern_addr_fault()
1220 vm_fault_t fault; in do_user_addr_fault() local
1254 * in a region with pagefaults disabled then we must not take the fault in do_user_addr_fault()
1263 * vmalloc fault has been handled. in do_user_addr_fault()
1266 * potential system fault or CPU buglet: in do_user_addr_fault()
1304 * tables. But, an erroneous kernel fault occurring outside one of in do_user_addr_fault()
1306 * to validate the fault against the address space. in do_user_addr_fault()
1316 * Fault from code in kernel from in do_user_addr_fault()
1360 * If for any reason at all we couldn't handle the fault, in do_user_addr_fault()
1362 * the fault. Since we never set FAULT_FLAG_RETRY_NOWAIT, if in do_user_addr_fault()
1367 * repeat the page fault later with a VM_FAULT_NOPAGE retval in do_user_addr_fault()
1372 fault = handle_mm_fault(vma, address, flags, regs); in do_user_addr_fault()
1375 if (fault_signal_pending(fault, regs)) { in do_user_addr_fault()
1387 if (unlikely((fault & VM_FAULT_RETRY) && in do_user_addr_fault()
1394 if (unlikely(fault & VM_FAULT_ERROR)) { in do_user_addr_fault()
1395 mm_fault_error(regs, hw_error_code, address, fault); in do_user_addr_fault()
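handle_mm_fault() is where each surviving fault is actually resolved, and the effect is observable from userspace: first-touch accesses to a fresh anonymous mapping each take one minor fault. A runnable sketch using getrusage()'s ru_minflt counter:

```c
#include <sys/mman.h>
#include <sys/resource.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	long psz = sysconf(_SC_PAGESIZE);
	const long pages = 16;
	char *p = mmap(NULL, pages * psz, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	struct rusage before, after;

	if (p == MAP_FAILED)
		return 1;

	getrusage(RUSAGE_SELF, &before);
	for (long i = 0; i < pages; i++)
		p[i * psz] = 1;		/* first touch demand-faults each page */
	getrusage(RUSAGE_SELF, &after);

	printf("minor faults taken: %ld\n",
	       after.ru_minflt - before.ru_minflt);
	return 0;
}
```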
1425 /* Was the fault on kernel-controlled part of the address space? */ in handle_page_fault()
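The "kernel-controlled part of the address space" test is essentially an address-limit comparison (on 64-bit the real check also exempts the legacy vsyscall page). An illustrative sketch with an assumed TASK_SIZE_MAX value:

```c
#include <stdbool.h>
#include <stdio.h>

/* Assumed illustrative constant; the kernel's TASK_SIZE_MAX depends on
 * configuration (paging levels, etc.). */
#define TASK_SIZE_MAX 0x00007ffffffff000ul

static bool fault_in_kernel_space(unsigned long address)
{
	return address >= TASK_SIZE_MAX;
}

int main(void)
{
	printf("%d\n", fault_in_kernel_space(0xffffffff81000000ul)); /* 1 */
	printf("%d\n", fault_in_kernel_space(0x00007f0000000000ul)); /* 0 */
	return 0;
}
```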
1431 * User address page fault handling might have reenabled in handle_page_fault()
1450 * (asynchronous page fault mechanism). The event happens when a in DEFINE_IDTENTRY_RAW_ERRORCODE()
1475 * be invoked because a kernel fault on a user space address might in DEFINE_IDTENTRY_RAW_ERRORCODE()
1478 * In case the fault hit a RCU idle region the conditional entry in DEFINE_IDTENTRY_RAW_ERRORCODE()