Lines Matching full:nmi

18 #include <linux/nmi.h>
31 #include <asm/nmi.h>
41 #include <trace/events/nmi.h>
44 * An emergency handler can be set in any context including NMI
95 * Prevent the NMI reason port (0x61) from being accessed simultaneously; can
96 * only be used in an NMI handler.
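The lock those two lines describe serializes readers of the legacy NMI reason port. A minimal sketch of how such a lock is declared and why it must be a raw spinlock (nmi_reason_lock is the name I recall from arch/x86/kernel/nmi.c):

    #include <linux/spinlock.h>

    /*
     * Taken only from NMI context, where sleeping locks are forbidden, so
     * this must be a raw spinlock; it serializes all accesses to I/O port
     * 0x61 while an NMI reason is being decoded.
     */
    static DEFINE_RAW_SPINLOCK(nmi_reason_lock);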
132 "INFO: NMI handler (%ps) took too long to run: %lld.%03d msecs\n", in nmi_check_duration()
178 /* return total number of NMI events handled */ in nmi_handle()
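That hit is the closing comment of nmi_handle(). A condensed sketch of the loop it belongs to, which walks every registered handler for the given NMI type and sums what they report as handled (tracing and the nmi_check_duration() call are omitted; treat this as an approximation, not the verbatim function):

    static int nmi_handle(unsigned int type, struct pt_regs *regs)
    {
        struct nmi_desc *desc = nmi_to_desc(type);
        struct nmiaction *a;
        int handled = 0;

        rcu_read_lock();

        /*
         * NMIs are edge-triggered: only one can be latched at a time, so
         * walk the whole chain to catch events whose own NMI was dropped
         * while another one was being serviced.
         */
        list_for_each_entry_rcu(a, &desc->head, list)
            handled += a->handler(type, regs);

        rcu_read_unlock();

        /* return total number of NMI events handled */
        return handled;
    }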
195 * internal NMI handler call chains (SERR and IO_CHECK). in __register_nmi_handler()
224 * the name passed in to describe the NMI handler in unregister_nmi_handler()
229 "Trying to free NMI (%s) from NMI context!\n", n->name); in unregister_nmi_handler()
246 * @type: NMI type
249 * Set an emergency NMI handler which, if set, will preempt all the other
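For context on what such an emergency handler would preempt, here is a hedged usage sketch of the ordinary chain API from <linux/nmi.h>; my_nmi_handler and the "my_driver" name are made-up illustrations:

    #include <linux/init.h>
    #include <linux/nmi.h>

    /* Return NMI_HANDLED if the event was ours, NMI_DONE otherwise. */
    static int my_nmi_handler(unsigned int type, struct pt_regs *regs)
    {
        /* hypothetical: poll our device's status register here */
        return NMI_DONE;
    }

    static int __init my_driver_init(void)
    {
        /* Hook into the CPU-local NMI chain under the name "my_driver". */
        return register_nmi_handler(NMI_LOCAL, my_nmi_handler, 0, "my_driver");
    }

    static void __exit my_driver_exit(void)
    {
        unregister_nmi_handler(NMI_LOCAL, "my_driver");
    }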
276 pr_emerg("NMI: PCI system error (SERR) for reason %02x on CPU %d.\n", in pci_serr_error()
280 nmi_panic(regs, "NMI: Not continuing"); in pci_serr_error()
300 "NMI: IOCK error (debug interrupt?) for reason %02x on CPU %d.\n", in io_check_error()
305 nmi_panic(regs, "NMI IOCK error: Not continuing"); in io_check_error()
308 * If we end up here, it means we have received an NMI while in io_check_error()
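Both pci_serr_error() and io_check_error() acknowledge the condition through the same legacy port 0x61 that they print about. A hedged sketch of the clear pattern (the NMI_REASON_* constants are the ones I believe asm/mach_traps.h provides; io_check_error() additionally waits and then re-enables the IOCK line, which is left out here):

    /* Mask and clear the PCI SERR line by setting its clear bit in port 0x61. */
    reason = (reason & NMI_REASON_CLEAR_MASK) | NMI_REASON_CLEAR_SERR;
    outb(reason, NMI_REASON_PORT);

    /* The IOCK case follows the same shape with NMI_REASON_CLEAR_IOCHK. */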
339 * if it caused the NMI) in unknown_nmi_error()
349 pr_emerg_ratelimited("Uhhuh. NMI received for unknown reason %02x on CPU %d.\n", in unknown_nmi_error()
353 nmi_panic(regs, "NMI: Not continuing"); in unknown_nmi_error()
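A condensed sketch of the unknown-NMI path those hits come from: handlers registered for NMI_UNKNOWN get a chance to claim the event; if none does, the kernel either panics (when configured to) or complains and limps on (approximation, not verbatim):

    static void unknown_nmi_error(unsigned char reason, struct pt_regs *regs)
    {
        int handled;

        /* Let NMI_UNKNOWN chain handlers claim the event first. */
        handled = nmi_handle(NMI_UNKNOWN, regs);
        if (handled)
            return;

        pr_emerg_ratelimited("Uhhuh. NMI received for unknown reason %02x on CPU %d.\n",
                             reason, smp_processor_id());

        if (unknown_nmi_panic || panic_on_unrecovered_nmi)
            nmi_panic(regs, "NMI: Not continuing");

        pr_emerg("Dazed and confused, but trying to continue\n");
    }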
369 * CPU-specific NMI must be processed before non-CPU-specific in default_do_nmi()
370 * NMI, otherwise we may lose it, because the CPU-specific in default_do_nmi()
371 * NMI cannot be detected/processed on other CPUs. in default_do_nmi()
376 * be two NMIs or more than two NMIs (anything over two is dropped in default_do_nmi()
377 * due to NMI being edge-triggered). If this is the second half in default_do_nmi()
378 * of the back-to-back NMI, assume we dropped things and process in default_do_nmi()
379 * more handlers. Otherwise reset the 'swallow' NMI behaviour in default_do_nmi()
397 * There are cases when an NMI handler handles multiple in default_do_nmi()
398 * events in the current NMI. One of these events may in default_do_nmi()
399 * be queued for the next NMI. Because the event is in default_do_nmi()
400 * already handled, the next NMI will result in an unknown in default_do_nmi()
401 * NMI. Instead, let's flag this for a potential NMI to in default_do_nmi()
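The flagging described above amounts to marking the CPU whenever one pass handled more than one event, so a following otherwise-unknown NMI can be swallowed. A condensed sketch of the CPU-specific branch of default_do_nmi():

    handled = nmi_handle(NMI_LOCAL, regs);
    if (handled) {
        /*
         * More than one event in a single NMI means another event's own
         * NMI is probably still latched; remember that so the next
         * unknown NMI can be swallowed rather than reported.
         */
        if (handled > 1)
            __this_cpu_write(swallow_nmi, true);
        return;
    }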
410 * Non-CPU-specific NMI: NMI sources can be processed on any CPU. in default_do_nmi()
431 * Reassert NMI in case it became active in default_do_nmi()
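The non-CPU-specific path reads the reason port under nmi_reason_lock and dispatches to the SERR/IOCK helpers; a condensed sketch (the explicit reassert mentioned above is, to my recollection, a 32-bit-only step):

    raw_spin_lock(&nmi_reason_lock);
    reason = x86_platform.get_nmi_reason();

    if (reason & NMI_REASON_MASK) {
        if (reason & NMI_REASON_SERR)
            pci_serr_error(reason, regs);
        else if (reason & NMI_REASON_IOCHK)
            io_check_error(reason, regs);
        /*
         * On 32-bit, the NMI is reasserted here in case another one
         * became active meanwhile (the line is edge-triggered).
         */
        raw_spin_unlock(&nmi_reason_lock);
        return;
    }
    raw_spin_unlock(&nmi_reason_lock);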
443 * Only one NMI can be latched at a time. To handle in default_do_nmi()
444 * this we may process multiple NMI handlers at once to in default_do_nmi()
445 * cover the case where an NMI is dropped. The downside in default_do_nmi()
446 * to this approach is we may process an NMI prematurely, in default_do_nmi()
447 * while its real NMI is sitting latched. This will cause in default_do_nmi()
448 * an unknown NMI on the next run of the NMI processing. in default_do_nmi()
453 * of a back-to-back NMI, so we flag that condition too. in default_do_nmi()
456 * NMI previously and we swallow it. Otherwise we reset in default_do_nmi()
460 * a 'real' unknown NMI. For example, while processing in default_do_nmi()
461 * a perf NMI another perf NMI comes in along with a in default_do_nmi()
462 * 'real' unknown NMI. These two NMIs get combined into in default_do_nmi()
463 * one (as described above). When the next NMI gets in default_do_nmi()
465 * no one will know that there was a 'real' unknown NMI sent in default_do_nmi()
467 * perf NMI returns two events handled then the second in default_do_nmi()
468 * NMI will get eaten by the logic below, again losing a in default_do_nmi()
469 * 'real' unknown NMI. But this is the best we can do in default_do_nmi()
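After all those caveats the decision itself is small: swallow only when this really is the second half of a back-to-back pair and an earlier pass asked for it, otherwise report an unknown NMI (sketch; nmi_stats is the per-CPU counter struct I recall from nmi.c):

    if (b2b && __this_cpu_read(swallow_nmi))
        __this_cpu_add(nmi_stats.swallow, 1);
    else
        unknown_nmi_error(reason, regs);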
483 * its NMI context with the CPU when the breakpoint or page fault does an IRET.
486 * NMI processing. On x86_64, the asm glue protects us from nested NMIs
487 * if the outer NMI came from kernel mode, but we can still nest if the
488 * outer NMI came from user mode.
496 * When no NMI is in progress, it is in the "not running" state.
497 * When an NMI comes in, it goes into the "executing" state.
498 * Normally, if another NMI is triggered, it does not interrupt
499 * the running NMI and the HW will simply latch it so that when
500 * the first NMI finishes, it will restart the second NMI.
502 * when one is running, are ignored. Only one NMI is restarted.)
504 * If an NMI executes an iret, another NMI can preempt it. We do not
505 * want to allow this new NMI to run, but we want to execute it when the
507 * the first NMI will perform a dec_return; if the result is zero
508 * (NOT_RUNNING), then it will simply exit the NMI handler. If not, the
511 * rerun the NMI handler again, and restart the 'latched' NMI.
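A condensed sketch of the state machine those lines describe: a per-CPU tri-state plus a dec-and-test at the end of the handler that either returns to NOT_RUNNING or loops to service the latched NMI (approximation of the exc_nmi() logic; CR2 handling, statistics, and debug-register juggling omitted):

    enum nmi_states { NMI_NOT_RUNNING = 0, NMI_EXECUTING, NMI_LATCHED };
    static DEFINE_PER_CPU(enum nmi_states, nmi_state);

    DEFINE_IDTENTRY_RAW(exc_nmi)
    {
        /* A nested NMI only records that it happened, then leaves. */
        if (this_cpu_read(nmi_state) != NMI_NOT_RUNNING) {
            this_cpu_write(nmi_state, NMI_LATCHED);
            return;
        }
        this_cpu_write(nmi_state, NMI_EXECUTING);

    nmi_restart:
        default_do_nmi(regs);

        /*
         * dec_return: EXECUTING -> NOT_RUNNING (zero) means we are done;
         * LATCHED -> EXECUTING (non-zero) means a nested NMI arrived while
         * we ran, so go around again and service it.
         */
        if (unlikely(this_cpu_dec_return(nmi_state)))
            goto nmi_restart;
    }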
518 * In case the NMI takes a page fault, we need to save off the CR2
519 * because the NMI could have preempted another page fault and corrupt
522 * CR2 must be done before converting the nmi state back to NOT_RUNNING.
523 * Otherwise, there would be a race of another nested NMI coming in
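The ordering constraint in those lines shows up as a save before the handlers run and a compare-and-restore placed before the dec_return that re-opens the nesting window (sketch, per-CPU nmi_cr2 as in nmi.c):

    this_cpu_write(nmi_cr2, read_cr2());    /* before any handler can fault */

    nmi_restart:
        default_do_nmi(regs);               /* may take a page fault, clobbering CR2 */

        /*
         * Restore CR2 before dropping nmi_state back to NOT_RUNNING;
         * otherwise a nested NMI racing in could snapshot our scratch CR2
         * value instead of the interrupted context's.
         */
        if (unlikely(this_cpu_read(nmi_cr2) != read_cr2()))
            write_cr2(this_cpu_read(nmi_cr2));

        if (unlikely(this_cpu_dec_return(nmi_state)))
            goto nmi_restart;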
625 /* +--------- nmi_seq & 0x1: CPU is currently in NMI handler. */
628 /* | | | NMI handler has been invoked. */
672 msgp = "CPU entered NMI handler function, but has not exited"; in nmi_backtrace_stall_check()
682 msghp = " (CPU exited one NMI handler function)"; in nmi_backtrace_stall_check()
684 msghp = " (CPU currently in NMI handler function)"; in nmi_backtrace_stall_check()
686 msghp = " (CPU was never in an NMI handler function)"; in nmi_backtrace_stall_check()
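The diagram and messages above come from nmi_backtrace_stall_check(): the core idea is a per-CPU sequence number bumped on NMI entry and again on exit, so an odd value means the CPU is currently inside the handler, and comparing against a snapshot tells whether the handler made progress. An illustrative sketch with hypothetical field names (the real bookkeeping in nmi.c has more fields and different names):

    /* Hypothetical per-CPU bookkeeping; names are illustrative only. */
    struct nmi_stall_snap {
        unsigned long nmi_seq;          /* ++ on NMI entry, ++ on exit */
        unsigned long nmi_seq_snap;     /* value captured at backtrace request */
    };

    static const char *nmi_stall_msg(const struct nmi_stall_snap *s)
    {
        unsigned long seq = READ_ONCE(s->nmi_seq);

        if (seq & 0x1)
            return "CPU currently in NMI handler function";
        if (seq != s->nmi_seq_snap)
            return "CPU exited one NMI handler function";
        return "CPU was never in an NMI handler function";
    }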
702 * And NMI unblocking only happens when the stack frame indicates
705 * Thus, the NMI entry stub for FRED is really straightforward and
707 * during NMI handling.
720 * Save CR2 for eventual restore to cover the case where the NMI in DEFINE_FREDENTRY_NMI()
722 * prevents guest state corruption in case that the NMI handler in DEFINE_FREDENTRY_NMI()
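A condensed sketch of the FRED NMI entry those two hits belong to, assuming the structure I recall from nmi.c (offline-CPU checks and statistics omitted): no nesting workaround is needed, only the same CR2 save/restore pattern shown above:

    DEFINE_FREDENTRY_NMI(exc_nmi)
    {
        irqentry_state_t irq_state;

        /*
         * Save CR2 so a page fault taken by a handler (or a guest's live
         * CR2 around VM entry/exit) does not corrupt the interrupted state.
         */
        this_cpu_write(nmi_cr2, read_cr2());

        irq_state = irqentry_nmi_enter(regs);
        default_do_nmi(regs);
        irqentry_nmi_exit(regs, irq_state);

        if (unlikely(this_cpu_read(nmi_cr2) != read_cr2()))
            write_cr2(this_cpu_read(nmi_cr2));
    }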
749 /* reset the back-to-back NMI logic */
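Resetting that logic is a one-liner: forget the last NMI's RIP so the next NMI on this CPU cannot be misclassified as back-to-back (sketch of local_touch_nmi() as I recall it):

    void local_touch_nmi(void)
    {
        /* Next NMI will not match last_nmi_rip, so no b2b/swallow carry-over. */
        __this_cpu_write(last_nmi_rip, 0);
    }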