Searched refs:nmi_nesting (Results 1 – 4 of 4) sorted by relevance
32  .nmi_nesting = CT_NESTING_IRQ_NONIDLE,
108 WRITE_ONCE(ct->nmi_nesting, 0);                             in ct_kernel_exit()
168 WRITE_ONCE(ct->nmi_nesting, CT_NESTING_IRQ_NONIDLE);        in ct_kernel_enter()
203 WRITE_ONCE(ct->nmi_nesting, /* No store tearing. */         in ct_nmi_exit()
211 WRITE_ONCE(ct->nmi_nesting, 0); /* Avoid store tearing. */  in ct_nmi_exit()
280 WRITE_ONCE(ct->nmi_nesting, /* Prevent store tearing. */    in ct_nmi_enter()
37  long nmi_nesting; /* Track irq/NMI nesting level. */        member
122 return __this_cpu_read(context_tracking.nmi_nesting);       in ct_nmi_nesting()
129 return ct->nmi_nesting;                                     in ct_nmi_nesting_cpu()
972  2 long nmi_nesting;
984  ``->nmi_nesting`` field. Because NMIs cannot be masked, changes
988  represented by a ``->nmi_nesting`` value of nine. This counter
997  ``->nmi_nesting`` field is set to a large positive number, and
999  the ``->nmi_nesting`` field is set to zero. Assuming that
1001 counter, this approach corrects the ``->nmi_nesting`` field
1029 | ``->nmi_nesting`` counters into a single counter that just |
380 long nmi_nesting = ct_nmi_nesting();  local  in rcu_is_cpu_rrupt_from_idle()
394 if (nmi_nesting > 1)                         in rcu_is_cpu_rrupt_from_idle()
401 if (nmi_nesting == 1)                        in rcu_is_cpu_rrupt_from_idle()
405 if (!nmi_nesting) {                          in rcu_is_cpu_rrupt_from_idle()