Lines Matching +full:per +full:- +full:context

1 /* SPDX-License-Identifier: GPL-2.0 */
6 * include/linux/preempt.h - macros for accessing and manipulating
18 * - bits 0-7 are the preemption count (max preemption depth: 256)
19 * - bits 8-15 are the softirq count (max # of softirqs: 256)
43 #define __IRQ_MASK(x) ((1UL << (x))-1)
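
For context, __IRQ_MASK() is the helper from which the per-field masks of preempt_count are built. A minimal sketch of that derivation, using the constant names found in mainline preempt.h (bit widths and names are assumptions that may differ between kernel versions):

#define PREEMPT_BITS	8
#define SOFTIRQ_BITS	8
#define HARDIRQ_BITS	4
#define NMI_BITS	4

#define PREEMPT_SHIFT	0
#define SOFTIRQ_SHIFT	(PREEMPT_SHIFT + PREEMPT_BITS)
#define HARDIRQ_SHIFT	(SOFTIRQ_SHIFT + SOFTIRQ_BITS)
#define NMI_SHIFT	(HARDIRQ_SHIFT + HARDIRQ_BITS)

/* e.g. PREEMPT_MASK == 0x000000ff, SOFTIRQ_MASK == 0x0000ff00,
 * HARDIRQ_MASK == 0x000f0000, NMI_MASK == 0x00f00000
 */
#define PREEMPT_MASK	(__IRQ_MASK(PREEMPT_BITS) << PREEMPT_SHIFT)
#define SOFTIRQ_MASK	(__IRQ_MASK(SOFTIRQ_BITS) << SOFTIRQ_SHIFT)
#define HARDIRQ_MASK	(__IRQ_MASK(HARDIRQ_BITS) << HARDIRQ_SHIFT)
#define NMI_MASK	(__IRQ_MASK(NMI_BITS) << NMI_SHIFT)
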
60 * Disable preemption until the scheduler is running -- use an unconditional
63 * Reset by start_kernel()->sched_init()->init_idle()->init_idle_preempt_count().
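
The "unconditional" value mentioned above is a single preempt count increment applied at boot, so the early kernel behaves as if preempt_disable() were in effect even on !PREEMPT_COUNT configurations. A sketch of the corresponding definition, with names taken from mainline trees (treat as illustrative, not authoritative for any specific version):

/* Boot-time preempt_count: acts as one outstanding preempt_disable() until
 * init_idle_preempt_count() resets it once the scheduler is up.
 */
#define INIT_PREEMPT_COUNT	PREEMPT_OFFSET
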
69 * which states that during context switches:
82 * interrupt_context_level - return interrupt context level
84 * Returns the current interrupt context level.
85 * 0 - normal context
86 * 1 - softirq context
87 * 2 - hardirq context
88 * 3 - NMI context
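
A sketch of how that level can be derived from the preempt_count bits. This follows the shape of the mainline helper, but treat it as illustrative rather than authoritative for any particular kernel version; it assumes the NMI/HARDIRQ masks and SOFTIRQ_OFFSET from preempt.h:

static __always_inline unsigned char interrupt_context_level(void)
{
	unsigned long pc = preempt_count();
	unsigned char level = 0;

	/* NMI matches all three tests (3), hardirq the last two (2),
	 * serving-softirq only the last one (1), task context none (0).
	 */
	level += !!(pc & NMI_MASK);
	level += !!(pc & (NMI_MASK | HARDIRQ_MASK));
	level += !!(pc & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET));

	return level;
}
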
111 # define softirq_count() (current->softirq_disable_cnt & SOFTIRQ_MASK)
119 * Macros to retrieve the current execution context:
121 * in_nmi() - We're in NMI context
122 * in_hardirq() - We're in hard IRQ context
123 * in_serving_softirq() - We're in softirq context
124 * in_task() - We're in task context
137 * in_irq() - Obsolete version of in_hardirq()
138 * in_softirq() - We have BH disabled, or are processing softirqs
139 * in_interrupt() - We're in NMI, IRQ, SoftIRQ context or have BH disabled
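
A hypothetical use of these predicates (demo_work, demo_work_fn and demo_handle_event are made-up names): sleeping work is only run directly when in_task() is true, everything else is deferred to process context. Deferring from NMI context would need irq_work rather than a regular workqueue, so this sketch assumes the non-task callers are hardirq/softirq/BH-disabled only.

#include <linux/preempt.h>
#include <linux/workqueue.h>

static void demo_work_fn(struct work_struct *work)
{
	/* runs in process context; may sleep, take mutexes, etc. */
}

static DECLARE_WORK(demo_work, demo_work_fn);

static void demo_handle_event(void)
{
	if (in_task())
		demo_work_fn(&demo_work);	/* task context: run it directly */
	else
		schedule_work(&demo_work);	/* hardirq/softirq/BH-disabled: defer */
}
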
180 * Are we running in atomic context? WARNING: this macro cannot
181 * always detect atomic context; in particular, it cannot know about
182 * held spinlocks in non-preemptible kernels. Thus it should not be
283 * that can cause faults and scheduling migrate into our preempt-protected
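
The "preempt-protected region" being described is typically a plain per-CPU access like the hypothetical one below (demo_events and demo_count_event are invented names); the disable/enable pair must act as barriers even on non-preemptible kernels so that operations which can fault and schedule do not leak into the region.

#include <linux/percpu.h>
#include <linux/preempt.h>

static DEFINE_PER_CPU(unsigned long, demo_events);	/* hypothetical counter */

static void demo_count_event(void)
{
	preempt_disable();			/* stay on this CPU, no preemption */
	__this_cpu_inc(demo_events);		/* plain, non-atomic per-CPU RMW */
	preempt_enable();			/* may reschedule immediately */
}
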
324 * preempt_ops - notifiers called when a task is preempted and rescheduled
344 * preempt_notifier - key for installing preemption notifiers
364 notifier->link.next = NULL; in preempt_notifier_init()
365 notifier->link.pprev = NULL; in preempt_notifier_init()
366 notifier->ops = ops; in preempt_notifier_init()
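
A hypothetical registration sequence built on this API (the demo_* names are invented; the sketch is modelled on how the in-tree KVM user drives it, and requires CONFIG_PREEMPT_NOTIFIERS). The callbacks are invoked from the scheduler's context-switch path, so they must not sleep:

#include <linux/preempt.h>
#include <linux/sched.h>

static void demo_sched_in(struct preempt_notifier *pn, int cpu)
{
	/* current has just been scheduled in on @cpu */
}

static void demo_sched_out(struct preempt_notifier *pn, struct task_struct *next)
{
	/* current is about to be switched out in favour of @next */
}

static struct preempt_ops demo_preempt_ops = {
	.sched_in	= demo_sched_in,
	.sched_out	= demo_sched_out,
};

static struct preempt_notifier demo_notifier;

static void demo_attach_to_current(void)
{
	preempt_notifier_inc();				/* enable notifier dispatch */
	preempt_notifier_init(&demo_notifier, &demo_preempt_ops);
	preempt_notifier_register(&demo_notifier);	/* attaches to current */
}
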
374 * Migrate-Disable and why it is undesired.
382 * Per this argument, the change from preempt_disable() to migrate_disable()
385 * - a higher priority task gains reduced wake-up latency; with preempt_disable()
388 * - a lower priority task, which under preempt_disable() could've instantly
402 * migration. This turns out to break a bunch of per-cpu usage. To this end,
406 * This is a 'temporary' work-around at best. The correct solution is getting
408 * per-cpu locking or short preempt-disable regions.
423 * Note: even non-work-conserving schedulers like semi-partitioned ones depend on
425 * work-conserving schedulers.
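
A sketch of the substitution the text describes (purely illustrative, demo_per_cpu_section is an invented name): a section that only needs "stay on this CPU" semantics, for example because the per-CPU data it touches is serialized by other means such as a local_lock_t, can use migrate_disable() so that PREEMPT_RT keeps it preemptible.

#include <linux/preempt.h>

static void demo_per_cpu_section(void)
{
	migrate_disable();	/* may still be preempted, but not migrated */
	/*
	 * ... touch per-CPU state whose consistency is guaranteed elsewhere
	 * (e.g. by a local_lock_t), so preemption inside the section is
	 * tolerable ...
	 */
	migrate_enable();
}
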
439 * preempt_disable_nested - Disable preemption inside a normally preempt disabled section
442 * section which has preemption disabled implicitly on non-PREEMPT_RT
444 * - holding a spinlock/rwlock
445 * - soft interrupt context
446 * - regular interrupt handlers
449 * interrupt context and regular interrupt handlers are preemptible and
452 * PREEMPT_RT. For non-PREEMPT_RT kernels this is a NOP.
456 * - seqcount write side critical sections where the seqcount is not
460 * - RMW per CPU variable updates like vmstat.
472 * preempt_enable_nested - Undo the effect of preempt_disable_nested()
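
A hypothetical vmstat-style update using the pair (demo_stat and demo_stat_add are invented names): the caller is assumed to hold a spinlock, which already disables preemption on !PREEMPT_RT, so the nested disable only takes real effect on PREEMPT_RT, where it is needed for the non-atomic per-CPU read-modify-write.

#include <linux/percpu.h>
#include <linux/preempt.h>

static DEFINE_PER_CPU(unsigned long, demo_stat);

/* Caller holds a spinlock protecting the surrounding data structure. */
static void demo_stat_add(unsigned long delta)
{
	preempt_disable_nested();		/* real preempt_disable() only on RT */
	__this_cpu_add(demo_stat, delta);	/* plain, non-atomic per-CPU RMW */
	preempt_enable_nested();
}
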