/linux/Documentation/locking/
preempt-locking.rst
    35: protect these situations by disabling preemption around them.
    37: You can also use put_cpu() and get_cpu(), which will disable preemption.
    44: Under preemption, the state of the CPU must be protected. This is arch-
    47: section that must occur while preemption is disabled. Think what would happen
    50: upon preemption, the FPU registers will be sold to the lowest bidder. Thus,
    51: preemption must be disabled around such regions.
    54: kernel_fpu_begin and kernel_fpu_end will disable and enable preemption.
    72: Data protection under preemption is achieved by disabling preemption for the
    84: n-times in a code path, and preemption will not be reenabled until the n-th
    86: preemption is not enabled.
    [all …]
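The get_cpu()/put_cpu() pairing mentioned in these matches can be sketched as follows. This is a minimal kernel-context illustration, not a standalone program; the per-CPU variable `my_counter` and the function name are hypothetical:

```c
#include <linux/percpu.h>
#include <linux/smp.h>

static DEFINE_PER_CPU(int, my_counter);	/* hypothetical per-CPU counter */

static void bump_this_cpu(void)
{
	int cpu = get_cpu();	/* disables preemption, returns this CPU's id */

	per_cpu(my_counter, cpu)++;	/* safe: the task cannot migrate away */
	put_cpu();		/* re-enables preemption */
}
```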
|
locktypes.rst
    59: preemption and interrupt disabling primitives. Contrary to other locking
    60: mechanisms, disabling preemption or interrupts are pure CPU local
    76: Spinning locks implicitly disable preemption and the lock / unlock functions
    103: PI has limitations on non-PREEMPT_RT kernels due to preemption and
    106: PI clearly cannot preempt preemption-disabled or interrupt-disabled
    162: by disabling preemption or interrupts.
    164: On non-PREEMPT_RT kernels local_lock operations map to the preemption and
    200: local_lock should be used in situations where disabling preemption or
    204: local_lock is not suitable to protect against preemption or interrupts on a
    222: Unlike local_lock(), local_unlock_nested_bh() does not disable preemption and
    [all …]
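The local_lock mapping described in these matches can be sketched as a per-CPU structure protected by a local_lock_t; on non-PREEMPT_RT kernels the lock operation maps to preempt_disable(), while on PREEMPT_RT it stays preemptible. The structure and function names are hypothetical, and this is a kernel-context sketch only:

```c
#include <linux/local_lock.h>
#include <linux/percpu.h>

struct pcpu_stats {			/* hypothetical example structure */
	local_lock_t lock;
	unsigned long events;
};

static DEFINE_PER_CPU(struct pcpu_stats, pcpu_stats) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void count_event(void)
{
	/*
	 * On !PREEMPT_RT this maps to preempt_disable(); on PREEMPT_RT
	 * it is a per-CPU sleeping lock, keeping the section preemptible.
	 */
	local_lock(&pcpu_stats.lock);
	this_cpu_inc(pcpu_stats.events);
	local_unlock(&pcpu_stats.lock);
}
```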
|
seqlock.rst
    49: preemption, preemption must be explicitly disabled before entering the
    74: /* Serialized context with disabled preemption */
    109: For lock types which do not implicitly disable preemption, preemption
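The rule in these matches, that a writer using a plain seqcount_t must disable preemption explicitly, can be sketched like this. The variable names are hypothetical; this is a kernel-context sketch, not buildable standalone:

```c
#include <linux/seqlock.h>
#include <linux/preempt.h>

static seqcount_t foo_seq = SEQCNT_ZERO(foo_seq);	/* hypothetical */
static u64 foo_value;

static void foo_update(u64 v)
{
	/*
	 * Plain seqcount_t does not disable preemption by itself,
	 * so the write side must do it explicitly.
	 */
	preempt_disable();
	write_seqcount_begin(&foo_seq);
	foo_value = v;
	write_seqcount_end(&foo_seq);
	preempt_enable();
}

static u64 foo_read(void)
{
	unsigned int seq;
	u64 v;

	do {
		seq = read_seqcount_begin(&foo_seq);
		v = foo_value;
	} while (read_seqcount_retry(&foo_seq, seq));
	return v;
}
```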
|
hwspinlock.rst
    95: Upon a successful return from this function, preemption is disabled so
    111: Upon a successful return from this function, preemption and the local
    127: Upon a successful return from this function, preemption is disabled,
    178: Upon a successful return from this function, preemption is disabled so
    195: Upon a successful return from this function, preemption and the local
    211: Upon a successful return from this function, preemption is disabled,
    268: Upon a successful return from this function, preemption and local
    280: Upon a successful return from this function, preemption is reenabled,
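The "preemption is disabled upon a successful return" behavior these matches describe can be sketched with the timeout variant of the hwspinlock API. The function name and the 100 ms timeout are hypothetical, and the `hwlock` is assumed to have been obtained elsewhere (e.g. via hwspin_lock_request_specific()); kernel-context sketch only:

```c
#include <linux/hwspinlock.h>

static int talk_to_remote(struct hwspinlock *hwlock)
{
	int ret;

	/*
	 * Try for up to 100 ms; on success preemption is disabled
	 * until the matching unlock.
	 */
	ret = hwspin_lock_timeout(hwlock, 100);
	if (ret)
		return ret;

	/* ... touch state shared with the remote processor ... */

	hwspin_unlock(hwlock);	/* re-enables preemption */
	return 0;
}
```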
|
/linux/Documentation/trace/rv/
monitor_sched.rst
    88: The schedule called with preemption disabled (scpd) monitor ensures schedule is
    89: called with preemption disabled::
    110: does not enable preemption::
    180: The need resched preempts (nrp) monitor ensures preemption requires
    181: ``need_resched``. Only kernel preemption is considered, since preemption
    184: A kernel preemption is whenever ``__schedule`` is called with the preemption
    186: type of preemption occurs after the need for ``rescheduling`` has been set.
    188: userspace preemption.
    190: case, a task goes through the scheduler from a preemption context but it is
    195: In theory, a preemption can only occur after the ``need_resched`` flag is set. In
    [all …]
|
monitor_wip.rst
    13: preemption disabled::
    30: The wakeup event always takes place with preemption disabled because
|
/linux/Documentation/gpu/
msm-preemption.rst
    12: When preemption is enabled 4 rings are initialized, corresponding to different
    16: requesting preemption. When certain conditions are met, depending on the
    32: configured by changing the preemption level, this allows to compromise between
    33: latency (ie. the time that passes between when the kernel requests preemption
    43: preemption of any kind.
    57: skipped when using GMEM with Level 1 preemption. When enabling this userspace is
    58: expected to set the state that isn't preserved whenever preemption occurs which
    60: before and after preemption.
    66: being executed. There are different kinds of preemption records and most of
    67: those require one buffer per ring. This is because preemption never occurs
    [all …]
|
drm-compute.rst
    17: not even to force preemption. The driver with is simply forced to unmap a BO
    36: If job preemption and recoverable pagefaults are not available, those are the
|
/linux/Documentation/core-api/real-time/
theory.rst
    20: control of the scheduler and significantly increasing the number of preemption
    33: the policy does not guarantee immediate preemption when a new SCHED_OTHER task
    48: preemption and then actively spinning until the lock becomes available. Once
    49: the lock is released, preemption is enabled. From a real-time perspective,
    50: this approach is undesirable because disabling preemption prevents the
    55: that do not disable preemption. On PREEMPT_RT, spinlock_t is implemented using
    61: Disabling CPU migration provides the same effect as disabling preemption, while
    62: still allowing preemption and ensuring that the task continues to run on the
    86: Interrupt handlers are another source of code that executes with preemption
    114: significantly reduces sections of code where interrupts or preemption is
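The migration-disabling alternative to preemption disabling that these matches mention can be sketched with migrate_disable()/migrate_enable(). The function name is hypothetical; kernel-context sketch only:

```c
#include <linux/preempt.h>

static void stay_on_this_cpu(void)
{
	/*
	 * Unlike preempt_disable(), this keeps the task preemptible;
	 * it only pins it to the current CPU.
	 */
	migrate_disable();

	/*
	 * ... work that needs a stable CPU association, e.g. via
	 * smp_processor_id(), but may still be preempted ...
	 */

	migrate_enable();
}
```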
|
architecture-porting.rst
    29: Kernel preemption must be supported and requires that
    54: not saved during kernel preemption. As a result, any kernel code that uses
    58: interruptions from softirqs and to disable regular preemption. This allows the
    63: preemption alone is sufficient.
    66: the kernel_fpu_begin()/ kernel_fpu_end() section because it requires preemption
    67: to be enabled. These preemption points are generally sufficient to avoid
    80: preemption to be enabled.
    91: Lazy preemption
|
differences.rst
    35: traditional semantics: it disables preemption and, when used with _irq() or
    74: other threads. Do not assume that softirq context runs with preemption
    106: Because spinlock_t on PREEMPT_RT does not disable preemption, it cannot be used
    107: to protect per-CPU data by relying on implicit preemption disabling. If this
    108: inherited preemption disabling is essential and if local_lock_t cannot be used
    111: non-PREEMPT_RT kernels, it verifies with lockdep that preemption is already
    112: disabled. On PREEMPT_RT, it explicitly disables preemption.
    134: from sections where preemption is disabled. This is because the allocator must
    139: acquired when preemption is disabled. Fortunately, this is generally not a
    141: with preemption or interrupts disabled into threaded context, where sleeping is
    [all …]
|
index.rst
    4: Real-time preemption
|
/linux/kernel/
Kconfig.preempt
    28: This is the traditional Linux preemption model, geared towards
    46: "explicit preemption points" to the kernel code. These new
    47: preemption points have been selected to reduce the maximum
    69: otherwise not be about to reach a natural preemption point.
    79: bool "Scheduler controlled preemption model"
    84: This option provides a scheduler driven preemption model that
    85: is fundamentally similar to full preemption, but is less
    87: reduce lock holder preemption and recover some of the performance
    88: gains seen from using Voluntary preemption.
    136: This option allows to define the preemption mode
    [all …]
/linux/Documentation/core-api/
entry.rst
    167: irq_enter_rcu() updates the preemption count which makes in_hardirq()
    172: irq_exit_rcu() handles interrupt time accounting, undoes the preemption
    175: In theory, the preemption count could be updated in irqentry_enter(). In
    176: practice, deferring this update to irq_enter_rcu() allows the preemption-count
    180: preemption count has not yet been updated with the HARDIRQ_OFFSET state.
    182: Note that irq_exit_rcu() must remove HARDIRQ_OFFSET from the preemption count
    185: also requires that HARDIRQ_OFFSET has been removed from the preemption count.
    223: Note that the update of the preemption counter has to be the first
    226: preemption count modification in the NMI entry/exit case must not be
|
local_ops.rst
    42: making sure that we modify it from within a preemption safe context. It is
    76: preemption already disabled. I suggest, however, to explicitly
    77: disable preemption anyway to make sure it will still work correctly on
    104: local atomic operations: it makes sure that preemption is disabled around write
    110: If you are already in a preemption-safe context, you can use
    161: * preemptible context (it disables preemption) :
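The "disable preemption around writes" pattern these matches describe can be sketched with a per-CPU local_t. The variable and function names are hypothetical; kernel-context sketch only:

```c
#include <linux/percpu.h>
#include <linux/preempt.h>
#include <asm/local.h>

static DEFINE_PER_CPU(local_t, hits) = LOCAL_INIT(0);	/* hypothetical */

static void record_hit(void)
{
	/*
	 * Disable preemption so the local_t we address really belongs
	 * to the CPU that performs the modification.
	 */
	preempt_disable();
	local_inc(this_cpu_ptr(&hits));
	preempt_enable();
}
```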
|
this_cpu_ops.rst
    20: necessary to disable preemption or interrupts to ensure that the
    44: The following this_cpu() operations with implied preemption protection
    46: preemption and interrupts::
    110: reserved for a specific processor. Without disabling preemption in the
    142: smp_processor_id() may be used, for example, where preemption has been
    144: critical section. When preemption is re-enabled this pointer is usually
    240: preemption. If a per cpu variable is not used in an interrupt context
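The contrast these matches draw, this_cpu ops with implied preemption protection versus the unprotected `__` variants, can be sketched as follows. The variable and function names are hypothetical; kernel-context sketch only:

```c
#include <linux/percpu.h>
#include <linux/preempt.h>

static DEFINE_PER_CPU(unsigned long, nr_events);	/* hypothetical */

static void count_one(void)
{
	/*
	 * Implied protection: no explicit preemption disabling is
	 * needed around the increment.
	 */
	this_cpu_inc(nr_events);
}

static void count_two(void)
{
	/*
	 * The __ variant carries no implied protection; the caller
	 * must ensure preemption is already disabled.
	 */
	preempt_disable();
	__this_cpu_add(nr_events, 2);
	preempt_enable();
}
```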
|
/linux/kernel/trace/rv/monitors/opid/
Kconfig
    13: interrupts and preemption disabled or during IRQs, where preemption
|
/linux/Documentation/RCU/
NMI-RCU.rst
    45: The do_nmi() function processes each NMI. It first disables preemption
    50: preemption is restored.
    95: CPUs complete any preemption-disabled segments of code that they were
    97: Since NMI handlers disable preemption, synchronize_rcu() is guaranteed
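The guarantee these matches describe, that synchronize_rcu() waits out all preemption-disabled regions including NMI handlers, can be sketched with a swappable NMI callback. The callback pointer and function names are hypothetical; kernel-context sketch only:

```c
#include <linux/rcupdate.h>

static void (*nmi_callback)(void);	/* hypothetical hook */

/* Invoked from the NMI handler, which runs with preemption disabled. */
static void handle_nmi(void)
{
	void (*fn)(void) = rcu_dereference_sched(nmi_callback);

	if (fn)
		fn();
}

static void set_nmi_callback(void (*fn)(void))
{
	rcu_assign_pointer(nmi_callback, fn);
	/*
	 * Wait until no CPU can still be running the old callback:
	 * synchronize_rcu() waits for all preemption-disabled regions
	 * to complete, which includes NMI handlers.
	 */
	synchronize_rcu();
}
```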
|
/linux/Documentation/virt/kvm/devices/
arm-vgic.rst
    99: maximum possible 128 preemption levels. The semantics of the register
    100: indicate if any interrupts in a given preemption level are in the active
    103: Thus, preemption level X has one or more active interrupts if and only if:
    107: Bits for undefined preemption levels are RAZ/WI.
|
/linux/arch/arc/kernel/
entry-compact.S
    152: ; if L2 IRQ interrupted a L1 ISR, disable preemption
    157: ; -preemption off IRQ, user task in syscall picked to run
    172: ; bump thread_info->preempt_count (Disable preemption)
    352: ; decrement thread_info->preempt_count (re-enable preemption)
|
/linux/kernel/trace/rv/monitors/scpd/
Kconfig
    11: Monitor to ensure schedule is called with preemption disabled.
|
/linux/Documentation/mm/
highmem.rst
    66: CPU while the mapping is active. Although preemption is never disabled by
    73: As said, pagefaults and preemption are never disabled. There is no need to
    74: disable preemption because, when context switches to a different task, the
    110: effects of atomic mappings, i.e. disabling page faults or preemption, or both.
    141: restrictions on preemption or migration. It comes with an overhead as mapping
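The "no pagefault or preemption disabling" property of the local mapping these matches refer to can be sketched with kmap_local_page(). The function name is hypothetical; kernel-context sketch only:

```c
#include <linux/highmem.h>
#include <linux/string.h>

static void copy_from_page(struct page *page, char *buf, size_t len)
{
	/*
	 * The mapping is CPU-local, but neither pagefaults nor
	 * preemption are disabled while it is active.
	 */
	char *vaddr = kmap_local_page(page);

	memcpy(buf, vaddr, len);
	kunmap_local(vaddr);
}
```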
|
/linux/kernel/trace/rv/monitors/nrp/
Kconfig
    10: Monitor to ensure preemption requires need resched.
|
/linux/Documentation/tools/rtla/
common_osnoise_description.txt
    3: time in a loop while with preemption, softirq and IRQs enabled, thus
|
/linux/Documentation/tools/rv/
rv-mon-wip.rst
    21: checks if the wakeup events always take place with preemption disabled.
|