
Searched full:preemption (Results 1 – 25 of 527) sorted by relevance


/linux-6.15/arch/csky/
Kconfig
14 select ARCH_INLINE_READ_LOCK if !PREEMPTION
15 select ARCH_INLINE_READ_LOCK_BH if !PREEMPTION
16 select ARCH_INLINE_READ_LOCK_IRQ if !PREEMPTION
17 select ARCH_INLINE_READ_LOCK_IRQSAVE if !PREEMPTION
18 select ARCH_INLINE_READ_UNLOCK if !PREEMPTION
19 select ARCH_INLINE_READ_UNLOCK_BH if !PREEMPTION
20 select ARCH_INLINE_READ_UNLOCK_IRQ if !PREEMPTION
21 select ARCH_INLINE_READ_UNLOCK_IRQRESTORE if !PREEMPTION
22 select ARCH_INLINE_WRITE_LOCK if !PREEMPTION
23 select ARCH_INLINE_WRITE_LOCK_BH if !PREEMPTION
[all …]
/linux-6.15/Documentation/locking/
preempt-locking.rst
35 protect these situations by disabling preemption around them.
37 You can also use put_cpu() and get_cpu(), which will disable preemption.
44 Under preemption, the state of the CPU must be protected. This is arch-
47 section that must occur while preemption is disabled. Think what would happen
50 upon preemption, the FPU registers will be sold to the lowest bidder. Thus,
51 preemption must be disabled around such regions.
54 kernel_fpu_begin and kernel_fpu_end will disable and enable preemption.
72 Data protection under preemption is achieved by disabling preemption for the
84 n-times in a code path, and preemption will not be reenabled until the n-th
86 preemption is not enabled.
[all …]
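The preempt-locking.rst hits above describe the core API. Below is a minimal sketch of the two idioms they mention, get_cpu()/put_cpu() and a nested preempt_disable()/preempt_enable() pair; the per-CPU counter is hypothetical.

#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/smp.h>

static DEFINE_PER_CPU(int, my_counter);	/* hypothetical per-CPU data */

static void touch_per_cpu_data(void)
{
	int cpu;

	/* get_cpu() disables preemption and returns the current CPU id */
	cpu = get_cpu();
	per_cpu(my_counter, cpu)++;
	put_cpu();	/* re-enables preemption */

	/*
	 * Equivalent explicit form. The pair nests: if preempt_disable()
	 * ran n times on this path, preemption resumes only at the n-th
	 * preempt_enable(), as the document notes.
	 */
	preempt_disable();
	__this_cpu_inc(my_counter);
	preempt_enable();
}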
locktypes.rst
59 preemption and interrupt disabling primitives. Contrary to other locking
60 mechanisms, disabling preemption or interrupts are pure CPU local
76 Spinning locks implicitly disable preemption and the lock / unlock functions
103 PI has limitations on non-PREEMPT_RT kernels due to preemption and
106 PI clearly cannot preempt preemption-disabled or interrupt-disabled
162 by disabling preemption or interrupts.
164 On non-PREEMPT_RT kernels local_lock operations map to the preemption and
200 local_lock should be used in situations where disabling preemption or
204 local_lock is not suitable to protect against preemption or interrupts on a
217 preemption or interrupts is required, for example, to safely access
[all …]
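A minimal local_lock sketch matching the locktypes.rst lines above (struct and variable names are hypothetical): on non-PREEMPT_RT kernels local_lock() maps to preempt_disable(), while on PREEMPT_RT it takes a per-CPU sleeping lock instead.

#include <linux/local_lock.h>
#include <linux/percpu.h>

struct my_pcpu {			/* hypothetical per-CPU state */
	local_lock_t lock;
	unsigned long events;
};

static DEFINE_PER_CPU(struct my_pcpu, my_pcpu) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void record_event(void)
{
	local_lock(&my_pcpu.lock);	/* preempt_disable() on !PREEMPT_RT */
	this_cpu_inc(my_pcpu.events);
	local_unlock(&my_pcpu.lock);
}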
hwspinlock.rst
95 Upon a successful return from this function, preemption is disabled so
111 Upon a successful return from this function, preemption and the local
127 Upon a successful return from this function, preemption is disabled,
178 Upon a successful return from this function, preemption is disabled so
195 Upon a successful return from this function, preemption and the local
211 Upon a successful return from this function, preemption is disabled,
268 Upon a successful return from this function, preemption and local
280 Upon a successful return from this function, preemption is reenabled,
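A sketch of the calling convention those hwspinlock.rst lines document; the timeout is illustrative, and the lock itself would come from hwspin_lock_request_specific() or similar elsewhere.

#include <linux/hwspinlock.h>

static int write_shared_mem(struct hwspinlock *hwlock)
{
	int ret;

	/* on success, preemption stays disabled until hwspin_unlock() */
	ret = hwspin_lock_timeout(hwlock, 100 /* ms */);
	if (ret)
		return ret;

	/* ... access memory shared with a remote processor ... */

	hwspin_unlock(hwlock);	/* re-enables preemption */
	return 0;
}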
seqlock.rst
47 preemption, preemption must be explicitly disabled before entering the
72 /* Serialized context with disabled preemption */
107 For lock types which do not implicitly disable preemption, preemption
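A sketch of the rule in the seqlock.rst hits: a bare seqcount_t does not disable preemption for its writer, so the write side must do so explicitly (names hypothetical; writer serialization against other writers is assumed to exist elsewhere).

#include <linux/preempt.h>
#include <linux/seqlock.h>

static seqcount_t my_seq = SEQCNT_ZERO(my_seq);	/* hypothetical */
static u64 shared_val;

static void write_val(u64 v)
{
	preempt_disable();	/* serialized, preemption-disabled writer */
	write_seqcount_begin(&my_seq);
	shared_val = v;
	write_seqcount_end(&my_seq);
	preempt_enable();
}

static u64 read_val(void)
{
	unsigned int seq;
	u64 v;

	do {			/* lockless reader retries on conflict */
		seq = read_seqcount_begin(&my_seq);
		v = shared_val;
	} while (read_seqcount_retry(&my_seq, seq));

	return v;
}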
/linux-6.15/Documentation/gpu/
msm-preemption.rst
6 MSM Preemption
9 Preemption allows Adreno GPUs to switch to a higher priority ring when work is
12 When preemption is enabled 4 rings are initialized, corresponding to different
16 requesting preemption. When certain conditions are met, depending on the
28 Preemption levels
31 Preemption can only occur at certain boundaries. The exact conditions can be
32 configured by changing the preemption level; this allows a compromise between
33 latency (i.e. the time that passes between when the kernel requests preemption
40 Preemption only occurs at the submission level. This requires the least amount
43 preemption of any kind.
[all …]
/linux-6.15/kernel/
Kconfig.preempt
11 select PREEMPTION
18 prompt "Preemption Model"
22 bool "No Forced Preemption (Server)"
26 This is the traditional Linux preemption model, geared towards
37 bool "Voluntary Kernel Preemption (Desktop)"
43 "explicit preemption points" to the kernel code. These new
44 preemption points have been selected to reduce the maximum
66 otherwise not be about to reach a natural preemption point.
76 bool "Scheduler controlled preemption model"
81 This option provides a scheduler driven preemption model that
[all …]
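The entries above are alternatives under the "Preemption Model" choice. A compile-time sketch of telling them apart from C, ignoring CONFIG_PREEMPT_DYNAMIC, which selects the model at boot:

static const char *preempt_model_name(void)
{
#if defined(CONFIG_PREEMPT_RT)
	return "fully preemptible (real-time)";
#elif defined(CONFIG_PREEMPT)
	return "preemptible (low-latency desktop)";
#elif defined(CONFIG_PREEMPT_VOLUNTARY)
	return "voluntary kernel preemption (desktop)";
#else
	return "no forced preemption (server)";
#endif
}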
Kconfig.locks
104 # - DEBUG_SPINLOCK=n and PREEMPTION=n
142 depends on !PREEMPTION || ARCH_INLINE_SPIN_UNLOCK_IRQ
171 depends on !PREEMPTION || ARCH_INLINE_READ_UNLOCK
179 depends on !PREEMPTION || ARCH_INLINE_READ_UNLOCK_IRQ
208 depends on !PREEMPTION || ARCH_INLINE_WRITE_UNLOCK
216 depends on !PREEMPTION || ARCH_INLINE_WRITE_UNLOCK_IRQ
/linux-6.15/drivers/gpu/drm/msm/adreno/
a5xx_gpu.h
58 * In order to do lockless preemption we use a simple state machine to progress
61 * PREEMPT_NONE - no preemption in progress. Next state START.
62 PREEMPT_START - The trigger is evaluating if preemption is possible. Next
66 * PREEMPT_TRIGGERED: A preemption has been executed on the hardware. Next
68 * PREEMPT_FAULTED: A preemption timed out (never completed). This will trigger
70 * PREEMPT_PENDING: Preemption complete interrupt fired - the callback is
85 * CPU to store the state for preemption. The record itself is much larger
88 * There is a preemption record assigned per ringbuffer. When the CPU triggers a
89 * preemption, it fills out the record with the useful information (wptr, ring
91 * the preemption. When a ring is switched out, the CP will save the ringbuffer
[all …]
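The state machine described in a5xx_gpu.h advances through lock-free compare-and-swap transitions; the driver's real helper is try_preempt_state() in a5xx_preempt.c. A condensed sketch of the pattern, using only the states named above:

#include <linux/atomic.h>
#include <linux/types.h>

enum preempt_state {
	PREEMPT_NONE = 0,
	PREEMPT_START,
	PREEMPT_TRIGGERED,
	PREEMPT_PENDING,
	PREEMPT_FAULTED,
};

static atomic_t preempt_state = ATOMIC_INIT(PREEMPT_NONE);

/* succeeds only if the state is still 'old'; losers simply back off */
static bool try_preempt_state(enum preempt_state old, enum preempt_state new)
{
	return atomic_cmpxchg(&preempt_state, old, new) == old;
}

static bool start_preemption(void)
{
	/* only one context can win the NONE -> START transition */
	return try_preempt_state(PREEMPT_NONE, PREEMPT_START);
}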
a5xx_preempt.c
9 * Try to transition the preemption state from old to new. Return
22 * Force the preemption state to the specified state. This is used in cases
30 * preemption or in the interrupt handler so barriers are needed in set_preempt_state()
89 DRM_DEV_ERROR(dev->dev, "%s: preemption timed out\n", gpu->name); in a5xx_preempt_timer()
93 /* Try to trigger a preemption switch */
105 * Serialize preemption start to ensure that we always make in a5xx_preempt_trigger()
112 * Try to start preemption by moving from NONE to START. If in a5xx_preempt_trigger()
113 * unsuccessful, a preemption is already in flight in a5xx_preempt_trigger()
127 * It's possible that while a preemption request is in progress in a5xx_preempt_trigger()
151 /* Set the address of the incoming preemption record */ in a5xx_preempt_trigger()
[all …]
a6xx_preempt.c
13 * Try to transition the preemption state from old to new. Return
26 * Force the preemption state to the specified state. This is used in cases
34 * preemption or in the interrupt handler so barriers are needed in set_preempt_state()
97 dev_err(dev->dev, "%s: preemption timed out\n", gpu->name); in a6xx_preempt_timer()
148 /* Delete the preemption watchdog timer */ in a6xx_preempt_irq()
161 "!!!!!!!!!!!!!!!! preemption faulted !!!!!!!!!!!!!! irq\n"); in a6xx_preempt_irq()
163 dev_err(dev->dev, "%s: Preemption failed to complete\n", in a6xx_preempt_irq()
181 * Retrigger preemption to avoid a deadlock that might occur when preemption in a6xx_preempt_irq()
193 /* No preemption if we only have one ring */ in a6xx_preempt_hw_init()
211 /* Enable the GMEM save/restore feature for preemption */ in a6xx_preempt_hw_init()
[all …]
a6xx_gpu.h
38 * @pwrup_reglist: pwrup reglist for preemption
104 * In order to do lockless preemption we use a simple state machine to progress
107 * PREEMPT_NONE - no preemption in progress. Next state START.
108 * PREEMPT_START - The trigger is evaluating if preemption is possible. Next
112 * PREEMPT_TRIGGERED: A preemption has been executed on the hardware. Next
114 * PREEMPT_FAULTED: A preemption timed out (never completed). This will trigger
116 * PREEMPT_PENDING: Preemption complete interrupt fired - the callback is
131 * CPU to store the state for preemption. The record itself is much larger
134 * There is a preemption record assigned per ringbuffer. When the CPU triggers a
135 * preemption, it fills out the record with the useful information (wptr, ring
[all …]
/linux-6.15/arch/loongarch/
Kconfig
35 select ARCH_INLINE_READ_LOCK if !PREEMPTION
36 select ARCH_INLINE_READ_LOCK_BH if !PREEMPTION
37 select ARCH_INLINE_READ_LOCK_IRQ if !PREEMPTION
38 select ARCH_INLINE_READ_LOCK_IRQSAVE if !PREEMPTION
39 select ARCH_INLINE_READ_UNLOCK if !PREEMPTION
40 select ARCH_INLINE_READ_UNLOCK_BH if !PREEMPTION
41 select ARCH_INLINE_READ_UNLOCK_IRQ if !PREEMPTION
42 select ARCH_INLINE_READ_UNLOCK_IRQRESTORE if !PREEMPTION
43 select ARCH_INLINE_WRITE_LOCK if !PREEMPTION
44 select ARCH_INLINE_WRITE_LOCK_BH if !PREEMPTION
[all …]
/linux-6.15/Documentation/core-api/
entry.rst
10 * Preemption counter
167 irq_enter_rcu() updates the preemption count which makes in_hardirq()
172 irq_exit_rcu() handles interrupt time accounting, undoes the preemption
175 In theory, the preemption count could be updated in irqentry_enter(). In
176 practice, deferring this update to irq_enter_rcu() allows the preemption-count
180 preemption count has not yet been updated with the HARDIRQ_OFFSET state.
182 Note that irq_exit_rcu() must remove HARDIRQ_OFFSET from the preemption count
185 also requires that HARDIRQ_OFFSET has been removed from the preemption count.
215 * Preemption counter
223 Note that the update of the preemption counter has to be the first
[all …]
local_ops.rst
42 making sure that we modify it from within a preemption safe context. It is
70 * Preemption (or interrupts) must be disabled when using local ops in
76 preemption already disabled. I suggest, however, explicitly
77 disabling preemption anyway to make sure it will still work correctly on
104 local atomic operations: it makes sure that preemption is disabled around write
110 If you are already in a preemption-safe context, you can use
161 * preemptible context (it disables preemption) :
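A minimal local_t sketch following the advice quoted above: disable preemption explicitly around the op even if the calling context may already be preemption-safe (counter name hypothetical).

#include <asm/local.h>
#include <linux/percpu.h>
#include <linux/preempt.h>

static DEFINE_PER_CPU(local_t, hits) = LOCAL_INIT(0);	/* hypothetical */

static void count_hit(void)
{
	preempt_disable();	/* stay on one CPU across the update */
	local_inc(this_cpu_ptr(&hits));
	preempt_enable();
}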
this_cpu_ops.rst
20 necessary to disable preemption or interrupts to ensure that the
44 The following this_cpu() operations with implied preemption protection
46 preemption and interrupts::
110 reserved for a specific processor. Without disabling preemption in the
142 smp_processor_id() may be used, for example, where preemption has been
144 critical section. When preemption is re-enabled this pointer is usually
240 preemption. If a per cpu variable is not used in an interrupt context
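By contrast with the local_t sketch above, the this_cpu operations these lines describe carry their own protection for a single read-modify-write, so no surrounding preempt_disable() is needed (variable name hypothetical).

#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, rx_packets);	/* hypothetical */

static void count_rx(void)
{
	/* one RMW, safe against preemption and interrupts on this CPU */
	this_cpu_inc(rx_packets);
}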
/linux-6.15/include/linux/
preempt.h
7 * preempt_count (used for kernel preemption, interrupt count, etc.)
15 * We put the hardirq and softirq counter into the preemption
18 * - bits 0-7 are the preemption count (max preemption depth: 256)
60 * Disable preemption until the scheduler is running -- use an unconditional
160 /* Locks on RT do not disable preemption */
169 * Which need to disable both preemption (CONFIG_PREEMPT_COUNT) and
281 * Even if we don't have any preemption, we need preempt disable/enable
301 * Modules have no business playing preemption tricks.
345 * preempt_notifier - key for installing preemption notifiers
440 * preempt_disable_nested - Disable preemption inside a normally preempt disabled section
[all …]
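A sketch of reading the packed word those preempt.h lines describe; bits 0-7 hold the preemption depth, and the hardirq/softirq counts live in higher bits of the same counter.

#include <linux/preempt.h>
#include <linux/printk.h>

static void dump_context(void)
{
	unsigned int pc = preempt_count();

	pr_info("preempt depth: %u\n", pc & PREEMPT_MASK);
	pr_info("hardirq: %d softirq: %d\n",
		!!(pc & HARDIRQ_MASK), !!(pc & SOFTIRQ_MASK));
}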
hwspinlock.h
162 * Upon a successful return from this function, preemption and local
183 * Upon a successful return from this function, preemption and local
238 * Upon a successful return from this function, preemption is disabled,
261 * Upon a successful return from this function, preemption and local interrupts
284 * Upon a successful return from this function, preemption and local interrupts
352 * Upon a successful return from this function, preemption is disabled
373 * This function will unlock a specific hwspinlock, enable preemption and
390 * This function will unlock a specific hwspinlock, enable preemption and
436 * This function will unlock a specific hwspinlock and enable preemption
/linux-6.15/tools/testing/selftests/kvm/x86/
vmx_preemption_timer_test.c
3 * VMX-preemption timer test
60 * Now wait for the preemption timer to fire and in l2_guest_code()
84 * Check for Preemption timer support in l1_guest_code()
127 * Ensure the exit from L2 is due to preemption timer expiry in l1_guest_code()
203 * From L1's perspective verify Preemption timer hasn't in main()
205 * From L2's perspective verify Preemption timer hasn't in main()
/linux-6.15/kernel/sched/
features.h
17 * Inhibit (wakeup) preemption until the current task has either matched the
29 * wakeup-preemption), since it's likely going to consume data we
37 * - NEXT_BUDDY (wakeup preemption)
62 * Allow wakeup-time preemption of the current task:
/linux-6.15/Documentation/RCU/
NMI-RCU.rst
45 The do_nmi() function processes each NMI. It first disables preemption
50 preemption is restored.
95 CPUs complete any preemption-disabled segments of code that they were
97 Since NMI handlers disable preemption, synchronize_rcu() is guaranteed
/linux-6.15/drivers/gpu/drm/i915/
intel_pcode.c
147 * @timeout_base_ms: timeout for polling with preemption enabled
152 * applying @reply_mask. Polling is first attempted with preemption enabled
154 * preemption disabled.
188 * the poll with preemption disabled to maximize the number of in skl_pcode_request()
195 "PCODE timeout, retrying with preemption disabled\n"); in skl_pcode_request()
/linux-6.15/drivers/gpu/drm/xe/
xe_exec_queue_types.h
115 /** @sched_props.preempt_timeout_us: preemption timeout in micro-seconds */
125 /** @lr.pfence: preemption fence */
127 /** @lr.context: preemption fence context */
129 /** @lr.seqno: preemption fence seqno */
176 /** @set_preempt_timeout: Set preemption timeout for exec queue */
/linux-6.15/fs/
stack.c
18 * preemption (see include/linux/fs.h): we need nothing extra for in fsstack_copy_inode_size()
26 * i_blocks in sync despite SMP or PREEMPTION - though stat's in fsstack_copy_inode_size()
48 * i_blocks in sync despite SMP or PREEMPTION: use i_lock for that case in fsstack_copy_inode_size()
/linux-6.15/arch/arm64/
Kconfig
65 select ARCH_INLINE_READ_LOCK if !PREEMPTION
66 select ARCH_INLINE_READ_LOCK_BH if !PREEMPTION
67 select ARCH_INLINE_READ_LOCK_IRQ if !PREEMPTION
68 select ARCH_INLINE_READ_LOCK_IRQSAVE if !PREEMPTION
69 select ARCH_INLINE_READ_UNLOCK if !PREEMPTION
70 select ARCH_INLINE_READ_UNLOCK_BH if !PREEMPTION
71 select ARCH_INLINE_READ_UNLOCK_IRQ if !PREEMPTION
72 select ARCH_INLINE_READ_UNLOCK_IRQRESTORE if !PREEMPTION
73 select ARCH_INLINE_WRITE_LOCK if !PREEMPTION
74 select ARCH_INLINE_WRITE_LOCK_BH if !PREEMPTION
[all …]
