/* SPDX-License-Identifier: GPL-2.0+ */
 * Read-Copy Update mechanism for mutual exclusion
 * For detailed explanation of Read-Copy Update mechanism see -
#define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
#define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))
#define RCU_SEQ_STATE_MASK	((1 << RCU_SEQ_CTR_SHIFT) - 1)
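/*
 * Hedged sketch (not from this header): these modular comparisons stay
 * correct when the unsigned long counter wraps.  The values below are
 * illustrative only.
 */
static void ulong_cmp_example(void)
{
	unsigned long before_wrap = ULONG_MAX - 1;	/* about to wrap */
	unsigned long after_wrap = 2;			/* just wrapped */

	/* (after_wrap - before_wrap) is 4 in modular arithmetic, so: */
	WARN_ON(!ULONG_CMP_GE(after_wrap, before_wrap));
	WARN_ON(!ULONG_CMP_LT(before_wrap, after_wrap));
}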
// not-yet-completed RCU grace periods.
 * same_state_synchronize_rcu - Are two old-state values identical?
 * @oldstate1: First old-state value.
 * @oldstate2: Second old-state value.
 * The two old-state values must have been obtained from either
 * are tracked by old-state values to push these values to a list header,
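/*
 * Hedged usage sketch (not from this header): two objects whose grace
 * periods are tracked by cookies from get_state_synchronize_rcu() can
 * share bookkeeping when their cookies are identical.  "struct obj" and
 * its field names are hypothetical.
 */
struct obj {
	unsigned long gp_cookie;	/* from get_state_synchronize_rcu() */
};

static bool objs_share_grace_period(struct obj *a, struct obj *b)
{
	/* Identical cookies: both objects wait on the same grace period. */
	return same_state_synchronize_rcu(a->gp_cookie, b->gp_cookie);
}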
 * Defined as a macro as it is a very low level header included from
 * nesting depth, but makes sense only if CONFIG_PREEMPT_RCU -- in other
#define rcu_preempt_depth() READ_ONCE(current->rcu_read_lock_nesting)
static inline int rcu_nocb_cpu_offload(int cpu) { return -EINVAL; }
 * Note a quasi-voluntary context switch for RCU-tasks' benefit.
 * This is a macro rather than an inline function to avoid #include hell.
		if (!(preempt) && READ_ONCE((t)->rcu_tasks_holdout)) \
			WRITE_ONCE((t)->rcu_tasks_holdout, false); \
// Bits for ->trc_reader_special.b.need_qs field.
#define TRC_NEED_QS		0x1	// Task needs a quiescent state.
	int ___rttq_nesting = READ_ONCE((t)->trc_reader_nesting); \
	if (unlikely(READ_ONCE((t)->trc_reader_special.b.need_qs) == TRC_NEED_QS) && \
	    !READ_ONCE((t)->trc_reader_special.b.blocked)) { \
 * rcu_trace_implies_rcu_gp - does an RCU Tasks Trace grace period imply an RCU grace period?
 * cond_resched_tasks_rcu_qs - Report potential quiescent states to RCU
 * report potential quiescent states to RCU-tasks even if the cond_resched()
 * rcu_softirq_qs_periodic - Report RCU and RCU-Tasks quiescent states
 * This helper is for long-running softirq handlers, such as NAPI threads in
 * provide both RCU and RCU-Tasks quiescent states. Note that this macro
 * Because regions of code that have disabled softirq act as RCU read-side
 * states. In contrast, calling cond_resched() alone won't achieve the same
 * effect because cond_resched() does not provide RCU-Tasks quiescent states.
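/*
 * Hedged usage sketch (not from this header): a long-running polling loop
 * reports quiescent states roughly every 100ms via rcu_softirq_qs_periodic().
 * more_work_to_do() and process_one_item() are hypothetical helpers.
 */
static void poll_loop(void)
{
	unsigned long last_qs = jiffies;

	while (more_work_to_do()) {
		process_one_item();
		/* Periodically report both RCU and RCU-Tasks quiescent states. */
		rcu_softirq_qs_periodic(last_qs);
	}
}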
# define rcu_lock_acquire(a)		do { } while (0)
# define rcu_try_lock_acquire(a)	do { } while (0)
# define rcu_lock_release(a)		do { } while (0)
 * RCU_LOCKDEP_WARN - emit lockdep splat if specified condition is met
 * and rechecks it after checking (c) to prevent false-positive splats
			 "Illegal context switch in RCU read-side critical section");
			 "Illegal context switch in RCU-bh read-side critical section"); \
			 "Illegal context switch in RCU-sched read-side critical section"); \
 * lockdep_assert_in_rcu_read_lock - WARN if not protected by rcu_read_lock()
 * lockdep_assert_in_rcu_read_lock_bh - WARN if not protected by rcu_read_lock_bh()
 * lockdep_assert_in_rcu_read_lock_sched - WARN if not protected by rcu_read_lock_sched()
 * lockdep_assert_in_rcu_reader - WARN if not within some type of RCU reader
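/*
 * Hedged usage sketch (not from this header): assert the caller's reader
 * state before walking an RCU-protected list.  "struct foo", its ->list
 * field, and process_foo() are hypothetical.
 */
static void walk_foo_list(struct list_head *head)
{
	struct foo *p;

	lockdep_assert_in_rcu_read_lock();	/* WARN if no rcu_read_lock() */
	list_for_each_entry_rcu(p, head, list)
		process_foo(p);
}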
 * unrcu_pointer - mark a pointer as not being RCU protected
 * Converts @p from an __rcu pointer to a __kernel pointer.
 * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
 * rcu_assign_pointer() - assign to RCU-protected pointer
 * Assigns the specified value to the specified RCU-protected
 * will be dereferenced by RCU read-side code.
 * of rcu_assign_pointer(). RCU_INIT_POINTER() is a bit faster due
 * rcu_assign_pointer() is a very bad thing that results in
 * impossible-to-diagnose memory corruption. So please be careful.
 * macros, this execute-arguments-only-once property is important, so
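/*
 * Hedged usage sketch (not from this header): initialize fully, then
 * publish with rcu_assign_pointer() so readers never see a partially
 * initialized object.  "struct foo" and "gp" are hypothetical.
 */
struct foo {
	int a;
};
static struct foo __rcu *gp;

static int publish_foo(int a)
{
	struct foo *p = kzalloc(sizeof(*p), GFP_KERNEL);

	if (!p)
		return -ENOMEM;
	p->a = a;			/* initialization... */
	rcu_assign_pointer(gp, p);	/* ...is ordered before publication */
	return 0;
}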
 * rcu_replace_pointer() - replace an RCU pointer, returning its old value
 * Perform a replacement, where @rcu_ptr is an RCU-annotated
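/*
 * Hedged usage sketch (not from this header): swap in a new version under
 * the update-side lock and free the old one after a grace period.  "gp",
 * "gp_lock", and the ->rh rcu_head field are hypothetical.
 */
static void swap_foo(struct foo *newp)
{
	struct foo *oldp;

	spin_lock(&gp_lock);
	oldp = rcu_replace_pointer(gp, newp, lockdep_is_held(&gp_lock));
	spin_unlock(&gp_lock);
	if (oldp)
		kfree_rcu(oldp, rh);	/* pre-existing readers may still run */
}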
 * rcu_access_pointer() - fetch RCU pointer with no dereferencing
 * Return the value of the specified RCU-protected pointer, but omit the
 * lockdep checks for being in an RCU read-side critical section. This is
 * not dereferenced, for example, when testing an RCU-protected pointer
 * where update-side locks prevent the value of the pointer from changing,
 * Within an RCU read-side critical section, there is little reason to
 * rcu_access_pointer() return value to a local variable results in an
 * It is also permissible to use rcu_access_pointer() when read-side
 * or after a synchronize_rcu() returns. This can be useful when tearing
 * down multi-linked structures after a grace period has elapsed. However,
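/*
 * Hedged usage sketch (not from this header): test for NULL without
 * dereferencing, so no RCU read-side critical section is needed.
 * "gp" is hypothetical.
 */
static bool foo_is_published(void)
{
	return rcu_access_pointer(gp) != NULL;
}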
 * rcu_dereference_check() - rcu_dereference with debug checking
 * An implicit check for being in an RCU read-side critical section
 *	bar = rcu_dereference_check(foo->bar, lockdep_is_held(&foo->lock));
 * could be used to indicate to lockdep that foo->bar may only be dereferenced
 * the bar struct at foo->bar is held.
 * Note that the list of conditions may also include indications of when a lock
 *	bar = rcu_dereference_check(foo->bar, lockdep_is_held(&foo->lock) ||
 *				    atomic_read(&foo->usage) == 0);
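/*
 * Hedged usage sketch (not from this header): an accessor that is legal
 * either inside rcu_read_lock() or with foo->lock held, matching the
 * condition passed to rcu_dereference_check().  Types are hypothetical.
 */
static struct bar *foo_get_bar(struct foo *foo)
{
	return rcu_dereference_check(foo->bar,
				     lockdep_is_held(&foo->lock));
}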
 * rcu_dereference_bh_check() - rcu_dereference_bh with debug checking
 * This is the RCU-bh counterpart to rcu_dereference_check(). However,
 * rcu_dereference_sched_check() - rcu_dereference_sched with debug checking
 * This is the RCU-sched counterpart to rcu_dereference_check().
 * The no-tracing version of rcu_dereference_raw() must not call
 * rcu_dereference_protected() - fetch RCU pointer when updates prevented
 * Return the value of the specified RCU-protected pointer, but omit
 * the READ_ONCE(). This is useful in cases where update-side locks
 * This function is only for update-side use. Using this function
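/*
 * Hedged usage sketch (not from this header): update-side read where
 * "gp" cannot change because "gp_lock" is held, so no READ_ONCE() is
 * needed.  Names are hypothetical.
 */
static void update_foo(int a)
{
	struct foo *p;

	spin_lock(&gp_lock);
	p = rcu_dereference_protected(gp, lockdep_is_held(&gp_lock));
	if (p)
		p->a = a;	/* updates must still mind concurrent readers */
	spin_unlock(&gp_lock);
}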
 * rcu_dereference() - fetch RCU-protected pointer for dereferencing
 * This is a simple wrapper around rcu_dereference_check().
 * rcu_dereference_bh() - fetch an RCU-bh-protected pointer for dereferencing
 * rcu_dereference_sched() - fetch RCU-sched-protected pointer for dereferencing
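/*
 * Hedged usage sketch (not from this header): the canonical reader.
 * "struct foo" and "gp" are hypothetical.
 */
static int read_foo_a(void)
{
	struct foo *p;
	int a = -1;

	rcu_read_lock();
	p = rcu_dereference(gp);
	if (p)
		a = p->a;	/* p is guaranteed valid only up to unlock */
	rcu_read_unlock();
	return a;		/* p must not be dereferenced past this point */
}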
 * rcu_pointer_handoff() - Hand off a pointer from RCU to other mechanism
 * This is simply an identity function, but it documents where a pointer
 *		if (!atomic_inc_not_zero(p->refcnt))
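/*
 * Hedged usage sketch (not from this header): convert RCU protection into
 * a reference count, documenting the transition with rcu_pointer_handoff().
 * "struct foo", its atomic_t ->refcnt field, and "gp" are hypothetical.
 */
static struct foo *get_foo_ref(void)
{
	struct foo *p;

	rcu_read_lock();
	p = rcu_dereference(gp);
	if (p && !atomic_inc_not_zero(&p->refcnt))
		p = NULL;			/* object already being freed */
	else if (p)
		p = rcu_pointer_handoff(p);	/* now protected by the refcount */
	rcu_read_unlock();
	return p;	/* caller must drop the reference when done */
}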
 * rcu_read_lock() - mark the beginning of an RCU read-side critical section
 * are within RCU read-side critical sections, then the
 * on one CPU while other CPUs are within RCU read-side critical
 * with new RCU read-side critical sections. One way that this can happen
 * read-side critical section, (2) CPU 1 invokes call_rcu() to register
 * an RCU callback, (3) CPU 0 exits the RCU read-side critical section,
 * (4) CPU 2 enters an RCU read-side critical section, (5) the RCU
 * callback is invoked. This is legal, because the RCU read-side critical
 * RCU read-side critical sections may be nested. Any deferred actions
 * will be deferred until the outermost RCU read-side critical section
 * read-side critical section that would block in a !PREEMPTION kernel.
 * In non-preemptible RCU implementations (pure TREE_RCU and TINY_RCU),
 * it is illegal to block while in an RCU read-side critical section.
 * kernel builds, RCU read-side critical sections may be preempted,
 * implementations in real-time (with -rt patchset) kernel builds, RCU
 * read-side critical sections may be preempted and they may also block, but
 * way for writers to lock out RCU readers. This is a feature, not
 * a bug -- this property is what provides RCU's performance benefits.
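/*
 * Hedged usage sketch (not from this header): an updater that never waits
 * on readers to start or finish their critical sections; it only waits,
 * via synchronize_rcu(), for pre-existing readers to complete.  "gp" and
 * "gp_lock" are hypothetical.
 */
static void update_side(struct foo *newp)
{
	struct foo *oldp;

	spin_lock(&gp_lock);
	oldp = rcu_replace_pointer(gp, newp, lockdep_is_held(&gp_lock));
	spin_unlock(&gp_lock);
	synchronize_rcu();	/* waits only for pre-existing readers */
	kfree(oldp);
}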
 * rcu_read_unlock() - marks the end of an RCU read-side critical section.
 * and priority-inheritance spinlocks, courtesy of the quiescent-state
 * rcu_read_lock_bh() - mark the beginning of an RCU-bh critical section
 * read-side critical section. However, please note that this equivalence
 * rcu_read_unlock_bh() - marks the end of a softirq-only RCU critical section
 * rcu_read_lock_sched() - mark the beginning of an RCU-sched critical section
 * Read-side critical sections can also be introduced by anything else that
 * rcu_read_unlock_sched() - marks the end of an RCU-classic critical section
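/*
 * Hedged usage sketch (not from this header): the bh and sched reader
 * flavors with their matching dereference primitives.  "gp" and the
 * do_something_*() helpers are hypothetical.
 */
static void flavored_readers(void)
{
	struct foo *p;

	/* RCU-bh reader: also disables softirq processing. */
	rcu_read_lock_bh();
	p = rcu_dereference_bh(gp);
	if (p)
		do_something_bh(p);
	rcu_read_unlock_bh();

	/* RCU-sched reader: also disables preemption. */
	rcu_read_lock_sched();
	p = rcu_dereference_sched(gp);
	if (p)
		do_something_sched(p);
	rcu_read_unlock_sched();
}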
 * RCU_INIT_POINTER() - initialize an RCU protected pointer
 * Initialize an RCU-protected pointer in special cases where readers
 * a.	You have not made *any* reader-visible changes to
 * result in impossible-to-diagnose memory corruption. As in the structures
 * see pre-initialized values of the referenced data structure. So
 * If you are creating an RCU-protected linked structure that is accessed
 * by a single external-to-structure RCU-protected pointer, then you may
 * use RCU_INIT_POINTER() to initialize the internal RCU-protected
 * external-to-structure pointer *after* you have completely initialized
 * the reader-accessible portions of the linked structure.
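/*
 * Hedged usage sketch (not from this header): internal pointers of a
 * not-yet-visible structure may use RCU_INIT_POINTER(); only the final
 * publication needs rcu_assign_pointer().  All names are hypothetical.
 */
struct inner { int x; };
struct outer {
	struct inner __rcu *inner;
};
static struct outer __rcu *global_outer;

static int publish_outer(struct inner *in)
{
	struct outer *o = kzalloc(sizeof(*o), GFP_KERNEL);

	if (!o)
		return -ENOMEM;
	RCU_INIT_POINTER(o->inner, in);		/* not reader-visible yet */
	rcu_assign_pointer(global_outer, o);	/* publication point */
	return 0;
}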
 * RCU_POINTER_INITIALIZER() - statically initialize an RCU protected pointer
 * GCC-style initialization for an RCU-protected pointer in a structure field.
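/*
 * Hedged usage sketch (not from this header): the macro supplies the
 * ".field =" designator itself.  Names continue the hypothetical
 * struct outer/inner example above.
 */
static struct inner default_inner = { .x = 1 };

static struct outer default_outer = {
	RCU_POINTER_INITIALIZER(inner, &default_inner),
};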
 * kfree_rcu() - kfree an object after a grace period.
 * @ptr: pointer to kfree for double-argument invocations.
 * when they are used in a kernel module, that module must invoke the
 * high-latency rcu_barrier() function at module-unload time.
 * The kfree_rcu() function handles this issue. In order to have a universal
 * to determine the starting address of the freed object, which can be a large
 * If the offset is larger than 4095 bytes, a compile-time error will
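/*
 * Hedged usage sketch (not from this header): the double-argument form
 * names the rcu_head field, which must lie within 4095 bytes of the
 * start of the enclosing structure.  "struct foo" is hypothetical.
 */
struct foo {
	int a;
	struct rcu_head rh;
};

static void free_foo(struct foo *p)
{
	kfree_rcu(p, rh);	/* freed only after a grace period elapses */
}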
 * kfree_rcu_mightsleep() - kfree an object after a grace period.
 * @ptr: pointer to kfree for single-argument invocations.
 * With the head-less variant, only one argument
 * is passed, namely a pointer that is to be
 * freed after a grace period. The semantics are therefore
 * Please note, the head-less way of freeing may only be
 * used from a context that is allowed to sleep per might_sleep()
	kvfree_call_rcu(&((___p)->rhf), (void *) (___p)); \
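/*
 * Hedged usage sketch (not from this header): the single-argument form
 * needs no rcu_head in the object, but the caller must be able to sleep,
 * since the call may block on memory allocation or a grace period.
 */
static void free_foo_headless(struct foo *p)
{
	might_sleep();
	kfree_rcu_mightsleep(p);
}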
 * Place this after a lock-acquisition primitive to guarantee that
 * an UNLOCK+LOCK pair acts as a full barrier. This guarantee applies
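/*
 * Hedged sketch (not from this header), assuming this fragment documents
 * smp_mb__after_unlock_lock(): placed after the LOCK, it upgrades a prior
 * UNLOCK plus this LOCK into a full memory barrier.
 */
static void cross_lock_ordering(spinlock_t *a, spinlock_t *b)
{
	spin_unlock(a);
	spin_lock(b);
	smp_mb__after_unlock_lock();	/* UNLOCK+LOCK now a full barrier */
}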
 * rcu_head_init - Initialize rcu_head for rcu_head_after_call_rcu()
 * If you intend to invoke rcu_head_after_call_rcu() to test whether a
	rhp->func = (rcu_callback_t)~0L;
 * rcu_head_after_call_rcu() - Has this rcu_head been passed to call_rcu()?
 * and @false otherwise. Emits a warning in any other case, including
 * the case where @rhp has already been invoked after a grace period.
 * in an RCU read-side critical section that includes a read-side fetch
	rcu_callback_t func = READ_ONCE(rhp->func);
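/*
 * Hedged usage sketch (not from this header): queue an object for call_rcu()
 * at most once.  "struct foo", its ->rh field, and foo_rcu_cb() are
 * hypothetical; rcu_head_init() must run before the first query.
 */
static void foo_rcu_cb(struct rcu_head *rhp);

static void foo_setup(struct foo *p)
{
	rcu_head_init(&p->rh);		/* enable rcu_head_after_call_rcu() */
}

static void foo_maybe_queue(struct foo *p)
{
	if (!rcu_head_after_call_rcu(&p->rh, foo_rcu_cb))
		call_rcu(&p->rh, foo_rcu_cb);
}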