Lines Matching +full:a +full:- +full:side

1 /* SPDX-License-Identifier: GPL-2.0+ */
3 * Read-Copy Update mechanism for mutual exclusion
15 * For detailed explanation of Read-Copy Update mechanism see -
35 #define ULONG_CMP_GE(a, b) (ULONG_MAX / 2 >= (a) - (b))
36 #define ULONG_CMP_LT(a, b) (ULONG_MAX / 2 < (a) - (b))
49 // not-yet-completed RCU grace periods.
53 * same_state_synchronize_rcu - Are two old-state values identical?
54 * @oldstate1: First old-state value.
55 * @oldstate2: Second old-state value.
57 * The two old-state values must have been obtained from either
61 * are tracked by old-state values to push these values to a list header,
75 * Defined as a macro as it is a very low level header included from
77 * nesting depth, but makes sense only if CONFIG_PREEMPT_RCU -- in other
80 #define rcu_preempt_depth() READ_ONCE(current->rcu_read_lock_nesting)
150 static inline int rcu_nocb_cpu_offload(int cpu) { return -EINVAL; }
156 * Note a quasi-voluntary context switch for RCU-tasks' benefit.
157 * This is a macro rather than an inline function to avoid #include hell.
164 if (!(preempt) && READ_ONCE((t)->rcu_tasks_holdout)) \
165 WRITE_ONCE((t)->rcu_tasks_holdout, false); \
176 // Bits for ->trc_reader_special.b.need_qs field.
177 #define TRC_NEED_QS 0x1 // Task needs a quiescent state.
185 int ___rttq_nesting = READ_ONCE((t)->trc_reader_nesting); \
187 if (likely(!READ_ONCE((t)->trc_reader_special.b.need_qs)) && \
191 !READ_ONCE((t)->trc_reader_special.b.blocked)) { \
226 * rcu_trace_implies_rcu_gp - does an RCU Tasks Trace grace period imply an RCU grace period?
238 * cond_resched_tasks_rcu_qs - Report potential quiescent states to RCU
241 * report potential quiescent states to RCU-tasks even if the cond_resched()
319 # define rcu_lock_acquire(a) do { } while (0)
320 # define rcu_try_lock_acquire(a) do { } while (0)
321 # define rcu_lock_release(a) do { } while (0)
353 * RCU_LOCKDEP_WARN - emit lockdep splat if specified condition is met
359 * and rechecks it after checking (c) to prevent false-positive splats
377 "Illegal context switch in RCU read-side critical section");
388 "Illegal context switch in RCU-bh read-side critical section"); \
390 "Illegal context switch in RCU-sched read-side critical section"); \
422 * unrcu_pointer - mark a pointer as not being RCU protected
425 * Converts @p from an __rcu pointer to a __kernel pointer.
459 * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
465 * rcu_assign_pointer() - assign to RCU-protected pointer
469 * Assigns the specified value to the specified RCU-protected
477 * will be dereferenced by RCU read-side code.
480 * of rcu_assign_pointer(). RCU_INIT_POINTER() is a bit faster due
483 * rcu_assign_pointer() is a very bad thing that results in
484 * impossible-to-diagnose memory corruption. So please be careful.
491 * macros, this execute-arguments-only-once property is important, so
507 * rcu_replace_pointer() - replace an RCU pointer, returning its old value
512 * Perform a replacement, where @rcu_ptr is an RCU-annotated
525 * rcu_access_pointer() - fetch RCU pointer with no dereferencing
528 * Return the value of the specified RCU-protected pointer, but omit the
529 * lockdep checks for being in an RCU read-side critical section. This is
531 * not dereferenced, for example, when testing an RCU-protected pointer
533 * where update-side locks prevent the value of the pointer from changing,
535 * Within an RCU read-side critical section, there is little reason to
541 * rcu_access_pointer() return value to a local variable results in an
544 * It is also permissible to use rcu_access_pointer() when read-side
547 * or after a synchronize_rcu() returns. This can be useful when tearing
548 * down multi-linked structures after a grace period has elapsed. However,
554 * rcu_dereference_check() - rcu_dereference with debug checking
562 * An implicit check for being in an RCU read-side critical section
567 * bar = rcu_dereference_check(foo->bar, lockdep_is_held(&foo->lock));
569 * could be used to indicate to lockdep that foo->bar may only be dereferenced
571 * the bar struct at foo->bar is held.
573 * Note that the list of conditions may also include indications of when a lock
577 * bar = rcu_dereference_check(foo->bar, lockdep_is_held(&foo->lock) ||
578 * atomic_read(&foo->usage) == 0);
591 * rcu_dereference_bh_check() - rcu_dereference_bh with debug checking
595 * This is the RCU-bh counterpart to rcu_dereference_check(). However,
607 * rcu_dereference_sched_check() - rcu_dereference_sched with debug checking
611 * This is the RCU-sched counterpart to rcu_dereference_check().
627 * The no-tracing version of rcu_dereference_raw() must not call
634 * rcu_dereference_protected() - fetch RCU pointer when updates prevented
638 * Return the value of the specified RCU-protected pointer, but omit
639 * the READ_ONCE(). This is useful in cases where update-side locks
645 * This function is only for update-side use. Using this function
654 * rcu_dereference() - fetch RCU-protected pointer for dereferencing
657 * This is a simple wrapper around rcu_dereference_check().
662 * rcu_dereference_bh() - fetch an RCU-bh-protected pointer for dereferencing
670 * rcu_dereference_sched() - fetch RCU-sched-protected pointer for dereferencing
678 * rcu_pointer_handoff() - Hand off a pointer from RCU to other mechanism
681 * This is simply an identity function, but it documents where a pointer
690 * if (!atomic_inc_not_zero(&p->refcnt))
700 * rcu_read_lock() - mark the beginning of an RCU read-side critical section
703 * are within RCU read-side critical sections, then the
706 * on one CPU while other CPUs are within RCU read-side critical
712 * code with interrupts or softirqs disabled. In pre-v5.0 kernels, which
717 * with new RCU read-side critical sections. One way that this can happen
719 * read-side critical section, (2) CPU 1 invokes call_rcu() to register
720 * an RCU callback, (3) CPU 0 exits the RCU read-side critical section,
721 * (4) CPU 2 enters an RCU read-side critical section, (5) the RCU
722 * callback is invoked. This is legal, because the RCU read-side critical
728 * RCU read-side critical sections may be nested. Any deferred actions
729 * will be deferred until the outermost RCU read-side critical section
734 * read-side critical section that would block in a !PREEMPTION kernel.
737 * In non-preemptible RCU implementations (pure TREE_RCU and TINY_RCU),
738 * it is illegal to block while in an RCU read-side critical section.
740 * kernel builds, RCU read-side critical sections may be preempted,
742 * implementations in real-time (with -rt patchset) kernel builds, RCU
743 * read-side critical sections may be preempted and they may also block, but
757 * way for writers to lock out RCU readers. This is a feature, not
758 * a bug -- this property is what provides RCU's performance benefits.
766 * rcu_read_unlock() - marks the end of an RCU read-side critical section.
771 * also extends to the scheduler's runqueue and priority-inheritance
772 * spinlocks, courtesy of the quiescent-state deferral that is carried
787 * rcu_read_lock_bh() - mark the beginning of an RCU-bh critical section
791 * read-side critical section. However, please note that this equivalence
810 * rcu_read_unlock_bh() - marks the end of a softirq-only RCU critical section
824 * rcu_read_lock_sched() - mark the beginning of an RCU-sched critical section
827 * Read-side critical sections can also be introduced by anything else that
855 * rcu_read_unlock_sched() - marks the end of an RCU-classic critical section
876 * RCU_INIT_POINTER() - initialize an RCU protected pointer
880 * Initialize an RCU-protected pointer in special cases where readers
890 * a. You have not made *any* reader-visible changes to
898 * result in impossible-to-diagnose memory corruption. As in the structures
900 * see pre-initialized values of the referenced data structure. So
903 * If you are creating an RCU-protected linked structure that is accessed
904 * by a single external-to-structure RCU-protected pointer, then you may
905 * use RCU_INIT_POINTER() to initialize the internal RCU-protected
907 * external-to-structure pointer *after* you have completely initialized
908 * the reader-accessible portions of the linked structure.
920 * RCU_POINTER_INITIALIZER() - statically initialize an RCU protected pointer
924 * GCC-style initialization for an RCU-protected pointer in a structure field.
936 * kfree_rcu() - kfree an object after a grace period.
937 * @ptr: pointer to kfree for double-argument invocations.
942 * when they are used in a kernel module, that module must invoke the
943 * high-latency rcu_barrier() function at module-unload time.
945 * The kfree_rcu() function handles this issue. Rather than encoding a
948 * Because the functions are not allowed in the low-order 4096 bytes of
950 * If the offset is larger than 4095 bytes, a compile-time error will
967 * kfree_rcu_mightsleep() - kfree an object after a grace period.
968 * @ptr: pointer to kfree for single-argument invocations.
970 * In the head-less variant, only one argument
971 * is passed, and that is just a pointer which has to be
972 * freed after a grace period. Therefore the semantic is
978 * Please note that the head-less way of freeing may be
979 * used only from a context that can honor a might_sleep()
992 kvfree_call_rcu(&((___p)->rhf), (void *) (___p)); \
1005 * Place this after a lock-acquisition primitive to guarantee that
1006 * an UNLOCK+LOCK pair acts as a full barrier. This guarantee applies
1020 * rcu_head_init - Initialize rcu_head for rcu_head_after_call_rcu()
1023 * If you intend to invoke rcu_head_after_call_rcu() to test whether a
1031 rhp->func = (rcu_callback_t)~0L;
1035 * rcu_head_after_call_rcu() - Has this rcu_head been passed to call_rcu()?
1040 * and @false otherwise. Emits a warning in any other case, including
1041 * the case where @rhp has already been invoked after a grace period.
1044 * in an RCU read-side critical section that includes a read-side fetch
1050 rcu_callback_t func = READ_ONCE(rhp->func);