Lines Matching full:rcu
50 // not-yet-completed RCU grace periods.
158 * Note a quasi-voluntary context switch for RCU-tasks's benefit.
228 * rcu_trace_implies_rcu_gp - does an RCU Tasks Trace grace period imply an RCU grace period?
230 * As an accident of implementation, an RCU Tasks Trace grace period also
231 * acts as an RCU grace period. However, this could change at any time.
240 * cond_resched_tasks_rcu_qs - Report potential quiescent states to RCU
243 * report potential quiescent states to RCU-tasks even if the cond_resched()
253 * rcu_softirq_qs_periodic - Report RCU and RCU-Tasks quiescent states
260 * provide both RCU and RCU-Tasks quiescent states. Note that this macro
263 * Because regions of code that have disabled softirq act as RCU read-side
270 * effect because cond_resched() does not provide RCU-Tasks quiescent states.
293 #error "Unknown RCU implementation specified to kernel configuration"
394 * ("rcu: Reject RCU_LOCKDEP_WARN() false positives") for more detail.
410 "Illegal context switch in RCU read-side critical section"); in rcu_preempt_sleep_check()
421 "Illegal context switch in RCU-bh read-side critical section"); \
423 "Illegal context switch in RCU-sched read-side critical section"); \
464 * lockdep_assert_in_rcu_reader - WARN if not within some type of RCU reader
466 * Splats if lockdep is enabled and there is no RCU reader of any
469 * as RCU readers.
497 * multiple pointer markings to match different RCU implementations
515 * unrcu_pointer - mark a pointer as not being RCU protected
521 #define unrcu_pointer(p) __unrcu_pointer(p, __UNIQUE_ID(rcu))
549 #define rcu_dereference_raw(p) __rcu_dereference_raw(p, __UNIQUE_ID(rcu))
552 * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
558 * rcu_assign_pointer() - assign to RCU-protected pointer
562 * Assigns the specified value to the specified RCU-protected
563 * pointer, ensuring that any concurrent RCU readers will see
570 * will be dereferenced by RCU read-side code.
600 * rcu_replace_pointer() - replace an RCU pointer, returning its old value
601 * @rcu_ptr: RCU pointer, whose old value is returned
605 * Perform a replacement, where @rcu_ptr is an RCU-annotated
618 * rcu_access_pointer() - fetch RCU pointer with no dereferencing
621 * Return the value of the specified RCU-protected pointer, but omit the
622 * lockdep checks for being in an RCU read-side critical section. This is
624 * not dereferenced, for example, when testing an RCU-protected pointer
628 * Within an RCU read-side critical section, there is little reason to
639 * the case in the context of the RCU callback that is freeing up the data,
644 #define rcu_access_pointer(p) __rcu_access_pointer((p), __UNIQUE_ID(rcu), __rcu)
655 * An implicit check for being in an RCU read-side critical section
676 * which pointers are protected by RCU and checks that the pointer is
680 __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
688 * This is the RCU-bh counterpart to rcu_dereference_check(). However,
689 * please note that starting in v5.0 kernels, vanilla RCU grace periods
696 __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
704 * This is the RCU-sched counterpart to rcu_dereference_check().
705 * However, please note that starting in v5.0 kernels, vanilla RCU grace
712 __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
717 * The tracing infrastructure traces RCU (we want that), but unfortunately
718 * some of the RCU checks cause tracing to lock up the system.
724 __rcu_dereference_check((p), __UNIQUE_ID(rcu), 1, __rcu)
727 * rcu_dereference_protected() - fetch RCU pointer when updates prevented
731 * Return the value of the specified RCU-protected pointer, but omit
743 __rcu_dereference_protected((p), __UNIQUE_ID(rcu), (c), __rcu)
747 * rcu_dereference() - fetch RCU-protected pointer for dereferencing
755 * rcu_dereference_bh() - fetch an RCU-bh-protected pointer for dereferencing
763 * rcu_dereference_sched() - fetch RCU-sched-protected pointer for dereferencing
771 * rcu_pointer_handoff() - Hand off a pointer from RCU to other mechanism
775 * is handed off from RCU to some other synchronization mechanism, for
793 * rcu_read_lock() - mark the beginning of an RCU read-side critical section
796 * are within RCU read-side critical sections, then the
799 * on one CPU while other CPUs are within RCU read-side critical
800 * sections, invocation of the corresponding RCU callback is deferred
807 * Note, however, that RCU callbacks are permitted to run concurrently
808 * with new RCU read-side critical sections. One way that this can happen
809 * is via the following sequence of events: (1) CPU 0 enters an RCU
811 * an RCU callback, (3) CPU 0 exits the RCU read-side critical section,
812 * (4) CPU 2 enters an RCU read-side critical section, (5) the RCU
813 * callback is invoked. This is legal, because the RCU read-side critical
815 * therefore might be referencing something that the corresponding RCU
817 * RCU callback is invoked.
819 * RCU read-side critical sections may be nested. Any deferred actions
820 * will be deferred until the outermost RCU read-side critical section
824 * following this rule: don't put anything in an rcu_read_lock() RCU
828 * In non-preemptible RCU implementations (pure TREE_RCU and TINY_RCU),
829 * it is illegal to block while in an RCU read-side critical section.
830 * In preemptible RCU implementations (PREEMPT_RCU) in CONFIG_PREEMPTION
831 * kernel builds, RCU read-side critical sections may be preempted,
832 * but explicit blocking is illegal. Finally, in preemptible RCU
833 * implementations in real-time (with -rt patchset) kernel builds, RCU
840 __acquire(RCU); in rcu_read_lock()
848 * way for writers to lock out RCU readers. This is a feature, not
849 * a bug -- this property is what provides RCU's performance benefits.
852 * used as well. RCU does not care how the writers keep out of each
857 * rcu_read_unlock() - marks the end of an RCU read-side critical section.
872 __release(RCU); in rcu_read_unlock()
877 * rcu_read_lock_bh() - mark the beginning of an RCU-bh critical section
880 * Note that anything else that disables softirqs can also serve as an RCU
900 * rcu_read_unlock_bh() - marks the end of a softirq-only RCU critical section
914 * rcu_read_lock_sched() - mark the beginning of an RCU-sched critical section
945 * rcu_read_unlock_sched() - marks the end of an RCU-classic critical section
966 * RCU_INIT_POINTER() - initialize an RCU-protected pointer
970 * Initialize an RCU-protected pointer in special cases where readers
976 * RCU readers from concurrently accessing this pointer *or*
989 * will look OK in crash dumps, but any concurrent RCU readers might
993 * If you are creating an RCU-protected linked structure that is accessed
994 * by a single external-to-structure RCU-protected pointer, then you may
995 * use RCU_INIT_POINTER() to initialize the internal RCU-protected
1010 * RCU_POINTER_INITIALIZER() - statically initialize an RCU-protected pointer
1014 * GCC-style initialization for an RCU-protected pointer in a structure field.
1024 * Many RCU callback functions just call kfree() on the base structure.
1137 * in an RCU read-side critical section that includes a read-side fetch
1155 DEFINE_LOCK_GUARD_0(rcu,
1165 __release(RCU);