Lines Matching full:rcu

15  *	Documentation/RCU
18 #define pr_fmt(fmt) "rcu: " fmt
66 #include "rcu.h"
120 * RCU can assume that there is but one task, allowing RCU to (for example)
122 * is RCU_SCHEDULER_INIT, RCU must actually do all the hard work required
124 * boot-time false positives from lockdep-RCU error checking. Finally, it
125 * transitions from RCU_SCHEDULER_INIT to RCU_SCHEDULER_RUNNING after RCU
134 * is capable of creating new tasks. So RCU processing (for example,
135 * creating tasks for RCU priority boosting) must be delayed until after
137 * currently delay invocation of any RCU callbacks until after this point.
139 * It might later prove better for people registering RCU callbacks during
175 * This rcu parameter is runtime-read-only. It reflects
183 /* Retrieve RCU kthreads priority for rcutorture */
213 * Return true if an RCU grace period is in progress. The READ_ONCE()s
244 * RCU is watching prior to the call to this function and is no longer
253 * CPUs seeing atomic_add_return() must see prior RCU read-side in rcu_dynticks_eqs_enter()
259 // RCU is no longer watching. Better be in extended quiescent state! in rcu_dynticks_eqs_enter()
269 * called from an extended quiescent state, that is, RCU is not watching
279 * and we also must force ordering with the next RCU read-side in rcu_dynticks_eqs_exit()
283 // RCU is now watching. Better not be in an extended quiescent state! in rcu_dynticks_eqs_exit()
337 * indicates that RCU is in an extended quiescent state.
402 * Let the RCU core know that this CPU has gone through the scheduler,
405 * memory barriers to let the RCU core know about it, regardless of what
408 * We inform the RCU core by emulating a zero-duration dyntick-idle period.
446 "RCU dynticks_nesting counter underflow!"); in rcu_is_cpu_rrupt_from_idle()
448 "RCU dynticks_nmi_nesting counter underflow/zero!"); in rcu_is_cpu_rrupt_from_idle()
460 /* Does CPU appear to be idle from an RCU standpoint? */ in rcu_is_cpu_rrupt_from_idle()
521 pr_info("RCU calculated value of scheduler-enlistment delay is %ld jiffies.\n", j); in adjust_jiffies_till_sched_qs()
567 * Return the number of RCU GPs completed thus far for debug & stats.
576 * Return the number of RCU expedited batches completed thus far for
613 * Enter an RCU extended quiescent state, which can be either the
629 // RCU will still be watching, so just do accounting and leave. in rcu_eqs_enter()
648 // RCU is watching here ... in rcu_eqs_enter()
655 * rcu_idle_enter - inform RCU that current CPU is entering idle
657 * Enter idle mode, in other words, -leave- the mode in which RCU
658 * read-side critical sections can occur. (Though RCU read-side
674 * rcu_user_enter - inform RCU that we are resuming userspace.
676 * Enter RCU idle mode right before resuming userspace. No use of RCU
678 * CPU doesn't need to maintain the tick for RCU maintenance purposes
692 * rcu_nmi_exit - inform RCU of exit from NMI context
695 * RCU-idle period, update rdp->dynticks and rdp->dynticks_nmi_nesting
696 * to let the RCU grace-period handling know that the CPU is back to
697 * being RCU-idle.
709 * (We are exiting an NMI handler, so RCU better be paying attention in rcu_nmi_exit()
716 * If the nesting level is not 1, the CPU wasn't RCU-idle, so in rcu_nmi_exit()
717 * leave it in non-RCU-idle state. in rcu_nmi_exit()
728 /* This NMI interrupted an RCU-idle CPU, restore RCU-idleness. */ in rcu_nmi_exit()
739 // RCU is watching here ... in rcu_nmi_exit()
748 * rcu_irq_exit - inform RCU that current CPU is exiting irq towards idle
756 * architecture's idle loop violates this assumption, RCU will give you what
773 * rcu_irq_exit_preempt - Inform RCU that current CPU is exiting irq
777 * from RCU point of view. Invoked from return from interrupt before kernel
786 "RCU dynticks_nesting counter underflow/zero!"); in rcu_irq_exit_preempt()
789 "Bad RCU dynticks_nmi_nesting counter\n"); in rcu_irq_exit_preempt()
791 "RCU in extended quiescent state!"); in rcu_irq_exit_preempt()
803 "RCU dynticks_nesting counter underflow/zero!"); in rcu_irq_exit_check_preempt()
806 "Bad RCU dynticks_nmi_nesting counter\n"); in rcu_irq_exit_check_preempt()
808 "RCU in extended quiescent state!"); in rcu_irq_exit_check_preempt()
828 * Exit an RCU extended quiescent state, which can be either the
845 // RCU was already watching, so just do accounting and leave. in rcu_eqs_exit()
850 // RCU is not watching here ... in rcu_eqs_exit()
868 * rcu_idle_exit - inform RCU that current CPU is leaving idle
870 * Exit idle mode, in other words, -enter- the mode in which RCU
888 * rcu_user_exit - inform RCU that we are exiting userspace.
890 * Exit RCU idle mode while entering the kernel because it can
891 * run an RCU read-side critical section anytime.
902 * __rcu_irq_enter_check_tick - Enable scheduler tick on CPU if RCU needs it.
906 * execution is an RCU quiescent state and the time executing in the kernel
909 * in the kernel, which can cause a number of problems, including RCU CPU
913 * in a timely manner, the RCU grace-period kthread sets that CPU's
916 * tick, which will enable RCU to detect that CPU's quiescent states,
922 * interrupt or exception. In that case, the RCU grace-period kthread
924 * controlled environments, this function allows RCU to get what it
941 // RCU doesn't need nohz_full help from this CPU, or it is in __rcu_irq_enter_check_tick()
947 // from interrupts (as opposed to NMIs). Therefore, (1) RCU is in __rcu_irq_enter_check_tick()
954 // A nohz_full CPU is in the kernel and RCU needs a in __rcu_irq_enter_check_tick()
964 * rcu_nmi_enter - inform RCU of entry to NMI context
966 * If the CPU was idle from RCU's viewpoint, update rdp->dynticks and
967 * rdp->dynticks_nmi_nesting to let the RCU grace-period handling know
984 * If idle from RCU viewpoint, atomically increment ->dynticks in rcu_nmi_enter()
988 * to be in the outermost NMI handler that interrupted an RCU-idle in rcu_nmi_enter()
996 // RCU is not watching here ... in rcu_nmi_enter()
1031 * rcu_irq_enter - inform RCU that current CPU is entering irq away from idle
1042 * irq_exit() functions), RCU will give you what you deserve, good and hard.
1090 * rcu_is_watching - see if RCU thinks that the current CPU is not idle
1092 * Return true if RCU is watching the running CPU, which means that this
1093 * CPU can safely enter RCU read-side critical sections. In other words,
1129 * Is the current CPU online as far as RCU is concerned?
1138 * RCU on an offline processor during initial boot, hence the check for
1211 * state. Either way, that CPU cannot possibly be in an RCU in rcu_implicit_dynticks_qs()
1213 * of the current RCU grace period. in rcu_implicit_dynticks_qs()
1222 * Complain if a CPU that is considered to be offline from RCU's in rcu_implicit_dynticks_qs()
1227 * last task on a leaf rcu_node structure exiting its RCU read-side in rcu_implicit_dynticks_qs()
1237 * of RCU's Requirements documentation. in rcu_implicit_dynticks_qs()
1259 * delay RCU grace periods: (1) At age jiffies_to_sched_qs, in rcu_implicit_dynticks_qs()
1299 * If more than halfway to RCU CPU stall-warning time, invoke in rcu_implicit_dynticks_qs()
1470 * ->gp_seq number while RCU is idle, but with reference to a non-root
1473 * the RCU grace-period kthread.
1492 * information requires acquiring a global lock... RCU therefore in rcu_accelerate_cbs()
1545 * Returns true if the RCU grace-period kthread needs to be awakened.
1570 * that the RCU grace-period kthread be awakened.
1705 * Handler for on_each_cpu() to invoke the target CPU's RCU core
1753 * wait for subsequent online CPUs, and that RCU hooks in the CPU in rcu_gp_init()
1757 * of RCU's Requirements documentation. in rcu_gp_init()
2004 * RCU grace-period initialization races by forcing the end of in rcu_gp_cleanup()
2202 * RCU grace period. The caller must hold the corresponding rnp->lock with
2318 * Tell RCU we are done (but rcu_report_qs_rdp() will be the in rcu_check_quiescent_state()
2345 * and all tasks that were preempted within an RCU read-side critical
2346 * section while running on one of those CPUs have since exited their RCU
2413 * Invoke any RCU callbacks that have made it to the end of their grace
2539 /* Re-invoke RCU core processing if there are callbacks remaining. */ in rcu_do_batch()
2548 * state, for example, user mode or idle loop. It also schedules RCU
2576 * blocking the current grace period, initiate RCU priority boosting.
2664 // Workqueue handler for an RCU reader for kernels enforcing struct RCU
2672 /* Perform RCU core processing work for the current CPU. */
2683 trace_rcu_utilization(TPS("Start RCU core")); in rcu_core()
2694 /* Update RCU state based on any recent quiescent states. */ in rcu_core()
2715 trace_rcu_utilization(TPS("End RCU core")); in rcu_core()
2717 // If strict GPs, schedule an RCU reader in a clean environment. in rcu_core()
2751 * Wake up this CPU's rcuc kthread to do RCU core processing.
2774 * Per-CPU kernel thread that invokes RCU callbacks. This replaces
2775 * the RCU softirq used in configurations of RCU that do not support RCU
2818 * Spawn per-CPU RCU core processing kthreads.
2835 * Handle any core-RCU processing required by a call_rcu() invocation.
2841 * If called from an extended quiescent state, invoke the RCU in __call_rcu_core()
2842 * core in order to force a re-evaluation of RCU's idleness. in __call_rcu_core()
2847 /* If interrupts were disabled or CPU offline, don't invoke RCU core. */ in __call_rcu_core()
2880 * RCU callback function to leak a callback.
2889 * number of queued RCU callbacks. The caller must hold the leaf rcu_node
2906 * number of queued RCU callbacks. No locks need be held, but the
2942 * Use rcu:rcu_callback trace event to find the previous in __call_rcu()
2980 /* Go handle any RCU core processing required. */ in __call_rcu()
2991 * call_rcu() - Queue an RCU callback for invocation after a grace period.
2992 * @head: structure to be used for queueing the RCU updates.
2996 * period elapses, in other words after all pre-existing RCU read-side
2998 * might well execute concurrently with RCU read-side critical sections
2999 * that started after call_rcu() was invoked. RCU read-side critical
3002 * preemption, or softirqs have been disabled also serve as RCU read-side
3007 * all pre-existing RCU read-side critical sections. On systems with more
3010 * last RCU read-side critical section whose beginning preceded the call
3011 * to call_rcu(). It also means that each CPU executing an RCU read-side
3014 * of that RCU read-side critical section. Note that these guarantees
3019 * resulting RCU callback function "func()", then both CPU A and CPU B are
3073 * struct kfree_rcu_cpu - batch up kfree_rcu() requests for RCU grace period
3091 * the RCU files. Such extraction could allow further optimization of
3255 * Schedule the kfree batch RCU work to run in workqueue context after a GP.
3274 * a previous RCU batch is in progress, it means that in queue_kfree_rcu_work()
3327 // Previous RCU batch still in progress, try again later. in kfree_rcu_drain_unlock()
3576 * overhead: RCU still operates correctly.
3584 might_sleep(); /* Check for RCU read-side critical section. */ in rcu_blocking_is_gp()
3595 * period has elapsed, in other words after all currently executing RCU
3598 * concurrently with new RCU read-side critical sections that began while
3599 * synchronize_rcu() was waiting. RCU read-side critical sections are
3602 * softirqs have been disabled also serve as RCU read-side critical
3609 * the end of its last RCU read-side critical section whose beginning
3611 * an RCU read-side critical section that extends beyond the return from
3614 * that RCU read-side critical section. Note that these guarantees include
3629 "Illegal synchronize_rcu() in RCU read-side critical section"); in synchronize_rcu()
3640 * get_state_synchronize_rcu - Snapshot current RCU state
3649 * Any prior manipulation of RCU-protected data must happen in get_state_synchronize_rcu()
3658 * cond_synchronize_rcu - Conditionally wait for an RCU grace period
3662 * If a full RCU grace period has elapsed since the earlier call to
3681 * Check to see if there is any immediate RCU-related work to be done by
3700 /* Is this a nohz_full CPU in userspace or idle? (Ignore RCU if so.) */ in rcu_pending()
3704 /* Is the RCU core waiting for a quiescent state from this CPU? */ in rcu_pending()
3713 /* Has RCU gone idle with this CPU needing another grace period? */ in rcu_pending()
3720 /* Have RCU grace period completed or started? */ in rcu_pending()
3740 * RCU callback function for rcu_barrier(). If we are last, wake
3787 * Note that this primitive does not necessarily wait for an RCU grace period
3788 * to complete. For example, if there are no RCU callbacks queued anywhere
3790 * immediately, without waiting for anything, much less an RCU grace period.
3906 * Do boot-time initialization of a CPU's per-CPU RCU data.
3930 * Initializes a CPU's per-CPU RCU data. Note that only one online or
3933 * CPU cannot possibly have any non-offloaded RCU callbacks in flight yet.
3977 * Update RCU priority boot kthread affinity for CPU-hotplug changes.
4037 * incoming CPUs are not allowed to use RCU read-side critical sections
4070 if (rnp->qsmask & mask) { /* RCU waiting on incoming CPU? */ in rcu_cpu_starting()
4077 smp_mb(); /* Ensure RCU read-side usage follows above initialization. */ in rcu_cpu_starting()
4081 * The outgoing function has no further need of RCU, so remove it from
4107 if (rnp->qsmask & mask) { /* RCU waiting on outgoing CPU? */ in rcu_report_dead()
4170 * On non-huge systems, use expedited RCU grace periods to make suspend
4192 * Spawn the kthreads that handle RCU's grace periods.
4247 * runtime RCU functionality.