Lines matching "rcu"

6  * This is used by RCU to remove its dependency on the timer tick while a CPU
16 * RCU extended quiescent state bits imported from kernel/rcu/tree.c
26 #include <trace/events/rcu.h>
41 /* Record the current task on exiting RCU-tasks (dyntick-idle entry). */
49 /* Record no current task on entering RCU-tasks (dyntick-idle exit). */
57 /* Turn on heavyweight RCU tasks trace readers on kernel exit. */
66 /* Turn off heavyweight RCU tasks trace readers on kernel entry. */
78 * RCU is watching prior to the call to this function and is no longer
84 * CPUs seeing atomic_add_return() must see prior RCU read-side in ct_kernel_exit_state()
89 // RCU is still watching. Better not be in extended quiescent state! in ct_kernel_exit_state()
92 // RCU is no longer watching. in ct_kernel_exit_state()
97 * called from an extended quiescent state, that is, RCU is not watching
106 * and we also must force ordering with the next RCU read-side in ct_kernel_enter_state()
110 // RCU is now watching. Better not be in an extended quiescent state! in ct_kernel_enter_state()
116 * Enter an RCU extended quiescent state, which can be either the
132 // RCU will still be watching, so just do accounting and leave. in ct_kernel_exit()
148 // RCU is watching here ... in ct_kernel_exit()
155 * Exit an RCU extended quiescent state, which can be either the
171 // RCU was already watching, so just do accounting and leave. in ct_kernel_enter()
176 // RCU is not watching here ... in ct_kernel_enter()
193 * ct_nmi_exit - inform RCU of exit from NMI context
196 * RCU-idle period, update ct->state and ct->nmi_nesting
197 * to let the RCU grace-period handling know that the CPU is back to
198 * being RCU-idle.
210 * (We are exiting an NMI handler, so RCU better be paying attention in ct_nmi_exit()
217 * If the nesting level is not 1, the CPU wasn't RCU-idle, so in ct_nmi_exit()
218 * leave it in non-RCU-idle state. in ct_nmi_exit()
229 /* This NMI interrupted an RCU-idle CPU, restore RCU-idleness. */ in ct_nmi_exit()
237 // RCU is watching here ... in ct_nmi_exit()
246 * ct_nmi_enter - inform RCU of entry to NMI context
248 * If the CPU was idle from RCU's viewpoint, update ct->state and
249 * ct->nmi_nesting to let the RCU grace-period handling know
266 * If idle from RCU viewpoint, atomically increment CT state in ct_nmi_enter()
270 * to be in the outermost NMI handler that interrupted an RCU-idle in ct_nmi_enter()
278 // RCU is not watching here ... in ct_nmi_enter()
306 * ct_idle_enter - inform RCU that current CPU is entering idle
308 * Enter idle mode, in other words, -leave- the mode in which RCU
309 * read-side critical sections can occur. (Though RCU read-side
324 * ct_idle_exit - inform RCU that current CPU is leaving idle
326 * Exit idle mode, in other words, -enter- the mode in which RCU
343 * ct_irq_enter - inform RCU that current CPU is entering irq away from idle
354 * irq_exit() functions), RCU will give you what you deserve, good and hard.
371 * ct_irq_exit - inform RCU that current CPU is exiting irq towards idle
379 * architecture's idle loop violates this assumption, RCU will give you what
464 * instructions to execute won't use any RCU read-side critical section
465 * because this function puts RCU into an extended quiescent state.
483 * any RCU read-side critical section until the next call to in __ct_user_enter()
484 * user_exit() or ct_irq_enter(). Let's remove RCU's dependency in __ct_user_enter()
501 * Enter RCU idle mode right before resuming userspace. No use of RCU in __ct_user_enter()
503 * CPU doesn't need to maintain the tick for RCU maintenance purposes in __ct_user_enter()
510 * cputime accounting but we don't support RCU extended quiescent state. in __ct_user_enter()
526 * OTOH we can spare the calls to vtime and RCU when context_tracking.active in __ct_user_enter()
530 /* Tracking for vtime only, no concurrent RCU EQS accounting */ in __ct_user_enter()
534 * Tracking for vtime and RCU EQS. Make sure we don't race in __ct_user_enter()
536 * RCU only requires CT_RCU_WATCHING increments to be fully in __ct_user_enter()
550 * unsafe because it involves illegal RCU uses through tracing and lockdep.
565 * helpers are enough to protect RCU uses inside the exception. So in ct_user_enter()
584 * local_irq_restore(), involving illegal RCU uses through tracing and lockdep.
602 * guest space before any use of RCU read side critical section. This
619 * Exit RCU idle mode while entering the kernel because it can in __ct_user_exit()
620 * run an RCU read-side critical section at any time. in __ct_user_exit()
632 * cputime accounting but we don't support RCU extended quiescent state. in __ct_user_exit()
640 /* Tracking for vtime only, no concurrent RCU EQS accounting */ in __ct_user_exit()
644 * Tracking for vtime and RCU EQS. Make sure we don't race in __ct_user_exit()
646 * RCU only requires CT_RCU_WATCHING increments to be fully in __ct_user_exit()
660 * unsafe because it involves illegal RCU uses through tracing and lockdep.
686 * involving illegal RCU uses through tracing and lockdep. This is unlikely