Using RCU (Read-Copy-Update) for synchronization
================================================

Read-copy update (RCU) is a synchronization mechanism that is used to
protect read-mostly data structures.  RCU is very efficient and scalable
on the read side (it is wait-free), and thus can make the read paths
extremely fast.

RCU supports concurrency between a single writer and multiple readers,
thus it is not used alone.  Typically, the write-side will use a lock to
serialize multiple updates, but other approaches are possible (e.g.,
restricting updates to a single task).  In QEMU, when a lock is used,
this will often be the "iothread mutex", also known as the "big QEMU
lock" (BQL).  Also, restricting updates to a single task is done in
QEMU using the "bottom half" API.

RCU is fundamentally a "wait-to-finish" mechanism.  The read side marks
sections of code with "critical sections", and the update side will wait
for the execution of all *currently running* critical sections before
proceeding, or before asynchronously executing a callback.

The key point here is that only the currently running critical sections
are waited for; critical sections that are started **after** the beginning
of the wait do not extend the wait, despite running concurrently with
the updater.  This is the reason why RCU is more scalable than,
for example, reader-writer locks.  It is so much more scalable that
the system will have a single instance of the RCU mechanism; a single
mechanism can be used for an arbitrary number of "things", without
having to worry about things such as contention or deadlocks.

How is this possible?  The basic idea is to split updates in two phases,
"removal" and "reclamation".  During removal, we ensure that subsequent
readers will not be able to get a reference to the old data.  After
removal has completed, a critical section will not be able to access
the old data.  Therefore, critical sections that begin after removal
do not matter; as soon as all previous critical sections have finished,
there cannot be any readers who hold references to the data structure,
and these can now be safely reclaimed (e.g., freed or unref'ed).

Here is a picture::

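        thread 1                  thread 2                  thread 3
    -------------------    ------------------------    -------------------
    enter RCU crit.sec.
           |                finish removal phase
           |                begin wait
           |                      |                    enter RCU crit.sec.
    exit RCU crit.sec             |                           |
                            complete wait                     |
                            begin reclamation phase           |
                                                       exit RCU crit.sec.
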
Note how thread 3 is still executing its critical section when thread 2
starts reclaiming data.  This is possible, because the old version of the
data structure was not accessible at the time thread 3 began executing
that critical section.

RCU API
-------

The core RCU API is small:

     void rcu_read_lock(void);

        Used by a reader to inform the reclaimer that the reader is
        entering an RCU read-side critical section.

     void rcu_read_unlock(void);

        Used by a reader to inform the reclaimer that the reader is
        exiting an RCU read-side critical section.  Note that RCU
        read-side critical sections may be nested and/or overlapping.

     void synchronize_rcu(void);

        Blocks until all pre-existing RCU read-side critical sections
        on all threads have completed.  This marks the end of the removal
        phase and the beginning of the reclamation phase.

        Note that it would be valid for another update to come while
        ``synchronize_rcu`` is running.  Because of this, it is better that
        the updater releases any locks it may hold before calling
        ``synchronize_rcu``.  If this is not possible (for example, because
        the updater is protected by the BQL), you can use ``call_rcu``.

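        For instance, an updater that runs under the BQL, and therefore
        should not block in ``synchronize_rcu`` while holding it, can defer
        reclamation instead.  A minimal sketch, assuming a made-up
        ``Config``/``dev_config`` data structure (``qatomic_rcu_set`` and
        ``g_free_rcu`` are described below)::

            typedef struct Config {
                struct rcu_head rcu;
                int value;
            } Config;

            static Config *dev_config;

            /* Called with the BQL held. */
            void update_config(Config *new_cfg)
            {
                Config *old = dev_config;

                qatomic_rcu_set(&dev_config, new_cfg);  /* removal phase */
                g_free_rcu(old, rcu);                   /* reclamation, deferred until
                                                         * after a grace period */
            }
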
     void call_rcu1(struct rcu_head *head, void (*func)(struct rcu_head *head));

        This function invokes ``func(head)`` after all pre-existing RCU
        read-side critical sections on all threads have completed.  This
        marks the end of the removal phase, with ``func`` taking care
        asynchronously of the reclamation phase.

        ``call_rcu1`` is typically used via either the ``call_rcu`` or
        ``g_free_rcu`` macros, which handle the common case where the
        ``rcu_head`` member is the first of the struct.

     void call_rcu(T *p, void (*func)(T *p), field-name);

        If the ``struct rcu_head`` is the first field in the struct, you can
        use this macro instead of ``call_rcu1``.

     void g_free_rcu(T *p, field-name);

        This is a special-case version of ``call_rcu`` where the callback
        function is ``g_free``.

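        As a sketch of how these are used together (``struct foo``,
        ``foo_reclaim`` and ``foo_drop`` are made-up names)::

            struct foo {
                struct rcu_head rcu;   /* first field, as call_rcu/g_free_rcu expect */
                int a;
                char b;
                long c;
            };

            /* Reclaimer for call_rcu1: recover the enclosing struct, then free it. */
            static void foo_reclaim(struct rcu_head *rp)
            {
                struct foo *fp = container_of(rp, struct foo, rcu);
                g_free(fp);
            }

            void foo_drop(struct foo *fp)
            {
                /* fp is freed only after all pre-existing readers are done. */
                call_rcu1(&fp->rcu, foo_reclaim);
                /* ...or, since the callback does nothing but g_free:
                 *     g_free_rcu(fp, rcu);
                 */
            }
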
     typeof(*p) qatomic_rcu_read(p);

        ``qatomic_rcu_read()`` is similar to ``qatomic_load_acquire()``, but
        it makes some assumptions on the code that calls it.  This allows a
        more optimized implementation.

        ``qatomic_rcu_read`` assumes that whenever a single RCU critical
        section reads multiple shared data, these reads are either
        data-dependent or need no ordering.  This is almost always the
        case when using RCU, because read-side critical sections typically
        navigate one or more pointers (the pointers that are changed on
        every update) until reaching a data structure; the data reads are
        then data-dependent on the reads that retrieved those pointers.

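        For instance, a reader that navigates a pointer to reach the data
        only needs ``qatomic_rcu_read`` for the pointer itself (a sketch,
        reusing the made-up ``struct foo`` above plus a ``foo_ptr`` global)::

            static struct foo *foo_ptr;   /* written with qatomic_rcu_set() */

            int foo_read_a(void)
            {
                struct foo *fp;
                int a;

                rcu_read_lock();
                fp = qatomic_rcu_read(&foo_ptr);   /* read the pointer... */
                a = fp->a;                         /* ...then data reached through it */
                rcu_read_unlock();
                return a;
            }
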
     void qatomic_rcu_set(p, typeof(*p) v);

        ``qatomic_rcu_set()`` is similar to ``qatomic_store_release()``,
        though it also makes assumptions on the code that calls it in
        order to allow a more optimized implementation.

        In particular, ``qatomic_rcu_set()`` suffices for synchronization
        with readers, if the updater never mutates a field within a
        data item that is already accessible to readers.  This is the
        case when initializing a new copy of the RCU-protected data
        structure; just ensure that initialization of ``*p`` is carried out
        before ``qatomic_rcu_set(&p, new)``.

        If this rule is observed, writes will happen in the opposite
        order as the corresponding reads in the RCU read-side critical
        sections (or there is just one update), and there will be no need
        for other synchronization mechanisms besides the locks that
        serialize the writers.

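        Continuing the sketch above, an updater initializes the new copy
        completely before publishing it (``foo_publish`` is illustrative
        and is assumed to run under a lock; reclamation of the old version
        is omitted here)::

            void foo_publish(int a)
            {
                struct foo *new_fp = g_new0(struct foo, 1);

                new_fp->a = a;                       /* initialize *new_fp first... */
                qatomic_rcu_set(&foo_ptr, new_fp);   /* ...then make it visible */
            }
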
The following APIs must be used before RCU is used in a thread:

     void rcu_register_thread(void);

        Mark a thread as taking part in the RCU mechanism.  Such a thread
        will have to report quiescent points regularly, either manually
        or through the ``QemuCond``/``QemuSemaphore``/``QemuEvent`` APIs.

     void rcu_unregister_thread(void);

        Mark a thread as not taking part anymore in the RCU mechanism.
        It is not a problem if such a thread reports quiescent points,
        either manually or by using the
        ``QemuCond``/``QemuSemaphore``/``QemuEvent`` APIs.

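As a sketch, a thread that uses RCU registers itself once, before its first
critical section (``worker_thread`` and ``should_stop`` are placeholders)::

    static void *worker_thread(void *opaque)
    {
        rcu_register_thread();        /* before the first read-side critical section */

        while (!should_stop()) {
            rcu_read_lock();
            /* ... read RCU-protected data ... */
            rcu_read_unlock();
        }

        rcu_unregister_thread();      /* once the thread no longer uses RCU */
        return NULL;
    }
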
Instead of calling ``rcu_read_lock``/``rcu_read_unlock`` by hand, two
convenience macros can manage the read-side critical section automatically:

     RCU_READ_LOCK_GUARD()

        Takes the lock and releases it at the end of the block it was
        used in.

     WITH_RCU_READ_LOCK_GUARD()

        Is used at the head of a block to protect the code within the block.

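For example (the functions are illustrative and reuse ``foo_ptr`` from the
sketches above)::

    int foo_read_a_guarded(void)
    {
        RCU_READ_LOCK_GUARD();            /* released when the function returns */
        return qatomic_rcu_read(&foo_ptr)->a;
    }

    void foo_dump(void)
    {
        WITH_RCU_READ_LOCK_GUARD() {      /* released when the block is left */
            /* ... read RCU-protected data ... */
        }
    }
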
Differences with Linux
----------------------

- Waiting on a mutex is possible, though discouraged, within an RCU critical
  section.  This is because spinlocks are rarely (if ever) used in userspace
  programming; not allowing this would prevent upgrading an RCU read-side
  critical section to become an updater.

- ``qatomic_rcu_read`` and ``qatomic_rcu_set`` replace ``rcu_dereference`` and
  ``rcu_assign_pointer``.  They take a **pointer** to the variable being
  accessed.

- ``call_rcu`` is a macro that has an extra argument (the name of the first
  field in the struct, which must be a struct ``rcu_head``), and expects the
  type of the callback's argument to be the type of the first argument.
  ``call_rcu1`` is the same as Linux's ``call_rcu``.

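For instance, reusing the made-up ``struct foo`` from above, the typed
callback and the field name are passed like this::

    static void foo_free(struct foo *fp)
    {
        g_free(fp);
    }

    void foo_drop_typed(struct foo *fp)
    {
        call_rcu(fp, foo_free, rcu);   /* "rcu" is the name of the rcu_head field */
    }
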
RCU Patterns
------------

In general, RCU can be used whenever it is possible to create a new
"version" of a data structure every time the updater runs.  This may
sound like a very strict restriction, however:

- the updater does not mean "everything that writes to a data structure",
  but rather "everything that involves a reclamation step" (see the
  array example below);

- in some cases, creating a new version of a data structure may actually
  be very cheap.  For example, modifying the "next" pointer of a singly
  linked list is effectively creating a new version of the list.

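As a sketch of the linked-list case (a hand-rolled list with made-up names;
QEMU also provides RCU-aware list macros in ``include/qemu/rcu_queue.h``)::

    struct node {
        int value;
        struct node *next;
    };

    static struct node *list_head;

    /* Run under the writer lock: inserting at the head rewrites a single
     * pointer, so a reader sees either the old list or the new one, never
     * a mix of the two.
     */
    void insert_head(struct node *n)
    {
        n->next = list_head;               /* initialize the new node first */
        qatomic_rcu_set(&list_head, n);    /* then publish the new version */
    }
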
RCU reference counting
^^^^^^^^^^^^^^^^^^^^^^

Because grace periods are not allowed to complete while there is an RCU
read-side critical section in progress, the RCU read-side primitives
may be used as a restricted reference-counting mechanism.  For example,
consider the following code fragment, where ``foo`` is a pointer to the
RCU-protected data::

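    rcu_read_lock();
    p = qatomic_rcu_read(&foo);
    /* do something with p. */
    rcu_read_unlock();
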
The RCU read-side critical section ensures that the value of ``p`` remains
valid until after the ``rcu_read_unlock()``.  In some sense, it is acquiring
a reference to ``p`` that is later released when the critical section ends.
The write side then looks like this (with appropriate locking)::

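    qemu_mutex_lock(&foo_mutex);      /* foo_mutex serializes the writers */
    old = foo;
    qatomic_rcu_set(&foo, new);       /* removal: readers now see "new" */
    qemu_mutex_unlock(&foo_mutex);
    synchronize_rcu();                /* wait for pre-existing readers */
    g_free(old);                      /* reclamation */
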
If the processing cannot be done purely within the critical section, it
is possible to combine this idiom with a "real" reference count::

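    rcu_read_lock();
    p = qatomic_rcu_read(&foo);
    foo_ref(p);                       /* foo_ref/foo_unref are the object's own
                                       * reference counting, not part of RCU */
    rcu_read_unlock();
    /* do something with p, possibly outside any critical section. */
    foo_unref(p);
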
With a real reference count the write side still only performs removal,
dropping its own reference to the old version afterwards; reclamation
happens when the last reference to a ``foo`` object is dropped.
Using ``synchronize_rcu()`` at that point is undesirably expensive, because
the last reference may be dropped on the read side.  Hence the unref path
can use ``call_rcu()`` instead::

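    /* A sketch: assume struct foo has gained a "refcount" field, and
     * foo_destroy() is the made-up function that actually frees it.
     */
    void foo_unref(struct foo *p)
    {
        if (qatomic_fetch_dec(&p->refcount) == 1) {
            call_rcu(p, foo_destroy, rcu);   /* destroy only after a grace period */
        }
    }
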
RCU resizable arrays
^^^^^^^^^^^^^^^^^^^^

Resizable arrays can be used with RCU.  The expensive RCU synchronization
(or ``call_rcu``) only needs to take place when the array is resized.
The two items to take care of are:

- ensuring that the old version of the array is available between removal
  and reclamation;

- avoiding mismatches in the read side between the array's data and the
  array's size.

The first problem is avoided simply by not using ``realloc``.  Instead,
each resize will allocate a new array and copy the old data into it.  The
second problem is avoided by storing the size of the array together with
the array itself, so that a reader always sees a size that matches the
data it is reading.

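A sketch of this scheme (``struct arr``, ``global_array`` and the helper
functions are made-up names; the writer is assumed to run under a lock)::

    struct arr {
        int size;
        int data[];
    };

    static struct arr *global_array;

    /* Read side: the size and the data always come from the same version. */
    int array_read(int i)
    {
        int val = -1;

        rcu_read_lock();
        struct arr *array = qatomic_rcu_read(&global_array);
        if (i < array->size) {
            val = array->data[i];
        }
        rcu_read_unlock();
        return val;
    }

    /* Write side: grow by allocating a new array, never with realloc(). */
    void array_append(int value)
    {
        struct arr *old = global_array;
        struct arr *new_array = g_malloc(sizeof(struct arr) +
                                         (old->size + 1) * sizeof(int));

        new_array->size = old->size + 1;
        memcpy(new_array->data, old->data, old->size * sizeof(int));
        new_array->data[old->size] = value;

        qatomic_rcu_set(&global_array, new_array);   /* removal */
        synchronize_rcu();                           /* wait for readers */
        g_free(old);                                 /* reclamation */
    }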