.. _whatisrcu_doc:

What is RCU?  --  "Read, Copy, Update"
========================================

Please note that the "What is RCU?" LWN series is an excellent place
to start learning about RCU:

| 1. What is RCU, Fundamentally?   https://lwn.net/Articles/262464/
| 2. What is RCU? Part 2: Usage    https://lwn.net/Articles/263130/
| 3. RCU part 3: the RCU API       https://lwn.net/Articles/264090/
| 4. The RCU API, 2010 Edition     https://lwn.net/Articles/418853/
|    2010 Big API Table            https://lwn.net/Articles/419086/
| 5. The RCU API, 2014 Edition     https://lwn.net/Articles/609904/
|    2014 Big API Table            https://lwn.net/Articles/609973/
| 6. The RCU API, 2019 Edition     https://lwn.net/Articles/777036/
|    2019 Big API Table            https://lwn.net/Articles/777165/

For those preferring video:

| 1. Unraveling RCU Mysteries: Fundamentals
|    https://www.linuxfoundation.org/webinars/unraveling-rcu-usage-mysteries
| 2. Unraveling RCU Mysteries: Additional Use Cases
|    https://www.linuxfoundation.org/webinars/unraveling-rcu-usage-mysteries-additional-use-cases


What is RCU?

RCU is a synchronization mechanism that was added to the Linux kernel
during the 2.5 development effort and that is optimized for read-mostly
situations. Although RCU is actually quite simple, making effective use
of it requires you to think differently about your code. Another part
of the problem is the mistaken assumption that there is "one true way" to
describe and to use RCU. Instead, the experience has been that different
people must take different paths to arrive at an understanding of RCU,
depending on their experiences and use cases. This document provides
several different paths, as follows:

:ref:`1. RCU OVERVIEW <1_whatisRCU>`

:ref:`2. WHAT IS RCU'S CORE API? <2_whatisRCU>`

:ref:`3. WHAT ARE SOME EXAMPLE USES OF CORE RCU API? <3_whatisRCU>`

:ref:`4. WHAT IF MY UPDATING THREAD CANNOT BLOCK? <4_whatisRCU>`

:ref:`5. WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU? <5_whatisRCU>`

:ref:`6. ANALOGY WITH READER-WRITER LOCKING <6_whatisRCU>`

:ref:`7. ANALOGY WITH REFERENCE COUNTING <7_whatisRCU>`

:ref:`8. FULL LIST OF RCU APIs <8_whatisRCU>`

:ref:`9. ANSWERS TO QUICK QUIZZES <9_whatisRCU>`

People who prefer starting with a conceptual overview should focus on
Section 1, though most readers will profit by reading this section at
some point. People who prefer to start with an API that they can then
experiment with should focus on Section 2. People who prefer to start
with example uses should focus on Sections 3 and 4. People who need to
understand the RCU implementation should focus on Section 5, then dive
into the kernel source code. People who reason best by analogy should
focus on Sections 6 and 7. Section 8 serves as an index to the docbook
API documentation, and Section 9 is the traditional answer key.

So, start with the section that makes the most sense to you and your
preferred method of learning. If you need to know everything about
everything, feel free to read the whole thing -- but if you are really
that type of person, you have perused the source code and will therefore
never need this document anyway. ;-)

.. _1_whatisRCU:

1.  RCU OVERVIEW
----------------

The basic idea behind RCU is to split updates into "removal" and
"reclamation" phases.
The removal phase removes references to data items within a data
structure (possibly by replacing them with references to new versions
of these data items), and can run concurrently with readers. The reason
that it is safe to run the removal phase concurrently with readers is
that the semantics of modern CPUs guarantee that readers will see either
the old or the new version of the data structure rather than a partially
updated reference. The reclamation phase does the work of reclaiming
(e.g., freeing) the data items removed from the data structure during the
removal phase. Because reclaiming data items can disrupt any readers
concurrently referencing those data items, the reclamation phase must
not start until readers no longer hold references to those data items.

Splitting the update into removal and reclamation phases permits the
updater to perform the removal phase immediately, and to defer the
reclamation phase until all readers active during the removal phase have
completed, either by blocking until they finish or by registering a
callback that is invoked after they finish. Only readers that are active
during the removal phase need be considered, because any reader starting
after the removal phase will be unable to gain a reference to the removed
data items, and therefore cannot be disrupted by the reclamation phase.

So the typical RCU update sequence goes something like the following:

a.  Remove pointers to a data structure, so that subsequent
    readers cannot gain a reference to it.

b.  Wait for all previous readers to complete their RCU read-side
    critical sections.

c.  At this point, there cannot be any readers who hold references
    to the data structure, so it now may safely be reclaimed
    (e.g., kfree()d).

Step (b) above is the key idea underlying RCU's deferred destruction.
The ability to wait until all readers are done allows RCU readers to
use much lighter-weight synchronization, in some cases, absolutely no
synchronization at all. In contrast, in more conventional lock-based
schemes, readers must use heavy-weight synchronization in order to
prevent an updater from deleting the data structure out from under them.
This is because lock-based updaters typically update data items in place,
and must therefore exclude readers. In contrast, RCU-based updaters
typically take advantage of the fact that writes to single aligned
pointers are atomic on modern CPUs, allowing atomic insertion, removal,
and replacement of data items in a linked structure without disrupting
readers. Concurrent RCU readers can then continue accessing the old
versions, and can dispense with the atomic operations, memory barriers,
and communications cache misses that are so expensive on present-day
SMP computer systems, even in the absence of lock contention.

In the three-step procedure shown above, the updater is performing both
the removal and the reclamation step, but it is often helpful for an
entirely different thread to do the reclamation, as is in fact the case
in the Linux kernel's directory-entry cache (dcache). Even if the same
thread performs both the update step (step (a) above) and the reclamation
step (step (c) above), it is often helpful to think of them separately.
For example, RCU readers and updaters need not communicate at all,
but RCU provides implicit low-overhead communication between readers
and reclaimers, namely, in step (b) above.

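
As a concrete preview of steps (a)-(c), using APIs that are introduced
in the next section, removal and reclamation of an element "p" from a
hypothetical RCU-protected linked list might look as follows::

    list_del_rcu(&p->list);  /* (a) Unlink, so new readers cannot find p. */
    synchronize_rcu();       /* (b) Wait for pre-existing readers to finish. */
    kfree(p);                /* (c) No reader can now hold a reference, so reclaim. */

Locating "p" and excluding other updaters would typically be handled by
update-side locking, as in the examples later in this document.
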

So how the heck can a reclaimer tell when a reader is done, given
that readers are not doing any sort of synchronization operations???
Read on to learn about how RCU's API makes this easy.

.. _2_whatisRCU:

2.  WHAT IS RCU'S CORE API?
---------------------------

The core RCU API is quite small:

a.  rcu_read_lock()
b.  rcu_read_unlock()
c.  synchronize_rcu() / call_rcu()
d.  rcu_assign_pointer()
e.  rcu_dereference()

There are many other members of the RCU API, but the rest can be
expressed in terms of these five, though most implementations instead
express synchronize_rcu() in terms of the call_rcu() callback API.

The five core RCU APIs are described below; the rest will be enumerated
later. See the kernel docbook documentation for more info, or look
directly at the function header comments.

rcu_read_lock()
^^^^^^^^^^^^^^^
    void rcu_read_lock(void);

    This temporal primitive is used by a reader to inform the
    reclaimer that the reader is entering an RCU read-side critical
    section. It is illegal to block while in an RCU read-side
    critical section, though kernels built with CONFIG_PREEMPT_RCU
    can preempt RCU read-side critical sections. Any RCU-protected
    data structure accessed during an RCU read-side critical section
    is guaranteed to remain unreclaimed for the full duration of that
    critical section. Reference counts may be used in conjunction
    with RCU to maintain longer-term references to data structures.

    Note that anything that disables bottom halves, preemption,
    or interrupts also enters an RCU read-side critical section.
    Acquiring a spinlock also enters an RCU read-side critical
    section, even for spinlocks that do not disable preemption,
    as is the case in kernels built with CONFIG_PREEMPT_RT=y.
    Sleeplocks do *not* enter RCU read-side critical sections.

rcu_read_unlock()
^^^^^^^^^^^^^^^^^
    void rcu_read_unlock(void);

    This temporal primitive is used by a reader to inform the
    reclaimer that the reader is exiting an RCU read-side critical
    section. Anything that enables bottom halves, preemption,
    or interrupts also exits an RCU read-side critical section.
    Releasing a spinlock also exits an RCU read-side critical section.

    Note that RCU read-side critical sections may be nested and/or
    overlapping.

synchronize_rcu()
^^^^^^^^^^^^^^^^^
    void synchronize_rcu(void);

    This temporal primitive marks the end of updater code and the
    beginning of reclaimer code. It does this by blocking until
    all pre-existing RCU read-side critical sections on all CPUs
    have completed. Note that synchronize_rcu() will **not**
    necessarily wait for any subsequent RCU read-side critical
    sections to complete. For example, consider the following
    sequence of events::

            CPU 0                 CPU 1                     CPU 2
        ----------------- ------------------------- -----------------
        1.  rcu_read_lock()
        2.                    enters synchronize_rcu()
        3.                                               rcu_read_lock()
        4.  rcu_read_unlock()
        5.                    exits synchronize_rcu()
        6.                                               rcu_read_unlock()

    To reiterate, synchronize_rcu() waits only for ongoing RCU
    read-side critical sections to complete, not necessarily for
    any that begin after synchronize_rcu() is invoked.

    Of course, synchronize_rcu() does not necessarily return
    **immediately** after the last pre-existing RCU read-side critical
    section completes.
    For one thing, there might well be scheduling
    delays. For another thing, many RCU implementations process
    requests in batches in order to improve efficiencies, which can
    further delay synchronize_rcu().

    Since synchronize_rcu() is the API that must figure out when
    readers are done, its implementation is key to RCU. For RCU
    to be useful in all but the most read-intensive situations,
    synchronize_rcu()'s overhead must also be quite small.

    The call_rcu() API is an asynchronous callback form of
    synchronize_rcu(), and is described in more detail in a later
    section. Instead of blocking, it registers a function and
    argument which are invoked after all ongoing RCU read-side
    critical sections have completed. This callback variant is
    particularly useful in situations where it is illegal to block
    or where update-side performance is critically important.

    However, the call_rcu() API should not be used lightly, as use
    of the synchronize_rcu() API generally results in simpler code.
    In addition, the synchronize_rcu() API has the nice property
    of automatically limiting the update rate should grace periods
    be delayed. This property results in system resilience in the
    face of denial-of-service attacks. Code using call_rcu() should
    limit the update rate in order to gain this same sort of
    resilience. See checklist.rst for some approaches to limiting
    the update rate.

rcu_assign_pointer()
^^^^^^^^^^^^^^^^^^^^
    void rcu_assign_pointer(p, typeof(p) v);

    Yes, rcu_assign_pointer() **is** implemented as a macro, though
    it would be cool to be able to declare a function in this manner.
    (And there has been some discussion of adding overloaded functions
    to the C language, so who knows?)

    The updater uses this spatial macro to assign a new value to an
    RCU-protected pointer, in order to safely communicate the change
    in value from the updater to the reader. This is a spatial (as
    opposed to temporal) macro. It does not evaluate to an rvalue,
    but it does provide any compiler directives and memory-barrier
    instructions required for a given compiler or CPU architecture.
    Its ordering properties are those of a store-release operation,
    that is, any prior loads and stores required to initialize the
    structure are ordered before the store that publishes the pointer
    to that structure.

    Perhaps just as important, rcu_assign_pointer() serves to document
    (1) which pointers are protected by RCU and (2) the point at which
    a given structure becomes accessible to other CPUs. That said,
    rcu_assign_pointer() is most frequently used indirectly, via
    the _rcu list-manipulation primitives such as list_add_rcu().

rcu_dereference()
^^^^^^^^^^^^^^^^^
    typeof(p) rcu_dereference(p);

    Like rcu_assign_pointer(), rcu_dereference() must be implemented
    as a macro.

    The reader uses the spatial rcu_dereference() macro to fetch
    an RCU-protected pointer, which returns a value that may
    then be safely dereferenced. Note that rcu_dereference()
    does not actually dereference the pointer; instead, it
    protects the pointer for later dereferencing. It also
    executes any needed memory-barrier instructions for a given
    CPU architecture. Currently, only Alpha needs memory barriers
    within rcu_dereference() -- on other CPUs, it compiles to a
    volatile load.
    However, no mainstream C compilers respect
    address dependencies, so rcu_dereference() uses volatile casts,
    which, in combination with the coding guidelines listed in
    rcu_dereference.rst, prevent current compilers from breaking
    these dependencies.

    Common coding practice uses rcu_dereference() to copy an
    RCU-protected pointer to a local variable, then dereferences
    this local variable, for example as follows::

        p = rcu_dereference(head.next);
        return p->data;

    However, in this case, one could just as easily combine these
    into one statement::

        return rcu_dereference(head.next)->data;

    If you are going to be fetching multiple fields from the
    RCU-protected structure, using the local variable is of
    course preferred. Repeated rcu_dereference() calls look
    ugly, do not guarantee that the same pointer will be returned
    if an update happened while in the critical section, and incur
    unnecessary overhead on Alpha CPUs.

    Note that the value returned by rcu_dereference() is valid
    only within the enclosing RCU read-side critical section [1]_.
    For example, the following is **not** legal::

        rcu_read_lock();
        p = rcu_dereference(head.next);
        rcu_read_unlock();
        x = p->address; /* BUG!!! */
        rcu_read_lock();
        y = p->data;    /* BUG!!! */
        rcu_read_unlock();

    Holding a reference from one RCU read-side critical section
    to another is just as illegal as holding a reference from
    one lock-based critical section to another! Similarly,
    using a reference outside of the critical section in which
    it was acquired is just as illegal as doing so with normal
    locking.

    As with rcu_assign_pointer(), an important function of
    rcu_dereference() is to document which pointers are protected by
    RCU, in particular, flagging a pointer that is subject to changing
    at any time, including immediately after the rcu_dereference().
    And, again like rcu_assign_pointer(), rcu_dereference() is
    typically used indirectly, via the _rcu list-manipulation
    primitives, such as list_for_each_entry_rcu() [2]_.

.. [1] The variant rcu_dereference_protected() can be used outside
       of an RCU read-side critical section as long as the usage is
       protected by locks acquired by the update-side code. This variant
       avoids the lockdep warning that would happen when using (for
       example) rcu_dereference() without rcu_read_lock() protection.
       Using rcu_dereference_protected() also has the advantage
       of permitting compiler optimizations that rcu_dereference()
       must prohibit. The rcu_dereference_protected() variant takes
       a lockdep expression to indicate which locks must be acquired
       by the caller. If the indicated protection is not provided,
       a lockdep splat is emitted. See Design/Requirements/Requirements.rst
       and the API's code comments for more details and example usage.

.. [2] If the list_for_each_entry_rcu() instance might be used by
       update-side code as well as by RCU readers, then an additional
       lockdep expression can be added to its list of arguments.
       For example, given an additional "lock_is_held(&mylock)" argument,
       the RCU lockdep code would complain only if this instance was
       invoked outside of an RCU read-side critical section and without
       the protection of mylock.

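
For example, a list search that may be invoked both from within an RCU
read-side critical section and with the update-side lock held might be
sketched as follows, where the "myelem" list, its "mylock" spinlock, and
the fields used here are all hypothetical::

    /* Caller must hold either rcu_read_lock() or mylock. */
    struct myelem *myelem_lookup(long key)
    {
        struct myelem *p;

        list_for_each_entry_rcu(p, &myelem_list, list,
                                lockdep_is_held(&mylock)) {
            if (p->key == key)
                return p;
        }
        return NULL;
    }

Here the optional lockdep expression suppresses false-positive
RCU-lockdep complaints when the function is called with mylock held
rather than from within an RCU read-side critical section.
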

The following diagram shows how each API communicates among the
reader, updater, and reclaimer.

::

            rcu_assign_pointer()
                                     +--------+
         +---------------------->| reader |---------+
         |                       +--------+         |
         |                           |              |
         |                           |              | Protect:
         |                           |              | rcu_read_lock()
         |                           |              | rcu_read_unlock()
         |        rcu_dereference()  |              |
    +---------+                      |              |
    | updater |<---------------------+              |
    +---------+                                     V
         |                                    +-----------+
         +----------------------------------->| reclaimer |
                                              +-----------+
           Defer:
           synchronize_rcu() & call_rcu()


The RCU infrastructure observes the temporal sequence of rcu_read_lock(),
rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
order to determine when (1) synchronize_rcu() invocations may return
to their callers and (2) call_rcu() callbacks may be invoked. Efficient
implementations of the RCU infrastructure make heavy use of batching in
order to amortize their overhead over many uses of the corresponding APIs.
The rcu_assign_pointer() and rcu_dereference() invocations communicate
spatial changes via stores to and loads from the RCU-protected pointer in
question.

There are at least three flavors of RCU usage in the Linux kernel. The
diagram above shows the most common one. On the updater side, the
rcu_assign_pointer(), synchronize_rcu() and call_rcu() primitives used
are the same for all three flavors. However, for protection (on the
reader side), the primitives used vary depending on the flavor:

a.  rcu_read_lock() / rcu_read_unlock()
    rcu_dereference()

b.  rcu_read_lock_bh() / rcu_read_unlock_bh()
    local_bh_disable() / local_bh_enable()
    rcu_dereference_bh()

c.  rcu_read_lock_sched() / rcu_read_unlock_sched()
    preempt_disable() / preempt_enable()
    local_irq_save() / local_irq_restore()
    hardirq enter / hardirq exit
    NMI enter / NMI exit
    rcu_dereference_sched()

These three flavors are used as follows:

a.  RCU applied to normal data structures.

b.  RCU applied to networking data structures that may be subjected
    to remote denial-of-service attacks.

c.  RCU applied to scheduler and interrupt/NMI-handler tasks.

Again, most uses will be of (a). The (b) and (c) cases are important
for specialized uses, but are relatively uncommon. The SRCU, RCU-Tasks,
RCU-Tasks-Rude, and RCU-Tasks-Trace flavors have similar relationships
among their assorted primitives.

.. _3_whatisRCU:

3.  WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
-----------------------------------------------

This section shows a simple use of the core RCU API to protect a
global pointer to a dynamically allocated structure. More-typical
uses of RCU may be found in listRCU.rst and NMI-RCU.rst.

::

    struct foo {
        int a;
        char b;
        long c;
    };
    DEFINE_SPINLOCK(foo_mutex);

    struct foo __rcu *gbl_foo;

    /*
     * Create a new struct foo that is the same as the one currently
     * pointed to by gbl_foo, except that field "a" is replaced
     * with "new_a".  Points gbl_foo to the new structure, and
     * frees up the old structure after a grace period.
     *
     * Uses rcu_assign_pointer() to ensure that concurrent readers
     * see the initialized version of the new structure.
     *
     * Uses synchronize_rcu() to ensure that any readers that might
     * have references to the old structure complete before freeing
     * the old structure.
     */
    void foo_update_a(int new_a)
    {
        struct foo *new_fp;
        struct foo *old_fp;

        new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
        spin_lock(&foo_mutex);
        old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
        *new_fp = *old_fp;
        new_fp->a = new_a;
        rcu_assign_pointer(gbl_foo, new_fp);
        spin_unlock(&foo_mutex);
        synchronize_rcu();
        kfree(old_fp);
    }

    /*
     * Return the value of field "a" of the current gbl_foo
     * structure.  Use rcu_read_lock() and rcu_read_unlock()
     * to ensure that the structure does not get deleted out
     * from under us, and use rcu_dereference() to ensure that
     * we see the initialized version of the structure (important
     * for DEC Alpha and for people reading the code).
     */
    int foo_get_a(void)
    {
        int retval;

        rcu_read_lock();
        retval = rcu_dereference(gbl_foo)->a;
        rcu_read_unlock();
        return retval;
    }

So, to sum up:

-  Use rcu_read_lock() and rcu_read_unlock() to guard RCU
   read-side critical sections.

-  Within an RCU read-side critical section, use rcu_dereference()
   to dereference RCU-protected pointers.

-  Use some solid design (such as locks or semaphores) to
   keep concurrent updates from interfering with each other.

-  Use rcu_assign_pointer() to update an RCU-protected pointer.
   This primitive protects concurrent readers from the updater,
   **not** concurrent updates from each other! You therefore still
   need to use locking (or something similar) to keep concurrent
   rcu_assign_pointer() primitives from interfering with each other.

-  Use synchronize_rcu() **after** removing a data element from an
   RCU-protected data structure, but **before** reclaiming/freeing
   the data element, in order to wait for the completion of all
   RCU read-side critical sections that might be referencing that
   data item.

See checklist.rst for additional rules to follow when using RCU.
And again, more-typical uses of RCU may be found in listRCU.rst
and NMI-RCU.rst.

.. _4_whatisRCU:

4.  WHAT IF MY UPDATING THREAD CANNOT BLOCK?
--------------------------------------------

In the example above, foo_update_a() blocks until a grace period elapses.
This is quite simple, but in some cases one cannot afford to wait so
long -- there might be other high-priority work to be done.

In such cases, one uses call_rcu() rather than synchronize_rcu().
The call_rcu() API is as follows::

    void call_rcu(struct rcu_head *head, rcu_callback_t func);

This function invokes func(head) after a grace period has elapsed.
This invocation might happen from either softirq or process context,
so the function is not permitted to block. The foo struct needs to
have an rcu_head structure added, perhaps as follows::

    struct foo {
        int a;
        char b;
        long c;
        struct rcu_head rcu;
    };

The foo_update_a() function might then be written as follows::

    /*
     * Create a new struct foo that is the same as the one currently
     * pointed to by gbl_foo, except that field "a" is replaced
     * with "new_a".  Points gbl_foo to the new structure, and
     * frees up the old structure after a grace period.
     *
     * Uses rcu_assign_pointer() to ensure that concurrent readers
     * see the initialized version of the new structure.
     *
     * Uses call_rcu() to ensure that any readers that might have
     * references to the old structure complete before freeing the
     * old structure.
     */
    void foo_update_a(int new_a)
    {
        struct foo *new_fp;
        struct foo *old_fp;

        new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
        spin_lock(&foo_mutex);
        old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
        *new_fp = *old_fp;
        new_fp->a = new_a;
        rcu_assign_pointer(gbl_foo, new_fp);
        spin_unlock(&foo_mutex);
        call_rcu(&old_fp->rcu, foo_reclaim);
    }

The foo_reclaim() function might appear as follows::

    void foo_reclaim(struct rcu_head *rp)
    {
        struct foo *fp = container_of(rp, struct foo, rcu);

        foo_cleanup(fp->a);

        kfree(fp);
    }

The container_of() primitive is a macro that, given a pointer into a
struct, the type of the struct, and the pointed-to field within the
struct, returns a pointer to the beginning of the struct.

The use of call_rcu() permits the caller of foo_update_a() to
immediately regain control, without needing to worry further about the
old version of the newly updated element. It also clearly shows the
RCU distinction between updater, namely foo_update_a(), and reclaimer,
namely foo_reclaim().

The summary of advice is the same as for the previous section, except
that we are now using call_rcu() rather than synchronize_rcu():

-  Use call_rcu() **after** removing a data element from an
   RCU-protected data structure in order to register a callback
   function that will be invoked after the completion of all RCU
   read-side critical sections that might be referencing that
   data item.

If the callback for call_rcu() is not doing anything more than calling
kfree() on the structure, you can use kfree_rcu() instead of call_rcu()
to avoid having to write your own callback::

    kfree_rcu(old_fp, rcu);

If the occasional sleep is permitted, the single-argument form may
be used, omitting the rcu_head structure from struct foo. ::

    kfree_rcu_mightsleep(old_fp);

This variant almost never blocks, but might do so by invoking
synchronize_rcu() in response to memory-allocation failure.

Again, see checklist.rst for additional rules governing the use of RCU.

.. _5_whatisRCU:

5.  WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
------------------------------------------------

One of the nice things about RCU is that it has extremely simple "toy"
implementations that are a good first step towards understanding the
production-quality implementations in the Linux kernel. This section
presents two such "toy" implementations of RCU, one that is implemented
in terms of familiar locking primitives, and another that more closely
resembles "classic" RCU. Both are way too simple for real-world use,
lacking both functionality and performance. However, they are useful
in getting a feel for how RCU works. See kernel/rcu/update.c for a
production-quality implementation, and see:

    https://docs.google.com/document/d/1X0lThx8OK0ZgLMqVoXiR4ZrGURHrXK6NyLRbeXe3Xac/edit

for papers describing the Linux kernel RCU implementation. The OLS'01
and OLS'02 papers are a good introduction, and the dissertation provides
more details on the implementation as of early 2004.


5A.  "TOY" IMPLEMENTATION #1: LOCKING
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This section presents a "toy" RCU implementation that is based on
familiar locking primitives. Its overhead makes it a non-starter for
real-life use, as does its lack of scalability. It is also unsuitable
for realtime use, since it allows scheduling latency to "bleed" from
one read-side critical section to another. It also assumes recursive
reader-writer locks: If you try this with non-recursive locks, and
you allow nested rcu_read_lock() calls, you can deadlock.

However, it is probably the easiest implementation to relate to, so it
is a good starting point.

It is extremely simple::

    static DEFINE_RWLOCK(rcu_gp_mutex);

    void rcu_read_lock(void)
    {
        read_lock(&rcu_gp_mutex);
    }

    void rcu_read_unlock(void)
    {
        read_unlock(&rcu_gp_mutex);
    }

    void synchronize_rcu(void)
    {
        write_lock(&rcu_gp_mutex);
        smp_mb__after_spinlock();
        write_unlock(&rcu_gp_mutex);
    }

[You can ignore rcu_assign_pointer() and rcu_dereference() without missing
much. But here are simplified versions anyway. And whatever you do,
don't forget about them when submitting patches making use of RCU!]::

    #define rcu_assign_pointer(p, v) \
    ({ \
        smp_store_release(&(p), (v)); \
    })

    #define rcu_dereference(p) \
    ({ \
        typeof(p) _________p1 = READ_ONCE(p); \
        (_________p1); \
    })


The rcu_read_lock() and rcu_read_unlock() primitives read-acquire
and release a global reader-writer lock. The synchronize_rcu()
primitive write-acquires this same lock, then releases it. This means
that once synchronize_rcu() exits, all RCU read-side critical sections
that were in progress before synchronize_rcu() was called are guaranteed
to have completed -- there is no way that synchronize_rcu() would have
been able to write-acquire the lock otherwise. The smp_mb__after_spinlock()
promotes synchronize_rcu() to a full memory barrier in compliance with
the "Memory-Barrier Guarantees" listed in:

    Design/Requirements/Requirements.rst

It is possible to nest rcu_read_lock(), since reader-writer locks may
be recursively acquired. Note also that rcu_read_lock() is immune
from deadlock (an important property of RCU). The reason for this is
that the only thing that can block rcu_read_lock() is a synchronize_rcu().
But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
so there can be no deadlock cycle.

.. _quiz_1:

Quick Quiz #1:
    Why is this argument naive? How could a deadlock
    occur when using this algorithm in a real-world Linux
    kernel? How could this deadlock be avoided?

:ref:`Answers to Quick Quiz <9_whatisRCU>`

5B.  "TOY" EXAMPLE #2: CLASSIC RCU
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This section presents a "toy" RCU implementation that is based on
"classic RCU". It is also short on performance (but only for updates) and
on features such as hotplug CPU and the ability to run in CONFIG_PREEMPTION
kernels. The definitions of rcu_dereference() and rcu_assign_pointer()
are the same as those shown in the preceding section, so they are omitted.

::

    void rcu_read_lock(void) { }

    void rcu_read_unlock(void) { }

    void synchronize_rcu(void)
    {
        int cpu;

        for_each_possible_cpu(cpu)
            run_on(cpu);
    }

Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing.
This is the great strength of classic RCU in a non-preemptive kernel:
read-side overhead is precisely zero, at least on non-Alpha CPUs.
And there is absolutely no way that rcu_read_lock() can possibly
participate in a deadlock cycle!

The implementation of synchronize_rcu() simply schedules itself on each
CPU in turn. The run_on() primitive can be implemented straightforwardly
in terms of the sched_setaffinity() primitive. Of course, a somewhat less
"toy" implementation would restore the affinity upon completion rather
than just leaving all tasks running on the last CPU, but when I said
"toy", I meant **toy**!


So how the heck is this supposed to work???

Remember that it is illegal to block while in an RCU read-side critical
section. Therefore, if a given CPU executes a context switch, we know
that it must have completed all preceding RCU read-side critical sections.
Once **all** CPUs have executed a context switch, then **all** preceding
RCU read-side critical sections will have completed.

So, suppose that we remove a data item from its structure and then invoke
synchronize_rcu(). Once synchronize_rcu() returns, we are guaranteed
that there are no RCU read-side critical sections holding a reference
to that data item, so we can safely reclaim it.

.. _quiz_2:

Quick Quiz #2:
    Give an example where Classic RCU's read-side
    overhead is **negative**.

:ref:`Answers to Quick Quiz <9_whatisRCU>`

.. _quiz_3:

Quick Quiz #3:
    If it is illegal to block in an RCU read-side
    critical section, what the heck do you do in
    CONFIG_PREEMPT_RT, where normal spinlocks can block???

:ref:`Answers to Quick Quiz <9_whatisRCU>`

.. _6_whatisRCU:

6.  ANALOGY WITH READER-WRITER LOCKING
--------------------------------------

Although RCU can be used in many different ways, a very common use of
RCU is analogous to reader-writer locking. The following unified
diff shows how closely related RCU and reader-writer locking can be.

::

    @@ -5,5 +5,5 @@ struct el {
         int data;
         /* Other data fields */
     };
    -rwlock_t listmutex;
    +spinlock_t listmutex;
     struct el head;

    @@ -13,15 +14,15 @@
         struct list_head *lp;
         struct el *p;

    -    read_lock(&listmutex);
    -    list_for_each_entry(p, head, lp) {
    +    rcu_read_lock();
    +    list_for_each_entry_rcu(p, head, lp) {
             if (p->key == key) {
                 *result = p->data;
    -            read_unlock(&listmutex);
    +            rcu_read_unlock();
                 return 1;
             }
         }
    -    read_unlock(&listmutex);
    +    rcu_read_unlock();
         return 0;
     }

    @@ -29,15 +30,16 @@
     {
         struct el *p;

    -    write_lock(&listmutex);
    +    spin_lock(&listmutex);
         list_for_each_entry(p, head, lp) {
             if (p->key == key) {
    -            list_del(&p->list);
    -            write_unlock(&listmutex);
    +            list_del_rcu(&p->list);
    +            spin_unlock(&listmutex);
    +            synchronize_rcu();
                 kfree(p);
                 return 1;
             }
         }
    -    write_unlock(&listmutex);
    +    spin_unlock(&listmutex);
         return 0;
     }

Or, for those who prefer a side-by-side listing::

   1 struct el {                           1 struct el {
   2   struct list_head list;              2   struct list_head list;
   3   long key;                           3   long key;
   4   spinlock_t mutex;                   4   spinlock_t mutex;
   5   int data;                           5   int data;
   6   /* Other data fields */             6   /* Other data fields */
   7 };                                    7 };
   8 rwlock_t listmutex;                   8 spinlock_t listmutex;
   9 struct el head;                       9 struct el head;

::

   1 int search(long key, int *result)      1 int search(long key, int *result)
   2 {                                      2 {
   3   struct list_head *lp;                3   struct list_head *lp;
   4   struct el *p;                        4   struct el *p;
   5                                        5
   6   read_lock(&listmutex);               6   rcu_read_lock();
   7   list_for_each_entry(p, head, lp) {   7   list_for_each_entry_rcu(p, head, lp) {
   8     if (p->key == key) {               8     if (p->key == key) {
   9       *result = p->data;               9       *result = p->data;
  10       read_unlock(&listmutex);        10       rcu_read_unlock();
  11       return 1;                       11       return 1;
  12     }                                 12     }
  13   }                                   13   }
  14   read_unlock(&listmutex);            14   rcu_read_unlock();
  15   return 0;                           15   return 0;
  16 }                                     16 }

::

   1 int delete(long key)                   1 int delete(long key)
   2 {                                      2 {
   3   struct el *p;                        3   struct el *p;
   4                                        4
   5   write_lock(&listmutex);              5   spin_lock(&listmutex);
   6   list_for_each_entry(p, head, lp) {   6   list_for_each_entry(p, head, lp) {
   7     if (p->key == key) {               7     if (p->key == key) {
   8       list_del(&p->list);              8       list_del_rcu(&p->list);
   9       write_unlock(&listmutex);        9       spin_unlock(&listmutex);
                                           10       synchronize_rcu();
  10       kfree(p);                       11       kfree(p);
  11       return 1;                       12       return 1;
  12     }                                 13     }
  13   }                                   14   }
  14   write_unlock(&listmutex);           15   spin_unlock(&listmutex);
  15   return 0;                           16   return 0;
  16 }                                     17 }

Either way, the differences are quite small. Read-side locking moves
to rcu_read_lock() and rcu_read_unlock(), update-side locking moves from
a reader-writer lock to a simple spinlock, and a synchronize_rcu()
precedes the kfree().

However, there is one potential catch: the read-side and update-side
critical sections can now run concurrently. In many cases, this will
not be a problem, but it is necessary to check carefully regardless.
For example, if multiple independent list updates must be seen as
a single atomic update, converting to RCU will require special care.

Also, the presence of synchronize_rcu() means that the RCU version of
delete() can now block. If this is a problem, there is a callback-based
mechanism that never blocks, namely call_rcu() or kfree_rcu(), that can
be used in place of synchronize_rcu().

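
For example, assuming that struct el gains a struct rcu_head field named
"rcu", the inner portion of the RCU-based delete() shown above might
avoid blocking as follows (sketch only)::

    list_del_rcu(&p->list);
    spin_unlock(&listmutex);
    kfree_rcu(p, rcu);   /* Reclaim "p" after a grace period, without blocking. */
    return 1;
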

.. _7_whatisRCU:

7.  ANALOGY WITH REFERENCE COUNTING
-----------------------------------

The reader-writer analogy (illustrated by the previous section) is not
always the best way to think about using RCU. Another helpful analogy
considers RCU an effective reference count on everything which is
protected by RCU.

A reference count typically does not prevent the referenced object's
values from changing, but does prevent changes to type -- particularly the
gross change of type that happens when that object's memory is freed and
re-allocated for some other purpose. Once a type-safe reference to the
object is obtained, some other mechanism is needed to ensure consistent
access to the data in the object. This could involve taking a spinlock,
but with RCU the typical approach is to perform reads with SMP-aware
operations such as smp_load_acquire(), to perform updates with atomic
read-modify-write operations, and to provide the necessary ordering.
RCU provides a number of support functions that embed the required
operations and ordering, such as the list_for_each_entry_rcu() macro
used in the previous section.

A more focused view of the reference counting behavior is that,
between rcu_read_lock() and rcu_read_unlock(), any reference taken with
rcu_dereference() on a pointer marked as ``__rcu`` can be treated as
though a reference-count on that object has been temporarily increased.
This prevents the object from changing type. Exactly what this means
will depend on normal expectations of objects of that type, but it
typically includes that spinlocks can still be safely locked, normal
reference counters can be safely manipulated, and ``__rcu`` pointers
can be safely dereferenced.

Some operations that one might expect to see on an object for
which an RCU reference is held include:

 - Copying out data that is guaranteed to be stable by the object's type.
 - Using kref_get_unless_zero() or similar to get a longer-term
   reference. This may, of course, fail.
 - Acquiring a spinlock in the object, and checking if the object still
   is the expected object and if so, manipulating it freely.

The understanding that RCU provides a reference that only prevents a
change of type is particularly visible with objects allocated from a
slab cache marked ``SLAB_TYPESAFE_BY_RCU``. RCU operations may yield a
reference to an object from such a cache that has been concurrently freed
and the memory reallocated to a completely different object, though of
the same type. In this case RCU doesn't even protect the identity of the
object from changing, only its type. So the object found may not be the
one expected, but it will be one where it is safe to take a reference
(and then potentially acquire a spinlock), allowing subsequent code
to check whether the identity matches expectations. It is tempting
to simply acquire the spinlock without first taking the reference, but
unfortunately any spinlock in a ``SLAB_TYPESAFE_BY_RCU`` object must be
initialized after each and every call to kmem_cache_alloc(), which renders
reference-free spinlock acquisition completely unsafe. Therefore, when
using ``SLAB_TYPESAFE_BY_RCU``, make proper use of a reference counter.
If using refcount_t, the specialized refcount_{add|inc}_not_zero_acquire()
and refcount_set_release() APIs should be used to ensure correct operation
ordering when verifying object identity and when initializing newly
allocated objects. The acquire fence in refcount_{add|inc}_not_zero_acquire()
ensures that identity checks happen *after* the reference count is taken.
refcount_set_release() should be called after a newly allocated object is
fully initialized, and its release fence ensures that the new values are
visible *before* the refcount can be successfully taken by other users.
Once refcount_set_release() is called, the object should be considered
visible to other tasks.
(Those willing to initialize their locks in a kmem_cache constructor
may also use locking, including cache-friendly sequence locking.)

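
For example, a lookup of an object allocated from a ``SLAB_TYPESAFE_BY_RCU``
cache might be sketched as follows, where ``struct myobj``, the
RCU-protected search function ``myobj_find()``, and the reference-dropping
helper ``myobj_put()`` are all hypothetical::

    struct myobj {                  /* Lives in a SLAB_TYPESAFE_BY_RCU cache. */
        refcount_t ref;
        long key;
        /* Other data fields */
    };

    struct myobj *myobj_lookup(long key)
    {
        struct myobj *obj;

        rcu_read_lock();
    again:
        obj = myobj_find(key);      /* Hypothetical RCU-protected search. */
        if (obj) {
            /* Attempt to obtain a longer-term reference. */
            if (!refcount_inc_not_zero_acquire(&obj->ref))
                goto again;         /* Being freed, so look again. */
            /* The memory might have been recycled into a different
             * object of the same type, so recheck identity now that
             * the reference prevents further recycling. */
            if (obj->key != key) {
                myobj_put(obj);
                goto again;
            }
        }
        rcu_read_unlock();
        return obj;
    }

Note that the identity recheck happens only after the reference has been
acquired, as required by the acquire ordering discussed above.
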

With traditional reference counting -- such as that implemented by the
kref library in Linux -- there is typically code that runs when the last
reference to an object is dropped. With kref, this is the function
passed to kref_put(). When RCU is being used, such finalization code
must not be run until all ``__rcu`` pointers referencing the object have
been updated, and then a grace period has passed. Every remaining
globally visible pointer to the object must be considered to be a
potential counted reference, and the finalization code is typically run
using call_rcu() only after all those pointers have been changed.

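
For example, a kref release function for such an object might defer the
actual freeing across a grace period, as in the following sketch (the
"gadget" structure and its fields are hypothetical)::

    static void gadget_free_rcu(struct rcu_head *rp)
    {
        kfree(container_of(rp, struct gadget, rcu));
    }

    /* Invoked by the final kref_put(). */
    static void gadget_release(struct kref *kr)
    {
        struct gadget *g = container_of(kr, struct gadget, ref);

        /* All __rcu pointers to "g" have already been removed, so
         * wait for a grace period before the memory can be reused. */
        call_rcu(&g->rcu, gadget_free_rcu);
    }
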

To see how to choose between these two analogies -- of RCU as a
reader-writer lock and RCU as a reference counting system -- it is useful
to reflect on the scale of the thing being protected. The reader-writer
lock analogy looks at larger multi-part objects such as a linked list
and shows how RCU can facilitate concurrency while elements are added
to, and removed from, the list. The reference-count analogy looks at
the individual objects and shows how they can be accessed safely
within whatever whole they are a part of.

.. _8_whatisRCU:

8.  FULL LIST OF RCU APIs
-------------------------

The RCU APIs are documented in docbook-format header comments in the
Linux-kernel source code, but it helps to have a full list of the
APIs, since there does not appear to be a way to categorize them
in docbook. Here is the list, by category.

RCU list traversal::

    list_entry_rcu
    list_entry_lockless
    list_first_entry_rcu
    list_next_rcu
    list_for_each_entry_rcu
    list_for_each_entry_continue_rcu
    list_for_each_entry_from_rcu
    list_first_or_null_rcu
    list_next_or_null_rcu
    hlist_first_rcu
    hlist_next_rcu
    hlist_pprev_rcu
    hlist_for_each_entry_rcu
    hlist_for_each_entry_rcu_bh
    hlist_for_each_entry_from_rcu
    hlist_for_each_entry_continue_rcu
    hlist_for_each_entry_continue_rcu_bh
    hlist_nulls_first_rcu
    hlist_nulls_for_each_entry_rcu
    hlist_bl_first_rcu
    hlist_bl_for_each_entry_rcu

RCU pointer/list update::

    rcu_assign_pointer
    list_add_rcu
    list_add_tail_rcu
    list_del_rcu
    list_replace_rcu
    hlist_add_behind_rcu
    hlist_add_before_rcu
    hlist_add_head_rcu
    hlist_add_tail_rcu
    hlist_del_rcu
    hlist_del_init_rcu
    hlist_replace_rcu
    list_splice_init_rcu
    list_splice_tail_init_rcu
    hlist_nulls_del_init_rcu
    hlist_nulls_del_rcu
    hlist_nulls_add_head_rcu
    hlist_bl_add_head_rcu
    hlist_bl_del_init_rcu
    hlist_bl_del_rcu
    hlist_bl_set_first_rcu

RCU::

    Critical sections           Grace period                 Barrier

    rcu_read_lock               synchronize_net              rcu_barrier
    rcu_read_unlock             synchronize_rcu
    rcu_dereference             synchronize_rcu_expedited
    rcu_read_lock_held          call_rcu
    rcu_dereference_check       kfree_rcu
    rcu_dereference_protected

bh::

    Critical sections           Grace period                 Barrier

    rcu_read_lock_bh            call_rcu                     rcu_barrier
    rcu_read_unlock_bh          synchronize_rcu
    [local_bh_disable]          synchronize_rcu_expedited
    [and friends]
    rcu_dereference_bh
    rcu_dereference_bh_check
    rcu_dereference_bh_protected
    rcu_read_lock_bh_held

sched::

    Critical sections           Grace period                 Barrier

    rcu_read_lock_sched         call_rcu                     rcu_barrier
    rcu_read_unlock_sched       synchronize_rcu
    [preempt_disable]           synchronize_rcu_expedited
    [and friends]
    rcu_read_lock_sched_notrace
    rcu_read_unlock_sched_notrace
    rcu_dereference_sched
    rcu_dereference_sched_check
    rcu_dereference_sched_protected
    rcu_read_lock_sched_held


RCU-Tasks::

    Critical sections           Grace period                 Barrier

    N/A                         call_rcu_tasks               rcu_barrier_tasks
                                synchronize_rcu_tasks


RCU-Tasks-Rude::

    Critical sections           Grace period                 Barrier

    N/A                         N/A
                                synchronize_rcu_tasks_rude


RCU-Tasks-Trace::

    Critical sections           Grace period                 Barrier

    rcu_read_lock_trace        call_rcu_tasks_trace          rcu_barrier_tasks_trace
    rcu_read_unlock_trace      synchronize_rcu_tasks_trace


SRCU::

    Critical sections           Grace period                 Barrier

    srcu_read_lock              call_srcu                    srcu_barrier
    srcu_read_unlock            synchronize_srcu
    srcu_dereference            synchronize_srcu_expedited
    srcu_dereference_check
    srcu_read_lock_held

SRCU: Initialization/cleanup::

    DEFINE_SRCU
    DEFINE_STATIC_SRCU
    init_srcu_struct
    cleanup_srcu_struct

All: lockdep-checked RCU utility APIs::

    RCU_LOCKDEP_WARN
    rcu_sleep_check

All: Unchecked RCU-protected pointer access::

    rcu_dereference_raw

All: Unchecked RCU-protected pointer access with dereferencing prohibited::

    rcu_access_pointer

See the comment headers in the source code (or the docbook generated
from them) for more information.

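
As a quick taste of a non-vanilla flavor, an SRCU reader looks much like
the vanilla RCU readers shown earlier, except that each srcu_struct forms
its own little RCU domain and srcu_read_lock() returns a value that must
be passed to the matching srcu_read_unlock(). The following sketch is
hypothetical::

    DEFINE_STATIC_SRCU(my_srcu);
    struct foo __rcu *my_foo;       /* Hypothetical SRCU-protected pointer. */

    int my_reader(void)
    {
        struct foo *p;
        int retval, idx;

        idx = srcu_read_lock(&my_srcu);
        p = srcu_dereference(my_foo, &my_srcu);
        retval = p->a;              /* SRCU readers are even permitted to block here. */
        srcu_read_unlock(&my_srcu, idx);
        return retval;
    }

The corresponding updater would then use synchronize_srcu(&my_srcu) or
call_srcu() where the earlier examples used synchronize_rcu() or
call_rcu().
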

However, given that there are no fewer than four families of RCU APIs
in the Linux kernel, how do you choose which one to use? The following
list can be helpful:

a.  Will readers need to block? If so, you need SRCU.

b.  Will readers need to block and are you doing tracing, for
    example, ftrace or BPF? If so, you need RCU-tasks,
    RCU-tasks-rude, and/or RCU-tasks-trace.

c.  What about the -rt patchset? If readers would need to block in
    a non-rt kernel, you need SRCU. If readers would block when
    acquiring spinlocks in a -rt kernel, but not in a non-rt kernel,
    SRCU is not necessary. (The -rt patchset turns spinlocks into
    sleeplocks, hence this distinction.)

d.  Do you need to treat NMI handlers, hardirq handlers,
    and code segments with preemption disabled (whether
    via preempt_disable(), local_irq_save(), local_bh_disable(),
    or some other mechanism) as if they were explicit RCU readers?
    If so, RCU-sched readers are the only choice that will work
    for you, but since about v4.20 you can use the vanilla RCU
    update primitives.

e.  Do you need RCU grace periods to complete even in the face of
    softirq monopolization of one or more of the CPUs? For example,
    is your code subject to network-based denial-of-service attacks?
    If so, you should disable softirq across your readers, for
    example, by using rcu_read_lock_bh(). Since about v4.20 you
    can use the vanilla RCU update primitives.

f.  Is your workload too update-intensive for normal use of
    RCU, but inappropriate for other synchronization mechanisms?
    If so, consider SLAB_TYPESAFE_BY_RCU (which was originally
    named SLAB_DESTROY_BY_RCU). But please be careful!

g.  Do you need read-side critical sections that are respected even
    on CPUs that are deep in the idle loop, during entry to or exit
    from user-mode execution, or on an offlined CPU? If so, SRCU
    and RCU Tasks Trace are the only choices that will work for you,
    with SRCU being strongly preferred in almost all cases.

h.  Otherwise, use RCU.

Of course, this all assumes that you have determined that RCU is in fact
the right tool for your job.

.. _9_whatisRCU:

9.  ANSWERS TO QUICK QUIZZES
----------------------------

Quick Quiz #1:
    Why is this argument naive? How could a deadlock
    occur when using this algorithm in a real-world Linux
    kernel? [Referring to the lock-based "toy" RCU
    algorithm.]

Answer:
    Consider the following sequence of events:

    1.  CPU 0 acquires some unrelated lock, call it
        "problematic_lock", disabling irq via
        spin_lock_irqsave().

    2.  CPU 1 enters synchronize_rcu(), write-acquiring
        rcu_gp_mutex.

    3.  CPU 0 enters rcu_read_lock(), but must wait
        because CPU 1 holds rcu_gp_mutex.

    4.  CPU 1 is interrupted, and the irq handler
        attempts to acquire problematic_lock.

    The system is now deadlocked.

    One way to avoid this deadlock is to use an approach like
    that of CONFIG_PREEMPT_RT, where all normal spinlocks
    become blocking locks, and all irq handlers execute in
    the context of special tasks. In this case, in step 4
    above, the irq handler would block, allowing CPU 1 to
    release rcu_gp_mutex, avoiding the deadlock.


    Even in the absence of deadlock, this RCU implementation
    allows latency to "bleed" from readers to other
    readers through synchronize_rcu(). To see this,
    consider task A in an RCU read-side critical section
    (thus read-holding rcu_gp_mutex), task B blocked
    attempting to write-acquire rcu_gp_mutex, and
    task C blocked in rcu_read_lock() attempting to
    read-acquire rcu_gp_mutex. Task A's RCU read-side
    latency is holding up task C, albeit indirectly via
    task B.

    Realtime RCU implementations therefore use a counter-based
    approach where tasks in RCU read-side critical sections
    cannot be blocked by tasks executing synchronize_rcu().

:ref:`Back to Quick Quiz #1 <quiz_1>`

Quick Quiz #2:
    Give an example where Classic RCU's read-side
    overhead is **negative**.

Answer:
    Imagine a single-CPU system with a non-CONFIG_PREEMPTION
    kernel where a routing table is used by process-context
    code, but can be updated by irq-context code (for example,
    by an "ICMP REDIRECT" packet). The usual way of handling
    this would be to have the process-context code disable
    interrupts while searching the routing table. Use of
    RCU allows such interrupt-disabling to be dispensed with.
    Thus, without RCU, you pay the cost of disabling interrupts,
    and with RCU you don't.

    One can argue that the overhead of RCU in this
    case is negative with respect to the single-CPU
    interrupt-disabling approach. Others might argue that
    the overhead of RCU is merely zero, and that replacing
    the positive overhead of the interrupt-disabling scheme
    with the zero-overhead RCU scheme does not constitute
    negative overhead.

    In real life, of course, things are more complex. But
    even the theoretical possibility of negative overhead for
    a synchronization primitive is a bit unexpected. ;-)

:ref:`Back to Quick Quiz #2 <quiz_2>`

Quick Quiz #3:
    If it is illegal to block in an RCU read-side
    critical section, what the heck do you do in
    CONFIG_PREEMPT_RT, where normal spinlocks can block???

Answer:
    Just as CONFIG_PREEMPT_RT permits preemption of spinlock
    critical sections, it permits preemption of RCU
    read-side critical sections. It also permits
    spinlocks to block while in RCU read-side critical
    sections.

    Why the apparent inconsistency? Because it is
    possible to use priority boosting to keep the RCU
    grace periods short if need be (for example, if running
    short of memory). In contrast, if blocking waiting
    for (say) network reception, there is no way to know
    what should be boosted. Especially given that the
    process we need to boost might well be a human being
    who just went out for a pizza or something. And although
    a computer-operated cattle prod might arouse serious
    interest, it might also provoke serious objections.
    Besides, how does the computer know what pizza parlor
    the human being went to???

:ref:`Back to Quick Quiz #3 <quiz_3>`

ACKNOWLEDGEMENTS

My thanks to the people who helped make this human-readable, including
Jon Walpole, Josh Triplett, Serge Hallyn, Suzanne Wood, and Alan Stern.


For more information, see http://www.rdrop.com/users/paulmck/RCU.