
1 Explanation of the Linux-Kernel Memory Consistency Model
15 7. THE PROGRAM ORDER RELATION: po AND po-loc
18 10. THE READS-FROM RELATION: rf, rfi, and rfe
20 12. THE FROM-READS RELATION: fr, fri, and fre
22 14. PROPAGATION ORDER RELATION: cumul-fence
28 20. THE HAPPENS-BEFORE RELATION: hb
29 21. THE PROPAGATES-BEFORE RELATION: pb
30 22. RCU RELATIONS: rcu-link, rcu-gp, rcu-rscsi, rcu-order, rcu-fence, and rb
38 ------------
40 The Linux-kernel memory consistency model (LKMM) is rather complex and
42 linux-kernel.bell and linux-kernel.cat files that make up the formal
68 ----------
86 factors such as DMA and mixed-size accesses.) But on multiprocessor
97 ----------------
158 predict that r1 = 42 or r2 = -7, because neither of those values ever
180 ----------------------------
254 -------------------
285 ------
305 Atomic read-modify-write accesses, such as atomic_inc() or xchg(),
312 logical computations, control-flow instructions, or accesses to
318 is concerned only with the store itself -- its value and its address
319 -- not the computation leading up to it.
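For instance, consider a sketch along these lines (x and y are assumed
to be shared ints, r1 a local variable):

	int r1;

	r1 = READ_ONCE(x);		/* load event: read of x */
	WRITE_ONCE(y, r1 * 17 + 3);	/* store event: write to y */

The memory model sees only two events here: a load from x and a store
to y with whatever value was computed. The multiplication and addition
take place in private storage and generate no events of their own.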
327 THE PROGRAM ORDER RELATION: po AND po-loc
328 -----------------------------------------
335 that X is po-before Y (written as "X ->po Y" in formulas) if X occurs
338 This is inherently a single-CPU relation; two instructions executing
342 po-loc is a sub-relation of po. It links two memory accesses when the
344 same memory location (the "-loc" suffix).
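For example, in a sketch like this (x and y assumed to be shared
variables):

	P0()
	{
		int r0;

		WRITE_ONCE(x, 1);	/* A */
		WRITE_ONCE(y, 1);	/* B */
		r0 = READ_ONCE(x);	/* C */
	}

we have A ->po B ->po C (and A ->po C), but the only po-loc link among
the three is A ->po-loc C, since A and C are the only pair of accesses
to the same location.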
347 program order we need to explain. The LKMM was inspired by low-level
374 need not even be stored in normal memory at all -- in principle a
380 ---------
421 memory model cannot assume there is a fixed program order relation
427 ------------------------------------------
482 come earlier in program order. Symbolically, if we have R ->data X,
483 R ->addr X, or R ->ctrl X (where R is a read event), then we must also
484 have R ->po X. It wouldn't make sense for a computation to depend
489 THE READS-FROM RELATION: rf, rfi, and rfe
490 -----------------------------------------
492 The reads-from relation (rf) links a write event to a read event when
495 write W ->rf R to indicate that the load R reads from the store W. We
496 distinguish between the cases where the load and the store occur on
497 the same CPU (internal reads-from, or rfi) and where they occur on
498 different CPUs (external reads-from, or rfe).
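As an illustration, consider this sketch (x assumed to be shared and
initially 0):

	int x = 0;

	P0()
	{
		int r0;

		WRITE_ONCE(x, 1);
		r0 = READ_ONCE(x);
	}

	P1()
	{
		int r1;

		r1 = READ_ONCE(x);
	}

If r0 = 1 then P0's load read from P0's own store, an rfi link; if
r1 = 1 then P1's load read from a store on a different CPU, an rfe
link. In either case the store and the load are related by rf.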
506 of load-tearing, where a load obtains some of its bits from one store
508 and WRITE_ONCE() will prevent load-tearing; it's not possible to have:
527 On the other hand, load-tearing is unavoidable when mixed-size
548 If r1 = 0x56781234 (little-endian!) at the end, then P1 must have read
549 from both of P0's stores. It is possible to handle mixed-size and
556 ------------------------------------------------------------------
559 multi-processor system, the CPUs must share a consistent view of the
575 hardware-centric view, the order in which the stores get written to
576 x's cache line). We write W ->co W' if W comes before W' in the
583 Write-write coherence: If W ->po-loc W' (i.e., W comes before
585 and W' are two stores, then W ->co W'.
587 Write-read coherence: If W ->po-loc R, where W is a store and R
591 Read-write coherence: If R ->po-loc W, where R is a load and W
595 Read-read coherence: If R ->po-loc R', where R and R' are two
622 write-write coherence rule: Since the store of 23 comes later in
636 If r1 = 666 at the end, this would violate the read-write coherence
659 would violate the read-read coherence rule: The r1 load comes before
666 encoded in Itanium's Very-Long-Instruction-Word format, and it is yet
670 Just like the po relation, co is inherently an ordering -- it is not
678 related by po. Coherence order is strictly per-location, or if you
682 THE FROM-READS RELATION: fr, fri, and fre
683 -----------------------------------------
685 The from-reads relation (fr) can be a little difficult for people to
687 overwritten by a store. In other words, we have R ->fr W when the
707 stores to x, there would also be fr links from the READ_ONCE() to
716 event W for the same location, we will have R ->fr W if and only if
717 the write which R reads from is co-before W. In symbols,
719 (R ->fr W) := (there exists W' with W' ->rf R and W' ->co W).
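For example (a sketch, with x assumed to be initially 0):

	int x = 0;

	P0()
	{
		int r0;

		r0 = READ_ONCE(x);
	}

	P1()
	{
		WRITE_ONCE(x, 8);
	}

If r0 = 0 at the end then P0's load read from x's initial value, which
is co-before P1's store, so the load is related to that store by fr
(fre, in fact, since the two events are on different CPUs). If instead
r0 = 8, the load read from the store itself and there is no fr link
between them.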
723 --------------------
742 arrange for the store to be co-later than (i.e., to overwrite) any
746 whether there are any as-yet unexecuted store instructions, for the
748 uses the value of the po-latest such store as the value obtained by R,
752 of the co-latest store to the location in question which has already
761 First-In-First-Out order, and consequently the processing delay
763 have a partitioned design that results in non-FIFO behavior. We will
781 the CPU to execute all po-earlier instructions before any
782 po-later instructions;
784 smp_rmb() forces the CPU to execute all po-earlier loads
785 before any po-later loads;
787 smp_wmb() forces the CPU to execute all po-earlier stores
788 before any po-later stores;
792 part of an smp_load_acquire()) before any po-later
796 execute all po-earlier instructions before the store
803 For each other CPU C', smp_wmb() forces all po-earlier stores
804 on C to propagate to C' before any po-later stores do.
807 a release fence is executed (including all po-earlier
812 executed (including all po-earlier stores on C) is forced to
813 propagate to all other CPUs before any instructions po-after
820 strong fences are A-cumulative. By contrast, smp_wmb() fences are not
821 A-cumulative; they only affect the propagation of stores that are
829 PROPAGATION ORDER RELATION: cumul-fence
830 ---------------------------------------
833 smp_wmb() fences) are collectively referred to as cumul-fences, even
834 though smp_wmb() isn't A-cumulative. The cumul-fence relation is
841 where either X = E or else E ->rf X; or
844 order, where either X = E or else E ->rf X.
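As a minimal sketch of the most common case (x and y assumed to be
shared and initially 0):

	P0()
	{
		WRITE_ONCE(x, 1);
		smp_wmb();
		WRITE_ONCE(y, 1);
	}

Here the store to x is related to the store to y by cumul-fence, thanks
to the smp_wmb() fence between them, so the store to x must propagate
to any given CPU before the store to y does. If a release fence were
used instead, the relation would also cover a store on some other CPU
that an access po-before the fence read from (the E ->rf X case above),
since release fences are A-cumulative.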
847 and W ->cumul-fence W', then W must propagate to any given CPU
853 -------------------------------------------------
866 Atomicity: This requires that atomic read-modify-write
870 Happens-before: This requires that certain instructions are
876 Rcu: This requires that RCU read-side critical sections and
878 Grace-Period Guarantee.
880 Plain-coherence: This requires that plain memory accesses
885 memory models (such as those for C11/C++11). The "happens-before" and
887 "rcu" and "plain-coherence" axioms are specific to the LKMM.
893 -----------------------------------
895 According to the principle of cache coherence, the stores to any fixed
904 and po-loc relations agree with this global ordering; in other words,
905 whenever we have X ->rf Y or X ->co Y or X ->fr Y or X ->po-loc Y, the
911 X0 -> X1 -> X2 -> ... -> Xn -> X0,
913 where each of the links is either rf, co, fr, or po-loc. This has to
914 hold if the accesses to the fixed memory location can be ordered as
923 -------------------
925 What does it mean to say that a read-modify-write (rmw) update, such
949 atomic read-modify-write and W' is the write event which R reads from,
953 (R ->rmw W) implies (there is no X with R ->fr X and X ->co W),
955 where the rmw relation links the read and write events making up each
960 -----------------------------------------
964 "preserved program order") relation, which links the po-earlier
965 instruction to the po-later instruction and is thus a sub-relation of
969 situation: Fences are a source of ppo links. Suppose X and Y are
970 memory accesses with X ->po Y; then the CPU must execute X before Y if
989 X and Y are both loads, X ->addr Y (i.e., there is an address
1011 To be fair about it, all Linux-supported architectures do execute
1015 the split-cache design used by Alpha can cause it to behave in a way
1023 store and a second, po-later load reads from that store:
1025 R ->dep W ->rfi R',
1052 R ->po-loc W
1054 (the po-loc link says that R comes before W in program order and they
1058 violation of the read-write coherence rule. Similarly, if we had
1060 W ->po-loc W'
1064 overwrite W', in violation of the write-write coherence rule.
1066 allowing out-of-order writes like this to occur. The model avoided
1067 violating the write-write coherence rule by requiring the CPU not to
1072 ------------------------
1079 int y = -1;
1126 effect of the fence is to cause the CPU not to execute any po-later
1147 share this property: They do not allow the CPU to execute any po-later
1148 instructions (or po-later loads in the case of smp_rmb()) until all
1151 po-earlier stores to propagate to every other CPU in the system; then
1153 as of that time -- not just the stores received when the strong fence
1160 THE HAPPENS-BEFORE RELATION: hb
1161 -------------------------------
1163 The happens-before relation (hb) links memory accesses that have to
1167 W ->rfe R implies that W and R are on different CPUs. It also means
1170 must have executed before R, and so we have W ->hb R.
1172 The equivalent fact need not hold if W ->rfi R (i.e., W and R are on
1179 W ->coe W'. This means that W and W' are stores to the same location,
1185 R ->fre W means that W overwrites the value which R reads, but it
1190 The third relation included in hb is like ppo, in that it only links
1193 cache coherence. The relation is called prop, and it links two events
1295 outcome is impossible -- as it should be.
1298 followed by an arbitrary number of cumul-fence links, ending with an
1302 followed by two cumul-fences and an rfe link, utilizing the fact that
1303 release fences are A-cumulative:
1334 store to y does (the first cumul-fence), the store to y propagates to P2
1337 store to z does (the second cumul-fence), and P0's load executes after the
1340 In summary, the fact that the hb relation links memory access events
1341 in the order they execute means that it cannot have cycles. This
1342 requirement is the content of the LKMM's "happens-before" axiom.
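To see the axiom at work, here is a sketch of a standard
message-passing test using a release/acquire pair (buf and flag are
assumed to be shared and initially 0):

	int buf = 0, flag = 0;

	P0()
	{
		WRITE_ONCE(buf, 1);
		smp_store_release(&flag, 1);
	}

	P1()
	{
		int r1;
		int r2;

		r1 = smp_load_acquire(&flag);
		r2 = READ_ONCE(buf);
	}

If r1 = 1 and r2 = 0, the two loads in P1 would have to be hb-before
each other: the acquire load is hb-before the load of buf by ppo,
while the load of buf would be hb-before the acquire load by prop (its
fre link to P0's store to buf, a cumul-fence to the release store, and
the rfe back to the acquire). Since hb cannot have cycles, the LKMM
rules this outcome out.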
1350 THE PROPAGATES-BEFORE RELATION: pb
1351 ----------------------------------
1353 The propagates-before (pb) relation capitalizes on the special
1354 features of strong fences. It links two events E and F whenever some
1355 store is coherence-later than E and propagates to every CPU and to RAM
1357 F via a coe or fre link, an arbitrary number of cumul-fences, an
1359 links. Let's see how this definition works out.
1362 of links begins with coe). Then there are events W, X, Y, and Z such
1365 E ->coe W ->cumul-fence* X ->rfe? Y ->strong-fence Z ->hb* F,
1367 where the * suffix indicates an arbitrary number of links of the
1369 be equal to X). Because of the cumul-fence links, we know that W will
1373 And because of the hb links, we know that Z will execute before F.
1384 coherence order, contradicting the fact that E ->coe W. If E was a
1387 contradicting the fact that E ->fre W.
1415 In this example, the sequences of cumul-fence and hb links are empty.
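A test of that shape is store buffering with smp_mb() (a sketch; x and
y are assumed to be shared and initially 0):

	int x = 0, y = 0;

	P0()
	{
		int r0;

		WRITE_ONCE(x, 1);
		smp_mb();
		r0 = READ_ONCE(y);
	}

	P1()
	{
		int r1;

		WRITE_ONCE(y, 1);
		smp_mb();
		r1 = READ_ONCE(x);
	}

If r0 = 0 then P0's load of y is related by fre to P1's store to y,
which is po-before P1's strong fence, so P0's load is pb-before P1's
load of x. Symmetrically, if r1 = 0 then P1's load of x is pb-before
P0's load of y. Both together would form a cycle in pb, so the LKMM
forbids the outcome r0 = 0 && r1 = 0.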
1425 In summary, the fact that the pb relation links events in the order
1426 they execute means that it cannot have cycles. This requirement is
1427 the content of the LKMM's "propagation" axiom.
1430 RCU RELATIONS: rcu-link, rcu-gp, rcu-rscsi, rcu-order, rcu-fence, and rb
1431 ------------------------------------------------------------------------
1433 RCU (Read-Copy-Update) is a powerful synchronization mechanism. It
1434 rests on two concepts: grace periods and read-side critical sections.
1437 synchronize_rcu(). A read-side critical section (or just critical
1443 Grace-Period Guarantee, which states that a critical section can never
1493 suitable places in the RCU-related code. Thus, if a critical section
1506 rcu-link relation. rcu-link encompasses a very general notion of
1509 E ->rcu-link F includes cases where E is po-before some memory-access
1510 event X, F is po-after some memory-access event Y, and we have any of
1511 X ->rfe Y, X ->co Y, or X ->fr Y.
1513 The formal definition of the rcu-link relation is more than a little
1517 about rcu-link is the information in the preceding paragraph.
1519 The LKMM also defines the rcu-gp and rcu-rscsi relations. They bring
1520 grace periods and read-side critical sections into the picture, in the
1523 E ->rcu-gp F means that E and F are in fact the same event,
1527 E ->rcu-rscsi F means that E and F are the rcu_read_unlock()
1528 and rcu_read_lock() fence events delimiting some read-side
1530 that this relation is "inverted": It links the end of the
1533 If we think of the rcu-link relation as standing for an extended
1534 "before", then X ->rcu-gp Y ->rcu-link Z roughly says that X is a
1538 after X ends.) Similarly, X ->rcu-rscsi Y ->rcu-link Z says that X is
1541 The LKMM goes on to define the rcu-order relation as a sequence of
1542 rcu-gp and rcu-rscsi links separated by rcu-link links, in which the
1543 number of rcu-gp links is >= the number of rcu-rscsi links. For
1546 X ->rcu-gp Y ->rcu-link Z ->rcu-rscsi T ->rcu-link U ->rcu-gp V
1548 would imply that X ->rcu-order V, because this sequence contains two
1549 rcu-gp links and one rcu-rscsi link. (It also implies that
1550 X ->rcu-order T and Z ->rcu-order V.) On the other hand:
1552 X ->rcu-rscsi Y ->rcu-link Z ->rcu-rscsi T ->rcu-link U ->rcu-gp V
1554 does not imply X ->rcu-order V, because the sequence contains only
1555 one rcu-gp link but two rcu-rscsi links.
1557 The rcu-order relation is important because the Grace Period Guarantee
1558 means that rcu-order links act kind of like strong fences. In
1559 particular, E ->rcu-order F implies not only that E begins before F
1560 ends, but also that any write po-before E will propagate to every CPU
1561 before any instruction po-after F can execute. (However, it does not
1563 fence event is linked to itself by rcu-order as a degenerate case.)
1568 G ->rcu-gp W ->rcu-link Z ->rcu-rscsi F.
1571 and there are events X, Y and a read-side critical section C such that:
1573 1. G = W is po-before or equal to X;
1577 3. Y is po-before Z;
1583 From 1 - 4 we deduce that the grace period G ends before the critical
1588 executing and hence before any instruction po-after F can execute.
1590 covered by rcu-order.
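Here is a sketch of the classic two-CPU pattern that this property
governs (x and y assumed to be shared and initially 0):

	int x = 0, y = 0;

	P0()
	{
		rcu_read_lock();
		WRITE_ONCE(x, 1);
		WRITE_ONCE(y, 1);
		rcu_read_unlock();
	}

	P1()
	{
		int r1;
		int r2;

		r1 = READ_ONCE(x);
		synchronize_rcu();
		r2 = READ_ONCE(y);
	}

The outcome r1 = 1 && r2 = 0 would mean that part of the critical
section (the store to x) was visible to P1 before the grace period
began, while another part (the store to y) had still not propagated to
P1 after the grace period ended; in other words, the critical section
would have to start before the grace period and end after it, which
the Grace-Period Guarantee forbids. In terms of the relations above,
writing L and U for the rcu_read_lock() and rcu_read_unlock() fence
events and S for the grace period, this outcome would give

	S ->rcu-gp S ->rcu-link U ->rcu-rscsi L ->rcu-link S,

exactly the sort of cycle the model does not allow.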
1592 The rcu-fence relation is a simple extension of rcu-order. While
1593 rcu-order only links certain fence events (calls to synchronize_rcu(),
1594 rcu_read_lock(), or rcu_read_unlock()), rcu-fence links any events
1595 that are separated by an rcu-order link. This is analogous to the way
1596 the strong-fence relation links events that are separated by an
1597 smp_mb() fence event (as mentioned above, rcu-order links act kind of
1598 like strong fences). Written symbolically, X ->rcu-fence Y means
1601 X ->po E ->rcu-order F ->po Y.
1605 every CPU before Y executes. Thus rcu-fence is sort of a
1606 "super-strong" fence: Unlike the original strong fences (smp_mb() and
1607 synchronize_rcu()), rcu-fence is able to link events on different
1608 CPUs. (Perhaps this fact should lead us to say that rcu-fence isn't
1611 Finally, the LKMM defines the RCU-before (rb) relation in terms of
1612 rcu-fence. This is done in essentially the same way as the pb
1613 relation was defined in terms of strong-fence. We will omit the
1614 details; the end result is that E ->rb F implies E must execute
1615 before F, just as E ->pb F does (and for much the same reasons).
1620 and F with E ->rcu-link F ->rcu-order E. Or to put it a third way,
1621 the axiom requires that there are no cycles consisting of rcu-gp and
1622 rcu-rscsi alternating with rcu-link, where the number of rcu-gp links
1623 is >= the number of rcu-rscsi links.
1638 are events Q and R where Q is po-after L (which marks the start of the
1639 critical section), Q is "before" R in the sense used by the rcu-link
1640 relation, and R is po-before the grace period S. Thus we have:
1642 L ->rcu-link S.
1648 some event X which is po-after S. Symbolically, this amounts to:
1650 S ->po X ->hb* Z ->fr W ->rf Y ->po U.
1654 discussion of the rcu-link relation earlier) that S and U are related
1655 by rcu-link:
1657 S ->rcu-link U.
1659 Since S is a grace period we have S ->rcu-gp S, and since L and U are
1660 the start and end of the critical section C we have U ->rcu-rscsi L.
1663 S ->rcu-gp S ->rcu-link U ->rcu-rscsi L ->rcu-link S,
1668 For something a little more down-to-earth, let's see how the axiom
1693 P1's load at W reads from, so we have W ->fre Y. Since S ->po W and
1694 also Y ->po U, we get S ->rcu-link U. In addition, S ->rcu-gp S
1698 so we have X ->rfe Z. Together with L ->po X and Z ->po S, this
1699 yields L ->rcu-link S. And since L and U are the start and end of a
1700 critical section, we have U ->rcu-rscsi L.
1702 Then U ->rcu-rscsi L ->rcu-link S ->rcu-gp S ->rcu-link U is a
1740 that U0 ->rcu-rscsi L0 ->rcu-link S1 ->rcu-gp S1 ->rcu-link U2 ->rcu-rscsi
1741 L2 ->rcu-link U0. However this cycle is not forbidden, because the
1742 sequence of relations contains fewer instances of rcu-gp (one) than of
1743 rcu-rscsi (two). Consequently the outcome is allowed by the LKMM.
1769 Addendum: The LKMM now supports SRCU (Sleepable Read-Copy-Update) in
1771 above, with new relations srcu-gp and srcu-rscsi added to represent
1772 SRCU grace periods and read-side critical sections. There is a
1773 restriction on the srcu-gp and srcu-rscsi links that can appear in an
1774 rcu-order sequence (the srcu-rscsi links must be paired with srcu-gp
1775 links having the same SRCU domain with proper nesting); the details
1780 -------
1810 store-release in a spin_unlock() and the load-acquire which forms the
1812 spin_trylock() -- we can call these things lock-releases and
1813 lock-acquires -- have two properties beyond those of ordinary releases
1816 First, when a lock-acquire reads from a lock-release, the LKMM
1817 requires that every instruction po-before the lock-release must
1818 execute before any instruction po-after the lock-acquire. This would
1851 fences, only to lock-related operations. For instance, suppose P0()
1875 Second, when a lock-acquire reads from a lock-release, and some other
1876 stores W and W' occur po-before the lock-release and po-after the
1877 lock-acquire respectively, the LKMM requires that W must propagate to
1913 These two special requirements for lock-release and lock-acquire do
1921 -----------------------------
1926 operations of one kind or another. Ordinary C-language memory
1978 would be no possibility of a NULL-pointer dereference.
2003 are "race candidates" if they satisfy 1 - 4. Thus, whether or not two
2021 if they can be connected by a sequence of hb, pb, and rb links
2035 Therefore when X is a store, for X and Y to be non-concurrent the LKMM
2038 Y executes before X -- then Y must propagate to X's CPU before X
2040 relation (vis), where X ->vis Y is defined to hold if there is an
2044 cumul-fence links followed by an optional rfe link (if none of
2045 these links are present, X and Z are the same event),
2049 Z is connected to Y by a strong-fence link followed by a
2050 possibly empty sequence of xb links,
2055 empty sequence of xb links (again, if the sequence is empty it
2060 cumul-fence memory barriers force stores that are po-before
2062 po-after the barrier.
2068 strong-fence memory barriers force stores that are po-before
2071 po-after the barrier can execute.
2096 The smp_wmb() memory barrier gives a cumul-fence link from X to W, and
2101 Y). Therefore we have X ->vis Y: X must propagate to Y's CPU before Y
2108 cumul-fence, pb, and so on -- including vis) apply only to marked
2141 corresponding to the first group of accesses will all end po-before
2143 -- even if some of the accesses are plain. (Of course, the CPU may
2184 cumul-fence. It guarantees that no matter what hash of
2186 access U, all those instructions will be po-before the fence.
2193 executed, i.e., X ->vis Y. (And if there is no rfe link then
2198 fence. It guarantees that all the machine-level instructions
2199 corresponding to the access V will be po-after the fence, and
2211 X ->xb* E. If E was also a plain access, we would also look for a
2212 marked access Y such that X ->xb* Y, and Y and E are ordered by a
2214 "post-bounded" by X and E is "pre-bounded" by Y.
2217 "r-post-bounded" by X. Similarly, E would be "r-pre-bounded" or
2218 "w-pre-bounded" by Y, depending on whether E was a store or a load.
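As a sketch of how this bounding machinery gets used, consider the
usual flag-and-data pattern (data and flag assumed to be shared and
initially 0):

	int data = 0;
	int flag = 0;

	P0()
	{
		data = 1;			/* plain store */
		smp_store_release(&flag, 1);
	}

	P1()
	{
		int r1;
		int r2 = 0;

		r1 = smp_load_acquire(&flag);
		if (r1)
			r2 = data;		/* plain load */
	}

The plain store to data is w-post-bounded by the store-release, the
store-release is connected by vis to the load-acquire that reads from
it, and the load-acquire r-pre-bounds the plain load of data. As
described below, this is enough for the LKMM to treat the two plain
accesses to data as non-racing; and in executions where r1 = 1, the
plain load is guaranteed to obtain 1.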
2222 say that a marked access pre-bounds and post-bounds itself (e.g., if R
2225 The need to distinguish between r- and w-bounding raises yet another
2240 w-pre-bounded or w-post-bounded by a marked access, it also requires
2241 the store to be r-pre-bounded or r-post-bounded, so as to handle cases
2249 Incidentally, the other transformation -- augmenting a plain load by
2250 adding in a store to the same location -- is not allowed. This is
2260 The LKMM includes a second way to pre-bound plain accesses, in
2267 the LKMM says that the marked load of ptr pre-bounds the plain load of
2300 rcu_assign_pointer() performs a store-release, so the plain store to b
2301 is definitely w-post-bounded before the store to ptr, and the two
2305 that the load of ptr in P1 is r-pre-bounded before the load of *p
2335 not need to be w-post-bounded: when it is separated from the other
2336 race-candidate access by a fence. At first glance this may seem
2339 Well, normal fences don't -- but rcu-fence can! Here's an example:
2358 Do the plain stores to y race? Clearly not if P1 reads a non-zero
2360 means that the read-side critical section in P1 must finish executing
2361 before the grace period in P0 does, because RCU's Grace-Period
2365 from the READ_ONCE() to the WRITE_ONCE() gives rise to an rcu-link
2368 This means there is an rcu-fence link from P1's "y = 2" store to P0's
2372 isn't w-post-bounded by any marked accesses.
2375 race-candidate stores W and W', where W ->co W', the LKMM says the
2378 w-post-bounded ; vis ; w-pre-bounded
2382 r-post-bounded ; xb* ; w-pre-bounded
2386 w-post-bounded ; vis ; r-pre-bounded
2388 sequence. For race-candidate load R and store W, the LKMM says the
2391 r-post-bounded ; xb* ; w-pre-bounded
2395 w-post-bounded ; vis ; r-pre-bounded
2400 strong-fence ; xb* ; {w and/or r}-pre-bounded
2402 sequence with no post-bounding, and in every case the LKMM also allows
2409 happens-before, propagates-before, and rcu axioms (which state that
2415 called the "plain-coherence" axiom because of their resemblance to the
2422 W by one of the xb* sequences listed above, then W ->rfe R is
2427 R by one of the vis sequences listed above, then R ->fre W is
2429 load must read from that store or one coherence-after it).
2432 to W' by one of the vis sequences listed above, then W' ->co W
2443 -------------
2468 that are part of a non-value-returning atomic update. For instance,
2477 non-value-returning atomic operations effectively to be executed off
2486 smp_store_release() -- which is basically how the Linux kernel treats
2496 pre-bounding by address dependencies, it is possible for the compiler
2504 all po-earlier events against all po-later events, as smp_mb() does,
2507 smp_mb__before_atomic() orders all po-earlier events against
2508 po-later atomic updates and the events following them;
2510 smp_mb__after_atomic() orders po-earlier atomic updates and
2511 the events preceding them against all po-later events;
2513 smp_mb__after_spinlock() orders po-earlier lock acquisition
2514 events and the events preceding them against all po-later
2535 non-deadlocking executions. For example:
2559 will self-deadlock in the executions where it stores 36 in y.