
On atomic types (atomic_t, atomic64_t and atomic_long_t).

The atomic type provides an interface to the architecture's means of atomic
RMW operations between CPUs (atomic operations on MMIO are not supported and
can lead to fatal traps on some platforms).

API
---

The 'full' API consists of (atomic64_ and atomic_long_ prefixes omitted for
brevity):

Non-RMW ops:

  atomic_read(), atomic_set()
  atomic_read_acquire(), atomic_set_release()


RMW atomic operations:

Arithmetic:

  atomic_{add,sub,inc,dec}()
  atomic_{add,sub,inc,dec}_return{,_relaxed,_acquire,_release}()
  atomic_fetch_{add,sub,inc,dec}{,_relaxed,_acquire,_release}()


Bitwise:

  atomic_{and,or,xor,andnot}()
  atomic_fetch_{and,or,xor,andnot}{,_relaxed,_acquire,_release}()


Swap:

  atomic_xchg{,_relaxed,_acquire,_release}()
  atomic_cmpxchg{,_relaxed,_acquire,_release}()
  atomic_try_cmpxchg{,_relaxed,_acquire,_release}()


Barriers:

  smp_mb__{before,after}_atomic()

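As an aside, a minimal illustration of the API (the event counter here is
hypothetical, not kernel code):

	#include <linux/atomic.h>

	static atomic_t nr_events = ATOMIC_INIT(0);

	static inline void event_record(void)
	{
		atomic_inc(&nr_events);	/* unconditional RMW, no return value */
	}

	static inline int event_drain(void)
	{
		/* swap op; returns the old count and zeroes the counter */
		return atomic_xchg(&nr_events, 0);
	}
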
TYPES (signed vs unsigned)
-----

While atomic_t, atomic_long_t and atomic64_t use int, long and s64
respectively (for hysterical raisins), the kernel uses -fno-strict-overflow
(which implies -fwrapv) and defines signed overflow to behave like
2s-complement.

Therefore, an explicitly unsigned variant of the atomic ops is strictly
unnecessary and we can simply cast, there is no UB.

There was a bug in UBSAN prior to GCC-8 that would generate UB warnings for
signed types.

With this we also conform to the C/C++ _Atomic behaviour and things like
P1236R1.

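For example, a sketch of how a user wanting unsigned semantics can simply
cast (the helper name is illustrative):

	/* unsigned wrap-around is well-defined here, because signed
	 * overflow behaves like 2s-complement under -fno-strict-overflow */
	static inline unsigned int seq_next(atomic_t *v)
	{
		return (unsigned int)atomic_inc_return(v);
	}
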
SEMANTICS
---------

Non-RMW ops:

The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
smp_store_release() respectively. Therefore, if you find yourself only using
the non-RMW operations of atomic_t, you do not in fact need atomic_t at all
and are doing it wrong.

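A sketch of that canonical mapping (the my_ helpers are illustrative, not
the kernel's actual generic implementation):

	static inline int my_atomic_read(const atomic_t *v)
	{
		return READ_ONCE(v->counter);		/* plain atomic load */
	}

	static inline int my_atomic_read_acquire(const atomic_t *v)
	{
		return smp_load_acquire(&v->counter);	/* load with ACQUIRE */
	}

	static inline void my_atomic_set(atomic_t *v, int i)
	{
		WRITE_ONCE(v->counter, i);		/* plain atomic store */
	}
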
A note for the implementation of atomic_set{}() is that it must not break the
atomicity of the RMW ops. That is:

  C Atomic-RMW-ops-are-atomic-WRT-atomic_set

  {
    atomic_t v = ATOMIC_INIT(1);
  }

  P0(atomic_t *v)
  {
    (void)atomic_add_unless(v, 1, 0);
  }

  P1(atomic_t *v)
  {
    atomic_set(v, 0);
  }

  exists
  (v=2)

In this case we would expect the atomic_set() from CPU1 to either happen
before the atomic_add_unless(), in which case that latter one would no-op, or
_after_ in which case we'd overwrite its result. In no case is "2" a valid
outcome.

This is typically true on 'normal' platforms, where a regular competing STORE
will invalidate a LL/SC or fail a CMPXCHG.

The obvious case where this is not so is when we need to implement atomic ops
with a lock:

  CPU0						CPU1

  atomic_add_unless(v, 1, 0);
    lock();
    ret = READ_ONCE(v->counter); // == 1
						atomic_set(v, 0);
    if (ret != u)				  WRITE_ONCE(v->counter, 0);
      WRITE_ONCE(v->counter, ret + 1);
    unlock();

the typical solution is then to implement atomic_set{}() with atomic_xchg().

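A sketch of that solution (illustrative; real lock-based fallbacks live in
the architecture code):

	static inline void my_atomic_set(atomic_t *v, int i)
	{
		/* xchg() is itself an RMW op and thus takes the same lock
		 * as all other RMW ops; it can never observe or expose an
		 * intermediate value */
		(void)atomic_xchg(v, i);
	}
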
RMW ops:

These come in various forms:

 - plain operations without return value: atomic_{}()

 - operations which return the modified value: atomic_{}_return()

   these are limited to the arithmetic operations because those are
   reversible. Bitops are irreversible and therefore the modified value
   is of dubious utility.

 - operations which return the original value: atomic_fetch_{}()

 - swap operations: xchg(), cmpxchg() and try_cmpxchg()

 - misc; the special purpose operations that are commonly used and would,
   given the interface, normally be implemented using (try_)cmpxchg loops but
   are time critical and can, (typically) on LL/SC architectures, be more
   efficiently implemented.

All these operations are SMP atomic; that is, the operations (for a single
atomic variable) can be fully ordered and no intermediate state is lost or
visible.

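To make the return-value distinction concrete, an illustrative sequence (the
values in the comments assume no concurrent modification of v):

	atomic_t v = ATOMIC_INIT(3);
	int r;

	atomic_add(2, &v);		/* no return value; v == 5 */
	r = atomic_add_return(2, &v);	/* r == 7, the modified value */
	r = atomic_fetch_add(2, &v);	/* r == 7, the original value; v == 9 */
	r = atomic_xchg(&v, 0);		/* r == 9, the original value; v == 0 */
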
ORDERING  (go read memory-barriers.txt first)
--------

The rule of thumb:

 - non-RMW operations are unordered;

 - RMW operations that have no return value are unordered;

 - RMW operations that have a return value are fully ordered;

 - RMW operations that are conditional are unordered on FAILURE,
   otherwise the above rules apply.

Except of course when an operation has an explicit ordering like:

 {}_relaxed: unordered
 {}_acquire: the R of the RMW (or atomic_read) is an ACQUIRE
 {}_release: the W of the RMW (or atomic_set) is a RELEASE

Fully ordered primitives are ordered against everything prior and everything
subsequent. Therefore a fully ordered primitive is like having an smp_mb()
before and an smp_mb() after the primitive.

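Applied to concrete operations, a rough illustration of these rules (the
choice of ops is arbitrary):

	atomic_inc(&v);			/* RMW without return value: unordered */
	r = atomic_inc_return(&v);	/* RMW with return value: fully ordered */
	r = atomic_inc_return_relaxed(&v);	/* explicitly unordered */
	ok = atomic_try_cmpxchg(&v, &old, 1);	/* conditional: ordered per the
						   above on success, unordered
						   on failure */
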
The barriers:

  smp_mb__{before,after}_atomic()

only apply to the RMW atomic ops and can be used to augment/upgrade the
ordering inherent to the op. These barriers act almost like a full smp_mb():
smp_mb__before_atomic() orders all earlier accesses against the RMW op
itself and all accesses following it, and smp_mb__after_atomic() orders all
later accesses against the RMW op and all accesses preceding it. However,
accesses between the smp_mb__{before,after}_atomic() and the RMW op are not
ordered, so it is advisable to place the barrier right next to the RMW atomic
op whenever possible.

These helper barriers exist because architectures have varying implicit
ordering on their SMP atomic primitives. For example our TSO architectures
provide full ordered atomics and these barriers are no-ops.

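To make the augment/upgrade concrete: in terms of ordering, a fully ordered
op decomposes into the relaxed op bracketed by these barriers (though the
fully ordered op might well be implemented more efficiently):

	atomic_fetch_add(1, &v);

	/* is ordered like: */

	smp_mb__before_atomic();
	atomic_fetch_add_relaxed(1, &v);
	smp_mb__after_atomic();
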
Further, while something like:

  smp_mb__before_atomic();
  atomic_dec(&X);

is a 'typical' RELEASE pattern, the barrier is strictly stronger than
a RELEASE because it orders preceding instructions against both the read
and write parts of the atomic_dec(), and against all following instructions
as well. Similarly, something like:

  atomic_inc(&X);
  smp_mb__after_atomic();

is an ACQUIRE pattern (though very much not typical), but again the barrier
is strictly stronger than ACQUIRE. As illustrated:

  C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire

  {
  }

  P0(int *x, atomic_t *y)
  {
    r0 = READ_ONCE(*x);
    smp_rmb();
    r1 = atomic_read(y);
  }

  P1(int *x, atomic_t *y)
  {
    atomic_inc(y);
    smp_mb__after_atomic();
    WRITE_ONCE(*x, 1);
  }

  exists
  (0:r0=1 /\ 0:r1=0)

This should not happen; but a hypothetical atomic_inc_acquire() --
(void)atomic_fetch_inc_acquire() for instance -- would allow the outcome,
because it would not order the W part of the RMW against the following
WRITE_ONCE.

CMPXCHG vs TRY_CMPXCHG
----------------------

  int atomic_cmpxchg(atomic_t *ptr, int old, int new);
  bool atomic_try_cmpxchg(atomic_t *ptr, int *oldp, int new);

Both provide the same functionality, but try_cmpxchg() can lead to more
compact code. The functions relate like:

	bool atomic_try_cmpxchg(atomic_t *ptr, int *oldp, int new)
	{
		int ret, old = *oldp;

		ret = atomic_cmpxchg(ptr, old, new);
		if (ret != old)
			*oldp = ret;
		return ret == old;
	}

and:

	int atomic_cmpxchg(atomic_t *ptr, int old, int new)
	{
		(void)atomic_try_cmpxchg(ptr, &old, new);
		return old;
	}

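A usage sketch of why try_cmpxchg() leads to more compact code: the same
increment loop written both ways (the loop bodies are illustrative):

	/* cmpxchg(): the caller must refresh @old on failure */
	old = atomic_read(&v);
	for (;;) {
		new = old + 1;
		tmp = atomic_cmpxchg(&v, old, new);
		if (tmp == old)
			break;
		old = tmp;
	}

	/* try_cmpxchg(): the failed value is written back into @old */
	old = atomic_read(&v);
	do {
		new = old + 1;
	} while (!atomic_try_cmpxchg(&v, &old, new));

On some platforms (notably x86) try_cmpxchg() also maps more directly onto
the underlying instruction and generates better code.
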
FORWARD PROGRESS
----------------

In general strong forward progress is expected of all unconditional atomic
operations -- those in the Arithmetic and Bitwise classes and xchg(). However
a fair amount of code also requires forward progress from the conditional
atomic operations.

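The conditional ops are typically used in a loop; a sketch of the common
pattern (the increment is illustrative):

	int val = atomic_read(&v);

	do {
		new = val + 1;
	} while (!atomic_try_cmpxchg(&v, &val, new));

On an LL/SC architecture the compare sits between the LL and the SC, and a
failed compare branches forward, past the SC.
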
However, even the forward branch from the failed compare can cause the LL/SC
to fail on some architectures, let alone whatever the compiler makes of the C
loop body. As a result there is no guarantee whatsoever that the cacheline
containing @v will stay on the local CPU and progress is made.

Even native CAS architectures can fail to provide forward progress for their
primitive (see Sparc64 for an example); such implementations are strongly
encouraged to add an exponential backoff to a failed CAS in order to ensure
some progress under contention. Affected architectures are also strongly
encouraged to inspect/audit the atomic fallbacks, refcount_t and their
locking primitives.
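
As a caller-side sketch of the backoff idea (architectures would do this
inside their CAS primitive; the constants are arbitrary):

	int delay = 1, i;
	int old = atomic_read(&v);

	while (!atomic_try_cmpxchg(&v, &old, old + 1)) {
		for (i = 0; i < delay; i++)
			cpu_relax();		/* stall a little on failure */
		if (delay < 1024)
			delay <<= 1;		/* back off exponentially */
	}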