Lines Matching +full:foo +full:- +full:queue

23 See :ref:`Documentation/process/volatile-considered-harmful.rst
28 :ref:`Documentation/core-api/local_ops.rst <local_ops>` for the semantics of
35 #define atomic_set(v, i) ((v)->counter = (i))
52 struct foo { atomic_t counter; };
55 struct foo *k;
59 return -ENOMEM;
60 atomic_set(&k->counter, 0);
70 #define atomic_read(v) ((v)->counter)
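Putting those fragments together, a minimal sketch of how such a counter is
typically defined, initialized and read might look like the following (the
foo_create() helper, its calling convention and the foo_instances counter are
assumptions made here for illustration)::

	#include <linux/atomic.h>
	#include <linux/slab.h>

	static atomic_t foo_instances = ATOMIC_INIT(0);	/* compile-time initialization */

	struct foo { atomic_t counter; };

	static int foo_create(struct foo **out)
	{
		struct foo *k = kmalloc(sizeof(*k), GFP_KERNEL);

		if (!k)
			return -ENOMEM;
		atomic_set(&k->counter, 0);	/* run-time initialization */
		atomic_inc(&foo_instances);
		*out = k;
		return 0;
	}

	static int foo_counter_value(struct foo *k)
	{
		return atomic_read(&k->counter);	/* plain read, no ordering implied */
	}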
179 Don't even -think- about doing this without proper use of memory barriers,
305 Preceding a non-value-returning read-modify-write atomic operation with
307 provides the same full ordering that is provided by value-returning
308 read-modify-write atomic operations.
312 obj->dead = 1;
314 atomic_dec(&obj->ref_count);
319 "1" to obj->dead will be globally visible to other cpus before the
324 to other cpus before the "obj->dead = 1;" assignment.
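The pattern those lines describe, shown in one place (the obj_release()
wrapper is hypothetical; the fields and calls come from the quoted lines)::

	struct obj {
		int		dead;
		atomic_t	ref_count;
	};

	static void obj_release(struct obj *obj)
	{
		obj->dead = 1;

		/*
		 * Order the plain store above against the non-value-returning
		 * atomic below: the "1" stored to obj->dead must be globally
		 * visible before the reference count decrement is.
		 */
		smp_mb__before_atomic();
		atomic_dec(&obj->ref_count);
	}

smp_mb__after_atomic() is the companion barrier for the other direction,
ordering a non-value-returning atomic before the plain stores that follow it.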
335 obj->active = 1;
336 list_add(&obj->list, head);
341 list_del(&obj->list);
342 obj->active = 0;
347 BUG_ON(obj->active);
356 obj = list_entry(head->next, struct obj, list);
357 atomic_inc(&obj->refcnt);
372 obj->ops->poke(obj);
373 if (atomic_dec_and_test(&obj->refcnt))
384 if (atomic_dec_and_test(&obj->refcnt))
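Those fragments come from a reference-counting scheme built around an
"active" flag.  Reassembled as a sketch (the structure layout, the
global_list_lock/global_list names and the function boundaries are filled in
here for illustration and may differ from the original example)::

	#include <linux/atomic.h>
	#include <linux/bug.h>
	#include <linux/list.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>

	struct obj;

	struct obj_ops {
		void (*poke)(struct obj *);
	};

	struct obj {
		struct list_head	list;
		const struct obj_ops	*ops;
		atomic_t		refcnt;
		int			active;
	};

	static DEFINE_SPINLOCK(global_list_lock);
	static LIST_HEAD(global_list);

	static void obj_list_add(struct obj *obj, struct list_head *head)
	{
		obj->active = 1;
		list_add(&obj->list, head);
	}

	static void obj_list_del(struct obj *obj)
	{
		list_del(&obj->list);
		obj->active = 0;
	}

	static void obj_destroy(struct obj *obj)
	{
		BUG_ON(obj->active);		/* must be inactive by the time it dies */
		kfree(obj);
	}

	static struct obj *obj_list_peek(struct list_head *head)
	{
		struct obj *obj = NULL;

		if (!list_empty(head)) {
			obj = list_entry(head->next, struct obj, list);
			atomic_inc(&obj->refcnt);	/* caller now owns a reference */
		}
		return obj;
	}

	static void obj_poke(void)
	{
		struct obj *obj;

		spin_lock(&global_list_lock);
		obj = obj_list_peek(&global_list);
		spin_unlock(&global_list_lock);

		if (obj) {
			obj->ops->poke(obj);
			if (atomic_dec_and_test(&obj->refcnt))
				obj_destroy(obj);
		}
	}

	static void obj_timeout(struct obj *obj)
	{
		spin_lock(&global_list_lock);
		obj_list_del(obj);		/* marks the object inactive */
		spin_unlock(&global_list_lock);

		if (atomic_dec_and_test(&obj->refcnt))
			obj_destroy(obj);
	}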
390 This is a simplification of the ARP queue management in the generic
395 Given the above scheme, it must be the case that the obj->active
399 Otherwise, the counter could fall to zero, yet obj->active would still
408 obj->active = 0 ...
415 BUG() triggers since obj->active
417 obj->active update visibility occurs
423 obj->active update does.
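Read together, those fragments describe an interleaving of roughly this shape
(a reconstruction for illustration; the exact timeline in the original may
differ)::

		cpu 0					cpu 1
		obj_list_del(obj)
		  obj->active = 0 ...
		  ... store visibility delayed ...
		atomic_dec_and_test(&obj->refcnt)
		  ... decrement becomes visible ...
							atomic_dec_and_test(&obj->refcnt)
							  counter drops to zero
							obj_destroy(obj)
							  BUG_ON(obj->active) fires, since
							  obj->active is still seen as one
		obj->active update visibility occurs

With the full barrier semantics required of value-returning atomics such as
atomic_dec_and_test(), cpu 0's counter decrement cannot become globally
visible before its obj->active store, so this ordering cannot arise.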
425 As a historical note, 32-bit Sparc used to allow use of only
426 24 bits of its atomic_t type. This was because it used 8 bits
428 type instruction. However, 32-bit Sparc has since moved over
429 to a "hash table of spinlocks" scheme, which allows the full 32-bit
477 paths using these interfaces, so on 64-bit, if the bit is set in the
478 upper 32 bits, then testers will never see that.
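The quoted lines appear to be about the requirement that the bitmask handed
to these interfaces be an unsigned long (or an array of them), not a narrower
type.  A small sketch (the structure and helpers are hypothetical)::

	#include <linux/bitops.h>

	#define FOO_PENDING	0

	struct foo_state {
		/* must be unsigned long: the atomic bitops operate on
		 * unsigned-long-sized, unsigned-long-aligned memory */
		unsigned long flags;
	};

	static void foo_mark_pending(struct foo_state *s)
	{
		set_bit(FOO_PENDING, &s->flags);
	}

	static int foo_test_pending(struct foo_state *s)
	{
		return test_bit(FOO_PENDING, &s->flags);
	}

Had flags been declared as a 32-bit int on a 64-bit kernel, set_bit() would
still treat the address as that of an unsigned long, and a bit landing in the
upper 32 bits would never be observed by code reading the int, which appears
to be the failure mode the quoted lines describe.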
492 obj->dead = 1;
493 if (test_and_set_bit(0, &obj->flags))
495 obj->killed = 1;
498 "obj->dead = 1;" is visible to cpus before the atomic memory operation
501 "obj->killed = 1;" is visible.
532 same as spinlocks). These operate in the same way as their non-_lock/unlock
541 The __clear_bit_unlock version is non-atomic; however, it still implements
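For instance, a minimal bit-lock sketch in the spirit of those lines (the
names are assumptions; test_and_set_bit_lock() provides acquire semantics and
clear_bit_unlock() provides release semantics)::

	#define OBJ_LOCK_BIT	0
	#define OBJ_DIRTY_BIT	1

	struct obj_state {
		unsigned long bits;
	};

	static void obj_state_lock(struct obj_state *s)
	{
		/* acquire: later loads/stores cannot move before this */
		while (test_and_set_bit_lock(OBJ_LOCK_BIT, &s->bits))
			cpu_relax();
	}

	static void obj_state_unlock(struct obj_state *s)
	{
		/* release: earlier loads/stores are visible before the bit clears */
		clear_bit_unlock(OBJ_LOCK_BIT, &s->bits);
	}

	static void obj_state_mark_dirty(struct obj_state *s)
	{
		obj_state_lock(s);
		/* only the lock holder touches the other bits in this word,
		 * so the cheaper non-atomic __set_bit() is sufficient here */
		__set_bit(OBJ_DIRTY_BIT, &s->bits);
		obj_state_unlock(s);
	}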
545 Finally, there are non-atomic versions of the bitmask operations
546 provided. They are used in contexts where some other higher-level SMP
548 expensive non-atomic operations may be used in the implementation.
559 These non-atomic variants also do not require any special memory
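A sketch of the kind of context those lines describe: a bitmap whose updates
are already serialized by a spinlock, so the cheaper non-atomic variants are
sufficient (the bitmap and helpers are hypothetical)::

	#include <linux/bitops.h>
	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(map_lock);
	static unsigned long map[BITS_TO_LONGS(64)];

	static void map_set(unsigned int nr)
	{
		/* map_lock already serializes every update of 'map', so
		 * __set_bit() can be used instead of the atomic set_bit() */
		spin_lock(&map_lock);
		__set_bit(nr, map);
		spin_unlock(&map_lock);
	}

	static int map_test_and_clear(unsigned int nr)
	{
		int was_set;

		spin_lock(&map_lock);
		was_set = __test_and_clear_bit(nr, map);
		spin_unlock(&map_lock);
		return was_set;
	}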
563 memory-barrier semantics as the atomic and bit operations returning
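If that line is completed the way the kernel's atomic documentation usually
puts it, xchg() and cmpxchg() must provide the same memory-barrier semantics
as the value-returning atomic and bit operations.  A sketch of the usual
cmpxchg()-style retry loop (the helper is hypothetical)::

	/* atomically increment v, but only while it stays below 'limit' */
	static int counter_inc_if_below(atomic_t *v, int limit)
	{
		int old, seen;

		for (;;) {
			old = atomic_read(v);
			if (old >= limit)
				return 0;		/* would exceed the limit */
			seen = atomic_cmpxchg(v, old, old + 1);
			if (seen == old)
				return 1;		/* our update won the race */
			/* another cpu changed the counter; retry with the new value */
		}
	}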
583 architecture-neutral version implemented in lib/dec_and_lock.c,
626 Let's use cas() in order to build a pseudo-C atomic_dec_and_lock()::
636 new = old - 1;
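The quoted "new = old - 1;" is from that pseudo-C.  A fuller sketch in the
same spirit, assuming cas(ptr, old, new) atomically stores new at *ptr if
*ptr == old and returns the value it actually found there (both that
signature and the body below are a reconstruction, not the original
listing)::

	static int atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
	{
		int old, new;

		/* fast path: decrement with cas() while the result stays above zero */
		for (;;) {
			old = atomic_read(atomic);
			new = old - 1;
			if (new == 0)
				break;			/* would hit zero; take the slow path */
			if (cas(&atomic->counter, old, new) == old)
				return 0;		/* decremented, lock not taken */
			/* cas() lost a race with another cpu; reload and retry */
		}

		/* slow path: the transition to zero is made with the lock held */
		spin_lock(lock);
		for (;;) {
			old = atomic_read(atomic);
			new = old - 1;
			if (cas(&atomic->counter, old, new) == old)
				break;
		}
		if (new == 0)
			return 1;			/* counter hit zero; caller holds the lock */
		spin_unlock(lock);
		return 0;
	}

The point of the slow path is the one atomic_dec_and_lock() exists for: the
decrement that can reach zero happens with the lock held, so the caller that
sees zero returns owning the lock and can tear the protected state down
safely.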