Lines matching full:locked (hits from line 145 onward are in queued_spin_lock_slowpath()):

  48  * unlock the next pending (next->locked), we compress both these: {tail,
  49  * next->locked} into a single u32 value.
 145  * Wait for in-progress pending->locked hand-overs with a bounded
 191  * store-release that clears the locked bit and create lock
 197  smp_cond_load_acquire(&lock->locked, !VAL);
 251  node->locked = 0;
 291  arch_mcs_spin_lock_contended(&node->locked);
 311  * store-release that clears the locked bit and create lock
 318  * been designated yet, there is no way for the locked value to become
 326  goto locked;
 330  locked:
 370  arch_mcs_spin_unlock_contended(&next->locked);