========================================
A description of what robust futexes are
========================================

:Started by: Ingo Molnar <mingo@redhat.com>

Background
----------

Normal futexes are a type of lock that, in the noncontended case, can be
acquired and released from userspace without entering the kernel. A futex
is in essence a user-space address, e.g. a 32-bit lock variable field. If
userspace notices contention (the lock is already owned and someone else
wants to grab it too), the lock word is marked as having a waiter pending
and the sys_futex(FUTEX_WAIT) syscall is used to wait for the owner to
release it.
The kernel creates a 'futex queue' internally, so that it can later on
match up the waiter with the waker - without them having to know about
each other.
When the owner thread releases the futex and sees waiters pending, it does
the sys_futex(FUTEX_WAKE) syscall to wake them up. Once all waiters have
taken and released the lock, the futex is back to 'uncontended'
state, and there's no in-kernel state associated with it. The kernel
completely forgets that there ever was a futex at that address. This makes
futexes very lightweight and scalable.
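
As an illustration, a minimal (non-robust) futex-based lock could look
like the sketch below. It assumes Linux's futex(2) syscall and GCC-style
atomic builtins; the three lock-word states (0 = unlocked, 1 = locked,
2 = locked with a waiter pending) and the helper names are illustrative
only, not part of any existing API::

    #include <linux/futex.h>
    #include <stdint.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static long futex_syscall(uint32_t *uaddr, int op, uint32_t val)
    {
            return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
    }

    static void futex_lock(uint32_t *f)
    {
            uint32_t zero = 0;

            /* Fast path: 0 -> 1 with one atomic op, no kernel entry at all. */
            if (__atomic_compare_exchange_n(f, &zero, 1, 0,
                                            __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
                    return;

            /* Contended path: mark "waiter pending" and sleep in the kernel
             * until the owner does a FUTEX_WAKE. */
            while (__atomic_exchange_n(f, 2, __ATOMIC_ACQUIRE) != 0)
                    futex_syscall(f, FUTEX_WAIT, 2);
    }

    static void futex_unlock(uint32_t *f)
    {
            /* Enter the kernel only if a waiter may be pending. */
            if (__atomic_exchange_n(f, 0, __ATOMIC_RELEASE) == 2)
                    futex_syscall(f, FUTEX_WAKE, 1);
    }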

"Robustness" is about dealing with crashes while holding a lock: if a
process exits prematurely while holding a pthread_mutex_t lock that is
also shared with some other process (e.g. yum segfaults while holding a
pthread_mutex_t, or yum is kill -9-ed), then waiters for that lock need
to be notified that the last owner of the lock exited in some irregular
way.

To solve such types of problems, "robust mutex" userspace APIs were
created: pthread_mutex_lock() returns an error value if the owner exits
prematurely - and the new owner can decide whether the data protected by
the lock can be recovered safely, or whether it has to be treated as
corrupted.
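
For illustration, the application side of this API looks roughly as
follows; recover_protected_data() stands in for whatever
application-specific consistency repair is appropriate::

    #include <errno.h>
    #include <pthread.h>

    extern void recover_protected_data(void);       /* application-specific */

    void init_robust_mutex(pthread_mutex_t *m)
    {
            pthread_mutexattr_t attr;

            pthread_mutexattr_init(&attr);
            pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
            pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
            pthread_mutex_init(m, &attr);
            pthread_mutexattr_destroy(&attr);
    }

    void lock_robust(pthread_mutex_t *m)
    {
            if (pthread_mutex_lock(m) == EOWNERDEAD) {
                    /* The previous owner died holding the lock: we own it
                     * now, but the protected data may be half-updated. */
                    recover_protected_data();
                    pthread_mutex_consistent(m);  /* mark it usable again */
            }
    }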

There is a big conceptual problem with futex based mutexes though: it is
the kernel that destroys the owner task (e.g. due to a SEGFAULT), but the
kernel cannot help with the cleanup: if there is no 'futex queue'
(and in most cases there is none, futexes being fast lightweight locks)
then the kernel has no information to clean up after the held lock!
Userspace has no chance to clean up after the lock either - userspace is
the one that crashes, so it has no opportunity to clean up. Catch-22.

In practice, when e.g. yum is kill -9-ed (or segfaults), a system reboot
is needed to release that futex based lock.

Prior attempts solved this by attaching robust-futex state to the vma
(the virtual memory area descriptor), but that approach had fundamental
problems left:

- it has quite complex locking and race scenarios. The vma-based
  method has been pending for years, and it is still not completely
  reliable.

- they have to scan _every_ vma at sys_exit() time, per thread!

Also, the vma is an awkward place to track lock ownership: the lock might
be held in another task, and the futex variable might have been simply
mmap()-ed into a different address space.

New approach to robust futexes
------------------------------

At the heart of this new approach there is a per-thread private list of
robust locks that userspace is holding (maintained by glibc) - which
userspace list is registered with the kernel via a new syscall [this
registration happens at most once per thread lifetime]. At do_exit()
time, the kernel checks this user-space list: are there any robust futex
locks to be cleaned up?

In the common case, at do_exit() time, there is no list registered, so
the cost of robust futexes is just a simple current->robust_list != NULL
comparison.

If the thread/process crashed or terminated in some incorrect
way then the list might be non-empty: in this case the kernel carefully
walks the list, marks each lock still held by the exiting task as
owner-dead, and wakes up one waiter per lock (if there is one).

The list is guaranteed to be private and per-thread at do_exit() time,
so it can be accessed by the kernel in a lock-free manner.
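
The registered structure mirrors the robust_list_head definition in the
futex UAPI header; a minimal registration sketch (the wrapper function
name is made up here) could look like this::

    #include <linux/futex.h>                /* struct robust_list_head */
    #include <stddef.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* One list head per thread; glibc keeps this in the thread descriptor. */
    static __thread struct robust_list_head robust_head;

    static int register_robust_list(void)
    {
            /* Empty list: the circular ->next pointer points back at the head. */
            robust_head.list.next = &robust_head.list;
            /* Offset from a list entry to its lock word (0 in this sketch). */
            robust_head.futex_offset = 0;
            robust_head.list_op_pending = NULL;

            /* Registration happens at most once per thread lifetime. */
            return syscall(SYS_set_robust_list, &robust_head, sizeof(robust_head));
    }

(In a real program glibc has already registered its own list for each
thread, so this is purely illustrative.)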

There is one small race window: the list-add and list-remove are done
after the futex word itself has been acquired or released, so a thread
could die in between. To cover that, userspace (glibc)
also maintains a simple per-thread 'list_op_pending' field, to allow the
kernel to clean up if the thread dies after acquiring the lock but before
it could have added itself to the list. Glibc sets this field before it
tries to acquire the futex, and clears
it after the list-add (or list-remove) has finished.

That's all that is needed - all the rest of robust-futex cleanup is done
in userspace.
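
Continuing the registration sketch above, the ordering glibc relies on
when taking a robust lock looks roughly like this (acquire_futex() is a
placeholder for the fast-path/FUTEX_WAIT loop, not a real function)::

    extern void acquire_futex(struct robust_list *entry);  /* placeholder */

    void robust_lock(struct robust_list *entry)
    {
            /* 1. Tell the kernel which lock we are about to operate on. */
            robust_head.list_op_pending = entry;

            /* 2. Acquire the futex word itself. */
            acquire_futex(entry);

            /* 3. Link the acquired lock into the per-thread robust list. */
            entry->next = robust_head.list.next;
            robust_head.list.next = entry;

            /* 4. Only now clear the marker: if we die anywhere between
             *    steps 2 and 4, the kernel still finds the lock via
             *    list_op_pending and can mark it owner-dead. */
            robust_head.list_op_pending = NULL;
    }

Real code also needs compiler/memory barriers so these stores cannot be
reordered, and the unlock path uses list_op_pending the same way around
its list-remove.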

Key differences of this userspace-list based approach, compared to the
vma based method:

- it's much, much faster: at thread exit time, there's no need to loop
  over every vma (!), which the VM-based method has to do. Only a very
  simple 'is the list empty' check is needed in the common case.

- no VM changes are needed - 'struct address_space' is left alone.

- no registration of individual locks is needed: robust mutexes don't
  need any extra per-lock syscalls. Robust mutexes thus become a very
  lightweight primitive - so they don't force the application designer
  to do a hard choice between performance and robustness - robust
  mutexes are just as fast.

- no per-lock kernel allocation happens.

- no resource limits are needed.

- no kernel-space recovery call (FUTEX_RECOVER) is needed.

- the implementation and the locking are "obvious", and there are no
  interactions with the VM.

Performance
-----------

I have benchmarked the time needed for the kernel to process a list of
1 million (!) held locks, using the new method:

- with FUTEX_WAIT set [contended mutex]: 130 msecs
- without FUTEX_WAIT set [uncontended mutex]: 30 msecs

A variant where userspace does the lock notification itself measured as
clearly slower, due to the 1 million FUTEX_WAKE syscalls userspace had
to do.

(1 million held locks are unheard of - we expect at most a handful of
locks to be held at exit time; the test was only meant to show how well
the mechanism scales.)

Implementation details
----------------------

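Two new syscalls register and query a thread's list head; their
kernel-side declarations are::

    asmlinkage long sys_set_robust_list(struct robust_list_head __user *head,
                                        size_t len);

    asmlinkage long sys_get_robust_list(int pid,
                                        struct robust_list_head __user * __user *head_ptr,
                                        size_t __user *len_ptr);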

Registration is very cheap: sys_set_robust_list() needs only to store
the list head pointer in
current->robust_list. [Note that in the future, if robust futexes become
widespread, we could extend sys_clone() to register a robust-list head
pointer at thread-creation time, removing even that one extra syscall
per thread.]

At do_exit() time the kernel walks the registered list: for every lock
word still owned by the dying task it sets the FUTEX_OWNER_DIED bit
and wakes up the next futex waiter (if any). User-space does the rest of
the cleanup.
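
Conceptually, the per-lock cleanup is the following - a much simplified,
userspace-style sketch of what the kernel's handle_futex_death() does;
the real code uses a cmpxchg loop on the user address and handles
faults::

    #include <linux/futex.h>   /* FUTEX_TID_MASK, FUTEX_WAITERS, FUTEX_OWNER_DIED */
    #include <stdint.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void handle_lock_death(uint32_t *futex_word, uint32_t dead_tid)
    {
            uint32_t val = *futex_word;

            /* Only touch locks whose owner field records the dying thread. */
            if ((val & FUTEX_TID_MASK) != dead_tid)
                    return;

            /* Clear the owner TID, keep the waiters bit, flag the death. */
            *futex_word = (val & FUTEX_WAITERS) | FUTEX_OWNER_DIED;

            /* Wake one waiter so it can observe FUTEX_OWNER_DIED. */
            if (val & FUTEX_WAITERS)
                    syscall(SYS_futex, futex_word, FUTEX_WAKE, 1, NULL, NULL, 0);
    }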

Testing, architecture support
-----------------------------

Care has been taken to make sure that the kernel's
parsing of the userspace list is robust [ ;-) ] even if the list is
deliberately corrupted.

The new glibc code has been tested as well, and it works for its
robust-mutex testcases.

The new syscalls are wired up on only a couple of architectures so far.
All other architectures should build just fine too - but they won't have
the new syscalls yet.