Lock-class
----------

The validator tracks the 'state' of lock-classes, and it tracks
dependencies between different lock-classes. The validator maintains a
rolling proof that the state and the dependencies are correct.

Unlike a lock instantiation, the lock-class itself never goes away: when
a lock-class is used for the first time after bootup it gets registered,
and all subsequent uses of that lock-class will be attached to this
lock-class.
State
-----

The validator tracks lock-class usage history into 4n + 1 separate state bits:

- 'ever held in STATE context'
- 'ever held as readlock in STATE context'
- 'ever held with STATE enabled'
- 'ever held as readlock with STATE enabled'

Where STATE can be one of the following:

- hardirq
- softirq
- reclaim_fs

- 'ever used' [ == !unused ]
 (&sio_locks[i].lock){-.-...}, at: [<c02867fd>] mutex_lock+0x21/0x24

 (&sio_locks[i].lock){-.-...}, at: [<c02867fd>] mutex_lock+0x21/0x24

The bit position indicates STATE, STATE-read, for each of the states listed
above, and the character displayed in each indicates:

 '-' acquired in irq context
Single-lock state rules:
------------------------

A softirq-unsafe lock-class is automatically hardirq-unsafe as well. The
following states are exclusive, and only one of them is allowed to be
set for any lock-class:

 <hardirq-safe> and <hardirq-unsafe>
 <softirq-safe> and <softirq-unsafe>

The validator detects and reports lock usage that violates these
single-lock state rules.
Multi-lock dependency rules:
----------------------------

The same lock-class must not be acquired twice, because this could lead
to lock recursion deadlocks.

Furthermore, two locks may not be taken in different order:

 <L1> -> <L2>
 <L2> -> <L1>

because this could lead to lock inversion deadlocks. (The validator
finds such dependencies in arbitrary complexity, i.e. there can be any
other locking sequence between the acquire-lock operations; the
validator will still track all dependencies between locks.)

Furthermore, the following usage-based lock dependencies are not allowed
between any two lock-classes:

 <hardirq-safe> -> <hardirq-unsafe>
 <softirq-safe> -> <softirq-unsafe>
The first rule comes from the fact that a hardirq-safe lock could be
taken by a hardirq context, interrupting a hardirq-unsafe lock, and
thus could result in a lock inversion deadlock. Likewise, a softirq-safe
lock could be taken by a softirq context, interrupting a softirq-unsafe
lock, and thus could likewise result in a lock inversion deadlock.
When a lock-class changes its state, the following aspects of the above
dependency rules are enforced:

- if a new hardirq-safe lock is discovered, we check whether it
  took any hardirq-unsafe lock in the past.

- if a new softirq-safe lock is discovered, we check whether it took
  any softirq-unsafe lock in the past.

- if a new hardirq-unsafe lock is discovered, we check whether any
  hardirq-safe lock took it in the past.

- if a new softirq-unsafe lock is discovered, we check whether any
  softirq-safe lock took it in the past.
(Again, we do these checks on the basis that an interrupt context
could interrupt _any_ of the irq-unsafe or hardirq-unsafe locks, which
could lead to a lock inversion deadlock - even if that lock scenario did
not trigger in practice yet.)
Exception: Nested data dependencies leading to nested locking
-------------------------------------------------------------

There are a few cases where the Linux kernel acquires more than one
instance of the same lock-class. Such cases typically happen when there
is some sort of hierarchy within objects of the same type.

An example of such an object hierarchy that results in "nested locking"
is that of a "whole disk" block-dev object and a "partition" block-dev
object.
 mutex_lock_nested(&bdev->bd_contains->bd_mutex, BD_MUTEX_PARTITION);
Proof of 100% correctness:
--------------------------

The validator achieves perfect, mathematical 'closure' (proof of locking
correctness) in the sense that for every simple, standalone single-task
locking sequence that occurred at least once during the lifetime of the
kernel, the validator proves it with 100% certainty that no
combination and timing of these locking sequences can cause any class of
lock related deadlock. [*]

I.e. complex multi-CPU and multi-task locking scenarios do not have to
occur in practice to prove a deadlock: only the simple 'component'
locking sequences must occur at least once (anytime, in any
task/context) for the validator to be able to prove correctness. (For
example, complex deadlocks that would normally need more than 3 CPUs and
a very unlikely constellation of tasks, irq-contexts and timings to
occur, can be detected on a plain, lightly loaded single-CPU system as
well!)

This radically decreases the complexity of locking related QA of the
kernel: what has to be done during QA is to trigger as many "simple"
single-task locking dependencies in the kernel as possible, at least
once, to prove locking correctness - instead of having to trigger every
possible combination of locking interaction between CPUs, combined with
every possible hardirq and softirq nesting scenario.

[*] assuming that the validator itself is 100% correct, and no other
    part of the system corrupts the state of the validator in any way.
    We also assume that all NMI/SMM paths [which could interrupt
    even hardirq-disabled codepaths] are correct and do not interfere
    with the validator. We also assume that the 64-bit 'chain hash'
    value is unique for every lock-chain in the system. Also, lock
    recursion must not be higher than 20.
Performance:
------------

The above rules require _massive_ amounts of runtime checking. If we did
that for every lock taken and for every irqs-enable event, it would
render the system practically unusably slow. The complexity of checking
is O(N^2), so even with just a few hundred lock-classes we'd have to do
tens of thousands of checks for every event.

This problem is solved by checking any given 'locking scenario' (unique
sequence of locks taken after each other) only once. A simple stack of
held locks is maintained, and a lightweight 64-bit hash value is
calculated when the chain is validated for the first time, after which
the hash value is put into a hash table, which can be checked in a
lockfree manner. If the locking chain occurs again later on, the hash
table tells us that we don't have to validate the chain again.
Troubleshooting:
----------------

By default, MAX_LOCKDEP_KEYS is currently set to 8191, and typical
desktop systems have less than 1,000 lock classes, so this warning
normally results from lock-class leakage or failure to properly
initialize locks. These two problems are illustrated below:

1. Repeated module loading and unloading while running the validator
   will result in lock-class leakage. The issue here is that each
   load of the module will create a new set of lock classes for
   that module's locks, but module unloading does not remove old
   classes.

2. Using structures such as arrays that have large numbers of
   locks that are not explicitly initialized. For example, a hash
   table with 8192 buckets where each bucket has its own
   spinlock_t will consume 8192 lock classes -unless- each spinlock
   is explicitly initialized at runtime, for example, using the
   run-time spin_lock_init() as opposed to compile-time initializers
   such as __SPIN_LOCK_UNLOCKED(). Failure to properly initialize
   the per-bucket spinlocks would guarantee lock-class overflow.
likely to be linked into the lock-dependency graph. The number of lock
classes currently in use, and the maximum, can be checked via:

 grep "lock-classes" /proc/lockdep_stats

 lock-classes: 748 [max: 8191]