When such an asynchronous execution context is needed, a work item
describing which function to execute is put on a queue. An
When a new work item gets queued, the worker begins executing again.
In the original wq implementation, a multi-threaded (MT) wq had one
worker thread per CPU and a single-threaded (ST) wq had one worker
thread system-wide. A single MT wq needed to keep around the same
number of workers as the number of CPUs. The kernel grew a lot of MT
Although MT wq wasted a lot of resources, the level of concurrency
Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with
flexible level of concurrency on demand without wasting a lot of
In order to ease the asynchronous execution of functions, a new
A work item is a simple struct that holds a pointer to the function
that is to be executed asynchronously. Whenever a driver or subsystem
wants a function to be executed asynchronously, it has to set up a work
item pointing to that function and queue that work item on a
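As a minimal sketch (``my_device``, ``my_work_fn``, and ``my_device_kick``
are made-up names, not part of the workqueue API), embedding a work item in
a driver structure and queueing it looks like this::

  #include <linux/workqueue.h>

  struct my_device {
          struct work_struct event_work;  /* the work item */
  };

  static void my_work_fn(struct work_struct *work)
  {
          /* recover the embedding object from the work item */
          struct my_device *dev = container_of(work, struct my_device,
                                               event_work);
          /* ... runs asynchronously in process context ... */
  }

  static void my_device_init(struct my_device *dev)
  {
          INIT_WORK(&dev->event_work, my_work_fn);
  }

  static void my_device_kick(struct my_device *dev)
  {
          /* returns false if the work item was already pending */
          queue_work(system_wq, &dev->event_work);
  }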
get a detailed overview refer to the API description of
When a work item is queued to a workqueue, the target worker-pool is
unless specifically overridden, a work item of a bound workqueue will
tries to keep the concurrency at a minimal but sufficient level.
not expected to hog a CPU and consume many cycles. That means
workers on the CPU, the worker-pool doesn't start execution of a new
schedules a new worker so that the CPU doesn't sit idle while there
are pending work items. This allows using a minimal number of workers
for kthreads, so cmwq holds onto idle ones for a while before killing
regulating concurrency level is on the users. There is also a flag to
mark a bound wq to ignore the concurrency management. Please refer to
wq's that have a rescue-worker reserved for execution under memory
``alloc_workqueue()`` allocates a wq. The original
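A hedged example of allocating and tearing down a wq (the name ``my_wq``
and the chosen flags are illustrative, not prescriptive)::

  #include <linux/workqueue.h>

  static struct workqueue_struct *my_wq;

  static int my_init(void)
  {
          /*
           * WQ_UNBOUND trades CPU locality for scheduler-managed placement;
           * WQ_MEM_RECLAIM reserves a rescuer thread so the wq can make
           * forward progress under memory pressure.  A @max_active of 0
           * selects the default.
           */
          my_wq = alloc_workqueue("my_wq", WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
          if (!my_wq)
                  return -ENOMEM;
          return 0;
  }

  static void my_exit(void)
  {
          destroy_workqueue(my_wq);
  }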
A wq no longer manages execution resources but serves as a domain for
specific CPU. This makes the wq behave as a simple execution
A freezable wq participates in the freeze phase of the system
Work items of a highpri wq are queued to the highpri
Work items of a CPU intensive wq do not contribute to the
CPU which can be assigned to the work items of a wq. For example, with
at the same time per CPU. This is always a per-CPU attribute, even for
The number of active work items of a wq is usually regulated by the
may queue at the same time. Unless there is a specific need for
ST behavior within a given NUMA node. Instead, ``alloc_ordered_workqueue()`` should
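For example, an ordered wq that executes at most one work item at a time,
in queueing order, can be created like this (``my_ordered`` and
``my_setup`` are made-up names)::

  static struct workqueue_struct *ordered_wq;

  static int my_setup(void)
  {
          /* at most one in-flight work item, strictly in queueing order */
          ordered_wq = alloc_ordered_workqueue("my_ordered", WQ_MEM_RECLAIM);
          return ordered_wq ? 0 : -ENOMEM;
  }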
Work items w0, w1, w2 are queued to a bound wq q0 on the same CPU.
20  w0 finishes
20  w1 starts and burns CPU
20  w0 finishes
20  w1 wakes up and finishes
20  w0 finishes
20  w1 wakes up and finishes
20  w2 starts and burns CPU
Now, let's assume w1 and w2 are queued to a different wq q1 which has
20  w0 finishes
20  w1 wakes up and finishes
* Do not forget to use ``WQ_MEM_RECLAIM`` if a wq may process work
* Unless there is a specific need, using 0 for @max_active is
* A wq serves as a domain for forward progress guarantee
  flushed as a part of a group of work items, and don't require any
  difference in execution characteristics between using a dedicated wq
  and a system wq.
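For work items in that category, queueing to the shared system wq is
enough. A sketch (``stats_work``, ``log_stats``, and ``kick_stats`` are
made-up names)::

  #include <linux/workqueue.h>

  static void log_stats(struct work_struct *work)
  {
          /* short-lived work with no forward-progress requirement */
  }

  static DECLARE_WORK(stats_work, log_stats);

  static void kick_stats(void)
  {
          /* equivalent to queue_work(system_wq, &stats_work) */
          schedule_work(&stats_work);
  }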
* Unless work items are expected to consume a huge amount of CPU
  cycles, using a bound wq is usually beneficial due to the increased
cache locality. For example, if a workqueue is using the default affinity
boundaries. A work item queued on the workqueue will be assigned to a worker
CPUs are not grouped. A work item issued on one CPU is processed by a
boundary is used is determined by the arch code. L3 is used in a lot of
All CPUs are put in the same group. Workqueue makes no effort to process a
work item on a CPU close to the issuing CPU.
``workqueue.default_affinity_scope`` and a specific workqueue's affinity
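Assuming a kernel that exposes workqueue attributes through sysfs (the
per-wq knob is only present for workqueues created with ``WQ_SYSFS``;
``writeback`` is just an example wq name), the scopes can be changed at
runtime like this::

  # system-wide default (also settable as a boot parameter)
  echo cache > /sys/module/workqueue/parameters/default_affinity_scope

  # per-workqueue, for wq's visible under sysfs
  echo numa > /sys/devices/virtual/workqueue/writeback/affinity_scope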
0 by default, indicating that affinity scopes are not strict. When a work
item starts execution, workqueue makes a best-effort attempt to ensure
kernel, there exists a pronounced trade-off between locality and utilization
The tests are run on a CPU with 12-cores/24-threads split across four L3
``/dev/dm-0`` is a dm-crypt device created on an NVMe SSD (Samsung 990 PRO) and
:widths: 16 20 20
a third of the issuers but is still enough total work to saturate the
:widths: 16 20 20
:widths: 16 20 20
2% bandwidth loss compared to "system" and "cache (strict)" a whopping 20%.
While the loss of work-conservation in certain scenarios hurts, it is a lot
that may consume a significant amount of CPU are recommended to configure
latter and an unbound workqueue provides a lot more flexibility.
pool[01] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 0
pool[03] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 1
pool[05] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 2
pool[07] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 3
pool[11] ref= 1 nice=-20 idle/workers= 1/ 1 cpus=0000000f
pool[12] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=00000003
pool[13] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=0000000c
there are a few tricks needed to shed some light on misbehaving
2. A single work item that consumes lots of CPU cycles
(wait a few secs)
Workqueue guarantees that a work item cannot be re-entrant if the following
conditions hold after a work item gets queued:
required when breaking the conditions inside a work function.
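For example, before freeing an object that embeds a work item, the usual
tear-down pattern is to stop the work item first (``my_device`` here is a
hypothetical struct embedding a ``struct work_struct`` named
``event_work``)::

  static void my_device_destroy(struct my_device *dev)
  {
          /* waits for a running instance and cancels a pending one */
          cancel_work_sync(&dev->event_work);
          kfree(dev);
  }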