====================================
Concurrency Managed Workqueue (cmwq)
====================================

Why Concurrency Managed Workqueue?
==================================

In the original wq implementation, a multi threaded (MT) wq had one
worker thread per CPU and a single threaded (ST) wq had one worker
thread system-wide. A single MT wq needed to keep around the same
number of workers as the number of CPUs. The kernel grew a lot of MT
wq users over the years and with the number of CPU cores continuously
rising, some systems saturated the default 32k PID space just booting
up.

Although MT wq wasted a lot of resource, the level of concurrency
provided was unsatisfactory. Each wq maintained its own separate
worker pool. An MT wq could provide only one execution context per CPU
while an ST wq one for the whole system. Work items had to compete for
those very limited execution contexts, leading to various problems
including proneness to deadlocks around the single execution context.
cmwq is a reimplementation of wq with focus on the following goals.

* Maintain compatibility with the original workqueue API.

* Use per-CPU unified worker pools shared by all wq to provide
  flexible level of concurrency on demand without wasting a lot of
  resource.

* Automatically regulate worker pool and level of concurrency so that
  the API users don't need to worry about such details.
The Design
==========

A work item is a simple struct that holds a pointer to the function
that is to be executed asynchronously. Special purpose threads, called
[k]workers, execute the functions off of the queue, one after the
other, and are managed in thread pools called worker-pools.

The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
which manages worker-pools and processes the queued work items.

There are two worker-pools, one for normal work items and the other
for high priority ones, for each possible CPU and some extra
worker-pools to serve work items queued on unbound workqueues - the
number of these backing pools is dynamic.
Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit. They can influence some
aspects of the way the work items are executed by setting flags on the
workqueue they are putting the work item on. These flags include
things like CPU locality, concurrency limits, priority and more. To
get a detailed overview refer to the API description of
``alloc_workqueue()`` below.
When a work item is queued to a workqueue, the target worker-pool is
determined according to the queue parameters and workqueue attributes
and appended on the shared worklist of the worker-pool. For example,
unless specifically overridden, a work item of a bound workqueue will
be queued on the worklist of either the normal or highpri worker-pool
that is associated with the CPU the issuer is running on.
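For illustration, here is a minimal sketch (``my_work`` and the helper
functions are hypothetical names, not from this document) contrasting the
default CPU-local queueing with explicitly targeting another CPU's
worker-pool via ``queue_work_on()``: ::

  #include <linux/workqueue.h>

  static struct work_struct my_work;  /* initialized with INIT_WORK() elsewhere */

  static void queue_locally(void)
  {
          /* queued on a worker-pool of the CPU this thread is running on */
          queue_work(system_wq, &my_work);
  }

  static void queue_on_cpu1(void)
  {
          /* override the default and target the worker-pool bound to CPU 1 */
          queue_work_on(1, system_wq, &my_work);
  }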
Each worker-pool bound to an actual CPU implements concurrency
management by hooking into the scheduler. The worker-pool is notified
whenever an active worker wakes up or sleeps and keeps track of the
number of the currently runnable workers. Generally, work items are
not expected to hog a CPU and consume many cycles. That means
maintaining just enough concurrency to prevent work processing from
stalling should be optimal. As long as there are one or more runnable
workers on the CPU, the worker-pool doesn't start execution of a new
work item, but, when the last running worker goes to sleep, it immediately
schedules a new worker so that the CPU doesn't sit idle while there
are pending work items. This allows using a minimal number of workers
without losing execution bandwidth.
The forward progress guarantee relies on workers being creatable when
more execution contexts are necessary, which in turn is guaranteed
through the use of rescue workers. All work items which might be used
on code paths that handle memory reclaim are required to be queued on
wq's that have a rescue-worker reserved for execution under memory
pressure. Else it is possible that the worker-pool deadlocks waiting
for execution contexts to free up.
Application Programming Interface (API)
=======================================

``alloc_workqueue()`` allocates a wq. The original
``create_*workqueue()`` functions are deprecated and scheduled for
removal. ``alloc_workqueue()`` takes three arguments - ``@name``,
``@flags`` and ``@max_active``. ``@name`` is the name of the wq and
also used as the name of the rescuer thread if there is one.
``@flags`` and ``@max_active`` control how work items are assigned
execution resources, scheduled and executed.
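As a minimal sketch (all names below are hypothetical, not from this
document), allocating a wq and queueing a work item on it might look like
this; ``WQ_MEM_RECLAIM`` is shown because it reserves the rescuer
mentioned above: ::

  #include <linux/workqueue.h>

  static struct workqueue_struct *my_wq;
  static struct work_struct my_work;

  static void my_work_fn(struct work_struct *work)
  {
          /* runs asynchronously in a kworker thread */
  }

  static int my_setup(void)
  {
          /* @name, @flags, @max_active - 0 selects the default limit */
          my_wq = alloc_workqueue("my_wq", WQ_MEM_RECLAIM, 0);
          if (!my_wq)
                  return -ENOMEM;

          INIT_WORK(&my_work, my_work_fn);
          queue_work(my_wq, &my_work);
          return 0;
  }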
``flags``
---------
``WQ_UNBOUND``
  Work items queued to an unbound wq are served by the special
  worker-pools which host workers which are not bound to any
  specific CPU. This makes the wq behave as a simple execution
  context provider without concurrency management. The unbound
  worker-pools try to start execution of work items as soon as
  possible. Unbound wq sacrifices locality but is useful for
  the following cases.

  * Wide fluctuation in the concurrency level requirement is
    expected and using a bound wq may end up creating a large
    number of mostly unused workers across different CPUs as the
    issuer hops through different CPUs.

  * Long running CPU intensive workloads which can be better
    managed by the system scheduler.
``WQ_HIGHPRI``
  Work items of a highpri wq are queued to the highpri
  worker-pool of the target cpu. Highpri worker-pools are
  served by worker threads with elevated nice level.

  Note that normal and highpri worker-pools don't interact with
  each other. Each maintains its separate pool of workers and
  implements concurrency management among its workers.
``WQ_CPU_INTENSIVE``
  Work items of a CPU intensive wq do not contribute to the
  concurrency level. In other words, runnable CPU intensive
  work items will not prevent other work items in the same
  worker-pool from starting execution. This is useful for bound
  work items which are expected to hog CPU cycles so that their
  execution is regulated by the system scheduler.

  Although CPU intensive work items don't contribute to the
  concurrency level, start of their executions is still
  regulated by the concurrency management and runnable
  non-CPU-intensive work items can delay execution of CPU
  intensive work items.

  This flag is meaningless for unbound wq.
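As a hedged sketch (the wq names are made up for illustration), the flags
above translate directly into ``alloc_workqueue()`` calls: ::

  #include <linux/workqueue.h>

  static struct workqueue_struct *unbound_wq, *highpri_wq, *crunch_wq;

  static int flags_examples(void)
  {
          /* execution contexts not bound to any CPU, no concurrency management */
          unbound_wq = alloc_workqueue("my_unbound", WQ_UNBOUND, 0);

          /* served by the highpri worker-pools (elevated nice level) */
          highpri_wq = alloc_workqueue("my_highpri", WQ_HIGHPRI, 0);

          /* bound work items expected to hog CPU cycles */
          crunch_wq = alloc_workqueue("my_crunch", WQ_CPU_INTENSIVE, 0);

          if (!unbound_wq || !highpri_wq || !crunch_wq)
                  return -ENOMEM;     /* cleanup omitted for brevity */
          return 0;
  }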
``max_active``
--------------
``@max_active`` determines the maximum number of execution contexts per
CPU which can be assigned to the work items of a wq. For example, with
``@max_active`` of 16, at most 16 work items of the wq can be executing
at the same time per CPU. This is always a per-CPU attribute, even for
unbound workqueues.
Some users depend on strict execution ordering. The combination of
``@max_active`` of 1 and ``WQ_UNBOUND`` used to achieve this behavior:
work items on such a wq were always queued to the
unbound worker-pools and only one work item could be active at any given
time, providing the same ordering property as an ST wq. Nowadays, the
dedicated ``alloc_ordered_workqueue()`` should instead
be used to achieve system-wide ST behavior.
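A short sketch (the names are hypothetical) of the recommended way to get
strictly ordered, one-at-a-time execution: ::

  #include <linux/workqueue.h>

  /* work items queued on this wq execute one at a time, in queueing order */
  static struct workqueue_struct *ordered_wq;

  static int ordered_setup(void)
  {
          ordered_wq = alloc_ordered_workqueue("my_ordered", 0);
          return ordered_wq ? 0 : -ENOMEM;
  }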
Example Execution Scenarios
===========================

The following example execution scenarios try to illustrate how cmwq
behaves under different configurations.

 Work items w0, w1, w2 are queued to a bound wq q0 on the same CPU.
 w0 burns CPU for 5ms then sleeps for 10ms then burns CPU for 5ms
 again before finishing. w1 and w2 burn CPU for 5ms then sleep for
 10ms.
Ignoring all other tasks, works and processing overhead, and assuming
simple FIFO scheduling, the following is one highly simplified version
of possible sequences of events with the original wq. ::

 TIME IN MSECS  EVENT
 0              w0 starts and burns CPU
 5              w0 sleeps
 15             w0 wakes up and burns CPU
 20             w0 finishes
 20             w1 starts and burns CPU
 25             w1 sleeps
 35             w1 wakes up and finishes
 35             w2 starts and burns CPU
 40             w2 sleeps
 50             w2 wakes up and finishes
And with cmwq with ``@max_active`` >= 3, ::

 TIME IN MSECS  EVENT
 0              w0 starts and burns CPU
 5              w0 sleeps
 5              w1 starts and burns CPU
 10             w1 sleeps
 10             w2 starts and burns CPU
 15             w2 sleeps
 15             w0 wakes up and burns CPU
 20             w0 finishes
 20             w1 wakes up and finishes
 25             w2 wakes up and finishes
If ``@max_active`` == 2, ::

 TIME IN MSECS  EVENT
 0              w0 starts and burns CPU
 5              w0 sleeps
 5              w1 starts and burns CPU
 10             w1 sleeps
 15             w0 wakes up and burns CPU
 20             w0 finishes
 20             w1 wakes up and finishes
 20             w2 starts and burns CPU
 25             w2 sleeps
 35             w2 wakes up and finishes
Now, let's assume w1 and w2 are queued to a different wq q1 which has
``WQ_CPU_INTENSIVE`` set, ::

 TIME IN MSECS  EVENT
 0              w0 starts and burns CPU
 5              w0 sleeps
 5              w1 and w2 start and burn CPU
 10             w1 sleeps
 15             w2 sleeps
 15             w0 wakes up and burns CPU
 20             w0 finishes
 20             w1 wakes up and finishes
 25             w2 wakes up and finishes
Guidelines
==========

* Unless work items are expected to consume a huge amount of CPU
  cycles, using a bound wq is usually beneficial due to the increased
  level of locality in wq operations and work item execution.
Affinity Scopes
===============

An unbound workqueue groups CPUs according to its affinity scope to
improve cache locality. For example, if a workqueue is using the default
affinity scope of "cache", it will group CPUs according to last level
cache boundaries and each worker in the workqueue will be assigned to and
execute on one of the CPUs which share the last level cache with the issuing CPU.

Workqueue currently supports the following affinity scopes.

``default``
  Use the scope in the module parameter ``workqueue.default_affinity_scope``
  which is always set to one of the scopes below.
``cpu``
  CPUs are not grouped. A work item issued on one CPU is processed by a
  worker on the same CPU. This makes unbound workqueues behave as per-cpu
  workqueues without concurrency management.
``smt``
  CPUs are grouped according to SMT boundaries. This means that the
  logical threads of each physical CPU core are grouped together.

``cache``
  CPUs are grouped according to cache boundaries. Which specific cache
  boundary is used is determined by the arch code. L3 is used in a lot of
  cases. This is the default affinity scope.

``numa``
  CPUs are grouped according to NUMA boundaries.
``system``
  All CPUs are put in the same group. Workqueue makes no effort to process a
  work item on a CPU close to the issuing CPU.
The default affinity scope can be changed with the module parameter
``workqueue.default_affinity_scope`` and a specific workqueue's affinity
scope can be changed using ``apply_workqueue_attrs()``. If ``WQ_SYSFS`` is
set, the workqueue will also have the following affinity-scope-related
interface file under its ``/sys/devices/virtual/workqueue/WQ_NAME/``
directory.

``affinity_strict``
  0 by default indicating that affinity scopes are not strict. When a work
  item starts execution, workqueue makes a best-effort attempt to ensure
  that the worker is inside its affinity scope, which is called
  repatriation. Once started, a non-strict worker is free to move outside
  the scope; if set to 1, all workers of the scope are guaranteed always
  to stay inside the scope.
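A hedged sketch (assuming a kernel with affinity scopes, v6.5 or later;
the function name is made up) of switching an unbound workqueue to a
strict "numa" scope with ``apply_workqueue_attrs()``: ::

  #include <linux/workqueue.h>

  static int make_numa_strict(struct workqueue_struct *unbound_wq)
  {
          struct workqueue_attrs *attrs;
          int ret;

          attrs = alloc_workqueue_attrs();
          if (!attrs)
                  return -ENOMEM;

          attrs->affn_scope = WQ_AFFN_NUMA;   /* group CPUs by NUMA node */
          attrs->affn_strict = true;          /* keep workers inside the scope */
          ret = apply_workqueue_attrs(unbound_wq, attrs);
          free_workqueue_attrs(attrs);
          return ret;
  }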
Affinity Scopes and Performance
===============================

It would be ideal if an unbound workqueue's behavior were optimal for the
vast majority of use cases without further tuning. Unfortunately, in the
current kernel, there exists a pronounced trade-off between locality and
utilization, necessitating explicit configuration when workqueues are
heavily used.

Higher locality leads to higher efficiency where more work is performed
for the same number of consumed CPU cycles. However, higher locality may
also cause lower overall system utilization if the work items are not
spread enough across the affinity scopes by the issuers. The following
testing with dm-crypt clearly illustrates this trade-off.
The tests are run on a CPU with 12-cores/24-threads split across four L3
caches (AMD Ryzen 9 3900x). CPU clock boost is turned off for consistency.
``/dev/dm-0`` is a dm-crypt device created on an NVME SSD (Samsung 990 PRO)
and opened with ``cryptsetup`` with default settings.
Scenario 1: Enough issuers and work spread across the machine
--------------------------------------------------------------

The command used: ::
  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k --ioengine=libaio \
    --iodepth=64 --runtime=60 --numjobs=24 --time_based --group_reporting \
    --name=iops-test-job --verify=sha512
There are 24 issuers, each issuing 64 IOs concurrently. ``--verify=sha512``
makes ``fio`` generate and read back the content each time, which makes
execution locality matter between the issuer and ``kcryptd``. The following
are the read bandwidths and CPU utilizations depending on different affinity
scope settings on ``kcryptd``. Bandwidths are in
MiBps, and CPU util in percents.
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 1159.40 ±1.34
     - 99.31 ±0.02

   * - cache
     - 1166.40 ±0.89
     - 99.34 ±0.01

   * - cache (strict)
     - 1166.00 ±0.71
     - 99.35 ±0.01
With enough issuers spread across the system, there is no downside to
"cache", strict or otherwise. All three configurations saturate the whole
machine but the cache-affine ones outperform by 0.6% thanks to improved
locality.
Scenario 2: Fewer issuers, enough work for saturation
-----------------------------------------------------

The command used: ::
  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=8 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512
The only difference from the previous scenario is ``--numjobs=8``. There are
a third as many issuers, but there is still enough total work to saturate
the system.
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 1155.40 ±0.89
     - 97.41 ±0.05

   * - cache
     - 1154.40 ±1.14
     - 96.15 ±0.09

   * - cache (strict)
     - 1112.00 ±4.64
     - 93.26 ±0.35
Both "system" and "cache" nearly saturate the machine. "cache" uses
less CPU but the better efficiency puts it at the same bandwidth as
"system". Eight issuers moving around over four L3 cache scopes still
keep "cache (strict)" mostly busy, but the loss of work-conservation
is now starting to hurt with a 3.7% bandwidth loss.
Scenario 3: Even fewer issuers, not enough work to saturate
-----------------------------------------------------------

The command used: ::
  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=4 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512
Again, the only difference is ``--numjobs=4``. With the number of issuers
reduced to four, there is no longer enough work to saturate the whole
system and the bandwidth becomes dependent on completion latencies.
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 993.60 ±1.82
     - 75.49 ±0.06

   * - cache
     - 973.40 ±1.52
     - 74.90 ±0.07

   * - cache (strict)
     - 828.20 ±4.49
     - 66.84 ±0.29
The trade-off between locality and utilization is now clear: "cache" shows
a 2% bandwidth loss compared to "system", and "cache (strict)" a much
larger 17% loss.


Conclusion and Recommendations
------------------------------
While the loss of work-conservation in certain scenarios hurts, it is a lot
better than "cache (strict)" and maximizing workqueue utilization is
unlikely to be the common case anyway. As such, "cache" is the default
affinity scope for unbound pools.

* As there is no one option which is great for most cases, workqueue usages
  that may consume a significant amount of CPU are recommended to configure
  the workqueues using ``apply_workqueue_attrs()`` and/or enable
  ``WQ_SYSFS``.
* An unbound workqueue with strict "cpu" affinity scope behaves the same as
  a ``WQ_CPU_INTENSIVE`` per-cpu workqueue. There is no real advantage to
  the latter and an unbound workqueue provides a lot more flexibility.
* The loss of work-conservation in non-strict affinity scopes is likely
  originating from the scheduler. There is no theoretical reason why the
  kernel wouldn't be able to do the right thing and maintain
  work-conservation in most cases. As such, it is possible that future
  scheduler improvements may make most of these tunables unnecessary.
Examining Configuration
=======================

Use tools/workqueue/wq_dump.py to examine unbound CPU affinity
configuration, worker pools and how workqueues map to the pools: ::
  $ tools/workqueue/wq_dump.py
  Affinity Scopes
  ===============
  wq_unbound_cpumask=0000000f

  CPU
    nr_pods  4
    pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008
    pod_node [0]=0 [1]=0 [2]=1 [3]=1
    cpu_pod  [0]=0 [1]=1 [2]=2 [3]=3

  SMT
    nr_pods  4
    pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008
    pod_node [0]=0 [1]=0 [2]=1 [3]=1
    cpu_pod  [0]=0 [1]=1 [2]=2 [3]=3

  CACHE (default)
    nr_pods  2
    pod_cpus [0]=00000003 [1]=0000000c
    pod_node [0]=0 [1]=1
    cpu_pod  [0]=0 [1]=0 [2]=1 [3]=1

  NUMA
    nr_pods  2
    pod_cpus [0]=00000003 [1]=0000000c
    pod_node [0]=0 [1]=1
    cpu_pod  [0]=0 [1]=0 [2]=1 [3]=1

  SYSTEM
    nr_pods  1
    pod_cpus [0]=0000000f
    pod_node [0]=-1
    cpu_pod  [0]=0 [1]=0 [2]=0 [3]=0
  Worker Pools
  ============
  pool[00] ref= 1 nice=  0 idle/workers=  4/  4 cpu=  0
  pool[01] ref= 1 nice=-20 idle/workers=  2/  2 cpu=  0
  pool[02] ref= 1 nice=  0 idle/workers=  4/  4 cpu=  1
  pool[03] ref= 1 nice=-20 idle/workers=  2/  2 cpu=  1
  pool[04] ref= 1 nice=  0 idle/workers=  4/  4 cpu=  2
  pool[05] ref= 1 nice=-20 idle/workers=  2/  2 cpu=  2
  pool[06] ref= 1 nice=  0 idle/workers=  3/  3 cpu=  3
  pool[07] ref= 1 nice=-20 idle/workers=  2/  2 cpu=  3
  pool[08] ref=42 nice=  0 idle/workers=  6/  6 cpus=0000000f
  pool[09] ref=28 nice=  0 idle/workers=  3/  3 cpus=00000003
  pool[10] ref=28 nice=  0 idle/workers= 17/ 17 cpus=0000000c
  pool[11] ref= 1 nice=-20 idle/workers=  1/  1 cpus=0000000f
  pool[12] ref= 2 nice=-20 idle/workers=  1/  1 cpus=00000003
  pool[13] ref= 2 nice=-20 idle/workers=  1/  1 cpus=0000000c
  Workqueue CPU -> pool
  =====================
  [    workqueue \ CPU              0  1  2  3 dfl]
  events                   percpu   0  2  4  6
  events_highpri           percpu   1  3  5  7
  events_long              percpu   0  2  4  6
  events_unbound           unbound  9  9 10 10  8
  events_freezable         percpu   0  2  4  6
  events_power_efficient   percpu   0  2  4  6
  events_freezable_power_  percpu   0  2  4  6
  rcu_gp                   percpu   0  2  4  6
  rcu_par_gp               percpu   0  2  4  6
  slub_flushwq             percpu   0  2  4  6
Monitoring
==========

Use tools/workqueue/wq_monitor.py to monitor workqueue operations: ::

  $ tools/workqueue/wq_monitor.py events
                              total  infl  CPUtime  CPUhog  CMW/RPR  mayday rescued
  events                      18545     0      6.1       0        5       -       -
  events_highpri                  8     0      0.0       0        0       -       -
  events_long                     3     0      0.0       0        0       -       -
  events_unbound              38306     0      0.1       -        7       -       -
  events_freezable                0     0      0.0       0        0       -       -
  events_power_efficient      29598     0      0.2       0        0       -       -
  events_freezable_power_        10     0      0.0       0        0       -       -
  sock_diag_events                0     0      0.0       0        0       -       -

                              total  infl  CPUtime  CPUhog  CMW/RPR  mayday rescued
  events                      18548     0      6.1       0        5       -       -
  events_highpri                  8     0      0.0       0        0       -       -
  events_long                     3     0      0.0       0        0       -       -
  events_unbound              38322     0      0.1       -        7       -       -
  events_freezable                0     0      0.0       0        0       -       -
  events_power_efficient      29603     0      0.2       0        0       -       -
  events_freezable_power_        10     0      0.0       0        0       -       -
  sock_diag_events                0     0      0.0       0        0       -       -
Debugging
=========

Because the work functions are executed by generic worker threads,
a few tricks are needed to shed some light on misbehaving
workqueue users.

If kworkers are going crazy (using too much cpu), there are two types
of possible problems:

  1. A single work item that consumes lots of cpu cycles
  2. Several work items offloading lots of cpu cycles
Non-reentrance Conditions
=========================

Workqueue guarantees that a work item cannot be re-entrant if the following
conditions hold after a work item gets queued:

  1. The work function hasn't been changed.
  2. No one queues the work item to another workqueue.
  3. The work item hasn't been reinitiated.

In other words, if the above conditions hold, the work item is guaranteed to
be executed by at most one worker system-wide at any given time.

Note that requeuing the work item (to the same queue) in the self function
doesn't break these conditions, so it's safe to do. Otherwise, caution is
required when breaking the conditions inside a work function.
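For illustration, a hedged sketch (``my_wq``, ``my_work`` and
``more_to_do()`` are hypothetical names) of the safe self-requeueing
pattern mentioned above: ::

  #include <linux/workqueue.h>

  static struct workqueue_struct *my_wq;
  static struct work_struct my_work;
  static bool more_to_do(void);       /* hypothetical helper */

  static void my_work_fn(struct work_struct *work)
  {
          /* process one batch, then requeue ourselves if needed */
          if (more_to_do())
                  queue_work(my_wq, work); /* same function, same wq: safe */
  }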
Kernel Inline Documentations Reference
======================================

.. kernel-doc:: include/linux/workqueue.h

.. kernel-doc:: kernel/workqueue.c