Introduction
============

While there are work items on the workqueue, the worker executes the
functions associated with the work items one after the other.  When there is
no work item left on the workqueue, the worker becomes idle.
Why cmwq?
=========

In the original wq implementation, a multi threaded (MT) wq had one worker
thread per CPU and a single threaded (ST) wq had one worker thread
system-wide.  A single MT wq needed to keep around the same number of
workers as the number of CPUs.

The tension between the provided level of concurrency and resource usage
also forced wq users into unnecessary tradeoffs, like libata choosing to use
an ST wq for polling PIOs and accepting the unnecessary limitation that no
two polling PIOs can progress at the same time.  As MT wq didn't provide
much better concurrency, users which required a higher level of concurrency
had to implement their own thread pools.
Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with focus
on the following goals:

* Use per-CPU unified worker pools shared by all wq to provide a flexible
  level of concurrency on demand without wasting a lot of resource.
The Design
==========

Special purpose threads, called worker threads, execute the functions
off of the queue, one after the other.  If no work is queued, the
worker threads become idle.  These worker threads are managed in
so-called worker-pools.
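For orientation, here is a minimal sketch of defining a work item and
handing it to a worker through the default system workqueue; the function
and item names are hypothetical: ::

  #include <linux/workqueue.h>
  #include <linux/printk.h>

  /* Runs in process context on one of the pool's worker threads. */
  static void example_fn(struct work_struct *work)
  {
          pr_info("example work item executed\n");
  }

  static DECLARE_WORK(example_work, example_fn);

  static void example_kick(void)
  {
          /* Put the item on the default per-CPU system workqueue. */
          schedule_work(&example_work);
  }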
The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
which manages worker-pools and processes the queued work items.

There are two worker-pools, one for normal work items and the other for
high priority ones, for each possible CPU, and some extra worker-pools to
serve work items queued on unbound workqueues - the number of these
backing pools is dynamic.
When a work item is queued to a workqueue, the target worker-pool is
determined according to the queue parameters and workqueue attributes
and appended on the shared worklist of the worker-pool.  For example,
unless specifically overridden, a work item of a bound workqueue will
be queued on the worklist of either the normal or highpri worker-pool
that is associated with the CPU the issuer is running on.
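As an illustrative sketch of such an override, the following queues a work
item on the highpri pool of a specific CPU rather than the issuer's CPU;
the wq name and work item are hypothetical: ::

  #include <linux/workqueue.h>

  static struct workqueue_struct *hp_wq;
  static struct work_struct example_work;

  static void example_fn(struct work_struct *work) { }

  static int example_init(void)
  {
          /* Work items of this wq are queued to the highpri worker-pool
           * of the target CPU, served by workers with an elevated nice
           * level. */
          hp_wq = alloc_workqueue("example_highpri", WQ_HIGHPRI, 0);
          if (!hp_wq)
                  return -ENOMEM;

          INIT_WORK(&example_work, example_fn);
          /* Queue on CPU 1's pools explicitly instead of the local CPU's. */
          queue_work_on(1, hp_wq, &example_work);
          return 0;
  }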
Each worker-pool bound to an actual CPU implements concurrency
management by hooking into the scheduler.  The worker-pool is notified
whenever an active worker wakes up or sleeps and keeps track of the
number of currently runnable workers.  As long as there are one or more
runnable workers on the CPU, the worker-pool doesn't start execution of
a new work item, but, when the last running worker goes to sleep, it
immediately schedules a new worker so that the CPU doesn't sit idle
while there are pending work items.  This allows using a minimal number
of workers without losing execution bandwidth.

Keeping idle workers around doesn't cost anything other than the memory
space for the kthreads, so cmwq holds onto idle ones for a while before
killing them.
All work items which might be used on code paths that handle memory
reclaim are required to be queued on wq's that have a rescue-worker
reserved for execution under memory pressure.  Else it is possible that
the worker-pool deadlocks waiting for execution contexts to free up.
Application Programming Interface (API)
=======================================

``alloc_workqueue()`` allocates a wq.  The original ``create_*workqueue()``
functions are deprecated and scheduled for removal.  ``alloc_workqueue()``
takes three arguments: ``@name``, ``@flags`` and ``@max_active``.
``@name`` is the name of the wq and also used as the name of the rescuer
thread if there is one.

A wq no longer manages execution resources but serves as a domain for
forward progress guarantee, flushing and work item attributes.
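A minimal allocation and teardown sketch, assuming a driver whose work
items sit on a memory-reclaim path and therefore need a rescuer; the wq
name is hypothetical: ::

  #include <linux/workqueue.h>

  static struct workqueue_struct *example_wq;

  static int example_setup(void)
  {
          /* WQ_MEM_RECLAIM reserves a rescue-worker; "example" names both
           * the wq and, if created, its rescuer thread. */
          example_wq = alloc_workqueue("example", WQ_MEM_RECLAIM, 0);
          if (!example_wq)
                  return -ENOMEM;
          return 0;
  }

  static void example_cleanup(void)
  {
          /* Drains all pending work items, then frees the wq. */
          destroy_workqueue(example_wq);
  }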
``flags``
---------
``WQ_UNBOUND``
  Work items queued to an unbound wq are served by the special
  worker-pools which host workers which are not bound to any specific
  CPU.  This makes the wq behave as a simple execution context provider
  without concurrency management.  The unbound worker-pools try to start
  execution of work items as soon as possible.

``WQ_FREEZABLE``
  A freezable wq participates in the freeze phase of the system
  suspend operations.  Work items on the wq are drained and no new
  work item starts execution until thawed.

``WQ_HIGHPRI``
  Work items of a highpri wq are queued to the highpri
  worker-pool of the target cpu.  Highpri worker-pools are
  served by worker threads with an elevated nice level.

  Note that normal and highpri worker-pools don't interact with
  each other.  Each maintains its separate pool of workers and
  implements concurrency management among its workers.

``WQ_CPU_INTENSIVE``
  Work items of a CPU intensive wq do not contribute to the concurrency
  level.  In other words, runnable CPU intensive work items will not
  prevent other work items in the same worker-pool from starting
  execution.  This is useful for bound work items which are expected to
  hog CPU cycles so that their execution is regulated by the system
  scheduler (see the sketch after this list).

  Although CPU intensive work items don't contribute to the concurrency
  level, the start of their execution is still regulated by the
  concurrency management, and runnable non-CPU-intensive work items can
  delay execution of CPU intensive work items.
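As a sketch of the last flag, a driver might allocate a bound wq for
CPU-hogging work like this; the wq name is hypothetical: ::

  #include <linux/workqueue.h>

  static struct workqueue_struct *crunch_wq;

  static int crunch_init(void)
  {
          /* Bound wq whose runnable work items don't block other items in
           * the same per-CPU pool; the scheduler regulates them instead. */
          crunch_wq = alloc_workqueue("example_crunch", WQ_CPU_INTENSIVE, 0);
          return crunch_wq ? 0 : -ENOMEM;
  }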
``max_active``
--------------
``@max_active`` determines the maximum number of execution contexts per
CPU which can be assigned to the work items of the wq.  For example, with
``@max_active`` of 16, at most 16 work items of the wq can be executing
at the same time per CPU. This is always a per-CPU attribute, even for
unbound workqueues.
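A brief sketch of picking an explicit limit; the wq name and the value 16
are arbitrary choices for illustration: ::

  #include <linux/workqueue.h>

  static struct workqueue_struct *limited_wq;

  static int limited_init(void)
  {
          /* At most 16 work items of this wq may execute concurrently per
           * CPU; passing 0 would select the default limit instead. */
          limited_wq = alloc_workqueue("example_limited", 0, 16);
          return limited_wq ? 0 : -ENOMEM;
  }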
Some users depend on the strict execution ordering of ST wq, where only
one work item is in flight at any given time and work items are processed
in queueing order.  The combination of ``@max_active`` of 1 and
``WQ_UNBOUND`` used to achieve this behavior: work items were always
queued to the unbound worker-pools and only one work item could be active
at any given time.  Such users should use ``alloc_ordered_workqueue()``
instead, which can be used to achieve system-wide ST behavior.
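A minimal sketch of an ordered workqueue; the name is hypothetical: ::

  #include <linux/workqueue.h>

  static struct workqueue_struct *ordered_wq;

  static int ordered_init(void)
  {
          /* At most one work item in flight at any time, executed in
           * queueing order: the replacement for ST wq semantics. */
          ordered_wq = alloc_ordered_workqueue("example_ordered", 0);
          return ordered_wq ? 0 : -ENOMEM;
  }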
Guidelines
==========

* Unless strict ordering is required, there is no need to use ST wq.

* Work items which are not involved in memory reclaim, don't need to be
  flushed as a part of a group of work items, and don't require any
  special attribute, can use one of the system wq.  There is no
  difference in execution characteristics between using a dedicated wq
  and a system wq; a sketch follows this list.
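A sketch of the system-wq case, here using a delayed work item; the
function and item names and the 100ms delay are hypothetical: ::

  #include <linux/workqueue.h>
  #include <linux/jiffies.h>
  #include <linux/printk.h>

  static void example_poll_fn(struct work_struct *work)
  {
          pr_info("delayed work ran\n");
  }

  static DECLARE_DELAYED_WORK(example_dwork, example_poll_fn);

  static void example_arm(void)
  {
          /* Nothing special is needed, so the shared system wq works just
           * as well as a dedicated one; run roughly 100ms from now. */
          queue_delayed_work(system_wq, &example_dwork,
                             msecs_to_jiffies(100));
  }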
Affinity Scopes
===============

An unbound workqueue groups CPUs according to its affinity scope to
improve cache locality.  The scopes include, among others:

``cpu``
  CPUs are not grouped.  A work item issued on one CPU is processed by a
  worker on the same CPU.  This makes unbound workqueues behave as per-cpu
  workqueues without concurrency management.

``system``
  All CPUs are put in the same group.  Workqueue makes no effort to process
  a work item on a CPU close to the issuing CPU.
The default affinity scope can be changed with the module parameter
``workqueue.default_affinity_scope``, and a specific workqueue's affinity
scope can be changed using ``apply_workqueue_attrs()``.  When a work
item starts execution, workqueue makes a best-effort attempt to ensure
that the worker is inside its affinity scope, which is called
repatriation.
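A hedged sketch of adjusting a workqueue's attributes from inside the
kernel.  Note that ``alloc_workqueue_attrs()``, ``apply_workqueue_attrs()``
and ``free_workqueue_attrs()`` are in-kernel interfaces whose signatures
and availability have varied across kernel versions, and the nice value
and cpumask below are arbitrary: ::

  #include <linux/workqueue.h>
  #include <linux/cpumask.h>

  static int example_set_attrs(struct workqueue_struct *wq)
  {
          struct workqueue_attrs *attrs;
          int ret;

          attrs = alloc_workqueue_attrs();
          if (!attrs)
                  return -ENOMEM;

          /* Run this wq's workers at nice -10, restricted to CPU 0. */
          attrs->nice = -10;
          cpumask_copy(attrs->cpumask, cpumask_of(0));

          ret = apply_workqueue_attrs(wq, attrs);
          free_workqueue_attrs(attrs);
          return ret;
  }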
Affinity Scopes and Performance
===============================

Unfortunately, in the current kernel, there exists a pronounced trade-off
between locality and utilization, necessitating explicit configuration
when workqueues are heavily used.  Higher locality leads to higher
efficiency, but it may also lower overall system utilization if the work
items are not spread enough across the affinity scopes by the issuers.
The following performance testing with dm-crypt clearly illustrates this
trade-off.
The tests are run on a CPU with 12-cores/24-threads split across four L3
caches.  ``/dev/dm-0`` is a dm-crypt device created on an NVME SSD
(Samsung 990 PRO) and opened with ``cryptsetup`` with default settings.
Scenario 1: Enough issuers and work spread across the machine
-------------------------------------------------------------
The command used: ::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k --ioengine=libaio \
    --iodepth=64 --runtime=60 --numjobs=24 --time_based --group_reporting \
    --name=iops-test-job --verify=sha512
There are 24 issuers, each issuing 64 IOs concurrently.  ``--verify=sha512``
makes ``fio`` generate and read back the content each time, which makes
execution locality matter between the issuer and ``kcryptd``.
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 1159.40 ±1.34
     - 99.31 ±0.02

   * - cache
     - 1166.40 ±0.89
     - 99.34 ±0.01

   * - cache (strict)
     - 1166.00 ±0.71
     - 99.35 ±0.01
With enough issuers spread across the system, there is no downside to
"cache", strict or otherwise.  All three configurations saturate the whole
machine, but the cache-affine ones outperform by 0.6% thanks to improved
locality.
Scenario 2: Fewer issuers, enough work for saturation
-----------------------------------------------------
The command used: ::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=8 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512
The only difference from the previous scenario is ``--numjobs=8``.  There
are a third of the issuers, but there is still enough total work to
saturate the system.
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 1155.40 ±0.89
     - 97.41 ±0.05

   * - cache
     - 1154.40 ±1.14
     - 96.15 ±0.09

   * - cache (strict)
     - 1112.00 ±4.64
     - 93.26 ±0.35
Scenario 3: Even fewer issuers, not enough work to saturate
-----------------------------------------------------------
The command used: ::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=4 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512
Again, the only difference is ``--numjobs=4``.  With the number of issuers
reduced to four, there now isn't enough work to saturate the whole system
and the bandwidth becomes dependent on completion latencies.
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 993.60 ±1.82
     - 75.49 ±0.06

   * - cache
     - 973.40 ±1.52
     - 74.90 ±0.07

   * - cache (strict)
     - 828.20 ±4.49
     - 66.84 ±0.29
Conclusion and Recommendations
------------------------------
While the loss of work-conservation in certain scenarios hurts, it is a
lot better than unconditionally sacrificing locality.  The following
recommendations apply:
* As there is no one option which is great for most cases, workqueue
  usages that may consume a significant amount of CPU are recommended to
  configure the workqueues using ``apply_workqueue_attrs()`` and/or
  enable ``WQ_SYSFS``.

* An unbound workqueue with strict "cpu" affinity scope behaves the same
  as a ``WQ_CPU_INTENSIVE`` per-cpu workqueue.  There is no real
  advantage to the latter and an unbound workqueue provides a lot more
  flexibility.

* The loss of work-conservation in non-strict affinity scopes is likely
  originating from the scheduler.  There is no theoretical reason why the
  kernel wouldn't be able to do both regional locality and
  work-conservation.  As such, it is possible that future scheduler and
  workqueue improvements can reduce or remove this trade-off.
Examining Configuration
=======================

Use ``tools/workqueue/wq_dump.py`` to examine unbound CPU affinity
configuration, worker pools and how workqueues map to the pools: ::

  $ tools/workqueue/wq_dump.py
  Affinity Scopes
  ===============
  ...
  pod_node [0]=-1
  ...

  Worker Pools
  ============
  pool[00] ref= 1 nice=  0 idle/workers=  4/  4 cpu=  0
  pool[01] ref= 1 nice=-20 idle/workers=  2/  2 cpu=  0
  pool[02] ref= 1 nice=  0 idle/workers=  4/  4 cpu=  1
  pool[03] ref= 1 nice=-20 idle/workers=  2/  2 cpu=  1
  pool[04] ref= 1 nice=  0 idle/workers=  4/  4 cpu=  2
  pool[05] ref= 1 nice=-20 idle/workers=  2/  2 cpu=  2
  pool[06] ref= 1 nice=  0 idle/workers=  3/  3 cpu=  3
  pool[07] ref= 1 nice=-20 idle/workers=  2/  2 cpu=  3
  pool[08] ref=42 nice=  0 idle/workers=  6/  6 cpus=0000000f
  pool[09] ref=28 nice=  0 idle/workers=  3/  3 cpus=00000003
  pool[10] ref=28 nice=  0 idle/workers= 17/ 17 cpus=0000000c
  pool[11] ref= 1 nice=-20 idle/workers=  1/  1 cpus=0000000f
  pool[12] ref= 2 nice=-20 idle/workers=  1/  1 cpus=00000003
  pool[13] ref= 2 nice=-20 idle/workers=  1/  1 cpus=0000000c

  Workqueue CPU -> pool
  =====================
  ...
Monitoring
==========

Use ``tools/workqueue/wq_monitor.py`` to monitor workqueue operations: ::

  $ tools/workqueue/wq_monitor.py events
                              total  infl  CPUtime  CPUhog  CMW/RPR  mayday rescued
  events                      18545     0      6.1       0        5       -       -
  events_highpri                  8     0      0.0       0        0       -       -
  events_long                     3     0      0.0       0        0       -       -
  events_unbound              38306     0      0.1       -        7       -       -
  events_freezable                0     0      0.0       0        0       -       -
  events_power_efficient      29598     0      0.2       0        0       -       -
  events_freezable_power_        10     0      0.0       0        0       -       -
  sock_diag_events                0     0      0.0       0        0       -       -

                              total  infl  CPUtime  CPUhog  CMW/RPR  mayday rescued
  events                      18548     0      6.1       0        5       -       -
  events_highpri                  8     0      0.0       0        0       -       -
  events_long                     3     0      0.0       0        0       -       -
  events_unbound              38322     0      0.1       -        7       -       -
  events_freezable                0     0      0.0       0        0       -       -
  events_power_efficient      29603     0      0.2       0        0       -       -
  events_freezable_power_        10     0      0.0       0        0       -       -
  sock_diag_events                0     0      0.0       0        0       -       -
Non-reentrance Conditions
=========================

Workqueue guarantees that a work item cannot be re-entrant if the
following conditions hold after a work item gets queued:

1. The work function hasn't been changed since the last queueing.
2. No one queues the work item to another workqueue.
3. The work item hasn't been reinitiated.

In other words, if the above conditions hold, the work item is guaranteed
to be executed by at most one worker system-wide at any given time.
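As a sketch of a pattern that satisfies these conditions, a work item may
safely requeue itself to the same workqueue; the wq and function names
are hypothetical: ::

  #include <linux/workqueue.h>

  static struct workqueue_struct *example_wq;

  static void example_fn(struct work_struct *work)
  {
          /* This item is only ever queued to example_wq, its function is
           * never changed, and it is never reinitiated while in flight,
           * so at most one worker system-wide runs it at any given time
           * even though it requeues itself. */
          queue_work(example_wq, work);
  }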
Kernel Inline Documentations Reference
======================================

.. kernel-doc:: include/linux/workqueue.h

.. kernel-doc:: kernel/workqueue.c