
=========
Workqueue
=========
There are many cases where an asynchronous process execution context
is needed and the workqueue (wq) API is the most commonly used
mechanism for such cases.

When such an asynchronous execution context is needed, a work item
describing which function to execute is put on a queue. An independent
thread serves as the asynchronous execution context. The
queue is called workqueue and the thread is called worker.

While there are work items on the workqueue the worker executes the
functions associated with the work items one after the other. When
there is no work item left on the workqueue the worker becomes idle.
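This model can be sketched with the core API. The following is a
minimal kernel-module-style sketch; the work function and names are
hypothetical: ::

    #include <linux/workqueue.h>

    /* Hypothetical work function: runs in process context in a worker. */
    static void my_work_fn(struct work_struct *work)
    {
            pr_info("work item executed\n");
    }

    /* Declare a work item bound to my_work_fn. */
    static DECLARE_WORK(my_work, my_work_fn);

    static void example(void)
    {
            /* Put the work item on the system workqueue; a worker will
             * pick it up and call my_work_fn() asynchronously. */
            schedule_work(&my_work);
    }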
Why Concurrency Managed Workqueue?
==================================
Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with
focus on the following goals.

* Maintain compatibility with the original workqueue API.
Work items queued to a BH workqueue are executed in the BH (softirq)
execution context. A BH workqueue can be considered a convenience
interface to softirq rather than a proper execution context.
Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit. They can influence some
aspects of the way the work items are executed by setting flags on the
workqueue they are putting the work item on. These flags include
things like CPU locality, concurrency limits, priority and more.
When a work item is queued to a workqueue, the target worker-pool is
determined according to the queue parameters and workqueue attributes
and appended on the shared worklist of the worker-pool. For example,
unless specifically overridden, a work item of a bound workqueue will
be queued on the worklist of either normal or highpri worker-pool that
is associated to the CPU the issuer is running on.
An unbound workqueue can be assigned custom attributes using
``apply_workqueue_attrs()``, and workqueue will automatically create
backing worker pools matching the attributes.
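A sketch of the attrs API follows. Note that the attrs helpers are
kernel-internal; whether these symbols are available to modules, and
the exact ``alloc_workqueue_attrs()`` signature, depend on the kernel
version, and the names here are illustrative: ::

    #include <linux/workqueue.h>

    static struct workqueue_struct *setup_wq(void)
    {
            struct workqueue_struct *wq;
            struct workqueue_attrs *attrs;

            wq = alloc_workqueue("example_unbound", WQ_UNBOUND, 0);
            if (!wq)
                    return NULL;

            attrs = alloc_workqueue_attrs();
            if (attrs) {
                    attrs->nice = -5;  /* raise backing workers' priority */
                    apply_workqueue_attrs(wq, attrs);
                    free_workqueue_attrs(attrs);
            }
            return wq;
    }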
``alloc_workqueue()`` allocates a wq. The original
``create_*workqueue()`` functions are deprecated and scheduled for
removal.
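``alloc_workqueue()`` is the allocation interface to use instead. A
minimal sketch of allocating and destroying a workqueue (names are
illustrative): ::

    #include <linux/workqueue.h>

    static struct workqueue_struct *wq;

    static int example_init(void)
    {
            /* name, flags, max_active (0 selects the default) */
            wq = alloc_workqueue("example_wq", 0, 0);
            if (!wq)
                    return -ENOMEM;
            return 0;
    }

    static void example_exit(void)
    {
            destroy_workqueue(wq);
    }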
Users with special requirements should allocate a
dedicated workqueue rather than the system wq.
Affinity Scopes
===============

An unbound workqueue groups CPUs according to its affinity scope to improve
cache locality. For example, if a workqueue is using the default affinity
scope of "cache", it will group CPUs according to last-level cache
boundaries. A work item queued on the workqueue will be assigned to a worker
on one of the CPUs which share the last-level cache with the issuing CPU.
Workqueue currently supports the following affinity scopes.

``default``
	Use the scope in module parameter ``workqueue.default_affinity_scope``
	which is always set to one of the scopes below.
``system``
	All CPUs are put in the same group. Workqueue makes no effort to process a
	work item on a CPU close to the issuing CPU.
The default affinity scope can be changed with the module parameter
``workqueue.default_affinity_scope``, and a specific workqueue's affinity
scope can be changed using ``apply_workqueue_attrs()``.
If ``WQ_SYSFS`` is set, the workqueue will have the following affinity scope
related interface files under its ``/sys/devices/virtual/workqueue/WQ_NAME/``
directory.
``affinity_strict``
	0 by default, indicating that affinity scopes are not strict. When a work
	item starts execution, workqueue makes a best-effort attempt to ensure
	that the worker is inside its affinity scope, but the worker may later
	run outside the scope if the scheduler so decides. Writing 1 makes the
	scope strict: all workers are confined to the scope, which can be useful
	for CPU isolation. Strict NUMA scope can also be used to match the
	workqueue behavior of older kernels.
It'd be ideal if an unbound workqueue's behavior were optimal for the vast
majority of use cases with minimal configuration.
* While the loss of work-conservation in certain scenarios hurts, it is a lot
  better than "cache (strict)" and maximizing workqueue utilization is
  unlikely to be the common case anyway.

* As there is no one option which is great for most cases, workqueue usages
  that may consume a significant amount of CPU are recommended to configure
  the workqueues using ``apply_workqueue_attrs()`` and/or enable ``WQ_SYSFS``.
* An unbound workqueue with strict "cpu" affinity scope behaves the same as a
  ``WQ_CPU_INTENSIVE`` per-cpu workqueue. There is no real advantage to the
  latter and an unbound workqueue provides a lot more flexibility.
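The two alternatives compared above are allocated as follows. This is a
sketch with illustrative names; configuring the strict "cpu" scope on
the unbound workqueue is done via attrs or the sysfs
``affinity_scope``/``affinity_strict`` files when ``WQ_SYSFS`` is set: ::

    #include <linux/workqueue.h>

    /* Per-cpu workqueue whose work items may hog the CPU; its work
     * items are excluded from concurrency management. */
    static struct workqueue_struct *cpu_intensive_wq;

    /* Unbound workqueue; with a strict "cpu" affinity scope it behaves
     * like the per-cpu one above but can be reconfigured at runtime. */
    static struct workqueue_struct *unbound_wq;

    static int example_init(void)
    {
            cpu_intensive_wq = alloc_workqueue("ex_cpu_intensive",
                                               WQ_CPU_INTENSIVE, 0);
            unbound_wq = alloc_workqueue("ex_unbound",
                                         WQ_UNBOUND | WQ_SYSFS, 0);
            if (!cpu_intensive_wq || !unbound_wq)
                    return -ENOMEM;
            return 0;
    }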
Use tools/workqueue/wq_dump.py to examine unbound CPU affinity
configuration, worker pools and how workqueues map to the pools: ::

  $ tools/workqueue/wq_dump.py
  Workqueue CPU -> pool
  =====================
  [    workqueue \ CPU              0  1  2  3 dfl]
Use tools/workqueue/wq_monitor.py to monitor workqueue operations: ::

  $ tools/workqueue/wq_monitor.py events
Work item queueing can be traced with the ``workqueue_queue_work``
tracepoint: ::

  $ echo workqueue:workqueue_queue_work > /sys/kernel/tracing/set_event
Non-reentrance Conditions
=========================

Workqueue guarantees that a work item cannot be re-entrant if the following
conditions hold after a work item gets queued:

1. The work function hasn't changed.
2. No one queues the work item to another workqueue.
3. The work item hasn't been reinitiated.

In other words, if the above conditions hold, the work item is guaranteed to
be executed by at most one worker system-wide at any given time.
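A sketch of how a driver can rely on this guarantee (hypothetical
names): ::

    #include <linux/workqueue.h>

    static void my_handler(struct work_struct *work)
    {
            /* Because the conditions above hold (same function, same
             * workqueue, no re-initialization), this function never runs
             * concurrently with itself for this work item, even if
             * queue_work() is called from many CPUs. */
    }

    static DECLARE_WORK(my_work, my_handler);

    void kick(void)
    {
            /* Safe from any context; concurrent calls at most coalesce
             * into one pending execution. */
            queue_work(system_wq, &my_work);
    }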
.. kernel-doc:: include/linux/workqueue.h

.. kernel-doc:: kernel/workqueue.c