Lines Matching full:workqueues

22  * pools for workqueues which are not bound to any specific CPU - the
337 struct list_head list; /* PR: list of all workqueues */
374 * the workqueues list without grabbing wq_pool_mutex.
375 * This is used to dump all workqueues from sysrq.
386 * Each pod type describes how CPUs should be grouped for unbound workqueues.
440 static DEFINE_MUTEX(wq_pool_mutex); /* protects pools and workqueues list */
446 static LIST_HEAD(workqueues); /* PR: list of all workqueues */
1294 * workqueues as appropriate. To avoid flooding the console, each violating work
1546 * - %NULL for per-cpu workqueues as they don't need to use shared nr_active.
1806 * This function should only be called for ordered workqueues where only the
1928 * For unbound workqueues, this function may temporarily drop @pwq->pool->lock.
1939 * workqueues. in pwq_dec_nr_active()
1982 * For unbound workqueues, this function may temporarily drop @pwq->pool->lock
2457 * This current implementation is specific to unbound workqueues. in queue_work_node()
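
The hit at 2457 is from queue_work_node(), which queues a work item while preferring CPUs of a given NUMA node; as the comment notes, the node hint currently only matters for unbound workqueues. A minimal, hypothetical usage sketch (node_work/node_work_fn/kick_on_node are invented names, not from the source):

    #include <linux/workqueue.h>

    static void node_work_fn(struct work_struct *work)
    {
            /* Work body; runs preferably on a CPU of the requested node. */
    }
    static DECLARE_WORK(node_work, node_work_fn);

    static bool kick_on_node(struct workqueue_struct *wq, int node)
    {
            /* On unbound workqueues this prefers a CPU of @node; otherwise
             * it behaves like a plain queue_work(). */
            return queue_work_node(node, wq, &node_work);
    }
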
3234 * workqueues), so hiding them isn't a problem. in process_one_work()
3340 * exception is work items which belong to workqueues with a rescuer which
3433 * workqueues which have works queued on the pool and let them process
3775 * BH and threaded workqueues need separate lockdep keys to avoid in insert_wq_barrier()
4182 * For single threaded workqueues the deadlock happens when the work in start_flush_work()
4184 * workqueues the deadlock happens when the rescuer stalls, blocking in start_flush_work()
5311 * For initialized ordered workqueues, there should only be one pwq in apply_wqattrs_prepare()
5360 /* only unbound workqueues can change attributes */ in apply_workqueue_attrs_locked()
5417 * may execute on any CPU. This is similar to how per-cpu workqueues behave on
5551 * Workqueues which may be used during memory reclaim should have a rescuer
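
The hit at 5551 concerns rescuers: a workqueue that may run as part of memory reclaim needs one, which is requested by passing WQ_MEM_RECLAIM to alloc_workqueue(). A hedged sketch, with hypothetical driver names:

    #include <linux/workqueue.h>

    static struct workqueue_struct *reclaim_wq;

    static int my_driver_init(void)
    {
            /* WQ_MEM_RECLAIM creates a rescuer up front so queued work can
             * make forward progress even when no new worker kthread can be
             * spawned under memory pressure. */
            reclaim_wq = alloc_workqueue("my_reclaim_wq", WQ_MEM_RECLAIM, 0);
            if (!reclaim_wq)
                    return -ENOMEM;
            return 0;
    }
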
5701 * BH workqueues always share a single execution context per CPU in __alloc_workqueue()
5731 * wq_pool_mutex protects the workqueues list, allocations of PWQs, in __alloc_workqueue()
5743 list_add_tail_rcu(&wq->list, &workqueues); in __alloc_workqueue()
5935 /* max_active doesn't mean anything for BH workqueues */ in workqueue_set_max_active()
5938 /* disallow meddling with max_active for ordered workqueues */ in workqueue_set_max_active()
5961 * Set min_active of an unbound workqueue. Unlike other types of workqueues, an
5972 /* min_active is only meaningful for non-ordered unbound workqueues */ in workqueue_set_min_active()
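
The hits at 5935-5972 are from workqueue_set_max_active() and workqueue_set_min_active(): max_active means nothing for BH workqueues and may not be changed on ordered ones, and min_active is only meaningful for non-ordered unbound workqueues. A hypothetical tuning sketch using the long-standing max_active setter:

    #include <linux/workqueue.h>

    static struct workqueue_struct *tuned_wq;

    static int tuned_wq_init(void)
    {
            tuned_wq = alloc_workqueue("tuned_wq", WQ_UNBOUND, 0);
            if (!tuned_wq)
                    return -ENOMEM;

            /* Allow up to 16 work items from this workqueue in flight;
             * rejected for ordered and meaningless for BH workqueues. */
            workqueue_set_max_active(tuned_wq, 16);
            return 0;
    }
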
6025 * With the exception of ordered workqueues, all workqueues have per-cpu
6400 * Called from a sysrq handler and prints out all busy workqueues and pools.
6410 pr_info("Showing busy workqueues and worker pools:\n"); in show_all_workqueues()
6412 list_for_each_entry_rcu(wq, &workqueues, list) in show_all_workqueues()
6424 * Called from try_to_freeze_tasks() and prints out all freezable workqueues
6433 pr_info("Showing freezable workqueues that are still busy:\n"); in show_freezable_workqueues()
6435 list_for_each_entry_rcu(wq, &workqueues, list) { in show_freezable_workqueues()
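
The two dump helpers above are declared in <linux/workqueue.h>: show_all_workqueues() backs the sysrq output, and show_freezable_workqueues() is used when freezing tasks stalls. A hedged sketch of calling them from a debugging hook (the hook itself is hypothetical):

    #include <linux/workqueue.h>

    static void my_debug_dump(void)
    {
            /* Prints every busy workqueue and worker pool ... */
            show_all_workqueues();
            /* ... and the freezable workqueues that are still busy. */
            show_freezable_workqueues();
    }
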
6666 /* update pod affinity of unbound workqueues */ in workqueue_online_cpu()
6667 list_for_each_entry(wq, &workqueues, list) { in workqueue_online_cpu()
6697 /* update pod affinity of unbound workqueues */ in workqueue_offline_cpu()
6702 list_for_each_entry(wq, &workqueues, list) { in workqueue_offline_cpu()
6790 * freeze_workqueues_begin - begin freezing workqueues
6792 * Start freezing workqueues. After this function returns, all freezable
6793 * workqueues will queue new works to their inactive_works list instead of
6808 list_for_each_entry(wq, &workqueues, list) { in freeze_workqueues_begin()
6818 * freeze_workqueues_busy - are freezable workqueues still busy?
6827 * %true if some freezable workqueues are still busy. %false if freezing
6840 list_for_each_entry(wq, &workqueues, list) { in freeze_workqueues_busy()
6864 * thaw_workqueues - thaw workqueues
6866 * Thaw workqueues. Normal queueing is restored and all collected
6884 list_for_each_entry(wq, &workqueues, list) { in thaw_workqueues()
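
The hits at 6790-6884 cover the freezer interface used by suspend: freeze_workqueues_begin() makes freezable workqueues stop starting new work, freeze_workqueues_busy() reports whether any still have work in flight, and thaw_workqueues() restores normal queueing. A rough sketch of the sequence, modeled on the suspend path (my_freeze_all is hypothetical and error handling is simplified):

    #include <linux/delay.h>
    #include <linux/sched/signal.h>
    #include <linux/workqueue.h>

    static int my_freeze_all(void)
    {
            freeze_workqueues_begin();

            /* Poll until no freezable workqueue has work in flight. */
            while (freeze_workqueues_busy()) {
                    if (fatal_signal_pending(current)) {
                            thaw_workqueues();
                            return -EBUSY;
                    }
                    msleep(10);
            }
            return 0;       /* thaw_workqueues() runs again on resume */
    }
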
6904 list_for_each_entry(wq, &workqueues, list) { in workqueue_apply_unbound_cpumask()
6995 list_for_each_entry(wq, &workqueues, list) { in wq_affn_dfl_set()
7020 * Workqueues with the WQ_SYSFS flag set are visible to userland via
7021 * /sys/bus/workqueue/devices/WQ_NAME. All visible workqueues have the
7027 * Unbound workqueues have the following extra attributes.
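
The hits at 7020-7027 describe the sysfs interface: a workqueue created with WQ_SYSFS appears under /sys/bus/workqueue/devices/<name>, and unbound ones additionally expose attributes such as nice and cpumask. A hypothetical example:

    #include <linux/workqueue.h>

    static struct workqueue_struct *visible_wq;

    static int visible_wq_init(void)
    {
            /* Appears as /sys/bus/workqueue/devices/visible_wq, where
             * userland can tune e.g. the cpumask and nice attributes. */
            visible_wq = alloc_workqueue("visible_wq",
                                         WQ_UNBOUND | WQ_SYSFS, 0);
            return visible_wq ? 0 : -ENOMEM;
    }
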
7265 * The low-level workqueues cpumask is a global cpumask that limits
7266 * the affinity of all unbound workqueues. This function checks the @cpumask
7267 * and applies it to all unbound workqueues, updating all of their pwqs.
7388 * ordered workqueues. in workqueue_sysfs_register()
7729 * up. It sets up all the data structures and system workqueues and allows early
7730 * boot code to create workqueues and queue/cancel work items. Actual work item
7883 * and invoked as soon as kthreads can be created and scheduled. Workqueues have
7900 * up. Also, create a rescuer for workqueues that requested it. in workqueue_init()
7909 list_for_each_entry(wq, &workqueues, list) { in workqueue_init()
8004 * workqueue_init_topology - initialize CPU pods for unbound workqueues
8025 * Workqueues allocated earlier would have all CPUs sharing the default in workqueue_init_topology()
8029 list_for_each_entry(wq, &workqueues, list) { in workqueue_init_topology()
8044 pr_warn("WARNING: Flushing system-wide workqueues will be prohibited in near future.\n"); in __warn_flushing_systemwide_wq()
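
The warning at 8044 is printed when something flushes a shared system workqueue (for example flush_scheduled_work()), which waits on every user of that workqueue and is being phased out. The usual fix is to give the driver its own workqueue and flush only that; a hypothetical sketch:

    #include <linux/workqueue.h>

    static struct workqueue_struct *my_wq;  /* allocated at probe time */

    static void my_remove(void)
    {
            /* Waits only for work queued on my_wq, not for unrelated users
             * of system_wq, so the system-wide flush warning never fires. */
            flush_workqueue(my_wq);
            destroy_workqueue(my_wq);
    }
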