Lines matching "workqueues" (kernel/workqueue.c)

22  * pools for workqueues which are not bound to any specific CPU - the
342 struct list_head list; /* PR: list of all workqueues */
379 * the workqueues list without grabbing wq_pool_mutex.
380 * This is used to dump all workqueues from sysrq.
391 * Each pod type describes how CPUs should be grouped for unbound workqueues.
445 static DEFINE_MUTEX(wq_pool_mutex); /* protects pools and workqueues list */
451 static LIST_HEAD(workqueues); /* PR: list of all workqueues */
1313 * workqueues as appropriate. To avoid flooding the console, each violating work
1565 * - %NULL for per-cpu workqueues as they don't need to use shared nr_active.
1822 * This function should only be called for ordered workqueues where only the
1944 * For unbound workqueues, this function may temporarily drop @pwq->pool->lock.
1955 * workqueues. in pwq_dec_nr_active()
1998 * For unbound workqueues, this function may temporarily drop @pwq->pool->lock
2470 * This current implementation is specific to unbound workqueues. in queue_work_node()
3272 * workqueues), so hiding them isn't a problem. in process_one_work()
3380 * exception is work items which belong to workqueues with a rescuer which
3529 * workqueues which have works queued on the pool and let them process
3859 * BH and threaded workqueues need separate lockdep keys to avoid in insert_wq_barrier()
4266 * For single threaded workqueues the deadlock happens when the work in start_flush_work()
4268 * workqueues the deadlock happens when the rescuer stalls, blocking in start_flush_work()
5411 * For initialized ordered workqueues, there should only be one pwq in apply_wqattrs_prepare()
5455 /* only unbound workqueues can change attributes */ in apply_workqueue_attrs_locked()
5512 * may execute on any CPU. This is similar to how per-cpu workqueues behave on
5646 * Workqueues which may be used during memory reclaim should have a rescuer
5799 * BH workqueues always share a single execution context per CPU in __alloc_workqueue()
5829 * wq_pool_mutex protects the workqueues list, allocations of PWQs, in __alloc_workqueue()
5841 list_add_tail_rcu(&wq->list, &workqueues); in __alloc_workqueue()
6038 /* max_active doesn't mean anything for BH workqueues */ in workqueue_set_max_active()
6041 /* disallow meddling with max_active for ordered workqueues */ in workqueue_set_max_active()
6064 * Set min_active of an unbound workqueue. Unlike other types of workqueues, an
6075 /* min_active is only meaningful for non-ordered unbound workqueues */ in workqueue_set_min_active()
6128 * With the exception of ordered workqueues, all workqueues have per-cpu
6503 * Called from a sysrq handler and prints out all busy workqueues and pools.
6513 pr_info("Showing busy workqueues and worker pools:\n"); in show_all_workqueues()
6515 list_for_each_entry_rcu(wq, &workqueues, list) in show_all_workqueues()
6527 * Called from try_to_freeze_tasks() and prints out all freezable workqueues
6536 pr_info("Showing freezable workqueues that are still busy:\n"); in show_freezable_workqueues()
6538 list_for_each_entry_rcu(wq, &workqueues, list) { in show_freezable_workqueues()
6769 /* update pod affinity of unbound workqueues */ in workqueue_online_cpu()
6770 list_for_each_entry(wq, &workqueues, list) { in workqueue_online_cpu()
6800 /* update pod affinity of unbound workqueues */ in workqueue_offline_cpu()
6805 list_for_each_entry(wq, &workqueues, list) { in workqueue_offline_cpu()
6868 * freeze_workqueues_begin - begin freezing workqueues
6870 * Start freezing workqueues. After this function returns, all freezable
6871 * workqueues will queue new works to their inactive_works list instead of
6886 list_for_each_entry(wq, &workqueues, list) { in freeze_workqueues_begin()
6896 * freeze_workqueues_busy - are freezable workqueues still busy?
6905 * %true if some freezable workqueues are still busy. %false if freezing
6918 list_for_each_entry(wq, &workqueues, list) { in freeze_workqueues_busy()
6942 * thaw_workqueues - thaw workqueues
6944 * Thaw workqueues. Normal queueing is restored and all collected
6962 list_for_each_entry(wq, &workqueues, list) { in thaw_workqueues()
6982 list_for_each_entry(wq, &workqueues, list) { in workqueue_apply_unbound_cpumask()
7009 list_for_each_entry(wq, &workqueues, list) { in workqueue_apply_unbound_cpumask()
7094 list_for_each_entry(wq, &workqueues, list) { in wq_affn_dfl_set()
7119 * Workqueues with WQ_SYSFS flag set are visible to userland via
7120 * /sys/bus/workqueue/devices/WQ_NAME. All visible workqueues have the
7126 * Unbound workqueues have the following extra attributes.
7364 * The low-level workqueues cpumask is a global cpumask that limits
7365 * the affinity of all unbound workqueues. This function checks @cpumask
7366 * and applies it to all unbound workqueues, updating all of their pwqs.
7487 * ordered workqueues. in workqueue_sysfs_register()
7859 * up. It sets up all the data structures and system workqueues and allows early
7860 * boot code to create workqueues and queue/cancel work items. Actual work item
8015 * and invoked as soon as kthreads can be created and scheduled. Workqueues have
8032 * up. Also, create a rescuer for workqueues that requested it. in workqueue_init()
8041 list_for_each_entry(wq, &workqueues, list) { in workqueue_init()
8136 * workqueue_init_topology - initialize CPU pods for unbound workqueues
8157 * Workqueues allocated earlier would have all CPUs sharing the default in workqueue_init_topology()
8161 list_for_each_entry(wq, &workqueues, list) { in workqueue_init_topology()
8176 pr_warn("WARNING: Flushing system-wide workqueues will be prohibited in the near future.\n"); in __warn_flushing_systemwide_wq()