Lines matching "workers" (full-word) in kernel/workqueue.c

66 	 * While associated (!DISASSOCIATED), all workers are bound to the
70 * While DISASSOCIATED, the cpu may be offline and all workers have
83 POOL_DISASSOCIATED = 1 << 2, /* cpu can't serve workers */
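These three hits describe the pool association state machine: while associated, workers are bound to the pool's CPU; POOL_DISASSOCIATED marks a pool whose CPU may be offline. A minimal sketch of the usual flag test (the helper name is hypothetical; the flag and field are real):

	static bool pool_is_associated(struct worker_pool *pool)
	{
		/* ->flags is protected by pool->lock / wq_pool_attach_mutex */
		return !(pool->flags & POOL_DISASSOCIATED);
	}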
121 * Rescue workers are used only on emergencies and shared by
204 int nr_workers; /* L: total number of workers */
205 int nr_idle; /* L: currently idle workers */
207 struct list_head idle_list; /* L: list of idle workers */
211 struct timer_list mayday_timer; /* L: SOS timer for workers */
213 /* a worker is either on busy_hash or idle_list, or the manager */
215 /* L: hash of busy workers */
218 struct list_head workers; /* A: attached workers */
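The field hits at 204-218 all live in struct worker_pool. A trimmed sketch of that bookkeeping cluster, keeping only the members the search surfaced (other fields elided; layout follows the line numbers above):

	struct worker_pool {
		...
		int			nr_workers;	/* L: total number of workers */
		int			nr_idle;	/* L: currently idle workers */

		struct list_head	idle_list;	/* L: list of idle workers */
		struct timer_list	mayday_timer;	/* L: SOS timer for workers */

		/* a worker is either on busy_hash or idle_list, or the manager */
		DECLARE_HASHTABLE(busy_hash, BUSY_WORKER_HASH_ORDER);
						/* L: hash of busy workers */

		struct list_head	workers;	/* A: attached workers */
		...
	};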
243 PWQ_STAT_REPATRIATED, /* unbound workers brought back into scope */
572 * for_each_pool_worker - iterate through all workers of a worker_pool
574 * @pool: worker_pool to iterate workers of
582 list_for_each_entry((worker), &(pool)->workers, node) \
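A short usage sketch for this iterator. The locking mirrors how workqueue.c itself uses it (the macro body lockdep-asserts wq_pool_attach_mutex); the loop body here is illustrative, modeled on restore_unbound_workers_cpumask():

	struct worker *worker;

	mutex_lock(&wq_pool_attach_mutex);
	for_each_pool_worker(worker, pool)
		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
						  pool->attrs->cpumask) < 0);
	mutex_unlock(&wq_pool_attach_mutex);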
926 * running workers.
928 * Note that, because unbound workers never contribute to nr_running, this
937 /* Can I start working? Called from busy but !running workers. */
943 /* Do I need to keep working? Called from currently running workers. */
955 /* Do we have too many workers and should some go away? */
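The predicate at 955 is too_many_workers(). A sketch of the heuristic as it has long stood in workqueue.c, assuming MAX_IDLE_WORKERS_RATIO of 4: up to two idle workers are always tolerated, plus roughly one more per four busy ones:

	static bool too_many_workers(struct worker_pool *pool)
	{
		bool managing = pool->flags & POOL_MANAGER_ACTIVE;
		int nr_idle = pool->nr_idle + managing; /* manager counts as idle */
		int nr_busy = pool->nr_workers - nr_idle;

		return nr_idle > 2 && (nr_idle - 2) * MAX_IDLE_WORKERS_RATIO < nr_busy;
	}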
1186 * A single work shouldn't be executed concurrently by multiple workers. in assign_work()
1189 * @work is not executed concurrently by multiple workers from the same in assign_work()
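A condensed sketch of the non-reentrancy check these comments describe; find_worker_executing_work() and move_linked_works() are real workqueue.c internals, and the rest of assign_work() is elided:

	struct worker *collision;

	/* If another worker of this pool is already executing @work,
	 * defer it to that worker's ->scheduled list instead of letting
	 * two workers run the same item concurrently. */
	collision = find_worker_executing_work(pool, work);
	if (unlikely(collision)) {
		move_linked_works(work, &collision->scheduled, NULL);
		return false;
	}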
1261 * now. If this becomes pronounced, we can skip over workers which are in kick_pool()
1426 * workers, also reach here, let's not access anything before in wq_worker_sleeping()
1519 * to sleep. It's used by psi to identify aggregation workers during
2683 * details. BH workers are, while per-CPU, always DISASSOCIATED. in worker_attach_to_pool()
2695 list_add_tail(&worker->node, &pool->workers); in worker_attach_to_pool()
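The list_add_tail() at 2695 is the final step of attaching. A simplified sketch of the sequence around it, eliding the DISASSOCIATED and rescuer special cases (recent trees compute the mask via pool_allowed_cpus(); pool->attrs->cpumask is used here for brevity):

	mutex_lock(&wq_pool_attach_mutex);

	/* make the worker's affinity follow the pool's */
	set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);

	list_add_tail(&worker->node, &pool->workers);
	worker->pool = pool;

	mutex_unlock(&wq_pool_attach_mutex);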
2901 * idle_worker_timeout - check if some idle workers can now be deleted.
2939 * idle_cull_fn - cull workers that have been idle for too long.
2940 * @work: the pool's work for handling these idle workers
2942 * This goes through a pool's idle workers and gets rid of those that have been
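A sketch of the expiry test behind the timer: idle_list is kept in LIFO order, so only the tail entry (idle the longest) needs checking. IDLE_WORKER_TIMEOUT is the real constant (5 minutes); the snippet condenses idle_worker_timeout(), which defers the actual culling to idle_cull_work:

	struct worker *worker;
	unsigned long expires;

	/* idle_list is LIFO; the tail has been idle the longest */
	worker = list_last_entry(&pool->idle_list, struct worker, entry);
	expires = worker->last_active + IDLE_WORKER_TIMEOUT;

	if (time_before(jiffies, expires))
		mod_timer(&pool->idle_timer, expires);	/* re-check later */
	else
		queue_work(system_unbound_wq, &pool->idle_cull_work);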
3130 * interaction with other workers on the same cpu, queueing and
3194 * workers such as the UNBOUND and CPU_INTENSIVE ones. in process_one_work()
3337 * The worker thread function. All workers belong to a worker_pool -
3338 * either a per-cpu one or dynamic unbound one. These workers process all
3407 * manage, sleep. Workers are woken up only while holding in worker_thread()
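The sleep step referenced at 3407 looks roughly like this in worker_thread(): setting the task state before dropping pool->lock is what makes wake-ups race-free, since wakers hold the same lock (condensed from the actual sleep path):

	worker_enter_idle(worker);
	__set_current_state(TASK_IDLE);
	raw_spin_unlock_irq(&pool->lock);
	schedule();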
3459 * pwq(s) queued. This can happen by non-rescuer workers consuming in rescuer_thread()
3522 * Leave this pool. Notify regular workers; otherwise, we end up in rescuer_thread()
3606 bh_worker(list_first_entry(&pool->workers, struct worker, node)); in workqueue_softirq_action()
3633 bh_worker(list_first_entry(&pool->workers, struct worker, node)); in drain_dead_softirq_workfn()
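Both call sites rely on the same invariant: a BH pool keeps exactly one pseudo worker on pool->workers, so list_first_entry() always resolves to it:

	/* the BH pool's only attached worker is its pseudo worker */
	struct worker *worker = list_first_entry(&pool->workers,
						 struct worker, node);
	bh_worker(worker);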
4780 INIT_LIST_HEAD(&pool->workers); in init_worker_pool()
4948 * Become the manager and destroy all workers. This prevents in put_unbound_pool()
4949 * @pool's workers from blocking on attach_mutex. We're the last in put_unbound_pool()
5415 * with a cpumask spanning multiple pods, the workers which were already
6375 pr_cont(" hung=%lus workers=%d", hung, pool->nr_workers); in show_one_worker_pool()
6508 * We've blocked all attach/detach operations. Make all workers in unbind_workers()
6509 * unbound and set DISASSOCIATED. Before this, all workers in unbind_workers()
6526 * are served by workers tied to the pool. in unbind_workers()
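A condensed sketch of the unbind sequence these comments belong to: with attach/detach blocked, every worker is flagged UNBOUND and the pool is marked DISASSOCIATED (the nr_running zapping and final wake-ups of the real function are elided):

	mutex_lock(&wq_pool_attach_mutex);
	raw_spin_lock_irq(&pool->lock);

	for_each_pool_worker(worker, pool)
		worker->flags |= WORKER_UNBOUND;

	pool->flags |= POOL_DISASSOCIATED;

	raw_spin_unlock_irq(&pool->lock);
	mutex_unlock(&wq_pool_attach_mutex);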
6547 * rebind_workers - rebind all workers of a pool to the associated CPU
6550 * @pool->cpu is coming online. Rebind all workers to the CPU.
6559 * Restore CPU affinity of all workers. As all idle workers should in rebind_workers()
6562 * of all workers first and then clear UNBOUND. As we're called in rebind_workers()
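The comment at 6559-6562 prescribes an ordering, sketched below: restore affinity for every worker first, and only then clear UNBOUND under pool->lock (the WORKER_REBOUND bookkeeping of the real function is elided):

	struct worker *worker;

	lockdep_assert_held(&wq_pool_attach_mutex);

	/* 1) restore affinity while the workers are still UNBOUND */
	for_each_pool_worker(worker, pool)
		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
						  pool->attrs->cpumask) < 0);

	raw_spin_lock_irq(&pool->lock);
	pool->flags &= ~POOL_DISASSOCIATED;

	/* 2) ...then let concurrency management resume */
	for_each_pool_worker(worker, pool)
		worker->flags &= ~WORKER_UNBOUND;

	raw_spin_unlock_irq(&pool->lock);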
6603 * restore_unbound_workers_cpumask - restore cpumask of unbound workers
6610 * online CPU before, cpus_allowed of all its workers should be restored.
6691 /* unbinding per-cpu workers should happen on the local CPU */ in workqueue_offline_cpu()
7029 * nice RW int : nice value of the workers
7030 * cpumask RW mask : bitmask of allowed CPUs for the workers
7482 * Show workers that might prevent the processing of pending work items.
7483 * The only candidates are CPU-bound workers in the running state.
7519 pr_info("Showing backtraces of running workers in stalled CPU-bound worker pools:\n"); in show_cpu_pools_hogs()
7886 * workers and enable future kworker creations.
7918 * Create the initial workers. A BH pool has one pseudo worker that in workqueue_init()
7920 * affected by hotplug events. Create the BH pseudo workers for all in workqueue_init()
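In trees that have BH workqueues (v6.9+), the creation loop the comment at 7918-7920 describes looks roughly like this; for_each_bh_worker_pool() is the real iterator, and create_worker() skips task creation for BH pools:

	for_each_possible_cpu(cpu) {
		struct worker_pool *pool;

		for_each_bh_worker_pool(pool, cpu)
			BUG_ON(!create_worker(pool));
	}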