There are many cases where an asynchronous process execution context
is needed and the workqueue (wq) API is the most commonly used
mechanism for such cases.

When such an asynchronous execution context is needed, a work item
describing which function to execute is put on a queue. An
independent thread serves as the asynchronous execution context. The
queue is called workqueue and the thread is called worker.

While there are work items on the workqueue the worker executes the
functions associated with the work items one after the other. When
there is no work item left on the workqueue the worker becomes idle.
When a new work item gets queued, the worker begins executing again.

Why Concurrency Managed Workqueue?
==================================

In the original wq implementation, a multi threaded (MT) wq had one
worker thread per CPU and a single threaded (ST) wq had one worker
thread system-wide. A single MT wq needed to keep around the same
number of workers as the number of CPUs. The kernel grew a lot of MT
wq users over the years and, with the number of CPU cores continuously
rising, some systems saturated the default 32k PID space just booting
up.

Although MT wq wasted a lot of resources, the level of concurrency
provided was unsatisfactory. The limitation was common to both ST and
MT wq, albeit less severe on MT. Each wq maintained its own separate
worker pool. An MT wq could provide only one execution context per CPU
while an ST wq one for the whole system. Work items had to compete for
those very limited execution contexts, leading to various problems
including proneness to deadlocks around the single execution context.

The tension between the provided level of concurrency and resource
usage also forced its users to make unnecessary tradeoffs like libata
choosing to use ST wq for polling PIOs and accepting an unnecessary
limitation that no two polling PIOs can progress at the same time. As
MT wq don't provide much better concurrency, users requiring a higher
level of concurrency, like async or fscache, had to implement their
own thread pool.

Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with
focus on the following goals.

* Maintain compatibility with the original workqueue API.

* Use per-CPU unified worker pools shared by all wq to provide a
  flexible level of concurrency on demand without wasting a lot of
  resources.

* Automatically regulate the worker pool and level of concurrency so
  that the API users don't need to worry about such details.

The Design
==========

In order to ease the asynchronous execution of functions a new
abstraction, the work item, is introduced.

A work item is a simple struct that holds a pointer to the function
that is to be executed asynchronously. Whenever a driver or subsystem
wants a function to be executed asynchronously it has to set up a work
item pointing to that function and queue that work item on a
workqueue.

Special purpose threads, called worker threads, execute the functions
off of the queue, one after the other. If no work is queued, the
worker threads become idle. These worker threads are managed in so
called worker-pools.

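For example, a minimal sketch of defining, initializing and queueing a
work item might look like the following; the structure and function
names are illustrative only and not taken from the text above: ::

  #include <linux/workqueue.h>

  /* Illustrative driver-private structure embedding a work item. */
  struct my_device {
          struct work_struct io_work;
          /* ... other driver state ... */
  };

  /* The function a worker thread will execute asynchronously. */
  static void my_io_work_fn(struct work_struct *work)
  {
          struct my_device *dev = container_of(work, struct my_device, io_work);

          /* perform the deferred processing for dev here */
  }

  static void my_device_init(struct my_device *dev)
  {
          /* Point the work item at the function to execute. */
          INIT_WORK(&dev->io_work, my_io_work_fn);
  }

  static void my_device_kick(struct my_device *dev)
  {
          /* Put the work item on the system workqueue; a worker
           * thread will pick it up and run my_io_work_fn() later. */
          queue_work(system_wq, &dev->io_work);
  }
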
The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
which manages worker-pools and processes the queued work items.

There are two worker-pools, one for normal work items and the other
for high priority ones, for each possible CPU, and some extra
worker-pools to serve work items queued on unbound workqueues - the
number of these backing pools is dynamic.

Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit. They can influence some
aspects of the way the work items are executed by setting flags on the
workqueue they are putting the work item on. These flags include
things like CPU locality, concurrency limits, priority and more. To
get a detailed overview refer to the API description of
``alloc_workqueue()`` below.

When a work item is queued to a workqueue, the target worker-pool is
determined according to the queue parameters and workqueue attributes,
and the work item is appended to the shared worklist of that
worker-pool. For example, unless specifically overridden, a work item
of a bound workqueue will be queued on the worklist of either the
normal or the highpri worker-pool that is associated with the CPU the
issuer is running on.

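As a rough sketch, assuming ``wq`` is a bound workqueue and ``work``
an initialized work item as in the earlier example, queueing normally
targets the issuer's CPU, while ``queue_work_on()`` lets the caller
override the CPU explicitly: ::

  #include <linux/workqueue.h>

  static void queue_on_local_cpu(struct workqueue_struct *wq,
                                 struct work_struct *work)
  {
          /* Appended to the worklist of the normal (or highpri)
           * worker-pool of the CPU this code is running on. */
          queue_work(wq, work);
  }

  static void queue_on_cpu2(struct workqueue_struct *wq,
                            struct work_struct *work)
  {
          /* Explicitly target CPU 2's worker-pool instead of the
           * local CPU; an already pending item is not queued again. */
          queue_work_on(2, wq, work);
  }
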
For any thread pool implementation, managing the concurrency level
(how many execution contexts are active) is an important issue. cmwq
tries to keep the concurrency at a minimal but sufficient level.
Minimal to save resources and sufficient in that the system is used at
its full capacity.

Each worker-pool bound to an actual CPU implements concurrency
management by hooking into the scheduler. The worker-pool is notified
whenever an active worker wakes up or sleeps and keeps track of the
number of currently runnable workers. Generally, work items are
not expected to hog a CPU and consume many cycles. That means
maintaining just enough concurrency to prevent work processing from
stalling should be optimal. As long as there are one or more runnable
workers on the CPU, the worker-pool doesn't start execution of a new
work item, but, when the last running worker goes to sleep, it
immediately schedules a new worker so that the CPU doesn't sit idle
while there are pending work items. This allows using a minimal number
of workers without losing execution bandwidth.

For unbound workqueues, the responsibility of regulating the
concurrency level is on the users. There is also a flag to mark a
bound wq to ignore the concurrency management. Please refer to the
API section for details.

Forward progress guarantee relies on workers being creatable when more
execution contexts are necessary, which in turn is guaranteed through
the use of rescue workers. All work items which might be used on code
paths that handle memory reclaim are required to be queued on wq's
that have a rescue-worker reserved for execution under memory
pressure. Otherwise it is possible that the worker-pool deadlocks
waiting for execution contexts to free up.

Application Programming Interface (API)
=======================================

``alloc_workqueue()`` allocates a wq. The original
``create_*workqueue()`` functions are deprecated and scheduled for
removal. ``alloc_workqueue()`` takes three arguments - ``@name``,
``@flags`` and ``@max_active``. ``@name`` is the name of the wq and is
also used as the name of the rescuer thread if there is one.

A wq no longer manages execution resources but serves as a domain for
forward progress guarantee, flush and work item attributes. ``@flags``
and ``@max_active`` control how work items are assigned execution
resources, scheduled and executed.

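A minimal allocation sketch; the name and flag choices here are
illustrative, not prescribed by the text above: ::

  #include <linux/workqueue.h>

  static struct workqueue_struct *my_wq;

  static int my_subsys_init(void)
  {
          /*
           * @name "my_wq" also names the rescuer thread that is created
           * because of WQ_MEM_RECLAIM; @max_active of 0 selects the
           * default per-CPU limit.
           */
          my_wq = alloc_workqueue("my_wq", WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
          if (!my_wq)
                  return -ENOMEM;
          return 0;
  }
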
``flags``
---------

``WQ_UNBOUND``
  Work items queued to an unbound wq are served by the special
  worker-pools which host workers that are not bound to any specific
  CPU. This makes the wq behave as a simple execution context provider
  without concurrency management. The unbound worker-pools try to
  start execution of work items as soon as possible.

``WQ_MEM_RECLAIM``
  All wq which might be used in the memory reclaim paths **MUST**
  have this flag set. The wq is guaranteed to have at least one
  execution context regardless of memory pressure.

``WQ_HIGHPRI``
  Work items of a highpri wq are queued to the highpri
  worker-pool of the target cpu. Highpri worker-pools are
  served by worker threads with elevated nice level.

  Note that normal and highpri worker-pools don't interact with
  each other. Each maintains its own separate pool of workers and
  implements concurrency management among its workers.

``WQ_CPU_INTENSIVE``
  Work items of a CPU intensive wq do not contribute to the
  concurrency level. In other words, runnable CPU intensive work items
  will not prevent other work items in the same worker-pool from
  starting execution. This is useful for bound work items which are
  expected to hog CPU cycles so that their execution is regulated by
  the system scheduler.

  Although CPU intensive work items don't contribute to the
  concurrency level, start of their execution is still regulated by
  the concurrency management and runnable non-CPU-intensive work items
  can delay execution of CPU intensive work items.

  This flag is meaningless for unbound wq.

Note that the flag ``WQ_NON_REENTRANT`` no longer exists as all
workqueues are now non-reentrant - any work item is guaranteed to be
executed by at most one worker system-wide at any given time.

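As an illustration of how the flags above might be chosen, the sketch
below allocates one unbound wq and one CPU-intensive bound wq; the
names are made up and which flags actually fit depends entirely on the
workload: ::

  #include <linux/workqueue.h>

  static struct workqueue_struct *unbound_wq, *crunch_wq;

  static int my_wqs_init(void)
  {
          /* Long-running work better managed by the system scheduler. */
          unbound_wq = alloc_workqueue("my_unbound_wq", WQ_UNBOUND, 0);

          /* Bound work items expected to hog CPU cycles: exempt them
           * from the concurrency management. */
          crunch_wq = alloc_workqueue("my_crunch_wq", WQ_CPU_INTENSIVE, 0);

          /* Error unwinding omitted for brevity in this sketch. */
          if (!unbound_wq || !crunch_wq)
                  return -ENOMEM;
          return 0;
  }
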
``max_active``
--------------

``@max_active`` determines the maximum number of execution contexts
per CPU which can be assigned to the work items of a wq. For example,
with ``@max_active`` of 16, at most 16 work items of the wq can be
executing at the same time per CPU.

The number of active work items of a wq is usually regulated by the
users of the wq, more specifically, by how many work items the users
may queue at the same time. Unless there is a specific need for
throttling the number of active work items, specifying '0' is
recommended.

Some users depend on the strict execution ordering of ST wq. The
combination of ``@max_active`` of 1 and ``WQ_UNBOUND`` used to
achieve this behavior. Work items on such wq were always queued to the
unbound worker-pools and only one work item could be active at any
given time, thus achieving the same ordering property as ST wq. In the
current implementation the above configuration only guarantees ST
behavior within a given NUMA node. Instead,
``alloc_ordered_workqueue()`` should be used to achieve system-wide
ST behavior.

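For example, a strictly ordered, one-at-a-time workqueue could be
allocated as in the following sketch; the name is illustrative: ::

  #include <linux/workqueue.h>

  static struct workqueue_struct *ordered_wq;

  static int my_ordered_init(void)
  {
          /* Executes at most one work item at a time, in queueing
           * order, system-wide. */
          ordered_wq = alloc_ordered_workqueue("my_ordered_wq", 0);
          if (!ordered_wq)
                  return -ENOMEM;
          return 0;
  }
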
Example Execution Scenarios
===========================

The following example execution scenarios try to illustrate how cmwq
behaves under different configurations.

Work items w0, w1, w2 are queued to a bound wq q0 on the same CPU.
Each burns CPU for a while, sleeps (e.g. waiting for I/O), and may
burn CPU again before finishing. Ignoring all other tasks, work items
and processing overhead, and assuming simple FIFO scheduling, the
following is one highly simplified version of the possible sequences
of events. With the original wq, the single worker executes w0, w1 and
w2 strictly one after the other; whenever the current work item
sleeps, the CPU sits idle and the later work items simply wait. With
cmwq, as soon as the running worker sleeps, another worker starts the
next work item, so w1 can run while w0 sleeps and w2 while w1 sleeps,
up to ``@max_active`` concurrent work items of q0.

Now, let's assume w1 and w2 are queued to a different wq q1 which has
``WQ_CPU_INTENSIVE`` set. Since CPU intensive work items don't
contribute to the concurrency level, w1 and w2 may start as soon as
the CPU is not occupied by a runnable non-CPU-intensive work item and,
once started, they run concurrently with each other under the control
of the system scheduler.

Guidelines
==========

* Do not forget to use ``WQ_MEM_RECLAIM`` if a wq may process work
  items which are used during memory reclaim. Each wq with
  ``WQ_MEM_RECLAIM`` set has an execution context reserved for it. If
  there is dependency among multiple work items used during memory
  reclaim, they should be queued to separate wq each with
  ``WQ_MEM_RECLAIM``.

* Unless strict ordering is required, there is no need to use ST wq.

* A wq serves as a domain for forward progress guarantee
  (``WQ_MEM_RECLAIM``), flush and work item attributes. Work items
  which are not involved in memory reclaim, don't need to be flushed
  as a part of a group of work items, and don't require any special
  attribute, can use one of the system wq (see the example after this
  list). There is no difference in execution characteristics between
  using a dedicated wq and a system wq.

* Unless work items are expected to consume a huge amount of CPU
  cycles, using a bound wq is usually beneficial due to the increased
  level of locality in wq operations and work item execution.

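For instance, a work item that is not involved in memory reclaim and
needs no special attributes can simply use the system wq, as in this
sketch with illustrative names: ::

  #include <linux/workqueue.h>

  static void my_cleanup_fn(struct work_struct *work)
  {
          /* housekeeping that needs no dedicated wq or special attributes */
  }

  static DECLARE_WORK(my_cleanup_work, my_cleanup_fn);

  static void kick_cleanup(void)
  {
          /* Shorthand for queue_work(system_wq, &my_cleanup_work). */
          schedule_work(&my_cleanup_work);
  }
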
Debugging
=========

Because the work functions are executed by generic worker threads
there are a few tricks needed to shed some light on misbehaving
workqueue users.

If kworkers are going crazy (using too much cpu), there are two types
of possible problems:

  1. Something being scheduled in rapid succession
  2. A single work item that consumes lots of cpu cycles

The first one can be tracked using tracing (on older kernels tracefs
may be mounted under ``/sys/kernel/debug/tracing`` instead): ::

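  $ echo workqueue:workqueue_queue_work > /sys/kernel/tracing/set_event
  $ cat /sys/kernel/tracing/trace_pipe > out.txt
  (wait a few secs)
  ^C

If something is busy looping on work queueing, it would be dominating
the output and the offender can be determined with the work item
function.
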
For the second type of problems it should be possible to just check
the stack trace of the offending worker thread: ::

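  $ cat /proc/THE_OFFENDING_KWORKER/stack

The work item's function should be trivially visible in the stack
trace.
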
Kernel Inline Documentations Reference
======================================

.. kernel-doc:: include/linux/workqueue.h

.. kernel-doc:: kernel/workqueue.c