Lines Matching full:work

17 When such an asynchronous execution context is needed, a work item
22 While there are work items on the workqueue, the worker executes the
23 functions associated with the work items one after the other. When
24 there is no work item left on the workqueue, the worker becomes idle.
25 When a new work item gets queued, the worker begins executing again.
43 while an ST wq one for the whole system. Work items had to compete for
72 abstraction, the work item, is introduced.
74 A work item is a simple struct that holds a pointer to the function
76 wants a function to be executed asynchronously it has to set up a work
77 item pointing to that function and queue that work item on a
81 off of the queue, one after the other. If no work is queued, the
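The fragments above describe setting up a work item that points at a function and queueing it for asynchronous execution. A minimal sketch of what that looks like in driver code, using the standard kernel helpers (`my_work_fn` and `my_work` are illustrative names, not from the source):

```c
#include <linux/workqueue.h>

/* The function to be executed asynchronously by a worker. */
static void my_work_fn(struct work_struct *work)
{
	pr_info("running asynchronously in a worker thread\n");
}

/* A work item: a small struct holding a pointer to my_work_fn. */
static DECLARE_WORK(my_work, my_work_fn);

/* Somewhere in the driver: hand the function off for async execution
 * by queueing the work item on the system workqueue. */
static void kick_off(void)
{
	schedule_work(&my_work);
}
```

A worker on the target worker-pool will pick `my_work` off the queue and invoke `my_work_fn` in process context.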
86 subsystems and drivers queue work items on and the backend mechanism
87 which manages worker-pools and processes the queued work items.
89 There are two worker-pools, one for normal work items and the other
91 worker-pools to serve work items queued on unbound workqueues - the
94 Subsystems and drivers can create and queue work items through special
96 aspects of the way the work items are executed by setting flags on the
97 workqueue they are putting the work item on. These flags include
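The lines above mention that execution aspects are controlled by flags set when the workqueue is created. A sketch of creating a workqueue with such flags via `alloc_workqueue()` (the wq name and flag choice are illustrative):

```c
#include <linux/workqueue.h>

static struct workqueue_struct *my_wq;

static int my_driver_init(void)	/* hypothetical init function */
{
	/* WQ_UNBOUND: serve work items from the unbound worker-pools
	 * instead of binding workers to the issuing CPU.
	 * WQ_HIGHPRI or WQ_CPU_INTENSIVE could be or'd in instead,
	 * selecting the highpri pool or exempting the items from
	 * concurrency management, respectively. */
	my_wq = alloc_workqueue("my_wq", WQ_UNBOUND, 0);
	if (!my_wq)
		return -ENOMEM;
	return 0;
}
```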
102 When a work item is queued to a workqueue, the target worker-pool is
105 unless specifically overridden, a work item of a bound workqueue will
118 number of currently runnable workers. Generally, work items are
120 maintaining just enough concurrency to prevent work processing from
123 work, but, when the last running worker goes to sleep, it immediately
125 are pending work items. This allows using a minimal number of workers
142 through the use of rescue workers. All work items which might be used
159 forward progress guarantee, flush and work item attributes. ``@flags``
160 and ``@max_active`` control how work items are assigned execution
168 Work items queued to an unbound wq are served by the special
172 worker-pools try to start execution of work items as soon as
186 suspend operations. Work items on the wq are drained and no
187 new work item starts execution until thawed.
195 Work items of a highpri wq are queued to the highpri
204 Work items of a CPU intensive wq do not contribute to the
206 work items will not prevent other work items in the same
208 work items which are expected to hog CPU cycles so that their
211 Although CPU intensive work items don't contribute to the
214 non-CPU-intensive work items can delay execution of CPU
215 intensive work items.
224 CPU which can be assigned to the work items of a wq. For example, with
225 ``@max_active`` of 16, at most 16 work items of the wq can be executing
234 The number of active work items of a wq is usually regulated by the
235 users of the wq, more specifically, by how many work items the users
237 throttling the number of active work items, specifying '0' is
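The fragments above contrast an explicit ``@max_active`` limit with the recommended default of '0'. A sketch of the two choices (wq names are illustrative):

```c
#include <linux/workqueue.h>

static struct workqueue_struct *throttled_wq, *default_wq;

static int my_setup(void)	/* hypothetical init function */
{
	/* At most 16 work items of this wq may be executing
	 * at the same time per CPU. */
	throttled_wq = alloc_workqueue("throttled_wq", 0, 16);

	/* '0' selects the default limit; recommended unless the
	 * number of in-flight work items actually needs throttling. */
	default_wq = alloc_workqueue("default_wq", 0, 0);

	if (!throttled_wq || !default_wq)
		return -ENOMEM;
	return 0;
}
```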
242 achieve this behavior. Work items on such wq were always queued to the
243 unbound worker-pools and only one work item could be active at any given
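The behavior described above, where work items go to the unbound worker-pools and only one is active at a time, is what an ordered workqueue provides. A sketch using the dedicated helper (the wq name is illustrative):

```c
#include <linux/workqueue.h>

static struct workqueue_struct *ordered_wq;

/* An ordered wq: work items are executed one at a time, in
 * queueing order, by the unbound worker-pools. */
ordered_wq = alloc_ordered_workqueue("my_ordered_wq", 0);
```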
257 Work items w0, w1, w2 are queued to a bound wq q0 on the same CPU.
324 * Do not forget to use ``WQ_MEM_RECLAIM`` if a wq may process work
327 there is dependency among multiple work items used during memory
338 (``WQ_MEM_RECLAIM``, flush and work item attributes. Work items
340 flushed as a part of a group of work items, and don't require any
345 * Unless work items are expected to consume a huge amount of CPU
347 level of locality in wq operations and work item execution.
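One of the guidelines above concerns ``WQ_MEM_RECLAIM`` for work items used on the memory reclaim path. A sketch of creating such a wq, which gets a rescuer worker to guarantee forward progress under memory pressure (the name is illustrative):

```c
#include <linux/workqueue.h>

static struct workqueue_struct *reclaim_wq;

/* WQ_MEM_RECLAIM reserves a rescuer thread so queued work items can
 * make forward progress even when no new worker can be allocated.
 * A small max_active (here 1) keeps the reclaim path lean. */
reclaim_wq = alloc_workqueue("my_reclaim_wq", WQ_MEM_RECLAIM, 1);
```

Note the guideline that follows in the source: work items with dependencies among them should go on separate ``WQ_MEM_RECLAIM`` wqs to avoid deadlock.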
356 boundaries. A work item queued on the workqueue will be assigned to a worker
368 CPUs are not grouped. A work item issued on one CPU is processed by a
386 work item on a CPU close to the issuing CPU.
403 0 by default, indicating that affinity scopes are not strict. When a work
426 Higher locality leads to higher efficiency where more work is performed for
428 cause lower overall system utilization if the work items are not spread
438 Scenario 1: Enough issuers and work spread across the machine
480 Scenario 2: Fewer issuers, enough work for saturation
490 a third of the issuers but is still enough total work to saturate the
513 This is more than enough work to saturate the system. Both "system" and
519 (strict)" to mostly saturate the machine but the loss of work conservation
523 Scenario 3: Even fewer issuers, not enough work to saturate
533 reduced to four, there now isn't enough work to saturate the whole system
568 While the loss of work-conservation in certain scenarios hurts, it is a lot
585 * The loss of work-conservation in non-strict affinity scopes is likely
588 work-conservation in most cases. As such, it is possible that future
703 Because the work functions are executed by generic worker threads
718 2. A single work item that consumes lots of cpu cycles
727 If something is busy looping on work queueing, it would be dominating
728 the output and the offender can be determined with the work item
736 The work item's function should be trivially visible in the stack
743 Workqueue guarantees that a work item cannot be re-entrant if the following
744 conditions hold after a work item gets queued:
746 1. The work function hasn't been changed.
747 2. No one queues the work item to another workqueue.
748 3. The work item hasn't been reinitiated.
750 In other words, if the above conditions hold, the work item is guaranteed to be
753 Note that requeuing the work item (to the same queue) in the self function
755 required when breaking the conditions inside a work function.
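The lines above note that requeueing a work item to the same queue from within its own work function is safe under the non-reentrancy conditions. A sketch of that pattern (`have_more_work()` is a hypothetical helper, not a kernel API):

```c
#include <linux/workqueue.h>

static void self_requeue_fn(struct work_struct *work)
{
	/* ... process one unit of work ... */

	/* Requeueing to the same queue from the work function itself
	 * keeps the function and queue unchanged, so the
	 * non-reentrancy guarantee still holds. */
	if (have_more_work())		/* hypothetical condition */
		schedule_work(work);
}
```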