====
NAPI
====

NAPI is the event handling mechanism used by the Linux networking stack.
The name NAPI no longer stands for anything in particular [#]_.

In basic operation the device notifies the host about new events
via an interrupt.
The host then schedules a NAPI instance to process the events.
The device may also be polled for events via NAPI without receiving
interrupts first (busy polling, described below).

NAPI processing usually happens in the software interrupt context,
but there is an option to use separate kernel threads
for NAPI processing.

All in all NAPI abstracts away from the drivers the context and configuration
of event (packet Rx and Tx) processing.

Driver API
==========

The two most important elements of NAPI are the struct napi_struct
and the associated poll method. struct napi_struct holds the state
of the NAPI instance while the method is the driver-specific event
handler. The method will typically free Tx packets that have been
transmitted and process newly received packets.

netif_napi_add() and netif_napi_del() add/remove a NAPI instance
from the system. The instances are attached to the netdevice passed
as an argument (and will be deleted automatically when the netdevice
is unregistered). Instances are added in a disabled state.

napi_enable() and napi_disable() manage the disabled state.
A disabled NAPI can't be scheduled and its poll method is guaranteed
to not be invoked. napi_disable() waits for ownership of the NAPI
instance to be released.
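
As a rough illustration only (not taken from any particular driver),
the sketch below shows how these calls typically fit into a driver's
open/stop paths. The ``mydrv`` names and the per-vector structure are
hypothetical, the three-argument netif_napi_add() of recent kernels is
assumed, and mydrv_poll() is the poll method shown further below:

.. code-block:: c

  /* Hypothetical per-vector structure; all mydrv_* names are illustrative. */
  struct mydrv_vector {
      struct napi_struct napi;
      int idx;
  };

  struct mydrv_priv {
      struct mydrv_vector vec[1];
  };

  /* The poll method, shown in the datapath example below. */
  static int mydrv_poll(struct napi_struct *napi, int budget);

  static int mydrv_open(struct net_device *netdev)
  {
      struct mydrv_priv *priv = netdev_priv(netdev);
      struct mydrv_vector *v = &priv->vec[0];

      /* Attach the instance to the netdevice; it starts disabled. */
      netif_napi_add(netdev, &v->napi, mydrv_poll);
      /* Allow the instance to be scheduled. */
      napi_enable(&v->napi);
      return 0;
  }

  static int mydrv_stop(struct net_device *netdev)
  {
      struct mydrv_priv *priv = netdev_priv(netdev);
      struct mydrv_vector *v = &priv->vec[0];

      /* Waits until ownership of the instance is released. */
      napi_disable(&v->napi);
      netif_napi_del(&v->napi);
      return 0;
  }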

napi_schedule() is the basic method of scheduling a NAPI poll.
Drivers should call this function in their interrupt handler.
A successful call to napi_schedule()
will take ownership of the NAPI instance.

Later, after NAPI is scheduled, the driver's poll method will be
called to process the events/packets. The method takes a ``budget``
argument - drivers can process completions for any number of Tx
packets but should only process up to ``budget`` number of Rx packets.

The poll method returns the amount of work done. If the driver still
has outstanding work to do (e.g. ``budget`` was exhausted) the poll
method should return exactly ``budget``. In that case,
the NAPI instance will be serviced/polled again (without the
need for an IRQ). If event processing has been completed the poll
method should call napi_complete_done() before returning;
napi_complete_done() releases the ownership of the instance.
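
Again purely as an illustration, and using the same hypothetical
``mydrv`` names as above (plus made-up mydrv_clean_tx(), mydrv_clean_rx()
and mydrv_reenable_irq() helpers), the interrupt handler and poll method
usually look roughly like this:

.. code-block:: c

  static irqreturn_t mydrv_irq(int irq, void *data)
  {
      struct mydrv_vector *v = data;

      /* Defer the actual event processing to NAPI. */
      napi_schedule(&v->napi);
      return IRQ_HANDLED;
  }

  static int mydrv_poll(struct napi_struct *napi, int budget)
  {
      struct mydrv_vector *v = container_of(napi, struct mydrv_vector, napi);
      int work_done;

      /* Tx completions are not bounded by the budget. */
      mydrv_clean_tx(v);
      /* Rx processing must not exceed the budget. */
      work_done = mydrv_clean_rx(v, budget);

      /* Out of budget - ask to be polled again by returning it in full. */
      if (work_done == budget)
          return budget;

      /* All events handled - release ownership of the instance and
       * re-arm device interrupts (device specific). */
      if (napi_complete_done(napi, work_done))
          mydrv_reenable_irq(v);
      return work_done;
  }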

Drivers should keep the interrupts masked after scheduling
the NAPI instance - until NAPI polling finishes any further
interrupts are unnecessary. Drivers which have to mask the interrupts
explicitly (as opposed to the IRQ being auto-masked by the device)
should use the napi_schedule_prep() and __napi_schedule() calls:

.. code-block:: c

  if (napi_schedule_prep(&v->napi)) {
      mydrv_mask_rxtx_irq(v->idx);
      /* schedule after masking to avoid races */
      __napi_schedule(&v->napi);
  }

The IRQ should only be unmasked after a successful call to
napi_complete_done():

.. code-block:: c

  if (budget && napi_complete_done(&v->napi, work_done)) {
      mydrv_unmask_rxtx_irq(v->idx);
      /* never return the full budget after completing NAPI */
      return min(work_done, budget - 1);
  }

Modern devices have multiple NAPI instances (struct napi_struct) per
interface. There is no strong requirement on how the instances are
mapped to queues and interrupts. NAPI is primarily a polling/processing
abstraction without specific user-facing semantics. That said, most
networking devices end up using NAPI in fairly similar ways.

NAPI instances most often correspond 1:1:1 to interrupts and queue pairs
(a queue pair is a set of a single Rx and a single Tx queue).

In less common cases a NAPI instance may be used for multiple queues,
or Rx and Tx queues can be serviced by separate NAPI instances on a single
core. Regardless of the queue assignment, however, there is usually still
a 1:1 mapping between NAPI instances and interrupts.

It's worth noting that the ethtool API uses a "channel" terminology where
each channel can be either ``rx``, ``tx`` or ``combined``. It's not always
clear what constitutes a channel; the recommended interpretation is to treat
a channel as an IRQ/NAPI which services queues of a given type. For example,
a configuration of 1 ``rx``, 1 ``tx`` and 1 ``combined`` channel is expected
to utilize 3 interrupts, 2 Rx and 2 Tx queues.

User API
========

User interactions with NAPI depend on the NAPI instance ID. The instance IDs
are only visible to the user through the ``SO_INCOMING_NAPI_ID`` socket
option.
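
For instance, an application might read the ID of the NAPI instance which
delivered the most recently received packets like this (a minimal sketch;
``SO_INCOMING_NAPI_ID`` must be provided by the kernel/libc headers in use):

.. code-block:: c

  #include <stdio.h>
  #include <sys/socket.h>

  /* Print the NAPI instance ID recorded on a connected socket
   * (0 means no ID has been recorded yet). */
  static void print_incoming_napi_id(int fd)
  {
      unsigned int napi_id = 0;
      socklen_t len = sizeof(napi_id);

      if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len) == 0)
          printf("NAPI ID: %u\n", napi_id);
  }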

NAPI does not perform any explicit event coalescing by default.
In most scenarios batching happens due to IRQ coalescing which is done
by the device. There are cases where software coalescing is helpful.

NAPI can be configured to arm a repoll timer instead of unmasking
the hardware interrupts as soon as all packets are processed.
The ``gro_flush_timeout`` sysfs configuration of the netdevice
is reused to control the delay of the timer, while
``napi_defer_hard_irqs`` controls the number of consecutive empty polls
before NAPI gives up and goes back to using hardware IRQs.
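
As an illustration, the two knobs could be set from user space as below.
The interface name ``eth0`` and the values are only examples;
``gro_flush_timeout`` takes nanoseconds:

.. code-block:: c

  #include <stdio.h>

  /* Write a single value to a sysfs attribute. */
  static int write_sysfs(const char *path, const char *val)
  {
      FILE *f = fopen(path, "w");

      if (!f)
          return -1;
      fputs(val, f);
      return fclose(f);
  }

  int main(void)
  {
      /* Arm a 20us repoll timer instead of unmasking the IRQ... */
      write_sysfs("/sys/class/net/eth0/gro_flush_timeout", "20000");
      /* ...and allow up to 2 consecutive empty polls before giving up. */
      write_sysfs("/sys/class/net/eth0/napi_defer_hard_irqs", "2");
      return 0;
  }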

Busy polling allows a user process to check for incoming packets before
the device interrupt fires. As is the case with any busy polling it trades
off CPU cycles for lower latency (production uses of NAPI busy polling
are not well known).

Busy polling is enabled by either setting ``SO_BUSY_POLL`` on
selected sockets or using the global ``net.core.busy_poll`` and
``net.core.busy_read`` sysctls. An io_uring API for NAPI busy polling
also exists.

The NAPI budget for busy polling is lower than the default (which makes
sense given the low latency intention of normal busy polling).
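
A minimal per-socket sketch, using an arbitrary 50 microsecond value
(newer kernels additionally provide ``SO_PREFER_BUSY_POLL`` and
``SO_BUSY_POLL_BUDGET`` for finer control):

.. code-block:: c

  #include <sys/socket.h>

  /* Busy poll for up to 50 microseconds on blocking reads of this socket. */
  static int enable_busy_poll(int fd)
  {
      int usecs = 50;

      return setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL, &usecs, sizeof(usecs));
  }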

Threaded NAPI
-------------

Threaded NAPI is an operating mode that uses dedicated kernel
threads rather than software IRQ context for NAPI processing.
The configuration is per netdevice and will affect all
NAPI instances of that device. Each NAPI instance will spawn a separate
thread (called ``napi/${ifc-name}-${napi-id}``).

It is recommended to pin each kernel thread to a single CPU, the same
CPU as the one which services the interrupt. Note that the mapping
between IRQs and NAPI instances may not be trivial (and is driver
dependent). The NAPI instance IDs will be assigned in the opposite
order than the process IDs of the kernel threads.

Threaded NAPI is controlled by writing 0/1 to the ``threaded`` file in
the netdev's sysfs directory.
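
For example (a sketch, with ``eth0`` standing in for a real interface):

.. code-block:: c

  #include <stdio.h>

  int main(void)
  {
      FILE *f = fopen("/sys/class/net/eth0/threaded", "w");

      if (!f)
          return 1;
      fputs("1", f);           /* 0 disables, 1 enables threaded NAPI */
      return fclose(f) ? 1 : 0;
  }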

.. [#] NAPI was originally referred to as New API in 2.4 Linux.