====
NAPI
====
NAPI is the event handling mechanism used by the Linux networking stack.
The name NAPI no longer stands for anything in particular [#]_.
In basic operation the device notifies the host about new events via an
interrupt; the host then schedules a NAPI instance to process the events.
The device may also be polled for events via NAPI without receiving
interrupts first.
NAPI processing usually happens in the software interrupt context,
but there is an option to use separate kernel threads
for NAPI processing.
All in all NAPI abstracts away from the drivers the context and configuration
of event (packet Rx and Tx) processing.
The two most important elements of NAPI are the struct napi_struct
and the associated poll method. struct napi_struct holds the state
of the NAPI instance while the method is the driver-specific event
handler.
netif_napi_add() and netif_napi_del() add/remove a NAPI instance
from the system. Instances are added in a disabled state.
napi_enable() and napi_disable() manage the disabled state.
A disabled NAPI can't be scheduled and its poll method is guaranteed
to not be invoked. napi_disable() waits for ownership of the NAPI
instance to be released.
napi_schedule() is the basic method of scheduling a NAPI poll. Drivers
should call it from their interrupt handler. A successful call to napi_schedule()
will take ownership of the NAPI instance.
Later, after NAPI is scheduled, the driver's poll method will be
called to process the events/packets.
If the budget is fully consumed,
the NAPI instance will be serviced/polled again (without the
need to be rescheduled).
Drivers should keep the interrupts masked after scheduling
the NAPI instance - until NAPI polling finishes any further
interrupts are unnecessary.
For example:

.. code-block:: c

  static irqreturn_t my_interrupt(int irq, void *data)
  {
      struct my_device *v = data;

      if (napi_schedule_prep(&v->napi)) {
          /* Mask device IRQs before scheduling - no further
           * interrupts are needed until polling completes. */
          mydrv_mask_rxtx_irq(v->irq);
          __napi_schedule(&v->napi);
      }

      return IRQ_HANDLED;
  }

  static int my_poll(struct napi_struct *napi, int budget)
  {
      ...
      if (budget && napi_complete_done(&v->napi, work_done)) {
          mydrv_unmask_rxtx_irq(v->irq);
      }
      ...
  }
Modern devices have multiple NAPI instances (struct napi_struct) per
interface. This section describes how NAPI instances are
mapped to queues and interrupts. NAPI is primarily a polling/processing
abstraction without many user-facing semantics; that said, most
devices end up using NAPI in fairly similar ways.
NAPI instances most often correspond 1:1:1 to interrupts and queue pairs
(a queue pair is a set of a single Rx and single Tx queue).
In less common cases a NAPI instance may be used for multiple queues,
or Rx and Tx queues can be serviced by separate NAPI instances on a single
core. Regardless of the queue assignment, however, there is usually still
a 1:1 mapping between NAPI instances and interrupts.
The ethtool API uses the term **channel** for an IRQ/NAPI which services
queues of a given type. For example, a configuration of 1 ``rx``, 1 ``tx``
and 1 ``combined`` channel is expected to utilize 3 interrupts,
2 Rx and 2 Tx queues.
Persistent NAPI config
----------------------
Drivers often allocate and free NAPI instances dynamically. This leads to loss
of NAPI-related user configuration each time NAPI instances are reallocated.
The netif_napi_add_config() API prevents this loss of configuration by
associating each NAPI instance with a persistent NAPI configuration based on
a driver supplied index value (like a queue number).

Using this API allows for persistent NAPI IDs (among other settings), which can
be beneficial to userspace programs using ``SO_INCOMING_NAPI_ID``. See the
sections below for other NAPI configuration settings.
User interactions with NAPI depend on NAPI instance ID. The instance IDs
are only visible to the user via the ``SO_INCOMING_NAPI_ID`` socket option.
Users can query NAPI IDs for a device or device queue using netlink. This can
be done programmatically in a user application or with the ``ynl`` CLI from
the kernel source tree (dumping the queues of a device
will reveal each queue's NAPI ID).
NAPI does not perform any explicit event coalescing by default.
In most scenarios batching happens due to IRQ coalescing which is done
by the device.
NAPI can be configured to arm a repoll timer instead of unmasking
the hardware interrupts as soon as all packets are processed.
The ``gro_flush_timeout`` sysfs configuration of the netdevice
is reused to control the delay of the timer, while
``napi_defer_hard_irqs`` controls the number of consecutive empty polls
before NAPI gives up and goes back to using hardware IRQs.
The above parameters can also be set on a per-NAPI basis using netlink via
netdev-genl. When used with netlink and configured on a per-NAPI basis, the
parameters use hyphens instead of underscores:
``gro-flush-timeout`` and ``napi-defer-hard-irqs``.
Per-NAPI configuration can be done programmatically in a user application
or by using the ``ynl`` CLI from the kernel source tree. For example:

.. code-block:: bash

  $ ynl --family netdev --do napi-set \
        --json='{"id": 66, "defer-hard-irqs": 111, "gro-flush-timeout": 11111}'
Busy polling allows a user process to check for incoming packets before
the device interrupt fires. As is the case with any busy polling, it trades
off CPU cycles for lower latency (production uses of NAPI busy polling
are not well known).

Busy polling is enabled by either setting ``SO_BUSY_POLL`` on selected sockets
or using the global ``net.core.busy_poll`` and
``net.core.busy_read`` sysctls. An io_uring API for NAPI busy polling
also exists. Threaded polling of NAPI also has a mode to busy poll for
packets (:ref:`threaded busy polling<threaded_busy_poll>`) using the NAPI
processing kthread.
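Busy polling can also be enabled per socket with the ``SO_BUSY_POLL`` socket
option. A minimal userspace sketch follows; the numeric fallback mirrors
``include/uapi/asm-generic/socket.h``, and on older kernels setting the option
may require ``CAP_NET_ADMIN``, in which case the value read back stays at its
default:

.. code-block:: c

  #include <stdio.h>
  #include <sys/socket.h>

  /* Fallback for older userspace headers; value from
   * include/uapi/asm-generic/socket.h. */
  #ifndef SO_BUSY_POLL
  #define SO_BUSY_POLL 46
  #endif

  int main(void)
  {
      int fd = socket(AF_INET, SOCK_DGRAM, 0);
      int usecs = 50; /* busy poll for up to 50us per blocking read */
      socklen_t len = sizeof(usecs);

      if (fd < 0) {
          perror("socket");
          return 1;
      }

      /* May fail with EPERM on older kernels without CAP_NET_ADMIN. */
      if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL, &usecs, sizeof(usecs)))
          perror("setsockopt(SO_BUSY_POLL)");

      usecs = -1;
      if (getsockopt(fd, SOL_SOCKET, SO_BUSY_POLL, &usecs, &len)) {
          perror("getsockopt(SO_BUSY_POLL)");
          return 1;
      }
      printf("busy_poll_usecs=%d\n", usecs);
      return 0;
  }

Without ``SO_BUSY_POLL`` (or the sysctls) set, blocking reads on the socket
fall back to ordinary interrupt-driven delivery.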
It is possible to trigger packet processing directly from calls to
``epoll_wait``. In order to use this feature, a user application must ensure
all file descriptors which are added to an epoll context have the same NAPI ID.
If the application uses a dedicated acceptor thread, it can obtain
the NAPI ID of the incoming connection using SO_INCOMING_NAPI_ID and then
distribute that file descriptor to a worker thread, which adds it to its
epoll context. This ensures each worker thread
has an epoll context with FDs that have the same NAPI ID.
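A sketch of the NAPI ID check such an acceptor thread would perform;
``incoming_napi_id()`` is an illustrative helper, not a kernel API, and over
loopback (as here) the reported ID is typically 0 because the traffic never
passed through a busy-poll capable NAPI context:

.. code-block:: c

  #include <stdio.h>
  #include <string.h>
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>

  #ifndef SO_INCOMING_NAPI_ID
  #define SO_INCOMING_NAPI_ID 56 /* include/uapi/asm-generic/socket.h */
  #endif

  /* Return the NAPI ID the socket last received packets on; 0 means the
   * socket has not received packets through a busy-poll capable NAPI. */
  static unsigned int incoming_napi_id(int fd)
  {
      unsigned int napi_id = 0;
      socklen_t len = sizeof(napi_id);

      if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len))
          perror("getsockopt(SO_INCOMING_NAPI_ID)");
      return napi_id;
  }

  int main(void)
  {
      struct sockaddr_in addr;
      socklen_t alen = sizeof(addr);
      int srv, cli, conn;

      srv = socket(AF_INET, SOCK_STREAM, 0);
      memset(&addr, 0, sizeof(addr));
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
      addr.sin_port = 0; /* any free port */
      bind(srv, (struct sockaddr *)&addr, sizeof(addr));
      listen(srv, 1);
      getsockname(srv, (struct sockaddr *)&addr, &alen);

      cli = socket(AF_INET, SOCK_STREAM, 0);
      connect(cli, (struct sockaddr *)&addr, sizeof(addr));
      conn = accept(srv, NULL, NULL);

      /* An acceptor thread would use this value to pick the worker
       * whose epoll context is dedicated to that NAPI ID. */
      printf("napi_id=%u\n", incoming_napi_id(conn));

      close(conn);
      close(cli);
      close(srv);
      return 0;
  }

A real application would map this value to the worker thread whose epoll
context serves file descriptors with that NAPI ID.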
Alternatively, with SO_REUSEPORT a BPF program can distribute incoming
connections to threads such that each thread
is only given incoming connections with the same NAPI ID. Care must be taken to
handle systems with multiple NICs.
The NAPI budget for busy polling is lower than the default (which makes
sense given the low latency intention of normal busy polling).
IRQ suspension is a mechanism wherein device IRQs are masked while epoll
triggers NAPI packet processing.
1. The per-NAPI config parameter ``irq-suspend-timeout`` should be set to the
   maximum time (in nanoseconds) the application can have its IRQs suspended.
   It serves as a safety mechanism restarting IRQ-driven processing if the
   application stalls.
2. The sysfs parameter or per-NAPI config parameters ``gro_flush_timeout``
   and ``napi_defer_hard_irqs`` must be set as described above to defer IRQs
   after a busy poll.

3. ``prefer_busy_poll`` must be set to true; this can be done using the
   ``EPIOCSPARAMS`` ioctl on the epoll context.

4. The application uses epoll as described above to trigger NAPI packet
   processing.
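The epoll side of this flow is ordinary ``epoll_wait`` usage; the IRQ
suspension behavior comes entirely from the configuration in the steps above.
A minimal sketch of such a processing loop, using a pipe as a stand-in for a
network socket:

.. code-block:: c

  #include <stdio.h>
  #include <sys/epoll.h>
  #include <unistd.h>

  int main(void)
  {
      int pfd[2];
      struct epoll_event ev = { .events = EPOLLIN };
      char buf[64];
      int ep, n;

      pipe(pfd);               /* stands in for a connected socket */
      ep = epoll_create1(0);
      ev.data.fd = pfd[0];
      epoll_ctl(ep, EPOLL_CTL_ADD, pfd[0], &ev);

      write(pfd[1], "pkt", 3); /* pretend a packet arrived */

      /* While the application keeps finding events here, a NAPI
       * configured with irq-suspend-timeout keeps device IRQs masked;
       * IRQs are re-enabled when epoll_wait finds no more events (or
       * when the suspend timeout fires as a safety net). */
      n = epoll_wait(ep, &ev, 1, 1000);
      if (n == 1 && ev.data.fd == pfd[0]) {
          ssize_t len = read(pfd[0], buf, sizeof(buf) - 1);
          buf[len > 0 ? len : 0] = '\0';
          printf("processed: %s\n", buf);
      }

      close(pfd[0]);
      close(pfd[1]);
      close(ep);
      return 0;
  }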
1) hardirq -> softirq -> napi poll; basic interrupt delivery
2) timer -> softirq -> napi poll; deferred irq processing
3) epoll -> busy-poll -> napi poll; busy looping
.. _threaded_busy_poll:

Threaded NAPI busy polling
--------------------------
Threaded NAPI busy polling extends threaded NAPI and adds support for
continuous busy polling of the NAPI instance. This can be useful for
forwarding or AF_XDP applications.
Threaded NAPI busy polling can be enabled on a per NIC queue basis using
netlink. For example:

.. code-block:: bash

  $ ynl --family netdev --do napi-set \
        --json='{"id": 66, "threaded": "busy-poll"}'
The kernel will create a kthread that busy polls on this NAPI. The kthread
is not pinned to any CPU by default; the user may want to pin it to a
dedicated core to improve how often the NAPI is polled, at the expense of
wasted CPU cycles.
Once threaded busy polling is enabled for a NAPI, the PID of the polling
kthread can be retrieved using netlink:

.. code-block:: bash

  $ ynl --family netdev --do napi-get --json='{"id": 66}'

The ``pid`` attribute in the response identifies the kthread that is polling
this NAPI.
Threaded NAPI
-------------
Threaded NAPI is an operating mode that uses dedicated kernel
threads rather than software IRQ context for NAPI processing.
Each threaded NAPI instance will spawn a separate thread
(called ``napi/${ifc-name}-${napi-id}``).
It is recommended to pin each kernel thread to a single CPU, the same CPU
as the CPU which services the interrupt. Note that the mapping
between IRQs and NAPI instances may not be trivial (and is driver
dependent). The NAPI instance IDs will be assigned in the opposite
order than the process IDs of the kernel threads.
Threaded NAPI is controlled by writing 0/1 to the ``threaded`` file in
netdev's sysfs directory. It can also be enabled for a specific NAPI using
netlink:

.. code-block:: bash

  $ ynl --family netdev --do napi-set --json='{"id": 66, "threaded": 1}'
.. [#] NAPI was originally referred to as New API in 2.4 Linux.