
.. SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)

====
NAPI
====

NAPI is the event handling mechanism used by the Linux networking stack.
The name NAPI no longer stands for anything in particular [#]_.

In basic operation the device notifies the host about new events
via an interrupt. The host then schedules a NAPI instance to process
the events.

NAPI processing usually happens in the software interrupt context,
but there is an option to use separate kernel threads for NAPI processing.

All in all NAPI abstracts away from the drivers the context and configuration
of event (packet Rx and Tx) processing.

Driver API
==========

The two most important elements of NAPI are the struct napi_struct
and the associated poll method. struct napi_struct holds the state
of the NAPI instance, while the method is the driver-specific event
handler.

.. _drv_ctrl:

Control API
-----------

netif_napi_add() and netif_napi_del() add/remove a NAPI instance
from the system. The instances are attached to the netdevice passed
as argument (and will be deleted automatically when the netdevice is
unregistered). Instances are added in a disabled state.

napi_enable() and napi_disable() manage the disabled state.
The control APIs are not idempotent - an incorrect sequence of control API
calls may result in crashes, deadlocks, or race conditions. For example,
calling napi_disable() multiple times in a row will deadlock.
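
To make the required sequencing concrete, here is a minimal sketch of the
control API lifecycle in a driver's open/stop paths; the ``mydrv_*`` names
and the single-instance layout are hypothetical, not part of the API:

.. code-block:: c

  /* hypothetical single-queue driver; priv->napi is a struct napi_struct */
  static int mydrv_open(struct net_device *dev)
  {
      struct mydrv_priv *priv = netdev_priv(dev);

      /* instance is added in a disabled state */
      netif_napi_add(dev, &priv->napi, mydrv_poll);
      napi_enable(&priv->napi);
      return 0;
  }

  static int mydrv_stop(struct net_device *dev)
  {
      struct mydrv_priv *priv = netdev_priv(dev);

      napi_disable(&priv->napi);  /* calling this twice in a row deadlocks */
      netif_napi_del(&priv->napi);
      return 0;
  }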

Datapath API
------------

napi_schedule() is the basic method of scheduling a NAPI poll.
Drivers should call this function in their interrupt handler
(see :ref:`drv_sched` for more info). A successful call to napi_schedule()
will take ownership of the NAPI instance.

Later, after NAPI is scheduled, the driver's poll method will be
called to process the events/packets. The method takes a ``budget``
argument - drivers can process completions for any number of Tx
packets but should only process up to ``budget`` number of Rx packets.
Rx processing is usually much more expensive.

In other words, for Rx processing the ``budget`` argument limits how many
packets the driver can process in a single poll. Rx-specific APIs like page
pool or XDP cannot be used at all when ``budget`` is 0.
skb Tx processing should happen regardless of the ``budget``, but if
the argument is 0 the driver cannot call any XDP (or page pool) APIs.

The poll method returns the amount of work done. If the driver still
has outstanding work to do (e.g. ``budget`` was exhausted)
the poll method should return exactly ``budget``. In that case,
the NAPI instance will be serviced/polled again (without the
need to be scheduled).

If event processing has been completed (all outstanding packets
processed) the poll method should call napi_complete_done()
before returning. napi_complete_done() releases the ownership
of the instance. The rare case of finishing all events while consuming
exactly ``budget`` cannot be reported to the stack, so the driver
must either not call napi_complete_done() and wait to be called again,
or return ``budget - 1``. If ``budget`` is 0, napi_complete_done()
should never be called.
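
Putting the ``budget`` rules together, below is a sketch of a poll method;
the ``mydrv_*`` helpers are hypothetical:

.. code-block:: c

  static int mydrv_poll(struct napi_struct *napi, int budget)
  {
      struct mydrv_vector *v = container_of(napi, struct mydrv_vector, napi);
      int work_done = 0;

      /* Tx completions are processed regardless of the budget */
      mydrv_clean_tx_ring(v);

      /* Rx (and any page pool / XDP work) only when budget is non-zero */
      if (budget)
          work_done = mydrv_clean_rx_ring(v, budget);

      /* budget exhausted (or zero) - return it to be polled again;
       * this also covers the rare "done but used exactly budget" case
       */
      if (work_done == budget)
          return budget;

      /* all events processed - release ownership, unmask the IRQ */
      if (napi_complete_done(napi, work_done))
          mydrv_unmask_rxtx_irq(v->idx);

      return work_done;
  }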

Call sequence
-------------

As mentioned in the :ref:`drv_ctrl` section - napi_disable() and subsequent
calls to the poll method only wait for the ownership of the instance
to be released, not for the poll method to exit. This means that drivers
should avoid accessing any data structures after calling napi_complete_done().

.. _drv_sched:

Scheduling and IRQ masking
--------------------------

Drivers should keep the interrupts masked after scheduling
the NAPI instance - until NAPI polling finishes, any further
interrupts are unnecessary.

Drivers which have to mask interrupts explicitly (as opposed
to IRQ being auto-masked by the device) should use the napi_schedule_prep()
and __napi_schedule() calls:

.. code-block:: c

  if (napi_schedule_prep(&v->napi)) {
      mydrv_mask_rxtx_irq(v->idx);
      /* schedule after masking to avoid races */
      __napi_schedule(&v->napi);
  }

The IRQ should only be unmasked after a successful call to napi_complete_done():

.. code-block:: c

  if (budget && napi_complete_done(&v->napi, work_done)) {
      mydrv_unmask_rxtx_irq(v->idx);
      return min(work_done, budget - 1);
  }

napi_schedule_irqoff() is a variant of napi_schedule() which takes advantage
of guarantees given by being invoked in IRQ context (no need to
mask interrupts). napi_schedule_irqoff() will fall back to napi_schedule()
if IRQs are threaded (such as when ``PREEMPT_RT`` is enabled).

Instance to queue mapping
-------------------------

Modern devices have multiple NAPI instances (struct napi_struct) per
interface. There is no strict requirement on how the instances are
mapped to queues and interrupts. NAPI is primarily a polling/processing
abstraction without specific user-facing semantics. That said, most networking
devices end up using NAPI in fairly similar ways.

NAPI instances most often correspond 1:1:1 to interrupts and queue pairs
(queue pair is a set of a single Rx and single Tx queue).

In less common cases a NAPI instance may be used for multiple queues,
or Rx and Tx queues can be serviced by separate NAPI instances on a single
core. Regardless of the queue assignment, however, there is usually still
a 1:1 mapping between NAPI instances and interrupts.

Persistent NAPI config
----------------------

Drivers often allocate and free NAPI instances dynamically. This leads to loss
of NAPI-related user configuration each time NAPI instances are reallocated.
The netif_napi_add_config() API prevents this loss of configuration by
associating each NAPI instance with a persistent NAPI configuration based on
a driver-supplied index value, like a queue number.
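
For illustration, a sketch of how a driver might use this API when
(re)allocating its rings, assuming the queue index doubles as the persistent
config index; the ``mydrv_*`` and ``priv->rings`` names are hypothetical:

.. code-block:: c

  /* each ring's NAPI instance is tied to persistent config slot i,
   * so per-NAPI user settings survive instance reallocation
   */
  for (i = 0; i < priv->num_rings; i++)
      netif_napi_add_config(dev, &priv->rings[i].napi, mydrv_poll, i);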

Using this API allows for persistent NAPI identifiers (and other settings),
which can be beneficial for userspace programs using ``SO_INCOMING_NAPI_ID``.
Querying the NAPI IDs assigned to a device's queues can
be done programmatically in a user application or by using a script included in
the kernel source tree: ``tools/net/ynl/pyynl/cli.py``. For example:

.. code-block:: bash

  $ kernel-source/tools/net/ynl/pyynl/cli.py \
            --spec Documentation/netlink/specs/netdev.yaml \
            --dump queue-get \
            --json='{"ifindex": 2}'

Software IRQ coalescing
-----------------------

NAPI does not perform any explicit event coalescing by default.
In most scenarios batching happens due to IRQ coalescing which is done
by the device. There are cases where software coalescing is helpful.

NAPI can be configured to arm a repoll timer instead of unmasking
the hardware interrupts as soon as all packets are processed.
The ``gro_flush_timeout`` sysfs configuration of the netdevice
is reused to control the delay of the timer, while
``napi_defer_hard_irqs`` controls the number of consecutive empty polls
before NAPI gives up and goes back to using hardware IRQs.
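
For example (an illustrative sketch, assuming interface ``eth0``; the values
are arbitrary), the sysfs knobs can be set as follows:

.. code-block:: bash

  # repoll for up to 20us (20000ns) before falling back to hardware IRQs
  $ echo 20000 > /sys/class/net/eth0/gro_flush_timeout
  # tolerate 2 consecutive empty polls before re-enabling IRQs
  $ echo 2 > /sys/class/net/eth0/napi_defer_hard_irqs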

The above parameters can also be set on a per-NAPI basis using netlink via
netdev-genl. When used with netlink and configured on a per-NAPI basis, the
parameters use hyphens instead of underscores:
``gro-flush-timeout`` and ``napi-defer-hard-irqs``.

Per-NAPI configuration can be done programmatically in a user application
or by using a script included in the kernel source tree:

.. code-block:: bash

  $ kernel-source/tools/net/ynl/pyynl/cli.py \
            --spec Documentation/netlink/specs/netdev.yaml \
            --do napi-set \
            --json='{"id": 345,
                     "defer-hard-irqs": 111,
                     "gro-flush-timeout": 11111}'

Similarly, the parameter ``irq-suspend-timeout`` can be set using netlink
via netdev-genl. There is no global sysfs parameter for this value.
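
For example (a sketch following the same CLI usage as above; the NAPI ID and
timeout value are illustrative):

.. code-block:: bash

  $ kernel-source/tools/net/ynl/pyynl/cli.py \
            --spec Documentation/netlink/specs/netdev.yaml \
            --do napi-set \
            --json='{"id": 345, "irq-suspend-timeout": 20000000}'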

``irq-suspend-timeout`` is used to determine how long an application can
completely suspend IRQs. It is used in combination with SO_PREFER_BUSY_POLL,
which can be set on a per-epoll context basis with the ``EPIOCSPARAMS`` ioctl.

Busy polling
------------

Busy polling allows a user process to check for incoming packets before
the device interrupt fires. As is the case with any busy polling, it trades
off CPU cycles for lower latency.

Busy polling is enabled by either setting ``SO_BUSY_POLL`` on
selected sockets or using the global ``net.core.busy_poll`` and
``net.core.busy_read`` sysctls. An io_uring API for NAPI busy polling
also exists.

epoll-based busy polling
------------------------

It is possible to trigger packet processing directly from calls to
``epoll_wait``. In order to use this feature, a user application must ensure
all file descriptors which are added to an epoll context have the same NAPI ID.

If the application uses a dedicated acceptor thread, the application can obtain
the NAPI ID of the incoming connection using SO_INCOMING_NAPI_ID and then
distribute that file descriptor to a worker thread, which adds it to its epoll
context. This ensures that each worker thread has an epoll context whose file
descriptors share the same NAPI ID (see the sketch below).

Alternatively, if the application uses SO_REUSEPORT, a bpf or ebpf program can
be inserted to distribute incoming connections to threads deterministically.
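
A sketch of the acceptor-thread approach described above (error handling
elided; ``dispatch_to_worker()`` is a hypothetical helper):

.. code-block:: c

  #include <sys/socket.h>

  /* in the acceptor thread: learn which NAPI instance services the
   * new connection, then hand the fd to the worker whose epoll
   * context only contains fds with the same NAPI ID
   */
  int fd = accept(listen_fd, NULL, NULL);
  unsigned int napi_id = 0;
  socklen_t len = sizeof(napi_id);

  getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len);
  dispatch_to_worker(napi_id, fd);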

In order to enable busy polling, there are two choices:

1. ``/proc/sys/net/core/busy_poll`` can be set with a time in useconds to busy
   loop waiting for events. This is a system-wide setting and will cause all
   epoll-based applications to busy poll when they call epoll_wait. This may
   not be desirable as many applications may not have the need to busy poll.

2. Applications using recent kernels can issue an ioctl on the epoll context
   file descriptor to set (``EPIOCSPARAMS``) or get (``EPIOCGPARAMS``)
   ``struct epoll_params``, which user programs can define as follows:

.. code-block:: c

  struct epoll_params {
      uint32_t busy_poll_usecs;
      uint16_t busy_poll_budget;
      uint8_t prefer_busy_poll;

      /* pad the struct to a multiple of 64bits */
      uint8_t __pad;
  };
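
For example, a sketch of setting the parameters on an epoll context via the
ioctl; the values are illustrative, and the ``EPIOCSPARAMS`` definition may
need to come from ``linux/eventpoll.h`` on older C libraries:

.. code-block:: c

  #include <string.h>
  #include <sys/ioctl.h>
  #include <sys/epoll.h>  /* recent glibc exposes EPIOCSPARAMS here */

  int epfd = epoll_create1(0);
  struct epoll_params params;

  memset(&params, 0, sizeof(params));
  params.busy_poll_usecs = 200;   /* busy poll for up to 200us per call */
  params.busy_poll_budget = 64;   /* packets to process per poll attempt */

  ioctl(epfd, EPIOCSPARAMS, &params);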

IRQ mitigation
--------------

While busy polling is supposed to be used by low latency applications,
a similar mechanism can be used for IRQ mitigation.

Very high request-per-second applications (especially routing/forwarding
applications and applications using AF_XDP sockets) may not want to be
interrupted when they finish processing a request or a bunch of packets.

Such applications can pledge to the kernel that they will perform a busy
polling operation periodically, and the driver should keep the device IRQs
permanently masked. This mode is enabled by using the SO_PREFER_BUSY_POLL
socket option, together with the ``gro_flush_timeout`` and
``napi_defer_hard_irqs`` settings described above. IRQs are re-enabled
if ``gro_flush_timeout`` passes without any busy poll call. For epoll-based
busy polling applications, the ``prefer_busy_poll`` field of ``struct
epoll_params`` can be set to 1 and the ``EPIOCSPARAMS`` ioctl can be issued
to enable this mode. See the above section for more details.

The NAPI budget for busy polling is lower than the default (which makes
sense given the low latency intention of normal busy polling). This is
not the case with IRQ mitigation, however, so the budget can be adjusted
with the ``SO_BUSY_POLL_BUDGET`` socket option. For epoll-based busy polling
applications, the ``busy_poll_budget`` field can be adjusted to the desired value
in ``struct epoll_params`` and set on a specific epoll context using the ``EPIOCSPARAMS``
ioctl. See the above section for more details.
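
A sketch of the socket-option variant for applications that busy poll without
epoll (``fd`` is assumed to be a connected socket; the values are
illustrative):

.. code-block:: c

  #include <sys/socket.h>

  int one = 1;
  int budget = 64;

  /* pledge periodic busy polling so the driver keeps device IRQs masked */
  setsockopt(fd, SOL_SOCKET, SO_PREFER_BUSY_POLL, &one, sizeof(one));
  /* raise the budget above the deliberately low busy poll default */
  setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL_BUDGET, &budget, sizeof(budget));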

It is important to note that choosing a large value for ``gro_flush_timeout``
will defer IRQs to allow for better batch processing, but will induce latency
when the system is not fully loaded. Choosing a small value for
``gro_flush_timeout`` can cause interference of the user application which is
attempting to busy poll by device IRQs and softirq processing. The value
should be chosen carefully with these tradeoffs in mind. epoll-based busy
polling applications may be able to sidestep some of this tradeoff by using
the IRQ suspension mechanism described in the next section.

IRQ suspension
--------------

IRQ suspension is a mechanism wherein device IRQs are masked while epoll
triggers NAPI packet processing.

While application calls to epoll_wait successfully retrieve events, the kernel will
defer the IRQ suspension timer. If the kernel does not retrieve any events
while busy polling (for example, because network traffic levels subsided), IRQ
suspension is disabled and the IRQ mitigation strategies described above are
engaged.

In order to use this mechanism:

1. The per-NAPI config parameter ``irq-suspend-timeout`` should be set to the
   maximum time (in nanoseconds) the application can have its IRQs
   suspended. This is done using netlink, as described above. The timeout
   serves as a safety mechanism to restart IRQ-driven interaction if
   the application has stalled. This value should be chosen so that it covers
   the amount of time the user application needs to process data from its
   call to epoll_wait.

2. The sysfs parameter or per-NAPI config parameters ``gro_flush_timeout``
   and ``napi_defer_hard_irqs`` can be set to low values. They will be used
   to defer IRQs after busy polling finds no data.

3. The ``prefer_busy_poll`` flag must be set to true. This can be done using
   the ``EPIOCSPARAMS`` ioctl as described above.

4. The application uses epoll as described above to trigger NAPI packet
   processing.

As long as subsequent calls to epoll_wait return events to
userland, the ``irq-suspend-timeout`` is deferred and IRQs are disabled. This
allows the application to process data without interference, as the
event-loop sketch below illustrates.
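
A sketch of the resulting event loop (``handle_event()`` is a hypothetical
handler; ``MAX_EVENTS`` bounds how much data each processing cycle retrieves):

.. code-block:: c

  struct epoll_event events[MAX_EVENTS];

  for (;;) {
      /* while events keep arriving, irq-suspend-timeout is deferred
       * and IRQs stay masked; an empty return disables suspension
       * and hands control back to the mitigation timers
       */
      int n = epoll_wait(epfd, events, MAX_EVENTS, -1);

      for (int i = 0; i < n; i++)
          handle_event(&events[i]);
  }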

Once a call to epoll_wait results in no events being found, IRQ suspension is
automatically disabled and the ``gro_flush_timeout`` and
``napi_defer_hard_irqs`` mitigation mechanisms take over.

It is expected that ``irq-suspend-timeout`` will be set to a value much larger
than ``gro_flush_timeout``, as ``irq-suspend-timeout`` should suspend IRQs for
the duration of one userland processing cycle.

While it is not strictly necessary to use ``napi_defer_hard_irqs`` and
``gro_flush_timeout`` in combination with IRQ suspension, it is strongly
recommended. IRQ suspension causes the system to alternate between polling mode and
irq-driven packet delivery. During busy periods, ``irq-suspend-timeout``
overrides ``gro_flush_timeout`` and keeps the system busy polling, but when
epoll finds no events, the settings of ``gro_flush_timeout`` and
``napi_defer_hard_irqs`` determine the next step.

There are essentially three possible loops for network processing and
packet delivery:

1) hardirq -> softirq -> napi poll; basic interrupt delivery
2) timer -> softirq -> napi poll; deferred irq processing
3) epoll -> busy-poll -> napi poll; busy looping

During busy periods, ``irq-suspend-timeout`` is used as the timer in Loop 2,
which essentially tilts network processing in favour of Loop 3.

Setting ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` is therefore
the recommended usage, because otherwise setting ``irq-suspend-timeout``
might not have any discernible effect.

Threaded NAPI
-------------

Threaded NAPI is an operating mode that uses dedicated kernel threads rather
than software IRQ context for NAPI processing. Each NAPI instance of the
device will spawn a separate kernel
thread (called ``napi/${ifc-name}-${napi-id}``).

It is recommended to pin each kernel thread to a single CPU, the same
CPU as the one which services the interrupt. Note that the mapping
between IRQs and NAPI instances may not be trivial (and is driver
dependent). The NAPI instance IDs will be assigned in the opposite
order than the process IDs of the kernel threads.

Threaded NAPI is controlled by writing 0/1 to the ``threaded`` file in
netdev's sysfs directory.
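
For example (a sketch, assuming interface ``eth0``):

.. code-block:: bash

  # enable threaded NAPI for the device
  $ echo 1 > /sys/class/net/eth0/threaded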

.. [#] NAPI was originally referred to as New API in 2.4 Linux.