| c61a3753 | 07-Feb-2026 |
Jakub Kicinski <kuba@kernel.org> |
tools: ynltool: add qstats analysis for HW-GRO efficiency / savings

Extend ynltool to compute a HW-GRO savings metric - how many packets HW GRO has been able to save the kernel from seeing.

Note that this definition does not actually take into account whether the segments were or weren't eligible for HW GRO. If a machine is receiving all-UDP traffic, the new metric will show HW-GRO savings of 0%. Conversely, since the super-packet still counts as a received packet, 100% savings is not achievable. Perfect HW GRO on a machine with 4k MTU and 64kB super-frames would show ~93.75% savings. With 1.5k MTU we may see up to ~97.8% savings (if my math is right).
Example after 10 sec of iperf on a freshly booted machine with 1.5k MTU:
$ ynltool qstats show eth0
  rx-packets:             40681280
  rx-bytes:               61575208437
  rx-alloc-fail:          0
  rx-hw-gro-packets:      1225133
  rx-hw-gro-wire-packets: 40656633

$ ynltool qstats hw-gro
eth0: 96.9% savings
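For illustration, the 96.9% above can be reproduced from the counters as (rx-hw-gro-wire-packets - rx-hw-gro-packets) / rx-packets, reading rx-packets as a wire-level count. A minimal sketch of that arithmetic (the formula is inferred from the example numbers, not taken from the ynltool source), which also reproduces the ~93.75% and ~97.8% maxima above:

#include <stdio.h>
#include <stdint.h>

/*
 * HW-GRO savings as suggested by the example output (formula inferred
 * from the numbers, not quoted from ynltool): wire packets folded away
 * by HW GRO, i.e. coalesced wire packets minus the super-packets that
 * replaced them, as a share of all wire packets received.
 */
static double hw_gro_savings(uint64_t rx_packets,
                             uint64_t hw_gro_packets,
                             uint64_t hw_gro_wire_packets)
{
        uint64_t saved = hw_gro_wire_packets - hw_gro_packets;

        return 100.0 * (double)saved / (double)rx_packets;
}

int main(void)
{
        /* counters from the iperf example; prints "96.9" */
        printf("%.1f\n", hw_gro_savings(40681280, 1225133, 40656633));
        /* 64kB super-frame at 4k MTU: 16 wire frames -> 1; 15/16 = 93.75% */
        printf("%.2f\n", hw_gro_savings(16, 1, 16));
        /* ~45 wire frames at 1.5k MTU: 44/45 ~= 97.8% */
        printf("%.2f\n", hw_gro_savings(45, 1, 45));
        return 0;
}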
None of the NICs I have access to can report "missed" HW-GRO opportunities, so computing a true "effectiveness" metric is not possible. One could also argue that an effectiveness metric is inferior in environments where we control both senders and receivers: the savings metric will capture not only regressions in the receiver's HW-GRO effectiveness but also regressions in senders sending smaller TSO trains, and we care about both. The main downside is that it's hard to tell at a glance how well the NIC is doing, because the savings depend on traffic patterns.
Reviewed-by: Petr Machata <petrm@nvidia.com>
Link: https://patch.msgid.link/20260207003509.3927744-4-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
| 9eef97a9 | 07-Nov-2025 |
Jakub Kicinski <kuba@kernel.org> |
tools: ynltool: add traffic distribution balance

The main, if not only, use case for per-queue stats today is checking for traffic imbalance. Add a simple traffic balance analysis to qstats.
$ ynltool qstat balance eth0
rx 44 queues:
  rx-packets : cv=6.9% ns=24.2% stddev=512006493 min=6278921110 max=8011570575 mean=7437054644
  rx-bytes   : cv=6.9% ns=24.1% stddev=759670503060 min=9326315769440 max=11884393670786 mean=11035439201354
  ...
$ ynltool -j qstat balance | jq
[
  {
    "ifname": "eth0",
    "ifindex": 2,
    "queue-type": "rx",
    "rx-packets": {
      "queue-count": 44,
      "min": 6278301665,
      "max": 8010780185,
      "mean": 7.43635E+9,
      "stddev": 5.12012E+8,
      "coefficient-of-variation": 6.88525,
      "normalized-spread": 24.249
    },
    ...
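Both normalized figures can be recomputed from the per-queue counters: cv matches stddev over mean, and ns matches the max-min spread over the midpoint of min and max ((8010780185 - 6278301665) / 7144540925 ~= 24.249%). A minimal sketch under those inferred definitions (not taken from the ynltool source; population stddev assumed):

#include <math.h>
#include <stdio.h>
#include <stdint.h>

/*
 * Balance metrics as inferred from the example output:
 *   cv = stddev / mean
 *   ns = (max - min) / ((min + max) / 2)
 * both reported as percentages.
 */
static void balance(const uint64_t *q, int n)
{
        double mean = 0, var = 0, min = q[0], max = q[0];

        for (int i = 0; i < n; i++) {
                mean += q[i];
                if (q[i] < min)
                        min = q[i];
                if (q[i] > max)
                        max = q[i];
        }
        mean /= n;
        for (int i = 0; i < n; i++)
                var += (q[i] - mean) * (q[i] - mean);
        var /= n;

        printf("cv=%.1f%% ns=%.1f%% stddev=%.0f min=%.0f max=%.0f mean=%.0f\n",
               100 * sqrt(var) / mean,
               100 * (max - min) / ((min + max) / 2),
               sqrt(var), min, max, mean);
}

int main(void)
{
        /* three stand-in queue counters for illustration only;
         * ynltool feeds all 44 per-queue rx-packets values here
         */
        uint64_t rx_packets[] = { 6278301665ull, 7436350000ull, 8010780185ull };

        balance(rx_packets, 3);
        return 0;
}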
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Link: https://patch.msgid.link/20251107162227.980672-5-kuba@kernel.org
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
| 3f0a638d | 07-Nov-2025 |
Jakub Kicinski <kuba@kernel.org> |
tools: ynltool: add qstats support
$ ynltool qstat eth0
rx-packets: 493192163
rx-bytes: 1442544543997
tx-packets: 745999838
tx-bytes: 4574215826482
tx-stop: 7033
tx-wake: 7033
$ ynltool qstat show group-by queue
eth0 rx-0 packets: 70196880 bytes: 178633973750
eth0 rx-1 packets: 63623419 bytes: 197274745250
...
eth0 tx-1 packets: 98645810 bytes: 631247647938 stop: 1048 wake: 1048
eth0 tx-2 packets: 86775824 bytes: 563930471952 stop: 1126 wake: 1126
...
$ ynltool -j qstat | jq
[
  {
    "ifname": "eth0",
    "ifindex": 2,
    "rx": {
      "packets": 493396439,
      "bytes": 1443608198921
    },
    "tx": {
      "packets": 746239978,
      "bytes": 4574333772645,
      "stop": 7072,
      "wake": 7072
    }
  }
]
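These numbers come from the netdev generic netlink family's qstats-get dump. For reference, a rough sketch of the same dump using the C code ynl-gen emits for the netdev spec; the function and struct names below follow the generated netdev-user.h and are assumptions, not excerpts from ynltool:

/*
 * Sketch: dump per-interface qstats via libynl and the generated
 * netdev family bindings. Exact generated names may differ.
 */
#include <stdio.h>

#include <ynl.h>
#include "netdev-user.h"

int main(void)
{
        struct netdev_qstats_get_req_dump *req;
        struct netdev_qstats_get_list *rsp;
        struct ynl_error yerr;
        struct ynl_sock *ys;

        ys = ynl_sock_create(&ynl_netdev_family, &yerr);
        if (!ys) {
                fprintf(stderr, "YNL: %s\n", yerr.msg);
                return 1;
        }

        req = netdev_qstats_get_req_dump_alloc();
        rsp = netdev_qstats_get_dump(ys, req);
        netdev_qstats_get_req_dump_free(req);
        if (!rsp) {
                fprintf(stderr, "YNL: %s\n", ys->err.msg);
                ynl_sock_destroy(ys);
                return 1;
        }

        /* one object per interface (or per queue, with scope set) */
        ynl_dump_foreach(rsp, qs) {
                if (qs->_present.rx_packets)
                        printf("ifindex %u rx-packets: %llu\n",
                               qs->ifindex,
                               (unsigned long long)qs->rx_packets);
        }

        netdev_qstats_get_list_free(rsp);
        ynl_sock_destroy(ys);
        return 0;
}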
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Link: https://patch.msgid.link/20251107162227.980672-4-kuba@kernel.org
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
| 124dac9b | 07-Nov-2025 |
Jakub Kicinski <kuba@kernel.org> |
tools: ynltool: add page-pool stats
Replace the page-pool sample with page pool support in ynltool.
# ynltool page-pool stats
eth0[2]  page pools: 18 (zombies: 0)
         refs: 171456 bytes: 702283776 (refs: 0 bytes: 0)
         recycling: 97.3% (alloc: 2679:6134966 recycle: 1250981:4719386)

# ynltool -j page-pool stats | jq
[
  {
    "ifname": "eth0",
    "ifindex": 2,
    "page_pools": 18,
    "zombies": 0,
    "live": {
      "refs": 171456,
      "bytes": 702283776
    },
    "zombie": {
      "refs": 0,
      "bytes": 0
    },
    "recycling_pct": 97.2746,
    "alloc": {
      "slow": 2679,
      "fast": 6135029
    },
    "recycle": {
      "ring": 1250997,
      "cache": 4719432
    }
  }
]
# ynltool page-pool stats group-by pp
pool id: 108 dev: eth0[2] napi: 530
  inflight: 9472 pages 38797312 bytes
  recycling: 95.5% (alloc: 148:208379 recycle: 45386:153842)
pool id: 107 dev: eth0[2] napi: 529
  inflight: 9408 pages 38535168 bytes
  recycling: 94.9% (alloc: 147:180178 recycle: 42251:128808)
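The recycling figure is consistent with recycled pages (ring + cache) over total allocations (slow + fast): (1250997 + 4719432) / (2679 + 6135029) ~= 97.2746% for the JSON output above. A minimal sketch of that arithmetic (formula inferred from the numbers, not taken from the ynltool source):

#include <stdio.h>
#include <stdint.h>

/*
 * Page pool recycling rate, as inferred from the example numbers:
 * pages returned via the ptr ring or the per-CPU cache, as a share
 * of all allocations (slow + fast path).
 */
static double recycling_pct(uint64_t alloc_slow, uint64_t alloc_fast,
                            uint64_t recycle_ring, uint64_t recycle_cache)
{
        return 100.0 * (double)(recycle_ring + recycle_cache) /
               (double)(alloc_slow + alloc_fast);
}

int main(void)
{
        /* values from the JSON example; prints "97.2746" */
        printf("%.4f\n", recycling_pct(2679, 6135029, 1250997, 4719432));
        return 0;
}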
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Link: https://patch.msgid.link/20251107162227.980672-3-kuba@kernel.org
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|