History log of /linux/net/core/dev.c (Results 1 - 25 of 1387)
Revision Date Author Comments
Revision tags: v4.19-rc2, v4.19-rc1
# 13ba17be 24-Aug-2018 Mukesh Ojha <mojha@codeaurora.org>

notifier: Remove notifier header file wherever not used

The conversion of the hotplug notifiers to a state machine left the
notifier.h includes around in some places. Remove them.

Signed-off-by: Mukesh Ojha <mojha@codeaurora.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/1535114033-4605-1-git-send-email-mojha@codeaurora.org


Revision tags: v4.18
# 4d99f660 09-Aug-2018 Andrei Vagin <avagin@gmail.com>

net: allow to call netif_reset_xps_queues() under cpus_read_lock

The definition of static_key_slow_inc() has cpus_read_lock in place. In the
virtio_net driver, XPS queues are initialized after setting the queue:cpu
affinity in virtnet_set_affinity() which is already protected within
cpus_read_lock. Lockdep prints a warning when we are trying to acquire
cpus_read_lock when it is already held.

This patch adds the ability to call __netif_set_xps_queue() under
cpus_read_lock().
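
A minimal sketch of the resulting calling pattern (the wrapper shape is
illustrative, not a definitive rendering of the patch):

    /* Caller-facing wrapper: takes the hotplug lock itself. */
    int netif_set_xps_queue(struct net_device *dev,
                            const struct cpumask *mask, u16 index)
    {
            int ret;

            cpus_read_lock();
            ret = __netif_set_xps_queue(dev, cpumask_bits(mask), index, false);
            cpus_read_unlock();

            return ret;
    }

Callers that already hold cpus_read_lock(), such as virtnet_set_affinity(),
call __netif_set_xps_queue() directly.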

============================================
WARNING: possible recursive locking detected
4.18.0-rc3-next-20180703+ #1 Not tainted
--------------------------------------------
swapper/0/1 is trying to acquire lock:
00000000cf973d46 (cpu_hotplug_lock.rw_sem){++++}, at: static_key_slow_inc+0xe/0x20

but task is already holding lock:
00000000cf973d46 (cpu_hotplug_lock.rw_sem){++++}, at: init_vqs+0x513/0x5a0

other info that might help us debug this:
Possible unsafe locking scenario:

CPU0
----
lock(cpu_hotplug_lock.rw_sem);
lock(cpu_hotplug_lock.rw_sem);

*** DEADLOCK ***

May be due to missing lock nesting notation

3 locks held by swapper/0/1:
#0: 00000000244bc7da (&dev->mutex){....}, at: __driver_attach+0x5a/0x110
#1: 00000000cf973d46 (cpu_hotplug_lock.rw_sem){++++}, at: init_vqs+0x513/0x5a0
#2: 000000005cd8463f (xps_map_mutex){+.+.}, at: __netif_set_xps_queue+0x8d/0xc60

v2: move cpus_read_lock() out of __netif_set_xps_queue()

Cc: "Nambiar, Amritha" <amritha.nambiar@intel.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Fixes: 8af2c06ff4b1 ("net-sysfs: Add interface for Rx queue(s) map per Tx queue")

Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Andrei Vagin <avagin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>


Revision tags: v4.18-rc8
# a6bcfc89 03-Aug-2018 Li RongQing <lirongqing@baidu.com>

net: check extack._msg before print

dev_set_mtu_ext can fail even with a valid mtu value; in that case
extack._msg is not set and contains random stack data, so the kernel
will crash when printing it.
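
A sketch of the guarded print in the dev_set_mtu() wrapper (illustrative):

    struct netlink_ext_ack extack;
    int err;

    memset(&extack, 0, sizeof(extack));
    err = dev_set_mtu_ext(dev, new_mtu, &extack);
    if (err && extack._msg)
            net_err_ratelimited("%s: %s\n", dev->name, extack._msg);

Checking extack._msg before the dereference avoids printing an
uninitialized stack pointer.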

Fixes: 7a4c53bee3324a ("net: report invalid mtu value via netlink extack")
Signed-off-by: Zhang Yu <zhangyu31@baidu.com>
Signed-off-by: Li RongQing <lirongqing@baidu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>


# cd11b164 30-Jul-2018 Paolo Abeni <pabeni@redhat.com>

net/tc: introduce TC_ACT_REINSERT.

This is similar to TC_ACT_REDIRECT, but with a slightly different
semantic:
- on ingress, the mirred skbs are passed to the target device's
network stack without any additional check nor scrubbing.
- the rcu-protected stats provided via the tcf_result struct
are updated on error conditions.

This new tcfa_action value is not exposed to the user-space
and can be used only internally by clsact.
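
A sketch of how clsact could consume the new value in sch_handle_ingress()
(the skb_tc_reinsert() helper name is taken from this series; details are
illustrative):

    case TC_ACT_REINSERT:
            /* Packet is neither checked nor scrubbed; the stats
             * referenced by cl_res are updated on error. */
            skb_tc_reinsert(skb, &cl_res);
            return NULL;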

v1 -> v2: do not touch TC_ACT_REDIRECT code path, introduce
a new action type instead
v2 -> v3:
- rename the new action value TC_ACT_REINJECT, update the
helper accordingly
- take care of uncloned reinjected packets in XDP generic
hook
v3 -> v4:
- renamed again the new action value (JiriP)
v4 -> v5:
- fix build error with !NET_CLS_ACT (kbuild bot)

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>


Revision tags: v4.18-rc7
# 7a4c53be 27-Jul-2018 Stephen Hemminger <stephen@networkplumber.org>

net: report invalid mtu value via netlink extack

If an invalid MTU value is set through rtnetlink return extra error
information instead of putting message in kernel log. For other cases
where there is no visible API, keep the error report in the log.

Example:
# ip li set dev enp12s0 mtu 10000
Error: mtu greater than device maximum.

# ifconfig enp12s0 mtu 10000
SIOCSIFMTU: Invalid argument
# dmesg | tail -1
[ 2047.795467] enp12s0: mtu greater than device maximum
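
On the kernel side the check reduces, roughly, to this (a sketch of the
dev_set_mtu_ext() path; the message text matches the example above):

    if (dev->max_mtu > 0 && new_mtu > dev->max_mtu) {
            NL_SET_ERR_MSG(extack, "mtu greater than device maximum");
            return -EINVAL;
    }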

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>


Revision tags: v4.18-rc6
# 7c4ec749 21-Jul-2018 David S. Miller <davem@davemloft.net>

net: Init backlog NAPI's gro_hash.

Based upon a patch by Sean Tranchetti.

Fixes: d4546c2509b1 ("net: Convert GRO SKB handling to list_head.")
Signed-off-by: David S. Miller <davem@davemloft.net>


# ccdb5171 17-Jul-2018 David S. Miller <davem@davemloft.net>

net: Fix GRO_HASH_BUCKETS assertion.

FIELD_SIZEOF() is in bytes, but we want bits.
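
The corrected assertion scales the field size by 8 bits per byte (a sketch):

    BUILD_BUG_ON(GRO_HASH_BUCKETS >
                 8 * FIELD_SIZEOF(struct napi_struct, gro_bitmask));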

Fixes: d9f37d01e294 ("net: convert gro_count to bitmask")
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>


Revision tags: v4.18-rc5
# d9f37d01 13-Jul-2018 Li RongQing <lirongqing@baidu.com>

net: convert gro_count to bitmask

gro_hash is 192 bytes and spans 3 cache lines. If there are only a few
flows, gro_hash may not be fully used, so iterating over all of gro_hash
in napi_gro_flush() touches cache lines unnecessarily.

Convert gro_count to a bitmask and rename it gro_bitmask. Each bit
represents an element of gro_hash, and a gro_hash element is flushed
only if its bit is set, which speeds up napi_gro_flush().

Update gro_bitmask only when it actually changes, to reduce cache
updates.
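
A sketch of the resulting flush loop (helper name as in this series; the
form is illustrative):

    void napi_gro_flush(struct napi_struct *napi, bool flush_old)
    {
            unsigned long bitmask = napi->gro_bitmask;
            unsigned int i, base = ~0U;

            while ((i = ffs(bitmask)) != 0) {
                    bitmask >>= i;
                    base += i;
                    __napi_gro_flush_chain(napi, base, flush_old);
            }
    }

Only buckets whose bit is set are visited; empty buckets cost nothing.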

Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Li RongQing <lirongqing@baidu.com>
Cc: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>


# a25717d2 12-Jul-2018 Jakub Kicinski <jakub.kicinski@netronome.com>

xdp: support simultaneous driver and hw XDP attachment

Split the query of the HW-attached program from the software one.
Introduce a new .ndo_bpf command to query the HW-attached program.
This will allow drivers to install different programs in HW
and SW at the same time. Netlink can now also carry multiple
programs on dump (in which case mode will be set to
XDP_ATTACHED_MULTI and the user has to check per-attachment-point
attributes; IFLA_XDP_PROG_ID will not be present). We reuse the
IFLA_XDP_PROG_ID skb space for the second mode, so rtnl_xdp_size()
doesn't need to be updated.

Note that the installation side is still not there, since all
drivers currently reject installing more than one program at
a time.
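
A sketch of the split query (the command names are taken from this series;
the helper's exact form is illustrative):

    static u32 __dev_xdp_query(struct net_device *dev, bpf_op_t bpf_op,
                               enum bpf_netdev_command cmd)
    {
            struct netdev_bpf xdp = {};

            xdp.command = cmd;   /* XDP_QUERY_PROG or XDP_QUERY_PROG_HW */
            if (bpf_op)
                    WARN_ON(bpf_op(dev, &xdp) < 0);
            return xdp.prog_id;
    }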

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>


# 6b867589 12-Jul-2018 Jakub Kicinski <jakub.kicinski@netronome.com>

xdp: don't make drivers report attachment mode

prog_attached of struct netdev_bpf should have been superseded
by simply setting prog_id a long time ago, but we kept it around
to allow offloading drivers to communicate the attachment mode (drv
vs hw). Subsequently drivers were also allowed to report back
attachment flags (prog_flags), and since nowadays only programs
attached with XDP_FLAGS_HW_MODE can get offloaded, we can tell
the attachment mode from the flags the driver reports. Remove the
prog_attached member.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>


# 68d2f84a 12-Jul-2018 Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>

net: gro: properly remove skb from list

The following crash occurs in validate_xmit_skb_list() when the same skb
is iterated multiple times in the loop and consume_skb() is called.

The root cause is calling list_del_init(&skb->list) and not clearing
skb->next in d4546c2509b1. list_del_init(&skb->list) sets skb->next
to point to the skb itself. skb->next needs to be cleared because
other parts of the network stack use another kind of skb list;
validate_xmit_skb_list() uses such a list.

A similar type of bugfix was reported by Jesper Dangaard Brouer.
https://patchwork.ozlabs.org/patch/942541/

This patch clears skb->next and changes list_del_init() to list_del()
so that list->prev will maintain the list poison.
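
In sketch form, each point where an skb leaves the GRO list becomes (a
minimal illustration of the pattern described above):

    list_del(&skb->list);   /* keeps the list poison in skb->list.prev */
    skb->next = NULL;       /* the skb may rejoin an skb->next chain,
                             * e.g. in validate_xmit_skb_list() */
    napi_gro_complete(skb);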

[ 148.185511] ==================================================================
[ 148.187865] BUG: KASAN: use-after-free in validate_xmit_skb_list+0x4b/0xa0
[ 148.190158] Read of size 8 at addr ffff8801e52eefc0 by task swapper/1/0
[ 148.192940]
[ 148.193642] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.18.0-rc3+ #25
[ 148.195423] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20180531_142017-buildhw-08.phx2.fedoraproject.org-1.fc28 04/01/2014
[ 148.199129] Call Trace:
[ 148.200565] <IRQ>
[ 148.201911] dump_stack+0xc6/0x14c
[ 148.203572] ? dump_stack_print_info.cold.1+0x2f/0x2f
[ 148.205083] ? kmsg_dump_rewind_nolock+0x59/0x59
[ 148.206307] ? validate_xmit_skb+0x2c6/0x560
[ 148.207432] ? debug_show_held_locks+0x30/0x30
[ 148.208571] ? validate_xmit_skb_list+0x4b/0xa0
[ 148.211144] print_address_description+0x6c/0x23c
[ 148.212601] ? validate_xmit_skb_list+0x4b/0xa0
[ 148.213782] kasan_report.cold.6+0x241/0x2fd
[ 148.214958] validate_xmit_skb_list+0x4b/0xa0
[ 148.216494] sch_direct_xmit+0x1b0/0x680
[ 148.217601] ? dev_watchdog+0x4e0/0x4e0
[ 148.218675] ? do_raw_spin_trylock+0x10/0x120
[ 148.219818] ? do_raw_spin_lock+0xe0/0xe0
[ 148.221032] __dev_queue_xmit+0x1167/0x1810
[ 148.222155] ? sched_clock+0x5/0x10
[...]

[ 148.474257] Allocated by task 0:
[ 148.475363] kasan_kmalloc+0xbf/0xe0
[ 148.476503] kmem_cache_alloc+0xb4/0x1b0
[ 148.477654] __build_skb+0x91/0x250
[ 148.478677] build_skb+0x67/0x180
[ 148.479657] e1000_clean_rx_irq+0x542/0x8a0
[ 148.480757] e1000_clean+0x652/0xd10
[ 148.481772] net_rx_action+0x4ea/0xc20
[ 148.482808] __do_softirq+0x1f9/0x574
[ 148.483831]
[ 148.484575] Freed by task 0:
[ 148.485504] __kasan_slab_free+0x12e/0x180
[ 148.486589] kmem_cache_free+0xb4/0x240
[ 148.487634] kfree_skbmem+0xed/0x150
[ 148.488648] consume_skb+0x146/0x250
[ 148.489665] validate_xmit_skb+0x2b7/0x560
[ 148.490754] validate_xmit_skb_list+0x70/0xa0
[ 148.491897] sch_direct_xmit+0x1b0/0x680
[ 148.493949] __dev_queue_xmit+0x1167/0x1810
[ 148.495103] br_dev_queue_push_xmit+0xce/0x250
[ 148.496196] br_forward_finish+0x276/0x280
[ 148.497234] __br_forward+0x44f/0x520
[ 148.498260] br_forward+0x19f/0x1b0
[ 148.499264] br_handle_frame_finish+0x65e/0x980
[ 148.500398] NF_HOOK.constprop.10+0x290/0x2a0
[ 148.501522] br_handle_frame+0x417/0x640
[ 148.502582] __netif_receive_skb_core+0xaac/0x18f0
[ 148.503753] __netif_receive_skb_one_core+0x98/0x120
[ 148.504958] netif_receive_skb_internal+0xe3/0x330
[ 148.506154] napi_gro_complete+0x190/0x2a0
[ 148.507243] dev_gro_receive+0x9f7/0x1100
[ 148.508316] napi_gro_receive+0xcb/0x260
[ 148.509387] e1000_clean_rx_irq+0x2fc/0x8a0
[ 148.510501] e1000_clean+0x652/0xd10
[ 148.511523] net_rx_action+0x4ea/0xc20
[ 148.512566] __do_softirq+0x1f9/0x574
[ 148.513598]
[ 148.514346] The buggy address belongs to the object at ffff8801e52eefc0
[ 148.514346] which belongs to the cache skbuff_head_cache of size 232
[ 148.517047] The buggy address is located 0 bytes inside of
[ 148.517047] 232-byte region [ffff8801e52eefc0, ffff8801e52ef0a8)
[ 148.519549] The buggy address belongs to the page:
[ 148.520726] page:ffffea000794bb00 count:1 mapcount:0 mapping:ffff880106f4dfc0 index:0xffff8801e52ee840 compound_mapcount: 0
[ 148.524325] flags: 0x17ffffc0008100(slab|head)
[ 148.525481] raw: 0017ffffc0008100 ffff880106b938d0 ffff880106b938d0 ffff880106f4dfc0
[ 148.527503] raw: ffff8801e52ee840 0000000000190011 00000001ffffffff 0000000000000000
[ 148.529547] page dumped because: kasan: bad access detected

Fixes: d4546c2509b1 ("net: Convert GRO SKB handling to list_head.")
Signed-off-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
Reported-by: Tyler Hicks <tyhicks@canonical.com>
Tested-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>


# 9af86f93 09-Jul-2018 Edward Cree <ecree@solarflare.com>

net: core: fix use-after-free in __netif_receive_skb_list_core

__netif_receive_skb_core can free the skb, so we have to use the dequeue-
enqueue model when calling it from __netif_receive_skb_list_core.
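
A minimal sketch of the dequeue-enqueue pattern (locals simplified; the
pass-case delivery is elided):

    struct sk_buff *skb, *next;
    struct list_head sublist;

    INIT_LIST_HEAD(&sublist);
    list_for_each_entry_safe(skb, next, head, list) {
            struct packet_type *pt_prev = NULL;

            list_del(&skb->list);   /* dequeue first: the core may free skb */
            __netif_receive_skb_core(skb, pfmemalloc, &pt_prev);
            if (!pt_prev)
                    continue;
            list_add_tail(&skb->list, &sublist);   /* enqueue survivors */
    }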

Fixes: 88eb1944e18c ("net: core: propagate SKB lists through packet_type lookup")
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>


# 8c057efa 09-Jul-2018 Edward Cree <ecree@solarflare.com>

net: core: fix uses-after-free in list processing

In netif_receive_skb_list_internal(), all of skb_defer_rx_timestamp(),
do_xdp_generic() and enqueue_to_backlog() can lead to kfree(skb). Thus,
we cannot wait until after they return to remove the skb from the list;
instead, we remove it first and, in the pass case, add it to a sublist
afterwards.
In the case of enqueue_to_backlog() we have already decided not to pass
when we call the function, so we do not need a sublist.
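
In sketch form, the skb_defer_rx_timestamp() step becomes (a minimal
illustration; the do_xdp_generic() and enqueue_to_backlog() steps follow
the same remove-first pattern):

    list_for_each_entry_safe(skb, next, head, list) {
            net_timestamp_check(netdev_tstamp_prequeue, skb);
            list_del(&skb->list);            /* remove before the call that
                                              * might free the skb */
            if (!skb_defer_rx_timestamp(skb))
                    list_add_tail(&skb->list, &sublist);
    }
    list_splice_init(&sublist, head);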

Fixes: 7da517a3bc52 ("net: core: Another step of skb receive list processing")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>


# 8ec56fc3 09-Jul-2018 Alexander Duyck <alexander.h.duyck@intel.com>

net: allow fallback function to pass netdev

For most of these calls we can just pass NULL through to the fallback
function as the sb_dev. The only cases where we cannot are the cases where
we might be dealing with either an upper device or a driver that would
have configured things to support an sb_dev itself.

The only driver that has any significant change in this patch set should be
ixgbe as we can drop the redundant functionality that existed in both the
ndo_select_queue function and the fallback function that was passed through
to us.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>


# 4f49dec9 09-Jul-2018 Alexander Duyck <alexander.h.duyck@intel.com>

net: allow ndo_select_queue to pass netdev

This patch changes ndo_select_queue to pass a net_device pointer as
sb_dev instead of a void pointer as accel_priv. Making this change
allows us to eventually pass the subordinate device through to the
fallback function, so that the actual code in the ndo_select_queue
call can stay as focused as possible on the exception cases.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>


# a4ea8a3d 09-Jul-2018 Alexander Duyck <alexander.h.duyck@intel.com>

net: Add generic ndo_select_queue functions

This patch adds a generic version of the ndo_select_queue functions for
either returning 0 or selecting a queue based on the processor ID. This is
generally meant to just reduce the number of functions we have to change
in the future when we have to deal with ndo_select_queue changes.
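
In sketch form (function names as in this series; the fallback argument
follows the ndo_select_queue prototype of this era):

    u16 dev_pick_tx_zero(struct net_device *dev, struct sk_buff *skb,
                         struct net_device *sb_dev,
                         select_queue_fallback_t fallback)
    {
            return 0;   /* single-queue devices */
    }

    u16 dev_pick_tx_cpu_id(struct net_device *dev, struct sk_buff *skb,
                           struct net_device *sb_dev,
                           select_queue_fallback_t fallback)
    {
            return (u16)raw_smp_processor_id() % dev->real_num_tx_queues;
    }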

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>


# eadec877 09-Jul-2018 Alexander Duyck <alexander.h.duyck@intel.com>

net: Add support for subordinate traffic classes to netdev_pick_tx

This change adds support for the concept of subordinate device traffic
classes to the core networking code. In doing this we can start pulling
out the driver-specific bits needed to support selecting a queue based
on an upper device.

The solution as it currently stands is only partially implemented. I have
the start of some XPS bits in here, but I would still need to allow for
configuration of the XPS maps on the queues reserved for the subordinate
devices. For now I am using the reference to the sb_dev XPS map simply as
a way to skip the lookup of the lower device's XPS map, as that would
result in the wrong queue being picked.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>


# ffcfe25b 09-Jul-2018 Alexander Duyck <alexander.h.duyck@intel.com>

net: Add support for subordinate device traffic classes

This patch is meant to provide the basic tools needed to allow us to create
subordinate device traffic classes. The general idea here is to allow
subdividing the queues of a device into queue groups accessible through an
upper device such as a macvlan.

The idea here is to enforce that an upper device has to be a
single-queue device, ideally with IFF_NO_QUEUE set. With that being the
case we can pretty much guarantee that the tc_to_txq mappings and XPS maps
for the upper device are unused. As such we can reuse those in order to
support subdividing the lower device and distributing its queues among
the subordinate devices.

In order to distinguish between a regular set of traffic classes and a
device carrying subordinate traffic classes, I changed num_tc from a u8
to an s16 value and use negative values to represent the subordinate
pool values. Starting at -1 and running to -32768 we can encode those as
pool values, while the existing values of 0 to 15 are maintained.
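
A sketch of the helpers this encoding suggests (names as in this series;
treat the exact forms as illustrative):

    static inline void netdev_set_sb_channel(struct net_device *dev,
                                             u16 channel)
    {
            /* negative num_tc identifies a subordinate channel */
            dev->num_tc = -channel;
    }

    static inline int netdev_get_sb_channel(struct net_device *dev)
    {
            return max_t(int, -dev->num_tc, 0);
    }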

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>


Revision tags: v4.18-rc4
# 6312fe77 05-Jul-2018 Li RongQing <lirongqing@baidu.com>

net: limit each hash list length to MAX_GRO_SKBS

After commit 07d78363dcff ("net: Convert NAPI gro list into a small hash
table.") there are 8 hash buckets, which allows more flows to be held for
merging. But MAX_GRO_SKBS, the total number of skbs held for merging, is
still 8, limiting the hash table's performance.

Keep MAX_GRO_SKBS at 8 skbs, but apply the limit to each hash list rather
than to the total.
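
In sketch form, each bucket carries its own count and evicts the oldest
flow at the cap (struct and helper names as in this series; details are
illustrative):

    struct gro_list {
            struct list_head        list;
            int                     count;
    };

    /* in dev_gro_receive(), for the bucket the flow hashes to: */
    if (unlikely(gro_list->count >= MAX_GRO_SKBS))
            gro_flush_oldest(&gro_list->list);   /* drop the oldest flow */
    else
            gro_list->count++;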

Signed-off-by: Li RongQing <lirongqing@baidu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>


# b9f463d6 02-Jul-2018 Edward Cree <ecree@solarflare.com>

net: don't bother calling list RX functions on empty lists

Generally the check should be very cheap, as the sk_buff_head is in cache.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>


# 17266ee9 02-Jul-2018 Edward Cree <ecree@solarflare.com>

net: ipv4: listified version of ip_rcv

Also involved adding a way to run a netfilter hook over a list of packets.
Rather than attempting to make netfilter know about lists (which would be
a major project in itself) we just let it call the regular okfn (in this
case ip_rcv_finish()) for any packets it steals, and have it give us back
a list of packets it's synchronously accepted (which normally NF_HOOK
would automatically call okfn() on, but we want to be able to potentially
pass the list to a listified version of okfn().)
The netfilter hooks themselves are indirect calls that still happen per-
packet (see nf_hook_entry_hookfn()), but again, changing that can be left
for future work.

There is potential for out-of-order receives if the netfilter hook ends up
synchronously stealing packets, as they will be processed before any
accepts earlier in the list. However, it was already possible for an
asynchronous accept to cause out-of-order receives, so presumably this is
considered OK.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>


# 88eb1944 02-Jul-2018 Edward Cree <ecree@solarflare.com>

net: core: propagate SKB lists through packet_type lookup

__netif_receive_skb_core() does a depressingly large amount of per-packet
work that can't easily be listified, because the another_round looping
makes it nontrivial to slice up into smaller functions.
Fortunately, most of that work disappears in the fast path:
* Hardware devices generally don't have an rx_handler
* Unless you're tcpdumping or something, there is usually only one ptype
* VLAN processing comes before the protocol ptype lookup, so doesn't force
a pt_prev deliver
so normally, __netif_receive_skb_core() will run straight through and pass
back the one ptype found in ptype_base[hash of skb->protocol].

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>


# 4ce0017a 02-Jul-2018 Edward Cree <ecree@solarflare.com>

net: core: another layer of lists, around PF_MEMALLOC skb handling

First example of a layer splitting the list (rather than merely taking
individual packets off it).
Involves a new list.h function, list_cut_before(), like list_cut_position()
but cutting on the other side of the given entry.
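
A sketch of the new helper's semantics: everything in @head before @entry
is moved onto @list, and @entry becomes the first element of @head (the
implementation shown is illustrative):

    static inline void list_cut_before(struct list_head *list,
                                       struct list_head *head,
                                       struct list_head *entry)
    {
            if (head->next == entry) {
                    /* nothing before @entry: @list becomes empty */
                    INIT_LIST_HEAD(list);
                    return;
            }
            list->next = head->next;
            list->next->prev = list;
            list->prev = entry->prev;
            list->prev->next = list;
            head->next = entry;
            entry->prev = head;
    }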

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>


# 7da517a3 02-Jul-2018 Edward Cree <ecree@solarflare.com>

net: core: Another step of skb receive list processing

netif_receive_skb_list_internal() now processes a list and hands it
on to the next function.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>


# 920572b7 02-Jul-2018 Edward Cree <ecree@solarflare.com>

net: core: unwrap skb list receive slightly further

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

