author | Eric Dumazet <edumazet@google.com> | 2023-02-06 17:31:03 +0000 |
---|---|---|
committer | Jakub Kicinski <kuba@kernel.org> | 2023-02-07 10:59:58 -0800 |
commit | bf9f1baa279f0758dc2297080360c5a616843927 (patch) | |
tree | 63eee2d53a03648d9d0aeb3a2535315a0dc6bf25 /lib/find_bit.c | |
parent | 5c0e820cbbbe2d1c4cea5cd2bfc1302c123436df (diff) | |
net: add dedicated kmem_cache for typical/small skb->head
Recent removal of ksize() in alloc_skb() increased performance because we no longer read the associated struct page.

We have an equivalent cost at kfree_skb() time: kfree(skb->head) has to access a struct page, often cold in cpu caches, to get the owning struct kmem_cache.

Considering that many allocations are small (at least for TCP ones), we can have our own kmem_cache to avoid the cache line miss.

This also saves memory because these small heads are no longer padded to 1024 bytes.
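As a rough illustration of the idea, here is a minimal sketch of a dedicated slab cache for small heads, with a size check on the free path so the allocator never has to chase the owning struct page. This is not the actual patch; names such as skb_small_head_cache, SKB_SMALL_HEAD_SIZE and the explicit alloc_size plumbing are assumptions made for the example.

```c
/*
 * Minimal sketch of the technique (not the upstream implementation):
 * keep a dedicated kmem_cache for small skb->head buffers so that
 *  - small heads are not rounded up to the 1024-byte kmalloc class, and
 *  - the free path can pick the right allocator from a size check alone,
 *    without touching the (often cold) struct page of the buffer.
 */
#include <linux/init.h>
#include <linux/slab.h>

#define SKB_SMALL_HEAD_SIZE	640	/* assumed object size, matching the slabinfo output below */

static struct kmem_cache *skb_small_head_cache;

static int __init skb_small_head_cache_init(void)
{
	skb_small_head_cache = kmem_cache_create("skbuff_small_head",
						 SKB_SMALL_HEAD_SIZE, 0,
						 SLAB_HWCACHE_ALIGN | SLAB_PANIC,
						 NULL);
	return 0;
}
early_initcall(skb_small_head_cache_init);

/* Allocation: small requests are served from the dedicated cache. */
static void *small_head_alloc(unsigned int size, gfp_t gfp,
			      unsigned int *alloc_size)
{
	if (size <= SKB_SMALL_HEAD_SIZE) {
		*alloc_size = SKB_SMALL_HEAD_SIZE;
		return kmem_cache_alloc(skb_small_head_cache, gfp);
	}
	*alloc_size = size;
	return kmalloc(size, gfp);
}

/*
 * Free: the recorded allocation size tells us which allocator owns the
 * buffer, so no struct page lookup is needed to find the kmem_cache.
 */
static void small_head_free(void *head, unsigned int alloc_size)
{
	if (alloc_size == SKB_SMALL_HEAD_SIZE)
		kmem_cache_free(skb_small_head_cache, head);
	else
		kfree(head);
}
```

In the real sk_buff code one would expect the "which allocator owns this head" decision to be derived from state already stored in the skb rather than an extra parameter; the sketch only shows the size-based dispatch.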
CONFIG_SLUB=y
$ grep skbuff_small_head /proc/slabinfo
skbuff_small_head 2907 2907 640 51 8 : tunables 0 0 0 : slabdata 57 57 0
CONFIG_SLAB=y
$ grep skbuff_small_head /proc/slabinfo
skbuff_small_head 607 624 640 6 1 : tunables 54 27 8 : slabdata 104 104 5
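Back-of-the-envelope, based on the numbers above: each small head now occupies a 640-byte object instead of being padded up to 1024 bytes (the kmalloc-1024 size class), saving roughly 1024 - 640 = 384 bytes per head; with SLUB, 51 such objects fit in one 8-page slab (51 × 640 = 32640 of 32768 bytes).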
Notes:

- After Kees Cook patches and this one, we might be able to revert commit dbae2b062824 ("net: skb: introduce and use a single page frag cache"), because GRO_MAX_HEAD is also small.

- This patch is a NOP for CONFIG_SLOB=y builds.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>