author    | Hou Tao <houtao1@huawei.com>        | 2023-10-20 21:31:59 +0800
committer | Alexei Starovoitov <ast@kernel.org> | 2023-10-20 14:15:13 -0700
commit    | 3f2189e4f77b7a3e979d143dc4ff586488c7e8a5 (patch)
tree      | d2b472bb05a3c5e94c26e69d30c09dc91139bf00 /include/linux/bpf_mem_alloc.h
parent    | baa8fdecd87bb8751237b45e3bcb5a179e5a12ca (diff)
bpf: Use pcpu_alloc_size() in bpf_mem_free{_rcu}()
For bpf_global_percpu_ma, the pointer passed to bpf_mem_free_rcu() is
allocated by kmalloc() and its size is fixed (16 bytes on x86-64).
Since ksize() on that pointer always reports the fixed size rather than
the size of the dynamic per-cpu area, on x86-64 cache[2] will always be
used to free the per-cpu area, no matter which cache allocated it.
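To make the failure mode concrete, here is a condensed sketch of the
pre-patch free path in kernel/bpf/memalloc.c (bpf_mem_cache_idx(),
unit_free() and LLIST_NODE_SZ are the kernel's own helpers; the code
is simplified, not verbatim):

    /* Pre-patch: the target cache is picked from ksize() of the
     * kmalloc'ed object. For a per-cpu allocation that object is only
     * the 16-byte box holding the per-cpu pointer, so ksize() always
     * returns 16 and the index always lands on cache[2] on x86-64.
     */
    void notrace bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr)
    {
    	int idx;

    	if (!ptr)
    		return;

    	idx = bpf_mem_cache_idx(ksize(ptr - LLIST_NODE_SZ));
    	if (idx < 0)
    		return;

    	unit_free(this_cpu_ptr(ma->caches)->cache + idx, ptr);
    }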
Fix the imbalance by checking whether the bpf memory allocator is
per-cpu or not, and by using pcpu_alloc_size() instead of ksize() to
find the correct cache for the per-cpu free.
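A minimal sketch of the fixed index selection on the memalloc.c side
of this series (names follow the kernel's; this illustrates the
approach, not the verbatim patch):

    /* With the fix, a per-cpu allocator queries the size of the dynamic
     * per-cpu area itself via pcpu_alloc_size() instead of the
     * fixed-size box that ksize() sees.
     */
    static int bpf_mem_free_idx(void *ptr, bool percpu)
    {
    	size_t size;

    	if (percpu)
    		/* The first word of the box is the pointer returned by
    		 * the per-cpu allocator.
    		 */
    		size = pcpu_alloc_size(*((void **)ptr));
    	else
    		size = ksize(ptr - LLIST_NODE_SZ);

    	return bpf_mem_cache_idx(size);
    }

bpf_mem_free() and bpf_mem_free_rcu() can then both route through such
a helper, passing the allocator's percpu flag.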
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20231020133202.4043247-5-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Diffstat (limited to 'include/linux/bpf_mem_alloc.h')
-rw-r--r-- | include/linux/bpf_mem_alloc.h | 1 |
1 file changed, 1 insertion, 0 deletions
diff --git a/include/linux/bpf_mem_alloc.h b/include/linux/bpf_mem_alloc.h
index d644bbb298af..bb1223b21308 100644
--- a/include/linux/bpf_mem_alloc.h
+++ b/include/linux/bpf_mem_alloc.h
@@ -11,6 +11,7 @@ struct bpf_mem_caches;
 struct bpf_mem_alloc {
 	struct bpf_mem_caches __percpu *caches;
 	struct bpf_mem_cache __percpu *cache;
+	bool percpu;
 	struct work_struct work;
 };
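The new flag only needs to be recorded once when the allocator is set
up; a minimal sketch, assuming the percpu argument that
bpf_mem_alloc_init() already takes in this series (the rest of the
function is elided):

    int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
    {
    	/* Remember whether this allocator hands out per-cpu memory so
    	 * the free path can choose pcpu_alloc_size() over ksize().
    	 */
    	ma->percpu = percpu;

    	/* ... existing cache setup unchanged ... */
    	return 0;
    }

Storing the flag on struct bpf_mem_alloc rather than on each object
keeps the free path to a single branch and adds no per-object overhead.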