author     Alan Maguire <alan.maguire@oracle.com>           2025-02-05 17:00:59 +0000
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2025-02-27 04:30:19 -0800
commit     c1f3f3892d4526f18aaeffdb6068ce861e793ee3
tree       b4e38fbbb6146d0d7edbcbe88ece730a377865ce /kernel
parent     f579afacd0a66971fc8481f30d2d377e230a8342
bpf: Fix softlockup in arena_map_free on 64k page kernel
[ Upstream commit 517e8a7835e8cfb398a0aeb0133de50e31cae32b ]
On an aarch64 kernel with CONFIG_PAGE_SIZE_64KB=y,
arena_htab tests cause a segmentation fault and soft lockup.
The same failure is not observed with 4k pages on aarch64.
It turns out arena_map_free() is calling
apply_to_existing_page_range() with the address returned by
bpf_arena_get_kern_vm_start(). If this address is not page-aligned,
the code ends up calling apply_to_pte_range() with that unaligned
address, causing a soft lockup.
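To make the failure concrete, here is a minimal userspace sketch of the arithmetic described above (not kernel code: the base address and PAGE_SIZE value are illustrative assumptions, and only the GUARD_SZ / 2 offset mirrors what the commit message says bpf_arena_get_kern_vm_start() does):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative 64k-page configuration and a page-aligned base address
 * standing in for the arena's kernel vm area (hypothetical value). */
#define PAGE_SIZE (64ULL * 1024)
#define VM_BASE   0xffff800010000000ULL

/* Pre-fix GUARD_SZ: a 16-bit 'off' field gives 1ULL << 16 = 64KiB. */
#define GUARD_SZ  (1ULL << 16)

int main(void)
{
	/* Per the commit message, bpf_arena_get_kern_vm_start() offsets
	 * the vm area base by GUARD_SZ / 2 = 32KiB. */
	uint64_t start = VM_BASE + GUARD_SZ / 2;

	/* 32KiB is not a multiple of a 64KiB page, so the address handed
	 * to apply_to_existing_page_range() is unaligned. */
	printf("start %% PAGE_SIZE = %llu\n",
	       (unsigned long long)(start % PAGE_SIZE));
	assert(start % PAGE_SIZE != 0);
	return 0;
}
```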
Fix it by rounding GUARD_SZ up to a multiple of
PAGE_SIZE << 1 so that the division by 2 in
bpf_arena_get_kern_vm_start() returns a page-aligned value.
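A companion sketch (same caveats; ROUND_UP is a userspace stand-in for the kernel's round_up() macro, valid for the power-of-two alignments used here) checks that the fixed definition keeps GUARD_SZ / 2 page-aligned for both 4k and 64k pages:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's round_up() (power-of-two step). */
#define ROUND_UP(x, y)  ((((x) - 1) | ((y) - 1)) + 1)

static void check(uint64_t page_size)
{
	/* Post-fix definition: round the 64KiB guard up to twice the
	 * page size so that GUARD_SZ / 2 is a whole number of pages. */
	uint64_t guard_sz = ROUND_UP(1ULL << 16, page_size << 1);

	printf("page %3llu KiB: GUARD_SZ = %llu KiB, GUARD_SZ/2 %% page = %llu\n",
	       (unsigned long long)(page_size >> 10),
	       (unsigned long long)(guard_sz >> 10),
	       (unsigned long long)((guard_sz / 2) % page_size));
	assert((guard_sz / 2) % page_size == 0);
}

int main(void)
{
	check(4ULL * 1024);   /* 4k pages:  GUARD_SZ stays 64KiB, half = 32KiB */
	check(64ULL * 1024);  /* 64k pages: GUARD_SZ grows to 128KiB, half = 64KiB */
	return 0;
}
```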
Fixes: 317460317a02 ("bpf: Introduce bpf_arena.")
Reported-by: Colm Harrington <colm.harrington@oracle.com>
Suggested-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Link: https://lore.kernel.org/r/20250205170059.427458-1-alan.maguire@oracle.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Diffstat (limited to 'kernel')
 kernel/bpf/arena.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
index 93e48c7cad4e..8c775a1401d3 100644
--- a/kernel/bpf/arena.c
+++ b/kernel/bpf/arena.c
@@ -37,7 +37,7 @@
  */

 /* number of bytes addressable by LDX/STX insn with 16-bit 'off' field */
-#define GUARD_SZ (1ull << sizeof_field(struct bpf_insn, off) * 8)
+#define GUARD_SZ round_up(1ull << sizeof_field(struct bpf_insn, off) * 8, PAGE_SIZE << 1)
 #define KERN_VM_SZ (SZ_4G + GUARD_SZ)

 struct bpf_arena {