| | | |
|---|---|---|
| author | Hou Tao <houtao1@huawei.com> | 2025-02-20 12:22:59 +0800 |
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2025-04-10 14:29:40 +0200 |
| commit | 794d6b4b4e8c2fd406c8cf20594f611c42dc1241 | |
| tree | 037d09f6f520e2f5d3899ddbe8c244a8233ed9e8 /kernel | |
| parent | 671760004d52f1a2d2a0ac8e4725f4d489ed0248 | |
bpf: Use preempt_count() directly in bpf_send_signal_common()
[ Upstream commit b4a8b5bba712a711d8ca1f7d04646db63f9c88f5 ]
bpf_send_signal_common() uses preemptible() to check whether or not the
current context is preemptible. If it is not preemptible, it uses
irq_work to send the signal asynchronously instead of trying to acquire
a spinlock, because spinlocks are sleepable under PREEMPT_RT and must
not be taken in a non-preemptible context.
However, preemptible() depends on CONFIG_PREEMPT_COUNT. When
CONFIG_PREEMPT_COUNT is disabled (e.g., CONFIG_PREEMPT_VOLUNTARY=y),
preemptible() is defined as 0, so !preemptible() always evaluates to 1
and bpf_send_signal_common() uses irq_work unconditionally.
Fix it by unfolding "!preemptible()" and using "preempt_count() != 0 ||
irqs_disabled()" instead.
Fixes: 87c544108b61 ("bpf: Send signals asynchronously if !preemptible")
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20250220042259.1583319-1-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Diffstat (limited to 'kernel')
| | | |
|---|---|---|
| -rw-r--r-- | kernel/trace/bpf_trace.c | 2 |
1 files changed, 1 insertions, 1 deletions
```diff
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index ac3125d0c73f..75ea2ab53213 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -653,7 +653,7 @@ BPF_CALL_1(bpf_send_signal, u32, sig)
 	if (unlikely(is_global_init(current)))
 		return -EPERM;
 
-	if (!preemptible()) {
+	if (preempt_count() != 0 || irqs_disabled()) {
 		/* Do an early check on signal validity. Otherwise,
 		 * the error is lost in deferred irq_work.
 		 */
```
