author		Jeongjun Park <aha310510@gmail.com>	2025-01-11 01:26:12 +0900
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2025-02-17 10:04:49 +0100
commit		22a1a758183da84ad1ef7cd68beb45932fe8cbc9 (patch)
tree		6dc453e6f082e7c72ac6942b76bd6d2ba0fed290 /kernel
parent		d1544dc32c67c80b9e3512fa4e187931be1a9e8c (diff)
ring-buffer: Make reading page consistent with the code logic
[ Upstream commit 6e31b759b076eebb4184117234f0c4eb9e4bc460 ]

In the loop of __rb_map_vma(), the 's' variable is calculated from the
same logic that nr_pages is, and they both come from nr_subbufs. But the
relationship is not obvious, and there's a WARN_ON_ONCE() around the 's'
variable to make sure it never becomes equal to nr_subbufs within the
loop. If that happens, then the code is buggy and needs to be fixed.

The 'page' variable is calculated from cpu_buffer->subbuf_ids[s], which
is an array of 'nr_subbufs' entries. If the code becomes buggy and 's'
becomes equal to or greater than 'nr_subbufs', then this will be an
out-of-bounds access before the WARN_ON() is triggered and the code can
exit safely.

Make the 'page' initialization consistent with the code logic and assign
it after the out-of-bounds check.

Link: https://lore.kernel.org/20250110162612.13983-1-aha310510@gmail.com
Signed-off-by: Jeongjun Park <aha310510@gmail.com>
[ sdr: rewrote change log ]
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
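For illustration only, here is a minimal userspace sketch (not kernel code; the names ids, NR, and lookup() are made up for this example) of the ordering hazard described above: indexing an array before validating the index, versus validating first.

#include <stdio.h>

#define NR 4

static unsigned long ids[NR] = { 10, 20, 30, 40 };

static int lookup(size_t s)
{
	unsigned long val;

	/* Checking 's' first is the point: an out-of-range index is
	 * rejected before ids[s] is ever read. Reading ids[s] above this
	 * check would already be an out-of-bounds access. */
	if (s >= NR) {
		fprintf(stderr, "index %zu out of range\n", s);
		return -1;
	}

	/* Safe: 's' is known to be in range here, mirroring how the patch
	 * moves the 'page' assignment below the WARN_ON_ONCE() check. */
	val = ids[s];
	printf("ids[%zu] = %lu\n", s, val);
	return 0;
}

int main(void)
{
	lookup(2);	/* prints ids[2] = 30 */
	lookup(NR);	/* rejected by the bounds check */
	return 0;
}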
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/trace/ring_buffer.c	4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 703978b2d557..28fad7bcfcf8 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -7059,7 +7059,7 @@ static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
 	}
 
 	while (p < nr_pages) {
-		struct page *page = virt_to_page((void *)cpu_buffer->subbuf_ids[s]);
+		struct page *page;
 		int off = 0;
 
 		if (WARN_ON_ONCE(s >= nr_subbufs)) {
@@ -7067,6 +7067,8 @@ static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
 			goto out;
 		}
 
+		page = virt_to_page((void *)cpu_buffer->subbuf_ids[s]);
+
 		for (; off < (1 << (subbuf_order)); off++, page++) {
 			if (p >= nr_pages)
 				break;
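For readability, this is how the loop in __rb_map_vma() reads with the patch applied, reconstructed from the hunk context above; lines not visible in the diff are elided with /* ... */.

	while (p < nr_pages) {
		struct page *page;
		int off = 0;

		if (WARN_ON_ONCE(s >= nr_subbufs)) {
			/* ... */
			goto out;
		}

		/* Only computed once 's' is known to be a valid index
		 * into the subbuf_ids[] array. */
		page = virt_to_page((void *)cpu_buffer->subbuf_ids[s]);

		for (; off < (1 << (subbuf_order)); off++, page++) {
			if (p >= nr_pages)
				break;
			/* ... */
		}
		/* ... */
	}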