author	Breno Leitao <leitao@debian.org>	2025-07-31 02:57:18 -0700
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2025-08-28 16:28:32 +0200
commit	1da95d3d4b7b1d380ebd87b71a61e7e6aed3265d (patch)
tree	3650894bad7ca6ebfd04194145355172bd0c9d5b /mm
parent	e21a3ddd58733ce31afcb1e5dc3cb80a4b5bc29b (diff)
mm/kmemleak: avoid deadlock by moving pr_warn() outside kmemleak_lock
commit 47b0f6d8f0d2be4d311a49e13d2fd5f152f492b2 upstream.

When netpoll is enabled, calling pr_warn_once() while holding kmemleak_lock in mem_pool_alloc() can cause a deadlock due to lock inversion with the netconsole subsystem. This occurs because pr_warn_once() may trigger netpoll, which eventually leads to __alloc_skb() and back into kmemleak code, attempting to reacquire kmemleak_lock.

This is the path for the deadlock.

mem_pool_alloc()
  -> raw_spin_lock_irqsave(&kmemleak_lock, flags);
     -> pr_warn_once()
        -> netconsole subsystem
           -> netpoll
              -> __alloc_skb
                 -> __create_object
                    -> raw_spin_lock_irqsave(&kmemleak_lock, flags);

Fix this by setting a flag and issuing the pr_warn_once() after kmemleak_lock is released.

Link: https://lkml.kernel.org/r/20250731-kmemleak_lock-v1-1-728fd470198f@debian.org
Fixes: c5665868183f ("mm: kmemleak: use the memory pool for early allocations")
Signed-off-by: Breno Leitao <leitao@debian.org>
Reported-by: Jakub Kicinski <kuba@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'mm')
-rw-r--r--	mm/kmemleak.c	5	
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 7de85ac08d29..e2e41de55c02 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -452,6 +452,7 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
{
unsigned long flags;
struct kmemleak_object *object;
+ bool warn = false;
/* try the slab allocator first */
if (object_cache) {
@@ -469,8 +470,10 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
else if (mem_pool_free_count)
object = &mem_pool[--mem_pool_free_count];
else
- pr_warn_once("Memory pool empty, consider increasing CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE\n");
+ warn = true;
raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
+ if (warn)
+ pr_warn_once("Memory pool empty, consider increasing CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE\n");
return object;
}