| author | Breno Leitao <leitao@debian.org> | 2025-08-19 11:26:13 -0400 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2025-08-28 16:21:34 +0200 |
| commit | c7b6ea0ede687e7460e593c5ea478f50aa41682a | |
| tree | 5631cdc0c2af2b12aecf9c27d42ac17fec53ae48 /mm | |
| parent | 6742e3425abdf6005bf8e3ca5e5d86865773846d | |
mm/kmemleak: avoid deadlock by moving pr_warn() outside kmemleak_lock
[ Upstream commit 47b0f6d8f0d2be4d311a49e13d2fd5f152f492b2 ]
When netpoll is enabled, calling pr_warn_once() while holding
kmemleak_lock in mem_pool_alloc() can cause a deadlock due to lock
inversion with the netconsole subsystem. This occurs because
pr_warn_once() may trigger netpoll, which eventually leads to
__alloc_skb() and back into kmemleak code, attempting to reacquire
kmemleak_lock.
This is the call path that produces the deadlock:
```
mem_pool_alloc()
 -> raw_spin_lock_irqsave(&kmemleak_lock, flags);
     -> pr_warn_once()
         -> netconsole subsystem
             -> netpoll
                 -> __alloc_skb
                     -> __create_object
                         -> raw_spin_lock_irqsave(&kmemleak_lock, flags);
```
Fix this by setting a flag and issuing the pr_warn_once() after
kmemleak_lock is released.
Link: https://lkml.kernel.org/r/20250731-kmemleak_lock-v1-1-728fd470198f@debian.org
Fixes: c5665868183f ("mm: kmemleak: use the memory pool for early allocations")
Signed-off-by: Breno Leitao <leitao@debian.org>
Reported-by: Jakub Kicinski <kuba@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'mm')
| -rw-r--r-- | mm/kmemleak.c | 5 |
1 file changed, 4 insertions(+), 1 deletion(-)
```diff
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 55df2f6b1fd3..a7fc6b23c37e 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -417,6 +417,7 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
 {
 	unsigned long flags;
 	struct kmemleak_object *object;
+	bool warn = false;
 
 	/* try the slab allocator first */
 	if (object_cache) {
@@ -434,8 +435,10 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
 	else if (mem_pool_free_count)
 		object = &mem_pool[--mem_pool_free_count];
 	else
-		pr_warn_once("Memory pool empty, consider increasing CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE\n");
+		warn = true;
 	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
+	if (warn)
+		pr_warn_once("Memory pool empty, consider increasing CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE\n");
 
 	return object;
 }
```
