author     Jinliang Zheng <alexjlzheng@tencent.com>           2024-06-20 20:21:24 +0800
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>    2024-07-18 11:40:51 +0200
commit     25ab2411cb91b00bfe26cd26585ffaf7bb64a028
tree       5e906a7db6a7ef3efda02bf01ffdc10782d20a84 /kernel
parent     07c176e7acc5579c133bb923ab21316d192d0a95
mm: optimize the redundant loop of mm_update_owner_next()
commit cf3f9a593dab87a032d2b6a6fb205e7f3de4f0a1 upstream.
When mm_update_owner_next() races with swapoff (try_to_unuse()) or with
/proc, ptrace, or page migration (get_task_mm()), those paths hold a
transient extra reference on mm_users, so the loop can walk the entire
task list without ever finding a task_struct whose mm_struct matches the
target mm_struct.
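
Each of those paths pins the mm by bumping mm_users. As a rough
illustration, here is a condensed sketch of get_task_mm() (based on
kernel/fork.c around this kernel version, comments added) showing where
that extra reference is taken:

	struct mm_struct *get_task_mm(struct task_struct *task)
	{
		struct mm_struct *mm;

		task_lock(task);
		mm = task->mm;
		if (mm) {
			if (task->flags & PF_KTHREAD)
				mm = NULL;	/* kernel threads have no usable mm */
			else
				mmget(mm);	/* bumps mm->mm_users: this is the
						 * reference that keeps the owner
						 * scan from bailing out early */
		}
		task_unlock(task);
		return mm;
	}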
When this race is combined with the stress-ng-zombie and stress-ng-dup
tests, the resulting long loop can easily cause a hard lockup in
write_lock_irq() on tasklist_lock: the fruitless scan runs under
read_lock(&tasklist_lock), and writers contending in write_lock_irq()
spin with interrupts disabled.
Recognize this situation in advance and exit early.
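
For context, the loop in question is the final slow-path scan over all
processes in kernel/exit.c (see the diff below). A condensed sketch of
that loop with the fix applied, surrounding code elided:

	/* Runs under read_lock(&tasklist_lock); earlier child/sibling searches elided. */
	for_each_process(g) {
		if (atomic_read(&mm->mm_users) <= 1)
			break;		/* the fix: nobody else references mm,
					 * so no new owner can exist */
		if (g->flags & PF_KTHREAD)
			continue;	/* kernel threads cannot own a user mm */
		for_each_thread(g, c) {
			if (c->mm == mm)
				goto assign_new_owner;	/* found a live user of mm */
			if (c->mm)
				break;	/* threads share one mm; skip the
					 * rest of this thread group */
		}
	}

Because the whole scan holds tasklist_lock for reading, breaking out as
soon as mm_users drops to 1 bounds how long contending writers can spin
in write_lock_irq().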
Link: https://lkml.kernel.org/r/20240620122123.3877432-1-alexjlzheng@tencent.com
Signed-off-by: Jinliang Zheng <alexjlzheng@tencent.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Tycho Andersen <tandersen@netflix.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/exit.c | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/kernel/exit.c b/kernel/exit.c
index c764d16328f6..56d3a099825f 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -433,6 +433,8 @@ retry:
 	 * Search through everything else, we should not get here often.
 	 */
 	for_each_process(g) {
+		if (atomic_read(&mm->mm_users) <= 1)
+			break;
 		if (g->flags & PF_KTHREAD)
 			continue;
 		for_each_thread(g, c) {