author | Ma Wupeng <mawupeng1@huawei.com> | 2025-02-17 09:43:28 +0800
committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2025-03-13 13:07:50 +0100
commit | 1bc47f46d00ae2a3cc91ed77b46955abce111e0f (patch)
tree | 8b7d5eb54b64705494565ded66f7b3a6dd8d4b00
parent | 629dfc6ba5431056701d4e44830f3409b989955a (diff)
mm: memory_hotplug: check folio ref count first in do_migrate_range
commit 773b9a6aa6d38894b95088e3ed6f8a701d9f50fd upstream.
If a folio has an elevated reference count, folio_try_get() acquires a
reference, the necessary operations are performed, and the reference is then
released. A poisoned folio whose reference count is not elevated (which is
unlikely, since memory-failure normally holds a reference to the pages it
poisons) is simply skipped.
Therefore, move the folio_try_get() call, which checks for and acquires the
reference count, to the start of the loop.
Link: https://lkml.kernel.org/r/20250217014329.3610326-3-mawupeng1@huawei.com
Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Naoya Horiguchi <nao.horiguchi@gmail.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-rw-r--r-- | mm/memory_hotplug.c | 20
1 file changed, 7 insertions(+), 13 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 2815bd4ea483..c3de35389269 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1795,12 +1795,12 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		if (folio_test_large(folio))
 			pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
 
-		/*
-		 * HWPoison pages have elevated reference counts so the migration would
-		 * fail on them. It also doesn't make any sense to migrate them in the
-		 * first place. Still try to unmap such a page in case it is still mapped
-		 * (keep the unmap as the catch all safety net).
-		 */
+		if (!folio_try_get(folio))
+			continue;
+
+		if (unlikely(page_folio(page) != folio))
+			goto put_folio;
+
 		if (folio_test_hwpoison(folio) ||
 		    (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) {
 			if (WARN_ON(folio_test_lru(folio)))
@@ -1811,14 +1811,8 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 				folio_unlock(folio);
 			}
 
-			continue;
-		}
-
-		if (!folio_try_get(folio))
-			continue;
-
-		if (unlikely(page_folio(page) != folio))
 			goto put_folio;
+		}
 
 		if (!isolate_folio_to_list(folio, &source)) {
 			if (__ratelimit(&migrate_rs)) {
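To make the resulting control flow easier to follow, here is a hand-abridged
sketch of the loop in do_migrate_range() after this patch. It is illustrative
only and not compilable on its own; the pfn validation, hugetlb handling,
unmap of still-mapped hwpoisoned folios, and ratelimited failure reporting
present in the real function are elided:

```c
for (pfn = start_pfn; pfn < end_pfn; pfn++) {
	struct page *page = pfn_to_page(pfn);
	struct folio *folio = page_folio(page);

	/* Pin the folio first: a folio whose refcount is already zero
	 * cannot be migrated, and a poisoned folio without an elevated
	 * refcount is skipped here as well. */
	if (!folio_try_get(folio))
		continue;

	/* The folio may have changed while we held no reference; make
	 * sure the page still belongs to the folio we just pinned. */
	if (unlikely(page_folio(page) != folio))
		goto put_folio;

	/* Hwpoisoned folios are never migrated, only unmapped; they now
	 * share the common reference-dropping exit path. */
	if (folio_test_hwpoison(folio) ||
	    (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) {
		/* ... unmap the folio if it is still mapped ... */
		goto put_folio;
	}

	if (!isolate_folio_to_list(folio, &source)) {
		/* ... ratelimited "failed to isolate pfn" warning ... */
	}

put_folio:
	folio_put(folio);
}
```

The effect of the reordering is visible here: the reference count is
validated once, at the top, before any per-folio state is inspected, and the
hwpoison branch falls through to the shared put_folio exit rather than
carrying its own continue path.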