| field | value | date |
|---|---|---|
| author | Hugh Dickins <hughd@google.com> | 2025-09-08 15:23:15 -0700 |
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2025-10-02 13:42:49 +0200 |
| commit | 1744aff07b833a5e69719e0db649ccfd454e0108 (patch) | |
| tree | 83875cbc1af9385837151655ceb5f35531ef139e /mm/swap.c | |
| parent | d37ec803b281363120af797ee8b4432e2dac307c (diff) | |
mm: folio_may_be_lru_cached() unless folio_test_large()
[ Upstream commit 2da6de30e60dd9bb14600eff1cc99df2fa2ddae3 ]
mm/swap.c and mm/mlock.c agree to drain any per-CPU batch as soon as a
large folio is added: so collect_longterm_unpinnable_folios() just wastes
effort when calling lru_add_drain[_all]() on a large folio.
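For reference, the per-CPU batching check this paragraph relies on, as it stood before this patch (the '-' lines of the mm/swap.c hunk at the end of this page), behaves roughly as in the commented sketch below: a large folio is never left sitting in a per-CPU batch, so draining later on its behalf gains nothing.

```c
/*
 * Sketch of the pre-patch behaviour, reproduced from the '-' side of the
 * hunk below with comments added; not a buildable unit on its own.
 */
static void folio_batch_add_and_move(struct folio_batch *fbatch,
		struct folio *folio, move_fn_t move_fn)
{
	/*
	 * Keep batching only while the folio is not large and LRU caching
	 * has not been disabled; otherwise fall through and drain now.
	 */
	if (folio_batch_add(fbatch, folio) && !folio_test_large(folio) &&
	    !lru_cache_disabled())
		return;
	folio_batch_move_lru(fbatch, move_fn);
}
```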
But although there is good reason not to batch up PMD-sized folios, we
might well benefit from batching a small number of low-order mTHPs (though
unclear how that "small number" limitation will be implemented).
So ask if folio_may_be_lru_cached() rather than !folio_test_large(), to
insulate those particular checks from future change. Name preferred to
"folio_is_batchable" because large folios can well be put on a batch: it's
just the per-CPU LRU caches, drained much later, which need care.
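The helper named in the title is added elsewhere in the upstream commit (this page's diffstat is limited to mm/swap.c); going by the title, its effect is presumably equivalent to the minimal sketch below.

```c
/*
 * Presumed shape of the new helper (its real definition lives outside the
 * mm/swap.c hunk shown here): a folio can only be sitting in the per-CPU
 * LRU caches if it is not a large folio, since large folios are flushed
 * as soon as they are added.
 */
static inline bool folio_may_be_lru_cached(struct folio *folio)
{
	return !folio_test_large(folio);
}
```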
Marked for stable, to counter the increase in lru_add_drain_all()s from
"mm/gup: check ref_count instead of lru before migration".
Link: https://lkml.kernel.org/r/57d2eaf8-3607-f318-e0c5-be02dce61ad0@google.com
Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages allocated from CMA region")
Signed-off-by: Hugh Dickins <hughd@google.com>
Suggested-by: David Hildenbrand <david@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@kernel.org>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Chris Li <chrisl@kernel.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Keir Fraser <keirf@google.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Li Zhe <lizhe.67@bytedance.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Shivank Garg <shivankg@amd.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Xu <weixugc@google.com>
Cc: Will Deacon <will@kernel.org>
Cc: yangge <yangge1116@126.com>
Cc: Yuanchu Xie <yuanchu@google.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ Resolved conflicts in mm/swap.c ]
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Diffstat (limited to 'mm/swap.c')
| mode | file | lines |
|---|---|---|
| -rw-r--r-- | mm/swap.c | 4 |
1 file changed, 2 insertions, 2 deletions
```diff
diff --git a/mm/swap.c b/mm/swap.c
index 42082eba42de..8fde1a27aa48 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -220,8 +220,8 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
 static void folio_batch_add_and_move(struct folio_batch *fbatch,
 		struct folio *folio, move_fn_t move_fn)
 {
-	if (folio_batch_add(fbatch, folio) && !folio_test_large(folio) &&
-	    !lru_cache_disabled())
+	if (folio_batch_add(fbatch, folio) &&
+	    folio_may_be_lru_cached(folio) && !lru_cache_disabled())
 		return;
 	folio_batch_move_lru(fbatch, move_fn);
 }
```
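For a caller such as collect_longterm_unpinnable_folios(), the point of the helper is that lru_add_drain() / lru_add_drain_all() can be skipped for folios that cannot be sitting in the per-CPU caches anyway. The sketch below is illustrative only, built around a hypothetical maybe_drain_for() helper rather than the real mm/gup.c logic.

```c
/*
 * Illustrative caller-side sketch (hypothetical helper, simplified from
 * the idea behind mm/gup.c): drain the per-CPU LRU caches at most once,
 * and only on behalf of a folio that could actually be cached in them.
 */
static void maybe_drain_for(struct folio *folio, bool *drained)
{
	if (*drained || !folio_may_be_lru_cached(folio))
		return;
	lru_add_drain_all();
	*drained = true;
}
```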
