| author | Wei Yang <richard.weiyang@gmail.com> | 2025-12-01 17:18:18 -0500 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2025-12-07 06:25:02 +0900 |
| commit | 592db83615a9f0164472ec789c2ed34ad35f732f (patch) | |
| tree | 812365df0ac5cf72bc40bcf7ad1a0ae3c0834c83 /mm | |
| parent | 10014310193cf6736c1aeb4105c5f4a0818d0c65 (diff) | |
mm/huge_memory: fix NULL pointer dereference when splitting folio
[ Upstream commit cff47b9e39a6abf03dde5f4f156f841b0c54bba0 ]
Commit c010d47f107f ("mm: thp: split huge page to any lower order pages")
introduced an early check on the folio's order via mapping->flags before
proceeding with the split work.
This check introduced a bug: for shmem folios in the swap cache and for
truncated folios, the mapping pointer can be NULL. Accessing
mapping->flags in that state leads directly to a NULL pointer dereference.
This commit fixes the issue by moving the check for mapping != NULL before
any attempt to access mapping->flags.
Link: https://lkml.kernel.org/r/20251119235302.24773-1-richard.weiyang@gmail.com
Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ applied fix to split_huge_page_to_list_to_order() instead of __folio_split() ]
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'mm')
| -rw-r--r-- | mm/huge_memory.c | 17 |
1 file changed, 10 insertions, 7 deletions
```diff
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d68a22c729fb..2065374c7e9e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3404,6 +3404,16 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	if (new_order >= folio_order(folio))
 		return -EINVAL;
 
+	/*
+	 * Folios that just got truncated cannot get split. Signal to the
+	 * caller that there was a race.
+	 *
+	 * TODO: this will also currently refuse shmem folios that are in the
+	 * swapcache.
+	 */
+	if (!is_anon && !folio->mapping)
+		return -EBUSY;
+
 	if (is_anon) {
 		/* order-1 is not supported for anonymous THP. */
 		if (new_order == 1) {
@@ -3466,13 +3476,6 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 		gfp_t gfp;
 
 		mapping = folio->mapping;
-
-		/* Truncated ? */
-		if (!mapping) {
-			ret = -EBUSY;
-			goto out;
-		}
-
 		min_order = mapping_min_folio_order(folio->mapping);
 		if (new_order < min_order) {
 			VM_WARN_ONCE(1, "Cannot split mapped folio below min-order: %u",
```
