path: root/mm
Age | Commit message | Author | Files | Lines
5 days | mm/damon/core: disallow non-power of two min_region_sz | SeongJae Park | 1 | -0/+3
[ Upstream commit c80f46ac228b48403866d65391ad09bdf0e8562a ] DAMON core uses the min_region_sz parameter value as the DAMON region alignment. The alignment is made using ALIGN() and ALIGN_DOWN(), which support only power-of-two alignments. But DAMON core API callers can set min_region_sz to an arbitrary number. Users can also set it indirectly, using addr_unit. When the alignment is not properly set, DAMON behavior becomes difficult to predict and understand, making it effectively broken. It doesn't cause a significant issue such as a kernel crash, though. Fix the issue by disallowing min_region_sz input that is not a power of two. Add the check to damon_commit_ctx(), as all DAMON API callers who set min_region_sz use the function. This can be seen as a behavioral change, but it does not break users, for the following reasons. As the symptom is making DAMON effectively broken, it is not reasonable to believe there are real use cases of non-power-of-two min_region_sz. There are no known use cases or issue reports for such a setup, either. In the future, if we find real use cases of non-power-of-two alignments and we can support them with low enough overhead, we can consider removing the restriction. But, for now, simply disallowing the corner case should be good enough as a hot fix. Link: https://lkml.kernel.org/r/20260214214124.87689-1-sj@kernel.org Fixes: d8f867fa0825 ("mm/damon: add damon_ctx->min_sz_region") Signed-off-by: SeongJae Park <sj@kernel.org> Cc: Quanmin Yan <yanquanmin1@huawei.com> Cc: <stable@vger.kernel.org> [6.18+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> [ min_region_sz => min_sz_region ] Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
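As an aside, the guard this patch describes amounts to something like the following minimal sketch, using the kernel's is_power_of_2() helper from <linux/log2.h>; the function and parameter names here are illustrative rather than the exact upstream change to damon_commit_ctx():

#include <linux/errno.h>
#include <linux/log2.h>

/* Reject region alignments that ALIGN()/ALIGN_DOWN() cannot handle. */
static int damon_validate_min_sz_region(unsigned long min_sz_region)
{
        if (!is_power_of_2(min_sz_region))
                return -EINVAL;
        return 0;
}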
5 days | mm: Fix a hmm_range_fault() livelock / starvation problem | Thomas Hellström | 2 | -5/+11
commit b570f37a2ce480be26c665345c5514686a8a0274 upstream. If hmm_range_fault() fails a folio_trylock() in do_swap_page(), trying to acquire the lock of a device-private folio for migration to RAM, the function will spin until it succeeds in grabbing the lock. However, if the process holding the lock is depending on a work item to be completed, which is scheduled on the same CPU as the spinning hmm_range_fault(), that work item might be starved and we end up in a livelock / starvation situation which is never resolved. This can happen, for example, if the process holding the device-private folio lock is stuck in migrate_device_unmap()->lru_add_drain_all(), since lru_add_drain_all() requires a short work-item to be run on all online CPUs to complete. A prerequisite for this to happen is: a) Both zone device and system memory folios are considered in migrate_device_unmap(), so that there is a reason to call lru_add_drain_all() for a system memory folio while a folio lock is held on a zone device folio. b) The zone device folio has an initial mapcount > 1 which causes at least one migration PTE entry insertion to be deferred to try_to_migrate(), which can happen after the call to lru_add_drain_all(). c) No preemption, or voluntary-only preemption. This all seems pretty unlikely to happen, but it is indeed hit by the "xe_exec_system_allocator" igt test. Resolve this by waiting for the folio to be unlocked if the folio_trylock() fails in do_swap_page(). Rename migration_entry_wait_on_locked() to softleaf_entry_wait_on_locked() and update its documentation to indicate the new use-case. Future code improvements might consider moving the lru_add_drain_all() call in migrate_device_unmap() to be called *after* all pages have migration entries inserted. That would also eliminate b) above. v2: - Instead of a cond_resched() in hmm_range_fault(), eliminate the problem by waiting for the folio to be unlocked in do_swap_page() (Alistair Popple, Andrew Morton) v3: - Add a stub migration_entry_wait_on_locked() for the !CONFIG_MIGRATION case. (Kernel Test Robot) v4: - Rename migrate_entry_wait_on_locked() to softleaf_entry_wait_on_locked() and update docs (Alistair Popple) v5: - Add a WARN_ON_ONCE() for the !CONFIG_MIGRATION version of softleaf_entry_wait_on_locked(). - Modify wording around function names in the commit message (Andrew Morton) Suggested-by: Alistair Popple <apopple@nvidia.com> Fixes: 1afaeb8293c9 ("mm/migrate: Trylock device page in do_swap_page") Cc: Ralph Campbell <rcampbell@nvidia.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Jason Gunthorpe <jgg@mellanox.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Leon Romanovsky <leon@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Matthew Brost <matthew.brost@intel.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Alistair Popple <apopple@nvidia.com> Cc: linux-mm@kvack.org Cc: <dri-devel@lists.freedesktop.org> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: <stable@vger.kernel.org> # v6.15+ Reviewed-by: John Hubbard <jhubbard@nvidia.com> #v3 Reviewed-by: Alistair Popple <apopple@nvidia.com> Link: https://patch.msgid.link/20260210115653.92413-1-thomas.hellstrom@linux.intel.com (cherry picked from commit a69d1ab971a624c6f112cea61536569d579c3215) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
5 days | memcg: fix slab accounting in refill_obj_stock() trylock path | Hao Li | 1 | -1/+1
commit dccd5ee2625d50239510bcd73ed78559005e00a3 upstream. In the trylock path of refill_obj_stock(), mod_objcg_mlstate() should use the real alloc/free bytes (i.e., nr_acct) for accounting, rather than nr_bytes. The user-visible impact is that the NR_SLAB_RECLAIMABLE_B and NR_SLAB_UNRECLAIMABLE_B stats can end up being incorrect. For example, if a user allocates a 6144-byte object, then before this fix refill_obj_stock() calls mod_objcg_mlstate(..., nr_bytes=2048), even though it should account for 6144 bytes (i.e., nr_acct). When the user later frees the same object with kfree(), refill_obj_stock() calls mod_objcg_mlstate(..., nr_bytes=6144). This ends up adding 6144 to the stats, but it should be applying -6144 (i.e., nr_acct) since the object is being freed. Link: https://lkml.kernel.org/r/20260226115145.62903-1-hao.li@linux.dev Fixes: 200577f69f29 ("memcg: objcg stock trylock without irq disabling") Signed-off-by: Hao Li <hao.li@linux.dev> Acked-by: Shakeel Butt <shakeel.butt@linux.dev> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Muchun Song <muchun.song@linux.dev> Cc: Roman Gushchin <roman.gushchin@linux.dev> Cc: Vlastimil Babka <vbabka@suse.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
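A sketch of the accounting rule being fixed; the helper signature below is assumed for illustration and the surrounding refill_obj_stock() logic is omitted:

/*
 * nr_bytes is the delta applied to the per-CPU objcg stock; nr_acct is the
 * real size of the object being allocated or freed. The NR_SLAB_*_B vmstat
 * counters must move by nr_acct, not by the leftover stock delta.
 */
static void objcg_account_slab_bytes(struct obj_cgroup *objcg,
                                     struct pglist_data *pgdat,
                                     enum node_stat_item idx,
                                     int nr_acct, int nr_bytes)
{
        if (nr_acct)
                mod_objcg_mlstate(objcg, pgdat, idx, nr_acct); /* was: nr_bytes */
}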
5 days | slab: distinguish lock and trylock for sheaf_flush_main() | Vlastimil Babka | 1 | -10/+37
commit 48647d3f9a644d1e81af6558102d43cdb260597b upstream. sheaf_flush_main() can be called from __pcs_replace_full_main() where it's fine if the trylock fails, and pcs_flush_all() where it's not expected to and for some flush callers (when destroying the cache or memory hotremove) it would be actually a problem if it failed and left the main sheaf not flushed. The flush callers can however safely use local_lock() instead of trylock. The trylock failure should not happen in practice on !PREEMPT_RT, but can happen on PREEMPT_RT. The impact is limited in practice because when a trylock fails in the kmem_cache_destroy() path, it means someone is using the cache while destroying it, which is a bug on its own. The memory hotremove path is unlikely to be employed in a production RT config, but it's possible. To fix this, split the function into sheaf_flush_main() (using local_lock()) and sheaf_try_flush_main() (using local_trylock()) where both call __sheaf_flush_main_batch() to flush a single batch of objects. This will also allow lockdep to verify our context assumptions. The problem was raised in an off-list question by Marcelo. Fixes: 2d517aa09bbc ("slab: add opt-in caching layer of percpu sheaves") Cc: stable@vger.kernel.org Reported-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Reviewed-by: Harry Yoo <harry.yoo@oracle.com> Reviewed-by: Hao Li <hao.li@linux.dev> Link: https://patch.msgid.link/20260211-b4-sheaf-flush-v1-1-4e7f492f0055@suse.cz Signed-off-by: Vlastimil Babka (SUSE) <vbabka@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
5 days | mm/slab: fix an incorrect check in obj_exts_alloc_size() | Harry Yoo | 1 | -7/+0
commit 8dafa9f5900c4855a65dbfee51e3bd00636deee1 upstream. obj_exts_alloc_size() prevents recursive allocation of slabobj_ext array from the same cache, to avoid creating slabs that are never freed. There is one mistake that returns the original size when memory allocation profiling is disabled. The assumption was that memcg-triggered slabobj_ext allocation is always served from KMALLOC_CGROUP type. But this is wrong [1]: when the caller specifies both __GFP_RECLAIMABLE and __GFP_ACCOUNT with SLUB_TINY enabled, the allocation is served from normal kmalloc. This is because kmalloc_type() prioritizes __GFP_RECLAIMABLE over __GFP_ACCOUNT, and SLUB_TINY aliases KMALLOC_RECLAIM with KMALLOC_NORMAL. As a result, the recursion guard is bypassed and the problematic slabs can be created. Fix this by removing the mem_alloc_profiling_enabled() check entirely. The remaining is_kmalloc_normal() check is still sufficient to detect whether the cache is of KMALLOC_NORMAL type and avoid bumping the size if it's not. Without SLUB_TINY, no functional change intended. With SLUB_TINY, allocations with __GFP_ACCOUNT|__GFP_RECLAIMABLE now allocate a larger array if the sizes equal. Reported-by: Zw Tang <shicenci@gmail.com> Fixes: 280ea9c3154b ("mm/slab: avoid allocating slabobj_ext array from its own slab") Closes: https://lore.kernel.org/linux-mm/CAPHJ_VKuMKSke8b11AZQw1PTSFN4n2C0gFxC6xGOG0ZLHgPmnA@mail.gmail.com [1] Cc: stable@vger.kernel.org Signed-off-by: Harry Yoo <harry.yoo@oracle.com> Link: https://patch.msgid.link/20260309072219.22653-1-harry.yoo@oracle.com Tested-by: Zw Tang <shicenci@gmail.com> Signed-off-by: Vlastimil Babka (SUSE) <vbabka@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
5 days | mm/damon/core: clear walk_control on inactive context in damos_walk() | Raul Pazemecxas De Andrade | 1 | -1/+6
commit d210fdcac9c0d1380eab448aebc93f602c1cd4e6 upstream. damos_walk() sets ctx->walk_control to the caller-provided control structure before checking whether the context is running. If the context is inactive (damon_is_running() returns false), the function returns -EINVAL without clearing ctx->walk_control. This leaves a dangling pointer to a stack-allocated structure that will be freed when the caller returns. This is structurally identical to the bug fixed in commit f9132fbc2e83 ("mm/damon/core: remove call_control in inactive contexts") for damon_call(), which had the same pattern of linking a control object and returning an error without unlinking it. The dangling walk_control pointer can cause: 1. Use-after-free if the context is later started and kdamond dereferences ctx->walk_control (e.g., in damos_walk_cancel() which writes to control->canceled and calls complete()) 2. Permanent -EBUSY from subsequent damos_walk() calls, since the stale pointer is non-NULL. Nonetheless, the real user impact is quite limited. The use-after-free is impossible because there are no damos_walk() callers who start the context later. The permanent -EBUSY can actually confuse users, as DAMON is not running. But the symptom persists only while the context is turned off. Turning it on again will make DAMON internally use a newly generated damon_ctx object that doesn't have the invalid damos_walk_control pointer, so everything will work fine again. Fix this by clearing ctx->walk_control under walk_control_lock before returning -EINVAL, mirroring the fix pattern from f9132fbc2e83. Link: https://lkml.kernel.org/r/20260224011102.56033-1-sj@kernel.org Fixes: bf0eaba0ff9c ("mm/damon/core: implement damos_walk()") Reported-by: Raul Pazemecxas De Andrade <raul_pazemecxas@hotmail.com> Closes: https://lore.kernel.org/CPUPR80MB8171025468965E583EF2490F956CA@CPUPR80MB8171.lamprd80.prod.outlook.com Signed-off-by: Raul Pazemecxas De Andrade <raul_pazemecxas@hotmail.com> Signed-off-by: SeongJae Park <sj@kernel.org> Reviewed-by: SeongJae Park <sj@kernel.org> Cc: <stable@vger.kernel.org> [6.14+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
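The unlink-on-error pattern that the fix mirrors looks roughly like the fragment below; it assumes walk_control_lock is a mutex and simplifies the surrounding damos_walk() code:

        mutex_lock(&ctx->walk_control_lock);
        ctx->walk_control = control;
        if (!damon_is_running(ctx)) {
                /* Never leave a pointer to the caller's stack object behind. */
                ctx->walk_control = NULL;
                mutex_unlock(&ctx->walk_control_lock);
                return -EINVAL;
        }
        mutex_unlock(&ctx->walk_control_lock);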
5 days | mm/kfence: disable KFENCE upon KASAN HW tags enablement | Alexander Potapenko | 1 | -0/+15
commit 09833d99db36d74456a4d13eb29c32d56ff8f2b6 upstream. KFENCE does not currently support KASAN hardware tags. As a result, the two features are incompatible when enabled simultaneously. Given that MTE provides deterministic protection and KFENCE is a sampling-based debugging tool, prioritize the stronger hardware protections. Disable KFENCE initialization and free the pre-allocated pool if KASAN hardware tags are detected to ensure the system maintains the security guarantees provided by MTE. Link: https://lkml.kernel.org/r/20260213095410.1862978-1-glider@google.com Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure") Signed-off-by: Alexander Potapenko <glider@google.com> Suggested-by: Marco Elver <elver@google.com> Reviewed-by: Marco Elver <elver@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Ernesto Martinez Garcia <ernesto.martinezgarcia@tugraz.at> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Kees Cook <kees@kernel.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
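The described guard early in KFENCE initialization would look roughly like this fragment; kasan_hw_tags_enabled() is a real helper from <linux/kasan.h>, while the pool-freeing step is only hinted at:

        if (kasan_hw_tags_enabled()) {
                /*
                 * MTE-based KASAN already provides deterministic protection;
                 * skip KFENCE and return the pre-allocated pool (not shown).
                 */
                pr_info("KFENCE disabled: KASAN hardware tag-based mode is enabled\n");
                return;
        }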
5 days | mm/kfence: fix KASAN hardware tag faults during late enablement | Alexander Potapenko | 2 | -7/+10
commit d155aab90fffa00f93cea1f107aef0a3d548b2ff upstream. When KASAN hardware tags are enabled, re-enabling KFENCE late (via /sys/module/kfence/parameters/sample_interval) causes KASAN faults. This happens because the KFENCE pool and metadata are allocated via the page allocator, which tags the memory, while KFENCE continues to access it using untagged pointers during initialization. Use __GFP_SKIP_KASAN for late KFENCE pool and metadata allocations to ensure the memory remains untagged, consistent with early allocations from memblock. To support this, add __GFP_SKIP_KASAN to the allowlist in __alloc_contig_verify_gfp_mask(). Link: https://lkml.kernel.org/r/20260220144940.2779209-1-glider@google.com Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure") Signed-off-by: Alexander Potapenko <glider@google.com> Suggested-by: Ernesto Martinez Garcia <ernesto.martinezgarcia@tugraz.at> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Kees Cook <kees@kernel.org> Cc: Marco Elver <elver@google.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
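A hedged sketch of a late pool allocation with the flag applied; alloc_contig_pages(), __GFP_SKIP_KASAN and KFENCE_POOL_SIZE exist, but the exact call site and arguments in the kfence code differ:

        /* Keep the late-allocated pool untagged, like the memblock path. */
        struct page *pages = alloc_contig_pages(KFENCE_POOL_SIZE / PAGE_SIZE,
                                                GFP_KERNEL | __GFP_SKIP_KASAN,
                                                first_online_node, NULL);
        if (!pages)
                return -ENOMEM;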
13 days | mm: thp: deny THP for files on anonymous inodes | Deepanshu Kartikey | 1 | -0/+3
commit dd085fe9a8ebfc5d10314c60452db38d2b75e609 upstream. file_thp_enabled() incorrectly allows THP for files on anonymous inodes (e.g. guest_memfd and secretmem). These files are created via alloc_file_pseudo(), which does not call get_write_access() and leaves inode->i_writecount at 0. Combined with S_ISREG(inode->i_mode) being true, they appear as read-only regular files when CONFIG_READ_ONLY_THP_FOR_FS is enabled, making them eligible for THP collapse. Anonymous inodes can never pass the inode_is_open_for_write() check since their i_writecount is never incremented through the normal VFS open path. The right thing to do is to exclude them from THP eligibility altogether, since CONFIG_READ_ONLY_THP_FOR_FS was designed for real filesystem files (e.g. shared libraries), not for pseudo-filesystem inodes. For guest_memfd, this allows khugepaged and MADV_COLLAPSE to create large folios in the page cache via the collapse path, but the guest_memfd fault handler does not support large folios. This triggers WARN_ON_ONCE(folio_test_large(folio)) in kvm_gmem_fault_user_mapping(). For secretmem, collapse_file() tries to copy page contents through the direct map, but secretmem pages are removed from the direct map. This can result in a kernel crash: BUG: unable to handle page fault for address: ffff88810284d000 RIP: 0010:memcpy_orig+0x16/0x130 Call Trace: collapse_file hpage_collapse_scan_file madvise_collapse Secretmem is not affected by the crash on upstream as the memory failure recovery handles the failed copy gracefully, but it still triggers confusing false memory failure reports: Memory failure: 0x106d96f: recovery action for clean unevictable LRU page: Recovered Check IS_ANON_FILE(inode) in file_thp_enabled() to deny THP for all anonymous inode files. Link: https://syzkaller.appspot.com/bug?extid=33a04338019ac7e43a44 Link: https://lore.kernel.org/linux-mm/CAEvNRgHegcz3ro35ixkDw39ES8=U6rs6S7iP0gkR9enr7HoGtA@mail.gmail.com Link: https://lkml.kernel.org/r/20260214001535.435626-1-kartikey406@gmail.com Fixes: 7fbb5e188248 ("mm: remove VM_EXEC requirement for THP eligibility") Signed-off-by: Deepanshu Kartikey <Kartikey406@gmail.com> Reported-by: syzbot+33a04338019ac7e43a44@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=33a04338019ac7e43a44 Tested-by: syzbot+33a04338019ac7e43a44@syzkaller.appspotmail.com Tested-by: Lance Yang <lance.yang@linux.dev> Acked-by: David Hildenbrand (Arm) <david@kernel.org> Reviewed-by: Barry Song <baohua@kernel.org> Reviewed-by: Ackerley Tng <ackerleytng@google.com> Tested-by: Ackerley Tng <ackerleytng@google.com> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Dev Jain <dev.jain@arm.com> Cc: Fangrui Song <i@maskray.me> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Nico Pache <npache@redhat.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Yang Shi <shy828301@gmail.com> Cc: Zi Yan <ziy@nvidia.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
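The eligibility test described in the commit reduces to roughly the sketch below (names taken from the commit message; the real file_thp_enabled() also consults the VMA and config options):

#include <linux/fs.h>

static bool file_thp_inode_allowed(struct inode *inode)
{
        /* guest_memfd, secretmem and other anon-inode files are excluded. */
        if (IS_ANON_FILE(inode))
                return false;
        return S_ISREG(inode->i_mode) && !inode_is_open_for_write(inode);
}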
13 days | mm/slab: use prandom if !allow_spin | Harry Yoo | 1 | -4/+24
[ Upstream commit a1e244a9f177894969c6cd5ebbc6d72c19fc4a7a ] When CONFIG_SLAB_FREELIST_RANDOM is enabled and get_random_u32() is called in an NMI context, lockdep complains because it acquires a local_lock: ================================ WARNING: inconsistent lock state 6.19.0-rc5-slab-for-next+ #325 Tainted: G N -------------------------------- inconsistent {INITIAL USE} -> {IN-NMI} usage. kunit_try_catch/8312 [HC2[2]:SC0[0]:HE0:SE1] takes: ffff88a02ec49cc0 (batched_entropy_u32.lock){-.-.}-{3:3}, at: get_random_u32+0x7f/0x2e0 {INITIAL USE} state was registered at: lock_acquire+0xd9/0x2f0 get_random_u32+0x93/0x2e0 __get_random_u32_below+0x17/0x70 cache_random_seq_create+0x121/0x1c0 init_cache_random_seq+0x5d/0x110 do_kmem_cache_create+0x1e0/0xa30 __kmem_cache_create_args+0x4ec/0x830 create_kmalloc_caches+0xe6/0x130 kmem_cache_init+0x1b1/0x660 mm_core_init+0x1d8/0x4b0 start_kernel+0x620/0xcd0 x86_64_start_reservations+0x18/0x30 x86_64_start_kernel+0xf3/0x140 common_startup_64+0x13e/0x148 irq event stamp: 76 hardirqs last enabled at (75): [<ffffffff8298b77a>] exc_nmi+0x11a/0x240 hardirqs last disabled at (76): [<ffffffff8298b991>] sysvec_irq_work+0x11/0x110 softirqs last enabled at (0): [<ffffffff813b2dda>] copy_process+0xc7a/0x2350 softirqs last disabled at (0): [<0000000000000000>] 0x0 other info that might help us debug this: Possible unsafe locking scenario: CPU0 ---- lock(batched_entropy_u32.lock); <Interrupt> lock(batched_entropy_u32.lock); *** DEADLOCK *** Fix this by using pseudo-random number generator if !allow_spin. This means kmalloc_nolock() users won't get truly random numbers, but there is not much we can do about it. Note that an NMI handler might interrupt prandom_u32_state() and change the random state, but that's safe. Link: https://lore.kernel.org/all/0c33bdee-6de8-4d9f-92ca-4f72c1b6fb9f@suse.cz Fixes: af92793e52c3 ("slab: Introduce kmalloc_nolock() and kfree_nolock().") Cc: stable@vger.kernel.org Signed-off-by: Harry Yoo <harry.yoo@oracle.com> Link: https://patch.msgid.link/20260210081900.329447-3-harry.yoo@oracle.com Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Sasha Levin <sashal@kernel.org>
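A sketch of the described fallback, assuming a per-CPU prandom state that is seeded elsewhere; the real freelist-randomization code in mm/slub.c is more involved:

#include <linux/percpu-defs.h>
#include <linux/prandom.h>
#include <linux/random.h>

static DEFINE_PER_CPU(struct rnd_state, slab_rnd_state);

static u32 slab_random_u32(bool allow_spin)
{
        if (allow_spin)
                return get_random_u32();        /* may take the batched entropy lock */
        /* NMI-safe: no locks; an interrupting NMI only perturbs the state. */
        return prandom_u32_state(this_cpu_ptr(&slab_rnd_state));
}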
13 days | slub: remove CONFIG_SLUB_TINY specific code paths | Vlastimil Babka | 2 | -105/+4
[ Upstream commit 31e0886fd57d426d18a239dd55e176032c9c1cb0 ] CONFIG_SLUB_TINY minimizes the SLUB's memory overhead in multiple ways, mainly by avoiding percpu caching of slabs and objects. It also reduces code size by replacing some code paths with simplified ones through ifdefs, but the benefits of that are smaller and would complicate the upcoming changes. Thus remove these code paths and associated ifdefs and simplify the code base. Link: https://patch.msgid.link/20251105-sheaves-cleanups-v1-4-b8218e1ac7ef@suse.cz Reviewed-by: Harry Yoo <harry.yoo@oracle.com> Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Stable-dep-of: a1e244a9f177 ("mm/slab: use prandom if !allow_spin") Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | mm: numa_memblks: Identify the accurate NUMA ID of CFMW | Cui Chao | 1 | -4/+5
[ Upstream commit f043a93fff9e3e3e648b6525483f59104b0819fa ] In some physical memory layout designs, the address space of CFMW (CXL Fixed Memory Window) resides between multiple segments of system memory belonging to the same NUMA node. In numa_cleanup_meminfo, these multiple segments of system memory are merged into a larger numa_memblk. When identifying which NUMA node the CFMW belongs to, it may be incorrectly assigned to the NUMA node of the merged system memory. When a CXL RAM region is created in userspace, the memory capacity of the newly created region is not added to the CFMW-dedicated NUMA node. Instead, it is accumulated into an existing NUMA node (e.g., NUMA0 containing RAM). This makes it impossible to clearly distinguish between the two types of memory, which may affect memory-tiering applications. Example memory layout: Physical address space: 0x00000000 - 0x1FFFFFFF System RAM (node0) 0x20000000 - 0x2FFFFFFF CXL CFMW (node2) 0x40000000 - 0x5FFFFFFF System RAM (node0) 0x60000000 - 0x7FFFFFFF System RAM (node1) After numa_cleanup_meminfo, the two node0 segments are merged into one: 0x00000000 - 0x5FFFFFFF System RAM (node0) // CFMW is inside the range 0x60000000 - 0x7FFFFFFF System RAM (node1) So the CFMW (0x20000000-0x2FFFFFFF) will be incorrectly assigned to node0. To address this scenario, accurately identifying the correct NUMA node can be achieved by checking whether the region belongs to both numa_meminfo and numa_reserved_meminfo. While this issue is only observed in a QEMU configuration, and no known end users are impacted by this problem, it is likely that some firmware implementation is leaving memory map holes in a CXL Fixed Memory Window. CXL hotplug depends on mapping free window capacity, and it seems to be only a coincidence to have not hit this problem yet. Fixes: 779dd20cfb56 ("cxl/region: Add region creation support") Signed-off-by: Cui Chao <cuichao1753@phytium.com.cn> Cc: stable@vger.kernel.org Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com> Reviewed-by: Gregory Price <gourry@gourry.net> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Link: https://patch.msgid.link/20260213060347.2389818-2-cuichao1753@phytium.com.cn Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | mm/page_alloc: clear page->private in free_pages_prepare() | Mikhail Gavrilov | 1 | -0/+1
[ Upstream commit ac1ea219590c09572ed5992dc233bbf7bb70fef9 ] Several subsystems (slub, shmem, ttm, etc.) use page->private but don't clear it before freeing pages. When these pages are later allocated as high-order pages and split via split_page(), tail pages retain stale page->private values. This causes a use-after-free in the swap subsystem. The swap code uses page->private to track swap count continuations, assuming freshly allocated pages have page->private == 0. When stale values are present, swap_count_continued() incorrectly assumes the continuation list is valid and iterates over uninitialized page->lru containing LIST_POISON values, causing a crash: KASAN: maybe wild-memory-access in range [0xdead000000000100-0xdead000000000107] RIP: 0010:__do_sys_swapoff+0x1151/0x1860 Fix this by clearing page->private in free_pages_prepare(), ensuring all freed pages have clean state regardless of previous use. Link: https://lkml.kernel.org/r/20260207173615.146159-1-mikhail.v.gavrilov@gmail.com Fixes: 3b8000ae185c ("mm/vmalloc: huge vmalloc backing pages should be split rather than compound") Signed-off-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com> Suggested-by: Zi Yan <ziy@nvidia.com> Acked-by: Zi Yan <ziy@nvidia.com> Acked-by: David Hildenbrand (Arm) <david@kernel.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Cc: Brendan Jackman <jackmanb@google.com> Cc: Chris Li <chrisl@kernel.org> Cc: Hugh Dickins <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kairui Song <ryncsn@gmail.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Suren Baghdasaryan <surenb@google.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
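The fix itself is conceptually a single line in the free preparation path, shown here as a hedged fragment rather than the exact diff context:

        /* Leave no stale owner data behind for the next user of this page. */
        set_page_private(page, 0);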
2026-03-04 | mm/vmscan: fix demotion targets checks in reclaim/demotion | Bing Jiao | 2 | -14/+36
[ Upstream commit 1aceed565ff172fc0331dd1d5e7e65139b711139 ] Patch series "mm/vmscan: fix demotion targets checks in reclaim/demotion", v9. This patch series addresses two issues in demote_folio_list(), can_demote(), and next_demotion_node() in reclaim/demotion. 1. demote_folio_list() and can_demote() do not correctly check demotion target against cpuset.mems_effective, which will cause (a) pages to be demoted to not-allowed nodes and (b) pages fail demotion even if the system still has allowed demotion nodes. Patch 1 fixes this bug by updating cpuset_node_allowed() and mem_cgroup_node_allowed() to return effective_mems, allowing directly logic-and operation against demotion targets. 2. next_demotion_node() returns a preferred demotion target, but it does not check the node against allowed nodes. Patch 2 ensures that next_demotion_node() filters against the allowed node mask and selects the closest demotion target to the source node. This patch (of 2): Fix two bugs in demote_folio_list() and can_demote() due to incorrect demotion target checks against cpuset.mems_effective in reclaim/demotion. Commit 7d709f49babc ("vmscan,cgroup: apply mems_effective to reclaim") introduces the cpuset.mems_effective check and applies it to can_demote(). However: 1. It does not apply this check in demote_folio_list(), which leads to situations where pages are demoted to nodes that are explicitly excluded from the task's cpuset.mems. 2. It checks only the nodes in the immediate next demotion hierarchy and does not check all allowed demotion targets in can_demote(). This can cause pages to never be demoted if the nodes in the next demotion hierarchy are not set in mems_effective. These bugs break resource isolation provided by cpuset.mems. This is visible from userspace because pages can either fail to be demoted entirely or are demoted to nodes that are not allowed in multi-tier memory systems. To address these bugs, update cpuset_node_allowed() and mem_cgroup_node_allowed() to return effective_mems, allowing directly logic-and operation against demotion targets. Also update can_demote() and demote_folio_list() accordingly. Bug 1 reproduction: Assume a system with 4 nodes, where nodes 0-1 are top-tier and nodes 2-3 are far-tier memory. All nodes have equal capacity. Test script: echo 1 > /sys/kernel/mm/numa/demotion_enabled mkdir /sys/fs/cgroup/test echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control echo "0-2" > /sys/fs/cgroup/test/cpuset.mems echo $$ > /sys/fs/cgroup/test/cgroup.procs swapoff -a # Expectation: Should respect node 0-2 limit. # Observation: Node 3 shows significant allocation (MemFree drops) stress-ng --oomable --vm 1 --vm-bytes 150% --mbind 0,1 Bug 2 reproduction: Assume a system with 6 nodes, where nodes 0-2 are top-tier, node 3 is a far-tier node, and nodes 4-5 are the farthest-tier nodes. All nodes have equal capacity. Test script: echo 1 > /sys/kernel/mm/numa/demotion_enabled mkdir /sys/fs/cgroup/test echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control echo "0-2,4-5" > /sys/fs/cgroup/test/cpuset.mems echo $$ > /sys/fs/cgroup/test/cgroup.procs swapoff -a # Expectation: Pages are demoted to Nodes 4-5 # Observation: No pages are demoted before oom. 
stress-ng --oomable --vm 1 --vm-bytes 150% --mbind 0,1,2 Link: https://lkml.kernel.org/r/20260114205305.2869796-1-bingjiao@google.com Link: https://lkml.kernel.org/r/20260114205305.2869796-2-bingjiao@google.com Fixes: 7d709f49babc ("vmscan,cgroup: apply mems_effective to reclaim") Signed-off-by: Bing Jiao <bingjiao@google.com> Acked-by: Shakeel Butt <shakeel.butt@linux.dev> Cc: Axel Rasmussen <axelrasmussen@google.com> Cc: David Hildenbrand <david@kernel.org> Cc: Gregory Price <gourry@gourry.net> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Joshua Hahn <joshua.hahnjy@gmail.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Muchun Song <muchun.song@linux.dev> Cc: Qi Zheng <zhengqi.arch@bytedance.com> Cc: Roman Gushchin <roman.gushchin@linux.dev> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Tejun Heo <tj@kernel.org> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Waiman Long <longman@redhat.com> Cc: Wei Xu <weixugc@google.com> Cc: Yuanchu Xie <yuanchu@google.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | mm/page_alloc: skip debug_check_no_{obj,locks}_freed with FPI_TRYLOCK | Harry Yoo | 1 | -6/+11
[ Upstream commit 338ad1e84d15078a9ae46d7dd7466329ae0bfa61 ] When CONFIG_DEBUG_OBJECTS_FREE is enabled, debug_check_no_{obj,locks}_freed() functions are called. Since both of them spin on a lock, they are not safe to be called if the FPI_TRYLOCK flag is specified. This leads to a lockdep splat: ================================ WARNING: inconsistent lock state 6.19.0-rc5-slab-for-next+ #326 Tainted: G N -------------------------------- inconsistent {INITIAL USE} -> {IN-NMI} usage. kunit_try_catch/9046 [HC2[2]:SC0[0]:HE0:SE1] takes: ffffffff84ed6bf8 (&obj_hash[i].lock){-.-.}-{2:2}, at: __debug_check_no_obj_freed+0xe0/0x300 {INITIAL USE} state was registered at: lock_acquire+0xd9/0x2f0 _raw_spin_lock_irqsave+0x4c/0x80 __debug_object_init+0x9d/0x1f0 debug_object_init+0x34/0x50 __init_work+0x28/0x40 init_cgroup_housekeeping+0x151/0x210 init_cgroup_root+0x3d/0x140 cgroup_init_early+0x30/0x240 start_kernel+0x3e/0xcd0 x86_64_start_reservations+0x18/0x30 x86_64_start_kernel+0xf3/0x140 common_startup_64+0x13e/0x148 irq event stamp: 2998 hardirqs last enabled at (2997): [<ffffffff8298b77a>] exc_nmi+0x11a/0x240 hardirqs last disabled at (2998): [<ffffffff8298b991>] sysvec_irq_work+0x11/0x110 softirqs last enabled at (1416): [<ffffffff813c1f72>] __irq_exit_rcu+0x132/0x1c0 softirqs last disabled at (1303): [<ffffffff813c1f72>] __irq_exit_rcu+0x132/0x1c0 other info that might help us debug this: Possible unsafe locking scenario: CPU0 ---- lock(&obj_hash[i].lock); <Interrupt> lock(&obj_hash[i].lock); *** DEADLOCK *** Rename free_pages_prepare() to __free_pages_prepare(), add an fpi_t parameter, and skip those checks if FPI_TRYLOCK is set. To keep the fpi_t definition in mm/page_alloc.c, add a wrapper function free_pages_prepare() that always passes FPI_NONE and use it in mm/compaction.c. Link: https://lkml.kernel.org/r/20260209062639.16577-1-harry.yoo@oracle.com Fixes: 8c57b687e833 ("mm, bpf: Introduce free_pages_nolock()") Signed-off-by: Harry Yoo <harry.yoo@oracle.com> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Zi Yan <ziy@nvidia.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Brendan Jackman <jackmanb@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: Shakeel Butt <shakeel.butt@linux.dev> Cc: Suren Baghdasaryan <surenb@google.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | mm/hugetlb: restore failed global reservations to subpool | Joshua Hahn | 1 | -0/+9
[ Upstream commit 1d3f9bb4c8af70304d19c22e30f5d16a2d589bb5 ] Commit a833a693a490 ("mm: hugetlb: fix incorrect fallback for subpool") fixed an underflow error for hstate->resv_huge_pages caused by incorrectly attributing globally requested pages to the subpool's reservation. Unfortunately, this fix also introduced the opposite problem, which would leave spool->used_hpages elevated if the globally requested pages could not be acquired. This is because while a subpool's reserve pages only accounts for what is requested and allocated from the subpool, its "used" counter keeps track of what is consumed in total, both from the subpool and globally. Thus, we need to adjust spool->used_hpages in the other direction, and make sure that globally requested pages are uncharged from the subpool's used counter. Each failed allocation attempt increments the used_hpages counter by how many pages were requested from the global pool. Ultimately, this renders the subpool unusable, as used_hpages approaches the max limit. The issue can be reproduced as follows: 1. Allocate 4 hugetlb pages 2. Create a hugetlb mount with max=4, min=2 3. Consume 2 pages globally 4. Request 3 pages from the subpool (2 from subpool + 1 from global) 4.1 hugepage_subpool_get_pages(spool, 3) succeeds. used_hpages += 3 4.2 hugetlb_acct_memory(h, 1) fails: no global pages left used_hpages -= 2 5. Subpool now has used_hpages = 1, despite not being able to successfully allocate any hugepages. It believes it can now only allocate 3 more hugepages, not 4. With each failed allocation attempt incrementing the used counter, the subpool eventually reaches a point where its used counter equals its max counter. At that point, any future allocations that try to allocate hugeTLB pages from the subpool will fail, despite the subpool not having any of its hugeTLB pages consumed by any user. Once this happens, there is no way to make the subpool usable again, since there is no way to decrement the used counter as no process is really consuming the hugeTLB pages. The underflow issue that the original commit fixes still remains fixed as well. Without this fix, used_hpages would keep on leaking if hugetlb_acct_memory() fails. Link: https://lkml.kernel.org/r/20260116204037.2270096-1-joshua.hahnjy@gmail.com Fixes: a833a693a490 ("mm: hugetlb: fix incorrect fallback for subpool") Signed-off-by: Joshua Hahn <joshua.hahnjy@gmail.com> Acked-by: Usama Arif <usama.arif@linux.dev> Cc: David Hildenbrand <david@kernel.org> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Ma Wupeng <mawupeng1@huawei.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Muchun Song <muchun.song@linux.dev> Cc: Oscar Salvador <osalvador@suse.de> Cc: Shakeel Butt <shakeel.butt@linux.dev> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Waiman Long <longman@redhat.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
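Conceptually, the error path now compensates the subpool again, along the lines of this simplified sketch; the real rollback lives in mm/hugetlb.c and its bookkeeping is more detailed:

        /* gbl_reserve: pages that had to come from the global pool. */
        ret = hugetlb_acct_memory(h, gbl_reserve);
        if (ret < 0) {
                /* Uncharge the globally requested pages from used_hpages again. */
                hugepage_subpool_put_pages(spool, gbl_reserve);
                goto out_err;
        }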
2026-03-04 | mm/slab: do not access current->mems_allowed_seq if !allow_spin | Harry Yoo | 1 | -2/+11
[ Upstream commit 144080a5823b2dbd635acb6decf7ab23182664f3 ] Lockdep complains when get_from_any_partial() is called in an NMI context, because current->mems_allowed_seq is seqcount_spinlock_t and not NMI-safe: ================================ WARNING: inconsistent lock state 6.19.0-rc5-kfree-rcu+ #315 Tainted: G N -------------------------------- inconsistent {INITIAL USE} -> {IN-NMI} usage. kunit_try_catch/9989 [HC1[1]:SC0[0]:HE0:SE1] takes: ffff889085799820 (&____s->seqcount#3){.-.-}-{0:0}, at: ___slab_alloc+0x58f/0xc00 {INITIAL USE} state was registered at: lock_acquire+0x185/0x320 kernel_init_freeable+0x391/0x1150 kernel_init+0x1f/0x220 ret_from_fork+0x736/0x8f0 ret_from_fork_asm+0x1a/0x30 irq event stamp: 56 hardirqs last enabled at (55): [<ffffffff850a68d7>] _raw_spin_unlock_irq+0x27/0x70 hardirqs last disabled at (56): [<ffffffff850858ca>] __schedule+0x2a8a/0x6630 softirqs last enabled at (0): [<ffffffff81536711>] copy_process+0x1dc1/0x6a10 softirqs last disabled at (0): [<0000000000000000>] 0x0 other info that might help us debug this: Possible unsafe locking scenario: CPU0 ---- lock(&____s->seqcount#3); <Interrupt> lock(&____s->seqcount#3); *** DEADLOCK *** According to Documentation/locking/seqlock.rst, seqcount_t is not NMI-safe and seqcount_latch_t should be used when read path can interrupt the write-side critical section. In this case, do not access current->mems_allowed_seq and avoid retry. Fixes: af92793e52c3 ("slab: Introduce kmalloc_nolock() and kfree_nolock().") Cc: stable@vger.kernel.org Signed-off-by: Harry Yoo <harry.yoo@oracle.com> Link: https://patch.msgid.link/20260210081900.329447-2-harry.yoo@oracle.com Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | mm/slab: use unsigned long for orig_size to ensure proper metadata align | Harry Yoo | 1 | -7/+7
[ Upstream commit b85f369b81aed457acbea4ad3314218254a72fd2 ] When both KASAN and SLAB_STORE_USER are enabled, accesses to struct kasan_alloc_meta fields can be misaligned on 64-bit architectures. This occurs because orig_size is currently defined as unsigned int, which only guarantees 4-byte alignment. When struct kasan_alloc_meta is placed after orig_size, it may end up at a 4-byte boundary rather than the required 8-byte boundary on 64-bit systems. Note that 64-bit architectures without HAVE_EFFICIENT_UNALIGNED_ACCESS are assumed to require 64-bit accesses to be 64-bit aligned. See HAVE_64BIT_ALIGNED_ACCESS and commit adab66b71abf ("Revert: "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"") for more details. Change orig_size from unsigned int to unsigned long to ensure proper alignment for any subsequent metadata. This should not waste additional memory because kmalloc objects are already aligned to at least ARCH_KMALLOC_MINALIGN. Closes: https://lore.kernel.org/all/aPrLF0OUK651M4dk@hyeyoo Suggested-by: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: stable@vger.kernel.org Fixes: 6edf2576a6cc ("mm/slub: enable debugging memory wasting of kmalloc") Signed-off-by: Harry Yoo <harry.yoo@oracle.com> Closes: https://lore.kernel.org/all/aPrLF0OUK651M4dk@hyeyoo/ Link: https://patch.msgid.link/20260113061845.159790-2-harry.yoo@oracle.com Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | mm/slab: avoid allocating slabobj_ext array from its own slab | Harry Yoo | 1 | -7/+53
[ Upstream commit 280ea9c3154b2af7d841f992c9fc79e9d6534e03 ] When allocating slabobj_ext array in alloc_slab_obj_exts(), the array can be allocated from the same slab we're allocating the array for. This led to obj_exts_in_slab() incorrectly returning true [1], although the array is not allocated from wasted space of the slab. Vlastimil Babka observed that this problem should be fixed even when ignoring its incompatibility with obj_exts_in_slab(), because it creates slabs that are never freed as there is always at least one allocated object. To avoid this, use the next kmalloc size or large kmalloc when the array can be allocated from the same cache we're allocating the array for. In case of random kmalloc caches, there are multiple kmalloc caches for the same size and the cache is selected based on the caller address. Because it is fragile to ensure the same caller address is passed to kmalloc_slab(), kmalloc_noprof(), and kmalloc_node_noprof(), bump the size to (s->object_size + 1) when the sizes are equal, instead of directly comparing the kmem_cache pointers. Note that this doesn't happen when memory allocation profiling is disabled, as when the allocation of the array is triggered by memory cgroup (KMALLOC_CGROUP), the array is allocated from KMALLOC_NORMAL. Reported-by: kernel test robot <oliver.sang@intel.com> Closes: https://lore.kernel.org/oe-lkp/202601231457.f7b31e09-lkp@intel.com [1] Cc: stable@vger.kernel.org Fixes: 4b8736964640 ("mm/slab: add allocation accounting into slab allocation and free paths") Signed-off-by: Harry Yoo <harry.yoo@oracle.com> Link: https://patch.msgid.link/20260126125714.88008-1-harry.yoo@oracle.com Reviewed-by: Hao Li <hao.li@linux.dev> Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | mm/highmem: fix __kmap_to_page() build error | William Tambe | 1 | -1/+2
[ Upstream commit 94350fe6cad77b46c3dcb8c96543bef7647efbc0 ] This change fixes the following build error, which was missed in ef6e06b2ef87 ("highmem: fix kmap_to_page() for kmap_local_page() addresses"). mm/highmem.c:184:66: error: 'pteval' undeclared (first use in this function); did you mean 'pte_val'? 184 | idx = arch_kmap_local_map_idx(i, pte_pfn(pteval)); In __kmap_to_page(), pteval is used but does not exist in the function. (akpm: affects xtensa only) Link: https://lkml.kernel.org/r/SJ0PR07MB86317E00EC0C59DA60935FDCD18DA@SJ0PR07MB8631.namprd07.prod.outlook.com Fixes: ef6e06b2ef87 ("highmem: fix kmap_to_page() for kmap_local_page() addresses") Signed-off-by: William Tambe <williamt@cadence.com> Reviewed-by: Max Filippov <jcmvbkbc@gmail.com> Cc: Chris Zankel <chris@zankel.net> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | mm/slab: add rcu_barrier() to kvfree_rcu_barrier_on_cache() | Vlastimil Babka | 1 | -1/+4
[ Upstream commit b55b423e8518361124ff0a9e15df431b3682ee4f ] After we submit the rcu_free sheaves to call_rcu() we need to make sure the rcu callbacks complete. kvfree_rcu_barrier() does that via flush_all_rcu_sheaves() but kvfree_rcu_barrier_on_cache() doesn't. Fix that. This currently causes no issues because the caches with sheaves we have are never destroyed. The problem flagged by kernel test robot was reported for a patch that enables sheaves for (almost) all caches, and occurred only with CONFIG_KASAN. Harry Yoo found the root cause [1]: It turns out the object freed by sheaf_flush_unused() was in KASAN percpu quarantine list (confirmed by dumping the list) by the time __kmem_cache_shutdown() returns an error. Quarantined objects are supposed to be flushed by kasan_cache_shutdown(), but things go wrong if the rcu callback (rcu_free_sheaf_nobarn()) is processed after kasan_cache_shutdown() finishes. That's why rcu_barrier() in __kmem_cache_shutdown() didn't help, because it's called after kasan_cache_shutdown(). Calling rcu_barrier() in kvfree_rcu_barrier_on_cache() guarantees that it'll be added to the quarantine list before kasan_cache_shutdown() is called. So it's a valid fix! [1] https://lore.kernel.org/all/aWd6f3jERlrB5yeF@hyeyoo/ Reported-by: kernel test robot <oliver.sang@intel.com> Closes: https://lore.kernel.org/oe-lkp/202601121442.c530bed3-lkp@intel.com Fixes: 0f35040de593 ("mm/slab: introduce kvfree_rcu_barrier_on_cache() for cache destruction") Cc: stable@vger.kernel.org Reviewed-by: Harry Yoo <harry.yoo@oracle.com> Tested-by: Harry Yoo <harry.yoo@oracle.com> Reviewed-by: Suren Baghdasaryan <surenb@google.com> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com> Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-03-04 | mm/vmalloc: prevent RCU stalls in kasan_release_vmalloc_node | Deepanshu Kartikey | 1 | -0/+8
[ Upstream commit 5747435e0fd474c24530ef1a6822f47e7d264b27 ] When CONFIG_PAGE_OWNER is enabled, freeing KASAN shadow pages during vmalloc cleanup triggers expensive stack unwinding that acquires RCU read locks. Processing a large purge_list without rescheduling can cause the task to hold CPU for extended periods (10+ seconds), leading to RCU stalls and potential OOM conditions. The issue manifests in purge_vmap_node() -> kasan_release_vmalloc_node() where iterating through hundreds or thousands of vmap_area entries and freeing their associated shadow pages causes: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks: rcu: Tasks blocked on level-0 rcu_node (CPUs 0-1): P6229/1:b..l ... task:kworker/0:17 state:R running task stack:28840 pid:6229 ... kasan_release_vmalloc_node+0x1ba/0xad0 mm/vmalloc.c:2299 purge_vmap_node+0x1ba/0xad0 mm/vmalloc.c:2299 Each call to kasan_release_vmalloc() can free many pages, and with page_owner tracking, each free triggers save_stack() which performs stack unwinding under RCU read lock. Without yielding, this creates an unbounded RCU critical section. Add periodic cond_resched() calls within the loop to allow: - RCU grace periods to complete - Other tasks to run - Scheduler to preempt when needed The fix uses need_resched() for immediate response under load, with a batch count of 32 as a guaranteed upper bound to prevent worst-case stalls even under light load. Link: https://lkml.kernel.org/r/20260112103612.627247-1-kartikey406@gmail.com Signed-off-by: Deepanshu Kartikey <kartikey406@gmail.com> Reported-by: syzbot+d8d4c31d40f868eaea30@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=d8d4c31d40f868eaea30 Link: https://lore.kernel.org/all/20260112084723.622910-1-kartikey406@gmail.com/T/ [v1] Suggested-by: Uladzislau Rezki <urezki@gmail.com> Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com> Cc: Hillf Danton <hdanton@sina.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
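The rescheduling pattern described above looks roughly like this sketch; the per-entry shadow-release helper is hypothetical and stands in for the kasan_release_vmalloc() work done per vmap_area:

        unsigned int batch = 0;

        list_for_each_entry(va, &purge_list, list) {
                release_vmalloc_shadow(va);     /* hypothetical per-entry work */
                /* Yield under load, and at least every 32 entries. */
                if (need_resched() || ++batch >= 32) {
                        batch = 0;
                        cond_resched();
                }
        }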
2026-03-04 | mm, page_alloc, thp: prevent reclaim for __GFP_THISNODE THP allocations | Vlastimil Babka | 1 | -0/+14
[ Upstream commit 9c9828d3ead69416d731b1238802af31760c823e ] Since commit cc638f329ef6 ("mm, thp: tweak reclaim/compaction effort of local-only and all-node allocations"), THP page fault allocations have settled on the following scheme (from the commit log): 1. local node only THP allocation with no reclaim, just compaction. 2. for madvised VMA's or when synchronous compaction is enabled always - THP allocation from any node with effort determined by global defrag setting and VMA madvise 3. fallback to base pages on any node Recent customer reports however revealed we have a gap in step 1 above. What we have seen is excessive reclaim due to THP page faults on a NUMA node that's close to its high watermark, while other nodes have plenty of free memory. The problem with step 1 is that it promises no reclaim after the compaction attempt, however reclaim is only avoided for certain compaction outcomes (deferred, or skipped due to insufficient free base pages), and not e.g. when compaction is actually performed but fails (we did see compact_fail vmstat counter increasing). THP page faults can therefore exhibit a zone_reclaim_mode-like behavior, which is not the intention. Thus add a check for __GFP_THISNODE that corresponds to this exact situation and prevents continuing with reclaim/compaction once the initial compaction attempt isn't successful in allocating the page. Note that commit cc638f329ef6 has not introduced this over-reclaim possibility; it appears to exist in some form since commit 2f0799a0ffc0 ("mm, thp: restore node-local hugepage allocations"). Followup commits b39d0ee2632d ("mm, page_alloc: avoid expensive reclaim when compaction may not succeed") and cc638f329ef6 have moved in the right direction, but left the abovementioned gap. Link: https://lkml.kernel.org/r/20251219-costly-noretry-thisnode-fix-v1-1-e1085a4a0c34@suse.cz Fixes: 2f0799a0ffc0 ("mm, thp: restore node-local hugepage allocations") Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Pedro Falcato <pfalcato@suse.de> Acked-by: Zi Yan <ziy@nvidia.com> Cc: Brendan Jackman <jackmanb@google.com> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joshua Hahn <joshua.hahnjy@gmail.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
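In the allocator slowpath, the new bail-out is in the spirit of the fragment below (simplified; the actual check sits in the costly-order handling of __alloc_pages_slowpath() after the initial compaction attempt):

        if (costly_order && (gfp_mask & __GFP_THISNODE)) {
                /*
                 * Local-only THP attempt: one compaction try was promised,
                 * never reclaim. Fall back to base pages or other nodes.
                 */
                goto nopage;
        }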
2026-02-26 | mm/slab: fix false lockdep warning in __kfree_rcu_sheaf() | Harry Yoo | 1 | -0/+20
[ Upstream commit f8b4cd2dad097e4ea5aed3511f42b9eb771e7b19 ] kvfree_call_rcu() can be called while holding a raw_spinlock_t. Since __kfree_rcu_sheaf() may acquire a spinlock_t (which becomes a sleeping lock on PREEMPT_RT) and violate lock nesting rules, kvfree_call_rcu() bypasses the sheaves layer entirely on PREEMPT_RT. However, lockdep still complains about acquiring spinlock_t while holding raw_spinlock_t, even on !PREEMPT_RT where spinlock_t is a spinning lock. This causes a false lockdep warning [1]: ============================= [ BUG: Invalid wait context ] 6.19.0-rc6-next-20260120 #21508 Not tainted ----------------------------- migration/1/23 is trying to lock: ffff8afd01054e98 (&barn->lock){..-.}-{3:3}, at: barn_get_empty_sheaf+0x1d/0xb0 other info that might help us debug this: context-{5:5} 3 locks held by migration/1/23: #0: ffff8afd01fd89a8 (&p->pi_lock){-.-.}-{2:2}, at: __balance_push_cpu_stop+0x3f/0x200 #1: ffffffff9f15c5c8 (rcu_read_lock){....}-{1:3}, at: cpuset_cpus_allowed_fallback+0x27/0x250 #2: ffff8afd1f470be0 ((local_lock_t *)&pcs->lock){+.+.}-{3:3}, at: __kfree_rcu_sheaf+0x52/0x3d0 stack backtrace: CPU: 1 UID: 0 PID: 23 Comm: migration/1 Not tainted 6.19.0-rc6-next-20260120 #21508 PREEMPTLAZY Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014 Stopper: __balance_push_cpu_stop+0x0/0x200 <- balance_push+0x118/0x170 Call Trace: <TASK> __dump_stack+0x22/0x30 dump_stack_lvl+0x60/0x80 dump_stack+0x19/0x24 __lock_acquire+0xd3a/0x28e0 ? __lock_acquire+0x5a9/0x28e0 ? __lock_acquire+0x5a9/0x28e0 ? barn_get_empty_sheaf+0x1d/0xb0 lock_acquire+0xc3/0x270 ? barn_get_empty_sheaf+0x1d/0xb0 ? __kfree_rcu_sheaf+0x52/0x3d0 _raw_spin_lock_irqsave+0x47/0x70 ? barn_get_empty_sheaf+0x1d/0xb0 barn_get_empty_sheaf+0x1d/0xb0 ? __kfree_rcu_sheaf+0x52/0x3d0 __kfree_rcu_sheaf+0x19f/0x3d0 kvfree_call_rcu+0xaf/0x390 set_cpus_allowed_force+0xc8/0xf0 [...] </TASK> This wasn't triggered until sheaves were enabled for all slab caches, since kfree_rcu() wasn't being called with a raw spinlock held for caches with sheaves (vma, maple node). As suggested by Vlastimil Babka, fix this by using a lockdep map with LD_WAIT_CONFIG wait type to tell lockdep that acquiring spinlock_t is valid in this case, as those spinlocks won't be used on PREEMPT_RT. Note that kfree_rcu_sheaf_map should be acquired using _try() variant, otherwise the acquisition of the lockdep map itself will trigger an invalid wait context warning. Reported-by: Paul E. McKenney <paulmck@kernel.org> Closes: https://lore.kernel.org/linux-mm/c858b9af-2510-448b-9ab3-058f7b80dd42@paulmck-laptop [1] Fixes: ec66e0d59952 ("slab: add sheaf support for batching kfree_rcu() operations") Suggested-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Harry Yoo <harry.yoo@oracle.com> Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Sasha Levin <sashal@kernel.org>
2026-02-19 | mm/hugetlb: fix excessive IPI broadcasts when unsharing PMD tables using mmu_gather | David Hildenbrand (Red Hat) | 3 | -59/+122
mmu_gather commit 8ce720d5bd91e9dc16db3604aa4b1bf76770a9a1 upstream. As reported, ever since commit 1013af4f585f ("mm/hugetlb: fix huge_pmd_unshare() vs GUP-fast race") we can end up in some situations where we perform so many IPI broadcasts when unsharing hugetlb PMD page tables that it severely regresses some workloads. In particular, when we fork()+exit(), or when we munmap() a large area backed by many shared PMD tables, we perform one IPI broadcast per unshared PMD table. There are two optimizations to be had: (1) When we process (unshare) multiple such PMD tables, such as during exit(), it is sufficient to send a single IPI broadcast (as long as we respect locking rules) instead of one per PMD table. Locking prevents that any of these PMD tables could get reused before we drop the lock. (2) When we are not the last sharer (> 2 users including us), there is no need to send the IPI broadcast. The shared PMD tables cannot become exclusive (fully unshared) before an IPI will be broadcasted by the last sharer. Concurrent GUP-fast could walk into a PMD table just before we unshared it. It could then succeed in grabbing a page from the shared page table even after munmap() etc succeeded (and supressed an IPI). But there is not difference compared to GUP-fast just sleeping for a while after grabbing the page and re-enabling IRQs. Most importantly, GUP-fast will never walk into page tables that are no-longer shared, because the last sharer will issue an IPI broadcast. (if ever required, checking whether the PUD changed in GUP-fast after grabbing the page like we do in the PTE case could handle this) So let's rework PMD sharing TLB flushing + IPI sync to use the mmu_gather infrastructure so we can implement these optimizations and demystify the code at least a bit. Extend the mmu_gather infrastructure to be able to deal with our special hugetlb PMD table sharing implementation. To make initialization of the mmu_gather easier when working on a single VMA (in particular, when dealing with hugetlb), provide tlb_gather_mmu_vma(). We'll consolidate the handling for (full) unsharing of PMD tables in tlb_unshare_pmd_ptdesc() and tlb_flush_unshared_tables(), and track in "struct mmu_gather" whether we had (full) unsharing of PMD tables. Because locking is very special (concurrent unsharing+reuse must be prevented), we disallow deferring flushing to tlb_finish_mmu() and instead require an explicit earlier call to tlb_flush_unshared_tables(). From hugetlb code, we call huge_pmd_unshare_flush() where we make sure that the expected lock protecting us from concurrent unsharing+reuse is still held. Check with a VM_WARN_ON_ONCE() in tlb_finish_mmu() that tlb_flush_unshared_tables() was properly called earlier. Document it all properly. Notes about tlb_remove_table_sync_one() interaction with unsharing: There are two fairly tricky things: (1) tlb_remove_table_sync_one() is a NOP on architectures without CONFIG_MMU_GATHER_RCU_TABLE_FREE. Here, the assumption is that the previous TLB flush would send an IPI to all relevant CPUs. Careful: some architectures like x86 only send IPIs to all relevant CPUs when tlb->freed_tables is set. The relevant architectures should be selecting MMU_GATHER_RCU_TABLE_FREE, but x86 might not do that in stable kernels and it might have been problematic before this patch. Also, the arch flushing behavior (independent of IPIs) is different when tlb->freed_tables is set. Do we have to enlighten them to also take care of tlb->unshared_tables? 
So far we didn't care, so hopefully we are fine. Of course, we could be setting tlb->freed_tables as well, but that might then unnecessarily flush too much, because the semantics of tlb->freed_tables are a bit fuzzy. This patch changes nothing in this regard. (2) tlb_remove_table_sync_one() is not a NOP on architectures with CONFIG_MMU_GATHER_RCU_TABLE_FREE that actually don't need a sync. Take x86 as an example: in the common case (!pv, !X86_FEATURE_INVLPGB) we still issue IPIs during TLB flushes and don't actually need the second tlb_remove_table_sync_one(). This optimized can be implemented on top of this, by checking e.g., in tlb_remove_table_sync_one() whether we really need IPIs. But as described in (1), it really must honor tlb->freed_tables then to send IPIs to all relevant CPUs. Notes on TLB flushing changes: (1) Flushing for non-shared PMD tables We're converting from flush_hugetlb_tlb_range() to tlb_remove_huge_tlb_entry(). Given that we properly initialize the MMU gather in tlb_gather_mmu_vma() to be hugetlb aware, similar to __unmap_hugepage_range(), that should be fine. (2) Flushing for shared PMD tables We're converting from various things (flush_hugetlb_tlb_range(), tlb_flush_pmd_range(), flush_tlb_range()) to tlb_flush_pmd_range(). tlb_flush_pmd_range() achieves the same that tlb_remove_huge_tlb_entry() would achieve in these scenarios. Note that tlb_remove_huge_tlb_entry() also calls __tlb_remove_tlb_entry(), however that is only implemented on powerpc, which does not support PMD table sharing. Similar to (1), tlb_gather_mmu_vma() should make sure that TLB flushing keeps on working as expected. Further, note that the ptdesc_pmd_pts_dec() in huge_pmd_share() is not a concern, as we are holding the i_mmap_lock the whole time, preventing concurrent unsharing. That ptdesc_pmd_pts_dec() usage will be removed separately as a cleanup later. There are plenty more cleanups to be had, but they have to wait until this is fixed. [david@kernel.org: fix kerneldoc] Link: https://lkml.kernel.org/r/f223dd74-331c-412d-93fc-69e360a5006c@kernel.org Link: https://lkml.kernel.org/r/20251223214037.580860-5-david@kernel.org Fixes: 1013af4f585f ("mm/hugetlb: fix huge_pmd_unshare() vs GUP-fast race") Signed-off-by: David Hildenbrand (Red Hat) <david@kernel.org> Reported-by: "Uschakow, Stanislav" <suschako@amazon.de> Closes: https://lore.kernel.org/all/4d3878531c76479d9f8ca9789dc6485d@amazon.de/ Tested-by: Laurence Oberman <loberman@redhat.com> Acked-by: Harry Yoo <harry.yoo@oracle.com> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Lance Yang <lance.yang@linux.dev> Cc: Liu Shixin <liushixin2@huawei.com> Cc: Oscar Salvador <osalvador@suse.de> Cc: Rik van Riel <riel@surriel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: David Hildenbrand (Arm) <david@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2026-02-11mm, shmem: prevent infinite loop on truncate raceKairui Song1-9/+14
commit 2030dddf95451b4e7a389f052091e7c4b7b274c6 upstream. When truncating a large swap entry, shmem_free_swap() returns 0 when the entry's index doesn't match the given index due to lookup alignment. The failure fallback path checks if the entry crosses the end border and aborts when it happens, so truncate won't erase an unexpected entry or range. But one scenario was ignored. When `index` points to the middle of a large swap entry, and the large swap entry doesn't go across the end border, find_get_entries() will return that large swap entry as the first item in the batch with `indices[0]` equal to `index`. The entry's base index will be smaller than `indices[0]`, so shmem_free_swap() will fail and return 0 due to the "base < index" check. The code will then call shmem_confirm_swap(), get the order, check if it crosses the end boundary (which it doesn't), and retry with the same index. The next iteration will find the same entry again at the same index with the same indices, leading to an infinite loop. Fix this by retrying with a rounded-down index, and aborting if the index is smaller than the truncate range. Link: https://lkml.kernel.org/r/aXo6ltB5iqAKJzY8@KASONG-MC4 Fixes: 809bc86517cc ("mm: shmem: support large folio swap out") Fixes: 8a1968bd997f ("mm/shmem, swap: fix race of truncate and swap entry split") Signed-off-by: Kairui Song <kasong@tencent.com> Reported-by: Chris Mason <clm@meta.com> Closes: https://lore.kernel.org/linux-mm/20260128130336.727049-1-clm@meta.com/ Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Baoquan He <bhe@redhat.com> Cc: Barry Song <baohua@kernel.org> Cc: Chris Li <chrisl@kernel.org> Cc: Hugh Dickins <hughd@google.com> Cc: Kemeng Shi <shikemeng@huaweicloud.com> Cc: Nhat Pham <nphamcs@gmail.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
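[Editor's note] A minimal userspace C sketch of the retry rule described above (illustrative names, not the shmem code): round the index down to the base of the large entry, or abort if that base falls before the start of the truncated range.

#include <stdio.h>

/* Round index down to the base of an order-`order` entry, as ALIGN_DOWN()
 * would; retry from there, or return -1 so the caller aborts when the base
 * falls before the truncate start. */
static long retry_index(long index, int order, long start)
{
    long base = index & ~((1L << order) - 1);   /* base of the large entry */
    return base < start ? -1 : base;
}

int main(void)
{
    /* index 5 points into the middle of an order-2 (4-slot) entry at base 4 */
    printf("retry at %ld\n", retry_index(5, 2, 0));   /* 4: retry at the base */
    printf("retry at %ld\n", retry_index(5, 2, 6));   /* -1: base < start, abort */
    return 0;
}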
2026-02-11mm/slab: Add alloc_tagging_slab_free_hook for memcg_alloc_abort_singleHao Ge1-1/+5
commit e6c53ead2d8fa73206e0a63e9cd9aea6bc929837 upstream. When CONFIG_MEM_ALLOC_PROFILING_DEBUG is enabled, the following warning may be noticed: [ 3959.023862] ------------[ cut here ]------------ [ 3959.023891] alloc_tag was not cleared (got tag for lib/xarray.c:378) [ 3959.023947] WARNING: ./include/linux/alloc_tag.h:155 at alloc_tag_add+0x128/0x178, CPU#6: mkfs.ntfs/113998 [ 3959.023978] Modules linked in: dns_resolver tun brd overlay exfat btrfs blake2b libblake2b xor xor_neon raid6_pq loop sctp ip6_udp_tunnel udp_tunnel ext4 crc16 mbcache jbd2 rfkill sunrpc vfat fat sg fuse nfnetlink sr_mod virtio_gpu cdrom drm_client_lib virtio_dma_buf drm_shmem_helper drm_kms_helper ghash_ce drm sm4 backlight virtio_net net_failover virtio_scsi failover virtio_console virtio_blk virtio_mmio dm_mirror dm_region_hash dm_log dm_multipath dm_mod i2c_dev aes_neon_bs aes_ce_blk [last unloaded: hwpoison_inject] [ 3959.024170] CPU: 6 UID: 0 PID: 113998 Comm: mkfs.ntfs Kdump: loaded Tainted: G W 6.19.0-rc7+ #7 PREEMPT(voluntary) [ 3959.024182] Tainted: [W]=WARN [ 3959.024186] Hardware name: QEMU KVM Virtual Machine, BIOS unknown 2/2/2022 [ 3959.024192] pstate: 604000c5 (nZCv daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--) [ 3959.024199] pc : alloc_tag_add+0x128/0x178 [ 3959.024207] lr : alloc_tag_add+0x128/0x178 [ 3959.024214] sp : ffff80008b696d60 [ 3959.024219] x29: ffff80008b696d60 x28: 0000000000000000 x27: 0000000000000240 [ 3959.024232] x26: 0000000000000000 x25: 0000000000000240 x24: ffff800085d17860 [ 3959.024245] x23: 0000000000402800 x22: ffff0000c0012dc0 x21: 00000000000002d0 [ 3959.024257] x20: ffff0000e6ef3318 x19: ffff800085ae0410 x18: 0000000000000000 [ 3959.024269] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000 [ 3959.024281] x14: 0000000000000000 x13: 0000000000000001 x12: ffff600064101293 [ 3959.024292] x11: 1fffe00064101292 x10: ffff600064101292 x9 : dfff800000000000 [ 3959.024305] x8 : 00009fff9befed6e x7 : ffff000320809493 x6 : 0000000000000001 [ 3959.024316] x5 : ffff000320809490 x4 : ffff600064101293 x3 : ffff800080691838 [ 3959.024328] x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff0000d5bcd640 [ 3959.024340] Call trace: [ 3959.024346] alloc_tag_add+0x128/0x178 (P) [ 3959.024355] __alloc_tagging_slab_alloc_hook+0x11c/0x1a8 [ 3959.024362] kmem_cache_alloc_lru_noprof+0x1b8/0x5e8 [ 3959.024369] xas_alloc+0x304/0x4f0 [ 3959.024381] xas_create+0x1e0/0x4a0 [ 3959.024388] xas_store+0x68/0xda8 [ 3959.024395] __filemap_add_folio+0x5b0/0xbd8 [ 3959.024409] filemap_add_folio+0x16c/0x7e0 [ 3959.024416] __filemap_get_folio_mpol+0x2dc/0x9e8 [ 3959.024424] iomap_get_folio+0xfc/0x180 [ 3959.024435] __iomap_get_folio+0x2f8/0x4b8 [ 3959.024441] iomap_write_begin+0x198/0xc18 [ 3959.024448] iomap_write_iter+0x2ec/0x8f8 [ 3959.024454] iomap_file_buffered_write+0x19c/0x290 [ 3959.024461] blkdev_write_iter+0x38c/0x978 [ 3959.024470] vfs_write+0x4d4/0x928 [ 3959.024482] ksys_write+0xfc/0x1f8 [ 3959.024489] __arm64_sys_write+0x74/0xb0 [ 3959.024496] invoke_syscall+0xd4/0x258 [ 3959.024507] el0_svc_common.constprop.0+0xb4/0x240 [ 3959.024514] do_el0_svc+0x48/0x68 [ 3959.024520] el0_svc+0x40/0xf8 [ 3959.024526] el0t_64_sync_handler+0xa0/0xe8 [ 3959.024533] el0t_64_sync+0x1ac/0x1b0 [ 3959.024540] ---[ end trace 0000000000000000 ]--- When __memcg_slab_post_alloc_hook() fails, there are two different free paths depending on whether size == 1 or size != 1. In the kmem_cache_free_bulk() path, we do call alloc_tagging_slab_free_hook(). 
However, in memcg_alloc_abort_single() we don't, so the above warning will be triggered on the next allocation. Therefore, add alloc_tagging_slab_free_hook() to the memcg_alloc_abort_single() path. Fixes: 9f9796b413d3 ("mm, slab: move memcg charging to post-alloc hook") Cc: stable@vger.kernel.org Suggested-by: Hao Li <hao.li@linux.dev> Signed-off-by: Hao Ge <hao.ge@linux.dev> Reviewed-by: Hao Li <hao.li@linux.dev> Reviewed-by: Suren Baghdasaryan <surenb@google.com> Reviewed-by: Harry Yoo <harry.yoo@oracle.com> Link: https://patch.msgid.link/20260204101401.202762-1-hao.ge@linux.dev Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
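[Editor's note] A toy userspace C model of the failure mode (all names invented for the sketch): an abort path that skips the free hook leaves the allocation tag behind, and the next allocation from the same slot warns, just like the splat above.

#include <stdio.h>
#include <stdbool.h>

/* Toy object with an allocation tag that every free path must clear. */
struct obj { const char *tag; };

static void alloc_hook(struct obj *o, const char *site)
{
    if (o->tag)    /* mirrors the "alloc_tag was not cleared" warning */
        printf("WARNING: alloc_tag was not cleared (got tag for %s)\n", o->tag);
    o->tag = site;
}

static void free_hook(struct obj *o) { o->tag = NULL; }

/* Single-object abort path: before the fix it skipped the free hook. */
static void abort_single(struct obj *o, bool fixed)
{
    if (fixed)
        free_hook(o);
}

int main(void)
{
    struct obj buggy = { 0 }, ok = { 0 };

    alloc_hook(&buggy, "lib/xarray.c:378");
    abort_single(&buggy, false);        /* tag left behind */
    alloc_hook(&buggy, "next caller");  /* warns */

    alloc_hook(&ok, "lib/xarray.c:378");
    abort_single(&ok, true);            /* fixed path clears the tag */
    alloc_hook(&ok, "next caller");     /* silent */
    return 0;
}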
2026-02-06mm/shmem, swap: fix race of truncate and swap entry splitKairui Song1-11/+34
commit 8a1968bd997f45a9b11aefeabdd1232e1b6c7184 upstream. The helper for shmem swap freeing is not handling the order of swap entries correctly. It uses xa_cmpxchg_irq to erase the swap entry, but it gets the entry order before that using xa_get_order without lock protection, and it may get an outdated order value if the entry is split or changed in other ways after the xa_get_order and before the xa_cmpxchg_irq. And besides, the order could grow and be larger than expected, and cause truncation to erase data beyond the end border. For example, if the target entry and following entries are swapped in or freed, and then a large folio is added in place and swapped out using the same entry, the xa_cmpxchg_irq will still succeed, although this is very unlikely to happen. To fix that, open code the Xarray cmpxchg and put the order retrieval and value checking in the same critical section. Also, ensure the order won't exceed the end border; skip the entry if it goes across the border. Skipping large swap entries that cross the end border is safe here. Shmem truncate iterates the range twice: in the first iteration, find_lock_entries already filtered such entries, and shmem will swap in the entries that cross the end border and partially truncate the folio (split the folio or at least zero part of it). So in the second loop here, if we see a swap entry that crosses the end border, it must at least have its content erased already. I observed random swapoff hangs and kernel panics when stress testing ZSWAP with shmem. After applying this patch, all problems are gone. Link: https://lkml.kernel.org/r/20260120-shmem-swap-fix-v3-1-3d33ebfbc057@tencent.com Fixes: 809bc86517cc ("mm: shmem: support large folio swap out") Signed-off-by: Kairui Song <kasong@tencent.com> Reviewed-by: Nhat Pham <nphamcs@gmail.com> Acked-by: Chris Li <chrisl@kernel.org> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Baoquan He <bhe@redhat.com> Cc: Barry Song <baohua@kernel.org> Cc: Hugh Dickins <hughd@google.com> Cc: Kemeng Shi <shikemeng@huaweicloud.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
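[Editor's note] A minimal userspace C model of the locking idea above (compile with -pthread; all names are invented for the sketch, it is not the Xarray code): the order is read in the same critical section as the value check, so it cannot go stale, and entries whose range would cross the end are skipped instead of erased.

#include <pthread.h>
#include <stdio.h>

/* One toy "store" slot: a value plus an order that may change under the lock. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct { long value; int order; } slot = { .value = 42, .order = 2 };

static int erase_if_matches(long expected, long index, long end)
{
    int erased = 0;

    pthread_mutex_lock(&lock);
    if (slot.value == expected) {
        long base = index & ~((1L << slot.order) - 1);
        long last = base + (1L << slot.order) - 1;

        if (last < end) {              /* entirely inside the truncated range */
            slot.value = 0;
            erased = 1;
        }                              /* else: skip, handled by an earlier pass */
    }
    pthread_mutex_unlock(&lock);
    return erased;
}

int main(void)
{
    printf("crossing end: erased=%d\n", erase_if_matches(42, 4, 6)); /* 0: skipped */
    printf("inside range: erased=%d\n", erase_if_matches(42, 4, 8)); /* 1: erased  */
    return 0;
}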
2026-02-06mm/memory-failure: teach kill_accessing_process to accept hugetlb tail page pfnJane Chu1-6/+8
commit 057a6f2632c956483e2b2628477f0fcd1cd8a844 upstream. When a hugetlb folio is being poisoned again, try_memory_failure_hugetlb() passed the head pfn to kill_accessing_process(), which is not right. The precise pfn of the poisoned page should be used in order to determine the precise vaddr as the SIGBUS payload. This issue has already been taken care of in the normal path, that is, hwpoison_user_mappings(), see [1][2]. Furthermore, for [3] to work correctly in the hugetlb repoisoning case, it's essential to inform the VM of the precise poisoned page, not the head page. [1] https://lkml.kernel.org/r/20231218135837.3310403-1-willy@infradead.org [2] https://lkml.kernel.org/r/20250224211445.2663312-1-jane.chu@oracle.com [3] https://lore.kernel.org/lkml/20251116013223.1557158-1-jiaqiyan@google.com/ Link: https://lkml.kernel.org/r/20260120232234.3462258-2-jane.chu@oracle.com Signed-off-by: Jane Chu <jane.chu@oracle.com> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com> Acked-by: Miaohe Lin <linmiaohe@huawei.com> Cc: Chris Mason <clm@meta.com> Cc: David Hildenbrand <david@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Jiaqi Yan <jiaqiyan@google.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Muchun Song <muchun.song@linux.dev> Cc: Naoya Horiguchi <nao.horiguchi@gmail.com> Cc: Oscar Salvador <osalvador@suse.de> Cc: Suren Baghdasaryan <surenb@google.com> Cc: William Roche <william.roche@oracle.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
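[Editor's note] A tiny userspace C sketch of why the precise pfn matters (illustrative numbers and names only): the SIGBUS payload address has to be offset by the distance between the poisoned pfn and the head pfn, otherwise it always points at the start of the huge folio.

#include <stdio.h>

#define PAGE_SHIFT 12UL

/* vaddr of the poisoned 4K page inside a huge mapping that starts at map_start */
static unsigned long poison_vaddr(unsigned long map_start,
                                  unsigned long head_pfn, unsigned long pfn)
{
    return map_start + ((pfn - head_pfn) << PAGE_SHIFT);
}

int main(void)
{
    /* 2M hugetlb folio mapped at 0x700000000000, tail page 5 is the poisoned one */
    printf("sigbus addr: %#lx\n",
           poison_vaddr(0x700000000000UL, 0x100000UL, 0x100005UL));
    return 0;
}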
2026-02-06mm, swap: restore swap_space attr avoid kernel panicrobin.kuo2-3/+2
commit a0f3c0845a4ff68d403c568266d17e9cc553e561 upstream. commit 8b47299a411a ("mm, swap: mark swap address space ro and add context debug check") made the swap address space read-only. It may lead to kernel panic if arch_prepare_to_swap returns a failure under heavy memory pressure as follows, el1_abort+0x40/0x64 el1h_64_sync_handler+0x48/0xcc el1h_64_sync+0x84/0x88 errseq_set+0x4c/0xb8 (P) __filemap_set_wb_err+0x20/0xd0 shrink_folio_list+0xc20/0x11cc evict_folios+0x1520/0x1be4 try_to_shrink_lruvec+0x27c/0x3dc shrink_one+0x9c/0x228 shrink_node+0xb3c/0xeac do_try_to_free_pages+0x170/0x4f0 try_to_free_pages+0x334/0x534 __alloc_pages_direct_reclaim+0x90/0x158 __alloc_pages_slowpath+0x334/0x588 __alloc_frozen_pages_noprof+0x224/0x2fc __folio_alloc_noprof+0x14/0x64 vma_alloc_zeroed_movable_folio+0x34/0x44 do_pte_missing+0xad4/0x1040 handle_mm_fault+0x4a4/0x790 do_page_fault+0x288/0x5f8 do_translation_fault+0x38/0x54 do_mem_abort+0x54/0xa8 Restore swap address space as not ro to avoid the panic. Link: https://lkml.kernel.org/r/20260116062535.306453-2-robin.kuo@mediatek.com Fixes: 8b47299a411a ("mm, swap: mark swap address space ro and add context debug check") Signed-off-by: robin.kuo <robin.kuo@mediatek.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Cc: andrew.yang <andrew.yang@mediatek.com> Cc: AngeloGiaocchino Del Regno <angelogioacchino.delregno@collabora.com> Cc: Baoquan He <bhe@redhat.com> Cc: Barry Song <baohua@kernel.org> Cc: Chinwen Chang <chinwen.chang@mediatek.com> Cc: Chris Li <chrisl@kernel.org> Cc: Kairui Song <kasong@tencent.com> Cc: Kairui Song <ryncsn@gmail.com> Cc: Kemeng Shi <shikemeng@huaweicloud.com> Cc: Mathias Brugger <matthias.bgg@gmail.com> Cc: Nhat Pham <nphamcs@gmail.com> Cc: Qun-wei Lin <Qun-wei.Lin@mediatek.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2026-02-06mm/memory-failure: fix missing ->mf_stats count in hugetlb poisonJane Chu1-37/+56
commit a148a2040191b12b45b82cb29c281cb3036baf90 upstream. When a newly poisoned subpage ends up in an already poisoned hugetlb folio, 'num_poisoned_pages' is incremented, but the per-node ->mf_stats is not. Fix the inconsistency by designating action_result() to update them both. While at it, define __get_huge_page_for_hwpoison() return values in terms of symbol names for better readability. Also rename folio_set_hugetlb_hwpoison() to hugetlb_update_hwpoison() since the function does more than conventional bit setting and three possible return values are expected. Link: https://lkml.kernel.org/r/20260120232234.3462258-1-jane.chu@oracle.com Fixes: 18f41fa616ee ("mm: memory-failure: bump memory failure stats to pglist_data") Signed-off-by: Jane Chu <jane.chu@oracle.com> Acked-by: Miaohe Lin <linmiaohe@huawei.com> Cc: Chris Mason <clm@meta.com> Cc: David Hildenbrand <david@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Jiaqi Yan <jiaqiyan@google.com> Cc: Liam R. Howlett <Liam.Howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Muchun Song <muchun.song@linux.dev> Cc: Naoya Horiguchi <nao.horiguchi@gmail.com> Cc: Oscar Salvador <osalvador@suse.de> Cc: Suren Baghdasaryan <surenb@google.com> Cc: William Roche <william.roche@oracle.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
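[Editor's note] A minimal userspace C sketch of the accounting idea (toy counters, not the memory-failure code): route every update through a single helper so the global and per-node counters can never drift apart.

#include <stdio.h>

static long num_poisoned_pages;   /* global counter */
static long node_mf_stats[2];     /* per-node counters */

/* Single accounting helper: both counters always move together. */
static void account_failure(int nid)
{
    num_poisoned_pages++;
    node_mf_stats[nid]++;
}

int main(void)
{
    account_failure(0);   /* fresh poison */
    account_failure(0);   /* poison landing in an already poisoned folio */
    printf("global=%ld node0=%ld\n", num_poisoned_pages, node_mf_stats[0]);
    return 0;
}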
2026-02-06mm/kfence: randomize the freelist on initializationPimyn Girgis1-4/+19
commit 870ff19251bf3910dda7a7245da826924045fedd upstream. Randomize the KFENCE freelist during pool initialization to make allocation patterns less predictable. This is achieved by shuffling the order in which metadata objects are added to the freelist using get_random_u32_below(). Additionally, ensure the error path correctly calculates the address range to be reset if initialization fails, as the address increment logic has been moved to a separate loop. Link: https://lkml.kernel.org/r/20260120161510.3289089-1-pimyn@google.com Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure") Signed-off-by: Pimyn Girgis <pimyn@google.com> Reviewed-by: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Marco Elver <elver@google.com> Cc: Ernesto Martínez García <ernesto.martinezgarcia@tugraz.at> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Kees Cook <kees@kernel.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
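[Editor's note] One common way to shuffle the insertion order with a uniform "random below" primitive is a Fisher-Yates shuffle; the userspace C sketch below models that with rand() standing in for get_random_u32_below() (names and pool size are illustrative, this is not the KFENCE code).

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define POOL_OBJECTS 8

/* Stand-in for get_random_u32_below(): uniform-ish value in [0, ceil). */
static unsigned int random_below(unsigned int ceil)
{
    return (unsigned int)(rand() % ceil);
}

int main(void)
{
    int order[POOL_OBJECTS];

    srand((unsigned int)time(NULL));
    for (int i = 0; i < POOL_OBJECTS; i++)
        order[i] = i;

    /* Fisher-Yates shuffle of the order in which objects join the freelist. */
    for (int i = POOL_OBJECTS - 1; i > 0; i--) {
        int j = (int)random_below((unsigned int)i + 1);
        int tmp = order[i];

        order[i] = order[j];
        order[j] = tmp;
    }

    printf("freelist insertion order:");
    for (int i = 0; i < POOL_OBJECTS; i++)
        printf(" %d", order[i]);
    printf("\n");
    return 0;
}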
2026-02-06mm/kasan: fix KASAN poisoning in vrealloc()Andrey Ryabinin2-5/+23
commit 9b47d4eea3f7c1f620e95bda1d6221660bde7d7b upstream. A KASAN warning can be triggered when vrealloc() changes the requested size to a value that is not aligned to KASAN_GRANULE_SIZE. ------------[ cut here ]------------ WARNING: CPU: 2 PID: 1 at mm/kasan/shadow.c:174 kasan_unpoison+0x40/0x48 ... pc : kasan_unpoison+0x40/0x48 lr : __kasan_unpoison_vmalloc+0x40/0x68 Call trace: kasan_unpoison+0x40/0x48 (P) vrealloc_node_align_noprof+0x200/0x320 bpf_patch_insn_data+0x90/0x2f0 convert_ctx_accesses+0x8c0/0x1158 bpf_check+0x1488/0x1900 bpf_prog_load+0xd20/0x1258 __sys_bpf+0x96c/0xdf0 __arm64_sys_bpf+0x50/0xa0 invoke_syscall+0x90/0x160 Introduce a dedicated kasan_vrealloc() helper that centralizes KASAN handling for vmalloc reallocations. The helper accounts for KASAN granule alignment when growing or shrinking an allocation and ensures that partial granules are handled correctly. Use this helper from vrealloc_node_align_noprof() to fix poisoning logic. [ryabinin.a.a@gmail.com: move kasan_enabled() check, fix build] Link: https://lkml.kernel.org/r/20260119144509.32767-1-ryabinin.a.a@gmail.com Link: https://lkml.kernel.org/r/20260113191516.31015-1-ryabinin.a.a@gmail.com Fixes: d699440f58ce ("mm: fix vrealloc()'s KASAN poisoning logic") Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com> Reported-by: Maciej Żenczykowski <maze@google.com> Reported-by: <joonki.min@samsung-slsi.corp-partner.google.com> Closes: https://lkml.kernel.org/r/CANP3RGeuRW53vukDy7WDO3FiVgu34-xVJYkfpm08oLO3odYFrA@mail.gmail.com Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com> Tested-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitriy Vyukov <dvyukov@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Uladzislau Rezki <urezki@gmail.com> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
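[Editor's note] A small userspace C sketch of the granule-alignment arithmetic the helper has to get right (the 8-byte granule size and all names here are assumptions for the sketch, not the KASAN implementation): growing unpoisons the added tail, shrinking poisons the removed tail, and the request boundary can land inside a granule, which needs separate handling.

#include <stdio.h>

#define GRANULE 8UL   /* stand-in for KASAN_GRANULE_SIZE; 8 is an assumption */

#define ALIGN_DOWN(x) ((x) & ~(GRANULE - 1))
#define ALIGN_UP(x)   (((x) + GRANULE - 1) & ~(GRANULE - 1))

static void repoison(unsigned long old_size, unsigned long new_size)
{
    unsigned long lo = new_size < old_size ? new_size : old_size;
    unsigned long hi = new_size < old_size ? old_size : new_size;

    printf("%s [%lu, %lu): whole granules [%lu, %lu), valid bytes in partial granule: %lu\n",
           new_size < old_size ? "poison" : "unpoison",
           lo, hi, ALIGN_UP(lo), ALIGN_DOWN(hi), lo % GRANULE);
}

int main(void)
{
    repoison(64, 13);   /* shrink to an unaligned size: partial granule at the new end */
    repoison(13, 64);   /* grow back from the unaligned size */
    return 0;
}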
2026-01-30mm/vma: enforce VMA fork limit on unfaulted,faulted mremap merge tooLorenzo Stoakes1-12/+15
[ Upstream commit 3b617fd3d317bf9dd7e2c233e56eafef05734c9d ] The is_mergeable_anon_vma() function uses vmg->middle as the source VMA. However, when merging a new VMA, this field is NULL. In all cases except mremap(), the new VMA will either be newly established and thus lack an anon_vma, or will be an expansion of an existing VMA, so we do not care about whether the VMA is CoW'd or not. In the case of an mremap(), we can end up in a situation where we can accidentally allow an unfaulted/faulted merge with a VMA that has been forked, violating the general rule that we do not permit this for reasons of anon_vma lock scalability. Now that we are aware that we are copying a VMA and also know which VMA that is, we can explicitly check for this, so do so. This is pertinent since commit 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges"), as this patch permits unfaulted/faulted merges that were previously disallowed, running afoul of this issue. While we are here, vma_had_uncowed_parents() is a confusing name, so make it simple and rename it to vma_is_fork_child(). Link: https://lkml.kernel.org/r/6e2b9b3024ae1220961c8b81d74296d4720eaf2b.1767638272.git.lorenzo.stoakes@oracle.com Fixes: 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges") Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Reviewed-by: Harry Yoo <harry.yoo@oracle.com> Reviewed-by: Jeongjun Park <aha310510@gmail.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: David Hildenbrand (Red Hat) <david@kernel.org> Cc: Jann Horn <jannh@google.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Pedro Falcato <pfalcato@suse.de> Cc: Rik van Riel <riel@surriel.com> Cc: Yeoreum Yun <yeoreum.yun@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> [ with upstream commit 61f67c230a5e backported, this simply applied correctly. Built + tested ] Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2026-01-30mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted mergeLorenzo Stoakes2-18/+56
[ upstream commit 61f67c230a5e7c741c352349ea80147fbe65bfae ] Patch series "mm/vma: fix anon_vma UAF on mremap() faulted, unfaulted merge", v2. Commit 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges") introduced the ability to merge previously unavailable VMA merge scenarios. However, it is handling merges incorrectly when it comes to mremap() of a faulted VMA adjacent to an unfaulted VMA. The issues arise in three cases: 1. Previous VMA unfaulted: copied -----| v |-----------|.............| | unfaulted |(faulted VMA)| |-----------|.............| prev 2. Next VMA unfaulted: copied -----| v |.............|-----------| |(faulted VMA)| unfaulted | |.............|-----------| next 3. Both adjacent VMAs unfaulted: copied -----| v |-----------|.............|-----------| | unfaulted |(faulted VMA)| unfaulted | |-----------|.............|-----------| prev next This series fixes each of these cases, and introduces self tests to assert that the issues are corrected. I also test a further case which was already handled, to assert that my changes continues to correctly handle it: 4. prev unfaulted, next faulted: copied -----| v |-----------|.............|-----------| | unfaulted |(faulted VMA)| faulted | |-----------|.............|-----------| prev next This bug was discovered via a syzbot report, linked to in the first patch in the series, I confirmed that this series fixes the bug. I also discovered that we are failing to check that the faulted VMA was not forked when merging a copied VMA in cases 1-3 above, an issue this series also addresses. I also added self tests to assert that this is resolved (and confirmed that the tests failed prior to this). I also cleaned up vma_expand() as part of this work, renamed vma_had_uncowed_parents() to vma_is_fork_child() as the previous name was unduly confusing, and simplified the comments around this function. This patch (of 4): Commit 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges") introduced the ability to merge previously unavailable VMA merge scenarios. The key piece of logic introduced was the ability to merge a faulted VMA immediately next to an unfaulted VMA, which relies upon dup_anon_vma() to correctly handle anon_vma state. In the case of the merge of an existing VMA (that is changing properties of a VMA and then merging if those properties are shared by adjacent VMAs), dup_anon_vma() is invoked correctly. However in the case of the merge of a new VMA, a corner case peculiar to mremap() was missed. The issue is that vma_expand() only performs dup_anon_vma() if the target (the VMA that will ultimately become the merged VMA): is not the next VMA, i.e. the one that appears after the range in which the new VMA is to be established. A key insight here is that in all other cases other than mremap(), a new VMA merge either expands an existing VMA, meaning that the target VMA will be that VMA, or would have anon_vma be NULL. Specifically: * __mmap_region() - no anon_vma in place, initial mapping. * do_brk_flags() - expanding an existing VMA. * vma_merge_extend() - expanding an existing VMA. * relocate_vma_down() - no anon_vma in place, initial mapping. In addition, we are in the unique situation of needing to duplicate anon_vma state from a VMA that is neither the previous or next VMA being merged with. dup_anon_vma() deals exclusively with the target=unfaulted, src=faulted case. This leaves four possibilities, in each case where the copied VMA is faulted: 1. 
Previous VMA unfaulted: copied -----| v |-----------|.............| | unfaulted |(faulted VMA)| |-----------|.............| prev target = prev, expand prev to cover. 2. Next VMA unfaulted: copied -----| v |.............|-----------| |(faulted VMA)| unfaulted | |.............|-----------| next target = next, expand next to cover. 3. Both adjacent VMAs unfaulted: copied -----| v |-----------|.............|-----------| | unfaulted |(faulted VMA)| unfaulted | |-----------|.............|-----------| prev next target = prev, expand prev to cover. 4. prev unfaulted, next faulted: copied -----| v |-----------|.............|-----------| | unfaulted |(faulted VMA)| faulted | |-----------|.............|-----------| prev next target = prev, expand prev to cover. Essentially equivalent to 3, but with the additional requirement that next's anon_vma is the same as the copied VMA's. This is covered by the existing logic. To account for this very explicitly, we introduce vma_merge_copied_range(), which sets a newly introduced vmg->copied_from field, then invokes vma_merge_new_range() which handles the rest of the logic. We then update the key vma_expand() function to clean up the logic and make what's going on clearer, making the 'remove next' case less special, before invoking dup_anon_vma() unconditionally should we be copying from a VMA. Note that in case 3, the if (remove_next) ... branch will be a no-op, as next=src in this instance and src is unfaulted. In case 4, it won't be, but since in this instance next=src and it is faulted, this will have required tgt=faulted, src=faulted to be compatible, meaning that next->anon_vma == vmg->copied_from->anon_vma, and thus a single dup_anon_vma() of next suffices to copy anon_vma state for the copied-from VMA also. If we are copying from a VMA in a successful merge we must _always_ propagate anon_vma state. This issue can be observed most directly by invoking mremap() to move around a VMA and cause this kind of merge with the MREMAP_DONTUNMAP flag specified. This will result in unlink_anon_vmas() being called after failing to duplicate anon_vma state to the target VMA, which results in the anon_vma itself being freed with folios still possessing dangling pointers to the anon_vma and thus a use-after-free bug. This bug was discovered via a syzbot report, which this patch resolves. We further make a change to update the mergeable anon_vma check to assert the copied-from anon_vma did not have CoW parents, as otherwise dup_anon_vma() might incorrectly propagate CoW ancestors from the next VMA in case 4 despite the anon_vmas being identical for both VMAs.
Link: https://lkml.kernel.org/r/cover.1767638272.git.lorenzo.stoakes@oracle.com Link: https://lkml.kernel.org/r/b7930ad2b1503a657e29fe928eb33061d7eadf5b.1767638272.git.lorenzo.stoakes@oracle.com Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Fixes: 879bca0a2c4f ("mm/vma: fix incorrectly disallowed anonymous VMA merges") Reported-by: syzbot+b165fc2e11771c66d8ba@syzkaller.appspotmail.com Closes: https://lore.kernel.org/all/694a2745.050a0220.19928e.0017.GAE@google.com/ Reported-by: syzbot+5272541ccbbb14e2ec30@syzkaller.appspotmail.com Closes: https://lore.kernel.org/all/694e3dc6.050a0220.35954c.0066.GAE@google.com/ Reviewed-by: Harry Yoo <harry.yoo@oracle.com> Reviewed-by: Jeongjun Park <aha310510@gmail.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: David Hildenbrand (Red Hat) <david@kernel.org> Cc: Jann Horn <jannh@google.com> Cc: Yeoreum Yun <yeoreum.yun@arm.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Liam R. Howlett <Liam.Howlett@oracle.com> Cc: Pedro Falcato <pfalcato@suse.de> Cc: Rik van Riel <riel@surriel.com> Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton <akpm@linux-foundation.org> [ updated to account for lack of sticky VMA flags + built, tested confirmed working ] Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2026-01-30mm/hugetlb: fix two comments related to huge_pmd_unshare()David Hildenbrand (Red Hat)1-16/+8
[ Upstream commit 3937027caecb4f8251e82dd857ba1d749bb5a428 ] Ever since we stopped using the page count to detect shared PMD page tables, these comments are outdated. The only reason we have to flush the TLB early is because once we drop the i_mmap_rwsem, the previously shared page table could get freed (to then get reallocated and used for another purpose). So we really have to flush the TLB before that could happen. So let's simplify the comments a bit. The "If we unshared PMDs, the TLB flush was not recorded in mmu_gather." part introduced in commit a4a118f2eead ("hugetlbfs: flush TLBs correctly after huge_pmd_unshare") was confusing: sure it is recorded in the mmu_gather, otherwise tlb_flush_mmu_tlbonly() wouldn't do anything. So let's drop that comment while at it as well. We'll centralize these comments in a single helper as we rework the code next. Link: https://lkml.kernel.org/r/20251223214037.580860-3-david@kernel.org Fixes: 59d9094df3d7 ("mm: hugetlb: independent PMD page table shared count") Signed-off-by: David Hildenbrand (Red Hat) <david@kernel.org> Reviewed-by: Rik van Riel <riel@surriel.com> Tested-by: Laurence Oberman <loberman@redhat.com> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Acked-by: Oscar Salvador <osalvador@suse.de> Reviewed-by: Harry Yoo <harry.yoo@oracle.com> Cc: Liu Shixin <liushixin2@huawei.com> Cc: Lance Yang <lance.yang@linux.dev> Cc: "Uschakow, Stanislav" <suschako@amazon.de> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2026-01-30mm: fix some typos in mm modulejianyun.gao14-20/+20
[ Upstream commit b6c46600bfb28b4be4e9cff7bad4f2cf357e0fb7 ] Below are some typos in the code comments: intevals ==> intervals addesses ==> addresses unavaliable ==> unavailable facor ==> factor droping ==> dropping exlusive ==> exclusive decription ==> description confict ==> conflict desriptions ==> descriptions otherwize ==> otherwise vlaue ==> value cheching ==> checking exisitng ==> existing modifed ==> modified differenciate ==> differentiate refernece ==> reference permissons ==> permissions indepdenent ==> independent spliting ==> splitting Just fix it. Link: https://lkml.kernel.org/r/20250929002608.1633825-1-jianyungao89@gmail.com Signed-off-by: jianyun.gao <jianyungao89@gmail.com> Reviewed-by: SeongJae Park <sj@kernel.org> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Reviewed-by: Dev Jain <dev.jain@arm.com> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com> Acked-by: Chris Li <chrisl@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Stable-dep-of: 3937027caecb ("mm/hugetlb: fix two comments related to huge_pmd_unshare()") Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2026-01-30migrate: correct lock ordering for hugetlb file foliosMatthew Wilcox (Oracle)1-6/+6
commit b7880cb166ab62c2409046b2347261abf701530e upstream. Syzbot has found a deadlock (analyzed by Lance Yang): 1) Task (5749): Holds folio_lock, then tries to acquire i_mmap_rwsem(read lock). 2) Task (5754): Holds i_mmap_rwsem(write lock), then tries to acquire folio_lock. migrate_pages() -> migrate_hugetlbs() -> unmap_and_move_huge_page() <- Takes folio_lock! -> remove_migration_ptes() -> __rmap_walk_file() -> i_mmap_lock_read() <- Waits for i_mmap_rwsem(read lock)! hugetlbfs_fallocate() -> hugetlbfs_punch_hole() <- Takes i_mmap_rwsem(write lock)! -> hugetlbfs_zero_partial_page() -> filemap_lock_hugetlb_folio() -> filemap_lock_folio() -> __filemap_get_folio <- Waits for folio_lock! The migration path is the one taking locks in the wrong order according to the documentation at the top of mm/rmap.c. So expand the scope of the existing i_mmap_lock to cover the calls to remove_migration_ptes() too. This is (mostly) how it used to be after commit c0d0381ade79. That was removed by 336bf30eb765 for both file & anon hugetlb pages when it should only have been removed for anon hugetlb pages. Link: https://lkml.kernel.org/r/20260109041345.3863089-2-willy@infradead.org Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Fixes: 336bf30eb765 ("hugetlbfs: fix anon huge page migration race") Reported-by: syzbot+2d9c96466c978346b55f@syzkaller.appspotmail.com Link: https://lore.kernel.org/all/68e9715a.050a0220.1186a4.000d.GAE@google.com Debugged-by: Lance Yang <lance.yang@linux.dev> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org> Acked-by: Zi Yan <ziy@nvidia.com> Cc: Alistair Popple <apopple@nvidia.com> Cc: Byungchul Park <byungchul@sk.com> Cc: Gregory Price <gourry@gourry.net> Cc: Jann Horn <jannh@google.com> Cc: Joshua Hahn <joshua.hahnjy@gmail.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Matthew Brost <matthew.brost@intel.com> Cc: Rakie Kim <rakie.kim@sk.com> Cc: Rik van Riel <riel@surriel.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Ying Huang <ying.huang@linux.alibaba.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2026-01-30mm: restore per-memcg proactive reclaim with !CONFIG_NUMAYosry Ahmed2-10/+11
commit 16aca2c98a6fdf071e5a1a765a295995d7c7e346 upstream. Commit 2b7226af730c ("mm/memcg: make memory.reclaim interface generic") moved proactive reclaim logic from memory.reclaim handler to a generic user_proactive_reclaim() helper to be used for per-node proactive reclaim. However, user_proactive_reclaim() was only defined under CONFIG_NUMA, with a stub always returning 0 otherwise. This broke memory.reclaim on !CONFIG_NUMA configs, causing it to report success without actually attempting reclaim. Move the definition of user_proactive_reclaim() outside CONFIG_NUMA, and instead define a stub for __node_reclaim() in the !CONFIG_NUMA case. __node_reclaim() is only called from user_proactive_reclaim() when a write is made to sys/devices/system/node/nodeX/reclaim, which is only defined with CONFIG_NUMA. Link: https://lkml.kernel.org/r/20260116205247.928004-1-yosry.ahmed@linux.dev Fixes: 2b7226af730c ("mm/memcg: make memory.reclaim interface generic") Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev> Acked-by: Shakeel Butt <shakeel.butt@linux.dev> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Axel Rasmussen <axelrasmussen@google.com> Cc: David Hildenbrand <david@kernel.org> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Qi Zheng <zhengqi.arch@bytedance.com> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Wei Xu <weixugc@google.com> Cc: Yuanchu Xie <yuanchu@google.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
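[Editor's note] The fix above is a classic "stub only what truly needs the config option" pattern. Here is a minimal C sketch of that shape (the function names loosely follow those mentioned in the message, but the bodies and the CONFIG_NUMA macro handling are purely illustrative): the generic entry point is built unconditionally, and only the NUMA-only helper gets a stub.

#include <stdio.h>

/* #define CONFIG_NUMA 1 */

#ifdef CONFIG_NUMA
static int node_reclaim(int nid, unsigned long nr_pages)
{
    printf("reclaiming %lu pages from node %d\n", nr_pages, nid);
    return 0;
}
#else
static int node_reclaim(int nid, unsigned long nr_pages)
{
    (void)nid;
    (void)nr_pages;
    return -1;          /* only reachable from the NUMA-only sysfs file */
}
#endif

/* Built on every config: memory.reclaim-style writes always do real work. */
static int user_proactive_reclaim(unsigned long nr_pages, int nid)
{
    if (nid >= 0)
        return node_reclaim(nid, nr_pages);
    printf("memcg-wide reclaim of %lu pages\n", nr_pages);
    return 0;
}

int main(void)
{
    return user_proactive_reclaim(128, -1);
}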
2026-01-30mm/rmap: fix two comments related to huge_pmd_unshare()David Hildenbrand (Red Hat)1-16/+4
commit a8682d500f691b6dfaa16ae1502d990aeb86e8be upstream. PMD page table unsharing no longer touches the refcount of a PMD page table. Also, it is not about dropping the refcount of a "PMD page" but the "PMD page table". Let's just simplify by saying that the PMD page table was unmapped, consequently also unmapping the folio that was mapped into this page. This code should be deduplicated in the future. Link: https://lkml.kernel.org/r/20251223214037.580860-4-david@kernel.org Fixes: 59d9094df3d7 ("mm: hugetlb: independent PMD page table shared count") Signed-off-by: David Hildenbrand (Red Hat) <david@kernel.org> Reviewed-by: Rik van Riel <riel@surriel.com> Tested-by: Laurence Oberman <loberman@redhat.com> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Acked-by: Oscar Salvador <osalvador@suse.de> Cc: Liu Shixin <liushixin2@huawei.com> Cc: Harry Yoo <harry.yoo@oracle.com> Cc: Lance Yang <lance.yang@linux.dev> Cc: "Uschakow, Stanislav" <suschako@amazon.de> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2026-01-30slab: fix kmalloc_nolock() context check for PREEMPT_RTSwaraj Gaikwad1-2/+6
commit 99a3e3a1cfc93b8fe318c0a3a5cfb01f1d4ad53c upstream. On PREEMPT_RT kernels, local_lock becomes a sleeping lock. The current check in kmalloc_nolock() only verifies we're not in NMI or hard IRQ context, but misses the case where preemption is disabled. When a BPF program runs from a tracepoint with preemption disabled (preempt_count > 0), kmalloc_nolock() proceeds to call local_lock_irqsave() which attempts to acquire a sleeping lock, triggering: BUG: sleeping function called from invalid context in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 6128 preempt_count: 2, expected: 0 Fix this by checking !preemptible() on PREEMPT_RT, which directly expresses the constraint that we cannot take a sleeping lock when preemption is disabled. This encompasses the previous checks for NMI and hard IRQ contexts while also catching cases where preemption is disabled. Fixes: af92793e52c3 ("slab: Introduce kmalloc_nolock() and kfree_nolock().") Reported-by: syzbot+b1546ad4a95331b2101e@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=b1546ad4a95331b2101e Signed-off-by: Swaraj Gaikwad <swarajgaikwad1925@gmail.com> Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Harry Yoo <harry.yoo@oracle.com> Link: https://patch.msgid.link/20260113150639.48407-1-swarajgaikwad1925@gmail.co Cc: <stable@vger.kernel.org> Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
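[Editor's note] A small userspace C sketch of the context check described above (the predicates are stand-ins with hard-coded values, not the kernel ones): on PREEMPT_RT any non-preemptible context must avoid the sleeping per-CPU lock, while elsewhere only NMI/hard-IRQ context has to.

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the kernel context predicates; the values are illustrative. */
static bool config_preempt_rt = true;
static bool in_nmi_ctx, in_hardirq_ctx;
static int  preempt_count = 2;      /* e.g. a tracepoint with preemption disabled */

static bool preemptible(void) { return preempt_count == 0; }

static bool must_avoid_sleeping_lock(void)
{
    if (config_preempt_rt)
        return !preemptible();               /* covers NMI/hard-IRQ as well */
    return in_nmi_ctx || in_hardirq_ctx;     /* the old, narrower check */
}

int main(void)
{
    printf("take lockless path: %s\n", must_avoid_sleeping_lock() ? "yes" : "no");
    return 0;
}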
2026-01-23mm/page_alloc: prevent pcp corruption with SMP=nVlastimil Babka1-8/+39
commit 038a102535eb49e10e93eafac54352fcc5d78847 upstream. The kernel test robot has reported: BUG: spinlock trylock failure on UP on CPU#0, kcompactd0/28 lock: 0xffff888807e35ef0, .magic: dead4ead, .owner: kcompactd0/28, .owner_cpu: 0 CPU: 0 UID: 0 PID: 28 Comm: kcompactd0 Not tainted 6.18.0-rc5-00127-ga06157804399 #1 PREEMPT 8cc09ef94dcec767faa911515ce9e609c45db470 Call Trace: <IRQ> __dump_stack (lib/dump_stack.c:95) dump_stack_lvl (lib/dump_stack.c:123) dump_stack (lib/dump_stack.c:130) spin_dump (kernel/locking/spinlock_debug.c:71) do_raw_spin_trylock (kernel/locking/spinlock_debug.c:?) _raw_spin_trylock (include/linux/spinlock_api_smp.h:89 kernel/locking/spinlock.c:138) __free_frozen_pages (mm/page_alloc.c:2973) ___free_pages (mm/page_alloc.c:5295) __free_pages (mm/page_alloc.c:5334) tlb_remove_table_rcu (include/linux/mm.h:? include/linux/mm.h:3122 include/asm-generic/tlb.h:220 mm/mmu_gather.c:227 mm/mmu_gather.c:290) ? __cfi_tlb_remove_table_rcu (mm/mmu_gather.c:289) ? rcu_core (kernel/rcu/tree.c:?) rcu_core (include/linux/rcupdate.h:341 kernel/rcu/tree.c:2607 kernel/rcu/tree.c:2861) rcu_core_si (kernel/rcu/tree.c:2879) handle_softirqs (arch/x86/include/asm/jump_label.h:36 include/trace/events/irq.h:142 kernel/softirq.c:623) __irq_exit_rcu (arch/x86/include/asm/jump_label.h:36 kernel/softirq.c:725) irq_exit_rcu (kernel/softirq.c:741) sysvec_apic_timer_interrupt (arch/x86/kernel/apic/apic.c:1052) </IRQ> <TASK> RIP: 0010:_raw_spin_unlock_irqrestore (arch/x86/include/asm/preempt.h:95 include/linux/spinlock_api_smp.h:152 kernel/locking/spinlock.c:194) free_pcppages_bulk (mm/page_alloc.c:1494) drain_pages_zone (include/linux/spinlock.h:391 mm/page_alloc.c:2632) __drain_all_pages (mm/page_alloc.c:2731) drain_all_pages (mm/page_alloc.c:2747) kcompactd (mm/compaction.c:3115) kthread (kernel/kthread.c:465) ? __cfi_kcompactd (mm/compaction.c:3166) ? __cfi_kthread (kernel/kthread.c:412) ret_from_fork (arch/x86/kernel/process.c:164) ? __cfi_kthread (kernel/kthread.c:412) ret_from_fork_asm (arch/x86/entry/entry_64.S:255) </TASK> Matthew has analyzed the report and identified that in drain_pages_zone() we are in a section protected by spin_lock(&pcp->lock) and then get an interrupt that attempts spin_trylock() on the same lock. The code is designed to work this way without disabling IRQs and occasionally fail the trylock with a fallback. However, the SMP=n spinlock implementation assumes spin_trylock() will always succeed, and thus it's normally a no-op. Here the enabled lock debugging catches the problem, but otherwise it could cause a corruption of the pcp structure. The problem has been introduced by commit 574907741599 ("mm/page_alloc: leave IRQs enabled for per-cpu page allocations"). The pcp locking scheme recognizes the need for disabling IRQs to prevent nesting spin_trylock() sections on SMP=n, but the need to prevent the nesting in spin_lock() has not been recognized. Fix it by introducing local wrappers that change the spin_lock() to spin_lock_irqsave() with SMP=n and use them in all places that do spin_lock(&pcp->lock).
[vbabka@suse.cz: add pcp_ prefix to the spin_lock_irqsave wrappers, per Steven] Link: https://lkml.kernel.org/r/20260105-fix-pcp-up-v1-1-5579662d2071@suse.cz Fixes: 574907741599 ("mm/page_alloc: leave IRQs enabled for per-cpu page allocations") Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Reported-by: kernel test robot <oliver.sang@intel.com> Closes: https://lore.kernel.org/oe-lkp/202512101320.e2f2dd6f-lkp@intel.com Analyzed-by: Matthew Wilcox <willy@infradead.org> Link: https://lore.kernel.org/all/aUW05pyc9nZkvY-1@casper.infradead.org/ Acked-by: Mel Gorman <mgorman@techsingularity.net> Cc: Brendan Jackman <jackmanb@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Zi Yan <ziy@nvidia.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
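[Editor's note] The following is a minimal C sketch of the wrapper shape described above (the pcp_ naming follows the bracketed note, but the bodies just print what the real wrappers would do and the CONFIG_SMP switch is illustrative): on SMP the IRQ-context trylock can simply fail and fall back, so the plain lock suffices; on UP the trylock is a no-op that "succeeds", so the only safe option is to disable IRQs around the section.

#include <stdio.h>

/* #define CONFIG_SMP 1 */

#ifdef CONFIG_SMP
#define pcp_spin_lock_irqsave(flags)      ((void)(flags), puts("spin_lock"))
#define pcp_spin_unlock_irqrestore(flags) ((void)(flags), puts("spin_unlock"))
#else
#define pcp_spin_lock_irqsave(flags)      ((void)(flags), puts("spin_lock_irqsave"))
#define pcp_spin_unlock_irqrestore(flags) ((void)(flags), puts("spin_unlock_irqrestore"))
#endif

int main(void)
{
    unsigned long flags = 0;

    pcp_spin_lock_irqsave(flags);     /* e.g. the drain_pages_zone() section */
    /* ...free_pcppages_bulk()-style work on the pcp lists would go here... */
    pcp_spin_unlock_irqrestore(flags);
    return 0;
}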
2026-01-23mm/page_alloc: batch page freeing in decay_pcp_highJoshua Hahn1-3/+6
commit fc4b909c368f3a7b08c895dd5926476b58e85312 upstream. It is possible for pcp->count - pcp->high to exceed pcp->batch by a lot. When this happens, we should perform batching to ensure that free_pcppages_bulk isn't called with too many pages to free at once and starve out other threads that need the pcp or zone lock. Since we are still only freeing the difference between the initial pcp->count and pcp->high values, there should be no change to how many pages are freed. Link: https://lkml.kernel.org/r/20251014145011.3427205-3-joshua.hahnjy@gmail.com Signed-off-by: Joshua Hahn <joshua.hahnjy@gmail.com> Suggested-by: Chris Mason <clm@fb.com> Suggested-by: Andrew Morton <akpm@linux-foundation.org> Co-developed-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Cc: Brendan Jackman <jackmanb@google.com> Cc: "Kirill A. Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: SeongJae Park <sj@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Zi Yan <ziy@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Stable-dep-of: 038a102535eb ("mm/page_alloc: prevent pcp corruption with SMP=n") Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
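[Editor's note] A small userspace C sketch of the batching loop described above (toy numbers, not the page allocator code): the total number of pages freed is unchanged, but no single lock hold frees more than one batch.

#include <stdio.h>

/* Free at most `batch` pages per iteration so other lockers can make progress. */
static void decay_high(long *count, long high, long batch)
{
    while (*count > high) {
        long todo = *count - high;

        if (todo > batch)
            todo = batch;
        /* lock; free_pcppages_bulk(todo)-style work; unlock would go here */
        printf("freeing %ld pages\n", todo);
        *count -= todo;
    }
}

int main(void)
{
    long count = 2500;

    decay_high(&count, 100, 512);   /* 2400 pages freed in chunks of at most 512 */
    return 0;
}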
2026-01-23mm/page_alloc/vmstat: simplify refresh_cpu_vm_stats change detectionJoshua Hahn2-17/+19
commit 0acc67c4030c39f39ac90413cc5d0abddd3a9527 upstream. Patch series "mm/page_alloc: Batch callers of free_pcppages_bulk", v5. Motivation & Approach ===================== While testing workloads with high sustained memory pressure on large machines in the Meta fleet (1Tb memory, 316 CPUs), we saw an unexpectedly high number of softlockups. Further investigation showed that the zone lock in free_pcppages_bulk was being held for a long time, and was called to free 2k+ pages over 100 times just during boot. This causes starvation in other processes for the zone lock, which can lead to the system stalling as multiple threads cannot make progress without the locks. We can see these issues manifesting as warnings: [ 4512.591979] rcu: INFO: rcu_sched self-detected stall on CPU [ 4512.604370] rcu: 20-....: (9312 ticks this GP) idle=a654/1/0x4000000000000000 softirq=309340/309344 fqs=5426 [ 4512.626401] rcu: hardirqs softirqs csw/system [ 4512.638793] rcu: number: 0 145 0 [ 4512.651177] rcu: cputime: 30 10410 174 ==> 10558(ms) [ 4512.666657] rcu: (t=21077 jiffies g=783665 q=1242213 ncpus=316) While these warnings don't indicate a crash or a kernel panic, they do point to the underlying issue of lock contention. To prevent starvation in both locks, batch the freeing of pages using pcp->batch. Because free_pcppages_bulk is called with the pcp lock and acquires the zone lock, relinquishing and reacquiring the locks are only effective when both of them are broken together (unless the system was built with queued spinlocks). Thus, instead of modifying free_pcppages_bulk to break both locks, batch the freeing from its callers instead. A similar fix has been implemented in the Meta fleet, and we have seen significantly less softlockups. Testing ======= The following are a few synthetic benchmarks, made on three machines. The first is a large machine with 754GiB memory and 316 processors. The second is a relatively smaller machine with 251GiB memory and 176 processors. The third and final is the smallest of the three, which has 62GiB memory and 36 processors. On all machines, I kick off a kernel build with -j$(nproc). Negative delta is better (faster compilation). Large machine (754GiB memory, 316 processors) make -j$(nproc) +------------+---------------+-----------+ | Metric (s) | Variation (%) | Delta(%) | +------------+---------------+-----------+ | real | 0.8070 | - 1.4865 | | user | 0.2823 | + 0.4081 | | sys | 5.0267 | -11.8737 | +------------+---------------+-----------+ Medium machine (251GiB memory, 176 processors) make -j$(nproc) +------------+---------------+----------+ | Metric (s) | Variation (%) | Delta(%) | +------------+---------------+----------+ | real | 0.2806 | +0.0351 | | user | 0.0994 | +0.3170 | | sys | 0.6229 | -0.6277 | +------------+---------------+----------+ Small machine (62GiB memory, 36 processors) make -j$(nproc) +------------+---------------+----------+ | Metric (s) | Variation (%) | Delta(%) | +------------+---------------+----------+ | real | 0.1503 | -2.6585 | | user | 0.0431 | -2.2984 | | sys | 0.1870 | -3.2013 | +------------+---------------+----------+ Here, variation is the coefficient of variation, i.e. standard deviation / mean. Based on these results, it seems like there are varying degrees to how much lock contention this reduces. For the largest and smallest machines that I ran the tests on, it seems like there is quite some significant reduction. There is also some performance increases visible from userspace. 
Interestingly, the performance gains don't scale with the size of the machine, but rather there seems to be a dip in the gain for the medium-sized machine. One possible theory is that because the high watermark depends on both memory and the number of local CPUs, what impacts zone contention the most is not these individual values, but rather the ratio of mem:processors. This patch (of 5): Currently, refresh_cpu_vm_stats returns an int, indicating how many changes were made during its updates. Using this information, callers like vmstat_update can heuristically determine if more work will be done in the future. However, all of refresh_cpu_vm_stats's callers either (a) ignore the result, only caring about performing the updates, or (b) only care about whether changes were made, but not *how many* changes were made. Simplify the code by returning a bool instead, indicating whether updates were made. In addition, simplify fold_diff and decay_pcp_high to return a bool for the same reason. Link: https://lkml.kernel.org/r/20251014145011.3427205-1-joshua.hahnjy@gmail.com Link: https://lkml.kernel.org/r/20251014145011.3427205-2-joshua.hahnjy@gmail.com Signed-off-by: Joshua Hahn <joshua.hahnjy@gmail.com> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Reviewed-by: SeongJae Park <sj@kernel.org> Cc: Brendan Jackman <jackmanb@google.com> Cc: Chris Mason <clm@fb.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: "Kirill A. Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Zi Yan <ziy@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Stable-dep-of: 038a102535eb ("mm/page_alloc: prevent pcp corruption with SMP=n") Signed-off-by: Sasha Levin <sashal@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2026-01-23iommu/sva: invalidate stale IOTLB entries for kernel address spaceLu Baolu1-0/+2
commit e37d5a2d60a338c5917c45296bac65da1382eda5 upstream. Introduce a new IOMMU interface to flush IOTLB paging cache entries for the CPU kernel address space. This interface is invoked from the x86 architecture code that manages combined user and kernel page tables, specifically before any kernel page table page is freed and reused. This addresses the main issue with vfree() which is a common occurrence and can be triggered by unprivileged users. While this resolves the primary problem, it doesn't address some extremely rare case related to memory unplug of memory that was present as reserved memory at boot, which cannot be triggered by unprivileged users. The discussion can be found at the link below. Enable SVA on x86 architecture since the IOMMU can now receive notification to flush the paging cache before freeing the CPU kernel page table pages. Link: https://lkml.kernel.org/r/20251022082635.2462433-9-baolu.lu@linux.intel.com Link: https://lore.kernel.org/linux-iommu/04983c62-3b1d-40d4-93ae-34ca04b827e5@intel.com/ Co-developed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Suggested-by: Jann Horn <jannh@google.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Vasant Hegde <vasant.hegde@amd.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Cc: Alistair Popple <apopple@nvidia.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Betkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jean-Philippe Brucker <jean-philippe@linaro.org> Cc: Joerg Roedel <joro@8bytes.org> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Mike Rapoport (Microsoft) <rppt@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Robin Murohy <robin.murphy@arm.com> Cc: Thomas Gleinxer <tglx@linutronix.de> Cc: "Uladzislau Rezki (Sony)" <urezki@gmail.com> Cc: Vinicius Costa Gomes <vinicius.gomes@intel.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Will Deacon <will@kernel.org> Cc: Yi Lai <yi1.lai@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2026-01-23mm: introduce deferred freeing for kernel page tablesDave Hansen2-0/+40
commit 5ba2f0a1556479638ac11a3c201421f5515e89f5 upstream. This introduces a conditional asynchronous mechanism, enabled by CONFIG_ASYNC_KERNEL_PGTABLE_FREE. When enabled, this mechanism defers the freeing of pages that are used as page tables for kernel address mappings. These pages are now queued to a work struct instead of being freed immediately. This deferred freeing allows for batch-freeing of page tables, providing a safe context for performing a single expensive operation (TLB flush) for a batch of kernel page tables instead of performing that expensive operation for each page table. Link: https://lkml.kernel.org/r/20251022082635.2462433-8-baolu.lu@linux.intel.com Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: David Hildenbrand <david@redhat.com> Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Cc: Alistair Popple <apopple@nvidia.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Betkov <bp@alien8.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jann Horn <jannh@google.com> Cc: Jean-Philippe Brucker <jean-philippe@linaro.org> Cc: Joerg Roedel <joro@8bytes.org> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Robin Murohy <robin.murphy@arm.com> Cc: Thomas Gleinxer <tglx@linutronix.de> Cc: "Uladzislau Rezki (Sony)" <urezki@gmail.com> Cc: Vasant Hegde <vasant.hegde@amd.com> Cc: Vinicius Costa Gomes <vinicius.gomes@intel.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Will Deacon <will@kernel.org> Cc: Yi Lai <yi1.lai@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
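[Editor's note] A minimal userspace C model of the deferred batch-free idea described above (a toy queue and counters, not the kernel mechanism): pages pile up in a queue, and a single "expensive flush" covers the whole batch before everything is released.

#include <stdio.h>

#define MAX_DEFERRED 16

static void *deferred[MAX_DEFERRED];  /* deferred-free queue */
static int nr_deferred;
static int flushes;

static void queue_table_page(void *page)
{
    deferred[nr_deferred++] = page;   /* real code would schedule a work item */
}

/* "Work item": one flush for the whole batch, then release every page. */
static void free_deferred_tables(void)
{
    if (!nr_deferred)
        return;
    flushes++;                        /* the single expensive operation per batch */
    for (int i = 0; i < nr_deferred; i++)
        deferred[i] = NULL;           /* the actual free would happen here */
    nr_deferred = 0;
}

int main(void)
{
    int dummy[3];

    for (int i = 0; i < 3; i++)
        queue_table_page(&dummy[i]);
    free_deferred_tables();
    printf("flushes: %d (one per batch, not one per table)\n", flushes);
    return 0;
}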
2026-01-23mm/damon/sysfs: cleanup attrs subdirs on context dir setup failureSeongJae Park1-2/+3
commit 9814cc832b88bd040fc2a1817c2b5469d0f7e862 upstream. When a context DAMON sysfs directory setup fails after setup of the attrs/ directory, subdirectories of the attrs/ directory are not cleaned up. As a result, the DAMON sysfs interface is nearly broken until the system reboots, and the memory for the unremoved directory is leaked. Clean up the directories under such failures. Link: https://lkml.kernel.org/r/20251225023043.18579-3-sj@kernel.org Fixes: c951cd3b8901 ("mm/damon: implement a minimal stub for sysfs-based DAMON interface") Signed-off-by: SeongJae Park <sj@kernel.org> Cc: chongjiapeng <jiapeng.chong@linux.alibaba.com> Cc: <stable@vger.kernel.org> # 5.18.x Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2026-01-23mm/damon/sysfs: cleanup intervals subdirs on attrs dir setup failureSeongJae Park1-1/+3
commit a24ca8ebb0cd5ea07a1462b77be0f0823c40f319 upstream. Patch series "mm/damon/sysfs: free setup failures generated zombie sub-sub dirs". Some DAMON sysfs directory setup functions generate sub and sub-sub directories. For example, 'monitoring_attrs/' directory setup creates 'intervals/' and 'intervals/intervals_goal/' directories under the 'monitoring_attrs/' directory. When such sub-sub directories are successfully made but the follow-up setup fails, the setup function should recursively clean up the subdirectories. However, such setup functions only drop the reference counters of the direct subdirectories. As a result, under certain setup failures, the sub-sub directories keep having non-zero reference counters. This means the directories cannot be removed and linger like zombies, and the memory for the directories cannot be freed. The user impact of this issue is limited due to the following reasons. When the issue happens, the zombie directories still occupy the path. Hence attempts to generate the directories again will fail, without an additional memory leak. This means the memory leak has a limited upper bound. Nonetheless, this also implies that controlling DAMON with a feature that requires the setup-failed sysfs files will be impossible until the system reboots. Also, the setup operations are quite simple. Such failures would hence only rarely happen, and are difficult to artificially trigger. This patch (of 4): When attrs/ DAMON sysfs directory setup fails after setup of the intervals/ directory, the intervals/intervals_goal/ directory is not cleaned up. As a result, the DAMON sysfs interface is nearly broken until the system reboots, and the memory for the unremoved directory is leaked. Clean up the directory under such failures. Link: https://lkml.kernel.org/r/20251225023043.18579-1-sj@kernel.org Link: https://lkml.kernel.org/r/20251225023043.18579-2-sj@kernel.org Fixes: 8fbbcbeaafeb ("mm/damon/sysfs: implement intervals tuning goal directory") Signed-off-by: SeongJae Park <sj@kernel.org> Cc: chongjiapeng <jiapeng.chong@linux.alibaba.com> Cc: <stable@vger.kernel.org> # 6.15.x Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
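[Editor's note] The fix pattern here is the usual error-path unwind: every directory created so far must be torn down, deepest first, when a later setup step fails. A minimal userspace C sketch of that shape (toy make_dir/remove_dir helpers, not the DAMON kobject code):

#include <stdio.h>

static int make_dir(const char *name)    { printf("mkdir %s\n", name); return 0; }
static void remove_dir(const char *name) { printf("rmdir %s\n", name); }

static int setup_attrs(int fail_late)
{
    int err;

    err = make_dir("attrs");
    if (err)
        return err;
    err = make_dir("attrs/intervals");
    if (err)
        goto out_attrs;
    err = make_dir("attrs/intervals/intervals_goal");
    if (err)
        goto out_intervals;

    err = fail_late ? -1 : 0;           /* a later setup step fails */
    if (err)
        goto out_goal;                  /* unwind everything, deepest first */
    return 0;

out_goal:
    remove_dir("attrs/intervals/intervals_goal");
out_intervals:
    remove_dir("attrs/intervals");
out_attrs:
    remove_dir("attrs");
    return err;
}

int main(void)
{
    return setup_attrs(1) ? 1 : 0;
}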
2026-01-23mm/damon/sysfs-scheme: cleanup access_pattern subdirs on scheme dir setup ↵SeongJae Park1-2/+3
failure commit 392b3d9d595f34877dd745b470c711e8ebcd225c upstream. When a DAMOS-scheme DAMON sysfs directory setup fails after setup of the access_pattern/ directory, subdirectories of the access_pattern/ directory are not cleaned up. As a result, the DAMON sysfs interface is nearly broken until the system reboots, and the memory for the unremoved directory is leaked. Clean up the directories under such failures. Link: https://lkml.kernel.org/r/20251225023043.18579-5-sj@kernel.org Fixes: 9bbb820a5bd5 ("mm/damon/sysfs: support DAMOS quotas") Signed-off-by: SeongJae Park <sj@kernel.org> Cc: chongjiapeng <jiapeng.chong@linux.alibaba.com> Cc: <stable@vger.kernel.org> # 5.18.x Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2026-01-23mm/damon/sysfs-scheme: cleanup quotas subdirs on scheme dir setup failureSeongJae Park1-2/+3
commit dc7e1d75fd8c505096d0cddeca9e2efb2b55aaf9 upstream. When a DAMOS-scheme DAMON sysfs directory setup fails after setup of the quotas/ directory, subdirectories of the quotas/ directory are not cleaned up. As a result, the DAMON sysfs interface is nearly broken until the system reboots, and the memory for the unremoved directory is leaked. Clean up the directories under such failures. Link: https://lkml.kernel.org/r/20251225023043.18579-4-sj@kernel.org Fixes: 1b32234ab087 ("mm/damon/sysfs: support DAMOS watermarks") Signed-off-by: SeongJae Park <sj@kernel.org> Cc: chongjiapeng <jiapeng.chong@linux.alibaba.com> Cc: <stable@vger.kernel.org> # 5.18.x Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>