| author | Matthew Wilcox (Oracle) <willy@infradead.org> | 2024-12-11 20:25:37 +0000 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2024-12-27 14:02:13 +0100 |
| commit | ad7c9f1f4322ba9a9f5c5c37dcebc587eeba89f1 | |
| tree | 3cc7650ca2782fd957cfec17775f2186625f24a4 /mm/vmalloc.c | |
| parent | 6fb92e9a52e3feae309a213950f21dfcd1eb0b40 | |
vmalloc: fix accounting with i915
commit a2e740e216f5bf49ccb83b6d490c72a340558a43 upstream.
If the caller of vmap() specifies VM_MAP_PUT_PAGES (currently only the
i915 driver), we will decrement nr_vmalloc_pages and MEMCG_VMALLOC in
vfree(). These counters are incremented by vmalloc() but not by vmap(),
so this will cause an underflow. Check the VM_MAP_PUT_PAGES flag before
decrementing either counter.
Link: https://lkml.kernel.org/r/20241211202538.168311-1-willy@infradead.org
Fixes: b944afc9d64d ("mm: add a VM_MAP_PUT_PAGES flag for vmap")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Reviewed-by: Balbir Singh <balbirs@nvidia.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'mm/vmalloc.c')
| -rw-r--r-- | mm/vmalloc.c | 6 |
1 files changed, 4 insertions, 2 deletions
```diff
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 0161cb4391e1..3f9255dfacb0 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3369,7 +3369,8 @@ void vfree(const void *addr)
 			struct page *page = vm->pages[i];
 
 			BUG_ON(!page);
-			mod_memcg_page_state(page, MEMCG_VMALLOC, -1);
+			if (!(vm->flags & VM_MAP_PUT_PAGES))
+				mod_memcg_page_state(page, MEMCG_VMALLOC, -1);
 			/*
 			 * High-order allocs for huge vmallocs are split, so
 			 * can be freed as an array of order-0 allocations
@@ -3377,7 +3378,8 @@ void vfree(const void *addr)
 		__free_page(page);
 		cond_resched();
 	}
-	atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
+	if (!(vm->flags & VM_MAP_PUT_PAGES))
+		atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
 	kvfree(vm->pages);
 	kfree(vm);
 }
```
