| field | value | date |
|---|---|---|
| author | Matthew Wilcox (Oracle) <willy@infradead.org> | 2024-12-11 20:25:37 +0000 |
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2024-12-27 13:58:53 +0100 |
| commit | 90ae5b7a1c526b535ae1645b7c0f6d18e7bf944e | |
| tree | e9d6faf30dec8b1fdb20db2acbdc551bab624d41 | |
| parent | 0b5b0b65561b34e6e360de317e4bcd031bfabf42 | |
vmalloc: fix accounting with i915
commit a2e740e216f5bf49ccb83b6d490c72a340558a43 upstream.
If the caller of vmap() specifies VM_MAP_PUT_PAGES (currently only the
i915 driver), we will decrement nr_vmalloc_pages and MEMCG_VMALLOC in
vfree(). These counters are incremented by vmalloc() but not by vmap(),
so this causes an underflow. Check the VM_MAP_PUT_PAGES flag before
decrementing either counter.
Link: https://lkml.kernel.org/r/20241211202538.168311-1-willy@infradead.org
Fixes: b944afc9d64d ("mm: add a VM_MAP_PUT_PAGES flag for vmap")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Reviewed-by: Balbir Singh <balbirs@nvidia.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'mm')
| -rw-r--r-- | mm/vmalloc.c | 6 |
1 file changed, 4 insertions(+), 2 deletions(-)
```diff
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 0148be0814af..5c2b5f93cb66 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2851,7 +2851,8 @@ void vfree(const void *addr)
 		struct page *page = vm->pages[i];
 
 		BUG_ON(!page);
-		mod_memcg_page_state(page, MEMCG_VMALLOC, -1);
+		if (!(vm->flags & VM_MAP_PUT_PAGES))
+			mod_memcg_page_state(page, MEMCG_VMALLOC, -1);
 		/*
 		 * High-order allocs for huge vmallocs are split, so
 		 * can be freed as an array of order-0 allocations
@@ -2859,7 +2860,8 @@ void vfree(const void *addr)
 		__free_page(page);
 		cond_resched();
 	}
-	atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
+	if (!(vm->flags & VM_MAP_PUT_PAGES))
+		atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
 	kvfree(vm->pages);
 	kfree(vm);
 }
```
