| author | Vlastimil Babka <vbabka@suse.cz> | 2023-03-29 10:48:39 +0200 |
|---|---|---|
| committer | Vlastimil Babka <vbabka@suse.cz> | 2023-03-29 10:48:39 +0200 |
| commit | ed4cdfbeb8735c36a2e31009866dfc2dfa26db3f (patch) | |
| tree | 714a066c2656d0cadfba415a2cf7ebb5e9b84afe /fs/proc/page.c | |
| parent | 8f0293bf7aeb9339f724e306e7a0a741f633c738 (diff) | |
| parent | ae65a5211d90e54ae604012ce9cf234c48780929 (diff) | |
Merge branch 'slab/for-6.4/slob-removal' into slab/for-next
A series by myself to remove CONFIG_SLOB:
The SLOB allocator was deprecated in 6.2 and there have been no
complaints so far, so let's proceed with the removal.
Besides the code cleanup, the main immediate benefit will be allowing
the kfree() family of functions to work on kmem_cache_alloc() objects,
which was incompatible with SLOB. This includes kfree_rcu(), which had
no kmem_cache_free_rcu() counterpart; such a counterpart should no
longer be necessary.
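For illustration only (not part of the series): a minimal sketch of what this enables, where objects obtained from kmem_cache_alloc() may now be released with kfree() or kfree_rcu(). The cache and struct names below are made up.

```c
/*
 * Illustrative sketch, not from the series: with SLOB removed, kfree()
 * and kfree_rcu() accept objects allocated from a kmem_cache.
 * "demo_cache" and "struct demo_obj" are made-up names.
 */
#include <linux/errno.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_obj {
	struct rcu_head rcu;
	int payload;
};

static struct kmem_cache *demo_cache;

static int demo(void)
{
	struct demo_obj *obj;

	demo_cache = kmem_cache_create("demo_cache", sizeof(*obj), 0, 0, NULL);
	if (!demo_cache)
		return -ENOMEM;

	obj = kmem_cache_alloc(demo_cache, GFP_KERNEL);
	if (obj)
		kfree(obj);		/* previously required kmem_cache_free() */

	obj = kmem_cache_alloc(demo_cache, GFP_KERNEL);
	if (obj)
		kfree_rcu(obj, rcu);	/* freed after an RCU grace period */

	rcu_barrier();			/* wait for the deferred free */
	kmem_cache_destroy(demo_cache);
	return 0;
}
```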
Otherwise it's all straightforward removal. After this series, 'git grep
slob' or 'git grep SLOB' will have 3 remaining relevant hits in non-mm
code:
- tomoyo - patch submitted and carried there, doesn't need to wait for
this series
- skbuff - patch to cleanup now-unnecessary #ifdefs will be posted to
netdev after this is merged, as requested to avoid conflicts
- ftrace ring_buffer - patch to remove obsolete comment is carried there
The rest of the 'git grep SLOB' hits are false positives or intentional
(CREDITS, and the mm/Kconfig SLUB_TINY description, kept to help those
who happen to migrate later).
Diffstat (limited to 'fs/proc/page.c')
| -rw-r--r-- | fs/proc/page.c | 9 |
1 file changed, 4 insertions, 5 deletions
diff --git a/fs/proc/page.c b/fs/proc/page.c
index 6249c347809a..195b077c0fac 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -125,7 +125,7 @@ u64 stable_page_flags(struct page *page)
 	/*
 	 * pseudo flags for the well known (anonymous) memory mapped pages
 	 *
-	 * Note that page->_mapcount is overloaded in SLOB/SLUB/SLQB, so the
+	 * Note that page->_mapcount is overloaded in SLAB, so the
 	 * simple test in page_mapped() is not enough.
 	 */
 	if (!PageSlab(page) && page_mapped(page))
@@ -165,9 +165,8 @@ u64 stable_page_flags(struct page *page)
 	/*
-	 * Caveats on high order pages: page->_refcount will only be set
-	 * -1 on the head page; SLUB/SLQB do the same for PG_slab;
-	 * SLOB won't set PG_slab at all on compound pages.
+	 * Caveats on high order pages: PG_buddy and PG_slab will only be set
+	 * on the head page.
 	 */
 	if (PageBuddy(page))
 		u |= 1 << KPF_BUDDY;
@@ -185,7 +184,7 @@ u64 stable_page_flags(struct page *page)
 	u |= kpf_copy_bit(k, KPF_LOCKED,	PG_locked);
 
 	u |= kpf_copy_bit(k, KPF_SLAB,		PG_slab);
-	if (PageTail(page) && PageSlab(compound_head(page)))
+	if (PageTail(page) && PageSlab(page))
 		u |= 1 << KPF_SLAB;
 
 	u |= kpf_copy_bit(k, KPF_ERROR,		PG_error);
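For context (not part of the patch): stable_page_flags() backs /proc/kpageflags, which exports one 64-bit KPF_* bitmap per PFN. Below is a hedged userspace sketch that reads those bits for a single PFN, assuming the KPF_SLAB (7) and KPF_BUDDY (10) bit numbers from include/uapi/linux/kernel-page-flags.h and root privileges.

```c
/*
 * Illustrative userspace sketch: print the slab/buddy bits that
 * stable_page_flags() exports through /proc/kpageflags for one PFN.
 * Bit numbers are assumed from include/uapi/linux/kernel-page-flags.h;
 * reading /proc/kpageflags requires root.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define KPF_SLAB   7
#define KPF_BUDDY 10

int main(int argc, char **argv)
{
	uint64_t pfn = argc > 1 ? strtoull(argv[1], NULL, 0) : 0;
	uint64_t flags;
	int fd = open("/proc/kpageflags", O_RDONLY);

	/* each PFN has one 8-byte flags word at offset pfn * 8 */
	if (fd < 0 ||
	    pread(fd, &flags, sizeof(flags), pfn * sizeof(flags)) !=
	    (ssize_t)sizeof(flags)) {
		perror("kpageflags");
		return 1;
	}
	printf("pfn %llu: slab=%llu buddy=%llu\n",
	       (unsigned long long)pfn,
	       (unsigned long long)((flags >> KPF_SLAB) & 1),
	       (unsigned long long)((flags >> KPF_BUDDY) & 1));
	close(fd);
	return 0;
}
```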
