author     Linus Torvalds <torvalds@linux-foundation.org>   2021-07-01 17:17:24 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>   2021-07-01 17:17:24 -0700
commit     e267992f9ef0bf717d70a9ee18049782f77e4b3a (patch)
tree       6caf3664452672f41e8039f6af4279e2df709d66 /mm/percpu-km.c
parent     19b438592238b3b40c3f945bb5f9c4ca971c0c45 (diff)
parent     e4d777003a43feab2e000749163e531f6c48c385 (diff)
Merge branch 'for-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu
Pull percpu updates from Dennis Zhou:
- percpu chunk depopulation - depopulate the backing pages of chunks that
  contain empty pages once the global count of empty pages stays above a
  threshold even without them. This lets us reclaim a portion of memory
  that would previously be stranded until the whole chunk was freed
  (possibly never). A simplified sketch of the idea follows the commit
  list below.
- memcg accounting cleanup - previously, separate chunks were managed for
  normal allocations and __GFP_ACCOUNT allocations. These are now
  consolidated, which simplifies the code considerably.
- a few miscellaneous cleanups for clang warnings
* 'for-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu:
percpu: optimize locking in pcpu_balance_workfn()
percpu: initialize best_upa variable
percpu: rework memcg accounting
mm, memcg: introduce mem_cgroup_kmem_disabled()
mm, memcg: mark cgroup_memory_nosocket, nokmem and noswap as __ro_after_init
percpu: make symbol 'pcpu_free_slot' static
percpu: implement partial chunk depopulation
percpu: use pcpu_free_slot instead of pcpu_nr_slots - 1
percpu: factor out pcpu_check_block_hint()
percpu: split __pcpu_balance_workfn()
percpu: fix a comment about the chunks ordering
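
To make the depopulation idea above concrete, here is a small standalone C model. It is not kernel code: the struct layout, the EMPTY_HIGH/EMPTY_LOW watermarks, and the function names are invented for illustration. The shape of the logic is the point: once the global count of empty-but-populated pages exceeds a high watermark, chunks hand their empty backing pages back until the count drops toward a low watermark, and a few empty pages are deliberately kept around so atomic percpu allocations can still be served without populating pages on the fly.

```c
/*
 * Simplified userspace model of percpu chunk depopulation.
 * All names (struct chunk, EMPTY_HIGH/EMPTY_LOW, reclaim_empty_pages)
 * are illustrative only; the kernel's real logic lives in mm/percpu.c.
 */
#include <stdio.h>

#define NR_CHUNKS   4
#define EMPTY_HIGH  8   /* start reclaiming above this many empty pages */
#define EMPTY_LOW   4   /* stop once we are back at or below this */

struct chunk {
	int nr_empty_pop_pages;  /* populated pages holding no allocations */
	int immutable;           /* e.g. a reserved chunk is never shrunk */
};

static struct chunk chunks[NR_CHUNKS] = {
	{ .nr_empty_pop_pages = 1, .immutable = 1 },
	{ .nr_empty_pop_pages = 5 },
	{ .nr_empty_pop_pages = 4 },
	{ .nr_empty_pop_pages = 2 },
};

static int total_empty_pages(void)
{
	int i, sum = 0;

	for (i = 0; i < NR_CHUNKS; i++)
		sum += chunks[i].nr_empty_pop_pages;
	return sum;
}

/* Walk chunks and give empty backing pages back until below the low mark. */
static void reclaim_empty_pages(void)
{
	int i;

	if (total_empty_pages() <= EMPTY_HIGH)
		return;  /* under the global threshold: keep pages as a cache */

	for (i = 0; i < NR_CHUNKS && total_empty_pages() > EMPTY_LOW; i++) {
		if (chunks[i].immutable)
			continue;
		/* "depopulate": pretend to return the pages to the system */
		chunks[i].nr_empty_pop_pages = 0;
	}
}

int main(void)
{
	printf("empty pages before: %d\n", total_empty_pages());
	reclaim_empty_pages();
	printf("empty pages after:  %d\n", total_empty_pages());
	return 0;
}
```

The real work is driven from pcpu_balance_workfn() and friends in mm/percpu.c; the sketch only mirrors the threshold-driven shape of it.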
Diffstat (limited to 'mm/percpu-km.c')
-rw-r--r--  mm/percpu-km.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
```diff
diff --git a/mm/percpu-km.c b/mm/percpu-km.c
index 35c9941077ee..c9d529dc7651 100644
--- a/mm/percpu-km.c
+++ b/mm/percpu-km.c
@@ -44,8 +44,7 @@ static void pcpu_depopulate_chunk(struct pcpu_chunk *chunk,
 	/* nada */
 }
 
-static struct pcpu_chunk *pcpu_create_chunk(enum pcpu_chunk_type type,
-					    gfp_t gfp)
+static struct pcpu_chunk *pcpu_create_chunk(gfp_t gfp)
 {
 	const int nr_pages = pcpu_group_sizes[0] >> PAGE_SHIFT;
 	struct pcpu_chunk *chunk;
@@ -53,7 +52,7 @@ static struct pcpu_chunk *pcpu_create_chunk(enum pcpu_chunk_type type,
 	unsigned long flags;
 	int i;
 
-	chunk = pcpu_alloc_chunk(type, gfp);
+	chunk = pcpu_alloc_chunk(gfp);
 	if (!chunk)
 		return NULL;
 
@@ -118,3 +117,8 @@ static int __init pcpu_verify_alloc_info(const struct pcpu_alloc_info *ai)
 
 	return 0;
 }
+
+static bool pcpu_should_reclaim_chunk(struct pcpu_chunk *chunk)
+{
+	return false;
+}
```
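
For context on the two changes in this file: pcpu_create_chunk() loses its enum pcpu_chunk_type argument because the memcg accounting rework no longer keeps separate chunks per type, and the new pcpu_should_reclaim_chunk() stub, which always returns false, opts the km backend out of partial chunk depopulation; this backend backs each chunk with one contiguous block of pages and cannot hand individual pages back (its pcpu_depopulate_chunk() is a no-op, as the '/* nada */' context above shows). The following standalone sketch, with invented names and structure rather than the kernel's actual balance path, models only that opt-out pattern: a generic reclaim walk asks a per-backend predicate before depopulating anything.

```c
/*
 * Sketch of the backend opt-out pattern: a generic reclaim walk asks each
 * backend whether a chunk may be depopulated.  Names are illustrative only.
 */
#include <stdbool.h>
#include <stdio.h>

struct chunk {
	const char *name;
	int nr_empty_pop_pages;
};

/*
 * percpu-km style backend: chunk memory is one contiguous block, so
 * individual pages can never be handed back -> always refuse.
 */
static bool km_should_reclaim_chunk(struct chunk *chunk)
{
	(void)chunk;
	return false;
}

/* percpu-vm style backend: reclaim once a chunk carries empty pages. */
static bool vm_should_reclaim_chunk(struct chunk *chunk)
{
	return chunk->nr_empty_pop_pages > 0;
}

static void balance_walk(struct chunk *chunks, int n,
			 bool (*should_reclaim)(struct chunk *))
{
	for (int i = 0; i < n; i++) {
		if (!should_reclaim(&chunks[i]))
			continue;
		printf("depopulating %s (%d empty pages)\n",
		       chunks[i].name, chunks[i].nr_empty_pop_pages);
		chunks[i].nr_empty_pop_pages = 0;
	}
}

int main(void)
{
	struct chunk chunks[] = {
		{ "chunk-0", 3 },
		{ "chunk-1", 0 },
	};

	balance_walk(chunks, 2, km_should_reclaim_chunk);  /* no output */
	balance_walk(chunks, 2, vm_should_reclaim_chunk);  /* reclaims chunk-0 */
	return 0;
}
```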