| field | value | date |
|---|---|---|
| author | Linus Torvalds <torvalds@linux-foundation.org> | 2024-01-09 11:18:47 -0800 |
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2024-01-09 11:18:47 -0800 |
| commit | fb46e22a9e3863e08aef8815df9f17d0f4b9aede (patch) | |
| tree | 83e052911fa8d8d90bcf9de2796e17e19040613f /mm | |
| parent | d30e51aa7b1f6fa7dd78d4598d1e4c047fcc3fb9 (diff) | |
| parent | 5e0a760b44417f7cadd79de2204d6247109558a0 (diff) | |
Merge tag 'mm-stable-2024-01-08-15-31' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull MM updates from Andrew Morton:
"Many singleton patches against the MM code. The patch series which are
included in this merge do the following:
- Peng Zhang has done some maple tree maintenance work in the series
'maple_tree: add mt_free_one() and mt_attr() helpers'
'Some cleanups of maple tree'
- In the series 'mm: use memmap_on_memory semantics for dax/kmem'
Vishal Verma has altered the interworking between memory-hotplug
and dax/kmem so that newly added 'device memory' can more easily
have its memmap placed within that newly added memory.
- Matthew Wilcox continues folio-related work (including a few fixes)
in the patch series
'Add folio_zero_tail() and folio_fill_tail()'
'Make folio_start_writeback return void'
'Fix fault handler's handling of poisoned tail pages'
'Convert aops->error_remove_page to ->error_remove_folio'
'Finish two folio conversions'
'More swap folio conversions'
- Kefeng Wang has also contributed folio-related work in the series
'mm: cleanup and use more folio in page fault'
- Jim Cromie has improved the kmemleak reporting output in the series
'tweak kmemleak report format'.
- In the series 'stackdepot: allow evicting stack traces' Andrey
Konovalov permits clients (in this case KASAN) to cause eviction
of stack traces that are no longer needed.
- Charan Teja Kalla has fixed some accounting issues in the page
allocator's atomic reserve calculations in the series 'mm:
page_alloc: fixes for high atomic reserve calculations'.
- Dmitry Rokosov has added to the samples/ directory some sample code
for a userspace memcg event listener application. See the series
'samples: introduce cgroup events listeners'.
- Some maple tree maintenance work from Liam Howlett in the series
'maple_tree: iterator state changes'.
- Nhat Pham has improved zswap's approach to writeback in the series
'workload-specific and memory pressure-driven zswap writeback'.
- DAMON/DAMOS feature and maintenance work from SeongJae Park in the
series
'mm/damon: let users feed and tame/auto-tune DAMOS'
'selftests/damon: add Python-written DAMON functionality tests'
'mm/damon: misc updates for 6.8'
- Yosry Ahmed has improved memcg's stats flushing in the series 'mm:
memcg: subtree stats flushing and thresholds'.
- In the series 'Multi-size THP for anonymous memory' Ryan Roberts
has added a runtime opt-in feature to transparent hugepages which
improves performance by allocating larger chunks of memory during
anonymous page faults (a usage sketch follows this list).
- Matthew Wilcox has also contributed some cleanup and maintenance
work against the buffer_head code in the series 'More buffer_head
cleanups'.
- Suren Baghdasaryan has done work on Andrea Arcangeli's series
'userfaultfd move option'. UFFDIO_MOVE permits userspace heap
compaction algorithms to move userspace's pages around rather than
UFFDIO_COPY's alloc/copy/free (a usage sketch also follows this list).
- Stefan Roesch has developed a 'KSM Advisor', in the series 'mm/ksm:
Add ksm advisor'. This is a governor which tunes KSM's scanning
aggressiveness in response to userspace's current needs.
- Chengming Zhou has optimized zswap's temporary working memory use
in the series 'mm/zswap: dstmem reuse optimizations and cleanups'.
- Matthew Wilcox has performed some maintenance work on the writeback
code, both in the common code and within filesystems. The series is 'Clean up the
writeback paths'.
- Andrey Konovalov has optimized KASAN's handling of alloc and free
stack traces for secondary-level allocators, in the series 'kasan:
save mempool stack traces'.
- Andrey also performed some KASAN maintenance work in the series
'kasan: assorted clean-ups'.
- David Hildenbrand has gone to town on the rmap code. Cleanups, more
pte batching, folio conversions and more. See the series 'mm/rmap:
interface overhaul'.
- Kinsey Ho has contributed some maintenance work on the MGLRU code
in the series 'mm/mglru: Kconfig cleanup'.
- Matthew Wilcox has contributed lruvec page accounting code cleanups
in the series 'Remove some lruvec page accounting functions'"
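
As a rough illustration of the runtime opt-in mentioned in the 'Multi-size THP
for anonymous memory' item above, here is a minimal userspace sketch. It is
not part of the merge itself; the per-size sysfs path (hugepages-64kB) and the
"madvise" policy string are illustrative assumptions, and the sizes actually
available depend on the architecture and base page size.

```c
/*
 * Hedged sketch: opt one mTHP size in at runtime, then fault in anonymous
 * memory that the kernel may back with larger folios.  The sysfs path and
 * the 64kB size are illustrative assumptions, not guaranteed to exist.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* Assumed per-size knob added by the mTHP series. */
	const char *knob =
		"/sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled";
	int fd = open(knob, O_WRONLY);

	if (fd >= 0) {
		/* "madvise" restricts larger folios to MADV_HUGEPAGE areas. */
		if (write(fd, "madvise", strlen("madvise")) < 0)
			perror("write");
		close(fd);
	} else {
		perror("open");	/* kernel or config without mTHP */
	}

	size_t len = 64 * 1024 * 1024;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	madvise(buf, len, MADV_HUGEPAGE);
	memset(buf, 0xaa, len);	/* anonymous faults; may use 64kB folios */
	munmap(buf, len);
	return 0;
}
```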
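Similarly, a minimal sketch of driving the new UFFDIO_MOVE ioctl from the
'userfaultfd move option' item. It assumes a userfaultfd that has already
completed the UFFDIO_API handshake and has the destination range registered
(that setup is not shown), and uffd_move() is simply a name chosen here.

```c
/*
 * Hedged sketch of UFFDIO_MOVE: move `len` bytes of anonymous memory from
 * `src` to `dst` instead of UFFDIO_COPY's alloc/copy/free.  Requires
 * <linux/userfaultfd.h> from a kernel that carries this merge.
 */
#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

/* Returns 0 on success or a negative errno. */
int uffd_move(int uffd, void *dst, void *src, size_t len)
{
	struct uffdio_move move;

	memset(&move, 0, sizeof(move));
	move.dst = (uintptr_t)dst;
	move.src = (uintptr_t)src;
	move.len = len;
	move.mode = 0;	/* e.g. UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES if desired */

	if (ioctl(uffd, UFFDIO_MOVE, &move))
		return move.move < 0 ? (int)move.move : -errno;
	return 0;
}
```

Compared with UFFDIO_COPY, the point of the move is that the source pages can
be remapped to the destination rather than allocated, copied, and freed.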
* tag 'mm-stable-2024-01-08-15-31' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (361 commits)
mm, treewide: rename MAX_ORDER to MAX_PAGE_ORDER
mm, treewide: introduce NR_PAGE_ORDERS
selftests/mm: add separate UFFDIO_MOVE test for PMD splitting
selftests/mm: skip test if application doesn't has root privileges
selftests/mm: conform test to TAP format output
selftests: mm: hugepage-mmap: conform to TAP format output
selftests/mm: gup_test: conform test to TAP format output
mm/selftests: hugepage-mremap: conform test to TAP format output
mm/vmstat: move pgdemote_* out of CONFIG_NUMA_BALANCING
mm: zsmalloc: return -ENOSPC rather than -EINVAL in zs_malloc while size is too large
mm/memcontrol: remove __mod_lruvec_page_state()
mm/khugepaged: use a folio more in collapse_file()
slub: use a folio in __kmalloc_large_node
slub: use folio APIs in free_large_kmalloc()
slub: use alloc_pages_node() in alloc_slab_page()
mm: remove inc/dec lruvec page state functions
mm: ratelimit stat flush from workingset shrinker
kasan: stop leaking stack trace handles
mm/mglru: remove CONFIG_TRANSPARENT_HUGEPAGE
mm/mglru: add dummy pmd_dirty()
...
Diffstat (limited to 'mm')
82 files changed, 5351 insertions, 2530 deletions
diff --git a/mm/Kconfig b/mm/Kconfig
index ddf246bf785d..1902cfe4cc4f 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -61,6 +61,20 @@ config ZSWAP_EXCLUSIVE_LOADS_DEFAULT_ON
 	  The cost is that if the page was never dirtied and needs to be
 	  swapped out again, it will be re-compressed.
 
+config ZSWAP_SHRINKER_DEFAULT_ON
+	bool "Shrink the zswap pool on memory pressure"
+	depends on ZSWAP
+	default n
+	help
+	  If selected, the zswap shrinker will be enabled, and the pages
+	  stored in the zswap pool will become available for reclaim (i.e
+	  written back to the backing swap device) on memory pressure.
+
+	  This means that zswap writeback could happen even if the pool is
+	  not yet full, or the cgroup zswap limit has not been reached,
+	  reducing the chance that cold pages will reside in the zswap pool
+	  and consume memory indefinitely.
+
 choice
 	prompt "Default compressor"
 	depends on ZSWAP
@@ -329,7 +343,7 @@ config SHUFFLE_PAGE_ALLOCATOR
 	  the presence of a memory-side-cache. There are also incidental
 	  security benefits as it reduces the predictability of page
 	  allocations to compliment SLAB_FREELIST_RANDOM, but the
-	  default granularity of shuffling on the MAX_ORDER i.e, 10th
+	  default granularity of shuffling on the MAX_PAGE_ORDER i.e, 10th
 	  order of pages is selected based on cache utilization benefits
 	  on x86.
@@ -661,8 +675,8 @@ config HUGETLB_PAGE_SIZE_VARIABLE
 	  HUGETLB_PAGE_ORDER when there are multiple HugeTLB page sizes
 	  available on a platform.
 
-	  Note that the pageblock_order cannot exceed MAX_ORDER and will be
-	  clamped down to MAX_ORDER.
+	  Note that the pageblock_order cannot exceed MAX_PAGE_ORDER and will be
+	  clamped down to MAX_PAGE_ORDER.
 
 config CONTIG_ALLOC
 	def_bool (MEMORY_ISOLATION && COMPACTION) || CMA
@@ -718,7 +732,7 @@ config DEFAULT_MMAP_MIN_ADDR
 	  from userspace allocation.  Keeping a user from writing to low pages
 	  can help reduce the impact of kernel NULL pointer bugs.
 
-	  For most ia64, ppc64 and x86 users with lots of address space
+	  For most ppc64 and x86 users with lots of address space
 	  a value of 65536 is reasonable and should cause no problems.
 	  On arm and other archs it should not be higher than 32768.
 	  Programs which use vm86 functionality or have some need to map
@@ -821,6 +835,12 @@ choice
 	  madvise(MADV_HUGEPAGE) but it won't risk to increase the
 	  memory footprint of applications without a guaranteed
 	  benefit.
+
+	config TRANSPARENT_HUGEPAGE_NEVER
+		bool "never"
+		help
+		  Disable Transparent Hugepage by default. It can still be
+		  enabled at runtime via sysfs.
 endchoice
 
 config THP_SWAP
@@ -1216,6 +1236,10 @@ config LRU_GEN_STATS
 	  from evicted generations for debugging purpose.
 
 	  This option has a per-memcg and per-node memory overhead.
+
+config LRU_GEN_WALKS_MMU
+	def_bool y
+	depends on LRU_GEN && ARCH_HAS_HW_PTE_YOUNG
 # }
 
 config ARCH_SUPPORTS_PER_VMA_LOCK
diff --git a/mm/cma.c b/mm/cma.c
@@ -244,7 +244,7 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 {
 	phys_addr_t memblock_end = memblock_end_of_DRAM();
 	phys_addr_t highmem_start;
-	int ret = 0;
+	int ret;
 
 	/*
 	 * We can't use __pa(high_memory) directly, since high_memory
diff --git a/mm/compaction.c b/mm/compaction.c
index 01ba298739dd..27ada42924d5 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -999,7 +999,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			 * a valid page order. Consider only values in the
 			 * valid order range to prevent low_pfn overflow.
 			 */
-			if (freepage_order > 0 && freepage_order <= MAX_ORDER) {
+			if (freepage_order > 0 && freepage_order <= MAX_PAGE_ORDER) {
 				low_pfn += (1UL << freepage_order) - 1;
 				nr_scanned += (1UL << freepage_order) - 1;
 			}
@@ -1017,7 +1017,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (PageCompound(page) && !cc->alloc_contig) {
 			const unsigned int order = compound_order(page);
 
-			if (likely(order <= MAX_ORDER)) {
+			if (likely(order <= MAX_PAGE_ORDER)) {
 				low_pfn += (1UL << order) - 1;
 				nr_scanned += (1UL << order) - 1;
 			}
@@ -1611,6 +1611,9 @@ static void fast_isolate_freepages(struct compact_control *cc)
 					min(pageblock_end_pfn(min_pfn),
 					    zone_end_pfn(cc->zone)),
 					cc->zone);
+			if (page && !suitable_migration_target(cc, page))
+				page = NULL;
+
 			cc->free_pfn = min_pfn;
 		}
 	}
@@ -2226,7 +2229,7 @@ static enum compact_result __compact_finished(struct compact_control *cc)
 
 	/* Direct compactor: Is a suitable page free? */
 	ret = COMPACT_NO_SUITABLE_PAGE;
-	for (order = cc->order; order <= MAX_ORDER; order++) {
+	for (order = cc->order; order < NR_PAGE_ORDERS; order++) {
 		struct free_area *area = &cc->zone->free_area[order];
 		bool can_steal;
diff --git a/mm/damon/core-test.h b/mm/damon/core-test.h
index 649adf91ebc5..0cee634f3544 100644
--- a/mm/damon/core-test.h
+++ b/mm/damon/core-test.h
@@ -4,7 +4,7 @@
  *
  * Copyright 2019 Amazon.com, Inc. or its affiliates. All rights reserved.
  *
- * Author: SeongJae Park <sjpark@amazon.de>
+ * Author: SeongJae Park <sj@kernel.org>
  */
 
 #ifdef CONFIG_DAMON_KUNIT_TEST
@@ -122,18 +122,25 @@ static void damon_test_split_at(struct kunit *test)
 {
 	struct damon_ctx *c = damon_new_ctx();
 	struct damon_target *t;
-	struct damon_region *r;
+	struct damon_region *r, *r_new;
 
 	t = damon_new_target();
 	r = damon_new_region(0, 100);
+	r->nr_accesses_bp = 420000;
+	r->nr_accesses = 42;
+	r->last_nr_accesses = 15;
 	damon_add_region(r, t);
 	damon_split_region_at(t, r, 25);
 	KUNIT_EXPECT_EQ(test, r->ar.start, 0ul);
 	KUNIT_EXPECT_EQ(test, r->ar.end, 25ul);
-	r = damon_next_region(r);
-	KUNIT_EXPECT_EQ(test, r->ar.start, 25ul);
-	KUNIT_EXPECT_EQ(test, r->ar.end, 100ul);
+	r_new = damon_next_region(r);
+	KUNIT_EXPECT_EQ(test, r_new->ar.start, 25ul);
+	KUNIT_EXPECT_EQ(test, r_new->ar.end, 100ul);
+
+	KUNIT_EXPECT_EQ(test, r->nr_accesses_bp, r_new->nr_accesses_bp);
+	KUNIT_EXPECT_EQ(test, r->nr_accesses, r_new->nr_accesses);
+	KUNIT_EXPECT_EQ(test, r->last_nr_accesses, r_new->last_nr_accesses);
 
 	damon_free_target(t);
 	damon_destroy_ctx(c);
@@ -295,6 +302,16 @@ static void damon_test_set_regions(struct kunit *test)
 	damon_destroy_target(t);
 }
 
+static void damon_test_nr_accesses_to_accesses_bp(struct kunit *test)
+{
+	struct damon_attrs attrs = {
+		.sample_interval = 10,
+		.aggr_interval = ((unsigned long)UINT_MAX + 1) * 10
+	};
+
+	KUNIT_EXPECT_EQ(test, damon_nr_accesses_to_accesses_bp(123, &attrs), 0);
+}
+
 static void damon_test_update_monitoring_result(struct kunit *test)
 {
 	struct damon_attrs old_attrs = {
@@ -439,6 +456,37 @@ static void damos_test_filter_out(struct kunit *test)
 	damos_free_filter(f);
 }
 
+static void damon_test_feed_loop_next_input(struct kunit *test)
+{
+	unsigned long last_input = 900000, current_score = 200;
+
+	/*
+	 * If current score is lower than the goal, which is always 10,000
+	 * (read the comment on damon_feed_loop_next_input()'s comment), next
+	 * input should be higher than the last input.
+	 */
+	KUNIT_EXPECT_GT(test,
+			damon_feed_loop_next_input(last_input, current_score),
+			last_input);
+
+	/*
+	 * If current score is higher than the goal, next input should be lower
+	 * than the last input.
+	 */
+	current_score = 250000000;
+	KUNIT_EXPECT_LT(test,
+			damon_feed_loop_next_input(last_input, current_score),
+			last_input);
+
+	/*
+	 * The next input depends on the distance between the current score and
+	 * the goal
+	 */
+	KUNIT_EXPECT_GT(test,
+			damon_feed_loop_next_input(last_input, 200),
+			damon_feed_loop_next_input(last_input, 2000));
+}
+
 static struct kunit_case damon_test_cases[] = {
 	KUNIT_CASE(damon_test_target),
 	KUNIT_CASE(damon_test_regions),
@@ -449,11 +497,13 @@ static struct kunit_case damon_test_cases[] = {
 	KUNIT_CASE(damon_test_split_regions_of),
 	KUNIT_CASE(damon_test_ops_registration),
 	KUNIT_CASE(damon_test_set_regions),
+	KUNIT_CASE(damon_test_nr_accesses_to_accesses_bp),
 	KUNIT_CASE(damon_test_update_monitoring_result),
 	KUNIT_CASE(damon_test_set_attrs),
 	KUNIT_CASE(damon_test_moving_sum),
 	KUNIT_CASE(damos_test_new_filter),
 	KUNIT_CASE(damos_test_filter_out),
+	KUNIT_CASE(damon_test_feed_loop_next_input),
 	{},
 };
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 3a05e71509b9..36f6f1d21ff0 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -2,7 +2,7 @@
 /*
  * Data Access Monitor
  *
- * Author: SeongJae Park <sjpark@amazon.de>
+ * Author: SeongJae Park <sj@kernel.org>
  */
 
 #define pr_fmt(fmt) "damon: " fmt
@@ -1043,26 +1043,76 @@ static void damon_do_apply_schemes(struct damon_ctx *c,
 	}
 }
 
-/* Shouldn't be called if quota->ms and quota->sz are zero */
+/*
+ * damon_feed_loop_next_input() - get next input to achieve a target score.
+ * @last_input	The last input.
+ * @score	Current score that made with @last_input.
+ *
+ * Calculate next input to achieve the target score, based on the last input
+ * and current score.  Assuming the input and the score are positively
+ * proportional, calculate how much compensation should be added to or
+ * subtracted from the last input as a proportion of the last input.  Avoid
+ * next input always being zero by setting it non-zero always.  In short form
+ * (assuming support of float and signed calculations), the algorithm is as
+ * below.
+ *
+ * next_input = max(last_input * ((goal - current) / goal + 1), 1)
+ *
+ * For simple implementation, we assume the target score is always 10,000.  The
+ * caller should adjust @score for this.
+ *
+ * Returns next input that assumed to achieve the target score.
+ */
+static unsigned long damon_feed_loop_next_input(unsigned long last_input,
+		unsigned long score)
+{
+	const unsigned long goal = 10000;
+	unsigned long score_goal_diff = max(goal, score) - min(goal, score);
+	unsigned long score_goal_diff_bp = score_goal_diff * 10000 / goal;
+	unsigned long compensation = last_input * score_goal_diff_bp / 10000;
+	/* Set minimum input as 10000 to avoid compensation be zero */
+	const unsigned long min_input = 10000;
+
+	if (goal > score)
+		return last_input + compensation;
+	if (last_input > compensation + min_input)
+		return last_input - compensation;
+	return min_input;
+}
+
+/* Shouldn't be called if quota->ms, quota->sz, and quota->get_score unset */
 static void damos_set_effective_quota(struct damos_quota *quota)
 {
 	unsigned long throughput;
 	unsigned long esz;
 
-	if (!quota->ms) {
+	if (!quota->ms && !quota->get_score) {
 		quota->esz = quota->sz;
 		return;
 	}
 
-	if (quota->total_charged_ns)
-		throughput = quota->total_charged_sz * 1000000 /
-			quota->total_charged_ns;
-	else
-		throughput = PAGE_SIZE * 1024;
-	esz = throughput * quota->ms;
+	if (quota->get_score) {
+		quota->esz_bp = damon_feed_loop_next_input(
+				max(quota->esz_bp, 10000UL),
+				quota->get_score(quota->get_score_arg));
+		esz = quota->esz_bp / 10000;
+	}
+
+	if (quota->ms) {
+		if (quota->total_charged_ns)
+			throughput = quota->total_charged_sz * 1000000 /
+				quota->total_charged_ns;
+		else
+			throughput = PAGE_SIZE * 1024;
+		if (quota->get_score)
+			esz = min(throughput * quota->ms, esz);
+		else
+			esz = throughput * quota->ms;
+	}
 
 	if (quota->sz && quota->sz < esz)
 		esz = quota->sz;
+
 	quota->esz = esz;
 }
 
@@ -1074,7 +1124,7 @@ static void damos_adjust_quota(struct damon_ctx *c, struct damos *s)
 	unsigned long cum
