path: root/lib
Age | Commit message | Author | Files | Lines
41 hours | lib/crypto: curve25519-hacl64: Fix older clang KASAN workaround for GCC | Nathan Chancellor | 1 | -1/+1
commit 2b81082ad37cc3f28355fb73a6a69b91ff7dbf20 upstream. Commit 2f13daee2a72 ("lib/crypto/curve25519-hacl64: Disable KASAN with clang-17 and older") inadvertently disabled KASAN in curve25519-hacl64.o for GCC unconditionally because clang-min-version will always evaluate to nothing for GCC. Add a check for CONFIG_CC_IS_CLANG to avoid applying the workaround for GCC, which is only needed for clang-17 and older. Cc: stable@vger.kernel.org Fixes: 2f13daee2a72 ("lib/crypto/curve25519-hacl64: Disable KASAN with clang-17 and older") Signed-off-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20251103-curve25519-hacl64-fix-kasan-workaround-v2-1-ab581cbd8035@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
41 hours | kunit: test_dev_action: Correctly cast 'priv' pointer to long* | Florian Schmaus | 1 | -1/+1
[ Upstream commit 2551a1eedc09f5a86f94b038dc1bb16855c256f1 ] The previous implementation incorrectly assumed the original type of 'priv' was void**, leading to an unnecessary and misleading cast. Correct the cast of the 'priv' pointer in test_dev_action() to its actual type, long*, removing an unnecessary cast. As an additional benefit, this fixes an out-of-bounds CHERI fault on hardware with architectural capabilities. The original implementation tried to store a capability-sized pointer using the priv pointer. However, the priv pointer's capability only granted access to the memory region of its original long type, leading to a bounds violation since the size of a long is smaller than the size of a capability. This change ensures that the pointer usage respects the capabilities' bounds. Link: https://lore.kernel.org/r/20251017092814.80022-1-florian.schmaus@codasip.com Fixes: d03c720e03bd ("kunit: Add APIs for managing devices") Reviewed-by: David Gow <davidgow@google.com> Signed-off-by: Florian Schmaus <florian.schmaus@codasip.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
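A minimal sketch of the corrected callback, based only on the description above; the body shown here is an assumption, not copied from lib/kunit/kunit-test.c:

    static void test_dev_action(void *priv)
    {
        /* 'priv' points at a long owned by the test, so write through a
         * long *; the old *(void **)priv cast stored a pointer-sized
         * (capability-sized on CHERI) value into a plain long. */
        *(long *)priv = 1;
    }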
2025-10-19 | lib/crypto/curve25519-hacl64: Disable KASAN with clang-17 and older | Nathan Chancellor | 1 | -0/+4
commit 2f13daee2a72bb962f5fd356c3a263a6f16da965 upstream. After commit 6f110a5e4f99 ("Disable SLUB_TINY for build testing"), which causes CONFIG_KASAN to be enabled in allmodconfig again, arm64 allmodconfig builds with clang-17 and older show an instance of -Wframe-larger-than (which breaks the build with CONFIG_WERROR=y): lib/crypto/curve25519-hacl64.c:757:6: error: stack frame size (2336) exceeds limit (2048) in 'curve25519_generic' [-Werror,-Wframe-larger-than] 757 | void curve25519_generic(u8 mypublic[CURVE25519_KEY_SIZE], | ^ When KASAN is disabled, the stack usage is roughly quartered: lib/crypto/curve25519-hacl64.c:757:6: error: stack frame size (608) exceeds limit (128) in 'curve25519_generic' [-Werror,-Wframe-larger-than] 757 | void curve25519_generic(u8 mypublic[CURVE25519_KEY_SIZE], | ^ Using '-Rpass-analysis=stack-frame-layout' shows the following variables and many, many 8-byte spills when KASAN is enabled: Offset: [SP-144], Type: Variable, Align: 8, Size: 40 Offset: [SP-464], Type: Variable, Align: 8, Size: 320 Offset: [SP-784], Type: Variable, Align: 8, Size: 320 Offset: [SP-864], Type: Variable, Align: 32, Size: 80 Offset: [SP-896], Type: Variable, Align: 32, Size: 32 Offset: [SP-1016], Type: Variable, Align: 8, Size: 120 When KASAN is disabled, there are still spills but not as many, and the variables list is smaller: Offset: [SP-192], Type: Variable, Align: 32, Size: 80 Offset: [SP-224], Type: Variable, Align: 32, Size: 32 Offset: [SP-344], Type: Variable, Align: 8, Size: 120 Disable KASAN for this file when using clang-17 or older to avoid blowing out the stack, clearing up the warning. Signed-off-by: Nathan Chancellor <nathan@kernel.org> Acked-by: "Jason A. Donenfeld" <Jason@zx2c4.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250609-curve25519-hacl64-disable-kasan-clang-v1-1-08ea0ac5ccff@kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-10-19 | lib/genalloc: fix device leak in of_gen_pool_get() | Johan Hovold | 1 | -1/+4
commit 1260cbcffa608219fc9188a6cbe9c45a300ef8b5 upstream. Make sure to drop the reference taken when looking up the genpool platform device in of_gen_pool_get() before returning the pool. Note that holding a reference to a device does typically not prevent its devres managed resources from being released so there is no point in keeping the reference. Link: https://lkml.kernel.org/r/20250924080207.18006-1-johan@kernel.org Fixes: 9375db07adea ("genalloc: add devres support, allow to find a managed pool by device") Signed-off-by: Johan Hovold <johan@kernel.org> Cc: Philipp Zabel <p.zabel@pengutronix.de> Cc: Vladimir Zapolskiy <vz@mleia.com> Cc: <stable@vger.kernel.org> [3.10+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
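A hedged sketch of the pattern described above; it is simplified (the real of_gen_pool_get() also resolves the phandle and pool name), and gen_pool_lookup_sketch() is a made-up name. The point is that the reference taken by of_find_device_by_node() is dropped once the pool has been looked up:

    #include <linux/genalloc.h>
    #include <linux/of_platform.h>

    static struct gen_pool *gen_pool_lookup_sketch(struct device_node *np)
    {
        struct platform_device *pdev = of_find_device_by_node(np);
        struct gen_pool *pool;

        if (!pdev)
            return NULL;

        pool = gen_pool_get(&pdev->dev, NULL);
        put_device(&pdev->dev);   /* drop the lookup reference before returning */
        return pool;
    }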
2025-08-20 | lib/sbitmap: convert shallow_depth from one word to the whole sbitmap | Yu Kuai | 1 | -27/+29
[ Upstream commit 42e6c6ce03fd3e41e39a0f93f9b1a1d9fa664338 ] Currently, elevators record an internal 'async_depth' to throttle asynchronous requests, and they calculate shallow_depth based on sb->shift, on the assumption that sb->shift is the number of available tags in one word. However, sb->shift is not the number of available tags in the last word, see __map_depth: if (index == sb->map_nr - 1) return sb->depth - (index << sb->shift); As a consequence, if the last word is used, more tags can be obtained than expected. For example, assume nr_requests=256 with four words; in the worst case, if the user sets nr_requests=32, the first word is also the last word, yet the bits-per-word value of 64 is still used to calculate async_depth, which is wrong. On the other hand, due to cgroup QoS, bfq may allow only one request to be allocated, but setting shallow_depth=1 will still allow as many requests as there are words to be allocated. Fix these problems by applying shallow_depth to the whole sbitmap instead of per word, and change kyber, mq-deadline and bfq to follow this; a new helper __map_depth_with_shallow() is introduced to calculate the available bits in each word. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Link: https://lore.kernel.org/r/20250807032413.1469456-2-yukuai1@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Sasha Levin <sashal@kernel.org>
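To make the quoted __map_depth() behaviour concrete, here is a hypothetical helper (not from the patch) mirroring that logic with the numbers from the example above: with depth = 32, shift = 6 and a single word, the word only holds 32 usable bits, so a shallow depth derived from the 64 bits-per-word value is too large.

    /* Hypothetical helper mirroring the __map_depth() logic quoted above. */
    static unsigned int map_depth_example(unsigned int depth, unsigned int shift,
                                          unsigned int map_nr, unsigned int index)
    {
        if (index == map_nr - 1)
            return depth - (index << shift);  /* last word: e.g. 32 */
        return 1U << shift;                   /* full words: e.g. 64 */
    }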
2025-07-17 | lib/alloc_tag: do not acquire non-existent lock in alloc_tag_top_users() | Harry Yoo | 1 | -0/+3
commit 99af22cd34688cc0d535a1919e0bea4cbc6c1ea1 upstream. alloc_tag_top_users() attempts to lock alloc_tag_cttype->mod_lock even when the alloc_tag_cttype is not allocated because: 1) alloc tagging is disabled because mem profiling is disabled (!alloc_tag_cttype) 2) alloc tagging is enabled, but not yet initialized (!alloc_tag_cttype) 3) alloc tagging is enabled, but failed initialization (!alloc_tag_cttype or IS_ERR(alloc_tag_cttype)) In all cases, alloc_tag_cttype is not allocated, and therefore alloc_tag_top_users() should not attempt to acquire the semaphore. This leads to a crash on memory allocation failure by attempting to acquire a non-existent semaphore: Oops: general protection fault, probably for non-canonical address 0xdffffc000000001b: 0000 [#3] SMP KASAN NOPTI KASAN: null-ptr-deref in range [0x00000000000000d8-0x00000000000000df] CPU: 2 UID: 0 PID: 1 Comm: systemd Tainted: G D 6.16.0-rc2 #1 VOLUNTARY Tainted: [D]=DIE Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 RIP: 0010:down_read_trylock+0xaa/0x3b0 Code: d0 7c 08 84 d2 0f 85 a0 02 00 00 8b 0d df 31 dd 04 85 c9 75 29 48 b8 00 00 00 00 00 fc ff df 48 8d 6b 68 48 89 ea 48 c1 ea 03 <80> 3c 02 00 0f 85 88 02 00 00 48 3b 5b 68 0f 85 53 01 00 00 65 ff RSP: 0000:ffff8881002ce9b8 EFLAGS: 00010016 RAX: dffffc0000000000 RBX: 0000000000000070 RCX: 0000000000000000 RDX: 000000000000001b RSI: 000000000000000a RDI: 0000000000000070 RBP: 00000000000000d8 R08: 0000000000000001 R09: ffffed107dde49d1 R10: ffff8883eef24e8b R11: ffff8881002cec20 R12: 1ffff11020059d37 R13: 00000000003fff7b R14: ffff8881002cec20 R15: dffffc0000000000 FS: 00007f963f21d940(0000) GS:ffff888458ca6000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007f963f5edf71 CR3: 000000010672c000 CR4: 0000000000350ef0 Call Trace: <TASK> codetag_trylock_module_list+0xd/0x20 alloc_tag_top_users+0x369/0x4b0 __show_mem+0x1cd/0x6e0 warn_alloc+0x2b1/0x390 __alloc_frozen_pages_noprof+0x12b9/0x21a0 alloc_pages_mpol+0x135/0x3e0 alloc_slab_page+0x82/0xe0 new_slab+0x212/0x240 ___slab_alloc+0x82a/0xe00 </TASK> As David Wang points out, this issue became easier to trigger after commit 780138b12381 ("alloc_tag: check mem_profiling_support in alloc_tag_init"). Before the commit, the issue occurred only when it failed to allocate and initialize alloc_tag_cttype or if a memory allocation fails before alloc_tag_init() is called. After the commit, it can be easily triggered when memory profiling is compiled but disabled at boot. To properly determine whether alloc_tag_init() has been called and its data structures initialized, verify that alloc_tag_cttype is a valid pointer before acquiring the semaphore. If the variable is NULL or an error value, it has not been properly initialized. In such a case, just skip and do not attempt to acquire the semaphore. 
[harry.yoo@oracle.com: v3] Link: https://lkml.kernel.org/r/20250624072513.84219-1-harry.yoo@oracle.com Link: https://lkml.kernel.org/r/20250620195305.1115151-1-harry.yoo@oracle.com Fixes: 780138b12381 ("alloc_tag: check mem_profiling_support in alloc_tag_init") Fixes: 1438d349d16b ("lib: add memory allocations report in show_mem()") Signed-off-by: Harry Yoo <harry.yoo@oracle.com> Reported-by: kernel test robot <oliver.sang@intel.com> Closes: https://lore.kernel.org/oe-lkp/202506181351.bba867dd-lkp@intel.com Acked-by: Suren Baghdasaryan <surenb@google.com> Tested-by: Raghavendra K T <raghavendra.kt@amd.com> Cc: Casey Chen <cachen@purestorage.com> Cc: David Wang <00107082@163.com> Cc: Kent Overstreet <kent.overstreet@linux.dev> Cc: Yuanyuan Zhong <yzhong@purestorage.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
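A sketch of the guard described above, under the assumption that the check sits at the top of alloc_tag_top_users(); the signature and body are approximations of lib/alloc_tag.c, not the literal patch:

    static size_t alloc_tag_top_users_sketch(struct codetag_bytes *tags,
                                             size_t count, bool can_sleep)
    {
        /* alloc_tag_cttype is NULL or an ERR_PTR() whenever alloc_tag_init()
         * has not run or failed; its mod_lock does not exist in that case. */
        if (IS_ERR_OR_NULL(alloc_tag_cttype))
            return 0;

        /* ... original body: trylock the module list and collect top users ... */
        return 0;
    }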
2025-07-17 | maple_tree: fix mt_destroy_walk() on root leaf node | Wei Yang | 1 | -0/+1
commit ea9b77f98d94c4d5c1bd1ac1db078f78b40e8bf5 upstream. On destroy, we should set each node dead. But current code miss this when the maple tree has only the root node. The reason is mt_destroy_walk() leverage mte_destroy_descend() to set node dead, but this is skipped since the only root node is a leaf. Fixes this by setting the node dead if it is a leaf. Link: https://lore.kernel.org/all/20250407231354.11771-1-richard.weiyang@gmail.com/ Link: https://lkml.kernel.org/r/20250624191841.64682-1-Liam.Howlett@oracle.com Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Wei Yang <richard.weiyang@gmail.com> Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com> Reviewed-by: Dev Jain <dev.jain@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-07-10 | ubsan: integer-overflow: depend on BROKEN to keep this out of CI | Kees Cook | 1 | -0/+2
[ Upstream commit d6a0e0bfecccdcecb08defe75a137c7262352102 ] Depending on !COMPILE_TEST isn't sufficient to keep this feature out of CI because we can't stop it from being included in randconfig builds. This feature is still highly experimental, and is developed in lock-step with Clang's Overflow Behavior Types[1]. Depend on BROKEN to keep it from being enabled by anyone not expecting it. Link: https://discourse.llvm.org/t/rfc-v2-clang-introduce-overflowbehaviortypes-for-wrapping-and-non-wrapping-arithmetic/86507 [1] Reported-by: kernel test robot <oliver.sang@intel.com> Closes: https://lore.kernel.org/oe-lkp/202505281024.f42beaa7-lkp@intel.com Fixes: 557f8c582a9b ("ubsan: Reintroduce signed overflow sanitizer") Acked-by: Eric Biggers <ebiggers@kernel.org> Link: https://lore.kernel.org/r/20250528182616.work.296-kees@kernel.org Reviewed-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Marco Elver <elver@google.com> Signed-off-by: Kees Cook <kees@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-07-10 | lib: test_objagg: Set error message in check_expect_hints_stats() | Dan Carpenter | 1 | -1/+3
[ Upstream commit e6ed134a4ef592fe1fd0cafac9683813b3c8f3e8 ] Smatch complains that the error message isn't set in the caller: lib/test_objagg.c:923 test_hints_case2() error: uninitialized symbol 'errmsg'. This static checker warning only showed up after a recent refactoring but the bug dates back to when the code was originally added. This likely doesn't affect anything in real life. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/r/202506281403.DsuyHFTZ-lkp@intel.com/ Fixes: 0a020d416d0a ("lib: introduce initial implementation of object aggregation manager") Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/8548f423-2e3b-4bb7-b816-5041de2762aa@sabinyo.mountain Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-07-06 | maple_tree: fix MA_STATE_PREALLOC flag in mas_preallocate() | Liam R. Howlett | 1 | -1/+3
commit fba46a5d83ca8decb338722fb4899026d8d9ead2 upstream. Temporarily clear the preallocation flag when explicitly requesting allocations. Pre-existing allocations are already counted against the request through mas_node_count_gfp(), but the allocations will not happen if the MA_STATE_PREALLOC flag is set. This flag is meant to avoid re-allocating in bulk allocation mode, and to detect issues with preallocation calculations. The MA_STATE_PREALLOC flag should also always be set on zero allocations so that detection of underflow allocations will print a WARN_ON() during consumption. User visible effect of this flaw is a WARN_ON() followed by a null pointer dereference when subsequent requests for larger number of nodes is ignored, such as the vma merge retry in mmap_region() caused by drivers altering the vma flags (which happens in v6.6, at least) Link: https://lkml.kernel.org/r/20250616184521.3382795-3-Liam.Howlett@oracle.com Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com> Reported-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com> Reported-by: Hailong Liu <hailong.liu@oppo.com> Link: https://lore.kernel.org/all/1652f7eb-a51b-4fee-8058-c73af63bacd1@oppo.com/ Link: https://lore.kernel.org/all/20250428184058.1416274-1-Liam.Howlett@oracle.com/ Link: https://lore.kernel.org/all/20250429014754.1479118-1-Liam.Howlett@oracle.com/ Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Hailong Liu <hailong.liu@oppo.com> Cc: zhangpeng.00@bytedance.com <zhangpeng.00@bytedance.com> Cc: Steve Kang <Steve.Kang@unisoc.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-07-06 | lib/group_cpus: fix NULL pointer dereference from group_cpus_evenly() | Yu Kuai | 1 | -1/+8
commit df831e97739405ecbaddb85516bc7d4d1c933d6b upstream. While testing null_blk with configfs, echo 0 > poll_queues triggers the following panic: BUG: kernel NULL pointer dereference, address: 0000000000000010 Oops: Oops: 0000 [#1] SMP NOPTI CPU: 27 UID: 0 PID: 920 Comm: bash Not tainted 6.15.0-02023-gadbdb95c8696-dirty #1238 PREEMPT(undef) Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.1-2.fc37 04/01/2014 RIP: 0010:__bitmap_or+0x48/0x70 Call Trace: <TASK> __group_cpus_evenly+0x822/0x8c0 group_cpus_evenly+0x2d9/0x490 blk_mq_map_queues+0x1e/0x110 null_map_queues+0xc9/0x170 [null_blk] blk_mq_update_queue_map+0xdb/0x160 blk_mq_update_nr_hw_queues+0x22b/0x560 nullb_update_nr_hw_queues+0x71/0xf0 [null_blk] nullb_device_poll_queues_store+0xa4/0x130 [null_blk] configfs_write_iter+0x109/0x1d0 vfs_write+0x26e/0x6f0 ksys_write+0x79/0x180 __x64_sys_write+0x1d/0x30 x64_sys_call+0x45c4/0x45f0 do_syscall_64+0xa5/0x240 entry_SYSCALL_64_after_hwframe+0x76/0x7e The root cause is that numgrps is set to 0, so ZERO_SIZE_PTR is returned from kcalloc() and is later dereferenced. Fix the problem by checking numgrps first in group_cpus_evenly(), and return NULL directly if numgrps is zero. [yukuai3@huawei.com: also fix the non-SMP version] Link: https://lkml.kernel.org/r/20250620010958.1265984-1-yukuai1@huaweicloud.com Link: https://lkml.kernel.org/r/20250619132655.3318883-1-yukuai1@huaweicloud.com Fixes: 6a6dcae8f486 ("blk-mq: Build default queue map via group_cpus_evenly()") Signed-off-by: Yu Kuai <yukuai3@huawei.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: Jens Axboe <axboe@kernel.dk> Cc: ErKun Yang <yangerkun@huawei.com> Cc: John Garry <john.g.garry@oracle.com> Cc: Thomas Gleinxer <tglx@linutronix.de> Cc: "zhangyi (F)" <yi.zhang@huawei.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
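A sketch of the guard described above (placement assumed; see lib/group_cpus.c for the actual hunk): bail out before any allocation so a zero group count can never produce ZERO_SIZE_PTR.

    struct cpumask *group_cpus_evenly_sketch(unsigned int numgrps)
    {
        if (!numgrps)
            return NULL;   /* avoid kcalloc(0, ...) returning ZERO_SIZE_PTR */

        /* ... original allocation and CPU spreading logic ... */
        return NULL;
    }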
2025-06-27 | pldmfw: Select CRC32 when PLDMFW is selected | Simon Horman | 1 | -0/+1
[ Upstream commit 1224b218a4b9203656ecc932152f4c81a97b4fcc ] pldmfw calls crc32 code and depends on it being enabled, else there is a link error as follows. So PLDMFW should select CRC32. lib/pldmfw/pldmfw.o: In function `pldmfw_flash_image': pldmfw.c:(.text+0x70f): undefined reference to `crc32_le_base' This problem was introduced by commit b8265621f488 ("Add pldmfw library for PLDM firmware update"). It manifests as of commit d69ea414c9b4 ("ice: implement device flash update via devlink"). And is more likely to occur as of commit 9ad19171b6d6 ("lib/crc: remove unnecessary prompt for CONFIG_CRC32 and drop 'default y'"). Found by chance while exercising builds based on tinyconfig. Fixes: b8265621f488 ("Add pldmfw library for PLDM firmware update") Signed-off-by: Simon Horman <horms@kernel.org> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://patch.msgid.link/20250613-pldmfw-crc32-v1-1-f3fad109eee6@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-06-27 | Kunit to check the longest symbol length | Sergio González Collado | 3 | -0/+93
commit c104c16073b7fdb3e4eae18f66f4009f6b073d6f upstream. The longest length of a symbol (KSYM_NAME_LEN) was increased to 512 in the reference [1]. This patch adds a kunit test suite to check the longest symbol length. These tests verify that the longest symbol length defined is supported. This test can also help other efforts for longer symbol lengths, like [2]. The test suite defines one symbol with the longest possible length. The first test verifies whether functions with the name of the created symbol can be called. The second test verifies whether the symbols are created in the kernel symbol table. [1] https://lore.kernel.org/lkml/20220802015052.10452-6-ojeda@kernel.org/ [2] https://lore.kernel.org/lkml/20240605032120.3179157-1-song@kernel.org/ Link: https://lore.kernel.org/r/20250302221518.76874-1-sergio.collado@gmail.com Tested-by: Martin Rodriguez Reboredo <yakoyoku@gmail.com> Reviewed-by: Shuah Khan <skhan@linuxfoundation.org> Reviewed-by: Rae Moar <rmoar@google.com> Signed-off-by: Sergio González Collado <sergio.collado@gmail.com> Link: https://github.com/Rust-for-Linux/linux/issues/504 Reviewed-by: Rae Moar <rmoar@google.com> Acked-by: David Gow <davidgow@google.com> Signed-off-by: Shuah Khan <shuah@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-06-19 | iov_iter: use iov_offset for length calculation in iov_iter_aligned_bvec | Nitesh Shetty | 1 | -1/+1
[ Upstream commit 334d7c4fb60cf21e0abac134d92fe49e9b04377e ] If iov_offset is non-zero, then we need to consider iov_offset in length calculation, otherwise we might pass smaller IOs such as 512 bytes, in below scenario [1]. This issue is reproducible using lib-uring test/fixed-seg.c application with fixed buffer on a 512 LBA formatted device. [1] At present we pass the alignment check, for 512 LBA formatted devices, len_mask = 511 when IO is smaller, i->count = 512 has an offset, i->io_offset = 3584 with bvec values, bvec->bv_offset = 256, bvec->bv_len = 3840. In short, the first 256 bytes are in the current page, next 256 bytes are in the another page. Ideally we expect to fail the IO. I can think of 2 userspace scenarios where we experience this. a: From userspace, we observe a different behaviour when device LBA size is 512 vs 4096 bytes. For 4096 LBA formatted device, I see the same liburing test [2] failing, whereas 512 the test passes without this. This is reproducible everytime. [2] https://github.com/axboe/liburing/ b: Although I was not able to reproduce the below condition, but I suspect below case should be possible from user space for devices with 512 LBA formatted device. Lets say from userspace while allocating a virtually single chunk of memory, if we get 2 physical chunk of memory, and IO happens to be at the boundary of first physical chunk with length crossing first chunk, then we allow IOs to proceed and hence we might map wrong physical address length and proceed with IO rather than failing. : --- a/test/fixed-seg.c : +++ b/test/fixed-seg.c : @@ -64,7 +64,7 @@ static int test(struct io_uring *ring, int fd, int : vec_off) : return T_EXIT_FAIL; : } : : - ret = read_it(ring, fd, 4096, vec_off); : + ret = read_it(ring, fd, 4096, 7*512 + 256); : if (ret) { : fprintf(stderr, "4096 0 failed\n"); : return T_EXIT_FAIL; Effectively this is a write crossing the page boundary. Link: https://lkml.kernel.org/r/20250428095849.11709-1-nj.shetty@samsung.com Fixes: 2263639f96f2 ("iov_iter: streamline iovec/bvec alignment iteration") Reviewed-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Anuj Gupta <anuj20.g@samsung.com> Signed-off-by: Nitesh Shetty <nj.shetty@samsung.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christian Brauner <brauner@kernel.org> Cc: Keith Busch <kbusch@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
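An illustrative helper (hypothetical name, not the kernel function itself) showing the idea behind the one-line fix: the first segment's offset and usable length must include iov_offset, otherwise a check that looks aligned can accept data that actually straddles two pages, as in the 256+256 byte case above.

    static bool bvec_first_seg_aligned(const struct bio_vec *bvec, size_t iov_offset,
                                       unsigned int addr_mask, unsigned int len_mask)
    {
        size_t len = bvec->bv_len - iov_offset;            /* was: bvec->bv_len */
        unsigned long addr = bvec->bv_offset + iov_offset;

        return !((addr & addr_mask) || (len & len_mask));
    }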
2025-06-19 | kunit/usercopy: Disable u64 test on 32-bit SPARC | Thomas Weißschuh | 1 | -0/+1
[ Upstream commit 0d6efa20e384a41a7f4afdcd8a0aec442c19d33e ] usercopy of 64 bit values does not work on 32-bit SPARC: # usercopy_test_valid: EXPECTATION FAILED at lib/tests/usercopy_kunit.c:209 Expected val_u64 == 0x5a5b5c5d6a6b6c6d, but val_u64 == 1515936861 (0x5a5b5c5d) 0x5a5b5c5d6a6b6c6d == 6510899242581322861 (0x5a5b5c5d6a6b6c6d) Disable the test. Fixes: 4c5d7bc63775 ("usercopy: Add tests for all get_user() sizes") Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Link: https://lore.kernel.org/r/20250416-kunit-sparc-usercopy-v1-1-a772054db3af@linutronix.de Signed-off-by: Kees Cook <kees@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-06-19 | kunit: Fix wrong parameter to kunit_deactivate_static_stub() | Tzung-Bi Shih | 1 | -1/+1
[ Upstream commit 772e50a76ee664e75581624f512df4e45582605a ] kunit_deactivate_static_stub() accepts real_fn_addr instead of replacement_addr. In the case, it always passes NULL to kunit_deactivate_static_stub(). Fix it. Link: https://lore.kernel.org/r/20250520082050.2254875-1-tzungbi@kernel.org Fixes: e047c5eaa763 ("kunit: Expose 'static stub' API to redirect functions") Signed-off-by: Tzung-Bi Shih <tzungbi@kernel.org> Reviewed-by: David Gow <davidgow@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
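A usage sketch of the API in question (do_something() and fake_do_something() are made-up names): deactivation takes the address of the real function, not the replacement.

    static void example_stub_test(struct kunit *test)
    {
        kunit_activate_static_stub(test, do_something, fake_do_something);
        /* ... exercise code that ends up calling do_something() ... */
        kunit_deactivate_static_stub(test, do_something);  /* real_fn_addr, not the fake */
    }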
2025-05-29 | crypto: lzo - Fix compression buffer overrun | Herbert Xu | 3 | -26/+96
[ Upstream commit cc47f07234f72cbd8e2c973cdbf2a6730660a463 ] Unlike the decompression code, the compression code in LZO never checked for output overruns. It instead assumes that the caller always provides enough buffer space, disregarding the buffer length provided by the caller. Add a safe compression interface that checks for the end of buffer before each write. Use the safe interface in crypto/lzo. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-05-29 | dql: Fix dql->limit value when reset. | Jing Su | 1 | -1/+1
[ Upstream commit 3a17f23f7c36bac3a3584aaf97d3e3e0b2790396 ] Executing dql_reset after setting a non-zero value for limit_min can lead to an unreasonable situation where dql->limit is less than dql->limit_min. For instance, after setting /sys/class/net/eth*/queues/tx-0/byte_queue_limits/limit_min, an ifconfig down/up operation might cause the ethernet driver to call netdev_tx_reset_queue, which in turn invokes dql_reset. In this case, dql->limit is reset to 0 while dql->limit_min remains a non-zero value, which is unexpected. The limit should always be greater than or equal to limit_min. Signed-off-by: Jing Su <jingsusu@didiglobal.com> Link: https://patch.msgid.link/Z9qHD1s/NEuQBdgH@pilot-ThinkCentre-M930t-N000 Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
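A sketch of the corrected reset, assuming the fix is the single assignment suggested by the diffstat (see lib/dynamic_queue_limits.c for the real change):

    void dql_reset_sketch(struct dql *dql)
    {
        /* never drop the limit below the user-configured floor */
        dql->limit = dql->limit_min;   /* was: dql->limit = 0; */
        /* ... the remaining counters are still cleared as before ... */
    }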
2025-05-02 | crypto: lib/Kconfig - Hide arch options from user | Herbert Xu | 1 | -20/+21
commit 17ec3e71ba797cdb62164fea9532c81b60f47167 upstream. The ARCH_MAY_HAVE patch missed arm64, mips and s390. But it may also lead to arch options being enabled but ineffective because of modular/built-in conflicts. As the primary user of all these options wireguard is selecting the arch options anyway, make the same selections at the lib/crypto option level and hide the arch options from the user. Instead of selecting them centrally from lib/crypto, simply set the default of each arch option as suggested by Eric Biggers. Change the Crypto API generic algorithms to select the top-level lib/crypto options instead of the generic one as otherwise there is no way to enable the arch options (Eric Biggers). Introduce a set of INTERNAL options to work around dependency cycles on the CONFIG_CRYPTO symbol. Fixes: 1047e21aecdf ("crypto: lib/Kconfig - Fix lib built-in failure when arch is modular") Reported-by: kernel test robot <lkp@intel.com> Reported-by: Arnd Bergmann <arnd@kernel.org> Closes: https://lore.kernel.org/oe-kbuild-all/202502232152.JC84YDLp-lkp@intel.com/ Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-05-02 | ubsan: Fix panic from test_ubsan_out_of_bounds | Mostafa Saleh | 1 | -7/+11
[ Upstream commit 9b044614be12d78d3a93767708b8d02fb7dfa9b0 ] Running lib_ubsan.ko on arm64 (without CONFIG_UBSAN_TRAP) panics the kernel: [ 31.616546] Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: test_ubsan_out_of_bounds+0x158/0x158 [test_ubsan] [ 31.646817] CPU: 3 UID: 0 PID: 179 Comm: insmod Not tainted 6.15.0-rc2 #1 PREEMPT [ 31.648153] Hardware name: linux,dummy-virt (DT) [ 31.648970] Call trace: [ 31.649345] show_stack+0x18/0x24 (C) [ 31.650960] dump_stack_lvl+0x40/0x84 [ 31.651559] dump_stack+0x18/0x24 [ 31.652264] panic+0x138/0x3b4 [ 31.652812] __ktime_get_real_seconds+0x0/0x10 [ 31.653540] test_ubsan_load_invalid_value+0x0/0xa8 [test_ubsan] [ 31.654388] init_module+0x24/0xff4 [test_ubsan] [ 31.655077] do_one_initcall+0xd4/0x280 [ 31.655680] do_init_module+0x58/0x2b4 That happens because the test corrupts other data in the stack: 400: d5384108 mrs x8, sp_el0 404: f9426d08 ldr x8, [x8, #1240] 408: f85f83a9 ldur x9, [x29, #-8] 40c: eb09011f cmp x8, x9 410: 54000301 b.ne 470 <test_ubsan_out_of_bounds+0x154> // b.any As there is no guarantee the compiler will order the local variables as declared in the module: volatile char above[4] = { }; /* Protect surrounding memory. */ volatile int arr[4]; volatile char below[4] = { }; /* Protect surrounding memory. */ There is another problem where the out-of-bound index is 5 which is larger than the extra surrounding memory for protection. So, use a struct to enforce the ordering, and fix the index to be 4. Also, remove some of the volatiles and rely on OPTIMIZER_HIDE_VAR() Signed-off-by: Mostafa Saleh <smostafa@google.com> Link: https://lore.kernel.org/r/20250415203354.4109415-1-smostafa@google.com Signed-off-by: Kees Cook <kees@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
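A sketch of the reworked test as described above; the struct wrapper and the index of 4 come from the commit message, the rest of the body is assumed.

    static void test_ubsan_out_of_bounds_sketch(void)
    {
        struct {
            char above[4];   /* Protect surrounding memory. */
            int arr[4];
            char below[4];   /* Protect surrounding memory. */
        } data = { };
        int i = 4;           /* one element past the end of arr */

        OPTIMIZER_HIDE_VAR(i);
        data.arr[i] = i;     /* lands inside 'below', so only UBSAN fires */
    }

Wrapping the three fields in one struct pins their relative order on the stack, and keeping the out-of-bounds index at 4 means the stray write stays inside the guard bytes instead of corrupting the stack canary.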
2025-05-02 | crypto: lib/Kconfig - Fix lib built-in failure when arch is modular | Herbert Xu | 1 | -8/+18
[ Upstream commit 1047e21aecdf17c8a9ab9fd4bd24c6647453f93d ] The HAVE_ARCH Kconfig options in lib/crypto try to solve the modular versus built-in problem, but it still fails when the the LIB option (e.g., CRYPTO_LIB_CURVE25519) is selected externally. Fix this by introducing a level of indirection with ARCH_MAY_HAVE Kconfig options, these then go on to select the ARCH_HAVE options if the ARCH Kconfig options matches that of the LIB option. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202501230223.ikroNDr1-lkp@intel.com/ Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-05-02 | lib/Kconfig.ubsan: Remove 'default UBSAN' from UBSAN_INTEGER_WRAP | Nathan Chancellor | 1 | -1/+0
commit cdc2e1d9d929d7f7009b3a5edca52388a2b0891f upstream. CONFIG_UBSAN_INTEGER_WRAP is 'default UBSAN', which is problematic for a couple of reasons. The first is that this sanitizer is under active development on the compiler side to come up with a solution that is maintainable on the compiler side and usable on the kernel side. As a result of this, there are many warnings when the sanitizer is enabled that have no clear path to resolution yet but users may see them and report them in the meantime. The second is that this option was renamed from CONFIG_UBSAN_SIGNED_WRAP, meaning that if a configuration has CONFIG_UBSAN=y but CONFIG_UBSAN_SIGNED_WRAP=n and it is upgraded via olddefconfig (common in non-interactive scenarios such as CI), CONFIG_UBSAN_INTEGER_WRAP will be silently enabled again. Remove 'default UBSAN' from CONFIG_UBSAN_INTEGER_WRAP until it is ready for regular usage and testing from a broader community than the folks actively working on the feature. Cc: stable@vger.kernel.org Fixes: 557f8c582a9b ("ubsan: Reintroduce signed overflow sanitizer") Signed-off-by: Nathan Chancellor <nathan@kernel.org> Link: https://lore.kernel.org/r/20250414-drop-default-ubsan-integer-wrap-v1-1-392522551d6b@kernel.org Signed-off-by: Kees Cook <kees@kernel.org> [nathan: Fix conflict due to lack of rename from ed2b548f1017 in stable] Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-04-25 | string: Add load_unaligned_zeropad() code path to sized_strscpy() | Peter Collingbourne | 1 | -3/+10
commit d94c12bd97d567de342fd32599e7cd9e50bfa140 upstream. The call to read_word_at_a_time() in sized_strscpy() is problematic with MTE because it may trigger a tag check fault when reading across a tag granule (16 bytes) boundary. To make this code MTE compatible, let's start using load_unaligned_zeropad() on architectures where it is available (i.e. architectures that define CONFIG_DCACHE_WORD_ACCESS). Because load_unaligned_zeropad() takes care of page boundaries as well as tag granule boundaries, also disable the code preventing crossing page boundaries when using load_unaligned_zeropad(). Signed-off-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/If4b22e43b5a4ca49726b4bf98ada827fdf755548 Fixes: 94ab5b61ee16 ("kasan, arm64: enable CONFIG_KASAN_HW_TAGS") Cc: stable@vger.kernel.org Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20250403000703.2584581-2-pcc@google.com Signed-off-by: Kees Cook <kees@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
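An illustrative helper (not the literal lib/string.c code) showing the word-load selection described above:

    static inline unsigned long strscpy_read_word_sketch(const char *src)
    {
    #ifdef CONFIG_DCACHE_WORD_ACCESS
        /* bytes past a faulting page (or MTE tag-granule) boundary read as zero */
        return load_unaligned_zeropad(src);
    #else
        return read_word_at_a_time(src);
    #endif
    }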
2025-04-20 | lib: scatterlist: fix sg_split_phys to preserve original scatterlist offsets | T Pratham | 1 | -2/+0
commit 8b46fdaea819a679da176b879e7b0674a1161a5e upstream. The split_sg_phys function was incorrectly setting the offsets of all scatterlist entries (except the first) to 0. Only the first scatterlist entry's offset and length needs to be modified to account for the skip. Setting the rest entries' offsets to 0 could lead to incorrect data access. I am using this function in a crypto driver that I'm currently developing (not yet sent to mailing list). During testing, it was observed that the output scatterlists (except the first one) contained incorrect garbage data. I narrowed this issue down to the call of sg_split(). Upon debugging inside this function, I found that this resetting of offset is the cause of the problem, causing the subsequent scatterlists to point to incorrect memory locations in a page. By removing this code, I am obtaining expected data in all the split output scatterlists. Thus, this was indeed causing observable runtime effects! This patch removes the offending code, ensuring that the page offsets in the input scatterlist are preserved in the output scatterlist. Link: https://lkml.kernel.org/r/20250319111437.1969903-1-t-pratham@ti.com Fixes: f8bcbe62acd0 ("lib: scatterlist: add sg splitting function") Signed-off-by: T Pratham <t-pratham@ti.com> Cc: Robert Jarzmik <robert.jarzmik@free.fr> Cc: Jens Axboe <axboe@kernel.dk> Cc: Kamlesh Gurudasani <kamlesh@ti.com> Cc: Praneeth Bajjuri <praneeth@ti.com> Cc: Vignesh Raghavendra <vigneshr@ti.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
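An illustrative sketch of the intended behaviour (names and structure assumed, not the literal lib/sg_split.c code): only the first output entry absorbs the skip; the remaining entries keep the offsets of the source scatterlist.

    static void sg_split_copy_sketch(struct scatterlist *out, struct scatterlist *in,
                                     int nents, size_t skip, size_t first_len)
    {
        struct scatterlist *sg = in;
        int i;

        for (i = 0; i < nents; i++, sg = sg_next(sg))
            sg_set_page(&out[i], sg_page(sg),
                        i ? sg->length : first_len,          /* trim only entry 0 */
                        i ? sg->offset : sg->offset + skip); /* keep later offsets */
    }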
2025-04-20 | zstd: Increase DYNAMIC_BMI2 GCC version cutoff from 4.8 to 11.0 to work around compiler segfault | Ingo Molnar | 1 | -1/+1
[ Upstream commit 1400c87e6cac47eb243f260352c854474d9a9073 ] Due to pending percpu improvements in -next, GCC9 and GCC10 are crashing during the build with: lib/zstd/compress/huf_compress.c:1033:1: internal compiler error: Segmentation fault 1033 | { | ^ Please submit a full bug report, with preprocessed source if appropriate. See <file:///usr/share/doc/gcc-9/README.Bugs> for instructions. The DYNAMIC_BMI2 feature is a known-challenging feature of the ZSTD library, with an existing GCC quirk turning it off for GCC versions below 4.8. Increase the DYNAMIC_BMI2 version cutoff to GCC 11.0 - GCC 10.5 is the last version known to crash. Reported-by: Michael Kelley <mhklinux@outlook.com> Debugged-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: https://lore.kernel.org/r/SN6PR02MB415723FBCD79365E8D72CA5FD4D82@SN6PR02MB4157.namprd02.prod.outlook.com Cc: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-04-10 | rust: fix signature of rust_fmt_argument | Alice Ryhl | 1 | -1/+1
[ Upstream commit 901b3290bd4dc35e613d13abd03c129e754dd3dd ] Without this change, the rest of this series will emit the following error message: error[E0308]: `if` and `else` have incompatible types --> <linux>/rust/kernel/print.rs:22:22 | 21 | #[export] | --------- expected because of this 22 | unsafe extern "C" fn rust_fmt_argument( | ^^^^^^^^^^^^^^^^^ expected `u8`, found `i8` | = note: expected fn item `unsafe extern "C" fn(*mut u8, *mut u8, *mut c_void) -> *mut u8 {bindings::rust_fmt_argument}` found fn item `unsafe extern "C" fn(*mut i8, *mut i8, *const c_void) -> *mut i8 {print::rust_fmt_argument}` The error may be different depending on the architecture. To fix this, change the void pointer argument to use a const pointer, and change the imports to use crate::ffi instead of core::ffi for integer types. Fixes: 787983da7718 ("vsprintf: add new `%pA` format specifier") Reviewed-by: Tamir Duberstein <tamird@gmail.com> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Alice Ryhl <aliceryhl@google.com> Acked-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20250303-export-macro-v3-1-41fbad85a27f@google.com Signed-off-by: Miguel Ojeda <ojeda@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-04-10 | lib: 842: Improve error handling in sw842_compress() | Tanya Agarwal | 1 | -0/+2
[ Upstream commit af324dc0e2b558678aec42260cce38be16cc77ca ] The static code analysis tool "Coverity Scan" pointed the following implementation details out for further development considerations: CID 1309755: Unused value In sw842_compress: A value assigned to a variable is never used. (CWE-563) returned_value: Assigning value from add_repeat_template(p, repeat_count) to ret here, but that stored value is overwritten before it can be used. Conclusion: Add error handling for the return value from an add_repeat_template() call. Fixes: 2da572c959dd ("lib: add software 842 compression/decompression") Signed-off-by: Tanya Agarwal <tanyaagarwal25699@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-04-10 | kunit/stackinit: Use fill byte different from Clang i386 pattern | Kees Cook | 1 | -10/+20
[ Upstream commit d985e4399adffb58e10b38dbb5479ef29d53cde6 ] The byte initialization values used with -ftrivial-auto-var-init=pattern (CONFIG_INIT_STACK_ALL_PATTERN=y) depends on the compiler, architecture, and byte position relative to struct member types. On i386 with Clang, this includes the 0xFF value, which means it looks like nothing changes between the leaf byte filling pass and the expected "stack wiping" pass of the stackinit test. Use the byte fill value of 0x99 instead, fixing the test for i386 Clang builds. Reported-by: ernsteiswuerfel Closes: https://github.com/ClangBuiltLinux/linux/issues/2071 Fixes: 8c30d32b1a32 ("lib/test_stackinit: Handle Clang auto-initialization pattern") Tested-by: Nathan Chancellor <nathan@kernel.org> Link: https://lore.kernel.org/r/20250304225606.work.030-kees@kernel.org Signed-off-by: Kees Cook <kees@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-03-07 | rcuref: Plug slowpath race in rcuref_put() | Thomas Gleixner | 1 | -3/+2
commit b9a49520679e98700d3d89689cc91c08a1c88c1d upstream. Kernel test robot reported an "imbalanced put" in the rcuref_put() slow path, which turned out to be a false positive. Consider the following race: ref = 0 (via rcuref_init(ref, 1)) T1 T2 rcuref_put(ref) -> atomic_add_negative_release(-1, ref) # ref -> 0xffffffff -> rcuref_put_slowpath(ref) rcuref_get(ref) -> atomic_add_negative_relaxed(1, &ref->refcnt) -> return true; # ref -> 0 rcuref_put(ref) -> atomic_add_negative_release(-1, ref) # ref -> 0xffffffff -> rcuref_put_slowpath() -> cnt = atomic_read(&ref->refcnt); # cnt -> 0xffffffff / RCUREF_NOREF -> atomic_try_cmpxchg_release(&ref->refcnt, &cnt, RCUREF_DEAD)) # ref -> 0xe0000000 / RCUREF_DEAD -> return true -> cnt = atomic_read(&ref->refcnt); # cnt -> 0xe0000000 / RCUREF_DEAD -> if (cnt > RCUREF_RELEASED) # 0xe0000000 > 0xc0000000 -> WARN_ONCE(cnt >= RCUREF_RELEASED, "rcuref - imbalanced put()") The problem is the additional read in the slow path (after it decremented to RCUREF_NOREF) which can happen after the counter has been marked RCUREF_DEAD. Prevent this by reusing the return value of the decrement. Now every "final" put uses RCUREF_NOREF in the slow path and attempts the final cmpxchg() to RCUREF_DEAD. [ bigeasy: Add changelog ] Fixes: ee1ee6db07795 ("atomics: Provide rcuref - scalable reference counting") Reported-by: kernel test robot <oliver.sang@intel.com> Debugged-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: stable@vger.kernel.org Closes: https://lore.kernel.org/oe-lkp/202412311453.9d7636a2-lkp@intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-02-27 | lib/iov_iter: fix import_iovec_ubuf iovec management | Pavel Begunkov | 1 | -1/+2
commit f4b78260fc678ccd7169f32dc9f3bfa3b93931c7 upstream. import_iovec() says that it should always be fine to kfree the iovec returned in @iovp regardless of the error code. __import_iovec_ubuf() never reallocates it and thus should clear the pointer even in cases when copy_iovec_*() fail. Link: https://lkml.kernel.org/r/378ae26923ffc20fd5e41b4360d673bf47b1775b.1738332461.git.asml.silence@gmail.com Fixes: 3b2deb0e46da ("iov_iter: import single vector iovecs as ITER_UBUF") Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Reviewed-by: Jens Axboe <axboe@kernel.dk> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christian Brauner <brauner@kernel.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
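A minimal sketch of the contract being fixed (simplified and renamed; not the kernel's __import_iovec_ubuf()): the caller-visible iovec pointer is cleared on the single-vector path even when the user copy fails, so an unconditional kfree() in the caller stays safe.

    static ssize_t import_single_ubuf_sketch(const struct iovec __user *uvec,
                                             struct iovec **iovp, struct iov_iter *i)
    {
        struct iovec iov;
        int ret;

        *iovp = NULL;   /* nothing for the caller to kfree() on this path */
        if (copy_from_user(&iov, uvec, sizeof(iov)))
            return -EFAULT;

        ret = import_ubuf(ITER_DEST, iov.iov_base, iov.iov_len, i);
        if (ret)
            return ret;
        return i->count;
    }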
2025-02-17 | maple_tree: simplify split calculation | Wei Yang | 1 | -17/+6
commit 4f6a6bed0bfef4b966f076f33eb4f5547226056a upstream. Patch series "simplify split calculation", v3. This patch (of 3): The current calculation for splitting nodes tries to enforce a minimum span on the leaf nodes. This code is complex and never worked correctly to begin with, due to the min value being passed as 0 for all leaves. The calculation should just split the data as equally as possible between the new nodes. Note that b_end will be one more than the data, so the left side is still favoured in the calculation. The current code may also lead to a deficient node by not leaving enough data for the right side of the split. This issue is also addressed with the split calculation change. [Liam.Howlett@Oracle.com: rephrase the change log] Link: https://lkml.kernel.org/r/20241113031616.10530-1-richard.weiyang@gmail.com Link: https://lkml.kernel.org/r/20241113031616.10530-2-richard.weiyang@gmail.com Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Wei Yang <richard.weiyang@gmail.com> Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com> Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-02-17 | atomic64: Use arch_spin_locks instead of raw_spin_locks | Steven Rostedt | 1 | -30/+48
commit 6c8ad3ab45ad0e94bfb7a9c71f2fa9c6cacea4b2 upstream. raw_spin_locks can be traced by lockdep or tracing itself. Atomic64 operations can be used in the tracing infrastructure. When an architecture does not have true atomic64 operations it can use the generic version that disables interrupts and uses spin_locks. The tracing ring buffer code uses atomic64 operations for the time keeping. But because some architectures use the default operations, the locking inside the atomic operations can cause an infinite recursion. As atomic64 implementation is architecture specific, it should not be using raw_spin_locks() but instead arch_spin_locks as that is the purpose of arch_spin_locks. To be used in architecture specific implementations of generic infrastructure like atomic64 operations. Note, by switching from raw_spin_locks to arch_spin_locks, the locks taken to emulate the atomic64 operations will not have lockdep, mmio, or any kind of checks done on them. They will not even disable preemption, although the code will disable interrupts preventing the tasks that hold the locks from being preempted. As the locks held are done so for very short periods of time, and the logic is only done to emulate atomic64, not having them be instrumented should not be an issue. Cc: stable@vger.kernel.org Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andreas Larsson <andreas@gaisler.com> Link: https://lore.kernel.org/20250122144311.64392baf@gandalf.local.home Fixes: c84897c0ff592 ("ring-buffer: Remove 32bit timestamp logic") Closes: https://lore.kernel.org/all/86fb4f86-a0e4-45a2-a2df-3154acc4f086@gaisler.com/ Reported-by: Ludwig Rydberg <ludwig.rydberg@gaisler.com> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
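A sketch of the locking change described above; the real lib/atomic64.c keeps a small hashed array of locks, but a single lock and one operation are shown here for brevity, with hypothetical names.

    static arch_spinlock_t atomic64_lock_sketch = __ARCH_SPIN_LOCK_UNLOCKED;

    static s64 generic_atomic64_read_sketch(const atomic64_t *v)
    {
        unsigned long flags;
        s64 val;

        /* arch_spin_lock() carries no lockdep or tracing hooks, so tracing
         * code can use emulated atomic64 ops without recursing into itself. */
        local_irq_save(flags);
        arch_spin_lock(&atomic64_lock_sketch);
        val = v->counter;
        arch_spin_unlock(&atomic64_lock_sketch);
        local_irq_restore(flags);
        return val;
    }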
2025-02-17 | lockdep: Fix upper limit for LOCKDEP_*_BITS configs | Carlos Llamas | 1 | -4/+4
[ Upstream commit e638072e61726cae363d48812815197a2a0e097f ] Lockdep has a set of configs used to determine the size of the static arrays that it uses. However, the upper limit that was initially setup for these configs is too high (30 bit shift). This equates to several GiB of static memory for individual symbols. Using such high values leads to linker errors: $ make defconfig $ ./scripts/config -e PROVE_LOCKING --set-val LOCKDEP_BITS 30 $ make olddefconfig all [...] ld: kernel image bigger than KERNEL_IMAGE_SIZE ld: section .bss VMA wraps around address space Adjust the upper limits to the maximum values that avoid these issues. The need for anything more, likely points to a problem elsewhere. Note that LOCKDEP_CHAINS_BITS was intentionally left out as its upper limit had a different symptom and has already been fixed [1]. Reported-by: J. R. Okajima <hooanon05g@gmail.com> Closes: https://lore.kernel.org/all/30795.1620913191@jrobl/ [1] Cc: Peter Zijlstra <peterz@infradead.org> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Waiman Long <longman@redhat.com> Cc: Will Deacon <will@kernel.org> Acked-by: Waiman Long <longman@redhat.com> Signed-off-by: Carlos Llamas <cmllamas@google.com> Signed-off-by: Boqun Feng <boqun.feng@gmail.com> Link: https://lore.kernel.org/r/20241024183631.643450-2-cmllamas@google.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-02-08 | rhashtable: Fix rhashtable_try_insert test | Herbert Xu