path: root/arch

Age         Commit message                                              (Author, files changed, lines -/+)

2021-07-20  x86/fpu: Fix copy_xstate_to_kernel() gap handling  (Thomas Gleixner, 1 file, -44/+61)

[ Upstream commit 9625895011d130033d1bc7aac0d77a9bf68ff8a6 ]

The gap handling in copy_xstate_to_kernel() is wrong when XSAVES is in use. Using init_fpstate for copying the init state of features which are not set in the xstate header is only correct for the legacy area, but not for the extended features area, because when XSAVES is in use init_fpstate is in compacted form, which means the xstate offsets used to copy from init_fpstate are not valid.

Fortunately, this is not a real problem today because all extended features in use have an all-zeros init state, but it is wrong nevertheless, and with a potentially dynamically sized init_fpstate this would result in an access outside of init_fpstate.

Fix this by keeping track of the last copied state in the target buffer and explicitly zeroing it when there is a feature or alignment gap. Use the compacted offset when accessing the extended feature space in init_fpstate.

As this is not a functional issue on older kernels, this is intentionally not tagged for stable.

Fixes: b8be15d58806 ("x86/fpu/xstate: Re-enable XSAVES")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20210623121451.294282032@linutronix.de
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-20  um: fix error return code in winch_tramp()  (Zhen Lei, 1 file, -1/+2)

[ Upstream commit ccf1236ecac476d9d2704866d9a476c86e387971 ]

Return a negative error code from the error handling path instead of 0, as is done elsewhere in this function.

Fixes: 89df6bfc0405 ("uml: DEBUG_SHIRQ fixes")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Acked-By: anton.ivanov@cambridgegreys.com
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Sasha Levin <sashal@kernel.org>

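The bug class here is common enough to sketch. A minimal illustration with hypothetical helper names (not the actual winch_tramp() code), assuming the usual kernel error conventions:

    #include <linux/errno.h>

    /* Hypothetical helpers standing in for the real UML channel code. */
    int open_device(void);
    void *register_handler(void);
    void close_device(void);

    static int setup_channel(void)
    {
            int err = open_device();

            if (err < 0)
                    return err;

            if (!register_handler()) {
                    err = -EINVAL;  /* the missing assignment: without it,
                                     * the stale 0 in 'err' reports success */
                    close_device();
                    return err;
            }
            return 0;
    }
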
2021-07-20  um: fix error return code in slip_open()  (Zhen Lei, 1 file, -1/+2)

[ Upstream commit b77e81fbe5f5fb4ad9a61ec80f6d1e30b6da093a ]

Return a negative error code from the error handling path instead of 0, as is done elsewhere in this function (the same missing-assignment pattern sketched under winch_tramp() above).

Fixes: a3c77c67a443 ("[PATCH] uml: slirp and slip driver cleanups and fixes")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Acked-By: anton.ivanov@cambridgegreys.com
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-20  x86/signal: Detect and prevent an alternate signal stack overflow  (Chang S. Bae, 1 file, -4/+20)

[ Upstream commit 2beb4a53fc3f1081cedc1c1a198c7f56cc4fc60c ]

The kernel pushes context onto the userspace stack to prepare for the user's signal handler. When the user has supplied an alternate signal stack, via sigaltstack(2), it is easy for the kernel to verify that the stack size is sufficient for the current hardware context.

Check whether writing the hardware context to the alternate stack would exceed its size. If it would, then instead of corrupting user data and proceeding with the original signal handler, deliver an immediate SIGSEGV.

Refactor the stack pointer check code from on_sig_stack() and use the new helper.

While the kernel allows new source code to discover and use a sufficient alternate signal stack size, this check is still necessary to protect binaries with an insufficient alternate signal stack size from data corruption.

Fixes: c2bc11f10a39 ("x86, AVX-512: Enable AVX-512 States Context Switch")
Reported-by: Florian Weimer <fweimer@redhat.com>
Suggested-by: Jann Horn <jannh@google.com>
Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Len Brown <len.brown@intel.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20210518200320.17239-6-chang.seok.bae@intel.com
Link: https://bugzilla.kernel.org/show_bug.cgi?id=153531
Signed-off-by: Sasha Levin <sashal@kernel.org>

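Conceptually the check looks like the following sketch; the function name and exact bounds arithmetic are illustrative, not the helper the patch actually adds:

    #include <linux/types.h>

    /* Would pushing 'frame_size' bytes of signal frame, growing down
     * from 'sp', escape the sigaltstack region [ss_sp, ss_sp + ss_size)? */
    static bool altstack_overflows(unsigned long sp, unsigned long frame_size,
                                   unsigned long ss_sp, unsigned long ss_size)
    {
            unsigned long frame_start = sp - frame_size;

            return frame_start < ss_sp || frame_start >= ss_sp + ss_size;
    }

If the check fires, the signal setup path delivers SIGSEGV instead of scribbling past the user's alternate stack.
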
2021-07-20  x86/fpu: Return proper error codes from user access functions  (Thomas Gleixner, 1 file, -7/+12)

[ Upstream commit aee8c67a4faa40a8df4e79316dbfc92d123989c1 ]

When *RSTOR from user memory raises an exception, there is no way to tell which exception it was. That's bad because it forces the slow path even when the failure was not a fault. If the operation raised e.g. #GP, then going through the slow path is pointless.

Use _ASM_EXTABLE_FAULT(), which stores the trap number, and let the exception fixup return the negated trap number as the error code. This makes it possible to separate out the fast path and let it handle faults directly, avoiding the slow path for all other exceptions.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20210623121457.601480369@linutronix.de
Signed-off-by: Sasha Levin <sashal@kernel.org>

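With negated trap numbers as error codes, a caller can react per exception type. A hedged sketch of what that enables; the wrapper names are hypothetical, only X86_TRAP_PF is a real kernel constant:

    #include <linux/errno.h>
    #include <asm/trapnr.h>                             /* X86_TRAP_PF */

    int xrstor_from_user(const void __user *buf);       /* hypothetical wrapper */
    int retry_via_slow_path(const void __user *buf);    /* hypothetical */

    static int restore_user_xstate(const void __user *buf)
    {
            int err = xrstor_from_user(buf);    /* may return -trapnr */

            if (err == -X86_TRAP_PF)
                    return retry_via_slow_path(buf); /* real fault: fix up, retry */
            if (err)
                    return -EINVAL; /* e.g. #GP: malformed xstate, no retry */
            return 0;
    }
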
2021-07-20  ARM: 9087/1: kprobes: test-thumb: fix for LLVM_IAS=1  (Nick Desaulniers, 1 file, -5/+5)

[ Upstream commit 8b95a7d90ce8160ac5cffd5bace6e2eba01a871e ]

There are a few instructions for which GAS infers operands but Clang doesn't; from what I can tell, the Arm ARM doesn't say these are optional:

    F5.1.257 TBB, TBH    T1 Halfword variant
    F5.1.238 STREXD      T1 variant
    F5.1.84  LDREXD      T1 variant

Link: https://github.com/ClangBuiltLinux/linux/issues/1309
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Jian Cai <jiancai@google.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-20  powerpc/boot: Fixup device-tree on little endian  (Benjamin Herrenschmidt, 2 files, -27/+41)

[ Upstream commit c93f80849bdd9b45d834053ae1336e28f0026c84 ]

This fixes the core devtree.c functions and the ns16550 UART backend.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/YMwXrPT8nc4YUdJ9@thinks.paulus.ozlabs.org
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-20  s390/mem_detect: fix tprot() program check new psw handling  (Heiko Carstens, 1 file, -11/+17)

[ Upstream commit da9057576785aaab52e706e76c0475c85b77ec14 ]

The tprot() inline asm temporarily changes the program check new psw to redirect a potential program check on the tprot instruction. Restoring the program check new psw is done in C code behind the inline asm. This can be problematic, especially if the function is inlined, since the compiler can reorder instructions in such a way that a different instruction, which may itself result in a program check, is executed before the program check new psw has been restored.

To avoid such a scenario, move the restore into the inline asm. For consistency, also move the save of the original program check new psw into the inline asm.

Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>

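The hazard this (and the two related s390 fixes below) addresses looks roughly like the following; this is an illustrative shape with simplified stand-ins, not the actual lowcore code:

    /* Simplified stand-ins for the real s390 lowcore fields. */
    typedef struct { unsigned long mask, addr; } psw_t;
    extern psw_t program_new_psw;       /* program check new psw */
    extern psw_t local_fixup_psw;

    static void probe_with_fixup(void)
    {
            psw_t old = program_new_psw;        /* save, in C      */
            program_new_psw = local_fixup_psw;  /* redirect faults */

            asm volatile("" ::: "memory");      /* stands in for the
                                                 * tprot/diag insn that
                                                 * may program-check */

            program_new_psw = old;              /* restore, in C: after
                                                 * inlining, the compiler may
                                                 * schedule later faulting code
                                                 * above this store, hence the
                                                 * fix moves save/restore into
                                                 * the asm statement itself */
    }
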
2021-07-20  s390/mem_detect: fix diag260() program check new psw handling  (Heiko Carstens, 1 file, -8/+11)

[ Upstream commit 86807f348f418a84970eebb8f9912a7eea16b497 ]

The __diag260() inline asm temporarily changes the program check new psw to redirect a potential program check on the diag instruction. Restoring the program check new psw is done in C code behind the inline asm. This can be problematic, especially if the function is inlined, since the compiler can reorder instructions in such a way that a different instruction, which may itself result in a program check, is executed before the program check new psw has been restored.

To avoid such a scenario, move the restore into the inline asm. For consistency, also move the save of the original program check new psw into the inline asm.

Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-20  s390/ipl_parm: fix program check new psw handling  (Heiko Carstens, 1 file, -8/+11)

[ Upstream commit 88c2510cecb7e2b518e3c4fdf3cf0e13ebe9377c ]

The __diag308() inline asm temporarily changes the program check new psw to redirect a potential program check on the diag instruction. Restoring the program check new psw is done in C code behind the inline asm. This can be problematic, especially if the function is inlined, since the compiler can reorder instructions in such a way that a different instruction, which may itself result in a program check, is executed before the program check new psw has been restored.

To avoid such a scenario, move the restore into the inline asm. For consistency, also move the save of the original program check new psw into the inline asm.

Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-20  s390/processor: always inline stap() and __load_psw_mask()  (Heiko Carstens, 1 file, -2/+2)

[ Upstream commit 9c9a915afd90f7534c16a71d1cd44b58596fddf3 ]

s390 is the only architecture which makes use of the __no_kasan_or_inline attribute, for two functions. Given that both stap() and __load_psw_mask() are very small functions, they can and should always be inlined anyway. Therefore get rid of __no_kasan_or_inline and always inline these functions.

Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-20  powerpc/mm/book3s64: Fix possible build error  (Aneesh Kumar K.V, 1 file, -9/+17)

[ Upstream commit 07d8ad6fd8a3d47f50595ca4826f41dbf4f3a0c6 ]

Update _tlbiel_pid() so that we avoid build errors like the one below when the function is used in other places:

    arch/powerpc/mm/book3s64/radix_tlb.c: In function '__radix__flush_tlb_range_psize':
    arch/powerpc/mm/book3s64/radix_tlb.c:114:2: warning: 'asm' operand 3 probably does not match constraints
      114 |  asm volatile(PPC_TLBIEL(%0, %4, %3, %2, %1)
          |  ^~~
    arch/powerpc/mm/book3s64/radix_tlb.c:114:2: error: impossible constraint in 'asm'
    make[4]: *** [scripts/Makefile.build:271: arch/powerpc/mm/book3s64/radix_tlb.o] Error 1

With this fix, we can also drop the __always_inline on __radix__flush_tlb_range_psize(), which was added by commit e12d6d7d46a6 ("powerpc/mm/radix: mark __radix__flush_tlb_range_psize() as __always_inline").

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210610083639.387365-1-aneesh.kumar@linux.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-20  powerpc/ps3: Add dma_mask to ps3_dma_region  (Geoff Levand, 2 files, -0/+14)

[ Upstream commit 9733862e50fdba55e7f1554e4286fcc5302ff28e ]

Commit f959dcd6ddfd ("dma-direct: Fix potential NULL pointer dereference") added a NULL check on the dma_mask pointer of the kernel's device structure.

Add a dma_mask variable to the ps3_dma_region structure and set the device structure's dma_mask pointer to point at this new variable.

Fixes runtime errors like these:

    ps3_system_bus_match:349: dev=8.0(sb_01), drv=8.0(ps3flash): match
    WARNING: CPU: 0 PID: 1 at kernel/dma/mapping.c:151 .dma_map_page_attrs+0x34/0x1e0
    ps3flash sb_01: ps3stor_setup:193: map DMA region failed

Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/562d0c9ea0100a30c3b186bcc7adb34b0bbd2cd7.1622746428.git.geoff@infradead.org
Signed-off-by: Sasha Levin <sashal@kernel.org>

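The shape of the fix, as a hedged sketch; the field placement, helper name, and mask width are assumptions based only on the summary above:

    #include <linux/device.h>
    #include <linux/dma-mapping.h>

    struct ps3_dma_region {
            /* ... existing members ... */
            u64 dma_mask;   /* new: storage the device can point at */
    };

    /* During device setup (illustrative, not the exact ps3 init path). */
    static void ps3_dma_region_init_mask(struct device *dev,
                                         struct ps3_dma_region *r)
    {
            r->dma_mask = DMA_BIT_MASK(32); /* assumed width for the sketch */
            dev->dma_mask = &r->dma_mask;   /* no longer NULL for dma-direct */
    }
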
2021-07-20  s390: disable SSP when needed  (Fabrice Fontaine, 2 files, -0/+2)

[ Upstream commit 42e8d652438f5ddf04e5dac299cb5e623d113dc0 ]

Though -nostdlib is passed in PURGATORY_LDFLAGS and -ffreestanding in KBUILD_CFLAGS_DECOMPRESSOR, -fno-stack-protector must also be passed to avoid linking errors related to undefined references to '__stack_chk_guard' and '__stack_chk_fail' if the toolchain enforces -fstack-protector.

Fixes - https://gitlab.com/kubu93/buildroot/-/jobs/1247043361

Signed-off-by: Fabrice Fontaine <fontaine.fabrice@gmail.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Reviewed-by: Alexander Egorenkov <egorenar@linux.ibm.com>
Tested-by: Alexander Egorenkov <egorenar@linux.ibm.com>
Link: https://lore.kernel.org/r/20210510053133.1220167-1-fontaine.fabrice@gmail.com
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-20  s390/sclp_vt220: fix console name to match device  (Valentin Vidic, 1 file, -1/+1)

[ Upstream commit b7d91d230a119fdcc334d10c9889ce9c5e15118b ]

The console name reported in /proc/consoles:

    ttyS1                -W- (EC p  )    4:65

does not match the char device name:

    crw--w---- 1 root root 4, 65 May 17 12:18 /dev/ttysclp0

so debian-installer inside a QEMU s390x instance gets confused and fails to start with the following error:

    steal-ctty: No such file or directory

Signed-off-by: Valentin Vidic <vvidic@valentin-vidic.from.hr>
Link: https://lore.kernel.org/r/20210427194010.9330-1-vvidic@valentin-vidic.from.hr
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-20  KVM: X86: Disable hardware breakpoints unconditionally before kvm_x86->run()  (Lai Jiangshan, 1 file, -0/+2)

commit f85d40160691881a17a397c448d799dfc90987ba upstream.

When the host is using debug registers but the guest is not using them, nor is the guest in guest-debug state, the kvm code does not reset the host debug registers before kvm_x86->run(). Rather, it relies on the hardware vmentry instruction to automatically reset the dr7 register, which ensures that host breakpoints do not affect the guest.

This however violates the non-instrumentable nature around VM entry and exit; for example, a host breakpoint set on vcpu->arch.cr2 can fire inside that non-instrumentable window.

Another issue is consistency. When the guest debug registers are active, the host breakpoints are reset before kvm_x86->run(). But when the guest debug registers are inactive, disabling the host breakpoints is delayed. Host tracing tools may therefore see different results depending on what the guest is doing.

To fix these problems, clear %db7 unconditionally before kvm_x86->run() if the host has set any breakpoints, no matter whether the guest is using them or not.

Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
Message-Id: <20210628172632.81029-1-jiangshanlai@gmail.com>
Cc: stable@vger.kernel.org
[Only clear %db7 instead of reloading all debug registers. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

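Given the -0/+2 diffstat, the fix is essentially two lines in the vcpu-entry path. A hedged sketch of the fragment (its exact placement within vcpu_enter_guest() is an assumption; hw_breakpoint_active() and set_debugreg() are real x86 kernel helpers):

    /* Before entering the guest: if any host hardware breakpoint is
     * armed, disarm DR7 so it cannot fire across VM entry/exit. */
    if (unlikely(hw_breakpoint_active()))
            set_debugreg(0, 7);
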
2021-07-20  KVM: nSVM: Check the value written to MSR_VM_HSAVE_PA  (Vitaly Kuznetsov, 1 file, -1/+10)

commit fce7e152ffc8f89d02a80617b16c7aa1527847c8 upstream.

The APM states that #GP is raised upon a write to MSR_VM_HSAVE_PA when the supplied address is not page-aligned or is outside of the "maximum supported physical address for this implementation". The page_address_valid() check seems suitable. Also, forcefully page-align the address when it's written from the VMM.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210628104425.391276-2-vkuznets@redhat.com>
Cc: stable@vger.kernel.org
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
[Add comment about behavior for host-provided values. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

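A hedged sketch of the MSR-write case described above; the surrounding svm_set_msr() switch and exact field name are assumptions:

    case MSR_VM_HSAVE_PA:
            /* Guest-initiated writes: #GP on unaligned or out-of-range
             * addresses, per the APM; page_address_valid() covers both. */
            if (!msr->host_initiated && !page_address_valid(vcpu, data))
                    return 1;       /* non-zero return injects #GP */

            /* Host-provided values are page-aligned instead of rejected,
             * so older userspace that sets stray low bits keeps working. */
            svm->nested.hsave_msr = data & PAGE_MASK;
            break;
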
2021-07-20  KVM: x86/mmu: Do not apply HPA (memory encryption) mask to GPAs  (Sean Christopherson, 4 files, -8/+18)

commit fc9bf2e087efcd81bda2e52d09616d2a1bf982a8 upstream.

Ignore "dynamic" host adjustments to the physical address mask when generating the masks for guest PTEs, i.e. the guest PA masks. The host physical address space and guest physical address space are two different beasts; e.g. even though SEV's C-bit is in the same bit location for both host and guest, disabling SME in the host (which clears shadow_me_mask) does not affect the guest PTE->GPA "translation".

For non-SEV guests, not dropping bits is the correct behavior. Assuming KVM and userspace correctly enumerate/configure guest MAXPHYADDR, bits that are lost as collateral damage from memory encryption are treated as reserved bits, i.e. KVM will never get to the point where it attempts to generate a gfn using the affected bits. And if userspace wants to create a bogus vCPU, then userspace gets to deal with the fallout of hardware doing odd things with bad GPAs.

For SEV guests, not dropping the C-bit is technically wrong, but it's a moot point because KVM can't read an SEV guest's page tables in any case, since they're always encrypted. Not to mention that the current KVM code is also broken, since sme_me_mask does not have to be non-zero for SEV to be supported by KVM.

The proper fix would be to teach all of KVM to correctly handle guest private memory, but that's a task for the future.

Fixes: d0ec49d4de90 ("kvm/x86/svm: Support Secure Memory Encryption within KVM")
Cc: stable@vger.kernel.org
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210623230552.4027702-5-seanjc@google.com>
[Use a new header instead of adding header guards to paging_tmpl.h. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2021-07-20  KVM: x86: Use guest MAXPHYADDR from CPUID.0x8000_0008 iff TDP is enabled  (Sean Christopherson, 1 file, -1/+7)

commit 4bf48e3c0aafd32b960d341c4925b48f416f14a5 upstream.

Ignore the guest MAXPHYADDR reported by CPUID.0x8000_0008 if TDP, i.e. NPT, is disabled, and instead use the host's MAXPHYADDR. Per AMD's APM:

    Maximum guest physical address size in bits. This number applies
    only to guests using nested paging. When this field is zero, refer
    to the PhysAddrSize field for the maximum guest physical address
    size.

Fixes: 24c82e576b78 ("KVM: Sanitize cpuid")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210623230552.4027702-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

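In cpuid-handler terms, the rule reduces to something like the following fragment; the variable names are borrowed from KVM's 0x80000008 handling, but the exact code is an assumption:

    /* CPUID.0x8000_0008: phys_as = host MAXPHYADDR, g_phys_as = guest
     * MAXPHYADDR (meaningful only with nested paging, per the APM). */
    if (!tdp_enabled)
            g_phys_as = boot_cpu_data.x86_phys_bits; /* shadow paging: host walks */
    else if (!g_phys_as)
            g_phys_as = phys_as;                     /* zero means "use PhysAddrSize" */
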
2021-07-19  arm64: tlb: fix the TTL value of tlb_get_level  (Zhenyu Ye, 1 file, -0/+4)

commit 52218fcd61cb42bde0d301db4acb3ffdf3463cc7 upstream.

The TTL field indicates the level of the page table walk holding the *leaf* entry for the address being invalidated. But currently, the TTL field may be set to an incorrect value in the following call stack:

    pte_free_tlb
      __pte_free_tlb
        tlb_remove_table
          tlb_table_invalidate
            tlb_flush_mmu_tlbonly
              tlb_flush

In this case, we just want to flush a PTE page, but tlb->cleared_pmds is set and we get tlb_level = 2 in the tlb_get_level() function. This may cause unexpected problems.

This patch sets the TTL field to 0 if tlb->freed_tables is set. tlb->freed_tables indicates that page table pages were freed, not the leaf entry.

Cc: <stable@vger.kernel.org> # 5.9.x
Fixes: c4ab2cbc1d87 ("arm64: tlb: Set the TTL field in flush_tlb_range")
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: ZhuRui <zhurui3@huawei.com>
Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com>
Link: https://lore.kernel.org/r/b80ead47-1f88-3a00-18e1-cacc22f54cc4@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

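Given the -0/+4 diffstat, the fix plausibly amounts to an early-out in tlb_get_level(). A hedged sketch, with the existing level logic stubbed out behind a hypothetical helper:

    /* Stands in for the existing cleared_ptes/pmds/puds -> level logic. */
    int tlb_level_from_cleared_bits(struct mmu_gather *tlb);

    static inline int tlb_get_level(struct mmu_gather *tlb)
    {
            /* Page-table pages were freed, not a leaf entry: any TTL
             * hint derived from the cleared_* bits would be wrong, so
             * report "level unknown" (0). */
            if (tlb->freed_tables)
                    return 0;

            return tlb_level_from_cleared_bits(tlb);
    }
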
2021-07-19  powerpc/powernv/vas: Release reference to tgid during window close  (Haren Myneni, 1 file, -4/+5)

commit 91cdbb955aa94ee0841af4685be40937345d29b8 upstream.

The kernel handles an NX fault by updating the CSB or sending a signal to the process. In multithreaded applications, children can open VAS windows and exit without closing them, while the parent continues to send NX requests through those windows. To prevent pid reuse, references are taken on the pid and tgid when a window is opened, and released during window close. The current code does not release the tgid reference, which can cause a pid leak; this patch fixes the issue.

Fixes: db1c08a740635 ("powerpc/vas: Take reference to PID and mm for user space windows")
Cc: stable@vger.kernel.org # 5.8+
Reported-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Haren Myneni <haren@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/6020fc4d444864fe20f7dcdc5edfe53e67480a1c.camel@linux.ibm.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

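A hedged sketch of the missing release in the window-close teardown; the 'window' field names are assumed from the commit text, put_pid() is the real kernel API:

    /* Drop both references taken at window-open time (simplified). */
    put_pid(window->pid);
    put_pid(window->tgid);  /* previously missing, causing the pid leak */
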
2021-07-19  powerpc/barrier: Avoid collision with clang's __lwsync macro  (Nathan Chancellor, 1 file, -0/+2)

commit 015d98149b326e0f1f02e44413112ca8b4330543 upstream.

A change in clang 13 results in the __lwsync macro being defined as __builtin_ppc_lwsync, which emits 'lwsync' or 'msync' depending on what the target supports. This breaks the build because of -Werror in arch/powerpc, along with thousands of warnings:

    In file included from arch/powerpc/kernel/pmc.c:12:
    In file included from include/linux/bug.h:5:
    In file included from arch/powerpc/include/asm/bug.h:109:
    In file included from include/asm-generic/bug.h:20:
    In file included from include/linux/kernel.h:12:
    In file included from include/linux/bitops.h:32:
    In file included from arch/powerpc/include/asm/bitops.h:62:
    arch/powerpc/include/asm/barrier.h:49:9: error: '__lwsync' macro redefined [-Werror,-Wmacro-redefined]
    #define __lwsync() __asm__ __volatile__ (stringify_in_c(LWSYNC) : : :"memory")
            ^
    <built-in>:308:9: note: previous definition is here
    #define __lwsync __builtin_ppc_lwsync
            ^
    1 error generated.

Undefine this macro so that the runtime patching introduced by commit 2d1b2027626d ("powerpc: Fixup lwsync at runtime") continues to work properly with clang and the build no longer breaks.

Cc: stable@vger.kernel.org
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://github.com/ClangBuiltLinux/linux/issues/1386
Link: https://github.com/llvm/llvm-project/commit/62b5df7fe2b3fda1772befeda15598fbef96a614
Link: https://lore.kernel.org/r/20210528182752.1852002-1-nathan@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

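Matching the -0/+2 diffstat, the fix is most likely just an #undef (plus a comment) ahead of the kernel's own definition in arch/powerpc/include/asm/barrier.h; a sketch, where the #define line is taken from the error output above:

    /* clang 13+ predefines __lwsync as __builtin_ppc_lwsync; drop it so
     * the kernel's runtime-patchable definition below takes effect. */
    #undef __lwsync
    #define __lwsync() __asm__ __volatile__ (stringify_in_c(LWSYNC) : : :"memory")
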
2021-07-19  powerpc/mm: Fix lockup on kernel exec fault  (Christophe Leroy, 1 file, -3/+1)

commit cd5d5e602f502895e47e18cd46804d6d7014e65c upstream.

The powerpc kernel is not prepared to handle exec faults from kernel. Especially, the function is_exec_fault() will return 'false' when an exec fault is taken by kernel, because the check is based on reading current->thread.regs->trap, which contains the trap from user. For instance, when provoking a LKDTM EXEC_USERSPACE test, current->thread.regs->trap is set to SYSCALL trap (0xc00), and the fault taken by the kernel is not seen as an exec fault by set_access_flags_filter().

Commit d7df2443cd5f ("powerpc/mm: Fix spurious segfaults on radix with autonuma") made it clear and handled it properly. But later on, commit d3ca587404b3 ("powerpc/mm: Fix reporting of kernel execute faults") removed that handling, introducing a test based on error_code. And here is the problem, because on the 603 all upper bits of SRR1 get cleared when the TLB instruction miss handler bails out to ISI.

Until commit cbd7e6ca0210 ("powerpc/fault: Avoid heavy search_exception_tables() verification"), an exec fault from kernel at a userspace address was indirectly caught by the lack of an entry for that address in the exception tables. But after that commit the kernel mainly relies on KUAP or on core mm handling to catch wrong user accesses. Here the access is not wrong, so mm handles it. It is a minor fault because PAGE_EXEC is not set; set_access_flags_filter() should set PAGE_EXEC and voila. But as is_exec_fault() returns false, as explained in the beginning, set_access_flags_filter() bails out without setting the PAGE_EXEC flag, which leads to a forever minor exec fault.

As the kernel is not prepared to handle such exec faults, the thing to do is to fire in bad_kernel_fault() for any exec fault taken by the kernel, as it was prior to commit d3ca587404b3.

Fixes: d3ca587404b3 ("powerpc/mm: Fix reporting of kernel execute faults")
Cc: stable@vger.kernel.org # v4.14+
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/024bb05105050f704743a0083fe3548702be5706.1625138205.git.christophe.leroy@csgroup.eu
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

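A hedged sketch of the resulting check, as a fragment inside bad_kernel_fault(); the message text and exact condition are assumptions:

    /* Any exec fault taken by the kernel is fatal: is_exec_fault()
     * cannot be trusted here (see the 603/SRR1 note above), and the
     * kernel must never execute from a faulting address anyway. */
    if (is_exec) {
            pr_crit_ratelimited("kernel tried to execute %s page (%lx)\n",
                                address >= TASK_SIZE ? "exec-protected"
                                                     : "user",
                                address);
            return true;    /* treat as bad kernel fault, don't loop */
    }
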
2021-07-19  arm64: dts: rockchip: Enable USB3 for rk3328 Rock64  (Cameron Nemo, 1 file, -0/+5)

commit bbac8bd65f5402281cb7b0452c1c5f367387b459 upstream.

Enable the USB3 nodes for the rk3328-based PINE Rock64 board. The separate power regulator is not added, as it is controlled by the same GPIO line as the existing VBUS regulators and is therefore already enabled; there is also no port representation to tie the regulator to.

[wens@csie.org: Rebased onto v5.12]
Signed-off-by: Cameron Nemo <cnemo@tutanota.com>
[wens@csie.org: Rewrote commit message]
Signed-off-by: Chen-Yu Tsai <wens@csie.org>
Link: https://lore.kernel.org/r/20210504083616.9654-2-wens@kernel.org
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2021-07-19  arm64: dts: rockchip: add rk3328 dwc3 usb controller node  (Cameron Nemo, 1 file, -0/+19)

commit 44dd5e2106dc2fd01697b539085818d1d1c58df0 upstream.

RK3328 SoCs have one USB 3.0 OTG controller which uses the DWC_USB3 core's general architecture. It can act as a static xHCI host controller, a static device controller, or a USB 3.0/2.0 OTG, depending on the ID of the USB 3.0 PHY.

Signed-off-by: William Wu <william.wu@rock-chips.com>
Signed-off-by: Cameron Nemo <cnemo@tutanota.com>
Signed-off-by: Johan Jonker <jbx6244@gmail.com>
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Link: https://lore.kernel.org/r/20210209192350.7130-7-jbx6244@gmail.com
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2021-07-19  MIPS: MT extensions are not available on MIPS32r1  (Paul Cercueil, 1 file, -1/+3)

commit cad065ed8d8831df67b9754cc4437ed55d8b48c0 upstream.

The MIPS MT extensions were added with the MIPS 34K processor, which was based on the MIPS32r2 ISA. This fixes a build error when building a generic kernel for a MIPS32r1 CPU.

Fixes: c434b9f80b09 ("MIPS: Kconfig: add MIPS_GENERIC_KERNEL symbol")
Cc: stable@vger.kernel.org # v5.9
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2021-07-19  MIPS: set mips32r5 for virt extensions  (Nick Desaulniers, 1 file, -4/+4)

[ Upstream commit c994a3ec7ecc8bd2a837b2061e8a76eb8efc082b ]

Clang's integrated assembler only accepts these instructions when the CPU is set to mips32r5. With this change, we can assemble malta_defconfig with Clang via `make LLVM_IAS=1`.

Link: https://github.com/ClangBuiltLinux/linux/issues/763
Reported-by: Dmitry Golovin <dima@golovin.in>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-19  MIPS: loongson64: Reserve memory below starting pfn to prevent Oops  (zhanglianjie, 1 file, -0/+3)

[ Upstream commit 6817c944430d00f71ccaa9c99ff5b0096aeb7873 ]

The cause of the problem is as follows:

1. When /sys/devices/system/memory/memory0/valid_zones is read, test_pages_in_a_zone() is called.
2. test_pages_in_a_zone() looks up the zone for start_pfn = 0. The smallest pfn of the NUMA node on this MIPS platform is 128, and the pages for pfns 0..127 are never initialized (page->flags is 0xFFFFFFFF).
3. The nid and zonenum obtained via page_zone(pfn_to_page(0)) are therefore out of bounds in the corresponding array, &NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)]; accessing the out-of-bounds zone member variables results in an Oops.

Therefore, it is necessary to reserve the pages between pfn 0 and the node's minimum pfn to prevent the Oops.

Signed-off-by: zhanglianjie <zhanglianjie@uniontech.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-19  MIPS: add PMD table accounting into MIPS' pmd_alloc_one  (Huang Pei, 1 file, -3/+7)

[ Upstream commit ed914d48b6a1040d1039d371b56273d422c0081e ]

This fixes a page-table accounting bug. MIPS is the only arch that defines __HAVE_ARCH_PMD_ALLOC_ONE on its own. Since commit b2b29d6d0119 ("mm: account PMD tables like PTE tables"), pmd_free() in asm-generic does PMD table accounting while MIPS' pmd_alloc_one() does not, which drives the PageTable accounting counter negative; global_zone_page_state(), which reads it, then always returns 0.

Signed-off-by: Huang Pei <huangpei@loongson.cn>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>

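The balanced allocator then looks roughly like this sketch, patterned on the asm-generic version; PMD_ORDER, pmd_init() and invalid_pte_table are MIPS specifics assumed from context:

    static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
    {
            struct page *pg;
            pmd_t *pmd;

            pg = alloc_pages(GFP_KERNEL, PMD_ORDER);
            if (!pg)
                    return NULL;
            if (!pgtable_pmd_page_ctor(pg)) {   /* the accounting the fix adds */
                    __free_pages(pg, PMD_ORDER);
                    return NULL;
            }
            pmd = (pmd_t *)page_address(pg);
            pmd_init((unsigned long)pmd, (unsigned long)invalid_pte_table);
            return pmd;
    }

With the ctor in place, pmd_free()'s matching dtor-side accounting in asm-generic no longer drives the counter negative.
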
2021-07-19  MIPS: ingenic: Select CPU_SUPPORTS_CPUFREQ && MIPS_EXTERNAL_TIMER  (Paul Cercueil, 1 file, -0/+2)

[ Upstream commit eb3849370ae32b571e1f9a63ba52c61adeaf88f7 ]

The clock driving the XBurst CPUs in Ingenic SoCs is integer-divided from the main PLL. As such, it is possible to control the frequency of the CPU, either by changing the divider or by changing the rate of the main PLL.

The XBurst CPUs also lack the CP0 timer; the TCU, a separate piece of hardware in the SoC, provides this functionality.

Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-19  MIPS: cpu-probe: Fix FPU detection on Ingenic JZ4760(B)  (Paul Cercueil, 1 file, -0/+5)

[ Upstream commit fc52f92a653215fbd6bc522ac5311857b335e589 ]

The Ingenic JZ4760 and JZ4760B do have an FPU, but the config registers don't report it. Force FPU detection when the processor ID matches the JZ4760(B) one.

Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-19  hugetlb: clear huge pte during flush function on mips platform  (Bibo Mao, 1 file, -1/+7)

[ Upstream commit 33ae8f801ad8bec48e886d368739feb2816478f2 ]

If multiple threads access the same huge page at the same time, hugetlb_cow() is called when one thread writes to the COW huge page, and huge_ptep_clear_flush() is called to notify the other threads to clear the huge pte TLB entry. The other threads clear the huge pte TLB entry and reload it from the page table, but the reloaded huge pte entry may be stale.

This patch fixes the issue on the MIPS platform: it clears the huge pte entry before notifying the other threads to flush the current huge page entry, which is similar to what other architectures do.

Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>

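The ordering fix, sketched against the generic helper shape; a plausible reconstruction, not the verbatim MIPS patch:

    static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
                                              unsigned long addr, pte_t *ptep)
    {
            /* Clear the pte *first*, so a racing thread that takes the
             * TLB flush below cannot reload a stale huge mapping from
             * the page table before the new one is installed. */
            pte_t pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);

            flush_tlb_page(vma, addr);
            return pte;
    }
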
2021-07-14  powerpc/preempt: Don't touch the idle task's preempt_count during hotplug  (Valentin Schneider, 2 files, -6/+0)

commit 2c669ef6979c370f98d4b876e54f19613c81e075 upstream.

Powerpc currently resets a CPU's idle task preempt_count to 0 before said task starts executing the secondary startup routine (and becomes an idle task proper). This conflicts with commit f1a0a376ca0c ("sched/core: Initialize the idle task with preemption disabled"), which initializes all of the idle tasks' preempt_count to PREEMPT_DISABLED during smp_init().

Note that this was superfluous before said commit, as back then the hotplug machinery would invoke init_idle() via idle_thread_get(), which would have already reset the CPU's idle task's preempt_count to PREEMPT_ENABLED.

Get rid of this preempt_count write.

Fixes: f1a0a376ca0c ("sched/core: Initialize the idle task with preemption disabled")
Reported-by: Bharata B Rao <bharata@linux.ibm.com>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Bharata B Rao <bharata@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210707183831.2106509-1-valentin.schneider@arm.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2021-07-14  s390: preempt: Fix preempt_count initialization  (Valentin Schneider, 3 files, -12/+6)

commit 6a942f5780545ebd11aca8b3ac4b163397962322 upstream.

S390's init_idle_preempt_count(p, cpu) doesn't actually let us initialize the preempt_count of the requested CPU's idle task: it unconditionally writes to the current CPU's. This clearly conflicts with idle_threads_init(), which intends to initialize *all* the idle tasks, including their preempt_count (or their CPU's, if the arch uses a per-CPU preempt_count).

Unfortunately, it seems the way s390 does things doesn't let us initialize every possible CPU's preempt_count early on, as the pages where this resides are only allocated when a CPU is brought up and are freed when it is brought down.

Let the arch-specific code set a CPU's preempt_count when its lowcore is allocated, and turn init_idle_preempt_count() into an empty stub.

Fixes: f1a0a376ca0c ("sched/core: Initialize the idle task with preemption disabled")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Link: https://lore.kernel.org/r/20210707163338.1623014-1-valentin.schneider@arm.com
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

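After the change, the generic hook becomes a no-op on s390; a sketch of the stub (the real initialization happens wherever the CPU's lowcore is allocated at bring-up):

    /* s390: the per-CPU preempt_count lives in the lowcore, which does
     * not exist yet at idle_threads_init() time; it is set to
     * PREEMPT_DISABLED when the lowcore is allocated instead. */
    static inline void init_idle_preempt_count(struct task_struct *p, int cpu)
    {
    }
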
2021-07-14  csky: syscache: Fixup duplicate cache flush  (Guo Ren, 1 file, -5/+7)

[ Upstream commit 6ea42c84f33368eb3fe1ec1bff8d7cb1a5c7b07a ]

The current csky logic in sys_cacheflush() is wrong: the icache flush case falls through and calls the dcache flush as well. Fix it with a conditional "break & fallthrough".

Fixes: 997153b9a75c ("csky: Add flush_icache_mm to defer flush icache all")
Fixes: 0679d29d3e23 ("csky: fix syscache.c fallthrough warning")
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Co-Developed-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>

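One possible shape of the "conditional break & fallthrough" (a sketch; the real switch lives in arch/csky/mm/syscache.c):

    switch (cache) {
    case BOTH:
    case DCACHE:
            dcache_wb_range(addr, addr + bytes);
            if (cache == DCACHE)
                    break;          /* DCACHE alone stops here ...   */
            fallthrough;            /* ... BOTH continues to icache  */
    case ICACHE:
            flush_icache_mm_range(current->mm, addr, addr + bytes);
            break;                  /* ICACHE no longer hits dcache  */
    default:
            return -EINVAL;
    }
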
2021-07-14  csky: fix syscache.c fallthrough warning  (Randy Dunlap, 1 file, -0/+1)

[ Upstream commit 0679d29d3e2351a1c3049c26a63ce1959cad5447 ]

This case of the switch statement falls through to the following case. This appears to be on purpose, so declare it as OK.

    ../arch/csky/mm/syscache.c: In function '__do_sys_cacheflush':
    ../arch/csky/mm/syscache.c:17:3: warning: this statement may fall through [-Wimplicit-fallthrough=]
       17 |   flush_icache_mm_range(current->mm,
          |   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
       18 |   (unsigned long)addr,
          |   ~~~~~~~~~~~~~~~~~~~~
       19 |   (unsigned long)addr + bytes);
          |   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ../arch/csky/mm/syscache.c:20:2: note: here
       20 |  case DCACHE:
          |  ^~~~

Fixes: 997153b9a75c ("csky: Add flush_icache_mm to defer flush icache all")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Guo Ren <guoren@kernel.org>
Cc: linux-csky@vger.kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-14  arm64: dts: marvell: armada-37xx: Fix reg for standard variant of UART  (Pali Rohár, 1 file, -1/+1)

[ Upstream commit 2cbfdedef39fb5994b8f1e1df068eb8440165975 ]

UART1 (the standard variant, with DT node name 'uart0') has register space 0x12000-0x12018, not the whole 0x200 size, so fix this in the example as well.

Signed-off-by: Pali Rohár <pali@kernel.org>
Fixes: c737abc193d1 ("arm64: dts: marvell: Fix A37xx UART0 register size")
Link: https://lore.kernel.org/r/20210624224909.6350-6-pali@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-14  powerpc/papr_scm: Make 'perf_stats' invisible if perf-stats unavailable  (Vaibhav Jain, 1 file, -11/+24)

[ Upstream commit ed78f56e1271f108e8af61baeba383dcd77adbec ]

In case performance stats for an nvdimm are not available, reading the 'perf_stats' sysfs file returns an -ENOENT error. A better approach is to make the 'perf_stats' file entirely invisible to indicate that performance stats for the nvdimm are unavailable.

So this patch updates 'papr_nd_attribute_group' with an 'is_visible' callback, implemented by the newly introduced papr_nd_attribute_visible(), which returns an appropriate mode in case performance stats aren't supported for a given nvdimm.

Also, the initialization of 'papr_scm_priv.stat_buffer_len' is moved from papr_scm_nvdimm_init() to papr_scm_probe(), so that its value is available when papr_nd_attribute_visible() is called during nvdimm initialization.

Even though the 'perf_stats' attribute has been available since v5.9, there are no known user-space tools/scripts that depend on the presence of its sysfs file, so no user-space breakage is expected from this patch.

Fixes: 2d02bf835e57 ("powerpc/papr_scm: Fetch nvdimm performance stats from PHYP")
Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210513092349.285021-1-vaibhav@linux.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>

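The visibility callback plausibly looks like this sketch; struct and attribute names follow the commit text, and the details are assumptions (kobj_to_dev(), to_nvdimm() and nvdimm_provider_data() are real kernel helpers):

    static umode_t papr_nd_attribute_visible(struct kobject *kobj,
                                             struct attribute *attr, int n)
    {
            struct device *dev = kobj_to_dev(kobj);
            struct nvdimm *nvdimm = to_nvdimm(dev);
            struct papr_scm_priv *p = nvdimm_provider_data(nvdimm);

            /* Hide 'perf_stats' entirely when the PHYP stats buffer is
             * absent, instead of returning -ENOENT on every read. */
            if (attr == &dev_attr_perf_stats.attr && p->stat_buffer_len == 0)
                    return 0;

            return attr->mode;
    }
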
2021-07-14  powerpc/64s: Fix copy-paste data exposure into newly created tasks  (Nicholas Piggin, 1 file, -16/+32)

[ Upstream commit f35d2f249ef05b9671e7898f09ad89aa78f99122 ]

copy-paste contains implicit "copy buffer" state that can contain arbitrary user data (if the user process executes a copy instruction). This could be snooped by another process if a context switch hits while the state is live, so cp_abort is executed on context switch to clear out possibly sensitive data and prevent the leak.

cp_abort is done after the low-level _switch(), which means it is never reached by newly created tasks, so they could snoop on this buffer between their first and second context switch.

Fix this by doing the cp_abort before calling _switch(). Add some comments which should make the issue harder to miss.

Fixes: 07d2a628bc000 ("powerpc/64s: Avoid cpabort in context switch when possible")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210622053036.474678-1-npiggin@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-14  powerpc/papr_scm: Properly handle UUID types and API  (Andy Shevchenko, 1 file, -9/+18)

[ Upstream commit 0e8554b5d7801b0aebc6c348a0a9f7706aa17b3b ]

Parse into, and export from, the UUID's own type before dereferencing. This also fixes a wrong comment (a little-endian UUID is something else) and should eliminate the direct strict-type assignments.

Fixes: 43001c52b603 ("powerpc/papr_scm: Use ibm,unit-guid as the iset cookie")
Fixes: 259a948c4ba1 ("powerpc/pseries/scm: Use a specific endian format for storing uuid from the device tree")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210616134303.58185-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-14  powerpc: Offline CPU in stop_this_cpu()  (Nicholas Piggin, 1 file, -0/+11)

[ Upstream commit bab26238bbd44d5a4687c0a64fd2c7f2755ea937 ]

printk_safe_flush_on_panic() has special lock-breaking code for the case where we panic()ed with the console lock held. It relies on the panic IPI causing other CPUs to mark themselves offline.

Do as most other architectures do. This effectively reverts commit de6e5d38417e ("powerpc: smp_send_stop do not offline stopped CPUs"). Unfortunately it may result in some false-positive warnings, but the alternative is more situations where we can crash without getting messages out.

Fixes: de6e5d38417e ("powerpc: smp_send_stop do not offline stopped CPUs")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210623041245.865134-1-npiggin@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>

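A hedged sketch of stop_this_cpu() after the change; the ordering and the idle-loop details are assumptions, while set_cpu_online() and hard_irq_disable() are real powerpc kernel APIs:

    static void stop_this_cpu(void *dummy)
    {
            /* Mark this CPU offline so printk_safe_flush_on_panic() knows
             * it may break a console lock the stopped CPU could hold. */
            set_cpu_online(smp_processor_id(), false);

            hard_irq_disable();
            while (1)
                    cpu_relax();    /* spin forever, never return */
    }
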
2021-07-14  powerpc/powernv: Fix machine check reporting of async store errors  (Nicholas Piggin, 1 file, -8/+40)

[ Upstream commit 3729e0ec59a20825bd4c8c70996b2df63915e1dd ]

POWER9 and POWER10 asynchronous machine checks due to stores have their cause reported in SRR1, but SRR1[42] is set, which in other cases indicates a DSISR cause. Check for these cases and clear SRR1[42], so the cause matching uses the i-side (SRR1) table.

Fixes: 7b9f71f974a1 ("powerpc/64s: POWER9 machine check handler")
Fixes: 201220bb0e8c ("powerpc/powernv: Machine check handler for POWER10")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210517140355.2325406-1-npiggin@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-14  s390: appldata depends on PROC_SYSCTL  (Randy Dunlap, 1 file, -1/+1)

[ Upstream commit 5d3516b3647621d5a1180672ea9e0817fb718ada ]

APPLDATA_BASE should depend on PROC_SYSCTL instead of PROC_FS. Building with PROC_FS but not PROC_SYSCTL causes a build error, since appldata_base.c uses data and APIs from fs/proc/proc_sysctl.c:

    arch/s390/appldata/appldata_base.o: in function `appldata_generic_handler':
    appldata_base.c:(.text+0x192): undefined reference to `sysctl_vals'

Fixes: c185b783b099 ("[S390] Remove config options.")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: linux-s390@vger.kernel.org
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Link: https://lore.kernel.org/r/20210528002420.17634-1-rdunlap@infradead.org
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-14  s390: enable HAVE_IOREMAP_PROT  (Niklas Schnelle, 2 files, -0/+20)

[ Upstream commit d460bb6c6417588dd8b0907d34f69b237918812a ]

In commit b02002cc4c0f ("s390/pci: Implement ioremap_wc/prot() with MIO") we implemented both ioremap_wc() and ioremap_prot(); however, until now we had not set HAVE_IOREMAP_PROT in Kconfig, so do so now.

This also requires implementing pte_pgprot(), as it is used in the generic_access_phys() code enabled by CONFIG_HAVE_IOREMAP_PROT. As with ioremap_wc(), we need to take the MMIO Write Back bit index into account.

Moreover, since the pgprot value returned from pte_pgprot() is to be used for mappings into kernel address space, we must make sure that it uses appropriate kernel page table protection bits. In particular, a pgprot value originally coming from userspace could have the _PAGE_PROTECT bit set to enable fault-based dirty-bit accounting, which would then make the mapping inaccessible when used in kernel address space.

Fixes: b02002cc4c0f ("s390/pci: Implement ioremap_wc/prot() with MIO")
Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-07-14