path: root/arch
Age  Commit message  [Author, files changed, lines -removed/+added]
2021-11-02  riscv: Fix asan-stack clang build  [Alexandre Ghiti, 3 files, -2/+10]
commit 54c5639d8f507ebefa814f574cb6f763033a72a5 upstream.

Nathan reported that, because KASAN_SHADOW_OFFSET was not defined in Kconfig, asan-stack could not be disabled with clang even when CONFIG_KASAN_STACK is disabled. Fix this by defining the corresponding config symbol.

Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Alexandre Ghiti <alexandre.ghiti@canonical.com>
Fixes: 8ad8b72721d0 ("riscv: Add KASAN support")
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-02  riscv: fix misaligned trap vector base address  [Chen Lu, 1 file, -0/+1]
commit 64a19591a2938b170aa736443d5d3bf4c51e1388 upstream. The trap vector marked by label .Lsecondary_park must align on a 4-byte boundary, as the {m,s}tvec is defined to require 4-byte alignment. Signed-off-by: Chen Lu <181250012@smail.nju.edu.cn> Reviewed-by: Anup Patel <anup.patel@wdc.com> Fixes: e011995e826f ("RISC-V: Move relocate and few other functions out of __init") Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-02  KVM: s390: preserve deliverable_mask in __airqs_kick_single_vcpu  [Halil Pasic, 1 file, -2/+3]
[ Upstream commit 0e9ff65f455dfd0a8aea5e7843678ab6fe097e21 ] Changing the deliverable mask in __airqs_kick_single_vcpu() is a bug. If one idle vcpu can't take the interrupts we want to deliver, we should look for another vcpu that can, instead of saying that we don't want to deliver these interrupts by clearing the bits from the deliverable_mask. Fixes: 9f30f6216378 ("KVM: s390: add gib_alert_irq_handler()") Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Michael Mueller <mimu@linux.ibm.com> Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Link: https://lore.kernel.org/r/20211019175401.3757927-3-pasic@linux.ibm.com Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
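To make the idea concrete, a minimal C sketch of the intended behaviour (not the literal upstream diff; the vcpu_io_int_mask() helper and exact field names are assumptions for illustration): keep the shared deliverable_mask untouched and narrow only a per-vcpu local copy.

    static void __airqs_kick_single_vcpu(struct kvm *kvm, u8 deliverable_mask)
    {
            int online_vcpus = atomic_read(&kvm->online_vcpus);
            struct kvm_s390_gisa_interrupt *gi = &kvm->arch.gisa_int;
            struct kvm_vcpu *vcpu;
            int vcpu_idx;
            u8 deliverable;

            for_each_set_bit(vcpu_idx, kvm->arch.idle_mask, online_vcpus) {
                    vcpu = kvm_get_vcpu(kvm, vcpu_idx);
                    if (psw_ioint_disabled(vcpu))
                            continue;
                    /* Narrow a local copy only; deliverable_mask stays intact
                     * for the remaining idle vcpus in this loop. */
                    deliverable = deliverable_mask & vcpu_io_int_mask(vcpu); /* helper assumed */
                    if (deliverable &&
                        !test_and_set_bit(vcpu_idx, gi->kicked_mask)) {
                            kvm_s390_vcpu_wakeup(vcpu);
                            break;
                    }
            }
    }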
2021-11-02  KVM: s390: clear kicked_mask before sleeping again  [Halil Pasic, 1 file, -0/+1]
[ Upstream commit 9b57e9d5010bbed7c0d9d445085840f7025e6f9a ]

The idea behind the kicked mask is that we should not re-kick a vcpu that is already in the "kick" process, i.e. that was kicked and is about to be dispatched if certain conditions are met.

The problem with the current implementation is that it assumes the kicked vcpu is going to enter SIE shortly. But under certain circumstances, the vcpu we just kicked will be deemed non-runnable and will remain in wait state. This can happen if the interrupt(s) this vcpu got kicked to deal with got already cleared (because the interrupts got delivered to another vcpu). In this case kvm_arch_vcpu_runnable() would return false, and the vcpu would remain in kvm_vcpu_block(), but this time with its kicked_mask bit set. So next time around we wouldn't kick the vcpu from __airqs_kick_single_vcpu(), but would assume that we just kicked it.

Let us make sure the kicked_mask is cleared before we give up on re-dispatching the vcpu.

Fixes: 9f30f6216378 ("KVM: s390: add gib_alert_irq_handler()")
Reported-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Michael Mueller <mimu@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Link: https://lore.kernel.org/r/20211019175401.3757927-2-pasic@linux.ibm.com
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
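A rough sketch of the shape of the fix, with the placement and field names assumed rather than taken from the upstream diff: before the vcpu gives up and waits again, drop its kicked_mask bit so the next interrupt can kick it.

    /* Sketch: in the vcpu halt path, when the kick turned out to be for
     * nothing and the vcpu is about to sleep again. */
    if (!kvm_arch_vcpu_runnable(vcpu)) {
            /* We were kicked but found no work: allow future kicks. */
            clear_bit(vcpu->vcpu_idx, vcpu->kvm->arch.gisa_int.kicked_mask);
            kvm_vcpu_block(vcpu);
    }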
2021-11-02  nios2: Make NIOS2_DTB_SOURCE_BOOL depend on !COMPILE_TEST  [Guenter Roeck, 1 file, -0/+1]
commit 4a089e95b4d6bb625044d47aed0c442a8f7bd093 upstream.

nios2:allmodconfig builds fail with

  make[1]: *** No rule to make target 'arch/nios2/boot/dts/""', needed by 'arch/nios2/boot/dts/built-in.a'.  Stop.
  make: [Makefile:1868: arch/nios2/boot/dts] Error 2 (ignored)

This is seen with compile tests since those enable NIOS2_DTB_SOURCE_BOOL, which in turn enables NIOS2_DTB_SOURCE. This causes the build error because the default value for NIOS2_DTB_SOURCE is an empty string. Disable NIOS2_DTB_SOURCE_BOOL for compile tests to avoid the error.

Fixes: 2fc8483fdcde ("nios2: Build infrastructure")
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Dinh Nguyen <dinguyen@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-02  arm64: dts: allwinner: h5: NanoPI Neo 2: Fix ethernet node  [Clément Bœsch, 1 file, -1/+1]
commit 0764e365dacd0b8f75c1736f9236be280649bd18 upstream. RX and TX delay are provided by ethernet PHY. Reflect that in ethernet node. Fixes: 44a94c7ef989 ("arm64: dts: allwinner: H5: Restore EMAC changes") Signed-off-by: Clément Bœsch <u@pkh.me> Reviewed-by: Jernej Skrabec <jernej.skrabec@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Maxime Ripard <maxime@cerno.tech> Link: https://lore.kernel.org/r/20210905002027.171984-1-u@pkh.me Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-02  riscv, bpf: Fix potential NULL dereference  [Björn Töpel, 1 file, -1/+2]
commit 27de809a3d83a6199664479ebb19712533d6fd9b upstream.

The bpf_jit_binary_free() function requires a non-NULL argument. When the RISC-V BPF JIT fails to converge in NR_JIT_ITERATIONS steps, jit_data->header will be NULL, which triggers a NULL dereference. Avoid this by checking the argument prior to calling the function.

Fixes: ca6cb5447cec ("riscv, bpf: Factor common RISC-V JIT code")
Signed-off-by: Björn Töpel <bjorn@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/r/20211028125115.514587-1-bjorn@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
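The shape of the fix is a simple NULL check before the call; a sketch along the lines of the RISC-V JIT convergence-failure path (surrounding code paraphrased, not quoted):

    if (i == NR_JIT_ITERATIONS) {
            pr_err("bpf-jit: image did not converge in <%d passes!\n",
                   NR_JIT_ITERATIONS);
            /* header may still be NULL if we never got far enough to
             * allocate the image; bpf_jit_binary_free() can't take NULL. */
            if (jit_data->header)
                    bpf_jit_binary_free(jit_data->header);
            prog = orig_prog;
            goto out_offset;
    }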
2021-11-02  arm64: Avoid premature usercopy failure  [Robin Murphy, 3 files, -13/+35]
commit 295cf156231ca3f9e3a66bde7fab5e09c41835e0 upstream. Al reminds us that the usercopy API must only return complete failure if absolutely nothing could be copied. Currently, if userspace does something silly like giving us an unaligned pointer to Device memory, or a size which overruns MTE tag bounds, we may fail to honour that requirement when faulting on a multi-byte access even though a smaller access could have succeeded. Add a mitigation to the fixup routines to fall back to a single-byte copy if we faulted on a larger access before anything has been written to the destination, to guarantee making *some* forward progress. We needn't be too concerned about the overall performance since this should only occur when callers are doing something a bit dodgy in the first place. Particularly broken userspace might still be able to trick generic_perform_write() into an infinite loop by targeting write() at an mmap() of some read-only device register where the fault-in load succeeds but any store synchronously aborts such that copy_to_user() is genuinely unable to make progress, but, well, don't do that... CC: stable@vger.kernel.org Reported-by: Chen Huang <chenhuang5@huawei.com> Suggested-by: Al Viro <viro@zeniv.linux.org.uk> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/dc03d5c675731a1f24a62417dba5429ad744234e.1626098433.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Chen Huang <chenhuang5@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
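Conceptually the fixup now falls back to a byte-sized access before declaring total failure. A simplified, illustrative C model of that policy (the real logic lives in the arm64 assembly fixup routines; both helpers below are assumptions):

    /* Illustrative only: report complete failure only when not even a
     * single byte could be copied. */
    static unsigned long copy_with_fallback(void *dst, const void *src,
                                            unsigned long n)
    {
            unsigned long left = try_bulk_copy(dst, src, n);  /* assumed helper */

            /* Faulted before any progress was made: a multi-byte access may
             * have straddled a bad page/tag boundary, so retry one byte to
             * guarantee forward progress if anything is copyable at all. */
            if (left == n && try_one_byte(dst, src))          /* assumed helper */
                    left = n - 1;

            return left;
    }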
2021-11-02  powerpc/bpf: Fix BPF_MOD when imm == 1  [Naveen N. Rao, 1 file, -2/+8]
commit 8bbc9d822421d9ac8ff9ed26a3713c9afc69d6c8 upstream. Only ignore the operation if dividing by 1. Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF") Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Tested-by: Johan Almbladh <johan.almbladh@anyfinetworks.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Acked-by: Song Liu <songliubraving@fb.com> Acked-by: Johan Almbladh <johan.almbladh@anyfinetworks.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c674ca18c3046885602caebb326213731c675d06.1633464148.git.naveen.n.rao@linux.vnet.ibm.com Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
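In other words, "divide by 1" may only be skipped for BPF_DIV; for BPF_MOD the destination must become 0. A hedged sketch of the emit logic (macro names as best recalled from the powerpc JIT, not an exact diff):

    if (imm == 0)
            return -EINVAL;
    if (imm == 1) {
            if (BPF_OP(code) == BPF_DIV) {
                    /* x / 1 == x: nothing needs to be emitted */
                    goto bpf_alu32_trunc;
            }
            /* x % 1 == 0: the old code skipped this case too, which was
             * the bug - load 0 into the destination instead. */
            EMIT(PPC_RAW_LI(dst_reg, 0));
            break;
    }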
2021-11-02  ARM: 9141/1: only warn about XIP address when not compile testing  [Arnd Bergmann, 1 file, -1/+1]
commit 48ccc8edf5b90622cdc4f8878e0042ab5883e2ca upstream. In randconfig builds, we sometimes come across this warning: arm-linux-gnueabi-ld: XIP start address may cause MPU programming issues While this is helpful for actual systems to figure out why it fails, the warning does not provide any benefit for build testing, so guard it in a check for CONFIG_COMPILE_TEST, which is usually set on randconfig builds. Fixes: 216218308cfb ("ARM: 8713/1: NOMMU: Support MPU in XIP configuration") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-02  ARM: 9139/1: kprobes: fix arch_init_kprobes() prototype  [Arnd Bergmann, 1 file, -1/+1]
commit 1f323127cab086e4fd618981b1e5edc396eaf0f4 upstream.

With extra warnings enabled, gcc complains about this function definition:

  arch/arm/probes/kprobes/core.c: In function 'arch_init_kprobes':
  arch/arm/probes/kprobes/core.c:465:12: warning: old-style function definition [-Wold-style-definition]
    465 | int __init arch_init_kprobes()

Link: https://lore.kernel.org/all/20201027093057.c685a14b386acacb3c449e3d@kernel.org/
Fixes: 24ba613c9d6c ("ARM kprobes: core code")
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
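The warning disappears once the empty parameter list is spelled out; a minimal sketch of the change (function body elided):

    /* before: old-style (unprototyped) definition */
    int __init arch_init_kprobes()
    {
            ...
    }

    /* after: proper prototype, identical behaviour */
    int __init arch_init_kprobes(void)
    {
            ...
    }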
2021-11-02  ARM: 9138/1: fix link warning with XIP + frame-pointer  [Arnd Bergmann, 1 file, -0/+4]
commit 44cc6412e66b2b84544eaf2e14cf1764301e2a80 upstream.

When frame pointers are used instead of the ARM unwinder, and the kernel is built using clang with an external assembler and CONFIG_XIP_KERNEL, every file produces two warnings like:

  arm-linux-gnueabi-ld: warning: orphan section `.ARM.extab' from `net/mac802154/util.o' being placed in section `.ARM.extab'
  arm-linux-gnueabi-ld: warning: orphan section `.ARM.exidx' from `net/mac802154/util.o' being placed in section `.ARM.exidx'

The same fix was already merged for the normal (non-XIP) linker script, with a longer description.

Fixes: c39866f268f8 ("arm/build: Always handle .ARM.exidx and .ARM.extab sections")
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-02  ARM: 9134/1: remove duplicate memcpy() definition  [Arnd Bergmann, 1 file, -0/+3]
commit eaf6cc7165c9c5aa3c2f9faa03a98598123d0afb upstream.

Both the decompressor code and the kasan logic try to override the memcpy() and memmove() definitions, which leads to a clash in a KASAN-enabled kernel with XZ decompression:

  arch/arm/boot/compressed/decompress.c:50:9: error: 'memmove' macro redefined [-Werror,-Wmacro-redefined]
  #define memmove memmove
          ^
  arch/arm/include/asm/string.h:59:9: note: previous definition is here
  #define memmove(dst, src, len) __memmove(dst, src, len)
          ^
  arch/arm/boot/compressed/decompress.c:51:9: error: 'memcpy' macro redefined [-Werror,-Wmacro-redefined]
  #define memcpy memcpy
          ^
  arch/arm/include/asm/string.h:58:9: note: previous definition is here
  #define memcpy(dst, src, len) __memcpy(dst, src, len)
          ^

Here we want the set of functions from the decompressor, so undefine the other macros before the override.

Link: https://lore.kernel.org/linux-arm-kernel/CACRpkdZYJogU_SN3H9oeVq=zJkRgRT1gDz3xp59gdqWXxw-B=w@mail.gmail.com/
Link: https://lore.kernel.org/lkml/202105091112.F5rmd4By-lkp@intel.com/
Fixes: d6d51a96c7d6 ("ARM: 9014/2: Replace string mem* functions for KASan")
Fixes: a7f464f3db93 ("ARM: 7001/2: Wire up support for the XZ decompressor")
Reported-by: kernel test robot <lkp@intel.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
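The fix boils down to undefining the asm/string.h overrides before the decompressor installs its own; a sketch of what that looks like in decompress.c (context assumed, not an exact diff):

    /* asm/string.h may already have mapped these to the KASan-aware
     * __memcpy/__memmove; drop that so the decompressor's plain
     * definitions are used here. */
    #undef memmove
    #undef memcpy
    #define memmove memmove
    #define memcpy memcpy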
2021-11-02  ARM: 9133/1: mm: proc-macros: ensure *_tlb_fns are 4B aligned  [Nick Desaulniers, 1 file, -0/+1]
commit e6a0c958bdf9b2e1b57501fc9433a461f0a6aadd upstream. A kernel built with CONFIG_THUMB2_KERNEL=y and using clang as the assembler could generate non-naturally-aligned v7wbi_tlb_fns which results in a boot failure. The original commit adding the macro missed the .align directive on this data. Link: https://github.com/ClangBuiltLinux/linux/issues/1447 Link: https://lore.kernel.org/all/0699da7b-354f-aecc-a62f-e25693209af4@linaro.org/ Debugged-by: Ard Biesheuvel <ardb@kernel.org> Debugged-by: Nathan Chancellor <nathan@kernel.org> Debugged-by: Richard Henderson <richard.henderson@linaro.org> Fixes: 66a625a88174 ("ARM: mm: proc-macros: Add generic proc/cache/tlb struct definition macros") Suggested-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-02  ARM: 9132/1: Fix __get_user_check failure with ARM KASAN images  [Lexi Shao, 1 file, -1/+3]
commit df909df0770779f1a5560c2bb641a2809655ef28 upstream.

ARM: kasan: Fix __get_user_check failure with kasan

In the macro __get_user_check defined in arch/arm/include/asm/uaccess.h, the error code is stored in register int __e (r0). When kasan is enabled, assigning a value to a kernel address might trigger a kasan check, which unexpectedly overwrites r0 and causes undefined behavior on arm kasan images.

One example is a failure in do_futex which results in a process soft lockup. Log:

  watchdog: BUG: soft lockup - CPU#0 stuck for 62946ms! [rs:main Q:Reg:1151]
  ...
  (__asan_store4) from (futex_wait_setup+0xf8/0x2b4)
  (futex_wait_setup) from (futex_wait+0x138/0x394)
  (futex_wait) from (do_futex+0x164/0xe40)
  (do_futex) from (sys_futex_time32+0x178/0x230)
  (sys_futex_time32) from (ret_fast_syscall+0x0/0x50)

The soft lockup happens in futex_wait_setup. The reason is that get_futex_value_locked() always returns EINVAL, thus the pc jumps back to the retry label and loops forever.

This line in get_futex_value_locked

  ret = __get_user(*dest, from);

is expanded to

  *dest = (typeof(*(p))) __r2;

in the macro __get_user_check. Writing to the pointer dest triggers a kasan check and overwrites the return value of the __get_user_x function.

The assembly code of get_futex_value_locked in kernel/futex.c:

  ...
  c01f6dc8:  eb0b020e  bl      c04b7608 <__get_user_4>
  // "x = (typeof(*(p))) __r2;" triggers kasan check and r0 is overwritten
  c01f6dcc:  e1a00007  mov     r0, r7
  c01f6dd0:  e1a05002  mov     r5, r2
  c01f6dd4:  eb04f1e6  bl      c0333574 <__asan_store4>
  c01f6dd8:  e5875000  str     r5, [r7]
  // save ret value of __get_user(*dest, from), which is dest address now
  c01f6ddc:  e1a05000  mov     r5, r0
  ...
  // checking return value of __get_user failed
  c01f6e00:  e3550000  cmp     r5, #0
  ...
  c01f6e0c:  01a00005  moveq   r0, r5
  // assign return value to EINVAL
  c01f6e10:  13e0000d  mvnne   r0, #13

The return value is the destination address of get_user and thus certainly non-zero, so get_futex_value_locked always returns EINVAL.

Fix it by using a tmp variable to store the error code before the assignment. This fix has no effect on non-kasan images thanks to compiler optimization. It only affects cases that overwrite r0 due to the kasan check.

This should fix the bug discussed in Link [1].

[1] https://lore.kernel.org/linux-arm-kernel/0ef7c2a5-5d8b-c5e0-63fa-31693fd4495c@gmail.com/

Fixes: 421015713b30 ("ARM: 9017/2: Enable KASan for ARM")
Signed-off-by: Lexi Shao <shaolexi@huawei.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
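The fix keeps the error code in a temporary before the KASan-instrumented store can clobber r0; a simplified sketch of the reworked macro (heavily abridged from arch/arm/include/asm/uaccess.h):

    #define __get_user_check(x, p)                                      \
    ({                                                                  \
            register typeof(*(p)) __user *__p asm("r0") = (p);          \
            register __inttype(x) __r2 asm("r2");                       \
            register int __e asm("r0");                                 \
            int __tmp_e;                                                \
            /* ... __get_user_x() call sets __e (r0) and __r2 ... */    \
            __tmp_e = __e;  /* save before anything can clobber r0 */   \
            x = (typeof(*(p))) __r2; /* may call __asan_storeN(), which clobbers r0 */ \
            __tmp_e;                                                    \
    })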
2021-10-27  ARM: 9122/1: select HAVE_FUTEX_CMPXCHG  [Nick Desaulniers, 1 file, -0/+1]
commit 9d417cbe36eee7afdd85c2e871685f8dab7c2dba upstream. tglx notes: This function [futex_detect_cmpxchg] is only needed when an architecture has to runtime discover whether the CPU supports it or not. ARM has unconditional support for this, so the obvious thing to do is the below. Fixes linkage failure from Clang randconfigs: kernel/futex.o:(.text.fixup+0x5c): relocation truncated to fit: R_ARM_JUMP24 against `.init.text' and boot failures for CONFIG_THUMB2_KERNEL. Link: https://github.com/ClangBuiltLinux/linux/issues/325 Comments from Nick Desaulniers: See-also: 03b8c7b623c8 ("futex: Allow architectures to skip futex_atomic_cmpxchg_inatomic() test") Reported-by: Arnd Bergmann <arnd@arndb.de> Reported-by: Nathan Chancellor <nathan@kernel.org> Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Cc: stable@vger.kernel.org # v3.14+ Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-10-27  s390/pci: fix zpci_zdev_put() on reserve  [Niklas Schnelle, 3 files, -7/+45]
commit a46044a92add6a400f4dada7b943b30221f7cc80 upstream. Since commit 2a671f77ee49 ("s390/pci: fix use after free of zpci_dev") the reference count of a zpci_dev is incremented between pcibios_add_device() and pcibios_release_device() which was supposed to prevent the zpci_dev from being freed while the common PCI code has access to it. It was missed however that the handling of zPCI availability events assumed that once zpci_zdev_put() was called no later availability event would still see the device. With the previously mentioned commit however this assumption no longer holds and we must make sure that we only drop the initial long-lived reference the zPCI subsystem holds exactly once. Do so by introducing a zpci_device_reserved() function that handles when a device is reserved. Here we make sure the zpci_dev will not be considered for further events by removing it from the zpci_list. This also means that the device actually stays in the ZPCI_FN_STATE_RESERVED state between the time we know it has been reserved and the final reference going away. We thus need to consider it a real state instead of just a conceptual state after the removal. The final cleanup of PCI resources, removal from zbus, and destruction of the IOMMU stays in zpci_release_device() to make sure holders of the reference do see valid data until the release. Fixes: 2a671f77ee49 ("s390/pci: fix use after free of zpci_dev") Cc: stable@vger.kernel.org Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-10-27  perf/x86/msr: Add Sapphire Rapids CPU support  [Kan Liang, 1 file, -0/+1]
[ Upstream commit 71920ea97d6d1d800ee8b51951dc3fda3f5dc698 ] SMI_COUNT MSR is supported on Sapphire Rapids CPU. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/1633551137-192083-1-git-send-email-kan.liang@linux.intel.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-10-27  ARM: dts: spear3xx: Fix gmac node  [Herve Codina, 1 file, -1/+1]
[ Upstream commit 6636fec29cdf6665bd219564609e8651f6ddc142 ]

On SPEAr3xx, the ethernet driver is not compatible with the SPEAr600 one. Indeed, SPEAr3xx uses an earlier version of this IP (v3.40) and needs some driver tuning compared to SPEAr600.

The v3.40 IP support was added to the stmmac driver, and this patch fixes the issue by using the correct compatible string for SPEAr3xx.

Signed-off-by: Herve Codina <herve.codina@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-10-27  KVM: nVMX: promptly process interrupts delivered while in guest mode  [Paolo Bonzini, 1 file, -11/+6]
commit 3a25dfa67fe40f3a2690af2c562e0947a78bd6a0 upstream. Since commit c300ab9f08df ("KVM: x86: Replace late check_nested_events() hack with more precise fix") there is no longer the certainty that check_nested_events() tries to inject an external interrupt vmexit to L1 on every call to vcpu_enter_guest. Therefore, even in that case we need to set KVM_REQ_EVENT. This ensures that inject_pending_event() is called, and from there kvm_check_nested_events(). Fixes: c300ab9f08df ("KVM: x86: Replace late check_nested_events() hack with more precise fix") Cc: stable@vger.kernel.org Reviewed-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
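The essence of the fix, sketched generically (the exact call site in the VMX code is not shown here): when an interrupt arrives for a vcpu that is in guest mode and cannot be injected immediately, request an event re-evaluation so the next vcpu_enter_guest() picks it up.

    /* Sketch: force inject_pending_event() -> kvm_check_nested_events()
     * to run on the next entry instead of relying on it happening anyway. */
    kvm_make_request(KVM_REQ_EVENT, vcpu);
    kvm_vcpu_kick(vcpu);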
2021-10-27  powerpc/idle: Don't corrupt back chain when going idle  [Michael Ellerman, 1 file, -4/+6]
commit 496c5fe25c377ddb7815c4ce8ecfb676f051e9b6 upstream. In isa206_idle_insn_mayloss() we store various registers into the stack red zone, which is allowed. However inside the IDLE_STATE_ENTER_SEQ_NORET macro we save r2 again, to 0(r1), which corrupts the stack back chain. We used to do the same in isa206_idle_insn_mayloss() itself, but we fixed that in 73287caa9210 ("powerpc64/idle: Fix SP offsets when saving GPRs"), however we missed that the macro also corrupts the back chain. Corrupting the back chain is bad for debuggability but doesn't necessarily cause a bug. However we recently changed the stack handling in some KVM code, and it now relies on the stack back chain being valid when it returns. The corruption causes that code to return with r1 pointing somewhere in kernel data, at some point LR is restored from the stack and we branch to NULL or somewhere else invalid. Only affects Power8 hosts running KVM guests, with dynamic_mt_modes enabled (which it is by default). The fixes tag below points to the commit that changed the KVM stack handling, exposing this bug. The actual corruption of the back chain has always existed since 948cf67c4726 ("powerpc: Add NAP mode support on Power7 in HV mode"). Fixes: 9b4416c5095c ("KVM: PPC: Book3S HV: Fix stack handling in idle_kvm_start_guest()") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20211020094826.3222052-1-mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-10-27  KVM: PPC: Book3S HV: Make idle_kvm_start_guest() return 0 if it went to guest  [Michael Ellerman, 1 file, -2/+7]
commit cdeb5d7d890e14f3b70e8087e745c4a6a7d9f337 upstream. We call idle_kvm_start_guest() from power7_offline() if the thread has been requested to enter KVM. We pass it the SRR1 value that was returned from power7_idle_insn() which tells us what sort of wakeup we're processing. Depending on the SRR1 value we pass in, the KVM code might enter the guest, or it might return to us to do some host action if the wakeup requires it. If idle_kvm_start_guest() is able to handle the wakeup, and enter the guest it is supposed to indicate that by returning a zero SRR1 value to us. That was the behaviour prior to commit 10d91611f426 ("powerpc/64s: Reimplement book3s idle code in C"), however in that commit the handling of SRR1 was reworked, and the zeroing behaviour was lost. Returning from idle_kvm_start_guest() without zeroing the SRR1 value can confuse the host offline code, causing the guest to crash and other weirdness. Fixes: 10d91611f426 ("powerpc/64s: Reimplement book3s idle code in C") Cc: stable@vger.kernel.org # v5.2+ Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20211015133929.832061-2-mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-10-27  KVM: PPC: Book3S HV: Fix stack handling in idle_kvm_start_guest()  [Michael Ellerman, 1 file, -9/+10]
commit 9b4416c5095c20e110c82ae602c254099b83b72f upstream.

In commit 10d91611f426 ("powerpc/64s: Reimplement book3s idle code in C") kvm_start_guest() became idle_kvm_start_guest(). The old code allocated a stack frame on the emergency stack, but didn't use the frame to store anything, and also didn't store anything in its caller's frame.

idle_kvm_start_guest() on the other hand is written more like a normal C function: it creates a frame on entry, and also stores CR/LR into its caller's frame (per the ABI).

The problem is that there is no caller frame on the emergency stack. The emergency stack for a given CPU is allocated with:

  paca_ptrs[i]->emergency_sp = alloc_stack(limit, i) + THREAD_SIZE;

So emergency_sp actually points to the first address above the emergency stack allocation for a given CPU; we must not store above it without first decrementing it to create a frame. This is different to the regular kernel stack, paca->kstack, which is initialised to point at an initial frame that is ready to use.

idle_kvm_start_guest() stores the backchain, CR and LR, all of which write outside the allocation for the emergency stack. It then creates a stack frame and saves the non-volatile registers. Unfortunately the frame it creates is not large enough to fit the non-volatiles, and so the saving of the non-volatile registers also writes outside the emergency stack allocation.

The end result is that we corrupt whatever is at 0-24 bytes, and 112-248 bytes above the emergency stack allocation.

In practice this has gone unnoticed because the memory immediately above the emergency stack happens to be used for other stack allocations, either another CPU's mc_emergency_sp or an IRQ stack. See the order of calls to irqstack_early_init() and emergency_stack_init(). The low addresses of another stack are the top of that stack, and so are only used if that stack is under extreme pressure, which essentially never happens in practice - and if it did there's a high likelihood we'd crash due to that stack overflowing.

Still, we shouldn't be corrupting someone else's stack, and it is purely luck that we aren't corrupting something else.

To fix it we save CR/LR into the caller's frame using the existing r1 on entry, we then create a SWITCH_FRAME_SIZE frame (which has space for pt_regs) on the emergency stack with the backchain pointing to the existing stack, and then finally we switch to the new frame on the emergency stack.

Fixes: 10d91611f426 ("powerpc/64s: Reimplement book3s idle code in C")
Cc: stable@vger.kernel.org # v5.2+
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20211015133929.832061-1-mpe@ellerman.id.au
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-10-27  powerpc64/idle: Fix SP offsets when saving GPRs  [Christopher M. Riedl, 1 file, -65/+73]
commit 73287caa9210ded6066833195f4335f7f688a46b upstream. The idle entry/exit code saves/restores GPRs in the stack "red zone" (Protected Zone according to PowerPC64 ELF ABI v2). However, the offset used for the first GPR is incorrect and overwrites the back chain - the Protected Zone actually starts below the current SP. In practice this is probably not an issue, but it's still incorrect so fix it. Also expand the comments to explain why using the stack "red zone" instead of creating a new stackframe is appropriate here. Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210206072342.5067-1-cmr@codefail.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-10-27  powerpc/smp: do not decrement idle task preempt count in CPU offline  [Nathan Lynch, 1 file, -2/+0]
[ Upstream commit 787252a10d9422f3058df9a4821f389e5326c440 ]

With PREEMPT_COUNT=y, when a CPU is offlined and then onlined again, we get:

  BUG: scheduling while atomic: swapper/1/0/0x00000000
  no locks held by swapper/1/0.
  CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.15.0-rc2+ #100
  Call Trace:
    dump_stack_lvl+0xac/0x108
    __schedule_bug+0xac/0xe0
    __schedule+0xcf8/0x10d0
    schedule_idle+0x3c/0x70
    do_idle+0x2d8/0x4a0
    cpu_startup_entry+0x38/0x40
    start_secondary+0x2ec/0x3a0
    start_secondary_prolog+0x10/0x14

This is because powerpc's arch_cpu_idle_dead() decrements the idle task's preempt count, for reasons explained in commit a7c2bb8279d2 ("powerpc: Re-enable preemption before cpu_die()"), specifically "start_secondary() expects a preempt_count() of 0."

However, since commit 2c669ef6979c ("powerpc/preempt: Don't touch the idle task's preempt_count during hotplug") and commit f1a0a376ca0c ("sched/core: Initialize the idle task with preemption disabled"), that justification no longer holds.

The idle task isn't supposed to re-enable preemption, so remove the vestigial preempt_enable() from the CPU offline path.

Tested with pseries and powernv in qemu, and pseries on PowerVM.

Fixes: 2c669ef6979c ("powerpc/preempt: Don't touch the idle task's preempt_count during hotplug")
Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20211015173902.2278118-1-nathanl@linux.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-10-27  NIOS2: irqflags: rename a redefined register name  [Randy Dunlap, 2 files, -3/+3]
[ Upstream commit 4cce60f15c04d69eff6ffc539ab09137dbe15070 ]

Both arch/nios2/ and drivers/mmc/host/tmio_mmc.c define a macro with the name "CTL_STATUS". Change the one in arch/nios2/ to be "CTL_FSTATUS" (flags status) to eliminate the build warning.

  In file included from ../drivers/mmc/host/tmio_mmc.c:22:
  drivers/mmc/host/tmio_mmc.h:31: warning: "CTL_STATUS" redefined
     31 | #define CTL_STATUS 0x1c
  arch/nios2/include/asm/registers.h:14: note: this is the location of the previous definition
     14 | #define CTL_STATUS 0

Fixes: b31ebd8055ea ("nios2: Nios2 registers")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Dinh Nguyen <dinguyen@kernel.org>
Signed-off-by: Dinh Nguyen <dinguyen@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-10-27  xtensa: xtfpga: Try software restart before simulating CPU reset  [Guenter Roeck, 1 file, -2/+6]
[ Upstream commit 012e974501a270d8dfd4ee2039e1fdf7579c907e ] Rebooting xtensa images loaded with the '-kernel' option in qemu does not work. When executing a reboot command, the qemu session either hangs or experiences an endless sequence of error messages. Kernel panic - not syncing: Unrecoverable error in exception handler Reset code jumps to the CPU restart address, but Linux can not recover from there because code and data in the kernel init sections have been discarded and overwritten at this point. XTFPGA platforms have a means to reset the CPU by writing 0xdead into a specific FPGA IO address. When used in QEMU the kernel image loaded with the '-kernel' option gets restored to its original state allowing the machine to boot successfully. Use that mechanism to attempt a platform reset. If it does not work, fall back to the existing mechanism. Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
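The mechanism is just a store of the magic value 0xdead to the FPGA soft-reset register, with the old CPU-reset jump kept as a fallback; a hedged sketch (the register mapping and the fallback helper are placeholders, not the platform code verbatim):

    void platform_restart(void)
    {
            /* Try the XTFPGA soft reset first; under QEMU '-kernel' this
             * restores the original kernel image, so the reboot succeeds. */
            writel(0xdead, fpga_swrst_reg);     /* mapped register, platform-specific */

            /* If the write had no effect, fall back to jumping to the CPU
             * reset vector as the code did before. */
            cpu_reset_fallback();               /* placeholder for the old path */
    }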
2021-10-27  xtensa: xtfpga: use CONFIG_USE_OF instead of CONFIG_OF  [Max Filippov, 1 file, -2/+2]
[ Upstream commit f3d7c2cdf6dc0d5402ec29c3673893b3542c5ad1 ] Use platform data to initialize xtfpga device drivers when CONFIG_USE_OF is not selected. This fixes xtfpga networking when CONFIG_USE_OF is not selected but CONFIG_OF is. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-10-27  xen/x86: prevent PVH type from getting clobbered  [Jan Beulich, 1 file, -5/+4]
[ Upstream commit 9172b5c4a778da1f855b2e3780b1afabb3cfd523 ] Like xen_start_flags, xen_domain_type gets set before .bss gets cleared. Hence this variable also needs to be prevented from getting put in .bss, which is possible because XEN_NATIVE is an enumerator evaluating to zero. Any use prior to init_hvm_pv_info() setting the variable again would lead to wrong decisions; one such case is xenboot_console_setup() when called as a result of "earlyprintk=xen". Use __ro_after_init as more applicable than either __section(".data") or __read_mostly. Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Juergen Gross <jgross@suse.com> Link: https://lore.kernel.org/r/d301677b-6f22-5ae6-bd36-458e1f323d0b@suse.com Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
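The fix is essentially in how the variable is declared; a sketch matching the description above (file placement aside):

    /* Not in .bss: .bss is cleared after this has already been set, and
     * because XEN_NATIVE evaluates to 0 the variable would otherwise be
     * placed there and the early assignment wiped out. */
    enum xen_domain_type xen_domain_type __ro_after_init = XEN_NATIVE;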
2021-10-27  ARM: dts: at91: sama5d2_som1_ek: disable ISC node by default  [Eugen Hristev, 1 file, -1/+0]
[ Upstream commit 4348cc10da6377a86940beb20ad357933b8f91bb ] Without a sensor node, the ISC will simply fail to probe, as the corresponding port node is missing. It is then logical to disable the node in the devicetree. If we add a port with a connection to a sensor endpoint, ISC can be enabled. Signed-off-by: Eugen Hristev <eugen.hristev@microchip.com> Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com> Link: https://lore.kernel.org/r/20210902121358.503589-1-eugen.hristev@microchip.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-10-27  arm: dts: vexpress-v2p-ca9: Fix the SMB unit-address  [Rob Herring, 2 files, -2/+2]
[ Upstream commit 2e9edc07df2ec6f835222151fa4e536e9e54856a ] Based on 'ranges', the 'bus@4000000' node unit-address is off by 1 '0'. Link: https://lore.kernel.org/r/20210819184239.1192395-5-robh@kernel.org Cc: Andre Przywara <andre.przywara@arm.com> Cc: Sudeep Holla <sudeep.holla@arm.com> Cc: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-10-27  parisc: math-emu: Fix fall-through warnings  [Helge Deller, 1 file, -3/+53]
commit 6f1fce595b78b775d7fb585c15c2dc3a6994f96e upstream. Fix lots of fallthrough warnings, e.g.: arch/parisc/math-emu/fpudispatch.c:323:33: warning: this statement may fall through [-Wimplicit-fallthrough=] Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-10-20  ARM: dts: bcm2711-rpi-4-b: Fix pcie0's unit address formatting  [Nicolas Saenz Julienne, 1 file, -1/+2]
commit 13dbc954b3c9a9de0ad5b7279e8d3b708d31068b upstream.

dtbs_check currently complains that:

  arch/arm/boot/dts/bcm2711-rpi-4-b.dts:220.10-231.4: Warning (pci_device_reg): /scb/pcie@7d500000/pci@1,0: PCI unit address format error, expected "0,0"

Unsurprisingly pci@0,0 is the right address, as illustrated by its reg property:

  &pcie0 {
          pci@0,0 {
                  /*
                   * As defined in the IEEE Std 1275-1994 document,
                   * reg is a five-cell address encoded as (phys.hi
                   * phys.mid phys.lo size.hi size.lo). phys.hi
                   * should contain the device's BDF as 0b00000000
                   * bbbbbbbb dddddfff 00000000. The other cells
                   * should be zero.
                   */
                  reg = <0 0 0 0 0>;
          };
  };

The device is clearly 0. So fix it.

Also add a missing 'device_type = "pci"'.

Fixes: 258f92d2f840 ("ARM: dts: bcm2711: Add reset controller to xHCI node")
Suggested-by: Rob Herring <robh@kernel.org>
Reviewed-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20210831125843.1233488-1-nsaenzju@redhat.com
Signed-off-by: Nicolas Saenz Julienne <nsaenz@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-10-20  ARM: dts: bcm2711-rpi-4-b: fix sd_io_1v8_reg regulator states  [Stefan Wahren, 1 file, -2/+2]
commit b55ec7528879a822a4d350248daa04bbb27f25fd upstream.

DT schema check complains at sd_io_1v8_reg about the following:

  [1800000, 1, 3300000, 0] is too long
  Additional items are not allowed (3300000, 0 were unexpected)

So fix the states definition.

Fixes: 7dbe8c62ceeb ("ARM: dts: Add minimal Raspberry Pi 4 support")
Signed-off-by: Stefan Wahren <stefan.wahren@i2se.com>
Link: https://lore.kernel.org/r/1628334401-6577-3-git-send-email-stefan.wahren@i2se.com
Signed-off-by: Nicolas Saenz Julienne <nsaenz@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-10-20  ARM: dts: bcm2711: fix MDIO #address- and #size-cells  [Stefan Wahren, 1 file, -2/+2]
commit 2faff6737a8a684b077264f0aed131526c99eec4 upstream.

The values of #address-cells and #size-cells are swapped. Fix this and avoid the following DT schema warnings for mdio@e14:

  #address-cells:0:0: 1 was expected
  #size-cells:0:0: 0 was expected

Fixes: be8af7a9e3cc ("ARM: dts: bcm2711-rpi-4: Enable GENET support")
Signed-off-by: Stefan Wahren <stefan.wahren@i2se.com>
Link: https://lore.kernel.org/r/1628334401-6577-2-git-send-email-stefan.wahren@i2se.com
Signed-off-by: Nicolas Saenz Julienne <nsaenz@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-10-20  ARM: dts: bcm2711-rpi-4-b: Fix usb's unit address  [Nicolas Saenz Julienne, 1 file, -2/+2]
commit 3f32472854614d6f53b09b4812372dba9fc5c7de upstream. The unit address is supposed to represent '<device>,<function>'. Which are both 0 for RPi4b's XHCI controller. On top of that although OpenFirmware states bus number goes in the high part of the last reg parameter, FDT doesn't seem to care for it[1], so remove it. [1] https://patchwork.kernel.org/project/linux-arm-kernel/patch/20210830103909.323356-1-nsaenzju@redhat.com/#24414633 Fixes: 258f92d2f840 ("ARM: dts: bcm2711: Add reset controller to xHCI node") Suggested-by: Rob Herring <robh@kernel.org> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20210831125843.1233488-2-nsaenzju@redhat.com Signed-off-by: Nicolas Saenz Julienne <nsaenz@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-10-20  powerpc/xive: Discard disabled interrupts in get_irqchip_state()  [Cédric Le Goater, 1 file, -1/+2]
commit 6f779e1d359b8d5801f677c1d49dcfa10bf95674 upstream. When an interrupt is passed through, the KVM XIVE device calls the set_vcpu_affinity() handler which raises the P bit to mask the interrupt and to catch any in-flight interrupts while routing the interrupt to the guest. On the guest side, drivers (like some Intels) can request at probe time some MSIs and call synchronize_irq() to check that there are no in flight interrupts. This will call the XIVE get_irqchip_state() handler which will always return true as the interrupt P bit has been set on the host side and lock the CPU in an infinite loop. Fix that by discarding disabled interrupts in get_irqchip_state(). Fixes: da15c03b047d ("powerpc/xive: Implement get_irqchip_state method for XIVE to fix shutdown race") Cc: stable@vger.kernel.org #v5.4+ Signed-off-by: Cédric Le Goater <clg@kaod.org> Tested-by: seeteena <s1seetee@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20211011070203.99726-1-clg@kaod.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
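The gist is that an interrupt whose irq_data is disabled (masked for passthrough, with P deliberately set by the host) must not be reported as in flight. A simplified sketch of the check in xive_get_irqchip_state() (flag handling abridged; names as best recalled from the XIVE driver, not the exact diff):

    case IRQCHIP_STATE_ACTIVE:
            pq = xive_esb_read(xd, XIVE_ESB_GET);
            /* A disabled interrupt has P set by the host on purpose;
             * don't let synchronize_irq() spin on it forever. */
            *state = (pq != XIVE_ESB_INVALID) &&
                     !irqd_irq_disabled(data) &&
                     (xd->saved_p || !!(pq & XIVE_ESB_VAL_P));
            break;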
2021-10-20  x86/Kconfig: Do not enable AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT automatically  [Borislav Petkov, 1 file, -1/+0]
commit 711885906b5c2df90746a51f4cd674f1ab9fbb1d upstream. This Kconfig option was added initially so that memory encryption is enabled by default on machines which support it. However, devices which have DMA masks that are less than the bit position of the encryption bit, aka C-bit, require the use of an IOMMU or the use of SWIOTLB. If the IOMMU is disabled or in passthrough mode, the kernel would switch to SWIOTLB bounce-buffering for those transfers. In order to avoid that, 2cc13bb4f59f ("iommu: Disable passthrough mode when SME is active") disables the default IOMMU passthrough mode so that devices for which the default 256K DMA is insufficient, can use the IOMMU instead. However 2, there are cases where the IOMMU is disabled in the BIOS, etc. (think the usual hardware folk "oops, I dropped the ball there" cases) or a driver doesn't properly use the DMA APIs or a device has a firmware or hardware bug, e.g.: ea68573d408f ("drm/amdgpu: Fail to load on RAVEN if SME is active") However 3, in the above GPU use case, there are APIs like Vulkan and some OpenGL/OpenCL extensions which are under the assumption that user-allocated memory can be passed in to the kernel driver and both the GPU and CPU can do coherent and concurrent access to the same memory. That cannot work with SWIOTLB bounce buffers, of course. So, in order for those devices to function, drop the "default y" for the SME by default active option so that users who want to have SME enabled, will need to either enable it in their config or use "mem_encrypt=on" on the kernel command line. [ tlendacky: Generalize commit message. ] Fixes: 7744ccdbc16f ("x86/mm: Add Secure Memory Encryption (SME) support") Reported-by: Paul Menzel <pmenzel@molgen.mpg.de> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Alex Deucher <alexander.deucher@amd.com> Acked-by: Tom Lendacky <thomas.lendacky@amd.com> Cc: <stable@vger.kernel.org> Link: https://lkml.kernel.org/r/8bbacd0e-4580-3194-19d2-a0ecad7df09c@molgen.mpg.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-10-20  x86/resctrl: Free the ctrlval arrays when domain_setup_mon_state() fails  [James Morse, 1 file, -0/+2]
commit 64e87d4bd3201bf8a4685083ee4daf5c0d001452 upstream. domain_add_cpu() is called whenever a CPU is brought online. The earlier call to domain_setup_ctrlval() allocates the control value arrays. If domain_setup_mon_state() fails, the control value arrays are not freed. Add the missing kfree() calls. Fixes: 1bd2a63b4f0de ("x86/intel_rdt/mba_sc: Add initialization support") Fixes: edf6fa1c4a951 ("x86/intel_rdt/cqm: Add RMID (Resource monitoring ID) management") Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Reinette Chatre <reinette.chatre@intel.com> Cc: <stable@vger.kernel.org> Link: https://lkml.kernel.org/r/20210917165958.28313-1-james.morse@arm.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
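The missing piece is an error path that undoes the earlier allocation; a sketch of the shape of the fix in domain_add_cpu() (field names per the pre-5.16 resctrl code, as best recalled):

    if (r->mon_capable && domain_setup_mon_state(r, d)) {
            /* domain_setup_ctrlval() succeeded earlier: free what it
             * allocated before bailing out of the CPU online path. */
            kfree(d->ctrl_val);
            kfree(d->mbps_val);
            kfree(d);
            return;
    }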
2021-10-20  arm64/hugetlb: fix CMA gigantic page order for non-4K PAGE_SIZE  [Mike Kravetz, 1 file, -1/+1]
commit 2e5809a4ddb15969503e43b06662a9a725f613ea upstream. For non-4K PAGE_SIZE configs, the largest gigantic huge page size is CONT_PMD_SHIFT order. On arm64 with 64K PAGE_SIZE, the gigantic page is 16G. Therefore, one should be able to specify 'hugetlb_cma=16G' on the kernel command line so that one gigantic page can be allocated from CMA. However, when adding such an option the following message is produced: hugetlb_cma: cma area should be at least 8796093022208 MiB This is because the calculation for non-4K gigantic page order is incorrect in the arm64 specific routine arm64_hugetlb_cma_reserve(). Fixes: abb7962adc80 ("arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs") Cc: <stable@vger.kernel.org> # 5.9.x Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20211005202529.213812-1-mike.kravetz@oracle.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-10-20  csky: Fixup regs.sr broken in ptrace  [Guo Ren, 1 file, -1/+2]
commit af89ebaa64de726ca0a39bbb0bf0c81a1f43ad50 upstream.

gpr_get() returns the entire pt_regs (including sr) to userspace; if we don't restore the C bit in gpr_set(), it may break the ALU result in that context. So the C flag bit is part of the gpr context; that's why riscv removed the C bit from its ISA entirely. That keeps the sr reg clear from userspace to supervisor privilege.

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
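A sketch of the idea for gpr_set(): accept only the caller-visible C flag (bit 0 of sr) from userspace and keep every privileged bit from the live task state (surrounding regset plumbing elided; not the exact upstream diff):

    struct pt_regs regs;

    /* ... copy the user-supplied register set into 'regs' ... */

    /* Bit 0 of sr is the Condition/Carry flag and is the only bit
     * userspace may change; keep all privileged bits from the task. */
    regs.sr = (regs.sr & BIT(0)) | (task_pt_regs(target)->sr & ~BIT(0));

    *task_pt_regs(target) = regs;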
2021-10-20  csky: don't let sigreturn play with privileged bits of status register  [Al Viro, 1 file, -0/+4]
commit fbd63c08cdcca5fb1315aca3172b3c9c272cfb4f upstream. csky restore_sigcontext() blindly overwrites regs->sr with the value it finds in sigcontext. Attacker can store whatever they want in there, which includes things like S-bit. Userland shouldn't be able to set that, or anything other than C flag (bit 0). Do the same thing other architectures with protected bits in flags register do - preserve everything that shouldn'