path: root/arch
Age         Commit message  (Author; files changed, lines -removed/+added)
2021-04-14  KVM: x86/mmu: preserve pending TLB flush across calls to kvm_tdp_mmu_zap_sp  (Paolo Bonzini; 1 file, -1/+1)
[ Upstream commit 315f02c60d9425b38eb8ad7f21b8a35e40db23f9 ]

Right now, if a call to kvm_tdp_mmu_zap_sp returns false, the caller will skip the TLB flush, which is wrong. There are two ways to fix it:

- since kvm_tdp_mmu_zap_sp will not yield and therefore will not flush the TLB itself, we could change the call to kvm_tdp_mmu_zap_sp to use "flush |= ..."

- or we can chain the flush argument through kvm_tdp_mmu_zap_sp down to __kvm_tdp_mmu_zap_gfn_range. Note that kvm_tdp_mmu_zap_sp will neither yield nor flush, so flush would never go from true to false.

This patch does the former to simplify application to stable kernels, and to make it further clearer that kvm_tdp_mmu_zap_sp will not flush.

Cc: seanjc@google.com
Fixes: 048f49809c526 ("KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping")
Cc: <stable@vger.kernel.org> # 5.10.x: 048f49809c: KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping
Cc: <stable@vger.kernel.org> # 5.10.x: 33a3164161: KVM: x86/mmu: Don't allow TDP MMU to yield when recovering NX pages
Cc: <stable@vger.kernel.org>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
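A hedged sketch of the first option described above; only the "flush |= ..." form comes from the commit text, while the loop shape and pick_nx_huge_page() are illustrative:

    bool flush = false;

    for ( ; to_zap; --to_zap) {
            struct kvm_mmu_page *sp = pick_nx_huge_page(kvm);   /* illustrative helper */

            /* was: flush = kvm_tdp_mmu_zap_sp(kvm, sp);
             * OR-ing in the result preserves a flush still owed from a
             * prior iteration; kvm_tdp_mmu_zap_sp() never yields or
             * flushes itself, so flush can never go from true to false. */
            flush |= kvm_tdp_mmu_zap_sp(kvm, sp);
    }

    if (flush)
            kvm_flush_remote_tlbs(kvm);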
2021-04-14  KVM: x86/mmu: Don't allow TDP MMU to yield when recovering NX pages  (Sean Christopherson; 3 files, -7/+22)
[ Upstream commit 33a3164161fc86b9cc238f7f2aa2ccb1d5559b1c ]

Prevent the TDP MMU from yielding when zapping a gfn range during NX page recovery. If a flush is pending from a previous invocation of the zapping helper, either in the TDP MMU or the legacy MMU, but the TDP MMU has not accumulated a flush for the current invocation, then yielding will release mmu_lock with stale TLB entries.

That being said, this isn't technically a bug fix in the current code, as the TDP MMU will never yield in this case. tdp_mmu_iter_cond_resched() will yield if and only if it has made forward progress, as defined by the current gfn vs. the last yielded (or starting) gfn. Because zapping a single shadow page is guaranteed to (a) find that page and (b) step sideways at the level of the shadow page, the TDP iter will break its loop before getting a chance to yield.

But that is all very, very subtle, and will break at the slightest sneeze, e.g. zapping while holding mmu_lock for read would break as the TDP MMU wouldn't be guaranteed to see the present shadow page, and thus could step sideways at a lower level.

Cc: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210325200119.1359384-4-seanjc@google.com>
[Add lockdep assertion. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-04-14  KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping  (Sean Christopherson; 1 file, -5/+8)
[ Upstream commit 048f49809c526348775425420fb5b8e84fd9a133 ]

Honor the "flush needed" return from kvm_tdp_mmu_zap_gfn_range(), which does the flush itself if and only if it yields (which it will never do in this particular scenario), and otherwise expects the caller to do the flush. If pages are zapped from the TDP MMU but not the legacy MMU, then no flush will occur.

Fixes: 29cf0f5007a2 ("kvm: x86/mmu: NX largepage recovery for TDP MMU")
Cc: stable@vger.kernel.org
Cc: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210325200119.1359384-3-seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-04-14  KVM: x86/mmu: Ensure TLBs are flushed when yielding during GFN range zap  (Sean Christopherson; 1 file, -11/+13)
[ Upstream commit a835429cda91621fca915d80672a157b47738afb ]

When flushing a range of GFNs across multiple roots, ensure any pending flush from a previous root is honored before yielding while walking the tables of the current root.

Note, kvm_tdp_mmu_zap_gfn_range() now intentionally overwrites its local "flush" with the result to avoid redundant flushes. zap_gfn_range() preserves and returns the incoming "flush", unless of course the flush was performed prior to yielding and no new flush was triggered.

Fixes: 1af4a96025b3 ("KVM: x86/mmu: Yield in TDU MMU iter even if no SPTES changed")
Cc: stable@vger.kernel.org
Reviewed-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210325200119.1359384-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
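A hedged sketch of the flush chaining described above; the function shapes are simplified from the commit text, not the exact diff:

    bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end)
    {
            struct kvm_mmu_page *root;
            bool flush = false;

            for_each_tdp_mmu_root_yield_safe(kvm, root)
                    /* Pass the pending flush in and overwrite it with the
                     * result, so a flush owed from a previous root is either
                     * performed before yielding or carried forward. */
                    flush = zap_gfn_range(kvm, root, start, end, true, flush);

            return flush;
    }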
2021-04-14  KVM: x86/mmu: Yield in TDU MMU iter even if no SPTES changed  (Ben Gardon; 1 file, -10/+22)
[ Upstream commit 1af4a96025b33587ca953c7ef12a1b20c6e70412 ]

Given certain conditions, some TDP MMU functions may not yield reliably / frequently enough. For example, if a paging structure was very large but had few, if any, writable entries, wrprot_gfn_range could traverse many entries before finding a writable entry and yielding, because the check for yielding only happens after an SPTE is modified.

Fix this issue by moving the yield to the beginning of the loop.

Fixes: a6a0b05da9f3 ("kvm: x86/mmu: Support dirty logging for the TDP MMU")
Reviewed-by: Peter Feiner <pfeiner@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-15-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
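Roughly, the fix moves the resched check ahead of the SPTE tests, along these lines (a sketch of a wrprot_gfn_range-style loop; names from the TDP MMU, details simplified):

    for_each_tdp_pte_min_level(iter, root, min_level, start, end) {
            /* The yield check now runs on every iteration, even when the
             * entry below turns out not to be writable. */
            if (tdp_mmu_iter_cond_resched(kvm, &iter, false))
                    continue;

            if (!is_shadow_present_pte(iter.old_spte) ||
                !is_last_spte(iter.old_spte, iter.level))
                    continue;

            tdp_mmu_set_spte(kvm, &iter, iter.old_spte & ~PT_WRITABLE_MASK);
    }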
2021-04-14  KVM: x86/mmu: Ensure forward progress when yielding in TDP MMU iter  (Ben Gardon; 3 files, -23/+23)
[ Upstream commit ed5e484b79e8a9b8be714bd85b6fc70bd6dc99a7 ]

In some functions the TDP iter risks not making forward progress if two threads livelock yielding to one another. This is possible if two threads are trying to execute wrprot_gfn_range. Each could write protect an entry and then yield. This would reset the tdp_iter's walk over the paging structure and the loop would end up repeating the same entry over and over, preventing either thread from making forward progress.

Fix this issue by only yielding if the loop has made forward progress since the last yield.

Fixes: a6a0b05da9f3 ("kvm: x86/mmu: Support dirty logging for the TDP MMU")
Reviewed-by: Peter Feiner <pfeiner@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-14-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
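The forward-progress rule can be sketched as follows; field names are taken from this patch series, while the locking and restart details are simplified assumptions:

    static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
                                                 struct tdp_iter *iter)
    {
            /* Only yield if we have stepped past the gfn we last yielded
             * at; otherwise two threads re-walking the same entry could
             * livelock handing the lock back and forth. */
            if (iter->next_last_level_gfn == iter->yielded_gfn)
                    return false;

            if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
                    cond_resched_lock(&kvm->mmu_lock);
                    iter->yielded_gfn = iter->next_last_level_gfn;
                    /* Restart the walk from next_last_level_gfn. */
                    tdp_iter_refresh_walk(iter);
                    return true;
            }

            return false;
    }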
2021-04-14  KVM: x86/mmu: Rename goal_gfn to next_last_level_gfn  (Ben Gardon; 2 files, -12/+12)
[ Upstream commit 74953d3530280dc53256054e1906f58d07bfba44 ]

The goal_gfn field in tdp_iter can be misleading as it implies that it is the iterator's final goal. It is really a target for the lowest gfn mapped by the leaf level SPTE the iterator will traverse towards. Change the field's name to be more precise.

Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-13-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-04-14  KVM: x86/mmu: Merge flush and non-flush tdp_mmu_iter_cond_resched  (Ben Gardon; 1 file, -29/+13)
[ Upstream commit e139a34ef9d5627a41e1c02210229082140d1f92 ]

The flushing and non-flushing variants of tdp_mmu_iter_cond_resched have almost identical implementations. Merge the two functions and add a flush parameter.

Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-12-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-04-14  KVM: x86/mmu: change TDP MMU yield function returns to match cond_resched  (Ben Gardon; 1 file, -10/+29)
[ Upstream commit e28a436ca4f65384cceaf3f4da0e00aa74244e6a ]

Currently the TDP MMU yield / cond_resched functions either return nothing or return true if the TLBs were not flushed. These are confusing semantics, especially when making control flow decisions in calling functions.

To clean things up, change both functions to have the same return value semantics as cond_resched: true if the thread yielded, false if it did not. If the function yielded in the _flush_ version, then the TLBs will have been flushed.

Reviewed-by: Peter Feiner <pfeiner@google.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-2-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
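With the cond_resched-style contract, a caller can be sketched like this (an illustrative loop, not a specific kernel function):

    bool flush_needed = false;

    for_each_tdp_pte(iter, root, start, end) {
            if (tdp_mmu_iter_flush_cond_resched(kvm, &iter)) {
                    /* true means the thread yielded; the _flush_ variant
                     * has already flushed the TLBs, so nothing is pending. */
                    flush_needed = false;
                    continue;
            }

            tdp_mmu_set_spte(kvm, &iter, 0);   /* zap the entry */
            flush_needed = true;
    }

    return flush_needed;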
2021-04-14  ARM: dts: turris-omnia: configure LED[2]/INTn pin as interrupt pin  (Marek Behún; 1 file, -0/+1)
commit a26c56ae67fa9fbb45a8a232dcd7ebaa7af16086 upstream.

Use the `marvell,reg-init` DT property to configure the LED[2]/INTn pin of the Marvell 88E1514 ethernet PHY on Turris Omnia into interrupt mode.

Without this the pin is by default in LED[2] mode, and the Marvell PHY driver configures LED[2] into "On - Link, Blink - Activity" mode.

This fixes the issue where the pca9538 GPIO/interrupt controller (which can't mask interrupts in HW) received too many interrupts and after a time started ignoring the interrupt with error message:

    IRQ 71: nobody cared

There is work in progress to have the Marvell PHY driver support parsing PHY LED nodes from OF and registering the LEDs as Linux LED class devices. Once this is done the PHY driver can also automatically set the pin into INTn mode if it does not find LED[2] in OF.

Until then, though, we fix this via the `marvell,reg-init` DT property.

Signed-off-by: Marek Behún <kabel@kernel.org>
Reported-by: Rui Salvaterra <rsalvaterra@gmail.com>
Fixes: 26ca8b52d6e1 ("ARM: dts: add support for Turris Omnia")
Cc: Uwe Kleine-König <uwe@kleine-koenig.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Gregory CLEMENT <gregory.clement@bootlin.com>
Cc: <stable@vger.kernel.org>
Tested-by: Rui Salvaterra <rsalvaterra@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-14  parisc: avoid a warning on u8 cast for cmpxchg on u8 pointers  (Gao Xiang; 1 file, -1/+1)
commit 4d752e5af63753ab5140fc282929b98eaa4bd12e upstream.

commit b344d6a83d01 ("parisc: add support for cmpxchg on u8 pointers") can generate a sparse warning ("cast truncates bits from constant value"), which has been reported several times [1] [2] [3].

The original code worked as expected, but let's silence the sparse warning as others did [4].

[1] https://lore.kernel.org/r/202104061220.nRMBwCXw-lkp@intel.com
[2] https://lore.kernel.org/r/202012291914.T5Agcn99-lkp@intel.com
[3] https://lore.kernel.org/r/202008210829.KVwn7Xeh%25lkp@intel.com
[4] https://lore.kernel.org/r/20210315131512.133720-2-jacopo+renesas@jmondi.org

Cc: Liam Beguin <liambeguin@gmail.com>
Cc: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org # v5.8+
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-14  nds32: flush_dcache_page: use page_mapping_file to avoid races with swapoff  (Mike Rapoport; 1 file, -1/+1)
commit a3a8833dffb7e7329c2586b8bfc531adb503f123 upstream.

Commit cb9f753a3731 ("mm: fix races between swapoff and flush dcache") updated flush_dcache_page implementations on several architectures to use page_mapping_file() in order to avoid races between page_mapping() and swapoff().

This update missed arch/nds32 and there is a possibility of a race there.

Replace page_mapping() with page_mapping_file() in nds32 implementation of flush_dcache_page().

Link: https://lkml.kernel.org/r/20210330175126.26500-1-rppt@kernel.org
Fixes: cb9f753a3731 ("mm: fix races between swapoff and flush dcache")
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Greentime Hu <green.hu@gmail.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
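The change itself is a one-line substitution inside the nds32 flush_dcache_page() (sketch; surrounding code omitted):

    /* was: mapping = page_mapping(page);
     * page_mapping_file() returns NULL for swap cache pages, so a page
     * racing with swapoff() can no longer yield a stale swap mapping. */
    mapping = page_mapping_file(page);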
2021-04-14  ia64: fix user_stack_pointer() for ptrace()  (Sergei Trofimovich; 1 file, -7/+1)
commit 7ad1e366167837daeb93d0bacb57dee820b0b898 upstream.

ia64 has two stacks:

- memory stack (or stack), pointed at by r12

- register backing store (register stack), pointed at by ar.bsp/ar.bspstore, with complications around the dirty register frame on the CPU.

In [1] Dmitry noticed that PTRACE_GET_SYSCALL_INFO returns the register stack instead of the memory stack. The bug comes from the fact that user_stack_pointer() and current_user_stack_pointer() don't return the same register:

    ulong user_stack_pointer(struct pt_regs *regs) { return regs->ar_bspstore; }
    #define current_user_stack_pointer() (current_pt_regs()->r12)

The change gets both back in sync.

I think ptrace(PTRACE_GET_SYSCALL_INFO) is the only user affected by this bug on ia64.

The change fixes the 'rt_sigreturn.gen.test' strace test where it was observed initially.

Link: https://bugs.gentoo.org/769614 [1]
Link: https://lkml.kernel.org/r/20210331084447.2561532-1-slyfox@gentoo.org
Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org>
Reported-by: Dmitry V. Levin <ldv@altlinux.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
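In other words, the fix makes user_stack_pointer() report the memory stack, matching current_user_stack_pointer() (a sketch of the resulting helper):

    static inline unsigned long user_stack_pointer(struct pt_regs *regs)
    {
            /* was: return regs->ar_bspstore;  (register backing store) */
            return regs->r12;                 /* memory stack pointer */
    }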
2021-04-14  ACPI: processor: Fix build when CONFIG_ACPI_PROCESSOR=m  (Vitaly Kuznetsov; 2 files, -15/+13)
commit fa26d0c778b432d3d9814ea82552e813b33eeb5c upstream.

Commit 8cdddd182bd7 ("ACPI: processor: Fix CPU0 wakeup in acpi_idle_play_dead()") tried to fix CPU0 hotplug breakage by copying wakeup_cpu0() + start_cpu0() logic from hlt_play_dead()/mwait_play_dead() into acpi_idle_play_dead(). The problem is that these functions are not exported to modules, so the build fails when CONFIG_ACPI_PROCESSOR=m.

The issue could've been fixed by exporting both wakeup_cpu0()/start_cpu0() (the latter from assembly) but it seems putting the whole pattern into a new function and exporting it instead is better.

Reported-by: kernel test robot <lkp@intel.com>
Fixes: 8cdddd182bd7 ("ACPI: processor: Fix CPU0 wakeup in acpi_idle_play_dead()")
Cc: <stable@vger.kernel.org> # 5.10+
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-10  bpf, x86: Validate computation of branch displacements for x86-32  (Piotr Krysiuk; 1 file, -1/+10)
commit 26f55a59dc65ff77cd1c4b37991e26497fc68049 upstream.

The branch displacement logic in the BPF JIT compilers for x86 assumes that, for any generated branch instruction, the distance cannot increase between optimization passes.

But this assumption can be violated due to how the distances are computed. Specifically, whenever a backward branch is processed in do_jit(), the distance is computed by subtracting the positions in the machine code from different optimization passes. This is because part of addrs[] is already updated for the current optimization pass, before the branch instruction is visited.

And so the optimizer can expand blocks of machine code in some cases. This can confuse the optimizer logic, where it assumes that a fixed point has been reached for all machine code blocks once the total program size stops changing. And then the JIT compiler can output abnormal machine code containing incorrect branch displacements.

To mitigate this issue, we assert that a fixed point is reached while populating the output image. This rejects any problematic programs. The issue affects both x86-32 and x86-64. We mitigate separately to ease backporting.

Signed-off-by: Piotr Krysiuk <piotras@gmail.com>
Reviewed-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-10  bpf, x86: Validate computation of branch displacements for x86-64  (Piotr Krysiuk; 1 file, -1/+10)
commit e4d4d456436bfb2fe412ee2cd489f7658449b098 upstream.

The branch displacement logic in the BPF JIT compilers for x86 assumes that, for any generated branch instruction, the distance cannot increase between optimization passes.

But this assumption can be violated due to how the distances are computed. Specifically, whenever a backward branch is processed in do_jit(), the distance is computed by subtracting the positions in the machine code from different optimization passes. This is because part of addrs[] is already updated for the current optimization pass, before the branch instruction is visited.

And so the optimizer can expand blocks of machine code in some cases. This can confuse the optimizer logic, where it assumes that a fixed point has been reached for all machine code blocks once the total program size stops changing. And then the JIT compiler can output abnormal machine code containing incorrect branch displacements.

To mitigate this issue, we assert that a fixed point is reached while populating the output image. This rejects any problematic programs. The issue affects both x86-32 and x86-64. We mitigate separately to ease backporting.

Signed-off-by: Piotr Krysiuk <piotras@gmail.com>
Reviewed-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
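The assertion can be sketched as follows, in the pass that populates the output image; the condition is modeled on the commit description and the exact context is abbreviated:

    /* While writing instruction i into the image, verify that its end
     * offset still matches what the previous optimization pass recorded
     * in addrs[]; if not, no fixed point was reached and the program is
     * rejected instead of emitting bad displacements. */
    if (image && unlikely(proglen + ilen > oldproglen ||
                          proglen + ilen != addrs[i])) {
            pr_err("bpf_jit: fatal error\n");
            return -EFAULT;
    }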
2021-04-10  ia64: fix format strings for err_inject  (Sergei Trofimovich; 1 file, -11/+11)
[ Upstream commit 95d44a470a6814207d52dd6312203b0f4ef12710 ]

Fix warning with %lx / u64 mismatch:

    arch/ia64/kernel/err_inject.c: In function 'show_resources':
    arch/ia64/kernel/err_inject.c:62:22: warning: format '%lx' expects argument
      of type 'long unsigned int', but argument 3 has type 'u64'
      {aka 'long long unsigned int'}
       62 |  return sprintf(buf, "%lx", name[cpu]); \
          |                      ^~~~~~~

Link: https://lkml.kernel.org/r/20210313104312.1548232-1-slyfox@gentoo.org
Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-04-10  ia64: mca: allocate early mca with GFP_ATOMIC  (Sergei Trofimovich; 1 file, -1/+1)
[ Upstream commit f2a419cf495f95cac49ea289318b833477e1a0e2 ]

The sleep warning happens at early boot, right at secondary CPU activation:

    smp: Bringing up secondary CPUs ...
    BUG: sleeping function called from invalid context at mm/page_alloc.c:4942
    in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 0, name: swapper/1
    CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.12.0-rc2-00007-g79e228d0b611-dirty #99
    ..
    Call Trace:
      show_stack+0x90/0xc0
      dump_stack+0x150/0x1c0
      ___might_sleep+0x1c0/0x2a0
      __might_sleep+0xa0/0x160
      __alloc_pages_nodemask+0x1a0/0x600
      alloc_page_interleave+0x30/0x1c0
      alloc_pages_current+0x2c0/0x340
      __get_free_pages+0x30/0xa0
      ia64_mca_cpu_init+0x2d0/0x3a0
      cpu_init+0x8b0/0x1440
      start_secondary+0x60/0x700
      start_ap+0x750/0x780
    Fixed BSP b0 value from CPU 1

As I understand it, interrupts are not enabled yet and the system has a lot of memory. There is little chance to sleep, so the switch to GFP_ATOMIC should be a no-op.

Link: https://lkml.kernel.org/r/20210315085045.204414-1-slyfox@gentoo.org
Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
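The one-line change amounts to the following (sketch; the surrounding ia64_mca_cpu_init() code is omitted):

    /* was: __get_free_pages(GFP_KERNEL, get_order(sz));
     * Early secondary-CPU bringup runs with IRQs disabled, so the MCA
     * bootstrap memory must come from a non-sleeping allocation. */
    data = (void *)__get_free_pages(GFP_ATOMIC, get_order(sz));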
2021-04-10  x86/build: Turn off -fcf-protection for realmode targets  (Arnd Bergmann; 1 file, -1/+1)
[ Upstream commit 9fcb51c14da2953de585c5c6e50697b8a6e91a7b ]

The new Ubuntu GCC packages turn on -fcf-protection globally, which causes a build failure in the x86 realmode code:

    cc1: error: ‘-fcf-protection’ is not compatible with this target

Turn it off explicitly on compilers that understand this option.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20210323124846.1584944-1-arnd@kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-04-10  bpf, x86: Use kvmalloc_array instead kmalloc_array in bpf_jit_comp  (Yonghong Song; 1 file, -2/+2)
[ Upstream commit de920fc64cbaa031f947e9be964bda05fd090380 ]

x86 bpf_jit_comp.c used kmalloc_array to store jited addresses for each bpf insn. With a large bpf program, we have seen the following allocation failures in our production server:

    page allocation failure: order:5, mode:0x40cc0(GFP_KERNEL|__GFP_COMP),
      nodemask=(null),cpuset=/,mems_allowed=0
    Call Trace:
      dump_stack+0x50/0x70
      warn_alloc.cold.120+0x72/0xd2
      ? __alloc_pages_direct_compact+0x157/0x160
      __alloc_pages_slowpath+0xcdb/0xd00
      ? get_page_from_freelist+0xe44/0x1600
      ? vunmap_page_range+0x1ba/0x340
      __alloc_pages_nodemask+0x2c9/0x320
      kmalloc_order+0x18/0x80
      kmalloc_order_trace+0x1d/0xa0
      bpf_int_jit_compile+0x1e2/0x484
      ? kmalloc_order_trace+0x1d/0xa0
      bpf_prog_select_runtime+0xc3/0x150
      bpf_prog_load+0x480/0x720
      ? __mod_memcg_lruvec_state+0x21/0x100
      __do_sys_bpf+0xc31/0x2040
      ? close_pdeo+0x86/0xe0
      do_syscall_64+0x42/0x110
      entry_SYSCALL_64_after_hwframe+0x44/0xa9
    RIP: 0033:0x7f2f300f7fa9
    Code: Bad RIP value.

Dumped assembly:

    ffffffff810b6d70 <bpf_int_jit_compile>:
    ; {
    ffffffff810b6d70: e8 eb a5 b4 00   callq  0xffffffff81c01360 <__fentry__>
    ffffffff810b6d75: 41 57            pushq  %r15
    ...
    ffffffff810b6f39: e9 72 fe ff ff   jmp    0xffffffff810b6db0 <bpf_int_jit_compile+0x40>
    ; addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
    ffffffff810b6f3e: 8b 45 0c         movl   12(%rbp), %eax
    ; return __kmalloc(bytes, flags);
    ffffffff810b6f41: be c0 0c 00 00   movl   $3264, %esi
    ; addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
    ffffffff810b6f46: 8d 78 01         leal   1(%rax), %edi
    ; if (unlikely(check_mul_overflow(n, size, &bytes)))
    ffffffff810b6f49: 48 c1 e7 02      shlq   $2, %rdi
    ; return __kmalloc(bytes, flags);
    ffffffff810b6f4d: e8 8e 0c 1d 00   callq  0xffffffff81287be0 <__kmalloc>
    ; if (!addrs) {
    ffffffff810b6f52: 48 85 c0         testq  %rax, %rax

Change kmalloc_array() to kvmalloc_array() to avoid potential allocation error for big bpf programs.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210309015647.3657852-1-yhs@fb.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
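The fix swaps the allocation and free pair (sketch; error handling abbreviated):

    int *addrs;

    /* was: addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
     * kvmalloc_array() falls back to vmalloc() when a large physically
     * contiguous allocation (order-5 in the report above) cannot be
     * satisfied. */
    addrs = kvmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
    if (!addrs)
            goto out;

    /* ... JIT passes ... */

    kvfree(addrs);   /* kvfree() handles both kmalloc and vmalloc memory */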
2021-04-10  ARM: dts: am33xx: add aliases for mmc interfaces  (Mans Rullgard; 1 file, -0/+3)
[ Upstream commit 9bbce32a20d6a72c767a7f85fd6127babd1410ac ]

Without DT aliases, the numbering of mmc interfaces is unpredictable. Adding them makes it possible to refer to devices consistently. The popular suggestion to use UUIDs obviously doesn't work with a blank device fresh from the factory.

See commit fa2d0aa96941 ("mmc: core: Allow setting slot index via device tree alias") for more discussion.

Signed-off-by: Mans Rullgard <mans@mansr.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-04-07  bpf: Use NOP_ATOMIC5 instead of emit_nops(&prog, 5) for BPF_TRAMP_F_CALL_ORIG  (Stanislav Fomichev; 1 file, -1/+2)
commit b9082970478009b778aa9b22d5561eef35b53b63 upstream.

__bpf_arch_text_poke does rewrite only for an atomic nop5; emit_nops(xxx, 5) emits a non-atomic one, which breaks fentry/fexit with k8 atomics:

    P6_NOP5 == P6_NOP5_ATOMIC (0f1f440000 == 0f1f440000)
    K8_NOP5 != K8_NOP5_ATOMIC (6666906690 != 6666666690)

Can be reproduced by doing "ideal_nops = k8_nops" in arch_init_ideal_nops() and running the fexit_bpf2bpf selftest.

Fixes: e21aa341785c ("bpf: Fix fexit trampoline.")
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210320000001.915366-1-sdf@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-07  riscv: evaluate put_user() arg before enabling user access  (Ben Dooks; 1 file, -2/+5)
commit 285a76bb2cf51b0c74c634f2aaccdb93e1f2a359 upstream.

The <asm/uaccess.h> header has a problem with put_user(a, ptr) if the 'a' is not a simple variable, such as a function. This can lead to the compiler producing code as so:

    1: enable_user_access()
    2: evaluate 'a' into register 'r'
    3: put 'r' to 'ptr'
    4: disable_user_access()

The issue is that 'a' is now being evaluated with the user memory protections disabled. So we try and force the evaluation by assigning 'x' to __val at the start, and hoping the compiler barriers in enable_user_access() do the job of ordering step 2 before step 1.

This has shown up in a bug where 'a' sleeps and thus schedules out and loses the SR_SUM flag. This isn't sufficient to fully fix, but should reduce the window of opportunity. The first instance of this we found is in schedule_tail() where the code does:

    $ less -N kernel/sched/core.c

    4263  if (current->set_child_tid)
    4264          put_user(task_pid_vnr(current), current->set_child_tid);

Here, the task_pid_vnr(current) is called within the block that has enabled the user memory access. This can be made worse with KASAN which makes task_pid_vnr() a rather large call with plenty of opportunity to sleep.

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Reported-by: syzbot+e74b94fe601ab9552d69@syzkaller.appspotmail.com
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
--
Changes since v1:
- fixed formatting and updated the patch description with more info
Changes since v2:
- fixed commenting on __put_user() (schwab@linux-m68k.org)
Change since v3:
- fixed RFC in patch title. Should be ready to merge.
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
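A hedged sketch of the resulting put_user() shape; the riscv internals are simplified and __put_user_internal() is an illustrative stand-in for the real store helper:

    #define put_user(x, ptr)                                             \
    ({                                                                   \
            __typeof__(*(ptr)) __val = (x); /* evaluate BEFORE SR_SUM */ \
            long __pu_err;                                               \
                                                                         \
            __enable_user_access();                                      \
            __pu_err = __put_user_internal(__val, (ptr));                \
            __disable_user_access();                                     \
            __pu_err;                                                    \
    })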
2021-04-07  s390/vdso: fix tod_steering_delta type  (Heiko Carstens; 1 file, -1/+1)
commit b24bacd67ffddd9192c4745500fd6f73dbfe565e upstream.

The s390 specific vdso function __arch_get_hw_counter() is supposed to consider tod clock steering.

If a tod clock steering event happens and the tod clock is set to a new value, __arch_get_hw_counter() will not return the real tod clock value but will slowly drift it from the old delta until the returned value finally matches the real tod clock value again.

Unfortunately the type of tod_steering_delta is unsigned while it is supposed to be signed. The sign of tod_steering_delta determines the direction in which the vdso code drifts the clock value. Worst case is now that instead of drifting the clock slowly it will jump into the opposite direction by a factor of two.

Fix this by simply making tod_steering_delta signed.

Fixes: 4bff8cb54502 ("s390: convert to GENERIC_VDSO")
Cc: <stable@vger.kernel.org> # 5.10
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
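The fix itself is a one-character type change (sketch of the affected field; the surrounding struct is abbreviated):

    /* was: u64 tod_steering_delta;
     * The vdso drifts the returned clock toward the steered value, and
     * the sign of the delta selects the direction, so the field must be
     * signed. */
    s64 tod_steering_delta;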
2021-04-07  s390/vdso: copy tod_steering_delta value to vdso_data page  (Heiko Carstens; 1 file, -0/+1)
commit 72bbc226ed2ef0a46c165a482861fff00dd6d4e1 upstream.

When converting the vdso assembler code to C it was forgotten to actually copy the tod_steering_delta value to the vdso_data page.

Which in turn means that tod clock steering will not work correctly.

Fix this by simply copying the value whenever it is updated.

Fixes: 4bff8cb54502 ("s390: convert to GENERIC_VDSO")
Cc: <stable@vger.kernel.org> # 5.10
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-07  KVM: SVM: ensure that EFER.SVME is set when running nested guest or on nested vmexit  (Paolo Bonzini; 1 file, -1/+17)
commit 3c346c0c60ab06a021d1c0884a0ef494bc4ee3a7 upstream.

Fixing nested_vmcb_check_save to avoid all TOC/TOU races is a bit harder in released kernels, so do the bare minimum by avoiding that EFER.SVME is cleared. This is problematic because svm_set_efer frees the data structures for nested virtualization if EFER.SVME is cleared.

Also check that EFER.SVME remains set after a nested vmexit; clearing it could happen if the bit is zero in the save area that is passed to KVM_SET_NESTED_STATE (the save area of the nested state corresponds to the nested hypervisor's state and is restored on the next nested vmexit).

Cc: stable@vger.kernel.org
Fixes: 2fcf4876ada ("KVM: nSVM: implement on demand allocation of the nested state")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-07  KVM: SVM: load control fields from VMCB12 before checking them  (Paolo Bonzini; 1 file, -4/+6)
commit a58d9166a756a0f4a6618e4f593232593d6df134 upstream.

Avoid races between check and use of the nested VMCB controls. This for example ensures that the VMRUN intercept is always reflected to the nested hypervisor, instead of being processed by the host.

Without this patch, it is possible to end up with svm->nested.hsave pointing to the MSR permission bitmap for nested guests. This bug is CVE-2021-29657.

Reported-by: Felix Wilhelm <fwilhelm@google.com>
Cc: stable@vger.kernel.org
Fixes: 2fcf4876ada ("KVM: nSVM: implement on demand allocation of the nested state")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-07  xtensa: move coprocessor_flush to the .text section  (Max Filippov; 1 file, -31/+33)
commit ab5eb336411f18fd449a1fb37d36a55ec422603f upstream.

coprocessor_flush is not a part of the fast exception handlers, but it uses parts of the fast coprocessor handling code, which is why it is in the same source file. It uses the call0 opcode to invoke those parts, so there are no limitations on their relative location, but the rest of the code calls coprocessor_flush with call8, and that doesn't work when vectors are placed in a different gigabyte-aligned area than the rest of the kernel.

Move coprocessor_flush from the .exception.text section to .text so that it's reachable from the rest of the kernel with call8.

Cc: stable@vger.kernel.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-07  xtensa: fix uaccess-related livelock in do_page_fault  (Max Filippov; 1 file, -1/+4)
commit 7b9acbb6aad4f54623dcd4bd4b1a60fe0c727b09 upstream.

If a uaccess (e.g. get_user()) triggers a fault and there's a fault signal pending, the handler will return to the uaccess without having performed a uaccess fault fixup, and so the CPU will immediately execute the uaccess instruction again, whereupon it will livelock bouncing between that instruction and the fault handler.

https://lore.kernel.org/lkml/20210121123140.GD48431@C02TD0UTHF1T.local/

Cc: stable@vger.kernel.org
Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-07  ACPI: processor: Fix CPU0 wakeup in acpi_idle_play_dead()  (Vitaly Kuznetsov; 2 files, -1/+2)
commit 8cdddd182bd7befae6af49c5fd612893f55d6ccb upstream.

Commit 496121c02127 ("ACPI: processor: idle: Allow probing on platforms with one ACPI C-state") broke CPU0 hotplug on certain systems; e.g. I'm observing the following on AWS Nitro (e.g. r5b.xlarge, but other instance types are affected as well):

    # echo 0 > /sys/devices/system/cpu/cpu0/online
    # echo 1 > /sys/devices/system/cpu/cpu0/online
    <10 seconds delay>
    -bash: echo: write error: Input/output error

In fact, the above mentioned commit only revealed the problem and did not introduce it. On x86, to wake up a CPU an NMI is being used, and the hlt_play_dead()/mwait_play_dead() loops are prepared to handle it:

    /*
     * If NMI wants to wake up CPU0, start CPU0.
     */
    if (wakeup_cpu0())
            start_cpu0();

cpuidle_play_dead() -> acpi_idle_play_dead() (which is now being called on systems where it wasn't called before the above mentioned commit) serves the same purpose, but it doesn't have a path for CPU0. What happens now on wakeup is:

- NMI is sent to CPU0
- wakeup_cpu0_nmi() works as expected
- we get back to the while (1) loop in acpi_idle_play_dead()
- safe_halt() puts CPU0 to sleep again.

The straightforward/minimal fix is to add the special handling for CPU0 on x86, and that's what this patch does.

Fixes: 496121c02127 ("ACPI: processor: idle: Allow probing on platforms with one ACPI C-state")
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: 5.10+ <stable@vger.kernel.org> # 5.10+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
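The minimal fix described above amounts to giving acpi_idle_play_dead()'s idle loop the same escape hatch quoted from hlt/mwait_play_dead() (sketch; x86-specific guards omitted):

    while (1) {
            safe_halt();

            /* If NMI wants to wake up CPU0, start CPU0. */
            if (wakeup_cpu0())
                    start_cpu0();
    }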
2021-04-07  ACPI: tables: x86: Reserve memory occupied by ACPI tables  (Rafael J. Wysocki; 2 files, -18/+15)
commit 1a1c130ab7575498eed5bcf7220037ae09cd1f8a upstream.

The following problem has been reported by George Kennedy:

    Since commit 7fef431be9c9 ("mm/page_alloc: place pages to tail in
    __free_pages_core()") the following use after free occurs intermittently
    when ACPI tables are accessed.

    BUG: KASAN: use-after-free in ibft_init+0x134/0xc49
    Read of size 4 at addr ffff8880be453004 by task swapper/0/1
    CPU: 3 PID: 1 Comm: swapper/0 Not tainted 5.12.0-rc1-7a7fd0d #1
    Call Trace:
      dump_stack+0xf6/0x158
      print_address_description.constprop.9+0x41/0x60
      kasan_report.cold.14+0x7b/0xd4
      __asan_report_load_n_noabort+0xf/0x20
      ibft_init+0x134/0xc49
      do_one_initcall+0xc4/0x3e0
      kernel_init_freeable+0x5af/0x66b
      kernel_init+0x16/0x1d0
      ret_from_fork+0x22/0x30

    ACPI tables mapped via kmap() do not have their mapped pages reserved
    and the pages can be "stolen" by the buddy allocator.

Apparently, on the affected system, the ACPI table in question is not located in "reserved" memory, like ACPI NVS or ACPI Data, that will not be used by the buddy allocator, so the memory occupied by that table has to be explicitly reserved to prevent the buddy allocator from using it.

In order to address this problem, rearrange the initialization of the ACPI tables on x86 to locate the initial tables earlier and reserve the memory occupied by them.

The other architectures using ACPI should not be affected by this change.

Link: https://lore.kernel.org/linux-acpi/1614802160-29362-1-git-send-email-george.kennedy@oracle.com/
Reported-by: George Kennedy <george.kennedy@oracle.com>
Tested-by: George Kennedy <george.kennedy@oracle.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: 5.10+ <stable@vger.kernel.org> # 5.10+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-07  bpf: Fix fexit trampoline.  (Alexei Starovoitov; 1 file, -4/+22)
[ Upstream commit e21aa341785c679dd409c8cb71f864c00fe6c463 ]

The fexit/fmod_ret programs can be attached to kernel functions that can sleep. The synchronize_rcu_tasks() will not wait for such tasks to complete. In such case the trampoline image will be freed and when the task wakes up the return IP will point to freed memory causing the crash.

Solve this by adding percpu_ref_get/put for the duration of trampoline and separate trampoline vs its image life times.

The "half page" optimization has to be removed, since first_half->second_half->first_half transition cannot be guaranteed to complete in deterministic time. Every trampoline update becomes a new image. The image with fmod_ret or fexit progs will be freed via percpu_ref_kill and call_rcu_tasks. Together they will wait for the original function and trampoline asm to complete. The trampoline is patched from nop to jmp to skip fexit progs. They are freed independently from the trampoline. The image with fentry progs only will be freed via call_rcu_tasks_trace+call_rcu_tasks which will wait for both sleepable and non-sleepable progs to complete.

Fixes: fec56f5890d9 ("bpf: Introduce BPF trampoline")
Reported-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Paul E. McKenney <paulmck@kernel.org> # for RCU
Link: https://lore.kernel.org/bpf/20210316210007.38949-1-alexei.starovoitov@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-04-07  arm64: mm: correct the inside linear map range during hotplug check  (Pavel Tatashin; 1 file, -2/+18)
[ Upstream commit ee7febce051945be28ad86d16a15886f878204de ]

Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the linear map range is not checked correctly.

The start physical address that the linear map covers can actually be at the end of the range because of randomization. Check for that, and if so, reduce it to 0.

This can be verified on QEMU by setting kaslr-seed to ~0ul:

    memstart_offset_seed = 0xffff
    START: __pa(_PAGE_OFFSET(vabits_actual)) = ffff9000c0000000
    END:   __pa(PAGE_END - 1) =                    1000bfffffff

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Fixes: 58284a901b42 ("arm64/mm: Validate hotplug range before creating linear mapping")
Tested-by: Tyler Hicks <tyhicks@linux.microsoft.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20210216150351.129018-2-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
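The check described above can be sketched as follows; the variable names are illustrative, simplified from the commit message:

    u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
    u64 end_linear_pa   = __pa(PAGE_END - 1);

    /* With KASLR the linear map may start at the top of the covered
     * range (as in the QEMU example above); widen the check to the
     * whole range in that case. */
    if (start_linear_pa > end_linear_pa)
            start_linear_pa = 0;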
2021-03-30  Revert "xen: fix p2m size in dom0 for disabled memory hotplug case"  (Roger Pau Monne; 3 files, -17/+14)
commit af44a387e743ab7aa39d3fb5e29c0a973cf91bdc upstream.

This partially reverts commit 882213990d32 ("xen: fix p2m size in dom0 for disabled memory hotplug case")

There's no need to special case XEN_UNPOPULATED_ALLOC anymore in order to correctly size the p2m. The generic memory hotplug option has already been tied together with the Xen hotplug limit, so enabling memory hotplug should already trigger a properly sized p2m on Xen PV.

Note that XEN_UNPOPULATED_ALLOC depends on ZONE_DEVICE which pulls in MEMORY_HOTPLUG.

Leave the check added to __set_phys_to_machine and the adjusted comment about EXTRA_MEM_RATIO.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: https://lore.kernel.org/r/20210324122424.58685-3-roger.pau@citrix.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[boris: fixed formatting issues]
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2021-03-30  x86/mem_encrypt: Correct physical address calculation in __set_clr_pte_enc()  (Isaku Yamahata; 1 file, -1/+1)
commit 8249d17d3194eac064a8ca5bc5ca0abc86feecde upstream.

The pfn variable contains the page frame number as returned by the pXX_pfn() functions, shifted to the right by PAGE_SHIFT to remove the page bits. After page protection computations are done to it, it gets shifted back to the physical address using page_level_shift().

That is wrong, of course, because that function determines the shift length based on the level of the page in the page table but in all the cases, it was shifted by PAGE_SHIFT before.

Therefore, shift it back using PAGE_SHIFT to get the correct physical address.

[ bp: Rewrite commit message. ]

Fixes: dfaaec9033b8 ("x86: Add support for changing memory encryption attribute in early boot")
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/81abbae1657053eccc535c16151f63cd049dcb97.1616098294.git.isaku.yamahata@intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
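The correction in essence (sketch):

    /* pfn came from pXX_pfn(), i.e. a PAGE_SHIFT-based frame number, so
     * convert back with PAGE_SHIFT rather than the level-dependent shift.
     * was: pa = pfn << page_level_shift(level); */
    pa = pfn << PAGE_SHIFT;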
2021-03-30  xen/x86: make XEN_BALLOON_MEMORY_HOTPLUG_LIMIT depend on MEMORY_HOTPLUG  (Roger Pau Monne; 1 file, -2/+2)
[ Upstream commit 2b514ec72706a31bea0c3b97e622b81535b5323a ]

The Xen memory hotplug limit should depend on the memory hotplug generic option, rather than the Xen balloon configuration. It's possible to have a kernel with generic memory hotplug enabled, but without Xen balloon enabled, at which point memory hotplug won't work correctly due to the size limitation of the p2m.

Rename the option to XEN_MEMORY_HOTPLUG_LIMIT since it's no longer tied to ballooning.

Fixes: 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated memory")
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: https://lore.kernel.org/r/20210324122424.58685-2-roger.pau@citrix.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-03-30  arm64: kdump: update ppos when reading elfcorehdr  (Pavel Tatashin; 1 file, -0/+2)
[ Upstream commit 141f8202cfa4192c3af79b6cbd68e7760bb01b5a ]

The ppos points to a position in the old kernel memory (and in case of arm64, in the crash kernel, since elfcorehdr is passed as a segment). The function should update the ppos by the amount that was read. It is only by accident that this bug is not exposed; other platforms update this value properly. So, fix it in the ARM64 version of elfcorehdr_read() as well.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Fixes: e62aaeac426a ("arm64: kdump: provide /proc/vmcore file")
Reviewed-by: Tyler Hicks <tyhicks@linux.microsoft.com>
Link: https://lore.kernel.org/r/20210319205054.743368-1-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
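A sketch of the fixed reader (simplified; on arm64 the read comes from the crash kernel's elfcorehdr segment):

    ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
    {
            memcpy(buf, phys_to_virt((phys_addr_t)*ppos), count);
            *ppos += count;   /* previously missing: advance the position */
            return count;
    }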
2021-03-30  ARM: dts: imx6ull: fix ubi filesystem mount failed  (dillon min; 1 file, -0/+1)
[ Upstream commit e4817a1b6b77db538bc0141c3b138f2df803ce87 ]

For the NAND ECC layout there is a dependency between the old kernel's NAND driver settings and the current ones: if the old kernel used 4-bit ECC, the new kernel should use 4-bit ECC as well, otherwise we run into the following errors at filesystem mount time. So, enable fsl,use-minimum-ecc from the device tree to fix this mismatch:

    [    9.449265] ubi0: scanning is finished
    [    9.463968] ubi0 warning: ubi_io_read: error -74 (ECC error) while reading 22528 bytes from PEB 513:4096, read only 22528 bytes, retry
    [    9.486940] ubi0 warning: ubi_io_read: error -74 (ECC error) while reading 22528 bytes from PEB 513:4096, read only 22528 bytes, retry
    [    9.509906] ubi0 warning: ubi_io_read: error -74 (ECC error) while reading 22528 bytes from PEB 513:4096, read only 22528 bytes, retry
    [    9.532845] ubi0 error: ubi_io_read: error -74 (ECC error) while reading 22528 bytes from PEB 513:4096, read 22528 bytes

Fixes: f9ecf10cb88c ("ARM: dts: imx6ull: add MYiR MYS-6ULX SBC")
Signed-off-by: dillon min <dillon.minfei@gmail.com>
Reviewed-by: Fabio Estevam <festevam@gmail.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-03-30  ARM: OMAP2+: Fix smartreflex init regression after dropping legacy data  (Tony Lindgren; 1 file, -17/+58)
[ Upstream commit fbfa463be8dc7957ee4f81556e9e1ea2a951807d ]

When I dropped legacy data for omap4 and dra7 smartreflex in favor of device tree based data, it seems I only tested for the "SmartReflex Class3 initialized" line in dmesg. I missed the fact that there is also omap_devinit_smartreflex() that happens later, and now it produces an error on boot: "No Voltage table for the corresponding vdd. Cannot create debugfs entries for n-values".

This happens as we no longer have the smartreflex instance legacy data, and have not yet moved completely to device tree based booting for the driver. Let's fix the issue by changing the smartreflex init to use names. This should all eventually go away in favor of doing the init in the driver based on the devicetree compatible value.

Note that dra7xx_init_early() is not calling any voltage domain init like omap54xx_voltagedomains_init(), or a dra7 specific voltagedomains init. This means that on dra7 smartreflex is still not fully initialized, and it also seems to be missing the related devicetree nodes.

Fixes: a6b1e717e942 ("ARM: OMAP2+: Drop legacy platform data for omap4 smartreflex")
Fixes: e54740b4afe8 ("ARM: OMAP2+: Drop legacy platform data for dra7 smartreflex")
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-03-30  ARM: dts: at91-sama5d27_som1: fix phy address to 7  (Claudiu Beznea; 1 file, -2/+2)
commit 221c3a09ddf70a0a51715e6c2878d8305e95c558 upstream.

Fix the phy address to 7 for the Ethernet PHY on the SAMA5D27 SOM1. No connection is established if phy address 0 is used.

The board uses the 24-pin version of the KSZ8081RNA part. KSZ8081RNA pin 16 (REFCLK), which acts as PHYAD bit [2], has a weak internal pull-down; but at reset it is connected to PD09 of the MPU, which has an internal pull-up, forming PHYAD[2:0] = 7.

Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Fixes: 2f61929eb10a ("ARM: dts: at91: at91-sama5d27_som1: fix PHY ID")
Cc: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Cc: <stable@vger.kernel.org> # 4.14+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>