<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/arch, branch v4.18.8</title>
<subtitle>Clone of https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git</subtitle>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/'/>
<entry>
<title>x86: kvm: avoid unused variable warning</title>
<updated>2018-09-15T07:47:02+00:00</updated>
<author>
<name>Arnd Bergmann</name>
<email>arnd@arndb.de</email>
</author>
<published>2018-08-20T21:37:50+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=838ddbf08cc724778431c927aa0f8c2c2ad4501e'/>
<id>838ddbf08cc724778431c927aa0f8c2c2ad4501e</id>
<content type='text'>
commit 7288bde1f9df6c1475675419bdd7725ce84dec56 upstream.

Removing one of the two accesses of the maxphyaddr variable led to
a harmless warning:

arch/x86/kvm/x86.c: In function 'kvm_set_mmio_spte_mask':
arch/x86/kvm/x86.c:6563:6: error: unused variable 'maxphyaddr' [-Werror=unused-variable]

Removing the #ifdef seems to be the nicest workaround, as it
makes the code look cleaner than adding another #ifdef.
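
A minimal sketch of the pattern (hypothetical names, not the actual
kvm_set_mmio_spte_mask() hunk):

    static void set_mask(void)
    {
            int maxphyaddr = phys_bits();   /* hypothetical helper */
            u64 mask = 0;
    #ifdef CONFIG_X86_64
            mask = 3ull &lt;&lt; (maxphyaddr - 2);
    #endif
            use_mask(mask);                 /* hypothetical sink */
    }

On 32-bit configurations the only read of maxphyaddr disappears with
the #ifdef, hence the warning; dropping the #ifdef keeps the variable
used in every configuration.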

Fixes: 28a1f3ac1d0c ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs")
Signed-off-by: Arnd Bergmann &lt;arnd@arndb.de&gt;
Cc: stable@vger.kernel.org # L1TF
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>kvm: x86: Set highest physical address bits in non-present/reserved SPTEs</title>
<updated>2018-09-15T07:47:02+00:00</updated>
<author>
<name>Junaid Shahid</name>
<email>junaids@google.com</email>
</author>
<published>2018-08-14T17:15:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=d9b47449c1a17be65332e07c1e8acba0f8b27e10'/>
<id>d9b47449c1a17be65332e07c1e8acba0f8b27e10</id>
<content type='text'>
commit 28a1f3ac1d0c8558ee4453d9634dad891a6e922e upstream.

Always set the 5 uppermost supported physical address bits to 1 for SPTEs
that are marked as non-present or reserved, to make them unusable for
L1TF attacks from the guest. Currently, this just applies to MMIO SPTEs.
(We do not need to mark PTEs that are completely 0 as physical page 0
is already reserved.)

This allows mitigation of L1TF without disabling hyper-threading by using
shadow paging mode instead of EPT.
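
The shape of the mask, as a hedged sketch (not the exact
kvm_set_mmio_spte_mask() code; maxphyaddr comes from CPUID):

    #include &lt;stdint.h&gt;

    /* Set the 5 uppermost supported physical address bits so the
     * address a speculative load would use falls into reserved,
     * never-mapped physical space. */
    static uint64_t nonpresent_rsvd_mask(int maxphyaddr)
    {
            return ((1ull &lt;&lt; 5) - 1) &lt;&lt; (maxphyaddr - 5);
    }

    /* e.g. maxphyaddr == 46 gives bits 45..41: 0x00003e0000000000 */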

Signed-off-by: Junaid Shahid &lt;junaids@google.com&gt;
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>x86/xen: don't write ptes directly in 32-bit PV guests</title>
<updated>2018-09-15T07:47:02+00:00</updated>
<author>
<name>Juergen Gross</name>
<email>jgross@suse.com</email>
</author>
<published>2018-08-21T15:37:54+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=30566a3520bb5d5010782519b3efa70eb231c42f'/>
<id>30566a3520bb5d5010782519b3efa70eb231c42f</id>
<content type='text'>
commit f7c90c2aa4004808dff777ba6ae2c7294dd06851 upstream.

In some cases 32-bit PAE PV guests still write PTEs directly instead of
using hypercalls. This is especially bad when clearing a PTE, as that is
done via two 32-bit writes, which produce intermediate L1TF-attackable
PTEs.

Change the code to use hypercalls instead.
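
The hazard being closed, sketched (hypothetical, simplified):

    #include &lt;stdint.h&gt;

    /* Clearing a 64-bit PAE PTE with two 32-bit stores exposes an
     * intermediate state: the present bit (low word) is already 0
     * while the high physical address bits still linger, which is
     * exactly the kind of PTE L1TF speculatively dereferences. */
    void pte_clear_racy(volatile uint32_t pte[2])
    {
            pte[0] = 0;   /* P bit cleared, high PFN bits still set */
            pte[1] = 0;   /* only now is the PTE fully cleared */
    }

A hypercall lets the hypervisor update the full 64-bit entry in one
step, so no such intermediate PTE is ever visible.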

Signed-off-by: Juergen Gross &lt;jgross@suse.com&gt;
Reviewed-by: Jan Beulich &lt;jbeulich@suse.com&gt;
Signed-off-by: Boris Ostrovsky &lt;boris.ostrovsky@oracle.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>x86/pae: use 64 bit atomic xchg function in native_ptep_get_and_clear</title>
<updated>2018-09-15T07:47:01+00:00</updated>
<author>
<name>Juergen Gross</name>
<email>jgross@suse.com</email>
</author>
<published>2018-08-21T15:37:55+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=22b734b0c850139bb0cd31dcaa37cde7f00ccbd6'/>
<id>22b734b0c850139bb0cd31dcaa37cde7f00ccbd6</id>
<content type='text'>
commit b2d7a075a1ccef2fb321d595802190c8e9b39004 upstream.

Using only 32-bit writes for the pte will result in an intermediate
L1TF-vulnerable PTE. When running as a Xen PV guest, this will at once
switch the guest to shadow mode, resulting in a loss of performance.

Use arch_atomic64_xchg() instead which will perform the requested
operation atomically with all 64 bits.
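
A sketch of the fixed shape (simplified stand-in for
native_ptep_get_and_clear()):

    #include &lt;stdatomic.h&gt;
    #include &lt;stdint.h&gt;

    /* Read and clear the whole 64-bit PTE in a single atomic
     * exchange: no intermediate half-written state is observable. */
    uint64_t ptep_get_and_clear(_Atomic uint64_t *ptep)
    {
            return atomic_exchange(ptep, 0);
    }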

Some performance considerations according to:

https://software.intel.com/sites/default/files/managed/ad/dc/Intel-Xeon-Scalable-Processor-throughput-latency.pdf

The main number should be the latency, as there is no tight loop around
native_ptep_get_and_clear().

"lock cmpxchg8b" has a latency of 20 cycles, while "lock xchg" (with a
memory operand) isn't mentioned in that document. "lock xadd" (with xadd
having 3 cycles less latency than xchg) has a latency of 11, so we can
assume a latency of 14 for "lock xchg".

Signed-off-by: Juergen Gross &lt;jgross@suse.com&gt;
Reviewed-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Reviewed-by: Jan Beulich &lt;jbeulich@suse.com&gt;
Tested-by: Jason Andryuk &lt;jandryuk@gmail.com&gt;
Signed-off-by: Boris Ostrovsky &lt;boris.ostrovsky@oracle.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>x86/tsc: Prevent result truncation on 32bit</title>
<updated>2018-09-15T07:47:01+00:00</updated>
<author>
<name>Chuanhua Lei</name>
<email>chuanhua.lei@linux.intel.com</email>
</author>
<published>2018-09-06T10:03:23+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=dd458c42c28a36f76df001f5643ef8323ce9b28f'/>
<id>dd458c42c28a36f76df001f5643ef8323ce9b28f</id>
<content type='text'>
commit 17f6bac2249356c795339e03a0742cd79be3cab8 upstream.

Loops per jiffy is calculated by multiplying tsc_khz by 1e3 and then
dividing it by HZ.

Both tsc_khz and the temporary variable holding the multiplication result
are of type unsigned long, so on 32bit the result is truncated to the
lower 32 bits.

Use u64 as type for the temporary variable and cast tsc_khz to it before
multiplying.
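
The arithmetic, sketched (assumed shape of the calculation):

    #include &lt;stdint.h&gt;

    #define HZ 250

    /* 32bit: unsigned long is 32 bits wide, so tsc_khz * 1000 wraps
     * once it exceeds 2^32, i.e. for clocks above roughly 4.29 GHz. */
    unsigned long lpj_truncating(unsigned long tsc_khz)
    {
            return tsc_khz * 1000 / HZ;
    }

    /* the fix: widen before multiplying */
    uint64_t lpj_fixed(unsigned long tsc_khz)
    {
            return (uint64_t)tsc_khz * 1000 / HZ;
    }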

[ tglx: Massaged changelog and removed pointless braces ]

[ tglx: Backport to stable. Due to massive code changes the upstream
  	commit is not applicable anymore. The issue has gone unnoticed in
  	kernels pre 4.19 because the bogus LPJ value gets fixed up in a
  	later stage of early boot, but it still might cause subtle and hard
  	to debug issues between these two points. ]

Fixes: cf7a63ef4e02 ("x86/tsc: Calibrate tsc only once")
Signed-off-by: Chuanhua Lei &lt;chuanhua.lei@linux.intel.com&gt;
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: yixin.zhu@linux.intel.com
Cc: "H. Peter Anvin" &lt;hpa@zytor.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Len Brown &lt;len.brown@intel.com&gt;
Cc: Pavel Tatashin &lt;pasha.tatashin@microsoft.com&gt;
Cc: Rajvi Jingar &lt;rajvi.jingar@intel.com&gt;
Cc: Dou Liyang &lt;douly.fnst@cn.fujitsu.com&gt;
Link: https://lkml.kernel.org/r/1536228203-18701-1-git-send-email-chuanhua.lei@linux.intel.com
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>ARM: rockchip: Force CONFIG_PM on Rockchip systems</title>
<updated>2018-09-15T07:46:56+00:00</updated>
<author>
<name>Marc Zyngier</name>
<email>marc.zyngier@arm.com</email>
</author>
<published>2018-08-24T15:06:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=8515518d6365f19bc40ef6f8b690145c641cb4eb'/>
<id>8515518d6365f19bc40ef6f8b690145c641cb4eb</id>
<content type='text'>
[ Upstream commit d1558dfd9f22c99a5b8e1354ad881ee40749da89 ]

A number of the Rockchip-specific drivers (IOMMU, display controllers)
are now assuming that CONFIG_PM is set, and may completely misbehave
if that's not the case.

Since there is hardly any reason for this configuration option not
to be selected anyway, let's require it (in the same way Tegra already
does).
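
The shape of the change, sketched (the actual hunk is a one-line
select in the Rockchip platform Kconfig):

    config ARCH_ROCKCHIP
            ...
            select PM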

Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Olof Johansson &lt;olof@lixom.net&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@microsoft.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>arm64: rockchip: Force CONFIG_PM on Rockchip systems</title>
<updated>2018-09-15T07:46:56+00:00</updated>
<author>
<name>Marc Zyngier</name>
<email>marc.zyngier@arm.com</email>
</author>
<published>2018-08-24T15:06:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=c4e3acea0ebc79ed3b8d9c3e30a9d68275f16cee'/>
<id>c4e3acea0ebc79ed3b8d9c3e30a9d68275f16cee</id>
<content type='text'>
[ Upstream commit 7db7a8f5638a2ffe0c0c0d55b5186b6191fd6af7 ]

A number of the Rockchip-specific drivers (IOMMU, display controllers)
are now assuming that CONFIG_PM is set, and may completely misbehave
if that's not the case.

Since there is hardly any reason for this configuration option not
to be selected anyway, let's require it (in the same way Tegra already
does).

Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Olof Johansson &lt;olof@lixom.net&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@microsoft.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>kvm: nVMX: Fix fault vector for VMX operation at CPL &gt; 0</title>
<updated>2018-09-15T07:46:54+00:00</updated>
<author>
<name>Jim Mattson</name>
<email>jmattson@google.com</email>
</author>
<published>2018-07-27T16:18:50+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=fc73680f9cf770785ef0f7e79a0f4c613a6ebc43'/>
<id>fc73680f9cf770785ef0f7e79a0f4c613a6ebc43</id>
<content type='text'>
[ Upstream commit 36090bf43a6b835a42f515cb515ff6fa293a25fe ]

The fault that should be raised for a privilege level violation is #GP
rather than #UD.
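
A sketch of the corrected check (close to, but not verbatim, the
nested-VMX permission check):

    if (vmx_get_cpl(vcpu)) {
            /* privilege violation at CPL &gt; 0: #GP(0), not #UD */
            kvm_inject_gp(vcpu, 0);
            return 1;
    }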

Fixes: 727ba748e110b4 ("kvm: nVMX: Enforce cpl=0 for VMX instructions")
Signed-off-by: Jim Mattson &lt;jmattson@google.com&gt;
Reviewed-by: David Hildenbrand &lt;david@redhat.com&gt;
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@microsoft.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>KVM: vmx: track host_state.loaded using a loaded_vmcs pointer</title>
<updated>2018-09-15T07:46:54+00:00</updated>
<author>
<name>Sean Christopherson</name>
<email>sean.j.christopherson@intel.com</email>
</author>
<published>2018-07-23T19:32:42+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=68b0ce42a7f4146e03d93b8a5c59dcfd2b4e6608'/>
<id>68b0ce42a7f4146e03d93b8a5c59dcfd2b4e6608</id>
<content type='text'>
[ Upstream commit bd9966de4e14fb559e89a06f7f5c9aab2cc028b9 ]

Using 'struct loaded_vmcs*' to track whether the CPU registers
contain host or guest state kills two birds with one stone.

  1. The (effective) boolean host_state.loaded is poorly named.
     It does not track whether or not host state is loaded into
     the CPU registers (which most readers would expect), but
     rather tracks if host state has been saved AND guest state
     is loaded.

  2. Using a loaded_vmcs pointer provides a more robust framework
     for the optimized guest/host state switching, especially when
     considering per-VMCS enhancements.  To that end, WARN_ONCE
     if we try to switch to host state with a different VMCS than
     was last used to save host state.

Resolve an occurrence of the new WARN by setting loaded_vmcs after
the call to vmx_vcpu_put() in vmx_switch_vmcs().
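
A simplified sketch of the tracking change (field names assumed, not
the exact vmx.c layout):

    struct vmx_host_state {
            /* NULL: CPU registers hold host state; otherwise the
             * VMCS whose host state was saved before loading guest
             * state */
            struct loaded_vmcs *loaded_vmcs;
    };

    /* on the switch back to host state: */
    WARN_ONCE(vmx-&gt;host_state.loaded_vmcs != vmx-&gt;loaded_vmcs,
              "host state was saved with a different VMCS");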

Signed-off-by: Sean Christopherson &lt;sean.j.christopherson@intel.com&gt;
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@microsoft.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>powerpc/pseries: Avoid using the size greater than RTAS_ERROR_LOG_MAX.</title>
<updated>2018-09-15T07:46:54+00:00</updated>
<author>
<name>Mahesh Salgaonkar</name>
<email>mahesh@linux.vnet.ibm.com</email>
</author>
<published>2018-07-04T17:57:02+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=214d87aee3af055c96a6fc693ad39d0f9625e594'/>
<id>214d87aee3af055c96a6fc693ad39d0f9625e594</id>
<content type='text'>
[ Upstream commit 74e96bf44f430cf7a01de19ba6cf49b361cdfd6e ]

The global mce data buffer that is used to copy the rtas error log is
2048 (RTAS_ERROR_LOG_MAX) bytes in size. Before the copy we read
extended_log_length from the rtas error log header, then use the max of
extended_log_length and RTAS_ERROR_LOG_MAX as the size of the data to be
copied. Ideally the platform (phyp) will never send an extended error log
with size &gt; 2048. But if that happens, we risk a buffer overrun and
corruption. Fix this by using min_t instead.
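
A hedged sketch of the clamp (illustrative names, not the pseries
ras.c code):

    #include &lt;string.h&gt;

    #define RTAS_ERROR_LOG_MAX 2048
    static char mce_data_buf[RTAS_ERROR_LOG_MAX];

    void save_error_log(const void *log, int extended_log_length)
    {
            /* min_t(int, ...) in the kernel: never copy more than
             * the buffer holds, even if phyp reports more */
            int len = extended_log_length &lt; RTAS_ERROR_LOG_MAX
                            ? extended_log_length : RTAS_ERROR_LOG_MAX;
            memcpy(mce_data_buf, log, len);
    }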

Fixes: d368514c3097 ("powerpc: Fix corruption when grabbing FWNMI data")
Reported-by: Michal Suchanek &lt;msuchanek@suse.com&gt;
Signed-off-by: Mahesh Salgaonkar &lt;mahesh@linux.vnet.ibm.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@microsoft.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
</feed>
