<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/arch/powerpc/kernel/process.c, branch v6.6.131</title>
<subtitle>Clone of https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git</subtitle>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/'/>
<entry>
<title>powerpc/64s/slb: Fix SLB multihit issue during SLB preload</title>
<updated>2026-01-11T14:22:12+00:00</updated>
<author>
<name>Donet Tom</name>
<email>donettom@linux.ibm.com</email>
</author>
<published>2025-10-30T14:57:26+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=b13a3dbfa196af68eae2031f209743735ad416bf'/>
<id>b13a3dbfa196af68eae2031f209743735ad416bf</id>
<content type='text'>
commit 00312419f0863964625d6dcda8183f96849412c6 upstream.

On systems using the hash MMU, there is a software SLB preload cache that
mirrors the entries loaded into the hardware SLB buffer. This preload
cache is subject to periodic eviction, typically after every 256 context
switches, to remove old entries.

To optimize performance, the kernel skips switch_mmu_context() in
switch_mm_irqs_off() when the prev and next mm_struct are the same.
However, on hash MMU systems, this can lead to inconsistencies between
the hardware SLB and the software preload cache.
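
As a rough illustration of that skipped call (a simplified sketch, not
the actual switch_mm_irqs_off() body):

  /*
   * Simplified sketch only: when prev and next share the same
   * mm_struct the MMU context switch is skipped, so the hardware SLB
   * is left untouched and can drift out of sync with the software
   * preload cache.
   */
  static void sketch_switch_mm(struct mm_struct *prev,
                               struct mm_struct *next,
                               struct task_struct *tsk)
  {
          if (prev == next)
                  return;

          /* Different mm: the HW SLB is invalidated and reloaded
           * from the software preload cache via switch_slb(). */
          switch_mmu_context(prev, next, tsk);
  }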

If an SLB entry for a process is evicted from the software cache on one
CPU, and the same process later runs on another CPU without executing
switch_mmu_context(), the hardware SLB may retain stale entries. If the
kernel then attempts to reload that entry, it can trigger an SLB
multi-hit error.

The following timeline shows how stale SLB entries are created and can
cause a multi-hit error when a process moves between CPUs without an
MMU context switch.

CPU 0                                   CPU 1
-----                                    -----
Process P
exec                                    swapper/1
 load_elf_binary
  begin_new_exec
    activate_mm
     switch_mm_irqs_off
      switch_mmu_context
       switch_slb
       /*
        * This invalidates all
        * the entries in the HW
        * and setup the new HW
        * SLB entries as per the
        * preload cache.
        */
context_switch
sched_migrate_task migrates process P to cpu-1

Process swapper/0                       context switch (to process P)
(uses mm_struct of Process P)           switch_mm_irqs_off()
                                         switch_slb
                                           load_slb++
                                            /*
                                            * load_slb becomes 0 here
                                            * and we evict an entry from
                                            * the preload cache with
                                            * preload_age(). We still
                                            * keep HW SLB and preload
                                            * cache in sync, that is
                                            * because all HW SLB entries
                                            * anyways gets evicted in
                                            * switch_slb during SLBIA.
                                            * We then only add those
                                            * entries back in HW SLB,
                                            * which are currently
                                            * present in preload_cache
                                            * (after eviction).
                                            */
                                        load_elf_binary continues...
                                         setup_new_exec()
                                          slb_setup_new_exec()

                                        sched_switch event
                                        sched_migrate_task migrates
                                        process P to cpu-0

context_switch from swapper/0 to Process P
 switch_mm_irqs_off()
  /*
   * Since both prev and next mm struct are same we don't call
   * switch_mmu_context(). This will cause the HW SLB and SW preload
   * cache to go out of sync in preload_new_slb_context. Because there
   * was an SLB entry which was evicted from both HW and preload cache
   * on cpu-1. Now later in preload_new_slb_context(), when we will try
   * to add the same preload entry again, we will add this to the SW
   * preload cache and then will add it to the HW SLB. Since on cpu-0
   * this entry was never invalidated, hence adding this entry to the HW
   * SLB will cause a SLB multi-hit error.
   */
load_elf_binary continues...
 START_THREAD
  start_thread
   preload_new_slb_context
   /*
    * This tries to add a new EA to preload cache which was earlier
    * evicted from both cpu-1 HW SLB and preload cache. This caused the
    * HW SLB of cpu-0 to go out of sync with the SW preload cache. The
    * reason for this was, that when we context switched back on CPU-0,
    * we should have ideally called switch_mmu_context() which will
    * bring the HW SLB entries on CPU-0 in sync with SW preload cache
    * entries by setting up the mmu context properly. But we didn't do
    * that since the prev mm_struct running on cpu-0 was same as the
    * next mm_struct (which is true for swapper / kernel threads). So
    * now when we try to add this new entry into the HW SLB of cpu-0,
    * we hit a SLB multi-hit error.
    */

WARNING: CPU: 0 PID: 1810970 at arch/powerpc/mm/book3s64/slb.c:62
assert_slb_presence+0x2c/0x50
Modules linked in:
CPU: 0 UID: 0 PID: 1810970 Comm: dd Not tainted 6.16.0-rc3-dirty #12
VOLUNTARY
Hardware name: IBM pSeries (emulated by qemu) POWER8 (architected)
0x4d0200 0xf000004 of:SLOF,HEAD hv:linux,kvm pSeries
NIP:  c00000000015426c LR: c0000000001543b4 CTR: 0000000000000000
REGS: c0000000497c77e0 TRAP: 0700   Not tainted  (6.16.0-rc3-dirty)
MSR:  8000000002823033 &lt;SF,VEC,VSX,FP,ME,IR,DR,RI,LE&gt;  CR: 28888482  XER: 00000000
CFAR: c0000000001543b0 IRQMASK: 3
&lt;...&gt;
NIP [c00000000015426c] assert_slb_presence+0x2c/0x50
LR [c0000000001543b4] slb_insert_entry+0x124/0x390
Call Trace:
  0x7fffceb5ffff (unreliable)
  preload_new_slb_context+0x100/0x1a0
  start_thread+0x26c/0x420
  load_elf_binary+0x1b04/0x1c40
  bprm_execve+0x358/0x680
  do_execveat_common+0x1f8/0x240
  sys_execve+0x58/0x70
  system_call_exception+0x114/0x300
  system_call_common+0x160/0x2c4

From the above analysis, during early exec the hardware SLB is cleared,
and entries from the software preload cache are reloaded into hardware
by switch_slb. However, preload_new_slb_context and slb_setup_new_exec
also attempt to load some of the same entries, which can trigger a
multi-hit. In most cases, these additional preloads simply hit existing
entries and add nothing new. Removing these functions avoids redundant
preloads and eliminates the multi-hit issue. This patch removes these
two functions.

We tested process switching performance using the context_switch
benchmark on POWER9/hash, and observed no regression.

Without this patch: 129041 ops/sec
With this patch:    129341 ops/sec

We also measured SLB faults during boot, and the counts are essentially
the same with and without this patch.

SLB faults without this patch: 19727
SLB faults with this patch:    19786

Fixes: 5434ae74629a ("powerpc/64s/hash: Add a SLB preload cache")
cc: stable@vger.kernel.org
Suggested-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Donet Tom &lt;donettom@linux.ibm.com&gt;
Signed-off-by: Ritesh Harjani (IBM) &lt;ritesh.list@gmail.com&gt;
Signed-off-by: Madhavan Srinivasan &lt;maddy@linux.ibm.com&gt;
Link: https://patch.msgid.link/0ac694ae683494fe8cadbd911a1a5018d5d3c541.1761834163.git.ritesh.list@gmail.com
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit 00312419f0863964625d6dcda8183f96849412c6 upstream.

On systems using the hash MMU, there is a software SLB preload cache that
mirrors the entries loaded into the hardware SLB buffer. This preload
cache is subject to periodic eviction, typically after every 256 context
switches, to remove old entries.

To optimize performance, the kernel skips switch_mmu_context() in
switch_mm_irqs_off() when the prev and next mm_struct are the same.
However, on hash MMU systems, this can lead to inconsistencies between
the hardware SLB and the software preload cache.
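
As a rough illustration of that skipped call (a simplified sketch, not
the actual switch_mm_irqs_off() body):

  /*
   * Simplified sketch only: when prev and next share the same
   * mm_struct the MMU context switch is skipped, so the hardware SLB
   * is left untouched and can drift out of sync with the software
   * preload cache.
   */
  static void sketch_switch_mm(struct mm_struct *prev,
                               struct mm_struct *next,
                               struct task_struct *tsk)
  {
          if (prev == next)
                  return;

          /* Different mm: the HW SLB is invalidated and reloaded
           * from the software preload cache via switch_slb(). */
          switch_mmu_context(prev, next, tsk);
  }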

If an SLB entry for a process is evicted from the software cache on one
CPU, and the same process later runs on another CPU without executing
switch_mmu_context(), the hardware SLB may retain stale entries. If the
kernel then attempts to reload that entry, it can trigger an SLB
multi-hit error.

The following timeline shows how stale SLB entries are created and can
cause a multi-hit error when a process moves between CPUs without an
MMU context switch.

CPU 0                                   CPU 1
-----                                    -----
Process P
exec                                    swapper/1
 load_elf_binary
  begin_new_exec
    activate_mm
     switch_mm_irqs_off
      switch_mmu_context
       switch_slb
       /*
        * This invalidates all
        * the entries in the HW
        * and setup the new HW
        * SLB entries as per the
        * preload cache.
        */
context_switch
sched_migrate_task migrates process P to cpu-1

Process swapper/0                       context switch (to process P)
(uses mm_struct of Process P)           switch_mm_irqs_off()
                                         switch_slb
                                           load_slb++
                                            /*
                                            * load_slb becomes 0 here
                                            * and we evict an entry from
                                            * the preload cache with
                                            * preload_age(). We still
                                            * keep HW SLB and preload
                                            * cache in sync, that is
                                            * because all HW SLB entries
                                            * anyways gets evicted in
                                            * switch_slb during SLBIA.
                                            * We then only add those
                                            * entries back in HW SLB,
                                            * which are currently
                                            * present in preload_cache
                                            * (after eviction).
                                            */
                                        load_elf_binary continues...
                                         setup_new_exec()
                                          slb_setup_new_exec()

                                        sched_switch event
                                        sched_migrate_task migrates
                                        process P to cpu-0

context_switch from swapper/0 to Process P
 switch_mm_irqs_off()
  /*
   * Since both prev and next mm struct are same we don't call
   * switch_mmu_context(). This will cause the HW SLB and SW preload
   * cache to go out of sync in preload_new_slb_context. Because there
   * was an SLB entry which was evicted from both HW and preload cache
   * on cpu-1. Now later in preload_new_slb_context(), when we will try
   * to add the same preload entry again, we will add this to the SW
   * preload cache and then will add it to the HW SLB. Since on cpu-0
   * this entry was never invalidated, hence adding this entry to the HW
   * SLB will cause a SLB multi-hit error.
   */
load_elf_binary continues...
 START_THREAD
  start_thread
   preload_new_slb_context
   /*
    * This tries to add a new EA to preload cache which was earlier
    * evicted from both cpu-1 HW SLB and preload cache. This caused the
    * HW SLB of cpu-0 to go out of sync with the SW preload cache. The
    * reason for this was, that when we context switched back on CPU-0,
    * we should have ideally called switch_mmu_context() which will
    * bring the HW SLB entries on CPU-0 in sync with SW preload cache
    * entries by setting up the mmu context properly. But we didn't do
    * that since the prev mm_struct running on cpu-0 was same as the
    * next mm_struct (which is true for swapper / kernel threads). So
    * now when we try to add this new entry into the HW SLB of cpu-0,
    * we hit a SLB multi-hit error.
    */

WARNING: CPU: 0 PID: 1810970 at arch/powerpc/mm/book3s64/slb.c:62
assert_slb_presence+0x2c/0x50
Modules linked in:
CPU: 0 UID: 0 PID: 1810970 Comm: dd Not tainted 6.16.0-rc3-dirty #12
VOLUNTARY
Hardware name: IBM pSeries (emulated by qemu) POWER8 (architected)
0x4d0200 0xf000004 of:SLOF,HEAD hv:linux,kvm pSeries
NIP:  c00000000015426c LR: c0000000001543b4 CTR: 0000000000000000
REGS: c0000000497c77e0 TRAP: 0700   Not tainted  (6.16.0-rc3-dirty)
MSR:  8000000002823033 &lt;SF,VEC,VSX,FP,ME,IR,DR,RI,LE&gt;  CR: 28888482  XER: 00000000
CFAR: c0000000001543b0 IRQMASK: 3
&lt;...&gt;
NIP [c00000000015426c] assert_slb_presence+0x2c/0x50
LR [c0000000001543b4] slb_insert_entry+0x124/0x390
Call Trace:
  0x7fffceb5ffff (unreliable)
  preload_new_slb_context+0x100/0x1a0
  start_thread+0x26c/0x420
  load_elf_binary+0x1b04/0x1c40
  bprm_execve+0x358/0x680
  do_execveat_common+0x1f8/0x240
  sys_execve+0x58/0x70
  system_call_exception+0x114/0x300
  system_call_common+0x160/0x2c4

From the above analysis, during early exec the hardware SLB is cleared,
and entries from the software preload cache are reloaded into hardware
by switch_slb. However, preload_new_slb_context and slb_setup_new_exec
also attempt to load some of the same entries, which can trigger a
multi-hit. In most cases, these additional preloads simply hit existing
entries and add nothing new. Removing these functions avoids redundant
preloads and eliminates the multi-hit issue. This patch removes these
two functions.

We tested process switching performance using the context_switch
benchmark on POWER9/hash, and observed no regression.

Without this patch: 129041 ops/sec
With this patch:    129341 ops/sec

We also measured SLB faults during boot, and the counts are essentially
the same with and without this patch.

SLB faults without this patch: 19727
SLB faults with this patch:    19786

Fixes: 5434ae74629a ("powerpc/64s/hash: Add a SLB preload cache")
cc: stable@vger.kernel.org
Suggested-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Donet Tom &lt;donettom@linux.ibm.com&gt;
Signed-off-by: Ritesh Harjani (IBM) &lt;ritesh.list@gmail.com&gt;
Signed-off-by: Madhavan Srinivasan &lt;maddy@linux.ibm.com&gt;
Link: https://patch.msgid.link/0ac694ae683494fe8cadbd911a1a5018d5d3c541.1761834163.git.ritesh.list@gmail.com
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>KVM: PPC: Book3S HV: Fix KVM_RUN clobbering FP/VEC user registers</title>
<updated>2023-12-08T07:52:19+00:00</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2023-11-22T02:58:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=10098448a51c1c1f27df9b9b2b4a2860855c3d53'/>
<id>10098448a51c1c1f27df9b9b2b4a2860855c3d53</id>
<content type='text'>
commit dc158d23b33df9033bcc8e7117e8591dd2f9d125 upstream.

Before running a guest, the host process (e.g., QEMU) FP/VEC registers
are saved if they were being used, similarly to when the kernel uses FP
registers. The guest values are then loaded into regs, and the host
process registers will be restored lazily when it uses FP/VEC.

KVM HV has a bug here: the host process registers do get saved, but the
user MSR bits remain enabled, which indicates the registers are valid
for the process. After they are clobbered by running the guest, this
valid indication causes the host process to take on the FP/VEC register
values of the guest.
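
A minimal sketch of the idea behind the fix (helper usage here is
illustrative, not the literal patch):

  /*
   * Illustrative sketch only: after saving the host thread's FP/VEC
   * state, the saved user MSR must also stop advertising those
   * registers as live, otherwise the guest-clobbered values are later
   * treated as the process's own.
   */
  static void sketch_save_host_user_regs(struct task_struct *tsk)
  {
          struct pt_regs *regs = tsk-&gt;thread.regs;

          if (regs-&gt;msr &amp; MSR_FP)
                  giveup_fpu(tsk);
          if (regs-&gt;msr &amp; MSR_VEC)
                  giveup_altivec(tsk);

          /* The missing step: clear the user MSR bits as well. */
          regs-&gt;msr &amp;= ~(MSR_FP | MSR_VEC | MSR_VSX);
  }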

Fixes: 34e119c96b2b ("KVM: PPC: Book3S HV P9: Reduce mtmsrd instructions required to save host SPRs")
Cc: stable@vger.kernel.org # v5.17+
Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20231122025811.2973-1-npiggin@gmail.com
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit dc158d23b33df9033bcc8e7117e8591dd2f9d125 upstream.

Before running a guest, the host process (e.g., QEMU) FP/VEC registers
are saved if they were being used, similarly to when the kernel uses FP
registers. The guest values are then loaded into regs, and the host
process registers will be restored lazily when it uses FP/VEC.

KVM HV has a bug here: the host process registers do get saved, but the
user MSR bits remain enabled, which indicates the registers are valid
for the process. After they are clobbered by running the guest, this
valid indication causes the host process to take on the FP/VEC register
values of the guest.
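
A minimal sketch of the idea behind the fix (helper usage here is
illustrative, not the literal patch):

  /*
   * Illustrative sketch only: after saving the host thread's FP/VEC
   * state, the saved user MSR must also stop advertising those
   * registers as live, otherwise the guest-clobbered values are later
   * treated as the process's own.
   */
  static void sketch_save_host_user_regs(struct task_struct *tsk)
  {
          struct pt_regs *regs = tsk-&gt;thread.regs;

          if (regs-&gt;msr &amp; MSR_FP)
                  giveup_fpu(tsk);
          if (regs-&gt;msr &amp; MSR_VEC)
                  giveup_altivec(tsk);

          /* The missing step: clear the user MSR bits as well. */
          regs-&gt;msr &amp;= ~(MSR_FP | MSR_VEC | MSR_VSX);
  }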

Fixes: 34e119c96b2b ("KVM: PPC: Book3S HV P9: Reduce mtmsrd instructions required to save host SPRs")
Cc: stable@vger.kernel.org # v5.17+
Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20231122025811.2973-1-npiggin@gmail.com
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>powerpc: Hide empty pt_regs at base of the stack</title>
<updated>2023-11-20T10:59:28+00:00</updated>
<author>
<name>Michael Ellerman</name>
<email>mpe@ellerman.id.au</email>
</author>
<published>2023-08-24T06:42:10+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=e3742a5789e4a140c9c86512ea7ac8fdc9eba712'/>
<id>e3742a5789e4a140c9c86512ea7ac8fdc9eba712</id>
<content type='text'>
[ Upstream commit d45c4b48dafb5820e5cc267ff9a6d7784d13a43c ]

A thread started via e.g. user_mode_thread() runs in the kernel to begin
with and then may later return to userspace. While it's running in the
kernel it has a pt_regs at the base of its kernel stack, but that
pt_regs is all zeroes.

If the thread oopses in that state, it leads to an ugly stack trace with
a big block of zero GPRs, as reported by Joel:

  Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.5.0-rc7-00004-gf7757129e3de-dirty #3
  Hardware name: IBM PowerNV (emulated by qemu) POWER9 0x4e1200 opal:v7.0 PowerNV
  Call Trace:
  [c0000000036afb00] [c0000000010dd058] dump_stack_lvl+0x6c/0x9c (unreliable)
  [c0000000036afb30] [c00000000013c524] panic+0x178/0x424
  [c0000000036afbd0] [c000000002005100] mount_root_generic+0x250/0x324
  [c0000000036afca0] [c0000000020057d0] prepare_namespace+0x2d4/0x344
  [c0000000036afd20] [c0000000020049c0] kernel_init_freeable+0x358/0x3ac
  [c0000000036afdf0] [c0000000000111b0] kernel_init+0x30/0x1a0
  [c0000000036afe50] [c00000000000debc] ret_from_kernel_user_thread+0x14/0x1c
  --- interrupt: 0 at 0x0
  NIP:  0000000000000000 LR: 0000000000000000 CTR: 0000000000000000
  REGS: c0000000036afe80 TRAP: 0000   Not tainted  (6.5.0-rc7-00004-gf7757129e3de-dirty)
  MSR:  0000000000000000 &lt;&gt;  CR: 00000000  XER: 00000000
  CFAR: 0000000000000000 IRQMASK: 0
  GPR00: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR04: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR08: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR12: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR24: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR28: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  NIP [0000000000000000] 0x0
  LR [0000000000000000] 0x0
  --- interrupt: 0

The all-zero pt_regs looks ugly and conveys no useful information, other
than its presence. So detect that case and just show the presence of the
frame by printing the interrupt marker, e.g.:

  Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.5.0-rc3-00126-g18e9506562a0-dirty #301
  Hardware name: IBM pSeries (emulated by qemu) POWER9 (raw) 0x4e1202 0xf000005 of:SLOF,HEAD hv:linux,kvm pSeries
  Call Trace:
  [c000000003aabb00] [c000000001143db8] dump_stack_lvl+0x6c/0x9c (unreliable)
  [c000000003aabb30] [c00000000014c624] panic+0x178/0x424
  [c000000003aabbd0] [c0000000020050fc] mount_root_generic+0x250/0x324
  [c000000003aabca0] [c0000000020057cc] prepare_namespace+0x2d4/0x344
  [c000000003aabd20] [c0000000020049bc] kernel_init_freeable+0x358/0x3ac
  [c000000003aabdf0] [c0000000000111b0] kernel_init+0x30/0x1a0
  [c000000003aabe50] [c00000000000debc] ret_from_kernel_user_thread+0x14/0x1c
  --- interrupt: 0 at 0x0

To avoid ever suppressing a valid pt_regs make sure the pt_regs has a
zero MSR and TRAP value, and is located at the very base of the stack.
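
A minimal sketch of that check (the function name is an assumption and
the exact stack-base position test is omitted; not the kernel code):

  static bool sketch_empty_user_regs(const struct pt_regs *regs)
  {
          /* A frame that was only ever zero-initialised still has
           * MSR == 0 and TRAP == 0; a genuine interrupt frame never
           * does. The real check additionally requires the frame to
           * sit at the very base of the kernel stack. */
          return regs-&gt;msr == 0 &amp;&amp; TRAP(regs) == 0;
  }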

Fixes: 6895dfc04741 ("powerpc: copy_thread fill in interrupt frame marker and back chain")
Reported-by: Joel Stanley &lt;joel@jms.id.au&gt;
Reported-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Reviewed-by: Joel Stanley &lt;joel@jms.id.au&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20230824064210.907266-1-mpe@ellerman.id.au
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
[ Upstream commit d45c4b48dafb5820e5cc267ff9a6d7784d13a43c ]

A thread started via e.g. user_mode_thread() runs in the kernel to begin
with and then may later return to userspace. While it's running in the
kernel it has a pt_regs at the base of its kernel stack, but that
pt_regs is all zeroes.

If the thread oopses in that state, it leads to an ugly stack trace with
a big block of zero GPRs, as reported by Joel:

  Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.5.0-rc7-00004-gf7757129e3de-dirty #3
  Hardware name: IBM PowerNV (emulated by qemu) POWER9 0x4e1200 opal:v7.0 PowerNV
  Call Trace:
  [c0000000036afb00] [c0000000010dd058] dump_stack_lvl+0x6c/0x9c (unreliable)
  [c0000000036afb30] [c00000000013c524] panic+0x178/0x424
  [c0000000036afbd0] [c000000002005100] mount_root_generic+0x250/0x324
  [c0000000036afca0] [c0000000020057d0] prepare_namespace+0x2d4/0x344
  [c0000000036afd20] [c0000000020049c0] kernel_init_freeable+0x358/0x3ac
  [c0000000036afdf0] [c0000000000111b0] kernel_init+0x30/0x1a0
  [c0000000036afe50] [c00000000000debc] ret_from_kernel_user_thread+0x14/0x1c
  --- interrupt: 0 at 0x0
  NIP:  0000000000000000 LR: 0000000000000000 CTR: 0000000000000000
  REGS: c0000000036afe80 TRAP: 0000   Not tainted  (6.5.0-rc7-00004-gf7757129e3de-dirty)
  MSR:  0000000000000000 &lt;&gt;  CR: 00000000  XER: 00000000
  CFAR: 0000000000000000 IRQMASK: 0
  GPR00: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR04: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR08: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR12: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR24: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR28: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  NIP [0000000000000000] 0x0
  LR [0000000000000000] 0x0
  --- interrupt: 0

The all-zero pt_regs looks ugly and conveys no useful information, other
than its presence. So detect that case and just show the presence of the
frame by printing the interrupt marker, e.g.:

  Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.5.0-rc3-00126-g18e9506562a0-dirty #301
  Hardware name: IBM pSeries (emulated by qemu) POWER9 (raw) 0x4e1202 0xf000005 of:SLOF,HEAD hv:linux,kvm pSeries
  Call Trace:
  [c000000003aabb00] [c000000001143db8] dump_stack_lvl+0x6c/0x9c (unreliable)
  [c000000003aabb30] [c00000000014c624] panic+0x178/0x424
  [c000000003aabbd0] [c0000000020050fc] mount_root_generic+0x250/0x324
  [c000000003aabca0] [c0000000020057cc] prepare_namespace+0x2d4/0x344
  [c000000003aabd20] [c0000000020049bc] kernel_init_freeable+0x358/0x3ac
  [c000000003aabdf0] [c0000000000111b0] kernel_init+0x30/0x1a0
  [c000000003aabe50] [c00000000000debc] ret_from_kernel_user_thread+0x14/0x1c
  --- interrupt: 0 at 0x0

To avoid ever suppressing a valid pt_regs make sure the pt_regs has a
zero MSR and TRAP value, and is located at the very base of the stack.
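
A minimal sketch of that check (the function name is an assumption and
the exact stack-base position test is omitted; not the kernel code):

  static bool sketch_empty_user_regs(const struct pt_regs *regs)
  {
          /* A frame that was only ever zero-initialised still has
           * MSR == 0 and TRAP == 0; a genuine interrupt frame never
           * does. The real check additionally requires the frame to
           * sit at the very base of the kernel stack. */
          return regs-&gt;msr == 0 &amp;&amp; TRAP(regs) == 0;
  }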

Fixes: 6895dfc04741 ("powerpc: copy_thread fill in interrupt frame marker and back chain")
Reported-by: Joel Stanley &lt;joel@jms.id.au&gt;
Reported-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Reviewed-by: Joel Stanley &lt;joel@jms.id.au&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20230824064210.907266-1-mpe@ellerman.id.au
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>powerpc/dexcr: Support userspace ROP protection</title>
<updated>2023-06-19T07:36:26+00:00</updated>
<author>
<name>Benjamin Gray</name>
<email>bgray@linux.ibm.com</email>
</author>
<published>2023-06-19T07:36:26+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=be98fcf7c10dea74e9c3e2cd0018e47bdee67442'/>
<id>be98fcf7c10dea74e9c3e2cd0018e47bdee67442</id>
<content type='text'>
The ISA 3.1B hashst and hashchk instructions use a per-cpu SPR HASHKEYR
to hold a key used in the hash calculation. This key should be different
for each process to make it harder for a malicious process to recreate
valid hash values for a victim process.

Add support for storing a per-thread hash key, and setting/clearing
HASHKEYR appropriately.
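
A minimal sketch of the per-thread key handling (the thread_struct
field and SPRN_HASHKEYR names are assumptions; not the exact patch):

  static inline void sketch_switch_hashkeyr(struct thread_struct *prev,
                                            struct thread_struct *next)
  {
          /* Keep HASHKEYR in sync with the incoming thread's key,
           * skipping the mtspr when the value is unchanged. */
          if (prev-&gt;hashkeyr != next-&gt;hashkeyr)
                  mtspr(SPRN_HASHKEYR, next-&gt;hashkeyr);
  }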

Signed-off-by: Benjamin Gray &lt;bgray@linux.ibm.com&gt;
Reviewed-by: Russell Currey &lt;ruscur@russell.cc&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20230616034846.311705-6-bgray@linux.ibm.com
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The ISA 3.1B hashst and hashchk instructions use a per-cpu SPR HASHKEYR
to hold a key used in the hash calculation. This key should be different
for each process to make it harder for a malicious process to recreate
valid hash values for a victim process.

Add support for storing a per-thread hash key, and setting/clearing
HASHKEYR appropriately.
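
A minimal sketch of the per-thread key handling (the thread_struct
field and SPRN_HASHKEYR names are assumptions; not the exact patch):

  static inline void sketch_switch_hashkeyr(struct thread_struct *prev,
                                            struct thread_struct *next)
  {
          /* Keep HASHKEYR in sync with the incoming thread's key,
           * skipping the mtspr when the value is unchanged. */
          if (prev-&gt;hashkeyr != next-&gt;hashkeyr)
                  mtspr(SPRN_HASHKEYR, next-&gt;hashkeyr);
  }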

Signed-off-by: Benjamin Gray &lt;bgray@linux.ibm.com&gt;
Reviewed-by: Russell Currey &lt;ruscur@russell.cc&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20230616034846.311705-6-bgray@linux.ibm.com
</pre>
</div>
</content>
</entry>
<entry>
<title>powerpc: copy_thread don't set PPR in user interrupt frame regs</title>
<updated>2023-04-11T13:13:33+00:00</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2023-03-25T12:29:04+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=89fb39134ae3b1e1f207af44a037721d92b32f70'/>
<id>89fb39134ae3b1e1f207af44a037721d92b32f70</id>
<content type='text'>
Syscalls do not set the PPR field in their interrupt frame, and the
return-from-syscall path always sets the default PPR for userspace,
so setting the value in the ret_from_fork frame is not necessary
and mildly inconsistent. Remove it.

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20230325122904.2375060-9-npiggin@gmail.com

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Syscalls do not set the PPR field in their interrupt frame, and the
return-from-syscall path always sets the default PPR for userspace,
so setting the value in the ret_from_fork frame is not necessary
and mildly inconsistent. Remove it.

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20230325122904.2375060-9-npiggin@gmail.com

</pre>
</div>
</content>
</entry>
<entry>
<title>powerpc: copy_thread don't set _TIF_RESTOREALL</title>
<updated>2023-04-11T13:13:33+00:00</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2023-03-25T12:29:03+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=d195ce4695ca1061993424e2d6c8995e5fc81606'/>
<id>d195ce4695ca1061993424e2d6c8995e5fc81606</id>
<content type='text'>
In the kernel user thread path, don't set _TIF_RESTOREALL because
the thread is required to call kernel_execve() before it returns,
which will set _TIF_RESTOREALL if necessary via start_thread().

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20230325122904.2375060-8-npiggin@gmail.com

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
In the kernel user thread path, don't set _TIF_RESTOREALL because
the thread is required to call kernel_execve() before it returns,
which will set _TIF_RESTOREALL if necessary via start_thread().

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20230325122904.2375060-8-npiggin@gmail.com

</pre>
</div>
</content>
</entry>
<entry>
<title>powerpc: differentiate kthread from user kernel thread start</title>
<updated>2023-04-11T13:13:33+00:00</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2023-03-25T12:29:02+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=b504b6aade0403eaffa9ce51b8207d710705beaf'/>
<id>b504b6aade0403eaffa9ce51b8207d710705beaf</id>
<content type='text'>
Kernel-created user threads start similarly to kernel threads in that
they call a kernel function after first returning from _switch, so
they share ret_from_kernel_thread for this. Kernel threads never return
from that function though, whereas user threads often do (although some
don't, e.g., IO threads).

Split these startup functions in two, and catch kernel threads that
improperly return from their function. This is intended to make the
complicated code a little bit easier to understand.

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20230325122904.2375060-7-npiggin@gmail.com

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Kernel-created user threads start similarly to kernel threads in that
they call a kernel function after first returning from _switch, so
they share ret_from_kernel_thread for this. Kernel threads never return
from that function though, whereas user threads often do (although some
don't, e.g., IO threads).

Split these startup functions in two, and catch kernel threads that
improperly return from their function. This is intended to make the
complicated code a little bit easier to understand.

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20230325122904.2375060-7-npiggin@gmail.com

</pre>
</div>
</content>
</entry>
<entry>
<title>powerpc: copy_thread differentiate kthreads and user mode threads</title>
<updated>2023-04-11T13:13:33+00:00</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2023-03-25T12:29:01+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=eed7c420aac7fde5e5915d2747c3ebbbda225835'/>
<id>eed7c420aac7fde5e5915d2747c3ebbbda225835</id>
<content type='text'>
When copy_thread is given a kernel function to run in arg-&gt;fn, this
does not necessarily mean it is a kernel thread. User threads can be
created this way (e.g., kernel_init, see also x86's copy_thread()).
These threads run a kernel function which may call kernel_execve()
and return, which returns like a userspace exec(2) syscall.

Kernel threads are differentiated by PF_KTHREAD, always have
arg-&gt;fn set, and should never return from that function, instead
calling kthread_exit() to exit.

Create separate paths for the kthread and user kernel thread creation
logic. The kthread path will never exit and does not require a user
interrupt frame, so it gets a minimal stack frame.
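
The shape of that split, as a simplified sketch (not the literal
copy_thread() diff):

  static void sketch_copy_thread_paths(struct task_struct *p,
                                       const struct kernel_clone_args *args)
  {
          if (args-&gt;fn &amp;&amp; (p-&gt;flags &amp; PF_KTHREAD)) {
                  /* kthread: runs args-&gt;fn(args-&gt;fn_arg), never
                   * returns, so a minimal switch frame is enough. */
          } else if (args-&gt;fn) {
                  /* user thread started in the kernel (e.g.
                   * kernel_init): also starts in args-&gt;fn, but needs
                   * a full user interrupt frame for its eventual
                   * return to userspace after kernel_execve(). */
          } else {
                  /* ordinary fork/clone: the child returns to
                   * userspace through a copy of the parent's
                   * interrupt frame. */
          }
  }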

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20230325122904.2375060-6-npiggin@gmail.com

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When copy_thread is given a kernel function to run in arg-&gt;fn, this
does not necessarily mean it is a kernel thread. User threads can be
created this way (e.g., kernel_init, see also x86's copy_thread()).
These threads run a kernel function which may call kernel_execve()
and return, which returns like a userspace exec(2) syscall.

Kernel threads are differentiated by PF_KTHREAD, always have
arg-&gt;fn set, and should never return from that function, instead
calling kthread_exit() to exit.

Create separate paths for the kthread and user kernel thread creation
logic. The kthread path will never exit and does not require a user
interrupt frame, so it gets a minimal stack frame.
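
The shape of that split, as a simplified sketch (not the literal
copy_thread() diff):

  static void sketch_copy_thread_paths(struct task_struct *p,
                                       const struct kernel_clone_args *args)
  {
          if (args-&gt;fn &amp;&amp; (p-&gt;flags &amp; PF_KTHREAD)) {
                  /* kthread: runs args-&gt;fn(args-&gt;fn_arg), never
                   * returns, so a minimal switch frame is enough. */
          } else if (args-&gt;fn) {
                  /* user thread started in the kernel (e.g.
                   * kernel_init): also starts in args-&gt;fn, but needs
                   * a full user interrupt frame for its eventual
                   * return to userspace after kernel_execve(). */
          } else {
                  /* ordinary fork/clone: the child returns to
                   * userspace through a copy of the parent's
                   * interrupt frame. */
          }
  }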

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20230325122904.2375060-6-npiggin@gmail.com

</pre>
</div>
</content>
</entry>
<entry>
<title>powerpc: use switch frame for ret_from_kernel_thread parameters</title>
<updated>2023-04-11T13:13:32+00:00</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2023-03-25T12:28:59+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=af5ca9d5c8b45244b237d7a5534e1ec2d01cce8e'/>
<id>af5ca9d5c8b45244b237d7a5534e1ec2d01cce8e</id>
<content type='text'>
The kernel thread path in copy_thread creates a user interrupt frame on
stack and stores the function and arg parameters there, and
ret_from_kernel_thread loads them. This is a slightly confusing way to
overload that frame. Non-volatile registers are loaded from the switch
frame, so the parameters can be stored there. The user interrupt frame
is now only used by user threads when they return to user.

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20230325122904.2375060-4-npiggin@gmail.com

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The kernel thread path in copy_thread creates a user interrupt frame on
stack and stores the function and arg parameters there, and
ret_from_kernel_thread loads them. This is a slightly confusing way to
overload that frame. Non-volatile registers are loaded from the switch
frame, so the parameters can be stored there. The user interrupt frame
is now only used by user threads when they return to user.

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20230325122904.2375060-4-npiggin@gmail.com

</pre>
</div>
</content>
</entry>
<entry>
<title>powerpc: copy_thread make ret_from_fork register setup consistent</title>
<updated>2023-04-11T13:13:32+00:00</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2023-03-25T12:28:58+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=959791e45fd2a580403e03611a5aefb9e7abcfc0'/>
<id>959791e45fd2a580403e03611a5aefb9e7abcfc0</id>
<content type='text'>
The ret_from_fork code for 64e and 32-bit sets r3 for
syscall_exit_prepare the same way that 64s does, so there should
be no need to special-case them in copy_thread.

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20230325122904.2375060-3-npiggin@gmail.com

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The ret_from_fork code for 64e and 32-bit sets r3 for
syscall_exit_prepare the same way that 64s does, so there should
be no need to special-case them in copy_thread.

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://msgid.link/20230325122904.2375060-3-npiggin@gmail.com

</pre>
</div>
</content>
</entry>
</feed>
