<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/include/kvm, branch v6.1.168</title>
<subtitle>Clone of https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git</subtitle>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/'/>
<entry>
<title>KVM: arm64: Fix host-programmed guest events in nVHE</title>
<updated>2024-04-10T14:28:23+00:00</updated>
<author>
<name>Oliver Upton</name>
<email>oliver.upton@linux.dev</email>
</author>
<published>2024-03-05T18:48:39+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=923579201dece7cb9fc30662b4f6e7d7801042a8'/>
<id>923579201dece7cb9fc30662b4f6e7d7801042a8</id>
<content type='text'>
commit e89c928bedd77d181edc2df01cb6672184775140 upstream.

Programming PMU events in the host that count during guest execution is
a feature supported by perf, e.g.

  perf stat -e cpu_cycles:G ./lkvm run

While this works for VHE, the guest/host event bitmaps are not carried
through to the hypervisor in the nVHE configuration. Make
kvm_pmu_update_vcpu_events() conditional on whether the _hardware_
supports PMUv3 rather than on whether the vCPU has a vPMU enabled.
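
A sketch of the shape of the fix (hedged: the helper names are taken
from the surrounding KVM code, not quoted from the actual diff):

  static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu)
  {
          /* key on hardware PMUv3 support, not on the vCPU's vPMU */
          if (!has_vhe() &amp;&amp; kvm_arm_support_pmu_v3())
                  vcpu-&gt;arch.pmu.events = *kvm_get_pmu_events();
  }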

Cc: stable@vger.kernel.org
Fixes: 84d751a019a9 ("KVM: arm64: Pass pmu events to hyp via vcpu")
Reviewed-by: Marc Zyngier &lt;maz@kernel.org&gt;
Link: https://lore.kernel.org/r/20240305184840.636212-3-oliver.upton@linux.dev
Signed-off-by: Oliver Upton &lt;oliver.upton@linux.dev&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>KVM: arm64: vgic-v4: Make the doorbell request robust w.r.t preemption</title>
<updated>2023-08-23T15:52:28+00:00</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2023-07-13T07:06:57+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=3d2d051be16115ca930cd2b76b18e54532bf56cd'/>
<id>3d2d051be16115ca930cd2b76b18e54532bf56cd</id>
<content type='text'>
[ Upstream commit b321c31c9b7b309dcde5e8854b741c8e6a9a05f0 ]

Xiang reports that VMs occasionally fail to boot on GICv4.1 systems when
running a preemptible kernel, as it is possible that a vCPU is blocked
without requesting a doorbell interrupt.

The issue is that any preemption that occurs between vgic_v4_put() and
schedule() on the block path will mark the vPE as nonresident and *not*
request a doorbell irq. This occurs because when the vcpu thread is
resumed on its way to block, vcpu_load() will make the vPE resident
again. Once the vcpu actually blocks, we don't request a doorbell
anymore, and the vcpu won't be woken up on interrupt delivery.

Fix it by tracking that we're entering WFI and keying the doorbell
request on that flag. This allows us to avoid making the vPE
resident when going through a preempt/schedule cycle, meaning we
don't lose any state.
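
In outline (a hedged sketch of the resulting flow, not the verbatim
diff):

  void kvm_vcpu_wfi(struct kvm_vcpu *vcpu)
  {
          vcpu_set_flag(vcpu, IN_WFI);    /* committed to blocking */
          vgic_v4_put(vcpu);              /* doorbell keyed on IN_WFI */

          kvm_vcpu_halt(vcpu);

          vcpu_clear_flag(vcpu, IN_WFI);
          vgic_v4_load(vcpu);
  }

with vgic_v4_load() bailing out early while IN_WFI is set, so that a
preempt/schedule cycle in between can no longer make the vPE resident
behind our back.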

Cc: stable@vger.kernel.org
Fixes: 8e01d9a396e6 ("KVM: arm64: vgic-v4: Move the GICv4 residency flow to be driven by vcpu_load/put")
Reported-by: Xiang Chen &lt;chenxiang66@hisilicon.com&gt;
Suggested-by: Zenghui Yu &lt;yuzenghui@huawei.com&gt;
Tested-by: Xiang Chen &lt;chenxiang66@hisilicon.com&gt;
Co-developed-by: Oliver Upton &lt;oliver.upton@linux.dev&gt;
Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
Acked-by: Zenghui Yu &lt;yuzenghui@huawei.com&gt;
Link: https://lore.kernel.org/r/20230713070657.3873244-1-maz@kernel.org
Signed-off-by: Oliver Upton &lt;oliver.upton@linux.dev&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>KVM: arm64: PMU: Align chained counter implementation with architecture pseudocode</title>
<updated>2023-04-13T14:55:17+00:00</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2022-11-13T16:38:18+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=eb3df961021b9c018cc609aa63aec9b3938e9fc3'/>
<id>eb3df961021b9c018cc609aa63aec9b3938e9fc3</id>
<content type='text'>
[ Upstream commit bead02204e9806807bb290137b1ccabfcb4b16fd ]

Ricardo recently pointed out that the PMU chained counter emulation
in KVM wasn't quite behaving like the one on actual hardware: a
chained counter exposes an overflow on both of its halves, while
KVM would only expose the overflow on the top half.

The difference is subtle, but significant. What does the architecture
say (DDI0487 H.a):

- Up to PMUv3p4, all counters but the cycle counter are 32bit

- A 32bit counter that overflows generates a CHAIN event on the
  adjacent counter after exposing its own overflow status

- The CHAIN event is accounted if the counter is correctly
  configured (CHAIN event selected and counter enabled)

This all means that our current implementation (which uses 64bit
perf events) prevents us from emulating this overflow on the lower half.

How to fix this? By implementing the above, to the letter.

This largely results in code deletion, removing the notions of
"counter pair", "chained counters", and "canonical counter".
The code is further restructured to make the CHAIN handling similar
to SWINC, as the two are now extremely similar in behaviour.
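
As an illustration of the rule above, the emulated overflow path can
be sketched like this (the helpers below are hypothetical, named for
clarity; this is not the patch itself):

  /* 32bit counter idx just wrapped: expose its own overflow first */
  __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx);

  /* then account a CHAIN event on the adjacent counter, provided it
   * is enabled and programmed with the CHAIN event (hypothetical
   * helpers) */
  if (kvm_pmu_counter_is_enabled(vcpu, idx + 1) &amp;&amp;
      kvm_pmu_event_sel(vcpu, idx + 1) == ARMV8_PMUV3_PERFCTR_CHAIN)
          kvm_pmu_counter_increment(vcpu, idx + 1);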

Reported-by: Ricardo Koller &lt;ricarkol@google.com&gt;
Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
Reviewed-by: Reiji Watanabe &lt;reijiw@google.com&gt;
Link: https://lore.kernel.org/r/20221113163832.3154370-3-maz@kernel.org
Stable-dep-of: f6da81f650fa ("KVM: arm64: PMU: Don't save PMCR_EL0.{C,P} for the vCPU")
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>KVM: arm64: vgic: Consolidate userspace access for base address setting</title>
<updated>2022-07-17T10:55:33+00:00</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2022-07-05T13:39:24+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=4b85080f4e378f617f88964dec94fd282bcf2af4'/>
<id>4b85080f4e378f617f88964dec94fd282bcf2af4</id>
<content type='text'>
Align kvm_vgic_addr() with the rest of the code by moving the
userspace accesses into it. kvm_vgic_addr() is also made static.
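
A hedged sketch of the resulting shape (the exact signature and body
are assumptions, condensed for illustration):

  static int kvm_vgic_addr(struct kvm *kvm, struct kvm_device_attr *attr,
                           bool write)
  {
          u64 __user *uaddr = (u64 __user *)attr-&gt;addr;
          u64 addr;

          /* the userspace access now lives inside the helper */
          if (write &amp;&amp; get_user(addr, uaddr))
                  return -EFAULT;

          /* validate and apply, or look up, the base address here */

          if (!write &amp;&amp; put_user(addr, uaddr))
                  return -EFAULT;

          return 0;
  }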

Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
</content>
</entry>
<entry>
<title>KVM: arm64: vgic-v2: Add helper for legacy dist/cpuif base address setting</title>
<updated>2022-07-17T10:55:33+00:00</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2022-07-05T13:34:33+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=9f968c9266aa30b0e81be0c6a560e45b93bed3dc'/>
<id>9f968c9266aa30b0e81be0c6a560e45b93bed3dc</id>
<content type='text'>
We carry a legacy interface to set the base addresses for GICv2.
As this is currently plumbed into the same handling code as
the modern interface, it limits the evolution we can make there.

Add a helper dedicated to this handling, with a view to maybe
removing it in the future.

Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
</content>
</entry>
<entry>
<title>Merge branch kvm-arm64/per-vcpu-host-pmu-data into kvmarm-master/next</title>
<updated>2022-05-16T16:48:36+00:00</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2022-05-16T16:48:36+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=8794b4f510f722f37ae6b583e4b12b1af2fb692a'/>
<id>8794b4f510f722f37ae6b583e4b12b1af2fb692a</id>
<content type='text'>
* kvm-arm64/per-vcpu-host-pmu-data:
  : .
  : Pass the host PMU state in the vcpu to avoid the use of additional
  : shared memory between EL1 and EL2 (this obviously only applies
  : to nVHE and Protected setups).
  :
  : Patches courtesy of Fuad Tabba.
  : .
  KVM: arm64: pmu: Restore compilation when HW_PERF_EVENTS isn't selected
  KVM: arm64: Reenable pmu in Protected Mode
  KVM: arm64: Pass pmu events to hyp via vcpu
  KVM: arm64: Repack struct kvm_pmu to reduce size
  KVM: arm64: Wrapper for getting pmu_events

Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
</content>
</entry>
<entry>
<title>Merge branch kvm-arm64/vgic-invlpir into kvmarm-master/next</title>
<updated>2022-05-16T16:48:35+00:00</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2022-05-16T16:48:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=ec2cff6cbdbecc4897f97b01992efc0ab86aa492'/>
<id>ec2cff6cbdbecc4897f97b01992efc0ab86aa492</id>
<content type='text'>
* kvm-arm64/vgic-invlpir:
  : .
  : Implement MMIO-based LPI invalidation for vGICv3.
  : .
  KVM: arm64: vgic-v3: Advertise GICR_CTLR.{IR, CES} as a new GICD_IIDR revision
  KVM: arm64: vgic-v3: Implement MMIO-based LPI invalidation
  KVM: arm64: vgic-v3: Expose GICR_CTLR.RWP when disabling LPIs
  irqchip/gic-v3: Exposes bit values for GICR_CTLR.{IR, CES}

Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
</content>
</entry>
<entry>
<title>Merge branch kvm-arm64/hcall-selection into kvmarm-master/next</title>
<updated>2022-05-16T16:47:03+00:00</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2022-05-16T16:47:03+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=0586e28aaa32ad8eb49b3556eb2f73ae3ca34bd3'/>
<id>0586e28aaa32ad8eb49b3556eb2f73ae3ca34bd3</id>
<content type='text'>
* kvm-arm64/hcall-selection:
  : .
  : Introduce a new set of virtual sysregs for userspace to
  : select the hypercalls it wants to see exposed to the guest.
  :
  : Patches courtesy of Raghavendra and Oliver.
  : .
  KVM: arm64: Fix hypercall bitmap writeback when vcpus have already run
  KVM: arm64: Hide KVM_REG_ARM_*_BMAP_BIT_COUNT from userspace
  Documentation: Fix index.rst after psci.rst renaming
  selftests: KVM: aarch64: Add the bitmap firmware registers to get-reg-list
  selftests: KVM: aarch64: Introduce hypercall ABI test
  selftests: KVM: Create helper for making SMCCC calls
  selftests: KVM: Rename psci_cpu_on_test to psci_test
  tools: Import ARM SMCCC definitions
  Docs: KVM: Add doc for the bitmap firmware registers
  Docs: KVM: Rename psci.rst to hypercalls.rst
  KVM: arm64: Add vendor hypervisor firmware register
  KVM: arm64: Add standard hypervisor firmware register
  KVM: arm64: Setup a framework for hypercall bitmap firmware registers
  KVM: arm64: Factor out firmware register handling from psci.c

Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
</content>
</entry>
<entry>
<title>KVM: arm64: pmu: Restore compilation when HW_PERF_EVENTS isn't selected</title>
<updated>2022-05-16T12:42:41+00:00</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2022-05-16T12:02:24+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=20492a62b99bd4367b79a76ca288d018f11980db'/>
<id>20492a62b99bd4367b79a76ca288d018f11980db</id>
<content type='text'>
Moving kvm_pmu_events into the vcpu (and referring to it) broke the
somewhat unusual case where the kernel has no support for a PMU
at all.

In order to solve this, move things around a bit so that we can
easily avoid referring to the pmu structure outside of PMU-aware
code. As a bonus, pmu.c isn't compiled in when HW_PERF_EVENTS
isn't selected.
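
Concretely, the object file is only built when HW_PERF_EVENTS is
selected (kvm-$(CONFIG_HW_PERF_EVENTS) += pmu.o in the Makefile), and
the header grows stubs along these lines (a sketch of the idiom, not
the exact hunks; the declaration's location is assumed):

  #ifdef CONFIG_HW_PERF_EVENTS
  struct kvm_pmu_events *kvm_get_pmu_events(void);
  #else
  static inline struct kvm_pmu_events *kvm_get_pmu_events(void)
  {
          return NULL;
  }
  #endif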

Reported-by: kernel test robot &lt;lkp@intel.com&gt;
Reviewed-by: Fuad Tabba &lt;tabba@google.com&gt;
Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
Link: https://lore.kernel.org/r/202205161814.KQHpOzsJ-lkp@intel.com
</content>
</entry>
<entry>
<title>KVM: arm64: Pass pmu events to hyp via vcpu</title>
<updated>2022-05-15T10:26:41+00:00</updated>
<author>
<name>Fuad Tabba</name>
<email>tabba@google.com</email>
</author>
<published>2022-05-10T09:57:09+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=84d751a019a9792f5b4884e1d598b603c360ec22'/>
<id>84d751a019a9792f5b4884e1d598b603c360ec22</id>
<content type='text'>
Instead of the host accessing hyp data directly, pass the pmu
events of the current cpu to hyp via the vcpu.

This adds 64 bits (in two fields) to the vcpu that need to be
synced before every vcpu run in nvhe and protected modes.
However, it isolates the hypervisor from the host, which allows
us to use pmu in protected mode in a subsequent patch.

No visible change in behaviour is intended.
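
The two fields amount to something like this (a sketch; the real
layout lives in the PMU headers):

  struct kvm_pmu_events {
          u32 events_host;        /* to count while running the host  */
          u32 events_guest;       /* to count while running the guest */
  };

  /* embedded in the vcpu's kvm_pmu and synced to hyp before each run */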

Signed-off-by: Fuad Tabba &lt;tabba@google.com&gt;
Reviewed-by: Oliver Upton &lt;oupton@google.com&gt;
Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
Link: https://lore.kernel.org/r/20220510095710.148178-4-tabba@google.com
</content>
</entry>
</feed>
