<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/arch/arm/kernel/entry-header.S, branch v6.12.80</title>
<subtitle>Clone of https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git</subtitle>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/'/>
<entry>
<title>context_tracking: Split user tracking Kconfig</title>
<updated>2022-06-30T00:04:09+00:00</updated>
<author>
<name>Frederic Weisbecker</name>
<email>frederic@kernel.org</email>
</author>
<published>2022-06-08T14:40:24+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=24a9c54182b3758801b8ca6c8c237cc2ff654732'/>
<id>24a9c54182b3758801b8ca6c8c237cc2ff654732</id>
<content type='text'>
Context tracking is going to be used to track not only user transitions
but also idle/IRQ/NMI transitions. The user tracking part will then
become a separate feature. Prepare Kconfig for that.

[ frederic: Apply Max Filippov feedback. ]

Signed-off-by: Frederic Weisbecker &lt;frederic@kernel.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Neeraj Upadhyay &lt;quic_neeraju@quicinc.com&gt;
Cc: Uladzislau Rezki &lt;uladzislau.rezki@sony.com&gt;
Cc: Joel Fernandes &lt;joel@joelfernandes.org&gt;
Cc: Boqun Feng &lt;boqun.feng@gmail.com&gt;
Cc: Nicolas Saenz Julienne &lt;nsaenz@kernel.org&gt;
Cc: Marcelo Tosatti &lt;mtosatti@redhat.com&gt;
Cc: Xiongfeng Wang &lt;wangxiongfeng2@huawei.com&gt;
Cc: Yu Liao &lt;liaoyu15@huawei.com&gt;
Cc: Phil Auld &lt;pauld@redhat.com&gt;
Cc: Paul Gortmaker &lt;paul.gortmaker@windriver.com&gt;
Cc: Alex Belits &lt;abelits@marvell.com&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@kernel.org&gt;
Reviewed-by: Nicolas Saenz Julienne &lt;nsaenzju@redhat.com&gt;
Tested-by: Nicolas Saenz Julienne &lt;nsaenzju@redhat.com&gt;
</content>
</entry>
<entry>
<title>context_tracking: Rename context_tracking_user_enter/exit() to user_enter/exit_callable()</title>
<updated>2022-06-30T00:03:27+00:00</updated>
<author>
<name>Frederic Weisbecker</name>
<email>frederic@kernel.org</email>
</author>
<published>2022-06-08T14:40:21+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=f163f0302ab69722c052519f4014814bf10026a9'/>
<id>f163f0302ab69722c052519f4014814bf10026a9</id>
<content type='text'>
context_tracking_user_enter() and context_tracking_user_exit() are
ASM callable versions of user_enter() and user_exit() for architectures
that didn't manage to check the context tracking static key from ASM.
Change those function names to better reflect their purpose.
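
For example, ARM's ct_user_exit macro in entry-header.S ends up
calling the renamed symbol (a sketch, not the literal diff):

    bl      user_exit_callable      @ formerly context_tracking_user_exit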

[ frederic: Apply Max Filippov feedback. ]

Signed-off-by: Frederic Weisbecker &lt;frederic@kernel.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Neeraj Upadhyay &lt;quic_neeraju@quicinc.com&gt;
Cc: Uladzislau Rezki &lt;uladzislau.rezki@sony.com&gt;
Cc: Joel Fernandes &lt;joel@joelfernandes.org&gt;
Cc: Boqun Feng &lt;boqun.feng@gmail.com&gt;
Cc: Nicolas Saenz Julienne &lt;nsaenz@kernel.org&gt;
Cc: Marcelo Tosatti &lt;mtosatti@redhat.com&gt;
Cc: Xiongfeng Wang &lt;wangxiongfeng2@huawei.com&gt;
Cc: Yu Liao &lt;liaoyu15@huawei.com&gt;
Cc: Phil Auld &lt;pauld@redhat.com&gt;
Cc: Paul Gortmaker &lt;paul.gortmaker@windriver.com&gt;
Cc: Alex Belits &lt;abelits@marvell.com&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@kernel.org&gt;
Reviewed-by: Nicolas Saenz Julienne &lt;nsaenzju@redhat.com&gt;
Tested-by: Nicolas Saenz Julienne &lt;nsaenzju@redhat.com&gt;
</content>
</entry>
<entry>
<title>ARM: 9195/1: entry: avoid explicit literal loads</title>
<updated>2022-05-20T11:32:32+00:00</updated>
<author>
<name>Ard Biesheuvel</name>
<email>ardb@kernel.org</email>
</author>
<published>2022-04-20T08:41:31+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=508074607c7b95b24f0adf633fdf606761bb7824'/>
<id>508074607c7b95b24f0adf633fdf606761bb7824</id>
<content type='text'>
ARMv7 has MOVW/MOVT instruction pairs to load symbol addresses into
registers without having to rely on literal loads that go via the
D-cache.  For older cores, we now support a similar arrangement, based
on PC-relative group relocations.

This means we can elide most literal loads entirely from the entry path,
by switching to the ldr_va macro to emit the appropriate sequence
depending on the target architecture revision.
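
For a hypothetical symbol sym, the two strategies look roughly like
this (illustrative, not the macro's exact expansion):

    @ ARMv7 and later: MOVW/MOVT pair, no memory access needed
    movw    r0, #:lower16:sym
    movt    r0, #:upper16:sym

    @ classic literal load: fetches the address via the D-cache
    ldr     r0, =sym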

While at it, switch to the bl_r macro for invoking the right PABT/DABT
helpers instead of setting the LR register explicitly, which does not
play well with cores that speculate across function returns.

Signed-off-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Reviewed-by: Linus Walleij &lt;linus.walleij@linaro.org&gt;
Signed-off-by: Russell King (Oracle) &lt;rmk+kernel@armlinux.org.uk&gt;
</content>
</entry>
<entry>
<title>ARM: smp: elide HWCAP_TLS checks or __entry_task updates on SMP+v6</title>
<updated>2022-01-25T08:53:52+00:00</updated>
<author>
<name>Ard Biesheuvel</name>
<email>ardb@kernel.org</email>
</author>
<published>2022-01-24T18:28:58+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=75fa4adc4f50ee52d8cdfa3e84798176ccb4a354'/>
<id>75fa4adc4f50ee52d8cdfa3e84798176ccb4a354</id>
<content type='text'>
Use the SMP_ON_UP patching framework to elide HWCAP_TLS tests from the
context switch and return to userspace code paths, as SMP systems are
guaranteed to have this h/w capability.
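
As an illustrative sketch of the framework (not the literal diff): an
instruction wrapped in ALT_SMP() is rewritten to its ALT_UP()
alternative at boot when the system turns out to be UP, so the SMP-only
work disappears entirely:

    @ patched to a NOP at boot on UP systems
    ALT_SMP(mcr     p15, 0, r3, c13, c0, 3) @ set TLS register (TPIDRURO)
    ALT_UP(nop)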

At the same time, omit the update of __entry_task if the system is
detected to be UP at runtime, as in that case, the value is never used.

Signed-off-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
</content>
</entry>
<entry>
<title>ARM: smp: defer TPIDRURO update for SMP v6 configurations too</title>
<updated>2021-12-06T11:49:17+00:00</updated>
<author>
<name>Ard Biesheuvel</name>
<email>ardb@kernel.org</email>
</author>
<published>2021-11-25T22:21:45+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=c2755910373bb5dfb9aa68ba2924036686815c9e'/>
<id>c2755910373bb5dfb9aa68ba2924036686815c9e</id>
<content type='text'>
Defer TPIDRURO updates for user space until exit also for CPU_V6+SMP
configurations so that we can decide at runtime whether to use it to
carry the current pointer, provided that we are running on a CPU that
actually implements this register. This is needed for
THREAD_INFO_IN_TASK support for UP systems, which requires that all
SMP-capable systems use the TPIDRURO-based access to 'current', as the
only remaining alternative would be a global variable, which only
works on UP.
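
For reference, this is the kind of access the register then serves
while in the kernel (illustrative):

    mrc     p15, 0, r2, c13, c0, 3  @ TPIDRURO: 'current' while in kernel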

Acked-by: Linus Walleij &lt;linus.walleij@linaro.org&gt;
Acked-by: Nicolas Pitre &lt;nico@fluxnic.net&gt;
Signed-off-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Tested-by: Marc Zyngier &lt;maz@kernel.org&gt;
Tested-by: Vladimir Murzin &lt;vladimir.murzin@arm.com&gt; # ARMv7M
</content>
</entry>
<entry>
<title>ARM: assembler: add optimized ldr/str macros to load variables from memory</title>
<updated>2021-12-06T11:49:16+00:00</updated>
<author>
<name>Ard Biesheuvel</name>
<email>ardb@kernel.org</email>
</author>
<published>2021-11-26T18:37:27+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=4e918ab13eaf40f19938659cb5a22c93172778a8'/>
<id>4e918ab13eaf40f19938659cb5a22c93172778a8</id>
<content type='text'>
We will be adding variable loads to various hot paths, so it makes sense
to add a helper macro that can load variables from asm code without the
use of literal pool entries. On v7 or later, we can simply use MOVW/MOVT
pairs, but on earlier cores, this requires a bit of hackery to emit an
equivalent sequence of ADD/LDR instructions.
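
Roughly, the pre-v7 fallback to load the value of a hypothetical
symbol sym looks like this (a sketch loosely modelled on the macro;
the group relocations fix up the immediates at link time):

    .reloc  .L0, R_ARM_ALU_PC_G0_NC, sym
    .reloc  .L1, R_ARM_ALU_PC_G1_NC, sym
    .reloc  .L2, R_ARM_LDR_PC_G2, sym
.L0:    sub     r0, pc, #8
.L1:    sub     r0, r0, #0
.L2:    ldr     r0, [r0, #0]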

Acked-by: Linus Walleij &lt;linus.walleij@linaro.org&gt;
Acked-by: Nicolas Pitre &lt;nico@fluxnic.net&gt;
Signed-off-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Tested-by: Marc Zyngier &lt;maz@kernel.org&gt;
Tested-by: Vladimir Murzin &lt;vladimir.murzin@arm.com&gt; # ARMv7M
</content>
</entry>
<entry>
<title>ARM: implement support for vmap'ed stacks</title>
<updated>2021-12-03T14:11:33+00:00</updated>
<author>
<name>Ard Biesheuvel</name>
<email>ardb@kernel.org</email>
</author>
<published>2021-09-23T07:15:53+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=a1c510d0adc604bb143c86052bc5be48cbcfa17c'/>
<id>a1c510d0adc604bb143c86052bc5be48cbcfa17c</id>
<content type='text'>
Wire up the generic support for managing task stack allocations via vmalloc,
and implement the entry code that detects whether we faulted because of a
stack overrun (or future stack overrun caused by pushing the pt_regs array).

While this adds a fair amount of tricky entry asm code, it should be
noted that it only adds a TST + branch to the svc_entry path. The code
implementing the non-trivial handling of the overflow stack is emitted
out-of-line into the .text section.
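
A minimal sketch of that check, assuming 8 KiB size-aligned stacks and
a hypothetical label name (the real code derives the mask from
THREAD_SIZE_ORDER and PAGE_SHIFT):

    @ stacks are size-aligned, so this bit is zero unless SP ran off
    tst     sp, #0x2000
    bne     .Lstack_overflow        @ handled on the overflow stack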

Since on ARM, we rely on do_translation_fault() to keep PMD level page
table entries that cover the vmalloc region up to date, we need to
ensure that we don't hit such a stale PMD entry when accessing the
stack. So we do a dummy read from the new stack while still running from
the old one on the context switch path, and bump the vmalloc_seq counter
when PMD level entries in the vmalloc range are modified, so that the MM
switch fetches the latest version of the entries.

Note that we need to increase the per-mode stack by 1 word, to gain some
space to stash a GPR until we know it is safe to touch the stack.
However, due to the cacheline alignment of the struct, this does not
actually increase the memory footprint of the struct stack array at all.

Signed-off-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Tested-by: Keith Packard &lt;keithpac@amazon.com&gt;
Tested-by: Marc Zyngier &lt;maz@kernel.org&gt;
Tested-by: Vladimir Murzin &lt;vladimir.murzin@arm.com&gt; # ARMv7M
</content>
</entry>
<entry>
<title>ARM: smp: Free up the TLS register while running in the kernel</title>
<updated>2021-09-27T14:54:02+00:00</updated>
<author>
<name>Ard Biesheuvel</name>
<email>ardb@kernel.org</email>
</author>
<published>2021-09-18T08:44:36+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=3855ab614df4818c833864572559a97fd9f9a299'/>
<id>3855ab614df4818c833864572559a97fd9f9a299</id>
<content type='text'>
To prepare for a subsequent patch that stores the current task pointer
in the user space TLS register while running in the kernel, modify the
set_tls and switch_tls routines not to touch the register directly, and
update the return to user space code to load the correct value.
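
Sketch of the reload on the return-to-user path (close to, but not
literally, the patch):

    get_thread_info r1
    ldr     r1, [r1, #TI_TP_VALUE]
    mcr     p15, 0, r1, c13, c0, 3  @ set TLS register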

Signed-off-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Reviewed-by: Linus Walleij &lt;linus.walleij@linaro.org&gt;
Tested-by: Amit Daniel Kachhap &lt;amit.kachhap@arm.com&gt;
</content>
</entry>
<entry>
<title>ARM: uaccess: consolidate uaccess asm to asm/uaccess-asm.h</title>
<updated>2020-05-03T16:30:24+00:00</updated>
<author>
<name>Russell King</name>
<email>rmk+kernel@armlinux.org.uk</email>
</author>
<published>2020-05-03T12:03:54+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=747ffc2fcf969eff9309d7f2d1d61cb8b9e1bb40'/>
<id>747ffc2fcf969eff9309d7f2d1d61cb8b9e1bb40</id>
<content type='text'>
Consolidate the user access assembly code to asm/uaccess-asm.h.  This
moves the csdb, check_uaccess, uaccess_mask_range_ptr, uaccess_enable,
uaccess_disable, uaccess_save, uaccess_restore macros, and creates two
new ones for exception entry and exit - uaccess_entry and uaccess_exit.

This makes the uaccess_save and uaccess_restore macros private to
asm/uaccess-asm.h.
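
Call sites then reduce to something like this (sketch; tsk is the
register alias already used by the entry code):

    uaccess_entry tsk, r0, r1, r2, 1    @ entry: save state, disable
    uaccess_exit tsk, r0, r1            @ exit: restore saved state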

Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
</content>
</entry>
<entry>
<title>ARM: 8844/1: use unified assembler in assembly files</title>
<updated>2019-02-26T11:26:07+00:00</updated>
<author>
<name>Stefan Agner</name>
<email>stefan@agner.ch</email>
</author>
<published>2019-02-17T23:57:38+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=e44fc38818ed795f4c661d5414c6e0affae0fa63'/>
<id>e44fc38818ed795f4c661d5414c6e0affae0fa63</id>
<content type='text'>
Use unified assembler syntax (UAL) in assembly files. Divided
syntax is considered deprecated. This will also allow building
the kernel with LLVM's integrated assembler.
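
A classic example of the difference (illustrative):

    ldrlsb  r0, [r1]    @ divided: condition code before size suffix
    ldrbls  r0, [r1]    @ unified: size suffix before condition code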

Signed-off-by: Stefan Agner &lt;stefan@agner.ch&gt;
Acked-by: Nicolas Pitre &lt;nico@linaro.org&gt;
Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
</content>
</entry>
</feed>
