| author | Ard Biesheuvel <ardb@kernel.org> | 2024-10-09 18:04:40 +0200 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2024-12-09 10:31:43 +0100 |
| commit | 83d123e27623713dd69eed2569eacf5f1b3c9033 (patch) | |
| tree | 830a3604cfab5c4b947d1638d61974a873e83464 /arch | |
| parent | f662b4a69e1d6c15db3354de6fc9f923417a5a10 (diff) | |
x86/pvh: Call C code via the kernel virtual mapping
[ Upstream commit e8fbc0d9cab6c1ee6403f42c0991b0c1d5dbc092 ]
Calling C code via a different mapping than it was linked at is
problematic, because the compiler assumes that RIP-relative and absolute
symbol references are interchangeable. GCC in particular may use
RIP-relative per-CPU variable references even when not using -fpic.
So call xen_prepare_pvh() via its kernel virtual mapping on x86_64, so
that those RIP-relative references produce the correct values. This
matches the pre-existing behavior for i386, which also invokes
xen_prepare_pvh() via the kernel virtual mapping before invoking
startup_32 with paging disabled again.
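To make the failure mode concrete, here is a hedged, user-space C sketch (not kernel code) that models the two kinds of symbol reference with invented addresses. An absolute relocation always evaluates to the link-time virtual address, while a RIP-relative access evaluates to "current PC plus link-time offset", so the two disagree as soon as the code executes from the 1:1 physical mapping instead of the mapping it was linked at.

```c
/*
 * Illustrative sketch only: the addresses are made up and merely model
 * how absolute and RIP-relative references resolve when code runs from
 * a different mapping than it was linked at.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t link_virt_func = 0xffffffff81000000ULL; /* link-time address of the code */
	uint64_t link_virt_var  = 0xffffffff82000000ULL; /* link-time address of a variable */
	uint64_t runtime_pc     = 0x0000000001000000ULL; /* code actually runs from the 1:1 map */

	/* Absolute relocation: always the link-time virtual address. */
	uint64_t abs_ref = link_virt_var;

	/* RIP-relative addressing: current PC plus the link-time offset. */
	uint64_t rip_ref = runtime_pc + (link_virt_var - link_virt_func);

	printf("absolute reference:     %#llx\n", (unsigned long long)abs_ref);
	printf("RIP-relative reference: %#llx\n", (unsigned long long)rip_ref);
	return 0;
}
```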
Fixes: 7243b93345f7 ("xen/pvh: Bootstrap PVH guest")
Tested-by: Jason Andryuk <jason.andryuk@amd.com>
Reviewed-by: Jason Andryuk <jason.andryuk@amd.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Message-ID: <20241009160438.3884381-8-ardb+git@google.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Diffstat (limited to 'arch')
| -rw-r--r-- | arch/x86/platform/pvh/head.S | 9 |
|---|---|---|

1 file changed, 8 insertions, 1 deletion
```diff
diff --git a/arch/x86/platform/pvh/head.S b/arch/x86/platform/pvh/head.S
index c994ea58bdf7..008a80552224 100644
--- a/arch/x86/platform/pvh/head.S
+++ b/arch/x86/platform/pvh/head.S
@@ -107,7 +107,14 @@ SYM_CODE_START_LOCAL(pvh_start_xen)
 	movq %rbp, %rbx
 	subq $_pa(pvh_start_xen), %rbx
 	movq %rbx, phys_base(%rip)
-	call xen_prepare_pvh
+
+	/* Call xen_prepare_pvh() via the kernel virtual mapping */
+	leaq xen_prepare_pvh(%rip), %rax
+	subq phys_base(%rip), %rax
+	addq $__START_KERNEL_map, %rax
+	ANNOTATE_RETPOLINE_SAFE
+	call *%rax
+
 	/*
 	 * Clear phys_base. __startup_64 will *add* to its value,
 	 * so reset to 0.
```
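The added instruction sequence amounts to a simple address translation: take the RIP-relative (identity-mapped) address of xen_prepare_pvh, subtract the phys_base value recorded a few instructions earlier, and add __START_KERNEL_map to obtain the kernel virtual address to call. A minimal C sketch of that arithmetic follows; it is not kernel code, the input values are invented, and only the __START_KERNEL_map constant is assumed to match the usual x86_64 value.

```c
/*
 * Illustrative sketch of the translation performed above:
 *
 *   leaq xen_prepare_pvh(%rip), %rax   -> run-time (identity-mapped) address
 *   subq phys_base(%rip), %rax         -> remove the physical load offset
 *   addq $__START_KERNEL_map, %rax     -> rebase into the kernel virtual mapping
 *
 * The leaq result and phys_base below are made-up example values.
 */
#include <stdint.h>
#include <stdio.h>

#define START_KERNEL_MAP 0xffffffff80000000ULL /* assumed x86_64 __START_KERNEL_map */

int main(void)
{
	uint64_t leaq_result = 0x0000000002345670ULL; /* where the symbol sits at run time */
	uint64_t phys_base   = 0x0000000001000000ULL; /* load offset stored just before the call */

	/* subq phys_base(%rip), %rax ; addq $__START_KERNEL_map, %rax */
	uint64_t call_target = leaq_result - phys_base + START_KERNEL_MAP;

	printf("indirect call target: %#llx\n", (unsigned long long)call_target);
	return 0;
}
```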
