| author | Paolo Bonzini <pbonzini@redhat.com> | 2017-04-27 17:33:14 +0200 |
|---|---|---|
| committer | Paolo Bonzini <pbonzini@redhat.com> | 2017-04-27 17:33:14 +0200 |
| commit | c24a7be2110ddac2ab75abcded76c62dccb6b1f0 | |
| tree | cb4ef87091258db6b0f94cf2d4bd938eae1e26de | |
| parent | 70f3aac964ae2bc9a0a1d5d65a62e258591ade18 | |
| parent | 1edb632133efb6226b6bef3e7d9fa8c7134ac4e2 | |
Merge tag 'kvm-arm-for-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD
KVM/ARM Changes for v4.12.
Changes include:
- Using the common sysreg definitions between KVM and arm64
- Improved hyp-stub implementation with support for kexec and kdump on the 32-bit side
- Proper PMU exception handling
- Performance improvements of our GIC handling
- Support for irqchip in userspace with in-kernel arch-timers and PMU support
- A fix for a race condition in our PSCI code
Conflicts:
Documentation/virtual/kvm/api.txt
include/uapi/linux/kvm.h
50 files changed, 1174 insertions, 938 deletions
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index dc674c2b8b31..f038f8cafa70 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -4120,3 +4120,44 @@ This capability indicates that a guest using memory monitoring instructions
 (MWAIT/MWAITX) to stop the virtual CPU will not cause a VM exit.  As such, time
 spent while the virtual CPU is halted in this way will then be accounted for as
 guest running time on the host (as opposed to e.g. HLT).
+
+8.9 KVM_CAP_ARM_USER_IRQ
+
+Architectures: arm, arm64
+This capability, if KVM_CHECK_EXTENSION indicates that it is available, means
+that if userspace creates a VM without an in-kernel interrupt controller, it
+will be notified of changes to the output level of in-kernel emulated devices
+that can generate virtual interrupts presented to the VM.
+For such VMs, on every return to userspace, the kernel
+updates the vcpu's run->s.regs.device_irq_level field to represent the actual
+output level of the device.
+
+Whenever kvm detects a change in the device output level, kvm guarantees at
+least one return to userspace before running the VM.  This exit could either
+be a KVM_EXIT_INTR or any other exit event, like KVM_EXIT_MMIO.  This way,
+userspace can always sample the device output level and re-compute the state of
+the userspace interrupt controller.  Userspace should always check the state
+of run->s.regs.device_irq_level on every kvm exit.
+The value in run->s.regs.device_irq_level can represent both level and edge
+triggered interrupt signals, depending on the device.  Edge triggered interrupt
+signals will exit to userspace with the bit in run->s.regs.device_irq_level
+set exactly once per edge signal.
+
+The field run->s.regs.device_irq_level is available independent of
+run->kvm_valid_regs or run->kvm_dirty_regs bits.
+
+If KVM_CAP_ARM_USER_IRQ is supported, the KVM_CHECK_EXTENSION ioctl returns a
+number larger than 0 indicating the version of this capability that is
+implemented, and thereby which bits in run->s.regs.device_irq_level can signal
+values.
+
+Currently the following bits are defined for the device_irq_level bitmap:
+
+  KVM_CAP_ARM_USER_IRQ >= 1:
+
+    KVM_ARM_DEV_EL1_VTIMER -  EL1 virtual timer
+    KVM_ARM_DEV_EL1_PTIMER -  EL1 physical timer
+    KVM_ARM_DEV_PMU        -  ARM PMU overflow interrupt signal
+
+Future versions of kvm may implement additional events.  These will get
+indicated by returning a higher number from KVM_CHECK_EXTENSION and will be
+listed above.
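
The capability text above maps onto only a few lines of VMM code. Below is a minimal, hedged userspace sketch (not part of this merge): it assumes an arm/arm64 host, a VM file descriptor, and an mmap()ed struct kvm_run already set up by the surrounding VMM, and it uses only the UAPI names introduced here.

```c
/* Sketch only: probe KVM_CAP_ARM_USER_IRQ and decode device_irq_level. */
#include <linux/kvm.h>
#include <sys/ioctl.h>

int arm_user_irq_version(int vm_fd)
{
	/* 0 if unsupported; >= 1 encodes the capability version */
	return ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_USER_IRQ);
}

void sample_device_levels(struct kvm_run *run)
{
	/* valid on every exit, independent of kvm_valid_regs/kvm_dirty_regs */
	__u64 level = run->s.regs.device_irq_level;

	if (level & KVM_ARM_DEV_EL1_VTIMER)
		; /* drive the EL1 vtimer line of the userspace irqchip model */
	if (level & KVM_ARM_DEV_EL1_PTIMER)
		; /* drive the EL1 ptimer line */
	if (level & KVM_ARM_DEV_PMU)
		; /* PMU overflow: edge-triggered, bit set once per edge */
}
```
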
diff --git a/Documentation/virtual/kvm/arm/hyp-abi.txt b/Documentation/virtual/kvm/arm/hyp-abi.txt
new file mode 100644
index 000000000000..a20a0bee268d
--- /dev/null
+++ b/Documentation/virtual/kvm/arm/hyp-abi.txt
@@ -0,0 +1,53 @@
+* Internal ABI between the kernel and HYP
+
+This file documents the interaction between the Linux kernel and the
+hypervisor layer when running Linux as a hypervisor (for example
+KVM).  It doesn't cover the interaction of the kernel with the
+hypervisor when running as a guest (under Xen, KVM or any other
+hypervisor), or any hypervisor-specific interaction when the kernel is
+used as a host.
+
+On arm and arm64 (without VHE), the kernel doesn't run in hypervisor
+mode, but still needs to interact with it, allowing a built-in
+hypervisor to be either installed or torn down.
+
+In order to achieve this, the kernel must be booted at HYP (arm) or
+EL2 (arm64), allowing it to install a set of stubs before dropping to
+SVC/EL1.  These stubs are accessible by using a 'hvc #0' instruction,
+and only act on individual CPUs.
+
+Unless specified otherwise, any built-in hypervisor must implement
+these functions (see arch/arm{,64}/include/asm/virt.h):
+
+* r0/x0 = HVC_SET_VECTORS
+  r1/x1 = vectors
+
+  Set HVBAR/VBAR_EL2 to 'vectors' to enable a hypervisor.  'vectors'
+  must be a physical address, and respect the alignment requirements
+  of the architecture.  Only implemented by the initial stubs, not by
+  Linux hypervisors.
+
+* r0/x0 = HVC_RESET_VECTORS
+
+  Turn HYP/EL2 MMU off, and reset HVBAR/VBAR_EL2 to the initial
+  stubs' exception vector value.  This effectively disables an
+  existing hypervisor.
+
+* r0/x0 = HVC_SOFT_RESTART
+  r1/x1 = restart address
+  x2 = x0's value when entering the next payload (arm64)
+  x3 = x1's value when entering the next payload (arm64)
+  x4 = x2's value when entering the next payload (arm64)
+
+  Mask all exceptions, disable the MMU, move the arguments into place
+  (arm64 only), and jump to the restart address while at HYP/EL2.  This
+  hypercall is not expected to return to its caller.
+
+Any other value of r0/x0 triggers a hypervisor-specific handling,
+which is not documented here.
+
+The return value of a stub hypercall is held by r0/x0, and is 0 on
+success, and HVC_STUB_ERR on error.  A stub hypercall is allowed to
+clobber any of the caller-saved registers (x0-x18 on arm64, r0-r3 and
+ip on arm).  It is thus recommended to use a function call to perform
+the hypercall.
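
To make the hyp-abi.txt calling convention concrete, here is a hedged arm64 illustration of a stub hypercall issued from C. It is not from this merge; the kernel's real callers are the __hyp_set_vectors/__hyp_reset_vectors assembly entry points, and `stub_hvc` is an illustrative name. The constants mirror the asm/virt.h values added by this series.

```c
#define HVC_SET_VECTORS		0
#define HVC_SOFT_RESTART	1
#define HVC_RESET_VECTORS	2
#define HVC_STUB_ERR		0xbadca11

/*
 * Issue 'hvc #0' per the ABI above: function number in x0, argument
 * in x1, result back in x0 (0 on success, HVC_STUB_ERR on error).
 * The stub may clobber any caller-saved register, hence the wide
 * clobber list; performing the hvc inside a real function call gives
 * the compiler the same guarantee for free.
 */
static unsigned long stub_hvc(unsigned long func, unsigned long arg)
{
	register unsigned long x0 asm("x0") = func;
	register unsigned long x1 asm("x1") = arg;

	asm volatile("hvc #0"
		     : "+r" (x0), "+r" (x1)
		     :
		     : "x2", "x3", "x4", "x5", "x6", "x7", "x8", "x9",
		       "x10", "x11", "x12", "x13", "x14", "x15", "x16",
		       "x17", "x18", "memory");
	return x0;
}

/* e.g. tear down a resident hypervisor before kexec: */
/*	if (stub_hvc(HVC_RESET_VECTORS, 0) == HVC_STUB_ERR) ...	*/
```
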
diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index 9150f9732785..7c711ba61417 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -422,7 +422,17 @@ dtb_check_done:
 		cmp	r0, #HYP_MODE
 		bne	1f
 
-		bl	__hyp_get_vectors
+		/*
+		 * Compute the address of the hyp vectors after relocation.
+		 * This requires some arithmetic since we cannot directly
+		 * reference __hyp_stub_vectors in a PC-relative way.
+		 * Call __hyp_set_vectors with the new address so that we
+		 * can HVC again after the copy.
+		 */
+0:		adr	r0, 0b
+		movw	r1, #:lower16:__hyp_stub_vectors - 0b
+		movt	r1, #:upper16:__hyp_stub_vectors - 0b
+		add	r0, r0, r1
 		sub	r0, r0, r5
 		add	r0, r0, r10
 		bl	__hyp_set_vectors
diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 8ef05381984b..14d68a4d826f 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -33,7 +33,7 @@
 #define ARM_EXCEPTION_IRQ	  5
 #define ARM_EXCEPTION_FIQ	  6
 #define ARM_EXCEPTION_HVC	  7
-
+#define ARM_EXCEPTION_HYP_GONE	  HVC_STUB_ERR
 /*
  * The rr_lo_hi macro swaps a pair of registers depending on
  * current endianness.  It is used in conjunction with ldrd and strd
@@ -72,10 +72,11 @@ extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
 
 extern void __init_stage2_translation(void);
 
-extern void __kvm_hyp_reset(unsigned long);
-
 extern u64 __vgic_v3_get_ich_vtr_el2(void);
+extern u64 __vgic_v3_read_vmcr(void);
+extern void __vgic_v3_write_vmcr(u32 vmcr);
 extern void __vgic_v3_init_lrs(void);
+
 #endif
 
 #endif /* __ARM_KVM_ASM_H__ */
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 3cd04d164c64..f0e66577ce05 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -269,12 +269,6 @@ static inline void __cpu_init_stage2(void)
 	kvm_call_hyp(__init_stage2_translation);
 }
 
-static inline void __cpu_reset_hyp_mode(unsigned long vector_ptr,
-					phys_addr_t phys_idmap_start)
-{
-	kvm_call_hyp((void *)virt_to_idmap(__kvm_hyp_reset), vector_ptr);
-}
-
 static inline int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
 {
 	return 0;
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 95f38dcd611d..fa6f2174276b 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -56,7 +56,6 @@ void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
 
 phys_addr_t kvm_mmu_get_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
-phys_addr_t kvm_get_idmap_start(void);
 int kvm_mmu_init(void);
 void kvm_clear_hyp_idmap(void);
 
diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
index 8877ad5ffe10..f2e1af45bd6f 100644
--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -43,7 +43,7 @@ extern struct processor {
 	/*
 	 * Special stuff for a reset
 	 */
-	void (*reset)(unsigned long addr) __attribute__((noreturn));
+	void (*reset)(unsigned long addr, bool hvc) __attribute__((noreturn));
 	/*
 	 * Idle the processor
 	 */
@@ -88,7 +88,7 @@ extern void cpu_set_pte_ext(pte_t *ptep, pte_t pte);
 #else
 extern void cpu_set_pte_ext(pte_t *ptep, pte_t pte, unsigned int ext);
 #endif
-extern void cpu_reset(unsigned long addr) __attribute__((noreturn));
+extern void cpu_reset(unsigned long addr, bool hvc) __attribute__((noreturn));
 
 /* These three are private to arch/arm/kernel/suspend.c */
 extern void cpu_do_suspend(void *);
diff --git a/arch/arm/include/asm/virt.h b/arch/arm/include/asm/virt.h
index 6dae1956c74d..141144f333a2 100644
--- a/arch/arm/include/asm/virt.h
+++ b/arch/arm/include/asm/virt.h
@@ -53,7 +53,7 @@ static inline void sync_boot_mode(void)
 }
 
 void __hyp_set_vectors(unsigned long phys_vector_base);
-unsigned long __hyp_get_vectors(void);
+void __hyp_reset_vectors(void);
 #else
 #define __boot_cpu_mode	(SVC_MODE)
 #define sync_boot_mode()
@@ -94,6 +94,18 @@ extern char __hyp_text_start[];
 extern char __hyp_text_end[];
 #endif
 
+#else
+
+/* Only assembly code should need those */
+
+#define HVC_SET_VECTORS 0
+#define HVC_SOFT_RESTART 1
+#define HVC_RESET_VECTORS 2
+
+#define HVC_STUB_HCALL_NR 3
+
 #endif /* __ASSEMBLY__ */
 
+#define HVC_STUB_ERR	0xbadca11
+
 #endif /* ! VIRT_H */
diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
index 254a38cace2a..a88726359e5f 100644
--- a/arch/arm/include/uapi/asm/kvm.h
+++ b/arch/arm/include/uapi/asm/kvm.h
@@ -116,6 +116,8 @@ struct kvm_debug_exit_arch {
 };
 
 struct kvm_sync_regs {
+	/* Used with KVM_CAP_ARM_USER_IRQ */
+	__u64 device_irq_level;
 };
 
 struct kvm_arch_memory_slot {
diff --git a/arch/arm/kernel/hyp-stub.S b/arch/arm/kernel/hyp-stub.S
index 15d073ae5da2..ec7e7377d423 100644
--- a/arch/arm/kernel/hyp-stub.S
+++ b/arch/arm/kernel/hyp-stub.S
@@ -125,7 +125,7 @@ ENTRY(__hyp_stub_install_secondary)
 	 * (see safe_svcmode_maskall).
	 */
 	@ Now install the hypervisor stub:
-	adr	r7, __hyp_stub_vectors
+	W(adr)	r7, __hyp_stub_vectors
 	mcr	p15, 4, r7, c12, c0, 0	@ set hypervisor vector base (HVBAR)
 
 	@ Disable all traps, so we don't get any nasty surprise
@@ -202,9 +202,23 @@ ARM_BE8(orr	r7, r7, #(1 << 25))	@ HSCTLR.EE
 ENDPROC(__hyp_stub_install_secondary)
 
 __hyp_stub_do_trap:
-	cmp	r0, #-1
-	mrceq	p15, 4, r0, c12, c0, 0	@ get HVBAR
-	mcrne	p15, 4, r0, c12, c0, 0	@ set HVBAR
+	teq	r0, #HVC_SET_VECTORS
+	bne	1f
+	mcr	p15, 4, r1, c12, c0, 0	@ set HVBAR
+	b	__hyp_stub_exit
+
+1:	teq	r0, #HVC_SOFT_RESTART
+	bne	1f
+	bx	r1
+
+1:	teq	r0, #HVC_RESET_VECTORS
+	beq	__hyp_stub_exit
+
+	ldr	r0, =HVC_STUB_ERR
+	__ERET
+
+__hyp_stub_exit:
+	mov	r0, #0
 	__ERET
 ENDPROC(__hyp_stub_do_trap)
 
@@ -230,15 +244,26 @@ ENDPROC(__hyp_stub_do_trap)
  * so you will need to set that to something sensible at the new hypervisor's
  * initialisation entry point.
  */
-ENTRY(__hyp_get_vectors)
-	mov	r0, #-1
-ENDPROC(__hyp_get_vectors)
-	@ fall through
 ENTRY(__hyp_set_vectors)
+	mov	r1, r0
+	mov	r0, #HVC_SET_VECTORS
 	__HVC(0)
 	ret	lr
 ENDPROC(__hyp_set_vectors)
 
+ENTRY(__hyp_soft_restart)
+	mov	r1, r0
+	mov	r0, #HVC_SOFT_RESTART
+	__HVC(0)
+	ret	lr
+ENDPROC(__hyp_soft_restart)
+
+ENTRY(__hyp_reset_vectors)
+	mov	r0, #HVC_RESET_VECTORS
+	__HVC(0)
+	ret	lr
+ENDPROC(__hyp_reset_vectors)
+
 #ifndef ZIMAGE
 .align 2
 .L__boot_cpu_mode_offset:
@@ -246,7 +271,7 @@ ENDPROC(__hyp_set_vectors)
 #endif
 
 	.align	5
-__hyp_stub_vectors:
+ENTRY(__hyp_stub_vectors)
 __hyp_stub_reset:	W(b)	.
 __hyp_stub_und:		W(b)	.
 __hyp_stub_svc:		W(b)	.
diff --git a/arch/arm/kernel/reboot.c b/arch/arm/kernel/reboot.c
index 3fa867a2aae6..3b2aa9a9fe26 100644
--- a/arch/arm/kernel/reboot.c
+++ b/arch/arm/kernel/reboot.c
@@ -12,10 +12,11 @@
 
 #include <asm/cacheflush.h>
 #include <asm/idmap.h>
+#include <asm/virt.h>
 
 #include "reboot.h"
 
-typedef void (*phys_reset_t)(unsigned long);
+typedef void (*phys_reset_t)(unsigned long, bool);
 
 /*
  * Function pointers to optional machine specific functions
@@ -51,7 +52,9 @@ static void __soft_restart(void *addr)
 
 	/* Switch to the identity mapping. */
 	phys_reset = (phys_reset_t)virt_to_idmap(cpu_reset);
-	phys_reset((unsigned long)addr);
+
+	/* original stub should be restored by kvm */
+	phys_reset((unsigned long)addr, is_hyp_mode_available());
 
 	/* Should never get here. */
 	BUG();
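
Together with the api.txt guarantee above (at least one userspace exit after any device level change), the new kvm_sync_regs field supports a very simple VMM run loop. A hedged sketch, not part of this merge; `userspace_irqchip_set_lines` stands in for whatever irqchip model the VMM implements:

```c
#include <linux/kvm.h>
#include <sys/ioctl.h>

/* hypothetical hook into the VMM's userspace irqchip model */
void userspace_irqchip_set_lines(__u64 device_irq_level);

void run_vcpu_loop(int vcpu_fd, struct kvm_run *run)
{
	for (;;) {
		int ret = ioctl(vcpu_fd, KVM_RUN, 0);

		/* re-sample the device output levels on every exit */
		userspace_irqchip_set_lines(run->s.regs.device_irq_level);

		if (ret < 0 && run->exit_reason == KVM_EXIT_INTR)
			continue; /* exit may have been forced just to report levels */

		/* ... dispatch KVM_EXIT_MMIO and the other exit reasons ... */
	}
}
```
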
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index e3c8105ada65..8966e59806aa 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -53,7 +53,6 @@ __asm__(".arch_extension	virt");
 
 static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
 static kvm_cpu_context_t __percpu *kvm_host_cpu_state;
-static unsigned long hyp_default_vectors;
 
 /* Per-CPU variable containing the currently running vcpu. */
 static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_arm_running_vcpu);
@@ -227,6 +226,13 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		else
 			r = kvm->arch.vgic.msis_require_devid;
 		break;
+	case KVM_CAP_ARM_USER_IRQ:
+		/*
+		 * 1: EL1_VTIMER, EL1_PTIMER, and PMU.
+		 * (bump this number if adding more devices)
+		 */
+		r = 1;
+		break;
 	default:
 		r = kvm_arch_dev_ioctl_check_extension(kvm, ext);
 		break;
@@ -348,15 +354,14 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	vcpu->arch.host_cpu_context = this_cpu_ptr(kvm_host_cpu_state);
 
 	kvm_arm_set_running_vcpu(vcpu);
+
+	kvm_vgic_load(vcpu);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
-	/*
-	 * The arch-generic KVM code expects the cpu field of a vcpu to be -1
-	 * if the vcpu is no longer assigned to a cpu.  This is used for the
-	 * optimized make_all_cpus_request path.
-	 */
+	kvm_vgic_put(vcpu);
+
 	vcpu->cpu = -1;
 
 	kvm_arm_set_running_vcpu(NULL);
@@ -514,13 +519,7 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 			return ret;
 	}
 
-	/*
-	 * Enable the arch timers only if we have an in-kernel VGIC
-	 * and it has been properly initialized, since we cannot handle
-	 * interrupts from the virtual timer with a userspace gic.
-	 */
-	if (irqchip_in_kernel(kvm) && vgic_initialized(kvm))
-		ret = kvm_timer_enable(vcpu);
+	ret = kvm_timer_enable(vcpu);
 
 	return ret;
 }
@@ -630,16 +629,23 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 * non-preemptible context.
 		 */
 		preempt_disable();
+		kvm_pmu_flush_hwstate(vcpu);
+
 		kvm_timer_flush_hwstate(vcpu);
 		kvm_vgic_flush_hwstate(vcpu);
 
 		local_irq_disable();
 
 		/*
-		 * Re-check atomic conditions
+		 * If we have a signal pending, or need to notify a userspace
+		 * irqchip about timer or PMU level changes, then we exit (and
+		 * update the timer level state in kvm_timer_update_run
+		 * below).
 		 */
-		if (signal_pending(current)) {
+		if (signal_pending(current) ||
+		    kvm_timer_should_notify_user(vcpu) ||
+		    kvm_pmu_should_notify_user(vcpu)) {
 			ret = -EINTR;
 			run->exit_reason = KVM_EXIT_INTR;
 		}
@@ -711,6 +717,12 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		ret = handle_exit(vcpu, run, ret);
 	}
 
+	/* Tell userspace about in-kernel device output levels */
+	if (unlikely(!irqchip_in_kernel(vcpu->kvm))) {
+		kvm_timer_update_run(vcpu);
+		kvm_pmu_update_run(vcpu);
+	}
+
 	if (vcpu->sigset_active)
 		sigprocmask(SIG_SETMASK, &sigsaved, NULL);
 	return ret;
@@ -1109,8 +1121,16 @@ static void cpu_init_hyp_mode(void *dummy)
 	kvm_arm_init_debug();
 }
 
+static void cpu_hyp_reset(void)
+{
+	if (!is_kernel_in_hyp_mode())
+		__hyp_reset_vectors();
+}
+
 static void cpu_hyp_reinit(void)
 {
+	cpu_hyp_reset();
+
 	if (is_kernel_in_hyp_mode()) {
 		/*
 		 * __cpu_init_stage2() is safe to call even if the PM
@@ -1118,18 +1138,10 @@ static void cpu_hyp_reinit(void)
 		 */
 		__cpu_init_stage2();
 	} else {
-		if (__hyp_get_vectors() == hyp_default_vectors)
-			cpu_init_hyp_mode(NULL);
+		cpu_init_hyp_mode(NULL);
 	}
 }
 
-static void cpu_hyp_reset(void)
-{
-	if (!is_kernel_in_hyp_mode())
-		__cpu_reset_hyp_mode(hyp_default_vectors,
-				     kvm_get_idmap_start());
-}
-
 static void _kvm_arch_hardware_enable(void *discard)
 {
 	if (!__this_cpu_read(kvm_arm_hardware_enabled)) {
@@ -1313,12 +1325,6 @@ static int init_hyp_mode(void)
 		goto out_err;
 
 	/*
-	 * It is probably enough to obtain the default on one
-	 * CPU. It's unlikely to be different on the others.
-	 */
-	hyp_default_vectors = __hyp_get_vectors();
-
-	/*
 	 * Allocate stack pages for Hypervisor-mode
 	 */
 	for_each_possible_cpu(cpu) {
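
The kvm_timer_should_notify_user()/kvm_pmu_should_notify_user() helpers called in the hunk above are implemented outside this excerpt, in the shared virt/kvm/arm code. Conceptually they compare the emulated device's current output level against the last value published to userspace in run->s.regs.device_irq_level; a rough sketch, with vcpu_vtimer_level()/vcpu_ptimer_level() as illustrative stand-ins for the real timer-state accessors:

```c
static bool timer_should_notify_user(struct kvm_vcpu *vcpu)
{
	struct kvm_sync_regs *sregs = &vcpu->run->s.regs;
	bool vlevel = sregs->device_irq_level & KVM_ARM_DEV_EL1_VTIMER;
	bool plevel = sregs->device_irq_level & KVM_ARM_DEV_EL1_PTIMER;

	/* with an in-kernel irqchip, the kernel delivers the lines itself */
	if (irqchip_in_kernel(vcpu->kvm))
		return false;

	/* notify userspace when its last snapshot has gone stale */
	return vcpu_vtimer_level(vcpu) != vlevel ||
	       vcpu_ptimer_level(vcpu) != plevel;
}
```
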
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index 3e5e4194ef86..2c14b69511e9 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -40,6 +40,24 @@
  * Co-processor emulation
  *****************************************************************************/
 
+static bool write_to_read_only(struct kvm_vcpu *vcpu,
+			       const struct coproc_params *params)
+{
+	WARN_ONCE(1, "CP15 write to read-only register\n");
+	print_cp_instr(params);
+	kvm_inject_undefined(vcpu);
+	return false;
+}
+
+static bool read_from_write_only(struct kvm_vcpu *vcpu,
+				 const struct coproc_params *params)
+{
+	WARN_ONCE(1, "CP15 read to write-only register\n");
+	print_cp_instr(params);
+	kvm_inject_undefined(vcpu);
+	return false;
+}
+
 /* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */
 static u32 cache_levels;
 
@@ -502,15 +520,15 @@ static int emulate_cp15(struct kvm_vcpu *vcpu,
 		if (likely(r->access(vcpu, params, r))) {
 			/* Skip instruction, since it was emulated */
 			kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
-			return 1;
 		}
-		/* If access function fails, it should complain. */
 	} else {
+		/* If access function fails, it should complain. */
 		kvm_err("Unsupported guest CP15 access at: %08lx\n",
 			*vcpu_pc(vcpu));
 		print_cp_instr(params);
+		kvm_inject_undefined(vcpu);
 	}
-	kvm_inject_undefined(vcpu);
+
 	return 1;
 }
diff --git a/arch/arm/kvm/coproc.h b/arch/arm/kvm/coproc.h
index eef1759c2b65..3a41b7d1eb86 100644
--- a/arch/arm/kvm/coproc.h
+++ b/arch/arm/kvm/coproc.h
@@ -81,24 +81,6 @@ static inline bool read_zero(struct kvm_vcpu *vcpu,
 	return true;
 }
 
-static inline bool write_to_read_only(struct kvm_vcpu *vcpu,
-				      const struct coproc_params *params)
-{
-	kvm_debug("CP15 write to read-only register at: %08lx\n",
-		  *vcpu_pc(vcpu));
-	print_cp_instr(params);
-	return false;
-}
-
-static inline bool read_from_write_only(struct kvm_vcpu *vcpu,
-					const struct coproc_params *params)
-{
-	kvm_debug("CP15 read to write-only register at: %08lx\n",
-		  *vcpu_pc(vcpu));
-	print_cp_instr(params);
-	return false;
-}
-
 /* Reset functions */
 static inline void reset_unknown(struct kvm_vcpu *vcpu,
 				 const struct coproc_reg *r)
diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
index 96af65a30d78..5fd7968cdae9 100644
--- a/arch/arm/kvm/handle_exit.c
+++ b/arch/arm/kvm/handle_exit.c
@@ -160,6 +160,14 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	case ARM_EXCEPTION_DATA_ABORT:
 		kvm_inject_vabt(vcpu);
 		return 1;
+	case ARM_EXCEPTION_HYP_GONE:
+		/*
+		 * HYP has been reset to the hyp-stub. This happens
+		 * when a guest is pre-empted by kvm_reboot()'s
+		 * shutdown call.
+		 */
+		run->exit_reason = KVM_EXIT_FAIL_ENTRY;
+		return 0;
 	default:
 		kvm_pr_unimpl("Unsupported exception type: %d",
 			      exception_index);
diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
index 96beb53934c9..95a2faefc070 100644
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -126,11 +126,29 @@ hyp_hvc:
 	 */
 	pop	{r0, r1, r2}
 
-	/* Check for __hyp_get_vectors */
-	cmp	r0, #-1
-	mrceq	p15, 4, r0, c12, c0, 0	@ get HVBAR
-	beq	1f
+	/*
+	 * Check if we have a kernel function, which is guaranteed to be
+	 * bigger than the maximum hyp stub hypercall
+	 */
+	cmp	r0, #HVC_STUB_HCALL_NR
+	bhs	1f
 
+	/*
+	 * Not a kernel function, treat it as a stub hypercall.
+	 * Compute the physical address for __kvm_handle_stub_hvc
+	 * (as the code lives in the idmapped page) and branch there.
+	 * We hijack ip (r12) as a tmp register.
+	 */
+	push	{r1}
+	ldr	r1, =kimage_voffset
+	ldr	r1, [r1]
