author	Yang Jihong <yangjihong1@huawei.com>	2023-02-21 08:49:16 +0900
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2023-03-11 16:39:59 +0100
commit	c16e4610d5e5e2698f25280121173292c1c3f805 (patch)
tree	6da8e69d5981ac16f4b5d165f44696db21eaf768 /include
parent	f75ee95196cecd0375c28f56d1bc713368474c63 (diff)
download	linux-c16e4610d5e5e2698f25280121173292c1c3f805.tar.gz
	linux-c16e4610d5e5e2698f25280121173292c1c3f805.tar.bz2
	linux-c16e4610d5e5e2698f25280121173292c1c3f805.zip
x86/kprobes: Fix __recover_optprobed_insn check optimizing logic
commit 868a6fc0ca2407622d2833adefe1c4d284766c4c upstream.

Since the following commit:

  commit f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")

modified the update timing of KPROBE_FLAG_OPTIMIZED, an optimized_kprobe may be in either the optimizing or the unoptimizing state when op.kp->flags has KPROBE_FLAG_OPTIMIZED set and op->list is not empty.

The check logic in __recover_optprobed_insn is therefore incorrect: a kprobe in the unoptimizing state may be wrongly treated as still under optimizing. As a result, incorrect instructions are copied.

The optprobe_queued_unopt function needs to be exported for invocation from the arch directory.

Link: https://lore.kernel.org/all/20230216034247.32348-2-yangjihong1@huawei.com/
Fixes: f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
Cc: stable@vger.kernel.org
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'include')
-rw-r--r--	include/linux/kprobes.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 4dbebd319b6f..0ed50f1a9578 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -342,6 +342,7 @@ extern int proc_kprobes_optimization_handler(struct ctl_table *table,
size_t *length, loff_t *ppos);
#endif
extern void wait_for_kprobe_optimizer(void);
+bool optprobe_queued_unopt(struct optimized_kprobe *op);
#else
static inline void wait_for_kprobe_optimizer(void) { }
#endif /* CONFIG_OPTPROBES */
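The state logic the commit message describes (KPROBE_FLAG_OPTIMIZED set while op->list is non-empty can mean either "queued for optimizing" or "queued for unoptimizing") can be sketched as a small user-space model. Everything below is a simplified stand-in, not the kernel's actual implementation: the struct, the singly linked work lists, and the `saved_insn_usable` helper are hypothetical, modeling only the decision that the corrected __recover_optprobed_insn makes with the help of optprobe_queued_unopt().

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for the kernel flag bit. */
#define KPROBE_FLAG_OPTIMIZED 1

/* Toy model of struct optimized_kprobe: a flags word plus membership
 * on one of the optimizer's pending work lists. */
struct optimized_kprobe {
	unsigned int flags;            /* models op->kp.flags */
	struct optimized_kprobe *next; /* models op->list linkage */
};

static struct optimized_kprobe *optimizing_list;   /* queued to optimize */
static struct optimized_kprobe *unoptimizing_list; /* queued to unoptimize */

static bool on_list(struct optimized_kprobe *head, struct optimized_kprobe *op)
{
	for (; head; head = head->next)
		if (head == op)
			return true;
	return false;
}

/* Models optprobe_queued_unopt(): is op waiting to be unoptimized? */
static bool optprobe_queued_unopt(struct optimized_kprobe *op)
{
	return on_list(unoptimizing_list, op);
}

/*
 * Models the corrected decision: the saved instruction copy may be used
 * when the probe is fully optimized (on neither work list) or queued for
 * unoptimizing (OPTIMIZED still set, so the copy is still valid).  It
 * must NOT be used while the probe is merely queued for optimizing.
 */
static bool saved_insn_usable(struct optimized_kprobe *op)
{
	if (!(op->flags & KPROBE_FLAG_OPTIMIZED))
		return false;
	if (on_list(optimizing_list, op) && !optprobe_queued_unopt(op))
		return false; /* still being optimized: copy not valid yet */
	return true;
}
```

The point of the fix is the middle case: before it, any probe with a non-empty op->list was treated as "under optimizing", so a probe queued for unoptimizing had its still-valid copy skipped and wrong instructions recovered.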