| author | Alexei Starovoitov <ast@kernel.org> | 2023-08-21 15:51:28 -0700 |
|---|---|---|
| committer | Alexei Starovoitov <ast@kernel.org> | 2023-08-21 15:51:28 -0700 |
| commit | d56518380085d78f179cdc701d791ace4acb1d23 (patch) | |
| tree | ed2aefcedfa32080ed5bec252a82e99db6d7f4d5 /tools/lib/bpf/libbpf_internal.h | |
| parent | acfadf25a9ee65d4ff5fbcbd91c63dbae3fe52fb (diff) | |
| parent | 8909a9392b4193f6d76dab9508c63c71458210df (diff) | |
Merge branch 'bpf-add-multi-uprobe-link'
Jiri Olsa says:
====================
bpf: Add multi uprobe link
hi,
this patchset adds support for attaching multiple uprobes and usdt probes
through the new uprobe_multi link.
The current uprobe is attached through a perf event, which makes attaching
many uprobes slow. The main reason is that we need to install a perf event
for each probed function, and profiling shows perf event installation
(perf_install_in_context) as the culprit.
The new uprobe_multi link instead creates raw uprobes and attaches the bpf
program to them without any perf event being involved.
In addition to being faster, we also save file descriptors: the current
uprobe attach uses an extra perf event fd for each probed function, while
the new link needs just one fd that covers all the functions we are
attaching to.
v7 changes:
- fixed task release on the error path and re-organized the error
path to be more straightforward [Yonghong]
- re-organized uprobe_prog_run locking to follow the general pattern
and removed the might_fault check, as it's not needed in uprobe/task
context [Yonghong]
There's support for bpftrace [2] and tetragon [1].
Also available at:
https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git
uprobe_multi
thanks,
jirka
[1] https://github.com/cilium/tetragon/pull/936
[2] https://github.com/iovisor/bpftrace/compare/master...olsajiri:bpftrace:uprobe_multi
[3] https://lore.kernel.org/bpf/20230628115329.248450-1-laoar.shao@gmail.com/
---
====================
Link: https://lore.kernel.org/r/20230809083440.3209381-1-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
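The attach flow described in the cover letter can be sketched from the
user-space side with libbpf's uprobe_multi API added alongside this link
type. This is only an illustrative sketch: the program, binary path, and
glob pattern are hypothetical, and it assumes a kernel with
BPF_LINK_TYPE_UPROBE_MULTI support.

```c
#include <bpf/libbpf.h>

/* Attach one BPF program to every function in the target binary whose
 * name matches a glob pattern. A single bpf_link (one fd) covers all
 * matched functions, instead of one perf event fd per probed function. */
static struct bpf_link *attach_all(struct bpf_program *prog)
{
	LIBBPF_OPTS(bpf_uprobe_multi_opts, opts,
		.retprobe = false,	/* entry probes, not return probes */
	);

	/* pid -1: all processes; the pattern is resolved against the
	 * ELF symbol table of the given binary (path is hypothetical) */
	return bpf_program__attach_uprobe_multi(prog, -1,
						"/usr/lib/libc.so.6",
						"malloc*", &opts);
}
```

On the BPF side, the same patchset lets programs use the
SEC("uprobe.multi/binary:pattern") section prefix so the skeleton's
auto-attach resolves the pattern itself.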
Diffstat (limited to 'tools/lib/bpf/libbpf_internal.h')
| -rw-r--r-- | tools/lib/bpf/libbpf_internal.h | 21 |
1 file changed, 21 insertions, 0 deletions
diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
index e4d05662a96c..f0f08635adb0 100644
--- a/tools/lib/bpf/libbpf_internal.h
+++ b/tools/lib/bpf/libbpf_internal.h
@@ -15,6 +15,7 @@
 #include <linux/err.h>
 #include <fcntl.h>
 #include <unistd.h>
+#include <libelf.h>
 #include "relo_core.h"
 
 /* make sure libbpf doesn't use kernel-only integer typedefs */
@@ -354,6 +355,8 @@ enum kern_feature_id {
 	FEAT_BTF_ENUM64,
 	/* Kernel uses syscall wrapper (CONFIG_ARCH_HAS_SYSCALL_WRAPPER) */
 	FEAT_SYSCALL_WRAPPER,
+	/* BPF multi-uprobe link support */
+	FEAT_UPROBE_MULTI_LINK,
 	__FEAT_CNT,
 };
 
@@ -577,4 +580,22 @@ static inline bool is_pow_of_2(size_t x)
 #define PROG_LOAD_ATTEMPTS 5
 int sys_bpf_prog_load(union bpf_attr *attr, unsigned int size, int attempts);
 
+bool glob_match(const char *str, const char *pat);
+
+long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name);
+long elf_find_func_offset_from_file(const char *binary_path, const char *name);
+
+struct elf_fd {
+	Elf *elf;
+	int fd;
+};
+
+int elf_open(const char *binary_path, struct elf_fd *elf_fd);
+void elf_close(struct elf_fd *elf_fd);
+
+int elf_resolve_syms_offsets(const char *binary_path, int cnt,
+			     const char **syms, unsigned long **poffsets);
+int elf_resolve_pattern_offsets(const char *binary_path, const char *pattern,
+				unsigned long **poffsets, size_t *pcnt);
+
 #endif /* __LIBBPF_LIBBPF_INTERNAL_H */
