Lines Matching +full:auto +full:- +full:string +full:- +full:detection
1 /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
6 * Copyright (C) 2013-2015 Alexei Starovoitov <ast@kernel.org>
58 * @return Pointer to a static string identifying the attach type. NULL is
67 * @return Pointer to a static string identifying the link type. NULL is
76 * @return Pointer to a static string identifying the map type. NULL is
85 * @return Pointer to a static string identifying the program type. NULL is
100 * @brief **libbpf_set_print()** sets user-provided log callback function to
105 * This function is thread-safe.
116 * - for object open from file, this will override setting object
118 * - for object open from memory buffer, this will specify an object
119 * name and will override default "<addr>-<buf-size>" name;
122 /* parse map definitions non-strictly, allowing extra attributes/data */
126 * auto-pinned to that path on load; defaults to "/sys/fs/bpf".
136 /* Path to the custom BTF to be used for BPF CO-RE relocations.
138 * for the purpose of CO-RE relocations.
145 * passed through to the bpf() syscall. Keep in mind that the kernel might
146 * fail the operation with -ENOSPC if the provided buffer is too small
152 * - each BPF program load (BPF_PROG_LOAD) attempt, unless overridden
153 * with bpf_program__set_log() on per-program level, to get
155 * - during BPF object's BTF load into kernel (BPF_BTF_LOAD) to get
159 * previous contents, so if you need more fine-grained control, set
160 * per-program buffer with bpf_program__set_log_buf() to preserve each
174 * could be either libbpf's own auto-allocated log buffer, if
175 * kernel_log_buf is NULL, or user-provided custom kernel_log_buf.
318 * @brief **bpf_program__insns()** gives read-only access to BPF program's
332 * instructions will be CO-RE-relocated, BPF subprograms instructions will be
435 * a BPF program based on auto-detection of program type, attach type,
443 * - kprobe/kretprobe (depends on SEC() definition)
444 * - uprobe/uretprobe (depends on SEC() definition)
445 * - tracepoint
446 * - raw tracepoint
447 * - tracing programs (typed raw TP/fentry/fexit/fmod_ret)
455 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
471 * enum probe_attach_mode - the mode to attach kprobe/uprobe
473 * force libbpf to attach kprobe/uprobe in a specific mode; -ENOTSUP will
490 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
517 /* array of user-provided values fetchable through bpf_get_attach_cookie */
564 * - syms and offsets are mutually exclusive
565 * - ref_ctr_offsets and cookies are optional
570 * -1 for all processes
587 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
612 * system supports compat syscalls or defines 32-bit syscalls in 64-bit
617 * compat and 32-bit interfaces is required.
634 * a6ca88b241d5 ("trace_uprobe: support reference counter in fd-based uprobe")
637 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
641 /* Function name to attach to. Could be an unqualified ("abc") or library-qualified
665 * -1 for all processes
683 * -1 for all processes
698 /* custom user-provided value accessible through usdt_cookie() */
706 * bpf_program__attach_uprobe_opts() except it covers USDT (User-space
708 * user-space function entry or exit.
712 * -1 for all processes
729 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
751 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
856 * auto-detection of attachment when programs are loaded.
872 /* Per-program log level and log buffer getters/setters.
882 * @brief **bpf_program__set_attach_target()** sets BTF-based attach target
884 * - BTF-aware raw tracepoints (tp_btf);
885 * - fentry/fexit/fmod_ret;
886 * - lsm;
887 * - freplace.
923 * @brief **bpf_map__set_autocreate()** sets whether libbpf has to auto-create
927 * @return 0 on success; -EBUSY if BPF object was already loaded
929 * **bpf_map__set_autocreate()** allows the caller to opt out of libbpf auto-creating
934 * This API allows opting out of this process for a specific map instance. This
938 * BPF-side code that expects to use such a missing BPF map is recognized by BPF
948 * @return the file descriptor; or -EINVAL in case of an error
976 * There is a special case for maps with associated memory-mapped regions, like
979 * adjust the corresponding BTF info. This attempt is best-effort and can only
1022 * @return The path string, which can be NULL
1072 * definition's **value_size**. For per-CPU BPF maps value size has to be
1075 * per-CPU values, value size has to be aligned up to the closest 8 bytes for
1081 * **bpf_map__lookup_elem()** is a high-level equivalent of
1096 * definition's **value_size**. For per-CPU BPF maps value size has to be
1099 * per-CPU values, value size has to be aligned up to the closest 8 bytes for
1105 * **bpf_map__update_elem()** is a high-level equivalent of
1121 * **bpf_map__delete_elem()** is a high-level equivalent of
1135 * definition's **value_size**. For per-CPU BPF maps value size has to be
1138 * per-CPU values, value size has to be aligned up to the closest 8 bytes for
1144 * **bpf_map__lookup_and_delete_elem()** is a high-level equivalent of
1159 * @return 0 on success; -ENOENT if **cur_key** is the last key in BPF map;
1162 * **bpf_map__get_next_key()** is a high-level equivalent of
1274 * manager object. The index is 0-based and corresponds to the order in which
1304 * should still show the correct trend over the long term.
1363 * @return A pointer to an 8-byte aligned reserved region of the user ring
1386 * should block when waiting for a sample. -1 causes the caller to block
1388 * @return A pointer to an 8-byte aligned reserved region of the user ring
1394 * If **timeout_ms** is -1, the function will block indefinitely until a sample
1395 * becomes available. Otherwise, **timeout_ms** must be non-negative, or errno
1471 * code to send data over to user-space
1472 * @param page_cnt number of memory pages allocated for each per-CPU buffer
1475 * @param ctx user-provided extra context passed into *sample_cb* and *lost_cb*
1486 LIBBPF_PERF_EVENT_ERROR = -1,
1487 LIBBPF_PERF_EVENT_CONT = -2,
1506 /* if cpu_cnt > 0, map_keys specify map keys to set per-CPU FDs for */
1526 * @brief **perf_buffer__buffer()** returns the per-CPU raw mmap()'ed underlying
1570 * not supported; negative error code if feature detection failed or can't be
1583 * not supported; negative error code if feature detection failed or can't be
1598 * detection for provided input arguments failed or can't be performed
1747 * auto-attach is not supported, callback should return 0 and set link to
1759 /* User-provided value that is passed to prog_setup_fn,
1789 * @return A non-negative handler ID is returned on success. This handler ID has
1795 * - if *sec* is just a plain string (e.g., "abc"), it will match only
1798 * - if *sec* is of the form "abc/", proper SEC() form is
1801 * - if *sec* is of the form "abc+", it will successfully match both
1803 * - if *sec* is NULL, custom handler is registered for any BPF program that
1811 * (i.e., it's possible to have custom SEC("perf_event/LLC-load-misses")
1815 * libbpf_set_strict_mode(), etc)) these APIs are not thread-safe. User needs
1831 * libbpf_set_strict_mode(), etc)) these APIs are not thread-safe. User needs