Lines Matching +full:auto +full:- +full:string +full:- +full:detection

1 /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
6 * Copyright (C) 2013-2015 Alexei Starovoitov <ast@kernel.org>
58 * @return Pointer to a static string identifying the attach type. NULL is
67 * @return Pointer to a static string identifying the link type. NULL is
76 * @return Pointer to a static string identifying the map type. NULL is
85 * @return Pointer to a static string identifying the program type. NULL is
100 * @brief **libbpf_set_print()** sets user-provided log callback function to
108 * This function is thread-safe.
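A minimal sketch of installing such a callback (the debug-level filter is an illustrative choice, not libbpf's default; later sketches assume these same includes):

    #include <stdarg.h>
    #include <stdio.h>
    #include <errno.h>
    #include <bpf/libbpf.h>

    static int my_print(enum libbpf_print_level level, const char *fmt, va_list args)
    {
        if (level == LIBBPF_DEBUG)
            return 0;                       /* suppress debug-level messages */
        return vfprintf(stderr, fmt, args);
    }

    /* returns the previously set callback, so it can be restored later */
    libbpf_print_fn_t prev = libbpf_set_print(my_print);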
119 * - for object open from file, this will override setting object
121 * - for object open from memory buffer, this will specify an object
122 * name and will override default "<addr>-<buf-size>" name;
125 /* parse map definitions non-strictly, allowing extra attributes/data */
129 * auto-pinned to that path on load; defaults to "/sys/fs/bpf".
139 /* Path to the custom BTF to be used for BPF CO-RE relocations.
141 * for the purpose of CO-RE relocations.
148 * passed through to the bpf() syscall. Keep in mind that the kernel might
149 * fail the operation with -ENOSPC error if the provided buffer is too small
155 * - each BPF program load (BPF_PROG_LOAD) attempt, unless overridden
156 * with bpf_program__set_log() on per-program level, to get
158 * - during BPF object's BTF load into kernel (BPF_BTF_LOAD) to get
162 * previous contents, so if you need more fine-grained control, set
163 * per-program buffer with bpf_program__set_log_buf() to preserve each
177 * could be either libbpf's own auto-allocated log buffer, if
178 * kernel_log_buf is NULL, or a user-provided custom kernel_log_buf.
194 * string. bpf_token_path overrides LIBBPF_BPF_TOKEN_PATH, if both are
197 * Setting bpf_token_path option to an empty string disables libbpf's
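Tying the open options above together in one sketch (all values here are placeholders, not defaults):

    char log_buf[64 * 1024];

    LIBBPF_OPTS(bpf_object_open_opts, opts,
        .object_name = "my_obj",              /* overrides ELF-derived name */
        .pin_root_path = "/sys/fs/bpf/myapp", /* root for auto-pinned maps */
        .kernel_log_buf = log_buf,
        .kernel_log_size = sizeof(log_buf),
        .kernel_log_level = 1,                /* request verifier log on load */
        .bpf_token_path = "/sys/fs/bpf",      /* "" would disable token usage */
    );

    struct bpf_object *obj = bpf_object__open_file("prog.bpf.o", &opts);
    if (!obj)
        fprintf(stderr, "open failed: %d\n", -errno);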
314 * @return BPF token FD or -1 if it wasn't set
361 * @brief **bpf_program__insns()** gives read-only access to BPF program's
375 * instructions will be CO-RE-relocated, BPF subprograms instructions will be
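A read-only inspection sketch; prog is assumed to come from a skeleton or bpf_object__find_program_by_name():

    const struct bpf_insn *insns = bpf_program__insns(prog);
    size_t cnt = bpf_program__insn_cnt(prog);

    /* before load these are the original instructions; after load they
     * reflect the CO-RE relocation and subprog handling described above */
    for (size_t i = 0; i < cnt; i++)
        printf("insn %zu: code=0x%02x\n", i, insns[i].code);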
478 * a BPF program based on auto-detection of program type, attach type,
486 * - kprobe/kretprobe (depends on SEC() definition)
487 * - uprobe/uretprobe (depends on SEC() definition)
488 * - tracepoint
489 * - raw tracepoint
490 * - tracing programs (typed raw TP/fentry/fexit/fmod_ret)
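For the SEC() types listed above, generic attach needs no extra parameters; a sketch:

    struct bpf_link *link = bpf_program__attach(prog);
    if (!link) {
        fprintf(stderr, "auto-attach failed: %d\n", -errno);
        /* fall back to a type-specific attach API if needed */
    }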
498 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
514 * enum probe_attach_mode - the mode to attach kprobe/uprobe
516 * force libbpf to attach kprobe/uprobe in a specific mode, -ENOTSUP will
533 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
560 /* array of user-provided values fetchable through bpf_get_attach_cookie */
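A single-kprobe sketch carrying a cookie (target function and cookie value are arbitrary); bpf_program__attach_kprobe_multi_opts() takes the per-symbol cookie array instead:

    LIBBPF_OPTS(bpf_kprobe_opts, opts,
        .bpf_cookie = 0x1234, /* read in BPF code via bpf_get_attach_cookie() */
        .attach_mode = PROBE_ATTACH_MODE_DEFAULT,
    );

    struct bpf_link *link = bpf_program__attach_kprobe_opts(prog, "do_unlinkat", &opts);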
613 * - syms and offsets are mutually exclusive
614 * - ref_ctr_offsets and cookies are optional
619 * -1 for all processes
636 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
661 * system supports compat syscalls or defines 32-bit syscalls in 64-bit
666 * compat and 32-bit interfaces is required.
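Because of those arch quirks, syscall attachment is easiest through bpf_program__attach_ksyscall(), which resolves the proper symbol; a sketch:

    LIBBPF_OPTS(bpf_ksyscall_opts, opts, .bpf_cookie = 0x42);

    /* plain syscall name, no "sys_" or arch-specific prefix */
    struct bpf_link *link = bpf_program__attach_ksyscall(prog, "unlinkat", &opts);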
683 * a6ca88b241d5 ("trace_uprobe: support reference counter in fd-based uprobe")
686 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
690 /* Function name to attach to. Could be an unqualified ("abc") or library-qualified
714 * -1 for all processes
732 * -1 for all processes
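An attach-by-name uprobe sketch (binary path and symbol are placeholders):

    LIBBPF_OPTS(bpf_uprobe_opts, opts,
        .func_name = "malloc", /* resolved inside the target binary */
        .bpf_cookie = 0x42,
    );

    /* pid -1 traces all processes; func_offset 0 since func_name is set */
    struct bpf_link *link = bpf_program__attach_uprobe_opts(prog, -1, "libc.so.6", 0, &opts);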
747 /* custom user-provided value accessible through usdt_cookie() */
755 * bpf_program__attach_uprobe_opts() except it covers USDT (User-space
757 * user-space function entry or exit.
761 * -1 for all processes
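A USDT sketch with placeholder provider/probe names:

    LIBBPF_OPTS(bpf_usdt_opts, opts, .usdt_cookie = 0xcafe);

    struct bpf_link *link = bpf_program__attach_usdt(prog, -1 /* any process */,
                                                     "/usr/bin/myapp",
                                                     "myprovider", "myprobe", &opts);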
778 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
811 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
918 * auto-detection of attachment when programs are loaded.
934 /* Per-program log level and log buffer getters/setters.
944 * @brief **bpf_program__set_attach_target()** sets BTF-based attach target
946 * - BTF-aware raw tracepoints (tp_btf);
947 * - fentry/fexit/fmod_ret;
948 * - lsm;
949 * - freplace.
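For the program types listed above, the target must be set after open but before load; a fentry sketch with a placeholder kernel function:

    /* attach_prog_fd of 0 means "kernel function, resolved via BTF" */
    int err = bpf_program__set_attach_target(prog, 0, "tcp_v4_connect");
    if (err)
        fprintf(stderr, "failed to set attach target: %d\n", err);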
985 * @brief **bpf_map__set_autocreate()** sets whether libbpf has to auto-create
989 * @return 0 on success; -EBUSY if BPF object was already loaded
991 * **bpf_map__set_autocreate()** allows opting out of libbpf auto-creating
996 * This API allows opting out of this process for a specific map instance. This
1000 * BPF-side code that expects to use such missing BPF map is recognized by BPF
1007 * @brief **bpf_map__set_autoattach()** sets whether libbpf has to auto-attach
1017 * auto-attach during BPF skeleton attach phase.
1019 * @return true if map is set to auto-attach during skeleton attach phase; false otherwise
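A sketch that opts an optional map out of creation (the map name is a placeholder):

    struct bpf_map *map = bpf_object__find_map_by_name(obj, "optional_map");

    /* must run after open but before load; returns -EBUSY afterwards */
    if (map)
        bpf_map__set_autocreate(map, false);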
1027 * @return the file descriptor, or -EINVAL in case of an error
1055 * There is a special case for maps with associated memory-mapped regions, like
1058 * adjust the corresponding BTF info. This attempt is best-effort and can only
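A pre-load resize sketch for such a map (the new size is arbitrary):

    /* for .data/.rodata/.bss maps libbpf also makes the best-effort
     * BTF adjustment described above */
    int err = bpf_map__set_value_size(map, 8192);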
1101 * @return The path string, which can be NULL
1151 * definition's **value_size**. For per-CPU BPF maps, value size has to be
1154 * per-CPU values, value size has to be aligned up to the closest 8 bytes for
1160 * **bpf_map__lookup_elem()** is high-level equivalent of
1175 * definition's **value_size**. For per-CPU BPF maps, value size has to be
1178 * per-CPU values, value size has to be aligned up to the closest 8 bytes for
1184 * **bpf_map__update_elem()** is high-level equivalent of
1200 * **bpf_map__delete_elem()** is high-level equivalent of
1214 * definition's **value_size**. For per-CPU BPF maps, value size has to be
1217 * per-CPU values, value size has to be aligned up to the closest 8 bytes for
1223 * **bpf_map__lookup_and_delete_elem()** is high-level equivalent of
1238 * @return 0 on success; -ENOENT if **cur_key** is the last key in BPF map;
1241 * **bpf_map__get_next_key()** is high-level equivalent of
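A sketch of this high-level element API; for per-CPU maps the value buffer would instead hold roundup(value_size, 8) * libbpf_num_possible_cpus() bytes, per the sizing rules above:

    __u32 key = 1;
    __u64 value = 100;
    int err;

    err = bpf_map__update_elem(map, &key, sizeof(key), &value, sizeof(value), BPF_ANY);
    if (!err)
        err = bpf_map__lookup_elem(map, &key, sizeof(key), &value, sizeof(value), 0);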
1354 * manager object. The index is 0-based and corresponds to the order in which
1384 * should still show the correct trend over the long term.
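A consumer-side sketch combining the manager, 0-based ring access, and polling; map_fd is assumed to be the FD of a BPF_MAP_TYPE_RINGBUF map:

    static int handle_event(void *ctx, void *data, size_t size)
    {
        return 0; /* non-zero would stop consumption */
    }

    struct ring_buffer *rb = ring_buffer__new(map_fd, handle_event, NULL, NULL);
    struct ring *r = ring_buffer__ring(rb, 0); /* first registered ringbuf */

    while (ring_buffer__poll(rb, 100 /* ms */) >= 0)
        ;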
1454 * @return A pointer to an 8-byte aligned reserved region of the user ring
1477 * should block when waiting for a sample. -1 causes the caller to block
1479 * @return A pointer to an 8-byte aligned reserved region of the user ring
1485 * If **timeout_ms** is -1, the function will block indefinitely until a sample
1486 * becomes available. Otherwise, **timeout_ms** must be non-negative, or errno
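A producer-side sketch (map_fd is assumed to be a BPF_MAP_TYPE_USER_RINGBUF map FD; the sample layout is hypothetical):

    struct user_ring_buffer *urb = user_ring_buffer__new(map_fd, NULL);

    /* waits up to 500 ms for free space; -1 would block indefinitely */
    __u64 *sample = user_ring_buffer__reserve_blocking(urb, sizeof(*sample), 500);
    if (sample) {
        *sample = 42;
        user_ring_buffer__submit(urb, sample);
    }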
1562 * code to send data over to user space
1563 * @param page_cnt number of memory pages allocated for each per-CPU buffer
1566 * @param ctx user-provided extra context passed into *sample_cb* and *lost_cb*
1577 LIBBPF_PERF_EVENT_ERROR = -1,
1578 LIBBPF_PERF_EVENT_CONT = -2,
1597 /* if cpu_cnt > 0, map_keys specify map keys to set per-CPU FDs for */
1617 * @brief **perf_buffer__buffer()** returns the per-CPU raw mmap()'ed underlying
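A perf buffer sketch; map_fd is assumed to be a BPF_MAP_TYPE_PERF_EVENT_ARRAY FD, and page_cnt must be a power of two:

    static void on_sample(void *ctx, int cpu, void *data, __u32 size)
    {
        /* one event read from that CPU's buffer */
    }

    static void on_lost(void *ctx, int cpu, __u64 cnt)
    {
        fprintf(stderr, "lost %llu events on CPU %d\n", (unsigned long long)cnt, cpu);
    }

    struct perf_buffer *pb = perf_buffer__new(map_fd, 8 /* pages per CPU */,
                                              on_sample, on_lost, NULL, NULL);
    while (perf_buffer__poll(pb, 100 /* ms */) >= 0)
        ;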
1661 * not supported; negative error code if feature detection failed or can't be
1674 * not supported; negative error code if feature detection failed or can't be
1689 * detection for provided input arguments failed or can't be performed
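A probing sketch following that return convention (1 = supported, 0 = not supported, <0 = detection error):

    int ret = libbpf_probe_bpf_prog_type(BPF_PROG_TYPE_KPROBE, NULL);
    if (ret < 0)
        fprintf(stderr, "feature detection failed: %d\n", ret);
    else if (!ret)
        fprintf(stderr, "kprobe programs not supported\n");

    /* map types and helpers are probed the same way */
    ret = libbpf_probe_bpf_map_type(BPF_MAP_TYPE_RINGBUF, NULL);
    ret = libbpf_probe_bpf_helper(BPF_PROG_TYPE_KPROBE, BPF_FUNC_get_attach_cookie, NULL);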
1844 * auto-attach is not supported, callback should return 0 and set link to
1856 /* User-provided value that is passed to prog_setup_fn,
1886 * @return Non-negative handler ID on success. This handler ID has
1892 * - if *sec* is just a plain string (e.g., "abc"), it will match only
1895 * - if *sec* is of the form "abc/", proper SEC() form is
1898 * - if *sec* is of the form "abc+", it will successfully match both
1900 * - if *sec* is NULL, custom handler is registered for any BPF program that
1908 * (i.e., it's possible to have custom SEC("perf_event/LLC-load-misses")
1912 * libbpf_set_strict_mode(), etc.) these APIs are not thread-safe. User needs
1928 * libbpf_set_strict_mode(), etc.) these APIs are not thread-safe. User needs
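A registration sketch (section prefix and cookie are arbitrary; per the notes above, register from a single thread before other libbpf usage):

    static int my_setup(struct bpf_program *prog, long cookie)
    {
        /* invoked during open for each program matching the section */
        return 0;
    }

    LIBBPF_OPTS(libbpf_prog_handler_opts, opts,
        .prog_setup_fn = my_setup,
        .cookie = 7,
    );

    /* "myapp+" matches both SEC("myapp") and SEC("myapp/something") */
    int id = libbpf_register_prog_handler("myapp+", BPF_PROG_TYPE_KPROBE, 0, &opts);
    /* ... open/load/attach BPF objects as usual ... */
    libbpf_unregister_prog_handler(id);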