Compare commits


1 Commit

Author: thiagoftsm
SHA1: 3890b7fa29
Message: Add patch to run on Debian 10
Date: 2025-05-20 17:32:59 +00:00
29 changed files with 422 additions and 891 deletions

View File

@@ -10,7 +10,6 @@ Herbert Xu <herbert@gondor.apana.org.au>
Jakub Kicinski <kuba@kernel.org> <jakub.kicinski@netronome.com>
Jesper Dangaard Brouer <hawk@kernel.org> <brouer@redhat.com>
Kees Cook <kees@kernel.org> <keescook@chromium.org>
Kuniyuki Iwashima <kuniyu@google.com> <kuniyu@amazon.co.jp>
Leo Yan <leo.yan@linux.dev> <leo.yan@linaro.org>
Mark Starovoytov <mstarovo@pm.me> <mstarovoitov@marvell.com>
Maxim Mikityanskiy <maxtram95@gmail.com> <maximmi@mellanox.com>

View File

@@ -1 +1 @@
27861fc720be2c39b861d8bdfb68287f54de6855
b4432656b36e5cc1d50a1f2dc15357543add530e

View File

@@ -1 +1 @@
21aeabb68258ce17b91af113a768760b3a491d93
9325d53fe9adff354b6a93fda5f38c165947da0f

View File

@@ -1,58 +0,0 @@
# About fuzzing
Fuzzing is done by [OSS-Fuzz](https://google.github.io/oss-fuzz/).
It works by building a project-specific binary that combines the fuzzing
engine with a project-provided entry point named
`LLVMFuzzerTestOneInput()`. When invoked, this executable either
searches for new test cases or runs an existing test case.
The file `fuzz/bpf-object-fuzzer.c` defines the entry point for the fuzzer:
- the fuzzer supplies bytes that are supposed to form an ELF file;
- the wrapper invokes `bpf_object__open_mem()` to process these bytes.
The file `scripts/build-fuzzers.sh` tells the fuzzing infrastructure how
to build the executable described above (see
[here](https://github.com/google/oss-fuzz/tree/master/projects/libbpf)).
# Reproducing fuzzing errors
## Official way
The OSS-Fuzz project describes error reproduction steps in its official
[documentation](https://google.github.io/oss-fuzz/advanced-topics/reproducing/).
Suppose you received an email linking to a fuzzer-generated test
case, or got one as an artifact of the `CIFuzz` job (e.g. as in
[here](https://github.com/libbpf/libbpf/actions/runs/16375110681)).
To reproduce the error locally:
```sh
git clone --depth=1 https://github.com/google/oss-fuzz.git
cd oss-fuzz
python infra/helper.py pull_images
python infra/helper.py build_image libbpf
python infra/helper.py build_fuzzers --sanitizer address libbpf <path-to-libbpf-checkout>
python infra/helper.py reproduce libbpf bpf-object-fuzzer <path-to-test-case>
```
`<path-to-test-case>` is usually a `crash-<many-hex-digits>` file without
an extension; the CI job wraps it in a zip archive and attaches it as an
artifact.
To recompile after making fixes, modify the source code in
`<path-to-libbpf-checkout>` and repeat the `build_fuzzers` and
`reproduce` steps.
Note: the `build_fuzzers` step creates the binary
`build/out/libbpf/bpf-object-fuzzer`; it can be executed directly if
your environment is compatible.
## Simple way
From the project root:
```sh
SKIP_LIBELF_REBUILD=1 scripts/build-fuzzers.sh
out/bpf-object-fuzzer <path-to-test-case>
```
`out/bpf-object-fuzzer` is the fuzzer executable described earlier; it can
be run directly, under gdb, etc.

View File

@@ -5,8 +5,8 @@
* modify it under the terms of version 2 of the GNU General Public
* License as published by the Free Software Foundation.
*/
#ifndef __LINUX_BPF_H__
#define __LINUX_BPF_H__
#ifndef _UAPI__LINUX_BPF_H__
#define _UAPI__LINUX_BPF_H__
#include <linux/types.h>
#include <linux/bpf_common.h>
@@ -450,7 +450,6 @@ union bpf_iter_link_info {
* * **struct bpf_map_info**
* * **struct bpf_btf_info**
* * **struct bpf_link_info**
* * **struct bpf_token_info**
*
* Return
* Returns zero on success. On error, -1 is returned and *errno*
@@ -907,17 +906,6 @@ union bpf_iter_link_info {
* A new file descriptor (a nonnegative integer), or -1 if an
* error occurred (in which case, *errno* is set appropriately).
*
* BPF_PROG_STREAM_READ_BY_FD
* Description
* Read data of a program's BPF stream. The program is identified
* by *prog_fd*, and the stream is identified by the *stream_id*.
* The data is copied to a buffer pointed to by *stream_buf*, and
* filled less than or equal to *stream_buf_len* bytes.
*
* Return
* Number of bytes read from the stream on success, or -1 if an
* error occurred (in which case, *errno* is set appropriately).
*
* NOTES
* eBPF objects (maps and programs) can be shared between processes.
*
@@ -973,7 +961,6 @@ enum bpf_cmd {
BPF_LINK_DETACH,
BPF_PROG_BIND_MAP,
BPF_TOKEN_CREATE,
BPF_PROG_STREAM_READ_BY_FD,
__MAX_BPF_CMD,
};
@@ -1476,11 +1463,6 @@ struct bpf_stack_build_id {
#define BPF_OBJ_NAME_LEN 16U
enum {
BPF_STREAM_STDOUT = 1,
BPF_STREAM_STDERR = 2,
};
union bpf_attr {
struct { /* anonymous struct used by BPF_MAP_CREATE command */
__u32 map_type; /* one of enum bpf_map_type */
@@ -1812,13 +1794,6 @@ union bpf_attr {
};
__u64 expected_revision;
} netkit;
struct {
union {
__u32 relative_fd;
__u32 relative_id;
};
__u64 expected_revision;
} cgroup;
};
} link_create;
@@ -1867,13 +1842,6 @@ union bpf_attr {
__u32 bpffs_fd;
} token_create;
struct {
__aligned_u64 stream_buf;
__u32 stream_buf_len;
__u32 stream_id;
__u32 prog_fd;
} prog_stream_read;
} __attribute__((aligned(8)));
/* The description below is an attempt at providing documentation to eBPF
@@ -2088,7 +2056,6 @@ union bpf_attr {
* for updates resulting in a null checksum the value is set to
* **CSUM_MANGLED_0** instead. Flag **BPF_F_PSEUDO_HDR** indicates
* that the modified header field is part of the pseudo-header.
* Flag **BPF_F_IPV6** should be set for IPv6 packets.
*
* This helper works in combination with **bpf_csum_diff**\ (),
* which does not update the checksum in-place, but offers more
@@ -2435,7 +2402,7 @@ union bpf_attr {
* into it. An example is available in file
* *samples/bpf/trace_output_user.c* in the Linux kernel source
* tree (the eBPF program counterpart is in
* *samples/bpf/trace_output.bpf.c*).
* *samples/bpf/trace_output_kern.c*).
*
* **bpf_perf_event_output**\ () achieves better performance
* than **bpf_trace_printk**\ () for sharing data with user
@@ -5005,9 +4972,6 @@ union bpf_attr {
* the netns switch takes place from ingress to ingress without
* going through the CPU's backlog queue.
*
* *skb*\ **->mark** and *skb*\ **->tstamp** are not cleared during
* the netns switch.
*
* The *flags* argument is reserved and must be 0. The helper is
* currently only supported for tc BPF program types at the
* ingress hook and for veth and netkit target device types. The
@@ -6105,7 +6069,6 @@ enum {
BPF_F_PSEUDO_HDR = (1ULL << 4),
BPF_F_MARK_MANGLED_0 = (1ULL << 5),
BPF_F_MARK_ENFORCE = (1ULL << 6),
BPF_F_IPV6 = (1ULL << 7),
};
/* BPF_FUNC_skb_set_tunnel_key and BPF_FUNC_skb_get_tunnel_key flags. */
@@ -6685,15 +6648,11 @@ struct bpf_link_info {
struct {
__aligned_u64 tp_name; /* in/out: tp_name buffer ptr */
__u32 tp_name_len; /* in/out: tp_name buffer len */
__u32 :32;
__u64 cookie;
} raw_tracepoint;
struct {
__u32 attach_type;
__u32 target_obj_id; /* prog_id for PROG_EXT, otherwise btf object id */
__u32 target_btf_id; /* BTF type id inside the object */
__u32 :32;
__u64 cookie;
} tracing;
struct {
__u64 cgroup_id;
@@ -6804,13 +6763,6 @@ struct bpf_link_info {
};
} __attribute__((aligned(8)));
struct bpf_token_info {
__u64 allowed_cmds;
__u64 allowed_maps;
__u64 allowed_progs;
__u64 allowed_attachs;
} __attribute__((aligned(8)));
/* User bpf_sock_addr struct to access socket fields and sockaddr struct passed
* by user and intended to be used by socket (e.g. to bind to, depends on
* attach type).
@@ -7623,4 +7575,4 @@ enum bpf_kfunc_flags {
BPF_F_PAD_ZEROS = (1ULL << 0),
};
#endif /* __LINUX_BPF_H__ */
#endif /* _UAPI__LINUX_BPF_H__ */

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
#ifndef __LINUX_BPF_COMMON_H__
#define __LINUX_BPF_COMMON_H__
#ifndef _UAPI__LINUX_BPF_COMMON_H__
#define _UAPI__LINUX_BPF_COMMON_H__
/* Instruction classes */
#define BPF_CLASS(code) ((code) & 0x07)
@@ -54,4 +54,4 @@
#define BPF_MAXINSNS 4096
#endif
#endif /* __LINUX_BPF_COMMON_H__ */
#endif /* _UAPI__LINUX_BPF_COMMON_H__ */

View File

@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/* Copyright (c) 2018 Facebook */
#ifndef __LINUX_BTF_H__
#define __LINUX_BTF_H__
#ifndef _UAPI__LINUX_BTF_H__
#define _UAPI__LINUX_BTF_H__
#include <linux/types.h>
@@ -198,4 +198,4 @@ struct btf_enum64 {
__u32 val_hi32;
};
#endif /* __LINUX_BTF_H__ */
#endif /* _UAPI__LINUX_BTF_H__ */

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
#ifndef _LINUX_IF_LINK_H
#define _LINUX_IF_LINK_H
#ifndef _UAPI_LINUX_IF_LINK_H
#define _UAPI_LINUX_IF_LINK_H
#include <linux/types.h>
#include <linux/netlink.h>
@@ -1977,4 +1977,4 @@ enum {
#define IFLA_DSA_MAX (__IFLA_DSA_MAX - 1)
#endif /* _LINUX_IF_LINK_H */
#endif /* _UAPI_LINUX_IF_LINK_H */

View File

@@ -79,7 +79,6 @@ struct xdp_mmap_offsets {
#define XDP_UMEM_COMPLETION_RING 6
#define XDP_STATISTICS 7
#define XDP_OPTIONS 8
#define XDP_MAX_TX_SKB_BUDGET 9
struct xdp_umem_reg {
__u64 addr; /* Start of packet data area */

View File

@@ -3,8 +3,8 @@
/* Documentation/netlink/specs/netdev.yaml */
/* YNL-GEN uapi header */
#ifndef _LINUX_NETDEV_H
#define _LINUX_NETDEV_H
#ifndef _UAPI_LINUX_NETDEV_H
#define _UAPI_LINUX_NETDEV_H
#define NETDEV_FAMILY_NAME "netdev"
#define NETDEV_FAMILY_VERSION 1
@@ -77,11 +77,6 @@ enum netdev_qstats_scope {
NETDEV_QSTATS_SCOPE_QUEUE = 1,
};
enum netdev_napi_threaded {
NETDEV_NAPI_THREADED_DISABLED,
NETDEV_NAPI_THREADED_ENABLED,
};
enum {
NETDEV_A_DEV_IFINDEX = 1,
NETDEV_A_DEV_PAD,
@@ -139,7 +134,6 @@ enum {
NETDEV_A_NAPI_DEFER_HARD_IRQS,
NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT,
NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT,
NETDEV_A_NAPI_THREADED,
__NETDEV_A_NAPI_MAX,
NETDEV_A_NAPI_MAX = (__NETDEV_A_NAPI_MAX - 1)
@@ -225,7 +219,6 @@ enum {
NETDEV_CMD_QSTATS_GET,
NETDEV_CMD_BIND_RX,
NETDEV_CMD_NAPI_SET,
NETDEV_CMD_BIND_TX,
__NETDEV_CMD_MAX,
NETDEV_CMD_MAX = (__NETDEV_CMD_MAX - 1)
@@ -234,4 +227,4 @@ enum {
#define NETDEV_MCGRP_MGMT "mgmt"
#define NETDEV_MCGRP_PAGE_POOL "page-pool"
#endif /* _LINUX_NETDEV_H */
#endif /* _UAPI_LINUX_NETDEV_H */

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
#ifndef __LINUX_NETLINK_H
#define __LINUX_NETLINK_H
#ifndef _UAPI__LINUX_NETLINK_H
#define _UAPI__LINUX_NETLINK_H
#include <linux/kernel.h>
#include <linux/socket.h> /* for __kernel_sa_family_t */
@@ -249,4 +249,4 @@ struct nla_bitfield32 {
__u32 selector;
};
#endif /* __LINUX_NETLINK_H */
#endif /* _UAPI__LINUX_NETLINK_H */

File diff suppressed because it is too large

View File

@@ -35,48 +35,44 @@ if [[ "$SANITIZER" == undefined ]]; then
CXXFLAGS+=" $UBSAN_FLAGS"
fi
export SKIP_LIBELF_REBUILD=${SKIP_LIBELF_REBUILD:=''}
# Ideally libelf should be built using release tarballs available
# at https://sourceware.org/elfutils/ftp/. Unfortunately sometimes they
# fail to compile (for example, elfutils-0.185 fails to compile with LDFLAGS enabled
# due to https://bugs.gentoo.org/794601) so let's just point the script to
# commits referring to versions of libelf that actually can be built
if [[ ! -e elfutils || "$SKIP_LIBELF_REBUILD" == "" ]]; then
rm -rf elfutils
git clone https://sourceware.org/git/elfutils.git
(
cd elfutils
git checkout 67a187d4c1790058fc7fd218317851cb68bb087c
git log --oneline -1
rm -rf elfutils
git clone https://sourceware.org/git/elfutils.git
(
cd elfutils
git checkout 67a187d4c1790058fc7fd218317851cb68bb087c
git log --oneline -1
# ASan isn't compatible with -Wl,--no-undefined: https://github.com/google/sanitizers/issues/380
sed -i 's/^\(NO_UNDEFINED=\).*/\1/' configure.ac
# ASan isn't compatible with -Wl,--no-undefined: https://github.com/google/sanitizers/issues/380
sed -i 's/^\(NO_UNDEFINED=\).*/\1/' configure.ac
# ASan isn't compatible with -Wl,-z,defs either:
# https://clang.llvm.org/docs/AddressSanitizer.html#usage
sed -i 's/^\(ZDEFS_LDFLAGS=\).*/\1/' configure.ac
# ASan isn't compatible with -Wl,-z,defs either:
# https://clang.llvm.org/docs/AddressSanitizer.html#usage
sed -i 's/^\(ZDEFS_LDFLAGS=\).*/\1/' configure.ac
if [[ "$SANITIZER" == undefined ]]; then
# That's basically what --enable-sanitize-undefined does to turn off the unaligned accesses
# elfutils relies on heavily on i386/x86_64, but without changing compiler flags along the way
sed -i 's/\(check_undefined_val\)=[0-9]/\1=1/' configure.ac
fi
autoreconf -i -f
if ! ./configure --enable-maintainer-mode --disable-debuginfod --disable-libdebuginfod \
--disable-demangler --without-bzlib --without-lzma --without-zstd \
CC="$CC" CFLAGS="-Wno-error $CFLAGS" CXX="$CXX" CXXFLAGS="-Wno-error $CXXFLAGS" LDFLAGS="$CFLAGS"; then
cat config.log
exit 1
fi
make -C config -j$(nproc) V=1
make -C lib -j$(nproc) V=1
make -C libelf -j$(nproc) V=1
)
if [[ "$SANITIZER" == undefined ]]; then
# That's basically what --enable-sanitize-undefined does to turn off the unaligned accesses
# elfutils relies on heavily on i386/x86_64, but without changing compiler flags along the way
sed -i 's/\(check_undefined_val\)=[0-9]/\1=1/' configure.ac
fi
autoreconf -i -f
if ! ./configure --enable-maintainer-mode --disable-debuginfod --disable-libdebuginfod \
--disable-demangler --without-bzlib --without-lzma --without-zstd \
CC="$CC" CFLAGS="-Wno-error $CFLAGS" CXX="$CXX" CXXFLAGS="-Wno-error $CXXFLAGS" LDFLAGS="$CFLAGS"; then
cat config.log
exit 1
fi
make -C config -j$(nproc) V=1
make -C lib -j$(nproc) V=1
make -C libelf -j$(nproc) V=1
)
make -C src BUILD_STATIC_ONLY=y V=1 clean
make -C src -j$(nproc) CFLAGS="-I$(pwd)/elfutils/libelf $CFLAGS" BUILD_STATIC_ONLY=y V=1

View File

@@ -63,7 +63,6 @@ LIBBPF_TREE_FILTER="mkdir -p __libbpf/include/uapi/linux __libbpf/include/tools
for p in "${!PATH_MAP[@]}"; do
LIBBPF_TREE_FILTER+="git mv -kf ${p} __libbpf/${PATH_MAP[${p}]} && "$'\\\n'
done
LIBBPF_TREE_FILTER+="find __libbpf/include/uapi/linux -type f -exec sed -i 's/_UAPI\(__\?LINUX\)/\1/' {} + && "$'\\\n'
LIBBPF_TREE_FILTER+="git rm --ignore-unmatch -f __libbpf/src/{Makefile,Build,test_libbpf.c,.gitignore} >/dev/null"
cd_to()
@@ -348,7 +347,7 @@ diff -u ${TMP_DIR}/linux-view.ls ${TMP_DIR}/github-view.ls
echo "Comparing file contents..."
CONSISTENT=1
for F in $(cat ${TMP_DIR}/linux-view.ls); do
if ! diff -u <(sed 's/_UAPI\(__\?LINUX\)/\1/' "${LINUX_ABS_DIR}/${F}") "${GITHUB_ABS_DIR}/${F}"; then
if ! diff -u "${LINUX_ABS_DIR}/${F}" "${GITHUB_ABS_DIR}/${F}"; then
echo "${LINUX_ABS_DIR}/${F} and ${GITHUB_ABS_DIR}/${F} are different!"
CONSISTENT=0
fi

View File

@@ -9,7 +9,7 @@ else
endif
LIBBPF_MAJOR_VERSION := 1
LIBBPF_MINOR_VERSION := 7
LIBBPF_MINOR_VERSION := 6
LIBBPF_PATCH_VERSION := 0
LIBBPF_VERSION := $(LIBBPF_MAJOR_VERSION).$(LIBBPF_MINOR_VERSION).$(LIBBPF_PATCH_VERSION)
LIBBPF_MAJMIN_VERSION := $(LIBBPF_MAJOR_VERSION).$(LIBBPF_MINOR_VERSION).0

View File

@@ -837,50 +837,6 @@ int bpf_link_create(int prog_fd, int target_fd,
if (!OPTS_ZEROED(opts, netkit))
return libbpf_err(-EINVAL);
break;
case BPF_CGROUP_INET_INGRESS:
case BPF_CGROUP_INET_EGRESS:
case BPF_CGROUP_INET_SOCK_CREATE:
case BPF_CGROUP_INET_SOCK_RELEASE:
case BPF_CGROUP_INET4_BIND:
case BPF_CGROUP_INET6_BIND:
case BPF_CGROUP_INET4_POST_BIND:
case BPF_CGROUP_INET6_POST_BIND:
case BPF_CGROUP_INET4_CONNECT:
case BPF_CGROUP_INET6_CONNECT:
case BPF_CGROUP_UNIX_CONNECT:
case BPF_CGROUP_INET4_GETPEERNAME:
case BPF_CGROUP_INET6_GETPEERNAME:
case BPF_CGROUP_UNIX_GETPEERNAME:
case BPF_CGROUP_INET4_GETSOCKNAME:
case BPF_CGROUP_INET6_GETSOCKNAME:
case BPF_CGROUP_UNIX_GETSOCKNAME:
case BPF_CGROUP_UDP4_SENDMSG:
case BPF_CGROUP_UDP6_SENDMSG:
case BPF_CGROUP_UNIX_SENDMSG:
case BPF_CGROUP_UDP4_RECVMSG:
case BPF_CGROUP_UDP6_RECVMSG:
case BPF_CGROUP_UNIX_RECVMSG:
case BPF_CGROUP_SOCK_OPS:
case BPF_CGROUP_DEVICE:
case BPF_CGROUP_SYSCTL:
case BPF_CGROUP_GETSOCKOPT:
case BPF_CGROUP_SETSOCKOPT:
case BPF_LSM_CGROUP:
relative_fd = OPTS_GET(opts, cgroup.relative_fd, 0);
relative_id = OPTS_GET(opts, cgroup.relative_id, 0);
if (relative_fd && relative_id)
return libbpf_err(-EINVAL);
if (relative_id) {
attr.link_create.cgroup.relative_id = relative_id;
attr.link_create.flags |= BPF_F_ID;
} else {
attr.link_create.cgroup.relative_fd = relative_fd;
}
attr.link_create.cgroup.expected_revision =
OPTS_GET(opts, cgroup.expected_revision, 0);
if (!OPTS_ZEROED(opts, cgroup))
return libbpf_err(-EINVAL);
break;
default:
if (!OPTS_ZEROED(opts, flags))
return libbpf_err(-EINVAL);
@@ -1375,23 +1331,3 @@ int bpf_token_create(int bpffs_fd, struct bpf_token_create_opts *opts)
fd = sys_bpf_fd(BPF_TOKEN_CREATE, &attr, attr_sz);
return libbpf_err_errno(fd);
}
int bpf_prog_stream_read(int prog_fd, __u32 stream_id, void *buf, __u32 buf_len,
struct bpf_prog_stream_read_opts *opts)
{
const size_t attr_sz = offsetofend(union bpf_attr, prog_stream_read);
union bpf_attr attr;
int err;
if (!OPTS_VALID(opts, bpf_prog_stream_read_opts))
return libbpf_err(-EINVAL);
memset(&attr, 0, attr_sz);
attr.prog_stream_read.stream_buf = ptr_to_u64(buf);
attr.prog_stream_read.stream_buf_len = buf_len;
attr.prog_stream_read.stream_id = stream_id;
attr.prog_stream_read.prog_fd = prog_fd;
err = sys_bpf(BPF_PROG_STREAM_READ_BY_FD, &attr, attr_sz);
return libbpf_err_errno(err);
}

View File

@@ -438,11 +438,6 @@ struct bpf_link_create_opts {
__u32 relative_id;
__u64 expected_revision;
} netkit;
struct {
__u32 relative_fd;
__u32 relative_id;
__u64 expected_revision;
} cgroup;
};
size_t :0;
};
@@ -709,27 +704,6 @@ struct bpf_token_create_opts {
LIBBPF_API int bpf_token_create(int bpffs_fd,
struct bpf_token_create_opts *opts);
struct bpf_prog_stream_read_opts {
size_t sz;
size_t :0;
};
#define bpf_prog_stream_read_opts__last_field sz
/**
* @brief **bpf_prog_stream_read** reads data from the BPF stream of a given BPF
* program.
*
* @param prog_fd FD for the BPF program whose BPF stream is to be read.
* @param stream_id ID of the BPF stream to be read.
* @param buf Buffer to read data into from the BPF stream.
* @param buf_len Maximum number of bytes to read from the BPF stream.
* @param opts optional options, can be NULL
*
* @return The number of bytes read, on success; negative error code, otherwise
* (errno is also set to the error code)
*/
LIBBPF_API int bpf_prog_stream_read(int prog_fd, __u32 stream_id, void *buf, __u32 buf_len,
struct bpf_prog_stream_read_opts *opts);
#ifdef __cplusplus
} /* extern "C" */
#endif

View File

@@ -286,7 +286,6 @@ static long (* const bpf_l3_csum_replace)(struct __sk_buff *skb, __u32 offset, _
* for updates resulting in a null checksum the value is set to
* **CSUM_MANGLED_0** instead. Flag **BPF_F_PSEUDO_HDR** indicates
* that the modified header field is part of the pseudo-header.
* Flag **BPF_F_IPV6** should be set for IPv6 packets.
*
* This helper works in combination with **bpf_csum_diff**\ (),
* which does not update the checksum in-place, but offers more
@@ -689,7 +688,7 @@ static __u32 (* const bpf_get_route_realm)(struct __sk_buff *skb) = (void *) 24;
* into it. An example is available in file
* *samples/bpf/trace_output_user.c* in the Linux kernel source
* tree (the eBPF program counterpart is in
* *samples/bpf/trace_output.bpf.c*).
* *samples/bpf/trace_output_kern.c*).
*
* **bpf_perf_event_output**\ () achieves better performance
* than **bpf_trace_printk**\ () for sharing data with user
@@ -3707,9 +3706,6 @@ static void *(* const bpf_this_cpu_ptr)(const void *percpu_ptr) = (void *) 154;
* the netns switch takes place from ingress to ingress without
* going through the CPU's backlog queue.
*
* *skb*\ **->mark** and *skb*\ **->tstamp** are not cleared during
* the netns switch.
*
* The *flags* argument is reserved and must be 0. The helper is
* currently only supported for tc BPF program types at the
* ingress hook and for veth and netkit target device types. The

View File

@@ -215,7 +215,6 @@ enum libbpf_tristate {
#define __arg_nonnull __attribute((btf_decl_tag("arg:nonnull")))
#define __arg_nullable __attribute((btf_decl_tag("arg:nullable")))
#define __arg_trusted __attribute((btf_decl_tag("arg:trusted")))
#define __arg_untrusted __attribute((btf_decl_tag("arg:untrusted")))
#define __arg_arena __attribute((btf_decl_tag("arg:arena")))
#ifndef ___bpf_concat
@@ -315,22 +314,6 @@ enum libbpf_tristate {
___param, sizeof(___param)); \
})
extern int bpf_stream_vprintk(int stream_id, const char *fmt__str, const void *args,
__u32 len__sz, void *aux__prog) __weak __ksym;
#define bpf_stream_printk(stream_id, fmt, args...) \
({ \
static const char ___fmt[] = fmt; \
unsigned long long ___param[___bpf_narg(args)]; \
\
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
___bpf_fill(___param, args); \
_Pragma("GCC diagnostic pop") \
\
bpf_stream_vprintk(stream_id, ___fmt, ___param, sizeof(___param), NULL);\
})
/* Use __bpf_printk when bpf_printk call has 3 or fewer fmt args
* Otherwise use __bpf_vprintk
*/

View File

@@ -12,7 +12,6 @@
#include <sys/utsname.h>
#include <sys/param.h>
#include <sys/stat.h>
#include <sys/mman.h>
#include <linux/kernel.h>
#include <linux/err.h>
#include <linux/btf.h>
@@ -121,9 +120,6 @@ struct btf {
/* whether base_btf should be freed in btf_free for this instance */
bool owns_base;
/* whether raw_data is a (read-only) mmap */
bool raw_data_is_mmap;
/* BTF object FD, if loaded into kernel */
int fd;
@@ -955,17 +951,6 @@ static bool btf_is_modifiable(const struct btf *btf)
return (void *)btf->hdr != btf->raw_data;
}
static void btf_free_raw_data(struct btf *btf)
{
if (btf->raw_data_is_mmap) {
munmap(btf->raw_data, btf->raw_size);
btf->raw_data_is_mmap = false;
} else {
free(btf->raw_data);
}
btf->raw_data = NULL;
}
void btf__free(struct btf *btf)
{
if (IS_ERR_OR_NULL(btf))
@@ -985,7 +970,7 @@ void btf__free(struct btf *btf)
free(btf->types_data);
strset__free(btf->strs_set);
}
btf_free_raw_data(btf);
free(btf->raw_data);
free(btf->raw_data_swapped);
free(btf->type_offs);
if (btf->owns_base)
@@ -1011,7 +996,7 @@ static struct btf *btf_new_empty(struct btf *base_btf)
if (base_btf) {
btf->base_btf = base_btf;
btf->start_id = btf__type_cnt(base_btf);
btf->start_str_off = base_btf->hdr->str_len + base_btf->start_str_off;
btf->start_str_off = base_btf->hdr->str_len;
btf->swapped_endian = base_btf->swapped_endian;
}
@@ -1045,7 +1030,7 @@ struct btf *btf__new_empty_split(struct btf *base_btf)
return libbpf_ptr(btf_new_empty(base_btf));
}
static struct btf *btf_new(const void *data, __u32 size, struct btf *base_btf, bool is_mmap)
static struct btf *btf_new(const void *data, __u32 size, struct btf *base_btf)
{
struct btf *btf;
int err;
@@ -1065,18 +1050,12 @@ static struct btf *btf_new(const void *data, __u32 size, struct btf *base_btf, b
btf->start_str_off = base_btf->hdr->str_len;
}
if (is_mmap) {
btf->raw_data = (void *)data;
btf->raw_data_is_mmap = true;
} else {
btf->raw_data = malloc(size);
if (!btf->raw_data) {
err = -ENOMEM;
goto done;
}
memcpy(btf->raw_data, data, size);
btf->raw_data = malloc(size);
if (!btf->raw_data) {
err = -ENOMEM;
goto done;
}
memcpy(btf->raw_data, data, size);
btf->raw_size = size;
btf->hdr = btf->raw_data;
@@ -1104,12 +1083,12 @@ done:
struct btf *btf__new(const void *data, __u32 size)
{
return libbpf_ptr(btf_new(data, size, NULL, false));
return libbpf_ptr(btf_new(data, size, NULL));
}
struct btf *btf__new_split(const void *data, __u32 size, struct btf *base_btf)
{
return libbpf_ptr(btf_new(data, size, base_btf, false));
return libbpf_ptr(btf_new(data, size, base_btf));
}
struct btf_elf_secs {
@@ -1230,7 +1209,7 @@ static struct btf *btf_parse_elf(const char *path, struct btf *base_btf,
if (secs.btf_base_data) {
dist_base_btf = btf_new(secs.btf_base_data->d_buf, secs.btf_base_data->d_size,
NULL, false);
NULL);
if (IS_ERR(dist_base_btf)) {
err = PTR_ERR(dist_base_btf);
dist_base_btf = NULL;
@@ -1239,7 +1218,7 @@ static struct btf *btf_parse_elf(const char *path, struct btf *base_btf,
}
btf = btf_new(secs.btf_data->d_buf, secs.btf_data->d_size,
dist_base_btf ?: base_btf, false);
dist_base_btf ?: base_btf);
if (IS_ERR(btf)) {
err = PTR_ERR(btf);
goto done;
@@ -1356,7 +1335,7 @@ static struct btf *btf_parse_raw(const char *path, struct btf *base_btf)
}
/* finally parse BTF data */
btf = btf_new(data, sz, base_btf, false);
btf = btf_new(data, sz, base_btf);
err_out:
free(data);
@@ -1375,37 +1354,6 @@ struct btf *btf__parse_raw_split(const char *path, struct btf *base_btf)
return libbpf_ptr(btf_parse_raw(path, base_btf));
}
static struct btf *btf_parse_raw_mmap(const char *path, struct btf *base_btf)
{
struct stat st;
void *data;
struct btf *btf;
int fd, err;
fd = open(path, O_RDONLY);
if (fd < 0)
return ERR_PTR(-errno);
if (fstat(fd, &st) < 0) {
err = -errno;
close(fd);
return ERR_PTR(err);
}
data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
err = -errno;
close(fd);
if (data == MAP_FAILED)
return ERR_PTR(err);
btf = btf_new(data, st.st_size, base_btf, true);
if (IS_ERR(btf))
munmap(data, st.st_size);
return btf;
}
static struct btf *btf_parse(const char *path, struct btf *base_btf, struct btf_ext **btf_ext)
{
struct btf *btf;
@@ -1670,7 +1618,7 @@ struct btf *btf_get_from_fd(int btf_fd, struct btf *base_btf)
goto exit_free;
}
btf = btf_new(ptr, btf_info.btf_size, base_btf, false);
btf = btf_new(ptr, btf_info.btf_size, base_btf);
exit_free:
free(ptr);
@@ -1710,8 +1658,10 @@ struct btf *btf__load_from_kernel_by_id(__u32 id)
static void btf_invalidate_raw_data(struct btf *btf)
{
if (btf->raw_data)
btf_free_raw_data(btf);
if (btf->raw_data) {
free(btf->raw_data);
btf->raw_data = NULL;
}
if (btf->raw_data_swapped) {
free(btf->raw_data_swapped);
btf->raw_data_swapped = NULL;
@@ -5381,10 +5331,7 @@ struct btf *btf__load_vmlinux_btf(void)
pr_warn("kernel BTF is missing at '%s', was CONFIG_DEBUG_INFO_BTF enabled?\n",
sysfs_btf_path);
} else {
btf = btf_parse_raw_mmap(sysfs_btf_path, NULL);
if (IS_ERR(btf))
btf = btf__parse(sysfs_btf_path, NULL);
btf = btf__parse(sysfs_btf_path, NULL);
if (!btf) {
err = -errno;
pr_warn("failed to read kernel BTF from '%s': %s\n",

View File

@@ -326,10 +326,9 @@ struct btf_dump_type_data_opts {
bool compact; /* no newlines/indentation */
bool skip_names; /* skip member/type names */
bool emit_zeroes; /* show 0-valued fields */
bool emit_strings; /* print char arrays as strings */
size_t :0;
};
#define btf_dump_type_data_opts__last_field emit_strings
#define btf_dump_type_data_opts__last_field emit_zeroes
LIBBPF_API int
btf_dump__dump_type_data(struct btf_dump *d, __u32 id,

View File

@@ -68,7 +68,6 @@ struct btf_dump_data {
bool compact;
bool skip_names;
bool emit_zeroes;
bool emit_strings;
__u8 indent_lvl; /* base indent level */
char indent_str[BTF_DATA_INDENT_STR_LEN];
/* below are used during iteration */
@@ -227,9 +226,6 @@ static void btf_dump_free_names(struct hashmap *map)
size_t bkt;
struct hashmap_entry *cur;
if (!map)
return;
hashmap__for_each_entry(map, cur, bkt)
free((void *)cur->pkey);
@@ -2032,52 +2028,6 @@ static int btf_dump_var_data(struct btf_dump *d,
return btf_dump_dump_type_data(d, NULL, t, type_id, data, 0, 0);
}
static int btf_dump_string_data(struct btf_dump *d,
const struct btf_type *t,
__u32 id,
const void *data)
{
const struct btf_array *array = btf_array(t);
const char *chars = data;
__u32 i;
/* Make sure it is a NUL-terminated string. */
for (i = 0; i < array->nelems; i++) {
if ((void *)(chars + i) >= d->typed_dump->data_end)
return -E2BIG;
if (chars[i] == '\0')
break;
}
if (i == array->nelems) {
/* The caller will print this as a regular array. */
return -EINVAL;
}
btf_dump_data_pfx(d);
btf_dump_printf(d, "\"");
for (i = 0; i < array->nelems; i++) {
char c = chars[i];
if (c == '\0') {
/*
* When printing character arrays as strings, NUL bytes
* are always treated as string terminators; they are
* never printed.
*/
break;
}
if (isprint(c))
btf_dump_printf(d, "%c", c);
else
btf_dump_printf(d, "\\x%02x", (__u8)c);
}
btf_dump_printf(d, "\"");
return 0;
}
static int btf_dump_array_data(struct btf_dump *d,
const struct btf_type *t,
__u32 id,
@@ -2105,13 +2055,8 @@ static int btf_dump_array_data(struct btf_dump *d,
* char arrays, so if size is 1 and element is
* printable as a char, we'll do that.
*/
if (elem_size == 1) {
if (d->typed_dump->emit_strings &&
btf_dump_string_data(d, t, id, data) == 0) {
return 0;
}
if (elem_size == 1)
d->typed_dump->is_array_char = true;
}
}
/* note that we increment depth before calling btf_dump_print() below;
@@ -2599,7 +2544,6 @@ int btf_dump__dump_type_data(struct btf_dump *d, __u32 id,
d->typed_dump->compact = OPTS_GET(opts, compact, false);
d->typed_dump->skip_names = OPTS_GET(opts, skip_names, false);
d->typed_dump->emit_zeroes = OPTS_GET(opts, emit_zeroes, false);
d->typed_dump->emit_strings = OPTS_GET(opts, emit_strings, false);
ret = btf_dump_dump_type_data(d, NULL, t, id, data, 0, 0);

View File

@@ -597,7 +597,7 @@ struct extern_desc {
int sym_idx;
int btf_id;
int sec_btf_id;
char *name;
const char *name;
char *essent_name;
bool is_set;
bool is_weak;
@@ -735,7 +735,7 @@ struct bpf_object {
struct usdt_manager *usdt_man;
int arena_map_idx;
struct bpf_map *arena_map;
void *arena_data;
size_t arena_data_sz;
@@ -1517,7 +1517,6 @@ static struct bpf_object *bpf_object__new(const char *path,
obj->efile.obj_buf_sz = obj_buf_sz;
obj->efile.btf_maps_shndx = -1;
obj->kconfig_map_idx = -1;
obj->arena_map_idx = -1;
obj->kern_version = get_kernel_version();
obj->state = OBJ_OPEN;
@@ -2965,7 +2964,7 @@ static int init_arena_map_data(struct bpf_object *obj, struct bpf_map *map,
const long page_sz = sysconf(_SC_PAGE_SIZE);
size_t mmap_sz;
mmap_sz = bpf_map_mmap_sz(map);
mmap_sz = bpf_map_mmap_sz(obj->arena_map);
if (roundup(data_sz, page_sz) > mmap_sz) {
pr_warn("elf: sec '%s': declared ARENA map size (%zu) is too small to hold global __arena variables of size %zu\n",
sec_name, mmap_sz, data_sz);
@@ -3039,12 +3038,12 @@ static int bpf_object__init_user_btf_maps(struct bpf_object *obj, bool strict,
if (map->def.type != BPF_MAP_TYPE_ARENA)
continue;
if (obj->arena_map_idx >= 0) {
if (obj->arena_map) {
pr_warn("map '%s': only single ARENA map is supported (map '%s' is also ARENA)\n",
map->name, obj->maps[obj->arena_map_idx].name);
map->name, obj->arena_map->name);
return -EINVAL;
}
obj->arena_map_idx = i;
obj->arena_map = map;
if (obj->efile.arena_data) {
err = init_arena_map_data(obj, map, ARENA_SEC, obj->efile.arena_data_shndx,
@@ -3054,7 +3053,7 @@ static int bpf_object__init_user_btf_maps(struct bpf_object *obj, bool strict,
return err;
}
}
if (obj->efile.arena_data && obj->arena_map_idx < 0) {
if (obj->efile.arena_data && !obj->arena_map) {
pr_warn("elf: sec '%s': to use global __arena variables the ARENA map should be explicitly declared in SEC(\".maps\")\n",
ARENA_SEC);
return -ENOENT;
@@ -4260,9 +4259,7 @@ static int bpf_object__collect_externs(struct bpf_object *obj)
return ext->btf_id;
}
t = btf__type_by_id(obj->btf, ext->btf_id);
ext->name = strdup(btf__name_by_offset(obj->btf, t->name_off));
if (!ext->name)
return -ENOMEM;
ext->name = btf__name_by_offset(obj->btf, t->name_off);
ext->sym_idx = i;
ext->is_weak = ELF64_ST_BIND(sym->st_info) == STB_WEAK;
@@ -4582,20 +4579,10 @@ static int bpf_program__record_reloc(struct bpf_program *prog,
/* arena data relocation */
if (shdr_idx == obj->efile.arena_data_shndx) {
if (obj->arena_map_idx < 0) {
pr_warn("prog '%s': bad arena data relocation at insn %u, no arena maps defined\n",
prog->name, insn_idx);
return -LIBBPF_ERRNO__RELOC;
}
reloc_desc->type = RELO_DATA;
reloc_desc->insn_idx = insn_idx;
reloc_desc->map_idx = obj->arena_map_idx;
reloc_desc->map_idx = obj->arena_map - obj->maps;
reloc_desc->sym_off = sym->st_value;
map = &obj->maps[obj->arena_map_idx];
pr_debug("prog '%s': found arena map %d (%s, sec %d, off %zu) for insn %u\n",
prog->name, obj->arena_map_idx, map->name, map->sec_idx,
map->sec_offset, insn_idx);
return 0;
}
@@ -5093,16 +5080,6 @@ static bool map_is_reuse_compat(const struct bpf_map *map, int map_fd)
return false;
}
/*
* bpf_get_map_info_by_fd() for DEVMAP will always return flags with
* BPF_F_RDONLY_PROG set, but it generally is not set at map creation time.
* Thus, ignore the BPF_F_RDONLY_PROG flag in the flags returned from
* bpf_get_map_info_by_fd() when checking for compatibility with an
* existing DEVMAP.
*/
if (map->def.type == BPF_MAP_TYPE_DEVMAP || map->def.type == BPF_MAP_TYPE_DEVMAP_HASH)
map_info.map_flags &= ~BPF_F_RDONLY_PROG;
return (map_info.type == map->def.type &&
map_info.key_size == map->def.key_size &&
map_info.value_size == map->def.value_size &&
@@ -9161,10 +9138,8 @@ void bpf_object__close(struct bpf_object *obj)
zfree(&obj->btf_custom_path);
zfree(&obj->kconfig);
for (i = 0; i < obj->nr_extern; i++) {
zfree(&obj->externs[i].name);
for (i = 0; i < obj->nr_extern; i++)
zfree(&obj->externs[i].essent_name);
}
zfree(&obj->externs);
obj->nr_extern = 0;
@@ -9231,7 +9206,7 @@ int bpf_object__gen_loader(struct bpf_object *obj, struct gen_loader_opts *opts)
return libbpf_err(-EFAULT);
if (!OPTS_VALID(opts, gen_loader_opts))
return libbpf_err(-EINVAL);
gen = calloc(1, sizeof(*gen));
gen = calloc(sizeof(*gen), 1);
if (!gen)
return libbpf_err(-ENOMEM);
gen->opts = opts;
@@ -10106,7 +10081,7 @@ static int find_kernel_btf_id(struct bpf_object *obj, const char *attach_name,
enum bpf_attach_type attach_type,
int *btf_obj_fd, int *btf_type_id)
{
int ret, i, mod_len = 0;
int ret, i, mod_len;
const char *fn_name, *mod_name = NULL;
fn_name = strchr(attach_name, ':');
@@ -10975,14 +10950,11 @@ struct bpf_link *bpf_program__attach_perf_event_opts(const struct bpf_program *p
}
link->link.fd = pfd;
}
if (!OPTS_GET(opts, dont_enable, false)) {
if (ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) < 0) {
err = -errno;
pr_warn("prog '%s': failed to enable perf_event FD %d: %s\n",
prog->name, pfd, errstr(err));
goto err_out;
}
if (ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) < 0) {
err = -errno;
pr_warn("prog '%s': failed to enable perf_event FD %d: %s\n",
prog->name, pfd, errstr(err));
goto err_out;
}
return &link->link;
@@ -12865,34 +12837,6 @@ struct bpf_link *bpf_program__attach_xdp(const struct bpf_program *prog, int ifi
return bpf_program_attach_fd(prog, ifindex, "xdp", NULL);
}
struct bpf_link *
bpf_program__attach_cgroup_opts(const struct bpf_program *prog, int cgroup_fd,
const struct bpf_cgroup_opts *opts)
{
LIBBPF_OPTS(bpf_link_create_opts, link_create_opts);
__u32 relative_id;
int relative_fd;
if (!OPTS_VALID(opts, bpf_cgroup_opts))
return libbpf_err_ptr(-EINVAL);
relative_id = OPTS_GET(opts, relative_id, 0);
relative_fd = OPTS_GET(opts, relative_fd, 0);
if (relative_fd && relative_id) {
pr_warn("prog '%s': relative_fd and relative_id cannot be set at the same time\n",
prog->name);
return libbpf_err_ptr(-EINVAL);
}
link_create_opts.cgroup.expected_revision = OPTS_GET(opts, expected_revision, 0);
link_create_opts.cgroup.relative_fd = relative_fd;
link_create_opts.cgroup.relative_id = relative_id;
link_create_opts.flags = OPTS_GET(opts, flags, 0);
return bpf_program_attach_fd(prog, cgroup_fd, "cgroup", &link_create_opts);
}
struct bpf_link *
bpf_program__attach_tcx(const struct bpf_program *prog, int ifindex,
const struct bpf_tcx_opts *opts)
View File

@@ -24,25 +24,8 @@
extern "C" {
#endif
/**
* @brief **libbpf_major_version()** provides the major version of libbpf.
* @return An integer, the major version number
*/
LIBBPF_API __u32 libbpf_major_version(void);
/**
* @brief **libbpf_minor_version()** provides the minor version of libbpf.
* @return An integer, the minor version number
*/
LIBBPF_API __u32 libbpf_minor_version(void);
/**
* @brief **libbpf_version_string()** provides the version of libbpf in a
* human-readable form, e.g., "v1.7".
* @return Pointer to a static string containing the version
*
* The format is *not* a part of a stable API and may change in the future.
*/
LIBBPF_API const char *libbpf_version_string(void);
enum libbpf_errno {
@@ -66,14 +49,6 @@ enum libbpf_errno {
__LIBBPF_ERRNO__END,
};
/**
* @brief **libbpf_strerror()** converts the provided error code into a
* human-readable string.
* @param err The error code to convert
* @param buf Pointer to a buffer where the error message will be stored
* @param size The number of bytes in the buffer
* @return 0, on success; negative error code, otherwise
*/
LIBBPF_API int libbpf_strerror(int err, char *buf, size_t size);
/**
@@ -277,7 +252,7 @@ bpf_object__open_mem(const void *obj_buf, size_t obj_buf_sz,
* @return 0, on success; negative error code, otherwise, error code is
* stored in errno
*/
LIBBPF_API int bpf_object__prepare(struct bpf_object *obj);
int bpf_object__prepare(struct bpf_object *obj);
/**
* @brief **bpf_object__load()** loads BPF object into kernel.
@@ -524,11 +499,9 @@ struct bpf_perf_event_opts {
__u64 bpf_cookie;
/* don't use BPF link when attach BPF program */
bool force_ioctl_attach;
/* don't automatically enable the event */
bool dont_enable;
size_t :0;
};
#define bpf_perf_event_opts__last_field dont_enable
#define bpf_perf_event_opts__last_field force_ioctl_attach
LIBBPF_API struct bpf_link *
bpf_program__attach_perf_event(const struct bpf_program *prog, int pfd);
@@ -904,21 +877,6 @@ LIBBPF_API struct bpf_link *
bpf_program__attach_netkit(const struct bpf_program *prog, int ifindex,
const struct bpf_netkit_opts *opts);
struct bpf_cgroup_opts {
/* size of this struct, for forward/backward compatibility */
size_t sz;
__u32 flags;
__u32 relative_fd;
__u32 relative_id;
__u64 expected_revision;
size_t :0;
};
#define bpf_cgroup_opts__last_field expected_revision
LIBBPF_API struct bpf_link *
bpf_program__attach_cgroup_opts(const struct bpf_program *prog, int cgroup_fd,
const struct bpf_cgroup_opts *opts);
struct bpf_map;
LIBBPF_API struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map);
@@ -1331,7 +1289,6 @@ enum bpf_tc_attach_point {
BPF_TC_INGRESS = 1 << 0,
BPF_TC_EGRESS = 1 << 1,
BPF_TC_CUSTOM = 1 << 2,
BPF_TC_QDISC = 1 << 3,
};
#define BPF_TC_PARENT(a, b) \
@@ -1346,11 +1303,9 @@ struct bpf_tc_hook {
int ifindex;
enum bpf_tc_attach_point attach_point;
__u32 parent;
__u32 handle;
const char *qdisc;
size_t :0;
};
#define bpf_tc_hook__last_field qdisc
#define bpf_tc_hook__last_field parent
struct bpf_tc_opts {
size_t sz;
View File
@@ -437,8 +437,6 @@ LIBBPF_1.6.0 {
bpf_linker__add_fd;
bpf_linker__new_fd;
bpf_object__prepare;
bpf_prog_stream_read;
bpf_program__attach_cgroup_opts;
bpf_program__func_info;
bpf_program__func_info_cnt;
bpf_program__line_info;
@@ -446,6 +444,3 @@ LIBBPF_1.6.0 {
btf__add_decl_attr;
btf__add_type_attr;
} LIBBPF_1.5.0;
LIBBPF_1.7.0 {
} LIBBPF_1.6.0;
View File
@@ -97,6 +97,9 @@ __u32 get_kernel_version(void)
if (sscanf(info.release, "%u.%u.%u", &major, &minor, &patch) != 3)
return 0;
if (major == 4 && minor == 19 && patch > 255)
return KERNEL_VERSION(major, minor, 255);
return KERNEL_VERSION(major, minor, patch);
}
View File
@@ -4,6 +4,6 @@
#define __LIBBPF_VERSION_H
#define LIBBPF_MAJOR_VERSION 1
#define LIBBPF_MINOR_VERSION 7
#define LIBBPF_MINOR_VERSION 6
#endif /* __LIBBPF_VERSION_H */
View File
@@ -529,9 +529,9 @@ int bpf_xdp_query_id(int ifindex, int flags, __u32 *prog_id)
}
typedef int (*qdisc_config_t)(struct libbpf_nla_req *req, const struct bpf_tc_hook *hook);
typedef int (*qdisc_config_t)(struct libbpf_nla_req *req);
static int clsact_config(struct libbpf_nla_req *req, const struct bpf_tc_hook *hook)
static int clsact_config(struct libbpf_nla_req *req)
{
req->tc.tcm_parent = TC_H_CLSACT;
req->tc.tcm_handle = TC_H_MAKE(TC_H_CLSACT, 0);
@@ -539,16 +539,6 @@ static int clsact_config(struct libbpf_nla_req *req, const struct bpf_tc_hook *h
return nlattr_add(req, TCA_KIND, "clsact", sizeof("clsact"));
}
static int qdisc_config(struct libbpf_nla_req *req, const struct bpf_tc_hook *hook)
{
const char *qdisc = OPTS_GET(hook, qdisc, NULL);
req->tc.tcm_parent = OPTS_GET(hook, parent, TC_H_ROOT);
req->tc.tcm_handle = OPTS_GET(hook, handle, 0);
return nlattr_add(req, TCA_KIND, qdisc, strlen(qdisc) + 1);
}
static int attach_point_to_config(struct bpf_tc_hook *hook,
qdisc_config_t *config)
{
@@ -562,9 +552,6 @@ static int attach_point_to_config(struct bpf_tc_hook *hook,
return 0;
case BPF_TC_CUSTOM:
return -EOPNOTSUPP;
case BPF_TC_QDISC:
*config = &qdisc_config;
return 0;
default:
return -EINVAL;
}
@@ -609,7 +596,7 @@ static int tc_qdisc_modify(struct bpf_tc_hook *hook, int cmd, int flags)
req.tc.tcm_family = AF_UNSPEC;
req.tc.tcm_ifindex = OPTS_GET(hook, ifindex, 0);
ret = config(&req, hook);
ret = config(&req);
if (ret < 0)
return ret;
@@ -652,7 +639,6 @@ int bpf_tc_hook_destroy(struct bpf_tc_hook *hook)
case BPF_TC_INGRESS:
case BPF_TC_EGRESS:
return libbpf_err(__bpf_tc_detach(hook, NULL, true));
case BPF_TC_QDISC:
case BPF_TC_INGRESS | BPF_TC_EGRESS:
return libbpf_err(tc_qdisc_delete(hook));
case BPF_TC_CUSTOM:
View File
@@ -59,7 +59,7 @@
*
* STAP_PROBE3(my_usdt_provider, my_usdt_probe_name, 123, x, &y);
*
* USDT is identified by its <provider-name>:<probe-name> pair of names. Each
* USDT is identified by it's <provider-name>:<probe-name> pair of names. Each
* individual USDT has a fixed number of arguments (3 in the above example)
* and specifies values of each argument as if it was a function call.
*
@@ -81,7 +81,7 @@
* NOP instruction that kernel can replace with an interrupt instruction to
* trigger instrumentation code (BPF program for all that we care about).
*
* Semaphore above is an optional feature. It records an address of a 2-byte
* Semaphore above is and optional feature. It records an address of a 2-byte
* refcount variable (normally in '.probes' ELF section) used for signaling if
* there is anything that is attached to USDT. This is useful for user
* applications if, for example, they need to prepare some arguments that are
@@ -121,7 +121,7 @@
* a uprobe BPF program (which for kernel, at least currently, is just a kprobe
* program, so BPF_PROG_TYPE_KPROBE program type). With the only difference
* that uprobe is usually attached at the function entry, while USDT will
* normally be somewhere inside the function. But it should always be
* normally will be somewhere inside the function. But it should always be
* pointing to NOP instruction, which makes such uprobes the fastest uprobe
* kind.
*
@@ -151,7 +151,7 @@
* libbpf sets to spec ID during attach time, or, if kernel is too old to
* support BPF cookie, through IP-to-spec-ID map that libbpf maintains in such
* case. The latter means that some modes of operation can't be supported
* without BPF cookie. Such a mode is attaching to shared library "generically",
* without BPF cookie. Such mode is attaching to shared library "generically",
* without specifying target process. In such case, it's impossible to
* calculate absolute IP addresses for IP-to-spec-ID map, and thus such mode
* is not supported without BPF cookie support.
@@ -185,7 +185,7 @@
* as even if USDT spec string is the same, USDT cookie value can be
* different. It was deemed excessive to try to deduplicate across independent
* USDT attachments by taking into account USDT spec string *and* USDT cookie
* value, which would complicate spec ID accounting significantly for little
* value, which would complicated spec ID accounting significantly for little
* gain.
*/