- Feb 24, 2023
-
-
Jakub Kicinski authored
traceback.print_exception() seems tricky to call, we're missing some argument, so re-raise instead.

Reported-by: Chuck Lever III <chuck.lever@oracle.com>
Fixes: 3aacf828 ("tools: ynl: add an object hierarchy to represent parsed spec")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Chuck ran into an issue with a single-element attr-set which only has an attr with a value of 0. The search for the max attr in a struct records attrs with a value larger than 0 only (max_val is set to 0 at the start). Adjust the comparison; alternatively, max_val could be init'ed to -1. Picking the last attr of a given value seems like a good idea in general.

Reported-by: Chuck Lever III <chuck.lever@oracle.com>
Fixes: be5bea1c ("net: add basic C code generators for Netlink")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
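
For illustration, a minimal C sketch of the comparison fix described above (the function and struct names are hypothetical, not the actual ynl generator code, which is Python):

    struct attr { int value; };

    /* With `>`, an attr-set whose only attr has value 0 never updates
     * max_attr; `>=` (or starting max_val at -1) fixes that, and also
     * picks the last attr of a given value. */
    static const struct attr *find_max_attr(const struct attr *attrs, int n)
    {
            const struct attr *max_attr = NULL;
            int max_val = 0;

            for (int i = 0; i < n; i++) {
                    if (attrs[i].value >= max_val) {        /* was: > */
                            max_val = attrs[i].value;
                            max_attr = &attrs[i];
                    }
            }
            return max_attr;
    }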
-
Tariq Toukan authored
Fix a repeated copy/paste typo.

Fixes: d3d854fd ("netdev-genl: create a simple family for netdev stuff")
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Acked-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Feb 23, 2023
-
-
Lu Wei authored
Add tests to check whether the total fib info length is calculated correctly in the route notify process.

Signed-off-by: Lu Wei <luwei32@huawei.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Link: https://lore.kernel.org/r/20230222083629.335683-3-luwei32@huawei.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
- Feb 22, 2023
-
-
Phil Sutter authored
When calling ip6_route_lookup() for the packet arriving on the VRF interface, the result is always the real (slave) interface. Expect this when validating the result.

Fixes: acc641ab ("netfilter: rpfilter/fib: Populate flowic_l3mdev field")
Signed-off-by: Phil Sutter <phil@nwl.cc>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-
- Feb 21, 2023
-
-
Jiri Pirko authored
When a devlink instance is put into a network namespace and that network namespace gets deleted, the devlink instance is moved back into init_net. This is done as a part of the cleanup_net() routine. Since cleanup_net() is called asynchronously from a workqueue, there is no guarantee that the devlink instance move is done after "ip netns del" returns. So fix this race by making sure that the devlink instance is present before any other operation.

Reported-by: Amir Tzin <amirtz@nvidia.com>
Fixes: b74c37fd ("selftests: netdevsim: add tests for devlink reload with resources")
Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Pavan Chebbi <pavan.chebbi@broadcom.com>
Link: https://lore.kernel.org/r/20230220132336.198597-1-jiri@resnulli.us
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Roxana Nicolescu authored
Usage of `set -e` before executing a command causes immediate exit on failure, without cleaning up the resources allocated at setup. This can affect the next tests that use the same resources, leading to a chain of failures. A simple fix is to always call the cleanup function when the script exits. This approach is already used by other existing tests.

Fixes: 1056691b ("selftests: fib_tests: Make test results more verbose")
Signed-off-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
Link: https://lore.kernel.org/r/20230220110400.26737-2-roxana.nicolescu@canonical.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
- Feb 20, 2023
-
-
Paolo Abeni authored
Explicitly check that the child netns and the main netns are independent.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Sitnicki authored
Data passed to user-space with a (SOL_UDP, UDP_GRO) cmsg carries an int (see udp_cmsg_recv), not a u16 value, as strace confirms:

    recvmsg(8, {msg_name=...,
                msg_iov=[{iov_base="\0\0..."..., iov_len=96000}],
                msg_iovlen=1,
                msg_control=[{cmsg_len=20,          <-- sizeof(cmsghdr) + 4
                              cmsg_level=SOL_UDP,
                              cmsg_type=0x68}],     <-- UDP_GRO
                msg_controllen=24,
                msg_flags=0}, 0) = 11200

Interpreting the data as a u16 value won't work on big-endian platforms. Since it is too late to back out of this API decision [1], fix the test.

[1]: https://lore.kernel.org/netdev/20230131174601.203127-1-jakub@cloudflare.com/

Fixes: 3327a9c4 ("selftests: add functionals test for UDP GRO")
Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
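
For illustration, a minimal C sketch of reading the segment size as an int, as described above (the helper name and fallback defines are assumptions, not the selftest's actual code):

    #include <string.h>
    #include <sys/socket.h>

    #ifndef SOL_UDP
    #define SOL_UDP 17
    #endif
    #ifndef UDP_GRO
    #define UDP_GRO 104     /* 0x68, matching the strace output above */
    #endif

    static int udp_gro_segment_size(struct msghdr *msg)
    {
            struct cmsghdr *cmsg;
            int gso_size = -1;      /* an int, not a u16 */

            for (cmsg = CMSG_FIRSTHDR(msg); cmsg; cmsg = CMSG_NXTHDR(msg, cmsg)) {
                    if (cmsg->cmsg_level == SOL_UDP && cmsg->cmsg_type == UDP_GRO) {
                            /* Reading only 2 bytes here breaks on big endian. */
                            memcpy(&gso_size, CMSG_DATA(cmsg), sizeof(gso_size));
                            break;
                    }
            }
            return gso_size;
    }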
-
- Feb 17, 2023
-
-
Martin KaFai Lau authored
This patch tests the bpf_fib_lookup helper when looking up a neigh in NUD_FAILED and NUD_STALE state. It also adds a test for the new BPF_FIB_LOOKUP_SKIP_NEIGH flag.

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20230217205515.3583372-2-martin.lau@linux.dev
-
Martin KaFai Lau authored
The bpf_fib_lookup() helper also looks up the neigh table. This was done before bpf_redirect_neigh() was added. In the use case that does not manage the neigh table and requires bpf_fib_lookup() to look up a fib to decide if it needs to redirect or not, the bpf prog can depend only on bpf_redirect_neigh() to look up the neigh. It also keeps the neigh entries fresh and connected. This patch adds a bpf_fib_lookup flag, SKIP_NEIGH, to avoid the double neigh lookup when the bpf prog always calls bpf_redirect_neigh() to do the neigh lookup. The params->smac output is also skipped when SKIP_NEIGH is set, because bpf_redirect_neigh() will figure out the smac as well.

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20230217205515.3583372-1-martin.lau@linux.dev
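
For illustration, a hedged TC-BPF sketch of the pattern this flag enables: do the fib lookup with BPF_FIB_LOOKUP_SKIP_NEIGH and let bpf_redirect_neigh() resolve the neighbour (and smac) itself. The program/section names and the elided header parsing are assumptions, not code from the patch; it also assumes uapi headers new enough to carry the flag.

    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include <bpf/bpf_helpers.h>

    SEC("tc")
    int redirect_without_double_neigh_lookup(struct __sk_buff *skb)
    {
            /* Illustrative: a real program fills src/dst from the parsed
             * packet headers before calling bpf_fib_lookup(). */
            struct bpf_fib_lookup params = {
                    .family  = 2,                  /* AF_INET */
                    .ifindex = skb->ingress_ifindex,
            };
            int ret;

            ret = bpf_fib_lookup(skb, &params, sizeof(params),
                                 BPF_FIB_LOOKUP_SKIP_NEIGH);
            if (ret != BPF_FIB_LKUP_RET_SUCCESS)
                    return TC_ACT_OK;              /* let the stack handle it */

            /* No neigh params needed: the helper looks up the neigh and
             * fills in the smac/dmac on its own. */
            return bpf_redirect_neigh(params.ifindex, NULL, 0, 0);
    }

    char _license[] SEC("license") = "GPL";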
-
Martin KaFai Lau authored
This reverts commit 6c20822f. The build bot failed on an arch with a different cache line size:
https://lore.kernel.org/bpf/50c35055-afa9-d01e-9a05-ea5351280e4f@intel.com/

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
-
Andrii Nakryiko authored
Add tests validating that it's possible to pass context arguments into global subprogs for various types of programs, including the particularly tricky KPROBE programs (which cover kprobes, uprobes, and USDTs, a vast and important class of programs).

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/bpf/20230216045954.3002473-4-andrii@kernel.org
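
As a rough sketch (assuming vmlinux.h and libbpf's bpf_helpers.h; the function names and attach point are made up for illustration), passing a context argument into a global subprog looks like:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    /* Global (non-static) subprog: it is verified independently of its
     * caller, so the verifier has to know ctx is a kprobe context. */
    __noinline int handle_ctx(struct pt_regs *ctx)
    {
            return ctx ? 1 : 0;
    }

    SEC("kprobe/do_nanosleep")
    int kprobe_entry(struct pt_regs *ctx)
    {
            return handle_ctx(ctx);
    }

    char _license[] SEC("license") = "GPL";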
-
Andrii Nakryiko authored
Convert 17 test_global_funcs subtests into the test_loader framework for easier maintenance and a more declarative way to define expected failures/successes.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/bpf/20230216045954.3002473-3-andrii@kernel.org
-
Taichi Nishimura authored
Run a spell checker on files in selftests/bpf and fix the typos found.

Signed-off-by: Taichi Nishimura <awkrail01@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Link: https://lore.kernel.org/bpf/20230216085537.519062-1-awkrail01@gmail.com
-
Ilya Leoshkevich authored
Use the new type-safe wrappers around bpf_obj_get_info_by_fd(). Fix a prog/map mixup in prog_holds_map().

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230214231221.249277-6-iii@linux.ibm.com
-
Ilya Leoshkevich authored
Use the new type-safe wrappers around bpf_obj_get_info_by_fd(). Split the bpf_obj_get_info_by_fd() call in build_btf_type_table() in two, since knowing the type helps with the Memory Sanitizer. Improve map_parse_fd_and_info() type safety by using struct bpf_map_info * instead of void * for info.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20230214231221.249277-4-iii@linux.ibm.com
-
Ilya Leoshkevich authored
Use the new type-safe wrappers around bpf_obj_get_info_by_fd().

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230214231221.249277-3-iii@linux.ibm.com
-
Ilya Leoshkevich authored
These are type-safe wrappers around bpf_obj_get_info_by_fd(). They found one problem in selftests, and are also useful for adding Memory Sanitizer annotations.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230214231221.249277-2-iii@linux.ibm.com
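
Roughly, such a wrapper just pins the info type of the generic helper; a sketch under the assumption that the wrappers mirror bpf_obj_get_info_by_fd()'s shape (the exact libbpf declarations may differ):

    #include <bpf/bpf.h>

    /* bpf_obj_get_info_by_fd() takes void *info, so passing a
     * struct bpf_map_info for a prog fd compiles silently; a typed
     * wrapper turns that mixup into a compile-time error. */
    static inline int prog_get_info_by_fd(int prog_fd,
                                          struct bpf_prog_info *info,
                                          __u32 *info_len)
    {
            return bpf_obj_get_info_by_fd(prog_fd, info, info_len);
    }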
-
- Feb 16, 2023
-
-
Andrea Mayer authored
This selftest is designed for testing the PSP flavor in the SRv6 End behavior. It instantiates a virtual network composed of several nodes: hosts and SRv6 routers. Each node is realized using a network namespace that is properly interconnected to others through veth pairs. The test makes use of the SRv6 End behavior and of the PSP flavor needed for removing the SRH from the IPv6 header at the penultimate node. The correct execution of the behavior is verified through reachability tests carried out between hosts.

Signed-off-by: Andrea Mayer <andrea.mayer@uniroma2.it>
Signed-off-by: Paolo Lungaroni <paolo.lungaroni@uniroma2.it>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Jamal Hadi Salim authored
The rsvp classifier has served us well for about a quarter of a century but has not been getting much maintenance attention due to lack of known users.

Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Acked-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Jamal Hadi Salim authored
The tcindex classifier has served us well for about a quarter of a century but has not been getting much TLC due to lack of known users. Most recently it has become easy prey to syzkaller. For this reason, we are retiring it.

Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Acked-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Jamal Hadi Salim authored
The dsmark qdisc has served us well over the years for diffserv but has not been getting much attention due to other, more popular approaches to delivering diffserv services. Most recently it has become a shooting target for syzkaller. For this reason, we are retiring it.

Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Acked-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Jamal Hadi Salim authored
The ATM qdisc has served us well over the years but has not been getting much TLC due to lack of known users. Most recently it has become a shooting target for syzkaller. For this reason, we are retiring it.

Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Acked-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Jamal Hadi Salim authored
While this amazing qdisc has served us well over the years, it has not been getting any tender love and care and has bitrotted over time. It has become mostly a shooting target for syzkaller lately. For this reason, we are retiring it. Goodbye CBQ - we loved you.

Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Acked-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Davide Caratti authored
Remove temporary files created by the 'mirred_egress_to_ingress_tcp' test in the cleanup() handler. Also, change variable names to avoid clashing with globals from lib.sh.

Suggested-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Davide Caratti <dcaratti@redhat.com>
Link: https://lore.kernel.org/r/091649045a017fc00095ecbb75884e5681f7025f.1676368027.git.dcaratti@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alexander Lobakin authored
&xdp_buff and &xdp_frame are bound in a way that

    xdp_buff->data_hard_start == xdp_frame

It's always the case and e.g. xdp_convert_buff_to_frame() relies on this. IOW, the following:

    for (u32 i = 0; i < 0xdead; i++) {
            xdpf = xdp_convert_buff_to_frame(&xdp);
            xdp_convert_frame_to_buff(xdpf, &xdp);
    }

shouldn't ever modify @xdpf's contents or the pointer itself. However, "live packet" code wrongly treats &xdp_frame as part of its context placed *before* the data_hard_start. With such a flow, data_hard_start is sizeof(*xdpf) off to the right and no longer points to the XDP frame.

Instead of replacing `sizeof(ctx)` with `offsetof(ctx, xdpf)` in several places and praying that there are no more miscalcs left somewhere in the code, unionize ::frm with ::data in a flex array, so that both start pointing to the actual data_hard_start and the XDP frame actually starts being a part of it, i.e. a part of the headroom, not the context.

A nice side effect is that the maximum frame size for this mode gets increased by 40 bytes, as xdp_buff::frame_sz includes everything from data_hard_start (-> includes xdpf already) to the end of the XDP/skb shared info. Also update %MAX_PKT_SIZE accordingly in the selftests code. Leave it hardcoded for 64 bit && 4k pages; it can be made more flexible later on.

Minor: align `&head->data` with how `head->frm` is assigned for consistency. Minor #2: rename 'frm' to 'frame' in &xdp_page_head while at it for clarity.

(was found while testing an XDP traffic generator on ice, which calls xdp_convert_frame_to_buff() for each XDP frame)

Fixes: b530e9e1 ("bpf: Add "live packet" mode for XDP in BPF_PROG_RUN")
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Link: https://lore.kernel.org/r/20230215185440.4126672-1-aleksander.lobakin@intel.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
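
Roughly, the fixed layout looks like the sketch below (kernel-side code; see the in-tree BPF test-run code for the authoritative version):

    struct xdp_page_head {
            struct xdp_buff orig_ctx;
            struct xdp_buff ctx;
            union {
                    /* ::data_hard_start starts here */
                    DECLARE_FLEX_ARRAY(struct xdp_frame, frame);
                    DECLARE_FLEX_ARRAY(u8, data);
            };
    };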
-
Anton Protopopov authored
Add a new benchmark which measures hashmap lookup operations speed. A user can control the following parameters of the benchmark:

  * key_size (max 1024): the key size to use
  * max_entries: the hashmap max entries
  * nr_entries: the number of entries to insert/lookup
  * nr_loops: the number of loops for the benchmark
  * map_flags: the hashmap flags passed to BPF_MAP_CREATE

The BPF program performing the benchmarks calls two nested bpf_loop:

    bpf_loop(nr_loops/nr_entries)
        bpf_loop(nr_entries)
            bpf_map_lookup()

So nr_loops determines the number of actual map lookups. All lookups are successful.

Example (the output is generated on an AMD Ryzen 9 3950X machine):

    for nr_entries in `seq 4096 4096 65536`; do
        echo -n "$((nr_entries*100/65536))% full: "
        sudo ./bench -d2 -a bpf-hashmap-lookup --key_size=4 \
            --nr_entries=$nr_entries --max_entries=65536 \
            --nr_loops=1000000 --map_flags=0x40 | grep cpu
    done

      6% full: cpu01: lookup 50.739M ± 0.018M events/sec (approximated from 32 samples of ~19ms)
     12% full: cpu01: lookup 47.751M ± 0.015M events/sec (approximated from 32 samples of ~20ms)
     18% full: cpu01: lookup 45.153M ± 0.013M events/sec (approximated from 32 samples of ~22ms)
     25% full: cpu01: lookup 43.826M ± 0.014M events/sec (approximated from 32 samples of ~22ms)
     31% full: cpu01: lookup 41.971M ± 0.012M events/sec (approximated from 32 samples of ~23ms)
     37% full: cpu01: lookup 41.034M ± 0.015M events/sec (approximated from 32 samples of ~24ms)
     43% full: cpu01: lookup 39.946M ± 0.012M events/sec (approximated from 32 samples of ~25ms)
     50% full: cpu01: lookup 38.256M ± 0.014M events/sec (approximated from 32 samples of ~26ms)
     56% full: cpu01: lookup 36.580M ± 0.018M events/sec (approximated from 32 samples of ~27ms)
     62% full: cpu01: lookup 36.252M ± 0.012M events/sec (approximated from 32 samples of ~27ms)
     68% full: cpu01: lookup 35.200M ± 0.012M events/sec (approximated from 32 samples of ~28ms)
     75% full: cpu01: lookup 34.061M ± 0.009M events/sec (approximated from 32 samples of ~29ms)
     81% full: cpu01: lookup 34.374M ± 0.010M events/sec (approximated from 32 samples of ~29ms)
     87% full: cpu01: lookup 33.244M ± 0.011M events/sec (approximated from 32 samples of ~30ms)
     93% full: cpu01: lookup 32.182M ± 0.013M events/sec (approximated from 32 samples of ~31ms)
    100% full: cpu01: lookup 31.497M ± 0.016M events/sec (approximated from 32 samples of ~31ms)

Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230213091519.1202813-8-aspsk@isovalent.com
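
A hedged BPF-side sketch of the nested bpf_loop structure described above (the map layout, sizes, and attach point are illustrative, not the benchmark's real code):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_HASH);
            __uint(max_entries, 65536);
            __type(key, __u32);
            __type(value, __u64);
    } hash_map SEC(".maps");

    const volatile __u32 nr_entries = 4096;
    const volatile __u32 nr_loops = 1000000;

    static int lookup_cb(__u32 i, void *ctx)
    {
            __u32 key = i;

            /* All keys were inserted beforehand, so this always hits. */
            return bpf_map_lookup_elem(&hash_map, &key) ? 0 : 1;
    }

    static int outer_cb(__u32 i, void *ctx)
    {
            bpf_loop(nr_entries, lookup_cb, NULL, 0);
            return 0;
    }

    SEC("tracepoint/syscalls/sys_enter_getpgid")  /* trigger is illustrative */
    int benchmark(void *ctx)
    {
            bpf_loop(nr_loops / nr_entries, outer_cb, NULL, 0);
            return 0;
    }

    char _license[] SEC("license") = "GPL";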
-
Anton Protopopov authored
The bench utility will print

    Setting up benchmark '<bench-name>'...
    Benchmark '<bench-name>' started.

on startup to stdout. Suppress this output if the --quiet option is given. This makes it simpler to parse benchmark output by a script.

Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230213091519.1202813-7-aspsk@isovalent.com
-
Anton Protopopov authored
The "local-storage-tasks-trace" benchmark has a `--quiet` option. Move it to the list of common options, so that the main code and other benchmarks can use (new) env.quiet variable. Patch the run_bench_local_storage_rcu_tasks_trace.sh helper script accordingly. Signed-off-by:
Anton Protopopov <aspsk@isovalent.com> Signed-off-by:
Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230213091519.1202813-6-aspsk@isovalent.com
-
Anton Protopopov authored
The benchs/bench_bpf_hashmap_full_update.c file doesn't set a custom argp, so it shouldn't include the <argp.h> header.

Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230213091519.1202813-5-aspsk@isovalent.com
-
Anton Protopopov authored
To parse the command line, the bench utility uses the argp_parse() function. This function takes as an argument a parent 'struct argp' structure which defines common command line options, and an array of children 'struct argp' structures which define additional command line options for particular benchmarks. This implementation doesn't allow benchmarks to share option names, e.g., if two benchmarks want to use, say, the --option option, then only one of them will succeed (the first one encountered in the array).

It would be convenient if the same option names could be used in different benchmarks (with the same semantics, e.g., --nr_loops=N). Fix this by calling the argp_parse() function twice. The first call is the same as it was before, with all children argps, and helps to find the benchmark name and to print a combined help message if anything is wrong. Given the name, we can call argp_parse a second time, but now the children array points only to the selected benchmark, thus always calling the correct parsers. (If there's no benchmark-specific list of arguments, then only one call to argp_parse is done.)

Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230213091519.1202813-4-aspsk@isovalent.com
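
A condensed C sketch of the two-pass scheme (the benchmark names and selection logic are invented for illustration; the real bench code differs in detail):

    #include <argp.h>
    #include <string.h>

    extern struct argp bench_a_argp, bench_b_argp;  /* per-benchmark argps */

    static struct argp_child all_children[] = {
            { &bench_a_argp, 0, "Benchmark A options", 0 },
            { &bench_b_argp, 0, "Benchmark B options", 0 },
            {},
    };
    static struct argp_child one_child[2];          /* filled after pass 1 */

    static struct argp common_argp = {
            /* common options, parser, doc strings ... */
            .children = all_children,
    };

    static int parse_args(int argc, char **argv, const char *bench_name)
    {
            /* Pass 1: all children attached; finds the benchmark name and
             * can print a combined --help. */
            if (argp_parse(&common_argp, argc, argv, 0, NULL, NULL))
                    return -1;

            /* Pass 2: only the chosen benchmark's argp is attached, so a
             * shared option name always reaches the right parser. */
            one_child[0] = all_children[!strcmp(bench_name, "bench-b") ? 1 : 0];
            common_argp.children = one_child;
            return argp_parse(&common_argp, argc, argv, 0, NULL, NULL);
    }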
-
Anton Protopopov authored
The hashmap_report_final callback function defined in the benchs/bench_bpf_hashmap_full_update.c file should be static.

Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230213091519.1202813-3-aspsk@isovalent.com
-
Anton Protopopov authored
To call the bpf_hashmap_full_update benchmark, one should say:

    bench bpf-hashmap-ful-update

The patch adds a missing 'l' to the benchmark name.

Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230213091519.1202813-2-aspsk@isovalent.com
-
Hou Tao authored
The reinitialization of the spin-lock in a map value after immediate reuse may corrupt lookup with the BPF_F_LOCK flag and result in a hard lock-up, so add one test case to demonstrate the problem.

Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20230215082132.3856544-3-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
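
A minimal userspace sketch of the BPF_F_LOCK lookup the test exercises (the map-value layout and function name are illustrative):

    #include <linux/bpf.h>
    #include <bpf/bpf.h>

    struct map_value {
            struct bpf_spin_lock lock;  /* taken by the kernel during the copy */
            long payload;
    };

    static int locked_lookup(int map_fd, __u32 key, struct map_value *val)
    {
            /* The kernel holds the element's spin-lock while copying it
             * out, so a torn payload is never observed -- unless the lock
             * itself gets corrupted, as in the bug described above. */
            return bpf_map_lookup_elem_flags(map_fd, &key, val, BPF_F_LOCK);
    }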
-
- Feb 15, 2023
-
-
Eduard Zingerman authored
A test case to verify that a variable offset BPF_ST instruction preserves STACK_ZERO marks when it writes zeros, e.g. in the following situation:

    *(u64*)(r10 - 8) = 0   ; STACK_ZERO marks for fp[-8]
    r0 = random(-7, -1)    ; some random number in range of [-7, -1]
    r0 += r10              ; r0 is now a variable offset pointer to stack
    *(u8*)(r0) = 0         ; BPF_ST writing zero, STACK_ZERO mark for
                           ; fp[-8] should be preserved.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20230214232030.1502829-5-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Check that the verifier tracks the value of 'imm' spilled to the stack by a BPF_ST_MEM instruction. Cover the following cases:

  - write of a non-zero constant to stack;
  - write of a zero constant to stack.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20230214232030.1502829-3-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
For aligned stack writes using the BPF_ST instruction, track stored values in the same way BPF_STX is handled, e.g. make sure that the following commands produce similar verifier knowledge:

    fp[-8] = 42;

    r1 = 42;
    fp[-8] = r1;

This covers two cases:

  - non-null values written to stack are stored as spills of fake registers;
  - null values written to stack are stored as STACK_ZERO marks.

Previously both cases above used STACK_MISC marks instead. Some verifier test cases relied on the old logic to obtain STACK_MISC marks for some stack values. These test cases are updated in the same commit to avoid failures during bisect.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20230214232030.1502829-2-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Alexei Starovoitov authored
The compiler is optimizing out the majority of unref_ptr reads/writes, so the test wasn't testing much. For example, one could delete the '__kptr' tag from 'struct prog_test_ref_kfunc __kptr *unref_ptr;' and the test would still "pass". Convert it to volatile stores. Confirmed by comparing bpf asm before/after.

Fixes: 2cbc469a ("selftests/bpf: Add C tests for kptr")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20230214235051.22938-1-alexei.starovoitov@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
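
A sketch of the volatile-store conversion (the WRITE_ONCE definition mirrors the common kernel idiom; the exact form used by the test may differ):

    #include <bpf/bpf_helpers.h>    /* provides the __kptr tag macro */

    #define WRITE_ONCE(x, val) (*(volatile typeof(x) *)&(x) = (val))

    struct prog_test_ref_kfunc;     /* kernel-side test struct */

    struct map_value {
            struct prog_test_ref_kfunc __kptr *unref_ptr;
    };

    static void store_unref(struct map_value *v, struct prog_test_ref_kfunc *p)
    {
            /* The volatile access keeps the compiler from eliding the
             * store, so the generated bpf asm really exercises the kptr. */
            WRITE_ONCE(v->unref_ptr, p);
    }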
-
Björn Töpel authored
When the BPF selftests are cross-compiled, only a host version of bpftool is built. This version of bpftool is used on the host side to generate various intermediates, e.g., skeletons. The test runners also use bpftool, so the Makefile will symlink bpftool from the selftest/bpf root, where the test runners will look for the tool:

    | $(Q)ln -sf $(if $2,..,.)/tools/build/bpftool/bootstrap/bpftool \
    |	$(OUTPUT)/$(if $2,$2/)bpftool

There are two problems for cross-compilation builds:

  1. There is no native (cross-compilation target) build of bpftool
  2. The bootstrap/bpftool is never cross-compiled (by design)

Make sure that a native/cross-compiled version of bpftool is built, and if CROSS_COMPILE is set, symlink the native/non-bootstrap version.

Acked-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Björn Töpel <bjorn@rivosinc.com>
Link: https://lore.kernel.org/r/20230214161253.183458-1-bjorn@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-