  Feb 16, 2023
    • bpf, test_run: fix &xdp_frame misplacement for LIVE_FRAMES · 6c20822f
      Alexander Lobakin authored
      
      &xdp_buff and &xdp_frame are bound in a way that
      
      xdp_buff->data_hard_start == xdp_frame
      
      It's always the case and e.g. xdp_convert_buff_to_frame() relies on
      this.
      IOW, the following:
      
      	for (u32 i = 0; i < 0xdead; i++) {
      		xdpf = xdp_convert_buff_to_frame(&xdp);
      		xdp_convert_frame_to_buff(xdpf, &xdp);
      	}
      
      shouldn't ever modify @xdpf's contents or the pointer itself.
      However, "live packet" code wrongly treats &xdp_frame as part of its
      context placed *before* the data_hard_start. With such flow,
      data_hard_start is sizeof(*xdpf) off to the right and no longer points
      to the XDP frame.
      
      Instead of replacing `sizeof(ctx)` with `offsetof(ctx, xdpf)` in several
      places and praying that there are no more miscalcs left somewhere in the
      code, unionize ::frm with ::data in a flex array, so that both start
      pointing to the actual data_hard_start and the XDP frame actually
      becomes a part of it, i.e. a part of the headroom, not the context.
      A nice side effect is that the maximum frame size for this mode gets
      increased by 40 bytes, as xdp_buff::frame_sz includes everything from
      data_hard_start (-> includes xdpf already) to the end of XDP/skb shared
      info.
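
      The following standalone sketch (not the kernel code; the types are
      simplified stand-ins) illustrates the layout change described above:

        #include <stdint.h>
        #include <stdio.h>

        struct xdp_frame { uint32_t len, headroom; };  /* stand-in */
        struct xdp_buff  { void *data_hard_start; };   /* stand-in */

        /* old layout: the frame sits in the context, before data_hard_start */
        struct head_old {
          struct xdp_buff ctx;
          struct xdp_frame frm;
          uint8_t data[64];
        };

        /* new layout: the frame is unionized with the data, so data_hard_start
         * and the XDP frame share one address, restoring the invariant above */
        struct head_new {
          struct xdp_buff ctx;
          union {
            struct xdp_frame frame;
            uint8_t data[64];
          };
        };

        int main(void)
        {
          struct head_old o;
          struct head_new n;

          o.ctx.data_hard_start = o.data;
          n.ctx.data_hard_start = n.data;

          printf("old: frame at data_hard_start? %d\n",
                 (void *)&o.frm == o.ctx.data_hard_start);
          printf("new: frame at data_hard_start? %d\n",
                 (void *)&n.frame == n.ctx.data_hard_start);
          return 0;
        }
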
      Also update %MAX_PKT_SIZE accordingly in the selftests code. Leave it
      hardcoded for 64-bit && 4k pages; it can be made more flexible later on.
      
      Minor: align `&head->data` with how `head->frm` is assigned for
      consistency.
      Minor #2: rename 'frm' to 'frame' in &xdp_page_head while at it for
      clarity.
      
      (the issue was found while testing an XDP traffic generator on ice,
       which calls xdp_convert_frame_to_buff() for each XDP frame)
      
      Fixes: b530e9e1 ("bpf: Add "live packet" mode for XDP in BPF_PROG_RUN")
      Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
      Link: https://lore.kernel.org/r/20230215185440.4126672-1-aleksander.lobakin@intel.com

      Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
      6c20822f
    • Merge branch 'New benchmark for hashmap lookups' · d964f09a
      Andrii Nakryiko authored
      Anton Protopopov says:
      
      ====================
      
      Add a new benchmark for hashmap lookups and fix several typos.
      
      In commit 3 I've patched the bench utility so that command line options
      can now be reused by different benchmarks.

      The benchmark itself is added in the last commit, 7. I was using this
      benchmark to test map lookup performance when using a different hash
      function [1]. When run with --quiet, the results can be easily plotted [2].
      The results provided by the benchmark look reasonable and match the
      results of my other benchmarks (which required patching the kernel to
      get actual statistics on map lookups).
      
      Links:
        [1] https://fosdem.org/2023/schedule/event/bpf_hashing/
        [2] https://github.com/aspsk/bpf-bench/tree/master/hashmap-bench
      
      
      
      Changes v1->v2:
      - percpu_times_index[] was of the wrong size (Martin)
      - use base 0 for strtol (Andrii)
      - just use -q without an argument (Andrii)
      - use fewer hacks when parsing arguments (Andrii)
      ====================
      
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      d964f09a
    • selftest/bpf/benchs: Add benchmark for hashmap lookups · f371f2dc
      Anton Protopopov authored
      
      Add a new benchmark which measures the speed of hashmap lookup
      operations. A user can control the following parameters of the benchmark:
      
          * key_size (max 1024): the key size to use
          * max_entries: the hashmap max entries
          * nr_entries: the number of entries to insert/lookup
          * nr_loops: the number of loops for the benchmark
          * map_flags: the hashmap flags passed to BPF_MAP_CREATE
      
      The BPF program performing the benchmark makes two nested bpf_loop calls:
      
          bpf_loop(nr_loops/nr_entries)
                  bpf_loop(nr_entries)
                           bpf_map_lookup()
      
      So the nr_loops determines the number of actual map lookups. All lookups are
      successful.
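
      A hedged sketch of the BPF side of this pattern follows; the map,
      variable, and attach-point names are illustrative assumptions, not the
      exact selftest sources:

        // SPDX-License-Identifier: GPL-2.0
        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        struct {
          __uint(type, BPF_MAP_TYPE_HASH);
          __uint(max_entries, 65536);
          __type(key, __u32);
          __type(value, __u64);
        } hash_map SEC(".maps");

        const volatile __u32 nr_entries = 4096;
        const volatile __u32 nr_loops = 1000000;

        long hits; /* read by the userspace part to compute events/sec */

        static int lookup_cb(__u32 i, void *ctx)
        {
          __u32 key = i; /* keys 0..nr_entries-1 were inserted beforehand */

          if (bpf_map_lookup_elem(&hash_map, &key))
            __sync_fetch_and_add(&hits, 1);
          return 0;
        }

        static int outer_cb(__u32 i, void *ctx)
        {
          bpf_loop(nr_entries, lookup_cb, NULL, 0);
          return 0;
        }

        SEC("tp/syscalls/sys_enter_getpgid")
        int benchmark(void *ctx)
        {
          /* nr_loops map lookups in total, as described above */
          bpf_loop(nr_loops / nr_entries, outer_cb, NULL, 0);
          return 0;
        }

        char LICENSE[] SEC("license") = "GPL";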
      
      Example (the output is generated on an AMD Ryzen 9 3950X machine):
      
          for nr_entries in `seq 4096 4096 65536`; do echo -n "$((nr_entries*100/65536))% full: "; sudo ./bench -d2 -a bpf-hashmap-lookup --key_size=4 --nr_entries=$nr_entries --max_entries=65536 --nr_loops=1000000 --map_flags=0x40 | grep cpu; done
          6% full: cpu01: lookup 50.739M ± 0.018M events/sec (approximated from 32 samples of ~19ms)
          12% full: cpu01: lookup 47.751M ± 0.015M events/sec (approximated from 32 samples of ~20ms)
          18% full: cpu01: lookup 45.153M ± 0.013M events/sec (approximated from 32 samples of ~22ms)
          25% full: cpu01: lookup 43.826M ± 0.014M events/sec (approximated from 32 samples of ~22ms)
          31% full: cpu01: lookup 41.971M ± 0.012M events/sec (approximated from 32 samples of ~23ms)
          37% full: cpu01: lookup 41.034M ± 0.015M events/sec (approximated from 32 samples of ~24ms)
          43% full: cpu01: lookup 39.946M ± 0.012M events/sec (approximated from 32 samples of ~25ms)
          50% full: cpu01: lookup 38.256M ± 0.014M events/sec (approximated from 32 samples of ~26ms)
          56% full: cpu01: lookup 36.580M ± 0.018M events/sec (approximated from 32 samples of ~27ms)
          62% full: cpu01: lookup 36.252M ± 0.012M events/sec (approximated from 32 samples of ~27ms)
          68% full: cpu01: lookup 35.200M ± 0.012M events/sec (approximated from 32 samples of ~28ms)
          75% full: cpu01: lookup 34.061M ± 0.009M events/sec (approximated from 32 samples of ~29ms)
          81% full: cpu01: lookup 34.374M ± 0.010M events/sec (approximated from 32 samples of ~29ms)
          87% full: cpu01: lookup 33.244M ± 0.011M events/sec (approximated from 32 samples of ~30ms)
          93% full: cpu01: lookup 32.182M ± 0.013M events/sec (approximated from 32 samples of ~31ms)
          100% full: cpu01: lookup 31.497M ± 0.016M events/sec (approximated from 32 samples of ~31ms)
      
      Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20230213091519.1202813-8-aspsk@isovalent.com
      f371f2dc
    • selftest/bpf/benchs: Print less if the quiet option is set · a237dda0
      Anton Protopopov authored
      
      The bench utility will print
      
          Setting up benchmark '<bench-name>'...
          Benchmark '<bench-name>' started.
      
      on startup to stdout. Suppress this output if the --quiet option is
      given. This makes it simpler to parse the benchmark output with a script.
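
      A minimal, runnable sketch of the intended behavior (the env/bench names
      are assumptions for illustration, not the bench.c sources):

        #include <stdbool.h>
        #include <stdio.h>

        /* stand-in for the bench utility's global options */
        static struct { bool quiet; } env;

        static void bench_setup(const char *name)
        {
          if (!env.quiet)
            printf("Setting up benchmark '%s'...\n", name);
          /* ... the actual setup still runs regardless of --quiet ... */
          if (!env.quiet)
            printf("Benchmark '%s' started.\n", name);
        }

        int main(int argc, char **argv)
        {
          env.quiet = argc > 1; /* pretend --quiet was passed */
          bench_setup("bpf-hashmap-lookup");
          return 0;
        }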
      
      Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20230213091519.1202813-7-aspsk@isovalent.com
      a237dda0
    • selftest/bpf/benchs: Make quiet option common · 90c22503
      Anton Protopopov authored
      
      The "local-storage-tasks-trace" benchmark has a `--quiet` option. Move it to
      the list of common options, so that the main code and other benchmarks can use
      the (new) env.quiet variable. Patch the run_bench_local_storage_rcu_tasks_trace.sh
      helper script accordingly.
      
      Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20230213091519.1202813-6-aspsk@isovalent.com
      90c22503
    • selftest/bpf/benchs: Remove an unused header · 96445462
      Anton Protopopov authored
      
      The benchs/bench_bpf_hashmap_full_update.c file doesn't set a custom
      argp, so it shouldn't include the <argp.h> header.
      
      Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20230213091519.1202813-5-aspsk@isovalent.com
      96445462
    • selftest/bpf/benchs: Enhance argp parsing · 22ff7aea
      Anton Protopopov authored
      
      To parse the command line, the bench utility uses the argp_parse()
      function. This function takes as an argument a parent 'struct argp'
      structure, which defines common command line options, and an array of
      children 'struct argp' structures, which define additional command line
      options for particular benchmarks. This implementation doesn't allow
      benchmarks to share option names: if two benchmarks want to use, say,
      the --option option, then only one of them will succeed (the first one
      encountered in the array). It would be convenient if the same option
      names could be used in different benchmarks (with the same semantics,
      e.g., --nr_loops=N).

      Fix this by calling the argp_parse() function twice. The first call is
      the same as before, with all children argps attached; it finds the
      benchmark name and can print a combined help message if anything is
      wrong. Given the name, argp_parse can be called a second time, but now
      the children array points only to the selected benchmark, so the correct
      parsers are always called. (If the benchmark has no specific list of
      arguments, then only one call to argp_parse is done.)
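
      A self-contained sketch of this two-pass idea is below; the option,
      variable, and benchmark names are illustrative assumptions, not the
      bench.c sources:

        #include <argp.h>
        #include <stdlib.h>
        #include <string.h>

        static long nr_loops_a, nr_loops_b;
        static const char *bench_name = "bench-a";

        /* two benchmarks that deliberately share the --nr_loops option name */
        static const struct argp_option a_opts[] = {
          { "nr_loops", 'l', "N", 0, "loops for bench-a" },
          {},
        };
        static error_t a_parse(int key, char *arg, struct argp_state *state)
        {
          if (key != 'l')
            return ARGP_ERR_UNKNOWN;
          nr_loops_a = strtol(arg, NULL, 0);
          return 0;
        }
        static struct argp a_argp = { a_opts, a_parse };

        static const struct argp_option b_opts[] = {
          { "nr_loops", 'l', "N", 0, "loops for bench-b" },
          {},
        };
        static error_t b_parse(int key, char *arg, struct argp_state *state)
        {
          if (key != 'l')
            return ARGP_ERR_UNKNOWN;
          nr_loops_b = strtol(arg, NULL, 0);
          return 0;
        }
        static struct argp b_argp = { b_opts, b_parse };

        static struct argp_child all_children[] = {
          { &a_argp, 0, "bench-a options:", 0 },
          { &b_argp, 0, "bench-b options:", 0 },
          {},
        };
        static struct argp_child one_child[2];

        static error_t common_parse(int key, char *arg, struct argp_state *state)
        {
          if (key == ARGP_KEY_ARG) { /* positional arg: the benchmark name */
            bench_name = arg;
            return 0;
          }
          return ARGP_ERR_UNKNOWN;
        }
        static struct argp common_argp = {
          NULL, common_parse, "BENCH_NAME", NULL, all_children,
        };

        int main(int argc, char **argv)
        {
          /* pass 1: all children attached; discovers the benchmark name and
           * can print a combined --help, but shared options go to bench-a */
          argp_parse(&common_argp, argc, argv, 0, NULL, NULL);

          /* pass 2: attach only the selected benchmark so shared option
           * names always reach the right parser */
          one_child[0] = !strcmp(bench_name, "bench-b") ? all_children[1]
                                                        : all_children[0];
          common_argp.children = one_child;
          return argp_parse(&common_argp, argc, argv, 0, NULL, NULL);
        }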
      
      Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20230213091519.1202813-4-aspsk@isovalent.com
      22ff7aea
    • selftest/bpf/benchs: Make a function static in bpf_hashmap_full_update · 2f1c5963
      Anton Protopopov authored
      
      The hashmap_report_final callback function defined in the
      benchs/bench_bpf_hashmap_full_update.c file should be static.
      
      Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20230213091519.1202813-3-aspsk@isovalent.com
      2f1c5963
    • selftest/bpf/benchs: Fix a typo in bpf_hashmap_full_update · 4db98ab4
      Anton Protopopov authored
      
      To call the bpf_hashmap_full_update benchmark, one currently has to say:

          bench bpf-hashmap-ful-update

      The patch adds the missing 'l' to the benchmark name.
      
      Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20230213091519.1202813-2-aspsk@isovalent.com
      4db98ab4
    • Merge branch 'Use __GFP_ZERO in bpf memory allocator' · 3538a0fb
      Alexei Starovoitov authored
      Hou Tao says:
      
      ====================
      
      From: Hou Tao <houtao1@huawei.com>
      
      Hi,
      
      The patchset tries to fix the hard-lockup problem found when checking
      how htab handles element reuse in the bpf memory allocator. The
      immediate reuse of freed elements will reinitialize special fields
      (e.g., bpf_spin_lock) in the htab map value, and it may corrupt the
      lookup procedure done with the BPF_F_LOCK flag, which acquires the
      bpf_spin_lock during value copying, and lead to a hard lock-up as shown
      in patch #2. Patch #1 fixes it by using __GFP_ZERO when allocating the
      object from slab, so the behavior is similar to the preallocated
      hash-table case. Please see individual patches for more details. And
      comments are always welcome.
      
      Regards,
      
      Change Log:
      v1:
        * Use __GFP_ZERO instead of ctor to avoid retpoline overhead (from Alexei)
        * Add comments for check_and_init_map_value() (from Alexei)
        * split __GFP_ZERO patches out of the original patchset to unblock
          the development work of others.
      
      RFC: https://lore.kernel.org/bpf/20221230041151.1231169-1-houtao@huaweicloud.com
      
      
      ====================
      
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      3538a0fb
    • selftests/bpf: Add test case for element reuse in htab map · f88da2d4
      Hou Tao authored
      
      The reinitialization of the spin-lock in the map value after immediate
      reuse may corrupt lookups done with the BPF_F_LOCK flag and result in a
      hard lock-up, so add one test case to demonstrate the problem.
      
      Signed-off-by: Hou Tao <houtao1@huawei.com>
      Link: https://lore.kernel.org/r/20230215082132.3856544-3-houtao@huaweicloud.com

      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      f88da2d4
    • bpf: Zeroing allocated object from slab in bpf memory allocator · 997849c4
      Hou Tao authored
      
      Currently a freed element in the bpf memory allocator may be immediately
      reused. For htab maps, the reuse will reinitialize special fields in the
      map value (e.g., bpf_spin_lock), but the lookup procedure may still
      access these special fields, which may lead to a hard-lockup as shown
      below:
      
       NMI backtrace for cpu 16
       CPU: 16 PID: 2574 Comm: htab.bin Tainted: G             L     6.1.0+ #1
       Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
       RIP: 0010:queued_spin_lock_slowpath+0x283/0x2c0
       ......
       Call Trace:
        <TASK>
        copy_map_value_locked+0xb7/0x170
        bpf_map_copy_value+0x113/0x3c0
        __sys_bpf+0x1c67/0x2780
        __x64_sys_bpf+0x1c/0x20
        do_syscall_64+0x30/0x60
        entry_SYSCALL_64_after_hwframe+0x46/0xb0
       ......
        </TASK>
      
      For htab maps, just like in the preallocated case, there is no need to
      initialize these special fields in the map value again once these fields
      have been initialized. For preallocated htab maps, these fields are
      initialized through __GFP_ZERO in bpf_map_area_alloc(), so do the
      similar thing for non-preallocated htab maps in the bpf memory
      allocator. And there is no need to use __GFP_ZERO for the per-cpu bpf
      memory allocator, because __alloc_percpu_gfp() does it implicitly.
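
      A hedged sketch of the idea (the function and field names are
      approximations, not the exact kernel diff):

        /* in the bpf memory allocator's slab refill path */
        static void *alloc_one(struct bpf_mem_cache *c, int node, gfp_t gfp)
        {
          /* __GFP_ZERO keeps special fields (bpf_spin_lock, timers, ...) in a
           * known state even when the element is reused right after free */
          return kmalloc_node(c->unit_size, gfp | __GFP_ZERO, node);
        }

        /* the per-cpu path needs no explicit __GFP_ZERO:
         * __alloc_percpu_gfp() already returns zeroed memory */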
      
      Fixes: 0fd7c5d4 ("bpf: Optimize call_rcu in non-preallocated hash map.")
      Signed-off-by: Hou Tao <houtao1@huawei.com>
      Link: https://lore.kernel.org/r/20230215082132.3856544-2-houtao@huaweicloud.com

      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      997849c4
  Feb 14, 2023
    • Revert "bpf: Add --skip_encoding_btf_inconsistent_proto, --btf_gen_optimized to pahole flags for v1.25" · 1f5dfcc7
      Alexei Starovoitov authored
      
      This reverts commit 0243d3df.
      
      pahole 1.25 is too aggressive in removing functions.
      With a clang-compiled kernel the following is seen:
      WARN: resolve_btfids: unresolved symbol tcp_reno_cong_avoid
      WARN: resolve_btfids: unresolved symbol dctcp_update_alpha
      WARN: resolve_btfids: unresolved symbol cubictcp_cong_avoid
      WARN: resolve_btfids: unresolved symbol bpf_xdp_metadata_rx_timestamp
      WARN: resolve_btfids: unresolved symbol bpf_xdp_metadata_rx_hash
      WARN: resolve_btfids: unresolved symbol bpf_task_kptr_get
      WARN: resolve_btfids: unresolved symbol bpf_task_acquire_not_zero
      WARN: resolve_btfids: unresolved symbol bpf_rdonly_cast
      WARN: resolve_btfids: unresolved symbol bpf_kfunc_call_test_static_unused_arg
      WARN: resolve_btfids: unresolved symbol bpf_kfunc_call_test_ref
      WARN: resolve_btfids: unresolved symbol bpf_kfunc_call_test_pass_ctx
      WARN: resolve_btfids: unresolved symbol bpf_kfunc_call_test_pass2
      WARN: resolve_btfids: unresolved symbol bpf_kfunc_call_test_pass1
      WARN: resolve_btfids: unresolved symbol bpf_kfunc_call_test_mem_len_pass1
      WARN: resolve_btfids: unresolved symbol bpf_kfunc_call_test_mem_len_fail2
      WARN: resolve_btfids: unresolved symbol bpf_kfunc_call_test_mem_len_fail1
      WARN: resolve_btfids: unresolved symbol bpf_kfunc_call_test_kptr_get
      WARN: resolve_btfids: unresolved symbol bpf_kfunc_call_test_fail3
      WARN: resolve_btfids: unresolved symbol bpf_kfunc_call_test_fail2
      WARN: resolve_btfids: unresolved symbol bpf_kfunc_call_test_acquire
      WARN: resolve_btfids: unresolved symbol bpf_kfunc_call_test2
      WARN: resolve_btfids: unresolved symbol bpf_kfunc_call_test1
      WARN: resolve_btfids: unresolved symbol bpf_kfunc_call_memb_release
      WARN: resolve_btfids: unresolved symbol bpf_kfunc_call_memb1_release
      WARN: resolve_btfids: unresolved symbol bpf_kfunc_call_int_mem_release
      
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      1f5dfcc7
    • selftests/bpf: Clean up dynptr prog_tests · 50a7cedb
      Joanne Koong authored
      
      Clean up prog_tests/dynptr.c by removing the unneeded "expected_err_msg"
      in the dynptr_tests struct, which is a remnant from converting the
      failing test cases to use the generic verification tester.
      
      Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
      Link: https://lore.kernel.org/r/20230214051332.4007131-2-joannelkoong@gmail.com

      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      50a7cedb
    • selftests/bpf: Clean up user_ringbuf, cgrp_kfunc, kfunc_dynptr_param tests · 8032cad1
      Joanne Koong authored
      
      Clean up user_ringbuf, cgrp_kfunc, and kfunc_dynptr_param tests to use
      the generic verification tester for checking verifier rejections.
      The generic verification tester uses btf_decl_tag-based annotations
      for verifying that the tests fail with the expected log messages.
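
      For reference, a hedged illustration of such an annotated program (the
      __failure/__msg macros come from the selftests' bpf_misc.h; the program
      body and expected message below are made up for this example):

        // SPDX-License-Identifier: GPL-2.0
        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>
        #include "bpf_misc.h" /* __failure, __msg btf_decl_tag annotations */

        SEC("?raw_tp")
        __failure __msg("invalid mem access")
        int reject_bad_deref(void *ctx)
        {
          int *p = (int *)1; /* arbitrary scalar used as a pointer */

          return *p; /* the verifier should reject this load */
        }

        char _license[] SEC("license") = "GPL";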
      
      Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
      Acked-by: David Vernet <void@manifault.com>
      Reviewed-by: Roberto Sassu <roberto.sassu@huawei.com>
      Link: https://lore.kernel.org/r/20230214051332.4007131-1-joannelkoong@gmail.com

      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      8032cad1
    • Merge branch 'BPF rbtree next-gen datastructure' · c8ea0997
      Alexei Starovoitov authored
      Dave Marchevsky says:
      
      ====================
      This series adds a rbtree datastructure following the "next-gen
      datastructure" precedent set by recently-added linked-list [0]. This is
      a reimplementation of previous rbtree RFC [1] to use kfunc + kptr
      instead of adding a new map type. This series adds a smaller set of API
      functions than that RFC - just the minimum needed to support current
      cgfifo example scheduler in ongoing sched_ext effort [2], namely:
      
        bpf_rbtree_add
        bpf_rbtree_remove
        bpf_rbtree_first
      
      The meat of this series is bugfixes and verifier infra work to support
      these API functions. Adding more rbtree kfuncs in future patches should
      be straightforward as a result.
      
      First, the series refactors and extends linked_list's release_on_unlock
      logic. The concept of "reference to node that was added to data
      structure" is formalized as "non-owning reference". From linked_list's
      perspective this non-owning reference after
      linked_list_push_{front,back} has the same semantics as release_on_unlock,
      with the addition of writes to such references being valid in the
      critical section. Such references are no longer marked PTR_UNTRUSTED.
      Patches 2 and 13 go into more detail.
      
      The series then adds rbtree API kfuncs and necessary verifier support
      for them - namely support for callback args to kfuncs and some
      non-owning reference interactions that linked_list didn't need.
      
      BPF rbtree uses struct rb_root_cached + existing rbtree lib under the
      hood. From the BPF program writer's perspective, a BPF rbtree is very
      similar to the existing linked list. Consider the following example:
      
        struct node_data {
          long key;
          long data;
          struct bpf_rb_node node;
        }
      
        static bool less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
        {
          struct node_data *node_a;
          struct node_data *node_b;
      
          node_a = container_of(a, struct node_data, node);
          node_b = container_of(b, struct node_data, node);
      
          return node_a->key < node_b->key;
        }
      
        private(A) struct bpf_spin_lock glock;
        private(A) struct bpf_rb_root groot __contains(node_data, node);
      
        /* ... in BPF program */
        struct node_data *n, *m;
        struct bpf_rb_node *res;
      
        n = bpf_obj_new(typeof(*n));
        if (!n)
          /* skip */
        n->key = 5;
        n->data = 10;
      
        bpf_spin_lock(&glock);
        bpf_rbtree_add(&groot, &n->node, less);
        bpf_spin_unlock(&glock);
      
        bpf_spin_lock(&glock);
        res = bpf_rbtree_first(&groot);
        if (!res)
          /* skip */
        res = bpf_rbtree_remove(&groot, res);
        if (!res)
          /* skip */
        bpf_spin_unlock(&glock);
      
        m = container_of(res, struct node_data, node);
        bpf_obj_drop(m);
      
      Some obvious similarities:
      
        * Special bpf_rb_root and bpf_rb_node types have same semantics
          as bpf_list_head and bpf_list_node, respectively
        * __contains is used to associate the node type with the root
        * The spin_lock associated with a rbtree must be held when using
          rbtree API kfuncs
        * Nodes are allocated via bpf_obj_new and dropped via bpf_obj_drop
        * Rbtree takes ownership of node lifetime when a node is added.
          Removing a node gives ownership back to the program, requiring a
          bpf_obj_drop before program exit
      
      Some new additions as well:
      
        * Support for callbacks in kfunc args is added to enable 'less'
          callback use above
        * bpf_rbtree_first is the first graph API function to return a
          non-owning reference instead of converting an arg from own->non-own
        * Because all references to nodes already added to the rbtree are
          non-owning, bpf_rbtree_remove must accept such a reference in order
          to remove it from the tree
      
      Summary of patches:
        Patches 1 - 5 implement the meat of rbtree-specific support in this
        series, gradually building up to implemented kfuncs that verify as
        expected.
      
        Patch 6 adds the bpf_rbtree_{add,first,remove} to bpf_experimental.h.
      
        Patch 7 adds tests, Patch 9 adds documentation.
      
        [0]: lore.kernel.org/bpf/20221118015614.2013203-1-memxor@gmail.com
        [1]: lore.kernel.org/bpf/20220830172759.4069786-1-davemarchevsky@fb.com
        [2]: lore.kernel.org/bpf/20221130082313.3241517-1-tj@kernel.org
      
      Changelog:
      
      v5 -> v6: lore.kernel.org/bpf/20230212092715.1422619-1-davemarchevsky@fb.com/
      
      Patch #'s below refer to the patch's number in v5 unless otherwise stated.
      
      * General / Patch 1
        * Rebase onto latest bpf-next: "bpf: Migrate release_on_unlock logic to non-owning ref semantics"
        * This was Patch 1 of v4, was applied, not included in v6
      
      * Patch 3 - "bpf: Add bpf_rbtree_{add,remove,first} kfuncs"
        * Use bpf_callback_t instead of plain-C fn ptr for bpf_rbtree_add. This
          necessitated having bpf_rbtree_add duplicate rbtree_add's functionality.
          Wrapper function was used w/ internal __bpf_rbtree_add helper so that
          bpf_experimental.h proto could continue to use plain-C fn ptr so BPF progs
          could benefit from typechecking (Alexei)
      
      v4 -> v5: lore.kernel.org/bpf/20230209174144.3280955-1-davemarchevsky@fb.com/
      
      Patch #'s below refer to the patch's number in v4 unless otherwise stated.
      
      * General
        * Rebase onto latest bpf-next: "Merge branch 'bpf, mm: introduce cgroup.memory=nobpf'"
      
      * Patches 1-3 are squashed into "bpf: Migrate release_on_unlock logic to non-owning ref semantics".
        * Added type_is_non_owning_ref helper (Alexei)
        * Use a NON_OWN_REF type flag instead of separate bool (Alexei)
      
      * Patch 8 - "bpf: Special verifier handling for bpf_rbtree_{remove, first}"
        * When doing btf_parse_fields, reject structs with both bpf_list_node and
          bpf_rb_node fields. This is a temporary measure that can be removed after
          "collection identity" followup. See comment added in btf_parse_fields for
          more detail (Kumar, Alexei)
        * Add linked_list BTF test exercising check added to btf_parse_fields
        * Minor changes and moving around of some reg type checks due to NON_OWN_REF type flag
          introduction
      
      * Patch 10 - "selftests/bpf: Add rbtree selftests"
        * Migrate failure tests to RUN_TESTS, __failure, __msg() framework (Alexei)
      
      v3 -> v4: lore.kernel.org/bpf/20230131180016.3368305-1-davemarchevsky@fb.com/
      
      Patch #'s below refer to the patch's number in v3 unless otherwise stated.
      
      * General
        * Don't base this series on "bpf: Refactor release_regno searching logic",
          which was submitted separately as a refactor.
        * Rebase onto latest bpf-next: "samples/bpf: Add openat2() enter/exit tracepoint to syscall_tp sample"
      
      * Patch 2 - "bpf: Improve bpf_reg_state space usage for non-owning ref lock"
        * print_verifier_state change was adding redundant comma after "non_own_ref",
          fix it to put comma in correct place
        * invalidate_non_owning_refs no longer needs to take bpf_active_lock param,
          since any non-owning ref reg in env's cur_state is assumed to use that
          state's active_lock (Alexei)
        * invalidate_non_owning_refs' reg loop should check that the reg being
          inspected is a PTR_TO_BTF_ID before checking reg->non_owning_ref_lock,
          since that field is part of a union and may be filled w/ meaningless bytes
          if reg != PTR_TO_BTF_ID (Alexei)
      
      * Patch 3 - "selftests/bpf: Update linked_list tests for non-owning ref semantics"
        * Change the string searched for by the following tests:
          * linked_list/incorrect_node_off1
          * linked_list/double_push_front
          * linked_list/double_push_back
      
          necessary due to rebase / dropping of "release_regno searching logic" patch
          (see "General" changes)
      
      * Patch 8 - "bpf: Special verifier handling for bpf_rbtree_{remove, first}"
        * Just call invalidate_non_owning_refs w/ env instead of env, lock. (see
          Patch 2 changes)
      
      * Patch 11 - "bpf, documentation: Add graph documentation for non-owning refs"
        * Fix documentation formatting and improve content (David)
        * v3's version of patch 11 was missing some changes, v4's patch 11 is still
          addressing David's feedback from v2
      
      v2 -> v3: lore.kernel.org/bpf/20221217082506.1570898-1-davemarchevsky@fb.com/
      
      Patch #'s below refer to the patch's number in v2 unless otherwise stated.
      
      * Patch 1 - "bpf: Support multiple arg regs w/ ref_obj_id for kfuncs"
        * No longer needed as v3 doesn't have multiple ref_obj_id arg regs
        * The refactoring pieces were submitted separately
          (https://lore.kernel.org/bpf/20230121002417.1684602-1-davemarchevsky@fb.com/)
      
      * Patch 2 - "bpf: Migrate release_on_unlock logic to non-owning ref semantics"
        * Remove KF_RELEASE_NON_OWN flag from list API push methods, just match
          against specific kfuncs for now (Alexei, David)
        * Separate "release non owning reference" logic from KF_RELEASE logic
          (Alexei, David)
        * reg_find_field_offset now correctly tests 'rec' instead of 'reg' after
          calling reg_btf_record (Dan Carpenter)
      
      * New patch added after Patch 2 - "bpf: Improve bpf_reg_state space usage for non-owning ref lock"
        * Eliminates extra bpf_reg_state memory usage by using a bool instead of
          copying lock identity
      
      * Patch 4 - "bpf: rename list_head -> graph_root in field info types"
        * v2's version was applied to bpf-next, not including in respins
      
      * Patch 6 - "bpf: Add bpf_rbtree_{add,remove,first} kfuncs"
        * Remove KF_RELEASE_NON_OWN flag from rbtree_add, just add it to specific
          kfunc matching (Alexei, David)
      
      * Patch 9 - "bpf: Special verifier handling for bpf_rbtree_{remove, first}"
        * Remove KF_INVALIDATE_NON_OWN kfunc flag, just match against specific kfunc
          for now (Alexei, David)
      
      * Patch 11 - "libbpf: Make BTF mandatory if program BTF has spin_lock or alloc_obj type"
        * Drop for now, will submit separately
      
      * Patch 12 - "selftests/bpf: Add rbtree selftests"
        * Some expected-failure tests have different error messages due to "release
          non-owning reference logic" being separated from KF_RELEASE logic in Patch
          2 changes
      
      * Patch 13 - "bpf, documentation: Add graph documentation for non-owning refs"
        * Fix documentation formatting and improve content (David)
      
      v1 -> v2: lore.kernel.org/bpf/20221206231000.3180914-1-davemarchevsky@fb.com/
      
      Series-wide changes:
        * Rename datastructure_{head,node,api} -> graph_{root,node,api} (Alexei)
        * "graph datastructure" in patch summaries to refer to linked_list + rbtree
          instead of "next-gen datastructure" (Alexei)
        * Move from hacky marking of non-owning references as PTR_UNTRUSTED to
          cleaner implementation (Alexei)
        * Add invalidation of non-owning refs to rbtree_remove (Kumar, Alexei)
      
      Patch #'s below refer to the patch's number in v1 unless otherwise stated.
      
      Note that in v1 most of the meaty verifier changes were in the latter half
      of the series. Here, about half of that complexity has been moved to
      "bpf: Migrate release_on_unlock logic to non-owning ref semantics" - was Patch
      3 in v1.
      
      * Patch 1 - "bpf: Loosen alloc obj test in verifier's reg_btf_record"
        * Was applied, dropped from further iterations
      
      * Patch 2 - "bpf: map_check_btf should fail if btf_parse_fields fails"
        * Dropped in favor of verifier check-on-use: when some normal verifier
          checking expects the map to have btf_fields correctly parsed, it won't
          find any and verification will fail
      
      * New patch added before Patch 3 - "bpf: Support multiple arg regs w/ ref_obj_id for kfuncs"
        * Addition of KF_RELEASE_NON_OWN flag, which requires KF_RELEASE, and tagging
          of bpf_list_push_{front,back} KF_RELEASE | KF_RELEASE_NON_OWN, means that
          list-in-list push_{front,back} will trigger "only one ref_obj_id arg reg"
          logic. This is because "head" arg to those functions can be a list-in-list,
          which itself can be an owning reference with ref_obj_id. So need to
          support multiple ref_obj_id for release kfuncs.
      
      * Patch 3 - "bpf: Minor refactor of ref_set_release_on_unlock"
        * Now a major refactor w/ a rename to reflect this
          * "bpf: Migrate release_on_unlock logic to non-owning ref semantics"
        * Replaces release_on_unlock with active_lock logic as discussed in v1
      
      * New patch added after Patch 3 - "selftests/bpf: Update linked_list tests for non_owning_ref logic"
        * Removes "write after push" linked_list failure tests - no longer failure
          scenarios.
      
      * Patch 4 - "bpf: rename list_head -> datastructure_head in field info types"
        * rename to graph_root instead. Similar renamings across the series - see
          series-wide changes.
      
      * Patch 5 - "bpf: Add basic bpf_rb_{root,node} support"
        * OWNER_FIELD_MASK -> GRAPH_ROOT_MASK, OWNEE_FIELD_MASK -> GRAPH_NODE_MASK,
          and change of "owner"/"ownee" in big btf_check_and_fixup_fields comment to
          "root"/"node" (Alexei)
      
      * Patch 6 - "bpf: Add bpf_rbtree_{add,remove,first} kfuncs"
        * bpf_rbtree_remove can no longer return NULL. v2 continues v1's "use type
          system to prevent remove of node that isn't in a datastructure" approach,
          so rbtree_remove should never have been able to return NULL
      
      * Patch 7 - "bpf: Add support for bpf_rb_root and bpf_rb_node in kfunc args"
        * is_bpf_datastructure_api_kfunc -> is_bpf_graph_api_kfunc (Alexei)
      
      * Patch 8 - "bpf: Add callback validation to kfunc verifier logic"
        * Explicitly disallow rbtree_remove in rbtree callback
        * Explicitly disallow bpf_spin_{lock,unlock} call in rbtree callback,
          preventing possibility of "unbalanced" unlock (Alexei)
      
      * Patch 10 - "bpf, x86: BPF_PROBE_MEM handling for insn->off < 0"
        * Now that non-owning refs aren't marked PTR_UNTRUSTED it's not necessary to
          include this patch as part of the series
        * After conversation w/ Alexei, did another pass and submitted as an
          independent series (lore.kernel.org/bpf/20221213182726.325137-1-davemarchevsky@fb.com/)
      
      * Patch 13 - "selftests/bpf: Add rbtree selftests"
        * Since bpf_rbtree_remove can no longer return null, remove null checks
        * Remove test confirming that rbtree_first isn't allowed in callback. We want
          this to be possible
        * Add failure test confirming that rbtree_remove's new non-owning reference
          invalidation behavior behaves as expected
        * Add SEC("license") to rbtree_btf_fail__* progs. They were previously
          failing due to lack of this section. Now they're failing for correct
          reasons.
        * rbtree_btf_fail__add_wrong_type.c - add locking around rbtree_add, rename
          the bpf prog to something reasonable
      
      * New patch added after patch 13 - "bpf, documentation: Add graph documentation for non-owning refs"
        * Summarizes details of owning and non-owning refs which we hashed out in
          v1
      ====================
      
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      c8ea0997
    • bpf, documentation: Add graph documentation for non-owning refs · c31315c3
      Dave Marchevsky authored
      
      It is difficult to intuit the semantics of owning and non-owning
      references from verifier code. In order to keep the high-level details
      from being lost in the mailing list, this patch adds documentation
      explaining semantics and details.
      
      The target audience of the doc added in this patch is folks working on
      BPF internals, as the focus is on "what should the verifier do here".
      Via reorganization or copy-and-paste, much of the content can probably
      be repurposed for a BPF program writer audience as well.
      
      Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
      Link: https://lore.kernel.org/r/20230214004017.2534011-9-davemarchevsky@fb.com

      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      c31315c3