  1. Feb 17, 2023
  2. Oct 03, 2022
    • stackdepot: reserve 5 extra bits in depot_stack_handle_t · 83a4f1ef
      Alexander Potapenko authored
      Some users (currently only KMSAN) may want to use spare bits in
      depot_stack_handle_t.  Let them do so by adding @extra_bits to
      __stack_depot_save() to store arbitrary flags, and providing
      stack_depot_get_extra_bits() to retrieve those flags.
      
      Also adapt KASAN to the new prototype by passing extra_bits=0, as KASAN
      does not intend to store additional information in the stack handle.
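      The split between the handle payload and the spare bits can be sketched
      in plain C. The 5-bit field width and the helper names follow the commit
      text, but the standalone typedef and the exact bit layout here are
      illustrative assumptions, not the kernel's actual union layout:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch only: shows the idea of reserving 5 spare bits in a 32-bit
 * depot_stack_handle_t for caller-defined flags (as KMSAN wants).
 * The real layout lives in lib/stackdepot.c. */

#define STACK_DEPOT_EXTRA_BITS 5
typedef uint32_t depot_stack_handle_t;

/* What __stack_depot_save(..., extra_bits) conceptually does with the
 * flags: fold them into the otherwise-unused top bits of the handle. */
static depot_stack_handle_t set_extra_bits(depot_stack_handle_t handle,
                                           unsigned int extra_bits)
{
    return handle |
           ((depot_stack_handle_t)extra_bits << (32 - STACK_DEPOT_EXTRA_BITS));
}

/* Counterpart of stack_depot_get_extra_bits(): recover the flags. */
static unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle)
{
    return handle >> (32 - STACK_DEPOT_EXTRA_BITS);
}
```

      KASAN passing extra_bits=0 then simply leaves the top bits clear, so
      the handle is unchanged.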
      
      Link: https://lkml.kernel.org/r/20220915150417.722975-3-glider@google.com

      Signed-off-by: Alexander Potapenko <glider@google.com>
      Reviewed-by: Marco Elver <elver@google.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Eric Biggers <ebiggers@google.com>
      Cc: Eric Biggers <ebiggers@kernel.org>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Ilya Leoshkevich <iii@linux.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      83a4f1ef
  3. Jul 18, 2022
    • lib/stackdepot: replace CONFIG_STACK_HASH_ORDER with automatic sizing · f9987921
      Vlastimil Babka authored
      As Linus explained [1], setting the stackdepot hash table size as a config
      option is suboptimal, especially as stackdepot becomes a dependency of
      less "expert" subsystems than initially (e.g.  DRM, networking,
      SLUB_DEBUG):
      
      : (a) it introduces a new compile-time question that isn't sane to ask
      : a regular user, but is now exposed to regular users.
      
      : (b) this by default uses 1MB of memory for a feature that didn't in
      : the past, so now if you have small machines you need to make sure you
      : make a special kernel config for them.
      
      Ideally we would employ rhashtable for fully automatic resizing, which
      should be feasible for many of the new users, but problematic for the
      original users with restricted context that call __stack_depot_save() with
      can_alloc == false, i.e.  KASAN.
      
      However we can easily remove the config option and scale the hash table
      automatically with system memory.  The STACK_HASH_MASK constant becomes
      stack_hash_mask variable and is used only in one mask operation, so the
      overhead should be negligible to none.  For early allocation we can employ
      the existing alloc_large_system_hash() function and perform similar
      scaling for the late allocation.
      
      The existing limits of the config option (between 4k and 1M buckets) are
      preserved, and the scaling factor is set to one bucket per 16kB of
      memory, so on 64-bit the maximum of 1M buckets (8MB of memory) is
      reached with a 16GB system, while a 1GB system will use 512kB.
      
      Because KASAN is reported to need the maximum number of buckets even with
      smaller amounts of memory [2], set it as such when kasan_enabled().
      
      If needed, the automatic scaling could be complemented with a boot-time
      kernel parameter, but it feels pointless to add it without a specific use
      case.
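      The sizing rule described above can be sketched as follows. The function
      and constant names are illustrative stand-ins, but the numbers (one
      bucket per 16kB, the 4k..1M bucket limits, KASAN forcing the maximum)
      come from the commit text:

```c
#include <assert.h>
#include <stdint.h>

#define MIN_BUCKETS      (4u * 1024u)      /* lower config-option limit */
#define MAX_BUCKETS      (1024u * 1024u)   /* upper config-option limit */
#define BYTES_PER_BUCKET (16u * 1024u)     /* one bucket per 16kB of RAM */

/* Round down to a power of two so the table can be indexed with a mask. */
static uint32_t round_down_pow2(uint64_t x)
{
    uint32_t p = 1;
    while ((uint64_t)p * 2 <= x)
        p *= 2;
    return p;
}

/* Scale the bucket count with system memory, clamped to the old limits. */
static uint32_t stack_hash_buckets(uint64_t mem_bytes, int kasan_enabled)
{
    uint64_t n;

    if (kasan_enabled)          /* KASAN is reported to need the max [2] */
        return MAX_BUCKETS;

    n = mem_bytes / BYTES_PER_BUCKET;
    if (n < MIN_BUCKETS)
        n = MIN_BUCKETS;
    if (n > MAX_BUCKETS)
        n = MAX_BUCKETS;
    return round_down_pow2(n);
}
```

      The stack_hash_mask variable the commit mentions is then simply the
      bucket count minus one, used in the single mask operation.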
      
      [1] https://lore.kernel.org/all/CAHk-=wjC5nS+fnf6EzRD9yQRJApAhxx7gRB87ZV+pAWo9oVrTg@mail.gmail.com/
      [2] https://lore.kernel.org/all/CACT4Y+Y4GZfXOru2z5tFPzFdaSUd+GFc6KVL=bsa0+1m197cQQ@mail.gmail.com/
      
      Link: https://lkml.kernel.org/r/20220620150249.16814-1-vbabka@suse.cz

      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Marco Elver <elver@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      f9987921
  4. Apr 06, 2022
    • lib/stackdepot: allow requesting early initialization dynamically · a5f1783b
      Vlastimil Babka authored
      In a later patch we want to add stackdepot support for object owner
      tracking in slub caches, which is enabled by slub_debug boot parameter.
      This creates a bootstrap problem as some caches are created early in
      boot when slab_is_available() is false and thus stack_depot_init()
      tries to use memblock. But, as reported by Hyeonggon Yoo [1] we are
      already beyond memblock_free_all(). Ideally memblock allocation should
      fail, yet it succeeds, but later the system crashes, which is a
      separately handled issue.
      
      To resolve this bootstrap issue in a robust way, this patch adds another
      way to request stack_depot_early_init(), which happens at a well-defined
      point of time. In addition to build-time CONFIG_STACKDEPOT_ALWAYS_INIT,
      code that's e.g. processing boot parameters (which happens early enough)
      can call a new function stack_depot_want_early_init(), which sets a flag
      that stack_depot_early_init() will check.
      
      In this patch we also convert page_owner to this approach.  While it
      doesn't have the same bootstrap issue as SLUB, it's also functionality
      enabled by a boot param and can thus request stack_depot_early_init()
      with memblock allocation instead of later initialization with
      kvmalloc().
      
      As suggested by Mike, make stack_depot_early_init() only attempt
      memblock allocation and stack_depot_init() only attempt kvmalloc().
      Also change the latter to kvcalloc(). In both cases we can lose the
      explicit array zeroing, which the allocations do already.
      
      As suggested by Marco, provide empty implementations of the init
      functions for !CONFIG_STACKDEPOT builds to simplify the callers.
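      A minimal model of that request flag might look like this; memblock is
      simulated with calloc() and everything here is a simplified,
      hypothetical stand-in for lib/stackdepot.c:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

static bool want_early_init;
static void *stack_table;

/* Called by code that parses boot parameters, which runs early enough
 * to always precede stack_depot_early_init(). */
void stack_depot_want_early_init(void)
{
    want_early_init = true;
}

/* Called once at a well-defined point early in boot. Only this path
 * would attempt the memblock allocation (simulated with calloc here);
 * the late path, stack_depot_init(), would use kvcalloc() instead. */
void stack_depot_early_init(void)
{
    if (want_early_init && stack_table == NULL)
        stack_table = calloc(4096, sizeof(void *));
}
```

      With this split, a boot param handler such as slub_debug's just calls
      stack_depot_want_early_init() and never races memblock teardown.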
      
      [1] https://lore.kernel.org/all/YhnUcqyeMgCrWZbd@ip-172-31-19-208.ap-northeast-1.compute.internal/

      Reported-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Suggested-by: Mike Rapoport <rppt@linux.ibm.com>
      Suggested-by: Marco Elver <elver@google.com>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Marco Elver <elver@google.com>
      Reviewed-and-tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: David Rientjes <rientjes@google.com>
      a5f1783b
  5. Jan 22, 2022
    • lib/stackdepot: always do filter_irq_stacks() in stack_depot_save() · e9400660
      Marco Elver authored
      The non-interrupt portion of interrupt stack traces before interrupt
      entry is usually arbitrary.  Therefore, saving stack traces of
      interrupts (that include entries before interrupt entry) to stack depot
      leads to unbounded stackdepot growth.
      
      As such, use of filter_irq_stacks() is a requirement to ensure
      stackdepot can efficiently deduplicate interrupt stacks.
      
      Looking through all current users of stack_depot_save(), none (except
      KASAN) pass the stack trace through filter_irq_stacks() before passing
      it on to stack_depot_save().
      
      Rather than adding filter_irq_stacks() to all current users of
      stack_depot_save(), it became clear that stack_depot_save() should
      simply do filter_irq_stacks().
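      The effect of filter_irq_stacks() can be sketched like this. The kernel
      checks each frame address against the interrupt-entry code sections; a
      flag array stands in for that check here, and the function name is an
      illustrative variant:

```c
#include <assert.h>

/* Truncate a stack trace after the first interrupt-entry frame, so the
 * arbitrary pre-interrupt portion of the trace can't defeat
 * stackdepot's deduplication. */
static unsigned int filter_irq_stack_sketch(const int *is_irq_entry_frame,
                                            unsigned int nr_entries)
{
    for (unsigned int i = 0; i < nr_entries; i++) {
        /* Keep the interrupt-entry frame itself; drop everything below. */
        if (is_irq_entry_frame[i])
            return i + 1;
    }
    return nr_entries;  /* no interrupt entry: keep the whole trace */
}
```

      Doing this inside stack_depot_save() means every caller gets bounded
      deduplication for free, instead of each one remembering to filter.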
      
      Link: https://lkml.kernel.org/r/20211130095727.2378739-1-elver@google.com

      Signed-off-by: Marco Elver <elver@google.com>
      Reviewed-by: Alexander Potapenko <glider@google.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Vijayanand Jitta <vjitta@codeaurora.org>
      Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>
      Cc: Imran Khan <imran.f.khan@oracle.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Jani Nikula <jani.nikula@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e9400660
    • lib/stackdepot: allow optional init and stack_table allocation by kvmalloc() · 2dba5eb1
      Vlastimil Babka authored
      Currently, enabling CONFIG_STACKDEPOT means its stack_table will be
      allocated from memblock, even if stack depot ends up not actually used.
      The default size of stack_table is 4MB on 32-bit, 8MB on 64-bit.
      
      This is fine for use-cases such as KASAN which is also a config option
      and has overhead on its own.  But it's an issue for functionality that
      has to be actually enabled on boot (page_owner) or depends on hardware
      (GPU drivers) and thus the memory might be wasted.  This was raised as
      an issue [1] when attempting to add stackdepot support for SLUB's debug
      object tracking functionality.  It's common to build kernels with
      CONFIG_SLUB_DEBUG and enable slub_debug on boot only when needed, or
      create only specific kmem caches with debugging for testing purposes.
      
      It would thus be more efficient if stackdepot's table was allocated only
      when actually going to be used.  This patch thus makes the allocation
      (and whole stack_depot_init() call) optional:
      
       - Add a CONFIG_STACKDEPOT_ALWAYS_INIT flag to keep using the current
         well-defined point of allocation as part of mem_init(). Make
         CONFIG_KASAN select this flag.
      
       - Other users have to call stack_depot_init() as part of their own init
         when it's determined that stack depot will actually be used. This may
         depend on both config and runtime conditions. Convert current users
         which are page_owner and several in the DRM subsystem. Same will be
         done for SLUB later.
      
       - Because the init might now be called after the boot-time memblock
         allocation has given all memory to the buddy allocator, change
         stack_depot_init() to allocate stack_table with kvmalloc() when
         memblock is no longer available. Also handle allocation failure by
         disabling stackdepot (could have theoretically happened even with
         memblock allocation previously), and don't unnecessarily align the
         memblock allocation to its own size anymore.
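      The early-vs-late decision in the third point can be sketched as below.
      Both allocators are simulated with calloc() and all names are
      simplified, hypothetical stand-ins for the kernel's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

static bool slab_available;        /* stands in for slab_is_available() */
static bool stack_depot_disabled;
static void *depot_table;

static void *memblock_alloc_sim(size_t bytes)    { return calloc(bytes, 1); }
static void *kvcalloc_sim(size_t n, size_t size) { return calloc(n, size); }

void stack_depot_init_sketch(void)
{
    const size_t buckets = 4096;

    if (depot_table != NULL)
        return;                       /* init is one-shot */

    if (!slab_available)
        /* Early boot: memblock still owns all memory. */
        depot_table = memblock_alloc_sim(buckets * sizeof(void *));
    else
        /* Late init: the buddy allocator took over, so use kvmalloc(). */
        depot_table = kvcalloc_sim(buckets, sizeof(void *));

    if (depot_table == NULL)
        stack_depot_disabled = true;  /* disable stackdepot, don't crash */
}
```

      Disabling on failure rather than crashing matters because the late
      kvmalloc() path can legitimately fail under memory pressure.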
      
      [1] https://lore.kernel.org/all/CAMuHMdW=eoVzM1Re5FVoEN87nKfiLmM2+Ah7eNu2KXEhCvbZyA@mail.gmail.com/
      
      Link: https://lkml.kernel.org/r/20211013073005.11351-1-vbabka@suse.cz

      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: Marco Elver <elver@google.com> # stackdepot
      Cc: Marco Elver <elver@google.com>
      Cc: Vijayanand Jitta <vjitta@codeaurora.org>
      Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Cc: Maxime Ripard <mripard@kernel.org>
      Cc: Thomas Zimmermann <tzimmermann@suse.de>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Oliver Glitta <glittao@gmail.com>
      Cc: Imran Khan <imran.f.khan@oracle.com>
      From: Colin Ian King <colin.king@canonical.com>
      Subject: lib/stackdepot: fix spelling mistake and grammar in pr_err message
      
      There is a spelling mistake of the word allocation, so fix it and
      re-phrase the message to make it easier to read.
      
      Link: https://lkml.kernel.org/r/20211015104159.11282-1-colin.king@canonical.com

      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      From: Vlastimil Babka <vbabka@suse.cz>
      Subject: lib/stackdepot: allow optional init and stack_table allocation by kvmalloc() - fixup
      
      On FLATMEM, we call page_ext_init_flatmem_late() just before
      kmem_cache_init(), which means stack_depot_init() (called by page owner
      init) will not properly recognize that it should use kvmalloc() and not
      memblock_alloc().  memblock_alloc() will also not issue a warning and
      will return a block of memory that can be invalid and cause a kernel
      page fault when saving stacks, as reported by the kernel test robot [1].
      
      Fix this by moving page_ext_init_flatmem_late() below kmem_cache_init() so
      that slab_is_available() is true during stack_depot_init().  SPARSEMEM
      doesn't have this issue, as it doesn't do page_ext_init_flatmem_late(),
      but a different page_ext_init() even later in the boot process.
      
      Thanks to Mike Rapoport for pointing out the FLATMEM init ordering issue.
      
      While at it, also actually resolve a checkpatch warning in stack_depot_init()
      from DRM CI, which was supposed to be in the original patch already.
      
      [1] https://lore.kernel.org/all/20211014085450.GC18719@xsang-OptiPlex-9020/
      
      Link: https://lkml.kernel.org/r/6abd9213-19a9-6d58-cedc-2414386d2d81@suse.cz

      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reported-by: kernel test robot <oliver.sang@intel.com>
      Cc: Mike Rapoport <rppt@kernel.org>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      From: Vlastimil Babka <vbabka@suse.cz>
      Subject: lib/stackdepot: allow optional init and stack_table allocation by kvmalloc() - fixup3
      
      Due to cd06ab2f ("drm/locking: add backtrace for locking contended
      locks without backoff") landing recently to -next adding a new stack depot
      user in drivers/gpu/drm/drm_modeset_lock.c we need to add an appropriate
      call to stack_depot_init() there as well.
      
      Link: https://lkml.kernel.org/r/2a692365-cfa1-64f2-34e0-8aa5674dce5e@suse.cz

      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Jani Nikula <jani.nikula@intel.com>
      Cc: Naresh Kamboju <naresh.kamboju@linaro.org>
      Cc: Marco Elver <elver@google.com>
      Cc: Vijayanand Jitta <vjitta@codeaurora.org>
      Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Cc: Maxime Ripard <mripard@kernel.org>
      Cc: Thomas Zimmermann <tzimmermann@suse.de>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Oliver Glitta <glittao@gmail.com>
      Cc: Imran Khan <imran.f.khan@oracle.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      From: Vlastimil Babka <vbabka@suse.cz>
      Subject: lib/stackdepot: allow optional init and stack_table allocation by kvmalloc() - fixup4
      
      Due to 4e66934e ("lib: add reference counting tracking
      infrastructure") landing recently to net-next adding a new stack depot
      user in lib/ref_tracker.c we need to add an appropriate call to
      stack_depot_init() there as well.
      
      Link: https://lkml.kernel.org/r/45c1b738-1a2f-5b5f-2f6d-86fab206d01c@suse.cz

      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Cc: Jiri Slaby <jirislaby@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2dba5eb1
  6. Nov 09, 2021
  7. Nov 06, 2021
  8. Jul 08, 2021
  9. May 07, 2021
  10. Feb 26, 2021
  11. Dec 16, 2020
  12. Apr 07, 2020
  13. Feb 21, 2020
  14. Aug 19, 2019
  15. May 30, 2019
  16. Apr 29, 2019
    • lib/stackdepot: Remove obsolete functions · 56d8f079
      Thomas Gleixner authored
      
      No more users of the struct stack_trace based interfaces.
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Alexander Potapenko <glider@google.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: linux-mm@kvack.org
      Cc: David Rientjes <rientjes@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: kasan-dev@googlegroups.com
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: iommu@lists.linux-foundation.org
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Johannes Thumshirn <jthumshirn@suse.de>
      Cc: David Sterba <dsterba@suse.com>
      Cc: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <josef@toxicpanda.com>
      Cc: linux-btrfs@vger.kernel.org
      Cc: dm-devel@redhat.com
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: intel-gfx@lists.freedesktop.org
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Cc: dri-devel@lists.freedesktop.org
      Cc: David Airlie <airlied@linux.ie>
      Cc: Jani Nikula <jani.nikula@linux.intel.com>
      Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Cc: Tom Zanussi <tom.zanussi@linux.intel.com>
      Cc: Miroslav Benes <mbenes@suse.cz>
      Cc: linux-arch@vger.kernel.org
      Link: https://lkml.kernel.org/r/20190425094803.617937448@linutronix.de
      56d8f079
    • lib/stackdepot: Provide functions which operate on plain storage arrays · c0cfc337
      Thomas Gleixner authored
      
      The struct stack_trace indirection in the stack depot functions is a truly
      pointless exercise which requires horrible code at the callsites.
      
      Provide interfaces based on plain storage arrays.
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Alexander Potapenko <glider@google.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: linux-mm@kvack.org
      Cc: David Rientjes <rientjes@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: kasan-dev@googlegroups.com
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: iommu@lists.linux-foundation.org
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Johannes Thumshirn <jthumshirn@suse.de>
      Cc: David Sterba <dsterba@suse.com>
      Cc: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <josef@toxicpanda.com>
      Cc: linux-btrfs@vger.kernel.org
      Cc: dm-devel@redhat.com
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: intel-gfx@lists.freedesktop.org
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Cc: dri-devel@lists.freedesktop.org
      Cc: David Airlie <airlied@linux.ie>
      Cc: Jani Nikula <jani.nikula@linux.intel.com>
      Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Cc: Tom Zanussi <tom.zanussi@linux.intel.com>
      Cc: Miroslav Benes <mbenes@suse.cz>
      Cc: linux-arch@vger.kernel.org
      Link: https://lkml.kernel.org/r/20190425094801.414574828@linutronix.de
      c0cfc337
  17. Feb 07, 2018
  18. Nov 11, 2016
  19. Oct 28, 2016
    • lib/stackdepot.c: bump stackdepot capacity from 16MB to 128MB · 02754e0a
      Dmitry Vyukov authored
      KASAN uses stackdepot to memorize stacks for all kmalloc/kfree calls.
      Current stackdepot capacity is 16MB (1024 top level entries x 4 pages on
      second level).  Size of each stack is (num_frames + 3) * sizeof(long).
      Which gives us ~84K stacks.  This capacity was chosen empirically and it
      is enough to run kernel normally.
      
      However, when lots of configs are enabled and a fuzzer tries to maximize
      code coverage, it easily hits the limit within tens of minutes.  I've
      tested for a long time with the number of top-level entries bumped 4x
      (4096), and I think I've seen overflow only once.  But I don't have all
      configs enabled and code coverage has not reached maximum yet.  So bump
      it 8x to 8192.
      
      Since we have two-level table, memory cost of this is very moderate --
      currently the top-level table is 8KB, with this patch it is 64KB, which
      is negligible under KASAN.
      
      Here is some approx math.
      
      128MB allows us to memorize ~670K stacks (assuming stack is ~200b).
      I've grepped kernel for kmalloc|kfree|kmem_cache_alloc|kmem_cache_free|
      kzalloc|kstrdup|kstrndup|kmemdup and it gives ~60K matches.  Most of
      alloc/free call sites are reachable with only one stack.  But some
      utility functions can have large fanout.  Assuming average fanout is 5x,
      total number of alloc/free stacks is ~300K.
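      The capacity arithmetic above can be written out in executable form;
      the helper names are illustrative, and all numbers come from the text
      (8192 top-level entries, 4 second-level pages each, 4096-byte pages,
      ~200 bytes per saved stack):

```c
#include <assert.h>

enum {
    TOP_LEVEL_ENTRIES = 8192,  /* bumped 8x from 1024 */
    PAGES_PER_ENTRY   = 4,
    PAGE_SIZE_BYTES   = 4096,
};

/* Total storage: 8192 x 4 x 4096 bytes = 128 MB. */
static unsigned long depot_capacity_bytes(void)
{
    return (unsigned long)TOP_LEVEL_ENTRIES * PAGES_PER_ENTRY
                                            * PAGE_SIZE_BYTES;
}

/* Approximate number of stacks that fit, given an average stack size. */
static unsigned long depot_approx_stacks(unsigned long bytes_per_stack)
{
    return depot_capacity_bytes() / bytes_per_stack;
}
```

      At ~200 bytes per stack this gives roughly 670K stacks, comfortably
      above the ~300K estimated from the grep and fanout argument above.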
      
      Link: http://lkml.kernel.org/r/1476458416-122131-1-git-send-email-dvyukov@google.com

      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Baozeng Ding <sploving1@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      02754e0a
  20. Jul 29, 2016
  21. May 06, 2016