- Feb 23, 2023
Josh Poimboeuf authored
There have been some recently reported ORC unwinder warnings like:

  WARNING: can't access registers at entry_SYSCALL_64_after_hwframe+0x63/0xcd
  WARNING: stack going in the wrong direction? at __sys_setsockopt+0x2c6/0x5b0 net/socket.c:2271

And a KASAN warning:

  BUG: KASAN: stack-out-of-bounds in unwind_next_frame (arch/x86/include/asm/ptrace.h:136 arch/x86/kernel/unwind_orc.c:455)

It turns out the 'signal' bit isn't getting propagated from the unwind hints to the ORC entries, making the unwinder confused at times.

Fixes: ffb1b4a4 ("x86/unwind/orc: Add 'signal' field to ORC metadata")
Reported-by: kernel test robot <oliver.sang@intel.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/97eef9db60cd86d376a9a40d49d77bb67a8f6526.1676579666.git.jpoimboe@kernel.org
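The fix is essentially one line of propagation in objtool's ORC generation. A minimal C sketch of the idea (struct layouts trimmed, helper name illustrative, not the literal patch):

    #include <stdint.h>

    typedef uint8_t u8;
    typedef int16_t s16;

    struct unwind_hint {            /* trimmed to the relevant fields */
            u8  sp_reg;
            s16 sp_offset;
            u8  type;
            u8  signal;             /* IP is literal, don't decrement */
    };

    struct orc_entry {              /* likewise trimmed */
            s16 sp_offset;
            u8  sp_reg:4;
            u8  type:3;
            u8  signal:1;
    };

    /* Every hint field must survive the conversion into the ORC entry;
     * 'signal' was the one being dropped. */
    static void orc_from_hint(struct orc_entry *orc,
                              const struct unwind_hint *hint)
    {
            orc->sp_reg    = hint->sp_reg;
            orc->sp_offset = hint->sp_offset;
            orc->type      = hint->type;
            orc->signal    = hint->signal;
    }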
-
Peter Zijlstra authored
Replace the instruction::list by allocating instructions in arrays of 256 entries and stringing them together by (amortized) find_insn(). This shrinks struct instruction by 16 bytes and brings it down to 128:

 struct instruction {
-        struct list_head list; /* 0 16 */
-        struct hlist_node hash; /* 16 16 */
-        struct list_head call_node; /* 32 16 */
-        struct section * sec; /* 48 8 */
-        long unsigned int offset; /* 56 8 */
-        /* --- cacheline 1 boundary (64 bytes) --- */
-        long unsigned int immediate; /* 64 8 */
-        unsigned int len; /* 72 4 */
-        u8 type; /* 76 1 */
-
-        /* Bitfield combined with previous fields */
+        struct hlist_node hash; /* 0 16 */
+        struct list_head call_node; /* 16 16 */
+        struct section * sec; /* 32 8 */
+        long unsigned int offset; /* 40 8 */
+        long unsigned int immediate; /* 48 8 */
+        u8 len; /* 56 1 */
+        u8 prev_len; /* 57 1 */
+        u8 type; /* 58 1 */
+        s8 instr; /* 59 1 */
+        u32 idx:8; /* 60: 0 4 */
+        u32 dead_end:1; /* 60: 8 4 */
+        u32 ignore:1; /* 60: 9 4 */
+        u32 ignore_alts:1; /* 60:10 4 */
+        u32 hint:1; /* 60:11 4 */
+        u32 save:1; /* 60:12 4 */
+        u32 restore:1; /* 60:13 4 */
+        u32 retpoline_safe:1; /* 60:14 4 */
+        u32 noendbr:1; /* 60:15 4 */
+        u32 entry:1; /* 60:16 4 */
+        u32 visited:4; /* 60:17 4 */
+        u32 no_reloc:1; /* 60:21 4 */
-        u16 dead_end:1; /* 76: 8 2 */
-        u16 ignore:1; /* 76: 9 2 */
-        u16 ignore_alts:1; /* 76:10 2 */
-        u16 hint:1; /* 76:11 2 */
-        u16 save:1; /* 76:12 2 */
-        u16 restore:1; /* 76:13 2 */
-        u16 retpoline_safe:1; /* 76:14 2 */
-        u16 noendbr:1; /* 76:15 2 */
-        u16 entry:1; /* 78: 0 2 */
-        u16 visited:4; /* 78: 1 2 */
-        u16 no_reloc:1; /* 78: 5 2 */
+        /* XXX 10 bits hole, try to pack */
-        /* XXX 2 bits hole, try to pack */
-        /* Bitfield combined with next fields */
-
-        s8 instr; /* 79 1 */
-        struct alt_group * alt_group; /* 80 8 */
-        struct instruction * jump_dest; /* 88 8 */
-        struct instruction * first_jump_src; /* 96 8 */
+        /* --- cacheline 1 boundary (64 bytes) --- */
+        struct alt_group * alt_group; /* 64 8 */
+        struct instruction * jump_dest; /* 72 8 */
+        struct instruction * first_jump_src; /* 80 8 */
         union {
-                struct symbol * _call_dest; /* 104 8 */
-                struct reloc * _jump_table; /* 104 8 */
-        }; /* 104 8 */
-        struct alternative * alts; /* 112 8 */
-        struct symbol * sym; /* 120 8 */
-        /* --- cacheline 2 boundary (128 bytes) --- */
-        struct stack_op * stack_ops; /* 128 8 */
-        struct cfi_state * cfi; /* 136 8 */
+                struct symbol * _call_dest; /* 88 8 */
+                struct reloc * _jump_table; /* 88 8 */
+        }; /* 88 8 */
+        struct alternative * alts; /* 96 8 */
+        struct symbol * sym; /* 104 8 */
+        struct stack_op * stack_ops; /* 112 8 */
+        struct cfi_state * cfi; /* 120 8 */

-        /* size: 144, cachelines: 3, members: 28 */
-        /* sum members: 142 */
-        /* sum bitfield members: 14 bits, bit holes: 1, sum bit holes: 2 bits */
-        /* last cacheline: 16 bytes */
+        /* size: 128, cachelines: 2, members: 29 */
+        /* sum members: 124 */
+        /* sum bitfield members: 22 bits, bit holes: 1, sum bit holes: 10 bits */
 };

pre:  5:38.18 real, 213.25 user, 124.90 sys, 23449040 mem
post: 5:03.34 real, 210.75 user,  88.80 sys, 20241232 mem

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org> # build only
Tested-by: Thomas Weißschuh <linux@weissschuh.net> # compile and run
Link: https://lore.kernel.org/r/20230208172245.851307606@infradead.org
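To make the allocation scheme concrete, here is a minimal sketch of chunked allocation with an amortized successor lookup. The chunk size and the idx:8 field follow the description above; the helper names and fallback are assumptions, not the literal patch:

    #include <stdlib.h>

    #define INSN_CHUNK 256

    struct insn {
            unsigned long offset;
            unsigned int  len;
            unsigned int  idx:8;    /* slot within a 256-entry chunk */
            /* ... */
    };

    /* One calloc() per 256 instructions replaces a 16-byte list_head
     * embedded in every single instruction. */
    static struct insn *alloc_insn_chunk(void)
    {
            return calloc(INSN_CHUNK, sizeof(struct insn));
    }

    /* Amortized successor: usually just insn+1 within the chunk; only
     * crossing a chunk boundary needs a hash lookup (find_insn() in
     * objtool's terms). */
    static struct insn *next_insn_sketch(struct insn *insn)
    {
            if (insn->idx != INSN_CHUNK - 1)
                    return insn + 1;
            return NULL;    /* would be find_insn(file, sec, insn->offset + insn->len) */
    }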
-
Peter Zijlstra authored
Things like ALTERNATIVE_{2,3}() generate multiple alternatives in the same place; objtool would override the first orig_alt_group with the second (or third), failing to check the CFI among all the different variants.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org> # build only
Tested-by: Thomas Weißschuh <linux@weissschuh.net> # compile and run
Link: https://lore.kernel.org/r/20230208172245.711471461@infradead.org
-
Peter Zijlstra authored
The instruction call_dest and jump_table members can never be used at the same time; their usage depends on type.

 struct instruction {
         struct list_head list; /* 0 16 */
         struct hlist_node hash; /* 16 16 */
         struct list_head call_node; /* 32 16 */
         struct section * sec; /* 48 8 */
         long unsigned int offset; /* 56 8 */
         /* --- cacheline 1 boundary (64 bytes) --- */
         long unsigned int immediate; /* 64 8 */
         unsigned int len; /* 72 4 */
         u8 type; /* 76 1 */

         /* Bitfield combined with previous fields */

         u16 dead_end:1; /* 76: 8 2 */
         u16 ignore:1; /* 76: 9 2 */
         u16 ignore_alts:1; /* 76:10 2 */
         u16 hint:1; /* 76:11 2 */
         u16 save:1; /* 76:12 2 */
         u16 restore:1; /* 76:13 2 */
         u16 retpoline_safe:1; /* 76:14 2 */
         u16 noendbr:1; /* 76:15 2 */
         u16 entry:1; /* 78: 0 2 */
         u16 visited:4; /* 78: 1 2 */
         u16 no_reloc:1; /* 78: 5 2 */

         /* XXX 2 bits hole, try to pack */
         /* Bitfield combined with next fields */

         s8 instr; /* 79 1 */
         struct alt_group * alt_group; /* 80 8 */
-        struct symbol * call_dest; /* 88 8 */
-        struct instruction * jump_dest; /* 96 8 */
-        struct instruction * first_jump_src; /* 104 8 */
-        struct reloc * jump_table; /* 112 8 */
-        struct alternative * alts; /* 120 8 */
+        struct instruction * jump_dest; /* 88 8 */
+        struct instruction * first_jump_src; /* 96 8 */
+        union {
+                struct symbol * _call_dest; /* 104 8 */
+                struct reloc * _jump_table; /* 104 8 */
+        }; /* 104 8 */
+        struct alternative * alts; /* 112 8 */
+        struct symbol * sym; /* 120 8 */
         /* --- cacheline 2 boundary (128 bytes) --- */
-        struct symbol * sym; /* 128 8 */
-        struct stack_op * stack_ops; /* 136 8 */
-        struct cfi_state * cfi; /* 144 8 */
+        struct stack_op * stack_ops; /* 128 8 */
+        struct cfi_state * cfi; /* 136 8 */

-        /* size: 152, cachelines: 3, members: 29 */
-        /* sum members: 150 */
+        /* size: 144, cachelines: 3, members: 28 */
+        /* sum members: 142 */
         /* sum bitfield members: 14 bits, bit holes: 1, sum bit holes: 2 bits */
-        /* last cacheline: 24 bytes */
+        /* last cacheline: 16 bytes */
 };

pre:  5:39.35 real, 215.58 user, 123.69 sys, 23448736 mem
post: 5:38.18 real, 213.25 user, 124.90 sys, 23449040 mem

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org> # build only
Tested-by: Thomas Weißschuh <linux@weissschuh.net> # compile and run
Link: https://lore.kernel.org/r/20230208172245.640914454@infradead.org
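Since only one union member is live at a time, access typically goes through type-checked helpers. A self-contained sketch modeled on the underscore-prefixed fields above; the specific INSN_* checks are assumptions about which instruction types carry a jump table:

    #include <stddef.h>

    enum insn_type { INSN_CALL, INSN_CALL_DYNAMIC, INSN_JUMP_DYNAMIC /* ... */ };

    struct symbol;
    struct reloc;

    struct insn {                           /* trimmed to the union */
            unsigned char type;
            union {
                    struct symbol *_call_dest;
                    struct reloc  *_jump_table;
            };
    };

    /* Accessors keyed on type keep the union honest: dynamic jumps and
     * calls carry a jump table, everything else a call destination. */
    static inline struct symbol *insn_call_dest(struct insn *insn)
    {
            if (insn->type == INSN_JUMP_DYNAMIC ||
                insn->type == INSN_CALL_DYNAMIC)
                    return NULL;            /* _jump_table is live */
            return insn->_call_dest;
    }

    static inline struct reloc *insn_jump_table(struct insn *insn)
    {
            if (insn->type == INSN_JUMP_DYNAMIC ||
                insn->type == INSN_CALL_DYNAMIC)
                    return insn->_jump_table;
            return NULL;                    /* _call_dest is live */
    }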
-
Peter Zijlstra authored
Instead of caching the reloc for each instruction, only keep a negative cache of not having a reloc (by far the most common case).

 struct instruction {
         struct list_head list; /* 0 16 */
         struct hlist_node hash; /* 16 16 */
         struct list_head call_node; /* 32 16 */
         struct section * sec; /* 48 8 */
         long unsigned int offset; /* 56 8 */
         /* --- cacheline 1 boundary (64 bytes) --- */
         long unsigned int immediate; /* 64 8 */
         unsigned int len; /* 72 4 */
         u8 type; /* 76 1 */

         /* Bitfield combined with previous fields */

         u16 dead_end:1; /* 76: 8 2 */
         u16 ignore:1; /* 76: 9 2 */
         u16 ignore_alts:1; /* 76:10 2 */
         u16 hint:1; /* 76:11 2 */
         u16 save:1; /* 76:12 2 */
         u16 restore:1; /* 76:13 2 */
         u16 retpoline_safe:1; /* 76:14 2 */
         u16 noendbr:1; /* 76:15 2 */
         u16 entry:1; /* 78: 0 2 */
         u16 visited:4; /* 78: 1 2 */
+        u16 no_reloc:1; /* 78: 5 2 */

-        /* XXX 3 bits hole, try to pack */
+        /* XXX 2 bits hole, try to pack */
         /* Bitfield combined with next fields */

         s8 instr; /* 79 1 */
         struct alt_group * alt_group; /* 80 8 */
         struct symbol * call_dest; /* 88 8 */
         struct instruction * jump_dest; /* 96 8 */
         struct instruction * first_jump_src; /* 104 8 */
         struct reloc * jump_table; /* 112 8 */
-        struct reloc * reloc; /* 120 8 */
+        struct alternative * alts; /* 120 8 */
         /* --- cacheline 2 boundary (128 bytes) --- */
-        struct alternative * alts; /* 128 8 */
-        struct symbol * sym; /* 136 8 */
-        struct stack_op * stack_ops; /* 144 8 */
-        struct cfi_state * cfi; /* 152 8 */
+        struct symbol * sym; /* 128 8 */
+        struct stack_op * stack_ops; /* 136 8 */
+        struct cfi_state * cfi; /* 144 8 */

-        /* size: 160, cachelines: 3, members: 29 */
-        /* sum members: 158 */
-        /* sum bitfield members: 13 bits, bit holes: 1, sum bit holes: 3 bits */
-        /* last cacheline: 32 bytes */
+        /* size: 152, cachelines: 3, members: 29 */
+        /* sum members: 150 */
+        /* sum bitfield members: 14 bits, bit holes: 1, sum bit holes: 2 bits */
+        /* last cacheline: 24 bytes */
 };

pre:  5:48.89 real, 220.96 user, 127.55 sys, 24834672 mem
post: 5:39.35 real, 215.58 user, 123.69 sys, 23448736 mem

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org> # build only
Tested-by: Thomas Weißschuh <linux@weissschuh.net> # compile and run
Link: https://lore.kernel.org/r/20230208172245.572145269@infradead.org
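The shape of the negative cache, as a hedged sketch: find_reloc_by_dest_range() is a real objtool helper, while the surrounding struct and function are trimmed illustrations of the pattern rather than the literal patch. A miss costs one bit; a hit is simply looked up again on demand:

    struct elf;
    struct section;
    struct reloc;

    struct insn {
            struct section *sec;
            unsigned long   offset;
            unsigned int    len;
            unsigned int    no_reloc:1;     /* the 1-bit negative cache */
    };

    extern struct reloc *find_reloc_by_dest_range(struct elf *elf,
                                                  struct section *sec,
                                                  unsigned long offset,
                                                  unsigned int len);

    static struct reloc *insn_reloc(struct elf *elf, struct insn *insn)
    {
            struct reloc *reloc;

            if (insn->no_reloc)             /* cached miss: most instructions */
                    return 0;

            reloc = find_reloc_by_dest_range(elf, insn->sec,
                                             insn->offset, insn->len);
            if (!reloc)
                    insn->no_reloc = 1;     /* remember the miss, not the hit */

            return reloc;
    }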
-
Peter Zijlstra authored
Since we don't have that many types in enum insn_type, force it into a u8 and re-arrange members to get rid of the holes; saves another 8 bytes.

 struct instruction {
         struct list_head list; /* 0 16 */
         struct hlist_node hash; /* 16 16 */
         struct list_head call_node; /* 32 16 */
         struct section * sec; /* 48 8 */
         long unsigned int offset; /* 56 8 */
         /* --- cacheline 1 boundary (64 bytes) --- */
-        unsigned int len; /* 64 4 */
-        enum insn_type type; /* 68 4 */
-        long unsigned int immediate; /* 72 8 */
-        u16 dead_end:1; /* 80: 0 2 */
-        u16 ignore:1; /* 80: 1 2 */
-        u16 ignore_alts:1; /* 80: 2 2 */
-        u16 hint:1; /* 80: 3 2 */
-        u16 save:1; /* 80: 4 2 */
-        u16 restore:1; /* 80: 5 2 */
-        u16 retpoline_safe:1; /* 80: 6 2 */
-        u16 noendbr:1; /* 80: 7 2 */
-        u16 entry:1; /* 80: 8 2 */
+        long unsigned int immediate; /* 64 8 */
+        unsigned int len; /* 72 4 */
+        u8 type; /* 76 1 */

-        /* XXX 7 bits hole, try to pack */
+        /* Bitfield combined with previous fields */

-        s8 instr; /* 82 1 */
-        u8 visited; /* 83 1 */
+        u16 dead_end:1; /* 76: 8 2 */
+        u16 ignore:1; /* 76: 9 2 */
+        u16 ignore_alts:1; /* 76:10 2 */
+        u16 hint:1; /* 76:11 2 */
+        u16 save:1; /* 76:12 2 */
+        u16 restore:1; /* 76:13 2 */
+        u16 retpoline_safe:1; /* 76:14 2 */
+        u16 noendbr:1; /* 76:15 2 */
+        u16 entry:1; /* 78: 0 2 */
+        u16 visited:4; /* 78: 1 2 */

-        /* XXX 4 bytes hole, try to pack */
+        /* XXX 3 bits hole, try to pack */
+        /* Bitfield combined with next fields */

-        struct alt_group * alt_group; /* 88 8 */
-        struct symbol * call_dest; /* 96 8 */
-        struct instruction * jump_dest; /* 104 8 */
-        struct instruction * first_jump_src; /* 112 8 */
-        struct reloc * jump_table; /* 120 8 */
+        s8 instr; /* 79 1 */
+        struct alt_group * alt_group; /* 80 8 */
+        struct symbol * call_dest; /* 88 8 */
+        struct instruction * jump_dest; /* 96 8 */
+        struct instruction * first_jump_src; /* 104 8 */
+        struct reloc * jump_table; /* 112 8 */
+        struct reloc * reloc; /* 120 8 */
         /* --- cacheline 2 boundary (128 bytes) --- */
-        struct reloc * reloc; /* 128 8 */
-        struct alternative * alts; /* 136 8 */
-        struct symbol * sym; /* 144 8 */
-        struct stack_op * stack_ops; /* 152 8 */
-        struct cfi_state * cfi; /* 160 8 */
+        struct alternative * alts; /* 128 8 */
+        struct symbol * sym; /* 136 8 */
+        struct stack_op * stack_ops; /* 144 8 */
+        struct cfi_state * cfi; /* 152 8 */

-        /* size: 168, cachelines: 3, members: 29 */
-        /* sum members: 162, holes: 1, sum holes: 4 */
-        /* sum bitfield members: 9 bits, bit holes: 1, sum bit holes: 7 bits */
-        /* last cacheline: 40 bytes */
+        /* size: 160, cachelines: 3, members: 29 */
+        /* sum members: 158 */
+        /* sum bitfield members: 13 bits, bit holes: 1, sum bit holes: 3 bits */
+        /* last cacheline: 32 bytes */
 };

pre:  5:48.86 real, 220.30 user, 128.34 sys, 24834672 mem
post: 5:48.89 real, 220.96 user, 127.55 sys, 24834672 mem

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org> # build only
Tested-by: Thomas Weißschuh <linux@weissschuh.net> # compile and run
Link: https://lore.kernel.org/r/20230208172245.501847188@infradead.org
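The trick relies on an enum member being int-sized by default. A tiny illustrative sketch (names are placeholders, not the objtool definitions):

    typedef unsigned char u8;

    enum insn_type { INSN_OTHER, INSN_CALL, INSN_RETURN /* ... */ };

    struct insn_sketch {
            /* 'enum insn_type type;' would occupy 4 bytes; with far
             * fewer than 256 types the value fits a u8, which can also
             * pack tightly against the adjacent bitfields. */
            u8 type;
    };

    /* Assignment narrows implicitly and stays safe while every
     * enumerator is below 256: */
    /*     insn->type = INSN_CALL; */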
-
Peter Zijlstra authored
 struct instruction {
         struct list_head list; /* 0 16 */
         struct hlist_node hash; /* 16 16 */
         struct list_head call_node; /* 32 16 */
         struct section * sec; /* 48 8 */
         long unsigned int offset; /* 56 8 */
         /* --- cacheline 1 boundary (64 bytes) --- */
         unsigned int len; /* 64 4 */
         enum insn_type type; /* 68 4 */
         long unsigned int immediate; /* 72 8 */
         u16 dead_end:1; /* 80: 0 2 */
         u16 ignore:1; /* 80: 1 2 */
         u16 ignore_alts:1; /* 80: 2 2 */
         u16 hint:1; /* 80: 3 2 */
         u16 save:1; /* 80: 4 2 */
         u16 restore:1; /* 80: 5 2 */
         u16 retpoline_safe:1; /* 80: 6 2 */
         u16 noendbr:1; /* 80: 7 2 */
         u16 entry:1; /* 80: 8 2 */

         /* XXX 7 bits hole, try to pack */

         s8 instr; /* 82 1 */
         u8 visited; /* 83 1 */

         /* XXX 4 bytes hole, try to pack */

         struct alt_group * alt_group; /* 88 8 */
         struct symbol * call_dest; /* 96 8 */
         struct instruction * jump_dest; /* 104 8 */
         struct instruction * first_jump_src; /* 112 8 */
         struct reloc * jump_table; /* 120 8 */
         /* --- cacheline 2 boundary (128 bytes) --- */
         struct reloc * reloc; /* 128 8 */
-        struct list_head alts; /* 136 16 */
-        struct symbol * sym; /* 152 8 */
-        struct stack_op * stack_ops; /* 160 8 */
-        struct cfi_state * cfi; /* 168 8 */
+        struct alternative * alts; /* 136 8 */
+        struct symbol * sym; /* 144 8 */
+        struct stack_op * stack_ops; /* 152 8 */
+        struct cfi_state * cfi; /* 160 8 */

-        /* size: 176, cachelines: 3, members: 29 */
-        /* sum members: 170, holes: 1, sum holes: 4 */
+        /* size: 168, cachelines: 3, members: 29 */
+        /* sum members: 162, holes: 1, sum holes: 4 */
         /* sum bitfield members: 9 bits, bit holes: 1, sum bit holes: 7 bits */
-        /* last cacheline: 48 bytes */
+        /* last cacheline: 40 bytes */
 };

pre:  5:58.50 real, 229.64 user, 128.65 sys, 26221520 mem
post: 5:48.86 real, 220.30 user, 128.34 sys, 24834672 mem

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org> # build only
Tested-by: Thomas Weißschuh <linux@weissschuh.net> # compile and run
Link: https://lore.kernel.org/r/20230208172245.430556498@infradead.org
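This and the stack_ops change below share one technique: replacing a doubly-linked 16-byte list_head with a single-linked 8-byte head pointer when nothing ever needs reverse traversal or removal from the middle. A self-contained sketch of the pattern (struct names and the prepend helper are illustrative):

    #include <stdbool.h>

    struct insn;                            /* opaque here */

    struct alternative {
            struct alternative *next;       /* single link replaces list_head */
            struct insn *insn;
            bool skip_orig;
    };

    /* The owner now holds a plain head pointer (8 bytes, was 16): */
    struct insn_alts {
            struct alternative *alts;
    };

    static void add_alt(struct insn_alts *owner, struct alternative *alt)
    {
            alt->next   = owner->alts;      /* prepend; order not relied on */
            owner->alts = alt;
    }

    /* Iteration stays a one-liner:
     *     for (alt = owner->alts; alt; alt = alt->next) ...
     */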
-
Peter Zijlstra authored
As shown in the sketch after the previous entry, the same single-linked-list conversion applies to stack_ops:

 struct instruction {
         struct list_head list; /* 0 16 */
         struct hlist_node hash; /* 16 16 */
         struct list_head call_node; /* 32 16 */
         struct section * sec; /* 48 8 */
         long unsigned int offset; /* 56 8 */
         /* --- cacheline 1 boundary (64 bytes) --- */
         unsigned int len; /* 64 4 */
         enum insn_type type; /* 68 4 */
         long unsigned int immediate; /* 72 8 */
         u16 dead_end:1; /* 80: 0 2 */
         u16 ignore:1; /* 80: 1 2 */
         u16 ignore_alts:1; /* 80: 2 2 */
         u16 hint:1; /* 80: 3 2 */
         u16 save:1; /* 80: 4 2 */
         u16 restore:1; /* 80: 5 2 */
         u16 retpoline_safe:1; /* 80: 6 2 */
         u16 noendbr:1; /* 80: 7 2 */
         u16 entry:1; /* 80: 8 2 */

         /* XXX 7 bits hole, try to pack */

         s8 instr; /* 82 1 */
         u8 visited; /* 83 1 */

         /* XXX 4 bytes hole, try to pack */

         struct alt_group * alt_group; /* 88 8 */
         struct symbol * call_dest; /* 96 8 */
         struct instruction * jump_dest; /* 104 8 */
         struct instruction * first_jump_src; /* 112 8 */
         struct reloc * jump_table; /* 120 8 */
         /* --- cacheline 2 boundary (128 bytes) --- */
         struct reloc * reloc; /* 128 8 */
         struct list_head alts; /* 136 16 */
         struct symbol * sym; /* 152 8 */
-        struct list_head stack_ops; /* 160 16 */
-        struct cfi_state * cfi; /* 176 8 */
+        struct stack_op * stack_ops; /* 160 8 */
+        struct cfi_state * cfi; /* 168 8 */

-        /* size: 184, cachelines: 3, members: 29 */
-        /* sum members: 178, holes: 1, sum holes: 4 */
+        /* size: 176, cachelines: 3, members: 29 */
+        /* sum members: 170, holes: 1, sum holes: 4 */
         /* sum bitfield members: 9 bits, bit holes: 1, sum bit holes: 7 bits */
-        /* last cacheline: 56 bytes */
+        /* last cacheline: 48 bytes */
 };

pre:  5:58.22 real, 226.69 user, 131.22 sys, 26221520 mem
post: 5:58.50 real, 229.64 user, 128.65 sys, 26221520 mem

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org> # build only
Tested-by: Thomas Weißschuh <linux@weissschuh.net> # compile and run
Link: https://lore.kernel.org/r/20230208172245.362196959@infradead.org
-
Peter Zijlstra authored
In preparation for changing struct instruction around a bit, avoid passing its members by pointer and instead pass the whole thing. A cleanup in its own right too.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org> # build only
Tested-by: Thomas Weißschuh <linux@weissschuh.net> # compile and run
Link: https://lore.kernel.org/r/20230208172245.291087549@infradead.org
-
- Feb 20, 2023
Arnd Bergmann authored
A lot of the tsan helpers are already exempt from the UACCESS warnings, but some more functions were added that need the same thing:

  kernel/kcsan/core.o: warning: objtool: __tsan_volatile_read16+0x0: call to __tsan_unaligned_read16() with UACCESS enabled
  kernel/kcsan/core.o: warning: objtool: __tsan_volatile_write16+0x0: call to __tsan_unaligned_write16() with UACCESS enabled
  vmlinux.o: warning: objtool: __tsan_unaligned_volatile_read16+0x4: call to __tsan_unaligned_read16() with UACCESS enabled
  vmlinux.o: warning: objtool: __tsan_unaligned_volatile_write16+0x4: call to __tsan_unaligned_write16() with UACCESS enabled

As Marco points out, these functions don't even call each other explicitly; instead gcc (but not clang) notices the functions being identical and turns one symbol into a direct branch to the other.

Link: https://lkml.kernel.org/r/20230215130058.3836177-4-arnd@kernel.org
Fixes: 75d75b7a ("kcsan: Support distinguishing volatile accesses")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Marco Elver <elver@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Josh Poimboeuf <jpoimboe@kernel.org>
Cc: Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
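objtool tracks which functions may legitimately be called with UACCESS enabled in a whitelist (uaccess_safe_builtin[] in tools/objtool/check.c). A sketch of the kind of addition this fix makes; treat the excerpt as illustrative rather than a verbatim copy of the table:

    /* Functions callable with UACCESS enabled; the unaligned tsan
     * helpers named in the warnings above are the new entries. */
    static const char *uaccess_safe_builtin[] = {
            /* KCSAN */
            "__tsan_volatile_read16",
            "__tsan_volatile_write16",
            "__tsan_unaligned_read16",      /* added */
            "__tsan_unaligned_write16",     /* added */
            /* ... */
            NULL
    };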
-
- Feb 13, 2023
Juergen Gross authored
Mark xen_pv_play_dead() and, related to it, xen_cpu_bringup_again() as "__noreturn".

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221125063248.30256-3-jgross@suse.com
Signed-off-by: Juergen Gross <jgross@suse.com>
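A sketch of what the annotation looks like; the parameter list of xen_cpu_bringup_again() is an assumption here, not taken from the patch:

    /* kernel's compiler_attributes.h provides this; shown for context */
    #define __noreturn __attribute__((__noreturn__))

    void __noreturn xen_pv_play_dead(void);
    void __noreturn xen_cpu_bringup_again(unsigned long stack /* assumed */);

With the attribute in place, objtool knows control never flows past a call to these functions and stops warning about the "unreachable" code that follows.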
-
- Feb 11, 2023
Josh Poimboeuf authored
Add a 'signal' field which allows unwind hints to specify whether the instruction pointer should be taken literally (as for most interrupts and exceptions) rather than decremented (as for call stack return addresses) when used to find the next ORC entry.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/d2c5ec4d83a45b513d8fd72fab59f1a8cfa46871.1676068346.git.jpoimboe@kernel.org
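For context, a trimmed sketch of where the field lives; field order and surrounding members are assumptions based on the description, not the literal header:

    #include <stdint.h>

    typedef uint32_t u32;
    typedef int16_t  s16;
    typedef uint8_t  u8;

    struct unwind_hint {
            u32 ip;
            s16 sp_offset;
            u8  sp_reg;
            u8  type;
            u8  signal;     /* 1: IP is literal (interrupt/exception);
                             * 0: IP is a return address, decrement it to
                             *    land inside the calling instruction */
    };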
-
Peter Zijlstra authored
For mysterious raisins I listed the new __asan_mem*() functions as being uaccess safe; this is giving objtool failures on KASAN builds because these functions call out to the actual __mem*() functions, which are not marked uaccess safe. Removing it doesn't make the robots unhappy.

Fixes: 69d4c0d3 ("entry, kasan, x86: Disallow overriding mem*() functions")
Reported-by: "Paul E. McKenney" <paulmck@kernel.org>
Bisected-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20230126182302.GA687063@paulmck-ThinkPad-P17-Gen-1
-
- Feb 01, 2023
Thomas Weißschuh authored
Reduce the size of struct special_alt from 72 to 64 bytes.

Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
Link: https://lore.kernel.org/r/20221216-objtool-memory-v2-7-17968f85a464@weissschuh.net
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
-
Thomas Weißschuh authored
Reduce the size of struct symbol on x86_64 from 208 to 200 bytes. This structure is allocated a lot and never freed. This reduces maximum memory usage while processing vmlinux.o from 2919716 KB to 2917988 KB (-0.5%) on my notebook's "localmodconfig".

Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
Link: https://lore.kernel.org/r/20221216-objtool-memory-v2-6-17968f85a464@weissschuh.net
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
-
Thomas Weißschuh authored
By using calloc() instead of malloc() in a loop, libc does not have to keep around bookkeeping information for each single structure. This reduces maximum memory usage while processing vmlinux.o from 3153325 KB to 3035668 KB (-3.7%) on my notebook's "localmodconfig". Note this introduces memory leaks, because some additional structs get added to the lists later after reading the symbols and sections from the original object. Luckily we don't really care about memory leaks in objtool.

Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
Link: https://lore.kernel.org/r/20221216-objtool-memory-v2-3-17968f85a464@weissschuh.net
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
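The before/after shape of the change, as a minimal sketch (struct and counts are placeholders):

    #include <stdlib.h>

    struct symbol { long placeholder; /* ... */ };

    /* Before: one malloc() per symbol in a loop, so one allocator
     * header per object. */
    static struct symbol *alloc_one(void)
    {
            return malloc(sizeof(struct symbol));
    }

    /* After: a single zeroed allocation for the whole batch -- one
     * allocator header total. Individual entries can no longer be
     * free()d, which objtool never does anyway. */
    static struct symbol *alloc_batch(size_t nr)
    {
            return calloc(nr, sizeof(struct symbol));
    }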
-
Thomas Weißschuh authored
It is not used outside of builtin-check.c. Also remove the unused declaration from builtin.h.

Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
Link: https://lore.kernel.org/r/20221216-objtool-memory-v2-2-17968f85a464@weissschuh.net
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
-
Thomas Weißschuh authored
This data is not modified and not used outside of special.c. Also adapt its users to the constness.

Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
Link: https://lore.kernel.org/r/20221216-objtool-memory-v2-1-17968f85a464@weissschuh.net
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
-
Ian Rogers authored
HOSTCC is always wanted when building objtool. Setting CC to HOSTCC happens after tools/scripts/Makefile.include is included, meaning flags (like CFLAGS) are set assuming CC is, say, gcc, but CC can later be set to HOSTCC, which may be clang. tools/scripts/Makefile.include is needed for host setup and common macros in objtool's Makefile. Rather than override the CC variable to HOSTCC, just pass CC as HOSTCC to the sub-makes of Makefile.build, the libsubcmd builds and also to the linkage step.

Signed-off-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20230126190606.40739-4-irogers@google.com
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
-
- Jan 31, 2023
Ian Rogers authored
The Q variable was being used but never correctly set up. Add the setup and use Q in place of '@'.

Signed-off-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20230126190606.40739-3-irogers@google.com
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
-
Ian Rogers authored
Including from tools/lib can create inadvertent dependencies. Install libsubcmd in the objtool build and then include the headers from there.

Signed-off-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20230126190606.40739-2-irogers@google.com
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
-
- Jan 21, 2023
Michal Kubecek authored
Some out-of-tree modules still do not use the module_init() / module_exit() macros and simply create functions with the magic names init_module() and cleanup_module() instead. As a result, these functions are not recognized as indirect call targets by objtool and such a module fails to load into an IBT-enabled kernel. This old way is not even documented any more, but it is cleaner to issue a warning than to let the module fail on load without an obvious reason.

Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20230118105215.B9DA960514@lion.mk-sys.cz
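For reference, the documented way looks like this; the module name is made up for illustration:

    #include <linux/init.h>
    #include <linux/module.h>

    static int __init mymod_init(void)
    {
            return 0;
    }

    static void __exit mymod_exit(void)
    {
    }

    /* These macros generate the init_module()/cleanup_module() entry
     * points with the annotations objtool and IBT sealing expect, unlike
     * hand-rolled functions with the magic names. */
    module_init(mymod_init);
    module_exit(mymod_exit);

    MODULE_LICENSE("GPL");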
-
- Jan 13, 2023
Peter Zijlstra authored
KASAN cannot just hijack the mem*() functions, it needs to emit __asan_mem*() variants if it wants instrumentation (other sanitizers already do this).

  vmlinux.o: warning: objtool: sync_regs+0x24: call to memcpy() leaves .noinstr.text section
  vmlinux.o: warning: objtool: vc_switch_off_ist+0xbe: call to memcpy() leaves .noinstr.text section
  vmlinux.o: warning: objtool: fixup_bad_iret+0x36: call to memset() leaves .noinstr.text section
  vmlinux.o: warning: objtool: __sev_get_ghcb+0xa0: call to memcpy() leaves .noinstr.text section
  vmlinux.o: warning: objtool: __sev_put_ghcb+0x35: call to memcpy() leaves .noinstr.text section

Remove the weak aliases to ensure nobody hijacks these functions and add them to the noinstr section.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195542.028523143@infradead.org
-
Peter Zijlstra authored
clang-14 allyesconfig gives:

  vmlinux.o: warning: objtool: emulator_cmpxchg_emulated+0x705: call to __ubsan_handle_load_invalid_value() with UACCESS enabled
  vmlinux.o: warning: objtool: paging64_update_accessed_dirty_bits+0x39e: call to __ubsan_handle_load_invalid_value() with UACCESS enabled
  vmlinux.o: warning: objtool: paging32_update_accessed_dirty_bits+0x390: call to __ubsan_handle_load_invalid_value() with UACCESS enabled
  vmlinux.o: warning: objtool: ept_update_accessed_dirty_bits+0x43f: call to __ubsan_handle_load_invalid_value() with UACCESS enabled

Add the required eflags save/restore and whitelist the thing.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195541.906007455@infradead.org
-
Peter Zijlstra authored
Idle code is much like entry code in that RCU isn't available. As such, add a little validation.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.373461409@infradead.org
-
- Jan 09, 2023
Nicholas Piggin authored
Hand-written asm often contains non-function symbols in executable sections. One such usage is _end symbols for finding the size of instruction blocks for runtime processing; optprobe_template_end is one example, and it causes the warning:

  objtool: optprobe_template_end(): can't find starting instruction

This is because the symbol happens to be at the end of the file (and therefore at the end of a section in the object file). So ignore end-of-section STT_NOTYPE symbols instead of bailing out because an instruction can't be found. While we're here, add a more descriptive warning for STT_FUNC symbols found at the end of a section.

[ This also solves a PowerPC regression reported by Sathvika Vasireddy. ]

Reported-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reported-by: Sathvika Vasireddy <sv@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Sathvika Vasireddy <sv@linux.ibm.com>
Link: https://lore.kernel.org/r/20221220101323.3119939-1-npiggin@gmail.com
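To illustrate why such symbols exist at all: the end marker is there only so C code can compute a template's size at run time. The symbol names below come from the warning above; the declarations and helper are schematic:

    /* Labels defined in hand-written asm, not functions: */
    extern char optprobe_template_entry[];
    extern char optprobe_template_end[];

    /* The only reason the end marker exists: */
    static unsigned long optprobe_template_size(void)
    {
            return (unsigned long)(optprobe_template_end -
                                   optprobe_template_entry);
    }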
-
- Jan 07, 2023
Miaoqian Lin authored
strdup() allocates memory for key_name. We need to release that memory on the following error paths. Add free() to avoid a memory leak.

Fixes: 1e7e4788 ("x86/static_call: Add inline static call implementation for x86-64")
Signed-off-by: Miaoqian Lin <linmq006@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20221205080642.558583-1-linmq006@gmail.com
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
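The schematic shape of this class of fix: once key_name is strdup()ed, every error exit after that point must free() it. The function and do_setup() stand-in are illustrative, not the patched code:

    #include <stdlib.h>
    #include <string.h>

    static int do_setup(const char *key) { (void)key; return 0; }  /* stand-in */

    static int add_key(const char *name)
    {
            char *key_name = strdup(name);
            if (!key_name)
                    return -1;

            if (do_setup(key_name) < 0) {
                    free(key_name);         /* previously leaked here */
                    return -1;
            }

            free(key_name);
            return 0;
    }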
-
- Jan 05, 2023
Borislav Petkov (AMD) authored
Add a struct alt_instr.flags field which will contain different flags controlling alternatives patching behavior. The initial idea was to be able to specify it as a separate macro parameter, but that would mean touching all possible invocations of the alternatives macros and thus a lot of churn. What is more, as PeterZ suggested, being able to say ALT_NOT(feature) is very readable and explains exactly what is meant. So make the feature field a u32 where the patching flags are the upper u16 part of the dword quantity while the lower u16 word is the feature. The highest feature number currently is 0x26a (i.e., word 19) so there is plenty of space. If that becomes insufficient, the field can be extended to u64, which will then give struct alt_instr the nice size of 16 bytes (it is 14 bytes currently). There should be no functional changes resulting from this.

Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/Y6RCoJEtxxZWwotd@zn.tnic
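A sketch of the u32 split described above: low 16 bits hold the CPU feature, high 16 bits the patching flags. The macro names follow the commit's ALT_NOT() example; treat the exact definitions as a sketch of the header, not a verbatim copy:

    #define ALT_FLAGS_SHIFT         16
    #define ALT_FLAG_NOT            (1U << 0)

    /* Split a combined ft_flags word back into its halves: */
    #define ALT_FEATURE(ft_flags)   ((ft_flags) & ((1U << ALT_FLAGS_SHIFT) - 1))
    #define ALT_FLAGS(ft_flags)     ((ft_flags) >> ALT_FLAGS_SHIFT)

    /* ...which is what lets the use site stay as readable as before: */
    #define ALT_NOT(feature)        ((ALT_FLAG_NOT << ALT_FLAGS_SHIFT) | (feature))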
-
- Nov 23, 2022
Michael Ellerman authored
Provide an implementation for arch_pc_relative_reloc(). It is needed to pass the build once 61c6065e ("objtool: Allow !PC relative relocations") is merged.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- Nov 18, 2022
Sathvika Vasireddy authored
Enable objtool --mcount on powerpc, and add the powerpc-specific implementation.

Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Sathvika Vasireddy <sv@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20221114175754.1131267-17-sv@linux.ibm.com
-
Sathvika Vasireddy authored
Add [stub] implementations for the required functions in order to enable building objtool on powerpc.

[Christophe Leroy: powerpc: Add missing asm/asm.h for objtool, use local variables for type and imm in arch_decode_instruction(), adapt len for prefixed instructions.]

Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Sathvika Vasireddy <sv@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20221114175754.1131267-16-sv@linux.ibm.com
-
Sathvika Vasireddy authored
Add an architecture-specific function to look for relocation records pointing to architecture-specific symbols.

Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Sathvika Vasireddy <sv@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20221114175754.1131267-15-sv@linux.ibm.com
-
Sathvika Vasireddy authored
Make relocation types architecture specific.

Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Sathvika Vasireddy <sv@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20221114175754.1131267-14-sv@linux.ibm.com
-
Sathvika Vasireddy authored
Call add_special_section_alts() only when the stackval, orc, uaccess or noinstr options are passed to objtool.

Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Sathvika Vasireddy <sv@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20221114175754.1131267-13-sv@linux.ibm.com
-
Sathvika Vasireddy authored
Some architectures (powerpc) may not support ftrace locations being nop'ed out at build time. Introduce CONFIG_HAVE_OBJTOOL_NOP_MCOUNT for objtool, as a means for architectures to enable nop'ing of ftrace locations. Add --mnop as an option to objtool --mcount, to indicate support for the same. Also, make sure that --mnop can be passed as an option to objtool only when --mcount is passed.

Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Sathvika Vasireddy <sv@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20221114175754.1131267-12-sv@linux.ibm.com
-
Christophe Leroy authored
In order to allow using objtool on cross-built kernels, determine the size of a long from the ELF data instead of using sizeof(long) at build time. For the time being this covers only mcount.

[Sathvika Vasireddy: Rename variable "size" to "addrsize" and function "elf_class_size()" to "elf_class_addrsize()", and modify the create_mcount_loc_sections() function to follow reverse-christmas-tree ordering of local variable declarations.]

Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Sathvika Vasireddy <sv@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20221114175754.1131267-11-sv@linux.ibm.com
-
Christophe Leroy authored
Some architectures, like powerpc, support either endianness, so it is not possible to fix the endianness via arch/endianness.h because there is no easy way to get the target endianness at build time. Use the endianness recorded in the file objtool is working on.

Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20221114175754.1131267-10-sv@linux.ibm.com
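The core idea, as a self-contained sketch: read the byte-order marker from the ELF header of the file being processed and swap only when it differs from the host's order. The helper names are illustrative, not objtool's:

    #include <byteswap.h>
    #include <elf.h>
    #include <endian.h>
    #include <stdint.h>

    /* The object file records its own endianness in e_ident[EI_DATA]. */
    static int file_is_little_endian(const unsigned char e_ident[EI_NIDENT])
    {
            return e_ident[EI_DATA] == ELFDATA2LSB;
    }

    /* Swap a 64-bit value only when file and host byte orders differ. */
    static uint64_t rd64(const unsigned char e_ident[EI_NIDENT], uint64_t v)
    {
    #if __BYTE_ORDER == __LITTLE_ENDIAN
            return file_is_little_endian(e_ident) ? v : bswap_64(v);
    #else
            return file_is_little_endian(e_ident) ? bswap_64(v) : v;
    #endif
    }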
-
Christophe Leroy authored
find_insn() will return NULL in case of failure. Check insn in order to avoid a crash on a NULL pointer dereference.

Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20221114175754.1131267-9-sv@linux.ibm.com
-
- Nov 05, 2022
Peter Zijlstra authored
Boris (and the robot) reported that objtool grew a new complaint about unreachable instructions. Upon inspection it was immediately clear the __weak zombie instructions struck again. For the unwary: the linker will simply remove the symbol for overridden __weak symbols but leave the instructions in place, creating unreachable instructions, and objtool likes to report these. Commit 4adb2368 ("objtool: Ignore extra-symbol code") was supposed to have dealt with that, but the new commit 9f2899fe ("objtool: Add option to generate prefix symbols") subtly broke that logic by creating unvisited symbols.

Fixes: 9f2899fe ("objtool: Add option to generate prefix symbols")
Reported-by: Borislav Petkov <bp@alien8.de>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
-
Peter Zijlstra authored
When moving a symbol in the symtab, its index changes, and any reloc referring to that symbol-table index needs to be rewritten too. In order to facilitate this, objtool simply marks the whole reloc section 'changed', which causes the whole section to be re-generated. However, finding the relocs that use any given symbol is implemented rather crudely: a full iteration over all sections and their relocs. Given that some builds have over 20k sections (kallsyms etc.), iterating all that for *each* symbol moved takes a bit of time. Instead have each symbol keep a list of relocs that reference it. This *vastly* improves build times for certain configs.

Reported-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/Y2LlRA7x+8UsE1xf@hirez.programming.kicks-ass.net
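The data-structure change, as a trimmed sketch using the kernel-style list API that objtool already carries in tools/include; field and helper names are assumptions based on the description:

    #include <linux/list.h>     /* tools/include variant, as used by objtool */

    struct section;

    struct symbol {
            /* ... */
            struct list_head reloc_list;    /* all relocs referencing us */
    };

    struct reloc {
            /* ... */
            struct list_head sym_reloc_entry;
            struct symbol *sym;
            struct section *sec;
    };

    /* Registered once, when the reloc is first read: */
    static void reloc_attach_sym(struct reloc *reloc, struct symbol *sym)
    {
            reloc->sym = sym;
            list_add_tail(&reloc->sym_reloc_entry, &sym->reloc_list);
    }

    /* Renumbering a symbol then walks only its own relocs, O(uses),
     * instead of every reloc of every section, O(all relocs): */
    static void mark_sym_relocs_changed(struct symbol *sym,
                                        void (*mark)(struct section *))
    {
            struct reloc *reloc;

            list_for_each_entry(reloc, &sym->reloc_list, sym_reloc_entry)
                    mark(reloc->sec);
    }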
-