bpf: Introduce helper bpf_get_task_stack()
Introduce helper bpf_get_task_stack(), which dumps the stack trace of a given task. This is different from bpf_get_stack(), which gets the stack trace of the current task. One potential use case of bpf_get_task_stack() is to call it from bpf_iter__task and dump all /proc/<pid>/stack to a seq_file.

bpf_get_task_stack() uses stack_trace_save_tsk() instead of get_perf_callchain() for the kernel stack. The benefit of this choice is that stack_trace_save_tsk() doesn't require changes in arch/. The downside is that stack_trace_save_tsk() dumps the stack trace to an unsigned long array; for 32-bit systems, we need to translate it to a u64 array.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200630062846.664389-3-songliubraving@fb.com
commit c054d91247
parent 9c104b1637
committed by Andrii Nakryiko
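The use case named in the commit message — dumping every task's stack trace to a seq_file from a task iterator — looks roughly like the sketch below. This is illustrative, not code from this commit: it assumes a BTF-generated vmlinux.h, a libbpf recent enough to declare bpf_get_task_stack() in its generated bpf_helper_defs.h, and the BPF_SEQ_PRINTF macro from bpf_tracing.h; the program name dump_task_stack and the entries array are made up for the example.

// Minimal sketch: print a /proc/<pid>/stack-style kernel trace for each task.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

#define MAX_STACK_DEPTH 64

/* Per the commit message, entries arrive as u64 IPs even on 32-bit systems
 * (translated from stack_trace_save_tsk()'s unsigned longs). A single global
 * scratch buffer is not re-entrancy safe, but is fine for a sketch.
 */
static __u64 entries[MAX_STACK_DEPTH];

SEC("iter/task")
int dump_task_stack(struct bpf_iter__task *ctx)
{
	struct seq_file *seq = ctx->meta->seq;
	struct task_struct *task = ctx->task;
	long retlen;
	int i;

	if (!task)	/* a NULL task marks the end of the iteration */
		return 0;

	/* flags == 0: kernel stack, no frames skipped */
	retlen = bpf_get_task_stack(task, entries, sizeof(entries), 0);
	if (retlen < 0)	/* negative error, e.g. stack unavailable */
		return 0;

	BPF_SEQ_PRINTF(seq, "pid %d:\n", task->pid);
	for (i = 0; i < MAX_STACK_DEPTH; i++) {
		/* retlen is the number of bytes written into entries */
		if ((long)(i * sizeof(entries[0])) >= retlen)
			break;
		BPF_SEQ_PRINTF(seq, "  %pB\n", (void *)entries[i]);
	}
	return 0;
}

Once loaded, such an iterator can be pinned (e.g. with bpftool iter pin) and then read like an ordinary file, streaming each task's trace through the seq_file.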
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -3286,6 +3286,39 @@ union bpf_attr {
  *		Dynamically cast a *sk* pointer to a *udp6_sock* pointer.
  *	Return
  *		*sk* if casting is valid, or NULL otherwise.
+ *
+ * long bpf_get_task_stack(struct task_struct *task, void *buf, u32 size, u64 flags)
+ *	Description
+ *		Return a user or a kernel stack in bpf program provided buffer.
+ *		To achieve this, the helper needs *task*, which is a valid
+ *		pointer to struct task_struct. To store the stacktrace, the
+ *		bpf program provides *buf* with a nonnegative *size*.
+ *
+ *		The last argument, *flags*, holds the number of stack frames to
+ *		skip (from 0 to 255), masked with
+ *		**BPF_F_SKIP_FIELD_MASK**. The next bits can be used to set
+ *		the following flags:
+ *
+ *		**BPF_F_USER_STACK**
+ *			Collect a user space stack instead of a kernel stack.
+ *		**BPF_F_USER_BUILD_ID**
+ *			Collect buildid+offset instead of ips for user stack,
+ *			only valid if **BPF_F_USER_STACK** is also specified.
+ *
+ *		**bpf_get_task_stack**\ () can collect up to
+ *		**PERF_MAX_STACK_DEPTH** both kernel and user frames, subject
+ *		to sufficient large buffer size. Note that
+ *		this limit can be controlled with the **sysctl** program, and
+ *		that it should be manually increased in order to profile long
+ *		user stacks (such as stacks for Java programs). To do so, use:
+ *
+ *		::
+ *
+ *			# sysctl kernel.perf_event_max_stack=<new value>
+ *	Return
+ *		A non-negative value equal to or less than *size* on success,
+ *		or a negative error in case of failure.
+ *
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -3428,7 +3461,9 @@ union bpf_attr {
 	FN(skc_to_tcp_sock),		\
 	FN(skc_to_tcp_timewait_sock),	\
 	FN(skc_to_tcp_request_sock),	\
-	FN(skc_to_udp6_sock),
+	FN(skc_to_udp6_sock),		\
+	FN(get_task_stack),		\
+	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
  * function eBPF program intends to call
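To make the documented *flags* layout concrete, here is a hedged sketch of composing a user-stack request exactly as the helper documentation above describes; the surrounding task and buf variables are assumed to be in scope, and all three flag constants are real UAPI definitions from bpf.h.

/* Low 8 bits (masked by BPF_F_SKIP_FIELD_MASK) = frames to skip;
 * higher bits select the stack flavor.
 */
__u64 flags = (2ULL & BPF_F_SKIP_FIELD_MASK) |	/* skip 2 innermost frames */
	      BPF_F_USER_STACK |		/* user stack, not kernel */
	      BPF_F_USER_BUILD_ID;		/* buildid+offset entries
						 * (struct bpf_stack_build_id)
						 * instead of raw ips */
long ret = bpf_get_task_stack(task, buf, sizeof(buf), flags);
if (ret < 0) {
	/* negative error; on success, ret is the number of bytes written,
	 * at most sizeof(buf) */
}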