Lines Matching +full:3 +full:a

27 is really a framework of several assorted tracing utilities.
29 disabled and enabled, as well as for preemption, and from the time
30 a task is woken until the task is actually scheduled in.
62 For quicker access to that directory you may want to make a soft link to
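For instance (a sketch; the link name /tracing is an arbitrary choice and the command needs root):

```shell
# Make a shorthand path to the tracefs mount point (requires root).
ln -s /sys/kernel/tracing /tracing
```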
90 of ftrace. Here is a list of some of the key files:
127 This file holds the output of the trace in a human
130 Note, this file is not a consumer. If tracing is off
141 retrieved. Unlike the "trace" file, this file is a
145 will not be read again with a sequential read. The
154 files. Options also exist to modify how a tracer
159 This is a directory that has a file for every available
161 or cleared by writing a "1" or "0" respectively into the
169 stored, and displayed by "trace". A new max trace will only be
173 By echoing in a time into this file, no latency will be recorded
178 Some latency tracers will record a trace whenever the
180 Only active when the file contains a number greater than 0.
186 before a waiter is woken up. That is, if an application calls a
206 A few extra pages may be allocated to accommodate buffer management
210 ( Note, the size may not be a multiple of the page size
228 the sub buffer is a page size, no event can be larger than the page
231 Note, the buffer_subbuf_size_kb is a way for the user to specify the
244 If a process is performing tracing, and the ring buffer should be
246 killed by a signal, this file can be used for that purpose. On close
248 Having a process that is tracing also open this file, when the process
256 This is a mask that lets the user only trace on specified CPUs.
257 The format is a hex string representing the CPUs.
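As a sketch of how such a mask is formed (the CPU numbers 0 and 2 are a hypothetical selection): each chosen CPU contributes bit (1 << cpu), and the result is written in hex.

```shell
# Build a hex cpumask for CPUs 0 and 2 (hypothetical choice).
mask=0
for cpu in 0 2; do
    mask=$(( mask | (1 << cpu) ))
done
printf '%x\n' "$mask"    # prints 5
# Then, as root: printf '%x' "$mask" > /sys/kernel/tracing/tracing_cpumask
```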
266 has a side effect of enabling or disabling specific functions
278 As a speed up, since processing strings can be quite expensive
279 and requires a check of all functions registered to tracing, instead
280 an index can be written into this file. A number (starting with "1")
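A sketch of the idea (assuming the index is the function's 1-based line number in available_filter_functions, as described here, and a tracefs mount at /sys/kernel/tracing; needs root):

```shell
T=/sys/kernel/tracing
# Look up the 1-based position of "schedule" in available_filter_functions
# and write the number instead of the name, skipping string matching.
idx=$(grep -n '^schedule$' "$T/available_filter_functions" | cut -d: -f1)
echo "$idx" > "$T/set_ftrace_filter"
```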
288 be traced. If a function exists in both set_ftrace_filter
296 If the "function-fork" option is set, then when a task whose
307 If the "function-fork" option is set, then when a task whose
313 If a PID is in both this file and "set_ftrace_pid", then this
318 Have the events only trace a task with a PID listed in this file.
329 Have the events not trace a task with a PID listed in this file.
331 in this file, even if a thread's PID is in the file if the
332 sched_switch or sched_wakeup events also trace a thread that should
353 by a specific function.
377 in seeing if any function has a callback attached to it.
380 displays all functions that have a callback attached to them
382 Note, a callback may also call multiple functions which will
385 If the callback registered to be traced by a function with
386 the "save regs" attribute (thus even more overhead), a 'R'
390 If the callback registered to be traced by a function with
395 If a non-ftrace trampoline is attached (BPF), a 'D' will be displayed.
397 "direct" trampoline can be attached to a given function at a time.
403 If a function had either the "ip modify" or a "direct" call attached to
404 it in the past, an 'M' will be shown. This flag is never cleared. It is
405 used to know if a function was ever modified by the ftrace infrastructure,
412 If the callback of a function jumps to a trampoline that is
419 This file contains all the functions that ever had a function callback
424 To see any function that has ever been modified by "ip modify" or a
433 keep a histogram of the number of functions that were called
442 A directory that holds different tracing stats.
455 it will trace into a function. Setting this to a value of
462 the ring buffer references a string, only a pointer to the string
470 Only the pid of the task is recorded in a trace event unless
472 makes a cache of pid mappings to comms to try to display
473 comms for events. If a pid for a comm is not listed, then
488 the Task Group ID of a task is saved in a table mapping the PID of
495 take a snapshot of the current running trace.
517 Whenever an event is recorded into the ring buffer, a
518 "timestamp" is added. This stamp comes from a specified
537 be a bit slower than the local clock.
540 This is not a clock at all, but literally an atomic
583 sees a partial update. These effects are rare and post
599 To set a clock, simply echo the clock name into this file::
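As a sketch (run as root; "global" is one of the available clock names):

```shell
# Switch the trace timestamp source to the "global" clock.
echo global > /sys/kernel/tracing/trace_clock
```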
603 Setting a clock clears the ring buffer content as well as the
608 This is a very useful file for synchronizing user space
639 example in Documentation/trace/histogram.rst (Section 3.)
644 to be written to it, where a tool can be used to parse the data
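A minimal sketch of writing a marker from a script (the message text is arbitrary; needs root):

```shell
# Drop a synchronization marker into the ring buffer at this point.
echo "hello from user space" > /sys/kernel/tracing/trace_marker
```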
658 This is a way to make multiple trace buffers where different
669 when a "1" is written to them.
681 A list of events that can be enabled in tracing.
689 different modes can coexist within a buffer but the mode in
702 delta: Default timestamp mode - timestamp is a delta against
703 a per-buffer timestamp.
705 absolute: The timestamp is a full timestamp, not a delta
716 This is a directory that contains the trace per_cpu information.
720 The ftrace buffer is defined per_cpu. That is, there's a separate
735 This is similar to the "trace_pipe" file, and is a consuming
745 a file or to the network where a server is collecting the
748 Like trace_pipe, this is a consuming reader, where multiple
755 the content of the snapshot for a given CPU, and if
776 This gets set if so many events happened within a nested
810 to draw a graph of function calls similar to C code
828 See tracing_max_latency. When a new max is recorded,
860 a SCHED_DEADLINE task to be woken (as the "wakeup" and
865 A special tracer that is used to trace a binary module.
866 It will trace all the calls that a module makes to the
873 calls within the kernel. It will trace when a likely and
893 information is available. The tracing/error_log file is a circular
894 error log displaying a small number (currently, 8) of ftrace errors
950 A header is printed with the tracer name that is represented by
969 why a latency happened. Here is a typical trace::
1034 .. caution:: If the architecture does not support a way to
1045 - 'Z' - NMI occurred inside a hardirq
1047 - 'H' - hard irq occurred inside a softirq.
1058 output includes a timestamp relative to the start of the
1063 This is just to help catch your eye a bit better. And
1078 Note, the latency tracers will usually end with a back trace
1173 Similar to raw, but the numbers will be in a hexadecimal format.
1182 Print the fields as described by their types. This is a better
1183 option than using hex, bin or raw, as it gives a better parsing
1191 and one CPU buffer had a lot of events recently, thus
1192 a shorter time frame, where another CPU may have only had
1193 a few events, which lets it have older events. When
1197 display when a new CPU buffer started::
1208 This option changes the trace. It records a
1214 object the address belongs to, and print a
1216 ASLR is on, otherwise you don't get a chance to
1223 a.out-1623 [000] 40874.465068: /root/a.out[+0x480] <-/root/a.out[+0
1224 x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
1254 When any event or tracer is enabled, a hook is enabled
1262 When any event or tracer is enabled, a hook is enabled
1327 When set, a stack trace is recorded after any trace event
1346 When set, a stack trace is recorded after every
1355 Since the function_graph tracer has a slightly different output
1363 Each task has a fixed array of functions to
1375 A certain amount, then a delay marker is
1382 when a task is traced in and out during a context
1401 only a closing curly bracket "}" is displayed for
1402 the return of a function.
1419 the time a task schedules out in its function.
1433 Shows a more minimalistic output.
1442 the kernel know of a new mouse event. The result is a latency
1446 disabled. When a new maximum latency is hit, the tracer saves
1447 the trace leading up to that latency point so that every time a
1501 Here we see that we had a latency of 16 microseconds (which is
1509 function-trace, we get a much larger output::
1517 # latency: 71 us, #168/168, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1533 bash-2042 3d... 0us : _raw_spin_lock_irqsave <-ata_scsi_queuecmd
1534 bash-2042 3d... 0us : add_preempt_count <-_raw_spin_lock_irqsave
1535 bash-2042 3d..1 1us : ata_scsi_find_dev <-ata_scsi_queuecmd
1536 bash-2042 3d..1 1us : __ata_scsi_find_dev <-ata_scsi_find_dev
1537 bash-2042 3d..1 2us : ata_find_dev.part.14 <-__ata_scsi_find_dev
1538 bash-2042 3d..1 2us : ata_qc_new_init <-__ata_scsi_queuecmd
1539 bash-2042 3d..1 3us : ata_sg_init <-__ata_scsi_queuecmd
1540 bash-2042 3d..1 4us : ata_scsi_rw_xlat <-__ata_scsi_queuecmd
1541 bash-2042 3d..1 4us : ata_build_rw_tf <-ata_scsi_rw_xlat
1543 bash-2042 3d..1 67us : delay_tsc <-__delay
1544 bash-2042 3d..1 67us : add_preempt_count <-delay_tsc
1545 bash-2042 3d..2 67us : sub_preempt_count <-delay_tsc
1546 bash-2042 3d..1 67us : add_preempt_count <-delay_tsc
1547 bash-2042 3d..2 68us : sub_preempt_count <-delay_tsc
1548 bash-2042 3d..1 68us+: ata_bmdma_start <-ata_bmdma_qc_issue
1549 bash-2042 3d..1 71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1550 bash-2042 3d..1 71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1551 bash-2042 3d..1 72us+: trace_hardirqs_on <-ata_scsi_queuecmd
1552 bash-2042 3d..1 120us : <stack trace>
1580 Here we traced a 71 microsecond latency. But we also see all the
1615 3 us | 0) bash-1507 | d..2 | | __unwind_start() {
1616 3 us | 0) bash-1507 | d..2 | | get_stack_info() {
1617 3 us | 0) bash-1507 | d..2 | 0.351 us | in_task_stack();
1643 interrupts but the task cannot be preempted and a higher
1645 before it can preempt a lower priority task.
1817 # latency: 100 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1833 ls-2230 3d... 0us+: _raw_spin_lock_irqsave <-ata_scsi_queuecmd
1834 ls-2230 3...1 100us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1835 ls-2230 3...1 101us+: trace_preempt_on <-ata_scsi_queuecmd
1836 ls-2230 3...1 111us : <stack trace>
1864 Here is a trace with function-trace set::
1870 # latency: 161 us, #339/339, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1886 kworker/-59 3...1 0us : __schedule <-schedule
1887 kworker/-59 3d..1 0us : rcu_preempt_qs <-rcu_note_context_switch
1888 kworker/-59 3d..1 1us : add_preempt_count <-_raw_spin_lock_irq
1889 kworker/-59 3d..2 1us : deactivate_task <-__schedule
1890 kworker/-59 3d..2 1us : dequeue_task <-deactivate_task
1891 kworker/-59 3d..2 2us : update_rq_clock <-dequeue_task
1892 kworker/-59 3d..2 2us : dequeue_task_fair <-dequeue_task
1893 kworker/-59 3d..2 2us : update_curr <-dequeue_task_fair
1894 kworker/-59 3d..2 2us : update_min_vruntime <-update_curr
1895 kworker/-59 3d..2 3us : cpuacct_charge <-update_curr
1896 kworker/-59 3d..2 3us : __rcu_read_lock <-cpuacct_charge
1897 kworker/-59 3d..2 3us : __rcu_read_unlock <-cpuacct_charge
1898 kworker/-59 3d..2 3us : update_cfs_rq_blocked_load <-dequeue_task_fair
1899 kworker/-59 3d..2 4us : clear_buddies <-dequeue_task_fair
1900 kworker/-59 3d..2 4us : account_entity_dequeue <-dequeue_task_fair
1901 kworker/-59 3d..2 4us : update_min_vruntime <-dequeue_task_fair
1902 kworker/-59 3d..2 4us : update_cfs_shares <-dequeue_task_fair
1903 kworker/-59 3d..2 5us : hrtick_update <-dequeue_task_fair
1904 kworker/-59 3d..2 5us : wq_worker_sleeping <-__schedule
1905 kworker/-59 3d..2 5us : kthread_data <-wq_worker_sleeping
1906 kworker/-59 3d..2 5us : put_prev_task_fair <-__schedule
1907 kworker/-59 3d..2 6us : pick_next_task_fair <-pick_next_task
1908 kworker/-59 3d..2 6us : clear_buddies <-pick_next_task_fair
1909 kworker/-59 3d..2 6us : set_next_entity <-pick_next_task_fair
1910 kworker/-59 3d..2 6us : update_stats_wait_end <-set_next_entity
1911 ls-2269 3d..2 7us : finish_task_switch <-__schedule
1912 ls-2269 3d..2 7us : _raw_spin_unlock_irq <-finish_task_switch
1913 ls-2269 3d..2 8us : do_IRQ <-ret_from_intr
1914 ls-2269 3d..2 8us : irq_enter <-do_IRQ
1915 ls-2269 3d..2 8us : rcu_irq_enter <-irq_enter
1916 ls-2269 3d..2 9us : add_preempt_count <-irq_enter
1917 ls-2269 3d.h2 9us : exit_idle <-do_IRQ
1919 ls-2269 3d.h3 20us : sub_preempt_count <-_raw_spin_unlock
1920 ls-2269 3d.h2 20us : irq_exit <-do_IRQ
1921 ls-2269 3d.h2 21us : sub_preempt_count <-irq_exit
1922 ls-2269 3d..3 21us : do_softirq <-irq_exit
1923 ls-2269 3d..3 21us : __do_softirq <-call_softirq
1924 ls-2269 3d..3 21us+: __local_bh_disable <-__do_softirq
1925 ls-2269 3d.s4 29us : sub_preempt_count <-_local_bh_enable_ip
1926 ls-2269 3d.s5 29us : sub_preempt_count <-_local_bh_enable_ip
1927 ls-2269 3d.s5 31us : do_IRQ <-ret_from_intr
1928 ls-2269 3d.s5 31us : irq_enter <-do_IRQ
1929 ls-2269 3d.s5 31us : rcu_irq_enter <-irq_enter
1931 ls-2269 3d.s5 31us : rcu_irq_enter <-irq_enter
1932 ls-2269 3d.s5 32us : add_preempt_count <-irq_enter
1933 ls-2269 3d.H5 32us : exit_idle <-do_IRQ
1934 ls-2269 3d.H5 32us : handle_irq <-do_IRQ
1935 ls-2269 3d.H5 32us : irq_to_desc <-handle_irq
1936 ls-2269 3d.H5 33us : handle_fasteoi_irq <-handle_irq
1938 ls-2269 3d.s5 158us : _raw_spin_unlock_irqrestore <-rtl8139_poll
1939 ls-2269 3d.s3 158us : net_rps_action_and_irq_enable.isra.65 <-net_rx_action
1940 ls-2269 3d.s3 159us : __local_bh_enable <-__do_softirq
1941 ls-2269 3d.s3 159us : sub_preempt_count <-__local_bh_enable
1942 ls-2269 3d..3 159us : idle_cpu <-irq_exit
1943 ls-2269 3d..3 159us : rcu_irq_exit <-irq_exit
1944 ls-2269 3d..3 160us : sub_preempt_count <-irq_exit
1945 ls-2269 3d... 161us : __mutex_unlock_slowpath <-mutex_unlock
1946 ls-2269 3d... 162us+: trace_hardirqs_on <-mutex_unlock
1947 ls-2269 3d... 186us : <stack trace>
1962 When an interrupt is running inside a softirq, the annotation is 'H'.
1969 time it takes for a task that is woken to actually wake up.
1986 # latency: 15 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1988 # | task: kworker/3:1H-312 (uid:0 nice:-20 policy:0 rt_prio:0)
1999 <idle>-0 3dNs7 0us : 0:120:R + [003] 312:100:R kworker/3:1H
2000 <idle>-0 3dNs7 1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
2001 <idle>-0 3d..3 15us : __schedule <-schedule
2002 <idle>-0 3d..3 15us : 0:120:R ==> [003] 312:100:R kworker/3:1H
2006 the kworker with a nice priority of -20 (not very nice), took
2010 Non Real-Time tasks are not that interesting. A more interesting
2016 In a Real-Time environment it is very important to know the
2027 and not the average. We can have a very fast scheduler that may
2028 only have a large latency once in a while, but that would not
2034 tracer for a while to see that effect).
2055 # latency: 5 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2068 <idle>-0 3d.h4 0us : 0:120:R + [003] 2389: 94:R sleep
2069 <idle>-0 3d.h4 1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
2070 <idle>-0 3d..3 5us : __schedule <-schedule
2071 <idle>-0 3d..3 5us : 0:120:R ==> [003] 2389: 94:R sleep
2077 is about to schedule in. This may change if we add a new marker at the
2088 <idle>-0 3d..3 5us : 0:120:R ==> [003] 2389: 94:R sleep
2090 The 0:120:R means idle was running with a nice priority of 0 (120 - 120)
2104 # latency: 29 us, #85/85, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2117 <idle>-0 3d.h4 1us+: 0:120:R + [003] 2448: 94:R sleep
2118 <idle>-0 3d.h4 2us : ttwu_do_activate.constprop.87 <-try_to_wake_up
2119 <idle>-0 3d.h3 3us : check_preempt_curr <-ttwu_do_wakeup
2120 <idle>-0 3d.h3 3us : resched_curr <-check_preempt_curr
2121 <idle>-0 3dNh3 4us : task_woken_rt <-ttwu_do_wakeup
2122 <idle>-0 3dNh3 4us : _raw_spin_unlock <-try_to_wake_up
2123 <idle>-0 3dNh3 4us : sub_preempt_count <-_raw_spin_unlock
2124 <idle>-0 3dNh2 5us : ttwu_stat <-try_to_wake_up
2125 <idle>-0 3dNh2 5us : _raw_spin_unlock_irqrestore <-try_to_wake_up
2126 <idle>-0 3dNh2 6us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2127 <idle>-0 3dNh1 6us : _raw_spin_lock <-__run_hrtimer
2128 <idle>-0 3dNh1 6us : add_preempt_count <-_raw_spin_lock
2129 <idle>-0 3dNh2 7us : _raw_spin_unlock <-hrtimer_interrupt
2130 <idle>-0 3dNh2 7us : sub_preempt_count <-_raw_spin_unlock
2131 <idle>-0 3dNh1 7us : tick_program_event <-hrtimer_interrupt
2132 <idle>-0 3dNh1 7us : clockevents_program_event <-tick_program_event
2133 <idle>-0 3dNh1 8us : ktime_get <-clockevents_program_event
2134 <idle>-0 3dNh1 8us : lapic_next_event <-clockevents_program_event
2135 <idle>-0 3dNh1 8us : irq_exit <-smp_apic_timer_interrupt
2136 <idle>-0 3dNh1 9us : sub_preempt_count <-irq_exit
2137 <idle>-0 3dN.2 9us : idle_cpu <-irq_exit
2138 <idle>-0 3dN.2 9us : rcu_irq_exit <-irq_exit
2139 <idle>-0 3dN.2 10us : rcu_eqs_enter_common.isra.45 <-rcu_irq_exit
2140 <idle>-0 3dN.2 10us : sub_preempt_count <-irq_exit
2141 <idle>-0 3.N.1 11us : rcu_idle_exit <-cpu_idle
2142 <idle>-0 3dN.1 11us : rcu_eqs_exit_common.isra.43 <-rcu_idle_exit
2143 <idle>-0 3.N.1 11us : tick_nohz_idle_exit <-cpu_idle
2144 <idle>-0 3dN.1 12us : menu_hrtimer_cancel <-tick_nohz_idle_exit
2145 <idle>-0 3dN.1 12us : ktime_get <-tick_nohz_idle_exit
2146 <idle>-0 3dN.1 12us : tick_do_update_jiffies64 <-tick_nohz_idle_exit
2147 <idle>-0 3dN.1 13us : cpu_load_update_nohz <-tick_nohz_idle_exit
2148 <idle>-0 3dN.1 13us : _raw_spin_lock <-cpu_load_update_nohz
2149 <idle>-0 3dN.1 13us : add_preempt_count <-_raw_spin_lock
2150 <idle>-0 3dN.2 13us : __cpu_load_update <-cpu_load_update_nohz
2151 <idle>-0 3dN.2 14us : sched_avg_update <-__cpu_load_update
2152 <idle>-0 3dN.2 14us : _raw_spin_unlock <-cpu_load_update_nohz
2153 <idle>-0 3dN.2 14us : sub_preempt_count <-_raw_spin_unlock
2154 <idle>-0 3dN.1 15us : calc_load_nohz_stop <-tick_nohz_idle_exit
2155 <idle>-0 3dN.1 15us : touch_softlockup_watchdog <-tick_nohz_idle_exit
2156 <idle>-0 3dN.1 15us : hrtimer_cancel <-tick_nohz_idle_exit
2157 <idle>-0 3dN.1 15us : hrtimer_try_to_cancel <-hrtimer_cancel
2158 <idle>-0 3dN.1 16us : lock_hrtimer_base.isra.18 <-hrtimer_try_to_cancel
2159 <idle>-0 3dN.1 16us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
2160 <idle>-0 3dN.1 16us : add_preempt_count <-_raw_spin_lock_irqsave
2161 <idle>-0 3dN.2 17us : __remove_hrtimer <-remove_hrtimer.part.16
2162 <idle>-0 3dN.2 17us : hrtimer_force_reprogram <-__remove_hrtimer
2163 <idle>-0 3dN.2 17us : tick_program_event <-hrtimer_force_reprogram
2164 <idle>-0 3dN.2 18us : clockevents_program_event <-tick_program_event
2165 <idle>-0 3dN.2 18us : ktime_get <-clockevents_program_event
2166 <idle>-0 3dN.2 18us : lapic_next_event <-clockevents_program_event
2167 <idle>-0 3dN.2 19us : _raw_spin_unlock_irqrestore <-hrtimer_try_to_cancel
2168 <idle>-0 3dN.2 19us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2169 <idle>-0 3dN.1 19us : hrtimer_forward <-tick_nohz_idle_exit
2170 <idle>-0 3dN.1 20us : ktime_add_safe <-hrtimer_forward
2171 <idle>-0 3dN.1 20us : ktime_add_safe <-hrtimer_forward
2172 <idle>-0 3dN.1 20us : hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11
2173 <idle>-0 3dN.1 20us : __hrtimer_start_range_ns <-hrtimer_start_range_ns
2174 <idle>-0 3dN.1 21us : lock_hrtimer_base.isra.18 <-__hrtimer_start_range_ns
2175 <idle>-0 3dN.1 21us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
2176 <idle>-0 3dN.1 21us : add_preempt_count <-_raw_spin_lock_irqsave
2177 <idle>-0 3dN.2 22us : ktime_add_safe <-__hrtimer_start_range_ns
2178 <idle>-0 3dN.2 22us : enqueue_hrtimer <-__hrtimer_start_range_ns
2179 <idle>-0 3dN.2 22us : tick_program_event <-__hrtimer_start_range_ns
2180 <idle>-0 3dN.2 23us : clockevents_program_event <-tick_program_event
2181 <idle>-0 3dN.2 23us : ktime_get <-clockevents_program_event
2182 <idle>-0 3dN.2 23us : lapic_next_event <-clockevents_program_event
2183 <idle>-0 3dN.2 24us : _raw_spin_unlock_irqrestore <-__hrtimer_start_range_ns
2184 <idle>-0 3dN.2 24us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2185 <idle>-0 3dN.1 24us : account_idle_ticks <-tick_nohz_idle_exit
2186 <idle>-0 3dN.1 24us : account_idle_time <-account_idle_ticks
2187 <idle>-0 3.N.1 25us : sub_preempt_count <-cpu_idle
2188 <idle>-0 3.N.. 25us : schedule <-cpu_idle
2189 <idle>-0 3.N.. 25us : __schedule <-preempt_schedule
2190 <idle>-0 3.N.. 26us : add_preempt_count <-__schedule
2191 <idle>-0 3.N.1 26us : rcu_note_context_switch <-__schedule
2192 <idle>-0 3.N.1 26us : rcu_sched_qs <-rcu_note_context_switch
2193 <idle>-0 3dN.1 27us : rcu_preempt_qs <-rcu_note_context_switch
2194 <idle>-0 3.N.1 27us : _raw_spin_lock_irq <-__schedule
2195 <idle>-0 3dN.1 27us : add_preempt_count <-_raw_spin_lock_irq
2196 <idle>-0 3dN.2 28us : put_prev_task_idle <-__schedule
2197 <idle>-0 3dN.2 28us : pick_next_task_stop <-pick_next_task
2198 <idle>-0 3dN.2 28us : pick_next_task_rt <-pick_next_task
2199 <idle>-0 3dN.2 29us : dequeue_pushable_task <-pick_next_task_rt
2200 <idle>-0 3d..3 29us : __schedule <-preempt_schedule
2201 <idle>-0 3d..3 30us : 0:120:R ==> [003] 2448: 94:R sleep
2203 This isn't that big of a trace, even with function tracing enabled,
2212 As function tracing can induce a much larger latency, but without
2214 caused it. There is a middle ground, and that is with enabling
2248 <idle>-0 2.N.2 3us : cpu_idle: state=4294967295 cpu_id=2
2249 <idle>-0 2dN.3 4us : hrtimer_cancel: hrtimer=ffff88007d50d5e0
2250 …<idle>-0 2dN.3 4us : hrtimer_start: hrtimer=ffff88007d50d5e0 function=tick_sched_timer ex…
2253 <idle>-0 2d..3 6us : __schedule <-schedule
2254 <idle>-0 2d..3 6us : 0:120:R ==> [002] 5882: 94:R sleep
2263 periodically make a CPU constantly busy with interrupts disabled.
2282 …<...>-1729 [005] d... 714.756290: #3 inner/outer(us): 16/16 ts:1581527519.678961629 co…
2304 runs in a loop checking a timestamp twice. The latency detected within
2315 The number of times a latency was detected during the window.
2346 When the test is started, a kernel thread is created that
2358 ftrace_enabled is set; otherwise this tracer is a nop.
2395 tracing directly from a program. This allows you to stop the
2397 interested in. To disable the tracing directly from a C program,
2416 By writing into set_ftrace_pid you can trace a
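A sketch (assuming a tracefs mount at /sys/kernel/tracing and root privileges):

```shell
T=/sys/kernel/tracing
echo $$ > "$T/set_ftrace_pid"        # restrict function tracing to this shell
echo function > "$T/current_tracer"  # then enable the function tracer
```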
2442 ##### CPU 3 buffer started ####
2449 If you want to trace a function when executing, you could use
2520 write(ffd, "nop", 3);
2554 probes a function on its entry and its exit. This is done by
2555 using a dynamically allocated stack of return addresses in each
2557 address of each function traced to set a custom probe. Thus the
2561 Probing on both ends of a function leads to special features
2564 - measure of a function's time execution
2565 - having a reliable call stack to draw function calls graph
2569 - you want to find the reason of a strange kernel behavior and
2576 - you want to find quickly which path is taken by a specific
2579 - you just want to peek inside a working kernel and want to see
2620 the closing bracket line of a function or on the same line
2621 as the current function in case of a leaf one. It is default
2636 3) # 1837.709 us | } /* __switch_to */
2637 3) | finish_task_switch() {
2638 3) 0.313 us | _raw_spin_unlock_irq();
2639 3) 3.177 us | }
2640 3) # 1889.063 us | } /* __schedule */
2641 3) ! 140.417 us | } /* __schedule */
2642 3) # 2034.948 us | } /* schedule */
2643 3) * 33998.59 us | } /* schedule_preempt_disabled */
2708 system clock since it started. A snapshot of this time is
2735 for a function if the start of that function is not in the
2789 be displayed in a smart way. Specifically, if it is an error code,
2813 - Even if the function return type is void, a return value will still
2819 a 64-bit return value, with the lower 32 bits saved in eax and the
2823 - In certain procedure call standards, such as arm64's AAPCS64, when a
2824 type is smaller than a GPR, it is the responsibility of the consumer
2827 when using a u8 in a 64-bit GPR, bits [63:8] may contain arbitrary values,
2876 trace_printk() For example, if you want to put a comment inside
2880 trace_printk("I'm a comment!\n")
2885 1) | /* I'm a comment! */
2900 starts off pointing to a simple return. (Enabling FTRACE will
2912 a notrace, or blocked another way and all inline functions are not
2916 A section called "__mcount_loc" is created that holds
2920 references into a single table.
2926 are loaded and before they are executed. When a module is
2937 (which is just a function stub). They now call into the ftrace
2941 a breakpoint at the location to be modified, sync all CPUs, modify
2964 A list of available functions that you can add to these files is
3081 To clear out a filter so that all functions will be recorded
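As a sketch (tracefs path assumed; needs root):

```shell
# A bare write empties the filter, so all functions are traced again.
echo > /sys/kernel/tracing/set_ftrace_filter
```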
3156 case of setting thousands of specific functions at a time. By passing
3157 in a list of numbers, no string processing will occur. Instead, the function
3247 Note, the proc sysctl ftrace_enabled is a big on/off switch for the
3252 cannot be disabled if there is a callback with FTRACE_OPS_FL_PERMANENT set
3271 A few commands are supported by the set_ftrace_filter interface.
3287 in a different module is accomplished by appending (>>) to the
3294 functions except a specific module::
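A hedged sketch of the :mod: selector (ext3 is only an example module name; verify the exact except-a-module spelling against your kernel's documentation):

```shell
# Restrict the filter to functions belonging to one module (needs root).
echo ':mod:ext3' > /sys/kernel/tracing/set_ftrace_filter
```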
3314 no limit. For example, to disable tracing when a schedule bug
3324 to set_ftrace_filter. To remove a command, prepend it by '!'
3330 that have a counter. To remove commands without counters::
3335 Will cause a snapshot to be triggered when the function is hit.
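A sketch of both commands (__schedule_bug comes from the text above; ip_rcv is just an example function name; needs root):

```shell
T=/sys/kernel/tracing
# Stop tracing the first time a schedule bug fires.
echo '__schedule_bug:traceoff' > "$T/set_ftrace_filter"
# Take a snapshot each time the (example) function ip_rcv is hit.
echo 'ip_rcv:snapshot' >> "$T/set_ftrace_filter"
```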
3351 These commands can enable or disable a trace event. Note, because
3354 a "soft" mode. That is, the tracepoint will be called, but
3356 as long as there's a command that triggers it.
3377 something, and want to dump the trace when a certain function
3378 is hit. Perhaps it's a function that is called before a triple
3379 fault happens and does not allow you to get a regular dump.
3388 When the function is hit, a stack trace is recorded.
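For example (kfree chosen arbitrarily as the target function; needs root):

```shell
# Record a kernel stack trace on every call to kfree.
echo 'kfree:stacktrace' >> /sys/kernel/tracing/set_ftrace_filter
```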
3457 To modify the buffer, simply echo in a number (in 1024 byte segments).
3497 CONFIG_TRACER_SNAPSHOT makes a generic snapshot feature
3503 Snapshot preserves a current trace buffer at a particular point
3505 buffer with a spare buffer, and tracing continues in the new
3513 This is used to take a snapshot and to read the output
3514 of the snapshot. Echo 1 into this file to allocate a
3515 spare buffer and to take a snapshot (swap), then read
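A sketch of the snapshot cycle (tracefs path assumed; needs root):

```shell
T=/sys/kernel/tracing
echo 1 > "$T/snapshot"   # allocate the spare buffer and swap (take the snapshot)
cat "$T/snapshot"        # read the frozen copy; the live buffer keeps tracing
echo 0 > "$T/snapshot"   # free the spare buffer again
```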
3584 In the tracefs tracing directory, there is a directory called "instances".
3604 is a separate and new buffer. The files affect that buffer but do not
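A sketch (the instance name "foo" is arbitrary; needs root):

```shell
T=/sys/kernel/tracing
mkdir "$T/instances/foo"           # creates a new, independent ring buffer
echo 1 > "$T/instances/foo/tracing_on"
rmdir "$T/instances/foo"           # removing the directory frees the buffer
```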
3646 …<idle>-0 [003] d..3 136.676909: sched_switch: prev_comm=swapper/3 prev_pid=0 prev_prio=120 p…
3647 …empt-9 [003] d..3 136.676916: sched_switch: prev_comm=rcu_preempt prev_pid=9 prev_prio=120 p…
3650 …bash-1998 [000] d..3 136.677018: sched_switch: prev_comm=bash prev_pid=1998 prev_prio=120 prev_…
3652 …kworker/0:1-59 [000] d..3 136.677025: sched_switch: prev_comm=kworker/0:1 prev_pid=59 prev_pr…
3656 migration/1-14 [001] d.h3 138.732674: softirq_raise: vec=3 [action=NET_RX]
3657 <idle>-0 [001] dNh3 138.732725: softirq_raise: vec=3 [action=NET_RX]
3683 bash-1998 [000] d... 140.733504: sys_dup2(oldfd: a, newfd: 1)
3685 bash-1998 [000] d... 140.733508: sys_fcntl(fd: a, cmd: 1, arg: 0)
3687 bash-1998 [000] d... 140.733510: sys_close(fd: a)
3705 Note, if a process has a trace file open in one of the instance
3711 Since the kernel has a fixed sized stack, it is important not to
3712 waste it in functions. A kernel developer must be conscious of
3714 can be in danger of a stack overflow, and corruption will occur,
3715 usually leading to a system panic.
3718 periodically checking usage. But if you can perform a check
3720 a function tracer, it makes it convenient to check the stack size
3724 To enable it, write a '1' into /proc/sys/kernel/stack_tracer_enabled.
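For example (run as root; stack_max_size assumes a tracefs mount at /sys/kernel/tracing):

```shell
echo 1 > /proc/sys/kernel/stack_tracer_enabled   # start recording max stack usage
cat /sys/kernel/tracing/stack_max_size           # largest stack depth seen, in bytes
```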
3733 After running it for a few minutes, the output looks like:
3745 3) 2288 80 idle_balance+0xbb/0x130