Lines Matching refs:a

151 	 Adds a very slight overhead to tracing when enabled.
236 by using a compiler feature to insert a small, 5-byte No-Operation
238 sequence is then dynamically patched into a tracer call when
251 Enable the kernel to trace a function at both its return
254 draw a call graph for each thread with some information like
256 address on the current task structure into a stack of calls.
299 replace them with a No-Op instruction) on boot up. During
300 compile time, a table is made of all the locations that ftrace
311 This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
354 	 When a 1 is echoed into this file, profiling begins, and when a
372 kernel executes, and keeping a maximum stack depth value and
405 The default measurement method is a maximum search, which is
429 The default measurement method is a maximum search, which is
456 spinning in a loop looking for interruptions caused by
457 something other than the kernel. For example, if a
458 System Management Interrupt (SMI) takes a noticeable amount of
460 if a system is reliable for Real Time tasks.
478 periodically non-responsive. Do not run this tracer on a
482 file. Every time a latency is greater than tracing_thresh, it will
497 The osnoise tracer leverages the hwlat_detector by running a similar
501 increasing a per-cpu interference counter. It saves an interference
504 observes these interferences' entry events. When a noise happens
506 hardware noise counter increases, pointing to a hardware-related
512 In addition to the tracer, a set of tracepoints were added to
528 The tracer creates a per-cpu kernel thread with real-time priority.
529 The tracer thread sets a periodic timer to wake itself up, and goes
531 then computes a wakeup latency value as the difference between
579 bool "Create a snapshot trace buffer"
593 Allow doing a snapshot of a single CPU buffer instead of a
602 When this is enabled, this adds a little more overhead to the
607 and already adds the overhead (plus a lot more).
617 The branch profiling is a software profiler. It will add hooks
618 into the C conditionals to test which path a branch takes.
621 are annotated with a likely or unlikely macro.
627 Either of the above profilers adds a bit of overhead to the system.
633 No branch profiling. Branch profiling adds a bit of overhead.
646 Note: this will add a significant overhead; only turn this
661 This configuration, when enabled, will impose a great overhead
681 "Trace likely/unlikely profiler" is that this is not a
683 events into a running trace buffer to see when and where the
699 on a given queue. Tracing allows you to see any traffic happening
700 on a block device queue. For more information (and the userspace
735 of the arguments of the probed function, when the probe location is a
736 kernel function entry or a tracepoint.
769 recursion or any unexpected execution path which leads to a kernel
806 an event or even dereference a field of an event. It can
808 address into a string.
826 bool "Enable BPF programs to override a kprobed function"
831 Allows BPF to override the execution of a probed function and
832 set a different return value. This is used for error injection.
863 tracing_map is a special-purpose lock-free map for tracing,
864 separated out as a stand-alone facility in order to allow it
893 events are generated by writing to a tracefs file. User
895 generated by registering a value and bit with the kernel
912 reading a debugfs/tracefs file. They're useful for
927 Allow user-space to inject a specific trace event into the ring
936 When the tracepoint is enabled, it kicks off a kernel thread that
942 The string written to the tracepoint is a static string of 128 bytes
943 to keep the time the same. The initial string is simply a write of
947 As it is a tight loop, it benchmarks as hot cache. That's fine because
966 This option creates a test to stress the ring buffer and benchmark it.
969 a producer and consumer that will run for 10 seconds and sleep for
971 it recorded and give a rough estimate of how long each iteration took.
987 To fix this, there's a special macro in the kernel that can be used
1012 it adds overhead. This option will create a file in the tracefs
1014 that triggered a recursion.
1027 the functions that caused a recursion to happen.
1039 called outside of RCU, as if they are, it can cause a race. But it
1040 also has a noticeable overhead when enabled.
1067 Note that on a kernel compiled with this config, ftrace will
1074 bool "Perform a startup test on ftrace"
1078 This option performs a series of startup tests on ftrace. On bootup
1079 a series of tests are made to verify that the tracer is
1088 This option performs a test on all trace events in the system.
1091 This may take some time to run as there are a lot of events.
1099 with the event enabled. This adds a bit more time for kernel boot
1102 TBD - enable a way to actually call the syscalls as we test their
1123 Run a simple self test on the ring buffer on boot up. Late in the
1125 a thread per cpu. Each thread will write various size events
1129 If any anomalies are discovered, a warning will be displayed
1148 events on a sub buffer matches the current time stamp.
1166 This is a dumb module for testing mmiotrace. It is very dangerous
1167 as it will write garbage to IO memory starting at a given address.
1173 tristate "Test module to create a preempt / IRQ disable delay thread to test latency tracers"
1176 Select this option to build a test module that can help test latency
1177 tracers by executing a preempt or irq disable section with a user
1181 For example, the following invocation generates a burst of three
1195 This option creates a test module to check the base
1208 This option creates a test module to check the base
1221 dump out a bunch of internal details about the hist triggers
1224 The hist_debug file serves a couple of purposes: