/tools/testing/selftests/powerpc/ptrace/

ptrace-vsx.h:
     18  if (vsx[i] != load[2 * i + 1]) {  in validate_vsx()
     41  load[64 + 2 * i]);  in validate_vmx()
     44  load[65 + 2 * i]);  in validate_vmx()
     55  load[65 + 2 * i]);  in validate_vmx()
     58  load[64 + 2 * i]);  in validate_vmx()
     78  1 + 2 * i, load[i]);  in compare_vsx_vmx()
     85  if (store[i] != load[i]) {  in compare_vsx_vmx()
     87  i, store[i], i, load[i]);  in compare_vsx_vmx()
    114  vsx[i] = load[1 + 2 * i];  in load_vsx_vmx()
    117  vmx[i][0] = load[64 + 2 * i];  in load_vsx_vmx()
    [all …]
|
/tools/perf/scripts/python/bin/

mem-phys-addr-record:
      8  load=`perf list | grep mem_inst_retired.all_loads`
      9  if [ -z "$load" ]; then
     10  load=`perf list | grep mem_uops_retired.all_loads`
     12  if [ -z "$load" ]; then
     17  arg=$(echo $load | tr -d ' ')
|
/tools/power/cpupower/bench/

README-BENCH:
     34  You can specify load (100% CPU load) and sleep (0% CPU load) times in us which
     38  load=25000
     48  Will increase load and sleep time by 25ms 5 times.
     50  25ms load/sleep time repeated 20 times (cycles).
     51  50ms load/sleep time repeated 20 times (cycles).
     53  100ms load/sleep time repeated 20 times (cycles).
     69  100% CPU load (load) | 0 % CPU load (sleep) | round
     88  load -----| |-----| |-----| |-----|
    109  -l, --load=<long int> initial load time in us
    111  -x, --load-step=<long int> time to be added to load time, in us
    [all …]
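The options quoted above map onto "key = value" lines in a bench config file
(see the example.cfg hit further down). Below is a minimal sketch of such a
file, mirroring the 25 ms initial time, 25 ms step and 5 rounds described in
the README excerpt; only "load" and "load_step" are evidenced by the hits
shown here, the remaining key names are assumptions:

    # hypothetical cpupower bench config; key names other than
    # "load" and "load_step" are assumed, not taken from the hits above
    load = 25000         # initial load time in us          (-l / --load)
    load_step = 25000    # added to the load time per round (-x / --load-step)
    sleep = 25000        # assumed: initial sleep time in us
    sleep_step = 25000   # assumed: added to the sleep time per round
    rounds = 5           # assumed: number of load/sleep increments
    cycles = 20          # assumed: load/sleep repetitions per round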
|
benchmark.c:
     32  unsigned int calculate_timespace(long load, struct config *config)  in calculate_timespace() argument
     41  printf("calibrating load of %lius, please wait...\n", load);  in calculate_timespace()
     53  rounds = (unsigned int)(load * estimated / timed);  in calculate_timespace()
     88  load_time = config->load;  in start_benchmark()
     92  total_time += _round * (config->sleep + config->load);  in start_benchmark()
|
system.c:
    136  (config->load + config->load_step * round) +  in prepare_user()
    137  (config->load + config->load_step * round * 4);  in prepare_user()
|
example.cfg:
      2  load = 50000
|
main.c:
     97  sscanf(optarg, "%li", &config->load);  in main()
    169  config->load,  in main()
|
parse.h:
     11  long load; /* load time in µs */  member
|
/tools/testing/selftests/sgx/

Makefile:
     29  $(OUTPUT)/load.o \
     38  $(OUTPUT)/load.o: load.c
     55  $(OUTPUT)/load.o \
|
/tools/testing/selftests/powerpc/security/

flush_utils.c:
     21  static inline __u64 load(void *addr)  in load() function
     35  load(p + j);  in syscall_loop()
     47  load(p + j);  in syscall_loop_uaccess()
|
/tools/perf/util/

jitdump.c:
    337  jr->load.pid = bswap_32(jr->load.pid);  in jit_get_next_entry()
    338  jr->load.tid = bswap_32(jr->load.tid);  in jit_get_next_entry()
    339  jr->load.vma = bswap_64(jr->load.vma);  in jit_get_next_entry()
    340  jr->load.code_addr = bswap_64(jr->load.code_addr);  in jit_get_next_entry()
    341  jr->load.code_size = bswap_64(jr->load.code_size);  in jit_get_next_entry()
    342  jr->load.code_index= bswap_64(jr->load.code_index);  in jit_get_next_entry()
    382  return jr->load.pid;  in jr_entry_pid()
    389  return jr->load.tid;  in jr_entry_tid()
    443  nspid = jr->load.pid;  in jit_repipe_code_load()
    446  csize = jr->load.code_size;  in jit_repipe_code_load()
    [all …]
|
/tools/memory-model/Documentation/

glossary.txt:
      8  based on the value returned by an earlier load, an "address
      9  dependency" extends from that load extending to the later access.
     29  a special operation that includes a load and which orders that
     30  load before later memory references running on that same CPU.
     35  When an acquire load returns the value stored by a release store
     36  to that same variable, (in other words, the acquire load "reads
     55  of a value computed from a value returned by an earlier load,
     56  a "control dependency" extends from that load to that store.
     89  on the value returned by an earlier load, a "data dependency"
     90  extends from that load to that later store. For example:
    [all …]
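The acquire/release wording quoted above (an acquire load "reads from" a
release store) is easiest to see in a two-CPU fragment. This is a sketch in
the style of the memory-model documentation, using the kernel's
smp_store_release()/smp_load_acquire() helpers; the variable names are
invented for the example:

    int data, flag;
    int r0, r1;

    /* CPU 0: publish data, then release-store the flag. */
    WRITE_ONCE(data, 42);
    smp_store_release(&flag, 1);

    /* CPU 1: if this acquire load reads from CPU 0's release store,
     * everything CPU 0 did before the release (the store to data)
     * is guaranteed to be visible to the accesses that follow.
     */
    r0 = smp_load_acquire(&flag);
    r1 = READ_ONCE(data);    /* r0 == 1 implies r1 == 42 */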
|
control-dependencies.txt:
     12  Therefore, a load-load control dependency will not preserve ordering
     20  are permitted to predict the result of the load from "b". This prediction
     21  can cause other CPUs to see this load as having happened before the load
     32  (usually) guaranteed for load-store control dependencies, as in the
     42  fuse the load from "a" with other loads. Without the WRITE_ONCE(),
     44  the compiler might convert the store into a load and a check followed
     45  by a store, and this compiler-generated load would not be ordered by
     78  WRITE_ONCE(b, 1); /* BUG: No ordering vs. load from a!!! */
     87  Now there is no conditional between the load from "a" and the store to
    149  BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
    [all …]
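The load-store control dependency these hits keep referring to has the
following shape (a sketch in the style of that document, with invented
variable names); the ordering exists only because the marked store sits in a
conditional branch whose outcome depends on the marked load:

    q = READ_ONCE(a);
    if (q)
        WRITE_ONCE(b, 1);    /* ordered after the load from "a" */

    /* If both branches end up performing the same store, the compiler may
     * hoist it out of the conditional and the ordering is gone; this is
     * the situation the "BUG: No ordering vs. load from a" comment quoted
     * above warns about.
     */
    q = READ_ONCE(a);
    if (q)
        WRITE_ONCE(b, 1);
    else
        WRITE_ONCE(b, 1);    /* same store on both paths: no ordering */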
|
explanation.txt:
    481  a control dependency from the load to the store.
    559  of load-tearing, where a load obtains some of its bits from one store
    844  execute the load associated with the fence (e.g., the load
   1364  P1 must execute its second load before the first. Indeed, if the load
   1496  load: an fre link from P0's load to P1's store (which overwrites the
   2085  therefore the load of x must execute before the load of y, even though
   2486  where the compiler adds a load.
   2511  the LKMM says that the marked load of ptr pre-bounds the plain load of
   2549  that the load of ptr in P1 is r-pre-bounded before the load of *p
   2575  could now perform the load of x before the load of ptr (there might be
    [all …]
|
/tools/testing/selftests/bpf/

test_cpp.cpp:
     42  int load() { return T::load(skel); }  in load() function in Skeleton
     75  err = skel.load();  in try_skeleton_template()
|
test_bpftool_metadata.sh:
     61  bpftool prog load $BPF_FILE_UNUSED $BPF_DIR/unused
     73  bpftool prog load $BPF_FILE_USED $BPF_DIR/used
|
generate_udp_fragments.py:
     72  pkt = IP(src=sip,dst=dip) / UDP(sport=sport,dport=dport,chksum=0) / Raw(load=payload)
     75  …c=sip6,dst=dip6) / IPv6ExtHdrFragment(id=0xBEEF) / UDP(sport=sport,dport=dport) / Raw(load=payload)
|
/tools/testing/selftests/bpf/prog_tests/

ksyms_module.c:
     27  err = bpf_prog_test_run_opts(skel->progs.load.prog_fd, &topts);  in test_ksyms_module_lskel()
     54  err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.load), &topts);  in test_ksyms_module_libbpf()
|
/tools/perf/Documentation/

perf-c2c.txt:
     26  sample load and store operations, therefore hardware and kernel support is
     33  - type of the access (load and store details)
     34  - latency (in cycles) of the load access
    209  - count of Total/Local/Remote load HITMs
    212  - count of Total/Local/Remote load from peer cache or DRAM
    218  - sum of all load accesses
    229  - count of load hits in FB (Fill Buffer), L1 and L2 cache
    232  - count of LLC load accesses, includes LLC hits and LLC HITMs
    266  cycles - rmt hitm, lcl hitm, load (Display with HITM types)
    269  cycles - rmt peer, lcl peer, load (Display with peer type)
    [all …]
|
perf-mem.txt:
     23  not the pure load (or store latency). Use latency includes any pipeline
     26  On Arm64 this uses SPE to sample load and store operations, therefore hardware
     31  On AMD this use IBS Op PMU to sample load-store operations.
     41  Select the memory operation type: load or store (default: load,store)
    117  - blocked: reason of blocked load access for the data at the time of the sample
    130  - op: operation in the sample instruction (load, store, prefetch, ...)
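Both documents describe a record-then-report workflow; a minimal usage sketch
follows (the workload name is a placeholder, the exact invocation and the
events sampled depend on the CPU and kernel support mentioned above, so treat
the flags as illustrative rather than definitive):

    # perf-mem: sample memory accesses (here only loads), then report them;
    # "-t load" corresponds to the operation-type option quoted above
    perf mem -t load record ./workload
    perf mem report

    # perf-c2c: the same sampling machinery, reported as cacheline contention
    perf c2c record ./workload
    perf c2c report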
|
/tools/perf/scripts/perl/Perf-Trace-Util/lib/Perf/Trace/

Context.pm:
     23  XSLoader::load('Perf::Trace::Context', $VERSION);
|
/tools/testing/selftests/kexec/

test_kexec_load.sh:
     32  kexec --load $KERNEL_IMAGE > /dev/null 2>&1
|
/tools/memory-model/litmus-tests/

LB+poonceonces.litmus:
      6  * Can the counter-intuitive outcome for the load-buffering pattern
|
LB+poacquireonce+pooncerelease.litmus:
      6  * Does a release-acquire pair suffice for the load-buffering litmus
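The load-buffering (LB) shape both of these tests probe looks roughly like
the following (a sketch in LKMM litmus syntax, not a copy of either file):
each CPU loads one shared variable and then stores to the other, and the
"exists" clause asks whether both loads can observe the other CPU's store.
The second test replaces one plain access on each CPU with
smp_load_acquire()/smp_store_release() to ask whether that pairing rules the
outcome out.

    C LB-sketch

    {}

    P0(int *x, int *y)
    {
            int r0;

            r0 = READ_ONCE(*x);
            WRITE_ONCE(*y, 1);
    }

    P1(int *x, int *y)
    {
            int r1;

            r1 = READ_ONCE(*y);
            WRITE_ONCE(*x, 1);
    }

    exists (0:r0=1 /\ 1:r1=1)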
|
S+poonceonces.litmus:
      7  * first store against P1()'s final load, if the smp_store_release()