
Searched refs:load (Results 1 – 25 of 86) sorted by relevance


/tools/testing/selftests/powerpc/ptrace/
ptrace-vsx.h:18 if (vsx[i] != load[2 * i + 1]) { in validate_vsx()
41 load[64 + 2 * i]); in validate_vmx()
44 load[65 + 2 * i]); in validate_vmx()
55 load[65 + 2 * i]); in validate_vmx()
58 load[64 + 2 * i]); in validate_vmx()
78 1 + 2 * i, load[i]); in compare_vsx_vmx()
85 if (store[i] != load[i]) { in compare_vsx_vmx()
87 i, store[i], i, load[i]); in compare_vsx_vmx()
114 vsx[i] = load[1 + 2 * i]; in load_vsx_vmx()
117 vmx[i][0] = load[64 + 2 * i]; in load_vsx_vmx()
[all …]
/tools/perf/scripts/python/bin/
mem-phys-addr-record:8 load=`perf list | grep mem_inst_retired.all_loads`
9 if [ -z "$load" ]; then
10 load=`perf list | grep mem_uops_retired.all_loads`
12 if [ -z "$load" ]; then
17 arg=$(echo $load | tr -d ' ')
/tools/power/cpupower/bench/
README-BENCH:34 You can specify load (100% CPU load) and sleep (0% CPU load) times in us which
38 load=25000
48 Will increase load and sleep time by 25ms 5 times.
50 25ms load/sleep time repeated 20 times (cycles).
51 50ms load/sleep time repeated 20 times (cycles).
53 100ms load/sleep time repeated 20 times (cycles).
69 100% CPU load (load) | 0 % CPU load (sleep) | round
88 load -----| |-----| |-----| |-----|
109 -l, --load=<long int> initial load time in us
111 -x, --load-step=<long int> time to be added to load time, in us
[all …]
benchmark.c:32 unsigned int calculate_timespace(long load, struct config *config) in calculate_timespace() argument
41 printf("calibrating load of %lius, please wait...\n", load); in calculate_timespace()
53 rounds = (unsigned int)(load * estimated / timed); in calculate_timespace()
88 load_time = config->load; in start_benchmark()
92 total_time += _round * (config->sleep + config->load); in start_benchmark()
system.c:136 (config->load + config->load_step * round) + in prepare_user()
137 (config->load + config->load_step * round * 4); in prepare_user()
example.cfg:2 load = 50000
main.c:97 sscanf(optarg, "%li", &config->load); in main()
169 config->load, in main()
parse.h:11 long load; /* load time in µs */ member
/tools/testing/selftests/sgx/
Makefile:29 $(OUTPUT)/load.o \
38 $(OUTPUT)/load.o: load.c
55 $(OUTPUT)/load.o \
/tools/testing/selftests/powerpc/security/
flush_utils.c:21 static inline __u64 load(void *addr) in load() function
35 load(p + j); in syscall_loop()
47 load(p + j); in syscall_loop_uaccess()
/tools/perf/util/
jitdump.c:337 jr->load.pid = bswap_32(jr->load.pid); in jit_get_next_entry()
338 jr->load.tid = bswap_32(jr->load.tid); in jit_get_next_entry()
339 jr->load.vma = bswap_64(jr->load.vma); in jit_get_next_entry()
340 jr->load.code_addr = bswap_64(jr->load.code_addr); in jit_get_next_entry()
341 jr->load.code_size = bswap_64(jr->load.code_size); in jit_get_next_entry()
342 jr->load.code_index= bswap_64(jr->load.code_index); in jit_get_next_entry()
382 return jr->load.pid; in jr_entry_pid()
389 return jr->load.tid; in jr_entry_tid()
443 nspid = jr->load.pid; in jit_repipe_code_load()
446 csize = jr->load.code_size; in jit_repipe_code_load()
[all …]
/tools/memory-model/Documentation/
glossary.txt:8 based on the value returned by an earlier load, an "address
9 dependency" extends from that load extending to the later access.
29 a special operation that includes a load and which orders that
30 load before later memory references running on that same CPU.
35 When an acquire load returns the value stored by a release store
36 to that same variable, (in other words, the acquire load "reads
55 of a value computed from a value returned by an earlier load,
56 a "control dependency" extends from that load to that store.
89 on the value returned by an earlier load, a "data dependency"
90 extends from that load to that later store. For example:
[all …]
control-dependencies.txt:12 Therefore, a load-load control dependency will not preserve ordering
20 are permitted to predict the result of the load from "b". This prediction
21 can cause other CPUs to see this load as having happened before the load
32 (usually) guaranteed for load-store control dependencies, as in the
42 fuse the load from "a" with other loads. Without the WRITE_ONCE(),
44 the compiler might convert the store into a load and a check followed
45 by a store, and this compiler-generated load would not be ordered by
78 WRITE_ONCE(b, 1); /* BUG: No ordering vs. load from a!!! */
87 Now there is no conditional between the load from "a" and the store to
149 BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
[all …]
explanation.txt:481 a control dependency from the load to the store.
559 of load-tearing, where a load obtains some of its bits from one store
844 execute the load associated with the fence (e.g., the load
1364 P1 must execute its second load before the first. Indeed, if the load
1496 load: an fre link from P0's load to P1's store (which overwrites the
2085 therefore the load of x must execute before the load of y, even though
2486 where the compiler adds a load.
2511 the LKMM says that the marked load of ptr pre-bounds the plain load of
2549 that the load of ptr in P1 is r-pre-bounded before the load of *p
2575 could now perform the load of x before the load of ptr (there might be
[all …]
/tools/testing/selftests/bpf/
test_cpp.cpp:42 int load() { return T::load(skel); } in load() function in Skeleton
75 err = skel.load(); in try_skeleton_template()
test_bpftool_metadata.sh:61 bpftool prog load $BPF_FILE_UNUSED $BPF_DIR/unused
73 bpftool prog load $BPF_FILE_USED $BPF_DIR/used
generate_udp_fragments.py:72 pkt = IP(src=sip,dst=dip) / UDP(sport=sport,dport=dport,chksum=0) / Raw(load=payload)
75 …c=sip6,dst=dip6) / IPv6ExtHdrFragment(id=0xBEEF) / UDP(sport=sport,dport=dport) / Raw(load=payload)
/tools/testing/selftests/bpf/prog_tests/
ksyms_module.c:27 err = bpf_prog_test_run_opts(skel->progs.load.prog_fd, &topts); in test_ksyms_module_lskel()
54 err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.load), &topts); in test_ksyms_module_libbpf()
/tools/perf/Documentation/
perf-c2c.txt:26 sample load and store operations, therefore hardware and kernel support is
33 - type of the access (load and store details)
34 - latency (in cycles) of the load access
209 - count of Total/Local/Remote load HITMs
212 - count of Total/Local/Remote load from peer cache or DRAM
218 - sum of all load accesses
229 - count of load hits in FB (Fill Buffer), L1 and L2 cache
232 - count of LLC load accesses, includes LLC hits and LLC HITMs
266 cycles - rmt hitm, lcl hitm, load (Display with HITM types)
269 cycles - rmt peer, lcl peer, load (Display with peer type)
[all …]
perf-mem.txt:23 not the pure load (or store latency). Use latency includes any pipeline
26 On Arm64 this uses SPE to sample load and store operations, therefore hardware
31 On AMD this use IBS Op PMU to sample load-store operations.
41 Select the memory operation type: load or store (default: load,store)
117 - blocked: reason of blocked load access for the data at the time of the sample
130 - op: operation in the sample instruction (load, store, prefetch, ...)
/tools/perf/scripts/perl/Perf-Trace-Util/lib/Perf/Trace/
Context.pm:23 XSLoader::load('Perf::Trace::Context', $VERSION);
/tools/testing/selftests/kexec/
test_kexec_load.sh:32 kexec --load $KERNEL_IMAGE > /dev/null 2>&1
/tools/memory-model/litmus-tests/
LB+poonceonces.litmus:6 * Can the counter-intuitive outcome for the load-buffering pattern
LB+poacquireonce+pooncerelease.litmus:6 * Does a release-acquire pair suffice for the load-buffering litmus
S+poonceonces.litmus:7 * first store against P1()'s final load, if the smp_store_release()

