Searched refs:smp_mb (Results 1 – 25 of 34) sorted by relevance
| /tools/include/asm/ |
| barrier.h | 46 #ifndef smp_mb 47 # define smp_mb() mb() macro 53 smp_mb(); \ 62 smp_mb(); \
|
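Note: the tools/include/asm/barrier.h hits above show the fallback pattern in which smp_mb() defaults to mb() when the architecture header has not already defined it. A minimal sketch of that #ifndef fallback idiom (illustrative only; the stdatomic-based mb() stand-in is an assumption, not what the real header does):

    /* Sketch of the "define smp_mb() only if the arch header didn't" fallback. */
    #include <stdatomic.h>

    #ifndef mb
    # define mb()     atomic_thread_fence(memory_order_seq_cst)  /* assumed generic fallback */
    #endif

    #ifndef smp_mb    /* the per-arch header may already have provided one */
    # define smp_mb() mb()
    #endif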
| /tools/memory-model/litmus-tests/ |
| IRIW+fencembonceonces+OnceOnce.litmus | 6 * Test of independent reads from independent writes with smp_mb() 7 * between each pairs of reads. In other words, is smp_mb() sufficient to 26 smp_mb(); 41 smp_mb();
|
| R+fencembonceonces.litmus | 6 * This is the fully ordered (via smp_mb()) version of one of the classic 17 smp_mb(); 26 smp_mb();
|
| SB+fencembonceonces.litmus | 19 smp_mb(); 28 smp_mb();
|
| R+poonceonces.litmus | 6 * This is the unordered (thus lacking smp_mb()) version of one of the
|
| ISA2+pooncelock+pooncelock+pombonce.litmus | 36 smp_mb();
|
| README | 24 Test of independent reads from independent writes with smp_mb() 25 between each pairs of reads. In other words, is smp_mb() 41 separated by smp_mb(). This addition of an external process to 53 Does a control dependency and an smp_mb() suffice for the 109 This is the fully ordered (via smp_mb()) version of one of 114 As above, but without the smp_mb() invocations. 117 This is the fully ordered (again, via smp_mb() version of store 122 As above, but without the smp_mb() invocations.
|
| Z6.0+pooncelock+pooncelock+pombonce.litmus | 36 smp_mb();
|
| LB+fencembonceonce+ctrlonceonce.litmus | 30 smp_mb();
|
| Z6.0+pooncerelease+poacquirerelease+fencembonceonce.litmus | 38 smp_mb();
|
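Note: the SB (store buffering) and IRIW litmus tests above place smp_mb() between each CPU's own store and its later load. A rough userspace analogue of the SB+fencembonceonces shape, with GCC builtins standing in for WRITE_ONCE()/READ_ONCE() and __sync_synchronize() for smp_mb() (the names and the assertion are illustrative, not taken from the litmus files):

    /* Sketch of the store-buffering (SB) pattern with full fences:
     * with both fences present, r0 == 0 && r1 == 0 is forbidden. */
    #include <pthread.h>
    #include <assert.h>

    static int x, y;        /* shared variables, both initially 0 */
    static int r0, r1;      /* values observed by each thread */

    static void *cpu0(void *arg)
    {
            (void)arg;
            __atomic_store_n(&x, 1, __ATOMIC_RELAXED);   /* like WRITE_ONCE(x, 1) */
            __sync_synchronize();                        /* like smp_mb() */
            r0 = __atomic_load_n(&y, __ATOMIC_RELAXED);  /* like READ_ONCE(y) */
            return NULL;
    }

    static void *cpu1(void *arg)
    {
            (void)arg;
            __atomic_store_n(&y, 1, __ATOMIC_RELAXED);
            __sync_synchronize();
            r1 = __atomic_load_n(&x, __ATOMIC_RELAXED);
            return NULL;
    }

    int main(void)
    {
            pthread_t t0, t1;

            pthread_create(&t0, NULL, cpu0, NULL);
            pthread_create(&t1, NULL, cpu1, NULL);
            pthread_join(t0, NULL);
            pthread_join(t1, NULL);

            assert(!(r0 == 0 && r1 == 0));  /* forbidden outcome with both fences */
            return 0;
    }

Dropping either fence (as in the *+poonceonces variants above) makes the 0/0 outcome possible on weakly ordered hardware.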
| /tools/testing/selftests/bpf/ |
| bpf_atomic.h | 62 #define smp_mb() \ macro 71 smp_mb(); \ 79 smp_mb(); \ 91 smp_mb(); \ 99 smp_mb(); \
|
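Note: the bpf_atomic.h hits show smp_mb() placed on both sides of several helpers; the actual macro bodies are elided in the listing. The general idiom of bracketing a relaxed read-modify-write with full barriers so it behaves as fully ordered looks roughly like this (a sketch; the helper name and the smp_mb() stand-in are illustrative, not the bpf_atomic.h code):

    #define smp_mb() __sync_synchronize()   /* assumed userspace stand-in */

    /* Make a relaxed RMW fully ordered by bracketing it with full barriers. */
    static inline int fetch_add_fully_ordered(int *p, int v)
    {
            int old;

            smp_mb();                                         /* order earlier accesses before the RMW */
            old = __atomic_fetch_add(p, v, __ATOMIC_RELAXED);
            smp_mb();                                         /* order the RMW before later accesses */
            return old;
    }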
| /tools/virtio/ringtest/ |
| main.h | 117 #define smp_mb() asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc") macro 119 #define smp_mb() asm volatile("dmb ish" ::: "memory") macro 125 #define smp_mb() __sync_synchronize() macro 188 smp_mb(); /* Enforce dependency ordering from x */ \
|
| ring.c | 183 smp_mb(); in enable_call() 193 smp_mb(); in kick_available() 215 smp_mb(); in enable_kick() 260 smp_mb(); in call_used()
|
| virtio_ring_0_9.c | 222 smp_mb(); in enable_call() 232 smp_mb(); in kick_available() 254 smp_mb(); in enable_kick() 325 smp_mb(); in call_used()
|
| main.c | 369 smp_mb(); in main()
|
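Note: the ringtest hits cluster in enable_call()/enable_kick()-style paths, where the classic reason for smp_mb() is the "enable notifications, then re-check the ring" idiom: the barrier keeps the flag store from being reordered after the re-check load, closing the lost-wakeup window. A simplified sketch of that idiom (the struct and function names are illustrative, not the actual ringtest API):

    #define smp_mb() __sync_synchronize()   /* assumed userspace stand-in */

    struct ring {
            volatile int notify_enabled;    /* consumer requests a notification */
            volatile int head, tail;        /* producer/consumer positions */
    };

    /* Returns non-zero if entries arrived while notifications were being
     * enabled, in which case the caller must process them itself. */
    static int enable_notifications(struct ring *r)
    {
            r->notify_enabled = 1;          /* publish the request */
            smp_mb();                       /* order the store above before the load below */
            return r->head != r->tail;      /* re-check for work that raced in */
    }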
| /tools/arch/x86/include/asm/ |
| barrier.h | 29 #define smp_mb() asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc") macro
|
| /tools/arch/riscv/include/asm/ |
| barrier.h | 22 #define smp_mb() RISCV_FENCE(rw, rw) macro
|
| /tools/arch/arm64/include/asm/ |
| barrier.h | 23 #define smp_mb() asm volatile("dmb ish" ::: "memory") macro
|
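Note: the three arch headers above show how smp_mb() maps onto real instructions: a locked add to the stack on x86, "dmb ish" on arm64, and a full rw,rw fence on riscv. A hedged sketch of selecting among those same implementations at compile time, with the __sync_synchronize() fallback seen in tools/virtio/ringtest/main.h (the preprocessor arrangement is illustrative, not how the tools headers are actually wired together):

    #if defined(__x86_64__)
    # define smp_mb() asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc")
    #elif defined(__aarch64__)
    # define smp_mb() asm volatile("dmb ish" ::: "memory")
    #elif defined(__riscv)
    # define smp_mb() asm volatile("fence rw, rw" ::: "memory")  /* roughly what RISCV_FENCE(rw, rw) expands to */
    #else
    # define smp_mb() __sync_synchronize()  /* generic fallback */
    #endif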
| /tools/memory-model/Documentation/ |
| recipes.txt | 153 smp_mb(); 187 smp_mb(); 341 * smp_wmb() (B) smp_mb() (D) 350 smp_wmb() would also work with smp_mb() replacing either or both of the 378 smp_mb(); 396 * smp_wmb() (B) smp_mb() (D) 461 smp_mb(); 487 smp_mb(); 494 smp_mb(); 498 Omitting either smp_mb() will allow both r0 and r1 to have final [all …]
|
| glossary.txt | 77 smp_mb(); smp_mb(); smp_mb(); 80 CPU 0's smp_mb() interacts with that of CPU 1, which interacts 82 to complete the cycle. Because of the smp_mb() calls between 117 Fully Ordered: An operation such as smp_mb() that orders all of
|
| ordering.txt | 51 smp_mb(), use mb(). See the "Linux Kernel Device Drivers" book or the 60 o The smp_mb() full memory barrier. 67 First, the smp_mb() full memory barrier orders all of the CPU's prior 74 smp_mb(); // Order store to x before load from y. 133 all architectures is to add a call to smp_mb(): 137 smp_mb(); // Inefficient on x86!!! 140 This works, but the added smp_mb() adds needless overhead for 279 ordering, and an smp_mb() would be needed instead: 282 smp_mb(); 285 But smp_mb() often incurs much higher overhead than does [all …]
|
| cheatsheet.txt | 14 smp_mb() & synchronize_rcu() CP Y Y Y Y Y Y Y Y
|
| herd-representation.txt | 44 | smp_mb | F[MB] |
|
| control-dependencies.txt | 224 stores and later loads, smp_mb(). 228 either by preceding both of them with smp_mb() or by using 255 all other accesses, use smp_mb().
|
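Note: the recipes.txt hits above pair a producer-side smp_wmb() (label (B)) with a consumer-side barrier (label (D)), and line 350 notes that smp_mb() may replace either or both. A minimal message-passing sketch of that recipe under those assumptions; the variable names and the __sync_synchronize()-based stand-ins are illustrative:

    /* Sketch of the message-passing recipe: publish a payload, then a flag;
     * the reader orders the flag load before the payload load. */
    #define smp_wmb() __sync_synchronize()  /* assumed stand-in; a store fence would suffice */
    #define smp_rmb() __sync_synchronize()  /* assumed stand-in; a load fence would suffice */

    static int data;
    static int ready;

    static void producer(void)
    {
            __atomic_store_n(&data, 42, __ATOMIC_RELAXED);  /* (A) write the payload */
            smp_wmb();                                      /* (B) order payload before flag */
            __atomic_store_n(&ready, 1, __ATOMIC_RELAXED);  /* (C) publish */
    }

    static int consumer(void)
    {
            if (__atomic_load_n(&ready, __ATOMIC_RELAXED)) {         /* flag first */
                    smp_rmb();                                       /* (D) flag before payload */
                    return __atomic_load_n(&data, __ATOMIC_RELAXED); /* guaranteed to see 42 */
            }
            return -1;                                      /* nothing published yet */
    }

As the ordering.txt snippets note, a full smp_mb() in either position would also work but costs more than the one-sided barriers on most architectures.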
| /tools/perf/arch/arm/util/ |
| auxtrace.c | 209 smp_mb(); in compat_auxtrace_mmap__write_tail()
|
Completed in 26 milliseconds