Searched refs:smp_mb (Results 1 – 25 of 34) sorted by relevance

/tools/include/asm/
  barrier.h
    46: #ifndef smp_mb
    47: # define smp_mb() mb()
    53: smp_mb(); \
    62: smp_mb(); \
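  The barrier.h hits show the generic fallback idiom: the per-architecture header is included first, and any barrier it did not define falls back to the heavyweight full barrier mb(). A minimal sketch of the idiom (illustrative, not the verbatim header):

    /* Sketch only: the real header selects the per-arch barrier.h
     * with a chain of #if defined(...) tests. */
    #if defined(__x86_64__)
    #include "../../arch/x86/include/asm/barrier.h"
    #endif

    /* Fallback: if the architecture did not define smp_mb(),
     * use the stronger full barrier mb(). */
    #ifndef smp_mb
    # define smp_mb() mb()
    #endif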
/tools/memory-model/litmus-tests/
  IRIW+fencembonceonces+OnceOnce.litmus
    6: * Test of independent reads from independent writes with smp_mb()
    7: * between each pair of reads. In other words, is smp_mb() sufficient to
    26: smp_mb();
    41: smp_mb();
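  From the name and the description in lines 6-7, this file has the classic IRIW shape: two writers store to independent variables, and two readers sample both variables with smp_mb() between their reads; the question is whether the readers can disagree on the order of the writes. A reconstructed sketch of the shape in LKMM C-litmus form (not the verbatim file):

    C IRIW-sketch

    {}

    P0(int *x)
    {
            WRITE_ONCE(*x, 1);
    }

    P1(int *x, int *y)
    {
            int r0;
            int r1;

            r0 = READ_ONCE(*x);
            smp_mb();
            r1 = READ_ONCE(*y);
    }

    P2(int *y)
    {
            WRITE_ONCE(*y, 1);
    }

    P3(int *x, int *y)
    {
            int r0;
            int r1;

            r0 = READ_ONCE(*y);
            smp_mb();
            r1 = READ_ONCE(*x);
    }

    (* Forbidden: the readers disagree on the order of the two writes. *)
    exists (1:r0=1 /\ 1:r1=0 /\ 3:r0=1 /\ 3:r1=0)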
  R+fencembonceonces.litmus
    6: * This is the fully ordered (via smp_mb()) version of one of the classic
    17: smp_mb();
    26: smp_mb();
  SB+fencembonceonces.litmus
    19: smp_mb();
    28: smp_mb();
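  SB is the store-buffering pattern: each process stores to one shared variable and then loads the other. With smp_mb() between each store and load, the both-loads-see-zero outcome is forbidden; without the barriers it is allowed. A reconstructed sketch (not the verbatim file):

    C SB-sketch

    {}

    P0(int *x, int *y)
    {
            int r0;

            WRITE_ONCE(*x, 1);
            smp_mb();
            r0 = READ_ONCE(*y);
    }

    P1(int *x, int *y)
    {
            int r0;

            WRITE_ONCE(*y, 1);
            smp_mb();
            r0 = READ_ONCE(*x);
    }

    (* Forbidden with the fences; allowed if both are removed. *)
    exists (0:r0=0 /\ 1:r0=0)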
  R+poonceonces.litmus
    6: * This is the unordered (thus lacking smp_mb()) version of one of the
  ISA2+pooncelock+pooncelock+pombonce.litmus
    36: smp_mb();
  README
    24: Test of independent reads from independent writes with smp_mb()
    25: between each pair of reads. In other words, is smp_mb()
    41: separated by smp_mb(). This addition of an external process to
    53: Does a control dependency and an smp_mb() suffice for the
    109: This is the fully ordered (via smp_mb()) version of one of
    114: As above, but without the smp_mb() invocations.
    117: This is the fully ordered (again, via smp_mb()) version of store
    122: As above, but without the smp_mb() invocations.
  Z6.0+pooncelock+pooncelock+pombonce.litmus
    36: smp_mb();
  LB+fencembonceonce+ctrlonceonce.litmus
    30: smp_mb();
  Z6.0+pooncerelease+poacquirerelease+fencembonceonce.litmus
    38: smp_mb();
/tools/testing/selftests/bpf/
  bpf_atomic.h
    62: #define smp_mb() \
    71: smp_mb(); \
    79: smp_mb(); \
    91: smp_mb(); \
    99: smp_mb(); \
/tools/virtio/ringtest/
  main.h
    117: #define smp_mb() asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc")
    119: #define smp_mb() asm volatile("dmb ish" ::: "memory")
    125: #define smp_mb() __sync_synchronize()
    188: smp_mb(); /* Enforce dependency ordering from x */ \
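  The three smp_mb() definitions above are alternatives chosen by preprocessor conditionals, giving the ringtest harness one portable full barrier per host architecture. Roughly (reconstructed from the hits; the exact conditionals are an assumption):

    #if defined(__x86_64__) || defined(__i386__)
    /* A locked read-modify-write to the stack is a full barrier on
     * x86 and is typically cheaper than mfence. */
    #define smp_mb() asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc")
    #elif defined(__aarch64__)
    #define smp_mb() asm volatile("dmb ish" ::: "memory")
    #else
    /* Portable fallback: the GCC/Clang full-barrier builtin. */
    #define smp_mb() __sync_synchronize()
    #endif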
  ring.c
    183: smp_mb();  (in enable_call())
    193: smp_mb();  (in kick_available())
    215: smp_mb();  (in enable_kick())
    260: smp_mb();  (in call_used())
  virtio_ring_0_9.c
    222: smp_mb();  (in enable_call())
    232: smp_mb();  (in kick_available())
    254: smp_mb();  (in enable_kick())
    325: smp_mb();  (in call_used())
  main.c
    369: smp_mb();  (in main())
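  These hits cluster in the notification paths because ring-buffer wakeup suppression needs a full barrier on both sides: each party publishes its state (a ring entry, or an "I want a kick" flag), executes smp_mb(), and only then reads the other party's state, so a notification can never be lost. A hedged sketch of the producer side; all names below are hypothetical:

    #define RING_SIZE 256                    /* hypothetical */

    struct ring {
            void *slot[RING_SIZE];           /* simplified entries */
            unsigned int head;
            int consumer_wants_kick;         /* event-suppression flag */
    };

    /* Publish an entry, then decide whether to notify the consumer. */
    static void producer_publish(struct ring *r, void *buf)
    {
            r->slot[r->head++ % RING_SIZE] = buf;  /* publish the entry */
            smp_mb();     /* order the publish before reading the flag */
            if (r->consumer_wants_kick) {
                    /* kick the consumer (eventfd, syscall, ...) */
            }
    }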
/tools/arch/x86/include/asm/
  barrier.h
    29: #define smp_mb() asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc")
/tools/arch/riscv/include/asm/
  barrier.h
    22: #define smp_mb() RISCV_FENCE(rw, rw)
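  RISCV_FENCE is a small stringifying wrapper around the RISC-V fence instruction, so smp_mb() emits a fence ordering prior reads and writes against later reads and writes. The expected expansion (an assumption; the real macro lives in the companion fence header):

    /* Assumed definition; #p and #s stringify the predecessor and
     * successor access sets, here both "rw". */
    #define RISCV_FENCE(p, s) \
            ({ __asm__ __volatile__ ("fence " #p ", " #s : : : "memory"); })

    #define smp_mb() RISCV_FENCE(rw, rw)   /* emits: fence rw, rw */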
/tools/arch/arm64/include/asm/
  barrier.h
    23: #define smp_mb() asm volatile("dmb ish" ::: "memory")
/tools/memory-model/Documentation/
  recipes.txt
    153: smp_mb();
    187: smp_mb();
    341: * smp_wmb() (B) smp_mb() (D)
    350: smp_wmb() would also work with smp_mb() replacing either or both of the
    378: smp_mb();
    396: * smp_wmb() (B) smp_mb() (D)
    461: smp_mb();
    487: smp_mb();
    494: smp_mb();
    498: Omitting either smp_mb() will allow both r0 and r1 to have final
    [all …]
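  The hits around lines 341-396 belong to the message-passing recipe: a writer publishes a payload and then a flag, and a reader checks the flag before reading the payload; per line 350, smp_mb() is a valid (stronger) substitute for either lighter barrier. A sketch of the recipe with hypothetical variable names:

    int payload, flag;                /* hypothetical shared variables */

    void writer(void)
    {
            WRITE_ONCE(payload, 42);
            smp_wmb();                /* or smp_mb(): payload before flag */
            WRITE_ONCE(flag, 1);
    }

    void reader(void)
    {
            if (READ_ONCE(flag)) {
                    smp_rmb();        /* or smp_mb(): flag before payload */
                    int r = READ_ONCE(payload);   /* must observe 42 */
                    (void)r;
            }
    }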
  glossary.txt
    77: smp_mb(); smp_mb(); smp_mb();
    80: CPU 0's smp_mb() interacts with that of CPU 1, which interacts
    82: to complete the cycle. Because of the smp_mb() calls between
    117: Fully Ordered: An operation such as smp_mb() that orders all of
  ordering.txt
    51: smp_mb(), use mb(). See the "Linux Kernel Device Drivers" book or the
    60: o The smp_mb() full memory barrier.
    67: First, the smp_mb() full memory barrier orders all of the CPU's prior
    74: smp_mb(); // Order store to x before load from y.
    133: all architectures is to add a call to smp_mb():
    137: smp_mb(); // Inefficient on x86!!!
    140: This works, but the added smp_mb() adds needless overhead for
    279: ordering, and an smp_mb() would be needed instead:
    282: smp_mb();
    285: But smp_mb() often incurs much higher overhead than does
    [all …]
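  Lines 133-140 make ordering.txt's efficiency point: following a non-value-returning atomic with a bare smp_mb() is correct everywhere but redundant on x86, whose locked atomics are already fully ordered; the kernel's answer is smp_mb__after_atomic(), which compiles to nothing there. A sketch of the contrast (variable names hypothetical):

    /* Portable but wasteful: correct on all architectures, yet the
     * barrier is redundant on x86. */
    atomic_inc(&ctr);
    smp_mb();                 // Inefficient on x86!!!
    r0 = READ_ONCE(y);

    /* Preferred: a no-op on x86, a real barrier on architectures
     * whose non-value-returning atomics are unordered. */
    atomic_inc(&ctr);
    smp_mb__after_atomic();
    r0 = READ_ONCE(y);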
  cheatsheet.txt
    14: smp_mb() & synchronize_rcu() CP Y Y Y Y Y Y Y Y
  herd-representation.txt
    44: | smp_mb | F[MB] |
  control-dependencies.txt
    224: stores and later loads, smp_mb().
    228: either by preceding both of them with smp_mb() or by using
    255: all other accesses, use smp_mb().
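  All three hits circle the same rule: a control dependency orders a prior load only against later stores, never against later loads; when later loads (or all later accesses) must be ordered too, use smp_mb(). A hedged sketch, variable names hypothetical:

    int q = READ_ONCE(x);
    if (q) {
            /* Ordered after the load of x by the control dependency. */
            WRITE_ONCE(y, 1);
            /* Needed: the control dependency does NOT order this load. */
            smp_mb();
            r1 = READ_ONCE(z);
    }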
/tools/perf/arch/arm/util/
  auxtrace.c
    209: smp_mb();  (in compat_auxtrace_mmap__write_tail())
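  This hit is the consumer side of a shared AUX ring buffer: user space must finish reading trace data before publishing the new tail, because the kernel may overwrite everything up to whatever tail it observes. A hedged sketch of the idiom with simplified, hypothetical types:

    /* Free ring space only after all reads from it have completed. */
    static void write_tail(volatile unsigned long long *tail_ptr,
                           unsigned long long new_tail)
    {
            smp_mb();               /* reads of the ring complete first */
            *tail_ptr = new_tail;   /* then the kernel may reuse space  */
    }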
