
Searched refs:smp_load_acquire (Results 1 – 25 of 39) sorted by relevance

/tools/memory-model/litmus-tests/
  MP+pooncerelease+poacquireonce.litmus
    7: * smp_load_acquire() provide sufficient ordering for the message-passing
    24: r0 = smp_load_acquire(flag);
  MP+polockmbonce+poacquiresilsil.litmus
    9: * returns false and the second true, we know that the smp_load_acquire()
    29: r1 = smp_load_acquire(x);
  MP+polockonce+poacquiresilsil.litmus
    9: * the smp_load_acquire() executed before the lock was acquired (loosely
    28: r1 = smp_load_acquire(x);
  ISA2+pooncerelease+poacquirerelease+poacquireonce.litmus
    26: r0 = smp_load_acquire(y);
    35: r0 = smp_load_acquire(z);
  S+fencewmbonceonce+poacquireonce.litmus
    23: r0 = smp_load_acquire(y);
  LB+poacquireonce+pooncerelease.litmus
    25: r0 = smp_load_acquire(y);
  S+poonceonces.litmus
    8: * is replaced by WRITE_ONCE() and the smp_load_acquire() replaced by
  MP+fencewmbonceonce+fencermbonceonce.litmus
    8: * is usually better to use smp_store_release() and smp_load_acquire().
  ISA2+poonceonces.litmus
    9: * of the smp_load_acquire() invocations are replaced by READ_ONCE()?
  dep+plain.litmus
    27: int r = smp_load_acquire(y);
  LB+unlocklockonceonce+poacquireonce.litmus
    31: r2 = smp_load_acquire(y);
  MP+polocks.litmus
    7: * stand in for smp_load_acquire() and smp_store_release(), respectively.
  MP+porevlocks.litmus
    7: * stand in for smp_load_acquire() and smp_store_release(), respectively.
  Z6.0+pooncerelease+poacquirerelease+fencembonceonce.litmus
    29: r0 = smp_load_acquire(y);
/tools/include/asm/
  barrier.h
    58: #ifndef smp_load_acquire
    59: # define smp_load_acquire(p) \ (macro)
/tools/arch/s390/include/asm/
  barrier.h
    37: #define smp_load_acquire(p) \ (macro)
/tools/arch/powerpc/include/asm/
  barrier.h
    39: #define smp_load_acquire(p) \ (macro)
/tools/arch/sparc/include/asm/
  barrier_64.h
    49: #define smp_load_acquire(p) \ (macro)
/tools/include/linux/
  ring_buffer.h
    59: return smp_load_acquire(&base->data_head); (in ring_buffer_read_head())
/tools/arch/x86/include/asm/
  barrier.h
    39: #define smp_load_acquire(p) \ (macro)
/tools/arch/riscv/include/asm/
  barrier.h
    32: #define smp_load_acquire(p) \ (macro)
/tools/lib/bpf/
  ringbuf.c
    244: cons_pos = smp_load_acquire(r->consumer_pos); (in ringbuf_process_ring())
    247: prod_pos = smp_load_acquire(r->producer_pos); (in ringbuf_process_ring())
    250: len = smp_load_acquire(len_ptr); (in ringbuf_process_ring())
    377: return smp_load_acquire(r->consumer_pos); (in ring__consumer_pos())
    385: return smp_load_acquire(r->producer_pos); (in ring__producer_pos())
    594: cons_pos = smp_load_acquire(rb->consumer_pos); (in user_ring_buffer__reserve())
    596: prod_pos = smp_load_acquire(rb->producer_pos); (in user_ring_buffer__reserve())
/tools/arch/arm64/include/asm/
  barrier.h
    64: #define smp_load_acquire(p) \ (macro)
/tools/memory-model/Documentation/
  locking.txt
    120: One way to fix this is to use smp_load_acquire() and smp_store_release()
    126: r0 = smp_load_acquire(&flag);
    140: The smp_load_acquire() guarantees that its load from "flags" will
    144: The smp_store_release() pairs with the smp_load_acquire(), thus ensuring
    150: this case, via the smp_load_acquire() and the smp_store_release().
/tools/testing/selftests/bpf/benchs/
  bench_ringbufs.c
    301: cons_pos = smp_load_acquire(r->consumer_pos); (in ringbuf_custom_process_ring())
    304: prod_pos = smp_load_acquire(r->producer_pos); (in ringbuf_custom_process_ring())
    307: len = smp_load_acquire(len_ptr); (in ringbuf_custom_process_ring())

