Lines matching "stores":
102 device, stores it in a buffer, and sets a flag to indicate the buffer
134 Thus, P0 stores the data in buf and then sets flag. Meanwhile, P1
140 This pattern of memory accesses, where one CPU stores values to two
197 it, as loads can obtain values only from earlier stores.
202 P1 must load 0 from buf before P0 stores 1 to it; otherwise r2
206 P0 stores 1 to buf before storing 1 to flag, since it executes
222 each CPU stores to its own shared location and then loads from the
270 W: P0 stores 1 to flag executes before
273 Z: P0 stores 1 to buf executes before
274 W: P0 stores 1 to flag.
296 Write events correspond to stores to shared memory, such as
398 executed before either of the stores to y. However, a compiler could
399 lift the stores out of the conditional, transforming the code into
601 from both of P0's stores. It is possible to handle mixed-size and
613 shared memory, the stores to that location must form a single global
619 the stores to x is simply the order in which the stores overwrite one
626 stores reach x's location in memory (or if you prefer a more
627 hardware-centric view, the order in which the stores get written to
637 and W' are two stores, then W ->co W'.
724 just like with the rf relation, we distinguish between stores that
725 occur on the same CPU (internal coherence order, or coi) and stores
728 On the other hand, stores to different memory locations are never
759 stores to x, there would also be fr links from the READ_ONCE() to
785 only internal operations. However, loads, stores, and fences involve
810 time to process the stores that it receives, and a store can't be used
812 most architectures, the local caches process stores in
839 smp_wmb() forces the CPU to execute all po-earlier stores
840 before any po-later stores;
853 propagates stores. When a fence instruction is executed on CPU C:
855 For each other CPU C', smp_wmb() forces all po-earlier stores
856 on C to propagate to C' before any po-later stores do.
860 stores executed on C) is forced to propagate to C' before the
864 executed (including all po-earlier stores on C) is forced to
869 affects stores from other CPUs that propagate to CPU C before the
870 fence is executed, as well as stores that are executed on C before the
873 A-cumulative; they only affect the propagation of stores that are
889 E and F are both stores on the same CPU and an smp_wmb() fence
898 The operational model requires that whenever W and W' are both stores
919 operations really are atomic, that is, no other stores can
925 Propagation: This requires that certain stores propagate to
947 According to the principle of cache coherence, the stores to any fixed
987 CPU 0 stores 14 to x;
988 CPU 1 stores 14 to x;
1002 there must not be any stores coming between W' and W in the coherence
1019 Note that this implies Z0 and Zn are stores to the same variable.
1058 X and Y are both stores and an smp_wmb() fence occurs between
1192 stores do reach P1's local cache in the proper order, it can happen
1201 incoming stores in FIFO order. By contrast, other architectures
1210 the stores it has already received. Thus, if the code was changed to:
1231 outstanding stores have been processed by the local cache. In the
1233 po-earlier stores to propagate to every other CPU in the system; then
1234 it has to wait for the local cache to process all the stores received
1235 as of that time -- not just the stores received when the strong fence
1261 W ->coe W'. This means that W and W' are stores to the same location,
1265 the other is made later by the memory subsystem. When the stores are
1307 read from different stores:
1355 stores. If r1 = 1 and r2 = 0 at the end then there is a prop link
1418 guarantees that the stores to x and y both propagate to P0 before the
1573 In the kernel's implementations of RCU, the requirements for stores
1844 This requires P0 and P2 to execute their loads and stores out of
1959 and some other stores W and W' occur po-before the lock-release and
2148 cumul-fence memory barriers force stores that are po-before
2149 the barrier to propagate to other CPUs before stores that are
2156 strong-fence memory barriers force stores that are po-before
2308 (i.e., smp_rmb()) and some affect only stores (smp_wmb()); otherwise
2390 stores will propagate to P1 in that order. However, rcu_dereference()
2446 Do the plain stores to y race? Clearly not if P1 reads a non-zero
2458 before the second can execute. Therefore the two stores cannot be
2463 race-candidate stores W and W', where W ->co W', the LKMM says the
2464 stores don't race if W can be linked to W' by a
2654 will self-deadlock in the executions where it stores 36 in y.
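The matches above appear to come from the Linux-kernel memory model's prose explanation. A few illustrative sketches follow; none of them are quoted from that document, and any names or values not visible in the matches are illustrative only.

The buf/flag lines quoted above (source lines 102-206) describe the classic message-passing pattern: one CPU stores data and then sets a flag, while the other checks the flag and then reads the data. A minimal kernel-style sketch of that pattern, assuming smp_wmb()/smp_rmb() as the ordering fences:

        int buf = 0, flag = 0;

        P0()
        {
                WRITE_ONCE(buf, 1);     /* store the data ... */
                smp_wmb();              /* order the two stores */
                WRITE_ONCE(flag, 1);    /* ... then set the flag */
        }

        P1()
        {
                int r1;
                int r2;

                r1 = READ_ONCE(flag);   /* check the flag ... */
                smp_rmb();              /* order the two loads */
                r2 = READ_ONCE(buf);    /* ... then read the data */
        }

With both fences present, the outcome r1 = 1 && r2 = 0 is forbidden; remove either fence and that outcome becomes possible on weakly ordered hardware.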
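The match at source line 222 ("each CPU stores to its own shared location and then loads from the other") describes the store-buffering pattern, and the matches around lines 1231-1235 explain why a strong fence is needed to exclude its counter-intuitive outcome. A minimal sketch, again not taken from the document, assuming smp_mb() in both threads:

        int x = 0, y = 0;

        P0()
        {
                int r0;

                WRITE_ONCE(x, 1);
                smp_mb();               /* strong fence */
                r0 = READ_ONCE(y);
        }

        P1()
        {
                int r1;

                WRITE_ONCE(y, 1);
                smp_mb();               /* strong fence */
                r1 = READ_ONCE(x);
        }

With smp_mb() in both threads the outcome r0 = 0 && r1 = 0 is forbidden. The weaker smp_wmb() and smp_rmb() fences do not rule it out, because each CPU's store can still be sitting in its store buffer when the other CPU's load executes.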
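The matches at source lines 398-399 concern a compiler lifting stores out of a conditional, which defeats any ordering the programmer might expect from a control dependency. The code in question is roughly along these lines (the value 1 is illustrative):

        /*
         * As written: the store to y looks as if it depends on the
         * load of x through the conditional.
         */
        r1 = READ_ONCE(x);
        if (r1)
                WRITE_ONCE(y, 1);
        else
                WRITE_ONCE(y, 1);

        /*
         * As the compiler may emit it: both branches perform the same
         * store, so it can be hoisted above the branch, leaving no
         * control dependency to order the store after the load.
         */
        r1 = READ_ONCE(x);
        WRITE_ONCE(y, 1);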
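The matches at source lines 1573 and 2390 touch on the ordering required of stores in RCU-based code and on rcu_dereference(). The publish/subscribe idiom they relate to looks roughly like the sketch below; struct foo, gp, publish() and reader() are made-up names, not identifiers from the document:

        struct foo {
                int a;
        };
        static struct foo __rcu *gp;

        /* Writer: initialize the structure, then publish it. */
        void publish(void)
        {
                struct foo *f = kmalloc(sizeof(*f), GFP_KERNEL);

                if (!f)
                        return;
                f->a = 1;                       /* initialize the structure */
                rcu_assign_pointer(gp, f);      /* publish it with release ordering */
        }

        /*
         * Reader: subscribe to the pointer inside an RCU read-side
         * critical section.
         */
        int reader(void)
        {
                struct foo *f;
                int r = -1;

                rcu_read_lock();
                f = rcu_dereference(gp);        /* address-dependency ordering */
                if (f)
                        r = f->a;               /* sees the writer's f->a = 1 */
                rcu_read_unlock();
                return r;
        }

If the reader obtains the published pointer, the address dependency headed by rcu_dereference(), paired with the release ordering provided by rcu_assign_pointer(), guarantees that it sees the initialized value of f->a.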