Lines matching refs:load

74 load instructions.  The LKMM makes these predictions for code running
85 Each load instruction must obtain the value written by the most recent
190 by each load is simply the value written by the most recently executed
203 P1 must load 0 from buf before P0 stores 1 to it; otherwise r2
204 would be 1 since a load obtains its value from the most recent
398 For this code, the LKMM predicts that the load from x will always be
411 Given this version of the code, the LKMM would predict that the load
481 a control dependency from the load to the store.
498 There appears to be a data dependency from the load of x to the store
505 the value returned by the load from x, which would certainly destroy
507 the load generated for a READ_ONCE() -- that's one of the nice
508 properties of READ_ONCE() -- but it is allowed to ignore the load's
534 from the load can be discarded, breaking the address dependency.
547 write. In colloquial terms, the load "reads from" the store. We
548 write W ->rf R to indicate that the load R reads from the store W. We
549 further distinguish the cases where the load and the store occur on
559 of load-tearing, where a load obtains some of its bits from one store
561 and WRITE_ONCE() will prevent load-tearing; it's not possible to have:
580 On the other hand, load-tearing is unavoidable when mixed-size
641 is a load, then R must read from W or from some other store
644 Read-write coherence: If R ->po-loc W, where R is a load and W
690 rule: The READ_ONCE() load comes before the WRITE_ONCE() store in
712 would violate the read-read coherence rule: The r1 load comes before
713 the r2 load in program order, so it must not read from a store that
739 grok. It describes the situation where a load reads a value that gets
764 the load and the store are on the same CPU) and fre (when they are on
798 When a CPU executes a load instruction R, it first checks to see
803 CPU asks the memory subsystem for the value to load and we say that R
819 Note that load instructions may be executed speculatively and may be
821 premature executions; we simply say that the load executes at the
844 execute the load associated with the fence (e.g., the load
951 each load between the store that it reads from and the following
994 occurs in between CPU 1's load and store. To put it another way, the
1078 store; either a data, address, or control dependency from a load R to
1086 Dependencies to load instructions are more problematic. To begin with,
1087 there is no such thing as a data dependency to a load. Next, a CPU
1088 has no reason to respect a control dependency to a load, because it
1089 can always satisfy the second load speculatively before the first, and
1090 then ignore the result if it turns out that the second load shouldn't
1096 After all, a CPU cannot ask the memory subsystem to load a value from
1105 two loads. This happens when there is a dependency from a load to a
1106 store and a second, po-later load reads from that store:
1120 (In theory, a CPU might forward a store to a load when it runs across
1127 because it could tell that the store and the second load access the
1181 can malfunction on Alpha systems (notice that P1 uses an ordinary load
1187 to ptr does. And since P1 can't execute its second load
1188 until it knows what location to load from, i.e., after executing its
1189 first load, the value x = 1 must have propagated to P1 before the
1190 second load executed. So why doesn't r2 end up equal to 1?
1208 adds this fence after every READ_ONCE() and atomic load on Alpha. The
1223 its second load, the x = 1 store would already be fully processed by
1300 order. We can deduce this from the operational model; if P0's load
1302 been forwarded to the load, so r1 would have ended up equal to 1, not
1305 order, and P1's store propagated to P0 before P0's load executed.
1326 in program order. If the second load had executed before the first
1328 load executed, and so r1 would have been 9 rather than 0. In this
1330 because P1's store overwrote the value read by P0's first load, and
1331 P1's store propagated to P0 before P0's second load executed.
1357 from P1's second load to its first (backwards!). The reason is
1364 P1 must execute its second load before the first. Indeed, if the load
1366 have propagated to P1 by the time P1's load from buf executed, so r2
1374 link (hence an hb link) from the first load to the second, and the
1375 prop relation would give an hb link from the second load to the first.
1414 link from P0's store to its load. This is because P0's store gets
1418 before P2's load and store execute, P2's smp_store_release()
1420 store to z does (the second cumul-fence), and P0's load executes after the
1460 The case where E is a load is exactly the same, except that the first
1468 load, the memory subsystem would then be forced to satisfy E's read
1495 If r0 = 0 at the end then there is a pb link from P0's load to P1's
1496 load: an fre link from P0's load to P1's store (which overwrites the
1497 value read by P0), and a strong fence between P1's store and its load.
1502 Similarly, if r1 = 0 at the end then there is a pb link from P1's load
1776 P1's load at W reads from, so we have W ->fre Y. Since S ->po W and
1780 If r1 = 1 at the end then P1's load at Z reads from P0's store at X,
1907 srcu_read_lock() as a special type of load event (which is
1909 a value, just as a load does) and srcu_read_unlock() as a special type
1920 from the load (srcu-lock) to the store (srcu-unlock). For example,
2005 the LKMM treats B as a store to the variable s and C as a load from
2048 store-release in a spin_unlock() and the load-acquire which forms the
2085 therefore the load of x must execute before the load of y, even though
2112 and thus it could load y before x, obtaining r2 = 0 and r1 = 1.
2202 P1's store to x propagates to P0 before P0's load from x executes.
2203 But since P0's load from x is a plain access, the compiler may decide
2204 to carry out the load twice (for the comparison against NULL, then again
2269 If X is a load and X executes before a store Y, then indeed there is
2447 Thus U's store to buf is forced to propagate to P1 before V's load
2452 general. Suppose R is a plain load and we want to show that R
2462 "w-pre-bounded" by Y, depending on whether E was a store or a load.
2467 above were a marked load then X could simply be taken to be R itself.)
2482 thereby adding a load (and possibly replacing the store entirely).
2486 where the compiler adds a load.
2489 compiler has augmented a store with a load in this fashion, and the
2493 Incidentally, the other transformation -- augmenting a plain load by
2496 a concurrent load from that location. Two concurrent loads don't
2498 does race with a concurrent load. Thus adding a store might create a
2501 load, on the other hand, is acceptable because doing so won't create a
2505 addition to fences: an address dependency from a marked load. That
2511 the LKMM says that the marked load of ptr pre-bounds the plain load of
2512 *p; the marked load must execute before any of the machine
2513 instructions corresponding to the plain load. This is a reasonable
2514 stipulation, since after all, the CPU can't perform the load of *p
2549 that the load of ptr in P1 is r-pre-bounded before the load of *p
2575 could now perform the load of x before the load of ptr (there might be
2632 sequence. For race-candidate load R and store W, the LKMM says the
2662 satisfy a load request and its determination of where a store will
2667 not allowed (i.e., a load cannot read from a store that it
2672 not allowed (i.e., if a store is visible to a load then the
2673 load must read from that store or one coherence-after it).
2719 it is not guaranteed that the load from y will execute after the
2735 an address dependency from a marked load R to a plain store W,
2744 links a marked load R to a store W, and the store is read by a load R'
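
Many of the matched lines (for example 190-204, 690-713, and 1357-1375) refer to the message-passing pattern that the explanation document uses as its running example. As a reminder of what those fragments are describing, here is a minimal sketch of that pattern in the document's own litmus-test style; the variable names buf and flag and the process labels P0/P1 follow the examples in that document, and READ_ONCE()/WRITE_ONCE() are the kernel's marked-access macros:

	int buf = 0, flag = 0;

	P0()
	{
		WRITE_ONCE(buf, 1);		/* store that P1's load of buf may read from */
		WRITE_ONCE(flag, 1);		/* store signalling that buf has been written */
	}

	P1()
	{
		int r1;
		int r2 = 0;

		r1 = READ_ONCE(flag);		/* marked load of flag */
		if (r1)
			r2 = READ_ONCE(buf);	/* load of buf, reached only when r1 != 0 */
	}

The question the document asks about this pattern is whether the final state r1 = 1 && r2 = 0 is possible; the matched lines about coherence, the rf and fre relations, and the Alpha address-dependency discussion all concern variants of this kind of load/store interplay.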