Lines Matching refs:synchronize_rcu

101       14   synchronize_rcu();
105 Because the synchronize_rcu() on line 14 waits for all pre-existing
110 started after the synchronize_rcu() started, and must therefore also
124 | block synchronize_rcu()!!! |
131 | Second, even when using synchronize_rcu(), the other update-side |
165 24 synchronize_rcu();
169 28 synchronize_rcu();
174 the synchronize_rcu() in start_recovery() to guarantee that
181 | Why is the synchronize_rcu() on line 28 needed? |
191 critical section must not contain calls to synchronize_rcu().
194 synchronize_rcu().
410 13 synchronize_rcu();
488 before synchronize_rcu() starts is guaranteed to execute a full
490 section ends and the time that synchronize_rcu() returns. Without
495 synchronize_rcu() returns is guaranteed to execute a full memory
496 barrier between the time that synchronize_rcu() begins and the
501 #. If the task invoking synchronize_rcu() remains on a given CPU,
503 during the execution of synchronize_rcu(). This guarantee ensures
506 #. If the task invoking synchronize_rcu() migrates among a group of
509 execution of synchronize_rcu(). This guarantee also ensures that
512 thread executing the synchronize_rcu() migrates in the meantime.
520 | given instance of synchronize_rcu()? |
525 | section starts before a given instance of synchronize_rcu(), then |
527 | In other words, a given instance of synchronize_rcu() can avoid |
529 | prove that synchronize_rcu() started first. |
563 | #. CPU 0: synchronize_rcu() starts. |
567 | #. CPU 0: synchronize_rcu() returns. |
578 | #. CPU 0: synchronize_rcu() starts. |
582 | #. CPU 0: synchronize_rcu() returns. |
659 before invoking synchronize_rcu(), however, this inconvenience can
701 synchronize_rcu(). To see this, consider the following pair of
788 It might be tempting to assume that after synchronize_rcu()
791 synchronize_rcu() starts, and synchronize_rcu() is under no
797 | Suppose that synchronize_rcu() did wait until *all* readers had |
803 | For no time at all. Even if synchronize_rcu() were to wait until |
805 | synchronize_rcu() completed. Therefore, the code following |
806 | synchronize_rcu() can *never* rely on there being no readers. |
834 12 synchronize_rcu();
877 12 synchronize_rcu();
884 19 synchronize_rcu();
935 12 synchronize_rcu();
950 27 synchronize_rcu();
1213 The synchronize_rcu() grace-period-wait primitive is optimized for
1217 synchronize_rcu() are required to use batching optimizations so that
1222 of synchronize_rcu(), thus amortizing the per-invocation overhead
1227 In some cases, the multi-millisecond synchronize_rcu() latencies are
1245 be used in place of synchronize_rcu() as follows:
1284 neither synchronize_rcu() nor synchronize_rcu_expedited() would
1347 and kfree_rcu(), but not synchronize_rcu(). This was due to the
1349 places that needed something like synchronize_rcu() simply
1765 Perhaps surprisingly, synchronize_rcu() and
1768 disabled. This means that the call synchronize_rcu() (or friends)
1773 boot trick fails for synchronize_rcu() (as well as for
1776 which means that a subsequent synchronize_rcu() really does have to
1778 Unfortunately, synchronize_rcu() can't do this until all of its
1914 | synchronize_rcu() and rcu_barrier(). If latency is a concern, |
1932 grace-period operations such as synchronize_rcu() and
2315 optimizations for synchronize_rcu(), call_rcu(),
2443 old RCU-bh update-side APIs are now gone, replaced by synchronize_rcu(),
2486 are now gone, replaced by synchronize_rcu(), synchronize_rcu_expedited(),
2632 synchronize_rcu() would guarantee that execution reached the
2652 synchronize_rcu(), and rcu_barrier(), respectively. In
2662 not watching. This means that synchronize_rcu() is insufficient, and