Searched refs:updates (Results 1 – 25 of 294) sorted by relevance
/linux/Documentation/ABI/testing/
    sysfs-bus-coresight-devices-trbe
        12  Description: (Read) Shows if TRBE updates in the memory are with access
        13  and dirty flag updates as well. This value is fetched from

/linux/drivers/net/
    LICENSE.SRC
        14  on an "as-is" basis. No further updates to this software should be
        15  expected. Although updates may occur, no commitment exists.

/linux/Documentation/driver-api/firmware/
    fw_search_path.rst
         9  * /lib/firmware/updates/UTS_RELEASE/
        10  * /lib/firmware/updates/

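Note: the fw_search_path.rst hit above lists the directories the firmware loader walks, with the /lib/firmware/updates/ variants tried before the base paths. A minimal, hypothetical driver sketch of what triggers that lookup; the device name "my_device.bin" and the function name are illustrative, not taken from the hit:

    #include <linux/device.h>
    #include <linux/firmware.h>

    /* Hypothetical driver helper: asking for "my_device.bin" makes the
     * firmware core search the paths listed above for that file name. */
    static int my_load_firmware(struct device *dev)
    {
            const struct firmware *fw;
            int ret;

            ret = request_firmware(&fw, "my_device.bin", dev);
            if (ret)
                    return ret;

            /* ... copy fw->data (fw->size bytes) to the device ... */

            release_firmware(fw);
            return 0;
    }
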
/linux/tools/perf/pmu-events/
    metric.py
        582  updates: Dict[Tuple[str, str], Expression] = dict()
        595  if (inner_pmu, inner_name) in updates:
        596  inner_expression = updates[(inner_pmu, inner_name)]
        600  if (outer_pmu, outer_name) in updates and updated.Equals(updates[(outer_pmu, outer_name)]):
        602  updates[(outer_pmu, outer_name)] = updated
        603  return updates

/linux/Documentation/RCU/
    checklist.rst
         32  for lockless updates. This does result in the mildly
         34  rcu_read_unlock() are used to protect updates, however, this
         45  c. restricting updates to a single task.
        107  c. Make updates appear atomic to readers. For example,
        108     pointer updates to properly aligned fields will
        119  d. Carefully order the updates and the reads so that readers
        298  limit on this number, stalling updates as needed to allow
        306  from ever ending.) Another way to stall the updates
        307  is for the updates to use a wrapper function around
        317  guarding updates with a global lock, limiting their rate.
        [all …]

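Note: the checklist.rst items above ("Make updates appear atomic to readers" via a single pointer update to a properly aligned field) are easiest to see in code. A minimal sketch using the standard RCU publish/read primitives; the cfg structure, cfg_lock, and function names are hypothetical, not part of the hit:

    #include <linux/mutex.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct cfg {
            int a;
            int b;
    };

    static struct cfg __rcu *cur_cfg;       /* hypothetical shared data */
    static DEFINE_MUTEX(cfg_lock);          /* serializes updaters */

    /* Updater: build a complete copy, then publish it with one pointer update. */
    static int cfg_update(int a, int b)
    {
            struct cfg *new, *old;

            new = kmalloc(sizeof(*new), GFP_KERNEL);
            if (!new)
                    return -ENOMEM;
            new->a = a;
            new->b = b;

            mutex_lock(&cfg_lock);
            old = rcu_dereference_protected(cur_cfg, lockdep_is_held(&cfg_lock));
            rcu_assign_pointer(cur_cfg, new);
            mutex_unlock(&cfg_lock);

            synchronize_rcu();              /* wait out pre-existing readers */
            kfree(old);
            return 0;
    }

    /* Reader: sees either the old or the new cfg, never a half-written one. */
    static int cfg_sum(void)
    {
            struct cfg *c;
            int sum = 0;

            rcu_read_lock();
            c = rcu_dereference(cur_cfg);
            if (c)
                    sum = c->a + c->b;
            rcu_read_unlock();
            return sum;
    }
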
/linux/scripts/atomic/kerneldoc/
    dec
         6  * Atomically updates @v to (@v - 1) with ${desc_order} ordering.
    inc
         6  * Atomically updates @v to (@v + 1) with ${desc_order} ordering.
    and
         7  * Atomically updates @v to (@v & @i) with ${desc_order} ordering.
    xor
         7  * Atomically updates @v to (@v ^ @i) with ${desc_order} ordering.
    or
         7  * Atomically updates @v to (@v | @i) with ${desc_order} ordering.
    add
         7  * Atomically updates @v to (@v + @i) with ${desc_order} ordering.
    andnot
         7  * Atomically updates @v to (@v & ~@i) with ${desc_order} ordering.
    sub
         7  * Atomically updates @v to (@v - @i) with ${desc_order} ordering.
    xchg
         7  * Atomically updates @v to @new with ${desc_order} ordering.
    dec_and_test
         6  * Atomically updates @v to (@v - 1) with ${desc_order} ordering.
    inc_and_test
         6  * Atomically updates @v to (@v + 1) with ${desc_order} ordering.
    add_negative
         7  * Atomically updates @v to (@v + @i) with ${desc_order} ordering.
    sub_and_test
         7  * Atomically updates @v to (@v - @i) with ${desc_order} ordering.
    dec_unless_positive
         6  * If (@v <= 0), atomically updates @v to (@v - 1) with ${desc_order} ordering.
    inc_not_zero
         6  * If (@v != 0), atomically updates @v to (@v + 1) with ${desc_order} ordering.
    inc_unless_negative
         6  * If (@v >= 0), atomically updates @v to (@v + 1) with ${desc_order} ordering.
    dec_if_positive
         6  * If (@v > 0), atomically updates @v to (@v - 1) with ${desc_order} ordering.

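Note: the kerneldoc templates above cover both unconditional and conditional atomic read-modify-write helpers. A minimal sketch of the common refcount-style pairing of inc_not_zero with dec_and_test; struct obj and the helper names are hypothetical, used only to illustrate the documented semantics:

    #include <linux/atomic.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    /* Hypothetical object whose lifetime is tied to an atomic reference count. */
    struct obj {
            atomic_t refs;
            /* ... payload ... */
    };

    /* Take a reference only if the object is still live (refs != 0). */
    static bool obj_get(struct obj *o)
    {
            return atomic_inc_not_zero(&o->refs);
    }

    /* Drop a reference; whoever drops the last one frees the object. */
    static void obj_put(struct obj *o)
    {
            if (atomic_dec_and_test(&o->refs))
                    kfree(o);
    }
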
/linux/fs/bcachefs/
    btree_trans_commit.c
         75  return i != trans->updates &&  in same_leaf_as_prev()
         82  return i + 1 < trans->updates + trans->nr_updates &&  in same_leaf_as_next()
        106  while (--i >= trans->updates) {  in trans_lock_write_fail()
        535  i < trans->nr_updates && trans->updates[i].btree_id <= btree_id;  in run_btree_triggers()
        537  if (trans->updates[i].btree_id != btree_id)  in run_btree_triggers()
        540  int ret = run_one_trans_trigger(trans, trans->updates + i, overwrite);  in run_btree_triggers()
        569  trans->updates[btree_id_start].btree_id < btree_id)  in bch2_trans_commit_run_triggers()
        578  struct btree_insert_entry *i = trans->updates + idx;  in bch2_trans_commit_run_triggers()
        877  struct btree_insert_entry *i = trans->updates + idx;  in do_bch2_trans_commit()

/linux/drivers/gpu/drm/amd/amdgpu/
    amdgpu_ids.c
        281  uint64_t updates = amdgpu_vm_tlb_seq(vm);  in amdgpu_vmid_grab_reserved() local
        287  (*id)->flushed_updates < updates ||  in amdgpu_vmid_grab_reserved()
        359  uint64_t updates = amdgpu_vm_tlb_seq(vm);  in amdgpu_vmid_grab_used() local
        380  if ((*id)->flushed_updates < updates)  in amdgpu_vmid_grab_used()

/linux/Documentation/core-api/
    entry.rst
          4  All transitions between execution domains require state updates which are
          5  subject to strict ordering constraints. State updates are required for the
        167  irq_enter_rcu() updates the preemption count which makes in_hardirq()
        212  The state update on entry is handled in irqentry_nmi_enter() which updates
        278  while handling an NMI. So NMI entry code has to be reentrant and state updates

Completed in 33 milliseconds