Searched refs:PTE (Results 1 – 25 of 44) sorted by relevance
15 PTE Page Table Helpers
19 | pte_same     | Tests whether both PTE entries are the same |
21 | pte_bad      | Tests a non-table mapped PTE                |
23 | pte_present  | Tests a valid mapped PTE                    |
25 | pte_young    | Tests a young PTE                           |
27 | pte_dirty    | Tests a dirty PTE                           |
29 | pte_write    | Tests a writable PTE                        |
31 | pte_special  | Tests a special PTE                         |
33 | pte_protnone | Tests a PROT_NONE PTE                       |
35 | pte_devmap   | Tests a ZONE_DEVICE mapped PTE              |
[all …]

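As a quick illustration of how the helpers in this table compose, here is a hedged sketch of a classifier for a single entry; the function itself is hypothetical, only the pte_*() predicates come from the table above:

    #include <linux/mm.h>
    #include <linux/printk.h>

    /* Hypothetical helper: classify one PTE with the predicates above. */
    static void pte_print_class(pte_t pte)
    {
        if (!pte_present(pte)) {
            pr_info("PTE not present (none, swap or migration entry)\n");
            return;
        }
        pr_info("PTE present%s%s%s%s\n",
                pte_young(pte)   ? " young"    : "",
                pte_dirty(pte)   ? " dirty"    : "",
                pte_write(pte)   ? " writable" : "",
                pte_special(pte) ? " special"  : "");
    }
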
11 access to the table. At the moment we use split lock for PTE and PMD
17 maps pte and takes PTE table lock, returns pointer to the taken
20 unlocks and unmaps PTE table;
22 allocates PTE table if needed and takes the lock, returns pointer
25 returns pointer to PTE table lock;
31 Split page table lock for PTE tables is enabled compile-time if
35 Split page table lock for PMD tables is enabled, if it's enabled for PTE
55 There's no need for special enabling of PTE split page table lock: everything
57 must be called on PTE table allocation / freeing.
95 The spinlock_t allocated in pgtable_pte_page_ctor() for PTE table and in

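Putting the four helpers together, a minimal sketch of a walk over one PTE table under its split lock; the walk itself is hypothetical, and it assumes the caller holds mmap_lock and passes a mapped pmd:

    #include <linux/mm.h>

    /* Hypothetical: count young entries in the PTE table behind @pmd. */
    static int count_young_ptes(struct mm_struct *mm, pmd_t *pmd,
                                unsigned long addr, unsigned long end)
    {
        spinlock_t *ptl;
        pte_t *start, *pte;
        int young = 0;

        /* pte_offset_map_lock(): maps the table and takes its lock */
        start = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
        if (!start)     /* table vanished under us (recent kernels) */
            return 0;
        for (; addr < end; addr += PAGE_SIZE, pte++)
            if (pte_present(*pte) && pte_young(*pte))
                young++;
        /* pte_unmap_unlock(): drops the lock and unmaps the table */
        pte_unmap_unlock(start, ptl);
        return young;
    }
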
16 PTE for this purpose. PTE flags are a scarce resource especially on some CPU
31 profit from discovering a young PTE. A page table walk can sweep all
122 the latter, when the eviction walks the rmap and finds a young PTE,
123 the aging scans the adjacent PTEs. For both, on finding a young PTE,
125 page mapped by this PTE to ``(max_seq%MAX_NR_GENS)+1``.
168 trips into the rmap. It scans the adjacent PTEs of a young PTE and
170 adds the PMD entry pointing to the PTE table to the Bloom filter. This
181 filter. In the aging path, set membership means that the PTE range

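To make the Bloom filter's role concrete: set membership is cheap and may yield false positives but never false negatives, which is what the aging needs when deciding whether a PTE table is worth rescanning. A toy sketch of the idea (not the kernel's lru_gen filter; the two-hash scheme and size are illustrative):

    #include <linux/mm.h>
    #include <linux/hash.h>
    #include <linux/bitops.h>

    #define FILTER_BITS 15          /* toy 32768-bit filter */

    static unsigned long filter[BITS_TO_LONGS(1 << FILTER_BITS)];

    /* Remember: this PMD entry points to a PTE table with young PTEs. */
    static void filter_add(pmd_t *pmd)
    {
        unsigned long v = (unsigned long)pmd;

        __set_bit(hash_long(v, FILTER_BITS), filter);
        __set_bit(hash_long(v * GOLDEN_RATIO_PRIME, FILTER_BITS), filter);
    }

    /* May report a false positive, never a false negative. */
    static bool filter_test(pmd_t *pmd)
    {
        unsigned long v = (unsigned long)pmd;

        return test_bit(hash_long(v, FILTER_BITS), filter) &&
               test_bit(hash_long(v * GOLDEN_RATIO_PRIME, FILTER_BITS), filter);
    }
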
311 For each PTE (or PMD) being faulted into a VMA, the page add rmap function
313 (unless it is a PTE mapping of a part of a transparent huge page). Or when
445 We handle this by keeping PTE-mlocked huge pages on evictable LRU lists:
446 the PMD on the border of a VM_LOCKED VMA will be split into a PTE table.
487 For each PTE (or PMD) being unmapped from a VMA, page_remove_rmap() calls
489 (unless it was a PTE mapping of a part of a transparent huge page).
512 for each PTE (or PMD) being unmapped from a VMA, page_remove_rmap() calls
514 (unless it was a PTE mapping of a part of a transparent huge page).

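The rmap hooks described above reduce to a small pattern; a rough sketch, using pre-folio names (mlock_vma_page(), PageTransCompound()) as an assumption about the era of this text rather than the literal code:

    #include <linux/mm.h>

    /* Rough sketch of the fault-side accounting described above. */
    static void add_rmap_and_mlock(struct page *page,
                                   struct vm_area_struct *vma)
    {
        atomic_inc(&page->_mapcount);   /* account the new PTE mapping */

        /*
         * Mlock the page into the VM_LOCKED VMA, except when this is
         * a PTE mapping of a part of a transparent huge page.
         */
        if ((vma->vm_flags & VM_LOCKED) && !PageTransCompound(page))
            mlock_vma_page(page);
    }
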
18 With split page table locks we have separate per-table locks to serialize access to the tables. At the moment we use split locks for PTE and
24 maps the pte and takes the PTE table lock; returns a pointer to the taken lock;
26 unlocks and unmaps the PTE table;
28 allocates a PTE table if needed and takes the lock; returns a pointer to the taken lock, or NULL if allocation fails
31 returns a pointer to the PTE table lock;
38 split page table lock for PTE tables is enabled at compile time. If the split lock is disabled, all tables are guarded by mm->page_table_lock
59 There is no need to specially enable the PTE split page table lock: everything needed is done by pgtable_pte_page_ctor()
60 and pgtable_pte_page_dtor(), which must be called on PTE table allocation / freeing.
93 The spinlock_t for a PTE table is allocated in pgtable_pte_page_ctor(), and the spinlock_t for a PMD table

20 offset entries (pte_file). The kernel reserves a flag in the PTE for this purpose. PTE flags are a scarce resource
280 clear it, rather than copying a zero page. Valid PTE entries to system memory or device-private struct pages will be
282 unmapped during the process, and a special migration PTE is inserted in place of the original PTE. migrate_vma_setup()
333 Some devices have features such as atomic PTE bits that can be used to implement atomic access to system memory. To support

139 is that PAE has more PTE bits, which can provide advanced features like NX and PAT.
68 #define pmd_populate_kernel(MM, PMD, PTE) pmd_set(MM, PMD, PTE)   argument
69 #define pmd_populate(MM, PMD, PTE)        pmd_set(MM, PMD, PTE)   argument

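For context, pmd_populate() is the generic hook that installs a freshly allocated PTE table into a PMD entry. A simplified sketch of its typical caller, modeled loosely on mm/memory.c:__pte_alloc() (accounting and barriers elided):

    #include <linux/mm.h>

    /* Sketch: install a new PTE table into an empty PMD slot. */
    static int pte_alloc_sketch(struct mm_struct *mm, pmd_t *pmd)
    {
        spinlock_t *ptl;
        pgtable_t new = pte_alloc_one(mm);  /* allocate the PTE table page */

        if (!new)
            return -ENOMEM;

        ptl = pmd_lock(mm, pmd);            /* guard against a racing allocator */
        if (pmd_none(*pmd)) {
            pmd_populate(mm, pmd, new);     /* point the PMD at the new table */
            new = NULL;
        }
        spin_unlock(ptl);
        if (new)
            pte_free(mm, new);              /* lost the race; drop our copy */
        return 0;
    }
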
65 PTE Accessed-bit Based Access Check
68 Both the physical and virtual address space implementations use the PTE Accessed bit for basic access checks. The only difference is how
69 the relevant PTE Accessed bit is found from the address: the virtual address implementation walks the page table of the target task for that address, while the physical address implementation

40 Nonetheless, by default DAMON provides address space implementations for virtual and physical memory based on vma/rmap tracking and PTE Accessed bit checks
5 The soft-dirty is a bit on a PTE which helps to track which pages a task
18 64-bit qword is the soft-dirty one. If set, the respective PTE was
25 the soft-dirty bit on the respective PTE.
31 bits on the PTE.
36 the same place. When unmap is called, the kernel internally clears PTE values

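The whole interface is exercisable from userspace: write "4" to /proc/<pid>/clear_refs to clear the soft-dirty bits, then read /proc/<pid>/pagemap, where bit 55 of each 64-bit entry is the soft-dirty flag. A minimal self-test sketch (error handling trimmed; some kernels restrict pagemap details to privileged readers):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define SOFT_DIRTY_BIT 55

    static int page_soft_dirty(void *addr)
    {
        uint64_t entry = 0;
        long page = sysconf(_SC_PAGESIZE);
        int fd = open("/proc/self/pagemap", O_RDONLY);

        /* one 64-bit entry per virtual page */
        pread(fd, &entry, sizeof(entry),
              (off_t)((uintptr_t)addr / page) * sizeof(entry));
        close(fd);
        return !!(entry & (1ULL << SOFT_DIRTY_BIT));
    }

    int main(void)
    {
        static char buf[4096];
        int fd = open("/proc/self/clear_refs", O_WRONLY);

        write(fd, "4", 1);              /* "4": clear soft-dirty bits only */
        close(fd);

        printf("before store: %d\n", page_soft_dirty(buf));  /* expect 0 */
        buf[0] = 1;                     /* fault in / dirty the page */
        printf("after store:  %d\n", page_soft_dirty(buf));  /* expect 1 */
        return 0;
    }
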
53 #define PTE page_size   macro
475 test_cases[3] = MAKE_TEST(PTE, PTE, PTE * 2,   in main()
480 test_cases[4] = MAKE_TEST(_1MB, PTE, _2MB, NON_OVERLAPPING, EXPECT_SUCCESS,   in main()
486 test_cases[6] = MAKE_TEST(PMD, PTE, _4MB, NON_OVERLAPPING, EXPECT_SUCCESS,   in main()
494 test_cases[9] = MAKE_TEST(PUD, PTE, _2GB, NON_OVERLAPPING, EXPECT_SUCCESS,   in main()

40 - CONT PTE PMD CONT PMD PUD
43 - CONT PTE PMD CONT PMD PUD
33 } PTE;   typedef
47 table entry (PTE) has the Present bit cleared or other reserved bits set,
48 then speculative execution ignores the invalid PTE and loads the referenced
50 by the address bits in the PTE was still present and accessible.
72 PTE which is marked non present. This allows a malicious user space
75 encoded in the address bits of the PTE, thus making attacks more
78 The Linux kernel contains a mitigation for this attack vector, PTE
92 PTE inversion mitigation for L1TF, to attack physical host memory.
132 'Mitigation: PTE Inversion'  The host protection is active
136 information is appended to the 'Mitigation: PTE Inversion' part:
582 - PTE inversion to protect against malicious user space. This is done

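The inversion itself is tiny: whenever an x86 PTE transitions between present and not-present, the kernel flips the physical-address bits so a non-present PTE never carries an address that L1TF speculation could leak from. A simplified sketch along the lines of arch/x86/include/asm/pgtable-invert.h (mask handling condensed):

    #include <linux/types.h>

    /* Simplified from x86's pgtable-invert.h; _PAGE_PRESENT is x86's
     * Present bit, @mask covers the PTE's physical-address bits. */
    static inline bool pte_needs_invert(u64 val)
    {
        return val && !(val & _PAGE_PRESENT);   /* non-empty, non-present */
    }

    static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask)
    {
        /* Invert the address bits whenever the Present state changes,
         * so speculative loads through a cleared PTE hit nothing. */
        if (pte_needs_invert(oldval) != pte_needs_invert(val))
            val = (val & ~mask) | (~val & mask);
        return val;
    }
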
8 This check can spot missing TLB invalidation/wrong PTE permissions/
38 - CONT PTE PMD CONT PMD PUD
212 kvm_mmu_notifier_clear_flush_young), it marks the PTE not-present in hardware
213 by clearing the RWX bits in the PTE and storing the original R & X bits in more
216 atomically restore the PTE to a Present state. The W bit is not saved when the
217 PTE is marked for access tracking and during restoration to the Present state,

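A conceptual sketch of that transformation, with toy bit positions standing in for KVM's runtime-derived shadow-PTE masks (R/W/X in the low bits, spare high bits for the saved copy):

    #include <linux/types.h>

    #define SPTE_R           0x1ULL
    #define SPTE_W           0x2ULL
    #define SPTE_X           0x4ULL
    #define SPTE_SAVED_SHIFT 54     /* spare bits for the saved R/X */

    /* Mark a shadow PTE for access tracking: not-present to hardware,
     * but restorable without the full page-fault path. W is not saved;
     * it is recovered from dirty-tracking state on restore. */
    static u64 mark_spte_for_access_track(u64 spte)
    {
        spte |= (spte & (SPTE_R | SPTE_X)) << SPTE_SAVED_SHIFT;
        spte &= ~(SPTE_R | SPTE_W | SPTE_X);    /* fault on next access */
        return spte;
    }

    /* Fast path on re-access: put R and X back (a cmpxchg in the real
     * code, so the restore is atomic) and drop the saved copy. */
    static u64 restore_acc_track_spte(u64 spte)
    {
        spte |= (spte >> SPTE_SAVED_SHIFT) & (SPTE_R | SPTE_X);
        spte &= ~((SPTE_R | SPTE_X) << SPTE_SAVED_SHIFT);
        return spte;
    }
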
75 PTE Accessed-bit Based Access Check
78 Both of the implementations for physical and virtual address spaces use PTE
80 finding the relevant PTE Accessed bit(s) from the address. While the

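Underneath both implementations, the check reduces to test-and-clear on the Accessed bit of the PTE mapping the sampled address; a hedged sketch for the virtual-address case (the page-table walk to the PTE is elided):

    #include <linux/mm.h>

    /* Sketch of one access check: report whether the page was touched
     * since the last check, and re-arm by clearing the Accessed bit. */
    static bool check_and_clear_young(struct vm_area_struct *vma,
                                      unsigned long addr, pte_t *pte)
    {
        if (!pte_present(*pte))
            return false;
        /* returns the old Accessed bit and clears it atomically */
        return ptep_test_and_clear_young(vma, addr, pte);
    }
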
39 Nonetheless, DAMON provides vma/rmap tracking and PTE Accessed bit check based
131 DMAR:[fault reason 05] PTE Write access is not set
133 DMAR:[fault reason 05] PTE Write access is not set

111 #error PTE shared bit mismatch
116 #error Invalid Linux PTE bit settings

Completed in 38 milliseconds