
Searched refs:PTE (Results 1 – 25 of 44) sorted by relevance

/linux-6.3-rc2/Documentation/mm/
arch_pgtable_helpers.rst 15 PTE Page Table Helpers
19 | pte_same | Tests whether both PTE entries are the same |
21 | pte_bad | Tests a non-table mapped PTE |
23 | pte_present | Tests a valid mapped PTE |
25 | pte_young | Tests a young PTE |
27 | pte_dirty | Tests a dirty PTE |
29 | pte_write | Tests a writable PTE |
31 | pte_special | Tests a special PTE |
33 | pte_protnone | Tests a PROT_NONE PTE |
35 | pte_devmap | Tests a ZONE_DEVICE mapped PTE |
[all …]
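
Taken together, these helpers are the arch-neutral interface for querying a single PTE. A minimal sketch of how a walker might use them (kernel context assumed; inspect_pte() is a hypothetical name, not something from the file above):

    #include <linux/pgtable.h>

    /* Hypothetical: decide whether a PTE maps a page that was used. */
    static bool inspect_pte(pte_t pte)
    {
            if (!pte_present(pte))  /* no valid mapping behind this entry */
                    return false;
            /* Accessed/Dirty bits are set by hardware (or a software fallback) */
            return pte_young(pte) || (pte_write(pte) && pte_dirty(pte));
    }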
split_page_table_lock.rst 11 access to the table. At the moment we use split lock for PTE and PMD
17 maps pte and takes PTE table lock, returns pointer to the taken
20 unlocks and unmaps PTE table;
22 allocates PTE table if needed and takes the lock, returns pointer
25 returns pointer to PTE table lock;
31 Split page table lock for PTE tables is enabled compile-time if
35 Split page table lock for PMD tables is enabled, if it's enabled for PTE
55 There's no need to specially enable the PTE split page table lock: everything
57 must be called on PTE table allocation / freeing.
95 The spinlock_t allocated in pgtable_pte_page_ctor() for PTE table and in
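
A minimal usage sketch of the API matched above (hypothetical helper; assumes mm, pmd and addr are already known to be valid):

    #include <linux/mm.h>

    /* Hypothetical: query one PTE under the split PTE table lock. */
    static bool pte_dirty_at(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
    {
            spinlock_t *ptl;
            pte_t *pte = pte_offset_map_lock(mm, pmd, addr, &ptl); /* map + lock */
            bool dirty = pte_dirty(*pte);

            pte_unmap_unlock(pte, ptl);                            /* unlock + unmap */
            return dirty;
    }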
remap_file_pages.rst 16 PTE for this purpose. PTE flags are a scarce resource, especially on some CPU
multigen_lru.rst 31 profit from discovering a young PTE. A page table walk can sweep all
122 the latter, when the eviction walks the rmap and finds a young PTE,
123 the aging scans the adjacent PTEs. For both, on finding a young PTE,
125 page mapped by this PTE to ``(max_seq%MAX_NR_GENS)+1``.
168 trips into the rmap. It scans the adjacent PTEs of a young PTE and
170 adds the PMD entry pointing to the PTE table to the Bloom filter. This
181 filter. In the aging path, set membership means that the PTE range
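
A Bloom filter answers only "possibly a member" or "definitely not a member", which is why membership above means the PTE range is merely worth scanning, with false positives tolerated. A generic sketch of the structure (illustrative names and sizes, not the kernel's multigen LRU code):

    #include <linux/bitops.h>
    #include <linux/hash.h>

    #define FILTER_BITS 15  /* illustrative: a 2^15-bit filter */

    static unsigned long filter[BIT(FILTER_BITS) / BITS_PER_LONG];

    static void filter_add(unsigned long key)
    {
            __set_bit(hash_long(key, FILTER_BITS), filter);
            __set_bit(hash_long(key ^ GOLDEN_RATIO_64, FILTER_BITS), filter);
    }

    static bool filter_maybe_contains(unsigned long key) /* no false negatives */
    {
            return test_bit(hash_long(key, FILTER_BITS), filter) &&
                   test_bit(hash_long(key ^ GOLDEN_RATIO_64, FILTER_BITS), filter);
    }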
unevictable-lru.rst 311 For each PTE (or PMD) being faulted into a VMA, the page add rmap function
313 (unless it is a PTE mapping of a part of a transparent huge page). Or when
445 We handle this by keeping PTE-mlocked huge pages on evictable LRU lists:
446 the PMD on the border of a VM_LOCKED VMA will be split into a PTE table.
487 For each PTE (or PMD) being unmapped from a VMA, page_remove_rmap() calls
489 (unless it was a PTE mapping of a part of a transparent huge page).
512 for each PTE (or PMD) being unmapped from a VMA, page_remove_rmap() calls
514 (unless it was a PTE mapping of a part of a transparent huge page).
/linux-6.3-rc2/Documentation/translations/zh_CN/mm/
split_page_table_lock.rst 18 With split page table locks we have a separate per-table lock to serialize access to the table. At the moment we use split locks for PTE
24 maps pte and takes the PTE table lock, returns a pointer to the taken lock;
26 unlocks and unmaps the PTE table;
28 allocates the PTE table if needed and takes the lock, returns a pointer to the taken lock or NULL if allocation failed
31 returns a pointer to the PTE table lock;
38 split page table lock for PTE tables is enabled at compile time. If the split lock is disabled, all tables are guarded by mm->page_table_lock
59 There is no need to specially enable the PTE split page table lock: everything needed is done by pgtable_pte_page_ctor()
60 and pgtable_pte_page_dtor(), which must be called on PTE table allocation / freeing.
93 The spinlock_t for a PTE table is allocated in pgtable_pte_page_ctor(), and the spinlock_t for a PMD table
remap_file_pages.rst 20 offset entries (pte_file). The kernel reserved a flag in the PTE for this purpose. PTE flags are a scarce resource
hmm.rst 280 clear it, rather than copy a zero page. Valid PTE entries to system memory or device private struct pages will be
282 unmapped during the process, with a special migration PTE inserted in place of the original PTE. migrate_vma_setup()
333 Some devices have features such as atomic PTE bits that can be used to implement atomic access to system memory. In order to support atomic operations on a
highmem.rst 139 is that PAE has more PTE bits and can provide advanced features like NX and PAT.
/linux-6.3-rc2/arch/sparc/include/asm/
pgalloc_64.h 68 #define pmd_populate_kernel(MM, PMD, PTE) pmd_set(MM, PMD, PTE) argument
69 #define pmd_populate(MM, PMD, PTE) pmd_set(MM, PMD, PTE) argument
/linux-6.3-rc2/Documentation/translations/zh_CN/mm/damon/
design.rst 65 PTE Accessed-bit Based Access Check
68 The implementations for both the physical and virtual address spaces use the PTE Accessed-bit for basic access checks. The only difference lies in how
69 the relevant PTE Accessed bit is found from the address. The virtual address implementation walks the page table of the target task for the address, while the physical address implementation
faq.rst 40 Nonetheless, for virtual and physical memory DAMON provides by default vma/rmap tracking and PTE Accessed bit check based address space
/linux-6.3-rc2/Documentation/admin-guide/mm/
soft-dirty.rst 5 The soft-dirty is a bit on a PTE which helps to track which pages a task
18 64-bit qword is the soft-dirty one. If set, the respective PTE was
25 the soft-dirty bit on the respective PTE.
31 bits on the PTE.
36 the same place. When unmap is called, the kernel internally clears PTE values
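
The matched lines describe the pagemap side of the interface; bit 55 of each 64-bit pagemap entry is the soft-dirty bit. A minimal userspace sketch that reads it for one page of the calling process:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            static char buf[1];
            long psz = sysconf(_SC_PAGESIZE);
            uint64_t entry = 0;
            int fd = open("/proc/self/pagemap", O_RDONLY);

            if (fd < 0)
                    return 1;
            buf[0] = 1; /* write to the page so its PTE exists and is dirtied */
            /* pagemap holds one 64-bit entry per virtual page */
            pread(fd, &entry, sizeof(entry),
                  (off_t)((uintptr_t)buf / psz) * sizeof(entry));
            printf("soft-dirty: %d\n", (int)((entry >> 55) & 1));
            close(fd);
            return 0;
    }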
/linux-6.3-rc2/tools/testing/selftests/mm/
mremap_test.c 53 #define PTE page_size macro
475 test_cases[3] = MAKE_TEST(PTE, PTE, PTE * 2, in main()
480 test_cases[4] = MAKE_TEST(_1MB, PTE, _2MB, NON_OVERLAPPING, EXPECT_SUCCESS, in main()
486 test_cases[6] = MAKE_TEST(PMD, PTE, _4MB, NON_OVERLAPPING, EXPECT_SUCCESS, in main()
494 test_cases[9] = MAKE_TEST(PUD, PTE, _2GB, NON_OVERLAPPING, EXPECT_SUCCESS, in main()
/linux-6.3-rc2/Documentation/translations/zh_CN/arm64/
hugetlbpage.rst 40 - CONT PTE PMD CONT PMD PUD
/linux-6.3-rc2/Documentation/translations/zh_TW/arm64/
hugetlbpage.rst 43 - CONT PTE PMD CONT PMD PUD
/linux-6.3-rc2/arch/microblaze/include/asm/
mmu.h 33 } PTE; typedef
/linux-6.3-rc2/Documentation/admin-guide/hw-vuln/
l1tf.rst 47 table entry (PTE) has the Present bit cleared or other reserved bits set,
48 then speculative execution ignores the invalid PTE and loads the referenced
50 by the address bits in the PTE was still present and accessible.
72 PTE which is marked non present. This allows a malicious user space
75 encoded in the address bits of the PTE, thus making attacks more
78 The Linux kernel contains a mitigation for this attack vector, PTE
92 PTE inversion mitigation for L1TF, to attack physical host memory.
132 'Mitigation: PTE Inversion' The host protection is active
136 information is appended to the 'Mitigation: PTE Inversion' part:
582 - PTE inversion to protect against malicious user space. This is done
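
Concretely, PTE inversion stores the physical-address bits of a non-present PTE inverted, so any address speculation derives from it points outside populated, cacheable memory. A simplified sketch of the idea (the real x86 code lives in arch/x86/include/asm/pgtable-invert.h and is more careful; the mask below is illustrative):

    /* Illustrative x86-64 PFN field mask, not the kernel's definition */
    #define PFN_FIELD_MASK 0x000ffffffffff000ULL

    /* Sketch: invert only the PFN field when a PTE goes non-present. */
    static inline unsigned long long invert_nonpresent(unsigned long long val)
    {
            return (val & ~PFN_FIELD_MASK) | (~val & PFN_FIELD_MASK);
    }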
/linux-6.3-rc2/arch/xtensa/
Kconfig.debug 8 This check can spot missing TLB invalidation/wrong PTE permissions/
/linux-6.3-rc2/Documentation/arm64/
hugetlbpage.rst 38 - CONT PTE PMD CONT PMD PUD
/linux-6.3-rc2/Documentation/virt/kvm/
locking.rst 212 kvm_mmu_notifier_clear_flush_young), it marks the PTE not-present in hardware
213 by clearing the RWX bits in the PTE and storing the original R & X bits in more
216 atomically restore the PTE to a Present state. The W bit is not saved when the
217 PTE is marked for access tracking and during restoration to the Present state,
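
A schematic sketch of that transform (bit positions are illustrative, not KVM's actual shadow-PTE layout):

    #define RWX_MASK    0x7ULL /* illustrative EPT R/W/X permission bits */
    #define RX_MASK     0x5ULL /* only R and X are saved; W is not */
    #define SAVED_SHIFT 54     /* illustrative software-available bit area */

    /* Sketch: make the PTE fault on access while remembering R and X. */
    static inline unsigned long long mark_access_tracked(unsigned long long spte)
    {
            return (spte & ~RWX_MASK) | ((spte & RX_MASK) << SAVED_SHIFT);
    }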
/linux-6.3-rc2/Documentation/mm/damon/
design.rst 75 PTE Accessed-bit Based Access Check
78 Both of the implementations for physical and virtual address spaces use PTE
80 finding the relevant PTE Accessed bit(s) from the address. While the
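
A minimal kernel-side sketch of such a check (hypothetical wrapper; ptep_test_and_clear_young() reads and clears the Accessed bit in one step):

    #include <linux/mm.h>

    /* Hypothetical: report and reset whether this PTE was referenced. */
    static bool was_accessed(struct vm_area_struct *vma, unsigned long addr,
                             pte_t *ptep)
    {
            return ptep_test_and_clear_young(vma, addr, ptep);
    }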
faq.rst 39 Nonetheless, DAMON provides vma/rmap tracking and PTE Accessed bit check based
/linux-6.3-rc2/Documentation/x86/
iommu.rst 131 DMAR:[fault reason 05] PTE Write access is not set
133 DMAR:[fault reason 05] PTE Write access is not set
/linux-6.3-rc2/arch/arm/mm/
proc-macros.S 111 #error PTE shared bit mismatch
116 #error Invalid Linux PTE bit settings

Completed in 38 milliseconds
