/linux-6.3-rc2/Documentation/admin-guide/mm/
hugetlbpage.rst
    30  and surplus huge pages in the pool of huge pages of default size.
    55  huge page from the pool of huge pages at fault time.
    80  pages in the kernel's huge page pool. "Persistent" huge pages will be
    93  Once a number of huge pages have been pre-allocated to the kernel huge page
   103  Some platforms support multiple huge page sizes. To allocate huge pages
   120  specific huge page size. Valid huge page sizes are architecture
   176  huge page pool to 20, allocating or freeing huge pages, as required.
   209  persistent huge page pool is exhausted. As these surplus huge pages become
   226  of the in-use huge pages to surplus huge pages. This will occur even if
   260  1GB and 2MB huge page sizes. A 1GB huge page can be split into 512
  [all …]

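A hedged aside on how this pool is consumed: the snippets above describe the
administrator side (nr_hugepages and friends); from userspace the pre-allocated
pages are typically claimed via hugetlbfs or mmap(). A minimal sketch, assuming
the pool has already been populated and the default huge page size is 2MB (both
assumptions, not stated in the listing)::

  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  int main(void)
  {
          size_t len = 2 * 1024 * 1024;   /* assumed default huge page size */

          /* MAP_HUGETLB backs the mapping from the kernel's huge page
           * pool; the page itself is taken at fault time. */
          char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
          if (p == MAP_FAILED) {
                  perror("mmap");         /* e.g. the pool is empty */
                  return 1;
          }
          memset(p, 0, len);              /* fault the huge page in */
          munmap(p, len);
          return 0;
  }
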
transhuge.rst
    11  using huge pages for the backing of virtual memory with huge pages
    51  collapses sequences of basic pages into huge pages.
   247  ``huge=``. It can have the following values:
   253  Do not allocate huge pages;
   265  ``huge=never`` will not attempt to break up huge pages at all, just stop more
   358  is incremented if the kernel fails to split huge
   370  munmap() on part of a huge page. It doesn't split the huge page, only
   376  the huge zero page, only its allocation.
   389  for the huge page.
   402  freed a huge page for use.
  [all …]

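For the ``madvise``-style policies this file documents, the opt-in happens per
mapping rather than per mount. A minimal userspace sketch (my own illustration,
not part of transhuge.rst)::

  #include <stdio.h>
  #include <sys/mman.h>

  int main(void)
  {
          size_t len = 16 * 1024 * 1024;
          char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (p == MAP_FAILED)
                  return 1;

          /* Ask for THP backing on this range; khugepaged may also
           * collapse its basic pages into huge pages later. */
          if (madvise(p, len, MADV_HUGEPAGE))
                  perror("madvise");

          p[0] = 1;       /* a fault here may now allocate a huge page */
          munmap(p, len);
          return 0;
  }
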
concepts.rst
    79  `huge`. Usage of huge pages significantly reduces pressure on the TLB,
    83  memory with huge pages. The first one is the `HugeTLB filesystem`, or
    86  the memory and mapped using huge pages. The hugetlbfs is described at
    89  Another, more recent, mechanism that enables use of huge pages is
    92  the system memory should and can be mapped by huge pages, THP
   201  buffer for DMA, or when THP allocates a huge page. Memory `compaction`

/linux-6.3-rc2/tools/testing/selftests/mm/
charge_reserved_hugetlb.sh
    52  if [[ -e /mnt/huge ]]; then
    53    rm -rf /mnt/huge/*
    54    umount /mnt/huge || echo error
    55    rmdir /mnt/huge
   260  if [[ -e /mnt/huge ]]; then
   261    rm -rf /mnt/huge/*
   262    umount /mnt/huge
   263    rmdir /mnt/huge
   290  mkdir -p /mnt/huge
   291  mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
  [all …]

/linux-6.3-rc2/Documentation/mm/
hugetlbfs_reserv.rst
     9  preallocated for application use. These huge pages are instantiated in a
    11  to be used. If no huge page exists at page fault time, the task is sent
    19  'reserve' huge pages at mmap() time to ensure that huge pages would be
    35  huge pages are only available to the task which reserved them.
    36  Therefore, the number of huge pages generally available is computed
    50  There is one reserve map for each huge page mapping in the system.
    75  The PagePrivate page flag is used to indicate that a huge page
    76  reservation must be restored when the huge page is freed. More
    77  details will be discussed in the "Freeing huge pages" section.
   312  huge pages. If they cannot be reserved, the mount fails.
  [all …]

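The reservation behaviour these lines describe can be seen from a short
userspace sketch. The /mnt/huge mount point is borrowed from the selftest above
and is only an example; the rest is ordinary hugetlbfs usage, assuming a 2MB
huge page size::

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
          size_t len = 2 * 1024 * 1024;   /* one assumed-2MB huge page */

          int fd = open("/mnt/huge/demo", O_CREAT | O_RDWR, 0600);
          if (fd < 0)
                  return 1;
          ftruncate(fd, len);

          /* The huge page is reserved here, at mmap() time; if the
           * reservation cannot be made, mmap() fails up front instead
           * of a later fault delivering SIGBUS. */
          char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
          if (p == MAP_FAILED) {
                  perror("mmap");
                  return 1;
          }
          p[0] = 1;       /* instantiated from the reserve at first touch */
          munmap(p, len);
          close(fd);
          return 0;
  }
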
transhuge.rst
    13  knowledge fall back to breaking huge pmd mapping into table of ptes and,
    41  is complete, so they won't ever notice the fact that the page is huge. But
    57  Code walking pagetables but unaware of huge pmds can simply call
    92  To make pagetable walks huge pmd aware, all you need to do is to call
    94  mmap_lock in read (or write) mode to be sure a huge pmd cannot be
   100  page table lock will prevent the huge pmd being converted into a
   104  before. Otherwise, you can proceed to process the huge pmd and the
   107  Refcounts and transparent huge pages
   133  requests to split pinned huge pages: it expects page count to be equal to

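The recipe at lines 92-104 above, spelled out as kernel-side code, looks
roughly like the fragment below (a sketch assuming the in-tree
pmd_trans_huge_lock() helper; not runnable outside the kernel)::

  /* Caller holds mmap_lock in read (or write) mode, so the huge pmd
   * cannot be split from under us once the check succeeds. */
  static void walk_pmd_huge_aware(struct vm_area_struct *vma, pmd_t *pmd)
  {
          spinlock_t *ptl;

          ptl = pmd_trans_huge_lock(pmd, vma);
          if (ptl) {
                  /* The page table lock keeps the pmd huge: process it
                   * as one unit, then drop the lock. */
                  spin_unlock(ptl);
                  return;
          }

          /* Not (or no longer) huge: fall back to the pte-level walk,
           * as the huge-pmd-unaware path above does via split helpers. */
  }
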
zsmalloc.rst
   144  per zspage. Any object larger than 3264 bytes is considered huge and belongs
   146  in huge classes do not share pages).
   149  for the huge size class and fewer huge classes overall. This allows for more
   152  For zspage chain size of 8, huge class watermark becomes 3632 bytes::
   164  For zspage chain size of 16, huge class watermark becomes 3840 bytes::
   192  pages per zspage    number of size classes (clusters)    huge size class watermark

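The watermarks above follow from simple packing arithmetic: a chain of N
physical pages gives a zspage of N * PAGE_SIZE bytes, and a size class only
pays off while several objects share it. A hedged illustration of that
arithmetic (not the kernel's exact huge_class_size computation)::

  #include <stdio.h>

  #define PAGE_SIZE 4096

  int main(void)
  {
          int chains[] = { 4, 8, 16 };
          int class_size = 3264;  /* the chain-4 watermark quoted above */

          /* Longer chains keep large objects packable, which is why the
           * huge watermark rises (3264 -> 3632 -> 3840) with chain size. */
          for (int i = 0; i < 3; i++) {
                  int zspage = chains[i] * PAGE_SIZE;
                  int objs = zspage / class_size;
                  int waste = zspage - objs * class_size;
                  printf("chain %2d: %2d objects/zspage, %4d bytes wasted\n",
                         chains[i], objs, waste);
          }
          return 0;
  }
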
arch_pgtable_helpers.rst
   139  | pmd_set_huge              | Creates a PMD huge mapping                       |
   141  | pmd_clear_huge            | Clears a PMD huge mapping                        |
   195  | pud_set_huge              | Creates a PUD huge mapping                       |
   197  | pud_clear_huge            | Clears a PUD huge mapping                        |

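These helpers back huge vmap/ioremap mappings. A kernel-side sketch of the
usual try-then-fall-back shape, assuming the common prototypes
int pmd_set_huge(pmd_t *, phys_addr_t, pgprot_t) and int pmd_clear_huge(pmd_t *);
illustrative only::

  /* Try to cover PMD_SIZE with a single huge entry; on failure the
   * caller drops to PTE-granularity mappings instead. */
  static int try_huge_pmd(pmd_t *pmd, phys_addr_t phys, pgprot_t prot)
  {
          if (pmd_set_huge(pmd, phys, prot))
                  return 1;       /* nonzero: huge mapping created */
          return 0;
  }
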
/linux-6.3-rc2/arch/powerpc/include/asm/nohash/32/
pgtable.h
   236  static int number_of_cells_per_pte(pmd_t *pmd, pte_basic_t val, int huge)  in number_of_cells_per_pte() argument
   238          if (!huge)  in number_of_cells_per_pte()
   249          unsigned long clr, unsigned long set, int huge)  in pte_update() argument
   257          num = number_of_cells_per_pte(pmd, new, huge);  in pte_update()
   284          unsigned long clr, unsigned long set, int huge)  in pte_update() argument
   334          int huge = psize > mmu_virtual_psize ? 1 : 0;  in __ptep_set_access_flags() local
   336          pte_update(vma->vm_mm, address, ptep, 0, set, huge);  in __ptep_set_access_flags()

pte-8xx.h
   140          unsigned long clr, unsigned long set, int huge);
   153          int huge = psize > mmu_virtual_psize ? 1 : 0;  in __ptep_set_access_flags() local
   155          pte_update(vma->vm_mm, address, ptep, clr, set, huge);  in __ptep_set_access_flags()

/linux-6.3-rc2/arch/powerpc/include/asm/book3s/64/
hash.h
   147          pte_t *ptep, unsigned long pte, int huge);
   154          int huge)  in hash__pte_update() argument
   172          if (!huge)  in hash__pte_update()
   177          hpte_need_flush(mm, addr, ptep, old, huge);  in hash__pte_update()

radix.h
   176          int huge)  in radix__pte_update() argument
   181          if (!huge)  in radix__pte_update()

/linux-6.3-rc2/arch/loongarch/mm/
init.c
   169          int huge = pmd_val(*pmd) & _PAGE_HUGE;  in vmemmap_check_pmd() local
   171          if (huge)  in vmemmap_check_pmd()
   174          return huge;  in vmemmap_check_pmd()

/linux-6.3-rc2/Documentation/core-api/
pin_user_pages.rst
    64  severely by huge pages, because each tail page adds a refcount to the
    66  field, refcount overflows were seen in some huge page stress tests.
    68  This also means that huge pages and large folios do not suffer
   240  acquired since the system was powered on. For huge pages, the head page is
   241  pinned once for each page (head page and each tail page) within the huge page.
   242  This follows the same sort of behavior that get_user_pages() uses for huge
   243  pages: the head page is refcounted once for each tail or head page in the huge
   244  page, when get_user_pages() is applied to a huge page.
   248  PAGE_SIZE granularity, even if the original pin was applied to a huge page.

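For orientation, the counters described above move in pairs around calls like
the kernel-side sketch below (hedged: pin_user_pages_fast() and
unpin_user_pages() are real entry points, but treat the fragment as an
illustration rather than a reference)::

  #include <linux/mm.h>
  #include <linux/slab.h>

  static int pin_then_release(unsigned long uaddr, int nr)
  {
          struct page **pages;
          int pinned;

          pages = kcalloc(nr, sizeof(*pages), GFP_KERNEL);
          if (!pages)
                  return -ENOMEM;

          /* Every covered page gets a pin, so a range backed by a huge
           * page bumps the head page once per head/tail page. */
          pinned = pin_user_pages_fast(uaddr, nr, FOLL_WRITE, pages);
          if (pinned > 0)
                  /* Release works at PAGE_SIZE granularity, matching
                   * line 248 above. */
                  unpin_user_pages(pages, pinned);

          kfree(pages);
          return pinned < 0 ? pinned : 0;
  }
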
/linux-6.3-rc2/Documentation/admin-guide/hw-vuln/
multihit.rst
    81  * - KVM: Mitigation: Split huge pages
   111  In order to mitigate the vulnerability, KVM initially marks all huge pages
   125  The KVM hypervisor mitigation mechanism for marking huge pages as
   134  non-executable huge pages in the Linux kernel KVM module. All huge

/linux-6.3-rc2/arch/alpha/lib/
ev6-clear_user.S
    86          subq $1, 16, $4         # .. .. .. E : If < 16, we can not use the huge loop
    87          and $16, 0x3f, $2       # .. .. E .. : Forward work for huge loop
    88          subq $2, 0x40, $3       # .. E .. .. : bias counter (huge loop)

/linux-6.3-rc2/Documentation/riscv/
vm-layout.rst
    42  …0000004000000000 | +256 GB | ffffffbfffffffff | ~16M TB | ... huge, almost 64 bits wide hole of…
    78  …0000800000000000 | +128 TB | ffff7fffffffffff | ~16M TB | ... huge, almost 64 bits wide hole of…
   114  …0100000000000000 |  +64 PB | feffffffffffffff | ~16K PB | ... huge, almost 64 bits wide hole of…

/linux-6.3-rc2/arch/powerpc/mm/book3s64/
hash_tlb.c
    41          pte_t *ptep, unsigned long pte, int huge)  in hpte_need_flush() argument
    61          if (huge) {  in hpte_need_flush()

/linux-6.3-rc2/drivers/misc/lkdtm/
bugs.c
   276  volatile unsigned int huge = INT_MAX - 2;  variable
   283          value = huge;  in lkdtm_OVERFLOW_SIGNED()
   298          value = huge;  in lkdtm_OVERFLOW_UNSIGNED()

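The two OVERFLOW tests provoke the C-level behaviours around a value this close
to the type's limit. A small userspace sketch of the distinction (my own
illustration, not LKDTM code)::

  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
          unsigned int u = UINT_MAX - 2;
          u += 4;         /* unsigned overflow is defined: wraps to 1 */
          printf("unsigned wrap: %u\n", u);

          int s = INT_MAX - 2;
          /* s + 4 would be signed overflow, which is undefined
           * behaviour; that is the condition lkdtm_OVERFLOW_SIGNED()
           * triggers deliberately. */
          printf("signed, still in range: %d\n", s + 2);
          return 0;
  }
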
/linux-6.3-rc2/mm/
shmem.c
   117  int huge;  member
   525          switch (huge) {  in shmem_format_huge()
  1589          huge = false;  in shmem_alloc_and_acct_folio()
  1595          if (huge)  in shmem_alloc_and_acct_folio()
  3670          sbinfo->huge = ctx->huge;  in shmem_reconfigure()
  3738          if (sbinfo->huge)  in shmem_show_options()
  3804          sbinfo->huge = ctx->huge;  in shmem_fill_super()
  4126          int huge;  in shmem_enabled_store() local
  4136          if (huge == -EINVAL)  in shmem_enabled_store()
  4139              huge != SHMEM_HUGE_NEVER && huge != SHMEM_HUGE_DENY)  in shmem_enabled_store()
  [all …]

memory-failure.c
  2413          bool huge = false;  in unpoison_memory() local
  2462          huge = true;  in unpoison_memory()
  2478          huge = true;  in unpoison_memory()
  2497          if (!huge)  in unpoison_memory()
  2551          bool huge = PageHuge(page);  in soft_offline_in_use_page() local
  2558          if (!huge && PageTransHuge(hpage)) {  in soft_offline_in_use_page()
  2594          bool release = !huge;  in soft_offline_in_use_page()
  2596          if (!page_handle_poison(page, huge, release))  in soft_offline_in_use_page()
  2603          pfn, msg_page[huge], ret, &page->flags);  in soft_offline_in_use_page()
  2609          pfn, msg_page[huge], page_count(page), &page->flags);  in soft_offline_in_use_page()

/linux-6.3-rc2/Documentation/features/vm/huge-vmap/
arch-support.txt
     2  # Feature name: huge-vmap

/linux-6.3-rc2/Documentation/admin-guide/blockdev/
zram.rst
   133  size of the disk when not in use so a huge zram is wasteful.
   321  echo huge > /sys/block/zramX/writeback
   346  Additionally, if a user chooses to writeback only huge and idle pages
   416  algorithm can, for example, be more successful compressing huge pages (those
   453  # HUGE pages recompression is activated by `huge` mode
   454  echo "type=huge" > /sys/block/zram0/recompress
   488  echo "type=huge algo=zstd" > /sys/block/zramX/recompress
   518  huge page
   527  and the block's state is huge so it is written back to the backing
  [all …]

/linux-6.3-rc2/arch/parisc/mm/
init.c
   398          bool huge = false;  in map_pages() local
   408          huge = true;  in map_pages()
   413          huge = true;  in map_pages()
   419          if (huge)  in map_pages()

/linux-6.3-rc2/arch/powerpc/include/asm/nohash/64/
pgtable.h
   178          int huge)  in pte_update() argument
   184          if (!huge)  in pte_update()