/xen-4.10.0-shim-comet/docs/misc/

xenpaging.txt
     5  guest memory and its filesystems!
     9  xenpaging writes memory pages of a given guest to a file and moves the
    10  pages back to the pool of available memory. Once the guest wants to
    11  access the paged-out memory, the page is read from disk and placed into
    12  memory. This allows the sum of all running guests to use more memory
    33  Once xenpaging runs it needs a memory target, which is the memory
    37  xenstore-write /local/domain/<dom_id>/memory/target-tot_pages $((1024*512))
    39  Now xenpaging tries to page out as many pages as needed to keep the overall memory

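The target-tot_pages knob above can also be driven programmatically. A minimal C sketch using libxenstore (assuming <xenstore.h>; the helper name and buffer sizes are illustrative, not part of xenpaging itself):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>
    #include <xenstore.h>

    /* Set the paging target (in pages) for a domain, equivalent to the
     * xenstore-write invocation shown above. */
    static int set_paging_target(int domid, unsigned long tot_pages)
    {
        char path[64], val[32];
        bool ok;
        struct xs_handle *xs = xs_open(0);   /* connect to xenstored */

        if (!xs)
            return -1;
        snprintf(path, sizeof(path),
                 "/local/domain/%d/memory/target-tot_pages", domid);
        snprintf(val, sizeof(val), "%lu", tot_pages);
        ok = xs_write(xs, XBT_NULL, path, val, strlen(val));
        xs_close(xs);
        return ok ? 0 : -1;
    }
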
libxl_memory.txt
     1  /* === Domain memory breakdown: HVM guests ==================================
    38     === Domain memory breakdown: PV guests ==================================

vtpm-platforms.txt
    30  memory=8
    39  memory=8
    46  memory=8
    54  memory=1024
    61  memory=1024
    93  permitted access to IO memory at 0xfed42; this IO memory is accessible to the

/xen-4.10.0-shim-comet/xen/include/

xlat.lst
    80  ! add_to_physmap            memory.h
    81  ! add_to_physmap_batch      memory.h
    82  ! foreign_memory_map        memory.h
    83  ! memory_exchange           memory.h
    84  ! memory_map                memory.h
    85  ! memory_reservation        memory.h
    86  ! mem_access_op             memory.h
    87  ! pod_target                memory.h
    88  ! remove_from_physmap       memory.h
    90  ? vmemrange                 memory.h
    [all …]

/xen-4.10.0-shim-comet/docs/man/

xl.conf.pod.5
     51  memory assigned to domain 0 in order to free memory for new domains.
     54  domain 0 memory.
     61  of memory given to domain 0 by default.
    151  guarantee that there is memory available for the guest. This is an
    155  quickly and the amount of free memory (which C<xl info> can show) is
    157  the amount of memory (see 'memory' in xl.conf(5)) is set, which is then
    159  The free memory in C<xl info> is the combination of the hypervisor's
    160  free heap memory minus the outstanding claims value.
    176  attempted as normal and may fail due to memory exhaustion.
    180  Normal memory and freeable pool of ephemeral pages (tmem) is used when
    [all …]

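As a rough illustration of the "free heap minus outstanding claims" arithmetic described in lines 159-160, free memory can be derived from libxc's xc_physinfo() like this (a sketch; xl's own accounting also folds in scrub pages and is not reproduced exactly here):

    #include <inttypes.h>
    #include <stdio.h>
    #include <xenctrl.h>

    int main(void)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        xc_physinfo_t info;
        int rc = 1;

        if (!xch)
            return 1;
        if (xc_physinfo(xch, &info) == 0) {
            /* pages -> KiB: shift by (page shift - 10) */
            uint64_t free_kb = (info.free_pages - info.outstanding_pages)
                               << (XC_PAGE_SHIFT - 10);
            printf("free memory: %" PRIu64 " KiB\n", free_kb);
            rc = 0;
        }
        xc_interface_close(xch);
        return rc;
    }
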
xl-numa-placement.pod.7
     11  NUMA (which stands for Non-Uniform Memory Access) means that the memory
     13  distance between that CPU and that memory. In fact, most of the NUMA
     14  systems are built in such a way that each processor has its local memory,
     16  data from and on remote memory (that is, memory local to some other processor)
     19  the memory directly attached to the set of cores.
     22  running memory-intensive workloads on a shared host. In fact, the cost
     23  of accessing non node-local memory locations is very high, and the
     49  created, as most of its memory is allocated at that time and can
    144  affects NUMA placement and memory accesses as, in this case, the
    219  the candidate with the greatest amount of free memory is
    [all …]

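The heuristic hinted at by the last match can be sketched as a comparator: candidates with fewer vCPUs already runnable on them win, and ties go to the candidate with more free memory (struct and names are hypothetical, not libxl's internal types):

    #include <stdint.h>

    struct candidate {
        int      nr_vcpus;   /* vCPUs already runnable on these nodes */
        uint64_t free_memkb; /* free memory on these nodes            */
    };

    /* Return <0 if a is the better placement candidate, >0 if b is. */
    static int candidate_cmp(const struct candidate *a,
                             const struct candidate *b)
    {
        if (a->nr_vcpus != b->nr_vcpus)
            return a->nr_vcpus - b->nr_vcpus;   /* fewer vCPUs wins */
        if (a->free_memkb != b->free_memkb)
            return a->free_memkb > b->free_memkb ? -1 : 1; /* more memory wins */
        return 0;
    }
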
/xen-4.10.0-shim-comet/tools/examples/

xlexample.pvlinux
    25  # Initial memory allocation (MB)
    26  memory = 128
    28  # Maximum memory (MB)
    29  # If this is greater than `memory' then the slack will start ballooned

xlexample.hvm
    24  # Initial memory allocation (MB)
    25  memory = 128
    27  # Maximum memory (MB)
    28  # If this is greater than `memory' then the slack will start ballooned

xlexample.pvhlinux
    28  # Initial memory allocation (MB)
    29  memory = 512

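The "slack" these example configs mention is the ballooned-down gap between maxmem and memory; it can be adjusted at runtime with xl mem-set or, from C, roughly as follows (a sketch; the helper and the chosen target are illustrative):

    #include <stdint.h>
    #include <libxl.h>

    /* Balloon a running guest to 256 MiB, akin to `xl mem-set <domain> 256m'. */
    static int balloon_to_256m(libxl_ctx *ctx, uint32_t domid)
    {
        int64_t target_memkb = 256 * 1024;  /* target in KiB */

        /* relative=0 makes the target absolute; enforce=1 lets libxl
         * also adjust the hypervisor-side limit. */
        return libxl_set_memory_target(ctx, domid, target_memkb, 0, 1);
    }
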
/xen-4.10.0-shim-comet/tools/helpers/

init-xenstore-domain.c
     25  static int memory;                                   [variable]
     67  int limit_kb = (maxmem ? : (memory + 1)) * 1024;     [in build()]
    168  rv = xc_dom_mem_init(dom, memory);                   [in build()]
    289  if ( maxmem < memory )                               [in parse_maxmem()]
    330  memory = strtol(optarg, NULL, 10);                   [in main()]
    353  if ( optind != argc || !kernel || !memory )          [in main()]
    402  snprintf(buf, 16, "%d", memory * 1024);              [in main()]

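The `maxmem ? : (memory + 1)` expression in build() uses the GNU `x ? : y` extension, which yields x when x is non-zero and y otherwise. The same default-limit rule in portable C (a standalone sketch):

    /* Memory limit for the xenstore domain, in KiB: the explicit maxmem
     * if one was given, else the initial allocation plus 1 MiB headroom
     * (mirrors the line quoted above). */
    static int default_limit_kb(int memory_mb, int maxmem_mb)
    {
        return (maxmem_mb ? maxmem_mb : (memory_mb + 1)) * 1024;
    }
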
/xen-4.10.0-shim-comet/xen/arch/x86/boot/

mem.S
    45  # e801h memory size call
    54  cmpw    $0x0, %dx                # memory in AX/BX rather than
    60  movl    %edx,bootsym(highmem_kb) # store extended memory size
    62  addl    %ecx,bootsym(highmem_kb) # and add lower memory into

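For context: BIOS call INT 15h, AX=E801h reports memory between 1 MiB and 16 MiB in KiB (AX, duplicated in CX) and memory above 16 MiB in 64 KiB blocks (BX, duplicated in DX); the code above falls back to AX/BX when DX is zero. The same arithmetic in C (a sketch of the convention, not a line-by-line translation of mem.S):

    #include <stdint.h>

    /* Combine the E801h register pairs into a single KiB count. */
    static uint32_t e801_extmem_kb(uint16_t ax, uint16_t bx,
                                   uint16_t cx, uint16_t dx)
    {
        /* Prefer CX/DX; a zero DX means the BIOS answered in AX/BX. */
        uint16_t low_kb   = dx ? cx : ax;   /* 1-16 MiB region, in KiB    */
        uint16_t blocks64 = dx ? dx : bx;   /* above 16 MiB, 64 KiB units */

        return (uint32_t)low_kb + (uint32_t)blocks64 * 64;
    }
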
/xen-4.10.0-shim-comet/docs/misc/arm/device-tree/

guest.txt
    11  memory where the grant table should be mapped to, using an
    12  HYPERVISOR_memory_op hypercall. The memory region is large enough to map
    29  xen,uefi-mmap-start   | 64-bit | Guest physical address of the UEFI memory
    32  xen,uefi-mmap-size    | 32-bit | Size in bytes of the UEFI memory map
    36                        |        | memory map.

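The hypercall referenced in lines 11-12 is typically the XENMEM_add_to_physmap flavour of HYPERVISOR_memory_op. A guest-kernel-side sketch (the HYPERVISOR_memory_op wrapper is OS-specific, and the helper name is illustrative):

    #include <xen/xen.h>
    #include <xen/memory.h>

    /* Map grant-table frame `idx' of our own domain at guest frame `gfn',
     * inside the region the device tree advertises. */
    static int map_grant_frame(unsigned long idx, unsigned long gfn)
    {
        struct xen_add_to_physmap xatp = {
            .domid = DOMID_SELF,
            .space = XENMAPSPACE_grant_table,
            .idx   = idx,
            .gpfn  = gfn,
        };

        return HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
    }
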
/xen-4.10.0-shim-comet/tools/xenstore/

talloc_guide.txt
     13  The new talloc is a hierarchical, reference counted memory pool system
     64  memory of the given type.
     99  then the memory is not actually released, but instead the most
    125  around 48 bytes of memory on intel x86 platforms).
    169  pieces of memory. A common use for destructors is to clean up
    213  memory without releasing the name. All of the memory is released when
    308  memory for a longer time.
    350  for the top level memory context, but only if
    364  for the top level memory context, but only if
    533  to reduce the noise in memory leak reports.
    [all …]

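A minimal sketch of the hierarchy the guide describes, using the talloc API bundled with xenstore (the struct and names are invented for illustration): freeing a parent context frees every child beneath it, and destructors fire at that point.

    #include <stdio.h>
    #include "talloc.h"

    struct conn { int fd; };

    /* Destructor: runs when the object (or any ancestor) is freed;
     * returning 0 lets the free proceed. */
    static int conn_destructor(void *ptr)
    {
        struct conn *c = ptr;
        printf("closing fd %d\n", c->fd);
        return 0;
    }

    int main(void)
    {
        void *pool = talloc_new(NULL);              /* top-level context */
        struct conn *c = talloc(pool, struct conn); /* typed child       */
        char *name = talloc_strdup(c, "guest-7");   /* grandchild        */

        c->fd = 42;
        talloc_set_destructor(c, conn_destructor);

        printf("%s\n", name);
        talloc_free(pool);  /* releases name, c (destructor runs), pool */
        return 0;
    }
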
/xen-4.10.0-shim-comet/tools/tests/mce-test/cases/ucna_llc/guest/

cases.sh
    49  m) memory=$OPTARG;;
    61  create_hvm_guest $image -u $vcpus -m $memory

/xen-4.10.0-shim-comet/tools/tests/mce-test/cases/srao_llc/guest/

cases.sh
    49  m) memory=$OPTARG;;
    61  create_hvm_guest $image -u $vcpus -m $memory

/xen-4.10.0-shim-comet/tools/tests/mce-test/cases/srao_mem/guest/

cases.sh
    49  m) memory=$OPTARG;;
    61  create_hvm_guest $image -u $vcpus -m $memory

/xen-4.10.0-shim-comet/tools/libacpi/

build.c (all matches in construct_srat())
    219  struct acpi_20_srat_memory *memory;          (local)
    225  sizeof(*memory) * config->numa.nr_vmemranges;
    252  memory = (struct acpi_20_srat_memory *)processor;
    255  memory->type = ACPI_MEMORY_AFFINITY;
    256  memory->length = sizeof(*memory);
    257  memory->domain = config->numa.vmemrange[i].nid;
    258  memory->flags = ACPI_MEM_AFFIN_ENABLED;
    259  memory->base_address = config->numa.vmemrange[i].start;
    260  memory->mem_length = config->numa.vmemrange[i].end -
    262  memory++;
    [all …]

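Pieced back together, these matches form a loop that emits one ACPI 2.0 SRAT Memory Affinity record per virtual memory range. A consolidated sketch (the subtraction on line 260 is truncated in the match output, so its second operand is inferred):

    /* One Memory Affinity entry per vmemrange; `memory' points just past
     * the processor affinity entries laid down earlier (see line 252). */
    for ( i = 0; i < config->numa.nr_vmemranges; i++ )
    {
        memory->type         = ACPI_MEMORY_AFFINITY;
        memory->length       = sizeof(*memory);
        memory->domain       = config->numa.vmemrange[i].nid;
        memory->flags        = ACPI_MEM_AFFIN_ENABLED;
        memory->base_address = config->numa.vmemrange[i].start;
        memory->mem_length   = config->numa.vmemrange[i].end -
                               config->numa.vmemrange[i].start;
        memory++;
    }
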
/xen-4.10.0-shim-comet/tools/hotplug/Linux/init.d/

sysconfig.xencommons.in
    68  # xenstore domain memory size in MiB.
    75  # Maximum xenstore domain memory size. Can be specified as:
    77  # - fraction of host memory, e.g. 1/100

/xen-4.10.0-shim-comet/tools/ocaml/test/

list_domains.ml
    14  and memory = dominfo.Xenlight.Dominfo.current_memkb
    16  printf "Dom %d: %c%c%c%c%c %LdKB\n" id running blocked paused shutdown dying memory

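The OCaml binding above wraps libxl. A hedged C equivalent using libxl_list_domain() (error handling kept minimal; only the running flag is printed rather than all five state characters):

    #include <stdio.h>
    #include <libxl.h>

    int main(void)
    {
        libxl_ctx *ctx = NULL;
        libxl_dominfo *info;
        int i, nb = 0;

        if (libxl_ctx_alloc(&ctx, LIBXL_VERSION, 0, NULL))
            return 1;

        info = libxl_list_domain(ctx, &nb);
        for (i = 0; i < nb; i++)
            printf("Dom %u: %c %luKB\n", info[i].domid,
                   info[i].running ? 'r' : '-',
                   (unsigned long)info[i].current_memkb);

        libxl_dominfo_list_free(info, nb);
        libxl_ctx_free(ctx);
        return 0;
    }
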
/xen-4.10.0-shim-comet/tools/tests/mce-test/lib/

xen-mceinj-tool.sh
    70  m ) memory=$OPTARG;;
    87  [ -z $memory ] || sed -i "/^memory/s/^.*$/memory = $memory/" $config

/xen-4.10.0-shim-comet/xen/common/

Kconfig
     70  Transcendent memory allows PV-aware guests to collaborate on memory
     71  usage. Guests can 'swap' their memory to the hypervisor or have a
     72  collective pool of memory shared across guests. The end result is
     73  less memory usage by guests allowing higher guest density.
     98  the hypervisor itself, and related resources such as memory and
    123  this will save a tiny amount of memory and time to update the stats.

Makefile
    25  obj-y += memory.o
    71  obj-$(CONFIG_COMPAT) += $(addprefix compat/,domain.o kernel.o memory.o multicall.o xlat.o)

/xen-4.10.0-shim-comet/

SUPPORT.md
    182  but with all the memory from the previous state of the VM intact.
    200  ### Dynamic memory control
    204  Allows a guest to add or remove memory after boot-time.
    207  ### Populate-on-demand memory
    212  to boot with memory < maxmem.
    231  which guests can use to store memory
    232  rather than caching in their own memory or swapping to disk.
    234  can allow more efficient aggregate use of memory across VMs.
    241  Allows external monitoring of hypervisor memory
    306  to have higher-level page table entries point directly to memory,
    [all …]

/xen-4.10.0-shim-comet/tools/flask/policy/modules/

nomigrate.te
    2  # once built, dom0 cannot read their memory.

prot_domU.te
    3  # map memory belonging to those domains.