Lines Matching refs:on

12 access times of a program running on a CPU depend on the relative
15 on which it can operate very fast. On the other hand, getting and storing
16 data from and to remote memory (that is, memory local to some other processor)
22 running memory-intensive workloads on a shared host. In fact, the cost
27 page on the Wiki.
45 based on the vCPUs' scheduling affinity.
47 Notice that, even though the node affinity of a domain may change on-line,
55 The simplest way of placing a domain on a NUMA node is setting the hard
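For example, hard affinity ("pinning") is most commonly set through the cpus=
option in the xl domain config file. A minimal sketch, assuming node 0 owns
pCPUs 0-3 on this host (the actual numbers, and the availability of the
"node:0" shorthand, depend on the host and on the Xen/xl version):

    # xl domain config: pin all the domain's vCPUs to the pCPUs of NUMA node 0
    name   = "numa-guest"
    memory = 2048
    vcpus  = 4
    cpus   = "0-3"        # hard affinity; recent xl also accepts "node:0"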
74 itself always tries to run the domain's vCPUs on one of the nodes in
81 soft, both on a per-vCPU basis. This means each vCPU can have its own
82 soft affinity, specifying on which pCPUs that vCPU prefers to execute. This is
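A sketch of the per-vCPU form, with hypothetical pCPU numbers (cpus_soft= is
only available in Xen releases that support soft affinity, and its list form is
assumed to behave like the one cpus= accepts):

    # xl domain config: per-vCPU soft affinity (a preference, not a restriction)
    vcpus     = 2
    cpus_soft = ["0-3", "4-7"]   # vCPU 0 prefers pCPUs 0-3, vCPU 1 prefers 4-7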
94 substantial performance benefits, although this will depend on the
104 In this case, the vCPU is always scheduled on one of the pCPUs to which
111 pCPU. In this case, the vCPU can run on any pCPU. Nevertheless, the
112 scheduler will try to have it running on one of the pCPUs in its soft
119 pCPUs. In this case, the vCPU is always scheduled on one of the pCPUs
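Putting the two together, the hard affinity bounds where a vCPU may run at all,
while the soft affinity expresses a preference within those bounds. A sketch
with hypothetical pCPU numbers (the four-argument vcpu-pin form assumes an xl
version with soft affinity support):

    # xl domain config: vCPUs may only run on pCPUs 0-7 (hard),
    # but the scheduler will prefer pCPUs 0-3 when possible (soft)
    cpus      = "0-7"
    cpus_soft = "0-3"

    # the same can be changed at run time, hard affinity first, then soft:
    xl vcpu-pin numa-guest all 0-7 0-3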
143 should be created and scheduled on, directly in its config file. This
145 hypervisor constructs the node-affinity of a VM based directly on its
153 execute on those same pCPUs.
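Since the node-affinity (and hence where the domain's memory is allocated) is
derived from the vCPU affinities, pointing the affinity at the pCPUs of one
node is also what steers memory allocation toward that node. A sketch, assuming
node 1 owns pCPUs 8-15 (the host topology can be checked with xl info -n):

    # xl domain config: prefer node 1's pCPUs, so memory tends to be allocated
    # on node 1, while the vCPUs may still run elsewhere if node 1 is busy
    cpus_soft = "8-15"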
180 tries to figure out on its own on which node(s) the domain could fit best.
187 It is worth noting that optimally fitting a set of VMs on the NUMA
194 on referred to as 'candidates') that have enough free memory and enough
198 decision on which candidate to pick happens according to the following
211 candidates with a smaller number of vCPUs runnable on them (due
213 better. In case the same number of vCPUs can run on two (or more)
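The ranking sketched by the lines above can be illustrated in a few lines of
Python. Field and function names are hypothetical; this is only a sketch of the
criteria, not the actual libxl placement code:

    # Sketch of the candidate-ranking criteria (illustrative, not libxl code).
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        free_memory: int      # free memory on the candidate's node(s)
        num_pcpus: int        # physical CPUs on the candidate's node(s)
        vcpus_runnable: int   # vCPUs of existing domains already runnable there

    def pick_candidate(candidates, dom_memory, dom_vcpus):
        # 1) keep only candidates with enough free memory and enough pCPUs
        fitting = [c for c in candidates
                   if c.free_memory >= dom_memory and c.num_pcpus >= dom_vcpus]
        if not fitting:
            return None
        # 2) fewer already-runnable vCPUs is better (lighter load); ties are
        #    broken in favour of the candidate with the most free memory
        return min(fitting, key=lambda c: (c.vcpus_runnable, -c.free_memory))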
254 pCPU on the host, but the memory for the domain will come from the
293 if it is requested on a host with more than 16 NUMA nodes.