
accessing times of a program running on a CPU depend on the relative
distance between that CPU and that memory. In fact, most of the NUMA
defined as a set of processor cores (typically a physical CPU package) and
the memory directly attached to the set of cores.
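
One way of seeing how a particular host is organized in nodes is to look
at its topology from within dom0. For instance, C<xl info -n> is expected
to print the host's NUMA information (per-node memory and node distances)
in addition to the CPU topology:

    # Show CPU topology and NUMA information for the host (from dom0)
    xl info -n
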
of accessing non node-local memory locations is very high, and the

=head2 Xen and NUMA machines: the concept of I<node-affinity>

The Xen hypervisor deals with NUMA machines through the concept of
I<node-affinity>. The node-affinity of a domain is the set of NUMA nodes
of the host where the memory for the domain is being allocated (mostly,
which instead is the set of pCPUs where the vCPU is allowed (or prefers)
Notice that, even though the node affinity of a domain may change on-line,
created, as most of its memory is allocated at that time and can
The simplest way of placing a domain on a NUMA node is setting the hard
scheduling affinity of the domain's vCPUs to the pCPUs of the node. This
also goes under the name of vCPU pinning, and can be done through the
the specified set of pCPUs for any reason, even if all those pCPUs are
may come at the cost of some load imbalances.
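
As a minimal sketch of such pinning, assuming the pCPUs of the target node
are 0-3 (the node layout and the rest of the domain configuration are
hypothetical), the C<cpus=> option in the domain's xl configuration file
sets the hard affinity of all its vCPUs:

    # Pin all the domain's vCPUs to pCPUs 0-3 (hard affinity)
    cpus = "0-3"
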
itself always tries to run the domain's vCPUs on one of the nodes in
pick any free pCPU. Locality of access is less guaranteed than in the
Starting from Xen 4.5, credit1 supports two forms of affinity: hard and
soft affinity of the vCPUs of a domain with its node-affinity.
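
For instance, from Xen 4.5 onwards, C<xl vcpu-pin> also accepts an optional
soft CPU list after the hard one, so soft affinity can be adjusted at run
time as well. The domain name below is just a placeholder:

    # Leave hard affinity at "all", set soft affinity to pCPUs 0-3
    # for every vCPU of the (hypothetical) domain "mydomain"
    xl vcpu-pin mydomain all all 0-3
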
this reason, NUMA aware scheduling has the potential of bringing
In this case, the vCPU is always scheduled on one of the pCPUs to which
scheduler will try to have it running on one of the pCPUs in its soft
pCPUs. In this case, the vCPU is always scheduled on one of the pCPUs
form two disjoint sets of pCPUs, pinning "wins", and the soft affinity
both manual and automatic placement of them across the host's NUMA nodes.
the details of the heuristics adopted for automatic placement (see below),
and the lack of support (in both xm/xend and the Xen versions where that
hypervisor constructs the node-affinity of a VM based directly on its
That, of course, also means the vCPUs of the domain will only be able to
to specify the soft affinity for all the vCPUs of the domain. This affects
procedure will become the soft affinity of all the vCPUs of the domain.
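
Soft affinity can also be set explicitly in the domain configuration file,
instead of relying on the automatic placement result. A minimal sketch,
assuming Xen 4.5 or later (where the C<cpus_soft=> option is available)
and a host where node 1 is made up of pCPUs 4-7:

    # Prefer, but do not force, running on node 1's pCPUs
    cpus_soft = "4-7"
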
It is worth noting that optimally fitting a set of VMs on the NUMA
nodes of a host is an incarnation of the Bin Packing Problem. In fact,
The first thing to do is find the nodes or the sets of nodes (from now
two (or more) candidates span the same number of nodes,
candidates with a smaller number of vCPUs runnable on them (due
better. In case the same number of vCPUs can run on two (or more)
the candidate with the greatest amount of free memory is
there ensures a good balance of the overall host load. Finally, if more
largest amounts of free memory helps keep the memory fragmentation
small, and maximizes the probability of being able to put more domains
by default. No API is provided (yet) for modifying the behaviour of
Note this may change in future versions of Xen/libxl.
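
As an illustration only, the comparison among placement candidates
described by the heuristics earlier in this section could be sketched as
follows (the data structure and field names are hypothetical, not libxl's
actual ones):

    from typing import NamedTuple

    class Candidate(NamedTuple):
        nodes: frozenset    # NUMA nodes making up the candidate
        nr_vcpus: int       # vCPUs already runnable on those nodes
        free_memkb: int     # free memory summed across those nodes

    def candidate_key(c):
        # Fewer nodes first, then fewer runnable vCPUs, then more free
        # memory (negated so that larger values sort as "better").
        return (len(c.nodes), c.nr_vcpus, -c.free_memkb)

    def best_candidate(candidates):
        return min(candidates, key=candidate_key)
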
The concept of vCPU soft affinity was introduced for the first time
and so each vCPU can have its own mask of pCPUs, while node-affinity is
per-domain, which is the equivalent of having all the vCPUs with the same
As NUMA aware scheduling is a new feature of Xen 4.3, things are a little
bit different for earlier versions of Xen. If no "cpus=" option is specified
the result is used to I<pin> the vCPUs of the domain to the output node(s).
On a version of Xen earlier than 4.2, there is no automatic placement at
it won't scale well to systems with an arbitrary number of nodes.