1=head1 NAME
2
3xl - Xen management tool, based on libxenlight
4
5=head1 SYNOPSIS
6
7B<xl> I<subcommand> [I<args>]
8
9=head1 DESCRIPTION
10
11The B<xl> program is the new tool for managing Xen guest
12domains. The program can be used to create, pause, and shutdown
13domains. It can also be used to list current domains, enable or pin
14VCPUs, and attach or detach virtual block devices.
15
16The basic structure of every B<xl> command is almost always:
17
18=over 2
19
20B<xl> I<subcommand> [I<OPTIONS>] I<domain-id>
21
22=back
23
24Where I<subcommand> is one of the subcommands listed below, I<domain-id>
25is the numeric domain id, or the domain name (which will be internally
26translated to domain id), and I<OPTIONS> are subcommand specific
27options.  There are a few exceptions to this rule in the cases where
28the subcommand in question acts on all domains, the entire machine,
29or directly on the Xen hypervisor.  Those exceptions will be clear for
30each of those subcommands.
31
32=head1 NOTES
33
34=over 4
35
36=item start the script B</etc/init.d/xencommons> at boot time
37
38Most B<xl> operations rely upon B<xenstored> and B<xenconsoled>: make
39sure you start the script B</etc/init.d/xencommons> at boot time to
40initialize all the daemons needed by B<xl>.
41
42=item setup a B<xenbr0> bridge in dom0
43
44In the most common network configuration, you need to setup a bridge in dom0
45named B<xenbr0> in order to have a working network in the guest domains.
46Please refer to the documentation of your Linux distribution to know how to
47setup the bridge.
48
49=item B<autoballoon>
50
51If you specify the amount of memory dom0 has, passing B<dom0_mem> to
52Xen, it is highly recommended to disable B<autoballoon>. Edit
53B<@XEN_CONFIG_DIR@/xl.conf> and set it to 0.
54
55=item run xl as B<root>
56
57Most B<xl> commands require root privileges to run due to the
58communications channels used to talk to the hypervisor.  Running as
59non root will return an error.
60
61=back
62
63=head1 GLOBAL OPTIONS
64
65Some global options are always available:
66
67=over 4
68
69=item B<-v>
70
71Verbose.
72
73=item B<-N>
74
75Dry run: do not actually execute the command.
76
77=item B<-f>
78
79Force execution: xl will refuse to run some commands if it detects that xend is
80also running, this option will force the execution of those commands, even
81though it is unsafe.
82
83=item B<-t>
84
85Always use carriage-return-based overwriting for displaying progress
86messages without scrolling the screen.  Without -t, this is done only
87if stderr is a tty.
88
89=item B<-T>
90
91Include timestamps and pid of the xl process in output.
92
93=back
94
95=head1 DOMAIN SUBCOMMANDS
96
97The following subcommands manipulate domains directly.  As stated
98previously, most commands take I<domain-id> as the first parameter.
99
100=over 4
101
102=item B<button-press> I<domain-id> I<button>
103
104I<This command is deprecated. Please use C<xl trigger> instead.>
105
106Indicate an ACPI button press to the domain, where I<button> can be 'power' or
107'sleep'. This command is only available for HVM domains.
108
109=item B<create> [I<configfile>] [I<OPTIONS>]
110
111The create subcommand takes a config file as its first argument: see
112L<xl.cfg(5)> for full details of the file format and possible options.
113If I<configfile> is missing B<xl> creates the domain assuming the default
114values for every option.
115
116I<configfile> has to be an absolute path to a file.
117
118Create will return B<as soon> as the domain is started.  This B<does
119not> mean the guest OS in the domain has actually booted, or is
120available for input.
121
122If the I<-F> option is specified, create will start the domain and not
123return until its death.
124
125B<OPTIONS>
126
127=over 4
128
129=item B<-q>, B<--quiet>
130
131No console output.
132
133=item B<-f=FILE>, B<--defconfig=FILE>
134
135Use the given configuration file.
136
137=item B<-p>
138
139Leave the domain paused after it is created.
140
141=item B<-F>
142
143Run in foreground until death of the domain.
144
145=item B<-V>, B<--vncviewer>
146
147Attach to domain's VNC server, forking a vncviewer process.
148
149=item B<-A>, B<--vncviewer-autopass>
150
151Pass the VNC password to vncviewer via stdin.
152
153=item B<-c>
154
155Attach console to the domain as soon as it has started.  This is
156useful for determining issues with crashing domains and just as a
157general convenience since you often want to watch the
158domain boot.
159
160=item B<key=value>
161
162It is possible to pass I<key=value> pairs on the command line to provide
163options as if they were written in the configuration file; these override
164whatever is in the I<configfile>.
165
166NB: Many config options require characters such as quotes or brackets
167which are interpreted by the shell (and often discarded) before being
168passed to xl, resulting in xl being unable to parse the value
169correctly.  A simple work-around is to put all extra options within a
170single set of quotes, separated by semicolons.  (See below for an example.)
171
172=back
173
174B<EXAMPLES>
175
176=over 4
177
178=item I<with extra parameters>
179
180  xl create hvm.cfg 'cpus="0-3"; pci=["01:05.1","01:05.2"]'
181
182This creates a domain with the file hvm.cfg, but additionally pins it to
183cpus 0-3, and passes through two PCI devices.
184
185=back
186
187=item B<config-update> I<domain-id> [I<configfile>] [I<OPTIONS>]
188
189Update the saved configuration for a running domain. This has no
190immediate effect but will be applied when the guest is next
191restarted. This command is useful to ensure that runtime modifications
192made to the guest will be preserved when the guest is restarted.
193
194Since Xen 4.5 xl has improved capabilities to handle dynamic domain
195configuration changes and will preserve any changes made at runtime
196when necessary. Therefore it should not normally be necessary to use
197this command any more.
198
199I<configfile> has to be an absolute path to a file.
200
201B<OPTIONS>
202
203=over 4
204
205=item B<-f=FILE>, B<--defconfig=FILE>
206
207Use the given configuration file.
208
209=item B<key=value>
210
211It is possible to pass I<key=value> pairs on the command line to
212provide options as if they were written in the configuration file;
213these override whatever is in the I<configfile>.  Please see the note
214under I<create> on handling special characters when passing
215I<key=value> pairs on the command line.
216
217=back
218
219=item B<console> [I<OPTIONS>] I<domain-id>
220
221Attach to the console of a domain specified by I<domain-id>.  If you've set up
222your domains to have a traditional login console this will look much like a
223normal text login screen.
224
225Use the escape character key combination (default Ctrl+]) to detach from the
226domain console.
227
228B<OPTIONS>
229
230=over 4
231
232=item I<-t [pv|serial]>
233
234Connect to a PV console or connect to an emulated serial console.
235PV consoles are the only consoles available for PV domains while HVM
236domains can have both. If this option is not specified it defaults to
237emulated serial for HVM guests and PV console for PV guests.
238
239=item I<-n NUM>
240
241Connect to console number I<NUM>. Console numbers start from 0.
242
243=item I<-e escapechar>
244
245Customize the escape sequence used to detach from the domain console to
246I<escapechar>. If not specified, the value "^]" is used.
247
248=back
249
250=item B<destroy> [I<OPTIONS>] I<domain-id>
251
252Immediately terminate the domain specified by I<domain-id>.  This doesn't give
253the domain OS any chance to react, and is the equivalent of ripping the power
254cord out on a physical machine.  In most cases you will want to use the
255B<shutdown> command instead.
256
257B<OPTIONS>
258
259=over 4
260
261=item I<-f>
262
263Allow domain 0 to be destroyed.  Because a domain cannot destroy itself, this
264is only possible when using a disaggregated toolstack, and is most useful when
265using a hardware domain separated from domain 0.
266
267=back
268
269=item B<domid> I<domain-name>
270
271Converts a domain name to a domain id.
272
273=item B<domname> I<domain-id>
274
275Converts a domain id to a domain name.
276
277=item B<rename> I<domain-id> I<new-name>
278
279Change the domain name of a domain specified by I<domain-id> to I<new-name>.
280
281=item B<dump-core> I<domain-id> [I<filename>]
282
283Dumps the virtual machine's memory for the specified domain to the
284I<filename> specified, without pausing the domain.  The dump file will
285be written to a distribution specific directory for dump files, for example:
286@XEN_DUMP_DIR@/dump.
287
288=item B<help> [I<--long>]
289
290Displays the short help message (i.e. common commands) by default.
291
292If the I<--long> option is specified, it displays the complete set of B<xl>
293subcommands, grouped by function.
294
295=item B<list> [I<OPTIONS>] [I<domain-id> ...]
296
297Displays information about one or more domains.  If no domains are
298specified it displays information about all domains.
299
300
301B<OPTIONS>
302
303=over 4
304
305=item B<-l>, B<--long>
306
307The output for B<xl list> is not the table view shown below, but
308instead presents the data as a JSON data structure.
309
310=item B<-Z>, B<--context>
311
312Also displays the security labels.
313
314=item B<-v>, B<--verbose>
315
316Also displays the domain UUIDs, the shutdown reason and security labels.
317
318=item B<-c>, B<--cpupool>
319
320Also displays the cpupool the domain belongs to.
321
322=item B<-n>, B<--numa>
323
324Also displays the domain NUMA node affinity.
325
326=back
327
328B<EXAMPLE>
329
330An example format for the list is as follows:
331
332    Name                                        ID   Mem VCPUs      State   Time(s)
333    Domain-0                                     0   750     4     r-----   11794.3
334    win                                          1  1019     1     r-----       0.3
335    linux                                        2  2048     2     r-----    5624.2
336
337Name is the name of the domain.  ID the numeric domain id.  Mem is the
338desired amount of memory to allocate to the domain (although it may
339not be the currently allocated amount).  VCPUs is the number of
340virtual CPUs allocated to the domain.  State is the run state (see
341below).  Time is the total run time of the domain as accounted for by
342Xen.
343
344B<STATES>
345
346The State field lists 6 states for a Xen domain, and which ones the
347current domain is in.
348
349=over 4
350
351=item B<r - running>
352
353The domain is currently running on a CPU.
354
355=item B<b - blocked>
356
357The domain is blocked, and not running or runnable.  This can be because the
358domain is waiting on IO (a traditional wait state) or has
359gone to sleep because there was nothing else for it to do.
360
361=item B<p - paused>
362
363The domain has been paused, usually occurring through the administrator
364running B<xl pause>.  When in a paused state the domain will still
365consume allocated resources (like memory), but will not be eligible for
366scheduling by the Xen hypervisor.
367
368=item B<s - shutdown>
369
370The guest OS has shut down (SCHEDOP_shutdown has been called) but the
371domain is not dying yet.
372
373=item B<c - crashed>
374
375The domain has crashed, which is always a violent ending.  Usually
376this state only occurs if the domain has been configured not to
377restart on a crash.  See L<xl.cfg(5)> for more info.
378
379=item B<d - dying>
380
381The domain is in the process of dying, but hasn't completely shut down or
382crashed.
383
384=back
385
386B<NOTES>
387
388=over 4
389
390The Time column is deceptive.  Virtual IO (network and block devices)
391used by the domains requires coordination by Domain0, which means that
392Domain0 is actually charged for much of the time that a DomainU is
393doing IO.  Use of this time value to determine relative utilizations
394by domains is thus very unreliable, as a high IO workload may show as
395less utilized than a high CPU workload.  Consider yourself warned.
396
397=back
398
399=item B<mem-set> I<domain-id> I<mem>
400
401Set the target for the domain's balloon driver.
402
403The default unit is kiB.  Add 't' for TiB, 'g' for GiB, 'm' for
404MiB, 'k' for kiB, and 'b' for bytes (e.g., `2048m` for 2048 MiB).
405
406This must be less than the initial B<maxmem> parameter in the domain's
407configuration.
408
409Note that this operation requests the guest operating system's balloon
410driver to reach the target amount of memory.  The guest may fail to
411reach that amount of memory for any number of reasons, including:
412
413=over 4
414
415=item
416
417The guest doesn't have a balloon driver installed
418
419=item
420
421The guest's balloon driver is buggy
422
423=item
424
425The guest's balloon driver cannot create free guest memory due to
426guest memory pressure
427
428=item
429
430The guest's balloon driver cannot allocate memory from Xen because of
431hypervisor memory pressure
432
433=item
434
435The guest administrator has disabled the balloon driver
436
437=back
438
439B<Warning:> There is no good way to know in advance how small of a
440mem-set will make a domain unstable and cause it to crash.  Be very
441careful when using this command on running domains.
442
443=item B<mem-max> I<domain-id> I<mem>
444
445Specify the limit Xen will place on the amount of memory a guest may
446allocate.
447
448The default unit is kiB.  Add 't' for TiB, 'g' for GiB, 'm' for
449MiB, 'k' for kiB, and 'b' for bytes (e.g., `2048m` for 2048 MiB).
450
451I<mem> can't be set lower than the current memory target for
452I<domain-id>.  It is allowed to be higher than the configured maximum
453memory size of the domain (B<maxmem> parameter in the domain's
454configuration).
455
456Setting the maximum memory size above the configured maximum memory size
457will require special guest support (memory hotplug) in order to be usable
458by the guest.
459
460The domain will not receive any signal regarding the changed memory
461limit.
462
463=item B<migrate> [I<OPTIONS>] I<domain-id> I<host>
464
465Migrate a domain to another host machine. By default B<xl> relies on ssh as a
466transport mechanism between the two hosts.
467
468B<OPTIONS>
469
470=over 4
471
472=item B<-s> I<sshcommand>
473
474Use <sshcommand> instead of ssh.  String will be passed to sh. If empty, run
475<host> instead of ssh <host> xl migrate-receive [-d -e].
476
477=item B<-e>
478
479On the new <host>, do not wait in the background for the death of the
480domain. See the corresponding option of the I<create> subcommand.
481
482=item B<-C> I<config>
483
484Send the specified <config> file instead of the file used on creation of the
485domain.
486
487=item B<--debug>
488
489Display huge (!) amount of debug information during the migration process.
490
491=item B<-p>
492
493Leave the domain on the receive side paused after migration.
494
495=item B<-D>
496
497Preserve the B<domain-id> in the domain coniguration that is transferred
498such that it will be identical on the destination host, unless that
499configuration is overridden using the B<-C> option. Note that it is not
500possible to use this option for a 'localhost' migration.
501
502=back
503
504=item B<remus> [I<OPTIONS>] I<domain-id> I<host>
505
506Enable Remus HA or COLO HA for domain. By default B<xl> relies on ssh as a
507transport mechanism between the two hosts.
508
509B<NOTES>
510
511=over 4
512
513Remus support in xl is still in experimental (proof-of-concept) phase.
514Disk replication support is limited to DRBD disks.
515
516COLO support in xl is still in experimental (proof-of-concept)
517phase. All options are subject to change in the future.
518
519=back
520
521COLO disk configuration looks like:
522
523  disk = ['...,colo,colo-host=xxx,colo-port=xxx,colo-export=xxx,active-disk=xxx,hidden-disk=xxx...']
524
525The supported options are:
526
527=over 4
528
529=item B<colo-host>   : Secondary host's ip address.
530
531=item B<colo-port>   : Secondary host's port, we will run a nbd server on the
532secondary host, and the nbd server will listen on this port.
533
534=item B<colo-export> : Nbd server's disk export name of the secondary host.
535
536=item B<active-disk> : Secondary's guest write will be buffered to this disk,
537and it's used by the secondary.
538
539=item B<hidden-disk> : Primary's modified contents will be buffered in this
540disk, and it's used by the secondary.
541
542=back
543
544COLO network configuration looks like:
545
546  vif = [ '...,forwarddev=xxx,...']
547
548The supported options are:
549
550=over 4
551
552=item B<forwarddev> : Forward devices for the primary and the secondary, they
553are directly connected.
554
555
556=back
557
558B<OPTIONS>
559
560=over 4
561
562=item B<-i> I<MS>
563
564Checkpoint domain memory every MS milliseconds (default 200ms).
565
566=item B<-u>
567
568Disable memory checkpoint compression.
569
570=item B<-s> I<sshcommand>
571
572Use <sshcommand> instead of ssh.  String will be passed to sh.
573If empty, run <host> instead of ssh <host> xl migrate-receive -r [-e].
574
575=item B<-e>
576
577On the new <host>, do not wait in the background for the death of the domain.
578See the corresponding option of the I<create> subcommand.
579
580=item B<-N> I<netbufscript>
581
582Use <netbufscript> to setup network buffering instead of the
583default script (@XEN_SCRIPT_DIR@/remus-netbuf-setup).
584
585=item B<-F>
586
587Run Remus in unsafe mode. Use this option with caution as failover may
588not work as intended.
589
590=item B<-b>
591
592Replicate memory checkpoints to /dev/null (blackhole).
593Generally useful for debugging. Requires enabling unsafe mode.
594
595=item B<-n>
596
597Disable network output buffering. Requires enabling unsafe mode.
598
599=item B<-d>
600
601Disable disk replication. Requires enabling unsafe mode.
602
603=item B<-c>
604
605Enable COLO HA. This conflicts with B<-i> and B<-b>, and memory
606checkpoint compression must be disabled.
607
608=item B<-p>
609
610Use userspace COLO Proxy. This option must be used in conjunction
611with B<-c>.
612
613=back
614
615=item B<pause> I<domain-id>
616
617Pause a domain.  When in a paused state the domain will still consume
618allocated resources (such as memory), but will not be eligible for
619scheduling by the Xen hypervisor.
620
621=item B<reboot> [I<OPTIONS>] I<domain-id>
622
623Reboot a domain.  This acts just as if the domain had the B<reboot>
624command run from the console.  The command returns as soon as it has
625executed the reboot action, which may be significantly earlier than when the
626domain actually reboots.
627
628For HVM domains this requires PV drivers to be installed in your guest
629OS. If PV drivers are not present but you have configured the guest OS
630to behave appropriately you may be able to use the I<-F> option to
631trigger a reset button press.
632
633The behavior of what happens to a domain when it reboots is set by the
634B<on_reboot> parameter of the domain configuration file when the
635domain was created.
636
637B<OPTIONS>
638
639=over 4
640
641=item B<-F>
642
643If the guest does not support PV reboot control then fallback to
644sending an ACPI power event (equivalent to the I<reset> option to
645I<trigger>).
646
647You should ensure that the guest is configured to behave as expected
648in response to this event.
649
650=back
651
652=item B<restore> [I<OPTIONS>] [I<configfile>] I<checkpointfile>
653
654Build a domain from an B<xl save> state file.  See B<save> for more info.
655
656B<OPTIONS>
657
658=over 4
659
660=item B<-p>
661
662Do not unpause the domain after restoring it.
663
664=item B<-e>
665
666Do not wait in the background for the death of the domain on the new host.
667See the corresponding option of the I<create> subcommand.
668
669=item B<-d>
670
671Enable debug messages.
672
673=item B<-V>, B<--vncviewer>
674
675Attach to the domain's VNC server, forking a vncviewer process.
676
677=item B<-A>, B<--vncviewer-autopass>
678
679Pass the VNC password to vncviewer via stdin.
680
681
682
683=back
684
685=item B<save> [I<OPTIONS>] I<domain-id> I<checkpointfile> [I<configfile>]
686
687Saves a running domain to a state file so that it can be restored
688later.  Once saved, the domain will no longer be running on the
689system, unless the -c or -p options are used.
690B<xl restore> restores from this checkpoint file.
691Passing a config file argument allows the user to manually select the VM config
692file used to create the domain.
693
694=over 4
695
696=item B<-c>
697
698Leave the domain running after creating the snapshot.
699
700=item B<-p>
701
702Leave the domain paused after creating the snapshot.
703
704=item B<-D>
705
706Preserve the B<domain-id> in the domain coniguration that is embedded in
707the state file such that it will be identical when the domain is restored,
708unless that configuration is overridden. (See the B<restore> operation
709above).
710
711=back
712
713=item B<sharing> [I<domain-id>]
714
715Display the number of shared pages for a specified domain. If no domain is
716specified it displays information about all domains.
717
718=item B<shutdown> [I<OPTIONS>] I<-a|domain-id>
719
720Gracefully shuts down a domain.  This coordinates with the domain OS
721to perform graceful shutdown, so there is no guarantee that it will
722succeed, and may take a variable length of time depending on what
723services must be shut down in the domain.
724
725For HVM domains this requires PV drivers to be installed in your guest
726OS. If PV drivers are not present but you have configured the guest OS
727to behave appropriately you may be able to use the I<-F> option to
728trigger a power button press.
729
730The command returns immediately after signaling the domain unless the
731B<-w> flag is used.
732
733The behavior of what happens to a domain when it reboots is set by the
734B<on_shutdown> parameter of the domain configuration file when the
735domain was created.
736
737B<OPTIONS>
738
739=over 4
740
741=item B<-a>, B<--all>
742
743Shutdown all guest domains.  Often used when doing a complete shutdown
744of a Xen system.
745
746=item B<-w>, B<--wait>
747
748Wait for the domain to complete shutdown before returning.  If given once,
749the wait is for domain shutdown or domain death.  If given multiple times,
750the wait is for domain death only.
751
752=item B<-F>
753
754If the guest does not support PV shutdown control then fallback to
755sending an ACPI power event (equivalent to the I<power> option to
756I<trigger>).
757
758You should ensure that the guest is configured to behave as expected
759in response to this event.
760
761=back
762
763=item B<sysrq> I<domain-id> I<letter>
764
765Send a <Magic System Request> to the domain, each type of request is
766represented by a different letter.
767It can be used to send SysRq requests to Linux guests, see sysrq.txt in
768your Linux Kernel sources for more information.
769It requires PV drivers to be installed in your guest OS.
770
771=item B<trigger> I<domain-id> I<nmi|reset|init|power|sleep|s3resume> [I<VCPU>]
772
773Send a trigger to a domain, where the trigger can be: nmi, reset, init, power
774or sleep.  Optionally a specific vcpu number can be passed as an argument.
775This command is only available for HVM domains.
776
777=item B<unpause> I<domain-id>
778
779Moves a domain out of the paused state.  This will allow a previously
780paused domain to now be eligible for scheduling by the Xen hypervisor.
781
782=item B<vcpu-set> I<domain-id> I<vcpu-count>
783
784Enables the I<vcpu-count> virtual CPUs for the domain in question.
785Like mem-set, this command can only allocate up to the maximum virtual
786CPU count configured at boot for the domain.
787
788If the I<vcpu-count> is smaller than the current number of active
789VCPUs, the highest number VCPUs will be hotplug removed.  This may be
790important for pinning purposes.
791
792Attempting to set the VCPUs to a number larger than the initially
793configured VCPU count is an error.  Trying to set VCPUs to < 1 will be
794quietly ignored.
795
796Some guests may need to actually bring the newly added CPU online
797after B<vcpu-set>, go to B<SEE ALSO> section for information.
798
799=item B<vcpu-list> [I<domain-id>]
800
801Lists VCPU information for a specific domain.  If no domain is
802specified, VCPU information for all domains will be provided.
803
804=item B<vcpu-pin> [I<-f|--force>] I<domain-id> I<vcpu> I<cpus hard> I<cpus soft>
805
806Set hard and soft affinity for a I<vcpu> of <domain-id>. Normally VCPUs
807can float between available CPUs whenever Xen deems a different run state
808is appropriate.
809
810Hard affinity can be used to restrict this, by ensuring certain VCPUs
811can only run on certain physical CPUs. Soft affinity specifies a I<preferred>
812set of CPUs. Soft affinity needs special support in the scheduler, which is
813only provided in credit1.
814
815The keyword B<all> can be used to apply the hard and soft affinity masks to
816all the VCPUs in the domain. The symbol '-' can be used to leave either
817hard or soft affinity alone.
818
819For example:
820
821 xl vcpu-pin 0 3 - 6-9
822
823will set soft affinity for vCPU 3 of domain 0 to pCPUs 6,7,8 and 9,
824leaving its hard affinity untouched. On the other hand:
825
826 xl vcpu-pin 0 3 3,4 6-9
827
828will set both hard and soft affinity, the former to pCPUs 3 and 4, the
829latter to pCPUs 6,7,8, and 9.
830
831Specifying I<-f> or I<--force> will remove a temporary pinning done by the
832operating system (normally this should be done by the operating system).
833In case a temporary pinning is active for a vcpu the affinity of this vcpu
834can't be changed without this option.
835
836=item B<vm-list>
837
838Prints information about guests. This list excludes information about
839service or auxiliary domains such as dom0 and stubdoms.
840
841B<EXAMPLE>
842
843An example format for the list is as follows:
844
845    UUID                                  ID    name
846    59e1cf6c-6ab9-4879-90e7-adc8d1c63bf5  2    win
847    50bc8f75-81d0-4d53-b2e6-95cb44e2682e  3    linux
848
849=item B<vncviewer> [I<OPTIONS>] I<domain-id>
850
851Attach to the domain's VNC server, forking a vncviewer process.
852
853B<OPTIONS>
854
855=over 4
856
857=item I<--autopass>
858
859Pass the VNC password to vncviewer via stdin.
860
861=back
862
863=back
864
865=head1 XEN HOST SUBCOMMANDS
866
867=over 4
868
869=item B<debug-keys> I<keys>
870
871Send debug I<keys> to Xen. It is the same as pressing the Xen
872"conswitch" (Ctrl-A by default) three times and then pressing "keys".
873
874=item B<set-parameters> I<params>
875
876Set hypervisor parameters as specified in I<params>. This allows for some
877boot parameters of the hypervisor to be modified in the running systems.
878
879=item B<dmesg> [I<OPTIONS>]
880
881Reads the Xen message buffer, similar to dmesg on a Linux system.  The
882buffer contains informational, warning, and error messages created
883during Xen's boot process.  If you are having problems with Xen, this
884is one of the first places to look as part of problem determination.
885
886B<OPTIONS>
887
888=over 4
889
890=item B<-c>, B<--clear>
891
892Clears Xen's message buffer.
893
894=back
895
896=item B<info> [I<OPTIONS>]
897
898Print information about the Xen host in I<name : value> format.  When
899reporting a Xen bug, please provide this information as part of the
900bug report. See I<https://wiki.xenproject.org/wiki/Reporting_Bugs_against_Xen_Project> on how to
901report Xen bugs.
902
903Sample output looks as follows:
904
905 host                   : scarlett
906 release                : 3.1.0-rc4+
907 version                : #1001 SMP Wed Oct 19 11:09:54 UTC 2011
908 machine                : x86_64
909 nr_cpus                : 4
910 nr_nodes               : 1
911 cores_per_socket       : 4
912 threads_per_core       : 1
913 cpu_mhz                : 2266
914 hw_caps                : bfebfbff:28100800:00000000:00003b40:009ce3bd:00000000:00000001:00000000
915 virt_caps              : hvm hvm_directio
916 total_memory           : 6141
917 free_memory            : 4274
918 free_cpus              : 0
919 outstanding_claims     : 0
920 xen_major              : 4
921 xen_minor              : 2
922 xen_extra              : -unstable
923 xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
924 xen_scheduler          : credit
925 xen_pagesize           : 4096
926 platform_params        : virt_start=0xffff800000000000
927 xen_changeset          : Wed Nov 02 17:09:09 2011 +0000 24066:54a5e994a241
928 xen_commandline        : com1=115200,8n1 guest_loglvl=all dom0_mem=750M console=com1
929 cc_compiler            : gcc version 4.4.5 (Debian 4.4.5-8)
930 cc_compile_by          : sstabellini
931 cc_compile_domain      : uk.xensource.com
932 cc_compile_date        : Tue Nov  8 12:03:05 UTC 2011
933 xend_config_format     : 4
934
935
936B<FIELDS>
937
938Not all fields will be explained here, but some of the less obvious
939ones deserve explanation:
940
941=over 4
942
943=item B<hw_caps>
944
945A vector showing what hardware capabilities are supported by your
946processor.  This is equivalent to, though more cryptic, the flags
947field in /proc/cpuinfo on a normal Linux machine: they both derive from
948the feature bits returned by the cpuid command on x86 platforms.
949
950=item B<free_memory>
951
952Available memory (in MB) not allocated to Xen, or any other domains, or
953claimed for domains.
954
955=item B<outstanding_claims>
956
957When a claim call is done (see L<xl.conf(5)>) a reservation for a specific
958amount of pages is set and also a global value is incremented. This
959global value (outstanding_claims) is then reduced as the domain's memory
960is populated and eventually reaches zero. Most of the time the value will
961be zero, but if you are launching multiple guests, and B<claim_mode> is
962enabled, this value can increase/decrease. Note that the value also
963affects the B<free_memory> - as it will reflect the free memory
964in the hypervisor minus the outstanding pages claimed for guests.
965See xl I<info> B<claims> parameter for detailed listing.
966
967=item B<xen_caps>
968
969The Xen version and architecture.  Architecture values can be one of:
970x86_32, x86_32p (i.e. PAE enabled), x86_64, ia64.
971
972=item B<xen_changeset>
973
974The Xen mercurial changeset id.  Very useful for determining exactly
975what version of code your Xen system was built from.
976
977=back
978
979B<OPTIONS>
980
981=over 4
982
983=item B<-n>, B<--numa>
984
985List host NUMA topology information
986
987=back
988
989=item B<top>
990
991Executes the B<xentop(1)> command, which provides real time monitoring of
992domains.  Xentop has a curses interface, and is reasonably self explanatory.
993
994=item B<uptime>
995
996Prints the current uptime of the domains running.
997
998=item B<claims>
999
1000Prints information about outstanding claims by the guests. This provides
1001the outstanding claims and currently populated memory count for the guests.
1002These values added up reflect the global outstanding claim value, which
1003is provided via the I<info> argument, B<outstanding_claims> value.
1004The B<Mem> column has the cumulative value of outstanding claims and
1005the total amount of memory that has been right now allocated to the guest.
1006
1007B<EXAMPLE>
1008
1009An example format for the list is as follows:
1010
1011 Name                                        ID   Mem VCPUs      State   Time(s)  Claimed
1012 Domain-0                                     0  2047     4     r-----      19.7     0
1013 OL5                                          2  2048     1     --p---       0.0   847
1014 OL6                                          3  1024     4     r-----       5.9     0
1015 Windows_XP                                   4  2047     1     --p---       0.0  1989
1016
1017In which it can be seen that the OL5 guest still has 847MB of claimed
1018memory (out of the total 2048MB where 1191MB has been allocated to
1019the guest).
1020
1021=back
1022
1023=head1 SCHEDULER SUBCOMMANDS
1024
1025Xen ships with a number of domain schedulers, which can be set at boot
1026time with the B<sched=> parameter on the Xen command line.  By
1027default B<credit> is used for scheduling.
1028
1029=over 4
1030
1031=item B<sched-credit> [I<OPTIONS>]
1032
1033Set or get credit (aka credit1) scheduler parameters.  The credit scheduler is
1034a proportional fair share CPU scheduler built from the ground up to be
1035work conserving on SMP hosts.
1036
1037Each domain (including Domain0) is assigned a weight and a cap.
1038
1039B<OPTIONS>
1040
1041=over 4
1042
1043=item B<-d DOMAIN>, B<--domain=DOMAIN>
1044
1045Specify domain for which scheduler parameters are to be modified or retrieved.
1046Mandatory for modifying scheduler parameters.
1047
1048=item B<-w WEIGHT>, B<--weight=WEIGHT>
1049
1050A domain with a weight of 512 will get twice as much CPU as a domain
1051with a weight of 256 on a contended host. Legal weights range from 1
1052to 65535 and the default is 256.
1053
1054=item B<-c CAP>, B<--cap=CAP>
1055
1056The cap optionally fixes the maximum amount of CPU a domain will be
1057able to consume, even if the host system has idle CPU cycles. The cap
1058is expressed in percentage of one physical CPU: 100 is 1 physical CPU,
105950 is half a CPU, 400 is 4 CPUs, etc. The default, 0, means there is
1060no upper cap.
1061
1062NB: Many systems have features that will scale down the computing
1063power of a cpu that is not 100% utilized.  This can be in the
1064operating system, but can also sometimes be below the operating system
1065in the BIOS.  If you set a cap such that individual cores are running
1066at less than 100%, this may have an impact on the performance of your
1067workload over and above the impact of the cap. For example, if your
1068processor runs at 2GHz, and you cap a vm at 50%, the power management
1069system may also reduce the clock speed to 1GHz; the effect will be
1070that your VM gets 25% of the available power (50% of 1GHz) rather than
107150% (50% of 2GHz).  If you are not getting the performance you expect,
1072look at performance and cpufreq options in your operating system and
1073your BIOS.
1074
1075=item B<-p CPUPOOL>, B<--cpupool=CPUPOOL>
1076
1077Restrict output to domains in the specified cpupool.
1078
1079=item B<-s>, B<--schedparam>
1080
1081Specify to list or set pool-wide scheduler parameters.
1082
1083=item B<-t TSLICE>, B<--tslice_ms=TSLICE>
1084
1085Timeslice tells the scheduler how long to allow VMs to run before
1086pre-empting.  The default is 30ms.  Valid ranges are 1ms to 1000ms.
1087The length of the timeslice (in ms) must be higher than the length of
1088the ratelimit (see below).
1089
1090=item B<-r RLIMIT>, B<--ratelimit_us=RLIMIT>
1091
1092Ratelimit attempts to limit the number of schedules per second.  It
1093sets a minimum amount of time (in microseconds) a VM must run before
1094we will allow a higher-priority VM to pre-empt it.  The default value
1095is 1000 microseconds (1ms).  Valid range is 100 to 500000 (500ms).
1096The ratelimit length must be lower than the timeslice length.
1097
1098=item B<-m DELAY>, B<--migration_delay_us=DELAY>
1099
1100Migration delay specifies for how long a vCPU, after it stopped running should
1101be considered "cache-hot". Basically, if less than DELAY us passed since when
1102the vCPU was executing on a CPU, it is likely that most of the vCPU's working
1103set is still in the CPU's cache, and therefore the vCPU is not migrated.
1104
1105Default is 0. Maximum is 100 ms. This can be effective at preventing vCPUs
1106to bounce among CPUs too quickly, but, at the same time, the scheduler stops
1107being fully work-conserving.
1108
1109=back
1110
1111B<COMBINATION>
1112
1113The following is the effect of combining the above options:
1114
1115=over 4
1116
1117=item B<E<lt>nothingE<gt>>             : List all domain params and sched params from all pools
1118
1119=item B<-d [domid]>            : List domain params for domain [domid]
1120
1121=item B<-d [domid] [params]>   : Set domain params for domain [domid]
1122
1123=item B<-p [pool]>             : list all domains and sched params for [pool]
1124
1125=item B<-s>                    : List sched params for poolid 0
1126
1127=item B<-s [params]>           : Set sched params for poolid 0
1128
1129=item B<-p [pool] -s>          : List sched params for [pool]
1130
1131=item B<-p [pool] -s [params]> : Set sched params for [pool]
1132
1133=item B<-p [pool] -d>...       : Illegal
1134
1135=back
1136
1137=item B<sched-credit2> [I<OPTIONS>]
1138
1139Set or get credit2 scheduler parameters.  The credit2 scheduler is a
1140proportional fair share CPU scheduler built from the ground up to be
1141work conserving on SMP hosts.
1142
1143Each domain (including Domain0) is assigned a weight.
1144
1145B<OPTIONS>
1146
1147=over 4
1148
1149=item B<-d DOMAIN>, B<--domain=DOMAIN>
1150
1151Specify domain for which scheduler parameters are to be modified or retrieved.
1152Mandatory for modifying scheduler parameters.
1153
1154=item B<-w WEIGHT>, B<--weight=WEIGHT>
1155
1156A domain with a weight of 512 will get twice as much CPU as a domain
1157with a weight of 256 on a contended host. Legal weights range from 1
1158to 65535 and the default is 256.
1159
1160=item B<-p CPUPOOL>, B<--cpupool=CPUPOOL>
1161
1162Restrict output to domains in the specified cpupool.
1163
1164=item B<-s>, B<--schedparam>
1165
1166Specify to list or set pool-wide scheduler parameters.
1167
1168=item B<-r RLIMIT>, B<--ratelimit_us=RLIMIT>
1169
1170Attempts to limit the rate of context switching. It is basically the same
1171as B<--ratelimit_us> in B<sched-credit>
1172
1173=back
1174
1175=item B<sched-rtds> [I<OPTIONS>]
1176
1177Set or get rtds (Real Time Deferrable Server) scheduler parameters.
1178This rt scheduler applies Preemptive Global Earliest Deadline First
1179real-time scheduling algorithm to schedule VCPUs in the system.
1180Each VCPU has a dedicated period, budget and extratime.
1181While scheduled, a VCPU burns its budget.
1182A VCPU has its budget replenished at the beginning of each period;
1183Unused budget is discarded at the end of each period.
1184A VCPU with extratime set gets extra time from the unreserved system resource.
1185
1186B<OPTIONS>
1187
1188=over 4
1189
1190=item B<-d DOMAIN>, B<--domain=DOMAIN>
1191
1192Specify domain for which scheduler parameters are to be modified or retrieved.
1193Mandatory for modifying scheduler parameters.
1194
1195=item B<-v VCPUID/all>, B<--vcpuid=VCPUID/all>
1196
1197Specify vcpu for which scheduler parameters are to be modified or retrieved.
1198
1199=item B<-p PERIOD>, B<--period=PERIOD>
1200
1201Period of time, in microseconds, over which to replenish the budget.
1202
1203=item B<-b BUDGET>, B<--budget=BUDGET>
1204
1205Amount of time, in microseconds, that the VCPU will be allowed
1206to run every period.
1207
1208=item B<-e Extratime>, B<--extratime=Extratime>
1209
1210Binary flag to decide if the VCPU will be allowed to get extra time from
1211the unreserved system resource.
1212
1213=item B<-c CPUPOOL>, B<--cpupool=CPUPOOL>
1214
1215Restrict output to domains in the specified cpupool.
1216
1217=back
1218
1219B<EXAMPLE>
1220
1221=over 4
1222
12231) Use B<-v all> to see the budget and period of all the VCPUs of
1224all the domains:
1225
1226    xl sched-rtds -v all
1227    Cpupool Pool-0: sched=RTDS
1228    Name                        ID VCPU    Period    Budget  Extratime
1229    Domain-0                     0    0     10000      4000        yes
1230    vm1                          2    0       300       150        yes
1231    vm1                          2    1       400       200        yes
1232    vm1                          2    2     10000      4000        yes
1233    vm1                          2    3      1000       500        yes
1234    vm2                          4    0     10000      4000        yes
1235    vm2                          4    1     10000      4000        yes
1236
1237Without any arguments, it will output the default scheduling
1238parameters for each domain:
1239
1240    xl sched-rtds
1241    Cpupool Pool-0: sched=RTDS
1242    Name                        ID    Period    Budget  Extratime
1243    Domain-0                     0     10000      4000        yes
1244    vm1                          2     10000      4000        yes
1245    vm2                          4     10000      4000        yes
1246
1247
12482) Use, for instance, B<-d vm1, -v all> to see the budget and
1249period of all VCPUs of a specific domain (B<vm1>):
1250
1251    xl sched-rtds -d vm1 -v all
1252    Name                        ID VCPU    Period    Budget  Extratime
1253    vm1                          2    0       300       150        yes
1254    vm1                          2    1       400       200        yes
1255    vm1                          2    2     10000      4000        yes
1256    vm1                          2    3      1000       500        yes
1257
1258To see the parameters of a subset of the VCPUs of a domain, use:
1259
1260    xl sched-rtds -d vm1 -v 0 -v 3
1261    Name                        ID VCPU    Period    Budget  Extratime
1262    vm1                          2    0       300       150        yes
1263    vm1                          2    3      1000       500        yes
1264
1265If no B<-v> is specified, the default scheduling parameters for the
1266domain are shown:
1267
1268    xl sched-rtds -d vm1
1269    Name                        ID    Period    Budget  Extratime
1270    vm1                          2     10000      4000        yes
1271
1272
12733) Users can set the budget and period of multiple VCPUs of a
1274specific domain with only one command,
1275e.g., "xl sched-rtds -d vm1 -v 0 -p 100 -b 50 -e 1 -v 3 -p 300 -b 150 -e 0".
1276
1277To change the parameters of all the VCPUs of a domain, use B<-v all>,
1278e.g., "xl sched-rtds -d vm1 -v all -p 500 -b 250 -e 1".
1279
1280=back
1281
1282=back
1283
1284=head1 CPUPOOLS COMMANDS
1285
1286Xen can group the physical cpus of a server in cpu-pools. Each physical CPU is
1287assigned at most to one cpu-pool. Domains are each restricted to a single
1288cpu-pool. Scheduling does not cross cpu-pool boundaries, so each cpu-pool has
1289its own scheduler.
1290Physical cpus and domains can be moved from one cpu-pool to another only by an
1291explicit command.
1292Cpu-pools can be specified either by name or by id.
1293
1294=over 4
1295
1296=item B<cpupool-create> [I<OPTIONS>] [I<configfile>] [I<variable=value> ...]
1297
1298Create a cpu pool based an config from a I<configfile> or command-line
1299parameters.  Variable settings from the I<configfile> may be altered
1300by specifying new or additional assignments on the command line.
1301
1302See the L<xlcpupool.cfg(5)> manpage for more information.
1303
1304B<OPTIONS>
1305
1306=over 4
1307
1308=item B<-f=FILE>, B<--defconfig=FILE>
1309
1310Use the given configuration file.
1311
1312=back
1313
1314=item B<cpupool-list> [I<OPTIONS>] [I<cpu-pool>]
1315
1316List CPU pools on the host.
1317
1318B<OPTIONS>
1319
1320=over 4
1321
1322=item B<-c>, B<--cpus>
1323
1324If this option is specified, B<xl> prints a list of CPUs used by I<cpu-pool>.
1325
1326=back
1327
1328=item B<cpupool-destroy> I<cpu-pool>
1329
1330Deactivates a cpu pool.
1331This is possible only if no domain is active in the cpu-pool.
1332
1333=item B<cpupool-rename> I<cpu-pool> <newname>
1334
1335Renames a cpu-pool to I<newname>.
1336
1337=item B<cpupool-cpu-add> I<cpu-pool> I<cpus|node:nodes>
1338
1339Adds one or more CPUs or NUMA nodes to I<cpu-pool>. CPUs and NUMA
1340nodes can be specified as single CPU/node IDs or as ranges.
1341
1342For example:
1343
1344 (a) xl cpupool-cpu-add mypool 4
1345 (b) xl cpupool-cpu-add mypool 1,5,10-16,^13
1346 (c) xl cpupool-cpu-add mypool node:0,nodes:2-3,^10-12,8
1347
1348means adding CPU 4 to mypool, in (a); adding CPUs 1,5,10,11,12,14,15
1349and 16, in (b); and adding all the CPUs of NUMA nodes 0, 2 and 3,
1350plus CPU 8, but keeping out CPUs 10,11,12, in (c).
1351
1352All the specified CPUs that can be added to the cpupool will be added
1353to it. If some CPU can't (e.g., because they're already part of another
1354cpupool), an error is reported about each one of them.
1355
1356=item B<cpupool-cpu-remove> I<cpu-pool> I<cpus|node:nodes>
1357
1358Removes one or more CPUs or NUMA nodes from I<cpu-pool>. CPUs and NUMA
1359nodes can be specified as single CPU/node IDs or as ranges, using the
1360exact same syntax as in B<cpupool-cpu-add> above.
1361
1362=item B<cpupool-migrate> I<domain-id> I<cpu-pool>
1363
1364Moves a domain specified by domain-id or domain-name into a cpu-pool.
1365Domain-0 can't be moved to another cpu-pool.
1366
1367=item B<cpupool-numa-split>
1368
1369Splits up the machine into one cpu-pool per numa node.
1370
1371=back
1372
1373=head1 VIRTUAL DEVICE COMMANDS
1374
1375Most virtual devices can be added and removed while guests are
1376running, assuming that the necessary support exists in the guest OS.  The
1377effect to the guest OS is much the same as any hotplug event.
1378
1379=head2 BLOCK DEVICES
1380
1381=over 4
1382
1383=item B<block-attach> I<domain-id> I<disc-spec-component(s)> ...
1384
1385Create a new virtual block device and attach it to the specified domain.
1386A disc specification is in the same format used for the B<disk> variable in
1387the domain config file. See L<xl-disk-configuration(5)>. This will trigger a
1388hotplug event for the guest.
1389
1390Note that only PV block devices are supported by block-attach.
1391Requests to attach emulated devices (eg, vdev=hdc) will result in only
1392the PV view being available to the guest.
1393
1394=item B<block-detach> [I<OPTIONS>] I<domain-id> I<devid>
1395
1396Detach a domain's virtual block device. I<devid> may be the symbolic
1397name or the numeric device id given to the device by domain 0.  You
1398will need to run B<xl block-list> to determine that number.
1399
1400Detaching the device requires the cooperation of the domain.  If the
1401domain fails to release the device (perhaps because the domain is hung
1402or is still using the device), the detach will fail.
1403
1404B<OPTIONS>
1405
1406=over 4
1407
1408=item B<--force>
1409
1410If this parameter is specified the device will be forcefully detached, which
1411may cause IO errors in the domain and possibly a guest crash
1412
1413=back
1414
1415
1416
1417=item B<block-list> I<domain-id>
1418
1419List virtual block devices for a domain.
1420
1421=item B<cd-insert> I<domain-id> I<virtualdevice> I<target>
1422
1423Insert a cdrom into a guest domain's existing virtual cd drive. The
1424virtual drive must already exist but can be empty. How the device should be
1425presented to the guest domain is specified by the I<virtualdevice> parameter;
1426for example "hdc". Parameter I<target> is the target path in the backend domain
1427(usually domain 0) to be exported; can be a block device or a file etc.
1428See B<target> in L<xl-disk-configuration(5)>.
1429
1430Only works with HVM domains.
1431
1432
1433=item B<cd-eject> I<domain-id> I<virtualdevice>
1434
1435Eject a cdrom from a guest domain's virtual cd drive, specified by
1436I<virtualdevice>. Only works with HVM domains.
1437
1438=back
1439
1440=head2 NETWORK DEVICES
1441
1442=over 4
1443
1444=item B<network-attach> I<domain-id> I<network-device>
1445
1446Creates a new network device in the domain specified by I<domain-id>.
1447I<network-device> describes the device to attach, using the same format as the
1448B<vif> string in the domain config file. See L<xl.cfg(5)> and
1449L<xl-network-configuration(5)>
1450for more information.
1451
1452Note that only attaching PV network interfaces is supported.
1453
1454=item B<network-detach> I<domain-id> I<devid|mac>
1455
1456Removes the network device from the domain specified by I<domain-id>.
1457I<devid> is the virtual interface device number within the domain
1458(i.e. the 3 in vif22.3). Alternatively, the I<mac> address can be used to
1459select the virtual interface to detach.
1460
1461=item B<network-list> I<domain-id>
1462
1463List virtual network interfaces for a domain.
1464
1465=back
1466
1467=head2 CHANNEL DEVICES
1468
1469=over 4
1470
1471=item B<channel-list> I<domain-id>
1472
1473List virtual channel interfaces for a domain.
1474
1475=back
1476
1477=head2 VIRTUAL TRUSTED PLATFORM MODULE (vTPM) DEVICES
1478
1479=over 4
1480
1481=item B<vtpm-attach> I<domain-id> I<vtpm-device>
1482
1483Creates a new vtpm (virtual Trusted Platform Module) device in the domain
1484specified by I<domain-id>. I<vtpm-device> describes the device to attach,
1485using the same format as the B<vtpm> string in the domain config file.
1486See L<xl.cfg(5)> for more information.
1487
1488=item B<vtpm-detach> I<domain-id> I<devid|uuid>
1489
1490Removes the vtpm device from the domain specified by I<domain-id>.
1491I<devid> is the numeric device id given to the virtual Trusted
1492Platform Module device. You will need to run B<xl vtpm-list> to determine that
1493number. Alternatively, the I<uuid> of the vtpm can be used to
1494select the virtual device to detach.
1495
1496=item B<vtpm-list> I<domain-id>
1497
1498List virtual Trusted Platform Modules for a domain.
1499
1500=back
1501
1502=head2 VDISPL DEVICES
1503
1504=over 4
1505
1506=item B<vdispl-attach> I<domain-id> I<vdispl-device>
1507
1508Creates a new vdispl device in the domain specified by I<domain-id>.
1509I<vdispl-device> describes the device to attach, using the same format as the
1510B<vdispl> string in the domain config file. See L<xl.cfg(5)> for
1511more information.
1512
1513B<NOTES>
1514
1515=over 4
1516
1517As in I<vdispl-device> string semicolon is used then put quotes or escaping
1518when using from the shell.
1519
1520B<EXAMPLE>
1521
1522=over 4
1523
1524xl vdispl-attach DomU connectors='id0:1920x1080;id1:800x600;id2:640x480'
1525
1526or
1527
1528xl vdispl-attach DomU connectors=id0:1920x1080\;id1:800x600\;id2:640x480
1529
1530=back
1531
1532=back
1533
1534=item B<vdispl-detach> I<domain-id> I<dev-id>
1535
1536Removes the vdispl device specified by I<dev-id> from the domain specified by I<domain-id>.
1537
1538=item B<vdispl-list> I<domain-id>
1539
1540List virtual displays for a domain.
1541
1542=back
1543
1544=head2 VSND DEVICES
1545
1546=over 4
1547
1548=item B<vsnd-attach> I<domain-id> I<vsnd-item> I<vsnd-item> ...
1549
1550Creates a new vsnd device in the domain specified by I<domain-id>.
1551I<vsnd-item>'s describe the vsnd device to attach, using the same format as the
1552B<VSND_ITEM_SPEC> string in the domain config file. See L<xl.cfg(5)> for
1553more information.
1554
1555B<EXAMPLE>
1556
1557=over 4
1558
1559xl vsnd-attach DomU 'CARD, short-name=Main, sample-formats=s16_le;s8;u32_be'
1560'PCM, name=Main' 'STREAM, id=0, type=p' 'STREAM, id=1, type=c, channels-max=2'
1561
1562=back
1563
1564=item B<vsnd-detach> I<domain-id> I<dev-id>
1565
1566Removes the vsnd device specified by I<dev-id> from the domain specified by I<domain-id>.
1567
1568=item B<vsnd-list> I<domain-id>
1569
1570List vsnd devices for a domain.
1571
1572=back
1573
1574=head2 KEYBOARD DEVICES
1575
1576=over 4
1577
1578=item B<vkb-attach> I<domain-id> I<vkb-device>
1579
1580Creates a new keyboard device in the domain specified by I<domain-id>.
1581I<vkb-device> describes the device to attach, using the same format as the
1582B<VKB_SPEC_STRING> string in the domain config file. See L<xl.cfg(5)>
1583for more information.
1584
1585=item B<vkb-detach> I<domain-id> I<devid>
1586
1587Removes the keyboard device from the domain specified by I<domain-id>.
1588I<devid> is the virtual interface device number within the domain
1589
1590=item B<vkb-list> I<domain-id>
1591
1592List virtual network interfaces for a domain.
1593
1594=back
1595
1596=head1 PCI PASS-THROUGH
1597
1598=over 4
1599
1600=item B<pci-assignable-list> [I<-n>]
1601
1602List all the B<BDF> of assignable PCI devices. See
1603L<xl-pci-configuration(5)> for more information. If the -n option is
1604specified then any name supplied when the device was made assignable
1605will also be displayed.
1606
1607These are devices in the system which are configured to be
1608available for passthrough and are bound to a suitable PCI
1609backend driver in domain 0 rather than a real driver.
1610
1611=item B<pci-assignable-add> [I<-n NAME>] I<BDF>
1612
1613Make the device at B<BDF> assignable to guests. See
1614L<xl-pci-configuration(5)> for more information. If the -n option is
1615supplied then the assignable device entry will the named with the
1616given B<NAME>.
1617
1618This will bind the device to the pciback driver and assign it to the
1619"quarantine domain".  If it is already bound to a driver, it will
1620first be unbound, and the original driver stored so that it can be
1621re-bound to the same driver later if desired.  If the device is
1622already bound, it will assign it to the quarantine domain and return
1623success.
1624
1625CAUTION: This will make the device unusable by Domain 0 until it is
1626returned with pci-assignable-remove.  Care should therefore be taken
1627not to do this on a device critical to domain 0's operation, such as
1628storage controllers, network interfaces, or GPUs that are currently
1629being used.
1630
1631=item B<pci-assignable-remove> [I<-r>] I<BDF>|I<NAME>
1632
1633Make a device non-assignable to guests. The device may be identified
1634either by its B<BDF> or the B<NAME> supplied when the device was made
1635assignable. See L<xl-pci-configuration(5)> for more information.
1636
1637This will at least unbind the device from pciback, and
1638re-assign it from the "quarantine domain" back to domain 0.  If the -r
1639option is specified, it will also attempt to re-bind the device to its
1640original driver, making it usable by Domain 0 again.  If the device is
1641not bound to pciback, it will return success.
1642
1643Note that this functionality will work even for devices which were not
1644made assignable by B<pci-assignable-add>.  This can be used to allow
1645dom0 to access devices which were automatically quarantined by Xen
1646after domain destruction as a result of Xen's B<iommu=quarantine>
1647command-line default.
1648
1649As always, this should only be done if you trust the guest, or are
1650confident that the particular device you're re-assigning to dom0 will
1651cancel all in-flight DMA on FLR.
1652
1653=item B<pci-attach> I<domain-id> I<PCI_SPEC_STRING>
1654
1655Hot-plug a new pass-through pci device to the specified domain. See
1656L<xl-pci-configuration(5)> for more information.
1657
1658=item B<pci-detach> [I<OPTIONS>] I<domain-id> I<PCI_SPEC_STRING>
1659
1660Hot-unplug a pci device that was previously passed through to a domain. See
1661L<xl-pci-configuration(5)> for more information.
1662
1663B<OPTIONS>
1664
1665=over 4
1666
1667=item B<-f>
1668
1669If this parameter is specified, B<xl> is going to forcefully remove the device
1670even without guest domain's collaboration.
1671
1672=back
1673
1674=item B<pci-list> I<domain-id>
1675
1676List the B<BDF> of pci devices passed through to a domain.
1677
1678=back
1679
1680=head1 USB PASS-THROUGH
1681
1682=over 4
1683
1684=item B<usbctrl-attach> I<domain-id> I<usbctrl-device>
1685
1686Create a new USB controller in the domain specified by I<domain-id>,
1687I<usbctrl-device> describes the device to attach, using form
1688C<KEY=VALUE KEY=VALUE ...> where B<KEY=VALUE> has the same
1689meaning as the B<usbctrl> description in the domain config file.
1690See L<xl.cfg(5)> for more information.
1691
1692=item B<usbctrl-detach> I<domain-id> I<devid>
1693
1694Destroy a USB controller from the specified domain.
1695B<devid> is devid of the USB controller.
1696
1697=item B<usbdev-attach> I<domain-id> I<usbdev-device>
1698
1699Hot-plug a new pass-through USB device to the domain specified by
1700I<domain-id>, I<usbdev-device> describes the device to attach, using
1701form C<KEY=VALUE KEY=VALUE ...> where B<KEY=VALUE> has the same
1702meaning as the B<usbdev> description in the domain config file.
1703See L<xl.cfg(5)> for more information.
1704
1705=item B<usbdev-detach> I<domain-id> I<controller=devid> I<port=number>
1706
1707Hot-unplug a previously assigned USB device from a domain.
1708B<controller=devid> and B<port=number> is USB controller:port in the guest
1709domain the USB device is attached to.
1710
1711=item B<usb-list> I<domain-id>
1712
1713List pass-through usb devices for a domain.
1714
1715=back
1716
1717=head1 DEVICE-MODEL CONTROL
1718
1719=over 4
1720
1721=item B<qemu-monitor-command> I<domain-id> I<command>
1722
1723Issue a monitor command to the device model of the domain specified by
1724I<domain-id>. I<command> can be any valid command qemu understands. This
1725can be e.g. used to add non-standard devices or devices with non-standard
1726parameters to a domain. The output of the command is printed to stdout.
1727
1728B<Warning:> This qemu monitor access is provided for convenience when
1729debugging, troubleshooting, and experimenting.  Its use is not
1730supported by the Xen Project.
1731
1732Specifically, not all information displayed by the qemu monitor will
1733necessarily be accurate or complete, because in a Xen system qemu
1734does not have a complete view of the guest.
1735
1736Furthermore, modifying the guest's setup via the qemu monitor may
1737conflict with the Xen toolstack's assumptions.  Resulting problems
1738may include, but are not limited to: guest crashes; toolstack error
1739messages; inability to migrate the guest; and security
1740vulnerabilities which are not covered by the Xen Project security
1741response policy.
1742
1743B<EXAMPLE>
1744
1745Obtain information of USB devices connected as such via the device model
1746(only!) to a domain:
1747
1748 xl qemu-monitor-command vm1 'info usb'
1749  Device 0.2, Port 5, Speed 480 Mb/s, Product Mass Storage
1750
1751=back
1752
1753=head1 FLASK
1754
1755B<FLASK> is a security framework that defines a mandatory access control policy
1756providing fine-grained controls over Xen domains, allowing the policy writer
1757to define what interactions between domains, devices, and the hypervisor are
1758permitted. Some example of what you can do using XSM/FLASK:
1759 - Prevent two domains from communicating via event channels or grants
1760 - Control which domains can use device passthrough (and which devices)
1761 - Restrict or audit operations performed by privileged domains
1762 - Prevent a privileged domain from arbitrarily mapping pages from other
1763   domains.
1764
1765You can find more details on how to use FLASK and an example security
1766policy here: L<https://xenbits.xenproject.org/docs/unstable/misc/xsm-flask.txt>
1767
1768=over 4
1769
1770=item B<getenforce>
1771
1772Determine if the FLASK security module is loaded and enforcing its policy.
1773
1774=item B<setenforce> I<1|0|Enforcing|Permissive>
1775
1776Enable or disable enforcing of the FLASK access controls. The default is
1777permissive, but this can be changed to enforcing by specifying "flask=enforcing"
1778or "flask=late" on the hypervisor's command line.
1779
1780=item B<loadpolicy> I<policy-file>
1781
1782Load FLASK policy from the given policy file. The initial policy is provided to
1783the hypervisor as a multiboot module; this command allows runtime updates to the
1784policy. Loading new security policy will reset runtime changes to device labels.
1785
1786=back
1787
1788=head1 PLATFORM SHARED RESOURCE MONITORING/CONTROL
1789
1790Intel Haswell and later server platforms offer shared resource monitoring
1791and control technologies. The availability of these technologies and the
1792hardware capabilities can be shown with B<psr-hwinfo>.
1793
1794See L<https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html> for more
1795information.
1796
1797=over 4
1798
1799=item B<psr-hwinfo> [I<OPTIONS>]
1800
1801Show Platform Shared Resource (PSR) hardware information.
1802
1803B<OPTIONS>
1804
1805=over 4
1806
1807=item B<-m>, B<--cmt>
1808
1809Show Cache Monitoring Technology (CMT) hardware information.
1810
1811=item B<-a>, B<--cat>
1812
1813Show Cache Allocation Technology (CAT) hardware information.
1814
1815=back
1816
1817=back
1818
1819=head2 CACHE MONITORING TECHNOLOGY
1820
1821Intel Haswell and later server platforms offer monitoring capability in each
1822logical processor to measure specific platform shared resource metric, for
1823example, L3 cache occupancy. In the Xen implementation, the monitoring
1824granularity is domain level. To monitor a specific domain, just attach the
1825domain id with the monitoring service. When the domain doesn't need to be
1826monitored any more, detach the domain id from the monitoring service.
1827
1828Intel Broadwell and later server platforms also offer total/local memory
1829bandwidth monitoring. Xen supports per-domain monitoring for these two
1830additional monitoring types. Both memory bandwidth monitoring and L3 cache
1831occupancy monitoring share the same set of underlying monitoring service. Once
1832a domain is attached to the monitoring service, monitoring data can be shown
1833for any of these monitoring types.
1834
1835There is no cache monitoring and memory bandwidth monitoring on L2 cache so
1836far.
1837
1838=over 4
1839
1840=item B<psr-cmt-attach> I<domain-id>
1841
1842attach: Attach the platform shared resource monitoring service to a domain.
1843
1844=item B<psr-cmt-detach> I<domain-id>
1845
1846detach: Detach the platform shared resource monitoring service from a domain.
1847
1848=item B<psr-cmt-show> I<psr-monitor-type> [I<domain-id>]
1849
1850Show monitoring data for a certain domain or all domains. Current supported
1851monitor types are:
1852 - "cache-occupancy": showing the L3 cache occupancy(KB).
1853 - "total-mem-bandwidth": showing the total memory bandwidth(KB/s).
1854 - "local-mem-bandwidth": showing the local memory bandwidth(KB/s).
1855
1856=back
1857
1858=head2 CACHE ALLOCATION TECHNOLOGY
1859
1860Intel Broadwell and later server platforms offer capabilities to configure and
1861make use of the Cache Allocation Technology (CAT) mechanisms, which enable more
1862cache resources (i.e. L3/L2 cache) to be made available for high priority
1863applications. In the Xen implementation, CAT is used to control cache allocation
1864on VM basis. To enforce cache on a specific domain, just set capacity bitmasks
1865(CBM) for the domain.
1866
1867Intel Broadwell and later server platforms also offer Code/Data Prioritization
1868(CDP) for cache allocations, which support specifying code or data cache for
1869applications. CDP is used on a per VM basis in the Xen implementation. To
1870specify code or data CBM for the domain, CDP feature must be enabled and CBM
1871type options need to be specified when setting CBM, and the type options (code
1872and data) are mutually exclusive. There is no CDP support on L2 so far.
1873
1874=over 4
1875
1876=item B<psr-cat-set> [I<OPTIONS>] I<domain-id> I<cbm>
1877
1878Set cache capacity bitmasks(CBM) for a domain. For how to specify I<cbm>
1879please refer to L<https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html>.
1880
1881B<OPTIONS>
1882
1883=over 4
1884
1885=item B<-s SOCKET>, B<--socket=SOCKET>
1886
1887Specify the socket to process, otherwise all sockets are processed.
1888
1889=item B<-l LEVEL>, B<--level=LEVEL>
1890
1891Specify the cache level to process, otherwise the last level cache (L3) is
1892processed.
1893
1894=item B<-c>, B<--code>
1895
1896Set code CBM when CDP is enabled.
1897
1898=item B<-d>, B<--data>
1899
1900Set data CBM when CDP is enabled.
1901
1902=back
1903
1904=item B<psr-cat-show> [I<OPTIONS>] [I<domain-id>]
1905
1906Show CAT settings for a certain domain or all domains.
1907
1908B<OPTIONS>
1909
1910=over 4
1911
1912=item B<-l LEVEL>, B<--level=LEVEL>
1913
1914Specify the cache level to process, otherwise the last level cache (L3) is
1915processed.
1916
1917=back
1918
1919=back
1920
1921=head2 Memory Bandwidth Allocation
1922
1923Intel Skylake and later server platforms offer capabilities to configure and
1924make use of the Memory Bandwidth Allocation (MBA) mechanisms, which provides
1925OS/VMMs the ability to slow misbehaving apps/VMs by using a credit-based
1926throttling mechanism. In the Xen implementation, MBA is used to control memory
1927bandwidth on VM basis. To enforce bandwidth on a specific domain, just set
1928throttling value (THRTL) for the domain.
1929
1930=over 4
1931
1932=item B<psr-mba-set> [I<OPTIONS>] I<domain-id> I<thrtl>
1933
1934Set throttling value (THRTL) for a domain. For how to specify I<thrtl>
1935please refer to L<https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html>.
1936
1937B<OPTIONS>
1938
1939=over 4
1940
1941=item B<-s SOCKET>, B<--socket=SOCKET>
1942
1943Specify the socket to process, otherwise all sockets are processed.
1944
1945=back
1946
1947=item B<psr-mba-show> [I<domain-id>]
1948
1949Show MBA settings for a certain domain or all domains. For linear mode, it
1950shows the decimal value. For non-linear mode, it shows hexadecimal value.
1951
1952=back
1953
1954=head1 IGNORED FOR COMPATIBILITY WITH XM
1955
1956xl is mostly command-line compatible with the old xm utility used with
1957the old Python xend.  For compatibility, the following options are
1958ignored:
1959
1960=over 4
1961
1962=item B<xl migrate --live>
1963
1964=back
1965
1966=head1 ENVIRONMENT VARIABLES
1967
1968The following environment variables shall affect the execution of xl:
1969
1970=over 4
1971
1972=item LIBXL_BOOTLOADER_RESTRICT
1973
1974Equivalent to L<xl.cfg(5)> B<bootloader_restrict> option.  Provided for
1975compatibility reasons.  Having this variable set is equivalent to enabling
1976the option, even if the value is 0.
1977
1978If set takes precedence over L<xl.cfg(5)> and L<xl.conf(5)>
1979B<bootloader_restrict> options.
1980
1981=item LIBXL_BOOTLOADER_USER
1982
1983Equivalent to L<xl.cfg(5)> B<bootloader_user> option.  Provided for
1984compatibility reasons.
1985
1986If set takes precedence over L<xl.cfg(5)> B<bootloader_user> option.
1987
1988=item LIBXL_BOOTLOADER_TIMEOUT
1989
1990Timeout in seconds for bootloader execution when running in restricted mode.
1991Otherwise the build time default in LIBXL_BOOTLOADER_TIMEOUT will be used.
1992
1993If defined the value must be an unsigned integer between 0 and INT_MAX,
1994otherwise behavior is undefined.  Setting to 0 disables the timeout.
1995
1996=back
1997
1998=head1 SEE ALSO
1999
2000The following man pages:
2001
2002L<xl.cfg(5)>, L<xlcpupool.cfg(5)>, L<xentop(1)>, L<xl-disk-configuration(5)>
2003L<xl-network-configuration(5)>
2004
2005And the following documents on the xenproject.org website:
2006
2007L<https://xenbits.xenproject.org/docs/unstable/misc/xsm-flask.txt>
2008L<https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html>
2009
2010For systems that don't automatically bring the CPU online:
2011
2012L<https://wiki.xenproject.org/wiki/Paravirt_Linux_CPU_Hotplug>
2013
2014=head1 BUGS
2015
2016Send bugs to xen-devel@lists.xenproject.org, see
2017https://wiki.xenproject.org/wiki/Reporting_Bugs_against_Xen_Project on how to send bug reports.
2018