XCP-XAPI can be installed on top of other distributions (see Project Kronos). It includes the Xen hypervisor, installed at a low level as the virtualization tool, with the XAPI toolstack used internally to manage the hypervisor.
xe is the command line tool used to configure and operate the Xen Cloud Platform (XCP), while xsconsole is a TUI for basic tasks and configuration.
It is also possible to use the XCP-ISO distribution, but it comes on top of a CentOS system, just like Citrix's XenServer. Read the Xen / XCP / XAPI Overview for a detailed view of the differences between them.
XenServer is Citrix' commercial distribution of XCP.
Various management tools are available to use on top of XAPI.
Read the XCP Beginners Guide for more info.
A summarized version of the whole install process is available here: https://7terminals.com/articles/step-by-step-guide-to-setting-up-xen-and-xenapi-xcp-on-ubuntu-12-04-and-managing-it-with-xencenter/
Start with the base Debian installation as described in this wiki.
XCP requires lspci, which is part of the pciutils package. If the lspci command is not available, install it:
> apt-get install pciutils
Then install XCP's xapi and all its dependencies, including the Xen hypervisor.
> apt-get update
> apt-get install xcp-xapi
select: bridge network
Later, apt-get update && apt-get upgrade will keep up with the latest builds.
Prioritizing the Xen kernel in GRUB so the system boots into Xen by default is optional, but highly recommended on a server configuration.
> dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen
> update-grub
Please refer to the Configure Networking section.
This was not applied in our setup and seems to work correctly so far, although in our experience it is quite difficult to modify this value later on.
You should always dedicate a fixed amount of RAM for Xen dom0. For more details, see Xen - Configure Dom0 Memory.
Here are the commands to issue. First edit /etc/default/grub:
> nano /etc/default/grub
ADD:
# Xen boot parameters for all Xen boots
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=1280M,max:1280M dom0_max_vcpus=2"
Note: on servers with a large amount of memory, the Xen kernel will crash unless you set a dom0 memory limit.
Take care: on Wheezy, 1024M is not enough and causes a kernel crash at boot with an out-of-memory message.
Remember to apply the change to the grub configuration by running
> update-grub
The next step is to configure the toolstack to make sure dom0 memory is never ballooned down while starting new guests.
If you were using the xl toolstack, you would change /etc/xen/xl.conf.
As we are using XAPI, which re-uses parts of the xend toolstack, we'll modify /etc/xen/xend-config.sxp instead, setting the “dom0-min-mem” option to match the dom0 memory configured above (1280 in our case) and setting “enable-dom0-ballooning” to “no”. These options make sure xend never takes any memory away from dom0. We'll then need to reboot the system.
> nano /etc/xen/xend-config.sxp
CHANGE:
line 208: (dom0-min-mem 1280)
line 212: (enable-dom0-ballooning no)
At this point you should reboot so that these changes take effect.
> reboot
> cat /proc/xen/capabilities
Should display “control_d”
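You can also sanity-check that the dom0 memory limit set in GRUB took effect. A rough, hedged check is to look at the memory visible inside dom0, which should come out slightly below the dom0_mem value (1280M in our example):
> free -m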
At this stage the xcp-xapi service is not running, as we haven't yet told Xen to use it as its toolstack. Before we do this we still need to tweak a few small settings in the system.
XAPI won't start yet; trying to start the service will yield an error:
[FAIL] /var/run/xend.pid exists; xapi conflicts with xend ... failed!
You will need to disable xend from starting in order to get xcp-xapi to start.
On the latest install (Debian Wheezy 7.4) /etc/init.d/xend does not exist!
This is done by modifying the file at /etc/init.d/xend. Many of the setup processes in this script still need to run, but xend itself should not start. This will be resolved in a future release. The command to resolve this is:
> sed -i -e 's/xend_start$/#xend_start/' -e 's/xend_stop$/#xend_stop/' /etc/init.d/xend
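As an optional sanity check (not part of the original procedure, and only applicable if /etc/init.d/xend exists on your version), you can confirm that the two calls are now commented out:
> grep -nE 'xend_(start|stop)$' /etc/init.d/xend
Both call lines should now show up prefixed with a “#”.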
> update-rc.d xendomains disable
This will probably yield two warnings (no further explanation on this at the moment)
insserv: warning: current start runlevel(s) (empty) of script 'xendomains' overrides LSB defaults (2 3 4 5).
insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script 'xendomains' overrides LSB defaults (0 1 6).
In order for vncterm to start up with a Linux VM, it needs to load the qemu keymaps; however, they are installed in a different location from where it looks. This can be resolved with the following command:
> ln -s /usr/share/qemu-linaro/keymaps /usr/share/qemu/keymaps
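Optionally, you can double-check that the symlink was created and points at the qemu-linaro keymaps directory (the source path is what we observed on this install and may differ on yours):
> ls -ld /usr/share/qemu/keymaps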
You can check whether xapi is running using the following command:
> service xcp-xapi status
If you are following this install process, here's what you'll get:
[FAIL] Xen toolstack is not set to xapi! Exiting. ... failed!
Set a TOOLSTACK variable to get xcp-xapi to start; you can do that with the following:
> nano /etc/default/xen
ADD:
TOOLSTACK=xapi
Configure XCP to use bridge networking or openvswitch:
> nano /etc/xcp/network.conf
REPLACE: “openvswitch” with “bridge” (or vice-versa).
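For reference, on our install this file simply contains the name of the chosen network backend on a single line; treat this as an observation rather than a guarantee for other versions:
> cat /etc/xcp/network.conf
bridge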
After making those changes a reboot will be needed
> reboot
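Once the machine is back up, a quick way to confirm that xapi is now the active toolstack is to query the service and the host through xe (the exact output will of course show your own host's UUID and name):
> service xcp-xapi status
> xe host-list params=uuid,name-label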
It is often useful to set up at least one VM to autostart at boot time.
To do so you can follow this next procedure, which is also described in this post:
> xe pool-list
uuid ( RO)                : 4a187cc1-69ce-eaf3-2742-6aec0783159f
          name-label ( RW):
    name-description ( RW):
              master ( RO): 288efd1c-7afe-21ca-e374-cace5e2d7e20
          default-SR ( RW): 26b9d87b-f344-1c8d-c5c5-a155d4e4e2e0
> xe pool-param-set uuid=4a187cc1-69ce-eaf3-2742-6aec0783159f other-config:auto_poweron=true
> xe vm-list
uuid ( RO)           : 76868e4b-4d82-320e-dd46-41340e6a67f3
     name-label ( RW): Control domain on host: store
    power-state ( RO): running

uuid ( RO)           : b7e42681-56c1-24cc-db45-9577981000b1
     name-label ( RW): XOA 3.6 Basic
    power-state ( RO): running

> xe vm-param-set uuid=b7e42681-56c1-24cc-db45-9577981000b1 other-config:auto_poweron=true
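To double-check that the flag was recorded, you can read it back with the same param-get syntax the boot script below relies on (optional; the UUID is the one from our example):
> xe vm-param-get uuid=b7e42681-56c1-24cc-db45-9577981000b1 param-name=other-config param-key=auto_poweron
true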
The previous steps are documented to be enough when using Citrix's XenServer. As we're using XCP/XAPI, one supplementary step is required: a script that starts, at boot time, all VMs with “auto_poweron” in their other-config. This can be achieved by editing the /etc/rc.local file, adding the following code:
> nano /etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

[ -e /proc/xen ] || exit 0

XAPI_START_TIMEOUT_SECONDS=240

# wait for xapi to complete initialisation for a max of XAPI_START_TIMEOUT_SECONDS
/usr/lib/xcp/bin/xapi-wait-init-complete ${XAPI_START_TIMEOUT_SECONDS}

if [ $? -eq 0 ]; then
    pool=$(xe pool-list params=uuid --minimal 2> /dev/null)

    auto_poweron=$(xe pool-param-get uuid=${pool} param-name=other-config param-key=auto_poweron 2> /dev/null)
    if [ $? -eq 0 ] && [ "${auto_poweron}" = "true" ]; then
        logger "$0 auto_poweron is enabled on the pool-- this is an unsupported configuration."

        # if xapi init completed then start vms (best effort, don't report errors)
        xe vm-start other-config:auto_poweron=true power-state=halted --multiple >/dev/null 2>/dev/null || true
    fi
fi
You can test that the code works using the following command. It is important to validate its function, as the path to /usr/lib/xcp/bin/xapi-wait-init-complete may need to be adapted (YMMV):
> /etc/rc.local
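After the next reboot you can verify that the flagged VMs actually came up, for instance by listing the power state of every VM:
> xe vm-list params=name-label,power-state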
First identify our SR of choice, then create the file-based VHD (VDI).
> xe sr-list
...
uuid ( RO)                : 26b9d87b-f344-1c8d-c5c5-a155d4e4e2e0
          name-label ( RW): X-Local-SR
    name-description ( RW):
                host ( RO): store
                type ( RO): ext
        content-type ( RO):
...
> xe vdi-create sr-uuid=26b9d87b-f344-1c8d-c5c5-a155d4e4e2e0 name-label=IPFire type=user virtual-size=5GiB
72e00fc6-98bb-48fe-ab4d-b52d1ef721b5
We'll now have a new vhd file at:
/run/sr-mount/<sr-uuid>/<vdi-uuid>.vhd
Which, in our example, translates to:
/run/sr-mount/26b9d87b-f344-1c8d-c5c5-a155d4e4e2e0/72e00fc6-98bb-48fe-ab4d-b52d1ef721b5.vhd
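If you want to confirm that the backing file really was created, a simple check (using the SR path from the example above) is:
> ls -lh /run/sr-mount/26b9d87b-f344-1c8d-c5c5-a155d4e4e2e0/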
The newly created VBD will link our Dom0 to the VDI we just created. First, let's recover our Dom0 UUID:
> xe vm-list | grep -B 1 -e Control
uuid ( RO)           : 76868e4b-4d82-320e-dd46-41340e6a67f3
     name-label ( RW): Control domain on host: store
Then let's create and link the VBD to our Dom:
> xe vbd-create vm-uuid=76868e4b-4d82-320e-dd46-41340e6a67f3 vdi-uuid=72e00fc6-98bb-48fe-ab4d-b52d1ef721b5 device=autodetect
7472f458-ba3f-7344-99a7-6660a39037a6
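Before plugging it, you can optionally inspect the VBD that was just created; filtering xe vbd-list on the VDI UUID is just one convenient way to narrow the output:
> xe vbd-list vdi-uuid=72e00fc6-98bb-48fe-ab4d-b52d1ef721b5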
Finally we'll plug the VBD, making the VDI effectively available from the Dom0:
> xe vbd-plug uuid=7472f458-ba3f-7344-99a7-6660a39037a6
At this stage, a new device will get listed under /dev/sm/backend/<sr-uuid>/<vdi-uuid> like:
Before the vbd-plug instruction:
> ls -l /dev/sm/backend/26b9d87b-f344-1c8d-c5c5-a155d4e4e2e0/
total 0
brw------- 1 root root 253, 0 Mar 29 01:38 4ea98b0d-b3fc-4f62-86bd-19a80f7d356b
After the vbd-plug instruction:
> ls -l /dev/sm/backend/26b9d87b-f344-1c8d-c5c5-a155d4e4e2e0/
total 0
brw------- 1 root root 253, 0 Mar 29 01:38 4ea98b0d-b3fc-4f62-86bd-19a80f7d356b
brw------- 1 root root 253, 1 Mar 29 01:59 72e00fc6-98bb-48fe-ab4d-b52d1ef721b5
The kpartx command creates device maps from partition tables. Each guest storage image has a partition table embedded in the file.
We'll first need to install the package if it is not already available.
> apt-get install kpartx
kpartx lets you inspect an img file, showing the partitions it contains, using the -l option:
> kpartx -l /opt/xen/X-Local-ISO/<image_file_name>.img
loop0p1 : 0 122880 /dev/loop0 8192
loop0p3 : 0 1536000 /dev/loop0 131072
loop deleted : /dev/loop0
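Building on that, here is a hedged sketch of how the device maps would normally be used on a raw .img file; the loop0p1 name depends on which loop device gets picked, and /mnt is just an arbitrary mount point:
> kpartx -av /opt/xen/X-Local-ISO/<image_file_name>.img
> mount /dev/mapper/loop0p1 /mnt
(inspect or copy files, then clean up)
> umount /mnt
> kpartx -dv /opt/xen/X-Local-ISO/<image_file_name>.img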
Note however that THIS DOESN'T WORK on the VHD file:
> kpartx -av /run/sr-mount/26b9d87b-f344-1c8d-c5c5-a155d4e4e2e0/72e00fc6-98bb-48fe-ab4d-b52d1ef721b5.vhd