This is an attempt at setting up a Xen-based Home Theater PC. Later on we'll also try to use PCI Passthrough to enable gaming with near bare-metal performance (that's when I'll get the money for the required hardware).
The initial hardware we'll be using is the following:
We're aiming at setting up a Xen hypervisor base onto which each service will be deployed as a separate Virtual Machine; we'll later call each of those VMs an “appliance”.
TBC
We'll unfortunately be forced to use Debian Wheezy (7) as the base operating system for dom0, since XAPI isn't made available under Jessie (8). So the first thing we need is a bootable USB key with the Debian Wheezy installer:
Using the following link will download the netinst release of Debian Wheezy (7.8.0) for AMD64 processors:
http://cdimage.debian.org/debian-cd/7.8.0/amd64/iso-cd/debian-7.8.0-amd64-netinst.iso
All standard flavors of the Debian Wheezy installer can be obtained on the Debian “wheezy” Installation Information page of the official Debian website.
Note that the netinst version of the installer requires your system to access the Internet during the installation process. If you think this might cause a problem, you may want to use another installer set (like CD or DVD)…
For more info about creating a bootable USB Key of the Debian installer, please refer to this section of the Debian Official Website.
In some situations, you may need to add some proprietary drivers on the space that is left free on your USB key. For example, a machine with a Realtek WiFi interface required the rtlwifi/rtl8192cfw.bin firmware file to be present.
In this situation, you need to create a new partition in the free space left on the installation media; using GParted is one of the easiest ways to do this.
Once the files are present on this extra space, the Debian installer seems to automatically detect and use them.
Plug the previously created USB key into the computer to be installed. You might need to get into the BIOS (or UEFI) setup to set the USB key as the primary boot device. This is outside the scope of this wiki, and the specific operation may vary for each type of machine, but it generally involves pressing an “F” key at startup (F10 or F12). Please refer to your hardware manual to determine the appropriate action required for your system.
Basically you'll need to set the USB Key as the first boot device of the system.
Once you get to the Debian install screen, choose “Install”, then specify your location (region, locales and keyboard mapping).
This is a unique identifier that will enable you to access the server once it is available on the network. We won't delve into the details here; let's just say you can make something up, as long as it doesn't interfere with any existing domain name on the Internet. My recommended configuration here is to use a machine identifier, followed by a dot, then something that represents the geographical location where the machine resides. For example, if you're at number 205 on Sunset Boulevard, USA, you could use something like: srv01.sunsetbld205.us
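If you want to sanity-check the name you made up, a small shell helper can do it. This is purely an illustrative sketch of mine (the valid_fqdn function is not part of any Debian tooling); it accepts a conservative subset of RFC 1123 host names: two or more dot-separated labels of letters, digits and inner hyphens.

```shell
# Illustrative helper (valid_fqdn is my own name, not a Debian tool):
# accept two or more dot-separated labels, each 1-63 characters of
# letters/digits with optional inner hyphens.
valid_fqdn() {
  printf '%s\n' "$1" | grep -Eq \
    '^([a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$'
}

valid_fqdn "srv01.sunsetbld205.us" && echo accepted   # -> accepted
valid_fqdn "bad_name.local" || echo rejected          # -> rejected (underscore)
```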
Then you'll be asked to set the root password, and invited to create an administrator user (along with a password). Be creative but note these down as this information will be required later to access your installation.
Well, that's a tricky one… or not!
We might very well go for the no brainer option of using the entire disk and everything would be perfectly fine.
However, we're aiming at setting up a hypervisor-controlled system, and as such we'd better evaluate what partition layout would best suit our needs. Here again, discussing the optimal partitioning scheme of our system disk is outside the scope of this wiki, but basically we could separate each main part of the storage based on the mount points the system will use. That is the boot space (/boot), the system root (/), users' home directories (/home) as well as other fundamental system directories (like /var, /opt, /tmp etc.).
To keep things simple at this stage, let's say that, following our objective of setting up our hypervisor stack at the moment, we'll simply separate the general “system” directories from our user's “home” ones. Also, as we plan on having the option to add as many “virtual appliances” as we might need, we'd store them in the “/opt” partition. This means we'll set up four partitions, a very small one to serve as boot “/boot”, another one as root “/”, one to serve as “/opt” for all virtual appliances, and a last one to serve as user's “/home” space.
Now maybe the hardest decision is left to make, what size should we allocate to each of these partitions?
There is, of course, no definitive answer to that question, but, based on experience, 200MB is enough for “/boot” and 15GB should be well enough for root (“/”), leaving the rest of our available system disk space for “/home” and “/opt”. Those two, however, might grow considerably as we use the system, especially when using it as an HTPC.
Based on this, we'll create two ext4 partitions, 200MB and 15GB, for “/boot” and “/” (which will include “/home”, “/opt”, “/var” and “/tmp”), along with an LVM volume group containing one logical volume that will be easy to expand when needed; this will serve as our local “storage repository” (SR).
OOPS!
We just forgot about the swap space that's also going to be needed… As a rule of thumb, it is recommended to set it equal to your available RAM up to 2GB of RAM; above that point, a swap space of 2GB is enough.
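That rule of thumb is easy to express as a tiny shell function (a sketch of my own, not an official Debian formula; sizes in MB):

```shell
# Rule of thumb: swap equals RAM up to 2GB of RAM, capped at 2GB above that.
# swap_size_mb is my own helper name; sizes are in MB.
swap_size_mb() {
  if [ "$1" -le 2048 ]; then
    echo "$1"
  else
    echo 2048
  fi
}

swap_size_mb 1024   # -> 1024 (1GB RAM: swap = RAM)
swap_size_mb 8192   # -> 2048 (8GB RAM: swap capped at 2GB)
```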
This summarizes to:
- “/boot”: 200MB, ext4
- “/”: 15GB, ext4 (holding “/home”, “/opt”, “/var” and “/tmp” for now)
- swap: about 2GB
- remaining space: an LVM volume group with one (expandable) logical volume, to serve as our local SR
At a point, you'll be asked to select the Debian mirror that is to be used to download the necessary files to continue the installation process. Although it is perfectly OK to select any country you see fit for your location, you might also choose to use the “automatic Debian mirror redirector” that will automagically select the best mirror for your location. You can find more information regarding this option on the http.debian.net website.
To use this option, go to the top of the countries' list and select the manual entry of the mirror. Then, as the mirror address, use http.debian.net, and /debian/ as the repository to look for.
Note that the mirror address you select at this point will be stored as the default mirror to use for any subsequent system update and package installations on this system. You may later edit the /etc/apt/sources.list file in case you want to change it.
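With the redirector option, the resulting /etc/apt/sources.list would look something like this (a plausible Wheezy example of mine; your installer may write slightly different lines, and will usually add the security entries itself):

```
deb http://http.debian.net/debian/ wheezy main
deb-src http://http.debian.net/debian/ wheezy main
deb http://security.debian.org/ wheezy/updates main
deb-src http://security.debian.org/ wheezy/updates main
```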
Once the core system packages are installed, you will be given the option to install some complementary software. You could, for example, install a complete desktop environment here. However, be aware that doing so will install a ton of extra (heavy) packages along the way, like a complete install of the LibreOffice suite.
For what we aim at doing here (install the Xen hypervisor), it is recommended that you only select to install:
As the final step, the installer will ask you to install the GRUB bootloader to your newly installed drive. By default it should pre-select the disk we previously formatted; most often this will be /dev/sdb, but it might differ depending on your system configuration.
All set!
You'll be invited to remove the installation media that was used (here the USB key) and reboot the system on the newly installed Debian OS.
We'll need the bridge-utils package, so make sure it's available or install it using:
> sudo apt-get install bridge-utils
Let's modify the dom0 network configuration to provide a bridged interface:
> sudo cp /etc/network/interfaces /etc/network/interfaces.bak
> sudo nano /etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual

# Main bridge for Hypervisor
auto xenbr0
iface xenbr0 inet static
    bridge_ports eth0
    address 192.168.1.201
    netmask 255.255.255.0
    broadcast 192.168.1.255
    gateway 192.168.1.1
> sudo ifdown eth0
> sudo killall dhclient
> sudo ifup xenbr0
> sudo brctl show
bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.xxxxxxxxxxxx       no              eth0
> sudo nano /etc/sysctl.conf
ADD:
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
> sudo sysctl -p /etc/sysctl.conf
It is often interesting to give “sudo” rights to the initial user that was created during installation:
> apt-get update
> apt-get install sudo
> usermod -a -G sudo user_name
(user_name being the one chosen during the install process)
REBOOT
You may now connect to your new system via ssh using the specified user_name, removing the screen and keyboard from it if necessary.
Replace the content of ~/.bashrc to bring some color to the terminal:
> cp ~/.bashrc ~/.bashrc.bak
> nano ~/.bashrc
REPLACE CONTENT WITH:
Activate changes:
> source ~/.bashrc
Installing XCP's XAPI and all its dependencies, including the Xen hypervisor, is all covered by the xcp-xapi meta-package:
> sudo apt-get update
> sudo apt-get install xcp-xapi
When prompted for the network configuration, select: bridge network
Later, apt-get update && apt-get upgrade will keep up with the latest builds.
This is optional, but highly recommended on a server configuration.
> sudo dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen
> sudo update-grub
> sudo reboot
Check that Xen is running:
> cat /proc/xen/capabilities
Should display “control_d”
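If you'd rather have that check as a one-liner you can reuse (e.g. in a monitoring script), here is a small sketch; the helper name is my own, and on any machine that is not a Xen dom0 it simply reports "not running":

```shell
# Sketch of a post-reboot dom0 check (xen_dom0_running is my own helper).
# On a dom0 with the Xen toolstack active, /proc/xen/capabilities
# contains "control_d"; elsewhere the file is absent and the check fails.
xen_dom0_running() {
  grep -q control_d /proc/xen/capabilities 2>/dev/null
}

if xen_dom0_running; then
  echo "Xen dom0: running"
else
  echo "Xen dom0: not running"
fi
```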
At this stage XAPI is still not running, we need to apply a few last modifications before it can run:
> sudo update-rc.d xendomains disable
> sudo ln -s /usr/share/qemu-linaro/keymaps /usr/share/qemu/keymaps
> sudo nano /etc/default/xen
ADD:
TOOLSTACK=xapi
Reboot the machine; after this final reboot, Xen should be running with XAPI.
Depending on the particular CPU and system capabilities, there might be some other tweaks that are interesting to apply; you can refer to this other configuration section on the Xen wiki for more options.
Now what's needed is some storage space that the Xen hypervisor will be allowed to use. We created a volume group (Xvg0), along with a logical volume (Xsr0), during the Debian installation; we'll dedicate it to Xen SR usage. Since the ext SR type is layered on top of LVM by XAPI (offering advantages of both the EXT and LVM formats, much like LVHD), we'll use the ext type:
ATTENTION: THE TARGET VOLUME WILL BE ERASED AND ALL DATA ON IT WILL BE LOST!
> sudo xe sr-create type=ext content-type=user name-label='X-Local-SR' device-config:device=/dev/mapper/Xvg0-Xsr0
2f93b6d9-9904-dbfc-afba-d4ba190fca3d
Note that this operation may take a little time as it formats the selected disk space. Once completed, the operation will return the SR UUID.
Check the newly created SR:
> sudo xe sr-list
uuid ( RO)                : ef162035-0edb-7cd9-6e1f-4a9a60e1dba8
          name-label ( RW): XenServer Tools
    name-description ( RW): XenServer Tools ISOs
                host ( RO): provocator
                type ( RO): iso
        content-type ( RO): iso


uuid ( RO)                : 2f93b6d9-9904-dbfc-afba-d4ba190fca3d
          name-label ( RW): X-Local-SR
    name-description ( RW):
                host ( RO): provocator
                type ( RO): ext
        content-type ( RO): user
We can also examine the physical partition that was created to accommodate the newly created SR:
> lsblk
NAME                                     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                        8:64   1 28.9G  0 disk
├─sda1                                     8:65   1  190M  0 part /boot
├─sda2                                     8:66   1   14G  0 part /
├─sda3                                     8:67   1  1.9G  0 part [SWAP]
├─sda4                                     8:68   1    1K  0 part
└─sda5                                     8:69   1 12.8G  0 part
  ├─Xvg0-Xsr0 (dm-0)                     254:0    0  9.3G  0 lvm
  │ └─XSLocalEXT--2f93b6d9...ca3d (dm-2) 254:2    0  9.3G  0 lvm  /run/sr-mount/2f93b6d9...ca3d
  └─Xvg0-Xsr1 (dm-1)                     254:1    0  3.5G  0 lvm
A PBD was also created in the process:
> sudo xe pbd-list
uuid ( RO)                  : c55da5af-981f-5a68-d46b-4480b974e856
             host-uuid ( RO): a006d803-4aa1-11e6-827c-7e5c7af9806e
               sr-uuid ( RO): ef162035-0edb-7cd9-6e1f-4a9a60e1dba8
         device-config (MRO): location: /usr/share/xcp/packages/iso; legacy_mode: true
    currently-attached ( RO): true


uuid ( RO)                  : 84bf61e9-4976-6837-9664-4f837d4bf214
             host-uuid ( RO): a006d803-4aa1-11e6-827c-7e5c7af9806e
               sr-uuid ( RO): 2f93b6d9-9904-dbfc-afba-d4ba190fca3d
         device-config (MRO): device: /dev/mapper/Xvg0-Xsr0
    currently-attached ( RO): true
We'll now register this newly created SR as the pool's default, i.e. new VMs' VDIs will be stored on this SR unless otherwise specified at creation time:
> sudo xe pool-list
uuid ( RO)                : 352be6da-23d4-2815-494a-8c6d63957335
          name-label ( RW):
    name-description ( RW):
              master ( RO): a006d803-4aa1-11e6-827c-7e5c7af9806e
          default-SR ( RW): <not in database>

> sudo xe pool-param-set uuid=352be6da-23d4-2815-494a-8c6d63957335 default-SR=2f93b6d9-9904-dbfc-afba-d4ba190fca3d
The Xen Cloud Platform uses a special repository of type “ISO” that handles CD images stored as files in ISO format.
The previous (xe sr-list) command showed that one such repository already exists. As an exercise, let's investigate it by locating the system directory that this SR points to:
> sudo xe sr-param-list uuid=2c510782-2ad6-af40-8414-4ec89e9bc85c
uuid ( RO)                  : 2c510782-2ad6-af40-8414-4ec89e9bc85c
          name-label ( RW): XenServer Tools
    name-description ( RW): XenServer Tools ISOs
                host ( RO): store
  allowed-operations (SRO): forget; plug; destroy; scan; VDI.clone; unplug
  current-operations (SRO):
                VDIs (SRO):
                PBDs (SRO): bbb45d9b-56ae-9b5a-6c6e-2b379d73caeb
  virtual-allocation ( RO): 0
physical-utilisation ( RO): -1
       physical-size ( RO): -1
                type ( RO): iso
        content-type ( RO): iso
              shared ( RW): true
       introduced-by ( RO): <not in database>
        other-config (MRW): xensource_internal: true; xenserver_tools_sr: true; i18n-key: xenserver-tools; i18n-original-value-name_label: XenServer Tools; i18n-original-value-name_description: XenServer Tools ISOs
           sm-config (MRO):
               blobs ( RO):
 local-cache-enabled ( RO): false
                tags (SRW):
This indicates that the “XenServer Tools” SR is related to the PBD with uuid bbb45d9b-56ae-9b5a-6c6e-2b379d73caeb, let's find out more about this PBD:
> sudo xe pbd-param-list uuid=bbb45d9b-56ae-9b5a-6c6e-2b379d73caeb
uuid ( RO)                  : bbb45d9b-56ae-9b5a-6c6e-2b379d73caeb
    host ( RO) [DEPRECATED] : 288efd1c-7afe-21ca-e374-cace5e2d7e20
           host-uuid ( RO): 288efd1c-7afe-21ca-e374-cace5e2d7e20
             sr-uuid ( RO): 2c510782-2ad6-af40-8414-4ec89e9bc85c
       device-config (MRO): location: /usr/share/xcp/packages/iso; legacy_mode: true
  currently-attached ( RO): true
        other-config (MRW): storage_driver_domain: OpaqueRef:b2232444-4d9b-7cc6-95ab-abb31a2aac8f
Here we have it, the device-config parameter indicates the PBD location: /usr/share/xcp/packages/iso.
But, wait a minute, this is a special directory that should contain xs-tools.iso, a kind of XenServer extensions pack mainly aimed at enhancing I/O driver performance for Windows HVM guests (more on that later).
So we won't touch it right now; instead, we'll create a new ISO repository to put our installer .iso files in.
ISO repositories must have their location parameter set to an existing directory. As we would like to have all Xen related system files inside our LVM partitions, we'll create a directory in Xvg0-Xsr1.
In case you did not format the Xsr1 partition during install, you'll need to create the filesystem first:
> sudo mkfs.ext4 /dev/mapper/Xvg0-Xsr1
We'll create a mount point for this filesystem, add it to /etc/fstab so it will be available upon restart, and finally register it as our ISO SR.
To avoid any confusion, we'll use the Xvg0-Xsr1 UUID in fstab so, even if we were to rename it, it should still be mounted correctly.
> sudo mkdir -p /opt/xen/X-Local-ISO
> sudo blkid | grep Xvg0-Xsr1
/dev/mapper/Xvg0-Xsr1: UUID="3afd499b-9b0c-4674-8560-6877db89fb88" TYPE="ext4"
> sudo nano /etc/fstab
ADD THIS:
# LVM partition for Xen ISO (Xvg0-Xsr1)
UUID=3afd499b-9b0c-4674-8560-6877db89fb88 /opt/xen/X-Local-ISO ext4 defaults 0 0
> sudo mount -a
We use lsblk again to confirm our LVM partition is well mounted where we expect it to be:
> lsblk | grep Xvg0-Xsr1
└─Xvg0-Xsr1 (dm-1)  254:1    0    5G  0 lvm  /opt/xen/X-Local-ISO
We can now register /opt/xen/X-Local-ISO as an ISO SR for our Xen host:
> sudo xe sr-create name-label=X-Local-ISO type=iso shared=true device-config:location=/opt/xen/X-Local-ISO/ device-config:legacy_mode=true content-type=iso
And verify that it is well registered:
> sudo xe sr-list name-label=X-Local-ISO
uuid ( RO)                : 6960a80d-94e0-ba9f-d2bc-dc05b418a9b4
          name-label ( RW): X-Local-ISO
    name-description ( RW):
                host ( RO): store
                type ( RO): iso
        content-type ( RO): iso
XenOrchestra (XO) is an open-source Web interface for XenServer (or XCP in this case) communicating through XAPI. It is made available as an “appliance” for Xen, which means you can download a fully configured VM from the XO website. You'll have to register first but there is a free version available for download (version is 3.8 as of this writing: 2015-03-31). Once downloaded, you'll have an .xva file that you need to transfer to your Xen Host. We'll use sftp to do this, creating a new directory on the host for .xva files in /opt/xen/X-Local-XVA:
-- From the workstation where you downloaded xoa_free_3.8.xva --
> cd /path/to/your/download
> sftp root@<xen_host_ip>
sftp> cd /opt/xen
sftp> mkdir X-Local-XVA
sftp> cd X-Local-XVA
sftp> put xoa_free_3.8.xva
xoa_free_3.8.xva    17%  116MB  4.0MB/s  02:16 ETA
sftp> exit
Now login (ssh) to your Xen host and import the VM:
> cd /opt/xen/X-Local-XVA
> sudo xe vm-import filename=xoa_free_3.8.xva
e803456a-1478-6047-8735-171f1ac0dcf2
It takes a little time (unzipping the image); to be honest, it can take quite some time.
Although the VM has now been imported, its initial state is “halted”:
> sudo xe vm-list
uuid ( RO)           : 185059ea-0cdc-01a6-490f-befa6d20052b
     name-label ( RW): Control domain on host: provocator
    power-state ( RO): running


uuid ( RO)           : e803456a-1478-6047-8735-171f1ac0dcf2
     name-label ( RW): XOA 3.8 Free Edition
    power-state ( RO): halted
Note that you might want to run an IP scan before starting the VM (see the next point).
> sudo xe vm-start name-label="XOA 3.8 Free Edition"
Now, a little tricky one here is that the XOA appliance is, by default, configured with DHCP. So you MUST have a DHCP on your network for it to receive an IP address. The difficulty being that it is not always obvious to determine what exact IP it will be assigned. On small networks it is possible to “guess” this information by using an ip scanner like Angry IP Scanner, scanning the network before and after you start the VM and looking at what IP has been “activated”.
Another option would be to monitor your DHCP logs at the time you start the VM and observe the assigned IP.
Unfortunately if many systems require an IP assignment from your DHCP at the same time, the process can become a bit tedious.
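The before/after comparison itself is trivial to automate once you have the two scans as sorted text files, one live IP per line. Here is a sketch of mine: the two printf lines stand in for real scan results (in practice you'd generate the files with your IP scanner), and comm -13 prints only the addresses that appeared in the second scan.

```shell
# Illustrative sketch of the before/after scan diff. The two files below
# simulate scan output (one live IP per line, lexicographically sorted);
# comm -13 prints only the lines unique to the second file, i.e. the
# addresses that appeared after the VM was started.
printf '%s\n' 192.168.1.1 192.168.1.201 > /tmp/scan_before.txt
printf '%s\n' 192.168.1.1 192.168.1.11 192.168.1.201 > /tmp/scan_after.txt
comm -13 /tmp/scan_before.txt /tmp/scan_after.txt   # -> 192.168.1.11
```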
Once the assigned IP is identified, it is recommended that you give the VM a fixed IP by logging in (ssh) to the VM, modifying its /etc/network/interfaces file and restarting its network interface as follows (assuming it was assigned the IP 192.168.1.11); the default ssh credentials are user: root, password: xoa.
--- From a workstation in the 192.168.1.1/24 range ---
> ssh root@192.168.1.11
> nano /etc/network/interfaces
CHANGE the eth0 definition to:
# The primary network interface
allow-hotplug eth0
iface eth0 inet static
    address 192.168.1.202
    netmask 255.255.255.0
    gateway 192.168.1.1
> ifdown eth0 && ifup eth0
You will be disconnected from your ssh session as the IP address has now changed.
Let's now transfer our public id_rsa key so we won't need to enter a password to login to the VM anymore, then disable password login:
> ssh-copy-id root@192.168.1.202
> ssh root@192.168.1.202
> nano /etc/ssh/sshd_config
ADD AT THE END OF THE FILE:
# Disable Password authentication
PasswordAuthentication no
# Except for listed users or groups
#Match User root,user1
#    PasswordAuthentication yes
#Match Group groupname
#    PasswordAuthentication yes
It is important to add these lines at the end of the file, since a Match block is effective until either another Match line or the end of the file (the indentation isn't significant).
You will now be able to access the XenOrchestra web interface using a browser via http at the static IP address that was set up.
You can refer to the XenOrchestra documentation on github to learn more about its usage.
The first VM that we're going to deploy will be IPFire, a complete open-source firewall distribution, intended to protect our installation from external attacks.
We'll follow the IPFire installation as DomU section of this wiki.
One thing to note, though, is that the current system we are using has only ONE NIC; that is not enough for a firewall installation, which requires at least TWO NICs!
That is the reason we'll have to install a complementary PCIe ethernet card in the server prior to the IPFire installation.