

Configure Networking

Paravirtualised Network Devices

A paravirtualised network device consists of a pair of network devices. The first of these (the frontend) will reside in the guest domain while the second (the backend) will reside in the backend domain (typically Dom0). A similar pair of devices is created for each virtual network interface.

The frontend devices appear much like any other physical Ethernet NIC in the guest domain. Typically under Linux it is bound to the xen-netfront driver and creates a device ethN. Under NetBSD and FreeBSD the frontend devices are named xennetN and xnN respectively.

The backend device is typically named such that it contains both the guest domain ID and the index of the device. Under Linux such devices are by default named vifDOMID.DEVID while under NetBSD xvifDOMID.DEVID is used.

The frontend and backend devices are linked by a virtual communication channel. Guest networking is achieved by arranging for traffic to pass from the backend device onto the wider network, e.g. using bridging, routing or Network Address Translation (NAT).


Emulated Network Devices

As well as PV network interfaces, fully virtualised (HVM) guests can also be configured with one or more emulated network devices. These devices emulate a real piece of hardware and are useful when a guest OS does not have PV drivers available, or when those drivers are not yet loaded (e.g. during guest installation).

An emulated network device is usually paired with a PV device with the same MAC address and configuration. This allows the guest to smoothly transition from the emulated device to the PV device when a driver becomes available.
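For example, with xl an HVM guest's vif stanza along these lines (the MAC address and bridge name here are illustrative) yields both an emulated NIC of the requested model and a matching PV device with the same MAC:

```
vif = [ 'mac=00:16:3e:00:00:11,bridge=xenbr0,model=e1000' ]
```

The model= key selects which hardware the device model emulates; common values include e1000 and rtl8139.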

The emulated network device is provided by the device model (DM), running either as a process in domain 0 or as a Stub Domain.

When the DM runs as a process in domain 0 the device is surfaced in the backend domain as a tap type network device. Historically these were named either tapID (for an arbitrary ID) or tapDOMID.DEVID. More recently they have been named vifDOMID.DEVID-emu to highlight the relationship between the paired PV and emulated devices.

If the DM runs in a stub domain then the device surfaces in domain 0 as a PV network device attached to the stub domain. The stub domain will take care of forwarding between the device emulator and this PV device.

MAC addresses

Virtualised network interfaces in domains are given Ethernet MAC addresses. By default most Xen toolstacks will select a random address; depending on the toolstack, this will either be static for the entire lifetime of the guest (e.g. libvirt, XAPI or xend managed domains) or will change each time the guest is started (e.g. XL or xend unmanaged domains).

In the latter case, if a fixed MAC address is required (e.g. for use with DHCP), it can be configured using the mac= option of the vif configuration directive (e.g. vif = ['mac=aa:00:00:00:00:11']). See XL Network Configuration for more details of the syntax.

If you have to set a MAC address yourself, refer to http://wiki.xenproject.org/wiki/XenNetworking for guidance on how to do that.
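Addresses under the Xen Project OUI 00:16:3e are intended for exactly this purpose: the low three octets are yours to choose. As a rough sketch (the helper name is made up; requires bash for $RANDOM):

```shell
# Generate a random MAC address in the Xen Project OUI (00:16:3e).
# Hypothetical helper for illustration only.
random_xen_mac() {
    printf '00:16:3e:%02x:%02x:%02x\n' \
        $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))
}

random_xen_mac
```

Whatever address you pick, keep it stable across guest restarts if the guest obtains its lease via DHCP.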

Bridging

The default (and most common) Xen configuration uses bridging within the backend domain (typically domain 0) to allow all domains to appear on the network as individual hosts.

In this configuration a software bridge is created in the backend domain. The backend virtual network devices (vifDOMID.DEVID) are added to this bridge along with an (optional) physical Ethernet device to provide connectivity off the host. By omitting the physical Ethernet device, an isolated network containing only guest domains can be created.

There are two common naming schemes when using bridged networking. In one scheme the physical device eth0 is renamed to peth0 and a bridge named eth0 is created. In the other the physical device remains eth0 while the bridge is named xenbr0 (or br0 etc). We shall use the eth0+xenbr0 naming scheme here.

Of course you are free to use whatever names you like, including descriptive names (e.g. “dmz”, “internal”, “external” etc).


Setting up bridged networking

The recommended method for configuring bridged networking is to use your distro supplied network configuration tools as described in Host Configuration/Networking.

The XL toolstack will never modify the network configuration and expects that the administrator will have configured the host networking appropriately.

Attaching virtual devices to the appropriate bridge

When a domU starts, the vif-bridge hotplug script is run, which:

  attaches vifDOMID.DEVID to the appropriate bridge
  brings vifDOMID.DEVID up. 
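For reference, those two steps amount to something like the following (vif1.0 and xenbr0 are example names; these commands require root and are normally run by the hotplug machinery, not by hand):

```shell
# Roughly what vif-bridge does for a new backend device (sketch)
brctl addif xenbr0 vif1.0   # attach the backend vif to the bridge
ip link set vif1.0 up       # bring the backend vif up
```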

With XL and xend the bridge to use for each VIF can be configured using the bridge configuration key, e.g.

   vif=[ 'bridge=mybridge' ]

or

   vif=[ 'mac=00:16:3e:01:01:01,bridge=mybridge' ]

or to create multiple interfaces attached to different bridges:

   vif=[ 'mac=00:16:3e:70:01:01,bridge=br0', 'mac=00:16:3e:70:02:01,bridge=br1' ]

Open vSwitch

The Xen 4.3 release will feature initial integration of Open vSwitch based networking. Conceptually this is similar to a bridged configuration but rather than placing each vif on a Linux bridge instead an Open vSwitch switch is used. Open vSwitch supports more advanced Software-defined Networking (SDN) features such as OpenFlow.

Setting up Open vSwitch networking

Set up openvswitch according to the Host Networking Configuration Examples.

If you want openvswitch to be the default, add the following line to your xl.conf file:

vif.default.script="vif-openvswitch"

If you have given the openvswitch bridge a name other than xenbr0, you will need to update that default as well:

vif.default.bridge="ovsbr0"

Alternately, you can specify the new script (and bridge, if necessary) in each guest config file by adding script=vif-openvswitch (and possibly bridge=ovsbr0) to the vifspec of individual vifs. See xl-network-configuration.markdown for more information.
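Putting those pieces together, a per-guest vif line might look like this (using the ovsbr0 name from above; the MAC is illustrative):

```
vif = [ 'mac=00:16:3e:01:01:01,script=vif-openvswitch,bridge=ovsbr0' ]
```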

Attaching virtual devices to the appropriate switch

Xen 4.3 ships with a vif-openvswitch hotplug script which behaves similarly to the vif-bridge script, except that it attaches the VIF to an openvswitch switch (named via the VIF's bridge parameter).

In addition to naming the bridge the openvswitch hotplug script supports an extended syntax for the bridge option which allows for VLAN tagging and trunking. That syntax is:

BRIDGE_NAME[.VLAN][:TRUNK:TRUNK]

To add a vif to VLAN 102 on bridge xenbr0:

vif = [ 'mac=00:16:3e:01:01:01,bridge=xenbr0.102' ]

To add a vif to bridge xenbr0 as a trunk port receiving traffic for VLANs 101 and 202:

vif = [ 'mac=00:16:3e:01:01:01,bridge=xenbr0:101:202' ]
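The hotplug script splits this extended option itself. As a rough illustration of the grammar only (this is not the actual script code, and parse_bridge_spec is a made-up name), a shell parser might look like:

```shell
# Split BRIDGE_NAME[.VLAN][:TRUNK:TRUNK] into its parts (sketch).
parse_bridge_spec() {
    local spec="$1"
    local bridge="${spec%%[.:]*}"    # everything before the first '.' or ':'
    local rest="${spec#"$bridge"}"   # the remaining '.VLAN'/':TRUNK...' tail
    local vlan="" trunks=""
    case "$rest" in
        .*) vlan="${rest#.}"; vlan="${vlan%%:*}" ;;   # '.102' -> 102
    esac
    case "$rest" in
        *:*) trunks="${rest#*:}" ;;                   # ':101:202' -> 101:202
    esac
    echo "bridge=$bridge vlan=$vlan trunks=$trunks"
}

parse_bridge_spec "xenbr0.102:101:202"
```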

Software Bridge

In order to give network access to guest domains it is necessary to configure the domain 0 network appropriately. The most common configuration is to use a software bridge.

It is recommended that you manage your own network bridge using the Debian network bridge tools. The Xen wiki page Host Configuration/Networking also has some useful information. The Xen-supplied network scripts are not always reliable and will be removed in a later version. They are disabled by default in Debian's packages.

If you have a router that assigns IP addresses through DHCP, the following is a working example of an /etc/network/interfaces file using the bridge-utils package.

> nano /etc/network/interfaces

#The loopback network interface
auto lo
iface lo inet loopback

iface eth0 inet manual

auto xenbr0
iface xenbr0 inet dhcp
   bridge_ports eth0

#other possibly useful options in a virtualized environment
  #bridge_stp off       # disable Spanning Tree Protocol
  #bridge_waitport 0    # no delay before a port becomes available
  #bridge_fd 0          # no forwarding delay
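If the host should use a static address instead of DHCP, only the bridge stanza changes (the addresses below are examples; substitute your own network):

```
auto xenbr0
iface xenbr0 inet static
   address 192.168.1.2
   netmask 255.255.255.0
   gateway 192.168.1.1
   bridge_ports eth0
```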

Activate your changes like this:

> ifdown eth0
> killall dhclient
> ifup xenbr0
> brctl show

bridge name bridge id       STP enabled interfaces
xenbr0      8000.xxxxxxxxxxxx   no      eth0

You should see your new IP address on ifconfig xenbr0, and you should still be able to ping out (e.g. ping 8.8.8.8) and resolve names (e.g. ping google.com).

Disable Netfilter on Bridges

It is highly recommended, for performance and security reasons, that netfilter is disabled on all bridges by adding the following to /etc/sysctl.conf.

> nano /etc/sysctl.conf

ADD:
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

Then, as root:

> sysctl -p /etc/sysctl.conf

Open vSwitch

The configuration for openvswitch is persistent across reboots. Suppose you want to create a bridge called 'xenbr0' and attach it to the physical NIC 'eth0'; after installation you can use the following commands:

> ovs-vsctl add-br xenbr0
> ovs-vsctl add-port xenbr0 eth0

xenbr0 will still need to be set up to acquire an IP address just as Linux bridging does; see above for examples on how to set this up.
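To check the result (these commands talk to the running Open vSwitch daemons, so they need root and a configured ovs-vswitchd):

```shell
ovs-vsctl show               # overall switch topology
ovs-vsctl list-ports xenbr0  # should list eth0 plus any attached guest vifs
```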