Configuring and Installing a Xen Hardware Virtual Machine (HVM) domainU Guest

From Virtuatopia
Revision as of 19:02, 29 May 2008 by Neil (Talk | contribs)


Xen hardware virtual machine (HVM) provides support for the virtualization of unmodified guest operating systems. Wherever possible it is better to run paravirtualized guests, because HVM domainU guests run slightly slower than paravirtualized domainU guests and require that the host system contain a processor with built-in virtualization support. That said, if you need to virtualize an operating system which has not been, or cannot be, modified to run as a paravirtualized Xen guest (such as Microsoft Windows), then HVM virtualization is a good solution.

Full Virtualization vs. Para-Virtualization

Xen provides two approaches to virtualization - full virtualization and paravirtualization. Full virtualization provides complete abstraction between the hardware and the guest operating system. In this scenario, the guest operating system is provided a complete virtual physical environment in which to run and, as such, is unaware that it is running inside a virtual machine. One advantage of full virtualization is that the operating system does not need to be modified in order to run in a Xen virtualized environment. This means that proprietary operating systems such as Microsoft Windows can be run on Linux systems.

Disadvantages of full virtualization are that performance is slightly reduced as compared to paravirtualization and Xen requires CPUs with special virtualization support built in (such as Intel-VT and AMD-V) in order to perform full virtualization.

Paravirtualization requires that a guest operating system be modified to support virtualization. This typically means that guest operating systems are limited to open source systems such as Linux. The advantage to this approach is that a paravirtualized guest system comes closer to native performance than a fully virtualized guest, and the latest virtualization CPU support is not needed.

Checking Hardware Support for Xen Hardware Virtual Machines (HVM)

As mentioned previously, in order to support full hardware virtualization, the CPU must include Intel-VT or AMD-V support. This can be verified using the following commands:

For Intel CPUs:

grep vmx /proc/cpuinfo

For AMD CPUs:

grep svm /proc/cpuinfo

If your system does not include this support (i.e. neither of the above commands produces any output), you can still use Xen in paravirtualization mode. You will not, however, be able to run unmodified operating systems such as Microsoft Windows as Xen guest operating systems.
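The two checks can be combined into a short script that reports whether the host is HVM capable (a minimal sketch; the output messages are illustrative, not Xen output):

```shell
# Check /proc/cpuinfo for either the Intel VT (vmx) or AMD-V (svm) CPU flag
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    echo "CPU supports hardware virtualization - HVM guests are possible"
else
    echo "No hardware virtualization support - paravirtualization only"
fi
```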

Preparing to Install a Xen HVM domainU Guest

In order to install a fully virtualized Xen guest some form of disk storage is needed to contain the installed operating system. This can take the form of a physical disk (such as /dev/sdb) or a disk image file. A suitable disk image file can be created using the dd command. For example:

dd if=/dev/zero of=xenguest.img bs=1024k seek=6144 count=0

The above command creates a 6GB disk image file for use by the Xen HVM domainU guest.
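Because count=0 is combined with seek, dd writes no data blocks and the result is a sparse file: it reports an apparent size of 6GB but initially consumes almost no real disk space. This can be confirmed with ls and du:

```shell
# Create the 6GB sparse image, then compare apparent size vs. actual usage
dd if=/dev/zero of=xenguest.img bs=1024k seek=6144 count=0
ls -lh xenguest.img   # apparent size: 6.0G
du -h xenguest.img    # actual usage: near zero until the guest writes data
```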

In addition, suitable installation media (such as a DVD or ISO image) will be required from which to install the guest operating system onto the disk image or disk drive.

Creating a Xen HVM Configuration File

The example HVM guest configuration file provided with Xen is as follows:

import os, re
arch = os.uname()[4]
if re.search('64', arch):
    arch_libdir = 'lib64'
else:
    arch_libdir = 'lib'

# Kernel image file.
kernel = "/usr/lib/xen/boot/hvmloader"

# The domain build function. HVM domain uses 'hvm'.
builder = 'hvm'

# Initial memory allocation (in megabytes) for the new domain.
# WARNING: Creating a domain with insufficient memory may cause out of
#          memory errors. The domain needs enough memory to boot kernel
#          and modules. Allocating less than 32MBs is not recommended.
memory = 128

# Shadow pagetable memory for the domain, in MB.
# If not explicitly set, xend will pick an appropriate value.
# Should be at least 2KB per MB of domain memory, plus a few MB per vcpu.
# shadow_memory = 8

# A name for your domain. All domains must have different names.
name = "ExampleHVMDomain"

# 128-bit UUID for the domain.  The default behavior is to generate a new UUID
# on each call to 'xm create'.
#uuid = "06ed00fe-1162-4fc4-b5d8-11993ee4a8b9"

# The number of cpus guest platform has, default=1
#vcpus=1

# Enable/disable HVM guest PAE, default=1 (enabled)
#pae=1

# Enable/disable HVM guest ACPI, default=1 (enabled)
#acpi=1

# Enable/disable HVM APIC mode, default=1 (enabled)
# Note that this option is ignored if vcpus > 1
#apic=1

# List of which CPUS this domain is allowed to use, default Xen picks
#cpus = ""         # leave to Xen to pick
#cpus = "0"        # all vcpus run on CPU0
#cpus = "0-3,5,^1" # run on cpus 0,2,3,5

# Optionally define mac and/or bridge for the network interfaces.
# Random MACs are assigned if not given.
#vif = [ 'type=ioemu, mac=00:16:3e:00:00:11, bridge=xenbr0, model=ne2k_pci' ]
# type=ioemu specifies that the NIC is an ioemu device, not netfront
vif = [ 'type=ioemu, bridge=xenbr0' ]

# Define the disk devices you want the domain to have access to, and
# what you want them accessible as.
# Each disk entry is of the form phy:UNAME,DEV,MODE
# where UNAME is the device, DEV is the device name the domain will see,
# and MODE is r for read-only, w for read-write.

#disk = [ 'phy:hda1,hda1,r' ]
disk = [ 'file:/var/images/min-el3-i386.img,hda,w', ',hdc:cdrom,r' ]

# Configure the behaviour when a domain exits.  There are three 'reasons'
# for a domain to stop: poweroff, reboot, and crash.  For each of these you
# may specify:
#   "destroy",        meaning that the domain is cleaned up as normal;
#   "restart",        meaning that a new domain is started in place of the old
#                     one;
#   "preserve",       meaning that no clean-up is done until the domain is
#                     manually destroyed (using xm destroy, for example); or
#   "rename-restart", meaning that the old domain is not cleaned up, but is
#                     renamed and a new domain started in its place.
# The default is
#   on_poweroff = 'destroy'
#   on_reboot   = 'restart'
#   on_crash    = 'restart'
# For backwards compatibility we also support the deprecated option restart
# restart = 'onreboot' means on_poweroff = 'destroy'
#                            on_reboot   = 'restart'
#                            on_crash    = 'destroy'
# restart = 'always'   means on_poweroff = 'restart'
#                            on_reboot   = 'restart'
#                            on_crash    = 'restart'
# restart = 'never'    means on_poweroff = 'destroy'
#                            on_reboot   = 'destroy'
#                            on_crash    = 'destroy'

#on_poweroff = 'destroy'
#on_reboot   = 'restart'
#on_crash    = 'restart'


# New stuff
device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'

# boot on floppy (a), hard disk (c), Network (n) or CD-ROM (d)
# default: hard disk, cd-rom, floppy
#boot="cda"

#  write to temporary files instead of disk image files
#snapshot=1

# enable SDL library for graphics, default = 0
sdl=0

# enable VNC library for graphics, default = 1
vnc=1

# address that should be listened on for the VNC server if vnc is set.
# default is to use 'vnc-listen' setting from /etc/xen/xend-config.sxp
#vnclisten="127.0.0.1"

# set VNC display number, default = domid
#vncdisplay=1

# try to find an unused port for the VNC server, default = 1
#vncunused=1

# enable spawning vncviewer for domain's console
# (only valid when vnc=1), default = 0
#vncconsole=1

# set password for domain's VNC console
# default depends on vncpasswd in xend-config.sxp
#vncpasswd=''

# no graphics, use serial port
#nographic=0

# enable stdvga, default = 0 (use cirrus logic device model)
#stdvga=0

#   serial port redirect to pty device, /dev/pts/n
#   then xm console or minicom can connect
serial='pty'

#   Qemu Monitor, default is disabled
#   Use ctrl-alt-2 to connect
#monitor=1

#   enable sound card support, [sb16|es1370|all|..,..], default none
#soundhw='sb16'

#    set the real time clock to local time [default=0 i.e. set to utc]
#localtime=1

#    set the real time clock offset in seconds [default=0 i.e. same as dom0]
#rtc_timeoffset=3600

#    start in full screen
#full-screen=1

#   Enable USB support (specific devices specified at runtime through the
#                       monitor window)
#usb=1

#   Enable USB mouse support (only enable one of the following, `mouse' for
#                             PS/2 protocol relative mouse, `tablet' for
#                             absolute mouse)
#usbdevice='mouse'
#usbdevice='tablet'

#   Set keyboard layout, default is en-us keyboard.
#keymap='ja'

This file provides an excellent template to use as the basis for an HVM domainU guest configuration file. Many of these values can be left unchanged. Some key values which will need to be changed will be discussed in the remainder of this section.

Firstly, the domainU system needs to be given a suitable name. The name = field should, therefore, be changed accordingly. For example:

name = "XenHVMGuest1"

The disk = line needs to be modified to reflect the required disk and CD/DVD drive configuration. For example, if you have decided to use a physical disk accessible on the host system as /dev/sdb:

disk = [ 'phy:/dev/sdb,hda,w', 'phy:/dev/cdrom,hdc:cdrom,r' ]

where /dev/cdrom is replaced by the device name for the CDROM/DVD drive on your chosen Linux distribution.

Alternatively, if you have chosen to use a disk image instead of a physical disk drive:

disk = [ 'file:/home/xen/xenguest.img,hda,w', 'phy:/dev/cdrom,hdc:cdrom,r' ]

The above examples assume that you will be installing the guest operating system from a CD or DVD drive. It is also possible to perform the installation from an ISO image residing on the filesystem of the host. For example:

disk = [ 'file:/home/xen/XenGuest.img,hda,w', 'file:/media/disk/ISO/CentOS-5.1-i386-bin-DVD.iso,hdc:cdrom,r' ]

Xen provides a choice of VNC or SDL for supporting a graphical console when the guest is running. For example the following settings select VNC:

vnc = 1
sdl = 0

whilst the following selects SDL:

vnc = 0
sdl = 1

Both SDL and VNC work well in terms of displaying a graphical console, although VNC has some distinct advantages over SDL. Firstly, VNC provides greater flexibility than SDL in terms of remote access to the domainU graphical console. With VNC it is possible to connect to the graphical console from other systems, either on the local network or even over the internet. Secondly, when you close a VNC viewer window the guest domain continues to run allowing you to simply reconnect to carry on where you left off. Closing an SDL window, however, immediately terminates the guest domainU system resulting in possible data loss.

By default Xen does not automatically start the VNC console when the domainU guest starts up. In order to have the graphical console spawned automatically, change the vncconsole = value to 1:

vncconsole = 1

Finally, the boot search sequence needs to be defined. Since we have to install the operating system before we can boot from the hard disk drive, we need to place the CDROM/DVD first in the boot order:

# boot on floppy (a), hard disk (c), Network (n) or CD-ROM (d)
# default: hard disk, cd-rom, floppy
boot = "dc"

Save the modified configuration file as XenHVMGuest.cfg.

Booting the HVM Guest

The guest system can now be started using the xm create command, for example:

xm create XenHVMGuest.cfg -c

All being well the domainU guest will start with output to the Xen text console similar to the following:

xm create hvm.cfg -c
Using config file "./hvm.cfg".
Started domain xenhvm

and the installation process from the chosen media will begin.

Connecting to the HVM domainU Guest Graphical Console

If SDL was chosen for the graphical console then the console should appear when the guest starts up. If, on the other hand, VNC was selected and the HVM domainU was not configured to automatically start vncviewer it is now necessary to connect manually. By default the VNC port is 5900 + the ID of the domain to which you wish to connect (which can be obtained using the xm list command). For example, to connect to domain ID 10:

vncviewer localhost:5910
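The port arithmetic can be scripted as a small helper (a sketch; the domid value would normally be read from the output of xm list):

```shell
# Compute the conventional VNC port (5900 + domain ID) for a Xen guest
domid=10                           # e.g. taken from 'xm list'
port=$((5900 + domid))
echo "vncviewer localhost:$port"   # prints: vncviewer localhost:5910
```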

In my experience this is not always the case and the best way to find out which port is being used is to run the following command:

 ps -ef | grep vnc
root      2992  2441 13 14:51 ?        00:00:00 /usr/lib/xen/bin/qemu-dm -d 11 -vcpus 1 -boot cd 
-serial pty -acpi -domain-name xenhvm -net nic,vlan=1,macaddr=00:16:3e:2e:10:b0,model=rtl8139 
-net tap,vlan=1,bridge=xenbr0 -vnc -vncviewer

As we can see from the above output (the -d 11 argument), the Xen guest named xenhvm is running as domain ID 11, so the VNC server should be listening on port 5911 (5900 + 11). Using this information we would, therefore, connect as follows:

vncviewer localhost:5911

Once the installation of the guest operating system has completed be sure to reverse the boot order in the configuration file if you no longer wish to have the CD/DVD drive first:

# boot on floppy (a), hard disk (c), Network (n) or CD-ROM (d)
# default: hard disk, cd-rom, floppy
boot = "cd"
