Deploy a virtualized environment
- 1 Considerations on deploying virtual machines on a cloud
- 2 Deploying a virtualized environment
- 3 VMs network configuration
- 3.1 Get the assigned IP ranges
- 3.2 Configure the domU
- 3.3 How to connect to a virtual machine
- 4 Improve your environment
- 5 Record your tuned environment
- 6 Exercise: Monitor your nodes
The attendee will learn here how to deploy a Xen hypervisor, how to connect to and configure its virtual machines, how to tune the resulting environment, and how to record it for future deployments.
This practice uses Xen virtualization technology.
This tutorial assumes the attendee knows how to reserve nodes and how to deploy environments.
Considerations on deploying virtual machines on a cloud
When you deploy an image on a node of a cluster, the node (which is a physical machine) has its own MAC address imposed by its network interface. During boot, the node contacts the DHCP server to obtain the IP address associated with its MAC. This happens transparently for the user. However, when a user creates a VM (manually, or through VM provisioning software), he must choose the MAC address and the IP to use. An interaction between the user and the platform is therefore needed to properly configure the network parameters of the virtual environments.
This raises two fundamental questions:
- What MAC address do we assign to the VMs?
- What IP address do we assign to the VMs?
These addresses must be unique across the whole grid infrastructure, to avoid conflicts with other machines.
This problem becomes more complicated when the platform (like Grid5000) allows saving the user's configured environments (including the VMs created inside the host OS) and re-deploying them later. Imagine the following scenario: you configure a VM with the IP address 18.104.22.168 on Monday and you save the environment at the end of the day because your reservation time was not enough to finish your experiments. The next day you deploy the image that contains your VM, but unfortunately, another user has already taken the IP address from your old reservation (22.214.171.124), generating an IP conflict with your machine and potentially damaging the experiments of both users.
How Grid5000 tries to solve those problems
There exist several ways to handle MAC and IP addressing for the virtual machines, each of them with its advantages and disadvantages.
For MAC addressing, a Grid5000 mechanism installed on the Xen reference environments used to ensure a unique MAC address range for users having reserved a physical host. The MAC was computed from the names of the cluster and node where the image was deployed. This technique is becoming obsolete because of its poor robustness to change.
For IP addressing, the virtual machines obtained their IP by requesting the DHCP server. In parallel, a complex infrastructure and a set of tricky scripts were in charge of informing the user of the IP addresses assigned to his VMs. With this mechanism, if the VM was rebooted several times during the reservation, its IP could change. This poses a problem because the user expects to keep the same IP addresses for the whole reservation.
The suggested way of choosing a MAC address is an approach that seems successful in the cloud community: assigning random MAC addresses to the virtual machines.
Work has been done on Grid5000 to supply a mechanism that allows users to reserve personal IP ranges that they manage as they wish. It is known as subnet reservation. An important consequence of this solution is that the user has to handle by hand (or with his own scripts) the list of IP addresses that he assigns to his VMs, which shows the intrinsic difficulties that go with virtualized environments on a grid infrastructure.
Deploying a virtualized environment
Reservation of the nodes and IP ranges
First of all, you have to reserve the nodes where you want to deploy the virtualized environment.
In addition to the nodes, your reservation must include the type of the subnet (for example /22 or /19) and the number of subnets you plan to use for your experiments. In this tutorial we will use a single /22 subnet (range of 1024 IPs):
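Such a reservation can be sketched as follows (an interactive deploy job on two nodes; adjust the node count and walltime to your needs):

```
frontend$ oarsub -I -t deploy -l slash_22=1+nodes=2,walltime=3
```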
More documentation about subnet reservation can be found on this page.
Locate a suitable image
squeeze-x64-xen uses the new subnet reservation feature. This image will be used in this tutorial.
Deploy the environment
Now you are ready to deploy your environment as you do with other kadeploy3 environments, as described on Deploy an environment. The single difference is that you have to specify the user that owns the image with the corresponding kadeploy3 option.
Without argument, the -k option will copy your ~/.ssh/authorized_keys (located in your home on the frontend) to the deployed nodes.
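A typical deployment command would then look like this sketch (the owner's login is a placeholder):

```
frontend$ kadeploy3 -e squeeze-x64-xen -u <owner_login> -f $OAR_NODE_FILE -k
```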
VMs network configuration
Now that your Xen environment is deployed, you want to create one or several virtual machines and configure them with the IP addresses belonging to the range that has been assigned to you.
Get the assigned IP ranges
Grid5000 provides the package g5k-subnets to manage the subnets you reserved.
On the frontend, while connected to your job, you can type the following command to see the IP ranges that OAR assigned to your reservation:
g5k-subnets provides many options to print the network information in different formats. To get the list of IP addresses that you can assign to your virtual machines, you can type:
You will also want to know the broadcast, netmask and gateway for your VMs:
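As a sketch, the commands referenced above may look like this (the -a option is the one named later in this tutorial; the flag used for the IP list is an assumption):

```
frontend$ g5k-subnets                    # subnets assigned to your job
frontend$ g5k-subnets -i > ip_list.txt   # one usable IP per line (flag is an assumption)
frontend$ g5k-subnets -a                 # broadcast, netmask, gateway, etc.
```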
Keep this information, you will need it soon.
Configure the domU
In Xen terminology, a domain U or domU is a virtual machine. The domain 0 or dom0 is the non-virtual machine which hosts the domUs (in our case the dom0 is the Grid5000 node you deployed).
The image squeeze-x64-xen includes a pre-configured domU. The configuration file of this VM is placed in
/etc/xen/domU.cfg. Inside this file, you can specify the parameters of your virtual machine. They are defined by:
- kernel and initrd : Linux kernel and initrd with Xen domU support.
- vcpus : number of virtual CPUs given to the VM.
- memory : size (in MB) of RAM given to the VM.
- root : location of the root partition.
- disk : the files containing the partitions of your virtual host.
- name : the name of the VM, as displayed by xm list and as given by the system itself.
- dhcp : whether DHCP is used.
- vif : the configuration of the domU's network interfaces.
- on_poweroff, on_restart, on_crash : how the Xen hypervisor should react when the domU powers off, restarts, or crashes.
You can find the official documentation and other options in http://wiki.xenproject.org/wiki/XenConfigurationFileOptions.
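Putting these options together, a minimal /etc/xen/domU.cfg may look like the following sketch (kernel version, file paths and sizes are illustrative assumptions, not the exact values shipped in the image):

```
kernel  = '/boot/vmlinuz-2.6.32-5-xen-amd64'
ramdisk = '/boot/initrd.img-2.6.32-5-xen-amd64'
vcpus   = 1
memory  = 256
root    = '/dev/sda2 ro'
disk    = [ 'file:/opt/xen/domU.img,sda2,w', 'file:/opt/xen/domU.swap,sda1,w' ]
name    = 'domU'
dhcp    = 'off'
vif     = [ 'mac=00:16:3E:xx:xx:xx, bridge=eno1' ]
on_poweroff = 'destroy'
on_restart  = 'restart'
on_crash    = 'restart'
```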
The vif line configures the domU's network. It usually contains:
- a MAC address of the form 00:16:3E:XX:XX:XX. This is the Xen-reserved MAC range.
- the network mode (bridge, router, etc.)
In the deployed image, the script
/etc/init.d/xen-g5k assigns a random MAC address to the domU.cfg file on each reboot of your dom0. This script ensures (in a probabilistic way) that there are no MAC conflicts between the domUs of several Grid5000 users.
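A minimal sketch of what such a script can do (the actual content of /etc/init.d/xen-g5k is an assumption) is to draw three random bytes and append them to the Xen-reserved prefix:

```shell
# Generate a random MAC in the Xen-reserved 00:16:3E range.
# od reads 3 random bytes; tr/sed turn " a1 b2 c3" into "a1:b2:c3".
MAC="00:16:3E:$(od -An -N3 -tx1 /dev/urandom | tr -d '\n' | tr ' ' ':' | sed 's/^:*//')"
echo "$MAC"
```

With 24 random bits, a collision between two users' domUs is very unlikely, which is what "probabilistic" means above.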
A common network mode is the bridge one, where domUs communicate with the network using a bridge created on the dom0's first network interface. Instead of using a bridge, you can use other modes. Visit http://wiki.xensource.com/xenwiki/XenNetworking for details on network configuration.
The Xen domU's network configuration
At this point you want to adapt the Xen domU's configuration file with your network parameters.
Connect to one of your dom0 (ie: the node where you deployed the image):
First you will want to take a look at the node's network interfaces to see how the node is configured (i.e.: is eno1 or eno2 the default network interface?):
Because we will use the bridge mode in this tutorial, you want to know the bridge that Xen created when it booted:
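These inspection steps can be sketched as follows (the node name is a placeholder):

```
frontend$ ssh root@<your_node>
dom0# ip addr       # or ifconfig on older images: list the network interfaces
dom0# brctl show    # list the bridges Xen created at boot
```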
Now that you know the bridge interface, you can edit the
/etc/xen/domU.cfg and modify the vif line.
It should look like this:
vif = [ 'mac=00:16:3E:xx:xx:xx, bridge=eno1' ]
You have to substitute eno1 with the name of your bridge.
Finally, you can boot your domU:
and check that your VM is running by typing:
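With the xm toolstack used by this image, booting and checking the domU looks like:

```
dom0# xm create /etc/xen/domU.cfg
dom0# xm list
```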
Configure the domU's network
Once your VM is booted, you have to configure it to use your reserved IP range. This example domU is not configured to have an IP address yet, so the only way to connect to the VM is the Xen console:
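The console is reached with:

```
dom0# xm console domU
```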
This command will connect you to the domU's console. To exit the VM's console and return to the dom0, press Ctrl+]. Log in as root (domU root password: grid5000).
From the list of IP addresses that you stored in the file ip_list.txt on the frontend, you have to choose one IP to assign to your VM.
You have to edit the usual Linux network files in accordance with your assigned IP range:
auto eno1
iface eno1 inet static
    address <an IP from your reserved range>
    netmask <your netmask>
    gateway <your gateway>
Replace the network parameters with those of your IP range. Remember that the g5k-subnets -a command gives you all this information.
The DNS parameters depend on the site where you deployed the environment. Take a look at the /etc/resolv.conf of your dom0 to get them.
It just remains to restart your domU's network configuration:
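On a Debian squeeze domU this is typically:

```
domU# /etc/init.d/networking restart
```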
Check with route -n that your network is well configured, and try to ping the frontend. You should be able to reach it.
How to connect to a virtual machine
You have seen how you can connect to your domUs from the dom0 using the Xen commands xm list and xm console.
However, you may be interested in other, more conventional ways (that is, SSH) to connect to your VMs from your dom0 or the frontend.
Connect to the virtual host from dom0
The domU's root account has been configured to be automatically accessible with the ssh key of the root user (/root/.ssh/id_rsa on the dom0). You can therefore connect to the domU from the dom0 easily:
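The connection is then simply (the IP is the one you assigned to your domU):

```
dom0# ssh root@<domU_IP>
```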
Connect to the virtual host from the frontend
To connect from the frontend, you can :
- Define a password on the virtual machine, and use the IP given to it. You know how to do it from the dom0.
- Add your public ssh key into the virtual machine.
For the second case, you have to copy your public ssh key (stored in your home on the frontend) onto the dom0 and then copy it again into the virtual machine:
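A possible sequence is the following sketch (node name, domU IP and key path are placeholders):

```
frontend$ scp ~/.ssh/id_rsa.pub root@<your_node>:/tmp/
dom0# cat /tmp/id_rsa.pub | ssh root@<domU_IP> 'cat >> /root/.ssh/authorized_keys'
```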
You can now connect from the frontend as root on your virtual machine:
Connect to all your virtual machines
Remember that we recommended maintaining a file vms_assigned_ips.txt updated with the IP addresses that you assign to your VMs. Now is when you will use it.
We assume that the file vms_assigned_ips.txt is on the frontend and that you can connect to your VMs from the frontend.
In order to execute a command on all of your virtual nodes, you can run the following bash for-loop with the command you want to execute on all VMs:
$ for vm in `cat vms_assigned_ips.txt`; do ssh root@$vm <your_command>; done
Improve your environment
Here you learn how to add virtual machines to your environment and how to reuse your environment.
Create new domU images
You should be able to create new virtual machines by different methods. Those methods include :
- create a new virtual machine by copying the existing one and adapting it
- create your virtual machine with the debian xen-tools
Copy the existing virtual machine
Create a directory where you can store your new disk images :
Ensure the virtual machine you are copying from is not running :
Copy disk and swap to your new storage :
Copy the original configuration file into a new one:
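Assuming the original image and swap live under /opt/xen/ (the paths are assumptions), these steps may look like:

```
dom0# mkdir -p /tmp/xen                          # storage for the new disk images
dom0# xm list                                    # check that domU is not running
dom0# cp /opt/xen/domU.img  /tmp/xen/domU2.img   # copy the disk
dom0# cp /opt/xen/domU.swap /tmp/xen/domU2.swap  # copy the swap
dom0# cp /etc/xen/domU.cfg  /etc/xen/domU2.cfg   # copy the configuration file
```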
Edit the new configuration file and adapt the name and disk lines like this:
name = 'domU2'
disk = [ 'file:...' ]
Recall that your domU VM already has an IP address and its network is configured. If you boot the domU2 VM while domU is running, there will be an IP address conflict, since both VMs have the same network configuration (we have just duplicated the whole disk). You will also want to change the internal hostname of domU2 (otherwise it will be domU).
Do not boot your domU2 VM yet.
You can create a new image with the Debian xen-tools. Xen-tools is a collection of scripts that make it easy to create a new virtual machine. In a single command line you can specify the network parameters you want your domU to have:
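Such a command may look like the following sketch, using xen-create-image from xen-tools (the network values are placeholders from your reserved range):

```
dom0# xen-create-image --hostname=domU3 --ip=<an_IP_from_your_range> \
        --netmask=<your_netmask> --gateway=<your_gateway> \
        --dir=/opt/xen --passwd
```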
The --passwd option will prompt you for the root password of the VM.
Other interesting options (like partitions size, memory, etc.) can also be specified. Visit http://xen-tools.org/software/xen-tools/ for more information.
The xen-tools configuration is done in
/etc/xen-tools/xen-tools.conf on your dom0.
You may want to take a look at the /var/log/xen-tools/domU3.log log file to know what xen-tools does.
If you look inside the /etc/xen/domU3.cfg generated by xen-tools, you will see that the IP address appears in the vif line. This is how xen-tools works, but it is not necessary to have the IP address of your VMs in the vif line; you can remove it to avoid confusion. You will also see that the bridge is not configured in the vif line. Although it may work on some clusters without it, it is good practice to specify it.
You may have to change the bridge interface of your domU's configuration file
vif = [ 'mac=00:16:3E:50:D1:00,bridge=<your bridge>' ]
You probably want to access domU3 from the frontend and without typing a password, so you can follow again the steps explained earlier in this tutorial.
Configure your virtual machines
You have to assign a random MAC to your new VMs (domU2 and domU3).
Run the script that assigns random MAC addresses to your VMs:
This script calls the command random_mac.sh, which generates a random MAC address for each Xen configuration file.
Verify your files have been correctly configured :
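The invocation and a quick check may look like this (the init-script call is an assumption based on the script mentioned earlier in this tutorial):

```
dom0# /etc/init.d/xen-g5k restart
dom0# grep '^vif' /etc/xen/domU*.cfg   # each file should show a distinct 00:16:3E MAC
```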
You can now start your VMs with the xm create command.
You can monitor the processor, memory and disk usage of your running VMs.
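A tool commonly used for this on Xen hosts (which exact command the original tutorial referenced is an assumption) is xentop, which shows per-domain CPU, memory and block I/O:

```
dom0# xentop
```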
Disk space considerations
The sda3 partition used to deploy your image is limited to 6 GB. A virtual disk with just the Debian base system already takes about 400 MB, so you cannot store many images in your root partition.
Let's do a summary of the domUs you have created previously :
- You have created a virtual machine named 'domU2' under /tmp/. It is a good place to store Xen images because it is a big disk partition, but this area is not backed up when you record your environment.
- You have created a virtual machine named 'domU3' under /opt/. This virtual machine will be recorded with the image. We want it to be started automatically, so we add a link like this:
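With Debian's Xen packages, auto-start works by linking the configuration file into /etc/xen/auto/:

```
dom0# ln -s /etc/xen/domU3.cfg /etc/xen/auto/domU3.cfg
```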
Making the VM boot automatically is not a good idea if you plan to save your environment and deploy it again in another reservation, because the IP range you obtain in the second reservation may not be the same as in the first one.
Another method that aims at reducing the VMs' disk usage is LVM snapshotting.
Record your tuned environment
Before saving your environment, remember to shut down your VMs:
Usual methods apply to backup your disk image:
Now return to the frontend and copy the description file of squeeze-x64-xen to your home:
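These steps can be sketched as follows (the tgz-g5k and kaenv3 invocations vary between Grid5000 versions, so treat the exact options as assumptions):

```
dom0#     xm shutdown domU
frontend$ ssh root@<your_node> tgz-g5k > my_image.tgz
frontend$ kaenv3 -p squeeze-x64-xen -u <owner_login> > my_image.env
```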
Now edit this file and adjust the parameters. The important ones are:
- name : name of your environment.
- author : your email address.
- tarball : the location of your gzipped image (something like /home/your_user/my_image.tgz)
You can now record your environment as usual :
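Recording then amounts to adding the edited description to your environment list (assuming the description file is my_image.env):

```
frontend$ kaenv3 -a my_image.env
```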
Exercise: Monitor your nodes
You can now deploy your own environment on your two nodes.
Install ganglia on your four virtual machines and on the dom0. The usual command on Debian is:
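On Debian the monitoring daemon is packaged as ganglia-monitor:

```
# apt-get install ganglia-monitor
```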
On the dom0 you have to edit /etc/gmond.conf and add a line with the penoX interface that Xen created, like the following:
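With the old gmond configuration syntax found at /etc/gmond.conf, such a line may look like this (the interface name is an assumption; use the penoX interface shown by ip addr on your dom0):

```
mcast_if peno1
```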
And look at them on the web interface:
Create as many virtual machines as the dom0 has cores and install ganglia on them.
When done, try to reach limits in terms of memory and disk space.