Deploy a virtualized environment

See also: Xen related tools, Virtual Network Configuration, Put some green in your experiments


The attendee will learn here how to deploy a Xen hypervisor, how to connect to and configure its virtual machines, how to tune the environment, and how to record it for future deployments.

This practical session uses the Xen virtualization technology.

Warning.png Warning

This tutorial assumes the attendee knows how to reserve nodes and how to deploy environments.


Considerations on deploying virtual machines on a cloud

When you deploy an image onto a node of a cluster, the node (which is a physical machine) has its own MAC address imposed by its network interface. During boot, the node contacts the DHCP server to obtain the IP address associated with its MAC. This happens transparently for the user. However, when a user creates a VM (manually, or through VM provisioning software), they must choose both the MAC address and the IP address it will use. An interaction between the user and the platform is therefore needed to properly configure the network parameters of the virtual environments.

This raises two fundamental questions:

  • What MAC address do we assign to the VMs?
  • What IP address do we assign to the VMs?

These addresses must be unique across the whole grid infrastructure to avoid conflicts with other machines.

This problem becomes more complicated when the platform (like Grid5000) allows users to save their configured environments (including the VMs created inside the host OS) and re-deploy them later. Imagine the following scenario: on Monday you configure a VM with the IP address 1.1.1.1 and you save the environment at the end of the day because your reservation was too short to finish your experiments. The next day you deploy the image that contains your VM, but unfortunately another user has already taken the IP address from your old reservation (1.1.1.1), generating an IP conflict with your machine and potentially damaging the experiments of both users.


How Grid5000 tries to solve those problems

There are several ways to handle MAC and IP addressing for the virtual machines, each with its advantages and disadvantages.

Previous infrastructure

For MAC addressing, a Grid5000 mechanism installed on the Xen reference environments ensured a unique MAC address range for each user having reserved a physical host. That range was computed from the cluster and node names where the image was deployed. This technique is becoming obsolete because it adapts poorly to change.

For IP addresses, the virtual machines obtained an IP by requesting the DHCP server. In parallel, a complex infrastructure and a set of tricky scripts were in charge of informing the user of the IP addresses assigned to their VMs. With this mechanism, if a VM was rebooted several times during the reservation, its IP could change. This poses a problem because the user expects to keep the same IP addresses for the whole reservation.

Current infrastructure

The suggested way of choosing a MAC address follows an approach that has proven successful in the cloud community: assigning random MAC addresses to the virtual machines.

Work has been done on Grid5000 to supply a mechanism that lets users reserve personal IP ranges that they manage as they wish. It is known as subnet reservation. An important consequence of this solution is that users have to handle by hand (or with their own scripts) the list of IP addresses they assign to their VMs, which illustrates the intrinsic difficulties of running virtualized environments on a grid infrastructure.

Deploying a virtualized environment

Reservation of the nodes and IP ranges

First of all, you have to reserve the nodes where you want to deploy the virtualized environment.

In addition to the nodes, your reservation must specify the type of subnet (for example /22 or /19) and the number of subnets you plan to use for your experiments. In this tutorial we will use a single /22 subnet (a range of 1024 IPs):

Terminal.png frontend:
$ oarsub -I -t deploy -l slash_22=1+nodes=2,walltime=2:00

More documentation about subnet reservation can be found on this page.

Locate a suitable image

The squeeze-x64-xen image uses the new subnet reservation feature; it will be used throughout this tutorial.

Deploy the environment

Now you are ready to deploy your environment as you would any other kadeploy3 environment, as described in Deploy an environment (if an image belongs to another user, you must additionally specify its owner with the -u option):

Terminal.png frontend:
$ kadeploy3 -e squeeze-x64-xen -f $OAR_FILE_NODES -k pub_key_path

Without an argument, the -k option copies your ~/.ssh/authorized_keys (located in your home directory on the frontend) to the deployed nodes.

Note.png Note

For the -k option to be effective, you must have generated an ssh key without a passphrase, or enabled your ssh agent to propagate the key. Type ssh-keygen -t rsa on the frontend to create your public/private key pair.

VMs network configuration

Now that your Xen environment is deployed, you will want to create one or several virtual machines and configure them with IP addresses from the range assigned to you.


Get the assigned IP ranges

Grid5000 provides the package g5k-subnets to manage the subnets you reserved.

On the frontend, while connected within your job, you can type the following command to find out which IP ranges OAR assigned to your reservation:

Terminal.png frontend:
$ g5k-subnets -p

g5k-subnets provides many options to print the network information in different formats. To get the list of IP addresses that you can assign to your virtual machines, type:

Terminal.png frontend:
$ g5k-subnets -i -o ip_list.txt


You will also want to know the broadcast address, netmask and gateway for your VMs:

Terminal.png frontend:
$ g5k-subnets -a

Keep this information; you will need it soon.

Configure the domU

In Xen terminology, a domain U or domU is a virtual machine. The domain 0 or dom0 is the physical host machine that runs the domUs (in our case, the dom0 is the Grid5000 node you deployed).

The image squeeze-x64-xen includes a pre-configured domU. The configuration file of this VM is /etc/xen/domU.cfg. In this file you specify the parameters of your virtual machine; the main ones are:

  • kernel and initrd: the Linux kernel and initrd, with Xen domU support.
  • vcpus: the number of virtual CPUs given to the VM.
  • memory: the amount of RAM (in MB) given to the VM.
  • root: the location of the root partition.
  • disk: the files that contain the partitions of your virtual host.
  • name: the VM's hostname, as displayed by xm list and as reported by the system itself.
  • dhcp: whether to use DHCP.
  • vif: the configuration of the domU's network interfaces.
  • on_poweroff, on_restart, on_crash: how the Xen hypervisor should react when the domU powers off, restarts or crashes.

You can find the official documentation and other options at http://wiki.xensource.com/xenwiki/XenConfigurationFileOptions.
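
For illustration, a minimal domU.cfg could look like the following (the kernel version and file paths are examples only; check the actual file shipped with the image, and note that the initrd is given by the ramdisk key):

kernel  = '/boot/vmlinuz-2.6.32-5-xen-amd64'
ramdisk = '/boot/initrd.img-2.6.32-5-xen-amd64'
vcpus   = 1
memory  = 256
root    = '/dev/xvda2 ro'
disk    = [ 'file:/opt/xen/domains/domU/swap.img,xvda1,w', 'file:/opt/xen/domains/domU/disk.img,xvda2,w' ]
name    = 'domU'
vif     = [ 'mac=00:16:3E:xx:xx:xx, bridge=eth0' ]
on_poweroff = 'destroy'
on_restart  = 'restart'
on_crash    = 'restart'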


The vif line configures the domU's network. It usually contains:

  • a MAC address of the form 00:16:3E:XX:XX:XX (00:16:3E is the Xen reserved MAC range);
  • the network mode (bridge, router, etc.).

In the deployed image, the script /etc/init.d/xen-g5k writes a random MAC address into the domU.cfg file at each boot of your dom0. This script ensures (probabilistically) that there are no MAC conflicts between the domUs of different Grid5000 users.
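
The idea can be sketched in a few lines of shell (an illustration of the mechanism, not the actual script):

# pick three random bytes and append them to the Xen reserved prefix 00:16:3E
MAC=$(printf '00:16:3E:%02X:%02X:%02X' $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)))
# write the new address into the domU's configuration file
sed -i "s/mac=[0-9A-Fa-fxX:]*/mac=$MAC/" /etc/xen/domU.cfg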

A common network mode is bridging, where domUs communicate with the network through a bridge created on the dom0's first network interface. Instead of a bridge, you can use other modes; visit http://wiki.xensource.com/xenwiki/XenNetworking for details on network configuration.


The Xen domU's network configuration

At this point you want to adapt the Xen domU's configuration file with your network parameters.

Connect to one of your dom0s (i.e. the nodes where you deployed the image):

Terminal.png frontend:
$ ssh root@dom0

First, take a look at the node's network interfaces to see how it is configured (e.g. is eth0 or eth1 the default network interface?):

Terminal.png dom0:
# ifconfig


Because we will use bridge mode in this tutorial, you need to know which bridge Xen created at boot:

Terminal.png dom0:
# brctl show


Now that you know the bridge interface, edit /etc/xen/domU.cfg and modify the vif line. It should look like this:

vif = [ 'mac=00:16:3E:xx:xx:xx, bridge=eth0' ]

Replace eth0 with the name of your bridge.


Finally, you can boot your domU:

Terminal.png dom0:
# xm create /etc/xen/domU.cfg

and check that your VM is running by typing:

Terminal.png dom0:
# xm list

Configure the domU's network

Once your VM has booted, you have to configure it to use your reserved IP range. This example domU has no IP address configured yet, so the only way to connect to the VM is through the Xen console:

Terminal.png dom0:
# xm console domU
Note.png Note

Default domU root password is: grid5000

This command connects you to the domU's console; log in as root. To exit the VM's console and go back to the dom0, press Ctrl+].

From the list of IP addresses that you stored in the file ip_list.txt on the frontend, choose one IP to assign to your VM.

Note.png Note

This is a good time to start a second file listing the IPs you are using (e.g. vms_assigned_ips.txt); it will be very useful when you want to execute commands on all your VMs. You may also want to remove the assigned IPs from the original ip_list.txt, so that the same IP is never assigned twice, as sketched below.
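
A minimal sketch of that bookkeeping, to be run on the frontend (file names as suggested above):

# take the first free IP, record it as assigned, and drop it from the free list
ip=$(head -n 1 ip_list.txt)
echo "$ip" >> vms_assigned_ips.txt
sed -i '1d' ip_list.txt
echo "assign $ip to your next VM"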


You have to edit the usual Linux network files according to your assigned IP range:

  • /etc/network/interfaces
auto eth0
iface eth0 inet static
       address 10.180.0.3
       netmask 255.252.0.0
       broadcast 10.183.255.255
       gateway 10.183.255.254

Replace the network parameters with those of your IP range. Remember that the g5k-subnets -a command gives you all this information.

  • /etc/resolv.conf
domain grenoble.grid5000.fr
search grenoble.grid5000.fr
nameserver 172.16.16.247

The DNS parameters depend on the site where you deployed the environment. Take a look at the /etc/resolv.conf of your dom0 to get them.
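
For example:

Terminal.png dom0:
# cat /etc/resolv.conf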

Finally, restart your domU's networking:

Terminal.png domU:
# /etc/init.d/networking restart

Check with ifconfig and route -n that your network is correctly configured, and try to ping the frontend. You should be able to reach it.
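
For example (assuming the frontend's short hostname resolves through the search domain set in /etc/resolv.conf):

Terminal.png domU:
# ifconfig eth0 && route -n && ping -c 3 frontend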

How to connect to a virtual machine

You have seen how to connect to your domUs from the dom0 using the Xen commands xm list and xm console. However, you will probably want more conventional ways (namely SSH) to connect to your VMs from the dom0 or the frontend.

Connect to the virtual host from dom0

The domU's root account has been configured to be automatically accessible with the ssh key of the root user (/root/.ssh/id_rsa on the dom0). You can therefore connect to the domU from the dom0 simply with:

Terminal.png dom0:
# ssh domU_ip_address

Connect to the virtual host from the frontend

To connect from the frontend, you can:

  • Define a password on the virtual machine, and use the IP given to it. You know how to do it from the dom0.
  • Add your public ssh key into the virtual machine.

In the latter case, you have to copy your public ssh key (stored in your home directory on the frontend) to the dom0 and then copy it again into the virtual machine:

Terminal.png frontend:
$ scp ~/.ssh/id_rsa.pub root@dom0_node:
Terminal.png frontend:
$ ssh root@dom0_node
Terminal.png dom0:
# scp id_rsa.pub root@domU_IP_address:
Terminal.png dom0:
# ssh root@domU_IP_address
Terminal.png domU:
# cat id_rsa.pub >> .ssh/authorized_keys

You can now connect from the frontend as root to your virtual machine:

Terminal.png frontend:
$ ssh root@domU_IP_address

Connect to all your virtual machines

Remember that we recommended maintaining a file vms_assigned_ips.txt with the IP addresses you assign to your VMs. Now is the time to use it.

We assume that the file vms_assigned_ips.txt is on the frontend and that you can connect to your VMs from the frontend.

To execute a command on all of your virtual nodes, run the following bash for-loop, substituting the command you want to execute on all VMs:

Terminal.png frontend:
$ for vm in `cat vms_assigned_ips.txt`; do ssh root@$vm "apt-get update && apt-get install --yes vim" ; done
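
Note that ssh asks you to confirm the host key of each VM on first connection, which stalls the loop. For short-lived experimental VMs, a convenient (though less strict) workaround is to disable host key checking:

Terminal.png frontend:
$ for vm in `cat vms_assigned_ips.txt`; do ssh -o StrictHostKeyChecking=no root@$vm "uptime" ; done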

Improve your environment

This section shows how to add virtual machines to your environment and how to reuse the environment.

Create new domU images

You can create new virtual machines in several ways, including:

  • copying the existing virtual machine and adapting it;
  • creating a virtual machine with the Debian xen-tools.


Warning.png Warning

You have to run /etc/init.d/xen-g5k restart on the dom0 before starting your new VMs, so that they get a random MAC address.


Copy the existing virtual machine

Create a directory where you can store your new disk images:

Terminal.png dom0:
# mkdir -p /tmp/xen/domU2/

Ensure the virtual machine you are copying from is not running:

Terminal.png dom0:
# xm shutdown domU
Terminal.png dom0:
# xm list

Copy the disk and swap images to the new location:

Terminal.png dom0:
# cp /opt/xen/domains/domU/disk.img /tmp/xen/domU2/
Terminal.png dom0:
# cp /opt/xen/domains/domU/swap.img /tmp/xen/domU2/

Copy the original configuration file into a new one:

Terminal.png dom0:
# cd /etc/xen
Terminal.png dom0:
# cp domU.cfg domU2.cfg

Edit the new configuration file and adapt the name and disk lines like this:

name    = 'domU2'
disk    = [ 'file:/tmp/xen/domU2/swap.img,xvda1,w', 'file:/tmp/xen/domU2/disk.img,xvda2,w' ]


Recall that your domU VM already has an IP address and a configured network. If you boot domU2 while domU is running, there will be an IP address conflict, since both VMs have the same network configuration (we have just duplicated the whole disk). You will also want to change the internal hostname of domU2 (otherwise it will still be domU).

Do not boot your domU2 VM yet.
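
You can fix the IP address and the hostname from the dom0 before the first boot by loop-mounting the copied disk image. A sketch, assuming disk.img is a plain filesystem image and that 10.180.0.4 is a free address from your reserved range:

# mount domU2's root filesystem
mount -o loop /tmp/xen/domU2/disk.img /mnt
# give the VM its own hostname
echo domU2 > /mnt/etc/hostname
# assign it a fresh IP from your range (10.180.0.4 is only an example)
sed -i 's/address 10.180.0.3/address 10.180.0.4/' /mnt/etc/network/interfaces
umount /mnt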

Use xen-tools

You can create a new image with the Debian xen-tools, a collection of scripts that make it easy to create new virtual machines. In a single command line you can specify the network parameters you want your domU to have:

Terminal.png dom0:
# xen-create-image --hostname domU3 --ip domU_desired_IP --gateway domU_gateway_IP --netmask domU_netmask --passwd
Warning.png Warning

If you create your virtual machine with xen-tools, you need to configure the proxy on the dom0. Refer to Web proxy client for that.

The --passwd option prompts you for the root password of the VM. Other interesting options (such as partition sizes, memory, etc.) can also be specified; visit http://xen-tools.org/software/xen-tools/ for more information. The xen-tools configuration lives in /etc/xen-tools/xen-tools.conf on your dom0.

You may want to take a look at the /var/log/xen-tools/domU3.log log file to see what xen-tools does.

If you look inside the /etc/xen/domU3.cfg generated by xen-tools, you will see that the IP address appears in the vif line. This is how xen-tools works, but it is not necessary to have the IP address of your VMs in the vif line; you can remove it to avoid confusion. You will also see that the bridge is not set in the vif line. Although it may work on some clusters without it, it is good practice to specify it.

Note.png Note

You may have to change the bridge interface in your domU's configuration file

vif = [ 'mac=00:16:3E:50:D1:00,bridge=eth0' ]

for nodes of some clusters to get the network working (for example, eth1 used instead of eth0).

You probably want to access domU3 from the frontend and without typing a password, so follow again the steps explained earlier in this tutorial.

Configure your virtual machines

You have to assign a random MAC address to each of your new VMs (domU2 and domU3).

Run the script that assigns random MAC addresses to your VMs:

Terminal.png dom0:
# /etc/init.d/xen-g5k restart

This script calls random_mac.sh, which generates a random MAC address for each Xen configuration file.

Note.png Note

The /etc/init.d/xen-g5k script changes the MAC address in all your /etc/xen/*.cfg files. These configuration files are only read when you boot a VM with the xm create domu_config_file command; hence, changing the MAC address in the configuration file of a running VM does not affect it.


Verify that your files have been correctly configured:

Terminal.png dom0:
# grep vif /etc/xen/*.cfg


You can now start your VMs with the xm create command.

Terminal.png dom0:
# xm create /etc/xen/domUX.cfg

You can monitor processor, memory and disk usage with the xentop command.

Disk space considerations

The partition sda3 used to deploy your image is limited to 6 GB, and a virtual disk with just the Debian base system already takes about 400 MB, so you cannot store many images in your root partition.
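
You can check the available space from the dom0:

Terminal.png dom0:
# df -h / /tmp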

Let's summarize the domUs you have created so far:

  • You created a virtual machine named 'domU2' under /tmp/. This is a good place to store Xen images because it is a big disk partition, but this area is not backed up when you record your environment.
  • You created a virtual machine named 'domU3' under /opt/. This virtual machine will be recorded with the image. To have it started automatically, add a link like this:
Terminal.png dom0:
# cd /etc/xen/auto
Terminal.png dom0:
# ln -s ../domU3.cfg

Making the VM boot automatically is not a good idea if you plan to save your environment and deploy it again in another reservation, because the IP range you obtain in the second reservation may not be the same as in the first.
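
If you do save such an environment anyway, remove the link first:

Terminal.png dom0:
# rm /etc/xen/auto/domU3.cfg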


Another method for reducing the VMs' disk usage is LVM snapshotting.
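
The idea is to keep one pristine base volume and boot each VM from a copy-on-write snapshot of it. A sketch, assuming your domU disks live on an LVM volume group named vg0 (which is not the case on the plain squeeze-x64-xen image, so adapt it to your setup):

# create a 1 GB copy-on-write snapshot of a base domU volume
lvcreate --snapshot --size 1G --name domU4-disk /dev/vg0/domU-base
# then point the new VM's configuration at the snapshot:
#   disk = [ 'phy:/dev/vg0/domU4-disk,xvda2,w', ... ]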

Record your tuned environment

Before saving your environment, remember to shut down your VMs:

Terminal.png dom0:
# xm shutdown domUX

The usual methods apply to back up your disk image:

Terminal.png frontend:
$ ssh root@dom0 tgz-g5k > my_image.tgz

Back on the frontend, copy the description file of squeeze-x64-xen to your home directory:

Terminal.png frontend:
$ cp /grid5000/descriptions/squeeze-x64-xen-1.1.dsc my_image.dsc

Now edit this file and adjust the parameters. The important ones are:

  • name: the name of your environment.
  • author: your email address.
  • tarball: the location of your gzipped image (something like /home/your_user/my_image.tgz).


You can now record your environment as usual:

Terminal.png frontend:
$ kaenv3 --add my_image.dsc

Exercise: Monitor your nodes

You can now deploy your own environment on your two nodes.

Install ganglia on your four virtual machines and on the dom0. The usual command on Debian is:

Terminal.png domU:
# export http_proxy=http://proxy:3128 ; apt-get install ganglia-monitor
Terminal.png dom0:
# export http_proxy=http://proxy:3128 ; apt-get install ganglia-monitor

On the dom0, you have to edit /etc/gmond.conf and add a line with the pethX interface that Xen created, like the following:

mcast_if peth0

Then look at your nodes on the web interface:

https://helpdesk.grid5000.fr/ganglia/

Create as many virtual machines as the dom0 has cores, and install ganglia on them.

When done, try to reach the limits in terms of memory and disk space.
