Deploy a virtualized environment (deprecated)


All that follows assumes the attendee knows how to reserve nodes and how to deploy environments.

The attendee will learn how to deploy a Xen hypervisor, how to connect to and configure its virtual machines, how to tune the environment, and how to record it for future deployments.

This practical session uses the Xen virtualization technology.


Find and deploy existing images

Where you will learn to locate environments and deploy them

Locate a suitable image

You should be able to browse through all available environments.

To deploy a virtualized environment, you must know the name under which it is registered in the Kadeploy database. This tutorial uses the lenny-x64-xen environment.

Run kaenv3 -l to list the available environments.

Deploy the environment

You should be able to deploy this environment as you do with other kadeploy3 environments, as described in Deploy an environment:

$ oarsub -I -t deploy -l nodes=2,walltime=2:00
$ kadeploy3 -e lenny-x64-xen -f $OAR_FILE_NODES -k pub_key_path

Without an argument, the -k option copies your ~/.ssh/authorized_keys.

Note.png Note

For the -k option to be effective, you need to generate an SSH key without a passphrase, or use an SSH agent loaded with this key. Run ssh-keygen -t rsa to create your public/private key pair.

Initialize a simple connection to virtual machines

How to connect to one virtual machine

Where you learn how to connect to a virtual machine by usual methods ... and discover automated tools

When deploying the lenny-x64-xen environment, a virtual machine is automatically started at boot. Connect as root to one node and check that it is running with the following command:

# xm list
Name                                      ID Mem(MiB) VCPUs State   Time(s)
Domain-0                                   0      977     4 r-----     37.9
domU                                       1      128     1 -b----      5.6

In Xen terminology, a domU is a virtual host.

The parameters of this machine are described in /etc/xen/domU.cfg (a sketch of such a file is shown after this list). They are:

  • kernel and initrd : the Linux kernel and initrd, with Xen domU support.
  • memory : the amount of RAM (in MB) given to the virtual machine.
  • root : where / is located.
  • disk : which files contain the partitions of your virtual host.
  • name : the hostname of the virtual machine, as displayed by xm list and as reported by the system itself.
  • dhcp : whether DHCP is used.
  • vif : the configuration of the network interfaces.
  • on_poweroff, on_restart, on_crash : how the Xen hypervisor should react when the domU powers off, restarts, or crashes.
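
A sketch of what such a configuration file can look like is shown below (all paths, versions and values are illustrative; check the actual /etc/xen/domU.cfg on your deployed node):

kernel  = '/boot/vmlinuz-2.6.26-2-xen-amd64'
ramdisk = '/boot/initrd.img-2.6.26-2-xen-amd64'
memory  = 128
root    = '/dev/xvda2 ro'
disk    = [ 'file:/opt/xen/domains/domU/disk.img,xvda2,w', 'file:/opt/xen/domains/domU/swap.img,xvda1,w' ]
name    = 'domU'
dhcp    = 'dhcp'
vif     = [ 'mac=00:16:3E:XX:XX:XX,bridge=eth0' ]
on_poweroff = 'destroy'
on_restart  = 'restart'
on_crash    = 'restart'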

We use DHCP because it is easier and makes automation simpler. You can choose a static address, but that address will not be usable on another site when you want to deploy your environment there.

On the vif line, you will see:

  • a MAC address of the form 00:16:3E:XX:XX:XX; it is in the Xen reserved MAC range.
  • a bridge : virtual machines communicate with the network through a bridge created on the first network interface. You can list the existing interfaces with ifconfig, and see the bridge and the interfaces attached to it with the brctl show command (an example is shown after this list).
  • other modes can also be used, such as router mode; see http://wiki.xensource.com/xenwiki/XenNetworking for details on network configuration.
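
For example, on the dom0 (the exact bridge and interface names depend on the cluster and configuration, so this output is only illustrative):

# brctl show
bridge name     bridge id               STP enabled     interfaces
eth0            8000.xxxxxxxxxxxx       no              peth0
                                                        vif1.0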

How to find the IP address given to my virtual node

In Xen terminology, a dom0 is the real host.

On the dom0, you cannot discover the IP address that the DHCP server gave to your node, even with a network capture. But two components know this information:

  • the virtual node itself
  • the dhcp server

You can connect to the virtual node from the dom0 using the serial console that is configured automatically:

# xm console domU

To exit the console, press CTRL + ]. There is no root password defined by default.
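
Once logged in via the console, you can read the address the domU obtained from DHCP directly, for example (output illustrative):

# ifconfig eth0 | grep "inet addr"
          inet addr:10.xxx.xxx.xxx  Bcast:10.xxx.xxx.xxx  Mask:255.255.xxx.xxx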

The DHCP server knows all leased IPs from its logs, but it is administratively restricted. A server named dhcp-proxy parses the DHCP leases and makes them available through a Grid'5000-specific protocol. To secure the connection with this server, a dedicated proxy named omapi-proxy runs on the frontend and listens on port 7910.

You can query it for a given MAC address with netcat, like this:

# nc frontend 7910
00:16:3E:XX:XX:XX
10.xxx.xxx.xxx  node-n-vm0.virtual


A script named xenconf automates this netcat query: given a MAC address as argument, it returns the IP address of the domU and a default hostname built as $node-$node_number-vm$vm_number.virtual.$site.grid5000.fr.

# xenconf 00:16:3E:XX:XX:XX
10.xxx.xxx.xxx  node-n-vm0.virtual

A script named xenlist retrieves this information for all of your domUs and writes all the IP addresses to a file.

# xenlist 
/etc/xen/domU.cfg       10.xxx.xxx.xxx  node-n-vm0.virtual
# cat /tmp/iplist 
10.xxx.xxx.xxx

Connect to the virtual host from dom0

The domU root account has been configured to be automatically accessible with the SSH key of the dom0 root user. You can therefore connect to the domU from the dom0:

# ssh ip_address

Connect to the virtual host from frontend

To connect from the frontend, you can:

  • Define a password on the virtual nodes and use the IP address and/or hostname given to them. You know how to do this from the dom0.
  • Add your public ssh key to the virtual node.

To add your key, copy your public SSH key to the dom0, then copy the file to the virtual node:

Terminal.png frontend:
scp ~/.ssh/id_rsa.pub root@node:
Terminal.png dom0:
scp id_rsa.pub root@node-vm0.virtual:
Terminal.png dom0:
ssh root@node-vm0.virtual
Terminal.png domU:
cat id_rsa.pub >> .ssh/authorized_keys

You can also copy the key directly as authorized_keys2:

Terminal.png dom0:
scp id_rsa.pub root@node-vm0.virtual:.ssh/authorized_keys2

You can now connect from the frontend as root to your virtual node:

Terminal.png frontend:
ssh root@node-vm0.virtual

The xenkeys command automates these copies for all of your domUs. It uses the iplist file generated previously.

Terminal.png dom0:
xenkeys id_rsa.pub
Note.png Note

All these commands are described at https://www.grid5000.fr/mediawiki/index.php/Xen_related_tools and written in the Ruby language. Feel free to enhance them in your environments.

How to connect to all of your virtual machines

Where you discover an automated solution to all these problems.

To configure all of your virtual nodes and execute a command on them, you need to apply the following algorithm:

for each dom0
do for each domU
   do  read mac address in /etc/xen/*.cfg
       ask for ip with the given mac address
       copy your public ssh key on the domU
   end
   collect the list of ip in a file
end
create a file with all of your domU

In practice, this algorithm boils down to:

 for each dom0
 do xenlist
    add iplist file to a file
    xenkeys
 end
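
As a minimal bash sketch (run from the frontend, assuming root SSH access to the dom0s, the xenlist and xenkeys helpers described above, and a public key in ~/.ssh/id_rsa.pub; all_vms is just an illustrative file name):

rm -f all_vms
for dom0 in `uniq $OAR_FILE_NODES`; do
  scp ~/.ssh/id_rsa.pub root@$dom0:            # copy your key to the dom0
  ssh root@$dom0 xenlist                       # builds /tmp/iplist on the dom0
  ssh root@$dom0 cat /tmp/iplist >> all_vms    # collect the domU IPs
  ssh root@$dom0 xenkeys id_rsa.pub            # push the key to every domU
done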

The xennodefile command, run on the frontend, does this job. It needs three arguments:

  • a path to a list of real nodes
  • a path to the public key to install
  • a path to a file which will store the virtual machines IP addresses
Terminal.png frontend:
xennodefile -f $OAR_FILE_NODES -k .ssh/id_rsa.pub -o iplist

You now have a file listing your virtual machines. You can iterate over it with a for-loop if you want:

Terminal.png frontend:
for vm in `cat iplist` ; do ssh root@$vm "apt-get update && apt-get install --yes vim" ; done

Or set a password for the root user on each VM with the passwd command.
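
If you prefer to set the root password non-interactively on all VMs at once, one possibility (using chpasswd instead of the interactive passwd; replace yourpassword) is:

Terminal.png frontend:
for vm in `cat iplist` ; do ssh root@$vm "echo root:yourpassword | chpasswd" ; done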

Warning.png Warning

Currently, the dns name of your VMs (node-XX-vmYY) is only accessible from the local site.

Improve your environment

Where you learn how to add virtual hosts to your environment and how to reuse your environment

Create new domU images

You should be able to create new virtual machines using different methods. These methods include:

  • copying the existing virtual machine and adapting it
  • creating a virtual machine with xen-tools
  • providing your own existing virtual machine
Note.png Note

You have to run automatic_xen_conf.rb on the dom0 before starting your VMs, so that they get an IP address in the appropriate range.

Copy the existing virtual machine

Create a directory where you can store your new disk images:

Terminal.png dom0:
mkdir -p /tmp/xen/domU2/

Ensure the virtual machine you are copying is not running:

Terminal.png dom0:
xm shutdown domU

Copy the disk and swap images to your new storage:

Terminal.png dom0:
cp /opt/xen/domains/domU/disk.img /tmp/xen/domU2/
Terminal.png dom0:
cp /opt/xen/domains/domU/swap.img /tmp/xen/domU2/

Copy the original configuration file to a new one:

Terminal.png dom0:
cd /etc/xen
Terminal.png dom0:
cp domU.cfg domU2.cfg

Edit the new configuration file and adapt the name and disk lines like this:

name    = 'domU2'
disk    = [ 'file:/tmp/xen/domU2/disk.img,xvda2,w', 'file:/tmp/xen/domU2/swap.img,xvda1,w' ]

Use xen-tools

You can create a new image with xen-tools, a collection of scripts that makes it easy to create a new virtual machine.

Terminal.png dom0:
xen-create-image --hostname=domU3 --dhcp
Note.png Note

The xen-tools configuration is done in /etc/xen-tools/xen-tools.conf. You may have to edit the line

vif = [ 'mac=00:16:3E:50:D1:00,bridge=xenbr0' ]

on nodes of some clusters so that the network works (for example, when eth1 is used instead of eth0).

Configure your virtual machines

You will configure your virtual machines and start them. The only important thing you have to edit in the configuration file is the network line.

Now examine the vif line in domU.cfg on your two dom0s:

Terminal.png frontend:
$ ssh root@node1 "grep vif /etc/xen/domU.cfg"
Terminal.png frontend:
$ ssh root@node2 "grep vif /etc/xen/domU.cfg"


The MAC addresses given are different. They are not random: they are generated by a service started at boot, available in /etc/init.d/xen-g5k. This script calls the automatic_xen_conf.rb command, which generates a MAC address for each Xen configuration file, based on a definition file: /etc/definitions.yaml contains an identifier for many clusters.

Another file, named /etc/xen-macs, is also generated. You can use any of the MAC addresses defined in it for the virtual machines hosted by this node.

To choose a MAC address for a new domU you have created, just pick one of the addresses listed by this command:

Terminal.png dom0:
# head -n 3 /etc/xen-macs


The first one is already used by the virtual node named 'domU'.

Alternatively, you can reconfigure all your configuration files automatically with the following command, but all of your virtual machines must be stopped first:

Terminal.png dom0:
# xm shutdown domU
Terminal.png dom0:
# automatic_xen_conf.rb


Verify your files have been correctly configured:

Terminal.png dom0:
# grep vif /etc/xen/*.cfg


You can now start your domU with the xm create command.

Terminal.png dom0:
# xm create /etc/xen/domUX.cfg

You can monitor processor, memory and disk usage with the xentop command.
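
For instance, to take a single snapshot of all domains in batch mode:

Terminal.png dom0:
# xentop -b -i 1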

Disk space considerations

The sda3 partition used to deploy your image is limited to 6 GB, and a virtual disk with just the Debian base system already takes about 400 MB, so you cannot store many images in your root partition.
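
You can check the available space and the size of an existing image from the dom0, for example:

Terminal.png dom0:
# df -h /
Terminal.png dom0:
# du -sh /opt/xen/domains/domU/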

Let's summarize the domUs you created previously:

  • You have created a virtual machine named 'domU2' under /tmp/. It is a good place to store Xen images because it is a large disk area, but this area is not backed up when you record your environment.
  • You have created a virtual machine named 'domU3' under /opt/. This virtual machine will be recorded with your environment image. We want it to be started automatically, so we add a link like this:
#  cd  /etc/xen/auto
# ln -s ../domU3.cfg

Record your tuned environment

The usual method applies to back up your disk image:

Terminal.png frontend:
ssh root@node tgz-g5k > mon_image.tgz

Now return to the frontend and copy the description file of lenny-x64-xen to your home directory:

Terminal.png frontend:
cp /grid5000/descriptions/lenny-x64-xen-2.0.dsc3 mon_image.dsc

Now edit this file and adjust the parameters. The important parameters are:

  • name : the name of your environment.
  • author : your email address.
  • filebase : the path to your disk image.
  • initrdpath : a line containing the path to the hypervisor, the memory seen by the dom0, the path to the dom0 kernel, and the path to the initrd.
  • kernelpath : contains mboot.c32, a multiboot TFTP binary which allows deploying the hypervisor, the kernel and the ramdisk.
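
As an illustration only (keep the exact syntax of the copied lenny-x64-xen description file and only change the values; the names and paths below are placeholders), the lines you typically adjust look like:

name : mylenny-x64-xen
author : your.name@your-domain.fr
filebase : file://path/to/mon_image.tgz
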
Note.png Note

The hypervisor uses all the memory available on the node; the dom0 only sees the amount allocated to it.

You can now record your environment as usual:

Terminal.png frontend:
kaenv3 --add mon_image.dsc

Exercise: Monitor your nodes

You can now deploy your own environment on your two nodes.

Install Ganglia on your four virtual machines and on the dom0s. The usual commands on Debian are:

# export http_proxy=http://proxy:3128
# apt-get install ganglia-monitor

And look at them on the web interface:

https://helpdesk.grid5000.fr/ganglia/

Add one virtual node per core and install Ganglia on it.

When done, try adding new virtual nodes running Ganglia until you reach the limits in terms of memory and disk space.
