{{Author|Hyacinthe Cartiaux}}
{{Maintainer|David Loup}}
{{Status|In production}}
{{Portal|User}}
{{TutorialHeader}}


= Purpose =

This page presents how to use KVM on the standard environment (with a "non-deploy" reservation).
The aim is to permit the execution of virtual machines on the nodes, along with a subnet reservation, which will give you a range of routed IPs for your experiment.

In the first part, you will learn the basics of ''g5k-subnets'', which is a prerequisite for the rest of this tutorial.
The ''Quick start'' explains how to run a VM on the standard environment in the minimal number of steps.
The next part is optional: it explains in detail the contextualization mechanism, which allows you to customize your virtual machines.
In the ''Multi-site experiment'' section, we will deploy 2 VMs on 2 sites, and we will measure the network bandwidth between them with iperf.

Finally, an alternative to KVM on the standard environment is quickly introduced: the Xen reference environments.


= Prerequisite: Network subnets reservation with g5k-subnets =

Users deploying VMs on Grid'5000 need to assign IP addresses to them. Each site of Grid'5000 is allocated a /14 block for this purpose, divided in 4 smaller blocks.

OAR can be used to reserve a range of IPs. OAR makes it possible to share the IP resources among users, and avoids potential IP conflicts at the same time.

== Reservation ==

Subnet reservation through OAR is similar to normal resource reservation.

To reserve 4 /22 subnets and 2 nodes, just type:
{{Term|location=frontend|cmd=<code class="command">oarsub -l slash_22=4+{"virtual!='NO'"}/nodes=2 -I</code>}}


You can of course make more complex requests. To obtain 4 /22 subnets on different /19 subnets, you can type:
{{Term|location=frontend|cmd=<code class="command">oarsub -l slash_19=4/slash_22=1+{"virtual!='NO'"}/nodes=2/core=1 -I</code>}}
 
To request a node from a specific cluster, [[Advanced_OAR#Complex_resources_selection|advanced OAR usage]] is needed:
{{Term|location=frontend|cmd=<code class="command">oarsub -I -l "slash_22=1+{"virtual!='NO' AND cluster='edel'"}/nodes=1,walltime=2:00:00"</code>}}


== Usage ==

The simplest way to get the list of your allocated subnets is to use the g5k-subnets script provided on the head node of the submission.

<pre class="brush: bash">
# g5k-subnets
10.8.0.0
10.8.8.0
</pre>

Several other printing options are available (-p to display the CIDR format, -b to display the broadcast address, -n to see the netmask, and -a is equivalent to -bnp):

<pre class="brush: bash">
# g5k-subnets -a
10.8.0.0/21	10.11.255.255	255.255.252.0	10.11.255.254
10.8.8.0/21	10.11.255.255	255.255.252.0	10.11.255.254
</pre>

You can also summarize the subnets into a larger one if they are contiguous:

<pre class="brush: bash">
# g5k-subnets -sp
10.8.0.0/20
</pre>

You can display all the available IPs in your reservation, and their associated unique MAC addresses, with the following command:

<pre class="brush: bash">
# g5k-subnets -im
10.158.16.1     00:16:3E:9E:10:01
...
</pre>

{{Note|text=For detailed information, see the [[Subnet reservation]] page. The [[Grid5000:Network]] page also describes our organization of the virtual IP space inside Grid'5000.}}
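If you script your experiments, you can pick an IP/MAC pair programmatically. A minimal sketch, assuming the two-column <code class="command">g5k-subnets -im</code> output shown above:

<pre class="brush: bash">
# Grab the first reserved IP and its associated MAC address into shell variables
read VM_IP VM_MAC < <(g5k-subnets -im | head -1)
echo "will use IP $VM_IP with MAC $VM_MAC"
</pre>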
= Quick start =
In this part, we will create a virtual machine in a few steps, and ssh to it.


== Job submission ==

To easily test the KVM environment, we use an interactive job, and we reserve one subnet and one node with hardware virtualization capabilities.

{{Term|location=frontend|cmd=<code class="command">oarsub -I -l slash_22=1+{"virtual!='NO'"}/nodes=1</code>}}


== Disk image, virtual machine ==

A disk image containing Debian Stretch is available at the following path:
<code>/grid5000/virt-images/debian9-x64-base.qcow2</code>

Copy it to the node: it will be the base image for our VMs:
{{Term|location=node|cmd=<code class="command">cp /grid5000/virt-images/debian9-x64-base.qcow2 /tmp/</code>}}

If we wanted to create multiple VMs this way, we would have to copy the qcow2 image as many times as the number of VMs.<br/>
To save storage space, we can instead use <code>debian9-x64-base.qcow2</code> as a backing file:
{{Term|location=node|cmd=<code class="command">qemu-img create -f qcow2 -o backing_file=/tmp/debian9-x64-base.qcow2 /tmp/domain1.qcow2</code>}}
By doing this, domain1.qcow2 will only store the differences from debian9-x64-base.qcow2 (and not the whole image).<br/>
If you want to create a second virtual machine based on the same image, simply run the same command with <code>domain2.qcow2</code> instead of <code>domain1.qcow2</code>.
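You can check that the overlay actually points to the base image with <code class="command">qemu-img info</code> (sample output, trimmed; the exact fields depend on your qemu version):

{{Term|location=node|cmd=<code class="command">qemu-img info /tmp/domain1.qcow2</code>}}
<pre class="brush: bash">
image: /tmp/domain1.qcow2
file format: qcow2
backing file: /tmp/debian9-x64-base.qcow2
</pre>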


 
== Choose a MAC address ==

As seen before, g5k-subnets maintains a correspondence between MAC addresses and IP addresses.
The Debian system provided on the disk image is configured to use DHCP, and the DHCP server will assign the IP corresponding to the MAC address of the virtual machine.

Consequently, you have to choose an IP in the range you have reserved, and set the MAC address of the VM to the associated MAC address.

You can get the list of available IPs, and an associated unique MAC address, with the following command:

{{Term|location=node|cmd=<code class="command">g5k-subnets -im</code>}}
<pre class="brush: bash">
10.172.0.1      00:16:3E:AC:00:01
10.172.0.2      00:16:3E:AC:00:02
10.172.0.3      00:16:3E:AC:00:03
10.172.0.4      00:16:3E:AC:00:04
10.172.0.5      00:16:3E:AC:00:05
10.172.0.6      00:16:3E:AC:00:06
10.172.0.7      00:16:3E:AC:00:07
10.172.0.8      00:16:3E:AC:00:08
10.172.0.9      00:16:3E:AC:00:09
10.172.0.10     00:16:3E:AC:00:0A
...
</pre>


 
== Run the guest OS using libvirt ==

Libvirt is a toolkit for managing virtualization servers. Libvirt is also an abstraction layer for different virtualization solutions, including KVM, but also Xen and VMware ESX.

In our case, we use libvirt on top of KVM.

* Create a domain file in XML, describing a virtual machine.


e.g.: <code class="file">domain1.xml</code>

<pre class="brush: bash">
<domain type='kvm'>
  <name>domain1</name>
  <memory>2048000</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch="x86_64">hvm</type>
  </os>
  <clock offset="localtime"/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/tmp/domain1.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <mac address='AA:BB:CC:DD:EE:FF'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/ttyS0'/>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <source path='/dev/ttyS0'/>
      <target port='0'/>
    </console>
  </devices>
</domain>
</pre>


{{Note|text=- Libvirt will create a virtual interface and attach it to the bridge <code>br0</code>, so your VM can reach the rest of Grid'5000 and access the internet<br/>
- Adapt this file to your case: you must change the "mac address" field to one of the g5k-subnets addresses<br/>}}

Now, we can run and manage our guest OS with <code class="command">virsh</code>.

* Run the guest with the following command:
{{Term|location=node|cmd=<code class="command">virsh create domain1.xml</code>}}

* We can see our guest is currently running:
{{Term|location=node|cmd=<code class="command">virsh list</code>}}
<pre class="brush: bash">
 Id    Name                 State
----------------------------------------
 1     domain1              running
</pre>

* You can connect to your VM console:
** The default root password is <code>grid5000</code>
** Use <code class="command">CTRL+]</code> to disconnect from <code class="command">virsh console</code>
{{Term|location=node|cmd=<code class="command">virsh console domain1</code>}}

* At this point, you can repeat the full process and launch several VMs in parallel.
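For example, here is a minimal sketch that starts 3 VMs in one go, assuming <code class="file">domain1.xml</code> (as written above, with the AA:BB:CC:DD:EE:FF placeholder) is used as a template and enough IP/MAC pairs are reserved:

<pre class="brush: bash">
#!/bin/bash
# Start 3 VMs: one overlay disk and one reserved MAC address per VM
# (the DHCP server will hand out the IP matching each MAC).
i=0
g5k-subnets -im | head -3 | while read ip mac; do
  i=$((i+1))
  qemu-img create -f qcow2 -o backing_file=/tmp/debian9-x64-base.qcow2 /tmp/domain$i.qcow2
  # domainN.xml: same template, renamed domain and disk, with a real MAC
  sed -e "s/domain1/domain$i/g" -e "s/AA:BB:CC:DD:EE:FF/$mac/" domain1.xml > domain$i.xml
  virsh create domain$i.xml
done
</pre>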


* Stop the execution of your VM with:
{{Term|location=node|cmd=<code class="command">virsh destroy domain1</code>}}


== Run the guest OS using the kvm command ==

You can also use kvm directly to start the virtual machine:

{{Term|location=node|cmd=<code class="command">screen kvm -m 2048 -hda /tmp/debian9-x64-min.qcow2 -netdev bridge,id=br0 -device virtio-net-pci,netdev=br0,id=nic1,mac=</code><code class="replace">AA:BB:CC:DD:EE:FF</code><code class="command"> -nographic</code>}}

This is an example command, feel free to adapt it to your use case. (The kvm process is launched in a <code class="command">screen</code> session; if you are not familiar with screen, read its [[Screen|documentation]].)


== SSH to your virtual machine ==

On Jessie and Stretch, root SSH authentication with a password is disabled by default. To SSH to your VM, do the following steps:
# Log into your VM console using <code class="command">virsh console domain1</code>. The root password is <code class="command">grid5000</code>.
# Run these commands to allow root login with password in the ssh configuration, and reload the ssh daemon:
{{Term|location=vm|cmd=<code class="command">sed</code> -i s/"PermitRootLogin without-password"/"PermitRootLogin yes"/g /etc/ssh/sshd_config}}
{{Term|location=vm|cmd=<code class="command">service sshd reload</code>}}

Finally, you can ssh directly to your VM from anywhere in Grid'5000:

{{Term|location=node|cmd=<code class="command">ssh root@</code><code class="replace">g5k-subnet_ip_addr</code>}}

= Contextualize your VMs with cloud-init =

As we have seen, we must use the console of our VM to configure SSH before we can connect to it.
This is a bit annoying if we have many VMs: we would have to manually configure SSH on each instance within the console.


This part describes how to contextualize your VM using cloud-init.<br/>
[https://cloudinit.readthedocs.io/en/latest/ cloud-init] runs on startup of the VM and searches for a '''datasource''' from which to fetch the configuration to apply to the VM, such as:
* Set the hostname
* Create users
* Copy SSH key to root account
* Mount a device
* Execute a script
* ...
This is the '''contextualization'''.


On Grid'5000, this datasource is a virtual disk (.iso) that contains the configuration we want.

== Create a virtual disk for cloud-init ==

In this example, we will create a CD containing a simple contextualization configuration for cloud-init: it will change the hostname of the VM and add your public SSH key to the root account.

To help you create the cloud-init configuration files, there is a script <code class="command">cloud-init-example.sh</code> that you can copy to your node:
{{Term|location=node|cmd=<code class="command">cp /grid5000/virt-images/cloud-init-example.sh /tmp/</code>}}
This script generates basic configuration files for cloud-init that add your public SSH key to the root account, so that you can SSH to the VM without a password and without using the console.

{{Term|location=node|cmd=<code class="command">cd /tmp && export cloud_init_key=$(cat ~/.ssh/id_rsa.pub) && ./cloud-init-example.sh</code>}}

{{Note|text=The previous command assumes your SSH public key is in <code>~/.ssh/id_rsa.pub</code>. If not, please put the correct path in the command.}}


You can see 2 files were created in cloud-init-data: '''meta-data''' and '''user-data'''
* meta-data contains configuration such as the hostname, root SSH key, instance id, etc. You can see the script wrote your SSH public key in this file.
* user-data can contain more configuration in different formats:
** It can be a bash script that will be executed on startup
** It can be a YAML file that describes configuration like creating users, mounting a device, running puppet, changing the resolv.conf, ... (for other examples: https://cloudinit.readthedocs.io/en/latest/topics/examples.html)
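For illustration, a cloud-init NoCloud '''meta-data''' file typically looks like the sketch below; the exact content written by <code class="command">cloud-init-example.sh</code> may differ, and the hostname and key here are placeholders:

<pre class="brush: bash">
instance-id: iid-example-vm
local-hostname: example-vm
public-keys:
  - ssh-rsa AAAA...your_public_key... login@frontend
</pre>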


Now, we can generate an iso file using the following command:
{{Term|location=node:/tmp|cmd=<code class="command">genisoimage -output cloud-init-data.iso -volid cidata -joliet -rock cloud-init-data/user-data cloud-init-data/meta-data</code>}}


The file <code class="file">cloud-init-data.iso</code> is ready to be attached to a VM.<br/>
Cloud-init will detect the disk on startup and configure the virtual machine using the information in '''meta-data''' and '''user-data''' on the CD.


== Start a VM with contextualization ==

We will run a new VM with contextualization.
First, we create a new disk image from our base image:
{{Term|location=node:/tmp|cmd=<code class="command">qemu-img create -f qcow2 -o backing_file=/tmp/debian9-x64-base.qcow2 /tmp/contextualized-domain.qcow2</code>}}


We create a new <code class="file">contextualized-domain.xml</code> with this content:


<pre class="brush: bash">
<domain type='kvm'>
  <name>contextualized-domain</name>
  <memory>2048000</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch="x86_64">hvm</type>
  </os>
  <clock offset="localtime"/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/tmp/contextualized-domain.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='cdrom'>
      <source file='/tmp/cloud-init-data.iso'/>
      <target dev='vdb' bus='virtio'/>
      <readonly/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <mac address='AA:BB:CC:DD:EE:FF'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/ttyS0'/>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <source path='/dev/ttyS0'/>
      <target port='0'/>
    </console>
  </devices>
</domain>
</pre>


You can notice it is the same XML file as in the previous part, except that we added this part:
<pre class="brush: bash">
  <disk type='file' device='cdrom'>
    <source file='/tmp/cloud-init-data.iso'/>
    <target dev='vdb' bus='virtio'/>
    <readonly/>
  </disk>
</pre>
It mounts the iso in the VM as a cdrom.


{{Note|text=- Don't forget to change the MAC address to one of your g5k-subnets MAC addresses<br/>
- If you want to use the kvm command instead of virsh, add the <code class="command">-cdrom /tmp/cloud-init-data.iso</code> option}}

Start the guest OS and connect to it using ssh:
{{Term|location=node|cmd=<code class="command">virsh create contextualized-domain.xml</code>}}
{{Term|location=node|cmd=<code class="command">ssh root@</code><code class="replace">g5k-subnet_ip_addr</code>}}


You can now SSH to your VM without a password, and without having to use the VM console. You can notice the hostname also changed to '''example-vm''', as specified in the '''meta-data''' file.
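If the key-based login does not work, a first thing to check from the node is that the cloud-init iso is actually attached to the domain, for instance with <code class="command">virsh domblklist</code> (sample output; device names may vary):

{{Term|location=node|cmd=<code class="command">virsh domblklist contextualized-domain</code>}}
<pre class="brush: bash">
Target     Source
------------------------------------------------
vda        /tmp/contextualized-domain.qcow2
vdb        /tmp/cloud-init-data.iso
</pre>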


== Playing with cloud-init ==
This ''optional'' part shows more examples of what can be done with cloud-init.

=== Run a script on startup with user-data ===
The current content of user-data is:

<pre class="brush: bash">
#cloud-config
disable_root: false
</pre>


By default, cloud-init disables the root account.<br/>
If you try to connect as root on the VM without enabling the root account in user-data,<br/>
you will get a message saying you need to connect as user '''debian''' (your public key will be accepted for user debian).
This user has sudo rights.


The user-data file starts with '''#cloud-config''', telling cloud-init that the format of the file is cloud-config.
As we will see in the next part, it is a YAML file that describes the configuration cloud-init has to apply on boot.


But user-data can also be a bash script, and that is what we will do here.
Replace the content of user-data with:


<pre class="brush: bash">
#!/bin/bash
apt-get update && apt-get install -y lighttpd
cat << EOF > /var/www/html/index.lighttpd.html
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>VM HTTP Server</title>
</head>
<body>
<h1>Installed and configured with cloud-init</h1>
</body>
</html>
EOF
</pre>


Generate the iso file with this new configuration:
{{Term|location=node:/tmp|cmd=<code class="command">genisoimage -output cloud-init-data.iso -volid cidata -joliet -rock cloud-init-data/user-data cloud-init-data/meta-data</code>}}


We will destroy our running VM, create a fresh disk from debian9-x64-base.qcow2, and restart it:

{{Term|location=node:/tmp|cmd=<code class="command">virsh destroy contextualized-domain</code>}}
{{Term|location=node:/tmp|cmd=<code class="command">qemu-img create -f qcow2 -o backing_file=/tmp/debian9-x64-base.qcow2 /tmp/contextualized-domain.qcow2</code>}}
{{Term|location=node:/tmp|cmd=<code class="command">virsh create contextualized-domain.xml</code>}}

After a few moments, we can try:
{{Term|location=node:/tmp|cmd=<code class="command">curl http://</code><code class="replace">vm_ip</code>}}

The script in user-data ran on startup. It installed lighttpd, a small HTTP server, and replaced the default index.html.


We can still ssh to our VM, but not as root, since we removed the option that enabled the root account in user-data:
{{Term|location=node:/tmp|cmd=<code class="command">ssh debian@</code><code class="replace">vm_ip</code>}}
{{Term|location=debian@vm|cmd=<code class="command">sudo su</code>}}
{{Term|location=root@vm|cmd=<code class="command">#</code>}}


=== Going further with user-data in YAML ===
We have seen how to use '''user-data''' as a startup script.<br/>
We will now use it in the '''cloud-config''' format: a YAML description file that triggers actions on startup.


Copy the following content to '''user-data''' and insert your public SSH key where needed. Then regenerate the iso file:


<pre class="brush: bash">
#cloud-config
groups:
  - foo
  - bar
users:
  - name: foo
    primary-group: foo
    groups: users
    shell: /bin/bash
    ssh-authorized-keys:
      - <insert your public key here>
  - name: bar
    primary-group: bar
    groups: users
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh-authorized-keys:
      - <insert your public key here>
packages:
  - lighttpd
</pre>

{{Term|location=node:/tmp|cmd=<code class="command">genisoimage -output cloud-init-data.iso -volid cidata -joliet -rock cloud-init-data/user-data cloud-init-data/meta-data</code>}}


The advantage of using this format is readability. We can quickly identify what it will do:
* Create a user '''foo''' in group foo (with no sudo rights)
* Create a user '''bar''' in group bar with '''sudo''' rights
* Install the package lighttpd


You can create a new VM to test this new configuration:
{{Term|location=node:/tmp|cmd=<code class="command">virsh destroy contextualized-domain</code>}}
{{Term|location=node:/tmp|cmd=<code class="command">qemu-img create -f qcow2 -o backing_file=/tmp/debian9-x64-base.qcow2 /tmp/contextualized-domain.qcow2</code>}}
{{Term|location=node:/tmp|cmd=<code class="command">virsh create contextualized-domain.xml</code>}}

You'll be able to connect without a password as foo and bar:
{{Term|location=node:/tmp|cmd=<code class="command">ssh bar@</code><code class="replace">vm_ip</code>}}
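To quickly check that the cloud-config was applied, you can for instance list the created accounts and the installed package from inside the VM (a sketch; adapt the IP):

{{Term|location=node:/tmp|cmd=<code class="command">ssh bar@</code><code class="replace">vm_ip</code><code class="command"> 'getent passwd foo bar && dpkg -l lighttpd'</code>}}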


For more examples of what can be done with cloud-config, see http://cloudinit.readthedocs.io/en/latest/topics/examples.html


=== Add cloud-init to your own image ===
You may want to add cloud-init to your own virtual image so you can run custom VMs on Grid'5000.


To do so, simply run {{Term|location=VM|cmd=<code class="command">apt-get update && apt-get install cloud-init</code>}} in your VM before exporting it as qcow2.


You can also use virt-customize for an already existing qcow2:<br/>
{{Term|location=local_pc|cmd=<code class="command">apt-get update && apt-get install libguestfs-tools</code>}}
{{Term|location=local_pc|cmd=<code class="command">virt-customize -a my_image.qcow2 --install cloud-init</code>}}


= Multi-site experiment =

In this part, to illustrate what can be done using virtual machines on the standard environment, we will start two virtual machines on two sites, and make them communicate using the virtualization network.

== Reservation ==

Reserve two virtualization-capable nodes and two subnets on two different sites.


For the rest of the multi-site experiment part, don't forget to run each command on '''both sites'''.

{{Term|location=frontends(both)|cmd=<code class="command">oarsub -I -l slash_22=1+{"virtual!='NO'"}/nodes=1</code>}}


== Network configuration ==

Note that <code class="command">g5k-subnets</code> returns completely different information on each site. In the following, we assume that you chose '''10.144.8.1''' ('''00:16:3e:90:08:01''') in Nancy, and '''10.172.0.1''' ('''00:16:3e:ac:00:01''') in Luxembourg.


{{Term|location=nodes(both)|cmd=<code class="command">g5k-subnets -im  &#124; head</code>}}


== Instantiate your VMs ==

=== Copy a standard virtual machine image ===

Copy the default virtual machine image from <code class="file">/grid5000/virt-images/debian9-x64-base.qcow2</code> to <code class="file">/tmp</code> on '''both''' nodes:

{{Term|location=nodes(both)|cmd=<code class="command">cp /grid5000/virt-images/debian9-x64-base.qcow2 /tmp/</code>}}


=== Configure cloud-init ===

To be able to SSH without a password, we will use cloud-init:

{{Term|location=nodes(both)|cmd=<code class="command">cp /grid5000/virt-images/cloud-init-example.sh /tmp/</code>}}
{{Term|location=nodes(both)|cmd=<code class="command">cd /tmp && export cloud_init_key=$(cat ~/.ssh/id_rsa.pub) && ./cloud-init-example.sh</code>}}
{{Term|location=nodes(both)|cmd=<code class="command">genisoimage -output cloud-init-data.iso -volid cidata -joliet -rock cloud-init-data/user-data cloud-init-data/meta-data</code>}}


=== Create the <code class="file">domain.xml</code> file ===

The <code class="file">domain.xml</code> file contains the description of your virtual machine.
Create it on both sides and adapt it to use a mac address provided by <code class="command">g5k-subnets -im</code>. The virtual machine will get the IP associated to its mac address:

<pre class="brush: bash">
<domain type='kvm'>
  <name>stretch</name>
  <memory>2048000</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch="x86_64">hvm</type>
  </os>
  <clock offset="localtime"/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/tmp/debian9-x64-base.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='cdrom'>
      <source file='/tmp/cloud-init-data.iso'/>
      <target dev='vdb' bus='virtio'/>
      <readonly/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <mac address='AA:BB:CC:DD:EE:FF'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/ttyS0'/>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <source path='/dev/ttyS0'/>
      <target port='0'/>
    </console>
  </devices>
</domain>
</pre>


=== Launch the two VMs ===

{{Term|location=nodes(both)|cmd=<code class="command">virsh create domain.xml</code>}}


== Enjoy! ==

{| width="100%"
|-
| width="50%" |
{{Term|location=node(nancy)|cmd=<code class="command">ssh root@'''10.144.8.1'''</code>}}
||
| width="50%" |
{{Term|location=node(luxembourg)|cmd=<code class="command">ssh root@'''10.172.0.1'''</code>}}
|}


=== Install and run iperf ===

{| width="100%"
|-
| width="50%" |
{{Term|location=vm(nancy)|cmd=<code class="command">apt-get update && apt-get install iperf</code>}}
{{Term|location=vm(nancy)|cmd=<code class="command">iperf -s</code>}}
||
| width="50%" |
{{Term|location=vm(luxembourg)|cmd=<code class="command">apt-get update && apt-get install iperf</code>}}
{{Term|location=vm(luxembourg)|cmd=<code class="command">iperf -c '''10.144.8.1'''</code>}}
|}
= Another alternative: Xen reference environments =
Grid'5000 provides Xen reference environments as an alternative to KVM on the standard environment.
This last part is a quick guide to Xen: we will show how to deploy a Xen environment on nodes, create virtual machines, and use g5k-subnets for the network configuration.
{{Note|text=In Xen terminology, a domain U or domU is a virtual machine.
The domain 0 or dom0 is the physical machine which hosts the domUs (in our case, the dom0 is the Grid'5000 node you deployed).}}
== Reserve resources and deploy the Xen environment ==
{{Term|location=frontend|cmd=<code class="command">oarsub -I -t deploy -l slash_22=1+nodes=1,walltime=2:00</code>}}
{{Term|location=frontend|cmd=<code class="command">kadeploy3 -e debian9-x64-xen -f $OAR_FILE_NODES -k</code>}}
== DomU network configuration ==
The image <code class="env">debian9-x64-xen</code> includes a pre-configured domU.
The configuration file of this VM is placed in <code class="file">/etc/xen/domU.cfg</code>.
Inside this file, you can specify the parameters of your virtual machine. They are defined by:
* '''kernel''' and '''initrd''': Linux kernel and initrd with Xen domU support.
* '''vcpus''': number of virtual CPUs given to the VM.
* '''memory''': size (MB) of RAM given to the VM.
* '''root''': where the root partition is located.
* '''disk''': which files contain the partitions of your virtual host.
* '''name''': the hostname, as displayed by xl list and as given by the system itself.
* '''vif''': the configuration of the domU's network interfaces.
* '''on_poweroff''', '''on_restart''', '''on_crash''': how the Xen hypervisor should react to these events.
You can find the official documentation and other options here: http://xenbits.xen.org/docs/4.9-testing/man/xl.cfg.5.html
The vif line configures the domU's network. It usually contains:
* a MAC address
* the bridge name, in our case br0, a bridge that includes the production network interface.
{{Note|text=In the <code class="env">debian9-x64-xen</code> environment, the MAC address is randomly regenerated at each boot; you can prevent this behavior by disabling the xen-g5k service}}
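To illustrate the parameters above, here is a sketch of what such a configuration can look like (paths, sizes and the disk layout are placeholders; the actual <code class="file">/etc/xen/domU.cfg</code> shipped with the environment may differ):

<pre class="brush: bash">
kernel      = '/vmlinuz'
ramdisk     = '/initrd.img'
vcpus       = '1'
memory      = '512'
root        = '/dev/xvda2 ro'
disk        = [ 'file:/tmp/domains/domU/disk.img,xvda2,w' ]
name        = 'domU'
vif         = [ 'mac=00:16:3E:AC:04:01,bridge=br0' ]
on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'restart'
</pre>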
== Use the default domU ==
Select 1 IP from your reserved subnet:
{{Term|location=frontend|cmd=<code class="command">g5k-subnets -im <nowiki>|</nowiki> head -1 </code>}}
<pre class="brush: bash">
10.172.4.1      00:16:3E:AC:04:01
</pre>
Edit the file <code class="file">/etc/xen/domU.cfg</code> and replace the mac address. Then start the domU.
{{Term|location=node|cmd=<code class="command">xl create '''/etc/xen/domU.cfg'''</code>}}
{{Term|location=node|cmd=<code class="command">xl list</code>}}
<pre class="brush: bash">
Name        ID  Mem VCPUs      State  Time(s)
Domain-0    0   976     8     r-----     30.7
domU        1   512     1     -b----      4.7
</pre>
The example VM is already configured to accept the SSH key of the <code class="env">debian9-x64-xen</code> environment,
so you can SSH to it without a password, and without cloud-init:
{{Term|location=node|cmd=<code class="command">ssh root@</code><code class="replace">ip_g5k-subnet</code>}}
== Create a new domU ==
Select another IP and MAC address, and create a new domU with the <code class="command">xen-create-image</code> command:
{{Term|location=frontend|cmd=<code class="command">g5k-subnets -im</code>}}
<pre class="brush: bash">
...
10.172.4.3      00:16:3E:AC:04:03
...
</pre>
{{Term|location=node|cmd=<code class="command">xen-create-image --dir=/tmp/ --size=10G --hostname=domU2 --role=udev --genpass=0 --password=grid5000 --mac=00:16:3E:AC:04:03 --dhcp --bridge=br0 --memory=512M</code>}}
At this point, a new domU configuration file (<code class='file'>/etc/xen/domU2.cfg</code>) and a new disk image (<code class='file'>/tmp/domains/domU2/disk.img</code>) have been generated.
{{Term|location=node|cmd=<code class="command">xl create '''/etc/xen/domU2.cfg'''</code>}}
Due to the default Xen configuration in the debian9-x64-xen environment, the host's SSH key has been copied during the image generation: you can SSH as root without a password into '''domU2''':
{{Term|location=node|cmd=<code class="command">ssh root@</code><code class="replace">ip_g5k-subnet</code>}}
== Using Grid'5000 qcow2 images ==
It is possible to run a VM with a Grid'5000 environment image.
First, copy the image and the script to set up cloud-init to the node (the dom0):
{{Term|location=frontend|cmd=<code class="command"> scp /grid5000/virt-images/debian9-x64-min.qcow2 /grid5000/virt-images/cloud-init-example.sh root@node:/tmp </code>}}
To be able to SSH to the VM with your public key, run the following commands:
{{Term|location=frontend|cmd=<code class="command"> cat ~/.ssh/id_rsa.pub </code>}}
Copy your SSH key
{{Term|location=node|cmd=<code class="command"> apt-get update && apt-get install genisoimage </code>}}
{{Term|location=node|cmd=<code class="command"> cd /tmp; export cloud_init_key="</code><code class="replace">paste your SSH key</code><code class="command">" && /tmp/cloud-init-example.sh </code>}}
{{Term|location=node:/tmp|cmd=<code class="command"> genisoimage -output cloud-init-data.iso -volid cidata -joliet -rock cloud-init-data/user-data cloud-init-data/meta-data </code>}}
Then we create a domU config file <code class="file">/etc/xen/g5k_image.cfg</code>:
<pre class="brush: bash">
#
# Configuration file for the Xen instance domU, created
# by xen-tools 4.7 on Fri Jun  1 00:48:00 2018.
#
#
#  Kernel + memory size
#
kernel      = '/vmlinuz'
extra       = 'elevator=noop'
ramdisk     = '/initrd.img'
vcpus       = '1'
memory      = '512'
#
#  Disk device(s).
#
root        = '/dev/xvda1 ro'
disk        = [
                  'format=qcow2, vdev=xvda, access=rw, target=/tmp/debian9-x64-min.qcow2',
                  'format=raw, vdev=hdc, access=ro, devtype=cdrom, target=/tmp/cloud-init-data.iso'
              ]
#
#  Hostname
#
name        = 'g5k_image'
#
#  Networking
#
dhcp        = 'dhcp'
vif         = [ 'mac=AA:BB:CC:DD:EE:FF,bridge=br0' ]
#
#  Behaviour
#
on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'restart'
</pre>
The important parts in this configuration file are:
* The two disks: one for the image of the environment and one for cloud-init-data.iso
* The MAC address, which you need to change to one of your g5k-subnets MAC addresses
Finally, run the VM:
{{Term|location=node|cmd=<code class="command"> xl create /etc/xen/g5k_image.cfg </code>}}
{{Term|location=frontend|cmd=<code class="command"> ssh root@</code><code class="replace">VM_IP</code>}}
== Common administrative commands ==
* List the running domUs with the following command:
{{Term|location=dom0|cmd=<code class="command">xl list</code>}}
* Connect to a domU using the xen console
{{Term|location=dom0|cmd=<code class="command">xl console '''<domU-name>'''</code>}}
* Start a domU
{{Term|location=dom0|cmd=<code class="command">xl create '''/etc/xen/<domU-name>.cfg'''</code>}}
* Properly shut down a domU
{{Term|location=dom0|cmd=<code class="command">xl shutdown '''<domU-name>'''</code>}}
* Instantly terminate a domU
{{Term|location=dom0|cmd=<code class="command">xl destroy '''<domU-name>'''</code>}}
* Print information about the dom0
{{Term|location=dom0|cmd=<code class="command">xl info</code>}}
* Show real-time monitoring information:
{{Term|location=dom0|cmd=<code class="command">xl top</code>}}
== Going further ==
Please, refer to the official [https://wiki.xenproject.org/wiki/Xen_Project_4.9_Man_Pages Xen documentation] and [https://wiki.debian.org/Xen Debian documentation].

Revision as of 16:26, 8 October 2018

Note.png Note

This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.

Purpose

This page presents how to use KVM on the standard environment (with a "non-deploy" reservation). The aim is to permit the execution of virtual machines on the nodes, along with a subnet reservation, which will give you a range of routed IP for your experiment.

In the first part, you will learn the basics of g5k-subnets, which is a prerequisite for the rest of this tutorial. The Quick start explains how to run a VM on the standard environment in the minimal number of steps. The next part is optional, it explains in details the contextualization mechanism, which allows you to customize your virtual machines. In the Multi-site experiment section, we will deploy 2 VMs on 2 sites, and we will measure the network bandwidth between them with iperf.

Finally, an alternative to KVM on the standard environment is quickly introduced: the Xen reference environments.

Prerequisite: Network subnets reservation with g5k-subnets

Users deploying VMs on Grid'5000 need to attribute IP address to them. Each site of Grid'5000 is allocated a /14 block for this purpose, divided in 4 smaller blocks.

OAR can be used to reserve a range of IPs. OAR permits to share the IP resources among users, and avoid the potential IP conflicts at the same time.

Reservation

Subnet reservation through OAR is similar to normal resource reservation.

To reserve 4 /22 subnets and 2 nodes, just type:

Terminal.png frontend:
oarsub -l slash_22=4+{"virtual!='NO'"}/nodes=2 -I

You can of course have more complex request. To obtain 4 /22 on different /19 subnets, you can type:

Terminal.png frontend:
oarsub -l slash_19=4/slash_22=1+{"virtual!='NO'"}/nodes=2/core=1 -I

To request a node from a specific cluster, advanced OAR usage is needed:

Terminal.png frontend:
oarsub -I -l "slash_22=1+{"virtual!='NO' AND cluster='edel'"}/nodes=1,walltime=2:00:00"

Usage

The simplest way to get the list of your allocated subnets is to use the g5k-subnets script provided on the head node of the submission.

# g5k-subnets
10.8.0.0
10.8.8.0

Several other printing options are available (-p option to display the CIDR format, -b to display broadcast address, -n to see the netmask, and -a is equivalent to -bnp):

# g5k-subnets -a
10.8.0.0/21	10.11.255.255	255.255.252.0	10.11.255.254
10.8.8.0/21	10.11.255.255	255.255.252.0	10.11.255.254

You can also summarize the subnets into a larger one if they are contiguous:

# g5k-subnets -sp
10.8.0.0/20

You can display all the available IP in your reservation, and their associated unique mac addresses, with the following command.

# g5k-subnets -im
10.158.16.1     00:16:3E:9E:10:01
...
Note.png Note

For detailed information, see the Subnet reservation page. The Grid5000:Network page also describes our organization of the virtual IP space inside Grid'5000.


Quick start

In this part, we will create a virtual machine in a few steps, and ssh to it.

Job submission

In order to test easily the kvm environment, we use an interactive job, and we reserve one subnet and one node with hardware virtualization capabilities.

Terminal.png frontend:
oarsub -I -l slash_22=1+{"virtual!='NO'"}/nodes=1

Disk image, virtual machine

A disk image containing debian stretch is available at the following path: /grid5000/virt-images/debian9-x64-base.qcow2

You can copy it on the node : It will be our base image for our VMs :

Terminal.png node:
cp /grid5000/virt-images/debian9-x64-base.qcow2 /tmp/

If we want to create multiple VMs, we will have to copy the qcow2 as many times as the number of VM we want.
To gain storage space, we can use debian9-x64-base.qcow2 as a backing file :

Terminal.png node:
qemu-img create -f qcow2 -o backing_file=/tmp/debian9-x64-base.qcow2 /tmp/domain1.qcow2

By doing this, domain1.qcow2 will only store the difference from debian9-x64-base.qcow2 (and not the whole image)
If you want to create a second virtual machine based on the same image, simply run the same command with domain2.qcow instead of domain1.qcow2

Choose a MAC address

As seen before, g5k-subnets maintains a correspondence between MAC addresses and IP addresses. The Debian system provided on the disk image is configured to use DHCP and the DHCP server will assign the IP corresponding to the MAC address of the virtual machine.

Consequently, you have to choose an IP in the range you have reserved, and set the MAC address of the VM to the associated MAC address.

You can get the list of available IP, and an associated unique MAC address with the following command.

Terminal.png node:
g5k-subnets -im
10.172.0.1      00:16:3E:AC:00:01
10.172.0.2      00:16:3E:AC:00:02
10.172.0.3      00:16:3E:AC:00:03
10.172.0.4      00:16:3E:AC:00:04
10.172.0.5      00:16:3E:AC:00:05
10.172.0.6      00:16:3E:AC:00:06
10.172.0.7      00:16:3E:AC:00:07
10.172.0.8      00:16:3E:AC:00:08
10.172.0.9      00:16:3E:AC:00:09
10.172.0.10     00:16:3E:AC:00:0A
...

Run the guest OS using libvirt

Libvirt is a toolkit for managing virtualization servers. Libvirt is also an abstraction layer for different virtualization solutions, including KVM but also Xen and VMWare ESX.

In our case, we use libvirt on top of KVM.

  • Create a domain file in XML, describing a virtual machine.

eg : domain1.xml

<domain type='kvm'>
 <name>domain1</name>
 <memory>2048000</memory>
 <vcpu>1</vcpu>
 <os>
   <type arch="x86_64">hvm</type>
 </os>
 <clock offset="localtime"/>
 <on_poweroff>destroy</on_poweroff>
 <on_reboot>restart</on_reboot>
 <on_crash>destroy</on_crash>
 <devices>
   <emulator>/usr/bin/kvm</emulator>
   <disk type='file' device='disk'>
     <driver name='qemu' type='qcow2'/>
     <source file='/tmp/domain1.qcow2'/>
     <target dev='vda' bus='virtio'/>
   </disk>
   <interface type='bridge'>
     <source bridge='br0'/>
     <mac address='AA:BB:CC:DD:EE:FF'/>
   </interface>
   <serial type='pty'>
     <source path='/dev/ttyS0'/>
     <target port='0'/>
   </serial>
   <console type='pty'>
     <source path='/dev/ttyS0'/>
     <target port='0'/>
   </console>
 </devices>
</domain>

Note.png Note

- Libvirt will create a virtual interface and attach it to the bridge br0 so your VM can reach the rest of Grid'5000 and access internet
- Adapt this file to your case, you must change the "mac address" field with one of the g5k-subnet addresses

Now, we can run and manage our guest OS with virsh.

  • Run the guest with the following command :
Terminal.png node:
virsh create domain1.xml
  • We can see ou guest is currently running :
Terminal.png node:
virsh list
Id    Name                 State
---------------------------------------
1     domain1              running
  • You can connect to your VM console
    • The default root password is grid5000
    • Use CTRL+] to disconnect from virsh console
Terminal.png node:
virsh console domain1
  • At this point, you can repeat the full process and launch several VMs in parallel.
  • Stop the execution of your VM with:
Terminal.png node:
virsh destroy domain1

Run the guest OS using the kvm command

You can also use kvm to start the virtual machine:

Terminal.png node:
screen kvm -m 2048 -hda /tmp/debian9-x64-min.qcow2 -netdev bridge,id=br0 -device virtio-net-pci,netdev=br0,id=nic1,mac=AA:BB:CC:DD:EE:FF -nographic

This is an example command, feel free to adapt it to your use case (The kvm process is launched in a screen session, if you are not familiar with screen, read its documentation)

SSH to your virtual machine

On Jessie and Stretch, root SSH authentication with password is disabled by default, to SSH to your VM, do the following steps

  1. Log into your VM console using virsh console domain1. The root password is grid5000
  2. Run these command to allow root login with password in ssh config, and reload ssh daemon :
Terminal.png node:
sed -i s/"PermitRootLogin without-password"/"PermitRootLogin yes"/g /etc/ssh/sshd_config
Terminal.png node:
service sshd reload

Finally, you can ssh directly to your VM from anywhere in Grid'5000:

Terminal.png node:
ssh root@g5k-subnet_ip_addr

Contextualize your VMs with cloud-init

As we have seen, we must use the console of our VM to configure SSH and connect to it later. It's a bit annoying if we have many VMs, we would have to manyally configure SSH on each instances within the console.

This part describes how to contextualize your VM using cloud-init.
cloud-init ( https://cloudinit.readthedocs.io/en/latest/ ) runs on startup of the VM and search for a datasource to fetch configurations to apply to the VM, such as :

  • Set the hostname
  • Create users
  • Copy SSH key to root account
  • Mount a device
  • Execute a script
  • ...

This is the contextualization.

On Grid'5000, this datasource is a virtual disk (.iso) that contains the configurations we want.

Create a virtual disk for cloud-init

In this example, we will create a CD containing simple contextualization configuration for cloud-init: It will change the hostname of the VM and add your public SSH key to the root account.

To help you creating cloud-init configuration file, there is a script cloud-init-example.sh you can copy on your node:

Terminal.png node:
cp /grid5000/virt-images/cloud-init-example.sh /tmp/

This script will generate basics configuration files for cloud-init to add your public SSH key to the root account so that you can SSH to the VM without password and without using the console.

Terminal.png node:
cd /tmp && export cloud_init_key=$(cat ~/.ssh/id_rsa.pub) && ./cloud-init-example.sh
Note.png Note

The previous command assume your SSH public key is in ~/.ssh/id_rsa.pub. If not, please put the correct path in the command

You can see 2 files were created in cloud-init-data : meta-data and user-data

  • meta-data contains configuration such as hostname, root SSH key, instance id, ... .You can see the script wrote your SSH public key in this file.
  • user-data can contains more configuration in different format

Now, we can generate an iso file using the following command :

Terminal.png node:/tmp:
genisoimage -output cloud-init-data.iso -volid cidata -joliet -rock cloud-init-data/user-data cloud-init-data/meta-data

The file cloud-init-data.iso is ready to be attached to a VM.
Cloud-init will detect the disk on startup and configure the virtual machine using the informations in meta-data and user-data on the CD.

Start a VM with contextualization

We will run a new VM with contextualization : First we create a new disk image from our base image :

Terminal.png node:/tmp:
qemu-img create -f qcow2 -o backing_file=/tmp/debian9-x64-base.qcow2 /tmp/contextualized-domain.qcow2

We create a new contextualized-domain.xml with this content :

  <domain type='kvm'>
   <name>contextualized-domain</name>
   <memory>2048000</memory>
   <vcpu>1</vcpu>
   <os>
     <type arch="x86_64">hvm</type>
   </os>
   <clock offset="localtime"/>
   <on_poweroff>destroy</on_poweroff>
   <on_reboot>restart</on_reboot>
   <on_crash>destroy</on_crash>
   <devices>
     <emulator>/usr/bin/kvm</emulator>
     <disk type='file' device='disk'>
       <driver name='qemu' type='qcow2'/>
       <source file='/tmp/contextualized-domain.qcow2'/>
       <target dev='vda' bus='virtio'/>
     </disk>
     <disk type='file' device='cdrom'>
       <source file='/tmp/cloud-init-data.iso'/>
       <target dev='vdb' bus='virtio'/>
       <readonly/>
     </disk>
     <interface type='bridge'>
       <source bridge='br0'/>
       <mac address='AA:BB:CC:DD:EE:FF'/>
     </interface>
     <serial type='pty'>
       <source path='/dev/ttyS0'/>
       <target port='0'/>
     </serial>
     <console type='pty'>
       <source path='/dev/ttyS0'/>
       <target port='0'/>
     </console>
   </devices>
  </domain>

You can see that it is the same XML file as in the previous part, except for this addition:

  <disk type='file' device='cdrom'>
    <source file='/tmp/cloud-init-data.iso'/>
    <target dev='vdb' bus='virtio'/>
    <readonly/>
  </disk>

It mounts the ISO in the VM as a CD-ROM.

Note.png Note

- Don't forget to replace the MAC address with one of your g5k-subnets MAC addresses
- If you want to use the kvm command instead of virsh, add the -cdrom /tmp/cloud-init-data.iso option

Start the guest OS and connect to it using SSH:

Terminal.png node:
virsh create contextualized-domain.xml
Terminal.png node:
ssh root@g5k-subnet_ip_addr

You can now SSH to your VM without a password, and without having to use the VM console. Notice that the hostname also changed to example-vm, as specified in the meta-data file.
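
As a quick sanity check, you can run a command over SSH and confirm the contextualized hostname (using the same placeholder address as above); it should print example-vm:

Terminal.png node:
ssh root@g5k-subnet_ip_addr hostname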

Playing with cloud-init

This optional part shows more examples of what can be done with cloud-init.

Run a script on startup with user-data

The current content of user-data is:

#cloud-config
disable_root: false

By default, cloud-init disables the root account.
If you try to connect as root on the VM without enabling the root account in user-data,
you will get a message saying you need to connect as the user debian (your public key will be accepted for the user debian). This user has sudo rights.

The user-data file starts with #cloud-config, telling cloud-init that the file is in cloud-config format. As we will see in the next part, it is a YAML file that describes the configuration cloud-init has to apply on boot.

But user-data can also be a bash script, and that's what we will do here. Replace the content of user-data with:

#!/bin/bash
apt-get update && apt-get install -y lighttpd
cat << EOF > /var/www/html/index.lighttpd.html
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>VM HTTP Server</title>
</head>
<body>
 <h1> Installed and configured with cloud-init </h1>
</body>
</html>
EOF

Generate the ISO file with this new configuration:

Terminal.png node:/tmp:
genisoimage -output cloud-init-data.iso -volid cidata -joliet -rock cloud-init-data/user-data cloud-init-data/meta-data

We will destroy our running VM, create a fresh disk from debian9-x64-base.qcow2, and restart it:

Terminal.png node:/tmp:
virsh destroy contextualized-domain
Terminal.png node:/tmp:
qemu-img create -f qcow2 -o backing_file=/tmp/debian9-x64-base.qcow2 /tmp/contextualized-domain.qcow2
Terminal.png node:/tmp:
virsh create contextualized-domain.xml

After a few moments, we can try:

Terminal.png node:/tmp:
curl http://vm_ip

The script in user-data ran on startup. It installed lighttpd, a small HTTP server, and replaced the default index.html.

We can still SSH to our VM, but not as root, since we removed the option that enabled the root account in user-data:

Terminal.png node:/tmp:
ssh debian@vm_ip
Terminal.png debian@vm:
sudo su
Terminal.png root@vm:
#

Going further with user-data in YAML

We have seen how to use user-data as a startup script.
We will now use it in cloud-config format: a YAML description file that triggers actions on startup.

Copy the following content to user-data and insert your public SSH key where needed. Then regenerate the ISO file:

#cloud-config
groups:
  - foo
  - bar
users:
  - name: foo
    primary-group: foo
    groups: users
    shell: /bin/bash
    ssh-authorized-keys:
      - <insert your public key here>
  - name: bar
    primary-group: bar
    groups: users
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh-authorized-keys:
      - <insert your public key here>
packages:
  - lighttpd
Terminal.png node:/tmp:
genisoimage -output cloud-init-data.iso -volid cidata -joliet -rock cloud-init-data/user-data cloud-init-data/meta-data

The advantage of using this format is readability. We can quickly identify what it will do:

  • Create a user foo in group foo (with no sudo rights)
  • Create a user bar in group bar with sudo rights
  • Install the package lighttpd

You can create a new VM to test this new configuration:

Terminal.png node:/tmp:
virsh destroy contextualized-domain
Terminal.png node:/tmp:
qemu-img create -f qcow2 -o backing_file=/tmp/debian9-x64-base.qcow2 /tmp/contextualized-domain.qcow2
Terminal.png node:/tmp:
virsh create contextualized-domain.xml

You'll be able to connect without a password as foo and bar:

Terminal.png node:/tmp:
ssh bar@vm_ip
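
To verify the difference in sudo rights between the two accounts, here is a quick non-interactive check (vm_ip stands for your VM's address, as above):

ssh foo@vm_ip 'sudo -n true 2>/dev/null && echo "foo: sudo OK" || echo "foo: no sudo"'
ssh bar@vm_ip 'sudo -n true 2>/dev/null && echo "bar: sudo OK" || echo "bar: no sudo"'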

For more examples of what can be done with cloud-config, see: http://cloudinit.readthedocs.io/en/latest/topics/examples.html

Add cloud-init to your own image

You may want to add cloud-init to your own virtual image so you can run custom VMs on Grid'5000.

To do so, simply run

Terminal.png VM:
apt-get update && apt-get install cloud-init

in your VM before exporting it as qcow2.

You can also use virt-customize on an already existing qcow2:

Terminal.png local_pc:
apt-get update && apt-get install libguestfs-tools
Terminal.png local_pc:
virt-customize -a my_image.qcow2 --install cloud-init
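
To confirm that the package actually landed in the image, you can list its configuration directory with virt-ls, which also comes with libguestfs-tools (my_image.qcow2 being the same image as above):

Terminal.png local_pc:
virt-ls -a my_image.qcow2 /etc/cloud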

Multi-site experiment

In this part, to illustrate what can be done using virtual machines on the standard environment, we will start two virtual machines on two sites and make them communicate over the virtualization network.

Reservation

Open 2 terminals and SSH to the frontends of 2 sites; in this example, we use the frontends of Luxembourg and Nancy. Then, reserve two virtualization-capable nodes and two subnets, on the two different sites.

For the rest of the multi-site experiment part, don't forget to run each command on both sites.

Terminal.png frontends(both):
oarsub -I -l slash_22=1+{"virtual!='NO'"}/nodes=1

Network configuration

In this part, we will choose an IP for the 2 virtual machines.

Choose an IP/MAC pair for each VM from the output of g5k-subnets -im. Note that g5k-subnets returns different information on each site. In the following, we assume that you chose 10.144.8.1 (00:16:3e:90:08:01) in Nancy, and 10.172.0.1 (00:16:3e:ac:00:01) in Luxembourg.

Terminal.png nodes(both):
g5k-subnets -im | head
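
If you script your experiment, you can also pick the first IP/MAC pair programmatically; a minimal bash sketch, assuming the two-column output shown by g5k-subnets -im:

read -r VM_IP VM_MAC < <(g5k-subnets -im | head -1)
echo "using $VM_IP with MAC $VM_MAC"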

Instantiate your VMs

Copy a standard virtual machine image

Copy the default virtual machine image from /grid5000/virt-images/debian9-x64-base.qcow2 to /tmp on both nodes:

Terminal.png nodes(both):
cp /grid5000/virt-images/debian9-x64-base.qcow2 /tmp/

Configure cloud-init

To be able to SSH without a password, we will use cloud-init:

Terminal.png node(both):
cp /grid5000/virt-images/cloud-init-example.sh /tmp/
Terminal.png node(both):
cd /tmp && export cloud_init_key=$(cat ~/.ssh/id_rsa.pub) && ./cloud-init-example.sh
Terminal.png node(both):
genisoimage -output cloud-init-data.iso -volid cidata -joliet -rock cloud-init-data/user-data cloud-init-data/meta-data

Create the domain.xml file

The domain.xml file contains the description of your virtual machine. Create it on both sides and adapt it to use a MAC address provided by g5k-subnets -im. The virtual machine will get the IP associated with its MAC address:

  <domain type='kvm'>
   <name>stretch</name>
   <memory>2048000</memory>
   <vcpu>1</vcpu>
   <os>
     <type arch="x86_64">hvm</type>
   </os>
   <clock offset="localtime"/>
   <on_poweroff>destroy</on_poweroff>
   <on_reboot>restart</on_reboot>
   <on_crash>destroy</on_crash>
   <devices>
     <emulator>/usr/bin/kvm</emulator>
     <disk type='file' device='disk'>
       <driver name='qemu' type='qcow2'/>
       <source file='/tmp/debian9-x64-base.qcow2'/>
       <target dev='vda' bus='virtio'/>
     </disk>
     <disk type='file' device='cdrom'>
       <source file='/tmp/cloud-init-data.iso'/>
       <target dev='vdb' bus='virtio'/>
       <readonly/>
     </disk>
     <interface type='bridge'>
       <source bridge='br0'/>
       <mac address='AA:BB:CC:DD:EE:FF'/>
     </interface>
     <serial type='pty'>
       <source path='/dev/ttyS0'/>
       <target port='0'/>
     </serial>
     <console type='pty'>
       <source path='/dev/ttyS0'/>
       <target port='0'/>
     </console>
   </devices>
  </domain>
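
Rather than editing the file by hand, you can substitute the placeholder MAC address with sed; for example on the Nancy node, with the pair chosen above:

Terminal.png node(nancy):
sed -i 's/AA:BB:CC:DD:EE:FF/00:16:3e:90:08:01/' domain.xml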

Launch the two VMs

Terminal.png nodes(both):
virsh create domain.xml

Enjoy!

SSH to your VMs

Terminal.png node(nancy):
ssh root@10.144.8.1
Terminal.png node(luxembourg):
ssh root@10.172.0.1

Install and run iperf

Finally, we will install iperf and measure the bandwidth between the two VMs:

  • install iperf with apt-get;
  • then, run iperf in server mode (-s parameter) on one node, and in client mode (-c parameter) on the other.
Terminal.png vm(nancy):
apt-get update && apt-get install iperf
Terminal.png vm(nancy):
iperf -s
root@vm-1:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.144.8.1 port 5001 connected with 10.172.0.1 port 52389
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.09 GBytes    938 Mbits/sec
Terminal.png vm(luxembourg):
apt-get update && apt-get install iperf
Terminal.png vm(luxembourg):
iperf -c 10.144.8.1
root@vm-1:~# iperf -c 10.144.8.1
------------------------------------------------------------
Client connecting to 10.144.8.1, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.172.0.1 port 52389 connected with 10.144.8.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.09 GBytes    938 Mbits/sec

Another alternative: Xen reference environments

Grid'5000 provides Xen reference environments as an alternative to KVM on the standard environment. This last part is a quick guide to Xen: we will show how to deploy a Xen environment on nodes, create virtual machines, and use g5k-subnets for the network configuration.

Note.png Note

In Xen terminology, a domain U or domU is a virtual machine. The domain 0 or dom0 is the physical machine that hosts the domUs (in our case, the dom0 is the Grid'5000 node you deployed).

Reserve resources and deploy the Xen environment

Terminal.png frontend:
oarsub -I -t deploy -l slash_22=1+nodes=1,walltime=2:00
Terminal.png frontend:
kadeploy3 -e debian9-x64-xen -f $OAR_FILE_NODES -k

DomU network configuration

The image debian9-x64-xen includes a pre-configured domU. The configuration file of this VM is placed in /etc/xen/domU.cfg. Inside this file, you can specify the parameters of your virtual machine:

  • kernel and initrd: a Linux kernel and initrd with Xen domU support.
  • vcpus: the number of virtual CPUs given to the VM.
  • memory: the size (MB) of RAM given to the VM.
  • root: where the root partition is located.
  • disk: which files contain the partitions of your virtual host.
  • name: the hostname of the VM, as displayed by xl list and as given by the system itself.
  • vif: the configuration of the domU's network interfaces.
  • on_poweroff, on_reboot, on_crash: how the Xen hypervisor should react to these events.

You can find the official documentation and other options here: http://xenbits.xen.org/docs/4.9-testing/man/xl.cfg.5.html

The vif line configures the domU's network. It usually contains:

  • a MAC address
  • the bridge name, in our case br0, a bridge that includes the production network interface.
Note.png Note

In the debian9-x64-xen environment, the MAC address is randomly regenerated at each boot; you can prevent this behavior by disabling the xen-g5k service.
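
Assuming xen-g5k is managed as a regular systemd unit in this environment, one way to disable it in the dom0 is:

Terminal.png node:
systemctl disable --now xen-g5k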

Use the default domU

Select 1 IP from your reserved subnet:

Terminal.png frontend:
g5k-subnets -im | head -1
10.172.4.1      00:16:3E:AC:04:01

Edit the file /etc/xen/domU.cfg and replace the MAC address. Then start the domU.

Terminal.png node:
xl create /etc/xen/domU.cfg
Terminal.png node:
xl list
Name         ID   Mem VCPUs      State   Time(s)
Domain-0     0   976     8     r-----      30.7
domU         1   512     1     -b----       4.7

The example VM is already configured to accept the debian9-x64-xen key, so you can SSH to it without a password, and without cloud-init:

Terminal.png node:
ssh root@ip_g5k-subnet

Create a new domU

Select another IP and MAC address, and create a new domU with the xen-create-image command:

Terminal.png frontend:
g5k-subnets -im
...
10.172.4.3      00:16:3E:AC:04:03
...
Terminal.png node:
xen-create-image --dir=/tmp/ --size=10G --hostname=domU2 --role=udev --genpass=0 --password=grid5000 --mac=00:16:3E:AC:04:03 --dhcp --bridge=br0 --memory=512M

At this point, a new domU configuration file (/etc/xen/domU2.cfg) and a new disk image (/tmp/domains/domU2/disk.img) have been generated.

Terminal.png node:
xl create /etc/xen/domU2.cfg

Due to the default Xen configuration in the debian9-x64-xen environment, the host's SSH key was copied during image generation, so you can SSH as root to domU2 without a password:

Terminal.png node:
ssh root@ip_g5k-subnet

Using Grid'5000 qcow2 images

It is possible to run VMs with Grid'5000 qcow2 environments. First, copy the image and the cloud-init setup script to the node (the dom0):

Terminal.png frontend:
scp /grid5000/virt-images/debian9-x64-min.qcow2 /grid5000/virt-images/cloud-init-example.sh root@node:/tmp

To be able to SSH to the VM with your public key, run the following commands:

Terminal.png frontend:
cat ~/.ssh/id_rsa.pub

Copy the output; you will paste it as your SSH key below.

Terminal.png node:
apt-get update && apt-get install genisoimage
Terminal.png node:
cd /tmp; export cloud_init_key="paste your SSH key" && /tmp/cloud-init-example.sh
Terminal.png node:/tmp:
genisoimage -output cloud-init-data.iso -volid cidata -joliet -rock cloud-init-data/user-data cloud-init-data/meta-data

Then we create a domU config file /etc/xen/g5k_image.cfg:

#
# Configuration file for the Xen instance domU, created
# by xen-tools 4.7 on Fri Jun  1 00:48:00 2018.
#

#
#  Kernel + memory size
#
kernel      = '/vmlinuz'
extra       = 'elevator=noop'
ramdisk     = '/initrd.img' 

vcpus       = '1'
memory      = '512'

#  Disk device(s).
#
root        = '/dev/xvda1 ro'
disk        = [
                  'format=qcow2, vdev=xvda, access=rw, target=/tmp/debian9-x64-min.qcow2',
                  'format=raw, vdev=hdc, access=ro, devtype=cdrom, target=/tmp/cloud-init-data.iso'
              ]

#  Hostname
#
name        = 'g5k_image'

#  Networking
#
dhcp        = 'dhcp'
vif         = [ 'mac=MAC g5k-subnet,bridge=br0' ]

#  Behaviour
#
on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'restart'


The important parts in this configuration file are:

  • the two disk entries: one for the image of the environment and one for cloud-init-data.iso
  • the MAC address, which you need to replace (see the sed sketch below)
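
A sketch of that substitution, where VM_MAC is a free MAC address picked from g5k-subnets -im (as shown earlier):

Terminal.png node:
sed -i "s/MAC g5k-subnet/$VM_MAC/" /etc/xen/g5k_image.cfg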

Finally, run the VM:

Terminal.png node:
xl create /etc/xen/g5k_image.cfg
Terminal.png frontend:
ssh root@VM_IP

Common administrative commands

  • List the running domUs with the following command:
Terminal.png dom0:
xl list
  • Connect to a domU using the Xen console
Terminal.png dom0:
xl console <domU-name>
  • Start a domU
Terminal.png dom0:
xl create /etc/xen/<domU-name>.cfg
  • Shut down a domU properly
Terminal.png dom0:
xl shutdown <domU-name>
  • Instantly terminate a domU
Terminal.png dom0:
xl destroy <domU-name>
  • Print information about the dom0
Terminal.png dom0:
xl info
  • Show real-time monitoring information:
Terminal.png dom0:
xl top

Going further

Please refer to the official Xen documentation and the Debian documentation.