Advanced Kadeploy

What you need to know before starting

The first thing to understand is that by using kadeploy3, you will be running a command that attempts to remotely reboot many nodes at a time and boot them using configuration files hosted on a server. On some clusters, this operation has a non-zero failure rate, so you might experience failures on some operations during this tutorial. In this case, retry. The system doesn't retry for you, as that would imply waiting for long timeouts in all cases, even those where a 90% success rate is sufficient.
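If you script your deployments, a simple retry loop is often enough to cope with these occasional failures. A minimal sketch, assuming a job shell where $OAR_NODE_FILE is set and assuming kadeploy3 returns a non-zero exit status when the deployment fails (verify this with your version):

#!/bin/bash
# Try the deployment up to 3 times, stopping at the first success.
for attempt in 1 2 3; do
    if kadeploy3 -e squeeze-x64-base -f "$OAR_NODE_FILE" -k; then
        echo "deployment succeeded on attempt $attempt"
        break
    fi
    echo "deployment attempt $attempt failed, retrying..."
done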

What is an Environment?

Where we describe what exactly an image, a kernel, an initrd and a postinstall are

An environment in kadeploy3 is a set of files describing a fully functional operating system. To set up an operating system, kadeploy3 uses up to 4 files in the most common cases:

  1. An image
    • An image is a file containing all the operating system files. It can be a compressed archive (e.g. a tgz file) or a dump of a device (e.g. a dd file). In this tutorial, you will learn to build new images for Kadeploy3
  2. A kernel file
    • For Unix-based environments, the kernel entry specifies which kernel to boot. It is the full path to the kernel file inside the image.
  3. initrd file (optional)
    • For Linux-based environments, the optional initrd entry points to an initial ramdisk, which is used as the root filesystem during the boot sequence. More information: Initrd on Wikipedia
  4. A postinstall file (optional)
    • The postinstall file allows the environment to be correctly adapted to the specificities of each cluster. It is not mandatory for a Kadeploy3 environment, but if you know what you are doing, feel free to define one.

Once you have this set of files, you can describe your environment to kadeploy3. This description represents an environment in the kadeploy3 sense.
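For illustration, here is a minimal sketch of such a description, written to a file from the frontend with a shell here-document; all values are placeholders, and the format is the same one used further down in this tutorial:

# Write a minimal environment description (all values are examples)
cat > myenv.env <<'EOF'
name : my-environment
version : 1
description : My customized OS
author : me@domain.tld
tarball : /path/to/image.tgz|tgz
postinstall : /grid5000/postinstalls/debian-x64-min-1.0-post.tgz|tgz|traitement.ash /rambin
kernel : /vmlinuz
initrd : /initrd.img
fdisktype : 83
filesystem : ext3
environment_kind : linux
visibility : private
EOF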

How can I make my own environment?

There are two main ways to create your own environment. The first is to deploy and customize an existing environment; the other is to create an environment from scratch, starting from a classical ISO installation. In either case, you can customize your environment and save it in order to reuse it later.

Search and deploy an existing environment

Search an environment

Grid'5000 maintains several reference environments, based on various versions of Debian, directly available on any site. They are called reference environments because they can be used to generate customized environments, and for each Debian version you will find several variants of them.
The complete list of available environments, with their different variants and the sites where they are available, is given on the following wiki page:

From that same page, each variant of each reference environment links to another page giving a thorough description of the environment's content, how it was built, and how to use it with kadeploy3. An example is given in the next link:

An environment library is maintained on each site in the /grid5000 directory of the frontend node, so all environments available on a site are stored in that directory.

To deploy a registered environment, you must know its name as registered in the Kadeploy database. It is the first information on the environment description page. This tutorial uses the squeeze-x64-base environment.

You can also list all the environments available on a site by using the kaenv3 command:

Terminal.png frontend:
kaenv3 -l

This command lists all public as well as your private environments.

We distinguish three levels of visibility for an environment:

  • public: All users can see those environments. Only administrators can tag them this way.
  • shared: All users can see the environment, provided they use the -u option to specify the user the environment belongs to.
  • private: The environment is only visible to the user it belongs to.

For example, a shared environment added by user user is listed this way:

Terminal.png frontend:
kaenv3 -l -u user

Being able to reproduce experiments is a desirable feature, so you should always try to control as much as possible the environment the experiment runs in. We will therefore check that the environment chosen in the environment directory is the one available on a given cluster. On the cluster where you would like to deploy, type the following command to print information about an environment:

Terminal.png frontend:
kaenv3 -p squeeze-x64-base -u deploy

You must specify the user option. In our case, all public environments belong to user deploy.

Check that the tarball file is the expected one by checking its name and its checksum which you should find on the identification sheet:

Terminal.png frontend:
md5sum /grid5000/images/squeeze-x64-base-1.5.tgz

In theory, you should also check the post-install script. A post-install script adapts an environment to the site it is deployed on. In the same way as for environments, you should be able to find a description of the post-install script on pages such as here. Post-install scripts are an evolving matter, so don't be too worried if you don't find things exactly as described here. If everything seems ok, please proceed to the next step.

Make a job on a deployable node

By default, Grid'5000 nodes run the production environment, which already contains most of the important features and can be used to run experiments. However, you will not have administrative (root) privileges on these nodes, so you will not be able to customize them at will. In fact, only reference environments can be customized at will, and to have the right to deploy a reference environment on a node, you must supply the -t deploy option when submitting your job.

For this part of the tutorial, the job will be interactive (-I), of the deploy type (-t deploy), on only one machine (-l nodes=1) for environment customization (we give ourselves 3 hours with -l walltime=3). This gives us the following command, which will open a new shell session on the frontend node:

Terminal.png frontend:
oarsub -I -t deploy -l nodes=1,walltime=3

Since not all Grid'5000 nodes have console access, it is recommended, in the context of this tutorial, to add the option rconsole="YES" to your reservation command.

Terminal.png frontend:
oarsub -I -t deploy -l '{rconsole="YES"}/nodes=1,walltime=3'

Indeed, when you submit a job of the deploy type, a new shell is opened on the frontend node and not on the first machine of the job, as it would be for standard jobs. When you exit from this shell, the job ends. The shell is populated with OAR_* environment variables. You should look at the list of available variables to get an idea of the information you can use to script deployments later. As usual, if the job is successful, you will get the name of the machine allocated to your job with:

Terminal.png frontend:
cat $OAR_FILE_NODES
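For instance, a quick way to see all the OAR-related information exported in this shell (a simple sketch):

# List every OAR_* variable available in the job's shell
env | grep '^OAR' | sort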
Warning.png Warning

At the end of a reservation made with the -t deploy option, the reserved nodes will be restarted to boot on the production environment and thus be available to other users. So you should only use the -t deploy option when you actually intend to deploy a reference environment on the reserved nodes.

Deploy a reference environment

To deploy your environment, you must discover the nodes you were allocated by OAR. The simplest way of doing this is to look at the content of the file whose name is stored in $OAR_FILE_NODES (this variable is also labelled $OAR_NODE_FILE) or at the messages displayed when the job was created. This variable simply stores the path of a file containing the FQDNs of all your reserved nodes. Deployment happens when you run the following command:

Terminal.png frontend:
kadeploy3 -e squeeze-x64-base -m node.site.grid5000.fr

You can automate this to deploy on all nodes of your job with Kadeploy3's -f option:

Terminal.png frontend:
kadeploy3 -e squeeze-x64-base -f $OAR_FILE_NODES


If you want to be able to connect to the node as root without being prompted for a password, you can use the -k option in one of two ways:

  • You can either specify the public key that will be copied to /root/.ssh/authorized_keys on the deployed nodes:
Terminal.png frontend:
kadeploy3 -e squeeze-x64-base -f $OAR_FILE_NODES -k ~/.ssh/my_special_key.pub
  • Or you can supply the -k option without an argument. This will automatically copy your ~/.ssh/authorized_keys and replace the /root/.ssh/authorized_keys file on the deployed nodes.
Terminal.png frontend:
kadeploy3 -e squeeze-x64-base -f $OAR_FILE_NODES -k

The second case is actually the simplest way. One of its advantages is that, after deployment, you will be able to connect directly from your local computer to the deployed nodes, in the same way you connect to the frontend of the site where those nodes are.
Once kadeploy has run successfully, the allocated node is deployed with the squeeze-x64-base environment. It is then possible to tune this environment according to your needs.

Note.png Note

It is not necessary here, but you can specify the destination partition with the -p option. You can find all the information about the partition table used on Grid'5000 on the Node storage page.

Connect to the deployed environment and customize it

Connection

On reference environments managed by the staff, you can log in through ssh using the root account (kadeploy checks that sshd is running before declaring a deployment successful). To connect to the node, type:

Terminal.png frontend:
ssh root@node.site.grid5000.fr
Note.png Note

If you have not deployed the nodes with a public key using the -k option, you will be asked for a password. The default root password for all reference environments is grid5000. Please check the environment descriptions.

In case this doesn't work, please take a look at the kadeploy section of the Sidebar > FAQ

Customization (example with authentication parameters)

Using the root account for all your experiments is possible, but you will probably be better off creating a user account. You could even create user accounts for all the users of your environment. However, if the number of users is greater than 2 or 3, you had better configure an LDAP client by tuning the post-install script or by using a fat version of this script (beyond the scope of this tutorial). Otherwise, there are two ways of doing things.

The simplest is to create a dedicated account (e.g. the generic user g5k) and to move all experiment data in at the beginning of an experiment and back out at the end, using scp or rsync. A more elaborate approach is to locally recreate your usual Grid'5000 account, with the same uid/gid, on your deployed environment. This second approach can simplify file rights management if you need to store temporary data on shared volumes.
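With the first approach, moving data in and out can be done with rsync from the frontend; a minimal sketch, assuming a g5k account exists on the deployed node and that data/ and results/ are just example directories:

# Push input data to the node before the experiment...
rsync -avz data/ g5k@node.site.grid5000.fr:data/
# ...and pull the results back at the end
rsync -avz g5k@node.site.grid5000.fr:results/ results/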

To create your local unix group on your environment, first find your primary group on the frontend node with:

Terminal.png frontend:
id

The output of this command is for instance of the form:

uid=19002(dmargery) gid=19000(rennes) groups=9998(CT),9999(grid5000),19000(rennes)

Where:

userId = 19002
userLogin = dmargery
groupId = 19000
groupName = rennes

Then, as root, on the deployed machine:

Terminal.png node:
addgroup --gid groupId groupName

Now, to enable access to your local user account on your environment, as root, on the deployed machine:

Terminal.png node:
adduser --uid userId --ingroup groupName userLogin

Finally, as root, become the newly created user and place your ssh key:

Terminal.png node:
su - userLogin
mkdir ~/.ssh
exit
cp /root/.ssh/authorized_keys /home/userLogin/.ssh/
chown userLogin:groupName /home/userLogin/.ssh/authorized_keys

Now you can log in to the node with your user account:

Terminal.png frontend:
ssh userLogin@node.site.grid5000.fr
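If you do this often, the whole procedure can be scripted from the frontend; a minimal sketch, assuming password-less root access to the node (deployment with -k). The --disabled-password and --gecos options are only there to make adduser non-interactive:

#!/bin/bash
# Recreate the current Grid'5000 account (same uid/gid) on a deployed node
NODE=$1                      # e.g. node.site.grid5000.fr
MY_LOGIN=$(id -un)
MY_UID=$(id -u)
MY_GID=$(id -g)
MY_GROUP=$(id -gn)

ssh root@"$NODE" "addgroup --gid $MY_GID $MY_GROUP && \
  adduser --disabled-password --gecos '' --uid $MY_UID --ingroup $MY_GROUP $MY_LOGIN && \
  mkdir -p /home/$MY_LOGIN/.ssh && \
  cp /root/.ssh/authorized_keys /home/$MY_LOGIN/.ssh/ && \
  chown -R $MY_LOGIN:$MY_GROUP /home/$MY_LOGIN/.ssh"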

Adding software to an environment

Where you learn to install software using the package repositories of your distribution on Grid'5000 (using proxies)...

You can therefore update your environment (to add missing libraries that you need, or remove packages that you don't need, which sizes down the image and speeds up the deployment process, etc.) using:

Terminal.png node:
apt-get update
apt-get upgrade
apt-get install list of desired packages and libraries
apt-get --purge remove list of unwanted packages
apt-get clean


Note.png Note

On reference environments, the apt-* commands are automatically configured to use the proper proxy. But if you need outside access over HTTP, HTTPS or FTP with another command (wget, git, ...), you will have to configure the proxy by following the documentation on the Web_proxy_client page.
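For tools that honour the standard proxy environment variables, a minimal sketch is to export them in your shell on the node (the proxy address matches the one given later in this tutorial; check the Web_proxy_client page for the recommended setup on your site):

# Example only: replace SITE with the site your node belongs to
export http_proxy=http://proxy.SITE.grid5000.fr:3128
export https_proxy=http://proxy.SITE.grid5000.fr:3128
export ftp_proxy=http://proxy.SITE.grid5000.fr:3128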

Create a new environment from a customized environment

We now need to save this customized environment, where you have a user account, to be able to use this account again each time you deploy it.
The first step in creating an environment is to make an archive of the node you just customized. Because of the various implementations of the /dev filesystem tree, this can be a more or less complex operation.

Use the provided tools

  • You can use TGZ-G5K, a script installed in all reference environments. You can find all instructions on how to use it on its TGZ-G5K page.

Examples :

Terminal.png frontend:
ssh root@node tgz-g5k > path_to_myimage.tgz

or

Terminal.png node:
tgz-g5k login@frontend:path_to_myimage.tgz

This will create the file path_to_myimage.tgz in your home directory on the frontend. The first example is to be preferred, as it can run password-less or passphrase-less without adding your private key to the image.
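A quick sanity check of the archive from the frontend can save a failed deployment later (a simple sketch):

# Check the size of the image and peek at its first entries
ls -lh path_to_myimage.tgz
tar -tzf path_to_myimage.tgz | head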

Describe the newly created environment for deployments

Kadeploy3 works with an environment description. The easiest way to create a description for your new environment is to adapt the description of the environment it is based on. This tutorial is based on the squeeze-x64-base environment of user deploy, so we print its description to a file that will be used as a starting point:

Terminal.png frontend:
kaenv3 -p squeeze-x64-base -u deploy > mysqueeze-x64-base.env

It should be edited to change the name, description, author lines, as well as the tarball line. The visibility line should be removed, or changed to shared or private. Once this is done, the newly created environment can be deployed using:

Terminal.png frontend:
kadeploy3 -f $OAR_NODEFILE -a mysqueeze-x64-base.env

This kind of deployment is called an anonymous deployment because the description is not recorded in the Kadeploy3 database. It is particularly useful while you are tuning your environment, when you may have to update the environment tarball several times.

Once your customized environment is successfully tuned, you can save it to the Kadeploy3 database so that you can deploy it directly with kadeploy3 by specifying its name:

Terminal.png frontend:
kaenv3 -a mysqueeze-x64-base.env

and then (if your environment is named "mysqueeze-base"):

Terminal.png frontend:
kadeploy3 -f $OAR_NODEFILE -e mysqueeze-base

With the kaenv3 command, you can manage your environments at your ease. Please refer to its documentation for an overview of its features.

Deploy an environment from a classical ISO installation

First of all, this method is OS independent, so you can create Kadeploy3 tgz (Linux-based systems) or ddgz (other systems) images for any kind of OS from a CD/DVD ISO.

This procedure uses virtualization (KVM) to boot the CD/DVD ISO of the OS installer. The system will be installed on a physical partition of a node and then copied as a Kadeploy3 system image.

To be sure the installed system will be bootable, we will make the OS installer install the system on the Grid'5000 deployment partition (more information).

To make this possible, we will deploy the hypervisor's system on the temporary partition and then install the new system on the deployment partition.

Preparation

  • Download the CD/DVD ISO of your OS installer (referred to as OS_ISO below) and upload it to the frontend of the target site (help here)
Terminal.png local:
scp OS_ISO USERNAME@access.grid5000.fr:SITE/

Note: In this tutorial we will use an Ubuntu Server 64-bit ISO image file (available on the Nancy frontend: /grid5000/iso/ubuntu-11.10-server-amd64.iso)

  • Make a reservation for 1 node, in deploy mode and destructive mode (to be able to deploy on the temporary partition); two hours should be enough.

Terminal.png frontend:
oarsub -I -t deploy -t destructive -l nodes=1,walltime=2
  • Deploy a minimal system on the node's temporary partition, using your ssh key (since Grid'5000 is more Debian friendly, let's say Debian/squeeze)
Terminal.png frontend:
kadeploy3 -e squeeze-x64-min -p 5 -k -f $OAR_NODEFILE
  • Connect to the node and install the needed packages
Terminal.png frontend:
ssh root@NODE

Run the CD/DVD OS installer using KVM/VNC

  • Preparation
    Copy the OS's ISO to the node
Terminal.png frontend:
scp OS_ISO root@NODE:
  • Clean the old system partition /dev/sda3
Terminal.png NODE:
echo -e 'd\n3\nn\np\n\n\nt\n3\n0\nw\n' | fdisk /dev/sda && sync && partprobe /dev/sda
  • Connect to the node and install vncviewer and kvm on the system.

First, update the package definitions

Terminal.png NODE:
apt-get -y update

Then install the needed packages

Terminal.png NODE:
apt-get install -y vncviewer kvm
  • Launch the virtual machine, booting on the OS's ISO, using VNC output
Terminal.png NODE:
kvm -drive file=/dev/sda -cpu host -m 1024 -net nic,model=e1000 -net user -k fr -vnc :1 -boot d -cdrom OS_ISO

Note: It is currently hard to build an image from KVM for nodes whose network devices need specific drivers (bnx2, ...).

To be sure your node's network device is compatible with the e1000e driver, you can check the API using:

Terminal.png frontend:
curl -s -k https://api.grid5000.fr/stable/grid5000/sites/SITE/clusters/CLUSTER/nodes/NODE

(The node has to be specified by its basename: griffon-42, not griffon-42.nancy.grid5000.fr)
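If you only care about the driver, you can filter the API answer; a rough sketch, assuming the node's JSON description lists its network adapters with a driver field (check the actual output on your site):

# Print only the driver entries from the node description (NODE is the basename, e.g. griffon-42)
curl -s -k https://api.grid5000.fr/stable/grid5000/sites/SITE/clusters/CLUSTER/nodes/NODE \
  | grep -o '"driver"[^,}]*'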

  • Connect to the frontend using SSH X11 forwarding
Terminal.png local:
ssh -Xt USERNAME@access.grid5000.fr 'ssh -X SITE'
  • Get the screen of our virtual machine using VNC
Terminal.png frontend:
ssh -X root@NODE 'vncviewer :1'

Note: If your OS installer changes the screen resolution, your vncviewer will be closed; just relaunch the command to get the screen back.

  • Installation process IMPORTANT instructions
    • You MUST install your system and its root on a single partition: /dev/sda3
    • The system size is limited to Grid'5000's deployment partition default size (around 6GiB)
    • If your installer needs access to the Internet, you should specify a proxy server: http://proxy.SITE.grid5000.fr:3128 (the allowed accesses are listed here).
    • You must install an SSH server
    • A bootloader should be installed on the partition /dev/sda3 and not on the MBR
  • Install the system

Note: after the installation process, the virtual machine will fail to boot; this is normal. You can close vncviewer and kvm.

Create a Kadeploy3 image of the filesystem of our OS

  • Create the mounting point directory
Terminal.png frontend:
ssh root@NODE 'mkdir -p /mnt/myos'
  • Mount the partition
Terminal.png frontend:
ssh root@NODE 'partprobe'
Terminal.png frontend:
ssh root@NODE 'mount /dev/sda3 /mnt/myos'

Note: If this command fails, you can try to use this command first: partprobe /dev/sda

Warning.png Warning

You have to disable SELinux (for example on CentOS, put "SELINUX=disabled" in /mnt/myos/etc/selinux/config)
You have to remove /mnt/myos/etc/udev/rules.d/70-*
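Run from the node while the image is still mounted on /mnt/myos, those two fixes could look like this (a sketch; the sed expression assumes a standard SELinux configuration file):

# Disable SELinux in the mounted image (CentOS-like systems)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /mnt/myos/etc/selinux/config
# Remove the persistent udev rules recorded during the KVM installation
rm -f /mnt/myos/etc/udev/rules.d/70-*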

Note.png Note

You can put the internal DNS server in the image's /etc/resolv.conf
You can add a proxy to update packages: take a look at Web_proxy_client

  • Save the filesystem in a tgz archive. tgz-g5k documentation is available here
Terminal.png frontend:
ssh root@NODE 'tgz-g5k --root /mnt/myos' > IMAGE_FILE.tgz
  • Create an environment description file such as:
###
name : IMAGE_NAME
version : 1
description : My OS Image
author : me@domain.tld
tarball : /path/to/IMAGE_FILE.tgz|tgz
postinstall : /grid5000/postinstalls/debian-x64-min-1.0-post.tgz|tgz|traitement.ash /rambin
kernel : /path/to/the/kernel/in/the/image
initrd : /path/to/the/initrd/in/the/image
fdisktype : 83
filesystem : ext3
environment_kind : linux
visibility : private
  • You can use Grid'5000 files for the postinstall
Note.png Note

For Linux systems, most of the time the path to the kernel file is /vmlinuz and the path to the initrd is /initrd.img. You can locate those files by connecting to NODE and checking the /mnt/myos directory.
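To check where they actually are in your image, you can list them from the node while /mnt/myos is still mounted (a simple sketch):

# /vmlinuz and /initrd.img are usually symlinks to files in /boot
ls -l /mnt/myos/vmlinuz* /mnt/myos/initrd.img* /mnt/myos/boot/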

  • Test it !
Terminal.png frontend:
kadeploy3 -a IMAGE_DESC_FILE -m NODE -k

Use disk(s) as I want

In some cases, kadeploy's default handling of partitions is too limited and we need to use the disks as we want (e.g. to deploy our environment in an optimal way). There are two main ways to do that:

  • simply deploy on another existing partition (sda2 or sda5)
  • repartition disks entirely and/or use several disks (such as sdb or sdc on the hercule cluster)

Deploy on sda2 or sda5

First, as this kind of deployment will break the node's standard operation, you must tell OAR that the node should be redeployed entirely after the reservation, using the -t destructive option:

Terminal.png frontend:
oarsub -t deploy -t destructive -l '/nodes=1,walltime=05:00:00' -p "cluster='hercule'" -I

Then you can deploy on sda2 or sda5 with the -p 2 or -p 5 parameter:

Terminal.png frontend:
kadeploy3 -e squeeze-x64-nfs -f $OAR_NODEFILE -p 2 -k

Repartition entirely during deployment (simple case)

Since kadeploy 3.1.6, we can easily customize Kadeploy3's automata. It is now possible to add custom pre, post or substitute operations to each step. In a custom operation, it is possible to send a file, execute a command or run a script.

This feature is explained in Kadeploy3's documentation (available on Kadeploy3's website), in section 4.2.2, Use Case 10 and section 4.7.

In this example, we will make a custom partitioning during the deployment.

We can repartition the disks with a new partition scheme such as:

  • swap linux-swap on sda1
  • root ext3 on sda2
  • data1 ext3 on sdb1
  • data2 ext3 on sdc1

Note that only a few clusters have more than one disk (hercule, for example).

You must tell OAR that it should redeploy the node entirely after the reservation, with the -t destructive parameter:

Terminal.png frontend:
oarsub -t deploy -t destructive -l '/nodes=1,walltime=05:00:00' -p "cluster='hercule'" -I

Then you should create a custom kadeploy operation. In this custom operation, we will repartition the disk with parted and redefine the partitioning step.

  • Create a file to redefine partitions (file is called partitions here):
mklabel msdos
u GB mkpart primary 0% 30
u GB mkpart primary 30 100%
align-check optimal 1
align-check optimal 2
select /dev/sdb
mklabel msdos
u GB mkpart primary 0% 10% 
align-check optimal 1
select /dev/sdc
mklabel msdos
u GB mkpart primary 0 15 
align-check optimal 1
  • Create a file to format partitions (file is called format here):
#!/bin/sh
set -e
# formatting /media/data1
mkfs -t ext3 -b 4096 -O sparse_super,filetype,resize_inode,dir_index -q /dev/sdb1
# formatting /media/data2
mkfs -t ext3 -b 4096 -O sparse_super,filetype,resize_inode,dir_index -q /dev/sdc1
  • Create a file to redefine kadeploy operation (file is called customparted.yml here):
---
SetDeploymentEnvUntrusted:
# define partitioning step
  create_partition_table:
      substitute:
        - action: send
          file: partitions
          destination: $KADEPLOY_TMP_DIR 
          name: send_partitions
        - action: exec
          name: partitioning_with_parted 
          command: parted -a optimal /dev/sda --script $(cat $KADEPLOY_TMP_DIR/partitions)
# add a formatting step to kadeploy
  format_deploy_part:
      post-ops:
        - action: run 
          name: format_with_mkfs 
          file: format
# we don't need these two steps, so we skip them.
  format_tmp_part:
      substitute:
        - action: exec
          name: remove_format_tmp_part_step 
          command: /bin/true
  format_swap_part:
      substitute:
        - action: exec
          name: remove_format_swap_part_step 
          command: /bin/true
  • deploy your environment with this custom operation (don't forget to indicate the root partition with -p 2):
Terminal.png frontend:
kadeploy3 -e squeeze-x64-min -f $OAR_NODE_FILE -k -p 2 --set-custom-operations ./customparted.yml
Warning.png Warning

In some cases you may need to increase a step's timeout (for long formatting operations, for example); see Advanced_Kadeploy#Adjusting timeout for some environments for details.

  • The new data partitions are not mounted at boot. To mount them, do:
Terminal.png NODE:
mkdir -p /media/data1
Terminal.png NODE:
mkdir /media/data2
Terminal.png NODE:
mount /dev/sdb1 /media/data1
Terminal.png NODE:
mount /dev/sdc1 /media/data2
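If you want these data partitions to be mounted automatically at boot, you could also append entries to the node's /etc/fstab, in the spirit of the fstab used in the complex case below; a sketch, assuming the ext3 filesystems created above:

# Run as root on the node: append mount entries for the extra data partitions
cat >> /etc/fstab <<'EOF'
/dev/sdb1  /media/data1  ext3  defaults  1  2
/dev/sdc1  /media/data2  ext3  defaults  1  2
EOF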

Repartition entirely during deployment (complex case)

As in the simple case above, we rely on Kadeploy3's custom operations (available since kadeploy 3.1.6; see the previous section and Kadeploy3's documentation, section 4.2.2, Use Case 10 and section 4.7) to perform a custom partitioning during the deployment, this time with a more complex scheme.

Imagine that you want to use a partitioning scheme like this:

  • swap linux-swap on sda1
  • root ext3 on sda2
  • home ext4 on sdb1
  • tmp ext2 on sdb2
  • var ext3 on sdc1
  • usr ext3 on sdc2

So you have made your reservation with:

Terminal.png frontend:
oarsub -t deploy -t destructive -l nodes=1,walltime=05:00:00 -p "cluster='hercule'" -I
  • Create a file to redefine partitions (file is called partitions here):
mklabel msdos
u GB mkpart primary 0% 30
u GB mkpart primary 30 100%
align-check optimal 1
align-check optimal 2
select /dev/sdb
mklabel msdos
u GB mkpart primary 0% 10% 
u GB mkpart primary 10% 20% 
align-check optimal 1
align-check optimal 2 
select /dev/sdc
mklabel msdos
u GB mkpart primary 0 15 
u GB mkpart primary 15 30
align-check optimal 1
align-check optimal 2
  • Create a file to format partitions (file is called format here):
#!/bin/sh
set -e
# formatting /home/
mkfs -t ext4 -b 4096 -O sparse_super,filetype,resize_inode,dir_index -q /dev/sdb1
# formatting /tmp/
mkfs -t ext2 -b 4096 -O sparse_super,filetype,resize_inode,dir_index -q /dev/sdb2
# formatting /var/
mkfs -t ext3 -b 4096 -O sparse_super,filetype,resize_inode,dir_index -q /dev/sdc1
# formatting /usr/
mkfs -t ext3 -b 4096 -O sparse_super,filetype,resize_inode,dir_index -q /dev/sdc2
  • Create a file to mount partitions (file is called mount here):
#!/bin/sh
set -e
#mount home
mkdir /mnt/dest/home
mount /dev/sdb1 /mnt/dest/home/ 
#mount tmp
mkdir /mnt/dest/tmp
mount /dev/sdb2 /mnt/dest/tmp/
#mount var
mkdir /mnt/dest/var
mount /dev/sdc1 /mnt/dest/var/
#mount usr
mkdir /mnt/dest/usr
mount /dev/sdc2 /mnt/dest/usr/
  • Create a file to redefine kadeploy operation (file is called customparted.yml here):
---
SetDeploymentEnvUntrusted:
# define partitioning step
  create_partition_table:
      substitute:
        - action: send
          file: partitions
          destination: $KADEPLOY_TMP_DIR
          name: send_partitions
        - action: exec
          name: partitioning_with_parted
          command: parted -a optimal /dev/sda --script $(cat $KADEPLOY_TMP_DIR/partitions)
# define format step
  format_deploy_part:
      post-ops:
        - action: run
          name: format_with_mkfs
          file: format
# mount partitions home, tmp, var, usr
  mount_deploy_part:
      post-ops:
        - action: run
          name: mount_other_partitions
          file: mount
# we don't need those two steps (defined just before), so we substitute them with nothing
  format_tmp_part:
      substitute:
        - action: exec
          name: remove_format_tmp_part_step
          command: /bin/true
  format_swap_part:
      substitute:
        - action: exec
          name: remove_format_swap_part_step
          command: /bin/true
  • grab and untar the classical postinstall so we can include our own fstab
Terminal.png frontend:
cp /grid5000/postinstalls/debian-x64-min-1.1-post.tgz .
Terminal.png frontend:
tar -xvzf debian-x64-min-1.1-post.tgz
  • change the fstab:
Terminal.png frontend:
vim dest/etc/fstab
# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type> <options>  <dump>  <pass>
proc            /proc           proc   defaults   0       0
sysfs           /sys            sysfs  defaults   0       0
devpts          /dev/pts        devpts gid=5,mode=620 0   0
tmpfs           /dev/shm        tmpfs  defaults   0       0
/dev/sda1       none            swap   sw         0       0
/dev/sda2       /               ext3   errors=remount-ro  1  1
/dev/sdb1       /home   ext4   defaults   1       2
/dev/sdb2       /tmp    ext2   defaults   1       2
/dev/sdc1       /var    ext3   defaults   1       2
/dev/sdc2       /usr    ext3   defaults   1       2
  • rebuild the postinstall tarball
Terminal.png frontend:
tar -cvzf debian-min-post.tgz dest prepostinst traitement.ash
  • create an environment description that uses the custom postinstall (file is called custompost.dsc here):
###
name : wheezy-custom
version : 1
description : Wheezy
author : emile.morel@inria.fr
tarball : /grid5000/images/wheezy-x64-min-0.4.tgz|tgz
postinstall : ./debian-min-post.tgz|tgz|traitement.ash /rambin
kernel : /vmlinuz
initrd : /initrd.img
fdisktype : 83
filesystem : ext3
environment_kind : linux
visibility : shared
demolishing_env : false
  • finally, deploy with your custom postinstall and your custom operations:
Terminal.png frontend:
kadeploy3 -a custompost.dsc -f $OAR_NODE_FILE -p 2 -k --set-custom-operations ./customparted.yml --force-steps "SetDeploymentEnv|SetDeploymentEnvUntrusted:1:450&BroadcastEnv|BroadcastEnvKastafior:1:3600&BootNewEnv|BootNewEnvClassical:0:400"

Tuning the Kadeploy3 deployment workflow

kadeploy3 allows you to fully modify the deployment workflow.

First of all you have to understand the different steps of a deployment. There are 3 macro-steps:

  1. SetDeploymentEnv: this step aims at setting up the deployment environment that contains all the required tools to perform a deployment;
  2. BroadcastEnv: this step aims at broadcasting the new environment to the nodes and writing it to disk;
  3. BootNewEnv: this step aims at rebooting the nodes on their new environment.

kadeploy3 provides several implementations for each of these 3 macro-steps. You can consult the list on the kadeploy3 page. On Grid'5000, we use the following implementations by default on all our clusters:

  • SetDeploymentEnv -> SetDeploymentEnvUntrusted : use an embedded deployment environment
  • BroadcastEnv -> BroadcastEnvKastafior : use the Kastafior tool to broadcast the environment
  • BootNewEnv -> BootNewEnvKexec : the nodes use kexec to reboot (if it fails, a BootNewEnvClassical, classical reboot, will be performed)

Each one of these implementations is divided into micro-steps. You can see the names of those micro-steps if you use the kadeploy3 option --verbose-level 4, and to see what is actually executed during those micro-steps you can add kadeploy3's debug option, -d:

Terminal.png frontend:
kadeploy3 -f $OAR_FILE_NODES -k -e squeeze-x64-base --verbose-level 4 -d > ~/kadeploy3_steps

This command stores the kadeploy3 standard output in the file ~/kadeploy3_steps. Let's analyse its content:

Terminal.png frontend:
grep "Time in" ~/kadeploy3_steps

This command prints on the terminal all the micro-steps executed during the deployment process, and the time spent in each of them. Here are the micro-steps that you should see:

  1. SetDeploymentEnvUntrusted-switch_pxe: Configures the PXE server so that the node will boot on an environment that contains all the tools required to perform the deployment
  2. SetDeploymentEnvUntrusted-reboot: Sends a reboot signal to the node
  3. SetDeploymentEnvUntrusted-wait_reboot: Waits for the node to restart
  4. SetDeploymentEnvUntrusted-send_key_in_deploy_env: Sends kadeploy's user's ssh public key into the node's authorized_keys to ease the following ssh connections
  5. SetDeploymentEnvUntrusted-create_partition_table: Creates the partition table
  6. SetDeploymentEnvUntrusted-format_deploy_part: Formats the partition where your environment will be installed (by default /dev/sda3)
  7. SetDeploymentEnvUntrusted-mount_deploy_part: Mounts the deployment partition in a local directory
  8. SetDeploymentEnvUntrusted-format_tmp_part: Formats the partition defined as tmp (by default /dev/sda5)
  9. SetDeploymentEnvUntrusted-format_swap_part: Formats the swap partition
  10. BroadcastEnvKastafior-send_environment: Sends your environment to the node and untars it into the deployment partition
  11. BroadcastEnvKastafior-manage_admin_post_install: Executes the post-installation instructions defined by the site admins, in general to adapt to the specificities of the cluster: console baud rate, Infiniband, Myrinet, proxy address, ...
  12. BroadcastEnvKastafior-manage_user_post_install: Executes the user-defined post-installation instructions to automatically configure the node depending on its cluster, site, network capabilities, disk capabilities, ...
  13. BroadcastEnvKastafior-send_key: Sends the user's public ssh key(s) to the node (if the user specified it with the -k option)
  14. BroadcastEnvKastafior-install_bootloader: Properly configures the bootloader
  15. BootNewEnvKexec-switch_pxe: Configures the PXE server so that the node will boot on the partition where your environment has been installed
  16. BootNewEnvKexec-umount_deploy_part: Unmounts the deployment partition from the directory where it was mounted during step 7
  17. BootNewEnvKexec-mount_deploy_part: Remounts the deployment partition
  18. BootNewEnvKexec-kexec: Performs a kexec reboot on the node
  19. BootNewEnvKexec-set_vlan: Properly configures the node's VLAN
  20. BootNewEnvKexec-wait_reboot: Waits for the node to be up

That is it. You now know all the default micro-steps used to deploy your environments.

Note.png Note

It is recommended to consult the Node storage page to understand which partition is used at which step.

Adjusting timeout for some environments

Since kadeploy3 runs multiple macro-steps and micro-steps, it is important to detect when a step is failing. This error detection is done using a timeout on each step. When a timeout is reached, the nodes that have not completed the given step are discarded from the deployment process.
The value of those timeouts varies from one cluster to another, since they depend on the hardware configuration (network speed, hard disk speed, reboot speed, ...). All default timeouts are set in the configuration files on the kadeploy3 server, but you can consult the durations of the macro-steps of past deployments with the kastat3 command:

Terminal.png frontend:
kastat3 --last -f hostname -f step1 -f step1_duration -f step2 -f step2_duration -f step3 -f step3_duration
 griffon-1.nancy.grid5000.fr,SetDeploymentEnvUntrusted,143,BroadcastEnvKastafior,111,BootNewEnvKexec,33
 griffon-10.nancy.grid5000.fr,SetDeploymentEnvUntrusted,143,BroadcastEnvKastafior,111,BootNewEnvKexec,33
 ...

This command simply prints information about the last deployment made on each node of the site. The format of the output is the following:

 hostname,step1,step1_duration,step2 ,step2_duration,step3,step3_duration
Note.png Note

Please consult the kastat3 page for more information about its features.

Nevertheless, kadeploy3 allows users to change the timeouts on the command line. In some cases, when you try to deploy an environment with a large tarball or with a post-install that lasts a long time, you may get discarded nodes. This false-positive behavior can be avoided by manually increasing the timeouts of the relevant steps at deployment time.

For instance, in our previous example, the durations of the macro-steps were:

  • SetDeploymentEnvUntrusted: 143
  • BroadcastEnvKastafior: 111
  • BootNewEnvKexec: 33

You can increase the timeout of the second step to 1200 seconds with the following command:

Terminal.png frontend:
kadeploy3 -e my_big_env -f $OAR_FILE_NODES -k --force-steps "SetDeploymentEnv|SetDeploymentEnvUntrusted:1:450&BroadcastEnv|BroadcastEnvKastafior:1:1200&BootNewEnv|BootNewEnvClassical:1:400"

Set Break-Point during deployment

As mentioned in the section above, a deployment is a succession of micro steps that can be consulted and modified.
Moreover, kadeploy3 allows users to set a breakpoint during a deployment.

Terminal.png frontend:
kadeploy3 -f $OAR_FILE_NODES -k -e squeeze-x64-base --verbose-level 4 -d --breakpoint BroadcastEnvKastafior:manage_user_post_install

This command can be used for debugging purposes. It performs a deployment with the maximum verbosity level and asks to stop the deployment workflow just before executing the manage_user_post_install micro-step of the BroadcastEnvKastafior macro-step. You will thus be able to connect to the deployment environment and manually run the user post-install script to debug it.

Warning.png Warning

In the current state of kadeploy3, it is not possible to resume a deployment from the breakpoint step, so you will have to redeploy your environment from the first step. This feature will be implemented in a future version of kadeploy3.