Lenny-x64-xen-2.2

From Grid5000


This page describes version 2.2 of the Xen environment based on the Lenny version of the Debian distribution for AMD64/EM64T machines. It explains how this environment was built and how to use it with Kadeploy. This page is inspired by Lenny-x64-xen-2.0.


Identification sheet

Lenny-x64-xen-2.2

Kernel version 2.6.26-2-xen-amd64 from Debian for amd64/em64t

Authentication

  • Remote console: enabled on ttyS0 at 38400 bps
  • Services: ldap:no, nfs:no
  • Accounts: root:grid5000,g5k:grid5000

Applications

Misc

Build

This section explains how the system was installed and tuned, starting from Lenny-x64-base-2.2.

Motd

The motd is updated:

cat > /etc/motd.tail <<EOF
Lenny-x64-xen-2.2 (image based on Debian version Lenny/stable for AMD64/EM64T)
Maintained by Philippe-Charles Robert <philippe-charles.robert@loria.fr>
Valid on Dell {PE1855, PE1950}, HP {DL140G3, DL145G2, DL385G2}, 
       IBM {e325, e326, e326m}, Sun {V20z, X2200 M2, X4100},
       Altix Xe 310, Carri CS-5393B
Applications
* Text: Vim, XEmacs, JED, nano, JOE
* Script: Perl, Python, Ruby
  (Type "dpkg -l" to see complete installed package list)
Misc
* i386 shared libraries are available
* SSH has X11 forwarding enabled
* Max open files: 8192
* TCP bandwidth: tuned for 1 Gb/s
More details: https://www.grid5000.fr/index.php/Lenny-x64-xen-2.2
EOF

Packages

Xen kernel - related packages

Install Xen kernel and related tools

apt-get update && apt-get upgrade
apt-get install linux-image-xen-amd64 xen-utils-3.2-1

Remove useless files in /boot:

rm /boot/System.map-2.6.26-2-amd64
rm /boot/config-2.6.26-2-amd64
rm /boot/initrd.img-2.6.26-2-amd64
rm /boot/vmlinuz-2.6.26-2-amd64

We should reboot to check that everything works fine. Before this, we switch the description file to lenny-x64-xen-2.2.dsc3, otherwise the node would reboot into the base environment. See Recording_environment at the end of this article.

tgz-g5k probert@frontend:lenny-x64-xen.tgz

Once rebooted:

Check the running kernel is the Xen one

uname -a
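This check can also be scripted; the helper below is only a sketch (is_xen_kernel is a name we introduce, not part of the environment):

```shell
# Sketch: decide from a kernel release string whether the Xen kernel runs.
# is_xen_kernel is a hypothetical helper, not shipped with the environment.
is_xen_kernel() {
    case "$1" in
        *-xen-*) return 0 ;;   # e.g. 2.6.26-2-xen-amd64
        *)       return 1 ;;   # e.g. 2.6.26-2-amd64 (base kernel)
    esac
}

is_xen_kernel "$(uname -r)" || echo "WARNING: not running the Xen kernel" >&2
```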

Install miscellaneous tools and remove the lenny-base kernel

apt-get install debootstrap xen-tools sysfsutils lvm2 libyaml-ruby 
apt-get autoremove linux-image-2.6.26-2-amd64 linux-image-2.6-amd64 linux-headers-2.6-amd64 linux-headers-2.6.26-2-amd64

Support for the references API

apt-get install ruby libopenssl-ruby libjson-ruby curl rubygems
cd /usr/local/src
wget http://rubyforge.org/frs/download.php/60718/rubygems-1.3.5.tgz
tar xzf rubygems-1.3.5.tgz
cd rubygems-1.3.5
ruby setup.rb
ln -sfv /usr/bin/gem1.8 /usr/bin/gem
gem install rest-client
cd .. && rm rubygems-1.3.5.tgz

Support for netxtreme2 drivers

This is necessary to support installation on the suno cluster.

apt-get install linux-headers-2.6.26-2-xen-amd64
cd /usr/local/src
wget http://git.grid5000.fr/sources/netxtreme2-5.0.17.tar.gz
tar -zxvf netxtreme2-5.0.17.tar.gz
cd netxtreme2-5.0.17
make && make install
cd .. && rm netxtreme2-5.0.17.tar.gz

Configuration

Xen and xen-tools packages

We want network bridging: the domUs will access the outside network directly, and the dom0 will behave as a switch.

sed -i -e 's/^# (\(network-script network-bridge\))$/(\1)/' /etc/xen/xend-config.sxp
sed -i -e '/network-dummy/d' /etc/xen/xend-config.sxp
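The effect of these two sed commands can be verified on a throwaway excerpt of the file (the two lines below are illustrative, not the full xend-config.sxp):

```shell
# Demonstrate the seds above on a temporary excerpt of xend-config.sxp.
tmp=$(mktemp)
printf '%s\n' '# (network-script network-bridge)' '(network-script network-dummy)' > "$tmp"
sed -i -e 's/^# (\(network-script network-bridge\))$/(\1)/' "$tmp"
sed -i -e '/network-dummy/d' "$tmp"
cat "$tmp"    # -> (network-script network-bridge)
rm -f "$tmp"
```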

DomU configuration file. Options can be overridden by passing them on the xen-create-image command line:

cat > /etc/xen-tools/xen-tools.conf <<EOF
dir = /opt/xen
debootstrap = 1   # installation method
size   = 400Mb    # Disk image size.
memory = 128Mb    # Memory size
swap   = 128Mb    # Swap size
fs     = ext3     # use the EXT3 filesystem for the disk image.
dist   = lenny    # Default distribution to install.
image  = sparse   # Specify sparse vs. full disk images.
dhcp = 1          # Use DHCP?
kernel = /boot/vmlinuz-2.6.26-2-xen-amd64
initrd = /boot/initrd.img-2.6.26-2-xen-amd64
arch=amd64
mirror = http://ftp.fr.debian.org/debian/
ext3_options   = noatime,nodiratime,errors=remount-ro
serial_device = hvc0   # Necessary to get access to the
disk_device = xvda     # domU through xm console command
EOF
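For example, a guest diverging from those defaults could be created as follows (hostname and sizes are illustrative; see xen-create-image(8) for the exact option names):

```shell
xen-create-image --hostname=test0 --memory=256Mb --size=1Gb --swap=256Mb
```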

domU ssh access

A key is necessary to log in to the domUs from the dom0:

ssh-keygen -f /root/.ssh/id_rsa -q -C "dom0 lenny-x64-xen-2.2 key" -N ""

The default SSH host-key checking is too restrictive here:

echo "StrictHostKeyChecking no" > /root/.ssh/config
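Note that this disables host-key checking for every destination. A more conservative variant, sketched below on a temporary file, would scope it to the guests (the domU* pattern is an assumption about the guest naming):

```shell
# Sketch: relax host-key checking only for hosts matching "domU*"
# (pattern is an assumption; a temp file stands in for /root/.ssh/config).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Host domU*
    StrictHostKeyChecking no
EOF
grep -c 'StrictHostKeyChecking no' "$cfg"   # -> 1
```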

Xen-tools provides an automated way to add the files we want into the domUs:

mkdir -p /etc/xen-tools/skel/root/.ssh
cat /root/.ssh/id_rsa.pub > /etc/xen-tools/skel/root/.ssh/authorized_keys
mkdir -p /etc/xen-tools/skel/etc/apt/apt.conf.d/
cat > /etc/xen-tools/skel/etc/apt/apt.conf.d/proxy <<EOF
Acquire::http::Proxy "http://proxy:3128";
EOF
cp /etc/localtime /etc/xen-tools/skel/etc/localtime

domU creation

The Lenny xen-tools package has a bug in device creation:

sed -i -e 's#\./MAKEDEV#MAKEDEV#' /usr/lib/xen-tools/debian.d/55-create-dev 
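What the substitution changes can be seen on a sample line (the line content is illustrative of the 55-create-dev hook):

```shell
# Show the effect of the MAKEDEV fix on a throwaway sample line.
echo 'cd /dev && ./MAKEDEV console' | sed -e 's#\./MAKEDEV#MAKEDEV#'
# -> cd /dev && MAKEDEV console
```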

The udev role is necessary to give the domU console access and ttys (for ssh):

xen-create-image --hostname=domU --role=udev

The domU is started automatically at boot:

mkdir -p /etc/xen/auto
ln -s /etc/xen/domU.cfg /etc/xen/auto/

Automatic tools

As for lenny-x64-xen-2.0, additional tools are added:

  • xen-g5k automatically reconfigures the domU network.
  • configuration.rb tries to fetch the node configuration from the references API.
    • On success, it creates the file /etc/definitions2.yaml. The request to the references API can take a while, which is why xen-g5k is added early in the runlevels.
    • On failure, the file /etc/definitions.yaml is read instead. This file is stored in the postinstall, to ease its editing.
  • automatic_xen_conf.rb writes the domU configuration file according to the configuration found (by the API or locally).
mv /root/xen-g5k /etc/init.d/
update-rc.d xen-g5k defaults 19
mv /root/automatic_xen_conf.rb /usr/local/bin/
mv /root/configuration.rb /usr/local/bin/

xenconf and xenlist allow querying the OMAPI proxy to learn the domUs' IP addresses and DNS names.

mv /root/xenconf /usr/local/bin/
mv /root/xenlist /usr/local/bin/

xenkeys copies user keys onto the domUs.

mv /root/xenkeys /usr/local/bin/

xen-addvm automates domU creation using the LVM snapshot method.

mv /root/xen-addvm /usr/local/bin/

Mark

Update the date of the release

date > /root/release


Environment

Creating the image archive

Create and retrieve the system archive:

tgz-g5k login@frontend:lenny-x64-xen-2.2.tgz

Creating the postinstall archive

The postinstall archive lenny-x64-xen-2.2-post.tgz is based on etch-x64-base-2.0-post.tgz. The file definitions.yaml is added as dest/etc/definitions.yaml to ease its editing, rather than creating a new environment.

Recording environment

Recording the environment can be done from a description file, so we create lenny-x64-xen-2.2.dsc3:

name : lenny-x64-xen
version : 2.2
description : https://www.grid5000.fr/index.php/Lenny-x64-xen-2.2
author : Philippe-charles.robert@loria.fr
tarball : /grid5000/images/lenny-x64-xen-2.2.tgz|tgz
postinstall : /grid5000/postinstalls/lenny-x64-xen-2.2-post.tgz|tgz|traitement.ash /rambin
size : 1000
initrd : /boot/initrd.img-2.6.26-2-xen-amd64
kernel : /boot/vmlinuz-2.6.26-2-xen-amd64
hypervisor : /boot/xen-3.2-1-amd64.gz
hypervisor_params : dom0_mem=1000000
fdisktype : 83
filesystem : ext3
environment_kind : xen
visibility : shared
demolishing_env : 0

Add the new environment to kadeploy3:

kaenv3 -a /grid5000/images/lenny-x64-xen-2.2.dsc3
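The recorded environment can then be deployed on reserved nodes; a typical invocation is sketched below (the node file comes from the OAR reservation, and exact kadeploy3 options may vary between sites and versions):

```shell
kadeploy3 -e lenny-x64-xen -f $OAR_NODEFILE
```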

Deployment

Steps/Sites: Bordeaux, Grenoble, Lille, Lyon, Nancy, Orsay, Rennes, Sophia, Toulouse
  • Checks (Lenny-x64-xen_test): passed on all sites except Toulouse (no OMAPI support)
  • Deployment (Lenny-x64-xen_test): passed on all sites except Toulouse (no OMAPI support)
  • Toulouse
    • The servers get IP addresses through DHCP requests, hence it is not possible to use the same process for the Xen VMs; servers would get IP addresses in the virtualisation network.
    • Change the Xen VMs' MAC prefix from 00:16:3E to 00:16:EF, for instance? Or decide that servers should not get dynamically allocated IP addresses?
  • ALL
    • Loading Mellanox MLX4_EN HCA driver: [FAILED]
    • Loading HCA driver and Access Layer: [FAILED]
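The MAC-prefix workaround mentioned for Toulouse could be expressed in the domU configuration file; the fragment below is only a sketch (the address is illustrative):

```
# Hypothetical domU.cfg fragment: force a MAC outside the default
# Xen 00:16:3E prefix (illustrative address).
vif = [ 'mac=00:16:EF:00:00:01' ]
```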