FC11 Tlse



This page describes an environment based on the Fedora Core 11 (Leonidas) distribution for AMD64/EM64T machines. This environment is designed to be the default environment on the Toulouse site. This page explains which software is installed and how the environment was built.
The first part is intended mainly for Grid'5000 users. If some features need more documentation, or if you think some widely-used software or libraries are really missing, please report it using the "discussion" tab at the top of this page.
The second part is intended more for administrators: it gives clues on how to build a new default environment.


Identification sheet

FC11_Tlse

Kernel version 2.6.30.10-105.fc11 from Fedora for x86_64

Authentication

  • Remote console: enabled on ttyS1 at 115200 bps for pastel, on ttyS0 at 38400 bps for violette
  • Services: ldap:yes, nfs:yes
  • Accounts: root and the user to whom OAR allocated the resource

Applications

  • Development:
    • cvs, git, subversion
    • autoconf/automake, bison/byacc/flex, cmake, g77, gcc/g++ (3.4 and 4.4.1), gfortran, libf2c, make, Sun JDK 1.6.0
    • gdb, strace, valgrind
  • MPI: openmpi, mpich2, mpich
  • Network: ldap, lftp, nfs, nscd, rsync, ssh (with X11Forwarding=yes), telnet, wget
  • Script: Perl, Python, Ruby
  • Text: vim, emacs, nano
  • X: libraries to remotely execute graphical commands, gtk2, tcl, Qt.

Misc

  • PathScale compiler runtime libraries are available (execution only)

Environment

MPI

Four MPI implementations are installed. Because of conflicts between OpenMPI and MPICH2, only the former is installed as a Fedora binary package.

OpenMPI (1.3.3)
Installed from the Fedora package. Please refer to the package documentation for further information.
To use this implementation, simply set some environment variables inside your ~/.bashrc or ~/.tcshrc (necessary to have all MPI processes use the same environment, and so the same implementation):
bash# echo "[ -f /opt/openmpi/env.sh ] && . /opt/openmpi/env.sh" >> ~/.bashrc
tcsh# echo "[ -f /opt/openmpi/env.csh ] && source /opt/openmpi/env.csh" >> ~/.tcshrc
With this implementation the MPI tutorial does not work as is, because there is no mpd daemon to launch. Instead, use mpiexec or mpirun directly; they will spawn the MPI processes over the nodes.
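For example, inside an OAR job one can launch a program over the reserved nodes roughly as follows (a sketch: my_mpi_app and the process count are placeholders, $OAR_NODEFILE is the node file provided by OAR):
mpirun -machinefile $OAR_NODEFILE -np 8 ./my_mpi_app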
MPICH2 (1.8)
MPICH2 is installed in /opt/mpich2. All information about the configuration can be retrieved by executing /opt/mpich2/bin/mpich2version. Note that we chose the ch3:nemesis device, which should get the most out of a cluster of multi-core machines.
To use this implementation, simply set some environment variables inside your ~/.bashrc or ~/.tcshrc (necessary to have all MPI processes use the same environment, and so the same implementation):
bash# echo "[ -f /opt/mpich2/env.sh ] && . /opt/mpich2/env.sh" >> ~/.bashrc
tcsh# echo "[ -f /opt/mpich2/env.csh ] && source /opt/mpich2/env.csh" >> ~/.tcshrc
With this implementation the MPI tutorial does work as is.
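For example, from the first node of an OAR job, the mpd ring can be started and a program launched roughly as follows (a sketch: mpd.hosts and my_mpi_app are placeholders; see the MPICH2 documentation for the exact options):
sort -u $OAR_NODEFILE > ~/mpd.hosts
mpdboot -n `wc -l < ~/mpd.hosts` -f ~/mpd.hosts
mpiexec -n 8 ./my_mpi_app
mpdallexit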
MPICH (1.2.7) p4mpd
MPICH 1.2.7 with the p4mpd device is installed in /opt/mpich1_p4mpd. The configuration command can be found in /opt/mpich1_p4mpd/configure_cmd: it is best suited for inter-node communications and should not conflict with pthreads. Note that the patch /opt/mpich1_p4mpd/mpich-1.2.7p1.patch has been applied, mostly to enable the building of shared libraries.
To use this implementation, simply set some environment variables inside your ~/.bashrc or ~/.tcshrc (necessary to have all MPI processes use the same environment, and so the same implementation):
bash# echo "[ -f /opt/mpich1_p4mpd/env.sh ] && . /opt/mpich1_p4mpd/env.sh" >> ~/.bashrc
tcsh# echo "[ -f /opt/mpich1_p4mpd/env.csh ] && source /opt/mpich1_p4mpd/env.csh" >> ~/.tcshrc
With this implementation the MPI tutorial almost works as is: just read the MPICH documentation to launch the mpd daemon correctly.
MPICH (1.2.7) shmem
MPICH 1.2.7 with the shmem device is installed in /opt/mpich1_shmem. The configuration command can be found in /opt/mpich1_shmem/configure_cmd: it is suited for intra-node communications only and should not conflict with pthreads. Note that the patch /opt/mpich1_shmem/mpich-1.2.7p1.patch has been applied, mostly to enable the building of shared libraries.
To use this implementation, simply set some environment variables inside your ~/.bashrc or ~/.tcshrc (necessary to have all MPI processes use the same environment, and so the same implementation):
bash# echo "[ -f /opt/mpich1_shmem/env.sh ] && . /opt/mpich1_shmem/env.sh" >> ~/.bashrc
tcsh# echo "[ -f /opt/mpich1_shmem/env.csh ] && source /opt/mpich1_shmem/env.csh" >> ~/.tcshrc
With this implementation the MPI tutorial does not work as is, because there is no mpd daemon to launch. Instead, use mpiexec or mpirun directly to spawn the MPI processes.
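Since this device only handles intra-node communications, the processes are simply started on the local machine, for instance (my_mpi_app is a placeholder):
mpirun -np 4 ./my_mpi_app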

Compilers and interpreters

Java
Java 1.6.0 is installed. The JDK used is the Sun JDK 6u16.
GNU
g++, gcc and gfortran: 4.4.1.
g++34, gcc34, g77: 3.4.6.
PathScale
The PathScale licence has not been renewed. Only the runtime libraries are shipped, so that users can keep running applications compiled with the PathScale compilers in this environment.
Interpreters
bash (default login shell), ksh, nash, tcsh, zsh.
bc, perl, python, ruby

Development

Versioning
cvs, git, subversion
Compilation
autoconf, automake, bison, cmake, flex, make
Debuggers
gdb, valgrind
Editors
emacs (with X support), nano, vim

Misc

Communications
ftp, rsync, telnet, wget
Useful libraries
fftw2, X libraries and fonts for remote execution of graphical applications.


Creation

Base installation from DVD ISO

The environment was initially installed from a Fedora Core 11 (Leonidas) DVD for x86_64 (64-bit, AMD64 and EM64T), on a QEMU virtual machine. With this procedure, however, I was not able to run it on a Xen domU.

I wanted the same environment for the nodes and for compil, the machine on which users must compile their applications. Since it is very handy to have compil on a Xen domU, I began another installation from the DVD on a Xen domU and extracted the generated initrd image. With this new initrd, the environment runs on a Xen domU as well as on a physical node. I will later write a How-To for a Xen-only procedure, to make this simpler; that How-To will include a kickstart configuration file giving the exact list of packages to install.

Only the Base package group of Fedora 11 was selected at installation, and many of its optional packages (such as WiFi-related packages or graphical administration tools) were unselected, so that the installation is as minimal as possible. The X Window System package group was not selected. Some X libraries are nevertheless pulled in through dependencies; they are necessary to run graphical applications such as emacs or xterm remotely.

Additional packages and Configuration

Fedora repositories
In /etc/yum.repos.d/fedora*.repo, enable only http://download.fedora.redhat.com/, as this is the only repository accepted by the Grid'5000 proxies.
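For instance, the [fedora] section of /etc/yum.repos.d/fedora.repo then looks roughly like this (a sketch: the exact baseurl path may differ):
[fedora]
name=Fedora $releasever - $basearch
baseurl=http://download.fedora.redhat.com/pub/fedora/linux/releases/$releasever/Everything/$basearch/os/
#mirrorlist=...
enabled=1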
Additional software
Some necessary packages (compilers, interpreters, editors, etc.) have been carefully added, with a constant concern to keep the environment as small as possible.
Some other software is installed from source: OAR (node part), some MPI implementations...
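As an illustration, most of the development tools listed above come from Fedora packages and can be installed with yum (the package names below are the usual Fedora ones; the list actually installed is longer):
yum install gcc gcc-c++ gcc-gfortran compat-gcc-34 compat-gcc-34-g77 autoconf automake bison flex cmake make gdb valgrind strace cvs git subversion emacs vim-enhanced nano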
Firewall
All iptables rules are flushed:
iptables -F
service iptables save
LDAP, NSCD, NSS and PAM configuration
The user connection aspects are configured as described on the related admin page.
SSH
Minor changes are done in configuration file /etc/ssh/sshd_config:
PrintMotd no
PasswordAuthentication no
ChallengeResponseAuthentication yes
PermitEmptyPasswords no
IgnoreUserKnownHosts yes
X11Forwarding yes
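If these changes are made on a running system, sshd must re-read its configuration:
service sshd reload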
TCP bandwidth
On a grid, kernel network settings must be tuned to maximize the bandwidth of inter-site connections. This is done by editing /etc/sysctl.conf, as described on the related tuning page.
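The entries involved are the usual TCP buffer size settings; a sketch of /etc/sysctl.conf with illustrative values only (the actual values are those given on the tuning page):
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 65536 4194304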
Ulimit
Maximum number of open file descriptors
To make some experiments possible, the limit on open file descriptors must be raised. This is done by modifying /etc/security/limits.conf, as described on the related tuning page.
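A sketch of the corresponding lines in /etc/security/limits.conf (the value 65536 is illustrative; the tuning page gives the one actually used):
*    soft    nofile    65536
*    hard    nofile    65536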
Message of the Day
The text in /etc/motd points to this page.
g5k-parts
This program performs sanity checks on the node's compliance with the Grid'5000 node storage convention.

Regenerating initrd

The necessary SCSI modules should be listed in /etc/modprobe.conf as scsi_hostadapter[n] aliases. The problem is that /etc/modprobe.conf is deprecated for specifying module aliases and causes several warnings at boot time, so we remove it once the initrd has been generated. We have developed a short wrapper around mkinitrd, /root/make_initrd.sh, which adds the --scsi option to the existing mkinitrd options. Here is how the initrd for this environment was generated (xen_blkfront makes it runnable on a Xen VM, such as compil):

 /root/make_initrd.sh -v --scsi=xen_blkfront --scsi=mptspi --scsi=mptscsih --scsi=mptbase --scsi=scsi_transport_spi --scsi=usb-storage --scsi=sata_nv -f /boot/initrd-`uname -r`.img `uname -r`

Saving the image

tgz-g5k-1.0.6 is installed. The file /usr/share/tgz-g5k/dismissed lists many useless files and directories, but it could still be improved to reduce the image size.

Saving from compil (Toulouse admins only)
The installation procedure of the compil machine first deploys the default environment on the chosen partition, then applies the #Kadeploy_postinstall. It then adds or changes some configuration files to comply with the specificities of the machine: PAM files, motd, a cron job to clean /tmp... All changed or added files are recorded in a script, /usr/sbin/tgz-g5k-from-compil: calling this wrapper around tgz-g5k on compil regenerates the environment image. The compil installation script and files can be found in the Grid'5000 SVN repository:
svn checkout --username <user> https://scm.gforge.inria.fr/svn/grid5000/site/toulouse/default_env/install_compil

Kadeploy postinstall

Partition table
/etc/fstab is compliant with the Grid'5000 node storage convention.

Remote console
Serial console login is configured in /etc/inittab (ttyS0 or ttyS1 is added to /etc/securetty as well):
T0:23:respawn:/sbin/agetty -L ttyS0 38400 vt100 (violette)
T0:23:respawn:/sbin/agetty -L ttyS1 115200 vt100 (pastel)
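For instance, the serial terminal can be added to /etc/securetty with (ttyS1 shown for pastel, ttyS0 for violette):
grep -q '^ttyS1$' /etc/securetty || echo ttyS1 >> /etc/securetty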
LDAP Client
Configured according to the Grid'5000 convention described in LDAP_client.
Web proxy
yum (/etc/yum.conf), wget (/etc/wgetrc) and rpm (/etc/rpm/macros.proxy) are configured to cope with Grid'5000 proxies. Yet, it does not work for rpm because of an upstream bug.
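A sketch of the kind of lines involved, assuming a proxy reachable as proxy:3128 (the actual host and port are site-specific):
In /etc/yum.conf:
proxy=http://proxy:3128
In /etc/wgetrc:
http_proxy = http://proxy:3128/
ftp_proxy = http://proxy:3128/
In /etc/rpm/macros.proxy:
%_httpproxy proxy
%_httpport 3128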

Recording environment

This environment has not been registered in the Kadeploy 2 database. On frontend.toulouse.grid5000.fr, its description is in /grid5000/descriptions/FC11_Tlse.dsc:

name : FC11_Tlse
version : 2
description : Fedora Core 11 (Toulouse default environment)
author : Philippe.Combes@enseeiht.fr
tarball : /grid5000/images/FC11_Tlse.tgz|tgz
postinstall : /grid5000/postinstalls/FC11_Tlse-post.tgz|tgz|traitement.ash /rambin
kernel : /boot/vmlinuz-2.6.30.10-105.fc11.x86_64
kernel_params : 
initrd : /boot/initrd-2.6.30.10-105.fc11.x86_64.img
fdisktype : 83
filesystem : ext3
environment_kind : linux
demolishing_env : 0

With kaenv3, the new environment is registered in the Kadeploy 3 database, with the default visibility:

su deploy -c "kaenv3 -a /grid5000/descriptions/FC11_Tlse.dsc"