

0. About this document

This document is a short guide to QCG-OMPI for system administrators and users. For a more complete guide, see the on-line documentation at the following URL:

1. About QCG-OMPI

QCG-OMPI is an MPI environment designed for running high-performance distributed applications on grids. It features connectivity techniques that enable communication across the grid, through firewalls and NATs. It can transmit information such as the topology to the application at run time. It can be interfaced with the rest of the QosCosGrid software stack, or be used as a stand-alone system.

2. Administrators' guide
2.1. Compilation and installation

QCG-OMPI uses threads and requires libpcap to be installed on the system.

A configure script is provided. If libpcap is not installed in a standard location, its installation path must be passed to the configure script. The rest of the procedure is the usual one:

$ ./configure --prefix=$OMPI --with-pcap=$PATH_TO_PCAP
$ make
$ sudo make install
2.2. Deployment

The grid infrastructure does not need to run under the super-user's identity; it can be deployed by a normal user. The deployment script deploys the infrastructure, generates the parameter file and a list of the available machines, and copies them to all the machines of the grid.

2.2.1. Configuration file

The deployment script needs to read a configuration file that describes every cluster of the grid. The syntax of this configuration file is the following, for each cluster:

<cluster id> <# of machines in the cluster> <technique ID> <frontal> <proxy> <proxy's IP> <port min> <port max>

The first machine of the first cluster must be the machine the broker will be running on. The last two elements, port min and port max, correspond to the boundaries of the range of open ports (if any) of the firewall. If no ports are open, or if all of them are, any range can be specified, or the maximum can be set to -1.

The available connectivity techniques and the corresponding IDs are the following:

For example:

1 5 1 4000 4100
2 2 2 5000 5100
2.2.2. Deployment script

The deployment script takes the following options:

-c <path to the config file>
-m <port min>
-M <port max>
-p <path to the grid infrastructure's executables>
-d (if specified: debug mode)

For example: -c /etc/qcg_config -m 4000 -M 4100 -p /usr/bin

The above options make the deployment script deploy a grid infrastructure as specified in the configuration file /etc/qcg_config, using the binaries found in /usr/bin and ports between 4000 and 4100.

3. Users' guide
3.1. Environment setup

The bin directory of the OpenMPI installation must be in your PATH. For bash-compatible shells, if OpenMPI is installed in the $OMPI directory, use the following command:

export PATH=$OMPI/bin:$PATH

If you also want to be able to access the man pages, you can also add:

export MANPATH=$OMPI/share/man:$MANPATH

Some systems do not read user-set environments when using a remote connection. In this case, you will have to add the following option to your OpenMPI command-line:

--prefix $OMPI
3.2. Compiling and running applications

QCG-OMPI allows you to compile MPI applications using exactly the same commands as for usual MPI applications. The compilers are the same as with 'raw' OpenMPI, and are to be used in exactly the same way.

The MPI library and the run-time environment of QCG-OMPI must know how to access the QCG-OMPI grid infrastructure. All the information they need is set in a configuration file placed by the system in the /tmp directory. To use this configuration file, simply add the following option to your execution command line:

-am /tmp/mcaparams.conf

To enable this automatically, you can add the path of this file to the list of default configuration files read by OpenMPI, by adding the following line to your ~/.openmpi/mca-params.conf file:

mca_params_file = "~/.openmpi/mca-params.conf /etc/openmpi-mca-params.conf /tmp/mcaparams.conf"

MPI jobs are identified by the grid infrastructure by a job ID. The job ID must be passed to the job as an environment variable called JOBID, and it must be set on each of the computing nodes. It can be set in each shell's configuration files, or passed to the mpiexec command by adding the following option:

-x JOBID

This option sets the JOBID environment variable, in the environment of each of the remote MPI processes, to the value it has in the current environment.
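Putting the options above together, a launch command could look like the following. The job ID (42), the process count and the application name (./my_app) are hypothetical placeholders:

```shell
# Hypothetical launch: QCG-OMPI installed in $OMPI, JOBID forwarded
# to every remote process, grid parameters read from the
# configuration file generated in /tmp.
export JOBID=42
mpiexec --prefix $OMPI -x JOBID -am /tmp/mcaparams.conf -np 16 ./my_app
```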

3.3. Topology discovery

The topology can be passed as an environment variable (for example, by the QosCosGrid meta-scheduler and deployment system) called QCG_TOPOLOGY.

If this environment variable is not set, the library tries to guess what the topology may be like from the domain names of the machines used in the computation. In this case, a warning message is displayed.

The topology information can be obtained at run time by the MPI program using the MPI_Attr_get() or MPI_Comm_get_attr() routines.

The depths are stored as an array of N integers, N being the number of processes in the MPI job. They are stored as the QCG_TOPOLOGY_DEPTH attribute.

The depths must be obtained first, so that enough space can be allocated for the table of colors. Colors are stored as arrays of characters, with a maximum length of 20 characters. They are stored as the QCG_TOPOLOGY_COLORS attribute.

Colors can be converted into integers using the QCG_ColorToInt() routine. This routine creates a bijective mapping between colors and integers: exactly one integer corresponds to each color, and exactly one color corresponds to each integer.

This function has the following prototype:

int QCG_ColorToInt( char* color );

It can be used, for example, to create communicators with the MPI_Comm_split() routine.

An example can be found in the examples/ directory in the openmpi directory of QCG-OMPI.
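As an outline of the overall flow described above, topology discovery in an MPI program could be sketched as follows. The attribute keys QCG_TOPOLOGY_DEPTH and QCG_TOPOLOGY_COLORS and the routine QCG_ColorToInt() come from this guide; the header providing them and the exact in-memory layout of the colors attribute are assumptions, so treat this as a sketch rather than a verified program:

```c
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, flag;
    int *depths;            /* one depth per process */
    char (*colors)[21];     /* assumed layout: one color (max 20 chars) per process */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* 1. Obtain the depths first (QCG_TOPOLOGY_DEPTH attribute)... */
    MPI_Comm_get_attr(MPI_COMM_WORLD, QCG_TOPOLOGY_DEPTH, &depths, &flag);

    /* 2. ...then the colors (QCG_TOPOLOGY_COLORS attribute). */
    MPI_Comm_get_attr(MPI_COMM_WORLD, QCG_TOPOLOGY_COLORS, &colors, &flag);

    if (flag) {
        /* 3. Convert this process's color to an integer and build one
         *    communicator per color with MPI_Comm_split(). */
        MPI_Comm local;
        int color = QCG_ColorToInt(colors[rank]);
        MPI_Comm_split(MPI_COMM_WORLD, color, rank, &local);

        /* 'local' now groups the processes sharing this process's color. */
        MPI_Comm_free(&local);
    }

    MPI_Finalize();
    return 0;
}
```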
