Michael McCourt : mccomic@mcs.anl.gov (or mccomic@iit.edu)
Office : E1 105d
Office Hours : MW 10:00-1:00, TR 2:00-5:00
Homework 2
Following the instructions provided below, install PETSc in your account on ada.cs.iit.edu. Recall that
there is info about accessing ada.cs.iit.edu on the References page.
Test the installation of MPI
If you followed the instructions below to install MPI, there will be a file ~/cs595/hellow.c in your account.
Modify that file so that each process sends its name to process 0, which prints it. Do this in such a way
that the ranks are printed in REVERSED order. You can look at the cpi.c file to learn how to call the function
MPI_Get_processor_name(), which you will need here.
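To get you started, here is a minimal sketch of the send-to-rank-0 pattern using standard MPI calls. It deliberately prints in ordinary rank order, since producing the reversed order is the point of the exercise, and the variable names are just illustrative choices.
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int  rank, size, i, namelen;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &namelen);

    if (rank != 0) {
        /* everyone except process 0 sends its processor name to process 0 */
        MPI_Send(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    } else {
        /* process 0 receives and prints; this loop runs in increasing rank
           order -- rearranging it to print in REVERSED order is your job */
        printf("Process 0 on %s\n", name);
        for (i = 1; i < size; i++) {
            MPI_Recv(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR, i, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Process %d on %s\n", i, name);
        }
    }

    MPI_Finalize();
    return 0;
}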
Test the installation of PETSc
If you followed the instructions below to install PETSc, there will be a file
$PETSC_DIR/src/vec/vec/examples/tutorials/ex2.c in your installation. Modify that file so that each process replaces its local
vector entries with the value rank+1. Specifically, calling VecView(x,PETSC_VIEWER_STDOUT_WORLD)
should display
Process [0]
1
Process [1]
2
2
Process [2]
3
3
3
when the program is run with mpiexec -n 3 ./ex2
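If you want to see the idea in isolation before touching ex2.c, here is a minimal standalone sketch. It is an illustration of the relevant Vec calls, not the actual ex2.c; in particular, giving each process rank+1 local entries is a hypothetical layout chosen to match the output above.
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec            x;
  PetscMPIInt    rank;
  PetscInt       i, istart, iend;
  PetscScalar    value;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, (char*)0, (char*)0); CHKERRQ(ierr);
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);

  /* each process owns rank+1 entries, giving the 1/2/3 layout shown above */
  ierr = VecCreateMPI(PETSC_COMM_WORLD, rank+1, PETSC_DETERMINE, &x); CHKERRQ(ierr);

  /* find out which global entries this process owns, then set them to rank+1 */
  ierr = VecGetOwnershipRange(x, &istart, &iend); CHKERRQ(ierr);
  value = (PetscScalar)(rank + 1);
  for (i = istart; i < iend; i++) {
    ierr = VecSetValues(x, 1, &i, &value, INSERT_VALUES); CHKERRQ(ierr);
  }
  /* values must be assembled before the vector can be viewed */
  ierr = VecAssemblyBegin(x); CHKERRQ(ierr);
  ierr = VecAssemblyEnd(x); CHKERRQ(ierr);

  ierr = VecView(x, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);

  ierr = VecDestroy(&x); CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}
The key calls are VecGetOwnershipRange(), which tells each process which entries it owns, and VecSetValues() followed by the assembly routines; your modification to ex2.c will use the same pattern.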
Test $PETSC_DIR/src/ksp/ksp/examples/tutorials/ex5.c with various runtime options. Some of them will be listed
here shortly.
Read Lecture 1 in Numerical Linear Algebra, by Trefethen and Bau
PETSc (and MPI) Installation Walkthrough
This example assumes you're running the installation in your home directory on the ada.cs.iit.edu machine. If you're running it on your own personal computer, some of the locations will be different, but much of this will still
be the same. If you have difficulties, feel free to ask Hong or myself, or email petsc-maint@mcs.anl.gov.
Log in to your machine, either using ssh userid@ada.cs.iit.edu, or with PuTTY. Check References for PuTTY info.
Check to make sure your basic tools are present. For the initial installation you should only need the C compiler gcc, the Fortran compiler gfortran, and Python. To check that these are present, execute the following command at the prompt:
userid@ada:~> which gcc gfortran python
/usr/bin/gcc
/usr/bin/gfortran
/usr/bin/python
If you see output like that, you should be good to go.
Download PETSc to your directory. If you're comfortable using Mercurial you're welcome to use it, but it requires some extra setup that isn't necessary right now, so we'll just show the ftp way to get PETSc. Since
we don't have root access on ada, we will install PETSc in our own software directory. Also note that the last line below just renames the PETSc directory to remove the patch number.
mkdir $HOME/soft; cd $HOME/soft
wget --passive-ftp ftp://ftp.mcs.anl.gov/pub/petsc/release-snapshots/petsc-lite-3.3-p3.tar.gz
gunzip -c petsc-lite-3.3-p3.tar.gz | tar -xof -
mv petsc-3.3-p3 petsc-3.3
Configure and make PETSc.
The first step here requires us to declare some variables for the configure script to use. To do that, we need
export PETSC_DIR=$HOME/soft/petsc-3.3
export PETSC_ARCH=arch-cs595
Note that this export command only works in the BASH shell. For most of you that's fine - anyone not working in BASH surely knows their way around whatever shell they are using.
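If you want to double-check that the variables took effect, you can echo them:
echo $PETSC_DIR $PETSC_ARCH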
Now we need to configure PETSc to use the compilers that we have on ada. We do this by telling the configure script what compilers we prefer to use. The following line accomplishes this along with two other important
things:
./config/configure.py --with-cc=gcc --with-fc=gfortran --download-mpich=1 --download-f-blas-lapack=1
Along with the gcc and gfortran values, we also see requests to download MPICH and blas/lapack. MPICH is the MPI implementation that we will
be using to conduct message passing between processes. On a computing cluster the
system administrator would install MPI, but since we are only developing code we can install our own to cut down on complications.
Blas/lapack are numerical linear algebra libraries which allow us to do such things as compute matrix-vector products and singular values. They have been optimized over many decades of research, but they need to be
compiled on this computer with the compilers you specify so that they will be compatible with the other code you write. That is why we need to download and compile blas/lapack right now.
Once your configure has completed successfully, just run the make command to build the PETSc libraries:
make
After they have finished building, confirm that they have built successfully with
make test
Test the installation of MPI. Define your MPIEXEC shell variable as
export MPIEXEC=$PETSC_DIR/$PETSC_ARCH/bin/mpiexec
or you can add the necessary directory to your path:
export PATH=$PATH:$PETSC_DIR/$PETSC_ARCH/bin/
We also need somewhere to put projects that we are working on. Let's put them in a directory in $HOME:
mkdir $HOME/cs595; cd $HOME/cs595
cp $PETSC_DIR/externalpackages/mpich2-1.4.1p1/examples/hellow.c .
cp $PETSC_DIR/src/ksp/ksp/examples/tutorials/makefile .
At this point, the makefile you have copied into your cs595 folder is full of a lot of stuff that you don't need. Rather than strip out all the unnecessary stuff, let's put in the one thing we will need - a way to compile the
hellow.c file that is now in the directory. Look through the makefile and find a logical place to add the following lines (hint: they should blend in with what's around them)
hellow: hellow.o chkopts
-${CLINKER} -o hellow hellow.o ${PETSC_KSP_LIB}
${RM} hellow.o
Note that the spaces before the last two lines need to be tabs to be correctly interpreted. If you have found the logical spot, you should now be able to compile and link hellow.c with the following command:
make hellow
which should spit out a bunch of nastiness that I'm not going to copy here. When that completes successfully you should be able to type
$MPIEXEC -n 3 ./hellow
and get some awesome output looking like
Hello world from process 0 of 3
Hello world from process 1 of 3
Hello world from process 2 of 3
although it may not be exactly in that order because of the nondeterministic nature of parallel input/output, as we discussed briefly in lab on Thursday.
At this point you can complete part 2 of the homework. I would also recommend trying the same compiling exercise we just went through with the file cpi.c. You can move that into the cs595 directory with the command
cp $PETSC_DIR/externalpackages/mpich2-1.4.1p1/examples/cpi.c .
You will need to add something new to the makefile again, and it will look similar to what is above, only with cpi in place of hellow.
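If it helps, the new rule would presumably look like this, mirroring the hellow rule (again, the whitespace before the last two lines must be tabs):
cpi: cpi.o chkopts
	-${CLINKER} -o cpi cpi.o ${PETSC_KSP_LIB}
	${RM} cpi.o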
Test the installation of PETSc. You've already done this earlier when you executed make test in PETSC_DIR. Now you need to make sure you can build the exercise that your homework is
based on. The following commands will take you into the PETSc tutorials on parallel vector objects and build the simplest example available.
cd $PETSC_DIR/src/vec/vec/examples/tutorials
make ex2
$MPIEXEC -n 3 ./ex2
This of course assumes that you declared MPIEXEC as described earlier. If this went successfully you should see
Process [0]
4
Process [1]
4
4
Process [2]
4
3
2
At this point you can complete the first part of problem 3.
You are encouraged to check out the various runtime options that PETSc provides for users, either for debugging or profiling their code. Go to the KSP tutorials (KSP is short for Krylov Subspace Method for solving sparse
linear systems) and try ex5.c:
cd $PETSC_DIR/src/ksp/ksp/examples/tutorials
make ex5
./ex5 -help
That last command should give you a lot of stuff printed to the screen. These are options that you can pass to PETSc programs. It may be easier to read them all by redirecting the output to a file and then reading that file
separately:
./ex5 -help > PetscOpts
less PetscOpts
Mess around. Try to figure out which options do what, and maybe which options will cause a PETSC_ERROR that crashes the program (don't worry, you won't hurt your computer). Don't spend all day on this as there are far more options
than you have time to look at; just pick a couple that you feel you understand and try them out. As Hong suggests, -mat_view_info and -ksp_view
are both useful debugging tools.
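For example, running
./ex5 -ksp_view
should print a description of the linear solver that was used; -ksp_view is among the options listed by -help, so it is a good first one to try.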