CS 595 - Advanced Scientific Computing

Fall 2010


Contact Information

  • Hong Zhang : hzhang@mcs.anl.gov
    • Office : SB 235C
    • Office Hours : R 3:00-5:30
  • Michael McCourt : mccomic@mcs.anl.gov (or mccomic@iit.edu)
    • Office : E1 105d
    • Office Hours : MW 10:00-1:00, TR 2:00-5:00

Homework 2

  1. Following the instructions provided below, install PETSc in your account on ada.cs.iit.edu. Recall that there is info about accessing ada.cs.iit.edu on the References page.
  2. Test the installation of MPI
  3. Test the installation of PETSc
  4. Read Lecture 1 in Numerical Linear Algebra, by Trefethen and Bau

PETSc (and MPI) Installation Walkthrough

This example assumes you're running the installation in your home directory on the ada.cs.iit.edu machine. If you're running it on your own personal computer, some of the locations will be different, but much of this will still be the same. If you have difficulties, feel free to ask Hong or me, or email petsc-maint@mcs.anl.gov.
  1. Login to your machine, either using ssh userid@ada.cs.iit.edu or with PuTTY. Check the References page for PuTTY info.
  2. Check to make sure your basic tools are present. For the initial installation you should only need the C compiler gcc, the Fortran compiler gfortran (the one the configure line below uses), and Python. To check that these are present, execute the following command at the prompt:
    userid@ada:~> which gcc gfortran python
    /usr/bin/gcc
    /usr/bin/gfortran
    /usr/bin/python
    If you see paths like those, you should be good to go.
  3. Download PETSc to your directory. If you're comfortable using Mercurial, feel free to use it, but it requires some extra setup that isn't necessary right now, so we'll just show the FTP way to get PETSc. Since we don't have root access on ada, we will install PETSc in our own software directory. Also note that the last line below just renames the PETSc directory to remove the patch number.
    mkdir $HOME/soft; cd $HOME/soft
    wget --passive-ftp ftp://ftp.mcs.anl.gov/pub/petsc/release-snapshots/petsc-lite-3.3-p3.tar.gz
    gunzip -c petsc-lite-3.3-p3.tar.gz | tar -xof -
    mv petsc-3.3-p3 petsc-3.3
  4. Configure and make PETSc. The first step here requires us to declare some variables for the configure script to use. To do that, we need
    export PETSC_DIR=$HOME/soft/petsc-3.3
    export PETSC_ARCH=arch-cs595
    Note that this export command only works in the BASH shell. For most of you that's fine - anyone not working in BASH surely knows their way around whatever shell they are using.
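
    For reference, if you happen to use csh or tcsh, the equivalent commands (a sketch, assuming one of those shells) would be:
    setenv PETSC_DIR $HOME/soft/petsc-3.3
    setenv PETSC_ARCH arch-cs595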

    Now we need to configure PETSc to use the compilers that we have on ada. We do this by telling the configure script what compilers we prefer to use. The following line accomplishes this along with two other important things:
    ./config/configure.py --with-cc=gcc --with-fc=gfortran --download-mpich=1 --download-f-blas-lapack=1
    Along with the gcc and gfortran values, we also see requests to download MPICH and BLAS/LAPACK. MPICH is the MPI library we will use to pass messages between processes. On a computing cluster the system administrator would install MPI, but since we are only developing code we can install our own copy to cut down on complications.

    BLAS/LAPACK are numerical linear algebra libraries which allow us to do such things as compute matrix-vector products and singular values. They have been optimized over many decades of research, but they need to be compiled on this machine with the compilers you specify so that they will be compatible with the other code you write. That is why we need to download and compile BLAS/LAPACK right now.
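
    Just to give a flavor of what BLAS provides (this is not part of the homework), here is a minimal sketch of calling the BLAS routine dgemv from C to compute the matrix-vector product y = A*x. It assumes the common convention that the Fortran symbol gets a trailing underscore, and it would be linked against a BLAS library such as the one PETSc builds for you:
    /* dgemv_sketch.c - compute y = A*x with BLAS dgemv (illustration only) */
    #include <stdio.h>

    /* Fortran BLAS routine; all arguments are passed by pointer */
    extern void dgemv_(const char *trans, const int *m, const int *n,
                       const double *alpha, const double *a, const int *lda,
                       const double *x, const int *incx,
                       const double *beta, double *y, const int *incy);

    int main(void)
    {
      double A[4] = {1.0, 2.0, 3.0, 4.0};  /* 2x2 matrix, column-major: [1 3; 2 4] */
      double x[2] = {1.0, 1.0};
      double y[2] = {0.0, 0.0};
      int    m = 2, n = 2, inc = 1;
      double alpha = 1.0, beta = 0.0;

      dgemv_("N", &m, &n, &alpha, A, &m, x, &inc, &beta, y, &inc);
      printf("y = [%g, %g]\n", y[0], y[1]);  /* expect [4, 6] */
      return 0;
    }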

    Once your configure has completed successfully, just run the make command to build the PETSc libraries.
    make
    After they have finished building, confirm that they have built successfully with
    make test
  5. Test the installation of MPI. Define your MPIEXEC shell variable as
    export MPIEXEC=$PETSC_DIR/$PETSC_ARCH/bin/mpiexec
    or you can add the necessary directory to your path
    export PATH=$PATH:$PETSC_DIR/$PETSC_ARCH/bin/
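    If you want these variables set automatically every time you log in, one option (assuming you use BASH) is to append the export lines to your ~/.bashrc, for example:
    echo 'export PETSC_DIR=$HOME/soft/petsc-3.3' >> ~/.bashrc
    echo 'export PETSC_ARCH=arch-cs595' >> ~/.bashrc
    echo 'export MPIEXEC=$PETSC_DIR/$PETSC_ARCH/bin/mpiexec' >> ~/.bashrc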
    We also need somewhere to put projects that we are working on. Let's put them in a directory in $HOME:
    mkdir $HOME/cs595; cd $HOME/cs595
    cp $PETSC_DIR/externalpackages/mpich2-1.4.1p1/examples/hellow.c .
    cp $PETSC_DIR/src/ksp/ksp/examples/tutorials/makefile .
    At this point, the makefile you have copied into your cs595 folder is full of a lot of stuff that you don't need. Rather than strip out all the unnecessary stuff, let's just put in the one thing we will need - a way to compile the hellow.c file that is now in the directory. Look through the makefile and find a logical place to add the following lines (hint: they should blend in with what's around them)
    hellow: hellow.o chkopts
    -${CLINKER} -o hellow hellow.o ${PETSC_KSP_LIB}
    ${RM} hellow.o
    Note that the spaces before the last two lines need to be tabs to be correctly interpreted by make. If you have found a logical spot, you should now be able to compile and link hellow.c with the following command
    make hellow
    which should spit out a bunch of nastiness that I'm not going to copy here. When that is completed successfully you should be able to type
    $MPIEXEC -n 3 ./hellow
    and get some awesome output looking like
    Hello world from process 0 of 3
    Hello world from process 1 of 3
    Hello world from process 2 of 3
    although the lines may not appear in exactly that order, because output from parallel processes arrives in a nondeterministic order, as we discussed briefly in lab on Thursday.
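
    In case you're curious what such a program looks like, here is a minimal MPI example in the same spirit as hellow.c (the actual hellow.c shipped with MPICH may differ slightly); it can be compiled and linked the same way as hellow.c:
    /* hello_sketch.c - minimal MPI "hello world" (illustration only) */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
      int rank, size;

      MPI_Init(&argc, &argv);                 /* start up MPI */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
      MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

      printf("Hello world from process %d of %d\n", rank, size);

      MPI_Finalize();                         /* shut down MPI */
      return 0;
    }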

    At this point you can complete part 2 of the homework. I would also recommend trying the same compiling exercise we just went through with the file cpi.c. You can copy that into the cs595 directory with the command
    cp $PETSC_DIR/externalpackages/mpich2-1.4.1p1/examples/cpi.c .
    You will need to add a new rule to the makefile again; it will look like the one above, only with cpi in place of hellow (see the sketch just below).
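
    For concreteness, the rule should look roughly like this (again, the indentation before the last two lines must be a tab):
    cpi: cpi.o chkopts
    -${CLINKER} -o cpi cpi.o ${PETSC_KSP_LIB}
    ${RM} cpi.o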
  6. Test the installation of PETSc. You've already done this earlier when you executed make test in PETSC_DIR. Now you need to make sure you can build the exercise that your homework is based on. The following commands will take you into the PETSc tutorials on how parallel vector objects work and build the simplest example available.
    cd $PETSC_DIR/src/vec/vec/examples/tutorials
    make ex2
    $MPIEXEC -n 3 ./ex2
    This of course assumes that you declared MPIEXEC as described earlier. If this went successfully you should see
    Process [0]
    4
    Process [1]
    4
    4
    Process [2]
    4
    3
    2
    At this point you can complete the first part of problem 3.
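
    If you want to see what the code behind a parallel vector looks like, here is a minimal sketch of creating a PETSc Vec that is distributed across all processes. This is not ex2.c itself, just the basic pattern, and it is compiled and linked the same way as the tutorial examples:
    /* vec_sketch.c - create, fill, and view a parallel PETSc vector (illustration only) */
    #include <petscvec.h>

    int main(int argc, char **argv)
    {
      Vec            x;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, (char*)0, (char*)0);CHKERRQ(ierr);
      ierr = VecCreate(PETSC_COMM_WORLD, &x);CHKERRQ(ierr);   /* vector shared by all processes */
      ierr = VecSetSizes(x, PETSC_DECIDE, 12);CHKERRQ(ierr);  /* 12 global entries; PETSc picks the local split */
      ierr = VecSetFromOptions(x);CHKERRQ(ierr);
      ierr = VecSet(x, 4.0);CHKERRQ(ierr);                    /* set every entry to 4 */
      ierr = VecView(x, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
      ierr = VecDestroy(&x);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return 0;
    }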
  7. You are encouraged to check out the various runtime options that PETSc provides for users, either for debugging or profiling their code. Go to the KSP tutorials (KSP is short for Krylov Subspace Method for solving sparse linear systems) and try ex5.c:
    cd $PETSC_DIR/src/ksp/ksp/examples/tutorials
    make ex5
    ./ex5 -help
    That last command should give you a lot of stuff printed to the screen. These are options that you can pass to PETSc programs. It may be easier to read them all by redirecting the output to a file and then reading that file separately:
    ./ex5 -help > PetscOpts
    less PetscOpts
    Mess around. Try to figure out what the options do, and maybe which options will cause a PETSC_ERROR that crashes the program (don't worry, you won't hurt your computer). Don't spend all day on this as there are far more options than you have time to look at; just pick a couple that you feel you understand and try them out. As Hong suggests, -mat_view_info and -ksp_view are both useful debugging tools.
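    For example, here are a couple of runs you might try; -ksp_monitor, -ksp_view, -ksp_type, and -pc_type are standard PETSc options, though the exact output will depend on the example and the version:
    ./ex5 -ksp_monitor -ksp_view
    $MPIEXEC -n 2 ./ex5 -ksp_type gmres -pc_type jacobi -ksp_monitor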