R modules: Super Exciting New Updates

This is revised Monday, July 24, 2017.

Some of you have reported segmentation faults during the past week. We learned they come from 3 different problems. First, some people have R packages compiled in their user accounts. These fall out of date with the R packages we provide, causing incompatibility. Second, some new compute nodes came on line during the past 2 weeks and some are missing support libraries. When those are missing, the R packages that rely on them (such as our beloved kutils or rockchalk) fail to load. This was a tricky problem because it only happened on some nodes, which only became available recently. Third, I did not understand the gravity and drama involved with the user account setup and the Rmpi package.

Let's cut to the chase. What should users do now?

Step 1. Remove module statements from submission scripts.

Those statements are not having the anticipated effect, and they will destroy the benefits of the changes I suggest next.

I'm told this problem does not affect all MPI jobs, just ones that use R and the style of parallelization that we understand.

Step 2. Configure your individual user account to load appropriate modules.

Some modules should be available for every session launched for your account, on every node. These have to be THE SAME in all nodes and cores launched by the job. There are 2 ways to get this done.

Option 1. The easy way: Use my R package module stanza, crmda_env.sh

In the cluster file system, I have a file /panfs/pfs.local/work/crmda/tools/bin/crmda_env.sh with contents like this:

#!/bin/bash

module purge
module load legacy
module load emacs
module use /panfs/pfs.local/work/crmda/tools/modules
module load Rstats/3.3
OMPI_MCA_btl=^openib
export OMPI_MCA_btl

I say "like this" because I may insert new material there. The last 2 lines were inserted July 22, 2017. The goal is to conceal all of the details from users by putting them in a module that's loaded, such as Rstats/3.3. When we are ready to transition to R-3.4, I'll change that line accordingly.

In your user account, there are 2 files where you can incorporate this information: ~/.bashrc and ~/.bash_profile. Add a line like this at the end of each file:

source /panfs/pfs.local/work/crmda/tools/bin/crmda_env.sh

I'll show you my ~/.bashrc file so you can see the larger context:

# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

# User specific aliases and functions
export LS_COLORS=$LS_COLORS:'di=0;33:'
# alert for rm, cp, mv
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# color and with classification
alias ls='ls -F --color=auto'
alias ll='ls -alF'

source /panfs/pfs.local/work/crmda/tools/bin/crmda_env.sh

I strongly urge all of our cluster users to include the "alert for rm, cp, mv" piece. It causes the system to ask for confirmation before deleting or replacing files. But that's up to you. I also have an adjustment to the colors of the directory listing.

I insert the same "source" line at the end of ~/.bash_profile as well.

On 2017-07-23, I made a minor edit in my .bashrc and .bash_profile files:

export PATH=/panfs/pfs.local/work/crmda/tools/bin:$PATH
source crmda_env.sh

This is equivalent, but it gives me a side benefit. Instead of calling source with the full path, I inserted that bin folder into my PATH, which means I can use any script in that folder without typing out the full path. When I find very handy shell scripts that I use often, and I think other users should have access to them as well, I will put them in that folder. For example, if you look there today, you should see "crmda_env-test.sh", which is the new one I'm working on. When that's ready, it will become "crmda_env.sh" and the old one will be renamed "crmda_env-2017xxxx.sh", where xxxx is the date on which it becomes the old one.
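If you want to double-check that the PATH change took effect, something like this works (a sketch; the exact contents of that bin folder will change over time):

# confirm the bin folder is on the PATH
echo "$PATH" | tr ':' '\n' | grep crmda/tools/bin

# see which scripts are available there by name
ls /panfs/pfs.local/work/crmda/tools/bin/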

Option 2. Add your own module statements in ~/.bashrc and ~/.bash_profile

Make sure you put the same modules in both ~/.bashrc and ~/.bash_profile. Look at the file /panfs/pfs.local/work/crmda/tools/bin/crmda_env.sh to get ideas of what you need. For example, run

$ cat /panfs/pfs.local/work/crmda/tools/bin/crmda_env.sh

You might consider creating a file similar to /panfs/pfs.local/work/crmda/tools/bin/crmda_env.sh in your account. Then source that at the end of your ~/.bashrc and ~/.bash_profile. If you do that, they will always stay consistent.
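Here is a minimal sketch of that idea. The file name my_env.sh is hypothetical, and the module statements are copied from crmda_env.sh; adjust them to what your jobs actually need:

# ~/my_env.sh (hypothetical name) -- keep all module loads in one place
module purge
module load legacy
module use /panfs/pfs.local/work/crmda/tools/modules
module load Rstats/3.3

Then add the same line at the end of both ~/.bashrc and ~/.bash_profile:

source ~/my_env.sh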

Frequently Asked Questions that I anticipate

Can you explain why the segmentation fault happens?

Answer: Yes, I have some answers.

Here is the basic issue. Suppose you have a submission script that looks like this:

#!/bin/sh
#
#
#MSUB -N RParallelHelloWorld 
#MSUB -q crmda
#MSUB -l nodes=1:ppn=11:ib
#MSUB -l walltime=00:50:00
#MSUB -M your-name-here@ku.edu
#MSUB -m bea

cd $PBS_O_WORKDIR

module purge 
module load legacy 
module load emacs 
module use /panfs/pfs.local/work/crmda/tools/modules 
module load Rstats/3.3

mpiexec -n 1 R --vanilla -f parallel-hello.R 

I thought we were supposed to do that, until last week. Here's what is wrong with it.

The environment specifies Rstats/3.3, but that ONLY applies to the "master" node in the R session. It does not apply to the "child" nodes that are spawned by Rmpi. When those nodes are spawned, they are completely separate shell sessions, and they are launched with the settings in ~/.bash_profile. If your ~/.bash_profile does not have the required modules, then the new nodes are going to have the system default R session, and guess what you get with that? The wrong shared libraries for just about everything. Possibly you get a different version of Rmpi or Rcpp loaded, and when the separate nodes start talking to each other, they notice the difference and sometimes crash.
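One way to see what the spawned workers will inherit is to start a fresh login shell and look around, since that is essentially what each worker does (a sketch; run it from a login node):

# a fresh login shell reads ~/.bash_profile, just like a spawned worker does
bash -l -c 'which R; R --version | head -n 1'

# if that does not show the Rstats/3.3 build, the spawned workers will not get it either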

As a result, the submission scripts, for example, in hpcexample/Ex65-R-parallel, will now look like this:

#!/bin/sh
#
#
#MSUB -N RParallelHelloWorld
#MSUB -q crmda
#MSUB -l nodes=1:ppn=11:ib
#MSUB -l walltime=00:50:00
#MSUB -M pauljohn@ku.edu
#MSUB -m bea

cd $PBS_O_WORKDIR

## Please check your ~/.bash_profile to make sure
## the correct modules will be loaded with new shells.
## See discussion:
## http://www.crmda.dept.ku.edu/timeline/archives/184

mpiexec -n 1 R --vanilla -f parallel-hello.R

Why is this needed for both ~/.bashrc and ~/.bash_profile?

Answer: You ask a lot of questions.

The short answer is "there's some computer nerd detail". The long answer is, "when you log in on a system, the settings in ~/.bash_profile are used. That is a 'login shell'. If you are already in, and you run a command that launches a new shell inside your session, for example by running "bash", then your new shell is not a 'login shell'. It will be created with the settings in ~/.bashrc."

If you will never run an interactive session, and never interact with R via Emacs or RStudio, then it might be enough to change ~/.bash_profile. If you think you might ever want to log in and run a small test case, then you should have the same settings in both ~/.bashrc and ~/.bash_profile.
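You can see the difference for yourself (a sketch; each command starts a throwaway shell and shows which modules it loaded):

# login shell: reads ~/.bash_profile
bash -l -c 'module list'

# interactive non-login shell: reads ~/.bashrc
bash -i -c 'module list'

# if the two lists differ, your two dot files are out of sync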

What are the benefits of Option 1?

Answer: Over time, the CRMDA R setup may evolve. Right now, I've already built a setup for Rstats/3.4. After we do some bug-testing, I can easily update the shell file (/panfs/pfs.local/work/crmda/tools/bin/crmda_env.sh) to use that. If you maintain your own modules, then you have to do that yourself.

What are the dangers of Option 1?

Answer: If I get it wrong, then you get it wrong.

Does this mean you need to revise all of the code examples in the hpcexample set (https://gitlab.crmda.ku.edu/crmda/hpcexample)?

Answer: Yes. It has not been a good week. And it looks like it won't be a good week again.

Why didn't we hear about this in the old community cluster, or in CRMDA's HPC cluster?

Answer: Because "we" were in control of the cluster settings and user accounts, the cluster administrators would work all of this out for us and they inserted the settings in the shell for us. Some of you may open your ~/.bashrc or ~/.bash_profile and see the old cluster settings. When I opened mine on 2017-07-07, I noticed that I had modules loaded from the old cluster. I also noticed I'd made an error of editing ~/.bashrc and not ~/.bash_profile.

Why didn't we see these problems before?

Answer: Dumb luck.

In the new CRC-supervised cluster, some modules are loaded automatically. Because those modules were more-or-less consistent with what we need, the different environments were not causing segmentation faults. However, when we update R packages like Rstan, Rcpp, and, well, anything with a lot of shared libraries, we hit the crash.

I notice you don't have orterun in your submission example. Do you really mean mpiexec?

Answer: The documentation says that orterun, mpiexec, and mpirun are all interchangeable. I rather enjoyed orterun; it sounds fancy. However, it appears mpiexec is more widely used. There are more advanced tools (such as mpiexec.hydra, which we might start using).

In your submission script, why don't you specify $PBS_NODEFILE any more?

Answer: The program mpiexec is compiled in a way that makes this no longer necessary. It is not harmful to specify $PBS_NODEFILE, but it is not needed either. The hpcexamples will get cleaned up. The CRMDA cluster documentation will need to be corrected.
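For the record, here is the contrast (a sketch; the old form is harmless but redundant when Open MPI is built with Torque/PBS support):

# old style: pass the node list explicitly
mpiexec -n 1 -machinefile $PBS_NODEFILE R --vanilla -f parallel-hello.R

# new style, as in the updated hpcexample scripts
mpiexec -n 1 R --vanilla -f parallel-hello.R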
