R
Description
R is a free software environment for statistical computing and graphics.
Note: File a support ticket to request installation of additional libraries.
Required Modules
Serial
- R
Parallel (MPI)
- Rmpi
The "Rmpi" module enables access to the version of R that provides the Rmpi library for large-scale multi-node parallel computations.
System Variables
- HPC_R_DIR - installation directory
- HPC_R_BIN - executable directory
- HPC_R_LIB - library directory
- HPC_R_INCLUDE - includes directory
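These variables are set when the module is loaded, so scripts can use them instead of hard-coded paths. A minimal sketch, assuming the standard module workflow:
module load R
echo $HPC_R_DIR   # installation prefix
ls $HPC_R_BIN     # location of the R and Rscript executables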
How To Run
R can be run on the command line, or through the batch system, using 'Rscript myscript.R' or 'R CMD BATCH myscript.R'. For script development or visualization, use RStudio (the 'rstudio' environment module and command) on the gui.rc.ufl.edu or gui1.rc.ufl.edu servers.
- Notes and Warnings
- Java
rJava users need to load the java module manually with 'module load java/1.7.0_79'.
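For example, before starting R (a minimal sketch; only the load order is the point being illustrated):
module load java/1.7.0_79
module load R
Rscript -e 'library(rJava); .jinit()'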
- TMPDIR
If temporary files are produced, they may fill up the memory-backed disks on HPG2 nodes and cause node and job failures. Use something like
mkdir -p tmp
export TMPDIR=$(pwd)/tmp
in your job script to prevent this, and launch your job from the respective working directory rather than from your home directory.
- Tasks vs Cores for parallel runs
Parallel threads in an R job will be bound to the same CPU core even if multiple ntasks are specified in the job script. Use --cpus-per-task so that the R 'parallel' package can actually run on multiple cores. For example, for an 8-thread parallel job, use the following resource request in your job script:
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
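Inside R, the requested core count can then be picked up from the SLURM environment. A minimal sketch, assuming the standard SLURM_CPUS_PER_TASK variable (the toy workload is purely illustrative):
library(parallel)

# Use the core count granted by SLURM; fall back to 1 if the variable is unset.
n.cores <- as.integer(Sys.getenv("SLURM_CPUS_PER_TASK", unset = "1"))

# Toy workload: square 100 numbers across the requested cores.
results <- mclapply(1:100, function(x) x^2, mc.cores = n.cores)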
PBS Script Examples
See the R_PBS page for R PBS script examples.
Performance
We benchmarked our most recently installed R version (3.0.2), built with the bundled BLAS/LAPACK libraries, against the newest release as of April 2015 (3.2.0), built with the Intel MKL libraries, on HiPerGator1 hardware (2.4 GHz AMD Abu Dhabi CPUs) and on the 2.3 GHz Intel Haswell CPUs we are testing for possible use in HiPerGator2. The results are presented in the R Benchmark 2.5 table.
FAQ
- Q: When I submit a job using the 'parallel' package, all threads seem to share a single CPU core instead of running on the separate cores I requested.
- A: On SLURM you need to use --cpus-per-task to specify the number of available cores, e.g.
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=12
will allow mclapply or other functions from the 'parallel' package to run on all requested cores; see the mclapply sketch under "Tasks vs Cores for parallel runs" above.
Rmpi Example
Example of using the Rmpi package to run MPI jobs under R 2.14.1+.
Download the raw source of the rmpi_test.R file:
# Load the R MPI package if it is not already loaded.
if (!is.loaded("mpi_initialize")) {
library("Rmpi")
}
# Spawn as many slaves as possible
mpi.spawn.Rslaves()
# In case R exits unexpectedly, have it automatically clean up
# resources taken up by Rmpi (slaves, memory, etc...)
.Last <- function(){
    if (is.loaded("mpi_initialize")){
        if (mpi.comm.size(1) > 0){
            print("Please use mpi.close.Rslaves() to close slaves.")
            mpi.close.Rslaves()
        }
        print("Please use mpi.quit() to quit R")
        .Call("mpi_finalize")
    }
}
# Tell all slaves to return a message identifying themselves
mpi.remote.exec(paste("I am",mpi.comm.rank(),"of",mpi.comm.size()))
# Tell all slaves to close down, and exit the program
mpi.close.Rslaves()
mpi.quit()
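A job script to launch this example might look like the following. This is a sketch; the launcher invocation and resource values are assumptions (since mpi.spawn.Rslaves() spawns the workers, only a single master process is started):
#!/bin/bash
#SBATCH --job-name=rmpi_test
#SBATCH --nodes=2
#SBATCH --ntasks=16
#SBATCH --time=01:00:00

module load Rmpi

# Start one master process; Rmpi spawns the remaining ranks itself.
mpiexec -n 1 Rscript rmpi_test.R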
Installed Libraries
Note: Many of the packages in the R library are installed as part of the Bioconductor meta-library. The list is generated from the default R version.