Difference between revisions of "R"

From UFRC

Revision as of 23:02, 18 July 2016

Description

R website  

R is a free software environment for statistical computing and graphics.

Note: File a support ticket to request installation of additional libraries.

Required Modules

modules documentation

Serial

  • R
Note: rJava users need to load the java module manually with 'module load java/1.6.0_3'.

Parallel (MPI)

  • Rmpi

The "Rmpi" module enables access to the version of R that provides the Rmpi library for large-scale multi-node parallel computations.

System Variables

  • HPC_R_DIR - installation directory
  • HPC_R_BIN - executable directory
  • HPC_R_LIB - library directory
  • HPC_R_INCLUDE - includes directory
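These variables can be used when building code against the cluster's R installation. The following is an illustrative sketch; 'myprog.c' is a hypothetical source file, and the exact compiler flags (including '-lR') are assumptions to adapt to your build:

```shell
# Compile a hypothetical C source against the R headers and libraries
# using the environment variables set by the R module
gcc -I$HPC_R_INCLUDE -L$HPC_R_LIB -o myprog myprog.c -lR
```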

How To Run

R can be run on the command line or through the batch system with the 'Rscript myscript.R' or 'R CMD BATCH myscript.R' command. For script development or visualization, use RStudio ('rstudio' environment module and command) on the gui.rc.ufl.edu or gui1.rc.ufl.edu servers.
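As a minimal sketch of the two batch-style invocations, assuming a script named 'myscript.R' (a hypothetical file name) and the R environment module described above:

```shell
module load R

# Run the script and print its output to the terminal
Rscript myscript.R

# Run the script and capture its output in myscript.Rout
R CMD BATCH myscript.R
```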

PBS Script Examples

See the R_PBS page for R PBS script examples.

Performance

We have benchmarked our most recent installed R version (3.0.2), built with the included BLAS/LAPACK libraries, against the newest release as of April 2015 (3.2.0), built with the Intel MKL libraries. The tests were run on HiPerGator1 hardware (2.4 GHz AMD Abu Dhabi CPUs) and on the 2.3 GHz Intel Haswell CPUs we are testing for possible use in HiPerGator2. The results are presented in the R Benchmark 2.5 table.

FAQ

  • Q: When I submit a job using the 'parallel' package, all threads seem to share a single CPU core instead of running on the separate cores I requested.
    • A: On SLURM you need to use --cpus-per-task to specify the number of cores available to your job. E.g.
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=12

will allow mclapply or other functions from the 'parallel' package to run on all requested cores.
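As a sketch of how this fits together, the R code below picks up the core count granted by the #SBATCH directives above and passes it to mclapply. The environment-variable lookup, the fallback value of 1, and the toy workload are illustrative assumptions, not part of the original answer:

```r
library(parallel)

# Number of cores granted via '#SBATCH --cpus-per-task';
# fall back to 1 if the variable is unset (illustrative assumption)
n.cores <- as.integer(Sys.getenv("SLURM_CPUS_PER_TASK", unset = "1"))

# Run a toy workload spread across all requested cores
results <- mclapply(1:100, function(x) x^2, mc.cores = n.cores)
```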

Rmpi Example

An example of using the Rmpi package to run MPI jobs under R 2.14.1+.

Download the raw source of the rmpi_test.R file.

# Load the R MPI package if it is not already loaded.
if (!is.loaded("mpi_initialize")) {
    library("Rmpi")
    }
                                                                                
# Spawn as many slaves as possible
mpi.spawn.Rslaves()
                                                                                
# In case R exits unexpectedly, have it automatically clean up
# resources taken up by Rmpi (slaves, memory, etc...)
.Last <- function(){
    if (is.loaded("mpi_initialize")){
        if (mpi.comm.size(1) > 0){
            print("Please use mpi.close.Rslaves() to close slaves.")
            mpi.close.Rslaves()
        }
        print("Please use mpi.quit() to quit R")
        .Call("mpi_finalize")
    }
}

# Tell all slaves to return a message identifying themselves
mpi.remote.exec(paste("I am",mpi.comm.rank(),"of",mpi.comm.size()))

# Tell all slaves to close down, and exit the program
mpi.close.Rslaves()
mpi.quit()
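A job script for this example might look like the following sketch. The resource request, walltime, and launch line are assumptions to adapt to your own group's allocation; see the R_PBS page for maintained examples:

```shell
#!/bin/bash
#PBS -N rmpi_test
#PBS -l nodes=2:ppn=8
#PBS -l walltime=01:00:00

cd $PBS_O_WORKDIR
module load Rmpi

# Launch a single master process; mpi.spawn.Rslaves() in the
# script then spawns the worker processes on the allocated cores
mpiexec -np 1 R --no-save < rmpi_test.R
```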

Installed Libraries

Note: Many of the packages in the R library shown below are installed as part of the Bioconductor meta-library. The list is generated from the default R version.

Name Description