R

Description

R website: https://www.r-project.org/

R is a free software environment for statistical computing and graphics.

Note: File a support ticket at http://support.rc.ufl.edu to request installation of additional libraries.
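
Before filing a ticket, it can help to check whether the package is already available in the installed R library. A minimal sketch; the package name "ggplot2" is only an example:

# Sketch: check whether a package is already installed before
# requesting it; "ggplot2" is just an example package name.
"ggplot2" %in% rownames(installed.packages())

# Alternatively, try to load it quietly; requireNamespace() returns
# FALSE rather than stopping with an error if the package is absent.
requireNamespace("ggplot2", quietly = TRUE)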

Required Modules

See the modules documentation for instructions on loading environment modules.

Serial

  • R

Parallel (MPI)

  • Rmpi

The "Rmpi" module enables access to the version of R that provides the Rmpi library for large-scale multi-node parallel computations.

System Variables

  • HPC_R_DIR - installation directory
  • HPC_R_BIN - executable directory
  • HPC_R_LIB - library directory
  • HPC_R_INCLUDE - includes directory
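
Inside an R session these variables can be read with Sys.getenv(), assuming the R environment module has been loaded. A minimal sketch:

# Sketch: read the module-provided paths from the environment. These
# values are only set when the R environment module is loaded.
Sys.getenv("HPC_R_DIR")
Sys.getenv("HPC_R_BIN")
Sys.getenv("HPC_R_LIB")
Sys.getenv("HPC_R_INCLUDE")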

Installed Libraries

Note: Many of the packages in the R library are installed as part of the Bioconductor meta-library.



FAQ

  • Q: When I submit the job with N=1 and M=1 it runs, and R allocates the 10 slaves that I want. Is this OK?
    • A: In short, no. This is bad since you are lying to the scheduler about the resources you actually intend to use. We have scripts that will kill your job if they catch it, and we tend to suspend the accounts of users who make a practice of it. :)
  • Q: The actual job I want to run is much larger; anywhere from 31 to 93 processors are desired. Is it OK to request this many processors?
    • A: That depends on the level of investment from your PI. If you ask for more processors than your group's core allocation, which depends on the investment level, you will essentially be borrowing cores from other groups and may wait an extended period of time in the queue before your job runs. Groups are allowed to run on up to 10x their core allocation provided the resources are available. If you ask for more than 10x your group's core allocation, the job will be blocked indefinitely.
  • Q: Do I need the number of nodes requested to be correct or can I just have R go grab slaves after the job is submitted with N=1 and M=1?
    • A: Your resource request must be consistent with what you actually intend to use as noted above.
  • Q: Is it better to request a large number of nodes for a shorter period of time or fewer nodes for a longer period of time (concretely, say, 8 nodes for 40 hours versus 16 nodes for 20 hours) in terms of getting through the queue?
    • A: Do not confuse "nodes" with "cores/processors". Each "node" is a physical machine with between 4 and 48 cores. Your MPI threads run on "cores", which may all be in the same "node" or be spread among multiple nodes. You should ask for the number of cores you need and spread them among as few nodes as possible, unless you have a good reason to do otherwise. Thus you should generally ask for things like
       #PBS -l nodes=1:ppn=8    (we have lots of 8p nodes)
       #PBS -l nodes=1:ppn=12  (we have a number of 12p also)

      Multiples of the above work as well, so you might ask for nodes=3:ppn=8 if you want to run 24 threads on 24 different cores. In the R model there is a master/slave paradigm, so you need one master thread to manage the "slave" threads. The master thread likely accumulates little CPU time, so you can neglect it: tell the scheduler that you want nodes=3:ppn=8 and tell R to spawn 24 children. This is a white lie that will do little harm. However, if it turns out that the master accumulates significant CPU time and your job gets killed by our rogue-process killer, you can ask for the resources as follows

      #PBS -l nodes=1:ppn=1:infiniband+3:ppn=8:infiniband
      
      This will allocate 1 core on a separate node for the master thread, and the slave threads will then be allocated on 3 additional nodes with at least 8 cores each (see the sketch just below for matching the number of spawned slaves to the allocation).
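
If you prefer the number of spawned slaves to follow whatever the scheduler actually granted rather than a hard-coded count, one possible approach is to count the allocated cores at run time. This is a sketch only; it assumes a PBS environment where PBS_NODEFILE points to a file listing one line per allocated core:

# Sketch: spawn one slave per scheduler-allocated core, keeping one
# core free for the master process. Assumes PBS sets PBS_NODEFILE.
library(Rmpi)
nodefile <- Sys.getenv("PBS_NODEFILE")
ncores <- if (nzchar(nodefile)) length(readLines(nodefile)) else 2
mpi.spawn.Rslaves(nslaves = max(ncores - 1, 1))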

Rmpi Example

An example of using the parallel (Rmpi) module to run MPI jobs under R 2.14.1 and later.

The example script rmpi_test.R:

# Load the R MPI package if it is not already loaded.
if (!is.loaded("mpi_initialize")) {
    library("Rmpi")
}
                                                                                
# Spawn as many slaves as possible
mpi.spawn.Rslaves()
                                                                                
# In case R exits unexpectedly, have it automatically clean up
# resources taken up by Rmpi (slaves, memory, etc...)
.Last <- function(){
    if (is.loaded("mpi_initialize")){
        if (mpi.comm.size(1) > 0){
            print("Please use mpi.close.Rslaves() to close slaves.")
            mpi.close.Rslaves()
        }
        print("Please use mpi.quit() to quit R")
        .Call("mpi_finalize")
    }
}

# Tell all slaves to return a message identifying themselves
mpi.remote.exec(paste("I am",mpi.comm.rank(),"of",mpi.comm.size()))

# Tell all slaves to close down, and exit the program
mpi.close.Rslaves()
mpi.quit()
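
The script above only has each slave report its rank. As a follow-up sketch (not part of the original example), Rmpi's mpi.parSapply() can distribute actual work across the spawned slaves:

# Follow-up sketch: split a trivial computation among the slaves with
# mpi.parSapply(); results are gathered back on the master.
library(Rmpi)
mpi.spawn.Rslaves()
squares <- mpi.parSapply(1:100, function(x) x^2)
print(sum(squares))
mpi.close.Rslaves()
mpi.quit()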