R

From UFRC
Latest revision as of 14:41, 20 September 2024

Description

R website: http://www.r-project.org/

R is a free software environment for statistical computing and graphics.

Note: File a support ticket (http://support.rc.ufl.edu) to request installation of additional libraries.

Environment Modules

Run module spider R to find out what environment modules are available for this application.

System Variables

  • HPC_R_DIR - installation directory
  • HPC_R_BIN - executable directory
  • HPC_R_LIB - library directory
  • HPC_R_INCLUDE - includes directory
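These variables are set in the shell by the R environment module, not by R itself. As a quick check, a minimal sketch (assuming the module has already been loaded before starting the R session) prints their values from within R:

# A minimal check of the module-provided paths from within R.
# Assumes `module load R` was run before this session was started.
for (v in c("HPC_R_DIR", "HPC_R_BIN", "HPC_R_LIB", "HPC_R_INCLUDE")) {
  cat(v, "=", Sys.getenv(v), "\n")
}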

How To Run

R can be run on the command line (or through the batch system) using the 'Rscript myscript.R' or 'R CMD BATCH myscript.R' commands. For script development or visualization, the RStudio GUI application can be used; see the respective documentation for details. Alternatively, an instance of RStudio Server can be started in a job, and you can then connect to it through an SSH tunnel from a web browser on your local computer.
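As an illustration, a minimal script of this kind might look like the following (the file name myscript.R and its contents are examples only, not something shipped with the R installation):

# myscript.R -- illustrative script, run as: Rscript myscript.R input.csv
# (the file name and contents are examples only)
args <- commandArgs(trailingOnly = TRUE)      # arguments passed after the script name
if (length(args) < 1) stop("Usage: Rscript myscript.R <input.csv>")
dat <- read.csv(args[1])                      # read the input table
print(summary(dat))                           # print a basic summary of each column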

Notes and Warnings
  • The parallel::detectCores() function will return the total number of cores on a compute node and not the number of cores assigned to your job by the scheduler. Instead, use something like
numCores = as.integer(Sys.getenv("SLURM_CPUS_ON_NODE"))

to obtain the number of CPU cores 'X' requested in your job script with the directive:

#SBATCH --cpus-per-task=X
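For example, a short sketch of reading this value inside the R script, falling back to a single core when the variable is not set (e.g. when testing outside a Slurm job):

# Worker count granted by the scheduler; defaults to 1 outside of a Slurm job.
numCores <- as.integer(Sys.getenv("SLURM_CPUS_ON_NODE", unset = "1"))
cat("Running with", numCores, "CPU cores\n")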
  • Default RData format

In R 3.6.0 the default serialization format used to save RData files was changed to version 3 (RDX3), so R versions prior to 3.5.0 will not be able to open such files. Keep this in mind if you copy RData files from HiPerGator to an external system with an older R installation.
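If a file must remain readable by older R versions, base R's save() accepts a version argument; a minimal sketch (the object and file name are examples only):

# Save an example object in the older version-2 format so pre-3.5.0 R can load it.
x <- data.frame(a = 1:3, b = c("u", "v", "w"))
save(x, file = "x_compat.RData", version = 2)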

  • Java

rJava users need to load the java module manually with 'module load java/1.7.0_79'.

  • TMPDIR

If temporary files are produced, they may fill up memory disks on HPG2 nodes and cause node and job failures. Use something like

mkdir -p tmp
export TMPDIR=$(pwd)/tmp

in your job script to prevent this, and launch your job from that directory rather than from your home directory.

For users of PHI and FERPA: It is particularly important to set your working and TMPDIR directories to be in your project's PHI/FERPA configured directory in /blue when working with R. Writing files to /home or $TMPDIR could expose restricted data to unauthorized users.
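A quick way to confirm that R has picked up the redirected location (a sketch; TMPDIR must be exported before R starts, since tempdir() is fixed at session startup):

# Confirm that R's temporary directory points at the job-local tmp directory,
# not the node's memory-backed /tmp. TMPDIR must be set before R starts.
cat("TMPDIR  =", Sys.getenv("TMPDIR"), "\n")
cat("tempdir =", tempdir(), "\n")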
  • Tasks vs Cores for parallel runs

Parallel threads in an R job will be bound to the same CPU core even if multiple ntasks are specified in the job script. Use cpus-per-task to use the R 'parallel' package correctly. For example, for an 8-thread parallel job use the following resource request in your job script:

#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8

See the single-threaded and multi-threaded examples on the Sample SLURM Scripts page for more details.
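Inside the R script itself, the matching worker count can then be used with the 'parallel' package. A minimal sketch under the 8-CPU request above (the workload shown is only a placeholder):

# Placeholder workload run on the cores granted via --cpus-per-task=8.
library(parallel)
numCores <- as.integer(Sys.getenv("SLURM_CPUS_ON_NODE", unset = "1"))
out <- mclapply(1:100, function(i) sqrt(i), mc.cores = numCores)   # toy computation
cat("Computed", length(out), "results on", numCores, "cores\n")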

Job Script Examples

Example job script for running an R script:

#!/bin/bash
#SBATCH --job-name=R_test   #Job name	
#SBATCH --mail-type=END,FAIL   # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=ENTER_YOUR_EMAIL_HERE   # Where to send mail	
#SBATCH --ntasks=1
#SBATCH --mem=1gb   # Total memory for the job
#SBATCH --time=00:05:00   # Walltime
#SBATCH --output=r_job.%j.out   # Name output file 
#Record the time and compute node the job ran on
date; hostname; pwd
#Use modules to load the environment for R
module load R

#Run R script 
Rscript myRscript.R

date

Performance

We have benchmarked our most recent installed R version (3.0.2), built with the included BLAS/LAPACK libraries, against the newest (as of April 2015) release, 3.2.0, built with Intel MKL libraries, on HiPerGator1 hardware (AMD Abu Dhabi 2.4 GHz CPUs) and on the Intel Haswell 2.3 GHz CPUs we are testing for possible use in HiPerGator2. The results are presented in the R Benchmark 2.5 table.

Rmpi Example

See the R MPI Example page for an example of using Rmpi code.
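For orientation, a minimal Rmpi sketch (adapted from an earlier revision of this page; the spawn count and the exact modules to load depend on the installed R/Rmpi build) looks like this:

# Minimal Rmpi sketch: spawn workers, have each identify itself, then shut down.
# Run inside a batch job with the MPI-enabled R build and its MPI module loaded.
library(Rmpi)
mpi.spawn.Rslaves()                            # spawn as many workers as the job allows
mpi.remote.exec(paste("I am", mpi.comm.rank(), "of", mpi.comm.size()))
mpi.close.Rslaves()                            # shut the workers down cleanly
mpi.quit()                                     # finalize MPI and exit R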

Installed Libraries

You can install your own libraries to use with R. These are stored under your home directory. For details, visit our Applications FAQ and see the section "How do I install R packages?".

Make sure the directory for that version of R exists, or R will try to install into a system path and fail. For example, for R/4.3 run the following command before attempting to install a package:

mkdir ~/R/x86_64-pc-linux-gnu-library/4.3

You can set a custom library path with the R_LIBS_USER environment variable. From https://cran.r-project.org/web/packages/startup/vignettes/startup-intro.html:

"R_LIBS_USER - user's library path, e.g. R_LIBS_USER=~/R/%p-library/%v is the folder specification used by default on all platforms and and R version. The folder must exist, otherwise it is ignored by R. The %p (platform) and %v (version) parts are R-specific conversion specifiers."

To see a list of installed libraries in the currently loaded version of R:

$ R
> installed.packages()

Note: Many of the packages in the R library are installed as part of the Bioconductor meta-library. The list is generated from the default R version.

The full list of installed libraries is maintained on the R_libraries wiki page and is not reproduced here.