[[R|back to the main R page]]
Example of running MPI jobs under R with the Rmpi package.
{{#fileAnchor: rmpi_test.R}} Download raw source of the [{{#fileLink: rmpi_test.R}} rmpi_test.R] file.
<source lang=r>
# Load the R MPI package if it is not already loaded.
if (!is.loaded("mpi_initialize")) {
    library("Rmpi")
}

# Spawn as many slaves as possible
mpi.spawn.Rslaves()

# In case R exits unexpectedly, have it automatically clean up
# resources taken up by Rmpi (slaves, memory, etc...)
.Last <- function(){
    if (is.loaded("mpi_initialize")){
        if (mpi.comm.size(1) > 0){
            print("Please use mpi.close.Rslaves() to close slaves.")
            mpi.close.Rslaves()
        }
        print("Please use mpi.quit() to quit R")
        .Call("mpi_finalize")
    }
}

# Tell all slaves to return a message identifying themselves
mpi.remote.exec(paste("I am", mpi.comm.rank(), "of", mpi.comm.size()))

# Tell all slaves to close down, and exit the program
mpi.close.Rslaves()
mpi.quit()
</source>
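The test script above only asks each spawned worker to identify itself. As a minimal sketch of distributing actual work, assuming the same Rmpi environment and using a toy workload (squaring the numbers 1 to 1000, chosen here only for illustration), a computation can be spread across the slaves with <code>mpi.parSapply</code> before they are closed:

<source lang=r>
# Minimal sketch with a hypothetical workload: square the numbers 1..1000
# in parallel across the spawned slaves and sum the results on the master.
library("Rmpi")

mpi.spawn.Rslaves()

# mpi.parSapply splits the input vector among the slaves, applies the
# function to each piece, and returns the combined result to the master.
results <- mpi.parSapply(1:1000, function(x) x^2)
print(sum(results))

mpi.close.Rslaves()
mpi.quit()
</source>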
Example SLURM job script that runs the rmpi_test.R script.
<source lang=bash>
#!/bin/sh
#SBATCH --job-name=mpi_job_test       # Job name
#SBATCH --mail-type=ALL               # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=ENTER_YOUR_EMAIL_HERE  # Where to send mail
#SBATCH --ntasks=24                   # Number of MPI ranks
#SBATCH --cpus-per-task=1             # Number of cores per MPI rank
#SBATCH --nodes=2                     # Number of nodes
#SBATCH --ntasks-per-node=12          # How many tasks on each node
#SBATCH --ntasks-per-socket=6         # How many tasks on each socket
#SBATCH --distribution=cyclic:cyclic  # Distribute tasks cyclically on nodes and sockets
#SBATCH --mem-per-cpu=1gb             # Memory per processor
#SBATCH --time=00:05:00               # Time limit hrs:min:sec
#SBATCH --output=mpi_test_%j.out      # Standard output and error log
pwd; hostname; date

echo "Running example Rmpi script. Using $SLURM_JOB_NUM_NODES nodes with $SLURM_NTASKS tasks, each with $SLURM_CPUS_PER_TASK cores."

module load intel/2016.0.109 openmpi/1.10.2 Rmpi/3.3.1

mpiexec Rscript /ufrc/data/training/SLURM/prime/rmpi_test.R

date
</source>
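Assuming the job script is saved as, for example, <code>rmpi_job.sh</code> (a file name used here only for illustration), it can be submitted with <code>sbatch rmpi_job.sh</code>. Standard output and error, including the identification messages from each MPI rank, will be written to the <code>mpi_test_&lt;jobid&gt;.out</code> file named by the <code>--output</code> directive.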