Annotated SLURM Script

From UFRC
Latest revision as of 20:49, 13 May 2024

This is a walk-through for a basic SLURM scheduler job script for the common case of a multi-threaded analysis. If the program you run is single-threaded (it can use only one CPU core), use only the '--ntasks=1' line for the CPU request instead of all three CPU-related lines shown below. Annotations are marked with bullet points. Values in angle brackets are placeholders; replace them with your own values, e.g. change '<JOBNAME>' to something like 'blast_proj22'. More complex job layouts, such as MPI jobs and other situations where a simple number of processor cores is not sufficient, will be covered in additional documentation.

Set the shell to use
#!/bin/bash
Common arguments
  • Name the job to make it easier to see in the job queue
#SBATCH --job-name=<JOBNAME>
Email
Your email address to use for all batch system communications
#SBATCH --mail-user=<EMAIL>
#SBATCH --mail-user=<EMAIL-ONE>,<EMAIL-TWO>
What emails to send
NONE - no emails
ALL - all emails
END,FAIL - only email if the job fails and email the summary at the end of the job
#SBATCH --mail-type=FAIL,END
Standard Output and Error log files
Use file patterns
%j - job id
%A-%a - Array job id (A) and task id (a)
You can also use --error for a separate stderr log
#SBATCH --output <my_job-%j.out>
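As a sketch, stdout and stderr can be split into separate per-job files; the filenames here are illustrative placeholders, not part of the original script:

```shell
#SBATCH --output=my_job-%j.out   # stdout; %j expands to the numeric job ID
#SBATCH --error=my_job-%j.err    # stderr kept in its own per-job file
```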
Number of nodes to use. For all non-MPI jobs this number will be equal to '1'
#SBATCH --nodes=1
Number of tasks. For all non-MPI jobs this number will be equal to '1'
#SBATCH --ntasks=1
Number of CPU cores to use. This number must match the thread-count argument given to the program you run.
#SBATCH --cpus-per-task=4
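To keep the core request and the program's thread count in sync, a common pattern is to read the count SLURM actually granted from the environment instead of hard-coding it. A minimal sketch ('$SLURM_CPUS_PER_TASK' is set by SLURM inside a job; the fallback of 1 is an assumption for running the snippet outside one):

```shell
# Use the core count SLURM granted; fall back to 1 outside a job.
THREADS=${SLURM_CPUS_PER_TASK:-1}
echo "using $THREADS threads"
```

The same variable can then be passed to the analysis program's thread option.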
Total memory limit for the job. The default is 2 gigabytes; units can be specified with mb or gb for megabytes or gigabytes.
#SBATCH --mem=4gb
Job run time as [DAYS-]HOURS:MINUTES:SECONDS
The days field is optional; when used, it is separated from the hours by a dash, e.g. '1-12:00:00' for a day and a half.
#SBATCH --time=72:00:00
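For longer jobs the day field can be more convenient; as a sketch, the same 72 hours could equivalently be requested as:

```shell
#SBATCH --time=3-00:00:00   # 3 days, in SLURM's days-hours:minutes:seconds form
```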
Optional
A group to use if you belong to multiple groups. Otherwise, do not use.
#SBATCH --account=<GROUP>
A job array, which will create many jobs (called array tasks) differing only in the value of the '$SLURM_ARRAY_TASK_ID' variable
#SBATCH --array=<BEGIN-END>
Example of five tasks
#SBATCH --array=1-5
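Inside each array task, the task ID is typically used to pick that task's own input and output files. A minimal sketch (the 'input_N.fa' naming is an assumption for illustration, not part of the original script):

```shell
# SLURM sets SLURM_ARRAY_TASK_ID for each array task; default to 1
# here so the snippet also runs outside a job for illustration.
TASK_ID=${SLURM_ARRAY_TASK_ID:-1}
INPUT="input_${TASK_ID}.fa"
OUTPUT="results_${TASK_ID}.tsv"
echo "task ${TASK_ID}: ${INPUT} -> ${OUTPUT}"
```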

Recommended convenient shell code to put into your job script
  • Add host, time, and directory name for later troubleshooting
date;hostname;pwd

Below is the shell script part - the commands you will run to analyze your data. The following is an example.

  • Load the software you need
module load ncbi_blast
  • Run the program
blastn -db nt -query input.fa -outfmt 6 -out results.tsv -num_threads 4

date
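Put together, the directives and commands above form a complete job script. The following is a sketch assembled from the pieces in this walk-through (placeholders in angle brackets must still be replaced, and it assumes the ncbi_blast module exists on your cluster):

```shell
#!/bin/bash
#SBATCH --job-name=<JOBNAME>
#SBATCH --mail-user=<EMAIL>
#SBATCH --mail-type=FAIL,END
#SBATCH --output=my_job-%j.out
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=4gb
#SBATCH --time=72:00:00

# Record host, time, and directory for later troubleshooting
date;hostname;pwd

module load ncbi_blast
blastn -db nt -query input.fa -outfmt 6 -out results.tsv -num_threads 4

date
```

Submit it with 'sbatch <scriptname>' and monitor it in the queue with 'squeue'.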