Annotated SLURM Script
Revision as of 18:11, 19 August 2016
HiPerGator 2.0 documentation
This is a walk-through of a basic SLURM scheduler job script. Annotations are marked with bullet points. You can click the link below to download the raw job script file without the annotations. Values in brackets are placeholders; replace them with your own values, e.g. change '<job name>' to something like 'blast_proj22'. We will write additional documentation on more complex job layouts for MPI jobs and other situations where a simple number of processor cores is not sufficient.
Download raw source of the [{{#fileLink: run.sh}} run.sh] file.
- Set the shell to use
{{#fileAnchor: run.sh}}
#!/bin/bash
- Common arguments
- Name the job to make it easier to see in the job queue
{{#fileAnchor: run.sh}}
#SBATCH --job-name=<JOBNAME>
- Your email address to use for all batch system communications
{{#fileAnchor: run.sh}}
#SBATCH --mail-user=<EMAIL>
- What emails to send
- NONE - no emails
- ALL - all emails
- END,FAIL - only email if the job fails and email the summary at the end of the job
{{#fileAnchor: run.sh}}
#SBATCH --mail-type=FAIL,END
- Standard Output and Error log files
- Use file patterns
- %j - job id
- %A-%a - Array job id (A) and task id (a)
{{#fileAnchor: run.sh}}
#SBATCH --output=<my_job-%j.out>
#SBATCH --error=<my_job-%j.err>
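For array jobs (covered later in this script), the %A and %a patterns keep each array task's output in its own file. A hypothetical variant of the lines above, assuming an array job:

```shell
#SBATCH --output=<my_job-%A-%a.out>
#SBATCH --error=<my_job-%A-%a.err>
```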
- Number of nodes to use
{{#fileAnchor: run.sh}}
#SBATCH --nodes=1
- Number of tasks (usually translates to processor cores) to use
{{#fileAnchor: run.sh}}
#SBATCH --ntasks=1
- Total memory limit for the job. The default is 2 gigabytes; units can be specified with mb or gb for megabytes or gigabytes.
{{#fileAnchor: run.sh}}
#SBATCH --mem=4gb
- Job run time in [DAYS]:HOURS:MINUTES:SECONDS
- [DAYS] is optional; use it when convenient
{{#fileAnchor: run.sh}}
#SBATCH --time=72:00:00
- Optional
- The group to use if you belong to multiple groups; otherwise, omit this directive.
{{#fileAnchor: run.sh}}
#SBATCH --account=<GROUP>
- A job array, which will create many jobs (called array tasks) that differ only in the '$SLURM_ARRAY_TASK_ID' variable, similar to Torque_Job_Arrays on HiPerGator 1
{{#fileAnchor: run.sh}}
#SBATCH --array=<BEGIN-END>
- Example of five tasks
- #SBATCH --array=1-5
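Inside the job script, each array task can use the $SLURM_ARRAY_TASK_ID variable to select its own input. A minimal sketch; the input_<N>.fa naming is a hypothetical example, not part of the script above:

```shell
#!/bin/bash
# Hypothetical sketch: pick a per-task input file from the array task id.
# SLURM sets SLURM_ARRAY_TASK_ID inside a real array job; default to 1 here
# so the sketch also runs outside the scheduler.
TASK_ID=${SLURM_ARRAY_TASK_ID:-1}
INPUT="input_${TASK_ID}.fa"
echo "Task ${TASK_ID} would process ${INPUT}"
```

With `--array=1-5`, five such tasks run, each seeing a different value of $SLURM_ARRAY_TASK_ID.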
- END OF SLURM SETTINGS
- Recommended convenient shell code to put into your job script
- Add host, time, and directory name for later troubleshooting
{{#fileAnchor: run.sh}}
date;hostname;pwd
Below is the shell script part - the commands you will run to analyze your data. The following is an example.
- Load the software you need
{{#fileAnchor: run.sh}}
module load ncbi_blast
- Run the program
{{#fileAnchor: run.sh}}
blastn -db nt -query input.fa -outfmt 6 -out results.tab
date