[[Category:SLURM]]
[[Category:Scheduler]]
{|align=right
  |__TOC__
  |}
==Introduction and Submitting Arrays==
To submit a number of identical jobs without having to drive the submission with an external script, use SLURM's ''array jobs'' feature. You can learn how to submit them at [[Submitting Array Jobs]].
  
==Submitting array jobs==
'''Note:''' There is a maximum limit of 3000 jobs per user on HiPerGator.

A job array can be submitted simply by adding
 
 #SBATCH --array=x-y
 
to the job script where ''x'' and ''y'' are the array bounds. A job array can also be specified at the command line with
 
 sbatch --array=x-y job_script.sbatch
 
 
 
A job array will then be created with a number of ''tasks'' corresponding to the specified array size.
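For illustration, a minimal array job script might look like the sketch below; the program name <code>my_program</code> and its input files are hypothetical placeholders:

<pre>
#!/bin/bash
#SBATCH --job-name=array_example    # Job name
#SBATCH --ntasks=1                  # Run a single task per array element
#SBATCH --mem=1gb                   # Memory per task
#SBATCH --time=00:05:00             # Time limit hrs:min:sec
#SBATCH --array=1-10                # Create tasks numbered 1 through 10

# Each task receives its own value of $SLURM_ARRAY_TASK_ID
my_program input_${SLURM_ARRAY_TASK_ID}.dat
</pre>

Submitting this script once with <code>sbatch</code> creates all ten tasks.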
 
 
 
SLURM's job array handling is very versatile. Instead of providing a task range, a comma-separated list of task numbers can be provided, for example to rerun a few failed tasks from a previously completed job array, as in
 
 sbatch --array=4,8,15,16,23,42 job_script.sbatch
 
 
 
Command-line options override options in the script, so the job script can be left unchanged when rerunning selected tasks.
 
 
 
===Limiting the number of tasks that run at once===
 
To ''throttle'' a job array, keeping only a certain number of tasks active at a time, use the <code>%N</code> suffix, where ''N'' is the number of active tasks. For example,
 
 #SBATCH --array=1-200%5
 
will produce a 200-task job array with only 5 tasks active at any given time.
 
 
 
Note that although the percent sign is used, ''N'' is the actual number of tasks that run at once, not a percentage.
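The same throttle syntax also works in the command-line form of the array specification, for example:

 sbatch --array=1-200%5 job_script.sbatch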
 
 
 
====Using scontrol to modify throttling of running array jobs====
 
If you want to change the number of simultaneous tasks of an active job array, you can use <code>scontrol</code>:
 
 scontrol update ArrayTaskThrottle=<count> JobId=<jobID>

for example:

 scontrol update ArrayTaskThrottle=50 JobId=12345
 
 
 
Set <code>ArrayTaskThrottle=0</code> to eliminate any limit.
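To check the current throttle of a job array, one option is to inspect the job with <code>scontrol show job</code>, whose output for array jobs includes an <code>ArrayTaskThrottle</code> field (the job ID below is hypothetical):

 scontrol show job 12345 | grep -o 'ArrayTaskThrottle=[0-9]*'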
 
 
 
===Naming output and error files===
 
 
 
SLURM uses the <code>%A</code> and <code>%a</code> replacement strings for the master job ID and task ID, respectively.
 
 
 
For example:
 
 #SBATCH --output=Array_test.%A_%a.out
 #SBATCH --error=Array_test.%A_%a.error
 
The separate error log is optional, as both types of output can be written to a single 'output' log:
 
 #SBATCH --output=Array_test.%A_%a.log
 
 
 
;Note: if you only use <code>%A</code> in the log file name, all array tasks will try to write to a single file, and the performance of the run will approach zero asymptotically. '''Make sure to use both <code>%A</code> and <code>%a</code>''' in the log file name specification.
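For example, with the <code>Array_test.%A_%a.out</code> pattern above, a three-task array whose master job ID happens to be 12345 would write its logs to:

 Array_test.12345_1.out
 Array_test.12345_2.out
 Array_test.12345_3.out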
 
  
 
==Using the array ID Index==
 
SLURM will provide a ''$SLURM_ARRAY_TASK_ID'' variable to each task. It can be used inside the job script to handle input and output files for that task. To learn how and see some examples, visit [[Array ID Indexes]].
 
 
For instance, for a 100-task job array the input files can be named ''seq_1.fa'', ''seq_2.fa'', and so on through ''seq_100.fa''. In a job script for a blastn run, they can be referenced as ''blastn -query seq_${SLURM_ARRAY_TASK_ID}.fa''. The output files can be handled in the same way.
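As a sketch, the corresponding fragment of such a blastn job script could look like the following; the module name and the <code>nt</code> database are assumptions to adapt to your environment:

<pre>
# Load BLAST (the module name is an assumption; check what your system provides)
module load ncbi_blast

# Each task aligns its own numbered input file and writes a matching output file
blastn -query seq_${SLURM_ARRAY_TASK_ID}.fa -db nt -out seq_${SLURM_ARRAY_TASK_ID}.blastn
</pre>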
 
 
 
One common application of array jobs is to process many input files. While this is easiest when the files are numbered as in the example above, numbering is not required. If, for example, you have a folder of 100 files that end in <code>.txt</code>, you can use the following approach to get the name of the file for each task automatically:
 
 
 
 file=$(ls *.txt | sed -n ${SLURM_ARRAY_TASK_ID}p)
 myscript -in $file
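An equivalent sketch that avoids parsing <code>ls</code> output uses a bash array instead; it assumes the job was submitted with <code>--array=1-100</code> so that task IDs start at 1:

<pre>
# Gather the input files into a bash array (bash arrays are zero-indexed)
files=(*.txt)

# Task IDs start at 1, so subtract 1 to get the matching array index
file=${files[$(($SLURM_ARRAY_TASK_ID - 1))]}
myscript -in $file
</pre>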
 
  
 
==Running many short tasks==
 
While SLURM array jobs make it easy to run many similar tasks, if each task is short (seconds or even a few minutes), array jobs quickly bog down the scheduler, and more time is spent managing jobs than actually doing any work for you. This also negatively impacts other users.

If you have hundreds or thousands of tasks, it is unlikely that a simple array job is the best solution. That does not mean that array jobs are not helpful in these cases, but a little more thought needs to go into them for efficient use of the resources.

[[File:Play_icon.png|frameless|30px|link=https://mediasite.video.ufl.edu/Mediasite/Play/5bbd7cfb22b2416bbb0541e79875def51d]] [10 min, 16sec] Watch the video discussing some of the issues and walking through the details of the example script below.
  
 
As an example, let's imagine I have 5,000 runs of a program to do, with each run taking about 30 seconds to complete. Rather than running an array job with 5,000 tasks, it would be much more efficient to run 5 tasks where each completes 1,000 runs.
<div class="mw-collapsible mw-collapsed" style="width:70%; padding: 5px; border: 1px solid gray;">
''Expand to view a sample script that accomplishes this by combining array tasks with bash loops.''
<div class="mw-collapsible-content" style="padding: 5px;">
<pre>
#!/bin/sh
#SBATCH --job-name=mega_array   # Job name
#SBATCH --mail-type=ALL         # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=gatorlink@ufl.edu # Where to send mail
#SBATCH --nodes=1                   # Use one node
#SBATCH --ntasks=1                  # Run a single task
#SBATCH --mem-per-cpu=1gb           # Memory per processor
#SBATCH --time=00:10:00             # Time limit hrs:min:sec
#SBATCH --output=array_%A-%a.out    # Standard output and error log
#SBATCH --array=1-5                 # Array range

# This is an example script that combines array tasks with
# bash loops to process many short runs. Array jobs are convenient
# for running lots of tasks, but if each task is short, they
# quickly become inefficient, taking more time to schedule than
# they spend doing any work and bogging down the scheduler for
# all users.
pwd; hostname; date

# Set the number of runs that each SLURM task should do
PER_TASK=1000

# Calculate the starting and ending values for this task based
# on the SLURM task and the number of runs per task.
START_NUM=$(( ($SLURM_ARRAY_TASK_ID - 1) * $PER_TASK + 1 ))
END_NUM=$(( $SLURM_ARRAY_TASK_ID * $PER_TASK ))

# Print the task and run range
echo This is task $SLURM_ARRAY_TASK_ID, which will do runs $START_NUM to $END_NUM

# Run the loop of runs for this task.
for (( run=$START_NUM; run<=END_NUM; run++ )); do
  echo This is SLURM task $SLURM_ARRAY_TASK_ID, run number $run
  # Do your stuff here
done

date
</pre>
</div>
</div>
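To make the task arithmetic concrete: with <code>PER_TASK=1000</code>, task 3 computes <code>START_NUM = (3 - 1) * 1000 + 1 = 2001</code> and <code>END_NUM = 3 * 1000 = 3000</code>, so the five tasks together cover runs 1 through 5,000 with no gaps or overlaps.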
  
 
==Deleting job arrays and tasks==
 
To delete all of the tasks of an array job, use <code>scancel</code> with the job ID:

 scancel 292441

To delete a single task, add the task ID:

 scancel 292441_5
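<code>scancel</code> also accepts bracketed task ranges; for example, the following should cancel only tasks 1 through 3 of the array:

 scancel 292441_[1-3]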

==Controlling Job emails==

By default in SLURM, the emails for events BEGIN, END, and FAIL apply to the job array as a whole rather than to individual tasks. So:

 #SBATCH --mail-type=BEGIN,END,FAIL

would result in one email per job, not per task. If you want per-task emails, specify:

 #SBATCH --mail-type=BEGIN,END,FAIL,ARRAY_TASKS

which will send emails for each task in the array.