SLURM Partition Limits
Different sets of hardware resources presented as SLURM partitions have individual time limits.
Example of time limit configuration:
 #SBATCH --time=4-00:00:00    # Walltime in hh:mm:ss or d-hh:mm:ss
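As a sketch of where this directive fits, the following minimal batch script requests four days of walltime on one of the default compute partitions; the job name, task count, and memory values are illustrative placeholders, not recommendations:

 #!/bin/bash
 #SBATCH --job-name=example          # Placeholder job name
 #SBATCH --partition=hpg2-compute    # Partition whose limit must cover the requested walltime
 #SBATCH --time=4-00:00:00           # Walltime request, d-hh:mm:ss
 #SBATCH --ntasks=1                  # Illustrative resource request
 #SBATCH --mem=2gb                   # Illustrative memory request

 srun hostname                       # Replace with the actual workload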
Interactive Work
Partitions: hpg-dev, gpu, hpg-ai
- Default time limit if none is specified: 10 min
- hpg-dev
  - Maximum: 12 hours
- gpu
  - Maximum: 12 hours for srun ... --pty bash -i sessions (see the example after this list)
  - Maximum: 72 hours for Jupyter sessions in Open OnDemand
- hpg-ai
  - Maximum: 12 hours for srun ... --pty bash -i sessions
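For instance, an interactive development session on hpg-dev that uses the full 12-hour maximum could be requested as follows; the CPU and memory amounts are illustrative placeholders:

 srun --partition=hpg-dev --time=12:00:00 --ntasks=1 --cpus-per-task=1 --mem=2gb --pty bash -i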
Jupyter
- JupyterHub: Sessions are preset with individual limits shown in the menu
- JupyterLab in Open OnDemand: Maximum of 72 hours on the gpu partition; other partitions follow the standard partition limits
GPU/HPG-AI Partitions
- Default: 10 min
- Maximum: 14 days
Note: There is no burst QOS for the gpu partitions.
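A sketch of the corresponding batch directives for a job that runs up to the 14-day maximum on the gpu partition; the GPU count is illustrative, and the exact GPU type syntax may vary by site:

 #SBATCH --partition=gpu          # or hpg-ai
 #SBATCH --gres=gpu:1             # Request one GPU (count/type syntax is site dependent)
 #SBATCH --time=14-00:00:00       # At the 14-day partition maximum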
Compute Partitions
- Partitions: hpg-default, hpg2-compute, bigmem
Both the hpg-default and the hpg2-compute partitions are selected by default if no partition is specified for a job.
Investment QOS
- Default: 10 min
- Maximum: 31 days (744 hours)
Burst QOS
- Default: 10 min
- Maximum: 4 days (96 hours)
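A sketch of requesting the burst QOS within its 96-hour cap; the QOS name below is a placeholder, since actual burst QOS names are group specific:

 #SBATCH --partition=hpg2-compute
 #SBATCH --qos=mygroup-b          # Placeholder burst QOS name; substitute your group's QOS
 #SBATCH --time=4-00:00:00        # 96 hours, the burst QOS maximum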
Hardware Accelerated GUI
- Partition: hwgui
- Default: 10 min
- Maximum: 4 days (96 hours)
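A sketch of directives for a hardware-accelerated GUI job at the partition maximum; any additional GPU or display options are omitted here because they are site specific:

 #SBATCH --partition=hwgui
 #SBATCH --time=4-00:00:00        # 96 hours, the hwgui maximum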