SLURM Partition Limits
Jobs run on specific HiPerGator servers (nodes) selected to match their hardware requirements. Sets of nodes with different hardware configurations are presented as SLURM partitions. See also: Available Node Features
Time Limits
Partitions have individual time limits. For example, a job's wall time is requested in a batch script with:
#SBATCH --time=4-00:00:00   # Wall time in hh:mm:ss or d-hh:mm:ss
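For context, a minimal batch script combining the time limit with a partition request might look like the following sketch; the job name, partition, and resource values are illustrative placeholders, not recommended settings:

#!/bin/bash
#SBATCH --job-name=example        # illustrative job name
#SBATCH --partition=hpg2-compute  # one of the compute partitions described below
#SBATCH --time=4-00:00:00         # wall time: 4 days
#SBATCH --ntasks=1                # illustrative task count
#SBATCH --mem=2gb                 # illustrative memory request

date; hostname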
Interactive Work
Partitions: hpg-dev, gpu, hpg-ai
- Default time limit if none is specified: 10 minutes
- hpg-dev
  - Maximum: 12 hours
- gpu
  - Maximum: 12 hours for srun ... --pty bash -i sessions
  - Maximum: 72 hours for Jupyter sessions in Open OnDemand
- hpg-ai
  - Maximum: 12 hours for srun ... --pty bash -i sessions (see the example after this list)
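As a sketch, an interactive shell on hpg-dev within its 12-hour maximum could be requested as follows; the task count and memory values are illustrative assumptions:

srun --partition=hpg-dev --ntasks=1 --mem=2gb --time=04:00:00 --pty bash -i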
Jupyter
- JupyterHub: Sessions are preset with individual limits shown in the menu
- JupyterLab in Open OnDemand: maximum of 72 hours on the gpu partition; other partitions follow the standard partition limits
GPU/HPG-AI Partitions
- Default: 10 min
- Maximum: 14 days
Note: There is no burst QOS for the GPU partitions (gpu and hpg-ai).
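A hedged sketch of a batch request against the gpu partition's 14-day maximum follows; the GPU count is an illustrative assumption, and --gpus requires a SLURM version that supports it (--gres=gpu:1 is the older equivalent):

#SBATCH --partition=gpu
#SBATCH --gpus=1            # illustrative GPU count
#SBATCH --time=14-00:00:00  # the 14-day partition maximum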
Compute Partitions
- Partitions: hpg-default, hpg2-compute, bigmem
Both the hpg-default and the hpg2-compute partitions are selected by default if no partition is specified for a job.
Investment QOS
- Default: 10 min
- Maximum: 31 days (744 hours)
Burst QOS
- Default: 10 min
- Maximum: 4 days (96 hours)
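To select between these QOS levels, jobs set --qos in addition to the partition. In this sketch the group name "mygroup" is a placeholder, and the "-b" suffix for the burst QOS is an assumed naming convention:

#SBATCH --partition=hpg2-compute
#SBATCH --qos=mygroup        # investment QOS (placeholder group name)
#SBATCH --time=31-00:00:00   # up to the 31-day investment maximum

# or, for burst capacity:
#SBATCH --qos=mygroup-b      # burst QOS (assumed group-b naming)
#SBATCH --time=4-00:00:00    # up to the 96-hour burst maximum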
Hardware Accelerated GUI
- Partition: hwgui
- Default: 10 min
- Maximum: 4 days (96 hours)
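A hardware-accelerated GUI session is typically launched interactively; this sketch assumes X11 forwarding is configured on the login connection, and the time value is illustrative:

srun --partition=hwgui --time=04:00:00 --x11 --pty bash -i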