Choosing QOS for a Job

Back to Account and QOS limits under SLURM
When choosing between the high-priority investment QOS and the 9x larger low-priority burst QOS, start by considering the overall resource requirements of the job. For smaller allocations the investment QOS may not be large enough for some jobs, whereas for other small jobs the wait time in the burst QOS could be too long. Also consider the current state of the account you plan to use for the job.

For any individual job submitted to the Burst QOS we do not guarantee that it will ever start. However, historical data show that burst jobs do start and provide significant additional throughput to groups that use them correctly as 'long queues', i.e.:
  • Submit only non-time-critical jobs to the Burst QOS.
  • Parallelize analyses to make sure they can run within the 4-day window.
  • Let the scheduler take its time to find unused resources on which to run burst jobs.
In summary, the Burst QOS is best handled in a "hands-off" fashion. If any of your analyses are time-critical, submit them to the appropriately sized investment QOS.
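For example, a non-time-critical burst job might use a job script along these lines. This is an illustrative sketch only: the account name ufgi and the convention that the burst QOS is the account name with a '-b' suffix are taken from the example in this page, and my_analysis is a hypothetical program.

```shell
#!/bin/bash
#SBATCH --job-name=burst-example     # illustrative job name
#SBATCH --account=ufgi               # your group's account (example: ufgi)
#SBATCH --qos=ufgi-b                 # burst QOS (account name + '-b')
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8gb
#SBATCH --time=4-00:00:00            # must fit within the 4-day burst limit

# Run the non-time-critical analysis. The scheduler may take a while to
# find idle resources, so do not depend on a prompt start time.
module load ufrc
srun ./my_analysis
```

Submit it with `sbatch` and then leave it alone; cancelling and resubmitting burst jobs only resets their accumulated queue priority.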

To show the status of any SLURM account as well as the overall usage of HiPerGator resources, use the following command from the UFRC module:

$ module load ufrc
$ slurmInfo

for the primary account (group name) or

$ slurmInfo <account>

for another account (group name)

Example output of $ slurmInfo ufgi:

----------------------------------------------------------------------
Allocation summary:    Time Limit             Hardware Resources
   Investment QOS           Hours          CPU     MEM(GB)     GPU
----------------------------------------------------------------------
             ufgi             744          150         527       0
----------------------------------------------------------------------
CPU/MEM Usage:                Running        Pending        Total
                       CPU   MEM(GB)    CPU   MEM(GB)    CPU   MEM(GB)
----------------------------------------------------------------------
     Investment (ufgi):   100      280     0        0   100      280
----------------------------------------------------------------------
HiPerGator Utilization
                 CPUs: Used/Total    MEM(GB): Used/Total  GPUs: Used/Total
--------------------------------------------------------------------------
Total        :  47150/79620   59%    250529/600009   41%    512/1234  41%
--------------------------------------------------------------------------
HiPerGator GPU Utilization by type
Partition     A100              geforce       quadro
--------------------------------------------------------------------------
gpu     :    467/664   70%    43/528   8%    0/0     0%
hwgui   :      0/0     0%      0/0     0%    2/42    4% 
--------------------------------------------------------------------------
* Burst QOS uses idle cores at low priority with a 4-day time limit
* Duplicate partition(s): hpg-ai / gpu
* Reserved nodes excluded from HPG utilization metrics

Run 'slurmInfo -h' to see all available options

The output shows that the investment QOS for the ufgi account is actively used. Since 100 of the 150 available CPU cores are in use, only 50 cores remain available. In the same vein, since 280GB of the 527GB of memory in the investment QOS are in use, 247GB are still available. The ufgi-b burst QOS is unused. The total HiPerGator utilization is 59% of all CPU cores and 41% of all memory on compute nodes, which means there is spare capacity from which burst resources can be drawn. As a rule of thumb, when the overall utilization is below roughly 80% a burst job can usually start within a reasonable amount of time, so in this case a job submitted to the ufgi-b QOS would likely not wait long. When the HiPerGator load is high, or when the burst QOS is already actively used, the investment QOS is more appropriate for a smaller job. The output also includes GPU utilization per GPU type cluster-wide.
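The arithmetic above can be scripted; here is a minimal sketch, with the limits and usage hard-coded from the example output rather than queried live from the scheduler:

```shell
#!/bin/sh
# Remaining investment-QOS resources for the ufgi example above.
cpu_limit=150; cpu_used=100
mem_limit=527; mem_used=280   # GB

cpu_free=$((cpu_limit - cpu_used))
mem_free=$((mem_limit - mem_used))

echo "CPU cores available: $cpu_free"    # prints 50
echo "Memory (GB) available: $mem_free"  # prints 247
```

In practice you would read the current numbers from slurmInfo each time, since usage changes as group members' jobs start and finish.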