Account and QOS limits under SLURM
[[Category:SLURM]]
[[Category:Scheduler]]
{|align=right
  |__TOC__
  |}

Every group on HiPerGator (HPG) must have an '''investment''' with a corresponding hardware allocation in order to do any work on HPG. Each allocation is associated with a scheduler '''account'''. Each account has two quality of service (QOS) levels - a high-priority '''investment QOS''' and a low-priority '''burst QOS'''. The burst QOS allows short-term borrowing of unused resources from other groups' accounts. In turn, each user in a group has a scheduler account association, and it is ultimately this association that determines which QOSes are available to a particular user. Users with secondary Linux group membership also have associations with the QOSes of their secondary groups.

In summary, each HPG user has scheduler associations with group account based QOSes that determine what resources are available to the user's jobs. These QOSes can be thought of as pools of computational (CPU cores), memory (RAM), and run time (time limit) resources, with associated starting priority levels, that jobs consume when they run applications. The QOS limits are reviewed below.
==Account and QOS==
;Load the ufrc environment module before running the following commands:
 $ module load ufrc

To see your SLURM associations, use the following command:

 $ showAssoc <username>

For example, the command <code>$ showAssoc magitz</code> returns the following output:
<pre style="color:black; background:WhiteSmoke; border:1px solid gray;">
              User    Account   Def Acct  Def QOS                                      QOS
------------------ ---------- ---------- --------- ----------------------------------------
magitz                zoo6927      ufhpc    ufhpc zoo6927,zoo6927-b
magitz                  ufhpc      ufhpc    ufhpc ufhpc,ufhpc-b
magitz                 soltis      ufhpc   soltis soltis,soltis-b
magitz                  borum      ufhpc    borum borum,borum-b
</pre>
The output shows that the user <code style="color:black; background:WhiteSmoke; border:1px solid gray;">magitz</code> has four SLURM associations and, thus, access to eight different QOSes. By convention, a user's default account is always the account of their primary group, and their default QOS is the investment (high-priority) QOS. If a user does not explicitly request a specific account and QOS, the default account and QOS are assigned to the job.

However, if the user <code style="color:black; background:WhiteSmoke; border:1px solid gray;">magitz</code> wanted to use the <code style="color:black; background:WhiteSmoke; border:1px solid gray;">borum</code> group's account - to which he has access by virtue of the <code style="color:black; background:WhiteSmoke; border:1px solid gray;">borum</code> account association - he would specify both the account and the chosen QOS in his batch script as follows:
<pre>
#SBATCH --account=borum
#SBATCH --qos=borum
</pre>
Or, for the burst QOS:
<pre>
#SBATCH --account=borum
#SBATCH --qos=borum-b
</pre>
Note that both must be specified. Otherwise, SLURM will assume the default <code style="color:black; background:WhiteSmoke; border:1px solid gray;">ufhpc</code> account is intended, and neither the <code style="color:black; background:WhiteSmoke; border:1px solid gray;">borum</code> nor the <code style="color:black; background:WhiteSmoke; border:1px solid gray;">borum-b</code> QOS will be available to the job. Consequently, SLURM would deny the submission.

These sbatch directives can also be given as command-line arguments to <code>srun</code>. For example:

 $ srun --account=borum --qos=borum-b <example_command>
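Putting these pieces together, a complete submission might look like the following sketch. This is only an illustration: the job name, CPU, memory, and time values, and the final command are hypothetical placeholders and should be adjusted to the actual analysis and to the QOS limits of the chosen account.
<pre>
#!/bin/bash
#SBATCH --job-name=example_job     # hypothetical job name
#SBATCH --account=borum            # account to charge
#SBATCH --qos=borum-b              # burst QOS of that account
#SBATCH --cpus-per-task=4          # hypothetical CPU core request
#SBATCH --mem=8gb                  # hypothetical total memory request
#SBATCH --time=24:00:00            # must fit within the QOS time limit

# Replace with the actual application command(s) for the analysis
hostname
</pre>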
 
  
===Using/Finding the resources from a secondary group===
To view instructions on using SLURM resources from one of your secondary groups, or to find out what those associations are, see [[Checking and Using Secondary Resources]].

===CPU cores and Memory (RAM) Resource Use===
CPU cores and RAM are allocated to jobs independently, in the amounts requested in job scripts. When deciding how many CPU cores and how much memory to request, take into account the QOS limits based on the group's investment, the limitations of the compute node hardware, and the need to be a good neighbor on a shared resource like HiPerGator, so that system resources are allocated efficiently, used fairly, and everyone has a chance to get their work done without negatively impacting the work of other researchers.

HiPerGator consists of many interconnected servers (compute nodes). The hardware resources of each compute node, including CPU cores, memory, memory bandwidth, network bandwidth, and [[Temporary_Directories|local storage]], are limited. If any single one of these resources is fully consumed, the remaining unused resources can become effectively wasted, which makes it progressively harder or even impossible to achieve the shared goals described above. See [[Available Node Features]] for details on compute node hardware. Nodes with similar hardware are generally grouped into partitions. If a job requires larger nodes or particular hardware, make sure to explicitly specify a partition.

'''Example:'''
 --partition=bigmem

When a job is submitted without a resource request, the scheduler applies default limits of 1 CPU core, 4gb of memory, and a 10-minute time limit. Check the resource request if it is not clear why a job ended before the analysis was done; a premature exit can be caused by the job exceeding its time limit or by the application using more memory than was requested.

Run test jobs to find out what resources a particular analysis needs. To make sure that the analysis completes successfully without wasting valuable resources, specify both the number of CPU cores and the amount of memory needed in the job script. See [[Sample SLURM Scripts]] for examples of specifying CPU core requests depending on the nature of the application running in a job. Use the <code>--mem</code> (total job memory per node) or <code>--mem-per-cpu</code> (per-core memory) option to request memory, and use <code>--time</code> to set the time limit to an appropriate value within the QOS limit.
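One way to check what a finished test job actually used is the standard SLURM accounting command <code>sacct</code>; the job ID below is a hypothetical placeholder. The MaxRSS column reports the peak memory used by the job's steps, which can then be compared with the requested amount.
 $ sacct -j 123456 --format=JobID,JobName,ReqMem,MaxRSS,Elapsed,State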
  
===QOS Limits===
SLURM refers to resources - NCUs (cores), Memory (RAM), accelerators, software licenses, etc. - as '''Trackable Resources''' (TRES). The TRES available to a given group are determined by the group's investment and are limited by parameters assigned to the QOS.

To view a group's trackable resource limits for a specific QOS, use the following command from the ''ufrc'' environment module:

 $ showQos <specified_qos>

Continuing the example above, the command <code>$ showQos borum</code> returns the following output:
<pre style="color:black; background:WhiteSmoke; border:1px solid gray;">
                Name                          Descr                                      GrpTRES  GrpCPUs
-------------------- ------------------------------ --------------------------------------------- --------
borum                borum qos                      cpu=41,mem=125952,gres/gpu=0,gres/mic=0            41
</pre>
From this output we can see that, when submitting jobs under the <code style="color:black; background:WhiteSmoke; border:1px solid gray;">borum</code> group's investment QOS, users have access to a total of 41 cores and 125 GB of RAM, and no access to accelerators (GPUs, MICs). These resources are shared among all members of the <code style="color:black; background:WhiteSmoke; border:1px solid gray;">borum</code> group running jobs under the investment QOS.

Similarly, to check the burst QOS resource limits, the command <code>$ showQos borum-b</code> returns the following output:
<pre style="color:black; background:WhiteSmoke; border:1px solid gray;">
                Name                          Descr                                      GrpTRES  GrpCPUs
-------------------- ------------------------------ --------------------------------------------- --------
borum-b              borum burst qos                cpu=369,mem=1133568,gres/gpu=0,gres/mic=0          369
</pre>
By policy, the burst QOS CPU and memory limits are always nine times (9x) those of the investment QOS; they are intended to allow groups to take advantage of unused resources beyond those they have purchased, for short periods of time.

There are additional limits and parameters associated with a QOS beyond the TRES limits, among them the maximum wall time available under the QOS and the base priority assigned to the job. Use the following command to view these parameters:
<pre>
$ sacctmgr show qos format="name%-20,Description%-30,priority,maxwall" <specified_qos>
</pre>
To continue our example, the command <code>$ sacctmgr show qos format="name%-20,Description%-30,priority,maxwall" borum borum-b</code> returns the following output:
<pre style="color:black; background:WhiteSmoke; border:1px solid gray;">
                Name                          Descr   Priority     MaxWall
-------------------- ------------------------------ ---------- -----------
borum                borum qos                           36000 31-00:00:00
borum-b              borum burst qos                       900  4-00:00:00
</pre>
We see that investment and burst QOS jobs are limited to a maximum duration of 31 and 4 days, respectively. Additionally, the base priority of a burst QOS job is 1/40th that of an investment QOS job. Keep in mind that the base priority is only one component of a job's overall priority, and that the priority changes over time as the job waits in the queue.

As jobs are submitted and the resources under a particular account are consumed, the group may reach either the CPU or the memory group limit. The group has consumed all cores in a QOS if the scheduler shows <code>QOSGrpCpuLimit</code>, or all memory if it shows <code>QOSGrpMemLimit</code>, as the reason a job is pending (the 'NODELIST(REASON)' column of the <code>squeue</code> output). '''Example:'''
<pre>
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
            123456    bigmem test_job     jdoe PD       0:00      1 (QOSGrpMemLimit)
</pre>
Reaching a resource limit of a QOS does not interfere with job submission. However, jobs pending for this reason will not run until the QOS use falls below the limit.

If the resource request of a submitted job cannot be satisfied at all within either the QOS limits or the compute node hardware of the chosen partition, the scheduler will refuse the submission altogether and return the following error message:
<pre>
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy 
               (job submit limit, user's size and/or time limits)
</pre>
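To see which of a group's jobs are currently held back by these QOS limits, the pending jobs and their reason codes can be listed with <code>squeue</code>; the account name below is a placeholder. The <code>-A</code> option filters by account and <code>-t PD</code> restricts the output to pending jobs.
 $ squeue -A <account> -t PD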
===Time and Resource Limits===
See [[SLURM Partition Limits]] for partition time limits.

For details on the limits placed on time and on resources such as GPUs under SLURM, view [[QOS Limits]].

To view a summary of the currently active jobs of a group, use the slurmInfo command from the [https://help.rc.ufl.edu/doc/UFRC_environment_module ufrc module]:

    slurmInfo -pu -g <i>groupname</i>

===Cores (CPUs) and Memory (RAM)===
The resources of any computing device are limited: the number of cores, the amount of memory, the memory bandwidth, the I/O bandwidth, and so on. Once all of the available cores on a machine are in use, the machine is fully consumed and unavailable to other users, whether the jobs on it use 1 byte of RAM or all of the machine's RAM. Likewise, if an application uses all of the memory available on a machine, the machine is consumed and unavailable to others whether the application uses one core or all of them. Because of this, we place limits on both the number of cores available to a group (based on its NCU investment) '''and''' the amount of memory available to the group (NCUs x 3GB).

Limiting the amount of usable memory is necessary in order to be fair to all investors. For example, consider a PI who has invested in 10 NCUs; the group's TRES cpu limit will be "10". With no memory limit, the group could submit ten jobs requesting 1 CPU and 120GB each. Because each machine has only about 120GB available for applications, each job would start on a separate machine, leaving no memory available for any other jobs on those machines. The group would have invested in 10 NCUs but would be consuming the resources of ten entire machines - the equivalent of 320 NCUs rather than 10. Such a scenario is not tenable and would quickly result in a grossly unfair allocation of resources. Thus, we place limits on both CPUs and memory.

The total memory limit is calculated as 'QOS NCU * 3 GB'. For example, an investment QOS of 30 NCUs will have a group memory limit of 90gb, while the burst QOS for that group will have a limit of '30 * 3gb * 9 = 810gb'. If the group memory limit is reached, you will see a '(QOSGrpMemLimit)' status in the 'NODELIST(REASON)' column of the <code>squeue</code> output, similar to the following:

<pre>
$ squeue | grep MemLimit | head -n 1
            123456    bigmem test_job     jdoe PD      0:00      1 (QOSGrpMemLimit)
</pre>

As noted above, this status does not prevent job submission; the job simply remains pending until the group's memory use falls below the limit.
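To see what is currently counting against the group's core and memory limits, the account's running jobs can be listed along with their CPU and memory requests; this is only a sketch and the account name is a placeholder. The format string selects the job ID, user, CPU count, requested memory, and elapsed time.
 $ squeue -A <account> -t R -o "%.10i %.9u %.5C %.10m %.10M"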
 
  
 
==Choosing QOS for a Job==
When choosing between the high-priority investment QOS and the 9x larger low-priority burst QOS, start by considering the overall resource requirements of the job: for smaller allocations the investment QOS may not be large enough for some jobs, while for other, smaller jobs the wait time in the burst QOS could be too long. Also consider the current state of the account you plan to use for the job. For additional guidance, see [[Choosing QOS for a Job]].

To show the status of any SLURM account, as well as the overall usage of HiPerGator resources, use the following command from the ''ufrc'' environment module:

 $ slurmInfo <account>
 
 
As an example, consider the following output, returned for the command <code>$ slurmInfo ufgi</code>:
 
<pre style="color:black; background:WhiteSmoke; border:1px solid gray;">
  Allocation summary for 'ufgi' account:

QOS     Time Limit     Allocations (cpus, mem(MB), GPU, MIC)
ufgi    31-00:00:00    cpu=100,mem=307200,gpu=0,mic=0
ufgi-b   4-00:00:00    cpu=900,mem=2764800,gpu=0,mic=0

  Current use:

    Main QOS ('ufgi'):
        81% or 81 out of 100 cores.
        30% or 96GB out of 300GB memory limit.

    Burst QOS ('ufgi-b'):
        CPU Cores: None
        Memory: None

  Total HiPerGator usage:
        76% or 23728 out of 31080 cores
</pre>
 
 
 
The output shows that the investment QOS of the <code style="color:black; background:WhiteSmoke; border:1px solid gray;">ufgi</code> account is actively used: only 19 of its 100 cores and 204gb of its 300gb memory limit are currently available. On the other hand, the burst QOS is unused. Furthermore, total HiPerGator use is at 76%, which means that there is still capacity from which burst resources can be drawn. In this case a job submitted to the <code style="color:black; background:WhiteSmoke; border:1px solid gray;">ufgi-b</code> QOS should be able to start within a reasonable amount of time and will have access to a much larger pool of computational and memory resources. If the HiPerGator load were higher, or if the burst QOS were actively used, the investment QOS would be more appropriate for a smaller job.
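In that situation, directing a job to the burst QOS is a one-line change at submission time. The script name below is a hypothetical placeholder:
 $ sbatch --account=ufgi --qos=ufgi-b job_script.sh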
 
 
 
 
 
==Examples==
 
 
 
A hypothetical group ($GROUP in the examples below) has an investment of 42 CPU cores and 148GB of memory. That is the group's so-called ''soft limit'' for HiPerGator jobs in the investment QOS, available at high priority for up to a 744-hour time limit. The hard limit, accessible through the so-called ''burst QOS'', is nine times larger, giving the group access to a potential total of 10x the invested resources - 420 total CPU cores and 1480GB of total memory - with the burst QOS providing 378 CPU cores and 1330GB of that total for up to 96 hours at low base priority.
 
  
 
Let's test:
<pre>
[marvin@gator ~]$ srun --mem=126gb --pty bash -i
srun: job 123456 queued and waiting for resources
#Looks good, let's terminate the request with Ctrl+C
^C
srun: Job allocation 123456 has been revoked
srun: Force Terminated job 123456
</pre>

On the other hand, going even 1gb over that limit results in the job limit error we have already encountered:
<pre>
[marvin@gator ~]$ srun --mem=127gb --pty bash -i
srun: error: Unable to allocate resources: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)
</pre>

At this point the group can try using the burst QOS by adding

 #SBATCH --qos=$GROUP-b

Let's test:
<pre>
[marvin@gator3 ~]$ srun -p bigmem --mem=400gb --time=96:00:00 --qos=$GROUP-b --pty bash -i
srun: job 123457 queued and waiting for resources
#Looks good, let's terminate with Ctrl+C
^C
srun: Job allocation 123457 has been revoked
srun: Force Terminated job 123457
</pre>

However, now there is the burst QOS time limit to consider:
<pre>
[marvin@gator ~]$ srun --mem=400gb --time=300:00:00 --pty bash -i
srun: error: Unable to allocate resources: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)
</pre>

Let's reduce the time limit to what the burst QOS supports and try again:
<pre>
[marvin@gator ~]$ srun --mem=400gb --time=96:00:00 --pty bash -i
srun: job 123458 queued and waiting for resources
#Looks good, let's terminate with Ctrl+C
^C
srun: Job allocation 123458 has been revoked
srun: Force Terminated job
</pre>
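The same resource requests work for non-interactive jobs. An equivalent batch submission under the burst QOS might look like the following sketch, where the script name is a hypothetical placeholder:
 $ sbatch --qos=$GROUP-b --partition=bigmem --mem=400gb --time=96:00:00 job_script.sh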
 
 
  
 
==Pending Job Reasons==
To reiterate, the following ''Reasons'' appear in the <code>NODELIST(REASON)</code> column of the <code>squeue</code> output when the group reaches the resource limit of a QOS:
<div style="column-count:2">
;QOSGrpCpuLimit
All CPU cores available to the listed account within the respective QOS are in use.

;QOSGrpMemLimit
All memory available to the listed account within the respective QOS, as described in the previous sections, is in use.
</div>
{{Note|Once it has marked any jobs in the group's list of pending jobs with a reason of <code>QOSGrpCpuLimit</code> or <code>QOSGrpMemLimit</code>, SLURM may not evaluate other jobs, and they may simply be listed with the <code>Priority</code> reason code. See the [https://help.rc.ufl.edu/doc/FAQ FAQ] at the bottom of the page for a list of reasons.|reminder}}
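To check the reason recorded for a specific pending job, <code>scontrol</code> can be queried directly; the job ID below is a hypothetical placeholder.
 $ scontrol show job 123456 | grep -E "JobState|Reason"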
