Development and Testing

Login Nodes

Generally speaking, interactive work other than managing jobs and data is discouraged on the login nodes. However, short test jobs (processes) are permitted as long as they fall within the following limits.

  1. No more than 16 cores
  2. No longer than 10 minutes (wall time)
  3. No more than 64 GB of RAM

The above resource limits essentially define what we mean by a “small, short” job or process. These limits should allow for the testing of job submission scripts or even simple application development tests. If you need to run multiple instances of some process (gzip, make, cp, etc.), you should observe the above limits: do not run more than 16 simultaneous instances of any single process, and do not let the collection of such processes consume more than 64 GB of RAM.
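
For instance, one way to stay within these limits when compressing many files is to cap the number of simultaneous processes with xargs. A minimal sketch (the '*.log' file pattern is a placeholder for your own data):

   # run at most 16 gzip processes at a time; '*.log' is a placeholder pattern
   $ find . -name '*.log' -print0 | xargs -0 -n 1 -P 16 gzip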

Data management operations such as gzip, rsync, scp, sftp, etc. can take a long time to complete and are exempt from the 10-minute time limit.
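
As an illustration, a long-running compression or transfer on a login node might look like the following sketch (the file and directory names are placeholders):

   # placeholder file and destination names
   $ gzip large_dataset.csv
   $ rsync -av results/ /path/to/project/results/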

If you have development and testing requirements that exceed the above resource limits, there are several options, as described below.

SLURM Interactive Session

You can request resources for an interactive session (i.e., a job) and start a command shell such as bash. Within that shell you will have access to the resources you requested and can run whatever commands and processes you wish. The example below requests 4 GB of memory for 8 hours on a compute host and, once the job starts, presents you with an interactive bash shell. From that shell you can run commands and launch processes just as you would from any other host (login or otherwise).

   $ srun --mem=4gb --time=08:00:00 --pty bash -i
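
If your test needs more than the default single processor core, the same invocation accepts the usual SLURM resource options. For example, a sketch requesting 4 cores and 8 GB of memory (adjust the values to your own needs):

   # sketch only; pick values appropriate for your test
   $ srun --ntasks=1 --cpus-per-task=4 --mem=8gb --time=08:00:00 --pty bash -i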

Note: Because the requested resources must be allocated and scheduled by the batch scheduler, it could take anywhere from a few seconds to a few hours for your interactive session to start. How long it takes depends on many factors, including how busy the system is overall and what percentage of your group's allocation is already in use.

See the SchedMD srun documentation (https://slurm.schedmd.com/srun.html) for further information and details regarding the srun command.

SLURM Development Session

A small number of servers have been placed into a SLURM partition (a collection of hosts) dedicated to software development. You can access these hosts by requesting the development partition ("hpg2-dev") in the appropriate SLURM command. For example, to obtain resources for an interactive session (job) in the development partition, you could run the following command.

         srun --partition=hpg2-dev --mem=4gb --time=04:00:00 --pty bash -i 

or, if you need more cores,

         srun --partition=hpg2-dev --mem=4gb --ntasks=1 --cpus-per-task=8 --time=04:00:00 --pty bash -i 

By loading the ufrc environment module, you can take advantage of srundev and simplify the above to:

         module load ufrc
         srundev --mem=4gb --ntasks=1 --cpus-per-task=8 --time=04:00:00 

The srundev command is a wrapper encapsulating srun --partition=hpg2-dev --pty bash -i.

Note: The default time limit for the SLURM development partition is 00:10:00 (10 minutes). The maximum time limit in the development partition is 12 hours.
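
If you want to confirm the partition's current settings yourself, SLURM can report them directly; for example, the output of the command below includes fields such as DefaultTime and MaxTime:

         scontrol show partition hpg2-dev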

Pre-Allocation of Resources

Finally, you can also use salloc (https://slurm.schedmd.com/salloc.html) to create a SLURM allocation under which you can run commands or scripts with srun for as long as the allocation is valid. Whatever you srun under the allocation will be executed within the context of the allocated resources, but there will be no delay for job startup since the resources have already been allocated.

For example,

$ salloc -n 1 --cpus-per-task=2 --mem=8gb --time=10:00:00
salloc: Pending job allocation 52219029
salloc: job 52219029 queued and waiting for resources
salloc: job 52219029 has been allocated resources
salloc: Granted job allocation 52219029

[chasman@login4 slurm]$ printenv | grep SLURM
SLURM_NODELIST=c6a-s26
SLURM_JOB_NAME=bash
SLURM_NODE_ALIASES=(null)
SLURM_JOB_QOS=ufhpc
SLURM_NNODES=1
SLURM_JOBID=52219029
SLURM_NTASKS=1
SLURM_TASKS_PER_NODE=1
SLURM_CPUS_PER_TASK=2
SLURM_JOB_ID=52219029
SLURM_SUBMIT_DIR=/home/chasman
SLURM_NPROCS=1
SLURM_JOB_NODELIST=c6a-s26
SLURM_CLUSTER_NAME=hipergator
SLURM_JOB_CPUS_PER_NODE=2
SLURM_SUBMIT_HOST=login4.ufhpc
SLURM_JOB_PARTITION=hpg1-compute,hpg2-compute
SLURM_JOB_ACCOUNT=ufhpc
SLURM_JOB_NUM_NODES=1
SLURM_MEM_PER_NODE=8192
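
Once the allocation has been granted, anything you launch with srun runs on the allocated resources, and exiting the shell releases the allocation. A minimal sketch (the output lines are illustrative and will reflect whatever node and job ID you were actually allocated):

# illustrative follow-up; output depends on your actual allocation
$ srun hostname
c6a-s26.ufhpc
$ exit
salloc: Relinquishing job allocation 52219029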