Development and Testing

From UFRC
Revision as of 12:18, 20 May 2020

Login Nodes

There are multiple ways to develop code or test workflows and job scripts before submitting production (final) jobs to the scheduler. Our login servers are quite capable, so it is fine to use them for development and short tests. Just avoid leaving lingering processes or running tests at a scale that negatively affects other users. If you do, your large tests will be throttled to fewer CPU cores than you expected, so the results will not be a useful measure of performance. In addition, if you persist in trying to overload a login node, your account may be suspended for misuse of resources.

Generally speaking, interactive work other than managing jobs and data is discouraged on the login nodes. However, short test jobs are permitted as long as they fall within the following limits.

  1. No more than 16 cores
  2. No longer than 10 minutes (wall time)
  3. No more than 64 GB of RAM.

The above resource limits essentially define what we mean by a “small, short” job or process. These limits should allow for testing job submission scripts or even simple application development.
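The wall-time limit above can be enforced on an ad-hoc login-node test with the standard Linux `timeout` utility, which kills the command and exits with status 124 when the cap is hit. A minimal sketch, using `sleep` as a stand-in for a real test program:

```shell
# Cut the command off after 2 seconds of wall time; a real login-node test
# would use "timeout 10m" to stay within the 10-minute limit.
timeout 2s sleep 5
echo "exit status: $?"   # 124 signals that the time limit was reached
```

Pairing this with a thread or process count of 16 or fewer keeps an informal test within the limits listed above.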

Interactive SLURM Job

An alternative approach is to create an environment similar to what you would get in a real job on HiPerGator. One way to do this is to start an interactive job on a compute node with srun. For example,

$ srun --mem=4gb --time=08:00:00 --pty bash -i

will give you 4gb of memory for 8 hours on a real compute node and present you, once the job starts, with an interactive prompt. You can run your job scripts as regular shell scripts in that environment.
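Once the interactive session starts, you can confirm what was granted: SLURM exports the job's resources as environment variables. A small sanity check (the `unset` fallback text appears only when these are run outside a job):

```shell
# Inside an srun session these report the granted job id and memory;
# outside a job the variables are unset, so a fallback is printed instead.
echo "job id: ${SLURM_JOB_ID:-unset}"
echo "memory (MB): ${SLURM_MEM_PER_NODE:-unset}"
```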

Developmental SLURM Session

If it's taking a while to start an interactive session, try our developmental partition, shown below. The 'dev' nodes are set up to start jobs faster as long as resources are available. The software environment on the dev partition nodes is consistent with that of the compute nodes, so you can run jobs and get an accurate idea of what resources are needed to complete your jobs successfully.

For example, to get a four-hour session with the default 1 processor core and 2gb of memory:

$ module load ufrc
$ srundev --time=04:00:00

The srundev command is a wrapper around the srun --partition=hpg2-dev --pty bash -i command.

Other SLURM directives can also be added to request more processors or memory. For example:

$ module load ufrc
$ srundev --time=60 --ntasks=1 --cpus-per-task=4 --mem=4gb
Note
  • The default time limit for the developmental SLURM partition is 00:10:00 (10 minutes). The maximum time limit in the dev partition is 12 hours.

Pre-Allocation of Resources

Yet another approach is to log into HiPerGator and create a SLURM allocation under which you can run commands or scripts with 'srun' for as long as the allocation is valid. Whatever you srun under the allocation will be executed within a job environment, but there will be no delay for job startup. For example,

$ salloc -n 1 --cpus-per-task=2 --mem=8gb --time=10:00:00
salloc: Pending job allocation 33333333
salloc: job 33333333 queued and waiting for resources
salloc: job 33333333 has been allocated resources
salloc: Granted job allocation 33333333

$ srun hostname
c99a-s1.ufhpc

$ srun echo "Running inside an allocation"
Running inside an allocation

$ echo $SLURM_MEM_PER_NODE
8192
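SLURM reports per-node memory in megabytes, which is why the `--mem=8gb` request above shows up as 8192. A quick conversion confirms the allocation matches the request:

```shell
# $SLURM_MEM_PER_NODE is in MB; 8192 MB / 1024 = 8 GB, matching --mem=8gb.
mem_mb=8192
echo "$((mem_mb / 1024)) GB"
```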

Enjoy the many ways to make sure your jobs are set up right. Test responsibly.