Development and Testing
There are several ways to develop code (i.e., the standard edit, build, test cycle) or to test workflows and job scripts before submitting jobs to the scheduler. Our login servers are quite capable, so it is fine to use them for development and short tests. Just avoid leaving lingering processes behind or running tests at a scale that negatively affects other users. If you do, your large tests will be throttled to fewer CPU cores than you requested, so the results will not tell you anything useful about performance. In addition, if you persist in overloading a login node, your account may be suspended for misuse of resources.
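Before logging out, it is a good idea to check that nothing you started is still running on the login node. The commands below are a generic sketch using standard Linux tools, not a HiPerGator-specific utility; replace <PID> with the actual process ID:

$ ps -u $USER -o pid,etime,%cpu,%mem,cmd   # list your processes on this login node
$ kill <PID>                               # stop a lingering process you no longer need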
An alternative approach is to create an environment similar to what you would get in a real job on HiPerGator. One way to do this is to start an interactive job on a compute node with srun. For example,
$ srun --mem=4gb --time=08:00:00 --pty bash -i
will present you with an interactive prompt once the job starts. You can then run your job scripts as regular shell scripts in that environment.
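For example, you could change into your project directory and run an existing job script by hand to confirm that it completes; the directory and script name below are placeholders for illustration:

$ cd ~/my_project             # placeholder path; use your own project directory
$ time bash my_job_script.sh  # my_job_script.sh is a hypothetical job script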
If an interactive session takes too long to start, try our developmental partition instead, as shown below. The 'dev' nodes are set up to start jobs faster as long as resources are available. The software environment on the dev partition nodes is the same as on the regular compute nodes, so you can run jobs there and get an accurate idea of the resources needed to complete your jobs successfully.
For example, to get a four-hour session with the default 1 processor core and 2gb of memory:
$ module load ufrc
$ srundev --time=04:00:00
The srundev command is a wrapper around the srun --partition=hpg2-dev --pty bash -i command.
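For instance, the four-hour dev session shown above could also be requested by calling srun directly instead of using the wrapper:

$ srun --partition=hpg2-dev --time=04:00:00 --pty bash -i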
Other SLURM directives can also be added to request more processors or memory. For example:
$ module load ufrc
$ srundev --time=60 --ntasks=1 --cpus-per-task=4 --mem=4gb
- Note
- The default time limit for the developmental SLURM partition is 00:10:00 (10 minutes). The maximum time limit in the dev partition is 12 hours.
Yet another approach is to log into HiPerGator and create a SLURM allocation with salloc, under which you can run commands or scripts with srun for as long as the allocation remains valid. Anything you srun under the allocation executes within a job environment, but once the allocation has been granted there is no additional startup delay for each srun. For example,
$ salloc -n 1 --cpus-per-task=2 --mem=8gb --time=10:00:00
salloc: Pending job allocation 33359121
salloc: job 33333333 queued and waiting for resources
salloc: job 33333333 has been allocated resources
salloc: Granted job allocation 33333333
$ srun hostname
c99a-s1.ufhpc
$ srun echo "Running inside an allocation"
Running inside an allocation
$ echo $SLURM_MEM_PER_NODE
8192
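When you are finished, release the allocation rather than leaving it idle. Typing exit in the allocation shell ends it; you can also cancel the allocation by job ID with scancel (the ID below is the example job number from the output above):

$ exit               # end the allocation shell and release the resources
$ scancel 33333333   # or cancel the allocation by job ID from another session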
Enjoy the many ways to make sure your jobs are set up right. Test responsibly.