Researchers may use GPUs in the form of Normalized Graphics Processor Units (NGUs), which include all of the infrastructure (memory, network, rack space, cooling) necessary for GPU-accelerated computation.
Groups that do not have GPU allocations can invest in GPUs by filling out the purchase form at: https://www.rc.ufl.edu/services/purchase-request/.
We have two types of GPU services for two different kinds of applications.
Hardware Accelerated GUI
GPUs in these servers are used to accelerate rendering for graphical applications. These servers are in the SLURM "hwgui" partition. Refer to the Hardware Accelerated GUI Sessions page for more information on available resources and usage.
GPU Assisted Computation
A number of high-performance applications installed on HiPerGator implement GPU-accelerated computing via CUDA to achieve significant speedups over their CPU implementations. You must use the "gpu" partition (--partition=gpu) to run GPU-enabled computational applications.
GPU Specification for GPU Partition
We have three types of NVIDIA GPU nodes currently available in the "gpu" partition:
- NVIDIA K80, with 2 GPUs per K80 card and 2 K80 cards per host. Please refer to the K80 technical specs.
- NVIDIA GeForce GTX 1080 Ti, with 1 GPU per 1080Ti card and 2 1080Ti cards per host. Please refer to the 1080Ti technical specs.
- NVIDIA GeForce RTX 2080 Ti, with 1 GPU per 2080Ti card and 8 2080Ti cards per host. Please refer to the 2080Ti technical specs.
Compile CUDA-Enabled Programs
To compile CUDA programs, please refer to the Nvidia CUDA Toolkit page.
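As a minimal sketch, a build on a GPU node might look like the following; the source file name example.cu is only a placeholder, and the cuda module version should match what is currently installed (the version shown is the one used in the job script example below):
module load cuda/10.0.130
nvcc -o example example.cu    # compile a CUDA source file with the NVIDIA compiler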
GPU Use Under Slurm
- GPUs are allocated only via the investment QOS. There is no burst QOS in the gpu partition.
- The time limit for the gpu partition is 7 days (at most #SBATCH --time=7-00:00:00) to increase the availability of GPU resources.
- CUDA environment change: the default CUDA environment is cuda/10. To help users transition from cuda/9 to cuda/10, some GPU nodes will keep running the cuda/9 environment until Oct. 31. To request cuda/9 nodes, please add the SLURM option --constraint=cuda9 to your job script.
In order to request interactive access to a GPU under SLURM, use commands similar to those that follow.
- To request access to one GPU (of any type) for a default 10-minute session:
srun -p gpu --gres=gpu:1 --pty -u bash -i
- To request access to two Tesla GPUs on a single node for a 1-hour session:
srun -p gpu --gres=gpu:tesla:2 --time=01:00:00 --pty -u bash -i
- To request access to two GeForce GPUs on a single node for a 1-hour session:
srun -p gpu --gres=gpu:geforce:2 --time=01:00:00 --pty -u bash -i
- To request access to GPU nodes in the cuda/9 environment for a 1-hour session:
srun -p gpu --gres=gpu:1 --constraint=cuda9 -t 01:00:00 --pty -u bash -i
If no GPUs are currently available, your request will be queued and your session will start once the next GPU becomes available; alternatively, you may cancel the request and try again later. If you requested more time than you need, please end your session when you are finished so that the GPU becomes available to other users.
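For example, to release a GPU when you are finished, simply exit the interactive shell; a queued or running request can also be cancelled with scancel (the job ID below is a placeholder):
exit                # leaving the interactive shell ends the srun allocation
scancel 12345678    # or cancel the job by its ID from another session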
For batch jobs, to request GPU resources, use lines similar to the following in your submission script.
- In this example, two Tesla GPUs on a single server (--nodes defaults to "1") will be allocated to the job:
#SBATCH --partition=gpu
#SBATCH --gpus=tesla:2
- In this example, two GeForce GPUs on a single server (--nodes defaults to "1") will be allocated to the job:
#SBATCH --partition=gpu
#SBATCH --gpus=geforce:2
- In this example, two GPUs on a single server (--nodes defaults to "1") with the cuda/9 environment will be allocated to the job:
#SBATCH --partition=gpu
#SBATCH --gpus=2
#SBATCH --constraint=cuda9
The GPUs are configured to run in exclusive mode, which means the GPU driver allows only one process at a time to access each GPU. If GPU 0 is in use and your application tries to use it, the application will simply block. If your application does not call cudaSetDevice(), the CUDA runtime should assign it to a free GPU. Since everyone accesses the GPUs through the batch system, there should be no over-subscription of the GPUs.
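As a quick sanity check inside an interactive or batch job, you can confirm which GPUs were assigned to your job and that they are in an exclusive compute mode. Slurm typically exports CUDA_VISIBLE_DEVICES for jobs that request GPUs, and the exact mode string reported depends on the node's driver configuration:
echo $CUDA_VISIBLE_DEVICES                                      # GPUs assigned to this job
nvidia-smi --query-gpu=index,name,compute_mode --format=csv    # compute mode should indicate exclusive process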
Job Script Examples
This is a sample script for an MPI-parallel VASP job requesting and using GPUs under SLURM:
#!/bin/bash
#SBATCH --job-name=vasptest
#SBATCH --output=vasp.out
#SBATCH --error=vasp.err
#SBATCH --mail-type=ALL
#SBATCH --mail-user=firstname.lastname@example.org
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --cpus-per-task=1
#SBATCH --ntasks-per-node=8
#SBATCH --ntasks-per-socket=4
#SBATCH --mem-per-cpu=7000mb
#SBATCH --distribution=cyclic:cyclic
#SBATCH --partition=gpu
#SBATCH --gres=gpu:geforce:4
#SBATCH --time=00:30:00

echo "Date      = $(date)"
echo "Host      = $(hostname -s)"
echo "Directory = $(pwd)"

module purge
module load cuda/10.0.130 intel/2018 openmpi/4.0.0 vasp/5.4.4

T1=$(date +%s)
srun --mpi=pmix_v3 vasp_gpu
T2=$(date +%s)

ELAPSED=$((T2 - T1))
echo "Elapsed Time = $ELAPSED"