Nvidia CUDA Toolkit
CUDA™ is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are finding broad-ranging uses for GPU computing with CUDA.
See also: GPU Access
Use the 'module avail' command after loading a cuda environment module to see the available module trees, or to see which compiler and openmpi modules require the cuda module to be loaded.
For CUDA development, please load the "cuda" module. Doing so ensures that your environment is set up correctly for use of the CUDA compiler, header files, and libraries. Currently, cuda/9.2.88 and cuda/10.0.130 are the only versions supported on HiPerGator.
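As a sketch of a typical session (using one of the module versions listed above; your site's default may differ):

```
$ module load cuda/10.0.130
$ nvcc --version    # confirm the CUDA compiler is now on your PATH
```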
Selecting CUDA Arch Flags
When compiling with NVCC, you need to specify the NVIDIA architecture that the CUDA files will be compiled for. Please refer to the GPU Feature List for the CUDA naming scheme sm_xy, where x denotes the GPU generation and y denotes the version. The table below lists the SM flags for the three types of GPUs on HiPerGator.
|SM flag|GPU|
|SM_37|Tesla K80 (no longer available)|
|SM_61|GeForce GTX 1080Ti|
|SM_75|GeForce RTX 2080Ti|
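As an illustrative sketch (file and kernel names are hypothetical, not part of the HiPerGator docs), a minimal CUDA program targeting the GeForce GTX 1080Ti would be built with the SM_61 flag:

```
// vector_add.cu -- minimal example kernel (illustrative only)
#include <cstdio>

__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory keeps the example short; explicit cudaMemcpy also works.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
    vectorAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();
    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compile with, e.g., `nvcc -arch=sm_61 vector_add.cu -o vector_add`, substituting the SM flag from the table above that matches the GPU you requested in your job.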
Sample GPU Batch Job Scripts
See the Example_SLURM-GPU-Job-Scripts page for an example.