Nvidia CUDA Toolkit

Revision as of 20:10, 20 September 2013

Description

From the CUDA website:

CUDA™ is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are finding broad-ranging uses for GPU computing with CUDA.

Required Modules

cuda
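On a typical module-based HPC system, loading the toolkit and checking that the compiler is available might look like the following sketch (the exact module name and available versions depend on the cluster):

```shell
# Load the CUDA toolkit module (name assumed; check `module avail cuda` first)
module load cuda

# Verify that the CUDA compiler is now on the PATH
nvcc --version
```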

System Variables

  • HPC_CUDA_DIR
  • HPC_CUDA_BIN
  • HPC_CUDA_INC
  • HPC_CUDA_LIB
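As a sketch of how these variables might be used when compiling, assuming the cuda module has been loaded (the source file name `saxpy.cu` is a hypothetical example):

```shell
# Compile a CUDA source file against the toolkit selected by the cuda module.
# HPC_CUDA_INC and HPC_CUDA_LIB are set in the environment by the module.
nvcc -I"$HPC_CUDA_INC" -L"$HPC_CUDA_LIB" -o saxpy saxpy.cu
```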

Additional Information

Also see NVIDIA GPUs.

PBS Script Examples

See the Nvidia CUDA Toolkit_PBS page for CUDA PBS script examples.

Usage Policy

Interactive Use

If you need interactive access to a GPU for development and testing, you can request an interactive session through the batch system.

To gain interactive access to a GPU server, run a command similar to the following.

qsub -I -l nodes=1:gpus=1:tesla,walltime=01:00:00 -q gpu

To gain access to one of the Fermi-class GPUs, make a similar request, but specify the "fermi" attribute in your resource request, as below.

qsub -I -l nodes=1:gpus=1:fermi,walltime=01:00:00 -q gpu

If a GPU is available, you will get a prompt on one of the nodes within a minute or two. Otherwise, you can wait or try again later; if you choose to wait, you will be connected when a GPU becomes available. The default walltime limit for the gpu queue is 10 minutes. Request the amount of time you need, but be sure to log out and end your session when you are finished so that the GPU becomes available to others.
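Once your interactive session starts, one way to confirm which GPU you have been assigned is to query the driver (assuming the `nvidia-smi` utility is installed on the GPU nodes):

```shell
# List the GPUs visible on this node, with utilization and memory usage
nvidia-smi
```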

If your work needs both GPUs attached to a single node, run the following command instead.

qsub -I -l nodes=1:gpus=2,walltime=01:00:00 -q gpu

If you need to request a particular machine, say tesla1, use the following qsub command.

qsub -I -l nodes=tesla1:gpus=1,walltime=01:00:00 -q gpu