OpenCL
Description
OpenCL™ (Open Computing Language) is a low-level API for heterogeneous computing. Using the OpenCL API, developers can launch compute kernels, written in a limited subset of the C programming language, on GPUs and other supported compute devices.
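For orientation, an OpenCL compute kernel is a C-like function marked with the __kernel qualifier. A minimal vector-addition kernel, similar in form to the vector_add_kernel.cl used in the build example below, might look like the following sketch (the function and argument names are illustrative):

__kernel void vector_add(__global const int *A,
                         __global const int *B,
                         __global int *C) {
    /* Each work-item handles one array element. */
    int i = get_global_id(0);
    C[i] = A[i] + B[i];
}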
Environment Modules
Run module spider opencl to find out what environment modules are available for this application.
System Variables
- HPC_OPENCL_DIR - installation directory
- HPC_OPENCL_INC - include directory
- HPC_OPENCL_LIB - library directory
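These variables are expected to be set once the opencl module is loaded. The build example below passes $HPC_OPENCL_INC to the compiler; if the linker does not find the OpenCL library on its own, $HPC_OPENCL_LIB can likewise be added to the link line (for example, -L$HPC_OPENCL_LIB).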
Additional Information
The following example shows how to download and build a small sample OpenCL program.
[prescott@r11a-s17 ~]$ mkdir opencl
[prescott@r11a-s17 ~]$ cd opencl
[prescott@r11a-s17 opencl]$ module load opencl
[prescott@r11a-s17 opencl]$ wget -q https://raw.github.com/smistad/OpenCL-Getting-Started/master/main.c
[prescott@r11a-s17 opencl]$ wget -q https://raw.github.com/smistad/OpenCL-Getting-Started/master/vector_add_kernel.cl
[prescott@r11a-s17 opencl]$ gcc -c -I$HPC_OPENCL_INC main.c -o main.o
[prescott@r11a-s17 opencl]$ gcc main.o -o myopenclprog -l OpenCL
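The resulting binary needs an OpenCL-capable device at run time, so with the CUDA-based implementation it would typically be run on a node with a GPU rather than on a login node. For reference, the host side of such a program follows the standard OpenCL sequence: pick a platform and device, create a context and command queue, copy input data into device buffers, build and launch the kernel, and read the results back. The condensed, self-contained sketch below illustrates that sequence; error checking is omitted, and the file name, array size, and variable names are invented for the example rather than taken from the downloaded main.c.

/* vecadd_host.c - condensed sketch of a typical OpenCL host program.
 * Possible build line once the opencl module is loaded:
 *   gcc -I$HPC_OPENCL_INC vecadd_host.c -o vecadd -lOpenCL
 */
#include <stdio.h>
#include <CL/cl.h>

#define N 1024

/* Kernel source embedded as a string so the sketch is self-contained. */
static const char *src =
    "__kernel void vector_add(__global const int *A,"
    "                         __global const int *B,"
    "                         __global int *C) {"
    "    int i = get_global_id(0);"
    "    C[i] = A[i] + B[i];"
    "}";

int main(void)
{
    int a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

    /* 1. Pick the first platform and the first GPU device on it. */
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    /* 2. Create a context and a command queue for that device. */
    cl_int err;
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    /* 3. Allocate device buffers and copy the input arrays over. */
    cl_mem dA = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  sizeof(a), NULL, &err);
    cl_mem dB = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  sizeof(b), NULL, &err);
    cl_mem dC = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, &err);
    clEnqueueWriteBuffer(q, dA, CL_TRUE, 0, sizeof(a), a, 0, NULL, NULL);
    clEnqueueWriteBuffer(q, dB, CL_TRUE, 0, sizeof(b), b, 0, NULL, NULL);

    /* 4. Build the kernel from source and set its arguments. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vector_add", &err);
    clSetKernelArg(k, 0, sizeof(cl_mem), &dA);
    clSetKernelArg(k, 1, sizeof(cl_mem), &dB);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dC);

    /* 5. Launch one work-item per element and read the result back. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dC, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);
    printf("c[10] = %d (expected %d)\n", c[10], a[10] + b[10]);

    /* 6. Release OpenCL objects. */
    clReleaseMemObject(dA); clReleaseMemObject(dB); clReleaseMemObject(dC);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}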
Usage Policy
OpenCL is a trademark of Apple Inc., used under license by Khronos.
Installation
The OpenCL implementation installed on the cluster is the one included with the NVIDIA CUDA Toolkit.