TRITONSERVER


Description

TRITONSERVER website  

Triton Inference Server provides a cloud and edge inferencing solution optimized for both CPUs and GPUs. Triton supports HTTP/REST and gRPC protocols that allow remote clients to request inference for any model managed by the server. For edge deployments, Triton is available as a shared library with a C API that allows the full functionality of Triton to be included directly in an application.
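
A minimal usage sketch: once a server binary is available, it is pointed at a directory of models and can then be queried over the HTTP/REST API. The model-repository path below is a placeholder, and 8000 is Triton's default HTTP port:

 # Start the server against a directory of models (path is a placeholder)
 tritonserver --model-repository=/path/to/model_repository
 
 # From another shell, check server readiness via the HTTP/REST API
 curl -v localhost:8000/v2/health/ready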

Environment Modules

Run module spider triton to find out what environment modules are available for this application.
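
A typical session might then look like the sketch below; the exact module name and version are assumptions, so substitute whatever module spider reports:

 module spider triton      # list matching modules and available versions
 module load tritonserver  # assumed module name; use the name/version shown by spider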

System Variables

  • HPC_TRITONSERVER_DIR - installation directory
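
Once the module is loaded, this variable is set in the shell environment and can be referenced in job scripts, for example:

 echo "$HPC_TRITONSERVER_DIR"    # prints the installation directory for the loaded version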