Triton Inference Server provides a cloud and edge inferencing solution optimized for both CPUs and GPUs. Triton supports the HTTP/REST and gRPC protocols, which allow remote clients to request inferencing for any model managed by the server. For edge deployments, Triton is also available as a shared library with a C API, allowing the full functionality of Triton to be embedded directly in an application.
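As a minimal sketch of the HTTP/REST path described above, the helper below builds an inference request body in the KServe v2 format that Triton's HTTP endpoint accepts. The model name, input tensor name, and host/port in the comments are hypothetical placeholders, not values defined by this installation.

```python
import json

def build_infer_request(input_name, data, datatype="FP32"):
    """Build a KServe v2 inference request body for Triton's HTTP/REST API.

    The resulting JSON would be POSTed to
    http://<host>:8000/v2/models/<model>/infer (host, port, and model
    name depend on how the server was launched).
    """
    return {
        "inputs": [
            {
                "name": input_name,       # must match the model's input tensor name
                "shape": [len(data)],     # 1-D tensor for this simple sketch
                "datatype": datatype,     # e.g. FP32, INT64, BYTES
                "data": data,             # row-major flattened values
            }
        ]
    }

# Example: a request for a hypothetical model input named "INPUT0".
payload = build_infer_request("INPUT0", [1.0, 2.0, 3.0, 4.0])
body = json.dumps(payload)
```

The same request structure applies over gRPC, where the fields map onto the protocol's protobuf messages instead of JSON.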
Run module spider triton to find out what environment modules are available for this application.
- HPC_TRITONSERVER_DIR - installation directory
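A typical session on the cluster might look like the following; the module version shown is a placeholder, and the exact tritonserver invocation depends on where your model repository lives.

```shell
# Discover and load the module (version shown is illustrative).
module spider triton
module load triton

# The module sets HPC_TRITONSERVER_DIR to the installation directory.
echo "$HPC_TRITONSERVER_DIR"

# Launch the server against a hypothetical model repository path.
tritonserver --model-repository=/path/to/model_repository
```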