Caper

From UFRC
Revision as of 14:43, 11 October 2023 by G0ddengr (talk | contribs)

Description

Caper website: https://github.com/ENCODE-DCC/caper

Caper (Cromwell Assisted Pipeline ExecutoR) is a Python wrapper package for Cromwell and a dependency of the ENCODE Hi-C uniform processing pipeline. Caper wraps Cromwell to run pipelines on multiple platforms, including GCP (Google Cloud Platform), AWS (Amazon Web Services), and HPC schedulers such as SLURM, SGE, PBS/Torque, and LSF. It provides an easier way of running Cromwell in server or run mode by automatically composing the necessary input files for Cromwell, and it can run each task in a specified environment (Docker, Singularity, or Conda).

Caper also automatically localizes all files defined in your input JSON and on the command line (keeping their directory structure) according to the specified backend. For example, if your chosen backend is GCP and the files in your input JSON are on S3 buckets (or even plain URLs), Caper automatically transfers the s3:// and http(s):// files to a specified gs:// bucket directory. Supported URIs are s3://, gs://, http(s)://, and local absolute paths, and they can be used in both the CLI and the input JSON. Private URIs are also accessible if you authenticate with the cloud platform CLIs (gcloud auth, aws configure) or, for URLs, via ~/.netrc.
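As a sketch of typical usage (the workflow and input file names here are hypothetical, and the exact flags depend on your Caper configuration), a pipeline can be launched on a SLURM backend with tasks running in Singularity containers:

```shell
# Run a WDL workflow through Caper on the SLURM backend, using
# Singularity for per-task environments. The input JSON may reference
# s3://, gs://, http(s)://, or local absolute paths; Caper localizes
# them for the chosen backend before handing the job to Cromwell.
caper run hic.wdl -i input.json --singularity --backend slurm
```

Caper can also be started in server mode (`caper server`) so that multiple workflows are submitted to one long-running Cromwell instance.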

Environment Modules

Run module spider caper to find out what environment modules are available for this application.
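For example, assuming the module is named caper as described above, a typical session on the cluster looks like:

```shell
# List the versions of the caper module available on the system
module spider caper

# Load the default version and confirm the executable is on PATH
module load caper
caper --version
```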

System Variables

  • HPC_CAPER_DIR - installation directory
  • HPC_CAPER_BIN - executable directory
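A minimal sketch of inspecting these variables after loading the module (the variable names are exactly those listed above):

```shell
# After loading the module, the system variables point at the install
module load caper
echo "$HPC_CAPER_DIR"   # installation directory
ls "$HPC_CAPER_BIN"     # executables provided by the installation
```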