QIIME
Revision as of 20:48, 20 July 2016
Description
QIIME (pronounced "chime") stands for Quantitative Insights Into Microbial Ecology. QIIME is an open source software package for comparison and analysis of microbial communities, primarily based on high-throughput amplicon sequencing data (such as SSU rRNA) generated on a variety of platforms, but also supporting analysis of other types of data (such as shotgun metagenomic data). QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics. QIIME has been applied to single studies based on billions of sequences from thousands of samples.
Required Modules
Serial
- qiime
System Variables
- HPC_QIIME_DIR - installation directory
How To Run
- TMPDIR
QIIME will use /tmp by default, which will fill up memory disks on HPG2 nodes and cause node and job failures. Use something like
mkdir -p tmp
export TMPDIR=$(pwd)/tmp
in your QIIME job script to prevent this, and launch the job from the analysis directory rather than from your home directory.
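For example, the two lines can sit at the top of the job script, before any QIIME commands run. The directives, memory, and time values below are illustrative placeholders, not site requirements:

```shell
#!/bin/bash
#SBATCH --mem=4gb        # example value; size for your data
#SBATCH --time=04:00:00  # example value

# Redirect QIIME's scratch files away from the node-local /tmp.
# $(pwd) is the directory the job was submitted from.
mkdir -p tmp
export TMPDIR=$(pwd)/tmp

module load qiime
# ... QIIME commands follow
```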
- Tasks vs Cores for parallel runs
Python threads in a parallel QIIME job will be bound to the same CPU core even if multiple ntasks are specified in the job script. Use cpus-per-task to parallelize QIIME jobs correctly. For example, for an 8-thread parallel QIIME job use the following resource request in your job script:
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
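Combining both tips, a minimal parallel job script might look like the following sketch. The input file, output directory, memory, and time values are placeholders; QIIME 1 workflow scripts take `-a` to enable parallel execution and `-O` to set the number of jobs to start, which should match `--cpus-per-task`:

```shell
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=16gb        # example value
#SBATCH --time=24:00:00   # example value

# Keep temporary files out of the node-local /tmp
mkdir -p tmp
export TMPDIR=$(pwd)/tmp

module load qiime

# Hypothetical inputs; -a runs the workflow in parallel and
# -O matches the number of jobs to the allocated CPU cores.
pick_open_reference_otus.py -i seqs.fna -o otus/ -a -O ${SLURM_CPUS_PER_TASK}
```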