GROMACS

Description

GROMACS website: https://www.gromacs.org/

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.

It is primarily designed for biochemical molecules like proteins, lipids, and nucleic acids that have many complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions that usually dominate simulations, many groups also use it for research on non-biological systems, e.g. polymers.

Environment Modules

Run 'module spider gromacs' to find out which environment modules are available for this application.
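For example, to list the installed versions and then load one together with its compiler and MPI toolchain (the version strings below are only illustrative; use whatever 'module spider' reports on the system):

module spider gromacs                 # list all available GROMACS versions
module spider gromacs/2019.2          # show the modules needed to load this particular version
module load gcc/8.2.0 openmpi/4.0.1 gromacs/2019.2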

System Variables

  • HPC_GROMACS_DIR - installation directory
  • HPC_GROMACS_BIN - executable directory
  • HPC_GROMACS_INC - header file directory
  • HPC_GROMACS_LIB - library directory
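These variables are defined once a gromacs module is loaded; a minimal sketch of inspecting them from the shell (the toolchain shown is only an example):

module load gcc/8.2.0 openmpi/4.0.1 gromacs/2019.2
echo "$HPC_GROMACS_DIR"    # print the installation directory
ls "$HPC_GROMACS_BIN"      # list the GROMACS executables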


Job Script Examples

See the GROMACS_Job_Scripts page for more GROMACS job script examples.

Note: Use 'module spider gromacs' to find the available GROMACS versions; the module loads in the examples below may be outdated.

Sample parallel MPI job script:

#!/bin/bash
#SBATCH --job-name=gromacs
#SBATCH --mail-user=YOUR_MAIL_ADDRESS_HERE
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --output=gmx-%j.out
#SBATCH --ntasks=2                    # number of MPI ranks
#SBATCH --cpus-per-task=4             # OpenMP threads per MPI rank
#SBATCH --ntasks-per-socket=1
#SBATCH --distribution=cyclic:block
#SBATCH --time=24:00:00
#SBATCH --mem-per-cpu=1gb

module purge                                    # start from a clean environment
ml gcc/8.2.0 openmpi/4.0.1 gromacs/2019.2       # 'ml' is shorthand for 'module load'

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK     # match OpenMP threads to the Slurm allocation
srun --mpi=pmix_v3 gmx mdrun -ntomp ${SLURM_CPUS_PER_TASK} -s topol.tpr
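The script above assumes a run input file named topol.tpr in the working directory. A minimal sketch of building that file with gmx grompp and submitting the job (the .mdp/.gro/.top file names and the script name gromacs_mpi.sh are placeholders):

ml gcc/8.2.0 openmpi/4.0.1 gromacs/2019.2
gmx grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr   # assemble the run input file
sbatch gromacs_mpi.sh                                        # submit the script shown above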

Sample GPU acceleration job script:

#!/bin/bash
#SBATCH --job-name=multi-gpu
#SBATCH --mail-user=YOUR_MAIL_ADDRESS_HERE
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --output=gromacs_%j.log
#SBATCH --nodes=1
#SBATCH --ntasks=2                    # one MPI rank per GPU
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=7             # OpenMP threads per MPI rank
#SBATCH --ntasks-per-socket=1
#SBATCH --distribution=cyclic:block
#SBATCH --time=2:00:00
#SBATCH --mem-per-cpu=1gb
#SBATCH --partition=hpg2-gpu
#SBATCH --gres=gpu:a100:2             # request two A100 GPUs

module load gcc/5.2.0 openmpi/1.10.2 gromacs/2016.3-CUDA

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun --mpi=pmi2 --accel-bind=g --ntasks=$SLURM_NTASKS gmx mdrun -v
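A minimal sketch of submitting this script and confirming that the GPUs were actually used (gromacs_gpu.sh is a placeholder name; gmx mdrun writes its own md.log in the working directory):

sbatch gromacs_gpu.sh        # submit the GPU job script above
grep -i gpu md.log           # the mdrun log reports the GPUs it detected and assigned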


Citation

If you publish research that uses GROMACS, cite it as follows:

GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation
Hess, B., Kutzner, C., van der Spoel, D. and Lindahl, E.
J. Chem. Theory Comput., 4, 435-447 (2008)