Gromacs

Description

Gromacs website: https://www.gromacs.org/

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.

It is primarily designed for biochemical molecules like proteins, lipids, and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (which usually dominate simulations), many groups are also using it for research on non-biological systems, e.g. polymers.

Required Modules

You must load the appropriate modules in your submission script, as in the sketch below the module list.

Parallel (MPI)

  • intel/2016.0.109
  • openmpi/1.10.2
  • gromacs
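
In a submission script, the corresponding module loads would look roughly like the following sketch (the version strings are the ones listed above; run "module spider gromacs" to check which versions are currently installed):

  # Load the compiler and MPI stack before Gromacs
  module load intel/2016.0.109
  module load openmpi/1.10.2
  module load gromacs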

System Variables

  • HPC_GROMACS_DIR - installation directory
  • HPC_GROMACS_BIN - executable directory
  • HPC_GROMACS_INC - header file directory
  • HPC_GROMACS_LIB - library directory
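
These variables are defined once the module is loaded and can be used in job scripts or build commands; a small hypothetical check:

  # After loading the module, the HPC_GROMACS_* variables point into the install tree
  module load gromacs
  echo "$HPC_GROMACS_BIN"   # directory containing the Gromacs executables
  ls "$HPC_GROMACS_LIB"     # libraries, useful when compiling against Gromacs (-I$HPC_GROMACS_INC -L$HPC_GROMACS_LIB)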


Job Script Examples

See the Gromacs_Job_Scripts page for Gromacs job script examples.
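
For orientation only, a minimal MPI run might look like the sketch below. The resource requests and the input name topol.tpr are placeholders, gmx_mpi is assumed to be the name of the MPI-enabled binary, and the scripts on the Gromacs_Job_Scripts page remain the reference:

  #!/bin/bash
  #SBATCH --job-name=gromacs-md
  #SBATCH --nodes=1
  #SBATCH --ntasks=4
  #SBATCH --cpus-per-task=2
  #SBATCH --mem-per-cpu=2gb
  #SBATCH --time=04:00:00

  module load intel/2016.0.109 openmpi/1.10.2 gromacs

  # One MPI rank per task, with OpenMP threads per rank taken from the allocation
  mpiexec gmx_mpi mdrun -ntomp ${SLURM_CPUS_PER_TASK} -deffnm topol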


Citation

If you publish research that uses Gromacs, you have to cite it as follows:
Hess, B., Kutzner, C., van der Spoel, D., and Lindahl, E. GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation. J. Chem. Theory Comput., 4, 435-447 (2008).