LAMMPS

Description

LAMMPS website: https://www.lammps.org/

LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a molecular dynamics simulator that models ensembles of particles in liquid, solid, or gaseous states. It is open-source software, written in C++ and developed at Sandia National Laboratories. It can be used to model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.

Environment Modules

Run module spider LAMMPS to find out what environment modules are available for this application.

System Variables

  • HPC_LAMMPS_DIR - installation directory
  • HPC_LAMMPS_BIN - executable directory
  • HPC_LAMMPS_LIB - library directory
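
For example, after loading the module you can inspect these variables; the exact values depend on the installed version, so this is only an illustration:

module load lammps
echo $HPC_LAMMPS_BIN     # print the executable directory
ls $HPC_LAMMPS_BIN       # list the available LAMMPS binaries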

Additional Information

To execute the LAMMPS binaries, set up the appropriate environment variables and launch them with srun, using the PMIx level that corresponds to the OpenMPI version used to build the binary. For example, the command will look similar to:

srun --mpi=pmix_v3 $LAMMPS -sf gpu -pk gpu 2 -var x 2 -var y 7 -var z 7 < in.$job
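
The PMIx levels that the local srun supports can be listed with the --mpi=list option; the available levels vary by site and SLURM build:

srun --mpi=list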

For standard or user-customized installations, the fastest approach is to build with CMake.


Standard installation:

cd <install-dir>
tar xzf lammps-<version>.tar.gz
cd lammps-<version>
# start from a clean module environment
ml purge
ml cmake/<version> intel/<version> openmpi/<version>
ml list
# out-of-source CMake build
mkdir build
cd build
cmake -D CMAKE_INSTALL_PREFIX=<target_lammps_dir> ../cmake
make -j8
make install

Presets can be used for customization, e.g. to enable a very rich set of packages:

cmake -D CMAKE_INSTALL_PREFIX=<target_lammps_dir> -C ../cmake/presets/all_on.cmake -C ../cmake/presets/nolib.cmake ../cmake
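
Individual packages can also be enabled explicitly with -D PKG_<NAME>=yes options; the MOLECULE and KSPACE packages below are only examples of the pattern, not a recommended set:

cmake -D CMAKE_INSTALL_PREFIX=<target_lammps_dir> \
      -D PKG_MOLECULE=yes -D PKG_KSPACE=yes ../cmake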

Containers downloaded from the NVIDIA GPU Cloud (NGC) are also installed. Usage:

ml purge
ml ngc-lammps
lmp <your_parameters>
or, for an MPI run:
mpirun -n <#_of_processes> lmp <your_parameters>
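
For instance, after loading the container module as above, a run that reads a LAMMPS input script uses the -in flag (the input file name in.lj here is hypothetical):

lmp -in in.lj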

Job Script Examples

Sample serial script:

#!/bin/bash
#SBATCH --job-name=<JOBNAME>
#SBATCH --mail-user=<EMAIL>
#SBATCH --mail-type=FAIL,END
#SBATCH --output=<my_job-%j.out>
#SBATCH --error=<my_job-%j.err>
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=2G
#SBATCH --time=01:00:00
#SBATCH --account=<GROUP>
#SBATCH --array=<BEGIN-END>
 
module load intel/2016.0.109 lammps
 
LAMMPS=lmp_ufhpc
INPUT=<input_file>
 
mpiexec $LAMMPS < $INPUT > log.out 2>&1
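
Assuming the script above is saved as lammps_serial.sh (a hypothetical name), it is submitted with sbatch:

sbatch lammps_serial.sh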

Sample parallel script:

#!/bin/bash
#SBATCH --job-name=<JOBNAME>
#SBATCH --mail-user=<EMAIL>
#SBATCH --mail-type=FAIL,END
#SBATCH --output=<my_job-%j.out>
#SBATCH --error=<my_job-%j.err>
#SBATCH --nodes=1
#SBATCH --ntasks=<number of tasks>
#SBATCH --mem-per-cpu=2G
#SBATCH --time=01:00:00
#SBATCH --account=<GROUP>
#SBATCH --array=<BEGIN-END>
 
# run from the directory the job was submitted from (SLURM equivalent of PBS_O_WORKDIR)
cd $SLURM_SUBMIT_DIR
 
module load intel/2016.0.109 openmpi/1.10.2 lammps
 
LAMMPS=lmp_ufhpc
INPUT=<input_file>
 
mpiexec $LAMMPS < $INPUT > log.out 2>&1
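
Under SLURM, mpiexec normally inherits the task count from --ntasks; to make the rank count explicit you can pass SLURM's variable instead, as in this sketch:

mpiexec -np $SLURM_NTASKS $LAMMPS < $INPUT > log.out 2>&1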