LAMMPS
Description
LAMMPS, the Large-scale Atomic/Molecular Massively Parallel Simulator, is a molecular dynamics simulator that models an ensemble of particles in the liquid, solid, or gaseous state. It is open-source software, written in C++ and developed at Sandia National Laboratories. It can be used to model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using various force fields and boundary conditions.
Environment Modules
Run module spider LAMMPS to find out what environment modules are available for this application.
LAMMPS on HiPerGator is installed in three flavors (a load example follows the list):
- GNU compiler with OpenMPI
- Intel compiler with OpenMPI
- NVIDIA NGC container
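For example, a minimal sketch of selecting one flavor; exact module names and versions vary, so check the module spider output first:

module spider lammps                # list the available builds
module load intel openmpi lammps    # e.g. the Intel/OpenMPI flavor
# or, for the containerized build:
module load ngc-lammps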
System Variables
- HPC_LAMMPS_DIR - installation directory
- HPC_LAMMPS_BIN - executable directory
- HPC_LAMMPS_LIB - library directory
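Once a LAMMPS module is loaded, these variables can be inspected directly, for example:

module load lammps
echo $HPC_LAMMPS_DIR    # installation directory
ls $HPC_LAMMPS_BIN      # available executables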
Additional Information
To execute LAMMPS binaries, set up the appropriate environment variables and run the binaries with srun, using the PMIx level corresponding to the OpenMPI version used to build the binary. For example, the command will look similar to
srun --mpi=${HPC_PMIX} $LAMMPS -sf gpu -pk gpu 2 -var x 2 -var y 7 -var z 7 < in.$job
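As a fuller sketch, assuming a GPU-enabled binary named lmp under $HPC_LAMMPS_BIN and an input deck in.melt (both names are hypothetical; adjust to your build):

module load lammps                  # load the flavor matching your build
LAMMPS=$HPC_LAMMPS_BIN/lmp          # hypothetical binary name; check $HPC_LAMMPS_BIN
job=melt                            # the run reads the input file in.melt
srun --mpi=${HPC_PMIX} $LAMMPS -sf gpu -pk gpu 2 -var x 2 -var y 7 -var z 7 < in.$job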
For standard or user-customized installations, the fastest approach is to use CMake.
Standard installation:
cd <install-dir>
tar xzf lammps-<version>.tar.gz
cd lammps-<version>

ml purge
ml cmake/<version> intel/<version> openmpi/<version>
ml list

mkdir build
cd build
cmake -D CMAKE_INSTALL_PREFIX=<target_lammps_dir> ../cmake
make -j8
make install
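To check that the build succeeded, the installed binary can print its help text, which lists the compiled-in packages (lmp is the default executable name produced by the CMake build):

<target_lammps_dir>/bin/lmp -h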
Presets can be used for customization, e.g. to enable a very rich set of packages:
cmake -D CMAKE_INSTALL_PREFIX=<target_lammps_dir> -C ../cmake/presets/all_on.cmake -C ../cmake/presets/nolib.cmake ../cmake
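Configuration with presets is followed by the same compile and install step as the standard build:

make -j8
make install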
NGC containers are downloaded and installed from the NVIDIA GPU Cloud. Usage:
ml purge
ml ngc-lammps
lmp <your_parameters>

or, for an MPI run:

mpirun -n <#_of_processes> lmp <your_parameters>
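For example, assuming an input deck named in.lj (a hypothetical file name), a four-process run would look like:

ml purge
ml ngc-lammps
mpirun -n 4 lmp -in in.lj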
Job Script Examples
Sample serial script:
#!/bin/bash
#SBATCH --job-name=<JOBNAME>
#SBATCH --mail-user=<EMAIL>
#SBATCH --mail-type=FAIL,END
#SBATCH --output <my_job-%j.out>
#SBATCH --error <my_job-%j.err>
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=2G
#SBATCH --time=01:00:00
#SBATCH --account=<GROUP>
#SBATCH --array=<BEGIN-END>

module load intel/2016.0.109 lammps

LAMMPS=lmp_ufhpc
INPUT=<input_file>

mpiexec $LAMMPS < $INPUT > log.out 2>&1
Sample parallel script:
#!/bin/bash
#SBATCH --job-name=<JOBNAME>
#SBATCH --mail-user=<EMAIL>
#SBATCH --mail-type=FAIL,END
#SBATCH --output <my_job-%j.out>
#SBATCH --error <my_job-%j.err>
#SBATCH --nodes=1
#SBATCH --ntasks=<number of tasks>
#SBATCH --mem-per-cpu=2G
#SBATCH --time=01:00:00
#SBATCH --account=<GROUP>
#SBATCH --array=<BEGIN-END>

cd $SLURM_SUBMIT_DIR   # SLURM equivalent of the PBS variable $PBS_O_WORKDIR

module load intel/2016.0.109 openmpi/1.10.2 lammps

LAMMPS=lmp_ufhpc
INPUT=<input_file>

mpiexec $LAMMPS < $INPUT > log.out 2>&1
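Either script is submitted with sbatch; the --array directive launches one job per index in <BEGIN-END>. The script file name here is hypothetical:

sbatch lammps_job.sh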