LAMMPS
Description
LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a molecular dynamics simulator that models an ensemble of particles in a liquid, solid, or gaseous state. It is open-source software, written in C++ and developed at Sandia National Laboratories. It can be used to model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using various force fields and boundary conditions.
Environment Modules
Run module spider LAMMPS to find out what environment modules are available for this application.
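For example, a typical sequence is to query the available versions and then load one. The module versions below are illustrative only; use whatever module spider reports on the system:

module spider lammps
module load gcc/12.2.0 lammps/02Aug23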
LAMMPS at HiperGator is installed in four flavors:
- GNU compiler with openmpi
- GNU compiler with openmpi and CUDA support
- Intel compiler with openmpi
- NVIDIA NGC container (https://catalog.ngc.nvidia.com/orgs/hpc/containers/lammps)
System Variables
- HPC_LAMMPS_DIR - installation directory
- HPC_LAMMPS_BIN - executable directory
- HPC_LAMMPS_LIB - library directory
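Once a LAMMPS module is loaded, these variables can be used to locate the installation, e.g. (a minimal sketch):

echo $HPC_LAMMPS_DIR
ls $HPC_LAMMPS_BIN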
Additional Information
The GNU version of LAMMPS supports the external packages PLUMED, EXTRA-MOLECULE, EXTRA-COMPUTE, MISC, ML-QUIP, INTEL, and KOKKOS. The Intel version of LAMMPS includes only the default packages. HPG staff aim to build the software with the most widely used external packages. Users who need packages that are not in the HPG installation can perform a custom installation in their local environment in consultation with HPG staff.
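To confirm which packages were compiled into a particular build, the LAMMPS executable can print its help text, which includes the list of installed packages. A minimal sketch (the executable is named lmp_mpi for the GNU/Intel builds and lmp in the NGC container):

# print command-line options and the list of packages compiled into this binary
lmp_mpi -h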
To execute the LAMMPS binaries, set up the appropriate environment variables and run them with srun, using the pmix level that corresponds to the openmpi version used to build the binary. For example, the command will look similar to:
srun --mpi=${HPC_PMIX} $LAMMPS -sf gpu -pk gpu 2 -var x 2 -var y 7 -var z 7 < in.$job
For standard or user-customized installations, the fastest way to build is with CMake.
Expand this section to view instructions for standard installation.
Standard installation:
cd <install-dir>
tar xzf lammps-<version>.tar.gz
cd lammps-<version>

ml purge
ml cmake/<version> intel/<version> openmpi/<version>
ml list

mkdir build
cd build
cmake -D CMAKE_INSTALL_PREFIX=<target_lammps_dir> ../cmake
make -j8
make install
Presets can be used for customization, e.g. to enable a very rich set of packages:
cmake -D CMAKE_INSTALL_PREFIX=<target_lammps_dir> -C ../cmake/presets/all_on.cmake -C ../cmake/presets/nolib.cmake ../cmake
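Individual packages can also be enabled one at a time with -D PKG_<NAME>=on options instead of a preset. A minimal sketch, run from the build directory (the package names shown are examples; availability depends on the LAMMPS version being built):

cmake -D CMAKE_INSTALL_PREFIX=<target_lammps_dir> -D PKG_MOLECULE=on -D PKG_KSPACE=on ../cmake
make -j8
make install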
NGC containers are downloaded and installed from the NVIDIA GPU Cloud. Usage (see https://help.rc.ufl.edu/doc/Modules for instructions on how to set up a personal module environment):
ml purge
ml ngc-lammps
srun --mpi=pmix_v3 lmp <your_parameters>    (see below for a sample slurm script)
Note: HPG staff can be consulted for custom installation of LAMMPS. Submit a support ticket addressed to Ajith Perera.
Job Script Examples
Expand this section to view a sample script.
#!/bin/bash
#SBATCH --job-name=<JOBNAME>
#SBATCH --mail-user=<EMAIL>
#SBATCH --mail-type=FAIL,END
#SBATCH --output <my_job-%j.out>
#SBATCH --error <my_job-%j.err>
#SBATCH --nodes=4
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=1
#SBATCH --ntasks-per-socket=4
#SBATCH --ntasks-per-node=4
#SBATCH --mem-per-cpu=2gb
#SBATCH --time=01:00:00
#SBATCH --account=<GROUP>
#SBATCH --array=<BEGIN-END>

module load gcc/12.2.0 lammps/02Aug23

LAMMPS=lmp_mpi
INPUT=<input_file>
srun --mpi=pmix_v3 $LAMMPS < $INPUT > log.out 2>&1

*Disclaimer: The above slurm configuration is for demonstration purposes only. Users must tailor it to their specific needs and the available resources.
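For GPU runs, e.g. with the CUDA-enabled build or the NGC container, the resource requests and launch line differ. The following is a minimal sketch only: the partition name and GPU request are assumptions that must be adjusted to the actual HiperGator GPU resources, and the -sf gpu -pk gpu 2 flags follow the srun example earlier on this page and assume a GPU-enabled build.

#!/bin/bash
#SBATCH --job-name=<JOBNAME>
#SBATCH --nodes=1
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=1
#SBATCH --partition=gpu          # assumed GPU partition name; adjust as needed
#SBATCH --gres=gpu:2             # assumed request for 2 GPUs; adjust as needed
#SBATCH --mem-per-cpu=2gb
#SBATCH --time=01:00:00
#SBATCH --account=<GROUP>

module purge
module load ngc-lammps           # NGC container module, as described above

INPUT=<input_file>
# -sf gpu applies the gpu suffix styles; -pk gpu 2 runs the GPU package on 2 GPUs
srun --mpi=pmix_v3 lmp -sf gpu -pk gpu 2 < $INPUT > log.out 2>&1

*Disclaimer: As with the CPU example above, this configuration is for demonstration purposes only and must be tailored to the specific job and available resources.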