LAMMPS



Description

LAMMPS website: http://lammps.sandia.gov

LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a molecular dynamics simulator that models an ensemble of particles in a liquid, solid, or gaseous state. It is open-source software, written in C++ and developed at Sandia National Laboratories. It can be used to model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using various force fields and boundary conditions.

Environment Modules

Run module spider LAMMPS to find out what environment modules are available for this application (see the example after the list below). LAMMPS at HiperGator is installed in the following flavors:

  • GNU compiler with OpenMPI
  • GNU compiler with OpenMPI and CUDA support
  • Intel compiler with OpenMPI
  • NVIDIA NGC container (https://catalog.ngc.nvidia.com/orgs/hpc/containers/lammps)
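For example, to see the available versions and load the GNU/OpenMPI flavor (module names below are taken from the job script example later on this page; check the module spider output for the versions that are currently installed):

module spider lammps                     # list available LAMMPS modules and versions
module load gcc/12.2.0 lammps/02Aug23    # load a GNU/OpenMPI build (example versions)
lmp_mpi -h | head                        # quick sanity check of the loaded binary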

System Variables

  • HPC_LAMMPS_DIR - installation directory
  • HPC_LAMMPS_BIN - executable directory
  • HPC_LAMMPS_LIB - library directory
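After loading one of the LAMMPS modules, these variables can be inspected directly from the shell, for example:

echo $HPC_LAMMPS_DIR    # installation directory of the loaded LAMMPS module
echo $HPC_LAMMPS_BIN    # directory containing the LAMMPS executables
ls $HPC_LAMMPS_BIN      # list the available binaries (e.g. lmp_mpi)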

Additional Information

The GNU version of LAMMPS supports the external packages PLUMED, EXTRA-MOLECULE, EXTRA-COMPUTE, MISC, ML-QUIP, INTEL, and KOKKOS. The Intel version of LAMMPS includes only the default packages. The goal of HPG staff is to build the software with the most widely used external packages; users who need packages that are not part of the HPG installation can create a custom installation in their local environment in consultation with HPG staff.
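One way to check which packages were actually compiled into a given binary is to query its help output, which includes a list of the installed packages (a quick sketch; the exact wording and layout of the help text depend on the LAMMPS version):

module load gcc/12.2.0 lammps/02Aug23
lmp_mpi -h | grep -A30 'Installed packages'   # show the section listing compiled-in packages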

To execute the LAMMPS binaries, set up the appropriate environment variables and launch them with srun, using the PMIx level that corresponds to the OpenMPI version used to build the binary. For example, the command will look similar to:

srun --mpi=${HPC_PMIX} $LAMMPS -sf gpu -pk gpu 2 -var x 2 -var y 7 -var z 7 < in.$job
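As a fuller sketch of how that command fits together (the CUDA-enabled module name is a placeholder to be looked up with module spider lammps, and the GPU count given to -pk gpu should match the GPUs requested from SLURM):

module load <cuda-enabled-lammps-module>   # placeholder; pick the CUDA flavor from module spider lammps
LAMMPS=lmp_mpi                             # executable name; may differ for the GPU build, check $HPC_LAMMPS_BIN
job=<input_name>                           # the input file is expected to be named in.<input_name>
# -sf gpu switches supported styles to their GPU variants; -pk gpu 2 uses two GPUs per node
srun --mpi=${HPC_PMIX} $LAMMPS -sf gpu -pk gpu 2 -var x 2 -var y 7 -var z 7 < in.$job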

For standard or user-customized installations, the fastest approach is to build with CMake.


Standard installation:

cd <install-dir>
tar xzf lammps-<version>.tar.gz                        # unpack the LAMMPS source tarball
cd lammps-<version>
ml purge                                               # start from a clean module environment
ml cmake/<version> intel/<version> openmpi/<version>   # load the build toolchain
ml list                                                # verify the loaded modules
mkdir build
cd build
cmake -D CMAKE_INSTALL_PREFIX=<target_lammps_dir> ../cmake
make -j8                                               # build with 8 parallel jobs
make install

Presets can be used for customization, e.g. to enable a very rich set of packages:

cmake -D CMAKE_INSTALL_PREFIX=<target_lammps_dir> -C ../cmake/presets/all_on.cmake -C ../cmake/presets/nolib.cmake ../cmake
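Individual packages can also be turned on explicitly instead of (or on top of) a preset; a sketch with example package names (verify the exact PKG_<NAME> options against the LAMMPS build documentation):

cmake -D CMAKE_INSTALL_PREFIX=<target_lammps_dir> \
      -D PKG_MOLECULE=yes -D PKG_KSPACE=yes -D PKG_GPU=yes \
      ../cmake
make -j8
make install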

NGC containers are downloaded and installed from the NVIDIA GPU Cloud. Usage (see https://help.rc.ufl.edu/doc/Modules for instructions on how to set up a personal module environment):

ml purge
ml ngc-lammps
srun --mpi=pmix_v3 lmp <your_parameters>   # see below for a sample SLURM script
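Since the container build targets GPUs, these commands are typically run inside a batch job with a GPU allocation. A minimal sketch of such a batch script (the GPU-related SBATCH lines, partition name, and resource amounts are assumptions; adjust them to your group's allocation):

#!/bin/bash
#SBATCH --job-name=lammps_ngc
#SBATCH --partition=gpu          # assumption: name of the GPU partition
#SBATCH --gpus=1                 # assumption: one GPU
#SBATCH --ntasks=1
#SBATCH --mem=8gb
#SBATCH --time=01:00:00

ml purge
ml ngc-lammps
srun --mpi=pmix_v3 lmp <your_parameters>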

Note: HPG staff can be consulted for custom installation of LAMMPS. Submit a support ticket addressed to Ajith Perera.

Job Script Examples

See the LAMMPS_Job_Scripts page for LAMMPS job script examples. A sample SLURM script is shown below.

#!/bin/bash
#SBATCH --job-name=<JOBNAME>
#SBATCH --mail-user=<EMAIL>
#SBATCH --mail-type=FAIL,END
#SBATCH --output <my_job-%j.out>
#SBATCH --error <my_job-%j.err>
#SBATCH --nodes=4
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=1
#SBATCH --ntasks-per-socket=4
#SBATCH --ntasks-per-node=4
#SBATCH --mem-per-cpu=2gb
#SBATCH --time=01:00:00
#SBATCH --account=<GROUP>
#SBATCH --array=<BEGIN-END>
 
module load gcc/12.2.0 lammps/02Aug23
 
LAMMPS=lmp_mpi
INPUT=<input_file>
srun --mpi=pmix_v3 $LAMMPS < $INPUT > log.out 2>&1

Disclaimer: the above SLURM configuration is for demonstration purposes only. Users must tailor it to their specific needs and the available resources.
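To run the example, save the script to a file and submit it with sbatch; the file name below is arbitrary:

sbatch lammps_job.sh     # submit the batch script
squeue -u $USER          # check the state of your queued and running jobs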