LAMMPS

Description

LAMMPS website: https://www.lammps.org

LAMMPS, the Large-scale Atomic/Molecular Massively Parallel Simulator, is a molecular dynamics simulator that models an ensemble of particles in a liquid, solid, or gaseous state. It is open-source software, written in C++ and developed at Sandia National Laboratories. It can be used to model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using various force fields and boundary conditions.

Environment Modules

Run module spider LAMMPS to find out what environment modules are available for this application. LAMMPS on HiPerGator is installed in several flavors:

  • GNU compiler with OpenMPI
  • GNU compiler with OpenMPI and CUDA support
  • Intel compiler with OpenMPI
  • NVIDIA NGC container (https://catalog.ngc.nvidia.com/orgs/hpc/containers/lammps)
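For example, a specific flavor can be discovered and loaded as follows (the version strings match the sample job script below; run module spider lammps to see what is currently installed):

module spider lammps                     # list available LAMMPS modules and versions
module load gcc/12.2.0 lammps/02Aug23    # load the GNU + OpenMPI flavor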

System Variables

  • HPC_LAMMPS_DIR - installation directory
  • HPC_LAMMPS_BIN - executable directory
  • HPC_LAMMPS_LIB - library directory
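
These variables are defined by the lammps environment module; as a quick sketch, they can be used to locate the installed binaries and libraries:

module load lammps          # or a specific flavor/version
echo "$HPC_LAMMPS_DIR"      # installation directory
ls "$HPC_LAMMPS_BIN"        # list the available LAMMPS executables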

Additional Information

The GNU version of LAMMPS supports external packages PLUMED, EXTRA-MOLECULE, EXTRA-COMPUTE, MISC, ML-QUIP, INTEL and KOKKOS. The Intel version of LAMMPS only includes default packages. HPG staff's goal is to build software with widely used external packages. The users who need packages that are not in the HPG installation can make their custom installation in their local environment in consultation with the HPG staff.

To execute LAMMPS binaries, set up the appropriate environment variables and run them with srun, using the PMIx level corresponding to the OpenMPI version used to build the binary. For example, the command will look similar to:

srun --mpi=${HPC_PMIX} $LAMMPS -sf gpu -pk gpu 2 -var x 2 -var y 7 -var z 7 < in.$job
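
A minimal sketch of the surrounding setup is shown below; the module names, the binary name, and the input file are assumptions, so check module spider lammps and the contents of $HPC_LAMMPS_BIN for the exact names on the system:

  # load a CUDA-enabled LAMMPS build (module names/versions are illustrative)
  ml purge
  ml gcc openmpi cuda lammps

  LAMMPS=lmp        # placeholder binary name; check $HPC_LAMMPS_BIN
  job=melt          # placeholder: expects an input file named in.melt

  srun --mpi=${HPC_PMIX} $LAMMPS -sf gpu -pk gpu 2 -var x 2 -var y 7 -var z 7 < in.$job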

For standard or user-customized installations, the fastest way is to use CMake.


Standard installation:

cd <install-dir>
tar xzf lammps-<version>.tar.gz
cd lammps-<version>
ml purge
ml cmake/<version> intel/<version> openmpi/<version>
ml list
mkdir build
cd build
cmake -D CMAKE_INSTALL_PREFIX=<target_lammps_dir> ../cmake
make -j8
make install
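
After make install, the custom build can be used by putting its bin directory on the PATH; a sketch, assuming the CMake default binary name lmp (check <target_lammps_dir>/bin for the actual name):

export PATH=<target_lammps_dir>/bin:$PATH
lmp -h        # quick sanity check: prints the LAMMPS help/usage text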

Using presets for customization, e.g. to enable a very rich set of packages:

cmake -D CMAKE_INSTALL_PREFIX=<target_lammps_dir> -C ../cmake/presets/all_on.cmake -C ../cmake/presets/nolib.cmake ../cmake
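
Individual packages can also be switched on explicitly instead of using a preset; for example (the package selection below is purely illustrative):

cmake -D CMAKE_INSTALL_PREFIX=<target_lammps_dir> -D PKG_MOLECULE=on -D PKG_KSPACE=on -D PKG_RIGID=on ../cmake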

NGC containers are downloaded and installed from the NVIDIA GPU Cloud. Usage (see https://help.rc.ufl.edu/doc/Modules for instructions on how to set up a personal module environment):

ml purge
ml ngc-lammps
srun --mpi=pmix_v3 lmp <your_parameters>    # see below for a sample SLURM script
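
For the container flavor, a minimal batch script might look like the sketch below; the GPU partition and resource requests and the input file name are assumptions to adapt to your allocation:

#!/bin/bash
#SBATCH --job-name=lammps_ngc
#SBATCH --partition=gpu           # assumption: name of the GPU partition
#SBATCH --gpus=1                  # assumption: one GPU is sufficient for the run
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=2gb
#SBATCH --time=01:00:00
#SBATCH --account=<GROUP>

ml purge
ml ngc-lammps

# in.lj is a placeholder input file; replace it with your own
srun --mpi=pmix_v3 lmp < in.lj > log.out 2>&1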

Note: HPG staff can be consulted for custom installation of LAMMPS. Submit a support ticket addressed to Ajith Perera.

Job Script Examples

Sample SLURM script (GNU + OpenMPI flavor):

#!/bin/bash
#SBATCH --job-name=<JOBNAME>
#SBATCH --mail-user=<EMAIL>
#SBATCH --mail-type=FAIL,END
#SBATCH --output <my_job-%j.out>
#SBATCH --error <my_job-%j.err>
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=1
#SBATCH --ntasks-per-socket=4
#SBATCH --ntasks-per-node=4
#SBATCH --mem-per-cpu=2gb
#SBATCH --time=01:00:00
#SBATCH --account=<GROUP>
#SBATCH --array=<BEGIN-END>
 
module load gcc/12.2.0 lammps/02Aug23
 
LAMMPS=lmp_mpi
INPUT=<input_file>
srun --mpi=pmix_v3 $LAMMPS < $INPUT > log.out 2>&1

Disclaimer: The above SLURM configuration is for demonstration purposes only. Users must tailor it to their specific needs and the available resources.