ORCA Job Scripts

The input file must contain a parallel configuration section:

 %pal nprocs n 
 end
 where n is the number of processors requested. The number of processors must match the total number
 of tasks given in the Slurm configuration.
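For example, a minimal input file matching the two-task Slurm script below might look like the following sketch (the method, basis set, and water geometry are purely illustrative, and job.inp is a hypothetical filename):

 ! B3LYP def2-SVP
 %pal nprocs 2
 end
 * xyz 0 1
 O   0.0000   0.0000   0.0000
 H   0.0000   0.7572   0.5865
 H   0.0000  -0.7572   0.5865
 *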
ORCA on a single node with multiple cores
=========================================
#!/bin/bash
#SBATCH --job-name=parallel_job      # Job name
#SBATCH --mail-type=END,FAIL         # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=username@ufl.edu # Where to send mail
#SBATCH --nodes=1                    # Run all processes on a single node
#SBATCH --ntasks=2                   # Run on 2 processors
#SBATCH --ntasks-per-node=2          # Maximum number of tasks on each node
#SBATCH --mem-per-cpu=500mb          # Memory per processor
#SBATCH --time=00:05:00              # Time limit hrs:min:sec
#SBATCH --output=parallel_%j.log     # Standard output and error log
pwd; hostname; date
echo "Running orca test calculation on a with four CPU cores"
echo "Date              = $(date)"
echo "Hostname          = $(hostname -s)"
echo "Working Directory = $(pwd)"
echo ""
echo "Number of Nodes Allocated      = $SLURM_JOB_NUM_NODES"
echo "Number of Tasks Allocated      = $SLURM_NTASKS"
echo "Number of Cores/Task Allocated = $SLURM_CPUS_PER_TASK"
echo ""
module load gcc/12.2.0 openmpi/4.1.1 orca/5.0.4
which mpirun; echo $PATH; echo $LD_LIBRARY_PATH   # sanity check of the MPI environment
export ORCA_DIR=/apps/gcc/12.2.0/openmpi/4.1.1/orca/5.0.4
$ORCA_DIR/orca job.inp > job.out     # ORCA must be invoked with its full path for parallel runs
date
Disclaimer: The above Slurm configuration is hypothetical. Users must customize it based on the size of the calculation, the available resources, etc.
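Assuming the script above is saved as orca_job.sh (a hypothetical filename), it can be submitted to the scheduler with:

 sbatch orca_job.sh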