ORCA
Description
ORCA is an ab initio quantum chemistry program package for modern electronic structure methods, including density functional theory, many-body perturbation theory, coupled cluster, multireference, and semi-empirical methods. Its main field of application is larger molecules, transition metal complexes, and their spectroscopic properties. ORCA is developed in the group of Frank Neese, with contributions from many current and former coworkers and several collaborating groups, and its binaries are available free of charge to academic users on a variety of platforms. It is a flexible, efficient, and easy-to-use general-purpose tool for quantum chemistry with a particular emphasis on the spectroscopic properties of open-shell molecules. It offers a wide variety of standard quantum chemical methods, ranging from semiempirical methods to DFT to single- and multireference correlated ab initio methods, and it can also treat environmental and relativistic effects.
Environment Modules
Run module spider ORCA to find out what environment modules are available for this application.
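For example, a minimal sketch of checking for and loading an ORCA module from a login shell (the version string below is an assumption; use one of the versions that module spider actually lists):

module spider ORCA        # list the ORCA versions installed on the system
module load orca/5.0.4    # hypothetical version; substitute one reported by module spider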
System Variables
- HPC_ORCA_DIR - installation directory
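As a quick illustration (assuming an ORCA module has already been loaded as sketched above), the variable can be inspected from the shell:

echo "$HPC_ORCA_DIR"      # print the ORCA installation directory
ls "$HPC_ORCA_DIR"        # inspect its contents; the exact layout varies between versions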
Additional Information
Questions about running ORCA, or about electronic structure methods in general, should be addressed to Ajith Perera through the UFIT Support System.
Job Script Examples
See the ORCA_Job_Scripts page for ORCA job script examples. A sample parallel job script is shown below.
#!/bin/bash
#SBATCH --job-name=parallel_job      # Job name
#SBATCH --mail-type=END,FAIL         # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=ax@ufl.edu       # Where to send mail
#SBATCH --nodes=1                    # Run all processes on a single node
#SBATCH --ntasks=2                   # Run on 2 processors
#SBATCH --ntasks-per-node=2          # Maximum number of tasks on each node
#SBATCH --mem-per-cpu=500mb          # Memory per processor
#SBATCH --time=00:05:00              # Time limit hrs:min:sec
#SBATCH --output=parallel_%j.log     # Standard output and error log
pwd; hostname; date

echo "Running ORCA test calculation on $SLURM_NTASKS CPU cores"
echo "Date = $(date)"
echo "Hostname = $(hostname -s)"
echo "Working Directory = $(pwd)"
echo ""
echo "Number of Nodes Allocated = $SLURM_JOB_NUM_NODES"
echo "Number of Tasks Allocated = $SLURM_NTASKS"
echo "Number of Cores/Task Allocated = $SLURM_CPUS_PER_TASK"
echo ""

module load gcc/12.2.0 openmpi/4.1.1
which mpirun; echo $PATH; echo $LD_LIBRARY_PATH
# ORCA manages its own MPI processes according to the %pal / PAL setting in the input,
# so the binary is called directly with its full path rather than through srun or mpirun
/blue/ax/orca/504/orca ./Inputs/h2o-pal3.inp > h2o-pal12.out
date
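The input file referenced by the script, ./Inputs/h2o-pal3.inp, is not reproduced on this page. Purely as an illustrative sketch, the snippet below creates a small parallel water input of the kind such a test might use; the method, basis set, geometry, and nprocs value are assumptions, and nprocs should agree with the number of tasks requested from SLURM.

mkdir -p Inputs
cat > Inputs/h2o-pal3.inp << 'EOF'
# Illustrative ORCA input, not the actual file used in this example
! B3LYP def2-SVP TightSCF
%pal
  nprocs 2               # keep consistent with #SBATCH --ntasks in the job script
end
* xyz 0 1
O   0.000000   0.000000   0.000000
H   0.000000   0.757000   0.587000
H   0.000000  -0.757000   0.587000
*
EOF

The job script itself is submitted with sbatch in the usual way; the calculation output then appears in h2o-pal12.out, and the SLURM log in parallel_<jobid>.log (from the --output=parallel_%j.log directive).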