Modulus
Description
NVIDIA Modulus is a neural network framework that blends the power of physics and partial differential equations (PDEs) with AI to build more robust models for better analysis of applications like digital twins.
Environment Modules
Run module spider Modulus to find out what environment modules are available for this application.
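For example, a typical session on a login node might look like the sketch below; <version> is a placeholder, so substitute one of the versions reported by module spider:
# List the Modulus versions installed on the cluster
module spider Modulus
# Load a specific version (replace <version> with one reported above)
module load modulus/<version>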
System Variables
- HPC_MODULUS_DIR - installation directory
- HPC_MODULUS_BIN - executable directory
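These variables are typically defined once the module is loaded; a quick way to inspect them is sketched below (the exact paths will vary by version):
# Show where Modulus is installed and what executables it provides
echo "$HPC_MODULUS_DIR"
ls "$HPC_MODULUS_BIN"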
Learning Materials
NVIDIA Modulus usage on HiPerGator
1) Server:
To start the server as a SLURM batch job, load the Modulus module and submit your job script:
ml purge
ml modulus/<version>
sbatch <slurm_script.sh>
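A minimal sketch of such a job script follows. The resource values, job name, output file, and the train.py script name are placeholders and assumptions, not a prescribed configuration; adjust them to your allocation and workload:
#!/bin/bash
#SBATCH --job-name=modulus_train     # job name shown by squeue (placeholder)
#SBATCH --partition=hpg-ai           # GPU partition used in the interactive example below
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH --gpus=a100:2                # request two A100 GPUs
#SBATCH --mem=32gb
#SBATCH --time=200:00                # wall time; same value as the interactive example
#SBATCH --output=modulus_%j.log      # %j expands to the job ID

ml purge
ml modulus/<version>

# Run the Modulus training script (train.py is a placeholder for your own script)
python train.py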
2) Interactive run:
An example job resource request for running jobs interactively:
srun -p hpg-ai -N 1 --cpus-per-task=16 --gpus=a100:2 --mem=32gb --time=200:00 --pty bash -i
If your training command takes a --num-gpus X argument, the number X must match the number of GPUs requested for the job.
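Once the interactive shell starts on the GPU node, the workflow mirrors the batch case; a brief sketch follows, where train.py again stands in for your own Modulus script:
# Inside the interactive session on the GPU node
ml purge
ml modulus/<version>
# Confirm the requested GPUs are visible
nvidia-smi
# Launch training (train.py is a placeholder for your own script)
python train.py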