Available Node Features
HiPerGator users can control the selection of compute hardware for a SLURM job, such as specific processor families or processor models, by using the --constraint directive to specify node features.
Example
Use one of the following directives to choose between the westmere and haswell microarchitectures:
#SBATCH --constraint=westmere
#SBATCH --constraint=haswell
Basic boolean logic can be used to request combinations of features. For example, to request nodes that have Intel processors AND an InfiniBand interconnect, use:
#SBATCH --constraint='intel&infiniband'
To request processors from either the Intel Haswell OR Skylake CPU family, use:
#SBATCH --constraint='haswell|skylake'
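Putting it together, a constraint is just one more directive in an ordinary job script. The sketch below is only illustrative; the job name, resource requests, and the hostname command are placeholders rather than values taken from this page.
#!/bin/bash
# Minimal illustrative job script; all resource values below are placeholders.
#SBATCH --job-name=constraint_demo
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8gb
#SBATCH --time=01:00:00
# Land on either the Haswell or Skylake CPU family, as described above.
#SBATCH --constraint='haswell|skylake'

hostname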
All Node Features
You can run the nodeInfo command from the ufrc environment module to list all available SLURM features (a minimal example follows). In addition, the table below shows automatically updated nodeInfo output as well as the corresponding CPU models.
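A quick sketch of listing the features from a login shell, assuming the ufrc module loads cleanly in your environment:
module load ufrc
nodeInfo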
Partition | Cores per node | Sockets | Cores per Socket | Threads per Core | Memory (GB) | Features | CPU Model |
---|---|---|---|---|---|---|---|
hpg-dev | 64 | 8 | 8 | 1 | 500 | hpg3;amd;milan;infiniband;el8 | AMD EPYC 75F3 32-Core Processor |
gui | 32 | 2 | 16 | 1 | 124 | gui;i21;intel;haswell;el8 | Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz |
hwgui | 32 | 2 | 16 | 1 | 186 | hpg2;intel;skylake;infiniband;gpu;rtx6000;el8 | Intel(R) Xeon(R) Gold 6242 CPU @ 2.80GHz |
bigmem | 128 | 8 | 16 | 1 | 4023 | bigmem;amd;rome;infiniband;el8 | AMD EPYC 7702 64-Core Processor |
bigmem | 192 | 4 | 24 | 2 | 1509 | bigmem;intel;skylake;infiniband;el8 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz |
hpg-milan | 64 | 8 | 8 | 1 | 500 | hpg3;amd;milan;infiniband;el8 | AMD EPYC 75F3 32-Core Processor |
hpg-default | 128 | 8 | 16 | 1 | 1003 | hpg3;amd;rome;infiniband;el8 | AMD EPYC 7702 64-Core Processor |
hpg2-compute | 32 | 2 | 16 | 1 | 124 | hpg2;intel;haswell;infiniband;el8 | Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz |
hpg2-compute | 28 | 2 | 14 | 1 | 125 | hpg2;intel;haswell;infiniband;el8 | Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz |
gpu | 32 | 2 | 16 | 1 | 186 | hpg2;intel;skylake;infiniband;gpu;2080ti;el8 | Intel(R) Xeon(R) Gold 6142 CPU @ 2.60GHz |
gpu | 128 | 8 | 16 | 1 | 2010 | ai;su3;amd;rome;infiniband;gpu;a100;el8 | AMD EPYC 7742 64-Core Processor |
hpg-ai | 128 | 8 | 16 | 1 | 2010 | ai;su3;amd;rome;infiniband;gpu;a100;el8 | AMD EPYC 7742 64-Core Processor |
Note: the bigmem partition is maintained for calculations that require large amounts of memory. To submit jobs to this partition, add the following directive to your job submission script:
#SBATCH --partition=bigmem
Since our regular nodes have 1 TB of available memory, we do not recommend using bigmem nodes for jobs with memory requests below that amount.
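As an illustrative sketch only, a bigmem job might pair the partition directive with a memory request above 1 TB; the 2000gb figure and the rome constraint are example values drawn from the table above, not requirements.
#SBATCH --partition=bigmem
#SBATCH --mem=2000gb
#SBATCH --constraint=rome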
Note: See GPU_Access for more details on GPUs, such as available GPU memory. The following CPU models are listed from oldest to newest: interlagos, magny, sandy-bridge, dhabi, haswell, broadwell, skylake. The 'dhabi' and 'haswell' models are from the HPG1 and HPG2 deployments.
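For example, a job that should avoid the older haswell nodes could be limited to newer CPU families taken from the features column above; this particular combination is purely illustrative.
#SBATCH --constraint='skylake|rome|milan'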