Available Node Features
Latest revision as of 19:45, 22 April 2024
HiPerGator users can finely control the selection of compute hardware for a SLURM job, such as specific processor families or models, by using the --constraint directive to specify HiPerGator server features.
Example:
Use one of the following directives to choose between the Rome and Milan microarchitectures:
#SBATCH --constraint=rome
#SBATCH --constraint=milan
Basic Boolean logic can be used to request combinations of features. For example, to request nodes that have Intel processors AND an InfiniBand interconnect, use
#SBATCH --constraint='intel&infiniband'
To request processors from either the AMD Rome OR Milan CPU families, use
#SBATCH --constraint='rome|milan'
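The matching logic behind these expressions can be sketched in a few lines. This is an illustrative model only, not SLURM's actual implementation (SLURM's parser also supports parentheses, feature counts, and other operators); the feature sets are taken from the table below.

```python
def matches(constraint: str, node_features: set) -> bool:
    """Return True if a node's feature set satisfies a simple constraint.

    Covers only the single-operator forms shown above:
    'a&b' requires every feature; 'a|b' requires at least one.
    """
    if "&" in constraint:
        return all(f in node_features for f in constraint.split("&"))
    if "|" in constraint:
        return any(f in node_features for f in constraint.split("|"))
    return constraint in node_features


# Feature sets from the hpg2-compute and hpg-default rows of the table below.
intel_node = {"hpg2", "intel", "haswell", "infiniband", "el8"}
amd_node = {"hpg3", "amd", "rome", "infiniband", "el8"}

print(matches("intel&infiniband", intel_node))  # True
print(matches("rome|milan", amd_node))          # True
print(matches("rome|milan", intel_node))        # False
```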
All Node Features
You can run the nodeInfo command from the ufrc environment module to list all available SLURM features. In addition, the table below shows automatically updated nodeInfo output as well as the corresponding CPU models.
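Interactively, that looks roughly like the following (a sketch of a HiPerGator login session; the commands require the cluster environment, and the output changes as nodes are added or retired):

```shell
# Load the UFRC environment module, then list node partitions and features.
module load ufrc
nodeInfo
```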
Partition | Cores per node | Sockets | Socket Cores | Threads/Core | Memory, GB | Features | CPU Model |
---|---|---|---|---|---|---|---|
hpg-dev | 64 | 8 | 8 | 1 | 500 | hpg3;amd;milan;infiniband;el8 | AMD EPYC 75F3 32-Core Processor |
gui | 32 | 2 | 16 | 1 | 124 | gui;i21;intel;haswell;el8 | Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz |
hwgui | 32 | 2 | 16 | 1 | 186 | hpg2;intel;skylake;infiniband;gpu;rtx6000;el8 | Intel(R) Xeon(R) Gold 6242 CPU @ 2.80GHz |
bigmem | 128 | 8 | 16 | 1 | 4023 | bigmem;amd;rome;infiniband;el8 | AMD EPYC 7702 64-Core Processor |
bigmem | 192 | 4 | 24 | 2 | 1509 | bigmem;intel;skylake;infiniband;el8 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz |
hpg-milan | 64 | 8 | 8 | 1 | 500 | hpg3;amd;milan;infiniband;el8 | AMD EPYC 75F3 32-Core Processor |
hpg-default | 128 | 8 | 16 | 1 | 1003 | hpg3;amd;rome;infiniband;el8 | AMD EPYC 7702 64-Core Processor |
hpg2-compute | 32 | 2 | 16 | 1 | 124 | hpg2;intel;haswell;infiniband;el8 | Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz |
hpg2-compute | 28 | 2 | 14 | 1 | 125 | hpg2;intel;haswell;infiniband;el8 | Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz |
gpu | 32 | 2 | 16 | 1 | 186 | hpg2;intel;skylake;infiniband;gpu;2080ti;el8 | Intel(R) Xeon(R) Gold 6142 CPU @ 2.60GHz |
gpu | 128 | 8 | 16 | 1 | 2010 | ai;su3;amd;rome;infiniband;gpu;a100;el8 | AMD EPYC 7742 64-Core Processor |
hpg-ai | 128 | 8 | 16 | 1 | 2010 | ai;su3;amd;rome;infiniband;gpu;a100;el8 | AMD EPYC 7742 64-Core Processor |
Note: the bigmem partitions are maintained for calculations requiring large amounts of memory. To submit jobs to this partition, you will need to add the following directive to your job submission script.
#SBATCH --partition=bigmem
Since our regular nodes have 1 TB of available memory, we do not recommend using bigmem nodes for jobs with memory requests lower than that.
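Putting the partition directive and the memory guidance together, a minimal bigmem submission script might look like the following. The job name, memory value, and application command are placeholders, not prescribed values; only --partition=bigmem comes from this page.

```shell
#!/bin/bash
#SBATCH --job-name=bigmem_example   # hypothetical job name
#SBATCH --partition=bigmem          # route the job to the bigmem nodes
#SBATCH --mem=1500gb                # placeholder; pick a value above 1 TB
#SBATCH --time=24:00:00             # placeholder walltime

srun your_application               # hypothetical application command
```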
Note: See GPU_Access for more details on GPUs, such as available GPU memory. The CPU models, in order from the oldest (HPG2) to the newest (HPG3), are: haswell, rome, milan.