Available Node Features

[[Category:SLURM]]
[[Category:Scheduler]]
==Summary==
HiPerGator users may finely control the compute nodes requested by a given SLURM job (for example, specific processor families or processor models) by using the <code>--constraint</code> directive to specify the node ''features'' they desire. Note that the partition must also be selected if it is not the default partition.

;Example:
 #SBATCH --partition=hpg1-compute

Use one of the following directives to choose between the Westmere and Haswell microarchitectures:

 #SBATCH --constraint=westmere
for the hpg1-compute partition in the example above, or

 #SBATCH --constraint=haswell
for the default hpg2-compute partition.
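For context, here is a minimal sketch of how these directives fit into a complete submission script; the job name, CPU, memory, and walltime values are illustrative placeholders rather than recommendations:

 #!/bin/bash
 #SBATCH --job-name=feature_test     # illustrative job name
 #SBATCH --nodes=1
 #SBATCH --ntasks=1
 #SBATCH --cpus-per-task=4           # illustrative CPU request
 #SBATCH --mem=8gb                   # illustrative memory request
 #SBATCH --time=01:00:00             # illustrative walltime
 #SBATCH --constraint=haswell        # node feature from this page
 
 # Print which hardware the job actually landed on
 hostname
 lscpu | grep "Model name"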
 
  
==Using node features as job constraints==
===Commonly constrained features===
Use node features as SLURM job constraints. A non-exhaustive list of commonly useful feature constraints follows:

{| class="wikitable"
| align="center" style="background:#f0f0f0;"|'''Feature'''
| align="center" style="background:#f0f0f0;"|'''Constraints'''
| align="center" style="background:#f0f0f0;"|'''Description'''
|-
| Compute partition||<code>hpg1</code>, <code>hpg2</code>||''Requests nodes within a specified compute partition''
|-
| Processor family||<code>amd</code>, <code>intel</code>||''Requests nodes having processors of a specified vendor''
|-
| Processor model||<code>o6220</code>, <code>o6378</code>, <code>o4184</code>||''Requests nodes having a specified processor model''
|-
| Network fabric||<code>infiniband</code>||''Requests nodes having an InfiniBand interconnect''
|}
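For instance (a minimal illustration using one of the feature tags above), a job can be limited to HiPerGator 2 nodes with:

 #SBATCH --constraint=hpg2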
 
  
;Examples:
To request an Intel processor, use the following:

 #SBATCH --constraint=intel

Basic boolean logic can be used to request combinations of features. For example, to request nodes that have Intel processors '''AND''' an InfiniBand interconnect, use:

 #SBATCH --constraint='intel&infiniband'

To request nodes with processors from either the Intel Sandy Bridge '''OR''' Haswell CPU families, use:

 #SBATCH --constraint='sandy-bridge|haswell'

Likewise, to request either the Haswell '''OR''' Skylake CPU families, use:

 #SBATCH --constraint='haswell|skylake'
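The same constraint expressions can also be given on the <code>sbatch</code> command line instead of inside the script; <code>my_job.sh</code> below is a placeholder name for your own submission script:

 $ sbatch --constraint='intel&infiniband' my_job.sh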
==All Node Features==
You can run the <code>nodeInfo</code> command from the <code>ufrc</code> environment module to list all available SLURM features. In addition, the table below shows automatically updated nodeInfo output as well as the corresponding CPU models.

{{#get_web_data:url=https://data.rc.ufl.edu/pub/ufrc/data/node_data.csv
|format=CSV with header
|data=partition=Partition,ncores=NodeCores,sockets=Sockets,ht=HT,socketcores=SocketCores,memory=Memory,features=Features,cpumodel=CPU
|cache seconds=7200
}}
{| class="wikitable sortable" border="1" sort=Partition cellspacing="0" cellpadding="2" align="center" style="border-collapse: collapse; margin: 1em 1em 1em 0; border-top: none; border-right:none; "
! Partition
! Cores per node
! Sockets
! Socket Cores
! Threads/Core
! Memory (GB)
! Features
! CPU Model
{{#for_external_table:<nowiki/>
{{!}}-
{{!}} {{{partition}}}
{{!}} {{{ncores}}}
{{!}} {{{sockets}}}
{{!}} {{{socketcores}}}
{{!}} {{{ht}}}
{{!}} {{{memory}}}
{{!}} {{{features}}}
{{!}} {{{cpumodel}}}
}}
|}
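The feature tags in this table combine just like the earlier examples. For instance, to land on AMD EPYC Rome nodes with InfiniBand, one could use:

 #SBATCH --constraint='amd&rome&infiniband'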
  
'''Note''': the bigmem partition is maintained for calculations requiring large amounts of memory. To submit jobs to this partition, you will need to add the following directive to your job submission script:

 #SBATCH --partition=bigmem

Since our regular nodes have 1 TB of available memory, we do not recommend using bigmem nodes for jobs with memory requests lower than that.
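As a minimal sketch (the memory figure below is an illustrative placeholder, not a recommendation), a large-memory job pairs the partition directive with an explicit memory request:

 #SBATCH --partition=bigmem
 #SBATCH --mem=2000gb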
'''Note''': See [[GPU_Access]] for more details on GPUs, such as available GPU memory. The CPU model feature tags, in order from oldest to newest, are: interlagos, magny, sandy-bridge, dhabi, haswell, broadwell, and skylake. The 'dhabi' and 'haswell' models are from the HPG1 and HPG2 deployments, respectively.

==Node features by partition==
HiPerGator node features are documented comprehensively below, sectioned by partition. Use the table column headings within each section to sort by the criteria of your choice.

This documentation will be updated periodically. To request current node feature information directly from the cluster, load the ufrc module (<code>module load ufrc</code>) and run the following command:

 $ nodeInfo

===hpg1-compute===
<div style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Processor Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| amd||c6145||dhabi||o6378||4||64||250
|-
| amd||a2840||opteron||o6220||2||16||60
|-
| intel||r2740||westmere||x5675||2||12||94
|-
| intel||c6100||westmere||x5675||2||12||92
|}
</div>
* Nodes in the <code>hpg1-compute</code> partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
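For example, to restrict a job to the Opteron 6378 ('dhabi') nodes listed above, combine the partition with the processor-model feature from the constraints table:

 #SBATCH --partition=hpg1-compute
 #SBATCH --constraint=o6378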
 
===hpg2-compute===
 
<div style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Processor Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| intel||sos6320||haswell||e5-s2643||2||32||125
|}
</div>
* Nodes in the <code>hpg2-compute</code> partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
 
===hpg2-dev===
 
<div style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Processor Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| amd||sm-h8qg6||dhabi||o6378||2||28||125
|-
| intel||sos6320||haswell||e5-2698||2||28||125
|}
</div>
* Nodes in the <code>hpg2-dev</code> partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
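For short test runs on these development nodes (the ten-minute walltime below is only an illustrative value), a job can target this partition explicitly:

 #SBATCH --partition=hpg2-dev
 #SBATCH --time=00:10:00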
 
===gpu===
 
<div style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Processor Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | GPUs
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| intel||r730||haswell||e5-2683||4||2||28||125
|}
</div>
* Nodes in the <code>gpu</code> partition are equipped with Nvidia Tesla K80 GPU Computing Modules.
* Nodes in the <code>gpu</code> partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
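A minimal sketch of a GPU job request, assuming the generic SLURM <code>--gres</code> syntax; see [[GPU_Access]] for the exact GPU resource names and current options on HiPerGator:

 #SBATCH --partition=gpu
 #SBATCH --gres=gpu:1
 #SBATCH --constraint=haswell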
 
===bigmem===
 
<div style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Processor Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| amd||r815||magny||o6174||4||48||512
|-
| intel||r820||sandy-bridge||e5-4607||4||24||768
|-
| intel||r940||skylake||8168||4||192||1546
|}
</div>
 
 
 
===gui===
 
<div style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Processor Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| intel||sos6320||haswell||e5-2698||2||32||125
|}
</div>
 
===phase4===
 
<div style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Processor Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| amd||c6105||lisbon||o4184||2||12||32
|}
</div>
* Nodes in the <code>phase4</code> partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
 

===All partitions===
The table below shows <code>nodeInfo</code> output for all partitions, including the corresponding CPU models.

{| class="wikitable sortable"
|-
! scope="col" | Partition
! scope="col" | Cores per node
! scope="col" | Sockets
! scope="col" | Socket Cores
! scope="col" | Threads/Core
! scope="col" | Memory (GB)
! scope="col" | Features
! scope="col" | CPU Model
|-
| hpg-dev||64||8||8||1||500||hpg3;amd;milan;infiniband||AMD EPYC 75F3 32-Core Processor
|-
| gui||32||2||16||1||125||gui;i21;intel;haswell||Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz
|-
| hwgui||32||2||16||1||186||hpg2;intel;skylake;infiniband;gpu;rtx6000;cuda11||Intel(R) Xeon(R) Gold 6242 CPU @ 2.80GHz
|-
| bigmem||192||4||24||2||1509||bigmem;intel;skylake;infiniband||Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz
|-
| bigmem||128||8||16||1||4023||bigmem;amd;rome;infiniband||AMD EPYC 7702 64-Core Processor
|-
| hpg-milan||64||8||8||1||500||hpg3;amd;milan;infiniband||AMD EPYC 75F3 32-Core Processor
|-
| hpg-default||128||8||16||1||1003||hpg3;amd;rome;infiniband||AMD EPYC 7702 64-Core Processor
|-
| hpg2-compute||32||2||16||1||125||hpg2;intel;haswell;infiniband;cms||Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz
|-
| hpg2-compute||28||2||14||1||125||hpg2;intel;haswell;infiniband||Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
|-
| gpu||32||2||16||1||186||hpg2;intel;skylake;infiniband;gpu;2080ti;cuda11||Intel(R) Xeon(R) Gold 6142 CPU @ 2.60GHz
|-
| gpu||128||8||16||1||2010||ai;su4;amd;rome;infiniband;gpu;a100||AMD EPYC 7742 64-Core Processor
|-
| hpg-ai||128||8||16||1||2010||ai;su4;amd;rome;infiniband;gpu;a100||AMD EPYC 7742 64-Core Processor
|}
