__NOTOC__
[[Category:Scheduler]][[Category:Infrastructure]]

HiPerGator users can finely control the selection of compute hardware for a SLURM job, such as a specific processor family or processor model, by using the <code>--constraint</code> directive to specify HiPerGator server ''features''.

;Example:
Use one of the following directives to choose between the Rome and Milan microarchitectures:

 #SBATCH --constraint=rome
 #SBATCH --constraint=milan

Basic boolean logic can be used to request combinations of features. For example, to request nodes that have Intel processors '''AND''' an InfiniBand interconnect, use:

 #SBATCH --constraint='intel&infiniband'

To request processors from either the AMD Rome '''OR''' Milan CPU family, use:

 #SBATCH --constraint='rome|milan'
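
In a complete submission script, the constraint is just one more <code>#SBATCH</code> directive. A minimal sketch of a single-core job pinned to Milan nodes is shown below; the job name, resource amounts, and the <code>python</code> module and script name are placeholders, not site requirements.

 #!/bin/bash
 #SBATCH --job-name=constraint_demo      # placeholder job name
 #SBATCH --ntasks=1                      # single task
 #SBATCH --cpus-per-task=1
 #SBATCH --mem=4gb                       # example memory request
 #SBATCH --time=01:00:00
 #SBATCH --constraint=milan              # run only on nodes carrying the 'milan' feature
 #SBATCH --output=constraint_demo_%j.log
 # Placeholder workload; replace with your own commands
 module load python
 python my_script.py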

==All Node Features==
You can run the <code>nodeInfo</code> command from the <code>ufrc</code> environment module to list all available SLURM features. In addition, the table below shows automatically updated <code>nodeInfo</code> output as well as the corresponding CPU models.
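
For example, from a shell on a login node (the output is not reproduced here, since it changes as hardware is added or retired):

 # Load the UFRC environment module that provides nodeInfo, then list the features
 module load ufrc
 nodeInfo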

{{#get_web_data:url=https://data.rc.ufl.edu/pub/ufrc/data/node_data.csv
|format=CSV with header
|data=partition=Partition,ncores=NodeCores,sockets=Sockets,ht=HT,socketcores=SocketCores,memory=Memory,features=Features,cpumodel=CPU
|cache seconds=7200
}}
{| class="wikitable sortable" border="1" sort=Partition cellspacing="0" cellpadding="2" align="center" style="border-collapse: collapse; margin: 1em 1em 1em 0; border-top: none; border-right:none;"
! Partition
! Cores per node
! Sockets
! Socket Cores
! Threads/Core
! Memory, GB
! Features
! CPU Model
{{#for_external_table:<nowiki/>
{{!}}-
{{!}} {{{partition}}}
{{!}} {{{ncores}}}
{{!}} {{{sockets}}}
{{!}} {{{socketcores}}}
{{!}} {{{ht}}}
{{!}} {{{memory}}}
{{!}} {{{features}}}
{{!}} {{{cpumodel}}}
}}
|}

'''Note''': the bigmem partition is maintained for calculations requiring large amounts of memory. To submit jobs to this partition, add the following directive to your job submission script:

 #SBATCH --partition=bigmem

Since our regular nodes have 1 TB of available memory, we do not recommend using bigmem nodes for jobs with memory requests lower than that.
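
As a minimal sketch, a bigmem submission script differs from a regular one only in the partition directive and the size of the memory request; the 2 TB figure, CPU count, and application line below are placeholders:

 #!/bin/bash
 #SBATCH --job-name=bigmem_demo          # placeholder job name
 #SBATCH --partition=bigmem              # route the job to the large-memory nodes
 #SBATCH --ntasks=1
 #SBATCH --cpus-per-task=16              # example CPU request
 #SBATCH --mem=2000gb                    # example request larger than a regular 1 TB node
 #SBATCH --time=24:00:00
 # Placeholder workload; replace with your own memory-intensive application
 ./my_large_memory_app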

'''Note''': See [[GPU_Access]] for more details on GPUs, such as available GPU memory. The following CPU models are listed from the oldest (HPG2) to the newest (HPG3): haswell, rome, milan.
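
A CPU-model constraint can be combined with a GPU request to favor the newer GPU nodes. The sketch below assumes a partition named <code>gpu</code> and uses the generic SLURM <code>--gpus</code> option; the actual partition names and recommended GPU-request syntax are documented in [[GPU_Access]].

 #!/bin/bash
 #SBATCH --job-name=gpu_demo             # placeholder job name
 #SBATCH --partition=gpu                 # assumed GPU partition name; see GPU_Access
 #SBATCH --gpus=1                        # request a single GPU (generic SLURM syntax)
 #SBATCH --constraint=milan              # prefer the newest CPU model listed above
 #SBATCH --ntasks=1
 #SBATCH --cpus-per-task=4
 #SBATCH --mem=32gb
 #SBATCH --time=04:00:00
 # Placeholder workload; replace with your GPU-enabled application
 ./my_gpu_app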

HiPerGator node features are also documented below by partition. When searching this page, use the table columns to sort each list by the feature criteria of your choice.

<div class="mw-collapsible mw-collapsed" style="padding: 5px; border: 1px solid gray;">
==hpg1-compute==
<div class="mw-collapsible-content" style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Chip Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Nodes
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| AMD||c6145||dhabi||o6378||160||4||64||250
|-
| AMD||a2840||opteron||o6220||64||2||16||60
|-
| Intel||r2740||westmere||x5675||8||2||12||94
|-
| Intel||c6100||westmere||x5675||16||2||12||92
|-
|}
* Nodes in the hpg1-compute partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
</div>
</div>

<div class="mw-collapsible mw-collapsed" style="padding: 5px; border: 1px solid gray;">
==hpg1-gpu==
<div class="mw-collapsible-content" style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Chip Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Nodes
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| AMD||-||opteron||o6220||14||2||16||29
|-
| Intel||sm-x9drg||sandy-bridge||e5-s2643||7||2||8||62
|-
|}
* '''Nodes in the hpg1-gpu partition are equipped with Nvidia Tesla M2090 GPU Computing Modules.'''
* Nodes in the hpg1-gpu partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
</div>
</div>
 
<div class="mw-collapsible mw-collapsed" style="padding: 5px; border: 1px solid gray;">
==hpg2-compute==
<div class="mw-collapsible-content" style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Chip Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Nodes
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| Intel||sos6320||haswell||e5-s2643||900||2||32||125
|-
|}
* Nodes in the hpg2-compute partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage; a sketch of a multi-node MPI job that requests this fabric follows this section.
</div>
</div>
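
A distributed-memory job can request that fabric explicitly through the <code>infiniband</code> feature. The sketch below is illustrative only; the node and task counts, the <code>intel</code> and <code>openmpi</code> modules, and the binary name are placeholders.

 #!/bin/bash
 #SBATCH --job-name=mpi_demo             # placeholder job name
 #SBATCH --nodes=2                       # example node count
 #SBATCH --ntasks-per-node=32            # example task count per node
 #SBATCH --mem-per-cpu=2gb
 #SBATCH --time=08:00:00
 #SBATCH --constraint='intel&infiniband' # Intel nodes attached to the InfiniBand fabric
 # Placeholder MPI environment and binary; replace with your own
 module load intel openmpi
 srun ./my_mpi_app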
 
<div class="mw-collapsible mw-collapsed" style="padding: 5px; border: 1px solid gray;">
==hpg2-dev==
<div class="mw-collapsible-content" style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Chip Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Nodes
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| AMD||sm-h8qg6||dhabi||o6378||2||2||28||125
|-
| Intel||sos6320||haswell||e5-2698||4||2||28||125
|-
|}
* Nodes in the hpg2-dev partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
</div>
</div>
 
<div class="mw-collapsible mw-collapsed" style="padding: 5px; border: 1px solid gray;">
==hpg2gpu==
<div class="mw-collapsible-content" style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Chip Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Nodes
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| Intel||r730||haswell||e5-2683||11||2||28||125
|-
|}
* '''Nodes in the hpg2gpu partition are equipped with Nvidia Tesla K80 GPU Computing Modules.'''
* Nodes in the hpg2gpu partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
</div>
</div>
 
<div class="mw-collapsible mw-collapsed" style="padding: 5px; border: 1px solid gray;">
==bigmem==
<div class="mw-collapsible-content" style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Chip Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Nodes
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| AMD||-||magny||o6174||1||4||48||496
|-
| Intel||-||nehalem||x7560||1||2||16||125
|-
| Intel||-||||e5-4607||1||4||24||750
|-
| Intel||-||||e7-8850||1||8||80||1009
|-
|}
</div>
</div>
 
<div class="mw-collapsible mw-collapsed" style="padding: 5px; border: 1px solid gray;">
==gui==
<div class="mw-collapsible-content" style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Chip Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Nodes
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| Intel||sos6320||haswell||e5-2698||4||2||32||125
|-
|}
</div>
</div>
 
<div class="mw-collapsible mw-collapsed" style="padding: 5px; border: 1px solid gray;">
==phase4==
<div class="mw-collapsible-content" style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Chip Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Nodes
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| AMD||c6105||libson||o4184||127||2||12||31
|-
|}
* Nodes in the phase4 partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
</div>
</div>
 
