Available Node Features


Revision as of 02:27, 11 October 2018

Summary

HiPerGator users can control which compute nodes a SLURM job is allocated (for example, a specific chip family or processor model) by using the --constraint directive to specify the node features they require. Note that the partition must be selected explicitly if it is not the default partition.

Example

For a job in the hpg1-compute partition:

#SBATCH --partition=hpg1-compute
#SBATCH --constraint=westmere

but only

#SBATCH --constraint=haswell

is needed for the default hpg2-compute partition.
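
Putting these pieces together, a minimal batch script that pins a job to a chip family might look like the sketch below. The job name, CPU, memory, and time requests, and the program my_program are illustrative placeholders rather than cluster defaults.

#!/bin/bash
# Placeholder job name and resource requests; adjust for your workload.
#SBATCH --job-name=constraint_demo
#SBATCH --ntasks=1
#SBATCH --mem=2gb
#SBATCH --time=01:00:00
# A non-default partition must be named alongside its constraint.
#SBATCH --partition=hpg1-compute
#SBATCH --constraint=westmere

module load ufrc
./my_program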

Using node features as job constraints

Commonly constrained features

A non-exhaustive list of feature constraints that are generally useful:

{| class="wikitable sortable"
|-
! scope="col" | Feature
! scope="col" | Constraints
! scope="col" | Description
|-
| Compute partition||hpg1 , hpg2||Requests nodes within a specified compute partition
|-
| Chip family||amd , intel||Requests nodes having processors of a specified chip vendor
|-
| Chassis model||c6145 , sos6320||Requests nodes having a specified chassis model
|-
| Processor model||o6220 , o6378 , o4184||Requests nodes having a specified processor model
|-
| Network fabric||infiniband||Requests nodes having an Infiniband interconnect
|}
Examples

To request nodes with Intel processors:

#SBATCH --constraint=intel

To request nodes that have Intel processors AND InfiniBand interconnect:

#SBATCH --constraint='intel&infiniband'

To request nodes that have processors from the Intel Sandy Bridge OR Haswell CPU families:

#SBATCH --constraint='sandy-bridge|haswell'
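
Constraints do not have to live in the script itself; the same option can be passed on the sbatch command line, where it overrides any value set in the script. The script name job.sh below is a placeholder:

sbatch --constraint='intel&infiniband' job.sh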

Node features by partition

HiPerGator node features are documented comprehensively below, sectioned by partition. Use the table column headings within each section to sort by the criteria of your choice.

This documentation is updated periodically. To obtain current node feature information directly from the cluster, load the ufrc module (module load ufrc) and run the following command:

$ nodeInfo
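
As an alternative that does not require the ufrc module, the standard SLURM sinfo command can also print the features defined on each node; the format string below uses generic sinfo specifiers for partition, node list, and available features:

$ sinfo --format="%P %N %f"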

hpg1-compute

{| class="wikitable sortable"
|-
! scope="col" | Chip Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| amd||c6145||dhabi||o6378||4||64||250
|-
| amd||a2840||opteron||o6220||2||16||60
|-
| intel||r2740||westmere||x5675||2||12||94
|-
| intel||c6100||westmere||x5675||2||12||92
|}
  • Nodes in the hpg1-compute partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.

hpg2-compute

{| class="wikitable sortable"
|-
! scope="col" | Chip Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| intel||sos6320||haswell||e5-s2643||2||32||125
|}
  • Nodes in the hpg2-compute partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.

hpg2-dev

{| class="wikitable sortable"
|-
! scope="col" | Chip Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| amd||sm-h8qg6||dhabi||o6378||2||28||125
|-
| intel||sos6320||haswell||e5-2698||2||28||125
|}
  • Nodes in the hpg2-dev partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.

gpu

{| class="wikitable sortable"
|-
! scope="col" | Chip Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | GPUs
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| intel||r730||haswell||e5-2683||4||2||28||125
|}
  • Nodes in the gpu partition are equipped with Nvidia Tesla K80 GPU Computing Modules.
  • Nodes in the gpu partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
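
To run on these nodes, a job typically names the gpu partition and requests GPUs through SLURM's generic resource (GRES) mechanism; the GRES name gpu and the count of 2 below are assumptions about the local configuration rather than documented values:

#SBATCH --partition=gpu
#SBATCH --gres=gpu:2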

bigmem

{| class="wikitable sortable"
|-
! scope="col" | Chip Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| amd||r815||magny||o6174||4||48||512
|-
| intel||r820||sandy-bridge||e5-4607||4||24||768
|-
| intel||r940||skylake||8186||4||192||1546
|}

gui

{| class="wikitable sortable"
|-
! scope="col" | Chip Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| intel||sos6320||haswell||e5-2698||2||32||125
|}

phase4

{| class="wikitable sortable"
|-
! scope="col" | Chip Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| amd||c6105||lisbon||o4184||2||12||32
|}
  • Nodes in the phase4 partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.