Available Node Features

HiPerGator users may finely control which compute nodes are requested by a given SLURM job (e.g., a specific chip family or processor model) by using the <code>--constraint</code> directive to specify the node features they desire:

#SBATCH --constraint=<node_feature>
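
For context, here is a minimal batch-script sketch showing where the constraint fits among other common directives; the job name, resource amounts, and final command are placeholders, not values taken from this page:

#!/bin/bash
#SBATCH --job-name=feature_test       # placeholder job name
#SBATCH --ntasks=1                    # placeholder resource requests
#SBATCH --mem=2gb
#SBATCH --time=00:10:00
#SBATCH --constraint=<node_feature>   # e.g. intel, amd, or infiniband

hostname                              # prints the node that satisfied the constraint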

=Using node features as job constraints=

===Common feature constraints===

A non-exhaustive list of commonly used feature constraints that are generally useful:

{| class="wikitable"
| align="center" style="background:#f0f0f0;"|'''Feature'''
| align="center" style="background:#f0f0f0;"|'''Description'''
| align="center" style="background:#f0f0f0;"|'''Available Constraints'''
|-
| Compute partition||''Requests nodes within a specified compute partition''||hpg1, hpg2
|-
| Chip family||''Requests nodes having processors of a specified chip vendor''||amd, intel
|-
| Chassis model||''Requests nodes having a specified chassis model''||c6145, sos6320
|-
| Processor model||''Requests nodes having a specified processor model''||o6220, o3678, o4184
|-
| Network fabric||''Requests nodes having an InfiniBand interconnect''||infiniband
|}
;Example:

Use node features as SLURM job constraints; to ask for an Intel processor, use the following:

#SBATCH --constraint=intel

To request nodes that have Intel processors AND InfiniBand interconnect:

#SBATCH --constraint='intel&infiniband'

To request nodes that have processors from the Intel Sandy Bridge OR Haswell CPU families:

#SBATCH --constraint='sandy-bridge|haswell'
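
Before picking a constraint, it can help to see which features each node actually advertises. The standard SLURM utilities below report them; the exact columns and field names depend on the cluster's SLURM version, and the node name is a placeholder, so treat this as a sketch:

sinfo -o "%20N %10P %f"                             # node list, partition, and available features
scontrol show node <node_name> | grep -i features   # features of a single node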

=Node features by partition=

HiPerGator node features are documented comprehensively below, sectioned by partition.

Use the table column headings within each section to sort by the criteria of your choice.

==hpg1-compute==

{| class="wikitable sortable"
! Chip Vendor !! Chassis Model !! Processor Family !! Processor Model !! Nodes !! Sockets !! CPUs !! RAM (GB)
|-
| AMD || c6145 || dhabi || o6378 || 160 || 4 || 64 || 250
|-
| AMD || a2840 || opteron || o6220 || 64 || 2 || 16 || 60
|-
| Intel || r2740 || westmere || x5675 || 8 || 2 || 12 || 94
|-
| Intel || c6100 || westmere || x5675 || 16 || 2 || 12 || 92
|}
* Nodes in the hpg1-compute partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
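
For example, to land on the large AMD nodes listed above, the partition and chip-family features from the table of common constraints can be combined, using the '&' syntax shown earlier (a sketch; adjust to the features your job actually needs):

#SBATCH --constraint='hpg1&amd'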

==hpg1-gpu==

{| class="wikitable sortable"
! Chip Vendor !! Chassis Model !! Processor Family !! Processor Model !! Nodes !! Sockets !! CPUs !! RAM (GB)
|-
| AMD || - || opteron || o6220 || 14 || 2 || 16 || 29
|-
| Intel || sm-x9drg || sandy-bridge || e5-s2643 || 7 || 2 || 8 || 62
|}
* Nodes in the hpg1-gpu partition are equipped with Nvidia Tesla M2090 GPU Computing Modules.
* Nodes in the hpg1-gpu partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.

==hpg2-compute==

{| class="wikitable sortable"
! Chip Vendor !! Chassis Model !! Processor Family !! Processor Model !! Nodes !! Sockets !! CPUs !! RAM (GB)
|-
| Intel || sos6320 || haswell || e5-s2643 || 900 || 2 || 32 || 125
|}
* Nodes in the hpg2-compute partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.

==hpg2-dev==

{| class="wikitable sortable"
! Chip Vendor !! Chassis Model !! Processor Family !! Processor Model !! Nodes !! Sockets !! CPUs !! RAM (GB)
|-
| AMD || sm-h8qg6 || dhabi || o6378 || 2 || 2 || 28 || 125
|-
| Intel || sos6320 || haswell || e5-2698 || 4 || 2 || 28 || 125
|}
* Nodes in the hpg2-dev partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.

==hpg2gpu==

{| class="wikitable sortable"
! Chip Vendor !! Chassis Model !! Processor Family !! Processor Model !! Nodes !! Sockets !! CPUs !! RAM (GB)
|-
| Intel || r730 || haswell || e5-2683 || 11 || 2 || 28 || 125
|}
* Nodes in the hpg2gpu partition are equipped with Nvidia Tesla K80 GPU Computing Modules.
* Nodes in the hpg2gpu partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.

==bigmem==

{| class="wikitable sortable"
! Chip Vendor !! Chassis Model !! Processor Family !! Processor Model !! Nodes !! Sockets !! CPUs !! RAM (GB)
|-
| AMD || - || magny || o6174 || 1 || 4 || 48 || 496
|-
| Intel || - || nehalem || x7560 || 1 || 2 || 16 || 125
|-
| Intel || - || || e5-4607 || 1 || 4 || 24 || 750
|-
| Intel || - || || e7-8850 || 1 || 8 || 80 || 1009
|}

==gui==

{| class="wikitable sortable"
! Chip Vendor !! Chassis Model !! Processor Family !! Processor Model !! Nodes !! Sockets !! CPUs !! RAM (GB)
|-
| Intel || sos6320 || haswell || e5-2698 || 4 || 2 || 32 || 125
|}

==phase4==

{| class="wikitable sortable"
! Chip Vendor !! Chassis Model !! Processor Family !! Processor Model !! Nodes !! Sockets !! CPUs !! RAM (GB)
|-
| AMD || c6105 || libson || o4184 || 127 || 2 || 12 || 31
|}
* Nodes in the phase4 partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.