Available Node Features

From UFRC
 
[[Category:SLURM]]
==Usage==
HiPerGator users may finely control the selection of compute hardware for a SLURM job, such as specific processor families or processor models, by using the <code>--constraint</code> directive to specify node ''features''.
 
;Example:
 #SBATCH --constraint=westmere
or
 #SBATCH --constraint=haswell
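In a complete submission script, the constraint sits alongside the other <code>#SBATCH</code> directives. A minimal sketch (the job name, resource values, and the final commands are placeholders, not UFRC recommendations):

```shell
#!/bin/bash
#SBATCH --job-name=constraint_demo   # placeholder job name
#SBATCH --ntasks=1
#SBATCH --mem=2gb
#SBATCH --time=00:10:00
#SBATCH --constraint=haswell         # only schedule onto nodes tagged 'haswell'

# Confirm which host and CPU model the job actually landed on.
echo "Running on $(hostname)"
grep -m 1 'model name' /proc/cpuinfo
```

Submit with <code>sbatch job.sh</code>; SLURM reads the <code>#SBATCH</code> lines as directives and executes the rest as an ordinary shell script.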
 
  
==Using node features as job constraints==
===Commonly constrained features===
A non-exhaustive list of commonly used feature constraints, found to be generally useful:

{| class="wikitable"
| align="center" style="background:#f0f0f0;"|'''Feature'''
| align="center" style="background:#f0f0f0;"|'''Constraints'''
| align="center" style="background:#f0f0f0;"|'''Description'''
|-
| Compute partition||<code>hpg1</code>, <code>hpg2</code>||''Requests nodes within a specified compute partition''
|-
| Processor family||<code>amd</code>, <code>intel</code>||''Requests nodes having processors of a specified vendor''
|-
| Network fabric||<code>infiniband</code>||''Requests nodes having an InfiniBand interconnect''
|}

;Examples:
To request an Intel processor, use the following:

 #SBATCH --constraint=intel

Basic boolean logic can be used to request combinations of features. For example, to request nodes that have Intel processors '''AND''' an InfiniBand interconnect, use

 #SBATCH --constraint='intel&infiniband'

To request nodes with processors from either the Intel Haswell '''OR''' Skylake CPU family, use

 #SBATCH --constraint='haswell|skylake'
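The <code>&</code> and <code>|</code> operators compose plain feature strings, so constraint expressions can also be built programmatically. A small sketch (the helper names <code>all_of</code> and <code>any_of</code> are made up for illustration):

```shell
# Join features with '&' (all required) or '|' (any acceptable),
# matching the SLURM constraint syntax shown above.
all_of() { local IFS='&'; printf '%s\n' "$*"; }
any_of() { local IFS='|'; printf '%s\n' "$*"; }

all_of intel infiniband    # prints: intel&infiniband
any_of haswell skylake     # prints: haswell|skylake
```

Quote the result when submitting, e.g. <code>sbatch --constraint="$(any_of haswell skylake)" job.sh</code>. To see which features each node actually advertises, <code>sinfo -o '%N %f'</code> prints node names alongside their available features.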
==All Node Features==
This data has been automatically extracted from SLURM node information and represents all available SLURM features and the CPU models they correspond to on HiPerGator.

{{#get_web_data:url=https://bio.rc.ufl.edu/pub/ufrc/data/node_data.csv
}}
|}
 
==Node features by partition==
HiPerGator node features are documented comprehensively below, sectioned by partition. Use the column headings within each section to sort by the criteria of your choice.

This documentation will be updated periodically. To request current node feature information directly from the cluster, load the ufrc module (<code>module load ufrc</code>) and run the following command:

 $ nodeInfo

===hpg1-compute===
<div style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Processor Vendor
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| amd||dhabi||o6378||4||64||250
|-
| amd||opteron||o6220||2||16||60
|-
| intel||westmere||x5675||2||12||92
|}
</div>
* Nodes in the <code>hpg1-compute</code> partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.

===hpg2-compute===
<div style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Processor Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| intel||sos6320||haswell||e5-s2643||2||32||125
|}
</div>
* Nodes in the <code>hpg2-compute</code> partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.

===hpg2-dev===
<div style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Processor Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| amd||sm-h8qg6||dhabi||o6378||2||64||250
|-
| intel||sos6320||haswell||e5-2698||2||28||125
|}
</div>
* Nodes in the <code>hpg2-dev</code> partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.

===gpu===
<div style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Processor Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | GPUs
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| intel||r730||haswell||e5-2683||4||2||28||125
|}
</div>
* Nodes in the <code>gpu</code> partition are equipped with Nvidia Tesla K80 GPU Computing Modules.
* Nodes in the <code>gpu</code> partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.

===bigmem===
<div style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Processor Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| amd||r815||magny||o6174||4||48||512
|-
| intel||r820||sandy-bridge||e5-4607||4||24||768
|-
| intel||r940||skylake||8186||4||192||1546
|}
</div>

===gui===
<div style="padding: 5px;">
{| class="wikitable sortable"
|-
! scope="col" | Processor Vendor
! scope="col" | Chassis Model
! scope="col" | Processor Family
! scope="col" | Processor Model
! scope="col" | Sockets
! scope="col" | CPUs
! scope="col" | RAM (GB)
|-
| intel||sos6320||haswell||e5-2698||2||32||125
|}
</div>
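If the <code>ufrc</code> module is unavailable, plain <code>sinfo</code> can produce a similar per-partition summary (a sketch; the exact columns <code>nodeInfo</code> prints may differ):

```shell
# List partition, node list, CPU count, memory (MB), and available
# features for every node type, using sinfo's format specifiers
# (%P=partition, %N=nodes, %c=CPUs, %m=memory, %f=available features).
list_node_features() {
    sinfo --noheader -o '%P %N %c %m %f' "$@"
}

# e.g. restrict the listing to one partition:
#   list_node_features -p hpg2-compute
```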
 

''Revision as of 21:00, 7 August 2019''

The node data retrieved at the time of this revision rendered as follows:

{| class="wikitable sortable"
|-
! scope="col" | Partition
! scope="col" | Node Cores
! scope="col" | Sockets
! scope="col" | Socket Cores
! scope="col" | Memory (MB)
! scope="col" | SLURM Features
! scope="col" | CPU Model
|-
| hpg-dev||64||8||8|| ||hpg3;amd;milan;infiniband;el8||AMD EPYC 75F3 32-Core Processor
|-
| gui||32||2||16|| ||gui;i21;intel;haswell;el8||Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz
|-
| hwgui||32||2||16|| ||hpg2;intel;skylake;infiniband;gpu;rtx6000;el8||Intel(R) Xeon(R) Gold 6242 CPU @ 2.80GHz
|-
| bigmem||128||8||16|| ||bigmem;amd;rome;infiniband;el8||AMD EPYC 7702 64-Core Processor
|-
| bigmem||192||4||24|| ||bigmem;intel;skylake;infiniband;el8||Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz
|-
| hpg-milan||64||8||8|| ||hpg3;amd;milan;infiniband;el8||AMD EPYC 75F3 32-Core Processor
|-
| hpg-default||128||8||16|| ||hpg3;amd;rome;infiniband;el8||AMD EPYC 7702 64-Core Processor
|-
| hpg2-compute||32||2||16|| ||hpg2;intel;haswell;infiniband;el8||Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz
|-
| hpg2-compute||28||2||14|| ||hpg2;intel;haswell;infiniband;el8||Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
|-
| gpu||32||2||16|| ||hpg2;intel;skylake;infiniband;gpu;2080ti;el8||Intel(R) Xeon(R) Gold 6142 CPU @ 2.60GHz
|-
| gpu||128||8||16|| ||ai;su3;amd;rome;infiniband;gpu;a100;el8||AMD EPYC 7742 64-Core Processor
|-
| hpg-ai||128||8||16|| ||ai;su3;amd;rome;infiniband;gpu;a100;el8||AMD EPYC 7742 64-Core Processor
|}
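The "SLURM Features" column above is a semicolon-separated list, so a script can test whether a node type carries a given feature before choosing a constraint. A hypothetical helper for illustration:

```shell
# has_feature LIST FEATURE -- succeed if the ';'-separated LIST
# (as shown in the SLURM Features column) contains FEATURE exactly.
has_feature() {
    case ";$1;" in
        *";$2;"*) return 0 ;;
        *)        return 1 ;;
    esac
}

has_feature 'hpg3;amd;milan;infiniband;el8' milan && echo yes   # prints: yes
has_feature 'hpg3;amd;milan;infiniband;el8' rome  || echo no    # prints: no
```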