Available Node Features
Summary
HiPerGator users can finely control which compute nodes a SLURM job runs on (e.g. specific processor families or processor models) by using the --constraint directive to specify the node features they require. Note that the partition must also be selected if it is not the default partition; if no partition is specified, jobs are sent to both the hpg1-compute and hpg2-compute partitions.
- Example

To request Westmere nodes in the hpg1-compute partition:

#SBATCH --partition=hpg1-compute
#SBATCH --constraint=westmere

To request Haswell nodes in the default hpg2-compute partition, the partition directive can be omitted:

#SBATCH --constraint=haswell
Using node features as job constraints
Commonly constrained features
Node features can be used as SLURM job constraints. A non-exhaustive list of the most commonly used feature constraints:
Feature | Constraints | Description |
---|---|---|
Compute partition | hpg1, hpg2 | Requests nodes within a specified compute partition |
Processor family | amd, intel | Requests nodes with processors from a specified vendor |
Network fabric | infiniband | Requests nodes with an InfiniBand interconnect |
- Examples
To request an Intel processor, use the following:
#SBATCH --constraint=intel
To request nodes that have Intel processors AND InfiniBand interconnect:
#SBATCH --constraint='intel&infiniband'
To request nodes that have processors from either the Haswell OR Skylake Intel CPU families:
#SBATCH --constraint='haswell|skylake'
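Putting these directives together, a minimal batch script might look like the following; the job name, task count, memory, and walltime values are placeholders, not recommendations:

#!/bin/bash
#SBATCH --job-name=feature_demo           # placeholder job name
#SBATCH --ntasks=1                        # single task
#SBATCH --mem=2gb                         # placeholder memory request
#SBATCH --time=00:10:00                   # placeholder walltime
#SBATCH --constraint='intel&infiniband'   # node features from the examples above

hostname    # report which node satisfied the constraint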
All Node Features
This data is automatically extracted from SLURM node information and system processor information, and represents all available processor models and other node features on HiPerGator.
Partition | Node Cores | Sockets | Socket Cores | Memory (MB) | SLURM Features | CPU Model |
---|---|---|---|---|---|---|
hpg-dev | 64 | 8 | 8 | | hpg3;amd;milan;infiniband;el8 | AMD EPYC 75F3 32-Core Processor |
gui | 32 | 2 | 16 | | gui;i21;intel;haswell;el8 | Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz |
hwgui | 32 | 2 | 16 | | hpg2;intel;skylake;infiniband;gpu;rtx6000;el8 | Intel(R) Xeon(R) Gold 6242 CPU @ 2.80GHz |
bigmem | 128 | 8 | 16 | | bigmem;amd;rome;infiniband;el8 | AMD EPYC 7702 64-Core Processor |
bigmem | 192 | 4 | 24 | | bigmem;intel;skylake;infiniband;el8 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz |
hpg-milan | 64 | 8 | 8 | | hpg3;amd;milan;infiniband;el8 | AMD EPYC 75F3 32-Core Processor |
hpg-default | 128 | 8 | 16 | | hpg3;amd;rome;infiniband;el8 | AMD EPYC 7702 64-Core Processor |
hpg2-compute | 32 | 2 | 16 | | hpg2;intel;haswell;infiniband;el8 | Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz |
hpg2-compute | 28 | 2 | 14 | | hpg2;intel;haswell;infiniband;el8 | Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz |
gpu | 32 | 2 | 16 | | hpg2;intel;skylake;infiniband;gpu;2080ti;el8 | Intel(R) Xeon(R) Gold 6142 CPU @ 2.60GHz |
gpu | 128 | 8 | 16 | | ai;su3;amd;rome;infiniband;gpu;a100;el8 | AMD EPYC 7742 64-Core Processor |
hpg-ai | 128 | 8 | 16 | | ai;su3;amd;rome;infiniband;gpu;a100;el8 | AMD EPYC 7742 64-Core Processor |
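For example, to allow a job to run on either of the AMD EPYC node types listed above, the rome and milan features from the table can be combined with the OR syntax shown earlier:

#SBATCH --constraint='rome|milan'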
Node features by partition
HiPerGator node features are documented comprehensively below, sectioned by partition. Use the table column headings within each section to sort by the criteria of your choice.
This documentation will be updated periodically. To request current node feature information directly from the cluster, load the ufrc module (module load ufrc) and run the following command:
$ nodeInfo
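If the ufrc module is unavailable, similar information can be queried from SLURM itself with the standard sinfo command; the output fields chosen below (partition, CPUs, memory, features) are one illustrative selection:

$ sinfo -o '%P %c %m %f' | sort -u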
hpg1-compute
Processor Vendor | Chassis Model | Processor Family | Processor Model | Sockets | CPUs | RAM (GB) |
---|---|---|---|---|---|---|
amd | c6145 | dhabi | o6378 | 4 | 64 | 250 |
amd | a2840 | opteron | o6220 | 2 | 16 | 60 |
intel | r2740 | westmere | x5675 | 2 | 12 | 94 |
intel | c6100 | westmere | x5675 | 2 | 12 | 92 |
- Nodes in the hpg1-compute partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
hpg2-compute
Processor Vendor | Chassis Model | Processor Family | Processor Model | Sockets | CPUs | RAM (GB) |
---|---|---|---|---|---|---|
intel | sos6320 | haswell | e5-2698 | 2 | 32 | 125 |
- Nodes in the hpg2-compute partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
hpg2-dev
Processor Vendor | Chassis Model | Processor Family | Processor Model | Sockets | CPUs | RAM (GB) |
---|---|---|---|---|---|---|
amd | sm-h8qg6 | dhabi | o6378 | 2 | 64 | 250 |
intel | sos6320 | haswell | e5-2698 | 2 | 28 | 125 |
- Nodes in the hpg2-dev partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
gpu
Processor Vendor | Chassis Model | Processor Family | Processor Model | GPUS | Sockets | CPUs | RAM (GB) |
---|---|---|---|---|---|---|---|
intel | r730 | haswell | e5-2683 | 4 | 2 | 28 | 125 |
- Nodes in the gpu partition are equipped with Nvidia Tesla K80 GPU Computing Modules.
- Nodes in the gpu partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
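To target a specific GPU node type, the GPU-related features shown in the All Node Features table above (e.g. 2080ti, a100) can be combined with a standard SLURM GPU request; the GPU count of 1 below is illustrative:

#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
#SBATCH --constraint=a100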
bigmem
Processor Vendor | Chassis Model | Processor Family | Processor Model | Sockets | CPUs | RAM (GB) |
---|---|---|---|---|---|---|
amd | r815 | magny | o6174 | 4 | 48 | 512 |
intel | r820 | sandy-bridge | e5-4607 | 4 | 24 | 768 |
intel | r940 | skylake | 8168 | 4 | 192 | 1546 |
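To land on one of these large-memory nodes, pair the bigmem partition with an explicit memory request; the 600gb figure below is a placeholder sized to exceed what standard compute nodes offer:

#SBATCH --partition=bigmem
#SBATCH --mem=600gb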
gui
Processor Vendor | Chassis Model | Processor Family | Processor Model | Sockets | CPUs | RAM (GB) |
---|---|---|---|---|---|---|
intel | sos6320 | haswell | e5-2698 | 2 | 32 | 125 |
phase4
Processor Vendor | Chassis Model | Processor Family | Processor Model | Sockets | CPUs | RAM (GB) |
---|---|---|---|---|---|---|
amd | c6105 | lisbon | o4184 | 2 | 12 | 32 |
- Nodes in the phase4 partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.