Available Node Features
Revision as of 17:05, 17 July 2017
HiPerGator node features are documented below by partition. Each table is sortable; click a column header to order the rows by the feature of interest.
hpg1-compute
Chip Vendor | Chassis Model | Processor Family | Processor Model | Nodes | Sockets | CPUs | RAM (GB) |
---|---|---|---|---|---|---|---|
AMD | c6145 | dhabi | o6378 | 160 | 4 | 64 | 250 |
AMD | a2840 | opteron | o6220 | 64 | 2 | 16 | 60 |
Intel | r2740 | westmere | x5675 | 8 | 2 | 12 | 94 |
Intel | c6100 | westmere | x5675 | 16 | 2 | 12 | 92 |
- Nodes in the hpg1-compute partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
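On a SLURM-based scheduler such as HiPerGator's, node features like those in the table above are typically requested with the `--constraint` option. A minimal sketch of a batch script, assuming the feature names match the table values (e.g. `dhabi` for the Opteron 6378 nodes):

```shell
#!/bin/bash
#SBATCH --job-name=feature_demo
#SBATCH --partition=hpg1-compute
# Restrict the job to nodes tagged with the 'dhabi' processor-family
# feature (feature string assumed to match the table above).
#SBATCH --constraint=dhabi
#SBATCH --ntasks=1
#SBATCH --mem=4gb
#SBATCH --time=00:10:00

srun hostname
```

Features can also be combined, e.g. `--constraint="intel&westmere"`, to require several attributes at once.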
hpg1-gpu
Chip Vendor | Chassis Model | Processor Family | Processor Model | Nodes | Sockets | CPUs | RAM (GB) |
---|---|---|---|---|---|---|---|
AMD | - | opteron | o6220 | 14 | 2 | 16 | 29 |
Intel | sm-x9drg | sandy-bridge | e5-s2643 | 7 | 2 | 8 | 62 |
- Nodes in the hpg1-gpu partition are equipped with Nvidia Tesla M2090 GPU Computing Modules.
- Nodes in the hpg1-gpu partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
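GPUs on SLURM systems are usually requested through the generic-resource (gres) mechanism rather than a node feature. A hedged sketch for one M2090 on this partition (the exact gres name is site-dependent; `gpu` is the common default):

```shell
#!/bin/bash
#SBATCH --partition=hpg1-gpu
# Request one GPU per node via SLURM's generic resources;
# 'gpu' is the conventional gres name but may differ per site.
#SBATCH --gres=gpu:1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

# Show the allocated GPU as a quick sanity check.
srun nvidia-smi
```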
hpg2-compute
Chip Vendor | Chassis Model | Processor Family | Processor Model | Nodes | Sockets | CPUs | RAM (GB) |
---|---|---|---|---|---|---|---|
Intel | sos6320 | haswell | e5-s2643 | 900 | 2 | 32 | 125 |
- Nodes in the hpg2-compute partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
hpg2-dev
Chip Vendor | Chassis Model | Processor Family | Processor Model | Nodes | Sockets | CPUs | RAM (GB) |
---|---|---|---|---|---|---|---|
AMD | sm-h8qg6 | dhabi | o6378 | 2 | 2 | 28 | 125 |
Intel | sos6320 | haswell | e5-2698 | 4 | 2 | 28 | 125 |
- Nodes in the hpg2-dev partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
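Dev partitions are commonly used for short interactive tests rather than batch jobs. A sketch using standard SLURM `srun` options, assuming the `haswell` feature string from the table above:

```shell
# Start a 10-minute interactive shell on an hpg2-dev node,
# pinned to the Intel haswell nodes listed above.
srun --partition=hpg2-dev --constraint=haswell \
     --ntasks=1 --mem=2gb --time=00:10:00 --pty bash -i
```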
hpg2gpu
Chip Vendor | Chassis Model | Processor Family | Processor Model | Nodes | Sockets | CPUs | RAM (GB) |
---|---|---|---|---|---|---|---|
Intel | r730 | haswell | e5-2683 | 11 | 2 | 28 | 125 |
- Nodes in the hpg2gpu partition are equipped with Nvidia Tesla K80 GPU Computing Modules.
- Nodes in the hpg2gpu partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.
bigmem
Chip Vendor | Chassis Model | Processor Family | Processor Model | Nodes | Sockets | CPUs | RAM (GB) |
---|---|---|---|---|---|---|---|
AMD | - | magny | o6174 | 1 | 4 | 48 | 496 |
Intel | - | nehalem | x7560 | 1 | 2 | 16 | 125 |
Intel | - | - | e5-4607 | 1 | 4 | 24 | 750 |
Intel | - | - | e7-8850 | 1 | 8 | 80 | 1009 |
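Jobs usually land on bigmem nodes by requesting more memory than a standard node provides. A minimal sketch, assuming standard SLURM memory syntax (the application name is hypothetical):

```shell
#!/bin/bash
#SBATCH --partition=bigmem
# Requesting 500 GB forces placement on a large-memory node
# (e.g. the 750 GB or 1009 GB machines in the table above).
#SBATCH --mem=500gb
#SBATCH --ntasks=1
#SBATCH --time=04:00:00

srun ./my_large_memory_app   # hypothetical application
```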
gui
Chip Vendor | Chassis Model | Processor Family | Processor Model | Nodes | Sockets | CPUs | RAM (GB) |
---|---|---|---|---|---|---|---|
Intel | sos6320 | haswell | e5-2698 | 4 | 2 | 32 | 125 |
phase4
Chip Vendor | Chassis Model | Processor Family | Processor Model | Nodes | Sockets | CPUs | RAM (GB) |
---|---|---|---|---|---|---|---|
AMD | c6105 | libson | o4184 | 127 | 2 | 12 | 31 |
- Nodes in the phase4 partition use the InfiniBand network fabric for distributed memory parallel processing and fast access to storage.