Large-Memory SMP Servers

From UFRC

Revision as of 21:11, 18 July 2016

Research Computing currently maintains the following resources for calculations requiring large amounts of physical memory.

{| class="wikitable"
|+ Large Memory Servers
! Host !! Architecture !! Vendor !! Processor !! Frequency (GHz) !! Cores !! Memory (GB) !! SLURM Memory (MB)
|-
! scope="row" | 1
| amd64 || Intel || X7560 || 2.3 || 16 || 125 || 128000
|-
! scope="row" | 2
| amd64 || AMD || 6174 || 2.2 || 48 || 496 || 508000
|-
! scope="row" | 3
| amd64 || Intel || E5-4607 || 2.2 || 24 || 770 || 792000
|-
! scope="row" | 4
| amd64 || Intel || E7-8850 || 2.0 || 80 || 1009 (1 TB) || 1033728
|}
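As a quick check against the table, the per-core share of each node's SLURM-schedulable memory can be computed with integer division; these figures are a reasonable starting point when sizing a per-core memory request (the mapping of hosts to core counts is taken directly from the table above):

```shell
#!/bin/sh
# Cores and SLURM-schedulable memory (MB) per host, from the table above.
# Integer division gives a per-core amount that cannot oversubscribe the node.
for spec in "16 128000" "48 508000" "24 792000" "80 1033728"; do
  set -- $spec
  echo "$1 cores: $(($2 / $1)) MB per core"
done
```

For example, the 80-core node works out to 12921 MB of schedulable memory per core, noticeably less than the nominal 1 TB divided by 80.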

The large-memory machines are available only via the HiPerGator ''bigmem'' partition. To submit jobs to this partition, add the following directive to your job submission script:

 #SBATCH --partition=bigmem
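For reference, a complete submission script built around this directive might look like the sketch below; the job name, resource requests, and application command are illustrative placeholders, not site requirements:

```shell
#!/bin/bash
#SBATCH --job-name=bigmem_job      # placeholder job name
#SBATCH --partition=bigmem         # route the job to the large-memory nodes
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH --mem=120gb                # must fit within the chosen node's SLURM memory limit
#SBATCH --time=04:00:00

# Placeholder command; replace with your memory-intensive application.
./my_large_memory_app
```

Keep the --mem request at or below the "SLURM Memory" figure for the node class you are targeting, or the job will never be scheduled.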

Please also be aware that servers with hundreds of gigabytes of RAM are very expensive. If you submit a job to the ''bigmem'' partition and your group does not have an NCU allocation sufficient to accommodate the resource request, your job will be rejected.