The Mercer cluster was decommissioned on Friday, May 19, 2017.

 

Mercer

Overview

Mercer is our primary cluster, currently a 394-node, 5812-core heterogeneous cluster based on the Intel x86_64 architecture.

The cluster is partially owned by our HPC Stakeholders, and the scheduler maintains standing reservations to ensure that users in these groups get timely access to their share of the hardware.

Hardware Specifications

System Name: Mercer

Vendor/Model: Dell PowerEdge M620

Network: QDR InfiniBand by Mellanox (MPI, /scratch); 1 Gb Ethernet (management)

Operating System: Linux (CentOS 6.3)

Login Nodes:
  • 2 nodes, each with 2 Intel Xeon E5-2690 v2 3.0 GHz CPUs (10 cores/socket, 20 cores/node) and 192 GB memory
  • 2 nodes, each with 2 Intel Xeon X5650 2.67 GHz CPUs (6 cores/socket, 12 cores/node) and 96 GB memory

Compute Nodes:
  • 48 nodes, each with 2 Intel Xeon E5-2690 v2 3.0 GHz CPUs ("Ivy Bridge", 10 cores/socket, 20 cores/node) and 192 GB memory (189 GB usable)
  • 112 nodes, each with 2 Intel Xeon E5-2690 v2 3.0 GHz CPUs ("Ivy Bridge", 10 cores/socket, 20 cores/node) and 64 GB memory (62 GB usable)
  • 16 nodes, each with 2 Intel Xeon X5650 2.67 GHz CPUs ("Westmere", 6 cores/socket, 12 cores/node) and 96 GB memory (23 GB usable)
  • 8 nodes, each with 2 Intel Xeon X5650 2.67 GHz CPUs ("Westmere", 6 cores/socket, 12 cores/node) and 48 GB memory (46 GB usable)
  • 68 nodes, each with 2 Intel Xeon X5650 2.67 GHz CPUs ("Westmere", 6 cores/socket, 12 cores/node) and 24 GB memory (23 GB usable)
  • 64 nodes, each with 2 Intel Xeon X5675 3.07 GHz CPUs ("Westmere", 6 cores/socket, 12 cores/node) and 48 GB memory (46 GB usable)
  • 64 nodes, each with 2 Intel Xeon X5550 2.67 GHz CPUs ("Nehalem", 4 cores/socket, 8 cores/node) and 24 GB memory (23 GB usable)
  • 9 GPU-enabled nodes, each with 2 Intel Xeon E5-2650 2.0 GHz CPUs ("Sandy Bridge", 8 cores/socket, 16 cores/node), 128 GB memory (126 GB usable), 4 NVIDIA GTX Titan Black GPUs with 6 GB RAM each, and 3 local 1 TB SSDs
  • 4 older GPU-enabled nodes, each with 2 Intel Xeon X5650 2.67 GHz CPUs ("Westmere", 6 cores/socket, 12 cores/node), 24 GB memory (23 GB usable), and 1 NVIDIA Tesla M2070 GPU with 5 GB RAM
  • 1 node with 2 Intel Xeon X7560 2.27 GHz CPUs ("Nehalem", 8 cores/socket, 16 cores/node) and 256 GB memory (250 GB usable)
  • 1 node with 4 Intel Xeon E7-8837 2.67 GHz CPUs ("Westmere", 8 cores/socket, 32 cores/node) and 1 TB memory (1000 GB usable) (CGSB users only)

Number of Total CPU Cores: 3200 compute + 40 login

Total Memory: 16 TB for compute nodes + 384 GB for login nodes

Theoretical Peak Performance: 76.8 TFLOPS (8 DP FLOPS/cycle/core x 3 GHz x 3200 cores)
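
The quoted peak is simply the product of the three numbers in parentheses. The short Python sketch below (illustrative only, not part of any cluster tooling) reproduces that arithmetic:

# Sanity check of the quoted theoretical peak, using the rounded
# figures from the table above (nominal values, not measurements).
flops_per_cycle = 8        # double-precision FLOPS per cycle per core
clock_hz = 3.0e9           # nominal 3 GHz clock
compute_cores = 3200       # compute cores listed above

peak = flops_per_cycle * clock_hz * compute_cores
print(f"{peak / 1e12:.1f} TFLOPS")   # prints 76.8 TFLOPS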


File Systems

The file systems available on the Mercer cluster are listed below.

Mountpoint: /home
Size: 1 TB total; 20 GB per user
FS Type: ZFS
Backed up? Yes
Flushed? No
Availability: All nodes (login, compute) through NFS over 4 GB/sec InfiniBand
Variable: $HOME = /home/$USER

Mountpoint: /scratch
Size: 410 TB total; 5 TB per user
FS Type: Lustre
Backed up? No
Flushed? Yes; files unused for 60 days are deleted
Availability: All nodes (login, compute) through Lustre over 4 GB/sec InfiniBand
Variable: $SCRATCH = /scratch/$USER

Mountpoint: /archive
Size: 200 TB (shared); 2 TB per user
FS Type: ZFS
Backed up? Yes
Flushed? No
Availability: Login nodes only, through NFS over 4 GB/sec InfiniBand
Variable: $ARCHIVE = /archive/$USER

Mountpoint: /work
Size: 500 GB per user
FS Type: ZFS
Backed up? No
Flushed? No
Availability: All nodes (login, compute) through NFS over 4 GB/sec InfiniBand
Variable: $WORK = /work/$USER

Mountpoint: /state/partition1
Size: Varies, mostly >100 GB
FS Type: ext3
Backed up? No
Flushed? Yes; at the end of each job
Availability: Separate local filesystem on each compute node
Variable: $PBS_JOBTMP = /state/partition1/$PBS_JOBID
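
Each file system above has an associated environment variable ($HOME, $SCRATCH, $ARCHIVE, $WORK, $PBS_JOBTMP). The Python sketch below is a minimal, hypothetical example (not an official template; the file names are placeholders) of how a batch job might use them: stage input from $SCRATCH to the node-local $PBS_JOBTMP space, work there, and copy results back to $SCRATCH before the per-job space is removed at job end.

#!/usr/bin/env python
# Minimal sketch of using Mercer's storage areas from inside a batch job.
# Assumptions: the job runs under PBS (so $PBS_JOBTMP is set), and the
# file names below are placeholders for illustration only.
import os
import shutil

scratch = os.environ["SCRATCH"]                  # /scratch/$USER (files unused for 60 days are deleted)
job_tmp = os.environ.get("PBS_JOBTMP", "/tmp")   # /state/partition1/$PBS_JOBID (flushed at job end)

input_path = os.path.join(scratch, "input.dat")  # hypothetical input staged in /scratch
local_path = os.path.join(job_tmp, "input.dat")
result_path = os.path.join(job_tmp, "result.dat")

# 1. Copy input to node-local disk for I/O-heavy work.
shutil.copy(input_path, local_path)

# 2. Do the real work against the local copy (placeholder transformation).
with open(local_path) as src, open(result_path, "w") as dst:
    dst.write(src.read().upper())

# 3. Copy results back to /scratch before the job ends, because the
#    node-local per-job directory is flushed when the job finishes.
shutil.copy(result_path, os.path.join(scratch, "result.dat"))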