
Overview

The Union Square (USQ) Cluster is a multi-purpose high-performance computing resource for the NYU research community.

Hardware Specifications

System Name                    Union Square (USQ)
Network                        DDR Infiniband
Operating System               Linux
Number of Login Nodes          2
Number of Compute Nodes        73
Number of Compute CPU Cores    584
Total Memory                   1.5 Terabytes
Theoretical Peak Performance   5.44 Teraflops
CPU Manufacturer / Model       Intel Xeon quad-core (E5345 / E5410)
CPU Speed                      2.33GHz
Memory per Node                16GB or 32GB

Login Nodes

Below are the login nodes available on the Union Square Cluster.

Login Node            Processors                                 Memory
usq1.es.its.nyu.edu   2 x Intel(R) Xeon(R) CPU E5345 @ 2.33GHz   16GB
usq2.es.its.nyu.edu   2 x Intel(R) Xeon(R) CPU E5345 @ 2.33GHz   16GB
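
For example, you can connect to one of the login nodes with SSH (a minimal sketch; replace NetID with your own NYU NetID, and note that access requirements such as VPN are not covered here):

    # Connect to the first USQ login node with your NYU NetID
    ssh NetID@usq1.es.its.nyu.edu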

Compute Nodes

Below are the compute nodes available on the Union Square Cluster.

Compute Nodes                    Count   Processors                             Memory   CPUs per Node
compute-0-21, compute-0-44       2       Intel(R) Xeon(R) CPU E5345 @ 2.33GHz   16GB     8
compute-0-64 to compute-0-71     8       Intel(R) Xeon(R) CPU E5410 @ 2.33GHz   16GB     8
compute-0-73 to compute-0-83     11      Intel(R) Xeon(R) CPU E5410 @ 2.33GHz   16GB     8
compute-0-85 to compute-0-97     13      Intel(R) Xeon(R) CPU E5410 @ 2.33GHz   16GB     8
compute-0-99 to compute-0-114    16      Intel(R) Xeon(R) CPU E5410 @ 2.33GHz   16GB     8
compute-0-116 to compute-0-122   7       Intel(R) Xeon(R) CPU E5345 @ 2.33GHz   32GB     8
compute-0-124 to compute-0-139   16      Intel(R) Xeon(R) CPU E5345 @ 2.33GHz   32GB     8

Networks

The table below describes the two networks available on the Union Square Cluster.

Network      Vendor   Purpose                                                             Speed
Ethernet     Dell     Management; serving /home over NFS to the nodes                     1Gb/sec
Infiniband   Cisco    High-speed, low-latency; MPI and the Lustre parallel file system    20Gb/sec theoretical, 12-13Gb/sec benchmarked

File Systems

The table below shows the file systems available on the Union Square Cluster.

Mountpoint   Size    FS Type   Backed up?   Availability                                                     Variable   Value
/home        1TB     ext3      Yes          All nodes (login, compute) via NFS over 1Gb/sec Ethernet         $HOME      /home/NetID
/scratch     301TB   Lustre    NO!          All nodes (login, compute) via Lustre over 4GB/sec Infiniband    $SCRATCH   /scratch/NetID
/archive     102TB   ZFS       Yes          Login nodes only, over a 1Gb/sec connection                      $ARCHIVE   /archive/NetID
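
The $HOME, $SCRATCH, and $ARCHIVE environment variables are set for you at login. A typical pattern is to run jobs from /scratch (large and fast, but not backed up) and copy results you want to keep to /home or /archive. A minimal sketch (the directory and file names are hypothetical):

    # Work in scratch space; it is large and fast but NOT backed up
    mkdir -p $SCRATCH/myjob
    cd $SCRATCH/myjob

    # ... run your job here ...

    # Copy results worth keeping to backed-up storage
    cp results.dat $HOME/
    cp -r $SCRATCH/myjob $ARCHIVE/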

USQ Queues

Type          Name of Queue   Maximum Walltime   Max Jobs Per User   Max CPU Core/User *   Maximum Nodes   Node Allocation Type **   Active   Priority
Serial        ser2            48 hours           N/A                 64,128                                Shared                    Yes
Serial        serlong         96 hours           N/A                 32,64                                 Shared                    Yes
Interactive   interactive     4 hours            2                   N/A                   2               Shared                    Yes      highest

Notes:

* Max CPU Core/User defines the largest processor count available to any one user. The first number is a soft limit and the second a hard limit; these flexible dual limits are set to ensure efficient utilization of cluster resources.

** Exclusive nodes versus shared nodes: due to the complexity of the message passing used by parallel jobs, all NYU HPC parallel queues are set up for "exclusive" node use, meaning only one job can run on a node at a time. Serial jobs in the serial queues, on the other hand, can share a node, up to the node's CPU core count.

USQ is for running serial jobs. Please use Bowery for parallel jobs.
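
For example, a serial job could be submitted to the ser2 queue with a script along the following lines (a minimal sketch assuming a PBS/Torque-style scheduler; the job name, directory, and program are hypothetical):

    #!/bin/bash
    #PBS -N myjob                  # job name (hypothetical)
    #PBS -q ser2                   # serial queue; 48-hour walltime limit
    #PBS -l nodes=1:ppn=1          # one core on one node; serial queues share nodes
    #PBS -l walltime=24:00:00      # request less than the 48-hour queue maximum

    cd $SCRATCH/myjob              # run from /scratch, not /home
    ./my_program > output.log     # hypothetical serial program

The script would be submitted with qsub myjob.pbs. For short interactive work, a session can be requested from the interactive queue, e.g. qsub -I -q interactive (4-hour walltime limit, at most 2 jobs per user).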
