
Union Square is no longer in operation. Users are advised to use the newest cluster, Prince, instead.


Overview

The Union Square Cluster is a multi-purpose, high-performance computing resource for the NYU research community.

Hardware Specifications

System Name                  | Union Square (USQ)
Network                      | DDR InfiniBand
Operating System             | Linux
Number of Login Nodes        | 2
Number of Compute Nodes      | 73
Number of Compute CPU Cores  | 584
Total Memory                 | 1.5 Terabytes
Theoretical Peak Performance | 4.47 Teraflops
CPU Manufacturer / Model     | Intel Xeon Quad-Core
CPU Speed                    | 2.33GHz
Memory per Node              | 16GB or 32GB


Available Software on USQ

Note

Always run the command module avail from the terminal to view the up-to-date list of available software.

Environment Modules

NYU HPC uses an open-source software package called "Environment Modules" (or Modules for short), which allows you to add various path definitions to your shell environment. Default compilers, applications, and libraries can be set by individual Modules commands or combinations of them. Modules are not applications; rather, they simply add the locations of applications to your environment. You can list the available Modules using the command:

$ module avail

You can load a module, in this case the C/C++ Intel Compiler module, using the command:

$ module load intel/11.1.046

Once the module for Intel C/C++ is added to your shell environment, Intel C/C++ binaries, headers, libraries and help pages will be available to your session.
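
For example, given a C source file hello.c (a hypothetical file name), you could then compile it with the Intel C compiler:

$ icc -O2 -o hello hello.c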

You can load multiple modules in one line. For example, to load the Intel compilers and OpenMPI:

$ module load intel/11.1.046 openmpi/intel/1.4.3
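
With both modules loaded, OpenMPI's compiler wrappers are also on your path. For instance, to compile an MPI program (mpi_hello.c being a hypothetical source file):

$ mpicc -o mpi_hello mpi_hello.c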

You can view the added modules using the command:

$ module list

To see specific module information, such as which other modules must be loaded before a given module, you can use the command module show <module name>. For example, to show more information on the OpenMPI module:

$ module show openmpi

To unload a module, use the command module unload <module name>. For example, to unload a loaded Intel compiler module:

$ module unload intel

To unload several of your loaded modules at once, list them after the command: module unload <module name> <module name>. For example, to unload intel and openmpi while leaving the rest loaded:

$ module unload openmpi intel

To unload all loaded modules at once:

$ module purge

You can use the --help option for more information on the module command:

$ module --help

 

Scheduler

NYU HPC uses TORQUE/Moab, an industry-standard scheduling system, to allow large numbers of jobs submitted by many researchers to run in an orderly and equitable fashion. Please see the Running Jobs and Queues sections for cluster-specific instructions.
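
The general pattern is to put your resource requests and commands in a PBS script and submit it with qsub. Below is a minimal sketch of such a script; the job name, resource values, and program are illustrative assumptions, not cluster defaults:

#!/bin/bash
#PBS -N mpi_hello                 # job name (hypothetical)
#PBS -l nodes=1:ppn=8             # request one node with 8 cores
#PBS -l walltime=01:00:00         # one hour wall-clock limit
#PBS -l mem=8gb                   # total memory request

cd $PBS_O_WORKDIR                 # run from the directory qsub was invoked in
module load intel/11.1.046 openmpi/intel/1.4.3
mpirun ./mpi_hello                # launch the MPI program on the allocated cores

Assuming the script is saved as mpi_hello.pbs, you would submit it with:

$ qsub mpi_hello.pbs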
   

qsub Tutorial

   

Login Nodes

Below are the Login Nodes available on the Union Square Cluster.

Login Node          | Processors                               | Memory
usq1.es.its.nyu.edu | 2 x Intel(R) Xeon(R) CPU E5345 @ 2.33GHz | 16GB
usq2.es.its.nyu.edu | 2 x Intel(R) Xeon(R) CPU E5345 @ 2.33GHz | 16GB
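
You can connect to a login node with SSH using your NetID; a minimal sketch, assuming standard SSH access:

$ ssh NetID@usq1.es.its.nyu.edu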

Compute Nodes

Below are the Compute Nodes available on the Union Square Cluster.

Compute Nodes                  | Count | Processors                           | Memory | CPUs per Node
compute-0-21, compute-0-44     | 2     | Intel(R) Xeon(R) CPU E5345 @ 2.33GHz | 16GB   | 8
compute-0-64 to compute-0-71   | 8     | Intel(R) Xeon(R) CPU E5410 @ 2.33GHz | 16GB   | 8
compute-0-73 to compute-0-83   | 11    | Intel(R) Xeon(R) CPU E5410 @ 2.33GHz | 16GB   | 8
compute-0-85 to compute-0-97   | 13    | Intel(R) Xeon(R) CPU E5410 @ 2.33GHz | 16GB   | 8
compute-0-99 to compute-0-114  | 16    | Intel(R) Xeon(R) CPU E5410 @ 2.33GHz | 16GB   | 8
compute-0-116 to compute-0-122 | 7     | Intel(R) Xeon(R) CPU E5345 @ 2.33GHz | 32GB   | 8
compute-0-124 to compute-0-139 | 16    | Intel(R) Xeon(R) CPU E5345 @ 2.33GHz | 32GB   | 8

Networks

The table below describes the two networks available on the Union Square Cluster.

Network    | Vendor | Purpose                                                          | Speed
Ethernet   | Dell   | Management; serving /home over NFS to nodes                      | 1 Gb/sec
InfiniBand | Cisco  | High-speed, low-latency MPI and the Lustre parallel file system | 20 Gb/sec theoretical, 12-13 Gb/sec benchmarked

File Systems

The table below shows the File Systems available on the Union Square Cluster.

Mountpoint | Size  | FS Type | Backed up? | Availability                                                   | Variable | Value
/home      | 1TB   | ext3    | Yes        | All nodes (login, compute) through NFS over 1 Gb/sec Ethernet  | $HOME    | /home/NetID
/scratch   | 301TB | Lustre  | NO!        | All nodes (login, compute) through Lustre over 4GB/sec InfiniBand | $SCRATCH | /scratch/NetID
/archive   | 102TB | ZFS     | Yes        | Only login nodes have a 1 Gb/sec connection to /archive        | $ARCHIVE | /archive/NetID

File System Usage

The /scratch file system is used for job input/output and compilation. Please clean out this space regularly. Your individual allocation in the /scratch file system is limited to 5TB.

The /archive file system is available for long-term storage. Please move important data that you need to keep for longer than 30 days from /scratch/NetID to /archive/NetID, as shown below. Your individual allocation in the /archive file system is limited to 2TB.
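
For example, to copy a finished results directory (a hypothetical name) from scratch to archive and then free the scratch space, you might run the following; verify the copy completed before deleting anything from /scratch:

$ cp -r $SCRATCH/results $ARCHIVE/results
$ rm -rf $SCRATCH/results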

For additional file systems mounted to individual clusters, please see the Clusters section of this document.

Queues

For information on queues on USQ see the Queues section.
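
To list the queues defined on the cluster and their limits from a login node, you can use TORQUE's qstat command with the -q option:

$ qstat -q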

   

 

 

PBS Script Generator
An interactive tool that generates a PBS script based on the user's input. See this page for more details.
Front-Line HPC Consulting
HPC consultations are available once a week, Mondays 1-3 PM. Appointments are required; please request one by emailing hpc@nyu.edu.

 

 

 
