
Bowery is no longer in operation - the compute nodes from Bowery are now part of Mercer

 

Bowery

Overview

Bowery is a tightly integrated cluster suitable for demanding computation. The original cluster, composed of 64 8-core nodes, was purchased by ITS in 2009 to support faculty computing throughout the NYU community. In September 2010 the cluster was expanded by 96 12-core nodes and one 16-core node with 256GB of memory, increasing its size from 64 to 161 nodes with a total of 1,680 processor cores. Another four 12-core nodes, each with an Nvidia Tesla M2070 GPU card, were added in December 2011, bringing the totals to 165 nodes and 1,728 cores. In March 2012 the cluster was expanded once more by 64 12-core nodes and one 32-core node with 512GB of memory, growing from 165 to 230 nodes with a total of 2,528 processor cores.

The first expansion was primarily funded by a grant from the Office of Naval Research DURIP program, awarded to Professors Andrew Majda, David Holland, Olivier Pauluis and Shafer Smith, all of the Center for Atmosphere Ocean Science (CAOS), a unit of the Courant Institute. The remaining funding was contributed by ITS. The additional resources are intended to support the development, testing and implementation of new mathematical models for components of Earth's climate system, especially those that must be parameterized in operational global climate models.

The second expansion was a partnership between ITS and the Center for Genomics and Systems Biology (CGSB). Of the 65 nodes added, 20 12-core nodes and the 32-core node were purchased by the CGSB group, and the remaining nodes were purchased by ITS. The CGSB group has priority on its 20 nodes, and the general NYU HPC community may use the unused CPU cycles whenever CGSB users are not running jobs on them.

Bowery contains 74 nodes with 48GB of memory and another 16 nodes with 96GB. It also has a 256GB node for heavy computational needs and a 512GB node that is reserved for CGSB group users.

The CAOS group has priority access to up to 64 nodes of the cluster, and the CGSB group has priority over roughly one third of the nodes on chassis 12 and 13. The NYU community is allocated a minimum of 60% of the cluster's resources, more when available. Allocations are dynamic and flexible to allow for the best utilization of the instrument.

Hardware Specifications

System Name: Bowery
Vendor/Model: Dell/HP
Network: QDR InfiniBand by Mellanox
Operating System: Linux
Number of Login Nodes: 4
Number of Compute Nodes: 230 (including 4 nodes with Nvidia Tesla M2070 GPUs)
Number of Total CPU Cores: 2,528
Total Memory: 8.95 Terabytes
Theoretical Peak Performance: 28.23 Teraflops
CPU Manufacturer / Model: Intel Xeon X5550, X5650, X5675, X7560 and E7-8837 (Nehalem/Westmere generations); Nvidia Tesla M2070 GPUs
CPU Speed: 2.27GHz / 2.67GHz / 3.07GHz
Memory per Node:
  134 nodes at 24GB
  74 nodes at 48GB
  16 nodes at 96GB
  1 node at 256GB
  1 node at 512GB
  4 GPU nodes at 24GB
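
The aggregate figures above can be cross-checked against the per-node breakdown in the compute node table further down. The back-of-the-envelope check below is a sketch only; it assumes 4 double-precision floating-point operations per core per clock cycle (typical for these Intel CPU generations) and counts only the CPUs, not the Tesla GPUs:

  # cores x clock (GHz) per node group, taken from the compute node table below
  echo "scale=2; (64*8*2.67 + 100*12*2.67 + 64*12*3.07 + 16*2.27 + 32*2.67) * 4 / 1000" | bc
  # => about 28.20 TFlops, consistent with the 28.23 Teraflops quoted above
  # total memory, from the "Memory per Node" breakdown above
  echo "scale=2; (134*24 + 74*48 + 16*96 + 256 + 512 + 4*24) / 1024" | bc
  # => about 8.95 TB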


Login and Compute Nodes

The following pbstop snapshot indicates the hardware layout on Bowery:

[pbstop snapshot of the Bowery node layout]
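
To reproduce this view, pbstop can be run from any login node (a sketch; it assumes pbstop is installed there, as the snapshot above suggests):

  pbstop    # text-mode display of each node's cores and the jobs occupying them
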
The table below shows the Login Nodes available on the Bowery Cluster.

Login Node | Accessing from the Bastion | Processors | Memory | CPUs per Node
login-0-0 | bowery0.es.its.nyu.edu | 2 x Intel(R) Xeon(R) CPU X5650 @ 2.67GHz | 48GB | 12
login-0-1 | bowery1.es.its.nyu.edu | 2 x Intel(R) Xeon(R) CPU X5650 @ 2.67GHz | 48GB | 12
login-0-2 | bowery2.es.its.nyu.edu | 2 x Intel(R) Xeon(R) CPU X5650 @ 2.67GHz | 24GB | 12
login-0-3 | bowery3.es.its.nyu.edu | 2 x Intel(R) Xeon(R) CPU X5650 @ 2.67GHz | 24GB | 12

bowery.es.its.nyu.edu has 4 IP addresses that it cycles through; these correspond to the four login nodes bowery0 through bowery3. You may also ssh to any other login node once you have logged in to one of them. See Access and FAQs for instructions.
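
For example (a sketch only; <NetID> is a placeholder for your NYU NetID, and the first two commands assume you are already on the HPC bastion host described in the Access page):

  ssh <NetID>@bowery.es.its.nyu.edu     # round-robins onto one of the four login nodes
  ssh <NetID>@bowery2.es.its.nyu.edu    # or target a specific login node
  ssh login-0-1                         # from any Bowery login node, hop straight to another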

The table below shows the Compute Nodes available on the Bowery Cluster.

Compute Nodes | Node Count | Processors | Memory per Node | CPUs per Node
compute-0-0 to compute-3-15 | 64 | Intel(R) Xeon(R) CPU X5550 @ 2.67GHz | 24GB | 8
compute-4-0 to compute-8-5 | 70 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz | 24GB | 12
compute-8-6 to compute-8-15 | 10 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz | 48GB | 12
compute-9-0 to compute-9-15 | 16 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz | 96GB | 12
compute-10-0 | 1 | Intel(R) Xeon(R) CPU X7560 @ 2.27GHz | 256GB | 16
compute-10-1 | 1 | Intel(R) Xeon(R) CPU E7-8837 @ 2.67GHz | 512GB | 32
compute-11-0 to compute-11-3 | 4 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz (with Nvidia Tesla M2070 GPU) | 24GB | 12
compute-12-0 to compute-13-32 | 64 | Intel(R) Xeon(R) CPU X5675 @ 3.07GHz | 48GB | 12
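
When submitting jobs, the differences between these node groups show up as resource requests. The batch script below is a minimal sketch only; it assumes a Torque/PBS-style scheduler (consistent with the pbstop output above) and the standard nodes/ppn/mem/walltime syntax, and the memory figure, walltime and program name are placeholders:

  #!/bin/bash
  #PBS -l nodes=1:ppn=12          # request one whole 12-core node
  #PBS -l mem=46gb                # placeholder; fits the 48GB nodes listed above
  #PBS -l walltime=04:00:00       # placeholder walltime
  cd $PBS_O_WORKDIR               # start in the directory the job was submitted from
  ./my_program                    # placeholder executable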

Networks 

The table below shows the Networks available on the Bowery Cluster.

Network | Vendor | Purpose | Speed
Ethernet | Dell | Management; serving /home over NFS to the nodes | 1 Gb/sec
InfiniBand | Mellanox | High-speed, low-latency network for MPI and the Lustre parallel file system | 40 Gb/sec

File Systems

The table below shows the File Systems available on the Bowery Cluster.

Mountpoint | Size | FS Type | Backed up? | Availability | Environment Variable | Value
/home | 1TB | ext3 | Yes | All nodes (login and compute) via NFS over 1 Gb/sec Ethernet | $HOME | /home/$USER
/scratch | 301TB | Lustre | No | All nodes (login and compute) via Lustre over InfiniBand (4GB/sec) | $SCRATCH | /scratch/$USER
/archive | 200TB | ZFS | Yes | Login nodes only, via NFS over a Gb/sec connection | $ARCHIVE | /archive/$USER

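In day-to-day use these file systems are normally addressed through the environment variables above. A short sketch of a typical pattern (directory and file names are placeholders; it assumes the variables are set for your account as shown in the table):

  cd $SCRATCH                      # large, fast Lustre space for running jobs (not backed up)
  mkdir -p myrun && cd myrun       # placeholder run directory
  cp $HOME/inputs/params.txt .     # small, backed-up input files live under $HOME
  # ... run the computation ...
  cp results.tar.gz $ARCHIVE/      # /archive is mounted on the login nodes only,
                                   # so copy results there from a login node, not from inside a job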