
ViDA OpenStack

The ViDA OpenStack cluster is currently in its implementation phase and is not yet in production.

Features

OpenStack Cloud provides Infrastructure as a Service.

  • Users request the resources they need for their projects.

  • The system configures the requested resources and gives the user low-level control (a minimal example using the OpenStack Python SDK follows below).
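
As an illustration of this workflow, below is a minimal sketch of requesting an instance through the OpenStack API with the openstacksdk Python library. The cloud entry, image, flavor, network, and key pair names are placeholder assumptions, not actual ViDA resource names; the same request can also be made from the Horizon dashboard or the openstack command-line client.

    import openstack

    # Credentials are read from clouds.yaml or OS_* environment variables.
    conn = openstack.connect(cloud="vida-openstack")     # hypothetical cloud entry

    image = conn.compute.find_image("ubuntu-22.04")      # assumed image name
    flavor = conn.compute.find_flavor("gpu.p100.1")      # assumed flavor name
    network = conn.network.find_network("project-net")   # assumed network name

    server = conn.compute.create_server(
        name="research-node-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
        key_name="my-keypair",                           # assumed key pair name
    )
    # Wait until the instance is ACTIVE; the user then has low-level (root) control.
    server = conn.compute.wait_for_server(server)
    print(server.status, server.addresses)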

Users are free to utilize the system in whatever manner their research requires

  • They can install packages, customize the system, and run or modify system services.

  • This allows experienced researchers to work independently of support staff, improving productivity and freeing support staff to work on larger issues.

OpenStack Cloud allows operations that are not normally available on traditional clusters

  • Users can create snapshots of their systems, allowing them to return to a past state if an experiment goes awry or to reproduce an experiment at a later date.

  • Users can create virtual networks, giving them control over communication between nodes. This is particularly useful for developing networking applications (see the sketch after this list).
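
Both operations map onto straightforward API calls. The sketch below shows one way to take a snapshot and create an isolated network with openstacksdk; the cloud entry, server name, snapshot name, network name, and address range are assumptions for illustration.

    import openstack

    conn = openstack.connect(cloud="vida-openstack")        # hypothetical cloud entry

    # Snapshot a running instance so its state can be restored or reproduced later.
    server = conn.compute.find_server("research-node-01")   # assumed server name
    conn.compute.create_server_image(server, name="node-01-pre-run")

    # Create an isolated virtual network and subnet to control traffic between nodes.
    net = conn.network.create_network(name="experiment-net")
    conn.network.create_subnet(
        network_id=net.id,
        ip_version=4,
        cidr="192.168.50.0/24",                              # assumed address range
    )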

Rich library of supported environments

  • These environments can be provisioned at the push of a button, making setup seamless (the sketch after this list shows how a user could browse the catalog).

  • These environments do not necessarily need to provide root access, ensuring that the system continues to work as expected.

  • This could allow administrators to provide the same level of support to users who do not need full control of the system or do not have the necessary technical skills.
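
As a rough sketch of how a user could browse this library, the prebuilt environments would be published as images in the cloud's catalog; the cloud entry name below is a placeholder.

    import openstack

    conn = openstack.connect(cloud="vida-openstack")   # hypothetical cloud entry

    # Each image in the catalog corresponds to a prebuilt environment; launching
    # one uses the same create_server call shown earlier.
    for image in conn.image.images():
        print(image.name, image.status, image.size)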

Hardware Specifications

System Name

HPC Cluster OpenStack

Network

  • A 100 Gbit Ethernet network will allow fast communication among the compute servers and between the compute and storage servers.

  • A 10 Gbit network will provide uplinks to the NYU network.

  • 1 Gbit networks will provide management of the cluster.

Servers

The cluster will consist of 25 new servers

  • Each server will have two 14-core Intel Xeon E5-2690 v4 processors, for a total of 28 cores running at 2.6 GHz.

  • The servers will each have 256 GB of DDR4 RAM.

  • 20 of the servers will have four NVIDIA P100 GPUs with 12 GB of RAM each.

  • The remaining 5 servers will have four NVIDIA P40 GPUs with 24 GB of RAM each.

  • New compute servers can be added regardless of specifications or vendor.

 

Storage

The compute nodes will be backed by a Ceph storage system.

  • The system will provide 300 TB of raw storage (100 TB usable).

  • Ceph will provide storage for images, volumes, and data (the sketch after this list shows a typical volume workflow).

  • The system will provide researchers with high-bandwidth, consistent storage.

  • The storage will be scalable: new storage servers can be added as needed, without vendor lock-in.
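
As a minimal sketch of how a researcher might use the Ceph-backed block storage, the example below creates a volume and attaches it to a running instance with openstacksdk; the cloud entry, server name, volume name, and size are assumptions.

    import openstack

    conn = openstack.connect(cloud="vida-openstack")       # hypothetical cloud entry

    # Create a 500 GB volume carved out of the Ceph pool and wait for it to be ready.
    volume = conn.create_volume(size=500, name="dataset-vol", wait=True)

    # Attach it to an existing instance; inside the guest it appears as a new
    # block device (e.g. /dev/vdb) that can be formatted and mounted.
    server = conn.get_server("research-node-01")            # assumed server name
    conn.attach_volume(server, volume)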
