

ViDA OpenStack


The ViDA OpenStack cluster is currently being deployed and is not yet in production.


OpenStack Cloud provides Infrastructure as a Service (IaaS).

  • Users request the resources they need for their project and are given low-level access to those resources, with a particular system image installed.

  • The system then installs the requested images and gives control to the user.
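In a stock OpenStack deployment, this request/provision flow is typically driven through the standard `openstack` CLI. The sketch below uses hypothetical image, flavor, and keypair names, not resources that necessarily exist on the ViDA cluster:

```shell
# Sketch of a self-service request with the standard OpenStack CLI.
# All names (image, flavor, keypair, server) are hypothetical examples.
openstack image list                 # browse the available system images
openstack flavor list                # browse the available resource sizes

# Launch an instance with the chosen image and resource allocation
openstack server create \
    --image ubuntu-20.04 \
    --flavor m1.medium \
    --key-name my-keypair \
    my-research-vm

# Once the instance is active, the user logs in with full (sudo) access
ssh ubuntu@<instance-ip>
```

From this point on, the user administers the instance directly, without involving support staff.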

Users are free to utilize the system in whatever manner their research requires.

  • They can install packages, customize the system, and run/modify system services.

  • This allows experienced researchers to work independently of support staff, improving productivity and freeing support staff to work on larger issues.

OpenStack™ Cloud also allows for operations that are not normally available on traditional clusters

  • Users can create snapshots of their systems, allowing them to go back to past states if an experiment goes awry or to reproduce an experiment at a later date.

  • Users can create virtual networks, allowing control of communication between different nodes. This is particularly useful for developing networked applications.
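Both operations map onto standard `openstack` CLI commands. The server, image, and network names below are hypothetical examples:

```shell
# Sketch: snapshot a server, roll back from it, and build a private network.
# All resource names here are hypothetical examples.

# Snapshot the current state of a running instance
openstack server image create --name exp-baseline my-research-vm

# Later, roll back by launching a fresh server from that snapshot
openstack server create --image exp-baseline --flavor m1.medium restored-vm

# Create an isolated virtual network and subnet for inter-node experiments
openstack network create exp-net
openstack subnet create --network exp-net \
    --subnet-range 10.0.0.0/24 exp-subnet

# Attach a new instance to the private network
openstack server create --image exp-baseline --flavor m1.medium \
    --network exp-net node-1
```

Instances on `exp-net` can only reach each other unless a router is attached, which is what makes such networks useful for controlled networking experiments.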

OpenStack also offers a rich library of supported environments

  • These environments can be provisioned with the push of a button, making the process seamless for HPC staff.

  • These environments do not necessarily need to provide root access, ensuring that the system works as expected.

  • This could allow HPC departments to provide the same level of support to users who don’t need full control of the system or don’t have the technical skills necessary to manage one.

Hardware Specifications

System Name: HPC Cluster OpenStack


  • A 100 Gbit Ethernet network will allow for fast communication between compute servers and between the compute and storage servers.

  • 10 Gbit and 1 Gbit networks will provide control of the cluster and uplinks to the NYU network.


The heart of this new cluster will consist of 25 new servers

  • Each server will have two 14-core Intel Xeon E5-2690 v4 processors, for a total of 28 cores running at 2.6 GHz.

  • The servers will each have 256 GB of DDR4 RAM.

  • 20 of the servers will have four Nvidia GTX 1080 Ti GPUs, each with 11 GB of GDDR5X memory.

  • The remaining 5 servers will have enterprise-class Nvidia cards.

  • New compute servers can be added, regardless of specs or vendor.
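Taken together, the figures above imply the following aggregate capacity. A quick back-of-the-envelope check in Python (the enterprise-class GPU counts are unspecified, so only the 1080 Ti nodes are tallied):

```python
# Aggregate capacity of the 25-server cluster, from the figures above.
servers = 25
cores_per_server = 2 * 14          # two 14-core E5-2690 v4 CPUs per server
ram_gb_per_server = 256            # DDR4 RAM per server
gpu_servers, gpus_each = 20, 4     # GTX 1080 Ti nodes only

total_cores = servers * cores_per_server
total_ram_tb = servers * ram_gb_per_server / 1024
total_gpus = gpu_servers * gpus_each

print(total_cores, total_ram_tb, total_gpus)  # -> 700 6.25 80
```

That is 700 CPU cores, 6.25 TB of RAM, and 80 consumer GPUs, plus whatever the 5 enterprise-GPU servers contribute.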



The compute nodes will be backed by a Ceph storage system.

  • The system will have 300 TB of raw storage (100 TB usable).

  • Ceph will provide storage for images, volumes and data.

  • The system will provide researchers with high-bandwidth, consistent storage.

  • The storage will be scalable: new storage servers can be added as needed, without vendor lock-in.
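The raw-versus-usable figures above are consistent with Ceph's default three-way replication, under which every object is stored on three separate disks:

```python
# Relation between raw and usable capacity in a replicated Ceph pool.
# The 300 TB raw / 100 TB usable figures above match Ceph's default
# replicated pool size of 3 (each object stored three times).
raw_tb = 300
replicas = 3                       # Ceph default replicated pool size
usable_tb = raw_tb / replicas

print(usable_tb)  # -> 100.0
```

Adding storage servers raises `raw_tb`, and usable capacity grows proportionally, which is what makes the cluster's storage scale without vendor lock-in.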