A cluster is a group of servers (also known as compute nodes) that are managed as a single system. Clusters let researchers use large numbers of identically configured nodes for parallel jobs, or individual nodes and single CPU cores for serial jobs. NYU HPC maintains several clusters.
Each cluster is made up of computers known as nodes. There are two major types of nodes: login nodes and compute nodes.
Login nodes are used for file editing, data transfer, light compiling and debugging, and initiating batch and interactive sessions via the scheduler.
Compute nodes are used for running batch jobs and interactive sessions. Users can access compute nodes only through the scheduler.
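For example, a batch job is described in a job script and submitted to the scheduler from a login node. The following is a minimal sketch, assuming the scheduler is Slurm; the job name and the program being run are hypothetical placeholders:

```shell
#!/bin/bash
#SBATCH --job-name=my_job        # name shown in the queue (placeholder)
#SBATCH --nodes=1                # run on a single compute node
#SBATCH --ntasks=1               # one task (a serial job)
#SBATCH --cpus-per-task=1        # one CPU core
#SBATCH --mem=4GB                # memory for the job
#SBATCH --time=01:00:00          # wall-clock limit (HH:MM:SS)

# Everything below runs on a compute node, not the login node.
module purge
./my_program                     # hypothetical executable
```

Submitting with `sbatch my_job.sbatch` places the job in the queue; the scheduler starts it on a compute node when resources become available. Interactive sessions are requested through the scheduler in the same spirit, for example with `srun --pty /bin/bash`.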
All NYU HPC resources are accessed via a secure bastion host. The bastion host is a secure landing pad that leads to the clusters' login nodes.
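In practice this means two SSH hops: first to the bastion host, then on to a cluster login node. An entry in `~/.ssh/config` can automate the two hops. This is a sketch only; the hostnames below are illustrative assumptions and should be replaced with the ones provided for your account:

```shell
# ~/.ssh/config -- hostnames are examples; substitute those for your account
Host hpc-gateway
    HostName bastion.example.edu     # bastion host (assumed name)
    User NetID                       # replace with your NYU NetID

Host hpc-cluster
    HostName cluster.example.edu     # cluster login node (assumed name)
    User NetID
    ProxyJump hpc-gateway            # tunnel through the bastion host
```

With this in place, `ssh hpc-cluster` reaches a login node in a single command, with the bastion hop handled transparently by `ProxyJump`.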
Which Cluster Should I Use?
NYU HPC offers a range of production- and experimental-level services for the research community. The following guidelines can help you select the cluster that best fits your project's requirements.
- If you would like to experiment with accelerators and GPU computing
- For tightly coupled multi-node parallel jobs that need a low-latency network
- If you need linear scaling
- If you need a large single-memory system
- If you are running job arrays
- For small serial jobs
- To run a series of MATLAB jobs
- To run parallel MATLAB jobs
- To run statistical computing software such as R, Rmpi, or Snow
- To run Stata
- If you prefer Intel CPUs
- If you prefer AMD CPUs
- For running both serial and parallel jobs
- For login-node-based post-processing and visualization
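As an illustration of the job-array case above, an array submission lets one script run many independent tasks. This is a sketch assuming a Slurm scheduler; the program name and input-file naming scheme are hypothetical:

```shell
#!/bin/bash
#SBATCH --job-name=param_sweep   # name shown in the queue (placeholder)
#SBATCH --array=1-10             # launch 10 tasks, with indices 1 through 10
#SBATCH --ntasks=1               # each array task is a serial job
#SBATCH --time=00:30:00          # wall-clock limit per task

# Each task receives its own index in SLURM_ARRAY_TASK_ID, so one
# script can process many inputs independently and in parallel.
./analyze input_${SLURM_ARRAY_TASK_ID}.dat   # hypothetical program and inputs
```

Submitted once with `sbatch`, this queues ten tasks that the scheduler can place on compute nodes as capacity allows, which is why job arrays are a good fit for large numbers of similar, independent runs.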
If you are still not sure which cluster is best for your needs, you are welcome to discuss your computational requirements with the HPC staff. Contact us at email@example.com.