
NYU HPC Policies

The policies and procedures governing access to and usage of the shared NYU HPC clusters managed and hosted by IT have been approved by the Research Faculty Advisory Group. These policies are necessary to ensure that the resources are equitably shared, properly used, and effective in supporting the needs of all researchers.

Contact information

Your NYU email account is the official means of HPC communication. All HPC announcements and communications about downtime, upgrades, etc., will be sent to the account holder's NetID-associated email address (NetID@nyu.edu). It is each account owner's responsibility to read these messages.

Acceptable Use Policy

All computer systems operated by New York University (NYU) may be accessed only by authorized users. Authorized users are granted specific, limited privileges in their use of the system. The data and programs in this system may not be accessed, copied, modified or disclosed without prior approval of NYU. Access and use, or causing access and use, of this computer system by anyone other than as permitted by NYU are strictly prohibited by NYU and by law, and may subject an unauthorized user, including unauthorized employees, to criminal and civil penalties as well as NYU-initiated disciplinary proceedings. The use of this system is routinely monitored and recorded, and anyone accessing this system consents to such monitoring and recording. 

Questions regarding this access policy should be directed (by email) to hpc@nyu.edu or (by phone) to 212-998-3333.

Storage Allocations and Policies

Each individual researcher is assigned a standard storage allocation or quota on /home, /scratch and /archive. Researchers who use more than their allocated space will be blocked from submitting jobs until they clean their space and reduce their usage, or in the case of /archive, purchase additional storage. The chart below shows the storage allocations for individual accounts and the cost of additional /archive space.

Space      Space Purpose                                        Backed up?  Allocation                    Additional Storage Cost  Total Size  File System
/home      Program development space; storing small files you   Yes         5GB                           N/A                      ~1TB*       NFS
           want to keep long term, e.g. source code, scripts
/archive   Long-term storage                                    Yes         2TB**                         $500/year for 1TB        200TB       ZFS
/scratch   Computational work space                             No          5TB; inode quota: 1 million   N/A                      301TB       Lustre

Important: Of all the spaces above, only /scratch should be used for computational work. Please do not write to /home when running jobs, as it can easily fill up.

*Note: The capacity of the /home file system varies from cluster to cluster. Unlike /scratch and /archive, /home is not mounted across clusters; each cluster has its own /home, its own user base, and its own /home allocation policy.

To purchase additional storage, send email to hpc@nyu.edu.

/scratch Policy

The /scratch storage system is a shared resource that needs to run as efficiently as possible for the benefit of all. All HPC account holders have a /scratch disk-space quota of 5TB and an inode quota of 1 million. There is no system backup for data in /scratch; it is the user's responsibility to back up data. We cannot recover any data in /scratch, including files lost to system crashes or hardware failure, so it is important to make copies of your important data regularly.
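As a quick sketch for checking your own usage against these limits (the `lfs quota` Lustre client command and the /scratch/$USER path are assumptions; adjust to your cluster's actual layout):

```shell
# Report your current block and inode usage on the Lustre /scratch
# file system (requires the Lustre client tools; run where /scratch
# is mounted, e.g. a login node).
lfs quota -uh $USER /scratch

# Independently count the inodes (files and directories) under your
# scratch directory; /scratch/$USER is a hypothetical path.
find /scratch/$USER | wc -l
```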

  • All inactive files older than 30 days will be removed. It is a policy violation to use scripts to change file access times. Any user found violating this policy will have their HPC account locked; a second violation may result in the account being deactivated.
  • We strongly urge users to clean up their data in /scratch regularly, backing up files they need to retain to /archive or elsewhere.
  • If total /scratch usage rises above 75%, file system performance degrades, and all users will be asked to clean up.
  • We retain the right to clean up files on /scratch at any time if needed to maintain system performance.
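To see which files are at risk under the 30-day rule and to bundle them into /archive before they are removed, something along these lines may help (the /scratch/$USER and /archive/$USER paths are assumptions, and access times reported by `find -atime` depend on how the file system records them):

```shell
# List files whose last access time is more than 30 days ago --
# candidates for the automatic cleanup.
find /scratch/$USER -type f -atime +30 -ls

# Pack those files into a single compressed tar archive in /archive.
# One large archive file also suits /archive better than copying many
# small files individually.
find /scratch/$USER -type f -atime +30 -print0 \
  | tar --null -czf /archive/$USER/scratch-backup-$(date +%Y%m%d).tar.gz -T -
```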

Some recommendations:

  • Do not put important source code, scripts, libraries, or executables in /scratch. These important files should be stored in /home.
  • Do not create soft links in /home pointing to folders in /scratch as a way of accessing /scratch.
  • We strongly suggest working with a few large files rather than many small ones.
  • For temporary files that are accessed frequently while a job runs, use the compute node's local disk, or even its RAM file system, to reduce the I/O load on /scratch.
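The last recommendation can be sketched as a job-script fragment. Whether your scheduler sets $TMPDIR to node-local disk, and whether /dev/shm is usable, are assumptions to verify on your cluster's compute nodes:

```shell
# Stage heavy temporary I/O on the compute node rather than /scratch.
# Prefer scheduler-provided node-local disk ($TMPDIR, if set) and fall
# back to the RAM-backed tmpfs at /dev/shm (a hypothetical fallback).
WORKDIR=${TMPDIR:-/dev/shm/$USER/job-tmp}
mkdir -p "$WORKDIR"

# ... run your program with its temporary files directed at $WORKDIR ...

# Copy only final output back to /scratch, then clean up the node.
mkdir -p /scratch/$USER/results
cp -r "$WORKDIR"/. /scratch/$USER/results/
rm -rf "$WORKDIR"
```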

Group Quotas on /archive

HPC accounts include an allocation of 2 TB of storage in /archive. An HPC sponsor may request that their quota and the quotas of their research group be combined into a single larger space, under the following conditions:

  • Requests must be made by the sponsor
  • All of the members of the group must share the same sponsor
  • All group members must be active users of the HPC system

The sponsor's account will hold the full quota and each individual's quota will be set to 0.

Requests will be considered by HPC management, which will assess both the need for the space and its availability.

The maximum group quota is 10 TB. Additional storage can be added for $500/TB/year (based on availability).

To apply for a group quota, please use the form at this link. You will receive a response to your request within 1 week.

Automatic File Deletion Policy

The table below describes the policy concerning the automatic deletion of files.

Space      Automatic File Deletion Policy
/home      None
/archive   None
/scratch   Files may be deleted as needed without warning if required for system productivity.
ALL        All /home and /archive files associated with expired accounts are automatically deleted 90 days after account expiration. /scratch files are automatically deleted no later than 30 days after account expiration.

HPC Hosting and Equipment Life Cycle Policy

IT data centers are secure, state-of-the-art facilities with 24/7 monitoring and redundant AC and power. All HPC equipment in ITS data centers is taken out of service after 4 years. For used or refurbished equipment placed in a data center, this period is measured from the original manufacture date rather than the date of installation.

Allocation of space and other scarce resources in ITS facilities is determined by the Research Faculty Advisory Group. Effective Summer 2013, there are no longer co-location fees for research clusters hosted in these facilities for individual researchers or departments. All requests to qualify for this service, and/or to extend the managed life of a cluster beyond 4 years (to a maximum of 1 additional year), must be approved by the Research Faculty Advisory Group. Requests should be emailed to hpc@nyu.edu for submission to the Research Faculty Advisory Group; requests to extend beyond 4 years (up to 1 additional year) should include the following:

  • Reason for extension request
  • Plans for replacement or retirement of cluster
  • Length of time of extension being requested

Research Faculty Advisory Group

The Research Faculty Advisory Group is charged with facilitating ongoing communication among faculty, the central Information Technology Services (ITS) division and the academic leadership of the sciences including the Chairs, Deans and Provost with the goal of providing an enhanced technological infrastructure for scientific research.

The group advises ITS and the Science leadership on issues of resource allocation, prioritization, and the availability of centralized and customized resources. This group meets about four times a year.

Research Faculty Advisory Group Members 2012/2013

School/Division                             Department                              Name
College of Dentistry                        Basic Science and Craniofacial Biology  Lou Terracio
Faculty of Arts & Science                   Anthropology                            Todd Disotell
                                            Biology                                 Richard Bonneau
                                            Biology                                 Kristin Gunsalus
                                            Center for Neural Science               Eero Simoncelli
                                            Chemistry                               Mark Tuckerman
                                            Economics                               Ahu Gemici
                                            Psychology                              Michael Landy
                                            Psychology                              Todd Gureckis
Courant Institute of Mathematical Sciences  Computer Science                        Marsha Berger
                                            Center for Atmosphere Ocean Science     Shafer Smith
Silver School of Social Work                Social Work                             Wen-Jui Han
Stern Business School                       IOMS/IS Group                           Norman White
