Running jobs from /home is a serious violation of HPC policy. Any user who intentionally violates this policy will have their account suspended. The /home SSDs are not designed to serve as scratch disks; heavy job I/O will wear them out quickly.
Make sure you know basic Linux usage. Useful links:
Dalma is accessed through a dedicated set of login nodes, which are designed for lightweight, short tasks. Access to the compute, GPU, and visualization nodes for production runs is controlled by the workload manager Slurm. Production jobs are submitted to Slurm from the login nodes; Slurm then schedules and runs them on the compute nodes.
Getting and Renewing an Account
Please follow the instructions here: Accounts
Once you have an HPC account, you are ready to access the cluster. In the simplest case, use ssh in your terminal:
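A minimal sketch; the hostname below is the usual Dalma login address and may differ for your setup, so check the access instructions if it does not resolve:

```
# Replace <Net-ID> with your NYU NetID; the hostname is an assumption.
ssh <Net-ID>@dalma.abudhabi.nyu.edu
```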
If you use Windows, or are outside the NYU AD/NY network, follow the instructions here: Access Dalma.
Right after logging in to Dalma, you are placed in $HOME. Dalma storage consists of four filesystems: $HOME (/home/<Net-ID>), $FASTSCRATCH (/fastscratch/<Net-ID>), $SCRATCH (/scratch/<Net-ID>) and $ARCHIVE (/archive/<Net-ID>). The first three can be referenced through the environment variables $HOME, $FASTSCRATCH and $SCRATCH. $ARCHIVE can NOT be accessed directly on login or compute nodes (see the tutorial here for usage: The guide to Archive on Dalma).
Their intended usage is summarized below.
Summary of storage
- Submit your jobs and prepare your input / output in $SCRATCH.
- Put your source code, applications and executables in $HOME. NO JOBS SHOULD BE RUN FROM $HOME.
- Back up your data in $ARCHIVE.
- Contact us before using $FASTSCRATCH.
If you encounter a 'disk quota exceeded' error or similar, you have breached the disk quota, either in data size or in number of files, on one or more of the filesystems. Running myquota in a terminal on Dalma shows your current usage and quotas.
$HOME is limited.
The quota of $HOME is only 5GB. Run myquota in a terminal on Dalma to check your current usage and quota.
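For example:

```
myquota    # reports usage and quota for each filesystem
```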
We urge our users to clean up their storage.
Back up your $FASTSCRATCH and $SCRATCH
Files older than 90 days in $FASTSCRATCH and $SCRATCH will be deleted.
Backing up user data is the user's own responsibility. For example, if you delete something accidentally, we unfortunately cannot recover it.
Users cannot request all of the physical memory on a node in their job scripts; some memory is reserved for the system. See the table below, and the example memory request after it.
|Node Type|Number of Nodes|Hardware per Node|Maximum Memory Per Node a User Can Request|Note|
|---|---|---|---|---|
|Standard Compute|236|128GB memory, 28 cores, Broadwell|112GB| |
|Fat|8|192GB memory, 12 cores, Westmere|180GB| |
| | |1TB memory, 32 cores, Westmere| | |
|Ultra Fat|1|2TB memory, 72 cores, Broadwell|2000GB|Consult with us for access to this node|
|BuTinah|16|96GB memory, 12 cores, Westmere|90GB| |
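As a sketch of how this limit is respected in practice, a job targeting a standard compute node should request no more than the 112GB listed above; the --mem directive is standard Slurm, and the value is taken from the table:

```
# Inside your job script: request memory at or below the per-node limit above.
#SBATCH --mem=112G    # limit for a standard compute node
```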
A new Module Environment, NYUAD 3.0, is now available on Dalma. It is part of the User Centric Approach we have been promoting at NYUAD to manage the software stack, and it overcomes the flaws of the traditional module environment when managing complex modern software environments.
First, check which applications are available:
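For example:

```
module avail    # list the modules currently visible
```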
Then, load the desired software. The following example shows how to load a self-sufficient, single-application environment for gromacs:
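A minimal sketch; the exact module name and version string may differ on Dalma, so check module avail first:

```
module load gromacs    # pulls in the application and its dependencies
```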
The following example shows how to load an environment for compiling source code from scratch:
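A sketch of loading a compiler and MPI toolchain; the module names below (gcc, openmpi) are assumptions and may differ on Dalma:

```
module purge                  # start from a clean environment
module load gcc openmpi       # compiler and MPI stack (names are assumptions)
```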
If you cannot find a certain version of a software package (for example, you are looking for Python 3 but only find Python 2), run the following command to make all modules visible first.
As you can see, Python 3 is then available. You can load Python 3 by loading its specific module:
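A sketch of that last step; the version string below is a placeholder, not a module guaranteed to exist on Dalma:

```
module load python/3.x.x    # replace 3.x.x with a version listed by module avail
```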
The batch system on Dalma is Slurm (Simple Linux Utility for Resource Management), a free, open-source resource manager originally developed at LLNL. As on most supercomputers, production jobs on Dalma are submitted to the batch system. To submit a job, you create a submission script in which you specify your resource requirements. Before jobs are dispatched to run, they wait in partitions for available processing resources. There are partitions for various types of use: the parallel partition allocates entire nodes to a job (i.e., only one job per node), while the serial partition allows multiple jobs to share one node.
Computationally heavy jobs are not allowed on login nodes. Use an interactive session instead. To start an interactive session, use the srun command:
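A minimal sketch, assuming the serial partition and a one-hour session; adjust the options to your needs:

```
srun -p serial -n 1 -t 01:00:00 --pty /bin/bash
```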
To exit the interactive session, press Ctrl+d or type exit.
Available Partitions (Queues)
The most commonly used partitions are:
- serial: For jobs using no more than 1 node.
- parallel: For jobs using more than 1 node.
Your per-user job limits can be checked through Slurm's accounting database.
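As a hedged sketch, the standard sacctmgr tool can report the limits attached to your user association; the format fields below are standard Slurm, but the limits actually enforced on Dalma may be configured differently:

```
sacctmgr show assoc user=$USER format=account,user,partition,maxjobs,maxsubmitjobs
```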
Writing a Batch Script
A job script is a text file describing the job and the resources it requires. Slurm has its own directives, but it is similar in many ways to PBS or LSF. Moreover, Slurm maintains good compatibility with PBS scripts; in many cases a PBS script is accepted directly.
Any job with ntasks <= 28 should use #SBATCH -p serial. Any job with ntasks > 28 should use #SBATCH -p parallel and set ntasks to be divisible by 28 (if not using MPI-OpenMP hybrid parallelization).
Serial Job Example
A typical Slurm serial job script looks like this. Let's say you save it as serial-job.sh.
Below, you will find a generic Slurm job script with a gentle explanation of each directive.
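A minimal sketch of such a script; the time limit, memory request, module names, and executable are placeholders to adapt to your own application:

```
#!/bin/bash
#SBATCH -p serial              # serial partition: jobs using no more than 1 node
#SBATCH -n 1                   # number of tasks (cores)
#SBATCH -t 01:00:00            # wall-clock time limit (hh:mm:ss)
#SBATCH --mem=4G               # memory request, below the per-node limit
#SBATCH -o slurm-%j.out        # standard output file (%j expands to the job ID)
#SBATCH -e slurm-%j.err        # standard error file

# Load the modules your application needs (names are placeholders)
module purge
# module load <your-application>

# Run from $SCRATCH, never from $HOME
cd $SCRATCH
./my_program                   # placeholder executable
```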
Then you can submit the saved job script serial-job.sh with:
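For example:

```
sbatch serial-job.sh
```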
Parallel Job Example
A typical Slurm parallel job script looks like this. Let's say you save it as parallel-job.sh.
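A minimal sketch; the task count, time limit, module names, and MPI executable are placeholders, with ntasks kept as a multiple of 28 as required above:

```
#!/bin/bash
#SBATCH -p parallel            # parallel partition: whole nodes are allocated
#SBATCH -n 56                  # number of MPI tasks; multiple of 28 (2 nodes here)
#SBATCH -t 04:00:00            # wall-clock time limit
#SBATCH -o slurm-%j.out        # standard output file
#SBATCH -e slurm-%j.err        # standard error file

# Load the modules your application needs (names are placeholders)
module purge
# module load <compiler> <mpi> <your-application>

cd $SCRATCH
srun ./my_mpi_program          # placeholder MPI executable launched with srun
```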
Then you can submit the saved job script parallel-job.sh with:
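For example:

```
sbatch parallel-job.sh
```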
Submitting a Job
Please be aware that submitting jobs is only possible from login nodes at the moment. Contact us if you need help.
The command sbatch is for submitting jobs. A simple example:
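Assuming your script is saved as my-job.sh (a placeholder name):

```
sbatch my-job.sh
```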
After submission, sbatch returns the corresponding job ID. Once the job is scheduled to run, the script is executed on the first compute node in the allocation.
Checking Job Status
Before and During Job Execution
This command lists all of your jobs:
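For example:

```
squeue -u $USER    # show only jobs belonging to your user
```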
In the output, this means that the job with job ID 31408 has been running (ST: R) for 2 minutes on compute-21-4.
For more verbose information, use scontrol show job:
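For example, using the job ID from above:

```
scontrol show job 31408
```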
After Job Execution
Once the job has finished, it can no longer be inspected with squeue or scontrol show job. At this point, you can inspect the job with sacct.
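For example:

```
sacct -j 31408
```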
The following commands give you extremely verbose information on a job:
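A sketch using standard sacct options; -l/--long prints the full accounting record, and --format selects specific fields:

```
sacct -l -j 31408
sacct -j 31408 --format=JobID,JobName,State,Elapsed,MaxRSS,NodeList
```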
Canceling a Job
If you decide to end a job prematurely, use scancel:
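For example, using the job ID from above:

```
scancel 31408
```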
Use with Caution
To cancel all jobs from your account, run this in a terminal on Dalma:
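```
scancel -u $USER    # cancels every job belonging to your user
```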