Running jobs from /home is a serious violation of HPC policy. Users who intentionally violate this policy will have their accounts suspended. The /home SSDs are not designed to serve as scratch disks; using them that way will wear them out quickly.

You cannot submit jobs until you pass our online training. Dalma Training

Prerequisite

Make sure you know basic Linux usage. Useful links:

Usage Model

Dalma is accessed through a dedicated set of login nodes, which are designed for lightweight, short tasks. Access to the compute, GPU and visualization nodes for production runs is controlled by the workload manager Slurm. Production jobs are submitted to Slurm from the login nodes; Slurm then schedules and runs them on the compute nodes.

 

Dalma Access Model

Getting and Renewing an Account

Please follow the instructions here: Accounts

Access

Once you have an HPC account, you are ready to access the cluster. In the simplest case, use ssh in your terminal:

Login Dalma
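A minimal sketch (replace <NetID> with your NYU NetID; the login hostname shown here is an assumption, so use the address given in Access Dalma if it differs):

    ssh <NetID>@dalma.abudhabi.nyu.edu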

If you use Windows or are outside the NYU AD/NY network, follow the instructions here: Access Dalma.

 

Storage System

Right after logging in to Dalma, you are automatically directed to $HOME. Dalma storage consists of 4 filesystems: $HOME (/home/<Net-ID>), $FASTSCRATCH (/fastscratch/<Net-ID>), $SCRATCH (/scratch/<Net-ID>) and $ARCHIVE (/archive/<Net-ID>), which can be referenced through the environment variables $HOME, $FASTSCRATCH and $SCRATCH. $ARCHIVE can NOT be accessed directly on login or compute nodes (see the tutorial here for usage: The guide to Archive on Dalma).

Access the different filesystems using the environment variables
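For example, a few standard shell one-liners (what they print depends on your own directories):

    echo $HOME $SCRATCH $FASTSCRATCH    # print the paths the variables point to
    cd $SCRATCH                         # move to your scratch directory
    ls $SCRATCH                         # list its contents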

Usage is summarized below.

Summary of storage

Submit your job and prepare your input / output in $SCRATCH

Put your source code, applications and executables in $HOME. NO JOBS SHOULD BE RUN FROM $HOME

Back up your data in $ARCHIVE

Contact us about usage of $FASTSCRATCH

If you encounter a 'disk quota exceeded' error or similar, it is because you breached the disk quota, either data size or number of files, on one or more of the filesystems. Running myquota in the terminal on Dalma shows your current usage and quota.

HOME is limited.

The quota of $HOME is only 5GB. Run myquota in terminal on Dalma to check your current usage and quota.

We urge our users to clean up their storage.

Back up your FASTSCRATCH and SCRATCH

Files older than 90 days in $FASTSCRATCH and $SCRATCH will be deleted.

Backing up your data is your own responsibility. For example, if you delete something accidentally, we unfortunately cannot recover it.
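A sketch using standard tools to check your usage and spot files approaching the 90-day limit (myquota is the site command mentioned above; the paths are your own):

    myquota                              # current usage and quota per filesystem
    du -sh $SCRATCH/*                    # size of each top-level item in scratch
    find $SCRATCH -type f -mtime +80     # files not modified for more than 80 days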

 

 

Data Transfer

You can use either the terminal or FileZilla to transfer your data to / from Dalma, as instructed here: File Transfer using rsync and File Transfer using FileZilla.
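A hedged example with rsync run from your local machine (the hostname and paths are assumptions; see the linked guides for the exact syntax recommended for Dalma):

    # push a local directory to your scratch space
    rsync -avP ./mydata/ <NetID>@dalma.abudhabi.nyu.edu:/scratch/<NetID>/mydata/
    # pull results back to your local machine
    rsync -avP <NetID>@dalma.abudhabi.nyu.edu:/scratch/<NetID>/results/ ./results/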

Hardware Overview

Users cannot request all of the physical memory on a node in their job scripts; some memory is reserved for the system. See the table below.

Node Type        | Number of Nodes | Hardware per Node                 | Maximum Memory Per Node a User Can Request | Note
Standard Compute | 236             | 128GB memory, 28 cores, Broadwell | 112GB                                      |
Fat              | 8               | 192GB memory, 12 cores, Westmere  | 180GB                                      |
Super Fat        | 1               | 1TB memory, 32 cores, Westmere    | 1000GB                                     |
Ultra Fat        | 1               | 2TB memory, 72 cores, Broadwell   | 2000GB                                     | Consult with us for access to this node
BuTinah          | 16              | 96GB memory, 12 cores, Westmere   | 90GB                                       |

 

 

Software Overview

A new Module Environment is now available on Dalma, part of the User Centric Approach that we have been promoting at NYUAD to manage the software stack. This new Module Environment, NYUAD 3.0, overcomes the flaws of the traditional modules environment when used to manage complex modern software environments.

First, check which applications are available:
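For example (module avail is the standard command; the list you see depends on what is installed on Dalma):

    module avail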

Then load the desired software. The following example shows how to load a self-sufficient, single-application environment for Gromacs.

Load a self-sufficient-single-application environment for Gromacs
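A minimal sketch, assuming the module is simply named gromacs (check module avail for the exact name and version on Dalma):

    module purge            # start from a clean environment
    module load gromacs     # loads Gromacs together with its dependencies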

The following example shows how to load an environment for compiling source code from scratch.

Load GCC, OpenMPI and FFTW for Compiling Source Code
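A sketch, assuming the modules are named gcc, openmpi and fftw3 (the exact names and versions may differ; check module avail):

    module purge
    module load gcc openmpi fftw3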

If you cannot find a certain version of a software package (for example, you are looking for Python 3 but only find Python 2), first run the command that makes all modules visible (see the sketch below).

Python 3 then becomes available, and you can load it by loading the specific module.
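A sketch of both steps. The command that makes all modules visible was shown in a code block here and is site-specific; the sketch assumes a meta-module named all, and the Python version string is illustrative:

    module load all            # assumption: meta-module exposing the full software stack
    module avail python        # Python 3 versions should now be listed
    module load python/3.6.0   # illustrative version string; use the one actually listed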

 


Batch System

The batch system on Dalma is Slurm (Simple Linux Utility for Resource Management), a free, open-source resource manager originally developed at LLNL. As on most supercomputers, production jobs on Dalma are submitted to the batch system. To submit a job you need to create a submission script in which you specify your resource requirements. Before jobs are dispatched to run, they are placed in partitions to wait for available resources. There are partitions for various types of use: the parallel partition allocates entire nodes to a job (i.e. only 1 job per node), while the serial partition allows multiple jobs to share one node.

Interactive Sessions

Computationally heavy jobs are not allowed on login nodes; use an interactive session instead. To start an interactive session, use the srun command:
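A minimal sketch (partition, task count and time limit are illustrative; adjust them to your needs):

    srun -p serial -n 1 -t 01:00:00 --pty /bin/bash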

To exit the interactive session, press Ctrl+D or type exit.

 

Available Partitions (Queues)

Most used partitions for users: 

  1. serial: For jobs using no more than 1 node.
  2. parallel: For jobs using more than 1 node.

Job Limit

Run the following command to check your job limits.
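The exact command was shown in a code block here. If no site-specific wrapper is available, the underlying Slurm association limits can be inspected with sacctmgr (a sketch, assuming job accounting is enabled on Dalma):

    sacctmgr show associations user=$USER format=account,user,partition,maxjobs,maxsubmit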

Writing a Batch Script

A job script is a text file describing the job and the resources it requires. Slurm has its own directives but is similar in many ways to PBS or LSF. Moreover, Slurm maintains good compatibility with PBS scripts; in many cases a PBS script is directly acceptable.

You cannot submit jobs until you pass our online training. Dalma Training

Any job with ntasks <= 28 should use #SBATCH -p serial. Any job with ntasks > 28 should use #SBATCH -p parallel and set ntasks to be divisible by 28 (if not using MPI-OpenMP hybrid parallelization).

Serial Job Example

  1. A typical Slurm serial job script looks like the sketch after this list. Let's say you save it as serial-job.sh

    Typical Serial Job Script in Slurm

    Below you will find a generic Slurm job script with a gentle explanation of each directive.

  2. Then you can submit the saved job script serial-job.sh with:

    Submitting a Serial Job
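The two code blocks referenced above are reproduced here as a hedged sketch (task count, time limit, output file names and the executable are illustrative; only the serial partition and ntasks <= 28 follow the rules stated above):

    #!/bin/bash
    #SBATCH -p serial             # serial partition: jobs using no more than 1 node
    #SBATCH -n 1                  # number of tasks
    #SBATCH -t 01:00:00           # wall time, hh:mm:ss
    #SBATCH -o job.%J.out         # stdout file, %J expands to the job ID
    #SBATCH -e job.%J.err         # stderr file

    module purge                  # load only the modules your application needs
    cd $SCRATCH                   # run from scratch, never from $HOME
    ./my_program                  # hypothetical executable

Submit it with:

    sbatch serial-job.sh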

Parallel Job Example

You cannot submit jobs until you pass our online training. Dalma Training

  1. A typical Slurm parallel job script looks like the sketch after this list. Let's say you save it as parallel-job.sh

    Typical Parallel Job Script in Slurm
  2. Then you can submit the saved job script parallel-job.sh with:

    Submitting a Parallel Job
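Again a hedged sketch (ntasks is set to 56, i.e. two full 28-core nodes, to satisfy the divisible-by-28 rule; the modules, time limit and executable name are illustrative):

    #!/bin/bash
    #SBATCH -p parallel           # parallel partition: jobs using more than 1 node
    #SBATCH -n 56                 # ntasks > 28 and divisible by 28 (2 x 28 cores)
    #SBATCH -t 02:00:00           # wall time, hh:mm:ss
    #SBATCH -o job.%J.out
    #SBATCH -e job.%J.err

    module purge                  # load your MPI and application modules here
    cd $SCRATCH
    srun ./my_mpi_program         # srun launches one MPI rank per task

Submit it with:

    sbatch parallel-job.sh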

Submitting a Job

Please be aware that submitting jobs is only possible from login nodes at the moment. Contact us if you need help.

The command sbatch submits jobs. A simple example:

Minimal Example of Job Submission
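A minimal sketch, using the serial script from the example above:

    sbatch serial-job.sh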

After submission, sbatch returns the corresponding job ID. Once the job is scheduled to run, the script is executed on the first compute node in the allocation.

Checking Job Status

Before and During Job Execution

This command lists all jobs submitted by you.

List all current jobs for a user
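A sketch using the standard Slurm command (replace <NetID> with your own, or use $USER):

    squeue -u <NetID>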

Example output:
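An illustrative reconstruction based on the description below (only the job ID, state, time and node come from this page; the remaining fields are placeholders):

    JOBID  PARTITION  NAME   USER     ST  TIME  NODES  NODELIST(REASON)
    31408  serial     myjob  <NetID>  R   2:00  1      compute-21-4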

It means that the job with job ID 31408 has been running (ST: R) for 2 minutes on compute-21-4.

For more verbose information, use scontrol show job.
Getting Verbose Information on a Job
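For example, for the job shown above:

    scontrol show job 31408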

After Job Execution

Once the job is finished, it can no longer be inspected with squeue or scontrol show job. At this point, you can inspect it with sacct.

Checking a Job
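For example:

    sacct -j 31408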

The following commands give you extremely verbose information on a job.

Getting Verbose Information on a Job
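For example (the --long flag prints the full set of accounting fields; --format lets you pick specific ones):

    sacct -j 31408 --long
    sacct -j 31408 --format=JobID,JobName,Partition,State,Elapsed,MaxRSS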

 

Canceling a Job

If you decide to end a job prematurely, use scancel:
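For example, to cancel the job with ID 31408:

    scancel 31408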

Use with caution

To cancel all jobs from your account, run this in the Dalma terminal:
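A sketch (replace <NetID> with your own NetID, or use $USER):

    scancel -u <NetID>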

 

 
