
Work in progress

We are still working on this page. Information may be incomplete.


Basic Slurm Usage

A Linux cluster contains hundreds of computing nodes interconnected by high-speed networks. These resources are shared among many users for their technical or scientific computing purposes. Users submit jobs, which compete for computing resources. Slurm is a resource manager and job scheduler, designed to allocate resources and schedule jobs. It is open-source software with a large user community, and it has been installed on many Top 500 supercomputers.

This tutorial assumes that you have an NYU HPC account, that you have connected your computer to our facility, and that you are comfortable with the Linux command-line environment. If not, you may apply for an account here, or learn about Linux in Tutorial 1.

A typical interaction with Slurm, from a user's point of view, goes as follows:

1. Check the cluster status.
2. Submit a job.
3. Check the job status.
4. Cancel the job if necessary.
5. Look at the job results.

We briefly show how to do each of these steps below.

Check cluster status

The sinfo command gives information about the cluster status; by default it lists all the partitions. Partitions group computing nodes into logical sets, which serve various functions such as interactive use, visualization, and batch processing.
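For illustration, on a small hypothetical cluster, sinfo output might look like the box below. The node names, counts, and time limits are made up (though kept consistent with the rest of this page), and we arbitrarily mark test as the default partition:

```
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
test*        up    4:00:00      4   idle c[1-4]
normal       up 7-00:00:00      1    mix c5
normal       up 7-00:00:00      4   idle c[6-9]
```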

As we can see in the above box, there are two partitions, named test and normal. The partition marked with an asterisk is the default one. Except for node c5, which is in the 'mix' state (meaning some of its CPU cores are occupied), all nodes are idle.

The squeue command lists jobs that are running, waiting (pending), completing, etc.
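An illustrative squeue listing might look like the box below. The job names, and all job IDs and user names other than job 10000007 owned by 'sd3477', are hypothetical:

```
   JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
10000003    normal    prep1   ab1234 PD       0:00      1 (JobHeldUser)
10000007      test    myjob   sd3477  R      12:34      1 c3
10000004    normal    sim01   cd5678  R    2:01:12      1 c6
10000005    normal    sim02   cd5678  R    1:58:40      1 c7
10000006    normal    blast1  ef9012  R      45:10      1 c8
```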

In the squeue output example above, one job is in state 'R' (running) on node c3, which is in the 'test' partition. Its job ID is 10000007 and its owner is user 'sd3477'. The other four jobs are in the 'normal' partition. One of them is in state 'PD' (pending) because its owner put it on hold (reason JobHeldUser). All of these jobs run on a single node; none spans more than one node. The TIME column shows how long each job has been running so far.

Run 'man sinfo' or 'man squeue' to see detailed usage information of the commands.

Submit jobs

Batch jobs are submitted with the sbatch command. As with qsub in Torque, we create a bash script that describes the job's requirements: what resources we need (memory, CPUs, run time), what software and processing we want to run, and where to send the job's standard output and error.

In runscript.sh we request one node for one hour with 700 MB of memory per CPU, name the job 'myFirstTest', and direct the job's standard output and error to files named slurm_%j.out and slurm_%j.err, where %j is replaced with the job ID. Once the script is ready, we submit the job to the Slurm controller as follows:
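A minimal runscript.sh matching the requirements above might look like this (the final commands are placeholders for your actual workload):

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=01:00:00
#SBATCH --mem-per-cpu=700MB
#SBATCH --job-name=myFirstTest
#SBATCH --output=slurm_%j.out
#SBATCH --error=slurm_%j.err

# Placeholder payload -- replace with your real processing
echo "Hello from $(hostname)"
sleep 60
```

The script is then submitted with sbatch, which prints the assigned job ID:

```
$ sbatch runscript.sh
Submitted batch job 10000010
```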

This job has been submitted successfully; as the example box shows, its job ID is 10000010. Usually we should let the scheduler decide which nodes to run jobs on. If there is a need to request a specific set of nodes, use the nodelist directive, e.g. '#SBATCH --nodelist=c9'.

Check job status

With the job ID in hand, we can track the job's status throughout its lifetime. The job first appears in the Slurm queue in the PENDING state. When its required resources become available and it gains sufficient priority, it is allocated resources and transitions to the RUNNING state. If it runs to the end and completes successfully, it moves to the COMPLETED state; otherwise it ends in the FAILED state. Use 'squeue -j <jobID>' to check a job's status. Run the sstat command to display various information about a running job/step, and the sacct command to check accounting information for jobs and job steps in the Slurm log or database. The latter two commands have a '--helpformat' option that lists the available output columns.
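For example, using the job ID from the submission above (the format columns shown here are just illustrative choices):

```
$ squeue -j 10000010
$ sstat  -j 10000010 --format=JobID,AveCPU,MaxRSS
$ sacct  -j 10000010 --format=JobID,JobName,State,Elapsed,MaxRSS
$ sacct --helpformat
```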

Cancel a job

Things can go wrong, or turn out in unexpected ways. Should you decide to terminate a job before it finishes, scancel is the tool for that.
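For example, you can cancel a single job by its ID (the ID below is illustrative), or several jobs at once by name or owner:

```
$ scancel 10000010            # cancel one job by ID
$ scancel --name=myFirstTest  # cancel jobs with a given name
$ scancel -u $USER            # cancel all of your own jobs
```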

Look at job results

Job results include the job execution logs (standard output and error) and, of course, any output data files defined when submitting the job. Log files are created in the working directory, and output data files in the directory you specified. Examine the log files with a text viewer or editor to get a rough idea of how the execution went. Open the output data files to see exactly what results were generated. Run the sacct command to see resource usage statistics. Should you decide that the job needs to be rerun, submit it again with sbatch using a modified batch script and/or an updated execution configuration. Iteration is one characteristic of a typical data analysis!
