Working on the HPC clusters is not the same as working at a desktop workstation: in order to provide high performance computing to many users simultaneously, computational work must be packaged into a job - a script specifying what resources the job will need and the commands necessary to perform the work - and submitted to the system to be run without further input from the user. The system then schedules and runs the job on a dedicated portion of the cluster. (Note that there is a way to work interactively within this model, for work which cannot be scripted, such as debugging).
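As a sketch of the interactive case: Torque's qsub accepts an -I flag that, instead of running a script, gives you a shell on the allocated node (the resource values here are illustrative):

```shell
# Request an interactive session: 1 node, 4 cores, 2 hours.
# qsub blocks until the scheduler allocates the resources,
# then opens a shell on the assigned compute node.
qsub -I -l nodes=1:ppn=4,walltime=02:00:00
```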
On the NYU clusters, Torque and Moab manage the running and scheduling of jobs. As a user you will interact mostly with Torque, which accepts and runs job scripts and manages and monitors the cluster's compute resources. Moab does the heavy thinking: the planning of which job should be run where and when.
Login and Compute Nodes
Note that certain filesystems are visible to the login nodes or the compute nodes but not both. Specifically, at NYU:
- /archive is not visible to the compute nodes, while
- /state/partition1 is visible and local only to individual compute nodes.
Not all jobs can be run at once - the cluster is finite! - so when jobs are submitted they are placed into a queue. When a "space" becomes available in the schedule Moab looks down the queue for the first job that will fit into the space.
Jobs are not necessarily placed at the end of the queue - Moab uses the priority (discussed here) to determine where in the queue a job should be placed.
There is more than one queue. Each queue is configured for different types of jobs and has resource limits and priorities set accordingly. If you do not specify a queue to submit to, Torque will use the resources requested to select a queue for you. Frequently this is the best option, however in some circumstances you are better off explicitly specifying a queue.
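If you do want to choose a queue yourself, qsub takes a -q option. A sketch (the queue name s48 is taken from the qstat example later on this page, and the script name is illustrative):

```shell
# Submit my_script.q to the queue named s48 instead of letting
# Torque pick a queue from the resources requested.
qsub -q s48 my_script.q
```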
You can see the list of queues with the command "qstat -q", and you can see more detail about a specific queue with "qstat -Qf queue-name".
Writing a Job Script
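A job script is an ordinary shell script in which comment lines beginning with #PBS carry directives for Torque. A minimal sketch (the program name and resource values are illustrative, not part of any real workflow):

```shell
#!/bin/bash
#PBS -l nodes=2:ppn=8       # request 2 nodes with 8 cores per node
#PBS -l walltime=04:00:00   # illustrative wall-clock time limit
#PBS -N model_scen_1        # job name, as it will appear in qstat output
#PBS -j oe                  # merge stdout and stderr into a single output file

# Torque starts the job in your home directory; change to the
# directory the job was submitted from.
cd $PBS_O_WORKDIR

# The command(s) that do the actual work (hypothetical program).
./my_program
```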
Submitting a Job
Jobs are submitted with the qsub command:
$ qsub options job-script
The options tell Torque information about the job, such as what resources will be needed. These can be specified in the job script as PBS directives, or on the command line as options, or both (in which case the command-line options take precedence should the two contradict each other). For each option there is a corresponding PBS directive with the syntax:
#PBS option
For example, you can specify that a job needs 2 nodes and 8 cores on each node by adding to the script the directive:
#PBS -l nodes=2:ppn=8
or as a command-line option to
qsub when you submit the job:
$ qsub -l nodes=2:ppn=8 my_script.q
To see the status of a single job - or a list of specific jobs - pass the Job IDs to
qstat, as in the following example:
$ qstat 3593014 3593016
Job id Name User Time Use S Queue
------------- ---------------- --------------- -------- - -----
3593014 model_scen_1 ab123 7:23:47 R s48
3593016 model_scen_1 ab123 7:23:26 R s48
Most of the fields in the output are self-explanatory. The second-last column, "S", is the job status, which can be:
- Q meaning "Queued"
- H meaning "Held" - this may be the result of a manual hold or of a job dependency
- R meaning "Running"
- C meaning "Completed". After the job finishes, it will remain with "completed" status for a short time before being removed from the batch system.
Other, less common job status flags are described in the qstat manual page (man qstat).
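To list every job belonging to one user rather than naming specific Job IDs, qstat also takes a -u option (replace <NetID> with your own NetID):

```shell
# Show only the jobs owned by the given user.
qstat -u <NetID>
```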
pbstop, available on the login nodes, shows which jobs are currently running on which nodes and cores of a cluster.
Jobs belonging to a single user can be highlighted by launching pbstop with the -u option:
$ pbstop -u <NetID>
(of course, replace <NetID> with your NYU NetID). Or, you can use the alias "me" in place of your NetID.
When you start pbstop you see something like the annotated screenshot below. You might need to resize your terminal to make it all fit:
Canceling a Job
To kill a running job, or remove a queued job from the queue, use qdel with the Job ID:
$ qdel jobid
To cancel ALL of your jobs, you can pass every one of your Job IDs to qdel; "qselect -u <NetID>" prints them:
$ qdel $(qselect -u <NetID>)