Job Queues

Not all jobs can run at once - the cluster is finite! - so when jobs are submitted they are placed into a queue. When a "space" becomes available in the schedule, Moab looks down the queue for the first job that will fit into that space.
Jobs are not necessarily placed at the end of the queue - Moab uses the job's priority (discussed here) to determine where in the queue it should be placed. At NYU HPC, shorter jobs are given higher priority.
There is more than one queue. Each queue is configured for a different type of job, with resource limits and priorities set accordingly. If you do not specify a queue to submit to, Torque will use the resources you requested to select a queue for you. Frequently this is the best option; however, in some circumstances you are better off explicitly specifying a queue.
You can see the list of queues with the command "qstat -q", and you can see more detail about a specific queue with "qstat -Qf queue-name".
The following example shows the queues available on Mercer, with some more detail about each queue in the table below. The output shows:
- The name of each queue
- The maximum memory, CPU time, wallclock time, and number of nodes that a job in each queue can use
- The number of currently queued and currently running jobs in each queue
- The queue job limits and state (these columns are of interest mostly to the system administrators)
In almost all cases, do not specify a queue - the system will work out where best to place your job according to the resources requested.
|Queue name|Job limit per user|Resource limits per job|Resource limit defaults|Purpose|
|---|---|---|---|---|
|route| | | |Routing queue: jobs submitted without specifying a queue are processed here and routed to one of the other queues according to the resources requested.|
|s48|1000|168 hrs walltime, 1 node|1 hr walltime, 2 GB memory, 1 core|Single-node jobs (serial or multithreaded).|
|p12|100|168 hrs walltime, 2+ nodes|1 hr walltime, 1 core per node|Multi-node jobs. This queue will not accept interactive jobs - you can use up to 2 nodes interactively for up to 4 hours via the interactive queue.|
|interactive|2|4 hrs walltime, 2 full nodes|1 hr walltime, 1 core, 2 GB memory|High-priority interactive use, especially debugging. Interactive jobs go here by default; interactive jobs which do not meet the resource limits for this queue will go to the s48 queue, and consequently may take longer to start.|
|cgsb-s|1000|168 hrs walltime, 48 GB memory, 12 cores on a single node|12 hrs walltime, 1 core|Long-running CGSB jobs (see HPC Stakeholders). Jobs needing more than 96 hours and submitted by CGSB users will be routed to this queue and scheduled on the CGSB-owned nodes.|
|sysadm|0| | |Maintenance reservations by system administrators: normal users do not have access to this queue.|
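For the rare cases where you do want to bypass the routing queue, you can name a queue explicitly with the `#PBS -q` directive in your job script (or the `-q` option to `qsub`). A minimal sketch, assuming a script named `myjob.pbs` - the program name and resource requests are illustrative placeholders, not recommendations:

```shell
#!/bin/bash
# Hypothetical job script "myjob.pbs": submit directly to the s48 queue
# instead of letting the route queue choose. The resource requests below
# are examples only - size them to your actual job.
#PBS -q s48
#PBS -l nodes=1:ppn=1,walltime=02:00:00,mem=4gb

cd "$PBS_O_WORKDIR"    # Torque starts jobs in $HOME; return to the submit directory
./my_program           # placeholder for your actual command
```

Submit it with `qsub myjob.pbs`; the same effect can be had without editing the script by running `qsub -q s48 myjob.pbs`, since command-line options override in-script directives.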