
Submitting a Job

Jobs are submitted with the qsub command:

$ qsub options job-script

The options give Torque information about the job, such as what resources it will need. They can be specified in the job script as PBS directives, on the command line as options, or both (in which case the command-line options take precedence if the two contradict each other). For each option there is a corresponding PBS directive with the syntax:

#PBS option

For example, you can specify that a job needs 2 nodes and 8 cores on each node by adding to the script the directive:

#PBS -l nodes=2:ppn=8

or as a command-line option to qsub when you submit the job: 

$ qsub -l nodes=2:ppn=8 my_script.q
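
Putting this together, a job script is an ordinary shell script whose PBS directives appear before the first executable command. The following is a minimal sketch only; the job name, resource values and the program ./my_program are placeholders, not recommendations:

#!/bin/bash
#PBS -N example_job            # job name (placeholder)
#PBS -l nodes=2:ppn=8          # 2 nodes, 8 cores per node
#PBS -l walltime=01:00:00      # 1 hour of wallclock time
#PBS -j oe                     # merge stderr into the stdout file

cd $PBS_O_WORKDIR              # the directory the job was submitted from
./my_program                   # placeholder for your own executable

It would then be submitted with:

$ qsub my_script.q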

Options to manage job output:

  • -N jobname
    Give the job a name. The default is the filename of the job script. Within the job, $PBS_JOBNAME expands to the job name 
  • -j oe
    Merge stderr into the stdout file 
  • -o path/for/stdout
    Send stdout to path/for/stdout. Can be a filename or an existing directory. The default filename is $PBS_JOBNAME.o${PBS_JOBID/.*}, eg myjob.o12345, in the directory from which the job was submitted 
  • -e path/for/stderr
    Send stderr to path/for/stderr. Same usage as for stdout.
  • -M my_email_address@nyu.edu
    Send email to my_email_address@nyu.edu when certain events occur. By default an email is sent only if the job is killed by the batch system.
  • -m b -m e -m a -m abe
    Send email when the job begins (b), ends (e) and/or is aborted (a). The letters can be combined, as in -m abe.
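
For example, the following directives (a sketch; the job name, log filename and email address are placeholders) name the job, merge stderr into stdout, write the combined output to a chosen file, and request email when the job begins and ends:

#PBS -N my_analysis
#PBS -j oe
#PBS -o my_analysis.log
#PBS -M my_email_address@nyu.edu
#PBS -m be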

Options to set the job environment:

  • -S /path/to/shell
    Use the shell at /path/to/shell to interpret the script. Default is your login shell, which at NYU HPC is normally /bin/bash
  • -v VAR1,VAR2="some value",VAR3
    Pass variables to the job, either with a specific value (the VAR= form) or from the submitting environment (without "=")  
  • -V
    Pass the full environment the job was submitted from
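
For example, to pass one variable with a specific value and another copied from your current environment into a job (the variable names and script name here are illustrative only):

$ qsub -v DATASET=run42,DEBUG my_script.q

Inside the job, $DATASET will be set to run42 and $DEBUG will have whatever value it had in the shell from which you submitted.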

Options to request compute resources:

  • -l walltime=walltime
    Maximum wallclock time the job will need. Default depends on queue, mostly 1 hour. Walltime is specified in seconds or as hh:mm:ss or mm:ss.
  • -l mem=memory
    Maximum memory per node the job will need. Default depends on queue, normally 2GB for serial jobs and the full node for parallel jobs. Memory should be specified with units, eg 500MB or 8GB. The available memory per node for different nodes of Mercer is described here.
  • -l procs=num
    Total number of CPUs required. Use this if it does not matter how CPUs are grouped onto nodes - eg, for a purely-MPI job. Don't combine this with -l nodes=num or odd behavior will ensue.
  • -l nodes=num:ppn=num
    Number of nodes and number of processors per node required. Use this if you need processes to be grouped onto nodes - eg, for an MPI/OpenMP hybrid job with 4 MPI processes and 8 OpenMP threads each, use -l nodes=4:ppn=8. Don't combine this with -l procs=num or odd behavior will ensue. Default is 1 node and 1 processor per node. When using multiple nodes the job script will be executed on the first allocated node.
    Torque will set the environment variables PBS_NUM_NODES to the number of nodes requested, PBS_NUM_PPN to the value of ppn and PBS_NP to the total number of processes available to the job. 
  • -l nodes=num:ppn=num:gpus=num
    -l nodes=num:ppn=num:gpus=num:titan
    If your job needs GPUs, you should specify the number of GPUs per node in the -l nodes option.
    We have nodes with older Tesla GPUs and newer Titan GPUs - if you specifically need the newer GPUs you should add the requirement ":titan" to the gpus specification.
  • -l nodes=num:ppn=all
    This is an NYU HPC extension: if you need whole nodes but do not mind how many cores, you can request full nodes this way. Specifying -n will have the same effect. The environment variable $PBS_PPN will be set in the job to the number of cores on each node the job is running on. Jobs are always allocated to sets of nodes with the same number of cores, so you will not get one node with 12 cores and another with 20.
    Note that this currently only works when used in directives, not on the command line. 
  • -n
    Request exclusive use of nodes. If this is specified, no other jobs will share a node with this job, and if you did not specify a memory limit, no memory limit will be enforced (Note however that if you do not specify a memory limit, you may land on a node with only 24GB of memory)
  • -q queue
    Submit to a specific queue. If not specified, Torque will choose a queue based on the resources requested.

    In almost all cases, it is best not to specify a queue - the system will choose the best queue according to the resources you request
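
As an illustration, an MPI/OpenMP hybrid job that needs 4 nodes with 8 cores each, 16GB of memory per node and 12 hours of walltime could use the following directives (the values are assumptions to be adjusted for your own job):

#PBS -l nodes=4:ppn=8
#PBS -l mem=16GB
#PBS -l walltime=12:00:00

Within such a job, PBS_NUM_NODES would be 4, PBS_NUM_PPN would be 8 and PBS_NP would be 32.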

Options for running interactively on the compute nodes:

  • -I
    Don't just submit the job, but also wait for it to start and connect stdout, stderr and stdin to the current terminal.
  • -X
    Enable X forwarding, so programs using a GUI can be used during the session (provided you have X forwarding to your workstation set up)
  • -q interactive
    Run specifically in the interactive queue. At NYU HPC, this queue has smaller job limits (maximum of two nodes and 4 hours walltime) but very high priority.
  • -V
    Pass the current environment to the interactive batch job
  • To leave an interactive batch session, type exit at the command prompt.
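
For example, to request a 2-hour interactive session on 4 cores of one node with X forwarding (the resource values are illustrative only):

$ qsub -I -X -q interactive -l nodes=1:ppn=4,walltime=02:00:00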

Options for delaying starting a job:

  • -W depend=afterok:jobid
    Delay starting this job until jobid has completed successfully.
  • -a [MM[DD]]hhmm
    Delay starting this job until after the specified date and time. Month (MM) and day-of-month (DD) are optional, hour and minute are required. 
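
A common pattern is to capture the job id that qsub prints and pass it to the dependent job; a sketch (the script names are placeholders):

$ JOBID=$(qsub first_step.q)
$ qsub -W depend=afterok:$JOBID second_step.q

The second job will remain held until first_step.q finishes with exit status 0.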

Options for many similar jobs (array jobs and pbsdsh):

  • -t 1,10,50-100
    Submit an array of jobs with array ids as specified. Array ids can be specified as a numerical range, a comma-separated list of numbers, or as some combination of the two. Each job instance will have the environment variable $PBS_ARRAYID set to its own array id (see the sketch after this list).
  • -t 1,10,50-100%5
    As above, but the appended '%n' specifies the maximum number of array items (in this case, 5) which should be running at one time
  • Submit a single "shepherd" job requesting multiple processes and from it start individual jobs with pbsdsh.
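
For example, an array job script that processes one input file per array id might look like the following sketch (the input file naming scheme and program name are assumptions):

#!/bin/bash
#PBS -N array_example
#PBS -t 1-10%5                          # array ids 1 to 10, at most 5 running at once
#PBS -l walltime=00:30:00

cd $PBS_O_WORKDIR
./my_program input_${PBS_ARRAYID}.dat   # placeholder: one input file per array id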
