Migrating From PBS
Table 1 lists the common tasks that you can perform in Torque/PBS and the equivalent ways to perform those tasks in SLURM.
| Task | Torque/PBS | SLURM |
|------|------------|-------|
| Submit a job | qsub myjob.sh | sbatch myjob.sh |
| Delete a job | qdel 123 | scancel 123 |
| Show job status | qstat | squeue |
| Show expected job start time | - (showstart in Maui/Moab) | squeue --start |
| Show queue info | qstat -q | sinfo |
| Show job details | qstat -f 123 | scontrol show job 123 |
| Show queue details | qstat -Q -f <queue> | scontrol show partition <partition_name> |
| Show node details | pbsnodes n0000 | scontrol show node n0000 |
| Show QoS details | - (mdiag -q <QoS> in Maui/Moab) | sacctmgr show qos <QoS> |
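For example, a typical submit, inspect, and cancel cycle translates one-to-one; the job ID 123 below is illustrative:

```bash
sbatch myjob.sh        # PBS: qsub myjob.sh  (SLURM prints "Submitted batch job 123")
squeue -u "$USER"      # PBS: qstat -u $USER (status of your jobs)
scontrol show job 123  # PBS: qstat -f 123   (full job details)
scancel 123            # PBS: qdel 123       (cancel the job)
```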
Table 2 lists commonly used batch job script options in Torque/PBS (qsub) and their SLURM equivalents (sbatch/srun/salloc).
| Description | Torque/PBS | SLURM |
|-------------|------------|-------|
| Declares the time after which the job is eligible for execution. | -a date_time | --begin=<time> |
| Defines the account string associated with the job. | -A account_string | -A, --account=<account> |
| Defines the path to be used for the standard error stream of the batch job. | -e [hostname:][path_name] | -e, --error=<filename pattern> |
| Specifies that a user hold be applied to the job at submission time. | -h | -H, --hold |
| Declares that the job is to be run "interactively". | -I | srun -u bash -i |
| Declares whether the job's standard error stream is merged with its standard output stream. | -j oe / -j eo | default behavior |
| Requests a number of nodes be allocated to this job. | -l nodes=number | -n, --ntasks=<number> / -N, --nodes=<minnodes[-maxnodes]> |
| Specifies the number of processors per node requested. | -l nodes=number:ppn=number | --ntasks-per-node=<ntasks> / --tasks-per-node=<n> |
| Specifies a node feature. | -l nodes=number:gpu | -C, --constraint="gpu" |
| Requests a specific list of node names. | -l nodes=node1+node2 | -w, --nodelist=<node name list> / -F, --nodefile=<node file> |
| Specifies the real memory required per node, in megabytes. | -l mem=mem | --mem=<MB> |
| Specifies the minimum memory required per allocated CPU, in megabytes. | no equivalent | --mem-per-cpu=<MB> |
| Requests a quality of service (QoS) for the job. | -l qos=qos | --qos=<qos> |
| Sets a limit on the total run time of the job allocation. | -l walltime=time | -t, --time=<time> |
| Defines the conditions under which the execution server sends a mail message about the job. | -m mail_options (a, b, e) | --mail-type=<type> (type = BEGIN, END, FAIL, REQUEUE, ALL) |
| Declares the list of users to whom mail about the job is sent. | -M user_list | --mail-user=<user> |
| Specifies that the job has exclusive access to the nodes on which it executes. | -n | --exclusive |
| Declares a name for the job. | -N name | -J, --job-name=<jobname> |
| Defines the path to be used for the standard output stream of the batch job. | -o path | -o, --output=<filename pattern> |
| Defines the destination queue (partition in SLURM) of the job. | -q destination | -p, --partition=<partition_names> |
| Declares whether the job is rerunnable. | -r y\|n | --requeue |
| Declares the shell that interprets the job script. | -S path_list | no equivalent |
| Specifies the task IDs of a job array. | -t array_request | -a, --array=<indexes> |
| Allows per-job prologue and epilogue scripts. | -T script_name | no equivalent |
| Defines the user name under which the job is to run on the execution system. | -u user_list | --uid=<user> |
| Expands the list of environment variables that are exported to the job. | -v variable_list | --export=<environment variables \| ALL \| NONE> |
| Declares that all environment variables in the qsub command's environment are to be exported to the batch job. | -V | --export=<environment variables \| ALL \| NONE> |
| Defines the working directory path to be used for the job. | -w path | -D, --workdir=<directory> (renamed --chdir in newer SLURM releases) |
| This job may be scheduled for execution at any point after the named jobs have started execution. | -W depend=after:jobid[:jobid...] | -d, --dependency=after:job_id[:jobid...] |
| This job may be scheduled for execution only after the named jobs have terminated with no errors. | -W depend=afterok:jobid[:jobid...] | -d, --dependency=afterok:job_id[:jobid...] |
| This job may be scheduled for execution only after the named jobs have terminated with errors. | -W depend=afternotok:jobid[:jobid...] | -d, --dependency=afternotok:job_id[:jobid...] |
| This job may be scheduled for execution after the named jobs have terminated, with or without errors. | -W depend=afterany:jobid[:jobid...] | -d, --dependency=afterany:job_id[:jobid...] |
| This job can begin execution after any previously launched jobs sharing the same job name and user have terminated. | no equivalent | -d, --dependency=singleton |
| Defines the group name under which the job is to run on the execution system. | -W group_list=g_list | --gid=<group> |
| Allocates resources for the job from the named reservation. | -W x=FLAGS:ADVRES:staff.1 | --reservation=<name> |
| Enables X11 forwarding. | -X | srun --x11 [command] |
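As a worked example, the sketch below translates a small Torque/PBS batch script into its SLURM form using the options above; the job name, resource values, partition, mail address, and program are illustrative:

```bash
#!/bin/bash
# Torque/PBS original, shown as comments for comparison:
#   #PBS -N myjob
#   #PBS -l nodes=2:ppn=8
#   #PBS -l walltime=01:00:00
#   #PBS -l mem=4096mb
#   #PBS -q batch
#   #PBS -j oe
#   #PBS -m abe
#   #PBS -M user@example.com
#   cd $PBS_O_WORKDIR
#   mpirun ./my_program

#SBATCH --job-name=myjob
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --time=01:00:00
#SBATCH --mem=4096                    # per-node memory, in megabytes
#SBATCH --partition=batch             # PBS queue -> SLURM partition
#SBATCH --mail-type=ALL
#SBATCH --mail-user=user@example.com

# No equivalents needed for -j oe or the cd: SLURM merges stdout/stderr
# and starts the job in the submit directory by default.
srun ./my_program
```

The dependency options chain the same way; sbatch's --parsable flag prints only the job ID, which makes chaining convenient in a shell script (the script names are illustrative):

```bash
jid1=$(sbatch --parsable preprocess.sh)
jid2=$(sbatch --parsable --dependency=afterok:$jid1 analyze.sh)
sbatch --dependency=afterany:$jid1:$jid2 cleanup.sh   # runs after both terminate
```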
Table 3 lists the commonly used environment variables in Torque/PBS and the equivalents in SLURM.
| Environment Variable For | Torque/PBS | SLURM |
|--------------------------|------------|-------|
| Job ID | PBS_JOBID | SLURM_JOB_ID / SLURM_JOBID |
| Job name | PBS_JOBNAME | SLURM_JOB_NAME |
| Node list | PBS_NODEFILE (path to a file listing the nodes) | SLURM_JOB_NODELIST / SLURM_NODELIST |
| Job submit directory | PBS_O_WORKDIR | SLURM_SUBMIT_DIR |
| Job array index | PBS_ARRAYID (Torque) / PBS_ARRAY_INDEX (PBS Pro) | SLURM_ARRAY_TASK_ID |
| Number of tasks | - | SLURM_NTASKS |
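Finally, a minimal sketch of a script body migrated from the PBS variables to their SLURM counterparts (the directives and messages are illustrative):

```bash
#!/bin/bash
#SBATCH --job-name=envdemo
#SBATCH --ntasks=4

cd "$SLURM_SUBMIT_DIR"   # PBS: cd $PBS_O_WORKDIR

# PBS: echo "Job $PBS_JOBID ($PBS_JOBNAME)"
echo "Job $SLURM_JOB_ID ($SLURM_JOB_NAME) on nodes $SLURM_JOB_NODELIST"
echo "Running $SLURM_NTASKS tasks"

# For array jobs (sbatch --array=0-9), each element sees its own index:
echo "Array index: ${SLURM_ARRAY_TASK_ID:-not an array job}"
```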