Slurm quick start#

Run one task of myApp on one core of a node:#

$ srun myApp

This is the simplest way to run a job on a cluster. In this example, the lone srun command defaults to requesting one task on one core on one node in the default queue, charged to the default account.

Run hostname in an interactive allocation:#

$ salloc
salloc: Pending job allocation 150096
salloc: job 150096 queued and waiting for resources

(salloc blocks here until the job runs)

salloc: job 150096 has been allocated resources
salloc: Granted job allocation 150096

$ srun hostname

Run it again:

$ srun hostname

Now exit the job and allocation:

$ exit
salloc: Relinquishing job allocation 150096
salloc: Job allocation 150096 has been revoked.

Like srun in the first example, salloc defaults to asking for one node in the default queue, charged to the default account. Once the job runs and the prompt returns, subsequent srun commands launch tasks within the job's allocated resources until exit is invoked.

Create a batch job script and submit it#

$ cat > myBatch.cmd
#!/bin/bash
#SBATCH -N 4
#SBATCH -p compute
#SBATCH -A myAccount
#SBATCH -t 30

srun -N 4 -n 32 myApp

(press Ctrl-D on a new line to finish the file)

This script asks for 4 nodes from the compute queue for no more than 30 minutes, charging the myAccount account. The srun command launches 32 tasks of myApp across the four allocated nodes.

Now submit the job:

$ sbatch myBatch.cmd
Submitted batch job 150104

See the job pending in the queue:

$ squeue
  JOBID  PARTITION     NAME     USER  ST       TIME  NODES NODELIST(REASON)
 150104    compute myBatch.       me  PD       0:00      4 (Priority)

After the job runs, the output will be found in a file named after the job id: slurm-150104.out
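If you would rather choose the output file name yourself, sbatch accepts -J (job name) and -o (output file) directives, and %j in the file name expands to the job ID. A sketch, using illustrative names:

```shell
#SBATCH -J myApp          # job name shown by squeue
#SBATCH -o myApp-%j.out   # stdout/stderr go to myApp-<jobID>.out
```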

See only your jobs in the queue#

 $ squeue -u <myName>

See all the jobs in the queue#

 $ squeue

List queued jobs displaying the fields that are important to you#

 $ man squeue

and scroll to the output format specifiers listed under the -o option. Then create an environment variable containing the fields you'd like to see.

For example, you can set the output format:

# for bash
$ export SQUEUE_FORMAT="%.7i %.8u %.8a %.9P %.5D %.2t %.19S %.8M %.10l %.10Q"

# for csh/tcsh
$ setenv SQUEUE_FORMAT "%.7i %.8u %.8a %.9P %.5D %.2t %.19S %.8M %.10l %.10Q"

Display the pending jobs ordered by decreasing priority#

 $ squeue -t pd -S-p

Display details about a specific job#

 $ scontrol show job <jobID>

Display the job script for one of your jobs#

 $ scontrol -dd show job <jobID>

Show all the jobs you have run today#

$ sacct -X
       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
1732343        hostname       gpus        ici          3     FAILED      1:0
1732350        hostname    compute        ici         48  COMPLETED      0:0
1732412            bash       gpus gpu-milcom         10 CANCELLED+      0:0

Show all the job steps that ran within a specific job#

$ sacct -j 1732411
       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
1732411        hostname       gpus gpu-milcom          4  COMPLETED      0:0

List the charge accounts you are permitted to use (sbatch/salloc/srun -A option)#

 $ sshare

This command also shows the historical usage your jobs have accrued against each charge account, along with the fair-share factor computed for each of your accounts. That factor feeds into the priority calculated for your pending jobs and for any job you submit. For details, see Multi-factor Priority and Fair-Tree.

Display the factors contributing to each pending job's assigned priority#

$ sprio -l
  JOBID     USER   PRIORITY        AGE  FAIRSHARE    JOBSIZE  PARTITION        QOS   NICE
 143104  harriet    1293802      28234     265568          0          0    1000000      0
 143105      sam    1293802      28234     265568          0          0    1000000      0
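Under Slurm's multifactor plugin, a job's priority is the sum of its weighted factors (age, fair-share, job size, partition, and QOS, minus any nice adjustment), and the output above reflects that. A quick arithmetic check against the first row:

```shell
# Factors for job 143104: age=28234, fairshare=265568, jobsize=0, partition=0, qos=1000000
echo $((28234 + 265568 + 0 + 0 + 1000000))   # prints 1293802, matching the priority column
```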

Cancel a job, whether it is pending in the queue or running#

 $ scancel <job_ID>

Send a signal to a running job#

For example, send SIGUSR1:

 $ scancel -s USR1 <job_ID>
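A common use for signals is letting a job checkpoint before it is killed; besides scancel, sbatch's --signal option (e.g. --signal=B:USR1@60, which delivers the signal to the batch shell 60 seconds before the time limit) can trigger the same handler. A minimal, self-contained bash sketch of the receiving side, using kill on the script's own process as a stand-in for the externally delivered signal:

```shell
#!/bin/bash
# Trap SIGUSR1 and record that it arrived instead of dying.
checkpointed=0
trap 'checkpointed=1' USR1

kill -USR1 $$    # stand-in for `scancel -s USR1 <job_ID>`

# A real job script would test this flag inside its work loop
# and write a checkpoint before exiting.
if [ "$checkpointed" -eq 1 ]; then
    echo "caught USR1: write checkpoint here"
fi
```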

Display the queues available#

 $ sinfo [-s]

  • -s : show a condensed summary, one line per partition
  • -l : show the long-format description
  • -Rl : list down nodes, with the reason each node is down

Display details about all the queues (aka partitions)#

$ scontrol show partition

Display QOS#

$ sacctmgr show qos <qos name> format=priority,flags,maxcpus,maxwall,maxjobsperuser,maxnodesperuser,maxsubmit