Please note that script options are given in their long form rather than their short form (e.g. --nodes versus -N) for readability and consistency.
Warning: per-cluster restrictions may require you to customize the following generic instructions. For example, NOTS has a maximum of 1 node per job, but DAVinCI does not have that restriction. Please refer to the introduction of each cluster for its requirements.
SLURM Batch Script Options
All jobs must be submitted via a SLURM batch script or by invoking sbatch at the command line. See the table below for SLURM submission options.
|#SBATCH --account=name||Required: You must specify both the account name and the partition to schedule jobs, especially to use a condo on the cluster. Use the command sacctmgr show assoc user=netID to show the accounts and partitions to which you have access.|
|#SBATCH --job-name=YourJobNameHere||Required: Assigns a job name. The default is the name of the SLURM job script.|
|#SBATCH --ntasks=2||Recommended: The number of tasks per job. Usually used for MPI jobs.|
|#SBATCH --nodes=1||Recommended: The number of nodes requested.|
|#SBATCH --ntasks-per-node=2||Recommended: The number of tasks per node. Usually used in combination with --nodes for MPI jobs.|
|#SBATCH --cpus-per-task=4||Recommended: The number of processes per task. Usually used for OpenMP or multi-threaded jobs.|
|#SBATCH --partition=partitionName||Recommended: Specify the name of the partition (queue) to use. Use this to specify the default partition or a special partition, i.e. a non-condo partition to which you have access.|
|#SBATCH --time=days-hh:mm:ss||Recommended: The maximum run time needed for this job, in days-hh:mm:ss. If not specified, a default run time will be chosen automatically.|
|#SBATCH --mem=1G||Optional: The maximum amount of physical memory used by any single process of the job, in [M]ega-, [G]iga-, or [T]erabytes. See our FAQ for more details.|
|#SBATCH --export=ALL||Required: Exports all environment variables to the job. See our FAQ for details.|
|#SBATCH --mail-user=YourEmailAddress||Recommended: Email address for job status messages.|
|#SBATCH --mail-type=ALL||Recommended: SLURM will notify the user via email when the job reaches any of the following states: BEGIN, END, FAIL, or REQUEUE.|
|#SBATCH --nodes=1 --exclusive||Optional: Using both of these options gives your job exclusive access to a node so that no other jobs can share it. Your job will have exclusive access to all of the resources (e.g. memory) of the entire node without interference from other jobs. Please see our FAQ for more details on exclusive access.|
|#SBATCH --output=slurm-%j.out||Optional: The full path for the standard output (stdout) and standard error (stderr) file "slurm-%j.out", where "%j" is replaced by the job ID. The current working directory is the default.|
|#SBATCH --error=slurm-%j.out||Optional: The full path for the standard error (stderr) file "slurm-%j.out". Use this only when you want to separate stderr from stdout. The current working directory is the default.|
Serial Job Script
A job script may consist of SLURM directives, comments and executable statements. A SLURM directive provides a way of specifying job attributes in addition to the command line options. For example, we could create a myjob.slurm script this way:
#!/bin/bash
#SBATCH --job-name=YourJobNameHere
#SBATCH --account=commons
#SBATCH --partition=commons
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=1000m
#SBATCH --time=00:30:00
#SBATCH --mail-user=YourEmailAddressHere
#SBATCH --mail-type=ALL

echo "My job ran on:"
echo $SLURM_NODELIST

if [[ -d $SHARED_SCRATCH/$USER && -w $SHARED_SCRATCH/$USER ]]
then
    cd $SHARED_SCRATCH/$USER
    srun /path/to/myprogram
fi
This example script will submit a job to the default partition using 1 processor and 1GB of memory per processor, with a maximum run time of 30 minutes.
On these clusters, the --ntasks-per-node option specifies the number of tasks to launch on each allocated node.
It is important to specify an accurate run time for your job in your SLURM submission script. Selecting eight hours for jobs that are known to run for much less time may result in the job being delayed by the scheduler due to an overestimation of the time the job needs to run.
The --mem value represents memory per processor core. If your --mem value multiplied by the number of tasks (--ntasks-per-node) exceeds the amount of memory per node, your job will not run. If your job is going to use the entire node, then you should use the --exclusive option (described in the table above) instead of the --mem or --ntasks-per-node options. It is good practice to specify the --mem option if you are going to be using less than an entire node and thus sharing the node with other jobs.
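As a quick sanity check, the arithmetic above can be sketched in a few lines of shell. The per-node memory below is a made-up figure for illustration only; substitute the actual value from your cluster's hardware documentation:

```shell
# Hypothetical numbers for illustration only; check your cluster's
# documentation for the real per-node memory.
NODE_MEM_GB=192        # assumed total memory on one node
MEM_PER_CORE_GB=4      # what you would request with --mem=4G
NTASKS_PER_NODE=24     # what you would request with --ntasks-per-node=24

# The scheduler rejects the job if tasks x memory-per-core exceeds the node.
TOTAL_GB=$((MEM_PER_CORE_GB * NTASKS_PER_NODE))
if [ "$TOTAL_GB" -le "$NODE_MEM_GB" ]; then
    echo "Request fits: ${TOTAL_GB}G of ${NODE_MEM_GB}G per node"
else
    echo "Request exceeds node memory: ${TOTAL_GB}G > ${NODE_MEM_GB}G"
fi
```

Running the same check before submission can save a round trip through the queue when you are tuning --ntasks-per-node and --mem together.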
If you need to debug your program and want to run in interactive mode, the same request above could be constructed like this (via the srun command):
srun --pty --partition=interactive --ntasks=1 --mem=1G --time=00:30:00 $SHELL
For more details on interactive jobs, please see our FAQ on this topic.
When you submit a job, it will inherit several environment variables that are automatically set by SLURM. These environment variables can be useful in your job submission scripts, as seen in the examples above. A summary of the most important variables is presented in the table below.
|$SHARED_SCRATCH||Location of shared scratch space. See our FAQ for more details.|
|$LOCAL_SCRATCH||Location of local scratch space on each node.|
|$SLURM_NODELIST||List of all nodes assigned to the job.|
|$SLURM_SUBMIT_DIR||Path from where the job was submitted.|
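As a minimal sketch, a batch script can record these variables in its output for later debugging. The job parameters below are placeholders, and outside of a running SLURM job the variables are unset, so the lines simply print empty values:

```shell
#!/bin/bash
#SBATCH --job-name=envDemo
#SBATCH --ntasks=1
#SBATCH --time=00:05:00

# Record where the job was submitted from and where it ran; these lines
# end up in the job's slurm-%j.out file, which helps when reconstructing
# what a past job did.
echo "Submitted from:  $SLURM_SUBMIT_DIR"
echo "Allocated nodes: $SLURM_NODELIST"
echo "Scratch area:    $SHARED_SCRATCH/$USER"
```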
For jobs that need two or more processors and are compiled with MPI libraries, you must use srun to launch your job. The job launcher's purpose is to spawn copies of your executable across the resources allocated to your job. We currently support srun for this task and do not support the mpirun or mpiexec launchers. By default srun only needs your executable, the rest of the information will be extracted from SLURM.
The following is an example of how to use srun inside your SLURM batch script. This example will run myMPIprogram as a parallel MPI code on all of the processors allocated to your job by SLURM:
#!/bin/bash
#SBATCH --job-name=YourJobNameHere
#SBATCH --account=commons
#SBATCH --partition=commons
#SBATCH --ntasks=24
#SBATCH --mem-per-cpu=1G
#SBATCH --time=00:30:00
#SBATCH --mail-user=YourEmailAddressHere
#SBATCH --mail-type=ALL

echo "My job ran on:"
echo $SLURM_NODELIST

if [[ -d $SHARED_SCRATCH/$USER && -w $SHARED_SCRATCH/$USER ]]
then
    cd $SHARED_SCRATCH/$USER
    srun /path/to/myMPIprogram
fi
This example script will submit a job to the default partition using 24 processor cores and 1GB of memory per processor core, with a maximum run time of 30 minutes.
The above example assumes that myMPIprogram is a program designed to be parallel (using MPI). If your program has not been parallelized, then running on more than one processor will not improve performance; it will waste processor time and could result in multiple copies of your program being executed.
The following example will run myMPIprogram on only four processors even if your batch script requested more than four.
srun -n 4 /path/to/myMPIprogram
To ensure that your job will be able to access an MPI runtime, you must load an MPI module before submitting your job, as follows:
module load GCC OpenMPI
Once your job script is ready, use sbatch to submit it as follows:
sbatch myjob.slurm
This will return a jobID number while the output and error stream of the job will be saved to one file inside the directory where the job was submitted, unless you specified otherwise.
The status of the job can be obtained using SLURM commands. See the table below for a list of commands:
Show a detailed list of all submitted jobs.
squeue -j jobID
Show a detailed description of the job given by jobID.
squeue --start -j jobID
Gives an estimate of the expected start time of the job given by jobID.
There are variations to these commands that can also be useful. They are described below:
Show a list of all running jobs.
squeue -u username
Show a list of all jobs in queue owned by the user specified by username.
scontrol show job jobID
Show a verbose description of the job given by jobID. The output can be used as a template when you are attempting to modify a job.
There are many different states that a job can be in after submission: BOOT_FAIL (BF), CANCELLED (CA), COMPLETED (CD), CONFIGURING (CF), COMPLETING (CG), FAILED (F), NODE_FAIL (NF), PENDING (PD), PREEMPTED (PR), RUNNING (R), SUSPENDED (S), TIMEOUT (TO), or SPECIAL_EXIT (SE). The squeue command with no arguments will list all jobs in their current state. The most common states are described below.
Running (R): These are jobs that are running.
Pending (PD): These jobs are eligible to run, but there are simply not enough resources to allocate to them at this time.
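When scripting around squeue output, a small helper like the following can translate the short state codes into readable descriptions. This is a sketch for illustration, not part of SLURM itself, and it covers only the most common codes listed above:

```shell
#!/bin/bash
# Hypothetical helper: map a squeue short state code to a description.
state_desc() {
    case "$1" in
        R)  echo "Running" ;;
        PD) echo "Pending: waiting for resources" ;;
        CD) echo "Completed" ;;
        CA) echo "Cancelled" ;;
        F)  echo "Failed" ;;
        TO) echo "Timeout" ;;
        *)  echo "Other state: $1" ;;
    esac
}

# Example: describe a state code taken from the squeue ST column.
state_desc PD
```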
A job can be deleted by using the scancel command as follows:
scancel jobID