The Interactive partition is a higher-priority partition intended for debugging sessions and interactive jobs. Jobs in this partition are limited to a maximum of four nodes and a maximum run time of 30 minutes. When you submit a job to this partition, you are given an interactive command-line prompt on a compute node, where you can run your code by hand and debug it. There is usually little or no wait time for access. Note that this partition is not intended for running short compute jobs; it is reserved for debugging sessions and for jobs that genuinely require interactive execution. The remainder of this document describes how to use the partition.
Submitting an Interactive Job
The usual way to submit jobs to a partition is with a SLURM batch script. For the Interactive partition, however, you submit the job directly on the command line with srun (rather than with a SLURM batch script), using the --pty option to attach your terminal to the job, as illustrated in the following example:
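A minimal sketch, assuming the partition is named interactive on your cluster (the actual partition name varies by site):

```shell
# Request an interactive shell on one processor core of the Interactive
# partition. --pty attaches your terminal to the job, and --export=ALL
# forwards your current environment to it. The partition name
# "interactive" is an assumption; check your site's documentation.
srun --partition=interactive --export=ALL --ntasks=1 --pty /bin/bash
```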
The srun arguments in this example represent the following options, which you will recognize from your SLURM batch scripts:
Selects the Interactive partition.
Exports all environment variables to the job.
Requests a single processor core.
You can specify any SLURM batch script argument on the srun command line. Once the interactive job is submitted and begins to execute, you will receive a command-line prompt on a compute node, where you can run your code manually for debugging, as follows:
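A session at that prompt might look like the following sketch; my_prog and its input file are placeholders for your own code:

```shell
# You are now in a shell on the compute node and can run your program
# by hand, as many times as needed within the 30-minute limit.
./my_prog input.dat        # run the program directly
gdb ./my_prog              # or run it under a debugger such as gdb
exit                       # leave the shell to end the interactive job
```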
Submitting an MPI Interactive Job
If you need to submit a multiprocessor job to the Interactive partition, the procedure is the same as described above, except that you request a larger number of processor cores and/or nodes, as in the following example:
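One common pattern, sketched here under the same assumption that the partition is named interactive: because srun --pty with --ntasks=24 would start 24 copies of the shell, multi-core interactive allocations are often requested with salloc instead, which gives you a prompt inside the allocation (on the first compute node or on the submitting node, depending on how the site has configured SLURM):

```shell
# Allocate 24 processor cores (here, 2 nodes with 12 tasks each) in the
# Interactive partition and open a prompt inside the allocation. The
# partition name and the node/task counts are assumptions for this sketch.
salloc --partition=interactive --nodes=2 --ntasks-per-node=12
```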
Running this command gives you twenty-four processor cores in the Interactive partition. You will be given an interactive prompt on one of the allocated nodes, where you can use srun to launch a multiprocessor job for debugging. Your MPI code will have access to all of the processors allocated to your job even though your command-line prompt is on only a single node; SLURM and srun coordinate to distribute the launched processes across the full allocation.
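From the prompt inside the allocation, a launch might look like the following sketch; mpi_prog is a placeholder for your own MPI executable:

```shell
# srun inherits the allocation, so it launches one process per allocated
# task -- 24 processes in this example -- spread across the nodes.
srun ./mpi_prog
# Equivalent, with the task count stated explicitly:
srun --ntasks=24 ./mpi_prog
```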