Using SLURM on SCIAMA-4 (for PBS users)

Submitting/running jobs

For details on how to submit jobs (batch or interactive) on SCIAMA-4 using SLURM, please read this article.
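
As a quick illustration, a batch script is handed to the scheduler with sbatch and an interactive session on a compute node is started with sinteractive (the script name below is only a placeholder):

sbatch myjob.slurm    # submit a batch script to the scheduler (placeholder filename)
sinteractive          # request an interactive shell on a compute node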

Managing jobs

After submitting jobs, you can track their progress/status and, if necessary, cancel them. Please check this article for further details on job management.
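
For example, squeue lists your queued and running jobs, and scancel removes one by its job ID (12345 is a placeholder):

squeue -u $USER    # show only your own jobs
scancel 12345      # cancel the job with this ID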

Migration guide for Torque/PBS users

For people who are used to the “old” Torque/PBS system, here is a quick translation table for the most important commands:

Function                            Torque     SLURM
Interactive shell on compute node   qsub -I    sinteractive
Batch job submission                qsub       sbatch
Queue status                        qstat      squeue
Delete job                          qdel       scancel
Hold job                            qhold      scontrol hold
Release job                         qrls       scontrol release
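
For example, a queued job can be held back and later released like this (12345 stands for a real job ID as shown by squeue):

scontrol hold 12345       # keep the pending job from starting
scontrol release 12345    # allow it to be scheduled again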

Below is a simple job submission script. Essentially, #PBS is replaced by #SBATCH.

The Torque line of the form “#PBS -l nodes=2:ppn=2” is replaced by the lines “#SBATCH --nodes=2” and “#SBATCH --ntasks-per-node=2”.

A “queue” (-q) in Torque is replaced by a “partition” (-p) in SLURM.
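
For reference, a Torque header requesting the same resources as the SLURM script below might have looked roughly like this sketch; the queue name and walltime directive are illustrative, not taken from the old system's configuration:

#!/bin/bash
#PBS -l nodes=2:ppn=2        # becomes #SBATCH --nodes=2 and #SBATCH --ntasks-per-node=2
#PBS -l walltime=02:00:00    # becomes #SBATCH --time=0-2:00
#PBS -N starccm              # becomes #SBATCH --job-name=starccm
#PBS -q myqueue.q            # the queue (-q) becomes a partition (-p)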

#!/bin/bash
#SBATCH --nodes=2                        # number of nodes
#SBATCH --ntasks-per-node=2              # tasks (cores) per node
#SBATCH --time=0-2:00                    # walltime as days-hours:minutes (here 2 hours)
#SBATCH --job-name=starccm
#SBATCH -p sciama4.q                     # partition (the former "queue")
#SBATCH -D /users/burtong/test-starccm   # working directory of the job
#SBATCH --error=starccm.err.%j           # stderr file; %j is replaced by the job ID
#SBATCH --output=starccm.out.%j          # stdout file; %j is replaced by the job ID

##SLURM: ====> Job Node List (DO NOT MODIFY)
echo "Slurm nodes assigned :$SLURM_JOB_NODELIST"
echo "SLURM_JOBID="$SLURM_JOBID
echo "SLURM_JOB_NODELIST"=$SLURM_JOB_NODELIST
echo "SLURM_NNODES"=$SLURM_NNODES
echo "SLURMTMPDIR="$SLURMTMPDIR
echo "working directory = "$SLURM_SUBMIT_DIR
echo "SLURM_NTASKS="$SLURM_NTASKS

echo ------------------------------------------------------
echo 'This job is allocated on '${SLURM_NTASKS}' cpu(s)'
echo 'Job is running on node(s): '
echo $SLURM_JOB_NODELIST
echo ------------------------------------------------------

# Load the STAR-CCM+ module
module purge
module add starccm/12.06.011

# Run STAR-CCM+ in batch mode on all allocated tasks
starccm+ -rsh ssh -batch test1.sim -fabric tcp -power -podkey -np ${SLURM_NTASKS}
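
To try the example, save it to a file (the name starccm.slurm below is an arbitrary choice) and submit it with sbatch; because of the %j placeholders, the output and error files are written as starccm.out.<jobid> and starccm.err.<jobid>.

sbatch starccm.slurm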