My first Slurm job
Examples
Submit a simple MPI job
In this example we run a small MPI application, going through the following steps:
- Create a submission file
- Submit the job to the default partition
- Execute a simple MPI code
- Check the status of the job
- Read the output
- Create a submission file
vi my_first_slurm_job.sh
- Edit the file so it contains the following:
#!/bin/bash
#SBATCH --job-name=MyFirstSlurmJob
#SBATCH --time=00:10:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --partition=HPC_4_Days # Be sure to request the correct partition, or the job may be held in the queue (see the sinfo example below)
# Used to guarantee that the environment does not have any other loaded module
module purge
# Load software modules. Please check the software section for details
module load gcc63/openmpi/4.0.13
# Compile application
echo "=== Compiling ==="
mpicc -o cpi cpi.c
# Run the application. Note that the number of MPI tasks is set by the SBATCH directives above.
echo "=== Running ==="
srun cpi
echo "Finished with job $SLURM_JOBID"
- Submit the job
sbatch my_first_slurm_job.sh
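If the submission succeeds, sbatch replies with the ID of the newly queued job, e.g.:
Submitted batch job 1171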
- Check status of the job
$ squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
1171 HPC_4_Days MyFirstS username PD 0:00 1 wn075
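On a shared cluster squeue lists everyone's jobs; to show only your own, filter by user:
$ squeue -u $USER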
- Check further details about your job (very long output)
scontrol show job 1171
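Since the full record is long, it can help to filter for just the fields you need, for example the job state and elapsed time:
scontrol show job 1171 | grep -E 'JobState|RunTime'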
- Read the output of the job
If no output file is specified, Slurm will by default write the output of your run to a file named
slurm-{job_id}.out
e.g. slurm-1171.out
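To choose the file name yourself, add an --output directive to the submission script; Slurm expands %j to the job ID. The file name below is just an example:
#SBATCH --output=MyFirstSlurmJob_%j.out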
- Cancel your job
$ scancel 1171
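scancel also accepts a user filter, which is handy if you want to cancel all of your pending and running jobs at once:
$ scancel -u $USER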
MPI example (this is the cpi.c source compiled by the script above):
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    // Initialize the MPI environment
    MPI_Init(&argc, &argv);

    // Get the number of processes
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Get the rank of the calling process
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Get the name of the processor (node)
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    // Print a hello world message
    printf("Hello world from processor %s, rank %d out of %d processes\n",
           processor_name, world_rank, world_size);

    // Finalize the MPI environment
    MPI_Finalize();
    return 0;
}
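With --nodes=1 and --ntasks-per-node=16 as requested above, the output file should contain one line per rank, in no guaranteed order. Assuming the job ran on node wn075 (the node from the squeue example above), the lines would look like:
Hello world from processor wn075, rank 0 out of 16 processes
Hello world from processor wn075, rank 1 out of 16 processes
...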