SAMPLE SLURM SUBMISSION SCRIPT

By default, anything you run on the CQU HPC systems should be submitted to the HPC Scheduler.  The Scheduler checks whether the requested resources are available: if they are, the job is executed on one of the available compute nodes; if not, the request is queued until resources become available.

Running a program interactively on a Login node is perfectly acceptable when developing programs or running short tests.  Anything else, especially jobs that run for a reasonable amount of time, should be submitted to the scheduler so that it executes on the compute nodes.

Executing large jobs on a “Login” node slows the node down and degrades performance for every other user.
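
If you need to work interactively with more than a quick test, Slurm can also allocate an interactive session on a compute node instead of the login node. A minimal sketch, assuming the workq partition used in the example further down (adjust CPUs, memory, and time to suit):

# Request an interactive shell on a compute node: 1 CPU, 4 GB RAM, 60 minutes
srun -p workq -c 1 --mem=4G --time=60 --pty bash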

The following guide provides details on how to submit a simple program to execute on the HPC Facilities.

To submit a job to the HPC system, it is recommended to write a script file similar to the one below, which also makes the job easy to re-submit later.

The variable $SLURM_SUBMIT_DIR holds the directory from which the job was submitted, i.e. where the Slurm script file is located and launched from. Replace the example email address with your own email address, and change the program name to the name of the executable file you would like to run on the HPC system.

Note that everything in “[…]” is a placeholder you must fill in.

Example Script (example.slurm)

#!/bin/bash
###### Select resources #####
#SBATCH -J [Name of Job]
#SBATCH -c [number of CPUs required, most likely 1]
#SBATCH --mem=[amount of memory required]G
#SBATCH -p [partition name]
#SBATCH -t [how long the job should run for, in minutes]   ## You may wish to remove this line if the length of time required is unknown
#
#### Output File ##### 
#SBATCH -o [output_file].out                               ## If omitted, defaults to slurm-[Job Number].out
#
#### Error File ##### 
#SBATCH -e [error_file].err                                ## If omitted, stderr is joined with stdout in slurm-[Job Number].out
#
##### Mail Options #####
#SBATCH --mail-type=ALL                                    ## BEGIN, END, FAIL, INVALID_DEPEND, REQUEUE, STAGE_OUT, TIME_LIMIT, TIME_LIMIT_50/80/90, or ALL
#SBATCH --mail-user=[your email address]
#
##### Change to current working directory #####
cd $SLURM_SUBMIT_DIR

##### Execute Program #####
./[program executable]

Real Example

#!/bin/bash
###### Select resources #####
#SBATCH -J Job1
#SBATCH -c 1
#SBATCH --mem=4G
#SBATCH -p workq
#
#### Output File #####
#SBATCH -o job1.out  
#
#### Error File #####
#SBATCH -e Job1.err
#
##### Mail Options #####
#SBATCH --mail-type=BEGIN,END,FAIL,TIME_LIMIT_50     # will send email when halfway through time allotment
#SBATCH --mail-user=l.decosta@cqu.edu.au
#
##### Change to current working directory #####
cd $SLURM_SUBMIT_DIR

##### Execute Program #####
module load Python/3.12.3-GCCcore-13.3.0
python ./myprogram.py
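
Once this job finishes, its standard output and error streams end up in the files named by -o and -e, in the directory the job was submitted from. For example:

cat job1.out
cat Job1.err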

Executing the Script on the HPC System

To submit a job, log in to one of the “login nodes” and run the following command in a terminal:

sbatch [slurm_script_file].slurm
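
On success, sbatch prints the ID Slurm has assigned to the job (the number below is illustrative):

Submitted batch job 123456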

To check whether your job is running, queued, or completed, use one of the following commands:

squeue

squeue -u [username]
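
squeue reports each job’s ID, partition, name, user, state (ST column: PD = pending, R = running), elapsed time, and allocated nodes. To cancel a queued or running job (standard Slurm, though not part of the command list above), pass its job ID to scancel:

scancel [job id]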

Support

eresearch@cqu.edu.au

tasac@cqu.edu.au OR 1300 666 620

Hacky Hour (3pm – 4pm every Tuesday)

High Performance Computing Teams site