SAMPLE PBS SUBMISSION SCRIPT

By default, anything you run on the CQU HPC systems should be submitted to the HPC Scheduler. The Scheduler checks whether the requested resources are available: if they are, it executes the job on one of the available compute nodes; if not, the request is queued until resources become available.

Running a program interactively on a login node is acceptable only when developing programs or running short tests. Any real job, especially one that runs for a significant length of time, should be submitted to the Scheduler so that it executes on the compute nodes.

Executing large jobs on a login node slows the node down and degrades performance for other users.

The following guide provides details on how to submit a simple program to execute on the HPC Facilities.

To submit a job to the HPC system, it is recommended to write a script file similar to the one below; using a script also makes it easy to re-submit the job later.

The variable $PBS_O_WORKDIR expands to the directory from which the PBS script was submitted. Replace the example email address with your own email address, and change the program name to the name of the executable you would like to run on the HPC system.

Note: every “[…]” is a placeholder that you must replace with your own value.

Example Script (/apps/samples/PBS/example.pbs)

###### Select resources #####
#PBS -N [Name of Job]
#PBS -l ncpus=[number of cpu's required, most likely 1]
#PBS -l mem=[amount of memory required]
#PBS -l walltime=[how long the job should run for - you may wish to remove this line if the length of time required is unknown]

#### Output File #####
#PBS -o $PBS_O_WORKDIR/[output (standard out) file name]

#### Error File #####
#PBS -e $PBS_O_WORKDIR/[error (standard error) file name]

##### Queue #####
#PBS -q workq

##### Mail Options #####
#PBS -m abe
#PBS -M [your email address]

##### Change to current working directory #####
cd $PBS_O_WORKDIR

##### Execute Program #####
./[program executable]

Real Example

###### Select resources #####
#PBS -N Job1
#PBS -l ncpus=1
#PBS -l mem=1g

#### Output File #####
#PBS -o $PBS_O_WORKDIR/Job1.out

#### Error File #####
#PBS -e $PBS_O_WORKDIR/Job1.err

##### Queue #####
#PBS -q workq

##### Mail Options #####
#PBS -m abe
#PBS -M j.bell@cqu.edu.au

##### Change to current working directory #####
cd $PBS_O_WORKDIR

##### Execute Program #####
./myprogram
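
The `./myprogram` line above can be any executable: a compiled binary or a script. As a purely hypothetical stand-in (not a program referenced elsewhere in this guide), the following shell script would serve as a minimal `myprogram` for testing the submission process; it simply sums the integers 1 to 100 and prints the result:

```shell
#!/bin/bash
# Hypothetical stand-in for "myprogram": a trivial computation whose
# printed result will appear in the job's standard-out file (Job1.out above).
total=0
for i in $(seq 1 100); do
    total=$((total + i))
done
echo "Sum of 1..100 is $total"
```

Make the file executable with `chmod +x myprogram` before submitting; the Scheduler runs it on a compute node just as a terminal would.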

Executing the Script on the HPC System

To submit a job, log in to one of the login nodes and run the following in a terminal:

qsub [pbs_script_file]
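
On success, qsub prints the job ID assigned by the Scheduler, which you can then use to track the job. A session looks something like this (the job number and server name are illustrative only; yours will differ):

```
$ qsub example.pbs
123456.[server name]
```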

To check whether your job is running, queued, or completed, use one of the following commands:

qstat -an

qusers

myjobs

Support

eresearch@cqu.edu.au

tasac@cqu.edu.au OR 1300 666 620

Hacky Hour (3pm – 4pm every Tuesday)

High Performance Computing Teams site