eResearch

R Sample Scripts for Slurm:

To submit an R job to the cluster, write a submission script similar to the one below. Lines beginning with "#####" are comments; lines beginning with "#SBATCH" are directives read by the Slurm scheduler.

The variable $SLURM_SUBMIT_DIR expands to the directory from which the script was submitted. Replace the example email address with your own, and change the R script file name to the name of the R file you want executed on the cluster.

Note that every bracketed placeholder ("[…]") must be replaced with a value of your own.

Example R Slurm Submission Script (R.slurm)

#!/bin/bash
##### Select resources #####
#SBATCH -J [Name of Job]
#SBATCH -c [number of CPUs required, most likely 1]  # CPUs per task
#SBATCH --mem=[amount of memory required (e.g., 4G)]
#SBATCH -t [how long the job should run for, e.g., 01:00:00 - remove this line if the time required is unknown]
##### Output File #####
#SBATCH -o [output_file].out  ## if omitted, defaults to slurm-[Job Number].out
##### Error File #####
#SBATCH -e [error_file].err  ## if omitted, defaults to slurm-[Job Number].err
##### Queue #####
#SBATCH -p workq
##### Mail Options #####
#SBATCH --mail-type=ALL  # Email notifications for job events (begin, end, fail)
#SBATCH --mail-user=[email address]
##### Change to current working directory #####
cd $SLURM_SUBMIT_DIR
##### Execute Program #####
R --vanilla < [Your R file].R > [R output file name]
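One way to avoid hand-editing the placeholders is to generate a concrete script from a few shell variables. A sketch, assuming hypothetical file and job names (R-demo, analysis.R) that you would replace with your own:

```shell
#!/bin/bash
# Generate a concrete Slurm submission script from shell variables,
# so no unfilled "[...]" placeholders reach the scheduler by mistake.
JOB_NAME="R-demo"        # hypothetical job name
RSCRIPT="analysis.R"     # hypothetical R input file
OUTFILE="analysis.Rout"  # hypothetical R output file
EMAIL="you@cqu.edu.au"   # replace with your own address

cat > R.slurm <<EOF
#!/bin/bash
#SBATCH -J ${JOB_NAME}
#SBATCH -c 1
#SBATCH --mem=4G
#SBATCH -p workq
#SBATCH -o ${JOB_NAME}.out
#SBATCH -e ${JOB_NAME}.err
#SBATCH --mail-type=ALL
#SBATCH --mail-user=${EMAIL}
cd \$SLURM_SUBMIT_DIR
R --vanilla < ${RSCRIPT} > ${OUTFILE}
EOF

# Sanity check: no unfilled "[...]" placeholders remain.
if grep -q '\[' R.slurm; then
  echo "placeholders remain"
else
  echo "script ready"
fi
```

The `\$SLURM_SUBMIT_DIR` escape keeps that variable literal in the generated file, so Slurm expands it at run time rather than at generation time.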
 

Real Example

#!/bin/bash
##### Select resources #####
#SBATCH -J R-Job1
#SBATCH -c 1
#SBATCH --mem=4G
##### Output File #####
#SBATCH -o R-job1.out
##### Error File #####
#SBATCH -e R-job1.err
##### Queue #####
#SBATCH -p workq
##### Mail Options #####
#SBATCH --mail-type=ALL
#SBATCH --mail-user=l.decosta@cqu.edu.au
##### Change to current working directory #####
cd $SLURM_SUBMIT_DIR
##### Execute Program #####
R --vanilla < input.R > results
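For reference, input.R in the example above is just an ordinary R script. A minimal hypothetical version, written out via a here-document so the file is exactly what `R --vanilla < input.R` will read on stdin (everything it prints, echoed commands included, lands in the "results" file):

```shell
#!/bin/bash
# Create a minimal, hypothetical input.R for the job above.
cat > input.R <<'EOF'
x <- rnorm(1000)          # simulate 1000 standard-normal draws
summary(x)                # print a five-number summary plus the mean
write.csv(data.frame(x), "draws.csv", row.names = FALSE)
EOF
wc -l < input.R   # the file is 3 lines long
```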

Executing script on the cluster

The Ada Lovelace HPC uses the Slurm job scheduler to queue and run jobs on the various compute nodes. To submit a job, execute the command:

sbatch [slurm_script_file]

To check whether your job is running, queued, or has completed, use:

squeue -u [username]
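The submit-then-monitor steps can be wrapped in a small helper. A sketch, to be run on a login node where Slurm is installed; `submit_and_watch` is a hypothetical name, while `sbatch --parsable`, `squeue -u`, and `sacct -j` are standard Slurm options:

```shell
#!/bin/bash
# Submit a Slurm script, note its job ID, then check on it.
submit_and_watch() {
  if ! command -v sbatch >/dev/null; then
    echo "Slurm not found - run this on a login node"
    return 1
  fi
  local jobid
  jobid=$(sbatch --parsable "$1")  # --parsable prints just the numeric job ID
  squeue -u "$USER"                # your jobs still pending or running
  sacct -j "$jobid" --format=JobID,State,Elapsed  # state once it finishes
}

submit_and_watch R.slurm || true
```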

Support

eresearch@cqu.edu.au

tasac@cqu.edu.au OR 1300 666 620

Hacky Hour (3pm – 4pm every Tuesday)

High Performance Computing Teams site