eResearch

COMPUTE NODES
Number of Compute Nodes: 14
CPU Sockets: 28
Cores: 1152
GPUs: 6
Total Memory: 11.264 TB
Disk: ~829 TB
Theoretical Performance: over 500 TFLOPS (including GPU performance)

SHARED STORAGE
Disk Capacity (front-end storage array): 320 TB + 16 TB SSD (raw)
Back-end Capacity: 700+ TB

OTHER
Max Power: TBC

Hardware Information by System

Ada (LOGIN NODE)
Hostname: ada.cqu.edu.au
System:
CPU: 2x AMD EPYC 9254 (24 cores, 2.9 GHz) – total of 48 cores per node
Memory: 384 GB
Network: 400 Gb/s NDR InfiniBand
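
Access is via SSH to the login node. A minimal example (it assumes your account has been enabled for HPC access; "your-username" is a placeholder for your CQU username):

    ssh your-username@ada.cqu.edu.au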
HPC Storage
Capacity: 829 TB
Read IOPS: 22,300,000
Write IOPS: 3,400,000
Read Bandwidth: 540 GB/s
Write Bandwidth: 98.1 GB/s
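
Data can be staged onto this storage with standard transfer tools via the login node. A sketch (the destination path is an assumption; "your-username" and "~/data/" are placeholders – see the HPC user guides for the recommended transfer method):

    scp results.csv your-username@ada.cqu.edu.au:~/data/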
STANDARD COMPUTE NODES (HPC05-N001 – HPC05-N008)
System: HPE Apollo 2000
CPU: 2x AMD EPYC 9274F (48 cores, 3.6 GHz) – total of 96 cores per node
Memory: 768 GB
Network: 200 Gb/s NDR InfiniBand
L40S GPU COMPUTE NODES (HPC05-GLN001 – HPC05-GLN002)
System: HPE Apollo 2000
CPU: 2x AMD EPYC 9254 (24 cores, 2.9 GHz) – total of 48 cores per node
Memory: 512 GB
Network: 200 Gb/s NDR InfiniBand
GPU: 1x NVIDIA L40S 48 GB GPU
H100 GPU COMPUTE NODES (HPC05-GHN001 – HPC05-GHN002)
System: HPE Apollo 2000
CPU: 2x AMD EPYC 9254 (24 cores, 2.9 GHz) – total of 48 cores per node
Memory: 512 GB
Network: 200 Gb/s NDR InfiniBand
GPU: 2x NVIDIA H100 96 GB GPUs
LARGE COMPUTE NODES (HPC05-LN001 – HPC05-LN002)
System: HPE DL360
CPU: 2x AMD EPYC 9274F (48 cores, 2.9 GHz) – total of 96 cores per node
Memory: 1536 GB (~1.5 TB)
Storage: 2x 1.6 TB mixed-use SSDs
Network: 200 Gb/s NDR InfiniBand
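
As a cross-check, the per-node figures above reproduce the cluster totals: 14 nodes × 2 sockets = 28 CPU sockets; 8 standard nodes × 96 cores + 2 L40S nodes × 48 + 2 H100 nodes × 48 + 2 large nodes × 96 = 1152 cores; memory is 8 × 768 GB + 4 × 512 GB + 2 × 1536 GB = 11,264 GB = 11.264 TB; and GPUs total 2 × 1 L40S + 2 × 2 H100 = 6.

Jobs on these nodes are submitted through the Slurm scheduler. A minimal submission sketch (the resource values and module name here are assumptions – check the HPC user guides for the site's actual partitions and settings):

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=8
    #SBATCH --mem=32G
    #SBATCH --gres=gpu:1        # request one GPU (L40S/H100 nodes); omit for CPU-only jobs
    #SBATCH --time=01:00:00

    module load cuda            # module name is an assumption; see Software Module Information
    nvidia-smi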

 

Support

eresearch@cqu.edu.au

tasac@cqu.edu.au or 1300 666 620

Hacky Hour (3pm – 4pm every Tuesday)

High Performance Computing Teams site