• eResearch
    • Collaborative Technologies
      • ARDC (Australian Research Data Commons)
      • ARDC Nectar Research Cloud
      • Australian Access Federation
      • QRIScloud
      • Video Collaboration
    • Data Management
      • Research Data Management Plans
    • Data Services
      • Australian and International Data Portals
      • CQUni Research Data Storage Options
      • CQUniversity Research Data Storage
      • Geoscience Data Portals
    • eResearch and Security: MFA and CyberSafety
      • Encrypting Data on Portable Devices
    • High Performance Computing
      • The History of CQU’s HPC Facilities
        • Ada Lovelace Cluster (New HPC)
        • Marie Curie Cluster (Current HPC)
        • Einstein Cluster (Decommissioned)
        • Isaac Newton HPC Facility (Decommissioned)
      • HPC User Guides and FAQs
        • Basics of working on the HPC
        • Getting started on CQUniversity’s Ada Lovelace HPC System
        • Graphical Connection to the HPC System
        • Compiling Programs (and using the optimization flags)
        • Connecting to the Marie Curie Cluster
        • Finding Installed Software
        • Frequently Asked Questions
        • Graphical Connection to HPC via Open OnDemand
        • HPC Job Scheduler
        • HPC Troubleshooting
        • Machine and Deep Learning
        • PBS Commands
        • PBS to Slurm Command tables (HPC Scheduler)
        • Running LLMs on the HPC System
        • Running Python on HPC
        • Simple Unix Commands
        • Software Module Information
        • Submitting an Interactive Job
        • Transferring Files to the HPC System
        • Transferring Files to the HPC System (Ada)
        • Using Abaqus
        • Using ANSYS (Fluent) on the HPC System
        • Using APSIM
        • Using HPC Scheduler on Ada Lovelace Cluster
        • Using MATLAB
        • Using R
        • Virtualisation and Containers
      • HPC Community
      • HPC Related Links
      • HPC Sample Code Scripts
        • MATLAB Sample Scripts
        • Multiple Job Submission
        • Multiple Run Job Submission
        • PBS Job Array Submission
        • R Sample Scripts
        • Sample PBS Submission Script
        • Sample Slurm Submission Script
      • HPC Software
        • Mathematica Sample Scripts
    • Research Software
    • Scholarly Communication
    • Survey Tools
    • Training
      • QCIF – Queensland Cyber Infrastructure Foundation
      • Teaching Lab Skills for Scientific Computing

eResearch

COMPUTE NODES
Number of Compute Nodes: 20
CPU Sockets: 40
Cores: 528
GPUs: 4
Total Memory: 7.296 TB
Disk: To be confirmed
Theoretical Performance: ~63.65 TFLOPS

SHARED STORAGE
Disk Capacity (front-end storage array): 320 TB + 16 TB SSD (raw)
Back-end Capacity: 700+ TB

OTHER
Max Power: TBC
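The ~63.65 TFLOPS figure is consistent with a standard peak-performance estimate built from the per-system specs further down this page. The sketch below assumes 32 double-precision FLOPs per cycle per core for the Skylake CPUs (AVX-512 with two FMA units) and roughly 4.7 DP TFLOPS per NVIDIA P100; both throughput figures are vendor peak numbers assumed here, not values stated on this page.

```python
# Hedged sketch: theoretical peak of the cluster, summed from the node
# specs listed below. Assumes 32 DP FLOPs/cycle/core (Skylake AVX-512 FMA)
# and ~4.7 DP TFLOPS per NVIDIA P100 -- assumed vendor peak figures.

FLOPS_PER_CYCLE = 32  # double precision, AVX-512, 2 FMA units per core

# (node count, cores per node, clock in GHz)
cpu_nodes = [
    (14, 24, 2.6),  # standard compute nodes (2x Xeon 6126)
    (4, 24, 2.6),   # graphic compute nodes (2x Xeon 6126)
    (2, 48, 2.9),   # large compute nodes (2x Xeon 8268)
]

cpu_tflops = sum(n * cores * ghz * FLOPS_PER_CYCLE / 1000
                 for n, cores, ghz in cpu_nodes)
gpu_tflops = 4 * 4.7  # 4 x P100

print(f"{cpu_tflops + gpu_tflops:.2f} TFLOPS")  # prints "63.65 TFLOPS"
```

The CPU portion contributes ~44.85 TFLOPS and the four P100s ~18.8 TFLOPS, which together reproduce the quoted figure.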

Hardware Information for Each System

MARIE (LOGIN NODE)  
Hostname: marie.cqu.edu.au
System: HPe Apollo 2000
CPU: 2x Intel Xeon Skylake 6126 (12 core, 2.6GHz, 125W) – Total of 24 Cores per node
Memory: 192GB
Storage: 2 x 1TB 7.2K SATA
Network: ??? x GigE ports, 2 x 10GigE ports, 1 x EDR 4X InfiniBand
   
CURIE (LOGIN NODE)  
Hostname: curie.cqu.edu.au
System: HPe Apollo 2000
CPU: 2x Intel Xeon Skylake 6126 (12 core, 2.6GHz, 125W) – Total of 24 Cores per node
Memory: 192GB
Storage: 2 x 1TB 7.2K SATA
Network: ??? x GigE ports, 2 x 10GigE ports, 1 x EDR 4X InfiniBand
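Both login nodes above are the entry points to the cluster. A typical session, assuming standard SSH access on the default port with your CQU credentials (yourusername below is a placeholder, not an account format stated on this page):

```shell
# Connect to the Marie Curie cluster via either login node
# (yourusername is a placeholder for your CQU username).
ssh yourusername@marie.cqu.edu.au
# or equivalently:
ssh yourusername@curie.cqu.edu.au

# Copy a file to your HPC home directory over the same SSH transport:
scp results.csv yourusername@marie.cqu.edu.au:~/
```

See the "Connecting to the Marie Curie Cluster" and "Transferring Files to the HPC System" user guides linked in the menu for the supported clients and any MFA requirements.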
   
GALILEO (STORAGE NODE)  
Hostname: galileo.cqu.edu.au
System: HPe DL380
CPU: 2x Intel Xeon Skylake 6134 (8 cores, 3.2GHz, 130W)
Memory: 384GB
Storage: 2 x 960GB SSDs RAID1
Network: 2 x 2p 32Gb FC, 2 x 2p 16Gb FC, 1 x 4p SAS, 1 x 2p EDR InfiniBand, 1 x 2p 10GbE
   
ATTACHED STORAGE (DETAILS TO BE CONFIRMED)
System: IS5700
Controllers: Dual controllers
Cache:  
Network:  
Storage:  
Usable Storage:  
   
STANDARD COMPUTE NODES (HPC04-N001 – HPC04-N014)
System: HPe Apollo 2000
CPU: 2x Intel Xeon Skylake 6126 (12 core, 2.6GHz, 125W) – Total of 24 Cores per node
Memory: 192GB
Storage: 2 x 1TB 7.2K SATA
Network: ?? x GigE ports, 1 x EDR InfiniBand
   
GRAPHIC COMPUTE NODES (HPC04-GN001 – HPC04-GN004)
System: HPe Apollo 2000
CPU: 2x Intel Xeon Skylake 6126 (12 core, 2.6GHz, 125W) – Total of 24 Cores per node
Memory: 384GB
Storage: 2 x 1TB 7.2K SATA
Network: ?? x GigE ports, 1 x EDR InfiniBand
GPU: 1 x NVIDIA P100
   
LARGE COMPUTE NODES (HPC04-LN001 – HPC04-LN002)
System: HPe DL360
CPU: 2x Intel Xeon Skylake 8268 (24 core, 2.9GHz, 125W) – Total of 48 Cores per node
Memory: 1536GB (~1.5TB)
Storage: 2 x 1.6TB mixed-use SSDs
Network: 2 x GigE ports, 1 x EDR InfiniBand
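The node classes above map directly onto scheduler resource requests. A minimal PBS submission script sketch for one whole standard compute node (24 cores, ~192GB as listed above); the job name, walltime, and module name are illustrative assumptions, not values from this page:

```shell
#!/bin/bash
# Sketch of a PBS job script for the Marie Curie cluster.
# Requests one full standard compute node; adjust ncpus/mem for
# the graphic (add a GPU request) or large (up to ~1.5TB) nodes.
#PBS -N example_job
#PBS -l select=1:ncpus=24:mem=190gb
#PBS -l walltime=04:00:00

cd $PBS_O_WORKDIR    # run from the directory the job was submitted from
module load python   # software is provided via environment modules
python my_script.py
```

Submit with `qsub example_job.pbs`. The "Sample PBS Submission Script" and "HPC Job Scheduler" guides in the menu cover the queue names and exact resource syntax in use on this system.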

Support

eresearch@cqu.edu.au

tasac@cqu.edu.au or 1300 666 620

Hacky Hour (3pm – 4pm every Tuesday)

High Performance Computing Teams site