HPC Clusters

High-performance compute at scale.

Tightly-coupled HPC clusters with InfiniBand networking, parallel file systems, and MPI support. Run CFD, FEA, molecular dynamics, and weather simulations at any scale.

[Cluster diagram: head node connected to a 4 × 4 grid of compute nodes (NODE_00 – NODE_33). MPI optimized · InfiniBand fabric · 10,000+ cores]

Interconnect: 200 Gbps InfiniBand

Cores per cluster: 100,000+

File system: Parallel I/O

Provisioning: Minutes

Machine families

Purpose-built configurations for every workload profile — from web serving to GPU-accelerated ML training.

H3 / C3-HPC

HPC-Optimized

High-performance compute nodes with InfiniBand networking optimized for tightly-coupled, MPI-based workloads requiring low-latency inter-node communication.

CFD/FEA · Weather modeling · Molecular dynamics · Seismic processing
View all configurations

Cores: 88 – 176

InfiniBand: 200 Gbps

MPI latency: < 2 μs

Memory: DDR5

Purpose-built for tightly-coupled workloads.

InfiniBand, parallel storage, and job scheduling — everything HPC needs.

200 Gbps InfiniBand

InfiniBand fabric with < 2 μs MPI latency. Topology-aware placement ensures optimal inter-node bandwidth.

Parallel file system

Managed Lustre and DAOS file systems delivering 100+ GB/s aggregate throughput for checkpoint and data staging.
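Throughput at that level depends on striping files across many storage targets. A minimal sketch using the standard Lustre lfs tool; the mount point /lustre/scratch is illustrative:

Terminal
# Stripe the checkpoint directory across all available OSTs with a
# 4 MB stripe size; files created inside inherit this layout
lfs setstripe -c -1 -S 4M /lustre/scratch/checkpoints
# Verify the resulting layout
lfs getstripe /lustre/scratch/checkpoints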

Compact placement

Nodes placed close in the network topology for minimal hop count. Critical for applications sensitive to inter-node latency.
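A sketch of requesting compact placement at creation time; note that the --placement flag is an assumed option here, not confirmed CLI syntax:

Terminal
# Hypothetical flag: --placement=compact (check the CLI reference)
ur hpc clusters create latency-sensitive \
  --node-type=h3-standard-88 \
  --node-count=64 \
  --scheduler=slurm \
  --placement=compact \
  --zone=eu-west1-b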

Job scheduler integration

Native integration with Slurm, PBS Pro, and HTCondor. Managed head nodes with automatic scaling based on job queue depth.
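Once the cluster is up, jobs are submitted through the scheduler as usual. A minimal sketch of a Slurm batch script for an MPI job across 32 nodes; solver.x is a stand-in for your application binary:

#!/bin/bash
#SBATCH --job-name=mpi-solve
#SBATCH --nodes=32
#SBATCH --ntasks-per-node=88   # one rank per core on an 88-core node
#SBATCH --time=02:00:00

# srun launches the MPI ranks across every allocated node
srun ./solver.x

Submit it with sbatch, and the managed head node scales compute capacity to match queue depth.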

Burst to cloud

Extend on-premises HPC clusters into the cloud during peak periods. Same scheduler, same workflows, instant capacity.
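With Slurm, cloud bursting is typically wired up through its power-saving hooks. A sketch of the relevant slurm.conf lines, assuming hypothetical node names and resume/suspend scripts that call the provider API:

# slurm.conf excerpt; node names and script paths are illustrative
NodeName=burst-[001-064] CPUs=88 State=CLOUD
PartitionName=burst Nodes=burst-[001-064] MaxTime=INFINITE State=UP
ResumeProgram=/opt/cluster/bin/resume-nodes.sh    # provisions cloud nodes on demand
SuspendProgram=/opt/cluster/bin/suspend-nodes.sh  # releases nodes once idle
SuspendTime=600      # power down after 10 minutes idle
ResumeTimeout=300    # allow 5 minutes for provisioning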

Bare metal HPC nodes

No hypervisor overhead. Direct MPI access to InfiniBand for maximum bandwidth and lowest possible latency.
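At the MPI level, that typically means launching ranks pinned to cores with traffic routed over the HCA. A hedged Open MPI sketch; the UCX device name mlx5_0:1 is an assumption, so check ibstat on your nodes:

Terminal
# 2816 ranks = 32 nodes × 88 cores, one rank per core, bound for locality
mpirun -np 2816 -x UCX_NET_DEVICES=mlx5_0:1 \
  --map-by ppr:88:node --bind-to core \
  ./solver.x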

Getting started

Launch your first cluster in three steps. CLI, console, or API — your choice.

Terminal
ur hpc clusters create my-cluster \
  --node-type=h3-standard-88 \
  --node-count=32 \
  --scheduler=slurm \
  --zone=eu-west1-b
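Once provisioning completes, a quick smoke test from the head node confirms every node joined the cluster (this assumes the Slurm scheduler selected above):

Terminal
# List partitions and node states
sinfo
# Run a trivial task on each of the 32 nodes to confirm connectivity
srun --nodes=32 hostname | sort | uniq -c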

Simulations at any scale.

CFD, weather prediction, molecular dynamics — burst to thousands of cores on demand.

Computational fluid dynamics

Run OpenFOAM, ANSYS Fluent, and STAR-CCM+ across hundreds of tightly-coupled nodes with InfiniBand networking and parallel I/O.

View tutorial

Suggested configuration

32× H3-standard-88 · InfiniBand · Lustre
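As a worked example on that configuration, a typical OpenFOAM run decomposes the mesh, solves in parallel over MPI, and reassembles the results; the rank count assumes the 32 × 88-core setup above:

Terminal
# Split the mesh into one subdomain per rank (numberOfSubdomains in
# system/decomposeParDict must match -np)
decomposePar
# Run the solver across the cluster: 2816 ranks = 32 nodes × 88 cores
mpirun -np 2816 simpleFoam -parallel
# Merge the decomposed results for post-processing
reconstructPar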

Estimate your costs

Create detailed configurations to see exactly how much your architecture will cost. Pay for what you use, down to the second.

Configuration 1

Estimated: $154.54/mo. Preemptible instances save up to 70%, though capacity may be reclaimed.

Cost details

4 vCPU × 1 GB compute: $144.54
Persistent storage: $10.00
Total: $154.54/mo

MPI, Slurm, and PBS Pro job schedulers included.

Works seamlessly with

InfiniBand Fabric
Parallel File System
Bare Metal Nodes
Cloud Monitoring
IAM
Cloud Logging


Scale your simulations.

Launch an HPC cluster in minutes. Burst to thousands of cores on demand.