High-throughput compute at scale.
Tightly-coupled HPC clusters with InfiniBand networking, parallel file systems, and MPI support. Run CFD, FEA, molecular dynamics, and weather simulations at any scale.
200 Gbps InfiniBand
Interconnect
Up to 100,000+
Cores per cluster
Parallel I/O
File system
Minutes
Provisioning
Machine families
Purpose-built configurations for every workload profile — from web serving to GPU-accelerated ML training.
HPC-Optimized
High-performance compute nodes with InfiniBand networking optimized for tightly-coupled, MPI-based workloads requiring low-latency inter-node communication.
Cores
88 – 176
InfiniBand
200 Gbps
MPI Latency
< 2 μs
Memory
DDR5
Purpose-built for tightly-coupled workloads.
InfiniBand, parallel storage, and job scheduling — everything HPC needs.
200 Gbps InfiniBand
NDR InfiniBand fabric with < 2 μs MPI latency. Topology-aware placement ensures optimal inter-node bandwidth.
Parallel file system
Managed Lustre and DAOS file systems delivering 100+ GB/s aggregate throughput for checkpoint and data staging.
Compact placement
Nodes placed close in the network topology for minimal hop count. Critical for applications sensitive to inter-node latency.
Job scheduler integration
Native integration with Slurm, PBS Pro, and HTCondor. Managed head nodes with automatic scaling based on job queue depth.
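With the managed Slurm integration, submitting a tightly-coupled MPI job can look like the sketch below. The partition name, core count mapping, and solver binary are illustrative assumptions, not fixed product values.

```shell
#!/bin/bash
#SBATCH --job-name=cfd-run
#SBATCH --nodes=32                 # request 32 HPC nodes
#SBATCH --ntasks-per-node=88      # matches the 88 cores of an h3-standard-88 (assumed)
#SBATCH --partition=hpc            # partition name is an assumption
#SBATCH --time=02:00:00

# srun launches the MPI ranks across the InfiniBand fabric;
# "./solver" stands in for your CFD/FEA binary.
srun ./solver input.cfg
```

The scheduler sees queue depth grow as jobs like this arrive, and the managed head node scales the cluster accordingly.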
Burst to cloud
Extend on-premises HPC clusters into the cloud during peak periods. Same scheduler, same workflows, instant capacity.
Bare metal HPC nodes
No hypervisor overhead. Direct MPI access to InfiniBand for maximum bandwidth and lowest possible latency.
Getting started
Launch your first instance in three steps. CLI, console, or API — your choice.
ur hpc clusters create my-cluster \
--node-type=h3-standard-88 \
--node-count=32 \
--scheduler=slurm \
--zone=eu-west1-b
Simulations at any scale.
CFD, weather prediction, molecular dynamics — burst to thousands of cores on demand.
Computational fluid dynamics
Run OpenFOAM, ANSYS Fluent, and STAR-CCM+ across hundreds of tightly-coupled nodes with InfiniBand networking and parallel I/O.
View tutorial
Suggested configuration
32× H3-standard-88 · InfiniBand · Lustre
Estimate your costs
Create detailed configurations to see exactly how much your architecture will cost. Pay for what you use, down to the second.
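As a rough sketch of per-second billing, the arithmetic below estimates the cost of a 32-node run. The hourly node rate is a made-up placeholder, not an actual price.

```shell
# Hypothetical per-node hourly rate (illustrative only; real rates vary by region).
NODE_RATE_PER_HOUR=4.93
NODE_COUNT=32
WALL_SECONDS=7200   # a two-hour CFD run

# Per-second billing: convert the hourly rate, then multiply by nodes and seconds.
COST=$(awk -v r="$NODE_RATE_PER_HOUR" -v n="$NODE_COUNT" -v s="$WALL_SECONDS" \
  'BEGIN { printf "%.2f", r / 3600 * n * s }')
echo "Estimated cost: \$${COST}"
```

Because billing is per second, shrinking wall-clock time by scaling out to more nodes can cost the same as a longer run on fewer nodes.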
Configuration 1
Platform & Architecture
Compute Resources
Storage
Cost Optimization
Cost details
MPI libraries plus Slurm and PBS Pro job schedulers included.
Works seamlessly with
Frequently asked questions
Scale your simulations.
Launch an HPC cluster in minutes. Burst to thousands of cores on demand.