Parallel File System

High-performance compute storage.

POSIX-compliant parallel file system delivering 100+ GB/s aggregate throughput. Purpose-built for HPC, AI training, and large-scale simulations.

Aggregate Throughput: 100+ GB/s

100+ GB/s throughput

Millions of IOPS

Fully POSIX compliant

Petabytes of capacity


100+ GB/s throughput

Aggregate throughput scales with cluster size. Optimized for sequential and random IO patterns.
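As a rough sketch of what "scales with cluster size" means: aggregate throughput grows roughly linearly with the number of storage targets that clients stripe across. The per-target figure below is purely illustrative, not a published spec.

```python
# Back-of-envelope throughput model for a parallel file system.
# Assumes clients stripe I/O evenly across storage targets and
# scaling is ideal; 2.5 GB/s per target is a made-up figure.
PER_TARGET_GBPS = 2.5

def aggregate_throughput(num_targets):
    """Ideal aggregate throughput in GB/s with linear scaling."""
    return num_targets * PER_TARGET_GBPS

print(aggregate_throughput(40))  # 40 targets -> 100.0 GB/s
```

In practice, contention, network topology, and metadata overhead keep real clusters below the ideal line, which is why the headline figure is quoted as an aggregate across the whole deployment.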

Millions of IOPS

Metadata and data servers scale independently for optimal performance.

Full POSIX semantics

Standard POSIX file operations including locks, hard links, and extended attributes.
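These semantics can be exercised with ordinary standard-library calls. The sketch below runs against a local temp directory for illustration; on a POSIX-compliant mount the same calls behave identically.

```python
import fcntl
import os
import tempfile

# Exercise two of the POSIX semantics named above: hard links
# and advisory locks. A local temp dir stands in for the mount.
d = tempfile.mkdtemp()
path = os.path.join(d, "data.bin")
with open(path, "wb") as f:
    f.write(b"payload")

os.link(path, os.path.join(d, "data-link.bin"))  # hard link
print(os.stat(path).st_nlink)  # 2: the original plus the link

with open(path, "rb") as f:
    fcntl.flock(f, fcntl.LOCK_SH)  # shared advisory lock
    fcntl.flock(f, fcntl.LOCK_UN)  # release
```

Extended attributes (e.g. `os.setxattr` on Linux) follow the same pattern, though support depends on the kernel and mount options.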

Petabyte scale

Scale to petabytes of capacity with automatic data balancing across storage targets.
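One common way balancing works in parallel file systems is round-robin striping of file chunks across storage targets. This is a generic sketch of the idea, not the service's actual placement algorithm; chunk size and target count are illustrative.

```python
# Sketch of round-robin striping: a file's chunks are dealt out
# across storage targets so no single target holds the whole file.
CHUNK_MB = 1  # illustrative chunk size

def place_chunks(file_size_mb, num_targets):
    """Map target index -> number of chunks it receives."""
    counts = {t: 0 for t in range(num_targets)}
    for chunk in range(file_size_mb // CHUNK_MB):
        counts[chunk % num_targets] += 1
    return counts

print(place_chunks(8, 4))  # {0: 2, 1: 2, 2: 2, 3: 2}
```

Even placement like this is what lets capacity and throughput grow together as targets are added.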

InfiniBand support

Native InfiniBand and RDMA for the lowest possible storage latency.

Tiering to object storage

Automatically tier cold data to object storage for cost optimization.
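A typical tiering policy is age-based: files untouched past a threshold become candidates for the object tier. The 30-day threshold below is a hypothetical policy value, not a documented default of this service.

```python
import time

# Sketch of an age-based cold-data check. Files whose last access
# is older than the threshold would be tiered to object storage.
COLD_AFTER_SECONDS = 30 * 24 * 3600  # hypothetical 30-day policy

def is_cold(last_access_ts, now=None):
    """True if the file has not been accessed within the threshold."""
    now = time.time() if now is None else now
    return (now - last_access_ts) >= COLD_AFTER_SECONDS

# A file last read 45 days ago qualifies; one read an hour ago does not.
print(is_cold(0, now=45 * 24 * 3600))  # True
print(is_cold(0, now=3600))            # False
```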

Getting started

Create your first parallel file system with a single command. CLI, console, or API, your choice.

Terminal
ur storage pfs create hpc-scratch \
  --capacity=100TB \
  --throughput=50GBps

High-throughput workloads.

HPC scratch and ML data loading.

HPC scratch storage

High-speed scratch storage for simulations, CFD, and molecular dynamics.

View tutorial

Suggested configuration

100 GB/s · InfiniBand · POSIX

Estimate your costs

Create detailed configurations to see exactly how much your architecture will cost. Pay for what you use, down to the second.


A POSIX-compliant parallel file system in the Lustre/BeeGFS family, optimized for AI training.

Configuration 1: $8.73/mo
10 TB Storage: $0.23
Data Egress: $8.00
Operations: $0.50
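The line items above sum to the monthly estimate; a minimal check of the arithmetic:

```python
# Verify the example configuration's line items sum to the estimate.
line_items = {
    "10 TB Storage": 0.23,
    "Data Egress": 8.00,
    "Operations": 0.50,
}
total = round(sum(line_items.values()), 2)
print(f"${total:.2f}")  # $8.73
```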

Works seamlessly with

HPC Clusters
GPU Instances
Object Storage
Monitoring
IAM
Logging


Storage at HPC speed.

100+ GB/s parallel file system for compute-intensive workloads.