High-performance compute storage.
POSIX-compliant parallel file system delivering 100+ GB/s aggregate throughput. Purpose-built for HPC, AI training, and large-scale simulations.
100+ GB/s throughput
Millions of IOPS
Fully POSIX compliant
Petabytes of capacity
100+ GB/s storage.
POSIX parallel file system for HPC and ML.
100+ GB/s throughput
Aggregate throughput scales with cluster size. Optimized for both sequential and random I/O patterns.
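A quick way to sanity-check throughput from a single client is fio. A minimal sketch, assuming the file system is mounted at /mnt/pfs (the mount point and job sizing are assumptions, not product defaults):

# /mnt/pfs is an assumed mount point; adjust to your environment.
# Sequential 1 MiB reads across 8 jobs with direct I/O, so the page
# cache does not inflate the reported bandwidth.
fio --name=seqread --directory=/mnt/pfs \
    --rw=read --bs=1M --size=10G --numjobs=8 \
    --direct=1 --ioengine=libaio --group_reporting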
Millions of IOPS
Metadata and data servers scale independently for optimal performance.
Full POSIX semantics
Standard POSIX file operations including locks, hard links, and extended attributes.
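Because the file system exposes full POSIX semantics, standard Linux tooling works unmodified. A small sketch, again assuming a mount at /mnt/pfs:

# Hard links: two directory entries, one inode.
touch /mnt/pfs/data.bin
ln /mnt/pfs/data.bin /mnt/pfs/data-link.bin

# Extended attributes via the standard attr tools.
setfattr -n user.experiment -v run42 /mnt/pfs/data.bin
getfattr -n user.experiment /mnt/pfs/data.bin

# Advisory locking: the lock is held while the command runs.
flock /mnt/pfs/data.bin -c 'echo holding exclusive lock'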
Petabyte scale
Scale to petabytes of capacity with automatic data balancing across storage targets.
InfiniBand support
Native InfiniBand and RDMA for the lowest possible storage latency.
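On an InfiniBand client you can verify the link before benchmarking storage. ibstat and ib_write_bw come from the standard infiniband-diags and perftest packages; their presence on your image is an assumption:

# Expect "State: Active" and the negotiated rate for each port.
ibstat

# Raw RDMA write bandwidth against a peer running
# "ib_write_bw" with no arguments (server mode).
ib_write_bw <peer-hostname>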
Tiering to object storage
Automatically tier cold data to object storage for cost optimization.
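Tier policies are usually expressed as an age threshold plus a destination. The subcommand and flags below are hypothetical, sketched only to show the shape such a policy might take; consult the product documentation for the real interface:

# Hypothetical subcommand and flags -- not a documented interface.
# Moves data untouched for 30 days to the object tier.
ur storage pfs set-tier-policy hpc-scratch \
  --cold-after=30d \
  --target=object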
Getting started
Create your first file system in three steps. CLI, console, or API — your choice.
ur storage pfs create hpc-scratch \
  --capacity=100TB \
  --throughput=50GBps
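Once the file system is ready, clients mount it like any other POSIX file system. The status subcommand and mount syntax below are assumptions for illustration, not documented commands:

# Hypothetical follow-up steps -- check the docs for the real syntax.
ur storage pfs describe hpc-scratch   # wait for the file system to report ready
sudo mkdir -p /mnt/pfs
sudo mount -t pfs hpc-scratch:/ /mnt/pfs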
High-throughput workloads.
HPC scratch and ML data loading.
HPC scratch storage
High-speed scratch storage for simulations, CFD, and molecular dynamics.
View tutorial
Suggested configuration
100 GB/s · InfiniBand · POSIX
Estimate your costs
Create detailed configurations to see exactly how much your architecture will cost. Pay for what you use, down to the second.
POSIX-compliant · Lustre/BeeGFS · Optimized for AI training
Works seamlessly with
Frequently asked questions
Storage at HPC speed.
100+ GB/s parallel file system for compute-intensive workloads.