Distributed file system for ML.
Fully managed Lustre file system for AI/ML workloads. Scales to exabytes with automatic tiering to object storage. Zero operational overhead.
1 TB/s speed
Exabyte scale
Fully managed
S3 auto-tiering
1+ TB/s throughput for ML training.
Managed Lustre with S3 data import.
Terabyte/s throughput
Sustained 1+ TB/s for training data loading. Scales linearly with capacity.
GPU-optimized
Designed for ML training workflows. Feed thousands of GPUs with training data.
Auto-tiering
Cold data automatically tiers to object storage. Hot data stays on NVMe.
Zero ops
No servers to manage. Automatic scaling, patching, and failover.
S3 data import
Import existing training data from S3 buckets. Lazy loading for instant availability.
Snapshot and clone
Instant snapshots for experiment reproducibility. Clone datasets in seconds.
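As a sketch of how the snapshot and clone workflow could look from the same CLI used in Getting started: the `snapshot create` and `clone` subcommands and their flags below are illustrative assumptions, not documented commands.

```shell
# Hypothetical subcommands, shown for illustration only.
# Take an instant snapshot of the ml-data file system:
ur storage lustre snapshot create ml-data --name=exp-042

# Clone that snapshot into a new writable file system in seconds:
ur storage lustre clone ml-data --snapshot=exp-042 --target=ml-data-exp
```

Naming snapshots after experiment IDs (as in `exp-042` above) is one way to tie a training run back to the exact dataset state it saw.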
Getting started
Launch your first instance in three steps. CLI, console, or API — your choice.
ur storage lustre create ml-data \
  --capacity=50TB \
  --import-path=s3://training-data

High-throughput workloads.
ML training and genomic analysis.
Suggested configuration
1 TB/s · S3 import · Snapshot
Estimate your costs
Create detailed configurations to see exactly how much your architecture will cost. Pay for what you use, down to the second.
Zero-ops Lustre. Auto-tiering to object storage.
Lustre, fully managed.
1+ TB/s throughput for ML training. Zero operational overhead.