Serverless Apache Spark.
Serverless Spark with no cluster management. Submit Spark jobs that auto-scale. Pay only for compute used.
Serverless
Infra
< 10 sec
Startup
Auto
Scaling
Per-second
Cost
Executor instances
Choose the right executor size for your Spark jobs.
Standard Executors
Standard compute-to-memory ratio for typical ETL and data processing jobs.
vCPUs
2 - 16
Memory
8 GB - 64 GB
Scale
Up to 1000 nodes
Startup
< 10 sec
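The vCPU and memory ranges above imply a fixed memory-to-vCPU ratio for standard executors. A quick sanity check, assuming the 4 GB-per-vCPU ratio inferred from the published endpoints (8 GB / 2 vCPUs and 64 GB / 16 vCPUs) rather than any official constant:

```python
# Rough sizing check for standard executors.
# The 4 GB/vCPU ratio is inferred from the published range (2-16 vCPUs,
# 8-64 GB); it is an assumption, not documented pricing or a hard limit.
def executor_memory_gb(vcpus, gb_per_vcpu=4):
    if not 2 <= vcpus <= 16:
        raise ValueError("standard executors support 2-16 vCPUs")
    return vcpus * gb_per_vcpu

print(executor_memory_gb(2))   # smallest standard executor -> 8 GB
print(executor_memory_gb(16))  # largest standard executor -> 64 GB
```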
Spark, serverless.
No clusters. 10s startup. Per-second billing.
No clusters
Submit jobs. No cluster management.
10-second startup
Warm pools for instant Spark startup.
Auto-scaling
Scale from 1 to 1000 executors automatically.
Per-second billing
Pay only for compute time used.
PySpark & SQL
PySpark, Spark SQL, and Scala support.
Delta Lake
Built-in Delta Lake for ACID transactions.
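For readers used to self-managed Spark: the auto-scaling tile above corresponds to what open-source Spark calls dynamic allocation. A serverless service handles this for you, but the equivalent spark-defaults settings (standard Spark 3.x properties, with the 1-to-1000 range taken from the copy above) look like:

```
spark.dynamicAllocation.enabled=true
spark.dynamicAllocation.minExecutors=1
spark.dynamicAllocation.maxExecutors=1000
spark.dynamicAllocation.shuffleTracking.enabled=true
```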
Getting started
Launch your first instance in three steps. CLI, console, or API: your choice.
ur data spark submit \
  --script=etl.py \
  --input=s3://data/raw/ \
  --output=s3://data/processed/
Spark patterns.
Serverless ETL and ad-hoc analysis.
Suggested configuration
Serverless · Auto-scale · Per-second
Estimate your costs
Create detailed configurations to see exactly how much your architecture will cost. Pay for what you use, down to the second.
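Per-second billing makes a job's cost a simple product of resources and runtime. A minimal sketch of that arithmetic, using made-up placeholder rates (not published pricing):

```python
# Hypothetical per-second cost model:
#   total = vCPU-seconds * vCPU rate + GB-seconds * memory rate
# The default rates are illustrative placeholders only.
def job_cost(executors, vcpus_per_executor, mem_gb_per_executor,
             runtime_seconds, vcpu_rate=0.00001, mem_gb_rate=0.000001):
    vcpu_seconds = executors * vcpus_per_executor * runtime_seconds
    gb_seconds = executors * mem_gb_per_executor * runtime_seconds
    return vcpu_seconds * vcpu_rate + gb_seconds * mem_gb_rate

# 10 executors, each 4 vCPUs / 16 GB, running for 300 seconds
print(round(job_cost(10, 4, 16, 300), 4))
```

Because billing is per second, halving the runtime halves the bill even when the executor count stays the same.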
Configuration 1
Spark Cluster
Compute Resources
Storage & Output
Cost details
Serverless and cluster-based Apache Spark.
Works seamlessly with
Frequently asked questions