Run inference on IoT end devices.
Purpose-built AI accelerators for edge inference. Run ML models on IoT devices with options from 4 to 200 TOPS, at ultra-low power.
Performance: 200 TOPS
Power: < 5W
Models: TFLite/ONNX
Form factor: M.2/USB
AI on every device.
200 TOPS edge inference. Under 5W.
200 TOPS
Int8 inference from 4 TOPS up to 200 TOPS.
Ultra-low power
Under 5W TDP. Battery-powered edge AI.
TFLite & ONNX
Run TensorFlow Lite and ONNX models natively.
M.2 & USB
M.2, USB, and PCIe form factors for any deployment.
Model compiler
Cloud compiler optimizes models for edge hardware.
OTA model updates
Update models on devices over-the-air.
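A device-side OTA check can be as simple as comparing the hash of the model on disk against a published manifest, and downloading only when they differ. A minimal sketch in Python; the manifest fields (`version`, `sha256`) are illustrative assumptions, not the product's actual update protocol:

```python
import hashlib
import json


def needs_update(local_model_path: str, manifest_json: str) -> bool:
    """Return True if the manifest advertises a model whose SHA-256
    differs from the model currently on disk.

    `manifest_json` is a JSON document (e.g. fetched over HTTPS) with
    hypothetical fields: {"version": ..., "sha256": "..."}.
    """
    manifest = json.loads(manifest_json)
    with open(local_model_path, "rb") as f:
        local_sha = hashlib.sha256(f.read()).hexdigest()
    return local_sha != manifest["sha256"]
```

Hashing the whole artifact rather than trusting a version number means a corrupted or partially written model also triggers a re-download.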
Getting started
Compile and deploy your first model in three steps. CLI, console, or API: your choice.
ur edge ai compile my-model.tflite \
  --target=accelerator-v3 \
  --quantize=int8

Edge AI patterns.
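The `--quantize=int8` flag refers to standard affine quantization: each float tensor is mapped to 8-bit integers through a scale and zero-point. A pure-Python sketch of that arithmetic, for illustration only; the actual compiler chooses scales per tensor or per channel during calibration:

```python
def quantize_int8(values, scale, zero_point):
    """Affine-quantize floats to int8: q = round(x / scale) + zero_point,
    clamped to the int8 range [-128, 127]."""
    return [max(-128, min(127, round(x / scale) + zero_point)) for x in values]


def dequantize_int8(q_values, scale, zero_point):
    """Recover approximate floats: x ~ (q - zero_point) * scale."""
    return [(q - zero_point) * scale for q in q_values]
```

For example, with `scale=0.01` and `zero_point=0`, the value `0.05` quantizes to `5`, and a value outside the representable range, such as `-2.0`, saturates at `-128`. The round trip through int8 loses at most half a scale step of precision, which is why the compiler picks the smallest scale that still covers the tensor's observed range.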
Computer vision and voice processing.
Suggested configuration
200 TOPS · TFLite · Low power
Estimate your costs
Create detailed configurations to see exactly how much your architecture will cost. Pay for what you use, down to the second.
Deploy AI models to low-power edge hardware.