
AI Accelerator IP
Our proprietary AI accelerator IP is optimized for hardware acceleration of mainstream neural networks, including CNN and Transformer models. Built around a configurable compute array, it delivers efficient AI inference within tight power and area budgets, making it well suited to embedded AI, smart cameras, and ADAS edge computing.
Features
- CNN/Transformer Acceleration
- INT8/INT16 Quantized Inference
- Configurable Compute Array
- Low-power Edge Deployment
- Mainstream AI Framework Support
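
To illustrate the INT8 quantized-inference feature above, here is a minimal sketch of symmetric per-tensor INT8 quantization applied to a dot product, the core operation a MAC compute array accelerates. This is a generic illustration of the technique, not this IP's exact quantization scheme; the function names and the choice of symmetric per-tensor scaling are assumptions for the example.

```python
def quantize_int8(values):
    # Symmetric per-tensor quantization: map the float range to [-127, 127].
    # (Assumed scheme for illustration; real hardware may use per-channel
    # scales, zero points, or INT16 as listed in the features above.)
    scale = max(abs(v) for v in values) / 127.0
    quantized = [max(-127, min(127, round(v / scale))) for v in values]
    return quantized, scale

def int8_dot(w, x):
    # Quantize both operands, accumulate the integer products (hardware
    # typically accumulates in INT32), then rescale back to float.
    qw, sw = quantize_int8(w)
    qx, sx = quantize_int8(x)
    acc = sum(a * b for a, b in zip(qw, qx))
    return acc * sw * sx

# Compare the quantized result against the full-precision dot product.
w = [0.5, -1.25, 0.75, 2.0]
x = [1.0, 0.25, -0.5, 0.125]
approx = int8_dot(w, x)
exact = sum(a * b for a, b in zip(w, x))
print(approx, exact)
```

Because multiplies and accumulations happen on 8-bit integers rather than floats, an INT8 datapath needs far less silicon area and energy per operation, which is why quantized inference is the standard approach for tight power and area budgets at the edge.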