CAISA
  • Chip utilization ratio up to 94%, reaching 90% of theoretical peak performance
  • 3x increase in per-unit computing power and 3x increase in energy efficiency ratio
  • 5 ms latency
Nebula Accelerator

Nebula Accelerator NA-100c

A high-performance AI inference acceleration board designed for edge and backend devices

1.64 TOPS

PCIe 3.0 x8

Model          Latency      Throughput
ResNet-50      4.87 ms      205.7 FPS
ResNet-101     8.76 ms      114.2 FPS
VGG16          21.49 ms     46.5 FPS
Inception-V4   17.87 ms     55.9 FPS
YOLOv3         38.48 ms     25.9 FPS
SSD-FPN        113.33 ms    8.8 FPS
*KY-SSD        2.97 ms      337.5 FPS
*U-Net         445.18 ms    2.2 FPS

Note: Batch=1, INT8. The above CNN models were built with the TensorFlow framework; *KY-SSD and *U-Net are custom CNN networks.

Rainman Accelerator


A high-performance AI inference acceleration board designed for frontend devices

102.4 GOPS

Gigabit Ethernet interface

7.0 ~ 8.5 W


Advantages of "Nebula" and "Rainman"

  • High performance

    200 FPS; 16-channel real-time detection on a single card

  • Low latency

    10 ms

  • Low power consumption

    CAISA architecture delivers a 10x increase in energy efficiency ratio

Low barrier to entry, supporting AI applications across a wide range of fields

  • Semantic segmentation
  • Object detection
  • Image recognition
  • Feature regression