Tiny RAPTOR: Speed up your algorithms at a fraction of the energy spent by your CPU

What is Tiny RAPTOR?

Tiny RAPTOR, developed by Dolphin Design, is a complete neural processor solution for deploying AI at the very edge. It combines software and hardware approaches and is the result of more than three years of development with CEA-List in a joint laboratory.

Key benefits:

  • Minimize iterations from data science to deployment on tiny, resource-constrained devices
  • Unlock extended battery life and gain quicker market adoption by delivering differentiated features
  • Scale the performance of your AI-enabled use cases thanks to Tiny RAPTOR's programmability and flexibility

Tiny RAPTOR key features:

  • Scalable from 32 to 128 MAC/cycle
  • 8-bit data type
  • 32k to 512k internal SRAM
  • Pre-verified and highly flexible TCDM (tightly coupled data memory) for near-memory processing
  • Lossless weight compression
  • Neural framework support: TensorFlow, PyTorch, ONNX
  • Comprehensive toolchain from Quantization Aware Training (QAT) to binaries (see the sketch after this list)
  • Early prototyping via virtual platform and instruction set simulator (ISS)
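
To illustrate what the start of such a flow can look like, here is a minimal quantization-aware training sketch in PyTorch (one of the supported frameworks). It is not Dolphin Design's toolchain: the model architecture, input size and qconfig are assumptions, chosen only to show the kind of 8-bit QAT network that a QAT-to-binaries flow could take as input.

    import torch
    import torch.nn as nn
    from torch.ao.quantization import (DeQuantStub, QuantStub, convert,
                                       get_default_qat_qconfig, prepare_qat)

    class TinyKwsNet(nn.Module):
        """Hypothetical keyword-spotting-sized model, used only for illustration."""
        def __init__(self, n_classes: int = 10):
            super().__init__()
            self.quant = QuantStub()      # entry into the fake-quantized domain
            self.conv = nn.Conv2d(1, 8, 3, padding=1)
            self.relu = nn.ReLU()
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.fc = nn.Linear(8, n_classes)
            self.dequant = DeQuantStub()  # back to float for the loss

        def forward(self, x):
            x = self.quant(x)
            x = self.pool(self.relu(self.conv(x))).flatten(1)
            return self.dequant(self.fc(x))

    model = TinyKwsNet().train()
    model.qconfig = get_default_qat_qconfig("fbgemm")  # 8-bit weights/activations
    qat_model = prepare_qat(model)                     # insert fake-quant observers

    optimizer = torch.optim.Adam(qat_model.parameters(), lr=1e-3)
    for _ in range(10):                                # stand-in training loop
        x = torch.randn(16, 1, 32, 32)                 # e.g. MFCC-like features
        y = torch.randint(0, 10, (16,))
        loss = nn.functional.cross_entropy(qat_model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    int8_model = convert(qat_model.eval())             # fold into real int8 modules

From there, the fake-quantized network would typically be exported in a vendor-neutral format such as ONNX and handed to the device toolchain for compilation into binaries; the exact hand-off format used by the Tiny RAPTOR tools is not detailed here.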

Applications

Tiny RAPTOR is a power-efficient neural processing IP platform specialized in sound and vision. It fits particularly well within any MCU subsystem, in particular Dolphin Design's CHAMELEON solution.

It is designed to be embedded in SoCs targeting a broad range of Edge AI use-cases, including:

  • Speech recognition
  • Noise cancellation
  • Sound recognition
  • Face identification
  • Object detection
  • Image classification

These use-cases serve a broad range of applications, including:

  • Surveillance
  • Smart camera
  • Wearable
  • TWS earbuds
  • Smart speaker
  • IoT sensor fusion

What’s new?

Dolphin Design has produced a demo chip embedding its Tiny RAPTOR accelerator.

Characterization and measurement results:

  • 3× higher power efficiency compared to state-of-the-art NPUs (KWS-TinyML)
  • 30 mW for MobileNet v2 image classification (224×224, 60 FPS, 500 MHz)
  • Up to 256 GOPS peak at 1 GHz
  • 2.2 TOPS/W computing efficiency
  • Small footprint (0.045 mm² in 22 nm for 32 GOPS)

Demonstration

The demonstrator runs a keyword spotting application and will be presented at the CEA-List booth.
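
For context, a host-side equivalent of such a keyword spotting pipeline is sketched below using librosa and onnxruntime. The model file, keyword list and feature settings are illustrative assumptions, not details of the actual demonstrator, which executes the network on the Tiny RAPTOR silicon itself.

    import numpy as np
    import librosa
    import onnxruntime as ort

    LABELS = ["yes", "no", "up", "down", "left",
              "right", "on", "off", "stop", "go"]   # hypothetical keyword set

    # 1) Turn one second of audio into a 32x32 MFCC feature map (assumed input shape).
    audio, sr = librosa.load("sample.wav", sr=16000, duration=1.0)
    audio = librosa.util.fix_length(audio, size=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=32, n_fft=640, hop_length=500)
    mfcc = mfcc[:, :32].astype(np.float32)[np.newaxis, np.newaxis, :, :]  # NCHW layout

    # 2) Run the 8-bit network (here on the host via onnxruntime; on the demo chip
    #    the same network would be executed by the Tiny RAPTOR accelerator).
    session = ort.InferenceSession("kws_int8.onnx")
    input_name = session.get_inputs()[0].name
    scores = session.run(None, {input_name: mfcc})[0]

    # 3) Pick the most likely keyword.
    print("Detected keyword:", LABELS[int(np.argmax(scores))])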

What's next?

Executing deep neural networks in an energy-efficient way is an area of continued research for the Dolphin Design and CEA-List joint R&D lab.

Dolphin Design is currently developing a software generation flow that its customers will be able to use to develop lean, low-power applications.
