Low-latency motion detection with event graphs

Event-graph-based processing reduces prediction latency by a factor of 1,000 compared to image-based processing.
Event-driven cameras provide low-latency motion detection. Our method leverages asynchronous event graphs that take full advantage of the cameras' high temporal resolution to detect motion with very low latency (just 50 microseconds) while reducing the number of operations 48-fold compared to the state of the art.

High temporal accuracy is a prerequisite for many computer vision applications, especially autonomous driving and video stabilization. Conventional RGB image-based methods have major limitations in terms of computational cost and image acquisition frequency. With their high temporal resolution and asynchronous operation, event-driven cameras are a promising alternative. Currently, however, most event-based pipelines rely on convolutional neural networks that reconstruct images from the event stream, cancelling out any latency advantage and making them too computationally costly for embedded systems.
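To make the latency argument concrete, here is a minimal sketch of how an event camera's output can be processed incrementally rather than as frames. The (x, y, timestamp, polarity) event format is standard for such sensors, but the connect_event helper, its radius and time-window parameters, and the toy data are illustrative assumptions, not code from the project.

    import numpy as np

    # Each event is (x, y, t, p): pixel coordinates, timestamp in
    # microseconds, and polarity (+1 brighter / -1 darker).
    # Toy stream for illustration; real sensors emit millions of events/s.
    events = np.array(
        [(12, 40, 100, 1), (13, 40, 115, 1), (13, 41, 130, -1), (30, 5, 140, 1)],
        dtype=[("x", "i4"), ("y", "i4"), ("t", "i4"), ("p", "i4")],
    )

    def connect_event(i, events, radius=3, time_window=50):
        """Link event i to earlier events in its spatiotemporal
        neighbourhood, producing directed edges (older -> newer)."""
        e, edges = events[i], []
        for j in range(i - 1, -1, -1):
            if e["t"] - events[j]["t"] > time_window:
                break  # everything older is outside the temporal window
            if (abs(int(e["x"]) - int(events[j]["x"])) <= radius
                    and abs(int(e["y"]) - int(events[j]["y"])) <= radius):
                edges.append((j, i))
        return edges

    # The graph grows one event at a time: each arrival only touches its
    # local neighbourhood, so no frame ever has to be accumulated.
    for i in range(len(events)):
        print(i, connect_event(i, events))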
 


Figure 1: Our proposed architecture (HUGNet and Periodic Aggregator)


Event graphs, interpreted by graph neural networks, enable event-based predictions with minimal latency. The HUGNet method previously developed at CEA-List showed that directed graphs updated on the fly could further reduce latency. However, for lack of global context aggregation, this lower latency came at the cost of lower accuracy. We designed a new neural network architecture that combines the asynchronous event-graph branch with a periodic aggregation branch, effectively overcoming this limitation. Using a recurrent convolutional neural network on images of past events, the periodic aggregation branch extracts a global context from the scene. This improves the quality of the predictions without sacrificing latency.
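As a rough illustration of this two-branch idea, the sketch below pairs a slow, periodic recurrent-convolutional context extractor with a fast per-event prediction head. All module names, layer sizes, and the fusion-by-concatenation choice are simplifying assumptions of ours; the actual HUGNet and Periodic Aggregator architectures are described in the CVPR 2025 paper cited below.

    import torch
    import torch.nn as nn

    class PeriodicAggregator(nn.Module):
        """Sketch of the periodic branch: a recurrent convolutional
        network that distills images of past events into a global
        context vector at a fixed, low rate."""
        def __init__(self, channels=16, context_dim=64):
            super().__init__()
            self.conv = nn.Conv2d(2, channels, 3, stride=2, padding=1)
            self.gru = nn.GRUCell(channels, context_dim)  # recurrence across frames

        def forward(self, event_image, hidden):
            feat = torch.relu(self.conv(event_image))  # (B, C, H/2, W/2)
            feat = feat.mean(dim=(2, 3))               # global average pooling
            return self.gru(feat, hidden)              # updated global context

    class AsyncEventHead(nn.Module):
        """Stand-in for the asynchronous event-graph branch: emits one
        prediction per incoming event, conditioned on the latest context."""
        def __init__(self, event_dim=4, context_dim=64, out_dim=2):
            super().__init__()
            self.head = nn.Linear(event_dim + context_dim, out_dim)

        def forward(self, event_feature, context):
            return self.head(torch.cat([event_feature, context], dim=-1))

    # The periodic branch updates slowly; the event branch predicts at
    # event rate without waiting for the next context refresh.
    agg, head = PeriodicAggregator(), AsyncEventHead()
    context = agg(torch.zeros(1, 2, 64, 64), torch.zeros(1, 64))  # periodic step
    prediction = head(torch.rand(1, 4), context)                  # per-event step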

Figure 2: Event graph (HUGNet); EVK4 HD event sensor produced by PROPHESEE

Our approach turned out to be 1,000 times faster than conventional solutions, reducing speed-change detection latency to a prediction-per-event time of just 50 μs on embedded hardware. In addition, it requires 17 times fewer parameters and 48 times fewer operations per second, while maintaining competitive accuracy.

Reduced speed-change detection latency.

Our approach marks a step toward low-latency, energy-efficient smart sensors for embedded vision.

Manon Dampfhoffer

Research Engineer — CEA-List

Learn more

Use cases, applications, technology transfer

  • The algorithm will be integrated into a very-low-power (<10 mW) three-layer smart event imager manufactured in partnership with PROPHESEE and STMicroelectronics. In addition to predicting optical flow, the algorithm can automatically perform pattern recognition tasks such as gesture recognition or lip reading.

Patent

  • DD24590ST – Pending.
  • DD22912ST – Filed.

Major project and/or partnership

  • ANR IRT Nanoelec (Smart Imager program), in partnership with PROPHESEE.

Flagship publication

  • "Graph Neural Network Combining Event Stream and Periodic Aggregation for Low-Latency Event-Based Vision," M. Dampfhoffer, T. Mesquida, D. Joubert, T. Dalgaty, P. Vivet and C. Posch (2025). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/CVPR52734.2025.00648

Contributors to this article:

  • Manon Dampfhoffer, Research Engineer, CEA-List
  • Christoph Posch, CTO, PROPHESEE