
2025 Scientific Report • April 1, 2026

MiRAG: multi-level retrieval-augmented generation for visual question answering

CEA-List developed the MiRAG model for visual question answering about named entities. This is the first time a retrieval-augmented generation (RAG) approach to generative AI has been applied to this task.
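To illustrate the general idea behind retrieval-augmented generation for entity-centric question answering, here is a minimal, self-contained sketch: evidence about named entities is retrieved by similarity to the question, then prepended to the prompt before generation. The knowledge base, the bag-of-words retriever, and the prompt format are all hypothetical simplifications; MiRAG's actual multi-level retrieval is not described in this teaser.

```python
# Toy RAG sketch (illustrative only; not MiRAG's actual method).
from collections import Counter
import math

# Hypothetical mini knowledge base about named entities.
KB = {
    "Eiffel Tower": "The Eiffel Tower is a wrought-iron tower in Paris, completed in 1889.",
    "Statue of Liberty": "The Statue of Liberty is a statue in New York Harbor, dedicated in 1886.",
}

def bow(text):
    """Bag-of-words vector: lower-cased token counts, punctuation stripped."""
    cleaned = text.lower().replace(".", "").replace(",", "").replace("?", "")
    return Counter(cleaned.split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, kb):
    """Return the (entity, passage) pair most similar to the query."""
    q = bow(query)
    return max(kb.items(), key=lambda kv: cosine(q, bow(kv[1])))

def build_prompt(question, entity, passage):
    """Augment the question with retrieved evidence before generation."""
    return f"Context about {entity}: {passage}\nQuestion: {question}\nAnswer:"

entity, passage = retrieve("When was the tower in Paris completed?", KB)
prompt = build_prompt("When was it completed?", entity, passage)
```

In a real system the retriever would operate on learned embeddings (and, for visual question answering, on entities detected in the image), and the augmented prompt would be passed to a generative language model.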

2025 Scientific Report • April 1, 2026

Annotation-free world discovery: With xMOD, 2D and 3D vision work together

xMOD combines 2D vision (cameras) and 3D vision (LiDAR sensors) in a novel cross-distillation method. The AI learns to segment its environment from motion cues in the images, delivering performance beyond the state of the art.
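The core idea of cross-distillation can be sketched very simply: each modality's branch is pulled toward the other's predictions, so the camera and LiDAR models teach each other. The per-point scores, the symmetric MSE loss, and the variable names below are hypothetical simplifications; xMOD's actual networks, losses, and motion-cue supervision are not reproduced here.

```python
# Toy cross-modal distillation sketch (illustrative; not xMOD's actual losses).

def mse(a, b):
    """Mean squared error between two equal-length score lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def cross_distillation_loss(pred_2d, pred_3d):
    """Each modality is pulled toward the other's predictions.

    In real training, the teacher branch would be detached (no gradient),
    so the 2D student learns from the 3D teacher and vice versa.
    """
    loss_2d_from_3d = mse(pred_2d, pred_3d)   # 2D student <- 3D teacher
    loss_3d_from_2d = mse(pred_3d, pred_2d)   # 3D student <- 2D teacher
    return 0.5 * (loss_2d_from_3d + loss_3d_from_2d)

# Hypothetical per-point "object vs background" scores from the two branches.
camera_scores = [0.9, 0.1, 0.8, 0.2]
lidar_scores = [0.7, 0.2, 0.9, 0.1]
loss = cross_distillation_loss(camera_scores, lidar_scores)
```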

2025 Scientific Report • April 1, 2026

3D scene analysis using natural language queries

The DiSCO-3D semantic segmentation method discovers, in a 3D scene, the elements corresponding to the semantic sub-concepts of a user query expressed in natural language.
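Querying a 3D scene with natural language typically amounts to comparing per-point features against an embedding of the query text. The sketch below is a hypothetical minimal version: the 2-D feature vectors, the query embedding, and the similarity threshold are all illustrative assumptions, not DiSCO-3D's actual representation or sub-concept discovery mechanism.

```python
# Toy language-queried 3D segmentation sketch (illustrative only).
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def segment_by_query(point_features, query_embedding, threshold=0.8):
    """Return indices of 3D points whose features match the query embedding."""
    return [i for i, f in enumerate(point_features)
            if cosine(f, query_embedding) >= threshold]

# Hypothetical per-point features (in practice produced by a
# vision-language model aligned with the 3D scene).
features = [
    [0.9, 0.1],   # chair-like point
    [0.1, 0.9],   # table-like point
    [0.8, 0.2],   # chair-like point
]
chair_query = [1.0, 0.0]  # stand-in embedding for the text query "chair"
hits = segment_by_query(features, chair_query)
```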

2024 Activity Report • June 27, 2025

AIHerd uses CEA-List AI technologies to analyze cattle behavior

AIHerd commercializes a powerful AI-enabled herd monitoring solution for cattle farmers. The technology, developed by CEA-List, provides a precise analysis of herd health.

Startups • May 21, 2025

[Startup] Arcure, intelligent on-board vision for pedestrian detection

With its range of Blaxtair® systems, Arcure prevents collisions between mobile machinery and pedestrians in industry and public works. Two-thirds of its sales come from exports.

Technological advances • February 13, 2023

February 9, 2023 | Tracking soccer players in real time

Researchers at CEA-List brought their expertise to a new neural-network-based image analysis solution that successfully tracked soccer players in short video clips.

Research programs • March 6, 2022

Robotic systems

To create smart robots, development work in mechatronics, control, computer science, and perirobotics must be carried out in a coordinated and unified manner.
