CEA-List’s probabilistic deep learning tools can be used to quantitatively measure prediction reliability.
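Uncertainty estimates of this kind are often obtained by sampling a stochastic model several times and measuring the spread of its outputs, the idea behind Monte Carlo dropout. A minimal NumPy sketch, with an untrained toy network standing in for a real probabilistic model (weights and dropout rate are illustrative, not CEA-List's tools):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer "network": random weights, purely illustrative.
W = rng.normal(size=(4, 2))

def predict_with_dropout(x, p=0.5):
    """One stochastic forward pass: randomly drop input units (MC dropout)."""
    mask = rng.random(x.shape) > p
    return (x * mask / (1 - p)) @ W

def mc_uncertainty(x, n_samples=200):
    """Run many stochastic passes; the spread of outputs is an uncertainty proxy."""
    samples = np.stack([predict_with_dropout(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.array([1.0, -0.5, 2.0, 0.3])
mean, std = mc_uncertainty(x)
```

A large per-output standard deviation flags a prediction that should not be trusted blindly.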
CEA-List researchers developed PyRAT, a formal verification tool for neural networks, to respond to growing demand for more reliable AI-based systems.
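One common family of formal verification techniques for neural networks is abstract interpretation with intervals: propagate a box around the input through each layer, yielding guaranteed bounds on every reachable output. A minimal sketch of interval bound propagation on a hypothetical two-layer ReLU network (illustrative weights, not PyRAT's actual algorithm or API):

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x @ W + b exactly."""
    center = (lo + hi) / 2
    radius = (hi - lo) / 2
    c = center @ W + b
    r = radius @ np.abs(W)
    return c - r, c + r

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps box bounds to box bounds."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Hypothetical two-layer network (weights chosen for illustration only).
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, 0.1])
W2, b2 = np.array([[1.0], [-1.0]]), np.array([0.2])

# Verify all inputs within an L-infinity ball of radius eps around x.
x = np.array([0.5, 0.5])
eps = 0.1
lo, hi = x - eps, x + eps
lo, hi = interval_affine(lo, hi, W1, b1)
lo, hi = interval_relu(lo, hi)
lo, hi = interval_affine(lo, hi, W2, b2)
# [lo, hi] now encloses every output reachable from the perturbed input.
```

If the property of interest (say, "the output stays positive") holds for the whole interval, it is formally guaranteed for every perturbed input; if not, the result is inconclusive and a tighter analysis is needed.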
CEA-List has developed a runtime safety supervision environment for AI-based autonomous systems.
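Runtime supervision typically wraps the AI component in a monitor that checks its inputs and outputs against safety conditions at every step and substitutes a conservative fallback action when a check fails. A minimal sketch with hypothetical action names and thresholds (not CEA-List's actual environment):

```python
# Conservative action used whenever the AI output cannot be trusted.
SAFE_FALLBACK = "brake"

def supervise(prediction, confidence, input_in_domain, min_confidence=0.8):
    """Accept the AI's prediction only when all runtime checks pass."""
    if not input_in_domain:          # out-of-distribution guard on the input
        return SAFE_FALLBACK
    if confidence < min_confidence:  # low-confidence guard on the output
        return SAFE_FALLBACK
    return prediction
```

The supervisor itself is simple, deterministic code, which makes it far easier to validate than the neural network it guards.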
CAISAR (Characterizing Artificial Intelligence Safety and Robustness) is an end-to-end open source software environment for AI system specification and verification.
CEA-List recently developed a method for selecting the most suitable existing neural network to adapt and reuse for a new target application. This advance makes it possible to train classification neural networks without subject-matter expertise or large datasets.
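Model selection of this kind can be illustrated by scoring each candidate pretrained backbone on how well its features already separate the target classes, then reusing the highest-scoring one. A toy sketch using a leave-one-out nearest-centroid proxy score (the model names, score, and data are hypothetical, not CEA-List's published method):

```python
import numpy as np

rng = np.random.default_rng(1)

def transfer_score(features, labels):
    """Leave-one-out nearest-centroid accuracy on the target set.
    Higher means the pretrained features already separate the classes."""
    correct = 0
    for i in range(len(labels)):
        centroids = {}
        for c in set(labels):
            idx = [j for j in range(len(labels)) if labels[j] == c and j != i]
            centroids[c] = features[idx].mean(axis=0)
        pred = min(centroids, key=lambda c: np.linalg.norm(features[i] - centroids[c]))
        correct += pred == labels[i]
    return correct / len(labels)

# Hypothetical features from two candidate backbones on a small labeled target set.
labels = [0] * 10 + [1] * 10
feats_a = np.concatenate([rng.normal(0, 1, (10, 8)),
                          rng.normal(3, 1, (10, 8))])  # classes well separated
feats_b = rng.normal(0, 1, (20, 8))                    # classes indistinguishable

best = max([("model_a", feats_a), ("model_b", feats_b)],
           key=lambda m: transfer_score(m[1], labels))[0]
```

Because the score needs only a handful of labeled target examples, the selection step works without large datasets or retraining each candidate.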