CAISAR (Characterizing Artificial Intelligence Safety and Robustness) is an end-to-end open source software environment for AI system specification and verification.
Papyrus4Manufacturing can be used to model and deploy digital interfaces in factories as part of projects to roll out the digital, connected plant assets required for production line reconfiguration, supervision, and predictive maintenance.
The purpose of the e-Meuse Santé project with the Bar-le-Duc Medical Center is to bring digital innovations that will improve access to healthcare in rural areas in eastern France.
CEA-List and Université Grenoble Alpes have developed the first-ever software security analysis technique capable of simulating a powerful attacker at scale.
CEA-List’s Cobomanip cobot, developed over a decade of R&D, gives operators precision load-handling assistance in complex environments.
When it comes to consensus-based distributed systems like the most recent blockchain protocols, innovative formal specification and verification methods can help increase trust.
An extremely wide variety of potential driving scenarios makes it notoriously difficult to guarantee the safety of autonomous vehicles.
New computing paradigms will require appropriate programming, verification, and code analysis tools.
To create trusted generative AI solutions, Thales’s AI Lab, the most powerful integrated laboratory for critical AI in Europe, has joined forces with the CEA, one of the world’s most innovative research organisations and, like Thales, listed in the Clarivate Analytics Top 100 Global Innovators. Together, the two partners are focusing on a range of generative AI use cases, in particular for intelligence and command applications.