2025 Scientific Report • April 1, 2026

Analyzing and reducing political bias in large language models

CEA-List developed a new method for measuring political bias in large language models.


Verification of neural networks: a challenge to overcome

The formal verification of neural networks presents many challenges. Although there are languages to describe how a neural network is expected to behave, current validation tools do not consider the full richness of these languages. CEA-List is investigating practices from the field of programming languages to expand the scope of what can be formally verified.


Uncertainty in AI-guided Monte Carlo simulations

CEA-List developed PEM, a method that adapts the Monte Carlo algorithm to penalize regions where the AI is uncertain, mitigating AI errors and making deep-learning-based materials modeling trustworthy.
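The idea of penalizing regions that are uncertain for the AI can be illustrated with a toy Metropolis Monte Carlo walk. This is a minimal sketch, not the actual PEM implementation: the surrogate model, its uncertainty estimate, and the penalty weight are all hypothetical stand-ins chosen for illustration.

```python
import math
import random

def surrogate_energy(x):
    # Hypothetical AI surrogate: returns a predicted energy and an
    # uncertainty estimate. Toy stand-in: energy = x^2, with uncertainty
    # growing outside the assumed training region [-1, 1].
    energy = x * x
    uncertainty = max(0.0, abs(x) - 1.0)
    return energy, uncertainty

def metropolis_step(x, beta=1.0, penalty=5.0, step=0.5):
    # One Metropolis step using an effective energy that adds a penalty
    # proportional to the surrogate's uncertainty, so the walk avoids
    # regions where the AI prediction is unreliable.
    x_new = x + random.uniform(-step, step)
    e_old, u_old = surrogate_energy(x)
    e_new, u_new = surrogate_energy(x_new)
    delta = (e_new + penalty * u_new) - (e_old + penalty * u_old)
    if delta <= 0 or random.random() < math.exp(-beta * delta):
        return x_new
    return x

random.seed(0)
x = 0.0
samples = []
for _ in range(10_000):
    x = metropolis_step(x)
    samples.append(x)

# Fraction of samples inside the trusted region [-1, 1]: with the
# uncertainty penalty, the walk rarely strays outside it.
inside = sum(1 for s in samples if abs(s) <= 1.0) / len(samples)
```

Setting `penalty=0` recovers a plain Metropolis walk on the surrogate energy, in which excursions into the untrusted region are noticeably more frequent.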
