CEA-List developed a new method for measuring political bias in large language models.
The formal verification of neural networks presents many challenges. Although specification languages exist to describe how a neural network is expected to behave, current validation tools do not exploit their full expressiveness. CEA-List is investigating practices from the field of programming languages to expand the scope of what can be formally verified.
We developed PEM, a method that adapts the Monte Carlo algorithm to penalize regions where the AI model is uncertain, mitigating prediction errors and making deep-learning-based materials modeling more trustworthy.
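The idea of penalizing uncertain regions during Monte Carlo sampling can be sketched as follows. This is a minimal illustrative toy, not CEA-List's actual PEM implementation: it assumes uncertainty is estimated as the variance of an ensemble of energy models (a common proxy), and all names (`ensemble_energy`, `penalized_mc`, the penalty weight `lam`) are hypothetical.

```python
import math
import random

def ensemble_energy(x, models):
    """Mean and variance of an ensemble of energy predictions at x."""
    preds = [m(x) for m in models]
    mean = sum(preds) / len(preds)
    var = sum((p - mean) ** 2 for p in preds) / len(preds)
    return mean, var

def penalized_mc(models, steps=5000, beta=1.0, lam=10.0, step=0.5, seed=0):
    """Metropolis Monte Carlo on the penalized energy E(x) + lam * Var(x).

    The penalty term steers the sampler away from regions where the
    ensemble members disagree, i.e. where the learned model is unreliable.
    """
    rng = random.Random(seed)
    x = 0.0
    e, v = ensemble_energy(x, models)
    cost = e + lam * v
    samples = []
    for _ in range(steps):
        x_new = x + rng.uniform(-step, step)
        e_new, v_new = ensemble_energy(x_new, models)
        cost_new = e_new + lam * v_new
        # Standard Metropolis acceptance, applied to the penalized energy.
        if cost_new <= cost or rng.random() < math.exp(-beta * (cost_new - cost)):
            x, cost = x_new, cost_new
        samples.append(x)
    return samples

# Toy ensemble: members agree near x = 0 and diverge far from it,
# mimicking a deep-learning potential trained only near the minimum.
models = [lambda x, a=a: x * x + a * x ** 4 for a in (0.0, 0.5, 1.0)]
samples = penalized_mc(models)
```

With the penalty active, the walker stays in the region where the toy ensemble agrees; setting `lam=0` recovers plain Metropolis sampling of the mean energy.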