Reliable neural network AIs, guaranteed

Neural network AIs are making inroads into autonomous vehicles, image and language processing, and a host of other use cases—some of which require a high degree of reliability. CEA-List developed a tool called PyRAT that ensures neural network AIs are robust enough to be implemented in critical systems.

Neural network AIs can’t be integrated into critical systems safely without some assurance that they will execute their functions correctly, perform robustly on imprecise sensor data, and resist cyberattacks. Energy engineering company Technip Energies turned to CEA-List for exactly this kind of assurance. Drawing on its expertise in formal software verification and its in-depth knowledge of AI, CEA-List developed validation tools and methods to respond to Technip Energies’ neural network AI challenges.

Specifically, CEA-List adapted the principle behind Frama-C, its code analysis software for C, to Python, the language in which most neural networks are programmed. The result is a new tool called PyRAT. PyRAT simulates variations in system inputs, such as cyberattacks or disruptions to sensor data streams, and calculates how those variations propagate through the successive layers of the neural network. The impact on the outputs determines whether the neural network AI will behave as expected once it is implemented, and under what conditions. When PyRAT analyzes a neural network AI used in image recognition, for example, it can determine how scrambled an image can be before the AI can no longer identify it correctly.
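The kind of reasoning described above can be sketched with interval bound propagation, a standard technique in formal neural network analysis: every input is widened into an interval covering all possible perturbations, and the intervals are pushed through each layer to bound the outputs. This is only an illustrative sketch, not PyRAT's actual code; the tiny two-layer network, its weights, and the function names here are all hypothetical.

```python
# Illustrative sketch of interval bound propagation, the style of analysis
# a verifier like PyRAT performs. The network below is hypothetical.

def affine_bounds(lo, hi, weights, biases):
    """Propagate input intervals [lo, hi] through y = Wx + b."""
    out_lo, out_hi = [], []
    for w_row, b in zip(weights, biases):
        # Minimum is reached at lo when the weight is positive, hi otherwise.
        l = b + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(w_row))
        h = b + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(w_row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    # ReLU is monotonic, so it can be applied to each endpoint directly.
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# Hypothetical 2-input, 2-hidden-unit, 1-output network.
W1, b1 = [[1.0, -1.0], [0.5, 2.0]], [0.0, -1.0]
W2, b2 = [[1.0, 1.0]], [0.0]

def output_bounds(x, eps):
    """Bound the network output for every input within eps of x."""
    lo = [v - eps for v in x]
    hi = [v + eps for v in x]
    lo, hi = affine_bounds(lo, hi, W1, b1)
    lo, hi = relu_bounds(lo, hi)
    return affine_bounds(lo, hi, W2, b2)

lo, hi = output_bounds([1.0, 0.5], eps=0.1)
# If the interval [lo, hi] stays inside the safe range, the property is
# verified for every input in the eps-ball around x at once.
```

For the image-recognition example in the text, `eps` plays the role of "how scrambled the image may be": one can increase it until the output interval no longer guarantees the correct class, which identifies the perturbation level the network provably tolerates.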

CEA-List is continuing to develop PyRAT through its partnership with Technip Energies and its in-house Confiance.ai trusted AI project. The goals are to make PyRAT faster and more accurate for applications involving high-definition images.

Read article at https://www.cea-tech.fr/cea-tech/english