
A safety supervision environment for autonomous systems

CEA-List has developed a runtime safety supervision environment for AI-based autonomous systems. It factors in uncertainties related to both the system and its environment to determine the current level of safety and the potential risks. The approach has been evaluated on an autonomous drone (UAV) use case.

The safety of autonomous systems built on AI can be difficult to guarantee. The main challenges arise from the complexity of the underlying software and from the fact that deep learning is often used to create these systems. CEA-List has developed a runtime safety supervision environment that addresses these challenges.

First, the system's runtime risks and key variables (uncertainty-based confidence measurements) are identified and analyzed to produce a set of formal safety rules.

A model of safe behavior is built from this set of rules and used to assess how “healthy” the system and its environment are. The system health assessment covers the functional risks inherent to AI components, such as processing data that diverges significantly from the data used to develop the model. The health of the environment is determined by the risks present at a given point in time (Figure 1). Depending on the outcome of the health assessment, a number of actions can be taken to keep the system in its current state or restore it to a safe state.

Here is how it works: at runtime, the safety supervisor receives information from the other components of the system. This information is compared with the safe behavior model, which triggers whatever action is appropriate.
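As a rough illustration, here is a minimal sketch of such a rule-based safe behavior check in Python. The thresholds, health criteria, and action set are hypothetical; the article does not publish the actual safety rules, which come from the system's risk analysis.

```python
from enum import Enum

# Illustrative assumptions; the real safety rules come from the risk analysis.
UNCERTAINTY_LIMIT = 0.3   # max tolerated navigation uncertainty
SAFE_DISTANCE_M = 5.0     # min tolerated distance to a detected person

class Action(Enum):
    CONTINUE = "continue mission"
    SLOW_DOWN = "reduce speed"
    HOVER = "hover in place"
    LAND = "emergency landing"

def supervise(nav_uncertainty: float, person_distance_m: float | None) -> Action:
    """Compare runtime observations against the safe-behavior rules and
    return an action that keeps the system in, or restores it to, a safe state."""
    # Environment health: a person close to the drone is an immediate risk.
    if person_distance_m is not None and person_distance_m < SAFE_DISTANCE_M:
        return Action.LAND if person_distance_m < SAFE_DISTANCE_M / 2 else Action.HOVER
    # System health: high predictive uncertainty suggests the AI is processing
    # data that diverges from the data it was developed on.
    if nav_uncertainty > UNCERTAINTY_LIMIT:
        return Action.SLOW_DOWN
    return Action.CONTINUE

assert supervise(0.1, None) is Action.CONTINUE
assert supervise(0.5, 3.0) is Action.HOVER
```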


Figure 1: System health and environment monitoring (credit: F. Arnez/CEA)


This supervision architecture was tested on an autonomous drone use case in which the drone had to fly through a series of gates whose locations were not known in advance (Figure 2). The test was run in the AirSim simulation environment. The drone was implemented as three distributed functional blocks that communicated over ROS2:

  1. Automated navigation block. The navigation function was implemented with two neural networks (perception and control). Bayesian deep learning was used to capture the uncertainty associated with each network’s predictions. Because the two networks form a pipeline, uncertainty is also propagated from one network to the next, giving a more accurate estimate of the navigation system’s overall uncertainty (see the first sketch after this list).
  2. People detection and distance estimation block. To ensure that the system also addressed the safety of people, we used a depth camera and trained a YOLOv5 neural network to detect people and estimate their distance from the drone. This lets the system handle situations where a person enters the drone’s environment during flight (see the second sketch after this list).
  3. Safety supervision block (Figure 3), which applies the safe behavior model described above (see the third sketch after this list).
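One common practical approximation of Bayesian deep learning is Monte Carlo dropout; the article does not say which technique was used, so the sketch below is an assumption, with toy stand-ins for the perception and control networks. It shows how uncertainty can be propagated through a two-network pipeline by feeding each stochastic perception sample into a stochastic control pass:

```python
import torch
import torch.nn as nn

# Toy stand-ins for the perception and control networks (the real
# architectures are not published). The dropout layers let us draw
# Monte Carlo samples as an approximation of Bayesian inference.
perception = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 8))
control = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Dropout(0.2), nn.Linear(32, 4))

def pipeline_uncertainty(features: torch.Tensor, n_samples: int = 30):
    """Estimate the pipeline's overall predictive uncertainty by feeding each
    stochastic perception sample into a stochastic control pass."""
    perception.train()  # keep dropout active at inference time
    control.train()
    with torch.no_grad():
        commands = torch.stack([control(perception(features)) for _ in range(n_samples)])
    return commands.mean(dim=0), commands.std(dim=0)  # command, per-output uncertainty

mean_cmd, sigma = pipeline_uncertainty(torch.randn(1, 128))
print(sigma)  # high values flag inputs the networks are unsure about
```

The spread of the sampled commands reflects the combined uncertainty of both networks, which is what the supervisor consumes as a confidence measurement.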
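For the people detection block, a sketch along these lines could combine an off-the-shelf YOLOv5 model with a pixel-aligned depth image. The pretrained yolov5s weights, confidence threshold, and median-depth heuristic are assumptions, since the article's model was trained for its own use case:

```python
import numpy as np
import torch

# Pretrained YOLOv5s from the Ultralytics hub (an approximation of the
# article's retrained model).
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

def people_distances(rgb: np.ndarray, depth_m: np.ndarray) -> list[float]:
    """Detect people in an RGB frame and estimate each person's distance
    from a pixel-aligned depth image (assumed to be in meters)."""
    results = model(rgb)
    distances = []
    # Each detection row is [x1, y1, x2, y2, confidence, class].
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        if int(cls) == 0 and conf > 0.5:  # COCO class 0 = person
            box = depth_m[int(y1):int(y2), int(x1):int(x2)]
            valid = box[box > 0]  # drop missing depth readings
            if valid.size:
                distances.append(float(np.median(valid)))  # median is robust to outliers
    return distances
```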
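Finally, a minimal sketch of the safety supervision block as a ROS2 node (using rclpy): it subscribes to runtime information published by the other blocks and publishes the selected action. All topic names, message types, and thresholds here are illustrative assumptions:

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32, String

class SafetySupervisor(Node):
    """Receives runtime information from the other blocks and publishes
    the selected action. Topic names and message types are assumptions."""

    def __init__(self):
        super().__init__("safety_supervisor")
        self.create_subscription(Float32, "/navigation/uncertainty", self.on_uncertainty, 10)
        self.create_subscription(Float32, "/people/min_distance_m", self.on_distance, 10)
        self.action_pub = self.create_publisher(String, "/supervisor/action", 10)
        self.uncertainty = 0.0
        self.min_distance = None

    def on_uncertainty(self, msg: Float32) -> None:
        self.uncertainty = msg.data
        self.publish_action()

    def on_distance(self, msg: Float32) -> None:
        self.min_distance = msg.data
        self.publish_action()

    def publish_action(self) -> None:
        # Simplified stand-in for the safe-behavior rule check (see the
        # earlier sketch); thresholds are illustrative.
        if self.min_distance is not None and self.min_distance < 5.0:
            action = "hover"
        elif self.uncertainty > 0.3:
            action = "reduce_speed"
        else:
            action = "continue"
        self.action_pub.publish(String(data=action))

def main():
    rclpy.init()
    rclpy.spin(SafetySupervisor())

if __name__ == "__main__":
    main()
```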


Figure 2: System automated navigation task (credit: F. Arnez/CEA)
Figure 3: UAV safe behavior model for automated navigation (credit: F. Arnez/CEA)


“AIs deployed in modular, distributed robotic architectures can be supervised during safety-critical tasks using our framework.”


Fabio Arnez
Research engineer, CEA-List