Increasingly, artificial intelligence is used to make decisions that affect our day-to-day lives. In this context, trust is vital. And to trust an AI’s decision, you need to know how the AI arrived at that decision. In PhD research co-supervised by the CentraleSupélec MICS Lab (which studies mathematics and computing for complexity and systems), the Carnot CEA List developed a new machine learning module that can classify images, annotate objects, and generate an explanation at the same time. This module has been integrated into the CEA List symbolic AI platform ExpressIF®.
Here’s how it works. First, a neural network “understands” an image by identifying specific areas that correspond to different objects in the image. The new algorithms integrated into ExpressIF® then take over, identifying the objects according to their relative positions, and then annotating them. What makes the algorithms so powerful is that they can learn to identify objects error-free from just a few (fewer than ten) images. The initial tests—carried out on abdominal MRI images—were encouraging. Not only was ExpressIF® able to automatically annotate the organs pictured, but it was also able to justify its annotations with an explanation produced in natural language. This is a clear step forward toward helping doctors interpret their patients’ scans.
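The two-stage pipeline described above can be sketched in miniature. The following is an illustrative toy, not ExpressIF®'s actual API: the segmentation network is stood in for by precomputed region centroids, and the symbolic stage is reduced to a pair of hand-written spatial rules (the function names, the region coordinates, and the "leftmost region is the liver" rule are all assumptions for illustration, not clinically validated logic).

```python
# Toy sketch of a neuro-symbolic annotation pipeline (assumed names,
# not the ExpressIF API). Stage 1 (the neural network) is assumed to
# have already produced one (x, y) centroid per segmented region.
# Stage 2 applies symbolic rules over relative positions to name each
# region and to justify that name in natural language.

def relative_position(a, b):
    """Describe where centroid a sits relative to centroid b,
    keeping only the dominant axis of displacement."""
    dx, dy = a[0] - b[0], a[1] - b[1]
    horiz = "left of" if dx < 0 else "right of"
    vert = "above" if dy < 0 else "below"
    return horiz if abs(dx) >= abs(dy) else vert

def annotate(centroids):
    """centroids: list of (x, y) region centroids.
    Toy rule set: in this illustrative abdominal slice, the leftmost
    region is labelled 'liver' and the rightmost 'spleen'; each label
    is returned together with a natural-language justification."""
    ordered = sorted(centroids, key=lambda c: c[0])
    liver, spleen = ordered[0], ordered[-1]
    labels = {liver: "liver", spleen: "spleen"}
    explanations = [
        f"Region at {liver} is labelled 'liver' because it lies "
        f"{relative_position(liver, spleen)} the region at {spleen}.",
        f"Region at {spleen} is labelled 'spleen' because it lies "
        f"{relative_position(spleen, liver)} the region at {liver}.",
    ]
    return labels, explanations

labels, explanations = annotate([(30, 40), (70, 45)])
for line in explanations:
    print(line)
```

The point of the sketch is the coupling: the same spatial predicates that drive the labelling decision are reused verbatim to generate the explanation, so the justification cannot drift from the actual reasoning.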
The solution strengthens users’ trust in the AI’s decisions, of course. But it also anticipates future laws, which will likely require explainable AI for certain applications. Here, the researchers tested the new feature on medical images. However, it could also improve scene interpretation or the characterization of manufactured parts, for example.
Read article at https://www.cea-tech.fr