CEA-List creates theoretical frameworks, methods, and tools for use in designing reliable, frugal, embedded, and distributed AI systems.
Until recently, artificial intelligence (AI) has existed primarily in the cloud, as software running on large servers. The future of AI, however, will also be local. Whether for consumer or industrial use cases, AI will be embedded directly on IoT devices like sensors, as well as on process and other industrial equipment. The trend toward moving AI off the cloud and onto the edge is already underway.
Edge AI must meet a number of criteria. When it is used in critical systems, with their high safety, privacy, robustness, and reliability requirements, trust has to be built in by design and validated through testing.
Deployment in constrained, networked environments creates additional challenges for Edge AI. Embedding algorithms on IoT devices requires solutions with low data, computing, memory, and power budgets, as well as efficient, secure operation in networked scenarios.
CEA-List research addresses these concerns.
Today’s approach to AI development is characterized by a low level of formalization and a high degree of experimentation. Understandably, this raises the issue of trust.
Our response hinges on the development and deployment of quality AI solutions that guarantee the required levels of performance, safety, privacy, robustness, and explainability. Methods and metrics for the formal evaluation of AI systems, including measurement and certification approaches, are another key research focus.
Working in partnership with Inria, CEA-List researchers have demonstrated a formal approach for validating the robustness and reliability of neural networks in the context of image recognition. The project focused on two main aspects: first, the formal specification of the objects to identify and the properties to demonstrate; second, proof of the reliability and robustness of a neural network. Our team made innovative use of computer-generated (simulated) images to train the neural networks; these simulated images were then used to establish the specifications. This research was presented at ECAI 2020.
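For illustration only, the sketch below shows one standard formal technique for this kind of robustness proof, interval bound propagation: perturbations within an L-infinity ball around an input are propagated as intervals through a small network, and the prediction is certified when the lower bound of the predicted logit exceeds the upper bound of every other logit. The network, sizes, and epsilon are illustrative assumptions, not taken from the CEA-List/Inria project.

```python
import numpy as np

def forward(x, layers):
    """Nominal forward pass of a small fully connected ReLU network."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0)
    return x

def affine_bounds(lo, hi, W, b):
    """Exact interval bounds of W @ x + b when x lies in the box [lo, hi]."""
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def certify(x, eps, layers):
    """True if the predicted class is provably unchanged for every input
    within an L-infinity ball of radius eps around x."""
    pred = int(np.argmax(forward(x, layers)))
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(lo, hi, W, b)
        if i < len(layers) - 1:              # ReLU is monotone, so bounds pass through
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return all(lo[pred] > hi[j] for j in range(len(lo)) if j != pred)

# Toy usage: a random 2-layer classifier on a flattened 8x8 "image".
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(16, 64)), np.zeros(16)),
          (rng.normal(size=(10, 16)), np.zeros(10))]
x = rng.uniform(size=64)
print(certify(x, eps=1e-3, layers=layers))
```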
It is now widely accepted that the growing energy consumption of digital technology is unsustainable. Yet many current approaches to AI place virtually no limits on the resources they consume. Tomorrow's AI will have to be far more frugal in terms of power, computing resources, and data. Edge computing will require specific strategies: more frugal approaches to everything from data to architecture will have to be implemented across the entire lifecycle.
CEA-List creates theoretical frameworks for the development of low-data learning methods, develops design environments for frugal applications, and designs optimized hardware architectures.
ArcelorMittal manufactures rolled sheet metal at high temperatures and at a line speed of 20 m/s. Under these conditions, in-line quality inspection presents a significant challenge. The company partnered with CEA-List to develop a computer vision system for real-time defect detection, accurate to within 1 mm and with a detection rate of 95%. The resulting system features an optimized neural network designed using the N2D2 development suite.
Learn more about N2D2 (a neural network development suite for optimizing embedded systems)
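As a rough illustration of the kind of optimization that embedded deployment calls for (and not the N2D2 toolchain itself), the sketch below quantizes floating-point weights to 8-bit integers, a common step for cutting the memory footprint and compute cost of a network running on resource-limited hardware. All names and sizes are assumptions chosen for the example.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization of float32 weights to int8.
    Returns the quantized weights plus the scale needed to dequantize."""
    scale = float(np.max(np.abs(w))) / 127.0
    if scale == 0.0:
        scale = 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

# Toy usage: quantize one convolution kernel and check the rounding error.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(32, 3, 3, 3)).astype(np.float32)
q, scale = quantize_int8(w)
err = float(np.max(np.abs(w - dequantize(q, scale))))
print(f"{w.nbytes} B -> {q.nbytes} B (4x smaller), max error {err:.5f}")
```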
With the rise of the IoT and the increasing need for connectivity, distributed artificial intelligence applications have a growing role to play. Concepts around decision-making and consensus for AI networks need to be formalized, and any approaches developed will have to be frugal. The ways in which data and learning are shared and protected will also need to be addressed.
Our research focuses on modelling the concepts involved in distributed decision making and distributed (or federated) learning. We are also working on methods and tools to optimize deployment on distributed hardware.
CEA-List worked with an R&D partner to develop distributed learning techniques for vehicle fleet maintenance. Our partner installed a three-axis accelerometer on top of the steering wheel of each car in its fleet to record vibrations, which are then analyzed to assess, for example, wear and tear on different mechanical parts. A distributed learning approach was used to train the model without transferring data from the sensors to a central server. The scenario was simulated using data from 42 vehicles.
Using distributed learning techniques, data from different geographical locations can be leveraged without the need to send it over an external network. The privacy of sensitive data—and any trade secrets it contains—is protected. Data (vibration measurements, etc.) from machinery across multiple industrial sites can be gathered and used to design high-performance preventive maintenance tools, without any sensitive data ever leaving the premises.
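As a simplified illustration of this principle (not the partner's actual pipeline), the sketch below simulates federated averaging: each "vehicle" fits a small model on its own vibration features, and only the model parameters, never the raw measurements, are sent back to a server and averaged. The linear model, learning rate, and synthetic data are assumptions chosen for the example.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few local gradient steps of linear regression; raw data never leaves the client."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, clients):
    """One federated averaging round: clients train locally, the server averages weights."""
    local = [local_update(w_global.copy(), X, y) for X, y in clients]
    return np.mean(local, axis=0)

# Toy usage: 42 simulated vehicles, each holding its own vibration features.
rng = np.random.default_rng(0)
true_w = np.array([0.5, -1.2, 0.8])
clients = []
for _ in range(42):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

w = np.zeros(3)
for _ in range(20):
    w = federated_round(w, clients)
print(np.round(w, 3))  # converges toward true_w without pooling any raw data
```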
Learn more with Cingulata, for applications that can process encrypted data without decrypting it
In AI, conventional analytical and deterministic models are often replaced by more robust, data-driven black-box models. The gain in power and flexibility, however, comes at a cost: lower trust.
Hybrid AI could offer a better tradeoff. Robust and high-trust, it combines the power of AI with the precision of analytical simulation models, as in the case of physics-informed machine learning.
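As a minimal, purely illustrative sketch of the physics-informed idea (the decay equation, polynomial surrogate, and coefficients below are assumptions, not a CEA-List model), the loss combines a data-misfit term with a penalty on the residual of a known physical law, here du/dt + k*u = 0, evaluated at collocation points where no measurements are available.

```python
import numpy as np

def hybrid_loss(predict, params, t_data, u_data, t_phys, k=0.5, lam=1.0):
    """Physics-informed loss: data misfit plus the residual of du/dt + k*u = 0,
    with the derivative estimated by central finite differences at t_phys."""
    data_term = np.mean((predict(params, t_data) - u_data) ** 2)
    h = 1e-4
    dudt = (predict(params, t_phys + h) - predict(params, t_phys - h)) / (2 * h)
    physics_term = np.mean((dudt + k * predict(params, t_phys)) ** 2)
    return data_term + lam * physics_term

# Toy usage: a cubic polynomial surrogate for u(t) = exp(-0.5 t).
def predict(params, t):
    return np.polyval(params, t)

t_data = np.linspace(0, 2, 5)                 # a handful of measurements
u_data = np.exp(-0.5 * t_data)
t_phys = np.linspace(0, 2, 50)                # collocation points, no data needed
params = np.array([-0.02, 0.12, -0.48, 1.0])  # rough hand-picked coefficients
print(hybrid_loss(predict, params, t_data, u_data, t_phys))
```

The physics term lets the model be constrained even where data is scarce, which is what makes the hybrid approach both more frugal and more trustworthy than a purely data-driven one.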
The use of digital twins opens up a broad range of new perspectives for AI.
Read more:
SPEED Platform, to protect neural network training data (in French)