
Ensuring the responsible development of AI applications

Robustness, safety, privacy, and frugality are among the fundamental requirements that must be met before AI can be deployed widely across cloud and embedded applications with varying degrees of criticality, from systems with high user interaction to rapidly growing autonomous systems. CEA-List, which has been working on these challenges for several years, is running an ambitious research program to support the responsible development of AI.

AI, with its capacity to enrich and optimize products and processes, has the potential to open up exciting new opportunities for industrial companies. However, there are major obstacles on the path to fast, widespread deployment of the technology. Today's AI cannot yet provide the assurances of robustness, safety, privacy, and quality needed for integration into the kinds of trusted systems outlined in French and EU recommendations. Power consumption and the dependence on massive volumes of data are further issues that current approaches to AI do not address.

However, the pressure to change this is mounting, driven by environmental imperatives, the tight power budgets of constrained environments such as embedded systems, the need to cope with scarce data, and, of course, data privacy concerns.

CEA-List demonstrated thought leadership on these issues very early on, underscoring the challenges of trusted and embedded AI back in 2017. The institute is also the driving force behind AISafety, WAISE, and SafeAI, three specialized international workshops on AI safety hosted by major AI and safety engineering conferences.

And, to provide all market stakeholders with the means to develop their AI systems responsibly, CEA-List is now building on these early initiatives through an industrial AI research program structured around three research areas.

Listen to the podcast (in French): Paroles d'expert, "L'IA de confiance" (Trusted AI), with François Terrier, VP, AI Programs

Research area 1

Creating trusted, frugal, embedded, distributed AI

Our work within this research area is focused on three main objectives:

  • A methodological and technical framework for developing trusted AI systems that meet quality, performance, safety, privacy, robustness, explainability, and certification requirements (a robustness-testing example is sketched after this list).
  • Frugal development approaches in terms of both data and power.
  • Design and deployment methods and optimized architectures for embedded and networked (distributed) environments.
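
As a concrete illustration of one of these requirements, the minimal PyTorch sketch below estimates a classifier's robust accuracy under a Fast Gradient Sign Method (FGSM) perturbation. It is a hypothetical example, not one of CEA-List's tools: the model, data, and epsilon value are placeholders, and FGSM is only the simplest of the attacks a trusted-AI validation framework would need to cover.

```python
# Hypothetical sketch: empirical robustness testing of a classifier
# against FGSM adversarial perturbations. Model and data are toy
# placeholders, not CEA-List tooling.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Perturb inputs in the gradient-sign direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Keep perturbed inputs inside the valid [0, 1] pixel range.
    return x_adv.clamp(0.0, 1.0).detach()

def robust_accuracy(model, loader, epsilon):
    """Fraction of samples still classified correctly after the attack."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x_adv = fgsm_attack(model, x, y, epsilon)
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

if __name__ == "__main__":
    # Toy stand-ins: a linear classifier on random 28x28 "images".
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    data = [(torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,)))]
    print(f"Robust accuracy at eps=0.1: {robust_accuracy(model, data, 0.1):.2f}")
```

An empirical check like this is only one layer of assurance; a certification-oriented framework would complement it with stronger attacks and formal verification of the network's behavior.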

Learn more about trusted, frugal, embedded, and distributed AI

Research area 2

Implementing an open and sovereign AI platform

CEA-List is currently working on an open platform to centralize the AI design, validation, and deployment tools developed by its labs.

Learn more about our development environments

Research area 3

Supporting the emergence of use cases

CEA-List is developing AI applications for the Factory of the Future, mobility, health, and cybersecurity: domains that are both strategically important and aligned with the institute's areas of expertise.

Learn more about the main AI use cases we address

Expertise in trusted software and embedded systems

CEA-List has a strong track record in trusted software, embedded systems, and AI research. The institute has built an excellent reputation for its work in rapidly growing AI subfields such as computer vision and natural language processing, where it has developed widely used applications.

Learn more about CEA-List's AI tools

We are also well-versed in the needs of the companies and other organizations that use AI, with particular expertise in sensitive use cases in health, cybersecurity, the Factory of the Future, and mobility, making us a leader in these fields.

Learn more about our target markets

Trusted AI and embedded AI are topics we address through our own research, through consortium-based projects, and through the R&D we do for and with companies.

François Terrier

VP, Artificial Intelligence program — CEA-List