Confiance.AI Share Day: the programme reveals its tool-based methods, a “digital common good” to ensure the development of industrial and responsible AI

© Confiance.AI
Currently open to scientific and industrial communities, the “digital common good” consists of an end-to-end methodology based on numerous open-source technology components. It aims to maintain the technological leadership of French companies by promoting the development of critical industrial applications with securely integrated trustworthy AI. It will bolster the competitiveness of national economic actors in the value chain of industrial and responsible AI. Confiance.AI has also announced the creation of a foundation to ensure its dissemination and sustainability.

On March 7th, 2024, at Confiance.AI Share Day, the programme’s founding members (Air Liquide, Airbus, Atos, Naval Group, Renault Group, Safran, Sopra Steria, Thales, Valeo, CEA, Inria, IRT Saint Exupéry and IRT SystemX) revealed the methodology and the catalogue of technological components developed over the past three years to increase trustworthiness in AI-based critical systems. Intended as an end-to-end guidebook for industries, the tool-based methods are a means to characterise and qualify the trustworthiness of a data-based intelligent system, in order to integrate it into industrial products and services. The methodology can be applied to any business.

Launched in 2021 and funded by France 2030, Confiance.AI is a cornerstone programme of the French national strategy for artificial intelligence, and a worldwide pioneer. Aimed at making France one of the leaders in industrial and responsible AI by developing a sovereign methodological and technological environment that is open, interoperable and durable, it furthers the integration of industrial (explainable, robust, etc.) and responsible (trustworthy, ethical, etc.) AI in strategic industries. Thanks to its genuine momentum, it has created, in particular through several Calls for Expression of Interest (CEIs), a rich ecosystem of nearly fifty partners: laboratories and research institutes, start-ups, and large industrial groups. Furthermore, the programme is vital in implementing the AI Act in French and European sectors of industry, by working with industry-specific organizations.

Confiance.AI in figures


industrial partners (10 large groups, including 9 founding partners, 15 start-ups)


academic partners (including 4 founding partners)


international collaborations


published documents

60+

evaluated and accessible software components (30 of which were developed as part of the programme)

September 2024

programme ends

France 2030 invested in Confiance.AI in 2021 with the aim of translating the excellence of our AI research into industrial leadership capability. The results are significant: the R&D projects that ensue enable us to build our industrial strategies under the best circumstances, but also to restore a climate of trust and acceptability around a technology that is structuring for our economy of tomorrow.

Bruno Bonnell

Secretary General for Investment — responsible for France 2030

With 2030 in mind, those involved in the programme are keen to maintain their leadership in industrial and responsible AI by pinpointing the main upcoming technological barriers, and by establishing a foundation to ensure the dissemination, evolution, sustainability and use of the tool-based methods.

Opening the tool-based methods to the community

Programme members have been particularly successful in applying a transversal approach across industries. Their end-to-end tool-based methods can address the same types of technological issues, regardless of the context of application or the industry.

During Confiance.AI Share Day, partners announced the opening of the tool-based methods and the open-source components to scientific and industrial communities. Assets can be accessed here:

Components are divided into nine functional sets corresponding to specific engineering processes: end-to-end engineering, data lifecycle management, model and component lifecycle management, component evaluation, component rollout, operating system management, robustness, explainability, and uncertainty quantification.

As a result of the programme’s widespread adoption, partners plan to make their tool-based methods a global de facto standard.

Major industrial impact: a transition approach towards augmented engineering and integration of trustworthy AI

After three years in the making, the programme has enabled automotive, aeronautical, energy, defence and industry partners to rethink their engineering systems by factoring in data-based AI, and to further the use of AI-based functions in their critical systems.

Some examples of results in industrial use cases:
Programme partners put forth use cases in which to test the applicability of the end-to-end tool-based methods. Testing was conducted directly within their engineering systems.

  • Air Liquide:

Air Liquide used generative AI to improve the robustness and reliability of its automated bottle-counting models, used for inventory purposes, in adverse weather conditions (e.g. rain, snow). It was able to reduce the number of counting errors during night shifts and to obtain precision rates higher than 98%. Thanks to data preprocessing that eliminated water droplets and snowflakes, and to night-to-day image transformation, the system processed data as if conditions were normal, with no additional learning required. Optimal performance was the result of improved management of new data (study, visualisation, characterisation) and the completion of training scenarios.

  • Thales

Thales is well aware of the need to review traditional engineering processes (algorithmic engineering, software engineering, systems engineering) given the required integration of AI into critical systems. The company became highly involved in establishing an end-to-end engineering methodology: a stringent and interdisciplinary approach compatible with business uses, whose design and validation would guarantee rollout and safe, secure operating conditions. This approach will ensure better flow across the entire AI-based critical system engineering chain. Take the use case “object of interest detection in aerial images”: Thales was able to verify algorithm correctness, improve the quality of learning data through the enrichment of synthetic images, and characterise, evaluate and monitor performance thanks to the trust attributes and scores recommended by the end-to-end methodology. These initial steps are necessary for the industrial rollout of a learning component in a critical system.

  • Renault Group

Renault Group applied the programme to a use case which entailed using an AI-based system to verify the quality of vehicle frame welding. Although feasibility had already been established, quality managers were reluctant to employ the system in welding stations where quality was monitored by an operator, especially in cases of critical welding. The programme’s methods and tools were perfectly applicable; programme partners were greatly involved in using components to evaluate AI robustness, explainability and monitoring functions. This was the first time an end-to-end evaluation of the method was carried out. Confiance.AI tools and methods have come at a perfect time for Renault Group: its AI@Scale programme will entail organisation, and human, material, software and methodological resources, to accelerate and securely scale up AI across the group’s entire value chain.

Confiance.AI: driving development of a global trustworthy AI ecosystem

A pioneer in trustworthy AI, the programme is driving the creation and leadership of a global ecosystem. The following are some examples:

  • Signature of Memorandum of Understanding (MoU):

In Quebec (Canada), in 2024, with Confiance IA, a programme that brings together private and public stakeholders to support industries in their need to industrialise and adopt robust, secure, sustainable, responsible and ethical artificial intelligence. Companies from different business fields study generic use cases to co-develop pre-competitive methods and tools to qualify and quantify the trust properties of the resulting AI. For over a year, the programme has collaborated with its French counterpart. Further cooperation is likely in the coming months thanks to the sharing of use cases, methods and tools. The Computer Research Institute of Montreal (CRIM) is a trustee of the Confiance IA programme.

In Germany, in 2022, with VDE, one of the most important technological organisations in Europe, in order to create a future Franco-German responsible AI certification label.


  • Close relationships through ad hoc initiatives around the world:

In Australia, with CSIRO’s Data61 Operationalisation of Responsible AI, an initiative that aims to develop innovative software/system engineering technologies and tools based on a risk-oriented approach that AI experts, system developers and other stakeholders can use to increase trustworthiness of AI systems and processes.
In Germany, with the ZERTIFIZIERTE KI project, jointly managed by Fraunhofer IAIS, the German Institute for Standardization (DIN) and the Federal Office for Information Security. The project aims to develop test procedures to evaluate AI systems. In addition, CERTAIN, a collaborative initiative with DFKI (Deutsches Forschungszentrum für Künstliche Intelligenz / German Research Centre for Artificial Intelligence) that involves various partners, focuses on the research, development, standardisation and promotion of trustworthy AI techniques in order to guarantee the certification of AI systems.

The programme is an active participant in the recommendation of norms regarding the risks set forth in the AI Act, and its tool-based methods help provide an answer to the AI Act’s operational implementation. Regulatory requirements focus mainly on high-risk and systemic-risk systems, as well as on various trust aspects (robustness, explainability, maintaining human control, transparency, lack of bias, etc.). The programme provides concrete elements (taxonomies, methodologies, technologies and tools) to further regulatory goals.

We are dealing with a particularly complex and demanding issue. Our results are in keeping with our goals, and remarkable in many respects. Take the human aspect. We have been able to get a hybrid group of people (industry players, scientists, data scientists, engineers) to work together. We have also overcome a great many scientific and technological challenges, more than we expected. And we have led numerous international initiatives. A truly global community focused on trustworthy AI is emerging.

Juliette Mattioli

Steering Committee President at Confiance.AI and Senior Artificial Intelligence Expert — Thales

We still need to overcome numerous scientific and technological challenges in order for France to maintain its competitive edge in the field; we are drawing up a list. Technology transfer and research valorisation are priorities, as is the breaking of scientific and technological barriers.

Fabien Mangeant

Executive Committee President at Confiance.AI and Scientific Director of the Computing & Data Sciences Chair — Air Liquide

2030 vision and outlook

Although the programme will end in late 2024, partners are already looking ahead. They are focused on three main areas: sustainability, industrialisation and further exploration.

To start, ever-increasing advancements in AI reveal new barriers. Programme partners have identified several issues on which to base new R&T projects: hybrid AI, generative AI (e.g. LLMs), cybersecurity of AI-based critical systems, etc. Such projects will further enrich the tool-based methods for new fields of application.

Sustainability and dissemination of the tool-based methods are also to be considered. Partners are planning to create a foundation that will rally international members around a shared roadmap. It will also ensure the “digital common good” remains fully operational, and that feedback and improvements increase its level of maturity. Training opportunities, such as a Master’s programme in trustworthy AI co-designed with CentraleSupélec, will also drive maturity.

Finally, industrialisation of programme results will further boost maturity and ensure their use at a large scale in industrial engineering processes. The goal is to create, and make accessible, competitiveness tools that will take into account companies’ businesses, data and use cases.

AI opens up extraordinary possibilities for society. From personalised healthcare to smart transportation and fighting climate change, AI has the potential to revolutionise many aspects of our lives. A revolution, however, requires trust. It is a prerequisite to its acceptance by society, and to the adoption of smart systems by citizens, companies and administrations. Growing public concern about the risks of highly advanced AI models, as well as the increasing number of initiatives regarding the international governance of AI, underscores the urgency. The programme, vital to the national AI strategy, is a means for our industries to securely develop smart systems and, above all, to be prepared when the AI Act comes into effect.

Guillaume Avrin

Coordinator of the French National Strategy in AI


About Confiance.AI

Driven by a group of 13 French companies and research organizations*, Confiance.AI is the technological pillar of the Grand Défi public investment programme “Ensuring the security, reliability and certification of systems based on artificial intelligence”. Launched in January 2021 and financed through France 2030, the ambition of this 4-year project is to design a platform of sovereign, open, interoperable and sustainable methods and tools that will enable trustworthy AI to be integrated into critical products and services. It brings together some fifty industrial and academic partners in Saclay and Toulouse around seven R&D projects. Confiance.AI contributes to the implementation of the future “AI Act” led by the European Commission.

For further information:
* Air Liquide, Airbus, Atos, CEA, Inria, Naval Group, Renault Group, Safran, IRT Saint Exupéry, Sopra Steria, IRT SystemX, Thales, Valeo

About France 2030

The France 2030 investment plan:
✔ Reflects a dual ambition: to transform key sectors of our economy (health, energy, automotive, aeronautics and space) in the long term through technological innovation, and to position France not only as a player but also as the world leader of tomorrow. From basic research to the emergence of an idea to the production of a new product or service, France 2030 supports the entire life cycle of innovation until its industrialisation.
✔ Is unprecedented in its scale: €54 billion will be invested so that our companies, our universities and our research organizations can fully achieve their transitions in these strategic sectors. The challenge is to enable them to respond competitively to the ecological and attractiveness challenges of the world to come, and to bring out the future leaders of our fields of excellence. France 2030 is defined by two transversal objectives: devoting 50% of its spending to the decarbonisation of the economy, and 50% to emerging players and carriers of innovation, with no spending unfavourable to the environment (in the sense of the Do No Significant Harm principle).
✔ Will be implemented collectively: designed and deployed in consultation with economic, academic, local and European stakeholders to determine strategic orientations and flagship actions. Project promoters are invited to submit their applications via open, demanding and selective procedures to benefit from the support of the State.
✔ Is managed by the General Secretariat for Investment on behalf of the Prime Minister and implemented by the Ecological Transition Agency (ADEME), the National Research Agency (ANR), Bpifrance and the Bank of Territories (CDC).

For further information: | @SGPI_avenir