
Integrating and linking technologies for smart robots

Giving robots advanced perception and action capabilities is one of the challenges inherent to smart robotics. The goal is for robots to be able to complete a wide variety of complex tasks in changing environments where robots and humans work together. At CEA-List, we believe the best way to achieve this is through a system-level approach aimed at developing a high-performance, scalable, safe, and secure smart robotics platform.

CEA-List is supporting better-performing robotic systems through unified and coordinated research and development in mechatronics, machine vision, artificial intelligence, human-machine interfaces, and communication networks, addressing both the robots themselves and perirobotic functions. The institute’s architectures and software program is making advances to enable the integration of these diverse technologies.

At the same time, CEA-List performs theoretical and experimental research focusing on increasing the performance and stability of these increasingly complex robotic systems.

Our researchers therefore leverage both conventional model-based approaches and more innovative data-driven strategies borrowed from AI and machine learning, implemented either on real robots or on their digital twins.

Toward high-performance robot mechatronics

Robots are built from a variety of tightly integrated components:

  • Proprioceptive sensors enabling the measurement of the robot’s state and movements
  • Exteroceptive sensors that help the robot perceive its environment so that it can adapt its behavior accordingly
  • Segments and joints designed to enable the robot to complete the movements required for the target tasks
  • Actuators that move the segments according to a chosen strategy
  • Electronics used to process data collected by the sensors and to control the actuators in real time

Robotics research at CEA-List addresses component-level performance improvements with the goal of obtaining higher speeds, greater versatility and precision, and increased energy efficiency.

Beyond our work on individual components, we are also focusing on component integration to enhance overall system performance. Our research targets fixed and mobile robots that work alone or in clusters, with or without interaction with human operators.

Focus

Perirobotics development

Smart robots must be able to navigate their environments to perform a wide variety of tasks.

At CEA-List, we are developing a wide variety of innovative sensors (especially vision, movement, proximity, contact, and force sensors). These devices are integrated into the robot or into its immediate surroundings. We are also leveraging advances in signal processing and artificial intelligence to develop powerful algorithms and data processing approaches that can detect changes in parameters, interpret the data gathered, and initiate appropriate responses, enabling robots to “understand” their environments.
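As a purely illustrative example of this kind of low-level processing, the sketch below flags a persistent shift in a simulated sensor reading using a two-sided CUSUM test, a common change-detection building block; the signal, thresholds, and parameter values are assumptions made for the example, not CEA-List code.

```python
# Minimal sketch: flag a persistent shift in a sensor signal with a two-sided
# CUSUM test. All numerical values are illustrative assumptions.
import random

def cusum_detector(samples, target=0.0, drift=0.05, threshold=1.0):
    """Return the index of the first sample at which a shift is detected, else None."""
    pos, neg = 0.0, 0.0
    for i, x in enumerate(samples):
        pos = max(0.0, pos + (x - target) - drift)   # accumulates upward deviations
        neg = max(0.0, neg - (x - target) - drift)   # accumulates downward deviations
        if pos > threshold or neg > threshold:
            return i
    return None

# Simulated proximity/force reading: nominal noise, then a small persistent offset.
random.seed(0)
signal = [random.gauss(0.0, 0.1) for _ in range(200)]
signal += [0.3 + random.gauss(0.0, 0.1) for _ in range(100)]
print("change detected at sample", cusum_detector(signal))
```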

We are also investigating technological solutions that will give robots the ability to work efficiently with objects of interest, grasping, moving, and manipulating them, either at a fixed workstation or while moving around, alone or alongside human operators. Specifically, we are designing and developing innovative robotic grippers that, for example, combine several gripping modalities or use several fingers that adapt to the object being picked up.

Our research also focuses on modeling the interactions between robots and the objects they pick up, with the goal of driving improvements in their control. Finally, we are developing mobile robotics and robotic operator-assistance solutions.

Facilitating interactions between robots and humans

To work together, robots and humans must interact in a way that is natural, efficient, safe, and secure.

CEA-List research addresses the development of robot-human interactions that are:

  • Bidirectional, which means that the robot must be able to understand what the human operator says and does, and that the human operator must be able to instantly understand the robot’s behavior
  • Multi-modal, combining movement, touch, speech, and vision, for situational adaptation

One example is our research to equip our Companion Robot with vision solutions allowing it to detect when a human operator it has summoned arrives and then to recognize the human operator’s actions. The objective is to enable the robot to continue working appropriately once the operator leaves.

We are also developing solutions that would allow robots to translate explanations given in natural language into operable instructions.
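To make the idea concrete, here is a deliberately simple sketch of such a translation step: a pattern-based parser that maps short spoken-style instructions to structured robot commands. The verbs, object names, and command schema are assumptions chosen for illustration, not CEA-List’s actual approach, which relies on far richer language analysis.

```python
# Illustrative sketch: map short natural-language instructions to structured
# robot commands with simple pattern matching. Vocabulary and schema are assumed.
import re

PATTERNS = [
    (re.compile(r"pick up (?:the )?(?P<obj>\w+)", re.I), "PICK"),
    (re.compile(r"place (?:the )?(?P<obj>\w+) on (?:the )?(?P<target>\w+)", re.I), "PLACE"),
    (re.compile(r"go to (?:the )?(?P<target>\w+)", re.I), "GOTO"),
]

def parse_instruction(text):
    """Return a structured command dict, or None if the sentence is not understood."""
    for pattern, action in PATTERNS:
        match = pattern.search(text)
        if match:
            return {"action": action, **match.groupdict()}
    return None

print(parse_instruction("Please pick up the bracket"))
# {'action': 'PICK', 'obj': 'bracket'}
print(parse_instruction("place the bracket on the pallet"))
# {'action': 'PLACE', 'obj': 'bracket', 'target': 'pallet'}
```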

Leveraging the latest advances in artificial intelligence for context perception

Smart autonomous robots capable of adapting to different contexts must be able to understand their environment. Machine vision is one of the major branches of artificial intelligence and a key technology for dynamic modeling of the robot’s environment. Thanks to rapid advances in machine learning and, especially, deep neural networks, machine vision now also plays a major role in robotics.

One of the objectives of our artificial intelligence research is to develop vision-based perception that is robust enough to meet embedded systems’ frugality and real-time requirements. Multi-modal simultaneous localization and mapping (SLAM), visual servoing, environment modeling (3D reconstruction; detection, localization, and tracking of objects of interest; quality control), and context awareness (detection of people and recognition of activities) are some of the capabilities being developed at CEA-List.
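As a purely illustrative example of one of these building blocks, the sketch below implements the classical image-based visual servoing law for a single point feature, where the camera velocity is computed as v = -λ L⁺ (s - s*). The gain, depth, and feature coordinates are made-up values for the sake of the example, not parameters of CEA-List’s systems.

```python
# Minimal sketch of classical image-based visual servoing for one point feature.
# The interaction matrix follows the standard formulation; values are illustrative.
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, Z, gain=0.5):
    """6-DoF camera velocity (vx, vy, vz, wx, wy, wz) driving feature s toward s_star."""
    error = s - s_star
    L = interaction_matrix(s[0], s[1], Z)
    return -gain * np.linalg.pinv(L) @ error

current = np.array([0.10, -0.05])   # current normalized image coordinates
desired = np.array([0.00, 0.00])    # desired image coordinates
print(ibvs_velocity(current, desired, Z=0.8))
```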

Vision is central to many robotic processes, from simple robot movements to more complex actions like inspection, gripping, and machining, and on to robots’ interactions with humans. Whether for enabling robots to move around in crowded environments shared with human operators or for ensuring that tasks are completed to the required quality standard, vision plays a crucial role.

Harnessing the power of simulation and digital twins

A digital twin is a virtual clone of a physical system, process, or facility. Digital twin technology is extremely useful for improving the performance of robot components, facilitating the integration of robots into their environments, and optimizing robot operation.

When a robot is being designed, a digital twin, which can represent robotic processes in all of their complexity, offers an excellent testing ground, allowing designers to ensure that the system is robust and can respond to the specifications of the actual task before the physical robot is ever built.

Once a system has been built and installed in its operating environment, its digital twin can help ensure more efficient, agile operations. Real-time adjustments to its parameters, such as how tasks are shared between robots and human operators, can be tested on the digital twin before the physical equipment is modified. A digital twin can also help detect process drift and make it easier to implement corrective measures in real time.
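The sketch below illustrates this test-on-the-twin-first pattern in a deliberately simplified form: every candidate way of sharing a handful of tasks between a robot and a human operator is scored by simulated cycle time before any change is made to the physical cell. The task names, durations, and two-resource model are assumptions made for the example.

```python
# Illustrative sketch of the "test on the twin before touching the cell" pattern.
# Task names, durations, and the two-resource model are assumptions.
from itertools import product

TASK_DURATION_S = {  # assumed per-task durations: (robot, human), in seconds
    "pick": (4.0, 6.0),
    "inspect": (9.0, 5.0),
    "assemble": (7.0, 8.0),
    "pack": (5.0, 4.0),
}

def simulated_cycle_time(allocation):
    """Cycle time if robot and human execute their assigned tasks in parallel."""
    robot = sum(TASK_DURATION_S[t][0] for t, who in allocation.items() if who == "robot")
    human = sum(TASK_DURATION_S[t][1] for t, who in allocation.items() if who == "human")
    return max(robot, human)

# Evaluate every task-sharing option on the twin, then apply only the best one.
tasks = list(TASK_DURATION_S)
best = min(
    (dict(zip(tasks, assignment)) for assignment in product(["robot", "human"], repeat=len(tasks))),
    key=simulated_cycle_time,
)
print(best, simulated_cycle_time(best))
```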

When it comes to teaching robots new tasks, learning can take place on the digital twin before the skills are transferred to the physical robot. CEA-List digital twin and AI research is supporting this type of simulation work in the field of robotics.
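A minimal sketch of this learn-on-the-twin-then-transfer idea is shown below: a proportional gain is tuned against a simulated first-order joint model, and only the result is handed over to the physical robot’s controller. The plant model, the cost function, and the simple random search are all assumptions, not a description of CEA-List’s learning methods.

```python
# Minimal sketch of "learn on the twin, then transfer": tune a proportional gain
# on a simulated joint model before deploying it on the real controller.
# The twin dynamics, cost, and search procedure are illustrative assumptions.
import random

def twin_rollout(gain, target=1.0, dt=0.01, steps=300):
    """Simulate a first-order joint under proportional control; return a tracking cost."""
    position, cost = 0.0, 0.0
    for _ in range(steps):
        error = target - position
        command = gain * error
        position += command * dt                  # simplified twin dynamics
        cost += error**2 + 0.0005 * command**2    # tracking error plus control effort
    return cost

random.seed(1)
candidates = [random.uniform(0.1, 20.0) for _ in range(50)]
best_gain = min(candidates, key=twin_rollout)     # learning happens entirely in simulation
print(f"gain learned on the twin: {best_gain:.2f}")
# ...which would then be deployed on the physical robot's controller.
```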

Optimizing communication

A robot cannot operate without communication technology, another major component of robotics research at CEA-List.

One of the solutions we are investigating to ensure efficient component-to-component and robot-to-robot communication is time-sensitive networking (TSN).
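To illustrate why TSN matters here, the sketch below captures the core idea behind its time-aware shaping mechanism (IEEE 802.1Qbv): a repeating gate schedule grants each traffic class fixed transmission windows, which is what bounds latency for control traffic. The cycle length, windows, and traffic classes are made-up values, not a real configuration.

```python
# Illustrative sketch of TSN time-aware shaping: a repeating gate schedule decides
# which traffic class may transmit in each window. All values are assumptions.
CYCLE_US = 1000                       # schedule repeats every 1 ms (assumed)
GATE_SCHEDULE = [                     # (start_us, end_us, open traffic classes)
    (0, 200, {"control"}),            # reserved window for robot control frames
    (200, 800, {"video", "best_effort"}),
    (800, 1000, {"control"}),
]

def gate_open(traffic_class, time_us):
    """Return True if frames of this class may be transmitted at this instant."""
    phase = time_us % CYCLE_US
    return any(start <= phase < end and traffic_class in classes
               for start, end, classes in GATE_SCHEDULE)

print(gate_open("control", 150))      # True: inside the reserved control window
print(gate_open("video", 150))        # False: video traffic must wait for its window
print(gate_open("control", 2850))     # True: the schedule repeats every cycle
```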

We are also developing cybersecurity solutions (learn more about our architectures and software research), including solutions to secure wireless communication between robots and components in cloud architectures.

Technologies implemented

  • Backdrivable, high-efficiency, low-inertia actuator solutions (screw-and-cable actuators, capstans, advanced transmission modeling)
  • Innovative robotic architectures (modeling, dimensioning, and optimization tools for the development of prototypes and demonstrator systems like robotic and cobotic arms, dextrous robotic grippers, and mobile robots for a wide range of use cases)
  • Advanced digital architectures (smart imagers, multi-sensor embedded computing, digital architectures for high-performance computing, tools for embedded AI)
  • PACT (near-sensor signal processing software)
  • Low-level control algorithms that respond to high performance and/or stability requirements and that can handle uncertainty
  • CORTEX (real-time robotic controller design, development, and operation software suite)
  • SCORE (digital twin modeling, robot modeling and integration into virtual environments)
  • XDE (simulation software used to set up a robot’s physical digital twin and its operating environment)
  • Papyrus for Robotics (a version of CEA-List’s Papyrus development tool tailored to the design of component-oriented robotics software; it provides models of existing interfaces and components as well as ROS2 code generation support)
  • LIVA (computer vision software)
  • LIMA (multilingual language analyzer)
  • Human-system interface (multi-modal HMIs with haptic and audio feedback for natural movement and touch interaction with robots)
  • ExpressIF® (decision automation tool)
  • Source-code (FRAMA-C) and binary code (BINSEC) analysis and validation; validation of embedded software on virtual platforms (UNISIM)
  • NEON (software-defined networking solution for multi-protocol network management)

See also

Research programs

  • Smart robotics: creating smart interactive robots to serve humans.
  • Architectures and software: the widespread adoption of smart robotics will depend on robotic functions that are easy to reuse and adapt to new systems and situations.

Software development environments

  • SCORE: simplified robotic programming interface and supervised control system development.