To simplify and accelerate the adoption of intelligent robotics in industrial and other settings, CEA-List is developing a software architecture that will allow optimal interaction between all of the technologies used in smart robots and make it easier to adapt robotic systems to different use cases. Another objective is to make smart robots safer, more autonomous, and more versatile.
Smart robots integrate a large number of heterogeneous technologies (mechatronics, control systems, data processing, etc.). These technologies must be able to interface in ways that meet specific functional and timing requirements.
At CEA-List, one of the focuses of our smart robotics research program is the development of a software architecture that will allow all of these technologies to interact optimally, covering both the interactions among the components of the robotic system itself and those between the robot and its environment. This open, modular architecture will align with current interoperability standards to ensure that robots can operate across a variety of use cases and operating environments.
Our architecture development work also addresses safety (specifically, compliance with ISO/TS 15066, the technical specification on collaborative robot safety), certification, and cybersecurity issues.
In terms of the software itself, we are developing advanced solutions designed to give robots autonomous functional and decision-making capabilities. The objective is for robots to be able to perform increasingly complex tasks and switch from one task to another on the fly in response to changing situational requirements.
The initial version of CEA-List’s companion robot demonstrator can assemble a set of ten mechanical parts. To do so, the robot executes a series of actions represented in a behavior tree, a model of the logical flow of decisions the robot must make to produce the behavior required to complete the task. The decision-making logic is programmed by a human operator and then executed by an orchestrator, a software module that coordinates and configures the many software components that make up the robotic system.
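To make the behavior-tree concept concrete, here is a minimal sketch in Python. It is a hand-rolled illustration, not CEA-List’s orchestrator: the node types (Sequence, Fallback, Action) are the standard behavior-tree building blocks, and the task names are hypothetical.

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 0
    FAILURE = 1
    RUNNING = 2

class Sequence:
    """Ticks children in order; stops at the first child that does not succeed."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status          # FAILURE or RUNNING halts the sequence
        return Status.SUCCESS

class Fallback:
    """Ticks children in order; succeeds as soon as one child succeeds."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.FAILURE:
                return status
        return Status.FAILURE

class Action:
    """Leaf node wrapping a robot skill (here just a callable)."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return self.fn()

# Hypothetical assembly subtask: try to grasp a part, ask for help if grasping fails.
tree = Sequence([
    Action("locate_part",  lambda: Status.SUCCESS),
    Fallback([
        Action("grasp_part",   lambda: Status.FAILURE),
        Action("request_help", lambda: Status.SUCCESS),
    ]),
    Action("insert_part",  lambda: Status.SUCCESS),
])

print(tree.tick())  # Status.SUCCESS: the fallback recovered from the failed grasp
```

A Sequence encodes “do these steps in order,” while a Fallback encodes recovery logic; together they let a tree describe not just the nominal assembly procedure but also what to do when a step fails.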
We are now developing a new orchestrator that can dynamically plan the robot’s behavior. By eliminating the need to manually program every possible action policy (an impossible task), dynamic orchestration will give robots added flexibility and autonomy.
To make this dynamic orchestrator a reality, we will need to design sophisticated solutions that combine decision-making and replanning capabilities with machine learning, expert systems, and knowledge management.
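As a rough illustration of what dynamic orchestration means in practice, the sketch below implements a plan-execute-monitor-replan loop. Everything in it is a toy stand-in under assumed names (plan, execute_skill); the point is only that the plan is regenerated from the current world state whenever execution fails, instead of every contingency being written out in advance by an operator.

```python
def plan(state, goal):
    """Toy planner: assemble whichever parts are still missing, in order."""
    return [("assemble", part) for part in sorted(goal - state["assembled"])]

def execute_skill(action, state, flaky):
    """Pretend to run a skill; parts in 'flaky' fail once, then succeed."""
    _, part = action
    if part in flaky:
        flaky.discard(part)           # transient failure: the retry will succeed
        return False
    state["assembled"].add(part)
    return True

def orchestrate(goal, state, max_replans=10):
    """Plan, execute, monitor; on any failure, replan from the current state."""
    flaky = {3}                       # part 3 fails on the first attempt
    for _ in range(max_replans):
        for action in plan(state, goal):
            if not execute_skill(action, state, flaky):
                break                 # abandon the rest of the plan and replan
        else:
            return True               # the whole plan executed successfully
    return False

state = {"assembled": set()}
print(orchestrate(set(range(10)), state))   # True
print(sorted(state["assembled"]))           # [0, 1, ..., 9]
```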
If robots are to be widely adopted in industrial use cases, they will have to be able to learn from their own experience using machine learning paradigms such as reinforcement learning. Here, pairing robots with their digital twins (virtual reproductions not only of the robots themselves, but also of their environment) could support hybrid sim2real approaches, which combine real and simulated data during the learning phase and then transfer the knowledge acquired in simulation to real-world environments.
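The sketch below illustrates the sim2real idea under two loud assumptions: a hypothetical DigitalTwinEnv class standing in for the digital twin, and a deliberately crude zeroth-order policy update standing in for a real reinforcement learning algorithm. The key point is the domain randomization: every training episode runs in a differently parameterized twin, so the learned behavior cannot overfit a single, inevitably imperfect simulator.

```python
import random

class DigitalTwinEnv:
    """Stand-in for a digital-twin simulator (hypothetical, Gym-style API)."""
    def __init__(self, friction):
        self.friction = friction               # randomized physical parameter
    def reset(self):
        return 0.0                             # toy observation
    def step(self, action):
        # Toy reward: highest when the commanded action matches the (unknown)
        # friction of this particular twin instance.
        reward = 1.0 - abs(action - self.friction)
        return 0.0, reward, True               # obs, reward, done

def train_in_sim(episodes=5000, lr=0.05, sigma=0.1):
    """Train across randomized twins so the policy works over the whole range."""
    theta = 0.0                                # scalar policy parameter
    for _ in range(episodes):
        # Domain randomization: a new twin with different physics each episode.
        env = DigitalTwinEnv(friction=random.uniform(0.2, 0.8))
        env.reset()
        noise = random.gauss(0.0, sigma)
        _, r_pert, _ = env.step(theta + noise)     # perturbed rollout
        _, r_base, _ = env.step(theta)             # baseline rollout
        theta += lr * (r_pert - r_base) * noise    # crude zeroth-order update
    return theta

print(f"learned action: {train_in_sim():.2f} (should settle near 0.5)")
```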
Human operators can also teach robots complex tasks. If humans can interact with robots intuitively and naturally, they will be able to “show” robots how to perform new tasks, either by demonstrating them or by physically guiding the robot. No programming or robotics expertise will be needed: the robot will simply learn by watching a human operator complete a task and imitating it.
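In machine learning terms this is learning from demonstration, and its simplest form is behavioral cloning: log (state, action) pairs while the human guides the robot, then fit a policy to them by supervised learning. The sketch below uses synthetic data and a linear policy purely for illustration; on a real robot, the demonstrations would come from the teach session.

```python
import numpy as np

# Synthetic stand-in for kinesthetic-teaching logs: states (e.g., gripper pose
# error) and the motions the human demonstrated in those states.
rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(200, 3))
true_gain = np.array([[1.2], [-0.4], [0.7]])
actions = states @ true_gain + rng.normal(0, 0.01, (200, 1))

# Behavioral cloning: fit a linear policy action = state @ W by least squares.
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

new_state = np.array([0.3, -0.1, 0.5])
print("imitated action:", new_state @ W)   # close to the demonstrator's choice
```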
Symbolic AI, which represents knowledge and reasoning as explicit, human-readable rules, and federated learning, a type of collaborative, decentralized machine learning, will also improve robots’ ability to reason and respond to new situations.
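Federated learning can be sketched in a few lines. In the FedAvg-style toy below, three robots each improve a shared model on their own private data, and only the model weights travel to a central server for averaging; the datasets themselves never leave the robots. All names and data are illustrative.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=50):
    """One robot improves the shared model on its private data (never shared)."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w = w - lr * grad
    return w

rng = np.random.default_rng(1)
true_w = np.array([0.8, -0.3])
# Three robots, each with its own private dataset (different task conditions).
datasets = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + rng.normal(0, 0.05, 40)
    datasets.append((X, y))

w_global = np.zeros(2)
for _ in range(10):                            # federated rounds
    local_ws = [local_update(w_global, X, y) for X, y in datasets]
    w_global = np.mean(local_ws, axis=0)       # server averages the weights

print("federated model:", w_global, "target:", true_w)
```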