ROS4HRI: a common language for human-robot interaction in Industry 5.0

In the new era of Industry 5.0, robots are no longer just tools for automation: they are becoming active collaborators for people. The goal is no longer only to produce faster, but to build flexible, personalized, and human-centric environments. And here a fundamental challenge arises: how can we enable robots to understand and communicate with us naturally?

The answer lies in Human-Robot Interaction (HRI), a field that seeks to make machines perceive, interpret, and respond to people in an appropriate way. Yet one of the biggest obstacles is the lack of a universal language that allows different systems and sensors to work together seamlessly.

This is where ROS4HRI comes in: an open standard driven by our partner in the ARISE project, PAL Robotics. Within this ecosystem, PAL contributes its expertise in humanoid and social robotics, ensuring that ROS4HRI is validated in real environments, from testing labs to production scenarios such as hospitals and healthcare centers.

The ROS4HRI standard

ROS4HRI is an extension of ROS2 (Robot Operating System 2) that defines a set of standardized interfaces, messages, and APIs designed for human-robot interaction.

Its goal is simple: to create a common language that unifies how robots perceive and interpret human signals, regardless of the sensors or algorithms used. With ROS4HRI, robots can manage key information such as:

  • Person identity: recognition and individual tracking.
  • Social attributes: emotions, facial expressions, even estimated age.
  • Non-verbal interactions: gestures, gaze, body posture.
  • Multimodal signals: voice, intentions, and natural language commands.

The design of ROS4HRI follows a modular approach, breaking down barriers between different perception systems. This ensures robots can process human information in a coherent and consistent way, fully aligned with the open philosophy of ROS2. Its main components include:

  1. Standard messages: to represent human identities, faces, skeletons, and expressions.
  2. Interaction APIs:  giving applications uniform access to this data.
  3. Multimodal integration: combining voice, vision, and gestures for richer interpretation.
  4. Compatibility with ROS2 and Vulcanexus: enabling deployment in distributed, mission-critical environments.

You can see part of its core modules in the figure below. For more details, the code and documentation are available in the official repository: github.com/ros4hri

An example at PAL Robotics' testing labs: ros4hri/hri_fullbody

In the European project ARISE, ROS4HRI plays a key role within the ARISE middleware, integrating with ROS2, Vulcanexus, and FIWARE.

This powerful combination enables Industry 5.0 scenarios where robots equipped with ROS4HRI can:

  • Recognize an operator and adapt their behavior based on role or gestures.
  • Interpret social signals such as signs of fatigue or stress, to provide more human-aware support.
  • Share information in real time with industrial management platforms (e.g. through FIWARE), enriching decision-making.

What makes it even more interesting is that ROS4HRI does not operate in isolation: it leverages resources already available within the community. A great example is MediaPipe, Google’s widely used library for gesture, pose, and face recognition. With ROS4HRI, MediaPipe outputs (like 2D/3D skeletons or hand detection) can be seamlessly integrated into ROS2 in a standardized way.

A practical example within ARISE using ROS4HRI is a module for detecting finger movements. A package was developed in ROS2 that follows the ROS4HRI standard and uses Google’s MediaPipe library to process video from a camera. The main node extracts the 3D coordinates of hand joints and publishes them on a ROS topic following ROS4HRI conventions, such as /humans/hands/<id>/joint_states.
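As a rough sketch of how such a node maps detections onto the standard (shown outside ROS itself, so no rclpy is required here), the helpers below illustrate the topic convention and the JointState-style fields. The helper names and landmark tuples are our own invention; only the /humans/hands/<id>/joint_states pattern comes from ROS4HRI.

```python
# Illustrative helpers for the ROS4HRI hand-tracking convention.
# Only the topic pattern comes from the standard; the rest is a sketch.

def hand_topic(hand_id: str) -> str:
    """Build the ROS4HRI topic name for a tracked hand."""
    return f"/humans/hands/{hand_id}/joint_states"

def pack_joint_state(landmarks):
    """Convert (name, x, y, z) landmarks, as MediaPipe might yield them,
    into the name/position fields of a sensor_msgs/JointState-like message."""
    return {
        "name": [n for n, *_ in landmarks],
        "position": [c for _, *coords in landmarks for c in coords],
    }

# Example: two MediaPipe-style hand joints for a hypothetical hand id
landmarks = [("wrist", 0.10, 0.20, 0.00), ("thumb_tip", 0.15, 0.25, 0.02)]
msg = pack_joint_state(landmarks)
print(hand_topic("hand_a3f"))  # /humans/hands/hand_a3f/joint_states
```

In a real node, `msg` would be filled into a `sensor_msgs/JointState` message and published with rclpy; the point here is only the standardized naming that lets any ROS4HRI-aware consumer find the data.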

Thanks to this standardized format, other system components (for instance, an RViz visualizer or a robot controller) can consume this data interoperably, enabling applications like gesture-based robot control.

The evolution towards Industry 5.0 demands robots that can interact in ways that are more human, reliable, and efficient. On this path, ROS4HRI is emerging as a key standard for seamless human-robot collaboration, ensuring interoperability, scalability, and trust. Its applications extend beyond industry, reaching into healthcare, education, and services, where the ability to understand and respond to people is essential.




The impact of PCB design on reliability and electronic performance

In sectors as diverse as construction, logistics, heritage, and industry, the Internet of Things (IoT) has become a key factor in driving digitalization, improving efficiency, and opening up new opportunities for innovation.

When designing an electronic device, attention is often focused on the most important and prominent components, such as processors, sensors, or communication modules. However, the printed circuit board (PCB) is a key element, as its design has a decisive impact on the proper functioning, efficiency, and reliability of the system as a whole.

Energy efficiency is a fundamental pillar in the development of any equipment. A well-designed PCB allows for optimal energy transmission, minimizing losses associated with excessive resistance in copper tracks and poor component organization. If signals are forced to travel unnecessarily long distances or through tracks that are too narrow, the result is increased heat generation, higher energy consumption, and a shorter device lifespan.
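As a back-of-the-envelope illustration of why narrow tracks matter, the DC resistance of a rectangular trace follows R = ρ·L/(w·t). The sketch below uses illustrative dimensions (not taken from any specific design) to show how a modest current through a thin trace already dissipates noticeable power.

```python
# Rough estimate of the DC resistance and dissipation of a copper trace.
# Dimensions are illustrative; RHO_CU is copper resistivity at ~20 C.

RHO_CU = 1.68e-8  # ohm * m

def trace_resistance(length_m, width_m, thickness_m, rho=RHO_CU):
    """R = rho * L / (w * t) for a rectangular trace cross-section."""
    return rho * length_m / (width_m * thickness_m)

# A 100 mm long, 0.3 mm wide trace in 35 um (1 oz) copper:
r = trace_resistance(0.100, 0.3e-3, 35e-6)
p = 2.0 ** 2 * r  # power dissipated at 2 A (P = I^2 * R)
print(f"R = {r * 1000:.1f} mOhm, P = {p * 1000:.1f} mW at 2 A")
# -> R = 160.0 mOhm, P = 640.0 mW at 2 A
```

Doubling the trace width would halve both figures, which is exactly the kind of trade-off a careful layout resolves before it becomes a thermal problem.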

The organization of the elements on the PCB is another crucial aspect. Very diverse functions coexist on the same circuit, such as power distribution, digital signal transmission, and high-sensitivity analog signal management. To obtain a clean and stable signal flow, it is essential that these functions are properly isolated from each other, allowing the device to operate predictably and without errors.



Protection against electromagnetic interference is no less important. In today’s environment, marked by the proliferation of wireless communications, broadcasting, and industrial machinery, devices are exposed to all kinds of external disturbances. These can generate noise and interference in signals, and can even cause power surges capable of damaging components and tracks. In addition, poor design can turn the PCB into a source of interference for itself and surrounding devices. The application of techniques such as continuous ground planes, compact layer stacking, reduced signal paths, and auxiliary filtering elements is essential to mitigate all these risks.

Aware of the importance of these aspects, at CARTIF we apply these principles from the prototyping phase, anticipating their adaptation to future industrialization processes. In the case of the BATERURGIA project, this enabled the development of a monitoring and warning device for the transport and storage of electric vehicle batteries. Similarly, in AUTOLOG, it enabled the creation of a device integrated into self-guided industrial vehicles, aimed at collecting logistical data in order to improve process traceability and optimize transport routes.

A journey through production

Imagine if every product that reaches your hands could explain its story: where it comes from, what materials it was made with, what processes it went through, how its quality was guaranteed and under what conditions it was transported to its destination.

We live in an era where information is everything. However, in the industrial world we still let valuable data get lost in siloed systems and time-sensitive decisions. What if we could make that data visible, useful, and connected?

Today, thanks to technologies such as Industry 4.0 and real-time capture systems, production plants generate more information than ever before. But having data is not enough. The key is structuring, interpreting, and connecting it. Turning disparate data into useful knowledge is the first step toward truly intelligent digital labeling.

This is precisely what the European project biOSpace is pursuing: to develop a digital labeling system for bio-based products that allows each batch to be traced from origin to delivery. This system will not only collect technical information on raw materials, processes, and quality controls, but will also include logistics data, transportation conditions, and environmental KPIs.


In today’s industrial processes, much of the key information about a product’s manufacturing is scattered across different platforms or unstructured. This makes complete traceability of what happens in the plant difficult and, consequently, complicates operational decision-making, continuous improvement, and the justification of sustainability and quality standards. In the context of bio-based production, where materials can vary depending on the supplier, harvest, or process, having control over each stage of the product’s lifecycle becomes especially important. Hence the need to establish a system that allows all this information to be collected and accessed in a unified and accessible manner.


The digital labeling system being designed at biOSpace includes five essential blocks of information:

All this data is linked by a unique digital identifier that accompanies the product throughout its entire journey, from entry into the factory to exit. This label is progressively completed, adding information as the product goes through different stages of the process: raw material reception, processing, quality control, packaging, transportation, etc.

This modular identifier structure allows for precise tracing of the product's journey and condition at each stage, ensuring all relevant information is connected in a clear and structured manner.
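A minimal sketch of how such a progressively completed label could look in code; the field names and stages below are hypothetical illustrations, not the actual biOSpace schema.

```python
# Hypothetical digital label: one unique identifier plus a block of
# data appended at each stage of the product's journey.
import uuid
from datetime import datetime, timezone

def new_label(product: str) -> dict:
    """Create an empty label with a unique digital identifier."""
    return {"id": str(uuid.uuid4()), "product": product, "stages": []}

def add_stage(label: dict, stage: str, data: dict) -> dict:
    """Append one stage (reception, processing, QC, ...) to the label."""
    label["stages"].append({
        "stage": stage,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data": data,
    })
    return label

label = new_label("bio-based resin, batch 042")
add_stage(label, "raw_material_reception", {"supplier": "S-17", "moisture_pct": 8.2})
add_stage(label, "quality_control", {"passed": True})
print([s["stage"] for s in label["stages"]])
```

The same structure supports role-based views: a dashboard can show an operator the full `data` blocks while exposing only a summary to an end consumer.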


The value of this information lies not only in its storage, but also in its practical use, tailored to each need. Therefore, one of the goals is for the system to be accessible from internal dashboards that help plant staff make decisions in real time, while also being integrated into broader digital environments, such as management systems or digital twin platforms.

Furthermore, the same digital label can offer different levels of information depending on the profile of the user consulting it. An operator can view technical data on the process or quality controls, while a sustainability manager can access environmental KPIs, and an end consumer can view an accessible summary of the product's origin, characteristics, and traceability.

This detailed traceability will also contribute to what is now becoming known as the digital product passport, a tool that is gaining importance within the framework of European policies toward a more transparent and circular economy.





Although this solution is still in the design phase, it is based on a simple but important question: what are we doing with all the information already generated in our factories?

In many cases, the data exists, but it is not connected, not shared, or simply not used. This project seeks precisely that: to make sense of it, organize it, and make it available to those who need it, from the operator who manages a batch to the strategic decision-maker or the person who, at the end of the chain, consumes the product.

It is not about incorporating technology as a trend, but rather about using it with criteria. It is about building tools that allow us to better understand what we produce, how we do it, and what impact it has, at a time when traceability, sustainability, and transparency are no longer options, but conditions for continued progress.

Interoperability in Industry 5.0: the Key Role of FIWARE

In the world of software development, interoperability is the ability of different devices, systems, and applications, regardless of their origin or technology, to work together in a coordinated manner, much like the musicians of the Vienna Symphony Orchestra. This concept is essential in digital transformation, where a system such as a robotic application must integrate with multiple platforms, including robotic control systems, artificial intelligence solutions, and industrial IT management platforms like ERP (Enterprise Resource Planning) or MES (Manufacturing Execution System).

The primary goal is to facilitate real-time data exchange for smarter decision-making. Interoperability plays a crucial role in robotics by enabling seamless integration between heterogeneous industrial production systems and digital platforms.


Adopting interoperability technologies in robotic application development brings multiple advantages, including:

  • Intelligent asset management and remote monitoring of robots and machine tools, allowing centralized, real-time control of distributed systems.
  • Optimized decision-making: With real-time data availability, organizations can enhance their responsiveness to unexpected events and optimize workflows.
  • Scalability and modularity: Enabling the integration of new technologies, sensors, and robots without the need for complete system redesigns, supporting adaptability to future industrial needs.
  • Cost and downtime reduction in production lines through the integration of heterogeneous systems, minimizing setup times and allowing quick reconfiguration and process flexibility in dynamic environments.
  • Predictive maintenance and resource optimization: Using AI-based models to anticipate failures, optimize spare part usage, and extend equipment lifespan without compromising productivity.

For robotic systems to integrate efficiently, they must be compatible with standardized platforms that enable intelligent data management and communication. FIWARE, which we work with in the ARISE project, is a set of technologies, architectures, and standards that accelerate the development and deployment of open-source solutions. As a leading technology in the European Union, FIWARE primarily contributes to the creation of interoperable tools and services for real-time data management and analysis, ensuring persistence, flexibility, and scalability, thereby enabling the development of customized applications without excessive costs.

Another key value proposition is its multi-sector nature. FIWARE’s standardized reference components and architectures allow any solution designed for a specific sector—such as manufacturing, logistics, or services—to be inherently interoperable with other verticals, including energy management, mobility, or emerging data spaces.

In ARISE, we develop robotic applications for human-robot interaction by integrating our ARISE middleware (a middleware solution that incorporates Vulcanexus, ROS2, FIWARE, and ROS4HRI) into four experimental environments. These environments explore connected robotic solutions with FIWARE in an Industry 5.0 scenario. One of these environments is in CARTIF, a laboratory for testing and validating technology in controlled environments (TRL 4-5). Figure 1 below shows this experimental setup:

Fig 1. CARTIF testing environment

FIWARE plays a fundamental role in providing tools that enable interoperability between heterogeneous systems, ensuring seamless integration of real-time data and IoT devices. It also supports dynamic data management, allowing different systems, devices, and platforms to communicate from the operational level up to the analytical level, which ensures deep integration with enterprise IT/OT infrastructures (see Figure 2):

Fig 2. ARISE middleware ecosystem

The design of a FIWARE architecture follows a modular approach, where components are integrated according to application needs. The architecture is built around its core component, the Context Broker, which manages real-time data flows. To implement FIWARE effectively, it is recommended to follow these steps:

  1. Define the use case: identify the application’s objectives and requirements.
  2. Select the appropriate architecture: include the Context Broker, IoT Agents, and other components as needed, converting heterogeneous protocols into FIWARE-compatible data. For example, the OPC-UA IoT Agent enables real-time management of data collected in industrial environments, facilitating interoperability with other systems.
  3. Integrate devices and systems: connect sensors, robots, or other systems via OPC-UA, MQTT, or other protocols.
  4. Implement security and access control: use Keyrock and PEP Proxy to ensure data protection, authentication, and access control.
  5. Store and analyze data: use Cygnus, Draco, or QuantumLeap for historical data storage, persistence, Big Data analysis, and valuable insights.
  6. Deploy in the cloud or on premises: consider FIWARE Lab or private infrastructure for hosting services.
  7. Monitor and optimize: evaluate system performance and improve integration with platforms like AI-on-Demand or Digital Robotics. Wirecloud enables the creation of custom visual dashboards, facilitating easy integration with applications like Grafana and Apache Superset.
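To make step 2 more concrete, the sketch below builds a minimal NGSI-v2 entity of the kind the Orion Context Broker accepts. The entity id, type, and attributes are invented for illustration, and the endpoint shown in the comment assumes a default local broker deployment.

```python
# Minimal NGSI-v2 entity describing a robot's state, in the shape the
# Orion Context Broker expects. All names and values are illustrative.
import json

def robot_entity(robot_id: str, battery_pct: float, status: str) -> dict:
    """Build an NGSI-v2 entity payload for a hypothetical robot."""
    return {
        "id": f"urn:ngsi-ld:Robot:{robot_id}",
        "type": "Robot",
        "battery": {"type": "Number", "value": battery_pct},
        "status": {"type": "Text", "value": status},
    }

entity = robot_entity("tiago-01", 87.5, "idle")
print(json.dumps(entity, indent=2))
# To register it, this JSON would be POSTed to the broker, e.g.:
#   POST http://localhost:1026/v2/entities  (Content-Type: application/json)
```

Once registered, any other component (a dashboard, an IoT Agent, a digital twin platform) can query or subscribe to this entity through the same standardized API, which is the interoperability the text describes.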

FIWARE component catalog: https://www.fiware.org/catalogue/

Fig 3. FIWARE architecture modules and application example

At CARTIF, we continue to invest in these technologies to build a future where system and platform collaboration is the key to success. Recently, we joined the FIWARE iHubs network under the name CARTIFactory. As an official iHub, it will not only promote FIWARE adoption but also serve as a reference center with its experimentation lab, fostering interoperability in robotic applications within our community and industrial ecosystem.

Interoperability is not just a technical requirement but a fundamental pillar for the success of digital transformation in industry. Technologies like FIWARE enable the connection of systems, process optimization, and the development of a flexible and scalable ecosystem. Thanks to this capability, companies can integrate artificial intelligence, robotics, and advanced automation seamlessly.


Aníbal Reñones. Head of the Industry 4.0 Area, Industrial and Digital Systems Division

Francisco Meléndez. Robotics Expert and FIWARE Evangelist, Technical Coordinator of the ARISE Project (FIWARE Foundation)

Artificial Intelligence: Driving the next industrial revolution

Artificial intelligence (AI) is no longer the stuff of futuristic fantasy; it has become a tangible part of our everyday lives. From personalised recommendations on streaming platforms to optimising logistics processes in a factory, AI is everywhere. What’s interesting is that it’s not just making our lives easier, it’s also transforming industry.

In the HUMAIN project, where we are working with companies such as BAMA and CENTUM, we are taking AI to the next level. Imagine a factory that can anticipate problems before they happen, thanks to data-driven predictive systems. Or robots working alongside humans to efficiently pack and palletise products, even if the boxes are of different sizes. It’s like switching from a manual to an automatic car!

But this is not science fiction. We are researching and developing artificial intelligence algorithms that turn vast amounts of data into intelligent decisions, computer vision systems that see beyond what the human eye can see, and machine learning-based predictive maintenance solutions that save time and money. AI acts as a strategic brain that optimises every aspect of the process, from production to logistics. The result? More sustainable operations, less waste and smarter factories.

These kinds of projects don’t just benefit large companies. They also have a direct impact on our lives. Think about it: every time you buy something online and it arrives on your doorstep in record time, there is probably an AI system behind it that has optimised every step of the process. From packaging to delivery.

In the HUMAIN project consortium, we are excited to be part of this revolution. It’s not just about making machines work faster, it’s about integrating disruptive technologies that put people at the centre of the process. After all, AI is a tool: it’s how we use it to improve our everyday lives that matters.

Are we ready to embrace this industrial revolution? The answer lies in every click, every purchase, and every robot working hand in hand with us.

Behind the Curtain: Explainable Artificial Intelligence

Artificial intelligence (AI) is contributing to the transformation of a large number of sectors, from suggesting a song to analyzing our health status via a watch, as well as the manufacturing industry. One hindrance to this transformation is the overall complexity of AI systems, which often poses challenges in terms of transparency and comprehension of the results delivered. In this context, an AI system’s explanatory capability (or “explainability”) refers to its ability to make its decisions and actions understandable to users, a field known as eXplainable AI (XAI); this is crucial to generate trust and ensure a responsible adoption of these technologies.


A wide range of technological solutions are currently being investigated to improve the explainability of AI algorithms. One of the main strategies is the creation of intrinsically explainable models (ante hoc explanations). These models, such as decision trees and association rules, are transparent and comprehensible by nature: their logical structure allows users to seamlessly follow the reasoning behind AI-based decisions. Tools for visualizing AI explanations are also key, since they graphically represent the decision-making process performed by the model, facilitating user comprehension. These tools can take different forms, such as dedicated dashboards, augmented reality glasses, or natural language explanations (as speech or as text).

Intrinsically explainable model: a decision tree. The intermediate nodes are conditions that are progressively verified until reaching the final result.
Natural language explanations for a recommender system of new exercise routes. Extracted from Xu et al. (2023), XAIR: a framework for XAI in augmented reality.
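To illustrate the ante hoc idea, the sketch below hand-codes a tiny decision tree that returns its label together with the chain of conditions it checked; the features and thresholds are invented for a hypothetical machine-health check.

```python
# A hand-written decision tree whose reasoning can be printed step by
# step -- the kind of "ante hoc" transparency the text describes.
# Features and thresholds are invented for illustration.

def classify(temperature: float, vibration: float):
    """Return (label, trace) for a hypothetical machine-health check."""
    trace = []
    if temperature > 80.0:
        trace.append(f"temperature {temperature} > 80.0")
        label = "alert"
    elif vibration > 5.0:
        trace.append(f"temperature {temperature} <= 80.0")
        trace.append(f"vibration {vibration} > 5.0")
        label = "inspect"
    else:
        trace.append(f"temperature {temperature} <= 80.0")
        trace.append(f"vibration {vibration} <= 5.0")
        label = "normal"
    return label, trace

label, why = classify(72.0, 6.3)
print(label, "because", " and ".join(why))
# -> inspect because temperature 72.0 <= 80.0 and vibration 6.3 > 5.0
```

Unlike a neural network, every answer here comes with the exact conditions that produced it, which is precisely why such models are considered explainable by nature.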

Another commonly used family of explanation techniques is post hoc methods: once the AI model has been created and trained, the resulting model is processed and analyzed a posteriori to provide explanations of its results. For example, some of these techniques evaluate how much each input variable contributes to the final result of the system (sensitivity analysis). Among post hoc explainability techniques, SHAP (SHapley Additive exPlanations), a method based on cooperative game theory, extracts coefficients that determine the importance of each input variable in the final result of an AI algorithm.
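For intuition about how SHAP assigns importances, Shapley values can be computed exactly for a tiny model by enumerating every order in which features are “revealed” and averaging each feature’s marginal contribution. The brute-force sketch below, with an invented two-feature linear model, does exactly that without relying on the shap library.

```python
# Exact Shapley values for a tiny two-feature model, by brute-force
# enumeration of feature orderings -- the idea underlying SHAP.
# Features not yet revealed are held at a baseline value.
from itertools import permutations
from math import factorial

def model(x):
    # A deliberately simple model: f(x) = 3*x0 + 2*x1
    return 3 * x[0] + 2 * x[1]

def shapley(f, x, baseline):
    n = len(x)
    phi = [0.0] * n
    for order in permutations(range(n)):  # every feature ordering
        current = list(baseline)
        prev = f(current)
        for i in order:                   # reveal features one by one
            current[i] = x[i]
            val = f(current)
            phi[i] += val - prev          # marginal contribution of i
            prev = val
    return [p / factorial(n) for p in phi]

# For a linear model this matches w_i * (x_i - baseline_i):
print(shapley(model, x=[1.0, 2.0], baseline=[0.0, 0.0]))  # [3.0, 4.0]
```

Real SHAP implementations approximate this computation, since enumerating orderings grows factorially with the number of features, but the coefficients retain the same meaning: each feature’s fair share of the model’s output.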

Other XAI techniques include decomposition, which divides the AI model into simpler and more easily explainable components, and knowledge distillation into surrogate models, which approximate the function of the original system while being easier to understand. In turn, so-called “local explanations” are methods that explain individual examples (input-output pairs) rather than the entire AI model. An example is the explanations provided by tools such as LIME (Local Interpretable Model-agnostic Explanations). As an illustration of LIME, the following figure shows a specific inference in a text classification task, in which a text is classified as “sincere” (with 84% likelihood) and the most relevant words for that decision are highlighted, as an explanation of this individual classification [Linardatos et al. (2020)].

Decomposition technique. Explainable AI

An additional approach to XAI is the integration of user input into the process of AI model construction, generally known as “Human-in-the-Loop” (HITL). This approach allows users to interact with the system (e.g. by labelling new data) and to supervise the AI algorithm building process, adjusting its decisions in real time and thus improving overall system transparency.

At CARTIF, we are actively working on different AI-related projects, such as s-X-AIPI, to help advance the explainability of AI systems used in industrial applications. A significant example of our work is the dashboards (visualization or control panels) designed for supervising and analyzing the performance of the manufacturing processes studied in the project. These dashboards allow plant operators to visualize and understand the actual status of the industrial process in real time.

Predictive and anomaly detection models have been created in the context of asphalt production processes, which not only anticipate future values, but also detect unusual situations in the asphalt process and explain the factors that influence these predictions and detections. This helps operators make adequately informed decisions and better understand the results generated by AI systems, as well as how to take proper action.

Explainability in AI methods is essential for the safe and effective adoption of AI in all types of sectors: industry, retail, logistics, pharma, construction… At CARTIF, we are committed to developing technologies to create AI-based applications that not only improve processes and services, but are also transparent and comprehensible for users; in short, that are explainable.


Co-author

Iñaki Fernández. PhD in Artificial Intelligence. Researcher at the Health and Wellbeing Area of CARTIF.