Artificial Intelligence: Driving the next industrial revolution

Artificial intelligence (AI) is no longer the stuff of futuristic fantasy; it has become a tangible part of our everyday lives. From personalised recommendations on streaming platforms to optimising logistics processes in a factory, AI is everywhere. What’s interesting is that it’s not just making our lives easier; it’s also transforming industry.

In the HUMAIN project, where we are working with companies such as BAMA and CENTUM, we are taking AI to the next level. Imagine a factory that can anticipate problems before they happen, thanks to data-driven predictive systems. Or robots working alongside humans to efficiently pack and palletise products, even if the boxes are of different sizes. It’s like switching from a manual to an automatic car!

But this is not science fiction. We are researching and developing artificial intelligence algorithms that turn vast amounts of data into intelligent decisions, computer vision systems that see beyond what the human eye can see, and machine learning-based predictive maintenance solutions that save time and money. AI acts as a strategic brain that optimises every aspect of the process, from production to logistics. The result? More sustainable operations, less waste and smarter factories.

These kinds of projects don’t just benefit large companies. They also have a direct impact on our lives. Think about it: every time you buy something online and it arrives on your doorstep in record time, there is probably an AI system behind it that has optimised every step of the process. From packaging to delivery.

In the HUMAIN project consortium, we are excited to be part of this revolution. It’s not just about making machines work faster, it’s about integrating disruptive technologies that put people at the centre of the process. After all, AI is a tool: it’s how we use it to improve our everyday lives that matters.

Are we ready to embrace this industrial revolution? The answer lies in every click, every purchase, and every robot working hand in hand with us.

Behind the Curtain: Explainable Artificial Intelligence

Artificial intelligence (AI) is contributing to the transformation of a large number of sectors, from suggesting a song to analyzing our health status via a watch, and the manufacturing industry is no exception. One hindrance to this transformation is the overall complexity of AI systems, which often poses challenges in terms of transparency and comprehension of the results delivered. In this context, an AI system’s explanatory capability (or “explainability”) refers to its ability to make its decisions and actions understandable to users – a field known as eXplainable AI (XAI). This capability is crucial for generating trust and ensuring the responsible adoption of these technologies.


A wide range of technological solutions are currently being investigated to improve the explainability of AI algorithms. One of the main strategies is the creation of intrinsically explainable models (ante hoc explanations). These models, such as decision trees and association rules, are transparent and comprehensible by their very nature: their logical structure allows users to seamlessly follow the reasoning behind AI-based decisions. Visualization tools for AI explanations are also key, since they graphically represent the decision-making process performed by the model, facilitating user comprehension. These tools can take different forms, such as dedicated dashboards, augmented reality glasses, or natural language explanations (as speech or as text).

Decision tree. Explainable AI method
Intrinsically explainable system: a decision tree. The intermediate nodes are conditions that are progressively verified until the final result is reached
Natural language explanation. Explainable AI methods.
Natural language explanations for a recommender system of new exercise routes. Extracted from Xu et al. (2023). XAIR: a framework of XAI in augmented reality.
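
As a minimal sketch of how an ante hoc explainable model can be inspected in practice (using scikit-learn; the iris dataset here is just a stand-in), a small decision tree can print its learned rules as text, so a user can follow the exact conditions behind each prediction:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a deliberately small tree: the depth limit keeps the model readable.
data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# export_text prints the learned rules as nested if/else conditions,
# making the model's full reasoning inspectable by a human.
print(export_text(tree, feature_names=list(data.feature_names)))
```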

Another commonly used family of explanation techniques is post hoc methods: once the AI model has been created and trained, the resulting model is processed and analysed a posteriori to provide explanations of its results. For example, some of these techniques evaluate how much each input variable contributes to the final result of the system (sensitivity analysis). Among post hoc explainability techniques, SHAP (SHapley Additive exPlanations), a method based on cooperative game theory, extracts coefficients that quantify the importance of each input variable in the final result of an AI algorithm.
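
As a hedged sketch of how SHAP is typically applied (assuming the open-source shap package; the diabetes dataset and the random forest are placeholders, not a model from our projects):

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit any tree-based model; TreeExplainer computes Shapley values efficiently for trees.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Each row of shap_values gives, per prediction, the signed contribution of
# every input variable; summary_plot aggregates them into a global ranking.
shap.summary_plot(shap_values, X)
```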

Other XAI techniques include decomposition, which divides the AI model into simpler and more easily explainable components, and knowledge distillation into surrogate models, which approximate the behaviour of the original system while being easier to understand. In addition, so-called “local explanations” are methods that explain individual input-output examples rather than the entire AI model. An example is the explanations provided by tools such as LIME (Local Interpretable Model-agnostic Explanations). As an illustration of LIME, the following figure shows a specific inference in a text classification task, in which a text is classified as “sincere” (with 84% likelihood) and the words most relevant to that decision are highlighted as an explanation of this individual classification [Linardatos et al. (2020)].

Decomposition technique. Explainable AI
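
To get a feel for how such a local explanation is obtained, here is a minimal sketch with the lime package (this is not the exact setup from Linardatos et al.; the class names, training texts and pipeline are illustrative placeholders, and any scikit-learn text pipeline with predict_proba would do):

```python
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative training data; a real system would use a labelled corpus.
texts = ["I love this product", "Total scam, avoid",
         "Works as described", "Fake and useless"]
labels = [1, 0, 1, 0]  # 1 = "sincere", 0 = "insincere"

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

# Explain one individual prediction: LIME perturbs the text and fits a simple
# local model to find which words pushed the classifier towards each class.
explainer = LimeTextExplainer(class_names=["insincere", "sincere"])
explanation = explainer.explain_instance("This product is great and genuine",
                                         pipeline.predict_proba, num_features=4)
print(explanation.as_list())  # word -> weight pairs for this single prediction
```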

An additional approach to XAI is the integration of user input into the process of AI model construction, generally known as “Human-in-the-Loop” (HITL). This approach allows users to interact with the AI algorithm building process (e.g. by labelling new data) and to supervise it, adjusting its decisions in real time and thus improving the overall transparency of the system.
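
A purely conceptual sketch of one HITL iteration could look like the following, where the model flags its least confident predictions and routes only those to a person before retraining (ask_human is a hypothetical stand-in for a real labelling interface):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ask_human(sample):
    """Placeholder for a real labelling UI shown to an operator."""
    return int(input(f"Label for {sample}? (0/1): "))

def hitl_round(model, X_lab, y_lab, X_pool, threshold=0.6):
    """One human-in-the-loop iteration: route uncertain samples to a person."""
    proba = model.predict_proba(X_pool)
    uncertain = np.where(proba.max(axis=1) < threshold)[0]
    # Only low-confidence samples interrupt the human; the rest stay automatic.
    y_new = [ask_human(X_pool[i]) for i in uncertain]
    X_lab = np.vstack([X_lab, X_pool[uncertain]])
    y_lab = np.concatenate([y_lab, y_new])
    return model.fit(X_lab, y_lab), X_lab, y_lab

# Seed the model on a few labelled points, then run one supervised round.
X0 = np.array([[0.0], [1.0], [0.1], [0.9]])
y0 = np.array([0, 1, 0, 1])
model = LogisticRegression().fit(X0, y0)
model, X0, y0 = hitl_round(model, X0, y0, X_pool=np.array([[0.5], [0.95]]))
```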

At CARTIF, we are actively working on different AI-related projects, such as s-X-AIPI, to help advance the explainability of AI systems used in industrial applications. A significant example of our work is the dashboards (visualization or control panels) designed for supervising and analysing the performance of the manufacturing processes studied in the project. These dashboards allow plant operators to visualize and understand the actual status of the industrial process in real time.

In the context of asphalt industrial processes, predictive and anomaly detection models have been created that not only anticipate future values, but also detect unusual situations in the asphalt process and explain the factors that influence these predictions and detections. This helps operators make adequately informed decisions, better understand the results generated by the AI systems, and take proper action.
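
As an illustrative sketch only (not the actual s-X-AIPI models; the column names and readings are hypothetical), an anomaly detector over process sensor data can be as simple as an Isolation Forest, whose per-variable deviations can then be shown to the operator as a simple explanation of each alarm:

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical process readings; a real plant would stream these from sensors.
df = pd.DataFrame({
    "drum_temperature": [160, 162, 161, 210, 159],
    "energy_per_tonne": [85, 87, 86, 140, 84],
})

detector = IsolationForest(contamination=0.2, random_state=0).fit(df)
df["anomaly"] = detector.predict(df)  # -1 flags an unusual situation

# A simple, explainable signal for the operator: how far each variable of a
# flagged sample deviates from its normal (median) behaviour.
cols = ["drum_temperature", "energy_per_tonne"]
medians = df[df["anomaly"] == 1][cols].median()
for _, row in df[df["anomaly"] == -1].iterrows():
    print((row[cols] - medians).to_dict())
```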

Explainability in AI methods is essential for the safe and effective adoption of AI in all kinds of sectors: industry, retail, logistics, pharma, construction… At CARTIF, we are committed to developing technologies to create AI-based applications that not only improve processes and services, but are also transparent and comprehensible for users; in short, that are explainable.


Co-author

Iñaki Fernández. PhD in Artificial Intelligence. Researcher at the Health and Wellbeing Area of CARTIF.

Beyond reality. Extended reality

Imagine finding out that the pilot of your next flight will be using Apple Vision Pro while in command of the plane. Would you feel comfortable boarding that plane? If your answer is no, you might think the pilot is reckless and that your life is at risk. On the other hand, if your answer is yes, you probably know the potential of using this device in such a situation.

Recently, the world was caught up in this debate when a pilot in the United States was recorded using Apple Vision Pro during a flight1. The pilot claimed to have improved productivity with this device. However, he faced significant criticism and had to apologize after deleting the video.

Why did this case cause so much outrage? In reality, many sectors use these types of devices daily, such as surgery, architecture, engineering, and training. The reason is simple: we are progressing. Although humans are skeptical of new technologies, we recognize that they can improve our lives. A clear example is e-commerce; when it started, many people thought it was dangerous. Now, Amazon is the fifth most valuable company in the United States, and in Spain, 39% of the population shops online at least once a month2.

It’s likely that over time, this feeling will also dissipate in the case of extended reality. This term, which encompasses virtual reality, augmented reality, and mixed reality, can be confusing for many. Each technology serves a specific purpose based on the level of immersion: virtual reality creates entirely digital environments, augmented reality overlays digital elements onto the physical reality, and mixed reality combines both to provide spatial awareness to digital elements. This concept is best understood when looking at the following image.

Differences between virtual reality, augmented reality and mixed reality. Source: Avi Barel3

In the image, you can see how in mixed reality, an object like a rubber duck can recognize its surroundings and position itself behind a table instead of going through it as it would in augmented reality. This is the magic of mixed reality!

Although Apple Vision Pro has incredible features, similar devices have existed for a long time, something that CARTIF is well aware of. That’s why in the Industrial and Digital Systems Division, we have long been using the Microsoft HoloLens 2 mixed reality device for various purposes.

In the Baterurgia project, we are using this technology to automate the disassembly of electric car batteries and promote human-robot interaction. To achieve this, we rely on robotics and computer vision to detect screws present in a battery. Through the lenses of the Microsoft HoloLens 2, the operator sees holograms indicating the position of the screws in space. The operator can select a screw with a finger or gaze and issue instructions to the robot via voice commands. The system provides feedback on the progress of the activity, allowing the operator to perform other tasks simultaneously.

Sequence for picking up a screw (recorded with Microsoft HoloLens 2)

  1. Display of the camera image showing detected screws.
  2. Identification and marking of the screws.
  3. The operator selects a screw.
  4. The robot picks up the selected screw.

As you have seen, mixed reality is gaining popularity and being applied in more sectors. The high cost of products like Apple Vision Pro and Microsoft HoloLens 2, which are around $3500, is a significant limitation. However, newer, more affordable devices like the Meta Quest 3, which costs around $500, are making this technology more accessible to companies and users. Along these lines, global sales of extended reality devices are projected to reach 105 million units by 20254.

If this post has intrigued you and you wish to explore more about extended reality and its impact, I’d be happy to share more information with you!


1 J. Serrano, “Video of Man ‘Flying’ Plane While Wearing the Apple Vision Pro Sparks Outrage,” Gizmodo, 7 February 2024. Available: https://gizmodo.com/pilot-flying-plane-apple-vision-pro-video-stunt-1851233997

2 Statista, “Frecuencia con la que los consumidores compran online al mes en España en 2023” (How often consumers shop online per month in Spain, 2023). Available: https://es.statista.com/estadisticas/496519/frecuencia-de-compra-mensual-en-comercio-electronico-de-espana/

3 A. Barel, “The differences between VR, AR & MR,” Medium, 7 August 2017. [Online]. Available: https://medium.com/startux-net/the-differences-between-vr-ar-mr-27012ea1c5

4 Statista, “Ventas de auriculares/gafas de realidad extendida (RE) en todo el mundo desde 2016 hasta 2025” (Worldwide shipments of extended reality (XR) headsets/glasses, 2016 to 2025). Available: https://es.statista.com/estadisticas/1307118/envios-de-auriculares-de-realidad-extendida/

CAPRI, pathway and results

When a project ends, it is time to recapitulate, to collect all the information and the experience gained along the way. Over the three and a half years of work in the CAPRI project there has been plenty of time to do things, to obtain very good results, and also to feel discouraged, because many times nothing seems to work well the first time.

The CAPRI project ended in September 2023, having achieved the main objectives defined at its start. These were driven by the need to help the digital transformation of the European process industry by investigating, developing and testing a cognitive automation platform, CAP, which integrates 19 different cognitive solutions defined across the three pilot plants of the project. The platform was designed with the ultimate goal of reducing raw material use, energy consumption and CO2 footprint. At the close of the project, it can be shown that these reductions were achieved thanks to the very close collaboration of the twelve partners involved, from seven different countries. The cognitive platform and solutions were deployed in three important sectors of the process industry: asphalt manufacturing, production of steel billets and bars, and tablet production in the pharma industry.

For example, in the asphalt pilot plant of EIFFAGE Infraestructuras, the cognitive solutions covered all four automation levels, from sensors to planning.

The final prototype, demonstrated under actual operation of the asphalt plant, included very different technologies such as computer vision, vibration analysis, neural networks and mathematical models that parametrize the existing data to predict the key performance indicators (specific energy consumption per tonne of asphalt mix or the final amount of raw materials used).

The cognitive solutions developed, like the cognitive control of the dryer drum or the new sensors, ensure the quality of products and production in real time, reducing the energy and raw materials used. Before the project, the control of the materials used was based on estimations; now, with the mathematical model for mass balance and the new sensors, plant operators receive real-time information they did not have before.
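
As a purely illustrative sketch of the idea behind such a mass balance check (not the actual CAPRI model; all variable names and figures are hypothetical), the measured inputs of each raw material can be reconciled against the measured production:

```python
def mass_balance(feed_rates_tph, dust_recovered_tph, mix_output_tph):
    """Compare the sum of measured inputs with the measured asphalt output.

    feed_rates_tph: dict of raw material -> feed rate in tonnes/hour
    dust_recovered_tph: dust captured by the baghouse, tonnes/hour
    mix_output_tph: asphalt mix produced, tonnes/hour
    Returns the imbalance in tonnes/hour; a persistent offset hints at a
    drifting sensor or unaccounted losses.
    """
    total_in = sum(feed_rates_tph.values()) - dust_recovered_tph
    return total_in - mix_output_tph

# Hypothetical snapshot of one hour of operation.
imbalance = mass_balance(
    feed_rates_tph={"aggregates": 92.0, "bitumen": 4.8, "filler": 3.1},
    dust_recovered_tph=1.2,
    mix_output_tph=98.5,
)
print(f"Mass imbalance: {imbalance:+.1f} t/h")
```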

The expected results of each cognitive solution were defined during the first stages of the project, so that the improvements of each one could be verified during the validation period.

The CAPRI project offers innovative solutions with the potential to transform industries and drive progress, unlocking new possibilities and empowering various sectors with cutting-edge advancements thanks to the key exploitable results it generated.

Regarding these results, the asphalt use case includes three exploitable results: an online sensor that measures the dust aspirated inside a pipe, a sensor for the amount of bitumen present in recycled asphalt, and a predictive maintenance system for the plant’s baghouse based on cognitive sensors and expert knowledge. The steel use case generated two exploitable results: a cognitive steel solidification sensor for continuous casting processes and a steel product tracking solution. The pharma use case produced two exploitable results: a cognitive sensor for granule quality and a quality attributes sensor.

The project also generated transversal key exploitable results useful for any kind of industry: the technical architecture of the cognitive automation platform (CAP), and another related to the open data generated, showing the CAPRI project’s commitment to open science and the FAIR principles through more than 50 assets shared on open platforms such as Zenodo.

The main objectives of the proposal were the reduction of raw material use, energy consumption and CO2 footprint. We can say with pride that we achieved those objectives, as you can see in the summary table.

KPI | Target | After CAPRI
Savings in raw material | 5-20% | 10-20%
Overall reduction in energy consumption | 5% | 3-16.75%
Reduction of CO2 footprint | 5% | 3-16.75%

As an engineer, when a project finishes on time and with such good results, when your project has contributed to improving industry without damaging our environment, you feel better, and all the sacrifices, extra hours and bad reviews were worth it.

LASER: from death ray to the Swiss Army knife of technology

“LA man discovers science-fiction death ray”. This was the shocking headline that appeared in a Los Angeles newspaper in July 1960. A few weeks earlier, on 16 May 1960, the American engineer and physicist Theodore H. Maiman, at Hughes Research Laboratories, had succeeded in making a synthetic ruby cylinder with reflective bases and a photographic flash lamp emit pulses of intense red light: the first physical implementation of the laser.

Theodore H. Maiman with the first laser implementation

This milestone in photonics was the consequence both of centuries of study by great scientists such as Newton, Young, Maxwell and Einstein, who tried to understand and explain the nature of light, and of a frantic race since the 1950s between a dozen laboratories, led by Bell Labs, to demonstrate experimentally that the stimulated emission of light predicted by Albert Einstein in his 1917 paper “The Quantum Theory of Radiation” was possible.

The term LASER, or “Light Amplification by Stimulated Emission of Radiation”, was coined by Gordon Gould in 1957 in his notes on the feasibility of building a laser. Gould had been a PhD student of Charles Townes, who in 1954 had built the MASER, the predecessor of the laser, which amplified microwaves by stimulated emission of radiation. In 1964, Charles Townes received the Nobel Prize in Physics for his implementation of the MASER, Gordon Gould became a millionaire with the laser patent, and Maiman received recognition for having created the first implementation of a laser, as well as numerous academic awards.

A laser is a light source with special characteristics of coherence, monochromaticity and collimation. These characteristics make it possible to concentrate, with the help of optical lenses, a high intensity of energy in a minimal area. To achieve them, the laser makes use of the quantum mechanism predicted by Einstein whereby the generation of photons in certain solid, liquid or gaseous media is greatly amplified when these media are excited electrically or by light pulses.
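
To get a feel for why those three properties matter (a standard first-order optics estimate, not tied to any specific laser), the diameter of the spot to which a collimated beam can be focused is approximately

$$ d \approx \frac{4\,\lambda\,f\,M^{2}}{\pi\,D} $$

where $\lambda$ is the wavelength, $f$ is the focal length of the lens, $D$ is the beam diameter at the lens, and $M^{2} \geq 1$ is the beam quality factor. A single, well-defined wavelength and a wide, well-collimated beam therefore focus down to spots of only a few tens of micrometres, which is what allows such enormous concentrations of energy.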

During the 1960s, in addition to Maiman’s solid-state laser, other lasers were developed, such as the He-Ne laser in December 1960 and the CO2 laser in 1961, whose active media were gases, or the diode laser in 1962. Although in the beginning the laser was described as “a solution looking for a problem”, its applications multiplied rapidly, making it an indispensable tool in most fields of science and manufacturing. We find examples in industry, where its multiple uses for cutting, welding or surface treatment of a large number of materials have made it indispensable, or in the communications sector, where its use to transmit information as pulses of light through optical fibres has made it possible to achieve previously unimaginable data transfer rates, without which the current digital transformation would not be possible.

Nowadays, the development of new lasers, their performance and their applications continue to grow. For example, in recent years, green and blue lasers have become increasingly important in electro-mobility because their wavelengths are more suitable for welding copper elements than those of other, more common lasers.

Green laser for cutting and welding copper elements.
Source: Cvecek, Kaufmann, BLZ 2021. https://www.wzl.rwth-aachen.de/go/id/telwe?lidx=1

Since 2020, CARTIF has been part of PhotonHub Europe, a platform made up of more than 30 reference centres in photonics from 15 European countries, in which more than 500 photonics experts offer their support to companies (mainly SMEs) to help them improve their production processes and products through the use of photonics. With this objective, training, project development, and technical and financial advisory actions are being organized until 2024.

In addition, to stay aware of what is happening in the world of photonics, we encourage you to join the community created in PhotonHub Europe, where you can keep up with the platform’s activities as well as news and events related to photonics.

The evolution of HRIs (Human-robot interaction). More agile and adaptable to different scenarios

In a world where humans perform tasks that involve manipulating objects, such as lifting, dragging or interacting with them (for example, when we use our beloved mobile phones or we eat an apple), these actions are performed subconsciously, naturally. It is our senses that allow us to adapt our physical characteristics to the tasks instinctively. In contrast, robots act like little human apprentices, imitating our behaviour, as they currently lack the same awareness and intelligence.

To address this gap, Human-Robot Interaction (HRI) emerged: a discipline that seeks to understand, design and evaluate the interaction between robots and humans. This field began in the 1990s with a multidisciplinary approach, but today its study is constantly evolving and has given rise to important events1 that bring together visionaries in the field who seek to promote this technology, bringing us ever closer to a world where artificial intelligence and humans understand each other and collaborate, transforming our near future.

Understanding the discipline of human-robot interaction is crucial. It is not a simple task; rather, it is tremendously challenging, requiring contributions from cognitive science, linguistics, psychology, engineering, mathematics, computer science, and human factors design. As a result, multiple attributes are involved:

  • Level of autonomy: making decisions independently.
  • Exchange of information: fluency and understanding between the different parties.
  • Different technologies and equipment: major adaptation between languages and models.
  • Task configuration: defining and executing tasks efficiently.
  • Cognitive learning: the ability to learn and improve over time.

Here again, the type of interaction is of particular importance. Interaction is defined as a reciprocal action, relationship or influence between two or more persons, objects, agents, etc. A key factor is the distance between human and robot: we can speak of a remote interaction, e.g. mobile robots that are sent into space, or of a physical interaction, where the human being is in contact with the robot.

Human-robot interaction levels according to the standards ISO 8373, ISO 10218 and ISO 15066
Source: V. Villani et al., “Survey on human–robot collaboration in industrial settings: Safety, intuitive interfaces and applications”, Mechatronics 55 (2018) 248–266, http://dx.doi.org/10.1016/j.mechatronics.2018.02.009

These attributes are just a sample of the complexities involved in these robotic interaction systems, where interdisciplinary collaboration is essential for their evolution.

At the moment, the challenges relate to the highly unstructured nature of the scenarios where collaborative robots are used, as it is impossible for a technology developer to structure the entire system environment. Among the most important challenges are aspects related to mobility, communications, map construction and situational awareness.

So, what is the next step in human-robot interaction? Challenges include getting humans and robots to speak the same language, and improving and simplifying communication, especially for people without technological training, without presupposing prior skills or requiring complicated instruction manuals. They also include discovering new forms of interaction through natural language and, in the case of assistive robots, taking special care with proximity and vulnerability; and, in general, improving interfaces, making them more agile and flexible so that they can easily adapt to different scenarios and changes in the environment.

On the other hand, a challenge that has become particularly important in recent times is taking into account emotional needs, human values and ethics in human-robot interactions, as highlighted in the HRI definition below:

HRI definition (Human-Robot interaction)

is the science that studies people’s behaviour and attitudes towards robots in relation to their physical, technological and interactive characteristics, with the aim of developing robots that facilitate the emergence of efficient human-robot interactions (in accordance with the original requirements of their intended area of use), but are also acceptable to people and satisfy the social and emotional needs of their individual users, while respecting human values (Dautenhahn, 2013).


Inspired by this exciting field of work, CARTIF, in collaboration with the FIWARE Foundation and other leading partners in Europe, will start the European ARISE project in 2024. ARISE aims to deliver real-time, agile, human-centric, open-source technologies that drive solutions in human-robot interaction (HRI) by combining open technologies such as ROS 2, Vulcanexus and FIWARE, and to solve challenges by funding experiments that develop agile HRI solutions with increasingly adaptive and intuitive interfaces.
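
For readers unfamiliar with ROS 2, the sketch below shows what a minimal building block of such an interaction stack looks like in Python with rclpy; the topic name and message content are hypothetical illustrations (not part of ARISE or ROS4HRI), just to show the publish/subscribe style these HRI components build on:

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class HumanDetectionPublisher(Node):
    """Toy node that periodically announces a (fake) detected human."""

    def __init__(self):
        super().__init__("human_detection_publisher")
        # Hypothetical topic; real HRI stacks (e.g. ROS4HRI) define richer messages.
        self.publisher = self.create_publisher(String, "humans/detected", 10)
        self.timer = self.create_timer(1.0, self.tick)

    def tick(self):
        msg = String()
        msg.data = "person_1 at 1.2m, facing robot"
        self.publisher.publish(msg)

def main():
    rclpy.init()
    node = HumanDetectionPublisher()
    rclpy.spin(node)  # process timer callbacks until shutdown (Ctrl+C)
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```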

ARISE will address many of the following challenges: (1) Application of collaborative robotics for disassembly of value-added products, (2) Picking of complex products in industrial warehouses, (3) Flexible robotic collaboration for more efficient assembly and quality control, (4) Intelligent reprogramming ensuring adaptability for different products through intuitive interfaces, (5) Search and transport tasks in healthcare environments, (6) Improving multimodal interaction around different functional tasks, (7) Robotic assistance in flexible high-precision tasks, and (8) Improving ergonomics and worker efficiency, thus generating a multidisciplinary framework that takes into account both technological and social aspects.

In addition, the ARISE project opens its doors to robotics experts so that they can collaborate in solving the various challenges, thus generating new technological components for the HRI Toolbox, such as ROS4HRI. This collaborative grand challenge aims to make it easier for companies to create agile and sustainable HRI applications in the near future.


1 ACM/IEEE International Conference on Human-Robot Interaction, IEEE International Conference on Robotics and Automation (ICRA) and Robotics: Science and Systems (RSS)