CAPRI, pathway and results

When a project ends, it is time to recapitulate, to collect all the information and the experience gained along the way. Over the three and a half years of work in the CAPRI project there has been plenty of time to do things, to obtain very good results, and also to feel frustrated, because many times nothing seems to work well the first time.

The CAPRI project finished in September 2023 and achieved the main objectives defined at the beginning, which were driven by the need to help the digital transformation of the European process industry by investigating, developing and testing a cognitive automation platform, CAP, that integrates 19 different cognitive solutions defined for each of the three project pilot plants. This platform was designed with the ultimate goal of reducing the use of raw materials, energy consumption and the CO2 footprint. At the end of the project it can be shown that these reductions were achieved thanks to the very close collaboration of the twelve partners involved, from seven different countries. The cognitive platform and solutions were deployed in three important sectors of the process industry: asphalt manufacturing, the production of steel billets and bars, and the production of tablets in the pharmaceutical industry.

For example, in the asphalt pilot plant of EIFFAGE Infraestructuras, the cognitive solutions spanned all four automation levels, from sensors to planning.

The final prototype, demonstrated under actual operation of the asphalt plant, included very different technologies such as computer vision, vibration analysis, neural networks and mathematical models that parametrise the existing data to predict key performance indicators (specific energy consumption per tonne of asphalt mix or the final amount of raw materials used).

The cognitive solutions developed, such as the cognitive control of the dryer drum or the new sensors, assure the quality of products and production in real time, reducing the energy and raw materials used. Before the project, the control of the materials used was based on estimations; now, with the mathematical model for the mass balance and the new sensors, plant operators receive real-time information they did not have before.
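As a purely illustrative aside (the actual CAPRI model is not reproduced here, and every variable name and figure below is an assumption), a steady-state mass balance of this kind boils down to bookkeeping: whatever enters the dryer drum must leave as asphalt mix, evaporated water or aspirated dust.

```python
# Illustrative only: not the CAPRI model. All names and figures are assumptions.

def asphalt_mass_balance(aggregate_in_t, bitumen_in_t, moisture_pct, dust_aspirated_t):
    """Steady-state balance: inputs leave as asphalt mix, evaporated water or baghouse dust."""
    water_evaporated_t = aggregate_in_t * moisture_pct / 100.0
    dry_aggregate_t = aggregate_in_t - water_evaporated_t - dust_aspirated_t
    mix_out_t = dry_aggregate_t + bitumen_in_t
    return {"mix_out_t": mix_out_t,
            "water_evaporated_t": water_evaporated_t,
            "dust_losses_t": dust_aspirated_t}

# Example: 100 t of aggregate at 4% moisture, 5 t of bitumen, 0.8 t of dust aspirated
print(asphalt_mass_balance(100.0, 5.0, 4.0, 0.8))
```

With measured inputs and the dust sensor feeding such a balance in real time, the estimate of material consumption no longer depends on manual estimations.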

The expected results of each cognitive solution were defined during the first stages of the project in order to verify the improvements of each one during the validation period.

The CAPRI project offers innovative solutions with the potential to transform industries and drive progress, unlocking new possibilities and empowering various sectors with cutting-edge advances thanks to the key exploitable results generated.

Regarding these results, the asphalt use case contributed three exploitable results: a sensor that measures online the dust aspirated inside a pipe, a sensor for the amount of bitumen present in recycled asphalt, and a predictive maintenance system for the plant's baghouse based on cognitive sensors and expert knowledge. The steel use case generated two exploitable results: a cognitive steel solidification sensor for continuous casting processes and a steel product tracking system. The pharma use case produced two exploitable results: a cognitive sensor for granule quality and a quality attributes sensor.

The project also generated some transversal key exploitable results useful for any kind of industry: the technical architecture of the cognitive automation platform (CAP) and another related to the open data generated, showing the CAPRI project's commitment to open science and the FAIR principles through more than 50 assets shared on open platforms such as Zenodo.

The main objectives of the proposal were the reduction of raw material use, energy consumption and CO2 footprint. We can say with pride that we achieved those objectives, as the summary table shows.

KPI | After CAPRI
5% – 20% savings in raw material | 10–20%
5% overall reduction in energy consumption | 3–16.75%
5% reduction of CO2 footprint | 3–16.75%

As an engineer, when a project finishes on time and with such good results, when your project has contributed to improving industry without damaging our environment, you feel better, and all the sacrifices, extra hours and bad reviews were worth it.

LASER: from death ray to the Swiss Army knife of technology

“L.A. man discovers science-fiction death ray”. This was the shocking headline that appeared in a Los Angeles newspaper in July 1960. A few weeks earlier, on 16 May 1960, the American engineer and physicist Theodore H. Maiman, at Hughes Research Laboratories, had succeeded in making a synthetic ruby cylinder with reflective bases, pumped by a photographic flash lamp, emit pulses of intense red light: the first physical implementation of the laser.

Theodore H. Maiman with the first laser implementation

This milestone in photonics was the consequence both of centuries of study by great scientists such as Newton, Young, Maxwell and Einstein trying to understand and explain the nature of light, and of a frantic race, under way since the 1950s between a dozen laboratories led by Bell Labs, to demonstrate experimentally that the stimulated emission of light predicted by Albert Einstein in his 1917 paper “The Quantum Theory of Radiation” was possible.

The term LASER, for “Light Amplification by Stimulated Emission of Radiation”, was coined by Gordon Gould in 1957 in his notes on the feasibility of building a laser. Gould had been a PhD student of Charles Townes, who in 1954 had built the MASER, the predecessor of the laser, which amplified microwaves by stimulated emission of radiation. In 1964, Charles Townes received the Nobel Prize in Physics for his implementation of the MASER, Gordon Gould became a millionaire with the laser patents, and Maiman received recognition for having created the first implementation of a laser, as well as numerous academic awards.

A laser is a light source with special characteristics of coherence, monochromaticity and collimation. These characteristics make it possible to concentrate, with the help of optical lenses, a high intensity of energy on a minimal area. To achieve them, the laser exploits the quantum mechanism predicted by Einstein whereby the generation of photons in certain solid, liquid or gaseous media is greatly amplified when these media are excited electrically or by light pulses.
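The article does not write the mechanism out, but the standard textbook rate equation built on the Einstein coefficients from that 1917 paper summarises it. With N1 and N2 the populations of the lower and upper energy levels and ρ(ν) the energy density of the radiation field,

\[
\frac{dN_2}{dt} = B_{12}\,\rho(\nu)\,N_1 \;-\; B_{21}\,\rho(\nu)\,N_2 \;-\; A_{21}\,N_2 .
\]

The B21 term is the stimulated emission: photons already present in the medium trigger the emission of further, identical photons. Net amplification only occurs under population inversion (N2 > N1), which is precisely what exciting the medium electrically or with light pulses provides.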

During the 1960s, in addition to Maiman's solid-state laser, other lasers were developed, such as the He-Ne laser in December 1960 and the CO2 laser in 1964, whose active media were gases, or the diode laser in 1962. Although in the beginning the laser was described as “a solution looking for a problem”, the number of its applications grew rapidly, making it an indispensable tool in most fields of science and manufacturing. Examples can be found in industry, where its many uses for cutting, welding or surface treatment of a large number of materials have made it indispensable, or in the communications sector, where its use to transmit information as pulses of light through optical fibres has made it possible to achieve previously unimaginable data transfer rates, without which the current digital transformation would not be possible.

Nowadays, the development of new lasers, their performance and their applications continue to grow. For example, in recent years green and blue lasers have become increasingly important in electro-mobility because their wavelengths are more suitable for welding copper elements than those of other, more common lasers.

Green laser for cutting and welding copper elements.
Source: Cvecek, Kaufmann, blz 2021. https://www.wzl.rwth-aachen.de/go/id/telwe?lidx=1

Since 2020, CARTIF has been part of PhotonHub Europe, a platform made up of more than 30 reference centres in photonics from 15 European countries, in which more than 500 photonics experts offer their support to companies (mainly SMEs) to help them improve their production processes and products through the use of photonics. With this objective, training, project development and technical and financial advisory actions are being organised until 2024.

In addition, to keep up with what is happening in the world of photonics, we encourage you to join the community created around PhotonHub Europe, where you can follow the platform's activities as well as news and events related to photonics.

The evolution of HRIs (Human-robot interaction). More agile and adaptable to different scenarios

In a world where humans perform tasks that involve manipulating objects, such as lifting, dragging or interacting with them (for example, when we use our beloved mobile phones or we eat an apple), these actions are performed subconsciously, naturally. It is our senses that allow us to adapt our physical characteristics to the tasks instinctively. In contrast, robots act like little human apprentices, imitating our behaviour, as they currently lack the same awareness and intelligence.

To address this gap, Human-Robot Interaction (HRI) emerged, a discipline that seeks to understand, design and evaluate the interaction between robots and humans. This field had its beginnings in the 1990s with a multidisciplinary approach, but today its study is constantly evolving and has given rise to important events1 that bring together visionaries in the field, who seek to promote this technology, bringing us ever closer to a world where artificial intelligence and humans understand each other and collaborate, transforming our near future.

Understanding the discipline of human-robot interaction is crucial. It is not a simple task; rather, it is tremendously challenging, requiring contributions from cognitive science, linguistics, psychology, engineering, mathematics, computer science, and human factors design. As a result, multiple attributes are involved:

  • Level of autonomy: making decisions independently.
  • Exchange of information: fluency and understanding between the different parties.
  • Different technologies and equipment: significant adaptation between languages and models.
  • Task configuration: defining and executing tasks efficiently.
  • Cognitive learning: the ability to learn and improve over time.

Here again the type of interaction is of particular importance. Interaction is defined as a reciprocal action, relationship or influence between two or more persons, objects, agents, etc., and a key factor is the distance between human and robot: it can be a remote interaction, e.g. mobile robots that are sent into space, or a physical interaction, where the human being is in contact with the robot.

Human-robot interaction levels according to the standards ISO 8373, ISO 10218 and ISO 15066
Source: V. Villani, et al., Survey on human–robot collaboration in industrial settings: Safety, intuitive interfaces and applications, Mechatronics 55 (2018) 248–266, http://dx.doi.org/10.1016/j.mechatronics.2018.02.009

These attributes are just a sample of the complexities involved in these robotic interaction systems, where interdisciplinary collaboration is essential for their evolution.

At the moment the challenges are related to the highly unstructured nature of the scenarios where collaborative robots are used, as it is impossible for a technology developer to structure the entire environment of the system. Among the most important challenges are aspects related to mobility, communications, map construction and situational awareness.

So, what is the next step in human-robot interaction? The challenges include getting humans and robots to speak the same language and improving and simplifying communication, especially for people without technological training, without presupposing prior skills or requiring complicated instruction manuals; discovering new forms of interaction through natural language and, in the case of assistive robots, taking special care with proximity and vulnerability; and, in general, improving interfaces, making them more agile and flexible so that they can be easily adapted to different scenarios and to changes in the environment.

On the other hand, a challenge that has become particularly important in recent times is to take into account emotional needs, human values and ethics in human-robot interactions, as highlighted in the HRI definition below:

HRI definition (Human-Robot interaction)

is the science that studies people’s behaviour and attitudes towards robots in relation to their physical, technological and interactive characteristics, with the aim of developing robots that facilitate the emergence of efficient human-robot interactions (in accordance with the original requirements of their intended area of use), but are also acceptable to people and satisfy the social and emotional needs of their individual users, while respecting human values (Dautenhahn, 2013).


Inspired by this exciting field of work, CARTIF, in collaboration with the FIWARE Foundation and other leading partners in Europe, will start the European ARISE project in 2024, which aims to deliver real-time, agile, human-centric, open-source technologies that drive solutions in human-robot interaction (HRI) by combining open technologies such as ROS 2, Vulcanexus and FIWARE. The project aims to solve challenges by funding experiments that develop agile HRI solutions with increasingly adaptive and intuitive interfaces.

ARISE will address many of the following challenges: (1) Application of collaborative robotics for disassembly of value-added products, (2) Picking of complex products in industrial warehouses, (3) Flexible robotic collaboration for more efficient assembly and quality control, (4) Intelligent reprogramming ensuring adaptability for different products through intuitive interfaces, (5) Search and transport tasks in healthcare environments, (6) Improving multimodal interaction around different functional tasks, (7) Robotic assistance in flexible high-precision tasks, and (8) Improving ergonomics and worker efficiency, thus generating a multidisciplinary framework that takes into account both technological and social aspects.

In addition, the ARISE project opens its doors to robotics experts so that they can collaborate in solving the various challenges, thus generating new technological components for the HRI Toolbox, such as ROS4HRI. This collaborative grand challenge aims to make it easier for companies to create agile and sustainable HRI applications in the near future.


1 ACM/IEEE International Conference on Human-Robot Interaction, IEEE International Conference on Robotics and Automation (ICRA) and Robotics: Science and Systems

Digital Twin: Industry 4.0 in its digitised form

The digital twin has become one of the main trends, or “mantras”, in relation to digitalisation. It is treated practically as a synonym for a product, something a company can buy off the shelf. At CARTIF, we believe that the digital twin concept is a synonym of the Industry 4.0 paradigm, a “revolutionary” approach that has transformed the way we conceive and manage industrial processes.

The term “digital twin” was coined by John Vickers of NASA in 2010, but its predecessor, the product lifecycle concept, was introduced by Michael Grieves in 2002. This philosophy focused on managing a product throughout its life, from creation to disposal. In essence, the physical product generates data that feeds a virtual space, providing essential information for decision-making and for the optimisation of the actual object.

A definition of a digital twin could be: “an accurate and complete digital representation of physical objects, processes or systems, with real-time data and physical characteristics, behaviours and relationships”.

A key question is: why do we need digital twins? In other words, what is their utility? These accurate, real-time digital representations offer a number of key advantages:

  • Data compilation and analysis to obtain valuable information and generate knowledge, driving efficiency and informed decision-making.
  • Accurate and dynamic simulation of the behaviour of physical objects, enabling virtual testing and experimentation before implementing changes, such as risky investments, in the real world.
  • Reduction of costs and risks, minimising risk and accelerating innovation in a wide range of sectors, from manufacturing to healthcare.
  • Real-time updating on an ongoing basis as new data is collected from the physical object, ensuring its validity along its lifecycle.

Like previous industrial revolutions, Industry 4.0 has transformed the way we work. This fourth revolution focuses on the interconnection of systems and processes to achieve greater efficiency throughout the value chain. The factory is no longer an isolated entity, but a node in a global production network.

To create an effective Digital Twin, at CARTIF we follow a systematic recipe of 9 steps:

  1. Objective definition: we identify the physical object, process or system we want to replicate and clearly understand its purpose and objectives.
  2. Data compilation: we gather all relevant data from the physical object using IoT sensors, historical records or other sources of information.
  3. Data integration: we organise and combine the collected data in a format suitable for processing and analysis.
  4. Modelling and construction: we use different simulation and modelling technologies to create an accurate digital representation of the physical object.
  5. Validation and calibration: we verify and adjust the digital twin model using reference data and comparative tests against the real physical object.
  6. Real-time integration: we establish a real-time connection between the digital twin and the IoT sensors of the physical object to link live data.
  7. Analysis and simulation: we use the digital twin to carry out analyses, simulations and virtual tests of the physical object.
  8. Visualisation and shared access: we provide visual interfaces and shared access tools so that users can interact with the digital twin.
  9. Maintenance and upgrade: we keep the digital twin up to date through real-time data collection, periodic calibration and the incorporation of improvements and upgrades.
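As a minimal sketch of steps 4 to 7 (this is not CARTIF's actual tooling; the tank model, parameter and sensor values are assumptions chosen only for illustration), a digital twin can be reduced to a calibrated model object that mirrors live sensor readings and can be queried for virtual tests:

```python
# Minimal, illustrative digital-twin skeleton; names, model and data are assumed for this example.
from dataclasses import dataclass, field


@dataclass
class TankTwin:
    """Digital twin of a heated tank: a first-order thermal model kept in sync with sensor data."""
    heat_gain_k: float = 0.1           # model parameter, adjusted during calibration (step 5)
    temperature_c: float = 20.0        # current state mirrored from the physical asset (step 6)
    history: list = field(default_factory=list)

    def calibrate(self, reference_temps, heater_powers_kw, dt_s=3600.0):
        """Step 5: pick the gain that best reproduces reference data (crude grid search)."""
        best_k, best_err = self.heat_gain_k, float("inf")
        for k in (x / 100 for x in range(1, 101)):
            t, err = reference_temps[0], 0.0
            for power, target in zip(heater_powers_kw, reference_temps[1:]):
                t += k * power * dt_s / 3600.0
                err += (t - target) ** 2
            if err < best_err:
                best_k, best_err = k, err
        self.heat_gain_k = best_k

    def ingest(self, measured_temp_c):
        """Step 6: real-time integration, mirroring the latest IoT sensor reading."""
        self.temperature_c = measured_temp_c
        self.history.append(measured_temp_c)

    def simulate(self, heater_power_kw, horizon_steps, dt_s=3600.0):
        """Step 7: virtual test, predicting temperature without touching the real tank."""
        t, trajectory = self.temperature_c, []
        for _ in range(horizon_steps):
            t += self.heat_gain_k * heater_power_kw * dt_s / 3600.0
            trajectory.append(round(t, 2))
        return trajectory


twin = TankTwin()
twin.calibrate(reference_temps=[20, 21, 22, 23], heater_powers_kw=[5, 5, 5])  # finds k = 0.2
twin.ingest(25.0)                                          # latest reading from the plant
print(twin.simulate(heater_power_kw=5, horizon_steps=3))   # e.g. [26.0, 27.0, 28.0]
```

Step 8 would then expose this twin through a dashboard or API, and step 9 would periodically re-run the calibration as new data accumulates.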

Just as previous industrial revolutions required enabling technologies, Industry 4.0 needs its own digital enablers. As we said at the beginning, we consider the digital twin a digitised form of the Industry 4.0 paradigm because digital enablers are fundamental to creating digital twins effectively. At CARTIF, we have accumulated almost 30 years of experience applying these technologies in various sectors, from industry to healthcare.

Digital enabling technologies fall into four main categories:

  1. Creation Technologies: these technologies allow the creation of Digital Twins using physical equations, data, 3D modelling or discrete events.
  2. Data sources: to feed digital twins, we use data integration platforms, interoperability with data sources and IoT technology.
  3. Optimisation: optimisation is achieved through methods such as linear or non-linear programming, simulations, AI algorithms and heuristic approaches (illustrated in the sketch after this list).
  4. Presentation: the information generated can be presented through commercial solutions, open-source tools such as Grafana or Apache Superset, or even augmented reality visualisations.
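For instance, a toy version of the optimisation enabler (the demand, capacities and energy costs below are invented for the example) can be written with SciPy's linear programming routine:

```python
# Toy linear-programming example for the "Optimisation" enabler (all figures are assumptions).
from scipy.optimize import linprog

energy_cost = [2.0, 3.5]        # kWh per unit produced on line A and line B

# Meet a demand of at least 100 units; linprog uses <= constraints, so demand becomes
# -(xA + xB) <= -100. Each line can produce at most 80 units.
result = linprog(c=energy_cost,
                 A_ub=[[-1.0, -1.0]],
                 b_ub=[-100.0],
                 bounds=[(0, 80), (0, 80)])

print(result.x)    # optimal split, here [80., 20.]
print(result.fun)  # minimum total energy, here 230.0 kWh
```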

Despite current progress, the challenge of keeping Digital Twins up to date remains an area of ongoing development. Automatic updating to reflect reality is a goal that requires significant investment in research and development.

In short, Digital Twins are the heart of Industry 4.0, boosting efficiency and informed decision-making. At CARTIF, we are committed to continuing to lead the way in this exciting field, helping diverse industries embrace the digital future.

Managing industrial data: prevention is better than cure

In the field of health, it is well known that preventing illnesses is more effective than treating them once they have manifested themselves. Something similar applies to industrial data: its continuous and proactive maintenance helps avoid the need for extensive pre-treatment before using advanced data analytics techniques for decision-making and knowledge generation.

Data pre-treatment involves several tasks, such as (1) data cleaning, (2) correction of errors, (3) elimination of atypical values and (4) standardisation of formats, among others. These activities are necessary to ensure quality and data consistency before the data is used in analysis, decision-making or specific applications.

Source: Storyset on Freepik
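To make the four tasks listed above more concrete, here is a minimal pandas sketch (the column names, formats and thresholds are invented for the example, not a prescription):

```python
# Minimal illustration of the four pre-treatment tasks; columns and thresholds are assumed.
import pandas as pd

df = pd.DataFrame({
    "timestamp": ["2023-01-01 08:00", "2023/01/01 09:00", None, "2023-01-01 10:00"],
    "temperature_c": [72.5, 7250.0, 73.1, 72.8],   # 7250.0 looks like a typing/scaling error
})

# (1) Data cleaning: drop rows without a timestamp
df = df.dropna(subset=["timestamp"])

# (4) Standardisation of formats: parse mixed date formats into one datetime type (pandas >= 2.0)
df["timestamp"] = pd.to_datetime(df["timestamp"], format="mixed")

# (3) Elimination of atypical values: keep only physically plausible readings
df = df[df["temperature_c"].between(-20, 300)]

# (2) Correction of errors: round to the real resolution of the sensor
df["temperature_c"] = df["temperature_c"].round(1)

print(df)
```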

However, if robust data maintenance is implemented from the outset, many of these errors and irregularities can be prevented. By establishing proper data entry processes, applying validations and quality checks, and keeping records up to date, it is possible to reduce the amount of pre-treatment needed later, identifying and addressing potential problems before they become major obstacles. This includes the early detection of errors such as inaccurate data, the correction of inconsistencies and the updating of outdated information. It is true that companies currently store large amounts of data, but it is important to highlight that not all of this data is necessarily valid or useful, for example, for use in an artificial intelligence project. Indeed, many organisations face the challenge of maintaining and managing data that lacks relevance or quality. Data maintenance aims to ensure the integrity, quality and availability of data over time.

Efficient data maintenance is crucial to ensure that data are reliable, up to date and accurate, but this involves continuous monitoring and management by company staff, ensuring that the data remain accurate, consistent, complete and current. The most common activities related to data maintenance include:

  1. Regular monitoring: Periodic data tracking is carried out to detect possible problems, such as errors, inconsistencies, losses or atypical values. This can involve reviewing reports, analysing trends or implementing automated alerts to detect anomalies (see the sketch after the summary below).
  2. Updating and correction: If errors or inconsistencies in the data are identified, maintenance staff ensure that they are corrected and updated appropriately. This may involve reviewing records, checking external sources or communicating with those responsible for data collection.
  3. Backup and recovery: Procedures and systems are established to back up data and ensure its recovery in the event of failure or loss. This may include implementing regular backup policies and conducting periodic data recovery tests.
  4. Access management and security: Data maintenance staff ensure that data is protected and only accessible to authorised users. This may involve implementing security measures such as access control, data encryption or the monitoring of audit trails.
  5. Documentation and metadata update: Data-related documentation, including field descriptions, database structure and associated metadata, is kept up to date. This facilitates the understanding and proper use of the data by users.

In summary, data maintenance involves (1) regular monitoring, (2) correcting errors, (3) backing up and (4) securing the data to ensure that it is in good condition and reliable. These actions are fundamental to maintaining the quality and security of the stored information.
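As promised in the first activity above, here is a sketch of what an automated "regular monitoring" job could look like (thresholds, column names and the alerting rules are assumptions chosen only to illustrate the idea):

```python
# Illustrative data-health check for regular monitoring; thresholds and columns are assumed.
import pandas as pd


def data_health_report(df, value_col, time_col, max_missing_ratio=0.05, max_staleness_h=2.0):
    """Compute simple health indicators for one signal and collect alert messages."""
    alerts = []

    # Completeness: share of missing values
    missing_ratio = df[value_col].isna().mean()
    if missing_ratio > max_missing_ratio:
        alerts.append(f"{missing_ratio:.1%} of '{value_col}' values are missing")

    # Consistency: crude outlier detection with a z-score rule
    z = (df[value_col] - df[value_col].mean()) / df[value_col].std()
    n_outliers = int((z.abs() > 3).sum())
    if n_outliers:
        alerts.append(f"{n_outliers} readings look like outliers (|z| > 3)")

    # Freshness: time since the last reading arrived
    staleness_h = (pd.Timestamp.now() - pd.to_datetime(df[time_col]).max()).total_seconds() / 3600
    if staleness_h > max_staleness_h:
        alerts.append(f"no new data for {staleness_h:.1f} h")

    return {"missing_ratio": missing_ratio, "outliers": n_outliers,
            "staleness_h": staleness_h, "alerts": alerts}
```

A report like this can be generated on a schedule and routed to the people responsible for the data, so that corrections (activity 2) start before the errors reach an analysis or an AI model.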

At CARTIF, we face this type of problem in different projects related to the optimisation of manufacturing processes for different companies and industries. We are aware of the number of staff hours consumed by the problems explained above, so we are working on providing automatic mechanisms that make life easier for those responsible for the aforementioned "data maintenance". One example is the s-X-AIPI project, focused on the development of AI solutions with autonomous capabilities that require special attention to data quality, starting with data ingestion.


Co-authors

Mireya de Diego. Researcher at the Industrial and Digital Systems Division

Aníbal Reñones. Head of Unit Industry 4.0 at the Industrial and Digital Systems Division

Terahertz technologies in industry

In this post, I would like to talk about devices capable of acquiring images in the Terahertz spectral range, an emerging technology with great potential for implementation in industry, especially in the agri-food sector.

Currently, the machine vision systems used in industry work with different ranges of the electromagnetic spectrum, such as visible light, infrared or ultraviolet, among others, which are not able to pass through matter. Therefore, these technologies can only examine the surface characteristics of a product or packaging, but cannot provide information about the inside.

In contrast, there are other technologies that do allow us to examine certain properties inside matter, such as metal detectors, magnetic resonance imaging, ultrasound and X-rays. Metal detectors are only capable of detecting the presence of metals. Magnetic resonance equipment is expensive and large, is mainly used in medicine, and its integration at industrial level is practically unfeasible. Ultrasound equipment requires contact and some skill in its application, and the results are difficult to interpret, so it is not practical in the industrial sector. Finally, X-rays are a very dangerous ionising radiation, which implies a great effort in protective shielding and exhaustive control of the radiation dose. Although they can pass through matter, X-rays can only provide information about the parts of a product that absorb radiation in this range of the electromagnetic spectrum.

Technologies to examine properties inside matter

From this point of view, we are faced with a very important challenge: to investigate the potential of new technologies capable of inspecting, safely and without contact, the inside of products and packaging, obtaining relevant information on internal characteristics such as quality, condition, presence or absence of elements inside, homogeneity, etc.

Looking at the options, the solution may lie in promoting the integration in industry of new technologies that work in non-ionising spectral ranges with the ability to penetrate matter, such as the terahertz/near-microwave spectral range.

First radiological image in history: the hand of Röntgen's wife

In 1895, Professor Röntgen took the first radiological image in history, of his wife's hand. 127 years have passed and research is still going on. In 1995, the first image in the terahertz range was captured, so only 27 years have passed since then. This shows the degree of maturity of terahertz technology, which is still in its early stages of research. This radiation is not new; we have long known it is there, but even today it is very difficult to generate and detect. The main research work has focused on improving the way this radiation is emitted and captured coherently, using equipment developed in the laboratory.

In recent years things have changed: new optical sensors and new terahertz sources with a very high potential for industrialisation have been obtained, which opens the doors of industry to this technology. An important research task now remains to determine the scope of this technology in the different areas of industry.

CARTIF is committed to this technology and is currently working on the industrial research project AGROVIS, "Intelligent VISual computing for products/processes in the AGRI-food sector", a project funded by the Junta de Castilla y León and framed in the field of computer vision (a digital enabler of Industry 4.0) applied to the agri-food sector, where one of the main objectives is to explore the different possibilities for automatically and safely inspecting the interior of agri-food products.