Hard to measure

Researchers are increasingly confronted with the need to “digitalise” something that has not been digitalised before: temperatures, pressures, energy consumption, etc. In these cases we look for a measurement system or a sensor in a commercial catalogue: a temperature probe, a pressure switch, a clamp ammeter for measuring an electric current, and so on.

Sometimes, however, we need to measure “something” for which no commercial sensor can be found. This may be because the measurement need is uncommon and there is not enough of a market for that type of sensor, or simply because no commercial technical solution exists, for various reasons. For example, it may be necessary to measure characteristics such as the moisture content of a stream of solid material, or characteristics that can only be measured indirectly in a quality control laboratory and that require a high level of experimentation.

In addition, characteristics sometimes have to be measured in very harsh environments, such as the high temperatures of a melting furnace, or in environments with so much dust that any conventional measurement system becomes saturated. It may also be necessary to evaluate a characteristic that is not evenly distributed (for example, the amount of fat in a piece of meat, or the presence of impurities). Another factor to take into account is that it is not always possible to install a sensor without interfering with the manufacturing process of the material we want to measure; sometimes the only option is to take a sample, analyse it off-line and obtain a value or characteristic some time later, but never in real time.

In these situations it is necessary to resort to custom-made solutions that we call smart sensors or cognitive sensors. Beyond the exotic or cool-sounding name, these are solutions that combine a set of “conventional” sensors with software or algorithms, for example artificial intelligence, that process the measurements returned by these commercial sensors to give as accurate an estimate as possible of the quality we want to measure.
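To make the idea more concrete, here is a minimal soft-sensor sketch in Python. It is purely illustrative and not the sensor developed in CAPRI: the sensor readings, the lab-measured quality values and the choice of a random-forest regressor (via scikit-learn) are all assumptions for the example.

```python
# Minimal soft-sensor sketch (hypothetical example): combine readings from
# several conventional sensors to estimate a quality that cannot be measured
# directly. Feature values and the regression model are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Historical records: each row holds readings from conventional sensors
# (e.g. temperature, pressure, motor current) taken while a laboratory
# analysis provided the "true" value of the hard-to-measure quality.
X_train = np.array([
    [152.0, 1.8, 41.2],
    [148.5, 1.7, 39.8],
    [160.3, 2.1, 44.0],
    [155.1, 1.9, 42.5],
])
y_train = np.array([12.4, 11.9, 14.2, 13.1])  # lab-measured quality

# Train the estimator that plays the role of the "smart sensor" logic.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# In operation, new readings from the conventional sensors are turned
# into an on-line estimate of the quality, without waiting for the lab.
new_readings = np.array([[150.2, 1.85, 40.7]])
estimate = model.predict(new_readings)
print(f"Estimated quality: {estimate[0]:.2f}")
```

In practice the regression model would be trained and validated against many laboratory analyses before the estimate could be trusted in operation.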

We are currently developing these types of smart sensors for different process industries, such as asphalt manufacturing, steel billets and bars, and the pharmaceutical industry (e.g. pills), within the framework of the European project CAPRI.

For example, in the manufacture of asphalt, sands of different sizes need to be dried before they are mixed with bitumen. During the continuous drying of these sands, the finest fraction, called filler, is “released” as dust from the larger aggregates, and this dust has to be vacuumed off industrially using what is called a bag filter. Today, the drying and suction of the filler is done in a way that ensures that all of it is extracted. The disadvantage of this approach is that additional filler then has to be added when the dried sands are mixed with the bitumen, because the filler improves the cohesion of the mix by filling the gaps between the sand grains. All this drying and complete suction of the filler entails an energy cost; to minimise it, a measurement of the filler present in the sand mixture would be needed. Today, this measurement is only obtained at specific points in time, through a granulometric analysis in a laboratory on a sample of the material taken before drying.

Within the CAPRI project we are working on the complex task of measuring the flow of filler sucked out during the drying process. There is no sensor on the market guaranteed to measure such a high concentration of dust (200,000 mg/m3) in suspension at high temperatures (150-200ºC).

The development of this type of sensor requires various laboratory tests under controlled conditions to verify the feasibility of the solution and then, also under laboratory conditions, calibration tests to ensure that the true flow of filler sucked out during the sand drying process can be estimated. The CAPRI project has successfully completed the testing of this sensor and of others related to the manufacture of steel bars and pharmaceutical pills.

In line with its commitment to the open science initiative promoted by the European Commission, the project has published on its Zenodo channel several results of these laboratory tests, which corroborate the preliminary success of these sensors pending their validation and testing in the production areas of the project partners. In the near future we will be able to share the results of the industrial operation of this and the other sensors developed in the project.


Co-author

Cristina Vega Martínez. Industrial Engineer. Coordinator of the CAPRI H2020 Project

AI potential for process industry and its sustainability

Artificial Intelligence (AI) is widely recognized as a key driver of the industrial digital revolution, together with data and robotics1 2. To increase AI deployment that is practically and economically feasible in industrial sectors, we need AI applications with simpler interfaces that do not require a highly skilled workforce, yet exhibit a longer useful life and need less specialized maintenance (e.g. data labelling, training, validation…).

Achieving an effective deployment of trustworthy AI technologies within process industries requires a coherent understanding of how these different technologies complement and interact with each other in the context of the domain-specific requirements of industrial sectors3. Process industries, in particular, must leverage the potential of innovation driven by digital transformation as a key enabler for reaching the Green Deal objectives and the expected twin green and digital transition needed for a full evolution towards a circular economy.

One of the most important challenges for developing innovative solutions in the process industry is the complexity, instability and unpredictability of its processes and their impact on the value chains. These solutions usually have to run in harsh conditions, cope with changes in the values of process parameters, and work without consistent monitoring or measurement of parameters that are important for analysing process behaviour but difficult to measure in real time. Sometimes such parameters are only available through quality control laboratory analyses, which are responsible for the traceability of the origin and quality of feedstocks, materials and products.

For AI-based applications these constraints are even more critical, since AI usually requires a considerable amount of high-quality data to ensure the performance of the learning process (in terms of precision and efficiency). Moreover, obtaining high-quality data usually requires the intensive involvement of human experts to curate (or even create) the data, a time-consuming process. In addition, a supervised learning process requires the training examples to be labelled or classified by domain experts, which can make an AI solution not cost-effective.

Minimizing (as much as possible) human involvement in the AI creation loop implies some fundamental changes in the organization of the AI process and life-cycle, especially with a view to achieving a more autonomous AI, which leads to the concept of self-X AI4. To achieve such autonomous behaviour, any kind of application usually needs to exhibit advanced (self-X) abilities like the ones proposed for autonomic computing (AC)5:

Self-X Autonomic Computing abilities

  • Self-Configuration (for easier integration of new systems for change adaptation)
  • Self-Optimization (automatic resource control for optimal functioning)
  • Self-Healing (detection, diagnosis and repair for error correction)
  • Self-Protection (identification of and protection from attacks in a proactive manner)

The Autonomic Computing paradigm can support many AI tasks with appropriate management, as already reported in the scientific community6 7. In this scheme, AI acts as the intelligent processing system while the autonomic manager continuously executes a loop of monitoring, analysing, planning and executing based on knowledge (MAPE-K) of the AI system under control, in order to develop a self-improving AI application.

Indeed, such new (self-X) AI applications will be, to some extent, self-managed, improving their own performance incrementally5. This will be realized by an adaptation loop that enables “learning by doing”, using the MAPE-K model and the self-X abilities proposed by autonomic computing. The improvement process should be based on the continuous self-Optimization ability (e.g. hyper-parameter tuning in Machine Learning). Moreover, if problems arise in the functioning of an AI component, the autonomic manager should activate the self-Configuration (e.g. choice of AI method), self-Healing (e.g. detecting model drift) and self-Protection abilities (e.g. generating artificial data to improve trained models) as needed, based on knowledge of the AI system.
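As an illustration only, the following Python sketch shows how a MAPE-K adaptation loop around a deployed model might look. The Knowledge structure, the drift threshold and the retrain_fn callback are hypothetical names invented for this example, not part of any project implementation.

```python
# Minimal MAPE-K sketch (illustrative): an autonomic manager that monitors a
# deployed model, analyses its recent error, plans a corrective action when
# drift is suspected, and executes it. Thresholds and names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Knowledge:
    error_history: list = field(default_factory=list)
    drift_threshold: float = 0.15   # assumed acceptable mean error

def monitor(y_true, y_pred, knowledge):
    """Monitor: record the latest prediction error."""
    knowledge.error_history.append(abs(y_true - y_pred))

def analyse(knowledge, window=10):
    """Analyse: decide whether recent errors suggest model drift."""
    recent = knowledge.error_history[-window:]
    return bool(recent) and sum(recent) / len(recent) > knowledge.drift_threshold

def plan(drift_detected):
    """Plan: choose a corrective action (self-Healing / self-Optimization)."""
    return "retrain" if drift_detected else "keep"

def execute(action, retrain_fn):
    """Execute: apply the planned action to the managed AI component."""
    if action == "retrain":
        retrain_fn()

# One pass of the adaptation loop for a managed model:
knowledge = Knowledge()
monitor(y_true=13.0, y_pred=12.4, knowledge=knowledge)
if analyse(knowledge):
    execute(plan(True), retrain_fn=lambda: print("retraining model..."))
```

In a real deployment the loop would run continuously, and the "knowledge" would also include model versions, data quality indicators and the outcome of previous adaptations.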

In just a few weeks, CARTIF will start a project, together with AI experts and leading companies from various process industry sectors across Europe, to tackle these challenges and close the gap between AI and automation. The project proposes a novel approach for the continuous update of AI applications with minimal human expert intervention, based on an AI data pipeline that exposes autonomic computing (self-X) abilities, the so-called self-X AI. The main idea is to enable the continuous update of AI applications by integrating industrial data from the physical world with reduced human intervention.

We’ll let you know in future posts about our progress with this new generation of self-improving AI applications for the industry.


1 Processes4Planet, SRIA 2050 advanced working version

2 EFFRA, The manufacturing partnership in Horizon Europe Strategic Research and Innovation Agenda.

3 https://www.spire2030.eu/news/new/artificial-intelligence-eu-process-industry-view-spire-cppp

4 Alahakoon, D., et al. Self-Building Artificial Intelligence and Machine Learning to Empower Big Data Analytics in Smart Cities. Inf Syst Front (2020). https://link.springer.com/article/10.1007/s10796-020-10056-x

5 Sundeep Teki, Aug 2021, https://neptune.ai/blog/improving-machine-learning-deep-learning-models

6 Curry, E; Grace, P (2008), “Flexible Self-Management Using the Model–View–Controller Pattern”, doi:10.1109/MS.2008.60

7 Stefan Poslad, Ubiquitous Computing: Smart Devices, Environments and Interactions, ISBN: 978-0-470-03560-3

Consciousness is not a computation, it is quantum

Much of the new hype around Artificial Intelligence (AI) is directly related to its potential to imitate or surpass the capabilities of the human brain (in terms of the volume of data handled and processing speed) using computers. In 2009 the neuroscientist Henry Markram announced a project that aimed to simulate the human brain on a supercomputer, with objectives such as “understanding perception or reality and maybe even understanding physical reality as well”.

The so-called “technological singularity” establishes how AI and robotics will surpass us humans. There are different predictions about when this “apocalypse” will occur: Elon Musk placed the singularity in 2025, the Russian millionaire Dmitri Itskov in 2045, to cite some examples. The continuous advance in microprocessor capabilities also feeds, wrongly, the AI hype. If someone compares only the number of neurons (around 86,000 million) with the number of transistors in Apple’s latest M1 chip (16,000 million), they may be tempted to conclude that the “computing capacity” of the human being is easily surpassed. I know, comparisons are odious, and in this case, very daring.

Until very recently I was among those expectantly awaiting such predictions, although with a reasonable degree of scepticism. All this changed for me during the harshest part of the 2020 lockdown. I was wandering around YouTube in search of interesting videos related to AI and came across a very curious one that gives this post its title and caught my curiosity: consciousness1 is not a computation. In this video, a more than lucid Sir Roger Penrose, physicist, mathematician and philosopher, is interviewed by the vlogger Lex Fridman, an expert in AI and autonomous driving.

I have to say that, even though the scientific level of what is discussed in the video is very high, the lucidity, detail and kindness shown by Penrose caught me and kept me attentive throughout the whole interview. There is one part in particular that glued me to the chair and made me rewind several times to try to grasp as many details as possible. The interview starts directly with this devastating thesis: “I’m trying to say that whatever consciousness is, it’s not a computation… it’s not a physical process which can be described by computation”.

During the interview, Penrose explains how his curiosity about neurophysiology led him to explore the basic principles of physics, cosmology, mathematics and philosophy in his 1989 book “The Emperor’s New Mind”, in which he proposed that human thought could never be emulated by a machine, against the “mainstream” thesis of the time that computers using artificial intelligence would soon be able to do everything a human can do.

What leads him to assert so bluntly that it is impossible to emulate human consciousness with a computer? Isn’t it assumed that, by joining together enough of the chips in our computers, one could surpass the number of neurons in our brain and its computing capacity (if you allow me this crude comparison)? Just as life is not merely a set of cells grouped into organs, “emulating” the capacities of the brain is not a question of grouping a large number of transistors and their electrical impulses. We all remember the explanations of how neurons transmit information through electrical impulses. In his analysis of brain physiology, Penrose, even at the end of his book, could not fully explain how nervous signals could be transmitted by electrical impulses consistently across the brain. Something did not fit or was missing in his theory. It seems that one reader of his book, the anaesthesiologist Stuart Hameroff, was the only one who figured it out. “I think you have missed something, don’t you know what microtubules are?” he said to Penrose. “They are what you need to make your theory work.” Microtubules could be the answer to Penrose’s search for a non-computable source of human consciousness from a physiological point of view.

But what on earth are microtubules? May molecular biologists forgive me, but they are, it seems, tubular molecular structures that can be found in different cells of our body, from red blood cells to neurons. These structures, which “inhabit” the interconnections of our grey cells, have the property of preserving their state in a very effective way (a quantum-type state, but we will leave that for another post) and somehow allow us to return to being the same as we were after a loss of consciousness, for example after anaesthesia. We could say that microtubules are the basic (quantum) storage unit of our brain. Some scientists call them “the brain of the neuron”.

Another reason for aspiring to emulate the brain has been the possibility of replicating the number of connections that exist between our neurons. It is actually a pretty big number. It is estimated that each neuron has an average of 1,000 connections; with 86,000 million neurons, this gives us about 86 trillion connections or so. Even though these numbers are dizzying, for some experts they seem achievable with the current computing capacity, in operations per second (FLOPS), of today’s processors. Going back to Apple’s M1, this processor is declared capable of 2.6 TFLOPS, that is, 2.6 trillion operations per second (10 to the 12th). Again, a number apparently “close” to our number of connections if we put many chips to work at the same time. But it seems that consciousness is something more than connections, right?

If we focus only on the quantitative question and return to the microtubules that inhabit our neurons, how many of them do we have? Neurobiology says there are more than 1,000 microtubules for each of our 86,000 million neurons, that is, 86,000,000,000,000 microtubules (86 trillion, similar to the number of neural connections) that “store quantum information” and in which, some scientists affirm, our consciousness lives. We could almost say that our brain is a quantum computer, don’t you think? Wow, sorry to fall back into a computational analogy. Let’s go back to technology to conclude this post. IBM promises a quantum computer of 1,000 qubits for 2023, far below the 86 trillion microtubules in our heads. In my humble opinion, and comparing only the quantitative aspects of present and future computing capacities, the so-called “technological singularity”, as a promise of surpassing our human capacities with today’s computer technology and artificial intelligence alone, is still very far away, or seems almost unattainable. I don’t know about you, but I still see the technological singularity quite a long way off, don’t you think?
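For the curious, the back-of-the-envelope comparison above can be reproduced with a few lines of Python, using only the rough figures quoted in this post (no new data, just the post’s own estimates):

```python
# Back-of-the-envelope comparison using only the figures quoted in the post.
neurons = 86e9                   # ~86,000 million neurons
connections = neurons * 1_000    # ~1,000 connections per neuron -> ~86 trillion
microtubules = neurons * 1_000   # >1,000 microtubules per neuron -> ~86 trillion

m1_ops_per_s = 2.6e12            # Apple M1: ~2.6 TFLOPS
ibm_qubits = 1_000               # quantum computer announced by IBM for 2023

print(f"Neural connections: {connections:.1e}")
print(f"Microtubules:       {microtubules:.1e}")
print(f"M1 operations/s:    {m1_ops_per_s:.1e}")
print(f"Microtubules per announced qubit: {microtubules / ibm_qubits:.1e}")
```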


1 Human beings’ ability to recognize and relate to the surrounding reality

Industry 5.0, seriously?

It seems unbelievable, but 5 years have passed since CARTIF inaugurated this blog with the post on Industry 4.0, in which I analysed some of the keys to the so-called “fourth industrial revolution” and how it could affect the industry in our country. It has always seemed risky to me to try to define this revolution from within itself. I suppose that time and historical perspective will make it clearer whether it really has been a revolution or simply a technological mantra. Fasten your seat belts, because if we have not yet assimilated that revolution, we are now “threatened” with the next one: Industry 5.0, they call it. Original, isn’t it?

If the fourth promised to interconnect the productive means of the entire value chain to make the transition to the intelligent industry or Smart Industry (everything has to be Smart, just as many years ago any self-respecting appliance had to carry “fuzzy logic” on board), the fifth industrial revolution tries to humanize the concept beyond merely producing goods and services for economic profit. The challenge of this revolution is to include social and environmental considerations in its purpose. The keywords of this revolution, as defined by the European Commission, are human-centric approach, sustainability and resilience.

By developing innovative technologies with a human-centric approach, Industry 5.0 can support and empower workers rather than replace them. Likewise, other approaches complement this vision from the consumer’s point of view, giving consumers access to products that are as personalized as possible or adapted to their circumstances. Thus, concepts such as personalized food or tailor-made clothing could be applied to virtually any consumer product.

Sustainability in the development of industry requires reconciling the objectives of economic and environmental progress. To achieve these common environmental objectives, it is necessary to incorporate new technologies and integrate existing ones, rethinking manufacturing processes by introducing environmental impacts into their design and operation. Industry must be a leading example in the green transition.

Industry resilience means developing a greater degree of robustness in production, preparing it against disruptions and ensuring that it can respond in times of crisis such as the COVID-19 pandemic. The current approach to globalized production has shown great fragility during the pandemic that has devastated us. Supply chains must also be sufficiently resilient, with adaptable and flexible production capacity, especially for those products that satisfy basic human needs, such as healthcare or security.

Just as the fourth needed digital enablers, this new revolution needs technological enablers to make it happen. From a practical point of view, we can say that the enablers we reviewed a while ago are fully up to date for Industry 5.0. We could add some additional ones, such as quantum computing or blockchain, which were only incipient 4 or 5 years ago. If the enablers are similar, why are we talking about a new revolution? It is a matter of priorities. Where the fourth spoke about the hyper-connection of processes to the digital world through cyber-physical systems or the IoT, the fifth seeks cooperation between humans and digital technology, whether in the form of collaborative industrial robots, social robots or artificial intelligence systems that complement or assist in any task related to production, from installing a door in a car to deciding how to organize the next work shift to meet the productivity goal of the manufacturing plant.

IoT technology to improve the efficiency of industrial companies

With the promise of 75 billion devices connected to the Internet around the world in 2025, the ‘Internet of Things’ (IoT) opens the door to a future of opportunities for companies to optimize their processes, whether in the form of manufacturing their products, supervising their quality or monitoring the critical machines in the factories: ovens, manufacturing lines or refrigerated warehouses.

In our daily experience as consumers, we can find a multitude of technological offers in IoT devices that we integrate into our lives in a fast and, sometimes, impulsive manner, either because of fashions or real benefits. However, the incorporation of these technologies in companies is not done in such an impulsive way, since it involves a careful study of feasibility and profitability, often complex to demonstrate, as usually happens with new technologies.

In addition, IoT possesses significant flexibility to integrate itself into the IT infrastructures of factories. The ‘i’ in IoT stands for “internet”, which seems to be automatically associated with a direct connection of the factory’s “things” to the Internet, and this generates panic about possible cybersecurity threats in almost any company. To fight against these barriers, information and training are key.

Within this framework, the IOTEC Spain-Portugal cross-border cooperation project is being developed. This initiative aims to create a collaborative network of different actors (researchers, public bodies, ICT solution providers and industrial companies) from both countries to facilitate the integration of IoT in companies. Participants in IOTEC have analyzed different industrial and ICT companies to look for gaps and strengths and to be able to relate the supply and demand of IoT. From CARTIF, we coordinate the activities around the industrial companies in order to learn about their IoT needs through a detailed analysis of their organizational and productive processes, including management, product design, manufacturing processes and logistics.

This analysis included a series of technological audits of different agroindustrial companies, assessing the potential application of IoT in different parts of their production processes. Forty different organizational parameters were evaluated according to the methodology defined within the IOTEC project. For example, in the section on manufacturing processes, four particularly relevant aspects were analysed in detail:

  • The type of process or productive transformation, which is fundamentally defined by aspects such as the raw materials used or the manufacturing steps.
  • The traceability requirements of raw materials, intermediate products and final products. This traceability has special relevance in agrifood companies.
  • The control of the production process, which is triggered by different mechanisms depending on the company: production orders, on-demand production, availability of raw materials (e.g. the harvest).
  • The need to capture data in the plant as the first phase of the complete digitalization of a production process (see the sketch after this list).
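As a purely hypothetical illustration of this last point, the sketch below captures a plant variable with a timestamp and appends it to a local log, the kind of minimal first step towards digitalizing a process. The sensor-reading function, the sampling period and the file name are invented for the example.

```python
# Minimal plant data-capture sketch (hypothetical): periodically read a
# sensor value and store it with a timestamp. In a real installation the
# reading would come from a field bus or IoT gateway instead of a stub.
import json
import random
import time
from datetime import datetime, timezone

def read_oven_temperature():
    """Placeholder for a real field-bus or IoT gateway reading."""
    return 180.0 + random.uniform(-2.0, 2.0)

def capture(samples=5, period_s=1.0, path="oven_temperature.jsonl"):
    """Append timestamped readings to a JSON-lines log file."""
    with open(path, "a", encoding="utf-8") as log:
        for _ in range(samples):
            record = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "sensor": "oven_temperature_C",
                "value": round(read_oven_temperature(), 2),
            }
            log.write(json.dumps(record) + "\n")  # one JSON record per line
            time.sleep(period_s)

if __name__ == "__main__":
    capture()
```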

Once all the parameters had been analysed, an exhaustive classification was carried out of the different IoT technologies that could be applied in industry and have a direct impact on the improvement of efficiency. These technologies can be seen below:

All the identified technologies were prioritized by those attending the “Forum of business opportunities through IoT and Blockchain” held on November 14, 2018 in Valladolid. The attendees had the opportunity to reflect and vote on this set of technologies, assessing their need and the importance of their dissemination by the IOTEC project. Now that these priorities have been established, it is necessary to make them known so that IoT solution providers can adapt their offerings to real needs.

Likewise, work is under way on dissemination and training activities to bring IoT technologies, and concrete examples of their application, closer to the industrial companies in the regions of Castilla y León and Central Portugal that participate in the IOTEC network. Any company supplying or demanding IoT technologies can take part in the project forum and benefit directly from collaboration and training opportunities around this exciting set of technological solutions that is the IoT.