For many science fiction fans, quantum computers are those gadgets that can do everything: they appear as the on-board computers of spacecraft or as sleek, miniaturised laptops. For many people who are not fans of the genre, quantum computers don't even ring a bell. In either case, what both groups have in common is that most of them don't think these computers are real.
The reality is that quantum computers exist and are already in use. It is true that they are far from the all-powerful machines science fiction portrays, and even further from being tiny, portable devices we can use in our day-to-day lives.
Today's quantum computers are adult-sized refrigerators hanging from the laboratory ceiling, with an eye-catching appearance: horizontal platforms criss-crossed by golden cables. The reason for this curious design is their instability. Because of their quantum nature, these computers are affected by all kinds of disturbances, from small seismic movements to electromagnetic waves such as radio or telephone signals. Moreover, they only work properly at close to 0 kelvin, with barely enough energy for a single electron to move per quantum chip.
The characteristics of these computers, together with the huge investment their construction requires, make it very unlikely that we will each own a Personal Quantum Computer any time soon, the way we own PCs. But far from being discouraging, even with these disadvantages quantum computers are already in use thanks to remote access platforms. There are software development kits1 with repositories of algorithms (among them, machine learning algorithms and solvers for optimization problems), development tools for quantum circuits and algorithms, quantum simulators and access to quantum computers of different characteristics. In addition, documentation and tutorials for these tools are increasingly abundant.
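To give an idea of what working with one of these kits looks like, here is a minimal sketch using Qiskit, one SDK of this kind, building a tiny entangled circuit and running it on a local quantum simulator. It is only an illustration and the exact API may differ between versions.

```python
# Minimal sketch (assumes Qiskit and the Aer simulator are installed):
# build a 2-qubit "Bell state" circuit and run it on a local simulator.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

circuit = QuantumCircuit(2)
circuit.h(0)           # put qubit 0 into superposition
circuit.cx(0, 1)       # entangle qubit 0 with qubit 1
circuit.measure_all()  # measure both qubits

simulator = AerSimulator()
counts = simulator.run(circuit, shots=1024).result().get_counts()
print(counts)  # expected: roughly half '00' and half '11'
```

The same code, pointed at a real backend instead of the simulator, is essentially how these remote platforms are used today.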
The growing use of quantum computing is driven by increasing public and private funding in sectors such as telecommunications, mobility, banking, cryptography and the life sciences2. The European Commission expects to invest a billion euros in research projects in this field over the 2018-2028 period. By mid-2021, more than 20 projects had already been supported with 132 million euros of funding3.
In Spain, the Council of Ministers approved a grant of 22 million euros in 2021 to boost the field of quantum computing through the Quantum Spain project, with an estimated investment of 60 million euros over 3 years. In addition, the first quantum computer in our country has arrived in Barcelona.
Although the order should perhaps be the other way round, after all these investment figures we may wonder why there is so much interest in quantum computing. The answer is that these computers make it possible to solve problems that are impossible for traditional computers. Moreover, because they work in a fundamentally different way, they can perform certain operations much faster and more efficiently.
Did you know that all current cryptography relies on the inability of today's computers to solve certain mathematical problems? A fully developed quantum computer would not have that limitation. It could, for example, decode your bank account credentials and access your savings, or break into the Pentagon and decrypt all kinds of secret documents. But don't worry: for better or worse, quantum computers are still far from that level of development.
Another example of their usefulness is controlling the switches of an electrical network, when you want to determine the configuration that minimises losses while guaranteeing supply to all loads in the network.
In general, quantum computers are useful for control and logistics problems with a large number of binary variables.
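To give a feel for this kind of problem, the sketch below is a hypothetical toy example in plain Python (not a quantum program): it enumerates every on/off configuration of a few network switches and keeps the cheapest one that still supplies all loads. With n binary variables there are 2^n configurations, and this exponential explosion is exactly what motivates quantum approaches.

```python
from itertools import product

# Hypothetical toy data: loss incurred by keeping each switch closed,
# and the set of loads each switch supplies when closed.
losses = [3.0, 2.5, 4.0, 1.5]
feeds = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]
loads = {0, 1, 2, 3}

best_config, best_cost = None, float("inf")
for config in product([0, 1], repeat=len(losses)):  # 2**n on/off configurations
    supplied = set()
    for switch, closed in enumerate(config):
        if closed:
            supplied |= feeds[switch]
    if supplied == loads:  # constraint: every load must be supplied
        cost = sum(loss for loss, closed in zip(losses, config) if closed)
        if cost < best_cost:
            best_config, best_cost = config, cost

print(best_config, best_cost)  # here (0, 1, 0, 1): close switches 1 and 3, cost 4.0
```

Four switches are trivial to enumerate; a few hundred are not, which is where quantum annealers and quantum optimization algorithms are expected to help.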
It is clear that, far from being science fiction, quantum computing is a reality that is becoming increasingly visible in academic and professional circles. Far from being the on-board computer of a spacecraft or the processing core of a laptop, its presence has grown tremendously in recent years and is expected to grow even more over the next 10 years. It is therefore important for researchers and scientists to become familiar with these new technologies as soon as possible.
Each landscape evokes specific, different, unique feelings. When contemplating a meadow dotted with trees, we feel something totally different from what we feel looking at a desert. The same happens with cultural landscapes1. A Romanesque church does not evoke the same sensations as those perceived when contemplating cave paintings.
Numerous studies conclude that there is a significant correlation between our personality and our landscape preferences. Other research argues that the human-landscape relationship has an "innate" basis, dating back to the survival needs of primitive humans, whose environment demanded perceptual abilities and predispositions that still function today at a psychological level. This explains why we still prefer open, gently flat landscapes (to watch for predators), together with vegetation and good access to water (to cover vital needs).
It could therefore be argued that the affective system brought into play in landscape appraisal is a consequence of wider individual strategies involving personality, innate factors and the individual's attitude towards the world (shaped by their experiences and the society they live in).
In other words, landscape assessment depends on factors that are entirely subjective and, therefore, difficult to quantify. So what should I do if I want to measure "what we like" about a certain type of cultural landscape?
This is where so-called "Affective Computing" comes in: the study and development of systems and devices able to recognise, interpret and process human emotions.
CARTIF, within the SRURAL project, is applying this set of techniques to obtain the "affection value" of any cultural landscape ("measuring how much you like the landscape"). To this end, a cognitive system is being developed that uses verbal language and facial expressions as input on the one hand, and certain physiological signals (heart rate, sweating and body temperature, recorded while you are immersed in the landscape through virtual reality glasses) on the other.
All these inputs are fed into a neural network previously trained with Deep Learning2 techniques to obtain the landscape's "affection value" as its output.
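As a rough idea of what such a model can look like, the following sketch is a hypothetical regression network in PyTorch; the actual SRURAL architecture, features and training data are not described here. It maps a small feature vector of physiological and facial-expression descriptors to a single affection score.

```python
import torch
import torch.nn as nn

# Hypothetical input: 8 features per sample (e.g. heart rate, skin conductance,
# body temperature and a few facial-expression descriptors), scaled to [0, 1].
model = nn.Sequential(
    nn.Linear(8, 32),
    nn.ReLU(),
    nn.Linear(32, 16),
    nn.ReLU(),
    nn.Linear(16, 1),  # single output: the estimated "affection value"
)

# Toy training loop on random data, just to show the mechanics.
x = torch.rand(100, 8)
y = torch.rand(100, 1)  # placeholder labels (real ones would come from surveys)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(model(torch.rand(1, 8)))  # predicted affection value for one new sample
```

In practice the labels would come from people rating landscapes while their signals are recorded, so that the network learns the mapping from signals to perceived affection.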
The "affection value" is very useful for decision-making by territory managers, for instance to direct tourism promotion campaigns towards areas with high affection values but few visitors, or to profile and segment tourists according to the type of landscape they are most likely to visit, enabling targeted and more effective promotional campaigns.
It can also be used to know when corrective measures are needed, or at least a study of causes, in cases where an area of tourist interest with a large number of visitors has a relatively low affection value.
Since decision-makers need concise but highly relevant information, as graphical as possible, all useful data is displayed in the most user-friendly way by means of geolocated interfaces. The system under development therefore incorporates specific modules that present the information already processed and ready for drawing conclusions, quickly leading to objective, data-driven decisions supported by Data Mining and Big Data techniques.
1 A cultural landscape is one that combines natural and cultural heritage. It has been modified by humans to adapt it to people's needs according to their beliefs, economic activity and the society they have shaped. The most obvious examples of these modifications are traditional crops, buildings and infrastructure.
In 2020, the European Commission launched a research proposal (or "topic") with a budget of 10 million euros aimed at the development of innovative and sustainable mini-hydropower solutions in Central Asia.
What makes this remote part of the world special enough for the European Commission to fund a project there? Central Asia is a geographic pivot of Eurasia and encompasses the five ex-Soviet republics of Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan and Uzbekistan. It is one of the oldest inhabited areas and as such has witnessed rich cultures and traditions such as the ancient Silk Road. Landlocked, it is an area of great energy and mineral resources. Specifically, according to a 2019 report by the United Nations Industrial Development Organization, Central Asia has the second largest mini-hydropower potential in the world, with 34.4 GW, behind only Eastern Asia (China, Japan, the two Koreas and Mongolia) with 75.4 GW. However, to date, less than 1% of this potential has been exploited, making Central Asia the region with the lowest percentage of small hydropower (SHP) development in the world. It therefore seems clear that behind this "topic" lies the Commission's interest in opening new markets for the European mini-hydropower industry.
What are the main barriers preventing the development of the sector in Central Asia? They span a wide range of political, economic, social, technological, legal and environmental issues. There are common problems such as the lack of information, the lack of financing from the private sector, or the absence of legal incentives. Moreover, some Central Asian countries have to deal with extreme weather conditions, for example in high-altitude regions where streams are likely to freeze in winter. In addition, it is crucial to consider the cross-border Water/Food/Energy/Climate nexus with a view to the future, in order to avoid ecological disasters such as that of the Aral Sea, which continues to dry up due to unsustainable cotton exploitation.
The Hydro4U project was the winner of this call from the European Commission and began its journey in June 2021 with an expected duration of 5 years. The consortium is led by the Technical University of Munich and completed by European turbine manufacturers such as Global Hydro Energy, entities from Central Asia such as the International Water Management Institute, and technology centres such as CARTIF, which leads the replication activities. Within the framework of the project, two new mini-hydropower plants are being developed with designs adapted to the conditions of the region, which will radically reduce planning, construction and maintenance costs without compromising efficiency. The plants will be installed at two selected sites in Uzbekistan and Kyrgyzstan.
As for CARTIF, a key point of our work is the development of a replication guideline tool aimed at future investors and public authorities, to support decision-making on new mini-hydropower projects in Central Asia. The tool will be based on a computational model integrating Geographic Information System (GIS) mapping and statistical data. It will operate at river basin level and will be applied to the two main rivers of the region, the Syr-Darya and the Amu-Darya. The tool will consist of several interactive modules aiming to (1) visualise the total sustainable hydropower potential and installed capacity, (2) simulate hydropower generation scenarios considering Water-Food-Energy-Climate Nexus constraints, sustainability of resources and socio-economic impacts, and (3) provide technology recommendations as well as lessons learnt from the implementation of new hydropower projects.
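As a simple illustration of the kind of calculation behind a hydropower potential map, the sketch below applies the standard textbook formula P = ρ·g·Q·H·η to estimate a site's output from its flow and head. It is only a didactic example, not the project's actual model, and the site figures are hypothetical.

```python
def hydropower_kw(flow_m3s: float, head_m: float, efficiency: float = 0.85) -> float:
    """Estimate plant output in kW with the classical formula P = rho * g * Q * H * eta."""
    rho = 1000.0  # water density, kg/m3
    g = 9.81      # gravitational acceleration, m/s2
    return rho * g * flow_m3s * head_m * efficiency / 1000.0

# Hypothetical small site: 2 m3/s of flow over a 15 m head.
print(f"{hydropower_kw(2.0, 15.0):.0f} kW")  # ~250 kW, i.e. mini-hydropower scale
```

A GIS-based tool essentially repeats this kind of estimate over many river reaches, replacing the fixed flow and head with basin data and adding the Nexus and sustainability constraints described above.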
The replication guideline tool will be released by the end of 2025. At CARTIF we are currently working on defining the sustainable hydropower potential and on the Water-Food-Energy-Climate Nexus model at basin level, which will allow us to simulate future generation scenarios compatible with the sustainable use of natural resources.
Stay informed of the progress of the project in the News & Events section of the Hydro4U website, as well as on its social networks: Twitter and LinkedIn.
Researchers are increasingly confronted with the need to "digitalise" something that has not been digitalised before: temperatures, pressures, energy consumption, etc. In these cases we look for a measurement system or a sensor in a commercial catalogue: a temperature probe, a pressure switch, a clamp ammeter for measuring an electric current, and so on.
Sometimes, however, we need to measure "something" for which no commercial sensor can be found. This may be because the measurement need is uncommon and there is not enough of a market for that type of sensor, or simply because no commercial technical solution exists for a variety of reasons. For example, it may be necessary to measure characteristics such as the moisture of streams of solid material, or characteristics that can only be measured indirectly in a quality control laboratory and require a high level of experimentation.
Sometimes characteristics also have to be measured in very harsh environments, with high temperatures such as those of a melting furnace, or with so much dust that any conventional measurement system becomes saturated; and it may be necessary to evaluate a characteristic that is not evenly distributed (for example, the amount of fat in a piece of meat, or the presence of impurities). Another factor to take into account is that it is not always possible to install a sensor without interfering with the manufacturing process of the material we want to measure; sometimes the only option is to take a sample and analyse it off-line, obtaining a value some time later, but never in real time.
In these situations it is necessary to resort to custom-made solutions that we call smart sensors or cognitive sensors. Beyond the exotic or cool-sounding name, these are solutions that combine a set of "conventional" sensors with software or algorithms, for example artificial intelligence, that process the measurements returned by those commercial sensors to provide as accurate an estimate as possible of the quality we want to measure.
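Conceptually, the "cognitive" part is often a regression model fitted on laboratory data. The sketch below is a hypothetical example with synthetic data using scikit-learn (not the actual CAPRI sensor): it estimates a hard-to-measure quality from several conventional sensor readings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "conventional" sensor readings: e.g. temperature, vibration level, pressure.
X = rng.uniform(size=(500, 3))
# Synthetic target quality that depends non-linearly on the readings, plus noise.
y = 2.0 * X[:, 0] + np.sin(5 * X[:, 1]) + 0.5 * X[:, 2] ** 2 + 0.05 * rng.normal(size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
print("Estimated quality for one new reading:", model.predict(X_test[:1]))
```

In a real smart sensor the training targets come from off-line laboratory analyses, so that once the model is fitted the estimate becomes available on-line and in real time.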
We are currently developing this type of smart sensor for different process industries, such as asphalt manufacturing, steel billets and bars, and the pharmaceutical industry (e.g. pills), within the framework of the European project CAPRI.
For example, in asphalt manufacturing, sands of different sizes need to be dried before they are mixed with bitumen. During the continuous drying of these sands, the finest fraction, called filler, is "released" as dust from the larger aggregates, and this dust has to be extracted industrially using what is known as a bag filter. Today, the drying and extraction of the filler is done in a way that ensures all of it is removed. The disadvantage is that additional filler then has to be added when mixing the dried sands with the bitumen, because the filler improves the cohesion of the mix by filling the gaps between the sand grains. All this drying and complete extraction of the filler involves an energy cost; to minimise it, a measurement of the filler present in the sand mixture would be needed. At present, this measurement is only obtained occasionally, through a granulometric analysis carried out in a laboratory on a sample of material taken before drying.
Within the CAPRI project we are working on the complex task of measuring the flow of filler extracted during the drying process. There is no sensor on the market guaranteed to measure such a high concentration of dust (200,000 mg/m3) in suspension at high temperatures (150-200ºC).
Within the framework of the project, a solution to this problem has been developed; the laboratory results can be consulted in the research article recently published in the scientific journal Sensors ("Vibration-Based Smart Sensor for High-Flow Dust Measurement").
Developing this type of sensor requires various laboratory tests under controlled conditions to verify the feasibility of the solution, followed by calibration tests, also under laboratory conditions, to ensure that the true flow of filler extracted during sand drying can be estimated. The CAPRI project has successfully completed the testing of this sensor and of others related to the manufacture of steel bars and pharmaceutical pills.
In line with its commitment to the open science initiative promoted by the European Commission, the project has published on its Zenodo channel several results of these laboratory tests, which corroborate the preliminary success of these sensors, pending their validation and testing in the production areas of the project partners. In the near future we will be able to share the results of the industrial operation of this and other sensors developed in the project.
Co-author
Cristina Vega Martínez. Industrial Engineer. Coordinator at CAPRI H2020 Project
Environmental concern and awareness, linked to expected population growth and with it the increasing demand for food and the need to ensure the sustainability of resources through more efficient processes, have led to a change in consumption trends.
Consumers, increasingly concerned about health and the need to look for more natural foods, are leaning towards diets with less meat consumption, and even veggie diets (vegan, flexitarian and vegetarian), which ultimately translates into an increase in the search for alternative plant-based proteins and the generation of new plant-based foods.
Spain has 5.1 million veggie consumers, rising from 8% of the population in 2017 to 13% in 2021, a 34% growth of the veggie population in just 4 years. Moreover, 56% of consumers say they have bought at least one veggie brand simply out of curiosity, given the growing number of these products.
It is becoming increasingly common to find alternative products made from plant-based proteins on the shelves. These range from alternatives to milk (the well-known plant-based drinks, which top the list of the most popular products), followed by meat analogues, to alternatives to eggs, cheese, fish and their respective by-products.
To better understand how these products are obtained, let's take a look at the most commonly used raw materials today, which include insects, algae, microproteins, vegetable proteins (from legumes and cereals) and cultured meat. These can undergo different processes such as fermentation, extrusion or 3D printing, and are intended to replace animal protein.
The most widely used and accepted raw materials are vegetable proteins from legumes and/or cereals. These vegetable proteins are used to produce the well-known alternatives to meat or meat-free meat products. All these terms refer to food products with sensory characteristics, taste, texture, appearance and nutritional value similar to those of traditional meat products.
Despite the increasing supply of meat analogues, there are still limitations to their widespread use, the main one being related to sensory properties. To ensure the success of these products, the use of plant-based proteins is not enough, as consumers are not willing to sacrifice the sensory experience. This is why the food industry is constantly working to improve the production of these products, developing and optimising technologies and processes in favour of high organoleptic and nutritional qualities. In this sense, extrusion technology for obtaining alternative protein structures to meat is one of the technological lines with the greatest potential.
Extrusion is a very versatile technology based on applying high temperatures for short times. Ingredients are continuously treated and forced through a die that shapes and texturises them, producing several simultaneous changes in their structure and chemical composition through the application of thermal and mechanical energy, and allowing a wide range of products to be obtained.
To learn a little more about this process and how it acts on vegetable proteins, it is necessary to distinguish the two routes that extrusion technology offers for obtaining meat analogues. On the one hand, high-moisture extrusion (HME) makes it possible to obtain non-expanded fibrous products that imitate the texture and mouthfeel of meat products, and will therefore serve as the protein base for producing a meat analogue. On the other hand, dry extrusion produces so-called textured vegetable protein (TVP), which is characterised by its expansion and requires hydration prior to use.
Since high-moisture extrusion creates a product with a meat-like structure, let’s see what actually happens to vegetable proteins during this process called texturisation:
It can be explained as a two-stage process. At first the protein is in its native state, with a complex structure and its functionality not yet accessible. When heat and shear forces are applied during cooking, the protein denatures, losing its native structure and exposing the binding sites for new bonds. This makes it easier for the protein to reorganise itself in the second, cooling stage by forming new bonds, giving rise to a product with a fibrous character.
The greatest challenge of these processes lies in innovating in the use of extrusion-texturisation technology combined with different blends of vegetable proteins to obtain improved textures.
This technology involves a double challenge: on the one hand, the choice of raw materials is a key parameter, as it is necessary to select an appropriate vegetable protein source capable of giving the final product the best characteristics while behaving well during processing; on the other hand, the process conditions must be optimised by adjusting each of the parameters to achieve the desired texture. Therefore, to achieve a better texture the following must be taken into account: the choice of raw materials, the protein source, the protein content (isolate, concentrate or flour) and the settings chosen for the process parameters.
In short, obtaining products similar to those of animal origin by incorporating alternative protein sources such as cereals or legumes, and even algae, insects or microproteins, is one of the challenges facing the food industry. Although extrusion technology allows new plant-based products to be obtained, it is necessary to continue developing this technology in order to achieve the “perfect” analogue that meets all the requirements in terms of texture, taste and nutritional properties.
At CARTIF we work to integrate and optimise the texturisation process with different ingredients and their mixtures, in order to obtain meat analogues with the best properties. An example of this is the Meating Plants project, where we research the use of legume proteins to improve the quality of meat analogues.
This phrase, which is now part of history and sounds familiar to most of us even if we belong to a different generation, was used by the astronauts on board the Apollo 13 spacecraft after an on-board oxygen tank exploded. This happened two days after the start of their space mission to land on the Moon, which had been launched on April 11, 1970. Millions of people around the world followed the news for days to find out what the fate of the three astronauts on board would be. Meanwhile, NASA worked against the clock to build a digital replica using computer-controlled simulators that reproduced the conditions occurring in space. This model, faithful to reality, allowed them to predict how the spacecraft would behave in order to find the most appropriate solution to bring the crew back. It can be considered the first approach to the concept of the Digital Twin.
There are many different definitions of the concept of the Digital Twin, one of the first being given by Michael Grieves, an expert in Product Lifecycle Management (PLM). Grieves' definition focused on the virtual comparison between what had been produced and what had been designed (produced vs. designed), with the aim of improving production processes1. The field of application of the Digital Twin is very broad, as are its possible definitions. In general terms, we can consider a Digital Twin to be a digital representation of a physical asset, process or system from the real physical world.
Digital Twins are based on fidelity to reality, to the physical world, which allows us to make predictions and optimisations about the future. The intention is that both ecosystems, the physical world and the Digital Twin (the representation in the virtual world), co-evolve, that is, they affect each other in a synchronised manner. This is possible because the two models are automatically connected in a bi-directional way. When there is only an automatic one-directional connection, going from the real model in the physical world to the digital model in the virtual world, we cannot properly speak of a Digital Twin; such a model is called a Digital Shadow. A digital model by itself cannot be considered a Digital Twin if there is no automatic connection between the physical and the virtual world. The use of Information and Communication Technologies (ICT) together with Artificial Intelligence (AI) techniques, including Machine Learning (ML), allows the Digital Twin to learn, predict and simulate future behaviour in order to improve the operation of its physical counterpart.
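To make the distinction concrete, here is a deliberately simplified sketch, a conceptual toy in Python rather than an actual digital-twin platform: the shadow only receives data from the physical asset, while the twin also sends decisions back, closing the bi-directional loop.

```python
class PhysicalAsset:
    """Stand-in for a real asset, e.g. a building's heating system."""
    def __init__(self):
        self.temperature = 18.0
        self.heating_on = False

    def step(self):
        self.temperature += 0.5 if self.heating_on else -0.3
        return self.temperature


class DigitalShadow:
    """One-way: only mirrors measurements coming from the physical world."""
    def __init__(self):
        self.history = []

    def update(self, measurement):
        self.history.append(measurement)


class DigitalTwin(DigitalShadow):
    """Two-way: also sends a decision back to the physical asset (co-evolution)."""
    def decide(self, asset, setpoint=21.0):
        asset.heating_on = self.history[-1] < setpoint  # feedback to the physical world


asset, twin = PhysicalAsset(), DigitalTwin()
for _ in range(10):
    twin.update(asset.step())  # physical -> digital (shadow behaviour)
    twin.decide(asset)         # digital -> physical (what makes it a twin)
print(round(asset.temperature, 1), twin.history[-3:])
```

A real twin would of course replace the trivial threshold rule with simulation and ML models, but the defining feature is the same: the return path from the virtual model to the physical asset.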
And what is all this Digital Twin business actually for?
Digital twins can be applied in numerous fields, for example in industrial manufacturing lines, to improve production processes or aspects such as energy and environmental sustainability, fields in which projects such as ECOFACT are currently working. Another use is in Smart Cities, where they could improve road management, waste collection, and so on. At the building level, they can be useful in the tertiary sector (buildings dedicated to services), for example an airport, where a twin could be used to predict and manage the building more effectively based on usage patterns associated with scheduled air traffic. They are also useful in commercial or industrial buildings, focusing in this case on the building itself rather than on the production line mentioned above. At the residential level, the Digital Building Twin (DBT) could also be of great use, as it would allow us to predict the thermal behaviour of the building, associated with usage patterns, in order to improve the thermal conditioning of the indoor environment and minimise energy consumption, among other options.
CARTIF has been working for some time on the creation of Digital Models of buildings based on BIM (Building Information Modelling) for different purposes, such as improving decision-making in deep building renovation projects. In this case, the use of BIM is intended to achieve a more appropriate renovation and to reduce the time and cost of such projects, through projects such as OptEEmAL or BIM-SPEED. BIM models act as a facilitator for integrating the static (physical world) and dynamic (logical and digital world, from IoT - Internet of Things - network data) systems of a building. In addition, BIM provides control over all phases of a building's life cycle: design, construction, commissioning of systems, the operation and maintenance phase, and even possible demolition.
Concept of linking the Physical and Digital world through BIM-based Digital Twins
The challenge ahead of us in the coming years, focused on achieving climate-neutral cities that are more sustainable, functional and inclusive, suggests that digital twins will be used more and more in these areas, thanks to the benefits they can bring.