Efficiency Wars (Episode VI) – The Return of Bohr

Low-cost alternative innovations: the barometer and how to think outside the box

I finished my previous post by commenting that an ILM approach, i.e. disaggregating the energy consumption of a factory by metering each element, can be an unbeatable financial challenge for factories whose consumption is highly distributed.

The commercial market offers several alternatives for industrial measurement systems, designed by the main equipment manufacturers such as SIEMENS, ABB or SCHNEIDER, capable of providing an exhaustive record (several measurements per second) of the energy consumption of the different elements in a production chain. However, the cost of the necessary hardware, of the required IT and communications installation, and of the software licenses makes such systems quite expensive. As a consequence, they remain a luxury available mainly to large multinationals that own several similar factories in different locations and therefore enjoy better purchasing power and easy, widespread internal replicability. In addition, their production processes are highly automated and computerized through the latest generation of MES (Manufacturing Execution System) software, so they already have the necessary IT and communications infrastructure; they only need to invest in the hardware and upgrade their software licenses.

For other small and medium-sized factories, these solutions can mean "using a sledgehammer to crack a nut", so the investment in monitoring will never pay for itself in savings. Nevertheless, these factories are increasingly interested in optimizing their energy costs, provided the required investment is reasonable and proportionate to their turnover.

Every science student will have heard the supposed anecdote of Niels Bohr and the barometer in one of its many versions. Although the anecdote is invented rather than real, its moral of trying to think differently about a problem is more relevant than ever; the difference is that we now call it "thinking outside the box". The question today is not how to measure the height of a building with the help of a barometer, but how to measure and monitor the energy consumption of a factory without spending its entire annual investment budget.

The answer, as in the problem of the barometer, is not unique, as it will depend on each particular factory. Fortunately, the IoT revolution is producing economies of scale in some of the necessary components. Continuing with the 'Star Wars' tribute, a low-cost energy consumption monitoring system can be compared to an X-wing starfighter formed by the following four wings (a minimal sketch of how they might fit together follows the list):

  • The lower cost of electronics, which is enabling new low-cost non-invasive sensors such as Hall-effect electric current sensors, ultrasonic flow sensors or infrared temperature sensors.
  • Open-source hardware and software platforms for capturing and processing signals with low-cost devices such as Arduino, Raspberry Pi and others.
  • The emergence of new wireless communication protocols oriented to M2M (Machine to Machine) communication, offering low bandwidth, low energy consumption and high resistance to interference, such as Zigbee, Bluetooth LE or Wi-Fi HaLow.
  • Software systems for storing and processing all the recorded data: database systems, tools for the automatic calculation of indicator reports, and displays showing the current values of the most important parameters, hosted either on physical servers on the factory intranet or on rented virtual servers in the cloud.
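To make the idea more concrete, here is a minimal sketch of how these four wings could be combined in a single monitoring node. It assumes a Raspberry Pi-class device, a Hall-effect current clamp read through an external ADC (represented by a hypothetical read_current_amps() helper), an assumed nominal line voltage and a local SQLite file as storage; these names and figures are illustrative, not a reference implementation.

```python
# Minimal sketch of a low-cost monitoring node. Assumptions: Raspberry
# Pi-class device, Hall-effect current clamp behind an ADC, local SQLite
# file as storage; read_current_amps() is a hypothetical placeholder.
import sqlite3
import time

DB_PATH = "energy_readings.db"   # assumed local database file
NOMINAL_VOLTAGE = 400.0          # assumed nominal line voltage (V)
SAMPLE_PERIOD_S = 10             # one sample every 10 s is enough for energy trends


def read_current_amps() -> float:
    """Placeholder for the Hall-effect sensor reading via the ADC.

    Replace with a real ADC read; a dummy value is returned here so the
    sketch runs end to end.
    """
    return 12.3


def main() -> None:
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS readings ("
        "ts REAL, machine TEXT, current_a REAL, power_kw_est REAL)"
    )
    while True:
        amps = read_current_amps()
        # Crude power estimate: ignores power factor and three-phase wiring
        power_kw_est = amps * NOMINAL_VOLTAGE / 1000.0
        conn.execute(
            "INSERT INTO readings VALUES (?, ?, ?, ?)",
            (time.time(), "press_line_1", amps, power_kw_est),
        )
        conn.commit()
        time.sleep(SAMPLE_PERIOD_S)


if __name__ == "__main__":
    main()
```

In a real deployment the readings would typically travel over one of the low-power wireless links mentioned above (Zigbee, Bluetooth LE, Wi-Fi HaLow) to a central database rather than a local file.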

These new technologies are not yet mature, and the industry can understandably be reluctant to use them. If there is something that scares a production or maintenance manager, it is an experimental system that has not been field-tested for years. However, it is worth remembering that we are not talking about modifying the control systems of processes and machines, but about deploying a parallel system throughout the factory that monitors and records the energy consumption of the main elements and production systems. We are talking about detecting possible energy inefficiencies, about correcting them and reaping the corresponding economic savings. And we are talking about doing so with a reasonable investment cost, one that an SME can afford.

Digital Transformation, to the moon and back

It is July 20th, 1969, 20:18:04 UTC, and after 102 hours, 45 minutes and 39.9 seconds of travel "the Eagle has landed" and Neil is about to descend the ladder and touch an unknown surface for the first time: "That's one small step for [a] man, one giant leap for mankind". That year, 1969, Neil Armstrong, Michael Collins and "Buzz" Aldrin changed the world by riding the biggest rocket ever built to the Moon.

Some people may have forgotten it, and others like me were not born at the time, but the space race had its own digital transformation, similar to the one now foreseen for industry and the general public. The Apollo program was the culmination of that first digital revolution in space exploration.

The landing achievement was, to a great extent, made possible by the electronics on board both the Apollo Command Module (CM) and the Lunar Module (LM): the AGC, or Apollo Guidance Computer. It was one of the first computers based on digital integrated circuits. With "just" 32 kg of weight and a mere 55 W of consumption, this technical wonder was able to coordinate and control many tasks of the space mission, from calculating the direction and navigation angles of the spacecraft to commanding the reaction control jets to orient it in the desired direction. Moreover, the computer included one of the first demonstrations of a "fly-by-wire" scheme, where the pilot does not command the engines directly but through control algorithms programmed into the computer. In fact, this computer was the basis for the later control systems of the Space Shuttle and of military and commercial fly-by-wire aircraft.

As usual with this kind of breakthrough, it did not happen overnight but through a series of earlier incremental innovations.

In the 1950s, the MIT Instrumentation Laboratory (IL) designed and developed the guidance system of the Polaris ballistic missiles. Initially built with analog computers, the team soon decided to go digital to achieve the accuracy required to compute missile trajectories and control.

Seven years before the first lunar landing, President Kennedy set the ambitious goal of "… going to the moon in this decade …". Even before that, following the launch of Sputnik in 1957, a Mars exploration study had started at MIT's IL. The design of a Mars probe set the basic configuration of the future Apollo guidance system: a set of gyroscopes to keep the probe oriented, a digital computer, and an optical telescope to orient the craft relative to the Moon and the stars.

The launch of Sputnik in 1957 fueled America's ambition to put the first human in space, but it also fed the public debate about the role of pilots in the space race, a discussion similar to current views on the role of the worker in the factory. Should the astronaut be mere payload or take full control of the spacecraft? Once aircraft pilots earned the right to be at the controls, several tests showed that it was nearly impossible for them to manage every aspect of a mission, given the fast reactions needed and the number of different control commands. Hence, pilots would need reliable automatic assistance, and that was one of the main functions of the AGC.

Reliability was therefore one of the main concerns of the mission. The Polaris program had taken four years to design guidance and control for a weapon that was in the air for only a couple of minutes. Kennedy's bet of taking a man to the Moon in less than seven years meant developing a guidance and control system for a spacecraft that had to work without failure on a trip lasting more than a week. The required reliability was more than two orders of magnitude higher: if a Polaris missile failed, another one would be launched; a failure in the spacecraft meant killing an astronaut.

Much of the reliability of the flight rested on the shoulders of the Apollo Guidance Computer, and at some point in the program there were too many planned tasks, such as complex guidance manoeuvres, to be physically hardwired into electronics. Achieving those tasks required software. Although software was barely taken into account at the beginning of the program, it ended up making the difference between achieving the goal and complete failure. The computer was the interface between the astronaut and the spacecraft, which in the end meant that computer software "controlled" the spacecraft, a revolution for that time. Today software is everywhere, but back in the 60s software was seen as a set of instructions on punched cards. AGC programs (frozen three to four months before each launch) were "hard-wired" as magnetic cores and wires into a permanent (and reliable) memory, an approach that saved a lot of time, effort and budget. In fact, using today's vocabulary, the Apollo software was more like "firmware".

Today's challenge of revolutionizing industry through digital transformation cannot happen without the help of digital enablers. 48 years ago, digital electronics and the first software programs were the "digital enablers" behind that "one small step for [a] man, one giant leap for mankind". Today, "digital transformation is not an option" sounds like a cliché, a hype, a slogan from digital providers; but looking back at history, the digital transformation embodied in the Apollo program made the difference between reaching the Moon and not reaching it at all.

Extracting the juice from energy

'Energy cannot be created or destroyed, it can only be changed from one form to another.' This is the best-known formulation of the First Law of Thermodynamics. However, we often forget that energy is degraded to a greater or lesser extent whenever it undergoes a transformation in the real world. Consequently, its quality is not the same in every possible form, and neither is its usefulness for a given process or application.

There are evident differences between the energy flow of 1 MWh of heat at 90 °C produced by a biomass boiler and 1 MWh of residual heat at 40 °C coming from the industrial activity of a factory. The first can supply numerous applications (space heating, domestic hot water, etc.), while the second can hardly be used directly for any of them and is often written off as losses rejected to the environment.

The 'culprit' behind such a difference is exergy, a term of renewed relevance these days among engineers, technicians, policy-makers, etc. Exergy represents the fraction of an energy flow capable of producing work, of producing a useful effect. In other words, exergy is the 'juice' that we should really extract from energy.
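As a rough, back-of-the-envelope illustration (assuming an ambient, or 'dead state', temperature of about 20 °C, i.e. 293 K), the maximum fraction of a heat flow that could ideally be converted into work is given by the Carnot factor:

```latex
% Carnot factor: ideal work fraction of heat available at temperature T,
% with an assumed ambient (dead-state) temperature T_0 = 293 K (20 °C)
\eta_{max} = 1 - \frac{T_0}{T}
% Heat at 90 °C (363 K):  1 - 293/363 \approx 0.19  (about 19% is exergy)
% Heat at 40 °C (313 K):  1 - 293/313 \approx 0.06  (about 6% is exergy)
```

Under these assumptions, the 90 °C flow carries roughly three times more 'juice' per MWh than the 40 °C one, which is exactly the quality gap described above.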

Residual heat coming out of the factory (although to a lesser extent than the heat produced by the biomass boiler) also holds such potential, and wasting it is a luxury our society cannot afford.

In this sense, the way we use energy in buildings, industry, etc. should address two main challenges: (i) making energy transformations more efficient so as to minimize degradation, and (ii) exploiting the exergy contained in low-grade energy flows that would otherwise be rejected.

At CARTIF, we work in line with these objectives through our participation in different R&D projects.

One clear example of this is the LowUP project ('Low valued energy sources UPgrading for buildings and industry uses'), led by the company ACCIONA, in which our research centre plays a remarkable role, both co-leading different tasks and providing our technical experience in the simulation, control, monitoring and instrumentation of energy systems.

The LowUP project is developing three efficient alternative systems to supply heating and cooling to buildings and industries, based on the use of free renewable energy and on heat recovery from low-grade residual energy sources that are currently wasted. The three systems will be tested in four demonstrations in relevant environments. The project involves 17 partners from 7 countries seeking to improve and integrate several individual systems for energy production, storage and final use. As a result, these technologies will contribute to significantly reducing CO2 emissions and primary energy consumption, thus bringing greater energy efficiency to buildings.

Six months after the launch of the project, we hosted at our premises the first General Assembly of the LowUP project, which turned out to be a complete success. During the meeting, the partners presented the first advances, focused on the detailed revision of the integration designs, the definition of requirements for operation, control and monitoring, as well as the first technological developments and prototypes.

Therefore, from CARTIF we encourage all of you to follow our steps and do your bit to keep extracting the 'juice' from energy, without giving up on catching even that last tiny drop 😉

About recycling, celebrations and children

Last June 15 was a double celebration day at CARTIF. On the one hand, we celebrated the 25th anniversary of the LIFE Programme, the EU's funding instrument for the environment and climate action. Twelve years have passed since we submitted our first project to this call, and since then we have participated in 20 projects, most of them related to air quality, the circular economy and environmental footprints. We detail our ongoing projects here.

CARTIF has never been the only beneficiary of these projects. Collaboration with many other entities is behind all of them and, that day, we were lucky to have several of those adventure partners at our headquarters, which made the celebration much more productive in terms of networking. Thanks to all of them!

And with 20 projects developed over 12 years… what have we learned?

  • These projects always follow the same three-phase sequence: proposal, project and post-project, and all three deserve the same attention and effort.
  • (Taking advantage of the fact that the LIFE programme is not listening right now) The equation replicability + long-term sustainability + impacts is the key that can make your proposal the winner this year.
  • And we have realized that we are a great team at CARTIF!

On the other hand, the LIFE COLRECEPS project also celebrated its final conference, publicly presenting what we have achieved after 45 intense months of implementation, immersed in the exciting world of expanded polystyrene (EPS).

Do you remember what we told you about recycling plastics some time ago? Until now, the recovery process for this waste was mechanical: either pressing the waste into briquettes and shipping them to China (think of the high environmental impact of that transport), or grinding it so that only 2% is reused in new products. With this project we have implemented a new recycling technology (unique in Europe) that allows 100% of the waste to be valorised, obtaining new EPS grit suitable for manufacturing new plastic products for the packaging sector. In this way we close the life cycle of this plastic waste.

In addition, we have been able to compile a comprehensive database on the generation of this waste in Valladolid (202 t/year are produced!) and we have become aware of how difficult it is to quantify because, even today, asking companies how much waste they produce is a no-no.

Tuqueplast and Grupo Dia are the partners that have reached the end of the project beside us, sharing some issues along the way. The implementation of the pilot plant at Tuqueplast's facilities has given us some headaches, but during the workshops carried out with children we have laughed a lot:

Let's call him Pepito, 7 years old. In response to the question "do you know in which recycling bin we should put plastics?", he told us: "Of course! Wherever my mother says!"

Alicia Aguado & Laura Pablos

When the historic buildings talk (II)

In a previous post, we described the social and economic importance of heritage conservation. We also promised that in successive posts we would go into more detail about the three main aspects that need to be monitored to ensure such conservation. To refresh your memory, they were:

  • Relative humidity and temperature.
  • Lighting (natural and artificial).
  • Contaminants.

A promise is a promise, so in this post we will focus on the first point (be patient, we will talk about the others further on), which brings us face to face with the heritage "bad boys". Relative humidity and temperature can be very damaging to the materials historic buildings are made of. From a physics point of view, relative humidity is a very useful indicator of the water (vapour) content of the air, while temperature indicates the level of kinetic energy (movement) of the air molecules.
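For reference, relative humidity is conventionally defined as the ratio between the actual partial pressure of water vapour in the air and the saturation pressure at the same temperature:

```latex
% Relative humidity: water-vapour partial pressure p_v relative to the
% saturation pressure p_sat(T) at the air temperature T
RH = 100 \cdot \frac{p_v}{p_{sat}(T)} \quad [\%]
```

Since the saturation pressure grows quickly with temperature, heating a room without adding or removing moisture lowers its relative humidity, and cooling it raises it; this coupling is why both parameters must always be monitored together.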

Both parameters vary according to local meteorological conditions, human actions and the conservation state of the historic building. This means that the atmosphere surrounding historic buildings contains a greater or lesser amount of water vapour at a certain temperature, which decisively influences the physical and chemical stability of the materials they are built of, and even of the objects kept inside.

In this sense, the effect caused by people is not negligible, not only because of our increasingly demanding comfort requirements, but also because of the number of visitors. We can influence relative humidity and temperature to the point that inadequate values are reached. The effects of people add to those of the local climate (more or less humid or warm), of the building itself (watertightness and ventilation capacity), of nearby heat sources (heating, sunlit glass surfaces, old artificial lighting systems), of nearby cold sources (external walls or air-conditioning systems), as well as of sources of humidity (leaks and floods).

Humidity is the main factor to be controlled, because of the risk of direct deterioration it can cause. The amount of water vapour in the air produces dimensional changes, such as the well-known expansion and contraction of wood, leading to fractures and cracks when strong fluctuations occur. In addition, extreme relative humidity values cause the softening or drying of organic materials such as adhesives and binders. Humidity also affects the stability of inorganic materials such as metals, accelerating corrosion processes, especially in the presence of salts. In poorly ventilated and dirty conditions, high relative humidity will cause the proliferation of living organisms responsible for biodeterioration (from microorganisms to rodents… disgusting!), and even health problems, as shown in the image.

Temperature, for its part, accelerates chemical reactions and favours biological activity. It contributes to the softening of waxes and adhesives and to the loss of adhesion between different materials, such as enamels.

Perhaps reading all this causes some discomfort (and even itching…). But what can we do to prevent these adverse effects? The answer is as simple as it is reasonable: avoid temperature and relative humidity levels that are too high or too low, and ensure the greatest possible stability.

Following the indications of the IPCE (Spanish Cultural Heritage Institute, under the Ministry of Culture), which establishes the National Preventive Conservation Plan (PNCP), the evaluation of the risks derived from the microclimatic factors we are discussing requires monitoring three aspects (a minimal analysis sketch follows the list):

  • Extreme levels of relative humidity and air temperature.
  • The magnitude and speed of fluctuations in relative humidity and air temperature.
  • The proximity of sources of humidity and heating or cooling emission sources.
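By way of illustration, the sketch below shows how the first two aspects could be checked against a log of timestamped readings. The CSV layout (timestamp, temp_c, rh_pct) and the numerical limits are hypothetical placeholders; the real thresholds should come from the IPCE/PNCP guidance for each material and collection.

```python
# Illustrative check of a temperature / relative-humidity log against
# hypothetical limits (real limits depend on the materials and on the
# IPCE/PNCP guidance). Expects a CSV with columns: timestamp, temp_c, rh_pct.
import csv
from datetime import datetime

RH_RANGE = (40.0, 65.0)        # assumed acceptable RH band (%)
TEMP_RANGE = (15.0, 25.0)      # assumed acceptable temperature band (°C)
MAX_RH_STEP_PER_HOUR = 5.0     # assumed maximum tolerated RH change (%/h)


def check_log(path: str) -> None:
    rows = []
    with open(path, newline="") as f:
        for r in csv.DictReader(f):
            rows.append((datetime.fromisoformat(r["timestamp"]),
                         float(r["temp_c"]), float(r["rh_pct"])))
    rows.sort()
    # 1) Extreme levels of relative humidity and air temperature
    for ts, temp, rh in rows:
        if not RH_RANGE[0] <= rh <= RH_RANGE[1]:
            print(f"{ts}: RH {rh:.1f}% outside {RH_RANGE}")
        if not TEMP_RANGE[0] <= temp <= TEMP_RANGE[1]:
            print(f"{ts}: T {temp:.1f} °C outside {TEMP_RANGE}")
    # 2) Magnitude and speed of fluctuations (here: RH rate of change)
    for (t0, _, rh0), (t1, _, rh1) in zip(rows, rows[1:]):
        hours = (t1 - t0).total_seconds() / 3600.0
        if hours > 0 and abs(rh1 - rh0) / hours > MAX_RH_STEP_PER_HOUR:
            print(f"{t1}: RH changing at {abs(rh1 - rh0) / hours:.1f} %/h")


# check_log("cloister_sensors.csv")  # hypothetical file name
```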

A wide range of sensors is available on the market to monitor temperature and humidity, either continuously or at specific times (see image). Of course, it is necessary to know how to properly process, interpret and integrate the data they provide.

What is not so common is the use of alternative methods to evaluate the effects of moisture on the materials of built heritage, even before those effects appear and the remedy becomes worse than the disease. CARTIF is a pioneer in the use of laser scanners to make this assessment. A recent article published in the prestigious international journal Studies in Conservation, together with the developments carried out for the European research project INCEPTION, shows that while 3D-documenting a historic building, the level of humidity present in a known type of material can be registered in parallel. A trustworthy two-for-one to bear in mind in these times of minimal conservation budgets. The cloister of the Cathedral of Ciudad Rodrigo (Salamanca, Spain) was chosen for the on-site validation.

An important example of the scope of applied research in cultural heritage by a technology centre, within a sector where it still takes longer than expected for not-so-new technologies to come into daily use.

Efficiency Wars (Episode V) – The ROI strikes back

Watch out, the game might not be worth the candle.

In my previous post, I explained how beneficial it can be for a factory to disaggregate (by direct measurement, not by estimates based on nominal values) the energy consumption of the factory among the different lines, machines and systems that compose it. Jedi jokes aside, such energy disaggregation is an example of the well-known rule "measure to know, know to control and control to improve." In more practical terms, the availability and study of this information will make it possible (a short data-crunching sketch follows the list):

  • to map the energy consumption within the factory.
  • to visualize, through a simple pie chart, the energy contributions of the different elements.
  • to prioritize which zones or machines should be modified or replaced due to their low energy efficiency.
  • to compare the energy efficiency of the different lines of a factory.
  • to compare the energy costs of the different products manufactured on the same production line.
  • to detect inappropriate consumption due to device malfunctions or sub-optimal working protocols.
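As a small illustration of the first two points, the following sketch aggregates per-machine energy readings from a hypothetical CSV file (columns machine, line, energy_kwh) and prints each machine's share of the total, which is precisely the data behind that simple pie chart.

```python
# Illustrative aggregation of disaggregated energy readings
# (hypothetical CSV with columns: machine, line, energy_kwh).
import csv
from collections import defaultdict


def consumption_shares(path: str) -> dict[str, float]:
    """Return each machine's percentage share of total energy consumption."""
    totals: dict[str, float] = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["machine"]] += float(row["energy_kwh"])
    grand_total = sum(totals.values())
    return {m: 100.0 * kwh / grand_total for m, kwh in totals.items()}


# Example usage (hypothetical file name):
# for machine, share in sorted(consumption_shares("plant_energy.csv").items(),
#                              key=lambda kv: -kv[1]):
#     print(f"{machine}: {share:.1f}% of total consumption")
```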

OK, let's suppose we have already convinced the factory managers of the value of measuring in order to improve, and of doing so by disaggregating consumption. How do we start?

The most obvious approach would be to monitor the energy consumption of each machine with its corresponding sensor or meter. For electricity consumption, a network analyser must be installed in the electrical cabinet housing the protections associated with the equipment. As long as there is space available in the corresponding cabinet, this installation usually requires stopping the machines for only a few minutes. In the case of machinery that consumes natural gas, things get more complicated and expensive: it is necessary to cut the gas supply pipe to install the new gas meter, and the safety requirements and verification of the new welds will require a 24-48 hour supply interruption and machinery stoppage.

In addition, there may be machines or equipment with significant consumption of compressed air, or of heating (or cooling) thermal energy in the form of hot (or cold) water. In these cases, specific meters must be installed in the supply pipes of the corresponding services.

In any case, meters formerly incorporated a mechanical (or electronic) counting and accumulation mechanism. Periodically, the assigned worker would record the readings in the corresponding logbook, and those readings would later be entered manually into the computerized cost management system. Nowadays this approach is obsolete because, like any manual data collection process, it is costly, inefficient and error-prone. In other words, it is not enough to install the meters: they must be equipped (and all industrial models are) with a communications module that allows the measured data to be sent to a computerized database, and it will also be necessary to deploy a new communications network (or extend the existing one, where applicable) to connect all the newly installed sensors with the computer system that will periodically record the energy consumption data.

This type of consumption monitoring is known as Intrusive Load Monitoring (ILM). Its main advantage is the precision of the results; its great disadvantage is the high cost it entails. In factories where consumption is highly distributed among a multitude of machines, the cost of the equipment and installation of an ILM system can be a very large investment compared with the annual cost of the energy consumed in the factory.

It should not be forgotten that the purpose of an energy disaggregation system is to help reduce energy consumption and, therefore, the cost associated with that consumption. Obviously, it is not possible to predict precisely the economic savings that energy disaggregation will produce; it is usual to work with ranges, based on previous experience, bounded by the most and least favourable values. But no matter how wide the potential savings are, if the initial investment is unreasonably high, the resulting payback period will exceed any threshold the relevant Chief Financial Officer considers acceptable, and the corresponding return on investment (ROI) will fall short.
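As a simple illustration of that reasoning, the snippet below computes the simple payback period for a best-case and a worst-case savings estimate. The figures are placeholders chosen for the example, not data from any real project.

```python
# Simple payback illustration with placeholder figures (not project data).
def simple_payback_years(investment_eur: float, annual_savings_eur: float) -> float:
    """Years needed for cumulative savings to cover the initial investment."""
    return investment_eur / annual_savings_eur


investment = 60_000.0             # hypothetical ILM system cost (EUR)
annual_energy_bill = 200_000.0    # hypothetical yearly energy cost (EUR)
savings_range = (0.03, 0.10)      # assumed 3%-10% achievable savings

for fraction in savings_range:
    savings = fraction * annual_energy_bill
    print(f"savings {fraction:.0%}: payback ≈ "
          f"{simple_payback_years(investment, savings):.1f} years")
```

With these placeholder numbers the payback ranges from 3 to 10 years, which shows why a high initial investment can sink the business case regardless of how optimistic the savings estimate is.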

To be continued…