Efficiency Wars (Episode VI) – The Return of Bohr

Low-cost alternative innovations. The barometer and how to think outside the box

I finished my previous post by noting that an ILM approach (disaggregating the energy consumption of a factory by metering each load) can be financially out of reach for factories with highly distributed energy consumption.

The commercial market offers several alternatives for industrial measurement systems, designed by the main equipment manufacturers such as SIEMENS, ABB, SCHNEIDER, … and capable of providing very detailed monitoring (several measurements per second) of the energy consumption of the different elements in a production chain. However, the cost of the necessary hardware, the required computing and communications installation, and the cost of the software licenses make such systems quite expensive. The consequence is that, nowadays, they remain a luxury available only to large multinationals that have several similar factories in different locations and, therefore, better purchasing negotiation capacity and easy internal replicability. In addition, their production processes are highly automated and computerized through latest-generation MES (Manufacturing Execution System) systems, so they already have the necessary IT and communications infrastructure; they only need to invest in the hardware and “upgrade” their software licenses.

For other small and medium-sized factories, these solutions can mean “using a sledgehammer to crack a nut”: the investment in monitoring will never pay for itself through the savings it produces. Yet these factories are increasingly interested in optimizing their energy costs, provided the investment required is reasonable and proportionate to their turnover.

Every science student will have heard, in one of its many versions, the supposed anecdote of Niels Bohr and the barometer. Although the anecdote is invented rather than real, its moral of trying to think differently when solving a problem is more relevant than ever; the difference is that we now call it “thinking outside the box”. The question now is not how to measure the height of a building with the help of a barometer, but how to measure and monitor the energy consumption of a factory without spending its entire one-year investment budget.

The answer, as with the barometer problem, is not unique: it will depend on each particular factory. Fortunately, the IoT revolution is producing economies of scale in some of the necessary components. Continuing with the ‘Star Wars’ tribute, low-cost energy consumption monitoring systems can be compared to an X-wing starfighter supported by the following four wings:

  • The lower cost of electronics, which allows the development of new low-cost non-invasive sensors such as Hall-effect electric current sensors, ultrasonic flow sensors or infrared temperature sensors.
  • Open-source hardware and software platforms for capturing and processing signals with low-cost devices such as Arduino or Raspberry Pi (see the sketch after this list).
  • The emergence of new wireless protocols oriented to M2M (machine-to-machine) communication, characterised by low bandwidth, low energy consumption and high resistance to interference, such as Zigbee, Bluetooth LE or Wi-Fi HaLow.
  • Software systems for storing and processing all the recorded data: database systems, tools for the automatic calculation of multiple indicator reports, and displays showing the current values of the most important parameters. These can reside either on physical servers on the factory intranet or on rented virtual servers in the cloud.
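
To give a feel for how two of these “wings” might fit together, here is a minimal Python sketch of a Raspberry Pi sampling a non-invasive current sensor and publishing the readings over MQTT. The broker address, topic, voltage and the read_current_amps() helper are all hypothetical placeholders, and the snippet assumes the paho-mqtt 1.x client library; it is a conceptual sketch, not a tested installation.

```python
# Hypothetical sketch: sample a non-invasive current clamp on a Raspberry Pi
# and publish the readings over MQTT (a lightweight, M2M-friendly protocol).
# Assumes the paho-mqtt 1.x library; broker, topic and voltage are made up.
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "192.168.1.10"      # hypothetical on-premises broker
TOPIC = "factory/line1/press/energy"
MAINS_VOLTAGE = 230.0             # assumed constant voltage: a crude estimate


def read_current_amps() -> float:
    """Placeholder for the real Hall-effect / current-clamp driver."""
    raise NotImplementedError("wire this to the actual ADC or sensor driver")


def main(period_s: float = 5.0) -> None:
    client = mqtt.Client()
    client.connect(BROKER_HOST)
    client.loop_start()           # handle network traffic in the background
    while True:
        amps = read_current_amps()
        reading = {
            "timestamp": time.time(),
            "current_a": amps,
            "apparent_power_kva": MAINS_VOLTAGE * amps / 1000.0,
        }
        client.publish(TOPIC, json.dumps(reading))
        time.sleep(period_s)


if __name__ == "__main__":
    main()
```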

These new technologies are not yet mature and, obviously, industry can be very reluctant to use them. If there is something that scares a production or maintenance manager, it is an experimental system that has not been tested for years beforehand. However, it is worth remembering that we are not talking about modifying the control systems of processes and machines, but about deploying a parallel system throughout the factory that monitors and records the energy consumption of the main elements and production systems. We are talking about detecting possible energy inefficiencies, correcting them and obtaining the corresponding economic savings. And we are talking about doing so with a reasonable investment cost, that is, one that an SME can afford.

Digital Transformation, to the moon and back

It is July 20th, 1969, 20:18:04 UTC and, after 102 hours, 45 minutes and 39.9 seconds of travel, “the Eagle has landed” and Neil is about to descend the ladder and touch an unknown surface for the first time: “That’s one small step for [a] man, one giant leap for mankind”. That year, 1969, Neil Armstrong, Michael Collins and “Buzz” Aldrin changed the world riding the biggest rocket ever built to the Moon.

Some people may have forgotten it, and others, like me, were not yet born at the time, but the space race had its own digital transformation, similar to the one now foreseen for industry and the general public. The Apollo program was the culmination of that first digital revolution in space exploration.

The landing was achieved, to a great extent, thanks to the electronics on board both the Apollo Command Module (CM) and the Lunar Module (LM): the AGC, or Apollo Guidance Computer. It was one of the first computers based on integrated digital circuits. Weighing “just” 32 kg and consuming a mere 55 W, this technical wonder was able to coordinate and control many tasks of the space mission, from calculating the direction and navigation angles of the spacecraft to commanding the reaction control jets that oriented it in the desired direction. Moreover, the computer included one of the first demonstrations of a “fly-by-wire” feature, in which the pilot does not command the engines directly but through control algorithms programmed into the computer. In fact, this computer was the basis for the subsequent control systems of the Space Shuttle and of military and commercial fly-by-wire aircraft.

As usual with this kind of breakthrough, it did not happen overnight but through a series of earlier incremental innovations.

In the 1950s, the MIT Instrumentation Laboratory (IL) designed and developed the guidance system of the Polaris ballistic missiles. Initially built with analog computers, the designers soon decided to go digital to achieve the accuracy required for computing missile trajectories and control.

Before President Kennedy set the ambitious goal of “… going to the moon in this decade …”, seven years before the first lunar landing, and after the launch of Sputnik in 1957, a Mars exploration study had started at MIT’s IL. The design of a Mars probe set the basic configuration of the future Apollo guidance system: a set of gyroscopes to keep the probe oriented, a digital computer, and an optical telescope for the probe to orient itself relative to the Moon and the stars.

The launch of Sputnik in 1957 fueled America’s ambition to put the first human in space, but it also fed the public debate about the role of pilots in the space race, a discussion similar to current views on the role of the worker in the factory. Should the astronaut just be payload, or take full control of the spacecraft? Once aircraft pilots earned the task of being at the controls, several tests showed that it was nearly impossible for them to control every aspect of a mission, given the fast reactions needed and the number of different control commands. Hence, pilots would need some automatic and reliable help, and providing it was one of the main functions of the AGC.

Reliability was therefore one of the main concerns of the mission. The Polaris program had taken four years to design guidance and control for a weapon that was in the air for a couple of minutes. Kennedy’s bet of taking a man to the Moon in less than seven years meant developing a guidance and control system for a spacecraft that had to work without failure on a trip lasting more than a week. The required level of reliability was more than two orders of magnitude higher. If a Polaris missile failed, a new one would take off; a failure in the spacecraft meant killing an astronaut.

Much of the reliability of the flight rested on the shoulders of the Apollo Guidance Computer, and at some point in the program there were too many planned tasks, such as complex guidance maneuvers, to be physically hardwired into electronics. Achieving those tasks required software. Although software was barely taken into account at the beginning of the program, it came to mean the difference between achieving the goal and the program’s complete failure. The computer was the interface between the astronaut and the spacecraft, which in the end meant that computer software “controlled” the spacecraft, a revolution for that time. Today software is everywhere, but back in the 60s software was seen as a set of instructions on punched cards. AGC software programs (frozen three to four months before each launch) were “hard-wired” as magnetic cores and wires in a permanent (and reliable) memory, yet this approach saved a lot of time, effort and budget. In fact, it could be said that Apollo software was more like “firmware” in today’s vocabulary.

Today’s challenge of revolutionizing industry through digital transformation cannot happen without the help of digital enablers. 48 years ago, digital electronics and the first software programs were the “digital enablers” that made possible that “one small step for [a] man, one giant leap for mankind”. Today’s “digital transformation is not an option” sounds like a cliché, a hype, a slogan from digital providers, but looking back at history, the digital transformation within the Apollo program meant the difference between achieving the Moon landing and not achieving it at all.

Efficiency Wars (Episode V) – The ROI strikes back

Watch out, the game might not be worth the candle.

In my previous post, I explained how beneficial it could be for a factory to disaggregate (by direct measurement, not by estimations based on nominal values) its energy consumption among the different lines, machinery and systems that compose it. Jedi jokes aside, such energy disaggregation is an example of the well-known rule “measure to know, know to control and control to improve”. Taking a more practical approach, the availability and study of such information will allow:

  • to map the energy consumptions within the factory
  • to visualize, through a simple pie chart, the energy contributions of the different elements (see the sketch after this list).
  • to set up the priorities about what zones or machines must be modified or replaced due to their low energy efficiency.
  • to compare the energy efficiency between the different lines of a factory.
  • to compare the energy costs of the different products manufactured in the same production line.
  • to detect inappropriate consumption due to device malfunctions or sub-optimal working protocols.
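
As a trivial illustration of the pie chart mentioned above, the following snippet turns a set of invented disaggregated monthly readings into percentage shares; real values would of course come from the installed meters, and the matplotlib plot is optional.

```python
# Toy example: invented disaggregated monthly electricity readings (kWh).
monthly_kwh = {
    "Press line": 48_200,
    "Paint shop": 31_500,
    "Compressed air": 18_900,
    "HVAC": 12_400,
    "Lighting & offices": 6_000,
}

total = sum(monthly_kwh.values())
for element, kwh in sorted(monthly_kwh.items(), key=lambda kv: -kv[1]):
    print(f"{element:20s} {kwh:8d} kWh  {100 * kwh / total:5.1f} %")

# Optional pie chart (requires matplotlib):
import matplotlib.pyplot as plt

plt.pie(monthly_kwh.values(), labels=monthly_kwh.keys(), autopct="%1.1f%%")
plt.title("Monthly electricity consumption by element (example data)")
plt.show()
```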

Ok, let’s suppose we have already convinced the factory managers of the convenience of measuring to improve and doing it through the disaggregation of consumption. How do we start?

The most obvious approach would be to monitor the energy consumption of each machine with its corresponding sensor or meter. For electricity consumption, a network analyser must be installed in the electrical cabinet where the electrical protections associated with the equipment are located. This installation, as long as there is space available in the corresponding cabinet, usually requires stopping the machines for only a few minutes. In the case of machinery that runs on natural gas, things get more complicated and expensive: the gas supply pipe has to be cut to install the new gas meter, and the safety requirements and the verification of the new welds will require a 24-48 hour supply interruption and machinery stop.

In addition, there may be machines or equipment with a significant consumption of compressed air, or of heating (or cooling) thermal energy in the form of hot (or cold) water. In these cases, specific meters must be installed in the supply pipes of the corresponding services.

In the past, meters used to incorporate a mechanical (or electronic) counting and accumulation mechanism. Periodically, an assigned worker would record the readings in the corresponding logbook, and those readings would later be entered manually into the computerized cost management system. Nowadays this approach is obsolete: like any manual data collection process, it is costly, inefficient and prone to errors. In other words, it is not enough to install the meters; they must also be equipped (and all industrial models comply) with a communications module that allows the measured data to be sent to a computerized database storage system. It will also be necessary to deploy a new communications network (or extend the existing one, where applicable) to connect all the newly installed sensors to the computer system that will periodically record the energy consumption data. A minimal sketch of this kind of automatic acquisition is shown below.
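
The following Python snippet is a minimal sketch of such an acquisition loop: it polls a meter periodically and appends the readings to a local SQLite database. The read_energy_kwh() function and the meter identifier are hypothetical placeholders for whatever protocol the meter’s communications module actually speaks (Modbus, M-Bus, a vendor API, etc.).

```python
# Minimal acquisition loop: poll a meter periodically and store the readings.
# read_energy_kwh() is a hypothetical placeholder for the real protocol driver
# (Modbus, M-Bus, vendor API...); the database here is a local SQLite file.
import sqlite3
import time

DB_PATH = "energy_readings.db"
METER_ID = "line1_gas_meter"        # hypothetical meter identifier
PERIOD_S = 60                       # one reading per minute


def read_energy_kwh(meter_id: str) -> float:
    """Placeholder: return the meter's accumulated energy in kWh."""
    raise NotImplementedError("replace with the actual meter driver")


def main() -> None:
    con = sqlite3.connect(DB_PATH)
    con.execute(
        "CREATE TABLE IF NOT EXISTS readings (ts REAL, meter_id TEXT, energy_kwh REAL)"
    )
    while True:
        value = read_energy_kwh(METER_ID)
        con.execute(
            "INSERT INTO readings VALUES (?, ?, ?)",
            (time.time(), METER_ID, value),
        )
        con.commit()
        time.sleep(PERIOD_S)


if __name__ == "__main__":
    main()
```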

This type of consumption monitoring is known as Intrusive Load Monitoring (ILM). Its main advantage is the precision of the results; its great disadvantage is the high expense it entails. In factories where consumption is highly distributed among a multitude of machines, the cost of the equipment and installation of an ILM system can be a very large investment compared to the factory’s annual energy bill.

It should not be forgotten that the purpose of an energy disaggregation system is to help reduce energy consumption and, therefore, the cost associated with that consumption. Obviously, it is not possible to predict precisely the economic savings that the energy disaggregation will produce; it is usual to work with ranges, based on previous experience, between the most and least favourable values. But no matter how wide the potential savings are, if the initial investment is unreasonably high, the resulting payback period (and the corresponding Return on Investment, or ROI) will fall outside any threshold the relevant Chief Financial Officer is willing to accept. A back-of-the-envelope example follows.
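
As an illustration, with entirely invented figures, the snippet below computes the simple payback period of a monitoring investment under an optimistic and a pessimistic savings scenario:

```python
# Back-of-the-envelope payback estimate with invented figures.
investment_eur = 60_000          # cost of meters, installation and software
annual_energy_bill_eur = 250_000

# Savings from disaggregation are usually quoted as a range of the bill.
for label, savings_fraction in [("optimistic", 0.08), ("pessimistic", 0.02)]:
    annual_savings = annual_energy_bill_eur * savings_fraction
    payback_years = investment_eur / annual_savings
    print(f"{label:12s}: {annual_savings:8.0f} EUR/year saved, "
          f"payback = {payback_years:.1f} years")
```

With these made-up numbers the payback ranges from three to twelve years, which is exactly the kind of spread that makes or breaks the decision of the financial department.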

To be continued…

Efficiency Wars (Episode IV) – A new (efficiency) hope

Disaggregation of consumptions?  Why? To avoid the dark side

Within the world of management, the aphorism “If you can’t measure it, you can’t improve it” is often attributed to the twentieth-century Austrian philosopher Peter Drucker, whose writings contributed to the philosophical and practical foundations of the modern business corporation. He is indeed considered the founder of modern management.

Anyone with a minimum knowledge of quality control will have heard of the “Deming Cycle”, also known as the “Plan-Do-Check-Act management method”. Measurement is essential in management: it is part of the administrative process and central to the application of the PDCA method.

However, physicists know the expression does not come from the field of corporate management but from experimental thermodynamics. In particular, it was the nineteenth-century British mathematician and physicist William Thomson (Lord Kelvin) who formulated it in the following terms: “What is not defined cannot be measured. What is not measured cannot be improved. What is not improved always breaks down.” By the way, William Thomson became Lord Kelvin, the first British scientist to be admitted to the House of Lords, in recognition of his work in thermodynamics and electricity. He is buried in Westminster Abbey, next to the tomb of Isaac Newton.

Having defended the honour of “physics” versus “management”, the idea of measuring in order to improve remains one of the most important ground rules of green manufacturing.

One of the problems encountered in the REEMAIN project when initiating the process of improving the energy efficiency of the production processes is the aggregation of energy consumptions: the individual energy consumptions of the main machines or stages of the production process are not accurately known. Only the global amount of energy consumed by the factory as a whole is known.

In the best-case scenario, and only in large factories constructively organized as interconnected workshops, the total energy consumption of each workshop will be available as monthly values. This is because, in those kinds of factories, specific electricity and gas meters, and even thermal energy or compressed air meters, will have been installed at the points where each workshop connects to the factory’s energy distribution networks. However, this “effort” (i.e. economic investment) in energy meters has nothing to do with energy efficiency concerns; it is devoted to avoiding disputes over the allocation of overhead costs for energy supplies and auxiliary services between the different workshops or departments.

Overhead costs must always be distributed and, given that financially the factory (or company) is a closed system, the different departments or workshops will try to use a criterion that benefits them, obviously at the expense of the others. For instance, electricity or natural gas costs are often split between departments according to the number of workers, the workshop area, the number of units produced, the number of working hours, the nominal power of the machinery, or even some weighted mix of all of these parameters. As you can imagine, when total energy costs reach magnitudes of six zeroes (millions of euros), changing the weighting of the different criteria can represent hundreds of thousands of euros in the corresponding economic balances; the toy allocation below illustrates the effect.
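
A toy allocation with invented numbers shows how sensitive the split can be to the chosen criterion:

```python
# Toy example: splitting a shared 1 MEUR electricity bill between two
# workshops using two different criteria. All figures are invented.
total_bill_eur = 1_000_000

workshops = {
    "Workshop A": {"workers": 120, "installed_power_kw": 800},
    "Workshop B": {"workers": 40,  "installed_power_kw": 1_600},
}


def allocate(by: str) -> dict:
    """Split the bill proportionally to the chosen criterion."""
    total = sum(w[by] for w in workshops.values())
    return {name: total_bill_eur * w[by] / total for name, w in workshops.items()}


for criterion in ("workers", "installed_power_kw"):
    print(f"Split by {criterion}:")
    for name, share in allocate(criterion).items():
        print(f"  {name}: {share:,.0f} EUR")
```

With these figures, Workshop A pays 750,000 EUR when the bill is split by headcount but only about 333,000 EUR when it is split by installed power: the same bill, a very different balance sheet.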

In any case, whether within a workshop or at the global factory level, the challenge is to determine (i.e. to monitor, with detailed temporal recording) the contributions of the different lines, machines or systems to the overall energy consumption of the factory. And why is this useful? Well, there are many reasons, which will be discussed in the next episode of this series. But speaking in general terms, and paraphrasing Master Yoda (now that the saga is celebrating its 40th anniversary), it could be said that “Aggregation of energy consumptions is the path to the dark side. Aggregation leads to lack of knowledge. Lack of knowledge leads to uncontrollability. Uncontrollability leads to inability to improve.”

To be continued…

What could mechanical simulation do for companies?

The use of computer environments in the mechanical engineering field has grown significantly in recent decades. Most companies in the industry are aware of the benefits of computer-aided design (CAD) and engineering (CAE) systems, with which the traditional tasks associated with the design of machine elements, structures and manufacturing processes become very straightforward. The biggest benefit is obtained when interdisciplinary teams share models so that designers, analysts and suppliers can evaluate several alternatives, understand design decisions and collaborate to meet the requirements of functionality, quality and cost. Taking full advantage of this interaction requires agreed management systems, cross-platform environments, and local and cloud computing and storage capabilities.

Nowadays, simulation environments offer new capabilities to solve more complex problems. The major advantage of finite element analysis techniques is that they can handle the coupled equations describing the multiphysics problems of interest to production companies. To the traditional calculations of trajectories, stresses and deflections in mechanical structures, mechanisms and assemblies, they now add the ability to model interaction with the surrounding fluids, making it possible to address problems such as combustion in biomass boilers, scour around viaduct piers, or vortex-induced vibrations in slender structures.

The efficient use of these tools allows companies to accelerate innovation: evaluating different design alternatives in a short period of time, experimenting on prototypes, learning the real performance of the process or product, updating the virtual model, simulating it under untested conditions and optimizing it before it goes to market. However, some companies are not able to realize the full potential of their software investments, because the simulation sometimes remains disconnected from the production line and the methodological cycle discussed above is not completed. To address this problem, CARTIF offers technological services of design, simulation, prototyping and testing, ranging from conceptual design to manufacturing and manufacturing supervision, applied to the automotive, renewable energy, chemical, agricultural, building, infrastructure and industrial machinery sectors.

Blockchain and the electric market customers

In a previous post I tried to explain Blockchain technology. On this occasion I will try to explain how customers in the electricity market could benefit from it.

One of the most interesting applications of Blockchain is the smart contract. While a traditional contract is a piece of paper on which two or more parties express their conditions and commitments, a smart contract is a computer program in which the conditions and commitments are coded and automatically executed when the conditions are fulfilled. Currently, smart contracts are restricted to simple agreements in very specific applications. Blockchain technology ensures the fulfilment of the contract commitments with no need for a third supervising party. Smart contracts are expected to reduce costs and speed up contract management, and they will also enable almost real-time audits. Ethereum is one Blockchain platform that supports smart contracts.

Smart contracts in the energy distribution sector could play the role of today’s control algorithms. Among other duties, these algorithms are in charge of controlling the energy flow between storage and generation depending on the energy surplus. A first approach to smart contracts in the energy sector is POWR, developed by the company Oneup. The prototype runs in a neighbourhood where all the houses have solar panels installed. The energy that is not used in one house is offered to the neighbours and, at the same time, neighbours who need energy request it from their neighbours. Blockchain is used to record the energy flows between neighbours. The smart contract is stored in mini-computers attached to the meters in every house; they continuously supervise the conditions coded in the contract and execute the commitments as soon as the conditions are met. Payments are made in the project’s own cryptocurrency. The sketch below illustrates this kind of condition-commitment logic.
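
To make the idea more tangible, here is a toy Python sketch, not Ethereum or Solidity code, of the condition-commitment logic such a contract encodes: houses with a surplus are matched with houses with a deficit, and each settled transfer is appended to a trivial in-memory list standing in for the blockchain ledger. The tariff and all the figures are invented.

```python
# Toy sketch (not Ethereum/Solidity): the condition -> commitment logic of a
# peer-to-peer surplus-trading contract, with an invented tariff and a
# trivial in-memory list standing in for the blockchain ledger.
from dataclasses import dataclass

PRICE_PER_KWH = 0.10   # invented settlement price, in some token


@dataclass
class House:
    name: str
    generation_kwh: float
    consumption_kwh: float

    @property
    def surplus(self) -> float:
        return self.generation_kwh - self.consumption_kwh


ledger = []   # each entry records one settled energy transfer


def settle(houses: list[House]) -> None:
    """Match houses with surplus to houses with deficit and record transfers."""
    sellers = [h for h in houses if h.surplus > 0]
    buyers = [h for h in houses if h.surplus < 0]
    for buyer in buyers:
        needed = -buyer.surplus
        for seller in sellers:
            if needed <= 0:
                break
            traded = min(needed, seller.surplus)
            if traded <= 0:
                continue
            seller.generation_kwh -= traded   # surplus consumed by the trade
            needed -= traded
            ledger.append({
                "from": seller.name,
                "to": buyer.name,
                "kwh": traded,
                "payment": traded * PRICE_PER_KWH,
            })


houses = [
    House("House 1", generation_kwh=6.0, consumption_kwh=4.0),
    House("House 2", generation_kwh=1.0, consumption_kwh=3.5),
]
settle(houses)
for entry in ledger:
    print(entry)
```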

A similar example can be found in New York. The Brooklyn Microgrid project is building a microgrid to which the neighbours are connected. They have solar panels installed on the roofs of their premises. Neighbours use the energy they produce, but they also trade energy to satisfy their neighbours’ needs. This peer-to-peer market is supported by TransActive Grid, an initiative developed by LO3 Energy and ConsenSys, using Ethereum technology. The project is studying how a microgrid autonomously managed by a group of people could behave. In the future, the neighbours could become the owners of the microgrid under a cooperative scheme.

Sharge participant installing Sharge at home

As an alternative to smart contracts, Blockchain technology is being demonstrated in other ways. One example is Sharge, a company that has developed Blockchain-based technology enabling an electric car driver to charge the battery at any domestic plug enrolled in the program. The house owner installs a small device on a plug, the car driver unlocks the device using a smartphone and, after the charge is complete, the plug owner is paid in a cryptocurrency. A similar idea is being developed by Slock.it and RWE in the BlockCharge project. In both cases, the goal is a payment system for charging electric vehicles with no need for a contract or for an intermediary, agent or broker.

There are also cryptocurrencies designed to encourage the generation of solar energy, like Solarcoin. Others seek to enhance energy interchange between machines, like Solether. In this case Blockchain meets the Internet of Things paradigm.

Blockchain is a technology that could benefit energy users and foster the use of renewable energy. It will also empower energy users, in particular domestic ones. While the technology is being developed and tested, the legal and regulatory framework should be revised to remove barriers that could jeopardise the use of Blockchain-based technology.