Efficiency Wars (Episode V) – The ROI strikes back

Watch out, the game might not be worth the candle.

In my previous post, I explained how beneficial it can be for a factory to disaggregate its energy consumption (by direct measurement, not by estimates based on nominal values) among the different lines, machines and systems that compose it. Jedi jokes aside, such energy disaggregation is an example of the well-known rule “measure to know, know to control, control to improve”. On a more practical level, the availability and study of this information will allow (see the sketch after the list):

  • to map the energy consumption within the factory;
  • to visualize, through a simple pie chart, the energy contributions of the different elements;
  • to set priorities about which zones or machines should be modified or replaced because of their low energy efficiency;
  • to compare the energy efficiency of the different lines of a factory;
  • to compare the energy costs of the different products manufactured on the same production line;
  • to detect inappropriate consumption due to device malfunctions or sub-optimal working protocols.
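As a toy illustration of the first two items and the last one (a hypothetical sketch in Python, with invented machine names and figures), per-machine meter readings are enough to compute the energy shares behind that pie chart and to flag a machine drifting away from its baseline:

```python
# Hypothetical monthly readings per machine (kWh); all names and
# figures are invented for illustration.
monthly_kwh = {
    "press_line_1": 41_200,
    "press_line_2": 39_800,
    "paint_booth": 22_500,
    "compressor_room": 18_300,
    "lighting_hvac": 9_400,
}

baseline_kwh = {  # e.g. the average of previous months
    "press_line_1": 40_900,
    "press_line_2": 40_100,
    "paint_booth": 22_700,
    "compressor_room": 14_100,  # note the jump above this baseline
    "lighting_hvac": 9_500,
}

total = sum(monthly_kwh.values())
for machine, kwh in sorted(monthly_kwh.items(), key=lambda kv: -kv[1]):
    share = 100 * kwh / total
    deviation = (kwh - baseline_kwh[machine]) / baseline_kwh[machine]
    flag = "  <-- check for malfunction" if deviation > 0.10 else ""
    print(f"{machine:16s} {kwh:7d} kWh {share:5.1f}%{flag}")
```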

OK, let’s suppose we have already convinced the factory managers of the value of measuring in order to improve, and of doing it through the disaggregation of consumption. How do we start?

The most obvious approach is to monitor the energy consumption of each machine with its own sensor or meter. For electricity, this means installing a network analyser in the electrical cabinet housing the electrical protections associated with the equipment. Provided there is space available in the cabinet, this installation usually requires stopping the machine for only a few minutes. For machinery that runs on natural gas, things get more complicated and expensive: the gas supply pipe must be cut to insert the new gas meter, and the safety requirements and verification of the new welds typically imply a 24- to 48-hour supply interruption and machinery stop.

In addition, some machines or equipment consume significant amounts of compressed air, or of heating (or cooling) thermal energy in the form of hot (or cold) water. In these cases, specific meters must be installed in the supply pipes of the corresponding services.

In any case, meters formerly incorporated a mechanical (or electronic) counting and accumulation mechanism. Periodically, an assigned worker would record the readings in a logbook, and those readings would later be entered manually into the computerised cost management system. Nowadays this approach is obsolete: like any manual data collection process, it is costly, inefficient and error-prone. In other words, it is not enough to install the meters; they must also be equipped (and all industrial models are) with a communications module that sends the measured data to a computerised database storage system. It will also be necessary to deploy a new communications network (or extend the existing one, if applicable) to connect all the new sensors to the computer system that will periodically record the energy consumption data.
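The acquisition chain just described can be sketched in a few lines of Python. This is only an outline: read_meter() is a stand-in for whatever protocol the meter actually speaks (Modbus, M-Bus, a vendor API...), and the meter names and polling period are invented.

```python
import random
import sqlite3
import time

def read_meter(meter_id: str) -> float:
    """Stand-in for the real protocol driver.
    Here it just simulates an accumulated kWh register."""
    return 100_000 + random.random() * 10

conn = sqlite3.connect("energy_readings.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS readings (meter_id TEXT, ts INTEGER, kwh REAL)"
)

METERS = ["line_1_analyser", "oven_gas_meter"]  # hypothetical meter IDs
POLL_SECONDS = 1  # a real deployment would poll every few minutes

for _ in range(3):  # a real acquisition service would loop forever
    for meter_id in METERS:
        conn.execute(
            "INSERT INTO readings VALUES (?, ?, ?)",
            (meter_id, int(time.time()), read_meter(meter_id)),
        )
    conn.commit()
    time.sleep(POLL_SECONDS)

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count, "readings stored")
```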

This type of consumption monitoring is known as Intrusive Load Monitoring (ILM). Its main advantage is the precision of the results; its great disadvantage is the expense it entails. In factories where consumption is highly distributed among a multitude of machines, the cost of the equipment and installation of an ILM system can be very large compared to the annual cost of the energy consumed in the factory.

It should not be forgotten that the purpose of an energy disaggregation system is to help reduce energy consumption, and therefore the cost associated with it. Obviously, the economic savings that energy disaggregation will produce cannot be predicted precisely; it is usual to work with ranges, based on previous experience, bounded by the most and least favourable values. But no matter how wide the potential savings, if the initial investment is unreasonably high, the resulting Return on Investment (ROI) will fall below any threshold the relevant Chief Financial Officer considers acceptable.
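A back-of-the-envelope calculation makes the point. The figures below are invented, but they show how the same range of savings yields very different payback periods depending on the metering investment:

```python
# Hypothetical factory: compare payback periods for a modest metering
# deployment vs. a full ILM system, given the same savings range.
annual_energy_cost = 400_000          # EUR/year, invented figure
savings_range = (0.05, 0.15)          # 5-15% savings, based on experience

for investment in (60_000, 450_000):  # modest vs. full ILM deployment
    for fraction in savings_range:
        annual_savings = annual_energy_cost * fraction
        payback_years = investment / annual_savings
        print(f"invest {investment:>7d} EUR, save {fraction:4.0%}: "
              f"payback {payback_years:4.1f} years")
```

Under these invented numbers, the modest deployment pays back in 1 to 3 years, while the full ILM system needs 7.5 to 22.5 years, which no CFO would accept.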

To be continued…

Efficiency Wars (Episode IV) – A new (efficiency) hope

Disaggregation of consumption? Why? To avoid the dark side

Within the world of management, the aphorism “If you can’t measure it, you can’t improve it” is often attributed to Peter Drucker, the twentieth-century Austrian-born management thinker whose writings contributed to the philosophical and practical foundations of the modern business corporation. He is indeed considered the founder of modern management.

Anyone with a minimum knowledge of quality control will have heard of the “Deming Cycle”, also known as the Plan-Do-Check-Act (PDCA) method. Measurement is essential in management: it is part of the administrative process and indispensable to the application of the PDCA method.

However, physicists know the expression does not come from the field of corporate management but from experimental thermodynamics. In particular, it was the nineteenth-century British mathematician and physicist William Thomson (Lord Kelvin) who formulated it in the following terms: “What is not defined cannot be measured. What is not measured cannot be improved. What is not improved always breaks down.” By the way, William Thomson became Lord Kelvin, the first British scientist to be admitted to the House of Lords, in recognition of his work in thermodynamics and electricity. He is buried in Westminster Abbey, next to the tomb of Isaac Newton.

Having thus defended the honour of physics versus management, the idea of measuring in order to improve remains one of the most important ground rules of green manufacturing.

One of the problems encountered in the REEMAIN project when starting to improve the energy efficiency of production processes is the aggregation of energy consumption: the individual consumption of the main machines or stages of the production process is not accurately known; only the global amount of energy consumed by the factory as a whole is.

In the best-case scenario, in large factories organised as interconnected workshops, the total energy consumption of each workshop will be available as monthly values. This is because, in that kind of factory, specific electricity and gas meters, and even thermal energy or compressed air meters, will have been installed at the points where the workshops connect to the factory’s energy distribution networks. However, this “effort” (i.e. economic investment) in energy meters has nothing to do with energy efficiency concerns: it is devoted to avoiding disputes over the allocation of overhead costs for energy supplies and auxiliary services between the different workshops or departments.

Overhead costs must always be distributed, and given that, financially speaking, the factory (or company) is a closed system, the different departments or workshops will try to use a criterion that benefits them, obviously at the expense of hurting others. For instance, electricity or natural gas costs are often split between departments according to the number of workers, the workshop area, the number of units produced, the number of working hours, the nominal power of the machinery, or even some weighted mix of all of the above. As you can imagine, if total energy costs reach figures with six zeroes, changing the weighting of the different criteria can move hundreds of thousands of euros in the corresponding balance sheets.
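A small numerical sketch (with invented departments and figures) shows how much money the mere choice of criterion can move:

```python
# Split a factory's total energy cost between departments in proportion
# to different criteria. All figures are invented for illustration.
total_energy_cost = 2_000_000  # EUR/year

departments = {
    #            workers  area_m2
    "stamping":  (40,     5_000),
    "assembly":  (150,    8_000),
    "painting":  (25,     3_000),
}

def allocate(index: int) -> dict:
    """Split the total cost in proportion to one criterion."""
    criterion_total = sum(d[index] for d in departments.values())
    return {name: total_energy_cost * d[index] / criterion_total
            for name, d in departments.items()}

by_workers, by_area = allocate(0), allocate(1)
for name in departments:
    print(f"{name:9s} by workers: {by_workers[name]:>11,.0f} EUR "
          f"| by area: {by_area[name]:>11,.0f} EUR")
```

With these invented numbers, the stamping department pays about 372,000 EUR under the headcount criterion but 625,000 EUR under the floor-area criterion, a swing of roughly a quarter of a million euros.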

In any case, whether within a workshop or at the global factory level, the challenge is to determine (i.e. to monitor, with detailed temporal recording) the contributions of the different lines, machines or systems to the energy consumption of the factory. And why is this useful? There are many reasons, which will be discussed in the next post. But speaking in general terms and paraphrasing Master Yoda, now that the saga celebrates its 40th anniversary, it could be said that “Aggregation of energy consumption is the path to the dark side. Aggregation leads to lack of knowledge. Lack of knowledge leads to uncontrollability. Uncontrollability leads to inability to improve.”

To be continued…

What could mechanical simulation do for companies?

The use of computer environments in mechanical engineering has grown significantly in recent decades. Most companies in the industry are aware of the benefits of computer-aided design (CAD) and computer-aided engineering (CAE) systems, which make the traditional tasks associated with the design of machine elements, structures and manufacturing processes far more straightforward. The biggest benefit is obtained when interdisciplinary teams share models, so that designers, analysts and suppliers can evaluate several alternatives, understand design decisions and collaborate to achieve the required functionality, quality and cost. This interaction demands agreed management systems, cross-platform environments, and local and cloud computing and storage capabilities to exploit its full potential.

Nowadays, simulation environments offer new capabilities to solve more complex problems. The major advantage of finite element analysis techniques is that they can handle the coupled equations describing the multiphysics problems of interest to production companies. To the traditional calculation of trajectories, stresses and deflections in mechanical structures, mechanisms and assemblies, these tools now add the ability to model the interaction with surrounding fluids, making it possible to address problems such as combustion in biomass boilers, scour around viaduct piers, or vortex-induced vibrations in slender structures.

The efficient use of these tools allows companies to accelerate innovation: evaluating different design alternatives in a short period of time, experimenting on prototypes, learning the real performance of the process or product, updating the virtual model, simulating it against untested conditions, and optimising it before it goes to market. However, some companies are unable to realise the full potential of their software investments, because the simulation sometimes remains disconnected from the production line and the methodological cycle described above is never completed. To address this problem, CARTIF offers technological services in design, simulation, prototyping and testing, ranging from conceptual design to manufacturing and manufacturing supervision, applied to the automotive, renewable energy, chemical, agricultural, building, infrastructure and industrial machinery sectors.

Blockchain and the electric market customers

In a previous post I tried to explain the Blockchain technology. On this occasion, I will try to explain how customers in the electricity market could benefit from it.

One of the most interesting applications of Blockchain is the smart contract. While a traditional contract is a piece of paper on which two or more parties express their conditions and commitments, a smart contract is a computer program in which the conditions and commitments are coded and automatically executed when the conditions are fulfilled. Currently, smart contracts are restricted to simple agreements in very specific applications. The Blockchain technology assures the fulfilment of the contract commitments with no need for a third supervising party. Smart contracts are expected to reduce costs, speed up contract management and enable almost real-time audits. Ethereum is one Blockchain platform that supports smart contracts.
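To make the idea tangible: real Ethereum smart contracts are written in languages such as Solidity, but the logic of “conditions and commitments as code” can be mimicked in a few lines of Python. Everything below (parties, balances, prices) is invented for illustration.

```python
# Toy illustration of a smart contract: a condition checked against
# external data, and a commitment (payment) that executes by itself.
class EnergySaleContract:
    """Pay the seller automatically once delivery is confirmed."""

    def __init__(self, seller, buyer, kwh, price_per_kwh):
        self.seller, self.buyer = seller, buyer
        self.kwh, self.price = kwh, price_per_kwh
        self.settled = False

    def on_meter_reading(self, delivered_kwh, balances):
        # The "condition": the meter confirms the agreed delivery.
        if not self.settled and delivered_kwh >= self.kwh:
            amount = self.kwh * self.price
            balances[self.buyer] -= amount   # the "commitment":
            balances[self.seller] += amount  # payment executes itself
            self.settled = True

balances = {"alice": 100.0, "bob": 100.0}
contract = EnergySaleContract("alice", "bob", kwh=10, price_per_kwh=0.2)
contract.on_meter_reading(delivered_kwh=10.4, balances=balances)
print(balances)  # {'alice': 102.0, 'bob': 98.0}
```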

In the energy distribution sector, smart contracts could play the role of today’s control algorithms which, among other duties, are in charge of managing the energy flows between storage and generation depending on the energy surplus. A first approach to smart contracts in the energy sector is POWR, developed by the company Oneup. The prototype runs in a neighbourhood where all the houses have solar panels installed. The energy not used in one house is offered to the neighbours, while neighbours who need energy request it from them. Blockchain is used to record the energy flows between neighbours. The smart contract is stored in mini-computers attached to the meters of every house; it continuously supervises the conditions coded in the contract and executes the commitments as soon as they are met. Payments are made in the platform’s own cryptocurrency.

A similar example can be found in New York. The Brooklyn Microgrid project is building a microgrid to which the neighbours are connected. They have solar panels installed on the roofs of their premises, use the energy they produce, and also trade energy to satisfy their neighbours’ needs. This peer-to-peer market is supported by TransActive Grid, an initiative developed by LO3 Energy and ConsenSys using Ethereum technology. The project is studying how a microgrid autonomously managed by a group of people could behave. In the future, the neighbours could become the owners of the microgrid under a cooperative scheme.

(Image: a Sharge participant installing the charging device at home)

Beyond smart contracts, Blockchain technology is being demonstrated in other ways. One example is Sharge, a company that has developed a Blockchain-based technology enabling an electric car driver to charge the battery from any domestic plug enrolled in the programme. The house owner installs a small device on a plug; the car driver unlocks the device with a smartphone and, after completing the charge, the plug owner is paid in a cryptocurrency. A similar idea is being developed by Slock.it and RWE in the BlockCharge project. In both cases, the goal is a payment system for charging electric vehicles that needs neither a contract nor an intermediary, agent or broker.

There are also cryptocurrencies designed to encourage the generation of solar energy, like Solarcoin. Others seek to enable energy exchange between machines, like Solether; in this case, Blockchain meets the Internet of Things paradigm.

Blockchain is a technology that could benefit energy users and foster the use of renewable energy. It will also empower energy users, in particular domestic ones. While the technology is being developed and tested, the legal and regulatory framework should be revised to remove barriers that could jeopardise the use of Blockchain-based technologies.

What is ‘Blockchain’?

“Blockchain” is the technology supporting Bitcoin, the infamous cryptocurrency, known both for being the first in widespread use and for its reported role in some criminal activities. Blockchain is also the technology underlying Ethereum, a platform for implementing smart contracts. Interest in Blockchain is growing because it promises disruptive changes in banking, insurance and other sectors closely involved in everyday life. In this blog entry, I will try to explain what Blockchain is and how it works. In the next entry, I will present some uses in the energy sector.

Blockchain is an account book, a ledger. It contains the records of transactions made between two parties, like “On April 3, John sold 3 kilos of potatoes to Anthony for 1.05 euros”. The way Blockchain works prevents any malicious change to the records. This feature is not granted by a supervisor; it is a consequence of the consensus reached by all the peers participating in the Blockchain. This has consequences of paramount importance. For instance, when Blockchain is used to implement a payment system, like Bitcoin, a bank is no longer needed to supervise and facilitate the transactions. It would not even be necessary to have a currency as we know it today.

Blockchain is a decentralised application running on a peer-to-peer protocol, like the well-known BitTorrent, which implies that all the nodes in the Blockchain are connected to each other. The ledger is stored in all the nodes, so every node keeps a complete copy of it. The last component is a decentralised verification mechanism.

The verification mechanism is the most important part, because it is in charge of assuring the integrity of the ledger. It is based on consensus among the nodes, and there are several ways to implement it; the most popular are proof-of-work and proof-of-stake.

Proof-of-work is the most common verification mechanism. It is based on solving a problem that requires a certain amount of computing effort. In a nutshell, the problem is to find a code, called a hash, from the content of a block (a block is a set of recent ledger entries). The hash is unique for a given block: in practice, two different blocks will always have different hashes. The majority of the nodes must agree on the hash value; if some of them find a different hash, i.e. if there is no consensus, the transactions in the block are rejected.
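As a rough illustration of the mechanism (a simplified sketch, not Bitcoin’s exact block format), the following Python code chains blocks by their hashes and performs the “work” by searching for a nonce that makes the SHA-256 hash of the block start with a given number of zeros:

```python
import hashlib
import json

def mine_block(transactions, previous_hash, difficulty=4):
    """Search for a nonce whose block hash starts with `difficulty` zeros."""
    nonce = 0
    while True:
        payload = json.dumps(
            {"tx": transactions, "prev": previous_hash, "nonce": nonce},
            sort_keys=True,
        ).encode()
        block_hash = hashlib.sha256(payload).hexdigest()
        if block_hash.startswith("0" * difficulty):  # the "work" is done
            return block_hash, nonce
        nonce += 1

# Each block references the hash of the previous one, forming the chain;
# tampering with an old block would invalidate every later hash.
genesis_hash, _ = mine_block(
    ["John sold 3 kilos of potatoes to Anthony for 1.05 euros"],
    previous_hash="0" * 64,
)
next_hash, nonce = mine_block(["another transaction"], genesis_hash)
print(genesis_hash, next_hash, nonce)
```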

Applications based on Blockchain can be classified into three categories according to their development status. Blockchain 1.0 covers the virtual cryptocurrencies, like Bitcoin and Ether. Blockchain 2.0 covers the smart contracts: a smart contract is a contract able to execute by itself the agreements contained in it, with no need for a supervisor who verifies compliance. Finally, Blockchain 3.0 develops the smart contract concept further, creating decentralised autonomous organisational units that rely on their own laws and operate with a high degree of autonomy.

In my next post I will present some smart contract applications in the field of energy delivery.

Machine vision for quality control

Machine vision is behind many of the great advances in industrial automation, since it allows the quality control of 100% of the production in high-cadence processes.

A non-automated process can be inspected by the operators involved in it. In a highly automated process, however, inspecting the whole production manually is really costly. Sampling inspection, i.e. determining the quality of a lot by analysing a small portion of the production, has been used as a compromise, but given the increasingly demanding quality requirements on the final product, it is no longer the solution.

It is in this context that the need for automatic quality control systems arises, and visual inspection through machine vision stands out among them. The human ability to interpret images is very high and adapts easily to new situations, but repetitive and monotonous tasks cause fatigue, so the performance and reliability of an operator’s inspection decline rapidly. One must also consider the inherent human subjectivity that makes two different people report different results in the same situation. These are precisely the problems a machine can best address: it never tires, it is fast, and its results are constant over time.

It is logical to think that the aim of a machine vision system is to emulate the virtues of human vision. The first thing we must ask ourselves is: “what do we see with?” A simple question that mere mortals would answer without hesitation: “with the eyes”. However, those of us who work in machine vision would answer quite differently: “with the brain”. Similarly, it is tempting to think that the cameras are in charge of “seeing” in a machine vision system, when that process is really carried out by the image processing algorithms.

Obviously, both answers are simplifications, since the process of vision, natural or artificial, cannot be carried out without involving both the eyes/cameras and the brain/processing, without forgetting another key factor: illumination.

Many efforts have been made to emulate the human capacity to process images. This is why, in the 1950s, the term Artificial Intelligence (AI) was coined to refer to the ability of a machine to display human-like intelligence, including the ability to interpret images. Unfortunately, our knowledge of how the brain works is still very limited, and so is our ability to imitate it. In machine vision, this idea has been developed through what is called Machine Learning (ML), popularised in recent years by the Deep Learning (DL) techniques applied to scene understanding. However, these techniques do not really have intelligence behind them; rather, they are fed a huge number of images previously labelled by people. The processing that classifies the images as expected is treated as a black box and, in most cases, we do not really know why it works or fails.

When machine vision is applied in industry for quality control, there is usually not enough data to apply these techniques, and the behaviour of the system is required to be highly predictable, so they have not yet become popular on the factory floor. That is why, when developing machine vision applications for industry, the objective is to solve well-defined problems: cameras and lighting are selected to enhance, in the image, the characteristics to be inspected, and the system is then endowed with the capacity to interpret the acquired images with really low error rates.
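As an illustration of that classic approach, here is a minimal Python sketch under simplified, invented assumptions: lighting and optics have been chosen so that defects appear as dark spots on a bright background, and a fixed grey-level threshold plus a size criterion decide pass/fail. The synthetic image stands in for a real camera capture.

```python
import numpy as np

# Synthetic stand-in for a camera image: a bright, uniform surface
# with one dark defect blob painted on it.
rng = np.random.default_rng(0)
image = rng.normal(200, 10, size=(120, 120))  # bright background
image[40:46, 70:78] = 60                       # dark defect region

DARK_THRESHOLD = 120     # grey level separating defect from background
MIN_DEFECT_PIXELS = 20   # ignore isolated noisy pixels

defect_mask = image < DARK_THRESHOLD
defect_area = int(defect_mask.sum())

if defect_area >= MIN_DEFECT_PIXELS:
    print(f"REJECT: defect of {defect_area} px detected")
else:
    print("PASS")
```

Real installations refine this idea with calibrated optics, controlled illumination and more robust image processing, but the principle of enhancing the feature of interest and applying a predictable decision rule is the same.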

Finally, the inspection results are stored and used in the production process, both to discard the units that do not meet the quality requirements before adding further value to them, and to improve the manufacturing process and thereby reduce the production of defective units. This information also serves to certify that the product met the quality conditions when it was delivered to the customer.

Among the applications of these techniques are geometric inspection, surface finish inspection, the detection of manufacturing imperfections, product classification, packaging control, colour and texture analysis, and so on.

At CARTIF we have carried out numerous installations of machine vision systems, such as crack and pore detection in large stamped steel parts for car bodies, checking the presence, type and correct placement of car seat components, the detection and classification of surface defects in rolled steel, the inspection of brake discs, the detection of the position of elements for depalletising, the quality control of plastic parts, and the inspection of the heat sealing of food packaging.