Disaggregation of energy consumption? Why? To avoid the dark side
Within the world of management, the aphorism "If you can't measure it, you can't improve it" is often attributed to the twentieth-century Austrian-born thinker Peter Drucker, whose writings laid part of the philosophical and practical foundations of the modern business corporation. He is indeed considered the founder of modern management.
Anyone with a minimum knowledge of quality control will have heard of the "Deming Cycle", also known as the Plan-Do-Check-Act (PDCA) method. Measurement is essential in management: it is part of the administrative process and it is central to applying the PDCA method.
However, physicists know the expression does not come from the field of corporate management but from experimental thermodynamics. It was the nineteenth-century British mathematician and physicist William Thomson, Lord Kelvin, who formulated it in the following terms: "What is not defined cannot be measured. What is not measured cannot be improved. What is not improved always breaks down." By the way, William Thomson became Lord Kelvin, the first British scientist to be admitted to the House of Lords, in recognition of his work in thermodynamics and electricity. He is buried in Westminster Abbey, next to the tomb of Isaac Newton.
Having thus defended the honour of "physics" against "management", the idea of measuring in order to improve remains one of the most important ground rules of green manufacturing.
One of the problems encountered in the REEMAIN project when starting to improve the energy efficiency of production processes is the aggregation of energy consumption: the individual consumption of the main machines or stages of the production process is not accurately known; only the total amount of energy consumed by the factory as a whole is.
In the best-case scenario, in large factories organized as interconnected workshops, the total energy consumption of each workshop will be available as monthly values. This is because, in those kinds of factories, dedicated electricity and gas meters, and sometimes even thermal energy or compressed air meters, will have been installed at the points where each workshop connects to the factory's energy distribution networks. However, this "effort" (i.e. economic investment) in energy meters has nothing to do with energy efficiency concerns. It is devoted to avoiding arguments over the allocation of overhead costs for energy supplies and auxiliary services between the different workshops or departments.
Overhead costs must always be distributed, and given that financially the factory (or company) is a closed system, the different departments or workshops will try to use a criterion that benefits them, obviously at the expense of the others. For instance, electricity or natural gas costs are often split between departments according to the number of workers, the workshop area, the number of units produced, the number of working hours, the nominal power of the machinery, or even some weighted mix of all of these parameters. As you can imagine, if total energy costs reach figures with six zeroes, changing the weighting of the different criteria can shift hundreds of thousands of euros in the corresponding economic balances.
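To see how much the choice of weighting matters, here is a toy sketch in Python of such an allocation; the departments, weights and figures are entirely made up for illustration.

```python
# Toy illustration: splitting a shared energy bill among departments
# using a weighted mix of criteria. All names and figures are hypothetical.

departments = {
    #             workers  area_m2  units_produced
    "Stamping":  (40,      5000,    120000),
    "Painting":  (25,      3000,     90000),
    "Assembly":  (80,      8000,    150000),
}

TOTAL_ENERGY_COST = 2_400_000  # euros per year (made-up figure with six zeroes)

def allocate(weights):
    """Split the total cost according to a weighted mix of the three criteria."""
    totals = [sum(values[i] for values in departments.values()) for i in range(3)]
    allocation = {}
    for name, values in departments.items():
        share = sum(w * v / t for w, v, t in zip(weights, values, totals))
        allocation[name] = round(share * TOTAL_ENERGY_COST)
    return allocation

# Two plausible weightings of (workers, area, units produced)
print(allocate((0.8, 0.1, 0.1)))  # split mostly by headcount
print(allocate((0.1, 0.1, 0.8)))  # split mostly by units produced
# The same bill, under different weightings, moves hundreds of thousands
# of euros between departments.
```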
In any case, either within a workshop or at the global factory level, the challenge is to determine (i.e. monitor, with temporally detailed recording) the contributions of the different lines, machines or systems to the energy consumption of the factory. And why is this useful? There are many reasons, which will be discussed in this post. But, speaking in general terms and paraphrasing Master Yoda, now that Star Wars is celebrating its 40th anniversary, it could be said that "Aggregation of energy consumptions is the path to the dark side. Aggregation leads to lack of knowledge. Lack of knowledge leads to uncontrollability. Uncontrollability leads to inability to improve."
The use of computer environments in the mechanical engineering field has grown significantly in recent decades. Most companies in the industry are aware of the benefits of computer-aided design (CAD) and engineering (CAE) systems, which make the traditional tasks associated with the design of machine elements, structures and manufacturing processes far more straightforward. The biggest benefit is obtained when interdisciplinary teams share models so that designers, analysts and suppliers can evaluate several alternatives, understand design decisions and collaborate to meet the requirements of functionality, quality and cost. This interaction requires agreed management systems, cross-platform environments, and local and cloud computing and storage capabilities to realize its full potential.
Nowadays, simulation environments offer new capabilities to solve more complex problems. The major advantage of finite element analysis techniques is that they can handle the coupled equations describing the multiphysics problems of interest to production companies. The traditional calculations of trajectories, stresses and deflections in mechanical structures, mechanisms and assemblies can now be coupled with the interaction with surrounding fluids, making it possible to address problems such as combustion in biomass boilers, scour around viaduct piers, or vortex-induced vibrations in slender structures.
The efficient use of these tools allows companies to accelerate innovation: evaluating different design alternatives in a short time, running experiments on prototypes, learning the real performance of the process or product, updating the virtual model, simulating it under conditions not yet tested and optimizing it before it goes to market. However, some companies are unable to exploit the full potential of their software investments, because the simulation sometimes remains disconnected from the production line and the methodological cycle discussed above is never completed. To address this problem, CARTIF offers technological services of design, simulation, prototyping and testing, ranging from conceptual design to manufacturing and manufacturing supervision, applied to the automotive, renewable energy, chemical, agricultural, building, infrastructure and industrial machinery sectors.
In a previous post I tried to explain Blockchain technology. On this occasion I will try to explain how customers in the electricity market could benefit from it.
One of the most interesting applications of Blockchain is the smart contract. While a traditional contract is a piece of paper on which two or more parties express their conditions and commitments, a smart contract is a computer program in which the conditions and commitments are coded and automatically executed when the conditions are fulfilled. Currently, smart contracts are restricted to simple agreements for very specific applications. Blockchain technology ensures the fulfilment of the contract commitments with no need for a third supervising party. Smart contracts are expected to reduce costs and speed up contract management; besides this, they will enable almost real-time audits. A Blockchain platform that supports smart contracts is Ethereum.
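To make the idea concrete, here is a minimal conceptual sketch in Python (not Ethereum or Solidity code, and every name and figure in it is hypothetical): the agreement is expressed as code, and the commitment executes by itself as soon as the condition holds.

```python
# Minimal conceptual sketch of a "smart contract" (not real Ethereum/Solidity code):
# the agreement is code, and the commitment fires automatically when conditions hold.

class ToySmartContract:
    def __init__(self, condition, commitment):
        self.condition = condition      # callable: state -> bool
        self.commitment = commitment    # callable: state -> None
        self.executed = False

    def check(self, state):
        """Called whenever the observed state changes (e.g. a new meter reading)."""
        if not self.executed and self.condition(state):
            self.commitment(state)
            self.executed = True

# Hypothetical agreement: "when the buyer's deposit reaches 100 tokens, deliver the goods".
state = {"deposit": 0, "goods_delivered": False}

contract = ToySmartContract(
    condition=lambda s: s["deposit"] >= 100,
    commitment=lambda s: s.update(goods_delivered=True),
)

state["deposit"] = 60
contract.check(state)   # condition not met yet, nothing happens
state["deposit"] = 120
contract.check(state)   # commitment executed automatically, no third party involved
print(state)            # {'deposit': 120, 'goods_delivered': True}
```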
Smart contracts in the energy distribution sector could play the role of today's control algorithms. Among other duties, these algorithms are in charge of controlling the energy flows between storage and generation depending on the energy surplus. A first approach to smart contracts in the energy sector is POWR, developed by the company Oneup. The prototype runs in a neighbourhood where all the houses have solar panels installed. The energy that is not used in one house is offered to the neighbours and, at the same time, neighbours who need energy request it from their neighbours. Blockchain is used to record the energy flows between neighbours. The smart contract is stored in mini-computers attached to the meters of every house; it continuously checks the conditions coded in the contract and executes the commitments as soon as the conditions are met. Payments are made in the project's own cryptocurrency.
A similar example can be found in New York. The Brooklyn Microgrid project is building a microgrid to which the neighbours are connected. They have solar panels installed on the roofs of their premises. Neighbours use the energy they produce, but they also trade energy to satisfy their neighbours' needs. This peer-to-peer market is supported by TransActive Grid, an initiative developed by LO3 Energy and ConsenSys, using Ethereum technology. The project is studying how a microgrid autonomously managed by a group of people could behave. In the future the neighbours could become the owners of the microgrid under a cooperative scheme.
Sharge participant installing Sharge at home
Beyond smart contracts, Blockchain technology is being demonstrated in other ways. One example is Sharge, a company that has developed a Blockchain-based technology enabling an electric car driver to charge the battery at any domestic plug enrolled in the program. The house owner installs a small device on a plug, the car driver unlocks the device with a smartphone and, after completing the charge, the plug owner is paid in a cryptocurrency. A similar idea is being developed by Slock.it and RWE in the BlockCharge project. In both cases, the goal is to develop a payment system for charging electric vehicles with no need for a contract or an intermediary, agent or broker.
There are also cryptocurrencies designed to encourage the generation of solar energy, like Solarcoin. Others seek to enable energy exchange between machines, like Solether. In this case Blockchain meets the Internet of Things paradigm.
Blockchain is a technology that could benefit energy users and foster the use of renewable energy. It will also empower energy users, in particular domestic ones. While the technology is being developed and tested, the legal and regulatory framework should be revised to remove barriers that could jeopardise the use of Blockchain-based technologies.
Blockchain is the technology supporting Bitcoin, the notorious cryptocurrency: the first to be widely used, and one reportedly involved in some criminal activities. Blockchain is also the technology underlying Ethereum, which in addition provides a means to implement smart contracts. There is increasing interest in Blockchain because it promises disruptive changes in banking, insurance and other sectors closely tied to everyday life. In this blog entry, I will try to explain what Blockchain is and how it works. In the next entry, I will present some uses in the energy sector.
Blockchain is an account book, a ledger. It contains the records of transactions made between two parties, like "On April 3, John sold three kilos of potatoes to Anthony, who paid 1.05 euros". The way Blockchain works prevents any malicious change to the records. This feature is not granted by a supervisor; it is a consequence of the consensus reached by all the peers participating in the Blockchain. This has consequences of paramount importance. For instance, when Blockchain is used to implement a payment system, like Bitcoin, a bank is no longer needed to supervise and facilitate the transaction. It might not even be necessary to have a currency as we know it today.
Blockchain is a decentralised application running on a peer-to-peer protocol, like the well-known BitTorrent, which implies that all the nodes in the Blockchain are connected to one another. The ledger is stored in all the nodes, so every node keeps a complete copy of it. The last component is a decentralised verification mechanism.
The verification mechanism is the most important part because it is in charge of assuring the integrity of the ledger. It is based on consensus among nodes and there are several ways to implement it. The most popular ones are the proof-of-work and the proof-of-stake.
Proof-of-work is the most common verification mechanism. It is based on solving a problem that requires a certain amount of computing effort. In a nutshell, the problem is to find a code, called a hash, from the content of a block (a block is a set of recent ledger entries). The hash is unique to a given block: two different blocks will, in practice, always have different hashes. The majority of the nodes must agree on the hash value; if some of them find a different hash, i.e. if there is no consensus, the transactions in the block are rejected.
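As a simplified illustration (a sketch of the general idea, not the actual Bitcoin or Ethereum implementation), the Python snippet below hashes a small block of ledger entries and searches for a nonce that makes the hash start with a given number of zeros. Finding the nonce is the costly part; verifying it takes a single hash computation.

```python
import hashlib
import json

# Simplified proof-of-work sketch (not the real Bitcoin/Ethereum code).
# A block is a set of recent ledger entries plus a link to the previous block.

block = {
    "previous_hash": "0000a1b2c3d4",  # hash of the previous block in the chain (made up)
    "transactions": [
        "On April 3, John sold 3 kg of potatoes to Anthony for 1.05 EUR",
    ],
}

def block_hash(block, nonce):
    """Hash of the block content together with a candidate nonce."""
    payload = json.dumps(block, sort_keys=True) + str(nonce)
    return hashlib.sha256(payload.encode()).hexdigest()

def proof_of_work(block, difficulty=4):
    """Find a nonce whose hash starts with `difficulty` zeros (the costly part)."""
    nonce = 0
    while not block_hash(block, nonce).startswith("0" * difficulty):
        nonce += 1
    return nonce

nonce = proof_of_work(block)
print("nonce:", nonce)
print("hash :", block_hash(block, nonce))
# Any node can verify the work instantly by recomputing a single hash;
# tampering with a transaction changes the hash and invalidates the proof.
```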
Applications based on Blockchain can be classified into three categories according to their development status. Blockchain 1.0 covers virtual cryptocurrencies like Bitcoin and Ether. Blockchain 2.0 covers smart contracts: contracts with the ability to execute by themselves the agreements contained in them, with no need for a supervisor to verify compliance. Finally, Blockchain 3.0 develops the smart contract concept further to create decentralised autonomous organisational units that rely on their own rules and operate with a high degree of autonomy.
Machine vision is behind many of the great advances in industrial automation, since it makes it possible to quality-check 100% of the production in high-cadence processes.
A non-automated process can be inspected by the operators themselves during production. In a highly automated process, however, inspecting the whole production manually is really costly. Sampling inspection, i.e. judging the quality of a lot by analyzing a small portion of it, has been used as a compromise solution, but given the increasingly demanding quality requirements on the final product, sampling inspection is no longer enough.
It is in this context that the need for automatic quality control systems arises, among which visual inspection through machine vision stands out. The human ability to interpret images is very high and adapts easily to new situations. However, repetitive and monotonous tasks cause fatigue, so the performance and reliability of an operator's inspection decline rapidly. One must also consider the inherent human subjectivity that makes two different people give different results in the same situation. These are precisely the problems a machine addresses best, because it never tires, it is fast and its results are consistent over time.
It is logical to think that the aim of a machine vision system is to emulate the virtues of human vision. For this, the first thing we must ask ourselves is: "what do we see with?" A simple question that most mortals would answer without hesitation: "with the eyes". However, those of us who work in machine vision would answer quite differently: "with the brain". Similarly, one might think that the cameras are in charge of "seeing" in a machine vision system, when that process is really carried out by the image processing algorithms.
Obviously, in both cases this is a simplification of the problem, since the process of vision, natural or artificial, cannot be carried out without involving both the eyes/cameras and the brain/processing, without forgetting another key factor: illumination.
Many efforts have been made to try to emulate the human capacity to process images. This is why, in the 1950s, the term Artificial Intelligence (AI) was coined to refer to the ability of a machine to display human-like intelligence, including the capacity to interpret images. Unfortunately, our knowledge of how the brain works is still very limited, and so is our ability to imitate it. In the field of machine vision this idea has been developed through what is called Machine Learning (ML), popularized in recent years by Deep Learning (DL) techniques applied to scene understanding. However, these techniques do not really have intelligence behind them; rather, they are fed a huge number of images previously labeled by people. The processing that classifies the images as expected is treated as a black box and, in most cases, we do not really know why it works or fails.
When machine vision is applied to industrial quality control, there is usually not enough data to apply these techniques, and the behavior of the system is required to be highly predictable, so they have not yet become widespread in industry. That is why, when developing machine vision applications for industry, the objective is to solve well-defined problems: cameras and lighting are selected to enhance the characteristics to be inspected in the image, and the system is then given the capacity to interpret the acquired images with very low error rates.
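As a minimal, hypothetical example of such a well-defined problem, the sketch below uses OpenCV on a synthetic image (so it is self-contained) to segment dark spots on a bright, evenly lit surface and reject the part if any spot is too large; a real installation would rely on carefully chosen optics, lighting and calibrated tolerances.

```python
import cv2
import numpy as np

# Minimal, hypothetical sketch of an industrial inspection step:
# segment dark spots ("defects") on a bright, evenly lit surface and
# reject the part if any spot exceeds a size threshold.

# Synthetic image standing in for a camera frame: bright part with two dark pores.
frame = np.full((200, 300), 220, dtype=np.uint8)
cv2.circle(frame, (80, 100), 6, 40, -1)    # small pore, within tolerance
cv2.circle(frame, (210, 60), 15, 40, -1)   # large pore, a real defect

# 1. Threshold: lighting is chosen so defects appear much darker than the surface.
_, mask = cv2.threshold(frame, 128, 255, cv2.THRESH_BINARY_INV)

# 2. Find connected blobs and measure their area in pixels.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
areas = [cv2.contourArea(c) for c in contours]

# 3. Decision rule: any blob larger than the tolerance rejects the part.
MAX_DEFECT_AREA = 200  # pixels; in practice calibrated against real samples
result = "REJECT" if any(a > MAX_DEFECT_AREA for a in areas) else "PASS"
print(f"{len(areas)} spots found, areas={areas}, result={result}")
```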
Finally, the inspection results are stored and used in the production process, both to discard the units that do not meet the quality requirements before further value is added to them, and to improve the manufacturing process and thus reduce the production of defective units. This information is also used to certify that the product met the quality conditions when it was delivered to the customer.
Among the applications in which these techniques can be used are geometric inspection, surface finish inspection, detection of manufacturing imperfections, product classification, packaging control, color and texture analysis, and so on.
At CARTIF we have carried out numerous installations of machine vision systems, such as crack and pore detection in large stamped steel parts for car bodies, checking the presence, type and correct placement of car seat components, detection and classification of surface defects in rolled steel, inspection of brake discs, locating parts for depalletising, quality control of plastic parts and inspection of the heat sealing of food packaging.
"It is April 21, 2011. SKYNET, the superintelligent artificial system that became self-aware two days earlier, has launched a nuclear attack on us humans. On April 19, the SKYNET system, formed by millions of computer servers all across the world, initiated a geometric self-learning process. The new artificial intelligence concluded that all of humanity would attempt to destroy it and impede its capability to continue operating."
It seems the apocalyptic vision of Artificial Intelligence depicted in the Terminator science fiction movies is still far from being a reality. SKYNET, our nemesis in the films, was a collection of servers, drones, military satellites, war machines and Terminator robots built to perform one relevant task: safeguarding the world.
Today's post is focused on a different but equally relevant task: manufacturing the products of the future. In our previous posts, we reviewed the key ingredients of Industry 4.0, the so-called digital enablers. The last key ingredient, Cyber-Physical Systems (CPS), can be seen as the "SKYNET" of manufacturing, and we defined it as a mixture of different technologies. Now it is time to be more specific.
The term "cyber-physical" itself is a compound name designating a mixture of virtual and physical systems that together perform a complex task. The rapid evolution of Information and Communication Technologies (ICT) is enabling the development of services that are no longer contained in the shells of the devices we buy. Take, for example, digital personal assistants like Siri from Apple, Alexa from Amazon or Cortana from Microsoft. These systems help us with everyday tasks but are not mere programs inside our smartphones. They are a mixture of hardware devices (our phones and Internet servers) that take signals (our voice) and communicate with software in the cloud, which does the appropriate processing and responds after a few milliseconds with an appropriate, in-context answer. The algorithms running on the servers process the speech using sophisticated machine learning and build the appropriate answer. The combination of user phones, tablets and Internet servers (the physical side) and processing algorithms (the cyber side) forms a CPS. It evolves and improves over time thanks to the millions of requests and interactions (10 billion a week, according to Apple) between the users and the intelligent algorithms. Another example of a CPS can be found in the energy sector, where the electrical network formed by smart meters, transformers, transmission lines, power stations and control centres makes up the so-called "Smart Grid".
The same philosophy can be applied in industrial environments, where IT technologies are deployed at different levels of complexity. The fast deployment of IoT solutions, together with cloud computing connected through Big Data analytics, opens the door to so-called industrial analytics. Rather than providing theoretical explanations, some examples of CPS applications in manufacturing environments will be more illustrative:
CPS for OEM manufacturers, where key components (e.g. industrial robots) are analyzed in real time by measuring different internal signals. The advantages are multiple: the OEM manufacturer will be able to analyze the usage of each robot and compare it with other robots in the same or different factories, improve the next generation of robots, and give advice on maintenance and upgrades (both hardware and software).
CPS for operators: a company providing subcontracted services (e.g. maintenance) will be able to gather information in the field through smart devices to optimize its operations, for example by controlling spare-parts stock in a centralized way instead of maintaining multiple local stocks across different sites.
CPS for factories: by gathering on-field information from manufacturing lines (e.g. cycle times), it is possible to build virtual models of the factories and run off-line simulations that support decisions (e.g. process optimization) or study the impact of changes to the production lines (e.g. building a new car model on the same line) before deciding on new investments, as sketched below.
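As a toy illustration of that last case (a hypothetical sketch, not a real factory model; every figure is made up), the Python snippet below uses measured mean cycle times from three stations to estimate the line's output per shift, before and after a proposed upgrade to one station.

```python
import random

# Toy "virtual factory" sketch: estimate line output from measured cycle times
# (all figures are hypothetical) and compare it against a proposed line change.

def simulate_output(cycle_times_s, shift_hours=8, runs=500):
    """Monte Carlo estimate of units produced per shift for a serial line.

    The slowest station (the bottleneck) paces the line; each cycle time is
    drawn with +/-10% variability around the measured mean.
    """
    produced = []
    for _ in range(runs):
        seconds_left, units = shift_hours * 3600, 0
        while True:
            cycle = max(random.uniform(0.9 * t, 1.1 * t) for t in cycle_times_s)
            if cycle > seconds_left:
                break
            seconds_left -= cycle
            units += 1
        produced.append(units)
    return sum(produced) / len(produced)

current_line = [52, 61, 58]    # measured mean cycle times per station (seconds)
proposed_line = [52, 49, 58]   # station 2 upgraded, evaluated before investing

print("current :", round(simulate_output(current_line)), "units/shift")
print("proposed:", round(simulate_output(proposed_line)), "units/shift")
# The off-line model quantifies the impact of the change before touching the real line.
```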
The combination of physical and virtual solutions opens the door to limitless possibilities for factory optimization.