The dynamic electricity prices and the washing machines with initiative

The day we all enjoy dynamic electricity prices thanks to the smart grid, we will see washing machines and other home appliances come to life. And they will do so to let us pay less for the energy they need to do their chores. This will be one of the advantages of dynamic prices: prices that change throughout the day to encourage us to use energy when there is a surplus and to dissuade us from using it when there is a shortage.

To better understand how dynamic prices will impact our lives, a research project was conducted in Belgium involving 250 families equipped with smart home appliances, namely washing machines, tumble dryers, dishwashers, water heaters and electric car chargers. Smart home appliances are those that receive information about electricity rates and can make decisions about their own operation. For the purposes of the project, the day was divided into six time slots with different electricity prices according to the energy market. The families involved in the experiment were divided into two groups.

Researchers of the Linear Intelligent Networks project during a demonstration

The first group got information about the next day’s electricity prices through an app installed on a mobile device. They then had to plan the use of their appliances for the next day, considering the prices and their needs.

The second group had appliances that reacted to prices automatically while preserving the owners’ utility. To understand how it worked, imagine a family who wants their dishes ready for dinner at 6 PM. At 8 AM, when they leave home for work, they switch on the dishwasher and indicate the hour the dishes must be ready. If the dishwasher needs two hours to complete the work, it knows it can start at any moment between 8 AM and 4 PM, and it chooses a moment in the time slot with the lowest price. If energy were cheaper after 4 PM, the dishwasher would start at 4 PM anyway, to ensure the dishes were clean and dry when the owners needed them. Other appliances, like the water heater, simply chose the time slots with cheaper energy to keep the water at the desired temperature.
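
The scheduling logic described above can be sketched in a few lines. Everything here is hypothetical: the six price slots, their prices and the two-hour cycle are illustrative, not the actual values used in the Belgian project.

```python
# Minimal sketch of the "smart appliance" decision described above.
# The six time slots and their prices (EUR/kWh) are hypothetical.
SLOTS = [  # (start_hour, end_hour, price)
    (0, 7, 0.10), (7, 11, 0.22), (11, 15, 0.16),
    (15, 18, 0.25), (18, 22, 0.28), (22, 24, 0.12),
]

def cheapest_start(switch_on, deadline, duration):
    """Pick the start hour (integer, for simplicity) that minimises the
    average price paid over `duration` hours, finishing by `deadline`."""
    def price_at(hour):
        for start, end, price in SLOTS:
            if start <= hour % 24 < end:
                return price
        raise ValueError(hour)

    def avg_price(start):
        return sum(price_at(h) for h in range(start, start + duration)) / duration

    candidates = range(switch_on, deadline - duration + 1)
    return min(candidates, key=avg_price)

# Switched on at 8 AM, two-hour cycle, must finish by 6 PM (18 h):
print(cheapest_start(8, 18, 2))  # starts at 11, when the 0.16 slot opens
```

Extending this to real tariffs mainly means replacing the hard-coded slot table with the prices received from the grid.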

The customers in the first group found the system annoying, and they left the experiment. Those in the second group, however, found that the method did not affect their comfort and that their appliances preferred to work at night. On top of that, there was a reduction in the electricity bill: 20% for dishwashers, 10% for washing machines and tumble dryers, and 5% for water heaters.

One of the findings of the project was that customers do not like to be on the lookout for the next day’s prices. This result is quite surprising if we consider the success of Opower, a company that claimed to be capable of reducing bills, energy use and CO2 emissions using a customer information system quite similar to the one the Belgians used with the first group: the one based on getting information the day before to make decisions in advance. But today Opower belongs to Oracle, maybe because this big company was more interested in Opower’s data and knowledge about how people demand energy than in the possible benefits for the environment, the electric grid and customers’ wallets. Anyway, it seems the original Opower spirit remains alive.

Thanks to the smart grid, our washing machines will soon be connected to the power company through the Internet, and they will be in charge of deciding when to work in order to reduce our electricity bill. After that, if washing machine makers manage to design a machine capable of ironing the clothes, our happiness will be complete.

Best practices in energy efficiency in industry projects

Traditionally, the factors taken into account in manufacturing processes were economics, management, production and so on. This situation has changed in recent years: energy efficiency and sustainable management are now fundamental aspects that many companies have incorporated into their processes. Aware of that reality, CARTIF accompanies companies as they adopt the “Factories of the Future” concept. An example of this work is the REEMAIN project.

REEMAIN moves toward zero-carbon manufacturing and Energy Efficiency 2.0 through the intelligent use of renewable energy technologies and resource-saving strategies, considering energy purchase, generation, conversion, distribution, utilization, control, storage and re-use in a holistic, integrated way.

In addition, the REEMAIN project has given us the opportunity to expand our knowledge and experience in the world of Resource- and Energy-Efficient Manufacturing. During the demonstration actions at the factories, the team experimented with energy- and material-saving technologies and processes and, of course, tested their effectiveness.

As the project comes to an end, we have produced a Best Practices Book as a way of sharing our experience with other professionals in the material and energy efficiency manufacturing domain.

The REEMAIN Best Practice Book summarises the key findings from over four years of work on the project. It contains recommendations addressed to the whole community involved in this kind of project (designers, research institutions, factory owners, workers, contractors, public bodies, investors, etc.), intended to help anyone who decides to get involved in an efficiency improvement project within a factory.

The book features 18 best practices, based on our experience searching for and testing efficiency measures in our three demo factories: GULLON (biscuits), BOSSA (textiles) and SCM (iron & steel). Three main thematic areas were identified: best practices on “design”, best practices on “operation and maintenance”, and best practices on “exploitation & dissemination”.

Each practice is presented in a short, visual format, composed of: title, description (itself a recommendation), stakeholders, replicability, practical guidelines and things to avoid, impact rating and, finally, the REEMAIN practical experience.

The Best Practice Book is available online for free download.

Artificial vision in hot stamping

With this post, I would like to show a very clear example of how the intelligent use of a suitable artificial vision system can solve a major problem in a production line at a reasonable price.

The body of our vehicle consists of a multitude of metallic parts, each with its own requirements. The automotive industry manufactures these parts through a sheet-metal forming process called stamping. In this process, a metal sheet is placed on a die and fixed; a punch then pushes the sheet into the die, generating the desired cavity.

Depending on the temperature of the steel blanks, two types of stamping are defined: cold stamping and hot stamping. Here we will focus on hot stamping, which is mainly applied to elements with high structural requirements, such as reinforcements, pillars, etc.

Image captured by the vision system at the exit of the furnace

In this process the steel blank is heated above the austenitization temperature, obtaining high ductility, and is then rapidly cooled to achieve the martensitic hardening of the sheet. The pieces obtained reach high strength, complex shapes can be formed, and springback effects are reduced. This allows, among other things, improving the passive safety of our cars and reducing their weight.

In this manufacturing process, the steel blanks leave the furnace at high speed, at a temperature of around 900-950 ºC; they stop abruptly in a fixed position and a robot then picks them up and loads them into the press as quickly as possible, to avoid their cooling before the press stroke.

The problem arises from the difficulty of ensuring a fixed position with mechanical fasteners. This is due, among other things, to the speed of the line, the great variety of references, the high temperature of the steel blanks (which cool very quickly at any point of contact) and the internal characteristics of the furnace (which can measure up to 30 m).

An incorrect position means that the robot fails to pick up the steel blank or, worse, picks it up badly and places it incorrectly in the press, producing a faulty press stroke that stops the line and damages the tooling.

In this case, artificial vision is the best choice for telling the robot whether the actual position of the steel blank is correct. The most important task of the vision system is to correctly segment the steel blank in the image in order to accurately determine its position.
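
As an illustration of that segmentation step (a minimal sketch, not the actual CARTIF algorithm), a hot blank radiates so brightly that a fixed intensity threshold often separates it from the background, after which image moments give its position and orientation:

```python
import numpy as np

# Minimal illustration (not CARTIF's actual algorithm): a glowing blank
# appears much brighter than the background in a visible-range camera with
# good near-infrared sensitivity, so a simple intensity threshold can
# segment it; image moments then give its position and orientation.

def blank_pose(image, threshold=200):
    """Return the (row, col) centroid and orientation (radians) of the
    bright region, or None if nothing exceeds the threshold."""
    mask = image >= threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    r0, c0 = rows.mean(), cols.mean()          # centroid
    # Second central moments give the principal-axis orientation.
    mu_rr = ((rows - r0) ** 2).mean()
    mu_cc = ((cols - c0) ** 2).mean()
    mu_rc = ((rows - r0) * (cols - c0)).mean()
    angle = 0.5 * np.arctan2(2 * mu_rc, mu_cc - mu_rr)
    return (r0, c0), angle

# Synthetic test image: dark background with one bright rectangular "blank".
img = np.zeros((120, 160), dtype=np.uint8)
img[40:60, 30:110] = 250                       # hot blank, axis-aligned
centroid, angle = blank_pose(img)
print(centroid, angle)                         # ~(49.5, 69.5), angle ~0
```

A production system would add per-reference calibration and robustness to reflections, but the core idea of segmenting by intensity remains the same.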

Position-detection application developed by CARTIF

A priori, given the intense infrared radiation emitted by the blanks due to their high temperature, the easiest alternative for this task seems to be industrial infrared cameras. This solution presents two problems: the high cost of this equipment and the low resolution of infrared sensors.

The working area in which the steel blanks are positioned is very wide, owing to the size of the parts and because production often runs in batches, handling up to four units simultaneously. Given the low resolution of these sensors, several cameras would be needed to achieve the required positioning precision.

At CARTIF we have been developing more economical solutions, using industrial cameras in the visible electromagnetic spectrum with higher sensitivity in the infrared range. The resolution of these cameras is much higher than that of infrared cameras, which makes it possible to increase the accuracy of the measurements.

This has allowed companies such as Renault to obtain a robust and configurable system that avoids undesirable line stops and extends the useful life of their tooling, leading to a considerable improvement in the production line.

Efficiency Wars (Episode VI) – The Return of Bohr

Low-cost alternative innovations: the barometer, and how to think outside the box

I finished my previous post commenting on how an ILM approach (disaggregating the energy consumption of a factory) can be a financially insurmountable challenge for factories whose energy consumption is highly distributed.

The market offers several alternatives for industrial measurement systems, designed by major equipment manufacturers such as SIEMENS, ABB or SCHNEIDER, capable of providing hyper-exhaustive monitoring (several measurements per second) of the energy consumption of the different elements in a production chain. However, the cost of the necessary hardware, of the required computing and communications installation, and of the software licenses makes such systems quite expensive. As a consequence, they remain a luxury available only to large multinationals, which also have several similar factories in different locations and, therefore, better purchase negotiation capacity and easy, high internal replicability. In addition, their production processes are highly automated and computerized through the latest-generation MES (Manufacturing Execution System) systems, so they already have the necessary IT and communications infrastructure; they only lack the investment in hardware and the “upgrade” of their software licenses.

For other small and medium-sized factories, these solutions can mean “using a sledgehammer to crack a nut”: the investment in monitoring will never pay for itself (in terms of savings produced). Nevertheless, this type of factory is increasingly interested in optimizing its energy costs with a reasonable economic investment, more appropriate to its billing volume.

Every science student has heard the supposed anecdote of Niels Bohr and the barometer in one of its many versions. Although the anecdote is invented rather than real, its moral of trying to think differently when solving a problem is more relevant than ever. The difference is that we now call it “thinking outside the box”. The question now is not how to measure the height of a building with the help of a barometer, but how to measure and monitor the energy consumption of a factory without spending the factory’s entire one-year investment budget.

The answer, as in the problem of the barometer, is not unique, since it depends on each particular factory. Fortunately, the IoT revolution is producing economies of scale in some of the necessary components. Continuing with the ‘Star Wars’ tribute, a low-cost energy consumption monitoring system can be compared to an X-wing starfighter formed by the following four wings:

  • The falling cost of electronics, which is enabling new low-cost, non-invasive sensors such as Hall-effect current sensors, ultrasonic flow sensors and infrared temperature sensors.
  • Open-source hardware and software platforms for capturing and processing signals with low-cost devices such as Arduino, Raspberry Pi and others.
  • The emergence of new wireless communication protocols oriented to M2M (Machine-to-Machine) communication, with low bandwidth and energy consumption and high resistance to interference, such as Zigbee, BT LE or Wi-Fi HaLow.
  • Software systems for storing and processing all the recorded data: database systems, tools for the automatic calculation of multiple indicator reports, and displays showing the current values of the most important parameters, hosted either on physical servers on the factory intranet or on rented virtual cloud servers.
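
As a sketch of how these four wings fit together, the following simulates one reading cycle: a fake current waveform stands in for a Hall-effect probe, RMS current is converted to power, and the result is logged to a small database. All names, values and the schema are illustrative, not part of any specific product.

```python
import math
import sqlite3
import time

# Sketch of the software "wing": log periodic power readings into SQLite
# and summarise them. The sensor is simulated (a Hall-effect current probe
# would supply real samples); machine names and schema are illustrative.

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def read_current_samples(n=200, amplitude=14.1):
    """Fake mains current waveform (one full cycle) standing in for a sensor."""
    return [amplitude * math.sin(2 * math.pi * i / n) for i in range(n)]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (ts REAL, machine TEXT, power_w REAL)")

VOLTAGE = 230.0
for machine in ("press_1", "oven_2"):
    i_rms = rms(read_current_samples())
    db.execute("INSERT INTO readings VALUES (?, ?, ?)",
               (time.time(), machine, VOLTAGE * i_rms))

for machine, avg_w in db.execute(
        "SELECT machine, AVG(power_w) FROM readings GROUP BY machine"):
    print(f"{machine}: {avg_w:.0f} W")
```

In a real deployment the loop would run on the low-cost device itself and push readings over an M2M link instead of writing to a local in-memory database.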

These new technologies are not yet mature, and the industry can obviously be very reluctant to use them. If there is something that scares a production or maintenance manager, it is an experimental system that has not been field-tested for years. However, we are not talking about modifying the control systems of processes and machines, but about deploying a parallel system throughout the factory that monitors and records the energy consumption of the main elements and production systems. We are talking about detecting possible energy inefficiencies, correcting them and obtaining the corresponding economic savings. And we are talking about doing so with a reasonable investment cost, one that an SME can afford.

Digital Transformation, to the moon and back

It is July 20th, 1969, 20:18:04 UTC, and after 102 hours, 45 minutes and 39.9 seconds of travel, “the Eagle has landed” and Neil is about to descend the ladder and touch an unknown surface for the first time: “That’s one small step for [a] man, one giant leap for mankind”. That year, Neil Armstrong, Michael Collins and “Buzz” Aldrin changed the world, riding to the moon on the biggest rocket ever built.

Some people may have forgotten it, and others, like me, were not yet born, but the space race had its own digital transformation, similar to the one now foreseen for industry and the general public. The Apollo program was the culmination of that first digital revolution in space exploration.

The landing achievement was, to a great extent, made possible by the electronics on board both the Apollo Command Module (CM) and the Lunar Module (LM): the AGC, or Apollo Guidance Computer. It was one of the first computers based on integrated circuits. With “just” 32 kg of weight and a mere 55 W of consumption, this technical wonder was able to coordinate and control many tasks of the space mission, from calculating the direction and navigation angles of the spacecraft to commanding the reaction control jets to orient it in the desired direction. Moreover, the computer included one of the first demonstrations of “fly-by-wire”, in which the pilot does not command the engines directly but through control algorithms programmed into the computer. In fact, this computer was the basis for the subsequent control of the Space Shuttle and of military and commercial fly-by-wire systems.

As usual with this kind of breakthrough, it did not happen overnight, but through a series of earlier incremental innovations.

By the 1950s, the MIT Instrumentation Laboratory (IL) had designed and developed the guidance system of the Polaris ballistic missiles. Initially built with analog computers, they soon decided to go digital to achieve the accuracy required for computing missile trajectories and control.

Before President Kennedy set the ambitious goal of “… going to the moon in this decade …”, seven years ahead of the first lunar landing, and after the launch of Sputnik in 1957, a Mars exploration study had started at MIT’s IL. The design of a Mars probe set the basic configuration of the future Apollo guidance system: a set of gyroscopes to keep the probe oriented, a digital computer, and an optical telescope for orientation relative to the moon and the stars.

The launch of Sputnik in 1957 fueled America’s ambition to put the first human in space, but it also fed the public debate on the role of pilots in the space race, a discussion similar to current views on the role of the worker in the factory. Should the astronaut just be payload, or take full control of the spacecraft? Once aircraft pilots had earned the task of being at the controls, several tests showed that it was nearly impossible for them to control every aspect of a mission, given the fast reactions required and the sheer number of different control commands. Hence, pilots would need some automatic and reliable help, and that was one of the main functionalities of the AGC.

Reliability was thus one of the main concerns of the mission. The Polaris program took 4 years to design a guidance control for a weapon that was in the air for a couple of minutes. Kennedy’s bet of taking a man to the moon in less than 7 years meant developing a guidance and control system for a spacecraft that had to work without failure on a trip lasting more than a week. The required levels of reliability were more than two orders of magnitude higher. If a Polaris missile failed, a new one would take off. A failure in the spacecraft meant killing an astronaut.

Much of the reliability of the flight rested on the shoulders of the Apollo Guidance Computer, and at some point in the program there were too many planned tasks, like complex guidance maneuvers, to be physically hardwired into electronics. Achieving those tasks required software. Although software was barely taken into account at the beginning of the program, it meant the difference between achieving the goal and the program’s complete failure. The computer was the interface between the astronaut and the spacecraft, which in the end meant that computer software “controlled” the spacecraft, a revolution for that time. Today software is everywhere, but in the 1960s software was seen as a set of instructions on punched cards. The AGC’s software programs (frozen 3 to 4 months before each launch) were “hard-wired” as magnetic cores and wires in a permanent (and reliable) memory, but they saved a lot of time, effort and budget. In fact, it can be said that Apollo software was more like “firmware”, in today’s vocabulary.

Today’s challenge of revolutionizing industry through digital transformation cannot happen without the help of digital enablers. 48 years ago, digital electronics and the first software programs were the “digital enablers” of that “one small step for [a] man, one giant leap for mankind”. Today, “digital transformation is not an option” sounds like a cliché, a hype, a slogan from digital providers; but looking back at history, the digital transformation of the Apollo program meant the difference between achieving the moon landing and not.

Efficiency Wars (Episode V) – The ROI strikes back

Watch out, the game might not be worth the candle.

In my previous post, I explained how beneficial it could be for a factory to disaggregate (by direct measurement, not by estimations based on nominal values) its energy consumption among the different lines, machines and systems that compose it. Jedi jokes aside, such energy disaggregation is an example of the well-known rule “measure to know, know to control, control to improve”. Down at a more practical level, the availability and study of this information will allow:

  • to map the energy consumption within the factory
  • to visualize, through a simple pie chart, the energy contributions of the different elements
  • to set priorities about which zones or machines should be modified or replaced due to their low energy efficiency
  • to compare the energy efficiency of the different lines of a factory
  • to compare the energy costs of the different products manufactured on the same production line
  • to detect inappropriate consumption due to device malfunction or sub-optimal working protocols
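
As a toy illustration of the first two points (the figures are invented, not from any real factory), the shares behind such a pie chart, and a simple priority list, can be computed like this:

```python
# Sketch of the pie-chart disaggregation above: hypothetical annual
# energy use per machine (kWh), turned into shares and a priority list.
consumption_kwh = {
    "press line": 410_000,
    "curing oven": 350_000,
    "compressed air": 120_000,
    "lighting": 60_000,
    "HVAC": 60_000,
}

total = sum(consumption_kwh.values())
shares = {m: kwh / total for m, kwh in consumption_kwh.items()}

# Machines worth auditing first: largest consumers until ~80% is covered.
priority, covered = [], 0.0
for machine, share in sorted(shares.items(), key=lambda x: -x[1]):
    priority.append(machine)
    covered += share
    if covered >= 0.8:
        break

for m in priority:
    print(f"{m}: {shares[m]:.0%} of total")
```

With real measured data instead of invented figures, the same few lines already answer the “where do we start?” question below.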

OK, let’s suppose we have already convinced the factory managers of the convenience of measuring to improve, and of doing it through the disaggregation of consumption. How do we start?

The most obvious approach is to monitor the energy consumption of each machine with its own sensor or meter. For electricity consumption, a network analyser must be installed in the electrical cabinet housing the electrical protections associated with the equipment. As long as there is space available in the corresponding cabinet, this installation usually requires stopping the machines for only a few minutes. For machinery that consumes natural gas, things get more complicated and expensive: the gas supply pipe must be cut to install the new gas meter, and the safety requirements and verification of the new welds require a 24-48 hour interruption of supply and machinery stop.

In addition, there may be machines or equipment with significant consumption of compressed air, or of heating (or cooling) thermal energy in the form of hot (or cold) water. In these cases, specific meters must be installed in the supply pipes of the corresponding services.

In any case, meters formerly incorporated a mechanical (or electronic) counting and accumulation mechanism. Periodically, an assigned worker would record their readings in the corresponding logbook, and those readings would later be introduced manually into the computerized cost management system. Nowadays this approach is obsolete: like any manual data collection process, it is costly, inefficient and error-prone. In other words, it is not enough to install the meters; they must also be equipped (and all industrial models are) with a communications module that sends the measured data to a computerized storage system. It will also be necessary to deploy a new communications network (or extend the existing one, where applicable) to connect all the newly installed sensors with the computer system that will periodically record the energy consumption data.

This type of consumption monitoring is known as Intrusive Load Monitoring (ILM). Its main advantage is the precision of its results; its great disadvantage is the high expense it entails. In factories where consumption is highly distributed among a multitude of machines, the cost of the equipment and installation of an ILM system can be a huge investment compared to the annual cost of the factory’s energy consumption.

It should not be forgotten that the purpose of an energy disaggregation system is to help reduce energy consumption and, therefore, the cost associated with it. Obviously, it is not possible to predict precisely the economic savings that disaggregation will produce; it is usual to work with ranges, based on previous experience, between the most and least favourable values. No matter how wide the potential savings are, if the initial investment is unreasonably high, the corresponding Return on Investment (ROI) figures will be beyond any threshold acceptable to the relevant Chief Financial Officer.
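
As a hypothetical illustration of that last point (all figures are invented, and simple payback is used instead of a discounted ROI), a savings range translates into a payback-period range that the CFO can compare against a threshold:

```python
# Hypothetical illustration of the ROI argument above: a savings *range*
# gives a payback-period range, which a CFO compares to a threshold.
def payback_years(investment, annual_savings):
    return investment / annual_savings

investment = 60_000                          # EUR, monitoring system (invented)
savings_low, savings_high = 6_000, 15_000    # EUR/year, estimated range (invented)

worst = payback_years(investment, savings_low)    # 10.0 years
best = payback_years(investment, savings_high)    # 4.0 years
print(f"payback between {best:.1f} and {worst:.1f} years")
```

If even the best-case payback exceeds the company’s investment threshold, the ILM approach is ruled out before any hardware is bought, which is precisely the point of running the numbers first.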

To be continued…