Even though the term Geographic Information System (GIS) is well known, many of you may not know what applications it has or its relevance in the energy field. In short, GIS (or SIG, in Spanish) is the software responsible for handling data that have a geometric component and can therefore be placed on a map in their precise position. These data can be 2D or 2.5D* (described with points, lines and polygons), 3D, or point clouds (LIDAR data). Moreover, these geographic data are normally associated with attribute tables, where information about each feature is stored. For example, we can have a map of the provinces of Spain and, in the attribute table, assign to each polygon representing a province its demographic data, economic data, etc.
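As a rough illustration of the idea (plain Python with made-up attribute values; a real GIS would use dedicated software or a geospatial library), a layer's attribute table can be thought of as a row of data attached to each geometry:

```python
# Conceptual sketch of a GIS layer: each feature couples a geometry
# (here a simplified polygon, as a list of x/y vertices) with a row
# of attributes. The numbers below are illustrative, not real data.
provinces = {
    "Valladolid": {
        "geometry": [(0.0, 0.0), (2.0, 0.0), (2.0, 1.5), (0.0, 1.5)],
        "attributes": {"population": 519_000, "gdp_per_capita_eur": 24_000},
    },
    "Burgos": {
        "geometry": [(2.0, 0.0), (4.0, 0.0), (4.0, 2.0), (2.0, 2.0)],
        "attributes": {"population": 356_000, "gdp_per_capita_eur": 27_000},
    },
}

# Querying the attribute table is then ordinary data handling:
largest = max(provinces, key=lambda p: provinces[p]["attributes"]["population"])
print(largest)  # Valladolid
```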
One of the most remarkable aspects of these systems is not only the ability to visualize elements in their precise geographic location, but also that layers of information can be overlapped, allowing geographical elements that describe different realities to be visualized at the same time. This is quite familiar from phone apps, for example GPS apps, where we see a base map (a city map or a satellite image) with several layers placed on top of it, such as street names, stores, etc.
Apart from using these systems to find our way around cities (which is no small thing), their real potential lies in performing spatial analyses that would be impossible by other means. This way, we can answer questions like the following:
Which areas would be flooded by this river?
If an incident occurs in this area, which are the closest hospitals? What would be the best ambulance route in terms of distance? And in terms of time?
Where should the stops of this bus line be placed so that they are spaced at most 600 meters apart? Which areas of the city would benefit, considering a 10-minute walking radius from each stop?
How have forest areas changed in a given zone? Is there a risk of desertification?
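As a toy illustration of the kind of spatial query listed above (coordinates are made up and expressed in meters in a local projected system; real analyses would use a GIS or a geometry library), the bus-stop coverage question reduces to a distance test:

```python
import math

# Which homes fall inside a 600 m service radius of a proposed bus stop?
# All coordinates are illustrative values in a local metric system.
stop = (1000.0, 1000.0)
homes = {"A": (1200.0, 1100.0), "B": (1700.0, 1000.0), "C": (950.0, 1400.0)}

def within_radius(point, centre, radius_m):
    """True if the straight-line distance to the centre is within the radius."""
    return math.dist(point, centre) <= radius_m

served = sorted(h for h, xy in homes.items() if within_radius(xy, stop, 600.0))
print(served)  # ['A', 'C']
```

A real GIS would run the same idea against thousands of features, and with network (walking-time) distances instead of straight lines.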
These are only a small sample of the reach of GIS, which proves extremely useful for planning activities in a wide range of fields (risk and accident management, traffic management, transport networks, environmental impact assessment, agriculture, natural hazard assessment…). Focusing on the energy field, GIS also has great potential to support the development of energy plans, compliance with energy directives and the monitoring of results. For example, we could identify which areas need an energy retrofit. In this respect it is worth mentioning, as an example, the map developed by Columbia University of estimated energy consumption in New York City.
Additionally, different scenarios can be evaluated, measuring the effectiveness of the different actions or checking whether a given area could be supplied by another type of energy source (renewable, for example). By calculating these indicators, it can be checked whether the objectives imposed by a given directive are met.
At CARTIF, and in particular in the Energy Division, we exploit GIS and its applications to support compliance with the European directives in the energy field, more specifically the “Clean Energy for All Europeans” directive package. Moreover, special attention is paid to the study of data structures and the standards that should be followed to ensure interoperability. In this sense, it is worth highlighting the open standards proposed by the Open Geospatial Consortium (OGC), as well as the INSPIRE Directive, which defines the infrastructure for spatial information in Europe and will be applicable in 2020.
The latter aims at harmonizing and offering geospatial information in Europe across 34 themes. Even though none of them is strictly related to energy (energy aspects can be assigned to built elements, such as buildings (BU)), the study of the most relevant energy attributes is crucial at this moment, prior to the implementation of the INSPIRE Directive, as the European Commission has made clear by defining a project that studies the potential of the Directive in the energy field: the “Energy Pilot”. CARTIF, aiming for innovation and alignment with the EU, collaborates in this project, interacting with one of the Commission's reference centres: the Joint Research Centre in Ispra.
*Note for the curious: a cube, for example, can be considered 2.5D when, instead of being defined with eight vertexes carrying x, y and z values, it is defined only with the four upper ones, since those carry the “z” value, in contrast to the four lower vertexes, where this value would be 0.
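To make the footnote concrete, here is a minimal sketch (plain Python, unit-cube coordinates chosen for illustration) of how a cube can be stored in 2.5D and expanded back to 3D:

```python
# 2.5D storage of a unit cube: only the four top vertices are kept,
# each carrying the height in its z value; the base is implied at z = 0.
top_face = [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]

# Recovering the full 3D solid is just projecting the top face down:
base_face = [(x, y, 0) for x, y, _ in top_face]
print(len(top_face) + len(base_face))  # 8 vertices of the cube
```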
When it comes to self-consumption, there is a concept we must never forget: energy efficiency. This efficiency must be understood from both the generation and the consumption side.
Let us first analyze efficiency from the consumer's point of view. It is evident that if my household consumes less electricity, the cost of my self-consumption facility will be lower. Are we taking any efficiency measures to make this happen? A first step is to reduce lighting consumption at home: replacing halogen and compact fluorescent bulbs with LED ones will significantly reduce the electricity used for lighting. Another step is to replace our old appliances with class A+++ ones, which have a lower level of consumption.
Efficiency measures that are not always within reach of most budgets involve improving the insulation of our home. The insulation of the building envelope is fundamental: insulating façades, ceilings and floors, together with a suitable choice of windows, can reduce the consumption of our building.
Other measures simply involve changing consumption habits, something we must learn if we want to implement self-consumption at home. Simple gestures such as turning off unused light bulbs and electrical appliances, avoiding leaving electronic devices in stand-by (phantom energy) and running appliances during the hours of the day when more energy is generated will allow efficient management of our system. This can be automated by implementing an energy management system (EMS) at home, although it is an added cost.
If we are thinking about buying an electric car, maybe this is the time to choose one with V2G (vehicle-to-grid) technology, with its vehicle-to-home (V2H) and vehicle-to-building (V2B) variants. This technology allows the energy stored in an electric car to be injected into the electricity grid, or into a dwelling or building, using the car battery as an electrical storage system. In this way a better integration of renewable energies into the electrical system can be achieved.
Perhaps these measures will allow a home to consume only 1,500 kWh a year, compared to the current average of 3,000 kWh in Spain. This would reduce the cost of our self-consumption facility, which would encourage many households to consider such an installation.
From the generation side, progress is being made by leaps and bounds. The efficiency of current photovoltaic panels, which use new materials with a longer useful life, is far beyond that of panels manufactured 10 years ago, and the price per watt is lower, reaching values of €0.8 per installed watt. Likewise, battery technology has made batteries more efficient and more durable, supporting more recharge cycles at a lower price.
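As a quick back-of-the-envelope check using the €0.8/W figure above (the 2 kW system size is an assumption chosen for illustration, roughly in line with the low-consumption household mentioned earlier):

```python
# Rough installed-cost estimate for a small rooftop PV system.
price_per_watt = 0.8   # € per installed watt, figure from the text
system_size_w = 2000   # illustrative 2 kW system (assumed, not from the text)
cost = price_per_watt * system_size_w
print(cost)  # 1600.0 (euros)
```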
And is the electricity grid ready for self-consumption? According to the operator of the Spanish electricity system, the network is prepared for hundreds of thousands of self-consumers to connect to it.
What are the electric companies doing? Power companies are realizing that self-consumption will sooner or later settle permanently in each of our homes, and that the time has come to move ahead. Some companies are starting to market self-consumption kits, control systems and maintenance contracts that ensure the proper functioning of the system.
What is needed for everything to start working? Simply reaching an agreement in which distributors (power companies) begin to see prosumers (current users and future producers) as potential allies rather than competitors.
On the one hand, the electricity companies claim that the use of the distribution network should be paid for not only by current consumers but also by future producers, and, as we know, these costs represent more than 50% of the current electricity tariff. But it is also true that the companies will save generation costs that are difficult to quantify at present.
But what would happen if a large number of users decided to become their own power generators and disconnect from the network for good? That is when the government steps in: depending on the laws that are passed, and on whether they favour the consumer, the companies or both, the consumer will make that decision.
Any processing plant, whether continuous, batch or hybrid, can improve its economic, safety and environmental indicators in two ways: by improving its processing equipment or by improving the control of that equipment.
Improving the processing equipment is usually a task that requires large investments, since it almost always involves acquiring new equipment or, at best, expensive remodelling.
On the contrary, these performance indicators can be substantially improved through control, in most cases without any investment in new instrumentation and control hardware. This is because, in practically all cases, there is a wide margin to improve the performance indicators of a processing plant through its regulation.
This margin for improvement has multiple origins. The most common causes are: the control system is not well designed or tuned; due to ignorance or haste, not all the capabilities of the available control system are used; the PLC programmer or the process engineer is not a control expert; the dynamics of the processes under automatic regulation are not known in the required depth; or the plant has not been designed under the integrated design approach.
The actions that can improve the performance of control loops without any investment are also diverse and numerous, and we will review them in future posts: improving the tuning of the regulator, redesigning the controller, implementing feedforward compensation of disturbances, improving the tuning of cascade regulation loops, redesigning or retuning the level controllers of buffer tanks where necessary, using an advanced control algorithm available on (or supported by) the controller, reducing couplings between loops, improving the mounting of the measuring probe, etc.
The vast majority of processing plants are automated with control structures (basic control, cascade control, split-range control, selective control, coupled loops, etc.) based on the universal PID controller, in all its variants (P, PI, PD, PID, PI_D, etc.).
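As a rough sketch of what such a regulator computes, here is a textbook positional PID in plain Python. Industrial implementations add anti-windup, derivative filtering and bumpless transfer, and the gains and first-order plant model below are illustrative, not tuned for any real process:

```python
# Minimal discrete PID regulator in positional form.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                    # integral term
        derivative = (error - self.prev_error) / self.dt    # derivative term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Driving a crude first-order process (a tank level, say) to a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.0, dt=0.1)
level = 0.0
for _ in range(400):  # 40 s of simulated time
    u = pid.update(setpoint=1.0, measurement=level)
    level += (u - level) * pid.dt  # Euler step of d(level)/dt = u - level

print(f"level after 40 s: {level:.3f}")  # settles close to the setpoint
```

The integral term is what removes the steady-state error here; with a pure P controller the level would settle below the setpoint.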
Despite its longevity and the development of multiple advanced control techniques, PID control maintains an overwhelming presence in the process industry.
Its use in industry is so extensive that all the surveys known to the author conclude unanimously that more than 95% of existing control loops are of the PID type. However, many surveys also conclude that a high percentage of PID control loops worldwide are operated in manual mode, while a similar percentage operates defectively. For example, as shown in the following figure, [1] reports that only 16% of PID regulation loops are optimally tuned and therefore perform excellently.
There is no doubt that, in most cases, incorrect or poor tuning of the controller can be the cause of the poor performance of the control loop or of irregularities in its operation.
However, it should not be forgotten that automatic regulation systems are holistic systems and, as such, must be analyzed as a whole and not only through their component parts. That is why it is necessary to review the other components of the loop before deciding what action to take on it.
Hence, the procedure must in all cases begin with a field review of all the components of the loop (controller, process, actuator, measuring device and communication channels), as well as an analysis of possible coupling with other process loops.
The result of this first phase will determine which concrete action should be performed to solve the poor performance of the automatic regulation loop.
CARTIF offers this service to optimize the performance of the regulation systems of processing plants. The optimization reduces oscillations and variability in the production plant, making the regulation system more accurate, faster, more stable and safer, thereby improving its efficiency, safety, environmental impact and profitability.
In the next post, the execution procedure for each of the possible actions will be described, starting with the simplest one: retuning the controller.
A little more than a year ago, in another post on this blog, our colleague Sergio Saludes explained what deep learning is and detailed several of its applications (such as the victory of a machine based on these networks over the world champion of Go, considered the most complex game in the world).
Well, in these 16 months (an eternity in this field) there has been great progress in both the number of applications and the quality of the results obtained.
Taking the field of medicine as an example, diagnostic tools based on deep learning are increasingly used, in some cases achieving higher success rates than human specialists. In specific specialties such as radiology, these tools are proving to be a major revolution, and they have also been successfully applied in related industries such as pharmaceuticals.
Autonomous driving projects, so in vogue these days, mainly use tools based on deep learning to compute many of the decisions to be made in each situation. Regarding this issue, there is some concern about how these systems will decide what actions to take, especially when human lives are at stake, and there is already an MIT webpage where the general public can collaborate in creating an “ethics” of the autonomous car. In reality, these devices can only decide what they have previously been programmed (or trained) to decide, and there is certainly a long way to go before machines can decide for themselves (in the conventional sense of “decide”, although this would lead to a much more complex debate on other issues, such as the singularity).
Regarding the Go program mentioned above (which beat the world champion 4 to 1), a new version (AlphaGo Zero) has been developed that beat that previous version 100 to 0, knowing only the rules of the game and training against itself.
Other areas, such as language translation, speech comprehension and voice synthesis, have also advanced very noticeably, and the use of personal assistants on mobile phones is beginning to become widespread (if we overcome the natural rejection or embarrassment of “talking” to a machine).
All these developments come at a high computational cost, especially for the training of the neural networks used. In this respect, progress is being made on the two fronts involved: much faster and more powerful hardware, and more evolved and optimized algorithms.
In view of the advances made in this field, it seems that deep learning is the holy grail of artificial intelligence. This may not be the case, and we may simply be looking at one more new tool, but there is no doubt that it is an extremely powerful and versatile tool that will give rise to new and promising developments in many applications related to artificial intelligence.
And of course there are many voices warning of the potential dangers of this type of intelligent system. The truth is that it never hurts to anticipate the potential risks of any technology, although, as Alan Winfield says, it is not so much artificial intelligence that should be feared as artificial stupidity. As always in these cases, the danger of any technology lies in the misuse that can be made of it, not in the technology itself. Faced with this challenge, what we must do is promote mechanisms that regulate any unethical use of these new technologies.
We are really only at the beginning of another golden era of artificial intelligence, as there have been several before, although this time it does seem to be the definitive one. We don't know where this stage will take us, but, trusting that we will be able to take advantage of the possibilities it offers, we should be optimistic.
The day we all enjoy dynamic electricity prices thanks to the smart grid, we will see our washing machines and other home appliances come to life. And they will do so to let us pay less for the energy they need to do their duties. This will be one of the advantages of dynamic prices, which change throughout the day to encourage us to use energy when there is a surplus and to dissuade us from using it when there is a shortage.
To better understand how dynamic prices will impact our lives, a research project was conducted in Belgium involving 250 families equipped with smart home appliances, namely washing machines, tumble dryers, dishwashers, water heaters and electric car chargers. Smart home appliances are those that receive information about electricity rates and can make decisions about their own operation. For the purposes of the project, the day was divided into six time slots with different electricity prices according to the energy market. The families involved in the experiment were divided into two groups.
The first group got information about the next day's electricity prices through an app installed on a mobile device. They then had to plan the use of each appliance for the next day, considering the prices and their needs.
The second group had appliances that reacted to prices automatically while preserving the owners' comfort. To understand how it worked, imagine a family who wants their dishes ready for dinner at 6 PM. At 8 AM, when they leave home for work, they switch on the dishwasher and indicate the hour the dishes must be ready. If the dishwasher needs two hours to complete the job, it knows it can start at any moment between 8 AM and 4 PM, and it chooses a moment within the cheapest time slot. If energy were cheaper after 4 PM, the dishwasher would still start at 4 PM, to ensure the dishes are clean and dry when the owners need them. Other appliances, like the water heater, simply chose the time slots with cheaper energy to keep the water at the desired temperature.
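The slot-choosing logic described above can be sketched as follows (the price slots, hours and €/kWh values are illustrative, not the project's actual tariffs, and the cycle is assumed to fit within one price slot):

```python
# Pick the cheapest feasible start hour for a 2-hour appliance cycle
# that must finish by 18:00 (6 PM). Prices are illustrative.
prices = {8: 0.10, 10: 0.08, 12: 0.05, 14: 0.07, 16: 0.12}  # start hour -> €/kWh

def best_start(prices, earliest, latest_finish, duration):
    """Cheapest start hour such that the cycle ends by latest_finish."""
    feasible = {h: p for h, p in prices.items()
                if earliest <= h and h + duration <= latest_finish}
    return min(feasible, key=feasible.get)

start = best_start(prices, earliest=8, latest_finish=18, duration=2)
print(start)  # 12 -> the cycle runs 12:00-14:00 in the cheapest slot
```

If the cheapest slot were after the deadline, the feasibility filter is what forces the appliance to fall back to the latest admissible start, as in the 4 PM example above.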
The customers in the first group found the system annoying and left the experiment. However, those in the second group found that the method did not affect their comfort and that their appliances preferred to work at night. Besides this, there was a reduction in the electricity bill: 20% for dishwashers, 10% for washing machines and tumble dryers, and 5% for water heaters.
One of the findings of the project was that customers do not like being on the lookout for the next day's prices. This result is quite surprising if we consider the success of the Opower company, which claimed to be capable of reducing bills, energy use and CO2 emissions using a customer information system quite similar to the one the Belgians used with the first group, based on getting information the day before to make decisions in advance. But today Opower belongs to Oracle, maybe because this big company was more interested in the data and knowledge Opower had about how people use energy than in the possible benefits for the environment, the electricity grid and customers' wallets. Anyway, it seems Opower's original spirit remains alive.
Thanks to the smart grid, our washing machines will soon be connected to the power company through the Internet and will be in charge of deciding when to work in order to reduce our electricity bill. After that, if washing machine makers manage to design a machine capable of ironing the clothes, our happiness will be complete.
The main interest of the RoboCup event lies in the advances in the development of service robots, whose goal is to help humans in their daily lives.
At the end of July, RoboCup 2017 took place in Nagoya, Japan, and for the first time I had the great opportunity to participate in this competition, which brings together roboticists from around the world. RoboCup is currently the world's largest robotics competition (this year almost 500 teams from 50 different countries participated), with a proud 20-year history: it started with one league of football-playing robots and now also covers many application areas, such as rescue, logistics and service robots in people's homes (RoboCup@Home).
As part of SQPReL, a very international team made up of members of L-CAS (University of Lincoln) and LabRoCoCo (University of Rome “Sapienza”), I participated in the RoboCup@Home league. In this league the robots must be able to perform different household tasks in order to help humans in their daily lives. Maybe one day most of the population will have one of these robots at home.
After 6 days of hard work in Nagoya, on top of the previous 2 months of preparation, the team was very proud to achieve a good 3rd place in the RoboCup@Home Social Standard Platform League.
Nice, but what actually is RoboCup, and what is it for?
RoboCup was born as an international joint project to promote AI, robotics and related fields: an attempt to foster AI and intelligent robotics research by providing standard problems. Setting challenging goals is one of the most effective ways to promote science and engineering research, and these kinds of competitions encourage the comparison of developments and collaboration within the research community. Focusing on RoboCup@Home, this league aims to develop service and assistive robots with high relevance for future domestic applications.
And when is the future? Are there no robots able to perform these tasks yet?
These kinds of robots have had a major presence in research centres in recent years; Sacarino, the robot designed by CARTIF, is one example. However, there are few applications where robots are part of our daily activities, due to the difficulty of evaluating service robot applications and of obtaining feedback mechanisms aimed at improving the overall performance of the robot. Benchmarking in robotics has emerged as a solution to evaluate the performance of robotic systems in a reproducible way and to allow comparison between different research approaches, and this is where RoboCup@Home comes in, providing benchmarking through scientific competitions.
RoboCup@Home uses a set of benchmark tests to evaluate the robots' abilities and performance in a realistic, non-standardized home environment (it changes every year and is not known until the day before the competition). The focus lies on the following domains:
Navigation: path planning and navigating safely while avoiding (dynamic) obstacles.
Mapping: building a representation of a partially known or unknown environment.
Person Recognition: detecting and recognizing a person.
Person Tracking: tracking the position of a person over time.
Object Recognition: detecting and recognizing objects in the environment.
Object Manipulation: the ability of grasping, moving or placing an object.
Speech Recognition: recognizing and interpreting spoken user commands.
Gesture Recognition: recognizing and interpreting human gestures.
Cognition: understanding and reasoning about the environment.
How are those abilities evaluated?
The competition is organized in several tasks (hosting a cocktail party, acting as a waiter in a restaurant, following verbal commands given by a human) that the robots must accomplish autonomously. These tasks are integrated tests: each one comprises a set of functionalities that must be properly integrated to achieve good performance. However, the scoring system allows partial credit if only part of the test is achieved.
In a future post I will explain more in detail the tasks that were carried out this year during the competition and our team’s experiences.