New strategies (and technologies) against huge forest fires


Although we sometimes forget it, forests provide huge benefits to the planet in general and to human beings in particular. They help us mitigate the effects of climate change by acting as carbon sinks, removing huge quantities of carbon dioxide from the atmosphere. Forests nourish the soil and serve as a natural barrier against soil erosion, landslides, floods, avalanches and strong winds. They host more than three quarters of global terrestrial biodiversity and are a source of food, medicine and fuel for more than a billion people.

But forests are seriously threatened by deforestation, climate change and fires. The advance of the agricultural frontier and unsustainable logging cause 13 million hectares of forest to be lost every year. Climate change is giving invasive plant and insect species an advantage over native species, increasing their negative effects. There is also a direct relationship between fires, deforestation and pandemics: the destruction of forests, especially tropical ones such as the Amazon, Indonesia or the Congo, brings human beings into contact with wildlife populations that carry pathogens.


With regard to forest fires, it has been noted that they are becoming less frequent but more destructive. The most terrible of them, the so-called “sixth generation fires”, are ravaging forests across the planet. These fires cannot be fought directly, and they are even capable of modifying the weather around them. Against this type of fire only a defensive strategy works: trying to steer it towards unpopulated areas and hoping that rain will help bring it under control. Not even areas that have hardly ever had fires are spared from this tragedy: 5.5 million hectares have burned in the Arctic Circle in recent years. The Arctic is warming twice as fast as the rest of the planet and, as a result, high-intensity fires are breaking out.

It is clear that preventing fires is fundamental, and for that reason it is necessary to consider strategies that reduce the vulnerability of forests. Looking at our closer context, the European Union forest strategy promotes sustainable forest management that respects the climate and biodiversity, intensifying the surveillance of forests and giving more specific support to silviculture. It is evident that better forestry management is needed, with emphasis on protection and sustainable regeneration. However, forest mass is in steady decline, as reforestation cannot keep up with the rate of deforestation in Europe. Furthermore, European data show a large increase in forestry exploitation in recent years, which reduces the continent's CO2 absorption capacity and possibly points to wider problems with the EU's attempts to fight the climate crisis. Another paradox regarding forests within the EU is that a large part of them is privately owned by timber companies. As a result, the regular logging of these forests, coupled with the private nature of their ownership, makes public awareness and greening even more difficult to achieve. According to satellite data, biomass loss in 2016-2018 increased by 69% compared to the period 2011-2015.

Spain, like all the countries of the Mediterranean area, is especially vulnerable to fires, given the scenario of drought and desertification accelerated by climate change. In Spain we have extensive experience in putting out forest fires: we collaborate at an international level and we manage to extinguish 65% of fires in their outbreak phase (less than 1 hectare), although this sometimes produces the so-called “extinction paradox”: by suppressing every small fire we lose the opportunity for such fires to clear undergrowth, which encourages large and dangerous accumulations of fuel. In Spain, 1,000 million euros per year are devoted to fire extinction, but only 300 million euros to prevention.

Extinction is necessary and positive, but it isn't enough; it is also necessary to invest in other measures (prevention, detection and recovery) that allow forest fires to be faced from a wider and more complete perspective. In this sense, it is very important to take advantage of the new tools offered by recent technological and scientific advances.


For example, the use of images obtained from drones and satellites, together with sensor networks and artificial intelligence techniques, allows fires to be detected faster and more accurately, and several research projects are already underway in various countries: Bulgaria, Greece, Portugal, Lebanon, Korea and many others. There are even challenges organised by the European Space Agency on the use of satellite images and artificial intelligence for fire detection, and similar challenges from NASA, H2O.ai and Cellnex. Another interesting initiative is ALERTWildfire, a consortium of several North American universities that provides cameras and tools to discover, locate and monitor forest fires. There are also commercial fire detection systems, such as this one from Chile, which uses artificial intelligence and several types of sensors, or this one from Portugal.
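To give an idea of the kind of tool involved, the sketch below shows a deliberately minimal image classifier for smoke or fire detection in drone or satellite image tiles. It is not the pipeline of any of the projects mentioned above; the tile size, the network layers and the (unshown) labelled dataset are illustrative assumptions.

```python
# Minimal sketch, not any project's actual pipeline: a small convolutional
# network that classifies fixed-size image tiles as "smoke/fire" vs. "clear".
# Tile size and layer sizes are illustrative choices.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 3)),      # RGB tile, assumed 128x128 pixels
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability of smoke/fire in the tile
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_tiles, train_labels, ...)  # a labelled tile dataset is assumed to exist
```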

In Spain, the Ministries for Ecological Transition and of Agriculture have developed the Arbaria project, which is able to “predict” with a considerable success rate where fires will break out.

Looking for a global approach to the prevention and management of fires, the European project DRYADS, in which CARTIF participates, has been launched. Its objective is the development of a holistic fire management platform based on the optimisation and reuse of state-of-the-art socio-technological resources. These techniques will be applied in the three main phases of forest fires:

  • In the prevention phase, DRYADS proposes the use of a real-time risk assessment tool that can receive multiple ranking inputs and works with a new risk factor indicator driven by a neural network (a minimal, hypothetical sketch of such an indicator is shown after this list). To create a fire-adapted community model, in parallel with the previous activity, DRYADS will use alkali-activated construction materials that incorporate post-fire wood ashes for fire-resistant buildings and infrastructures. DRYADS will also use a variety of technological solutions, such as the Copernicus European satellite infrastructure and swarms of drones for precise forest monitoring.
  • In the detection phase, DRYADS proposes several technological tools that can be adapted to many of the needs of the project: the use of virtual reality for training, wearable devices to protect emergency response teams, and unmanned vehicles (UAVs or drones, ground vehicles and aircraft) to improve the capacity for temporal and spatial analysis, as well as to increase the coverage of the inspected area.
  • Finally, DRYADS will build a new forestry restoration initiative based on modern techniques, such as agroforestry, drones for spreading seeds, and IoT sensors that can adapt the seeding process to the needs of the soil, combined with AI to determine the risk factors after the fire.
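As a purely illustrative example of the neural-network-driven risk indicator mentioned in the prevention phase, the following sketch trains a small regression network on invented weather and fuel features to produce a fire risk index between 0 and 1. The features, values and model size are assumptions made here for clarity, not components of DRYADS.

```python
# Hypothetical sketch of a neural-network risk indicator; all features and
# training values are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative features: temperature (°C), relative humidity (%), wind speed (km/h),
# days since last rain, fuel moisture (%). Target: risk index in [0, 1].
X = np.array([
    [32, 20, 40, 14,  6],
    [18, 70,  5,  1, 25],
    [28, 35, 25,  7, 12],
    [35, 15, 55, 21,  4],
])
y = np.array([0.80, 0.10, 0.50, 0.95])

risk_model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)
risk_model.fit(X, y)

today = np.array([[30, 25, 35, 10, 8]])   # today's (invented) conditions
print(f"Estimated fire risk index: {risk_model.predict(today)[0]:.2f}")
```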

The results of the DRYADS project will be demonstrated and validated under real conditions in several forest areas of Spain, Norway, Italy, Romania, Austria, Germany, Greece and Taiwan.

To sum up, the fight against forest fires cannot focus only on extinction: good, sustainable forest management, based on prevention and on the introduction of modern techniques, is essential to reinforce forests' resilience, the use of their resources and their capacity to recover. This will lead to new opportunities for the rural environment, biodiversity conservation and the fight against climate change. Let us hope that for once the trees let us see the forest and we manage to avoid its destruction.

Improvement of the road maintenance through Artificial Intelligence


We all know that roads are necessary, but we normally only remember them when we find them in bad condition. We take it for granted that they must always be available and in perfect condition, but this requires a great effort in terms of personnel, time and material resources. Spanish roads carry 86% of inland freight transport and 88% of passenger transport. This heavy load of vehicles, together with weather and environmental conditions, causes a high level of wear, with the consequent loss of the road's properties.

This causes users a series of severe inconveniences: the main one is a reduction in road safety, but it also leads to a decrease in travel comfort and an increase in vehicle fuel consumption, with the consequent increase in polluting gas emissions.

It is evident that the rehabilitation, preservation and maintenance of road infrastructure is of fundamental importance, although we all know how annoying it is to come across roadworks. In Europe, and in particular in Spain, we have a good road network, quite dense and well connected, but certainly aged because of the decrease in maintenance expenditure in recent years. It should be remembered that road maintenance requires a high level of investment: according to ACEX, the annual maintenance cost of a motorway is estimated at 80,000€ and that of a conventional road at 38,000€, and our country carries a maintenance deficit of 8,000 million euros. This deficit, without going any further, seems likely to lead to the introduction of tolls on motorways as of 2024. These economic aspects and the high level of service demanded of roads by the logistics and tourism sectors, but especially the need for safe roads, mean that new technologies able to provide innovative solutions for road maintenance are in high demand.

The modern management of roads involves planning maintenance actions before very serious or irreparable damage appears. This approach allows interventions to be undertaken at the most appropriate moment, causing as little inconvenience as possible and maintaining the functional capacity of the road and its economic value, without allowing the network to deteriorate and lose that value. It is true that there are traditional solutions for road preservation that are effective, but they do not make optimal use of the available resources and do not take into account the expected evolution of the road surface when planning the best time to act. To act effectively, it is fundamental, first of all, to know the status of the road network as accurately and objectively as possible. This knowledge is generally obtained through road inspection equipment that makes it possible to evaluate and measure the corresponding parameters. In this way, a large quantity of data about the condition of the road is obtained, which must be managed and interpreted in order to prioritise the maintenance and preservation activities to be carried out. The problem that then arises is the processing of a massive quantity of information, which makes manual evaluation impossible.

One of the main difficulties, therefore, is the extraction of useful information from numerous data sources. For some types of data there are software packages capable of extracting global indices that give a general picture of the current status of the road, but these tools often lack the capacity to predict how the road's condition will evolve and degrade in the future.

Artificial intelligence is increasingly present in many areas of our lives, often without us being aware of it. The application of artificial intelligence techniques can also have a strong impact on road maintenance, because it allows precise information to be extracted from different data sources and relationships to be identified between them that would otherwise go unnoticed with the techniques applied until now. The processing and analysis, through convolutional neural networks, of all the available data (data from road inspection equipment, climate data, traffic intensity…) makes it possible to obtain information that is unachievable with traditional methods. By training and adjusting those networks with massive quantities of data it is possible to obtain, for example, highly reliable pavement degradation models that allow accurate estimation of the most appropriate maintenance actions.
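As a rough illustration of a data-driven degradation model (and far simpler than the convolutional networks mentioned above), the sketch below fits a small feed-forward network that forecasts next year's pavement condition index from a handful of invented inspection and context features. It is not the tool being developed by CARTIF and TPF; every feature, value and hyperparameter is an assumption.

```python
# Simplified sketch of a pavement degradation model: a small regression network
# that forecasts next year's pavement condition index (PCI, 0-100). The feature
# choice, network size and data are assumptions for illustration only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Features: current PCI, pavement age (years), heavy-vehicle traffic (thousands/day),
# annual rainfall (hundreds of mm), freeze-thaw cycles per year.
X = np.array([
    [85.0,  3, 1.2, 6.5, 10],
    [70.0,  8, 4.0, 9.0, 25],
    [60.0, 12, 2.5, 4.0,  5],
    [90.0,  1, 0.8, 7.5, 15],
], dtype="float32")
y = np.array([82.0, 61.0, 55.0, 88.0], dtype="float32")   # PCI one year later

model = tf.keras.Sequential([
    layers.Input(shape=(5,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(8, activation="relu"),
    layers.Dense(1),                      # predicted PCI next year
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=200, verbose=0)

# Forecast for a hypothetical road section.
print(model.predict(np.array([[75.0, 6, 3.0, 8.0, 20]], dtype="float32"), verbose=0))
```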


In this context, CARTIF and the company TPF are actively collaborating in the development of this type of tool, which can represent a major breakthrough in improving road maintenance. There are also other initiatives currently working on similar applications, such as RoadBotics (a spin-off of Carnegie Mellon University), the Spanish company ASIMOB, the University of Waterloo in Canada, the Finnish company Vaisala and the American company Blyncsy.

These tools will not eliminate the need for urgent repairs, as these can have many and varied causes, but they do have a significant impact on preventive and predictive interventions, by making it possible to anticipate road deterioration and thus significantly reduce maintenance costs, shorten the time the road is unavailable and improve the comfort perceived by road users.

There are, finally, other interesting examples of how artificial intelligence tools can help to maintain and improve road safety, such as the work at MIT to predict the points on the road where traffic accidents are likely to occur and act accordingly, or the AI for Road Safety initiative, which uses artificial intelligence to reduce the number of road accidents.

In conclusion, we can say that, thanks to the help of these artificial intelligence tools, in the coming years we will have safer and more serviceable roads, and at the same time we will come across fewer roadworks on our journeys.

New technologies applied to security in confined spaces


Ensuring the safety of workers inside confined spaces is a critical activity in the field of construction and maintenance because of the high risk involved in working in such environments. Perhaps it would be useful, first of all, to clarify what is meant by confined spaces. There are two main types: the so-called ‘open’ ones, which have an opening in their upper part and a depth that makes natural ventilation difficult (vehicle lubrication pits, wells, open tanks, etc.), and ‘closed’ ones with access openings (storage tanks, underground transformer rooms, tunnels, sewers, service galleries, ship holds, underground manholes, transport tanks, etc.). Workers entering these confined spaces are exposed to much greater risks than in other areas of construction or maintenance, and it is therefore essential to apply extreme caution.

Each confined space has specific characteristics (type of construction, length, diameter, installations, etc.) and specific associated risks, which means that each requires solutions highly geared to its particular safety needs.

The ‘conventional’ risks specific to confined spaces are mainly asphyxiation due to lack of oxygen, poisoning by inhalation of contaminants, and fires and explosions. But new ‘emerging’ risks are also appearing, arising from exposure to new building materials such as nanoparticles and ultrafine particles. In addition, as research into new materials advances, so does the understanding of their potential negative effects on human health and of how to prevent them.

The truth is that worker training and current safety regulations seek to anticipate risk situations before they occur in order to avoid them and thus prevent accidents. But several problems arise: on the one hand, the regulations are not always strictly observed (whether due to workload, carelessness, fatigue, etc.), and on the other, some risks are unavoidable. In the case of carelessness, systems can be proposed to minimise this type of error; for the risks that cannot be avoided, systems can be proposed to detect them early and plan the corresponding action protocols.

It should be noted that risk situations do not usually appear suddenly and in most cases can be detected in time to avoid personal misfortune. Several problems remain, however: the detection of these risks is usually done with spot measurements using the portable equipment that workers must carry; workers are often not checked to ensure they enter the premises with the corresponding protective equipment; and continuous monitoring of the indoor atmosphere is almost never carried out.

In recent years, new technologies and equipment have been developed that can be applied to improve security in this type of environment and reduce the associated risks.

In this type of environment, an effective risk prevention system should be based on technological solutions capable of providing answers to safety aspects throughout the entire work cycle in confined spaces: before entering the space itself, during all work inside the enclosure, and when leaving the work space (whether at the end of normal work or during an evacuation).

The latest confined space air quality monitoring systems are based on multisensor technology that combines different detection methods to ensure the best possible conditions and to avoid or reduce the risks present in confined spaces.

Advanced data processing techniques (machine learning, data mining, predictive algorithms) are also being applied, enabling much more efficient and rapid information extraction.
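As a hedged, minimal illustration of how readings from a multi-sensor monitor might be turned into alarms, the sketch below combines fixed thresholds with a simple trend check on carbon monoxide. Real systems add the machine-learning and predictive layers mentioned above, and the thresholds shown (taken from commonly cited safe ranges) must be replaced by those of the applicable regulation and calibrated instruments.

```python
# Illustrative sketch only: fixed-threshold and trend checks for a confined-space
# gas monitor. Thresholds follow commonly cited safe ranges (oxygen 19.5-23.5 %vol,
# CO alarm around 25 ppm) but are NOT a substitute for the applicable regulation.
from collections import deque

O2_MIN, O2_MAX = 19.5, 23.5     # oxygen, %vol
CO_ALARM = 25.0                 # carbon monoxide, ppm

def check_sample(o2, co, co_history, window=30, rise_ppm=5.0):
    """Return the list of alarms raised by one sensor sample."""
    alarms = []
    if not (O2_MIN <= o2 <= O2_MAX):
        alarms.append(f"Oxygen out of range: {o2:.1f} %vol")
    if co >= CO_ALARM:
        alarms.append(f"CO above alarm level: {co:.1f} ppm")
    # Trend check: flag a fast CO rise even below the absolute alarm level.
    co_history.append(co)
    if len(co_history) == window and co - co_history[0] >= rise_ppm:
        alarms.append("CO rising quickly: pre-alarm")
    return alarms

history = deque(maxlen=30)
for o2, co in [(20.9, 2.0), (20.7, 9.0), (18.9, 12.0)]:   # simulated samples
    for alarm in check_sample(o2, co, history):
        print(alarm)
```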

In the same way, great advances have been made in access control and personnel tracking systems, allowing us to know the position of each worker and even his or her vital signs in order to detect almost immediately any problem that may arise.

Finally, it should be noted that robots and autonomous vehicles (land and air) equipped with different types of sensors are increasingly being used to determine the conditions of a site before it is accessed. This is especially useful where there may have been an incident (power failure, collapse, fire, etc.) or simply because environmental conditions are suspected to have changed for an unknown reason.

CARTIF has been working on these issues for many years now, both in safety projects in critical construction environments (PRECOIL, SORTI) and in specific systems for tunnels and underground works (PREFEX, INFIT, SITEER).

In short, the development and implementation of new specific technologies can help to save lives in such a critical environment as confined spaces.

New applications of Deep Learning


A little more than a year ago, in another post on this blog, our colleague Sergio Saludes explained what deep learning is and detailed several of its applications (such as the victory of a machine based on these networks over the world champion of Go, considered the most complex game in the world).

Well, in these 16 months (a whole world in this topic) there has been a great progress in terms of the number of applications and the quality of the results obtained.

Considering, for example, the field of medicine, diagnostic tools based on deep learning are increasingly used, in some cases achieving higher success rates than human specialists. In specific specialties such as radiology, these tools are proving to be a major revolution, and they have also been successfully applied in related industries such as pharmaceuticals.

In sectors as varied as industrial safety, they have recently been used to detect cracks in nuclear reactors, and they have also begun to be used in the world of finance, in energy consumption prediction and in other fields such as meteorology and the study of sea waves.

Autonomous driving projects, so in vogue these days, mainly use tools based on deep learning to compute many of the decisions to be made in each situation. On this issue, there is some concern about how these systems will decide what actions to take, especially when human lives are at stake, and there is already an MIT webpage where the general public can collaborate in creating an “ethics” of the autonomous car. In reality, these devices can only decide what they have previously been programmed (or trained) to decide, and there is certainly a long way to go before machines can decide for themselves (in the conventional sense of “decide”, although this would lead to a much more complex debate on other issues such as the singularity).

Regarding the Go program mentioned above (which beat the world champion 4-1), a new version (AlphaGo Zero) has been developed that has beaten that previous version 100-0, knowing only the rules of the game and training against itself.

Other areas, such as language translation, speech comprehension and voice synthesis, have also advanced very noticeably, and the use of personal assistants on mobile phones is beginning to become widespread (if we overcome the natural reluctance or embarrassment of “talking” to a machine).

CARTIF has also been working on deep learning systems for some time now, and different types of solutions have been developed, such as the classification of architectural heritage images within the European INCEPTION project.

All these computer developments are associated with a high computational cost, especially in relation to the necessary training of the neural networks used. In this respect, progress is being made on the two fronts involved: much faster and more powerful hardware and more evolved and optimized algorithms.

In view of the advances made in this field, it may seem that deep learning is the holy grail of artificial intelligence. This may not be the case, and we may simply be looking at one more new tool, but there is no doubt that it is an extremely powerful and versatile tool that will give rise to new and promising developments in many applications related to artificial intelligence.

And of course there are many voices warning of the potential dangers of this type of intelligent system. The truth is that it never hurts to guard against the potential risks of any technology although, as Alan Winfield says, it is not just artificial intelligence that should be feared, but artificial stupidity: as always happens in these cases, the danger of any technology lies in the misuse that can be made of it and not in the technology itself. Faced with this challenge, what we must do is promote mechanisms that regulate any unethical use of these new technologies.

We are really only facing the beginning of another golden era of artificial intelligence, as there have been several before, although this time it does seem to be the definitive one. We don’t know where this stage will take us, but trusting that we will be able to take advantage of the possibilities offered to us, we must be optimistic.

Geolocation systems are reaching indoors


With global positioning systems, a phenomenon similar to that of mobile phones has occurred: in a few years we have gone from their non-existence to considering them essential. The truth is that geolocation is one of those technologies that has enabled the development of many applications, and in many areas it is inconceivable to work without what is commonly called GPS.

These positioning systems are based on receiving the signal from three or more satellites and using trilateration: the position is obtained in absolute coordinates (usually WGS84) by determining the distance to each satellite.
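As a toy illustration of that trilateration idea (ignoring the receiver clock bias, atmospheric delays and the real structure of GNSS signals), the sketch below recovers a receiver position by non-linear least squares from invented satellite positions and the corresponding distances.

```python
# Toy trilateration sketch: all coordinates are invented and the receiver clock
# bias, which real GNSS receivers must also estimate, is ignored here.
import numpy as np
from scipy.optimize import least_squares

sats = np.array([                 # satellite positions in ECEF metres (illustrative)
    [15600e3,  7540e3, 20140e3],
    [18760e3,  2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
    [19170e3,   610e3, 18390e3],
])
true_pos = np.array([4.0e6, -0.5e6, 4.9e6])       # hypothetical receiver (ECEF, m)
ranges = np.linalg.norm(sats - true_pos, axis=1)  # noiseless "measured" distances

def residuals(p):
    # Difference between the distances implied by a candidate position p
    # and the measured ranges; zero at the true position.
    return np.linalg.norm(sats - p, axis=1) - ranges

solution = least_squares(residuals, x0=np.zeros(3))
print("Estimated receiver position (ECEF, m):", solution.x)   # close to true_pos
```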


Satellite-based global positioning systems have their origin in the US TRANSIT system of the 1960s. With that system, a position fix could be obtained once an hour (at best) with an accuracy of about 400 metres. It was followed by the Timation system, and in 1973 the Navstar project began (both from the USA). The first satellite of this project was launched in February 1978 and full operational capability was declared in April 1995. This Navstar-GPS system is the origin of the generic name GPS that we usually apply to all global navigation systems. In 1982 the former Soviet Union launched the first satellite of a similar system called GLONASS, which became operational in 1996. Meanwhile, in 2000 the People's Republic of China launched the first satellite of the BeiDou navigation system, which is scheduled to be fully operational in 2020. Finally, in 2003 the development of the European Union's positioning system, Galileo, began, with a first launch in 2011. There are currently 12 active satellites (and 2 in tests), and the simultaneous launch of four more is scheduled for 17 November 2016. With 18 satellites in orbit, the initial service of the Galileo positioning system could begin in late 2016, and it is expected to be fully operational in 2020. It must also be said that there are other systems, complementary to those already mentioned, with local coverage in India and Japan.

As you can see, global positioning systems are fully established and widely used at both military and commercial level (transport of people and goods, precision agriculture, surveying, environmental studies, rescue operations …) and at a personal level (almost everyone has a mobile phone with GPS, although its battery always runs out at the worst moment).

Regarding the precision obtained with current geolocation equipment, it is of the order of a few metres (and even better with the Galileo system) and can reach centimetre accuracy using multifrequency receivers and applying differential corrections.


One of the problems of these systems is that they do not work properly indoors, since the satellite signal cannot be received well inside buildings (although there are high-sensitivity receivers that reduce this problem, and other devices, called pseudolites, that simulate the GPS signal indoors). And of course it is no longer enough to know our exact position outdoors: now comes the need to be located inside large buildings and infrastructures (airports, office buildings, shopping centres, …).

So indoor positioning systems (IPS) have appeared, allowing location inside enclosed spaces. Unlike global positioning systems, in this case there are many different technologies that are usually not compatible with each other, which hinders their dissemination and adoption by the general public. There are already very reliable and accurate solutions in business environments, but these developments are specific and not easily transferable to the generic use of locating people indoors. In this type of professional context, CARTIF has carried out several indoor positioning projects for the autonomous movement of goods and for service robotics. There is no standard indoor positioning system, but there are many technologies competing for a prominent place.

The technologies used can be divided according to whether or not they need a communications infrastructure. Those that need no existing infrastructure are often based on sensors commonly available in a smartphone: variations in the magnetic field inside the building detected by the magnetometer, measurement of movements using the accelerometers, or identification of certain characteristic elements (such as QR codes) using the camera. In all these cases the accuracy achieved is not very high, but it may be enough for certain applications, such as simple guidance inside a large building.
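A minimal sketch of this infrastructure-free approach is shown below: a pedestrian dead-reckoning routine that counts steps from accelerometer peaks and advances the position along the compass heading. The step length, the detection threshold and the sample data are rough assumptions; real systems fuse more sensors and correct the accumulated drift.

```python
# Minimal pedestrian dead-reckoning sketch: step length and peak threshold are
# rough assumptions, and the accelerometer/heading samples are made up.
import numpy as np

STEP_LENGTH = 0.7        # metres per step, assumed constant
THRESHOLD = 11.5         # m/s^2, peak threshold above gravity (~9.81)

def dead_reckoning(acc_magnitude, headings_rad, start=(0.0, 0.0)):
    """Advance the position by one step length on each accelerometer peak."""
    x, y = start
    above = acc_magnitude > THRESHOLD
    for i in range(1, len(acc_magnitude)):
        if above[i] and not above[i - 1]:          # rising edge = one step detected
            x += STEP_LENGTH * np.cos(headings_rad[i])
            y += STEP_LENGTH * np.sin(headings_rad[i])
    return x, y

acc = np.array([9.8, 12.0, 9.6, 9.9, 12.3, 9.7, 12.1, 9.8])   # fake |acceleration| samples
heading = np.radians([0, 0, 0, 90, 90, 90, 90, 90])           # fake compass headings
print("Estimated displacement (m):", dead_reckoning(acc, heading))
```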

Indoor positioning systems that use a communications infrastructure exploit almost all the available technologies of this kind for location: WiFi, Bluetooth, RFID, infrared, NFC, ZigBee, Ultra Wideband, visible light, phone masts (2G/3G/4G), ultrasound, …


With these systems, the position is usually determined by triangulation, calculating the distance to fixed reference devices (using the intensity of the received signal, coded signals or a direct measurement of that distance). In this way, greater precision can be reached than in the three previous cases. There are also new developments that combine several of the above technologies in order to improve the accuracy and availability of positioning.
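For the infrastructure-based case, a common recipe is to convert the received signal strength (RSSI) from fixed BLE or WiFi beacons into approximate distances with a log-distance path-loss model and then trilaterate as in the outdoor case. The sketch below follows that recipe with invented beacon positions and readings; the reference power and path-loss exponent are assumptions that would have to be calibrated on site.

```python
# Hedged sketch of RSSI-based indoor positioning: tx_power (RSSI at 1 m) and the
# path-loss exponent n are assumptions that must be calibrated for each site.
import numpy as np
from scipy.optimize import least_squares

def rssi_to_distance(rssi_dbm, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: approximate distance (m) from RSSI (dBm)."""
    return 10 ** ((tx_power - rssi_dbm) / (10 * n))

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])   # beacon positions (m), invented
rssi = np.array([-73.0, -77.0, -73.0])                      # measured RSSI (dBm), invented
dists = rssi_to_distance(rssi)

def residuals(p):
    return np.linalg.norm(beacons - p, axis=1) - dists

estimate = least_squares(residuals, x0=np.array([5.0, 4.0]))
print("Estimated indoor position (m):", estimate.x)          # roughly (3, 4) with these numbers
```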

Although, as has been said, there is no standard, the use of systems based on Bluetooth Low Energy (BLE) nodes is spreading. Examples of such systems are Eddystone (from Google) and iBeacon (from Apple).

Logically, as in the case of outdoor positioning, a map of the corresponding environment is required to allow navigation. There are also SLAM-based systems, which build a map of the environment (whether previously known or not) as they move, and which are widely used in robots and autonomous vehicles. A recent example is the Tango project (from Google, once again), which generates 3D models of the environment using only mobile devices (smartphones or tablets).

As we have seen, we are ever closer to being locatable anywhere, which can be very useful but can also make us overly dependent on these systems, while the usual privacy concerns surrounding positioning systems grow. So, although thanks to these advances a sense of direction is less necessary, we must always keep our common sense.