Hard to measure

Researchers are increasingly confronted with the need to "digitalise" something that has not been digitalised before: temperatures, pressures, energy consumption, etc. For these cases we look for a measurement system or sensor in a commercial catalogue: a temperature probe, a pressure switch, a clamp ammeter for measuring an electric current, and so on.

Sometimes, however, we need to measure "something" for which no commercial sensor can be found. This may be because the measurement need is uncommon and there is not enough of a market for that type of sensor, or simply because no commercial technical solution exists for various reasons. For example, it may be necessary to measure characteristics such as the moisture of streams of solid material, or characteristics that can only be measured indirectly in a quality control laboratory and that require a high level of experimentation.

In addition, characteristics sometimes have to be measured in very harsh environments, such as the high temperatures of a melting furnace, or in environments with so much dust that any conventional measurement system becomes saturated. It may also be necessary to evaluate a characteristic that is not evenly distributed (for example, the amount of fat in a piece of meat, or the presence of impurities). Another factor to take into account is that it is not always possible to install a sensor without interfering with the manufacturing process of the material we want to measure; sometimes the only option is to take a sample, analyse it off-line and obtain a value or characteristic some time later, but never in real time.

In these situations, it is necessary to resort to custom-made solutions that we call smart sensors or cognitive sensors. Beyond sounding exotic or cool, these are solutions that combine a series of "conventional" sensors with software or algorithms, for example artificial intelligence, that process the measurements returned by these commercial sensors to produce as accurate an estimate as possible of the quality we want to measure.
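As an illustrative sketch (not the actual CAPRI sensor, and with entirely synthetic numbers and hypothetical variable names), the core idea can be reduced to calibrating the raw signal of a conventional sensor against laboratory reference values, so that the combined system can estimate the hard-to-measure quality in real time:

```python
# Illustrative sketch, not the actual CAPRI sensor: a "smart sensor" calibrates
# the raw signal of a conventional sensor (e.g. an optical dust reading) against
# laboratory reference values, then uses the fitted model to estimate the
# hard-to-measure quantity in real time. All numbers here are synthetic.

def fit_linear(raw, reference):
    """Least-squares fit of reference ~ slope * raw + intercept."""
    n = len(raw)
    mx, my = sum(raw) / n, sum(reference) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(raw, reference))
             / sum((x - mx) ** 2 for x in raw))
    return slope, my - slope * mx

# Synthetic calibration set: raw optical readings vs. lab-measured filler content
raw_readings = [10.0, 20.0, 30.0, 40.0]
lab_filler = [2.5, 7.5, 12.5, 17.5]          # hypothetical units

slope, intercept = fit_linear(raw_readings, lab_filler)
estimate = slope * 24.0 + intercept          # estimate for a new raw reading
print(round(estimate, 2))  # → 9.5
```

In a real smart sensor this calibration step would be far richer (several sensors, non-linear models, AI), but the principle of mapping conventional measurements to a laboratory-grade estimate is the same.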

Nowadays we are developing these types of smart sensors for different process industries, such as asphalt manufacturing, steel billets and bars, or the pharmaceutical industry (e.g. pills), within the framework of the European CAPRI project.

For example, in the manufacture of asphalt, sands of different sizes need to be dried before they are mixed with bitumen. During the continuous drying process of these sands, the finest sand fraction, called filler, is "released" in the form of dust from larger aggregates, and this dust has to be vacuumed industrially using what is called a bag filter. Nowadays, the drying and suction of filler is done in a way that ensures that all the filler is extracted. The disadvantage of this approach is that additional filler then has to be added when mixing the dried sands with the bitumen, because the filler improves the cohesion of the mix by filling the gaps between the sand grains. All this drying and complete suction of the filler entails an energy cost; to minimise it, a measurement of the filler present in the sand mixture would be needed. Today, this measurement is obtained only occasionally, through a granulometric analysis performed in a laboratory on a sample of the material taken before drying.

Within the CAPRI project we are working on the complex task of measuring the flow of filler sucked in during the drying process. There is no sensor on the market guaranteed to measure such a large concentration of dust (200,000 mg/m³) in suspension at high temperatures (150-200 ºC).

The development of this type of sensor requires various laboratory tests carried out under controlled conditions to verify the feasibility of the solution, and then, also under laboratory conditions, calibrated tests to ensure that the true flow of filler sucked in during the sand drying process can be estimated. The CAPRI project has successfully completed the testing of this sensor and of others related to the manufacture of steel bars and pharmaceutical pills.

The project, in line with its commitment to the open science initiative promoted by the European Commission, has published on its Zenodo channel several results of these laboratory tests, which corroborate the preliminary success of these sensors, pending their validation and testing in the production areas of the project partners. In the near future we will be able to share the results of the industrial operation of this and other sensors developed in the project.


Co-author

Cristina Vega Martínez. Industrial Engineer. Coordinator of the CAPRI H2020 Project

AI potential for process industry and its sustainability

Artificial Intelligence (AI) is widely recognized as a key driver of the industrial digital revolution, together with data and robotics1,2. To increase AI deployment that is practically and economically feasible in industrial sectors, we need AI applications with simpler interfaces that do not require a highly skilled workforce, yet exhibit a longer useful life and require less specialized maintenance (e.g. data labelling, training, validation...).

Achieving an effective deployment of trustworthy AI technologies within process industries requires a coherent understanding of how these different technologies complement and interact with each other in the context of the domain-specific requirements of industrial sectors3. Process industries, in particular, must leverage the potential of innovation driven by digital transformation as a key enabler for reaching the Green Deal objectives and the expected twin green and digital transition needed for a full evolution towards the circular economy.

One of the most important challenges for developing innovative solutions in the process industry is the complexity, instability and unpredictability of its processes and of their impact on the value chains. These solutions typically must run in harsh conditions and under changing process parameter values, while lacking consistent monitoring or measurement of some parameters that are important for analysing process behaviour but difficult to measure in real time. Sometimes such parameters are only available through quality control laboratory analyses, which provide the traceability of the origin and quality of feedstocks, materials and products.

For AI-based applications, these constraints are even more critical, since AI usually requires a considerable amount of high-quality data to ensure the performance of the learning process (in terms of precision and efficiency). Moreover, obtaining high-quality data usually requires the intensive involvement of human experts to curate (or even create) the data in a time-consuming process. In addition, a supervised learning process requires domain experts to label/classify the training examples, which can make an AI solution not cost-effective.

Minimizing (as much as possible) human involvement in the AI creation loop implies some fundamental changes in the organization of the AI process/life-cycle, especially from the point of view of achieving a more autonomous AI, which leads to the concept of self-X AI4. To achieve such autonomous behaviour, an application usually needs to exhibit advanced (self-X) abilities like the ones proposed for autonomic computing (AC)5:

Self-X Autonomic Computing abilities

Self-Configuration (for easier integration of new systems for change adaptation)
Self-Optimization (automatic resource control for optimal functioning)
Self-Healing (detection, diagnose and repair for error correction)
Self-Protection (identification and protection from attacks in a proactive manner)

The autonomic computing paradigm can support many AI tasks with appropriate management, as already reported in the scientific community6,7. In this scheme, the AI acts as the intelligent processing system, while the autonomic manager continuously executes a loop of monitoring, analyzing, planning and executing based on knowledge (MAPE-K) of the AI system under control, in order to develop a self-improving AI application.

Indeed, such new (self-X) AI applications will be, to some extent, self-managed and will improve their own performance incrementally5. This will be realized by an adaptation loop that enables "learning by doing", using the MAPE-K model and the self-X abilities proposed by autonomic computing. The improvement process should be based on the continuous self-Optimization ability (e.g. hyper-parameter tuning in Machine Learning). Moreover, if an AI component malfunctions, the autonomic manager should activate the self-Configuration (e.g. choice of AI method), self-Healing (e.g. detecting model drift) and self-Protection abilities (e.g. generating artificial data to improve trained models) as needed, based on knowledge from the AI system.
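The MAPE-K loop described above can be sketched as follows. This is a minimal, purely illustrative skeleton: the metric, thresholds and adaptation actions are hypothetical, not a prescription from the autonomic computing literature.

```python
# Minimal sketch of a MAPE-K adaptation loop around an AI component.
# The accuracy metric, drift tolerance and adaptation action are illustrative.

class AutonomicManager:
    def __init__(self, knowledge):
        self.knowledge = knowledge            # K: shared knowledge about the AI system

    def monitor(self, metrics):               # M: collect runtime metrics
        self.knowledge["history"].append(metrics)

    def analyze(self):                        # A: detect degradation (e.g. model drift)
        baseline = self.knowledge["baseline_accuracy"]
        current = self.knowledge["history"][-1]["accuracy"]
        return current < baseline - self.knowledge["drift_tolerance"]

    def plan(self):                           # P: choose a self-X ability to apply
        return "self-optimization: retune hyper-parameters"

    def execute(self, action):                # E: apply the adaptation and record it
        self.knowledge["actions"].append(action)
        return action

knowledge = {"baseline_accuracy": 0.90, "drift_tolerance": 0.05,
             "history": [], "actions": []}
mgr = AutonomicManager(knowledge)
mgr.monitor({"accuracy": 0.80})               # a degraded run arrives
if mgr.analyze():                             # drift detected -> plan and execute
    print(mgr.execute(mgr.plan()))
```

A production self-X AI system would of course plug real model metrics and real adaptation mechanisms (retraining, method selection, data augmentation) into each of the four stages.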

In just a few weeks, CARTIF will start a project with AI experts and leading companies from various process industry sectors across Europe to tackle these challenges and close the gap between AI and automation. The project proposes a novel approach for the continuous update of AI applications with minimal human expert intervention, based on an AI data pipeline that exposes autonomic computing (self-X) abilities, so-called self-X AI. The main idea is to enable the continuous update of AI applications by integrating industrial data from the physical world with reduced human intervention.

We’ll let you know in future posts about our progress with this new generation of self-improving AI applications for the industry.


1 Processes4Planet, SRIA 2050 advanced working version

2 EFFRA, The manufacturing partnership in Horizon Europe Strategic Research and Innovation Agenda.

3 https://www.spire2030.eu/news/new/artificial-intelligence-eu-process-industry-view-spire-cppp

4 Alahakoon, D., et al. Self-Building Artificial Intelligence and Machine Learning to Empower Big Data Analytics in Smart Cities. Inf Syst Front (2020). https://link.springer.com/article/10.1007/s10796-020-10056-x

5 Sundeep Teki, Aug 2021, https://neptune.ai/blog/improving-machine-learning-deep-learning-models

6 Curry, E; Grace, P (2008), “Flexible Self-Management Using the Model–View–Controller Pattern”, doi:10.1109/MS.2008.60

7 Stefan Poslad, Ubiquitous Computing: Smart Devices, Environments and Interactions, ISBN: 978-0-470-03560-3

Deep Learning in Computer Vision

Computer vision is a discipline that has for many years made it possible to control different production processes in industry and other sectors. Actions as common as shopping in a supermarket rely on vision techniques such as barcode scanning.

Until a few years ago, many problems could not be solved simply with classical vision techniques. Identifying people or objects at different positions in images, or classifying certain types of inhomogeneous industrial defects, were highly complex tasks that often did not provide accurate results.

Advances in Artificial Intelligence (AI) have also reached the field of vision. Alan Turing proposed the Turing test in 1950, in which a person and a machine were placed behind a wall and another person asked questions, trying to discover which was the person and which was the machine; in computer vision, AI likewise seeks systems capable of reproducing the behaviour of humans.

One of the fields of AI is neural networks. Although used for decades, it was not until 2012 that they began to play an important role in the field of vision. AlexNet1, designed by Alex Krizhevsky, was one of the first networks to implement an 8-layer convolutional design. Years earlier, a worldwide championship had been established in which the strongest algorithms tried to correctly classify images from ImageNet2, a database of 14 million images representing 1,000 different categories. While the best of the classical algorithms, using SIFT and Fisher vectors, achieved 50.9% accuracy in classifying ImageNet images, AlexNet raised the accuracy to 63.3%. This result was a milestone and marked the beginning of the exploration of Deep Learning (DL). Since 2012, the study of deep neural networks has advanced greatly, creating models with more than 200 layers of depth and taking ImageNet classification accuracy to over 90% with the CoAtNet3 model, which intelligently integrates (depthwise) convolution layers with attention layers.
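The convolution operation these networks stack layer upon layer is conceptually simple: slide a small filter over the image and sum the element-wise products at each position. A minimal pure-Python sketch (real frameworks do this massively in parallel on GPUs):

```python
# Minimal 2D convolution (valid padding, stride 1), the core operation that
# convolutional networks such as AlexNet apply in each layer.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + m][j + n] * kernel[m][n]
                 for m in range(kh) for n in range(kw))
             for j in range(ow)] for i in range(oh)]

# A vertical-edge filter (Sobel) responds strongly to the brightness step
# between the dark left half and the bright right half of this tiny image.
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
print(conv2d(image, sobel_x))  # → [[36, 36]]
```

In a trained network the filter values are not hand-designed like this Sobel kernel; they are learned from data during back-propagation.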

Regarding how modern computer vision models compare with humans, Dodge et al. (2017)4 found that modern neural networks classifying ImageNet images made fewer errors than humans themselves, showing that computer systems are capable of doing some tasks better and much faster than people.

Among the most common problems solved by computer vision using AI are image classification, object detection and segmentation, skeleton recognition (both human and object), one-shot learning, re-identification, etc. Many of these problems are solved both in two dimensions and in 3D.

Various vision problems solved by AI: Segmentation, classification, object detection

Classification simply tells us what an image corresponds to: for example, a system could tell whether an image contains a cat or a dog. Object detection allows us to identify several objects in an image and delimit the rectangle in which each has been found; for example, we could detect several dogs and cats. Segmentation allows us to identify the boundaries of the object, not just a rectangle. There are techniques that segment without knowing what is being segmented, and techniques that segment knowing the type of object being segmented, for example a cat.

Skeletal recognition allows a multitude of applications, ranging from security issues to the recognition of activities and their subsequent reproduction in a robot. In addition, there are techniques to obtain key points from images, such as points on a person's face, or techniques to obtain three-dimensional orientation from 2D images.

Industry segmentation using MaskRCNN5

One-Shot Learning allows a model to classify images from a single known sample of each class. This technique, typically implemented with Siamese neural networks, avoids the need to obtain thousands of images per class to train a model. In the same way, re-identification systems are able to re-identify a person or object from a single image.
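The decision step of one-shot classification can be sketched in a few lines. Here we assume hypothetical precomputed embedding vectors (in a real system a Siamese network would produce them from images) and assign a query to the class of the single nearest reference example:

```python
# Toy sketch of the one-shot decision rule: given one embedding per class
# (assumed to come from a Siamese network; here the vectors are made up),
# assign a query embedding to the class of its nearest reference.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def one_shot_classify(query, references):
    """references: {class_name: single reference embedding}."""
    return min(references, key=lambda c: euclidean(query, references[c]))

references = {"cat": [0.9, 0.1, 0.0], "dog": [0.1, 0.8, 0.2]}  # one sample each
query = [0.85, 0.2, 0.05]                                      # unseen image
print(one_shot_classify(query, references))  # → cat
```

The hard part, of course, is training the network so that embeddings of the same class land close together; the classification itself is just this nearest-neighbour comparison.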

The high computational cost of DL models led early on to the search for computational alternatives to CPUs, the main processors in computers. GPUs, or graphics processing units, which were originally developed to perform parallel computations for smoothly generating images for graphics applications or video games, proved to be perfectly suited to parallelising the training of neural networks. In neural network training there are two main stages, forward and back-propagation. During the forward process, images enter the network and pass through successive layers that apply different filters in order to extract salient features and reduce dimensionality. Finally, one or more layers are responsible for the actual classification, detection or segmentation. In backward propagation, the different parameters and weights used by the network are updated, in a process that goes from the output, comparing the obtained and expected output, to the input. The forward process can be parallelised by creating batches of images. Depending on the memory size of the GPUs, copies of the model are created that process all images in a batch in parallel. The larger the batch size we can process, the faster the training will be. This same mechanism is used during the inference process, a process that also allows parallelisation to be used. In recent years, some cloud providers have started to use Tensor Processing Units (TPUs), with certain advantages over GPUs. However, the cost of using these services is often high when performing massive processing.

Skeleton acquisition, activity recognition and reproduction on a Pepper robot6

CARTIF has significant deep neural network training systems, which allow us to solve problems of high computational complexity in a relatively short time. In addition, we have refined several training algorithms using the latest neural networks7, as well as One-Shot Learning systems using Siamese networks8. We also use state-of-the-art models in tasks such as object and human recognition, segmentation and detection, image classification (including industrial defects) and human-robot interaction systems based on advanced vision algorithms.


1 Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25.

2 Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., … & Fei-Fei, L. (2015). Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3), 211-252.

3 Dai, Z., Liu, H., Le, Q., & Tan, M. (2021). Coatnet: Marrying convolution and attention for all data sizes. Advances in Neural Information Processing Systems, 34.

4 Dodge, S., & Karam, L. (2017, July). A study and comparison of human and deep learning recognition performance under visual distortions. In 2017 26th international conference on computer communication and networks (ICCCN) (pp. 1-7). IEEE.

5 He, K., Gkioxari, G., Dollár, P., & Girshick, R. (2017). Mask r-cnn. In Proceedings of the IEEE international conference on computer vision (pp. 2961-2969).

6 Domingo, J. D., Gómez-García-Bermejo, J., & Zalama, E. (2021). Visual recognition of gymnastic exercise sequences. Application to supervision and robot learning by demonstration. Robotics and Autonomous Systems, 143, 103830.

7 Domingo, J. D., Aparicio, R. M., & Rodrigo, L. M. G. (2022). Cross Validation Voting for Improving CNN Classification in Grocery Products. IEEE Access.

8 Duque Domingo, J., Medina Aparicio, R., & González Rodrigo, L. M. (2021). Improvement of One-Shot-Learning by Integrating a Convolutional Neural Network and an Image Descriptor into a Siamese Neural Network. Applied Sciences, 11(17), 7839.

Artificial Intelligence and Intelligent Data Analysis: statistics and math, not magic!!

Artificial Intelligence, Machine Learning, Deep Learning, Smart Devices: terms we are constantly bombarded with in the media, leading us to believe that these technologies can do anything and solve any problem we face. Nothing could be further from the truth!

According to the European Commission, “Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal.”1.

AI encompasses multiple approaches and techniques, among others machine learning, machine reasoning and robotics. Within them we will focus our reflection on machine learning from data, and more specifically on Intelligent Data Analysis aimed at extracting information and knowledge to support decision making. The data (historical or streaming) that companies store over time and that often are not put to use. Data that reflect the reality of a specific activity and that will allow us to create statistical and mathematical models (in the form of rules and/or algorithms) that capture information about that reality. So, how do we "cook" the data to obtain relevant information? Who are the main actors involved? First, the data, which will be our "ingredients"; second, the algorithms capable of processing these data, which will be our "recipes"; third, the computer scientists and mathematicians, who will be the "chefs" capable of correctly mixing data and algorithms; and fourth, the domain experts, who will be our private "tasters" and whose task is to validate the results obtained.

First, the data: the data from which we want to extract information in order to generate models or make predictions. Through a continuous learning process of trial and error, based on analysing how things were in the past, what trends there were, what patterns were repeated, etc., we can build models and make predictions that will only be as "good" as the data are. It is not a question of quantity, but of data quality. What does that mean exactly? It means that if we teach an AI system to multiply (giving it examples of correct multiplications), the system will know how to do that task (multiply) but will never know how to subtract or divide. And if we give it wrong examples (3*2=9 instead of 3*2=6), the system will still learn to multiply, but in the wrong way. Therefore, as the fundamental ingredient of our recipe, data must be well organized, relevant and of high quality.
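The multiplication example above can be made concrete with a toy model: fit a single coefficient c in y ≈ c·(a·b) by least squares. With correct examples the model recovers multiplication exactly; with systematically wrong labels it "learns" the wrong operation just as confidently:

```python
# Toy illustration of "garbage in, garbage out": a model fitted on wrong
# multiplication examples learns multiplication wrongly. We fit a single
# coefficient c in y ≈ c * (a * b) by least squares.

def fit_coefficient(examples):
    """Least-squares fit of c for y = c * (a*b), given (a, b, y) triples."""
    num = sum(a * b * y for a, b, y in examples)
    den = sum((a * b) ** 2 for a, b, y in examples)
    return num / den

good = [(2, 3, 6), (4, 5, 20), (7, 8, 56)]   # correct multiplications
bad = [(2, 3, 9), (4, 5, 30), (7, 8, 84)]    # every label is 1.5x too large

print(fit_coefficient(good))  # → 1.0  (correct multiplication)
print(fit_coefficient(bad))   # → 1.5  (confidently, systematically wrong)
```

The model never "notices" the labels are wrong: it minimises its error on the data it is given, which is exactly why data quality matters more than data quantity.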

On the other hand, the AI algorithms: our "recipes" that tell us how to mix the "ingredients" correctly, how to use the available data to try to solve our problem. Algorithms that allow us to build computer systems that simulate human intelligence when automating tasks. However, not all algorithms can be used to solve any type of problem. "Inside" these algorithms there are mainly mathematical and statistical formulas proposed decades ago, whose principles have advanced little in recent years but which are now far more effective thanks to (1) the increase in the amount of available data and (2) the increase in computing power, which allows much more complex calculations in less time and at low cost. However, skills such as intuition, creativity or consciousness are human abilities that (for now) we have not been able to transfer to a machine effectively. Therefore, our "chefs" and our "tasters" will be in charge of contributing these human factors in our particular "kitchen".

That is why not all problems can be solved using AI: neither are data capable of "speaking" for themselves (they are not "carriers" of absolute truth), nor are algorithms "seers" capable of guessing the unpredictable. What data and algorithms really know how to do is answer the questions we ask them based on the past, as long as we ask the right questions. After a machine failure, how are the data provided by the sensors monitoring the machine mathematically related to the failure? When an image is analysed, how similar is it to images analysed previously? When a question is asked of a virtual assistant, what answer have humans given most frequently in the past to that same question? It is therefore a matter of questioning the data in the right way so that they reveal the information we want.

Over the last century, AI has survived several technological "winters" of scarce funding and research, caused mainly by the uncontrolled enthusiasm placed in the technology in the preceding years2. It is time to "learn" from our historical data and not make the same mistakes again. Let us acknowledge AI for the capabilities it really has, and leave to wizards the ability to make the impossible come true. Only in this way will AI enter its perpetual spring.


1 https://op.europa.eu/en/publication-detail/-/publication/d3988569-0434-11ea-8c1f-01aa75ed71a1

2 https://link.springer.com/chapter/10.1007%2F978-3-030-22493-6_2

Cybersecurity in industrial environments. Are we ready? The attacks that are still to come…

Identity and user data theft, ransomware, phishing, pharming and denial-of-service attacks are terms that appear more and more in the media1,2,3,4. The hyper-connected world in which we live also affects companies, which, as productive entities, are increasingly exposed to cybercrime5,6,7. There are many campaigns to raise awareness of cybersecurity, but how can companies protect themselves against all these threats without compromising their business objectives?

Traditionally, cybersecurity orchestration in industrial environments has been delegated almost exclusively to the company's IT department, which has focused on protecting office networks by applying well-known standards and regulations such as ISO/IEC 27001, ISO/IEC 15408 or ISO/IEC 19790. For these cybersecurity expert teams, "the best defense is a good offense". This maxim, attributed to the Chinese general Sun Tzu (author of "The Art of War", considered a masterpiece on strategy), underlies what are known as penetration tests (or pentesting). Penetration tests are basically a set of simulated attacks against a computer system with the sole purpose of detecting exploitable weaknesses or vulnerabilities so that they can be patched. Why are these tests so important? Several studies show that most attacks exploit known vulnerabilities collected in databases such as CVE, OWASP or NIST that for various reasons have not been addressed8,9.
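The "known vulnerabilities" point can be illustrated with the core logic of a vulnerability scanner: compare the software versions discovered on a host against a table of versions with published advisories. The products and advisory identifiers below are entirely made up for illustration; real scanners work against the actual CVE feeds:

```python
# Sketch of the core idea behind vulnerability scanning: match discovered
# (product, version) banners against a table of known-vulnerable versions.
# All product names and advisory IDs below are fictitious, not real CVEs.

KNOWN_VULNERABLE = {
    ("ExampleHTTPd", "2.4.1"): ["EXAMPLE-2021-0001"],
    ("ExampleFTPd", "1.3.5"): ["EXAMPLE-2020-0042"],
}

def check_host(services):
    """services: list of (product, version) banners collected from one host."""
    findings = []
    for product, version in services:
        for vuln_id in KNOWN_VULNERABLE.get((product, version), []):
            findings.append((product, version, vuln_id))
    return findings

host_banners = [("ExampleHTTPd", "2.4.1"), ("ExampleSSHd", "8.9")]
print(check_host(host_banners))  # → [('ExampleHTTPd', '2.4.1', 'EXAMPLE-2021-0001')]
```

This is precisely why unpatched known vulnerabilities are so dangerous: finding them is a simple lookup for attacker and defender alike, and whoever looks first wins.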

In the IT sector, some of the most popular security audit methodologies and frameworks for pentesting are the Open Source Security Testing Methodology Manual (OSSTMM), the Information Systems Security Assessment Framework (ISSAF), the Open Web Application Security Project (OWASP) and the Penetration Testing Execution Standard (PTES). Each of these methodologies follows a different strategy to perform the penetration test according to the type of application to be audited (native mobile apps, web applications, infrastructure...), making them complementary approaches.


On a practical level, IT teams have a large number of tools to perform these tests, both free and/or open-source and paid applications. Some of the best known are Metasploit (Community Edition), NESSUS (Personal Edition), Saint, Nmap, Netcat, Burp Suite, John the Ripper and Wireshark. Most of these tools come pre-installed in specific pentesting distributions such as Kali Linux, BlackArch Linux or Parrot Security.

However, office networks, which the IT department is in charge of, are not the only networks in an industrial company. Today there is a growing number of production-related devices (PLCs, SCADA systems, ...), normally interconnected by fieldbus networks, that support TCP/IP-based protocols such as PROFINET or MODBUS TCP. Thanks to the routing function available in the PLCs of some brands, it is now possible to access, through gateways, field buses that could not be reached from the outside in the past, such as PROFIBUS. The interconnection between IT (Information Technology) and OT (Operation Technology) networks, so necessary when talking about Industry 4.0, greatly increases the chances of the industry being the target of cyberattacks.
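To see why TCP/IP fieldbus protocols lower the barrier for probing OT equipment, consider how little it takes to build a valid MODBUS TCP request: a few bytes of packing, per the openly published frame layout. (This sketch only constructs the frame; actually sending traffic to a PLC must only ever be done with explicit authorisation.)

```python
# Building a MODBUS TCP "Read Holding Registers" (function 0x03) request frame,
# following the openly documented MBAP header + PDU layout. Constructing the
# frame is trivial, which is part of why exposed OT protocols are easy to probe.
import struct

def read_holding_registers_request(transaction_id, unit_id, start_addr, count):
    # PDU: function code 0x03, starting address, quantity of registers
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0), remaining length, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = read_holding_registers_request(1, 0x11, 0x006B, 3)
print(frame.hex())  # → 0001000000061103006b0003
```

Note there is no authentication field anywhere in the frame: classic MODBUS TCP trusts whoever can reach the port, which is exactly the exposure that IT/OT interconnection creates.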

In the next article, we will talk about how we can defend ourselves against such a threat …


Post Authors

Daniel Gómez (dangom@cartif.es)

Javier Román (javrom@cartif.es)

Marta Galende (margal@cartif.es)


1 https://elpais.com/economia/2021-11-11/un-ataque-informatico-obliga-a-paralizar-la-principal-planta-de-la-cervecera-damm.html

2 https://www.lavanguardia.com/tecnologia/20211108/7847465/ciberataque-mediamarkt.html

3 https://www.elespanol.com/omicrono/tecnologia/20211025/supermercado-tesco-hackeo-clientes-sin-pedidos-varios/622188010_0.html

4 https://www.elmundo.es/economia/2021/03/09/6047578dfc6c83411b8b4795.html

5 https://cincodias.elpais.com/cincodias/2021/07/21/companias/1626821663_803769.html

6 https://directivosygerentes.es/innovacion/54-por-ciento-retailers-espanoles-sufrido-ciberataque

7 https://www.fortinet.com/lat/corporate/about-us/newsroom/press-releases/2021/fortinet-reporta-ataques-ransomware-multiplicado-diez-ultimo-ano

8 https://owasp.org/Top10/

9 https://www.muycomputerpro.com/2021/11/11/ransomware-ataques-vulnerabilidades-empresas

Cybersecurity in the industrial environment, are we ready? Defence comes next…

As we mentioned in our previous post, companies' OT (Operation Technology) networks are no exception when it comes to suffering cyberattacks. There have been multiple cyberattacks on industrial companies since the first registered attack with a direct impact on the physical world in 20101. These security incidents affect a wide range of entities, from large technology companies to suppliers of final products2. All industrial infrastructures, and not only critical ones, are in the crosshairs of cybercriminals or crackers, and the OT sector is in a certain way "negligent": almost 90% of the vulnerabilities and attack vectors present in an industrial system are identifiable and exploitable using strategies widely known to attackers, and 71% pose an extremely high or critical risk, as they can partially or totally halt the company's production activity3.

Given this panorama, a series of questions arise: are there appropriate tools adapted to these OT network environments? Can cybersecurity experts protect the industrial OT scenario? The detection and exposure of vulnerabilities affecting the resources associated with OT networks, key elements in the automation of industrial plants, is a compulsory step of any penetration test. Once these vulnerabilities have been identified, it will be possible to take the necessary preventive measures, adapting existing solutions and well-known good practices from the IT environment to the OT world, rather than implementing them directly as-is.

Some attempts to adapt existing standards are IEC 62443, based on the ISA 99 standard, which establishes the international reference framework for cybersecurity in industrial systems, and ISO/IEC 27019:2013, which provides guiding principles for information security management applied to process control systems. Regarding specific tools, we find, among others, the ControlThings platform, a specific Linux distribution for exposing vulnerabilities in industrial control systems; tools dedicated to obtaining a real-time asset inventory of the OT infrastructure, such as IND from Cisco and eyeSight from Forescout (both paid applications); and the open-source GRASSMARLIN, which passively maps the network and visually shows the topology of the different ICS/SCADA systems present in it. The different targets liable to be attacked in an OT environment can be found in databases such as MITRE ATT&CK.

Nevertheless, these standardization attempts are not enough, and it is essential to keep working on several fronts, supporting initiatives such as the following:

  • Allow experts from the OT environment to take the initiative and learn how to protect their systems. Train them in the correct way to commission the devices of these networks, making commissioning easier for non-IT experts and thus avoiding misconfigurations due to lack of the associated technical information (simplifying the security side of the task).
  • Improve the adaptation of SIEM (Security Information and Event Management) solutions to OT networks, so that they are less intrusive than current ones and able to identify the patterns typical of industrial process networks, allowing an early identification of anomalous situations4.
  • Put into practice new ways of cyberprotecting industrial systems that do not rely on continuous software updates and/or periodic investments in them5.
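The second point, pattern-based anomaly detection in OT traffic, can be illustrated with a toy example. Industrial traffic is highly periodic (a PLC polled at a fixed interval), so even a simple statistical check over inter-message intervals can flag disruptions; all numbers and the threshold below are illustrative, not tuned values from any real SIEM:

```python
# Toy sketch of the kind of pattern-based check an OT-aware SIEM could run:
# flag polling intervals that deviate strongly from the learned baseline.
# Industrial traffic is highly periodic, unlike typical office IT traffic.
from statistics import mean, stdev

def anomalous_intervals(intervals, threshold=2.5):
    """Return indices of intervals more than `threshold` std devs from the mean."""
    mu, sigma = mean(intervals), stdev(intervals)
    return [i for i, x in enumerate(intervals)
            if sigma > 0 and abs(x - mu) / sigma > threshold]

# A PLC normally polled every ~100 ms; one 900 ms gap could signal a dropped
# device or an intrusion-related disruption (values are purely illustrative).
polling_ms = [100, 101, 99, 100, 102, 900, 100, 98, 101, 100]
print(anomalous_intervals(polling_ms))  # → [5]
```

Real OT-adapted SIEMs model far richer patterns (protocols, command sequences, asset behaviour), but the principle is the same: learn the process's regular rhythm and alert on deviations, without injecting any traffic into the network.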

Until not long ago, OT network systems ran disconnected from the outside world, and therefore with a false sense of security6. However, the protection of these OT environments should now be prioritized, as should the creation of new professional profiles in OT cybersecurity, capable of understanding the needs and particularities of these specific environments.


Authors of the post

Daniel Gómez (dangom@cartif.es)

Javier Román (javrom@cartif.es)

Marta Galende (margal@cartif.es)


1 https://www.businessinsider.es/10-anos-stuxnet-primer-ciberataque-mundo-fisico-657755

2 https://www.incibe-cert.es/alerta-temprana/bitacora-ciberseguridad

3 https://security.claroty.com/1H-vulnerability-report-2021

4 https://www.incibe-cert.es/blog/diseno-y-configuracion-ips-ids-y-siem-sistemas-control-industrial