It is a robot and it has feelings (II)

In an earlier post titled “It’s a robot and it has feelings” I discussed the possibility of incorporating “feelings” into robots, displayed in a way similar to how human emotions are displayed in response to external stimuli. Following that post, an obvious question arises: what for? Or, more pragmatically: what advantages can a robot obtain from emulating emotional responses?

Empathy is defined as the cognitive ability to perceive what another being may feel, and can be divided into two main components: affective empathy, also called emotional empathy, which is the ability to respond with an appropriate emotion to the mental states of another being; and cognitive empathy, which is the ability to understand the point of view or mental state of another being.

What is the added value of this type of “empathic” communication? On the one hand, empathy improves the efficiency of interaction. As we perform actions we send signals that communicate our intentions (gaze, movements of the hands and body, etc.), and other beings equipped to perceive these signals can identify them and collaborate more efficiently towards joint objectives. On the other hand, empathic interaction could help to lessen the apprehension that some users feel when interacting with robotic devices, making them more comfortable with robots and other machines.

Endowing robots with behavior that simulates human emotional behavior is one of the ultimate goals of robotics. Such emotional behavior would allow robots both to show their own mood (affective empathy) and to perceive that of the users interacting with them (cognitive empathy).
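To make this two-way distinction more tangible, here is a minimal Python sketch of the idea; the class names, signals and thresholds are purely illustrative assumptions and do not correspond to any real robot framework.

from dataclasses import dataclass

# Illustrative sketch of the two empathy components described above.
@dataclass
class UserSignals:
    """Observable cues a robot might perceive (gaze, expression, voice)."""
    gaze_on_robot: bool
    smiling: bool
    voice_pitch: float  # normalized 0.0-1.0, purely hypothetical

class EmpathicRobot:
    def __init__(self):
        self.mood = "neutral"  # the robot's own displayed affective state

    def express_mood(self):
        """Affective empathy: display the robot's own mood."""
        return f"[robot shows a {self.mood} expression]"

    def infer_user_state(self, signals):
        """Cognitive empathy: estimate the user's state from observed cues."""
        if signals.smiling and signals.gaze_on_robot:
            return "engaged"
        if signals.voice_pitch > 0.7:
            return "stressed"
        return "neutral"

    def respond(self, signals):
        """Adapt the robot's displayed mood to the inferred user state."""
        user_state = self.infer_user_state(signals)
        self.mood = {"engaged": "cheerful", "stressed": "calm"}.get(user_state, "neutral")
        return self.express_mood()

robot = EmpathicRobot()
print(robot.respond(UserSignals(gaze_on_robot=True, smiling=True, voice_pitch=0.4)))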

However, despite the impressive advances made in recent years in artificial intelligence, speech recognition and synthesis, computer vision and many other disciplines directly or indirectly related to emotion recognition and artificial emotional expressiveness, we are still far from being able to endow robots with empathic capacities similar to those of a human being.

Take as an example a person whose friend has just been to two job interviews but was offered only one of the jobs. Should that person show satisfaction or disappointment for the friend, or give the event any importance at all? The answer obviously depends on what the person knows of the friend’s goals and aspirations. There are two components here that would be difficult to implement in a robot. First, the robot would need a rich knowledge of itself, including personal motivations, weaknesses, strengths, history of successes and failures, etc. Second, its own identity would have to overlap with that of its human companion enough to provide shared knowledge that is meaningful and genuine.

Since we are still far from being able to develop robots with such human-like empathy, developers of robotic systems need to understand when to show empathy, at what level and in what contexts. For example, empathy may not be necessary in a washing machine, but empathic behavior can clearly improve the performance of robotic applications in areas such as education or the home. On the other hand, pre-programmed empathic behaviors can become annoying and ineffective: there are studies indicating that drivers end up refusing (as an act of rebellion) to listen to a car that repeatedly says “you seem to be tired and should stop.”
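As a toy illustration of tuning when and how often an empathic behavior is shown in a given context, the following Python sketch rate-limits a pre-programmed empathic message so that it is not repeated endlessly; the class, the cooldown value and the message handling are hypothetical assumptions, not a description of any real in-car system.

import time
from typing import Optional

# Hypothetical rate-limited empathic prompt: the message is only repeated
# after a cooling-off period, to avoid the annoyance described above.
class EmpathicPrompt:
    def __init__(self, message: str, cooldown_s: float = 1800.0):
        self.message = message
        self.cooldown_s = cooldown_s
        self._last_shown = None

    def maybe_show(self, condition_detected: bool) -> Optional[str]:
        """Return the message only if the condition holds and the cooldown has passed."""
        if not condition_detected:
            return None
        now = time.monotonic()
        if self._last_shown is not None and now - self._last_shown < self.cooldown_s:
            return None
        self._last_shown = now
        return self.message

prompt = EmpathicPrompt("You seem to be tired and should stop.")
print(prompt.maybe_show(condition_detected=True))  # shown the first time
print(prompt.maybe_show(condition_detected=True))  # suppressed while the cooldown lasts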

In this regard, one of the lines of research in which CARTIF participates is the development of robots and social avatars with the capacity to recognize and express emotions, and to do so in a way that suits their operating environment and the services they offer. In this line, and as members of the European robotics platform euRobotics AISBL (Association Internationale Sans But Lucratif), we actively interact with other European centers of recognized prestige within the group “Natural Interaction with Social Robots”, whose goal is the discussion and dissemination of cutting-edge advances at European level in the field of interaction between humans and social robots.

One of the recent activities carried out in this context was the European Robotics Forum held last March in Edinburgh, where we had the opportunity to discuss with members of the group precisely these needs, recommendations and future lines in the development of robots with empathic capacity. From the discussions that took place at this forum, I would like to summarize the following notes which, although general in nature, will surely mark European trends in social robotics research in the coming years:

  • It is necessary to continue researching what empathy means for different types of robots, such as exoskeletons, social robots, service robots, manufacturing robots, etc., and to investigate how those robots can express empathy in their respective contexts of application.
  • Empathic interaction must be a dynamic process that evolves in order to build a relationship with the user over time. Pre-programmed, repetitive behaviors are not perceived as empathic by the user, especially once the behavioral cues used to trigger the robot’s actions become known to the user.
  • Because robots do not possess the physiological processes that would allow them to be empathic, the practical solution is to detect the socio-emotional signals transmitted by humans and have the robot mimic the empathic behavioral responses a human would display (see the sketch after this list).
  • During experimentation with empathic robots, we should use systems that are sufficiently complex and capable to investigate the different aspects of empathic behavior and to quantitatively assess their impact. Once those aspects have been studied, the most relevant ones could be selected and realized in simple, low-cost systems for commercialization.
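As a concrete, simplified illustration of this “detect and mimic” approach, here is a minimal Python sketch; the signal scores, the emotion labels and the mimicry table are hypothetical assumptions made for illustration, not part of any CARTIF or euRobotics system.

# Hypothetical mapping from a detected human socio-emotional state
# to the empathic behavioral response a human would typically display.
MIMICRY_TABLE = {
    "sadness": ("soften voice", "lower gaze", "offer help"),
    "joy": ("smile", "raise pitch", "mirror enthusiasm"),
    "frustration": ("slow down", "acknowledge difficulty", "simplify the task"),
}

def detect_user_emotion(face_score: float, voice_score: float) -> str:
    """Toy classifier standing in for real facial and vocal emotion recognition."""
    if face_score < 0.3 and voice_score < 0.3:
        return "sadness"
    if face_score > 0.7:
        return "joy"
    return "frustration"

def empathic_response(face_score: float, voice_score: float):
    """Detect the user's state and return the mimicked human-like behaviors."""
    emotion = detect_user_emotion(face_score, voice_score)
    return MIMICRY_TABLE[emotion]

# Example: a user with low facial-positivity and low vocal-energy scores.
print(empathic_response(face_score=0.2, voice_score=0.1))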

It’s a robot and it has feelings (I)

Can a robot have feelings? According to the world of science fiction, the answer would clearly be yes. Films like Blade Runner; 2001: A Space Odyssey; I, Robot; or Ex Machina show machines able to experience human feelings such as fear, anger or love.
Despite the growing interest in artificial intelligence (AI) and the many discussions about the implications of developing machines endowed with greater intelligence than humans (also known as strong AI), it seems clear that current technology is far from reaching the levels of “near-human” behavior that science fiction authors show in their films.

Strong AI is, therefore, a hypothetical type of artificial intelligence that would surpass the AI known so far. It would be an artificial intelligence whose purpose is to emulate human intelligence, allowing it to solve general problems. It should be noted that “general” means that, instead of specializing in solving one type of problem (as current AI does), the system would be able to emulate what any human being can do.

Assuming that technology had reached a level of development sufficient to produce an AI able to go beyond human intelligence in solving problems and carrying out daily activities, could such an AI feel emotions? Recent advances in the field of affective computing show machines with increasingly elaborate “emotional intelligence” (although still very basic compared to human intelligence), and have led an increasing number of researchers to believe that it is only a matter of time before “fiction” and science merge as far as emotional intelligence is concerned. However, many remain convinced that advances in AI will at most allow us to “simulate” human emotions, and that even if we were able to build machines endowed with strong AI, these systems’ intelligence would be no more than that: a clever simulation.

But what are emotions? Emotions are psychophysiological reactions that represent modes of adaptation to certain stimuli when we perceive an object, person, place or event. Psychologically, emotions alter attention and activate relevant associative networks in memory. Physiologically, emotions organize the responses of different biological systems, including facial expressions, muscles, voice and the endocrine system, to establish an optimal internal environment for more effective behavior. Behaviorally, emotions serve to establish our position with respect to our environment, moving us towards certain people, objects, actions and ideas while moving us away from others. Emotions also act as repositories of innate and learned influences, with certain characteristics that are invariable and others that vary between individuals, groups and cultures.
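To express the three levels of that definition in robotic terms, here is a minimal Python sketch of an artificial “emotional reaction”; the field names, the stimulus categories and the responses are purely illustrative assumptions rather than a description of any existing system.

from dataclasses import dataclass
from typing import List

# Hypothetical decomposition of an artificial "emotional reaction"
# into the three levels of the definition above.
@dataclass
class EmotionalReaction:
    attention_shift: str           # psychological level: what the system attends to
    internal_changes: List[str]    # physiological analogue: internal subsystems activated
    outward_behavior: str          # behavioral level: expression shown to the world

def react_to_stimulus(stimulus: str) -> EmotionalReaction:
    """Map an external stimulus to a coordinated reaction on all three levels."""
    if stimulus == "sudden obstacle":
        return EmotionalReaction(
            attention_shift="focus sensors on the obstacle",
            internal_changes=["raise motor gain", "increase sampling rate"],
            outward_behavior="startled posture and warning sound",
        )
    return EmotionalReaction(
        attention_shift="routine scanning",
        internal_changes=["normal power mode"],
        outward_behavior="neutral expression",
    )

print(react_to_stimulus("sudden obstacle"))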

Given this definition of emotion, it is clear that an emotional reaction, from a physiological point of view, requires more than just an advanced artificial intelligence. However, it also seems clear that, with an appropriate level of technological development, it would be possible to create a machine able to adapt to external stimuli, to change its behavior by activating various internal systems, and to generate sounds, expressions and other changes in its components in order to behave more effectively; in short, to produce an emotional reaction to external stimuli. Whether this emotional reaction is real, or should be considered a mere simulation of human behavior, is currently the subject of heated debate, a debate that will gain interest as we approach the levels of technological development that would allow us to build “sensitive” machines.