This has been in the works for some time. It is only a matter of time before we are all being driven around by emotional computers. "No need for friends, I have my iPad 9!" "She knows what I like and tells me what I want to hear."
It is very difficult to understand the real thoughts of a human from the face. This is for the simple reason that in many instances we are compelled from within to smile, stay neutral, or cry. So computers will never be able to detect the true feelings of a human by reading faces alone. Maybe by analyzing brain waves, saliva, sweat, and other outputs it may be possible.
This is a small brick in the wall of artificial-intelligence research. We humans do read facial color changes as emotions like shame, worry, extreme and sudden fear (our face turns white), or even anger. Actually, I've sometimes thought the mood of a person could become part of their social network information. In the not-too-distant future, we won't have to bother updating our Facebook to let our friends know how we are feeling. The app will do it for us by taking a look at our face, hearing our voice, sensing our heart rate, or perhaps even our temperature or how fast or slow we walk.
This is the era of the cognitive sciences. Interesting, right?
Even if emotions are completely innate in humans, the response to others' emotions is almost certainly learned behavior. This is likely true for most mammals. Perhaps a shorter-term goal for a machine would be to recognize emotion in a mouse, or something a little less complex than a human.
When a human is born, the sensors and detectors built into their processor (the brain) are all functional, yet the human computer only appears to have the capability to operate in autonomic mode (e.g., heartbeat).
The challenge for affective computing would likely be to figure out what the human model goes through to create things like a "moral compass". I would assume that we all want our machines to adapt properly to our needs in a way that is similar to the human response: most of our daily habits are developed from the initial impressions instilled in us by our earliest environment.
Interesting study! I will be watching to see how this develops. Will I like the personality of future robots? Who knows!
The idea of including emotional cues in human-machine interaction (HMI) is valid, and I am sure there are clever techniques to measure emotions, like the one mentioned in the article: looking for skin-color modulation. I watched their demonstration video and couldn't see the effect with my own eyes, but digital sensors are more sensitive and can also amplify the effect with clever math; a sketch of that math follows below.
At the same time, this is an inexact science: polygraph testing is even more involved technologically, but has been proven unreliable.
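For the curious, here is a minimal sketch of the kind of "clever math" I mean, assuming a pre-extracted time series of the mean green-channel value over a face region. The data, the 30 fps frame rate, and the 0.7-4 Hz pulse band are all assumptions for the demo, not anything from the article:

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(42)

fps = 30.0                      # assumed camera frame rate
t = np.arange(0, 20, 1 / fps)   # 20 seconds of frames

# Synthetic stand-in for the per-frame mean green value over the skin:
# a faint 1.2 Hz pulse (72 bpm) buried in noise, invisible to the eye.
trace = 0.003 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.005, t.size)

# Band-pass to the plausible human pulse band (0.7-4 Hz, ~42-240 bpm).
b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
filtered = filtfilt(b, a, trace)

# The dominant in-band frequency is the pulse estimate.
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(filtered.size, d=1 / fps)
print(f"Estimated pulse: {freqs[np.argmax(spectrum)] * 60:.0f} bpm")
```

In a real pipeline the time series would come from face detection and skin segmentation on each video frame; the point is just that a periodic signal far below the visual threshold is easy to pull out of the spectrum.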
You're exactly right, on one hand: in order to get successful results, the system has to be able to detect emotions in varied environments, under any given conditions (e.g., blurry faces, bad lighting, outside interference). Our facial coding technology, Affdex, was designed to do just that by gathering spontaneous facial expressions (emotions occurring naturally in real-world environments). We continually evaluate the accuracy of current classifiers while training and testing new ones. On the other hand, this isn't a future idea; it is happening now. Affdex has already been validated in developed and emerging markets.
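For readers wondering what "training and testing classifiers" looks like in practice, here is a generic sketch of that loop. This is not Affdex's actual pipeline; the features, labels, and model choice are illustrative assumptions only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder features: e.g., per-frame intensities of facial action units.
X = rng.normal(size=(1000, 12))
# Placeholder labels: 0 = neutral, 1 = smile, 2 = brow furrow.
y = rng.integers(0, 3, size=1000)

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# The "continual evaluation" step: cross-validated accuracy on held-out
# folds, re-run whenever new labeled examples arrive.
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

A production system would train on labeled video collected across many demographics and lighting conditions, which is presumably what validation in "developed and emerging markets" refers to.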
Robots reading and having emotions is right up there with slapping lipstick on a pig. I believe KISS ("keep it simple") is the best and safest mode for a robot's “personality”: having a robot “guess” what people are “feeling” and then “guess” what to do next is a recipe for inconsistent robot operation, and will cause more problems than it is worth.