"Emotion" is the last thing any scientist or design engineer wants to deal with, especially when it comes to developing computing systems.
"Oh, dear," Rosalind Picard, professor at the Massachusetts Institute of Technology (MIT) Media Laboratory, remembers muttering to herself, when it first became unavoidably clear to her that “emotion is vital to intelligent functions.” Picard was then working on machine learning systems.
"Scientists want to be rational. We develop machines that decide right or wrong in terms of 1’s and 0’s," said Picard. The first instinct about “emotion” among scientists and engineers is to roll their eyes at the very idea of feelings having any role in problem-solving, logical thinking or reason. But ignoring emotions is probably no longer an option for most scientists. Look closely, Picard said. Emotions play an important role in human intelligence, rational decision making, perception and learning.
"Emotions are necessary for intelligent day-to-day functions," said Picard in a recent interview with EE Times. Behind what matters [to humans] emotionally, there is "a mass of information" to which computing systems today are totally blind, said Picard.
How can we incorporate emotions into models of intelligence, and more specifically into computers? And how can we make machines that pay more attention to human affect? These questions led her to her seminal work on “Affective Computing.”
Picard, the pioneer of affective computing, is a keynote speaker at the Design East Conference on Wednesday, Sept. 19. During her keynote speech, she plans to demonstrate several applications that take advantage of early findings in the affective computing field. One example is a brand new iPhone app, called Cardio, released last Thursday (Aug. 16).
Developed by a team of Picard’s students at MIT, the app detects one’s heart rate through facial skin color changes, as the user looks at the iPhone camera.
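The approach the article describes -- reading the pulse from tiny, periodic changes in facial skin color -- is broadly known as remote photoplethysmography. Below is a minimal, hypothetical Python sketch of that idea using OpenCV and NumPy; it is not the Cardio app’s actual code. It assumes a well-lit face fills the camera frame and a fixed frame rate, and the names (estimate_heart_rate, FPS, WINDOW_SECONDS) are illustrative.

    # Hypothetical sketch: estimate heart rate from facial skin color changes.
    # Assumes a steady, well-lit face filling most of the frame.
    import cv2
    import numpy as np

    FPS = 30            # assumed camera frame rate
    WINDOW_SECONDS = 10 # length of video analyzed per estimate

    def estimate_heart_rate(frames, fps=FPS):
        """Estimate beats per minute from a list of BGR face-region frames."""
        # Mean green-channel intensity per frame: blood volume changes slightly
        # modulate how much green light the skin reflects.
        signal = np.array([frame[:, :, 1].mean() for frame in frames])
        signal = signal - signal.mean()   # remove the DC component

        # Find the dominant frequency in the plausible heart-rate band
        # (0.75-4 Hz, i.e. 45-240 beats per minute).
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
        spectrum = np.abs(np.fft.rfft(signal))
        band = (freqs >= 0.75) & (freqs <= 4.0)
        peak_freq = freqs[band][np.argmax(spectrum[band])]
        return peak_freq * 60.0           # Hz -> beats per minute

    def capture_and_estimate():
        cap = cv2.VideoCapture(0)         # default camera
        frames = []
        for _ in range(int(FPS * WINDOW_SECONDS)):
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(frame)
        cap.release()
        return estimate_heart_rate(frames)

    if __name__ == "__main__":
        print("Estimated heart rate: %.1f bpm" % capture_and_estimate())

A real implementation would also track the face, restrict the measurement to skin pixels, and filter out motion and lighting changes, which is where most of the engineering effort goes.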
Picard noted that the development of affective computing is an "evolving process." The research still has a long way to go; as she acknowledged, “this isn’t the field where we can simply put a USB plug and read out emotions.”
Picard, however, made it clear that the mission for affective computing is not about making computers "cute or endearing." This is about “making machines function much more intelligently.”
Beyond reading a lot of neuroscience books and learning how the visual cortex works, Picard is working to break down the basic building blocks of human perception and emotion and to identify key emotional triggers. Her silver lining is that she’s not alone: "We now have a large number of engineers, researchers and psychologists involved in the [affective computing] field."
Computer vision is perhaps one of the first fields helped by affective computing. While humans can instinctively sense what matters to them most, balance reason with emotion (or not) and make decisions on where to concentrate their attention, a computer-vision system tends to survey all visual input objectively, with equal focus. Then, it needs to run a lot of data processing on every item and movement in its field of view before deciding what’s important, what’s not and what to do.
But what if the computer vision system knows it should pay primary attention to human facial expressions or slight changes in skin color? Feeding computers algorithms that help decode human emotional and physiological cues could help them extract the information that matters most -- more quickly. Success in this area could have major implications for computer vision applications ranging from toys and airport security to video surveillance systems and equipment used in hospital rooms, explained Jeff Bier, co-founder and president of Berkeley Design Technology, Inc.
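To make that prioritization concrete, here is a small, hypothetical Python/OpenCV sketch: detect faces cheaply first, then spend the expensive affect analysis only on the face regions rather than on every pixel of the frame. The analyze_expression function is a placeholder for an emotion classifier, not any particular vendor’s product.

    # Illustrative sketch of "attention prioritization": locate faces first and
    # run heavier analysis only on those regions, not on the whole frame.
    import cv2

    # OpenCV ships a pre-trained frontal-face Haar cascade.
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def analyze_expression(face_roi):
        """Placeholder for an expression/affect classifier run on a face crop."""
        return "unknown"   # a real system might return, e.g., a smile probability

    def process_frame(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1,
                                               minNeighbors=5)
        results = []
        for (x, y, w, h) in faces:
            # Only the detected face regions get the expensive affect analysis.
            results.append(analyze_expression(frame[y:y + h, x:x + w]))
        return results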
This has been in the works for some time. It is only a matter of time before we are all being driven around by emotional computers. "No need for friends, I have my iPad 9!" "She knows what I like and tells me what I want to hear."
It is very difficult to understand the real thoughts of a human from the face, for the simple reason that in many instances we are compelled by our inner mind to smile, stay neutral, or cry. So computers will never be able to detect the true feelings of a human by reading faces alone. Maybe by analyzing brain waves, saliva, sweat and other outputs it may be possible.
This is a small brick in the wall of research on artificial intelligence. We humans do read changes in the face, such as flushing or going pale, as emotions like shame, worry, extreme and sudden fear (our face turns white), or even anger. Actually I’ve sometimes thought the mood of a person could become part of their social network information. In the not too far future, we won’t have to bother to update our Facebook to let our friends know how we are feeling. The app will do it for us, by taking a look at our face, hearing our voice, sensing our heart rate or even perhaps our temperature or how fast or slow we walk.
This is the era of the cognitive sciences. Interesting, right?
Even if emotions are completely innate in humans, the response to others' emotions is almost certainly learned behavior. This is likely true for most mammals. Perhaps a shorter-term goal for a machine would be to recognize emotion in a mouse or something a little less complex than a human.
When a human is born, the sensors and detectors built into the processor (the brain) are all functional, yet the human "computer" appears only to have the capability to operate in autonomic mode (e.g., heartbeat).
The challenge for affective computing would likely be to figure out what the human model goes through to create things like a "moral compass." I would assume that we all want our machines to properly adapt to needs in a way that is similar to the human response. Most of our daily habits are developed from the initial impressions that were instilled in us by our early environment.
Interesting study! I will be watching to see how this goes in the future. Will I like the personality of the robots in the future? Who knows!
The idea of including emotional cues in HMI is valid, and I am sure that there are clever techniques to measure the emotions, like the one mentioned in the article that looks for skin color modulation. I looked at their demonstration video and couldn't see it with my own eyes, but digital sensors are more sensitive and can also enhance the effect with clever math.
At the same time, this is an inexact science: polygraph testing is even more involved technologically but has been proven to be unreliable.
You're exactly right on one hand. In order to get successful results, the system has to be able to detect emotions in various environments, under any given condition (ex: blurry faces, bad lighting, outside interference). Our facial coding technology, Affdex, was designed to do just that by gathering spontaneous facial expressions (emotions occurring naturally in real-world environments). We continually evaluate the accuracy of current classifiers, while training and testing new classifiers. On the other hand, this isn’t a future idea, this is happening now. Affdex has already been validated in developed and emerging markets.
Robots reading and having emotions is right up there with slapping lipstick on a pig. I believe KISS is the best and safest mode for a robot’s “personality” and having a robot “guess” what people are “feeling” and “guess” what to do next is a recipe for inconsistent robot operations and will cause more problems than it is worth.