Empathic Virtual Humans Will Pass the Voight-Kampff Test


The bioengineered Replicants in Blade Runner were indistinguishable from humans except for their lack of empathy. Now researchers are creating virtual humans that can detect human emotions through nonverbal cues and develop appropriate responses. This could lead to artificial life forms who are not only intelligent, but empathic as well.


Catherine Pelachaud develops virtual humans, called Embodied Conversational Agents (ECAs), at the Paris Institute of Technology. Pelachaud has found that people frequently lose interest in ECAs because they don't seem sufficiently human. To create ECAs that keep human conversationalists engaged, her team is developing a virtual human that will recognize and respond to human emotions. They are training ECAs to detect emotional expressions via webcam, and studying how flesh-and-blood humans react to the virtual humans' responses. They hope this will improve the way humans interact with virtual agents:

Pelachaud said this could be useful in applications where a person is seeking information from the agent. If the agent gets it wrong and detects the person becoming upset, it could show empathy through nonverbal signs, which could help reduce the person's frustration.

"Having an agent that shows empathy can enhance the relationship between a user an agent," she said. "The user may still not get the information, but at least they won't feel so negative from the the interaction."


Greta, an ECA the team is training to become empathic, seems to be the antithesis of the character program “E,” which a team at Rensselaer is using to study computer-generated evil.

[Discovery News]



DISCUSSION

twDarkflame

Recognising is far different from understanding.

For a sentient being to have empathy, they have to have the capacity to "emulate" what they think someone else's point of view will be.

This requires both recognition AND the ability to experience what is being recognised.

I'm sure emotion-detection has some uses, but it's not empathy in itself.

Personally, I'm highly skeptical of all forms of top-down AI development.

I think our best bet for true self-aware bits of software is to set up a suitable environment and evolve neural nets based on selective criteria.

(We can be a lot more focused than real evolution, and we can also overclock the speed of the simulated environment... we don't need to wait billions of years to get a result :p).

A good start would be to train neural-net bots to navigate mazes, and allow the bots to communicate at a data rate similar to ours, to exchange data about the maze with each other.

Put selective criteria in place so that the bots get the most "food" (aka, the best chance of reproduction) if they work together.

This does the double whammy of helping to evolve creatures able to communicate AND teaching them, to their very core, that co-operation is a good thing.
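
The maze idea above is easy to prototype. Here is a minimal single-bot sketch, assuming a toy 5x5 grid maze and a plain genetic algorithm over the weights of a tiny feedforward net; the communication channel and co-operation reward the comment calls for are left out, and every name and parameter is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# 5x5 maze: 0 = open, 1 = wall. Start top-left, "food" bottom-right.
MAZE = np.array([
    [0, 0, 1, 0, 0],
    [1, 0, 1, 0, 1],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
])
START, GOAL = (0, 0), (4, 4)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def sense(pos):
    """Observation: blocked/open status of the four neighbours + position."""
    feats = []
    for dr, dc in MOVES:
        r, c = pos[0] + dr, pos[1] + dc
        blocked = not (0 <= r < 5 and 0 <= c < 5) or MAZE[r, c] == 1
        feats.append(1.0 if blocked else 0.0)
    feats += [pos[0] / 4.0, pos[1] / 4.0]
    return np.array(feats)

def act(net, obs):
    """One hidden layer; the bot takes the move with the highest output."""
    w1, w2 = net
    return int(np.argmax(np.tanh(obs @ w1) @ w2))

def fitness(net, steps=30):
    pos = START
    for _ in range(steps):
        dr, dc = MOVES[act(net, sense(pos))]
        r, c = pos[0] + dr, pos[1] + dc
        if 0 <= r < 5 and 0 <= c < 5 and MAZE[r, c] == 0:
            pos = (r, c)
        if pos == GOAL:
            return 100.0  # found the food
    # Partial credit: ending closer to the goal scores higher.
    return -float(abs(GOAL[0] - pos[0]) + abs(GOAL[1] - pos[1]))

def new_net():
    return [rng.normal(0, 1, (6, 8)), rng.normal(0, 1, (8, 4))]

def mutate(net, sigma=0.2):
    return [w + rng.normal(0, sigma, w.shape) for w in net]

# Evolve: keep the fittest quarter, refill the population with mutants.
population = [new_net() for _ in range(48)]
for gen in range(60):
    population.sort(key=fitness, reverse=True)
    survivors = population[:12]
    population = survivors + [
        mutate(survivors[rng.integers(len(survivors))]) for _ in range(36)
    ]
    if gen % 10 == 0:
        print(f"gen {gen}: best fitness {fitness(population[0]):.1f}")
```

Fitness here is just progress toward the goal; the co-operation pressure the comment describes would replace it with a shared "food" score across bots that exchange maze information.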