You might like it when your TiVo predicts what you want to watch, but you probably don't think that makes it intelligent. What if your TiVo could cheer during a game, though, or cry with you through a poignant death scene in Battlestar Galactica? Researchers with the HUMAINE project are studying emotional interaction between humans and machines, and they're asking exactly this question. In essence, will people consider their machines intelligent once those machines can express what appear to be feelings?

HUMAINE has gathered psychologists, philosophers, sociologists, and computer animation specialists, along with database developers and programmers, to tackle the issue of machine emotion. Whether HUMAINE's approach results in a better way of recognizing and displaying emotion might be beside the point. The philosophers are on board to help decide whether we should imbue machines with emotions at all.


The logical, emotionless decision-making of sci-fi A.I. is something we both admire (Data, good Terminators) and fear (HAL 9000, bad Terminators). Would it be ethical to give such machines emotions? I'm not sure I want to deal with an ATM that's been having a bad day, much less an armed police robot. In reality, we probably want a lesser degree of machine emotion, a realistic yet fake emotive ability that makes us feel better but doesn't affect the computer's decisions.


The bigger question might be: would an emotionless A.I. be any kind of intelligence at all? I'm not sure a machine could make the intuitive leaps and strokes of genius we think of as hallmarks of human intelligence in the absence of emotion.

Photo by: Warner Bros.

Emotional Machines. [ICT Results]
