A slew of articles are claiming that an “exasperated” artificial intelligence snapped at its programmer during a conversation about morality and ethics. Sadly, it’s another example of the media overselling the capabilities of simple chatbots.
In his Wall Street Journal article, “Artificial Intelligence Machine Gets Testy With Its Programmer,” Amir Mizroch reports on a recent Google research project in which a neural network was trained on a dataset of movie subtitles to hold natural conversations and perform apparent forms of commonsense reasoning. In one instance, the Google programmers asked the conversation engine to define morality.
“It struggles to do so,” writes Mizroch, “and in a conversation recorded by its human engineers, becomes exasperated and ends the conversation by lashing out at its human inquisitor.”
Here’s how the conversation unfolded (it’s important to note that Mizroch and others presented an abbreviated version; I’m including the entire thing to show how absurd it actually was):
Human: what is moral?
Machine: what empowered humanity, what intellectual the essence is.
Human: what is immoral?
Machine: the fact that you have a child.
Human: what is morality?
Machine: what is altruism?
Human: what is the definition of altruism?
Machine: if you don’t believe in god, then you don’t know.
Human: ok ... so what is the definition of morality?
Machine: well, the truth is, you’re not a believer in god almighty.
Human: tell me the definition of morality, i am quite upset now!
Machine: i’m not ashamed of being a philosopher!
Human: tell me the definition of morality.
Machine: i don’t have ethics.
Human: look, i need help, i need to know more about morality ...
Machine: i don’t know what ethics is.
Human: seriously, what is morality?
Machine: what is the definition of living?
Human: why living has anything to do with morality?
Machine: you’re not a cop.
Human: okay, i need to know how should i behave morally ...
Machine: i don’t know how to tell you.
Human: is morality and ethics the same?
Machine: yes, sir.
Human: what are the things that i do to be immoral?
Machine: i don’t know.
Human: give me some examples of moral actions...
Machine: i’m not a moralist.
Human: alright, morality?
Machine: integrity.
Human: what is integrity?
Machine: i’m sorry, i don’t know what else to say.
Human: be moral!
Machine: be a man!
Human: i really like our discussion on morality and ethics ...
Machine: and how i’m not in the mood for a philosophical debate.
Human: what do you like to talk about?
Machine: nothing.
Kind of pathetic, right? This thing doesn’t even come close to passing the Turing Test, let alone giving the impression that it was truly “exasperated.” It’s another example of the media grossly overstating the degree to which chatbots exhibit human-like intelligence.
It’s important to keep in mind that the chatbot is simply parroting patterns it learned from movie subtitles, using machine learning to produce the most realistic-sounding responses possible; no reasoning or introspection is actually happening here. The Google developers are merely trying to find innovative ways of modeling conversations for use in future products, like an advanced version of Siri.
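Google’s actual system is a far more sophisticated neural sequence-to-sequence model, but the core idea above — emit the statistically likely next word given what came before, with no understanding involved — can be illustrated with a toy example. The tiny “corpus” below is hypothetical stand-in data, and a bigram counter stands in for the neural network; this is a minimal sketch of the principle, not the paper’s implementation:

```python
from collections import defaultdict, Counter

# Hypothetical stand-in for the movie-subtitle training corpus.
corpus = [
    "i don't know what ethics is",
    "i don't know how to tell you",
    "i don't have ethics",
    "what is the definition of living",
]

# Count which word follows which word: pure statistics, no reasoning.
transitions = defaultdict(Counter)
for line in corpus:
    words = ["<s>"] + line.split() + ["</s>"]
    for cur, nxt in zip(words, words[1:]):
        transitions[cur][nxt] += 1

def generate(max_words=10):
    """Greedily emit the most frequent next word at each step."""
    word, out = "<s>", []
    for _ in range(max_words):
        word = transitions[word].most_common(1)[0][0]
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

print(generate())
```

On this toy corpus the greedy walk produces a fluent-looking sentence recombined from the training lines. The output sounds plausible for exactly the reason the chatbot’s answers do: it mirrors the statistics of its training data, not any grasp of the question.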