So Watson just pwned humanity, setting a milestone in the history of artificial intelligence. But this trouncing gives us—as we lick our wounds, cry foul, or demand a rematch—the opportunity to ask afresh what it means to be human.
At least as far back as Socrates and Plato, Homo sapiens has been fascinated with the question of what makes it special and unique. In antiquity, this took the form of obsessively comparing humans against other animals. In the twenty-first century, it's machines we keep anxiously measuring ourselves against. Each new step ahead for AI, it seems, has whittled down the gap. But what has been so fascinating about these milestones, over the past six or seven decades, is the order in which they've come.
What's telling about the Watson project is that the IBM team barely treats the acquisition and storage of all factual world knowledge as difficult, or even interesting. Instead they frame the problem entirely in terms of figuring out how to deal with the "variety, ambiguity, subtlety, breadth, and expressiveness of human language and meaning." What we find impressive about a Jeopardy champion turned out to be a relative cinch to program; what we take entirely for granted about any Jeopardy contestant turned out to be almost the whole of the challenge.
Part of the match's significance is that it serves as a perfect reminder of what's called "Moravec's paradox"—that AI tackled domains long considered "hard" before it tackled domains considered "easy." Autopilot software could land planes decades ago, but we're only now seeing software that can parallel park a car; bicycling remains elusive. Software corrects the spelling of even professional wordsmiths, but still struggles to read aloud with the emotion any grade-school student can muster. It can passably translate scientific and political documents from one language to another, but still struggles with being shown a picture of a horse and saying, as a child does, "Horse!"
And in some sense the message is a profoundly life-affirming one: expertise turns out to be less impressive than the raw, embodied, experiential stuff of everyday creatural life. Recognizing faces. Using natural language. Navigating physical space and avoiding obstacles. Sensing, perceiving, imagining.
What we find most impressive about ourselves, precisely because it's so "hard" for us, has turned out to be the simplest thing to encode into AI. What we have traditionally found least impressive about the human experience, because it comes so easily to us and represents our common denominator, has turned out to be what is perhaps most impressive.
I think it's fair to view the contest, more broadly, as a victory—or at the very least a validation. What IBM found hard isn't what makes Jennings and Rutter better Jeopardy players than the rest of us. What the engineers found hard are the qualities that make those two human superbrains just like the rest of us.
Brian Christian is author of The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive, which goes on sale March 1st.