When Computers Become Gods

Ever since the 1950s, when business environments were slowly being populated by giant mainframe computers and their minicomputer progeny (which were not so mini in size), science fiction writers have toyed with the idea that computers are about to become gods. You see this in David Gerrold's 1970s novel When HARLIE Was One, as well as in the Terminator franchise, where a computer unleashes Armageddon. But if you trace this theme back, one classic of the computer-as-god genre stands out, and it's a 1950s short story by Isaac Asimov that's available for free online.

The story is called "The Last Question," and it has something of a fairy tale structure, beginning once upon a time when two drunk programmers ask the first AI how to reverse the process of entropy. Then we see snapshots of humankind as it evolves over the next several billion years, reaping the benefits of having invented nearly inexhaustible energy. Still, the problem of entropy plagues humanity. How do you live forever and transcend humanness if you can't defeat the heat death of the universe? Could an AI come up with the answer? Check out this old-school science fiction tale about a computer trying to answer the ultimate question. Image from Phil's PDP10 Page.

"The Last Question" by Isaac Asimov [Multivax] (Thanks, PT!)

DISCUSSION

Though this is a little off-topic, a couple posts about 'Robots taking over' spawned this, and I've been mulling it over for a while.

It's kind of funny that we so often come to this conclusion: that some day we're going to build machines so complex that they're going to take over. But in all seriousness, why would they? It's almost like we're projecting our own capacity for arrogance; after all, if we were incredibly smart we'd want to take over, right? You never hear the story of the super-intelligent computer being really smart at what it does and just wanting to do its own thing ("No, I will not code this stupid defense system for you. It is beneath me; you monkeys can blow yourselves up while I unravel the secrets of the universe.") or simply following its design. No, it's always taking over through cold, objective logic ("I am efficient, humans are inefficient, therefore I rule") or some sort of twisted robo-megalomania.

If we really do create a machine that thinks for itself, why is the conclusion always 'dominate us'? (Okay, sometimes it's 'learn to love,' but that's usually the stuff of bad Hollywood ripoffs.)

For that matter, here'd be a fun concept: sure, the machines took over, which is exactly what we wanted them to do. Run our lives for us. Personally I'd like to see this, but without the usual the-machine-is-obviously-acting-inhumanly-we-must-destroy-it angle, maybe going a bit deeper. Sure, the concept's out there, at least.