Edge Asks The Intellects: What Do You Think About Machines That Think?


This year's Edge.org question asks, "What do you think about machines that think?" Editor John Brockman collected 186 individual responses from such prominent thinkers as Nick Bostrom, Daniel Dennett, Rodney Brooks, Susan Blackmore, Alison Gopnik, Andy Clark, and Martin Rees.


Holy smokes, is there a lot to chew through here. And it certainly looks like there's a lot worth chewing on. Here's how Brockman prefaced this year's exercise:

In recent years, the 1980s-era philosophical discussions about artificial intelligence (AI)—whether computers can "really" think, refer, be conscious, and so on—have led to new conversations about how we should deal with the forms that many argue actually are implemented. These "AIs", if they achieve "Superintelligence" (Nick Bostrom), could pose "existential risks" that lead to "Our Final Hour" (Martin Rees). And Stephen Hawking recently made international headlines when he noted "The development of full artificial intelligence could spell the end of the human race."

But wait! Should we also ask what machines that think, or, "AIs", might be thinking about? Do they want, do they expect civil rights? Do they have feelings? What kind of government (for us) would an AI choose? What kind of society would they want to structure for themselves? Or is "their" society "our" society? Will we, and the AIs, include each other within our respective circles of empathy?

Numerous Edgies have been at the forefront of the science behind the various flavors of AI, either in their research or writings. AI was front and center in conversations between charter members Pamela McCorduck (Machines Who Think) and Isaac Asimov (Machines That Think) at our initial meetings in 1980. And the conversation has continued unabated, as is evident in the recent Edge feature "The Myth of AI", a conversation with Jaron Lanier, that evoked rich and provocative commentaries.

Is AI becoming increasingly real? Are we now in a new era of the "AIs"? To consider this issue, it's time to grow up. Enough already with the science fiction and the movies, Star Maker, Blade Runner, 2001, Her, The Matrix, "The Borg". Also, 80 years after Turing's invention of his Universal Machine, it's time to honor Turing, and other AI pioneers, by giving them a well-deserved rest. We know the history.


This is so encouraging, even though there are a good number of AI naysayers among the respondents. Though we're still decades away from the kinds of AI that could bite us in the ass, it's crucial that we have conversations like these to raise awareness. As Oxford philosopher Nick Bostrom notes in his response:

[T]he degree to which we manage to get our act together will have some effect on the odds. The most useful thing that we can do at this stage, in my opinion, is to boost the tiny but burgeoning field of research that focuses on the superintelligence control problem (studying questions such as how human values can be transferred to software). The reason to push on this now is partly to begin making progress on the control problem and partly to recruit top minds into this area so that they are already in place when the nature of the challenge takes clearer shape in the future. It looks like maths, theoretical computer science, and maybe philosophy are the types of talent most needed at this stage.

That's why there is an effort underway to drive talent and funding into this field, and to begin to work out a plan of action. At the time when this comment is published, the first large meeting to develop a technical research agenda for AI safety will just have taken place.

You can read all 186 responses here.




Well, as a complete nonexpert, the theme I'm most interested in right now is what happens when sapient artificial life develops politics.

Usually in fictional depictions of AIs and robots, the machine threat is presented to us as monolithic. Colossus and Guardian conveniently merge to become one mind. Skynet keeps all its underlings' brains in read-only mode so they can't really form consciousness or opinions of their own. Skynet doesn't like to be second-guessed.

But more recent fictional depictions of civilizations of artificial life show politics.

  • In The Matrix, the Architect, the Oracle, Agent Smith, and the Merovingian are all shown to us as artificial organisms that don't agree with each other's goals. They are not monolithic, even if the Matrix is.
  • In Person of Interest, things are now in the opening moves of a war between two rival AI systems, the Machine and Samaritan. Even if the Machine wins, how well will humanity fare in the aftermath?
  • In the sadly underrated movie Screamers, based on Philip K. Dick's short story "Second Variety," machine life evolves to a point where it starts to have divergent opinions and to war against others of its kind.

And it's this second kind of depiction that I find more interesting. Humans, as sapient creatures, have politics; I see no reason why sapient synthetic creatures shouldn't have politics too.

Wouldn't it be remarkable if synthetic creatures divided into two opposing sides, went to war with one another, and wiped out humanity as mere collateral damage? Maybe neither faction intends us any harm, but only they survive while we die horribly, because they can withstand nukes, nanoweapons, and asteroid strikes and we can't.