This year's Edge.org question asks, "What do you think about machines that think?" Editor John Brockman collected 186 individual responses from such prominent thinkers as Nick Bostrom, Daniel Dennett, Rodney Brooks, Susan Blackmore, Alison Gopnik, Andy Clark, and Martin Rees.

Holy smokes, but is there a lot to chew through here. And it certainly looks like there's a lot worth chewing on. Here's how Brockman prefaced this year's exercise:

In recent years, the 1980s-era philosophical discussions about artificial intelligence (AI)—whether computers can "really" think, refer, be conscious, and so on—have led to new conversations about how we should deal with the forms that many argue actually are implemented. These "AIs", if they achieve "Superintelligence" (Nick Bostrom), could pose "existential risks" that lead to "Our Final Hour" (Martin Rees). And Stephen Hawking recently made international headlines when he noted "The development of full artificial intelligence could spell the end of the human race."

But wait! Should we also ask what machines that think, or, "AIs", might be thinking about? Do they want, do they expect civil rights? Do they have feelings? What kind of government (for us) would an AI choose? What kind of society would they want to structure for themselves? Or is "their" society "our" society? Will we, and the AIs, include each other within our respective circles of empathy?

Numerous Edgies have been at the forefront of the science behind the various flavors of AI, either in their research or writings. AI was front and center in conversations between charter members Pamela McCorduck (Machines Who Think) and Isaac Asimov (Machines That Think) at our initial meetings in 1980. And the conversation has continued unabated, as is evident in the recent Edge feature "The Myth of AI", a conversation with Jaron Lanier that evoked rich and provocative commentaries.

Is AI becoming increasingly real? Are we now in a new era of the "AIs"? To consider this issue, it's time to grow up. Enough already with the science fiction and the movies, Star Maker, Blade Runner, 2001, Her, The Matrix, "The Borg". Also, 80 years after Turing's invention of his Universal Machine, it's time to honor Turing, and other AI pioneers, by giving them a well-deserved rest. We know the history.

All of this is encouraging, even though there are a good number of AI naysayers among the respondents. Though we're probably still decades away from the kinds of AI that could bite us in the ass, it's crucial that we have conversations like these now to raise awareness. As Oxford philosopher Nick Bostrom notes in his response:

[T]he degree to which we manage to get our act together will have some effect on the odds. The most useful thing that we can do at this stage, in my opinion, is to boost the tiny but burgeoning field of research that focuses on the superintelligence control problem (studying questions such as how human values can be transferred to software). The reason to push on this now is partly to begin making progress on the control problem and partly to recruit top minds into this area so that they are already in place when the nature of the challenge takes clearer shape in the future. It looks like maths, theoretical computer science, and maybe philosophy are the types of talent most needed at this stage.

That's why there is an effort underway to drive talent and funding into this field, and to begin to work out a plan of action. By the time this comment is published, the first large meeting to develop a technical research agenda for AI safety will just have taken place.

You can read all 186 responses here.