Right now, you are reading a piece of social media. That means it's designed to be passed around on social networks - though it appears on io9.com, it can easily be transplanted to Facebook, Twitter, StumbleUpon, and dozens of other sites. Or it can be converted to an email or a text message you can pass along to people who've never heard of io9. That's the difference between traditional print media and social media - the former requires you to seek out a publication to get information, whereas the latter requires you to be part of a social network. Many argue that this is a form of media that has never existed before, and that it's about to change not just the gadgets we buy, but the very fabric of our culture - how we communicate who we are, and how we remember our history.
Over the weekend at South by Southwest, I moderated a panel of science fiction authors and social media makers about whether science fiction scenarios can give us insight into where this new kind of media is taking us as a civilization. Their answers ranged from the optimistic to the paranoid-dystopian - and their scenarios were all surprisingly plausible. Here's what we discussed.
Will the wisdom of crowds create a dumb artificial intelligence?
Maureen McHugh, who writes SF novels as well as stories for transmedia studio Fourth Wall, pointed out that most people are comfortable with what Robert Heinlein described as the "first and second levels of extrapolation" - in the first, you imagine a new technology (like the internet), and in the second you imagine an infrastructure for it (new mobile devices, ocean cables, services like Twitter). But in the third level of extrapolation, you imagine the ways this new technology and its infrastructure will change society. That's what we wanted to know.
NPR producer and blogger Matt Thompson suggested we start by considering the "twin neuroses" of science fiction: the hive mind and artificial intelligence. In the world of social media, we have access to a social hive mind (what commentators sometimes call "the wisdom of crowds"), but part of that mind is created by AIs. Bots, information aggregators, and algorithms are among the voices you hear on everything from Twitter (where I personally follow a bridge, a tree, and a satellite) to Facebook (where updates come from people and brands). Thompson said we need to think of the human hive mind and artificial intelligence on a continuum - we are not dealing with a future where we'll meet an AI like HAL from 2001, but instead a strange version of ourselves, an amalgam of avatars and humans and programs who might be no smarter than a shark but is nevertheless "alive" and autonomous.
What will be our relationship to the artificial intelligence we create?
io9's Charlie Jane Anders mentioned Matthew De Abaitua's novel The Red Men, where people discover that their online avatars have taken on a life of their own. They've become yuppies who form their own community, where they conduct right-wing social experiments outside the control of their "real life" counterparts. Will we have an adversarial relationship with the AI that grows out of our hive mind? "What will happen when you search for something on Google and Google asks, 'Why do you want to know that?'" Anders wondered.
Comic book artist and writer Molly Crabapple pointed out that we may also continue to have the same old adversarial relationships we've always had with other humans, but transformed by social gaming. She outlined a scenario where, instead of working at WalMart, people would compete in a WalMart Box Stacking Challenge, stacking boxes all day in the hope of winning a salary. This is the dark side of gamification, she said, where people are fooled into thinking they're having fun when they're really being exploited.
We all wondered whether this basic problem would infect the "wisdom of crowds" too - crowds, after all, have spawned things like lynchings and genocides. As I said, "The wisdom of crowds isn't always awesome." Perhaps the hive mind AI will have a truly adversarial relationship with us - maybe we are building a monster with our social media. Or maybe we are just building something large and dumb. "Maybe you won't talk to the AI because you're just a neuron in it," McHugh speculated.
Can social media function as collective memory?
I talked a bit about Hannu Rajaniemi's novel The Quantum Thief - already out in the UK, and forthcoming in a US edition from Tor in May. In that book, post-humans are living in a society on Mars that has been utterly transformed by social networks - all events and memories are saved as "exomemories" to a vast quantum database. Who you are, from your most private thoughts to your stroll down the street, is part of the public hive mind. But the citizens of this society, called the Oubliette, have taken back user control by inventing new internal organs which are constantly negotiating privacy settings in every social situation. Whether you're walking down the street or having a business negotiation, each person can choose whether other people around them will remember what has happened. So you can walk down the street in stealth mode - nobody remembers seeing you - or have a private conversation about business that only you remember. Or that only you and one other person are allowed to remember. Of course there are as many flaws in this system as there are in Facebook privacy settings. It's a terrific third-level extrapolation of where social media might take us, into a world where we've changed our bodies to control memories.
Thompson followed up by saying that a lot of these issues come back to memory. Is social media creating a repository of memories the way libraries have done? Will social networks and articles like this one on io9 help people learn from history, or will this whole conversation be forgotten - lost in the babble of the hive mind? McHugh added that this has always been an issue when new media arrived. Socrates worried that literacy would destroy human memory, because people would no longer memorize stories and ideas but instead record them externally in books.
Is your online identity fragmented or whole?
We concluded by talking about how these questions of memory come back to the idea of our online avatars and our relationship to them. We no longer control how people remember us because it's increasingly difficult to hide parts of ourselves in public. Anders pointed out that futurists of the past used to worry that social media would turn us into fragmented, schizophrenic people whose lives have been shattered into many selves. "But now it seems that the opposite has happened," she said. "All our selves are too closely linked."
Crabapple noted that the recent scandal over NPR fundraiser Ron Schiller - caught on tape saying the Tea Party is racist - reflected this problem, where a person speaking informally discovered that his personal comments were used to destroy his professional reputation. "The more our personal lives and professional lives are connected through video and pictures and online profiles, the more people will be self-censoring," Crabapple predicted.
Though many of our predictions were fairly dark, we all agreed that there are positive ways that social media and the hive mind will change our culture. We have all had terrific experiences finding community, business partners, and lovers online through social networks that didn't exist twenty years ago. If the AI that grows out of our hive mind is a reflection of who we are, it will be filled with as many good impulses as bad ones.
So, will social media usher in a world where we'll be stacking boxes for "fun," or one where we'll be building new communities whose democratic conversations are remembered as a contribution to social progress? In the end, it always comes down to who controls the game.
You can read more about our panel at The Atlantic.