Sometimes, the best way to illustrate a complicated philosophical concept is by framing it as a story or situation. Here are nine such thought experiments with downright disturbing implications.
Top image: Isaac Gutiérrez Pascual; published with permission.
The Prisoner's Dilemma is the classic game theory problem in which a suspect is confronted with a difficult decision: stay silent or confess to the crime. The trouble is, the suspect doesn't know how their accomplice will respond.
Here’s the Prisoner’s Dilemma in a nutshell, via the Stanford Encyclopedia of Philosophy:
Tanya and Cinque have been arrested for robbing the Hibernia Savings Bank and placed in separate isolation cells. Both care much more about their personal freedom than about the welfare of their accomplice. A clever prosecutor makes the following offer to each. “You may choose to confess or remain silent. If you confess and your accomplice remains silent I will drop all charges against you and use your testimony to ensure that your accomplice does serious time. Likewise, if your accomplice confesses while you remain silent, they will go free while you do the time. If you both confess I get two convictions, but I'll see to it that you both get early parole. If you both remain silent, I'll have to settle for token sentences on firearms possession charges. If you wish to confess, you must leave a note with the jailer before my return tomorrow morning.”
This thought experiment is troubling because it shows that we don't always make the "right" decisions when confronted with insufficient information and other self-interested decision-makers. The "dilemma" is that confessing is each suspect's dominant strategy, yet the best joint outcome would have been mutual silence.
This has implications for everything from the coordination of international cooperation (including the prevention of nuclear war) through to our potential contact and communication with extraterrestrial intelligences (i.e., even though all interstellar civilizations would benefit from cooperation, the dominant strategy may be to unleash self-replicating berserker probes against everyone else before they do it to you).
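The structure of the dilemma can be made concrete with a small payoff table. In the sketch below, the sentence lengths are illustrative assumptions loosely matching the prosecutor's offer; whatever the accomplice does, confessing minimizes your own sentence, even though mutual silence beats the mutual-confession outcome:

```python
# Illustrative payoffs for the Prisoner's Dilemma: years in prison
# (lower is better). Exact numbers are assumptions, not from the text.
# Key is (my_choice, their_choice); value is my sentence.
SENTENCES = {
    ("confess", "silent"):  0,   # charges dropped, accomplice does serious time
    ("silent",  "confess"): 10,  # the reverse: you do the time
    ("confess", "confess"): 5,   # two convictions, early parole
    ("silent",  "silent"):  1,   # token firearms charge
}

def best_response(their_choice):
    """Return the choice that minimizes my sentence, given theirs."""
    return min(("confess", "silent"),
               key=lambda mine: SENTENCES[(mine, their_choice)])

# Confessing is the best response no matter what the accomplice does...
assert best_response("silent") == "confess"
assert best_response("confess") == "confess"
# ...yet mutual silence is better for both than mutual confession.
assert SENTENCES[("silent", "silent")] < SENTENCES[("confess", "confess")]
```

This is what makes it a dilemma: individually rational play drives both prisoners to the jointly worse outcome.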
Sometimes referred to as Mary's Room or the Knowledge Argument, this thought experiment is meant to stimulate discussion against a purely physicalist view of the universe, namely the suggestion that everything, including mental processes, is entirely physical. It tries to show that there are indeed non-physical properties, and attainable knowledge, that can only be learned through conscious experience.
The originator of the concept, Frank Jackson, explains it this way:
Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’...What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?
Put another way, Mary knows everything there is to know about color except for one crucial thing: she's never actually experienced color consciously. Her first experience of color is something she couldn't possibly have anticipated; there's a world of difference between knowing something academically and actually experiencing it.
This thought experiment teaches us that there will always be more to our perception of reality, including consciousness itself, than objective observation can capture. It essentially shows us that we don't know what we don't know. The thought experiment also gives us hope for the future; should we augment our sensory capabilities and find ways to expand conscious awareness, we could open up entirely new avenues of psychological and subjective exploration.
This one's also known as the Beetle in a Box, part of Wittgenstein's broader Private Language Argument, and it's somewhat similar to Mary's Room. In his Philosophical Investigations, Wittgenstein proposed a thought experiment that challenges the way we look at introspection and how it informs the language we use to describe sensations.
For the thought experiment, Wittgenstein asks us to imagine a group of individuals, each of whom has a box containing something called a "beetle." No one can see into anyone else's box, so each person can describe only their own beetle, and there might be something different in each person's box. Consequently, Wittgenstein argues, the hidden object can play no part in the "language game": people will go on talking about what is in their boxes, but the word "beetle" simply ends up meaning "that thing that is in a person's box."
Why is this bizarre thought experiment disturbing? The beetle stands in for our minds: we can't know exactly what it is like inside another individual's mind, what other people are experiencing, or how unique their perspective is. It's an issue closely related to the so-called hard problem of consciousness and the phenomenon of qualia.
Philosopher John Searle asks us to imagine someone who knows only English, sitting alone in a room and following English instructions for manipulating strings of Chinese characters. By following the rules, the person produces convincing written replies to the Chinese messages passed into the room, so to those outside, it appears that the person inside understands Chinese.
The argument is supposed to show that, while advanced computers may appear to understand and converse in natural language, they are not capable of genuine understanding; like the person in the room, they merely manipulate symbolic strings according to formal rules, without grasping what the symbols mean. The Chinese Room was meant to be a killer argument against artificial intelligence, but it takes a rather simplistic view of AI and where it's likely headed, including the advent of artificial general intelligence (AGI) and the potential for artificial consciousness.
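Searle's setup can be caricatured in a few lines of code: a pure lookup table that maps Chinese symbol strings to canned Chinese replies. The entries below are my own hypothetical examples, but the point survives any table you substitute; the program can appear conversational while representing nothing about meaning:

```python
# A toy "Chinese Room": a rulebook maps input symbol strings to
# output symbol strings. The entries are hypothetical stand-ins;
# the program never represents what any of the symbols mean.
RULEBOOK = {
    "你好": "你好！",          # a greeting gets a greeting back
    "你会说中文吗": "会。",    # "do you speak Chinese?" -> "yes."
}

def room(symbols: str) -> str:
    """Produce a reply purely by the shape of the input symbols."""
    # Unrecognized input gets a stock "please say that again."
    return RULEBOOK.get(symbols, "请再说一遍。")

assert room("你好") == "你好！"  # looks like understanding; it is only lookup
```

From outside, the room's replies are indistinguishable from (very limited) comprehension, which is exactly the intuition Searle is pumping.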
That said, Searle is right to suggest that an AI could act and behave as if there were conscious awareness and understanding. This is problematic because it may convince us humans that true comprehension is going on where there is none. We'd best be careful, therefore, around seemingly "smart" machine minds.
Philosopher Robert Nozick’s Experience Machine is a strong hint that we should probably just plug ourselves into a kind of hedonistic version of The Matrix.
From his book, Anarchy, State and Utopia (1974):
Suppose there were an experience machine that would give you any experience you desired. Superduper neuropsychologists could stimulate your brain so that you would think and feel you were writing a great novel, or making a friend, or reading an interesting book. All the time you would be floating in a tank, with electrodes attached to your brain. Should you plug into this machine for life, preprogramming your life experiences?...Of course, while in the tank you won't know that you're there; you'll think that it's all actually happening...Would you plug in?
The basic idea here is that we have very good reasons to plug ourselves into such a machine. Because we live in a universe with no apparent purpose, and because our lives are often characterized by less-than-ideal conditions like toil and suffering, we have no good reason not to opt for something substantially better, even if it is "artificial." But what about human dignity? And the satisfaction of our "true" desires? Nozick's thought experiment may seem easy to dismiss, but it has challenged philosophers for decades.
Here’s one for the ethicists — and you can blame the renowned moral philosopher Philippa Foot for this one. This thought experiment, of which there are now many variations, first appeared in Foot’s 1967 paper, “Abortion and the Doctrine of Double Effect.”
Imagine that you're at the controls of a railway switch and an out-of-control trolley is coming. The track branches in two: one branch leads to a group of five people, the other to a single person. If you do nothing, the trolley will smash into the five. If you flip the switch, it will change tracks and strike the lone person. What do you do?
Utilitarians, who seek to maximize overall happiness, say you should flip the switch: better one death than five. Kantians, because they see people as ends in themselves and never merely as means, would argue that you can't treat the single person as a means to the benefit of the five. So you should do nothing.
A second variation of the problem involves a “fat man” and no second track — a man so large that, if you were to push him onto the tracks, his body would prevent the trolley from smashing into the group of five. So what do you do? Nothing? Or push him onto the tracks?
This thought experiment reveals the complexity of morality by distinguishing between killing a person and letting a person die, a problem with implications for our laws, behavior, science, policing, and war. "Right" and "wrong" are not as simple as they're often made out to be.
This one's reminiscent of Plato's Cave, another classic (and disturbing) thought experiment. Proposed by Thomas Nagel in his essay "Birth, Death, and the Meaning of Life," it addresses issues of non-interference and the meaningfulness of life. Nagel got the idea when he noticed a sad little spider living in a urinal in the men's bathroom at Princeton, where he was teaching. The spider appeared to have an awful life, constantly getting peed on, and "he didn't seem to like it." Nagel continues:
Gradually our encounters began to oppress me. Of course it might be his natural habitat, but because he was trapped by the smooth porcelain overhang, there was no way for him to get out even if he wanted to, and no way to tell whether he wanted to...So one day toward the end of the term I took a paper towel from the wall dispenser and extended it to him. His legs grasped the end of the towel and I lifted him out and deposited him on the tile floor.
He just sat there, not moving a muscle. I nudged him slightly with the towel, but nothing happened...I left, but when I came back two hours later he hadn't moved.
The next day I found him in the same place, his legs shriveled in that way characteristic of dead spiders. His corpse stayed there for a week, until they finally swept the floor.
Nagel acted out of empathy, assuming that the spider would fare better — and perhaps even enjoy life — outside of its normal existence. But the exact opposite happened. In the end, he did the spider no good.
This thought experiment forces us to consider the quality and meaningfulness of not just animal lives, but our own as well. How can we ever know what anyone really wants? And do our lives actually do us any good? It also forces us to question our policies of intervention. Despite our best intentions, interference can sometimes inflict unanticipated harm. It’s a lesson embedded within Star Trek’s Prime Directive — but as the Trolley Problem illustrated, sometimes inaction can be morally problematic.
In this thought experiment, we are asked to imagine a world in which humans don't care for the taste of meat. In such a scenario, no animals would be raised as livestock, and as a consequence there would be a dramatic decrease in the number of animals, like pigs, cows, and chickens, that ever get to live. As Leslie Stephen (to whom the line is sometimes misattributed as Virginia Woolf's) once wrote, "Of all the arguments for Vegetarianism none is so weak as the argument from humanity. The pig has a stronger interest than anyone in the demand for bacon. If all the world were Jewish, there would be no pigs at all."
This line of reasoning can lead to some bizarre, even repugnant, conclusions. For example, is it better to have 20 billion people on the planet living at a poor standard of living, or 10 billion living at a higher standard? If the latter, then what about the 10 billion lives that never happen? And how can we feel bad about lives that never occurred?
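The tension can be made concrete with some toy arithmetic. The per-person welfare scores below are illustrative assumptions, not figures from the literature; the point is that a "total welfare" view and an "average welfare" view can rank the same two populations differently once the worse-off population gets large enough:

```python
# Toy comparison of the two populations from the text.
# Per-person welfare scores are illustrative assumptions.
poor_pop,  poor_welfare  = 20_000_000_000, 1.0   # 20 billion, low quality of life
small_pop, small_welfare = 10_000_000_000, 3.0   # 10 billion, high quality of life

total_poor  = poor_pop * poor_welfare     # total welfare, large population
total_small = small_pop * small_welfare   # total welfare, small population

# With these numbers, both totals and averages favor the smaller,
# better-off population...
assert total_small > total_poor
assert small_welfare > poor_welfare

# ...but triple the worse-off population and the *total* view flips,
# preferring ever more people with lives barely worth living.
assert 60_000_000_000 * poor_welfare > total_small
```

This is the arithmetic engine behind Derek Parfit's "repugnant conclusion": totals reward sheer numbers, while averages ignore the people who never get to exist.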
This thought experiment is why I'm a complete fanboy of John Rawls. He asks us to imagine ourselves in a situation in which we know nothing of our true lives: we are behind a "veil of ignorance" that prevents us from knowing the political system under which we live or the laws that are in place. Nor do we know anything about psychology, economics, biology, or the other sciences. Along with a group of similarly ignorant people, we are asked, from this original position, to review a comprehensive list of classic forms of justice drawn from various traditions of social and political philosophy. We are then given the task of selecting the system of justice we feel would best suit our needs, in the absence of any information about our true selves and the situation we actually occupy in the real world.
So, for example, what if you came back to “real life” to find out that you live in a shanty town in India? Or a middle class neighborhood in Norway? What if you’re a developmentally disabled person? A millionaire? (Or as I proposed in my paper, “All Together Now,” a different species?)
According to Rawls, we would likely end up picking something that guarantees equal basic rights and liberties, secures our interests as free and equal citizens, and allows us to pursue a wide range of conceptions of the good. He also speculated that we'd likely choose a system that ensures fair educational and employment opportunities.