The Prisoner’s Dilemma
Two members of a criminal gang, Simon and Peter, are arrested under suspicion of committing an armed robbery. The police do not have sufficient evidence to charge either one with the robbery. Instead, they interrogate both suspects individually and offer each the following bargain, hoping one will incriminate the other. If Simon and Peter both incriminate each other, each of them will serve two years in prison. If Simon betrays Peter but Peter remains silent, Simon will go free and Peter will serve three years in prison (and vice versa). However, if Simon and Peter both remain silent, they will each serve just one year in prison on the lesser charge of possession of a handgun.
On the one hand, each prisoner individually does better by betraying the other than by keeping silent. If Peter says nothing, Simon does better by betraying Peter and going free than by keeping silent and serving one year. If Peter betrays Simon, Simon again does better by betraying Peter and serving two years than by keeping silent and serving three years. On the other hand, if both prisoners follow this logic, they will betray one another and each serve two years in prison, whereas if both remain silent, they will each serve only one year.
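The dominance reasoning above can be sketched in a few lines of Python. This is only an illustration: the payoff table, the dictionary layout, and the helper name are ours, not part of the original problem statement.

```python
# A minimal sketch of the dilemma's payoff table (years in prison, so
# lower is better). The layout and names here are illustrative.
SENTENCES = {
    # (Simon's move, Peter's move): (Simon's years, Peter's years)
    ("betray", "betray"): (2, 2),
    ("betray", "silent"): (0, 3),
    ("silent", "betray"): (3, 0),
    ("silent", "silent"): (1, 1),
}

def best_reply(peters_move):
    """Simon's move that minimizes his own sentence, given Peter's move."""
    return min(("betray", "silent"),
               key=lambda m: SENTENCES[(m, peters_move)][0])

# Betrayal is a dominant strategy: it is Simon's best reply either way,
# even though mutual silence (1, 1) beats mutual betrayal (2, 2).
print(best_reply("silent"), best_reply("betray"))  # betray betray
```

The same check with Simon and Peter swapped gives the identical answer, which is exactly why both "rational" prisoners end up with the worse joint outcome.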
The above problem, known as the Prisoner’s Dilemma, is a cornerstone of game theory and has many real-world applications in economics, climate change, sports, and social and political life. For example, we can imagine Simon and Peter as two farmers living in a lawless society. Even if they have a tacit agreement not to steal one another’s crops, each of them does better by betraying the other’s trust. Simon does better by stealing Peter’s pumpkins: if Peter does not steal, Simon will have some of Peter’s pumpkins in addition to his own squash, and if Peter does steal, Simon will at least have a few pumpkins to compensate for his squash loss. Yet we obviously want individuals living in a society to refrain from such behavior!
Newcomb’s Problem
In this second scenario, you have a choice between taking the contents of both closed boxes, A and B, or the contents of Box B alone. Box A always contains $1,000. Box B contains either nothing or $1,000,000. A week before you make your decision, a super-intelligent entity (a god, an alien, a psychic, an MRI scanner, etc.) attempts to predict your decision. If the predictor thinks you will take both boxes, it puts nothing in Box B. If it thinks you will take only Box B, it puts $1,000,000 in Box B. The predictor guesses your choice, allocates the money accordingly, and you have a week to weigh your options. What do you choose?
On one interpretation, you should always take Box B if you believe the predictor to be accurate. For example, if the predictor has been 90% accurate in the past, you can expect to win $900,000 (0.9 x $1,000,000 + 0.1 x $0) by taking Box B versus $101,000 (0.9 x $1,000 + 0.1 x $1,001,000) by taking both boxes. On another interpretation, you should take both boxes regardless of the predictor’s accuracy because the predictor has already allocated the money at the time of your choice. If it put nothing in Box B, you should take both boxes so that you at least win $1,000, and if it put $1,000,000 in Box B, you should take both boxes to win the maximum $1,001,000.
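The first interpretation’s arithmetic is easy to check by computing the expected winnings for an assumed predictor accuracy p. The function names below are ours; the payoffs come from the text.

```python
# Expected winnings under a predictor of accuracy p, using the payoffs
# from the text. A sketch of the calculation, not a resolution of the dilemma.
def ev_one_box(p):
    # With probability p the predictor foresaw one-boxing: Box B holds $1,000,000.
    return p * 1_000_000 + (1 - p) * 0

def ev_two_box(p):
    # With probability p the predictor foresaw two-boxing: Box B is empty,
    # leaving only Box A's $1,000; otherwise you collect $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

print(round(ev_one_box(0.9)))  # 900000
print(round(ev_two_box(0.9)))  # 101000
```

Setting the two expressions equal shows one-boxing has the higher expectation whenever p exceeds 1,001,000 / 2,000,000 ≈ 0.5005, so under these payoffs even a barely-better-than-chance predictor favors taking only Box B.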
One and the Same?
At first glance, the Prisoner’s Dilemma is a problem about morality, while Newcomb’s Problem is a question of free will. The Prisoner’s Dilemma asks how one should balance self-interest against cooperation for mutual gain. Newcomb’s Problem asks whether it is possible for the predictor to accurately guess your choice; it implies that if the predictor is perfect, it would be impossible for you to genuinely choose on the day in question, since the predictor already fixed the outcome a week in advance. But philosophers such as David Gauthier, Jan Narveson, and David Lewis* have argued that the Prisoner’s Dilemma is akin to Newcomb’s Problem. Let’s see how this similarity arises.
Returning to our example of Simon and Peter as farmers, both will see the greatest long-term benefit if they honor their agreement. If Simon steals Peter’s pumpkins one year, it is likely Peter will retaliate the next year, and both will quickly become more concerned with stealing from the other while guarding his own crops than with simply producing pumpkins and squash. So each is willing to honor the agreement as long as the other does so. But can they expect each other to keep up their side of the bargain?
Simon will decide to steal if he expects Peter to steal, and vice versa. So Simon must work out what Peter thinks he will do, knowing that Peter will steal if he thinks Simon will steal and won’t steal if he thinks Simon won’t. Replace Peter with “the predictor” and we begin to move into Newcomb’s Problem, where Simon must work out whether the predictor thinks he will take one box or two, knowing that the predictor will behave differently based on its prediction.
This comparison shows one way in which morality depends on knowledge and free will. For Simon and Peter to cooperate successfully, each must have a sound understanding of the pros and cons of his options and of the other’s tendencies. They must also believe their behavior has real-world consequences. If Simon believes Peter will always steal because that is his inborn nature, Simon will act very differently than if he believes Peter can change his ways (that is, if Simon can even choose at all and is not beholden to his own inborn nature!).
* David Gauthier, Morals by Agreement, New York: Oxford University Press, 1986; Jan Narveson, The Libertarian Idea, Peterborough, Ontario, Canada: Broadview Press, 1988; David Lewis, “Prisoners’ Dilemma Is a Newcomb Problem,” Philosophy and Public Affairs, Fall 1979.
An animated adaptation of this article is available courtesy of Monster Box.