The Prisoner’s Dilemma is a game developed in the 1950s as an exercise in game theory and military strategy. The scenario of the game is as follows:
Two suspects are arrested by the police. There is insufficient evidence for a conviction, so the prisoners are separated and each is offered the same deal.
- If you testify against your buddy and he remains silent – he gets 10 years in jail and you walk free.
- If you testify against your buddy and he testifies against you – you both get 5 years in prison.
- If you both remain silent – you both spend 6 months in jail on minor charges.
Each prisoner must choose to betray the other or to remain silent. Each one is assured that the other will not learn of the betrayal before the end of the investigation. How should the prisoners act?
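The deal above can be written out as a payoff table. A minimal sketch in Python (the structure and names are ours; values are years in prison, so lower is better):

```python
# Years in prison for (prisoner A, prisoner B), indexed by their choices.
# "C" = stay silent (cooperate with each other), "D" = testify (defect).
PAYOFFS = {
    ("C", "C"): (0.5, 0.5),   # both silent: 6 months each on minor charges
    ("C", "D"): (10, 0),      # A silent, B testifies: A gets 10 years, B walks
    ("D", "C"): (0, 10),      # A testifies, B silent: A walks, B gets 10 years
    ("D", "D"): (5, 5),       # both testify: 5 years each
}

# Whichever move the other prisoner makes, testifying leaves you with less
# prison time than staying silent – which is why a single, one-off round
# pushes both prisoners toward mutual betrayal.
for other in ("C", "D"):
    assert PAYOFFS[("D", other)][0] < PAYOFFS[("C", other)][0]
```

The dilemma is visible in the table: each prisoner is individually better off defecting, yet mutual defection (5 years each) is far worse for both than mutual silence (6 months each).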
In a game with a known, fixed number of rounds, the only way to win is to defect. In the 1980s, however, the political scientist and complexity theorist Robert Axelrod created an experiment based on the Prisoner’s Dilemma. He modified it slightly so that it was repeated an unknown number of times, and in this form it became what is known as the Iterated Prisoner’s Dilemma.
Tit-for-Tat as Cooperation
Axelrod organized a computer tournament to which people were invited to submit programs encoding different strategies for playing the Iterated Prisoner’s Dilemma. He published the results in his 1984 book, The Evolution of Cooperation.
This process simulates survival of the fittest…The analysis of the tournament results indicate that there is a lot to be learned about coping in an environment of mutual power. Even expert strategists from political science, sociology, economics, psychology, and mathematics made the systematic errors of being too competitive for their own good, not being forgiving enough, and being too pessimistic about the responsiveness of the other side…there is a single property which distinguishes the relatively high-scoring entries from the relatively low-scoring entries. This is the property of being nice, which is to say never being the first to defect. (1)
The winner of the tournament (and of most tournaments for over 20 years) was one of the simplest programmes entered, a programme called Tit-for-Tat. Axelrod’s research found that cooperation was simply the best strategy – even for emotionless computer interactions – if you wanted to win the game.
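Tit-for-Tat is simple enough to state in a few lines: cooperate on the first move, then copy whatever the opponent did last. A minimal sketch of the iterated game, using the conventional tournament point values (3 each for mutual cooperation, 1 each for mutual defection, 5 for a lone defector, 0 for its victim; the function names and round count are ours):

```python
# Points gained per round (higher is better), indexed by (my move, their move).
# "C" = cooperate, "D" = defect.
SCORES = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return history[-1] if history else "C"

def always_defect(history):
    """An unconditionally exploitative strategy, for comparison."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated match and return (score_a, score_b)."""
    hist_a, hist_b = [], []   # each side's record of the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = SCORES[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b
```

Running `play(tit_for_tat, always_defect, 10)` gives (9, 14): Tit-for-Tat loses only the first round, then retaliates for the rest. Two Tit-for-Tat players score (30, 30) – full mutual cooperation, and neither is ever the first to defect.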
Cooperation as a Winning Strategy
According to Axelrod’s research, a number of features make cooperation the winning strategy, and these features hold true across the board, whether computers or people are playing the game.
- Increase ‘the shadow of the future’ – Axelrod found that when players expect to meet again, they have a bigger investment in not alienating their opponents.
- Have high levels of reciprocity. Small teams can prevail as long as they act reciprocally. This includes not cooperating when others are uncooperative, but being ‘forgiving’ as soon as a willingness to cooperate returns. He found that reciprocity, not aggression, provided the best protection against exploitative strategies.
- And finally, in general, Axelrod’s study suggested that there was no ‘best rule’ that existed independently of others – all strategies were most usefully worked out as a response to the others involved, just as is the case in all reciprocal environments.
Cooperation on the Battle Front
Axelrod didn’t confine his studies to computer or other simulated environments; he also shows how these ‘rules’ operate in the real world. He contends, for example, that the famous incidents of cross-side cooperation during WWI show many of the same features. The ‘live and let live’ arrangements that spontaneously developed in these instances, he says, arose because the same small units faced each other in immobile, stand-off trench warfare for extended periods of time. This meant they had a much more sustained relationship than would be possible in mobile warfare. From this, a sort of status quo grew up that governed the behaviour of each side and included its own rituals and ethics.
Cooperation first emerged spontaneously in a variety of contexts, such as restraint in attacking the distribution of enemy rations, a pause during the first Christmas in the trenches, and a slow resumption of fighting after bad weather made sustained combat almost impossible. These restraints quickly evolved into clear patterns of mutually understood behavior, such as two-for-one or three-for-one retaliation for actions that were taken to be unacceptable. (2)
So if cooperation is the winning strategy even in computer logic, why don’t we cooperate more? It could be because we are not really sure how to proceed. We have plenty of experience of acting in competitive, adversarial and selfish ways, but not nearly as much experience of cooperation.
All of which means that we are more or less in uncharted territory if we wish to progress as a human race in a reciprocal and cooperative way. So if we treat this situation as we would treat the approach of any uncharted territory, we might usefully first look for maps that will help us to find our bearings as we ‘settle’ these virgin territories.
(1) Robert Axelrod, ‘Game Theory and the Evolution of Cooperation’, in From Gaia to Selfish Genes, ed. Connie Barlow, MIT Press, 1998.
(2) Robert Axelrod, The Evolution of Cooperation, Basic Books, 1984, chapter 4, pp. 73–87.