Repeated Games and Population Structure

From Evolution and Games

[[Category:Repeated Games and Population Structure]]
== The paper is out in PNAS! ==
Matthijs van Veelen*, Julián García*, David G Rand and Martin A Nowak. '''Direct reciprocity in structured populations''' (2012) PNAS 109 9929-9934. [http://www.pnas.org/content/109/25/9929.abstract Go to the paper on PNAS] or  [http://www.evolbio.mpg.de/~garcia/preprints/11.pdf Download a copy in PDF format here].
== Run the simulations ==

You can download the software [http://web.evolbio.mpg.de/~garcia/jars/RepeatedGamesAndStructureOnline.jar here]. Once the program is running, just click on the big ''play'' button to start the fun. Feel free to re-arrange and re-size the windows, and to zoom in and out while the program is running. Click [[Repeated Games and Population Structure Simulation | here]] if you need more information about running this program.
== Lifecycle ==

The following figure represents the lifecycle in the computer program.

* In step one we start with an even-sized population of finite state automata (the initial population consists of ALLD strategies only).
* In step two individuals are matched in pairs to play the game: each individual is matched with its rightmost neighbor. The repeated game is played between each pair of matched players (one PD game is played for sure, and from then on the game is repeated with probability δ).
* In step three reproduction takes place: a payoff-proportional distribution is constructed. To fill up the population in the next generation we proceed as follows: the individuals in even positions (counting from 0) are determined by sampling from this distribution; for odd positions, with probability r a copy of the parent of the left neighbor is created, and with probability 1 − r an individual is sampled from the payoff-proportional distribution (this takes care of assortment).
* Finally, in step four, every individual can mutate with a given probability. From there we go back to matching, and so on.
<center>
[[File:RepeatedGamesAndPopulationStructureLifeCycle.png|550px|center|thumb|Lifecycle]]
</center>
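The reproduction step of the lifecycle can be sketched as follows. This is a minimal illustration of payoff-proportional sampling with assortment, not the simulation's actual code; the population entries and payoff values below are placeholders.

```python
import random

def next_generation(population, payoffs, r, rng):
    """One reproduction step: payoff-proportional sampling with assortment r."""
    indices = range(len(population))
    def sample():
        # draw a parent from the payoff-proportional distribution
        return population[rng.choices(indices, weights=payoffs)[0]]
    new_pop = []
    for i in indices:
        if i % 2 == 0:
            new_pop.append(sample())        # even positions: sampled
        elif rng.random() < r:
            new_pop.append(new_pop[i - 1])  # with prob. r: copy the left neighbor's parent
        else:
            new_pop.append(sample())        # with prob. 1 - r: sampled as well
    return new_pop

# With r = 1, every pair (0,1), (2,3), ... consists of identical strategies.
rng = random.Random(42)
pop = next_generation(["ALLD", "TFT", "GRIM", "ALLC"], [1.0, 2.0, 1.0, 2.0], 1.0, rng)
print(pop[0] == pop[1] and pop[2] == pop[3])   # True
```

Setting r = 0 recovers random matching, since every position is then filled independently from the payoff-proportional distribution.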
== Strategies in the computer ==

=== Representation ===

Strategies are programmed explicitly as finite state automata (the program also has options for regular expressions and Turing machines; see the information on [[Repeated Games]] for more details about these additional representations).

A finite automaton is a list of states. For every state it prescribes what the automaton plays when in that state, to which state it goes if the opponent plays cooperate, and to which state it goes if the opponent plays defect.

[[File:GrimTriggerFSA.png|center|thumb|Example finite state automaton (Grim Trigger)]]

The computer representation of the automaton above is an array with the following values:

<center>
'''[C, 0, 1 | D, 1, 1]'''
</center>
Each array position represents a state. Every state codes for an action and two indexes: where to go if the opponent cooperates, and where to go if the opponent defects. The first state (indexed 0) codes the action for the empty game history.
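A minimal sketch of this representation (our illustration, not the program's actual code): each state is a triple of the action and the two destination indexes.

```python
# Each state is a triple (action, next-state-if-opponent-plays-C, next-state-if-D).
GRIM_TRIGGER = [("C", 0, 1), ("D", 1, 1)]   # [C, 0, 1 | D, 1, 1]
ALWAYS_DEFECT = [("D", 0, 0)]               # [D 0 0]

def play(auto_a, auto_b, rounds):
    """Play a fixed number of PD rounds between two automata."""
    state_a, state_b = 0, 0                 # both start in state 0 (empty history)
    history = []
    for _ in range(rounds):
        act_a, act_b = auto_a[state_a][0], auto_b[state_b][0]
        history.append((act_a, act_b))
        # each automaton transitions according to its *opponent's* action
        state_a = auto_a[state_a][1] if act_b == "C" else auto_a[state_a][2]
        state_b = auto_b[state_b][1] if act_a == "C" else auto_b[state_b][2]
    return history

# Grim Trigger opens with C, then defects forever after the first defection it sees.
print(play(GRIM_TRIGGER, ALWAYS_DEFECT, 3))
# [('C', 'D'), ('D', 'D'), ('D', 'D')]
```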
Some well-known example strategies and their computer codes are as follows:
<center>
<gallery>
File:AlwaysCooperateFSA.png | Always Cooperate '''[C 0 0]'''
File:AlwaysDefectFSA.png | Always Defect '''[D 0 0]'''
File:ConditionOnFirstMoveFSA.png | Condition on the first move '''[C 1 2 | C 1 1 | D 2 2]'''
File:TitForTatFSA.png | Tit for Tat '''[C 0 1 | D 0 1]'''
</gallery>
</center>
Note that there is no limit on the size of these arrays: they can grow and shrink dynamically as mutations produce new strategies.
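As a sanity check on these encodings, the Tit for Tat array can be stepped against a fixed sequence of opponent moves (a small illustration under the same assumed state conventions, not the program's code):

```python
# Tit for Tat as an array: [C 0 1 | D 0 1]
TIT_FOR_TAT = [("C", 0, 1), ("D", 0, 1)]

def respond(automaton, opponent_moves):
    """Actions the automaton takes against a fixed sequence of opponent moves."""
    state, actions = 0, []
    for opp in opponent_moves:
        actions.append(automaton[state][0])
        state = automaton[state][1] if opp == "C" else automaton[state][2]
    return actions

# Tit for Tat opens with C, then mirrors the opponent's previous move.
print(respond(TIT_FOR_TAT, ["C", "D", "D", "C"]))   # ['C', 'C', 'D', 'D']
```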
=== Mutation ===

Possible mutations are: adding a (randomly constructed) state, deleting a state (if the array has more than one state), and reassigning destination states at random. When a state is deleted, transitions pointing to it are rewired to the remaining states. The following video shows an example path of mutations:
<center>
{{#ev:youtube|mPUrB8aFFuI|450}}
</center>
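The three mutation operators can be sketched as follows, under the assumed convention that each state is a list `[action, go_on_C, go_on_D]`. How the operator and the affected state are chosen below is our assumption for illustration, not taken from the paper.

```python
import random

def mutate(automaton, rng):
    """Apply one of the three mutation operators to a copy of the automaton."""
    automaton = [list(state) for state in automaton]    # mutate a copy
    n = len(automaton)
    op = rng.choice(["add", "delete", "rewire"])
    if op == "add":
        # add a randomly constructed state (destinations may include the new state)
        automaton.append([rng.choice("CD"), rng.randrange(n + 1), rng.randrange(n + 1)])
    elif op == "delete" and n > 1:
        dead = rng.randrange(n)
        del automaton[dead]
        for state in automaton:
            for j in (1, 2):
                if state[j] == dead:                    # rewire to a remaining state
                    state[j] = rng.randrange(n - 1)
                elif state[j] > dead:                   # shift indexes past the gap
                    state[j] -= 1
    else:
        # reassign one destination state at random
        state = rng.choice(automaton)
        state[rng.choice([1, 2])] = rng.randrange(n)
    return automaton

rng = random.Random(1)
a = [["C", 0, 1], ["D", 1, 1]]                          # start from Grim Trigger
for _ in range(100):
    a = mutate(a, rng)                                  # the array grows and shrinks freely
# every transition still points at an existing state
print(all(0 <= s[1] < len(a) and 0 <= s[2] < len(a) for s in a))   # True
```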
== Results ==
Please check our paper [http://www.pnas.org/content/109/25/9929.abstract on the PNAS website] or [http://www.evolbio.mpg.de/~garcia/preprints/11.pdf download a PDF preprint here].

Current revision as of 19:30, 18 January 2013


