Repeated Games and Population Structure

From Evolution and Games

[[Category:Repeated Games and Population Structure]]

== Run the simulations ==

You can download the software [http://staff.feweb.vu.nl/j.garcia/software/RepeatedGamesAndStructureOnline.jar here]. Once the program is running, just click on the big ''play'' button to start the fun. Feel free to re-arrange and re-size the windows, and to zoom in and out as the program is running. Click [[Repeated Games and Population Structure Simulation | here]] if you need more information about running this program.

== Lifecycle ==

The following figure represents the lifecycle in the computer program.

* In '''step one''' we start with an even-sized population of finite state automata (the initial population consists entirely of ALLD strategies).
* In '''step two''' individuals are matched in pairs to play the game: you are matched with your rightmost neighbor. The repeated game is played between each pair of matched players (one PD game is played for sure, and from then on the game is repeated with probability <math>\delta</math>).
* In '''step three''' reproduction takes place: a payoff-proportional distribution is constructed. To fill up the population of the next generation we proceed as follows. The individuals in even positions (counting from 0) are determined by sampling from this distribution. For odd positions, with probability <math>r</math> a copy of the parent of the left neighbor is created, and with probability <math>1-r</math> an individual is sampled from the payoff-proportional distribution (this takes care of assortment).
* Finally, in '''step four''', every individual can mutate with a given probability. From there we go back to matching, and so on.
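The four steps above can be sketched in Python. This is only a hedged illustration of the lifecycle, not the actual program: strategies are reduced to a single fixed action (the real program evolves finite state automata), and the payoff values and parameter values below are illustrative assumptions, not taken from the paper.

```python
import random

# One-shot Prisoner's Dilemma payoffs (illustrative values, not from the paper).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

DELTA = 0.9   # continuation probability (assumed)
R = 0.1       # assortment parameter (assumed)
MU = 0.01     # mutation probability (assumed)

def repeated_game(a, b, rng):
    """One PD is played for sure; after each round the game
    continues with probability DELTA. Returns average payoffs."""
    total_a = total_b = 0.0
    rounds = 0
    while True:
        pa, pb = PAYOFF[(a, b)]   # strategies reduced to a fixed action here
        total_a += pa
        total_b += pb
        rounds += 1
        if rng.random() >= DELTA:
            break
    return total_a / rounds, total_b / rounds

def generation(pop, rng):
    # Step two: everyone plays against their rightmost neighbor.
    payoffs = [0.0] * len(pop)
    for i in range(0, len(pop), 2):
        payoffs[i], payoffs[i + 1] = repeated_game(pop[i], pop[i + 1], rng)
    weights = [p + 1e-9 for p in payoffs]  # avoid an all-zero distribution
    # Step three: payoff-proportional reproduction with assortment.
    nxt = [None] * len(pop)
    for i in range(len(pop)):
        if i % 2 == 0 or rng.random() >= R:
            nxt[i] = rng.choices(pop, weights=weights)[0]
        else:
            nxt[i] = nxt[i - 1]  # copy of the left neighbor's parent
    # Step four: mutation (here simply flipping the action with probability MU).
    return [('C' if s == 'D' else 'D') if rng.random() < MU else s
            for s in nxt]

rng = random.Random(1)
pop = ['D'] * 20              # step one: start from an ALLD population
for _ in range(50):
    pop = generation(pop, rng)
```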

<center>
[[File:RepeatedGamesAndPopulationStructureLifeCycle.png|550px|center|thumb|Lifecycle]]
</center>

== Strategies in the computer ==

=== Representation ===

Strategies are programmed explicitly as finite state automata (the program also has options for regular expressions and Turing machines; see [[Repeated Games]] for more details about these additional representations).

A finite automaton is a list of states; for every state it prescribes what the automaton plays when in that state, to which state it goes if the opponent plays cooperate, and to which state it goes if the opponent plays defect.

[[File:grimTriggerFSA.png|300px|thumb|center|Example finite state automaton (Grim Trigger)]]

The computer representation of the automaton above is an array with the following values:

<center>
'''[C, 0, 1 | D, 1, 1]'''
</center>

Each array position represents a state. Every state codes for an action and two indexes, where to go on cooperation, and where to go on defection. The first state (indexed 0) codes the action for the empty game history.
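As a sketch of this encoding, each state can be written as a tuple (action, state-on-cooperate, state-on-defect). This is a hypothetical Python rendering for illustration, not the program's actual data structures.

```python
# Each state is (action, next_state_if_opponent_cooperates,
#                next_state_if_opponent_defects); state 0 is the start state.
GRIM_TRIGGER  = [('C', 0, 1), ('D', 1, 1)]   # the [C, 0, 1 | D, 1, 1] array above
ALWAYS_DEFECT = [('D', 0, 0)]

def play(auto_a, auto_b, rounds):
    """Run two automata against each other for a fixed number of rounds."""
    sa = sb = 0                      # both start in state 0 (empty history)
    history = []
    for _ in range(rounds):
        act_a = auto_a[sa][0]
        act_b = auto_b[sb][0]
        history.append((act_a, act_b))
        # Each automaton transitions on its OPPONENT's move.
        sa = auto_a[sa][1] if act_b == 'C' else auto_a[sa][2]
        sb = auto_b[sb][1] if act_a == 'C' else auto_b[sb][2]
    return history
```

For example, Grim Trigger cooperates once against Always Defect and then defects forever: `play(GRIM_TRIGGER, ALWAYS_DEFECT, 3)` yields `[('C', 'D'), ('D', 'D'), ('D', 'D')]`.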
 
Some well-known example strategies and their computer code are as follows:

<center>
<gallery>
File:AlwaysCooperateFSA.png | Always Cooperate | '''[C 0 0]'''
File:AlwaysDefectFSA.png | Always Defect | '''[D 0 0]'''
File:ConditionOnFirstMoveFSA.png | Condition on the first move | '''[C 1 2 | C 1 1 | D 2 2]'''
File:TitForTatFSA.png | Tit for Tat | '''[C 0 1 | D 0 1]'''
</gallery>
</center>

Note that there is no limit on the size of these arrays; they can grow and shrink dynamically as mutations produce new strategies.

=== Mutation ===

Possible mutations are adding (randomly constructed) states, deleting states (if the array has more than one state), and reassigning destination states at random. When a state is deleted, its transitions are rewired to the remaining states. The following video shows an example path of mutations:

<center>
{{#ev:youtube|mPUrB8aFFuI|450}}
</center>
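The three mutation types can be sketched as follows, assuming states are encoded as (action, go-on-cooperate, go-on-defect) tuples. This is a hypothetical illustration; the actual program's operators may differ in detail.

```python
import random

def mutate(automaton, rng):
    """Apply one random mutation: add a state, delete a state
    (only if more than one remains), or rewire one transition."""
    auto = [list(s) for s in automaton]   # states as [action, on_C, on_D]
    n = len(auto)
    ops = ['add', 'rewire'] + (['delete'] if n > 1 else [])
    op = rng.choice(ops)
    if op == 'add':
        # A randomly constructed state; existing states keep their wiring.
        auto.append([rng.choice('CD'),
                     rng.randrange(n + 1), rng.randrange(n + 1)])
    elif op == 'delete':
        k = rng.randrange(n)
        del auto[k]
        # Rewire transitions that pointed at the deleted state, and
        # shift the indices of the states that came after it.
        for state in auto:
            for j in (1, 2):
                if state[j] == k:
                    state[j] = rng.randrange(n - 1)
                elif state[j] > k:
                    state[j] -= 1
    else:  # rewire: reassign one destination state at random
        state = rng.choice(auto)
        state[rng.choice((1, 2))] = rng.randrange(n)
    return [tuple(s) for s in auto]
```

A useful invariant to check: after any sequence of mutations, every transition still points at an existing state.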
== Results ==

=== Theoretical prediction ===

The following picture shows the theoretical predictions as described in the paper. Our results combine the two axes on which biologists and economists have mainly focused separately. Note the area in the bottom right corner, where a little assortment (relatedness) combines with a high probability of repeating the game; these two ingredients, in these doses, provide the recipe for human cooperation.

<center>
[[File:RepeatedGamesAndPopulationStructureTheory.png|450px|center|thumb|Theoretical prediction and the human cooperation zone]]
</center>

=== Computer simulations ===

The following picture shows the average payoff obtained by all strategies across runs of 500,000 generations. The population size is 200. On the x-axis we vary the continuation probability, and on the y-axis we vary the assortment parameter as described in the paper. Assortment equal to zero is equivalent to random matching; assortment equal to one means everyone plays the game against like types.

<center>
[[File:RepeatedGamesAndPopulationStructure.png|500px|center|thumb|Computer simulation results: average payoff as a function of continuation probability and assortment]]
</center>

 
The picture is composed of 10,100 points, and the computation time for each point ranges from 6 to 72 hours, averaging more than 20 hours per point. We have gone as far as modern computational capabilities allow in producing this picture at the highest possible resolution.

=== Match ===

The theoretical prediction, obtained with a reduced set of strategies, matches the results of the computer simulations, where a potentially infinite set of strategies is explored.

<center>
[[File:RepeatedGamesAndPopulationStructureMatch.png|400px|center|thumb|Theoretical prediction vs. computer simulation results]]
</center>

Revision as of 15:35, 20 February 2011
