Yesterday I was lucky enough to take part in the Seminar on Collective Decision-Making at the Hanse-Wissenschaftskolleg Delmenhorst. Interesting talks about current research, all of which involved socio-economic experiments with human subjects who earned real money based on their behaviour. I'll try to briefly summarize all three talks here:
A look at cooperation towards a common good and a stab at representative democracies. Subjects played the "climate game": everyone has to invest in a common good, or all risk losing everything they have. Players start with 40 Euro and can invest 0, 2 or 4 Euro in each of 10 rounds. If less than 120 Euro has been invested in the common good after 10 rounds, all players face a risk (the researchers tried 10%, 50% and 90%) of losing everything they have left. I was late to this talk, but I heard that the players often failed to reach the 120 Euro threshold, even in the 90% risk case (all of the rules were known to everyone).
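The incentive structure of the game can be sketched in a few lines. The group size of six and the fixed per-round strategies below are my own illustrative assumptions, not details from the talk:

```python
def climate_game(invest_per_round, rounds=10, endowment=40,
                 threshold=120, risk=0.9):
    """Expected final payoff per player when each player invests a
    fixed amount (0, 2 or 4 Euro) every round.

    Group size and the fixed-strategy assumption are illustrative;
    the experiment's exact parameters beyond those in the post are
    not known to me."""
    total_invested = sum(invest_per_round) * rounds
    payoffs = [endowment - inv * rounds for inv in invest_per_round]
    if total_invested >= threshold:
        return payoffs  # common good funded: everyone keeps the rest
    # Threshold missed: with probability `risk`, everyone loses everything,
    # so we return the expected payoff.
    return [p * (1 - risk) for p in payoffs]

# Six players, four invest the maximum every round, two free-ride:
print(climate_game([4, 4, 4, 4, 0, 0]))  # → [0, 0, 0, 0, 40, 40]
```

The example shows the dilemma in miniature: the threshold is met, but the free-riders walk away with everything while the cooperators end up with nothing.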
In another scenario, 18 players formed 6 "countries" of 3 players each. Each country elected a representative based on the strategy he or she proposed (via chat). This setting was played three times, so countries held several elections. It turned out that this worked even worse: representatives who worked towards the common good left their countries with fewer Euros, so they were not re-elected. In the next round, the representatives all paid too little towards the common good. Sad.
Afterwards, there was a lively discussion about the transferability of this experiment to the real world (someone called the discussion a "snake pit").
"Coordination and communication in multiparty elections with costly voting"
This research started by modeling a problem in voting theory, but ended with a look under the hood of collective decision-making in human groups. In some elections with three candidates, the winner is not preferred by most voters; those who oppose him simply split their votes between the two other candidates (think Bush vs. Gore and Nader in 2000). Those voters should have voted more strategically. In addition, many voters don't show up at all.
In the experiment, voters (the subjects) were assigned payoffs for each candidate, modeling their different preferences. All voters wanted either candidate A or B, while they all despised candidate C. In addition, there were costs attached to casting a vote, so some voters would decide to abstain.
Voters knew the preference distribution. The researchers added communication among the voters via a chat before they actually voted. This enabled the voters to coordinate their behaviour so that candidate C got elected less often and the voters maximised their individual payoffs.
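The abstention side of this can be captured by the standard rational-choice voting calculus: vote only if the chance of swinging the outcome, weighted by what the outcome is worth to you, exceeds the cost of voting. The numbers below are made up for illustration; the experiment's actual payoffs and costs were not reported here:

```python
def should_vote(p_pivotal, payoff_gap, cost):
    """Rational-choice rule of thumb for costly voting: vote only if
    the probability of being pivotal times the payoff difference
    between your best and worst outcome exceeds the cost of voting.
    All parameter values are illustrative assumptions."""
    return p_pivotal * payoff_gap > cost

print(should_vote(0.05, 10, 1))  # → False (expected benefit 0.5 < cost 1)
print(should_vote(0.25, 10, 1))  # → True  (expected benefit 2.5 > cost 1)
```

Pre-vote chat plausibly shifts both inputs at once: coordination raises a voter's sense of being pivotal and clarifies which of A or B to rally behind.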
What is really interesting is that the chat logs constitute the "black box of group decisions". The researchers plan to analyse them in order to find out more about the coordination process in human groups. Results are expected in a couple of months, but they can already say that
* the voters who organised most of the coordination got less personal payoff in the end
* about 50% of the people simply stick to their candidate in all rounds, no matter what is discussed
* communication among all voters is needed for an optimal outcome
Very interesting. I think that if these chat logs really are a "black box of group decisions", then this data set should be opened up to other researchers.
Judith Avrahami / Yaakov Kareev
"Do the weak stand a chance?"
These experiments concern situations where one player clearly has fewer resources than his opponent. What happens in terms of strategies?
In the "pebbles game", each player has 8 boxes. Player A has 12 pebbles and player B has 24, which they distribute among their boxes. Then one box of each player is chosen at random, and whoever put more pebbles in the chosen box wins. Player A, clearly being weaker, should leave some boxes empty so that at least some of his boxes have a good chance of winning. Expecting this, player B should also distribute his pebbles unevenly (rather than putting 3 in each box).
In essence, inequality in player strengths introduces variability into the strategy space on both sides.
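A quick Monte Carlo sketch makes the weak player's incentive concrete. Against a strong player who spreads 24 pebbles evenly (3 per box, a baseline strategy I'm assuming for illustration), spreading 12 pebbles out never wins, while piling them up does:

```python
import random

def win_prob(weak, strong, trials=100_000):
    """Monte Carlo estimate of the weak player's win probability:
    one box from each side is drawn at random and compared.
    (Ties count as a loss for the weak player here; the talk did not
    specify the tie rule.)"""
    wins = sum(random.choice(weak) > random.choice(strong)
               for _ in range(trials))
    return wins / trials

even  = [2, 2, 2, 2, 1, 1, 1, 1]   # 12 pebbles spread out
piled = [4, 4, 4, 0, 0, 0, 0, 0]   # 12 pebbles concentrated in 3 boxes
strong_even = [3] * 8              # 24 pebbles spread evenly

print(win_prob(even, strong_even))   # → 0.0: 2 pebbles never beat 3
print(win_prob(piled, strong_even))  # ≈ 0.375: wins when a 4-box is drawn
```

The concentrated allocation wins whenever one of the three 4-pebble boxes is drawn (probability 3/8), which is exactly why the strong player, anticipating this, should also randomize unevenly.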
The researchers then looked further into the role of the evaluator: what difference does it make that only one box gets evaluated, rather than all of them? The hypothesis is that adaptive agents try harder when they know that only part of their work will be evaluated but don't know which part in advance. They ran an experiment in which subjects solved addition problems on 6 pages. When the subjects knew that they would be graded on only one page, performance rose.
This management style was invented to save lazy evaluators time, but it might actually raise performance. I wonder, then, whether subjects who knew they were weak at adding numbers also left some pages out to concentrate on the others.