http://www.digitopoly.org/2017/04/15/economists-found-something-surprising-and-you-wont-believe-what-happened-next/
Luigi Butera and John List have examined how cooperation is affected by uncertainty, and not just any uncertainty but Knightian uncertainty, where outcomes cannot easily be described by a probability distribution. They examine a situation where experimental subjects contribute to a public good whose returns are uncertain and where individuals may or may not hold information regarding those returns. This means that individuals do not know whether others are failing to contribute because they are free riding or because they do not have good information regarding the quality of the public good they are contributing to. In some sense, you might think this would make free-riding problems even worse: if you have a draw that suggests the public good has a high return and you know that others have different information, you may not be confident they will follow you and contribute. In other words, you may anticipate more free riding, which causes cooperation to unravel faster. The alternative view is that if you have no information and observe some cooperation, that might signal that the cooperators know something you don't. But even then, for fully rational agents, why cooperate when you don't have to? If I had to guess before reading the abstract of this paper, my guess would have been that uncertainty makes things worse. We saw instances of this in a public good game instituted by Stephen King that I outlined in Information Wants to be Shared; problems that were alleviated by crowdfunding models that provided more information.
As it turns out, Butera and List find that uncertainty increases cooperation.
We show that our results are unlikely driven by confusion, since cooperation when noisy signals are publicly observed is inversely correlated with the informativeness of the signals. Otherwise said, as we reduce uncertainty, cooperation decreases. In the limiting case where public signals fully resolve uncertainty, cooperation rates revert back to those observed in the baseline. We argue that the presence of Knightian uncertainty fosters conditional cooperation by generating ambiguity around the determinants of players' payoffs. When the returns from public goods contributions are perfectly observed, any reduction in payoffs can only be attributed to other players free-riding. When the exact quality of a public good is unobserved however, lower returns from a public good may be driven in part by a lower-than-expected quality of the good itself. While uncertainty has no effect on the Nash equilibrium outcome, it does affect decisions of conditional cooperators who may become more tolerant to payoffs' reductions, effectively limiting the "snowball effect" of free-riding on conditional cooperation. An alternative and related explanation is that the presence of uncertainty facilitates cooperation among betrayal averse individuals (Bohnet et al. 2008, Aimone and Houser 2012).
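To fix ideas, here is a minimal sketch of a linear public goods game with an uncertain return. The group size, endowment and the two possible per-unit returns are my own illustrative assumptions rather than the paper's actual design; the point is just to show why a low payoff is ambiguous when the return is unobserved.

```python
import random

def public_goods_payoffs(contributions, endowment=20.0, mpcr_draws=(0.2, 0.6)):
    """Linear public goods game with an uncertain marginal per-capita
    return (MPCR). The endowment, group size and MPCR values here are
    illustrative assumptions, not the parameters used by Butera and List.

    Each player keeps whatever she does not contribute and receives
    mpcr * (total contributions), where the realised mpcr is drawn from
    a set the players cannot reduce to a known probability distribution.
    """
    mpcr = random.choice(mpcr_draws)  # realised "quality" of the public good
    pot = sum(contributions)
    payoffs = [endowment - c + mpcr * pot for c in contributions]
    return mpcr, payoffs

# With the MPCR unobserved, a disappointing payoff is ambiguous: it could
# reflect free riding by others or simply a low realised MPCR. With the
# MPCR known, only free riding is left to blame.
mpcr, payoffs = public_goods_payoffs([10, 10, 0, 10])
print(mpcr, payoffs)
```

That ambiguity is exactly what the authors argue blunts the usual unravelling of conditional cooperation.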
This is an interesting result, and it certainly suggests that there is more theoretical work to be done. Another possible reason for the result is that uncertainty interacts with certain behavioural tendencies of agents (something that Peter Landry and I have theorised about).
Now on to the "you won't believe what happened next" part. This experimental finding is surprising. Actually, very surprising. That usually poses an issue, because many people may not believe the result and will wonder whether it is an aberration. Replications would help, but as many have noted (see, for instance, the discussion in Scholarly Publishing and its Discontents), those studies do not receive scientific kudos commensurate with their value in establishing the potential truth of something. So our authors here have a conundrum. They have anticipated, correctly, that their finding will be discounted because it is surprising. And they have also anticipated that there is little incentive for independent replication. In other words, there is a breakdown in cooperation in the production of science.
One option might be to add uncertainty to the mix and see what happens, but we still don't know for sure that that is a thing. The other is what they chose to do:
This paper proposes and puts into practice a novel and simple mechanism that allows mutually beneficial gains from trade between original investigators and other researchers. In our mechanism, the original investigators, upon completing their initial study, write a working paper version of their study. While they do share their working paper online, they do however commit not to submit it to any journal for publication, ever. The original investigators instead offer co-authorship of a second paper to other researchers who are willing to independently replicate the experimental protocol in their own research facilities. Once the team is established, but before beginning replications, the replication protocol is pre-registered at the AEA experimental registry, and referenced in the first working paper. This is to guarantee that all replications, both successful and failed, are properly accounted for, eliminating any concerns about publication biases. The team of researchers composed by the original investigators and the other scholars will then write and coauthor a second paper, which will reference the original unpublished working paper, and submit it to an academic journal. Under such an approach, the original investigators accept to publish their work with several other coauthors, a feature that is typically unattractive to economists, but in turn gain a dramatic increase in the credibility and robustness of their results, should they replicate. Further, the referenced working paper would provide a credible signal about the ownership of the initial research design and idea, a feature that is particularly desirable for junior scholars. On the other hand, other researchers would face the monetary cost of replicating the original study, but would in turn benefit from coauthoring a novel study, and share the related payoffs. Overall, our mechanism could critically strengthen the reliability of novel experimental results and facilitate the advancement of scientific knowledge.
In other words, in response to a breakdown in cooperation they have proposed a fairly standard solution: integration. Basically, they have offered to sell a share of the kudos they would receive, if the study is successfully replicated, to those who do the replicating.
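To see why selling a share of the kudos can be worth it, here is a back-of-the-envelope sketch. Everything in it is a made-up illustration rather than anything from the paper: the even split of credit among coauthors, the credibility numbers, and the idea that credit scales with how believable the result is.

```python
def expected_kudos(value, credibility, n_coauthors):
    """Expected credit to the original investigators.

    Illustrative assumptions only: credit is split evenly across coauthors
    and a result is 'worth' its face value only in proportion to how
    credible readers find it. None of these numbers come from the paper.
    """
    return value * credibility / n_coauthors

# Publishing the surprising result alone: full ownership, heavy discount.
solo = expected_kudos(value=1.0, credibility=0.3, n_coauthors=1)

# Selling a share of the second paper to a replication team: credit is
# diluted, but a replicated surprise is far more believable.
integrated = expected_kudos(value=1.0, credibility=0.9, n_coauthors=2)

print(solo, integrated)  # 0.3 vs 0.45 under these made-up numbers
```

Whether the trade pays off, of course, depends on how much replication raises credibility relative to how much the credit gets diluted.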
From the perspective of a reader, this paper both reports an experiment (with results) and proposes another for which the results are yet to be determined. It will be very interesting to see how it works out.
But I have a question. If a replication is done, it will be a little surprising given that this paper is already out there, so the true allocation of kudos can only be partially transferred. And if that is the case, won't they have to offer another experiment with a surprising result, coupled with their new mechanism, in order for the kudos allocation experiment itself to be replicated? And if that is so, when will this end?