A week ago we reported on how the results of a study on persuasion applied to animal activism. Yesterday that study was publicly retracted by Donald Green, one of its two authors, after the other author admitted to falsifying at least some of the data reported. This is an extremely serious blow to the study’s usefulness. While a study might have some validity even if its methods were questioned by other scientists or later regretted by its author, if the data is fabricated, it cannot support the study’s claims.
Fortunately for those of us interested in the results of the study, the faked results were discovered because they had attracted so much attention that other researchers were trying to replicate and extend the study’s methods and findings. (David Broockman and Joshua Kalla’s replication attempts led them to investigate the data provided by the study’s authors, and their questions and concerns ultimately led to Green’s retraction.) These similar studies by other research teams should ultimately provide real data about the effectiveness of the methods of persuasion used in the study. When that happens, we’ll know whether the suggestions from our last blog post were based on a real effect, or only one which sounded plausible enough to be supported by simulated data.
For now, animal advocates may be wondering whether they should give any credence to the results of the study. After all, we frequently rely on flawed studies in the absence of more rigorous ones, or on studies conducted in one context to inform us about what will happen in another. Because this data was not just collected using a flawed methodology but at least partially (and perhaps entirely) fabricated, we do not recommend placing any reliance at all on the results of the study. There is no way to correct for faked data. Media reporting on the study has quoted same-sex marriage advocates who noticed that the methods used appeared successful in the field, and we think those anecdotal reports are probably as accurate as any other anecdotes. However, they do not provide evidence for long-term change, as the study purported to, because they were based on impressions at the time of the canvassing effort.
For now, this study serves only to point animal advocates to some interesting anecdotal results from another field. We hope that later efforts will provide genuine information about the scale and duration of the effects of persuasion through individual connections.
To prevent confusion, we’ve added a disclaimer to the beginning of our earlier blog post.
As far as I understood, the ACE team was responsible for discovering that the results were false. Great job!
To reduce the risk of such falsification, I think standards could make it much easier to compare studies with each other, check reproducibility, and identify suspicious results. Standards on what to measure, how to measure it, etc.
Very interesting post. Thanks ACE!
It’s interesting that this case, although it clearly represents very bad conduct on the part of at least one researcher, is also one where the system as a whole has worked moderately well. Certainly it could have worked better; some of the irregularities in the data are things that could have been noticed in the peer review process. But it’s because the authors published replication data and shared their methods that Broockman and Kalla were able to identify that something was seriously wrong, and to determine what it was. There’s a movement for increased transparency in science, and it seems to be working.
It is indeed very surprising that the falsification made it through the peer review process, especially at such a respected journal as “Science”. It shouldn’t be impossible to ask for some authenticated source data, or at least to perform some random checks.
The peer-review process is known to miss many things (though most can be explained as honest errors), because reviewers don’t have time to check every claim in detail; it is especially likely to miss problems that could only be caught by re-analyzing the replication data (which this study did provide). This study did claim to have a randomized control group (and provided “data” from it), which is one of the things reviewers would definitely have been checking for.
Still, it is always surprising when a scientist fakes results that actually make it to publication and then are discovered.