We started a study last fall to investigate whether the reported effects of leafleting to spread concern for animals and encourage vegetarianism or veganism would hold up when compared to a relevant control group. We’ll be publishing our findings on that question very soon. In the meantime, we’ve learned a lot of other things from this process.
Experience Required
Before the study launched, the design team got feedback on the initial methodology from many people in the effective altruism community. Some of the suggestions were taken to heart and resulted in fundamental changes to the design, including the addition of a control group that received a leaflet about animals, but not about farmed animals. Other suggestions were not incorporated because they were too hard to implement or seemed too likely to change what the study was measuring in a way that would make its results less useful.
We hoped that having a very open design process and incorporating feedback and ideas from interested people would help compensate for the fact that none of the people working on the study, and few of the people giving feedback, had previously conducted similar research. It did help. Unfortunately, it couldn't fully compensate for our lack of experience. There were big gaps in our study plan that more experience would have filled, but that asking around among other people with little experience did not. Most of these gaps could have been closed, at least partially, if we had chosen to pilot our study: to run a smaller-scale version of it first, so that we would know what to expect when running the full version. This would have given us practice with each stage of the study, which we could have used to strengthen the study design.
Unknown Rates
We wanted the study to measure two important rates: the rate at which people who received a leaflet from Vegan Outreach reduced their meat consumption, and the rate at which the control group reduced theirs. We had a fairly good estimate of the first rate from a previous study, and we assumed the second would be somewhat lower, but not zero. Had we run a pilot study, those guesses probably wouldn't have changed much.
However, many other unknown rates were important to our study design, and our estimates for these could have been significantly improved by conducting a pilot study. They include:
- the number of surveys one surveyor could expect to collect per hour;
- the percentage of respondents who would report having received a leaflet from us;
- the percentage of respondents who would report having received only a control leaflet; and
- how much these rates would change if more or less effort were put into leafleting.
Without experience bearing on any of those rates, we chose to invest similar amounts of effort in leafleting and in surveying at each of several locations. As a result, while we distributed hundreds or even thousands of leaflets at each school, we collected far fewer surveys: between 0 and 135 per school. About 1 in every 3 respondents reported receiving a leaflet of some kind. These results were workable, but we could have done better if we'd known what to expect beforehand. In particular, we could have committed more strongly to recruiting and preparing surveyors, even if that meant working at fewer locations so we could do a better job surveying at each one.
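To make the planning problem concrete, here is a minimal back-of-the-envelope sketch in Python of the kind of arithmetic a pilot study would have made possible with measured rates instead of guesses. All of the numbers below are hypothetical placeholders, not figures from our study.

```python
# Back-of-the-envelope planning arithmetic with hypothetical numbers.
# A pilot study would replace these guesses with measured rates.

surveys_per_surveyor_hour = 10    # guess: completed surveys per surveyor per hour
leaflet_recall_rate = 1 / 3       # guess: fraction of respondents reporting any leaflet
target_treatment_surveys = 100    # desired surveys from students who recall a leaflet

# Total surveys needed so that roughly `target_treatment_surveys` of them
# come from respondents who remember receiving a leaflet.
total_surveys_needed = target_treatment_surveys / leaflet_recall_rate

# Surveyor-hours required to collect that many surveys.
surveyor_hours_needed = total_surveys_needed / surveys_per_surveyor_hour

print(f"Total surveys needed: {total_surveys_needed:.0f}")
print(f"Surveyor-hours needed: {surveyor_hours_needed:.0f}")
```

Even rough versions of these inputs would have told us how much surveying effort each location needed relative to leafleting effort.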
Predictable Difficulties
We also experienced some setbacks that we could have expected without doing a pilot study, though a pilot study would have forced us to think about some of them sooner. Working with volunteers spread throughout a large area posed challenges. At one school, the volunteer who was to lead the surveyors backed out after leaflets had been distributed, and we could not find a replacement in time, so we didn't conduct any surveys there. At others, scheduling conflicts led to unexpected variations in the timing of the leafleting and surveying, including control and treatment leaflets being distributed on different days and varying intervals between the leafleting and surveying stages. These variations were within the original parameters set for the study, but each schedule change reminded us that we had less control over the conduct of the study than we would have liked. Additionally, when volunteers in the field had questions we had not anticipated, they usually had to make their own judgment about what would be best and move on.
We also encountered predictable difficulties when it came time to enter data from the surveys into a spreadsheet. Here, the problem was less that we were using a distributed team of volunteers and more that data entry is a demanding task in any case: errors that are easy to make can change the meaning of what is entered, yet making no errors at all requires unrealistic precision. Although we planned ahead far enough to get all our data entered in a coherent manner, we didn't predict how many errors we'd find on checking it, something we could have done by talking to people with more experience with this stage of a project. Although the errors we found were probably not enough to shift the conclusions of the study, we decided to implement double-entry checking on the data for several schools in order to further reduce error rates, which has delayed publication of the study results. If we'd anticipated this decision, we would have budgeted time for it, and we could have applied the procedure to all of the data without making additional demands of our volunteers after they expected to be finished working on the study.
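For readers unfamiliar with double-entry checking, the idea is simply that two people enter the same surveys independently and every disagreement is flagged and resolved against the paper original. Here is a minimal Python sketch of that comparison step, assuming the two passes are saved as CSV files with the same row order and layout; the file names and layout are hypothetical, not our actual ones.

```python
import csv

def find_mismatches(path_a, path_b):
    """Compare two independent entries of the same surveys and
    return every cell where the two passes disagree."""
    with open(path_a, newline="") as fa, open(path_b, newline="") as fb:
        rows_a = list(csv.reader(fa))
        rows_b = list(csv.reader(fb))

    mismatches = []
    # Assumes both files list the same surveys in the same order;
    # zip() stops at the shorter file if the row counts differ.
    for i, (row_a, row_b) in enumerate(zip(rows_a, rows_b)):
        for j, (cell_a, cell_b) in enumerate(zip(row_a, row_b)):
            if cell_a.strip() != cell_b.strip():
                mismatches.append((i, j, cell_a, cell_b))
    return mismatches

# Hypothetical file names: one file per independent data-entry pass.
for row, col, first, second in find_mismatches("entry_pass_1.csv", "entry_pass_2.csv"):
    print(f"Row {row}, column {col}: '{first}' vs '{second}' -- check the paper survey")
```

Flagged cells are then corrected by hand, which catches most single-pass typing errors at the cost of entering everything twice.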
Moving Forward
We’ve learned a lot from conducting this study. We hope our experiences will help us plan and carry out future studies better, and that some of our lessons are useful to others who are thinking about conducting their own studies. There’s a lot to learn, and by building on each other’s progress, we can move more quickly toward answers to the important questions. We have now started the analysis phase of this study, which we are carrying out with a team of experienced volunteers from Statistics Without Borders. With their help, we hope that the next thing we learn from this process will actually be about the effectiveness of leafleting!