This summer ACE ran a small pilot of a study on the effectiveness of pay-per-view video in creating vegetarians and reducing meat consumption. We chose to do this for three main reasons:
- Pay-per-view video, an intervention in which people are paid to watch a short video about factory farming, is a popular and potentially effective way of motivating people to adopt a more plant-based diet.
- The effectiveness of pay-per-view video is hard for advocates to observe directly since, like leafleting or online veg ads, it relies on individuals adjusting their behavior in the time after seeing farm animal advocacy materials.
- Our leafleting and humane education studies were underpowered (not large enough to reliably detect the effects we cared about). If we run a full study of pay-per-view video, we’d like it to be stronger than these previous studies.
There have been attempts to measure the effectiveness of pay-per-view video, but they have not included all the features we would like to see in a reliable study. Specifically, a strong study would have randomized control and treatment groups that were tracked from the time they were shown (or not shown) the video, rather than relying on self-report. Relying on self-report led to confusing group assignments in our leafleting study. We think it may also have affected results in the only study of pay-per-view video that we know to have included a control group, since people may be more likely to report having seen a video that affected them more strongly.
Before beginning our pilot, we produced a draft methodology for a study of pay-per-view video outreach and showed it to several people. We realized that a pilot for this study should answer at least the following questions:
- Should we make sure that people who come to the booth together see the same video, or is it okay if randomization causes them to see different videos?
- How many people will we need to show each video to get enough survey responses?
How to randomize
For the pilot, we showed each participant a random video. That meant that people who arrived at the same time, such as friends who showed up at the table together, might see different videos. To check whether this would cause problems, the survey we sent first asked about participants’ diets and then disclosed that the survey was related to a video about animal cruelty that participants had been shown. We then asked whether participants had discussed that video with people who had been there when they saw it, with other people later, or with no one. 21 people answered this question. Only 10% hadn’t discussed the video at all, and 67% had discussed it with someone who was there when they saw it. Based on these results, we see a significant risk that a complete study carried out with this kind of randomization would underestimate the effects of pay-per-view video, since even control group participants would be exposed to the effects of the treatment video through discussions with friends who were shown a different video. Instead we’d want to use cluster randomization, for instance choosing randomly between the treatment and control videos every 15 minutes and showing the same video to everyone who arrived during that period. We think this change could be made by adjusting the software we used during the pilot study.
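To make the cluster approach concrete, here is a minimal sketch, in Python, of assigning videos by 15-minute block rather than by individual viewer. The video labels and block length are placeholders for illustration, not the software we actually used in the pilot.

```python
import random

# Minimal sketch of cluster randomization by time block: one video is chosen
# per 15-minute window, so people who arrive together are assigned together.
# Labels and block length are placeholders, not our pilot software.

VIDEOS = ["treatment", "control"]
BLOCK_SECONDS = 15 * 60


def video_for_timestamp(timestamp):
    """Return the video assigned to the 15-minute block containing `timestamp`."""
    block_index = int(timestamp) // BLOCK_SECONDS
    # Seed the RNG with the block index so the choice depends only on the
    # block, not on the individual viewer. To show the treatment video in
    # more blocks than the control video, a weighted choice could be used
    # instead, e.g. random.Random(block_index).choices(VIDEOS, weights=[5, 1])[0]
    return random.Random(block_index).choice(VIDEOS)


if __name__ == "__main__":
    t0 = 1_700_000_000                     # an arbitrary arrival time, in seconds
    print(video_for_timestamp(t0))         # a viewer arriving at t0
    print(video_for_timestamp(t0 + 60))    # a friend arriving 60 s later: same block, same video
```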
How many viewers
How many viewers we would need to reach for our study to be useful depends not only on the response rate to our survey, but also on the rates at which viewers and nonviewers of pay-per-view video give up eating particular animal products. To assess the feasibility of our study design, we make some assumptions based on the results observed in previous studies of this and other forms of vegetarian outreach.
For an animal product like poultry or red meat, which people are relatively likely to stop eating upon exposure to vegetarian or farm animal advocacy materials:
- We assume that 1% of the target population for pay-per-view video gives up the product in any 2-month period, even if they don’t actually see the video. This is the percentage of people in the control group we’d expect to report having stopped eating the product since seeing the control video. It’s hard to know this rate with accuracy, since control groups have generally been small in previous studies; we considered rates from 0.5% to 2%, but our best guess is 1%, and that is what we discuss here. The larger this rate is, the harder it would be to detect an effect of showing the treatment video.
- We assume that 5% of people who actually see the treatment video stop eating the product in the 2 months after seeing the video (and don’t start eating it again until that period is over). This is the percentage of people in the treatment group we’d expect to report having stopped eating the product since seeing the video. It’s hard to know this rate with accuracy, but previous studies of pay-per-view video and leaflets (where we are looking at rates for people who took the treatment leaflet) have placed it between 2% and 9%, depending on the study and the product.¹ We discuss a rate of 5% here, because we consider it realistic and because the effect size of in-person pay-per-view video would need to be larger than that of leafleting or online videos for this to be a cost-effective intervention, since it is slightly more costly per person reached. The smaller this rate is, the harder it would be to detect an effect of showing the video.
We also assume that we will show the treatment video to several times more people than we show the control video to. This makes the overall sample size we need larger, but because animal advocacy groups already show the type of video that we would use as the treatment video to many people, it reduces the additional costs imposed by conducting the study, assuming we’re able to work together with such a group.
Under these assumptions, we’d need survey responses from about 1,100 viewers of the treatment video and 200 viewers of the control video in order to be likely to detect that the treatment video has a stronger effect than the control video does. (We would have approximately an 80% probability of detecting an effect if all our assumptions are correct, which is a standard researchers use in choosing study sizes.) We showed videos to 103 people in our pilot study, of whom 21 responded to enough of our survey for their responses to be useful. This is about a 20% response rate, so we’d probably need to show the treatment video to at least 5,500 people and the control video to at least 1,000 people to get the number of survey responses we want. More viewers of either video would make us more likely to detect a difference in effects.
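As a rough check on these numbers, the sketch below uses the standard normal approximation for a two-sided, two-proportion test; the rates, group sizes, and response rate are the assumptions stated above, not new data, and this is not necessarily the exact calculation we used.

```python
from scipy.stats import norm

# Approximate power of a two-sided two-proportion z-test under the
# assumptions stated above (not necessarily our exact calculation).

p_treat, p_ctrl = 0.05, 0.01    # assumed 2-month rates of giving up the product
n_treat, n_ctrl = 1100, 200     # survey responses needed
alpha = 0.05

p_pool = (p_treat * n_treat + p_ctrl * n_ctrl) / (n_treat + n_ctrl)
se_null = (p_pool * (1 - p_pool) * (1 / n_treat + 1 / n_ctrl)) ** 0.5
se_alt = (p_treat * (1 - p_treat) / n_treat + p_ctrl * (1 - p_ctrl) / n_ctrl) ** 0.5

power = norm.cdf(((p_treat - p_ctrl) - norm.ppf(1 - alpha / 2) * se_null) / se_alt)
print(f"approximate power: {power:.2f}")   # about 0.83 under these assumptions

response_rate = 0.20                       # roughly the pilot's response rate (21 of 103)
print(n_treat / response_rate, n_ctrl / response_rate)   # 5500.0 and 1000.0 viewers
```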
How to proceed
We don’t think we can carry out this study entirely on our own; we aren’t set up to do video outreach, and we don’t have the staff or volunteer time to show videos to 6,500 or more people. But we do think this study would be feasible to do at scale in partnership with an organization that already does video outreach. We’d love to hear from organizations interested in working together to carry out this study.
¹ We used data that FARM provided us during the review process to get rates that applied directly to pay-per-view video; as far as we’re aware, their published data focuses on reduction and does not report how many people give up products entirely, so we can’t link the survey that produced those rates. However, you may view similar published rates from both our leafleting study and Farm Sanctuary’s leafleting study.
“we’d probably need to show the treatment video to at least 5,500 people and the control video to at least 1,000 people”
I don’t know that much about statistical power, but what I do know leads me to believe that evenly sized control and treatment groups give you higher power for a lower sample size. Though there are certainly reasons to want a larger treatment group if you’re concerned about showing more people the allegedly effective videos.
Am I wrong about power here or are you boosting the treatment group intentionally? (Not that there’s anything wrong with that.)
You’re right that an even split between treatment and control groups would give us higher power for a lower overall sample size. We’re proposing the split described above, with a larger treatment group, because even balancing the size of the two groups wouldn’t put the study within a size range where we could expect to conduct it on our own. Since we would need to work with another group to get the resources to conduct the study anyway, we think it makes sense to use the fact that groups already show the treatment videos to thousands of people per year (in some cases, per week) to get the power we need, rather than balancing the size of the two groups (as we would do if we wanted to strictly minimize the number of study participants). Our protocol doesn’t call for the treatment group to have a very different experience from the one people already have when seeing pay-per-view video, so we think the way to do this study that imposes the fewest additional resources is to minimize the control group size rather than the overall sample size.
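To put rough numbers on that tradeoff, here is a small comparison under the same assumed rates (5% treatment, 1% control) and the same normal approximation as the sketch earlier in the post. A balanced split reaches similar power with fewer total survey responses, but the saving comes from shrinking the treatment group, which partner groups are reaching anyway, while the control group would actually need to grow.

```python
from scipy.stats import norm

# Approximate power of a two-sided two-proportion z-test (same normal
# approximation as the sketch earlier in the post), for the assumed
# 5% treatment vs. 1% control rates.

def power_two_props(p1, p2, n1, n2, alpha=0.05):
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se_null = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5
    se_alt = (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2) ** 0.5
    return norm.cdf(((p1 - p2) - norm.ppf(1 - alpha / 2) * se_null) / se_alt)

print(power_two_props(0.05, 0.01, 1100, 200))  # unbalanced, 1,300 responses: ~0.83
print(power_two_props(0.05, 0.01, 300, 300))   # balanced, 600 responses: ~0.82
```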