Our Approach to Evaluating Animal Advocacy Interventions
ACE reviews evidence on the efficacy of interventions in animal advocacy in order to help determine which are likely to be most useful. Our research on interventions serves multiple purposes: not only does it shape our charity evaluations, but it also informs animal advocates so they can plan their campaigns with evidence in mind. Not every implementation of an intervention is equally effective, so in our intervention evaluations we try to consider representative implementations, and we discuss the factors most likely to affect results when they vary.
Selection and Prioritization
Our selection of interventions is guided by two factors. First, we seek to conduct evaluations that will provide relevant evidence to animal advocacy organizations and to potential donors. To be considered, an intervention should be implemented by at least one animal advocacy group working in a high-potential-impact area (for instance, farm animal advocacy or wild animal suffering). Interventions that form a large part of some organization’s activities are particularly relevant to examine from a donor’s perspective, since donations to such organizations then support mostly or exclusively that intervention, as with leafleting and Vegan Outreach.
Second, we prioritize evaluations of interventions based on the probability that, after the evaluation, we will be able to conclude that the intervention is either better or worse than average. The more information available about an intervention, the more likely our evaluation will be to end with a clear recommendation. For instance, we will likely have a difficult time evaluating the effectiveness of campaigns to promote legislative change, relative to campaigns like leafleting for individual diet change, which produce more incremental results and whose effectiveness can thus be measured at any point in time.1
Evaluation Process
This section describes the process for our evaluation work starting in early 2014.
ACE’s Director of Research works with staff and qualified volunteers to conduct evaluations; ACE staff and volunteers working on evaluations are referred to below as “evaluators.” When possible, two evaluations of the same intervention are conducted independently, and the evaluators then reconcile their findings.
We use an evaluation template to ensure that crucial factors are considered for each intervention. Evaluators ask questions of individuals who use the intervention; review financial reports; perform searches of academic, industry, and activist literature; and sometimes conduct original research. Evaluators document conversations and any original research so that sources can be provided in the final report.
In our evaluations, we seek to consider both quantitative and qualitative evidence. We also consider evidence directly connected to the intervention in question (for instance, studies of a particular implementation in an animal advocacy context) as well as evidence more distantly related (for instance, from general psychological studies or from sociological work done on other movements for social change). However, not all evidence is equally strong. Characteristics we consider important in judging the strength of evidence include the following.
- The source of the evidence. The strongest evidence is that produced by an impartial and reputable source. Evidence that argues against its source’s interests is also particularly strong.
- The representativeness of the evidence. Evidence that includes all relevant instances or an apparently random sample of them is stronger than evidence that has been or may have been curated to make a specific point.
- The length of the inference chain of which the evidence is a part. Long chains of inference magnify the possibility for error, even if each step in the chain is relatively well understood.
Ideally, we consider evidence showing whether interventions caused significant change, but in many cases the available evidence only shows whether interventions are correlated with change. Most interventions on behalf of animals have not been tested with large-scale randomized controlled trials; indeed, doing so would often be difficult. Therefore, in addition to considering evidence that directly suggests causation, we consider evidence that suggests correlation when paired with a potential causal mechanism. In this case, there should also be evidence that the causal mechanism is plausible, such as randomized controlled trials showing an analogous mechanism, or parts of the mechanism, working elsewhere.
The finished product of each evaluation is a narrative report summarizing the evaluator’s research and conclusions. One component of this report is a detailed cost-effectiveness estimate, but it is important to note that such estimates are subject to many sources of error and should not be used in isolation.
We publish the results of finished evaluations along with relevant supporting materials. When we have used materials that we do not have the right to reproduce, we summarize and cite the sources we have used.
We revise individual intervention evaluations as we become aware of evidence that has the potential to affect our opinion of the intervention in question. Due to time constraints, we may sometimes wait for multiple new pieces of evidence to accumulate before revising an evaluation. If you are aware of evidence we haven’t considered that is relevant to an intervention evaluation, please contact us.
We are committed to reconsidering our recommendations every year in December. While we update individual evaluations on a rolling basis and some may not be revised in a given year, this gives us a concrete time to make sure that our overall recommendations are consistent with our latest research.
1. GiveWell has an interesting series of posts explaining their understanding of the effectiveness of policy-oriented philanthropy, as compared to developing-world aid. It is notable that they conclude that determining whether philanthropy has a strong track record of influencing public policy would require “an enormous, long-term effort.”