Animal charities employ a wide range of interventions to help animals. Despite the many unknowns and the comparative lack of research in this emerging field, ACE seeks to make informed judgments about the effectiveness of these interventions to the best of our ability. This means periodically updating our approach in light of new information or ideas. To this end, we recently updated our intervention evaluation process. To read the process in full, please see the detailed guide.
Although we have developed a standardized process for evaluating interventions, it is intended as a flexible guide rather than a strict procedure: the range of potentially applicable interventions is wide, and the exact same evaluative approach may not suit every one of them. The overarching intent, however, is to bring together a range of evidence sources before making evaluative judgments. Our rationale is that even when no single evidence source provides adequate answers on its own, considering the sources in conjunction with one another can give a clearer picture of an intervention's effects.
Our new evaluation process consists of six parts:
- Part One: Intervention Description and Theory of Change
- Part Two: Evidence from Animal Advocacy Research
- Part Three: Evidence from the Social Sciences
- Part Four: Case Study Analysis and Cost-Effectiveness Estimates
- Part Five: Conversations in the Field
- Part Six: Overall Assessment
This new process involves a more systematic literature search, in which we work to locate the most relevant research from the animal advocacy literature and the relevant social sciences. Additionally, we now make use of case studies. These allow us to (a) gain greater insight into how an intervention is used in practice, (b) observe differences in how an intervention can be implemented, and (c) gather data for our cost-effectiveness estimates. Moving forward, our intervention evaluations will sometimes include theories of change for the intervention. While a theory of change does not by itself establish that an intervention is effective, understanding how an intervention is meant to produce change can increase our confidence in our assessment of its effectiveness.
Our process for evaluating interventions cannot be strictly objective; it requires that we make subjective judgments. We hope that by publishing our process and reasoning, we can be transparent about the judgments we make. Even when it is imperfect, we believe that formal analysis is often a useful supplement to decision making. We strive to make clear where there are gaps in our knowledge, so that our conclusions can be integrated appropriately with other sources of information.