Evaluation Process
Summary
Introduction
Animal charities employ a wide range of interventions to help animals, and there are currently many unknowns regarding the effectiveness of those interventions. Despite those unknowns and the comparative lack of research in this emerging field, Animal Charity Evaluators seeks to make informed judgments about the effectiveness of interventions to the best of our ability. This page provides an overview of our current process for evaluating interventions. Our previous intervention evaluation process is archived here.
We’ve developed a standardized process for evaluating interventions. Since it may be used to evaluate a wide range of interventions, it is intended as a flexible guide rather than a strict procedure. The overarching intent is to bring together a range of different evidence sources before making evaluative judgments. Our rationale is that, even when none of the evidence sources alone can provide adequate answers, considering them in conjunction with one another can provide a clearer picture of the effects of an intervention. The evaluation process is made up of six parts, listed below.
- Part One: Intervention Description and Theory of Change
- Part Two: Evidence from Animal Advocacy Research
- Part Three: Evidence from the Social Sciences
- Part Four: Case Study Analysis and Cost-Effectiveness Estimates
- Part Five: Conversations in the Field
- Part Six: Overall Assessment
Our process for evaluating interventions is not strictly mechanical; it requires that we make subjective judgments. We hope that by publishing our process and reasoning, we can be transparent about the judgments we make. Even when it is imperfect, we believe that formal analysis is often a useful supplement to decision making. We strive to make clear where there are gaps in our knowledge, so that our conclusions can be integrated appropriately with other sources of information.
Summary of the Evaluation Process
Please see the “Detailed Evaluation Guide” tab for a more detailed description of each part of our process.
Part One: Intervention Description and Theory of Change
The intervention description provides an overview of the type of intervention under evaluation and how widespread the intervention is. We describe variations in the implementation of the intervention and indicate which variations will—and will not—be considered in our evaluation. We provide a theory of change for the intervention as well as a summary of the outcomes we’d expect to see as a result of this intervention. We include both positive and negative outcomes and both intended and unintended outcomes. To help us identify different kinds of effects that an intervention can have, we’ve developed a Menu of Outcomes for Animal Advocacy.
Part Two: Evidence from Animal Advocacy Research
Part Two brings together relevant evidence from the field of animal advocacy in an accessible format via a summarized table to enable comparison. We provide an assessment of the state of the evidence with regard to quality and effect size, as well as noting significant gaps in the evidence. Finally, we provide a judgment on the extent to which the evidence supports the use of the intervention under consideration.
Part Three: Evidence from the Social Sciences
The intent of Part Three is to bring together relevant research from the social sciences. We look to fields like psychology and social movement studies to build our understanding of the intervention. We provide a judgment on the extent to which the available literature supports the use of the intervention under consideration.
Part Four: Case Study Analysis and Cost-Effectiveness Estimates
Case studies are used to gain greater insight into an intervention as well as to provide data for our cost-effectiveness estimates. We generally aim to conduct multiple case studies, which can provide a useful way to compare different variations in implementations of an intervention.
Part Five: Conversations in the Field
We conduct semi-structured interviews to draw from the knowledge and experience of experts on the intervention. Experts may include academic researchers or individuals who employ the intervention and have in-depth practical knowledge of its implementation and effects. This should include interviews with people who do not advocate for this intervention, if possible.
Part Six: Overall Assessment
When possible, we will describe the extent of the variation in an intervention’s effectiveness, and we will provide evidence-based advice for implementing the intervention as effectively as possible. Acknowledging that any type of intervention probably varies widely in its effectiveness, we aim to provide an overall judgment of the effectiveness of an intervention type by answering the following evaluative questions:
- To what extent does this intervention achieve positive outcomes for animals?
- To what extent does the available evidence support our theory of change for this intervention?[1]
- To what extent is this intervention cost-effective when compared to other interventions we have evaluated?
- Should the animal advocacy movement continue to devote the same amount of resources to this intervention that it does currently?
Research Products
| Part of Report | Products of Research |
|---|---|
| Part One: Intervention Description and Theory of Change | 1.1 Description of intervention (<250 words); 1.2 Table of intended and unintended intervention outcomes (<100 words) and/or 1.3 Description of a theory of change for this intervention (<300 words); 1.4 Discussion |
| Part Two: Evidence from Animal Advocacy Research | 2.1 Table of studies from the field of animal advocacy (<50 words per study, ≤15 studies); 2.2 Written summary of the available research from the field of animal advocacy (500–1,200 words); 2.3 Discussion and assessment of evidence (500–1,200 words) |
| Part Three: Evidence from the Social Sciences | 3.1 Table of relevant studies from the social sciences (<50 words per study, ≤15 studies); 3.2 Written summary of the available research from the social sciences (500–1,200 words); 3.3 Discussion and assessment of evidence (500–1,200 words) |
| Part Four: Case Study Analysis and Cost-Effectiveness Estimates | 4.1 Written description of each case study (500–1,000 words for each study); 4.2 Written description of cost-effectiveness models and links to models on Guesstimate; 4.3 Summary of the evidence gathered from the case studies (500–1,000 words); 4.4 Discussion and assessment of evidence |
| Part Five: Conversations in the Field | 5.1 Introduction and links to individual conversation summaries (500–1,500 words per summary); 5.2 Summary and comparison of conversations (500–1,000 words); 5.3 Discussion and assessment of evidence |
| Part Six: Overall Assessment | 6.1 Overview of evidence from Parts One through Five (500–1,000 words); 6.2 Description of variance in intervention effectiveness and advice for maximizing effectiveness (400–1,000 words); 6.3 Overall assessments and discussion (<400 words for each evaluative question) |

Note that the word counts suggested on this page have been kept low to support usability. However, it may be necessary to exceed these limits. Please consider the suggested word counts to be a flexible guide.
Research and Review Process
1. The Project Leader completes Parts One through Five of the report.
2. The Primary Critic reviews the report, and the Project Leader updates the report according to the feedback.
3. The rest of the research team reviews the report. The Project Leader updates the report according to feedback from the team.
4. The members of the research team, including the Project Leader and Primary Critic, independently make overall assessments according to the questions in Part Six.
5. The research team compares and discusses the results of the independent assessments.
6. The Project Leader makes the final assessment and completes the report.
7. The report is externally reviewed, and the Project Leader updates the report accordingly.
Note that this process may be adapted according to time constraints and other possible limitations.
[1] Note that, in cases for which we are highly certain of an intervention’s outcomes and cost-effectiveness, it is not necessarily important, for our purposes, to understand how the intervention works. However, since we are often quite uncertain about the outcomes and effectiveness of a given intervention, we sometimes feel more confident in recommending interventions with well-understood theories of change.
Detailed Evaluation Guide
Overview
The goal of our evaluation process is to determine how to allocate the animal advocacy movement’s resources in order to help animals as much as possible. Put simply, we’re interested in the following:
- To what extent does this intervention achieve positive outcomes for animals?
- To what extent does the available evidence support our theory of change for this intervention?[1]
- To what extent is this intervention cost-effective when compared to other interventions we have evaluated?
- Should the animal advocacy movement continue to devote the same amount of resources to this intervention that it does currently?
In part because of the lack of available research, these questions are often quite difficult to answer. Appendix One details the limitations of various approaches to intervention evaluation. Appendix Two provides background reading on the various approaches that have been utilized in designing this method. Appendix Three provides further information on utilizing a theory-driven approach to evaluation.
Due to the limitations of the various approaches to evaluation as discussed in Appendix One and the lack of evidence available on effective animal advocacy, we will use a mixed methods[2] approach, maintaining some degree of flexibility within this process.
Our evaluation process consists of six parts, which need not be completed in strict succession. For example, it could be appropriate to begin the literature search before deciding exactly what will be included in the analysis.
- Part One: Intervention Description and Theory of Change
- Part Two: Evidence from Animal Advocacy Research
- Part Three: Evidence from the Social Sciences
- Part Four: Case Study Analysis and Cost-Effectiveness Estimates
- Part Five: Conversations in the Field
- Part Six: Overall Assessment
- Appendices
- Further Reading
Part One: Intervention Description and Theory of Change
Intent: The intervention description provides an overview of the type of intervention under evaluation and how widespread the intervention is. We describe variations in the implementation of the intervention and indicate which variations will—and will not—be considered in our evaluation. We provide a theory of change for the intervention as well as a summary of the outcomes we’d expect to see as a result of this intervention. We include both positive and negative outcomes and both intended and unintended outcomes. To help us identify different kinds of effects that an intervention can have, we’ve developed a Menu of Outcomes for Animal Advocacy.
1.1 Intervention Description
Description of Intervention (<250 words)
- Provide a description of the intervention type to be assessed.
- Are there forms of this intervention to be excluded from this analysis?
- Consider variations in approach to this intervention, including differences in implementation or expected outcomes. Consider whether the analysis will only include interventions based in a specific country/location/population group/setting.
- If two approaches to the intervention are different and both will be included, explicitly state why we are evaluating them together. For example, online ads that promote the health benefits of plant-based eating work via a different mechanism than online ads showing footage from inside abattoirs. Although the two interventions could be discussed in the same intervention report, we generally need to draw separate conclusions about the effectiveness of interventions that operate via differing mechanisms.
- Describe the prevalence of this intervention.
1.2 Intervention Outcomes
Table of intended and unintended outcomes we expect to see from this intervention (<150 words)
- The Menu of Outcomes for Animal Advocacy provides examples of outcomes to include. There may be additional outcomes outside of the ones on this list. There is no need to choose an outcome from each section.
- Focus on the most likely and significant outcomes rather than all possible outcomes.
| Outcomes | Positive Examples | Negative Examples |
|---|---|---|
| Short-Term | An individual reduces their meat consumption | An animal activist gets arrested |
| Intermediate | Animal issues advance on the political agenda | Corporations counter-mobilize |
| Long-Term (Optional) | Society becomes less speciesist | Negative stereotypes of animal advocates increase |
1.3 Theory of Change (optional)
A theory of change[3] explains how activities are understood to produce a series of results that contribute to achieving the final intended impacts.
As a starting point for developing the theory of change it would be useful to:
- Examine whether previous research has already described the mechanisms by utilizing the research conducted in Part Two.
- If possible, use the case study analysis in Part Four to understand the underpinning logic behind the approach and explore the interviewees’ theories of how the intervention works.
If useful, another approach to developing a theory of change is the development of a Context-Mechanism-Outcome framework, which is described in further detail in Appendix Three; Appendix Two offers further reading on this approach. In the process of completing an intervention report, it is also possible that the initial theory of change will be modified before being finalized.
1.4 Discussion
Discuss reasoning for theory of change and any evidence supporting it, or indicate whether evidence for the theory of change will be discussed later in the report.
Part Two: Evidence from Animal Advocacy Research
Intent: Part Two brings together relevant evidence from the field of animal advocacy in an accessible format via a summarized table to enable comparison. We provide an assessment of the state of the evidence with regard to quality and effect size, as well as noting significant gaps in the evidence. Finally, we provide a judgment on the extent to which the evidence supports the use of the intervention under consideration.
2.1 Evidence from Animal Advocacy Research
Complete a literature search using databases including but not limited to the following:
- Google Scholar
- ACE Research Library
- Faunalytics’ Research Library
- Humane League Labs’ Research
- MFA’s Research
Evidence can also come in the form of media coverage if, for example, an intervention has had significant victories that have been documented in the media.
Include a table or spreadsheet of studies from the field of animal advocacy (<50 words per study, no more than 15 studies)
| Author(s) | Year | Title (Link) | Approach | Context | Key Findings | Key Limitations |
|---|---|---|---|---|---|---|
| [Last, F.] | | | | | [Max. 50 words] | [Max. 50 words] |
2.2 Summary
Provide a written summary of the available research from the field of animal advocacy (500–1,200 words)
Consider the quality of research available and use discretion when choosing studies to be included in this summary. Spend the majority of the discussion on the findings of the highest quality studies, rather than discussing the methodological flaws of lower quality studies. As a guide, spend >75% of the summary on 3–6 of the highest quality studies and <25% on lower quality studies.[4]
2.3 Discussion
Discuss the quality of the research[5] and the extent to which it supports the use of the intervention and (optionally) the theory of change provided in Part One. (500–1,200 words)
Use the following questions as prompts, though not all questions need to be explored:
- Is triangulation possible? Can data be cross-verified using different sources (e.g., quantitative and qualitative)?
- Do the implications of the reported findings make sense?
- Where are there gaps, inconsistencies, or limitations in the evidence?
- Do studies attempt to make the causal mechanisms explicit?
- Does this research take into account this intervention in the context of other campaigns or the broader ecosystem of change?
- How generalizable is this evidence to other population groups or contexts?
- For example, if research has been conducted in colleges/universities, how generalizable is it to other settings? Or, if the study was of a pilot program, was it similar enough to full-scale programs that we think it also tells us how the intervention works?
- How generalizable is the evidence to other variations of this intervention?
- For example, what effect do variations in implementation have on the resulting outcomes?
- Are there multiple studies replicating the same findings?
Provide an overall assessment of the research on the following scale:
| Poor | Weak | Moderate | Strong |
|---|---|---|---|
| There is little to no evidence to support this choice of intervention. Or, the evidence suggests the intervention may have no effect or a negative impact. | There is weak evidence to support this intervention, but it is exploratory in nature, weak in effect, or drawn from low quality studies. For example, there may be a limited number of exploratory case studies or qualitative studies that demonstrate positive results, or a single-site experimental study that suggests a weak impact or is of poor quality. | There is moderate evidence to support this choice of intervention. For example, multiple studies have demonstrated positive results. The intervention has been studied using a variety of methods, such as exploratory studies (e.g., observational studies), quasi-experimental studies, or single-site experimental evaluations. Research is moderate to high quality. | There is strong, high quality evidence to support this choice of intervention. For example, there are multi-site experimental evaluations with standardized approaches that demonstrate a positive impact, or multiple high quality studies using a variety of methods provide strong evidence. |
Part Three: Evidence from the Social Sciences
Intent: The intent of Part Three is to bring together relevant research from the social sciences. We look to fields like psychology and social movement studies to build our understanding of the intervention. We provide a judgment on the extent to which the available literature supports the use of the intervention under consideration.
3.1 Evidence from the Social Sciences
It may be possible to identify the most relevant research from the social sciences to build a base of evidence regarding the intervention; another possibility may be using the theory of change to identify and test the assumptions behind how this intervention works.
Include a table or spreadsheet of studies from the social sciences (<50 words per study, no more than 15 studies).
| Author(s) | Year | Title (Link) | Approach | Context | Key Findings | Key Limitations |
|---|---|---|---|---|---|---|
| [Last, F.] | | | | | [Max. 50 words] | [Max. 50 words] |
3.2 Summary
Provide a written summary of the available research from the social sciences (500–1,200 words).
Focus the summary on research that is of the highest quality and/or most decisive for our understanding of the intervention. As a guide, focus on 3–6 of the highest quality studies.
3.3 Discussion
Discuss the quality of the research[6] and the extent to which it supports the use of the intervention and (optionally) the theory of change provided in Part One (500–1,200 words).
Use the following questions as prompts, though not all questions need to be explored:
- Is the research in the preliminary/exploratory stages?
- What assumptions are being made?
- Where are there gaps in the evidence?
- To what extent are the underlying mechanisms understood?
- Are there certain contexts in which this intervention is more or less effective?
Provide an overall assessment of the research on the following scale:
| Poor | Weak | Moderate | Strong |
|---|---|---|---|
| There is little to no evidence to support this choice of intervention. Or, the evidence suggests the intervention may have no effect or a negative impact. | There is weak evidence from the social sciences that supports this choice of intervention, but it is exploratory in nature, weak in effect, or drawn from low quality studies. | There is moderate evidence from the social sciences that further builds our understanding of the underlying causal mechanisms. There is some understanding of interactions within various contexts and of for whom this intervention works. Research is moderate to high quality. | There is strong evidence from the social sciences to support this choice of intervention. We have a strong understanding of the causal mechanisms, including how the intervention works in various contexts and for whom. Research is high quality. |
Part Four: Case Study Analysis and Cost-Effectiveness Estimates
Intent: We generally aim to conduct two or three case studies for each intervention under evaluation. These can provide a useful way to compare different variations in implementations of an intervention. For further background information on developing case studies, see Appendix Two. Given the lack of high quality intervention research, case studies provide a useful way to gain greater insight into instances of an intervention. They can also provide data for a cost-effectiveness analysis. Case studies provide rich data, usually at the expense of generalizability. Other limitations of case study analysis are summarized in Appendix One.
4.1 Case Study Descriptions
A case study may be a single case (e.g., a single protest) or it might encompass an organization’s program (e.g., a protest program). Whether the evaluator decides to investigate a single case or a program as a case will depend on the most pertinent questions that need to be answered. The evaluator may decide to research cases that have had positive, negative, or indeterminate outcomes.
Prepare for and conduct the case studies.
- Aim to develop 2–3 case studies via interviewing people in the field. It’s also possible for the case study interviews to be combined with the semi-structured interviews in Part Five.
- Provide an explanation/rationale on how the case study or case studies were chosen.
- Parts of this process will be best completed via email (e.g., collecting data for the cost-effectiveness estimates).
- Summarize the data gleaned from each case study in a table (see Table 4 as an example).
| People Interviewed | [Name, Title] |
|---|---|
| Other Data Sources | |
| Description of Intervention | |
| Implementation Description | |
| Costs | |
| Indicators of Success | |
| How does this intervention work, according to the interviewee? | |
| Were there outside factors/influences that may have influenced outcomes? | |
| Were there indicators to suggest that the intervention caused any of the measured changes? | |
4.2 Cost-Effectiveness Estimates
See ACE’s write up of cost-effectiveness estimates (Our Use of Cost-Effectiveness Estimates).
Consider the following factors:
- Are the case studies on which the cost-effectiveness estimates are built representative of this intervention type?
- Are there cases in which this intervention would be more or less cost-effective?
- How does other evidence influence the cost-effectiveness estimates?
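To illustrate the kind of model involved: our cost-effectiveness models are built in Guesstimate, which propagates uncertainty through a calculation via Monte Carlo sampling. The Python sketch below reproduces that idea for a hypothetical outreach intervention. Every parameter range, the `lognormal_90ci` helper, and the outcome model itself are invented placeholders for illustration, not figures or structure from any actual case study.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N = 100_000  # number of Monte Carlo samples

def lognormal_90ci(low, high, size):
    """Sample a lognormal distribution whose 90% interval is roughly (low, high)."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)
    return rng.lognormal(mu, sigma, size)

# Hypothetical placeholder ranges -- not data from any real case study.
cost = lognormal_90ci(5_000, 20_000, N)            # total program cost (USD)
people_reached = lognormal_90ci(10_000, 50_000, N)
prob_diet_change = rng.uniform(0.001, 0.02, N)     # fraction who change diet
animals_per_person_year = lognormal_90ci(10, 40, N)
years_of_change = rng.uniform(0.5, 5, N)

animals_spared = (people_reached * prob_diet_change
                  * animals_per_person_year * years_of_change)
animals_per_dollar = animals_spared / cost

# Report a central estimate with an interval, not a single point.
print(f"median: {np.median(animals_per_dollar):.2f} animals spared per dollar")
print(f"90% interval: {np.percentile(animals_per_dollar, 5):.2f} "
      f"to {np.percentile(animals_per_dollar, 95):.2f}")
```

Reporting an interval alongside the median, rather than a single point estimate, mirrors the Level of Certainty scales used in Part Six.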
4.3 Summary
Provide written summaries of each case study, including details of cost-effectiveness estimates (500–1,000 words for each case study).
4.4 Discussion
Provide a discussion of the evidence gleaned from case study research and cost-effectiveness estimates. Consider how well the evidence supports the use of the intervention and (optionally) our theory of change (400–1,000 words).
Use the following questions as prompts, though not all questions need to be explored:
- How does case study analysis contribute to our understanding of this intervention type? Use the criteria to assess the level of support case studies provide.
- How does the cost-effectiveness estimate affect our assessment of this intervention?
- Did the case study intervention meet its intended outcomes in the short, medium, and (possibly) long term?
- What evidence is available to indicate that the intervention in each case study caused the intended outcomes?
Provide an overall assessment of the case study evidence on the following scale:
| Poor | Weak | Moderate | Strong |
|---|---|---|---|
| Development of a case study did not provide evidence to support this intervention choice. For example, the case study demonstrated that the intervention did not have an impact on participants, or there was no measurement of indicators of success. | The case study provided weak evidence to support this intervention choice. For example, the case study provided details regarding implementation, developed our understanding of the intervention’s theory of change, and showed some potential indicators of success. | The case study provided moderate evidence to support this intervention choice. For example, the case study provided sufficient detail regarding implementation, further developed our understanding of the intervention’s theory of change, and showed multiple indicators of success. | The case study provided strong evidence to support this intervention choice. For example, the case study includes multiple indicators that demonstrate that the intervention was successful and cost-effective, it is unlikely that other causes could account for the positive results, and the case study accounts for contextual factors. |
Part Five: Conversations in the Field
Intent: Here we conduct semi-structured interviews to draw on the opinions and experience of experts or of individuals who have utilized this approach, in order to support the previous parts of this evaluation process. This should include interviews with people who do not advocate for this intervention, if possible.
The intent of the interviews is to support other parts of this evaluation process, and their content will thus depend on questions that arise during that process. Some examples of areas worth exploring are:
- Developing a deeper understanding of the causal chain
- Developing a deeper understanding of the academic literature
- Comparing the intervention to other interventions
- Understanding the interviewee’s personal experience with and opinion of this intervention
5.1 Conversations in the Field
Aim to complete 1–4 conversations with experts or individuals in the field, depending on the time available, the number of case studies that have been completed, and the questions to be answered. The semi-structured interviews can be completed as part of the case study interviews or on their own.
Prepare semi-structured interview questions. Use the Better Evaluation website and Appendix Four for further guidance.
Provide links to summaries of each conversation (500–1,500 words each). In most cases this could be completed by interns.
5.2 Summary of Conversations
Summarize the evidence from the conversations, highlighting convergences and divergences in opinion among experts (400–1,000 words).
5.3 Discussion
Analyze the gathered evidence. The level of analysis will be dependent on the number of interviews completed and similarity of themes across conversations. Appendix Four provides further guidance.
The following questions may be useful to consider:
- To what extent do the conversations from the field support this intervention choice?
- To what extent do the conversations further develop our understanding of the theory of change?
Provide an overall assessment of the evidence drawn from conversations on the following scale:
| Poor | Weak | Moderate | Strong |
|---|---|---|---|
| Our field conversations do not provide evidence to support this intervention choice. | Our field conversations provide weak evidence to support this intervention choice. | Our field conversations provide moderate evidence to support this intervention choice. | Our field conversations provide strong evidence to support this intervention choice. |
Part Six: Overall Assessment
Intent: When possible, we will describe the extent of the variation in an intervention’s effectiveness, and we will provide evidence-based advice for implementing the intervention as effectively as possible. Acknowledging that any type of intervention probably varies widely in its effectiveness, we aim to provide an overall judgment of the effectiveness of an intervention type.
6.1 Overview
Provide a brief overview of the evidence from each of the parts and their assessment according to the criteria (<400 words).
6.2 Variance of Intervention Effectiveness
Since interventions often vary widely in implementation, they often also vary widely in effectiveness. If possible, provide evidence-based advice for maximizing the positive impact of the intervention.
6.3 Evaluative Questions
Follow the process outlined in the overview before making a final decision on the evaluative questions outlined below.
Provide a rationale and a brief summary of the evidence that has informed each assessment (<400 words for each question).
Consider the following:
- How has the evidence been weighted? Have the limitations of each evidence source been taken into account?
- Since this process utilizes mixed methods, take into account where the different sources of evidence do and do not validate one another. For example: is quantitative data from a randomized controlled trial in Part Two validated by qualitative interviews in Part Five?
- In some cases there will be variations in assessments of the same intervention type. In such cases, provide a range (see the sketch following this list).
- Are there potentially negative outcomes as a consequence of this intervention? How does this affect our overall assessment?
- Are there other interventions that result in the same outcomes as this intervention, but do not cause the negative outcomes that this intervention may cause?
- Is this intervention necessary for the success of the movement?
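Where team members’ independent scores on an evaluative question diverge, one simple way to summarize them is to report the median together with the range. A minimal sketch, using made-up scores:

```python
import statistics

# Made-up example: each researcher's 1-5 score on one evaluative question.
scores = {"Researcher A": 3, "Researcher B": 4, "Researcher C": 3,
          "Researcher D": 2, "Researcher E": 4}

values = sorted(scores.values())
print(f"range: {values[0]}-{values[-1]}")      # e.g., "range: 2-4"
print(f"median: {statistics.median(values)}")  # central tendency for discussion
```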
Table 5. Questions and Scales

Question One: To what extent does this intervention achieve positive outcomes for animals?

Discussion:

Scale:

| 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|
| This intervention creates no net positive change (and might even create net negative change) for animals. | | This intervention creates some net positive change for animals. | | This intervention creates significant net positive change for animals. |

Level of Certainty:

| 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|
| We are highly uncertain about the impact this intervention has for animals. | | We are moderately certain about the impact this intervention has for animals. | | We are highly certain about the impact this intervention has for animals. |
Question Two: To what extent does the available evidence support our theory of change for this intervention?

Discussion:

Scale:

| 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|
| The available evidence does not support our theory of change. | | The available evidence provides moderate support for our theory of change. | | The available evidence strongly supports our theory of change. |

Level of Certainty:

| 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|
| We are highly uncertain about the extent to which the evidence supports our theory of change. | | We are moderately certain about the extent to which the evidence supports our theory of change. | | We are highly certain about the extent to which the evidence supports our theory of change. |
Question Three: To what extent is this intervention cost-effective when compared to other interventions we have evaluated?

Discussion:

Scale:

| 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|
| This intervention is not cost-effective compared to other interventions we have evaluated. | | This intervention is comparable to the other interventions we have evaluated, in terms of cost-effectiveness. | | This intervention is cost-effective compared to other interventions we have evaluated. |

Level of Certainty:

| 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|
| We are highly uncertain about the cost-effectiveness of this intervention. | | We are moderately certain about the cost-effectiveness of this intervention. | | We are highly certain about the cost-effectiveness of this intervention. |
Question Four: Should the animal advocacy movement continue to devote the same amount of resources to this intervention that it does currently?

Discussion:

Scale:

| 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|
| The animal advocacy movement should devote far fewer resources to this intervention than it does currently. | | The animal advocacy movement should continue to devote the same amount of resources to this intervention that it does currently. | | The animal advocacy movement should devote far greater resources to this intervention than it does currently. |

Level of Certainty:

| 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|
| We are highly uncertain about the amount of resources that the animal advocacy movement should devote to this intervention. | | We are moderately certain about the amount of resources that the animal advocacy movement should devote to this intervention. | | We are highly certain about the amount of resources that the animal advocacy movement should devote to this intervention. |
Conclusions
Outline the conclusions we’ve drawn from our research and suggest avenues for further research.
[1] Note that, in cases for which we are highly certain of an intervention’s outcomes and cost-effectiveness, it is not necessarily important, for our purposes, to understand how the intervention works. However, since we are often quite uncertain about the outcomes and effectiveness of a given intervention, we sometimes feel more confident in recommending interventions with well-understood theories of change.
[2] Mixed methods research is defined as a class of research where the researcher mixes or combines qualitative and quantitative research techniques, methods, approaches, concepts, or language into a single study. It is also an attempt to legitimate the use of multiple approaches in answering research questions (Johnson and Onwuegbuzie, 2004).
[3] For guidance on developing a theory of change, the following resources are useful: Rogers, P. (2014) Theory of Change, Methodological Briefs: Impact Evaluation 2; Rogers, P. (2014) Develop Programme Theory, Better Evaluation.
[4] See Belcher et al.’s (2016) paper on assessing research quality, Table 3.
[5] For a guide, see Belcher, B., Rasmussen, K., Kemshaw, M., & Zornes, D. (2016) “Defining and assessing research quality in a transdisciplinary context.” Research Evaluation, 25(1), 1–17. Use Table 3 as a guide to make an assessment of the evidence.
[6] For a guide, see Belcher, B., Rasmussen, K., Kemshaw, M., & Zornes, D. (2016) “Defining and assessing research quality in a transdisciplinary context.” Research Evaluation, 25(1), 1–17. Use Table 3 as a guide to make an assessment of the evidence.
Appendices
Appendix One: Background on our Approach
There have been extensive debates over various methods for evaluating interventions (Cook et al., 2010). Additionally, even where there have been significant resources allocated to evaluating interventions, there are difficulties with generalizability (Vivalt, 2015; retrieved from https://economics.stanford.edu/sites/default/files/eva_vivalt.pdf).
Due to the challenges and limitations discussed below, ACE is utilizing a mixed methods approach to analyzing interventions. We’re taking from various approaches with the understanding that none of these methods alone provide adequate answers to our primary questions.
In designing this process the following limitations have been considered:
| Research/Approaches | Limitations |
|---|---|
| Animal advocacy research | There are few high quality studies available that have assessed the effectiveness of interventions in the animal advocacy space. |
| | Humane League Labs has detailed unique challenges for animal advocacy, including small effect sizes and measurement error in self-reported survey data. |
| Cost-effectiveness estimates | We have previously written about some of the difficulties regarding cost-effectiveness estimates, and GiveWell has described the limitations of cost-effectiveness studies. |
| Theory of change approaches (e.g., realist synthesis) | A non-standardized approach is time consuming, human resource-intensive, and requires significant expertise (Rycroft-Malone et al., 2012). |
| | Realist reviews do not provide simple answers (e.g., whether something works or not) (Pawson et al., 2005). |
| Case studies | Without a counterfactual, there are issues with inferring causation. |
| | Case studies provide little basis for generalization (i.e., for producing findings that are generalizable to other settings). |
| | Selection of case studies can be biased, e.g., toward cases with positive outcomes or accessible evidence. |
| Experimental/counterfactual studies | There is ongoing debate about the extent to which counterfactuals are a sufficient basis for understanding causation (Stern et al., 2012; Scriven, 2008). |
| | Counterfactual studies associate a single cause with a given effect without providing information on what happens in between (Stern et al., 2012). |
| | Counterfactual studies answer contingent, setting-specific causal questions (did it work there and then?) and cannot be used for generalization to other settings and timeframes unless accompanied by knowledge of the causal mechanisms at play (Stern et al., 2012). |
| | Further limitations of counterfactual studies have been described by Scriven (2008) and Campbell et al. (2002). |
| Intervention research | Interventions are frequently complex, with multiple outcomes. |
| | Interventions work in parallel and overlap, making it difficult to disentangle causes from effects. |
| | Interventions contribute to change, which makes it difficult to demonstrate attribution. |
| | Intervention effects depend on implementation approaches. |
| | Interventions can be long-term in nature, embedded in changing contexts, and subject to extended causal pathways, creating difficulties in measuring impact. |
| | It is difficult to provide concise estimates of long-term future effects. |
Despite these limitations, by utilizing various approaches to evaluation—including case study analysis, interviews, cost-effectiveness estimates, and the pre-existing research—we hope to gain a clearer picture and to make an informed assessment of interventions.
Appendix Two: Background Reading
Impact Evaluations
- Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R., & Befani, B. (2012) Broadening the range of designs and methods for impact evaluations, Department for International Development.
Mixed Methods
- Johnson, R., Onwuegbuzie J. (2004) “Mixed Methods Research: A Research Paradigm Whose Time Has Come.” Educational Researcher, 33(7).
Theory-Driven Evaluation
- Astbury, B. & Leeuw, F. (2010) “Unpacking Black Boxes: Mechanisms and Theory Building in Evaluation.” American Journal of Evaluation, 31(3).
- Westhorp, G. Realist impact evaluation: an introduction, Methods Lab. (London: Overseas Development Institute, 2014).
- Wong, G., Westhorp G., Pawson, R., Greenhalgh, T., (2013) Realist Synthesis, Rameses Training Materials.
- Rogers, P. (2014). Theory of Change, Methodological Briefs: Impact Evaluation 2.
Case Study Research
- Balbach, E. (1999) “Using Case Studies to do Program Evaluation.” Stanford Center for Research in Disease Prevention, California Department of Health Services.
Appendix Three: An Introduction to the Utilization of Theory-Driven Approaches to Evaluation
Theory-driven approaches have been suggested as an alternative to other forms of evaluation, such as systematic review processes and traditional forms of impact evaluation, when it comes to studying complex social interventions (Stern et al., 2012; Petticrew, 2015).
Realist synthesis is one such theory-driven approach; it utilizes various sources of evidence to explore underlying mechanisms, attempting to answer the question “what works for whom and under what circumstances?” (Pawson & Tilley, 1997). Realist approaches often use Context-Mechanism-Outcome (C-M-O) frameworks. In a C-M-O framework:
Context describes those features of the conditions in which programs are introduced that are relevant to the operation of the program mechanisms. Context includes broad social or geographical features (e.g., country, culture) or features that affect implementation (e.g., institutions, funding, staff qualifications). It could also relate to differing population profiles (Wong et al., 2013).
Mechanism describes the features of programs and interventions that work together as a system to bring about any effects. Mechanisms are causal forces or powers, generally hidden and sensitive to variation in context (Astbury & Leeuw, 2010). They’re not inherent to an intervention but are a function of the participants and the context; the same intervention can trigger different mechanisms for different participants.
Outcome describes the intended and unintended consequences of programs, resulting from the activation of different mechanisms in different contexts (Pawson & Tilley, 2004. Retrieved from http://www.communitymatters.com.au/RE_chapter.pdf).
Here is an example of a possible C-M-O configuration for an intervention that involved the release of footage from the live export industry. In this example the year and place are relevant because the resulting mechanisms were dependent upon this timeframe. Similarly disturbing footage released after this date has not resulted in the same public or political response. We could have multiple different hypotheses about why these mechanisms are not triggered in 2017 (e.g., people do not feel that they can make an impact through action, a different government may be less likely to act on public outrage, or people have become desensitized to this kind of suffering).
| Context | Australia, 2011 |
|---|---|
| Mechanism | Footage of live export suffering provoked an emotional response in viewers (disgust/despair/anger) |
| Outcomes | |
If use of a C-M-O framework for an intervention is not useful, possible, or practical an alternative is to develop a theory of change.
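For evaluators who want to record C-M-O configurations in a structured form (e.g., to compare hypotheses across case studies), a simple data structure suffices. The sketch below is purely illustrative; the class and field names are our own, not part of the realist-synthesis literature, and the outcomes list is left as a placeholder because the specific outcomes belong in the case record.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CMOConfiguration:
    """One Context-Mechanism-Outcome hypothesis about an intervention."""
    context: str            # conditions relevant to whether the mechanism fires
    mechanism: str          # the (usually hidden) causal force triggered in participants
    outcomes: List[str] = field(default_factory=list)  # intended and unintended results

# The live-export example from the table above; outcomes left empty here
# because they should be filled in from the documented case record.
live_export_2011 = CMOConfiguration(
    context="Australia, 2011",
    mechanism=("Footage of live export suffering provoked an emotional "
               "response in viewers (disgust/despair/anger)"),
)

print(live_export_2011)
```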
Appendix Four: Notes on Completing Semi-Structured Interviews
A semi-structured interview involves the use of predetermined questions centered on specific themes which are open to adaptation.
Prompts for preparing interview questions:
- In general use open-ended questions, so as to increase the likelihood of getting lengthy and descriptive answers
- Good questions should be clear, specific, unambiguous, and directly related to the research
- Keep the questions as concise as possible; avoid asking two-in-one questions
- Don’t use leading questions that encourage the respondent to answer in a particular way (thus biasing your answers)
- Avoid using questions with a strong positive or negative association
Before/during the interview:
- Let the interviewee know how long the interview will take
- Let the interviewee know how the data is going to be transcribed/used/reported on
- Build rapport at the beginning
- Use probing questions (e.g., “Can you tell me more about…?” “How come…?” “Why is this…?”)
The level of analysis will be dependent on the number of interviews completed and similarity of themes across interviews. For basic analysis, create a table of themes/questions and copy and paste quotes from each of the interviews into the table. This data display enables easier analysis. If it is necessary to be more comprehensive with the analysis, thematic coding can be used.
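The mechanical part of this basic analysis (grouping quotes under themes for display) can be scripted. The sketch below assumes a hypothetical CSV export of manually themed quotes; the file name and column names are illustrative, not part of our process.

```python
from collections import defaultdict
import csv

# Hypothetical input file: one row per quote, with columns
# "interviewee", "theme", "quote". Themes are assigned manually by the researcher.
quotes_by_theme = defaultdict(list)
with open("interview_quotes.csv", newline="") as f:
    for row in csv.DictReader(f):
        quotes_by_theme[row["theme"]].append((row["interviewee"], row["quote"]))

# Simple theme-by-interviewee display for eyeballing convergence and divergence.
for theme, quotes in sorted(quotes_by_theme.items()):
    print(f"\n{theme} ({len(quotes)} quotes)")
    for who, quote in quotes:
        print(f"- {who}: {quote}")
```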
Further Reading
References and Background Reading List
Astbury, B. & Leeuw, F. (2010) “Unpacking Black Boxes: Mechanisms and Theory Building in Evaluation.” American Journal of Evaluation, 31(3).
Balbach, E. (1999) “Using Case Studies to do Program Evaluation.” Stanford Center for Research in Disease Prevention, California Department of Health Services.
Belcher, B., Rasmussen, K., Kemshaw, M., & Zornes, D. (2016) “Defining and assessing research quality in a transdisciplinary context.” Research Evaluation, 25(1).
Johnson, R. & Onwuegbuzie, A. (2004) “Mixed Methods Research: A Research Paradigm Whose Time Has Come.” Educational Researcher, 33(7).
Pawson, R. & Tilley, N. (1997) Realistic Evaluation. London: SAGE.
Petticrew, M. (2015) “Time to rethink the systematic review catechism? Moving from ‘what works’ to ‘what happens’.” Systematic Reviews, 4(36).
Rogers, P. (2014) Develop Programme Theory. Better Evaluation.
Rogers, P. (2014) Theory of Change, Methodological Briefs: Impact Evaluation 2. UNICEF Office of Research, Florence.
Rycroft-Malone, J., McCormack, B., Hutchinson, A., et al. (2012) “Realist synthesis: illustrating the method for implementation research.” Implementation Science, 7(33).
Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R., & Befani, B. (2012) Broadening the range of designs and methods for impact evaluations. Department for International Development.
Vivalt, E. (2015) “Heterogeneous Treatment Effects in Impact Evaluation.” American Economic Review: Papers & Proceedings, 105(5).
Westhorp, G. (2014) Realist impact evaluation: an introduction. Methods Lab. London: Overseas Development Institute.
Wong, G., Westhorp, G., Pawson, R., & Greenhalgh, T. (2013) Realist Synthesis. RAMESES Training Materials.