Introduction
I would like to introduce myself, Kathryn Asher, as ACE’s new Research Scientist. I have been brought on to head up ACE’s new experimental research division, which will operate with the support of our existing (and recently expanded) research team. This division is aligned with our goal of finding and advocating for highly effective opportunities to improve the lives of animals on a wide scale, which we do by providing research about advocacy interventions and charity recommendations. This research initiative is also poised to add capacity to the larger animal advocacy movement’s efforts to be as evidence-based as possible.
We anticipate that our research division will focus largely (though perhaps not exclusively) on experimental methods. We will emphasize randomized controlled trials (RCTs), where the randomization of participants and the inclusion of a control condition help to establish causality for the treatment under study. Randomization removes bias in treatment assignment, while the comparison to a control allows for an assessment of the “real efficacy” of the treatment by ruling out other potentially influencing factors. RCTs have been “adopted in several scientific fields as the ‘gold standard’ of evidence.”1 We are interested in the effects of interventions, and RCTs are one of the best ways we know of to establish causality in this regard. The broader effective altruism movement has also embraced the use of RCTs, where they have been said to be “as central to evidence-based altruism as they are for evidence-based medicine.”2 Likewise, experimental designs are also generally considered of higher value by our Animal Advocacy Research Fund, so our methods of choice are in keeping with these values. It is important to note, however, that RCTs are not without their limitations.
Prioritization
We are currently engaged in a research project identification and prioritization exercise, following which we will turn our attention to individual designs and study execution. In accordance with our thinking on cause prioritization,3 we anticipate that our experimental research initiative will focus mostly on farmed animal interventions, while allowing for the possibility of smaller-scale studies into wild animal suffering interventions and/or activist mobilization and societal-level attitudinal change around speciesism.
While we are early in our prioritization work, we have identified several areas potentially ripe for study, including:
- Advocacy interventions (institutional and individual)
- Consumer acceptance and food tech (i.e., plant-based alternatives and cultured meat)
- Messaging (e.g., reduce vs. eliminate, how vs. why, or vegan vs. vegetarian vs. meat-free vs. plant-based)
- Foundational questions that help us understand the general mechanisms behind societal or individual change (e.g., confrontational vs. non-confrontational tactics)
- Dietary assessment outcome measures
We anticipate allocating time internally within ACE’s research team (and externally if needed) to further elaborate on and fine-tune our focus areas.
As part of the launch of this research initiative, we have outlined below the research practices that we see as necessary to establish and foster a high-value research culture.
Study Design
Impactful Research
Our goal is for this division to produce highly impactful applied research that will significantly contribute to answering important practical questions facing animal advocates. We aim for our research to consistently demonstrate the potential for on-the-ground applicability through the generation of usable recommendations. To this end, we will ask ourselves:
- Does this research address an important problem facing animal advocacy groups?
- How likely is this research to contribute to solving that problem, and to what extent?
Part of achieving affirmative answers to these questions will be ensuring that our research is situated to contribute to the advancement of knowledge, rather than being unproductively duplicative.4 For this reason, we will stay informed about relevant research being conducted by movement researchers, academics, and industry partners alike.
In prioritizing impactful research, we expect to give serious thought to whether we can concretely and in a step-by-step fashion say for any proposed study how it might improve the lives of animals. We will work towards this in two ways: (i) considering which research will have the highest expected value for ACE’s charity evaluations; and (ii) taking into account the likelihood that individuals, organizations, and companies will be able to increase their impact by incorporating the findings into their own work.
On the first point, most of ACE’s research work supports our charity evaluations. We see these evaluations as particularly high impact given that we influenced over $3.5 million in donations to our recommended charities last year. We can be relatively confident in the applicability of research designed for our own use, since we can control our own implementation of the findings. Research that will be the most directly impactful for our charity evaluations is likely to provide new insights on the (comparative) effects of interventions that are often used (or have the potential to be often used) where we did not previously have an informed estimate of those effects. Particularly valuable in this regard is research that leads to largely different results than our previous estimates, thereby offering more than a simple reduction in our uncertainty but also (and perhaps more usefully) an update to our views about effects.
On the second point, while we see the implementation of our research findings by entities outside of ACE as less within our sphere of control and so will apply a higher degree of caution, we also recognize that when feasible it can bring about change that has far-reaching effects. Some questions that may be helpful for us to consider in this regard include:
- Are there any specific entities currently struggling because they do not know the answer to this question?
- Are there entities who could change their behavior based on the results of this research?5
- Do we think they are likely to do so?
- How would we expect the results of this research to be used by specific entities?
Importantly, we will go further by taking steps to identify those who are extremely likely to act on a finding in order to understand the potential for implementation buy-in ahead of time. This will be most useful when entities particularly engaged in the topic at hand are involved in discussions about potential implementation, since dialoguing with those most directly involved can increase the scale of any change. Following reporting, we will also directly share user-friendly summaries of findings and discuss practical applications with relevant parties to improve the chances of implementation beyond what might come from simply hoping that key players will come across the findings on their own.
Study Power & Size
We anticipate our studies will range in cost from relatively small (<$2,500) to relatively large (>$50,000). The cost will depend on the nature of the study design (i.e., type of respondent, length of the instrument, number of administrations, and nature of the manipulation). It will also depend on the sample size required to make it likely that we will be able to detect a meaningful effect, should one exist. This is important because of our commitment to running adequately powered studies.
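The sample-size reasoning above can be sketched with a standard a-priori power calculation. The sketch below is purely illustrative: the effect size, alpha level, and 80% power target are conventional placeholders, not figures from any of our planned studies, and it uses the common normal approximation for a two-sided, two-sample comparison.

```python
# Approximate per-group sample size for a two-sided, two-sample comparison,
# using the normal approximation: n ~ 2 * (z_{alpha/2} + z_{beta})^2 / d^2.
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group n needed to detect a standardized effect of size d."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * z ** 2 / d ** 2

# A small standardized effect (Cohen's d = 0.2) at the conventional
# 5% alpha and 80% power target:
print(round(n_per_group(0.2)))  # -> 392 participants per group
```

Because required n grows as the inverse square of the effect size, halving the smallest effect we care about roughly quadruples the sample (and the cost), which is why the study budget and the power target have to be decided together.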
We maintain some uncertainty about the value of prioritizing a few large-scale studies at the expense of a host of smaller ones. One drawback of prioritizing time- and cost-intensive studies is that doing so reduces the number of useful truths we can uncover, by virtue of addressing fewer research questions. It also leaves less room for replications (whether at ACE or elsewhere). Such research can be particularly problematic when it is isolated rather than iterative, which can result in missed opportunities to learn valuable lessons from study to study. In the course of carrying out a research project, it is not uncommon to be confronted with information that challenges—or even seriously undermines—the approach at hand, or suggests that several more studies are needed to properly answer the research question.
Consequently, it is important that we refrain from setting ourselves up for failure by jumping into an isolated high-input study without sufficient “build”6 and the ability to be responsive to past learnings. Ultimately, we want to avoid conducting a multi-year RCT simply because doing so might seem exciting—we would need a clear idea of its comparative advantage. There is of course a place for more intensive studies, which we see as coming after we have sufficient internal and/or external studies that can speak to how best to maximize the research protocol (and even then, piloting may be necessary).
Regardless of the size of the study, we tend to have a relatively strong preference for simplicity in research. Specifically, we see value in studies that include fewer independent variables and even fewer dependent variables than is the norm.7 We do, however, also think it is important to leave open the possibility of collecting additional variables that can be used to generate or test later hypotheses—as these can in turn inform future study priorities and designs.
Experimental Conditions
Despite our preference for simplicity, we anticipate favoring designs with more than two conditions (when resources allow). In addition to being able to demonstrate an overall intervention effect (treatment vs. control)—an important contribution given the lack of data on this front—we foresee even greater applicability for studies that are powered to additionally detect a difference between two or more treatments. Such treatments could, for example, include comparisons within interventions (e.g., health vs. animal themed videos) or, even more powerfully, between interventions (e.g., videos vs. leaflets), though the latter would not be without design challenges.
One motivation for including more than two conditions is that a study designed only to detect an intervention effect gives advocates less opportunity to apply the findings. For instance, if a study demonstrated an effect from leafleting against a control, this may at best suggest that organizations already engaged in leafleting should maintain the status quo. It does not tell organizations who are not leafleting (or who do so in conjunction with other outreach) whether they should place a larger focus on leafleting, nor does it indicate to those who are already leafleting how they can make their leaflets more effective. Likewise, an advantage of an intervention study with three conditions is that it incorporates a conceptual replication of sorts, which helps mitigate situations where an intervention might only be effective when it is in the form of X but not Y. In this sense, if we only include one treatment arm (e.g., one particular leaflet) we may erroneously conclude something about that leaflet that is not true for leafleting in general.8
Despite all of this, we recognize that powering a study to detect an effect between treatments (especially across interventions) can be cost prohibitive given that the sought-after effect is generally very small. In such instances, we would prefer to overpower a two-condition controlled study than to risk underpowering a three-condition one. While such a two-condition study may not have as direct applicability for the movement, it can still provide valuable information for our charity evaluations given that the current process entails making estimates about the effects of interventions. As a consequence, a well-designed study aimed at uncovering an intervention effect would presumably change the estimates we use in our evaluations (whether the expected value, the variance, or both). Additionally, a two-condition study could also serve as part of a larger collection of studies aimed at testing intervention effects using common outcome measures with the goal of comparing across studies.
Outcome Measures
Because we recognize individual diet choices as having an effect on decreasing farmed animal suffering, we see value in finding better ways to track dietary modification—given that some of the self-reported outcome measures that have been used in the past (e.g., food frequency questionnaires) have notable limitations particularly for intervention research at the individual level. Consequently, we are interested in testing other self-reported dietary measures, as well as more objective ones, to increase the level of confidence we have in our results.
For self-reported measures, we are particularly interested in (unannounced) 24-hour recalls administered via the Automated Self-Administered 24-Hour Dietary Assessment (ASA24). We are also curious about meal photography and food records (via apps like MyFitnessPal), though for the latter the potential for reactivity limits the suitability of food records for intervention research. We are also interested in administering self-reported measures using unannounced experience sampling to bring the timing of the data collection more in line with the timing of the events. Additionally, we see benefit in collecting social desirability markers to help identify bias for studies relying on self-reported measures where such concerns are not able to be circumvented through the design, particularly in intervention studies where differential response bias is likely.
For more objective measures, we have an interest in observational data (e.g., snack selection as part of study compensation), sales data (e.g., Nielsen retail scanner data or purchase tracking with dining hall coupons), and even biomarkers—though their value for assessing animal product (or even meat) consumption may be limited. Additionally, we see value in prioritizing animal-centric outcome measures when feasible, or at least those that can be translated accordingly.
Research Practices
Statistical Practices
In the analysis of data we will always attempt to: (i) use the statistical methods that are the most appropriate, and (ii) be clear about the statistical methods that we use. Which statistical method is most appropriate will vary substantially depending on several factors, including the data type involved and what research question we are attempting to answer.9 When reporting and interpreting the results of all significance tests we will attempt to follow best practices.10 We also recognize that the dichotomization of evidence into the typical binary of either rejecting or failing to reject a hypothesis is often less informative than the way in which Bayesian approaches update the probabilities of hypotheses. Although we will primarily use frequentist approaches in the analysis of our data, for the aforementioned reason we may use Bayesian approaches in some cases, or partner Bayesian methods with significance testing during the analysis and interpretation of evidence.11 Lastly, in situations where we have a substantial amount of doubt about the extent to which an analysis method is appropriate, we will seek informed external advice.
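As a sketch of the reporting practice described above (and detailed in the accompanying footnote), the example below runs a two-sample t-test on entirely synthetic data and reports the test statistic, degrees of freedom, p-value, effect size, and a 95% confidence interval together. All values are simulated; nothing here reflects real study results.

```python
# Reporting a significance test alongside its test statistic, degrees of
# freedom, effect size (Cohen's d), and 95% CI, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
treatment = rng.normal(loc=0.3, scale=1.0, size=200)  # synthetic scores
control = rng.normal(loc=0.0, scale=1.0, size=200)

t_stat, p_value = stats.ttest_ind(treatment, control)
df = len(treatment) + len(control) - 2

# Cohen's d using the pooled standard deviation
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
d = (treatment.mean() - control.mean()) / pooled_sd

# 95% confidence interval for the mean difference
diff = treatment.mean() - control.mean()
se = pooled_sd * np.sqrt(1 / len(treatment) + 1 / len(control))
margin = stats.t.ppf(0.975, df) * se
print(f"t({df}) = {t_stat:.2f}, p = {p_value:.4f}, d = {d:.2f}, "
      f"95% CI [{diff - margin:.2f}, {diff + margin:.2f}]")
```

If several such comparisons were run in one study, the alpha level would also be adjusted for multiple comparisons (e.g., a Bonferroni correction) before interpreting the p-values.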
Open Science
ACE has an open science policy, aiming to set a threshold even higher than that established for academic peer review to address publication bias and researcher degrees of freedom. As part of this policy, we commit to: (i) pre-registering a pre-analysis plan on the Open Science Framework (OSF) or another open access platform prior to data collection; (ii) publicly releasing our de-identified datasets, code, and study design materials; and (iii) following best practices for reporting by: a) providing the results of all pre-registered analyses, b) reporting any unregistered analyses separately and labeling them as such to differentiate between confirmatory and exploratory analyses, and c) detailing any known departures from the pre-registered plan. During design and reporting, we will also consider guidelines from the CONSORT (Consolidated Standards of Reporting Trials) statement, which is an “evidence-based, minimum set of recommendations for reporting randomized trials.”12
Review Process
Our experimental research will adhere to ACE’s research review process that we established to help improve the rigor of our research efforts. It entails both internal and external review, where each project will: (i) be assigned an internal primary critic, (ii) undergo multiple reviews by our Research Editor, (iii) receive feedback from ACE’s research team, and (iv) be circulated for external review when necessary. Prior to data collection, we will also openly circulate our study designs (either via our website or on the OSF) to allow for feedback from researchers and members of the animal advocacy community. We also value piloting relevant aspects of our research protocols ahead of time, as well as engaging in a “soft” (i.e., limited) launch of data collection prior to a full launch when feasible.
Partnerships
We appreciate that there is richness in collaborations and are open to engaging in partnerships with academics, advocacy organizations, and industry partners, particularly for field and/or lab studies. We foresee this being especially valuable when we can avoid a “too many cooks in the kitchen” scenario as well as steer clear of partnerships that lead to project stagnation or mission drift (i.e., an unproductive move away from the initial research goals). We hope to prioritize collaborations that are positioned to improve the efficiency and caliber of our research by enhancing external validity, streamlining staff time, and/or granting access to research subjects, materials, or data sources that might otherwise be out of our reach.
Ethical Conduct
We will work to adhere to ethical protocols (e.g., the Declaration of Helsinki) similar to those set by academic Institutional Review Boards, and indeed may even seek out external board review for some studies. In particular, we will pay close attention to: informed consent, participant risk, recruitment protocols, the use of deception, individuals incapable of giving consent including the use of children as research subjects, the voluntariness of inducements, identifiable personal information, data security, and participant debriefing. We are also committed to being transparent with disclosures and conflicts of interest as well as error correction, much as we do now by documenting mistakes as an organization.
Publishing
We may seek to publish our work in peer-reviewed academic journals. While peer review is valuable and can catch flaws, we strive to have other strong quality control measures in place before peer review, namely our open science approach and our review process. Our main reasoning for engaging in academic publishing is threefold: (i) the credibility value it brings by elevating the impression of ACE’s research; (ii) the opportunity it gives academics to build on our work to uncover further high-impact findings; and (iii) the opportunity it provides to increase the prominence of animal advocacy research within academia and in turn add legitimacy to human-animal studies research. Importantly, we recognize that it takes time to publish in these channels and so we are committed to sharing our findings in user-friendly formats (one-on-one, our website, preprints, creative/visual outputs, etc.) with our constituents in advance of this more formal, and often less accessible, type of dissemination.
We are also cognizant that academia’s incentive structure—resting on a largely publish or perish mentality for career advancement—has had downsides for research quality. Because we are designing our research culture from scratch, we have a great deal of agency in ensuring that we do not stray from our overarching goal of doing the most good by way of identifying highly effective opportunities to improve the lives of animals. We are committed, therefore, to moving beyond being incentivized by publishing to prioritizing a culture of truth finding, or more aptly useful truth finding.
Conclusion
As our program of experimental research begins to take shape, we look forward to learning from and sharing our work with others conducting research and engaging in advocacy in this field. We will be particularly eager to see how the findings can be used on the ground and will be vigilant about continually reassessing our approach (including the suitability of the parameters included herein) in an effort to maximize this likelihood.
Thank you to ACE Board Member S. Greenberg for his insights on many of the topics relating to research culture discussed in this post. Thank you also to Kieran Greig for contributing the Statistical Practices section of this blog.
Risk and Evidence of Bias in Randomized Controlled Trials in Economics
See also consideration of cause areas
Of note, duplicative research can be beneficial in some cases (e.g., replication studies).
Note that we refer to research in the singular here; however, it may be that recommendations cannot be generated from any one study, in which case we may look at these questions in relation to a series of studies (whether wholly internal or a mix of internal and external research). It is important to remember that “a single experiment (or even many experiments) can’t provide a general answer” (GiveWell blog).
The “build” in this case would see a study fit into a sequential program of research.
That is, fewer pre-registered independent and dependent variables and consequently a more streamlined analysis process. We do, however, foresee collecting other variables for exploratory (rather than confirmatory) purposes and so anticipate less simplicity in research in this regard.
Though it will not mitigate this concern, one related good practice is to compare a variety of leaflets/randomizations of parts of leaflets on an affordable platform such as Mechanical Turk before proceeding with one to test more thoroughly.
To be more specific, for descriptive statistics the most appropriate statistical methods will most accurately and clearly describe important features of the observed distributions. For instance, if a distribution is sufficiently skewed, then reporting informative quantiles is more appropriate than reporting the mean and standard deviation. Relatedly, for inferential statistics the most appropriate statistical method will most accurately and efficiently determine relevant characteristics of the pertinent underlying probability distributions.
For instance, p-values will be accompanied by the value of the relevant test statistic, the associated degrees of freedom and estimates of the effect size, probably in the form of a 95% confidence interval. Where applicable the alpha level used when conducting significance tests will be adjusted to account for multiple comparisons.
We will always commit to a specific analysis in a pre-analysis plan. If we deviate from that approach we will make that known and state reasons for the deviation.
Peter says
It would be nice to see more focus on research about whether individual interventions work or not, and trying to figure out what the effect size is. It doesn’t make sense to me to prioritize fine-tuning an intervention (e.g., studies comparing messaging) until we know whether the intervention itself works. Of course, this intervention test would be much more difficult and expensive, but I also think much more valuable.
Kathryn Asher says
Hi Peter, thanks for weighing in on this, it’s an important issue to consider indeed. Yes I agree about the importance of testing intervention effects, though mostly when there are sample size constraints. Otherwise, I tend to see value in an additional condition to allow for a within or between-intervention examination (in addition to testing for an intervention effect), particularly because of the benefits this brings for applying the findings outside of ACE.
Kevin Watkinson says
Hi, I’m pleased to hear about the Experimental Research Division.
After reading the article, I wondered more about the issue of foundational questions, and taking a step further back to contemplate how it is those questions are considered. In relation to a recent article from Sentience Institute they state that EA researchers generally supported three areas:
‘We attempt to categorize each piece of evidence into the side of the debate that we expect significant majority of EAA researchers agree it supports.[4] Where there is less agreement, or when the evidence’s direction depends largely on other questions in this document, it will be noted as pointing in an “unclear direction.” We also note when there seems to be significant majority agreement among EAA researchers on which side of the overall debate is most likely correct. Note that this does not mean it’s highly likely to be correct, just that it seems most compelling given the current evidence. The three questions with that level of agreement are:
• Individual vs. institutional interventions and messaging (favoring institutional interventions and messaging)
• Momentum vs. complacency from welfare reforms (favoring momentum from welfare reforms)
• Animal protection vs. environmental vs. human health focus (favoring animal protection focus)[5]’
https://www.sentienceinstitute.org/foundational-questions-summaries#individual-vs.-institutional-interventions-and-messaging
My question here would be how the researchers are viewing the issues through the ethical system they are using. As we know from annual surveys EA is primarily constituted by people who identify as utilitarian, and so I think we would need to weight certain perspectives in order to achieve a semblance of balance or neutrality, otherwise there can be an issue with objectivity.
“The distribution of responses regarding a stance on moral philosophy is extremely similar to the last survey. In 2015, 56% selected Consequentialism (Utilitarian), 22% No opinion or not familiar with these terms, 13% Non-utilitarian consequentialism, 5% Virtue Ethics and 3% Deontology. Among respondents, the distribution of philosophical stances has not noticeably changed.” http://effective-altruism.com/ea/1e1/ea_survey_2017_series_community_demographics/
So we can identify for instance that deontology takes up a very minor position in EA, but in my view it ought not follow that consideration for that view is 3% or less. In terms of the animal movement different perspectives need to be considered and a reasonable amount of consultation taking place in order to encourage diversity and inclusion, whilst it can help to fine tune different approaches and further bring people on board.
In relation to agreement between EA researchers, we could also consider the issues brought up in the following paper (http://onlinelibrary.wiley.com/doi/10.1111/j.1477-9552.2010.00266.x/abstract) where welfare considerations can lead to people giving up meat consumption. So we could consider (mentioned though not explored in the paper) how welfare campaigning might nullify the issue of switching out meat consumption for alternatives, essentially by helping people remain within the system of carnism, instead of encouraging those values that would cause them to leave. In this way I think a foundational consideration of what animal welfare means is an important and often neglected issue. It is something which Lee Hall differentiates as ‘welfare that deceives’ and ‘authentic welfare’ in the book ‘On Their Own Terms’. I believe there are reasonable grounds to consider why welfarism isn’t perhaps the most effective approach, whilst we would also need to consider that within different ethical systems campaigning for animals to be harmed in different ways (which is a different perspective) within a system of exploitation is not something which would take place, which I think further indicates EA leaning.
The key seems to be demonstrating how EA principles and thinking have been applied to different issues, particularly in relation to considering interventions and counterfactual considerations. Another issue in EA is where the desire for institutional messaging has impacted the vegan movement more broadly, particularly in the sense of undermining shared meanings, take for example ‘veganism’ or ‘animal rights’ where there is little clarity in the way these terms are used, and this has had an impact in the EA movement and in the broader animal movement, and it isn’t clear how this impact has been evaluated.
To utilise institutional messaging is fine. However, it seems strange to want to use reducetarian messaging in relation to a vegan movement (as part of a broader animal movement), in this way it could be better to compartmentalise rather than push this form of messaging across the movement. This is partly because institutions and social movements are different things. So it may be the case that reducetarians would be more effective at an institutionally speciesist level, but it seems awkward at the very least to do this at a vegan (social) movement level. I think this issue is neglected, and neither has it been thoroughly evaluated by the people who use it.
I further note how ACE supports groups and organisations which take the institutional approach, and subsequently appear to reconstitute veganism and animal rights into ‘strategies’, particularly in regard to CEVA, Vegan Strategist, Pro-Veg, all groups which avoid defining and explaining veganism as a position in opposition to animal exploitation, and this introduces a conflict within the movement, when instead a thorough consideration of the principles of EA ought to help avoid this type of issue. It also seems to me that part of the problem is rooted in the model of welfare (effective) and abolition (ineffective) that is used by most utilitarian EAs, and I think it leads to the oversimplification of certain approaches favoured within EA generally.
In order to address this issue I would like to suggest that ACE work collaboratively on a new model to use within the animal movement that necessarily takes a larger number of perspectives into account, and ensures more people are brought into the conversation. In this way the value of different approaches can more easily be taken into account, and movement functioning and effectiveness likely improved.
I would be happy to help with this in any way that I can.
Kathryn Asher says
Hi Kevin, many thanks for your comment and enumerating your thoughts so fully.
As I understand the main gist of your feedback, it sounds like you’re keen to see individuals whose approach to advocacy may be less informed by “pragmatism” (as I’ve heard it called) aide us in setting our research priorities. This is certainly a worthwhile point to consider and I appreciate your offer to help in this regard.
Many of the early research priorities identified thus far do tend to fall closer to the more pragmatic end of the continuum, but indeed we are open to considering research that comes from a more principled or foundational perspective. We mention, for example, activist mobilization and societal-level attitudinal change around speciesism as potential avenues. We must of course keep in mind the extent to which such matters can be answered by research, that is, whether they are matters of strategy rather than solely ideological or moral considerations. We’re sensitive to the fact that opposition to some forms of advocacy can also rest on avoiding any violation of a moral imperative, quite apart from concerns about effectiveness (which is, we recognize, highly subjective).
We certainly agree that considering a wide range of perspectives is beneficial for the movement and are open to ways we might be able to continue to make progress on this front. So I’d happily welcome thoughts from you and others on specific research questions we should think about on the topic you raise that are well-positioned to lead to high impact research with the potential for implementation on the ground as well as outcome measures we may want to consider that speak to different conceptions of effectiveness.
Kevin Watkinson says
Hi Kathryn, thanks for the response.
I am thinking about perspectives which are often considered less pragmatic and less effective because they are believed to be less flexible. I think this characterisation is often a little unfair: most people take various limitations into account, such as the non-vegan world scenario, as far as is possible and practicable, as well as the fact that different perspectives and approaches exist within the animal movement space. I believe, for instance, that ‘effectiveness’ is a matter of diversity rather than of trying to find out which particular approach works best. Few people are unconcerned by consequences, and if we are breaking issues down to welfare or abolition and saying welfare is more effective, that wouldn’t be sophisticated enough to make a reasonable judgement on effectiveness generally.
One of the things I quite regularly see in EAA is the idea that we need to be ‘strategic’ about what we do; instead, I think that ought to be replaced with ‘thorough consideration’ of what we do. For instance, if ACE is supporting a particular mainstream approach that will consequently dominate the animal movement space, either intentionally or otherwise, then that is an important area of study, and one which I believe has been neglected.
I believe the starting point for this issue is foundational, particularly in relation to welfare and how it can interfere with reducing the numbers of animals consumed. Secondly, it relates to institutional messaging, and to favouring the system of power that institutional messaging is built upon. I think this is part of the reason EAA has experienced little success with diversity and inclusion, and this merits further consideration.
I would also be particularly interested in finding out how pragmatism is reflected in different approaches, and how effective this is in relation to communication. For instance, where vegans are eating animals strategically, how does that affect how vegans are viewed, and how non-vegans understand the diet and lifestyle, incorporating issues with recidivism and the impact on a social justice framework?
In terms of specific questions I would ask two that cover quite broad areas:
The degree to which the abolitionist / welfare framework can be relied upon as a useful model for depicting the animal movement.
How reducetarianism impacts messaging that is geared toward social justice, particularly how it relates to speciesism, disablism, racism, sexuality, ageism, sexism, and class, and how these issues deviate from social norms and privilege. This includes paying attention to the way that terms such as ‘animal rights’ and ‘veganism’ have been reconstituted by the mainstream animal movement, and how that has impacted the grassroots animal rights movement that constructs its case on justice.
I would use these questions to better inform the evaluation process at ACE, particularly in relation to diversity and inclusion. I’ve said elsewhere that it is very difficult to decide which groups are most effective when different approaches are not thoroughly considered and accounted for, and the scope of consequences ought to be far broader than is often considered within EAA. I think the purpose of EAA is to consider a wide variety of different issues within the animal movement, and to be neutral within the cause. This would present a serious challenge to deciding which groups could be recommended.
We know, for instance, that carnists do carnism, and I don’t believe it follows that working outside this system whilst articulating a different view would be intrinsically ‘ineffective’. Resolving this issue would require work on foundational questions, which might consequently alter the culture within EAA, from favouring top-down institutional advocacy to giving more thought to ground-up approaches that are more akin to social movements. It might then follow that the advisory boards of EAA-related groups change to better reflect diversity, and that EAA researchers become more representative of different perspectives. This would be fundamental to EAA, and groups choosing not to do this would have to justify how that would be effective, rather than it being largely considered effective de facto.
In terms of ACE generating cash flow to top recommended groups, it wouldn’t necessarily need to do any of this work; most EAs seem to be satisfied with the work that ACE does, so in that sense perhaps there isn’t much motivation to expand or change it. However, it seems to me that, partly thanks to the Open Philanthropy Project, there is now an opportunity to more thoroughly evaluate ‘effectiveness’, and what it would mean to be an effective advocate or an effective organisation.
Some associated articles:
http://encompassmovement.org/celebrating-the-globalization-of-animal-advocacy/
https://animalcharityevaluators.org/blog/welfarists-or-abolitionists-division-hurts-animal-advocacy/
https://qz.com/829956/how-the-vegan-movement-broke-out-of-its-echo-chamber-and-finally-started-disrupting-things/
Kathryn Asher says
Hi Kevin, many thanks for this follow-up. I certainly appreciate your point about seeking out diverse perspectives on conceptions of what constitutes effectiveness and related topics. Thanks also for sharing your thoughts on potentially useful research questions and resources; it’s great to have your suggestions on this front.