2024 Evaluation Process
This page provides an overview of how we currently decide which charities to recommend out of the small group that makes it through the application and selection stages. For more details on earlier stages of the process, visit How We Evaluate Charities. To find details on our process from previous years, see the Process Archive.
How We Gather Information
Once a charity is selected for evaluation, we contact them and share the Charity Evaluation Handbook to explain the process, policies, and expectations. Over the next few months, we send charities a series of questions based on our charity evaluation criteria, study the materials submitted to us, and survey other materials published by or about the charity to develop our understanding of their work. We also send an engagement survey to the charity’s staff to help us understand the charity’s work environment.
We ask all charities under consideration questions related to the following topics:
- The activities of the organization, the intended outcomes of those activities, and the ultimate change for animals that they are trying to achieve.
- Evidence of the benefits that the organization’s work has brought about or will bring about for animals and the amount of money spent to achieve those benefits.
- Their historical revenue, expenditures, and staff size.
- Their current reserves, reserves policy, and reserves target.
- How they would adapt their operations and programs under different funding scenarios.
- The organization’s human resource policies and processes.
- The organization’s governance and structure.
How We Evaluate Charities
Using the information we collect, we assess charities on three criteria: Impact, Room for More Funding, and Organizational Health. For more details, please visit our charity evaluation criteria page.
To make assessments, we conduct research, consult experts, create quantitative models, produce qualitative analyses, and continue asking charities increasingly tailored and detailed questions until we have sufficiently resolved our uncertainties.
Once an initial draft of an assessment is ready, we go through a round of “red teaming.” Here, members of ACE’s evaluation team stress-test the arguments, claims, and decisions in the assessments made by their coworkers to detect errors and counterbalance any biases that other team members may hold. Additionally, the charities themselves have the opportunity to see intermediate versions of our analysis to question and disagree with the content before it is finalized.
How We Make Recommendation Decisions
Once assessments are complete, we consider all the information we have about each charity and explicitly compare them against others being evaluated in the same year.
Each member of the evaluation team scores each charity according to the five decision guidelines below on a scale of 1–3 (1 = comparatively weak, 2 = unclear/middling, 3 = comparatively strong), using the assessments as their guide. Then, each member uses those scores to decide on a final score for each charity on a scale of 1–7 (1 = strongly do not recommend, 4 = neutral, 7 = strongly recommend) based on whether they think ACE should recommend them. This is done independently and anonymously.
The evaluation team then comes together to share their scores and discuss why each charity should be recommended or not. Afterward, team members adjust their final score for each charity based on the discussion and submit a second set of scores independently and anonymously.
Finally, the team reviews the updated scores and arrives at final recommendation decisions via consensus. We define consensus as general agreement among the members of a group. It implies that while not everyone may fully agree with a decision, they are willing to accept it and support it because they believe it is the best option, or because they respect the group’s collective expertise.
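To make the structure of this two-round scoring process concrete, here is a minimal sketch in Python. The guideline names, the example scores, and the summarize_round helper are all hypothetical illustrations, not part of ACE’s tooling; in practice, the final decision comes out of the team’s consensus discussion rather than a formula.

```python
# Illustrative sketch of the two-round, independent-and-anonymous scoring
# structure described above. All names and numbers are hypothetical.
from dataclasses import dataclass
from statistics import median

# The five decision guidelines (see the list below), each scored 1-3.
GUIDELINES = [
    "theory_of_change",
    "risk_mitigation",
    "cost_effectiveness",
    "room_for_more_funding",
    "organizational_health",
]

@dataclass
class Ballot:
    """One evaluator's independent, anonymous scores for one charity."""
    guideline_scores: dict[str, int]  # each guideline scored 1-3 (1 = weak, 2 = unclear, 3 = strong)
    recommendation_score: int         # overall 1-7 (1 = strongly do not recommend, 7 = strongly recommend)

def summarize_round(ballots: list[Ballot]) -> dict:
    """Summarize one round of ballots ahead of the team discussion."""
    recs = [b.recommendation_score for b in ballots]
    return {
        "median_recommendation": median(recs),
        "spread": max(recs) - min(recs),  # a large spread flags disagreement worth discussing
    }

# Hypothetical first-round ballots for one charity from three evaluators.
round_one = [
    Ballot({g: 3 for g in GUIDELINES}, 6),
    Ballot({g: 2 for g in GUIDELINES}, 4),
    Ballot({**{g: 2 for g in GUIDELINES}, "theory_of_change": 3}, 5),
]
print(summarize_round(round_one))  # {'median_recommendation': 5, 'spread': 2}

# After the discussion, each evaluator independently submits a revised
# second-round ballot, and the final recommendation decision is reached
# by consensus rather than computed from the scores.
```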
Decision guidelines
We use the following guidelines to decompose our decisions into a predefined set of sub-judgments.1 They roughly correspond to our charity evaluation criteria:
- Does the charity have a strong theory of change, supported by evidence and reasoning showing that they can advance their desired outcomes for a large number of animals?
- Is the charity taking reasonable mitigating actions to address the risks and limitations of their programs?
- Do our cost-effectiveness assessments of the charity’s programs compare favorably to those of other programs at organizations we’ve evaluated that engage in similar work?
- Does the charity have sufficient room for more funding, and are we confident that their future plans will be effective?
- Does the charity have any organizational health issues substantial enough to undermine their future effectiveness and stability?
The relative importance of these guidelines is not fixed (e.g., cost-effectiveness can play a large role for one charity but a minimal role for another), but in general, the first three (which are more directly related to positive impact for animals) play the biggest role.2 Because these guiding questions are intended to cover all decision-relevant factors, we treat any other information as irrelevant. This allows us to base our decisions entirely on the merits of the charities. We explicitly do not take the following into account:
- Whether a charity was previously recommended.
- ACE’s past relationship with the charity.
- The potential impact of recommending/not recommending a charity on ACE’s reputation and public relations.
What do we publish?
Once decisions are made, we write a detailed review for each charity we choose to recommend. This allows donors and charities to transparently follow our reasoning. We write a summary review for charities we evaluate but do not recommend. The detailed reviews include an overview followed by sections explaining how well the charity performs on each of our evaluation criteria. The summary reviews include only an overview section. We also prepare supporting documents, such as Theory of Change tables for all charities and Financials and Future Plans spreadsheets for Recommended Charities.
We then share these detailed and summary reviews with charities for feedback and approval. Because our reviews rely on information that may be confidential, we sometimes make substantive changes in response to a charity’s feedback to protect private information. We also correct factual errors and adjust wording or emphasis without affecting the substance of the review.
If the charity agrees, we publish the detailed or summary review and approved supporting materials on our website and list the charity as either “Recommended” or “Evaluated.” If the charity does not agree to their review being published, we list them on our site as “Declined to be Reviewed/Published” and do not publish the review.
Finally, we award participation grants of at least $2,000 to all participating charities. These grants are not contingent on a charity’s decision to publish their review; we also award them to charities whose reviews we do not publish, provided the charity made a good-faith effort to engage with us during the evaluation process.
1. See Ploder (2024) for additional context considered in our decision-making.
2. The reason for taking this approach is that one of ACE’s guiding principles is to follow a rigorous process and use logical reasoning and empirical evidence to make decisions. While in some circumstances it can be a useful thought experiment to make guesses, we are unwilling to base our assessment of charities on cost-effectiveness analyses that are highly speculative.