2023 Evaluation Process
Overview
Each year, Animal Charity Evaluators (ACE) spends several months evaluating animal advocacy organizations to identify those that are able to do the most good with additional donations. This post describes the process that led to our 2023 charity recommendations.
Our 2023 evaluation process took place from June to November:
- June: We selected and invited charities to be evaluated.
- July to September: We gathered information from charities and drafted comprehensive and summary reviews.
- September: We made recommendation decisions and shared review drafts with charities.
- October: We addressed charities’ feedback and finalized reviews.
- November 8: We published our reviews and announced our Recommended Charities.
This process resulted in the publication of six comprehensive reviews, seven summary reviews, and an update to our list of Recommended Charities.
Fig. 1: Flowchart depicting ACE’s 2023 charity evaluation process
Consideration | Evaluation | Recommendation |
73 considered | 6 comprehensive reviews researched and written | 6 listed as Recommended Charities |
21 invited to participate | 8 summary reviews researched and written | 7 listed as Evaluated Charities |
15 agreed to participate | 1 review started but discontinued for unforeseen reasons | 1 withdrew from evaluation after viewing a draft of our review |
6 declined to participate | | |
Charity Selection
Because we have historically re-evaluated recommended charities every two years, our 2023 list began with 10 charities that were already scheduled for re-evaluation. At the time we sent invitations, one of those charities did not meet one of our eligibility criteria (the capacity to receive funds) due to circumstances outside of their control and ours; we hope to evaluate them in a future year. This year, we had the capacity to evaluate 15 charities, so our first step was to select an additional six.
Selection began with a list of 73 charities to consider. That list included charities that were not invited last year but were close to the top of our shortlist, charities that had requested to be evaluated and were expected to score well on our criteria based on an initial review, and the most promising Movement Grants applicants. Additionally, we considered charities that work in high-priority countries, target high-priority animal groups, and use high-priority interventions but are underrepresented in our current list of recommended charities. We then researched publicly available information about each of these 73 charities. We used a quantitative model to score each organization based on the countries they work in and the animal groups they target.1 Finally, the evaluation committee used an iterative discussion and voting process to prioritize the list of charities to invite.
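As an illustrative sketch only (the priority values, weights, and charity names below are hypothetical and do not reflect our internal model), a quantitative selection score of this kind could combine country and animal-group priorities as follows:

```python
# Illustrative sketch only: hypothetical priority values and weights,
# not ACE's internal selection model.

COUNTRY_PRIORITY = {"India": 0.9, "Brazil": 0.8, "United States": 0.6}
ANIMAL_GROUP_PRIORITY = {"farmed fishes": 0.9, "farmed chickens": 0.8, "farmed pigs": 0.6}

def selection_score(countries, animal_groups, country_weight=0.5, group_weight=0.5):
    """Average the priority of a charity's countries and animal groups,
    then combine the two averages into a single selection score."""
    country_score = sum(COUNTRY_PRIORITY.get(c, 0.3) for c in countries) / len(countries)
    group_score = sum(ANIMAL_GROUP_PRIORITY.get(g, 0.3) for g in animal_groups) / len(animal_groups)
    return country_weight * country_score + group_weight * group_score

# Example: rank two hypothetical charities by selection score.
charities = {
    "Charity A": (["India"], ["farmed chickens"]),
    "Charity B": (["United States"], ["farmed pigs"]),
}
ranked = sorted(charities, key=lambda name: selection_score(*charities[name]), reverse=True)
print(ranked)  # ['Charity A', 'Charity B']
```

In practice, scores like these only informed the shortlist; the evaluation committee's discussion and voting determined the final invitations.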
By the end of the selection process, we invited 21 charities to participate. Six charities declined to be reviewed (29% decline rate, compared to 19% last year). Unfortunately, one charity had to withdraw several weeks into the evaluation process, bringing our final cohort down to 14.
How We Gathered Information
After charities agreed to participate, we asked each of them to provide documentation about their programs, accomplishments, finances, and governance. Additionally, we distributed an engagement survey to each charity’s staff members (and, for some charities, volunteers), which helped us better understand their perspectives on the organization’s health. If we had any questions about their submissions, we worked with representatives from each charity to ensure that we had a clear understanding of their work. We then used this information to assess each charity against the criteria outlined in the next section.
Our Evaluation Criteria
We evaluated charities on four criteria similar to those we used in 2022: Impact Potential, Cost Effectiveness, Room for More Funding, and Organizational Health. However, in 2023, we updated the names of two of these criteria2 and refined the way we assess all four of them. Below, we outline this year’s approach and describe how it differs from the previous year. For more details on changes to our evaluation criteria in 2023, please visit our evaluation criteria page.
Impact Potential
This year, we refined the scoring framework we use to assess the impact potential of charities’ programs, building on the categories we used in 2021 (animal groups, countries, and interventions). We used the Scale, Tractability, and Neglectedness (STN) framework to score different types of animal groups, countries, and interventions based on their priority level; for countries, we also included an assessment of global influence.3 This year, we introduced synergy scores to account for the unique impact that a combination of factors can create.4 Using those scores and information supplied by each charity about how they allocate program funding, we arrived at a single impact potential score per charity, representing the expected impact of their programs as a whole. We also introduced an uncertainty score that reflects the degree of disagreement among our team members’ individual scores and the amount of existing research on each intervention’s impact potential.
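To make this structure concrete, here is a minimal, hypothetical sketch of how factor scores, a synergy multiplier, and a charity’s funding allocation could be combined into one impact potential score; the numbers, field names, and multiplier values are illustrative assumptions, not our actual scores:

```python
from dataclasses import dataclass

# Hypothetical factor scores on a common 0-1 scale; our actual values are not shown here.
ANIMAL_GROUP = {"farmed chickens": 0.8, "farmed fishes": 0.9}
COUNTRY = {"United States": 0.7, "Japan": 0.8}
INTERVENTION = {"corporate outreach": 0.8, "research": 0.6}
# Assumed synergy multipliers for specific (intervention, animal group, country) combinations.
SYNERGY = {("corporate outreach", "farmed chickens", "United States"): 1.1}

@dataclass
class Program:
    intervention: str
    animal_group: str
    country: str
    funding_share: float  # fraction of the charity's program spending

def program_score(p: Program) -> float:
    base = (INTERVENTION[p.intervention] + ANIMAL_GROUP[p.animal_group] + COUNTRY[p.country]) / 3
    return base * SYNERGY.get((p.intervention, p.animal_group, p.country), 1.0)

def impact_potential(programs: list[Program]) -> float:
    # Weight each program's score by its share of the charity's program funding.
    return sum(program_score(p) * p.funding_share for p in programs)

charity = [
    Program("corporate outreach", "farmed chickens", "United States", 0.7),
    Program("research", "farmed fishes", "Japan", 0.3),
]
print(round(impact_potential(charity), 2))
```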
Cost Effectiveness
To assess the cost effectiveness of charities’ programs, we considered their approaches to implementing interventions, their recent achievements, and the costs associated with those achievements. As with the Impact Potential criterion, we adopted a scoring framework to assess cost effectiveness. We began our analysis by comparing a charity’s reported expenditures over 12 months to their reported achievements in each intervention category during that time. We then weighted each program in proportion to its cost to arrive at an overall cost-effectiveness score for each charity. We also verified select achievements reported by charities using publicly available information, internal documents, media reports, and independent sources.
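For illustration, the cost-weighting step might look like the following sketch, with invented expenditure figures and per-intervention scores (our actual figures and scoring scale are not shown here):

```python
# Hypothetical sketch of a cost-weighted cost-effectiveness roll-up.
# Achievement costs and scores are invented for illustration only.

programs = [
    # (intervention, 12-month cost in USD, cost-effectiveness score for that intervention)
    ("corporate outreach", 400_000, 6.0),
    ("research",           100_000, 4.0),
]

def overall_cost_effectiveness(programs):
    """Weight each program's cost-effectiveness score by its share of total spending."""
    total_cost = sum(cost for _, cost, _ in programs)
    return sum(score * (cost / total_cost) for _, cost, score in programs)

print(overall_cost_effectiveness(programs))  # 6.0 * 0.8 + 4.0 * 0.2 = 5.6
```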
Room for More Funding
The Room for More Funding (RFMF) criterion aims to assess whether charities can absorb and effectively utilize the funding that a new or renewed recommendation may bring in. We asked charities to estimate their projected revenue and plans for expansion over the next two years, assuming that their ACE recommendation status and the amount of ACE-influenced funding they receive would stay the same. We also assessed our level of uncertainty about charities’ plans for expansion if they were to receive funds beyond their predicted income. We used those assessments to estimate their RFMF, adding funding to replenish their reserves where applicable.
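As a simplified, hypothetical sketch (the figures and the exact formula are illustrative assumptions, not our published methodology), an RFMF estimate of this kind could be computed as the gap between planned expansion spending and projected revenue, plus any reserve replenishment:

```python
# Hypothetical sketch of a room-for-more-funding estimate.
# Figures are invented; the actual calculation may differ.

def room_for_more_funding(planned_spending, projected_revenue, reserve_gap=0.0):
    """Estimate how much additional funding a charity could absorb over two years:
    the shortfall between planned expansion spending and projected revenue,
    plus any amount needed to replenish reserves (where applicable)."""
    shortfall = max(planned_spending - projected_revenue, 0.0)
    return shortfall + reserve_gap

# Example: $1.2M of planned spending, $0.9M of projected revenue, $0.1M to rebuild reserves.
print(room_for_more_funding(1_200_000, 900_000, reserve_gap=100_000))  # 400000.0
```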
Organizational Health
In this criterion, we examine the information provided by charity leadership, the results of our engagement survey, transparency regarding leadership and governance, and any unsolicited testimonials or whistleblower reports we receive. However, ACE is not a watchdog organization, and we lack the expertise and capacity to investigate whistleblower reports; instead, we aim to determine whether a charity has organizational issues that negatively impact staff productivity and wellbeing. Like last year, we used an internal scoring framework5 to facilitate detailed discussions about the organizational health of the charities under review and to compare charities with one another. In the end, we noted whether we had any concerns about each charity’s organizational health.
How We Made Recommendation Decisions
To select this year’s recommended charities, ACE’s Programs team gathered each charity’s criterion assessments and scores and ranked the charities. Like last year, we used an iterative ranking approach that treats disagreement between committee members as inevitable and expected, rather than as a roadblock (e.g., needing to decide what to do in the case of a split vote). In other words, committee members do not need to reach a consensus to generate a ranked list of charities. We outline the stages below, followed by a brief sketch of the scoring roll-up.
- Stage 1: Evaluation committee members scored each charity on a scale of 1–7 (1 = strongly reject, 7 = strongly recommend) using the criterion scores/assessments as their guide. This was done independently and anonymously. Individual team member scores were added to a combined scoring spreadsheet, where charities were ranked based on the scores they received.
- Stage 2: Evaluation committee members discussed their scores and made their case about why they felt a given charity should be recommended or not.
- Stage 3: Evaluation committee members were encouraged to adjust their original recommendation scores asynchronously (based on others’ perspectives) and submit a second set of scores independently and anonymously. Again, adjusted individual scores were reflected in a combined scoring spreadsheet.
- Stage 4: Evaluation committee members discussed the final rankings and decided which charities to recommend.
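The sketch below illustrates the roll-up used in Stages 1 and 3: each committee member’s scores on the 1–7 scale are averaged per charity, and the charities are ranked by that average (the names and scores here are placeholders, not the actual votes):

```python
# Minimal sketch of the scoring roll-up described above.
# Placeholder names and scores on the 1-7 scale (1 = strongly reject, 7 = strongly recommend).

final_scores = {
    # charity: second-round scores from the five committee members
    "Charity X": [6, 7, 6, 6, 7],
    "Charity Y": [3, 3, 4, 5, 3.5],
}

def rank_charities(scores):
    """Average each charity's committee scores and sort from highest to lowest."""
    averages = {name: sum(vals) / len(vals) for name, vals in scores.items()}
    return sorted(averages.items(), key=lambda item: item[1], reverse=True)

for name, avg in rank_charities(final_scores):
    print(f"{name}: {avg:.1f}")
```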
The table below contains the final scores that informed where the team drew the cut-off for this year’s recommended charities.
Charity Name | Researcher A | Researcher B | Researcher C | Researcher D | Researcher E | Average |
Charity 1 | 6 | 7 | 6 | 6 | 7 | 6.4 |
Charity 2 | 6 | 6 | 5 | 6 | 6.5 | 5.9 |
Charity 3 | 6 | 6 | 5 | 6 | 6 | 5.8 |
Charity 4 | 6 | 5 | 3 | 6 | 5 | 5.0 |
Charity 5 | 6 | 4.5 | 3 | 4.5 | 6 | 4.8 |
Charity 6 | 5 | 5 | 4 | 4.5 | 5 | 4.7 |
Charity 7 | 4 | 4 | 4.5 | 3.5 | 3 | 3.8 |
Charity 8 | 3 | 3 | 4 | 5 | 3.5 | 3.7 |
Charity 9 | 3.5 | 4 | 3 | 3.5 | 4.5 | 3.7 |
Charity 10 | 5 | 3 | 3 | 3 | 3 | 3.4 |
Charity 11 | 3 | 3 | 3 | 3 | 3 | 3.0 |
Charity 12 | 3 | 3 | 3 | 3 | 1 | 2.6 |
Charity 13 | 2 | 2 | 3 | 3 | 2 | 2.4 |
Charity 14 | 1 | 2 | 2 | 2 | 1 | 1.6 |
Once decisions were made, we finalized the draft reviews and sent them to the charities for feedback and approval. Charities were given the opportunity to request edits, including removing confidential information or correcting factual errors. That said, all of our reviews represent our own understanding and opinions, which are not necessarily those of the charities reviewed. This year, 13 of the 14 charities for which we drafted comprehensive or summary reviews agreed to have them published.
Recommended Charities
Out of the 14 charities that participated in our 2023 evaluation process, six were selected as Recommended Charities, joining the five charities that were recommended in 2022. These charities will retain their recommendation status for two years. See our 2023 recommendations announcement to learn about our new recommended charities!
Participation Grants
We offer our sincere gratitude to each of the charities we evaluated this year. Participating in our evaluation process takes time and energy, and we are grateful for their willingness to be open with us about their work. To show our appreciation, we will award participation grants of at least $2,500 to all charities that participated in our 2023 evaluation process. These grants are not contingent on a charity’s decision to publish their review; we also award participation grants to charities whose reviews we do not publish, provided they made a good-faith effort to engage with us during the evaluation process.
1. As we do not have access to charities’ exact expenditure breakdowns during the selection process, we use scores only as a guide to supplement our qualitative judgments about their interventions and programs.
2. The Impact Potential criterion used to be named Programs, and the Organizational Health criterion used to be named Leadership and Culture.
3. Our methodology for scoring countries was inspired by Mercy For Animals’ Farmed Animal Opportunity Index, using most of their proxies for Scale, Tractability, and Global Influence. We also considered Neglectedness as a factor.
4. For example, we had synergy scores for combinations such as corporate outreach to help farmed chickens in the United States, research to help farmed fishes in Japan, and 149 other combinations of factors (intervention, animal group, and country).
5. This year, we shared a copy of the organizational health scores with each charity’s leadership, but we did not publish them to protect confidentiality.