Faunalytics
Recommended Charity
Review Published: 2023
Animals Need You. You Need Data.
What does Faunalytics do?
Faunalytics is a U.S.-based organization that connects animal advocates with information relevant to advocacy. Their work mainly involves conducting and publishing independent research, working directly with partner organizations on various research projects, and promoting existing research and data for animal advocates through their website’s content library.
2022 revenue: $1,207,709
Staff size: 15 (including full-time staff, part-time staff, and contractors)
How does Faunalytics create change for animals?
After analyzing Faunalytics’ programs and expenditures, we assess that about 96% of their expenditures on programs go toward high-priority animal groups (farmed animals and wild animals), at least 72% go toward high-priority countries (U.S., China, and Brazil), and 100% go toward high-priority interventions (research and skill and network building). Their work conducting research to help farmed animals and wild animals in the United States and China has particularly high impact potential. We think that out of all of Faunalytics’ achievements, the addition of new content to their research library, their Global Animal Slaughter Statistics and Charts, and their intervention research into 15 types of animal advocacy are particularly cost effective because they combine high-priority interventions with robust implementation.
See our 2023 Evaluation Process web page for information about our charity selection, evaluation methods, and decision-making process.
How is Faunalytics’ organizational health?
We did not detect any concerns in Faunalytics’ leadership and organizational culture. Based on our assessment, they appear to have strong policies and processes in place and high levels of staff engagement. We also positively noted that since their last evaluation, they have developed an algorithm for setting salaries in an objective, transparent manner. They have also improved their employee benefits package and are continuing to assess how they can further improve employee health and retirement benefits.
How will Faunalytics use your donation to help animals?
With additional funding, Faunalytics would hire key personnel and offer extra support to organizations making a research-based strategic shift. We estimate that they have sufficient reserves. Based on our assessment of their plans to use unexpected funding, we believe that overall, Faunalytics has room for roughly $0.4M of additional funding in 2024 and $0.6M in 2025. These two figures represent funding beyond their projected revenues, meaning that we believe they could effectively use a total revenue of roughly $1.6M in 2024 and $2.0M in 2025. By supporting Faunalytics, you play a crucial role in helping them achieve their goals and improve the lives of farmed and wild animals around the world.
See more details in Faunalytics’ Room for More Funding spreadsheet.
Introduction
At Animal Charity Evaluators (ACE), we provide comprehensive reviews of all the organizations we recommend after conducting our yearly charity evaluations. During the evaluation period, our researchers thoroughly analyze publicly accessible information about each organization. Additionally, we ask participating organizations for supplemental materials and information to aid our assessments and help us identify the charities to recommend.
This review is based on our assessment of Faunalytics’ performance on ACE’s four charity evaluation criteria. Each section of the review focuses on a different criterion: (i) Impact Potential, an overview of the charity’s programmatic work and an assessment of its impact potential; (ii) Cost Effectiveness, an analysis of the charity’s recent expenditures and achievements; (iii) Room for More Funding, an overview of the charity’s future plans and an estimate of how much additional funding they can effectively use in 2024 and 2025; and (iv) Organizational Health, an assessment of whether there are any management or governance issues substantial enough to affect the charity’s effectiveness and stability. Each of the four sections is divided into these subsections: Introduction, Our Method, Limitations of Our Method, Our Analysis, and Our Assessment of the charity in that criterion. Finally, we conclude with a summary of why we recommend this charity based on our evaluation.
Impact Potential: How promising are Faunalytics’ programs?
With this criterion,1 we assess the impact potential (IP) of a charity’s programs without considering their specific program achievements. During our assessment, we analyze the groups of animals the charity’s programs target, the countries where they take place, and the intervention types they use. We also examine how the charity allocates expenditures among different animal groups, countries, and interventions. A charity that performs well on this criterion has programs with great potential to reduce animal suffering or improve animal wellbeing. The key aspects that ACE examines when evaluating a charity’s programmatic work are discussed in detail below.
ACE characterizes promising programs as those that (i) target high-priority animal groups, (ii) work in high-priority countries, and/or (iii) use interventions that work toward high-priority outcomes. We used a version of the Scale, Tractability, and Neglectedness framework to score the impact potential of charities’ programmatic work in three categories: animal groups, countries, and interventions. Specifically, we calculated:
- Animal-relative scores to assess the expected impact of targeting different animal groups relative to each other
- Country-relative scores to assess the expected impact of working in different countries relative to each other
- Intervention-relative scores to assess the expected impact of using different interventions relative to each other
- Synergy scores to assess the expected impact of using specific combinations of animal groups, countries, and interventions together
For each animal group, country, and intervention, we assigned an impact potential score (IP score) on a 1–7 scale. For each animal group and intervention, we also assigned an uncertainty score on a 1–7 scale, which accounts for our team’s confidence in the IP score. We designate our overall impression as “high priority” when the IP score is equal to or greater than the median (of all animal group scores, country scores, and intervention scores, as applicable) and as “high uncertainty” when the uncertainty score is equal to or greater than the corresponding median. Scores below the median were categorized as “moderate priority” and “moderate uncertainty.”
We asked charities to estimate the percentage of their annual programmatic expenditures (i.e., non-overhead) allocated toward different categories of animal groups, countries, and interventions. The final IP score for each charity is the average of the four scores mentioned above (the animal-relative score, the country-relative score, the intervention-relative score, and the synergy score) weighted by those percentages. This final IP score represents ACE’s assessment of the impact potential of the charity’s collective programs without considering their specific program achievements. Similarly, we also arrived at a final uncertainty score for each charity.
We designate our overall impression of each charity’s impact potential as “high IP” when the final IP score is equal to or greater than the median (of all 2023 charities under evaluation) and as “high uncertainty” when the final uncertainty score is equal to or greater than the median (of all 2023 charities under evaluation). Scores below the median were categorized as “moderate IP” and “moderate uncertainty.”
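As a concrete illustration of this aggregation, the short Python sketch below weights hypothetical per-program component scores by expenditure shares, averages the four components into a final IP score, and applies the median threshold. All figures (program shares, component scores, and the cohort used for the median) are invented for illustration and do not come from our scoring spreadsheets; the exact aggregation in our model may differ in detail.

```python
from statistics import median

def weighted_average(scores, weights):
    """Expenditure-weighted average of per-program scores (weights sum to 1)."""
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical per-program scores (1-7 scale) for a charity with two programs,
# and the share of programmatic spending each program represents.
expenditure_shares  = [0.7, 0.3]
animal_scores       = [5.0, 6.0]
country_scores      = [5.5, 6.5]
intervention_scores = [5.0, 4.5]
synergy_scores      = [6.0, 5.5]

# Weight each of the four component scores by expenditure shares, then average
# the four weighted components to get the final IP score.
components = [animal_scores, country_scores, intervention_scores, synergy_scores]
final_ip = sum(weighted_average(c, expenditure_shares) for c in components) / len(components)

# "High IP" means the final score is at or above the median of the (hypothetical)
# cohort of charities evaluated that year.
cohort = [4.2, 4.9, 5.5, 6.1, final_ip]
impression = "high IP" if final_ip >= median(cohort) else "moderate IP"
print(round(final_ip, 2), impression)
```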
Prioritizing animals
We used the Scale, Tractability, and Neglectedness (STN) framework to score 16 animal groups relative to each other. By using this framework, we aim to prioritize groups of animals who are affected on a large magnitude, where there appear to be tractable solutions to improve their situation, and whose situation is relatively neglected in animal advocacy.
For Scale, we used the following proxies:
- Global animal population data. Most of the data for farmed animal groups was extracted from the Food and Agriculture Organization of the United Nations’ statistics website (FAOSTAT) and from other sources for other animal groups. We note every source in the scoring spreadsheet.
- Welfare range composite score. This score is an average of three scores: Rethink Priorities’ Welfare Range Estimates, which were adjusted for ACE;2 an egalitarian score;3 and the average of our Programs team members’ welfare range scores. We selected one default animal species for scoring each animal group’s welfare range, making sure that each default species was one of the eleven species included in Rethink Priorities’ welfare range project.
- Typical welfare situation score. This score aims to represent the welfare situation each animal group typically experiences. This score is the average of our Programs team members’ individual scores for the typical welfare situation of each animal group. Factors we considered include the typical space per individual, typical lifespan vs. potential natural lifespan, and the main types of welfare issues.
Our Programs team members individually scored Tractability, defined as “our understanding of the problem and its solutions,” and Neglectedness, defined as “the amount of global funding going toward each animal group.” We then averaged Scale, Neglectedness, and Tractability scores to calculate an IP score for each animal group.
Additionally, we calculated an uncertainty score for each animal group, reflecting the standard deviation of team members’ scores.
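To make the scoring just described concrete, the minimal sketch below averages Scale, Tractability, and Neglectedness into an animal-group IP score and uses the spread of team members’ individual scores as the uncertainty measure. The numbers are hypothetical, and the raw standard deviation shown here would still need to be mapped onto the 1–7 scale used in the tables below; that mapping is not shown.

```python
from statistics import mean, pstdev

def animal_ip_score(scale, tractability, neglectedness):
    """IP score for an animal group: the average of its three STN scores (1-7)."""
    return mean([scale, tractability, neglectedness])

# Hypothetical STN scores for one animal group. In practice, Scale is itself
# built from population data, a welfare range composite score, and a typical
# welfare situation score, as described above.
ip = animal_ip_score(4.0, 6.0, 5.0)

# The uncertainty score reflects disagreement among team members' individual scores.
team_scores = [4.5, 5.0, 5.5, 5.0]
uncertainty = pstdev(team_scores)

print(round(ip, 1), round(uncertainty, 2))
```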
For more details on how we currently prioritize animals, see the animal-relative scores spreadsheet.
Prioritizing countries
The countries and regions in which a charity operates can affect the impact potential of their work. In the case of farmed animal organizations,4 we used the STN framework to score 196 countries relative to each other. By using this framework, we aim to prioritize countries with relatively large animal agriculture industries, few other charities engaged in similar work, and where animal advocacy is likely to be feasible and have a lasting impact. Additionally, we consider a country’s Global Influence as a fourth factor in prioritizing countries.
Our method of scoring countries was inspired by Mercy For Animals’ Farmed Animal Opportunity Index (FAOI), using most of their proxies for Scale, Tractability, and Global Influence. We also considered Neglectedness as a factor.
For Scale, we used the following proxies:
- Current farmed animal population in the country. We extracted data on farmed animal populations from FAOSTAT.
- Projected farmed animal population in 2050. These are based on the FAO’s projections for farmed fishes5 and land animals.6 We include these projections to account for the importance of advocacy in countries where animal agriculture will likely be widespread in the near future, even if it is relatively limited at present.
For Tractability, we used the following proxies:
- Gross National Income per capita. This is the dollar value of a country’s annual income divided by its population, according to data from The World Bank.
- Lack of corruption. The Corruption Perception Index by Transparency International ranks 180 countries and territories around the world by their perceived levels of public sector corruption.
- Giving Index. The World Giving Index is an annual report published by the Charities Aid Foundation, using data gathered by Gallup. It ranks over 140 countries according to how charitable they are.
- Innovation Index. The Global Innovation Index is an annual ranking of countries by their capacity for and success in innovation, published by the World Intellectual Property Organization.
For Global Influence, we used the following proxies:
- Elcano Global Presence Index. This index is an annual measurement of the international influence and reach of 150 countries based on three factors: economic presence, military presence, and soft presence.
- Meat trade. This is the aggregate number of tonnes of meat imported and exported, according to FAOSTAT’s Food Balances Portal.
- Live trade. This is the aggregate number of live animals imported and exported, according to FAOSTAT’s Food Balances Portal.
For Neglectedness, we calculated the ratio of the human population to the number of farmed animal organizations in each country. Specifically, we compared our own data on the number of farmed animal organizations (excluding farmed animal sanctuaries) working in each country to the human population (in millions) of that country.
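The Neglectedness proxy for countries is a simple ratio, sketched below with illustrative figures rather than the actual counts used in the country-relative scores spreadsheet. The resulting ratios would still be normalized onto our 1–7 scale, which is not shown here.

```python
def neglectedness_ratio(human_population_millions, farmed_animal_orgs):
    """People (in millions) per farmed animal organization (sanctuaries excluded).
    A higher ratio indicates a more neglected country."""
    return human_population_millions / farmed_animal_orgs

# Hypothetical country: 50 million people and 5 farmed animal organizations.
print(neglectedness_ratio(50, 5))  # 10.0 people-millions per organization
```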
For more details on how we currently prioritize countries, see the country-relative scores spreadsheet.
Prioritizing interventions
We categorized the interventions animal advocacy charities use into 26 types and the main outcomes they work toward into eight types. Using the STN framework, we calculated different IP scores for 77 combinations of interventions and expected outcomes. For each combination, our Programs team members individually scored:
- Scale in the short term, defined as the amount of animal suffering this intervention can relieve in the short (0–5 years) and medium term (5–100 years) relative to other interventions working toward the same outcome
- Scale in the long term, defined as how much this intervention can create systemic change.
- Tractability, defined as how easy it is for this intervention to cause each outcome.
- Neglectedness, defined as the amount of funding going toward this intervention relative to other interventions.
Each team member assigned a percentage weight to the above four factors. We then averaged these scores and percentage weights to calculate an IP score for each combination of interventions and expected outcomes.
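One plausible reading of this averaging step is sketched below: each team member combines their four factor scores using their own percentage weights, and the members’ results are then averaged into the combination’s IP score. The scores and weights are invented for illustration.

```python
from statistics import mean

def member_score(factor_scores, factor_weights):
    """One team member's IP score for an intervention-outcome combination:
    their four factor scores (1-7) combined using their own percentage weights.
    Factor order: Scale (short term), Scale (long term), Tractability, Neglectedness."""
    assert abs(sum(factor_weights) - 1.0) < 1e-9
    return sum(s * w for s, w in zip(factor_scores, factor_weights))

# Hypothetical scores and weights from two team members for one combination.
member_a = member_score([5, 6, 4, 6], [0.25, 0.35, 0.20, 0.20])
member_b = member_score([4, 5, 5, 5], [0.30, 0.30, 0.20, 0.20])

# The combination's IP score averages the members' weighted scores.
print(round(mean([member_a, member_b]), 2))
```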
In line with our guiding principle of following empirical evidence and logical reasoning, we used existing research to inform our assessments and explain our thinking about the impact potential of different interventions. We compiled research about the effectiveness of each intervention using information from our research newsletter, research library, and research briefs.
Additionally, we calculated an uncertainty score for each combination of interventions and expected outcomes. These scores factor in the standard deviation of team members’ scores, the number of relevant publications on the intervention’s effectiveness, and the quality of those publications. We determined each publication’s quality based on its peer-review status, open-access availability, publication date (ideally 2020 or later), transparency in methodology, and whether it was a meta-analysis, meta-review, or systematic review (if applicable).
For more details on how we currently prioritize interventions, see the intervention-relative scores spreadsheet.
Assessing synergy
This year, we introduced synergy IP scores to account for the unique impact that a combination of factors can create. These scores represent the impact of a specific combination of factors (intervention, animal group, and country) and aim to capture a dimension of charities’ impact that is different from the one reflected in the animal-relative, country-relative, and intervention-relative scores. With the synergy scores we attempt to analyze charities’ work in a way that is (to some extent) sensitive to each charity’s particular context as well as consistent across charities.
We identified all different “animal-country” combinations and “intervention-country” combinations of all 2023 charities under evaluation, and we produced the following scores:
- Animal-country scores (if applicable). Our Programs team members scored the impact potential of helping a specific animal group in a specific country. To inform our scores, we considered the animal population (current and projected for 2050) in each country.
- Intervention-country scores (if applicable). Our Programs team members scored the impact potential of using a specific intervention type in a specific country. To inform our scores, we considered specific tractability proxies depending on the intervention type.7
We then created a synergy IP score for each synergy combination (i.e., intervention-animal-country) by averaging the corresponding animal-country score and intervention-country score. For more details on how we scored synergy impacts, see the synergy scores spreadsheet.
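The synergy calculation itself is a simple average, as the sketch below shows with hypothetical inputs.

```python
def synergy_ip_score(animal_country_score, intervention_country_score):
    """Synergy IP score for an intervention-animal-country combination:
    the average of the corresponding animal-country and intervention-country scores."""
    return (animal_country_score + intervention_country_score) / 2

# Hypothetical example: the relevant animal-country score is 6.2 and the
# relevant intervention-country score is 5.8.
print(synergy_ip_score(6.2, 5.8))  # 6.0
```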
In this criterion, we assess the impact potential (IP) of charities’ programs without considering their implementation or achievements. We address this limitation in the Cost Effectiveness criterion, where we analyze charities’ program achievements since 2022.
Due to the lack of available data for STN proxies and our limited capacity to gather data, we were unable to assess the effectiveness of interventions and animal groups based solely on data and without any subjective scoring by ACE team members, as we did for countries. The intervention-relative scores are based on ACE team members’ scores. In contrast, the animal-relative scores use a hybrid approach, incorporating ACE team member scores for tractability, neglectedness, and some scale proxies. For other scale proxies, we used animal population data and Rethink Priorities’ adjusted data on welfare ranges, each with its own limitations.8 Because we assigned numerical values to non-numerical data, we advise caution when interpreting results, as they can appear more objective than they are.
Due to limited capacity, we decided to estimate the welfare range of each animal group using only one species. Rethink Priorities recommended a more complex method that involves population-weighted averages of welfare ranges for categories of multiple animal species (e.g., wild animals). Although we did not use Rethink Priorities’ suggested method this year, we may implement a refined version of it in the future if we expect it would significantly improve the accuracy of our estimates.
We based the country-relative scores entirely on data, but unfortunately, not all the data we intended to include was available. Our original plan for this year was to use the amount of farmed animal advocacy funding per human population as a proxy for neglectedness in different countries. However, due to a lack of reliable data on the amount of farmed animal advocacy funding in each country, we instead used the number of farmed animal organizations in the country as a proxy. The data we used for the country-relative scores also have limitations, as they are estimations made by different institutions using various methods. Another limitation of our country-scoring method is that it only applies to farmed animal advocacy. We do not have country scores to evaluate programs that aim to support wild animals or other animal groups. In such cases, we scored countries as “n/a” and excluded the country factor from our analysis.
ACE team members used empirical evidence when available to score interventions and synergy impacts; however, evidence was inconsistent across interventions. For example, we found limited research on the effects of corporate litigation or funding.
We are particularly uncertain about the long-term impact of interventions, which is plausibly what matters most.9 The potential number of animals affected increases over time due to an accumulation of generations. Thus, we expect that the long-term impacts of an action will likely affect more animals than the short-term impacts of the same action. This year, we included some considerations of long-term impact in our assessment of each intervention type. However, we remain highly uncertain about the specific long-term effects of each intervention. Due to this uncertainty, our reasoning about the potential impact of each charity (including our diagrams) tends to emphasize short-term effects.
Our Analysis of Faunalytics’ Impact Potential
Animal groups
Faunalytics’ programs focus primarily on helping farmed animals, which we assess as a high-priority cause area. Some of their work aims to help companion animals, wild animals, animals in labs, and animals in entertainment.
Faunalytics notes that while their Original Research program primarily focuses on farmed animals, their Research Library covers a wide range of animal issues and topics. Although study summaries related to farmed animals, veganism/vegetarianism, and effective advocacy strategies make up the majority of their library content, they also cover additional topics to inform the broader movement.
Fig. 1: Faunalytics’ spending toward each animal group
In the table below, we report for each animal group our scores (on a 1–7 scale) for Scale, Tractability, and Neglectedness, as well as the general IP score and the uncertainty score. We also provide our overall impression of each animal group based on the latter two scores. For more details on how we scored animal groups, see the Prioritizing animals section.
Animal group | Scale | Tractability | Neglectedness | IP Score | Uncertainty | Overall impression |
Farmed animals (general) | 4.4 | 5.8 | 5.3 | 5.1 | 2.7 | High priority, moderate uncertainty |
Companion animals | 3.1 | 6.3 | 1.3 | 3.3 | 5.7 | Moderate priority, high uncertainty |
Wild animals | 7 | 2 | 7 | 5.7 | 7 | High priority, high uncertainty |
Animals in labs | 2.7 | 5.3 | 2.8 | 3.5 | 1.8 | Moderate priority, moderate uncertainty |
Animals in entertainment | 1 | 6.3 | 2.5 | 2.8 | 6.9 | Moderate priority, high uncertainty |
Countries
Faunalytics’ headquarters are currently located in the U.S. They do not have any subsidiaries.
Faunalytics’ programs target animals in the U.S., China, and Brazil. Some of their work is not directed toward any specific country.
Fig. 2: Faunalytics’ spending toward each country
In the table below, we report our scores (on a 1–7 scale) for Scale, Tractability, Global Influence, and Neglectedness, as well as the IP score for each country where Faunalytics runs programs to help farmed animals. We also provide our overall impression of each country based on the IP score. For more details on how we scored countries, see the Prioritizing countries section.
Country | Scale | Tractability | Global Influence | Neglectedness | IP Score | Overall impression |
United States | 5.5 | 6.6 | 6.9 | 4.1 | 5.8 | High priority |
China | 7 | 5.3 | 6.6 | 6.5 | 6.4 | High priority |
Brazil | 5.3 | 4.8 | 6.1 | 5.2 | 5.3 | High priority |
Interventions
Faunalytics uses two intervention types to increase knowledge and skills for animal advocacy: research, and skill and network building.
Fig. 3: Faunalytics’ spending toward each intervention
We use theory of change diagrams to communicate our interpretation of how a charity creates change for animals through interventions and outcomes. It is important to note that these diagrams are not complete representations of real-world mechanisms of change. Rather, they are simplified models that ACE uses to represent our beliefs about mechanisms of change. For the sake of simplicity, some diagrams may not include relatively small or uncertain effects.
Fig. 4: Faunalytics’ theory of change diagram
In the table below, we report for each intervention-outcome combination our scores (on a 1–7 scale) for Scale (short term), Scale (long term), Tractability, and Neglectedness, as well as the general IP score and the uncertainty score. We also provide our overall impression of each intervention type based on the latter two scores. For more details on how we scored interventions, see the Prioritizing interventions section.
Intervention | Outcome | Scale (short term) | Scale (long term) | Tractability | Neglectedness | IP Score | Uncertainty | Overall impression |
Research | Increased knowledge/skills for animal advocacy | 5.0 | 5.8 | 4.5 | 6.0 | 5.3 | 4.1 | High priority, moderate uncertainty |
Skill and Network Building | Increased knowledge/skills for animal advocacy | 4.8 | 5 | 4.5 | 4.8 | 4.8 | 5 | High priority, high uncertainty |
Research on the impact potential of interventions
Below, we summarize the most relevant research on the effectiveness of each of these intervention types.
Research
ACE’s 2021 research brief on conducting animal advocacy research suggests that it is a promising intervention, especially when considering its potential impact in the longer term. However, our confidence in the short-term effects of this intervention is low due to the lack of empirical evidence about the extent to which animal advocates use research results to prioritize and implement their work. Additionally, we note that the impact of research can vary significantly, with some research projects being far more influential than others. Researchers’ rigor seems to be a key factor in projects’ impact.
A 2022 essay argues that the research, measurement, and evaluation of farmed animal advocacy is severely underfunded, receiving only 3% of the total funding for improving the lives of farmed animals.10 The author largely attributes this to lack of knowledge about intervention effectiveness within the animal advocacy movement.
Skill and Network Building
The National Council of Nonprofits argues that capacity building enables organizations to develop the competencies and skills to make their team more effective and sustainable, thus increasing their potential to fulfill their mission and create change.11 They also suggest that networks can be especially effective for capacity building because they catalyze innovation, improve communications, reduce duplication of past mistakes, and spread ideas faster and more efficiently than other capacity-building approaches.12
ACE’s 2018 report on the allocation of movement resources suggests that capacity building is neglected relative to other interventions aimed at influencing public opinion and industry.
A 2012 article argues that investments in capacity building are an effective adaptation response to global change and that strong and well-supported scientific networks are an indispensable component of capacity building, as they are a key source for new knowledge that enables continual and dynamic adaptation practice.13
Synergy impacts
Faunalytics’ programs can be interpreted as 18 combinations of the intervention types used, the countries where those interventions take place, and/or the animal groups they aim to help. In the table below, we report the IP score (on a 1–7 scale, ranging from lowest to highest IP) for each intervention-animal-country combination that applies to Faunalytics. Note that we did not produce any IP scores for synergy combinations that are not specific to a particular country. For more details on how we scored the synergy impacts, see the Assessing synergy section.
Synergy combinations | % Annual Expenditures | IP Score |
Research to Help Farmed Animals (General) in United States | 58.5 | 6.0 |
Research to Help Farmed Animals (General) in China | 3.7 | 5.8 |
Research to Help Farmed Animals (General) in Brazil | 1.6 | 5.3 |
Research to Help Wild Animals in United States | 1.1 | 6.3 |
Research to Help Companion Animals in United States | 0.3 | 3.8 |
Research to Help Animals in Entertainment in United States | 0.2 | 3.5 |
Research to Help Animals in Labs in United States | 0.2 | 4.3 |
Skill and Network Building to Help Farmed Animals (General) | 21.5 | n/a |
Research to Help Farmed Animals (General) | 9.9 | n/a |
Skill and Network Building to Help Wild Animals | 1.3 | n/a |
Skill and Network Building to Help Companion Animals | 0.4 | n/a |
Skill and Network Building to Help Animals in Entertainment | 0.3 | n/a |
Skill and Network Building to Help Animals in Labs | 0.3 | n/a |
Research to Help Wild Animals | 0.3 | n/a |
Research to Help Companion Animals | 0.1 | n/a |
Research to Help Animals in Entertainment | 0.1 | n/a |
Research to Help Animals in Labs | 0.1 | n/a |
Our Assessment of Faunalytics’ Impact Potential
We estimate that about 96% of Faunalytics’ expenditures on programs go toward high-priority animal groups (farmed animals and wild animals), at least 72% go toward high-priority countries (U.S., China, and Brazil),14 and 100% go toward high-priority interventions (research and skill and network building). Their work conducting research to help farmed animals and wild animals in the United States and China seems to have particularly high impact potential.
In general, we are highly uncertain about skill and network building relative to other interventions, as well as about work aiming to help wild animals, animals in entertainment, and companion animals relative to work targeting other animal groups. This high uncertainty is due to a high standard deviation in team scores, which can be interpreted as a relatively high level of disagreement among our team members about the impact potential of using this intervention type and working to help these animal groups. The high uncertainty is also due to the low quality of research about the effectiveness of skill and network building.
Overall, we assessed the impact potential of Faunalytics’ programs as relatively high, with a final IP score of 5.5 (on a 1–7 scale), placing them in the 4th quartile (top 25%) of the charities we evaluated in 2023. Based on the final uncertainty score, we assessed our overall uncertainty in Faunalytics’ impact potential as moderate. For more detailed information, see Faunalytics’ IP Assessment spreadsheet.
Cost Effectiveness: How much has Faunalytics achieved through their programs?
With this criterion, we assess the effectiveness of a charity’s approach to implementing interventions, their achievements, and the costs associated with those achievements. Charities that perform well on this criterion likely use their available resources in a cost-effective manner. The key aspects that ACE considers when examining cost effectiveness are reviewed in detail below.
We conducted our analysis by comparing a charity’s reported expenditures over 12 months to their reported achievements in each intervention category during that time. To simplify the reporting process for charities, we gave them the choice to report achievements for the last full calendar year or their organization’s last full fiscal year. Faunalytics reported their achievements between January and December 2022.
We asked charities to report up to 10 key achievements per intervention category, alongside their estimated expenditures for each achievement.15
During our evaluation, we verified a subset of the charity’s reported achievements. Specifically, we identified the three intervention categories for which the charity reported the highest expenditures and selected up to three key claims per intervention category to verify.16 We prioritized the claims with the highest reported expenditures and aimed to confirm each claim, ideally by finding at least one independent source. When we were unable to do so, we directed follow-up questions to the charity to verify their achievements.
We used a Weighted Factor Model17 (WFM) approach to calculate the charity’s final cost-effectiveness score (see image below) based on their achievement scores. The achievement scores represent a combination of the intervention impact potential (IP) score and the implementation score for each key achievement.
- For each key achievement, we assigned the respective intervention IP score. For more details on how we calculated intervention IP scores and prioritized interventions, see the Impact Potential criterion.
- We also computed the achievement quantity, i.e., how much the charity accomplished per U.S. dollar or per $100,000.18 We applied discounts to the achievement quantities if a charity collaborated with other organizations and in certain other cases (e.g., if a charity influenced funding rather than directly providing it, if they summarized and disseminated research rather than conducting original studies, or if a corporate campaign did not result in any commitments). We then normalized the achievement quantity against other achievements in the same intervention category and with the same unit and mapped it onto a 1–7 scale.
- We then used a rubric to score the achievement quality (i.e., how well the charity had implemented the intervention). The rubric included the respective animal-relative IP score for the animal group targeted by the achievement, as well as intervention-specific factors. Where an achievement targeted multiple animal groups, we used the average animal score.19
- We calculated a weighted average of the achievement quantity per dollar/$100k and the achievement quality to arrive at the implementation score.20
- We then multiplied the intervention IP score by the implementation score and normalized and mapped the resulting score onto a 1–7 scale to arrive at the achievement score.21
- The final cost-effectiveness score for each charity is the average of all of their achievement scores, weighted by their expenditures. This score indicates, on a 1–7 scale, how cost effective we think the charity has been over their 12-month reporting period, with higher scores indicating higher cost effectiveness. Note that charities were benchmarked against other charities under evaluation rather than against all charities, and the standard of effectiveness in the charities we selected to evaluate is likely high. Low cost-effectiveness scores therefore don’t necessarily indicate low cost effectiveness in absolute terms.
Fig. 5: A model depicting the breakdown of a charity’s cost-effectiveness score
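Taken together, the steps above form a small scoring pipeline. The sketch below traces one hypothetical achievement through it; the dollar figures, discounts, weights, and normalization bounds are placeholders rather than the parameters used in our actual Weighted Factor Model.

```python
def normalize_to_1_7(value, cohort_min, cohort_max):
    """Map a raw value onto the 1-7 scale relative to comparable achievements.
    (A placeholder linear mapping; the actual normalization is not specified here.)"""
    if cohort_max == cohort_min:
        return 4.0
    return 1 + 6 * (value - cohort_min) / (cohort_max - cohort_min)

# --- One hypothetical achievement in a "research" intervention category ---
expenditure_usd = 80_000
raw_quantity    = 3        # e.g., research projects completed
collab_discount = 0.5      # applied because the work was (hypothetically) done with a partner

quantity_per_100k = (raw_quantity * collab_discount) / expenditure_usd * 100_000
quantity_score    = normalize_to_1_7(quantity_per_100k, cohort_min=0.5, cohort_max=10.0)

quality_score   = 5.0      # rubric score (animal-relative IP plus intervention-specific factors)
intervention_ip = 5.3      # from the Impact Potential criterion

# Implementation score: weighted average of quantity and quality (equal weights are a placeholder).
implementation = 0.5 * quantity_score + 0.5 * quality_score

# Achievement score: intervention IP x implementation, renormalized onto the 1-7 scale.
achievement_score = normalize_to_1_7(intervention_ip * implementation,
                                     cohort_min=5.0, cohort_max=45.0)

# The charity's final cost-effectiveness score would be the expenditure-weighted
# average of all such achievement scores (here there is only one).
print(round(quantity_score, 2), round(achievement_score, 2))
```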
We also rated our level of uncertainty in each achievement score based on the intervention uncertainty score, the number of missing scores in the achievement scoring rubric,22 and whether we were able to successfully verify a charity’s achievements. Uncertainty was scored on a 1–7 scale, with higher scores indicating higher uncertainty. The final uncertainty score for each charity is the average of the achievement uncertainty scores, weighted by the relative achievement expenditures.23 We consider a score to represent “high uncertainty” when it is equal to or greater than the median of all charities under evaluation.
In Our Analysis, we report the 1–5 key achievements with the highest expenditures for each intervention category used by Faunalytics and the estimated cost effectiveness of each achievement and of Faunalytics’ overall work. For a full list of Faunalytics’ key achievements, please see their Cost-Effectiveness Assessment spreadsheet.
ACE used quantitative cost-effectiveness models until 2018. Between 2019 and 2021, we moved away from this approach due to concerns about the usefulness of purely quantitative models. Instead, we employed a qualitative analysis of charities’ cost effectiveness, which consisted of an assessment of whether we had concerns about the way a charity was using its funds. Since 2022, we have again been moving toward an approach that includes more quantitative information. For ACE’s 2023 evaluations, we decided not to use a model that fully quantifies the expected impact on animals for the following reasons:
- The charities we evaluate employ a wide range of interventions, and we lack the empirical research for most interventions that would enable us to make informed, quantified estimates of their impact on animals. The resulting uncertainties would be significant, limiting the estimates’ usefulness.24
- Fully quantified cost-effectiveness estimates often ignore important factors that are hard to quantify and are sometimes arbitrary in what they include and exclude.
- Interventions are interdependent and their effectiveness is context-sensitive, so there might not be a clear answer as to which intervention is the most effective.
- Improving on these limitations may lead to greater complexity, making it harder for the public to understand and critically appraise the model.
To include as much quantifiable information as possible this year, we opted for a Weighted Factor Model25 (WFM) approach, which can be useful when limited hard data is available. This approach allows us to combine objective (hard) factors (e.g., the number of downloads of an educational resource) and subjective (soft) factors (e.g., how evidence-based the content of the resource is) when evaluating a charity’s achievements. Because factors and factor weights are standardized, this approach also ensures consistency in evaluation across charities. However, there are several noteworthy limitations.
Given that most factors and factor weights are set in advance, a WFM can have limited flexibility.26 Additionally, because we assign numerical values to non-numerical data, the data can be misinterpreted as more objective than it is. Therefore, the results of a WFM need to be interpreted with caution.
Further, due to limited team capacity, we could not independently verify every achievement charities reported to us. Instead, we followed a protocol where we aimed to verify one to three achievements in each of the top three intervention categories for which charities reported the highest expenditures. When we were only partially able, or unable, to independently verify a reported achievement, we sent follow-up questions to the charity asking for evidence of the achievement. Despite these efforts, some achievements remained unverified, which increased our uncertainty score for those achievements.
To allow for comparison across charities, we evaluate them based on a standardized 12-month time period. However, this limited timeframe may not accurately reflect a charity’s long-term cost effectiveness. Some accomplishments that are reported to us might seem more cost-effective than they actually are if a significant portion of the associated expenses occurred before or after the assessed timeframe. On the other hand, for a charity that focuses on long-term systemic change, the 12-month time period may not capture the full impact of their achievements. Complex issues like policy reform or behavior change often require sustained efforts over multiple years to yield significant results. Therefore, assessing cost effectiveness within this limited timeframe does not fully reflect the long-term potential and cumulative impact of the charity’s work.
Additionally, it’s important to note that charities could report achievements based on the last full calendar year or the last full fiscal year, depending on their financial reporting practices. While this flexibility allowed charities to align their reporting with their financial cycles, it means that we did not evaluate the exact same time period for each charity. Consequently, although there is significant overlap between the charities’ selected reporting periods, some caution should be exercised when directly comparing charities.
Generally, assessing cost effectiveness by considering a charity’s key achievements has inherent limitations. It could bias cost-effectiveness estimates upward because it tends to disregard work that did not result in an achievement. To help mitigate this, we asked charities to report achievements and expenditures that cover at least 90% of their overall program expenditures.
Our Analysis of Faunalytics’ Cost Effectiveness
The following tables show Faunalytics’ key achievements and achievement expenditures per intervention category from January to December 2022, the quantity of achievements per $1/$100,000, and the achievement cost-effectiveness score.27 The tables show the five highest-expenditure achievements per intervention category. For a full list of Faunalytics’ achievements, please see their Cost-Effectiveness Assessment spreadsheet.
Research
Key achievements | Achievement expenditures (USD) 28 | Number of research projects | Number of research projects per $100,000 | Achievement score (1–7) 29 |
Added 228 study summaries and 55 blog posts to their research library and produced six explainer videos and two fact sheets based on these posts | $306,268 | 283 30 | 92.4 | 7.0 |
Conducted a market research project on Chinese consumers’ attitudes towards animal welfare | $81,376 | 1 | 1.2 | 5.1 |
Conducted research into attitudes toward chickens and fishes in four countries | $74,818 | 4 | 5.4 | 5.4 |
Conducted qualitative research on current strategy for animal agriculture reform | $61,668 | 1 | 1.5 | 4.8 |
Conducted secondary research into the effect of local laws on state laws | $53,781 | 1 | 1.9 | 4.5 |
Skill and network building
Key achievements | Achievement expenditures (USD) | Number of individuals or organizations reached | Number of individuals or organizations reached per $100,000 | Achievement score |
Provided one-on-one advocate support via office hours or email | $32,230 | 118 individuals | 366 individuals | 4.6 |
Added the Faunalytics Research Glossary to the Research Advice Hub | $27,947 | 4,421 individuals | 15,820 individuals | 3.9 |
Launched the Faunalytics’ Research Ambassador Project to deliver research and data to animal advocates | $25,194 | 326 individuals | 1,294 individuals | 4.4 |
Helped Animal Friends Jogja with their impact monitoring and evaluation | $18,791 | 1 organization | 5.3 organizations | 3.5 |
Cofounded a coalition of researchers with Rethink Priorities and Stanford University’s Quantitative Sciences Unit | $16,822 | 2 organizations | 11.9 organizations | 4.3 |
Documentaries and films
Key achievements | Achievement expenditures (USD) | Number of views | Number of views per $1 | Achievement score |
Produced and distributed a short film about greyhound racing | $3,250 | 14,506 | 4.5 | 2.7 |
Our Assessment of Faunalytics’ Cost Effectiveness
Faunalytics’ overall cost-effectiveness score is 5.7, placing them in the 4th quartile (top 25%) among all charities evaluated in 2023. This score was reached by averaging the individual scores calculated for each achievement, weighted by the relative expenditures on the achievement.31 This overall score is an estimate of how well Faunalytics has implemented their interventions from January to December 2022, taking their expenditures into account.
We think that out of all of Faunalytics’ achievements, the addition of new content to their research library, their Global Animal Slaughter Statistics and Charts, and their intervention research into 15 types of animal advocacy32 are particularly cost effective because they combine high-priority interventions with robust implementation. In contrast, we think that the short film about greyhound racing is less cost effective because we consider documentaries and films a lower-priority intervention.
We think our score may overestimate Faunalytics’ cost effectiveness for the following reason: Faunalytics submitted achievements accounting for only 48.1% of their program expenditures. They explained that the remaining program expenditures largely went toward other research projects that were in progress but not yet completed and published in the reporting period. However, some communication and public relations work was also not included, and intervention scores for this type of work are lower than for Faunalytics’ other main interventions.
We think our score may underestimate Faunalytics’ cost effectiveness for the following reason: We largely focus on the direct and short-term impact of research and skill and network building, but the indirect and long-term impact may be much higher.
Our uncertainty in the cost-effectiveness score is moderate (below the median of all evaluated charities). This reflects the fact that no information was missing when we scored their achievements, the mostly moderate uncertainty scores of the relevant intervention categories, and the outcome of our verification process. Of the seven achievements selected for verification, two were verified and five were partially verified.
Room For More Funding: How much additional money can Faunalytics effectively use in the next two years?
A recommendation from ACE could lead to a large increase in a charity’s funding. With this criterion, we investigate whether a charity would be able to absorb the funding that a new or renewed recommendation may bring, and the extent to which we believe that their future uses of funding will be as effective as their past work. All descriptive data and estimations for this criterion can be found in the model spreadsheet.
We begin our room for more funding (RFMF) assessment by inspecting the charity’s revenue and plans for expansion through 2025, assuming that their ACE recommendation status and the amount of ACE-influenced funding they receive will stay the same. We then outline how the charity would likely expand if they were to receive funds beyond their predicted income and use this information to calculate their RFMF.
Plans for expansion
To estimate charities’ RFMF, we request their financial and staffing records from 2020 onwards and ask them to predict their revenue and staff size through 2025. We ask them to report how their projections are allocated across different interventions, animal groups, and countries. We then assess our overall level of uncertainty in the charity’s projected revenue, expenditures, and hiring plans on a scale of 1–7, with higher scores indicating higher uncertainty. This assessment is based on factors such as sustainability of growth,33 alignment with projections from our previous evaluation (if applicable), and our uncertainty in charities’ ability to find and train planned new hires in the projected time frame based on our understanding of the talent landscape.34
Our focus is to determine whether additional resources will likely be used for programs with high impact potential or other beneficial organizational changes. The latter may include investments into infrastructure and staff retention, both of which we think are important for sustainable growth.
Unexpected funding
We ask charities to indicate how they would spend additional, unexpected funding that an ACE recommendation may bring. Because previously recommended charities tend to receive more ACE-influenced funding over time, we also ask those charities to specify how they would use additional funding.
We then assess our level of uncertainty in the effectiveness of the charity’s plans in 2024 and 2025 and estimate their RFMF for those years. To do this, we assign an uncertainty score on a 1–7 scale, with higher scores indicating higher uncertainty, based on questions such as the following for each plan:
- How uncertain are we that the charity’s plans will make as effective use of the funding as their previous expenditures?
- Is there a percentage threshold of the charity’s proposed plans beyond which the additional funding is not used as effectively?
- Are there nonfinancial barriers that may impact the charity’s ability to carry out their plans?
We use these uncertainty scores and the charity’s revenue and expenditure trajectory to define two RFMF dollar estimates that represent the amount beyond the charity’s projected revenue in 2024 and 2025 that we believe they could effectively use. If the charity has plans for a large amount of unexpected funding that are likely to be as effective as their past work, they will receive higher RFMF estimates.
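We do not apply a mechanical formula to turn these uncertainty scores into dollar figures; judgment about the charity’s revenue and expenditure trajectory also plays a role. Purely to illustrate the general idea, the sketch below discounts hypothetical plan amounts by their uncertainty scores. The discount scheme and all numbers are assumptions, not the method used to produce the estimates in this review.

```python
def plan_discount(uncertainty_1_to_7):
    """Convert a plan's 1-7 uncertainty score into a crude credibility weight:
    1 keeps the full planned amount, 7 keeps none of it. (Assumed scheme.)"""
    return max(0.0, 1 - (uncertainty_1_to_7 - 1) / 6)

# Hypothetical plans for unexpected funding: (amount in USD, uncertainty score).
plans = [(150_000, 2), (200_000, 3), (50_000, 1), (80_000, 5)]

rfmf_estimate = sum(amount * plan_discount(u) for amount, u in plans)
print(round(rfmf_estimate))   # a rough, uncertainty-discounted RFMF figure
```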
Reserves
We may adjust RFMF based on the status of a charity’s reserves. It is common practice for charities to hold more funds than needed for their current expenses to be able to withstand changes in the business cycle or external shocks that may affect their incoming revenue.35 Such additional funds can also serve as investments in future projects. Thus, it can be effective to provide a charity with additional funding to secure the organization’s stability and/or provide funding for larger projects in the future. Therefore, we increase a charity’s RFMF if they are below their targeted amount of reserves. If a target does not exist, we suggest that charities hold reserves equal to at least one year of expenditures.
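The reserves check can be expressed as a simple comparison, sketched below with made-up figures. How much we actually increase a charity’s RFMF when they fall short of their target remains a judgment call and is not captured by this sketch.

```python
def reserves_shortfall(current_reserves, annual_expenditures, target_reserves=None):
    """How far a charity is below its reserves target. If no explicit target exists,
    the suggested default is at least one year of expenditures; a positive shortfall
    is a reason to increase the charity's RFMF."""
    target = target_reserves if target_reserves is not None else annual_expenditures
    return max(0, target - current_reserves)

# Hypothetical charity: $900,000 in reserves against $1.2M of annual expenditures
# and no explicit reserves target.
print(reserves_shortfall(900_000, 1_200_000))   # 300000 below the one-year default target
```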
Revenue diversity
The charities we evaluate typically receive revenue from a variety of sources, such as individual donations and grants from foundations.36 A review of the literature on nonprofit finance suggests that revenue diversity may be positively associated with revenue predictability if the sources of income are largely uncorrelated.37 However, there is evidence that revenue diversity may not always be associated with financial stability.38 Therefore, although revenue diversity does not play a direct role in our recommendation decision, we indicate charities’ major sources of income in this criterion for donors interested in financial stability.
Because we cannot predict exactly how much charities will fundraise in the future and how their plans for expansion will unfold, our estimates are speculative—not definitive. For instance, a charity could lose a major funder or discover a way to use additional funding that they did not anticipate, in which case our estimates would be too low. Conversely, they could fail to hire an employee with the necessary skills or experience to enable an expansion, in which case our estimates would be too high.
Our RFMF estimates are intended to identify the point in time at which we would want to check in with a charity to ensure that they have used their funds effectively and can still absorb additional funding. Therefore, we check in with our recommended charities twice a year leading up to our Recommended Charity Fund distributions to update our sense of their RFMF.
Finally, because we assign numerical values to non-numerical assessments of uncertainty, the data can be misinterpreted as more objective than it is.
Our Analysis of Faunalytics’ Room For More Funding
The chart below shows Faunalytics’ revenues, expenditures, and total staff size from 2020–2022, as well as their own projections for the years 2023–2025.
Fig. 6: Faunalytics’ financials and staff size (2020–2025)
Assessment of Projected Revenue and Expenditures
Concerns about Alignment with Previous Projections 39 | Level of Concern about Charity’s Sustainability (1–7) | Reasoning |
No | 3 | Projecting a moderate increase in expenditures, with assets to handle the increase even if revenue is flat in the future |
We consider the charity’s projected growth (uncertainty level 3 out of 7) to be somewhat conservative. A more detailed summary of their financials, including breakdowns by intervention, animal group, and country, can be found in the “Overall Financials” tab of their model spreadsheet.
Assessment of Hiring Plans
Year | # FTEs | Hiring plans | Uncertainty (1–7) |
2020 | 4.5 | N/A | N/A |
2021 | 6.5 | N/A | N/A |
2022 | 8 | N/A | N/A |
Projection for 2023 | 8.5 | Data Analyst and Visual Coordinator | 1 |
Projection for 2024 | 10 | Content Manager/Senior Research Scientist, Operations/HR/Finance Director | 2 |
Projection for 2025 | 10 |
Overall, we consider it likely that the charity will be able to find and train the FTEs projected. A more detailed summary of their hiring plans and our reasoning behind their uncertainty scores can be found in the “Assessment: Hiring Plans” tab of their model spreadsheet.
Plans for expansion
Faunalytics plans to conduct and share more research and data-driven resources, as well as offer animal organizations greater support in terms of conducting their own research, finding data to apply to their campaigns and programs, and measuring their effectiveness and impact. Their Research Library will continue to focus on farmed animals but will include more content on wild animals and invertebrates in the future. They will also increase the amount of resources focused on animal issues in the United States, China, and numerous other countries and translate select research studies into additional languages.
A more detailed summary of their future plans can be found in their model spreadsheet.
Unexpected funding
Faunalytics shared that they could absorb a total of $850,000 beyond their most likely scenario projections while still meeting the high standards of their current programs. They provided the following plans:
Priority for Funds | Amount of Funds | Type of Work Funded | Uncertainty about Effectiveness of Plans (1–7) |
1 | $100,000 | Hire an H.R./Admin/Finance Director and bring the Operations Associate position to full time | 2 |
2 | $220,000 | Hire a Content Manager, a Senior Research Scientist, and an M&E researcher and bring the Data Analyst position to full time | 2 |
3 | $53,000 | Funds for research workshops and for increased presence at conferences | 1 |
4 | $35,000 | Funds for academic publication of studies | 4 |
5 | $158,000 | Hire an additional Research Scientist, research interns, and a Development Director | 3 |
6 | $8,000 | Obtain Employer of Record services | 1 |
7 | $74,000 | Hire a Global Strategy Director | 2 |
8 | $132,000 | Hire a Deputy Research Director and a Research Scientist (both with a Southeast Asia focus) | 2 |
9 | $20,000 | Funds for continued research translation efforts (Southeast Asia focus) | 1 |
10 | $50,000 | Funds to support organizations making research-based strategy shifts | 1 |
Based on these plans and Faunalytics’ conservative financial projections, we believe that the charity can use more than the above amounts in the next two years in a way that is as effective as their past work.
A more detailed summary of their plans for unexpected funding and the reasoning behind our uncertainty assessments can be found in the “RFMF Estimate” tab of their model spreadsheet.
Reserves
With more than their target amount of 100% of annual expenditures held in reserves (as reported by Faunalytics for 2023), we believe that they hold a sufficient amount of reserves.
Our Assessment of Faunalytics’ Room For More Funding
Based on our assessment that they have sufficient reserves and our assessment of their plans to use unexpected funding, we believe that overall, Faunalytics has room for $400,000 of additional funding in 2024 and $600,000 in 2025. These two figures represent the amount beyond their projected revenues of $1,215,000 and $1,360,000 in 2024 and 2025, meaning that we believe that they could effectively use a total revenue of up to $1,615,000 and $1,960,000.
Organizational Health: Are there any management issues substantial enough to affect Faunalytics’ effectiveness and stability?
With this criterion,40 we assess whether any aspects of an organization’s leadership or workplace culture pose a risk to its effectiveness or stability, thereby reducing its potential to help animals. Problems with leadership and workplace culture could also negatively affect the reputation of the broader animal advocacy movement, as well as employees’ wellbeing and their willingness to remain in the movement. For example:
- Schyns & Schilling (2013) report that poor leadership practices result in counterproductive employee behavior, stress, negative attitudes toward the entire company, lower job satisfaction, and higher intention to quit.
- Waldman et al. (2012) report that effective leadership predicts lower turnover and reduced intention to quit.
- Wang (2021) reports that organizational commitment among nonprofit employees is positively related to engaged leadership, community engagement effort, the degree of formalization in daily operations, and perceived intangible support for employees.
- Gorski et al. (2018) report that all of the activists they interviewed attributed their burnout in part to negative organizational and movement cultures, including a culture of martyrdom, exhaustion/overwork, the taboo of discussing burnout, and financial strain.
- A meta-analysis by Harter et al. (2002) indicates that employee satisfaction and engagement are correlated with reduced employee turnover and accidents and increased customer satisfaction, productivity, and profit.
We review aspects of organizational health by examining information provided by top leadership staff and by capturing staff perspectives via our engagement survey. We also distribute the survey to volunteers working at least five hours per week.
Poor performance on a single aspect of the organizational health assessment does not automatically lead to an unfavorable overall outcome. Instead, we seek to assess organizational health from multiple perspectives to arrive at the most appropriate conclusion within the time available based on the information we have. This information includes our follow-up conversations with the charity’s leadership.
People policies and processes
We ask charities to report their people policies. We provide a checklist of policies that we believe are strong indicators of organizational health and ask charities to indicate which of these they have implemented. Policies are grouped into the following categories:
- Compensation
- Workplace safety
- Organizational design and communication
- Performance and hiring assessments
- Learning and development
- Workplace conditions
- Representation, equity, and inclusion
We do not assess which policies from our list are the most important. As we gather aggregated information about the charities we evaluate, we hope to build up a clearer picture over time of which policies are the strongest indicators of organizational health in different contexts. We take into account that many charities work in contexts (e.g., geographical regions) where these policies may not be common practice.
A safe and inclusive working environment is likely to deliver significant benefits not only for advocates, but also for the effectiveness and stability of organizations and the broader animal advocacy movement.41 This is why we collect information about policies and activities regarding representation, equity, and inclusion. We use the term “representation” broadly to refer to the diversity of certain social characteristics (called “protected classes” in some countries).42 Additionally, charities should generally have human resources policies against harassment43 and discrimination44 and ensure that cases of harassment and discrimination in the workplace45 are addressed appropriately.
We also ask about the internal accessibility of people policies and processes, i.e., which policies are shared with employees at the organization and in what way. This is because written policies have little use without employees knowing that they exist, understanding them, and believing that the organization enforces them. For example, it is important for employees to understand their entitlement to sick days and how to submit internal reports of harassment and discrimination.
Governance and accountability
We ask charities to report whether they have basic governance policies and processes in place, including: an anti-retaliation policy protecting whistleblowers and those who report grievances, a Conflict of Interest policy, a policy setting out procedures for the storage and destruction of documents, and a process for documenting minutes of board and board committee meetings.
We also consider leadership’s commitment to transparency.46 Firstly, we require organizations selected for evaluation to be transparent with ACE throughout the evaluation process. This is essential for us to be confident that we have the necessary information to carry out a full, well-informed evaluation. Secondly, we consider organizations’ public-facing transparency by asking charities to report what information they have available on their website, such as key staff members and financial information. Although we value such transparency, we recognize that some organizations may be able to have a greater impact by keeping certain information private. For example, organizations and individuals working in some regions or on particular interventions could be harmed by publicizing certain information about their work. We seek to understand where this is the case based on conversations with the charity’s leadership.
Leadership and governance
First, we consider key information about the composition of leadership staff and the board of directors. There appears to be no firm consensus in the literature on the specifics of the relationship between board composition and organizational performance.47 However, BoardSource (a 501(c)(3) organization that provides the most reliable research we have found on nonprofit board leadership) recommends that, if the law permits, the Executive Director (ED) or equivalent should be an “ex officio, non-voting member of the board.”48 In this way, the ED can provide input in board meeting deliberations and decision-making while avoiding perceived conflicts of interest, questions concerning accountability, or blurring the line between oversight and execution. We also ask whether there have been any recent leadership transitions and what measures were taken to ensure they happened smoothly.
We ask about the board’s membership and functions, so we can understand to what extent these align with the ‘Compliance Practices’ set out in BoardSource’s Recommended Governance Practices. For example, we ask how often the board meets, how its performance is evaluated, and what term limits are in place for board members.
Our engagement survey also asks staff to indicate the extent to which they feel that leadership competently guides the organization, as an indicator of both leadership competence and staff engagement. The questions we ask are based on cross-cultural research by Culture Amp and Google’s Project Oxygen.
Staff engagement and satisfaction
We solicit staff and volunteer perspectives via our engagement survey. We developed this survey in collaboration with organizational consultants Scarlet Spark. To help ensure that our questions were reliable predictors of organizational health, we based them where possible on recognized frameworks such as the cross-culturally validated Gallup Q12 Employee Engagement Survey, the Maslach Burnout Inventory, Google’s Project Oxygen, and cross-cultural research by Culture Amp.
We require at least 65% of the charity’s staff to respond to the survey to ensure that we have a representative sample of responses. There is no participation threshold for volunteers, recognizing that most organizations do not have a fixed number of volunteers as their participation tends to fluctuate.
If a charity scores particularly low on any aspect of staff engagement, we follow up on these factors with the charity’s leadership to hear their perspective and understand any relevant context. We only share aggregated organizational-level data with leadership and do not share individual survey responses or other confidential information. ACE may recommend that the charity address any outstanding concerns, for example, by:
- Conducting a comprehensive staff survey to assess employee engagement, satisfaction, and areas for improvement.
- Establishing regular channels for communication and feedback, such as open-door policies, suggestion boxes, or anonymous reporting mechanisms.
- Developing professional development opportunities and career advancement pathways for staff.
- Seeking external expertise on how to improve staff morale.
- If low staff morale is being caused by a specific person, carrying out a performance review with that person and agreeing on the specific ways in which their behavior needs to change, including a timeline by which changes must happen.
Our engagement survey contains questions based on the 12 statements from the Gallup Q12 Employee Engagement Survey, with staff requested to rate each statement on a scale from 1 (no, I strongly disagree) to 5 (yes, I strongly agree). Where possible, we avoided making adjustments to standard assessments since these questions have been validated with large, cross-cultural samples of participants. However, we made minor amendments to some statements in the original Gallup survey that charities have found unclear in the past. We consider an average engagement score of below 3 (the scale midpoint) to warrant follow-up with the charity’s leadership.
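As a rough illustration of the two thresholds described above (the 65% staff response rate and the follow-up trigger when the average engagement score falls below the scale midpoint of 3), here is a minimal Python sketch; the function names and data layout are hypothetical, not ACE’s actual tooling.

```python
# Illustrative sketch of the survey thresholds described above.
# Function names and data structures are hypothetical, not ACE's actual tooling.

def response_rate_ok(num_respondents: int, num_staff: int, threshold: float = 0.65) -> bool:
    """Check whether enough staff responded for the sample to be considered representative."""
    return num_respondents / num_staff >= threshold

def needs_follow_up(scores: list[float], midpoint: float = 3.0) -> bool:
    """Flag an organization for follow-up if its average engagement score is below the scale midpoint."""
    return sum(scores) / len(scores) < midpoint

# Example: 11 of 15 staff responded (73%), with high average ratings on the 1-5 scale.
print(response_rate_ok(11, 15))          # True
print(needs_follow_up([5, 5, 4, 5, 5]))  # False
```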
In addition to the engagement questions based on the Gallup Q12 Employee Engagement Survey, we ask questions designed to elicit information about the risk of burnout among staff, the level of psychological safety at the organization, and organizational stability. We designed these questions together with Scarlet Spark, based largely on the frameworks mentioned above (the Gallup Q12 Employee Engagement Survey, the Maslach Burnout Inventory, Google’s Project Oxygen, and cross-cultural research by Culture Amp). We also ask all staff about wage satisfaction since this can serve as an indicator of retention.49
We ask volunteers an alternative set of questions specifically designed to assess volunteer engagement and satisfaction. These questions are similar to the ones in the employee engagement survey but with added dimensions to understand whether volunteers feel they are making a difference, whether their workload is fair, the ease of volunteering for the organization, and their pride in volunteering at the organization.
Harassment and discrimination
The engagement survey contains a link to an anonymous Whistleblower Form,50 developed with support from legal experts at Animal Defense Partnership, for any staff or volunteers who wish to report issues of harassment and discrimination. In most cases where we decide to take action based on such reports, this consists of sharing relevant non-confidential information with the leadership of the organization in question and hearing their perspective. We do this to improve our understanding of what happened, whether the leadership members were aware, and what measures they took or plan to take, if relevant. We then factor this information into our overall Organizational Health assessment.
While we strive to continually improve our assessment of charities’ organizational health, we recognize that several limitations remain.
Firstly, we are currently unable to fully investigate harassment and discrimination claims due to a combination of time constraints, lack of expertise, and the often anonymous nature of the reports that we receive. We recognize that this may cause frustration among charities that we evaluate, especially when we are unable to share specific details about these claims for reasons of confidentiality.
This year, we have sought to improve the channel for people to submit such reports, linking to the more comprehensive Whistleblower Form co-developed with Animal Defense Partnership rather than asking about harassment and discrimination in the engagement survey directly. We hope this will help ensure that claimants understand the implications of providing such information, improve the comprehensiveness of the information we receive, better enable us to follow up with claimants, and help us identify the level of detail we can share with the leadership of the charity in question. At the same time, we recognize that requiring claimants to fill out a separate, more comprehensive form may reduce the number of reports that we receive.
Secondly, our engagement survey only provides a limited window into a charity’s workplace culture and may not fully represent the broad range of experiences within the organization. In particular, we recognize that surveying staff and volunteers can lead to inaccuracies due to selection bias and also may not reflect employees’ true opinions, as respondents are aware that their answers could influence ACE’s evaluation of their employer. We also recognize that our assessment represents a snapshot at a point in time and may not fully capture ongoing cultural shifts within an organization.
This year, we have included a wider range of questions in the survey and collaborated with the organizational consultant Scarlet Spark to help ensure these questions are likely to be effective predictors of organizational stability and effectiveness. As in previous years, we do not rely solely on the results of the engagement survey to make our assessment. Rather, we assess organizational health from multiple perspectives to arrive at the most appropriate decision within the time available based on all the information we have, including our follow-up conversations with the charity’s leadership.
Thirdly, there is no universally agreed-upon “best practice” for organizational leadership and culture. With a wide range of frameworks, models, and approaches available, it can be challenging to establish a singular standard for evaluation, which may lead to a variety of interpretations and expectations among charities.
As mentioned, this year, we developed our organizational health assessment in collaboration with organizational consultants Scarlet Spark to help ensure we are using the most relevant research. Where possible, we used recognized frameworks such as the cross-culturally validated Gallup Q12 Employee Engagement Survey, the Maslach Burnout Inventory, Google’s Project Oxygen, and cross-cultural research by Culture Amp. As in previous years, we also seek to gather input both from the charity’s leadership and non-leadership staff so that we can understand any issues from multiple perspectives.
Lastly, our assessment may be biased toward certain Western workplace practices. As a U.S.-based organization with staff based predominantly in the U.S. and Western Europe, our understanding of best-practice organizational health is inevitably skewed toward the cultures with which we are most familiar.
We seek to recognize this bias at all stages of the assessment and to continually learn from the charities that we evaluate, rather than imposing a “one size fits all” approach onto each charity’s unique situation. For example, we recognize that not all of the policies and processes that we ask charity leadership about will be common or relevant in all countries and situations. Where there are indications that important policies and processes may be lacking, we follow up with the charity to gain a better understanding of the context. Particularly if the charity is based outside of the U.S., we are also eager to learn of additional policies they may have that they find to be important contributors to their effectiveness. In this way, we hope that this exercise can be mutually informative for ACE and for the charities that we evaluate.
This year, we also modified our engagement survey questions to reduce their focus on Western cultures and piloted the questions with charities from different global regions to help ensure this was successful. We will continue to explore how best to improve the applicability of our assessment across all national contexts, using evidence from the countries where our evaluated charities are based.
Our Analysis of Faunalytics’ Organizational Health
People policies and processes
The policies that Faunalytics reported having in place are listed below. They reported that they proactively make all of the relevant policies accessible to their staff.
Faunalytics also noted that they are currently in the process of developing their paid medical leave and paid family leave policy, aiming to finalize the policy by the end of the year.
Compensation
- Paid time off
- Paid sick days
- Paid medical leave
- Permission to use sick days for mental health purposes
- Healthcare coverage or health insurance
- Paid family and caregiver leave
- Paid internships (if relevant): N/A
- Compensation strategy (i.e., a policy detailing how the charity determines their staff’s pay and benefits in a standardized way)
Accountability
- An anti-retaliation policy protecting whistleblowers and those who report grievances
- Board meeting minutes
- Conflict of interest policy
- Records retention and destruction policy
Workplace safety
- A clearly written workplace code of ethics/conduct
- A written statement that the charity does not tolerate discrimination on the basis of race, gender, sexual orientation, disability status, or other irrelevant characteristics
- A simple and transparent written procedure for filing complaints
- Mandatory reporting of harassment and discrimination through all levels, up to and including the board of directors
- Explicit protocols for addressing concerns or allegations of harassment or discrimination
- Documentation of all reported instances of harassment or discrimination, along with the outcomes of each case
- Training on topics of harassment and discrimination in the workplace
Organizational design and communication
- Clearly defined responsibilities for all positions, preferably with written job descriptions
- Clearly defined objectives and expectations for all roles
- Documentation of all key knowledge and information necessary to fulfill the needs of the organization
- Mission and/or vision, defining the purpose and future of the organization
- Clear organizational goals and/or priorities communicated to all employees
Performance and recruitment assessments
- Annual (or more frequent) performance evaluations for all roles
- Performance evaluation process based on predefined objectives and expectations
- Annual (or more frequent) process to measure staff engagement or satisfaction
- A process in place to support performance improvement in instances of underperformance
Learning and development
- New hire onboarding or orientation process
- Training and development available to each employee
- A simple and transparent written procedure for employees to request additional training or support
Workplace conditions
- Flexible work hours
- Remote work option
Representation, equity, and inclusion
- Process to attract a diverse candidate pool
- Structured hiring, assessing all candidates using the same process
- Standardized process for employment termination decisions
- Two or more decision-makers for all hiring, promotion, and termination decisions
Transparency
Faunalytics was transparent with ACE throughout the evaluation process.
All of the public-facing information we requested was available on the charity’s website. This included: a list of board members; a list of key staff members; information about the organization’s key accomplishments; the organization’s mission, vision, and/or theory of change; a privacy policy disclosing how the charity collects, uses, and shares third-party information; an IRS Form 990 or equivalent tax form; and financial statements.
Leadership and governance
Faunalytics’ Executive Director (ED) is Brooke Haggerty, who has been involved with the charity for 4.5 years.
The board of directors has six members. The Executive Director does not sit on the board.
We found that the charity’s board of directors aligned with our understanding of best practice. All of their board members are independent from the organization, board meetings take place 10 times per year, and the board has robust term limits and performance evaluation processes in place.
Among Faunalytics staff who responded to our engagement survey, the average score across questions on confidence in leadership and management was 5 on a 1–5 scale, indicating very high confidence. 100% of respondents agreed with the statement “I have confidence in the leaders at our organization.”
Staff engagement and satisfaction
Faunalytics has 15 staff members (including full-time staff, part-time staff, and contractors). Eleven staff members responded to our engagement survey, yielding a response rate of 73%.
Faunalytics has three volunteers working at least five hours per week. One volunteer responded to our survey.
Faunalytics has a formal compensation plan to determine staff salaries. Survey respondents’ average score to questions regarding satisfaction with wages and benefits was 4.7 on a 1–5 scale, indicating very high satisfaction.
The average score across all questions was 4.9 on a 1–5 scale, suggesting that, on average, staff exhibit very high engagement and satisfaction.
Harassment and discrimination
We did not receive any reports of harassment or discrimination at Faunalytics.
Our Assessment of Faunalytics’ Organizational Health
We did not detect any concerns in Faunalytics’ leadership and organizational culture. Based on our assessment, they appear to have strong policies and processes in place and high levels of staff engagement. We also positively noted that since their last review, they have developed an algorithm for setting salaries in an objective, transparent way. They have also improved their employee benefits package and are continuing to assess how they can further improve employee health and retirement benefits.
Overall Recommendation
Faunalytics’ work to increase knowledge and skills for animal advocacy is highly promising because it focuses on animal groups, countries, and interventions that we consider high priority. We find their work on conducting research to help farmed animals and wild animals in the United States and China particularly likely to be impactful. We assess Faunalytics’ recent work as highly cost-effective and believe they are in a strong position to use additional funding. These efforts are well-aligned with ACE’s organizational values and theory of change.
Faunalytics performed very strongly compared to other charities we evaluated in 2023. During the decision-making phase of our evaluation process, we took into account their performance on our four evaluation criteria—Impact Potential (high), Cost Effectiveness (high), Room for More Funding across 2024 and 2025 ($1,000,000), and Organizational Health (no concerns)—as well as our level of uncertainty in their scores. In this particular case, our uncertainty in Faunalytics’ Impact Potential score was higher than our uncertainty in their Cost Effectiveness score, so we put more emphasis on the latter when making recommendation decisions. Overall, we find Faunalytics to be an excellent giving opportunity for those looking to create the most positive change for animals.
To view all of the sources cited in this review, see the reference list.
This criterion was called Programs from 2020 to 2022. We decided to rename it Impact Potential to better reflect its focus on assessing the effectiveness of charities’ programs without considering their implementation. This name is more specific and less confusing internally, especially since we recently changed the name of our research team to the Programs team.
Rethink Priorities adjusted their welfare range estimates for use in ACE’s evaluations. Because ACE compares animal charities with each other rather than with human charities, Rethink Priorities reindexed the ranges to pigs instead of humans—see this page for more information.
An “egalitarian” score is a score of 1 that we assign to each animal group to represent the view that all animal groups have equal welfare range or probability of sentience.
The framework we used to prioritize countries only applies to farmed animal advocacy. We have not developed a framework to prioritize wild animal welfare work because there are very few organizations that work on wild animal welfare, and those we have considered so far are focused on indirect work such as research and academic development, which is less country-specific.
For example, when scoring the intervention category “apps and other digital resources,” we considered the following tractability proxies: the Global Innovation Index, Education (mean years of schooling), and Internet Penetration rate.
For more information on the limitations of FAOSTAT data, see Šimčikas (2019). For more information on Rethink Priorities welfare ranges project, see Fischer (2020).
For arguments supporting the view that the most important consideration of our present actions should be their long-term impact, see Greaves & MacAskill (2019) and Beckstead (2019).
The remaining 28% of Faunalytics’ expenses go toward unspecified countries.
We asked that reported achievements and associated expenditures amount to at least 90% of a charity’s total program expenditures during the reporting period. We also adjusted achievement expenditures by taking the charity’s reported expenditures and adding a portion of their non-programmatic expenditures (i.e., overhead or administration). This process allowed us to incorporate general organizational running costs into our consideration of cost effectiveness.
We selected key claims from the achievements with the highest expenditures, given that those achievements contribute most to the cost-effectiveness score.
For more information about Weighted Factor Models, see Charity Entrepreneurship (2019).
We standardized this unit to achievements per one U.S. dollar or per $100,000, depending on which was easier to interpret, to allow for comparison across achievements. For example, we calculated how many individuals a social media campaign reached per dollar spent or how many legal actions a charity filed per $100,000 spent. For some intervention categories, the number of achievements was too low to normalize the achievement quantity. In these cases, we used the average of two researchers’ subjective assessment of the quantity on a 1–7 scale.
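For instance, the per-dollar or per-$100,000 standardization might be computed along the following lines; this is a hedged sketch, and the helper name and example figures are ours rather than ACE’s.

```python
# Hedged sketch of the standardization described in this footnote.
# The helper name and example figures are illustrative only.
def achievements_per_unit(quantity: float, expenditure: float, per: float = 1.0) -> float:
    """Express an achievement quantity per $1 (per=1.0) or per $100,000 (per=100_000) of expenditure."""
    return quantity / expenditure * per

# Illustrative numbers: a campaign reaching 50,000 people for $10,000,
# and 4 legal actions filed for $200,000.
print(achievements_per_unit(50_000, 10_000))           # 5.0 individuals reached per dollar
print(achievements_per_unit(4, 200_000, per=100_000))  # 2.0 legal actions per $100,000
```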
See here for the full rubric. Two researchers scored each achievement on the rubric and discussed significant disagreements before a second round of score revisions. We averaged the two researchers’ scores for each factor. Where we did not have enough information to score an achievement, we set the corresponding factor weight to zero.
We defaulted to giving achievement quality 75% and achievement quantity 25% weight. In some cases, e.g., if we were particularly uncertain about the achievement quantity, we gave achievement quality a higher weight.
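A one-line sketch of that default weighting, assuming the quality and quantity scores are already on a common scale (the function name is ours):

```python
# Sketch of the default 75%/25% weighting of achievement quality and quantity.
# Assumes both scores are already on a common scale; the function name is illustrative.
def weighted_achievement_component(quality: float, quantity: float,
                                   quality_weight: float = 0.75) -> float:
    return quality_weight * quality + (1 - quality_weight) * quantity

print(weighted_achievement_component(quality=6.0, quantity=4.0))  # 5.5
```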
By using a multiplicative method, we avoid giving high scores to achievements that implement promising interventions poorly (i.e., high intervention score but low implementation score). Consider the example where a charity focuses on an intervention like cage-free campaigns, which has the potential to be highly impactful, but fails to achieve any significant commitments. With a weighted average approach, the charity would still receive a relatively high score despite an unsuccessful implementation of their campaigns. However, by using a multiplicative method, the overall score accounts for the interaction between intervention and implementation scores. This means that if the implementation quality is lacking, the overall score will appropriately reflect that.
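A small numerical comparison, using made-up scores on a 1–7 scale and an illustrative 50/50 weighting (not ACE’s actual weights), shows why the multiplicative method penalizes poor implementation where a weighted average would not:

```python
# Made-up 1-7 scores for two hypothetical achievements:
# A: a very promising intervention implemented poorly; B: a moderate intervention implemented well.
a_intervention, a_implementation = 7, 1
b_intervention, b_implementation = 4, 4

# An illustrative 50/50 weighted average would rank them equally...
print(0.5 * a_intervention + 0.5 * a_implementation)  # 4.0
print(0.5 * b_intervention + 0.5 * b_implementation)  # 4.0

# ...whereas the multiplicative method captures the interaction and ranks B well above A.
print(a_intervention * a_implementation)  # 7
print(b_intervention * b_implementation)  # 16
```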
We encouraged charities to give as much information as possible about each achievement. In order to protect their capacity, we also marked some questions as optional. Where we did not have the relevant information to score an achievement on a factor in the scoring rubric, this increased our uncertainty score for that achievement.
We increased the uncertainty score for charities that reported fewer than 10 achievements to account for the fact that measurement errors and uncertainties have a higher impact on the final score when fewer achievements are averaged.
For interested readers, we compiled a list of existing quantified cost-effectiveness estimates for animal advocacy interventions here. You can find our summaries of existing empirical research on the impact potential of interventions here.
For more information about Weighted Factor Models, see Charity Entrepreneurship (2019).
Some factors and factor weights were adjusted slightly after receiving the data from charities.
Faunalytics’ reported achievements amounted to only 48.1% of program expenditures. In response to our follow-up questions, they provided information on how the remaining expenditures were spent (see Our Assessment of Faunalytics’ Cost Effectiveness for more information).
We adjusted the achievement expenditures charities reported to us by adding a portion of their overhead costs, weighted by the relative achievement expenditures, in order to take general organizational running costs into account in our cost-effectiveness assessment.
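The proportional overhead adjustment can be sketched as follows; the variable names and figures are illustrative, not taken from any charity’s actual data.

```python
# Hedged sketch of the proportional overhead adjustment described in this footnote.
# Variable names and figures are illustrative only.
reported = {"achievement_a": 60_000, "achievement_b": 40_000}
overhead = 20_000

total_reported = sum(reported.values())
adjusted = {
    name: spend + overhead * (spend / total_reported)
    for name, spend in reported.items()
}
print(adjusted)  # {'achievement_a': 72000.0, 'achievement_b': 48000.0}
```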
To calculate the achievement score, we multiplied the intervention score by the implementation score. We then min-max normalized those scores against all other achievement scores across charities and converted the result to a 1–7 scale.
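A minimal sketch of that calculation, assuming the raw scores have been pooled across all charities’ achievements (function names and example scores are ours):

```python
# Minimal sketch of the achievement-score calculation described in this footnote.
# Function names and example scores are illustrative; assumes raw scores are pooled across charities.
def achievement_score(intervention: float, implementation: float) -> float:
    return intervention * implementation

def to_1_7_scale(raw: float, all_raw: list[float]) -> float:
    """Min-max normalize a raw score against all achievement scores, then rescale to a 1-7 range."""
    lo, hi = min(all_raw), max(all_raw)
    return 1 + 6 * (raw - lo) / (hi - lo)

pooled = [achievement_score(i, m) for i, m in [(7, 1), (4, 4), (6, 6), (2, 3)]]
print([round(to_1_7_scale(r, pooled), 2) for r in pooled])  # [1.2, 3.0, 7.0, 1.0]
```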
Note that we discounted the achievement quantity in our model to reflect that these are summaries of other researchers’ work rather than original studies.
Please see Faunalytics’ Cost-Effectiveness Assessment spreadsheet and the Our Method section for more detailed information.
Because the tables in Our Analysis of Faunalytics’ Cost-Effectiveness include only the five most expensive achievements per intervention category, some achievements do not appear in that section. For a full list of Faunalytics’ achievements, please see their Cost-Effectiveness Assessment spreadsheet.
Sustainability of growth is based on historical revenue/expenditures and other context-dependent variables, such as non-financial barriers to the scalability of their programs (e.g., time).
National Council of Nonprofits; Propel Nonprofits; Boland & Freedman (2021)
To be selected for evaluation, we require that a charity has a budget size of at least about $100,000 and faces no country-specific regulatory barriers to receiving money from ACE.
This assessment is only applicable to charities that provided projected financial information in a previous ACE evaluation.
This criterion was called Leadership and Culture from 2020 to 2022. We found that ‘leadership’ was often misunderstood as referring solely to the qualities of individual leaders and that ‘culture’ was understood in very different ways across countries and demographics. With the new name Organizational Health, we intend to highlight the broad focus of this criterion and to clarify that its goal is to identify any significant risks to the organization’s effectiveness and stability.
For example, in a study by Anderson (2020), 49% of paid animal advocates and 28% of unpaid animal advocates reported having experienced discrimination or harassment. Advocates who were members of a minoritized group (i.e., people of color, people with disabilities, and LGBTQ+ people) were significantly more likely to leave the movement as a result of discrimination than non-minoritized advocates.
Examples of such social characteristics include: race, color, ethnicity, religion, sex, gender or gender expression, sexual orientation, pregnancy or parental status, marital status, national origin, citizenship, amnesty, veteran status, political beliefs, age, ability, and genetic information.
ACE defines “harassment” as bullying, intimidation, and other behavior (whether physical, verbal, or nonverbal) that has the effect of upsetting, demeaning, humiliating, intimidating, or threatening an individual. Sexual harassment includes unwelcome sexual advances, requests for sexual favors, and other verbal or physical harassment of a sexual nature.
ACE defines “discrimination” as the unjust or prejudicial treatment of or hostility toward an individual on the basis of certain social characteristics.
ACE defines the ‘workplace’ as any place where work-related activities occur, including physical premises, meetings, conferences, training sessions, transit, social functions, and electronic communication (such as email, chat, text, phone calls, and virtual meetings).
Charity Navigator defines transparency as ‘an obligation or willingness by a charity to publish and make available critical data about the organization’.
BoardSource (2016), p. 4
For example, see Mitchell et al. (2001).
The publicly accessible version of this form can be found via ACE’s Third-Party Whistleblower Policy on our website.
Faunalytics’ Achievements
Recent Achievements
- Faunalytics released eight new original research studies and analyses, including reports on industry costs for animal products, global slaughter statistics, an international study of strategies and needs for animal advocates, and more.
- Faunalytics added more than 200 study summaries to their Research Library and introduced the Tactics in Practice resource series, which summarizes the scientific basis for various advocacy interventions. Additionally, they launched small, specialized websites in Spanish and Simplified Chinese to make research accessible to advocates around the world.
- Faunalytics directly supported 187 advocates via office hours, presented research to 52 organizations, and created dozens of new factsheets and infographics. They also held their third annual Fauna Connections research symposium.
Future Outlook
With your support, Faunalytics will publish new studies on various topics, including advocacy in Southeast Asia and low- to middle-income countries, political and legislative advocacy, and youth engagement in animal protection. They will also develop new databases for advocates and launch an advocacy-academic network in collaboration with other organizations.