Some Updates to our Charity Evaluation Process: 2019
We have been conducting annual charity evaluations since 2014. Throughout this time, our goal has remained the same: to find and promote the most effective animal charities. Our seven evaluation criteria have also remained broadly consistent (though we’ve reworded, reordered, and tweaked them over the years). Our process for evaluating charities, however, continues to develop each year. For instance, in 2017, we began having conversations with employees at each charity and offering charities small grants for participating in our evaluations. In 2018, we began conducting culture surveys at each charity, added new dimensions to some of our criteria, and made some logistical changes to increase our efficiency.
This year, we are making a number of changes, including:
- Publishing overall ratings of each charity on each criterion
- Increasing the number of visual aids (e.g., charts, tables, and images) in each review
- Making changes to our cost-effectiveness models
- Making our culture survey mandatory for charities receiving a recommendation
- Hiring a fact-checker
Publishing Overall Ratings of Each Charity on Each Criterion
To allow our readers to quickly form a general idea of how organizations perform on our seven evaluation criteria, this year we have included in each review an overall rating of the charity on each criterion. This decision also responds to feedback from readers who told us that, after skimming the reviews, it was not clear how charities were performing on each criterion. Each rating is a visual representation of a charity’s performance (weak, average, or strong) on a criterion, relative to the other charities under review, together with our confidence level (low, moderate, or high) in that assessment. We hope these ratings make it easier for our audience to compare charities’ performance criterion by criterion and help us better express how confident we are in our appraisal given the available evidence.
Increasing the Number of Visual Aids (e.g., Charts, Tables, and Images) in Each Review
This year, in order to make our charity evaluations more accessible to a wider audience, we have made an effort to present more information visually rather than as blocks of text. In addition to the ratings described above, we added tables summarizing charities’ main programs, key results, estimated future expenses, and our assessment of their track records. We also added a table of each charity’s human resources policies, with color marks indicating which policies they have in place, which they lack, and which are only partial (e.g., informal, unwritten, or not fully or consistently implemented). We think these changes will help our audience gather the most essential findings from our reviews more quickly and efficiently.
Making Changes to our Cost-Effectiveness Models
Since 2014, we have been creating quantitative cost-effectiveness models that compare a charity’s outcomes to its expenditures for each of its programs and attempt to estimate the number of animals spared per dollar spent (a minimal sketch of such a model follows the list of issues below). As we refined these models each year, several recurring issues emerged:
- We were only able to model short-term, direct impact. Our attempts to model medium/long-term or indirect effects of interventions were too speculative to be useful. As a result, we could not produce models at all for charities focused mostly on medium/long-term or indirect outcomes, and we often had to omit programs from the models we did produce.
- The estimates produced by the models were too broad to be useful for making recommendation decisions. Ultimately, we want each criterion to support our recommendation decisions, and we found that we often were not confident enough in the models to give them weight in those decisions.
- While we appreciate the value of using numbers to communicate our estimates and uncertainty, we found that our numerical estimates were often misinterpreted as being more certain than we intended.
- The variation in cost-effectiveness between charities depended more on which interventions a charity used than on how it implemented them. This suggests that, rather than modeling the cost-effectiveness of each charity, we would be better served by modeling the average cost-effectiveness of each intervention and incorporating that into our discussion of effectiveness in Criterion 1.
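To make this concrete, here is a minimal sketch of the kind of per-program point estimate described above. Everything in it, including the program names and figures, is hypothetical and invented for illustration; it is not one of our actual models, which were considerably more detailed.

```python
# A minimal, hypothetical sketch of a per-program cost-effectiveness
# point estimate ("animals spared per dollar"). All names and numbers
# are invented for illustration; real estimates would also need to
# represent uncertainty and indirect effects, which this omits.

def animals_spared_per_dollar(animals_spared: float, dollars_spent: float) -> float:
    """Point estimate of animals spared per dollar for a single program."""
    return animals_spared / dollars_spent

# Hypothetical programs for a single charity.
programs = {
    "corporate outreach": {"animals_spared": 1_200_000, "dollars_spent": 150_000},
    "undercover investigations": {"animals_spared": 400_000, "dollars_spent": 200_000},
}

for name, figures in programs.items():
    estimate = animals_spared_per_dollar(**figures)
    print(f"{name}: ~{estimate:.1f} animals spared per dollar")
```

A sketch like this illustrates the limitations listed above: it captures only short-term, direct outcomes and collapses wide uncertainty into a single number.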
We could not fully address these issues in a single review cycle, but we have taken significant steps toward a more useful assessment of cost-effectiveness. We have moved away from a fully quantitative model and transitioned to a qualitative approach that, for each intervention type, compares the resources used and outcomes achieved across all the charities under review. In this discussion, we have also included aspects of each charity’s specific implementation of its interventions that seem likely to have influenced its cost-effectiveness, either positively or negatively.
This approach does have limitations: focusing on qualitative comparisons can lead us to be overly confident in our assessments. As such, we have highlighted where the approach may not work, and we have continued to put limited weight on this criterion as a whole when making decisions. That said, it has provided some insight into the cost-effectiveness of all reviewed charities regardless of the timescale or directness of their work, allowing us to make comparisons we were previously unable to make. We have focused on comparisons within interventions rather than between them, both to avoid overlapping with Criterion 1 and to provide insight into how cost-effective a charity might be in implementing new programs in the future.
We welcome feedback on this approach, which can be directed to Jamie Spurgeon.
Making our Culture Survey Mandatory for Charities Receiving a Recommendation
Our evaluations of each charity’s culture have evolved every year. In 2016, we simply asked each organization’s leadership about the health of their organization’s culture. In 2017, we began reaching out to two randomly selected staff members at each charity to corroborate leadership’s claims. In 2018, we introduced culture surveys to our evaluation process, distributing them to each charity’s staff with the agreement of its leadership. In some cases, a charity’s leadership preferred to send us the results of their internal surveys instead, which we also accepted in 2018.
We found that distributing our own culture survey to each charity under evaluation gave us a much fuller picture of the charity’s culture. We also found that distributing the same culture survey to every organization was essential, since charities’ internal surveys vary widely in content, relevance, and quality.
This year, we decided to make participation in our culture survey an eligibility requirement for receiving a recommendation from ACE. Our goal is not to uncover and report every small conflict or cultural problem at the charities we evaluate; rather, we report only general trends that bear on a charity’s effectiveness. We view the distribution of our culture surveys as essential due diligence, since we seek to promote charities that will contribute to the long-term health and sustainability of the animal advocacy movement.
Watch our blog for a forthcoming post with more information about our culture survey.
Hiring a Fact-Checker
ACE places a high priority on using accurate and reliable evidence in our work. To improve our capacity to investigate empirical information more deeply, we have hired a Field Research Associate whose main role is to identify and verify the factual statements included in our research, including claims made by the charities under evaluation. We hope this additional staff member will improve ACE’s decision-making by allowing us to better verify the information reported to us.
Hi ACE,
I heard from someone who runs one of your top recommended charities that the reason they chose to send internal surveys instead of having staff complete the culture survey was that there were concerns about things being revealed in the survey that the charity wouldn’t have access to but might have mandatory reporting requirements for (for example, if sexual harassment they weren’t aware of was reported).
Is this the case, and if so, how is it being addressed in this year’s survey? Additionally, how are charities incentivized to participate in a mandatory survey in general? If I ran a large charity in the EAA space with a reputation for being EA aligned, I probably wouldn’t have staff take the survey and would drop out of the process. It seems like enough bad things happen in the animal advocacy space that the risk of losing donations because of what the survey reveals would outweigh the benefits of participating in evaluations, weighing in favor of not participating or of not having the evaluation published. And if a leader knows there are issues at the organization, it would look better not to participate at all than to pull an MFA and withdraw awkwardly from the process after seeing the evaluation, to avoid having it published.
In any case, making this mandatory seems like it could mean that the only charities that get evaluated, or whose evaluations get published, are those with good cultures. That might be a good thing, but it ends up meaning the culture criterion is weighted far more heavily than the others.
I don’t really see a good solution to this, as it might be the only way to get accurate culture information, but I’d be curious how you weighed these sorts of considerations in deciding to make it mandatory.
Hi Animal Advocate,
Thanks for the questions! Charities might have all kinds of reasons for preferring to distribute their own culture surveys rather than take ours. However, distributing our own survey is the best way we’ve found so far to get a full picture of each charity’s culture. Thus, it’s the only basis on which we feel we can responsibly recommend a charity.
As for the question about mandatory reporting: charities may be required to take certain actions based on certain kinds of information that their employees report to them. We encourage animal advocates to report any culture concerns directly to the charity they work for (when they can safely and comfortably do so), and we encourage charities to be receptive to those concerns and to act on them responsibly (and as legally required). The purpose of our survey is to provide animal advocates with a way to participate in our evaluation of their workplace. It in no way interferes with charities collecting the same information themselves or acting on it as legally required.
Finally, the incentive for charities to participate in our survey is the eligibility to receive an ACE recommendation. Charities can, of course, decline to take the survey, but that means we would not be comfortable recommending them in the same year.
We will be publishing a separate blog post soon with more details on our culture survey, so keep an eye out for that!
All the best,
Toni