This is an archived version of our General Approach to recommending charities, which was used prior to 2016.
Process for Evaluating Charities & Making Recommendations
ACE’s goal in evaluating animal charities is to find and promote the charities which work most efficiently to help animals. We want the charities we recommend to be very strong, both in terms of the programs they choose to implement and in organizational characteristics like the way they make decisions and how well they find, develop, and retain skilled employees and volunteers. We promote the strongest charities we have reviewed as our top charities, and we also recognize some excellent charities about which we are slightly less confident or which are slightly less efficient as standout charities. We do not rank or rate charities below these levels.
Our review process consists of several rounds in which we consider progressively fewer charities in progressively greater detail. We give basic consideration to a very large number of groups, conduct shallow reviews of a smaller number, and then conduct medium reviews of a small group of likely candidates for our recommendations. We experimented with conducting a deep review in 2015 of one of our top charities, The Humane League, but we are unsure if we will continue evaluating organizations with such a high level of detail in the future.
After conducting these rounds of evaluation, we update our recommendations and the reviews published on our site. All our top-recommended and standout charities have received medium reviews, but not all charities for which we have written medium reviews are in these categories.
Basic Consideration

This is the first round of our recommendation process. It is intended to generate a list of animal charities and their focus areas and to improve ACE’s overall understanding of the animal charity landscape. We need to have a sense of the types of animal charities in existence, and of how prevalent each is, in order to make strategic decisions about our research in general as well as the evaluation process in particular.
Which groups are considered?
Any group can request to be considered in this round. We also maintain an internal list of organizations to consider based on recommendations we have solicited from experts and the general public, other lists of animal charities maintained online, and groups we have encountered at conferences and through other means. We consider every group which has requested to be considered since the previous recommendation process, as well as most other groups on our list. We do not always consider groups that provide direct care to animals, in particular companion animal shelters and rescues, because we find that these groups are usually able to help relatively few animals with a given amount of resources and because there are very many of these groups, including thousands in the United States alone. We also do not consider groups if, from previous interactions with their leadership, we believe they would prefer not to be considered in our recommendation process.
What do we do to research each group during this round?
We search for and visit each group’s website to determine their general focus area and methods. We record basic information about the group for future use. Our research in this round addresses the following questions:
- Is this a currently active animal advocacy organization with a presence on the internet? If an organization has no web presence, it will be difficult for us to learn enough about them to seriously consider recommending them, and our audience will have a hard time verifying our review or even donating to the organization. On the other hand, we sometimes find references or links to defunct organizations, branches of other organizations, and personal blogs or websites. While we occasionally review branches of larger organizations (for example, the Farm Animal Protection Campaign of the HSUS) because donations can sometimes be meaningfully restricted to a particular program, we have no reason to review defunct organizations or personal blogs.
- Is the organization within our scope of evaluation? ACE evaluates charities and non-profits which work to benefit animals. We do not require organizations to be registered in any particular country or under any particular framework; we would consider evaluating a for-profit company which had room for funding by individuals and strong evidence of cost-effectiveness in helping animals. We do not evaluate organizations whose primary activity is making grants to other organizations we could or do evaluate; this excludes some but not all foundations.
- What language or languages does the organization use on their website? ACE is based in the United States, and we conduct the majority of our work in English. We look for organizations to have a version of their website available in English; this signals that we will have an easier time learning about and communicating with the organization than if they operated only in another language, and it allows our audience to read about the organization in its own words. We may consider evaluating organizations that appear very strong in other ways but do not have an English website, provided the organization and relevant ACE staff share a common language. We do not currently have the resources to conduct fully translated or interpreted interactions with any organization.
- What does the organization seek to accomplish, and what general tactics do they use toward this goal? These are big questions, and at this stage we only try to understand them in broad terms. For example, we expect to be able to distinguish among an organization which provides sanctuary for farmed animals, one which provides sanctuary for big cats, and one which advocates for farmed animals by lobbying legislators. We do not necessarily expect to be able to distinguish more specific divisions of resources, such as between an organization which has a major program advocating for lab animals and a side program advocating for farmed animals and an organization which has a major program advocating for farmed animals and a side program advocating for lab animals.
- Does the organization advocate or participate in violence or the destruction of property? ACE does not endorse violence or property destruction as a means to help animals. We make an exception for very small amounts of property damage in support of a larger project, such as when a lock is broken in order to take footage of a farming operation.
What do we publish as a result of this round?
We list organizations we considered in this round on our website with icons denoting their focus areas and links to their websites. We do not list organizations that did not have websites, websites that did not correspond to active animal advocacy organizations, or organizations that advocate violence. We also do not list organizations whose websites are not available in English or another language we can read, and which we have decided not to investigate further for that reason, because it is not fair to say that we have compared these organizations to the other organizations we have reviewed. We err on the side of inclusion for organizations which seem to be genuine but may be defunct.
If we have considered an organization at only this level and it appears to be within our scope of evaluation, it is listed as “Considered” on our complete list of organizations.
Shallow Review

This is the second round of our evaluation process. It is intended to identify those charities that engage in promising activities and may have room for more funding.
Which groups are considered?
At the start of the recommendation process, we set a target number of groups to investigate in this round, based on our available resources. We select organizations for this round from those considered in the previous round and in previous recommendation cycles, using the following guides:
- Focus area. The area a charity focuses on has a high expected impact on their cost-efficiency with respect to ACE’s primary metrics of lives spared per dollar spent and suffering averted per dollar spent. Accordingly, this is usually the main factor we use in deciding whether or not to include an organization in our round of shallow reviews. We prioritize reviewing groups working in areas where large numbers of animals are involved and which receive relatively little attention and funding. In practice, this means that we review many groups which focus on helping farmed animals, because there are so many farmed animals compared to other groups of animals that animal charities typically focus on. We also see charities that focus on anti-speciesism generally as potentially very effective, though in practice such charities often have a large focus on helping farmed animals for reasons similar to our own. Finally, we are interested in reviewing charities which focus on reducing animal suffering in the wild, but have found few such organizations (as opposed to charities focused on species or habitat preservation, which are plentiful but often focus on specific species at the expense of others, or on caring for individual animals in rehabilitation centers).
- Methods. The general methods a charity uses also have a significant impact on its cost-effectiveness. In particular, organizations which rescue, rehabilitate, or provide sanctuary for individual animals are likely to spend much more per animal helped than organizations which focus on creating widespread change through more leveraged means. Accordingly, we prioritize reviewing groups which focus on creating change through institutions such as governments and corporations, public outreach, and technological advancement. We are less likely to review groups which devote a large proportion of their resources to caring for individual animals, which is relatively costly compared to educational efforts even when done as efficiently as possible.
- Ease or difficulty of evaluation. Because of our limited resources, we sometimes find charities that we would like to review based on their focus and methods, but that we are not yet able to devote sufficient resources to review fairly. This can be the case for charities which operate only in languages other than English, groups whose websites are particularly uninformative or appear outdated, or organizations whose methods seem promising but unusual and would require significant additional time and research to understand. We have also used other factors under this general heading: for instance, for the round of evaluations which concluded in May 2014, we did not evaluate groups that did not work in the United States, because we did not have the resources to fairly evaluate such groups at that time. We now do evaluate some such groups. Similarly, we would expect to evaluate in the future some groups that we have so far delayed evaluating due to other factors that make evaluation difficult.
- General likelihood of receiving a recommendation. We also consider other factors that affect whether a group is likely to receive a recommendation from us as a result of our review, if we know about them. For instance, because we do not condone violence as a means of helping animals, we are not likely to review a group which advocates violence. If we have previously written a shallow review for an organization, we are not likely to write another unless either some of our thinking on relevant issues has changed or we have reason to believe that the organization has changed in a relevant way. We are also less likely to review local groups than national or international groups; while a local group could be highly effective and we could recommend them, we think that efficiencies of scale function to make national and international groups slightly more effective on average and therefore prioritize reviewing them.
We do not have a formal process which we always follow when selecting organizations to include in a round of shallow reviews. Typically we start from the list of organizations which have been given basic consideration at any time during our history, and compose a shorter list using criteria of focus area and method. We then restrict the list further using other criteria until we are in range of our target number. Typically we apply slightly different criteria in each iteration of our process in order to evaluate as many new and promising groups as possible; for example, in one round (May 2014) we did not evaluate any groups working mostly outside the United States, but did evaluate local groups within the United States. In the next round (December 2014) we evaluated groups working in any country, but evaluated only national or international groups. We revisit each organization we have previously reviewed at least every third year, to account for changes in the organization over time.
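To illustrate the "lives spared per dollar spent" metric mentioned above, here is a minimal sketch of the underlying arithmetic. All program names and figures below are invented for illustration; they are not ACE's actual estimates, and ACE's real calculations involve many more inputs and uncertainty ranges.

```python
# Hypothetical sketch of a "lives spared per dollar" comparison.
# Every number and program name here is made up for illustration.

def lives_spared_per_dollar(animals_spared_estimate, dollars_spent):
    """Simple ratio metric: estimated animals spared per dollar spent."""
    return animals_spared_estimate / dollars_spent

# Invented example programs with made-up estimates, chosen to show why
# leveraged advocacy can outscore direct care on this metric.
programs = {
    "corporate_outreach": {"animals_spared": 250_000, "cost": 100_000},
    "sanctuary_care": {"animals_spared": 400, "cost": 500_000},
}

for name, p in programs.items():
    ratio = lives_spared_per_dollar(p["animals_spared"], p["cost"])
    print(f"{name}: {ratio:.4f} animals spared per dollar")
```

This toy comparison mirrors the reasoning in the bullets above: a program affecting very large numbers of animals at modest cost scores far higher on this metric than individual care, even when the latter is run efficiently.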
What do we do to research each group during this round?
Multiple ACE staff members visit each group’s website, spending longer than during our basic consideration round. Our website visits are guided by a worksheet based on our organization evaluation criteria. Each person involved in the evaluation process is assigned at least one criterion to research for every organization in the shallow review round, and visits the sections of the website needed to fill out the corresponding section of the worksheet, as well as to gain a general sense of the organization.
We also look up basic financial information regarding each group, such as revenues, expenditures, and assets for recent years. This information is not always available on organizations’ websites, but registered charities and non-profits in some countries have to complete standardized forms which are available either through a government agency or through other online services. If we cannot find this information for a particular organization, we contact the organization to request it before writing a shallow review, because we find that knowing the rough size of an organization’s budget and reserves is crucial to our ability to write a fair report.
The exact worksheet we use changes from round to round as we learn more about the types of information we can reasonably expect to be available on organizational websites and online in general. Here is a version we used in one round. Once all information has been collected, staff members involved in the recommendation process read over each other’s notes and we discuss our overall opinions of the groups we have reviewed.
What do we publish as a result of this round?
For most groups, we write a brief review of the organization as a result of the shallow review, consisting of a one paragraph summary of their activities followed by 2-4 paragraphs describing what we found to be the most salient features of the group, both positive and negative. We send this review to the organization for approval, often triggering small amounts of additional research as organization representatives identify mistakes we have made or provide additional information about their organization. We don’t publish supplementary materials along with our shallow reviews, so we generally make updates to our reviews at this stage only if we believe they are reasonable based on publicly available information. The most common changes we make are changes in the emphasis of the summary of activities (for instance, if a group engages in many activities and we have listed some of them, they may request we change which activities are listed) and wording changes which we do not believe substantially alter the meaning of our review.
If the organization concerned agrees, we publish the shallow review (with any agreed-upon revisions) on our website and list the organization as “Reviewed.” If the organization does not reply to our contact after repeated attempts, requests that we not publish the review, or at any point stops communicating with ACE without having agreed to the review being published, we list the organization on our site as “Considered*” but do not publish the review. Occasionally, a group specifically requests not to be listed on our site, and in that case we do not list them. We note at the top of our list of organizations the number of such groups.
After researching the organizations involved in this round, but before writing the reviews, we select a small number of organizations to be part of the medium review round. We do not write shallow reviews for these organizations unless they choose not to participate in the medium review process.
Medium Review

This is the third round of our evaluation process. It is intended to allow careful and detailed reflection on the most promising groups so that we can select top recommendations and standout charities.
Which groups are considered?
At the start of the recommendation process, we set a target number of groups to investigate in this round, based on our available resources. We always include the top-recommended groups from the previous review process. We don’t always include standout organizations, since this could lead to difficulty finding time to evaluate new organizations as the number of standout organizations grows. We include other groups we have previously included in medium review rounds, or that we researched during the shallow review round, based on how likely we think we would be to recommend them. We revisit each organization we have previously reviewed at least every third year, to account for changes in the organization over time.
At this point we are dealing with a relatively small number of groups and our decisions about which groups to include in the medium review round are highly specific to the individual organizations, many of which we already know a great deal about (particularly if they have previously participated in a medium review). Some of the factors we consider are:
- How cost-effective do we expect the organization’s programs to be? Every group is researched at the shallow review level before being researched at the medium review level, so we have some idea what their programs are and how efficiently they are conducted. We only include charities in the medium review round if we think there is a substantial chance their programs have cost-effectiveness similar to those of our current top-recommended charities.
- Does the organization seem to collect and use appropriate evidence to guide its programs? Because animal advocacy and related fields have not been studied extensively and it is difficult to understand what works best, it is particularly valuable for advocates to carefully and thoughtfully collect data that allows them to assess which of their programs are most effective. While a focus on evidence and data collection is not a requirement, signs that an organization takes an active approach to monitoring its own programs make us more likely to include that organization in a round of medium reviews.
- How much would we learn from conducting the review? We will learn more from reviewing an organization the first time than from reviewing an organization for the second time, unless the repeat organization has undergone serious changes recently or we have changed how we think about something relevant to their work. We will also learn more from reviewing an organization whose programs are very different from the programs of other groups we have included in medium review rounds than from reviewing one whose programs are very similar to programs we have already investigated in detail. A review in which we learn a lot is better for us than one in which we learn less, and it is better for the organization concerned because there is a larger chance that something we learn will cause us to believe the charity in question is exceptionally effective and deserves a top recommendation.
We ask charities to participate actively in our medium review process, so we also use these factors to select some backup groups in case one of the groups we want to include in this round does not wish to participate.
What do we do to research each group during this round?
Once we have decided we would like to conduct a medium review of an organization, we contact that organization and set up a time to talk with a high-level staff member of the organization. During the conversation, we explain our review process in more detail and ask questions about the organization. We have several questions addressing our organization evaluation criteria which we ask each group; we also ask questions tailored to the individual group that we prepare in advance or that come up during the conversation. After the conversation, we request an approximate programmatic budget from each organization (detailing the programs funds are directed to, rather than accounting categories such as salaries or rent) and any other materials the organization has to supplement their responses to our questions from the conversation.
We write summaries of our conversations, read the materials submitted to us and other materials published by or about the organization we are reviewing, and discuss each organization on several occasions. Sometimes we return to our organizational contact with more questions or to request to have another conversation with someone else in the organization. Often we write most of our review of an organization before deciding whether to recognize it as a top-recommended or standout charity or as neither.
For organizations we have previously conducted medium reviews for, the process is very similar to the process for new organizations. The questions we ask in our conversations tend to be more closely tailored to the organization, and if the previous review was recent, the organization may not update certain information, such as their budget.
What do we publish as a result of this round?
We write a detailed review for each new organization that participates in our medium review round, and update our previous reviews for organizations we have already reviewed at this level. The review includes a general summary section followed by sections explaining how well we think the organization fits each of our organization evaluation criteria. Because our medium review round uses information that may not already be publicly available, we also write summaries of the conversations we had with organization representatives and produce spreadsheet versions of our cost-effectiveness calculations. We send these, along with our review and any other supporting materials provided by the organization that we would like to publish, to our contact at the organization for approval prior to publication.
Because these reviews rely on information that may be confidential, we sometimes make substantive changes to our reviews as a result of feedback from the organization concerned, in order to protect private information. We also make changes similar to those we make to shallow reviews, such as correcting errors or altering wording or emphasis without affecting the substance of the review. Ultimately, because the conversation summary focuses on the views of the person we spoke with, it usually represents the organization’s views, while our review represents our own.
If the organization concerned agrees, we publish the medium review and approved supporting materials (with any agreed-upon revisions) on our website and list the organization as “Reviewed”. If the organization does not agree to the review being published, we list the organization on our site as “Considered*” and don’t publish the review or supporting materials (unless the supporting materials are approved separately).
Deep Review

Note: this section is taken from our description of our 2015 review process and currently pertains only to that round of review. We are still considering whether to continue conducting deep reviews in the future.
We conducted our first deep review in 2015, as a trial of the process. Our intention was to see how much time it would take, whether we would learn information potentially relevant to our recommendations that we could not learn in a more efficient way, and whether the resulting review would be more informative and persuasive for donors than a medium review.
Selecting an organization to review
We chose to conduct this review on The Humane League, because:
- They were one of our top-recommended groups in 2014, so they had a strong chance of being chosen for a top recommendation in 2015. We like to have detailed reviews of as many groups as possible, but we think it’s most important to provide detailed reviews of our top-recommended groups and other groups near the top of our list, since those are the reviews we expect to be read most frequently, and since this best helps explain our selection process for readers.
- We had conducted two medium reviews of THL and had several other conversations and interactions with them. This meant we already knew a lot of the things about them that we would expect to learn about any group we worked with over time on other projects. If we learned new things during the deep review, we could be relatively sure they were things we would only learn through adding new steps to our review process.
- They had been very open and responsive during previous review processes and collaborations in general. We expected to ask a lot of the groups we worked with during the deep review, so we wanted to choose a group that would be able and willing to spend extra time with us on the project.
Conducting the review
We created a plan for the review, including a list of people and groups of people we would like to talk to. In June, we discussed the idea of conducting the review with David Coman-Hidy, THL’s Executive Director, and asked him to put us in touch with as many of the people on the list as possible. Over the next several months, he helped us speak with most of the people we wanted to reach, including everyone on the list who actually worked for THL. We also conducted four site visits at various THL offices and events. Our last interviews took place at the beginning of October.
We wrote conversation summaries and notes on site visits throughout the process. We sent conversation summaries first to our conversation partner for approval, and then to David to approve on behalf of THL. Site visit notes we sent only to David, unless they also covered a conversation in detail, in which case we also sent them to the parties involved in the conversation. We talked to some people who did not work for THL, including donors, teachers whose classes participate in THL’s humane education program, and collaborators from other advocacy organizations; we did not seek to publish full summaries of all of these conversations, in some cases preferring to ask less of our conversation partners’ time or to better protect their confidentiality.

We wrote and published a review of THL that contained all the sections we typically include in a medium review, as well as additional sections covering criticisms of the organization and the methodology of the review. We included THL in our recommendation decision process along with the groups on which we had conducted medium reviews.

Making Recommendations

After the research on the medium and deep reviews was finished, but before the writing was finished, our Executive Director, Research Manager, and Research Associate each individually decided whether each of the charities we had conducted a medium or deep review on should be a top recommendation, a standout charity, or neither, or noted that they were uncertain. We then compared lists and discussed each group. In some cases we were very certain and agreed with each other; in others we disagreed, or some of us were not sure what status was appropriate for a particular group. After an extensive conversation on the subject, we reached what turned out to be our final decision about each group. However, we continued to talk over the next few days to make sure we were all comfortable with the decisions, because our individual opinions had varied substantially and because we had selected a larger number of standout organizations than some of us expected. We are limited in how much detail we can provide about this decision process, since most of it had to do with specific aspects of individual organizations. We knew how we planned to categorize each group before we reached out to any organization about publication of its review.
After this round, we had 3 top charities, the same top charities we had at the start of the round. We also had 9 standout organizations, including the 4 we had at the start of the round. We had decided during the course of the previous round that we did not want to place a cap on the number of standout charities; this number could continue to grow in following rounds. We wanted the number of top charities to remain small, so that our recommendations provide a clear call to action. We don’t expect the number of top charities to grow much in the future.
We notify charities whether they will be our new top-recommended or standout charities, or neither, when we ask them to approve the materials we want to publish as a result of their participation in our round of medium reviews. We ask top-recommended charities to work with us to set up a system for tracking donations directed to them through ACE’s review so that we can understand the impact of our recommendations. This usually means either adding a check box on their donate page where donors can acknowledge ACE’s influence on their donation, or tracking unique URLs resulting from ACE links to the organization’s website. We also publish shortened versions of our reviews for top-recommended charities on our site, linking to the longer versions. This gives visitors to our site a brief, clear summary of why we have recommended an organization and lets them choose whether to proceed to our full evaluation, so that the amount of information presented at one time is not overwhelming.