When we table at conferences, we are often asked why we don’t rate charities. Many people have one specific charity that they care about and want our opinion expressed as a letter grade or equivalent rating. For instance, they might prefer a system in which we assigned every group we consider a rating on a scale from 1 to 10, rather than our present system of rating only the groups we consider very good (our top and standout charities) and leaving the rest unrated.
We think it would be unwise to rate all charities for the following reasons:
We would need to be similarly informed about all charities we rated
We now feel we have enough information to sort charities into three groups: our top charities, our standout charities, and those that fit into neither group. For charities that are neither top nor standout, however, we often lack enough information to be confident that further consideration would not change our opinion. If we rated all charities on a scale, we would want to be just as confident about the difference between a 4 and a 5 as we currently are about the difference between a standout and a top charity (what might in another system be a 9 and a 10). Achieving that would require learning substantially more about most of the charities we have considered, which in turn would require those charities to spend considerable time interacting with us. We don’t honestly believe that would be beneficial for groups that are very unlikely to receive high ratings. Currently, if we think a group is unlikely to receive a high rating upon further review, we don’t pursue a deeper evaluation of that group, which we feel is a better use of both our time and theirs.
It creates a tier structure
Using a rating system would inevitably lead some readers to think that an organization scoring an 8 was significantly “better” than one scoring a 6. This oversimplifies our views: we take many considerations into account, and groups that are strong in some areas may be weak in others. A group that scored an 8 would surely outperform a group that scored a 6 in some respects, but might do worse in others. We could discuss these complexities in our reviews, but we think that rating all groups on an extended scale would direct too much attention to the final number.
Ratings are often misinterpreted
People have different opinions about what constitutes a 10 versus a 6 versus a 3. We could explain our rating system, but people would likely fall back on their preconceived notions about what the numbers mean, leading them to misinterpret what ACE intended. This is still a potential issue with our current system, but we hope that by using a small number of ratings, each with a descriptive name, we can ensure that most people using our recommendations understand what we mean by each of our categories.
It would negatively impact views of ACE
Undoubtedly, some people would see that their organization did not score as highly as they thought it should, or that other organizations scored higher than they think they should. This would generate more negativity about ACE’s work. That might be an acceptable consequence if such a system offered great benefits, but we don’t think the trade-off is actually worth it. We are primarily concerned with directing donations to the best organizations, which our present system already identifies clearly.
Ultimately, ACE is trying to do the maximum amount of good for animals, and we don’t think a more comprehensive rating system would help us achieve that. Do you agree, or do you think we should reconsider? Let us know in the comments.