We’ve written elsewhere about our general and December 2014 review processes, with the goal of helping readers understand what we did to arrive at our recommendations. Here, I’d like to discuss in greater detail some of the surprises we encountered during our December 2014 organization review process.
It was harder than we expected to obtain basic financial information about organizations outside the United States – unless they were in the United Kingdom.
Most non-profit organizations in the US are required to file tax documents, and those documents must be made publicly available by both the non-profit and the IRS. Several organizations, including Charity Navigator, GuideStar, and Foundation Center, offer databases that make it easier for members of the public to obtain these documents. Before beginning this round of evaluations, we’d found the vast majority of the financial information we needed using these services. Occasionally we had trouble getting financial information for an organization, but that happened only when the organization was very young and hadn’t yet been required to file tax documents.
We expected more difficulty in obtaining financial information from groups working in other countries, primarily because we anticipated having to learn to interpret documents prepared under other governments’ requirements, sometimes in languages other than English. We also anticipated that many charities in other countries might have no independent sources compiling their financial information, making their documents more difficult to access.
We did not realize that in many cases the documents would be either nonexistent or not publicly available, but this is what we found. Eight of the 13 organizations we investigated that were based outside the US or UK (where registered charities’ financial information is available from the Charity Commission’s website) did not have any financial documentation publicly available online. In several cases, these organizations were willing to provide us with financial statements upon request; these were sometimes internal accounting summaries for previous years, and sometimes compiled specifically for us. It was clear that organizations in some countries simply are not normally expected to make financial data public. This meant that the early parts of our evaluation process presented a greater burden for some groups than others, since some had to specifically provide information that we had expected to find online. Now that we know more about which groups’ information is unlikely to be available online, we can start looking for that information earlier, giving groups more time to respond to our requests.
More organizations than we expected allowed us to publish shallow reviews.
In May, we published only 27% of the shallow reviews we’d written during our evaluation process; most organizations we contacted didn’t want our reviews published, didn’t reply to us at all, or even preferred that we not mention them on our site.
Based on this experience, we expected to be able to publish only a small proportion of the new shallow reviews we wrote in December. In fact, we were able to publish 65% of the shallow reviews we wrote for this round of evaluations. In part, this was because we never wrote reviews for some of the groups that did not want to engage with us at all (those who did not reply to our contact emails or said they would prefer we not review them). Because we (unexpectedly) could not find financial information for some groups, we had to write to those groups and obtain that information before we could write reviews for them. In some cases these organizations told us they didn’t want us to write a review, or simply never replied to our attempts to contact them. Even including these groups, however, we were able to publish shallow reviews for 46% of the groups we wanted to review at that level, a big increase compared to May.
There are many possible reasons for the increase in publications, including:
- Organizations were more likely to have heard of us before we contacted them.
- Our website now looks more authoritative than it did before last May.
- Organizations were able to view other shallow reviews to see how what we’d written about them compared with our other reviews.
- We contacted a different set of organizations, possibly ones which were more interested in being reviewed on our site.
- Our reviews were more positive, either because we did a better job of selecting organizations that matched our criteria for effectiveness or because we became more lenient over time.
Subjectively, it felt like the difference was mostly due to ACE seeming more authoritative and to our having selected organizations that better matched our criteria for effectiveness. Organizations seemed generally less skeptical of our methods and motivations and more willing to work with us to arrive at review language that we could agree on. We did write some reviews that, based on our experience in May, we did not expect organizations to approve for publication, and in some cases these reviews did get published. We are hopeful that future results will be more like December’s than like May’s, because ACE’s credibility with other organizations will continue to improve and we will continue to choose well in deciding which organizations to review.
Fewer organizations than we expected allowed us to publish medium reviews.
In May, we had been able to publish every medium review that we wrote. In December, of the six new organizations we selected for medium review, two declined to have their reviews published at all, and extensive conversations with a third resulted in our publishing a review that contained less information than we had hoped.
We can’t talk in detail about these individual situations, because we appreciate these groups’ participation in our review process and don’t want to provide reasons for other groups not to engage with us. We did learn a lot from reviewing these groups, but of course we were disappointed not to be able to publish everything that we had learned.
We think it’s likely that the major reason we were able to publish fewer of the medium reviews this time is that in the fall we investigated a wider range of groups at this level than we had in the spring. In particular, most of the groups we evaluated early in the year engaged in a mix of activities that prominently included promoting vegan, vegetarian, or meat-reduction diets at the individual level, and possibly included a limited range of other efforts such as corporate and legislative outreach. The groups we evaluated in the process ending in December engaged in a wider variety of activities, some of which were more speculative or harder for us to compare to what other groups were doing. In particular, some groups engaged mainly in corporate outreach designed to improve animal welfare, which complicated our attempts to estimate their impact in terms comparable to those we used for other groups (using equivalents for animals spared rather than animals helped).
As we plan for future rounds of evaluations, we’ll be actively balancing concerns about producing publishable material through our medium-depth evaluations with our commitment to evaluating the organizations we find most promising. In particular we’re aware that organizations doing unusual types of work that we don’t know much about may be very good candidates for our recommendations (if we find that their work is exceptionally effective), or may be difficult to publish materials about (if we find that their work is not particularly effective or they don’t feel we’ve adequately understood what they do). We’ll likely retain our basic model of requesting organizations’ participation in our process and allowing them final control over whether we publish evaluations of them, because this balances our interest in choosing what organizations we research with those organizations’ interest in protecting their reputations. However, we may start asking for clearer indications that organizations understand our general process and the types of outputs it produces, now that we have many examples of completed evaluations on our site. We might also request pre-commitments to the publication of certain types of materials, such as edited conversation notes, especially if we continue to have problems publishing the results of medium reviews in the future.