We realize that we are not perfect in our work, and as is the case with any organization, we sometimes make mistakes. In the interest of fully disclosing our past activities, and showcasing how our thoughts and research have developed over time, we have composed this page. We will continue to update it as new mistakes are made.
Hiring people to work in novel positions without strong procedures in place, 2017: We created several new positions in 2017 but did not spend enough time considering the qualifications and background that were important for success in those roles. This led to an inefficient use of resources at times, and ultimately to less productivity from the team. We will plan more carefully for future positions, and will seek to hire individuals with more experience in relevant areas.
Piloting a social movements project without sufficient planning, 2014-2017: We created a social movements project with the hope that examining other movements would provide insights into how to increase the effectiveness of the animal advocacy movement. However, we did not adequately consider the scope of the project, and this led to several deficiencies and ultimately the closure of the project. More details can be found in our blog post examining the decision to cease work on this project.
Not using the results from ACE’s randomized controlled trial on leafleting in our leafleting cost-effectiveness estimate, 2014-2017: The leafleting cost-effectiveness estimate in our 2014 leafleting report was derived from our assessment of the results of the 2013 Farm Sanctuary/Humane League Labs randomized trial (FS/HLL 2013). In 2014, we also had the results of ACE’s randomized controlled trial on leafleting (ACE 2013) and could have used those results in our cost-effectiveness estimate in addition to, or instead of, the FS/HLL 2013 results. At the time of our previous leafleting report, we believed that the cost-effectiveness of leafleting could be adequately modeled without the results from ACE 2013, and nothing in our analysis seemed particularly untenable. In hindsight, we think that including the results of ACE 2013 would have led to a more accurate estimate. We are unsure whether we could or should have known to include them at the time, given the resources and information available to us in 2014.
Postponing update of leafleting report, and insufficient disclaimers on the current version, 12/2016, 1/2017: Last year we decided not to update our leafleting report in favor of prioritizing other projects. Although we always try to explain our current thinking in any new materials and evaluations that we produce, it became clear in January 2017 that there remained considerable confusion about our stance on leafleting. Though we added a small disclaimer in December 2016 noting the date of publication of the original report, we didn’t anticipate the number of people who would rely solely on the outdated leafleting report on our site. Because of this, we have now added a longer disclaimer to the leafleting report. We have also taken down the leafleting impact calculator to avoid creating more confusion. We will update the leafleting report in 2017.
Taking on too many projects, 2016-2017: We believe that we tried to do too much in 2016, and the quality of our work suffered. In particular, we scheduled far too many projects at the end of the year, when our small team was tasked with coordinating our first research symposium, completely overhauling our website, completing our annual charity recommendations, producing advocacy advice materials, and running a matching campaign. We are planning to focus more on quality moving forward, and reduce our expectation of the number of projects that we can complete concurrently.
Rushed studies, 3/2016-11/2016: We ran several small MTurk studies in 2016, but we did so while trying to complete a number of different projects concurrently. This left us with insufficient time to review and critique our methodology and subsequent analyses. We are reconsidering the circumstances under which we would conduct similar studies in the future, and we are developing a new protocol with more detailed guidelines for additional internal and external reviews. This should improve the quality of our research output in general, including any further studies we conduct.
Lack of progress on our social movements project, 1/2016-1/2017: We now realize that we did not allow enough time to focus on our social movements project. We attempted to use an intern model to produce regular case studies, but did not properly estimate the amount of time and knowledge needed for completion. As such, we currently have two case studies which are close to publication, and two other case studies which are unfinished. We will be reassessing this project in 2017, which will entail discussing how we hope to eventually synthesize data to inform animal advocacy, as well as examining the practicality of having ACE continue to work in this area.
Inefficient use of volunteer program, 1/16: We spent a long time recruiting and attempting to work with volunteers on an explainer video that would introduce the concept of effective altruism to the average animal advocate. We did this in an effort to be as efficient as possible with the funds we have, but these efforts ultimately cost our staff work hours rather than saving them. We have noticed this general trend with other volunteer projects as well, some of which are described in earlier mistake entries. We are taking two actions to resolve these issues. First, we have set aside money in our budget to fund a professional service to assist with our explainer video. Second, we have eliminated the general volunteer application on our website, and are in the process of replacing that form with a few very specific projects where we need assistance. By more clearly communicating our needs and requirements for volunteer positions, we hope to increase the frequency of successful volunteer projects.
Messaging about cost-effectiveness estimates, 12/15: In certain medium-depth reviews, we noted that an organization’s cost-effectiveness estimates fell at either the high end or the low end of the range of estimates for the other groups we reviewed at that depth. We have since come to believe this is not a helpful description, as the estimates would probably need to vary over at least 2 orders of magnitude within a single round of reviews to constitute a strong signal of real differences, and so far they have varied by only about 1 order of magnitude. We intend to stop using that description in the cost-effectiveness sections of future reviews.
Insufficient explanation about availability of review revisions, 9/15: We have been contacted by several organizations hoping that we would update their review from a previous year in light of new information about their programs. While we would love to do this for all groups that have significant new information to provide, we are only able to offer revisions to organizations that contact us on or before June 1 of the current year. This is because we operate on a tight schedule when writing reviews for new organizations, and we don’t have the resources to consider new information as it becomes available without proper planning. If we do not hear from an organization, our default is to revisit, every three years, those organizations for which we have conducted a medium-depth review or deeper. To correct any misunderstanding in this area in the future, we are continuing to write more detailed explanations for our website, and we will publish this information as it becomes available.
Unclear messaging about “considered” organizations, 8/19/15: While we make every effort to be transparent about our work, sometimes we fail to accurately convey our message. Several individuals approached ACE with feedback that “considered” organizations on our list of organizations page might be viewed poorly because they were not recommended, which was not our intention. We have therefore added a paragraph to our list of organizations page noting that some of the “considered” groups do excellent work, but that for one reason or another they did not score highly enough on our criteria to warrant a recommendation. We will also be more explicit about this in our future printed materials so that there is less chance of confusion on the issue.
Using volunteers for projects with hard deadlines, 7/15: As a small nonprofit, we often work with volunteers on a variety of projects out of necessity, as we simply don’t have the resources to complete every project with paid staff. While we have been fortunate in the past to work with some volunteers who were extremely reliable, lately we have not been as fortunate, and some research and video projects have been delayed past their original deadlines. In the future, we will assign time-sensitive tasks to staff and interns, with rare exceptions for proven volunteers, to avoid spending staff time managing a project that fails to meet a deadline.
Estimate of average good done by top charities used in print materials, 5/15: In our December 2014 site update, we missed the Impact of Donations page, which should have been updated promptly to reflect our new charity recommendations and estimates of effectiveness. As a result, when we prepared our 2015 print materials, we did not have available an estimate of the number of animals spared by a donation to our top charities that combined information about all three groups. We used the average of the estimates for the three individual organizations, because this was convenient and we had a deadline for sending materials to the printer. When we did update the Impact of Donations page using a more careful methodology, our best guess for the number of animals spared by a donation to our top charities was noticeably different from the number we had used in our print materials. We now wish we had updated the Impact of Donations page first, so that we could have used the results in our print materials. We have noted this on pages of our website leading to digital copies of our print materials, and we plan to update our materials in this order after future changes in our recommendations.
Missing deadline, 5/15: We published goals stating that we would release bimonthly case studies for our social movements project. While we have released two case studies in 2015, we missed our May deadline for one of them. This was due to our reliance on interns for this project; when one of our interns working in this area left unexpectedly, we were left with a partial case study that still needed considerable work. We intend to meet the rest of our deadlines by planning further in advance to allow for unexpected interruptions to research in this area.
Loss of Productivity from Unpublished Reviews, 11/1/14: For many reasons, we allow organizations to decide which materials from our evaluation process we will publish. Until recently, no medium-reviewed organization had declined to let us publish its review, but two groups made this decision for the December 2014 round of recommendations. These reviews take a significant amount of time to write, so we ended up expending considerable effort on reviews we were not allowed to release to the public. While the review process for these two organizations was informative for ACE staff internally, not being able to share that information publicly means the time spent on these reviews was less valuable than our other work. We have not yet found a better way to handle this situation, but we plan to reframe our review process in subsequent discussions with organizations in the hope of increasing the likelihood that we can publish our findings.
Top Interventions, 1/1/14 – 12/18/14: When we started work on evaluating interventions and charities, we felt confident that leafleting college students with information about factory farming was a strong method of helping animals. After running our own leafleting study, we were less certain of its effects, but felt that, due to its low cost and ease of engagement, it was still an especially strong method of helping animals. While we still believe this to be the case, we have decided to remove our page officially recognizing it as a top intervention until more research can be done on both leafleting and other tactics commonly used to help animals. Once we have a better understanding of these interventions, we will revisit adding a top interventions page back to our site.
Suboptimal Wording, 5/14: We released new recommendations on May 14, 2014. The process included writing reviews for many organizations, with more detailed reviews for select organizations. In these detailed reviews, we featured summary questions at the beginning, including a final question of “Why don’t we recommend X” for each charity that did not receive our top recommendation. We chose that wording in an attempt to be strong and clear in our recommendations, but we were alerted that it could be read as actively advocating against donations to the reviewed groups, which was not our intention. We have revised the question to read “Why did X not receive our top recommendation?”, communicated with all affected groups, and will continue to examine the best methods of communication in the future.
Unexpected Delays, 1/14: Despite being aware of the planning fallacy, we were optimistic that we would be able to release our leafleting study results by the end of January. However, unforeseen delays in recording and analyzing the data from some of our volunteers forced us to postpone releasing our results until the middle of March. We are grateful to the volunteers who helped us with this project, but because they were volunteers, we were unable to enforce a stricter timeline. We faced a similar challenge with our humane education study, which will be released later than intended, in April. These inaccurate estimates led people to expect results sooner than we could produce them. We will use more caution in assigning report dates for subsequent studies to allow for additional time.
Titles, 12/13: When we rebranded as ACE, we also re-examined the titles that we gave our staff. When we hired our employees, we saw them as running their own department, and thus gave them “director” titles. However, after some external consultation, we discussed changing their titles to “manager,” as this allowed for more developmental advancement within the organization based on performance. The employees agreed to this change; in the future, we will give more consideration to titles before assigning them.
Record Keeping, 4/13 – 5/14: As EAA moved from being entirely volunteer-run to an organization with paid staff, not all documents bearing on our recommendation process were retained. When we later went to redesign our site and create new content, we found that we had a limited understanding of the process used to arrive at our existing recommendations. While we had justification for our decisions, we had not documented them to the standard we hope to uphold in the future. As such, we are developing charity and intervention templates, and creating a much more focused and thorough approach to composing our list of recommended charities.
Pilot Studies, 9/13: We launched a study on leafleting and humane education without any piloting procedures, despite having many questions about how the studies would work in practice. Our lack of information led to wasteful study design. A significant number (20%) of students involved in the humane education study received emails from us that were not well-targeted, leading to a worse response rate than we could otherwise have achieved. Partner organizations and EAA volunteers leafleted at many schools, but the volunteers we relied on to conduct surveys for that study varied in dedication and ability, in one case not surveying at all. To get a good response rate, we should have committed more resources to surveying at each school, even if that meant using fewer schools. We will avoid this issue by piloting studies that have significant areas of methodological uncertainty before committing our and others’ resources at full scale.
Changing Priorities, 8/13, 9/13: We put time and effort into recruiting for a community manager position, investing many hours in interviews and planning for the role. We ultimately decided that ACE should focus on developing solid content and building our credibility, and explore community aspects at a later date. We inconvenienced staff and applicants during this process. We have since spent considerable time discussing strategy, which will help us avoid a similar mistake in the future.
Publication Error, 8/12/13: The EAA site briefly featured a page that included the email addresses of all members of our community. The page was not linked from the site and could only be found through a targeted web search; we have no indication that anyone outside the organization accessed it, and it was taken down immediately after discovery. The page was published in error by a volunteer who was working on another project on the site. To prevent this from happening again, we developed more rigorous guidelines for publishing pages on our site and restricted publishing access to staff.
Name Selection, 8/12/12: After polling a small group of people, we chose Effective Animal Activism as our name. While descriptive, it failed to accurately identify what we do (we do not engage in activism ourselves), and thus caused some public confusion. Additionally, as the organization grew and we decided to professionalize our efforts, it became clear that having “animal activism” in our name might deter the very people we were trying to reach in our attempts to move money to more effective organizations. Some segments of the public view “animal activism” as negative and radical, and we were therefore unable to appeal to those individuals. We rectified this by changing our name to Animal Charity Evaluators in December 2013.