We realize that we are not perfect in our work, and as is the case with any organization, we sometimes make mistakes. In the interest of fully disclosing our past missteps and sharing how our thoughts and processes have developed over time, we have composed this “Mistakes” page. We will add new mistakes and what we have learned from them to this page annually.
2021
Failed to use best practices when hiring, 2019–2021: ACE filled several positions over three years without holding formal internal or external hiring rounds, causing multiple problems: concerns about whether candidates were the right fit, confusion about the hiring process, and a lack of clarity on reporting structures. These concerns and uncertainties resulted in frustration, reduced output, and the resignation of two staff members during one of the busiest periods of the year for charity evaluations. To fix this mistake, we standardized hiring rounds for open staff positions and developed processes for both internal and external hiring. We will only forgo external hiring rounds if we have an internal candidate who is a good fit for the position and has successfully completed an internal hiring round.
Miscommunication across all levels of leadership, 2021: Communication issues among ACE staff leads, the Executive Director, and the board of directors occurred over a number of years and resulted in different parties having incomplete information. Other issues arose due to novel situations (e.g., high turnover during the peak of our charity evaluations work) that the board and leads had little experience handling. Regarding the latter, the board and the former Executive Director took too long to reach a consensus on how to communicate personnel issues to leads. As a result, communication broke down between the parties at critical moments. Lacking communication and context from the board, the leads escalated tensions with board members, further deteriorating relationships between both parties. The situation contributed to more staff resignations, which meant that remaining staff had increased workloads alongside the work of improving communication practices and hiring to fill vacancies during ACE’s busiest time of year.
As of the time of writing, we have instituted monthly calls among leadership, including board members, to improve communication and information transfer between the staff and board. In addition, we will provide opportunities for all team members at ACE to learn about the roles and responsibilities of each part of the organization and clarify the norms and expected frequency of communication between ACE leadership and the board. We are actively working with a consultant to identify opportunities for improved internal communication between the board and department leadership and among all staff.
ACE will also increase opportunities for connection among all staff. We have re-instituted regular all-staff meetings, encouraged more opportunities to connect outside of core project work, and will schedule an in-person team retreat as soon as enough staff members feel comfortable gathering in person given the COVID-19 situation.
Delayed strategic plan, 2021: ACE's last strategic plan, completed in May 2018, covered the period 2018–2020. While we did release a slate of prospective goals for 2021, we never completed an updated strategic plan. The board and Executive Director failed to make the strategic plan a priority and ensure its completion, allowing competing events to divert their attention. As a result, staff experienced conflicting priorities that were difficult to resolve without more guidance. The situation resulted in many difficulties: confusion around what to prioritize, uncertainty of direction, and mission creep as some staff members' personal priorities influenced their work. To resolve this, we will compose and publish an updated three-year strategic plan by the end of April 2022.
Continued challenges with the implementation of our new operating model, 2020–present: In March 2020, we implemented a new operating model based on Scrum, a framework for agile project management. In many contexts, this model has been a helpful and efficient way to keep all project stakeholders focused and connected. However, ACE ran into some issues with the Scrum framework that we did not address quickly enough and are still struggling to resolve.
The model was rolled out with insufficient planning and training, leaving some staff feeling confused about their role in projects. We did not include rest periods between sprints, which contributed to burnout and reduced efficiency. We also did not create contingency plans for staff transitions, so team members were often left managing significantly increased workloads when turnover occurred, which damaged morale.
Now that ACE has used the Scrum system for a few years, we are revisiting our operating model and working with staff to understand which parts work and which parts do not. We are in the process of deciding whether to modify our existing system or adopt a new one.
Difficulties in the implementation of our new operating model, March 2020–March 2021: In March 2020, we implemented a new operating model based on Scrum, a framework for agile project management. In general, we think this new model has been a positive change for our organization, both in terms of improving productivity and internal communication. That said, our recent culture survey results highlighted some areas for improvement.

In line with the new model, we have structured some of our work in two-week sprints that contribute to the production of a defined product (e.g., charity reviews). This has worked well for most of our research work, but not as well for many of our communications and philanthropy projects. While this isn't a problem in itself, it has made it difficult to involve non-research staff in sprints, leading to a situation in which some staff feel more siloed into their departments, despite efforts to remove formal departments at the organizational level. To address this concern, we have started planning sprints with more notice so that staff have more time to organize around them.

Another challenge we've discovered is some perceived ambiguity between the role of the Product Owner and the "Scramster" (ACE's creative way of describing Scrum's "Scrum Master," a role focused on building team velocity by removing bottlenecks and improving efficiency). In particular, there has been a lack of clarity regarding how the decision-making responsibilities of the Product Owner interact with those of the leadership staff. We are currently producing internal guidelines to delineate the responsibilities of ACE's Research Co-Directors in relation to researchers acting as Product Owners. Most significantly, we have recently appointed our former Director of Communications, Erika Alonso, to a newly formed Project Manager role whose main responsibility is to oversee the continued implementation of the operating model.
Confusion over roles and responsibilities, September 2019–present: The structure of our organization has changed significantly over the past year. In addition to implementing an agile operating model, we hired a Managing Director and moved from one director to two Co-Directors leading the research competency area. We think we could have done more to formalize the responsibilities of these roles and to clarify, both internally and externally, the Managing Director's level of authority, which was the same as that of other leadership staff. We could have better explained to staff where the Managing Director's responsibilities overlapped with other roles (e.g., those of the Executive Director and the Director of Operations). Additionally, we failed to anticipate the need to clarify exactly how the roles of the Research Co-Directors (and leadership staff more generally) fit into our agile operating model and to define decision-making responsibilities between Product Owners and leadership staff. We have since developed documentation to clarify some important differences and will test structures over the next few months to see which works best in our team's context. We have also changed certain job titles to improve external clarity about which areas of work each staff member is responsible for.
2020
Failing to allocate enough time to make improvements to our charity evaluation process, 2020 and earlier: Along with surveying staff and charities at the end of the evaluation process, in 2020 we tried to systematically log any and all suggestions for improvement during the evaluation process, rather than relying on staff's ability to generate ideas after the fact. This approach yielded a large number of improvements to consider and opened us up to making more substantial changes to our process than we had in previous years. In doing this, we realized that in the past, we had not built enough time into our process to systematically identify and implement improvements; instead, we relied on minor improvements that we mostly carried out while conducting the evaluations. This year we have allocated several months specifically to updating the process. Improvements we have identified include: (i) setting aside more time for and increasing the rigor of our charity selection and recommendation decision processes, (ii) resolving inefficiencies in our information-gathering process and reducing the burden for participating charities, (iii) improving the structure of our published reviews and ensuring the content accurately reflects all criteria and factors relevant to our recommendation decisions, (iv) setting aside more time to review research relevant to important assessments we are making, and (v) improving the presentation and accessibility of our reviews.
Difficulties resolving issues around racial equity, July 2020–present: Our discussions related to representation, equity, and inclusion (REI) in the second half of the year were tense, with team members disagreeing about how to handle tough decisions and specific concerns going unaddressed without actionable next steps. Conversations to resolve these issues have been difficult because they require a high level of vulnerability and trust among participants that, unfortunately, we may currently lack at ACE.
We’ve taken steps internally to better understand how ACE can support our staff who are Black, Indigenous, or of the Global Majority[1] through individual and all-staff meetings. Additionally, we recently appointed a Director of People Operations, who will focus, among other things, on improving internal literacy around REI topics. Together, we are working toward better integrating REI principles into our core programs as well as our internal policies.
Miscommunication regarding available funding in ACE Movement Grants, June 2020–July 2020: We unintentionally did not disburse all of the funds available for Movement Grants in 2020. Due to an internal miscommunication, we failed to use an updated balance that included the donations received for the program during the first months of the year, which resulted in the distribution of only the funds received in 2019. Additionally, during the summer round of 2020, we were unable to disburse funds to several of the grantees we had initially selected; we also changed a few of our granting decisions due to the discovery of new information. Some of the funds were accounted for in the fall round, but due to an oversight, a balance remained. To avoid this in the future, we will verify the total funds available for distribution at an explicitly defined point: the closing date of the current application round. We have updated the spreadsheet we use to record Movement Grant decisions to better track the amount of funding remaining to be disbursed.
The total remainder of 2019 funds after the 2020 granting rounds was approximately 118,500 USD. This amount will be disbursed in the 2021 round or future rounds, along with some or all of the donations received for Movement Grants in 2020, depending on the quality of the applications we receive.
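The procedural fix is essentially a bookkeeping rule: compute the distributable balance once, at the closing date of the application round, from the carried-over remainder plus all donations received by that date. Here is a minimal sketch of that rule in code; the function and all figures other than the roughly 118,500 USD remainder mentioned above are hypothetical illustrations, not our actual accounting system:

```python
from datetime import date

def available_funds(prior_remainder, donations, round_close):
    """Distributable balance at a round's closing date: the remainder
    carried over from earlier rounds plus every donation received on
    or before the close. Donations are (date, amount) pairs."""
    received = sum(amount for day, amount in donations if day <= round_close)
    return prior_remainder + received

# Hypothetical example: the ~118,500 USD remainder from 2019 funds plus
# two illustrative 2021 donations, evaluated at a March 31 round close.
donations = [(date(2021, 1, 15), 40_000), (date(2021, 3, 2), 25_000)]
total = available_funds(118_500, donations, round_close=date(2021, 3, 31))
# total == 183_500
```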
2019
AARF Round 5 application data shared outside of organization, July 2019: While processing applications for Round 5 of the Animal Advocacy Research Fund, application and assessment information for candidates was inadvertently shared with a single individual representing an organization with which ACE has a professional relationship. This was due to an error made by one of our staff members; once we became aware of the issue, we immediately corrected the document's sharing permissions. We have notified all applicants whose information was shared and drafted an action plan to reduce the likelihood of similar errors occurring in the future. As part of that plan, we will conduct an organization-wide document permissions audit and have all staff complete training on proper Google Drive usage.
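An audit like this can be partially automated. As one possible approach (a sketch, not necessarily the exact audit we will run), a short script using the Google Drive API can flag files whose sharing extends beyond the organization. It assumes the google-api-python-client library, credentials with a Drive read scope, and a hypothetical example.org domain:

```python
from googleapiclient.discovery import build

def audit_external_sharing(creds, domain="example.org"):
    """Flag Drive files with permissions extending beyond the domain.

    Catches public/"anyone" links and grants to accounts outside the
    domain; a fuller audit would also inspect domain-type permissions
    and shared drives.
    """
    service = build("drive", "v3", credentials=creds)
    flagged, page_token = [], None
    while True:
        resp = service.files().list(
            fields="nextPageToken, files(id, name, permissions)",
            pageToken=page_token,
        ).execute()
        for f in resp.get("files", []):
            for perm in f.get("permissions", []):
                email = perm.get("emailAddress") or ""
                if perm.get("type") == "anyone" or (
                    email and not email.endswith("@" + domain)
                ):
                    flagged.append((f["name"], perm.get("type"), email))
        page_token = resp.get("nextPageToken")
        if not page_token:
            return flagged
```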
Experimental Research Division, 2017–2019: Beginning in 2017, we launched an experimental research division. We started this division before we had a clear idea of what kinds of questions it would investigate, or what the relationship would be between the experimental research division and the rest of the team. Ultimately we found these questions difficult to navigate, and the result was a prolonged project prioritization and discussion process with few decisions reached. In the meantime, our experimental Researchers worked on projects that suited their skill sets and interests, but were not necessarily the most impactful. In 2019, we had some staff transitions and reabsorbed the experimental research division into our research department. Our future experimental research will focus on topics that directly impact animal advocates’ work and/or that can directly inform our own work.
Delayed announcement for Effective Animal Advocacy Fund grants, early 2019: We opened applications for our EAA Fund in December 2018 with the goal of making our first round of grants in January 2019. It is clear in hindsight that this timeline was not realistic. We received many more applications, and much more funding, than we anticipated. It took our research team longer than expected to review all of the applications and to make our grant decisions. Once decisions were made, it also took longer than expected to draft and publish the grant announcements, as various stakeholders had different ideas about how best to present the information.
2018
Lacking a robust onboarding program for new hires, 2018: We recognize that our onboarding program failed to offer sufficient opportunities for new hires to gain a deep understanding of our mission, philosophy, strategy, and thought processes. We also realized we could do more to introduce EA concepts and to make new staff members’ onboarding experiences more engaging overall. We have already made some improvements for 2019. We’ve created a more structured and engaging format for new hires’ first month working with ACE. This new system includes more relevant video/audio content, a “scavenger hunt” for new hires to familiarize themselves with ACE’s and other EA-aligned organizations’ websites, an improved “buddy system” to ensure a smooth transition into ACE’s team, and more. We’ve also created a three-month evaluation form specifically tailored to new employees. Previously, we used our yearly evaluation form for this purpose, but we think we can gain more from these early-stage evaluations by using a more specific and relevant form.
Publishing a report that could negatively impact other organizations/individuals, 2018: We published a report in 2018 that aimed to outline the greatest opportunities for and barriers to effective animal advocacy in China. Upon publishing the report, we were informed that its content could (i) be perceived as foreign interference attempting to disrupt the Chinese agricultural economy and spread a negative image of the Chinese government and/or (ii) have negative consequences for animal charities and advocates in China. Prior to publication, we had obtained approval from our external reviewers and every individual whose interview was featured in the report. However, we realize now that we were not sufficiently conscious of the extent to which cultural barriers could impede successful communication with these individuals and limit our understanding of the risks involved. We’ve since taken down the full report and made it available by request only, to a limited audience. In the future, we will be more aware of cultural differences in communication style, and we will be more conscientious when dealing with sensitive issues outside of our sphere of geographical familiarity.
Failure to archive outdated content in a timely manner, 2018: In 2018, we archived our intervention reports on corporate outreach, undercover investigations, humane education, and online ads. In general, archiving out-of-date content is a normal part of our research process, and not necessarily indicative of a mistake. However, it can be difficult to determine the best point at which to archive a report, and in this case we believe we waited too long. We’d like to thank John Halstead for raising some concerns about our older intervention reports, which prompted us to take action when we did.
Significant loss in website traffic due to Google AdWords grant suspension, 2018: Google changed the requirements and restrictions of their Ad Grants program as of January 1, 2018. While we thought we had made the changes necessary to comply with the new requirements, our AdWords grant account was suspended twice for non-compliance. In hindsight, we could have been more vigilant in checking our account for compliance prior to the rule change, and we should have placed greater priority on getting our account reinstated as soon as possible. In late 2018, our Data Analyst worked with volunteers to restructure the account, and we have not had any account suspensions since. While we expected a very significant drop in our overall web traffic as a result of the new grant restrictions, we saw an even greater drop due to losing all search advertising traffic for about 60 days over the course of 2018.
2017
Failure to adequately evaluate workplace culture in our charity evaluations, prior to 2017: Prior to 2017, our evaluations of workplace culture relied primarily on information received from designated contacts at each charity. We think that this led us to severely misjudge the workplace culture of some charities. After our 2016 evaluation season, we updated our review process. We believe that these changes have significantly improved our ability to identify workplace culture issues at the organizations we evaluate. We now contact non-leadership staff at each group we comprehensively review, inviting them to take part in confidential conversations about their experiences in the workplace and to provide their impressions of the organization’s response to incidents of harassment and discrimination.
Hiring people to work in novel positions without strong procedures in place, 2017: We created several new positions in 2017 but did not spend enough time considering the skills, background, and materials that were important for success in those roles. This led to an inefficient use of resources at times, and ultimately to less productivity from the team. We will plan future positions more carefully and seek to hire individuals with more experience in relevant areas.
Taking on too many projects, 2016–2017: We believe that we tried to do too much in 2016, and the quality of our work suffered. In particular, we scheduled far too many projects at the end of the year, when our small team was tasked with coordinating our first research symposium, completely overhauling our website, completing our annual charity recommendations, producing advocacy advice materials, and running a matching campaign. Moving forward, we plan to focus more on quality and to reduce our expectations about how many projects we can complete concurrently.
Piloting a social movements project without sufficient planning, 2014–2017: We created a social movements project with the hope that examining other movements would provide insights into how to increase the effectiveness of the animal advocacy movement. However, we did not adequately consider the scope of the project, and this led to several deficiencies and ultimately the closure of the project. More details can be found in our blog post examining the decision to cease work on this project.
Not using the results from ACE’s randomized controlled trial on leafleting in our leafleting cost-effectiveness estimate, 2014–2017: The leafleting cost-effectiveness estimate in our 2014 leafleting report was derived from our assessment of the results of the 2013 Farm Sanctuary/Humane League Labs randomized trial (FS/HLL 2013). In 2014, we also had the results of ACE’s randomized controlled trial on leafleting (ACE 2013) and could have used those results in our leafleting cost-effectiveness estimate, either in addition to or instead of the FS/HLL 2013 results. At the time of our previous leafleting report, we believed that the cost-effectiveness of leafleting could be adequately modeled without using the results from ACE 2013, and nothing in our analysis seemed particularly untenable. In hindsight, we think that including the results of ACE 2013 would have led to a more accurate estimate. We are unsure whether we could or should have known to include them at the time, given the resources and information available to us in 2014.
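Had we combined the two results, one standard approach is inverse-variance weighting, which pools independent estimates in proportion to their precision. Here is a minimal sketch with hypothetical placeholder numbers; these are not figures from FS/HLL 2013 or ACE 2013:

```python
def pool(estimates, variances):
    """Inverse-variance pooling of independent effect estimates:
    each estimate is weighted by 1/variance, so more precise
    studies count for more."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Hypothetical effect sizes (e.g., dietary-change rate per leaflet)
# and sampling variances for two studies:
effect, var = pool(estimates=[0.020, 0.005], variances=[1e-4, 4e-5])
# effect is about 0.0093: the pooled estimate sits nearer the more
# precise (lower-variance) study.
```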
Postponing update of leafleting report, and insufficient disclaimers on the current version, December 2016–January 2017: Last year we decided not to update our leafleting report in favor of prioritizing other projects. Although we always try to explain our current thinking in any new materials and evaluations that we produce, it became clear in January 2017 that there remained considerable confusion about our stance on leafleting. Though we added a small disclaimer in December 2016 noting the date of publication of the original report, we didn’t anticipate the number of people who would rely solely on the outdated leafleting report on our site. Because of this, we have now added a longer disclaimer to the leafleting report. We have also taken down the leafleting impact calculator to avoid creating more confusion. We will update the leafleting report in 2017.
Lack of progress on our social movements project, January 2016–January 2017: We now realize that we did not allow enough time to focus on our social movements project. We attempted to use an intern model to produce regular case studies, but did not properly estimate the amount of time and knowledge needed for completion. As such, we currently have two case studies which are close to publication, and two other case studies which are unfinished. We will be reassessing this project in 2017, which will entail discussing how we hope to eventually synthesize data to inform animal advocacy, as well as examining the practicality of having ACE continue to work in this area.
2016
Rushed studies, March–November 2016: We ran several small MTurk studies in 2016, but we did so while trying to complete a number of different projects concurrently. This left us with insufficient time to review and critique our methodology and subsequent analyses. We are reconsidering the circumstances under which we would conduct similar studies in the future. We are also developing a new protocol, including more detailed guidelines for additional internal and external reviews. This will improve the quality of our research output in general moving forward, which will in turn improve any further studies we conduct.
Inefficient use of volunteer program, January 2016: We spent a long time recruiting and attempting to work with volunteers on an explainer video that would introduce the concept of effective altruism to the average animal advocate. We did this in an effort to be as efficient as possible with the funds that we have, but the end result was a net loss of staff work hours. We have noticed this general trend with other volunteer projects as well, some of which are described in earlier mistake entries. We are taking two actions to resolve these issues. First, we have set aside money in our budget to fund a professional service to assist with our explainer video. Second, we have eliminated the general volunteer application on our website and are replacing that form with a few very specific projects where we need assistance. By more clearly communicating our needs and requirements for volunteer positions, we hope to increase the frequency of successful volunteer projects.
2015
Messaging about cost-effectiveness estimates, December 2015: In certain medium-depth reviews, we noted that cost-effectiveness estimates fell at either the high end or the low end of the range of estimates for other groups that we reviewed at that depth. We have since come to believe this is not a helpful description: the estimates would probably need to vary over at least two orders of magnitude in a single round of reviews to constitute a strong signal of differences, and so far they have varied by only about one order of magnitude (for example, estimates of 1 and 10 in a given unit differ by one order of magnitude, while 1 and 100 differ by two). We intend to stop using that description in the cost-effectiveness sections of future reviews.
Insufficient explanation about availability of review revisions, September 2015: We have been contacted by several organizations hoping that we would update their review from a previous year in light of new information about their programs. While we would love to do this for all groups that have significant new information to provide, we are only able to offer revisions to organizations that contact us on or before June 1 of the current year. This is because we operate on a tight schedule when writing reviews for new organizations and don’t have the resources to consider new information as it becomes available without proper planning. If we do not hear from an organization, we default to revisiting, every three years, those organizations for which we have conducted a review of medium depth or greater. To correct any misunderstanding in this area in the future, we are continuing to write more detailed explanations for our website, and we will publish this information as it becomes available.
Unclear messaging about “considered” organizations, August 2015: While we make every effort to be transparent about our work, sometimes we fail to accurately convey our message. Several individuals approached ACE with feedback that “considered” organizations on our list of organizations page might be viewed poorly because they were not recommended, which is not something we had intended. We have therefore added a paragraph to our list of organizations page noting that some of the “considered” groups do excellent work, but that for one reason or another they did not perform well enough against our criteria to warrant a recommendation. We will also be more explicit about this in our future printed materials so that there is less chance of confusion on the issue.
Using volunteers for projects with hard deadlines, July 2015: As a small nonprofit, we often work with volunteers on a variety of projects out of necessity, as we simply don’t have the staff capacity to complete every project ourselves. While we have been especially fortunate in the past to work with some volunteers who were extremely reliable, lately we have not been as fortunate. As a result, some research and video projects have been delayed past our original deadlines. In the future, we will assign time-sensitive tasks to staff and interns, with rare exceptions for proven volunteers, to avoid spending staff time managing projects that fail to meet their deadlines.
Estimate of average good done by top charities used in print materials, May 2015: In our December 2014 site update, we missed the Impact of Donations page, which should have been updated promptly to reflect our new charity recommendations and estimates of effectiveness. As a result, when we prepared our 2015 print materials, we did not have an estimate of the number of animals spared by a donation to our top charities that combined information about all three groups. We used the simple average of the estimates for the three individual organizations, because this was convenient and we had a deadline for sending materials to the printer. When we did update the Impact of Donations page using a more careful methodology, our best guess for the number of animals spared by a donation to our top charities was noticeably different from the number we had used in our print materials. We now wish we had updated the Impact of Donations page first, so that we could have used its results in our print materials. We have noted this on the pages of our website leading to digital copies of our print materials, and after future changes in our recommendations, we plan to update the Impact of Donations page before preparing new print materials.
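To illustrate why the simple average was only a rough shortcut, here is a minimal sketch with hypothetical per-charity figures (not our actual estimates): a combined estimate that weights each charity, for example by the share of donations it receives, can differ noticeably from the unweighted mean of the three numbers.

```python
# Hypothetical animals-spared-per-dollar estimates for three charities:
per_dollar = {"A": 2.0, "B": 5.0, "C": 11.0}

# The shortcut used in the 2015 print materials: a simple average.
simple_average = sum(per_dollar.values()) / len(per_dollar)  # 6.0

# One way a more careful combined estimate can differ: weighting each
# charity by a (hypothetical) share of donations it receives.
share = {"A": 0.6, "B": 0.3, "C": 0.1}
weighted = sum(per_dollar[c] * share[c] for c in per_dollar)  # 3.8
```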
Missing deadlines for social movements project, May 2015: We publicized prospective goals indicating that we would publish bimonthly case studies for our social movements project. While we have released two case studies in 2015, we missed our May deadline for one of them. This was largely due to our reliance on interns to work on this project; when one of our interns working in this area left unexpectedly, we were left with a partial case study that still needed considerable work. We intend to meet the rest of our deadlines by planning further in advance to allow for unexpected interruptions to research in this area.
2014
Top interventions, January–December 2014: When we started work on evaluating interventions and charities, we felt confident that leafleting college students with information about factory farming was a strong method of helping animals. After running our own leafleting study, we were less certain of its effects, but felt that, due to its low cost and ease of engagement, it was still an especially strong method of helping animals. While we still believe this to be the case, we have decided to remove our page officially recognizing it as a top intervention until more research can be done on both leafleting and other tactics commonly used to help animals. Once we have a better understanding of these interventions, we will revisit adding a top interventions page back to our site.
Loss of productivity from unpublished reviews, November 2014: For many reasons, we allow organizations to decide what materials we can publish from our evaluation procedure. Until recently, we had not faced the problem of a medium-reviewed organization declining to let us publish our review, but two groups made this decision for the December 2014 round of recommendations. These reviews take a significant amount of time to write, so we ended up expending considerable effort on reviews that we could not release to the public. While the review process for these two organizations was informative for ACE staff internally, not being able to share that information publicly means that our efforts on these reviews were less valuable than our other work. We have not yet found a better way to handle this situation for future reviews, but we plan to reframe our review process in subsequent discussions with organizations in the hope of increasing the likelihood that we will be able to publish our findings.
Suboptimal wording, May 2014: We released new recommendations on May 14, 2014. The process included writing reviews for many organizations, including more detailed reviews for select organizations. In these detailed reviews, we featured summary questions at the beginning, including a final question of “Why don’t we recommend X?” for each charity that did not receive our top recommendation. We used this wording in an attempt to be strong and clear in our recommendations, but we were alerted that it had the undesired effect of seeming to actively advocate against donations to the reviewed groups, which was not our intention. We have since revised the question to read “Why did X not receive our top recommendation?”, communicated with all groups affected, and will continue to examine the best methods of communication in the future.
Record keeping, April 2013–May 2014: As EAA moved from being entirely volunteer-run to an organization with paid staff, not all documents bearing on our recommendation process were retained. When we later went to redesign our site and create new content, we found that we had a limited understanding of the process used to arrive at our existing recommendations. While we had justification for our decisions, we could not present the level of documentation we hope to be associated with in the future. As such, we are developing charity and intervention templates and creating a much more focused and thorough approach to composing our list of recommended charities.
Unexpected delays, January 2014: Despite being aware of the planning fallacy, we were still optimistic that we would be able to release our leafleting study results by the end of January. However, unforeseen delays in recording and analyzing the data from some of our volunteers forced us to postpone releasing our results until the middle of March. We are grateful that the volunteers helped us with this project, but because they were volunteers, we were unable to enforce a stricter timeline. We faced a similar challenge with our humane education study, which will be released later than intended, in April. These inaccurate estimates led people to expect results sooner than we could produce them. We will be more cautious in assigning report dates for subsequent studies to allow for additional time.
2013
Job titles, December 2013: When we rebranded as ACE, we also re-examined the titles we gave our staff. When we hired our employees, we saw each of them as running their own department and thus gave them “Director” titles. However, after some external consultation, we discussed changing their titles to “Manager,” as this allowed more room for advancement within the organization based on performance. The employees agreed to this change; in the future, we will give more consideration to titles before assigning them.
Pilot studies, September 2013: We launched studies on leafleting and humane education without any piloting procedures, despite having many questions about how the studies would work in practice. Our lack of information led to wasteful study design. A significant number (20%) of the students involved in the humane education study received emails from us that were not well targeted, leading to a worse response rate than we could otherwise have achieved. Partner organizations and EAA volunteers leafleted at many schools, but the volunteers we relied on to conduct surveys for that study varied in dedication and ability; in one case, no surveying was done at all. To get a good response rate, we should have committed more resources to surveying at each school, even if that meant using fewer schools. We will avoid this issue by piloting studies that have significant areas of methodological uncertainty before committing our and others’ resources at full scale.
Changing priorities, August–September 2013: We put time and effort into recruiting for a community manager position, investing many hours in interviews and planning for the role. We ultimately decided that ACE should focus on developing solid content and building credibility, and explore community aspects at a later date. We inconvenienced staff and applicants in this process. We have since spent considerable time discussing strategy, which should help us avoid making a similar mistake again.
Publication error, August 2013: The EAA site briefly featured a page that included the email addresses of all members of our community. However, that page was not linked from the site and could only be found through a targeted web search. We have no indication that anyone outside the organization accessed the page, and it was taken down immediately after discovery. The page was published in error by a volunteer who was working on another project on the site. To prevent this from happening in the future, we developed more rigorous guidelines for publishing pages on our site and restricted publishing access to staff.
2012
Name selection, August 2012: After polling a small group of people, we chose Effective Animal Activism as our name. While descriptive, it failed to accurately identify what we do (we do not engage in activism ourselves), and thus caused some public confusion. Additionally, as the organization grew and we decided to professionalize our efforts, it became clear that having “animal activism” in our name might deter the very audiences we were trying to reach in our efforts to move money to more effective organizations. Some segments of the public view “animal activism” as negative and radical, and we were therefore unable to appeal to those individuals. We rectified this by changing our name to Animal Charity Evaluators in December 2013.
[1] For a discussion of this terminology, see Encompass (2019).