Responsible AI Usage Policy
This policy will be reviewed at least every six months. Last updated: December 2, 2025.
Overarching AI Usage Principles
While AI usage has the potential to greatly improve productivity and effectiveness, irresponsible usage carries significant risks. To that end, Animal Charity Evaluators (ACE) abides by the following principles when using AI tools:
- Exercise caution. We approach AI usage with caution, recognizing that AI is rapidly evolving and often poorly understood.
- Prioritize security. We prioritize the protection of sensitive information in all AI usage.
- Promote transparency. We are open with our audiences about how we use AI for content generation.
- Build understanding. We continually develop our understanding of AI, including the range of tools available, what they can achieve, and how they can be used most safely and effectively. To support this, ACE encourages staff to use their professional development budget for training on safe and effective AI use. More broadly, we recognize that AI has significant implications for nonhuman animals beyond the use of AI tools by animal advocacy organizations, and we monitor developments in this area to inform our charity evaluations and grantmaking.
- Use AI to help more animals, not cut staff costs. We use AI tools to increase and improve our output (where safe and possible), not to cut staff costs or reduce paid positions. In the face of increasing AI capabilities, ACE will not aim to do the same with fewer people or reduced hours and pay, but rather to accomplish more with the same or greater human capacity, supported by AI. If AI seems likely to substantially increase the productivity of certain employees’ roles, this will be seen as an opportunity for those employees to achieve more ambitious goals in that role and/or take on more projects across the organization, with appropriate cross-training, capacity support, and recognition of added responsibility. To support effective AI use, we will share use cases and best practices within our organization and with fellow advocates, while acknowledging that AI tools may not always increase productivity or be equally useful across teams.
Specific Staff Guidance on AI Usage
Large language models
Large language models (LLMs), such as ChatGPT and Claude, are trained on large datasets that enable them to process human language and generate responses. They can serve as helpful virtual assistants for specific tasks but are also prone to errors, biases, and other limitations. When using LLMs, follow ACE’s Overarching AI Usage Principles and abide by the specific guidelines below.
- Explore how you can use LLMs most effectively in your work and share use cases with others. Suitable use cases could include generating ideas for blog posts, creating first drafts of internal documents, suggesting formulas for spreadsheets, and assisting with logistical planning.
- Make sure that any potentially sensitive information (i.e., information that should not be made public) is not used to train the models by opting out of model training in the LLM’s settings.
- ‘Sensitive information’ in this context includes, but is not limited to:
- Personally identifiable information: This includes names, addresses, phone numbers, emails, or any other data that could be used to identify an individual.
- Information about ACE that is not intended for public disclosure: This could include certain financial data, strategic plans, and information about internal processes.
- Third-party organizational information that is not confirmed to be public knowledge: Third-party organizations include, but are not limited to, charities we evaluate or that apply to be evaluated, organizations that apply for Movement Grants, and external consultants.
- Avoid sharing any strictly confidential material with LLMs. There is always a small chance of data leaks, so do not provide an AI tool with any information that could be disastrous if published. ‘Strictly confidential information’ in this context includes, but is not limited to:
- Passwords or other security credentials.
- Information shared under a non-disclosure agreement (NDA) or marked confidential by third-party organizations.
- Confidential information about organizations’ strategies that could significantly advantage their opponents if revealed online.
- Do not use AI-generated content verbatim in external content. While AI tools can help generate ideas, they are not currently suitable for drafting high-quality content. Their output is typically too wordy, high-level, and repetitive to meet the specifications of ACE’s Style Guide. LLMs are also trained on datasets that include copyrighted materials and can therefore generate content that would be classified as plagiarism or violate intellectual property rights. To avoid this, heavily paraphrase any written content produced by LLMs, ensuring that the end result is substantially different from the original. See the following example of sufficient and insufficient paraphrasing:
- Original text: LLMs are trained on datasets that include copyrighted materials and can therefore generate content that would be classified as plagiarism or violate intellectual property rights.
- Insufficient paraphrasing: LLMs are trained on datasets that include copyrighted materials. Therefore, they can produce content that would be considered plagiarism or a violation of intellectual property rights.
- Sufficient paraphrasing: LLM training datasets may contain copyrighted material, which could lead to generating content that constitutes plagiarism or a violation of intellectual property rights.
- Verify all factual information generated by LLMs. LLMs can generate erroneous output and should not be relied on as definitive sources of evidence. To verify information, cross-check it with reliable sources (such as the ones listed here) and/or subject matter experts.
- Check for bias. LLMs are trained on biased datasets. As with any source, check AI-generated content for potential biases. The Google article Fairness: Types of Bias provides some examples of common biases to watch out for and how these might manifest in LLM-generated output.
Image generators
AI image generators, such as Midjourney and Gemini 3’s ‘Nano Banana’, can generate images—including photorealistic ones—based on a simple text or image prompt. When using AI image generators, follow ACE’s Overarching AI Usage Principles and abide by the specific guidelines below.
- Avoid using photorealistic AI-generated images, and never use photorealistic AI-generated images of factory farming or animal suffering. Using them could be seen as misleading and dishonest, with negative implications for ACE and the broader animal advocacy movement. For example, using AI-generated images to depict the suffering of industrially farmed animals could cause viewers to question the credibility of real-life undercover footage of conditions in factory farms. Using such images is also generally unnecessary, given the wealth of real-life photos available in the public domain. When using public-domain images, prioritize ethically aligned sources such as We Animals Media. There may be exceptional cases where a specific image of animal farming or suffering is needed, but where using a real photo risks directly or indirectly supporting animal exploitation (for example, if purchasing the photo would provide payment to the intensive animal agriculture industry, or if the photo was otherwise sourced through unethical means). In these cases, consult the Communications Manager for advice on whether an AI-generated image is justified, and ensure that any such image is a) clearly labeled as AI-generated, and b) sufficiently stylized that it could not be mistaken for a real photograph.
- Make clear when you use AI-generated images. To ensure clarity and transparency, any AI-generated images that you publish must be clearly identified as such. This could be a simple image caption of the form ‘This image was created using [AI / specific AI tool name]’.
- Carefully check any AI-generated images before publishing. AI-generated images can contain strange, often subtle, idiosyncrasies that others may find off-putting. This includes the frequent incorporation of misspelled, irrelevant, or nonsensical text in the generated image.
AI connectors
- A connector is any integration that grants an AI tool permission to read, copy, cache, analyze, or act on data from a third-party account or storage location, such as Google Drive or Gmail.
- By default, staff must not connect any AI tools to ACE accounts or storage locations. Specific connectors will be allowed for certain approved data once they are confirmed to be safe and data security best practices have been put in place.
AI note-takers
- Prioritize using AI note-takers in meetings so that all attendees can participate fully rather than taking notes. Default to using the built-in Google Meet transcription tool.
- In meetings with external attendees, ask at the start of the meeting for permission to use an AI note-taker. This is important because in some jurisdictions, recording communications requires the consent of all parties. This is not necessary for internal meetings; instead, all ACE staff are asked to sign a waiver consenting to meetings being recorded.
- Respect meeting participants’ privacy. Do not share meeting recordings, transcripts, or unedited summaries without the permission of all meeting participants.
AI tools for recruitment
ACE is committed to fair, inclusive hiring practices. Given the risk of AI perpetuating existing biases, follow the guidelines below when recruiting for positions at ACE.
- Be careful using AI to assess applications or screen candidates. AI tools may unintentionally favor certain demographics or backgrounds, leading to biased assessments of candidates’ resumes and written applications, or produce broadly unreliable results. While humans are also subject to biases and misjudgment, human review currently seems more likely to ensure a comprehensive and fair evaluation of each candidate’s suitability for the role. However, AI may be appropriate for narrow, objective tasks that do not rely on subjective judgment, such as checking factual yes/no responses or assessing whether applicants have identified deliberate errors in a sample text. These uses can help streamline early-stage screening, provided they are transparently designed and regularly reviewed for fairness and accuracy.
- Be cautious when using AI to identify the most suitable job listing sites. While AI can provide helpful ideas for where best to post open roles, it has been trained on potentially biased historical hiring data and can therefore reinforce existing biases by disproportionately targeting specific groups.
Assessment of AI-generated applications
We recognize that people can find it helpful to use AI tools when drafting applications for our Movement Grants, Charity Evaluations, and job positions. This might be particularly useful for people who do not speak English as their first language. As such, we do not automatically penalize applications that we suspect to have been written with the support of AI. However, when publishing calls for applications, make clear that:
- Applicants must not rely solely on AI tools. Their applications should reflect their own ideas.
- Applicants should be aware of the limitations of AI. In our experience, clearly AI-generated applications tend to perform worse on detail, relevance, and accuracy, meaning they are significantly less likely to be successful. Where relevant, make this clear to applicants.
In addition, follow the guidelines below when assessing applications that you suspect to be partly or wholly AI-generated.
- Test the application questions on AI tools in advance, where safe to do so. This can be helpful to get a sense for the typical indicators of AI-generated content. However, be aware of the limits of such an exercise, especially as AI tools become increasingly sophisticated and tailored to the individual user.
- As a rule, use the same assessment criteria as for any other application. Particularly for Movement Grants and Charity Evaluations, we are not assessing applicants based on their English language skills. We should therefore not penalize applicants whose ideas have plenty of substance but who used AI tools to help present those ideas in a clear and structured way.
- Note topics for follow-up discussion. If you doubt the applicant’s understanding of the content of their own application, note the particular areas that you want to discuss with them in more detail, either in writing or over a call.
- Double-check verifiable information. While this is important for any application, it is particularly relevant for AI-generated content, which is especially prone to fabrications and erroneous information.
Policy Updates
The ACE Operations team will review and update this policy at least every six months, with feedback from all ACE staff.
Please contact charlie.messinger@animalcharityevaluators.org if you have any comments or questions about this policy.