Responsible AI Usage Policy
Overarching AI Usage Principles
While artificial intelligence (AI) has the potential to greatly improve productivity and effectiveness, irresponsible usage carries significant risks. Accordingly, Animal Charity Evaluators (ACE) will abide by the following principles when using AI tools:
- Exercise caution. We will approach AI usage with caution, recognizing that AI is a rapidly evolving and often poorly understood field.
- Prioritize security. We will prioritize the protection of personally identifiable and other sensitive information in all AI usage.
- Promote transparency. We will be open about how we use AI for content generation and other purposes.
- Build understanding. We will continually develop our understanding of AI, including the range of tools available, what they can achieve, and how they can be used most safely and effectively. To support this, ACE will provide organization-wide training at least once a year on safe and effective AI use. More broadly, we recognize that AI has significant implications for nonhuman animals beyond the use of AI tools by animal advocacy organizations, and we will continue to monitor developments in this area to inform our evaluations and grantmaking decisions.
- Support our work. We will use AI tools to increase and improve our output (where safe and possible), not to cut staff costs. If AI seems likely to substantially increase productivity in certain roles, we will treat this as an opportunity for the employees in those roles to achieve more ambitious goals and/or take on more projects across the organization, supported by relevant cross-training as needed, not as an opportunity to reduce those employees’ hours while expecting the same output. To support this, we will share use cases and best practices within our organization and with fellow advocates, while also acknowledging that AI may not currently be able to increase productivity for all teams and tasks.
Specific Staff Guidance on AI Usage
Large Language Models
Large Language Models (LLMs), such as ChatGPT and Claude, are trained on large datasets to enable them to process human language and generate responses. They can serve as helpful virtual assistants for specific tasks but are also prone to errors, biases, and other limitations. When using LLMs, consider ACE’s Overarching AI Usage Principles and follow the specific guidelines below.
- Only share information with LLMs that is safe to share publicly. Do not input sensitive information into any LLM, as this information could be integrated into the model’s training data and thereby inadvertently shared with other users. “Sensitive information” in this context includes (i) personally identifiable information, (ii) information about ACE that is not intended for public disclosure, and (iii) third-party organizational information that is not confirmed to be public knowledge. In addition, when available, always select the option to exclude your input data from being used to train the LLM.
- Avoid relying on LLMs to draft external content. While AI tools can help generate ideas, they are not currently suitable for drafting high-quality content. Their output is typically too wordy, generic, and repetitive to meet the specifications of ACE’s Style Guide, and even with careful revision, there is a strong chance that factual errors and biases will remain. In practice, the boundary between generating ideas and drafting content is unclear, so use your best judgment and seek feedback from your Supporter when in doubt.
- Do not use any AI-generated text verbatim in external content. LLMs are trained on datasets that include copyrighted materials and can therefore generate content that would be classified as plagiarism or violate intellectual property rights. To avoid this, it’s important to paraphrase (i.e., reword) any written content produced by LLMs, ensuring that the end result is substantially different from the original. See the following example of insufficient and sufficient paraphrasing:
- Original text: LLMs are trained on datasets that include copyrighted materials and can therefore generate content that would be classified as plagiarism or violate intellectual property rights.
- Insufficient paraphrasing: LLMs are trained on datasets that include copyrighted materials. Therefore, they can produce content that would be considered plagiarism or a violation of intellectual property rights.
- Explanation: While the second sentence contains substantial alternate phrasing, the text in the first sentence copies the exact wording of the original.
- Sufficient paraphrasing: LLM training datasets may contain copyrighted material, which could lead to generating content that constitutes plagiarism or a violation of intellectual property rights.
- Explanation: This version changes the structure and wording of the original text significantly enough to differentiate it from the original.
- Verify all factual information generated by LLMs. LLMs can generate erroneous output and should not be relied on as definitive sources of evidence.
- Check for bias. LLMs are trained on biased datasets. As with any source, check AI-generated content for potential biases. The Google article “Fairness: Types of Bias” provides some examples of common biases to watch out for and how they might manifest in LLM-generated output.
- Explore how you can use LLMs most effectively in your work and share use cases with others. Suitable use cases could include generating ideas for blog posts, creating first drafts of internal documents, suggesting formulas for spreadsheets, and assisting with logistical planning.
Image Generators
AI image generators, such as DALL-E 3 and Midjourney, can generate images, including photorealistic ones, based on a simple text or image prompt. These may be useful in some cases for creating illustrations or infographics, but their use is more likely to be warranted for internal documents than for public-facing ones. When using AI image generators, consider ACE’s Overarching AI Usage Principles and follow the specific guidelines below.
- Avoid using photorealistic AI-generated images. Using photorealistic AI-generated images could be seen as misleading and dishonest, with negative implications for ACE and the broader animal advocacy movement. For example, using AI-generated images to depict the suffering of industrially farmed animals could undermine the public credibility of real-life undercover footage of animals’ living conditions in factory farms. Using such images is also generally unnecessary, given the wealth of real-life photos available in the public domain. When using public-domain images, prioritize ethically aligned sources such as We Animals Media. There may be exceptional cases where a specific image is needed but using a real photo risks directly or indirectly supporting animal exploitation (e.g., if purchasing the photo would provide payment to the intensive animal agriculture industry, or if the photo was otherwise sourced through unethical means). In these cases, consult the Communications Manager for advice on whether an AI-generated image could be justified.
- Carefully check any AI-generated images before publishing. AI-generated images can contain strange, often subtle idiosyncrasies that others may find off-putting. This includes the frequent incorporation of misspelled, irrelevant, or nonsensical text in the generated image.
- Make clear when you use AI-generated images. To ensure clarity and transparency, any AI-generated images that you publish must be clearly identified as such. This could be a simple image caption such as “This image was created using [AI / specific AI tool name].”
AI Tools for Recruitment
ACE is committed to fair, inclusive hiring practices. Given the risk of AI perpetuating existing biases, follow the guidelines below when recruiting for positions at ACE.
- Avoid using AI to assess applications or screen candidates. AI tools may unintentionally favor certain demographics or backgrounds, leading to biased assessments of candidates’ resumes and written applications. While humans are also subject to biases and misjudgment, human review currently seems more likely to ensure a comprehensive and fair evaluation of each candidate’s suitability for the role.
- Be cautious when using AI to identify the most suitable job listing sites. While AI can provide helpful ideas for where best to post job listings, it has been trained on potentially biased historical hiring data and can therefore reinforce existing biases by disproportionately targeting specific groups.
Assessment of AI-Generated Applications
We recognize that people can find it helpful to use AI tools when drafting applications for our Movement Grants, Charity Evaluations, and job positions. These tools might be particularly useful for people who do not speak English as their first language. As such, we do not automatically penalize applications that we suspect to have been written with the support of AI. However, in our experience, such applications warrant closer scrutiny, as they tend to perform more poorly on detail, relevance, and accuracy. Follow the guidelines below when assessing applications that you suspect to be partly or wholly AI-generated.
- Where safe, test the application questions on AI tools in advance. This can be helpful to get a sense of the typical indicators of AI-generated content. However, be aware of the limits of such an exercise, especially as AI tools become increasingly sophisticated and tailored to the individual user.
- As a rule, use the same assessment criteria as for any other application. Particularly for Movement Grants and Charity Evaluations, we are not assessing applicants based on their English language skills. We should therefore not penalize applicants whose ideas may have plenty of substance but who may have used AI tools to help present these ideas in a clear and structured way.
- Note topics for follow-up discussion. If you doubt the applicant’s understanding of the content of their own application, note the particular areas that you want to discuss with them in more detail, either in writing or over a call.
- Include AI-specific guidance in application instructions. If there are any applications or specific questions where AI-generated content will be disregarded or otherwise penalized, make this clear from the outset in the application instructions.
- Double-check verifiable information. While this is important for any application, it is particularly relevant for AI-generated content, which is especially prone to fabrications and erroneous information.
Policy Updates
The ACE Operations team will review and update this policy at least once per year, with feedback from all ACE staff.
Please contact our Operations Director, Charlie Messinger, at charlie.messinger@animalcharityevaluators.org if you have any comments or questions about this policy.