FAS Statement on Generative AI Use

FAS is closely watching the emerging utility, potential, and government policy concerning generative artificial intelligence use and development. FAS regularly publishes memos, public statements, and policy recommendations on artificial intelligence and emerging technology. We welcome media inquiries for our experts working in this domain: Oliver Stephenson and Clara Langevin.

FAS Provisional Principles on Generative AI Use and Citation

Guidance for Authors and Contributors, including fellows, memo authors, accelerator/sprint participants, staff, and Senior Fellows.

To provide guidance to our staff, fellows, Day One policy entrepreneurs, and contributors, and to offer disclosure to the public, here is how our organization will use generative AI tools in external publications.

We are open to exploring and using generative AI tools as they develop; however, we recommend caution and disclosure, along with attentiveness to safety impacts at both the personal and societal levels.

We move forward with these provisional principles for all contributors:

Consider project needs and environmental impact before using AI.

Will this project benefit from AI use? For example, are you generating illustrations when existing illustrations would suffice? At present, most AI tools consume substantial electricity and water, which at scale competes with other critical needs like transportation, heating, and cooling, and runs counter to societal aims of addressing climate change and its many ill effects. Over-reliance on AI also strains already limited energy supplies.

Protect sensitive data and information.

Authors are free to experiment with any AI tool, but must never input sensitive FAS data or personally identifiable information into any tool. We urge caution until the author has confirmed that user inputs are properly protected.

Comply with existing FAS policies.

Authors using AI tools to develop content for FAS – internally or externally – remain responsible for complying with existing policies concerning plagiarism, research misconduct, and acceptable technology use. Authors must review and validate outputs to ensure quality and integrity. Authors alone are responsible for all AI errors, omissions, or hallucinations (i.e., manufactured data and/or “facts”).

Disclose use.

Individuals should always disclose to their supervisor, reviewer, or editor if an external work product has been created with the assistance of generative AI. 

Authors should always disclose the tool(s) used on externally facing work products.

Extensive quoting of AI-generated material must be checked by authors and reviewers for inadvertent plagiarism or hallucination. Please add a one-sentence disclosure at the end of any work product describing the AI system, the date of use, and/or the version.

For example: Research assisted by Elicit (Jan 2025) and ideas refined using Claude (Feb 2025). 

This guidance last updated: March 2025