Artificial Intelligence Policy

Sage recognises the value of artificial intelligence (AI) and its potential to help authors in the research and writing process. Sage welcomes developments in this area that enhance opportunities for generating ideas, accelerating research discovery, synthesising or analysing findings, polishing language, and structuring a submission.

Large language models (LLMs) and other forms of Generative AI offer opportunities to accelerate research and its dissemination. While these opportunities can be transformative, the technology cannot replicate human creative and critical thinking. Sage’s policy on the use of AI has been developed to help authors, reviewers and editors make sound judgements about the ethical use of such technology.

For authors

AI assistance

We recognise that AI-assisted writing has become more common as the technology becomes more accessible. AI tools that make suggestions to improve or enhance your own work, such as tools that improve language, grammar or structure, are considered assistive AI and do not require disclosure by authors or reviewers. Authors remain responsible, however, for ensuring their submission is accurate and meets the standards of rigorous scholarship.

Generative AI

The use of AI tools that can generate content, such as references, text, images or any other material, must be disclosed by authors or reviewers. Authors should cite original sources, rather than Generative AI tools, as primary sources within the references. If your submission was partially or primarily generated using AI, this must be disclosed upon submission so that the Editorial team can evaluate the generated content.

Authors are required to follow Sage guidelines, and in particular to:

  1. Clearly indicate the use of language models in the manuscript, including which model was used and for what purpose. Please use the methods or acknowledgements section, as appropriate.
  2. Verify the accuracy, validity, and appropriateness of the content and any citations generated by language models and correct any errors, biases or inconsistencies.
  3. Be conscious of the potential for plagiarism where the LLM may have reproduced substantial text from other sources. Check the original sources to be sure you are not plagiarising someone else’s work.
  4. Be conscious of the potential for fabrication where the LLM may have generated false content, including getting facts wrong, or generating citations that don’t exist. Ensure you have verified all claims in your article prior to submission.
  5. Note that AI bots such as ChatGPT must not be listed as an author on your submission.

Submissions will not be rejected because of the disclosed use of generative AI. However, if the Editor becomes aware that Generative AI was used inappropriately in the preparation of a submission without disclosure, the Editor reserves the right to reject the submission at any point in the publishing process. Inappropriate use of Generative AI includes the generation of incorrect text or content, plagiarism, and inappropriate attribution to prior sources.

For Reviewers and Editors

The use of AI or LLMs for editorial work presents confidentiality and copyright issues. The tool or model may learn from the material it receives over time and may use it to generate outputs for others.

AI assistance

Reviewers may wish to use Generative AI to improve the quality of the language in their review. If they do so, they retain responsibility for the content, accuracy and constructive feedback within the review.

Journal Editors maintain overall responsibility for the content published in their journal and act as gatekeepers of the scholarly record. Editors may use Generative AI tools for assistance in looking for suitable peer-reviewers.

Generative AI

Reviewers who inappropriately use ChatGPT or other Generative AI tools to generate review reports will not be invited to review for the journal, and their review will not be included in the final decision.

Editors must not use ChatGPT or other Generative AI tools to generate decision letters or summaries of unpublished research.

Undisclosed or Inappropriate use of Generative AI

Reviewers suspecting the inappropriate or undisclosed use of generative AI in a submission should flag their concerns with the Journal Editor. If Editors suspect the use of ChatGPT or any other generative AI in a submitted manuscript or a submitted review, they should consider this policy in undertaking an editorial assessment of the matter or contact their Sage representative for advice.

Sage and the Journal Editor will lead a joint investigation into concerns raised about the inappropriate or undisclosed use of Generative AI in a published article. The investigation will be undertaken in accordance with guidance issued by COPE and our internal policies.

Further information

Using AI in peer review and publishing (Sage Publications)

Assistive and Generative AI Guidelines for Authors (Sage, sagepub.com)

New white paper launch: Generative AI in Scholarly Communications (STM, stm-assoc.org)

Committee on Publication Ethics (COPE) position statement on Authorship and AI tools

World Association of Medical Editors (WAME) recommendations on chatbots, ChatGPT and scholarly manuscripts