Sage recognises the value of artificial intelligence (AI) and its potential to help authors in the research and writing process. Sage welcomes developments in this area that enhance opportunities for generating ideas, accelerating research discovery, synthesising or analysing findings, polishing language, or structuring a submission.
Large language models (LLMs) and Generative AI offer opportunities to accelerate research and its dissemination. While these opportunities can be transformative, such tools cannot replicate human creative and critical thinking. Sage’s policy on the use of AI technology has been developed to help authors, reviewers and editors make good judgements about the ethical use of such technology.
We recognise that AI-assisted writing has become more common as the technology becomes more accessible. AI tools that make suggestions to improve or enhance your own work, such as tools that improve language, grammar or structure, are considered assistive AI tools and do not require disclosure by authors or reviewers. However, authors remain responsible for ensuring that their submission is accurate and meets the standards of rigorous scholarship.
The use of AI tools that generate content, such as references, text, images or any other material, must be disclosed by authors or reviewers. Authors should cite original sources, rather than Generative AI tools, as primary sources within the references. If your submission was primarily or partially generated using AI, this must be disclosed upon submission so that the Editorial team can evaluate the generated content.
Authors are required to follow Sage guidelines, in particular the Assistive and Generative AI Guidelines for Authors linked under Further information below.
While submissions will not be rejected because of the disclosed use of generative AI, if the Editor becomes aware that Generative AI was inappropriately used in the preparation of a submission without disclosure, the Editor reserves the right to reject the submission at any time during the publishing process. Inappropriate use of Generative AI includes the generation of incorrect text or content, plagiarism or inappropriate attribution to prior sources.
The use of AI or LLMs for editorial work presents confidentiality and copyright issues: such tools may learn from the material they receive over time and may use it to generate outputs for others.
Reviewers may wish to use Generative AI to improve the quality of the language in their review. If they do so, they remain responsible for the content, accuracy and constructiveness of the feedback in their review.
Journal Editors retain overall responsibility for the content published in their journal and act as gatekeepers of the scholarly record. Editors may use Generative AI tools to assist in identifying suitable peer reviewers.
Reviewers using ChatGPT or other Generative AI tools to generate review reports inappropriately will not be invited to review for the journal and their review will not be included in the final decision.
Editors must not use ChatGPT or other Generative AI to generate decision letters, or summaries of unpublished research.
Reviewers who suspect the inappropriate or undisclosed use of Generative AI in a submission should flag their concerns with the Journal Editor. If Editors suspect the use of ChatGPT or any other Generative AI tool in a submitted manuscript or a submitted review, they should apply this policy in their editorial assessment of the matter or contact their Sage representative for advice.
Sage and the Journal Editor will lead a joint investigation into concerns raised about the inappropriate or undisclosed use of Generative AI in a published article. The investigation will be undertaken in accordance with guidance issued by COPE and our internal policies.
Further information
Using AI in peer review and publishing - Sage Publications Inc
Assistive and Generative AI Guidelines for Authors - Sage (sagepub.com)
New white paper launch: Generative AI in Scholarly Communications - STM (stm-assoc.org)
Committee on Publication Ethics (COPE)’s position statement on Authorship and AI tools.