Generative AI Policy

AI and AI-Assisted Technologies Policy

This policy has been developed by INKLUSI: Journal of Disability Studies, published by the Center for Disability Services (PLD), UIN Sunan Kalijaga, with reference to Elsevier’s Guidelines for the Use of Generative AI in Publishing and the guidance of the Committee on Publication Ethics (COPE) on artificial intelligence.
This policy aims to ensure transparency, accountability, inclusivity, and ethical integrity in scholarly communication, in alignment with the values of disability studies.


For Authors

  1. Authors may use generative AI or AI-assisted tools solely to improve language clarity, readability, and grammar, particularly for authors writing in a second language.
  2. AI tools must not be used to generate scientific content, including research ideas, theoretical arguments, data interpretation, findings, or conclusions.
  3. All use of AI tools must be supervised by humans, and authors remain fully responsible for the accuracy, originality, and integrity of the manuscript.
  4. Any use of AI tools must be transparently disclosed in the manuscript. A disclosure statement will be included in the published article.
  5. AI systems cannot be listed as authors or co-authors. Authorship implies intellectual responsibility, accountability, and approval of the final manuscript—roles that can only be fulfilled by humans.
  6. Authors are responsible for ensuring that the manuscript is original, that authorship is valid, and that the work does not infringe third-party rights.
  7. The use of AI in figures, images, or artwork is not permitted, except where AI forms an explicit part of the research methodology (e.g., AI-assisted data analysis or imaging). In such cases, full details (tool name, version, and purpose) must be clearly described in the Methods section.
  8. Basic image adjustments (e.g., brightness, contrast, colour balance) are permitted only if they do not misrepresent or obscure the original data.
  9. The use of AI to generate graphical abstracts or cover images is not permitted without prior written permission from the Editor-in-Chief and the Publisher, with appropriate rights and attribution.

For Reviewers

  1. Manuscripts under review are confidential documents and must not be uploaded to, shared with, or processed by any AI tools.
  2. Reviewers must not use AI tools to draft, edit, or refine peer-review reports, including for language improvement, as doing so may compromise confidentiality and ethical responsibility.
  3. Peer review is a human scholarly responsibility requiring critical judgment, contextual sensitivity, and ethical reasoning that cannot be delegated to AI systems.
  4. Reviewers remain personally accountable for the content, evaluations, and recommendations provided in their review reports.
  5. Reviewers should note that authors may include an AI disclosure statement at the end of the manuscript, as permitted under journal policy.

For Editors

  1. Editors must treat all submitted manuscripts as strictly confidential and must not upload any part of submissions into AI tools.
  2. This restriction also applies to editorial communications, including decision letters, reviewer invitations, and internal editorial correspondence.
  3. Editorial evaluation, peer-review coordination, and decision-making require human oversight and professional judgment and cannot be delegated to AI systems.
  4. Editors are fully responsible and accountable for editorial processes and final publication decisions.
  5. Editors should verify the presence and adequacy of AI use disclosure statements in submitted manuscripts.
  6. If misuse or undisclosed use of AI by authors or reviewers is suspected, editors must report the case to the Publisher and handle it in accordance with COPE procedures.