Print ISSN: 2305-2147
AI Policies
Asian Economic and Financial Review (AEFR) recognizes the significance of artificial intelligence (AI) and machine learning in scholarly publishing. As generative AI tools such as ChatGPT, Gemini, Claude, and others become increasingly accessible, AEFR emphasizes the importance of addressing both their potential advantages and the ethical considerations involved. This policy outlines the journal’s official position on the use of AI tools by authors, reviewers, and editorial staff, ensuring compliance with the standards set by Elsevier and the Committee on Publication Ethics (COPE).
1. Use of AI Tools by Authors
Authors are permitted to use AI tools to support the preparation of manuscripts, provided such use is transparent, responsible, and ethical. AI tools may be used for tasks such as language editing, grammar improvement, and reference formatting. However, AI must not be used to generate content that substitutes for original scientific thinking or interpretation of results. Crucially, authors remain solely responsible for the content of their work, including any sections developed with the help of AI tools.
In accordance with the COPE Position Statement (2023), AI tools cannot be listed as authors under any circumstances. Authorship implies the capacity for accountability, consent, and intellectual contribution; AI tools do not fulfill these criteria. Therefore, all named authors must be human individuals who meet established authorship requirements.
2. Disclosure of AI Use
Authors must fully disclose the use of AI tools in their manuscript submissions. This includes, but is not limited to, tools used for text generation, image creation, data analysis, coding assistance, or translation. The disclosure should be placed in the Acknowledgements section of the manuscript and should specify the name of the AI tool, the version used, and the purpose of its application.
For example, authors may include a statement such as:
“The author(s) used OpenAI’s ChatGPT to edit and refine the wording of the Introduction. All outputs were reviewed and verified by the authors.”
Failure to disclose AI usage may be considered a breach of ethical publishing standards and could result in rejection or retraction of the article.
3. Author Responsibility and Accountability
Authors are wholly responsible for the content and integrity of their manuscripts. Even when AI tools are used to assist in writing or other tasks, authors must ensure the accuracy, originality, and appropriateness of the final submission. Authors are expected to verify that AI-generated content does not contain hallucinated references, incorrect scientific claims, biased interpretations, or plagiarized text.
Misuse of AI tools, including submission of entirely or largely AI-generated papers without human oversight or disclosure, will be treated as unethical conduct in accordance with COPE guidelines and AEFR’s editorial policy.
4. Use of AI in Peer Review
AEFR expects all peer reviewers to conduct their evaluations based on confidentiality, integrity, and scholarly competence. Reviewers must not use AI tools to generate reviews or input manuscript content into AI platforms without explicit authorization, as this may violate confidentiality agreements.
If a reviewer wishes to use AI for non-content tasks (e.g., grammar improvement of their review text), they must ensure that no confidential information is shared and must disclose such use to the editor. The editorial team reserves the right to reject reviews generated with inappropriate use of AI tools.
5. Editorial Use of AI
Editorial staff at AEFR may utilize AI tools to support non-decision-making tasks such as plagiarism detection, formatting checks, and language editing. However, AI tools will not be used to make acceptance or rejection decisions. All editorial judgments will be made by qualified human editors to ensure accountability and adherence to ethical standards.
6. Ethical Considerations and Bias Prevention
The use of AI must not compromise ethical integrity, and authors must ensure that AI tools do not introduce bias, misrepresentation, or offensive content. Authors are encouraged to critically assess AI-generated outputs and to avoid over-reliance on such tools, particularly in tasks that require nuanced academic judgment.
7. Violations and Consequences
Any attempt to misrepresent AI-generated content as original work, fabricate references or data with AI, or fail to disclose the use of AI tools may constitute a breach of publication ethics. AEFR reserves the right to:
- Reject the manuscript outright
- Request revisions or corrections
- Retract the article post-publication
- Inform the authors’ affiliated institutions, if necessary
All such cases will be investigated following COPE guidelines on misconduct.
8. Policy Review and Updates
This AI policy will be reviewed regularly and updated as necessary to reflect technological advancements and evolving best practices in academic publishing. AEFR remains committed to supporting responsible innovation while protecting the quality and integrity of scholarly communication.
References
- Elsevier (2023). Generative AI Policies for Journals. Retrieved from: https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals
- Committee on Publication Ethics (COPE) (2023). Position Statement on Authorship and AI Tools. Retrieved from: https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools
- Committee on Publication Ethics (COPE) (2024). Discussion Document on AI and Peer Review. Retrieved from: https://publicationethics.org/news/cope-publishes-guidance-on-ai-in-peer-review
- Committee on Publication Ethics (COPE) (2023). Discussion Paper: Ethical Considerations in the Use of Generative AI in Publishing. Retrieved from: https://publicationethics.org/topic-discussions/artificial-intelligence-ai-and-fake-papers