Artificial Intelligence in Publishing: Stewardship in a Digital Era
Published online: 7 April 2026. TAPS, 11(2), 1-3
https://doi.org/10.29060/TAPS.2026-11-2/EV11N2
Artificial intelligence (AI) now permeates every area of academic work. Journal reviewers and editors have noticed that more manuscripts are being written with the help of AI, specifically generative AI (GenAI), and that reviews are being polished through chatbots. To improve cost-efficiency and effectiveness, editorial workflows now include automated screening. The question is no longer whether GenAI will affect scholarship. It already does! The key question is: how can we ensure that authors remain the primary agents of their own conceptions and, thus, motivate them to write transparently, in a way that authentically represents their own ideas?
Recent discussions across leading journal editorial boards reflect both optimism and caution. Commentaries in The Lancet Infectious Diseases warn that large language models may generate confident but flawed critiques, amplify bias, and hallucinate references (Donker, 2023). Such systems lack epistemic responsibility. They predict language. They do not understand method. Peer review, however, is a moral and scholarly act. It demands judgement, accountability and contextual reasoning. Similarly, discussions in Health Affairs Scholar and Critical Care highlight GenAI's growing presence in peer review processes. GenAI may assist with triage, language refinement, and detection of plagiarism or reporting omissions. Yet it cannot replace human oversight (Bauchner & Rivara, 2024; Cheng et al., 2024). These perspectives are not anti-technology. They are pro-accountability. They call for stewardship. Major journal organisations now articulate consistent policy principles. The International Committee of Medical Journal Editors (ICMJE, 2024), the World Association of Medical Editors (Zielinski et al., 2024), the Committee on Publication Ethics (COPE Council, n.d.), and others converge on several points. GenAI tools cannot be authors. Authorship requires responsibility, the ability to declare conflicts of interest, and legal accountability. GenAI meets none of these criteria.
The key is transparency. This can be achieved by requiring authors to identify the GenAI tool used (e.g., ChatGPT, Claude, Gemini, Microsoft Copilot) and its version. The JAMA Network further requires authors to describe how GenAI contributed to the writing and/or analysis (Flanagin et al., 2024). Disclosure is now part of scholarly honesty, and honesty requires a sense of responsibility. The British Medical Journal and The Lancet adopt similar positions: GenAI may assist in writing or editing, but it cannot generate scientific insight, interpret data independently, or substitute for researcher judgement (BMJ, 2024; The Lancet, n.d.). Confidentiality remains central. Reviewers must not upload unpublished manuscripts into publicly available GenAI platforms. The National Institutes of Health (NIH, 2023) has formalised this requirement through revised nondisclosure agreements. The integrity of peer review depends on trust. That trust cannot be traded away for convenience. Human accountability remains the anchor.
Yet policy clarity does not eliminate deeper tensions.
First, enforcement remains uncertain. Disclosure depends largely on author and reviewer honesty, and detection tools are imperfect. Rather than prohibiting these technologies, the way forward is for journal editors to invest in digital literacy and a working understanding of GenAI.
Second, GenAI use raises questions of equity. For many medical educators, especially in the Asia-Pacific region where English is often a second language, GenAI can improve clarity and confidence. For others, unequal access to expensive GenAI tools may widen disparities. Responsible governance must consider inclusion, not merely control.
Third, we must confront the educational implications. In medical education scholarship, GenAI shapes how learners write, search, and reflect. Editorial policies therefore signal curricular values. If we treat GenAI only as a threat, we model fear. If we treat it uncritically as a cost-saving mechanism, we risk eroding critical thinking. We must instead teach discernment. GenAI literacy should become part of scholarly professionalism. Basil et al. (2026) conducted a comprehensive review of the impact of GenAI in health professions education; one of their policy recommendations is to audit GenAI policies regularly, given the evolving nature of the technology.
At its heart, this moment is not about technology. It is about identity and professionalism. What does it mean to be an author? A reviewer? An editor? GenAI can assist with language, much as human proof-readers did in the past. However, it cannot assume responsibility for truth; to pretend otherwise would mislead readers and mask the true authorship of the ideas presented. That responsibility remains human.
For The Asia Pacific Scholar, the way forward is balanced and transparent. We should require clear disclosure of GenAI use in manuscript preparation. We should prohibit uploading confidential material into unsecured systems. We should allow cautious, declared use for language improvement; this matters all the more because English is not the first language of most scholars in the region. Journals may employ licensed GenAI tools for plagiarism detection or reviewer matching, but only with human oversight. Above all, we must preserve human judgement in decisions that shape academic careers and patient care.
GenAI is here to stay. We must also remain mindful of the dynamic nature of AI development: Bennani (2024), among many others, alerts the academic world to the impending advent of artificial general intelligence (AGI). Because AGI aims to make AI decisions more autonomous, greater vigilance will be required to ensure these technological changes continue to align with the human values of integrity and professionalism. Our task is neither surrender nor resistance for its own sake. It is stewardship, and staying well informed. We must guide GenAI's use in ways that strengthen scholarship, protect integrity, and support our diverse academic community across the Asia-Pacific region.
Technology can accelerate manuscript generation and reviews. However, it cannot replace wisdom.
And wisdom remains our responsibility!
Dujeepa D. Samarasekera
Centre for Medical Education (CenMED), NUS Yong Loo Lin School of Medicine,
National University of Singapore, Singapore
Marcus A. Henning
Centre for Medical and Health Sciences Education, Faculty of Medical and Health Sciences,
University of Auckland, New Zealand
Basil, M., Ahmed, W., Hajeomar, R., Strawbridge, J., Lynch, M., & Mukhalalati, B. (2026). A scoping review of the use of generative artificial intelligence tools in health profession education. BMC Medical Education, 26, Article 291. https://doi.org/10.1186/s12909-025-08527-3
Bauchner, H., & Rivara, F. P. (2024). Use of artificial intelligence and the future of peer review. Health Affairs Scholar, 2(5), qxae058. https://doi.org/10.1093/haschl/qxae058
Bennani, T. (2024). Advancing healthcare with generative AI: A multifaceted approach to reliable medical information and innovation [Doctoral dissertation, Massachusetts Institute of Technology]. https://hdl.handle.net/1721.1/156048
BMJ. (2024). AI use. BMJ. https://www.bmj.com/content/ai-use
Cheng, K., Sun, Z., Liu, X., Wu, H., & Li, C. (2024). Generative artificial intelligence is infiltrating peer review process. Critical Care, 28(1), 149. https://doi.org/10.1186/s13054-024-04933-z
COPE Council. (n.d.). COPE position – Authorship and AI – English. Committee on Publication Ethics. https://doi.org/10.24318/cCVRZBms
Donker, T. (2023). The dangers of using large language models for peer review. The Lancet Infectious Diseases, 23(7), 781. https://doi.org/10.1016/S1473-3099(23)00290-6
Flanagin, A., Pirracchio, R., Khera, R., Berkwits, M., Hswen, Y., & Bibbins-Domingo, K. (2024). Reporting use of AI in research and scholarly publication—JAMA Network guidance. JAMA, 331(13), 1096-1098. https://doi.org/10.1001/jama.2024.3471
International Committee of Medical Journal Editors (ICMJE). (2024). Recommendations for the conduct, reporting, editing and publication of scholarly work in medical journals (revised in January 2024): A Korean translation. The Ewha Medical Journal, 47(4). https://doi.org/10.12771/emj.2024.e48
National Institutes of Health (NIH). (2023). The use of generative artificial intelligence technologies is prohibited for the NIH peer review process. https://grants.nih.gov/grants/guide/notice-files/NOT-OD-23-149.html
The Lancet. (n.d.). Editorial Policies. The Lancet. https://www.thelancet.com/editorial-policies
Zielinski, C., Winker, M. A., Aggarwal, R., Ferris, L. E., Heinemann, M., Lapeña, J. F., … & WAME Board. (2024). Chatbots, generative AI, and scholarly manuscripts: WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications. Current Medical Research and Opinion, 40(1), 11-13. https://doi.org/10.1080/03007995.2023.2286102
Announcements
- Best Reviewer Awards 2025
TAPS would like to express gratitude and thanks to an extraordinary group of reviewers who are awarded the Best Reviewer Awards for 2025. Refer here for the list of recipients.
- Most Accessed Article 2025
The Most Accessed Article of 2025 goes to Analyses of self-care agency and mindset: A pilot study on Malaysian undergraduate medical students. Congratulations, Dr Reshma Mohamed Ansari and co-authors!
- Best Article Award 2025
The Best Article Award of 2025 goes to From disparity to inclusivity: Narrative review of strategies in medical education to bridge gender inequality. Congratulations, Dr Han Ting Jillian Yeo and co-authors!
- Best Reviewer Awards 2024
TAPS would like to express gratitude and thanks to an extraordinary group of reviewers who are awarded the Best Reviewer Awards for 2024. Refer here for the list of recipients.
- Most Accessed Article 2024
The Most Accessed Article of 2024 goes to Persons with Disabilities (PWD) as patient educators: Effects on medical student attitudes. Congratulations, Dr Vivien Lee and co-authors!
- Best Article Award 2024
The Best Article Award of 2024 goes to Achieving Competency for Year 1 Doctors in Singapore: Comparing Night Float or Traditional Call. Congratulations, Dr Tan Mae Yue and co-authors!
- Best Reviewer Awards 2023
TAPS would like to express gratitude and thanks to an extraordinary group of reviewers who are awarded the Best Reviewer Awards for 2023. Refer here for the list of recipients.
- Most Accessed Article 2023
The Most Accessed Article of 2023 goes to Small, sustainable, steps to success as a scholar in Health Professions Education – Micro (macro and meta) matters. Congratulations, A/Prof Goh Poh-Sun & Dr Elisabeth Schlegel!
- Best Article Award 2023
The Best Article Award of 2023 goes to Increasing the value of Community-Based Education through Interprofessional Education. Congratulations, Dr Tri Nur Kristina and co-authors!
- Best Reviewer Awards 2022
TAPS would like to express gratitude and thanks to an extraordinary group of reviewers who are awarded the Best Reviewer Awards for 2022. Refer here for the list of recipients.
- Most Accessed Article 2022
The Most Accessed Article of 2022 goes to An urgent need to teach complexity science to health science students. Congratulations, Dr Bhuvan KC and Dr Ravi Shankar.
- Best Article Award 2022
The Best Article Award of 2022 goes to From clinician to educator: A scoping review of professional identity and the influence of impostor phenomenon. Congratulations, Ms Freeman and co-authors.