Hidden pitfalls in AI-generated MCQs: A call for caution
Submitted: 3 October 2025
Accepted: 29 October 2025
Published online: 7 April 2026
TAPS, 2026, 11(2), 129-130
https://doi.org/10.29060/TAPS.2026-11-2/LE3898
Nghia Phu Nguyen
College of Health Sciences, Nam Can Tho University, Vietnam
Dear Editor,
The rapid emergence of large language models (LLMs) in medical education has transformed the process of generating multiple-choice questions (MCQs). Recent literature has comprehensively summarised the classical flaws in MCQ design, including weak distractors, convergence errors, incomplete stems, and the importance of systematic post-hoc item analysis (Steele et al., 2025). It has also highlighted that, even as generative AI becomes integrated into assessment design, expert review remains indispensable to ensure validity, reliability, and cognitive depth (Elzayyat et al., 2025).
As generative AI becomes integrated into the question-writing process, these flaws are emerging as factors that can compromise the quality and fairness of assessments. My review of AI-generated questions reveals several recurring problems that pose real risks to assessment quality. Weak distractors are common: they may be implausible, overly brief, contain absolute terms that reduce discrimination, or contrast sharply with the correct option in length and detail, making the correct answer identifiable even without content knowledge. Word overlap, or convergence, in which key terms from the stem are repeated in the answer choices, often serves as another unintended cue. A further frequent flaw is the over-explained correct option, which goes beyond simple identification and provides additional functional detail absent from the distractors. Finally, bias in answer distribution is also apparent: in my review, the correct answer appeared disproportionately rarely in option A, a pattern that may create predictability and encourage strategic guessing. Although computer-based assessments typically randomise question and option order, reducing the impact of such bias, it could still affect small-scale paper-based tests such as in-course assessments, where students may exploit positional patterns.
These problems are not minor. Left unaddressed, they can reduce the fairness of examinations, weaken the meaning of test scores, and allow poor-quality items to enter question banks. As AI-generated content becomes more common, educators must remain cautious and actively involved in quality control. Questions created by AI should always be carefully reviewed by humans before use in any examination. Each item should be examined for the quality and plausibility of its distractors, the balance of language across options, possible cues that reveal the answer, and the overall distribution of correct options. AI should be regarded only as a tool to support question development, not as a replacement for human judgment. Careful and systematic review is essential if we want to maintain the quality, fairness, and credibility of assessments in the era of generative AI.
Notes on Contributors
Nghia Phu Nguyen conceptualised and drafted the letter, critically revised it for clarity and intellectual content, and approved the final version for submission.
Funding
This work received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Declaration of Interest
The author has no conflicts of interest to disclose.
References
Elzayyat, M., Mohammad, J. N., & Zaqout, S. (2025). Assessing LLM-generated vs. expert-created clinical anatomy MCQs: A student perception-based comparative study in medical education. Medical Education Online, 30(1), 2554678. https://doi.org/10.1080/10872981.2025.2554678
Steele, S., Nayak, N., Mohamed, Y., & Panigrahi, D. (2025). The generation and use of medical MCQs: A narrative review. Advances in Medical Education and Practice, 16, 1331-1340. https://doi.org/10.2147/AMEP.S513119
*Nghia Phu Nguyen, M.D.
College of Health Sciences,
Nam Can Tho University,
168 Nguyen Van Cu Street,
An Binh Ward, Can Tho City, Vietnam
Email: npnghia@nctu.edu.vn