Hidden pitfalls in AI-generated MCQs: A call for caution

Submitted: 3 October 2025
Accepted: 29 October 2025
Published online: 7 April 2026
TAPS 2026, 11(2), 129-130
https://doi.org/10.29060/TAPS.2026-11-2/LE3898

Nghia Phu Nguyen

College of Health Sciences, Nam Can Tho University, Vietnam

Dear Editor,

The rapid emergence of large language models (LLMs) in medical education has transformed the process of generating multiple-choice questions (MCQs). Recent literature has comprehensively summarised the classical flaws in MCQ design, including weak distractors, convergence errors, incomplete stems, and the importance of systematic post-hoc item analysis (Steele et al., 2025). It has also highlighted that, even as generative AI becomes integrated into assessment design, expert review remains indispensable to ensure validity, reliability, and cognitive depth (Elzayyat et al., 2025).

As generative AI becomes integrated into the question-writing process, these flaws are emerging as factors that can compromise the quality and fairness of assessments. My review of AI-generated questions reveals several recurring problems that pose real risks to assessment quality. Weak distractors are common: they may be implausible or overly brief, include absolute terms that reduce discrimination, or contrast sharply with the correct option in length and detail, making the correct answer identifiable even without content knowledge. Word overlap or convergence, in which key terms from the stem are repeated in answer choices, often serves as another unintended cue. A further frequent flaw is the over-explained correct option, which goes beyond simple identification and provides additional functional characteristics that are absent from the other distractors. Finally, bias in answer distribution has also been observed; for example, the correct answer appeared disproportionately rarely in option A, which may create predictable patterns and encourage strategic guessing. Although computer-based assessments typically randomise question and option order, reducing the impact of such bias, it could still influence small-scale paper-based tests such as in-course assessments, where students may exploit positional patterns.

These problems are not minor. If they are ignored, they can reduce the fairness of exams, make test scores less meaningful, and allow poor-quality questions to enter question banks. As AI-generated content becomes more common, educators need to be cautious and actively involved in checking its quality. Questions created by AI should always be carefully reviewed by humans before being used in any exam. Each item should be examined for the quality and plausibility of its distractors, the balance of language across options, possible cues that reveal the answer, and the overall distribution of correct options. AI should be seen only as a tool to support question development, not as a replacement for human judgment. Careful and systematic review is essential if we want to maintain the quality, fairness, and credibility of assessments in the era of generative AI.

Notes on Contributors

Nghia Phu Nguyen conceptualised and drafted the letter, contributed to critical revision of the letter for clarity and intellectual content, and approved the final version for submission.

Funding

This work received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Declaration of Interest

The author has no conflicts of interest to disclose.

References

Elzayyat, M., Mohammad, J. N., & Zaqout, S. (2025). Assessing LLM-generated vs. expert-created clinical anatomy MCQs: A student perception-based comparative study in medical education. Medical Education Online, 30(1), 2554678. https://doi.org/10.1080/10872981.2025.2554678

Steele, S., Nayak, N., Mohamed, Y., & Panigrahi, D. (2025). The generation and use of medical MCQs: A narrative review. Advances in Medical Education and Practice, 16, 1331-1340. https://doi.org/10.2147/AMEP.S513119

*Nghia Phu Nguyen, MD
College of Health Sciences,
Nam Can Tho University,
168 Nguyen Van Cu Street,
An Binh Ward, Can Tho City, Vietnam
Email: npnghia@nctu.edu.vn