Understanding the factors affecting duration in answering MCQ examination: The students’ perspective

Submitted: 6 April 2024
Accepted: 10 December 2025
Published online: 1 April, TAPS 2025, 10(2), 57-64
https://doi.org/10.29060/TAPS.2025-10-2/OA3332

Chatchai Kreepala1, Srunwas Thongsombat2, Krittanont Wattanavaekin3, Taechasit Danjittrong4, Nattawut Keeratibharat5 & Thitikorn Juntararuangtong1

1School of Internal Medicine, Institute of Medicine, Suranaree University of Technology, Thailand; 2Department of Orthopedics, Faculty of Medicine, Prince of Songkla University, Thailand; 3Department of Surgery, Faculty of Medicine, Ramathibodi Hospital, Mahidol University, Thailand; 4Department of Anesthesiology, Chulabhorn Hospital, Thailand; 5School of Surgery, Institute of Medicine, Suranaree University of Technology, Thailand

Abstract

Introduction: Understanding the factors affecting decision-making duration in MCQs can enhance assessment effectiveness, ensuring that examinations accurately measure the intended objectives and addressing issues related to incomplete exams caused by time constraints. The authors aimed to explore medical students’ perspectives on the factors influencing their decision making in MCQ assessments.

Methods: A mixed-methods explanatory sequential design was utilised. An initial survey was conducted via online questionnaires distributed to the sample group, all 2nd- to 5th-year medical students at SUT, Thailand, and summarised using percentages, means, and non-parametric analysis. The validity of the questionnaire was verified by three independent reviewers (IOC = 0.89). This was followed by semi-structured group interviews to explore students’ perspectives on the factors affecting their decisions. Qualitative analysis was conducted to explore detailed information until data saturation was achieved.

Results: The quantitative analysis identified factors that students believe affect examination duration, including the total word count of each question, test difficulty, and the presence of images in tests. Meanwhile, the qualitative analysis provided additional insights into factors such as the examination atmosphere affecting their decisions.

Conclusion: This report indicated that data acquired from comprehensive reading questions should be distinguished from those requiring decisive reading. Apart from text length, question taxonomy, such as recall or application, and questions accompanied by images and tables should be considered as factors determining time allocation for an MCQ. Future research based on these results should aim to develop a mathematical formula to calculate exam duration, accounting for question difficulty and length.

Keywords: MCQ, Medical Assessment, Medical Education, Testing Time Estimation, Qualitative Research, Students’ Perspective

Practice Highlights

  • The multiple-choice question (MCQ) is an objective assessment method widely regarded as the most utilised form of assessment.
  • The word-length effect has been proposed as a basis for determining the length of each examination.
  • Educational theories on decision-making have posited that decision-making is a dynamic process stemming from prior experiences.
  • The authors were interested in exploring medical students’ perspectives on the factors affecting their decisions when answering MCQs.

I. INTRODUCTION

The multiple-choice question (MCQ) stands as one of the available objective assessment methods, widely regarded as the most utilised form of assessment, particularly within the fields of medical sciences and technology. Evidence suggests that the recall of short words often surpasses that of longer words (Tehan & Tolan, 2007). This observation is frequently analysed within the framework of a working memory model and the role of the phonological loop in immediate recall. However, the word-length effect has also been observed in delayed tests and in lists that surpass the memory span, thereby challenging the working memory interpretation of the phenomenon. Three alternative interpretations of the word-length effect have been proposed to explain how an exam length should be determined (Arif & Stuerzlinger, 2009; Kumar et al., 2021).

Educational theories on decision-making have posited that decision-making is a dynamic process stemming from prior experiences (Phillips et al., 2004) and meaningful learning (Foley, 2019). As a result, the ability to comprehend text while reading does not automatically equate to reading for decision-making or answering questions. From the literature, the factors influencing medical students’ decisions on MCQs include 1) Length or number of words: the time students need to read to gather information before deciding on an answer (Arif & Stuerzlinger, 2009). 2) Difficulty of the questions: items requiring analytical thinking, especially those involving calculations, may increase decision-making time; this depends on the students’ prior learning experiences before the exam (González et al., 2008). 3) Language comprehension: since exams in medical schools are often in English, non-native speakers may take longer to read and understand the questions (Schenck, 2020). 4) Visuals and tables: these serve as symbols that help students retrieve information from their prior learning experiences more easily (Ziefle, 1998). Teachers want academic assessments such as MCQs to distinguish between high-performing and low-performing students and to assess the knowledge and understanding they have acquired. However, these objectives may be undermined by issues such as students running out of time and resorting to guessing, which inevitably reduces the reliability of the test.

The authors were interested in exploring medical students’ perspectives on the factors affecting their decisions when answering MCQs. Previous studies focused on the duration required for question comprehension and understanding, but not for analysis, and were mostly conducted with native English speakers. This study builds upon previous work with an emphasis on the factors affecting non-native English speakers’ decision making after analysing the provided questions to answer MCQs in English. Such research should be approached from the students’ perspective to obtain appropriate data. Semi-structured qualitative interviews were analysed in conjunction with quantitative data to identify and clarify the reasons and factors that students believe influence their performance on exams.

II. METHODS

A. Study Population

The research participants were second to fifth-year Thai medical students who had taken MCQ tests during their preclinical and clinical years between the academic years 2021-2022. Questionnaires were sent to all students without sampling.

To minimise data artifacts caused by recall bias, the online questionnaires were distributed during the first week after each MCQ test to the students who had completed the exams. All examinations in this study were computer-based, closed-book, single-best-answer MCQs written in English. The participants were non-native English speakers of Thai nationality (as detailed in Definition of Terms). An online survey or questionnaire-based study was used to collect information from participants. If the data were not saturated, triangulated data from group interviews consisting of students from different rotations were included to obtain as much information from the students’ perspectives as possible.

B. Study Design and Data Collection

The authors employed a mixed-methods study with an explanatory sequential design, comprising a quantitative phase followed by a qualitative phase. The literature review unveiled several factors influencing MCQ test duration, including the number of questions, question types (recall or comprehension), subject matter difficulty, calculation items, and picture identification, as outlined in the questionnaire (O’Dwyer, 2012).

An online survey or questionnaire-based study was used to collect information from participants with minimal disruption to their learning activities. The quantitative research section was managed by CK, NK and TJ. Students completed the questionnaire once, based on their experiences in medical school. This required the researchers to summarise the responses and, if necessary, categorise interviews into groups according to year of study. Open-ended questions were included in the last section of the questionnaire; these asked students for any additional factors that, in their opinion, affected MCQ time (Lertwilaiwittaya et al., 2019). Survey research was employed as the quantitative method, while semi-structured group interviews were utilised for qualitative data collection to gather insights from medical students’ perspectives. The interview questions were designed to investigate whether students possessed any additional insights regarding the factors influencing MCQ test duration (Carnegie Mellon University, 2019; Schenck, 2020; Wang, 2019).

There were three sections in the questionnaire. Part I consisted of the instructions and informed consent. Part II consisted of general information about the participants, including sex, age, and academic year. Part III consisted of items covering the four constructive domains previously identified from the literature as affecting MCQ time: 1) the number of questions and total word count, 2) English language questions, 3) calculation questions, and 4) analytical thinking questions, together with open-ended questions asking students for any other factors that, in their opinion, affected MCQ time. After the questions in Part I were completed, they were taken away so that the researchers could not identify which students had answered Parts II and III.

To prevent neutral opinions, each questionnaire item featured a four-point Likert scale corresponding to levels of agreement: ‘Strongly disagree,’ ‘Disagree,’ ‘Agree,’ and ‘Strongly agree.’ The researchers wanted a clear indication of which side students leaned towards, so the midpoint was omitted to avoid neutral opinions that might complicate statistical analysis. The validity of the questionnaire was verified by three independent reviewers, with an Index of Item-Objective Congruence (IOC) value of 0.89.
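As an illustration of how an IOC value such as 0.89 can be derived, a minimal sketch is given below. It assumes the conventional +1/0/-1 expert rating scheme; the item names and ratings are hypothetical examples, not the reviewers’ actual ratings.

def item_ioc(ratings):
    # IOC for one item: mean of expert ratings (+1 congruent, 0 unsure, -1 incongruent)
    return sum(ratings) / len(ratings)

# Hypothetical ratings from three reviewers for four illustrative questionnaire items
expert_ratings = {
    "word_count_item":  [1, 1, 1],
    "english_item":     [1, 1, 0],
    "calculation_item": [1, 1, 1],
    "analytical_item":  [1, 0, 1],
}

item_scores = {item: item_ioc(r) for item, r in expert_ratings.items()}
overall_ioc = sum(item_scores.values()) / len(item_scores)

for item, score in item_scores.items():
    print(f"{item}: IOC = {score:.2f}")  # items with IOC >= 0.5 are conventionally retained
print(f"Overall questionnaire IOC = {overall_ioc:.2f}")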

Semi-structured group interviews were adopted in this study because structured interviews provide insufficient flexibility, whereas unstructured interviews would be too flexible. Semi-structured group interviews combine formal and informal interviewing focused on personal experience; this often leads to unexpected results, enhancing the quality of the data collected.

These interviews took place after class and were conducted by independent interviewers without any conflict of interest. Two facilitators were present in each session: CK facilitated the conversation and NK contributed ideas. The two facilitators were known to the student participants as faculty members, but they were not actively engaged in the students’ academic learning. Audio and written records were coded and then decoded by the researchers (ST, KW and TD).

Each interview took around 30-45 minutes per group, with each group consisting of five to eight people. Analysis was performed after the first three groups using relevant domain analysis, with further analysis after each new interview until data saturation was achieved. Coding, theme identification, and triangulation were undertaken following the analysis and evaluation of the quantitative and qualitative data, from which the conclusions of the study were drawn. In this study, the open-ended questions were analysed and the semi-structured interviews were conducted.

Triangulation helped to provide meaning, to gain a broader and more precise understanding, and to increase validity. It was undertaken following the analysis and evaluation of the quantitative and qualitative data, and the combined analysis was extrapolated to form the conclusions of the study.

C. Definition of Terms

1) Multiple choice question (MCQ): This paper exclusively focused on the Single Best Answer (SBA) Multiple Choice Questions (MCQs), which were structured as questions followed by 4 or 5 potential answers, with only one correct response per question (Coughlin & Featherstone, 2017).

2) Taxonomy MCQ: MCQs were formulated on the assumption that they could be categorised into higher or lower orders according to Bloom’s taxonomy (Stringer et al., 2021). This study sought to comprehend students’ approaches to questions by examining variances in their perceptions of the Bloom’s level of MCQs regarding their knowledge and confidence. The authors employed Bloom’s taxonomy in this study, classifying questions as “recall,” “comprehension,” and “application” (Stringer et al., 2021).

3) Non-native English speakers: The term non-native English speakers was defined as students who spoke a language other than English at home. Non-native English speakers included both competent bi-literate students and those with limited English proficiency. The term also covers students who learned the language as older children or adults (Cassels & Johnstone, 1984).

D. Statistical Analysis

Statistical analyses for the quantitative data were performed with SPSS Statistics for Windows, Version 18.0 (SPSS Inc., Chicago, Illinois, USA). Information in the quantitative section was elaborated and displayed as counts and percentages. The qualitative data were analysed by grouping coded text fragments based on content. Subsequently, the codes were reorganised and grouped, main themes and subthemes were identified, and illustrative quotations were selected. The authors assigned three other medical teachers to undertake independent coding of the transcripts for each interview. Final coding and discussions continued until the frameworks were agreed upon and new themes were derived (CK, ST, KW and TD).
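To make the quantitative summary concrete, a minimal sketch of the descriptive analysis (counts and percentages per agreement level, as reported in Table 2, and mean ± SD for age, as in Table 1) is shown below; the file name and column names are illustrative rather than the study’s actual data files.

import pandas as pd

# Hypothetical export of the questionnaire responses, one row per respondent
responses = pd.read_csv("mcq_survey_responses.csv")

# Counts and percentages per agreement level for each questionnaire item
items = ["word_count", "english_language", "calculation", "analytical_thinking"]
for item in items:
    counts = responses[item].value_counts()
    percentages = (counts / len(responses) * 100).round().astype(int)
    print(f"\n{item}")
    print(pd.DataFrame({"n": counts, "%": percentages}))

# Age summarised as mean +/- SD, as in Table 1
print(f"\nAge: {responses['age'].mean():.1f} +/- {responses['age'].std():.2f}")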

III. RESULTS

A. Demographic Information

The questionnaire was completed online by participants from the second to fifth-year medical students in the academic year 2021-2022. There were 93 second-year, 92 third-year, 92 fourth-year, and 93 fifth-year medical students, giving 370 students in total. There were 298 respondents (a return rate of 81%): 73 second-year students (78% response rate), 70 third-year students (76%), 75 fourth-year students (81%), and 80 fifth-year students (86%), as shown in Table 1.

General information    Category         n (%)
Gender                 Male             102 (34)
                       Female           196 (66)
Age (year)             Mean ± SD        21.3 ± 1.23
                       Max, Min         28, 19
College Year           Second Year      73 (24)
                       Third Year       70 (23)
                       Fourth Year      75 (25)
                       Fifth Year       80 (27)

Table 1. Demographic information of student participants in the survey

Abbreviation: n = number, Max = maximum, Min = minimum

B. Students’ Perspective on Examination Time and Number of MCQs

From the questionnaires, it was found that the medical students thought the suitable number of questions for a 1-hour examination consisting of intermediate-level questions was approximately 41.4 ± 15.62 questions (min-max: 20-120 questions). Moreover, students preferred to gain extra points by guessing rather than leaving answers blank during the final period of the examination. Regardless of the difficulty of the examination or the time given, the students would rush to finish the examination in time. Most of the students started to guess the answers in the last 5.4 ± 1.11 minutes (min-max: 2-10 minutes).

C. The Information from the Survey and Semi-Structured Interview

The quantitative data also indicated that various factors influenced the examination duration according to the students’ perspectives. The first three factors were identified through quantitative survey research, encompassing 1) the number of tests and total word count, 2) English language questions, and 3) test difficulty influencing time allocation (including calculation questions and analytical thinking questions) (Table 2). Concurrently, the examination environment also impacted students’ concentration during each test. The latter two pieces of information were corroborated through triangulation from the semi-structured group interviews.

Question                                                   Level of Agreement n (%) (total n = 298)
                                                           Strongly Agree   Agree      Moderate   Disagree   Strongly disagree
1. Number of word count (texts)                            80 (27)          105 (35)   110 (37)   3 (1)      0 (0)
2. The English questions                                   77 (26)          80 (27)    110 (37)   24 (8)     7 (2)
3. The Calculation questions                               131 (44)         60 (20)    92 (31)    11 (4)     4 (1)
4. Analytical thinking tests (not a comprehension test)    105 (35)         105 (35)   77 (26)    11 (4)     0 (0)

Table 2. Evaluating Factors Affecting MCQ Test Time in Student’s Perspectives and the Rating Scores

Abbreviation: n = number

D. The Number of Tests and Total Word Count

According to some students, the exam questions were challenging and time-consuming, and the answer options were likewise lengthy. This showed that not only the number of tests but also the length of each test item affected the testing time.

Quote: Student B1F*; “The questions were too long. I can’t complete them in time.”

Quote: Student A2M*; “If there are too many questions in the exam, I wouldn’t be able to finish it”

* student’s code

English Language Questions and Examiners (Native Versus Non-Native English Speakers): The respondents, who were not native English speakers, believed that the English-language tests took longer to finish than the Thai-language tests. Accordingly, they decided to guess or to answer each question slowly since they could not fully understand the English questions.

Quote: Student D1F*; “I’m not good at reading English. Sometimes I just have to guess on the exam.”

Quote: Student C1M*; “The language in the test is too hard to understand.”

* student’s code

E. Test Difficulty Determining Time Allocation

For the analysis of coding, grouping, and generating themes, the author found that the medical students paid attention to the difficulty level of the questions which affected the decision to answer the questions.

1) The Calculation and Analytical Thinking: The calculation and analytical thinking tests took students longer to read. Additionally, students believed that examinations they had never taken before or exams that required knowledge application took longer to complete, such as exams that included questions requiring the students to diagnose patients by themselves which occasionally left them unsure of how to respond.

Quote: Student C2M*; “Calculation tests take a long time to get the answers.”

* student’s code

2) Recall Question Leads to Quick Answers: Students commented that recall-type questions, including tests from previous academic years, contained duplicated sentences, pictures, or messages from textbooks that students remembered. This led to students being able to complete the test in a short thinking time.

Quote: Student K1M*; “If the teacher copied the exact words from the course sheet, I would remember and answer questions quickly.”

Quote: Student L1M*; “If the questions are the same as in the sheet provided, I can answer them.”

* student’s code

This information indicated that the taxonomy of the test (recall, comprehension, application) had a large effect on decision time. Applied questions, rather than direct or calculation questions, required more attention and time for decision-making when compared with comprehension questions (questions about understanding knowledge). In contrast, recall questions required the least decision-making time.

F. The Visual Image and Atmosphere of the Examination: The Newly Derived Domains Recognised by Qualitative Analysis

1) Questions with images, graphs, or tables serve as key guides for decision-making: The students thought that exams containing graphs and tables helped them understand the questions better than questions with descriptions only, and this led to less time being consumed.

Quote: Student L2M*; “If the test got the exact same summary table from the book, I could remember and get the answers right away.”

* student’s code

2) The Atmosphere of the Examination: The environment and atmosphere of the exam were also mentioned. Students’ response times were slowed by distractions during the exam. Environmental factors such as brightness, temperature, and examination devices affected the students’ concentration.

Quote: Student H1F*; “The atmosphere in the exam venue, noise, and the air quality in the room affect the exam results.”

*student’s code

IV. DISCUSSION

The results revealed that students perceived lengthy exam content or a large number of questions as time-consuming, particularly when exams were conducted in English. Studies indicated that English speakers could read up to 150 words per minute (Trauzettel-Klosinski et al., 2012). However, for non-native English speakers, the expected reading time for exams was longer. Hence, using the English reading rate as a basis for determining exam duration was deemed unsuitable for Thai students, given that English was not their primary language of communication. Compared with a previous study (Trauzettel-Klosinski et al., 2012), the increased duration may result from decision making; this implies that reading for decision making requires more time than reading for context, a difference that is cumulatively larger for non-native English speakers.
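As a purely illustrative calculation (the 120-word question length is an assumed figure, not one drawn from this study), reading alone at the cited rate already consumes most of a one-minute-per-question budget:

\[ t_{\text{read}} = \frac{120\ \text{words}}{150\ \text{words/min}} = 0.8\ \text{min} \approx 48\ \text{s} \]

A non-native reader with a slower reading rate would therefore need proportionally longer before any decision-making begins.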

Qualitative findings indicated that irrespective of the exam duration set by the administering professor, students generally completed exams within the allotted time frame. This often entailed guessing answers towards the end of the exam period, as students might not have adequate time to complete the exam thoroughly. It was observed that students tended to resort to guessing exam questions approximately five minutes before the exam conclusion, thereby minimising threats to validity posed by guessing due to time constraints during the exam (Foley, 2019).

There may be limitations if the exam questions contain lengthy content that cannot be comprehended and decided upon within one minute. Furthermore, the difficulty level of the exam questions is often established as a passing criterion, prioritising validity considerations in terms of content format and achieving the intended objectives. Moreover, students naturally desire to obtain the highest possible score on the exam, regardless of the level of difficulty or length of the exam. Therefore, it is important for students to manage their time effectively to ensure they can complete all the exam questions within the given timeframe.

As noted above, the qualitative results indicated that regardless of the exam duration set by the administering professor, students ultimately completed the exam within the allotted time frame. Additionally, students agreed that application and calculation questions required more time to read and decide upon, as opposed to questions with figures and tables, which aided faster decision making. Based on these findings, it could be concluded that comprehensive reading rates may not be a reliable indicator of decision-making reading rates, particularly in the context of medical school exams. Therefore, studying decision-making reading rates within the context of medical school exams is crucial.

The researchers therefore examined the domains and specific factors relating to the characteristics of the MCQ test. Additionally, the study scope was limited to English tests administered to non-native English speakers and to onsite computer-based tests, thereby eliminating unrelated factors that could impact exam duration. The analysis yielded the following results. Firstly, factors positively correlated with exam duration (negatively correlated with decision-making) included the number of questions, total word count, calculation questions, and analytical thinking questions. Secondly, factors negatively correlated with exam duration (positively correlated with decision-making) were recall questions and questions with provided images and tables.

A factor contributing to longer reading times for decision-making purposes was when the exam contained a higher proportion of application or calculation questions, comprising over 33% of the exam questions, as evidenced by qualitative data from students. Therefore, analysing exam completion time based on reading comprehension data for decision-making purposes is not recommended. Moreover, it should be noted that these factors present internal threats to validity, but they can be managed to ensure that examination tools are effectively used and aligned with intended objectives. Incorporating data from research can lead to the identification of new themes related to factors influencing examination time.

Five constructive domains were identified: 1) the number and total word count, 2) positive difficulty factors (application/calculation questions), 3) negative difficulty factors (recall questions), 4) examiners (non-native English speakers or not), and 5) pictures/symbols in tests.

A distinctive aspect of this study was its targeted focus on Thai medical students who were non-native English speakers. While many studies have examined MCQ performance across broad and diverse populations, this research concentrated on a specific demographic, enabling a more in-depth exploration of how cultural and linguistic factors influence test-taking behaviour. The study uniquely combined quantitative survey data with qualitative insights from semi-structured group interviews. While some research utilised either quantitative or qualitative methods, this study’s integration of both provided a more holistic understanding of student perspectives and experiences (Lertwilaiwittaya et al., 2019). This methodological triangulation strengthened the validity of the findings by cross-verifying quantitative data with qualitative insights. In contrast to many existing studies that focused predominantly on performance metrics (such as scores or pass rates), this research examined the cognitive processes and decision-making strategies students employed while answering MCQs. It investigated how elements like question difficulty, language comprehension, and prior experiences shaped students’ approaches to test questions, a dimension less frequently explored in previous literature.

In conjunction with examination-related factors, students also recognised the importance of the test environment within the examination room, a new finding derived from the qualitative analysis in this research. This was crucial for promoting student concentration and facilitating accurate response selection in line with assessment tool objectives. It aligns with existing literature, which suggests that the test environment poses a construct-irrelevant threat to the validity of educational measurement. The findings from this study may lead to future research on developing a mathematical formula to tailor the exam duration for different sets of questions. This would involve analysing factors such as the number of words, length, difficulty, and the presence of images and tables in the exam. Additionally, the impact of language proficiency on reading and decision-making time should be considered, as there may be differences between native and non-native speakers. Future research should also include diverse populations of non-native English speakers from different countries and educational contexts, to help identify whether the findings are consistent across various cultural backgrounds and educational systems. Moreover, longitudinal studies should be conducted to track students’ performance and decision-making processes over time. This approach could provide insights into how experience and familiarity with MCQs influence students’ strategies and confidence levels throughout their medical education.
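A purely illustrative sketch of what such a duration formula might look like is given below. Every coefficient is a hypothetical placeholder to be estimated in future work rather than a value derived from this study, and the reading rate is the figure cited above for native English speakers.

# Illustrative per-question exam-duration estimator of the kind proposed for future research.
# All coefficients are hypothetical placeholders and would need to be fitted empirically;
# they are not values estimated in this study.

READING_RATE_WPM = 150         # cited reading rate for native English speakers
BASE_DECISION_MIN = 0.5        # assumed baseline decision time per question (minutes)
EXTRA_CALCULATION_MIN = 1.0    # assumed extra time for calculation questions
EXTRA_APPLICATION_MIN = 0.75   # assumed extra time for application/analytical questions
IMAGE_TABLE_SAVING_MIN = 0.25  # assumed time saved when an image or table guides the answer

def estimated_minutes(word_count, is_calculation=False, is_application=False,
                      has_image_or_table=False):
    # Combine the factors identified by the students into a rough per-question estimate
    minutes = word_count / READING_RATE_WPM + BASE_DECISION_MIN
    if is_calculation:
        minutes += EXTRA_CALCULATION_MIN
    if is_application:
        minutes += EXTRA_APPLICATION_MIN
    if has_image_or_table:
        minutes = max(minutes - IMAGE_TABLE_SAVING_MIN, 0.25)
    return minutes

# Example: a hypothetical 60-question paper with a mix of question types
questions = ([{"word_count": 60}] * 30 +                           # recall items
             [{"word_count": 120, "is_application": True}] * 20 +  # application items
             [{"word_count": 90, "is_calculation": True}] * 10)    # calculation items
total = sum(estimated_minutes(**q) for q in questions)
print(f"Estimated duration: {total:.0f} minutes for {len(questions)} questions")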

A major limitation of this research was the variation in learning experiences, exam-taking skills, and analytical thinking among medical students at different year levels, which might lead to differing opinions. Therefore, the researcher needed to conduct qualitative analysis to examine the reasons behind these differences. However, the diversity of experiences might also introduce bias due to varying familiarity with different types of exams. The online format restricted the depth of responses, as students often did not fully articulate their thoughts without immediate follow-up questions, which limited the richness of the qualitative data. Additionally, the focus on Thai medical students constrained the applicability of the findings to other populations or contexts, thereby limiting broader conclusions about non-native English speakers in different educational settings.

V. CONCLUSION

Based on the students’ perspectives, the data showed that questions with lengthy content required more time, whilst those with tables or diagrams required less time. This report indicated that the data acquired from a comprehensive reading examination should be distinguished from a decisive reading examination.

Factors positively correlated with exam duration include the number of questions, total word count, calculation-based questions, and analytical thinking questions. These factors should be considered for additional time allocation beyond the regular exam duration, particularly when the proportion of analytical thinking questions exceeds one-third of the total question set. On the other hand, recall questions, as well as questions accompanied by images and tables, should be taken into account to ensure a balanced distribution of exam time, as they can be answered more easily and quickly in terms of decision-making compared with general questions.

Notes on Contributors

CK conceived the presented idea, developed the theory, performed the computations, discussed the results, and contributed to the final manuscript. ST, KW, and TD discussed the results and wrote the manuscript with support from CK, NK, and TJ, designed the model and the computational framework, and analysed the data.

Ethical Approval

All participants voluntarily signed a consent form prior to participating in the study. The participation protocol was approved by the Human Research Ethics Committee, Suranaree University of Technology (Issue # EC-64-102).

Data Availability

Institutional ethical clearance was given to maintain the data in the secure storage of the principal investigator of the study. The data from this study may be provided upon reasonable request to the corresponding author. A preprint of our manuscript, which is not peer-reviewed, is available at https://www.researchsquare.com/article/rs-3019852/v1

Acknowledgement

The authors would like to thank the participants of this study, the medical students in the Institute of Medicine, Suranaree University of Technology. Without their passionate participation and input, the validation survey could not have been successfully conducted.

Funding

This work was supported by the Grant of Suranaree University of Technology (contract number SUT-602-64-12-08(NEW)).

Declaration of Interest

The authors have no conflicts of interest to disclose.

References

Arif, A. S., & Stuerzlinger, W. (2009). Analysis of text entry performance metrics [Conference presentation]. 2009 IEEE Toronto International Conference Science and Technology for Humanity (TIC-STH), Canada. https://doi.org/10.1109/TIC-STH.2009.5444533

Cassels, J., & Johnstone, A. (1984). The effect of language on student performance on multiple choice tests in chemistry. Journal of Chemical Education, 61(7), 613. https://doi.org/10.1021/ed061p613

Coughlin, P., & Featherstone, C. (2017). How to write a high-quality multiple-choice question (MCQ): A guide for clinicians. European Journal of Vascular and Endovascular Surgery, 54(5), 654-658. https://doi.org/10.1016/j.ejvs.2017.07.012

Carnegie Mellon University. (2019). Creating exams. Eberly Center. https://www.cmu.edu/teaching/assessment/assesslearning/creatingexams.html

Foley, B. P. (2019). Getting lucky: How guessing threatens the validity of performance classifications. Practical Assessment, Research, and Evaluation, 21(1), 3. https://doi.org/10.7275/1g6p-4y79

González, H. L., Palencia, A. P., Umaña, L. A., Galindo, L., & Villafrade M, L. A. (2008). Mediated learning experience and concept maps: A pedagogical tool for achieving meaningful learning in medical physiology students. Advances in Physiology Education, 32(4), 312-316. https://doi.org/10.1152/advan.00021.2007

Kumar, D., Jaipurkar, R., Shekhar, A., Sikri, G., & Srinivas, V. (2021). Item analysis of multiple choice questions: A quality assurance test for an assessment tool. Medical Journal Armed Forces India, 77, S85-S89. https://doi.org/10.1016/j.mjafi.2020.11.007

Lertwilaiwittaya, P., Sitticharoon, C., Maikaew, P., & Keadkraichaiwat, I. (2019). Factors influencing the National License Examination step 1 score in preclinical medical students. Advances in Physiology Education, 43(3), 306-316. https://doi.org/10.1152/advan.00197.2018

O’Dwyer, A. (2012). A teaching practice review of the use of multiple-choice questions for formative and summative assessment of student work on advanced undergraduate and postgraduate modules in engineering. All-Ireland Journal of Teaching and Learning in Higher Education, 4(1). https://doi.org/10.21427/D7C03R

Phillips, J. K., Klein, G., & Sieck, W. R. (2004). Expertise in judgment and decision making: A case for training intuitive decision skills. In Blackwell handbook of judgment and decision making (pp. 297-315). https://doi.org/10.1002/9780470752937.ch15

Schenck, A. (2020). Examining the influence of native and non-native English-speaking teachers on Korean EFL writing. Asian-Pacific Journal of Second and Foreign Language Education, 5(1), 2. https://doi.org/10.1186/s40862-020-00081-3

Stringer, J., Santen, S. A., Lee, E., Rawls, M., Bailey, J., Richards, A., . . . Biskobing, D. (2021). Examining Bloom’s taxonomy in multiple choice questions: Students’ approach to questions. Medical Science Educator, 31(4), 1311-1317. https://doi.org/10.1007/s40670-021-01305-y

Tehan, G., & Tolan, G. A. (2007). Word length effects in long-term memory. Journal of Memory and Language, 56(1), 35-48. https://doi.org/10.1016/j.jml.2006.08.015

Trauzettel-Klosinski, S., Dietz, K., & IReST Study Group. (2012). Standardized assessment of reading performance: The new International Reading Speed Texts IReST. Investigative Ophthalmology & Visual Science, 53(9), 5452-5461. https://doi.org/10.1167/iovs.11-8284

Wang, A. (2019, July 15). How to determine the best length for your assessment. Pear Deck Learning. https://www.peardeck.com/blog/how-to-determine-the-best-length-for-your-assessment

Ziefle, M. (1998). Effects of display resolution on visual performance. Human Factors, 40(4), 554-568. https://doi.org/10.1518/001872098779649355

*Assoc. Prof. Chatchai Kreepala, M.D.
Institute of Medicine
Suranaree University of Technology
Thailand
+66(93)3874665
Email: chatchaikree@gmail.com
