Impact of Scholarly Project on students’ perception of research skills: A quasi-experimental study
Submitted: 22 January 2022
Accepted: 4 May 2022
Published online: 4 October, TAPS 2022, 7(4), 50-58
https://doi.org/10.29060/TAPS.2022-7-4/OA2748
Nguyen Tran Minh Duc, Khuu Hoang Viet & Vuong Thi Ngoc Lan
University of Medicine and Pharmacy at Ho Chi Minh City, Ho Chi Minh City, Vietnam
Abstract
Introduction: The Scholarly Project provides medical students with an opportunity to conduct research on a health and health care topic of interest with faculty mentors. Despite the proven benefits of the Scholarly Project, there has only been gradual change to undergraduate medical education in Vietnam. In the academic year of 2020-2021, the University of Medicine and Pharmacy (UMP) at Ho Chi Minh City launched the Scholarly Project as part of an innovative educational program. This study investigated the impact of the Scholarly Project on participating undergraduate medical students’ perception of their research skills.
Methods: A questionnaire evaluating the perception of fourteen research skills was given to participants in the first week, at midterm, and after finishing the Scholarly Project; students assessed their level on each skill using a 5-point Likert scale from 1 (lowest score) to 5 (highest score).
Results: There were statistically significant increases in scores for 11 skills after participation in the Scholarly Project. Of the remaining three skills, ‘Understanding the importance of “controls”’ and ‘Interpreting data’ showed a trend towards improvement, while ‘Statistically analyse data’ showed a downward trend.
Conclusion: The Scholarly Project had a positive impact on each student’s perception of most research skills and should be integrated into the revamped undergraduate medical education program at UMP, with detailed instruction on targeted skills for choosing the optimal study design and follow-up assessment.
Keywords: Study Skills, Scholarly Project, Undergraduate, Medical Education, Self-Assessment
Practice Highlights
- The Scholarly Project is an essential component of the undergraduate medical education curriculum.
- Targeting research skills is a valuable method to optimise competency-based criteria.
- The initial choice of study design is important to the overall improvement in self-perceived research skills.
I. INTRODUCTION
The Scholarly Project has emerged as an essential component of the modern undergraduate medical curriculum. This entails mentored study in a single topic area and may include classical hypothesis-driven research, literature reviews, or the creation of a medically-related product (Boninger et al., 2010). By researching a topic, designing and implementing experiments and analysing the results, students not only gain knowledge and experience but also essential skills including critical thinking, time management, collaboration, information technology and confidence, all of which benefit their academic endeavours and result in higher undergraduate graduation rates (Bickford et al., 2020; Carson, 2007). Furthermore, the Scholarly Project program, which allows students to learn about research, was rated positively by most undergraduates. In addition, it provides faculty members with assistance in their research projects and the chance to influence future generations (Dagher et al., 2016). It has also been noted that the process of exposing undergraduate students to research benefits the researchers who take part as instructors by refining and shaping their scientific minds (Zydney et al., 2002).
The number of research studies with Vietnamese authorship published in ISI-indexed journals increased considerably between 2001 and 2015, with an annual growth rate of 17%. However, the majority of this growth (77%) was accounted for by international collaboration research rather than domestic-only projects, especially in the clinical medicine area. Thus, scientific research in Vietnam had not changed considerably or achieved independence in this field (Nguyen et al., 2016).
In the academic year of 2020-2021, the University of Medicine and Pharmacy at Ho Chi Minh City (UMP), Vietnam, pioneered the launch of a one-year Scholarly Project for all fifth-year medical students. This medical student population is the first generation to learn under the refreshed Undergraduate Medical Curriculum of the UMP and the first class to experience the Scholarly Project. Undergraduate research experiences are characterised by four features: mentorship, originality, acceptability, and dissemination (Kardash, 2000). Assessment of undergraduate research experience, which determines whether students gained any research skills (such as identifying the research question, collecting data, thinking independently and creatively) is best performed after completing the research program (Blockus et al., 1997; Manduca, 1997). The quasi-experimental work presented here provides one of the first investigations into how the Scholarly Project at the UMP, Vietnam, impacted on the participating students’ perception of how their medical research skills improved in the academic year of 2020-2021.
II. METHODS
A. Description of the Scholarly Project
The Scholarly Project is a compulsory academic module that aims to enable fifth-year medical students to conduct medical research early in their careers. It provides these students with an active experience in conducting a research project with faculty members starting at the beginning of the fifth academic year. The data reported here were collected from medical students and mentors who participated during the 2020-2021 academic year.
For most medical students, the Scholarly Project provides the first exposure to the field of research. Students are organised into 48 groups of nine medical students (one team leader, one secretary, and team members), each working with one faculty mentor. Medical students are expected to contribute actively to the best of their ability, in committed teamwork and in an ethical manner.
Members of the faculties of Medicine and Public Medicine who have active ongoing research projects are eligible to participate in the Scholarly Project. Faculty members act as mentors to the students and facilitate the students’ learning process by providing supervision, guidance, and support. In addition, members should allocate suitable tasks for each student based on their skills, expertise, interests, and background.
B. Scholarly Project Steps
1) Student orientation: Student orientation occurred in the first week, informing students of the program’s procedure, and their roles and responsibilities (Figure 1). Also, in the first week, the medical student curriculum included a medical research course, describing the formation of research ideas, study design and statistics, literature searching and referencing, and research ethics. Students were also provided with important dates and deadlines for the Scholarly Project stage.
2) Matching: Matching is the process of pairing students with project mentors. From the first weeks of the Scholarly Project, each student team is required to create a team profile on the university website, including the scientific interests, skills, and research fields of interest of each team member. Each medical student team then chose a mentor from a provided list, taking into account the mentors’ medical research fields and research curricula vitae. Each team picked up to two mentors, in order of preference. After the deadline, mentors chose which team they would like to work with based on the students’ choices; this process continued until all teams were paired.
3) Work initiation: Students were expected to initiate contact with the faculty member after being notified via the university website that they had been matched to a project. During the second week of the Scholarly Project, faculty members and students discussed the research project and their roles and responsibilities. Upon finalising the agreement between the two parties, students completed a meeting report form, which was signed by both the mentor(s) and the team leader. During online learning periods due to COVID-19, online meetings were encouraged, along with completion of the meeting report form. This meeting report form included information about topics discussed during the meeting, future work, each student’s role in the research project, and confirmed the next appointment date. Student teams and faculty members scheduled meetings based on the design of their study. In follow-up meetings, faculty mentors continued to discuss and evaluate the medical students’ work, and further plans were discussed. There was no upper limit on the number of meetings. However, there was a second required meeting in the third week of the Scholarly Project, which was nearly the end of the module, for the research team to provide an update on the collected data, troubleshoot problems, and receive feedback.
4) Presentation: In the final week of the fifth-year curriculum, a Scholarly Project Symposium provided the opportunity for research teams to present their project findings. This allowed the scientific committee to evaluate both the performance of each student and the research project in general. Another aim of the symposium was for medical students to learn from and share their findings with other teams, and the presentations also provide a valuable reference for subsequent classes.

Figure 1. Integration of the Scholarly Project into the new reformed undergraduate and postgraduate medical curriculum in Vietnam.
C. Study Setting and Participants
This one-group pretest-posttest study had a quasi-experimental design. The research skills assessed were fourteen individual research skills identified previously (Kardash, 2000). The questionnaire has been used previously, with a Cronbach’s alpha of 0.9 and item-total correlations ranging from 0.49 to 0.76 (Kardash, 2000). The questionnaire was translated into Vietnamese, then the local language version was pre-tested and the final text was amended as necessary. The translation process was undertaken in accordance with the Guidelines for the Cross-Cultural Adaptation Process (Beaton et al., 2000). Translations were evaluated and compared with the original questionnaire by the Education and Research Council of the UMP to ensure accuracy of the Vietnamese version prior to study initiation. Medical student surveys were administered during the first week of the Scholarly Project, and students were asked to indicate their current level of performance for each skill and the extent to which they hoped that the project would develop each skill on a 5-point Likert scale (where higher scores indicate greater skill level). Surveys were repeated at midterm and during the last week of the Scholarly Project module; at these times the students used the same scale to rate the extent to which they felt capable of performing each skill and how they believed the project had developed their skills in general. Medical students had to provide informed consent on the first page of the electronic form before accessing the rest of the questionnaire.
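To illustrate the reliability statistics cited above, the following is a minimal Python sketch (not part of the original study) showing how Cronbach’s alpha and corrected item-total correlations can be computed for a respondents-by-items matrix of 1–5 Likert scores; the function and variable names are illustrative assumptions.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    # rows = respondents, columns = the 14 Likert items (scored 1-5)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def corrected_item_total_correlations(scores: np.ndarray) -> np.ndarray:
    # correlation of each item with the sum of all the other items
    totals = scores.sum(axis=1)
    return np.array([
        np.corrcoef(scores[:, j], totals - scores[:, j])[0, 1]
        for j in range(scores.shape[1])
    ])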
D. Statistical Analysis
Raw data were extracted from the online survey link for each participating medical student and saved in Excel sheets. R (R Core Team, Vienna, Austria) was used to analyse the data. First, scores for each skill at baseline were compared with those obtained after project completion using a paired Student’s t-test. The same method was used to compare the expected skill level evaluated at baseline with the actual skill level rating at the end of the Scholarly Project. A p-value of <0.05 was considered statistically significant.
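As an illustration of the paired comparison described above, the sketch below shows one possible implementation. The authors used R; this Python version is provided purely for illustration, and the column names ("skill", "baseline", "completion") are hypothetical.

import pandas as pd
from scipy import stats

def compare_skills(df: pd.DataFrame) -> pd.DataFrame:
    # df is assumed to hold one row per student per skill, with hypothetical
    # columns "skill", "baseline" and "completion" (1-5 Likert scores).
    rows = []
    for skill, grp in df.groupby("skill"):
        t_stat, p_value = stats.ttest_rel(grp["completion"], grp["baseline"])
        rows.append({
            "skill": skill,
            "mean_baseline": grp["baseline"].mean(),
            "mean_completion": grp["completion"].mean(),
            "t": t_stat,
            "p": p_value,
            "significant": p_value < 0.05,  # threshold used in the study
        })
    return pd.DataFrame(rows).sort_values("p")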
III. RESULTS
A. Response Rate and Participant Data
Of 384 students participating in the Scholarly Project, 194 (50.5%) completed the survey. The majority of participants were male (60%) and had the role of project team member (75.3%) (See Table 1). The most common Scholarly Project design was a cross-sectional study (47.9%), followed by study protocol development (21.1%), case/case series report (11.9%), and literature review (10.3%) (Table 1). Twenty-one different departments with a wide range of specialties provided scientific mentors for the Scholarly Projects undertaken by 48 research groups (See Table 1).


Table 1. Demographic and project characteristics for survey respondents.
Values are mean ± standard deviation, or number of respondents (%).
B. Research Skills at Baseline, Midterm and Project Completion
At baseline, self-rated competency was highest for ‘Understand the importance of “controls”’, ‘Understand contemporary concepts’, ‘Identify a specific question’, and ‘Observe and collect data’ (Figure 2). All skills had self-evaluated levels above “moderate” (score of >3), except for ‘Write research for publication’ (mean score 2.696). Students expected that all skills would increase after participating in the Scholarly Project (p<0.001).
In the midterm survey, five skill groups showed significant improvement from baseline (Figure 2). These were ‘Make use of scientific literature’, ‘Identify a specific question’, ‘Observe and collect data’, ‘Relate results to the “bigger picture”’, and ‘Orally communicate research projects’. Conversely, there was a significant decrease in self-rated skill for ‘Statistically analyse data’ and ‘Interpret data’, while other skill ratings were stable (Figure 2).

Figure 2. Change in self-rated medical research skills of 194 participants from baseline to the midterm of the Scholarly Project
M: mean; SD: standard deviation; CI: confidence interval
At the completion of the Scholarly Project, the five skills that showed improvement at the midterm assessment showed continued improvement, and another six skills had also improved significantly compared with baseline (Figure 3). However, scores for ‘Understand the importance of “controls”’, ‘Interpret data’ and ‘Statistically analyse data’ did not change significantly from baseline, and the mean score for the latter skill was actually slightly below baseline (Figure 3).

Figure 3. Change in self-rated medical research skills of 194 participants from baseline to completion of the Scholarly Project (SP)
M: mean; SD: standard deviation; CI: confidence interval.
Looking more closely at analytical skills in relation to the six types of study design, self-rated skill for ‘Interpret data’ decreased significantly for literature reviews, as did self-rated scores for ‘Statistically analyse data’ in relation to study protocol development and literature review (Table 2). In contrast, there was a significant improvement in self-rated skill for ‘Interpret data’ for cross-sectional studies and for ‘Statistically analyse data’ in cohort studies (Table 2).

Table 2. Self-evaluated skill level scores for ‘Interpret data’ and ‘Statistically analyse data’ from baseline to completion of the Scholarly Project
Values are mean ± standard deviation. *p<0.05 vs baseline.
IV. DISCUSSION
A. Impact of Scholarly Project on Students’ Perception of Research Skills
Our results show that ratings for most skills increased during and after the Scholarly Project. Increases in ratings for ‘Identify a specific question’, ‘Orally communicate research projects’, and ‘Relate results to the “bigger picture”’ in our study were consistent with data from Schor et al. (2005), who reported that the Scholarly Project could be beneficial by fostering analytical thinking skills, improving oral communication skills, and enhancing skills for evaluating and applying new knowledge to their profession. A significant increase in ‘Make use of scientific literature’ in our study reflects the idea-forming process at the study design stage of the Scholarly Project, during which students could practice the ability to read and critically evaluate medical literature. These are essential components of undergraduate medical education, irrespective of whether students intend to pursue a career in academic medicine or in public or private clinical practice (Holloway et al., 2004).
B. Data-related Skills and the Concept of a Control Group
The two skills of ‘Statistically analyse data’ and ‘Interpret data’ are introduced mainly in the Advanced Statistics Module, a 2-week training period before starting the Scholarly Project, and are presented briefly in the ‘Basic statistics informatics’ module during the first year of training and in the ‘Basic epidemiology’ module during the third year of the undergraduate curriculum. Therefore, baseline assessments in our study took place after the Advanced Statistics Module, which could have influenced ratings on these skills. Given that our midterm assessment was performed at a time when most students had not had the opportunity to practice these skills, there may have been a negative impact on self-evaluation. The change in scores for ‘Statistically analyse data’ and ‘Interpret data’ at the midterm assessment was therefore influenced by an external factor (the Advanced Statistics Module) and an internal factor (the Scholarly Project). Future assessments of the impact of the Scholarly Project on learning should therefore not use the quasi-experimental design applied here, but should instead use an interrupted time-series design, in which several surveys would be conducted before starting the Advanced Statistics Module, with the aim of eliminating confounding factors.
The final assessment showed significant improvements in scores for ‘Statistically analyse data’ and ‘Interpret data’ compared with the midterm survey. When applied in students’ projects, the improvement of these two skills indirectly supports the context described above and highlights the value of active learning compared with passive learning. It has been shown that cramming statistical knowledge leaves students without the understanding of basic concepts needed to apply them appropriately (Leppink, 2017). As noted by Leppink, statistics should be integrated into medical subjects; familiarity with these subjects and the repeated use of these skills provides opportunities to develop statistical skills, and the Scholarly Project is a typical example of this approach. However, compared with baseline, only the ‘Statistically analyse data’ skill showed a downward pattern, while the ‘Interpret data’ skill increased slightly, suggesting that the Scholarly Project should focus more on these skills. Additional studies that take these variables into account are needed.
The control group concept is taught in Basic Epidemiology during the third year of Basic Science and in the first sessions of the Scholarly Project. The control group has a pivotal role in study design and should have elements that match the experimental group’s characteristics, except for the intervention/variable applied to the latter (Kinser & Robins, 2013). This scientific control group enables the experimental study of one variable at a time, and it is an essential part of the scientific method. Two identical experiments are carried out in a controlled experiment: in one of them, the treatment or tested factor is applied (experimental group), whereas in the other group (control), the tested factor is not applied (Pithon, 2013). However, because only four respondents had a project with a case-control study design, the ‘Understand the importance of “controls”’ skill showed only a modest improvement, despite having been taught previously, which is similar to a previous undergraduate research study (Kardash, 2000). Compared with the cross-sectional study design, which was the most popular design in this Scholarly Project, case-control studies often require greater human and facility resources. We suggest that a case-control study with a small sample size of 10–20 could be a suitable study design for medical students to understand how best to conduct research with a control group.
Of the 194 respondents in our study, 56.7% should have been able to fully experience all fourteen of the skills assessed. In contrast, those who participated in study protocol development, literature review, and case/case series report projects had limited opportunities to practice analytical skills. Similar to our findings, a previous study demonstrated that only 13% of 475 projects conducted by medical students contained the four main research skill areas of research methods, information gathering, critical analysis and review, and data processing (Murdoch-Eaton et al., 2010). Furthermore, the COVID-19 outbreak during the academic year 2020-2021 significantly impacted the originally planned Scholarly Project data collection process. As a result, some research teams switched to more feasible study designs, such as study protocol development or literature review, which potentially influenced the statistical analysis and data interpretation skills. It could therefore be hypothesised that these conditions are less likely to occur if participants recognise the skills required for research before designing the study protocol. Thus, there is room for further progress in determining the optimal project descriptions provided to medical students participating in the Scholarly Project to allow them to benefit from the research opportunities and fully develop essential skills.
C. The Role of Scholarly Project in Medical Education in Vietnam
This Scholarly Project is an essential step in curriculum reform for Vietnam’s medical education system. In the last two decades, medical educators in Vietnam have collaborated to promote the social trend for undergraduate medical education, and to identify the goals and outcomes of learning expected from medical graduates in terms of knowledge, attitudes, and skills (Hoat et al., 2009). Furthermore, Vietnamese policymakers created an environment that enabled academic innovation by implementing the necessary changes to national university autonomy policies (Duong et al., 2021). These policies enable public universities to be financially independent, manage their operations and human resources, prioritise technology, and develop new curricula. The Scholarly Project helps to train physicians who are better prepared to meet patient requirements and health needs (Fan et al., 2012). In line with competency-based medical education, the Scholarly Project focuses on outcomes, emphasises the application of knowledge and practice, and promotes greater learner-centeredness (Carraccio et al., 2002; Frank et al., 2010; Iobst et al., 2010). In addition, the Scholarly Project helps to reduce the time spent in passive lectures, which can negatively affect medical students (Deslauriers et al., 2019; Schwartzstein et al., 2020; Schwartzstein & Roberts, 2017). Instead, students are encouraged to explore research topics based on their interests, human and institutional resources, and university mentors’ guidance and follow-up. Compared with the large class sizes of Vietnam’s traditional teaching methods, the Scholarly Project (with an average of eight students per mentor) provides a low faculty-to-student ratio, creating the desired small-group learning environment. Starting for the first time in the 2020-2021 school year, the Scholarly Project had to adapt to the impact of the COVID-19 pandemic, with two periods of online learning required in September 2020 and May 2021 due to local COVID-19 outbreaks. To help manage this, the university used Microsoft Office 365 with a full-access subscription to maintain the scheduled small-group meetings between students and their mentors while optimising social distancing (Duong et al., 2021).
We recommend introducing the 14-skill questionnaire as a tool for medical students to self-monitor their improvement during participation in the Scholarly Project. From the mentors’ perspective, the questionnaire provides a reliable and convenient reference for giving students feedback and suggestions about areas that need further improvement. These approaches could also be utilised by other institutions, either locally or internationally, that include a Scholarly Project, for a number of reasons: (1) the Scholarly Project is a lengthy module that could be impacted by unexpected events (e.g. COVID-19); (2) routine self-checks and mentor feedback are needed to facilitate the required improvement in research skills; and (3) the questionnaire is a validated, convenient and accessible method for both medical students and mentors.
D. Study Limitations
Although the survey was sent to all medical students participating in the Scholarly Project, only just over half of students responded. Therefore, the impact of the Scholarly Project on non-responding medical students may not reflect the trends reported here, limiting the generalizability of our findings. Nonresponse bias is another potential limitation, although this is not necessarily associated with a lower response rate (Davern, 2013; Halbesleben & Whitman, 2012). Participants might perceive that self-evaluation about how much their research skills had improved could indirectly reflect their level of participation in the Scholarly Project, the contribution of their mentor, and the level of their academic performance, leading to social desirability bias in their responses. We attempted to reduce nonresponse and social desirability bias, and any perception that responses could impact on academic assessments, by making survey responses anonymous and keeping the study survey completely separate from any academic assessments (e.g. grade-point average). Another limitation is the lack of a control group of medical students, but this is difficult because participation in the Scholarly Project is mandatory for all students. Using a control group would have strengthened the study from a methodological perspective and allowed investigation of the impact of specific aspects of the Scholarly Project.
Response shift bias is inevitable when conducting this type of research. To reduce it, instead of completing the self-evaluation for all fourteen skills initially and then after completion of the whole project, students could assess their skill level immediately after the completion of each module. However, response shift bias occurred because respondents perceived the purpose of the survey as assessing the program’s effectiveness. In the context of our research, even if assessments were completed after each module, students would realise the aim of the survey, meaning that response shift bias would not decrease considerably.
V. CONCLUSION
The Scholarly Project is an excellent learning opportunity for medical students in the refreshed undergraduate medical curriculum. Participating in a Scholarly Project provides students with research experience, including the knowledge, structure, and support needed to engage in scholarly work. By providing the foundations for scholarly work, the Scholarly Project enables medical students to enter the health care workforce with solid clinical expertise and the basic skills required to conduct high-quality projects that improve the safety and quality of care delivered to patients. We suggest integrating the Scholarly Project throughout the undergraduate medical education curriculum in Vietnam. This is important in terms of early experience of medical research and fostering a good understanding of medical scientific research for all future doctors, regardless of their ultimate career destination.
Notes on Contributors
N.T.M.D. and K.H.V. drafted and revised the manuscript. V.T.N.L. helped in reviewing the manuscript. All authors (N.T.M.D., K.H.V., V.T.N.L.) have made substantial contributions to the conception and design of the work and the acquisition, analysis, and interpretation of data. All authors read and approved the final manuscript.
Ethical Approval
The authors declare that this study did not require human ethics approval and did not include experiments on animal or human subjects. This study was submitted to the Institutional Review Board (IRB) at University of Medicine and Pharmacy at Ho Chi Minh City, Ho Chi Minh City, Vietnam. This project was determined to be exempt from IRB review. All methods were carried out in accordance with relevant guidelines and regulations. Respondents were informed that their participation in the survey was completely voluntary and there were no risks associated with their participation.
Data Availability
The datasets generated and/or analysed during the current study are not publicly available for reasons of data protection but are available from the corresponding author on reasonable request.
Acknowledgement
The authors would very much like to acknowledge Ms. Le Minh Chau, Mr. Ung Nguyen Vu Hoang, Ms. Duong Kim Ngan, Mr. Nguyen Hai Dang, Ms. Tran Thi Hong Ngoc, Mr. Giang Luu Thanh Hoang, and Mr. Nguyen Hoang Nhan (University of Medicine and Pharmacy at Ho Chi Minh City, Vietnam) for their support of this study.
Funding
No funding has been received for the study.
Declaration of Interest
The authors declare that they have no competing interests.
References
Beaton, D. E., Bombardier, C., Guillemin, F., & Ferraz, M. B. (2000). Guidelines for the process of cross-cultural adaptation of self-report measures. Spine, 25(24), 3186-3191. https://doi.org/10.1097/00007632-200012150-00014
Bickford, N., Peterson, E., Jensen, P., & Thomas, D. (2020). Undergraduates interested in STEM research are better students than their peers. Education Sciences, 10(6), 150. https://doi.org/10.3390/educsci10060150
Blockus, L., Kardash, C. M., Blair, M., & Wallace, M. (1997). Undergraduate internship program evaluation: A comprehensive approach at a research university. Council on Undergraduate Research, 18, 60–63.
Boninger, M., Troen, P., Green, E., Borkan, J., Lance-Jones, C., Humphrey, A., Gruppuso, P., Kant, P., McGee, J., Willochell, M., Schor, N., Kanter, S. L., & Levine, A. S. (2010). Implementation of a longitudinal mentored scholarly project: An approach at two medical schools. Academic Medicine, 85(3), 429–437. https://doi.org/10.1097/acm.0b013e3181ccc96f
Carraccio, C., Wolfsthal, S. D., Englander, R., Ferentz, K., & Martin, C. (2002). Shifting Paradigms. Academic Medicine, 77(5), 361–367. https://doi.org/10.1097/00001888-200205000-00003
Carson, S. (2007). A new paradigm for mentored undergraduate research in molecular microbiology. CBE—Life Sciences Education, 6(4), 343–349. https://doi.org/10.1187/cbe.07-05-0027
Dagher, M. M., Atieh, J. A., Soubra, M. K., Khoury, S. J., Tamim, H., & Kaafarani, B. R. (2016). Medical Research Volunteer Program (MRVP): Innovative program promoting undergraduate research in the medical field. BMC Medical Education, 16(1), Article 160. https://doi.org/10.1186/s12909-016-0670-9
Davern, M. (2013). Nonresponse rates are a problematic indicator of nonresponse bias in survey research. Health Services Research, 48(3), 905–912. https://doi.org/10.1111/1475-6773.12070
Deslauriers, L., McCarty, L. S., Miller, K., Callaghan, K., & Kestin, G. (2019). Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom. Proceedings of the National Academy of Sciences, 116(39), 19251–19257. https://doi.org/10.1073/pnas.1821936116
Duong, D. B., Phan, T., Trung, N. Q., Le, B. N., Do, H. M., Nguyen, H. M., Tang, S. H., Pham, V. A., Le, B. K., Le, L. C., Siddiqui, Z., Cosimi, L. A., & Pollack, T. (2021). Innovations in medical education in Vietnam. BMJ Innovations, 7(Suppl 1), s23–s29. https://doi.org/10.1136/bmjinnov-2021-000708
Fan, A. P., Tran, D. T., Kosik, R. O., Mandell, G. A., Hsu, H. S., & Chen, Y. S. (2012). Medical education in Vietnam. Medical Teacher, 34(2), 103–107. https://doi.org/10.3109/0142159x.2011.613499
Frank, J. R., Snell, L. S., Cate, O. T., Holmboe, E. S., Carraccio, C., Swing, S. R., Harris, P., Glasgow, N. J., Campbell, C., Dath, D., Harden, R. M., Iobst, W., Long, D. M., Mungroo, R., Richardson, D. L., Sherbino, J., Silver, I., Taber, S., Talbot, M., & Harris, K. A. (2010). Competency-based medical education: Theory to practice. Medical Teacher, 32(8), 638–645. https://doi.org/10.3109/0142159x.2010.501190
Halbesleben, J. R. B., & Whitman, M. V. (2012). Evaluating survey quality in health services research: A decision framework for assessing nonresponse bias. Health Services Research, 48(3), 913–930. https://doi.org/10.1111/1475-6773.12002
Hoat, L. N., Lan Viet, N., van der Wilt, G., Broerse, J., Ruitenberg, E., & Wright, E. (2009). Motivation of university and non-university stakeholders to change medical education in Vietnam. BMC Medical Education, 9(1), Article 49. https://doi.org/10.1186/1472-6920-9-49
Holloway, R., Nesbit, K., Bordley, D., & Noyes, K. (2004). Teaching and evaluating first and second year medical students’ practice of evidence-based medicine. Medical Education, 38(8), 868–878. https://doi.org/10.1111/j.1365-2929.2004.01817.x
Iobst, W. F., Sherbino, J., Cate, O. T., Richardson, D. L., Dath, D., Swing, S. R., Harris, P., Mungroo, R., Holmboe, E. S., & Frank, J. R. (2010). Competency-based medical education in postgraduate medical education. Medical Teacher, 32(8), 651–656. https://doi.org/10.3109/0142159x.2010.500709
Kardash, C. M. (2000). Evaluation of undergraduate research experience: Perceptions of undergraduate interns and their faculty mentors. Journal of Educational Psychology, 92(1), 191–201. https://doi.org/10.1037/0022-0663.92.1.191
Kinser, P. A., & Robins, J. L. (2013). Control group design: Enhancing rigor in research of mind-body therapies for depression. Evidence-Based Complementary and Alternative Medicine, 2013. https://doi.org/10.1155/2013/140467
Leppink, J. (2017). Helping medical students in their study of statistics: A flexible approach. Journal of Taibah University Medical Sciences, 12(1), 1–7. https://doi.org/10.1016/j.jtumed.2016.08.007
Manduca, C. (1997). Broadly defined goals for undergraduate research projects: A basis for program evaluation. Council on Undergraduate Research, 18(2), 64–69.
Murdoch-Eaton, D., Drewery, S., Elton, S., Emmerson, C., Marshall, M., Smith, J. A., Stark, P., & Whittle, S. (2010). What do medical students understand by research and research skills? Identifying research opportunities within undergraduate projects. Medical Teacher, 32(3), e152–e160. https://doi.org/10.3109/01421591003657493
Nguyen, T. V., Ho-Le, T. P., & Le, U. V. (2016). International collaboration in scientific research in Vietnam: An analysis of patterns and impact. Scientometrics, 110(2), 1035–1051. https://doi.org/10.1007/s11192-016-2201-1
Pithon, M. M. (2013). Importance of the control group in scientific research. Dental Press Journal of Orthodontics, 18(6), 13–14. https://doi.org/10.1590/s2176-94512013000600003
Schor, N. F., Troen, P., Kanter, S. L., & Levine, A. S. (2005). The scholarly project initiative: Introducing scholarship in medicine through a longitudinal, mentored curricular program. Academic Medicine, 80(9), 824–831. https://doi.org/10.1097/00001888-200509000-00009
Schwartzstein, R. M., Dienstag, J. L., King, R. W., Chang, B. S., Flanagan, J. G., Besche, H. C., Hoenig, M. P., Miloslavsky, E. M., Atkins, K. M., Puig, A., Cockrill, B. A., Wittels, K. A., Dalrymple, J. L., Gooding, H., Hirsh, D. A., Alexander, E. K., Fazio, S. B., & Hundert, E. M. (2020). The Harvard Medical School pathways curriculum: Reimagining developmentally appropriate medical education for contemporary learners. Academic Medicine, 95(11), 1687–1695. https://doi.org/10.1097/acm.0000000000003270
Schwartzstein, R. M., & Roberts, D. H. (2017). Saying goodbye to lectures in medical school — Paradigm shift or passing fad? New England Journal of Medicine, 377(7), 605–607. https://doi.org/10.1056/nejmp1706474
Zydney, A. L., Bennett, J. S., Shahid, A., & Bauer, K. W. (2002). Impact of undergraduate research experience in engineering. Journal of Engineering Education, 91(2), 151–157. https://doi.org/10.1002/j.2168-9830.2002.tb00687.x
*Nguyen Tran Minh Duc
217 Hong Bang Street, Ward 11,
District 5, Ho Chi Minh City, Vietnam
+84 988 127 948
Email: ntmduc160046@ump.edu.vn
Submitted: 13 January 2022
Accepted: 9 May 2022
Published online: 4 October, TAPS 2022, 7(4), 35-49
https://doi.org/10.29060/TAPS.2022-7-4/OA2699
Yuan Kit Christopher Chua1*, Kay Wei Ping Ng1*, Eng Soo Yap2,3, Pei Shi Priscillia Lye4, Joy Vijayan1, & Yee Cheun Chan1
1Department of Medicine, Division of Neurology, National University Hospital Singapore, Singapore; 2Department of Haematology-oncology, National University Cancer Institute Singapore, Singapore; 3Department of Laboratory Medicine, National University Hospital Singapore, Singapore; 4Department of Medicine, Division of Infectious Diseases, National University Hospital Singapore, Singapore
*Co-first authors
Abstract
Introduction: In-class engagement enhances learning and can be measured using observational tools. As the COVID-19 pandemic shifted teaching online, we modified a tool to measure the engagement of instructors and students, comparing in-person with online teaching and different class types.
Methods: Video recordings of in-person and online teachings of six identical topics each were evaluated using our ‘In-class Engagement Measure’ (IEM). There were three topics each of case-based learning (CBL) and lecture-based instruction to a large class (LLC). Student IEM scores were: (1) no response, (2) answers when directly questioned, (3) answers spontaneously, (4) questions spontaneously, (5) initiates group discussions. Instructor IEM scores were: (1) addressing passive listeners, (2) asking ≥1 students, (3) initiates discussions, (4) monitors small group discussion, (5) monitoring whole class discussions.
Results: Twelve video-recorded sessions were analysed. For instructors, there were no significant differences in the percentage of time with no engagement or in IEM scores when comparing in-person with online teaching. For students, there was a significantly higher percentage of time with no engagement for the online teaching of two topics. For class type, the percentage of time with no engagement was lower and IEM scores were higher for CBL than for LLC.
Conclusion: Our modified IEM tool demonstrated that instructors’ engagement remained similar, but students’ engagement was reduced with online teaching. Additionally, more in-class engagement was observed in CBL. “Presenteeism”, where learners were online but disengaged, was common. More effort is needed to engage students during online teaching.
Keywords: Engagement, Observational Tool, Online Learning, E-learning, COVID-19, Medical Education, Research
Practice Highlights
- Lectures to a large class (LLC) and case-based learning (CBL) are associated with lower levels of student engagement when conducted on a virtual platform.
- Instructors’ engagement during online teachings remained similar to that of in-person teachings.
- LLC is associated with lower student engagement than CBL.
I. INTRODUCTION
Educational theories suggest that learning should be an active process. According to social constructivist theory, learning is better achieved through social interactions in the learning environment (Kaufman, 2003). Active learning strategies that foster interaction between students and with the instructor, such as discussions, talks, and questions, may yield desirable learning outcomes in terms of knowledge, skills, or attitudes (Rao & DiCarlo, 2001). In-class learner engagement is therefore an important keystone of active learning strategies and is known to stimulate and enhance the learner’s assimilation of content and concepts (Armstrong & Fukami, 2009; Watson et al., 1991).
There is good evidence for the importance of engagement in online learning and use of an engagement metric has been advocated to better understand student online interactions to improve the online learning environment (Berman & Artino, 2018). While medical literature suggests that virtual education games foster engagement (McCoy et al., 2016), the level of engagement and learning fostered by online methods for group discussion and teaching is unknown. Teleconferencing is among some of the methods suggested for maintaining education during the COVID-19 pandemic (Chick et al., 2020).
Possible methods of quantifying student engagement include direct observation and student self-report. O’Malley et al. (2003) published a validated observation instrument called STROBE to assess in-class learner engagement in the health professions without interfering with learner activities. This observation instrument is used to document observed dichotomized types of instructor and student behaviors in 5-minute cycles and to quantify the number of questions asked by the instructor and students in different class subtypes. This instrument, as well as revised forms of it, has since been used as an “in-class engagement measure” to compare instructor and student behaviors in different class types (Alimoglu et al., 2014; Kelly et al., 2005).
In our institution, a hybrid curriculum of case-based learning and lecture-style courses is used to teach the postgraduate year one (PGY-1) interns. We had video recordings of these courses delivered in person prior to the COVID-19 pandemic. With the advent of the pandemic, these courses were shifted onto the Zoom teleconferencing platform, but were delivered by the same instructors in the same class format.
We therefore aimed to determine and compare in-class learning engagement levels via observing instructor and student behaviours in different platforms of learning (either observed online or in-person retrospectively via video recording) delivered by the same instructor before and during the COVID-19 pandemic. We also aimed to compare instructor and student behaviours in different class types (either case-based learning or lecture style instruction). To do this, we planned to modify a known in-person observational tool for student engagement – “STROBE” (O’Malley et al., 2003) for use in analysing and recording the behaviours of students in both online and in-person teaching.
II. METHODS
A. Observed Class Types
In this study, we observed two different class types, case-based learning (CBL), as well as lecture-based instruction to teach basic medical/surgical topics to a large classroom (LLC) of PGY-1 interns. Video recordings of these in-person teachings were made in 2017. Both these class types were replicated in the same format on an online Zoom teleconferencing platform and were delivered by nearly all of the same tutors using the same content and Powerpoint slides during the COVID-19 pandemic in 2020. We aimed to view the 2017 video-recordings of the in-person teachings and compare them with the 2020 online teaching of PGY-1 interns. Written consent was obtained from the tutors and implied consent from the students. Students were informed beforehand via email that the sessions were going to be observed and they were again reminded at the start of each session where they had the chance to opt out. Subsequently, all student feedback and observation scores were amalgamated and de-identified. This study was approved by the institution’s ethics board.
Three topics each of case-based learning and lecture-style instruction were selected in chronological order as scheduled for students. Each topic of instruction was allotted a maximum of 90 minutes, but the instructor could choose to end the class earlier if the session was completed. Descriptions of both class types are given below.
1) Description of case-based learning in large classroom
The content of the learning was designed by the instructor and consisted of clinical cases involving patient scenarios, where the main pedagogy was problem-solving and answering case-based questions relating to the patient scenario (e.g., diagnosis, reading clinical images or electrocardiograms, creating an investigation or treatment plan). Each case would typically take about 15 to 20 minutes to complete, and there would typically be five to six cases. Students were expected to answer the questions, and the instructor gave feedback on the answers and provided additional information, sometimes via additional Powerpoint slides. Class discussions were encouraged, with students invited to debate and discuss their classmates’ answers with each other. The titles of the case-based learning sessions were “ECG – tachydysrhythmias”, “Approach to a confused patient” and “Approach to chest pain”.
2) Description of lecture in large classroom
This is a typical lecture-style instruction delivered with the participation of around 86 PGY-1 interns and one instructor. The instructor delivers information via a Powerpoint slide presentation and rarely adds clinical case-based questions to the slides to invite student discussion. The titles of the lectures were “Cardiovascular health – hypertensive urgencies”, “Trauma – chest, abdomen and pelvis” and “Stroke”.
B. Instructor and Student Characteristics
The instructors all had at least ten years of teaching experience in medical education, and all had been teaching the same topics to the PGY-1 interns for at least the last five years. Student feedback scores on their teaching activities had been satisfactorily high (mean 4.63 for 2019, the year prior to the shift to online learning for the pandemic). All the tutors (except for one instructor who taught “Stroke”) had taught the same topics using the same content and Powerpoint slides in 2017 via in-person teaching that was captured on video.
The students were all PGY-1 interns, who are required by the institution to attend at least 70% of a mandatory one-year teaching program in which they receive weekly instruction on various medical or surgical topics. The teaching program commences in May of each year. There were 86 PGY-1 interns commencing their rotations in our institution and attending the teaching program from May 2020, and 75 PGY-1 interns attending the teaching program in the video recordings made in 2017.
C. Observation Tool
A revised form of STROBE (O’Malley et al., 2003) was used to analyze and record the behaviors of the instructor and students in classes, to provide a more objective third-person measure of student engagement. The original STROBE tool was developed to objectively measure student engagement across a variety of medical education classroom settings. The STROBE instrument consists of 5-minute observational cycles repeated continuously throughout the learning session, with relevant observations recorded on a data collection form. Within each cycle, observers record selected aspects of behavior, from a list of specified categories, that occur in that interval. Observations include macrolevel elements such as the structure of the class, the major activity during the time, and a global judgment of the proportion of class members who appear on task, as well as microlevel elements such as the instructor’s behavior and the behaviors of four randomly selected students. Observers also record who the behaviors of instructors and students were directed at. Observers then tally the number of questions asked by the students and instructor in the remainder of the 5 minutes. The tool was revised by the three clinician-educators from the research team (CYC, YES, KN), who discussed what kinds of instructor and student behaviors were considered “active student engagement” while keeping the main statements and principles of the original STROBE tool. The scale was modified to make it suitable for use in an online learning setting, where observers may not be able to observe a student’s body language cues when the student does not turn on his/her video function. We called this modified scale our ‘In-class Engagement Measure’ (IEM).
A 5-item list of instructor behaviors and a 5-item list of student behaviors were created, each rated from 1 to 5. For the student behavior scale, each item shows progressively increasing levels of interaction and perceived engagement, both with the instructor and with other students. For the instructor behavior scale, each item describes progressively more interactive behaviors by the instructor to get the students to engage. The scales were as follows:
Student:
- No response even when asked
- Answers only when directly questioned
- Answers questions spontaneously
- Speaks to instructor spontaneously (e.g., poses questions, discusses concepts)
- Speaks to instructor and one or more other students during a discussion
Instructor:
- Talking to the entire class while all the students are passive receivers
- Telling or asking one student or a group of students, or teaching/showing an application to a student
- Starting or conducting a discussion open to the whole class, or assigning learning tasks to some students
- Listening to/monitoring one student or a group of students who are actively discussing
- Listening to/monitoring the entire class while it is actively discussing
For the student behaviour list, we also sub-categorized item “1”: a score of “1*” was recorded when there was no response to a question posed to a specific student, called by name by the tutor, rather than to the whole class.
D. Observation Process
Drawing from the described process for the STROBE observation tool (O’Malley et al., 2003), as well as other described modifications of the STROBE tool (Alimoglu et al., 2014), we used the same observation units and cycles. Modifications to the originally described STROBE process were made to make it suitable for observers who were not physically present while observing a large group of students and their instructor. Three observers from the research team (CYC, YES, KN) observed and recorded the instructor and student behaviors for the three case-based learning and three lecture-style sessions conducted live online in 2020, and for the video recordings of the corresponding in-person teaching in 2017. A total of 12 sessions were therefore analyzed. One observation unit was a 5-minute cycle, which proceeded as follows. The observer would write the starting time of the cycle and information about the class (number of students, title of session). The observer would select a student from the class, observe that student for 20 seconds, and mark the type of engagement observed according to the IEM scale. As the observers were not present in person for either the 2017 video-recorded teaching or the 2020 online learning, students who responded to the instructor or posed questions were marked at the same time by all three observers. Each 5-minute cycle consisted of four 20-second observations of individual learners, so marking of student engagement was performed four times within that cycle with different students in succession. The observer would also observe the instructor for that 5-minute cycle and similarly mark the instructor’s behavior once for that cycle. For the remainder of the modified STROBE cycle, the observer would tally the number of questions asked by all the students and the instructor.
Observers independently and separately observed and marked the students’ and instructors’ behaviors. Due to the lack of in-person observation, students who responded or posed questions during the session were uniformly chosen for marking by the three observers. If a student had already been marked once during a cycle, the same student was not used for the remaining three observations within the same cycle. At the end of the marking, two observers (KN and YES) compared their scores for both students and instructor. The marks given by the third observer (CYC) were used to validate the final score awarded and as the tiebreaker when there was a discrepancy between the marks given by the first two observers.
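As an illustration, the score-resolution rule described above could be expressed as the minimal Python sketch below; the function name is hypothetical, and the handling of the case where the tiebreaker matches neither main observer is an assumption, as that situation is not described in the text.

from typing import Optional

def resolve_iem_score(main_1: int, main_2: int, tiebreak: int) -> Optional[int]:
    # main_1 / main_2: IEM scores from the two main observers (KN and YES);
    # tiebreak: score from the third observer (CYC).
    if main_1 == main_2:
        return main_1        # the two main observers agree
    if tiebreak in (main_1, main_2):
        return tiebreak      # the tiebreaker sides with one of the main observers
    return None              # assumption: flagged for re-review (case not described in the text)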
E. Collation of Post Teaching Survey Feedback
Apart from the data derived from our modified observational tool, we also reviewed data from surveys conducted by the educational committee after each of these teaching sessions (see Table 1). These were general surveys used to solicit student feedback on the teaching sessions. They were distributed in person in 2017, and the same forms were distributed to the students online in 2020. Students responded to five statements, each scored from 1 to 5 (1 for strongly disagree, 2 for disagree, 3 for neither agree nor disagree, 4 for agree, and 5 for strongly agree). These feedback forms had an overall feedback score marked by the student, as well as a score marked in response to a statement assessing self-reported engagement – “The session was interactive and engaging”. The other statements were “The session has encouraged self-directed learning and critical thinking”, “The session was relevant to my stage of training”, “The session helped me advance my clinical decision-making skills”, and “The session has increased my confidence in day-to-day patient management”. Means of the feedback scores were taken as a qualitative guide, and we analyzed the overall feedback scores (“Overall feedback score” in Table 1) and the scores in response to the statement assessing self-reported engagement (“Self-reported engagement feedback score” in Table 1).
F. Statistical Analyses
Descriptive statistics were used to determine frequencies and the median number of questions asked, as well as mean student feedback scores and the absolute duration of each teaching session. Fisher’s exact test was performed to analyze differences in scores between lectures and case-based learning, and between the scores for the 2017 in-person learning and those for the 2020 online learning. For the analysis of the scores, we dichotomized the scores using a cut-off of “1”, the first item on the behavior list for both students and instructors, as we felt that this item reflected extreme non-participation by both student and instructor, which, if left to continue, can result in negative learning and teaching behaviors.
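As an illustration of the dichotomised comparison described above, the following minimal sketch builds a 2×2 table of “no engagement” (IEM score of 1) versus “any engagement” scores for two conditions and applies Fisher’s exact test. The data layout and function name are hypothetical, and the paper does not state which statistical software was used; SciPy is shown purely for illustration.

from scipy.stats import fisher_exact

def compare_no_engagement(scores_a: list[int], scores_b: list[int]) -> float:
    # Each list holds the IEM scores observed under one condition
    # (e.g. in-person vs online, or CBL vs LLC).
    table = [
        [sum(s == 1 for s in scores_a), sum(s > 1 for s in scores_a)],
        [sum(s == 1 for s in scores_b), sum(s > 1 for s in scores_b)],
    ]
    _, p_value = fisher_exact(table)  # returns (odds ratio, p-value)
    return p_value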
III. RESULTS
A. Class Types, Characteristics, Feedback Scores
A total of 12 sessions were observed, consisting of in-person and online teaching sessions of six topics (Table 1). There were three topics each of CBL and LLC. The duration of the class sessions ranged from 30-55 minutes for the in-person sessions and 40-90 minutes for the online sessions. The total number of PGY-1 students eligible to attend the in-person teaching sessions in 2017 was 82, and 86 for the online teaching sessions in 2020. Student attendance for the in-person sessions ranged from 11 (13.4%) to 31 (37.8%), and that for the online sessions ranged from 28 (32.6%) to 77 (89.5%). The median (range) of feedback scores was 4.57 (4.25 to 4.72) for in-person sessions vs 4.32 (4.04 to 4.61) for online sessions. The median (range) of self-reported engagement scores was 4.55 (4.25 to 4.79) for in-person sessions vs 4.34 (4.00 to 4.67) for online sessions (Table 1).
Table 1. Class types and characteristics (*Different tutors, but using same content)
B. Instructors’ Engagement Behaviour
1) Comparing in-person vs online teaching: Percentage time with no engagement/interaction (i.e., scoring “1” on the IEM). This ranged from 0-80% for in-person teaching vs 0-100% for online teaching (Table 2A). For each topic, there was no significant difference in the percentage time with no engagement.
Most frequent IEM scores. The most frequent IEM score for each 5-minute segment was 3 for both in-person teaching (48.9%) and online teaching (52.9%) (Table 2B).
2) Comparing CBL vs LLC: Percentage time with no engagement/interaction. This ranged from 0-23.1% for CBL vs 50-100% for LLC (Table 2A).
Most frequent IEM scores. The most frequent IEM score was 3 for CBL (77.3%) and 1 for LLC (71.4%) (Table 2B).

Table 2A. Comparison of instructors’ behaviour showing percentage time with no engagement (scoring “1” on the IEM score)

Table 2B. Numbers (percentages) of a particular IEM score received for a 5-minute segment of teaching – for instructors
C. Students’ Engagement Behaviour
1) Comparing in-person vs online teaching: Percentage time during which there is no engagement/interaction. This ranges from 0-95% for in-person teaching vs 78.8-100% for online teaching (Table 3A). There is significant difference in percentage time of no engagement in two topics (ECG, chest pain), where there is higher percentage of no engagement time with online teaching.
Most frequent IEM scores. Most frequent IEM scores were 1 for both in-person teaching (63.8%) and online teaching (85.1%) (Table 3B).
2) Comparing CBL vs LLC: Percentage time during which there is no engagement/interaction. This ranges from 0-81.9% for CBL vs 84.4-100% for LLC (Table 3A).
Most frequent IEM scores. The most frequent IEM score was 1 for both CBL (65.3%) and LLC (91.8%) (Table 3B).
Presence of 1* scores, where “1*” was defined as no response when a question was posed to a specific student called by name. There were no 1* IEM scores for in-person teaching for either CBL or LLC. For online teaching, 8.4% (12/143) of the “1” responses were 1* for CBL and 6.5% (6/92) of the “1” responses were 1* for LLC.

Table 3A. Comparison of students’ behaviour showing percentage time with no engagement (scoring “1” on the IEM score)

Table 3B. Numbers (percentages) of a particular IEM score received for a 5-minute segment of teaching – for students
D. Number of Questions Asked Per 5-minute Cycle
The median number of questions asked by instructors per 5-minute cycle ranged from 0-2 for in-person teaching and 1-3 for online teaching (see Appendix 1), and from 1-3 for CBL vs 0-1 for LLC.
The median number of questions asked by students in all sessions was 0.
The results of this study can be derived from the dataset uploaded to the online repository at https://doi.org/10.6084/m9.figshare.18133379.v1 (Chua et al., 2022).
IV. DISCUSSION
We modified the established STROBE instrument (O’Malley et al., 2003) to create an observational tool, the “IEM”, which can be used to quantify instructor and student engagement even when the observer is not present in person. IEM scores were derived by taking scores that were in agreement when scored independently by the two main observers (YES and KN). The third observer (CYC) acted as the validator of the scores: when there was a discrepancy between the two main observers, the score that agreed with CYC’s score was used. To gauge the IEM tool’s effectiveness when the observer is not present in person, we postulated that our modified IEM score should still demonstrate the well-documented difference in engagement between lecture-style and case-based learning sessions (Kelly et al., 2005). The modified IEM score did indeed show more frequent higher scores, as expected, for case-based learning sessions (Tables 2B and 3B). We also compared our IEM scores with the students’ self-reported engagement scores (Table 1) that had been collected as part of student feedback. The general agreement in the trend of observed IEM scores with that of the students’ self-reported engagement scores also suggests the usefulness of our modified STROBE tool in situations where the observer is not present in person, although this needs to be further validated in prospective studies.
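As an illustration of the score-adjudication rule just described, a minimal Python sketch follows; the function and variable names are ours rather than code from the study, and the behaviour when no two observers agree is our assumption.

```python
from typing import Optional

def adjudicate_segment(score_a: int, score_b: int, score_validator: int) -> Optional[int]:
    """Return the IEM score retained for one 5-minute segment.

    score_a, score_b: scores from the two main observers (YES, KN).
    score_validator: score from the third observer (CYC), consulted only on disagreement.
    """
    if score_a == score_b:
        return score_a                      # main observers agree
    if score_validator in (score_a, score_b):
        return score_validator              # validator sides with one main observer
    return None                             # assumption: unresolved segments flagged for re-review

# Example: main observers score 3 and 1, validator scores 3 -> retained score is 3
print(adjudicate_segment(3, 1, 3))
```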
Our initial study hypothesis was that students might find themselves more engaged in online teaching sessions and more open to posing questions to the instructor and their peers, owing to the “chat”, “likes” and “poll” functions available on the Zoom tele-conferencing platform, which may be more familiar to a younger generation accustomed to using social media. We had postulated that live online lectures would encourage further engagement from students who would not otherwise participate in person, because of the less intimidating online environment where they can ask and answer questions more anonymously (Kay & Pasarica, 2019; Ni, 2013). In an Asia-Pacific context, videoconferencing has been found to improve access to participation for more reticent participants who prefer written expression, through alternative communication channels such as the ‘chat box’, although there was a potential trend toward reduced engagement (Ong et al., 2021).
Our data show that Zoom teleconferencing during the COVID-19 pandemic can be associated with reduced student engagement. The percentage time during which there was no engagement was significantly higher for online sessions (Table 3A), and the most frequent IEM score was lower (1 for online vs 3 for in-person) for CBL sessions (Table 3B). This phenomenon in medical education during the COVID-19 pandemic has been described previously. Studies using student and instructor feedback found that students were more likely to have reduced engagement during virtual learning (Longhurst et al., 2020; Dost et al., 2020) and greater difficulty maintaining focus, concentration and motivation during online learning (Wilcha, 2020).
Our data also suggest that, in trying to come close to the same levels of engagement as before, each instructor spent a longer duration per topic when conducting CBL online (Table 1). This may include time during which the instructor needs multiple attempts at questioning and discussion before a student responds. It is also possible that, for in-person learning, the instructor relies greatly on non-verbal cues (e.g., body language, nods of the head, the collective feel of the room) to determine whether a question has been satisfactorily answered, and can therefore move on more quickly than on a Zoom platform, where the instructor cannot see most, let alone all, of the students.
The higher attendance for online learning compared with in-person learning (see Table 1) highlights one of the strengths of online learning: it is more easily accessible, saves students the time spent getting to a designated lecture room, and provides flexibility to enter and exit (Dost et al., 2020). Unfortunately, this also likely encourages “presenteeism”, where students are not focused on the learning session but instead engage in other tasks simultaneously, e.g., reading or composing emails, or completing work tasks instead of having dedicated protected teaching time. Resident learners have been described as participating in nearly twice as many activities unrelated to the teaching session per hour during an online session than when in person (Weber & Ahn, 2020). This has likely contributed to the number of 1* scores we recorded, where the student had logged into the Zoom platform but was not available even to respond in the negative when called upon to answer a question. Presenteeism, however, is not just a problem for online learning; even for in-person learning, pretending to engage has been found to be a significant unrecognized issue (Fuller et al., 2018).
The main implication of our study is that, to improve student engagement with online learning, a face-to-face platform cannot simply be transposed onto a virtual platform. It has been suggested that engagement during live virtual learning could be enhanced with interactive quizzes using audience polling functions (Morawo et al., 2020) and possibly other methods such as “gamification” (Nieto-Escamez & Roldan-Tapia, 2021). Our instructors for the CBL sessions used both poll functions and live questioning, but without increased success in engagement. Smaller groups are likely required to enhance student engagement, but this would require more time and teaching manpower. Increasing the opportunity for interaction via a virtual platform would also require creating additional online resources, which takes up more faculty time; creating new resources can take at least three times as much work as a traditional format (Gewin, 2020). Online resources would need to be modified in a way that increases student autonomy, in order to increase student engagement in medical education (Kay & Pasarica, 2019). Our study also shows that, as a first step in time- and resource-limited settings, a case-based approach to teaching would be more suitable than lecture-style teaching for enhancing student engagement.
A culture of accountability also needs to be fostered within online teaching sessions, where students need to be educated on how Zoom meetings can be more enriching when cameras are on (Sharp et al., 2021). PGY-1 interns, as recent graduates, also need to be educated on professionalism when entering the medical workforce, where they can be called upon to answer questions during meetings or conferences. When initial questions are not voluntarily answered, our tutors often practice “cold-calling”, which can help keep learners alert and ready (Lemov, 2015). Unfortunately, these evidence-based teaching methods, which work well when the student is in person, will ultimately fail if online students are not educated on their need to be accountable to the instructor and their peers.
This study has several limitations. Firstly, the level of student engagement may be affected by external factors, such as a different physical learning environment, class size and avenues of communication. The stresses of the ongoing pandemic may also have affected student engagement, as a decrease in quality of life and an increase in stress would negatively impact student motivation (Lyndon et al., 2017). Secondly, the topics for lecture to large class and case-based learning were not identical, as topics were picked in chronological order and no topic in the curriculum had material for both class types. This difference in topics may have contributed to confounding when making direct comparisons between the two class types, although we attempted to mitigate this by including a variety of topics in each class type. Thirdly, the better student engagement and feedback scores for in-person learning may reflect some bias given the smaller number of students attending in person; it is also possible that only the more motivated, and hence more likely to be engaged, students turned up for in-person learning. Fourthly, because of the online format and the retrospective viewing of the video recordings, the observers were not present in person to observe the non-verbal cues of the students or instructors. The tool, however, was modified to take into account only the verbal output that could be observed online or via video recording. Lastly, our IEM tool would benefit from further studies to confirm its validity for observing students when the observer is not present in person.
V. CONCLUSION
Lectures are associated with lower student engagement than case-based learning, and both class types are associated with lower levels of student engagement when conducted on a virtual platform. Instructor levels of engagement, however, remain about the same. This highlights that a face-to-face platform cannot simply be transposed onto a virtual platform, and it is important to address this gap in engagement, as it can lower faculty satisfaction with teaching and ultimately result in burnout. Blended teaching or smaller-group teaching as the world turns the corner in the COVID-19 pandemic may be one way to circumvent the situation, but is also constrained by faculty time and manpower. Our study also shows that, as a first step in time- and resource-limited settings, a case-based approach to teaching would be more suitable than lecture-style teaching for enhancing student engagement.
Notes on Contributors
Dr Ng Wei Ping Kay and Dr Chua Yuan Kit Christopher are co-first authors and contributed to conceptual development, acquisition, analysis, and interpretation of data for the work. They contributed to drafting and revising the work and approved the final version to be published. They agree to be accountable for all aspects of the work.
Dr Lye Pei Shi Priscillia contributed to conceptual development, acquisition, analysis, and interpretation of data for the work. She contributed to drafting and revising the work and approved the final version to be published. She agrees to be accountable for all aspects of the work.
Dr Joy Vijayan contributed to conceptual development, acquisition, analysis, and interpretation of data for the work. He contributed to drafting and revising the work and approved the final version to be published. He agrees to be accountable for all aspects of the work.
Dr Yap Eng Soo contributed to conceptual development, acquisition, analysis, and interpretation of data for the work. He contributed to drafting and revising the work and approved the final version to be published. He agrees to be accountable for all aspects of the work.
Dr Chan Yee Cheun contributed to conceptual development, acquisition, analysis, and interpretation of data for the work. He contributed to drafting and revising the work and approved the final version to be published. He agrees to be accountable for all aspects of the work.
Ethical Approval
I confirm that the study has been approved by Domain Specific Review Board (DSRB), National Healthcare Group, Singapore, an institutional ethics committee. DSRB reference number: 2020/00415.
Data Availability
The data that support the findings of this study are openly available in Figshare at https://doi.org/10.6084/m9.figshare.18133379.v1.
Acknowledgement
We would like to acknowledge Ms. Jacqueline Lam for her administrative support in observing the recordings and online-teaching.
Funding
There was no funding for this research study.
Declaration of Interest
The authors report no conflicts of interest, including financial, consultant, institutional and other relationships that might lead to bias or a conflict of interest.
References
Alimoglu, M. K., Sarac, D. B., Alparslan, D., Karakas, A. A., & Altintas. (2014). An observation tool for instructor and student behaviors to measure in-class learner engagement: A validation study. Medical Education Online, 19(1), 24037. https://doi.org/10.3402/meo.v19.24037
Armstrong, S. J., & Fukami, C. V. (2009). The SAGE Handbook of Management Learning, Education and Development. SAGE Publications Ltd. https://www.doi.org/10.4135/9780857021038
Berman, N. B., & Artino, A. R. J., (2018). Development and initial validation of an online engagement metric using virtual patients. BMC Medical Education, 18(1), 213. https://doi.org/10.1186/s12909-018-1322-z
Chick, R. C., Clifton, G. T., Peace, K. M., Propper, B. W., Hale, D. F., Alseidi, A. A., & Vreeland, T. J. (2020). Using technology to maintain the education of residents during the COVID-19 Pandemic. Journal of Surgical Education, 77(4), 729–732. https://doi.org/10.1016/j.jsurg.2020.03.018
Chua, Y. K. C., Ng, K. W. P., Yap, E. S., Lye, P. S. P., Vijayan, J., & Chan, Y. C. (2022). Evaluating online learning engagement (Version 1) [Data set]. Figshare. https://doi.org/10.6084/m9.figshare.18133379.v1
Dost, S., Hossain, A., Shehab, M., Abdelwahed, A., & Al-Nusair, L. (2020). Perceptions of medical students towards online teaching during the COVID-19 pandemic: A national cross-sectional survey of 2721 UK medical students. BMJ Open, 10(11), e42378. https://doi.org/10.1136/bmjopen-2020-042378
Fuller, K. A., Karunaratne, N. S., Naidu, S., Exintaris, B., Short, J. L., Wolcott, M. D., Singleton, S., & White, P. J. (2018). Development of a self-report instrument for measuring in-class student engagement reveals that pretending to engage is a significant unrecognized problem. PLOS ONE, 13(10), e0205828. https://doi.org/10.1371/journal.pone.0205828
Gewin, V. (2020). Five tips for moving teaching online as COVID-19 takes hold. Nature, 580(7802), 295–296. https://doi.org/10.1038/d41586-020-00896-7
Kaufman, D. M. (2003). Applying educational theory in practice. BMJ, 326(7382), 213–216. https://doi.org/10.1136/bmj.326.7382.213
Kay, D., & Pasarica, M. (2019). Using technology to increase student (and faculty satisfaction with) engagement in medical education. Advances in Physiology Education, 43(3), 408–413. https://doi.org/10.1152/advan.00033.2019
Kelly, P. A., Haidet, P., Schneider, V., Searle, N., Seidel, C. L., & Richards, B. F. (2005). A comparison of in-class learner engagement across lecture, problem-based learning, and team learning using the STROBE classroom observation tool. Teaching and Learning in Medicine, 17(2), 112–118. https://doi.org/10.1207/s15328015tlm1702_4
Lemov, D. (2015). Teach like a champion 2.0: 62 techniques that put students on the path to college. (2nd ed.). Jossey-Bass.
Longhurst, G. J., Stone, D. M., Dulohery, K., Scully, D., Campbell, T., & Smith, C. F. (2020). Strength, weakness, opportunity, threat (SWOT) analysis of the adaptations to anatomical education in the United Kingdom and Republic of Ireland in response to the Covid-19 pandemic. Anatomical Sciences Education, 13(3), 301–311. https://doi.org/10.1002/ase.1967
Lyndon, M. P., Henning, M. A., Alyami, H., Krishna, S., Zeng, I., Yu, T.-C., & Hill, A. G. (2017). Burnout, quality of life, motivation, and academic achievement among medical students: A person-oriented approach. Perspectives on Medical Education, 6(2), 108–114. https://doi.org/10.1007/s40037-017-0340-6
McCoy, L., Pettit, R. K., Lewis, J. H., Allgood, J. A., Bay, C., & Schwartz, F. N. (2016). Evaluating medical student engagement during virtual patient simulations: A sequential, mixed methods study. BMC Medical Education, 16, 20. https://doi.org/10.1186/s12909-016-0530-7
Morawo, A., Sun, C., & Lowden, M. (2020). Enhancing engagement during live virtual learning using interactive quizzes. Medical Education, 54(12), 1188. https://doi.org/10.1111/medu.14253
Ni, A. Y. (2013). Comparing the effectiveness of classroom and online learning: Teaching research methods. Journal of Public Affairs Education, 19(2), 199-215. https://doi.org/10.1080/15236803.2013.12001730
Nieto-Escamez, F. A., & Roldan-Tapia, M. D. (2021). Gamification as online teaching strategy during COVID-19: A mini-review. Frontiers in Psychology, 12, 648552. https://doi.org/10.3389/fpsyg.2021.648552
O’Malley, K. J., Moran, B. J., Haidet, P., Seidel, C. L., Schneider, V., Morgan, R. O., Kelly, P. A., & Richards, B. (2003). Validation of an observation instrument for measuring student engagement in health professions settings. Evaluation & the Health Professions, 26(1), 86–103. https://doi.org/10.1177/0163278702250093
Ong, C. C. P., Choo, C. S. C., Tan, N. C. K., & Ong, L. Y. (2021). Unanticipated learning effects in videoconference continuous professional development. The Asia Pacific Scholar, 6(4), 135-141. https://doi.org/10.29060/TAPS.2021-6-4/SC2484
Rao, S. P., & DiCarlo, S. E. (2001). Active learning of respiratory physiology improves performance on respiratory physiology examinations. Advances in Physiology Education, 25(2), 55–61. https://doi.org/10.1152/advances.2001.25.2.55
Sharp, E. A., Norman, M. K., Spagnoletti, C. L., & Miller, B. G. (2021). Optimizing synchronous online teaching sessions: A guide to the “new normal” in medical education. Academic Pediatrics, 21(1), 11–15. https://doi.org/10.1016/j.acap.2020.11.009
Watson, W. E., Michaelsen, L. K., & Sharp, W. (1991). Member competence, group interaction, and group decision making: A longitudinal study. Journal of Applied Psychology, 76(6), 803–809. https://doi.org/10.1037/0021-9010.76.6.803
Weber, W., & Ahn, J. (2020). COVID-19 conferences: Resident perceptions of online synchronous learning environments. Western Journal of Emergency Medicine, 22(1), 115–118. https://doi.org/10.5811/westjem.2020.11.49125
Wilcha, R. J. (2020). Effectiveness of virtual medical teaching during the COVID-19 crisis: Systematic review. JMIR Medical Education, 6(2), e20963. https://doi.org/10.2196/20963
*Chua Yuan Kit Christopher
5 Lower Kent Ridge Road,
National University Hospital,
Singapore 119074
+65 7795555
Email: christopher_chua@nuhs.edu.sg
Submitted: 6 January 2022
Accepted: 4 May 2022
Published online: 4 October, TAPS 2022, 7(4), 22-34
https://doi.org/10.29060/TAPS.2022-7-4/OA2735
Amelah Abdul Qader1,2, Hui Meng Er3 & Chew Fei Sow3
1School of Postgraduate Studies, International Medical University, Kuala Lumpur, Malaysia; 2University of Cyberjaya, Faculty of Medicine, Cyberjaya, Malaysia; 3IMU Centre for Education, International Medical University, Kuala Lumpur, Malaysia
Abstract
Introduction: The direct ophthalmoscope is a standard tool for fundus examination but is underutilised in practice due to technical difficulties. Although the smartphone ophthalmoscope has been demonstrated to improve fundus abnormality detection, there are limited studies assessing its utility as a teaching tool for fundus examination in Southeast Asian medical schools. This study explored medical students’ perceptions toward using a smartphone ophthalmoscope for fundus examination and compared their abilities to diagnose common fundal abnormalities using the smartphone ophthalmoscope against the direct ophthalmoscope.
Methods: Sixty-nine Year-4 undergraduate medical students participated in the study. Their competencies in using direct ophthalmoscope and smartphone ophthalmoscope for fundus examination on manikins with ocular abnormalities were formatively assessed. The scores were analysed using the SPSS statistical software. Their perceptions on the use of smartphone ophthalmoscopes for fundus examination were obtained using a questionnaire.
Results: The students’ competency assessment scores using the smartphone ophthalmoscope were significantly higher than those using the direct ophthalmoscope. A significantly higher percentage of them correctly diagnosed fundus abnormalities using the smartphone ophthalmoscope. They were confident in detecting fundus abnormalities using the smartphone ophthalmoscope and appreciated the comfortable working distance, ease of use and collaborative learning. More than 90% of them were of the view that smartphone ophthalmoscopes should be included in the undergraduate medical curriculum.
Conclusion: Undergraduate medical students performed better in fundus examination on manikins with ocular abnormalities using smartphone ophthalmoscope compared to direct ophthalmoscope. Their positive perceptions toward smartphone ophthalmoscope support its use as a supplementary teaching tool in undergraduate medical curriculum.
Keywords: Medical Students, Smartphone, Ophthalmoscope, Teaching Tool
Practice Highlights
- The smartphone ophthalmoscope is a useful supplementary teaching tool for fundus examination in undergraduate medical education.
- Fundus examination is performed at a safe working distance from the patient using a smartphone ophthalmoscope.
- Students are able to detect fundus abnormalities with greater ease and accuracy using a smartphone ophthalmoscope compared to a direct ophthalmoscope.
- Students appreciate the collaborative learning through peer discussion of the fundus findings using the smartphone ophthalmoscope.
I. INTRODUCTION
Fundus examination is one of the essential procedures providing information about ocular conditions that may compromise the quality of vision and lead to blindness (Leonardo, 2018). The direct ophthalmoscope (DO) is one of the robust ocular clinical examination tools to be mastered during clinical skill training in medical schools as well as in clinical practice. However, students have difficulty mastering the technique of using it (Kim & Chao, 2019), particularly when they have to coordinate their hand movements at a very near distance to the patient and close one eye when examining the patient’s fundus through the pupil (MacKay et al., 2015). They also have to adjust the power of the direct ophthalmoscope lenses to get a clearer picture if there is a refractive error in the patient’s eye or their own eyes. Instead of concentrating on detecting fundus findings, the students are preoccupied with adjusting the direct ophthalmoscope.
Technical constraints may be the main reason for the underuse of the direct ophthalmoscope. Experienced physicians who use the direct ophthalmoscope may lack confidence and frequently miss significant abnormalities (Purbrick & Chong, 2015), causing delayed diagnosis of preventable eye disorders and permanent vision impairment (Myung et al., 2014). This has led to the exploration of alternative tools to overcome some of these challenges (Giardini et al., 2014; Kim & Chao, 2019). The smartphone ophthalmoscope, for example, is a breakthrough portable digital retinal imaging system that allows medical practitioners to view the fundus with high-definition images or video during a routine ophthalmoscopic examination.
The D-EYE smartphone ophthalmoscope was developed by Doctor Andrea Russo in 2015 (Russo et al., 2015). It is a small, portable, and inexpensive retinal imaging system that can capture retinal images using an attachment to a smartphone that uses a cross-polarisation technique to reduce corneal reflections. It is integrated with the smartphone’s autofocus feature to accommodate the patient’s refractive error.
A. Problem and Rationale
Fundus examination requires extensive practice to develop adequate interpretation skills (Leonardo, 2018). Medical students are taught to use the direct ophthalmoscope in order to recognise retinal signs of life-threatening disorders (Benbassat et al., 2012). The International Council of Ophthalmology recognises direct ophthalmoscope examination as one of the seven core ocular medical education competencies, and all graduating medical students are expected to recognise common abnormalities of the ocular fundus using a direct ophthalmoscope (Dunn et al., 2021). However, there is a lack of competency among medical graduates in using this tool (MacKay et al., 2015). This needs to be addressed, as at least 2.2 billion people globally have visual impairment or blindness, of whom at least 1 billion have deterioration in vision that could have been prevented if they had been screened or detected earlier (World Health Organization, 2019). Tan et al. (2020) reported several favourable studies carried out in Italy, the UK, and India on the advantages of smartphone ophthalmoscopes for fundus examination and visualisation of the retinal image. In a randomised cross-over study by Curtis et al. (2021) on the ease of use of the D-EYE smartphone ophthalmoscope versus the direct ophthalmoscope, 44 Year-one medical students in Canada examined patients’ fundi for optic disc assessment and compared their findings with the respective photographs provided. Ease of use and confidence were greater with the D-EYE smartphone ophthalmoscope.
Although the smartphone ophthalmoscope is available in Southeast Asian countries such as Malaysia, it is not commonly used in public hospitals and general practitioner clinics, probably because of resource constraints in developing countries. Moreover, there are limited studies assessing its use and, in particular, no published reports of such studies among undergraduate medical students in Southeast Asia. Based on the positive findings in the literature, however, it has been proposed that the smartphone ophthalmoscope be included in clinical skill training for fundus examination among undergraduate medical students at the university where this study was conducted. Therefore, this study was carried out to explore the students’ perceptions of using smartphone ophthalmoscopes for fundus examination and to determine whether their competencies in fundus examination improved using this tool compared with the direct ophthalmoscope. In this study, the D-EYE smartphone ophthalmoscope was chosen over other types of smartphone ophthalmoscope for reasons including the ease of data management using the available app, cost feasibility and convenience. The two research questions of the study were:
1) What were the perceptions of medical students on the use of smartphone ophthalmoscope for fundus examination?
2) Was there a difference between students’ competencies in fundus examination when using the smartphone ophthalmoscope compared to the direct ophthalmoscope?
The cognitive theory of multimedia learning can be applied in the context of fundus examination using a smartphone ophthalmoscope, since the student visualises the fundus on the smartphone screen. According to the cognitive theory of multimedia learning (Figure 1), students engage in active cognitive processing in order to create a cohesive mental representation of their experiences based on their recalled knowledge of fundus structures and ocular abnormalities. This allows them to integrate the findings with other relevant information. They can then describe their findings and organise the selected images into a “mental model” of the items they are learning. Finally, their prior knowledge of ocular disorders is incorporated and reconciled with these verbal explanations and graphical representations.

Figure 1. Cognitive theory of multimedia learning
According to the social constructivism theory, learning is social, active and constructed through social interaction (Lötter & Jacobs, 2020). Technologies have been shown to enhance students’ problem solving by breaking down complex concepts into sub-problems (Kim & Hannafin, 2011). A smartphone ophthalmoscope is an appropriate tool for encouraging active interaction between the students and lecturer to work on real-world problems in the teaching and learning environment. When the students perform fundus examination using the smartphone ophthalmoscope, they can see the findings on the screen together with their peers and the lecturer. This will allow them to gain more knowledge and understanding as they can discuss and link the new ideas in the context of their prior knowledge.
II. METHODS
The study was approved by the International Medical University Joint-Committee on Research and Ethics (IMU-JC). Informed consent was obtained from all respondents. The nature and purpose of the study were explained to them. The respondents were assured of anonymity and confidentiality of the collected information.
A. Study Setting
The data were collected from Year-4 undergraduate medical students who undertook ophthalmology rotation for the academic year 2020/2021, at the University of Cyberjaya, Malaysia.
In the fourth year of the medical curriculum, the students undertake four major postings (Orthopedics, Family Medicine, Psychiatry and a speciality posting) over two semesters (Semesters 7 and 8). These are conducted in four rotations per year (rotations 1 & 2 in Semester 7, rotations 3 & 4 in Semester 8). The speciality posting includes Ophthalmology, Anaesthesia, ENT and Radiology, and each of these speciality postings lasts two weeks. In the ophthalmology posting, the students are taught the principles of history taking and ocular examination in the Clinical Skill Training Department and in the hospital, where they clerk patients with eye conditions. Additionally, they learn about basic common eye conditions during interactive sessions and case-based discussion sessions in small groups. However, during the COVID-19 pandemic, the posting was affected by lockdown measures; the case-based discussion sessions were therefore conducted online, and ocular examination was demonstrated through online interactive video sessions. Nevertheless, there was a window of opportunity during which the students could return to the campus for a one-week revision. During this period, the students practised ophthalmoscopy examination on manikins in the Clinical Skill Training Department.
B. Study Design
The direct ophthalmoscope examination technique was introduced to the students virtually through video demonstrations and during online interactive discussion sessions. During the revision week, the students were trained for two hours to perform fundus examination using the direct ophthalmoscope. For the smartphone ophthalmoscope, they were briefed and trained on its use for 20-30 minutes. The training was conducted by a member of the teaching staff (who is the researcher in this study, AMAQ). Following that, the students were required to examine various slides of fundus images provided in the manikins (M1 and M2).
The selected slides on the manikins represented the common pathological fundus findings, i.e. optic disc swelling, branch retinal vein occlusion, optic atrophy/glaucoma and diabetic retinopathy/ maculopathy. Each student performed the fundus examination on M1 and M2 using the direct and smartphone ophthalmoscopes separately (approximately 2-3 minutes on each manikin) on the same day. The students were required to fill in their findings based on their observation (without discussing with their peers) on the formative assessment forms (shown in Appendix 1) and indicate the tools they utilised (direct or smartphone ophthalmoscopes). The formative assessment form was adapted from Mamtora et al. (2018) and had been validated by two ophthalmologists in the department.
To avoid bias, all completed formative assessment forms were collected and submitted to another researcher (SCF), who was not involved in marking, to remove the information on the tool used by the student from each form. The forms were then returned to the researcher (AMAQ) for marking.
After completing the formative assessment, the students were requested to fill in an online questionnaire regarding their perception on the use of smartphone ophthalmoscope for fundus examination. This questionnaire (Appendix 2) was adapted from Nagra & Huntjens (2020). In addition, the students were requested to provide the reasons for their suggestions to include smartphone ophthalmoscopes or replace direct ophthalmoscopes with smartphone ophthalmoscopes in the medical curriculum.
C. Data Analysis
All data were statistically analysed using SPSS version 23. The paired t-test was used to compare the performance of the students in the formative assessments using the direct ophthalmoscope and the smartphone ophthalmoscope. The number of students making the correct diagnosis with each tool was analysed using the McNemar (chi-square) test (Liao & Lin, 2008). A p-value ≤ 0.05 was considered statistically significant. The responses of the students in the perceptions questionnaire relating to ease of use, confidence, and preference were analysed.
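As a worked illustration of these two tests (the study itself used SPSS), the sketch below shows how they could be run in Python with SciPy and statsmodels; the scores and the paired-diagnosis counts are made-up numbers for illustration, not the study data.

```python
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical per-student formative assessment scores (%), NOT the study data.
do_scores = np.array([35, 40, 45, 30, 50, 38, 42])    # direct ophthalmoscope (DO)
spo_scores = np.array([55, 62, 58, 49, 70, 60, 57])   # smartphone ophthalmoscope (SPO)

# Paired t-test: the same students were assessed with both tools.
t_stat, p_paired = ttest_rel(spo_scores, do_scores)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_paired:.4f}")

# McNemar test on correct/incorrect diagnosis with each tool.
# Rows = DO (correct, incorrect); columns = SPO (correct, incorrect); counts illustrative.
paired_diagnosis = [[20, 3],
                    [25, 21]]
result = mcnemar(paired_diagnosis, exact=True)
print(f"McNemar test: p = {result.pvalue:.4f}")
```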
III. RESULTS
Sixty-nine Year-4 medical students participated in this study. The demographic data are shown in Table 1.

Table 1: Demographic data
A. Comparison of Formative Assessment Scores Using Smartphone Ophthalmoscope and Direct Ophthalmoscope
The mean scores of the students were higher using the smartphone ophthalmoscope (59%) than the direct ophthalmoscope (39%). The same trend was observed for students with and without refractive error. The results are shown in Table 2. A higher number of students were able to make the correct diagnosis for all fundus abnormalities using the smartphone ophthalmoscope compared to the direct ophthalmoscope, and the difference was statistically significant (p < 0.05). The results are presented in Table 3. The data that support the findings are openly available in Figshare at https://figshare.com/s/d45da87ea42c596e714b

Table 2: Comparison of formative assessment scores using direct ophthalmoscope (DO) and smartphone ophthalmoscope (SPO).
*p-value (paired t-test)

Table 3: Comparison of correct diagnosis using direct ophthalmoscope and smartphone ophthalmoscope.
*McNemar (Chi square) test (Liao & Lin, 2008),
**(Branch retinal vein occlusion)
B. Students’ Perceptions on the Use of Smartphone Ophthalmoscope for Fundus Examination
A total of 69 students participated in the online questionnaire. All the students appreciated that their peers could share the findings with them on the smartphone screen. Most of the students (87%) preferred using smartphone ophthalmoscopes over direct ophthalmoscopes, and 86% felt confident when using the smartphone ophthalmoscope. In addition, the comfortable working distance was appreciated by 87% of the students. The responses of the participants are shown in Table 4.
Online student evaluation form (Likert scale: 1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree)

| Section 1: Perception on smartphone ophthalmoscope use | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| I feel confident while using it | 1.4% | 2.9% | 8.7% | 56.5% | 30.4% |
| I feel easy to view the fundus | 0.0% | 5.8% | 11.6% | 44.9% | 37.7% |
| I feel comfortable when my peer can observe with me the findings | 0.0% | 0.0% | 0.0% | 30.4% | 69.6% |
| My hand is steady while I am performing examination | 0.0% | 4.3% | 20.3% | 40.6% | 34.8% |
| I can pick the finding faster | 0.0% | 4.3% | 21.7% | 42.0% | 31.9% |
| Smartphone ophthalmoscope user-friendly | 0.0% | 1.4% | 7.2% | 39.1% | 52.2% |
| I prefer to use it | 0.0% | 4.3% | 8.7% | 40.6% | 46.4% |

| Section 2: Efficiency of smartphone ophthalmoscope | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| It takes shorter duration to detect finding | 0.0% | 4.3% | 27.5% | 33.3% | 34.8% |
| It has comfortable working distance | 0.0% | 0.0% | 13.0% | 40.6% | 46.4% |
| I found difficulty in handling it | 10.1% | 44.9% | 21.7% | 20.3% | 2.9% |
| I think smartphone ophthalmoscope must be added to the medical curriculum | 0.0% | 0.0% | 4.3% | 47.8% | 47.8% |
| I think direct ophthalmoscope should be replaced by smartphone ophthalmoscope | 1.4% | 10.1% | 26.1% | 33.3% | 29.0% |
Table 4: Responses of participants in the questionnaire to evaluate their perception and efficiency on the use of smartphone ophthalmoscope for fundus examination
C. Students’ Preference for Types of Ophthalmoscopes
Most of the students (94%) suggested that the smartphone ophthalmoscope be included in the medical curriculum, and 62% suggested replacing the direct ophthalmoscope with the smartphone ophthalmoscope. Their preference was mainly attributed to its efficiency, ease of use (particularly for those with refractive error and amblyopia (lazy eye)), the autofocus function of the smartphone, and the possibility of using both eyes to see the images on the smartphone screen. In addition, the comfortable working distance, ease of cleaning after use and peer discussion were cited. Meanwhile, 11% of the students suggested keeping the direct ophthalmoscope alongside the smartphone ophthalmoscope in the curriculum. They opined that the smartphone ophthalmoscope should be included as an additional teaching and learning tool for fundus examination but disagreed that it should replace the direct ophthalmoscope entirely, as the smartphone ophthalmoscope might not be readily available in all healthcare settings. One participant commented that “eye examination using direct ophthalmoscope was thought to be a basic procedural skill that doctors must have. Smartphone ophthalmoscope was a newer technology that might not be available in hospitals, unlike direct ophthalmoscope, which was more common”.
IV. DISCUSSION
The students scored significantly higher in the formative assessment of fundus abnormalities using the smartphone ophthalmoscope compared to the direct ophthalmoscope. These findings are consistent with those of Kim and Chao (2019) and Dunn et al. (2021). The study also showed that the difference was statistically significant regardless of the presence of refractive error.
The students with refractive error and amblyopia commented that they found the smartphone ophthalmoscope more convenient and efficient than the direct ophthalmoscope. They stated that they had difficulty using their amblyopic eye when performing the examination with the direct ophthalmoscope, as they had to follow the ‘three R rule’, in which students use their right eye and right hand when examining the patient’s right eye, standing at the patient’s side at about 45 degrees to avoid a ‘kissing’ position with the patient. The students with refractive errors highlighted another issue: they needed to adjust the direct ophthalmoscope very frequently to get a proper and clear view. However, when they used the smartphone ophthalmoscope, they were able to perform the examination using both eyes, as they could view the fundus on the smartphone screen without having to close one eye. Fifty percent of the students in this study reported that they had refractive errors; in the study by Al-Rashidi et al. (2018), 89 out of 162 medical students (54%) had refractive errors. In this study, a significantly higher number of students obtained the correct diagnosis of branch retinal vein occlusion (86%) and glaucoma (62%) using the smartphone ophthalmoscope compared to the direct ophthalmoscope (p < 0.001).
In a study conducted by Mrad et al. (2021) on an accurate method for glaucoma screening, the D-EYE smartphone ophthalmoscope was found to be more accurate for capturing fundus images and assessing the optic disc in detecting glaucoma than the direct ophthalmoscope. In addition, Mamtora et al. (2018) reported that it was more convenient and easier to detect the optic disc and blood vessels using the D-EYE smartphone ophthalmoscope. Providing alternative tools in medical education could help students learn and perform more efficiently during their teaching and learning activities.
In our study, 86% of the students felt confident using the smartphone ophthalmoscope, and 83% found it easy to view the fundus. The majority of the students (91%) found the smartphone ophthalmoscope user-friendly, and 73% indicated that they were able to identify the findings quickly while using it. It has been reported previously that medical students preferred smartphone ophthalmoscopes to direct ophthalmoscopes and were more likely to make correct and faster diagnoses (Nagra & Huntjens, 2020). Though mastering the technique of using the direct ophthalmoscope is important, it is equally paramount to be able to identify the fundus findings accurately. Cognitive load theory states that human working memory can only hold a certain number of interrelated objects at once (Chu, 2014). Motivational components can enhance student learning by boosting generative processing, as long as the learner is not constantly overburdened with needless processing or diverted from critical processing (Mayer, 2014). The technical challenges faced while using the direct ophthalmoscope could hamper the students’ ability to recognise the features associated with fundus abnormalities; the smartphone ophthalmoscope offers an advantage in this context.
In this study, 87% of the students found the working distance of the smartphone ophthalmoscope more comfortable than the typical 1–3 cm working distance of the direct ophthalmoscope. This finding was similar to that of Nagra & Huntjens (2020), who found that 92% of students preferred the longer working distance of 20–60 cm of the D-EYE smartphone ophthalmoscope.
The use of the smartphone ophthalmoscope as a teaching tool increases student engagement and enhances the learning experience. All students appreciated that their peers could observe the findings together with them on the smartphone screen, allowing them to discuss among themselves as well as with the lecturer. Learning must be an engaging and meaningful experience for learners to be productive (Mellis et al., 2013). Learners utilise strategies developed earlier in their training to optimise their knowledge and skills through reflection. When the students record the fundus images, they can discuss their interpretation of the findings with the lecturers and peers, and feedback from this process improves their learning efforts (Kaufman, 2019). The feedback and reflection facilitate the construction of new knowledge, as well as strategies for improving performance, as all of them can see the same findings on the smartphone screen and discuss them accordingly.
In our study, 93% of the students suggested that smartphone ophthalmoscopes should be included in the medical curriculum. It was easier for them to see the findings without spending a long time trying to focus by squinting and shutting one eye, as the image is automatically adjusted by the smartphone ophthalmoscope. This has been highlighted as one of the advantages of using the smartphone ophthalmoscope in medical training and screening in primary care centres (Nagra & Huntjens, 2020), and smartphone-based fundus imaging could even replace the direct ophthalmoscope in clinical medicine (Wintergerst et al., 2020). In our study, only eight of the 69 students (11%) opined that the direct ophthalmoscope should not be totally replaced by the smartphone ophthalmoscope. From their point of view, the direct ophthalmoscope is a must-know clinical skill that contributes to their professional identity, and the smartphone ophthalmoscope may not be easily available in developing countries due to resource constraints. The direct ophthalmoscope is one of the fundamental skills that all clinicians should be able to perform, and it is included in the assessment of the final-year undergraduate curriculum as well as the postgraduate membership assessment (Purbrick & Chong, 2015).
With a specific instructional scaffolding strategy, the smartphone ophthalmoscope can be used as a prologue to the direct ophthalmoscope. Students will be able to share the fundus pictures of the same patient with their peers through the screen simultaneously during clinical practice sessions in packed clinics, without having to struggle with the technical challenges of the direct ophthalmoscope. As a result, patients will be less burdened in terms of examination time, and students will be able to evaluate more patients with fundus abnormalities in a shorter amount of time. The concept of just-in-time learning can be a useful pedagogical tool for medical academicians to improve their teaching and learning approach in the age of technology; it uses technology to deliver teaching and learning activities, allowing learning communities to understand and practise better (Naseem et al., 2019). According to Riel (2000), academics continue to play an essential role in encouraging learners to apply their knowledge effectively. As new technologies emerge, educators must prepare students to be lifelong learners who are digitally literate and resourceful in their application of technology.
A. Limitations of the Study
As the study was conducted during the COVID-19 pandemic, the time available for recruiting and training the students was limited. As a result, the students had a shorter period of face-to-face clinical training, which limited their exposure to performing fundus examinations on real patients in the hospital and to using the various ophthalmoscopic tools. In addition, the lack of practice could have affected the students’ performance in the formative assessment of fundus examination using the smartphone and direct ophthalmoscopes. Therefore, we recommend repeating this study when the COVID-19 situation has resolved.
Another limitation of the study was that the students performed the fundus examination on the same manikins using the direct ophthalmoscope followed by the smartphone ophthalmoscope (or vice versa) on the same day. This could result in bias in their judgement in identifying the fundus abnormalities. Nevertheless, the students were reminded to be objective and record their findings accurately based on their observations using either tool.
V. CONCLUSION
The smartphone ophthalmoscope is an effective teaching tool for improving skills in detecting common ocular diseases. It provides a comfortable working distance and promotes collaborative learning by enabling peer discussion. It is also convenient for students with refractive errors. Therefore, the smartphone ophthalmoscope is a valuable supplementary teaching tool for fundus examination and is highly recommended for inclusion in the undergraduate medical curriculum.
Notes on Contributors
AMAQ designed and conducted the study, reviewed the literature, analysed the data and wrote the manuscript. EHM designed the study, analysed the data, gave critical feedback and edited the manuscript before submission. SCF designed the study, gave critical feedback and edited the manuscript before submission.
Ethical Approval
The study was approved by the International Medical University Joint-Committee on Research and Ethics (IMU-JC), Project ID No.: MHPE I/2021(01). Informed consent was obtained from all respondents, and the nature and purpose of the study were explained to them. The respondents were assured of anonymity and confidentiality of the collected information.
Data Availability
All data are available at https://figshare.com/s/d45da87ea42c596e714b and can be accessed on request and approval from the corresponding author.
Acknowledgement
The authors would like to thank the medical students at the University of Cyberjaya for their enthusiasm for learning. Special thanks go to the statisticians, Dr Norhafizah Ab Manan, University of Cyberjaya, and Dr Shamala Ramasamy, International Medical University, for their advice on statistical tests. The authors would also like to thank Professor Ian Wilson for proofreading the manuscript.
Funding
This study was funded by the International Medical University, Malaysia. MHPE I/2021(01)
Declaration of Interest
The authors declare that they have no conflicts of interest, including financial, consultant, institutional and other relationships that might lead to bias or a conflict of interest.
References
Al-Rashidi, S. H., Albahouth, A. A., Althwini, W. A., Alsohibani, A. A., Alnughaymishi, A. A., Alsaeed, A. A., Al-Rashidi, F. H., & Almatrafi, S. (2018). Prevalence refractive errors among medical students of Qassim University, Saudi Arabia: Cross-sectional descriptive study. Open Access Macedonian Journal of Medical Sciences, 6(5), 940–943. https://doi.org/10.3889/oamjms.2018.197
Benbassat, J., Polak, B. C. P., & Javitt, J. C. (2012). Objectives of teaching direct ophthalmoscopy to medical students. Acta Ophthalmologica, 90(6), 503–507. https://doi.org/10.1111/j.1755-3768.2011.02221.x
Chu, H.-C. (2014). Potential Negative Effects of Mobile Learning on Students’ Learning Achievement and Cognitive Load—A Format Assessment Perspective. Educational Technology & Society, 17 (1), 332–344
Curtis, R., Xu, M., Liu, D., Kwok, J., Hopman, W., Irrcher, I., & Baxter, S. (2021). Smartphone Compatible versus Conventional Ophthalmoscope: A Randomized Crossover Educational Trial. Journal of Academic Ophthalmology, 13(02), e270–e276. https://doi.org/10.1055/s-0041-1736438
Dunn, H. P., Kang, C. J., Marks, S., Witherow, J. L., Dunn, S. M., Healey, P. R., & White, A. J. (2021). Perceived usefulness and ease of use of fundoscopy by medical students: A randomised cross-over trial of six technologies (eFOCUS 1). BMC Medical Education, 21(1), 41. https://doi.org/10.1186/s12909-020-02469-8
Giardini, M. E., Livingstone, I. A. T., Jordan, S., Bolster, N. M., Peto, T., Burton, M., & Bastawrous, A. (2014). A smartphone based ophthalmoscope. [paper presentation]. 36th Annual International Conference of the Engineering in Medicine and Biology Society, EMBC, Chicago, United States. https://doi.org/10.1109/EMBC.2014.6944049
Kaufman, D. M. (2019). Teaching and Learning in Medical Education. How Theory can Inform Practice. Tim Swanwick, Kirsty Forrest, Bridget C. O’Brien (Eds), Understanding medical education evidence, theory, and practice (pp. 37-69). The Association for the Study of Medical Education.
Kim, M. C., & Hannafin, M. J. (2011). Scaffolding problem solving in technology-enhanced learning environments (TELEs): Bridging research and theory with practice. Computers & Education, 56(2), 403-417. Elsevier Ltd. https://www.learntechlib.org/p/67172/.
Kim, Y., & Chao, D. L. (2019). Comparison of smartphone ophthalmoscopy vs conventional direct ophthalmoscopy as a teaching tool for medical students: The COSMOS study. Clinical Ophthalmology, 13, 391–401. https://doi.org/10.2147/OPTH.S190922
Leonardo, D. (2018). Development of a virtual reality ophthalmoscope prototype: Mechatronic Engineering Program Faculty of Engineering, Universidad Militar Nueva Granada, Bogotá D.C., Colombia. http://hdl.handle.net/10654/17843
Liao, Y. Y., & Lin, Y. M. (2008). McNemar test is preferred for comparison of diagnostic techniques. American Journal of Roentgenology, 191(4), 2008. https://doi.org/10.2214/AJR.08.1090
Lötter, M. J., & Jacobs, L. (2020). Using smartphones as a social constructivist pedagogical tool for inquiry-supported problem-solving: An exploratory study. Journal of Teaching in Travel & Tourism, 20(4), 347–363. https://doi.org/10.1080/15313220.2020.1715323
MacKay, D. D., Garza, P. S., Bruce, B. B., Newman, N. J., & Biousse, V. (2015). The demise of direct ophthalmoscopy: A modern clinical challenge. Neurology: Clinical Practice, 5(2), 150–157. https://doi.org/10.1212/CPJ.0000000000000115
Mamtora, S., Sandinha, M. T., Ajith, A., Song, A., & Steel, D. H. W. (2018). Smart phone ophthalmoscopy: A potential replacement for the direct ophthalmoscope. Eye (Basingstoke), 32(11), 1766–1771. https://doi.org/10.1038/s41433-018-0177-1
Mayer, R. E. (2014). Cognitive theory of multimedia learning. The Cambridge Handbook of Multimedia Learning, Second Edition, 43–71. https://doi.org/10.1017/CBO9781139547369.005
Mellis, S., Carvalho, L., & Thompson, K. (2013, December 1-5). Applying 21st century constructivist learning theory to stage 4 design projects. [Conference presentation]. Joint Australian Association for Research in Education Annual Conference, Adelaide. https://files.eric.ed.gov/fulltext/ED603249.pdf
Mrad, Y., Elloumi, Y., Akil, M., & Bedoui, M. H. (2021). A Fast and Accurate Method for Glaucoma Screening from Smartphone-Captured Fundus Images. Irbm, 1, 1–11. https://doi.org/10.1016/j.irbm.2021.06.004
Myung, D., Jais, A., He, L., Blumenkranz, M. S., & Chang, R. T. (2014). 3D Printed Smartphone Indirect Lens Adapter for Rapid, High Quality Retinal Imaging. Journal of Mobile Technology in Medicine, 3(1), 9–15. https://doi.org/10.7309/jmtm.3.1.3
Nagra, M., & Huntjens, B. (2020). Smartphone ophthalmoscopy: Patient and student practitioner perceptions. Journal of Medical Systems, 44(1), Article 10. https://doi.org/10.1007/s10916-019-1477-0
Naseem, A., Ghias, K., Bawani, S., Shahab, M. A., Nizamuddin, S., Kashif, W., Khan, K. S., Ahmad, T., & Khan, M. (2019). Designing EthAKUL: A mobile just-in-time learning environment for bioethics in Pakistan. Scholarship of Teaching and Learning in the South, 3(1), 36–56. https://doi.org/10.36615/sotls.v3i1.70
Purbrick, R. M. J., & Chong, N. V. (2015). Direct ophthalmoscopy should be taught to undergraduate medical students—No. Eye, 29(8), 990-991. https://doi.org/10.1038/eye.2015.91
Riel, M. (2000). Education in the 21st century: Just-in-Time learning or learning communities, Technology and Learning, 137-160.
Russo, A., Morescalchi, F., Costagliola, C., Delcassi, L., & Semeraro, F. (2015). Comparison of smartphone ophthalmoscopy with slit-lamp biomicroscopy for grading diabetic retinopathy. American Journal of Ophthalmology, 159(2), 360-364. https://doi.org/10.1016/j.ajo.2014.11.008
Tan, C. H., Kyaw, B. M., Smith, H., Tan, C. S., & Car, L. T. (2020). Use of smartphones to detect diabetic retinopathy: Scoping review and meta-analysis of diagnostic test accuracy studies. Journal of Medical Internet Research, 22(5), e16658.
Wintergerst, M. W. M., Jansen, L. G., Holz, F. G., & Finger, R. P. (2020). Smartphone-Based Fundus Imaging-Where Are We Now? Asia-Pacific Journal of Ophthalmology, 9(4), 308–314. https://doi.org/10.1097/APO.0000000000000303
World Health Organization. (2019). Report of the 4th global scientific meeting on trachoma: Geneva, 27–29 November 2018. World Health Organization.
*Amelah Mohammed Abdul Qader
University of Cyberjaya Campus
Persiaran Bestari, Cyber 11, 63000 Cyberjaya,
Selangor Darul Ehsan, Malaysia
Email: amelah@cyberjaya.edu.my/ dramelahariqi@gmail.com
Submitted: 22 September 2021
Accepted: 27 April 2022
Published online: 4 October, TAPS 2022, 7(4), 1-21
https://doi.org/10.29060/TAPS.2022-7-4/OA2785
Yao Chi Gloria Leung1*, Kennedy Yao Yi Ng2*, Ka Shing Yow3*, Nerice Heng Wen Ngiam4, Dillon Guo Dong Yeo4, Angeline Jie-Yin Tey5, Melanie Si Rui Lim6, Aaron Kai Wen Tang7, Bi Hui Chew8, Celine Tham9, Jia Qi Yeo10, Tang Ching Lau11,12, Sweet Fun Wong13,14, Gerald Choon-Huat Koh15,16** & Chek Hooi Wong14,17**
1Department of Anaesthesiology, Singapore General Hospital, Singapore; 2Department of Medical Oncology, National Cancer Centre Singapore, Singapore; 3Department of General Medicine, National University Hospital, Singapore; 4Department of General Medicine, Singapore General Hospital, Singapore; 5Department of General Medicine, Tan Tock Seng Hospital, Singapore; 6Department of General Paediatrics, Kandang Kerbau Hospital, Singapore, 7Department of Psychiatry, Singapore General Hospital, Singapore; 8Tan Tock Seng Hospital, Singapore; 9Ng Teng Fong General Hospital, Singapore, 10National Healthcare Group Pharmacy, Singapore, 11Department of Medicine, NUS Yong Loo Lin School of Medicine, Singapore; 12Division of Rheumatology, University Medicine Cluster, National University Hospital, Singapore; 13Medical Board and Population Health & Community Transformation, Khoo Teck Puat Hospital, Singapore; 14Department of Geriatrics, Khoo Teck Puat Hospital, Singapore; 15Saw Swee Hock School of Public Health, National University of Singapore, Singapore; 16Future Primary Care, Ministry of Health Office of Healthcare Transformation, Singapore; 17Health Services and Systems Research, Duke-National University of Singapore Medical School, Singapore
*Co-first authors
**Co-last authors
Abstract
Introduction: Tri-Generational HomeCare (TriGen) is a student-initiated home visit programme for patients with a key focus on undergraduate interprofessional education (IPE). We sought to validate the Readiness for Interprofessional Learning Scale (RIPLS) and evaluate TriGen’s efficacy by investigating healthcare undergraduates’ attitude towards IPE.
Methods: Teams of healthcare undergraduates performed home visits for patients fortnightly over six months, trained by professionals from a regional hospital and a social service organisation. The RIPLS was validated using exploratory factor analysis. Evaluation of TriGen’s efficacy was performed via the administration of the RIPLS pre- and post-intervention, analysis of qualitative survey results and thematic analysis of written feedback.
Results: Of the 226 undergraduate participants from 2015 to 2018, 79.6% were enrolled in the study. Exploratory factor analysis revealed four factors accounting for 64.9% of total variance. One item loaded poorly and was removed. There was no difference in pre- and post-intervention RIPLS total and subscale scores. 91.6% of respondents agreed that they better appreciated the importance of interprofessional collaboration (IPC) in patient care, and 72.8% said that multi-disciplinary meetings (MDMs) were important for their learning. Thematic analysis revealed takeaways including learning from and teaching one another, understanding one’s own and other healthcare professionals’ roles, teamwork, and meeting undergraduates from different faculties.
Conclusion: We validated the RIPLS in Singapore and demonstrated the feasibility of an interprofessional, student-initiated home visit programme. While there was no change in RIPLS scores, the qualitative feedback suggests that there are participant-perceived benefits for IPE after undergoing this programme, even with the perceived barriers to IPE. Future programmes can work on addressing these barriers to IPE.
Keywords: Interprofessional Education, Student-Initiated Home Visit Programme, RIPLS, Validation
Practice Highlights
- We validated the Readiness for Interprofessional Learning Scale (RIPLS) in Singapore, a multi-ethnic Asian country.
- A student-initiated, interprofessional, longitudinal home visit program is feasible.
- While there was no significant change in RIPLS scores, participants reported qualitative benefits of the programme in their attitudes towards IPE.
- Qualitative feedback highlighted four main barriers to IPE: Time constraints, unmotivated teammates, administrative burden, and unsuitable patients.
I. INTRODUCTION
Interprofessional education (IPE) aims to prepare healthcare professionals for effective collaboration, and while becoming increasingly common, is challenging to initiate, implement, evaluate and sustain (Fahs et al., 2017). Key challenges include designing a curriculum that integrates IPE with traditional academic frameworks, active engagement of facilitators and students, and accommodating various professions (Sunguya et al., 2014). IPE is context-specific, evolving, and involves continuous interaction and interdependence, and many traditional top-down approaches such as forums and lectures do not effectively teach it (Briggs & McElhaney, 2015).
Experiential IPE programmes employ a ground-up approach and can potentially tackle some of the aforementioned challenges. Studies of students involved in on-the-ground interprofessional healthcare visits to older adults have shown that such experiences improved student collaboration and students’ self-perception of interprofessional team care-related skills (Blythe & Spiring, 2020; Conti et al., 2016; McManus et al., 2017; Toth-Pal et al., 2020; Vaughn et al., 2014). Therefore, a group of undergraduates from the National University of Singapore (NUS) Yong Loo Lin School of Medicine initiated Tri-Generational HomeCare (TriGen), an experiential student-led IPE programme aimed at improving health outcomes in older people with frequent hospital readmissions. This longitudinal service-learning programme was anchored by several educational aims, including enhancing students’ IPE outcomes and improving attitudes towards IPE.
Formal evaluations of such programmes, and investigations of student IPE attitudes after involvement in a longitudinal home visit programme, are lacking in the current IPE literature (Grice et al., 2018). This study aims to evaluate TriGen’s effectiveness by investigating student IPE attitudes using the Readiness for Interprofessional Learning Scale (RIPLS). Since the RIPLS has not been validated in the Singapore context, this study also aims to validate this scale.
II. METHODS
A. Programme Design
TriGen is a collaboration between NUS Yong Loo Lin School of Medicine, Khoo Teck Puat Hospital, a Northern regional hospital in Singapore, and North West Community Development Council, a grassroots organisation (Ng et al., 2020a, 2020b). A non-profit, ground-up social initiative by healthcare undergraduates, it has the dual aims of i) serving the medical and social needs of older patients by providing longitudinal home visits by interprofessional student teams; and ii) educating and empowering undergraduate students through a service-learning approach, with a key focus on improving attitudes towards IPE. The programme was designed under the mentorship of university faculty members and was earmarked as a co-curricular activity aimed at improving students’ attitudes towards IPE and interprofessional collaboration (IPC). Older patients with frequent hospital readmissions (three or more times over six months) were followed up by healthcare undergraduates enrolled in Medicine, Nursing, Pharmacy, Social Work, Physiotherapy or Occupational Therapy courses in Singapore.
The programme began with healthcare undergraduates undergoing didactic, skill-based training and team-based simulation training covering possible scenarios encountered during home visits (Annex 1). Each team, comprising 2-3 undergraduates from different disciplines, conducted fortnightly visits to 1-2 patients over six months. At the midpoint and endpoint of the programme, the healthcare undergraduates assessed the patients’ needs and presented at multi-disciplinary meetings (MDMs) chaired by healthcare professionals and grassroots staff, who guided the undergraduates in executing a management plan.
This IPE programme was designed based on educational principles for adult learners outlined by Knowles (1984). First, it provided healthcare undergraduates with opportunities for experiential learning anchored in the service-learning approach. Second, it was largely problem-based group learning, with most training sessions being team-based and scenario-based. MDMs were also problem-based and encouraged undergraduates to brainstorm ideas to address their patients’ issues. Third, the service they provided in this programme modelled the work they may engage in after graduation; what they learned was of immediate relevance to their current study and future practice. Lastly, the programme gave healthcare undergraduates the autonomy to direct their own learning: participation was voluntary and allowed participants flexibility for further self-study of topics of interest. Key student outcomes included readiness for IPE (including teamwork and collaboration, professional identity, and roles and responsibilities) and a better appreciation for IPC.
B. Evaluation Approach
This study used the framework by Kirkpatrick (1959), expanded by Barr et al. (2005), to evaluate the effectiveness of TriGen in improving healthcare undergraduates’ attitudes towards IPE, particularly at evaluation levels 1, 2a and 2b, which centre on learners’ reactions, modification of attitudes and perceptions, and acquisition of knowledge or skills (Table 1). Combining quantitative and qualitative data collection within a survey was judged the most appropriate way to capture the data and enrich the evaluation, and was therefore the approach used for this research (Figure 1).
| Evaluation Level | Methods and Measures | Timeframe |
| Level 1: Learners’ reactions (participants’ views of their learning experience and opinions about the programme) | Participants’ self-reported feedback of IPE learning | Post-intervention |
| | Qualitative feedback | Post-intervention |
| Level 2a: Modification of attitudes/perceptions | Participants’ self-reported feedback of IPE learning | Post-intervention |
| | Qualitative feedback | Post-intervention |
| | Readiness for Interprofessional Learning Scale | Pre- and post-intervention |
| Level 2b: Acquisition of knowledge/skills (concepts, procedures, principles, and skills) | Qualitative feedback | Post-intervention |
Table 1: Components of Kirkpatrick/Barr et al. evaluation framework as applied to TriGen
Figure 1: Flowchart of study components
C. Quantitative Measures
The RIPLS (Parsell & Bligh, 1999) was among the first scales developed to measure attitudes towards interprofessional learning. It assesses student readiness for IPE and IPC with other healthcare professionals and has been reported to be sensitive to differences in students’ attitudes towards IPE (Berger-Estilita et al., 2020). While a few studies have validated it in Asian countries (China, Indonesia, Japan), none has been performed in Singapore, a multi-ethnic Asian country where English is the predominant language of instruction (Ganotice & Chan, 2018; Lestari et al., 2016; Li et al., 2018; Tamura et al., 2012).
The RIPLS, a 19-item questionnaire comprising four subscales (“Teamwork and Collaboration”; “Positive Professional Identity”; “Negative Professional Identity”; and “Roles and Responsibilities”), was administered pre- and post-intervention (McFadyen et al., 2005). Higher RIPLS scores imply greater readiness for interprofessional learning. This study validates the RIPLS in the Singapore context for the first time, then employs it for quantitative evaluation of TriGen. Additionally, separate from the RIPLS, three questions were added as a direct measure of participants’ reactions (Level 1): “I better appreciate the importance of IPC in the care of patients through the programme”, “The multidisciplinary meetings organised were important for my learning”, and “I would recommend the programme to my friends.”
1) Statistical Analysis: The Shapiro-Wilk test was used to assess whether the data followed a normal distribution (Shapiro & Wilk, 1965). Factor analysis was conducted to explore the construct validity of the RIPLS, and Cronbach’s alpha was computed to determine internal consistency. The suitability of the correlation matrix was determined by the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Bartlett’s test of sphericity. The number of factors retained for the initial solution and entered into the rotation was determined by applying Kaiser’s criterion (eigenvalues > 1). The initial factor extraction was performed using principal component analysis, and exploratory factor analysis was then conducted based on the RIPLS four-subscale structure. A paired t-test comparing baseline and post-intervention responses was computed for each survey item to determine significant differences (p ≤ 0.05). One-way ANOVA was performed to assess which demographic factors correlated with pre-intervention RIPLS scores and with the magnitude of change in RIPLS scores; where it demonstrated an overall difference between groups, post-hoc Tukey’s HSD was performed. For all statistical analyses, the Statistical Package for Social Sciences (SPSS, Version 23.0, Chicago, Illinois) was used.
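Although the authors performed these analyses in SPSS, a comparable pipeline can be approximated with open-source tools. The sketch below is a minimal Python illustration of the workflow described above and is not the study’s actual code; the file name, the column names (item_1 … item_19, post_item_1 … post_item_19, faculty) and the choice of varimax rotation are assumptions made purely for illustration.

# Illustrative re-implementation of the analysis pipeline described above.
# Assumes a CSV with one row per respondent; all names below are hypothetical.
import pandas as pd
from scipy import stats
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("ripls_responses.csv")                      # hypothetical data file
pre_items = df[[f"item_{i}" for i in range(1, 20)]]          # 19 pre-intervention items

# Normality check on the total score (Shapiro-Wilk)
total_pre = pre_items.sum(axis=1)
print("Shapiro-Wilk:", stats.shapiro(total_pre))

# Sampling adequacy (KMO) and Bartlett's test of sphericity
kmo_per_item, kmo_total = calculate_kmo(pre_items)
chi2, p_bartlett = calculate_bartlett_sphericity(pre_items)
print(f"KMO = {kmo_total:.3f}, Bartlett chi2 = {chi2:.1f}, p = {p_bartlett:.4f}")

# Exploratory factor analysis with four factors; Kaiser's criterion would be
# applied to the eigenvalues returned by fa.get_eigenvalues()
fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(pre_items)
print(pd.DataFrame(fa.loadings_, index=pre_items.columns).round(3))

# Cronbach's alpha computed directly from item and total-score variances
def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

print("Cronbach's alpha:", round(cronbach_alpha(pre_items), 3))

# Paired t-test per item (pre vs post)
for i in range(1, 20):
    t, p = stats.ttest_rel(df[f"item_{i}"], df[f"post_item_{i}"], nan_policy="omit")
    print(f"item_{i}: t = {t:.2f}, p = {p:.3f}")

# One-way ANOVA across faculties, followed by post-hoc Tukey's HSD
groups = [g.values for _, g in total_pre.groupby(df["faculty"])]
print("One-way ANOVA:", stats.f_oneway(*groups))
print(pairwise_tukeyhsd(endog=total_pre, groups=df["faculty"]))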
D. Qualitative Measures
Post-intervention qualitative feedback regarding participants’ learning experiences was collected through online surveys. Questions included: What did you learn about interprofessional collaboration? What are your learning points after completing the project? Would you recommend this project to your peers, and what are your reasons? These questions were chosen to better understand participants’ reactions to the programme, their attitudes towards IPE and IPC, and other key learning points they may have had.
1) Thematic Analysis: All survey participants were encouraged to take part in the qualitative research, and a total of 163 were recruited to give written qualitative feedback on the programme. Given the relatively large dataset, thematic analysis was chosen to explore and interpret it, distilling it into recurring ideas (Braun & Clarke, 2006; Kiger & Varpio, 2020). Analysis was performed on participants’ qualitative descriptions of their learning experiences, with constant comparison analysis used to identify patterns in participants’ responses and develop a coding schema. Two coders independently identified major themes from the text within all transcripts, with reference to the research questions, then discussed and resolved any disagreements. No member checking was performed. A common coding schema was generated and applied to all transcripts.
E. Ethical Approval
Ethical approval was obtained from the NUS institutional review board (B-15-272). Study participation was entirely voluntary and anonymous. Informed consent was taken from participants before data collection commenced, and they were allowed to withdraw from the research at any point in time. No incentives were provided to study participants.
III. RESULTS
A total of 226 healthcare undergraduates participated in TriGen from 2015 to 2018. The response rate for the RIPLS was 79.6%.
A. Demographics
The median age was 21 years (range 18-41). 62.2% of participants were female and 37.8% were male. 31.7% were medical students, 12.8% nursing students, 42.2% pharmacy students, 10.0% social work students, and 3.3% therapy students. First- and second-year students comprised 62.2% of participants, while third- to fifth-year students comprised 37.8%. 65.0% had participated in previous IPE activities.
B. Construct Validity
The KMO index was 0.902, indicating sampling adequacy. The chi-square statistic for Bartlett’s test of sphericity was 1919.445 (df = 171, p < 0.001), indicating suitability for factor analysis.
Principal component analysis yielded four components largely consistent with the four-subscale model of the RIPLS (Barr et al., 2005) (Annex 2). However, one item, “I am not sure what my professional role will be”, had a low loading value of 0.285 under the original subscale of “Roles and Responsibility” and a borderline low loading value of 0.459 under the subscale of “Negative Professional Identity”. It was removed from subsequent analyses in view of its poor fit into the theoretical construct (Table 2).
| No | Statement | Teamwork and Collaboration | Negative Professional Identity | Positive Professional Identity | Roles and Responsibilities |
| 1 | Shared learning will help me to think positively about other healthcare professionals. | 0.794 | | | |
| 2 | Learning with other health and social care students before qualification would improve relationships after qualification. | 0.781 | | | |
| 3 | Team-working skills are essential for all health and social care students to learn. | 0.773 | | | |
| 4 | Shared learning will help me to understand my own limitations. | 0.767 | | | |
| 5 | Communication skills should be learnt with other health and social care students. | 0.746 | | | |
| 6 | Learning with other students/professionals will make me become a more effective member of a health and social care team. | 0.742 | | | |
| 7 | For small group learning to work, students need to trust and respect each other. | 0.739 | | | |
| 8 | Shared learning with other healthcare students will increase my ability to understand clinical problems. | 0.723 | | | |
| 9 | Patients would ultimately benefit if health and social care students/professionals worked together to solve patient problems. | 0.650 | | | |
| 10 | It is not necessary for undergraduate health and social care students to learn together. | | 0.882 | | |
| 11 | I don’t want to waste time learning with other health and social care students. | | 0.854 | | |
| 12 | Clinical problem-solving skills can only be learnt with students from my own department. | | 0.799 | | |
| 13 | Shared learning will help to clarify the nature of patient problems. | | | 0.658 | |
| 14 | Shared learning before qualification will help me become a better team worker. | | | 0.642 | |
| 15 | I would welcome the opportunity to work on small group projects with other health and social care students. | | | 0.614 | |
| 16 | Shared learning with other health and social care professionals will help me to communicate better with patients and other healthcare professionals. | | | 0.567 | |
| 17 | The function of nurses and therapists is mainly to provide support for doctors. | | | | 0.836 |
| 18 | I am not sure what my professional role will be. | | 0.459* | | 0.285 |
| 19 | I have to acquire much more knowledge and skills than other health or social care students. | | | | 0.517 |
*The highest loading value of each item under the four subscales is shown (except for item 18). A loading value of >0.5 was taken to be satisfactory. Item 18, “I am not sure what my professional role will be.”, was deemed borderline satisfactory at a loading value of 0.459 in the “Negative Professional Identity” subscale; its loading value was lower, at 0.285, in its original subscale, “Roles and Responsibility”.
C. Internal Consistency
Cronbach’s alpha was 0.848 for the RIPLS total score, suggesting good internal consistency.
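For reference, Cronbach’s alpha for a scale of k items is conventionally computed from the item and total-score variances as shown below; this standard formula is included for clarity and is not reproduced from the original analysis.

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)

where \sigma^{2}_{Y_i} is the variance of item i and \sigma^{2}_{X} is the variance of the total score.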
D. Baseline RIPLS Score
Table 3: Total RIPLS scores.
Subscale scores can be found in Annexes 3 to 6.
The mean baseline total RIPLS score was 76.6 (95% CI 75.6–77.6). There was a baseline difference between faculties (p=0.001), with medical and therapy undergraduates having higher scores than pharmacy students (mean difference 3.85, 0.59–7.11, p=0.012 and mean difference 8.83, 0.94–16.7, p=0.020, respectively) (Table 3). As for the subscales, there was a difference in baseline “Teamwork and Collaboration” scores between years of study, with Year 1–2 undergraduates having a higher baseline score of 40.8 (40.0–41.5) versus 39.5 (38.6–40.4) for Year 3–5 undergraduates (p=0.038) (Annex 3). Medical undergraduates had higher baseline scores for the “Teamwork and Collaboration” (41.2, 40.2–42.2) and “Positive Professional Identity” (17.9, 17.4–18.5) subscales compared to pharmacy undergraduates (39.3, 38.4–40.1, p=0.034 and 17.0, 16.6–17.4, p=0.036, respectively) (Annexes 3-4). Social work undergraduates had the lowest baseline “Roles and Responsibility” score, averaging 4.94 (4.34–5.55), compared to all other faculties (Annex 6).
E. Change in RIPLS Score Post-Intervention
There was no significant difference between the pre- and post-intervention RIPLS total score or the “Teamwork and Collaboration” subscale score (Table 3, Annex 3). Under the “Positive Professional Identity” subscale, there was a decrease in post-intervention scores of Year 1-2 students (mean difference -0.500 (-0.931 to -0.069), p=0.023) and of students with no participation in activities outside of the faculty (mean difference -0.403 (-0.768 to -0.037), p=0.031) (Annex 4). Under the “Negative Professional Identity” subscale, there was a decrease in post-intervention scores in medical students (mean difference -0.667 (-1.31 to -0.020), p=0.44) and social work students (-0.889 (-1.70 to -0.073), p=0.035) (Annex 5). There was an increase in the post-intervention score amongst female students under the “Roles and Responsibility” subscale (mean difference 0.384 (0.065–0.703), p=0.019) (Annex 6).
F. Individual Item Analysis
Negatively coded statements such as “the function of nurses and therapists is mainly to provide support for doctors” (Item 17) and “I am not sure what my professional role will be” (Item 18) showed significant increases in scores post-intervention (0.23, p=0.005 and 0.17, p=0.016, respectively). Other significant findings include a decrease in scores for the statements “Shared learning with other health and social care professionals will help me to communicate better with patients and other healthcare professionals” (Item 13) (-0.14, p=0.013) and “Shared learning will help to clarify the nature of patient problems” (Item 15) (-0.10, p=0.034) (Table 4).
Table 4: RIPLS (Individual items analysis)
G. Self-Reported Feedback on Interprofessional Learning
91.6% of participants agreed they could “better appreciate the importance of interprofessional collaboration in the care of patients”. 72.8% said MDMs were important for their learning, and 91.9% of respondents would recommend the programme to their friends.
H. Qualitative Feedback
A total of 163 of the 180 survey respondents participated in the qualitative research (response rate 90.6%) (Figure 1). 34.4% of respondents were male and 65.6% female. 33.1% of respondents were studying Medicine, 12.3% Nursing, 40.5% Pharmacy, 11.0% Social Work and 3.1% Therapy. 54.6% of respondents were in early years of study (Year 1–2), and 74.8% had previous exposure to IPE. Thematic analysis yielded the following themes:
1) Learning and teaching one another: Healthcare undergraduates found value in learning from one another. They shared with one another the knowledge and skills gained from their respective curricula.
“I feel more equipped and prepared to teach and learn from other healthcare professionals”
21-year-old female third-year medical student
“I learnt a lot from my social work team leader and how to consider the social aspects of issues the elderly face”
20-year-old male first-year medical student
2) Understanding the role of other healthcare professionals: Healthcare undergraduates learned the role of other healthcare professionals and gained new insights into how different healthcare professionals contributed to the care of the patient.
[I have] learn[ed] … how we can tap on each other[’s] strengths to come up with a care plan for the patients
21-year-old female third-year pharmacy student
Understanding what medicine, nursing [and] pharmacy does make quite a lot of difference to how we perceive and thus, work with them.
23-year-old female second-year social work student
3) Understanding one’s own role: Healthcare undergraduates reported developing a greater understanding of the roles and responsibilities they played as a part of a multi-disciplinary team.
I am now more aware of the role and responsibility I have as a healthcare professional.
21-year-old female first-year pharmacy student
Working in a multi-disciplinary team gave me a feel of how it may be like caring for a patient as a team in my future career.
20-year-old female first-year social work student
4) Teamwork: Healthcare undergraduates appreciated the need for collaboration and teamwork within a multi-disciplinary team. They learned about the importance of compromise.
Working with different people, in terms of personality, faculty, etcetera – I learnt to give and take and be more understanding towards the others.
21-year-old second-year social work student
It has allowed me to better understand … how the different professions can come together to better serve the needs of patients.
20-year-old female second-year pharmacy student
5) Opportunity to meet people from other faculties: Healthcare undergraduates valued meeting people from other faculties and developing collaborative relationships they would otherwise not have had the opportunity to.
I got to know seniors in medicine and peers from pharmacy.
20-year-old female first-year nursing student
It is a very unique experience, having the chance to interact with … other university students from different healthcare faculties.
20-year-old female second-year pharmacy student
6) Factors limiting learning: Factors limiting learning included time constraints, unmotivated teammates, administrative burden and lack of suitable patients. For the latter, some undergraduates felt that their care was restricted to companionship for patients who were already able to manage their own chronic conditions well and did not require further help from the healthcare undergraduates.
IV. DISCUSSION
A. Validation of the RIPLS in Singapore
This study validated the RIPLS in the Singapore context. The final model is the same as that proposed by McFadyen et al. (2005), without item 18, “I am not sure what my professional role will be”, from the “Roles and Responsibility” subscale. The poor fit of this item into this study’s theoretical construct could be because participants were mostly in their pre-clinical years and may not understand professional roles and responsibilities due to their limited on-the-job experience, a reason also proposed by McFadyen et al. (2005) and Tyastuti et al. (2014). Tyastuti et al. (2014) found that this item, along with “I have to acquire much more knowledge and skills than other healthcare students” (item 19) from the same subscale, had loading factors of <0.5, and removed the entire “Roles and Responsibility” subscale from the Indonesian version of the RIPLS. Other studies validating the RIPLS also experienced issues with this subscale (Lauffs et al., 2008; Lestari et al., 2016; McFadyen et al., 2005).
B. Baseline RIPLS score
The mean baseline RIPLS score is comparable with that reported by Chua et al. (2015), another study conducted in Singapore, which measured change in the RIPLS after a one-day IPE conference. They also found higher baseline RIPLS scores for medical undergraduates than for other faculties, a finding noted in this study and in another conducted in a culturally similar country (Lestari et al., 2016). However, this finding is not consistent across the literature, as other studies (Aziz et al., 2011; de Oliveira et al., 2018) have found the contrary.
Chua et al. (2015) also found that prior IPE experience resulted in higher baseline RIPLS scores, a finding not replicated in this study. We hypothesise that while 65.0% of this study’s participants had previous IPE exposure (versus 10.6% in Chua et al. (2015)), the heterogeneous nature of the IPE programmes they previously participated in may have had differing efficacy in improving IPE attitudes.
This study found undergraduates in their later years had a lower baseline “Teamwork and Collaboration” subscale score, versus those in their early years. We postulate that undergraduates with more clinical experience better understand the challenges of IPE in practice, a finding echoed by Judge et al. (2015).
Pharmacy students, but not medical students, were required by their curriculum to fulfil volunteering hours, which could explain the former’s lower baseline scores for the total RIPLS and the “Teamwork and Collaboration” and “Positive Professional Identity” subscales, since they were likely less motivated by IPE when choosing to participate.
Social work undergraduates’ low baseline “Roles and Responsibility” score likely reflects their minimal exposure to medical social work unless they elect healthcare modules in their senior years of study.
C. Change in Pre- and Post-intervention RIPLS Scores
Our study did not show a significant difference between the pre- and post-intervention RIPLS total score or the “Teamwork and Collaboration” subscale score. Additionally, there was a decrease in post-intervention scores under the “Positive Professional Identity” subscale for Year 1-2 students and under the “Negative Professional Identity” subscale for medical and social work students. This contrasts with the literature, where previous studies involving conferences (Chua et al., 2015) or standalone learning modules (Wakely et al., 2013; Zaudke et al., 2016) demonstrated a significant difference in the total RIPLS score pre- and post-intervention. Possible reasons for this are discussed further in section E.
There was a significant increase in the post-intervention score amongst female students under the “Roles and Responsibility” subscale. Previous studies have suggested that there are gender-specific differences in perception towards IPE, with female students having a more positive attitude towards IPE (Hansson et al., 2010; Wilhelmsson et al., 2011). In addition, the individual item analysis showed that negatively coded statements relating to the “Roles and Responsibility” subscale, such as “the function of nurses and therapists is mainly to provide support for doctors” (Item 17) and “I am not sure what my professional role will be” (Item 18), had significant increases in scores post-intervention. This is encouraging and demonstrates the success of the programme in helping students understand the respective roles and responsibilities of each profession, which is a crucial part of IPE and eventually IPC.
Other significant findings in the individual item analysis include a decrease in scores for the statements “Shared learning with other health and social care professionals will help me to communicate better with patients and other healthcare professionals” (Item 13), and “Shared learning will help to clarify the nature of patient problems” (Item 15). These findings suggest that the programme can be improved by incorporating more modules on communication between healthcare professionals and shared problem-solving.
D. Qualitative Feedback
While the lack of a significant difference between the pre- and post-intervention RIPLS scores suggests no change in attitudes, the qualitative data revealed that the majority of undergraduates better appreciated the importance of IPC for patient care and many felt that MDMs were useful for their learning.
Qualitative analysis revealed five major themes in the undergraduates’ learning pertaining to IPE. Participants learned from and taught each other. Being able to freely learn from and teach one another requires mutual trust and respect which are key elements of collaborative practices (de Oliveira et al., 2018). Participants reported better understanding of their own and other healthcare professionals’ roles; these are recognised as crucial components of collaborative practice (Canadian Interprofessional Health Collaborative, 2010). Undergraduates also shared that they learned about teamwork, specifically, conflict resolution and compromise. Finally, undergraduates appreciated the opportunities to meet fellow undergraduates from different faculties. It has been observed in many successful IPE programmes that informal social interactions are potentially as important as the actual IPE activities (Lie et al., 2016). We observed that the relationships built between participants of the programme often persisted beyond the completion of the programme; these relationships could benefit the institution and healthcare system (Hoffman et al., 2008).
E. Possible Reasons Underlying Lack of Improvement in RIPLS Scores
First, as mentioned earlier, the RIPLS has been described to have psychometric issues, with multiple researchers modifying the subscales (Mahler et al., 2015). Second, Schmitz and Brandt (2015) suggested that the RIPLS is insensitive to course improvements and to pre- versus post-intervention changes in attitudes. We chose the RIPLS at the start of 2014 as it had been widely used and validated and was simple to administer, and we also sought to validate it in Singapore for the first time. Unfortunately, few studies on its potential issues had been published at the time to inform the design of this study. Third, the longitudinal nature of the programme may have given undergraduates greater insight into the challenges of IPE and the realities of collaborating within interprofessional teams, tempering their idealism.
Lestari et al. (2016) described how nursing and midwifery undergraduates had lower RIPLS scores compared to medical and dentistry undergraduates because they had prior clinical experience and had likely observed less-than-exemplary interactions amongst members of healthcare teams. Similarly, Makino et al. (2013) found that graduates of an IPE programme had a lower mean score on the Modified Attitudes Toward Health Care Teams Scale (ATHCTS) compared to current students. The authors suggested that the alumni’s more negative attitude may be due to their real-world experience. Several structural issues in clinical practice have been identified that contribute to this trend, for example competition between professionals (Tremblay et al., 2010) and power struggles (Paradis & Whitehead, 2015).
F. Barriers to IPE
Undergraduates reported four main barriers: time constraints, unmotivated teammates, administrative burden and unsuitable patients. Time constraints have also been reported in other studies, including Alexandraki et al. (2017) and West et al. (2016). As this programme was voluntary, undergraduates had to take time out of their already packed curriculum to participate, and the selection of volunteers was not a stringent process. Additionally, as participants were contributing to clinical care, documentation of visits was required. Multiple studies have shown that physicians deem documentation and administrative work burdensome, and excessive time spent on these tasks may be associated with physician burnout (Patel et al., 2018; Wright & Katz, 2018).
To address these barriers, incorporating academic credits for participation, selecting participants more stringently, streamlining administrative work and choosing patients prudently may be considered. These measures are already being implemented by the programme organisers to improve the programme.
G. Strengths and Limitations
The strength of this study lies in the use of both quantitative and qualitative data grounded in an established framework by Kirkpatrick (1959) for the evaluation of a novel experiential IPE programme. The limitations of our study include its single-institution setting and the fact that the participants were volunteers, forming a self-selected group; hence, the results may not be generalisable. There was also no control arm for the intervention. In addition, there was a large variation in baseline RIPLS scores, which could potentially be reduced with a more robust study design that controls for baseline differences. Lastly, the use of only a survey for data collection may limit the depth of qualitative data obtained; further studies could include qualitative interviews.
V. CONCLUSION
We validated the RIPLS in Singapore and demonstrated the feasibility of an interprofessional student-initiated home visit programme. While there was no significant change in RIPLS scores, the qualitative feedback suggests that there are participant-perceived benefits for IPE after undergoing this programme, even with the perceived barriers to IPE. Future programmes can work on addressing these barriers to IPE.
Notes on Contributors
Gloria Yao Chi Leung contributed to the conception and design of the work, the acquisition, analysis, and interpretation of data for the work, drafting and revising the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.
Kennedy Yao Yi Ng contributed to the conception and design of the work, analysis and interpretation of data for the work, drafting and revising the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.
Yow Ka Shing contributed to the conception and design of the work, the acquisition and interpretation of data for the work, drafting and revising the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.
Nerice Heng Wen Ngiam contributed to the conception and design of the work, the acquisition of data for the work, drafting the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.
Dillon Guo Dong Yeo contributed to the conception and design of the work, drafting the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.
Angeline Jie-Yin Tey contributed to the conception and design of the work, drafting the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.
Melanie Si Rui Lim contributed to the conception and design of the work, drafting the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.
Aaron Kai Wen Tang contributed to the conception and design of the work, drafting the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.
Chew Bi Hui contributed to the conception and design of the work, drafting the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.
Celine Yi Xin Tham contributed to the conception and design of the work, drafting the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.
Yeo Jia Qi contributed to the conception and design of the work, drafting the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.
Lau Tang Ching contributed to the conception and design of the work, critical revision of the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.
Wong Sweet Fun contributed to the conception and design of the work, critical revision of the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.
Gerald Choon Huat Koh contributed to the conception and design of the work, interpretation of the data for the work, critical revision of the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.
Wong Chek Hooi contributed to the conception and design of the work, interpretation of the data for the work, critical revision of the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.
Ethical Approval
Ethical approval was obtained from the NUS institutional review board (B-15-272). Study participation was entirely voluntary and anonymous. Informed consent was taken from participants before data collection commenced, and they were allowed to withdraw from the research at any point in time. No incentives were provided to study participants.
Data Availability
According to institutional policy, the research dataset is available upon reasonable request to the corresponding author.
Acknowledgement
The authors would like to thank the Tri-Generational HomeCare Organising Committee from 2014 to 2018 for supporting the study. They would like to extend their thanks to the National University of Singapore, Yong Loo Lin School of Medicine, Dean’s Office; the North West Community Development Council; Khoo Teck Puat Hospital, Singapore; Geriatric Education and Research Institute, Singapore. Finally, they would like to thank the volunteers for their generosity and the patients for their hospitality.
Funding
National University of Singapore, Yong Loo Lin School of Medicine, Dean’s Office; the North West Community Development Council; Khoo Teck Puat Hospital, Singapore provided funding support for the purchase of medical consumables, refreshments and logistics for the program.
Declaration of Interest
There are no conflicts of interest.
References
Alexandraki, I., Hernandez, C. A., Torre, D. M., & Chretien, K. C. (2017). Interprofessional education in the internal medicine clerkship post-LCME standard issuance: Results of a national survey. Journal of General Internal Medicine, 32(8), 871–876. https://doi.org/10.1007/s11606-017-4004-3
Aziz, Z., Teck, L. C., & Yen, P. Y. (2011). The attitudes of medical, nursing and pharmacy students to inter-professional learning. Procedia – Social and Behavioural Sciences, 29, 639–645. https://doi.org/10.1016/j.sbspro.2011.11.287
Barr, H., Koppel, I., Reeves, S., Hammick, M., & Freeth, D. (2005). Effective interprofessional education: argument, assumption, and evidence (1st edition). Wiley-Blackwell.
Berger-Estilita, J., Fuchs, A., Hahn, M., Chiang, H., & Greif, R. (2020). Attitudes towards interprofessional education in the medical curriculum: A systematic review of the literature. BMC Medical Education, 20(1), Article 254. https://doi.org/10.1186/s12909-020-02176-4
Blythe, J., & Spiring, R. (2020). The virtual home visit. Education for Primary Care, 31(4), 244–246. https://doi.org/10.1080/14739879.2020.1772119
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
Briggs, M. C. E., & McElhaney, J. E. (2015). Frailty and interprofessional collaboration. Interdisciplinary Topics in Gerontology and Geriatrics, 41, 121–136. https://doi.org/10.1159/000381204
Canadian Interprofessional Health Collaborative. (2010). A national interprofessional competency framework. https://phabc.org/wp-content/uploads/2015/07/CIHC-National-Interprofessional-Competency-Framework.pdf
Chua, A. Z., Lo, D. Y., Ho, W. H., Koh, Y. Q., Lim, D. S., Tam, J. K., Liaw, S. Y., & Koh, G. C. (2015). The effectiveness of a shared conference experience in improving undergraduate medical and nursing students’ attitudes towards inter-professional education in an Asian country: A before and after study. BMC Medical Education, 15, Article 233. https://doi.org/10.1186/s12909-015-0509-9
Conti, G., Bowers, C., O’Connell, M. B., Bruer, S., Bugdalski-Stutrud, C., Smith, G., Bickes, J., & Mendez, J. (2016). Examining the effects of an experiential interprofessional education activity with older adults. Journal of Interprofessional Care, 30(2), 184–190. https://doi.org/10.3109/13561820.2015.1092428
de Oliveira, V. F., Bittencourt, M. F., Navarro Pinto, Í. F., Lucchetti, A. L. G., da Silva Ezequiel, O., & Lucchetti, G. (2018). Comparison of the readiness for interprofessional learning and the rate of contact among students from nine different healthcare courses. Nurse Education Today, 63, 64–68. https://doi.org/10.1016/j.nedt.2018.01.013
Fahs, D. B., Honan, L., Gonzalez-Colaso, R., & Colson, E. R. (2017). Interprofessional education development: Not for the faint of heart. Advances in Medical Education and Practice, 8, 329–336. https://doi.org/10.2147/AMEP.S133426
Ganotice, F. A., & Chan, L. K. (2018). Construct validation of the English version of Readiness for Interprofessional Learning Scale (RIPLS): Are Chinese undergraduate students ready for ‘shared learning’? Journal of Interprofessional Care, 32(1), 69–74. https://doi.org/10.1080/13561820.2017.1359508
Grice, G. R., Thomason, A. R., Meny, L. M., Pinelli, N. R., Martello, J. L., & Zorek, J. A. (2018). Intentional interprofessional experiential education. American Journal of Pharmaceutical Education, 82(3), Article 6502. https://doi.org/10.5688/ajpe6502
Hansson, A., Foldevi, M., & Mattsson, B. (2010). Medical students’ attitudes toward collaboration between doctors and nurses – A comparison between two Swedish universities. Journal of Interprofessional Care, 24(3), 242–250. https://doi.org/10.3109/13561820903163439
Hoffman, S. J., Rosenfield, D., Gilbert, J. H. V., & Oandasan, I. F. (2008). Student leadership in interprofessional education: Benefits, challenges and implications for educators, researchers and policymakers. Medical Education, 42(7), 654–661. https://doi.org/10.1111/j.1365-2923.2008.03042.x
Judge, M. P., Polifroni, E. C., & Zhu, S. (2015). Influence of student attributes on readiness for interprofessional learning across multiple healthcare disciplines: Identifying factors to inform educational development. International Journal of Nursing Sciences, 2(3), 248–252. https://doi.org/10.1016/j.ijnss.2015.07.007
Kiger, M. E., & Varpio, L. (2020). Thematic analysis of qualitative data: AMEE Guide No. 131. Medical Teacher, 42(8), 846–854. https://doi.org/10.1080/0142159X.2020.1755030
Kirkpatrick, D. L. (1959). Techniques for evaluating training programs. Journal of the American Society of Training Directors, 13, 21–26.
Knowles, M. S. (1984). Andragogy in Action: Applying Modern Principles of Adult Learning (1st edition). Jossey-Bass.
Lauffs, M., Ponzer, S., Saboonchi, F., Lonka, K., Hylin, U., & Mattiasson, A.-C. (2008). Cross-cultural adaptation of the Swedish version of Readiness for Interprofessional Learning Scale (RIPLS). Medical Education, 42(4), 405–411. https://doi.org/10.1111/j.1365-2923.2008.03017.x
Lestari, E., Stalmeijer, R. E., Widyandana, D., & Scherpbier, A. (2016). Understanding students’ readiness for interprofessional learning in an Asian context: A mixed-methods study. BMC Medical Education, 16, Article 179. https://doi.org/10.1186/s12909-016-0704-3
Li, Z., Sun, Y., & Zhang, Y. (2018). Adaptation and reliability of the Readiness for Inter Professional Learning Scale (RIPLS) in the Chinese health care students setting. BMC Medical Education, 18(1), Article 309. https://doi.org/10.1186/s12909-018-1423-8
Lie, D. A., Forest, C. P., Walsh, A., Banzali, Y., & Lohenry, K. (2016). What and how do students learn in an interprofessional student-run clinic? An educational framework for team-based care. Medical Education Online, 21(1), Article 31900. https://doi.org/10.3402/meo.v21.31900
Mahler, C., Berger, S., & Reeves, S. (2015). The Readiness for Interprofessional Learning Scale (RIPLS): A problematic evaluative scale for the interprofessional field. Journal of Interprofessional Care, 29(4), 289–291. https://doi.org/10.3109/13561820.2015.1059652
Makino, T., Shinozaki, H., Hayashi, K., Lee, B., Matsui, H., Kururi, N., Kazama, H., Ogawara, H., Tozato, F., Iwasaki, K., Asakawa, Y., Abe, Y., Uchida, Y., Kanaizumi, S., Sakou, K., & Watanabe, H. (2013). Attitudes toward interprofessional healthcare teams: A comparison between undergraduate students and alumni. Journal of Interprofessional Care, 27(3), 261–268. https://doi.org/10.3109/13561820.2012.751901
McFadyen, A. K., Webster, V., Strachan, K., Figgins, E., Brown, H., & McKechnie, J. (2005). The Readiness for Interprofessional Learning Scale: A possible more stable sub-scale model for the original version of RIPLS. Journal of Interprofessional Care, 19(6), 595–603. https://doi.org/10.1080/13561820500430157
McManus, K., Shannon, K., Rhodes, D. L., Edgar, J. D., & Cox, C. (2017). An interprofessional education program’s impact on attitudes toward and desire to work with older adults. Education for Health, 30(2), 172–175. https://doi.org/10.4103/efh.EfH_2_15
Ng, K. Y. Y., Leung, G. Y. C., Tey, A. J.-Y., Chaung, J. Q., Lee, S. M., Soundararajan, A., Yow, K. S., Ngiam, N. H. W., Lau, T. C., Wong, S. F., Wong, C. H., & Koh, G. C.-H. (2020a). Bridging the intergenerational gap: The outcomes of a student-initiated, longitudinal, inter-professional, inter-generational home visit program. BMC Medical Education, 20(1), Article 148. https://doi.org/10.1186/s12909-020-02064-x
Ng, K. Y. Y., Leung, G. Y. C., Yow, K. S., Ngiam, N. H. W., Yeo, D. G. D., Tey, A. J.-Y., Lim, M. S. R., Tang, A. K. W., Chew, B. H., Tham, C. Y. X., Yeo, J. Q., Lau, T. C., Wong, S. F., Wong, C. H., & Koh, G. C.-H. (2020b). Impact of an interprofessional, longitudinal, undergraduate student-initiated home visit program towards interprofessional education. Research Square. https://doi.org/10.21203/rs.3.rs-23744/v1
Paradis, E., & Whitehead, C. R. (2015). Louder than words: Power and conflict in interprofessional education articles, 1954-2013. Medical Education, 49(4), 399–407. https://doi.org/10.1111/medu.12668
Parsell, G., & Bligh, J. (1999). The development of a questionnaire to assess the readiness of health care students for interprofessional learning (RIPLS). Medical Education, 33(2), 95-100. https://doi.org/10.1046/j.1365-2923.1999.00298.x
Patel, R. S., Bachu, R., Adikey, A., Malik, M., & Shah, M. (2018). Factors related to physician burnout and its consequences: A review. Behavioural Sciences, 8(11), Article 98. https://doi.org/10.3390/bs8110098
Schmitz, C. C., & Brandt, B. F. (2015). The Readiness for Interprofessional Learning Scale: To RIPLS or not to RIPLS? That is only part of the question. Journal of Interprofessional Care, 29(6), 525–526. https://doi.org/10.3109/13561820.2015.1108719
Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test for normality (complete samples). Biometrika, 52(3/4), 591–611. https://doi.org/10.2307/2333709
Sunguya, B. F., Hinthong, W., Jimba, M., & Yasuoka, J. (2014). Interprofessional education for whom? – Challenges and lessons learned from its implementation in developed countries and their application to developing countries: A systematic review. PloS One, 9(5), Article e96724. https://doi.org/10.1371/journal.pone.0096724
Tamura, Y., Seki, K., Usami, M., Taku, S., Bontje, P., Ando, H., Taru, C., & Ishikawa, Y. (2012). Cultural adaptation and validating a Japanese version of the readiness for interprofessional learning scale (RIPLS). Journal of Interprofessional Care, 26(1), 56–63. https://doi.org/10.3109/13561820.2011.595848
Toth-Pal, E., Fridén, C., Asenjo, S. T., & Olsson, C. B. (2020). Home visits as an interprofessional learning activity for students in primary healthcare. Primary Health Care Research & Development, 21, Article e59. https://doi.org/10.1017/S1463423620000572
Tremblay, D., Drouin, D., Lang, A., Roberge, D., Ritchie, J., & Plante, A. (2010). Interprofessional collaborative practice within cancer teams: Translating evidence into action. A mixed methods study protocol. Implementation Science, 5, Article 53. https://doi.org/10.1186/1748-5908-5-53
Tyastuti, D., Onishi, H., Ekayanti, F., & Kitamura, K. (2014). Psychometric item analysis and validation of the Indonesian version of the Readiness for Interprofessional Learning Scale (RIPLS). Journal of Interprofessional Care, 28(5), 426–432. https://doi.org/10.3109/13561820.2014.907778
Vaughn, L. M., Cross, B., Bossaer, L., Flores, E. K., Moore, J., & Click, I. (2014). Analysis of an interprofessional home visit assignment: Student perceptions of team-based care, home visits, and medication-related problems. Family Medicine, 46(7), 522–526.
Wakely, L., Brown, L., & Burrows, J. (2013). Evaluating interprofessional learning modules: Health students’ attitudes to interprofessional practice. Journal of Interprofessional Care, 27(5), 424–425. https://doi.org/10.3109/13561820.2013.784730
West, C., Graham, L., Palmer, R. T., Miller, M. F., Thayer, E. K., Stuber, M. L., Awdishu, L., Umoren, R. A., Wamsley, M. A., Nelson, E. A., Joo, P. A., Tysinger, J. W., George, P., & Carney, P. A. (2016). Implementation of interprofessional education (IPE) in 16 U.S. medical schools: Common practices, barriers and facilitators. Journal of Interprofessional Education & Practice, 4, 41–49. https://doi.org/10.1016/j.xjep.2016.05.002
Wilhelmsson, M., Ponzer, S., Dahlgren, L. O., Timpka, T., & Faresjö, T. (2011). Are female students in general and nursing students more ready for teamwork and interprofessional collaboration in healthcare? BMC Medical Education, 11, Article 15. https://doi.org/10.1186/1472-6920-11-15
Wright, A. A., & Katz, I. T. (2018). Beyond burnout — Redesigning care to restore meaning and sanity for physicians. The New England Journal of Medicine, 378(4), 309–311. https://doi.org/10.1056/NEJMp1716845
Zaudke, J. K., Paolo, A., Kleoppel, J., Phillips, C., & Shrader, S. (2016). The impact of an interprofessional practice experience on readiness for interprofessional learning. Family Medicine, 48(5), 371–376.
*Chek Hooi WONG
90 Yishun Central,
Khoo Teck Puat Hospital,
Singapore 768828
9 Lower Kent Ridge Rd, Level 10,
+65 6807 8001
Email: wong.chek.hooi@ktph.com.sg
Submitted: 15 March 2022
Accepted: 23 March 2022
Published online: 5 July, TAPS 2022, 7(3), 63-64
https://doi.org/10.29060/TAPS.2022-7-3/LE2777
P Ravi Shankar
IMU Centre for Education, International Medical University, Malaysia
I read with great interest the article titled ‘Humanism in Asian medical education – A scoping review’ (Zhu et al., 2021). The article provides an overview of the teaching of humanism in medical schools in Asia. Teaching humanistic values is still not common among Asian medical schools and the published literature is predominantly from a few countries.
The Himalayan country of Nepal has also taken initiatives to strengthen the learning of humanistic values by medical students.
Initiatives have been conducted at different institutions, including KIST Medical College, Lalitpur, Nepal, and Patan Academy of Health Sciences (PAHS), Lalitpur, Nepal, among others. I believe that the medical humanities can play an important role in fostering humanistic values among medical students. An overview of the discipline in Nepal was provided in an article published in 2014 (Dhakal et al., 2014). Recently, several initiatives have been undertaken at PAHS, and the undergraduate medical program at the institution has the objective of creating doctors for rural Nepal.
I do agree that there have been problems with the sustainability of these initiatives in Nepal. The language of medical education in Nepal is English, as in many other Asian countries; however, the activities and material used were adapted to the Nepalese context where possible. The scoping review on humanism in Asian medical education could be made more comprehensive by including the initiatives and publications from Nepal, a country where, despite various challenges, initiatives have been undertaken in this important area. These studies fit the core characteristics of the Integrity, Excellence, Compassion & Collaboration, Altruism, Respect & Resilience, Empathy, and Service (IECARES) framework used by the authors.
The immediate and short-term impacts of these initiatives have been published, and the medium-term impact has been studied and is under review for publication. The challenge in measuring the medium- to long-term impact of these initiatives is the possibility that other activities undertaken by students also influence the outcomes and introduce bias. A variety of methods have been used to foster the teaching and learning of humanistic values. Though there are limitations, as mentioned earlier, the addition of these initiatives may add strength and greater representativeness to the scoping review.
Note on Contributor
Dr Shankar was involved in conceptualising, writing, and editing the manuscript.
Funding
No funds, grants, or other support were received.
Declaration of Interest
No conflicts of interest are associated with this paper.
References
Dhakal, A. K., Shankar, P. R., Dhakal, S., Shrestha, D., & Piryani, R. M. (2014). Medical humanities in Nepal: Present scenario. Journal of the Nepal Medical Association, 52(193), 751–754.
Zhu, C. S., Yap, R. K. F., Lim, S. Y. S., Toh, Y. P., & Loh, V. W. K. (2021). Humanism in Asian medical education – A scoping review. The Asia Pacific Scholar, 7(1), 9-20. https://doi.org/10.29060/TAPS.2022-7-1/RA2460
*P Ravi Shankar
International Medical University,
Bukit Jalil, Kuala Lumpur, Malaysia
Email: ravi.dr.shankar@gmail.com
Submitted: 24 December 2021
Accepted: 23 March 2022
Published online: 5 July, TAPS 2022, 7(3), 60-62
https://doi.org/10.29060/TAPS.2022-7-3/PV2727
Ikuo Shimizu1, Shuh Shing Lee2, Ardi Findyartini3, Kiyoshi Shikino4, Yoshikazu Asada5 & Hiroshi Nishigori6
1Center for Medical Education and Clinical Training, Shinshu University Hospital, Matsumoto, Japan; 2Centre for Medical Education, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; 3Department of Medical Education & Medical Education Center-Indonesia Medical Education & Research Institute, Faculty of Medicine Universitas Indonesia; 4Department of General Medicine, Chiba University Hospital, Chiba, Japan; 5Center for Information, Jichi Medical University, Shimotsuke, Japan; 6Center for Medical Education, Nagoya University, Nagoya, Japan
I. INTRODUCTION
After the “To err is human” report in 1999, health care systems became aware of the serious consequences of failures in health care and have sought to reduce them by enhancing patient safety education. Medical educators today consider errors inevitable in clinical practice and seek to learn from these errors to improve the quality of practice and maintain the safety of health care services. This effort at quality improvement and patient safety is now regarded as part of patient safety education. One example is the Morbidity and Mortality conference, a continuous professional development opportunity that sprang from the efforts of learners to improve practice through the examination of medical errors and unfavourable outcomes. Openness to discussing and studying errors, with the realisation that “errors must not be accepted as a person’s fault”, is central to its message.
To err is human, and educators are no exception. Educators plan and implement various educational practices, but they sometimes fail to achieve the expected outcomes. We educators sometimes find that our practices fail to deliver the intended results or have unexpected adverse outcomes, and we consider such outcomes failures. It is therefore crucial for faculty to acknowledge failure and try to make further improvements. In addition to educators’ individual reflections, institutional opportunities to reflect on practice exist as a form of faculty development, which includes initiatives designed to improve the performance of faculty members in teaching, research and administration. However, failures in educational practice are often difficult to recognise and to disclose to colleagues and learners. Admitting and revealing failure is often difficult for clinicians, and it is no different for educational practitioners. Such educators can be called “problem” educators, just as learners who have difficulty improving their competence appropriately can be called “problem” learners (Steinert, 2013). Thus, there is scarce opportunity for educators to recognise and share their failed experiences. Such an attitude of neglect will have a negative impact not only on the quality of educational practices but also on the student-faculty relationship in the long run. It is nothing short of a tragedy in medical education to allow faculty to become “problem” educators.
Therefore, the present article outlines the theoretical background for understanding how to learn from failure, especially the obstacles educators face, and proposes a framework that takes its cues from recent patient safety education.
II. WHY THE TRADITIONAL SAFETY PARADIGM DOES NOT WORK FOR REFLECTION
Reflecting on experience is crucial for all educators because it enhances learning from practice. When they reflect on unsuccessful educational practices, educators recognise and analyse what they actually did, what happened during or after their practices, and how to improve their practices in the future.
However, learning through self-reflection requires learning strategies, motivation, and awareness of failure (metacognition). While faculty development can provide the strategies, without psychological safety it becomes an environment that offers neither motivation nor awareness of failure. Motivation is required to connect learning with real-life experiences. Educators can facilitate effective self-regulation by thinking critically about their practice and engaging in attributional reflection (Ryan & Deci, 2000). In particular, extrinsic motivation does not lead to self-reflection; intrinsic motivation is a necessary condition. Even though faculty development provides extrinsic opportunities, it is difficult for “problem” educators without intrinsic motivation to reflect sufficiently on their failures.
There are also concerns about whether the psychological safety of educators is ensured when they are asked to improve their educational practices. Firstly, it is burdensome for participants to accept negative results about their practices. If the evaluation process does not ensure the psychological safety required for self-directed learning (Edmondson, 2014), it will be difficult for participants to improve their practices. Psychological evidence also shows that people with fewer teaching competencies tend to overestimate their skills, which is another risk that may hinder reflection on educational practice. Secondly, some “problem” educators are not even aware of their failures. This phenomenon does not occur with “problem” learners, especially in undergraduate education: learners often realise they have a problem through some form of summative assessment, whereas educators must engage in reflection themselves. An environment with psychological safety, however, can promote proactive behaviours such as self-reflection (Lin, 2007).
III. USE OF THE SAFETY-II PARADIGM FOR EDUCATORS’ PSYCHOLOGICAL SAFETY
To overcome these obstacles to a faculty development environment suitable for learning from failed educational practices, the authors consider psychological safety and suggest shifting our perspective on failure by drawing on quality improvement strategies. Defining an ideal practice as successful and everything else as failure derives from the traditional safety management paradigm, Safety-I (Hollnagel, 2014). In contrast to this traditional paradigm, a new paradigm has recently been proposed and become prominent. This paradigm (Safety-II) presupposes that there will always be a gap between the results practitioners intend and the results they actually obtain. Deviation from the plan is not itself considered a failure. Instead, we can treat such gaps as adaptations and analyse why they occurred and how they worked. This analysis brings about continuous improvement in a more constructive way.
The Safety-II paradigm can give educators the new insight that an unexpected result of an educational practice can be recognised in a more neutral form rather than as “failure”. This perspective helps ensure psychological safety and makes self-directed learning easier to bring about. The paradigm also offers a new perspective on implementing educational theories or methods in the context of health professions education. Educators should always pay attention to gaps between what we anticipate and what actually happens; it is essential to establish causal relationships by reflecting on such gaps.
We keep two things in mind when reflecting on practices according to the Safety-II paradigm. First, we should describe the outcome of the practice objectively, as an actual result rather than as a failure. This perspective brings to faculty development both the educational results that did not work (i.e., failures) and the unexpectedly good accomplishments. As a result, it helps focus attention on the original outcomes of education and promotes self-reflection. Second, the actual results should be set against the expected results so that the two can be compared at a glance. We can then discuss the causes of the gap between expected and actual results and what should be improved. Adjustments are made to achieve the desired outcome under both expected and unexpected conditions. The Safety-II approach may contribute significantly to the evaluation of practice by considering unexpected outcomes rather than only failures. Analysing educational programs from a Safety-II-based perspective therefore makes it easier to find the adjustments that were actually made and enables educators to perform resiliently, which would not be easy to achieve by simply pointing out deviations from ideal practice based on Safety-I. This perspective will allow educators to become more aware of resilience in their educational practices. Furthermore, as educators discover the gaps between planned and actual results through Safety-II, they will be motivated to compare them, leading to critical analysis and continuous improvement of their educational practices.
IV. CONCLUSION
The Safety-II paradigm has the potential to move us away from simply judging failed practices, towards analysing them from a more constructive perspective and deriving pragmatic improvements. It can thereby help both learners and educators cope better with the complexity of medical education. Furthermore, we can expect outcomes similar to those of the continuous improvement process in patient safety; we believe this suggestion will help make our reflection valid and inspire our professional development. It therefore deserves further attention as a seed for future analytical strategies, given its potential value in the field.
Notes on Contributors
Ikuo Shimizu reviewed literature and took the lead in writing and editing the manuscript.
Shuh Shing Lee contributed to the theoretical ideas for this manuscript.
Ardi Findyartini contributed to the theoretical ideas for this manuscript.
Kiyoshi Shikino contributed to the concept and aided the development of the manuscript.
Yoshikazu Asada contributed to the concept and aided the development of the manuscript.
Hiroshi Nishigori advised on and provided feedback on the manuscript, and aided its development.
All authors discussed and contributed to the final manuscript.
Acknowledgement
The authors wish to thank Professor Takuya Saiki at Medical Education Development Center, Gifu University, Japan, for providing us with an opportunity to conduct a workshop regarding the Safety-II-based approach on May 24, 2020.
We would also like to thank Editage (www.editage.com) for English language editing.
Funding
This work was supported by JSPS KAKENHI under Grant #21H03161. This funding source had no role in the design of this study and will not have any role during its execution, analyses, interpretation of the data, or decision to submit results.
Declaration of Interest
The authors have no conflict of interest to declare.
References
Edmondson, A. C. (2014). The competitive imperative of learning. IEEE Engineering Management Review, 42(3), 110-118. https://doi.org/10.1109/emr.2014.6966928
Hollnagel, E. (2014). Safety-I and safety-II: The past and future of safety management. Ashgate. https://doi.org/10.1201/9781315607511
Lin, H. F. (2007). Effects of extrinsic and intrinsic motivation on employee knowledge sharing intentions. Journal of Information Science, 33(2), 135-149. https://doi.org/10.1177/0165551506068174
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78. https://doi.org/10.1037/0003-066X.55.1.68
Steinert, Y. (2013). The “problem” learner: Whose problem is it? AMEE Guide No. 76. Medical Teacher, 35(4), e1035–e1045. https://doi.org/10.3109/0142159X.2013.774082
*Ikuo Shimizu
Center for Medical Education and Clinical Training,
Shinshu University Hospital,
3-1-1 Asahi, Matsumoto,
Nagano, 390-8621, Japan
Email: ishimizu@shinshu-u.ac.jp
Submitted: 1 February 2022
Accepted: 16 February 2022
Published online: 5 July, TAPS 2022, 7(3), 57-59
https://doi.org/10.29060/TAPS.2022-7-3/PV2737
Garry Soloan1,2 & Muhammad Athallah Arsyaf1,2
1Medical Education Center, Indonesian Medical Education & Research Institute (IMERI) Faculty of Medicine Universitas Indonesia, Jakarta, Indonesia; 2Undergraduate Program in Medicine, Faculty of Medicine Universitas Indonesia, Jakarta, Indonesia
From exams based on short-answer and multiple-choice questions with definite answer keys, to the project-based, independent, and problem-oriented studies offered at university, the rapid dive from pedagogical learning into the world of andragogy is truly one of the highlights of a scholar’s long journey. A six-year habit of having information spoon-fed into my mind, meticulously studying every Cambridge GCE A-Level marking scheme and past question paper available on the internet, had led me to believe that there always had to be a correct, if not perfect, way of finishing an assignment.
Feeling assured and confident about how to approach my study of medicine, I was surprised to discover that the medical school landscape was far different from what I was used to. Yet, not being able to define “perfect” in medical school assignments, I found myself approaching every essay as if it were a work of art. Hours, even days, would be spent writing, reading, and re-writing a single essay assignment. I laboured through every paragraph, often spending more than 30 minutes to finish one. I regularly consulted a thesaurus, ensuring no word was repeated within a paragraph, always searching for the perfect word to convey my thoughts. The opinions of others were sought multiple times: Should I mention this? Should I use this word?
The brand-new learning method of problem-based discussions was exciting, yet no less frustrating at times. In completing our assignments for each discussion session, we would encounter numerous journal articles that, more often than not, contradicted each other. Significant time would be spent creating a comprehensive literature review and crafting an interactive and thorough presentation, yet at the end of the week I often doubted that I could answer the simple question, “What would be the most appropriate treatment for the patient in the trigger case?”
Over time, I came to realise that such habits and behaviour would not be sustainable in the long run, as I continued to search for the “right way” to study in medical school. By the time this article was written, 24 months had passed, and although I had an overall satisfactory GPA, each discussion session and essay assignment remained invariably the same: as challenging as ever. Reflecting on my prior learning habits, it is fairly clear that my actions were the result of some degree of perfectionism. The new medical education landscape has shaken these habits, and I am still adapting, trying to channel that drive into something good. Yet this perfectionist way of thinking is thought to affect more medical students than is reported, and it comes in varying degrees and unpredictable patterns. It might be argued that perfectionism is not necessarily a good trait to have in the study of medicine, but the question is not as black and white as it may seem.
Perfectionism can be described simply as holding high personal standards with very specific and inflexible goals. Reactions to these high standards are explained through two different concepts: adaptive and maladaptive perfectionism. The difference between the two lies along a thin line. Adaptive perfectionism refers to the standard one places on one’s own performance as a driving force to reach a certain goal; such perfectionists, although they push themselves hard, still set realistic standards, and their perfectionism relates only to their strivings. Most importantly, failing to achieve a goal does not result in self-deprecation. Maladaptive perfectionism, on the other hand, involves the same high standards, but these individuals are often intensely self-critical over small failures, constantly worried about making mistakes, and prone to undermining their own successes because of low self-esteem. The overwhelming concern with wanting to do their best creates a barrier to enjoying a happy life and compromises their state of mind. This type of perfectionism is the one most often associated with mental disorders such as anxiety and depression, which, sadly, are commonly found among medical students (Seeliger & Harendza, 2017).
Among practising physicians, perfectionism often lies in a grey area. In daily practice, holding high standards of care without harbouring unrealistic expectations is sometimes difficult, since physicians’ responsibilities are placed on the highest pedestal to begin with. As mentioned earlier, adaptive perfectionism presents itself as something good, simply an ambition to always do better without fear of failure. The learning process in medical school is shaped in a way that promotes this type of perfectionism. As noted in a review of medical education by Mylopoulos et al. (2018), most of the studies reviewed consist of direct assessments of students’ ability to recall factual information. These performance-focused assessments reinforce perfectionism in students’ lives, since their performance is clearly measured in numbers, which for a perfectionist is the perfect judgement of their standards. The review goes on to emphasise what the important components of medical education should be, including understanding rather than remembering, allowing challenges and failure to occur as learning experiences, and supporting the variation in individual approaches that comes with those challenges and failures (Mylopoulos et al., 2018).
So where does perfectionism stand within these ideals of medical education? Medical education is slowly but steadily moving towards letting medical students dive into their studies first-hand and approach them individually, allowing mistakes and personal insights to shape their clinical judgement before appropriate feedback puts them back on track should they stray too far. This type of learning creates opportunities for students to make errors, which are hoped to become valuable learning opportunities. Those with adaptive perfectionism can be assumed to simply adapt to the situation and strive to do well in this new environment, as their goal is to achieve good-quality outcomes; this assumption is easy to make because adaptive perfectionism is rarely accompanied by a fear of messing up. Those with maladaptive perfectionism, however, are more likely to succumb under the pressure of starting a learning experience on their own without a clear standard for doing it perfectly. Because their actions are driven largely by their concerns and feelings, maladaptive perfectionism can become a mediator of mental health disorders and of an overall decline in quality of life (Rutter-Eley et al., 2020). Not knowing what to expect is often a significant cause of anxiety in students with maladaptive perfectionism, and it can lead to further mental instability if their performance falls short of their own standards (Bußenius & Harendza, 2019).
These observations imply that if medical education continues to push towards a new, significantly more independent approach to teaching and learning, those with maladaptive perfectionism may simply not survive. This argument touches on another issue, medical school admission, where several studies have suggested ways of assessing maladaptive perfectionism and the possible benefits of selecting it out during the admission process (Gärtner et al., 2020; Seeliger & Harendza, 2017). Unfortunately, adaptive and maladaptive perfectionism sometimes live alongside one another, producing a combined effect that is somewhat unpredictable. For this reason, the authors believe that perfectionism cannot simply be written off as a negative trait: mental health concerns may come and go, and the same person may use perfectionism as a driving force one day and have it turned against them in an episode of low self-esteem the next. Marking it as an eliminating characteristic in medical school selection would not be fair to those with a more stable form of perfectionism, since we can never know for sure which perfectionist is overly driven and which is overly concerned.
It is true that maladaptive perfectionism can pose serious challenges to a medical student’s learning. This is why such traits, along with other personality traits that disrupt an individual’s learning environment, call for adequate support from the medical school. Maladaptive perfectionism is widespread among medical students, and it can be tackled through reassurance from their community, teachers, and friends, as well as by creating a learning environment that limits fear-based achievement (Mylopoulos et al., 2018). The thin line between maladaptive and adaptive perfectionism makes it possible to steer those with the more negative form towards the more positive one. All in all, we believe that perfectionism is not a trait to be shunned in medical education, but one which medical schools should recognise and support adequately, nurturing maladaptive perfectionism into adaptive perfectionism so as to develop physicians who can consistently set the bar high without compromising their own well-being.
Notes on Contributors
Both authors are third-year medical students from the Faculty of Medicine, Universitas Indonesia, currently undergoing a research internship at the Medical Education Center, Indonesia Medical Education & Research Institute (IMERI), Faculty of Medicine, Universitas Indonesia.
Garry Soloan designed and led the study, contributed to argument development and conceptual development, and developed and finalised the manuscript. Muhammad Athallah Arsyaf contributed to argument development, conceptual development, and manuscript development.
Acknowledgement
This paper was written during the authors’ internship program at the Medical Education Center, Indonesian Medical Education & Research Institute (IMERI), Faculty of Medicine, Universitas Indonesia. We would like to thank Ardi Findyartini, MD, PhD and Nadia Greviana, DDS, MMedEd, our mentors from the Medical Education Center, IMERI, FMUI, who provided advice, feedback and mentorship during the writing of this paper.
Declaration of Interest
The authors declare no competing interests.
Funding
No funding was received for this personal view article.
References
Bußenius, L., & Harendza, S. (2019). The relationship between perfectionism and symptoms of depression in medical school applicants. BMC Medical Education, 19(1), 1-8. https://doi.org/10.1186/s12909-019-1823-4
Gärtner, J., Bußenius, L., Prediger, S., Vogel, D., & Harendza, S. (2020). Need for cognitive closure, tolerance for ambiguity, and perfectionism in medical school applicants. BMC Medical Education, 20(1), 1-8.
Mylopoulos, M., Kulasegaram, K., & Woods, N. N. (2018). Developing the experts we need: Fostering adaptive expertise through education. Journal of Evaluation in Clinical Practice, 24(3), 674-677. https://doi.org/10.1111/jep.12905
Rutter-Eley, E. L., James, M. K., & Jenkins, P. E. (2020). Eating disorders, perfectionism, and quality of life: Maladaptive perfectionism as a mediator between symptoms of disordered eating and quality of life. The Journal of Nervous and Mental Disease, 208(10), 771-776.
Seeliger, H., & Harendza, S. (2017). Is perfect good? – Dimensions of perfectionism in newly admitted medical students. BMC Medical Education, 17(1), 1-7. https://doi.org/10.1186/s12909-017-1034-9
*Garry Soloan
Jl. Salemba Raya No.6, RW.5,
Kenari, Kec. Senen
Kota Jakarta Pusat
Daerah Khusus Ibukota
Jakarta 10430
Tel: +628121162323
Email: garry.soloan@hotmail.com
Submitted: 8 January 2022
Accepted: 26 April 2022
Published online: 5 July, TAPS 2022, 7(3), 51-56
https://doi.org/10.29060/TAPS.2022-7-3/SC2738
Yiwen Koh1, Chengjie Lee2, Mui Teng Chua1,3, Beatrice Soke Mun Phoon4, Nicole Mun Teng Cheung1 & Gene Wai Han Chan1,3
1Emergency Medicine Department, National University Hospital, National University Health System, Singapore; 2Department of Emergency Medicine, Sengkang General Hospital, Singapore; 3Department of Surgery, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; 4Department of Nursing, National University Hospital, National University Health System, Singapore
Abstract
Introduction: During the first wave of the COVID-19 pandemic in Singapore, clinical attachments for medical and nursing students were temporarily suspended and replaced with online learning. It is unclear how the lack of clinical exposure and the switch to online learning have affected them. This study aims to explore their perceptions of online learning and their preparedness for COVID-19 as clinical postings resumed.
Methods: A cross-sectional study was conducted among undergraduate and graduate medical and nursing students from three local universities, using an online self-administered survey evaluating the following: (1) demographics; (2) attitudes towards online learning; (3) anxieties; (4) coping strategies; (5) perceived pandemic preparedness; and (6) knowledge about COVID-19.
Results: A total of 316 responses were analysed; 81% agreed with the transition to online learning, most citing the need to complete academic requirements and the perceived safety of studying at home. More nursing students than medical students (75.2% vs 67.5%, p=0.019) perceived that they had received sufficient infection control training. Both groups had good knowledge of COVID-19 and healthy coping mechanisms.
Conclusions: This study demonstrated that medical and nursing students were generally receptive to this unprecedented shift to online learning. They appear pandemic ready and can be trained to play an active part in future outbreaks.
Keywords: Medical Students, Nursing Students, COVID-19, Pandemic, Online Learning, Survey
I. INTRODUCTION
During the first wave of the COVID-19 pandemic in Singapore, the government implemented safe distancing and movement restriction orders in a bid to flatten the epidemiological curve. These measures from 7th April to 1st June 2020 were coined the “circuit breaker” period. Clinical attachments for medical and nursing students were suspended to lower the risk of COVID-19 transmission and to focus the hospitals’ efforts towards dealing with the outbreak.
Before the pandemic, students were embedded within clinical teams, where they received bedside teaching, practised communication with patients and acquired practical skills. Students perceive online learning during the pandemic to be less effective for acquiring clinical skills due to the absence of patient interaction and real-world practice (Wilcha, 2020). As the pandemic situation stabilised in Singapore, healthcare students gradually returned to the hospitals from May 2020. In one study, students were concerned about returning to clinical settings as they perceived themselves to be untrained and worried about the risks they might introduce to patients (Hernández-Martínez et al., 2021). This may arise from a lack of pandemic preparedness, which is not commonly incorporated into medical and nursing school curricula.
To date, there are no studies evaluating the perceptions of both local medical and nursing students towards the disruption of their studies by the pandemic, and whether these perceptions would be similar to those cited in the aforementioned study. Specifically, we aim to describe the perceptions of online learning and pandemic preparedness of medical and nursing students in Singapore during the “circuit breaker” period. Understanding this will help us create more effective learning strategies and reinforce their preparation for future pandemics.
II. METHODS
A. Study Design and Setting
This was a cross-sectional survey involving medical and nursing students from Yong Loo Lin School of Medicine (YLLSOM) and Alice Lee Centre for Nursing Studies (ALCNS), National University of Singapore (NUS); Duke-NUS Medical School (Duke-NUS); and Lee Kong Chian School of Medicine (LKCSOM), Nanyang Technological University (NTU). Students doing clinical attachments in healthcare institutions during the “circuit breaker” period were sent a link to a self-administered, anonymous online questionnaire. Participation was voluntary. Ethics approval for waiver of written informed consent was obtained from NUS Institutional Review Board (Reference number: NUS-IRB-2020-129).
B. Study Instrument
The questionnaire comprised six parts with a total of 74 questions: (1) socio-demographic characteristics; (2) attitudes towards halting clinical attachments and shift to online learning; (3) anxieties towards the pandemic; (4) coping strategies; (5) perceived pandemic preparedness; and (6) specific knowledge about COVID-19. Responses were collected on Likert scales and the questionnaire was adapted from previous studies with permission. Minor modifications were made to standardise the terms used to refer to COVID-19 and online learning and to ensure understandability in Singapore’s context, while preserving the original intent of the source studies. Content validity of the questionnaire was examined by three board-certified emergency physicians involved in undergraduate and postgraduate medical education.
C. Survey Dissemination
The survey was disseminated to eligible students via email by each school’s administrative staff, who were not part of the study team. Four reminder emails were sent from September to October 2020.
D. Statistical Analysis
Results were analysed using Stata 14 (StataCorp LP, College Station, TX). Categorical variables were reported as percentages and analysed using the χ2 test or Fisher’s exact test, as indicated. A p-value of < 0.05 was considered statistically significant.
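The analysis itself was performed in Stata, as stated above. Purely as an illustration, the sketch below (Python with SciPy, not part of the original study) shows how a 2×2 categorical comparison of this kind could be tested with a chi-square or Fisher’s exact test; the counts are hypothetical placeholders, not the survey data.

```python
# Hypothetical sketch only: testing a 2x2 contingency table (e.g., nursing vs
# medical students who did / did not agree they received sufficient infection
# control training). Counts below are invented placeholders, not study data.
from scipy.stats import chi2_contingency, fisher_exact

table = [[85, 28],    # hypothetical: nursing students (agreed, did not agree)
         [137, 66]]   # hypothetical: medical students (agreed, did not agree)

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)   # preferred when expected cell counts are small

print(f"Chi-square: p = {p_chi2:.3f}; Fisher's exact: p = {p_fisher:.3f}")
# As in the paper, p < 0.05 would be reported as statistically significant.
```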
III. RESULTS
A total of 316 students were recruited between September and December 2020; 64.2% (203/316) were medical students, most of whom were from YLLSOM (147/203, 72.4%). The majority were between 21 and 29 years of age (250/316, 79.1%).
Table 1 details the respondents’ attitudes towards clinical attachment and their perceived pandemic preparedness. Overall, 57% (180/316) of respondents agreed or strongly agreed with stopping clinical attachments. 81% (256/316) agreed with the shift to online learning. The two main reasons for preferring online learning were the need to finish academic requirements and the perceived safety of studying at home. Of those who disagreed, most preferred learning in the clinical areas and felt there was a lack of personal interaction with tutors and classmates via online learning.
With regard to pandemic preparedness, more nursing students than medical students agreed or strongly agreed that they had received sufficient infection control training in school or the hospitals they were posted to (75.2% vs 67.5%, p=0.019) and that they had someone to turn to for advice on the use of personal protective equipment if uncertain (p<0.001). They were also more likely to have received influenza vaccination (p<0.001) or to have been recommended to do so (p=0.020).
More than 70% of students used healthy coping strategies such as participating in relaxation activities and interacting with family and friends for support. More than 90% were aware of the basic facts about COVID-19, such as its origin, symptoms, transmission, and prevention methods. Supplementary tables of the complete survey results have been made openly available online at https://doi.org/10.6084/m9.figshare.19646340.




Table 1. Attitudes towards clinical attachments during Singapore’s circuit breaker period (7 April to 1 June 2020) and their perceived pandemic preparedness
*Fisher’s exact test
Cronbach’s alpha for 9 items of pandemic preparedness = .60
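For readers unfamiliar with the reliability coefficient reported above, the following is a minimal sketch (in Python, not the authors’ analysis) of the standard Cronbach’s alpha formula, alpha = k/(k−1) × (1 − sum of item variances / variance of total scores); the item scores are randomly generated placeholders, so the sketch does not reproduce the reported value of .60.

```python
# Minimal sketch of Cronbach's alpha for a k-item scale; the data are invented
# placeholders, not the study's 9 pandemic-preparedness items.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: a (respondents x items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
fake_scores = rng.integers(1, 6, size=(316, 9))  # hypothetical 5-point responses
print(round(cronbach_alpha(fake_scores), 2))
```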
IV. DISCUSSION
A. Paradigm Shift to Online Learning
Our study found that the majority were agreeable to transitioning to online learning during the pandemic. Unsurprisingly, given Singapore’s digital connectivity, students in this study did not lack a reliable internet connection or access to technological devices, which are reasons students in other countries found virtual teaching challenging (Wilcha, 2020). Among those who disagreed with the transition to online learning, more than 90% indicated they preferred learning in the clinical areas. They were also concerned about the lack of personal interaction with tutors and classmates. Similar concerns were reported by medical and nursing students in other studies, who felt that, in the absence of direct patient contact, online teaching could not adequately replace clinical teaching and the learning of practical clinical skills. Lack of physical interaction with tutors and classmates can also reduce student engagement, which may lead to less effective learning (Wilcha, 2020).
To address the perceived weaknesses of online learning, educators worldwide have increasingly adopted novel teaching methods. These include virtual simulations and ward rounds where students can interact with real patients, and simulated set-ups at home for clinical skills practice. Several studies cited positive feedback on these teaching methods in terms of gains in medical knowledge, clinical reasoning, and communication skills (Wilcha, 2020). Our study focused on students’ perceptions of online learning in the initial phase of the pandemic. As the pandemic persists and with more experience of these innovative ways of online engagement, it is unclear whether students would view online learning differently now.
It is also uncertain whether online learning is less effective than clinical placements for acquiring knowledge. Weston and Zauche (2021) found no difference in standardised assessment scores between nursing students who completed an in-person paediatric clinical practice and those who used high-fidelity virtual simulation software with pre-briefing and debriefing components. More research is needed to evaluate the effectiveness of technology-assisted education in imparting clinical competency compared with traditional bedside teaching.
B. Pandemic Preparedness
In this study, we found that most of the medical and nursing students felt they were prepared for the pandemic. However, a greater proportion of nursing students perceived that they had received sufficient infection control training or had someone from whom to seek advice on the use of personal protective equipment. More had also received the influenza vaccination or been recommended to do so. A previous study found that nursing students were superior to medical students in hand hygiene performance (Cambil-Martin et al., 2020). This was attributed to curriculum differences and less practical training in the healthcare setting for medical students. Our results may reflect similar curriculum disparities, suggesting a need to narrow this gap in pandemic preparation education.
A systematic review by Martin et al. (2020) found that medical students were keen to assist in responses to pandemics and other global health emergencies, in both clinical and non-clinical roles, citing social responsibility and an obligation to help. Having adequate training and knowledge were among the factors encouraging their participation. In this study, we did not directly examine whether students would be willing to serve in a pandemic should the need arise. They did, however, demonstrate satisfactory basic knowledge about COVID-19 and healthy coping strategies. This suggests they may be pandemic-ready and may be recruited to play a more active part in future outbreaks.
C. Limitations
Our study has its limitations. First, the voluntary survey results are subject to non-response bias. However, the demographics of the responders were similar to those of the entire student body and should be representative of the cohort. Second, a cross-sectional survey does not allow the tracking of changes in responses over time. Third, the results may not be generalisable to other countries at varying stages of socio-economic development. Lastly, the results cannot capture responses outside the pre-set questionnaire. For this, qualitative studies would be required to further explore the impact of COVID-19 on students’ perceptions of online learning and pandemic preparedness.
V. CONCLUSION
The COVID-19 pandemic has disrupted the education of medical and nursing students in Singapore, causing an unprecedented shift from classroom teaching and bedside clinical attachments to online learning. Although this study demonstrated that medical and nursing students were generally receptive towards this paradigm shift, there is a need to continue implementing and refining online learning methods, especially in teaching clinical skills that are traditionally acquired at the bedside. Additionally, our study found that local medical and nursing students may be pandemic ready and can be trained to take an active part in future outbreaks.
Notes on Contributors
Yiwen Koh reviewed the literature, designed the study, analysed the data and wrote the manuscript. Chengjie Lee performed data collection, analysed the data and critically revised the manuscript. Mui Teng Chua advised on statistical analysis methods, analysed the data and critically revised the manuscript. Beatrice Soke Mun Phoon performed data collection and critically revised the manuscript. Nicole Mun Teng Cheung designed the study instrument and critically revised the manuscript. Gene Wai Han Chan reviewed the literature, conceptualised the overall design of the study and critically revised the manuscript. All authors have read and approved the final manuscript.
Ethical Approval
Ethics approval for waiver of written informed consent was obtained from the NUS Institutional Review Board (Reference number: NUS-IRB-2020-129).
Data Availability
The ethical approval by NUS Institutional Review Board was based on the conditions that only study team members will have access to the raw data that will be stored in a password-protected file. A copy of the survey questions and the additional tables of survey results are openly available at https://doi.org/10.6084/m9.figshare.19646340
Acknowledgement
The authors would like to thank the administrative staff of the Yong Loo Lin School of Medicine, Duke-NUS Medical School, Lee Kong Chian School of Medicine and Alice Lee Centre for Nursing Studies for their kind assistance with this study.
Funding
No funding sources were used for this research study.
Declaration of Interest
The authors have no conflicts of interest to declare.
References
Cambil-Martin, J., Fernandez-Prada, M., Gonzalez-Cabrera, J., Rodriguez-Lopez, C., Almaraz-Gomez, A., Lana-Perez, A., & Bueno-Cavanillas, A. (2020). Comparison of knowledge, attitudes and hand hygiene behavioral intention in medical and nursing students. Journal of Preventive Medicine and Hygiene, 61(1), E9–E14. https://doi.org/10.15167/2421-4248/jpmh2020.61.1.741
Hernández-Martínez, A., Rodríguez-Almagro, J., Martínez-Arce, A., Romero-Blanco, C., García-Iglesias, J. J., & Gómez-Salgado, J. (2021). Nursing students’ experience and training in healthcare aid during the COVID-19 pandemic in Spain. Journal of Clinical Nursing. https://doi.org/10.1111/jocn.15706
Martin, A., Blom, I. M., Whyatt, G., Shaunak, R., Viva, M., & Banerjee, L. (2020). A rapid systematic review exploring the involvement of medical students in pandemics and other global health emergencies. Disaster Medicine and Public Health Preparedness, 1–13. https://doi.org/10.1017/dmp.2020.315
Weston, J., & Zauche, L. H. (2021). Comparison of virtual simulation to clinical practice for prelicensure nursing students in pediatrics. Nurse Educator, 46(5), E95–E98. https://doi.org/10.1097/NNE.0000000000000946
Wilcha, R. J. (2020). Effectiveness of virtual medical teaching during the COVID-19 crisis: systematic review. JMIR Medical Education, 6(2), e20963. https://doi.org/10.2196/20963
*Chengjie Lee
110 Sengkang East Way,
Singapore 544886
Email: lee.chengjie@singhealth.com.sg
Submitted: 7 June 2021
Accepted: 20 January 2022
Published online: 5 July, TAPS 2022, 7(3), 46-50
https://doi.org/10.29060/TAPS.2022-7-3/SC2715
Pilane Liyanage Ariyananda, Chin Jia Hui, Reyhan Karthikeyan Raman, Aishath Lyn Athif, Tan Yuan Yong, Muhammad Hafiz
International Medical University, Malaysia
Abstract
Introduction: We aimed to find out how medical students coped with online learning at home during the COVID-19 pandemic ‘lockdown’.
Methods: A cross-sectional study was carried out from July to December 2020 using an online SurveyMonkey® questionnaire with four sections (biodata, learning environment, study habits, and open comments), sent to 1359 students of the International Medical University, Malaysia. Responses of strongly disagree, somewhat disagree, neither agree nor disagree, somewhat agree and strongly agree to the closed-ended questions on the learning environment and study habits were scored on a 5-point Likert scale. Percentages of responses were obtained for the closed-ended questions.
Results: There were 323 (23.8%) responses, comprising 207 (64%) students from the preclinical semesters 1 – 5 and 116 (36%) from the clinical semesters 6 – 10. Of the respondents, more than 90% had the necessary equipment, 75% had their own personal rooms to study in, and 60% had satisfactory internet connections. Several demotivating factors (especially monotony in studying) and factors that disturbed their studies (especially the tendency to watch television) were also reported.
Conclusion: Although more than 90% of those who responded had the necessary equipment for online learning, about 40% had inadequate facilities for online learning at home and only 75% had personal rooms to study. In addition, there were factors that disturbed and demotivated their online studies.
Keywords: Online Learning, Self-directed Learning, Self-regulated Learning, Learning Environment, Malaysian Medical Students
I. INTRODUCTION
In response to the COVID-19 pandemic, the government of Malaysia imposed a movement control order, referred to as a lockdown, on 18 March 2020. The International Medical University (IMU), a private medical university in Malaysia, had been relatively resourceful with respect to e-learning even before the lockdown, as its e-learning portal included Moodle®, an online Learning Management System (LMS). Like most educational institutions, the IMU had to shift teaching and learning from a face-to-face mode to an online mode within a short period of time, using Microsoft Teams® for most sessions during the lockdown.
The objectives of our study were: to describe the learning environment and study habits of undergraduate medical students while attending online learning sessions during the lockdown; and to determine whether undergraduate medical students used the online resources to practise clinical skills (such as communication skills and physical examination skills) and to develop clinical reasoning.
II. METHODS
A literature search was done in PubMed and Google Scholar using the search terms: online learning, self-directed learning, self-regulated learning, and learning environment. Study setting and sample selection: Our study population comprised the undergraduate medical students of the IMU. The sample size was calculated to be 293, using the formula provided by Fluid Surveys (2020), for a population size of 1359, a confidence level of 95% and a margin of error of 5%. A cross-sectional study was carried out using an online SurveyMonkey® questionnaire from July to December 2020. As online surveys are well known to have high non-response rates, the questionnaire was sent to all undergraduate medical students of the IMU during the lockdown. Data collection and analysis: Informed written consent was obtained from all participants. The questionnaire had four sections: biodata; learning environment; study habits; and open comments. There was a total of 12 questions, with questions 4, 10 and 11 being closed-ended and having 4, 5 and 14 subsidiary questions, respectively. Responses to the closed-ended questions were scored on a 5-point Likert scale: strongly disagree; somewhat disagree; neither agree nor disagree; somewhat agree; strongly agree. Percentages of responses were calculated for the closed-ended questions. Data were analysed using SPSS version 26.0 (IBM Corporation), summarised, and presented as descriptive statistics.
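As a rough check on the reported sample size, the sketch below applies the standard finite-population sample-size formula. This is an assumption on our part: the Fluid Surveys calculator used by the authors may make slightly different assumptions, so the sketch yields roughly 300 rather than exactly 293.

```python
# Minimal sketch, assuming the common finite-population-corrected sample size
# formula; the authors' Fluid Surveys calculator reported 293 for these inputs.
import math

def sample_size(population: int, z: float = 1.96,
                margin_of_error: float = 0.05, p: float = 0.5) -> int:
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2   # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)                 # finite-population correction
    return math.ceil(n)

print(sample_size(1359))  # ~300 with a 95% confidence level and 5% margin of error
```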
III. RESULTS
Data that support the study are openly available in Figshare at https://doi.org/10.6084/m9.figshare.16909384 (Ariyananda et al., 2021). A total of 323 students (23.7%) responded, comprising 207 (64%) students from the preclinical semesters 1 – 5 and 116 (36%) from the clinical semesters 6 – 10. Seventy-five percent were in their own homes and the remainder were in rented accommodation close to the university. The data mentioned below are summarised in Table 1. More than 98% had either a laptop or a tablet as well as a smartphone. Ninety-three percent had Internet and WiFi connections, but the connection was stable for only 59.4%, and only 64.7% had an uninterrupted power supply. The locations of their study areas were: personal room, 75%; common living room, 15.8%; twin shared room, 6.5%; varying locations, 2.7%. The following demotivating factors were reported: monotony in studying (70.6%); lack of access to real patients (56.3%); lack of support from peers and mentors (50.5%); and inadequacy of e-learning resources (25.7%). In addition, 85.7% reported a variety of other demotivating factors. Distracting factors were watching television (83.6%); sleeping (55.4%); distractions from other members of the family (40.2%); and house chores (40.2%). For demotivating factors and distractions, students could offer more than one response. The ability to obtain feedback, learn clinical skills, learn clinical reasoning and prepare for assessments was rated as insufficient (scored as strongly disagree, somewhat disagree or neither agree nor disagree) by 55.1%, 80.5%, 57.2% and 56.6% of respondents, respectively. Those who strongly agreed, somewhat agreed or neither agreed nor disagreed that the following issues impaired their study performance were: inability to physically access educational resources (62.8%) and deterioration of self-discipline (74.3%).
To determine whether perceptions of the adequacy of online resources differed between groups, an independent-samples t test was used to compare the mean scores on perceived adequacy of the different online resources between the 63 (19.5%) students who answered ‘yes’ (strongly agree or somewhat agree) to being able to learn clinical skills online and the 260 (80.5%) who answered ‘no’ (strongly disagree, somewhat disagree or neither agree nor disagree). A similar comparison was made for learning clinical reasoning online, between the 138 (42.7%) students who answered ‘yes’ and the 185 (57.3%) who answered ‘no’. Both comparisons yielded highly significant p values.
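Purely for illustration, the sketch below shows how such an independent-samples comparison of mean perception scores could be run; the scores are invented placeholders and a Welch (unequal-variance) t test is assumed, which may differ from the authors’ exact SPSS procedure.

```python
# Hypothetical sketch: comparing mean adequacy-perception scores between the
# 63 students answering "yes" and the 260 answering "no". Scores are invented.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
yes_group = rng.integers(3, 6, size=63)    # placeholder Likert scores, "yes" group
no_group = rng.integers(1, 4, size=260)    # placeholder Likert scores, "no" group

t_stat, p_value = ttest_ind(yes_group, no_group, equal_var=False)  # Welch's t test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```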
Statement | Strongly Disagree n (%) | Somewhat Disagree n (%) | Neither Agree nor Disagree n (%) | Somewhat Agree n (%) | Strongly Agree n (%)
There was adequate lighting for me to study | 5 (1.5) | 15 (4.6) | 9 (2.8) | 81 (25.1) | 213 (65.9)
I had adequate workspace to study | 8 (2.5) | 22 (6.8) | 10 (3.1) | 86 (26.6) | 197 (61)
There were no external distractions around my study | 48 (14.9) | 95 (29.4) | 53 (16.4) | 66 (20.4) | 61 (18.9)
Comfort factor (prepared meals and clean laundry) helped to make a more productive studying environment | 22 (6.8) | 19 (5.9) | 37 (11.5) | 77 (23.8) | 168 (52)
The inability to access resources (textbooks, quiet study environment etc.) from a physical library affected the quality of my studies. | 59 (18.3) | 61 (18.9) | 70 (21.7) | 88 (27.2) | 45 (13.9)
I required supervision from lecturers to effectively study. | 84 (26) | 86 (26.6) | 77 (23.8) | 49 (15.2) | 27 (8.4)
I struggled with self-discipline to concentrate fully on my studies while at home. | 33 (10.2) | 50 (15.5) | 39 (12.1) | 97 (30) | 104 (32.2)
I prefer studying in groups rather than in isolation. | 68 (21.1) | 81 (25.1) | 75 (23.2) | 49 (15.2) | 50 (15.5)
I was able to manage my time better during the lockdown for my studies. | 54 (16.7) | 64 (19.8) | 75 (23.2) | 93 (28.8) | 37 (11.5)
I am confident to use online resources for my studies. | 0 (0.0) | 19 (5.9) | 51 (15.8) | 133 (40.9) | 120 (37.2)
IMU e-learning resources were adequate to facilitate my studies. | 17 (5.3) | 37 (11.5) | 88 (27.2) | 131 (40.6) | 50 (15.5)
I was able to navigate my way through IMU e-learning to get the materials required for my studies. | 6 (1.9) | 29 (9) | 60 (18.6) | 143 (44.3) | 85 (26.3)
I found online teaching sessions helpful to me to achieve the learning outcomes. | 20 (6.2) | 44 (13.7) | 89 (27.6) | 109 (33.7) | 61 (18.6)
Scheduled online sessions helped me organize my time for my studies. | 27 (8.4) | 43 (13.3) | 67 (20.7) | 108 (33.7) | 78 (23.8)
Scheduled online sessions helped me motivate myself to do my own self-study. | 32 (9.9) | 48 (14.9) | 75 (23.2) | 99 (30.7) | 69 (21.4)
I was able to participate in online discussions with ease. | 19 (5.9) | 43 (13.3) | 76 (23.5) | 123 (38.1) | 62 (19.2)
I was able to receive relevant feedback from my mentors on my performance through online sessions. | 25 (7.7) | 63 (19.5) | 90 (27.9) | 84 (26) | 61 (18.9)
I was able to learn clinical skills (previously through CSSC sessions / Clinical Postings) through online sessions. | 122 (37.8) | 93 (28.8) | 45 (13.9) | 48 (14.9) | 15 (4.6)
I was able to apply clinical reasoning in cases discussed through online sessions. | 32 (9.9) | 58 (17.6) | 94 (29.7) | 110 (34.1) | 29 (8.7)
I was able to prepare well for assessments through online sessions. | 31 (9.6) | 66 (20.4) | 86 (26.6) | 101 (31.3) | 39 (12.1)
I had stable Internet connection for online sessions. | 30 (9.3) | 44 (13.6) | 57 (17.6) | 108 (33.4) | 84 (26)
I did not experience any power outages which interrupted online sessions. | 19 (5.9) | 61 (18.9) | 34 (10.5) | 81 (25.1) | 128 (39.6)
Table 1. Information about the online resources and learning environments.
IV. DISCUSSION
Although more than 90% of those who responded had the necessary equipment, about 40% had inadequate facilities for online learning at home and only 75% had personal rooms in which to study. This is a substantial minority of students who are not equipped to carry out online learning effectively, and it is a matter of concern. Areas that need urgent attention to improve online learning for the roughly 40% who lack facilities are: provision of a reliable power supply; fortification of web-based infrastructure and services (expansion of internet bandwidth and WiFi facilities, and subsidised internet access); and subsidised hardware. It is known that use of the internet by medical students has not translated into improved online learning behaviour (Venkatesh et al., 2017). Previous studies suggest that self-study can be either efficient or inefficient depending on how learners behave (Evans et al., 2020).
The majority of students strongly agreed or somewhat agreed that environmental factors and comforts were adequate, such as illumination (91%), workspace (96.6%), and prepared meals and clean laundry (75.8%). Studies have shown that temperature, lighting, and noise have significant direct effects on university students’ academic performance (Realyvásquez-Vargas et al., 2020).
Furthermore, there were factors that disturbed and demotivated their online studies, such as monotony in studying, lack of access to real patients, lack of support from peers and mentors, and inadequacy of e-learning resources. Monotony when studying alone may be overcome by getting students to interact through peer online discussion groups and by providing gamified or interactive learning material online. Gaps due to lack of access to real patients may be reduced by the use of photographs (especially in dermatology and ophthalmology), images (such as radiographs, CT and MRI scans), video clips (in neurology, to demonstrate involuntary movements and seizures), audio clips (to listen to abnormal heart sounds and murmurs), and case scenarios. Examining parents and siblings at home may help students practise clinical examination techniques for different body systems. Role play by teachers and peers using predetermined scripts can help develop clinical reasoning and communication skills. As non-verbal cues contribute greatly to data gathering during history taking, students are likely to miss this aspect, since online learning is two-dimensional compared with the three-dimensional experience they would get in real life. Perceptions of learning clinical reasoning online were better than those of learning clinical skills: as many as 42.7% perceived the resources at their disposal as adequate for learning clinical reasoning. This finding may be supported by the understanding that clinical reasoning can be learned without actual physical contact with patients.
However, these methods cannot substitute for the kinaesthetic experience of palpating abdominal lumps and the uterus (at different stages of foetal development), or of vaginal examination in normal and diseased states, as performed in clinical settings. As for clinical procedures, although their theoretical aspects can be learned remotely, procedural skills cannot be properly acquired without performing them in clinical settings. Simulations closely matching clinical settings, using artificial intelligence, AR and VR technologies, are available and are likely to be developed further in the future.
Limitations: The main limitation of this study is the low response rate of 23.7%, despite an email reminder and persuasion by the leader of each cohort. Although the sample exceeded the minimum sample size of 293, the findings may not be generalisable to the rest of the students at the IMU. The study does not address findings specific to different cohorts, as subgroup analysis was not done; the sample sizes of individual cohorts were too small to allow valid conclusions. Since the majority (64%) of students who responded were from the pre-clinical phase (whose clinical training is much less than that of the clinical phase), the pooled data regarding the ability to learn clinical skills and clinical reasoning online may not be generalisable across all semesters.
V. CONCLUSION
It is concerning that 40% did not have a stable internet connection and one quarter did not have personal study rooms, despite 90% possessing the necessary hardware. Furthermore, there were factors that disturbed and demotivated online studies. These should be remedied by providing a reliable power supply, fortifying web-based infrastructure and services, and providing subsidised hardware.
Although the acquisition of clinical reasoning and clinical skills through online teaching and learning sessions was perceived to be possible by two in five and one in five students respectively, every possible effort should be made to remedy these shortcomings for the remaining students.
As the pandemic is likely to persist for some time, we recommend further studies, especially to obtain the perceptions of medical students in other medical schools in Malaysia and in poorly resourced countries, and of the subset of clinical-phase students.
Notes on Contributors
Pilane Liyanage Ariyananda contributed to the conception, design of the study, interpretation of data, and preparation of the paper. Chin Jia Hui, Reyhan Karthikeyan Raman, Aishath Lyn Athif, Tan Yuan Yong, Muhammad Hafiz contributed to conception, acquisition and analysis of data.
Ethical Approval
Permission was obtained from the Institutional Review Board (Project ID No.: IMU: CSc/Sem6 (34) 2020) of the IMU to collect and analyse the data.
Data Availability
A copy of the informed consent form, the survey questionnaire and the anonymised database are available at https://doi.org/10.6084/m9.figshare.16909384 under a CC0 licence.
Acknowledgement
We are grateful to IMU of Malaysia for permitting us to acquire and analyse data and to Professor IMR Goonewardene for his insightful comments on the manuscript. We thank students who participated in the study.
Funding
This work was supported by the International Medical University of Malaysia (Project ID No.: IMU: CSc/Sem6 (34) 2020).
Declaration of Interest
The authors have no competing interests.
References
Ariyananda, P. L., Hui, C. J., Raman, R. K., Athif, A. L., Yong, T. Y., & Hafiz, M. (2021). Online learning during the COVID pandemic lockdown: A cross sectional study among medical students [Data set]. Figshare. https://doi.org/10.6084/m9.figshare.16909384
Evans, D. J. R., Bay, B. H., Wilson, T. D., Smith, C. F., Lachman, N., & Pawlina, W. (2020). Going virtual to support anatomy education: A STOPGAP in the midst of the COVID-19 pandemic. Anatomical Sciences Education, 13, 279-283. https://doi.org/10.1002/ase.1963
Fluid Surveys. (2020). http://fluidsurveys.com/university/survey-sample-size-calculator
Realyvásquez-Vargas, A., Maldonado-Macías, A. A., Arredondo-Soto, K. C., Baez-Lopez, Y., Carrillo-Gutiérrez, T., & Hernández-Escobedo, G. (2020). The impact of environmental factors on academic performance of university students taking online classes during the COVID-19 pandemic in Mexico. Sustainability, 12(21), 9194. https://doi.org/10.3390/su12219194
Venkatesh, S., Chandrasekaran, V., Dhandapany, G., Palanisamy, S., & Sadagopan, S. (2017). A survey on internet usage and online learning behaviour among medical undergraduates. Postgraduate Medical Journal, 93, 275–279. https://doi.org/10.1136/postgradmedj-2016-134164
*Pilane Liyanage Ariyananda
Clinical Campus,
International Medical University,
Jalan Rasah, Seremban 70300
Negeri Sembilan, Malaysia
Email: ariyananda@imu.edu.my
Submitted: 26 February 2022
Accepted: 22 April 2022
Published online: 5 July, TAPS 2022, 7(3), 42-45
https://doi.org/10.29060/TAPS.2022-7-3/SC2766
Gabriel Lee Keng Yan, Lee Yun Hui, Wong Mun Loke, & Charlene Goh Enhui
Faculty of Dentistry, National University of Singapore, Singapore
Abstract
Introduction: Nurturing preventive-minded dental students has been a fundamental goal of dental education. However, students still struggle to implement preventive concepts such as caries risk assessment regularly in their clinical practice. The objective of this study was to identify areas in the cariology curriculum that could be revised to help address this.
Methods: A total of 10 individuals participated and were divided into two focus groups for discussion. Thematic analysis was conducted, and key themes were identified based on how frequently they were cited, before the final report was produced.
Results: Three major themes emerged: (1) Greater need for integration between the pre-clinical and clinical components of cariology; (2) Limited time and low priority that the clinical phase allows for practising caries prevention; and (3) Differing personal beliefs about the value and effectiveness of caries risk assessment and prevention. Participants reported that while didactic teaching was helpful in providing a foundation, they found it difficult to link the concepts taught to their clinical practice. Furthermore, participants felt that they lacked support from their clinical supervisors, and that patients were not always interested in taking action to prevent caries. There was also heterogeneity among students with regard to their overall opinion of the effectiveness of preventive concepts.
Conclusion: Nurturing preventive-mindedness amongst dental students may be limited by the current curriculum schedule, the prioritisation of procedural competencies, the lack of buy-in from clinical supervisors, and a perceived lack of relevance of the caries risk assessment protocol; these barriers should be addressed through curriculum reviews.
Keywords: Dental Education, Caries Risk Assessment, Cariology, Preventive Dentistry, Qualitative Study, Clinical Teaching, Cariogram
I. INTRODUCTION
According to the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019, dental caries in permanent teeth affects an estimated 2 billion people globally, yet it is largely preventable. Thus, nurturing preventive-minded dental students has been a fundamental goal of dental education, and a recurring topic of discussion among dental educators (Pitts et al., 2018). Apart from the operative management of dental caries with fillings, dental students are taught to conduct caries risk assessments for their patients. This enables students to construct a tailored caries prevention plan, using measures such as fluoride varnishes or dietary advice, to prevent the onset or progression of carious lesions. However, studies have reported that while students are taught to assess patients’ risk for dental caries and to customise preventive plans as part of the Cariology curriculum, they struggle to regularly incorporate prevention into their clinical practice (Calderon et al., 2007; Le Clerc et al., 2021).
The objective of this study was to identify areas in the Cariology curriculum that could be enhanced to help dental students become more prevention-oriented in their clinical practice.
II. METHODS
A. Cariology Curriculum at NUS
The Faculty of Dentistry, National University of Singapore offers a four-year Bachelor of Dental Surgery (BDS) programme, mainly divided into pre-clinical and clinical phases. The Cariology curriculum begins in Year 1, where pre-clinical students are equipped with an understanding of the aetiology and pathogenesis of dental caries, along with its preventive and operative management. In Year 2, behavioural science and oral health education and promotion strategies are introduced. On commencing the clinical phase in Year 3, students are taught to use the Cariogram electronic assessment tool (D. Bratthall, computer software, Malmö, Sweden) to systematically assess a patient’s caries risk from self-reported information on plaque control, dietary habits, fluoride exposure, and other caries-related risk factors. From the Cariogram results, a patient’s caries risk profile is generated to guide the development of a targeted caries prevention plan and to aid in the delivery of patient education. A summative assessment is held during the final term of Year 4, where students are required to submit three patient case logs, with caries risk assessments and prevention plans documented, for one-to-one discussion with faculty members involved in the Cariology curriculum.
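As a purely illustrative aid for readers unfamiliar with computerised caries risk assessment, the short Python sketch below shows how self-reported risk factors of the kind listed above might be recorded and mapped to a risk band. The field names, weights, and thresholds are hypothetical; they do not reproduce the Cariogram’s actual algorithm.

# Hypothetical sketch of a caries risk assessment record; the fields,
# weights and thresholds are invented and are NOT the Cariogram algorithm.
from dataclasses import dataclass

@dataclass
class CariesRiskInputs:
    plaque_score: int             # 0 (good control) to 3 (poor control)
    sugary_intakes_per_day: int   # self-reported sugar exposures per day
    fluoride_gap: int             # 0 (daily fluoride) to 3 (no fluoride)
    past_caries: int              # 0 (none) to 3 (extensive experience)

def risk_band(inputs: CariesRiskInputs) -> str:
    # Simple additive score purely for illustration; real tools weight
    # factors differently and also use clinical and salivary findings.
    score = (inputs.plaque_score
             + min(inputs.sugary_intakes_per_day, 3)
             + inputs.fluoride_gap
             + inputs.past_caries)
    if score <= 3:
        return "low risk"
    if score <= 7:
        return "moderate risk"
    return "high risk"

# Example: fair plaque control, frequent snacking, daily fluoride
# toothpaste, some past caries experience -> "moderate risk".
print(risk_band(CariesRiskInputs(2, 4, 0, 1)))

In the curriculum described above, the resulting risk profile would then guide the targeted prevention plan and the patient education delivered in the clinic.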
B. Study Design
An e-mail invitation was sent to the cohort of 2020 (N=55) within a month after the final examination results were released. Ten individuals responded, agreeing to participate and providing consent. Participants were divided into two groups, and a focus group discussion (FGD) was conducted with each group on a teleconferencing platform (Zoom Video Communications), facilitated by one study team member using a discussion guide. Audio recordings of the FGDs were transcribed by the facilitator and two other study team members. All study team members conducted the thematic analysis, and key themes were identified based on how frequently they were cited.
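To make the tallying step concrete, the following minimal Python sketch illustrates how citation frequencies per theme could be counted across the two transcripts; the theme labels and excerpts are hypothetical and are not the study’s data.

# Illustrative tally of how often coded themes are cited across transcripts;
# the theme labels and excerpts below are hypothetical, not study data.
from collections import Counter

coded_excerpts = [
    ("FGD1", "pre-clinical/clinical integration"),
    ("FGD1", "limited clinical time"),
    ("FGD2", "supervisor buy-in"),
    ("FGD2", "limited clinical time"),
    ("FGD2", "beliefs about CRA value"),
]

theme_counts = Counter(theme for _, theme in coded_excerpts)
for theme, count in theme_counts.most_common():
    print(f"{theme}: cited {count} time(s)")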
III. RESULTS
Three major themes emerged from the FGDs.
A. Greater Need for Integration between the Pre-clinical and Clinical Components of Cariology
Participants felt that the pre-clinical lectures provided a foundational understanding of dental caries that they could draw from during their clinical phase of training. However, they suggested that the clinical application of Cariology, such as the use of the caries risk assessment (CRA), can be further emphasised at the beginning of the clinical phase of the BDS programme to reinforce its relevance and significance in the context of overall patient care.
“…not really on our mind when we enter clinics. Maybe the staff can run through the CRA assessment forms before entering clinics.”
[P6]
Participants also highlighted that the three cases due in Year 4 could be submitted and discussed with faculty staff earlier in the clinical phase of the course to concretise concepts and allow an opportunity to implement suggested modifications to their patients’ preventive plans.
“But CRA presentation could have been done earlier like in Year 3. Only after the discussion did it really stick in.”
[P10]
“By the time it made sense, clinic was over.”
[P6]
B. Limited Time and Low Priority to Practise Dental Caries Prevention in the Clinical Phase of Training
Participants shared that the main emphasis of a dental student’s limited clinical time was on operative procedures, as it would mean fulfilling clinical competency requirements essential for graduation.
“As students, we’re slow, so we want to maximise time for treatment rather than talking about prevention.”
[P2]
“…there are other more important requirements.”
[P9]
The low priority dental students accorded to dental caries prevention was also influenced by their clinical supervisors. Some participants noted that their clinical supervisors did not appear keen to discuss caries risk assessment findings during the clinical sessions and did not provide guidance on developing caries prevention plans.
“It is just a two-way thing between patients and students, and not with assessors.”
[P3]
“In the clinics no one really checks our caries risk assessments.”
[P1]
Participants also perceived a lack of interest in prevention among patients, which discouraged them from providing preventive advice.
“Out of the 30 (patients) I saw, only one was interested in oral hygiene instructions and good oral practices.”
[P2]
C. Differing Personal Beliefs about the Value and Effectiveness of Caries Risk Assessment and Prevention
There was a diverse spread of beliefs among participants about the value and effectiveness of caries risk assessment and caries risk management in clinical practice. Several participants saw the value of caries risk assessments and preventive management as necessary tools to help patients prevent the onset and progression of dental caries.
“Caries risk and prevention is what dentistry is about. It would shape preventive strategies and conversations.”
[P10]
“Knowing how to assess risk for the individual is meaningful as it helps employ more time-effective approaches to managing the patient.”
[P5]
In contrast, some participants felt that performing caries risk assessments added little benefit in guiding their preventive advice, as
“…in the end the advice given is the same regardless…”
[P1]
“I didn’t really have to go through the caries risk assessment to tell them what good habits to have.”
[P7]
IV. DISCUSSION
The findings highlight several perceived barriers that keep students from adopting a more prevention-oriented clinical practice. As dental schools focus heavily on procedural competencies, students place a larger emphasis on fulfilling these requirements and less on assisting their patients with preventive regimes. Furthermore, the clinical phase of dental training is too short for students to see the results of the preventive advice they give, such as a reduction in the incidence of new carious lesions, so they find its impact less meaningful or tangible than placing a filling or extracting a tooth. One solution is to implement a formative grading system in place of the current summative assessment, in which students would actively identify patients at risk of caries, conduct one-to-one case discussions with their supervisors throughout the clinical phase, and be graded accordingly. Such a system creates opportunities to reinforce caries prevention concepts and patient management skills throughout the clinical training instead of only at the end. To counter the scepticism some students may have with regard to caries risk assessment, steps to address misconceptions may need to be established (Maupome & Isyutina, 2013). A clearer delivery of concepts in the lecture sessions and opportunities during one-to-one case discussions could be implemented in the revised curriculum.
A frequent theme that emerged was the lack of buy-in from clinical supervisors about carrying out caries risk assessments and preventive management in the student clinics. This is perhaps unsurprising, as similar sentiments were reported in a recent qualitative study among practising dentists (Leggett et al., 2021). The majority of clinical supervisors are not involved in teaching Cariology, and hence it may be necessary to align them with the caries management paradigms being taught and with their roles in informing preventive treatment plans. This would enable them to reinforce such concepts when they supervise students in the clinics.
The lack of interest in preventive advice among the participants’ patients is similarly observed in other countries: patients know about prevention but are not interested in changing (Leggett et al., 2021). Clinical supervisors can encourage dental students to consider different methods of patient engagement, such as Motivational Interviewing, or to apply behavioural change models to promote a more prevention-oriented lifestyle. In so doing, patients may better appreciate the importance of prevention from various perspectives, including the cost savings associated with a reduction in the operative management of dental caries.
The issues highlighted through the FGDs are summarised in Table 1 together with possible modifications.

Table 1. Issues identified in the FGDs and possible mitigating modifications to the current Cariology curriculum
V. CONCLUSION
Nurturing preventive-mindedness among dental students may be limited by the current curriculum content and delivery, the prioritisation of procedural competencies, the lack of buy-in from clinical supervisors, and a perceived lack of relevance of the caries risk assessment protocol. Nevertheless, prevention remains the best cure for dental caries, and the issues raised through the FGDs can be addressed through the curricular modifications discussed earlier. This will, in turn, enhance the preventive-mindedness of dental students.
Notes on Contributors
GLKY conceptualised the study, participated in data collection, analysis, and interpretation, drafted the manuscript, and approved the final version to be published.
LYH conceptualised the study, participated in data collection, analysis, and interpretation, critically revised the manuscript, and approved the final version to be published.
WML conceptualised the study, and critically revised and approved the final version of the manuscript.
CGE designed the methodology, participated in data collection, analysis, and interpretation, and critically revised and approved the final version of the manuscript.
Ethical Approval
This study was approved by the NUS Institutional Review Board (IRB No: S-20-141E).
Data Availability
The transcripts/data of this qualitative study are not publicly available due to confidentiality agreements with the participants.
Acknowledgement
The authors would like to thank the participants for their invaluable input and feedback.
Funding
There was no funding for this study.
Declaration of Interest
The authors have no conflicts of interest to declare.
References
Calderon, S. H., Gilbert, P., Zeff, R. N., Gansky, S. A., Featherstone, J. D., Weintraub, J. A., & Gerbert, B. (2007). Dental students’ knowledge, attitudes, and intended behaviors regarding caries risk assessment: Impact of years of education and patient age. Journal of Dental Education, 71(11), 1420–1427. https://doi.org/10.1002/j.0022-0337.2007.71.11.tb04412.x
Le Clerc, J., Gasqui, M.-A., Laforest, L., Beaurain, M., Ceinos, R., Chemla, F., Chevalier, V., Colon, P., Fioretti, F., Gevrey, A., Kérourédan, O., Maret, D., Mocquot, C., Özcan, C., Pelissier, B., Pérez, F., Terrer, E., Turpin, Y.-L., Arbab-Chirani, R., . . . Doméjean, S. (2021). Knowledge and opinions of French dental students related to caries risk assessment and dental sealants (preventive and therapeutic). Odontology, 109(1), 41–52. https://doi.org/10.1007/s10266-020-00527-7
Leggett, H., Csikar, J., Vinall-Collier, K., & Douglas, G. (2021). Whose responsibility is it anyway? Exploring barriers to prevention of oral diseases across Europe. JDR Clinical & Translational Research, 6(1), 96–108. https://doi.org/10.1177/2380084420926972
Maupome, G., & Isyutina, O. (2013). Dental students’ and faculty members’ concepts and emotions associated with a caries risk assessment program. Journal of Dental Education, 77(11), 1477–1487. https://doi.org/10.1002/j.0022-0337.2013.77.11.tb05624.x
Pitts, N. B., Mazevet, M. E., Mayne, C., & Shaping the Future of Dental Education Cariology Group (2018). Shaping the future of dental education: Caries as a case-study. European Journal of Dental Education, 22 Suppl 1, 30–37. https://doi.org/10.1111/eje.12345
*Gabriel Lee Keng Yan
9 Lower Kent Ridge Rd, Level 10,
Singapore 119085
Email: dengabriellee@nus.edu.sg