The need for researching the utility of the R2C2 model in Cross-Cultural and Cross-Disciplinary settings

Submitted: 26 May 2022
Accepted: 10 June 2022
Published online: 4 October, TAPS 2022, 7(4), 86-87
https://doi.org/10.29060/TAPS.2022-7-4/LE2816

Tomoko Miyoshi1, Fumiko Okazaki2, Jun Yoshino3, Satoru Yoshida4, Hiraku Funakoshi5, Takayuki Oto6 & Takuya Saiki7

1Department of General Medicine, Kurashiki Educational Division, Graduate School of Medicine Dentistry and Pharmaceutical Sciences, Okayama University, Japan; 2Center for Medical Education, The Jikei University School of Medicine, Japan; 3Department of Physical Therapy, Faculty of Health and Medical Science, Teikyo Heisei University, Japan; 4Emergency and Critical Care Medical Center, Niigata City General Hospital, Japan; 5Department of Emergency and Critical Care Medicine Tokyobay Urayasu Ichikawa Medical Center, Japan; 6Department of General Dental Practices, Kagoshima University Hospital, Japan; 7Medical Education Development Center, Gifu University, Japan

Dear Editor,

We are delighted to report that the Japanese translated version of R2C2 (relationship, reaction, content, coaching) has been published in the Journal of Medical Education in Japan, with the kind permission of the authors and the journal Academic Medicine. The R2C2 model, developed by Sargeant et al. (2015), promotes behaviour change through reflection and feedback while incorporating coaching. Its effectiveness and influencing factors have been demonstrated in supervisor–resident pairs in various residency programmes (family medicine, psychiatry, internal medicine, surgery, and anaesthesiology) in the U.S., Canada, and the Netherlands. The R2C2 model is fascinating because it emphasises the relationship and dialogue between the resident and the supervisor, and provides insights into residents’ in-depth learning.

While we are interested in the factors that influence feedback across different specialties and contexts, we hypothesise that national culture and health profession disciplines may affect the dialogue and the impact of the R2C2 model, especially in bridging the gap between self-assessment and the supervisor’s assessment.

Reports of such cultural differences show that Japanese learners learn more from their failures, whereas Westerners learn more from their successes, and that learners’ self-evaluation also differs. In addition, Hofstede reports that the relationship between learners and teachers in East Asia, including Japan, is hierarchical, and feedback is therefore likely to be one-sided. Regarding mentoring and coaching, we have shown that Japanese physician–scientist relationships depend on trust in mentors, and the cultural acceptance of paternalistic mentoring (Obara et al., 2021) suggests the need to build trusting relationships. Furthermore, as a multidisciplinary author team, we are keen to explore how different health profession disciplines shape perspectives on effective feedback and the supervisor–learner relationship. We expect this topic to become more salient as modern health services grow increasingly multi-professional and the discourse develops within multi-professional relationships.

The Japanese version was carefully translated and published to overcome any issues arising from translation. Although we successfully conducted a nationwide workshop on R2C2 in Gifu, Japan in 2021 to disseminate its philosophy, we realised that a variety of factors may come into play when we conduct R2C2 in our own context. Our future goal is to examine the utility of the R2C2 model in cross-cultural as well as cross-disciplinary settings, in order to generate findings that will contribute to the glocalisation of medical education and multidisciplinary education.

Notes on Contributors

T Miyoshi conceptualised and wrote the manuscript and approved the final version.

F Okazaki conceptualised cultural difference of R2C2 and revised and approved the manuscript.

H Funakoshi conceptualised cultural difference of R2C2 and revised and approved the manuscript.

T Oto conceptualised different health profession disciplines of R2C2 and approved the manuscript.

J Yoshino conceptualised different health profession disciplines of R2C2 and approved the manuscript.

S Yoshida conceptualised different health profession disciplines of R2C2 and approved the manuscript.

Prof T Saiki supervised and edited the manuscript.

Acknowledgement

We would like to acknowledge Rintaro Imafuku, Kaho Hayakawa, and Chihiro Kawakami of the Gifu University Medical Education Development Center for collaboratively writing and editing the Japanese translated version of R2C2.

Funding

No funding was provided.

Declaration of Interest

There is no conflict of interest, financial, consultant, institutional or otherwise, for the authors.

References

Obara H, Saiki T, Imafuku R, Fujisaki K, & Suzuki Y. (2021). Influence of national culture on mentoring relationship: a qualitative study of Japanese physician-scientists. BMC Medical Education, 21, 300. https://doi.org/10.1186/s12909-021-02744-2

Sargeant J, Lockyer J, Mann K, Holmboe E, Silver I, Armson H, Driessen E, MacLeod T, Yen W, Ross K, & Power M. (2015). Facilitated Reflective Performance Feedback: Developing an Evidence- and Theory-Based Model That Builds Relationship, Explores Reactions and Content, and Coaches for Performance Change (R2C2). Academic Medicine, 90(12), 1698-1706. https://doi.org/10.1097/ACM.0000000000000809

*Tomoko Miyoshi
2-5-1 Shikata-cho, Kita-ku,
Okayama, Japan, 700-8558
+81-86-235-7342
Email: tmiyoshi@md.okayama-u.ac.jp

Submitted: 9 May 2022
Accepted: 3 August 2022
Published online: 4 October, TAPS 2022, 7(4), 83-85
https://doi.org/10.29060/TAPS.2022-7-4/CS2808

Chi Sum Chong1 & Woei Yun Siow2

1Yong Loo Lin School of Medicine, National University of Singapore, Singapore, 2Raffles Hospital, Singapore

I. INTRODUCTION

The AO Foundation aims to improve patient outcomes in the surgical treatment of trauma and musculoskeletal disorders and to promote education and research. Each year, approximately 30,000 orthopaedic surgeons worldwide attend AO Foundation courses. To ensure that the planned curriculum is delivered, the AO Foundation requires its surgeon-faculty to attend the Faculty Education Program (FEP) before teaching at regional and international courses.

FEP participants are AO member-surgeons who are actively teaching within their own countries. They are selected by their local AO committees and invited to attend. Every participant is encouraged to teach at regional and international courses thereafter.

II. METHODS

Course structure:

  • Five weeks of online learning

This includes a self-assessment. Thereafter, participants learn through reading assignments, case studies and peer discussion at their own pace. These provide a problem-based and collaborative approach to learning. Most participants experience the same planned curriculum. Participants from locations with poor internet signals require a modified delivery of the curriculum e.g. email and hard copies.

  • One-and-a-half days of live event

This begins with a group discussion to derive the core principles of effective learning from one’s own learning experiences, followed by an introduction to the Pendleton method of giving and receiving feedback. Thereafter, each participant presents a lecture, conducts a small group discussion and demonstrates teaching of a practical session through role play. For each activity, each participant receives feedback from the other participants and the faculty (Benton & Young, 2018). The event concludes with feedback to evaluate the course. Face-to-face learning activities are contextual and allow knowledge and skills of teaching strategies to be learnt collaboratively. The online and face-to-face curricula follow the SPICES model and align with the learning outcomes (Harden et al., 1984).

  • One week of online follow-up with a post-course self-assessment.

The learning outcomes are:

  • Prepare and present a lecture
  • Moderate a small group discussion
  • Instruct in practical exercises
  • Receive and give feedback
  • Evaluate one’s own teaching
  • Work with outcomes in teaching strategies
  • Set expectations of a teaching or learning activity
  • Use information about learners e.g. learners’ needs and cultural context in the educational process
  • Motivate learners
  • Encourage interaction among learners

The outcomes encompass knowledge and skills in teaching and awareness of best practice guidelines in teaching strategies i.e. attitudinal domain. They are specific, relevant and timely for the participants who are young surgeons interested in teaching (Harden et al., 1999).

Some outcomes are easily measurable, e.g. prepare and present a lecture, moderate a small group discussion, instruct in practical exercises, and receive and give feedback; participant performance is measured against a set of guidelines (Kogan et al., 2009). Some outcomes are embedded within the learning activities, e.g. set outcomes and expectations in learning activities, motivate learners, encourage interaction among learners, and evaluate one’s own performance. Some outcomes are not easily measurable, e.g. using learner information to plan learning activities. Overall, Kirkpatrick level three is achieved for most outcomes.

For outcomes that cannot be easily measured during the course, longitudinal assessment of the participants will allow these outcomes to be measured i.e. when they teach at future AO courses after the FEP. Thus, entrustable professional activities from the FEP are aligned with the course outcomes (Shorey et al., 2019).

Feedback was gathered from participants attending the FEP courses at which the author Siow was one of the faculty. All participants gave verbal consent to provide feedback. A total of 103 participants attended six FEP courses between 2016 and 2019, and the response rate was 100%. Achievement of course outcomes was measured on three categories ranging from “not achieved” to “fully achieved”. Faculty effectiveness, content relevance and overall course impact were assessed on five categories ranging from “not at all effective” to “very effective”.

According to the ethics commission of Canton Zurich, this study does not require authorisation from the ethics committee (BASEC-Nr. Req-2022-00536).

III. RESULTS

Eighty percent or more of graduates agreed that the following outcomes were fully achieved: prepare and present a lecture, moderate a small group discussion, instruct in practical exercise, encourage interaction, work with outcomes in teaching strategies, set expectations and evaluate one’s own teaching.

Seventy-five to seventy-eight percent of graduates agreed that the following outcomes were fully achieved: motivate learners, receive and give feedback and manage time and logistics.

Sixty-six percent of graduates agreed that the following outcome was fully achieved: using learner’s information in the educational process.

Ninety-five to ninety-eight percent of graduates agreed that the faculty, the course content and the overall course impact were very effective.

IV. DISCUSSION

A large majority of the participants were able to fully achieve these outcomes: prepare and present a lecture, moderate a small group discussion, instruct in practical exercise, encourage interaction, work with outcomes in teaching strategies, set expectations and evaluate one’s own teaching. This is likely because these outcomes are more familiar to the participants.

Seventy-five to seventy-eight percent of graduates agreed that the following outcomes were fully achieved: motivate learners, receive and give feedback and manage time and logistics. The achievement rate for this group of outcomes is slightly lower than the previous group of outcomes possibly because these outcomes are less familiar to the participants. Furthermore, the AO method of giving and receiving feedback presents a new concept and practice to many participants.

Sixty-six percent of graduates agreed that the following outcome was fully achieved: using learners’ information in the educational process. One reason for this lower score may be that the application of this outcome was not specifically highlighted and explained to the participants. The outcome was strictly adhered to and applied in the planning and execution of the very FEP course the participants attended, but the manner in which participants’ information was used to do so was not clearly explained to them.

V. CONCLUSION

The FEP is a rare opportunity for surgeon-educators to learn about scholarly teaching. Feedback from the courses supports their continuation to help faculty improve their teaching skills.

Notes on Contributors

Chi Sum Chong reviewed the literature, performed data analysis and developed the manuscript. Woei Yun Siow reviewed the literature, designed the study, performed the data collection and wrote the manuscript. All authors read and approved the final manuscript.

Funding

This work has not received any external funding.

Declaration of Interest

All authors declare that there are no conflicts of interest.

References

Benton, S. L., & Young, S. (2018). Best practices in the evaluation of teaching. IDEA paper No. 69.

Harden, R. M., Crosby, J. R., & Davis, M. H. (1999). AMEE Guide No. 14: Outcome-based education: Part 1-An introduction to outcome-based education. Medical Teacher, 21(1), 7-14. https://doi.org/10.1080/01421599979969

Harden, R. M., Sowden, S., & Dunn, W. R. (1984). Educational strategies in curriculum development: The SPICES model. Medical Education, 18(4), 284-297.

Kogan, J. R., Holmboe, E. S., & Hauer, K. E. (2009). Tools for direct observation and assessment of clinical skills of medical trainees. A systematic review. Journal of the American Medical Association, 302(12), 1316-1326.

Shorey, S., Lau, T. C., Lau, S. T., & Ang, E. (2019). Entrustable professional activities in health care education: A scoping review. Medical Education, 53(8), 766-777.

*Woei Yun Siow
Raffles Hospital,
585 North Bridge Road,

Singapore 188770
Email: siowwoeiyun@gmail.com

Submitted: 14 April 2022
Accepted: 3 August 2022
Published online: 4 October, TAPS 2022, 7(4), 76-82
https://doi.org/10.29060/TAPS.2022-7-4/CS2780

Eusni RM Tohit1, Fauzah A Ghani1, Hizmawati Madzin3, Intan N Samsudin1, Subashini C Thambiah1, Siti Z Zakariah2 & Zainina Seman1

1Department of Pathology, Faculty of Medicine & Health Sciences, Universiti Putra Malaysia, Malaysia; 2Department of Medical Microbiology, Faculty of Medicine & Health Sciences, Universiti Putra Malaysia, Malaysia; 3Department of Multimedia, Faculty of Computer Science & Information Technology, Universiti Putra Malaysia, Malaysia

I. INTRODUCTION

Twenty-first century learning requires analytical thinking and problem solving; hence, medical educators must design suitable models to prepare learners for future challenges. Medical teaching and learning are moving in this direction, and the use of technology in education is embedded in the process. The role of laboratory testing in patient care is recognised as a critical component of modern medical care (Smith et al., 2010). Yet the ability of practising physicians to appropriately order and interpret laboratory tests is declining, and little attention has been given to appropriate medical student education in pathology (Smith et al., 2010).

Clinical Pathology (CP) is a module recently introduced into our medical programme. In-depth learning of pathology requires learners to identify appropriate tests and specimen containers, and to interpret patients’ results with consideration of other factors that may influence them.

Design thinking skills (DTS) is a guided process of thinking in which learners work in a team to identify problems (a patient case), analyse them through collaborative learning, justify investigations, interpret results, and outline relevant, effective management. Experiential learning emphasises the central role of learners in the educational process by allowing them to draw their own conclusions and ruminate on the meaning of the learned material (Clem et al., 2014). Blending DTS and experiential learning creates a holistic approach to the learning of CP.

II. METHODS

A pilot study was conducted among year 3 medical students at the Faculty of Medicine and Health Sciences, Universiti Putra Malaysia, Serdang, Selangor, Malaysia. The study was approved by the Ethics Committee for Research Involving Human Subjects, Universiti Putra Malaysia (JKEUPM-2019-387). It was conducted over a span of two months outside students’ formal teaching and learning. Inclusion criteria were students in their clinical years who had never been exposed to the Clinical Pathology module. Students were divided into small groups of four or five, and all were equipped with the CP app (Appendix 1) on an Android smartphone together with the DTS task book. Each group had a clinical pathologist facilitating the four hybrid (physical and online) sessions, a format necessitated by the global pandemic. In brief, the phases involved introduction to CP (empathy), case finding (define), laboratory workup (ideation), results interpretation (solution), case approach (prototype), and critical analysis (reflection and post-mortem) [details in Appendix 2]. These were then presented in the final phase of DTS in a simulated grand ward round. Learners took pre- and post-tests in CP and were asked to evaluate their experiences using a modified 28-item Likert-scale questionnaire (Appendix 3), adapted from a validated experiential learning questionnaire (Clem et al., 2014).

III. RESULTS

Twenty students from the Medicine and Surgery postings participated in this pilot study, conducted from 27th April 2021 to 26th June 2021. In general, students were very satisfied with the experiential learning project. Responses on experiential learning and test scores are tabulated in Table 1. The 28 items were divided into four subheadings. On the type of environment used, 66% agreed with the hybrid approach used in running the project. Seventy-five percent agreed on active participation in the different phases of DTS. Eighty-six percent agreed with the relevance of the content of CP in their teaching and learning towards becoming a medical professional. Over two-thirds of respondents agreed that the CP learning experience could usefully be adapted to their future learning. As for students’ performance (n=20) in the pre- and post-test OSCE in pathology, students scored significantly higher marks on all items evaluated, as seen in Table 1.

Encouraging responses were recorded from some of the respondents as stated below:

“I enjoyed it very much. I received a lot of clarity on how important clinical pathology is after the session. Even after all these sessions, I even read again and again the clinical pathology notes that I have. I feel I can slowly relate my prior knowledge when it comes to clinical.”

Respondent 1

 

“In my opinion, I think this research project has given me a lot of benefits such as I can know how to correctly fill in the form to order the lab investigation, understand how to choose the correct tube for each lab investigation. I like this project very much as it can help me in this medical field”

Respondent 2

 

“I am grateful for being part of this research since I learnt a lot from the sessions. I have learnt about the type of lab investigations and blood tube, the sequence of taking blood as well as the phlebotomy techniques from the sessions which may help me in my future medical career.”

Respondent 3

Subheading I                                                                  Agree (%)  Neutral (%)  Disagree (%)
On the environment of Clinical Pathology used in the experiential learning        66         15           19
On the active participation and learning of Clinical Pathology                    75         14           11
On the relevance of the content of Clinical Pathology module                      86          3           11
On the utility of Clinical Pathology experience in future learning                68          3           29

Subheading II                                                                 Pre-test (/5)  Post-test (/5)
Correct selection of specimen container                                            0.6            3.5
Correct order of blood draw                                                        2.5            4.0
Correct preanalytical variables identified                                         0.3            3.0
Relevant information in the laboratory form                                        2.3            4.0
Interpretation of laboratory tests                                                 3.5            4.5

Table 1. Responses to the questionnaire, and pre- and post-test scores for the OSCE in Clinical Pathology

IV. DISCUSSION

This pilot study has been shown to be beneficial for the clinical students who participated in the research.

Using the Kirkpatrick model (Kirkpatrick & Kirkpatrick, 2021), students in this pilot study achieved level 2 of the model’s outcomes. As Clinical Pathology is a new subject in the amended curriculum, ‘sensitising’ the students to the importance of Clinical Pathology (CP) was achieved.

The small group teaching practised in this pilot study is in line with that of other schools, where it resulted in close relationships between students and facilitators (Smith et al., 2010). The CP app provided self-directed learning of information about laboratory tests, which can improve students’ performance (Smith et al., 2010). Working through their own clinical cases created inquisitive learners, as students were able to correlate their patients’ laboratory findings with the clinical picture.

The disagreement shown by some students implies the need to improve the implementation and running of the project. Students’ learning preferences vary across visual, aural, reading, and kinaesthetic (VARK) modes, and a suitable approach needs to be designed to suit the spectrum of students.

Post-test OSCE scores showed improvement in the common pathology knowledge required of students. This general knowledge will assist them in other clinical postings in future, and the CP app provided earlier will remain useful for self-directed learning. However, there are still challenges in developing a standardised approach to assessing students’ knowledge and skills in this area (Smith et al., 2010), which is an avenue for future research.

V. CONCLUSION

Students developed more confidence in CP which is useful for future learning experience in other disciplines and future career.

Notes on Contributors

ERT designed the research, developed the CP app storyboard, created the DTS task book, analysed the results, and wrote the manuscript. HM developed the CP application and edited the manuscript. FAG, INS, SCT, SZZ, and ZS revised the protocol, the CP app storyboard and the DTS task book, facilitated the project, and edited the manuscript.

Acknowledgement

The authors would like to acknowledge Sufi Firdaus and Rubhan AL Chandran for technical help in the development of the CP application and the DTS task book.

Funding

This work was supported by Geran Inovasi Pengajaran Pembelajaran 2018/Universiti Putra Malaysia/Centre of Academic Development (800-2/2/15).

Declaration of Interest

All authors declare that there are no conflicts of interest, including financial, consultant, institutional or other relationships that might lead to bias.

References

Clem, J. M., Mennicke, A. M., & Beasley, C. (2014). Development and validation of the experiential learning survey. Journal of Social Work Education, 50, 490-506. https://doi.org/10.1080/10437797.2014.917900

Kirkpatrick, J., & Kirkpatrick, W. K. (2021). Introduction to the New World Kirkpatrick model. Kirkpatrick Partners. Retrieved June 7, 2022, from https://www.kirkpatrickpartners.com/wp-content/uploads/2021/11/Introduction-to-the-Kirkpatrick-New-World-Model.pdf

Smith, B. R., Aguero-Rosenfeld, M., Anastasi, J., Baron, B., Berg, A., Bock, J. L., Campbell, S., Crookston, K. P., Fitzgerald, R., Fung, M., Haspel, R., Howe, J. G., Jhang, J., Kamoun, M., Koethe, S., Krasowski, M. D., Landry, M. L., Marques, M. B., Rinder, H. M., . . . Wu, Y. (2010). Educating medical students in laboratory medicine: A proposed curriculum. American Journal of Clinical Pathology, 133(4), 533–542. https://doi.org/10.1309/AJCPQCT94SFERLNI

*Eusni Rahayu binti Mohd.Tohit
Department of Pathology,
Faculty of Medicine & Health Sciences,
University Putra Malaysia,
43400 Serdang, Selangor
+60397692379
Email: eusni@upm.edu.my

Submitted: 11 March 2022
Accepted: 10 June 2022
Published online: 4 October, TAPS 2022, 7(4), 73-75
https://doi.org/10.29060/TAPS.2022-7-4/CS2783

Kiyotaka Yasui, Maham Stanyon, Yoko Moroi, Shuntaro Aoki, Megumi Yasuda, Koji Otani & Yayoi Shikama

Centre for Medical Education and Career Development, Fukushima Medical University, Fukushima, Japan

I. INTRODUCTION

Educational strategies that are effective in one culture may not elicit the expected response when transferred across cultures. For instance, discussion-based learning methods such as problem-based learning, which were developed in Western contexts to foster self-directed lifelong learning (Frambach et al., 2019), are not easy for Asian students to adapt to. The quietness of Asian students, noted in multi-national contexts, is not always due to linguistic or cultural literacy barriers (Remedios et al., 2008) and requires contextual deconstruction to enable effective solution generation. In a Japanese context, we have observed how quietness manifests through insufficient question generation and a lack of spontaneous opinion expression in class. Such attitudes may be interpreted by Western standards as lacking initiative and critical thinking (Tavakol & Dennick, 2010) but are in line with Japanese social norms and traditional views of learning. Because effective learning through discussion requires cognitive conflict to facilitate conceptual transformation (De Grave et al., 1996), it is necessary to ease the psychological burden experienced by our students when deviating from inherited cultural habits so that they can comfortably express opinions to embrace such conflicts. In this case study we share how we created a supportive environment to enable Japanese medical students to embrace this behavioural change.

Through our understanding of Japanese cultural norms, we hypothesised that student quietness could be attributed to the following: 1) belief that one’s question is insignificant and a desire not to impose on the time of others; 2) reluctance to express different opinions which might cause conflict; and 3) risk aversion to making incorrect statements. Reasons 1) and 2) reflect Japanese social norms requiring people to always act with consideration for others, while 3) is related to a Confucian-influenced traditional view of learning that values humility about one’s imperfection as a driving force for self-cultivation, which potentially reinforces embarrassment when making incorrect statements. We aimed to address the above points by introducing environmental changes to boost student confidence in the significance of their questions and minimise the psychological burden of expressing their opinions during a class on ethical dilemmas.

II. METHODS

The class was undertaken by 256 first-year medical students at Fukushima Medical University in 2018 and 2019, as shown in Figure 1.

Figure 1. Flow diagram explaining the class: The closed circles represent the presenting group members and their interaction with the rest of the class (open circles) during the discussion and plenary session

A. Building Student Confidence

To minimise the risk aversion and associated anxiety of voicing incorrect opinions, we tasked students with reflecting on ethical dilemmas with no clear answer that they encountered during a 3-day placement in local nursing homes; these reflections were presented in groups of 5-6 to the rest of the class. By removing the expectation of a right answer from the start, we created an atmosphere in which students felt comfortable generating multiple questions rather than focusing on reaching a single ‘correct’ answer.

B. A Conducive Environment for Cognitive Conflict

To break down the barriers of students seeking conformity and agreement during their presentations, we refocused the objective of the session onto the reasoning process of how they considered their ethical dilemma. This reframing supported students to embrace conflicting perspectives without worrying about achieving a consensus.

C. Nurturing a Diversity of Opinions

To facilitate the voicing of minority opinions, we harnessed a positive psychological trait in Japanese culture where pleasure is felt in acting as a collective. Therefore, when opinions were presented to the class, the entire group embraced ownership of the discussion, allowing the individuals who raised the points to remain anonymous. This reduced the potential for personal conflict and allowed diverse opinions to be aired without a loss of face.

At the end of the class, students were asked to evaluate the class using a 4-point Likert scale (good, fairly good, not so good, not good) and to write a reflection on the experience in one to two lines.

III. RESULTS

Out of the 245 students who submitted ratings, 89.9% evaluated the course as “good” or “fairly good”. About half mentioned their surprise at the diversity of opinions and their satisfaction with hearing them, acknowledging that hearing the different perspectives deepened their thoughts, broadened their perspectives, and created new ideas. Satisfaction with being able to express one’s thoughts was stated by a small number of students. Some of the students who chose “not so good” or “not good” pointed out that discussion was tough and required getting used to.

IV. DISCUSSION

When adopting a teaching method developed in a different culture, it should be delivered in the context of one’s own culture to optimise student learning. Once given a supportive environment, Japanese students, previously more content to listen than to actively contribute to discussions, exchanged their ideas and positively encountered cognitive conflict, rather than suffer from low confidence and an aversion to personal conflict. This demonstrates their potential to assimilate different perspectives and advance their thinking, akin to undergoing conceptual transformation. Through this work, we show that the standardisation of teaching methods does not equate to the globalisation of education, but how teaching must be adapted with clear implementation strategies and outcome definition, grounded in the culture to which the learners belong.

V. CONCLUSION

Generalising our adaptations outside of a Japanese context is limited, because of the cultural diversity within Asian countries that brings different challenges to discussion-based learning methods. However, vast numbers of students migrate across cultures in higher education and healthcare training. For universities and clinical training institutions with international students, understanding the barriers and supporting ‘quiet’ students to learn effectively through discussion alongside inherited cultural norms is a priority. This study aids in this understanding by providing an example from a Japanese medical undergraduate context.

Notes on Contributors

Kiyotaka Yasui designed and conducted the course, analysed the student reflections and wrote the manuscript.

Maham Stanyon analysed student reflections and wrote the manuscript.

Yoko Moroi conducted the course facilitation and supported the contextualisation of the results and discussion.

Shuntaro Aoki conducted the course facilitation and supported the contextualisation of the results and discussion.

Megumi Yasuda conducted the course facilitation and supported the contextualisation of the results and discussion.

Koji Otani conducted the course facilitation and supported the contextualisation of the results and discussion.

Yayoi Shikama planned and conducted the course as a course supervisor, analysed the student course ratings and reflections, and wrote the manuscript.

Acknowledgement

The authors would like to thank Dr. Rintaro Imafuku (Gifu University, Gifu, Japan) for constructive advice given during the medical education and research mentoring program sponsored by the Japan Society of Medical Education and Oliver Stanyon for editing a draft of this manuscript.

Funding

This study did not receive any funding.

Declaration of Interest

The authors have no conflict of interest to declare.

References

De Grave, W. S., Boshuizen, H. P. A., & Schmidt, H. G. (1996). Problem based learning: Cognitive and metacognitive processes during problem analysis. Instructional Science, 24, 321-341. http://doi.org/10.1007/BF00118111

Frambach, J. M., Talaat, W., Wasenitz, S., & Martimianakis, M. A. (2019). The case for plural PBL: An analysis of dominant and marginalized perspectives in the globalization of problem-based learning. Advances in Health Sciences Education, 24, 931-942. http://doi.org/10.1007/s10459-019-09930-4

Remedios, L., Clarke, D., & Hawthorne, L. (2008). The silent participant in small group collaborative learning contexts. Active Learning in Higher Education, 9(3), 201-216. http://doi.org/10.1177/1469787408095846

Tavakol, M., & Dennick, R. (2010). Are Asian international medical students just rote learners? Advances in Health Sciences Education, 15, 369-377. http://doi.org/10.1007/s10459-009-9203-1

*Kiyotaka Yasui
1 Hikarigaoka,
Fukushima 960-1295,
Japan
Email: taka-y@fmu.ac.jp

Submitted: 23 November 2021
Accepted: 10 May 2022
Published online: 4 October, TAPS 2022, 7(4), 59-70
https://doi.org/10.29060/TAPS.2022-7-4/OA2714

Deepthi Edussuriya1, Sriyani Perera2, Kosala Marambe3, Yomal Wijesiriwardena1 & Kasun Ekanayake1

1Department of Forensic Medicine, Faculty of Medicine, University of Peradeniya, Sri Lanka; 2Medical Library, University of Peradeniya, Sri Lanka; 3Department of Medical Education, Faculty of Medicine, University of Peradeniya, Sri Lanka

Abstract

Introduction: Emotional Intelligence (EI) is especially important for medical undergraduates because of the long undergraduate period and the relatively high demands of the medical course. Determining the associates of EI would not only enable identification of those most suited to the discipline of medicine but would also help in designing training strategies that target specific groups. However, there is diversity of opinion regarding the associates of EI in medical students. The aim of the study was to determine the associates of EI in medical students.

Methods: The databases MEDLINE, CENTRAL, Scopus, EbscoHost, LILAC, IMSEAR and three others were searched. This was followed by hand-searching, screening of cited/citing references, and searching PQDT. All studies on the phenomenon of EI and/or its associates with medical students as participants were retrieved. Studies from all continents of the world, published in English, were selected. They were assessed for quality using the Q-SSP checklist, followed by a narrative synthesis of the selected studies.

Results: Seven hundred and ninety-two articles were identified, of which 29 met the inclusion criteria. One article was excluded as its full text was not available. Seven articles found an association between EI and academic performance, 11 identified an association between EI and mental health, 11 found an association between EI and gender, six identified an association between EI and empathy, while two found an association with the learning environment.

Conclusion: Higher EI is associated with better academic performance, better mental health, happiness, the learning environment, good sleep quality and less fatigue, female gender, and greater empathy.

Keywords: Emotional Intelligence, Associates of Emotional Intelligence, Medical Students, Mental Wellbeing, Empathy

Practice Highlights

  • Higher emotional intelligence is associated with better academic performance.
  • Higher emotional intelligence is associated with better mental health.
  • Higher emotional intelligence is associated with female gender.
  • Higher emotional intelligence is associated with greater empathy.

I. INTRODUCTION

Emotional intelligence (EI) is defined as “the ability to perceive emotions accurately, appraise, and express emotion; the ability to assess and/or generate feelings when they facilitate thought; ability to understand emotions and emotional knowledge, and to regulate emotions to promote emotional and intellectual growth” (Mayer & Salovey, 1997). Studies have found a positive association between EI and academic as well as professional success (Suleman et al., 2019). It has been reported that people, including college students, with good EI show better social functioning and interpersonal relationships, and are identified by peers as less antagonistic and conflictual (Petrovici & Dobrescu, 2014).

Several tests and instruments used to assess the emotional intelligence of medical students were identified through the literature. These include standard EI tests, modified versions of standard EI tests, and authors’ own assessment methods. The Schutte self-report EI test, the TEIQue questionnaire and Bar-On’s Emotional Quotient Inventory ((EQ-i) 2.0) have been used frequently. Each of these instruments has its own advantages and disadvantages.

The Emotional Quotient Inventory (EQ-i) 2.0 is a revision of the original EQ-i (Bar-On, 2004). It measures the interaction between an individual and their environment. Since the EQ-i 2.0 is a revision of the original instrument, the standard platform of the EQ-i validation remains intact.

The Schutte Self-Report Emotional Intelligence Test (SSEIT) measures general emotional intelligence (EI) using four sub-scales: emotion perception, utilising emotions, managing self-relevant emotions, and managing others’ emotions (Schutte et al., 1998). The SSEIT model is closely associated with the EQ-i model of emotional intelligence and has a reliability rating of 0.90. The overall EI score is fairly reliable for adults and adolescents; however, the utilising emotions sub-scale has shown poor reliability (Ciarrochi et al., 2001). Petrides and Furnham (2000) report a mediocre correlation of the SSEIT with self-estimated EI, the Big Five, and life satisfaction. However, the SSEIT correlated poorly with well-being and EI criteria.

The Trait Emotional Intelligence Questionnaire (TEIQue) is an openly accessible instrument developed to measure global trait emotional intelligence. A substantial body of research on EI has been conducted based on trait emotional intelligence theory (Mikolajczak et al., 2007). The TEIQue is available in long and short forms. Internal consistency and test-retest analyses indicated scale reliabilities of 0.71 and 0.76, respectively. High correlations between the TEIQue and Shrink’s Emotional Intelligence Scale, and with the “Big Five” personality traits, showed validity in measuring emotional intelligence.

Apart from these assessment methods, the Genos Emotional Intelligence Assessment, the Mayer-Salovey-Caruso Emotional Intelligence Test, TMMS-24 data and the DASS-21 scale, Bradberry-Greaves’ Emotional Intelligence and Siberia Shrink’s Emotional Intelligence Questionnaire have also been used by authors to assess EI.

A comprehensive survey in medicine states that EI makes a positive contribution to the doctor-patient relationship, increased empathy, teamwork, communication skills, stress management, organisational commitment and leadership (Arora et al., 2010). EI is particularly important to medical professionals as it is associated with self-monitoring, which not only ensures appropriate adaptation to clinical situations and desirable interpersonal relations but also results in favourable outcomes for the patient and the wellbeing of the practitioner.

A few studies suggest that EI training can help medical students build their leadership and empathy skills as they enter the clinical years (Austin et al., 2005; Dolev et al., 2019). Literature surveys on emotional intelligence in medicine and on physician leadership qualities conclude that EI correlates with many of the competencies that modern medical curricula seek to deliver, including leadership (Mintz & Stoller, 2014; Reshetnikov et al., 2020). Other studies indicate that age and gender are associated with emotional intelligence. However, some studies showed that EI at medical school admission could not reliably predict academic success in later years (Reshetnikov et al., 2020). These studies have all looked at the associates in isolation. However, it would also be interesting to reflect on the concept of EI in a broader sense, as there would inevitably be an interaction of factors.

The medical course extends over five years, whereas most undergraduate degrees are shorter. Medical training involves close interactions with different categories of people, including patients, doctors of different grades and paramedical staff. Training includes long hours of work in stressful environments where some situations can be emotionally challenging. This long undergraduate period and the relatively high demands of the medical course require medical students to possess a high degree of EI. As the findings of different studies on EI are sometimes divergent, it would be useful to conduct a systematic review to identify the associates of EI in order to design training strategies that target specific groups.

Even though EI is considered a trainable trait, the extent of trainability depends on many personal and institutional factors (Mattingly & Kraiger, 2019). Völker (2020) notes that trainability in emotional intelligence is subject to acquired knowledge, which is situational and may depend on accumulating relevant experience.

In the Sri Lankan context, the sole criterion for selecting students for a medical course is academic excellence at the Advanced Level examination, which alone may not reflect their suitability for a profession like medicine (University Grants Commission, 2022).

However, since EI is an essential trait, especially for medical practice, many universities worldwide use different tools to assess EI in their applicants. Furthermore, different universities adopt varying techniques to develop the EI of their students throughout the course. It is envisaged that this review would not only help determine what additional factors could be considered in the selection of applicants for a medical course but would also help teachers design training strategies that target specific groups of students and ensure a more enjoyable and productive learning experience for students as a whole. There is no doubt that such selection and intervention programs would produce doctors with more favourable qualities, bringing greater benefit to patients while also helping to prevent burnout among doctors.

A. Objective

The objective of this study is to determine the associates of emotional intelligence in medical students based on literature available in English from 2015 to 2020.

II. MATERIALS AND METHODS

The research question was defined based on the PICOS (Population, Intervention, Comparison, Outcomes and Setting) format. The review protocol was developed according to the PRISMA-P 2015 (Preferred reporting items for systematic review and meta-analysis protocols) statement (Moher et al., 2015) by all three authors DE, KM and SP and was registered in the PROSPERO Registry (CRD42021227877). The methodology for the systematic review (SR) followed the guidelines and standards of the IOM (Institute of Medicine) (Eden et al., 2011) and PRISMA-2015 for reporting.

A. Search Strategy

A systematic and comprehensive search was conducted by SP in April 2020, and references were managed using the software Mendeley. The search explicitly aimed to identify all published and unpublished relevant studies in order to limit bias in the searching process. The key search terms were identified with the aid of a search-term-harvesting table by KM and DE. A combination of relevant medical subject headings and search terms tagged with other appropriate search fields was used in the literature search. The following databases were searched:

CDSR (Cochrane Database of Systematic Reviews), DARE (The Database of Abstracts of Reviews of Effects), MEDLINE (1950–2020) via PubMed (see Supplemental Appendix 1 for the search strategy), CENTRAL (The Cochrane Central Register of Controlled Trials, 1948–2020), Scopus, EbscoHost, LILAC, IMSEAR (Index Medicus for the South East Asian region) and the WHO International Clinical Trials Registry Platform (ICTRP). In addition to electronic searches, two key journals (2015–2020) were hand-searched, and the cited and citing references of all included studies were screened for further relevant articles. Searches were limited to studies published between 2015 and 2020. Other resources searched included grey literature such as PQDT (ProQuest Dissertations and Theses database) and Global Health (via WHO).

B. Selection Criteria

After removal of duplicates from the retrieved articles, the remaining articles with abstracts were uploaded to the web application Rayyan (Ouzzani et al., 2016) for screening. The criteria for selection of articles were based on the PICOS elements. The studies were from all continents of the world and limited to those published in English. All studies focusing on the phenomenon of EI and/or its associates with medical students as participants were considered for inclusion in the review.

The authors DE, KM, SP and KE independently screened the uploaded articles in Rayyan using the above eligibility criteria. In the first phase, the title and abstract of each article were reviewed independently by two of the authors to assess candidacy. Following this initial evaluation, the full texts of the selected articles were retrieved and examined independently by KM and DE (second phase) for final verification before inclusion. Any disagreements regarding eligibility were resolved by consulting a third author (SP). Reviews, systematic reviews, editorials, letters and comments were removed. Articles that met the eligibility criteria were selected for inclusion in the review. Excluded studies were marked with the reason in Rayyan.

C. Data Extraction and Quality Assessment

Data from all included studies were extracted by the review authors YW and KM using a data extraction table developed for the purpose of this review (Appendix 2). Extracted data were cross-checked by SP for errors. Information recorded included: study details (author, year, country of origin), participants (number of participants, gender, level of the undergraduate program, etc.), methods (study aim, design, total study duration, tools used), study type (phenomenon/context studied) and outcomes (all relevant findings related to primary and secondary outcomes).

SP and YW independently assessed the quality of the selected studies using the Quality Assessment Checklist for Survey Studies in Psychology (Q-SSP) (Protogerou & Hagger, 2020). Results of the quality assessments were compared (Appendix 3); any disagreements were resolved by consensus. Articles that met the required quality criteria were selected for inclusion in the review.

D. Strategy for Data Synthesis

Due to the heterogeneity between the included studies, a quantitative synthesis was not considered. A narrative synthesis of the findings from individual included studies was carried out by DE, based on the characteristics of the targeted populations and the type of outcome such as association/correlation of EI with academic performance, professional success, social functioning, interpersonal relationship, empathy, teamwork spirit, communication skills, stress management, organizational commitment, leadership quality, self-monitoring, mental health and emotional well-being.

III. RESULTS

A total of 792 articles were retrieved during the literature search. After removing duplicates, 752 articles were considered for screening using the eligibility criteria. Initial evaluation of articles by title and abstract resulted in 29 articles meeting the selection criteria. During the full-text evaluation, one article (Parijitham, 2018) was removed, as its full text could not be obtained even after contacting the author. The data that support the findings of this study are openly available at https://doi.org/10.6084/m9.figshare.15564210 (Edussuriya et al., 2021). Twenty-eight articles were finally selected for quality assessment. A flow diagram of the selection of studies is shown in Figure 1.

Figure 1. Flow diagram illustrating included and excluded studies in the systematic review

The study designs of the selected studies comprised 26 cross-sectional (the majority), one longitudinal and one quasi-experimental. However, all studies used standard validated survey questionnaires to collect data. Therefore, to assess the quality of the selected studies, the Quality Assessment Checklist for Survey Studies in Psychology (Q-SSP) was selected as the most broadly applicable tool for this review, considering its relevance also to trait emotional intelligence, since emotions, thoughts and mental processes are aspects of psychology. The quality of the studies was determined by the extent to which the items on the above checklist were met by each article. There were 20 checklist items in the tool, of which one item (item 19, debriefing participants at the end of data collection) could be justifiably waived, one reason being that none of the included studies used it in their methodology. Thus, 19 items were considered applicable in this review (Appendix 4).

Table 1. Characteristics of included studies

Table 2. Categorisation of findings of the studies

A. Findings of Studies and Data Analysis

1) EI and academic performance: Several studies identified a positive correlation between EI and academic performance (Aithal et al., 2016; Ibrahim et al., 2017; Moslehi et al., 2015; Wijekoon et al., 2017), while Ranasinghe et al. (2017) and Unnikrishnan et al. (2015) also found a significant association between the two. These studies indicated that students with higher EI tend to perform better in their academic work. A cross-sectional study by Chew et al. (2015) showed that medical students with less emotional intelligence were largely unaware of their anxiety, which was associated with lower academic performance. According to studies by Holman et al. (2016), Gupta et al. (2017) and Vasefi et al. (2018), there was no correlation of EI with academic performance. A study by Othman et al. (2020) revealed that EI showed a significant positive effect on an intuitive decision-making style and a negative effect on avoidant and dependent decision-making styles, which may explain the better academic performance of medical students with high EI.

2) EI and mental health (emotional wellbeing): A direct relationship between EI and academic satisfaction was found by Rouhani et al. (2015), Unnikrishnan et al. (2015) and Carvalho et al. (2018). Further, Carvalho et al. (2018) reported a positive relationship between EI and academic-related well-being, which accounts for both academic performance and mental health. Medical students with less emotional intelligence were largely unaware of their anxiety (Chew et al., 2015), and those with higher emotional intelligence perceived less stress (Gupta et al., 2017; Ranasinghe et al., 2017). Shi and Du (2020) found that EI was strongly and negatively associated with personal distress. Heidari Gorji et al. (2018) identified a direct relationship between emotional intelligence and mental health, while a study by Mahaur et al. (2017) did not find a significant relationship between the two. Ghahramani et al. (2019) identified a significant positive relationship of EI with happiness, while Abdali et al. (2019) showed a positive correlation with sleep quality and a negative correlation with general fatigue.

3) EI and demographic characteristics: Higher EI in females compared to males was found in several studies (Aithal et al., 2016; Bertram et al., 2016; Ibrahim et al., 2017; Khan et al., 2016; Raut & Gupta, 2019; Sundararajan & Gopichandran, 2018; Tyszkiewicz-Bandur et al., 2017; Unnikrishnan et al., 2015; Wijekoon et al., 2017). Irfan et al. (2019) suggest that female medical students had significantly higher empathic behaviour and emotional intelligence than male students. However, Skokou et al. (2019) did not find any difference in EI between males and females. Vasefi et al. (2018) and Abe et al. (2018) likewise did not find a significant relationship between EI and gender, although Abe et al. (2018) revealed that females showed significantly higher Neuroticism, Agreeableness and Empathy scores than males. According to Ibrahim et al. (2017), increasing age resulted in higher EI. However, Yee et al. (2018) did not find a significant association of EI with age, nor with ethnicity.

4) EI and empathy: A significant correlation between EI and empathy was identified in several studies (Bertram et al., 2016; Irfan et al., 2019; Khan et al., 2016; Sundararajan & Gopichandran, 2018). Shi and Du (2020) suggest that EI helps medical professionals establish a better relationship with the patient.

5) Learning environment: A relationship between EI and academic background was identified by both Irfan et al. (2019) and Sundararajan and Gopichandran (2018). According to Sundararajan and Gopichandran (2018), students who attended government schools for high school education had greater emotional intelligence than students from private schools, whereas Irfan et al. (2019) suggest that medical students at private medical schools showed a higher level of empathy than those at public medical schools. Dolev et al. (2019) reveal that there are no differences in EI levels between first-year and sixth-year medical students.

IV. DISCUSSION

The review included studies conducted in South and Southeast Asian, European, Arabian, North American and South American countries. The majority of studies on Asian students revealed a strong association between EI and academic performance, although two studies on Asian students and one on US students failed to observe such an association. The impact of EI on academic performance may be explained by the fact that awareness of one’s anxiety relieves stress, and that those with high EI experienced greater mental wellbeing and satisfaction with their programs, which may contribute to better academic performance. Furthermore, the positive correlation of EI with better mental health and wellbeing, less perceived stress and distress, happiness, good sleep quality and less fatigue may account for the better academic performance of students with high EI.

Empathy is an important aspect of the delivery of high-quality healthcare. Several researchers from different regions of the world reported a strong association between empathy and high EI scores. Therefore, assessment of EI may be useful when admitting students to medical degrees. However, since EI is considered a “trainable trait”, the role that EI should play in admission to medical schools is debatable. Therefore, all efforts must be made by medical schools to include activities that enhance EI during the medical course, irrespective of students’ EI levels on admission. The fact that EI did not improve with seniority does not necessarily mean that EI is not trainable; it may be that those students were not exposed or sensitised to activities that enhance EI.

Evidence indicated a positive association between high EI scores and female gender. It may be postulated that the “nurturing and caring” role assigned by society to females influences their upbringing, thereby improving their emotional intelligence.

In conclusion, since a majority of studies revealed that higher EI is associated with better academic performance, better mental health and greater empathy, and since EI is considered a trainable trait, curricula need to be developed with a view to improving EI.

In order to develop EI, curricula should contain programs on general leadership development, self-care/wellness and burnout prevention (Monroe & English, 2013). Small-group experiential learning activities and meetings with trained mentors throughout the years would be helpful. Debriefing sessions and maintaining a journal are other techniques worth considering. It may also be helpful to discuss change management and quality improvement with students (Audra et al., 2020). Exposure of students to skills of self-awareness and self-management through discussion, exposure to theories of conflict management, mindfulness practice, leadership training, discussions on learning styles, discussions on power and influence, identification of team dynamics, exposure to high-functioning inter-professional teams, peer coaching, healthcare leader interviews and shadowing of experienced clinicians are techniques that could be adopted in attempting to develop EI among students (Kozlowski & Ilgen, 2006). It would be beneficial to evaluate acquisition through completion of an EI inventory, feedback from peers and staff, project presentations, reflective writing, measurement of achievement of professional and personal development benchmarks and milestones, performance in simulated scenarios and small-group exercises (Pan & Allison, 2010).

During the study it was observed that there is a paucity of longitudinal studies on the associates of EI. Therefore, it would be beneficial to conduct longitudinal studies, which may help clarify aspects of the trainability of EI in medical students.

V. CONCLUSION

Through this review it was revealed that higher EI is associated with

  • better academic performance,
  • better mental health including less perception of stress and distress, happiness, good sleep quality and less fatigue,
  • female gender, and
  • greater empathy.

No significant association was found between emotional intelligence and age, ethnicity, or seniority in the medical course. No conclusions could be made about the association between the nature of the educational institute (private or state) and emotional intelligence.

A. Limitations

In this review, it was found that the included studies used several different tools to assess the EI of medical students. Each of these tools has its own advantages and disadvantages, which makes comparison difficult. It cannot be assumed that every one of these methods provides results at the same level.

B. Recommendation

Since high EI has shown a positive correlation with academic performance and better mental wellbeing of students, and since it has been identified as a “trainable trait”, all efforts should be made to enhance the EI of medical students during their undergraduate training.

Notes on Contributors

Edussuriya D.H (DE) was the Principal Investigator of the study. Protocol drafting, study selection, analysis and interpretation of data, synthesis of the findings of individual studies and drafting of the manuscript were done by the author.

Perera S. (SP) facilitated the methodology, was involved in drafting the protocol and retrieved the selected articles, drawing on previous experience in conducting systematic reviews. The author also managed references in Mendeley and Rayyan, cross-checked the extracted data, assessed the quality of selected studies and reviewed the final draft.

Marambe K.N (KM) was involved in drafting the protocol and in article selection, and extracted data from the selected articles.

Wijesiriwardena W.M.S.Y (YW) extracted data from selected articles, assessed the quality of selected articles and finalised the manuscript.

Ekanayake E.M.K.B (KE) screened the uploaded articles in Rayyan.

Ethical Approval

The review is registered in PROSPERO, the International Prospective Register of Systematic Reviews, under the registration number CRD42021227877.

Data Availability

The data set that supports the findings of this study is openly available in the Figshare repository: https://doi.org/10.6084/m9.figshare.15564210

Acknowledgement

The authors acknowledge Information Officers of National Science Library and Resources Center, National Science Foundation, Sri Lanka for support in Scopus searches and staff of Medical Library of Faculty of Medicine, University of Peradeniya for the assistance in finding full text articles of the included studies in the review.

Funding

No funding sources are associated with this study.

Declaration of Interest

No conflicts of interest are associated with this paper.

References

Abdali, N., Nobahar, M., & Ghorbani, R. (2019). Evaluation of emotional intelligence, sleep quality, and fatigue among Iranian medical, nursing, and paramedical students: A cross-sectional study. Qatar Medical Journal, 2019(3), 15. https://doi.org/10.5339/qmj.2019.15

Abe, K., Niwa, M., Fujisaki, K., & Suzuki, Y. (2018). Associations between emotional intelligence, empathy and personality in Japanese medical students. BMC Medical Education, 18, Article 47. https://doi.org/10.1186/s12909-018-1165-7

Aithal, A. P., Kumar, N., Gunasegeran, P., Sundaram, S. M., Rong, L. Z., & Prabhu, S. P. (2016). A survey-based study of emotional intelligence as it relates to gender and academic performance of medical students. Education for Health, 29(3), 255–258.

Arora, S., Ashrafian, H., Davis, R., Athanasiou, T., Darzi, A., & Sevdalis, N. (2010). Emotional intelligence in medicine: A systematic review through the context of the ACGME competencies. Medical Education, 44(8), 749–764. https://doi.org/10.1111/j.1365-2923.2010.03709.x


*Edussuriya D.H
Department of Forensic Medicine, Faculty of Medicine,
University of Peradeniya, Sri Lanka, 20400
+94711698916
Email: deepthi.edussuriya@med.pdn.ac.lk

Submitted: 27 May 2022
Accepted: 10 June 2022
Published online: 4 October, TAPS 2022, 7(4), 71-72
https://doi.org/10.29060/TAPS.2022-7-4/PV2819

Bhuvan KC1,2 & P Ravi Shankar3

1Faculty of Pharmacy and Pharmaceutical Sciences, Monash University, Parkville, Australia; 2College of Public Health, Medical and Veterinary Sciences, James Cook University, Townsville, Australia; 3IMU Centre for Education, International Medical University, Kuala Lumpur, Malaysia

I. INTRODUCTION

Healthcare systems and medicines operate in a complex landscape and constantly interact with individuals, the environment, and society. In such a complex healthcare delivery system, nonlinearity always exists, and treatments, different healthcare services, and medicines cannot be delivered without factoring in the uncertainty brought about by human, behavioural, system, and societal factors.

A medical doctor prescribes medications to treat diseases or healthcare problems following certain treatment protocols and guidelines. However, in the community, several factors affect adherence and outcomes, such as adverse effects, lifestyle factors, socioeconomic aspects, attitudes, and belief systems, so it is difficult to entirely predict the success of a regimen. These factors that can influence the outcomes of therapy have not received adequate attention. Furthermore, the complexity of healthcare delivery is starker in the treatment of ageing populations or those with chronic diseases.

Our world is becoming increasingly complex, and many uncertainties affect the delivery of healthcare services today. There are inherent challenges within the healthcare system, such as a lack of adequate funding, an ageing population, the rising burden of chronic diseases, and an overstretched health workforce. In addition, newer challenges such as the impact of climate change on health delivery, the use of digital health technologies, the emergence of new epidemics, and questions regarding sustainability make healthcare delivery complex and uncertain. Healthcare systems operate through a network of subsystems such as hospitals and health systems, clinics, primary healthcare networks, rehabilitation centres, pharmacies, hospices, care homes, families, and patients. These subsystems interact in complex ways, sometimes producing unintended consequences such as adverse reactions, medication errors, unintended hospitalisations, and hospital-acquired infections. Thus, viewing the health system as a complex entity helps us appreciate its dynamic behaviour and deliver health services in a self-organised way (Lipsitz, 2012).

There is an urgent need to teach complexity science to undergraduate and postgraduate health sciences students, as it better prepares them to deliver healthcare services and medicines to a dynamic and complex society. The healthcare systems we work for, and the communities and societies to which we deliver healthcare services and medicines, are complex. Healthcare delivery is disrupted by access to funding and resources, information and communication technology (ICT) applications, healthcare professionals who keep moving in and out of the system, and the increasing burden of chronic diseases and elderly populations needing several healthcare services and medicines. It is difficult to predict the outcome of the healthcare services and medicines delivered via both primary and secondary healthcare systems, and equally difficult to predict their impact on individual patients. A patient may develop an adverse drug reaction to a medication, may have genetic polymorphisms affecting the metabolism of a medication, or factors such as socioeconomic conditions, education level, and support systems may affect the way they receive and use healthcare services and medicines. There has been a growing recognition of such complex needs and of the biological, psychological, social, and cultural aspects of medicine in the healthcare sciences curriculum (Quintero, 2014). There is also a greater appreciation for the collaborative care and practice model that brings medical doctors, pharmacists, nurses, and other healthcare professionals together for patient care (Blount et al., 2006). The collaborative care model attempts to implement change in small and manageable cycles, appreciating the complexity involved. We must bring complexity into medicine and pharmacy teaching and learning by introducing the concepts, terminology, and lexicon of complexity and uncertainty. Students' engagement with, and appreciation of, the complexity of healthcare systems and delivery can then be assessed through reflective practice, clinical reasoning, and evidence-based practice.

Complexity science recognises that relationships may be nonlinear and emphasises the relations and interconnections between different components: flow, interdependence, and the emergence of structures and patterns. An acceptance of non-linear cause-and-effect relationships is stressed. Evidence-based medicine is based on statistics derived from large populations, so applying its results to an individual patient requires caution. Diagnosis and treatment outcomes are probabilistically determined, and with the advent of large data sets the probabilistic nature of medicine is becoming apparent. A particular set of signs and symptoms yields a set of differential diagnoses ranked in order of probability. A variety of social, emotional, and political factors can influence treatment decisions, access to care, and treatment outcomes.
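The probabilistic reasoning described above can be made concrete with Bayes' theorem. The sketch below is illustrative only: the prevalence, sensitivity, and specificity are invented numbers, not values from any study cited here.

```python
def posterior_probability(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem.

    All three arguments are probabilities in [0, 1].
    """
    true_pos = sensitivity * prevalence              # P(+ test and disease)
    false_pos = (1 - specificity) * (1 - prevalence)  # P(+ test and no disease)
    return true_pos / (true_pos + false_pos)

# Hypothetical example: a 90%-sensitive, 95%-specific test for a
# condition with 2% prevalence in the tested population.
p = posterior_probability(0.02, 0.90, 0.95)
print(round(p, 3))  # ~0.269: even a "good" test leaves substantial uncertainty
```

This is the arithmetic behind the caution urged above: a positive result from an accurate test still corresponds to a modest post-test probability when the condition is rare, which is one reason treatment outcomes cannot be predicted deterministically for an individual patient.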

Universities have begun to realise the importance of teaching complexity science to medicine and health sciences students. Jorm et al. (2016) showed how complexity theory can be used to guide interprofessional learning: to design cases, formats, and assessments, and to enable students to achieve complex interprofessional learning outcomes. Jorm and Roberts (2018) reported the use of complexity theory to design evaluations with a new focus on developing medical students as future change agents for the transformation of the health system and patients' lives. Several institutions, such as the Santa Fe Institute (New Mexico, United States), have already begun training programs on complexity in medicine and healthcare systems. Such programs and training need to be developed and evaluated globally so that medicine and pharmacy students can better tackle the complexity of health systems and the uncertainty around delivering medicines and healthcare in a complex environment. Training students for complexity today can ensure they are better prepared for both current and future challenges.

Notes on Contributors

BKC contributed to the conceptualisation of the manuscript, wrote the first draft, revised the subsequent draft, and contributed to the final draft. PRS contributed to the conceptualisation of the manuscript and critically revised the first draft. Both authors contributed to the subsequent revision and finalisation of the manuscript.

Funding

No funding has been received for this article.

Declaration of Interest

The authors state that they do not have any conflicts of interest, including financial, consultant, institutional, and other relationships that might lead to bias or a conflict of interest.

References

Blount, A., DeGirolamo, S., & Mariani, K. (2006). Training the collaborative care practitioners of the future. Families, Systems, & Health, 24(1), 111-119. https://doi.org/10.1037/1091-7527.24.1.111

Jorm, C., Nisbet, G., Roberts, C., Gordon, C., Gentilcore, S., & Chen, T. F. (2016). Using complexity theory to develop a student-directed interprofessional learning activity for 1220 healthcare students. BMC Medical Education, 16(1), Article 199. https://doi.org/10.1186/s12909-016-0717-y

Jorm, C., & Roberts, C. (2018). Using complexity theory to guide medical school evaluations. Academic Medicine, 93(3), 399-405. https://doi.org/10.1097/ACM.0000000000001828

Lipsitz, L. A. (2012). Understanding health care as a complex system: The foundation for unintended consequences. JAMA, 308(3), 243-244. https://doi.org/10.1001/jama.2012.7551

Quintero, G. A. (2014). Medical education and the healthcare system-why does the curriculum need to be reformed? BMC Medicine, 12(1), Article 213. https://doi.org/10.1186/s12916-014-0213-3

*P Ravi Shankar
IMU Centre for Education
International Medical University
126, Jln Jalil Perkasa 19, Bukit Jalil,
57000 Kuala Lumpur, Malaysia
Email: ravi.dr.shankar@gmail.co

Submitted: 22 January 2022
Accepted: 4 May 2022
Published online: 4 October, TAPS 2022, 7(4), 50-58
https://doi.org/10.29060/TAPS.2022-7-4/OA2748

Nguyen Tran Minh Duc, Khuu Hoang Viet & Vuong Thi Ngoc Lan

University of Medicine and Pharmacy at Ho Chi Minh City, Ho Chi Minh City, Vietnam

Abstract

Introduction: The Scholarly Project provides medical students with an opportunity to conduct research on a health and health care topic of interest with faculty mentors. Despite the proven benefits of the Scholarly Project, undergraduate medical education in Vietnam has changed only gradually. In the academic year 2020-2021, the University of Medicine and Pharmacy (UMP) at Ho Chi Minh City launched the Scholarly Project as part of an innovative educational program. This study investigated the impact of the Scholarly Project on participating undergraduate medical students' perception of their research skills.

Methods: A questionnaire evaluating the perception of fourteen research skills was given to participants in the first week, at midterm, and after finishing the Scholarly Project; students assessed their level on each skill using a 5-point Likert scale from 1 (lowest score) to 5 (highest score).

Results: There were statistically significant increases in scores for 11 skills after participation in the Scholarly Project. Of the remaining three skills, ‘Understanding the importance of “controls”’ and ‘Interpreting data’ skills showed a trend towards improvement while the ‘Statistically analyse data’ skill showed a downward trend.

Conclusion: The Scholarly Project had a positive impact on each student’s perception of most research skills and should be integrated into the revamped undergraduate medical education program at UMP, with detailed instruction on targeted skills for choosing the optimal study design and follow-up assessment.

Keywords: Study Skills, Scholarly Project, Undergraduate, Medical Education, Self-Assessment

Practice Highlights

  • The Scholarly Project is an essential component of the undergraduate medical education curriculum.
  • Targeting research skills is a valuable way to optimise competency-based criteria.
  • The initial choice of study design is important to the overall improvement in self-perceived research skills.

I. INTRODUCTION

The Scholarly Project has emerged as an essential component of the modern undergraduate medical curriculum. It entails mentored study in a single topic area and may include classical hypothesis-driven research, literature reviews, or the creation of a medically related product (Boninger et al., 2010). By researching a topic, designing and implementing experiments, and analysing the results, students not only gain knowledge and experience but also essential skills, including critical thinking, time management, collaboration, information technology, and confidence, all of which benefit their academic endeavours and result in higher undergraduate graduation rates (Bickford et al., 2020; Carson, 2007). Furthermore, the Scholarly Project program, which allows students to learn about research, was rated positively by most undergraduates. It also provides faculty members with assistance in their research projects and the chance to influence future generations (Dagher et al., 2016). It has also been noted that exposing undergraduate students to research benefits the researchers who take part as instructors by refining and shaping their scientific minds (Zydney et al., 2002).

The number of research studies with Vietnamese authorship published in ISI-indexed journals increased considerably between 2001 and 2015, with an annual growth rate of 17%. However, the majority of this growth (77%) was accounted for by international collaborative research rather than domestic-only projects, especially in clinical medicine. Thus, scientific research in Vietnam had not changed considerably or achieved independence in this field (Nguyen et al., 2016).

In the academic year 2020-2021, the University of Medicine and Pharmacy at Ho Chi Minh City (UMP), Vietnam, pioneered the launch of a one-year Scholarly Project for all fifth-year medical students. This cohort is the first generation to learn under the refreshed Undergraduate Medical Curriculum of the UMP and the first class to experience the Scholarly Project. Undergraduate research experiences are characterised by four features: mentorship, originality, acceptability, and dissemination (Kardash, 2000). Assessment of undergraduate research experience, which determines whether students gained any research skills (such as identifying the research question, collecting data, and thinking independently and creatively), is best performed after completion of the research program (Blockus et al., 1997; Manduca, 1997). The quasi-experimental work presented here provides one of the first investigations into how the Scholarly Project at the UMP, Vietnam, affected participating students' perception of their medical research skills in the academic year 2020-2021.

II. METHODS

A. Description of the Scholarly Project

The Scholarly Project is a compulsory academic module that aims to enable fifth-year medical students to conduct medical research early in their careers. It provides these students with an active experience in conducting a research project with faculty members starting at the beginning of the fifth academic year. The data reported here were collected from medical students and mentors who participated during the 2020-2021 academic year.

For most medical students, the Scholarly Project provides their first exposure to the field of research. There were 48 groups of nine medical students (one team leader, one secretary, and team members), each with one faculty mentor. Medical students were expected to contribute actively to the best of their ability, working as a committed team and in an ethical manner.

Members of the faculties of Medicine and Public Medicine with active ongoing research projects were eligible to participate in the Scholarly Project. Faculty members acted as mentors, facilitating the students' learning process by providing supervision, guidance, and support. In addition, mentors were expected to allocate suitable tasks to each student based on their skills, expertise, interests, and background.

B. Scholarly Project Steps

1) Student orientation: Student orientation occurred in the first week, informing students of the program's procedure and their roles and responsibilities (Figure 1). Also in the first week, the medical student curriculum included a medical research course covering the formation of research ideas, study design and statistics, literature searching and referencing, and research ethics. Students were also provided with important dates and deadlines for each Scholarly Project stage.

2) Matching: Matching is the process of pairing student teams with project mentors. From the first weeks of the Scholarly Project, each student team was required to create a team profile on the university website, listing the scientific interests, skills, and research fields of interest of each team member. Each team then chose up to two mentors, in order of preference, from a provided list, taking into account their medical research fields and research curricula vitae. After the deadline, mentors chose which team they would like to work with from among the teams that had selected them; this process continued until all teams were paired.
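The matching procedure above can be sketched as a simple preference-round process. This is an illustration only, with hypothetical team and mentor names; the real matching ran on the university website and rested on mentor judgement, not an algorithm, and the sketch covers only the two preference rounds (the described process then repeats until every team is paired).

```python
def match_teams(team_prefs, mentor_choice):
    """Pair student teams with mentors over two preference rounds.

    team_prefs: dict mapping team -> ordered list of up to two preferred mentors.
    mentor_choice: function(mentor, candidate_teams) -> the team that mentor accepts.
    Returns a dict mapping each matched mentor to a team.
    """
    matched = {}                 # mentor -> team
    unmatched = set(team_prefs)  # teams still waiting for a mentor
    for rank in (0, 1):          # first-preference round, then second-preference round
        # Group still-unmatched teams by the (still-free) mentor they rank here
        candidates = {}
        for team in sorted(unmatched):
            prefs = team_prefs[team]
            if rank < len(prefs) and prefs[rank] not in matched:
                candidates.setdefault(prefs[rank], []).append(team)
        # Each mentor then picks one team from those that chose them
        for mentor, teams in candidates.items():
            chosen = mentor_choice(mentor, teams)
            matched[mentor] = chosen
            unmatched.discard(chosen)
    return matched

# Hypothetical example: three teams, two mentors; each mentor accepts
# the first team in their candidate list.
prefs = {"T1": ["Dr.A", "Dr.B"], "T2": ["Dr.A"], "T3": ["Dr.B", "Dr.A"]}
print(match_teams(prefs, lambda mentor, teams: teams[0]))
```

In this toy run, T1 and T3 are matched in the first round while T2 remains unpaired, which is why the actual process repeats until all 48 teams have a mentor.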

3) Work initiation: Students were expected to initiate contact with the faculty member after being notified via the university website that they had been matched to a project. During the second week of the Scholarly Project, faculty members and students discussed the research project and their roles and responsibilities. Upon finalising the agreement between the two parties, students completed a meeting report form, which was signed by both the mentor(s) and the team leader. During online learning periods due to COVID-19, online meetings were encouraged, along with completion of the meeting report form. This form recorded the topics discussed during the meeting, future work, each student's role in the research project, and the next appointment date. Student teams and faculty members scheduled meetings based on the design of their study. In follow-up meetings, faculty mentors continued to discuss and evaluate the medical students' work, and further plans were made. There was no upper limit on the number of meetings; however, a second meeting was required in the third week of the Scholarly Project, near the end of the module, for the research team to report on the collected data, troubleshoot problems, and receive feedback.

4) Presentation: In the final week of the fifth-year curriculum, a Scholarly Project Symposium provided the opportunity for research teams to present their project findings. This allowed the scientific committee to evaluate both the performance of each student and the research project in general. Another aim of the symposium was for medical students to learn and share their findings with other teams, and the presentation also provides a valuable reference for the subsequent classes.

Figure 1. Integration of the Scholarly Project into the new reformed undergraduate and postgraduate medical curriculum in Vietnam.

C. Study Setting and Participants

This one-group pretest-posttest study had a quasi-experimental design. The research skills assessed were based on fourteen individual research skills (Kardash, 2000). The questionnaire has been used previously, with a Cronbach's alpha of 0.9 and item-total correlations ranging from 0.49 to 0.76 (Kardash, 2000). The questionnaire was translated into Vietnamese; the local-language version was then pre-tested and the final text amended as necessary. The translation was undertaken in accordance with the Guidelines for the Cross-Cultural Adaptation Process (Beaton et al., 2000), and the translated version was evaluated and compared with the original questionnaire by the Education and Research Council of the UMP to ensure its accuracy prior to study initiation. Surveys were administered during the first week of the Scholarly Project, when students were asked to rate their current level of performance on each skill, and the extent to which they hoped the project would develop each skill, on a 5-point Likert scale (where higher scores indicate a greater skill level). Surveys were repeated at midterm and during the last week of the Scholarly Project module; at these times students used the same scale to rate the extent to which they felt capable of performing each skill and how they believed the project had developed their skills in general. Medical students had to provide informed consent on the first page of the electronic form before accessing the rest of the questionnaire.
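The internal-consistency figure quoted above (Cronbach's alpha of 0.9) follows the standard formula alpha = k/(k-1) × (1 − Σ item variances / variance of totals). A minimal sketch with invented 5-point Likert responses (the formula is standard; the data are made up for illustration):

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(scores):
    """Cronbach's alpha for a questionnaire.

    scores: list of respondent rows, one Likert score per questionnaire item.
    """
    k = len(scores[0])                 # number of items
    items = list(zip(*scores))         # column-wise view: one tuple per item
    item_var_sum = sum(variance(item) for item in items)
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Invented 5-point Likert responses: four respondents, three items
ratings = [[4, 5, 4], [3, 3, 2], [5, 5, 5], [2, 3, 3]]
print(round(cronbach_alpha(ratings), 2))  # prints 0.95
```

Values near 0.9, as reported for the Kardash (2000) questionnaire, indicate that the fourteen items behave as a coherent scale.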

D. Statistical Analysis

Raw data were extracted from the online survey link for each participating medical student and saved in Excel sheets. R (R Core Team, Vienna, Austria) was used to analyse the data. First, scores for each skill at baseline were compared with those obtained after project completion using a paired Student's t-test. The same method was used to compare the expected skill level evaluated at baseline with the actual skill level rating at the end of the Scholarly Project. A p-value of <0.05 was considered statistically significant.

III. RESULTS

A. Response Rate and Participant Data

Of the 384 students participating in the Scholarly Project, 194 (50.5%) completed the survey. The majority of participants were male (60%) and had the role of project team member (75.3%) (Table 1). The most common Scholarly Project design was a cross-sectional study (47.9%), followed by study protocol development (21.1%), case/case series report (11.9%), and literature review (10.3%) (Table 1). Twenty-one departments with a wide range of specialties provided scientific mentors for the 48 research groups (Table 1).

Table 1. Demographic and project characteristics for survey respondents.

Values are mean ± standard deviation, or number of respondents (%).

B. Research Skills at Baseline, Midterm and Project Completion

At baseline, self-rated competency was highest for ‘Understand the importance of “controls”’, ‘Understand contemporary concepts’, ‘Identify a specific question’, and ‘Observe and collect data’ (Figure 2). Self-evaluated levels were above “moderate” (score >3) for all skills except ‘Write research for publication’ (mean score 2.696). Students expected that all skills would increase after participating in the Scholarly Project (p<0.001).

In the midterm survey, five skill groups showed significant improvement from baseline (Figure 2): ‘Make use of scientific literature’, ‘Identify a specific question’, ‘Observe and collect data’, ‘Relate results to the “bigger picture”’, and ‘Orally communicate research project’ skills. Conversely, there was a significant decrease in self-rated skill for ‘Statistically analyse data’ and ‘Interpret data’, while other skill ratings were stable (Figure 2).

Figure 2. Change in self-rated medical research skills of 194 participants from baseline to the midterm of the Scholarly Project

M: mean; SD: standard deviation; CI: confidence interval

At the completion of the Scholarly Project, the five skills that showed improvement at the midterm assessment showed continued improvement, and another six skills had also improved significantly compared with baseline (Figure 3). However, scores for ‘Understand the importance of “controls”’, ‘Interpret data’ and ‘Statistically analyse data’ did not change significantly from baseline, and the mean score for the last of these was actually slightly below baseline (Figure 3).

Figure 3. Change in self-rated medical research skills of 194 participants from baseline to completion of the Scholarly Project (SP)

M: mean; SD: standard deviation; CI: confidence interval.

A closer look at analytical skills across the six study design types showed that self-rated skill for ‘Interpret data’ in literature review projects decreased significantly, as did scores for ‘Statistically analyse data’ in study protocol development and literature review projects (Table 2). In contrast, there was a significant improvement in ‘Interpret data’ for cross-sectional studies and in ‘Statistically analyse data’ for cohort studies (Table 2).

Table 2. Self-evaluated skill level scores for ‘Interpret data’ and ‘Statistically analyse data’ from baseline to completion of the Scholarly Project

Values are mean ± standard deviation. *p<0.05 vs baseline.

IV. DISCUSSION

A. Impact of Scholarly Project on Students’ Perception of Research Skills

Our results show that ratings for most skills increased during and after the Scholarly Project. Increases in ratings for ‘Identify a specific question’, ‘Orally communicate research projects’, and ‘Relate results to the “bigger picture”’ in our study were consistent with data from Schor et al. (2005), who reported that the Scholarly Project could be beneficial by fostering analytical thinking skills, improving oral communication skills, and enhancing skills for evaluating and applying new knowledge to the profession. A significant increase in ‘Make use of scientific literature’ in our study reflects the idea-forming process at the study design stage of the Scholarly Project, during which students could practice the ability to read and critically evaluate medical literature. These are essential components of undergraduate medical education, irrespective of whether students intend to pursue a career in academic medicine or in public or private clinical practice (Holloway et al., 2004).

B. Data-related Skills and the Concept of a Control Group

The two skills ‘Statistically analyse data’ and ‘Interpret data’ are introduced mainly in the Advanced Statistics Module, a 2-week training period immediately before the Scholarly Project, and are briefly presented in the ‘Basic statistics informatics’ module during the first year and the ‘Basic epidemiology’ module during the third year of the undergraduate curriculum. Baseline assessments in our study therefore took place after the Advanced Statistics Module, which could have influenced ratings of these skills. Given that the midterm assessment was performed at a time when most students had not yet had the opportunity to practice these skills, self-evaluations may have been negatively affected. The change in scores for ‘Statistically analyse data’ and ‘Interpret data’ at the midterm assessment was therefore influenced by both an external factor (the Advanced Statistics Module) and an internal factor (the Scholarly Project). Future assessments of the impact of the Scholarly Project on learning should thus replace the quasi-experimental design used here with an interrupted time-series design, in which several surveys are conducted before the Advanced Statistics Module begins, with the aim of eliminating confounding factors.

The final assessment showed significant improvements in scores for ‘Statistically analyse data’ and ‘Interpret data’ compared with the midterm survey. The improvement in these two skills once they were applied in students’ projects indirectly supports the interpretation above and highlights the value of active learning compared with passive learning. It has been shown that cramming statistical knowledge leaves students without the understanding of basic concepts needed to apply them appropriately (Leppink, 2017). As Leppink notes, statistics should be integrated into medical subjects; familiarity with these subjects and the repeated use of these skills provide opportunities to develop statistical competence, and the Scholarly Project is a typical example of this approach. However, relative to baseline, ‘Statistically analyse data’ still showed a downward pattern while ‘Interpret data’ increased only slightly, suggesting that the Scholarly Project should place more emphasis on these skills. Additional studies that take these variables into account are needed.

The control group concept is taught in the Basic Epidemiology module during the third year of Basic Science and in the first sessions of the Scholarly Project. The control group has a pivotal role in study design and should match the experimental group’s characteristics in every element except the intervention/variable applied to the latter (Kinser & Robins, 2013). A scientific control group enables the experimental study of one variable at a time and is an essential part of the scientific method. In a controlled experiment, two identical experiments are carried out: in one, the treatment or tested factor is applied (experimental group), whereas in the other (control), it is not (Pithon, 2013). However, because only four respondents had a project with a case-control study design, the ‘Understand the importance of “controls”’ skill showed only a modest improvement despite having been taught previously, a finding similar to a previous undergraduate research study (Kardash, 2000). Compared with the cross-sectional design, which was the most popular design in this Scholarly Project, case-control studies often require more human and facility resources. We suggest that a case-control study with a small sample size of 10–20 could be a suitable design for medical students to learn how best to conduct research with a control group.

Of the 194 respondents in our study, 56.7% should have been able to fully experience all fourteen of the skills assessed. In contrast, those who participated in study protocol development, literature review, and case/case series report projects had limited opportunities to practice analytical skills. Similar to our findings, a previous study demonstrated that only 13% of 475 projects conducted by medical students covered all four main research skill areas: research methods, information gathering, critical analysis and review, and data processing (Murdoch-Eaton et al., 2010). Furthermore, the COVID-19 outbreak during the academic year 2020-2021 significantly disrupted the originally planned data collection for the Scholarly Project. As a result, some research teams switched to more feasible study designs, such as study protocol development or literature review, which potentially affected the skills of statistical analysis and data interpretation. These conditions would arguably be less likely to occur if participants recognised the skills required for research before designing the study protocol. Thus, there is room for further progress in determining the optimal project descriptions provided to medical students participating in the Scholarly Project, so that they can benefit fully from the research opportunities and develop essential skills.

C. The Role of Scholarly Project in Medical Education in Vietnam

This Scholarly Project is an essential step in curriculum reform for Vietnam’s medical education system. In the last two decades, medical educators in Vietnam have collaborated to respond to social trends in undergraduate medical education and to identify the goals and outcomes of learning expected of medical graduates in terms of knowledge, attitudes, and skills (Hoat et al., 2009). Furthermore, Vietnamese policymakers created an environment that enabled academic innovation by implementing the necessary changes to national university autonomy policies (Duong et al., 2021). These policies enable public universities to be financially independent, manage their operations and human resources, prioritise technology, and develop new curricula. The Scholarly Project helps to train physicians who are better prepared to meet patient requirements and health needs (Fan et al., 2012). Grounded in competency-based medical education, the Scholarly Project focuses on outcomes, emphasises the application of knowledge in practice, and promotes greater learner-centeredness (Carraccio et al., 2002; Frank et al., 2010; Iobst et al., 2010). In addition, the Scholarly Project helps to reduce the time spent in passive lectures, which can negatively affect medical students (Deslauriers et al., 2019; Schwartzstein et al., 2020; Schwartzstein & Roberts, 2017). Instead, students are encouraged to explore research topics based on their interests, available human and institutional resources, and their university mentors’ guidance and follow-up. Compared with the large class sizes of Vietnam’s traditional teaching methods, the Scholarly Project (with an average of eight students per mentor) provides a low student-to-faculty ratio, creating the desired small-group learning environment. Running for the first time in the 2020-2021 school year, the Scholarly Project had to adapt to the impact of the COVID-19 pandemic, with two periods of online learning required in September 2020 and May 2021 due to local COVID-19 outbreaks. To help manage this, the university obtained technical assistance from Microsoft in the form of a full-access Office 365 subscription, maintaining the scheduled small-group meetings between students and their mentors while optimising social distancing (Duong et al., 2021).

We recommend introducing the 14-skill questionnaire as a tool for medical students to self-monitor their improvement while participating in the Scholarly Project. From the mentors’ perspective, the questionnaire provides a reliable and convenient reference for giving students feedback and suggesting areas that need further improvement. These approaches could also be utilised by other institutions, locally or internationally, that include a Scholarly Project, for several reasons: (1) the Scholarly Project is a lengthy module that can be disrupted by unexpected events (e.g., COVID-19); (2) routine self-checks and mentor feedback are needed to facilitate the required improvement in research skills; and (3) the questionnaire is a validated, convenient, and accessible method for both medical students and mentors.

D. Study Limitations

Although the survey was sent to all medical students participating in the Scholarly Project, only just over half responded. The impact of the Scholarly Project on non-responding students may therefore differ from the trends reported here, limiting the generalisability of our findings. Nonresponse bias is another potential limitation, although it is not necessarily associated with a lower response rate (Davern, 2013; Halbesleben & Whitman, 2012). Participants might perceive that self-evaluating how much their research skills had improved could indirectly reflect their level of participation in the Scholarly Project, the contribution of their mentor, and their academic performance, leading to social desirability bias in their responses. We attempted to reduce nonresponse and social desirability bias, and any perception that responses could affect academic assessments, by making survey responses anonymous and keeping the study survey completely separate from any academic assessments (e.g., grade-point average). Another limitation is the lack of a control group of medical students, which was difficult to arrange because participation in the Scholarly Project is mandatory for all students. A control group would have strengthened the study methodologically and allowed investigation of the impact of specific aspects of the Scholarly Project.

Response shift bias was inevitable in this research. To reduce it, instead of completing the self-evaluation for all fourteen skills at the start and again after completion of the whole project, students could assess their skill level immediately after completing each Module. However, response shift bias occurred because respondents perceived the purpose of the survey as assessing the program’s effectiveness. In the context of our research, even if assessments had been completed after each Module, students would still have recognised the aim of the survey, meaning that response shift bias would not have decreased considerably.

V. CONCLUSION

The Scholarly Project is an excellent learning opportunity for medical students in the refreshed undergraduate medical curriculum. Participating in a Scholarly Project gives students research experience, including the knowledge, structure, and support needed to engage in scholarly work. With these foundations, medical students can enter the health care workforce with solid clinical expertise and the basic skills required to conduct high-quality projects that improve the safety and quality of care delivered to patients. We suggest integrating the Scholarly Project throughout the undergraduate medical education curriculum in Vietnam. This is important for providing early experience of medical research and fostering a good understanding of medical scientific research in all future doctors, regardless of their ultimate career destination.

Notes on Contributors

N.T.M.D. and K.H.V. drafted and revised the manuscript. V.T.N.L. helped in reviewing the manuscript. All authors (N.T.M.D., K.H.V., V.T.N.L.) have made substantial contributions to the conception and design of the work and the acquisition, analysis, and interpretation of data. All authors read and approved the final manuscript.

Ethical Approval

The authors declare that this study did not require human ethics approval and did not include experiments on animal or human subjects. This study was submitted to the Institutional Review Board (IRB) at University of Medicine and Pharmacy at Ho Chi Minh City, Ho Chi Minh City, Vietnam. This project was determined to be exempt from IRB review. All methods were carried out in accordance with relevant guidelines and regulations. Respondents were informed that their participation in the survey was completely voluntary and there were no risks associated with their participation.

Data Availability

The datasets generated and/or analysed during the current study are not publicly available for reasons of data protection but are available from the corresponding author on reasonable request.

Acknowledgement

The authors would very much like to acknowledge Ms. Le Minh Chau, Mr. Ung Nguyen Vu Hoang, Ms. Duong Kim Ngan, Mr. Nguyen Hai Dang, Ms. Tran Thi Hong Ngoc, Mr. Giang Luu Thanh Hoang, and Mr. Nguyen Hoang Nhan (University of Medicine and Pharmacy at Ho Chi Minh City, Vietnam) for their support of this study.

Funding

No funding has been received for the study.

Declaration of Interest

The authors declare that they have no competing interests.

References

Beaton, D. E., Bombardier, C., Guillemin, F., & Ferraz, M. B. (2000). Guidelines for the process of cross-cultural adaptation of self-report measures. Spine, 25(24), 3186-3191. https://doi.org/10.1097/00007632-200012150-00014

Bickford, N., Peterson, E., Jensen, P., & Thomas, D. (2020). Undergraduates interested in STEM research are better students than their peers. Education Sciences, 10(6), 150. https://doi.org/10.3390/educsci10060150

Blockus, L., Kardash, C. M., Blair, M., & Wallace, M. (1997). Undergraduate internship program evaluation: A comprehensive approach at a research university. Council on Undergraduate Research, 18, 60–63.

Boninger, M., Troen, P., Green, E., Borkan, J., Lance-Jones, C., Humphrey, A., Gruppuso, P., Kant, P., McGee, J., Willochell, M., Schor, N., Kanter, S. L., & Levine, A. S. (2010). Implementation of a longitudinal mentored scholarly project: An approach at two medical schools. Academic Medicine, 85(3), 429–437. https://doi.org/10.1097/acm.0b013e3181ccc96f

Carraccio, C., Wolfsthal, S. D., Englander, R., Ferentz, K., & Martin, C. (2002). Shifting Paradigms. Academic Medicine, 77(5), 361–367. https://doi.org/10.1097/00001888-200205000-00003

Carson, S. (2007). A new paradigm for mentored undergraduate research in molecular microbiology. CBE—Life Sciences Education, 6(4), 343–349. https://doi.org/10.1187/cbe.07-05-0027

Dagher, M. M., Atieh, J. A., Soubra, M. K., Khoury, S. J., Tamim, H., & Kaafarani, B. R. (2016). Medical Research Volunteer Program (MRVP): Innovative program promoting undergraduate research in the medical field. BMC Medical Education, 16(1), Article 160. https://doi.org/10.1186/s12909-016-0670-9

Davern, M. (2013). Nonresponse rates are a problematic indicator of nonresponse bias in survey research. Health Services Research, 48(3), 905–912. https://doi.org/10.1111/1475-6773.12070

Deslauriers, L., McCarty, L. S., Miller, K., Callaghan, K., & Kestin, G. (2019). Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom. Proceedings of the National Academy of Sciences, 116(39), 19251–19257. https://doi.org/10.1073/pnas.1821936116

Duong, D. B., Phan, T., Trung, N. Q., Le, B. N., Do, H. M., Nguyen, H. M., Tang, S. H., Pham, V. A., Le, B. K., Le, L. C., Siddiqui, Z., Cosimi, L. A., & Pollack, T. (2021). Innovations in medical education in Vietnam. BMJ Innovations, 7(Suppl 1), s23–s29. https://doi.org/10.1136/bmjinnov-2021-000708

Fan, A. P., Tran, D. T., Kosik, R. O., Mandell, G. A., Hsu, H. S., & Chen, Y. S. (2012). Medical education in Vietnam. Medical Teacher, 34(2), 103–107. https://doi.org/10.3109/0142159x.2011.613499

Frank, J. R., Snell, L. S., Cate, O. T., Holmboe, E. S., Carraccio, C., Swing, S. R., Harris, P., Glasgow, N. J., Campbell, C., Dath, D., Harden, R. M., Iobst, W., Long, D. M., Mungroo, R., Richardson, D. L., Sherbino, J., Silver, I., Taber, S., Talbot, M., & Harris, K. A. (2010). Competency-based medical education: Theory to practice. Medical Teacher, 32(8), 638–645. https://doi.org/10.3109/0142159x.2010.501190

Halbesleben, J. R. B., & Whitman, M. V. (2012). Evaluating survey quality in health services research: A decision framework for assessing nonresponse bias. Health Services Research, 48(3), 913–930. https://doi.org/10.1111/1475-6773.12002

Hoat, L. N., Lan Viet, N., van der Wilt, G., Broerse, J., Ruitenberg, E., & Wright, E. (2009). Motivation of university and non-university stakeholders to change medical education in Vietnam. BMC Medical Education, 9(1), Article 49. https://doi.org/10.1186/1472-6920-9-49

Holloway, R., Nesbit, K., Bordley, D., & Noyes, K. (2004). Teaching and evaluating first and second year medical students’ practice of evidence-based medicine. Medical Education, 38(8), 868–878. https://doi.org/10.1111/j.1365-2929.2004.01817.x

Iobst, W. F., Sherbino, J., Cate, O. T., Richardson, D. L., Dath, D., Swing, S. R., Harris, P., Mungroo, R., Holmboe, E. S., & Frank, J. R. (2010). Competency-based medical education in postgraduate medical education. Medical Teacher, 32(8), 651–656. https://doi.org/10.3109/0142159x.2010.500709

Kardash, C. M. (2000). Evaluation of undergraduate research experience: Perceptions of undergraduate interns and their faculty mentors. Journal of Educational Psychology, 92(1), 191–201. https://doi.org/10.1037/0022-0663.92.1.191

Kinser, P. A., & Robins, J. L. (2013). Control group design: Enhancing rigor in research of mind-body therapies for depression. Evidence-Based Complementary and Alternative Medicine, 2013. https://doi.org/10.1155/2013/140467

Leppink, J. (2017). Helping medical students in their study of statistics: A flexible approach. Journal of Taibah University Medical Sciences, 12(1), 1–7. https://doi.org/10.1016/j.jtumed.2016.08.007

Manduca, C. (1997). Broadly defined goals for undergraduate research projects: A basis for program evaluation. Council on Undergraduate Research, 18(2), 64–69.

Murdoch-Eaton, D., Drewery, S., Elton, S., Emmerson, C., Marshall, M., Smith, J. A., Stark, P., & Whittle, S. (2010). What do medical students understand by research and research skills? Identifying research opportunities within undergraduate projects. Medical Teacher, 32(3), e152–e160. https://doi.org/10.3109/01421591003657493

Nguyen, T. V., Ho-Le, T. P., & Le, U. V. (2016). International collaboration in scientific research in Vietnam: An analysis of patterns and impact. Scientometrics, 110(2), 1035–1051. https://doi.org/10.1007/s11192-016-2201-1

Pithon, M. M. (2013). Importance of the control group in scientific research. Dental Press Journal of Orthodontics, 18(6), 13–14. https://doi.org/10.1590/s2176-94512013000600003

Schor, N. F., Troen, P., Kanter, S. L., & Levine, A. S. (2005). The scholarly project initiative: Introducing scholarship in medicine through a longitudinal, mentored curricular program. Academic Medicine, 80(9), 824–831. https://doi.org/10.1097/00001888-200509000-00009

Schwartzstein, R. M., Dienstag, J. L., King, R. W., Chang, B. S., Flanagan, J. G., Besche, H. C., Hoenig, M. P., Miloslavsky, E. M., Atkins, K. M., Puig, A., Cockrill, B. A., Wittels, K. A., Dalrymple, J. L., Gooding, H., Hirsh, D. A., Alexander, E. K., Fazio, S. B., & Hundert, E. M. (2020). The Harvard Medical School pathways curriculum: Reimagining developmentally appropriate medical education for contemporary learners. Academic Medicine, 95(11), 1687–1695. https://doi.org/10.1097/acm.0000000000003270

Schwartzstein, R. M., & Roberts, D. H. (2017). Saying goodbye to lectures in medical school — Paradigm shift or passing fad? New England Journal of Medicine, 377(7), 605–607. https://doi.org/10.1056/nejmp1706474

Zydney, A. L., Bennett, J. S., Shahid, A., & Bauer, K. W. (2002). Impact of undergraduate research experience in engineering. Journal of Engineering Education, 91(2), 151–157. https://doi.org/10.1002/j.2168-9830.2002.tb00687.x

*Nguyen Tran Minh Duc
217 Hong Bang Street, Ward 11,
District 5, Ho Chi Minh City, Vietnam
+84 988 127 948
Email: ntmduc160046@ump.edu.vn

Submitted: 13 January 2022
Accepted: 9 May 2022
Published online: 4 October, TAPS 2022, 7(4), 35-49
https://doi.org/10.29060/TAPS.2022-7-4/OA2699

Yuan Kit Christopher Chua1*, Kay Wei Ping Ng1*, Eng Soo Yap2,3, Pei Shi Priscillia Lye4, Joy Vijayan1, & Yee Cheun Chan1

1Department of Medicine, Division of Neurology, National University Hospital Singapore, Singapore; 2Department of Haematology-oncology, National University Cancer Institute Singapore, Singapore; 3Department of Laboratory Medicine, National University Hospital Singapore, Singapore; 4Department of Medicine, Division of Infectious Diseases, National University Hospital Singapore, Singapore

*Co-first authors

Abstract

Introduction: In-class engagement enhances learning and can be measured using observational tools. As the COVID-19 pandemic shifted teaching online, we modified a tool to measure the engagement of instructors and students, comparing in-person with online teaching and different class types.

Methods: Video recordings of in-person and online teachings of six identical topics each were evaluated using our ‘In-class Engagement Measure’ (IEM). There were three topics each of case-based learning (CBL) and lectures to a large class (LLC). Student IEM scores were: (1) no response, (2) answers when directly questioned, (3) answers spontaneously, (4) questions spontaneously, (5) initiates group discussions. Instructor IEM scores were: (1) addressing passive listeners, (2) asking ≥1 students, (3) initiates discussions, (4) monitors small group discussion, (5) monitoring whole class discussions.

Results: Twelve video-recorded sessions were analysed. For instructors, there were no significant differences in percentage time of no engagement or in IEM scores between in-person and online teaching. For students, the percentage time of no engagement was significantly higher in the online teaching of two topics. For class type, CBL showed a lower overall percentage time of no engagement and higher IEM scores than LLC.

Conclusion: Our modified IEM tool demonstrated that instructors’ engagement remained similar, but students’ engagement reduced with online teaching. Additionally, more in-class engagement was observed in CBL. “Presenteeism”, where learners were online but disengaged, was common. More effort is needed to engage students during online teaching.

Keywords: Engagement, Observational Tool, Online Learning, E-learning, COVID-19, Medical Education, Research

Practice Highlights

  • Lectures to large class (LLC) and case-based learning (CBL) are associated with lower levels of student engagement when conducted on a virtual platform.
  • Instructors’ engagement during online teachings remained similar to that of in-person teachings.
  • LLC is associated with lower student engagement than CBL.

I. INTRODUCTION

Educational theories suggest that learning should be an active process. According to social constructivist theory, learning is better achieved through social interactions in the learning environment (Kaufman, 2003). Active learning strategies that foster interaction among students and with the instructor, such as discussions, talks, and questions, may yield desirable learning outcomes in terms of knowledge, skills, or attitudes (Rao & DiCarlo, 2001). In-class learner engagement is therefore a keystone of active learning strategies, known to stimulate and enhance learners’ assimilation of content and concepts (Armstrong & Fukami, 2009; Watson et al., 1991).

There is good evidence for the importance of engagement in online learning, and the use of an engagement metric has been advocated to better understand student online interactions and improve the online learning environment (Berman & Artino, 2018). While the medical literature suggests that virtual education games foster engagement (McCoy et al., 2016), the level of engagement and learning fostered by online methods of group discussion and teaching is unknown. Teleconferencing is among the methods suggested for maintaining education during the COVID-19 pandemic (Chick et al., 2020).

Possible methods of quantifying student engagement include direct observation and student self-report. O’Malley et al. (2003) published a validated observation instrument, STROBE, to assess in-class learner engagement in the health professions without interfering with learner activities. The instrument documents observed, dichotomised instructor and student behaviours in 5-minute cycles and quantifies the number of questions asked by the instructor and students in different class subtypes. This instrument, and revised forms of it, has since been used as an “in-class engagement measure” to compare instructor and student behaviours in different class types (Alimoglu et al., 2014; Kelly et al., 2005).
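As a rough sketch of how such cycle-based observations can be tallied (the record layout, field names, and numbers below are hypothetical illustrations, not the authors’ instrument), each 5-minute cycle yields a score per observed student plus question counts, from which a summary such as the percentage of observations with no engagement can be derived:

```python
from dataclasses import dataclass
from statistics import mean
from typing import List

@dataclass
class Cycle:
    """One hypothetical 5-minute observation cycle."""
    student_scores: List[int]       # engagement score per observed student (1 = no response)
    instructor_questions: int = 0   # questions tallied from the instructor
    student_questions: int = 0      # questions tallied from students

def summarise(cycles):
    """Percentage of student observations scoring 1 ('no response') and the mean score."""
    scores = [s for c in cycles for s in c.student_scores]
    pct_not_engaged = 100.0 * sum(s == 1 for s in scores) / len(scores)
    return pct_not_engaged, mean(scores)

# Two illustrative cycles with four observed students each
session = [
    Cycle([1, 2, 1, 3], instructor_questions=2),
    Cycle([2, 3, 1, 2], instructor_questions=1, student_questions=1),
]
pct, avg = summarise(session)
```

A real implementation would also carry the macrolevel fields STROBE records (class structure, major activity, proportion of class on task), which are omitted here for brevity.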

In our institution, a hybrid curriculum of case-based learning and lecture-style courses is used to teach postgraduate year one (PGY-1) interns. We had video recordings of these courses delivered in person before the COVID-19 pandemic. With the advent of the pandemic, the courses were shifted onto the Zoom teleconferencing platform, but were delivered by the same instructors in the same class format.

We therefore aimed to determine and compare in-class engagement levels by observing instructor and student behaviours on different learning platforms (observed online, or in person retrospectively via video recording) delivered by the same instructors before and during the COVID-19 pandemic. We also aimed to compare instructor and student behaviours in different class types (case-based learning or lecture-style instruction). To do this, we planned to modify a known in-person observational tool for student engagement, STROBE (O’Malley et al., 2003), for use in analysing and recording the behaviours of students in both online and in-person teaching.

II. METHODS

A. Observed Class Types

In this study, we observed two class types: case-based learning (CBL), and lecture-based instruction to a large classroom (LLC), used to teach basic medical/surgical topics to PGY-1 interns. Video recordings of the in-person teachings were made in 2017. Both class types were replicated in the same format on the Zoom teleconferencing platform and delivered by nearly all of the same tutors, using the same content and Powerpoint slides, during the COVID-19 pandemic in 2020. We aimed to compare the 2017 video recordings of the in-person teachings with the 2020 online teaching of PGY-1 interns. Written consent was obtained from the tutors and implied consent from the students: students were informed beforehand via email that the sessions would be observed, and were reminded again at the start of each session, where they had the chance to opt out. All student feedback and observation scores were subsequently amalgamated and de-identified. This study was approved by the institution’s ethics board.

Three topics each of case-based learning and lecture-style instruction were selected in the chronological order scheduled for students. Each topic was allotted a maximum of 90 minutes, but the instructor could choose to end the class earlier if the session was completed. Descriptions of both class types are given below.

1) Description of case-based learning in large classroom

The content was designed by the instructor and consisted of clinical cases involving patient scenarios, with problem-solving and answering case-based questions relating to the patient scenario (e.g., diagnosis, reading clinical images or electrocardiograms, creating an investigation or treatment plan) as the main pedagogy. Each case typically took about 15 to 20 minutes to complete, and there were typically five to six cases per session. Students were expected to answer the questions, and the instructor gave feedback on the answers and provided additional information, sometimes via additional Powerpoint slides. Class discussion was encouraged, with students invited to debate and discuss their classmates’ answers. The titles of the case-based learning sessions were “ECG – tachydysrhythmias”, “Approach to a confused patient” and “Approach to chest pain”.

2) Description of lecture in large classroom

This is a typical lecture-style instruction delivered to around 86 PGY-1 interns by one instructor. The instructor delivers information via a Powerpoint slide presentation and only rarely adds clinical case-based questions to the slides to invite student discussion. The titles of the lectures were “Cardiovascular health – hypertensive urgencies”, “Trauma – chest, abdomen and pelvis” and “Stroke”.

B. Instructor and Student Characteristics

The instructors all had at least ten years of teaching experience in medical education, and all had been teaching the same topics to PGY-1 interns for at least the previous five years. Student feedback scores on their teaching activities had been satisfactorily high (mean 4.63 in 2019, the year before the shift to online learning for the pandemic). All the tutors (except the one who taught “Stroke”) had taught the same topics using the same content and Powerpoint slides in the video-recorded in-person teaching in 2017.

The students were all PGY-1 interns, who were required by the institution to attend at least 70% of a mandatory one-year teaching program comprising weekly instruction on various medical and surgical topics. The teaching program commences in May of each year. There were 86 PGY-1 interns commencing their rotations in our institution and attending the teaching program from May 2020, and 75 PGY-1 interns attending the teaching program captured in the 2017 video recordings.

C. Observation Tool

A revised form of STROBE (O’Malley et al., 2003) was used to analyze and record the behaviors of the instructor and students in class, providing a more objective third-person measure of student engagement. The original STROBE tool was developed to objectively measure student engagement across a variety of medical education classroom settings. It consists of 5-minute observational cycles repeated continuously throughout the learning session, with relevant observations recorded on a data collection form. Within each cycle, observers record selected aspects of behavior from a list of specified categories. Observations include macrolevel elements, such as the structure of the class, the major activity during that time, and a global judgment of the proportion of class members who appear on task, as well as microlevel elements, such as the instructor’s behavior and the behaviors of four randomly selected students. Observers also record whom the behaviors of instructors and students were directed at, after which they tally the number of questions asked by the students and the instructor in the remainder of the 5 minutes. The tool was revised by three clinician-educators from the research team (CYC, YES, KN), who discussed what kinds of instructor and student behaviors counted as “active student engagement” while keeping the main statements and principles of the original STROBE tool. The scale was modified to make it suitable for an online learning setting, where observers may not be able to see a student’s body language cues if the student does not turn on his/her video function.

A 5-item list of instructor and student behaviors was therefore created, each item rated from 1 to 5, with different scales for the instructor and the students. For the student behavior scale, each item represented a progressively increasing level of interaction and perceived engagement, both with the instructor and with peers. For the instructor behavior scale, each item described progressively more interactive instructor behaviors aimed at getting students to engage. We called these scales our “In-class Engagement Measure (IEM)”. The scales were as follows:

Student: 

  1. No response even when asked
  2. Answers only when directly questioned
  3. Answers questions spontaneously
  4. Speaks to instructor spontaneously (e.g., poses questions, discusses concepts)
  5. Speaks to instructor and one or more other students during a discussion

Instructor: 

  1. Talking to the entire class while all the students are passive receivers 
  2. Telling/asking one student or a group of students, or teaching/showing an application to a student
  3. Starting or conducting a discussion open to the whole class, or assigning some students to learning tasks
  4. Listening to/monitoring one or a group of students who are actively discussing
  5. Listening to/monitoring the entire class while it is actively discussing

For the student behaviour list, we also sub-categorized item “1”: a score of “1*” was defined as no response when a question was posed by name to a specific student, rather than to the whole class. 

D. Observation Process

Drawing from the described process for the STROBE observation tool (O’Malley et al., 2003), as well as other described modifications of the tool (Alimoglu et al., 2014), we used the same observation units and cycles. Modifications to the original STROBE process were made to suit observation of a large group of students and their instructor without being physically present. Three observers from the research team (CYC, YES, KN) observed and recorded the instructor and student behaviors for the three case-based learning sessions and three lecture-style sessions conducted live online in 2020, and for the video recordings of the corresponding in-person teaching in 2017; a total of 12 sessions were therefore analyzed. One observation unit was a 5-minute cycle, which proceeded as follows. The observer wrote down the starting time of the cycle and information about the class (number of students, title of session). The observer then selected a student, observed that student for 20 seconds, and marked the type of engagement observed according to the IEM scale. As the observers were not physically present for either the 2017 video recordings or the 2020 online sessions, students who responded to the instructor or posed questions were marked at the same time by all three observers. Each 5-minute cycle comprised four 20-second observations of individual learners, so student engagement was marked four times within the cycle, with a different student each time. The observer also observed the instructor for that 5-minute cycle and similarly marked the instructor’s behavior once per cycle. For the remainder of the modified STROBE cycle, the observer tallied the number of questions asked by all the students and by the instructor. 
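As a concrete illustration, the record produced by one 5-minute cycle could be represented as follows; the field names are our own sketch, not part of the published tool, which records these items on a paper data collection form.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ObservationCycle:
    """One 5-minute observation cycle of the modified STROBE/IEM process.

    Field names are illustrative assumptions, not the published form's labels.
    """
    start_time: str                  # e.g. "14:05", noted at the start of the cycle
    n_students: int                  # class size noted with the class information
    session_title: str
    student_scores: List[int] = field(default_factory=list)  # four 20-second IEM marks (1-5)
    instructor_score: int = 0        # one IEM mark (1-5) for the whole cycle
    student_questions: int = 0       # tallied in the remainder of the cycle
    instructor_questions: int = 0

    def complete(self) -> bool:
        # A cycle is complete once all four 20-second student observations are marked.
        return len(self.student_scores) == 4
```

A cycle record would be filled in sequentially: class information first, then the four student marks, the instructor mark, and finally the question tallies.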

Observers independently and separately observed and marked the students’ and instructors’ behaviors. Owing to the lack of in-person observation, students who responded or posed questions during the session were uniformly chosen for marking by the three observers. If a student had already been marked once during a cycle, the same student was not used for the remaining three observations within that cycle. At the end of the marking, two observers (KN and YES) compared their scores for both students and instructor. The marks given by the third observer (CYC) were used to validate the final score awarded and served as the tiebreaker when there was a discrepancy in the marks given by the first two observers. 
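This reconciliation rule can be sketched as a small function. Note one assumption: the paper does not state how a three-way disagreement was resolved, so this sketch returns None in that case rather than guessing.

```python
def reconcile_score(score_a, score_b, score_c):
    """Resolve one IEM observation from three independent observers.

    score_a and score_b are the two main observers (e.g. KN and YES);
    score_c is the validator (e.g. CYC) used as the tiebreaker when the
    main observers disagree. Returns None when all three scores differ
    (how this case was handled is not specified in the text).
    """
    if score_a == score_b:
        return score_a          # main observers agree: their score stands
    if score_c in (score_a, score_b):
        return score_c          # tiebreak: take the score matching the validator
    return None                 # unresolved: no two observers agree
```

For example, if the main observers mark 3 and 1 and the validator marks 1, the reconciled score is 1.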

E. Collation of Post Teaching Survey Feedback

Apart from the data derived from our modified observational tool, we also reviewed data from surveys conducted by the educational committee after each of these teaching sessions (see Table 1). These were general surveys used to solicit student feedback on the teaching sessions; they were distributed in-person in 2017, and the same forms were distributed to the students online in 2020. Students responded to five statements, each scored from 1 to 5 (1 for strongly disagree, 2 for disagree, 3 for neither agree nor disagree, 4 for agree, and 5 for strongly agree). The feedback forms yielded an overall feedback score marked by the student, as well as a score in response to a statement assessing self-reported engagement – “The session was interactive and engaging”. The other statements were “The session has encouraged self-directed learning and critical thinking”, “The session was relevant to my stage of training”, “The session helped me advance my clinical decision-making skills”, and “The session has increased my confidence in day-to-day patient management”. Means of the feedback scores were taken as a qualitative guide, and we analyzed the overall feedback scores (“Overall feedback score” in Table 1) and the scores for self-reported engagement (“Self-reported engagement feedback score” in Table 1). 

F. Statistical Analyses

Descriptive statistics were used to determine frequencies and the median number of questions asked, as well as mean student feedback scores and the absolute duration of each teaching session. The Fisher exact test was used to analyze differences in scores between lectures and case-based learning, and between 2017 in-person learning and 2020 online learning. For this analysis, we dichotomized the scores at a cut-off of “1”, the first item on the behavior list for both students and instructors, as we felt that this first item reflected extreme non-participation for both student and instructor which, if left to continue, could entrench negative learning and teaching behaviors.
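To make the dichotomized comparison concrete, the two-sided Fisher exact test on a 2×2 table (score “1” vs score “>1”, in-person vs online) can be computed from the hypergeometric distribution using only the standard library. This is a generic sketch of the test, and the cell counts in the usage note below are invented for illustration, not taken from the study's data.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

    The p-value is the total probability, under the hypergeometric
    distribution with fixed margins, of all tables that are no more
    likely than the observed one.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):
        # Probability of the table whose top-left cell equals x.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Sum over all tables as likely as, or less likely than, the observed one.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)
```

For instance, `fisher_exact_2x2(8, 2, 1, 5)` gives p ≈ 0.035, the conventional two-sided Fisher p-value for that (illustrative) table.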

III. RESULTS

A. Class Types, Characteristics, Feedback Scores

A total of 12 sessions were observed, consisting of in-person and online teaching sessions of six topics (Table 1): three topics each of case-based learning (CBL) and lecture to large class (LLC). The duration of the sessions ranged from 30-55 minutes for in-person sessions and 40-90 minutes for online sessions. The total number of PGY-1 students eligible to attend was 82 for the in-person teaching sessions in 2017 and 86 for the online teaching sessions in 2020. Student attendance ranged from 11 (13.4%) to 31 (37.8%) for the in-person sessions and from 28 (32.6%) to 77 (89.5%) for the online sessions. The median (range) of feedback scores was 4.57 (4.25 to 4.72) for in-person sessions vs 4.32 (4.04 to 4.61) for online sessions. The median (range) of self-reported engagement scores was 4.55 (4.25 to 4.79) for in-person sessions vs 4.34 (4.00 to 4.67) for online sessions (Table 1).

 

Table 1. Class types and characteristics (*Different tutors, but using same content)

B. Instructors’ Engagement Behaviour

1) Comparing in-person vs online teaching: Percentage time during which there was no engagement/interaction (scoring “1” on the IEM score). This ranged from 0-80% for in-person teaching vs 0-100% for online teaching (Table 2A). For each topic, there was no significant difference in the percentage time of no engagement. 

Most frequent IEM scores. The most frequent IEM score for each 5-minute segment was 3 for both in-person teaching (48.9%) and online teaching (52.9%) (Table 2B).

2) Comparing CBL vs LLC: Percentage time during which there was no engagement/interaction. This ranged from 0-23.1% for CBL vs 50-100% for LLC (Table 2A).

Most frequent IEM scores. The most frequent IEM score was 3 for CBL (77.3%) and 1 for LLC (71.4%) (Table 2B).

Table 2A. Comparison of instructors’ behaviour showing percentage time with no engagement (scoring “1” on the IEM score)

Table 2B. Numbers (percentages) of a particular IEM score received for a 5-minutes segment of teaching – for instructors

C. Students’ Engagement Behaviour 

1) Comparing in-person vs online teaching: Percentage time during which there was no engagement/interaction. This ranged from 0-95% for in-person teaching vs 78.8-100% for online teaching (Table 3A). There was a significant difference in the percentage time of no engagement for two topics (ECG, chest pain), with a higher percentage of no-engagement time for online teaching. 

Most frequent IEM scores. The most frequent IEM score was 1 for both in-person teaching (63.8%) and online teaching (85.1%) (Table 3B).

2) Comparing CBL vs LLC: Percentage time during which there was no engagement/interaction. This ranged from 0-81.9% for CBL vs 84.4-100% for LLC (Table 3A).

Most frequent IEM scores. The most frequent IEM score was 1 for both CBL (65.3%) and LLC (91.8%) (Table 3B).

Presence of 1* scores, where “1*” was defined as no response when a question was posed by name to a specific student. There was no 1* IEM score for in-person teaching in either CBL or LLC; for online teaching, 8.4% (12/143) of the “1” responses were 1* in CBL and 6.5% (6/92) in LLC.

Table 3A. Comparison of students’ behaviour showing percentage time with no engagement (scoring “1” on the IEM score)

Table 3B. Numbers (percentages) of a particular IEM score received for a 5-minutes segment of teaching – for students

D. Number of Questions Asked Per 5-minute Cycle

The median number of questions asked by instructors per cycle ranged from 0-2 for in-person teaching and 1-3 for online teaching (see Appendix 1), and from 1-3 for CBL vs 0-1 for LLC.

The median number of questions asked by students was 0 in all sessions.

The dataset underlying these results has been uploaded to an online repository accessible via https://doi.org/10.6084/m9.figshare.18133379.v1 (Chua et al., 2022).

IV. DISCUSSION

We modified the known STROBE instrument (O’Malley et al., 2003) to create an observational tool, the IEM, that can quantify instructor and student engagement even when the observer is not present in-person. Our IEM scores were derived by taking the scores that were in agreement when independently awarded by the two main observers (YES and KN). The third observer (CYC) validated these scores: when the two main observers disagreed, the score that agreed with CYC’s was used. As an indication of the IEM tool’s effectiveness when the observer is not present in-person, we postulated that our modified score should still demonstrate the well-documented difference in engagement between lecture-style and case-based learning sessions (Kelly et al., 2005). Our modified IEM score did indeed show more frequent higher scores, as expected, for case-based learning sessions (Tables 2B and 3B). We also compared our IEM scores with the students’ self-reported engagement scores (Table 1) collected as part of student feedback. The general correlation between the trends of the observed IEM scores and the students’ self-reported engagement scores also suggests the usefulness of our modified STROBE tool in situations where the observer is not present in-person, although this needs to be further validated in prospective studies.

Our initial hypothesis was that students might be more engaged in online teaching sessions and more open to posing questions to the instructor and their peers, owing to the “chat”, “likes” and “poll” functions available on the Zoom teleconferencing platform, which may be more familiar to a younger generation accustomed to social media. We had postulated that live online lectures would encourage engagement from students who would not otherwise participate in-person, as the less intimidating online environment lets them ask and answer questions more anonymously (Kay & Pasarica, 2019; Ni, 2013). In an Asia-Pacific context, video conferencing has been found to improve access to participation for more reticent participants who prefer written expression, through alternative communication channels such as the “chat box”, although there was a potential trend toward reduced engagement (Ong et al., 2021).

Our data show, however, that Zoom teleconferencing during the COVID-19 pandemic can be associated with reduced student engagement. The percentage time with no engagement was significantly higher for online sessions (Table 3A), and the most frequent IEM score was lower (1 for online vs 3 for in-person) for CBL sessions (Table 3B). This phenomenon in medical education during the COVID-19 pandemic has been described previously: studies based on student and instructor feedback found that students were more likely to report reduced engagement during virtual learning (Longhurst et al., 2020; Dost et al., 2020) and increased difficulty maintaining focus, concentration and motivation during online learning (Wilcha, 2020).

Our data also suggest that instructors spent more time per topic when executing CBL online, even to come close to achieving the same levels of engagement as before (Table 1). This may include time in which the instructor needed multiple attempts at questioning and discussion before a student responded. It is also possible that in in-person learning the instructor relies heavily on non-verbal cues (e.g., body language, nods of the head, the collective feel of the room) to determine whether a question has been satisfactorily answered, and can therefore move on more quickly than on a Zoom platform where most, if not all, of the students cannot be seen.

The higher attendance for online learning compared with in-person learning (see Table 1) highlights one of the strengths of online learning: it is more easily accessible, saving students the time needed to get to a designated lecture room, and it provides flexibility to enter and exit (Dost et al., 2020). Unfortunately, this also likely encourages the phenomenon of “presenteeism”, where students are not focused on the learning session but instead engage in other tasks simultaneously, e.g., reading or composing emails, or completing work tasks, instead of having dedicated protected teaching time. Resident learners have been described as participating in nearly twice as many activities unrelated to the teaching session per hour during an online session as when in-person (Weber & Ahn, 2020). This has likely contributed to the number of 1* scores we recorded, where the student had logged into the Zoom platform but was not available even to respond in the negative when called upon to answer a question. Presenteeism, however, is not just a problem for online learning; even in in-person learning, pretending to engage has been found to be a significant unrecognized issue (Fuller et al., 2018).

The main implication of our study is that, to improve student engagement, a face-to-face format cannot simply be transposed onto a virtual platform. It has been suggested that engagement during live virtual learning could be enhanced with interactive quizzes using audience polling functions (Morawo et al., 2020) and possibly other methods such as “gamification” (Nieto-Escamez & Roldan-Tapia, 2021). Our instructors for the CBL sessions used both poll functions and live questioning, but without increased success in engagement. Smaller groups are likely required to enhance student engagement, but this would require more time and teaching manpower. Increasing the opportunity for interaction on a virtual platform would also require creating additional online resources, taking up more faculty time, as creating new resources can take at least three times as much work as a traditional format (Gewin, 2020). Online resources would also need to be modified to increase student autonomy, which increases student engagement in medical education (Kay & Pasarica, 2019). Our study further shows that, as a first step in time- and resource-limited settings, a case-based approach to teaching is more likely than lecture-style teaching to enhance student engagement.

A culture of accountability also needs to be fostered within online teaching sessions, where students should be educated on how Zoom meetings can be more enriching when cameras are on (Sharp et al., 2021). PGY-1 interns, as recent graduates entering the medical workforce, also need to be educated on professionalism, as they can be called upon to answer questions during meetings or conferences. When initial questions are not voluntarily answered, our tutors often practice “cold-calling”, which can help keep learners alert and ready (Lemov, 2015). Unfortunately, such evidence-based teaching methods, which work well when the student is in-person, will ultimately fail if online students are not educated on their need to be accountable to the instructor and their peers.

This study has several limitations. Firstly, the level of student engagement may have been affected by external factors, such as the different physical learning environments, class sizes and avenues of communication. The stresses of the ongoing pandemic may also have affected student engagement, as decreased quality of life and increased stress negatively impact student motivation (Lyndon et al., 2017). Secondly, the topics for lecture to large class and case-based learning were not identical, as the topics were picked in chronological order and no topic in the curriculum had material for both class types. This difference in topics may have confounded direct comparisons between the two class types, although we attempted to mitigate this by including a variety of topics in each class type. Thirdly, the improved student engagement and feedback scores for in-person learning may have been biased by the smaller class sizes for in-person learning; it is also possible that only the more motivated, and hence more likely to be engaged, students turned up for in-person learning. Fourthly, because of the online format and the retrospective viewing of the video recordings, the observers were not present in-person to observe the non-verbal cues of the students or instructors; the tool, however, was modified to take into account only the verbal output that could be observed online or via video recording. Lastly, our IEM tool will benefit from further research to confirm its validity for observing students when the observer is not present in-person.

V. CONCLUSION

Lectures are associated with lower student engagement than case-based learning, and both class types are associated with lower student engagement when conducted on a virtual platform, while instructor levels of engagement remain about the same. This highlights that a face-to-face format cannot simply be transposed onto a virtual platform, and it is important to address this gap in engagement, as it can lower faculty satisfaction with teaching and ultimately result in burnout. Blended teaching or smaller-group teaching as the world turns the corner of the COVID-19 pandemic may be one way to circumvent the situation, but is constrained by faculty time and manpower. Our study also shows that, as a first step in time- and resource-limited settings, a case-based approach to teaching is more likely than lecture-style teaching to enhance student engagement.

Notes on Contributors

Dr Ng Wei Ping Kay and Dr Chua Yuan Kit Christopher are co-first authors and contributed to conceptual development, acquisition, analysis, and interpretation of data for the work. They contributed to drafting and revising the work and approved the final version to be published. They agree to be accountable for all aspects of the work.

Dr Lye Pei Shi Priscillia contributed to conceptual development, acquisition, analysis, and interpretation of data for the work. She contributed to drafting and revising the work and approved the final version to be published. She agrees to be accountable for all aspects of the work.

Dr Joy Vijayan contributed to conceptual development, acquisition, analysis, and interpretation of data for the work. He contributed to drafting and revising the work and approved the final version to be published. He agrees to be accountable for all aspects of the work.

Dr Yap Eng Soo contributed to conceptual development, acquisition, analysis, and interpretation of data for the work. He contributed to drafting and revising the work and approved the final version to be published. He agrees to be accountable for all aspects of the work.

Dr Chan Yee Cheun contributed to conceptual development, acquisition, analysis, and interpretation of data for the work. He contributed to drafting and revising the work and approved the final version to be published. He agrees to be accountable for all aspects of the work.

Ethical Approval

I confirm that the study has been approved by Domain Specific Review Board (DSRB), National Healthcare Group, Singapore, an institutional ethics committee. DSRB reference number: 2020/00415.

Data Availability

The data that support the findings of this study are openly available in Figshare at https://doi.org/10.6084/m9.figshare.18133379.v1.

Acknowledgement

We would like to acknowledge Ms. Jacqueline Lam for her administrative support in observing the recordings and online-teaching. 

Funding

There was no funding for this research study.

Declaration of Interest

The authors report no conflicts of interest, including financial, consultant, institutional and other relationships that might lead to bias or a conflict of interest.

References

Alimoglu, M. K., Sarac, D. B., Alparslan, D., Karakas, A. A., & Altintas. (2014). An observation tool for instructor and student behaviors to measure in-class learner engagement: A validation study. Medical Education Online, 19(1), 24037. https://doi.org/10.3402/meo.v19.24037

Armstrong, S. J., & Fukami, C. V. (2009). The SAGE Handbook of Management Learning, Education and Development. SAGE Publications Ltd. https://www.doi.org/10.4135/9780857021038

Berman, N. B., & Artino, A. R. J., (2018). Development and initial validation of an online engagement metric using virtual patients. BMC Medical Education, 18(1), 213. https://doi.org/10.1186/s12909-018-1322-z

Chick, R. C., Clifton, G. T., Peace, K. M., Propper, B. W., Hale, D. F., Alseidi, A. A., & Vreeland, T. J. (2020). Using technology to maintain the education of residents during the COVID-19 Pandemic. Journal of Surgical Education, 77(4), 729–732. https://doi.org/10.1016/j.jsurg.2020.03.018

Chua, Y. K. C., Ng, K. W. P., Yap, E. S., Lye, P. S. P., Vijayan, J., & Chan, Y. C. (2022). Evaluating online learning engagement (Version 1) [Data set]. Figshare. https://doi.org/10.6084/m9.figshare.18133379.v1

Dost, S., Hossain, A., Shehab, M., Abdelwahed, A., & Al-Nusair, L. (2020). Perceptions of medical students towards online teaching during the COVID-19 pandemic: A national cross-sectional survey of 2721 UK medical students. BMJ Open, 10(11), e42378. https://doi.org/10.1136/bmjopen-2020-042378

Fuller, K. A., Karunaratne, N. S., Naidu, S., Exintaris, B., Short, J. L., Wolcott, M. D., Singleton, S., & White, P. J. (2018). Development of a self-report instrument for measuring in-class student engagement reveals that pretending to engage is a significant unrecognized problem. PLOS ONE, 13(10), e0205828. https://doi.org/10.1371/journal.pone.0205828

Gewin, V. (2020). Five tips for moving teaching online as COVID-19 takes hold. Nature, 580(7802), 295–296. https://doi.org/10.1038/d41586-020-00896-7

Kaufman, D. M. (2003). Applying educational theory in practice. BMJ, 326(7382), 213–216. https://doi.org/10.1136/bmj.326.7382.213

Kay, D., & Pasarica, M. (2019). Using technology to increase student (and faculty satisfaction with) engagement in medical education. Advances in Physiology Education, 43(3), 408–413. https://doi.org/10.1152/advan.00033.2019

Kelly, P. A., Haidet, P., Schneider, V., Searle, N., Seidel, C. L., & Richards, B. F. (2005). A comparison of in-class learner engagement across lecture, problem-based learning, and team learning using the STROBE classroom observation tool. Teaching and Learning in Medicine, 17(2), 112–118. https://doi.org/10.1207/s15328015tlm1702_4

Lemov, D. (2015). Teach like a champion 2.0: 62 techniques that put students on the path to college. (2nd ed.). Jossey-Bass.

Longhurst, G. J., Stone, D. M., Dulohery, K., Scully, D., Campbell, T., & Smith, C. F. (2020). Strength, weakness, opportunity, threat (SWOT) analysis of the adaptations to anatomical education in the United Kingdom and Republic of Ireland in response to the Covid-19 pandemic. Anatomical Sciences Education, 13(3), 301–311. https://doi.org/10.1002/ase.1967

Lyndon, M. P., Henning, M. A., Alyami, H., Krishna, S., Zeng, I., Yu, T.-C., & Hill, A. G. (2017). Burnout, quality of life, motivation, and academic achievement among medical students: A person-oriented approach. Perspectives on Medical Education, 6(2), 108–114. https://doi.org/10.1007/s40037-017-0340-6

McCoy, L., Pettit, R. K., Lewis, J. H., Allgood, J. A., Bay, C., & Schwartz, F. N. (2016). Evaluating medical student engagement during virtual patient simulations: A sequential, mixed methods study. BMC Medical Education, 16, 20. https://doi.org/10.1186/s12909-016-0530-7

Morawo, A., Sun, C., & Lowden, M. (2020). Enhancing engagement during live virtual learning using interactive quizzes. Medical Education, 54(12), 1188. https://doi.org/10.1111/medu.14253

Ni, A. Y. (2013). Comparing the effectiveness of classroom and online learning: Teaching research methods. Journal of Public Affairs Education, 19(2), 199-215. https://doi.org/10.1080/15236803.2013.12001730

Nieto-Escamez, F. A., & Roldan-Tapia, M. D. (2021). Gamification as online teaching strategy during COVID-19: A mini-review. Frontiers in Psychology, 12, 648552. https://doi.org/10.3389/fpsyg.2021.648552

O’Malley, K. J., Moran, B. J., Haidet, P., Seidel, C. L., Schneider, V., Morgan, R. O., Kelly, P. A., & Richards, B. (2003). Validation of an observation instrument for measuring student engagement in health professions settings. Evaluation & the Health Professions, 26(1), 86–103. https://doi.org/10.1177/0163278702250093

Ong, C. C. P., Choo, C. S. C., Tan, N. C. K., & Ong, L. Y. (2021). Unanticipated learning effects in videoconference continuous professional development. The Asia Pacific Scholar, 6(4), 135-141. https://doi.org/10.29060/TAPS.2021-6-4/SC2484

Rao, S. P., & DiCarlo, S. E. (2001). Active learning of respiratory physiology improves performance on respiratory physiology examinations. Advances in Physiology Education, 25(2), 55–61. https://doi.org/10.1152/advances.2001.25.2.55

Sharp, E. A., Norman, M. K., Spagnoletti, C. L., & Miller, B. G. (2021). Optimizing synchronous online teaching sessions: A guide to the “new normal” in medical education. Academic Pediatrics, 21(1), 11–15. https://doi.org/10.1016/j.acap.2020.11.009

Watson, W. E., Michaelsen, L. K., & Sharp, W. (1991). Member competence, group interaction, and group decision making: A longitudinal study. Journal of Applied Psychology, 76(6), 803–809. https://doi.org/10.1037/0021-9010.76.6.803 

Weber, W., & Ahn, J. (2020). COVID-19 conferences: Resident perceptions of online synchronous learning environments. Western Journal of Emergency Medicine, 22(1), 115–118. https://doi.org/10.5811/westjem.2020.11.49125

Wilcha, R. J. (2020). Effectiveness of virtual medical teaching during the COVID-19 crisis: Systematic review. JMIR Medical Education, 6(2), e20963. https://doi.org/10.2196/20963

*Chua Yuan Kit Christopher
5 Lower Kent Ridge Road,
National University Hospital,
Singapore 119074
+65 7795555
Email: christopher_chua@nuhs.edu.sg

Submitted: 6 January 2022
Accepted: 4 May 2022
Published online: 4 October, TAPS 2022, 7(4), 22-34
https://doi.org/10.29060/TAPS.2022-7-4/OA2735

Amelah Abdul Qader1,2, Hui Meng Er3 & Chew Fei Sow3

1School of Postgraduate Studies, International Medical University, Kuala Lumpur, Malaysia; 2University of Cyberjaya, Faculty of Medicine, Cyberjaya, Malaysia; 3IMU Centre for Education, International Medical University, Kuala Lumpur, Malaysia

Abstract

Introduction: The direct ophthalmoscope is a standard tool for fundus examination but is underutilised in practice due to technical difficulties. Although the smartphone ophthalmoscope has been demonstrated to improve detection of fundus abnormalities, there are limited studies assessing its utility as a teaching tool for fundus examination in Southeast Asian medical schools. This study explored the perceptions of medical students toward using a smartphone ophthalmoscope for fundus examination and compared their abilities to diagnose common fundal abnormalities using the smartphone ophthalmoscope against the direct ophthalmoscope. 

Methods: Sixty-nine Year-4 undergraduate medical students participated in the study. Their competencies in using the direct ophthalmoscope and the smartphone ophthalmoscope for fundus examination on manikins with ocular abnormalities were formatively assessed, and the scores were analysed using SPSS statistical software. Their perceptions of the use of smartphone ophthalmoscopes for fundus examination were obtained using a questionnaire.

Results: The students’ competency assessment scores using the smartphone ophthalmoscope were significantly higher than those using the direct ophthalmoscope. A significantly higher percentage of them correctly diagnosed fundus abnormalities using the smartphone ophthalmoscope. They were confident in detecting fundus abnormalities using the smartphone ophthalmoscope and appreciated the comfortable working distance, ease of use and collaborative learning. More than 90% of them were of the view that smartphone ophthalmoscopes should be included in the undergraduate medical curriculum.

Conclusion: Undergraduate medical students performed better in fundus examination on manikins with ocular abnormalities using the smartphone ophthalmoscope than the direct ophthalmoscope. Their positive perceptions of the smartphone ophthalmoscope support its use as a supplementary teaching tool in the undergraduate medical curriculum.

Keywords: Medical Students, Smartphone, Ophthalmoscope, Teaching Tool

Practice Highlights

  • The smartphone ophthalmoscope is a useful supplementary teaching tool for fundus examination in undergraduate medical education.
  • Fundus examination is performed at a safe working distance from the patient using a smartphone ophthalmoscope.
  • Students are able to detect fundus abnormalities with greater ease and accuracy using a smartphone ophthalmoscope compared to a direct ophthalmoscope.
  • Students appreciate the collaborative learning through peer discussion of the fundus findings using the smartphone ophthalmoscope.

I. INTRODUCTION

Fundus examination is an essential procedure that provides information about ocular conditions which may compromise vision and lead to blindness (Leonardo, 2018). The direct ophthalmoscope (DO) is one of the core ocular examination tools to be mastered during clinical skills training in medical school as well as in clinical practice. However, students have difficulty mastering the technique of using it (Kim & Chao, 2019), particularly as they have to coordinate their hand movements at a very near distance to the patient and close one eye when examining the patient's fundus through the pupil (MacKay et al., 2015). They also have to adjust the power of the ophthalmoscope lenses to obtain a clearer picture if there is a refractive error in the patient's eye or their own. Instead of concentrating on detecting fundus findings, students become preoccupied with adjusting the direct ophthalmoscope.

Technical constraints may be the main reason for the underuse of the direct ophthalmoscope. Experienced physicians who use it may lack confidence and frequently miss significant abnormalities (Purbrick & Chong, 2015), causing delayed diagnosis of preventable eye disorders and permanent vision impairment (Myung et al., 2014). This has led to the exploration of alternative tools to overcome some of these challenges (Giardini et al., 2014; Kim & Chao, 2019). The smartphone ophthalmoscope, for example, is a breakthrough portable digital retinal imaging system that allows medical practitioners to view the fundus as high-definition images or video during a routine ophthalmoscopic examination.

The D-EYE smartphone ophthalmoscope was developed by Dr Andrea Russo in 2015 (Russo et al., 2015). It is a small, portable, and inexpensive retinal imaging system: an attachment to a smartphone that captures retinal images and uses a cross-polarisation technique to reduce corneal reflections. It integrates with the smartphone's autofocus feature to accommodate the patient's refractive error.

A. Problem and Rationale

Fundus examination requires extensive practice to develop adequate interpretation skills (Leonardo, 2018). Medical students are taught to use the direct ophthalmoscope in order to recognise retinal signs of life-threatening disorders (Benbassat et al., 2012). The International Council of Ophthalmology recognises direct ophthalmoscope examination as one of the seven core ocular medical education competencies, and all graduating medical students are expected to recognise common abnormalities of the ocular fundus using a direct ophthalmoscope (Dunn et al., 2021). However, medical graduates often lack competency in using this tool (MacKay et al., 2015). This needs to be addressed, as at least 2.2 billion people globally have visual impairment or blindness, of which at least 1 billion have deterioration in vision that could have been prevented through earlier screening or detection (World Health Organization, 2019). Tan et al. (2020) reported several favourable studies carried out in Italy, the UK, and India on the advantages of smartphone ophthalmoscopes for fundus examination and visualisation of the retinal image. In a randomised cross-over study by Curtis et al. (2021) on the ease of use of the D-EYE smartphone ophthalmoscope versus the direct ophthalmoscope, 44 Year-1 medical students in Canada examined patients' fundi for optic disc assessment and compared their findings with the respective photographs provided. Ease of use and confidence were significantly greater with the D-EYE smartphone ophthalmoscope.

Although the smartphone ophthalmoscope is available in Southeast Asian countries such as Malaysia, it is not commonly used in public hospitals and general practitioner clinics, probably because of resource constraints in developing countries. Moreover, there are limited studies assessing its use; in particular, there is no literature on such studies among undergraduate medical students in Southeast Asia. Based on the positive findings from the literature, however, it has been proposed that the smartphone ophthalmoscope be included in clinical skills training for fundus examination among undergraduate medical students at the university where this study was conducted. Therefore, this study was carried out to explore the students' perceptions of using smartphone ophthalmoscopes for fundus examination and to determine whether their competencies in fundus examination improved using this tool compared to the direct ophthalmoscope. The D-EYE smartphone ophthalmoscope was chosen over other types of smartphone ophthalmoscope for its ease of data management using the available app, cost feasibility and convenience. The two research questions of the study were:

1) What were the perceptions of medical students on the use of smartphone ophthalmoscope for fundus examination?

2) Was there a difference between students’ competencies in fundus examination when using the smartphone ophthalmoscope compared to the direct ophthalmoscope?

The cognitive theory of multimedia learning can be applied in the context of fundus examination using a smartphone ophthalmoscope. Using a smartphone ophthalmoscope, the student can visualise the fundus on the smartphone screen. According to the cognitive theory of multimedia learning (Figure 1), the students engage in active cognitive processing in order to create a cohesive mental representation of their experiences based on their recall knowledge of fundus structures and ocular abnormalities. This will allow them to integrate the findings with other relevant information. They can then describe their findings and organise the selected images into a “mental model” of the items they are learning. Finally, their prior knowledge of ocular disorders is incorporated and reconciled with these verbal explanations and graphical representations.

Figure 1. Cognitive theory of multimedia learning

According to the social constructivism theory, learning is social, active and constructed through social interaction (Lötter & Jacobs, 2020). Technologies have been shown to enhance students’ problem solving by breaking down complex concepts into sub-problems (Kim & Hannafin, 2011). A smartphone ophthalmoscope is an appropriate tool for encouraging active interaction between the students and lecturer to work on real-world problems in the teaching and learning environment. When the students perform fundus examination using the smartphone ophthalmoscope, they can see the findings on the screen together with their peers and the lecturer. This will allow them to gain more knowledge and understanding as they can discuss and link the new ideas in the context of their prior knowledge.

II. METHODS

The study was approved by the International Medical University Joint-Committee on Research and Ethics (IMU-JC). Informed consent was obtained from all respondents. The nature and purpose of the study were explained to them. The respondents were assured of anonymity and confidentiality of the collected information.

A. Study Setting

The data were collected from Year-4 undergraduate medical students who undertook ophthalmology rotation for the academic year 2020/2021, at the University of Cyberjaya, Malaysia.

In the fourth year of the medical curriculum, the students undertake four major postings (Orthopaedics, Family Medicine, Psychiatry and a speciality posting) over two semesters (Semesters 7 and 8). These are conducted in four rotations per year (rotations 1 and 2 in Semester 7, rotations 3 and 4 in Semester 8). The speciality posting includes Ophthalmology, Anaesthesia, ENT and Radiology; each of these speciality postings lasts two weeks. In the ophthalmology posting, the students are taught the principles of history taking and ocular examination in the Clinical Skill Training Department and in the hospital, where they clerk patients with eye conditions. Additionally, they learn about common basic eye conditions during interactive sessions and small-group case-based discussions. However, during the COVID-19 pandemic, the posting was affected by lockdown measures: the case-based discussion sessions were conducted online, and ocular examination was demonstrated through online interactive video sessions. Nevertheless, there was a window of opportunity during which the students could return to the campus for a one-week revision. During this period, the students practised ophthalmoscopy on manikins in the Clinical Skill Training Department.

B. Study Design

The direct ophthalmoscope examination technique was introduced to the students virtually through video demonstrations and online interactive discussion sessions. During the revision week, the students received two hours of training in fundus examination using the direct ophthalmoscope, and a 20-30 minute briefing and training session on the use of the smartphone ophthalmoscope. The training was conducted by a member of the teaching staff (the researcher in this study, AMAQ). Following that, the students were required to examine various slides of fundus images provided in the manikins (M1 and M2).

The selected slides on the manikins represented common pathological fundus findings, i.e., optic disc swelling, branch retinal vein occlusion, optic atrophy/glaucoma and diabetic retinopathy/maculopathy. Each student performed the fundus examination on M1 and M2 using the direct and smartphone ophthalmoscopes separately (approximately 2-3 minutes per manikin) on the same day. The students were required to record their findings, based on their own observations and without discussing with their peers, on the formative assessment forms (Appendix 1), and to indicate the tool they used (direct or smartphone ophthalmoscope). The formative assessment form was adapted from Mamtora et al. (2018) and had been validated by two ophthalmologists in the department.

To avoid bias, all the completed formative assessment forms were collected and submitted to another researcher (SCF), who was not involved in marking, to remove the information on the tool used by the student from each form. The forms were then returned to the researcher (AMAQ) for marking.

After completing the formative assessment, the students were requested to fill in an online questionnaire regarding their perception on the use of smartphone ophthalmoscope for fundus examination. This questionnaire (Appendix 2) was adapted from Nagra & Huntjens (2020). In addition, the students were requested to provide the reasons for their suggestions to include smartphone ophthalmoscopes or replace direct ophthalmoscopes with smartphone ophthalmoscopes in the medical curriculum.

C. Data Analysis

All data were statistically analysed using SPSS version 23. The paired t-test was used to compare the students' performance in the formative assessments using the direct and smartphone ophthalmoscopes. The number of students making the correct diagnosis with each tool was compared using the McNemar (chi-square) test (Liao & Lin, 2008). Statistical significance was set at p ≤ 0.05. The students' responses to the perception questionnaire, relating to ease of use, confidence and preference, were also analysed.
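The two tests above can be sketched as follows. This is a minimal illustration with hypothetical data (the study's analysis was run in SPSS); scipy stands in for the statistical routines, and the McNemar statistic is computed directly from the discordant pairs with a continuity correction.

```python
# Illustration only: hypothetical data, not the study's dataset.
from scipy import stats

# Hypothetical paired formative-assessment scores (%) for the same
# students with each tool.
do_scores  = [35, 42, 38, 40, 31, 45, 39, 44, 36, 41]   # direct ophthalmoscope
spo_scores = [55, 63, 58, 61, 52, 66, 57, 64, 54, 60]   # smartphone ophthalmoscope

# Paired t-test on the within-student score differences.
t_stat, p_paired = stats.ttest_rel(spo_scores, do_scores)

# McNemar test on discordant pairs: b = correct with the smartphone
# ophthalmoscope only, c = correct with the direct ophthalmoscope only
# (continuity-corrected chi-square with 1 degree of freedom).
b, c = 30, 8
chi2 = (abs(b - c) - 1) ** 2 / (b + c)
p_mcnemar = stats.chi2.sf(chi2, df=1)

print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.4g}")
print(f"McNemar: chi-square = {chi2:.2f}, p = {p_mcnemar:.4g}")
```

With the hypothetical counts b = 30 and c = 8, the statistic is (|30 - 8| - 1)^2 / 38, about 11.6, well past the p ≤ 0.05 threshold on 1 degree of freedom.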

III. RESULTS

Sixty-nine Year-4 medical students participated in this study. The demographic data are shown in Table 1.

Table 1: Demographic data

A. Comparison of Formative Assessment Scores Using Smartphone Ophthalmoscope and Direct Ophthalmoscope

The students' mean score was higher with the smartphone ophthalmoscope (59%) than with the direct ophthalmoscope (39%); the same trend was observed for students with and without refractive error (Table 2). A higher number of students made the correct diagnosis for all fundus abnormalities using the smartphone ophthalmoscope compared to the direct ophthalmoscope, and the differences were statistically significant (p < 0.05; Table 3). The data that support the findings are openly available in Figshare at https://figshare.com/s/d45da87ea42c596e714b

Table 2: Comparison of formative assessment scores using direct ophthalmoscope (DO) and smartphone ophthalmoscope (SPO).

*p-value (paired t-test)

Table 3: Comparison of correct diagnosis using direct ophthalmoscope and smartphone ophthalmoscope.

*McNemar (Chi square) test (Liao & Lin, 2008),

**Branch retinal vein occlusion

B. Students’ Perceptions on the Use of Smartphone Ophthalmoscope for Fundus Examination

A total of 69 students participated in the online questionnaire. All the students appreciated that their peers could share the findings with them on the smartphone screen. Most of the students (87%) preferred using smartphone ophthalmoscopes over direct ophthalmoscopes, and 86% felt confident when using the smartphone ophthalmoscope. In addition, the comfortable working distance was appreciated by 87% of the students. The responses of the participants are shown in Table 4.

Online student evaluation form
Likert scale: 1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree

Section 1: Perception on smartphone ophthalmoscope use

Item                                                           1       2       3       4       5
I feel confident while using it                              1.4%    2.9%    8.7%   56.5%   30.4%
I feel easy to view the fundus                               0.0%    5.8%   11.6%   44.9%   37.7%
I feel comfortable when my peer can observe
  with me the findings                                       0.0%    0.0%    0.0%   30.4%   69.6%
My hand is steady while I am performing examination          0.0%    4.3%   20.3%   40.6%   34.8%
I can pick the finding faster                                0.0%    4.3%   21.7%   42.0%   31.9%
Smartphone ophthalmoscope is user-friendly                   0.0%    1.4%    7.2%   39.1%   52.2%
I prefer to use it                                           0.0%    4.3%    8.7%   40.6%   46.4%

Section 2: Efficiency of smartphone ophthalmoscope

Item                                                           1       2       3       4       5
It takes shorter duration to detect finding                  0.0%    4.3%   27.5%   33.3%   34.8%
It has comfortable working distance                          0.0%    0.0%   13.0%   40.6%   46.4%
I found difficulty in handling it                           10.1%   44.9%   21.7%   20.3%    2.9%
Smartphone ophthalmoscope must be added
  to the medical curriculum                                  0.0%    0.0%    4.3%   47.8%   47.8%
Direct ophthalmoscope should be replaced by
  smartphone ophthalmoscope                                  1.4%   10.1%   26.1%   33.3%   29.0%

Table 4: Responses of participants in the questionnaire to evaluate their perception and efficiency on the use of smartphone ophthalmoscope for fundus examination
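As a cross-check on the percentages quoted in the text, the Likert distributions in Table 4 can be collapsed into "percent agreement", the share of students answering 4 (agree) or 5 (strongly agree). A small sketch using a few of the items (values taken from Table 4; small rounding differences from the quoted figures are possible):

```python
# Response distributions from Table 4, as [1, 2, 3, 4, 5] percentages.
items = {
    "I feel confident while using it":     [1.4, 2.9, 8.7, 56.5, 30.4],
    "I feel easy to view the fundus":      [0.0, 5.8, 11.6, 44.9, 37.7],
    "I prefer to use it":                  [0.0, 4.3, 8.7, 40.6, 46.4],
    "It has comfortable working distance": [0.0, 0.0, 13.0, 40.6, 46.4],
}

# Percent agreement = Likert 4 + Likert 5, rounded to the nearest whole percent.
agreement = {item: round(dist[3] + dist[4]) for item, dist in items.items()}

for item, pct in agreement.items():
    print(f"{item}: {pct}%")
```

For example, "I prefer to use it" collapses to 40.6% + 46.4% = 87%, matching the preference figure quoted in the Results.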

C. Students’ Preference for Types of Ophthalmoscopes

Most of the students (94%) suggested that the smartphone ophthalmoscope be included in the medical curriculum, and 62% suggested replacing the direct ophthalmoscope with the smartphone ophthalmoscope. Their preference was mainly attributed to its efficiency, ease of use (especially for those with refractive error or amblyopia (lazy eye)), the autofocus function of the smartphone, and the possibility of using both eyes to see the images on the smartphone screen. The comfortable working distance, ease of cleaning after use and opportunity for peer discussion were also cited. Meanwhile, 11% of the students suggested keeping the direct ophthalmoscope alongside the smartphone ophthalmoscope in the curriculum. They opined that the smartphone ophthalmoscope should be included as an additional teaching and learning tool for fundus examination but disagreed that it should replace the direct ophthalmoscope entirely, as the smartphone ophthalmoscope might not be readily available in all healthcare settings. One participant commented that “eye examination using direct ophthalmoscope was thought to be a basic procedural skill that doctors must-have. Smartphone ophthalmoscope was a newer technology that might not be available in hospitals, unlike direct ophthalmoscope, which was more common”.

IV. DISCUSSION

The students scored significantly higher in the formative assessment for fundus abnormalities using the smartphone ophthalmoscope compared to the direct ophthalmoscope. The findings from this study were consistent with those of Kim and Chao (2019) and Dunn et al. (2021). In addition, the study also showed that the difference was statistically significant regardless of the presence of refractive error.

The students with refractive error and amblyopia commented that they found the smartphone ophthalmoscope more convenient and efficient than the direct ophthalmoscope. They reported difficulty using their amblyopic eye when examining with the direct ophthalmoscope, as they had to follow the ‘three R rule’: the examiner uses the right eye and right hand when examining the patient’s right eye, standing at the patient’s side at about 45 degrees to avoid a ‘kissing’ position with the patient. The students with refractive errors highlighted another issue: they needed to adjust the direct ophthalmoscope very frequently to obtain a clear view. With the smartphone ophthalmoscope, however, they were able to perform the examination using both eyes, as they could view the fundus on the smartphone screen without having to close one eye. Fifty percent of the students in this study reported that they had refractive errors; similarly, Al-Rashidi et al. (2018) found that 89 out of 162 medical students (54%) had refractive errors. In this study, a significantly higher number of students obtained the correct diagnosis of branch retinal vein occlusion (86%) and glaucoma (62%) using the smartphone ophthalmoscope compared to the direct ophthalmoscope (p < 0.001).

In a study conducted by Mrad et al. (2021) on the accurate method for glaucoma screening, they found that the D-EYE smartphone ophthalmoscope was more accurate for capturing fundus images and assessing the optic disc in detecting glaucoma compared to the direct ophthalmoscope. In addition, Mamtora et al. (2018) reported that it was more convenient and easier to detect optic disc and blood vessels using the D-EYE smartphone ophthalmoscope. Providing alternative tools in medical education could help students learn and perform more efficiently during their teaching and learning activities.

In our study, 86% of the students felt confident using the smartphone ophthalmoscope, and 83% found it easy to view the fundus. The majority of the students (91%) found the smartphone ophthalmoscope user-friendly, and 73% indicated that they were able to identify the findings quickly while using it. It has been reported previously that medical students preferred smartphone ophthalmoscopes to direct ophthalmoscopes and were more likely to make correct and faster diagnoses (Nagra & Huntjens, 2020). Though mastering the technique of using the direct ophthalmoscope is important, it is equally paramount to be able to identify the fundus findings accurately. Cognitive load theory states that human working memory can only hold a limited number of interrelated objects at once (Chu, 2014). Motivational components can enhance student learning by boosting generative processing, so long as the learner is not overburdened with needless processing or diverted from critical processing (Mayer, 2014). The technical challenges of the direct ophthalmoscope could hamper the students’ ability to recognise the features associated with fundus abnormalities; the smartphone ophthalmoscope offers an advantage in this context.

In this study, 87% of the students found the working distance of the smartphone ophthalmoscope more comfortable than the typical 1-3 cm working distance of the direct ophthalmoscope. This finding is similar to that of Nagra and Huntjens (2020), who found that 92% of students preferred the longer working distance (20-60 cm) of the D-EYE smartphone ophthalmoscope.

The use of smartphone ophthalmoscope as a teaching tool increases student engagement and enhances their learning experience. All students appreciated that their peers could observe the findings together with them on the smartphone screen. They were able to discuss among themselves, as well as with the lecturer. Learning must be an engaging and meaningful experience for the learners to be productive (Mellis et al., 2013). Learners will utilise strategies developed earlier in their training to optimise their knowledge and skills through reflection. When the students record the fundus images, they can discuss their interpretation of findings with the lecturers and peers. Feedback from this process will improve their learning efforts (Kaufman, 2019). The feedback and reflection facilitate the construction of new knowledge, as well as strategies for improving the performance as all of them could see the same findings on the smartphone screen and discuss accordingly.

In our study, 93% of the students suggested that the smartphone ophthalmoscope should be included in the medical curriculum. It was easier for them to see the findings without spending a long time focusing, squinting and shutting one eye, as the image is adjusted automatically by the smartphone ophthalmoscope. This has been highlighted as one of the advantages of using the smartphone ophthalmoscope in medical training and screening in primary care centres (Nagra & Huntjens, 2020). Smartphone-based fundus imaging could even replace the direct ophthalmoscope in clinical medicine (Wintergerst et al., 2020). In our study, only eight of the 69 students (11%) opined that the direct ophthalmoscope should not be totally replaced by the smartphone ophthalmoscope. From their point of view, the direct ophthalmoscope is a must-know clinical skill that contributes to their professional identity, particularly as the smartphone ophthalmoscope may not be easily available in developing countries due to resource constraints. Direct ophthalmoscopy is one of the fundamental skills that all clinicians should be able to perform; it is included in the assessment of the final-year undergraduate curriculum as well as the postgraduate membership assessment (Purbrick & Chong, 2015).

With a specific instructional scaffolding strategy, the smartphone ophthalmoscope can be used as a prologue to the direct ophthalmoscope. Students can share the fundus pictures of the same patient with their peers on the screen simultaneously during clinical practice sessions in packed clinics, without having to struggle with the technical challenges of the direct ophthalmoscope. As a result, patients are less burdened in terms of examination time, and students can evaluate more patients with fundus abnormalities in a shorter amount of time. The concept of just-in-time learning can be a useful pedagogical tool for medical academicians to improve their teaching and learning approach in the age of technology: it uses technology to deliver teaching and learning activities, allowing learning communities to understand and practise better (Naseem et al., 2019). According to Riel (2000), academics continue to play an essential role in encouraging learners to apply their knowledge effectively. As new technologies emerge, educators must prepare students to be lifelong learners who are digitally literate and resourceful in their application of technology.

A. Limitations of the Study

As the study was conducted during the COVID-19 pandemic, the duration for recruitment and training of the students was limited. As a result, the students had a shorter period of face-to-face clinical training, which limited their exposure to performing fundus examinations on real patients in the hospital and to using the various ophthalmoscopic tools. In addition, the lack of practice could have affected the students’ performance in the formative assessment of fundus examination using the smartphone and direct ophthalmoscopes. We therefore recommend repeating this study once the COVID-19 situation is resolved.

Another limitation of the study was that the students performed the fundus examination on the same manikins using the direct ophthalmoscope followed by the smartphone ophthalmoscope (or vice versa) on the same day. This could result in bias in their judgement in identifying the fundus abnormalities. Nevertheless, the students were reminded to be objective and record their findings accurately based on their observations using either tool.

V. CONCLUSION

The smartphone ophthalmoscope is an effective teaching tool for improving skills in detecting common ocular diseases. It provides a comfortable working distance and promotes collaborative learning by enabling peer discussion. It is also convenient for students with refractive errors. The smartphone ophthalmoscope is therefore a valuable supplementary teaching tool for fundus examination and is highly recommended for inclusion in the undergraduate medical curriculum.

Notes on Contributors

AMAQ designed and conducted the study, reviewed the literature, analysed the data and wrote the manuscript. EHM designed the study, analysed the data, gave critical feedback and edited the manuscript before submission. SCF designed the study, gave critical feedback and edited the manuscript before submission.

Ethical Approval

The study was approved by the International Medical University Joint-Committee on Research and Ethics (IMU-JC), Project ID No.: MHPE I/2021(01). Informed consent was obtained from all respondents, and the nature and purpose of the study were explained to them. The respondents were assured of anonymity and confidentiality of the collected information.

Data Availability

All data are available at https://figshare.com/s/d45da87ea42c596e714b and can be accessed on request and approval from the corresponding author.

Acknowledgement

The authors would like to thank the medical students at the University of Cyberjaya for their enthusiasm for learning. Special thanks go to the statisticians, Dr Norhafizah Ab Manan, University of Cyberjaya, and Dr Shamala Ramasamy, International Medical University, for their advice on statistical tests. The authors would also like to thank Professor Ian Wilson for proofreading the manuscript.

Funding

This study was funded by the International Medical University, Malaysia (Project ID: MHPE I/2021(01)).

Declaration of Interest

The authors declare that they have no conflicts of interest, including financial, consultant, institutional and other relationships that might lead to bias or a conflict of interest.

References

Al-Rashidi, S. H., Albahouth, A. A., Althwini, W. A., Alsohibani, A. A., Alnughaymishi, A. A., Alsaeed, A. A., Al-Rashidi, F. H., & Almatrafi, S. (2018). Prevalence refractive errors among medical students of Qassim University, Saudi Arabia: Cross-sectional descriptive study. Open Access Macedonian Journal of Medical Sciences, 6(5), 940–943. https://doi.org/10.3889/oamjms.2018.197

Benbassat, J., Polak, B. C. P., & Javitt, J. C. (2012). Objectives of teaching direct ophthalmoscopy to medical students. Acta Ophthalmologica, 90(6), 503–507. https://doi.org/10.1111/j.1755-3768.2011.02221.x

Chu, H.-C. (2014). Potential negative effects of mobile learning on students’ learning achievement and cognitive load—A format assessment perspective. Educational Technology & Society, 17(1), 332–344.

Curtis, R., Xu, M., Liu, D., Kwok, J., Hopman, W., Irrcher, I., & Baxter, S. (2021). Smartphone Compatible versus Conventional Ophthalmoscope: A Randomized Crossover Educational Trial. Journal of Academic Ophthalmology, 13(02), e270–e276. https://doi.org/10.1055/s-0041-1736438

Dunn, H. P., Kang, C. J., Marks, S., Witherow, J. L., Dunn, S. M., Healey, P. R., & White, A. J. (2021). Perceived usefulness and ease of use of fundoscopy by medical students: A randomised cross-over trial of six technologies (eFOCUS 1). BMC Medical Education, 21(1), 41. https://doi.org/10.1186/s12909-020-02469-8

Giardini, M. E., Livingstone, I. A. T., Jordan, S., Bolster, N. M., Peto, T., Burton, M., & Bastawrous, A. (2014). A smartphone based ophthalmoscope [Paper presentation]. 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Chicago, United States. https://doi.org/10.1109/EMBC.2014.6944049

Kaufman, D. M. (2019). Teaching and learning in medical education: How theory can inform practice. In T. Swanwick, K. Forrest, & B. C. O’Brien (Eds.), Understanding medical education: Evidence, theory, and practice (pp. 37–69). The Association for the Study of Medical Education.

Kim, M. C., & Hannafin, M. J. (2011). Scaffolding problem solving in technology-enhanced learning environments (TELEs): Bridging research and theory with practice. Computers & Education, 56(2), 403–417. https://www.learntechlib.org/p/67172/

Kim, Y., & Chao, D. L. (2019). Comparison of smartphone ophthalmoscopy vs conventional direct ophthalmoscopy as a teaching tool for medical students: The COSMOS study. Clinical Ophthalmology, 13, 391–401. https://doi.org/10.2147/OPTH.S190922

Leonardo, D. (2018). Development of a virtual reality ophthalmoscope prototype [Thesis, Mechatronic Engineering Program, Faculty of Engineering, Universidad Militar Nueva Granada, Bogotá D.C., Colombia]. http://hdl.handle.net/10654/17843

Liao, Y. Y., & Lin, Y. M. (2008). McNemar test is preferred for comparison of diagnostic techniques. American Journal of Roentgenology, 191(4), 2008.  https://doi.org/10.2214/AJR.08.1090

Lötter, M. J., & Jacobs, L. (2020). Using smartphones as a social constructivist pedagogical tool for inquiry-supported problem-solving: An exploratory study. Journal of Teaching in Travel & Tourism, 20(4), 347–363. https://doi.org/10.1080/15313220.2020.1715323   

MacKay, D. D., Garza, P. S., Bruce, B. B., Newman, N. J., & Biousse, V. (2015). The demise of direct ophthalmoscopy: A modern clinical challenge. Neurology: Clinical Practice, 5(2), 150–157. https://doi.org/10.1212/CPJ.0000000000000115

Mamtora, S., Sandinha, M. T., Ajith, A., Song, A., & Steel, D. H. W. (2018). Smart phone ophthalmoscopy: A potential replacement for the direct ophthalmoscope. Eye (Basingstoke), 32(11), 1766–1771. https://doi.org/10.1038/s41433-018-0177-1

Mayer, R. E. (2014). Cognitive theory of multimedia learning. The Cambridge Handbook of Multimedia Learning, Second Edition,  43–71. https://doi.org/10.1017/CBO9781139547369.005

Mellis, S., Carvalho, L., & Thompson, K. (2013, December 1-5). Applying 21st century constructivist learning theory to stage 4 design projects.  [Conference presentation]. Joint Australian Association for Research in Education Annual Conference, Adelaide. https://files.eric.ed.gov/fulltext/ED603249.pdf

Mrad, Y., Elloumi, Y., Akil, M., & Bedoui, M. H. (2021). A Fast and Accurate Method for Glaucoma Screening from Smartphone-Captured Fundus Images. Irbm, 1, 1–11. https://doi.org/10.1016/j.irbm.2021.06.004

Myung, D., Jais, A., He, L., Blumenkranz, M. S., & Chang, R. T. (2014). 3D Printed Smartphone Indirect Lens Adapter for Rapid, High Quality Retinal Imaging. Journal of Mobile Technology in Medicine, 3(1), 9–15. https://doi.org/10.7309/jmtm.3.1.3

Nagra, M., & Huntjens, B. (2020). Smartphone ophthalmoscopy: Patient and student practitioner perceptions. Journal of Medical Systems, 44(1), Article 10. https://doi.org/10.1007/s10916-019-1477-0

Naseem, A., Ghias, K., Bawani, S., Shahab, M. A., Nizamuddin, S., Kashif, W., Khan, K. S., Ahmad, T., & Khan, M. (2019). Designing EthAKUL: A mobile just-in-time learning environment for bioethics in Pakistan. Scholarship of Teaching and Learning in the South, 3(1), 36–56. https://doi.org/10.36615/sotls.v3i1.70

Purbrick, R. M. J., & Chong, N. V. (2015). Direct ophthalmoscopy should be taught to undergraduate medical students—No. Eye29(8), 990-991. https://doi.org/10.1038/eye.2015.91

Riel, M. (2000).  Education in the 21st century: Just-in-Time learning or learning communities, Technology and Learning, 137-160.

Russo, A., Morescalchi, F., Costagliola, C., Delcassi, L., & Semeraro, F. (2015). Comparison of smartphone ophthalmoscopy with slit-lamp biomicroscopy for grading diabetic retinopathy. American Journal of Ophthalmology, 159(2), 360-364. https://doi.org/10.1016/j.ajo.2014.11.008

Tan, C. H., Kyaw, B. M., Smith, H., Tan, C. S., & Car, L. T. (2020). Use of smartphones to detect diabetic retinopathy: Scoping review and meta-analysis of diagnostic test accuracy studies. Journal of Medical Internet Research, 22(5), e16658.

Wintergerst, M. W. M., Jansen, L. G., Holz, F. G., & Finger, R. P. (2020). Smartphone-Based Fundus Imaging-Where Are We Now? Asia-Pacific Journal of Ophthalmology, 9(4), 308–314. https://doi.org/10.1097/APO.0000000000000303

World Health Organization. (2019). Report of the 4th global scientific meeting on trachoma: Geneva, 27–29 November 2018. World Health Organization.


Submitted: 22 September 2021
Accepted: 27 April 2022
Published online: 4 October, TAPS 2022, 7(4), 1-21
https://doi.org/10.29060/TAPS.2022-7-4/OA2785

Yao Chi Gloria Leung1*, Kennedy Yao Yi Ng2*, Ka Shing Yow3*, Nerice Heng Wen Ngiam4, Dillon Guo Dong Yeo4, Angeline Jie-Yin Tey5, Melanie Si Rui Lim6, Aaron Kai Wen Tang7, Bi Hui Chew8, Celine Tham9, Jia Qi Yeo10, Tang Ching Lau11,12, Sweet Fun Wong13,14, Gerald Choon-Huat Koh15,16** & Chek Hooi Wong14,17**

1Department of Anaesthesiology, Singapore General Hospital, Singapore; 2Department of Medical Oncology, National Cancer Centre Singapore, Singapore; 3Department of General Medicine, National University Hospital, Singapore; 4Department of General Medicine, Singapore General Hospital, Singapore; 5Department of General Medicine, Tan Tock Seng Hospital, Singapore; 6Department of General Paediatrics, Kandang Kerbau Hospital, Singapore; 7Department of Psychiatry, Singapore General Hospital, Singapore; 8Tan Tock Seng Hospital, Singapore; 9Ng Teng Fong General Hospital, Singapore; 10National Healthcare Group Pharmacy, Singapore; 11Department of Medicine, NUS Yong Loo Lin School of Medicine, Singapore; 12Division of Rheumatology, University Medicine Cluster, National University Hospital, Singapore; 13Medical Board and Population Health & Community Transformation, Khoo Teck Puat Hospital, Singapore; 14Department of Geriatrics, Khoo Teck Puat Hospital, Singapore; 15Saw Swee Hock School of Public Health, National University of Singapore, Singapore; 16Future Primary Care, Ministry of Health Office of Healthcare Transformation, Singapore; 17Health Services and Systems Research, Duke-National University of Singapore Medical School, Singapore

*Co-first authors

**Co-last authors

Abstract

Introduction: Tri-Generational HomeCare (TriGen) is a student-initiated home visit programme for patients, with a key focus on undergraduate interprofessional education (IPE). We sought to validate the Readiness for Interprofessional Learning Scale (RIPLS) and evaluate TriGen’s efficacy by investigating healthcare undergraduates’ attitudes towards IPE.

Methods: Teams of healthcare undergraduates performed home visits for patients fortnightly over six months, trained by professionals from a regional hospital and a social service organisation. The RIPLS was validated using exploratory factor analysis. Evaluation of TriGen’s efficacy was performed via the administration of the RIPLS pre- and post-intervention, analysis of qualitative survey results and thematic analysis of written feedback.

Results: Of 226 undergraduate participants from 2015-2018, 79.6% were enrolled. Exploratory factor analysis revealed four factors accounting for 64.9% of total variance. One item loaded poorly and was removed. There was no difference in pre- and post-intervention RIPLS total and subscale scores. 91.6% of respondents agreed they better appreciated the importance of interprofessional collaboration (IPC) in patient care, and 72.8% said multi-disciplinary meetings (MDMs) were important for their learning. Thematic analysis revealed takeaways including learning from and teaching one another, understanding one’s own and other healthcare professionals’ roles, teamwork, and meeting undergraduates from different faculties.

Conclusion: We validated the RIPLS in Singapore and demonstrated the feasibility of an interprofessional, student-initiated home visit programme. While there was no change in RIPLS scores, the qualitative feedback suggests that there are participant-perceived benefits for IPE after undergoing this programme, even with the perceived barriers to IPE. Future programmes can work on addressing these barriers to IPE.

Keywords:           Interprofessional Education, Student-Initiated Home Visit Programme, RIPLS, Validation

Practice Highlights

  • We validated the Readiness for Interprofessional Learning Scale (RIPLS) in Singapore, a multi-ethnic Asian country.
  • A student-initiated, interprofessional, longitudinal home visit program is feasible.
  • While there was no significant change in RIPLS scores, participants reported qualitative benefits of the programme in their attitudes towards IPE.
  • Qualitative feedback highlighted four main barriers to IPE: Time constraints, unmotivated teammates, administrative burden, and unsuitable patients.

I. INTRODUCTION

Interprofessional education (IPE) aims to prepare healthcare professionals for effective collaboration, and while becoming increasingly common, is challenging to initiate, implement, evaluate and sustain (Fahs et al., 2017). Key challenges include designing a curriculum that integrates IPE with traditional academic frameworks, active engagement of facilitators and students, and accommodating various professions (Sunguya et al., 2014). IPE is context-specific, evolving, and involves continuous interaction and interdependence, and many traditional top-down approaches such as forums and lectures do not effectively teach it (Briggs & McElhaney, 2015).

Experiential IPE programmes employ a ground-up approach and can potentially tackle some of the aforementioned challenges. Studies of students involved in on-the-ground interprofessional healthcare visits to older adults have shown that such experiences improved student collaboration and students’ self-perception of interprofessional team care-related skills (Blythe & Spiring, 2020; Conti et al., 2016; McManus et al., 2017; Toth-Pal et al., 2020; Vaughn et al., 2014). A group of undergraduates from the National University of Singapore (NUS) Yong Loo Lin School of Medicine therefore initiated an experiential, student-led IPE programme aimed at improving health outcomes in older people with frequent hospital readmissions. This longitudinal service-learning programme was anchored by several educational aims, including enhancing students’ IPE outcomes and improving attitudes towards IPE.

Formal evaluation of such programmes, and investigation of student IPE attitudes after involvement in longitudinal home visit programmes, are lacking in the current IPE literature (Grice et al., 2018). This study aims to evaluate TriGen’s effectiveness by investigating student IPE attitudes using the Readiness for Interprofessional Learning Scale (RIPLS). Since the RIPLS has not been validated in the Singapore context, this study also aims to validate the scale.

II. METHODS

A. Programme Design

TriGen is a collaboration between the NUS Yong Loo Lin School of Medicine; Khoo Teck Puat Hospital, a Northern regional hospital in Singapore; and North West Community Development Council, a grassroots organisation (Ng et al., 2020a, 2020b). A non-profit, ground-up social initiative by healthcare undergraduates, it has the dual aims of i) serving the medical and social needs of older patients by providing longitudinal home visits by interprofessional student teams; and ii) educating and empowering undergraduate students through a service-learning approach, with a key focus on improving attitudes towards IPE. The programme was designed under the mentorship of university faculty members and was earmarked as a co-curricular activity aimed at improving students’ attitudes towards IPE and IPC. Older patients with frequent hospital readmissions (three or more times over six months) were followed up by healthcare undergraduates enrolled in Medicine, Nursing, Pharmacy, Social Work, Physiotherapy or Occupational Therapy courses in Singapore.

The programme begins with healthcare undergraduates undergoing didactic, skill-based training and team-based simulation training covering possible scenarios encountered during home visits (Annex 1). Each team, comprising 2-3 interdisciplinary undergraduates, conducts fortnightly visits to 1-2 patients over six months. At the midpoint and endpoint of the programme, the undergraduates assess the patients’ needs and present at multi-disciplinary meetings (MDMs) chaired by healthcare professionals and grassroots staff, who guide the undergraduates in executing a management plan.

This IPE programme was designed based on educational principles for adult learners outlined by Knowles (1984). First, it provided healthcare undergraduates with opportunities for experiential learning anchored in the service-learning approach. Second, it was largely problem-based group learning, with most training sessions being team-based and scenario-based. MDMs were also problem-based and encouraged undergraduates to brainstorm ideas to address their patients’ issues. Third, the service they provided in this programme modelled the work they may engage in after graduation, so what they learned was of immediate relevance to their current study and future practice. Lastly, the programme gave healthcare undergraduates the autonomy to direct their own learning: participation was voluntary and participants had the flexibility to pursue further self-study of topics of interest. Key student outcomes included readiness for IPE (including teamwork and collaboration, professional identity, and roles and responsibilities) and a better appreciation of IPC.

B. Evaluation Approach

This study used the framework by Kirkpatrick (1959), as expanded by Barr et al. (2005), to evaluate the effectiveness of TriGen in improving healthcare undergraduates’ attitudes towards IPE, particularly at evaluation levels 1, 2a and 2b, which centre on learners’ reactions, modification of attitudes and perceptions, and the acquisition of knowledge or skills (Table 1). Combining quantitative and qualitative data collection in a survey was thought to be most appropriate for capturing the data and enriching the evaluation, and was hence the approach utilised for this research (Figure 1).

Evaluation Level: Methods and Measures (Timeframe)

Level 1: Learners’ reactions (participants’ views of their learning experience and opinions about the programme): participants’ self-reported feedback of IPE learning (post-intervention); qualitative feedback (post-intervention)

Level 2a: Modification of attitudes and perceptions: participants’ self-reported feedback of IPE learning (post-intervention); qualitative feedback (post-intervention); Readiness for Interprofessional Learning Scale (pre- and post-intervention)

Level 2b: Acquisition of knowledge/skills (concepts, procedures, principles, and skills): qualitative feedback (post-intervention)

Table 1: Components of Kirkpatrick/Barr et al. evaluation framework as applied to TriGen

Figure 1: Flowchart of study components

C. Quantitative Measures

The RIPLS (Parsell & Bligh, 1999) was among the first scales developed to measure attitudes towards interprofessional learning. It assesses student readiness for IPE and IPC with other healthcare professionals and has been reported to be sensitive to differences in students’ attitudes towards IPE (Berger-Estilita et al., 2020). While a few studies have validated it in Asian countries (China, Indonesia, Japan), none have been performed in Singapore, a multi-ethnic Asian country with English as the predominant language of instruction (Ganotice & Chan, 2018; Lestari et al., 2016; Li et al., 2018; Tamura et al., 2012).

The RIPLS, a 19-item questionnaire comprising 4 subscales (“Teamwork and Collaboration”; “Positive Professional Identity”; “Negative Professional Identity” and “Roles and Responsibilities”), was administered pre- and post-intervention (McFadyen et al., 2005). Higher RIPLS scores imply greater readiness for interprofessional learning. This study validates the RIPLS in the Singapore context for the first time, then employs it for quantitative evaluation of TriGen. Additionally, separate from the RIPLS, three questions were added as a direct measure of participants’ reaction (Level 1), “I better appreciate the importance of IPC in the care of patients through the programme”, “The multidisciplinary meetings organised were important for my learning”, and “I would recommend the programme to my friends.”
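The subscale structure just described can be sketched in code. The item-to-subscale grouping below follows the four-factor structure reported in Table 2 of this study; treating items 10-12, 17 and 18 as reverse-scored negatively worded statements, and assuming a 5-point Likert scale, are illustrative assumptions rather than details specified in this article.

```python
# Sketch of RIPLS scoring on an assumed 5-point Likert scale
# (1 = strongly disagree, 5 = strongly agree). Item groupings follow the
# four-factor structure in Table 2; the reverse-coded set is an assumption.

SUBSCALES = {
    "Teamwork and Collaboration": range(1, 10),       # items 1-9
    "Negative Professional Identity": range(10, 13),  # items 10-12
    "Positive Professional Identity": range(13, 17),  # items 13-16
    "Roles and Responsibilities": range(17, 20),      # items 17-19
}
REVERSE_CODED = {10, 11, 12, 17, 18}  # assumed negatively worded items


def score_ripls(responses):
    """responses: dict mapping item number (1-19) to a Likert rating 1-5.
    Returns subscale totals and an overall score (higher = more ready)."""
    scores = {}
    for subscale, items in SUBSCALES.items():
        total = 0
        for item in items:
            rating = responses[item]
            if item in REVERSE_CODED:
                rating = 6 - rating  # flip so 5 always means "more ready"
            total += rating
        scores[subscale] = total
    scores["Total"] = sum(scores[s] for s in SUBSCALES)
    return scores


# A maximally ready respondent: 5 on positive items, 1 on reverse-coded ones.
best = {i: (1 if i in REVERSE_CODED else 5) for i in range(1, 20)}
print(score_ripls(best)["Total"])  # 95 (19 items x 5 after reverse-coding)
```

A neutral respondent (all 3s) would score 57 under the same assumptions, which makes the reported baseline mean of 76.6 easy to place on the scale.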

1) Statistical Analysis: The Shapiro-Wilk test was used to assess whether the data followed a normal distribution (Shapiro & Wilk, 1965). Factor analysis was conducted to explore the construct validity of the RIPLS, and Cronbach’s alpha was computed to determine internal consistency. The suitability of the correlation matrix was determined by the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Bartlett’s test of sphericity. The number of factors retained for the initial solutions and entered into the rotation was determined by applying Kaiser’s criterion (eigenvalues >1). The initial factor extraction was performed using principal component analysis. Exploratory factor analysis was then conducted based on the RIPLS four-subscale structure. A paired t-test comparing baseline and post-intervention responses was computed for each survey item to determine significant differences (p ≤ 0.05). One-way ANOVA was performed to assess for demographic factors that correlated with pre-intervention scores and with the magnitude of change in RIPLS scores; where it demonstrated an overall difference between groups, post-hoc Tukey’s HSD was performed. All statistical analyses were performed with the Statistical Package for the Social Sciences (SPSS, Version 23.0, Chicago, Illinois).
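The per-item pre/post comparison described above can be sketched with SciPy's paired t-test. The data here are synthetic Likert ratings, not the study's data; the sample size of 180 simply mirrors the number of survey respondents reported later.

```python
# Sketch of the per-item paired t-test on synthetic Likert data
# (illustrative only; not the study's data). scipy.stats.ttest_rel
# implements the paired test; p <= 0.05 is treated as significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 180  # roughly the number of survey respondents

# Simulated ratings for one item: post-intervention shifted upward slightly.
pre = rng.integers(2, 6, size=n).astype(float)
post = np.clip(pre + rng.choice([0, 0, 1], size=n), 1, 5)

t_stat, p_value = stats.ttest_rel(post, pre)
mean_diff = (post - pre).mean()
print(f"mean difference {mean_diff:+.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value <= 0.05:
    print("significant change on this item")
```

In the study this test is repeated for each of the 19 items, producing the per-item mean differences and p-values reported in Table 4.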

D. Qualitative Measures

Post-intervention qualitative feedback regarding participants’ learning experiences was collected through online surveys. Questions included: What did you learn about interprofessional collaboration? What are your learning points after completing the project? Would you recommend this project to your peers, and what are your reasons? These questions were chosen to better understand participants’ reactions to the programme, their attitudes towards IPE and IPC, and any other key learning points.

1) Thematic Analysis: All survey participants were encouraged to participate in the qualitative research, with a total of 163 recruited to give written qualitative feedback on the programme. Given the relatively large sample of data, thematic analysis was chosen to explore and interpret the dataset, distilling it into recurring ideas (Braun & Clarke, 2006; Kiger & Varpio, 2020). Analysis was performed on participants’ qualitative descriptions of their learning experiences, with constant comparison analysis used to identify patterns in participants’ responses and develop a coding schema. Two coders independently identified major themes from the text within all transcripts, with reference to the research questions. They discussed and resolved any disagreements. No member checking was performed. A common coding schema was generated and applied to all the transcripts.

E. Ethical Approval

Ethical approval was obtained from the NUS institutional review board (B-15-272). Study participation was entirely voluntary and anonymous. Informed consent was taken from participants before data collection commenced, and they were allowed to withdraw from the research at any point in time. No incentives were provided to study participants.

III. RESULTS

A total of 226 healthcare undergraduates participated in TriGen from 2015-2018. The response rate for the RIPLS was 79.6%.

A. Demographics

Median age was 21 years (range 18-41). 62.2% of participants were female and 37.8% male. 31.7% were medical students, 12.8% nursing students, 42.2% pharmacy students, 10.0% social work students, and 3.3% therapy students. First- and second-year students comprised 62.2% of participants, while third- to fifth-year students comprised 37.8%. 65.0% had participated in previous IPE activities.

B. Construct Validity

The KMO index was 0.902, indicating sampling adequacy. The chi-square statistic for Bartlett’s test of sphericity was 1919.445 (df = 171, p<0.001), indicating suitability for factor analysis.
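Bartlett's test asks whether the item correlation matrix differs from the identity matrix, i.e., whether the items are correlated enough for factor analysis to be meaningful. A minimal sketch using the standard chi-square approximation (this is textbook formula code, not the study's SPSS output):

```python
# Bartlett's test of sphericity: does the p x p correlation matrix R
# differ from the identity? Standard chi-square approximation.
import numpy as np
from scipy.stats import chi2


def bartlett_sphericity(R, n):
    """R: p x p correlation matrix; n: number of observations."""
    p = R.shape[0]
    # -ln(det R) written as ln(1/det R); det R <= 1 for correlation matrices
    statistic = (n - 1 - (2 * p + 5) / 6) * np.log(1.0 / np.linalg.det(R))
    df = p * (p - 1) // 2
    p_value = chi2.sf(statistic, df)
    return statistic, df, p_value


# Perfectly uncorrelated items: det(R) = 1, so the statistic is 0 and p = 1.
stat, df, p = bartlett_sphericity(np.eye(19), n=180)
print(f"chi2 = {stat:.1f}, df = {df}, p = {p:.2f}")  # chi2 = 0.0, df = 171, p = 1.00
```

Note that 19 items give df = 19 x 18 / 2 = 171, matching the degrees of freedom reported above.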

Principal component analysis yielded four components largely consistent with the four-subscale model of the RIPLS (Barr et al., 2005) (Annex 2). However, one item, “I am not sure what my professional role will be”, had a low loading of 0.285 under its original subscale, “Roles and Responsibilities”, and a borderline loading of 0.459 under the “Negative Professional Identity” subscale. It was removed from subsequent analyses in view of its poor fit with the theoretical construct (Table 2).

Items are grouped by the subscale on which each loaded, with factor loadings in parentheses.

Teamwork and Collaboration
1. Shared learning will help me to think positively about other healthcare professionals. (0.794)
2. Learning with other health and social care students before qualification would improve relationships after qualification. (0.781)
3. Team-working skills are essential for all health and social care students to learn. (0.773)
4. Shared learning will help me to understand my own limitations. (0.767)
5. Communication skills should be learnt with other health and social care students. (0.746)
6. Learning with other students/professionals will make me become a more effective member of a health and social care team. (0.742)
7. For small group learning to work, students need to trust and respect each other. (0.739)
8. Shared learning with other healthcare students will increase my ability to understand clinical problems. (0.723)
9. Patients would ultimately benefit if health and social care students/professionals worked together to solve patient problems. (0.650)

Negative Professional Identity
10. It is not necessary for undergraduate health and social care students to learn together. (0.882)
11. I don’t want to waste time learning with other health and social care students. (0.854)
12. Clinical problem-solving skills can only be learnt with students from my own department. (0.799)

Positive Professional Identity
13. Shared learning will help to clarify the nature of patient problems. (0.658)
14. Shared learning before qualification will help me become a better team worker. (0.642)
15. I would welcome the opportunity to work on small group projects with other health and social care students. (0.614)
16. Shared learning with other health and social care professionals will help me to communicate better with patients and other healthcare professionals. (0.567)

Roles and Responsibilities
17. The function of nurses and therapists is mainly to provide support for doctors. (0.836)
18. I am not sure what my professional role will be. (0.459* under Negative Professional Identity; 0.285 under Roles and Responsibilities)
19. I have to acquire much more knowledge and skills than other health or social care students. (0.517)

Table 2: Exploratory Factor Analysis of the RIPLS – Contribution of Items to Each Component

*The highest loading value of each item under the four subscales is shown (except for item 18). A loading value of >0.5 was taken to be satisfactory. Item 18, “I am not sure what my professional role will be”, was deemed borderline satisfactory at a loading of 0.459 under the “Negative Professional Identity” subscale; its loading was lower, at 0.285, under its original subscale, “Roles and Responsibilities”.

C. Internal Consistency

Cronbach’s alpha was 0.848 for the RIPLS total score, suggesting good internal consistency.
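The internal-consistency statistic reported here can be computed directly from an items-by-respondents score matrix. A minimal numpy sketch of the standard formula, on illustrative data rather than the study's:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance of totals).
# Illustrative data only, not the study's responses.
import numpy as np


def cronbach_alpha(scores):
    """scores: (n_respondents, n_items) array of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)


# Two perfectly consistent items give alpha = 1; noise lowers it.
consistent = np.array([[1, 1], [2, 2], [3, 3], [4, 4], [5, 5]])
print(cronbach_alpha(consistent))  # 1.0
```

Values around 0.8-0.9, like the 0.848 reported above, are conventionally read as good internal consistency without redundancy.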

D. Baseline RIPLS Score

Table 3: Total RIPLS scores.

Subscale scores can be found in Annexes 3 to 6.

The mean baseline total RIPLS score was 76.6 (95% CI 75.6–77.6). There was a baseline difference between faculties (p=0.001), with medical and therapy undergraduates having higher scores than pharmacy students (mean difference 3.85, 95% CI 0.59–7.11, p=0.012 and mean difference 8.83, 95% CI 0.94–16.7, p=0.020, respectively) (Table 3). As for subscales, there was a difference in baseline “Teamwork and Collaboration” scores between years of study: Year 1–2 undergraduates had a higher baseline score of 40.8 (40.0–41.5) versus 39.5 (38.6–40.4) for Year 3–5 undergraduates (p=0.038) (Annex 3). Medical undergraduates had higher baseline scores than pharmacy undergraduates on the “Teamwork and Collaboration” subscale (41.2, 40.2–42.2 versus 39.3, 38.4–40.1; p=0.034) and the “Positive Professional Identity” subscale (17.9, 17.4–18.5 versus 17.0, 16.6–17.4; p=0.036) (Annexes 3-4). Social work undergraduates had the lowest baseline “Roles and Responsibilities” score, averaging 4.94 (4.34–5.55), compared to all other faculties (Annex 6).
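The between-faculty baseline comparison follows the one-way ANOVA procedure named in the methods. A sketch with scipy.stats.f_oneway on synthetic RIPLS totals (the group means, spreads and sizes below are invented for illustration); a significant result would then be followed by a post-hoc test such as Tukey's HSD, as the methods describe.

```python
# One-way ANOVA across faculties on synthetic baseline RIPLS totals
# (illustrative numbers only, not the study's data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
medicine = rng.normal(loc=79, scale=4, size=60)  # assumed group parameters
pharmacy = rng.normal(loc=75, scale=4, size=70)
therapy = rng.normal(loc=84, scale=4, size=10)

f_stat, p_value = stats.f_oneway(medicine, pharmacy, therapy)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value <= 0.05:
    print("baseline RIPLS totals differ between at least two faculties")
```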

E. Change in RIPLS Score Post-Intervention

There was no significant difference between the pre- and post-intervention RIPLS total scores or the “Teamwork and Collaboration” subscale scores (Table 3, Annex 3). Under the “Positive Professional Identity” subscale, there was a decrease in post-intervention scores among Year 1-2 students (mean difference -0.500, 95% CI -0.931 to -0.069, p=0.023) and students with no participation in activities outside of their faculty (mean difference -0.403, -0.768 to -0.037, p=0.031) (Annex 4). Under the “Negative Professional Identity” subscale, there was a decrease in post-intervention scores in medical students (mean difference -0.667, -1.31 to -0.020, p=0.044) and social work students (-0.889, -1.70 to -0.073, p=0.035) (Annex 5). There was an increase in the post-intervention score amongst female students under the “Roles and Responsibilities” subscale (mean difference 0.384, 0.065–0.703, p=0.019) (Annex 6).

F. Individual Item Analysis

Negatively coded statements such as “the function of nurses and therapists is mainly to provide support for doctors” (Item 17) and “I am not sure what my professional role will be” (Item 18) showed significant increases in scores post-intervention (0.23, p=0.005 and 0.17, p=0.016, respectively). Other significant findings include a decrease in scores for the statements “Shared learning with other health and social care professionals will help me to communicate better with patients and other healthcare professionals” (Item 13) (-0.14, p=0.013) and “Shared learning will help to clarify the nature of patient problems” (Item 15) (-0.10, p=0.034) (Table 4).

Table 4: RIPLS (Individual items analysis)

G. Self-Reported Feedback on Interprofessional Learning

91.6% of participants agreed that they could “better appreciate the importance of interprofessional collaboration in the care of patients”. 72.8% said MDMs were important for their learning, and 91.9% of respondents would recommend the programme to their friends.

H. Qualitative Feedback

163 of 180 survey respondents participated in the qualitative research (response rate 90.6%) (Figure 1). 34.4% of respondents were male and 65.6% female. 33.1% were studying Medicine, 12.3% Nursing, 40.5% Pharmacy, 11.0% Social Work and 3.1% Therapy. 54.6% of respondents were in their early years of study (Years 1–2). 74.8% had previous exposure to IPE. Thematic analysis yielded the following themes:

1) Learning and teaching one another: Healthcare undergraduates found value in learning from one another, sharing knowledge and skills gained from their respective curricula.

 

“I feel more equipped and prepared to teach and learn from other healthcare professionals”

21-year-old female third-year medical student

 

“I learnt a lot from my social work team leader and how to consider the social aspects of issues the elderly face”

20-year-old male first-year medical student

 

2) Understanding the role of other healthcare professionals: Healthcare undergraduates learned the role of other healthcare professionals and gained new insights into how different healthcare professionals contributed to the care of the patient.

 

[I have] learn[ed] … how we can tap on each other[’s] strengths to come up with a care plan for the patients

21-year-old female third-year pharmacy student

 

Understanding what medicine, nursing [and] pharmacy does make quite a lot of difference to how we perceive and thus, work with them.

23-year-old female second-year social work student

 

3) Understanding one’s own role: Healthcare undergraduates reported developing a greater understanding of the roles and responsibilities they played as a part of a multi-disciplinary team.

 

I am now more aware of the role and responsibility I have as a healthcare professional.

21-year-old female first-year pharmacy student

 

Working in a multi-disciplinary team gave me a feel of how it may be like caring for a patient as a team in my future career.

20-year-old female first-year social work student

 

4) Teamwork: Healthcare undergraduates appreciated the need for collaboration and teamwork within a multi-disciplinary team. They learned about the importance of compromise.

Working with different people, in terms of personality, faculty, etcetera – I learnt to give and take and be more understanding towards the others.

21-year-old second-year social work student

 

It has allowed me to better understand … how the different professions can come together to better serve the needs of patients.

20-year-old female second-year pharmacy student

 

5) Opportunity to meet people from other faculties: Healthcare undergraduates valued meeting people from other faculties and developing collaborative relationships they would otherwise not have had the opportunity to.

 

I got to know seniors in medicine and peers from pharmacy.

20-year-old female first-year nursing student

 

It is a very unique experience, having the chance to interact with … other university students from different healthcare faculties.

20-year-old female second-year pharmacy student

 

6) Factors limiting learning: Factors limiting learning included time constraints, unmotivated teammates, administrative burden and lack of suitable patients. For the latter, some undergraduates felt that their care was restricted to companionship for patients who were already able to manage their own chronic conditions well and did not require further help from the healthcare undergraduates.

IV. DISCUSSION

A. Validation of the RIPLS in Singapore

This study validated the RIPLS in the Singapore context. The final model is the same as that proposed by McFadyen et al. (2005), without item 18, “I am not sure what my professional role will be”, from the “Roles and Responsibilities” subscale. The poor fit of this item with this study’s theoretical construct could be because participants were mostly in their pre-clinical years and may not yet have understood professional roles and responsibilities, given their limited on-the-job experience, a reason also proposed by McFadyen et al. (2005) and Tyastuti et al. (2014). Tyastuti et al. (2014) found that this item, along with “I have to acquire much more knowledge and skills than other healthcare students” (item 19) from the same subscale, had loading factors of <0.5, and removed the entire “Roles and Responsibilities” subscale from the Indonesian version of the RIPLS. Other studies validating the RIPLS also experienced issues with this subscale (Lauffs et al., 2008; Lestari et al., 2016; McFadyen et al., 2005).

B. Baseline RIPLS score

The mean baseline RIPLS score is comparable with that reported by Chua et al. (2015), another study conducted in Singapore, which measured change in the RIPLS after a one-day IPE conference. They also found higher baseline RIPLS scores for medical undergraduates versus other faculties, a finding noted in this study and in one conducted in a culturally similar country (Lestari et al., 2016). However, this finding appears inconsistent across settings, as other studies (Aziz et al., 2011; de Oliveira et al., 2018) have found the contrary.

Chua et al. (2015) also found that prior IPE experience resulted in higher baseline RIPLS scores, a finding not replicated in this study. We hypothesise that while 65.0% of this study’s participants had previous IPE exposure (versus 10.6% in Chua et al. (2015)), the heterogeneous nature of the IPE programmes they had previously participated in may have had differing efficacy in improving IPE attitudes.

This study found undergraduates in their later years had a lower baseline “Teamwork and Collaboration” subscale score, versus those in their early years. We postulate that undergraduates with more clinical experience better understand the challenges of IPE in practice, a finding echoed by Judge et al. (2015).

Pharmacy students, but not medical students, were mandated by their curriculum to fulfil volunteering hours, which could explain the former’s lower baseline scores for the total RIPLS and the “Teamwork and Collaboration” and “Positive Professional Identity” subscales, since they were likely less motivated by IPE when choosing to participate.

Social work undergraduates’ low baseline “Roles and Responsibilities” score likely reflects their minimal exposure to medical social work unless they elect to take healthcare modules in their senior years of study.

C. Change in Pre- and Post-intervention RIPLS Scores

Our study did not show a significant difference between the pre- and post-intervention RIPLS total scores or the “Teamwork and Collaboration” subscale scores. Additionally, there was a decrease in post-intervention scores under the “Positive Professional Identity” subscale for Year 1-2 students, and under the “Negative Professional Identity” subscale for medical and social work students. This contrasts with the literature, where previous studies involving conferences (Chua et al., 2015) or standalone learning modules (Wakely et al., 2013; Zaudke et al., 2016) demonstrated a significant difference in the total RIPLS score pre- and post-intervention. Possible reasons for this are discussed further in section E.

There was a significant increase in the post-intervention score among female students under the “Roles and Responsibility” subscale. Previous studies have suggested that there are gender-specific differences in perception towards IPE, with female students having a more positive attitude towards IPE (Hansson et al., 2010; Wilhelmsson et al., 2011). In addition, the individual item analysis showed that negatively coded statements relating to the “Roles and Responsibility” subscale, such as “the function of nurses and therapists is mainly to provide support for doctors” (Item 17) and “I am not sure what my professional role will be” (Item 18), had significant increases in scores post-intervention. This is encouraging and demonstrates the success of the programme in helping students understand the respective roles and responsibilities of each profession, which is a crucial part of IPE and eventually IPC.

Other significant findings in the individual item analysis include a decrease in scores for the statements “Shared learning with other health and social care professionals will help me to communicate better with patients and other healthcare professionals” (Item 13), and “Shared learning will help to clarify the nature of patient problems” (Item 15). These findings suggest that the programme can be improved by incorporating more modules on communication between healthcare professionals and shared problem-solving.

D. Qualitative Feedback

While the lack of a significant difference between pre- and post-intervention RIPLS scores suggests no change in attitudes, the qualitative data revealed that the majority of undergraduates better appreciated the importance of IPC for patient care, and many felt that the MDMs were useful for their learning.

Qualitative analysis revealed five major themes in the undergraduates’ learning pertaining to IPE. Participants learned from and taught each other. Being able to freely learn from and teach one another requires mutual trust and respect which are key elements of collaborative practices (de Oliveira et al., 2018). Participants reported better understanding of their own and other healthcare professionals’ roles; these are recognised as crucial components of collaborative practice (Canadian Interprofessional Health Collaborative, 2010). Undergraduates also shared that they learned about teamwork, specifically, conflict resolution and compromise. Finally, undergraduates appreciated the opportunities to meet fellow undergraduates from different faculties. It has been observed in many successful IPE programmes that informal social interactions are potentially as important as the actual IPE activities (Lie et al., 2016). We observed that the relationships built between participants of the programme often persisted beyond the completion of the programme; these relationships could benefit the institution and healthcare system (Hoffman et al., 2008).

E. Possible Reasons Underlying Lack of Improvement in RIPLS Scores

First, as mentioned earlier, the RIPLS has been described as having psychometric issues, with multiple researchers modifying its subscales (Mahler et al., 2015). Second, Schmitz and Brandt (2015) suggested that the RIPLS is insensitive to course improvements and to pre- versus post-intervention changes in attitudes. We chose the RIPLS at the start of 2014 because it had been widely used and validated and was simple to administer, and we also sought to validate it in Singapore for the first time; unfortunately, few studies on its potential issues had been published at the time to inform the design of this study. Third, the longitudinal nature of the programme may have given undergraduates greater insight into the challenges of IPE and the realities of collaborating within interprofessional teams, tempering their idealism.

Lestari et al. (2016) described how nursing and midwifery undergraduates had lower RIPLS scores as compared to medical and dentistry undergraduates as they had prior clinical experience and likely observed less than exemplary interactions amongst members of healthcare teams. Similarly, Makino et al. (2013) found that graduates of an IPE programme had a lower mean score on the Modified Attitudes Toward Health Care Teams Scale (ATHCTS) as compared to current students. The authors suggested that the alumni’s negative attitude may be due to their real-world experience. Several structural issues in clinical practice have been identified that contribute to this trend, for example competition between professionals (Tremblay et al., 2010) and power struggles (Paradis & Whitehead, 2015).

F. Barriers to IPE

Undergraduates reported four main barriers: time constraints, unmotivated teammates, administrative burden, and unsuitable patients. Other studies, including Alexandraki et al. (2017) and West et al. (2016), have also identified time constraints as a barrier. As this programme was voluntary, undergraduates had to take time out of their already packed curriculum to participate, and the selection of volunteers was not a stringent process. Additionally, as participants were contributing to clinical care, documentation of visits was required. Multiple studies have shown that physicians deem documentation and administrative work burdensome, and excessive time spent on these tasks may be associated with physician burnout (Patel et al., 2018; Wright & Katz, 2018).

To address these barriers, awarding academic credit for participation, selecting participants more stringently, streamlining administrative work, and choosing patients prudently may be considered. These measures are already being implemented by the programme organisers.

G. Strengths and Limitations

The strength of this study lies in its use of both quantitative and qualitative data, grounded in Kirkpatrick’s (1959) established framework, to evaluate a novel experiential IPE programme. Its limitations include being conducted at a single institution with volunteer participants, who form a self-selected group; the results may therefore not be generalisable. There was also no control arm for the intervention. In addition, there was large variation in baseline RIPLS scores, which could be addressed with a more robust study design that controls for baseline differences. Lastly, using only a survey for data collection may limit the depth of the qualitative data obtained; further studies could include qualitative interviews.

V. CONCLUSION

We validated the RIPLS in Singapore and demonstrated the feasibility of an interprofessional, student-initiated home visit programme. While there was no significant change in RIPLS scores, the qualitative feedback suggests that participants perceived benefits for IPE from the programme, despite the barriers they reported. Future programmes can work on addressing these barriers to IPE.

Notes on Contributors

Gloria Yao Chi Leung contributed to the conception and design of the work, the acquisition, analysis, and interpretation of data for the work, drafting and revising the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.

Kennedy Yao Yi Ng contributed to the conception and design of the work, analysis and interpretation of data for the work, drafting and revising the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.

Yow Ka Shing contributed to the conception and design of the work, the acquisition and interpretation of data for the work, drafting and revising the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.

Nerice Heng Wen Ngiam contributed to the conception and design of the work, the acquisition of data for the work, drafting the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.

Dillon Guo Dong Yeo contributed to the conception and design of the work, drafting the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.

Angeline Jie-Yin Tey contributed to the conception and design of the work, drafting the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.

Melanie Si Rui Lim contributed to the conception and design of the work, drafting the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.

Aaron Kai Wen Tang contributed to the conception and design of the work, drafting the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.

Chew Bi Hui contributed to the conception and design of the work, drafting the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.

Celine Yi Xin Tham contributed to the conception and design of the work, drafting the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.

Yeo Jia Qi contributed to the conception and design of the work, drafting the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.

Lau Tang Ching contributed to the conception and design of the work, critical revision of the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.

Wong Sweet Fun contributed to the conception and design of the work, critical revision of the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.

Gerald Choon Huat Koh contributed to the conception and design of the work, interpretation of the data for the work, critical revision of the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.

Wong Chek Hooi contributed to the conception and design of the work, interpretation of the data for the work, critical revision of the manuscript, approves of the publishing of the manuscript, and agrees to be accountable for the accuracy of the work.

Ethical Approval

Ethical approval was obtained from the NUS institutional review board (B-15-272). Study participation was entirely voluntary and anonymous. Informed consent was taken from participants before data collection commenced, and they were allowed to withdraw from the research at any point in time. No incentives were provided to study participants.

Data Availability

In accordance with institutional policy, the research dataset is available on reasonable request to the corresponding author.

Acknowledgement

The authors would like to thank the Tri-Generational HomeCare Organising Committee from 2014 to 2018 for supporting the study. They would like to extend their thanks to the National University of Singapore, Yong Loo Lin School of Medicine, Dean’s Office; the North West Community Development Council; Khoo Teck Puat Hospital, Singapore; Geriatric Education and Research Institute, Singapore. Finally, they would like to thank the volunteers for their generosity and the patients for their hospitality.

Funding

National University of Singapore, Yong Loo Lin School of Medicine, Dean’s Office; the North West Community Development Council; Khoo Teck Puat Hospital, Singapore provided funding support for the purchase of medical consumables, refreshments and logistics for the program.

Declaration of Interest

There are no conflicts of interest.

References

Alexandraki, I., Hernandez, C. A., Torre, D. M., & Chretien, K. C. (2017). Interprofessional education in the internal medicine clerkship post-LCME standard issuance: Results of a national survey. Journal of General Internal Medicine, 32(8), 871–876. https://doi.org/10.1007/s11606-017-4004-3

Aziz, Z., Teck, L. C., & Yen, P. Y. (2011). The attitudes of medical, nursing and pharmacy students to inter-professional learning. Procedia – Social and Behavioural Sciences, 29, 639–645. https://doi.org/10.1016/j.sbspro.2011.11.287  

Barr, H., Koppel, I., Reeves, S., Hammick, M., & Freeth, D. (2005). Effective interprofessional education: Argument, assumption, and evidence (1st edition). Wiley-Blackwell.

Berger-Estilita, J., Fuchs, A., Hahn, M., Chiang, H., & Greif, R. (2020). Attitudes towards interprofessional education in the medical curriculum: A systematic review of the literature. BMC Medical Education, 20(1), Article 254. https://doi.org/10.1186/s12909-020-02176-4

Blythe, J., & Spiring, R. (2020). The virtual home visit. Education for Primary Care, 31(4), 244–246. https://doi.org/10.1080/14739879.2020.1772119

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa

Briggs, M. C. E., & McElhaney, J. E. (2015). Frailty and interprofessional collaboration. Interdisciplinary Topics in Gerontology and Geriatrics, 41, 121–136. https://doi.org/10.1159/000381204

Canadian Interprofessional Health Collaborative. (2010). A national interprofessional competency framework. https://phabc.org/wp-content/uploads/2015/07/CIHC-National-Interprofessional-Competency-Framework.pdf

Chua, A. Z., Lo, D. Y., Ho, W. H., Koh, Y. Q., Lim, D. S., Tam, J. K., Liaw, S. Y., & Koh, G. C. (2015). The effectiveness of a shared conference experience in improving undergraduate medical and nursing students’ attitudes towards inter-professional education in an Asian country: A before and after study. BMC Medical Education, 15, Article 233. https://doi.org/10.1186/s12909-015-0509-9

Conti, G., Bowers, C., O’Connell, M. B., Bruer, S., Bugdalski-Stutrud, C., Smith, G., Bickes, J., & Mendez, J. (2016). Examining the effects of an experiential interprofessional education activity with older adults. Journal of Interprofessional Care, 30(2), 184–190. https://doi.org/10.3109/13561820.2015.1092428

de Oliveira, V. F., Bittencourt, M. F., Navarro Pinto, Í. F., Lucchetti, A. L. G., da Silva Ezequiel, O., & Lucchetti, G. (2018). Comparison of the readiness for interprofessional learning and the rate of contact among students from nine different healthcare courses. Nurse Education Today, 63, 64–68. https://doi.org/10.1016/j.nedt.2018.01.013

Fahs, D. B., Honan, L., Gonzalez-Colaso, R., & Colson, E. R. (2017). Interprofessional education development: Not for the faint of heart. Advances in Medical Education and Practice, 8, 329–336. https://doi.org/10.2147/AMEP.S133426

Ganotice, F. A., & Chan, L. K. (2018). Construct validation of the English version of Readiness for Interprofessional Learning Scale (RIPLS): Are Chinese undergraduate students ready for ‘shared learning’? Journal of Interprofessional Care, 32(1), 69–74. https://doi.org/10.1080/13561820.2017.1359508

Grice, G. R., Thomason, A. R., Meny, L. M., Pinelli, N. R., Martello, J. L., & Zorek, J. A. (2018). Intentional interprofessional experiential education. American Journal of Pharmaceutical Education, 82(3), Article 6502. https://doi.org/10.5688/ajpe6502

Hansson, A., Foldevi, M., & Mattsson, B. (2010). Medical students’ attitudes toward collaboration between doctors and nurses – A comparison between two Swedish universities. Journal of Interprofessional Care, 24(3), 242–250. https://doi.org/10.3109/13561820903163439

Hoffman, S. J., Rosenfield, D., Gilbert, J. H. V., & Oandasan, I. F. (2008). Student leadership in interprofessional education: Benefits, challenges and implications for educators, researchers and policymakers. Medical Education, 42(7), 654–661. https://doi.org/10.1111/j.1365-2923.2008.03042.x 

Judge, M. P., Polifroni, E. C., & Zhu, S. (2015). Influence of student attributes on readiness for interprofessional learning across multiple healthcare disciplines: Identifying factors to inform educational development. International Journal of Nursing Sciences, 2(3), 248–252. https://doi.org/10.1016/j.ijnss.2015.07.007

Kiger, M. E., & Varpio, L. (2020). Thematic analysis of qualitative data: AMEE Guide No. 131. Medical Teacher, 42(8), 846–854. https://doi.org/10.1080/0142159X.2020.1755030

Kirkpatrick, D. L. (1959). Techniques for evaluating training programs. Journal of the American Society of Training Directors, 13, 21–26.

Knowles, M. S. (1984). Andragogy in action: Applying modern principles of adult learning (1st edition). Jossey-Bass.

Lauffs, M., Ponzer, S., Saboonchi, F., Lonka, K., Hylin, U., & Mattiasson, A.-C. (2008). Cross-cultural adaptation of the Swedish version of Readiness for Interprofessional Learning Scale (RIPLS). Medical Education, 42(4), 405–411. https://doi.org/10.1111/j.1365-2923.2008.03017.x

Lestari, E., Stalmeijer, R. E., Widyandana, D., & Scherpbier, A. (2016). Understanding students’ readiness for interprofessional learning in an Asian context: A mixed-methods study. BMC Medical Education, 16, Article 179. https://doi.org/10.1186/s12909-016-0704-3 

Li, Z., Sun, Y., & Zhang, Y. (2018). Adaptation and reliability of the Readiness for Inter Professional Learning Scale (RIPLS) in the Chinese health care students setting. BMC Medical Education, 18(1), Article 309. https://doi.org/10.1186/s12909-018-1423-8

Lie, D. A., Forest, C. P., Walsh, A., Banzali, Y., & Lohenry, K. (2016). What and how do students learn in an interprofessional student-run clinic? An educational framework for team-based care. Medical Education Online, 21(1), Article 31900. https://doi.org/10.3402/meo.v21.31900 

Mahler, C., Berger, S., & Reeves, S. (2015). The Readiness for Interprofessional Learning Scale (RIPLS): A problematic evaluative scale for the interprofessional field. Journal of Interprofessional Care, 29(4), 289–291. https://doi.org/10.3109/13561820.2015.1059652

Makino, T., Shinozaki, H., Hayashi, K., Lee, B., Matsui, H., Kururi, N., Kazama, H., Ogawara, H., Tozato, F., Iwasaki, K., Asakawa, Y., Abe, Y., Uchida, Y., Kanaizumi, S., Sakou, K., & Watanabe, H. (2013). Attitudes toward interprofessional healthcare teams: A comparison between undergraduate students and alumni. Journal of Interprofessional Care, 27(3), 261–268. https://doi.org/10.3109/13561820.2012.751901

McFadyen, A. K., Webster, V., Strachan, K., Figgins, E., Brown, H., & McKechnie, J. (2005). The Readiness for Interprofessional Learning Scale: A possible more stable sub-scale model for the original version of RIPLS. Journal of Interprofessional Care, 19(6), 595–603. https://doi.org/10.1080/13561820500430157

McManus, K., Shannon, K., Rhodes, D. L., Edgar, J. D., & Cox, C. (2017). An interprofessional education program’s impact on attitudes toward and desire to work with older adults. Education for Health, 30(2), 172–175. https://doi.org/10.4103/efh.EfH_2_15

Ng, K. Y. Y., Leung, G. Y. C., Tey, A. J.-Y., Chaung, J. Q., Lee, S. M., Soundararajan, A., Yow, K. S., Ngiam, N. H. W., Lau, T. C., Wong, S. F., Wong, C. H., & Koh, G. C.-H. (2020a). Bridging the intergenerational gap: The outcomes of a student-initiated, longitudinal, inter-professional, inter-generational home visit program. BMC Medical Education, 20(1), Article 148. https://doi.org/10.1186/s12909-020-02064-x

Ng, K. Y. Y., Leung, G. Y. C., Yow, K. S., Ngiam, N. H. W., Yeo, D. G. D., Tey, A. J.-Y., Lim, M. S. R., Tang, A. K. W., Chew, B. H., Tham, C. Y. X., Yeo, J. Q., Lau, T. C., Wong, S. F., Wong, C. H., & Koh, G. C.-H. (2020b). Impact of an interprofessional, longitudinal, undergraduate student-initiated home visit program towards interprofessional education. Research Square. https://doi.org/10.21203/rs.3.rs-23744/v1

Paradis, E., & Whitehead, C. R. (2015). Louder than words: Power and conflict in interprofessional education articles, 1954-2013. Medical Education, 49(4), 399–407. https://doi.org/10.1111/medu.12668

Parsell, G., & Bligh, J. (1999). The development of a questionnaire to assess the readiness of health care students for interprofessional learning (RIPLS). Medical Education, 33(2), 95–100. https://doi.org/10.1046/j.1365-2923.1999.00298.x

Patel, R. S., Bachu, R., Adikey, A., Malik, M., & Shah, M. (2018). Factors related to physician burnout and its consequences: A review. Behavioural Sciences, 8(11), Article 98. https://doi.org/10.3390/bs8110098  

Schmitz, C. C., & Brandt, B. F. (2015). The Readiness for Interprofessional Learning Scale: To RIPLS or not to RIPLS? That is only part of the question. Journal of Interprofessional Care, 29(6), 525–526. https://doi.org/10.3109/13561820.2015.1108719

Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test for normality (complete samples). Biometrika, 52(3/4), 591–611. https://doi.org/10.2307/2333709

Sunguya, B. F., Hinthong, W., Jimba, M., & Yasuoka, J. (2014). Interprofessional education for whom? – Challenges and lessons learned from its implementation in developed countries and their application to developing countries: A systematic review. PloS One, 9(5), Article e96724. https://doi.org/10.1371/journal.pone.0096724 

Tamura, Y., Seki, K., Usami, M., Taku, S., Bontje, P., Ando, H., Taru, C., & Ishikawa, Y. (2012). Cultural adaptation and validating a Japanese version of the readiness for interprofessional learning scale (RIPLS). Journal of Interprofessional Care, 26(1), 56–63. https://doi.org/10.3109/13561820.2011.595848

Toth-Pal, E., Fridén, C., Asenjo, S. T., & Olsson, C. B. (2020). Home visits as an interprofessional learning activity for students in primary healthcare. Primary Health Care Research & Development, 21, Article e59. https://doi.org/10.1017/S1463423620000572

Tremblay, D., Drouin, D., Lang, A., Roberge, D., Ritchie, J., & Plante, A. (2010). Interprofessional collaborative practice within cancer teams: Translating evidence into action. A mixed methods study protocol. Implementation Science, 5, Article 53. https://doi.org/10.1186/1748-5908-5-53

Tyastuti, D., Onishi, H., Ekayanti, F., & Kitamura, K. (2014). Psychometric item analysis and validation of the Indonesian version of the Readiness for Interprofessional Learning Scale (RIPLS). Journal of Interprofessional Care, 28(5), 426–432. https://doi.org/10.3109/13561820.2014.907778

Vaughn, L. M., Cross, B., Bossaer, L., Flores, E. K., Moore, J., & Click, I. (2014). Analysis of an interprofessional home visit assignment: Student perceptions of team-based care, home visits, and medication-related problems. Family Medicine, 46(7), 522–526.

Wakely, L., Brown, L., & Burrows, J. (2013). Evaluating interprofessional learning modules: Health students’ attitudes to interprofessional practice. Journal of Interprofessional Care, 27(5), 424–425. https://doi.org/10.3109/13561820.2013.784730

West, C., Graham, L., Palmer, R. T., Miller, M. F., Thayer, E. K., Stuber, M. L., Awdishu, L., Umoren, R. A., Wamsley, M. A., Nelson, E. A., Joo, P. A., Tysinger, J. W., George, P., & Carney, P. A. (2016). Implementation of interprofessional education (IPE) in 16 U.S. medical schools: Common practices, barriers and facilitators. Journal of Interprofessional Education & Practice, 4, 41–49. https://doi.org/10.1016/j.xjep.2016.05.002

Wilhelmsson, M., Ponzer, S., Dahlgren, L. O., Timpka, T., & Faresjö, T. (2011). Are female students in general and nursing students more ready for teamwork and interprofessional collaboration in healthcare? BMC Medical Education, 11, Article 15. https://doi.org/10.1186/1472-6920-11-15

Wright, A. A., & Katz, I. T. (2018). Beyond burnout — Redesigning care to restore meaning and sanity for physicians. The New England Journal of Medicine, 378(4), 309–311. https://doi.org/10.1056/NEJMp1716845

Zaudke, J. K., Paolo, A., Kleoppel, J., Phillips, C., & Shrader, S. (2016). The impact of an interprofessional practice experience on readiness for interprofessional learning. Family Medicine, 48(5), 371–376.

*Chek Hooi WONG
90 Yishun Central,
Khoo Teck Puat Hospital,
Singapore 768828
9 Lower Kent Ridge Rd, Level 10,
+65 6807 8001
Email: wong.chek.hooi@ktph.com.sg
