Peer-to-peer clinical teaching by medical students in the formal curriculum

Submitted: 2 December 2022
Accepted: 24 July 2023
Published online: 3 October, TAPS 2023, 8(4), 13-22
https://doi.org/10.29060/TAPS.2023-8-4/OA3093

Julie Yun Chen1,2, Tai Pong Lam1, Ivan Fan Ngai Hung3, Albert Chi Yan Chan4, Weng-Yee Chin1, Christopher See5 & Joyce Pui Yan Tsang1

1Department of Family Medicine and Primary Care, University of Hong Kong, Hong Kong; 2Bau Institute of Medical and Health Sciences Education, University of Hong Kong, Hong Kong; 3Department of Medicine, University of Hong Kong, Hong Kong; 4Department of Surgery, University of Hong Kong, Hong Kong; 5School of Biomedical Sciences, The Chinese University of Hong Kong, Hong Kong

Abstract

Introduction: Medical students have long provided informal but structured academic support for their peers in parallel with the institution’s formal curriculum, demonstrating a high degree of motivation and engagement in peer teaching. This qualitative descriptive study examined the perspectives of participants in a pilot peer teaching programme on the effectiveness and feasibility of adapting existing student-initiated peer bedside teaching into formal bedside teaching.

Methods: Study participants were senior medical students who were already providing self-initiated, peer-led bedside clinical teaching; clinicians who co-taught bedside clinical skills teaching sessions with the peer teachers; and junior students allocated to the bedside teaching sessions led by peer teachers. Qualitative data were gathered via evaluation forms, peer teacher and clinician interviews, and observational field notes made by a research assistant who attended the teaching sessions as an independent observer. Additionally, a single Likert-scale question on the evaluation form was used to rate teaching effectiveness.

Results: All three peer teachers, three clinicians and 12 students completed the interviews and/or questionnaires. The main themes identified were teaching effectiveness, teaching competency and feasibility. Teaching effectiveness related to the creation of a positive learning environment and a tailored approach. Teaching competency reflected confidence or doubts about peer-teaching, and feasibility subthemes comprised barriers and facilitators.

Conclusion: Students perceived peer teaching effectiveness to be comparable to clinicians’ teaching. Clinical peer teaching in the formal curriculum may be most feasible in a hybrid curriculum that includes both peer teaching and clinician-led teaching with structured training and coordinated timetabling.

Keywords:           Peer Teaching, Undergraduate Medical Education, Bedside Teaching, Medical Students

Practice Highlights

  • Peer-led teaching environment facilitates questions and answers from learners to strengthen learning.
  • Training on specific skills and pre-case preparation can help improve peer teacher effectiveness.
  • Clear understanding of the logistics and expectations is necessary to optimise the process.
  • Formal peer teacher training may help quality assurance and encourage more participation.

I. INTRODUCTION

In accordance with the longstanding apprenticeship model of medical training, senior doctors and trainees have been responsible for teaching their junior colleagues across the continuum of medical education. Despite this accepted practice, peer teaching has not become widely formalised in undergraduate medical curricula.

Peer teaching has been shown to be beneficial at multiple levels. For students taught by peers, learning is enabled by the social and cognitive congruence of the near-peer relationship, which allows for a more comfortable learning environment with free-flowing discussion and a better understanding of the learner’s challenges, including awareness of the primacy of exam success (Benè & Bergus, 2014; Rees et al., 2016). The peer teacher develops and hones teaching skills that will be useful in internship (Haber et al., 2006) and, through teaching, develops higher motivation and deeper understanding of concepts, and perhaps also improves their own exam performance (Burgess et al., 2014). The institution derives practical benefit from the supplementary manpower (Tayler et al., 2015), given the comparable effectiveness of peer teachers in certain areas such as physical examination and communication skills (Rees et al., 2016); perhaps most importantly, it benefits from building a collaborative relationship with students in their learning process. Though the benefits of peer teaching have been noted, students remain an untapped resource as the training provided for students to serve as teachers is inconsistent (Soriano et al., 2010).

Undergraduate medical curricula aim to provide a foundation for future training, and the frameworks for such curricula are guided by the recognition that medical students must achieve certain outcomes, including being able to teach, to be prepared for future practice. Well-accepted frameworks such as the ‘Outcomes for Graduates’ from the UK General Medical Council (2015) and the ‘CanMEDS Framework’ from the Royal College of Physicians and Surgeons of Canada (2015) expect medical graduates to teach others. In Hong Kong, similar guidance is provided in the document ‘Hong Kong Doctors’ published by the Medical Council of Hong Kong, which states that undergraduate medical education must prepare graduates to fulfil the roles of ‘medical practitioner, communicator, educator…’ (Medical Council of Hong Kong, 2017).

It is common in medical schools to have informal peer teaching, where senior students coach junior students on an ad hoc basis or organise revision sessions before exams. Zhang et al. (2011) revealed that a majority of medical students believed that informal learning approaches, including the use of past student notes, and participation in self-organised study groups and peer-led tutorials, helped them pass examinations and be a good doctor. Similarly, in our institution, these kinds of informal peer teaching are popular among students and include sharing sessions on study and exam tips, bedside sessions, and sharing of organised study notes. These activities are not subject to any formal oversight.

With the documented benefits of peer teaching, the availability of enthusiastic senior students who are willing to coach their junior peers, and the demand from junior students to learn from their seniors, there is an opportunity to harness the peer teaching that is already taking place. This pilot project is important as it aimed to adapt existing student-initiated peer bedside teaching into the formal bedside teaching curriculum and to examine the perspectives of participants on the effectiveness and feasibility of this initiative. Understanding the benefits and drawbacks of formal peer bedside teaching will help to further develop this pedagogical approach in medical education.

II. METHODS

This was a descriptive qualitative study of participants in a pilot peer-teaching initiative for bedside teaching implemented in the first clinical year of study for medical students.

A. Setting

1) Small group bedside teaching for Year 4 medical students in the Clinical Foundation Block: The 11-week Clinical Foundation Block (CFB) of the MBBS Year 4 curriculum at The University of Hong Kong runs from August to October and is the first block of the first clinical year of study. It serves to prepare students for the ward- and clinic-based teaching to follow in the clinical clerkships (Figure 1). Year 4 medical students were selected for the study because it is the first clinical year of study when clinical bedside teaching begins. In addition, as the most junior clinical students, they would benefit most from learning from their senior peers. During the CFB, all Year 4 students learn basic history taking, physical examination and clinical skills as well as common clinical problems of 10 key specialty disciplines. In internal medicine, students attend whole class sessions in which the proper clinical examination of each body system is demonstrated followed by seven small group sessions at the bedside for hands-on practice led by a clinician. 

Figure 1. Teaching activities under Medicine within the Clinical Foundation Block in the medical curriculum

Each small group bedside teaching session comprises six to eight CFB students who follow the same clinical teacher to examine three pre-selected ward patients over a two-hour period. In this pilot study, a peer teacher joined the clinical teacher for the bedside teaching, with the first patient case taught by the clinician, the second case taught by the peer teacher under the supervision of the clinician, and the final case taught by the peer teacher alone.

2) Peer teacher recruitment and training: Over the years, medical students have been organising bedside peer teaching on their own, and we identified these peer-teaching leaders to help recruit peer teachers for this initiative. Peer teachers were recruited in July 2018 and comprised Year 5 students in the Senior Clerkship who were enthusiastic about teaching and were available to join the training tutorial and take up a subsequent Year 4 CFB bedside teaching session. During the 2.5-hour tutorial, the CFB Coordinator explained the project, and three clinicians then provided a briefing on cardiovascular, neurological, respiratory and abdominal physical examination, common pitfalls, and how to give feedback. There was also time for students to raise questions both on the project and on bedside teaching techniques.

B. Participants

The target participants included the three peer teachers who were recruited for this study, together with the three clinician partners and the 24 CFB students in the corresponding three bedside teaching groups. Written informed consent was obtained from all participants before data collection.

C. Data Collection

A dual subjective (peer teachers, clinicians and students) and objective (independent observer) approach to qualitative data collection was taken to provide a more holistic perspective of the peer teaching experience. A research assistant not involved in the teaching followed one of the three peer teachers as the independent observer, taking field notes that were later transcribed. All peer teachers and clinicians were interviewed by the research assistant after the session, in person, by phone or by email, using an interview guide (Appendix 1). CFB students were invited to complete an evaluation form comprising open-ended questions and a single Likert-scale question (Appendix 2) immediately after the bedside session, to rate effectiveness and to give general feedback about the peer teaching session.

D. Data Analysis

The qualitative data comprising interview field notes, interview transcripts, email transcripts and open-ended questions from the evaluation form collected from CFB students were analysed thematically by the authors JC and JPYT. The Likert-scale question from the evaluation form was analysed using descriptive statistics. All data were anonymised.

III. RESULTS

All three peer teachers and three clinicians who participated in the pilot peer teaching sessions were interviewed. Eighteen out of 24 CFB students consented to participate and 12 completed questionnaires were collected. Three main themes were identified with two corresponding subthemes for each.

A. Teaching Effectiveness

Peer teachers were rated favourably in terms of their teaching effectiveness. From the evaluation form completed by CFB students, the mean peer teaching effectiveness rating was 4.5/5. While a few students felt the teaching effectiveness of clinicians and peer teachers was comparable, many of them felt less intimidated being taught by the peer teachers. Students also appreciated that the peer teachers understood their current level of understanding and were therefore able to make the teaching more effective by tailoring it to their needs. Students found the experience-sharing by the peer teachers to be added value, as shown in Table 1 (Items 1-4). All clinicians agreed that the CFB students appeared more relaxed while the peer teachers were teaching, and that the peer teachers met their standard of professionalism, as shown in Table 1 (Item 3).

Subtheme: Learning environment

1. ‘I was more willing to ask questions.’ – CFB Student 8

2. ‘I felt more comfortable and less intimidate[ed] with the peer teacher.’ – CFB Student 12

3. ‘I think it is pretty well received among the CFB students – they looked like they are more comfortable and less stressed.’ – Clinician B

Subtheme: Tailoring to needs

4. ‘We were told her past experience.’ – CFB Student 9

5. ‘More exam advice from peer tutor.’ – CFB Student 10

Table 1. Exemplar quotes from participants on teaching effectiveness

These comments were congruent with the observations of the independent observer. When the clinician was teaching, students appeared to be cautious when performing physical examination and answering questions from the clinician. On the other hand, when the peer teacher was teaching, students asked for reassurance while performing physical examination and appeared less hesitant when attempting to answer questions. The peer teacher sometimes also asked the students how they would do a certain examination before they actually performed it, and shared his own bedside experience. After the clinician ended the bedside session and left, the peer teacher stayed behind and answered further questions from the students regarding physical examination skills and exam tips.

B. Teaching Competence

For students, the teaching of physical examination skills by peer teachers appeared to be comparable to that by clinicians, with the perceived benefits of instruction tailored to the students’ current level and additional personal experience sharing, as shown in Table 2 (Items 1-2).

After co-teaching with the peer teacher, clinicians had differing opinions about the competency of an undergraduate student as a formal peer teacher. Two stated that it was more appropriate for senior students to share experiences rather than to teach, while the other was satisfied with the ability of the peer teachers to teach and appreciated the opportunity to exchange ideas with them. One clinician also suggested that peer teachers might need more teaching practice to build up confidence, as shown in Table 2 (Items 3, 6 and 7).

On the other hand, all the peer teachers expressed that they felt stressed being observed by the clinicians. Two of them felt confident to teach, while one was less confident and preferred to co-teach with a clinician, as shown in Table 2 (Items 4, 5 and 8).

The peer teachers also questioned their role in the regular curriculum. They were unsure about teaching in place of clinicians in the regular bedside sessions for the CFB students, and were more comfortable co-teaching with the clinicians or teaching in unofficial or supplementary peer-led sessions, as shown in Table 2 (Items 4, 8 and 9).

Subtheme: Confidence in teaching competence

1. ‘Very comprehensive teaching; detailed explanation on how to report findings.’ – CFB Student 1

2. ‘Senior students know what we need to know and what we don’t know at this stage.’ – CFB Student 5

3. ‘The peer teacher was sufficiently prepared on content knowledge and teaching skills.’ – Clinician A

4. ‘I am confident with my knowledge and teaching skills. The CFB cases were easy enough for me to handle. I have been teaching student-initiated sessions anyway.’ – Peer Teacher A

5. ‘Are we going to replace the clinicians? The student-initiated sessions worked just fine.’ – Peer Teacher B

Subtheme: Doubts on teaching competence

6. ‘It is too early for the current peer teachers to teach as they lack competency and confidence in teaching.’ – Clinician B

7. ‘Tutors should be at least medical graduates who have shown evidence of proficiency and knowledge in the areas that they teach. Senior students can share their experience of learning, but not to teach.’ – Clinician C

8. ‘The clinicians are definitely better at teaching and has better skills… It would work better if I was to co-teach with a clinician but not to teach solo.’ – Peer Teacher C

9. ‘It isn’t appropriate to take away the proper learning opportunity to be taught by clinicians from the students.’ – Peer Teacher C

Table 2. Exemplar quotes from participants on teaching competency

C. Feasibility

1) Barriers: One of the peer teachers was disappointed that the session did not go as planned. He suspected that the clinicians may not truly understand the purpose and the plan for the project, and hence sometimes took the lead when the peer teachers were supposed to be teaching as shown in Table 3 (Item 1). 

The peer teachers also mentioned that timetabling conflicts between CFB and the Senior Clerkship were an issue. For all groups, the session overran, resulting in peer teachers missing their own class, which was scheduled immediately after the intended finishing time of the bedside session.

Peer teachers also commented that there was no concrete incentive for them to join the project. With the added pressure of being observed by clinicians, most peer teachers were hesitant to volunteer again.

2) Facilitators: One peer teacher considered the session an extra learning opportunity, as shown in Table 3 (Item 2). Clinicians also believed that the peer teachers could benefit, since these were essentially extra tutorials and bedside exposure for them outside of the regular curriculum, although the peer teachers thought that the cases used for CFB were too easy for them to learn anything new. Both peer teachers and clinicians agreed that more practical training on physical examination would be beneficial to boost the confidence and competence of the peer teachers in teaching. Peer teachers suggested that, to make the session more efficient, they would prefer to clerk the case themselves before the session so as to be better prepared to recognise abnormal physical signs, as shown in Table 3 (Item 3). A pre-meeting between the peer teacher and the partner clinician would be helpful to clarify expectations and understanding of the process, since the training tutorial was conducted by a different clinician. A clinician pointed out that an open call should be made for recruitment to allow all interested students to participate.

Barriers

1. ‘I felt like the clinician did not want to let me teach solo. Maybe he did not understand the project.’ – Peer Teacher A

Facilitators

2. ‘The organisation of the curriculum is weird – there were a lot to learn in the Medicine Block of the Junior Clerkship, but not much in that of Senior Clerkship. There was also a large gap of time where there was no supervised physical examination at bedside. This is a good refresher session for me.’ – Peer Teacher C

3. ‘The students and I all saw the case for the first time during the session. I felt a bit unprepared and can only comment on the physical examination skills of the students. There is no way to tell if they reported the correct findings. It would help if the peer tutors can clerk the case before the session.’ – Peer Teacher C

Table 3. Exemplar quotes from participants on barriers and facilitators

IV. DISCUSSION

This pilot project aimed to examine the effectiveness and feasibility of adapting peer bedside teaching into the formal curriculum. Student rating has been used as the primary measure of teaching effectiveness in many schools (Chen & Hoshower, 2003). In this project, we triangulated student ratings with the clinician viewpoint and that of an independent observer to assess teaching effectiveness. All found that the teaching by the peer teachers was professional and comparable to that of clinicians.

Their views were also congruent with the observation that peer teaching provided a more relaxed learning environment, as cited in the literature (Tai et al., 2016). This is reflected in a study on problem-based learning (PBL) that showed student tutor-led tutorials were rated more highly for group functioning and supportive atmosphere compared with faculty-led sessions (Kassab et al., 2005).

Sharing by peer teachers was also identified as a bonus feature of bedside peer teaching in our study. Sharing from senior students not only provided junior students with practical exam and ward survival tips, but also served as inspiration and motivation to learn. This has also been observed in other studies, such as one in which students whose peer teachers shared real-life experiences performed better in a post-training CPR knowledge test and demonstrated more confidence and learning motivation (Souza et al., 2022).

In the next iteration of peer teaching, the barriers and facilitators noted by stakeholders need to be addressed. The difficulty in scheduling can be overcome by engaging senior students who are already on the ward to teach, embedding this requirement as part of their usual work. Nikendei et al. (2009) demonstrated a successful peer teaching programme at the bedside with final year medical students who were working in the wards as tutors. The comment among peer teachers that there is no ‘concrete incentive’ to being a peer teacher may be due to a lack of awareness of the appreciation from peer learners as well as from faculty teachers. More regular and deliberate sharing of learner feedback, and role modelling of the enjoyment of teaching by teachers and experienced peer teachers, can help. Reflecting on the benefits of the learning process undertaken through preparation, and ‘paying forward’ the efforts of other teachers, are less tangible (but important!) factors to emphasise to encourage future students to undertake peer teaching.

Peer teachers and clinicians should meet before the teaching session to clarify aims and logistics and to align their expectations. To improve peer teacher confidence and to alleviate clinician concern about their competency to teach, more extensive and formal training can be provided to peer teachers, including both theoretical and practical training on physical examination and on teaching skills. Burgess et al. (2017) developed and implemented an interprofessional Peer Teacher Training (PTT) programme for medicine, pharmacy and health sciences students, which aimed to develop students’ skills in teaching, assessment and feedback for peer-assisted learning and future practice. The PTT course design was adapted by Karia et al. (2020) for medical students only. Both programmes were shown to be effective in improving students’ confidence and competence in peer teaching and in increasing intention to participate in teaching. This is encouraging, and we are also developing a structured peer teaching training programme to fill this gap. Nevertheless, when attempting to include peer teachers in the formal curriculum as a complement to formal teaching by the faculty, care must be taken not to over-formalise the process, which may undermine the unique benefits of peer teaching (Tong & See, 2020).

A. Strengths and Limitations

This was a small-scale pilot study and the evaluation of the impact was limited to perceptions and feedback from stakeholders and did not include tangible outcomes such as academic performance and clinical competency of participants. However, the objective contemporaneous observations made during the teaching sessions by a third-party researcher strengthened the trustworthiness of the data. A 360-degree evaluation including feedback from patients and ward staff could also provide a more comprehensive evaluation.

V. CONCLUSION

This study examined the perspectives of clinicians, peer teachers and students on the effectiveness and feasibility of peer-led bedside teaching in the formal curriculum, and the findings are encouraging. Peer teaching effectiveness was comparable to that of clinicians, with the added benefit that peer teachers are better able to understand and meet students’ needs while creating a friendlier environment conducive to constructive learning. Concerns about peer teaching competency were expressed by clinicians and peer teachers, and no participants wished peer teaching to replace clinician-led teaching. Clinical peer teaching in the formal curriculum may be most feasible in a hybrid curriculum that includes both peer teaching and clinician-led teaching. It can be accomplished with more structured training and by overcoming practical barriers of timetabling and preparation. The benefits of peer teaching and promising responses from all stakeholders support further initiatives in clinical peer teaching.

Notes on Contributors

JY Chen designed the study, performed data collection and data analysis, drafted the manuscript and approved the final manuscript.

TP Lam designed the study, gave critical feedback, read and approved the final manuscript.

IFN Hung designed the study, gave critical feedback, read and approved the final manuscript.

ACY Chan designed the study, gave critical feedback, read and approved the final manuscript.

WY Chin designed the study, gave critical feedback, read and approved the final manuscript.

JPY Tsang performed data collection and data analysis, drafted the manuscript and approved the final manuscript.

C See designed the study, gave critical feedback, read and approved the final manuscript.

Ethical Approval

This study was approved by the Institutional Review Board of the University of Hong Kong/ Hospital Authority Hong Kong West Cluster (Reference number: UW 18-439).

Data Availability

The data of this qualitative study are not publicly available due to confidentiality agreements with the participants.

Acknowledgement

We would like to thank the peer teachers, students and clinicians of HKUMed for participating in the study.

Funding

This work was supported by a Teaching Development Grant funded by The University of Hong Kong (Ref No.: N/A).

Declaration of Interest

The authors declare that there is no conflict of interest.

References

Benè, K. L., & Bergus, G. (2014). When learners become teachers: A review of peer teaching in medical student education. Family Medicine, 46(10), 783-787.

Burgess, A., McGregor, D., & Mellis, C. (2014). Medical students as peer tutors: A systematic review. BMC Medical Education, 14(1), 115.

Burgess, A., Roberts, C., van Diggele, C., & Mellis, C. (2017). Peer teacher training (PTT) program for health professional students: Interprofessional and flipped learning. BMC Medical Education, 17(1), Article 239.

Chen, Y., & Hoshower, L. B. (2003). Student evaluation of teaching effectiveness: An assessment of student perception and motivation. Assessment & Evaluation in Higher Education, 28(1), 71-88.

General Medical Council. (2015). Outcomes for graduates (Tomorrow’s Doctors). Retrieved July 18, 2022 from https://www.gmc-uk.org/-/media/documents/Outcomes_for_graduates_jul_15_1216.pdf_61408029.pdf

Haber, R. J., Bardach, N. S., Vedanthan, R., Gillum, L. A., Haber, L. A., & Dhaliwal, G. S. (2006). Preparing fourth‐year medical students to teach during internship. Journal of General Internal Medicine, 21(5), 518-520. https://doi.org/10.1111/j.1525-1497.2006.00441.x

Karia, C., Anderson, E., Hughes, A., West, J., Lakhani, D., Kirtley, J., Burgess, A., & Carr, S. (2020). Peer teacher training (PTT) in action. Clinical Teacher, 17(5), 531-537.

Kassab, S., Abu-Hijleh, M. F., Al-Shboul, Q., & Hamdy, H. (2005). Student-led tutorials in problem-based learning: Educational outcomes and students’ perceptions. Medical Teacher, 27(6), 521-526.

Medical Council of Hong Kong. (2017). Hong Kong Doctors. Retrieved July 18, 2022 from https://www.mchk.org.hk/english/publications/hk_doctors.html

Nikendei, C., Andreesen, S., Hoffmann, K., & Jünger, J. (2009). Cross-year peer tutoring on internal medicine wards: Effects on self-assessed clinical competencies–A group control design study. Medical Teacher, 31(2), e32-e35.

Rees, E. L., Quinn, P. J., Davies, B., & Fotheringham, V. (2016). How does peer teaching compare to faculty teaching? A systematic review and meta-analysis. Medical Teacher, 38(8), 829-837.

Royal College of Physicians and Surgeons of Canada. (2015). CanMEDS Framework. Retrieved July 18, 2022 from http://www.royalcollege.ca/rcsite/canmeds/canmeds-framework-e

Soriano, R. P., Blatt, B., Coplit, L., CichoskiKelly, E., Kosowicz, L., Newman, L., Pasquale, S. J., Pretorius, R., Rosen, J. M., & Saks, N. S. (2010). Teaching medical students how to teach: a national survey of students-as-teachers programs in US medical schools. Academic Medicine, 85(11), 1725-1731.

Souza, A. D., Punja, D., Prabhath, S., & Pandey, A. K. (2022). Influence of pretesting and a near peer sharing real life experiences on CPR training outcomes in first year medical students: A non-randomized quasi-experimental study. BMC Medical Education, 22(1), 1-11.

Tai, J., Molloy, E., Haines, T., & Canny, B. (2016). Same‐level peer‐assisted learning in medical clinical placements: A narrative systematic review. Medical Education, 50(4), 469-484.

Tayler, N., Hall, S., Carr, N. J., Stephens, J. R., & Border, S. (2015). Near peer teaching in medical curricula: Integrating student teachers in pathology tutorials. Medical Education Online, 20(1), 27921.

Tong, A. H. K., & See, C. (2020). Informal and formal peer teaching in the medical school ecosystem: Perspectives from a student-teacher team. JMIR Medical Education, 6(2), e21869.

Zhang, J., Peterson, R. F., & Ozolins, I. Z. (2011). Student approaches for learning in medicine: What does it tell us about the informal curriculum? BMC Medical Education, 11(1), Article 87.

*Julie Chen
4/F William MW Mong Block
Faculty of Medicine Building
21 Sassoon Road,
Pokfulam, Hong Kong
Email address: juliechen@hku.hk

Submitted: 28 September 2022
Accepted: 2 March 2023
Published online: 3 October, TAPS 2023, 8(4), 5-12
https://doi.org/10.29060/TAPS.2023-8-4/OA2883

Soumia Merrou1, Abdellah Idrissi Jouicha2, Abdelmounaim Baslam3, Zakaria Ouhaz3 & Ahmed Rhassane El Adib1

1Health Sciences Research Centre (HSRC), Faculty of Medicine and Pharmacy of Marrakech, Cadi Ayyad University, Morocco; 2Health Sciences Research Centre (HSRC), Faculty of Science Semlalia, Cadi Ayyad University, Morocco; 3Pharmacology, Neurobiology and Behaviour Lab, Faculty of Science Semlalia, Cadi Ayyad University, Morocco

Abstract

Introduction: A deep understanding of the physiology, pathophysiology, pharmacology, and management of pain is crucial for nurse anaesthetists to ensure the well-being of their patients. Teaching strategies should therefore support the transition from acquiring fundamental knowledge of pain phenomena to developing translational and critical thinking. The aim of this study was to determine whether the flipped classroom, considered an active learning approach, is more effective than the traditional method in teaching pain management, and whether it improves students’ academic performance.

Methods: This quasi-experimental study was conducted at a higher institute of nursing professions among third-year anaesthesia-resuscitation nursing students. Participants were randomly allocated to either the flipped classroom group, where PBL was used (FG, n = 19), or the traditional lecture-based classroom group (TG, n = 19). The results and impact of this approach were assessed through analysis of the summative assessment of the class group and the questionnaire submitted to students.

Results: In the midterm exam, the mean score of the flipped classroom group (14.0) was significantly higher (p<0.01) than that of the traditional lecture group (11.9). Moreover, the standard deviation of the latter was slightly higher (2.41), indicating scores further from the average. A significant difference between the averages of the two approaches, in favour of the flipped classroom group, was also revealed (p<0.01).

Conclusion: The assessment of students’ grades and their appreciation of both teaching approaches showed a preference for PBL.

Keywords:           Flipped Classroom, Nursing Education, Pain Management, Problem-Based Learning

Practice Highlights

  • Flipped classroom showed advantageous results for nursing students’ grades.
  • Flipped classroom yielded positive results for course comprehension by nursing students.
  • Flipped classroom has been shown to effectively support content learning.

I. INTRODUCTION

The flipped classroom is a pedagogical approach defined as follows: “What was previously completed as homework is now finished in class, and what was previously completed in class is now completed at home” (Dong, 2016). Using this approach, traditional classroom time is spent on active learning strategies such as problem-based learning, games, or practice questions, allowing teachers to guide students in developing critical thinking (Dong, 2016). Flipped classrooms are used as the main teaching method in health professions courses such as nursing theory, statistics, and pharmacology (Hanson, 2016; Immekus, 2019; Peisachovich et al., 2016). Indeed, there is evidence that students’ academic performance in midterm exams improved with the flipped classroom approach (Geist et al., 2015).

Despite feeling that this method increased their knowledge, nursing students said they preferred traditional lectures to the flipped classroom (Hanson, 2016). It is not uncommon for students to prefer lectures to the flipped classroom method, which may be related to how much work they feel they have to do, to insecurity about exam preparation, or to both (Dong, 2016; Tune et al., 2013). The use of the flipped classroom in nursing was supported by evidence showing that lecturers were enthusiastic about this method. The most effective way to implement and assess this strategy in nursing education, though, is not consistently supported by the available data (Barranquero-Herbosa et al., 2022; Dong, 2016; Njie-Carr et al., 2017).

Contextual learning can encourage the growth of critical reasoning, which enables students to pick out the top nursing concerns for patients from a long list of problems, ultimately fostering the development of problem-based nursing analysis in line with Benner’s model (Dong, 2016). Problem-based learning (PBL) uses problem scenarios to develop knowledge and understanding in line with learning objectives (Wood, 2003). Among the strategies used in a flipped classroom, PBL has been applied in nursing education in courses such as pharmacology, mental health nursing, and critical care nursing (Alton, 2016; Gholami et al., 2016). Any teaching strategy that involves students in the learning process is considered an active learning strategy, and PBL is one of them (Peisachovich et al., 2016).

Despite the introduction of pain management in health professions education, pain is still undertreated. It affects 80%-90% of patients in medicine, surgery, and cancer units (Gerbershagen et al., 2009; Gianni et al., 2010). Previous research also highlighted that 43% to 51% of patients received inadequate or insufficient analgesic treatment, and only 14% of patients who received analgesia benefited from reassessment (Deandrea et al., 2008; Manias et al., 2005). Nurses play a crucial role in effective pain management; it is therefore essential that they receive effective training to ensure better pain management (Teike Lüthi et al., 2015).

In this direction, to encourage students’ learning, nursing science professors must implement effective teaching techniques. Training typically aims to increase knowledge, which is insufficient in this case; skills development is therefore a top priority (Kerner et al., 2013). While prior research emphasised the value of nurse-patient interactions in pain management, it undervalued the impact of nurses’ scientific knowledge of pain mechanisms and pharmacology. Interestingly, a recent study highlighted the significance of the classroom setting and instructional methods in approaching pain management in a novel manner (Teike Lüthi et al., 2015).

However, rigorous evaluation of learning strategies is crucial for best practices in nursing education (Barranquero-Herbosa et al., 2022; Njie-Carr et al., 2017). The present study provides an assessment of PBL as a model of applied learning in a flipped classroom of anaesthesia nursing students in the context of a pain management course.

The main purpose of the study was to determine whether the flipped classroom is more effective than traditional learning in teaching pain management, by assessing students’ academic performance, and to determine their perceptions of the flipped classroom approach. Accordingly, the research questions of the study are:

  1. Is there a significant difference in students’ academic performance between the traditional and flipped classroom approaches on declarative knowledge?
  2. Is there a significant difference in students’ academic performance between the traditional and flipped classroom approaches on conditional knowledge?
  3. What are anesthesia and resuscitation nursing students’ perceptions of PBL impact on the acquisition and application of pain management knowledge?
  4. What are anesthesia and resuscitation nursing students’ perceptions of PBL as a model for learning in pain management?

II. METHODS

A. Research Design and Samples

This quasi-experimental study was conducted, beginning in September, at a higher institute of nursing professions. The participants were third-year anaesthesia resuscitation nursing students. Participation in the study was voluntary and anonymous, and oral consent was obtained from all participants. Participants were randomly allocated to either the flipped classroom group, where PBL was used (FG, n = 19), or the traditional lecture-based classroom group (TG, n = 19). Both groups had the same professor.

B. Curriculum Description

The “pain management” course (50h) is taught during the third year of nursing studies at the institute. It comprises three parts: the pathophysiology of pain, the evaluation of pain, and pain management.

C. Problem Based Learning on Flipped Classroom Approach

The problem-based template was designed by the professor who teaches the course, using small groups of five to six students. The students themselves facilitated the discussion; they met in groups to discuss a case for an hour. The objective was to identify the type of pain or to choose the best pain assessment tool for the case. The group then had to suggest a drug treatment protocol and design appropriate nursing interventions. The professor’s role was to provide immediate and specific feedback during the discussion.

All cases were written by the professor, with the objectives of knowledge acquisition and the development of clinical reasoning. Each case contained 300 words and included key patient data that could be analysed to identify the priority elements of the case under discussion.

D. Data Collection and Statistical Analyses

The results and impact of the approach were assessed by analysing the summative assessments of each group and the questionnaire submitted to students.

1) Summative assessment (exam):

Students in both groups took two exams: a midterm exam (ME), held in the middle of the course in the 6th week to assess the students’ declarative knowledge, and a final exam (FE), held at the end of the course to assess conditional knowledge. The tests were graded from zero to twenty. The final score (FS) was obtained by the following equation:

FS = (ME + FE) / 2
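The scoring rule above is a plain average of the two exam marks, each out of 20. A minimal illustration of the arithmetic (not the authors' code; the scores passed in are hypothetical):

```python
# Illustration of the grading scheme: the final score (FS) is the
# plain average of the midterm (ME) and final exam (FE) marks.
def final_score(me: float, fe: float) -> float:
    """Return FS = (ME + FE) / 2, as defined in the Methods."""
    return (me + fe) / 2

# Hypothetical student: 12/20 on the midterm, 16/20 on the final exam.
print(final_score(12, 16))  # prints 14.0
```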

2) Questionnaire:

At the end of the course, the FG students were asked to fill out an anonymous questionnaire divided into two sections. The first section was designed to determine students’ perceptions of knowledge acquisition; its items were worded in terms of perceived ability, in keeping with the concept of self-efficacy (Tune et al., 2013). The second section was designed to determine students’ perceptions of the cases used in the course. The statements began, for example, with “Participating in the group discussions made me more confident for…”. A Likert scale was used to measure the responses, presented as follows:

1 = Strongly disagree, 2 = Disagree, 3 = Neither agree nor disagree, 4 = Agree, and 5 = Strongly agree.
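Each item on this scale was later summarised as a mean and standard deviation (Tables 3 and 4). A minimal sketch of that aggregation, assuming a population standard deviation and one-decimal rounding; the function name and the responses below are illustrative, not from the study:

```python
from math import sqrt

def summarise_item(responses):
    """Mean and population standard deviation of 1-5 Likert responses,
    rounded to one decimal place as in the results tables."""
    n = len(responses)
    mean = sum(responses) / n
    sd = sqrt(sum((r - mean) ** 2 for r in responses) / n)
    return round(mean, 1), round(sd, 1)

# Hypothetical responses from five students to a single item:
print(summarise_item([4, 5, 4, 4, 5]))  # prints (4.4, 0.5)
```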

3) Statistical analyses:

Data analysis was performed using MS Excel (21). Background variables of the study participants were calculated and are presented as frequency distributions, percentages, means, and standard deviations; statistical significance was set at p < 0.05.
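The group comparison reported in the Results relies on a nonparametric rank test. As a rough, self-contained sketch (not the authors' Excel workflow), the Mann-Whitney U statistic for two independent groups can be computed from rank sums; the exam scores below are hypothetical, and in practice the p-value would come from a statistics package:

```python
# Illustrative sketch: Mann-Whitney U from rank sums, with midranks
# for tied scores.

def midranks(values):
    """1-based ranks; tied values share their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # Extend j over the run of values tied with values[order[i]].
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(group_a, group_b):
    """Return (U_a, U_b); the test statistic is min(U_a, U_b)."""
    r = midranks(list(group_a) + list(group_b))
    n_a, n_b = len(group_a), len(group_b)
    u_a = sum(r[:n_a]) - n_a * (n_a + 1) / 2
    return u_a, n_a * n_b - u_a

# Hypothetical midterm scores (out of 20) for two small groups:
print(mann_whitney_u([11, 12, 10, 14], [15, 13, 16, 14]))  # prints (1.5, 14.5)
```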

III. RESULTS

The data that support the findings of this study are openly available at https://doi.org/10.6084/m9.figshare.22639279 (Merrou et al., 2023).

A. Demographics

The number of participants in the study was 38 students, 19 per group. Female students represented 79% of the study participants, whereas 21% were male.

B. Students’ Grades

Based on the data obtained, statistical analysis was performed to examine the influence of the teaching approach and the type of examination on learners’ results. The findings are presented in Tables 1 and 2, which show the average performance of learners in the midterm exam (ME), assessing declarative knowledge, and the final exam (FE), assessing conditional knowledge, for both teaching approaches.

 

Type of exam   Teaching approach   M      Sd     Lower bound   Upper bound   P value
ME             TG                  11.9   2.41   7.38          16.1          <0.01
               FG                  14.0   1.94   9.0           16.5
FE             TG                  11.9   3.28   6.09          16            <0.01
               FG                  14.1   1.96   10            16

Table 1. Descriptive statistics by exam type for each teaching approach (M = mean; Sd = standard deviation).

According to Table 1, in the midterm exam (ME) the mean score was significantly higher (p<0.01) in the FG (14.0) than in the TG (11.9). The TG also had a slightly higher standard deviation (2.41), indicating scores further from the average, whereas the FG showed a lower standard deviation (1.94), indicating scores more closely grouped around the mean (14.0). Applying PBL in the flipped classroom thus appears to have improved the grades and reduced the gap between them.

For the final exam, the dispersion increased with the traditional approach (Sd = 3.28), whereas the PBL approach improved student outcomes and narrowed the gap between them compared with the TG (p<0.01). Figure 1 highlights the dispersion of the midterm and final exam data for both approaches.

Figure 1. Students’ performance during the midterm exam (ME) and final exam (FE)

The ME grades improved with PBL. As the number of participants was limited, a nonparametric test (Mann-Whitney U test) was carried out, which revealed that the average ME grade differs significantly (p<0.01) between the traditional approach and PBL. Similarly, an improvement in FE grades was observed with the PBL approach, which improved both the means and the dispersion. This suggests that a teaching approach based on case studies in the context of a flipped classroom (PBL) may improve students’ outcomes on both declarative and conditional knowledge.

Teaching approach   N    M      Standard deviation   Δ mean   p
TG                  19   11.9   3.30                 2.21     0.01
FG                  19   14.1   1.95

Table 2. Descriptive statistics by teaching approach for the different types of controls.

Table 2 shows a difference between the averages of the two approaches in favour of the FG (p<0.01): students who followed the PBL approach obtained higher grades than those who followed the traditional approach. To check whether these differences were significant, a Mann-Whitney U test was used, which demonstrated that the average rank of the grades differs significantly between the two approaches (p = 0.01).

The mean score and standard deviation for each questionnaire item were determined. Average responses to the 12 items on the acquisition and application of pain management knowledge ranged from 3.8 to 4.5 (see Table 3).

Statement                                                                                                Average score (Sd)
1. I am confident in my ability to read a case and select the patient’s key factors that may impact their care.   4.3 (0.7)
2. I am confident in my ability to identify the presence of pain in a given patient.                     4.1 (0.6)
3. I feel confident in determining the type of pain from the etiology involved.                          4.2 (0.6)
4. I feel confident in determining the type of pain from the descriptive semiology used by a patient.    4.2 (0.7)
5. I am confident in my ability to choose the right pain assessment test for a given patient.            4.5 (0.5)
6. I am confident in my ability to use pain assessment tests with a given patient.                       4.3 (0.6)
7. I am confident in my ability to understand the mechanism of action of an analgesic according to its pharmacological class.   4.1 (0.6)
8. I feel confident in my ability to relate the therapeutic benefit of a drug to its mechanism of action.   4.1 (0.5)
9. I am confident in my ability to determine the oxidative, supra-additive, or sub-additive effects of painkillers.   4.1 (0.8)
10. I feel more sensitive to the importance of pain management.                                          4 (0.7)
11. I feel better prepared at the clinic after participating in clinical case discussions as part of the flipped classroom.   4.1 (0.5)
12. I feel better prepared to act as an advocate for my patient’s interests to ensure comfort.           3.8 (0.8)

Table 3. Acquisition and application of knowledge

The statement “I feel better prepared to act as an advocate for my patient’s interests to ensure comfort” received an average score of 3.8, the lowest in this section.

Statement                                                                                                Average score (Sd)   P value
1. The cases were relevant and interesting.                                                              4.7 (0.4)            Ns*
2. I was nervous at the beginning of the module, but I gained confidence in myself as the course progressed.   4.3 (0.8)
3. Participating in the group discussions made me more confident in analysing key pain-related data.     4.1 (0.9)
4. I find that discussions have helped me learn more effectively than lectures.                          4.5 (0.6)
5. I found that the group discussions helped my learning more effectively than the manual (handout).     4.6 (0.7)
6. I would recommend case-based seminar discussions as a tool for other courses.                         4 (0.9)

*mean comparison of each item (Ns = not significant for any item)

Table 4. Perception of PBL as a teaching/learning model

Average responses to the six questionnaire items on the cases as a learning model ranged from 4 to 4.7, and none of them differed significantly (p>0.05) from the other responses in this section. The statement “The cases were relevant and interesting” received the highest average response (4.7), while the statement “I would recommend case-based seminar discussions as a tool for other courses” received the lowest (4).

IV. DISCUSSION

Nursing students must grasp intricate concepts of basic physiology, pathophysiology, pharmacology, and more. Employing effective teaching methods with active learning can foster critical thinking abilities and uphold patient safety in complex care scenarios (Dong, 2016; Forsgren et al., 2014; Wood, 2003). Nursing education has now embraced the flipped classroom, as it offers a rich learning environment (Dong, 2016; Hanson, 2016; Immekus, 2019; Missildine et al., 2013; Ndosi & Newell, 2009; Peisachovich et al., 2016; Wood, 2003), and problem-based learning (PBL) is a frequently employed active learning approach in flipped classroom scenarios (Dong, 2016; Geist et al., 2015). PBL has been demonstrated to enhance the capacity of nursing students to evaluate patient information and arrive at more contemplative clinical judgments (Forsgren et al., 2014; Njie-Carr et al., 2017). When nursing students discuss within small groups guided by their professor, they gain interactive learning opportunities that promote critical thinking and independent learning; they can delve deeper into the subject matter, ask questions, and engage in meaningful dialogue with peers and instructor, which encourages them to take ownership of their education and become more confident, competent healthcare professionals. Nursing programmes should therefore prioritise small group discussions as a key component of their curriculum (Alton, 2016; Bailey, 2017; Carvalho et al., 2017; Gholami et al., 2016; Kong et al., 2014; Teike Lüthi et al., 2015; Wood, 2003). A review of the literature shows that only a limited number of studies have examined the use of PBL in nursing (Bailey, 2017; Forsgren et al., 2014).

The current study revealed that, regardless of the nature of the exam, student learning outcomes significantly improved with the flipped classroom method. Furthermore, the students participating in this study considered the method a useful model that improved their learning and made it more engaging (Schlairet et al., 2014). Indeed, active learning enables effective knowledge acquisition (Arrue et al., 2017) and develops nursing students’ critical thinking and metacognitive skills (Bailey, 2017; Carvalho et al., 2017; Domínguez, 2012). Consequently, alternating between lectures and the PBL approach may be a better option for health science courses (Alexandre & Wright, 2013). Students also demonstrated greater confidence in acquiring and applying knowledge related to pain management.

Participation in this approach was considered a positive learning strategy; regardless of course content, the flipped classroom has been shown to effectively support content learning (Hanson, 2016). When students were asked whether it helped them learn more effectively than lectures, the response was very positive. This finding is in line with a study conducted in Portugal, which found that using this method in a second-year pathophysiology course led to higher levels of student satisfaction (Marques & Correia, 2017). Although some discomfort may be reported when students are uncertain about the content, they attend classes on the assumption that doing so will help them understand exactly what they need to do and what they hope to achieve. This result confirms that student satisfaction does not always accurately reflect learning (Dong, 2016). Further evaluation of this strategy and other learning tools is needed to establish best practices in nursing education (Barranquero-Herbosa et al., 2022; Njie-Carr et al., 2017).

A. Limitations

The small sample size may affect the validity of the study, and because participants belonged to a single track, the results cannot be generalised to all nursing students. The study also covered only a small number of academic levels.

B. Implications for Teaching and Future Research

Future studies could compare different learning strategies (e.g., games, medication card design, and practice problems) to determine best practices for active learning strategies that support learning in a professional education setting and flipped classroom learning in nursing education.

V. CONCLUSION

Nursing education is about the development of professional skills; hence it is important to adopt active teaching strategies that promote critical thinking and knowledge transfer. However, time constraints often push teachers towards traditional lectures, a form of knowledge delivery that largely lacks interactivity, an issue recognised by many researchers worldwide.

The flipped classroom, in our case, offered a solution to this time management problem: it freed up class time for interactive activities and facilitation techniques such as case studies. In addition, this study allowed us to compare the impact of the flipped classroom with that of the traditional model on two groups of students enrolled in the same course, pain management. The comparison was based mainly on students’ acquisition of knowledge; we also measured students’ satisfaction with the proposed model as well as their sense of self-efficacy.

Students’ grades were clearly in favour of the PBL model in the flipped classroom. The students were also mostly satisfied with the proposed model and confirmed the development of their sense of self-efficacy regarding the pain management course.

Our perspective is the continuous improvement of our teaching, which, in our opinion, must be constantly corrected and enriched to face new conditions and situations. In this direction, the present study could constitute a roadmap for further in-depth studies of the PBL-based teaching model in the flipped classroom.

Notes on Contributors

Soumia Merrou is involved in the conceptualisation, methodology, data curation, writing and original draft preparation.

Abdellah Idrissi Jouicha helped in the methodology, participated in data curation and software, helped in writing – reviewing and editing.

Baslam Abdelmounaim participated in writing the original draft preparation, performed statistical analyses, helped in reviewing and editing corrections.

Zakaria Ouhaz was involved in visualisation, participated in data collection, helped writing and reviewed the manuscript.

Ahmed Rhassane El Adib was central to the conceptualisation and methodology, validated the design study, and supervised work progress. All authors have read and approved the final manuscript.

Ethical Approval

Participation in the study was voluntary and anonymous. Oral consent of all participants was obtained and the research was approved by the Institutional Ethical committee (CCBE-FSA Ref. No: ER-CS-10/2022-000).

Data Availability

The data that support the findings of this study are openly available in Figshare repository, https://doi.org/10.6084/m9.figshare.21385446.

Acknowledgement

We acknowledge the efforts of both professor and participants.

Funding

The study received no funding.

Declaration of Interest

The authors declare that they have no conflict of interest.

References

Alexandre, M. S., & Wright, R. R. (2013). Flipping the classroom for student engagement. International Journal of Nursing Care, 1(2), 100.

Alton, S. (2016). Learning how to learn: Meta-learning strategies for the challenges of learning pharmacology. Nurse Education Today, 38, 2–4. https://doi.org/10.1016/j.nedt.2016.01.003

Arrue, M., Ruiz de Alegría, B., Zarandona, J., & Hoyos Cillero, I. (2017). Effect of a PBL teaching method on learning about nursing care for patients with depression. Nurse Education Today, 52, 109–115. https://doi.org/10.1016/j.nedt.2017.02.016

Bailey, L. A. (2017). Adaptation of know, want to know, and learned chart for problem-based learning. Journal of Nursing Education, 56(8), 506–508. https://doi.org/10.3928/01484834-20170712-11

Barranquero-Herbosa, M., Abajas-Bustillo, R., & Ortego-Maté, C. (2022). Effectiveness of flipped classroom in nursing education: A systematic review of systematic and integrative reviews. International Journal of Nursing Studies, 105, Article 104327. https://doi.org/10.1016/j.ijnurstu.2022.104327

Carvalho, D. P. S. R. P., Azevedo, I. C., Cruz, G. K. P., Mafra, G. A. C., Rego, A. L. C., Vitor, A. F., Santos, V. E. P., Cogo, A. L. P., & Ferreira Júnior, M. A. (2017). Strategies used for the promotion of critical thinking in nursing undergraduate education: A systematic review. Nurse Education Today, 57, 103–107. https://doi.org/10.1016/j.nedt.2017.07.010

Deandrea, S., Montanari, M., Moja, L., & Apolone, G. (2008). Prevalence of undertreatment in cancer pain. A review of published literature. Annals of Oncology, 19(12), 1985–1991. https://doi.org/10.1093/annonc/mdn419

Domínguez, R. G. (2012). Participatory Learning. In N. M. Seel (Ed.), Encyclopedia of the Sciences of Learning (pp. 2556–2560). Springer.

Dong, X. (2016). Application of flipped classroom in college english teaching. Creative Education, 7(9), 1335–1339. https://doi.org/10.4236/ce.2016.79138

Forsgren, S., Christensen, T., & Hedemalm, A. (2014). Evaluation of the case method in nursing education. Nurse Education in Practice, 14(2), 164–169. https://doi.org/10.1016/j.nepr.2013.08.003

Geist, M. J., Larimore, D., Rawiszer, H., & Al Sager, A. W. (2015). Flipped versus traditional instruction and achievement in a baccalaureate nursing pharmacology course. Nursing Education Perspectives, 36(2), 114–115. https://doi.org/10.5480/13-1292

Gerbershagen, K., Gerbershagen, H. J., Lutz, J., Cooper-Mahkorn, D., Wappler, F., Limmroth, V., & Gerbershagen, M. (2009). Pain prevalence and risk distribution among inpatients in a German teaching hospital. The Clinical Journal of Pain, 25(5), 431–437.

Gholami, M., Moghadam, P. K., Mohammadipoor, F., Tarahi, M. J., Sak, M., Toulabi, T., & Pour, A. H. H. (2016). Comparing the effects of problem-based learning and the traditional lecture method on critical thinking skills and metacognitive awareness in nursing students in a critical care nursing course. Nurse Education Today, 45, 16–21. https://doi.org/10.1016/j.nedt.2016.06.007

Gianni, W., Madaio, R., Cioccio, L., D’Amico, F., Policicchio, D., Postacchini, D., Franchi, F., Ceci, M., Benincasa, E., Gentili, M., & Zuccaro, S. (2010). Prevalence of pain in elderly hospitalized patients. Archives of Gerontology and Geriatrics, 51(3), 273-276. https://doi.org/10.1016/j.archger.2009.11.016

Hanson, J. (2016). Surveying the experiences and perceptions of undergraduate nursing students of a flipped classroom approach to increase understanding of drug science and its application to clinical practice. Nurse Education in Practice, 16(1), 79–85. https://doi.org/10.1016/j.nepr.2015.09.001

Immekus, J. C. (2019). Flipping statistics courses in graduate education: Integration of cognitive psychology and technology. Journal of Statistics Education, 27(2), 79–89. https://doi.org/10.1080/10691898.2019.1629852

Kerner, Y., Plakht, Y., Shiyovich, A., & Schlaeffer, P. (2013). Adherence to guidelines of pain assessment and intervention in internal medicine wards. Pain Management Nursing, 14(4), 302–309. https://doi.org/10.1016/j.pmn.2011.06.005

Kong, L.-N., Qin, B., Zhou, Y., Mou, S., & Gao, H.-M. (2014). The effectiveness of problem-based learning on development of nursing students’ critical thinking: A systematic review and meta-analysis. International Journal of Nursing Studies, 51(3), 458–469. https://doi.org/10.1016/j.ijnurstu.2013.06.009

Manias, E., Bucknall, T., & Botti, M. (2005). Nurses’ strategies for managing pain in the postoperative setting. Pain Management Nursing: Official Journal of the American Society of Pain Management Nurses, 6(1), 18–29. https://doi.org/10.1016/j.pmn.2004.12.004

Marques, P. A. O., & Correia, N. C. M. (2017). Nursing education based on “hybrid” problem-based learning: The impact of PBL-based clinical cases on a pathophysiology course. Journal of Nursing Education, 56(1), 60. https://doi.org/10.3928/01484834-20161219-12

Merrou, S., Jouicha, A. I., Baslam, A., Ouhaz, Z., & El Adib, A. R. (2023). Problem-based learning method in the context of a flipped classroom: Outcomes on pain management course acquisition [Data set]. Figshare. https://doi.org/10.6084/m9.figshare.22639279

Missildine, K., Fountain, R., Summers, L., & Gosselin, K. (2013). Flipping the classroom to improve student performance and satisfaction. Journal of Nursing Education, 52(10), 597–599.

Ndosi, M. E., & Newell, R. (2009). Nurses’ knowledge of pharmacology behind drugs they commonly administer. Journal of Clinical Nursing, 18(4), 570–580. https://doi.org/10.1111/j.1365-2702.2008.02290.x

Njie-Carr, V. P. S., Ludeman, E., Lee, M. C., Dordunoo, D., Trocky, N. M., & Jenkins, L. S. (2017). An integrative review of flipped classroom teaching models in nursing education. Journal of Professional Nursing, 33(2), 133–144. https://doi.org/10.1016/j.profnurs.2016.07.001

Peisachovich, E. H., Murtha, S., Phillips, A., & Messinger, G. (2016). Flipping the classroom: a pedagogical approach to applying clinical judgment by engaging, interacting, and collaborating with nursing students. International Journal of Higher Education, 5(4), 114. https://doi.org/10.5430/ijhe.v5n4p114

Schlairet, M. C., Green, R., & Benton, M. J. (2014). The flipped classroom. Nurse Educator, 39(6), 321–325. https://doi.org/10.1097/nne.0000000000000096

Teike Lüthi, F., Gueniat, C., Nicolas, F., Thomas, P., & Ramelet, A.-S. (2015). Les obstacles à la gestion de la douleur perçus par les infirmières: Étude descriptive au sein d’un hôpital universitaire Suisse. [Barriers to pain management as perceived by nurses: A descriptive study in a Swiss University Hospital.] Douleur et Analgesie [Douleur & Analgésie], 28, 93-99. https://doi.org/10.1007/s11724-015-0414-3

Tune, J. D., Sturek, M., & Basile, D. P. (2013). Flipped classroom model improves graduate student performance in cardiovascular, respiratory, and renal physiology. Advances in Physiology Education, 37(4), 316–320. https://doi.org/10.1152/advan.00091.2013

Wood, D. (2003). Problem based learning. British Medical Journal, 326, 328–330. https://doi.org/10.1136/bmj.326.7384.328

*Abdellah Idrissi Jouicha
Marrakesh, Marrakesh-Safi,
40000, Morocco
Email: abdellah.idrissi@ced.uca.ac.ma

Submitted: 30 May 2022
Accepted: 7 December 2022
Published online: 4 July, TAPS 2023, 8(3), 35-44
https://doi.org/10.29060/TAPS.2023-8-3/OA2876

Rachel Jiayu Lee1*, Jeannie Jing Yi Yap1*, Abhiram Kanneganti1, Carly Yanlin Wu1, Grace Ming Fen Chan1, Citra Nurfarah Zaini Mattar1,2, Pearl Shuang Ye Tong1,2, Susan Jane Sinclair Logan1,2

1Department of Obstetrics and Gynaecology, National University Hospital, Singapore; 2Department of Obstetrics and Gynaecology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore

*Co-first authors

Abstract

Introduction: Disruptions of the postgraduate (PG) teaching programmes by COVID-19 have encouraged a transition to virtual methods of content delivery. This provided an impetus to evaluate the coverage of key learning goals by a pre-existing PG didactic programme in an Obstetrics and Gynaecology Specialty Training Programme. We describe a three-phase audit methodology that was developed for this purpose.

Methods: We performed a retrospective audit of the PG programme conducted by the Department of Obstetrics and Gynaecology at National University Hospital, Singapore between January and December 2019 utilising a ten-step Training Needs Analysis (TNA). Content of each session was reviewed and mapped against components of the 15 core Knowledge Areas (KA) of the Royal College of Obstetrics & Gynaecology membership (MRCOG) examination syllabus.

Results: Out of 71 PG sessions, there was a 64.9% coverage of the MRCOG syllabus. Four out of the 15 KAs were inadequately covered, achieving less than 50% of knowledge requirements. More procedural KAs such as “Gynaecological Problems” and those related to labour were poorly (less than 30%) covered. Following the audit, these identified gaps were addressed with targeted strategies.

Conclusion: Our audit demonstrated that our pre-pandemic PG programme poorly covered core educational objectives i.e. the MRCOG syllabus, and required a systematic realignment. The COVID-19 pandemic, while disruptive to our PG programme, created an opportunity to analyse our training needs and revamp our virtual PG programme.

Keywords: Medical Education; Residency; Postgraduate Education; Obstetrics and Gynaecology; Training Needs Analysis; COVID-19; Auditing Medical Education

Practice Highlights

  • Regular audits of PG programmes ensure relevance to key educational objectives.
  • Training Needs Analysis facilitates identification of learning goals, deficits & corrective change.
  • Mapping against a milestone examination syllabus & using Delphi technique helps identify learning gaps.
  • Procedural-heavy learning goals are poorly served by didactic PG and need individualised assessment.
  • A central committee is needed to balance the learning needs of all departmental CME participants.

I. INTRODUCTION

    Postgraduate medical education (PG) programmes are an important aspect in meeting core Specialty Trainees’ (ST) learning goals, in addition to other modalities of instruction such as practical training (e.g. supervised patient care or simulator-based training) (Bryant‐Smith et al., 2019) and workplace-based assessments (e.g. case-based discussions and Objective Structured Clinical Examinations [OSCEs]) (Chan et al., 2020; Parry-Smith et al., 2014). In academic medical centres, PG education may often be nestled within a wider departmental or hospital Continuing Medical Education (CME) programme. While both PG and CME programmes indirectly improve patient outcomes by keeping clinicians abreast of the latest updates, reinforcing important concepts, and changing practice (Burr & Johanson, 1998; Forsetlund et al., 2021; Marinopoulos et al., 2007; Norman et al., 2004; Raza et al., 2009; Sibley et al., 1982), it is important to balance the learning needs of STs with those of other learners (e.g. senior clinicians, scientists and allied healthcare professionals). This can be challenging, as multiple objectives need to be fulfilled amongst various learners. Nevertheless, just as with any other component of good quality patient care, PG education is amenable to audit and quality improvement initiatives (Davies, 1981; Norman et al., 2004; Palmer & Brackwell, 2014).

    The protracted COVID-19 pandemic has disrupted the way we deliver healthcare and conduct non-clinical services (Lim et al., 2009; Wong & Bandello, 2020). In response, the academic medical community has globally embraced the use of teleconferencing platforms such as Zoom, Microsoft Teams and Webex (Kanneganti, Sia, et al., 2020; Renaud et al., 2021), as well as other custom-built solutions for the synchronous delivery of didactics and group discourse (Khamees et al., 2022). While surgical disciplines have suffered a decline in the quality of “hands-on” training due to reduced elective surgical load and safe distancing (English et al., 2020), the use of simulators (Bienstock & Heuer, 2022; Chan et al., 2020; Hoopes et al., 2020; Xu et al., 2022), remote surgical preceptorship, and teaching through surgical videos (Chick et al., 2020; Juprasert et al., 2020; Mishra et al., 2020) have helped mitigate some of these losses. Virtual options that have been reproducibly utilised during the pandemic and will remain part of the regular armamentarium of postgraduate medical educationists include online didactic lectures, livestreaming or video repositories of surgical procedures (Grafton-Clarke et al., 2022), and virtual case discussions and grand ward rounds (Sparkes et al., 2021). Notably, they facilitate the inclusion of a physically wider audience, be it trainer or trainee, and allow participants to tune in from different geographical locations.

    At the Department of Obstetrics and Gynaecology, National University Hospital, Singapore, the forced, rapid transition to a virtual CME format (vCME) (Chan et al., 2020; Kanneganti, Lim, et al., 2020) provided an impetus to critically review and revamp the didactic component of our PG programmes. A large component of this had traditionally been embedded within our departmental CME programme, which comprises daily morning meetings covering recent specialty and scientific updates, journal clubs, guideline reviews, grand round presentations, surgical videos, exam preparation, topic modular series, and research and quality improvement presentations. The schedule and topics were previously decided arbitrarily by a lead consultant one month in advance and were presented by a supervised ST or invited speaker. While attendance by STs at these sessions was mandatory and comprised the bulk of protected ST teaching time, no prior attempt had been made to assess its coverage of core ST learning objectives and, in particular, the syllabus for milestone ST exams.

    Our main aim was to audit how well our previous PG didactic sessions covered the most important learning goals, so that we could subsequently restructure the sessions to better meet these goals.

    II. METHODS

    We audited and assessed our departmental CME programme’s relevance to the core learning goals of our STs by utilising a Training Needs Analysis (TNA) methodology. While there are various types of TNA used in healthcare and management (Donovan & Townsend, 2019; Gould et al., 2004; Hicks & Hennessy, 1996, 1997; Johnston et al., 2018; Markaki et al., 2021), in general they represent systematic approaches towards developing and implementing a training plan. Their common attributes can be distilled into three phases (Figure 1). Importantly for surgical and procedurally-heavy disciplines, a dimension that is not well covered by didactic sessions alone is the assessment of procedural skill competency. This requires separate attention that is beyond the scope of this audit.

    Figure 1. A simplified three phase approach to blueprinting, mapping, and auditing a Postgraduate (PG) Education Programme

    A. Phase 1: Identifying Organisational Goals and Specific Objectives

    The overarching goal of a specialty PG education programme is to produce well-balanced clinicians with a strong knowledge base. Singapore’s Obstetrics and Gynaecology specialty training programmes have adopted the membership examinations for the Royal College of Obstetricians and Gynaecologists (MRCOG) (Royal College of Obstetricians and Gynaecologists, 2021) of the United Kingdom as the milestone examination for progression from junior to senior ST.

    First, we adapted a ten-step TNA proposed by Donovan & Townsend (Table 1) to crystallise our core learning goals, identify deficiencies, and subsequently propose steps to address these gaps in a systematic fashion tailored to our specific context. While most aspects were followed without change, we adapted the last aspect, i.e. the Cost Benefit Analysis. As a general organisational and management tool, the original TNA primarily considered the financial costs of implementing a training programme. At an academic medical institution, the “cost” is mainly non-financial and chiefly refers to time taken away from important clinical service roles.

    As part of formulating what were deemed to be core learning goals of an ideal PG programme (i.e. Steps 1 to 4), we held a focus group discussion comprising key stakeholders in postgraduate education, including core faculty (CF), physician faculty (PF), and representative STs. The discussions identified 18 goals specific to our department. We then used a modified Delphi method (Hasson et al., 2000; Humphrey-Murto et al., 2017) to distil what CF, PFs, and STs felt were important priorities for grooming future specialists. Three rounds of priority ranking were undertaken via an anonymised online voting form. At each round, these 18 goals were progressively ranked and distilled until five remained. These were then ranked from highest to lowest priority: 1) exam preparedness, 2) clinical competency, 3) in-depth understanding of professional clinical guidelines, 4) interpretation of medical research literature, and 5) ability to conduct basic clinical research and audits.
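    The tallying behind each ranking round can be reduced to a simple positional (Borda-style) count. The sketch below is purely illustrative and is not the study's actual tooling; the goal names and ballots are hypothetical, and our rounds were run on an anonymised online voting form rather than in code:

```python
# Illustrative Borda-style tally for one Delphi priority-ranking round.
# Hypothetical ballots only; not the study's data or software.
from collections import defaultdict

def tally_round(ballots, keep):
    """Each ballot ranks goals from highest to lowest priority.
    A goal at position p on a ballot of length n scores (n - p) points;
    the `keep` highest-scoring goals survive to the next round."""
    scores = defaultdict(int)
    for ballot in ballots:
        for position, goal in enumerate(ballot):
            scores[goal] += len(ballot) - position
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:keep]

if __name__ == "__main__":
    # Three hypothetical voters ranking three goals.
    ballots = [
        ["exam preparedness", "clinical competency", "guideline understanding"],
        ["clinical competency", "exam preparedness", "guideline understanding"],
        ["exam preparedness", "guideline understanding", "clinical competency"],
    ]
    print(tally_round(ballots, keep=2))  # the two surviving goals
```

In practice this corresponds to running three successive rounds with a progressively smaller `keep` until five goals remain.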

    1. Strategic objectives
      • Competent O&G clinicians
    2. Operational outcomes
      • Specialist trainees: preparation for and passing of exams (MRCOG, CREOG); achieving ACGME training requirements
      • Existing clinicians: maintaining knowledge and competence
    3. Employee behaviours
      • Be familiar with the MRCOG syllabus
      • Be familiar with updates in clinical guidelines; keep up with advancements in scientific research
    4. Learnable capabilities
      • Completion of the Part 1 exam before entering specialist training
      • Knowledge, procedural skills, and competency
      • Achievement of ACGME milestones
    5. Gap assessment
      • Blueprinting of the PG programme to identify deficiencies in teaching
      • Surveys of STs/clinicians
      • Self-assessment
      • Tests (MRCOG, CREOG)
      • Performance evaluation
    6. Prioritisation of learning and training needs
      • Restructure the PG programme in terms of breadth and depth of topics
      • Identify who needs training (STs taking exams)
    7. Learning approaches
      • Various methods: didactics, lectures (invited speakers), e-learning, conferences, journal clubs, scientific research meetings, on-the-job training, surgical videos, panel discussions
      • Transition to virtual platforms and webinars
      • Suspension of simulation/hands-on workshops
    8. Roll-out plan
      • Virtual didactic PG programme, three to four times per week
    9. Evaluation criteria
      • Survey one year post-implementation
      • Assessment form after each teaching session
    10. Cost-benefit analysis
      • Points for consideration: content development time, lost productivity from time spent in training, delivery method (Zoom®)

    Table 1. 10-step Training Needs Analysis

    Table adapted from Donovan, P., & Townsend, J. (2019). Learning Needs Analysis. Management Pocketbooks.

    MRCOG: Member of the Royal College of Obstetricians and Gynaecologists; O&G: Obstetrics and Gynaecology; CREOG: Council on Resident Education in Obstetrics and Gynecology; ACGME: Accreditation Council for Graduate Medical Education; PG: Postgraduate Education

    B. Phase 2: Identifying a Standard and Assessing for Coverage against This Standard

    As with any audit, a “gold standard” should be identified. As the focus group discussion and Delphi method identified exam preparedness as the highest priority, we created a “blueprint” based on the syllabus of the MRCOG examination (Royal College of Obstetricians and Gynaecologists, 2019). This comprised more than 200 knowledge requirements organised into 15 Knowledge Areas (KAs) (Table 2). We mapped the old CME programme against this blueprint to understand the extent of coverage of these KAs, analysing the session contents between January and December 2019. We felt the best way to ensure systematic coverage of these KAs would be through sessions with pre-identified areas of topical focus conducted during protected teaching time, as opposed to opportunistic and voluntary learning opportunities that may not be widely available to all STs. In our department, this applied to the morning CME sessions, which formed the bulk of protected teaching time for STs, required mandatory attendance, and comprised sessions covering pre-defined topics. Thus, we excluded didactic sessions where 1) the content of the presentations was unavailable for audit, 2) the sessions covered administrative aspects without a pre-identified topical focus, so that any learning was opportunistic (e.g. risk management meetings, labour ward audits), or 3) attendance was optional.

    Mapping was conducted independently by two members of the study team (JJYY and CYW), with conflicts resolved by a third member (RJL). The number of knowledge requirements fulfilled within a KA was expressed as a percentage.
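    The coverage computation itself is straightforward arithmetic. As an illustration only (the KA names, requirement labels, and counts below are hypothetical, not our audit data), a mapping exercise of this kind can be tallied as follows:

```python
# Illustrative sketch of the coverage tally (hypothetical data, not the audit results).
# Each knowledge requirement is marked True if at least one audited session covered it.

def coverage_by_ka(mapping):
    """Return {KA: percentage of its knowledge requirements covered}."""
    return {
        ka: round(100.0 * sum(covered.values()) / len(covered), 1)
        for ka, covered in mapping.items()
    }

def overall_coverage(mapping):
    """Percentage of all knowledge requirements covered across every KA."""
    total = sum(len(covered) for covered in mapping.values())
    hit = sum(sum(covered.values()) for covered in mapping.values())
    return round(100.0 * hit / total, 1)

if __name__ == "__main__":
    # Hypothetical example: two KAs with four and two requirements respectively.
    mapping = {
        "Antenatal care": {"req1": True, "req2": True, "req3": False, "req4": True},
        "Subfertility": {"req1": True, "req2": False},
    }
    print(coverage_by_ka(mapping))    # per-KA percentages
    print(overall_coverage(mapping))  # 4 of 6 requirements covered
```

In our audit the same per-KA percentages were computed over the MRCOG blueprint, and the overall figure corresponds to the 64.9% syllabus coverage reported in the Results.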

    Core knowledge areas:
      • Clinical skills
      • Teaching and research
      • Core surgical skills
      • Postoperative care
      • Antenatal care
      • Maternal medicine
      • Management of labour
      • Management of delivery
      • Postpartum problems
      • Gynaecological problems
      • Subfertility
      • Sexual and reproductive health
      • Early pregnancy care
      • Gynaecological oncology
      • Urogynaecology and pelvic floor problems

    Table 2. RCOG Core Knowledge Areas (Royal College of Obstetricians and Gynaecologists, 2019)

    C. Phase 3: Restructuring a PG Programme

    The final phase i.e. the restructuring of a PG programme, is directed by responses to Steps 7-10 of the 10-step TNA (Table 1). As the focus of our article is on the methodology of auditing the extent of coverage of our departmental didactic sessions over our core ST learning goals i.e. the MRCOG KAs, these subsequent efforts are detailed in the discussion section.

    III. RESULTS

    Altogether, 71 presentations were identified (Table 3), of which 12 CME sessions (16.9%) were unavailable and thus excluded from the mapping exercise. The most common types of CME sessions presented clinical updates (31.0%), original research (29.6%), journal clubs (16.9%), and exam-preparation sessions (e.g. Case Based Discussion and OSCE simulations) (12.6%). The overall coverage of the entire syllabus was 64.9% (Figure 2). The KAs demonstrating complete coverage (i.e. 100% of all requirements) were “Teaching and Research”, “Postoperative Care” and “Early Pregnancy Care”. Three KAs had a coverage of 75-100% in the CME programme, i.e. “Clinical Skills” (89%), “Urogynaecology and Pelvic Floor” (82%), and “Subfertility” (77%), while four were covered below 50%, i.e. “Management of Labour”, “Management of Delivery”, “Postpartum Problems”, and “Gynaecological Problems”. These were more practical KAs that were usually covered during ward cover, in the operating theatre, clinics, and the labour ward, as well as during practical skills training workshops and grand ward rounds where clinical vignettes were opportunistically discussed depending on the in-patient case mix. Nevertheless, this “on-the-ground” training is often unplanned, unstructured and ‘bite-sized’, thus complicating integration with the deep and broad guideline and knowledge proficiency that may be needed to train STs to adapt to complex situations.

    Type of presentation                  Number of sessions    Percentage breakdown
    Clinical Updates                      22                    31.0%
    Presentation of Original Research     21                    29.6%
    Journal Club                          12                    16.9%
    Case Based Discussion                 5                     7.0%
    OSCE practice                         4                     5.6%
    Others*                               2                     2.8%
    Audit                                 2                     2.8%
    Workshops                             3                     4.2%
    Total                                 71                    100%

    *Others: ST Sharing of Overseas Experiences and Trainee Wellbeing

    Table 3. Type of CME presentations

    Figure 2. Graph showing the percentage coverage of knowledge areas

    IV. DISCUSSION

    Our audit revealed a relatively low coverage of the MRCOG KAs, with only 64.9% of the syllabus covered. While the morning CME programme caters to all members of the department, the sessions are an important didactic component for ST education and exam preparation as they are deemed “protected” teaching time. There had been no prior formal review assessing whether the programme catered to this very important section of the department’s workforce. We were also able to recognise that the KAs with exceptionally low coverage were those with a large amount of practical and “hands-on” skills (i.e. “Gynaecological Problems”, “Management of Labour”, “Management of Delivery”, and “Postpartum Problems”). As a surgical discipline, this highlighted that these areas needed directed solutions through other forms of practical instruction and evaluation. In the pandemic environment, this may involve virtual or home-based means (Hoopes et al., 2020). These “hands-on” KAs likely require at least semi-annual individualised assessment by the CF through verified case logs, Objective Structured Assessment of Technical Skills, Direct Observation of Procedural Skills, and Non-Technical Skills for Surgeons (NOTSS) (Bisson et al., 2006; Parry-Smith et al., 2014). This targeted assessment was even more crucial during the recovery “catch-up” phase, owing to de-skilling from the reduced elective surgical caseload (Amparore et al., 2020; Chan et al., 2020), and it facilitated the redistribution of surgical training material to cover training deficits.

    While there is significant literature on how to organise a robust PG didactic programme (Colman et al., 2020; Harden, 2001; Willett, 2008), little has been published on how to evaluate an established didactic programme’s coverage of its learners’ educational requirements (Davies, 1981). Most studies evaluating the efficacy of such programmes typically assess the effects of individual CME sessions on physician knowledge or performance and patient outcomes after a suitable interval (Davis et al., 1992; Mansouri & Lockyer, 2007), with most citing a small to medium effect. We believe, however, that our audit process permits a more holistic, reproducible, and structured means of evaluating an existing didactic programme and finding deficits that can be improved upon, bringing value to any specialty training programme.

    At our institution, safe distancing requirements brought on by the COVID-19 pandemic required a rapid transition to a video-conferencing-based approach, i.e. vCME. As milestone examinations were still being held, the first six months were primarily focused on STs, as examination preparation remained a high and undisputed priority and learning opportunities had been significantly disrupted by the pandemic. During this phase, our vCME programme was re-organised into three to four sessions per week, which were peer-led and supervised by a faculty member. Video-conferencing platforms encouraged audience participation through live feedback, questions posed via the chat box, instantaneous online polling, and directed case-based discussions with ST participants. These facilitated real-time feedback to the presenter in a way that was not possible in previous face-to-face sessions, for reasons such as shyness and the difficulty of conducting polls. Other useful features included the ability to record presentations for digital storage on a hospital-based server, giving STs on-demand access for revision.

    A previously published anonymised questionnaire within our department (Chan et al., 2020) found very favourable opinions of vCME as an effective mode of learning amongst 28 junior doctors (85.7%) and nine presenters (100%), with 75% hoping for it to continue even after the normalisation of social distancing policies. Nevertheless, common issues reported included a lack of personal interaction, difficulties in engaging with speakers, technical difficulties, and inaccurate attendance confirmation, as shared devices used to participate in these vCME sessions sometimes failed to identify who was present. While teacher and learner engagement is altered by physical separation across a digital medium, studies have also found that the virtual platform provided a useful means of communication and feedback and created a psychologically safe learning environment (Dong et al., 2021; Wasfy et al., 2021).

    While our audit focused primarily on STs, departmental CME programmes need to find a balance in catering to the educational outcomes of various groups of participants within a clinical department (e.g. senior clinicians, nursing staff, allied healthcare professionals, clinical scientists). Indeed, as these groups started to return to the CME programme about six months after the vCME transition, we created a core postgraduate committee comprising members representing the learning interests of each party, i.e. the department research director, the ST Programme Director and Assistant Director, and a representative senior ST in the fifth or sixth year of training, so that we could continue to meet the recommendations set in our TNA while rebalancing the programme to meet the needs of all participants. Out of an average of 20 CME sessions per month, four each were dedicated to departmental and hospital grand rounds. Of the remaining 12 sessions, two were dedicated to covering KAs, four to scientific presentations, three to clinical governance aspects, and one to a journal club. The remaining two would be “faculty wildcard” sessions, used at the discretion of the committee to cover poorly covered or more contemporary “breaking news” topics, or to serve as a buffer in the event of cancellations of other topics. Indeed, the same TNA-based audit methodology can be employed for any other group of CME participants.

    A key limitation of our audit method is that it focused on the breadth of coverage of learning objectives, but not on the quality or depth of the teaching. Teaching efficacy is also important in the delivery of learning objectives (Bakar et al., 2012) and needs more specific assessment tools (Metheny et al., 2005). Evaluating the quality of PG training could take several forms and may be direct, e.g. an evaluation by the learner (Gillan et al., 2011), or indirect, e.g. charting the learner’s progress through OSCEs and CEXs, scheduled competency reviews, and ST examination pass rates (Pinnell et al., 2021). Importantly, despite the rise of virtual learning platforms, there is little consensus on the best way to evaluate e-learning methods (De Leeuw et al., 2019). Nevertheless, our main audit goal was to assess the extent of coverage of the MRCOG syllabus, which is a key training outcome. Future audits, however, should incorporate this element to provide additional qualitative feedback on this dimension as well. Further research should evaluate the effects of optimising a PG didactic programme on key outcomes such as ST behaviour, perceptions, and objective outcomes such as examination results.

    Finally, while these were the results of an audit conducted in a single hospital department and used a morning CME programme as the basis for evaluation, we believe that this audit methodology, based on a ten-step TNA and utilising the Delphi method and syllabus mapping techniques (Harden, 2001), can be reproduced in any academic department that has a regular didactic programme, as long as a suitable standard is selected. The Delphi method can easily be conducted via online survey platforms (e.g. Google Forms) to crystallise the PG programme goals. Our audit shows that, without a systematic evaluation of past didactic sessions, it is possible for even a mature CME programme to fall significantly short of meeting the needs of its learners, and that PG didactic sessions need deliberate planning.

    V. CONCLUSION

    Just as with any other aspect of healthcare delivery, CME and PG programmes are amenable to audits and must adjust to an ever-changing delivery landscape. Rather than curse the darkness during the COVID-19 pandemic, we explored the potential of reformatting the PG programme and adjusting course to better suit the needs of our STs. We demonstrate a method of auditing an existing programme, distilling important learning goals, comparing the programme against an appropriate standard (i.e. coverage of the MRCOG KAs), and implementing changes utilising reproducible techniques such as the Delphi method (Humphrey-Murto et al., 2017). This process should be a regular mainstay of any mature ST programme to ensure continued relevance. As continual outbreaks, even amongst vaccinated populations (Amit et al., 2021; Bar-On et al., 2021; Bergwerk et al., 2021), augur a future of COVID-19 endemicity, we must accept a “new normal” comprising intermittent workplace infection control policies such as segregation, shift work, and restrictions on in-person meetings (Kwon et al., 2020; Liang et al., 2020). Through our experience, we have shown that this auditing methodology can also be applied to vCME programmes.

    Notes on Contributors

    Rachel Jiayu Lee participated in the data collection and review, the writing of the paper, and the formatting for publication.

    Jeannie Jing Yi Yap participated in the data collection and review, the writing of the paper, and the formatting for publication.

    Carly Yanlin Wu participated in data collection and review.

    Grace Chan Ming Fen participated in data collection and review.

    Abhiram Kanneganti was involved in the writing of the paper, editing, and formatting for publication.

    Citra Nurfarah Zaini Mattar participated in the editing and direction of the paper.

    Pearl Shuang Ye Tong participated in the editing and direction of the paper.

    Susan Jane Sinclair Logan participated in the editing and direction of the paper.

    Ethical Approval

    IRB approval for waiver of consent (National Healthcare Group DSRB 2020/00360) was obtained for the questionnaire assessing attitudes towards vCME.

    Data Availability

    There is no relevant data available for sharing in this paper.

    Acknowledgement

    We would like to acknowledge the roles of Mr Xiu Cai Wong Edwin, Mr Lee Boon Kai and Ms Teo Xin Yue in the administrative roles behind auditing and reformatting the PG medical education programme.

    Funding

    There was no funding for this article.

    Declaration of Interest

    The authors have no conflicts of interest in connection with this article.

    References

    Amit, S., Beni, S. A., Biber, A., Grinberg, A., Leshem, E., & Regev-Yochay, G. (2021). Postvaccination COVID-19 among healthcare workers, Israel. Emerging Infectious Diseases, 27(4), 1220. https://doi.org/10.3201/eid2704.210016

    Amparore, D., Claps, F., Cacciamani, G. E., Esperto, F., Fiori, C., Liguori, G., Serni, S., Trombetta, C., Carini, M., Porpiglia, F., Checcucci, E., & Campi, R. (2020). Impact of the COVID-19 pandemic on urology residency training in Italy. Minerva Urologica e Nefrologica, 72(4), 505-509. https://doi.org/10.23736/s0393-2249.20.03868-0

    Bakar, A. R., Mohamed, S., & Zakaria, N. S. (2012). They are trained to teach, but how confident are they? A study of student teachers’ sense of efficacy. Journal of Social Sciences, 8(4), 497-504. https://doi.org/10.3844/jssp.2012.497.504

    Bar-On, Y. M., Goldberg, Y., Mandel, M., Bodenheimer, O., Freedman, L., Kalkstein, N., Mizrahi, B., Alroy-Preis, S., Ash, N., Milo, R., & Huppert, A. (2021). Protection of BNT162b2 vaccine booster against Covid-19 in Israel. New England Journal of Medicine, 385(15), 1393-1400. https://doi.org/10.1056/NEJMoa2114255

    Bergwerk, M., Gonen, T., Lustig, Y., Amit, S., Lipsitch, M., Cohen, C., Mandelboim, M., Levin, E. G., Rubin, C., Indenbaum, V., Tal, I., Zavitan, M., Zuckerman, N., Bar-Chaim, A., Kreiss, Y., & Regev-Yochay, G. (2021). Covid-19 breakthrough infections in vaccinated health care workers. New England Journal of Medicine, 385(16), 1474-1484. https://doi.org/10.1056/NEJMoa2109072

    Bienstock, J., & Heuer, A. (2022). A review on the evolution of simulation-based training to help build a safer future. Medicine, 101(25), Article e29503. https://doi.org/10.1097/MD.0000000000029503

    Bisson, D. L., Hyde, J. P., & Mears, J. E. (2006). Assessing practical skills in obstetrics and gynaecology: Educational issues and practical implications. The Obstetrician & Gynaecologist, 8(2), 107-112. https://doi.org/10.1576/toag.8.2.107.27230

    Bryant‐Smith, A., Rymer, J., Holland, T., & Brincat, M. (2019). ‘Perfect practice makes perfect’: The role of laparoscopic simulation training in modern gynaecological training. The Obstetrician & Gynaecologist, 22(1), 69-74. https://doi.org/10.1111/tog.12619

    Burr, R., & Johanson, R. (1998). Continuing medical education: An opportunity for bringing about change in clinical practice. British Journal of Obstetrics and Gynaecology, 105(9), 940-945. https://doi.org/10.1111/j.1471-0528.1998.tb10255.x 

    Chan, G. M. F., Kanneganti, A., Yasin, N., Ismail-Pratt, I., & Logan, S. J. S. (2020). Well-being, obstetrics and gynaecology and COVID-19: Leaving no trainee behind. Australian and New Zealand Journal of Obstetrics and Gynaecology, 60(6), 983-986. https://doi.org/10.1111/ajo.13249 

    Chick, R. C., Clifton, G. T., Peace, K. M., Propper, B. W., Hale, D. F., Alseidi, A. A., & Vreeland, T. J. (2020). Using technology to maintain the education of residents during the COVID-19 pandemic. Journal of Surgical Education, 77(4), 729-732. https://doi.org/10.1016/j.jsurg.2020.03.018

    Colman, S., Wong, L., Wong, A. H. C., Agrawal, S., Darani, S. A., Beder, M., Sharpe, K., & Soklaridis, S. (2020). Curriculum mapping: an innovative approach to mapping the didactic lecture series at the University of Toronto postgraduate psychiatry. Academic Psychiatry, 44(3), 335-339. https://doi.org/10.1007/s40596-020-01186-0

    Davies, I. J. T. (1981). The assessment of continuing medical education. Scottish Medical Journal, 26(2), 125-134. https://doi.org/10.1177/003693308102600208

    Davis, D. A., Thomson, M. A., Oxman, A. D., & Haynes, R. B. (1992). Evidence for the effectiveness of CME: A review of 50 randomized controlled trials. Journal of the American Medical Association, 268(9), 1111-1117. https://doi.org/10.1001/jama.1992.03490090053014

    De Leeuw, R., De Soet, A., Van Der Horst, S., Walsh, K., Westerman, M., & Scheele, F. (2019). How we evaluate postgraduate medical e-learning: Systematic review. JMIR Medical Education, 5(1), e13128.

    Dong, C., Lee, D. W.-C., & Aw, D. C.-W. (2021). Tips for medical educators on how to conduct effective online teaching in times of social distancing. Proceedings of Singapore Healthcare, 30(1), 59-63. https://doi.org/10.1177/2010105820943907

    Donovan, P., & Townsend, J. (2019). Learning Needs Analysis. Management Pocketbooks.

    English, W., Vulliamy, P., Banerjee, S., & Arya, S. (2020). Surgical training during the COVID-19 pandemic – The cloud with a silver lining? British Journal of Surgery, 107(9), e343-e344. https://doi.org/10.1002/bjs.11801

    Forsetlund, L., O’Brien, M. A., Forsén, L., Mwai, L., Reinar, L. M., Okwen, M. P., Horsley, T., & Rose, C. J. (2021). Continuing education meetings and workshops: Effects on professional practice and healthcare outcomes. Cochrane Database of Systematic Reviews, 9(9), CD003030. https://doi.org/10.1002/14651858.CD003030.pub3

    Gillan, C., Lovrics, E., Halpern, E., Wiljer, D., & Harnett, N. (2011). The evaluation of learner outcomes in interprofessional continuing education: A literature review and an analysis of survey instruments. Medical Teacher, 33(9), e461-e470. https://doi.org/10.3109/0142159X.2011.587915

    Gould, D., Kelly, D., White, I., & Chidgey, J. (2004). Training needs analysis. A literature review and reappraisal. International Journal of Nursing Studies, 41(5), 471-486. https://doi.org/10.1016/j.ijnurstu.2003.12.003

    Grafton-Clarke, C., Uraiby, H., Gordon, M., Clarke, N., Rees, E., Park, S., Pammi, M., Alston, S., Khamees, D., Peterson, W., Stojan, J., Pawlik, C., Hider, A., & Daniel, M. (2022). Pivot to online learning for adapting or continuing workplace-based clinical learning in medical education following the COVID-19 pandemic: A BEME systematic review: BEME Guide No. 70. Medical Teacher, 44(3), 227-243. https://doi.org/10.1080/0142159X.2021.1992372

    Harden, R. M. (2001). AMEE Guide No. 21: Curriculum mapping: A tool for transparent and authentic teaching and learning. Medical Teacher, 23(2), 123-137. https://doi.org/10.1080/01421590120036547

    Hasson, F., Keeney, S., & McKenna, H. (2000). Research guidelines for the Delphi survey technique. Journal of Advanced Nursing, 32(4), 1008-1015. https://doi.org/10.1046/j.1365-2648.2000.t01-1-01567.x

    Hicks, C., & Hennessy, D. (1996). Applying psychometric principles to the development of a training needs analysis questionnaire for use with health visitors, district and practice nurses. NT Research, 1(6), 442-454. https://doi.org/10.1177/174498719600100608  

    Hicks, C., & Hennessy, D. (1997). The use of a customized training needs analysis tool for nurse practitioner development. Journal of Advanced Nursing, 26(2), 389-398. https://doi.org/https://doi.org/10.1046/j.1365-2648.1997.199702 6389.x

    Hoopes, S., Pham, T., Lindo, F. M., & Antosh, D. D. (2020). Home surgical skill training resources for obstetrics and gynecology trainees during a pandemic. Obstetrics and Gynecology 136(1), 56-64. https://doi.org/10.1097/aog.0000000000003931

    Humphrey-Murto, S., Varpio, L., Wood, T. J., Gonsalves, C., Ufholz, L.-A., Mascioli, K., Wang, C., & Foth, T. (2017). The use of the Delphi and other consensus group Methods in medical education research: A review. Academic Medicine, 92(10), 1491-1498. https://doi.org/10.1097/ACM.0000000000001812

    Johnston, S., Coyer, F. M., & Nash, R. (2018). Kirkpatrick’s evaluation of simulation and debriefing in health care education: a systematic review. Journal of Nursing Education, 57(7), 393-398. https://doi.org/10.3928/01484834-20180618-03

    Juprasert, J. M., Gray, K. D., Moore, M. D., Obeid, L., Peters, A. W., Fehling, D., Fahey, T. J., III, & Yeo, H. L. (2020). Restructuring of a general surgery residency program in an epicenter of the Coronavirus Disease 2019 pandemic: Lessons from New York City. JAMA Surgery, 155(9), 870-875. https://doi.org/10.1001/jamasurg.2020.3107

    Kanneganti, A., Lim, K. M. X., Chan, G. M. F., Choo, S.-N., Choolani, M., Ismail-Pratt, I., & Logan, S. J. S. (2020). Pedagogy in a pandemic – COVID-19 and virtual continuing medical education (vCME) in obstetrics and gynecology. Acta Obstetricia et Gynecologica Scandinavica, 99(6), 692-695. https://doi.org/10.1111/aogs.13885

    Kanneganti, A., Sia, C.-H., Ashokka, B., & Ooi, S. B. S. (2020). Continuing medical education during a pandemic: An academic institution’s experience. Postgraduate Medical Journal, 96, 384-386. https://doi.org/10.1136/postgradmedj-2020-137840

    Khamees, D., Peterson, W., Patricio, M., Pawlikowska, T., Commissaris, C., Austin, A., Davis, M., Spadafore, M., Griffith, M., Hider, A., Pawlik, C., Stojan, J., Grafton-Clarke, C., Uraiby, H., Thammasitboon, S., Gordon, M., & Daniel, M. (2022). Remote learning developments in postgraduate medical education in response to the COVID-19 pandemic – A BEME systematic review: BEME Guide No. 71. Medical Teacher, 44(5), 466-485. https://doi.org/10.1080/0142159X.2022.2040732

    Kwon, Y. S., Tabakin, A. L., Patel, H. V., Backstrand, J. R., Jang, T. L., Kim, I. Y., & Singer, E. A. (2020). Adapting urology residency training in the COVID-19 era. Urology, 141, 15-19. https://doi.org/10.1016/j.urology.2020.04.065

    Liang, Z. C., Ooi, S. B. S., & Wang, W. (2020). Pandemics and their impact on medical training: Lessons from Singapore. Academic Medicine, 95(9), 1359-1361. https://doi.org/10.1097/ACM.0000000000003441

    Lim, E. C., Oh, V. M., Koh, D. R., & Seet, R. C. (2009). The challenges of “continuing medical education” in a pandemic era. Annals Academy of Medicine Singapore, 38(8), 724-726. https://www.ncbi.nlm.nih.gov/pubmed/19736579

    Mansouri, M., & Lockyer, J. (2007). A meta-analysis of continuing medical education effectiveness. Journal of Continuing Education in the Health Professions, 27(1), 6-15. https://doi.org/10.1002/chp.88

    Marinopoulos, S. S., Dorman, T., Ratanawongsa, N., Wilson, L. M., Ashar, B. H., Magaziner, J. L., Miller, R. G., Thomas, P. A., Prokopowicz, G. P., Qayyum, R., & Bass, E. B. (2007). Effectiveness of continuing medical education. Evid Rep Technol Assess (Full Rep), 149, 1-69. https://www.ncbi.nlm.nih.gov/pubmed/17764217

    Markaki, A., Malhotra, S., Billings, R., & Theus, L. (2021). Training needs assessment: Tool utilization and global impact. BMC Medical Education, 21(1), Article 310. https://doi.org/10.1186/s12909-021-02748-y

    Metheny, W. P., Espey, E. L., Bienstock, J., Cox, S. M., Erickson, S. S., Goepfert, A. R., Hammoud, M. M., Hartmann, D. M., Krueger, P. M., Neutens, J. J., & Puscheck, E. (2005). To the point: Medical education reviews evaluation in context: Assessing learners, teachers, and training programs. American Journal of Obstetrics and Gynecology, 192(1), 34-37. https://doi.org/10.1016/j.ajog.2004.07.036

    Mishra, K., Boland, M. V., & Woreta, F. A. (2020). Incorporating a virtual curriculum into ophthalmology education in the coronavirus disease-2019 era. Current Opinion in Ophthalmology, 31(5), 380-385. https://doi.org/10.1097/icu.0000000000000681

    Norman, G. R., Shannon, S. I., & Marrin, M. L. (2004). The need for needs assessment in continuing medical education. BMJ (Clinical research ed.), 328(7446), 999-1001. https://doi.org/10.1136/bmj.328.7446.999

    Palmer, W., & Brackwell, L. (2014). A national audit of maternity services in England. BJOG, 121(12), 1458-1461. https://doi.org/10.1111/1471-0528.12973

    Parry-Smith, W., Mahmud, A., Landau, A., & Hayes, K. (2014). Workplace-based assessment: A new approach to existing tools. The Obstetrician & Gynaecologist, 16(4), 281-285. https://doi.org/10.1111/tog.12133

    Pinnell, J., Tranter, A., Cooper, S., & Whallett, A. (2021). Postgraduate medical education quality metrics panels can be enhanced by including learner outcomes. Postgraduate Medical Journal, 97(1153), 690-694. https://doi.org/10.1136/postgradmedj-2020-138669

    Raza, A., Coomarasamy, A., & Khan, K. S. (2009). Best evidence continuous medical education. Archives of Gynecology and Obstetrics, 280(4), 683-687. https://doi.org/10.1007/s00404-009-1128-7

    Renaud, C. J., Chen, Z. X., Yuen, H.-W., Tan, L. L., Pan, T. L. T., & Samarasekera, D. D. (2021). Impact of COVID-19 on health profession education in Singapore: Adoption of innovative strategies and contingencies across the educational continuum. The Asia Pacific Scholar, 6(3), 14-23. https://doi.org/10.29060/TAPS.2021-6-3/RA2346

    Royal College of Obstetricians and Gynaecologists. (2019). Core Curriculum for Obstetrics & Gynaecology: Definitive Document 2019. https://www.rcog.org.uk/media/j3do0i1i/core-curriculum-2019-definitive-document-may-2021.pdf

    Sibley, J. C., Sackett, D. L., Neufeld, V., Gerrard, B., Rudnick, K. V., & Fraser, W. (1982). A randomized trial of continuing medical education. New England Journal of Medicine, 306(9), 511-515. https://doi.org/10.1056/NEJM198203043060904

    Sparkes, D., Leong, C., Sharrocks, K., Wilson, M., Moore, E., & Matheson, N. J. (2021). Rebooting medical education with virtual grand rounds during the COVID-19 pandemic. Future Healthc J, 8(1), e11-e14. https://doi.org/10.7861/fhj.2020-0180

    Wasfy, N. F., Abouzeid, E., Nasser, A. A., Ahmed, S. A., Youssry, I., Hegazy, N. N., Shehata, M. H. K., Kamal, D., & Atwa, H. (2021). A guide for evaluation of online learning in medical education: A qualitative reflective analysis. BMC Medical Education, 21(1), Article 339. https://doi.org/10.1186/s12909-021-02752-2

    Willett, T. G. (2008). Current status of curriculum mapping in Canada and the UK. Medical Education, 42(8), 786-793. https://doi.org/10.1111/j.1365-2923.2008.03093.x

    Wong, T. Y., & Bandello, F. (2020). Academic ophthalmology during and after the COVID-19 pandemic. Ophthalmology, 127(8), e51-e52. https://doi.org/10.1016/j.ophtha.2020.04.029

    Xu, J., Zhou, Z., Chen, K., Ding, Y., Hua, Y., Ren, M., & Shen, Y. (2022). How to minimize the impact of COVID-19 on laparoendoscopic single-site surgery training? ANZ Journal of Surgery, 92, 2102-2108. https://doi.org/10.1111/ans.17819

    *Abhiram Kanneganti
    Department of Obstetrics and Gynaecology,
    NUHS Tower Block, Level 12,
    1E Kent Ridge Road,
    Singapore 119228
    Email: abhiramkanneganti@gmail.com

    Submitted: 30 June 2022
    Accepted: 31 October 2022
    Published online: 4 July, TAPS 2023, 8(3), 26-34
    https://doi.org/10.29060/TAPS.2023-8-3/OA2834

    Noorjahan Haneem Md Hashim1, Shairil Rahayu Ruslan1, Ina Ismiarti Shariffuddin1, Woon Lai Lim1, Christina Phoay Lay Tan2 & Vinod Pallath3

    1Department of Anaesthesiology, Faculty of Medicine, Universiti Malaya, Malaysia; 2Department of Primary Care Medicine, Faculty of Medicine, Universiti Malaya, Malaysia; 3Medical Education Research & Development Unit, Dean’s Office, Faculty of Medicine, Universiti Malaya, Malaysia

    Abstract

    Introduction: Examiner training is essential to ensure the trustworthiness of the examination process and results. The Anaesthesiology examiners’ training programme to standardise examination techniques and standards across seniority, subspecialty, and institutions was developed using McLean’s adaptation of Kern’s framework.

    Methods: The programme was delivered through an online platform due to pandemic constraints. Key focus areas were Performance Dimension Training (PDT), Frame-of-Reference Training (FORT) and factors affecting validity. Training methods included interactive lectures, facilitated discussions and experiential learning sessions using the rubrics created for the viva examination. Programme effectiveness was measured using the Kirkpatrick model for programme evaluation.

    Results: Seven out of eleven participants rated the programme content as useful and relevant. Four participants showed improvement in the post-test compared to the pre-test. Five participants reported behavioural changes, either during the preparation or the conduct of the examination. Factors that contributed to this intervention’s effectiveness were identified through the MOAC (motivation, opportunities, abilities, and communality) model.

    Conclusion: Though not all examiners attended the training session, all were committed to a fairer and more transparent examination and motivated to ensure the ease of the process. The success of any faculty development programme must be defined, and the factors affecting it must be identified, to ensure engagement and sustainability of the programme.

    Keywords: Medical Education, Health Profession Education, Examiner Training, Faculty Development, Assessment, MOAC Model, Programme Evaluation

    Practice Highlights

    • A faculty development initiative must be tailored to faculty’s learning needs and context.
    • A simple framework of planning, implementing, and evaluating can be used to design a programme.
    • Target outcome measures and evaluation plans must be included in the planning process.
    • The Kirkpatrick model is a useful tool for programme evaluation: to answer whether the programme has met its objectives.
    • The MOAC model is a useful tool to explain why a programme has met its objectives.

    I. INTRODUCTION

    Anaesthesiology specialist training in Malaysia comprises a 4-year clinical master’s programme. At the time of our workshop, five local public universities offered the programme. The course content is similar in all universities, but the course delivery may differ to align with each university’s rules and regulations. The summative examinations are held as a Conjoint Examination. Examiners include lecturers from all five universities, specialists from the Ministry of Health and external examiners from international Anaesthesiology training programmes. The examination consists of a written and a viva voce examination. The areas examined are knowledge and cognitive skills in patient management.

    A speciality training programme’s exit level assessment is an essential milestone for licensing. In our programme, the exit examination occurs at the end of the training before trainees practise independently in the healthcare system and are eligible for national specialist registration. Therefore, aligning the curriculum and assessment to licensing requirements is necessary.

    Examiners play an important role in this high-stakes summative examination, deciding whether graduating trainees may work as specialists in the community. Therefore, examiners must understand their role. In recent years, the anaesthesiology training programme providers in Malaysia have been taking measures to improve the validity of the examination. These include a stringent vetting process to ensure that examination content reflects the syllabus, that questions are unambiguous, and that the examiners agree on the criteria for passing. However, previous examinations revealed that although examiners were clear on the aim of the examination, some utilised different assessment approaches, possibly coloured by personal and professional experiences, and thus needed constant calibration against the passing criteria. In addition, during examiner discussions, examiners were found to have differing skill levels in constructing focused higher-order questions and were not fully aware of potential cognitive biases that may affect the examination results.

    These insights from previous examinations warranted a specific skill training session to ensure the trustworthiness of the examination process and results (Blew et al., 2010; Iqbal et al., 2010; Juul et al., 2019, Chapter 8, pp. 127-140; McLean et al., 2008). The examiners and the Specialty committee were keen to ensure that these issues were addressed with a training programme that complements the current on-the-job examiner training.

    II. METHODS

    An examiner training module was developed using McLean’s adaptation of Kern’s framework for curriculum development: Planning, Implementation and Evaluation (McLean et al., 2008; Thomas et al., 2015). A conceptual framework for the examiner training programme was drawn up from the programme’s conception stage to the evaluation of its outcome, as illustrated in Figure 1 (Steinert et al., 2016).

    Figure 1: The conceptual framework for the examiner training programme and evaluation of its effectiveness

    A. Planning

    Three key focus areas were identified for the training programme: (1) Performance Dimension Training (PDT); (2) examiner calibration with Frame-of-Reference Training (FORT); and (3) identifying factors affecting the validity of results and the measures that can be taken to mitigate them.

    1) Performance dimension training (Feldman et al., 2012): The aim was to improve examination validity by reducing examiner errors or biases unrelated to the examinees’ targeted performance behaviours. Finalised marking schemes outlining the competencies to be assessed required agreement from all examiners ahead of time. These schemes needed to be clearly defined and easily understood, as consistency was key to reducing examiner bias.

    2) Examiner calibration with Frame-of-Reference Training (FORT) (Newman et al., 2016): Differing levels of experience among the participants meant differing expectations of candidate performance. The examiner training programme therefore needed to assist examiners in resetting their expectations and criteria for assessing the candidates’ competencies. This calibration was achieved using pre-recorded simulated viva sessions: participants rated the candidates’ performances in each session and received immediate feedback on their scoring and the criteria they applied.

    3) Identifying factors affecting the validity of results (Lineberry, 2019): Factors that may affect the validity of examination results may be related to construct underrepresentation (CU), where the results only reflect one part of an attribute being examined; or construct-irrelevant variance (CIV), where the results are being affected by areas or issues other than the attribute being examined.

    An example of CU is sampling issues where only a limited area of the syllabus is examined, or an answer key is limited by the availability of evidence or content expertise.

    Examples of CIV include the different ways a concept can be interpreted in different cultures or training centres, ambiguous questions, examiner cognitive biases, examiner fatigue, examinee language abilities, and examinees guessing or cheating. The examiner training programme was designed with the objectives listed in Table 1.

    Malaysian Anaesthesiology Exit Level Examiner Training Programme

    1. Participants should be able to define the purpose and competencies to be assessed in the viva examination.

    2. Participants should be able to construct high-order questions (elaborating, probing, and justifying).

    3. Participants should be able to agree on anchors on the examination rating scales and narrow the range of ratings for the same observed encounter.

    4. Participants should be able to calibrate the scoring of different levels of responses.

    Table 1: Objectives of the Faculty Development Intervention

    B. Implementation

    The faculty intervention programme was designed as a one-day online programme to be attended by potential examiners for the Anaesthesiology Exit Examination. The programme objectives were prioritised from the needs assessment and designed based on Tekian & Norcini’s recommendations (Tekian & Norcini, 2016). Due to time constraints, training was performed using an online platform closer to the examination dates after obtaining university clearance on confidentiality regarding assessment issues.

    The structure and contents of the examiner training programme are outlined in Table 2 and further elaborated in Appendix A.

    Lectures

    1. Orientation to the examination regulations, objectives, structure and format of the final examination.

    2. Ensuring the validity of the viva examination: elaborating on the threats to the process and how to mitigate these concerns.

    3. Creating high-order questions based on the competencies to be assessed, and promoting appropriate examiner behaviours through consistency and increased reliability.

    4. Utilising marking schemes, anchors and making inferences, with:
       a. A review of the literature discussing ratings in high-stakes examinations.
       b. A presentation of various checklists and rating scales, with discussion about anchors.

    Experiential learning sessions

    1. Participants discuss and agree on the competencies to be assessed.

    2. Participants work in groups to construct questions based on a given scenario and the competencies to be assessed.

    3. Participants finalise a rating scale to be used in the examination.

    4. Participants observe videos of simulated examination candidates performing at various levels of competency and rate their performance. The discussion focuses on the similarities and differences between examiners.

    Participant feedback and evaluation

    A question-and-answer session is held to address any doubts and queries from the participants.

    Table 2: Contents and structure of the examiner training programme

    Based on the objectives, the organisers invited a multidisciplinary group of facilitators. The group consisted of anaesthesiologists, medical education experts in assessment and faculty development, and a technical and logistics support team to ensure efficient delivery of the online programme.

    A multimodal approach to delivery was adopted to accommodate the diversity of the examiner group (gender, seniority, subspeciality, and examination experience). Explicit ground rules were agreed upon to underpin a safe and respectful learning environment. The educational strategy included interactive lectures, hands-on practice using the rubrics created, and calibration using video-assisted scenarios. The programme objectives were embedded and reinforced in each strategy. Pre- and post-tests were administered to help participants gauge their learning and to assist the programme organisers in evaluating it.

    This was the first time such a programme had been held in the local setting. Participants were all anaesthesiologists by profession, were actively involved in clinical duties within a tertiary hospital setting, and consented to participate in this programme. As potential examiners, all had prior experience as observers of the examination process, and the majority had previous experience as examiners as well.

    The programme was organised during the peak of the COVID-19 pandemic and was managed on a fully online platform to ensure safety and minimise the time taken away from clinical duties. In addition, participants received protected time for this programme, a necessary luxury as anaesthesiologists were at the forefront of managing the pandemic.

    C. Evaluation

    The Kirkpatrick model (McLean et al., 2008; Newstrom, 1995) was used to evaluate the programme’s effectiveness, as described and elaborated in Figure 2.

    Figure 2: The Kirkpatrick model, elaborated for this programme
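    As a minimal illustration, the mapping between the four Kirkpatrick levels and the instruments this programme used at each level could be sketched as follows; the instrument descriptions are paraphrased from the Results sections, while the data structure and names are our own and not from the study.

    ```python
    # Sketch of this programme's Kirkpatrick evaluation plan.
    # Level -> (level name, evaluation instrument used in this programme).
    # The dictionary itself is illustrative; only the instrument wording
    # is drawn from the article's Results sections.
    KIRKPATRICK_PLAN = {
        1: ("Reaction", "post-programme evaluation form"),
        2: ("Learning", "pre- and post-tests"),
        3: ("Behavioural change", "follow-up questionnaire after two examinations"),
        4: ("Results", "quality of question preparation and vetting"),
    }

    for level in sorted(KIRKPATRICK_PLAN):
        name, instrument = KIRKPATRICK_PLAN[level]
        print(f"Level {level} ({name}): measured via {instrument}")
    ```

    Laying the plan out this way makes explicit that each level has its own instrument, which is how the Results section below is organised.
    
    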

    The MOAC model (Vollenbroek, 2019), which expands the MOA model of Blumberg and Pringle (1982) (see also Marin-Garcia & Martinez Tomas, 2016), was used to examine the factors that contributed to the effectiveness of the programme. Motivation, opportunity, ability, and communality are the factors that drive action and performance.

    III. RESULTS

    Eleven participants attended the programme. These participants were examiners for the 2021 examinations from the university training centres and the Ministry of Health, Malaysia. Only one of the participants would be a first-time examiner in the Exit Examination. Four of the would-be examiners could not attend due to service priorities.

    A. Level 1: Reaction

    Seven of the eleven participants completed the programme evaluation form, which is openly available in Figshare at https://doi.org/10.6084/m9.figshare.20189309.v1 (Tan & Pallath, 2022). All of them rated the programme content as useful and relevant to their examination duties and stated that the content and presentations were pitched at the correct level, with appropriate visual aids and reference materials. The online learning opportunity was also rated as good.

    All seven also aimed to make behavioural changes after attending the programme. Excerpts include:

    “I am more cognizant of the candidates’ understanding to questions and marking schemes”

    “Yes. We definitely need the rubric/marking scheme for standardisation. Will also try to reduce all the possible biases as mentioned in the programme.”

    “Yes, as I will be more agreeable to question standardisation in viva examination because it makes it fairer for the candidates.”

    The participants also shared their understanding of the importance of standardisation and examiner training and would recommend this programme to be conducted annually. They agreed that the examiner training programme should be made mandatory for all new examiners, with the option of refresher courses for veteran examiners if appropriate.

    B. Level 2: Learning

    All 11 participants completed the pre- and post-tests. The data supporting these findings are openly available in Figshare at https://doi.org/10.6084/m9.figshare.20186582.v1 (Md Hashim, 2021). The participants’ marks in both tests are shown in Appendix B. Scores improved in the areas of identifying why under-sampling is a problem and methods to prevent validity threats. Scores declined on understanding the sources of validity threats from cognitive biases (question 2: from 11 to 8; question 3: from 10 to 8).

    Comparing the post-test scores to pre-test scores, four participants showed improvement, four showed no change (one of the participants answered all questions correctly in both tests) and three participants showed a decline in test scores.
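    As a minimal illustration of this Level 2 tally, the paired pre-/post-test comparison could be sketched as below. The individual scores are hypothetical, not the study’s Appendix B data; only the 4/4/3 split matches the reported result.

    ```python
    # Illustrative sketch: classifying participants by the change between
    # paired pre- and post-test scores, as done at Kirkpatrick Level 2.
    # The score pairs below are hypothetical.

    def classify_change(pre, post):
        """Label one participant's learning outcome from paired scores."""
        if post > pre:
            return "improved"
        if post < pre:
            return "declined"
        return "no change"

    def tally(paired_scores):
        """Count participants in each outcome category."""
        counts = {"improved": 0, "no change": 0, "declined": 0}
        for pre, post in paired_scores:
            counts[classify_change(pre, post)] += 1
        return counts

    # Hypothetical (pre, post) score pairs for 11 participants:
    scores = [(4, 6), (5, 7), (3, 5), (6, 8),   # improved
              (5, 5), (6, 6), (7, 7), (8, 8),   # no change
              (7, 6), (6, 4), (5, 3)]           # declined

    print(tally(scores))  # {'improved': 4, 'no change': 4, 'declined': 3}
    ```
    
    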

    C. Level 3: Behavioural Change

    Six participants responded to the follow-up questionnaire, which is openly available in Figshare at https://doi.org/10.6084/m9.figshare.20186591.v2 (Md Hashim, 2022). This questionnaire was administered about a year after the examiner training programme and after the completion of two examinations. Only one respondent did not make any self-perceived behavioural change while preparing the examination questions and conducting the viva examinations. Two respondents did not make any changes while marking or rating candidates.

    The specific changes in the three areas of behavioural change that were consciously noted by the respondents were explored. Respondents reported increased awareness and being more systematic in question preparation, making questions more aligned to the curriculum, preparing better quality questions, and being more cognizant of candidates’ understanding of the questions.

    They also reported being more objective and guided during marking and rating as the passing criteria were better defined and structured.

    Regarding the conduct of the viva examination, respondents shared that they were better prepared during vetting and found it easier to rate candidates, as the marking schemes and questions were standardised, ensuring that candidates could answer all the questions required to pass.

    D. Level 4: Results

    The examiners who attended the training programme prepared questions according to the blueprint, identified the areas to be examined, and provided recommended passing criteria for each question. This led to a smooth vetting process and examination.

    E. Factors Affecting Effectiveness

    Even though not all potential examiners attended the programme, those who did were committed to the idea of a fairer and more transparent examination process. This formed the motivation aspect of the model.

    In terms of opportunity, protected training time was most important, followed by prioritising the content of the training material according to the most pressing needs.

    The ability aspect encompassed the abilities of both facilitators and participants. To support the learning process, credible trainers were invited to facilitate the lectures and experiential learning sessions. The Faculty Development team comprised an experienced clinician, a basic medical scientist, and an anaesthesiologist, all with medical education qualifications, and was vital in ensuring the success of this programme. The team was led by the Chief Examiner, who focused on the dimensions to be tested and calibrated while simultaneously managing the examiners’ expectations and their ability to give and accept feedback. Communication and receptiveness to the proposed changes were also crucial to making the intervention work.

    In terms of communality, all the participants were of similar professional backgrounds and shared the realisation that this training programme was essential and would yield positive results, which helped ensure the programme’s overall success.

    IV. DISCUSSION

    The progressive change seen in this attempt to improve the examination system is aligned with general progress in medical education. Training of examiners is important (Holmboe et al., 2011), as it is not the assessment tool but the person using it that makes the difference. Because it is difficult to design the ‘perfect tool’ for performance tests, and redesigning a tool changes only about 10% of the variance in ratings (Holmboe et al., 2011; Williams et al., 2003), educators must train faculty in observation and assessment. It is not irrational to extrapolate this effect to written and oral examinations. Holmboe et al. (2011) also outline the reasons for an assessor training programme: changing curriculum structure, content and delivery; emerging evidence regarding assessment; building a system reserve; and using training programmes as opportunities to identify and engage change agents, allowing faculty to form a mental picture of how changes will affect them and improve practice. Enlisting the help and support of a respected faculty member during training promotes the depth and breadth of change.

    Khera et al. (2005) described their paediatric examination experiences, in which the Royal College of Paediatrics and Child Health defined examiners’ competencies, the selection process and the training programme components. The training programme included principles of assessment, examination design, writing questions, interpersonal skills, professional attributes, managing diversity, and assessing the examiners’ skills. They believe this content will ensure the assessment is valid, reliable, and fair. As Anaesthesiology examiners have different knowledge levels and experiences, it was crucial to assess their learning needs and provide them with appropriate learning opportunities.

    In the emergency brought on by the COVID-19 pandemic, online training was the safest and most feasible platform for conducting this programme. Online faculty development activities have the perceived advantages of being convenient and flexible, allowing interdisciplinary interaction, and providing the experience of being an online student (Cook & Steinert, 2013). Forming the facilitation team together with the dedicated technical and logistics team, and creating a chat group prior to the programme, were key to anticipating and handling communication and technical issues (Cook & Steinert, 2013).

    Though participants were engaged and the results of the workshop were encouraging, the programme delivery and content will be reviewed based on the feedback received. The convenience of an online activity must be balanced against the participant engagement and facilitator presence of a face-to-face activity. Since the two methods of delivery yield differing results (Arias et al., 2018; Daniel, 2014; Kemp & Grieve, 2014), the best solution may be to ask the participants what would work best for them, as they are adult learners and experienced examiners. The programme must be designed with participant involvement, with opportunities to participate, and with engaging facilitators and support teams able to support the participants’ learning needs (Singh et al., 2022).

    At the end of the programme, its effectiveness was measured with reference to the Kirkpatrick model (Newstrom, 1995; Steinert et al., 2006), which was most helpful in identifying the success of the intervention, including behavioural change. Measuring behavioural change and the impact on examination results, organisational change and student learning may be difficult, and these outcomes may not be directly attributable to a single intervention (McLean et al., 2008). The key, perhaps, is to involve examiners, students and other stakeholders in the evaluation process, using various validated tools, and to ensure that the effort is ongoing, with sustained support, guidance and feedback (McLean et al., 2008).

    To explain the overall effectiveness of the programme (with regard to reaction, learning and behavioural change), the MOAC model (Vollenbroek, 2019), expanded from the original MOA model, was used. The MOAC model describes not only the factors that affect an individual’s performance in a group, but also group behaviour.

    Motivation is an important driving force of action, and members are more motivated when a subject becomes personally relevant, leading to action. The motivation to be informed and to improve led to active participation in the knowledge-sharing sessions, processing of the new information presented in the programme, and adoption of the changes learnt during it. The presence of a group of motivated individuals with the same goals supported each member’s learning.

    Opportunity, especially time, space and resources, must be allocated to reflect the value and relevance of any activity. Work autonomy allows professionals to engage in what they consider relevant or important and to be accountable for their work outcomes. Facilitating conditions, for example technology, facilitators and a platform on which to practise what is being learnt, are also important aspects of opportunity. Allowing protected time with the appropriate facilitating conditions indicates institutional support and enabled participants to fully optimise the learning experience.

    Ability positively affects knowledge exchange and willingness to participate. Having prior knowledge improves a participant’s ability to absorb and utilise new knowledge. The programme participants, being experienced clinical teachers and examiners, are fully aware of their capabilities and are able to process and share important information. Experienced faculty development facilitators who are also clinical teachers and examiners were able to identify areas to focus on and provide relevant examples for application.

    Communality is the dimension added to the original MOA model. Participants in this programme are members of a complex system who already know each other. Having a shared identity, language and challenges has allowed them to develop trust while pursuing the common goal of improving the system they were working in. This facilitated knowledge sharing and behavioural change.

    The limitation of our programme is the small sample size. However, we believe that it is important to review the effectiveness of a programme, especially with regard to behavioural change, and to share how other programmes can benefit from the frameworks we have described. The findings from this programme will also inform how we conduct future faculty development programmes. With pandemic restrictions lifted, we hope to conduct this programme face-to-face to facilitate engagement and communication.

    V. CONCLUSION

    For this faculty development programme to succeed, targets for success must first be defined and factors that contribute to its success need to be identified. This will ensure active engagement from the participants and promote the sustainability of the programme.

    Notes on Contributors

    Noorjahan Haneem Md Hashim designed the programme, assisted in content creation, curation and matching learning activities, moderated the programme, and conceptualised and wrote this manuscript.

    Shairil Rahayu Ruslan participated as a committee of the programme, assisted as a simulated candidate during the training sessions, as well as contributed to the conceptualisation, writing, and formatting of this manuscript. She also compiled the bibliography and cross-checked the references for this manuscript.

    Ina Ismiarti Shariffuddin created the opportunity for the programme (Specialty board and interdisciplinary buy-in, department funding), prioritised the programme learning outcomes, chaired the programme, and contributed to the writing and review of this manuscript.

    Woon Lai Lim participated as a committee member of the programme and contributed to the writing of this manuscript.

    Christina Phoay Lay Tan designed and conducted the faculty development training programme, and reviewed and contributed to the writing of this manuscript. She also cross-checked the references for this manuscript.

    Vinod Pallath designed and conducted the faculty development training programme, and reviewed and contributed to the writing of this manuscript.

    All authors verified and approved the final version of the manuscript.

    Ethical Approval

    Ethical approval was obtained for the follow-up questionnaire distributed to the participants; it was approved on 6 May 2022 (Reference number: UM.TNC2/UMREC_1879). The programme evaluation and pre- and post-tests are accepted as part of the programme evaluation procedures.

    Data Availability

    De-identified individual participant data are available in the Figshare repository immediately after publication, without an end date, as below:

    https://doi.org/10.6084/m9.figshare.20189309.v1

    https://doi.org/10.6084/m9.figshare.20186582.v1

    https://doi.org/10.6084/m9.figshare.20186591.v2

    The authors confirm that all data underlying the findings are freely available to view in the Figshare data repository. However, although the programme evaluation form, pre- and post-test questions and follow-up questionnaire are easily accessible from the data repository, their reuse and resharing should, as a courtesy, be subject to a reasonable request to the corresponding author.

    Acknowledgement

    The authors would like to acknowledge Dr Selvan Segaran and Dr Siti Nur Jawahir Rosli from the Medical Education, Research and Development Unit (MERDU) for their logistics and technical support in all stages of this programme; Professor Dr Jamuna Vadivelu, Head, MERDU for her insight and support; Dr Nur Azreen Hussain and Dr Wan Aizat Wan Zakaria from the Department of Anaesthesiology, UMMC and UM, for their acting skills in the training videos; and the Visibility and Communication Unit, Faculty of Medicine, Universiti Malaya for their video editing services.

    Funding

    There is no funding source for this manuscript.

    Declaration of Interest

    There are no conflicts of interest among the authors of this manuscript.

    References

    Arias, J. J., Swinton, J., & Anderson, K. (2018). Online vs. face-to-face: A comparison of student outcomes with random assignment. E-Journal of Business Education & Scholarship of Teaching, 12(2), 1–23. https://eric.ed.gov/?id=EJ1193426

    Blew, P., Muir, J. G., & Naik, V. N. (2010). The evolving Royal College examination in anesthesiology. Canadian Journal of Anesthesia/Journal canadien d’anesthésie, 57(9), 804-810. https://doi.org/10.1007/s12630-010-9341-1

    Blumberg, M., & Pringle, C. D. (1982). The missing opportunity in organizational research: Some implications for a theory of work performance. The Academy of Management Review, 7(4), 560–569. https://doi.org/10.2307/257222

    Cook, D. A., & Steinert, Y. (2013). Online learning for faculty development: A review of the literature. Medical Teacher, 35(11), 930–937. https://doi.org/10.3109/0142159X.2013.827328

    Daniel, C. M. (2014). Comparing online and face-to-face professional development [Doctoral dissertation, Nova Southeastern University]. https://doi.org/10.13140/2.1.3157.5042

    Feldman, M., Lazzara, E. H., Vanderbilt, A. A., & DiazGranados, D. (2012). Rater training to support high-stakes simulation-based assessments. Journal of Continuing Education in the Health Professions, 32(4), 279–286. https://doi.org/10.1002/chp.21156

    Holmboe, E. S., Ward, D. S., Reznick, R. K., Katsufrakis, P. J., Leslie, K. M., Patel, V. L., Ray, D. D., & Nelson, E. A. (2011). Faculty development in assessment: The missing link in competency-based medical education. Academic Medicine, 86(4), 460–467. https://doi.org/10.1097/ACM.0b013e31820cb2a7

    Iqbal, I., Naqvi, S., Abeysundara, L., & Narula, A. (2010). The value of oral assessments: A review. The Bulletin of the Royal College of Surgeons of England, 92(7), 1–6. https://doi.org/10.1308/147363510X511030

    Juul, D., Yudkowsky, R., & Tekian, A. (2019). Oral Examinations. In R. Yudkowsky, Y. S. Park, & S. M. Downing (Eds.), Assessment in Health Professions Education. Routledge. https://doi.org/10.4324/9781315166902-8

    Kemp, N., & Grieve, R. (2014). Face-to-face or face-to-screen? Undergraduates’ opinions and test performance in classroom vs. online learning. Frontiers in Psychology, 5. https://doi.org/10.3389/fpsyg.2014.01278

    Khera, N., Davies, H., Davies, H., Lissauer, T., Skuse, D., Wakeford, R., & Stroobant, J. (2005). How should paediatric examiners be trained? Archives of Disease in Childhood, 90(1), 43–47. https://doi.org/10.1136/adc.2004.055103

    Lineberry, M. (2019). Validity and quality. Assessment in Health Professions Education, 17-32. https://doi.org/10.4324/9781315166902-2

    Marin-Garcia, J. A., & Martinez Tomas, J. (2016). Deconstructing AMO framework: A systematic review. Intangible Capital, 12(4), 1040. https://doi.org/10.3926/ic.838

    McLean, M., Cilliers, F., & Van Wyk, J. M. (2008). Faculty development: Yesterday, today and tomorrow. Medical Teacher, 30(6), 555–584. https://doi.org/10.1080/01421590802109834

    Md Hashim, N. H. (2021). Pre- and Post-test [Dataset]. Figshare. https://doi.org/10.6084/m9.figshare.20186582.v1

    Md Hashim, N. H. (2022). Followup Questionnaire [Dataset]. Figshare. https://doi.org/10.6084/m9.figshare.20186591.v2

    Newman, L. R., Brodsky, D., Jones, R. N., Schwartzstein, R. M., Atkins, K. M., & Roberts, D. H. (2016). Frame-of-reference training: Establishing reliable assessment of teaching effectiveness. Journal of Continuing Education in the Health Professions, 36(3), 206–210. https://doi.org/10.1097/CEH.0000000000000086

    Newstrom, J. W. (1995). Evaluating training programs: The four levels, by Donald L. Kirkpatrick. (1994). San Francisco: Berrett-Koehler. 229 pp., $32.95 cloth. Human Resource Development Quarterly, 6(3), 317-320. https://doi.org/10.1002/hrdq.3920060310

    Singh, J., Evans, E., Reed, A., Karch, L., Qualey, K., Singh, L., & Wiersma, H. (2022). Online, hybrid, and face-to-face learning through the eyes of faculty, students, administrators, and instructional designers: Lessons learned and directions for the post-vaccine and post-pandemic/COVID-19 World. Journal of Educational Technology Systems, 50(3), 301–326. https://doi.org/10.1177/00472395211063754

    Steinert, Y., Mann, K., Anderson, B., Barnett, B. M., Centeno, A., Naismith, L., Prideaux, D., Spencer, J., Tullo, E., Viggiano, T., Ward, H., & Dolmans, D. (2016). A systematic review of faculty development initiatives designed to enhance teaching effectiveness: A 10-year update: BEME Guide No. 40. Medical Teacher, 38(8), 769-786. https://doi.org/10.1080/0142159x.2016.1181851

    Steinert, Y., Mann, K., Centeno, A., Dolmans, D., Spencer, J., Gelula, M., & Prideaux, D. (2006). A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME Guide No. 8. Medical Teacher, 28(6), 497–526. https://doi.org/10.1080/01421590600902976

    Tan, C. P. L., & Pallath, V. (2022). Workshop Evaluation Form [Dataset]. Figshare. https://doi.org/10.6084/m9.figshare.20189309.v1

    Tekian, A., & Norcini, J. J. (2016). Faculty development in assessment: What the faculty need to know and do. In M. Mentkowski, P. F. Wimmers (Eds.), Assessing Competence in Professional Performance across Disciplines and Professions (1st ed., pp. 355–374). Springer Cham. https://doi.org/10.1007/978-3-319-30064-1

    Thomas, P. A., Kern, D. E., Hughes, M. T., & Chen, B. Y. (2015). Curriculum development for medical education: A six-step approach. Johns Hopkins University Press. https://jhu.pure.elsevier.com/en/publications/curriculum-development-for-medical-education-a-six-step-approach

    Vollenbroek, W. B. (2019). Communities of Practice: Beyond the Hype – Analysing the Developments in Communities of Practice at Work [Doctoral dissertation, University of Twente]. https://doi.org/10.3990/1.9789036548205

    Williams, R. G., Klamen, D. A., & McGaghie, W. C. (2003). SPECIAL ARTICLE: Cognitive, social and environmental sources of bias in clinical performance ratings. Teaching and Learning in Medicine, 15(4), 270–292. https://doi.org/10.1207/S15328015TLM1504_11

    *Shairil Rahayu Ruslan
    50604, Kuala Lumpur,
    Malaysia
    03-79492052 / 012-3291074
    Email: shairilrahayu@gmail.com, shairil@ummc.edu.my

    Submitted: 23 August 2022
    Accepted: 3 January 2023
    Published online: 4 July, TAPS 2023, 8(3), 15-25
    https://doi.org/10.29060/TAPS.2023-8-3/OA2871

    Iroro Enameguolo Yarhere1, Tudor Chinnah2 & Uche Chineze3

    1Department of Paediatrics, College of Health Sciences, University of Port Harcourt, Nigeria; 2Department of Anatomy, University of Exeter, United Kingdom; 3Department of Education and Curriculum studies, University of Port Harcourt, Nigeria

    Abstract

    Introduction: This study aimed to compare the paediatric endocrinology curriculum across Southern Nigeria medical schools, using reports from learners. It also examined learners’ perceptions of different learning patterns and of their competency in some expected core skills.

    Methods: This mixed-methods (quantitative and qualitative) study was conducted with 7 medical schools in Southern Nigeria. A multi-staged randomised selection of schools and respondents was adopted for a focus group discussion (FGD), and the information derived was used to develop a semi-structured questionnaire, which 314 doctors submitted. The FGD covered rotation patterns, completion rates of topics and perceptions of some skills. These themes were included in the forms for the general survey, and a Likert scale was used to assess competency in skills. Data generated were analysed using the Statistical Package for the Social Sciences (SPSS 24), and p values < 0.05 were considered significant.

    Results: Lectures and topics had varying completion rates (42.6%–98%), the highest being diabetes mellitus. The endocrinology rotation was completed by 58.6% of respondents, and 58–78% perceived themselves competent in growth measurement and charting. Significantly more learners who had staggered postings (46.6%) correctly matched Tanner staging than learners who had block postings (33.3%), p = 0.018.

    Conclusion: Respondents reported high variability in the implementation of the recommended paediatric endocrinology curriculum guidelines between schools in Southern Nigeria. Variability lay in course completion, learners’ exposure to skills, and how much hands-on practice was allowed in acquiring various skills. This variability will hamper the core objectives of human capital development should the trend continue.

    Keywords: Paediatric Endocrinology Curriculum, Perception, Compliance, Completion Rate, Learners

    Practice Highlights

    • The Medical and Dental Council of Nigeria has a recommended benchmark for minimum academic standards in all medical schools, to which total compliance is expected.
    • Evaluation of paediatric endocrinology curriculum content and training methods was conducted using reports from learners.
    • Variability in the content and training methods of the intended competency was reported across medical schools.
    • The compliance rate with the recommended curriculum was less than 50% for some content, and some learners reported little skill performance training.
    • The lack of uniformity can prevent achievement of the overarching objective of the curriculum in Nigeria, with wide variations in competence among graduating doctors.

    I. INTRODUCTION

    The primary aim of the Medical and Dental Council of Nigeria (MDCN) undergraduate curriculum is “to train doctors and dentists who can work effectively in a health team to provide comprehensive health care to individuals in any community in the nation, and keep up to date on issues of global health” (Federal Ministry of Health of Nigeria, 2012). In Nigeria today, there are 49 federal, 59 state and 111 private universities; 44 of these have fully or partially accredited medical schools, and while these schools have a prescribed curriculum, some do not follow it explicitly (Federal Ministry of Health of Nigeria, 2012). The curriculum advocates for universities to develop syllabi that meet the benchmark for minimum academic standards (BMAS) across schools; however, there is no uniform template for assessing graduates to establish how their competence converges, as is the practice in the United States of America (USA), Canada and the United Kingdom (UK) (Santen et al., 2019; Shah et al., 2020; Sosna et al., 2021). Diabetes mellitus, thyroid disorders, puberty, rickets and growth abnormalities are topics included under endocrinology in the MDCN paediatric curriculum, in which learners are expected to acquire the cognitive and psychomotor competence to diagnose and treat, or refer appropriately, children presenting with these diseases.

    A. Problem

    Most deaths from disease in Nigeria and other resource-limited countries follow from general public ignorance of disease, late presentation to the health care system, poverty and lack of funds to access healthcare facilities, and healthcare providers’ reduced knowledge of some disease patterns (Yarhere & Nte, 2018). The knowledge gaps could be addressed by developing a competency-based curriculum so that all graduating doctors have as near-similar competence as possible, but achieving this may not be feasible. Training activities are not uniform throughout medical schools in Nigeria and elsewhere; they depend on each school’s vision, mission and objectives, and on the structures and processes put in place. Barriers to implementation across schools include, but are not limited to, each school’s determination of what is relevant in the curriculum, access to the materials needed to teach the curriculum content, and getting trainers to use these curricula (Polikoff, 2018). The lack of curriculum uniformity across universities may not in itself be a contentious issue, but when graduating doctors have varying degrees of competency in skills and cognition, a template for imparting uniform, up-to-date knowledge, and for evaluating it, is needed to find ways of reducing the variability (McManus, 2003; McManus et al., 2020; Rimmer, 2014).

    Curriculum uniformity across schools is one way of improving competency and thus healthcare standards, and there is a need to explore this uniformity, or diversity, within undergraduate paediatric training. In some countries, there is a uniform board certification examination before doctors can practise, which is also required of doctors immigrating into these countries (Hohmann & Tetsworth, 2018; Puri et al., 2021; Tiffin et al., 2017; van Zanten et al., 2022), but Nigeria has no such uniform exit examination. A uniform exit board examination makes schools align course contents and therefore reduces the variability between medical schools in undergraduate training.

    B. Curriculum Evaluation for Change or Improvement

    Curriculum evaluation is a means by which educators understand whether the curriculum used to train learners is working as intended, and whether there is a need to change the entire programme or redesign aspects of it (Burton & McDonald, 2001; Ornstein & Hunkins, 2009). It is also a way of identifying deficiencies in training syllabi across universities (Rufai et al., 2016), or of checking whether compliance with a curriculum is being achieved (Grant, 2014; Olson et al., 2000). Kirkpatrick’s evaluation method is widely accepted in medical education, using four levels: learners’ reaction or satisfaction, knowledge, behavioural change, and results or impact; in Nigeria, this has not been done for paediatric endocrinology (Alsalamah, 2021; Bates, 2004).

    Universities vary in organisation, class sizes, duration of specific postings, posting types, and whether courses are elective or core. In Nigerian medical schools, paediatric postings are undertaken in the 5th or 6th year of a 6-year programme. Some stagger the posting across the last 2 years, while others run it in the 5th or 6th year exclusively. The extent of these variations, and how they affect the training process and its products, has not been evaluated in Nigeria; this can be done using learners’ or graduates’ perceptions.

    The aim of this research was to evaluate learners’ reports and perceptions of some aspects of the paediatric endocrinology curriculum content and learning methods across Southern Nigeria medical schools. Endocrinology was taken from the paediatric course to reduce the volume of information to be analysed.

    II. METHODS

    This was a cross-sectional study with qualitative and quantitative data analyses, evaluating learners’ reports and their perceptions of the curriculum used by various medical schools in Southern Nigeria to deliver the MDCN paediatric endocrinology curriculum. The survey was conducted across 10 medical schools in Southern Nigeria with learners who had either completed their final year or were doing their internship. Two complementary steps were used to retrieve the information needed: a focus group discussion with sampled learners, and a questionnaire survey sent to randomly selected respondents. The focus group discussion was used to explore in depth what respondents perceived was being done well and what needed to be changed in their respective schools’ syllabi. The questionnaire survey was then used to collect reports and perceptions from a wider set of learners who had completed their paediatric posting within the past 6–12 months. Some were already doing their internship and others were in their final year, preparing for their final examinations.

    The sample size for respondents was calculated using the formula:

    N = (Z score)² × SD × (1 − SD) / (CI)²

    where Z score = 1.96, SD (standard deviation of the mean) was estimated at 0.5, and CI (confidence interval) = 0.05. This gives 384 respondents; adding an attrition allowance of 10% (10% of 384 = 38) yields 384 + 38 = 422 respondents.
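    As a quick check, the sample-size arithmetic above can be reproduced in a few lines (an illustrative sketch; the variable names are ours, and `int()` truncates the fractional part, matching the 384 reported in the study):

    ```python
    # Sample-size formula: N = Z^2 * SD * (1 - SD) / CI^2
    z = 1.96    # Z score for 95% confidence
    sd = 0.5    # estimated at 0.5, as in the study
    ci = 0.05   # confidence interval (margin of error)

    n = int((z ** 2) * sd * (1 - sd) / (ci ** 2))  # 384.16 truncated to 384
    attrition = int(0.10 * n)                      # 10% of 384 = 38
    total = n + attrition                          # 384 + 38 = 422 respondents
    print(n, attrition, total)
    ```

    Using 0.5 for the estimated proportion is the conservative choice: it maximises SD × (1 − SD) and therefore the required sample size.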

    A. Sampling Technique

    A multi-staged sampling technique was used to select the schools and respondents that participated in the study. There are 29 Southern universities with medical/health colleges, and 16 of these had more than 50 learners in their final year or recently graduated. Ten schools were randomly selected using the Excel formula [=RAND()], and proportionate stratified sampling was done using the students’ matriculation numbers in each school to arrive at 422 respondents. The total number of learners who had studied paediatrics in the various institutions was 800: Ibadan 150, Port Harcourt 128, Lagos 128, Niger Delta University 69, UNN Enugu 128, University of Benin 128, others 69. From the total in each school, the selected learners and interns were sent the questionnaire by email. Selection for the FGD was done by simple random sampling from each school, and these participants were sent separate emails with details of the meeting.
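    A proportionate stratified allocation of this kind can be sketched as follows (a hypothetical illustration using the largest-remainder method; the study itself selected via Excel’s =RAND() and matriculation numbers, and its reported per-school cohorts differ slightly from a strict proportional split):

    ```python
    def proportional_allocation(strata, total):
        """Allocate `total` sample slots across strata in proportion to size,
        using the largest-remainder method so the allocations sum to `total`."""
        pool = sum(strata.values())
        quotas = {k: v * total / pool for k, v in strata.items()}
        alloc = {k: int(q) for k, q in quotas.items()}        # integer parts first
        leftover = total - sum(alloc.values())
        # hand the remaining slots to the strata with the largest fractional parts
        for k in sorted(quotas, key=lambda s: quotas[s] - int(quotas[s]),
                        reverse=True)[:leftover]:
            alloc[k] += 1
        return alloc

    # Cohort sizes per school as reported in the study (800 learners in total)
    schools = {"Ibadan": 150, "Port Harcourt": 128, "Lagos": 128,
               "Niger Delta": 69, "UNN Enugu": 128, "Benin": 128, "Others": 69}
    allocation = proportional_allocation(schools, 422)
    print(allocation)
    ```

    The largest-remainder step guarantees the per-school quotas add up to exactly 422 rather than drifting by a few respondents through rounding.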

    B. Focus Group Discussions Process

    The focus group discussion was conducted with the respondents on the Zoom video platform and lasted 2 hours 30 minutes. Ten learner representatives from the selected schools were contacted for the FGD; 7 (70%) agreed to participate after several email reminders. The interview was semi-structured, with a flexible topic guide covering the respondents’ views and opinions on the paediatric endocrinology curriculum: the type of posting in each school (block or staggered), the topics received and/or completed, and their perceived competence in a key psychomotor skill. The discussions were recorded on the Zoom platform and transcribed verbatim. The data were analysed using the thematic framework content analysis method. The themes generated were categorised into: (1) lecture contents and completion rate, (2) types of paediatric rotation and posting, and (3) skill competence acquisition and clinical postings. Respondents’ perceptions of these themes were also sought and discussed. The transcription of the discussions was reviewed by IY and TC to help categorise the data and pull out the important quotes used.

    C. Questionnaire Survey

    Following thematic analysis of the FGD, the themes generated were converted into questions in a survey for a larger sample population. The themes were the type of paediatric posting, rotations through units in the departments, paediatric endocrinology topics, training methods and competency acquired. Demographic characteristics of respondents, such as level/year of study, age, gender and university of study, were collected. Respondents were also asked to select, from a poll, the topics included in their paediatric endocrinology syllabus (results in Figure 1) and to state the methods used to learn growth and growth disorders in their schools. Cognitive (recall) skill was assessed using animated pictures of Tanner staging with matching-type multiple choice questions, and the responses were cross-matched with the type of posting learners were exposed to, i.e. block or staggered. Tanner staging was chosen because it cuts across general paediatrics and endocrinology as part of growth and puberty.

    Data retrieved were analysed statistically using the chi-square test and Pearson correlation for categorical variables. Learners’ perceived competence in height measurement and charting on a growth chart was captured on a 5-point Likert scale (where 1 = not competent; 2 = low competence; 3 = neutral; 4 = competent; and 5 = proficient). The association between level of competence and whether learners had rotated through paediatric endocrinology was checked using Pearson’s correlation test. For all statistics, a p value < 0.05 was considered significant.
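    The 2×2 chi-square test used for the Tanner-staging comparison (staggered vs block posting) can be reproduced with the Python standard library alone (an illustrative sketch; the study used SPSS 24, and the cell counts here are reconstructed from the reported percentages — 82/176 correct for staggered, 46/138 for block):

    ```python
    import math

    def chi_square_2x2(table):
        """Pearson chi-square (no continuity correction) for a 2x2 table,
        with the p-value from the chi-square(df=1) survival function."""
        (a, b), (c, d) = table
        n = a + b + c + d
        row = [a + b, c + d]
        col = [a + c, b + d]
        chi2 = 0.0
        for i, obs_row in enumerate(table):
            for j, obs in enumerate(obs_row):
                exp = row[i] * col[j] / n          # expected count under independence
                chi2 += (obs - exp) ** 2 / exp
        # For df = 1: P(X > chi2) = erfc(sqrt(chi2 / 2))
        p = math.erfc(math.sqrt(chi2 / 2))
        return chi2, p

    # Rows: staggered, block; columns: correct, incorrect Tanner matching
    chi2, p = chi_square_2x2([(82, 94), (46, 92)])
    print(round(chi2, 2), round(p, 3))
    ```

    Run on these reconstructed counts, the result agrees with the p = 0.018 reported for this comparison in the abstract.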

    D. Ethics

    The research commenced after the Research Ethics Committee of the University of Port Harcourt granted approval (UPH/CEREMAD/REC/MM80/056). Verbal informed consent was obtained from the focus group participants, who also consented to video recording of the process. Informed consent was also obtained from all participants who completed and submitted the online survey. The focus group discussants received N3,000 ($10) as monetary compensation, for internet data only.

    III. RESULTS

    Of the calculated sample size of 422, 314 learners responded to the questionnaire survey, giving a response rate of 74.4%. There were more final-year respondents than early career doctors, and more of the respondents were in the 20–24 year age bracket, with a mean age of 25.02 ± 2.71 years. The male:female ratio was 1:1.01. The data that support the findings of this study are available in Figshare at https://doi.org/10.6084/m9.figshare.20730937.v1 (Yarhere et al., 2022).

    RESPONDENTS                                  Frequency      Percentage

    Year of study
      Early career doctor (graduate/intern)      130            41.4           p = 0.002
      Final year                                 184            58.6

    University attended (calculated cohort)
      University of Port Harcourt (63)           62             19.7
      Niger Delta University (54)                54             17.2
      University of Ibadan (76)                  50             15.9
      University of Benin (65)                   44             14.0
      University of Lagos (65)                   40             12.7
      University of Nigeria (65)                 42             13.4
      Other western Universities (34)            22             7.0

    Age
      20-24                                      140            44.6
      25-29                                      162            51.6
      >=30                                       12             3.8
      Mean                                       25.02 ± 2.71

    Gender
      Male                                       152            48.4           p = 0.612
      Female                                     162            51.6

    Table 1. Demographic characteristics of all respondents and the universities attended

    A. Evaluating Contents of Lecture Topics and Completion of Lectures

    The syllabus lecturers use to teach courses is supposed to be descriptive, with all learning outcomes stated in the handbook or log books given to learners before the start of the academic year. The prescribed paediatric endocrinology topics listed below were either not completely taught or not attended by learners. In the discussion, some agreed that they did not have the full complement of lectures suggested by the BMAS. One respondent said she and her group mates did not receive diabetes mellitus lectures in their final paediatric posting. This was corroborated in the questionnaire survey: 2% of respondents reported not having diabetes mellitus lectures, and more than 40% did not learn genetics in their paediatric endocrinology training, as shown in Figure 1.

    Diabetes mellitus lectures reached almost 100% of learners, while genetics reached the fewest. In some schools, genetics was placed under endocrine disorders, while in others it was left to the pathology and basic medicine classes.

    “I was taught, I personally received 4 lectures in Paed Endo including ambiguous genitalia, “CAH” congenital adrenal hyperplasia, hypothyroidism, and puberty.”

    Participant 3

    “So, you did not get to do calcium and rickets?”

    Facilitator

    “No, I was not taught calcium and rickets.”

    Participant 3

    “What about growth and short stature?”

    Facilitator

    “Yes, I received introductory lectures in my young (sic), junior posting, yes I did in my 400 level, but not in my senior posting and it was not part of endocrinology but general paediatrics.”

    Participant 3 

    “I did not take lectures in diabetes mellitus because it was rescheduled several times until we finally had to sit for our exams. In the end, many of us just took notes from our seniors and other students who had theirs when it was scheduled.”

    Participant 2 

    “Why were the classes rescheduled? I mean what did the lecturer tell you?”

    Facilitator

    “The lecturer kept traveling or was indisposed most of our time in the senior posting.”

    Participant 2

    Participant 4 shared:

    Dr. xxxxxx taught us diabetes mellitus and the topic was quite extensive. We learnt the different types, pathophysiology, aetiology, DKA, precipitating factors, risk factors, management. Our lecturers even made us do presentations on DKA, we monitored patients that were being managed for DKA, checking their urine samples for ketones, glucose and their blood pressure.

    Figure 1: Percentage of learners in various schools who received/attended specific endocrinology lectures in their universities

    B. Types of Paediatric Posting and Rotation and Perception of Learners Relating to Task Completion

    There were essentially 2 modes of paediatric posting in the institutions sampled: a 4-month block posting, in which respondents have a month of didactic lectures and 3 months of clinical rotations through various units in the paediatric department, and 4 months of staggered rotations, with junior and senior postings in the clinical years. While some learners rotated through all the units (core and elective) in the departments, others went through the core units (emergency and neonatal) plus 2 other units randomly selected for them by the departments.

    C. Learners’ Responses to Rotation through Paediatrics and Posting Types

    Participant 2 shared:

    The way it works in University of xxxx, we rotate through 2 elective postings with core (CHEW and SCBU) postings in the junior and senior postings. These elective postings are randomly selected by the department (meaning heads or coordinators). I did neurology and gastroenterology in my junior posting and haemato-oncology and I really can’t remember the other one in my senior posting.

    “Will I be wrong to say you did not see a patient with diabetic keto acidosis?”

    Facilitator

    “I saw a child with diabetic keto acidosis in the ward but it wasn’t my unit managing the patient. I only went to the ward to do some other thing.”

    Participant 2 

    “If you were given the opportunity to design a curriculum or programme for your university, will you prefer what is being practiced now, or will you rather have every student go through every unit and get titbits from each unit?”

    Facilitator

    Participant 2 responded:

    Yes, I will prefer that situation where you get to be exposed to every unit in the department but …. emmm, that creates a problem because you may be in a unit for a week, and no patient comes in but the next group rotating to the unit gets to see many patients. I would want to suggest that perhaps, instead of focusing on more of clinical posting, that a unified tutorial class which will expose everyone to the core diseases in the various disciplines.

    Table 2 corroborates the information given by the focus group discussants. Testing the competency outcome under either method can give some estimate of which is better; however, several confounding factors preclude a fair comparison (see Table 3).

    Variable                                                          Frequency   Percent (%)   Statistic

    Paediatric posting in your university
      Staggered posting into junior and senior paediatrics            176         56.1          χ² = 4.59,
      Block posting of 4 months total                                 138         43.9          p = 0.032

    Paediatric rotations through various units in universities
      I rotated through all units in the department                   162         51.6          χ² = 0.318,
      I rotated through CHEW, neonatal unit, and 2/3 other units      152         48.4          p = 0.573

    Rotated through paediatric endocrinology unit in your university
      Yes                                                             184         58.6          χ² = 7.48,
      No                                                              130         41.4          p = 0.006

    Table 2. Paediatric posting and unit rotations in the departments (n = 314)

    Though there was a significant difference in the mode of paediatric posting, whether staggered or block (χ² = 4.59, p = 0.032), the difference in the proportion of respondents who had core plus selected elective postings as against all-unit postings was not significant (χ² = 0.318, p = 0.573).
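The posting-type χ² in Table 2 is consistent with a one-sample goodness-of-fit test of the 176/138 split against equal proportions. A minimal pure-Python check illustrates this (a sketch, not the authors' code; for 1 degree of freedom the chi-square p-value reduces to erfc(√(χ²/2)), and the small difference from the reported 4.59 is rounding):

```python
from math import erfc, sqrt

def chi_square_equal_split(a, b):
    """One-sample chi-square test of counts a, b against a 50/50 split (df = 1)."""
    expected = (a + b) / 2
    chi2 = (a - expected) ** 2 / expected + (b - expected) ** 2 / expected
    # For 1 degree of freedom, the chi-square survival function
    # reduces to the complementary error function.
    p = erfc(sqrt(chi2 / 2))
    return chi2, p

chi2, p = chi_square_equal_split(176, 138)  # staggered vs block posting
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")    # chi2 = 4.60, p = 0.032
```

The same check on the 162/152 rotation split reproduces χ² ≈ 0.318, p ≈ 0.573.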

     

                                          Block posting of 4 months   Staggered junior and senior paediatrics
    Correct   Count                       46                          82
              % within paediatric posting 33.3%                       46.6%
              % of Total                  14.6%                       26.1%
    Wrong     Count                       92                          94
              % within paediatric posting 66.7%                       53.4%
              % of Total                  29.3%                       29.9%
    Total     Count                       138                         176
              % within correct response   43.9%                       56.1%
              % of Total                  43.9%                       56.1%

    Table 3: Comparing correct response to animated picture of Tanner stage (pubic hair) in females, and the type of paediatric rotation learners were exposed to

    In the 2×2 table above, where recall was tested in the learners according to their paediatric posting type, a higher percentage of those who had staggered postings correctly matched the Tanner stage, and the difference was significant (χ² = 5.630, p = 0.018). However, the total number of respondents with the correct response was low.
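The reported χ² = 5.630 is the uncorrected Pearson statistic on the 2×2 counts in Table 3. A pure-Python sketch (an illustration, not the authors' analysis code) reproduces it:

```python
from math import erfc, sqrt

def chi_square_2x2(table):
    """Pearson chi-square for a 2x2 contingency table (no Yates correction, df = 1)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row = (a + b, c + d)   # row totals: correct, wrong
    col = (a + c, b + d)   # column totals: block, staggered
    chi2 = sum(
        (obs - exp) ** 2 / exp
        for obs, exp in [
            (a, row[0] * col[0] / n), (b, row[0] * col[1] / n),
            (c, row[1] * col[0] / n), (d, row[1] * col[1] / n),
        ]
    )
    return chi2, erfc(sqrt(chi2 / 2))  # p-value for df = 1

# Counts from Table 3: rows = correct/wrong, columns = block/staggered posting
chi2, p = chi_square_2x2([(46, 82), (92, 94)])
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # chi2 = 5.630, p = 0.018
```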

    D. Perception of Core Competency Skill in Growth Measurement and Charting by Learners

    One of the most important courses in paediatrics is growth and development, and training future medical doctors to acquire skills and competence in growth measurement is a key component of the BMAS. While growth measurement may seem easy to the uninformed, the task is daunting, especially in children with complex growth abnormalities and malformations, and for more complex skills like measuring arm span. Which of the more complex skills the learner should be expected to be competent in will need to be debated in an expert forum of trainers.

    Facilitator: “So, did you do anthropometric measures?”

    Participant 1 shared:

    Yes, anytime we clerk a patient, we must check the weight and height and interpret using age-appropriate charts, but we did not plot them in the charts. We carry the age-appropriate chart and interpreted our patients, as this is a requirement.

    Using the chart may not be emphasised by all paediatric lecturers, so learners quickly learn which lecturers will demand this skill from them during the clerkship period or unit rotations.

    Participant 3: “We did not quite get the concept of mid-parental height, height percentile, it was just mentioned in passing. I never saw a severely short child that needed growth hormone. I was only told by a classmate of mine.”

    The charting and interpretation of weight and height measurements of children was not done in all schools, as shown in Table 4 below, which indicates that only 64.6% of respondents were taught interpretation of measured and charted growth parameters. The level of competence in these tasks also varied, as seen in Appendix 1. Two hundred and thirty-eight (75.8%) learners perceived they had competency/proficiency in height measurement using a stadiometer, and 44.6% of the learners with these perceptions actually had a paediatric endocrinology clinical rotation (Appendix 1).

    Variable                                                      Frequency (n = 314)   Percent (%)

    How was growth and growth disorders taught in your school?
    (multiple responses applicable)
      Didactic lectures                                           272                   86.6
      Measurement of children using standardised stadiometer      230                   73.2
      Charting of growth measurements in CDC/WHO growth charts    203                   64.6
      Measurement of children using improvised height rules       157                   50.0
      Interpretation of measured and charted growth parameters    203                   64.6
      Ward clerkship and presentation                             230                   73.2
      Measurements of children using bathroom spring balance      140                   44.6
      Use of bone age X-radiographs                               78                    24.8
      Use of orchidometer                                         90                    28.6

    Table 4: Methods used to teach growth and growth disorders in various institutions

    Bone age radiographs and orchidometers are used to assess skeletal maturation and puberty; these are advanced for undergraduate learners and certainly not compulsory, yet some respondents were taught with these tools, showing the variability in content and skills delivery between schools. From Table 4 above, framers of the syllabus for the endocrinology aspect of the paediatrics curriculum are unlikely to include use of the orchidometer and bone age assessment in the undergraduate paediatric endocrinology rotation, as these skills are complex and not necessary at this level of training.

    IV. DISCUSSION

    This study has highlighted differences in course content and training methods across medical schools in Southern Nigeria. While many schools have used the BMAS prescribed by the MDCN, the syllabi used differ, and the intended learning outcomes are diverse based on the respondents’ reports. Some learners reported not having diabetes lectures in their school through no fault of their own, as lecturers rescheduled the lectures and never delivered them. While learners have the responsibility to attend lectures, trainers are also obligated to be present at their scheduled lectures, or to transfer these to their teacher-assistants or use technologies (Grant, 2014; Ruiz et al., 2006). Some learners had little participation in the emergency room; others participated fully in DKA management, learning empathy, specialised skills and communication. The intended competencies can be acquired through shadowing and participation, bedside teaching, and tutorials to improve cognitive and psychomotor skills, and these opportunities must be created for learners in experiential settings (Ryan et al., 2020; Shah et al., 2020).

    More learners had staggered postings, going through junior and senior paediatric postings in what may be considered integrated learning, departing from the traditional method (Patel et al., 2005; Watmough et al., 2006, 2009). In the staggered type of rotation, we noticed that not all learners went through a paediatric endocrinology unit posting, and as one of the discussants said, they would rather everyone went through each unit, getting bits of everything and having the opportunity to study specific and prevalent diseases in paediatric units, than be left with the possibility of not learning important disorders. As it is not always possible to encounter specific diseases like DKA during the entire posting in schools that use staggered postings, the FGD suggested that the likelihood of exposure was higher in schools with block postings, but this did not translate to better retention of skills or cognition, as depicted in the Tanner staging matching question.

    Having learners train in all special postings may not be the best approach in undergraduate medicine, because the specialised skills may not be utilised in general practice, or even in general paediatrics should the learners pursue paediatric specialisation (Bindal et al., 2011). While some trainers may argue that all information and skills should be taught to the learners, the time to acquire and achieve mastery may be too short (Jensen et al., 2018; Offiah et al., 2019). This study can be referenced in curriculum design and implementation so that framers understand which societal needs should be met at any time. The concept of cognitive overload has in fact reduced the duration of core specialties in clinical medicine while increasing the duration of others, with emphasis on psychomotor and affective skills and professionalism. Some medical schools have a core paediatric posting of 7–8 weeks, but Nigeria is still fixed on the traditional 3–4 months. In some schools in South Africa, the clinical posting runs as modular blocks over three years, with the paediatric curriculum running from year 4 through year 6 (Dudley et al., 2015). Despite the long duration in the Nigerian curriculum, skills competencies are still deficient, so there is a need to revamp the curriculum to make it more competency driven. It is excusable that more sophisticated competencies, like use of the orchidometer, were not known by more than half the learners, but where some were taught, the level of confidence in these skills at this stage of their learning should also be assessed, as was done for diabetes by George et al. (2008).

    Medical schools in Nigeria and other countries will have to continually evolve and produce curricula that are competency based, using problem-based learning, simulations and manikin training for skills, as is done in other countries (Watmough et al., 2006). Diabetes, thyroid disorders, ambiguous genitalia with congenital adrenal hyperplasia, short stature and calcium disorders are common in Nigeria and should be taught in structured and integrated formats. An integrated curriculum in which skills are graded from simple to complex can also be tested: e.g., height measurement and charting using the stadiometer and growth charts can be taught in the first clinical year, and then mid-parental height, target height calculation and bone age in the second and third clinical years (Brauer & Ferguson, 2015; Grant, 2014).

    A. Strength of the Research

    Articulating the perceptions of learners is not always easy, as they are varied and subjective, but getting learners to come together, discuss and give suggestions on how a curriculum can be designed and achieved increases the strength of this research. There was no sense of victimisation of the learners, as many had already graduated from their schools, and the discussants reported not having missed classes or clinical learning. They spoke freely, with courtesy to others, and there was little or no argument among them.

    B. Limitations of the Research

    As this research is based on past experiences of the cognitive and psychomotor skills achieved during the learners’ training period, the possibility of recall bias is high, and respondents may underestimate or exaggerate their skills. Using respondents who had just concluded their paediatric postings was an attempt to reduce this limitation. The best time to evaluate a programme is usually soon after it has concluded; however, as there has been no report of this type of evaluation, there was a need to embark on it and make recommendations.

    V. CONCLUSION

    Respondents reported high variability in the implementation of the recommended guidelines for the paediatric endocrinology curriculum between schools in Southern Nigeria. Variability was seen in course completion, learners’ exposure to skills, and how much hands-on practice was allowed in the acquisition of various skills. This variability will hamper the core objectives of human capital development should the trend continue.

    A. Area of Future Research

    Noting that differences exist between schools, curriculum strategists and implementation teams in universities should commission a Delphi study by experts, in which core competencies and objectives for paediatric endocrinology can be agreed on and sent to the regulatory bodies for endorsement and implementation.

    Notes on Contributors

    IY conceived, designed, planned, executed and conducted interviews and the research. He also collected the data, analysed it and wrote the manuscript.

    TC helped in designing the methodology for the data collection and analyses, and reviewed the manuscript.

    CU gave critical appraisal of the manuscript and all authors have approved the final manuscript.

    Ethical Approval

    The research ethics committee of the University of Port Harcourt gave ethical approval before the start of the study, with the number UPH/CEREMAD/REC/MM80/056.

    Data Availability

    The data supporting this research are available for publication purposes, without editing. Data can be shared only with express permission from the corresponding author, and are deposited in the Figshare repository at the private URL:

    https://figshare.com/articles/dataset/Copy_of_CURRICULUM_STUDENTS_xls/21154396

    Acknowledgement

    We acknowledge the early career doctors and final year students who participated in the online survey especially the selected ones who took part in the focus group discussion.

    Declaration of Interest

    Authors declare that there are no conflicts of interest, including financial, consultant, institutional and other relationships that might lead to bias or a conflict of interest.

    Funding

    There was no funding for this survey.

    References

    Alsalamah, A., & Callinan, C. (2021). Adaptation of Kirkpatrick’s four level model of training criteria to evaluate training programmes for head teachers. Education Science, 11(116), 1-25. https://doi.org/10.3390/educsci11030116

    Bates, R. (2004). A critical analysis of evaluation practice: The Kirkpatrick model and the principle of beneficence. Evaluation and Program Planning, 27, 341-347. https://doi.org/10.1016/j.evalprogplan.2004.04.011

    Bindal, T., Wall, D., & Goodyear, H. M. (2011). Medical students’ views on selecting paediatrics as a career choice. European Journal of Pediatrics, 170(9), 1193-1199. https://doi.org/10.1007/s00431-011-1467-9

    Brauer, D. G., & Ferguson, K. J. (2015). The integrated curriculum in medical education: AMEE Guide No. 96. Medical Teacher, 37(4), 312-322. https://doi.org/10.3109/0142159X.2014.970998

    Burton, J. L., & McDonald, S. (2001). Curriculum or syllabus: Which are we reforming? Medical Teacher, 23(2), 187-191. https://doi.org/10.1080/01421590020031110

    Dudley, L. D., Young, T. N., Rohwer, A. C., Willems, B., Dramowski, A., Goliath, C., Mukinda, F. K., Marais, F., Mehtar, S., & Cameron, N. A. (2015). Fit for purpose? A review of a medical curriculum and its contribution to strengthening health systems in South Africa. African Journal Health Profession Education, 7(1), 81-84. https://doi.org/10.7196/AJHPE.512

    Federal Ministry of Health of Nigeria, Health Systems 20/20Project. (2012). Nigeria undergraduate medical and dental curriculum template. Health systems 20/20 Project, Abt Associates Inc.

    George, J. T., Warriner, D. A., Anthony, J., Rozario, K. S., Xavier, S., Jude, E. B., & Mckay, G. A. (2008). Training tomorrow’s doctors in diabetes: Self-reported confidence levels, practice and perceived training needs of post-graduate trainee doctors in the UK. A multi-centre survey. BMC Medical Education, 8, Article 22. https://doi.org/10.1186/1472-6920-8-22

    Grant, J. (2014). Principles of curriculum design. In T. Swanwick, K. Forrest, & B. C. O’Brien (Eds.), Understanding medical education: Evidence, theory and practice (pp. 31-46). Wiley Blackwell.

    Hohmann, E., & Tetsworth, K. (2018). Fellowship exit examination in orthopaedic surgery in the commonwealth countries of Australia, UK, South Africa and Canada. Are they comparable and equivalent? A perspective on the requirements for medical migration. Medical Education Online, 23(1), Article 1537429. https://doi.org/10.1080/10872981.2018.1537429

    Jensen, J. K., Dyre, L., Jørgensen, M. E., Andreasen, L. A., & Tolsgaard, M. G. (2018). Simulation-based point-of-care ultrasound training: a matter of competency rather than volume. Acta Anaesthesiology Scandanavia, 62(6), 811-819. https://doi.org/10.1111/aas.13083

    McManus, I. C. (2003). Medical school differences: beneficial diversity or harmful deviations. BMJ Quality and Safety in Health Care, 12(5), 324-325. https://doi.org/10.1136/qhc.12.5.324

    McManus, I. C., Harborne, A. C., Horsfall, H. L., Joseph, T., Smith, D. T., Marshall-Andon, T., Samuels, R., Kearsley, J. W., Abbas, N., Baig, H., Beecham, J., Benons, N., Caird, C., Clark, R., Cope, T., Coultas, J., Debenham, L., Douglas, S., Eldridge, J., . . . Devine, O. P. (2020). Exploring UK medical school differences: the MedDifs study of selection, teaching, student and F1 perceptions, postgraduate outcomes and fitness to practise. BMC Medicine, 18(1), Article 136. https://doi.org/10.1186/s12916-020-01572-3

    Offiah, G., Ekpotu, L. P., Murphy, S., Kane, D., Gordon, A., O’Sullivan, M., Sharifuddin, S. F., Hill, A. D. K., & Condron, C. M. (2019). Evaluation of medical student retention of clinical skills following simulation training. BMC Medical Education, 19(1), Article 263. https://doi.org/10.1186/s12909-019-1663-2

    Olson, A. L., Woodhead, J., Bekow, R., Kaufman, N., & Marshal, S. (2000). A national general pediatric clerkship curriculum: The process of development and implementation. Pediatrics, 106(S1), 216-222. https://doi.org/10.1542/peds.106.S1.216

    Ornstein, A. C., & Hunkins, F. P. (2009). Curriculum: Foundations, principles and issues (5th ed.). Pearson.

    Patel, V. L., Arocha, J. F., Chaudhari, S., Karlin, D. R., & Briedis, D. J. (2005). Knowledge integration and reasoning as a function of instruction in a hybrid medical curriculum. Journal of Dental Education, 69(11), 1186-1211. https://www.ncbi.nlm.nih.gov/pubmed/16275683

    Polikoff, M. S. (2018). The challenges of curriculum materials as a reform lever. Evidence Speaks Reports, 2, 58.

    Puri, N., McCarthy, M., & Miller, B. (2021). Validity and reliability of pre-matriculation and institutional assessments in predicting USMLE STEP 1 success: Lessons from a traditional 2 x 2 curricular model. Frontiers in Medicine (Lausanne), 8, Article 798876. https://doi.org/10.3389/fmed.2021.798876

    Rimmer, A. (2014). GMC will develop single exam for all medical graduates wishing to practise in UK. BMJ, 349, g5896. https://doi.org/10.1136/bmj.g5896

    Rufai, S. R., Holland, L. C., Dimovska, E. O., Bing Chuo, C., Tilley, S., & Ellis, H. (2016). A national survey of undergraduate suture and local anesthetic training in the United Kingdom. Journal of Surgical Education, 73(2), 181-184. https://doi.org/10.1016/j.jsurg.2015.09.017

    Ruiz, J. G., Mintzer, M. J., & Leipzig, R. M. (2006). The impact of E-learning in medical education. Academic Medicine, 81(3), 207-212. https://doi.org/10.1097/00001888-200603000-00002

    Ryan, A., Hatala, R., Brydges, R., & Molloy, E. (2020). Learning with patients, students, and peers: Continuing professional development in the solo practitioner workplace. Journal of Continuing Education in the Health Profession, 40(4), 283-288. https://doi.org/10.1097/CEH.0000000000000307

    Santen, S. A., Feldman, M., Weir, S., Blondino, C., Rawls, M., & DiGiovanni, S. (2019). Developing comprehensive strategies to evaluate medical school curricula. Medical Science Educator, 29(1), 291-298. https://doi.org/10.1007/s40670-018-00640-x

    Shah, S., McCann, M., & Yu, C. (2020). Developing a national competency-based diabetes curriculum in undergraduate medical education: A Delphi study. Canadian Journal of Diabetes, 44(1), 30-36. https://doi.org/10.1016/j.jcjd.2019.04.019

    Sosna, J., Pyatigorskaya, N., Krestin, G., Denton, E., Stanislav, K., Morozov, S., Kumamaru, K. K., Jankharia, B., Mildenberger, P., Forster, B., Schouman-Clayes, E., Bradey, A., Akata, D., Brkljacic, B., Grassi, R., Plako, A., Papanagiotou, H., Maksimović, R., & Lexa, F. (2021). International survey on residency programs in radiology: similarities and differences among 17 countries. Clinical Imaging, 79, 230-234. https://doi.org/10.1016/j.clinimag.2021.05.011

    Tiffin, P. A., Paton, L. W., Mwandigha, L. M., McLachlan, J. C., & Illing, J. (2017). Predicting fitness to practise events in international medical graduates who registered as UK doctors via the Professional and Linguistic Assessments Board (PLAB) system: a national cohort study. BMC Medicine, 15(1), Article 66. https://doi.org/10.1186/s12916-017-0829-1

    van Zanten, M., Boulet, J. R., & Shiffer, C. D. (2022). Making the grade: licensing examination performance by medical school accreditation status. BMC Medical Education, 22(1), Article 36. https://doi.org/10.1186/s12909-022-03101-7

    Watmough, S., Garden, A., & Taylor, D. (2006). Does a new integrated PBL curriculum with specific communication skills classes produce Pre Registration House Officers (PRHOs) with improved communication skills? Medical Teacher, 28(3), 264-269. https://doi.org/10.1080/01421590600605173

    Watmough, S., O’Sullivan, H., & Taylor, D. (2009). Graduates from a traditional medical curriculum evaluate the effectiveness of their medical curriculum through interviews. BMC Medical Education, 9, Article 64. https://doi.org/10.1186/1472-6920-9-64

    Yarhere, I., Chinnah, T., & Uche, C. (2022). Learners’ report and perception of differences in undergraduate paediatric endocrinology curriculum content and delivery across Southern Nigeria. [Data set]. Figshare. https://doi.org/10.6084/m9.figshare.20730937.v1

    Yarhere, I. E., & Nte, A. R. (2018). A ten-year review of all cause paediatric mortality in University of Port Harcourt Teaching Hospital, Nigeria (2006–2015). Nigerian Journal of Paediatrics, 45(4), 185-191.

    *Iroro Enameguolo Yarhere
    East/West Road,
    PMB 5323 Choba,
    Rivers State, Nigeria
    +2347067987148
    Email: iroro.yarhere@uniport.edu.ng

    Submitted: 16 May 2022
    Accepted: 3 January 2023
    Published online: 4 July, TAPS 2023, 8(3), 5-14
    https://doi.org/10.29060/TAPS.2023-8-3/OA2813

    Bikramjit Pal1, Aung Win Thein2, Sook Vui Chong3, Ava Gwak Mui Tay4, Htoo Htoo Kyaw Soe5 & Sudipta Pal6

    1Department of Surgery, Manipal University College Malaysia, Melaka, Malaysia; 2Department of Surgery, Manipal University College Malaysia, Melaka, Malaysia; 3Department of Medicine, Manipal University College Malaysia, Melaka, Malaysia; 4Department of Surgery, Manipal University College Malaysia, Melaka, Malaysia; 5Department of Community Medicine, Manipal University College Malaysia, Melaka, Malaysia; 6Department of Community Medicine, Manipal University College Malaysia, Melaka, Malaysia

    Abstract

    Introduction: The practice of high-fidelity simulation-based medical education has become a popular small-group teaching modality across all spheres of clinical medicine. High-fidelity simulation (HFS) is now being increasingly used in the context of undergraduate medical education, but its superiority over traditional teaching methods is still not established. The main objective of this study was to analyse the effectiveness of HFS-based teaching over video-assisted lecture (VAL)-based teaching in the enhancement of knowledge for the management of tension pneumothorax among undergraduate medical students.

    Methods: A cohort of 111 final-year undergraduate medical students were randomised for this study. The efficacy of HFS-based teaching (intervention group) and VAL-based teaching (control group), on the acquisition of knowledge, was assessed by single-best answer multiple choice questions (MCQ) tests in the first and eighth week of their surgery posting. Mean and standard deviation (SD) for the total score of MCQ assessments were used as outcome measures. ANCOVA was used to determine the difference in post-test MCQ marks between groups. The intragroup comparison of the pre-test and post-test MCQ scores was done by using paired t-test. The P-value was set at 0.05.

    Results: The mean post-test MCQ scores were significantly higher than the mean pre-test MCQ scores in both groups. The mean pre-test and post-test MCQ scores in the intervention group were slightly higher than those of the control group, but the difference was not statistically significant.

    Conclusion: There was a statistically significant enhancement of knowledge in both groups, but the difference in knowledge enhancement between the groups was not statistically significant.

    Keywords:           High-Fidelity Simulation, Video-Assisted Lecture, Simulation-Based Medical Education (SBME), Randomized Controlled Trial (RCT), Medical Education, Pre-test and Post-test Knowledge Assessments

    Practice Highlights

    • An RCT study to evaluate the effectiveness of HFS over video-assisted lecture teaching method.
    • HFS does not appear to be superior to VAL-based teaching for knowledge acquisition and retention.
    • HFS may be used judiciously when the objectives are mainly knowledge based.
    • Further research may determine curricular areas where HFS is superior and worth adopting.

    I. INTRODUCTION

      High-Fidelity Simulation (HFS) is an innovative healthcare education methodology that uses sophisticated life-like mannequins to create a realistic patient environment. HFS can be considered an innovative teaching method that aids students in translating knowledge and psychomotor skills from the classroom to the actual clinical setting. Kolb’s Experiential Learning Cycle (Kolb, 1984) provides a basis for integrating the active learning of simulation with conventional teaching methods for a comprehensive learning experience in undergraduate medical education. The usefulness of HFS has been recognised by the Accreditation Council of Graduate Medical Education (Accreditation Council of Graduate Medical Education [ACGME], 2020), and HFS has the added benefit of increasing students’ confidence and their ability to care for patients at the bedside (Kiernan, 2018). HFS-based education and video-assisted lecture-based teaching are both effective in achieving factual learning. Despite the increasing acceptance of HFS, there are limited studies comparing its usefulness with conventional teaching methods for factual learning among undergraduate medical students. At present, research has not provided enough evidence to establish the superiority of HFS-based teaching over traditional educational methods in the acquisition and retention of knowledge, and reported outcomes regarding the effectiveness of HFS on student learning are inconsistent and variable (Yang & Liu, 2016). HFS-based education is both time-consuming and resource intensive, and its long-term merits in retaining knowledge and translating it into enhanced patient care need further research. As educators, we need to systematically evaluate expensive newer teaching-learning modules like high-fidelity patient simulation (HFPS) for their effectiveness using rigorous research methodology and protocols, to ensure that we are providing the best learning opportunities conceivable for students. Previous studies were mostly done in North America; therefore, the generalisability of their results is guarded and might not apply in the context of Europe and Asia due to many differences in academic and curricular aspects (Davies, 2008). The purpose of this study was to establish the feasibility of using HFS to deliver critical care education to final-year medical students and to determine its efficacy in the enhancement of knowledge compared to video-assisted lectures. The study compared the effectiveness of the two teaching pedagogies in the enhancement of knowledge acquisition using pre-test and post-test MCQs. It was designed to provide insights that may be applied to the future development and improvement of HFS-based education among undergraduate medical students and the possibility of integrating it into course curricula.

      II. METHODS

      A. Study Design

      Randomized Controlled Trial (RCT) with parallel groups and 1:1 allocation. Please see Appendix 1 for the Flow Chart.

      B. Sample Size

      G*Power software was used to calculate the sample size (Faul et al., 2007). Based on a preliminary RCT conducted at our institute with the same protocol in 2018, the calculated sample size was 114, with a power of 0.95 for this study.
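An a-priori sample size for a two-group comparison of means follows the standard normal-approximation formula n per group = 2·((z₁₋α/2 + z_power)/d)². The sketch below (pure Python; the effect size d = 0.68 is a hypothetical value chosen only to illustrate the formula, not a figure reported by the authors or taken from G*Power) yields a total close to the 114 used here:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.95):
    """Normal-approximation sample size per group for a two-sided two-sample comparison of means."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # critical value for the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Assumed Cohen's d of 0.68 (hypothetical, for illustration only)
n = n_per_group(0.68)
print(n, 2 * n)  # 57 114
```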

      C. Inclusion and Exclusion Criteria

      All male and female final-year undergraduate medical (MBBS) students in our institute were recruited after obtaining their written informed consent. All final-year students in the institute consented to the study. The participants were between the ages of 22-26 years.

      The total number of participants recruited was 123.

      The number of participants who dropped out was 12 (9.77%).

      Out of 111 participants who completed the study, 61 (54.95%) were female and 50 (45.05%) were male.

      The study was conducted in the Clinical Skills Simulation Lab of Melaka Manipal Medical College (presently known as Manipal University College Malaysia).

      The study period was from March 2019 to February 2020 (12 months).

      D. Interventions

      1) Description of HFPS-based teaching: It was an interactive session using a high-fidelity patient simulator demonstrating the management of tension pneumothorax by performing Needle Decompression on METIman (Pre-Hospital) following the Advanced Trauma Life Support Manual developed by the American College of Surgeons (ATLS Subcommittee et al., 2013).

      2) Description of high-fidelity simulator: The METIman Pre-Hospital High-Fidelity Simulator (MMP-0418) was used for the simulation sessions. It was a fully wireless, adult High-Fidelity Patient Simulator (HFPS) with modelled physiology. It comes with extensive clinical features and capabilities designed specifically for learners to practice, gain experience, and develop clinical mastery in a wide range of patient care scenarios.

      3) Description of video-assisted lecture-based teaching: It was a small group interactive session delivered face-to-face to the participants using a recorded video clip demonstrating the management of tension pneumothorax by performing Needle Decompression on METIman (Pre-Hospital) following the Advanced Trauma Life Support Manual developed by the American College of Surgeons (ATLS Subcommittee et al., 2013).

      E. Outcome

      The tool for measurement of knowledge was an identical set of single-best answer A-type MCQs. These MCQs were used for both Pre-test and Post-test knowledge assessments. MCQs were constructed based on the teaching sessions to assess their learning outcome.

      The outcome measure for comparing the efficacy of HFPS-based teaching with video-assisted lecture-based teaching was the enhancement of knowledge of the management of tension pneumothorax.

      F. Recruitment

      The students were recruited in the study during their final year surgical posting.

      G. Randomisation

      A cohort of 12 to 14 students from each rotation was randomised into intervention (HFPS-based teaching) and control (video-assisted lecture-based teaching) groups following a random sequence generation method.

      A computer-generated random sequence was obtained from randomizer.org. The independent randomiser was a biostatistician who did not participate in the delivery of the interventions. The allocated interventions were then sealed in sequentially numbered, opaque envelopes.

      Block randomisation with a block size of two was used to assign the students into intervention and control groups.
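The allocation scheme described above can be sketched in a few lines of Python. This is an illustrative sketch only: the actual study sequence was generated at randomizer.org by an independent biostatistician, and the function name, group labels and seed here are hypothetical.

```python
import random

def block_randomise(n_participants, block_size=2, groups=("HFPS", "Video"), seed=None):
    """Assign participants to two groups in balanced blocks.

    With block_size=2 and two groups, each block contains one of each
    allocation in random order, so group sizes never differ by more than one.
    """
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        # Each block holds an equal number of each allocation, shuffled.
        block = list(groups) * (block_size // len(groups))
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_participants]

# Example: one rotation cohort of 14 students yields 7 per arm.
alloc = block_randomise(14, seed=42)
print(alloc.count("HFPS"), alloc.count("Video"))
```

A block size of two guarantees balance within every pair, which is why the Limitations section later notes that allocation may become predictable.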

      H. Implementation

      A biostatistician generated the allocation sequence. One independent investigator enrolled the participants, and another independent investigator assigned the participants to interventions. The outcome assessor and the biostatistician were kept blinded to the randomisation.

      I. Procedure for Data Collection

      The participants who gave consent were enrolled in the study. Each session was conducted with a group of 12 to 14 participants. On the first day, the participants were briefed about the sessions and the expected learning outcomes. As part of the briefing, the confidentiality of the HFPS and video-assisted lecture sessions and the ethical issues involved were explained to them. All the participants were introduced to the high-fidelity patient simulator (METIman) in the clinical lab set-up to make them aware of its functions and to familiarise them with the handling of the mannequin. The students were assured that the training course was not part of the evaluation process for the surgical curriculum. The briefing was followed by the first knowledge assessment (Pre-test MCQ) of all the participants, designed to record their baseline knowledge of tension pneumothorax and its management following the ATLS protocol. The aetiology, pathophysiology and clinical presentation of tension pneumothorax and the steps of its management following the ATLS protocol were part of their final-year course curriculum and had been taught before they participated in the study.

      After the Pre-test MCQ session, the participants were randomised into intervention and control groups of 6 to 7 participants each. For the intervention group, an independent investigator used the high-fidelity simulator (METIman Pre-Hospital) to demonstrate the diagnosis and management of tension pneumothorax (Needle Decompression) in an emergency setting; the demonstration lasted 20 minutes and was followed by 20 minutes of hands-on training. For the control group, a recorded video clip of the identical facilitated simulation session on the diagnosis and management of tension pneumothorax (Needle Decompression) was shown by another investigator; the video demonstration lasted 20 minutes and was followed by a 20-minute interactive discussion session with the faculty. During these interactive teaching sessions, the participants in both groups were apprised of the importance of aetiology, pathophysiology and clinical presentation in arriving at the diagnosis and management of tension pneumothorax, and were encouraged to explore through discussion how they would manage the stated clinical situation. The faculty were instructed to emphasise the teaching points related to the outcome of the study. The total duration of both types of teaching was 40 minutes. There were no additional hands-on practice or video-assisted lecture sessions for the participants during the course of the study. In the seventh/eighth week, both the intervention and control groups participated in the second knowledge assessment (Delayed Post-test MCQ) to assess their gain and retention of knowledge. A delayed Post-test MCQ assessment may minimise recall bias and better test retained memory.

      Both Pre-test and Post-test knowledge assessments comprised 20 MCQs to be completed in 20 minutes. The single-best answer A-type MCQs, each with five answer options, were prepared following the guidelines framed by the National Board of Medical Examiners (Case & Swanson, 2001). Each correct response was awarded one point; there was no negative marking for incorrect responses. Based on the learning objectives, the MCQs were constructed by six experts in the fields of Surgery, Medicine and Medical Education who were not part of this research study. The MCQs covered the pathophysiology, diagnosis and management of tension pneumothorax, and assessed knowledge comprehension and knowledge application. The order of the questions was changed between the Pre-test and the Post-test. The MCQ answer sheets were scanned by a Konica Minolta FM (172.17.5.12) scanner and graded using Optical Mark Recognition (OMR) software (Remark Office OMR, version 9.5, 2014; Gravic Inc., USA). Before the main study, a preliminary study involving 56 students was conducted to explore the time management, feasibility, acceptability and validation of the MCQs (Pal et al., 2021). In the preliminary study, the Pre-test and the Post-test were administered in the first and fourth weeks respectively to note short-term retention of knowledge. This study is an extension of the preliminary study with a different cohort of students, in which the Pre-test and the Delayed Post-test were administered in the first week and the seventh/eighth week respectively to determine medium-term retention of knowledge. The MCQs were reviewed based on feedback from the preliminary study on the appropriateness of the content, clarity of wording and difficulty level. The difficulty index and the bi-serial correlation for item discrimination of all MCQs were checked. A difficulty index between 30 and 95 and a bi-serial correlation > 0.2 were chosen as the accepted standards for this study.
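The two item statistics used to screen the MCQs can be computed as below. This is a minimal pure-Python sketch under the usual classical test theory definitions (the paper does not state which software computed them); the function names are illustrative, and the "bi-serial correlation" is implemented here as the point-biserial correlation commonly used for dichotomously scored items.

```python
from statistics import mean

def difficulty_index(item_scores):
    """Difficulty index: percentage of examinees answering the item
    correctly (0-100). item_scores is a list of 0/1 values."""
    return 100.0 * sum(item_scores) / len(item_scores)

def point_biserial(item_scores, total_scores):
    """Point-biserial correlation between a dichotomous item (0/1)
    and each examinee's total test score."""
    n = len(item_scores)
    m1 = mean(t for i, t in zip(item_scores, total_scores) if i == 1)
    m0 = mean(t for i, t in zip(item_scores, total_scores) if i == 0)
    p = sum(item_scores) / n          # proportion answering correctly
    q = 1 - p
    mu = mean(total_scores)
    sd = (sum((t - mu) ** 2 for t in total_scores) / n) ** 0.5  # population SD
    return (m1 - m0) / sd * (p * q) ** 0.5

# An item would be retained if 30 <= difficulty_index(...) <= 95
# and point_biserial(...) > 0.2, per the standards stated above.
```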

      At the end of the study, the participants in the intervention group were given access to the identical video-assisted lecture sessions designed for the control group. Similarly, the participants in the control group were given access to the same HFS sessions. This was to ensure parity between the groups in their professional development of knowledge.

      J. Statistical Analysis

      SPSS software (version 25) was used for data analysis. Descriptive statistics were calculated: frequency and percentage for categorical data, and mean and standard deviation for the total assessment scores. ANCOVA was used to determine the difference in Post-test MCQ marks between the intervention and control groups with Pre-test MCQ marks as a covariate. Intragroup comparison of Pre-test and Post-test MCQ marks was performed using a paired t-test. For the intergroup comparison, the effect size, Partial Eta Squared, was calculated in ANCOVA; Cohen's dz was calculated for the comparison of dependent means. The level of significance was set at 0.05 and the null hypothesis was rejected when P < 0.05. We measured the scale-level content validity index (SCVI) and item-level content validity index (ICVI) for the validity, and Cronbach's alpha for the internal consistency (reliability), of the MCQs. The average values of SCVI and ICVI were 0.94 and 0.89 respectively, and Cronbach's alpha was 0.78.
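The paired-samples statistics and the reliability coefficient described above follow standard textbook formulas, which can be sketched in pure Python (the study itself used SPSS; these helper names are hypothetical).

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_and_dz(pre, post):
    """Paired t statistic (df = n - 1) and Cohen's dz for pre/post scores.

    Cohen's dz = mean of paired differences / SD of paired differences,
    and the paired t statistic equals dz * sqrt(n).
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    dz = mean(diffs) / stdev(diffs)
    t = dz * sqrt(n)
    return t, dz

def cronbach_alpha(items):
    """Cronbach's alpha from per-item score lists (same examinees in each)."""
    k = len(items)
    item_vars = [stdev(x) ** 2 for x in items]          # sample variances
    totals = [sum(scores) for scores in zip(*items)]    # total score per examinee
    total_var = stdev(totals) ** 2
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

As a consistency check against Table 3, dz * sqrt(n) for the intervention group gives 0.518 × √55 ≈ 3.84, matching the reported t of 3.841.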

      III. RESULTS

      The data that support the findings of this RCT are openly available at https://doi.org/10.6084/m9.figshare.19932053 (Pal et al., 2022).

      A. General Data Analysis

      There was no difference in the highest Pre-test scores achieved by the participants in the intervention and control groups. The lowest scores recorded in the intervention group were better than those in the control group in both the Pre-test and the Post-test. There was a negligible difference between the highest Post-test scores of the control and intervention groups (See Table 1).

      Test score                 Intervention        Control

      PRE-TEST
        Mean (SE)                12.31 (0.34)        12.23 (0.36)
        95% CI for Mean          11.64 – 12.98       11.50 – 12.96
        Min – Max                6.0 – 18.0          6.0 – 18.0

      POST-TEST
        Mean (SE)                13.65 (0.27)        13.60 (0.30)
        95% CI for Mean          13.12 – 14.19       12.98 – 14.20
        Min – Max                8.0 – 18.0          7.0 – 17.0

      Table 1. Highest, lowest and unadjusted mean MCQ scores among intervention and control groups

      SE – Standard Error                CI – Confidence Interval
      Min – Minimum                      Max – Maximum

      B. Statistical Data Analysis

      ANCOVA was used to determine the difference in Post-test MCQ scores between the control and intervention groups after adjusting for Pre-test MCQ scores. There was a linear relationship between Pre-test and Post-test MCQ scores for each group, as determined by visual inspection of the scatterplot. Homogeneity of regression slopes was noted, as the interaction term was not statistically significant, F (1, 107) = 0.889, P = 0.348. When assessed by Shapiro-Wilk's test, standardized residuals were normally distributed (P > 0.05) in the intervention group but not in the control group (P < 0.05). Both homoscedasticity and homogeneity of variance were noted, as assessed by visual inspection of a scatterplot and Levene's test of homogeneity of variance (P = 0.531), respectively. Data are presented as mean ± standard error unless otherwise stated. The effect size, Partial Eta Squared (partial η2), was calculated in ANCOVA; a partial η2 value of 0.01 or less was considered small. For the comparison of dependent means, the effect size, Cohen's dz, was calculated; an effect size of 0.5-0.8 was considered moderate (Ellis, 2010). The Post-test MCQ score was higher in the intervention group, but after adjustment for Pre-test MCQ scores there was no statistically significant difference in Post-test MCQ scores between the control and intervention groups. The effect size was small (See Table 2).

      Variable          n     Post-test MCQ score    Mean difference        P-value    Partial η2
                              Mean (SE)              (95% CI)

      Intervention      55    13.65 (0.27)           0.04 (-0.69, 0.77)     0.917      0.0001
      Control           56    13.60 (0.30)

      Table 2. Intergroup comparison of post-test MCQ scores between intervention and control groups after adjusting for pre-test MCQ marks (ANCOVA)

      n: number of students
      SE: Standard error
      95% CI: 95% confidence interval
      Partial η2: Partial Eta Squared

      There was a statistically significant difference between Pre-test and Post-test MCQ scores in both the intervention and control groups. The mean Post-test MCQ score was significantly higher than the mean Pre-test MCQ score in both groups. The effect size was moderate in both groups (See Table 3).

      Variable        n     Pre-test MCQ     Post-test MCQ    Mean difference      t (df)        P-value     dz
                            scores           scores           (95% CI)
                            Mean (SD)        Mean (SD)

      Intervention    55    12.31 (2.49)     13.65 (1.99)     1.34 (0.64, 2.05)    3.841 (54)    < 0.001*    0.518
      Control         56    12.23 (2.72)     13.60 (2.26)     1.36 (0.68, 2.04)    3.998 (55)    < 0.001*    0.534

      Table 3. Intragroup comparison of pre- and post-test MCQ scores among intervention and control groups (Paired t-test)

      n: number of students                                                                * Significant
      SD: Standard deviation
      95% CI: 95% confidence interval
      dz: Cohen’s dz

      IV. DISCUSSION

      Multiple studies have revealed slight to modest enhancement of knowledge with simulation-based medical education (SBME) when compared to other instructional teaching methods (Cook et al., 2012; Gordon et al., 2006; Lo et al., 2011; Ray et al., 2012; Ten Eyck et al., 2009). Notwithstanding the increasing popularity of SBME, there is little evidence to conclude that it is superior to other small-group teaching modalities for the acquisition of knowledge (Alluri et al., 2016). The common perception is that knowledge lies at the lowest level of competence in Miller's model of clinical competence (Miller, 1990), but it is also important to note that knowledge is the basic foundation of competence and proficiency (Norman, 2009). Theoretically, SBME is advantageous for the assessment of both knowledge and skills, but few studies have directly evaluated the effectiveness of HFS in the assessment of knowledge (McGaghie et al., 2009; Rogers, 2008).

      The mean scores of both the Pre-test and the Post-test were higher in the intervention group in this study. In comparison, our preliminary study demonstrated that the control group had higher mean MCQ marks than the intervention group at Pre-test, whereas at Post-test the intervention group had higher mean MCQ marks than the control group (Pal et al., 2021).

      In our study, there was significant enhancement of knowledge (P < 0.001) with both modes of teaching, which corroborates the findings of Alluri et al. (2016), whose RCT demonstrated that participants in both the simulation and lecture groups had improved post-test scores (P < 0.05). The comparison of Pre-test and Post-test MCQ scores in our preliminary study also revealed significantly higher mean MCQ scores at Post-test than at Pre-test in both intervention and control groups (Pal et al., 2021). A study by Couto et al. (2015) showed improved post-test scores with both methods. Similar results were noted in the studies by Chen et al. (2017) and Vijayaraghavan et al. (2019). A study by Hall (2013) showed a slight increase in post-test scores in both the HFPS and control groups.

      A systematic review by La Cerra et al. (2019) revealed that HFS was superior to other teaching methods in improving knowledge and performance. Significantly higher scores for participants in the HFS group in the studies by Larsen et al. (2020) and Solymos et al. (2015) suggested that HFS may be superior to conventional teaching methods for factual learning. In another study, by Bartlett et al. (2021), HFS showed a significant long-term gain in knowledge over traditional teaching methods, but the short-term knowledge gain was not significant. Our study revealed that the Post-test MCQ score was higher in the HFS group, but after adjustment for Pre-test scores there was no significant difference in knowledge gain between the control and intervention groups. The findings were similar in our preliminary study, where the intervention group had a higher mean change in MCQ scores than the control group, but the difference was not statistically significant (Pal et al., 2021).

      On the other hand, several studies observed no significant difference in knowledge improvement between simulation and traditional teaching methods (Corbridge et al., 2010; Kerr et al., 2013; Moadel et al., 2017). The findings of Alluri et al. (2016) also showed no difference in knowledge gain between simulation and lecture-based teaching. The studies by Morgan et al. (2002) and Tan et al. (2008) demonstrated equal efficacy of simulation and conventional lectures. The findings of Kerr et al. (2013) demonstrated that SBME was not beneficial for the acquisition and retention of knowledge, and three RCTs revealed no significant improvement in knowledge after simulation-based education (Cavaleiro et al., 2009; Cherry et al., 2007; Kim et al., 2002).

      Although simulation is effective in the acquisition of knowledge, it may not be the most efficient modality when compared to other traditional educational methods (Bordage et al., 2009). There is ample evidence that SBME usually leads to enhancement of knowledge and skills among undergraduate students, but its superiority over other conventional teaching methods is yet to be established (Nestel et al., 2015).

      A. Limitations

      Potential biases in design, recruitment, sample population and data analysis could have influenced the findings. Because of randomisation in blocks of two, the allocation of participants may be predictable, which may result in selection bias. Confounding factors such as communication between the different groups of students prior to the second MCQ assessment, participants' recall memory, and preparation for the post-test after 7 – 8 weeks need to be considered. As this was a single-centre study which included final-year medical students only, the findings may not be generalisable to other settings.

      V. CONCLUSION

      Our study revealed that high-fidelity simulation-based teaching was not superior to video-assisted lecture-based teaching in terms of knowledge acquisition and retention. Conventional teaching modalities and HFS, when used in conjunction with bedside teaching, may complement clinical practice, leading to higher retention of knowledge. The substantially higher cost and maintenance associated with HFS need to be considered before planning a teaching-learning activity; it may be used judiciously alongside conventional teaching when the objectives are mainly knowledge-based. More studies are required to measure the efficacy of simulation, to better understand the difference it can make in the acquisition of knowledge, and to further evaluate it as a teaching-learning tool in medical education.

      Notes on Contributors

      Bikramjit Pal was involved in Conceptualization, Formal Analysis, Literature Review, Methodology, Project administration & Supervision, Data Analysis and Writing (original draft & editing).

      Aung Win Thein was involved in Formal Analysis, Literature Review, Methodology, Supervision and Writing (review & editing).

      Sook Vui Chong was involved in Literature Review, Methodology, Supervision and Writing (review & editing).

      Ava Gwak Mui Tay was involved in Formal Analysis, Literature Review, Supervision and Writing (review & editing).

      Htoo Htoo Kyaw Soe was involved in Formal Analysis, Methodology, Data curation, Statistical Analysis and Validation.

      Sudipta Pal was involved in Literature Review, Methodology, Formal Analysis, Data curation and Writing (review & editing).

      Ethical Approval

      Ethical approval was duly obtained from the Ethical Committee / IRB of Manipal University College Malaysia. Informed consent was taken from all the participants. All information about the participants was kept confidential.

      Approval number: MMMC/FOM/Research Ethics Committee – 11/2018.

      Data Availability

      The data that support the findings of this RCT are openly available at the Figshare repository, https://doi.org/10.6084/m9.figshare.19932053.v2 (Pal et al., 2022).

      Acknowledgement

      The authors would like to acknowledge the final year MBBS students of Manipal University College Malaysia who had participated in this research project, the faculty of the Department of Surgery, the lab assistants and technicians of Clinical Skills Lab and the Management of Manipal University College Malaysia.

      Funding

      The researchers had not received any funding or benefits from industry or elsewhere to conduct this study.

      Declaration of Interest

      The researchers had no conflicts of interest.

      References

      Accreditation Council for Graduate Medical Education. (2020, July 1). Program requirements for graduate medical education in general surgery. https://www.acgme.org/globalassets/pfassets/programrequirements/440_generalsurgery_2020.pdf.

      Alluri, R. K., Tsing, P., Lee, E., & Napolitano, J. (2016). A randomized controlled trial of high-fidelity simulation versus lecture-based education in preclinical medical students. Medical Teacher, 38(4), 404–409. https://doi.org/10.3109/0142159X.2015.1031734

      ATLS Subcommittee, American College of Surgeons’ Committee on Trauma, & International ATLS working group. (2013). Advanced trauma life support (ATLS®): The ninth edition. Journal of Trauma and Acute Care Surgery, 74(5), 1363–1366. https://doi.org/10.1097/TA.0b013e31828b82f5

      Bartlett, R. S., Bruecker, S., & Eccleston, B. (2021). High-fidelity simulation improves long-term knowledge of clinical swallow evaluation. American Journal of Speech-Language Pathology, 30(2), 673–686. 

      Bordage, G., Carlin, B., Mazmanian, P. E., & American College of Chest Physicians Health and Science Policy Committee (2009). Continuing medical education effect on physician knowledge: Effectiveness of continuing medical education: American College of Chest Physicians evidence-based educational guidelines. Chest, 135(3 Suppl), 29S–36S. https://doi.org/10.1378/chest.08-2515

      Case, S. M., & Swanson, D. B. (2001). Constructing written test questions for the basic and clinical sciences (3rd ed.). National Board of Medical Examiners.

      Cavaleiro, A. P., Guimarães, H., & Calheiros, F. (2009). Training neonatal skills with simulators? Acta Paediatrica, 98(4), 636–639.    

      Chen, T., Stapleton, S., Ledford, M., & Frallicciardi, A. (2017). Comparison of high-fidelity simulation versus case-based discussion on fourth- year medical student performance. Western Journal of Emergency Medicine: Integrating Emergency Care with Population Health, 18(5.1). https://escholarship.org/uc/item/5k73f4qc

      Cherry, R. A., Williams, J., George, J., & Ali, J. (2007). The effectiveness of a human patient simulator in the ATLS shock skills station.  Journal of Surgical Research, 139(2), 229–235. https://doi.org/10.1016/j.jss.2006.08.010

      Cook, D. A., Brydges, R., Hamstra, S. J., Zendejas, B., Szostek, J. H., Wang, A. T., Erwin, P. J., & Hatala, R. (2012). Comparative effectiveness of technology-enhanced simulation versus other instructional methods: A systematic review and meta-analysis. Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare, 7(5), 308–320. https://doi.org/10.1097/SIH.0b013e3182614f95

      Corbridge, S. J., Robinson, F. P., Tiffen, J., & Corbridge, T. C. (2010). Online learning versus simulation for teaching principles of mechanical ventilation to nurse practitioner students. International Journal of Nursing Education Scholarship, 7(1), Article 12. https://doi.org/10.2202/1548-923X.1976

      Couto, T. B., Farhat, S. C., Geis, G. L., Olsen, O., & Schvartsman, C. (2015). High-fidelity simulation versus case-based discussion for teaching medical students in Brazil about pediatric emergencies. Clinics (Sao Paulo, Brazil), 70(6), 393–399. https://doi.org/10.6061/clinics/2015(06)02

      Davies, R. (2008). The Bologna process: the quiet revolution in nursing higher education. Nurse Education Today, 28(8), 935–942. https://doi.org/10.1016/j.nedt.2008.05.008

      Ellis, P. (2010). The essential guide to effect sizes: Statistical power, meta-analysis, and the interpretation of research results. Cambridge University Press. https://doi.org/10.1017/CBO9780511761676

      Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/bf03193146

      Gordon, J. A., Shaffer, D. W., Raemer, D. B., Pawlowski, J., Hurford, W. E., & Cooper, J. B. (2006). A randomized controlled trial of simulation-based teaching versus traditional instruction in medicine: A pilot study among clinical medical students. Advances in Health Sciences Education, 11(1), 33–39. https://doi.org/10.1007/s10459-004-7346-7

      Hall, R. M. (2013). Effects of high fidelity simulation on knowledge acquisition, self-confidence, and satisfaction with baccalaureate nursing students using the solomon-four research design [Doctoral dissertation, East Tennessee State University]. East Tennessee State University Higher Education Commons. https://dc.etsu.edu/etd/2281

      Kerr, B., Hawkins, T. L., Herman, R., Barnes, S., Kaufmann, S., Fraser, K., & Ma, I. W. (2013). Feasibility of scenario-based simulation training versus traditional workshops in continuing medical education: a randomized controlled trial. Medical Education Online, 18(1), Article 21312. https://doi.org/10.3402/meo.v18i0.21312

      Kiernan, L. C. (2018). Evaluating competence and confidence using simulation technology. Nursing, 48(10), 45–52. https://doi.org/10.1097/01.NURSE.0000545022.36908.f3

      Kim, J. H., Kim, W. O., Min, K. T., Yang, J. Y., & Nam, Y. T. (2002). Learning by computer simulation does not lead to better test performance on advanced cardiac life support than textbook study. The Journal of Education in Perioperative Medicine, 4(1), Article E019. 

      Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Prentice Hall. 

      La Cerra, C., Dante, A., Caponnetto, V., Franconi, I., Gaxhja, E., Petrucci, C., Alfes, C. M., & Lancia, L. (2019). Effects of high-fidelity simulation based on life-threatening clinical condition scenarios on learning outcomes of undergraduate and postgraduate nursing students: A systematic review and meta-analysis. BMJ Open, 9(2), Article e025306. https://doi.org/10.1136/bmjopen-2018-025306

      Larsen, T., Jackson, N., & Napolitano, J. (2020). A comparison of simulation-based education and problem-based learning in pre-clinical medical undergraduates. MedEdPublish, 9(1), Article 172.

      Lo, B. M., Devine, A. S., Evans, D. P., Byars, D. V., Lamm, O. Y., Lee, R. J., Lowe, S. M., & Walker, L. L. (2011). Comparison of traditional versus high-fidelity simulation in the retention of ACLS knowledge. Resuscitation, 82(11), 1440–1443. https://doi.org/10.1016/j.resuscitation.2011.06.017

      McGaghie, W. C., Siddall, V. J., Mazmanian, P. E., Myers, J., & American College of Chest Physicians Health and Science Policy Committee (2009). Lessons for continuing medical education from simulation research in undergraduate and graduate medical education: Effectiveness of continuing medical education: American College of Chest Physicians Evidence-Based Educational Guidelines. Chest, 135(3 Suppl), 62S–68S. https://doi.org/10.1378/chest.08-2521

      Miller, G. E. (1990). The assessment of clinical skills/competence/performance. Academic Medicine, 65(9), S63–S67. https://doi.org/10.1097/00001888-199009000-00045

      Moadel, T., Varga, S., & Hile, D. (2017). A prospective randomized controlled trial comparing simulation, lecture and discussion-based education of sepsis to emergency medicine residents. Western Journal of Emergency Medicine: Integrating Emergency Care with Population Health, 18(5.1). https://escholarship.org/uc/item/0132981t

      Morgan, P. J., Cleave-Hogg, D., McIlroy, J., & Devitt, J. H. (2002). Simulation technology: A comparison of experiential and visual learning for undergraduate medical students. Anesthesiology, 96, 10–16. https://doi.org/10.1097/00000542-200201000-00008

      Nestel, D., Harlim, J., Smith, C., Krogh, K., & Bearman, M. (2015). Simulated learning technologies in undergraduate curricula: An evidence check review for HETI.

      Norman, G. (2009). The American College of Chest Physicians evidence-based educational guidelines for continuing medical education interventions: a critical review of evidence-based educational guidelines. Chest, 135(3), 834–837. https://doi.org/10.1378/chest.09-0036

      Pal, B., Chong, S. V., Thein, A. W., Tay, A. G., Soe, H. H., & Pal, S. (2021). Is high-fidelity patient simulation-based teaching superior to video-assisted lecture-based teaching in enhancing knowledge and skills among undergraduate medical students? Journal of Health and Translational Medicine, 24(1), 83-90. https://doi.org/10.22452/jummec.vol24no1.14

      Pal, B., Thein, A. W., Chong, S. V., Tay, A., Htoo, H., & Pal, S. (2022). A randomized controlled trial study to compare the effectiveness of high-fidelity based teaching with video-assisted based lecture teaching in enhancing knowledge [Dataset]. Figshare. https://doi.org/10.6084/m9.figshare.19932053

      Ray, S. M., Wylie, D. R., Shaun Rowe, A., Heidel, E., & Franks, A. S. (2012). Pharmacy student knowledge retention after completing either a simulated or written patient case. American Journal of Pharmaceutical Education, 76(5), 86. https://doi.org/10.5688/ajpe76586

      Rogers, D. A. (2008). The role of simulation in surgical continuing medical education. Seminars in Colon and Rectal Surgery, 19(2), 108-114. https://doi.org/10.1053/j.scrs.2008.02.007

      Solymos, O., O’Kelly, P., & Walshe, C. M. (2015). Pilot study comparing simulation-based and didactic lecture-based critical care teaching for final-year medical students. BMC Anesthesiology, 15, Article 153. https://doi.org/10.1186/s12871-015-0109-6

      Tan, G. M., Ti, L. K., Tan, K., & Lee, T. (2008). A comparison of screen-based simulation and conventional lectures for undergraduate teaching of crisis management. Anaesthesia and Intensive Care, 36(4), 565–569.

      Ten Eyck, R. P., Tews, M., & Ballester, J. M. (2009). Improved medical student satisfaction and test performance with a simulation-based emergency medicine curriculum: a randomized controlled trial. Annals of Emergency Medicine, 54(5), 684–691. https://doi.org/10.1016/j.annemergmed.2009.03.025

      Vijayaraghavan, S., Rishipathak, P., & Hinduja, A. (2019). High-fidelity simulation versus case-based discussion for teaching bradyarrhythmia to emergency medical services students. Journal of Emergencies, Trauma, and Shock, 12(3), 176–178. https://doi.org/10.4103/JETS.JETS_115_18

      Yang, Y., & Liu, H. P. (2016). Systematic evaluation influence of high-fidelity simulation teaching on clinical competence of nursing students. Chinese Nursing Research, 30(7), 809–814. https://caod.oriprobe.com/articles/47628779/Systematic_evaluation_influence_of_high_fidelity_s.htm

      *Bikramjit Pal
      RCSI & UCD Malaysia Campus (RUMC),
      4 Jalan Sepoy Lines,
      Georgetown, Penang, 10450, Malaysia
      +6042171908-1908 (Ext)
      Email: bikramjit.pal@rcsiucd.edu.my

      Submitted: 16 March 2022
      Accepted: 26 May 2022
      Published online: 4 April, TAPS 2023, 8(2), 57-65
      https://doi.org/10.29060/TAPS.2023-8-2/OA2778

      Vijay Kautilya Dayanidhi1, Arijit Datta2, Shruti P Hegde3 & Preeti Tiwari4

      1Department of Forensic Medicine, Medicine, Manipal Tata Medical College, MAHE, India; 2Department of Forensic Medicine, Medicine, Pramukhswamy Medical College, India; 3Department of Ophthalmology, Medicine, Manipal Tata Medical College, MAHE, India; 4Department of Community Medicine, Medicine, Pramukhswamy Medical College, India

      Abstract

      Introduction: Summative assessments play a major role in shaping students’ learning. There is little literature available on the validity of summative assessment question papers in Forensic Medicine & Toxicology. This study analyses 30 question papers from six reputed universities for content validity.

      Methods: A retrospective, cross-sectional, record-based observational study was conducted in which 30 university summative question papers in Forensic Medicine & Toxicology from six universities across India were evaluated for content validity. The learning domains assessed, the types of questions asked, and the sampling of the content were compared and presented in the results.

      Results: The results showed that 80% of the weightage was allotted to recall in most papers, and only one paper tested for application. 70 to 80% of the marks were allotted to Forensic Pathology, leading to disproportionate sampling. Core areas in Toxicology and Medical Jurisprudence were sparsely assessed.

      Conclusion: The content validity of the summative question papers in Forensic Medicine and Toxicology was unsatisfactory, emphasising the need to evaluate the clarity and efficacy of the blueprints used by the universities. Faculty training to motivate and influence a change in mindset is necessary to bring about a course correction.

      Keywords:           Forensic Medicine & Toxicology, Summative Assessments, University Assessments, Blueprint, Content Validity, Learning Domains

      Practice Highlights

      • The content validity of Forensic Medicine & Toxicology university exam question papers from six universities was studied.
      • Certain subtopics, such as Forensic Pathology, have been overvalued over time (80% weightage).
      • Core areas in Medical Jurisprudence and Toxicology, such as substance abuse, environmental toxicology, and pharmaceutical toxicity, have been undervalued.
      • None of the question papers analysed tested for application; most items tested only recall.
      • Blueprints for paper setters, designed around the competencies to be assessed, must be developed and validated.

      I. INTRODUCTION

      Reflecting on our learning experience during MBBS, we realised that we have always had issues with the examination system. The questions are vague and clustered around a few important topics. Undergraduate students rely on previous examination question papers to decide how much weight to give each topic while preparing for examinations. Invariably, all students attempt to predict the examination pattern and allot their time and effort to different subjects, skills, and topics accordingly. This reiterates George E. Miller’s maxim that “assessment drives learning”. Summative assessments need to be planned appropriately, as medicine has high stakes (Amin et al., 2006). Properly designed and executed assessments are known to have a positive steering effect on students’ learning, and they are also needed to evaluate programmes. Improper assessments can drive a hidden curriculum, leading to completely unintended outcomes (Amin & Khoo, 2003, pp. 260).

      The Competency Based Medical Education (CBME) model, adopted in India under the new Graduate Medical Education Regulations 2019, has attempted to bring about a radical change in the educational process. Undergraduate examinations in India are shifting towards a criteria-based process (Aggarwal & Agarwal, 2017; Bhattacharya et al., 2017; Mehta & Kikani, 2019). Outcome-based education demands that examinations be designed to sample and evaluate the specific competencies prescribed. The success of these models depends strongly on the validity of the examination process. Summative assessments require that the assessment tool be validated and that key outcomes be tested (Amin & Khoo, 2003, pp. 260; McAleer, 2001). Content validity and construct validity are two very important aspects that support the effectiveness of an assessment. Content validity tests the representativeness of the learning objectives in the assessment tool, while construct validity represents the congruence of the assessment tool with its intended purpose (Amin & Khoo, 2003, pp. 260).

      Forensic Medicine and Toxicology in India trains undergraduates to apply the knowledge gained in Medicine for the benefit of the law. It combines Forensic Pathology, Medical Jurisprudence and Toxicology. Its key objective is to empower Indian Medical Graduates to handle medico-legal issues and critically apply their medical skills in delivering justice. Emphasis is also placed on training in the aetiology, identification, and management of poisoning (Sharma et al., 2005). Studies on student perception suggest that teaching is significantly teacher-centric and theory-oriented, and that skill training in Medical Jurisprudence and Toxicology is significantly neglected. Students report that although they value the subject, they spend less time on it because only select concepts are emphasised (Gupta et al., 2017; Parmar, 2018; Sharma et al., 2005; Sudhan & Raj, 2019). As the new CBME UG curriculum 2019 is being rolled out, it is necessary that deficiencies in the traditional curriculum be identified in order to deliver an efficient and effective Forensic Medicine & Toxicology curriculum (National Medical Commission, 2018).

      Summative theory exams inherently face a challenge in the distribution of the items being tested (Aggarwal & Agarwal, 2017; Amin et al., 2006; Amin & Khoo, 2003, pp. 260; Bhattacharya et al., 2017). The validity of the content being tested in an examination is always in question. Selecting appropriate questions, question types and domains can make all the difference to the validity of the examination (Amin et al., 2006; Amin & Khoo, 2003, pp. 260; McAleer, 2001). Particularly in Forensic Medicine, which is a purely application-based course, testing critical thinking and synthesis is necessary; this is found wanting in the traditional curriculum (Parmar, 2018; Sharma et al., 2005; Sudhan & Raj, 2019). Published literature on the systematic analysis of summative assessment question papers in Forensic Medicine & Toxicology is sparse. In this study, we analysed and compared undergraduate summative examination question papers in Forensic Medicine & Toxicology from six reputed universities across India for the distribution of content tested, the domain of learning, and the construct of the questions.

      II. METHODS

      A retrospective cross-sectional record-based observational study was conducted at Government Medical College, Bharatpur between October and December 2020, after obtaining ethical approval from the Institutional Ethics Committee. For the study, 30 summative exam question papers from six reputed medical universities were selected based on the availability of the question papers in the public domain. The last five years (2016-20) of undergraduate question papers in Forensic Medicine & Toxicology were collected from the university websites and the college records of constituent colleges after a thorough web search. As all the data was collected from sources in the public domain, explicit consent was not required. Two of the selected universities were based in North India and four in South India. The identity of the universities was kept confidential during the analysis of the question papers.

      The summative theory examination in Forensic Medicine & Toxicology as per the Medical Council of India (MCI) regulations consists of one theory paper of a minimum of 40 marks. The question paper consists of essay-type questions and objective questions, such as very short answer questions or multiple-choice questions, depending on the university (National Medical Commission, 2018).

      For analysis, the questions were categorised by question type as LEQ (Long Essay Question), SAQ (Short Answer Question) and VSAQ (Very Short Answer Question, including MCQs). The questions were also categorised by domain of learning as recall-based, comprehension-based and application-based questions.
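      The domain categorisation can be illustrated with a small, purely hypothetical heuristic. In the study itself the categorisation was done manually by the investigators, not by software; the verb-to-domain mapping below is an assumption for illustration only.

      ```python
      # Hypothetical illustration only: the study's categorisation was done
      # manually. This sketch maps common question-stem verbs to the
      # learning domain they typically test.
      DOMAIN_BY_VERB = {
          "define": "recall", "list": "recall", "name": "recall",
          "describe": "comprehension", "explain": "comprehension",
          "classify": "comprehension",
          "interpret": "application", "manage": "application",
          "justify": "application",
      }

      def classify_domain(question: str) -> str:
          """Guess the learning domain from the question's first word."""
          stem = question.strip().lower().split()[0]
          return DOMAIN_BY_VERB.get(stem, "unclassified")

      print(classify_domain("Define rigor mortis."))  # recall
      print(classify_domain("Explain the mechanism of death in hanging."))  # comprehension
      ```

      A stem verb not in the mapping is left "unclassified" for manual review, mirroring how ambiguous items would still need expert judgement.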

      The topics in Forensic Medicine & Toxicology can be broadly subdivided into Medical Jurisprudence, Forensic Pathology and Toxicology. These were further subdivided into six categories: Legal Procedure, Medical Jurisprudence, Forensic Pathology, Forensic Psychiatry, Lab Techniques and Emerging Trends, and Toxicology (Medical Council of India, 1997). The percentage of marks allotted to each of these topics was analysed in each of the papers.

      Further, Forensic Pathology was subdivided into subtopics: Identification, Postmortem Changes, Mechanical Injuries, Mechanical Asphyxia, Thermal Deaths, Sexual Offences, and Medico-legal Issues related to Pregnancy, Delivery and Abortion. Toxicology was subdivided into General Toxicology; Chemical Toxicology; Drug, Pharmacy & Substance Abuse Toxicology; and Bio-toxicology (Medical Council of India, 1997). The percentage allotment of marks in each of the question papers was analysed for each of these subtopics.

      The data thus collected was tabulated in an Excel sheet and the percentage distribution of marks across the various subtopics noted. SPSS Statistical Software (IBM SPSS Statistics for Windows, Version 23.0) was used to analyse the data. Radar graphs and line graphs were plotted to represent and compare the pattern of distribution of marks across the various topics in each question paper. The types of questions asked and the weightage allotted to the subtopics were compared against the expected outcomes of the Forensic Medicine & Toxicology curriculum proposed by the National Medical Commission and the Medical Council of India to assess content validity (Medical Council of India, 1997; National Medical Commission, 2018). The learning domain targeted by the questions was examined to assess the construct validity of the question papers.
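      The core tabulation step, converting the raw marks a paper allots to each syllabus category into percentage weightage before plotting, can be sketched as follows. The marks in the sample paper below are hypothetical; the actual analysis was performed in Excel and SPSS.

      ```python
      # Illustrative sketch (not the authors' code): convert raw marks per
      # syllabus category into percentage weightage, as tabulated before
      # plotting the radar graphs. Sample marks are hypothetical.

      def percentage_weightage(marks_by_topic):
          """Convert raw marks per topic into percentage of the paper total."""
          total = sum(marks_by_topic.values())
          return {topic: round(100 * m / total, 1)
                  for topic, m in marks_by_topic.items()}

      # Hypothetical 40-mark paper, marks summed per category:
      paper = {
          "Legal Procedure": 2,
          "Medical Jurisprudence": 4,
          "Forensic Pathology": 24,
          "Forensic Psychiatry": 2,
          "Lab Techniques & Emerging Trends": 1,
          "Toxicology": 7,
      }

      weightage = percentage_weightage(paper)
      print(weightage)  # Forensic Pathology dominates at 60.0%
      ```

      Repeating this for each of the 30 papers yields the per-topic percentage series that the radar graphs in Figures 2-4 compare across universities.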

      III. RESULTS

      In this study, five question papers (n = 30) from each university (n = 6) were analysed and compared. The data that support the findings of this study are openly available in Figshare at https://doi.org/10.6084/m9.figshare.19367864 (Kautilya et al., 2022).

      As regulated, the university summative examination in Forensic Medicine & Toxicology consists of one theory assessment and one practical assessment (Medical Council of India, 1997; National Medical Commission, 2018). The theory paper is allotted a minimum of 40 marks. Five universities conducted the exam for 40 marks and one university paper was of 100 marks. All question papers had three types of questions, namely essay questions (Long Answer Questions, LAQs) of 8 to 10 marks each, short essays (Short Answer Questions, SAQs) of 3-5 marks each, and objective questions (such as Multiple Choice Questions, MCQs, or Very Short Answer Questions, VSAQs) of 1-2 marks each. Table 1 presents the percentage distribution of the marks allotted to each question type.

       

      University   % Marks LAQ   % Marks SAQ   % Marks VSAQ/MCQ
      U1           43            49.5          7.5
      U2           25            50            25
      U3           20.4          51.2          28.4
      U4           25            50            25
      U5           25            56            19
      U6           42            40.5          17.5

      Table 1. University-wise distribution of marks in the question papers based on the type of questions

      Nearly 50% of the marks in all universities were allotted to short essay or short answer question types, followed by long answer questions and very short answer questions respectively. Depending on the marks allotted to each question type, the question papers contained 11 to 22 items.

      A. Domain of the Learning Tested

      Theory question papers attempt to test the knowledge and cognition of the students. Limiting the questions to the recall type alone affects the quality of the question paper. Medicine, and Forensic Medicine in particular, requires application of knowledge, and testing of higher-order cognition is necessary for the assessment to be valid. To evaluate this, the questions were categorised into recall, comprehension and application types. The percentage distribution of marks in each question paper was analysed and is presented in the line graph (Figure 1).

      Figure 1. Comparison of percentage mark distribution based on the domain of learning

      B. Distribution of Marks Based on the Subtopics

      The Graduate Medical Education Regulations 2019 further divide the subject of Forensic Medicine and Toxicology into Forensic Pathology, General Information and Legal Procedures, Medical Jurisprudence, Forensic Psychiatry, Toxicology, and Lab Investigations and General Trends. The question papers were further analysed for the percentage distribution of marks among these six subtopics, presented in a radar graph in Figure 2.

      Figure 2. Topic wise distribution of marks (%) in the question papers

      From the graph it can be noted that Forensic Pathology receives the most attention in almost all the question papers from all the universities. Forensic Pathology can be further divided into seven subtopics. From the total marks allotted in each paper to Forensic Pathology, the percentage of marks allotted to each of these subtopics was calculated and is presented as a separate radar graph in Figure 3.

      Figure 3. Percentage distribution of marks in Forensic Pathology in the question papers

      Toxicology can be further divided into subtopics: General Toxicology, dealing with the management of poisons; Chemical Toxicology; Drug, Pharmacy and Substance Abuse Toxicology, dealing with pharmaceutical agents and banned substances; and Bio- and Environmental Toxicology, dealing with snakebite, venomous stings, mushrooms, food poisoning, plant toxicology, etc. From the total marks allotted to Toxicology, the percentage distribution of marks allotted to each of these subtopics was analysed and is presented in Figure 4.

      Figure 4. Percentage distribution of marks in Toxicology in the question papers

      IV. DISCUSSION

      The undergraduate medical education curriculum has been governed by the Graduate Medical Education Regulations 1997 (GMER-1997) framed by the Medical Council of India (Medical Council of India, 1997) over the last two decades; in 2019, the National Medical Commission adopted a competency-based training model to revamp medical education in India. The National Medical Commission, in its series of reports and documents, has attempted to identify the lacunae in the old curriculum. To successfully implement this radically new proposal, it is necessary that we understand the limitations of the current curriculum. The GMER-1997, like the newer GMER-2019, provides a clear framework for the undergraduate curriculum and lays down guidelines on the standards of implementation. The framework is designed so that there is significant room for colleges and universities to plan and implement it as they deem best. This, however, is not always the case: previous studies have observed that universities and colleges sometimes fall short of expectations (Medical Council of India, 1997; National Medical Commission, 2018; Sharma et al., 2005).

      Previous studies attempting to gauge students’ perception of the implementation of the Forensic Medicine and Toxicology curriculum have raised serious doubts among academicians. Kumar et al. (2018), in their study of students’ perception, revealed that 20% of students felt that autopsy was a mere formality and 64% felt the need for student involvement during autopsy training. Mardikar and Kasulkar (2015) revealed that 89% of interns and 41% of residents did not have any exposure to handling medico-legal cases. Only 14% of interns and 21% of residents were aware of the proper preservatives to be used for body fluids in poisoning, only 32% of interns and 46% of residents were aware of medical indemnity insurance, and only 13% of interns were aware of the Consumer Protection Act. There is a serious disconnect between the proposed and the implemented curriculum in forensic medicine.

      As per the guidelines framed by the Medical Council of India in the GMER-1997, a variety of essay questions and short answer questions are permitted, while objective questions such as very short answer questions and MCQs are permitted only to the extent of 20% (Medical Council of India, 1997). Most of the question papers analysed in this study conformed to this regulation. From Table 1 it can be noted that nearly 50% of the marks were allotted to short essay/answer questions (SAQs) requiring a descriptive answer. Long answer questions (LAQs) requiring an elaborate explanation of concepts represented about 20% to 42% of the question paper. The marks allotted to individual questions also varied, with LAQs being allotted 8 to 10 marks each, SAQs 3 to 5 marks, and VSAQs 1-2 marks each. Thus, the number of items included in each question paper ranged from 11 to 22. This distribution is similar to the analyses published for other subjects such as Microbiology, Pharmacology, Anatomy and Physiology (Aggarwal & Agarwal, 2017; Ayub et al., 2013; Bhattacharya et al., 2017; Choudhary et al., 2012; Chowdhury et al., 2017; Mehta & Kikani, 2019; Pichholiya et al., 2021).

      With the number of items being limited, the chance of certain areas being missed increases. This has a profound influence on sampling while making the blueprint (Raymond & Grande, 2019). In papers with only 11 items, there is a definite probability of certain topics being left out compared to papers with 22 items. As Forensic Medicine and Toxicology has only one paper, compared to the other second-year MBBS subjects which have two, some key topics get left out, adversely affecting its content validity.

      A. Analysis of the Domain of Learning Tested

      From Figure 1, it can be observed that in about 10 of the 30 papers, more than 75% of the questions/items tested recall. In only 7 of the 30 papers were more than 50% of the marks allotted to comprehension. In only one paper was application assessed, to an extent of 12.5%. This is similar to studies done in Anatomy, Physiology, Pharmacology, and Microbiology (Aggarwal & Agarwal, 2017; Bhattacharya et al., 2017; Choudhary et al., 2012; Chowdhury et al., 2017; Mehta & Kikani, 2019).

      This raises serious doubt about the construct validity of the question papers. Forensic Medicine and Toxicology, an application-based course, requires that higher-order cognition such as application be tested. The current papers fall short of assessing the right competency domains. The regulations prescribed in the GMER-1997 require that at least one long answer question (LAQ) of 10 marks (i.e., 25% of the marks) testing application be asked in the theory question paper (Medical Council of India, 1997). The newer competency-based medical education regulations prescribed in the GMER-2019 document reiterate this and, in addition, suggest that an application-based question covering the Attitude, Ethics and Communication skills module be included in every paper (National Medical Commission, 2018). This needs serious introspection in the times to come.

      B. Content Validity of the Question Papers

      The content validity of a test depends strongly on how well the sample is spread across the syllabus. From the analysis of the percentage distribution of marks allotted to different subtopics presented in Figure 2, it is clear that in the majority of the question papers the bulk of the questions are drawn from Forensic Pathology. There is a distinct skewing of the graph toward Forensic Pathology, with an average allocation of 60% of the marks.

      This is similar to studies in Physiology, where over 42% of the marks were allotted to the cardiovascular system. The observations in Figure 2 classically suggest that the Forensic Medicine and Toxicology curriculum is a victim of “carcinoma of the curriculum” (Abrahamson, 1978): over a period, certain sections of the curriculum take precedence and are valued more than other, equally relevant sections. Core areas like Toxicology and Medical Jurisprudence, which are clinically more relevant to undergraduate students considering their role as physicians of first contact, seem to have been blatantly missed and neglected. Faculty should reflect on the factors that might have caused this drift, which over time has led to this dangerous disease of the curriculum.

      The new competency-based UG curriculum being implemented by the National Medical Commission provides an excellent framework of competencies in Forensic Medicine and Toxicology (National Medical Commission, 2018). These serve as guiding milestones to reorient and redistribute the weightage, time and value allotted to particular topics.

      The percentage of marks allotted to each of the subtopics in Forensic Pathology in Figure 3 clearly shows that 60 to 70% of the marks were distributed among just three key topics: Post-mortem Changes, Mechanical Injuries and Asphyxia. The source of this error in the assessment is the high value allotted to theoretical aspects of autopsy and medical examination. Faculty and student attention has shifted towards the conduct of the postmortem examination, which is generally a high-stakes scenario, yet only a handful of undergraduates end up doing autopsies in their careers. The ability to do an autopsy is no doubt an important competency for undergraduates, but the competencies related to Medical Jurisprudence and Toxicology are equally important. The competencies related to handling medico-legal issues in patient care are encountered more frequently by a graduate, and thus require more attention in the undergraduate curriculum than Forensic Pathology, which is a rare or chance encounter for an MBBS graduate in India (Kumar et al., 2018; Medical Council of India, 1997; National Medical Commission, 2018; Sharma et al., 2005).

      An Indian Medical Graduate needs to make accurate observations, draw logical deductions and take critical decisions applying medical ethics in patient care. As a physician of first contact, the graduate should be able to diagnose and manage common cases of poisoning (Kumar et al., 2018; Medical Council of India, 1997; National Medical Commission, 2018; Sharma et al., 2005).

      Most of the competencies in Toxicology are covered in the Forensic Medicine curriculum rather than in General Medicine. Hence, the percentage of marks allotted to the various subtopics of Toxicology was also analysed in Figure 4. From Figure 2 it can be noted that about 20% of the marks were allotted to Toxicology. Figure 4 further shows that 60-80% of the marks for Toxicology were allotted to General Toxicology and Chemical Toxicology, showing a skewing in the distribution of marks.

      Assessments must complement the roles of the undergraduate after completion of the course. Snakebite, an occupational disease in India, is an emergency frequently encountered by physicians of first contact (Vijay & Hegde, 2019). Substance abuse and pharmaceutical toxicity, after pesticide poisoning, are also among the most frequently encountered cases in clinical practice (Basu & Mattoo, 1999). As curriculum planners, it is imperative that these areas be considered core in the curriculum (Amin et al., 2006; Amin & Khoo, 2003, pp. 260; McAleer, 2001). The current UG curriculum is deficient, as certain areas have been undervalued, leading to poor perception of the subject. Students undervalue the subject because the core competencies tested are not relevant to their role as physicians of first contact. Students allot little time to study, as most assessments cover few topics, leading to a deterioration in the quality of teaching and learning in the course (Sharma et al., 2005).

      Adult learners value learning based on its immediate applicability and its use in problem solving. The curriculum must value topics and skills that complement the roles of the learner after training. Medical Jurisprudence and Toxicology have not been sufficiently assessed in this curriculum.

      V. CONCLUSION

      From the above discussion, it is reiterated that the university assessments in Forensic Medicine and Toxicology need to be realigned with curricular needs. Certain subtopics, like Forensic Pathology, have been overvalued, while Medical Jurisprudence and Toxicology have been undervalued. The sampling in Forensic Medicine and Toxicology assessments is not ideal, and application must be tested instead of just recall.

      Universities need to periodically assess their question papers for validity and lay down clear guidelines for the paper setters. The current blueprints must be revalidated to check their clarity and scope for improvement. Most importantly, training the faculty and question paper setters to use the blueprint and value the competencies mandated by the curriculum lies at the heart of the solution. Over time, this curricular malignancy has had a profound effect on the mindsets of faculty trainers, so faculty development activities to motivate and influence these mindsets towards change are indispensable. The application-centred regulations prescribed by the National Medical Commission provide an excellent opportunity to motivate positive changes, leading to the required course correction.

      Notes on Contributors

      Dr Vijay Kautilya was instrumental in conceptualising the idea, designing the study, data collection, data analysis, and drafting and reviewing the manuscript.

      Dr Arijit Datta contributed to designing the study, data collection, data analysis, and drafting and reviewing the manuscript.

      Dr Shruti P Hegde was instrumental in designing the study, data analysis, drafting and reviewing portions of the manuscript.

      Dr Preeti Tiwari contributed to data collection, data analysis, and drafting and reviewing portions of the manuscript.

      Ethical Approval

      Institutional Ethics committee approval was received from the IEC, Government Medical College, Bharatpur where the study was conducted (GMCB/IEC/2020/009 dated 26th September 2020). 

      Data Availability

      Datasets generated and/or analysed during the current study are available from the following DOI.

      https://doi.org/10.6084/m9.figshare.19367864 

      Acknowledgement

      We wish to acknowledge the Faculty of Forensic Medicine and Toxicology at MTMC, Jamshedpur for assisting in procurement of the question papers.  

      Funding

      No external funding was received for the conduct of this study. 

      Declaration of Interest

      There is no conflict of Interests to the best of our knowledge. 

      References

      Abrahamson, S. (1978). Diseases of the curriculum. Academic Medicine, 53(12), 951-957. https://doi.org/10.1097/00001888-197812000-00001

      Aggarwal, M., & Agarwal, S. (2017). Analysis of undergraduate pharmacology annual written examination papers at Pt. B. D. Sharma University of health sciences Rohtak. National Journal of Physiology, Pharmacy and Pharmacology, 7(5), 509. https://doi.org/10.5455/njppp.2017.7.1236224012017

      Amin, Z., Chong, Y. S., & Khoo, H. E. (2006). Practical guide to medical student assessment. World Scientific. https://doi.org/10.1142/6109

      Amin, Z., & Khoo, H. E. (2003). Basics in Medical Education. World Scientific.

      Ayub, M., Habib, M., Huq, A., Manara, A., Begum, N., & Hossain, S. (2013). Trends in covering different aspects of anatomy in written undergraduate MBBS course. Journal of Armed Forces Medical College, Bangladesh, 9(1), 75-83. https://doi.org/10.3329/jafmc.v9i1.18729

      Basu, D., & Mattoo, S. K. (1999). Epidemiology of substance abuse in India: Methodological issues and future perspectives. Indian Journal of Psychiatry, 41(2), 145-153.

      Bhattacharya, S., Wagh, R., Malgaonkar, A., & Kartikeyan, S. (2017). Analysis of content of theory question papers in preliminary examinations and marks obtained by first-year MBBS students in physiology. International Journal of Physiology, Nutrition and Physical Education, 2(2), 856-868.

      Choudhary, R., Chawla, V. K., Choudhary, K., Choudhary, S., & Choudhary, U. (2012). Content validity of first MBBS Physiology examinations and its comparison with teaching hours devoted for different sub-divisions of physiology. Journal of Physiology and Pathophysiology, 3(1), 8-11.

      Chowdhury, D. K., Saha, D., Talukder, M. H., Habib, M. A., Islam, A. S., Ahmad, M. R., & Hossin, M. I. (2017). Evaluation of pharmacology written question papers of MBBS professional examinations. Bangladesh Journal of Medical Education, 8(2), 12-17. https://doi.org/10.3329/bjme.v8i2.33331

      Gupta, S., Parekh, U. N., & Ganjiwale, J. D. (2017). Student’s perception about innovative teaching learning practices in forensic medicine. Journal of Forensic and Legal Medicine, 52, 137-142. https://doi.org/10.1016/j.jflm.2017.09.007

      Kumar, A., Kumar, S., Goel, N., Ranjan, S. K., Prasad, M., & Kumari, P. (2018). Attitude of undergraduate medical students towards medico-legal autopsies at IGIMS, Patna, Bihar. International Journal of Medical Research Professionals, 4(6), 132-135.

      Kautilya, D. V., Datta, A., Hegde, S. P., & Tiwari, P. (2022). Evaluating the content validity of the undergraduate summative exam question papers of forensic medicine & toxicology from 6 medical universities in India. [Data set]. Figshare. https://doi.org/10.6084/m9.figshare.19367864

      Mardikar, P. A., & Kasulkar, A. A. (2015). To assess the need of medicolegal education in interns and residents in medical institution. Journal of Evolution of Medical and Dental Sciences, 4(17), 2885-2889. https://doi.org/10.14260/jemds/2015/417

      McAleer, S. (2001). Formative and summative assessment. In J. A. Dent & R. M. Harden (Eds.), A Practical Guide for Medical Teachers (pp. 293-302). Edinburgh: Churchill Livingstone.

      Medical Council of India. (1997). Regulations on Graduate medical education, 1997.  https://www.nmc.org.in/wp-content/uploads/2017/10/GME_REGULATIONS-1.pdf

      Mehta, S., & Kikani, K. (2019). Descriptive analysis of II – MBBS university question papers of microbiology subject. Journal of Education Technology in Health Sciences, 6(2), 44-47. https://doi.org/10.18231/j.jeths.2019.011

      National Medical Commission. (2018). Competency based undergraduate curriculum for the Indian Medical Graduate. https://www.nmc.org.in/information-desk/for-colleges/ug-curriculum/

      Parmar, P. (2018). Study of students’ perceptions towards case based learning in forensic medicine. Indian Journal of Forensic Medicine & Toxicology, 12(1), 154-160.

      Pichholiya, M., Yadav, A., Gupta, S., Kamlekar, S., & Singh, S. (2021). Blueprint for summative theory assessment in pharmacology – A tool to increase the validity as per the new competency based medical education. National Journal of Physiology, Pharmacy and Pharmacology, 11(12), 1345-1355. https://doi.org/10.5455/njppp.2021.11.06170202107072021

      Raymond, M. R., & Grande, J. P. (2019). A practical guide to test blueprinting. Medical Teacher, 41(8), 854-861. https://doi.org/10.1080/0142159x.2019.1595556

      Sharma, B., Harish, D., & Chavali, S. (2005). Teaching, training and practice of forensic medicine in India-An overview. Indian Journal of Forensic Medicine & Toxicology, 27(4), 247-251.

      Sudhan, S. M., & Raj, M. N. (2019). Current status of knowledge, attitude and awareness of medical students on forensic autopsy in Tumkur district of Karnataka. Indian Journal of Forensic Medicine & Toxicology, 13(1), 131-141.

      Vijay, D. K., & Hegde, S. P. (2019). Study of snake bite and factors influencing snake bite among the rural population of Kancheepuram district. Journal of Punjab Academy of Forensic Medicine & Toxicology, 19(2), 142-146.

      *Vijay Kautilya D
      Kadani Road, Baridih,
      Jamshedpur-831017
      Jharkhand, India.
      +919448651848
      Email: kautilya.dacroo@gmail.com

      Submitted: 19 August 2022
      Accepted: 5 December 2022
      Published online: 4 April, TAPS 2023, 8(2), 47-56
      https://doi.org/10.29060/TAPS.2023-8-2/OA2869

      Edyta Truskowska1, Yvonne Emmett2 & Allys Guerandel1

      1Department of Psychiatry, Faculty of Medicine, University College Dublin, Ireland; 2National College of Ireland, Ireland

      Abstract

      Introduction: Digital badges have emerged as an alternative credentialing mechanism in higher education. They have data embedded in them and can be displayed online. Research in education suggests that they can facilitate student motivation and engagement. The authors introduced digital badges in a Psychiatry module in an Irish university. Completion of clinical tasks during the students’ clinical placements, previously recorded in a paper logbook, now triggers digital badges. The hope was to increase students’ engagement with the learning and assessment requirements of the module.

      Methods: The badges – gold, silver and bronze level – were acquired on completion of specific clinical tasks and an MCQ. This was done online and student progress was monitored remotely. Data was collected from the students at the end of the module using a questionnaire adapted from validated questionnaires used in educational research.

      Results: The response rate was 68%. 64% of students reported that badges helped them achieve learning outcomes. 68% agreed that digital badges helped them to meet the assessment requirements. 61% thought badges helped them to understand their performance. 61% were in favour of the continuing use of badges. Qualitative comments suggested that badges should contribute to a higher proportion of the summative mark, and identified that badges helped students to structure their work.

      Conclusions: The findings are in keeping with the literature in that engagement and motivation have been facilitated. Further evaluation is required but the use of badges as an educational tool is promising.

      Keywords:           Medical Education, Digital Badges, Students’ Engagement, Continuous Assessment, Gamification, Health Profession Education

      Practice Highlights

      • Digital badges may enhance student engagement.
      • Digital badges may promote motivation for learning.
      • Evaluation of digital badges using a questionnaire with ordinal analysis of data and coding of free comments.
      • Majority of students reported working harder than in a non-gamified module.
      • Digital badges provided structure and direction to the student’s learning.

      I. INTRODUCTION

      Educational research recognises student engagement as valuable and as having a significant impact on learning (Mandernach, 2015). While searching for tools that affect engagement, educators observed that games have been good at engaging players for decades, through their ability to sustain players’ attention and keep them motivated (Przybylski et al., 2010). This level of engagement is desirable to both students and educators, and its attainability in gaming has led to the exploration of gaming strategies in education. Applying elements of game design in non-game contexts to influence, engage and motivate individuals and groups has resulted in a new field known as gamification (Deterding et al., 2011).

      Digital badges are common tools of gamification (Barata et al., 2013). They are frequently used by game designers and, in recent years, by educators. In education, a digital badge can be a validated symbol of academic achievement, accomplishment, skill, quality or interest (HASTAC, n.d.). Digital badges are digital images, obtained through the completion of pre-specified goals, that are annotated with metadata and can be displayed online (Hensiek et al., 2017). In higher education, badges have been used to recognise a student’s participation in a learning activity, to help students explicitly and visually capture and monitor progress on learning tasks, to recognise the achievement of skills and competencies, and to certify these achievements. They are reported to have a positive effect on learners’ motivation if they are regarded as awards or if they trigger competition among peers (Yildirim et al., 2016).

      The value of digital badges appears to depend on their design (when they are awarded, for what, and what they mean). For example, the use of badges as credentials only has been criticised for focusing exclusively on extrinsic motivating factors, which have less impact on engagement than intrinsic ones (Seaborn & Fels, 2015). Combining the use of badges as credentials with their use within the assessment process therefore appears to be a better approach. Given that assessment has been shown to have the greatest impact on effective learning, the use of badges within structured assessment has been favoured by educators (Abramovich, 2016; Rolfe & McPherson, 1995). Assessments that can generate both formative and summative feedback are presented as particularly useful (Armour-Thomas & Gordon, 2013), and digital badges represent a viable alternative to existing methods of assessment in educational institutions and in the work environment (Dowling-Hetherington & Glowatz, 2017). It has also been noted that access to regular feedback, broadly available in games, is helpful to learners: students who are given opportunities to complete a task and learn from their mistakes do better in overall assessment. Games are a good example of a design in which a player learns through feedback, improves and eventually becomes successful (McGonigal, 2011). Similarly, the literature suggests that badges can offer a kind of “covert assessment”, meaning that students can approach a task as if it were a game. This maintains the benefits of assessment while minimising the potential for unhelpful levels of test anxiety (McGonigal, 2011; Abramovich et al., 2013).

      Another advantage given for the use of digital badges is their potential for remote monitoring of students’ progress and their difficulties by instructors and tutors (Huang & Soman, 2013). There is a growing momentum for the use of digital badges as an innovative instruction and credentialling strategy in higher education (Noyes et al., 2020).

      In our University, Psychiatry is taught as a 10-credit module to both undergraduate and graduate entry students in the final stage of their degree in medicine. Typically, approximately 240 students take the module in four different groups: two groups in the spring and two in the autumn, for six weeks at a time. Face-to-face teaching is centralized on Mondays and Fridays. Clinical teaching is delivered during the rest of the week and takes place in multiple different clinical centres. The overall assessment of this module comprised continuous assessment with specific formative and summative tasks recorded in a paper logbook. The continuous assessment was worth 20% of the overall assessment.

      Standardising the students’ clinical experience, engaging them in their clinical placements and monitoring their attendance and progress can be challenging. The paper logbook/portfolio we were using was inadequate in that it did not allow for central monitoring of progress, and the difficulties students were encountering often came to the attention of the teaching staff only when the logbook was handed in at the end of the module. Provision of feedback on progress was also limited. We felt that, in particular, students who were slow to progress were missing potential remediation before the summative assessments. We also encountered practical difficulties, such as lost logbooks, that affected the continuous assessment process.

      We felt that digital badges offered a way of monitoring attendance and participation in tasks remotely, providing feedback, facilitating remediation and allowing students to gauge how they were doing relative to their peers, while optimizing engagement in the clinical placements and structuring the learning to sustain progress through the module. We introduced and piloted the use of digital badges in the Psychiatry module as part of the continuous assessment, and carried out a descriptive study to appraise their potential usefulness as part of our teaching strategy.

      II. METHODS

      A. Course Design

      Students taking the 6-week Psychiatry module start their clinical placement on day 2 of the module. Each week, students participate in their continuous assessment in order to collect their weekly badge. To acquire a badge, they need to complete and upload specific clinical tasks, including the formative clinical cases scheduled for them, to upload a Clinical Placement Form signed by the consultant on the team to which they are attached, and to complete an online multiple-choice question (MCQ) test at the end of the week. As all of this is done online, their progress can be monitored remotely by the teaching team, independent of the location of their clinical placement. Collecting the weekly badges provides students with 5% of their continuous assessment mark; the remaining marks come from a summative clinical case (90%) and a reflective assignment (5%). Continuous assessment contributes 20% of the overall assessment mark.
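      As a quick sanity check on the weighting described above, the contribution of these components to the overall mark can be sketched in code. The function name and 0-100 component scales are illustrative and not part of the module's actual grading system:

```python
def continuous_assessment_mark(badge_pct: float, case_pct: float, reflective_pct: float) -> float:
    """Combine the continuous assessment components using the stated weights:
    weekly badges 5%, summative clinical case 90%, reflective assignment 5%.
    Inputs are component scores on a 0-100 scale; the result is the number
    of marks (out of 20) contributed to the overall module mark, since
    continuous assessment carries 20% of the overall assessment."""
    ca = 0.05 * badge_pct + 0.90 * case_pct + 0.05 * reflective_pct
    return 0.20 * ca

# A student with full badge marks, 70% on the summative case and 80% on the
# reflective assignment contributes 14.4 of the 20 available marks:
contribution = continuous_assessment_mark(100, 70, 80)
```

The small weight attached to the badges (5% of the 20% continuous assessment, i.e. 1% of the overall mark) is what several students later criticised in their free comments.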

      B. Badges Design

      Badges were created by tutors in Psychiatry in conjunction with the University’s Teaching and Learning department, as part of an institution-wide digital badging pilot project (UCD Teaching and Learning, 2017). It was agreed that there would be three types of badges – bronze, silver and gold – obtained and displayed on the university’s virtual learning environment (currently, Brightspace). As noted above, students receive a digital badge on completion of assigned tasks, which are part of their continuous assessment. The type of badge awarded depends on the MCQ score, and it is displayed on the student’s Blackboard profile. Figure 1 depicts the process and shows that all badges together contribute 5% of the module’s continuous assessment. Every week, students receive information on what percentage of the group has acquired a bronze, silver or gold badge, so they have an idea of their performance relative to the rest of the group.
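      The weekly award step depicted in Figure 1 might be sketched as follows. The MCQ score cut-offs for bronze, silver and gold are illustrative assumptions; the actual thresholds used in the module are not stated here:

```python
from typing import Optional

def weekly_badge(tasks_uploaded: bool, form_uploaded: bool, mcq_score: float) -> Optional[str]:
    """Return the badge level earned for the week, or None if requirements
    are unmet. A badge requires the scheduled clinical tasks, the signed
    Clinical Placement Form and the weekly MCQ; the level awarded depends
    on the MCQ score. The 60/80 cut-offs below are hypothetical."""
    if not (tasks_uploaded and form_uploaded):
        return None
    if mcq_score >= 80:   # assumed gold threshold
        return "gold"
    if mcq_score >= 60:   # assumed silver threshold
        return "silver"
    return "bronze"
```

Because every step is completed online, a record of which students returned `None` in a given week is available to the teaching team immediately, which is what enables the remote monitoring and early remediation described above.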

      Figure 1. Step-by-step process of awarding a badge

      C. Questionnaire

      The ‘Digital Badges Experience Survey’ questionnaire was designed based on previously described surveys, the ARCS Badge Motivation Survey (Foli et al., 2016) and the Badge Opinion Survey (Abramovich et al., 2013), with some additional questions suggested by the literature on digital badges. The authors and faculty members identified and agreed on the following constructs as being relevant to our teaching delivery and our study: previous knowledge of digital badges (items 1 & 2), their meaning and relevance to students (items 22, 13, 14, 12, 8 & 5), motivation and engagement (items 3, 4 & 23), relevance to assessment and feedback (items 11, 9, 10, 24, 18 & 19), their use in structuring learning (items 15, 6, 7 & 16), self-efficacy (items 17, 20, 21 & 25), and social context (items 26, 27 & 28). Under each construct, items from the above questionnaires were discussed and agreement was reached on which were to be used, altered or added, assessing their relevance and acceptability for our aims and teaching context.

      Our survey consisted of 30 items with answers on a seven-point Likert scale. The 31st question required a dichotomous (yes/no) answer, with space for respondents to explain their reasons. A final item invited free comments from students (the questionnaire is available in Table 1).

      Please rate your agreement with each of the statements using the following scale, circling one number for each statement: Strongly Agree (+3), Agree (+2), Somewhat Agree (+1), Neutral (0), Somewhat Disagree (-1), Disagree (-2), Strongly Disagree (-3). Items 1 to 30 are each rated on this scale.

      1. I knew what digital badges were before I began this module.
      2. I have earned digital badges before beginning this module.
      3. I felt motivated to complete the module because I was earning digital badges.
      4. Compared to other modules on my programme, the digital badges motivated me to work harder.
      5. The digital badges helped me to understand the learning outcomes for this module.
      6. The digital badges helped me to achieve the learning outcomes for this module.
      7. The badge helped draw my attention to the clinical seminars.
      8. The digital badges helped me to understand the content of this module.
      9. The digital badges helped me to understand the assessment requirements for this module.
      10. I was more aware of the module continuous assessment requirements because I would be earning digital badges.
      11. Because I was earning digital badges, I knew the continuous assessment requirements were important.
      12. Earning digital badges made a difference in how I viewed completing the continuous assessment requirements.
      13. Earning badges made the assignments more significant to me.
      14. The badges increased how relevant the assignments were.
      15. The digital badges helped me to structure my work in this module.
      16. The digital badges helped me to meet the assessment requirements of this module.
      17. The badges increased my confidence that I could demonstrate the content of my knowledge.
      18. The digital badges helped me to understand my performance in this module.
      19. The digital badges helped me to understand my progress through the module.
      20. The badges were symbols that I had mastered content.
      21. The badges increased my overall level of satisfaction with completing the continuous assessment requirements.
      22. By earning the badges I was more fulfilled as a student by completing the assessment requirements.
      23. The digital badges made me want to keep on working.
      24. I understand why I earned all of my badges.
      25. The badges I earned represent what I learned on this module.
      26. I talked to others about the badges I earned.
      27. I compared the badges I earned with others’ on the module.
      28. The potential to earn digital badges at gold, silver and bronze levels made me feel competitive.
      29. I think digital badges are a good addition to the programme.
      30. I would like to earn digital badges in other modules on my programme.
      31. I think the badges are helpful and should be used in the coming years (tick as appropriate: Yes / No) and give 3 reasons why.
      32. Any other comments.

      Thank you for your participation

      Table 1. Digital Badges Experience Survey

      D. Participants

      The questionnaires were distributed to all final-year medicine students in our university at the beginning of the final (sixth) week of the module and collected by their tutors. Informed verbal consent was obtained from study participants. As described above, the course was run four times in one academic year, and we collected data from all four groups of students: two in the spring and two in the autumn.

      E. Data Collection and Analysis

      As noted above, questionnaires were distributed and collected by tutors at the beginning of the sixth (last) week of the Psychiatry course. The level of students’ agreement with each statement was marked on the 7-point Likert-type scale, and these data were entered into Excel. Data from Likert scales can be analysed as ordinal as well as interval data (Sullivan & Artino, 2013; Norman, 2010) and we considered both options. We concluded that using descriptive statistics such as the mean had limited value for students’ opinions (Sullivan & Artino, 2013; Knapp, 1990), so we decided to analyse our data as ordinal. To simplify the answers, we collapsed them into three groups: “agreed”, “neutral” and “disagreed”.
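      The collapsing of the seven-point scale into three ordinal groups can be sketched in a few lines of plain Python; the sample responses below are invented for illustration only:

```python
from collections import Counter

def collapse(response: int) -> str:
    """Map a 7-point Likert response (+3 to -3) to one of three groups."""
    if response > 0:
        return "agreed"
    if response < 0:
        return "disagreed"
    return "neutral"

def group_percentages(responses):
    """Percentage of respondents falling into each group for one item."""
    counts = Counter(collapse(r) for r in responses)
    n = len(responses)
    return {g: round(100 * counts[g] / n) for g in ("agreed", "neutral", "disagreed")}

# Ten invented responses to a single item:
item_responses = [3, 2, 1, 1, 0, 0, -1, -2, 2, -3]
summary = group_percentages(item_responses)  # {'agreed': 50, 'neutral': 20, 'disagreed': 30}
```

Treating the data this way avoids the misleading precision of reporting a mean of ordinal responses, which is the rationale given above.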

      Students’ comments were entered into an Excel sheet and analysed by two independent researchers. Comments relating to the use of badges in the teaching of Psychiatry were coded according to topics, which were identified and agreed upon by the two researchers (Johnson & LaMontagne, 1993; Sundler et al., 2019). Topics were further coded as positive or negative; this way of coding is described and performed in more detail in other studies (Quesenberry et al., 2011).
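      The tallying of coded comments can be sketched in the same spirit; the topic labels echo those reported in the Results, but the coded pairs below are invented for illustration:

```python
from collections import Counter

# Each coded comment is a (topic, polarity) pair agreed by the two coders.
coded_comments = [
    ("structure", "positive"), ("motivation", "positive"),
    ("feedback", "positive"), ("value", "negative"),
    ("value", "negative"), ("workload", "negative"),
]

# Overall counts of positive vs negative comments:
by_polarity = Counter(polarity for _, polarity in coded_comments)

# Frequency of each negative topic, e.g. to find the most criticised aspect:
negative_topics = Counter(t for t, p in coded_comments if p == "negative")
most_criticised = negative_topics.most_common(1)[0][0]
```

A tally like this is what underlies the frequency counts reported in Figures 2 and 3.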

      III. RESULTS

      A. Demographics

      161 out of 237 students completed questionnaires, giving a 68% response rate. The response rate was 75% among the spring groups and 61% among the autumn groups.

      B. Analysis of Answers

      65% of students had no previous knowledge of digital badges and 93% had never earned a badge before the module (items 1 & 2 of the questionnaire).

      1) Meaning and relevance: 48% of respondents agreed that by earning the badges they felt more fulfilled as a student when completing the assessment requirements, while 31% disagreed (item 22). 45% agreed and 39% disagreed that earning badges made the assignments more significant to them (item 13), and similarly only 42% felt the badges increased how relevant the assignments were, while 40% disagreed (item 14).

      Earning digital badges made a difference in how 59% of students viewed completing the continuous assessment requirements; 29% disagreed (item 12). 68% of students felt that the digital badges helped them to understand the content of this module and 18% disagreed (item 8). 66% of students agreed and 24% disagreed that the digital badges helped them to understand the learning outcomes for this module (item 5).

      2) Motivation and engagement: 51% of respondents felt motivated to complete the module because they were earning digital badges (item 3); 33% did not agree. The possibility of earning a digital badge motivated 43% of respondents to work harder (item 4), while 39% disagreed. The digital badges made 45% of students want to keep on working, while 29% were not affected (item 23).

      3) Assessment and feedback: 50% felt that because they were earning digital badges, they knew the continuous assessment requirements were important; 36% disagreed (item 11). The digital badges also helped 69% of respondents to understand the assessment requirements for this module, but not 15% of respondents (item 9). Of all respondents, 78% were more aware of the module continuous assessment requirements because of the digital badges, and only 15% disagreed (item 10). As many as 74% of respondents understood why they earned their badges, while 14% did not (item 24). The digital badges helped 61% of students to understand their performance in this module (item 18); 27% did not find that the badges helped in this way. Similarly, the digital badges helped 61% of students to understand their progress through the module, with 22% disagreeing (item 19).

      4) Structure: 50% of respondents agreed and 33% disagreed that the digital badges helped them to structure their work in the module (item 15). 64% agreed and 21% disagreed that the digital badges helped them to achieve the learning outcomes for this module (item 6). Similarly, the badge helped draw attention to the clinical seminars for 57% of respondents, but not for 25% (item 7). 68% of students (vs 19% who disagreed) felt that the digital badges helped them to meet the assessment requirements of this module (item 16).

      5) Self-efficacy: 48% of students agreed (vs 33% who disagreed) that earning the digital badges made them more confident that they could demonstrate the content of their knowledge (item 17). 41% of respondents agreed that the badges were symbols of having mastered the content of the module, a similar number (40%) disagreed and 17% remained neutral (item 20). 59% found that the badges increased their overall level of satisfaction with completing the continuous assessment requirements, while 27% disagreed (item 21). 44% of students agreed and 41% disagreed that the badges they earned represented their learning in the module (item 25).

      6) Social context and competitiveness: 42% of students did and 44% did not talk to others about the badges they earned (item 26). The potential to earn digital badges at gold, silver and bronze levels made 49% of students more competitive (item 28). 39% disagreed with this. 29% of students did and 56% did not compare the badges they earned with others on the module (item 27).

      7) Overall: 56% agreed that digital badges were a good addition to the programme (item 29). 61% of students found digital badges helpful and felt that they should be used in the future, while 31% disagreed and 8% did not answer (item 31).

      Students’ opinions: 136 students wrote comments and 25 did not. Students’ comments related to one or to several topics. Students commented positively about the use of badges (across various topics) 134 times (see Figure 2) and negatively or critically 106 times (see Figure 3). It is important to note, however, that as many as 67 of the 106 negative comments related to the low assessment value of the badges.

      Figure 2 shows the positive topics included in students’ comments: 50 students liked the structure and focus that the badge system provided, 28 found the badges motivating and 24 valued the feedback they received in the process. A number of students found the whole process enjoyable, rewarding and fun (see Figure 2). Figure 3 shows the negative opinions; the most frequent topic, repeated 67 times, related to the low assessment value of the badges (see Figure 3). Figure 4 shows some comments made by students in the free comment box provided on the survey.

      Figure 2. Frequency of positive comment grouped by topic

      Figure 3. Frequency of negative comment grouped by topic

      Figure 4. Comments written by students

      In summary, the majority of the students liked the way digital badges were used in the teaching of Psychiatry; however, both groups (those who liked and those who disliked the badges) criticised their low weighting in the overall assessment.

      IV. DISCUSSION

      Students initially met the digital badges piloted in the teaching of Psychiatry at our university with apprehension. The majority of students had never heard of digital badges and had never earned one before this module. However, the data from this study of students’ perceptions of digital badges in medical education provided encouraging results.

      A. Sense of Reward

      Students found the badges rewarding, yet complained about their small assessment value. This reflected our design intention: we wanted to support and engage students rather than focus on extrinsic motivating factors such as a sense of reward.

      As mentioned above, the badge was awarded for completion of weekly tasks, and the acquisition of a badge functioned more as a method of feedback to students than as a means of grading. However, a number of students complained about this, reporting a sense of frustration and a lack of motivation to try harder when the assessment value of the badge was so low.

      Nevertheless, our design was supported by other studies. One such study concluded that achievement of badges could influence students’ behaviour even if they do not interfere with grading (Hakulinen & Auvinen, 2014). It seems that competitiveness was triggered in those who wanted to do better anyway.

      We were pleased to see learners’ comments about reduced stress during the module. We wanted our award system to foster a sense of safety around assessment, giving participants the freedom to learn from their mistakes without affecting their final grade. This is a well-recognised principle in gamification as a facilitator of students’ engagement (De Byl & Hooper, 2013).

      In addition, our design was guided by the finding that badges are best used to recognise learning that has already occurred, viewed more as an assessment tool providing feedback and possibly prompting self-reflection (Reid et al., 2015).

      B. Impact on Structure, Assessment and Feedback

      We were pleased to note that students in our study felt that digital badges provided direction and structure to their learning. This was also reflected in their comments: students mentioned how badges shaped the structure of their study, helping them to focus attention on important aspects of the seminars. These findings are consistent with a study reporting that students who enjoyed badges found them helpful in giving them the direction they needed for their work (Abramovich et al., 2013). These students also praised the alignment between badge topics and course content (Abramovich et al., 2013).

      We also hoped that badges, designed as part of an assessment that generates formative feedback, would help students know whether they were progressing enough to meet the requirements of their class (HASTAC, n.d.). Based on responses to items 24, 18 and 19, it appeared that students benefited somewhat from the guidance and feedback provided by the digital badges system.

      It is important to remember that students were asked about the badges at their review seminar, a few days prior to their exams. This timing could have influenced their answers. For example, students’ opinion was divided on the statement that earning a badge gave them a sense of having mastered the course content. Similarly, opinion was divided on whether badges increased students’ confidence that they could demonstrate their knowledge, although more students felt that they had an impact. It would be interesting to see whether students’ responses would have differed after their exams. Gamification has already been described as a powerful strategy that can help achieve learning objectives by affecting the way students behave (Huang & Soman, 2013).

      C. Impact on Motivation and Engagement

      In our study, a majority of students responded that they worked harder in this module compared with non-gamified modules. Similarly, about 30% more students stated that they were motivated to work harder through the module because they were earning digital badges. Interestingly, when given space for free comment, many noted that they did feel more motivated, and a few felt more engaged. Previous studies have also reported that students were more likely to engage in game-like tasks that provide rapid feedback (Thamvichai & Supanakorn-Davila, 2012). In other publications, students also considered gamified courses to be more motivating, interesting and easier to learn from compared with other courses (Barata et al., 2013; Dicheva et al., 2015; Hakulinen & Auvinen, 2014). It has been suggested that badges are most valued by learners who are extrinsically motivated and value external validation (Foli et al., 2016).

      D. Impact on Outcomes

      We did not compare overall assessment performance between students before and after the implementation of badges. We considered doing so but decided against it: we felt there were too many variables influencing students’ performance, and it would be difficult to definitively attribute any change to the implementation of digital badges.

      E. View of Badges and Learner Type

      The impact of an educational tool could also depend on the characteristics of the student as a learner. It is reported that students with high expectations for learning and those who value their learning tasks may view the badge as validating if it is designed as a performance assessment (having an impact on intrinsic motivation), but it may devalue their learning if it is viewed as an external reward (Reid et al., 2015). On the other hand, badges used as an assessment model can have a negative impact on students with low expectancy values (Reid et al., 2015). Another study concluded that engagement in the gamified classroom depended on students’ playfulness (De Byl & Hooper, 2013). In this study, we did not address learner types or other such characteristics of the individuals in our group of students.

      A systematic review of digital badges in health care education notes that digital badges represent an innovative approach to learning and assessment, and that evidence in the further education literature demonstrates that their use increases knowledge, retention and motivation to learn. However, the authors also report a lack of empirical research investigating digital badges within the health care education context (Noyes et al., 2020).

      F. Limitations

      Our study is limited by the lack of demographic data from all participants. This reduces the potential for comparison between genders and between undergraduate and postgraduate students. As mentioned above, we did not address learner characteristics and types (intrinsic versus extrinsic motivation, playfulness). The timing of the data collection (students completed questionnaires before their exams, rather than after) may also be a limiting factor. It is also important to remember that this study assessed only the subjective impact of the badges on students’ learning (via students’ opinions and experience) and did not include objective measures of students’ overall performance.

      V. CONCLUSION

      This study was performed at the start of the implementation of digital badges in the module, when it was the only module with elements of gamification in the whole undergraduate medical curriculum at our university. Like most changes to the assessment process, students greeted this with a level of apprehension. It would be interesting to see whether students’ opinions evolve after a few years of digital badges being integrated in the module and once other modules are using them. Nevertheless, our data show that our group of students felt that they benefited from the learning structure provided by the digital badges. The online process of obtaining the badges enabled tutors to provide timely feedback and monitor students’ progress. In addition, our findings are in keeping with the literature in that engagement and motivation were facilitated by introducing the digital badges; as such, they indicate that digital badges are a promising educational tool.

The use of digital badges in medical education is only beginning, and their judicious integration into higher education curricula would benefit from further research.

      Notes on Contributors

Dr Edyta Truszkowska conducted the literature search, collected and analysed the data, gave feedback on the methodology and the questionnaire developed, and wrote the paper.

Dr Yvonne Emmett designed the methodology and developed the questionnaire. She gave feedback on the data analysis and edited the writing of the paper.

Prof Allys Guerandel suggested the implementation of digital badges and the research project. She gave feedback on the data analysis and the writing of the paper, making revisions to same.

      All three authors have read and approved the final manuscript.

      Ethical Approval

Our project was exempted by our institution’s ethics committee, the Human Research Ethics Committee – Sciences (Exemption number LS-E-17-56).

      Data Availability

Data are available on reasonable request and are shared within the institution.

      Acknowledgement

We would like to acknowledge our institution’s psychiatry teaching team for their support in the implementation of the digital badges.

      Funding

      There are no sources of funding.

      Declaration of Interest

       There is no conflict of interest for any of the authors.

      References

      Abramovich, S. (2016). Understanding digital badges in higher education through assessment. On the Horizon, 24(1), 126-131. https://doi.org/10.1108/OTH-08-2015-0044

      Abramovich, S., Schunn, C., & Higashi, R. M. (2013). Are badges useful in education? It depends upon the type of badge and expertise of learner. Educational Technology Research and Development, 61(2), 217-232. https://doi.org/10.1007/s11423-013-9289-2

      Armour-Thomas, E., & Gordon, E. (2013). Toward an Understanding of Assessment as a Dynamic Component of Pedagogy. https://www.ets.org/Media/Research/pdf/armour_thomas_gordon_understanding_assessment.pdf

Barata, G., Gama, S., Jorge, J., & Gonçalves, D. (2013, September). Engaging engineering students with gamification. In 2013 5th International Conference on Games and Virtual Worlds for Serious Applications (VS-GAMES) (pp. 1-8). IEEE. https://doi.org/10.1109/VS-GAMES.2013.6624228

De Byl, P., & Hooper, J. (2013). Key attributes of engagement in a gamified learning environment. In ASCILITE-Australian Society for Computers in Learning in Tertiary Education Annual Conference, Australia (pp. 221-230). https://www.learntechlib.org/p/171232/

Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011, September). From game design elements to gamefulness: Defining “gamification”. In Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments (pp. 9-15). https://doi.org/10.1145/2181037.2181040

Dicheva, D., Dichev, C., Agre, G., & Angelova, G. (2015). Gamification in education: A systematic mapping study. Journal of Educational Technology & Society, 18(3), 75-88. https://www.jstor.org/stable/jeductechsoci.18.3.75

      Dowling-Hetherington, L., & Glowatz, M. (2017). The usefulness of digital badges in higher education: Exploring the student perspectives. Irish Journal of Academic Practice, 6(1), 1-28. https://researchrepository.ucd.ie/handle/10197/9691

      Foli, K. J., Karagory, P., & Kirby, K. (2016). An exploratory study of undergraduate nursing students’ perceptions of digital badges. Journal of Nursing Education, 55(11), 640-644. https://doi.org/10.3928/01484834-20161011-06

      Hakulinen, L., & Auvinen, T. (2014, April). The effect of gamification on students with different achievement goal orientations. In 2014 international conference on teaching and learning in computing and engineering, Malaysia (pp. 9-16). IEEE. https://doi.org/10.1109/LaTiCE.2014.10

      HASTAC. (n.d.). Digital Badges. http://www.hastac.org/digital-badges

      Hensiek, S., DeKorver, B. K., Harwood, C. J., Fish, J., O’Shea, K., & Towns, M. (2017). Digital badges in science: A novel approach to the assessment of student learning. Journal of College Science Teaching, 46(3), 28. https://www.proquest.com/scholarly-journals/digital-badges-science-novel-approach-assessment/docview/1854234735/se-2

Huang, W. H. Y., & Soman, D. (2013). Gamification of education. Report Series: Behavioural Economics in Action, 29, 11-12.

      Johnson, L. J., & LaMontagne, M. J. (1993). Research methods using content analysis to examine the verbal or written communication of stakeholders within early intervention. Journal of Early Intervention, 17(1), 73-79.

      Knapp, T. R. (1990). Treating ordinal scales as interval scales: An attempt to resolve the controversy. Nursing Research, 39(2), 121-123.

      Mandernach, B. J. (2015). Assessment of student engagement in higher education: A synthesis of literature and assessment tools. International Journal of Learning, Teaching and Educational Research, 12(2), 1-14.

      McGonigal, J. (2011). Reality is broken: Why games make us better and how they can change the world. Penguin.

Norman, G. (2010). Likert scales, levels of measurement and the “laws” of statistics. Advances in Health Sciences Education, 15(5), 625-632.

Noyes, J. A., Welch, P. A., Johnson, J. W., & Carbonneau, K. J. (2020). A systematic review of digital badges in health care education. Medical Education, 54(7), 600-615. https://doi.org/10.1111/medu.14060

      Przybylski, A. K., Rigby, C. S., & Ryan, R. M. (2010). A motivational model of video game engagement. Review of General Psychology, 14(2), 154. https://doi.org/10.1037/a0019440

      Quesenberry, A. C., Hemmeter, M. L., & Ostrosky, M. M. (2011). Addressing challenging behaviors in Head Start: A closer look at program policies and procedures. Topics in Early Childhood Special Education, 30(4), 209-220. https://doi.org/10.1177/0271121410371985

      Reid, A. J., Paster, D., & Abramovich, S. (2015). Digital Badges in undergraduate composition courses: effects on intrinsic motivation. Journal of Computers in Education, 2(4), 377-98. https://doi.org/10.1007/s40692-015-0042-1

      Rolfe, I., & McPherson, J. (1995). Formative assessment: How am I doing? The Lancet, 345(8953), 837-839. https://doi.org/10.1016/S0140-6736(95)92968-1

      Seaborn, K., & Fels, D. I. (2015). Gamification in theory and action: A survey. International Journal of Human Computer Studies, 1(74), 14-31. https://doi.org/10.1016/j.ijhcs.2014.09.006

Sullivan, G. M., & Artino, A. R. (2013). Analyzing and interpreting data from Likert-type scales. Journal of Graduate Medical Education, 5(4), 541-542. https://doi.org/10.4300/JGME-5-4-18

      Sundler, A. J., Lindberg, E., Nilssonn, C., & Palmer, L. (2019). Qualitative thematic analysis based on descriptive phenomenology. Nursing Open, 6(3), 733-739. https://doi.org/10.1002/nop2.275

      Thamvichai, R., & Supanakorn-Davila, S. (2012). A pilot study: Motivating students to engage in programming using game-like instruction. Proceedings of Active Learning in Engineering Education. St Cloud University. https://nms.asee.org/wp-content/uploads/sites/47/2020/02/St_Cloud_2012_Conference_Proceedings.pdf#page=18

      UCD Teaching and learning. (2017). UCD digital/open badges pilot 2016/2017. Implementation and evaluation report. UCD Teaching and Learning, 1-23. https://www.ucd.ie/t4cms/UCD%20Digital%20Badges%20Pilot%20Report.pdf

      Yildirim, S., Kaban, A., Yildirim, G., & Celik, E. (2016). The effect of digital badges specialization level of the subject on the achievement, satisfaction and motivation levels of the students. Turkish Online Journal of Educational Technology- TOJET, 15(3), 169-182. https://eric.ed.gov/?id=EJ1106420

      *Allys Guerandel
      University College Dublin,
      School of Medicine and Medical Sciences,
      Belfield, Dublin 4, Ireland D04V1W8.
      00353868590063
      Email: allys.guerandel@ucd.ie

      Submitted: 1 August 2022
      Accepted: 1 November 2022
      Published online: 4 April, TAPS 2023, 8(2), 36-46
      https://doi.org/10.29060/TAPS.2023-8-2/OA2855

      Marina C. Jenkins1, Caroline R. Paul2, Shobhina Chheda1 & Janice L. Hanson3

      1School of Medicine and Public Health, University of Wisconsin-Madison, United States; 2Langone Health, Grossman School of Medicine, New York University, United States; 3School of Medicine, Washington University in St. Louis, United States

      Abstract

      Introduction: Increases in vaccine hesitancy continue to threaten the landscape of public health. Literature provides recommendations for vaccine communication and highlights the importance of patient trust, yet few studies have examined medical student perspectives on vaccine hesitancy in clinical settings. Therefore, we aimed to explore medical student experiences encountering vaccine hesitancy, mistrust, and personal biases, with the goal of informing medical student education.

      Methods: A health disparities course including simulated clinical scenarios required students to complete a written reflection. We sorted reflections written in 2014-2016 to identify common topics and used inductive thematic analysis to identify themes relevant to vaccine hesitancy by group consensus.

      Results: Our sample included 84 de-identified essays sorted into three non-exclusive topics: vaccine hesitancy (n=42), mistrust (n=34), and personal bias (n=39). We identified four themes within medical students’ reflections: 1) Building a Relationship, including emphasis on patient-centred approaches; 2) Preparedness and Need to Prepare for Future Encounters, including highlighting gaps in medical education; 3) Reactions to Encountering Hesitant Patients, including frustration; 4) Insights for Providing Information and Developing a Plan with Hesitant Patients, including approaches to presenting knowledge. 

      Conclusion: Reflections in the context of simulated encounters and discussion are useful in students identifying their preparedness for vaccine discussion with patients. Student reflections can assist educators in identifying missing educational frameworks for particular scenarios such as vaccine hesitancy. Without a structured framework regarding addressing vaccine hesitancy, students draw upon other skills that may contradict recommended practices.

Keywords: Medical Education, Vaccine Hesitancy, Reflective Writing, Bias, Mistrust

      Practice Highlights

      • Reflective writing can be a useful tool in medical education toward addressing vaccine hesitancy.
      • Medical student reflective writing can be used to demonstrate curricular gaps.
      • Medical students expressed feeling unprepared to care for vaccine hesitant patients.
      • Without a framework for vaccine communication, students may draw on other inappropriate skills.

      I. INTRODUCTION

      Increases in vaccine hesitancy and refusal threaten public health (He et al., 2022; Hough-Telford et al., 2016; Kempe et al., 2020; Santibanez et al., 2020), especially with the COVID-19 pandemic introducing a need for quick and widespread uptake of a new vaccine (Hamel et al., 2022; Ognyanova et al., 2022). Patients, especially parents, are increasingly seeking alternative forms of health information, such as online sources that can include misinformation (Broniatowski et al., 2018; Hara & Sanfilippo, 2016; Jenkins & Moreno, 2020; Meleo-Erwin et al., 2017). Patient trust in their clinician and the health care system delivering the vaccine strongly influence vaccination decisions (Goldenberg, 2016; Kennedy et al., 2011; Larson, 2016). Trust remains the most important barrier to acceptance and uptake of the COVID-19 vaccine, with mistrust of government, medicine, and science presenting major barriers to vaccine uptake (Ognyanova et al., 2022). Vaccine hesitant patients may bring preconceptions and concerns from their own research to in-clinic vaccine communication. Thus, it is important for clinicians to be well-prepared to work with vaccine-hesitant patients and parents.

      Existing recommendations for clinicians encountering vaccine hesitancy emphasise centring patient views and voice instead of a medical, academic perspective (Holt et al., 2016; Koski et al., 2019). Approaches including motivational interviewing, presumptive language around vaccine recommendations, and persistent vaccine reminders without pressuring or dismissing patients have been shown to be effective in addressing vaccine hesitancy in medical practice (Dempsey et al., 2018; Gagneur et al., 2018; Hofstetter et al., 2017), while correcting misinformation and offering evidence to patients have been found to be counterproductive (Holt et al., 2016; Koski et al., 2019). These pre-COVID recommendations remain the same for addressing COVID-19 vaccine hesitancy, and lack of physician preparedness for encountering these patients is still an important issue (Centres for Disease Control and Prevention, 2021). Physicians may have misconceptions about patients’ reasons for vaccine hesitancy, often assuming lack of understanding or information on the safety, effectiveness, and necessity of vaccines (Hough-Telford et al., 2016), rather than recognising the more central roles of trust and validation of concerns. If physicians do not learn approaches for centring patient voices in vaccine communication, these pre-conceived biases may present a barrier to vaccine uptake and patient-physician trust.

      While valuable recommendations for addressing vaccine hesitancy in the clinical setting exist, current efforts center around informing practicing clinicians on these approaches and providing more educational resources to patients (Centres for Disease Control and Prevention, 2021). These may not represent a sufficient, long-term solution. Furthermore, resources available for healthcare workers may be inaccessible or overwhelming for physicians independently seeking tools (Karras et al., 2019). Incorporating vaccine hesitancy-centred curriculum into medical education may be the optimal, long-term solution to the lack of physician preparedness for these encounters, especially in the face of future pandemics and introduction of new vaccines. With curriculum renewal efforts incorporating early clinical experiences, students could encounter patients for whom vaccines are recommended, including vaccine hesitant patients, early in medical school. It would provide a better educational experience for students and a better health care experience for patients if students receive education to prepare them for these conversations. However, few studies have examined medical student perspectives on vaccine hesitancy in the clinical setting. Existing studies have found mixed findings around medical students’ reflections on their preparedness for encountering vaccine hesitant patients and highlight the need for expansion of related curriculum in medical education (Brown et al., 2017; Kernéis et al., 2017). While COVID vaccine hesitancy literature lacks exploration of medical student perspectives and preparedness, recent studies have highlighted an additional barrier of vaccine hesitancy among medical students in some settings (Lucia et al., 2021). These findings provide additional motivation for including vaccine hesitancy-specific curriculum in medical education.

      Understanding medical students’ reactions to vaccine hesitancy is critical in preparing students to address vaccine hesitancy while maintaining patient trust. In the present study, which used a scholarship-of-teaching approach, we aimed to expand on existing research on medical student preparedness for encountering vaccine hesitancy to examine written reflections on mistrust and personal bias in clinical encounters more broadly and use a larger sample of student narratives. We analysed students’ structured reflections regarding assigned reading, simulated patient encounters, peer discussions, and faculty-facilitated discussions to evaluate medical students’ learning during a health disparities curriculum. Structured reflection on simulated encounters has been shown to be a useful tool for understanding student perspectives (Koski et al., 2018); this approach can inform development of medical curriculum for addressing vaccine hesitancy and may be a useful teaching tool as well for students to practice, discuss, and reflect on their own biases in an educational setting. Therefore, the purpose of this study was to explore medical student reflections on encountering vaccine hesitancy, patient mistrust, and personal biases, with the goal of informing medical student education.

      II. METHODS

      In this qualitative study, we analysed written reflections from a third-year medical student Skills to Impact Health Disparities course, to evaluate their learning about interacting with vaccine-hesitant patients and parents. This study was determined to be exempt by the relevant institutional review boards, including a waiver of informed consent.

From 2006-2018, a medical school at a U.S. Midwestern university required a one-day core session with the goal of developing learner skills to impact health disparities. Small groups of approximately six students went through five to six standardized patient scenarios, each designed to generate discussion and reflection about clinician bias that can unintentionally influence patient care. During the learning activity, each student spent 3-5 minutes interacting with a standardized patient who presented a challenge designed to provoke a level of discomfort in the learners to allow for discussion and reflection. One of these six scenarios included a parent with a history of vaccine refusal for their child expressing concerns about a recommended vaccine.

      Following each case, students engaged in a 15-minute, non-facilitated discussion based on a list of focused questions. After all cases, students joined another group of six students for a 75-minute faculty-facilitated debrief. In addition, students were required to complete a brief critical reflection based on a theme of the core day activity using the LeAP framework (Aronson et al., 2012). This framework is modelled on a clinical framework, the SOAP note (Chief complaint, Subjective, Objective, Assessment, and Plan). Students were asked to consider a specific experience that led to concern or questions; describe the experience as fully as possible; reconsider the experience by getting other perspectives; synthesize learning; and make a plan to address future similar challenges. Students could choose to reflect on simulated or real clinical experiences.

      Written reflective essays were available for analysis from years 2014-2016, providing qualitative data about students’ observations and experiences with health disparities and health equity. All available essays (n=292) from 2014, 2015, and 2016 that were submitted as a course requirement for the Skills to Impact Health Disparities Core Day required course were de-identified and organized by year.

To ascertain the topics that the students addressed, three investigators (two involved in this study and one from another study using the larger set of all essays) read all essays. Each investigator then designated each essay to a topic from a jointly-developed list of non-exclusive topics derived from the data. After individually assigning topics for a sample of essays, the investigators met to compare their sorting and reconcile any differences before they went on to sort through another set of essays. This process continued until all essays were assigned to one or more topics. Most topic labels matched topics of the simulated scenarios that the students encountered in the course, while others related to broader issues highlighted across scenarios. With the goal of selecting reflections relevant to the issue of vaccine hesitancy, all reflections designated under the topics of vaccine hesitancy, mistrust, and personal bias were gathered for qualitative data analysis. Literature review and initial reading of the essays suggested that essays on encountering mistrust and bias related to students’ experiences with vaccine hesitant patients, even though not all of these essays related directly to vaccine hesitancy. Each essay was assigned an identifier with cohort year and an essay number. Individual essays were excluded based on group consensus on lack of relevance to vaccine hesitancy.

Inductive thematic analysis was used to identify codes and themes in the reflection data, using a semantic, realist approach to capture students’ explicit reactions grounded in clinical experiences and to identify themes that could be directly applied to clinical practice (Braun & Clarke, 2006). Four investigators, including two involved in topic assignment (CRP, SC) and two additional investigators (MCJ, JLH), read and discussed six essays to develop a preliminary codebook, applied these codes to the same six essays, then met to discuss and revise the codebook. Subsequently, investigators coded the remaining essays in pairs using the revised codebook through four rounds of coding, making further iterative changes to the codebook and reconciling differences within pairs. The full team then met to discuss the coding, revise code descriptions, refine the grouping of the codes, and agree on descriptions of the groups. Any changes made to the codebook during the analysis process were retrospectively updated in all previous coding, so that all coding data reflected the final version of the codebook. Data were organized with qualitative analysis software (HyperResearch version 4.5.4). After all data were coded, investigators discussed and reached consensus on the themes.

      III. RESULTS

A total of 90 reflections were collected from the Skills to Impact Health Disparities course across three cohorts of third-year medical students from 2014-2016 at one U.S. Midwestern university. Based on investigator consensus on lack of content relevance, six reflections were excluded from our study sample. Our final study sample included 84 de-identified reflections across three non-exclusive topics: 42 categorized as relating to vaccine hesitancy, 34 as mistrust, and 39 as personal bias. We identified four major themes in medical students’ reflections on encountering vaccine hesitancy, mistrust and personal bias: 1) Building a Relationship, 2) Preparedness and Need to Prepare for Future Encounters, 3) Reactions to Encountering Hesitant Patients, and 4) Insights for Providing Information and Developing a Plan with Hesitant Patients. Representative quotes for each theme can be found in Table 1. Supplemental Table 1 lists each theme with the codes that informed the theme.

      A. Building a Relationship

      In our first theme, medical students recognized the importance of Building a Relationship with hesitant parents or patients as the foundation for discussions about vaccines or other care about which patients expressed hesitance. They focused on approaches such as building rapport, centring the parent/patient’s views during the discussion, acknowledging their efforts to gather information about their health decisions, expressing empathy, and avoiding direct confrontation of the patient’s viewpoint during the discussion. Many of these observations occurred during the core day experience. For example, one student wrote:

      “I learned the importance of letting the patient try to teach the doctor what they know rather than the doctor jumping in and lecturing to the patient. In the future I will try to talk less and let the patient explain more about why they oppose vaccinations to better gauge what they understand about the literature before I try to explain why vaccinations are important and the facts about vaccinations.”

      [Year3_61]

      The students saw the importance of finding points of commonality between their perspectives and those of the patient and moving the conversation toward establishing goals that they could work together with the patient to accomplish.

      One student described, “I learned that a big part of approaching this difficult conversation is establishing the correct approach: common goal, shared decision making.”

      [Year3_65]

      B. Preparedness and Need to Prepare for Future Encounters

      Another major theme identified in medical student reflections on encountering hesitant patients was Preparedness and Need to Prepare for Future Encounters. This theme included discussion of whether the student expressed feeling ready for the encounter or whether they thought it was successful, as well as specific plans for preparing for similar encounters in the future. One way that students discussed their own feelings of preparedness was by recognizing their own biases upon reflection of the encounter. For example, one student wrote:

“I realized my own prejudices influenced my care of my patients more than I would have liked. … It was an eye opener that I am not as impartial as I would like to be and that it takes a lot more self-reflection and awareness to be the best care provider I can be.”

      [Year3_16] 

      When discussing a need to prepare for future encounters, many students referenced plans to independently seek additional resources, especially those referenced by patients in encounters.

Other students mentioned plans to practice patient interactions related to the encounter they reflected on, including: “For me, practicing acknowledging a patient’s views and concerns without endorsing or validating false information is paramount.”

      [Year1_07]

      Some students also referenced plans to request feedback or advice from more senior clinicians. Additionally, several students identified gaps in their medical school curriculum that contributed to their lack of preparedness or that needed to be filled to support future preparedness. Students specifically referred to needing more resources, support, and training for encountering hesitant patients. They sometimes called for system-wide changes to address this gap in knowledge.

      C. Reactions to Encountering Hesitant Patients

One of the themes identified in the students’ self-reflection related to their own and others’ Reactions to Encountering Hesitant Patients. Some students expressed frustration with patients and parents who were hesitant about vaccines, acknowledging that although they can be passionate about the topic of vaccines in their patient care, ultimately patients and parents make their own decisions.

      One student shared, “I have always found it quite distressing when an otherwise healthy child goes unvaccinated, given the enormous amount of evidence in favour of vaccination efficacy and its effect on public health.”

      [Year2_86]

      Another student shared, “I knew I could not force the patient, and I knew that she ultimately was in control of what she would do.”

      [Year3_78]

In some reflections, patients and parents were labelled, for example, as “anti-vaxxers.” Some reflections described parents’ and patients’ bias towards the physician or a clearly expressed desire for a different doctor. In encountering standardized patients in our scenarios or in reflecting on patients seen in clinical settings, students acknowledged that these conversations were difficult, and they were able to self-assess their level of comfort with such conversations.

      This was well-summarized in one reflection: “It was remarkable to me how such a strong reaction from this patient’s mother elicited an equally strong reaction in me.”

      [Year2_34]


      At times students recognized a point where these difficult conversations could reach a dead end. One student stated, “No matter how hard I would try, nothing seemed to work.”

      [Year2_03]

      Especially in this context, students reflected ambivalence towards the patient’s decision. For example:

      “I personally feel that providers allowing for healthy children on their patient panels to remain unvaccinated indirectly reinforces non-vaccination as being acceptable by the medical establishment. That said, I also see and appreciate that turning a child away from one’s practice because their parents refuse to vaccinate them not only does not solve the problem at hand, but it also leaves a child at a very critical developmental age with no health care at all until an alternative provider can be found. Ultimately, I found attempting to reconcile these seemingly incompatible sides of the issue of dealing with anti-vaccination quite confusing and uncomfortable.”

      [Year2_86]

      D. Insights for Providing Information and Developing a Plan with Hesitant Patients

      A fourth theme centred on students’ insights regarding how to provide information appropriately to patients and how to create a plan with patients who were hesitant regarding the medical recommendations given to them. Medical students suggested a variety of ways to provide information to patients who were hesitant. They noted the importance of contributing relevant facts and evidence, stressing that such information and knowledge in general needed to be presented in an understandable manner.

      As one student described, “Finding the appropriate words to use in such conversations with a patient is essential.”

      [Year1_44]

      Students often wrote that they needed to provide reputable information to inform the patient’s decision-making. Some suggested strategies for how to present information to patients, including the sharing of stories and the use of scary information to convey the level of seriousness of the medical recommendation and advice.

      One student referenced storytelling in the literature, “…the use of storytelling, the same method used by the anti-vaccination movement, [can be] a way to counteract the barrage of misinformation regarding vaccines.”

      [Year1_90]

      Sharing these insights about how to present information, students also moved towards how to develop a plan with their patients with some deliberate suggestions. Some students felt they needed to be persistent in their recommendations for vaccines. Some students explained how intentional discussions on the risks and benefits of their recommendations can help in their negotiation about a care plan with their patients.

      One student noted, “This draws along the line of patient autonomy, and as long as we are clear about the risks and benefits with the patient, then ultimately, it’s up to the patient to make the decision about which medications she will take.”

      [Year1_52]

      Medical students’ experiences with vaccine hesitancy, mistrust, and bias: themes and exemplar quotes

      Building a relationship

      “I felt it was most important that I listen to his story as much as I possibly could, before I spoke. So I let him talk. I said, ‘tell me your concerns.’” [Year3_18]

      “My feelings during this situation were somewhat of frustration but more of just desire for the patient to feel as though I was there to care for her child above all else and to come alongside her rather than combat with her.” [Year3_03]

      “One suggestion that my classmate said was to start out the conversation by validating how they are feeling more and that you understand that they are a good parent rather than jumping into facts about vaccinations which caused the patient to become defensive.” [Year3_61]

      Preparedness and need to prepare for future encounters

      “I need more tools for dealing with these situations in the future.” [Year1_04]

      “My plan is to educate myself more on the materials available for parents regarding immunizations.” [Year3_03]

      “Ultimately it would be nice to see EMRs advance to the point where they can track a patient’s problem, not just on a list, but through stages of management and onto completion, with a provider responsible for follow-up.” [Year2_33]

      “I will seek feedback from my attendings and residents so that I can improve my motivational interviewing skills.” [Year3_81]

      Reactions to encountering hesitant patients

      “Ultimately this is a decision of the parent and I can only offer my professional advice…I learned that this topic did elicit some emotion which I was surprised about.” [Year3_79]

      “I learned that I need to work on my bluntness (what I consider to be honesty), as well as increasing affirmation of patients’ fears, since telling someone they are wrong (in any facet of life) typically doesn’t work out that well.” [Year2_34]

      “I felt uncomfortable and offended at times during the conversation. The patient clearly was not interested in negotiating vaccination, and when I tried to discuss the validity of some of the studies and articles she had read, she became very defensive.” [Year2_82]

      “I dealt with a mother who had embraced the anti-vaccination movement. This is an issue that I have thought about a lot but despite my reflections, it is an issue that I do not know how to address well. This filled me with fear because I honestly didn’t know what the best approach was.” [Year1_90]

      Insights for providing information and creating a plan with hesitant patients

      “From the debriefing session I learned that a promising approach for the anti-vaccine population is to continue to offer the vaccines at each well-child check-up without intensive counsel on the risks/benefits of vaccines.” [Year1_13]

      “I also learned about using pictures to get a visceral response from the parent which hopefully would change their mind about not getting a vaccine.” [Year3_69]

      “When I encounter this scenario in the future, as I’m sure I will, I will begin by teasing out whether the patient is interested in more information, in which case I can have resources and studies available, or if they have already made up their mind and at that point I need to negotiate the visit to ensure that they continue to see me for whatever care they are willing to receive, even if that doesn’t include all the preventive measures I would like.” [Year2_82]

      Table 1. Medical students’ experiences with vaccine hesitancy, mistrust, and bias—Themes and exemplar quotes

      IV. DISCUSSION

      In this qualitative study of a curricular activity designed to build medical students’ skills for interacting with patients to reduce health disparities, we explored medical students’ reflections on real and simulated patient care encounters related to vaccine hesitancy, mistrust and personal bias, with the overall goal of informing medical student education. This allowed us to evaluate the utility of this curriculum framework and to highlight gaps in the medical curriculum around addressing vaccine hesitancy. Our analysis indicates that medical student reflections across the areas of vaccine hesitancy, mistrust and personal bias share a thematic structure, with implications for curriculum on encounters with patients who resist medical advice and recommendations for teaching communication with patients and parents who express hesitancy about vaccines.

      This study highlights the benefits of reflection on simulated clinical encounters in the context of a Skills to Impact Health Disparities course. Reflections on simulated encounters, together with discussion, were successful in encouraging students to assess their preparedness for vaccine discussions with patients. Review of written reflections, like those analysed in this study, can help educators identify missing educational frameworks for particular patient care scenarios such as vaccine hesitancy. While efforts are growing to incorporate vaccine hesitancy content into medical curricula, especially in response to the COVID-19 pandemic (Kelekar et al., 2022; Onello et al., 2020; Real et al., 2017; Schnaith et al., 2018), there has been little focus on recommending or evaluating these efforts on a large scale in the U.S. However, recent efforts to establish innovative curricula of this kind have shown them to be feasible and effective for improving medical student preparedness in addressing vaccine hesitancy (Kelekar et al., 2022; Onello et al., 2020; Real et al., 2017; Schnaith et al., 2018). The curriculum structure assessed in this study may offer a strong approach for teaching students valuable lessons related to vaccine hesitancy and for evaluating progress in this area.

      Findings from this study also highlight gaps in existing medical curriculum for preparing students to encounter hesitant patients. We found that without a structured and deliberate learning framework for addressing vaccine hesitancy, students will draw upon other skills that may not be appropriate and may be counterproductive. Students in this study often expressed feeling unprepared, aligning with prior studies (Brown et al., 2017; Kernéis et al., 2017). However, we found that using a structured framework for reflection encouraged planning future preparation for similar encounters. This included calling for system-wide changes to curriculum and availability of resources. Additionally, discussion with peers and reflection were cited as helping students to feel more prepared for future encounters with hesitant patients.

      While discussion with peers was widely recognised as a helpful learning strategy, the outcomes of these discussions varied greatly and were directly related to each student’s overall reflection and plan for future preparation. This sometimes led to misguided solutions, highlighting the need to align education and training around such encounters with evidence-informed recommendations. Many students referenced an approach of centring patient views, either during the clinical encounter or after peer discussion and reflection, which aligns with recommendations (Centres for Disease Control and Prevention, 2021; Holt et al., 2016; Jarrett et al., 2015; Koski et al., 2019). However, many others referenced using only facts to correct knowledge, which is advised against in the vaccine hesitancy literature (Holt et al., 2016; Koski et al., 2019). Within these reflections alone, there was no space for students who came to misguided conclusions about approaching vaccine hesitancy to have that knowledge corrected based on recommended practices. Additional support and curriculum around vaccine hesitancy should therefore be implemented alongside this framework of practice, peer discussion and reflection.

      Previous research has shown that written reflections provide an effective tool for students to acknowledge their biases and their potential impact on patient care, as was seen in this study (Ross & Lypson, 2014). Physician biases related to perceptions of patient education, lifestyle, and identity have been documented and found to affect patient care and rapport (Forhan & Salas, 2013; Franz et al., 2021; Verbrugge & Steiner, 1981; Walls et al., 2015). There are also concerns about physicians dismissing vaccine-hesitant patients from their care and about physicians’ belief that patient hesitancy stems from a lack of reliable information (Hough-Telford et al., 2016). Physician frustration may reduce willingness to bridge communication with hesitant patients; this has been seen even at the student level, both in this study and in previous research (Koski et al., 2018). Preparing students for these types of encounters by promoting reflection on frustrations and biases is important for addressing vaccine hesitancy.

      Limitations of this study include that data were collected from a single institution. However, detailed, written reflections allowed for in-depth thematic analysis that may transfer to medical students more broadly. Additionally, reflections were from a course required for all medical students at the institution from cohorts over three years. Students’ reflections were written in 2014-2016, prior to the COVID-19 pandemic. However, vaccine hesitancy is an even more relevant topic now and reasons for vaccine hesitancy as well as strategies for addressing it are largely unchanged (Centres for Disease Control and Prevention, 2021). Indeed, vaccine hesitancy to the COVID-19 vaccine highlights the need for deliberate curricular efforts. Another limitation is that our sample only includes students who chose to discuss vaccine hesitancy, mistrust and bias in their reflections. However, this allowed us to analyse a fairly large sample of student reflections for a qualitative study, aiding in robust thematic saturation and providing insights that are relevant beyond vaccine hesitancy cases.

      V. CONCLUSION

      There are several meaningful implications of this study for medical education. Our findings illustrate the benefits of learner reflection for building insight into communication and relationship-building to address vaccine hesitancy. Students found encounters with vaccine-hesitant patients challenging, in part due to lack of preparedness, highlighting a curricular gap. Findings demonstrate varied familiarity with existing recommendations for addressing vaccine hesitancy, emphasising the need to incorporate training that targets specific skills gaps, such as communication, into the medical curriculum. By focusing on mistrust and personal bias beyond vaccine hesitancy-specific cases, medical curricula can better prepare students to approach these underlying issues with vaccine-hesitant patients, and with patients expressing hesitancy towards other medical recommendations, in their future clinical practice. Finally, comprehensive efforts to improve learners’ preparedness are needed in the current climate of medical mistrust, given the prominence of vaccine hesitancy not just in paediatrics but throughout clinical care in the context of the COVID-19 pandemic. To improve vaccine confidence and decrease mistrust in the physician-patient relationship, medical educators must address medical student preparedness for encounters with vaccine-hesitant patients and parents through intentional learning strategies incorporated into the medical school curriculum. We recommend that medical schools explore incorporating simulated patient encounters or role-play scenarios with structured reflection and discussion activities, alongside didactic curriculum on evidence-based vaccine communication strategies, as research continues to evaluate best practices for preparing medical students to encounter vaccine hesitancy.

      Notes on Contributors

      Marina C. Jenkins BA was involved in the conceptual development of this qualitative analysis; analysis of reflective writings for development of themes; writing of introduction, results, methods and discussion and editing all sections and final approval of the manuscript.

      Caroline R. Paul MD was involved in the original curriculum, the original sorting process of student reflective writing; the conceptual development of this qualitative analysis; analysis of reflective writings for development of themes; writing of results section and editing of all sections and final approval of the manuscript.

      Shobhina Chheda MD MPH was involved in the original curriculum, the original sorting process of student reflective writing; analysis of reflective writings for development of themes; writing of results section and editing of all sections and final approval of the manuscript.

      Janice L. Hanson PhD EdS MH was lead in the conceptual development of this qualitative analysis and organization of qualitative data; analysis of reflective writing; writing of results; writing of methods; and primary mentor to first author on writing of introduction and discussion; editing of all sections and final approval.

      Ethical Approval

      This study received exemption status from the Institutional Review Boards of the University of Wisconsin-Madison and Washington University in St. Louis.

      Data Availability

      We do not have IRB permission to share our data in a data repository. The data are essays written by medical students during a required university course. While the essays are de-identified, it could be possible for someone who wrote an essay or participated in discussion groups with those who wrote the essays to identify an individual who wrote an essay.

      Acknowledgement

      We would like to acknowledge Andrea Maser, MS for her assistance in de-identifying student reflections and organization of student reflections from various student cohorts.

      We would like to acknowledge Roberta Rusch, MPH for assistance in the original sorting of student reflections.

      Funding

      There is no funding source for this study.

      Declaration of Interest

      The authors have no conflicts of interest to disclose.

      References

      Aronson, L., Kruidering, M., Niehaus, B., & O’Sullivan, P. (2012). UCSF LEaP (Learning from your experiences as a professional): guidelines for critical reflection. MedEdPORTAL, 8, 9073. https://doi.org/10.15766/mep_2374-8265.9073

      Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. https://doi.org/10.1191/1478088706qp063oa

      Broniatowski, D. A., Jamison, A. M., Qi, S. H., AlKulaib, L., Chen, T., Benton, A., Quinn, S. C., & Dredze, M. (2018). Weaponized health communication: Twitter bots and russian trolls amplify the vaccine debate. American Journal of Public Health, 108(10), 1378-1384. https://doi.org/10.2105/ajph.2018.304567

      Brown, A. E. C., Suryadevara, M., Welch, T. R., & Botash, A. S. (2017). “Being persistent without being pushy”: Student reflections on vaccine hesitancy. Narrative Inquiry in Bioethics, 7(1), 59-70. https://doi.org/10.1353/nib.2017.0018

      Centres for Disease Control and Prevention. (2021). COVID-19 vaccination field guide: 12 strategies for your community. United States Department of Health and Human Services. https://www.cdc.gov/vaccines/covid-19/downloads/vaccination-strategies.pdf.

      Dempsey, A. F., Pyrznawoski, J., Lockhart, S., Barnard, J., Campagna, E. J., Garrett, K., Fisher, A., Dickinson, L. M., & O’Leary, S. T. (2018). Effect of a health care professional communication training intervention on adolescent human papillomavirus vaccination: A cluster randomized clinical trial. JAMA Pediatrics, 172(5), e180016. https://doi.org/10.1001/jamapediatrics.2018.0016

      Forhan, M., & Salas, X. R. (2013). Inequities in healthcare: A review of bias and discrimination in obesity treatment. Canadian Journal of Diabetes, 37(3), 205-209. https://doi.org/10.1016/j.jcjd.2013.03.362

      Franz, B., Dhanani, L. Y., & Miller, W. C. (2021). Rural-urban differences in physician bias toward patients with opioid use disorder. Psychiatric services, 72(8), 874-879. https://doi.org/10.1176/appi.ps.202000529

      Gagneur, A., Gosselin, V., & Dubé, È. (2018). Motivational interviewing: A promising tool to address vaccine hesitancy. Vaccine, 36(44), 6553-6555. https://doi.org/10.1016/j.vaccine.2017.10.049

      Goldenberg, M. J. (2016). Public misunderstanding of science? Reframing the problem of vaccine hesitancy. Perspectives on Science, 24(5), 552-581. https://doi.org/10.1162/POSC_a_00223

      Hamel, L., Sparks, G., Lopes, L., Stokes, M., & Brodie, M. (2022). KFF COVID-19 Vaccine Monitor: January 2022 parents and kids update. Kaiser Family Foundation. https://www.kff.org/coronavirus-covid-19/poll-finding/kff-covid-19-vaccine-monitor-january-2022-parents-and-kids-update/

      Hara, N., & Sanfilippo, M. R. (2016). Co-constructing controversy: Content analysis of collaborative knowledge negotiation in online communities. Information Communication & Society, 19(11), 1587-1604. https://doi.org/10.1080/1369118x.2016.1142595

      He, K., Mack, W. J., Neely, M., Lewis, L., & Anand, V. (2022). Parental perspectives on immunizations: Impact of the COVID-19 pandemic on childhood vaccine hesitancy. Journal of Community Health, 47(1), 39-52. https://doi.org/10.1007/s10900-021-01017-9

      Hofstetter, A. M., Robinson, J. D., Lepere, K., Cunningham, M., Etsekson, N., & Opel, D. J. (2017). Clinician-parent discussions about influenza vaccination of children and their association with vaccine acceptance. Vaccine, 35(20), 2709-2715. https://doi.org/10.1016/j.vaccine.2017.03.077

      Holt, D., Bouder, F., Elemuwa, C., Gaedicke, G., Khamesipour, A., Kisler, B., Kochhar, S., Kutalek, R., Maurer, W., Obermeier, P., & Seeber, L. (2016). The importance of the patient voice in vaccination and vaccine safety—Are we listening? Clinical Microbiology and Infection, 22, S146-S153. https://doi.org/10.1016/j.cmi.2016.09.027

      Hough-Telford, C., Kimberlin, D. W., Aban, I., Hitchcock, W. P., Almquist, J., Kratz, R., & O’Connor, K. G. (2016). Vaccine delays, refusals, and patient dismissals: A survey of pediatricians. Pediatrics, 138(3), Article e20162127. https://doi.org/10.1542/peds.2016-2127

      Jarrett, C., Wilson, R., O’Leary, M., Eckersberger, E., & Larson, H. J. (2015). Strategies for addressing vaccine hesitancy–A systematic review. Vaccine, 33(34), 4180-4190. https://doi.org/10.1016/j.vaccine.2015.04.040

      Jenkins, M. C., & Moreno, M. A. (2020). Vaccination discussion among parents on social media: A content analysis of comments on parenting blogs. Journal of Health Communication, 25(3), 232-242. https://doi.org/10.1080/10810730.2020.1737761

      Karras, J., Dubé, E., Danchin, M., Kaufman, J., & Seale, H. (2019). A scoping review examining the availability of dialogue-based resources to support healthcare providers engagement with vaccine hesitant individuals. Vaccine, 37(44), 6594-6600. https://doi.org/10.1016/j.vaccine.2019.09.039

      Kelekar, A., Rubino, I., Kavanagh, M., Lewis-Bedz, R., LeClerc, G., Pedell, L., & Afonso, N. (2022). Vaccine hesitancy counseling—an educational intervention to teach a critical skill to preclinical medical students. Medical Science Educator, 32(1), 141-147. https://doi.org/10.1007/s40670-021-01495-5

      Kempe, A., Saville, A. W., Albertin, C., Zimet, G., Breck, A., Helmkamp, L., Vangala, S., Dickinson, L. M., Rand, C., & Humiston, S. (2020). Parental hesitancy about routine childhood and influenza vaccinations: A national survey. Pediatrics, 146(1). https://doi.org/10.1542/peds.2019-3852

      Kennedy, A., LaVail, K., Nowak, G., Basket, M., & Landry, S. (2011). Confidence about vaccines in the United States: Understanding parents’ perceptions. Health Affairs, 30(6), 1151-1159. https://doi.org/10.1377/hlthaff.2011.0396

      Kernéis, S., Jacquet, C., Bannay, A., May, T., Launay, O., Verger, P., Pulcini, C., Abgueguen, P., Ansart, S., Bani-Sadr, F., Bannay, A., Bernard, L., Botelho-Nevers, E., Boutoille, D., Cassir, N., Cazanave, C., Demonchy, E., Epaulard, O., Etienne, M., & Wyplosz, B. (2017). Vaccine education of medical students: A nationwide cross-sectional survey. American Journal of Preventive Medicine, 53(3), e97-e104. https://doi.org/10.1016/j.amepre.2017.01.014

      Koski, K., Lehto, J. T., & Hakkarainen, K. (2018). Simulated encounters with vaccine-hesitant parents: Arts-based video scenario and a writing exercise. Journal of Medical Education and Curricular Development, 5, 2382120518790257. https://doi.org/10.1177/2382120518790257

      Koski, K., Lehto, J. T., & Hakkarainen, K. (2019). Physician self-disclosure and vaccine-critical parents’ trust: Preparing medical students for parents’ difficult questions. Health Professions Education, 5(3), 253-258. https://doi.org/10.1016/j.hpe.2018.09.005

      Larson, H. J. (2016). Vaccine trust and the limits of information. Science, 353(6305), 1207-1208. https://doi.org/10.1126/science.aah6190

      Lucia, V. C., Kelekar, A., & Afonso, N. M. (2021). COVID-19 vaccine hesitancy among medical students. Journal of Public Health, 43(3), 445-449. https://doi.org/10.1093/pubmed/fdaa230

      Meleo-Erwin, Z., Basch, C., MacLean, S. A., Scheibner, C., & Cadorett, V. (2017). “To each his own”: Discussions of vaccine decision-making in top parenting blogs. Human Vaccines & Immunotherapeutics, 13(8), 1895-1901. https://doi.org/10.1080/21645515.2017.1321182

      Ognyanova, K., Lazer, D., Baum, M., Perlis, R. H., Druckman, J., Santillana, M., Qu, H., Trujillo, K. L., Safarpour, A., Uslu, A., Quintana, A., Green, J., Pippert, C. H., & Shere, A. (2022). The COVID States Project #82: COVID-19 vaccine misinformation trends, awareness of expert consensus, and trust in social institutions. Open Science Framework. https://doi.org/10.31219/osf.io/9ua2x

      Onello, E., Friedrichsen, S., Krafts, K., Simmons, G., Jr., & Diebel, K. (2020). First year allopathic medical student attitudes about vaccination and vaccine hesitancy. Vaccine, 38(4), 808-814. https://doi.org/10.1016/j.vaccine.2019.10.094

      Real, F. J., DeBlasio, D., Beck, A. F., Ollberding, N. J., Davis, D., Cruse, B., Samaan, Z., McLinden, D., & Klein, M. D. (2017). A virtual reality curriculum for pediatric residents decreases rates of influenza vaccine refusal. Academic Pediatrics, 17(4), 431-435. https://doi.org/10.1016/j.acap.2017.01.01

      Ross, P. T., & Lypson, M. L. (2014). Using artistic-narrative to stimulate reflection on physician bias. Teaching and Learning in Medicine, 26(4), 344-349. https://doi.org/10.1080/10401334.2014.945032

      Santibanez, T. A., Nguyen, K. H., Greby, S. M., Fisher, A., Scanlon, P., Bhatt, A., Srivastav, A., & Singleton, J. A. (2020). Parental vaccine hesitancy and childhood influenza vaccination. Pediatrics, 146(6), Article e2020007609. https://doi.org/10.1542/peds.2020-007609

      Schnaith, A. M., Evans, E. M., Vogt, C., Tinsay, A. M., Schmidt, T. E., Tessier, K. M., & Erickson, B. K. (2018). An innovative medical school curriculum to address human papillomavirus vaccine hesitancy. Vaccine, 36(26), 3830-3835. https://doi.org/10.1016/j.vaccine.2018.05.014

      Verbrugge, L. M., & Steiner, R. P. (1981). Physician treatment of men and women patients: Sex bias or appropriate care? Medical Care, 19(6), 609-632. https://doi.org/10.1097/00005650-198106000-00005

      Walls, M. L., Gonzalez, J., Gladney, T., & Onello, E. (2015). Unconscious biases: Racial microaggressions in American Indian health care. The Journal of the American Board of Family Medicine, 28(2), 231-239. https://doi.org/10.3122/jabfm.2015.02.140194

      *Marina C. Jenkins
      Department of Paediatrics
      University of Wisconsin-Madison
      2870 University Ave., Suite 200
      Madison, WI 53703
      Email address: mcjenkins@wisc.edu

      Submitted: 11 March 2022
      Accepted: 28 June 2022
      Published online: 4 April, TAPS 2023, 8(2), 14-35
      https://doi.org/10.29060/TAPS.2023-8-2/OA2762

      Sayaka Oikawa1, Ruri Ashida2 & Satoshi Takeda3

      1Center for Medical Education and Career Development, Fukushima Medical University, Fukushima, Japan; 2Center for International Education and Research, Tokyo Medical University, Tokyo, Japan; 3Department of Emergency Medicine, The Jikei University School of Medicine, Tokyo, Japan

      Abstract

      Introduction: There are various difficulties in treating foreign patients; however, existing educational programs remain insufficient for addressing this issue. The purpose of this study was to investigate the difficulties encountered in the treatment of foreigners in emergency departments and to create scenarios for simulation-based education using real-life cases.

      Methods: A cross-sectional anonymous survey of 457 emergency departments was conducted in 2018. Additionally, we conducted a survey of 46 foreign residents who had visited hospitals for treatment in Japan. The data were analysed quantitatively, and the narrative responses were analysed thematically.

      Results: Of the 141 hospitals that responded (response rate: 30.9%), 136 (96.5%) answered that they had treated foreign patients. There were 51 cases with cultural difficulties and 66 cases with linguistic difficulties. In the qualitative analysis, different ideas/beliefs towards treatments or examinations (51.0%) and communication with non-English-speaking patients (65.2%) were the most common categories among the cases with cultural and linguistic difficulties, respectively. In the survey of 46 foreign residents on surprising aspects of Japanese healthcare, 14% mentioned differences in treatment plans between their own country and Japan, and 12% each mentioned a lack of explanation by medical staff and a lack of privacy in the examination room. Based on the survey results, we created 2 simulation scenarios.

      Conclusions: Scenarios of simulation-based education using real-life cases may be effective materials for cultivating cultural awareness of medical staff.

      Keywords: Cultural Awareness, Cultural Humility, Emergency Department, Foreign Patients, Simulation-based Education

      I. INTRODUCTION

      According to the Japan Tourism Agency (JTA), the number of foreign visitors to Japan had been increasing every year amid recent rapid globalisation (Japan Tourism Agency, 2021). Although visitor numbers are currently declining due to the COVID-19 pandemic, a survey of foreign visitors to Japan conducted by the JTA in 2018 revealed that 5% of 3,000 visitors had suffered injuries or illnesses while visiting Japan (Japan Tourism Agency, 2019). When visiting a medical institution in an unfamiliar country, patients experience anxiety due to language and cultural differences. Various measures are being taken around the world to prevent patients with different cultural backgrounds from being disadvantaged in medical care (NHS England, 2016; Office of Disease Prevention and Health Promotion, 2021), such as training medical staff to recognise factors impeding cultural awareness (Hobgood et al., 2006).

      By their nature, emergency departments (EDs) require prompt treatment. Previous reports showed that 84 of 97 EDs in Japan had difficulties in treating foreign patients (Kubo et al., 2014) and that medical staff faced complex cultural and social problems with foreign patients (Osegawa et al., 2002). According to reports of the Japanese government, healthcare institutions in Japan organise English conversation training or lectures on cultural differences by foreign lecturers to help medical staff improve the treatment of foreign patients (Japan Ministry of Economy, Trade and Industry, 2019; Japan Ministry of Health, Labour and Welfare, 2021). However, training for cultivating cultural awareness among the medical staff who care for foreign patients remains insufficient (Osegawa et al., 2002; Serizawa, 2007).

      Simulation-based education (SBE) is a practical learning method which enables mastery learning (Kelly et al., 2018; Motola et al., 2013), and in Japan, English-speaking simulated patients are increasingly being introduced into medical education (Ashida & Otaki, 2022). Simulated patients enhance reflective learning, which improves learners’ cultural awareness (Leake et al., 2010; Paroz et al., 2016). However, according to a survey of emergency training programs, less than 10% of the programs used SBE as a training method for cultivating cultural awareness (Mechanic et al., 2017).

      The purposes of this study were to investigate the difficulties encountered in treating foreign patients in EDs and to create SBE scenarios using real-life cases.

      II. METHODS

      In January 2018, we sent a questionnaire by postal mail to 457 EDs of residency training hospitals in the 10 prefectures with the highest numbers of foreign visitors: Hokkaido, Chiba, Tokyo, Kanagawa, Shizuoka, Aichi, Kyoto, Osaka, Fukuoka, and Okinawa (Japan Tourism Agency, 2016). In this anonymous survey, we asked about hospital readiness for treating foreign patients and about difficult cases involving foreign patients with linguistic or cultural differences in medical care (Appendix 1). The questions about readiness for treating foreign patients were analysed by simple percentages, and descriptive statistics were used for the questions about the number of patients visiting the ED per day. The narrative responses were collated and analysed thematically. First, as investigator triangulation, two authors independently created codes, generated categories based on the codes, and sorted each case into categories. We then merged similar categories and, through discussion, revised categories whose interpretation differed, repeating this process until we reached consensus; the final categorisation was confirmed by all authors. The number of cases in each category was also calculated.
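      The final tally step described above (counting how many cases fall into each agreed category and reporting simple percentages) can be sketched as follows. This is an illustrative sketch only: the category labels follow the paper's Results, but the short case list is hypothetical, not the authors' dataset.

```python
from collections import Counter

# Hypothetical category assignments for a handful of narrative cases;
# labels follow the paper, but this list is illustrative only.
case_categories = [
    "different ideas/beliefs towards treatments or examinations",
    "medical fees",
    "different ideas/beliefs towards treatments or examinations",
    "patients' lifestyle",
    "others",
]

# Tally the number of cases sorted into each final category and report
# simple percentages, as described in the Methods.
counts = Counter(case_categories)
total = len(case_categories)
for category, n in counts.most_common():
    print(f"{category}: {n}/{total} ({100 * n / total:.1f}%)")
```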

      As a sub-study, we also surveyed 46 foreigners who were residing in Japan and had visited a hospital for treatment in Japan (hereafter, foreign residents) to explore patients’ perspectives on medical care in Japan (Appendix 2). The questionnaire was initially sent via a Google Form to participants recruited by the authors through email from January to May 2018, and further data were collected by snowball sampling. The data were analysed by simple percentages; for narrative responses, we created codes and sorted the responses into categories, calculating the number of responses in each category. Both questionnaires stated that participants’ responses would be considered consent to the study and that answers would be used anonymously for educational research.

      Following the survey analysis, we selected cases suitable for scenario creation from an educational perspective, focusing on the following points: 1) cases noted by multiple facilities, 2) difficulties that can be demonstrated by simulated patients, and 3) cases with teaching points for multiple professions. The scenarios were composed following the Scenario Folder Sections of Seropian (2003) and included a case description, a manual for simulated patients, and a teaching guide for instructors. The scenarios were reviewed by an experienced medical English communication teacher from a linguistic and cultural standpoint, and by 2 experienced emergency medicine physicians from a medical standpoint. All 3 experts co-reviewed the final scenarios.

      III. RESULTS

      A. Survey of the EDs

      1) Characteristics of the responding EDs: We received responses to the questionnaire from 141 EDs (response rate: 30.9%). Of these, 136 (96.5%) answered that they had accepted foreign patients, 116 (82.3%) had English-speaking staff, and 76 (53.9%) used translation tools or manuals. On the other hand, only 13 (9.2%) had a full-time English interpreter, and 27 (19.1%) had a website in English. The median number of overall outpatients visiting the ED per day was 30 (range: 1-135), and the median number of foreign patients visiting the ED per day was 0.5 (range: 0-8.3) (Table 1). A variety of translation methods were used: of the 76 EDs using translation tools, 36 (47.4%) answered that they used translation applications on a tablet, PC, or smartphone (Appendix 3).

      Total responding hospitals: 141

      Readiness for treating foreign patients: n (%)

      Have accepted foreign patients: 136 (96.5)
      Have an English-speaking staff: 116 (82.3)
      Use translation tools or manuals: 76 (53.9)
      Have English medical history forms: 52 (36.9)
      Have English medical certificates: 50 (35.5)
      Have English signs for patients: 46 (32.6)
      Have English medical explanation / consent forms: 27 (19.1)
      Have a hospital website in English: 27 (19.1)
      Have a full-time English interpreter: 13 (9.2)

      No. of patients visiting the emergency department per day: median (range)

      Total: 30 (1-135)
      Foreign patients: 0.5 (0-8.3)

      Table 1: Characteristics of the responding hospitals.

      2) Cases with cultural/linguistic difficulties: Cultural difficulties were encountered in 51 cases, and linguistic difficulties in 66 cases. In the thematic analysis, the cultural difficulties were classified into 4 categories: different ideas/beliefs towards treatments or examinations, medical fees, patients’ lifestyle, and others. The linguistic difficulties were likewise classified into 4 categories: communication with non-English-speaking patients, communication with English-speaking patients, communication with interpreters or using translation tools, and others. Different ideas/beliefs towards treatments or examinations (51.0%) and communication with non-English-speaking patients (65.2%) were the most common categories, respectively. Case examples in each category and how the hospitals handled them are shown in Table 2.

      Cases with cultural difficulties (51 cases)

      1. Different ideas/beliefs towards treatments or examinations: 26 (51.0)
         Example: The patient’s husband requested that only female medical staff be allowed to examine the patient.
         Handling: The doctor in charge was initially male but was switched to a female doctor.

      2. Medical fees: 10 (19.6)
         Example: The patient’s credit card was over its limit and he/she could not pay for the hospitalisation.
         Handling: The embassy of the patient’s country was asked to handle an international money transfer.

      3. Patients’ lifestyle: 7 (13.7)
         Example: The patient complained about the predominantly rice-based diet during his/her hospitalisation.
         Handling: The patient’s diet was changed to a bread-based one for the hospitalisation.

      4. Others: 8 (15.7)
         Example: The patient had a low threshold for pain and was very assertive about the pain.
         Handling: The staff confirmed that the complaint was due to pain and prescribed adequate painkillers.

      Cases with linguistic difficulties (66 cases)

      1. Communication with non-English-speaking patients: 43 (65.2)
         Example: The medical staff could not communicate with the patient in either English or Japanese.
         Handling: A translation tool was used to communicate.

      2. Communication with English-speaking patients: 10 (15.2)
         Example: The medical staff could understand ordinary conversation, but it was difficult for them to explain medical terms in English.
         Handling: English-speaking staff helped them.

      3. Communication with interpreters or translation tools: 9 (13.6)
         Example: The patient brought in an interpreter, but it was unclear whether the interpreter understood the details.
         Handling: An interpreter was asked to provide support.

      4. Others: 4 (6.1)
         Example: The patient asked for a medical certificate in his/her native language.
         Handling: A certificate in the patient’s native language could not be provided, so one was provided in English.

      Table 2: Categories of cultural and linguistic difficulties, their examples, and how they were handled.

      B. A Survey of the Foreign Residents

      Regarding the questionnaire sent to the foreign residents, we received 46 responses. Of these, 11 (23.9%) had lived in Japan for more than 30 years. In the multiple-answer question regarding the reasons for visiting the hospital, 11 (8.2%) answered acute illness treated in the ED (the demographic data of the foreign residents who responded are shown in Appendix 4). In terms of interpretation in the hospital, 10 (21.7%) answered that they had some means of interpretation. For the question “What aspects of your medical care in Japan were most surprising or different from those in your country?”, of a total of 50 responses with multiple answers, 7 (14%) answered “difference in treatment plans between own country and Japan”, while 6 respondents (12%) each answered “a lack of explanation by medical staff” and “a lack of privacy in the examination room” (Table 3).

      Questions about the medical care/staff (No. (%) of total respondents)

      Q1. Did you have any means of interpretation in the hospital?
          Yes: 10 (21.7); No: 36 (78.3)
      Q2. Could you tell the doctor/nurse about your concerns during history taking?
          Yes: 27 (58.7); Somewhat: 18 (39.1); No: 1 (2.2)
      Q3. Did you feel the doctor/nurse really cared about your ideas and culture during the history taking?
          Yes: 23 (50.0); Somewhat: 17 (37.0); No: 6 (13.0)
      Q4. Did you feel that you were sincerely cared for during the physical exam?
          Yes: 29 (63.0); Somewhat: 16 (34.8); No: 1 (2.2)
      Q5. Could you tell the doctor/nurse about your true concerns about treatment?
          Yes: 29 (63.0); Somewhat: 12 (26.1); No: 5 (10.9)
      Q6. Did the doctor/nurse explain the diagnosis and treatment plan clearly?
          Yes: 29 (63.0); Somewhat: 12 (26.1); No: 5 (10.9)
      Q7. Were you satisfied with the medical care you received?
          Yes: 32 (69.6); Somewhat: 12 (26.1); No: 2 (4.3)

      Questions about surprising points

      Q8. What aspects of your medical care in Japan were most surprising or different from those in your country? (Top 3 answers, No. (%))
          Different treatment plan: 7 (14.0)
          Lack of explanation by medical staff: 6 (12.0)
          No privacy in the examination room: 6 (12.0)

      Table 3: Results of the survey of foreign residents.

      C. Scenario Development

      Based on the survey results, we selected the main topics of the scenarios from content that overlapped across multiple cases. “Gender restriction of doctors who treated patients” and “communication difficulty in languages other than Japanese or English” were the most frequent topics among the cultural and linguistic difficulties, respectively. Following the selection of topics, we synthesised similar responses to create scenarios that could occur in an ED of any size. We developed the settings, including patient age, sex, language, and background, ensuring that the patient characteristics could be portrayed by simulated patients. As a result, we developed two scenarios: abdominal pain in a Muslim female patient, and forearm fracture in a Chinese male patient (Appendices 5 and 6). In the abdominal pain scenario, no female doctor was available, and the learner, a male doctor, had to examine and treat a simulated patient who refused to be seen by a male doctor. In the forearm fracture scenario, no interpreter was available, and the learner had to communicate with a simulated patient who spoke only Chinese. The learning objective was to communicate appropriately with patients of different cultural and linguistic backgrounds. Based on the results of the survey of foreign residents, we highlighted the importance of listening carefully to the patient’s concerns as a teaching point. We also incorporated the survey responses on how each hospital handled the cases into the information for instructors and the teaching points.

      IV. DISCUSSION

      At the time of writing, 96.5% of the responding EDs had accepted foreign patients, and 82.3% had English-speaking staff. However, only 32.6% of the EDs had multilingual signs for patients, one of the actions recommended in the manual for treating foreign patients (Japan Ministry of Health, Labour and Welfare, 2021).

      In the present study, most of the EDs used translation tools when treating foreign patients. A variety of translation methods were in use in the EDs, which is consistent with the manual for treating foreign patients (Japan Ministry of Health, Labour and Welfare, 2021). However, the EDs still encountered a significant number of cases with linguistic difficulties. This suggests that even though the EDs possess translation tools, medical staff are not always able to utilise them when communicating with foreign patients. Our survey revealed that more than half of the cases with linguistic difficulties involved non-English-speaking patients. To overcome linguistic difficulties, medical staff need to be proficient enough with these tools to communicate with patients of various native languages. In addition to translation tools, multilingual medical explanation/consent forms and in-hospital signs may also be used effectively to communicate with foreign patients.

      Regarding culturally difficult cases, our survey revealed various issues caused by differences in religious background, lifestyle, and ideas and beliefs about treatment and testing between medical staff and patients. This result is consistent with reports describing the difficulties of treating foreign patients in Japan (Tatsumi et al., 2016). Our study showed that different ideas/beliefs towards treatments or examinations were the most common theme among the cases with cultural difficulties in EDs. Knowledge of other cultures’ beliefs is one component of an individual’s capability to function effectively in culturally diverse settings (Ang et al., 2007), and a report on psychiatric hospitals showed that medical staff adapted to hospitalised foreign patients’ culture and religion as they built relationships with the patients over a long period of time (Kobayashi et al., 2014). In contrast, it is difficult to build such relationships in the acute ED setting. Thus, practical training in communicating with foreign patients, which provides knowledge about their cultures and religions within a limited time, is critically important for medical staff in EDs.

      SBE is an effective educational format that turns learners’ unconscious incompetence into conscious incompetence (Morell et al., 2002); in other words, medical staff may be able to recognise their unconscious biases towards foreign patients by participating in SBE. Consistent with the previous survey by the MHLW (2021), the culturally difficult cases included complicated issues requiring the cooperation of administrative staff and full-time English interpreters in the hospital. In the present study, we created two scenarios targeting medical staff as learners, based on the real-life cases that drew many responses in the survey. However, a greater variety of scenarios is needed, including ones that involve professions other than healthcare professionals. Furthermore, the acquisition and retention of skills from a single SBE training session is limited (Legoux et al., 2021). SBE aimed at cultivating cultural awareness cannot be completed in a single session; it requires continuous sessions with multiple scenarios.

      The results of our survey of foreign residents showed that they had been surprised by the differences in treatment plans between their country and Japan, a lack of explanation by medical staff, and a lack of privacy in the examination room. It is important to investigate the opinions of those who receive medical care in a country other than their own, because their perspectives allow us to recognise what medical staff take for granted. Medical staff’s unconscious biases about patients of different cultural backgrounds or national origins influence their decision-making (Tervalon & Murray-Garcia, 1998), and implicit bias can contribute to miscommunication (Bartlett et al., 2019). Therefore, listening to the concerns of foreign patients is important in order to avoid providing treatment based solely on medical staff’s biases. Furthermore, in creating scenarios, referring to the survey results of multiple stakeholders made the content more multi-dimensional and relevant. Although this study was conducted in the context of EDs in Japan, scenarios created from the perspectives of both medical staff and patients with various cultural backgrounds may effectively address real-life problems triggered by unconscious biases in other contexts as well.

      In emergency situations, we often focus on patients’ cultural backgrounds, national origins, languages, and religious backgrounds in order to provide effective treatment. However, recognising our own bias cannot be achieved by focusing only on the patients’ culture: self-reflection is necessary to recognise one’s own cultural biases, and this process of reflecting on one’s own culture is important for cultivating cultural awareness. Furthermore, the importance of cultural humility, that is, discovering one’s own values toward other cultures through continuous self-reflection and becoming aware of one’s own relationship to the world, has recently been noted in medical education (Chang et al., 2012). Future research should develop scenarios that include a study guide ensuring learners’ self-reflection in SBE in emergency settings.

      There are several limitations to this study. The response rate of the ED survey was 30.9%, so sampling bias cannot be ruled out. Our ED survey focused on English; surveys concerning other languages are also needed. In addition, the survey covered only the EDs of training hospitals in the 10 prefectures with the most foreign tourists; expanding the number of hospitals would yield more information about the difficulties encountered in treating foreign patients. For the sub-study, the snowball sampling made it impossible to determine the total number of surveys distributed. Finally, the impact of SBE using these scenarios on the treatment of foreign patients remains unclear; future research should assess whether foreign patients’ satisfaction with their medical care improves and whether medical staff’s unconscious bias towards foreign patients decreases after training with these scenarios.

      V. CONCLUSION

      In the current study, we clarified the linguistic and cultural difficulties encountered in treating foreign patients in EDs. We developed scenarios for SBE based on real-life cases of foreign patients facing linguistic or cultural differences in medical care in Japan. Simulation training using these scenarios may be useful for promoting cultural awareness among medical staff in EDs. In the future, a greater variety of SBE scenarios needs to be created and shared in order to treat foreign patients safely and adequately.

      Notes on Contributors

      SO contributed to the design of the study and conducted data collection and analysis. RA devised the project, the main conceptual ideas, and conducted data collection and analysis. ST contributed to the design of the study and the interpretation of the data.

      Ethical Approval

      This study was approved by the Institutional Review Board of The Jikei University School of Medicine, Japan (Approval No. 28-211(8454), 28-276(8519)). Informed consent was obtained from all participants who responded to the survey.

      Data Availability

      The data that support the findings of this study are not openly available due to privacy. The materials are available from the corresponding author on reasonable request.

      Acknowledgement

      The authors would like to acknowledge the respondents at the EDs of training hospitals, the foreigners living in Japan, and the young clinicians at The Jikei University School of Medicine for their cooperation in the study. 

      Funding

      This work has been supported by JSPS KAKENHI, grant number 16K08883.

      Declaration of Interest

      The authors report no conflicts of interest. The authors alone are responsible for the content of the article.

      References

      Ang, S., Van Dyne, L., Koh, C., Ng, K. Y., Templer, K. J., Tay, C., & Chandrasekar, N. A. (2007). Cultural intelligence: Its measurement and effects on cultural judgment and decision making, cultural adaptation and task performance. Management and Organization Review, 3(3), 335-371. https://doi.org/10.1111/j.1740-8784.2007.00082.x

      Ashida, R., & Otaki, J. (2022). Survey of Japanese medical schools on involvement of English-speaking simulated patients to improve students’ patient communication skills. Teaching and Learning in Medicine, 34(1), 13-20. https://doi.org/10.1080/10401334.2021.1915789

      Bartlett, K., Strelitz, P., Hawley, J., Sloane, R., & Staples, B. (2019). Explicitly addressing implicit bias in a cultural competence curriculum for pediatric trainees. MedEdPublish, 8. https://doi.org/10.15694/mep.2019.000102.1

      Chang, E. S., Simon, M., & Dong, X. (2012). Integrating cultural humility into health care professional education and training. Advances in Health Sciences Education, 17(2), 269–278. https://doi.org/10.1007/s10459-010-9264-1

      Hobgood, C., Sawning, S., Bowen, J., & Savage, K. (2006). Teaching culturally appropriate care: a review of educational models and methods. Academic Emergency Medicine, 13(12), 1288-1295. https://doi.org/10.1197/j.aem.2006.07.031

      Japan Ministry of Economy, Trade and Industry. (2019, January). Kokunai iryokikan ni okeru gaikokujin kanja no ukeire jittai chosa [Survey on the actual conditions of foreign patients accepted at domestic medical institutions in Japan]. https://www.meti.go.jp/policy/mono_info_service/healthcare/iryou/inbound/activity/survey_report.html

      Japan Ministry of Health, Labour and Welfare. (2021, March). Gaikokujin kanja no ukeire no tameno iryokikan muke manyuaru [A manual for medical institutions to accept foreign patients]. https://www.mhlw.go.jp/content/10800000/000795505.pdf

      Japan Tourism Agency. (2018, February). Shukuhakuryoko tokeichosa hokokusho [Report on the survey of accommodations and travel statistics]. Ministry of Land, Infrastructure, Transport and Tourism. https://www.mlit.go.jp/common/001220398.pdf

      Japan Tourism Agency. (2019, March). Honichi gaikokujin ryokosha no iryo ni kansuru jittaichosa ukeire kankyo no seibikyoka wo okonaimashita [Conducted a survey on the actual conditions of medical care for foreign visitors to Japan and strengthened the development of the receiving environment]. Ministry of Land, Infrastructure, Transport and Tourism. https://www.mlit.go.jp/kankocho/news08_000272.html

      Japan Tourism Agency. (2021, June). Shukuhakuryoko tokeichosa [Survey of accommodations and travel statistics]. Ministry of Land, Infrastructure, Transport and Tourism. https://www.mlit.go.jp/kankocho/siryou/toukei/content/001413644.pdf

      Kelly, M. A., Balakrishnan, A., & Naren, K. (2018). Cultural considerations in simulation-based education. The Asia Pacific Scholar, 3(3), 1-4. https://doi.org/10.29060/TAPS.2018-3-3/GP10

      Kobayashi, Y., Yoshimitsu, Y., & Kato, S. (2014). Super kyukyu ni okeru kangoshi no gaikokujinkanja nitaishite ninshiki suru mondai to taio no jissai [Nurses’ perceptions of and responses to foreign patients in a super emergency hospital]. Nihon Seishinka Kango Gakujutsu Shukaishi [The Japanese Psychiatric Nursing Society], 57(3), 379-383.

      Kubo, Y., Takaki, S., Nomoto, Y., Maeno, Y., & Kawaguchi, Y. (2014). Nihon no byoin ni okeru kyukyugairai deno gaikokujinkanja heno kango no genjo ni kansuru chosa [A survey on the current status of nursing care for foreign patients in emergency departments in Japanese hospitals]. Kosei no shihyo [Journal of Health and Welfare Statistics], 61(1), 17-25.

      Leake, R., Holt, K., Potter, C., & Ortega, D. M. (2010). Using simulation training to improve culturally responsive child welfare practice. Journal of Public Child Welfare, 4(3), 325-346. https://doi.org/10.1080/15548732.2010.496080

      Legoux, C., Gerein, R., Boutis, K., Barrowman, N., & Plint, A. (2021). Retention of critical procedural skills after simulation training: a systematic review. AEM Education and Training, 5(3), e10536. https://doi.org/10.1002/aet2.10536

      Mechanic, O. J., Dubosh, N. M., Rosen, C. L., & Landry, A. M. (2017). Cultural competency training in emergency medicine. The Journal of Emergency Medicine, 53(3), 391-396. https://doi.org/10.1016/j.jemermed.2017.04.019

      Morell, V. W., Sharp, P. C., & Crandall, S. J. (2002). Creating student awareness to improve cultural competence: creating the critical incident. Medical Teacher, 24(5), 532-534. https://doi.org/10.1080/0142159021000012577

      Motola, I., Devine, L. A., Chung, H. S., Sullivan, J. E., & Issenberg, S. B. (2013). Simulation in healthcare education: a best evidence practical guide. AMEE Guide No. 82. Medical Teacher, 35(10), e1511-e1530. https://doi.org/10.3109/0142159X.2013.818632

      NHS England. (2016). NHS England response to the specific duties of the Equality Act. Equality information relating to public facing functions. https://www.england.nhs.uk/wp-content/uploads/2016/02/nhse-specific-duties-equality-act.pdf

      Office of Disease Prevention and Health Promotion. (2021, August). Disparities. U.S. Department of Health and Human Services, Office of Disease Prevention and Health Promotion. https://www.healthypeople.gov/2020/about/foundation-health-measures/Disparities

      Osegawa, M., Morio, H., Nomoto, K., Nishizawa, M., & Sadahiro, T. (2002). Present medical practice and problems in emergency disease in foreign travelers requiring hospital admission. Nihon Kyukyu Igakukai Zasshi [Journal of Japanese Association for Acute Medicine], 13(11), 703-710. https://doi.org/10.3893/jjaam.13.703

      Paroz, S., Daele, A., Viret, F., Vadot, S., Bonvin, R., & Bodenmann, P. (2016). Cultural competence and simulated patients. The Clinical Teacher, 13(5), 369-373. https://doi.org/10.1111/tct.12466

      Serizawa, A. (2007). Developing a culturally competent health care workforce in Japan: Implications for education. Nursing Education Perspectives, 28(3), 140-144.

      Seropian, M. A. (2003). General concepts in full scale simulation: getting started. Anesthesia & Analgesia, 97(6), 1695-1705. https://doi.org/10.1213/01.ane.0000090152.91261.d9

      Tatsumi, Y., Sasaki-Otomaru, A., & Kanoya, Y. (2016). The actual situation and issues of emergency medical services for foreigners staying in Japan extracted by systematic review. Nihon Kenko Igakukai Zasshi [Journal of Japan Health Medicine Association], 25(2), 91-97.

      Tervalon, M., & Murray-Garcia, J. (1998). Cultural humility versus cultural competence: A critical distinction in defining physician training outcomes in multicultural education. Journal of Health Care for the Poor and Underserved, 9(2), 117-125. https://doi.org/10.1353/hpu.2010.0233

      *Sayaka Oikawa
      Center for Medical Education and Career Development,
      Fukushima Medical University,
      1 Hikarigaoka, Fukushima, 960-1295, Japan
      Email: sayaka9@fmu.ac.jp
