Education of medical students in child and adolescent psychiatry
Submitted: 14 March 2020
Accepted: 20 July 2020
Published online: 5 January, TAPS 2021, 6(1), 30-39
https://doi.org/10.29060/TAPS.2021-6-1/OA2235
Yit Shiang Lui, Abigail HY Loh, Tji Tjian Chee, Jia Ying Teng, John Chee Meng Wong & Celine Hsia Jia Wong
Department of Psychological Medicine, National University Health System, Singapore
Abstract
Introduction: A good understanding of basic child and adolescent psychiatry (CAP) is important for general medical practice. The undergraduate psychiatry teaching programme covered various adult psychiatry and CAP topics within a six-week time frame. A team of psychiatry tutors developed two new teaching formats for CAP and obtained feedback from the students about these teaching activities.
Methods: Medical students were introduced to CAP via small group teaching in two different modes. One mode was the “Clinical Vignettes Tutorial” (CVT) and the other mode “Observed Clinical Interview Tutorial” (OCIT). In CVT, tutors would discuss clinical vignettes of real patients with the students, followed by explanations about theoretical concepts and management strategies. OCIT involved simulated-patients (SPs) who assisted by acting as patients presenting with problems related to CAP, or as parents for such patients. At each session, students were given the opportunity to interview “patients” and “parents”. Feedback was given following these interviews. The students then completed surveys about the teaching methods.
Results: Students gave very positive feedback on the small-group teaching of CAP. Almost all found the sessions enjoyable and felt that they helped them apply what they had learnt. The majority agreed that the OCIT sessions increased their confidence in speaking with adolescents and parents. Some students agreed that these sessions had stimulated their interest in learning more about CAP.
Conclusion: Small group teaching in an interactive manner enhanced teaching effectiveness. Participants reported a greater degree of interest towards CAP, and enhanced confidence in treating youths with mental health issues as well as engaging their parents.
Keywords: Child Adolescent Psychiatry, Medical Education, Small Group, Teaching
Practice Highlights
- Psychiatric disorders are among the most common medical conditions experienced by children and adolescents; the Singapore Mental Health survey conducted in 2010 showed a prevalence rate of 12.5% for emotional and behavioural problems among Singaporean youth.
- Most medical students have limited exposure to Child & Adolescent Psychiatry (CAP) in their medical curriculum because of the reduced proportion of teaching time and clinical exposure allocated to CAP programmes.
- This is further compounded by the limited number of child and adolescent psychiatrists involved in teaching at medical schools and supervising clinical postings.
- This manuscript describes synergistic teaching methods employed in educating medical students in the field of Child & Adolescent Psychiatry and examines the effectiveness and acceptability of CAP teaching using small-group teaching classes.
- The CAP small group interactive teaching sessions for medical students received good feedback from the majority of the participants and translated into applicable, transferable skillsets.
I. INTRODUCTION
Psychiatric disorders are among the most common medical conditions experienced by children and adolescents during their developmental years. Epidemiological data from developed countries demonstrate a transition from acute and infectious diseases to chronic conditions, which include mental health problems (Baranne & Falissard, 2018; Kyu et al., 2016; World Health Organization, 2014). Recent global health surveys have estimated the median prevalence of psychiatric disorders in children and adolescents at about 12% (Costello, Egger, & Angold, 2005). Data from the Singapore Mental Health survey conducted in 2010 showed a prevalence rate of 12.5% for emotional and behavioural problems among Singaporean youth, comparable with global data (Lim, Ong, Chin, & Fung, 2015). Some studies have also demonstrated a growing proportion of disabilities in children and adolescents attributable to mental health disorders, so increasingly more health resources will be needed to meet these demands (Baranne & Falissard, 2018; Erskine et al., 2015). These resources will largely take the form of services focusing on the prevention, identification, and management of child and adolescent psychiatric disorders (Baranne & Falissard, 2018; Costello et al., 2005; Erskine et al., 2015). There is hence a pressing need to meet the escalating mental health needs of children and adolescents: delays in accessing prompt and adequate assessment may incur socio-economic costs and bring about further psychiatric comorbidities.
Increasing the number of trained child and adolescent psychiatrists may be necessary to meet the current and projected needs in youth mental health (Baranne & Falissard, 2018; Breton, Plante, & St-Georges, 2005; Thomas & Holzer, 2006). Globally, as well as in Singapore, the number of such specialists falls short of demand, and increased recruitment is needed to address this workforce shortage (Breton et al., 2005; Lim et al., 2015; Thomas & Holzer, 2006). Hence, there have been moves in recent years to increase exposure to, and interest in, child and adolescent psychiatry (CAP) among medical students (Hunt, Barrett, Grapentine, Liguori, & Trivedi, 2008; Malloy, Hollar, & Lindsey, 2008; Plan, 2002; Thomas & Holzer, 2006). Most medical students have limited exposure to CAP in their medical curriculum because of the reduced proportion of teaching time and clinical exposure allocated to CAP programmes. This is further compounded by the limited number of child and adolescent psychiatrists involved in teaching at medical schools and supervising clinical postings (Dingle, 2010; Lim et al., 2015; Plan, 2002; Sawyer & Giesen, 2007). It remains important, however, that medical students are taught CAP, given the burden of mental health disorders in our youths today (Dingle, 2010; Hunt et al., 2008; Kaplan & Lake, 2008; Sawyer & Giesen, 2007; Thomas & Holzer, 2006). Other specialist practitioners such as family medicine specialists and paediatricians also frequently manage youths with psychiatric problems. An understanding of early childhood development and of critical milestones in childhood and adolescence is essential in any specialty that interacts with and manages children as part of routine practice (Hunt et al., 2008; Plan, 2002). This forms the basis for teaching CAP in medical schools as part of the regular and wider curricula (Dingle, 2010; Hunt et al., 2008; Kaplan & Lake, 2008; Malloy et al., 2008; Plan, 2002; Sawyer & Giesen, 2007).
The current medical school pedagogy may have underestimated the salience of teaching CAP in the undergraduate curriculum, resulting in much less time, attention, and teaching resources being allocated to CAP. Curriculum designers may also have under-appreciated the transferability of the skillset, given the inherent challenges in conducting interviews with children and their parents.
A. The Curriculum and Teaching Methods
In the Yong Loo Lin School of Medicine at the National University of Singapore, CAP teaching is embedded within a six-week General Psychiatry clerkship for Fourth-Year medical students. It consists of 20 hours of centralised teaching at the affiliated National University Hospital, together with clinical attachments to the outpatient child psychiatry clinics of other restructured hospitals. The 20 hours of teaching include online lectures made accessible through the students’ Intranet, didactic lectures delivered in a large group setting by clinical tutors, and small group teaching classes. In this paper, the authors examine the effectiveness and acceptability of CAP teaching using these small group teaching classes.
A comprehensive CAP education covers domains such as emotional symptomatology (e.g. depression, anxiety, enuresis), conduct and disruptive behavioural problems (e.g. attention deficit disorder, conduct disorder, bullying), developmental delays (e.g. specific learning, speech or autistic spectrum disorders), and relationship difficulties, personal habits and injuries (e.g. abuse, suicide, digital overuse). Knowledge includes normal child developmental psychology as well as the assessment and management of common CAP conditions. Practice imparts CAP interview skills and the counselling of young parents.
Small group teaching sessions consisted of several components in their general pedagogic approach. The aim of these sessions was to cover core knowledge and practice in common CAP cases, and to train the interview skills required in communicating with children, adolescents, and their parents. Each session started with a series of lectures on four major domains of CAP: (1) emotional symptoms, (2) conduct and disruptive behavioural problems, (3) developmental delays, and (4) relationship difficulties, personal habits, and injuries. The lectures were followed by both the “Clinical Vignettes Tutorial” (CVT) and the “Observed Clinical Interview Tutorial” (OCIT). The sessions were structured this way because time constraints in the undergraduate curriculum precluded comprehensive clinical exposure; a combination of didactics and simulated practice was designed to maximise the transfer of the necessary theoretical knowledge and practical skills to the students.
In the CVT, tutors discussed clinical vignettes derived from real-life patients, together with the underpinning theoretical concepts, for about 2½ hours. This teaching activity covered the principles of psychopharmacology in youths, as well as three distinct childhood conditions: a) Adolescent Depression with self-harm behaviour, b) Post-traumatic Stress Disorder in an adolescent, and c) Adjustment Disorder in an adolescent with chronic medical illnesses. The anonymised vignettes were based on actual patient profiles. During each interactive discussion of these clinical presentations, tutors encouraged students to raise critical questions as pertinent portions of the history unfolded, both to enhance their analytic thinking about the cases and to help them remember these teachable moments.
The second teaching activity, the OCIT, took place after a second series of lectures on other CAP conditions had been conducted. During this three-hour tutorial, students were given opportunities to interview simulated patients (SPs). Each group comprised 12 to 18 students led by one clinical tutor.
The four pre-prepared clinical scenarios were an adolescent with Anorexia Nervosa; an adolescent with Social Anxiety Disorder; a parent of a child with Attention Deficit Hyperactivity Disorder; and a parent of a child with features of Autism Spectrum Disorder. Each scenario included a case template comprising an interesting title, the learning and assessment objectives, the student’s task, and the script for the SP, complete with an opening statement, standard statements, and character presentation (behaviour, affect and mannerisms).
Students took turns to interview the SPs, collecting accurate and adequate clinical information to arrive at provisional diagnoses. They were then tasked to discuss with the SPs the possible differential diagnoses, treatment options, and prognoses of the conditions. The SPs were in turn invited to comment on their interactions with the students. The clinical tutors then conducted follow-up discussions to give the students feedback on their interviewing techniques and knowledge of the clinical conditions; these discussions also covered the differential diagnoses and management strategies for the various conditions.
II. METHODS
Paper-and-pen self-report surveys for both the CVT and OCIT sessions were administered to evaluate the student participants’ learning, experience, and interest in CAP (Appendix A). Participants were asked to grade their responses on a five-point Likert scale (1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree and 5 = Strongly agree) for statements such as “I found the session enjoyable” and “The case scenarios were relevant”. The surveys were completed and submitted anonymously at the end of each teaching session. They also included a free-text segment for open feedback, asking participants to list “The best things about the session” and “Some ways which I think can make the sessions better”. The surveys differed slightly between the two teaching sessions to reflect the content of each method, but the questions were largely identical. Informed consent was implied by the students’ voluntary and anonymous completion of the surveys.
For the current study, the authors analysed data from the surveys completed by Fourth-Year undergraduate medical students who rotated through the six-week Psychiatry clerkship during the five-month period from July to November 2017.
Descriptive statistics were used to analyse the findings of the survey.
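The descriptive statistics reported here are simply per-item counts and percentages of respondents choosing “Agree” (4) or “Strongly agree” (5) on the Likert scale. A minimal sketch of that computation follows; the function name and the per-response tallies shown are illustrative only, with the tallies chosen so the arithmetic reproduces the 90.7% figure reported for the first CVT item:

```python
from collections import Counter

# Hypothetical Likert responses for one survey item
# (1 = Strongly disagree ... 5 = Strongly agree).
# 150 + 112 + 20 + 5 + 2 = 289 respondents in total.
responses = [5] * 150 + [4] * 112 + [3] * 20 + [2] * 5 + [1] * 2

def agreement_rate(likert_scores):
    """Count and percentage of respondents answering Agree (4) or Strongly agree (5)."""
    counts = Counter(likert_scores)
    n_agree = counts[4] + counts[5]  # Counter returns 0 for absent keys
    return n_agree, round(100 * n_agree / len(likert_scores), 1)

n, pct = agreement_rate(responses)
print(n, pct)  # 262 90.7
```

With 262 of 289 respondents agreeing, the rate rounds to 90.7%, matching the first row of Table 1.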
III. RESULTS
A total of 289 students completed the survey between July 2017 and November 2017. With regard to the CVT, the majority of the students agreed or strongly agreed that the sessions were enjoyable (90.7%) and beneficial to their overall learning (90.7%; Table 1). They also reported that the sessions had helped them to apply what they had learnt (95.8%) and that the case scenarios were relevant (98.2%).
| No. | Survey Statement | “Agree” or “Strongly Agree” (N) | (%) |
|---|---|---|---|
| 1 | “I found the session enjoyable…” | 262 | 90.7 |
| 2 | “The session helped me to apply what I have learnt…” | 277 | 95.8 |
| 3 | “The case scenarios were relevant…” | 284 | 98.2 |
| 4 | “My clinical tutor was effective in facilitating the session…” | 281 | 97.2 |
| 5 | “The session stimulated my interest in Child and Adolescent Psychiatry…” | 247 | 85.5 |
| 6 | “There was sufficient time for each section…” | 272 | 94.1 |
| 7 | “Overall, I found the session beneficial…” | 262 | 90.7 |

Table 1. Survey results for the Clinical Vignettes Tutorial (CVT)
For the OCIT, most of the survey respondents agreed or strongly agreed that the activity had helped them to learn psychiatric interviewing skills (97.7%) and had increased their confidence in speaking with adolescents or parents (95.1%; Table 2). Most also reported that the simulated patients’ performances were realistic (97.7%), and a large proportion indicated that the teaching session had met their learning objectives (98.5%).
| No. | Survey Statement | “Agree” or “Strongly Agree” (N) | (%) |
|---|---|---|---|
| 1 | “The session helped me to learn psychiatric interviewing skills…” | 260 | 97.7 |
| 2 | “The session increased my confidence in speaking to adolescents/parents…” | 253 | 95.1 |
| 3 | “The session helped me to apply what I have learnt…” | 248 | 96.9 |
| 4 | “The session stimulated my interest in Child and Adolescent Psychiatry…” | 219 | 83.3 |
| 5 | “My clinical tutor provided useful feedback…” | 259 | 97.3 |
| 6 | “The simulated patients’ performances felt realistic…” | 258 | 97.7 |
| 7 | “There was sufficient time for each case…” | 256 | 95.9 |
| 8 | “Overall, the session met the learning objectives…” | 257 | 98.5 |

Table 2. Survey results for the Observed Clinical Interview Tutorial (OCIT)
Examining the effectiveness of these teaching activities in stimulating the students’ interest in CAP, 85.5% of the respondents indicated that the CVT had done so, while a slightly lower proportion (83.3%) reported that the OCIT had stimulated their interest in CAP.
The majority of the respondents indicated that the clinical tutors were effective in facilitating the CVT (97.2%). Similarly, most of the respondents reported that the clinical tutors provided useful feedback during the OCIT (97.3%).
Entries in the free-text feedback section about what the students liked best about the CVT and OCIT included comments such as “good for application”, session allowed for “practice of interviewing skills” and “helped consolidate knowledge” (Figure 1). Several students liked the “interactive” nature of the interviews and discussions, as well as “feedback” from tutors, which also helped in their learning.

Figure 1. Open comment feedback to the survey question “The best things about the sessions were…”
In areas indicated for further improvement, students cited a “shorter” duration for each teaching session (Figure 2). This was likely due to the full-day programme of CAP teaching, which could last eight hours with a one-hour lunch break. Others shared that they preferred “smaller” groups, so that students could get more chances to practise interviewing the SPs, and asked for “more time for discussion” to allow more in-depth feedback and discussion of each clinical condition. Some students remarked that Objective Structured Clinical Examination (OSCE) styled marking schemes could enhance their learning experience, as this method might be more structured than an open discussion.

Figure 2. Open comment feedback to the survey question “Some ways which I think can make the sessions better are…”
IV. DISCUSSION
This study evaluated the effectiveness and acceptability of small group tutorials on CAP conditions, delivered as an integral part of a medical undergraduate psychiatry teaching programme. CVT and OCIT were synergistically designed to complement each other in the curriculum. The surveys compiled the medical undergraduates’ responses about their learning experience with the CAP curriculum. The effectiveness of the two teaching methods, CVT and OCIT, was gauged by the transfer of the requisite knowledge base and clinical skills, and by the availability of interviewing opportunities for the participants. The survey responses were also used to gauge the performance of the SPs and the usefulness of the clinical tutors. In addition, how impactful the teaching sessions were in generating interest towards CAP was evaluated.
The fourth-year medical students gave good feedback on the small group teaching sessions. They reported that the CVT sessions were enjoyable, beneficial, and allowed them to apply what they had learnt. For the OCIT, most respondents indicated that the sessions helped them learn psychiatric interviewing skills, increased their confidence in speaking with adolescents and parents, and helped them apply what they had learnt to clinical scenarios. There is a discernible difference between the feedback for CVT and OCIT: the students’ feedback for CVT affirmed the applicability of the CAP knowledge content, whereas the feedback for OCIT affirmed the transferability of interviewing skills in terms of confidence.
In the open feedback segment of the survey, respondents reported that they had particularly liked the interactive and hands-on aspect of the session, the frequent opportunities for evaluation and feedback, as well as for practice. However, they highlighted that certain factors such as the size of grouping, the length of the sessions and random allocation of conditions could be improved further to enhance their learning experience. Overall, their feedback still indicated positive experiences in these small group sessions, and this translated to an increased knowledge base, a heightened level of confidence, and burgeoned interest in CAP among the student participants.
This study’s limitations included the challenges inherent in accurately assessing the students’ genuine experiences of, and feelings towards, the sessions; possible biases (recall bias and the Hawthorne effect) in responding to questionnaires; and the lack of correlation with actual performance in real-world settings. Furthermore, it remained unanswered how such sessions might truly generate interest, possibly leading to the pursuit of a career in CAP. In addition, it is uncertain whether changing the teaching methods within the curriculum could inspire more medical students and young doctors to consider specialising in this field and raise the number of residency applications. The data from our study did appear to be consistent with findings from other CAP clinical teaching programmes, in which more exposure to CAP and increased clinical opportunities correlated with changed impressions of, and greater appreciation for, clinical interactions with children, more positive views of CAP as part of medical practice, and heightened interest in CAP as a field of medical specialty (Dingle, 2010; Kaplan & Lake, 2008; Malloy et al., 2008; Martin, Bennett, & Pitale, 2005).
In the current undergraduate medical curriculum, the amount of time allocated to teaching CAP is relatively small compared to other topics. Child and adolescent psychiatric cases can be particularly complex, and their management demands sensitive handling, which may pose challenges in real-world practice. Youth patients and their parents may value privacy and sometimes do not allow medical students to be involved in initial assessments and subsequent follow-up consultations. These factors collectively pose unique challenges to teaching and equipping medical students with the skills and knowledge to address child and adolescent mental health disorders. While clinical contact and patient experience would be preferred and desirable for training, they may be impractical given the constraints mentioned above (Kaplan & Lake, 2008). Hence, other creative methods of “exposure” to CAP patients should be incorporated into teaching rotations to offer medical students opportunities to expand their knowledge base, apply that knowledge to practice scenarios, and further their clinical and communication skills. Small group sessions such as the CVTs and OCITs are teaching activities that can be used to overcome some of these challenges.
Our study showed that small group interactive teaching is effective in helping medical students to apply what they have learnt about CAP, increase their confidence in speaking to adolescents as patients, and learn psychiatric interviewing skills. It also exposes them to a wide range of relevant CAP cases to which they can apply their theoretical knowledge and practise interview and management techniques. Furthermore, we found that all this can be achieved in a tailored environment conducive to learning. The collective constructive feedback has been used to further improve the content and delivery style for future batches. A comparison of CVT and OCIT as individual teaching methods has also been conceptualised for future scholarly research.
V. CONCLUSION
The CAP small group interactive teaching sessions for medical students received good feedback from the majority of the participants. This positive validation spurs the authors on to explore further how this pedagogy could help spark interest in Child and Adolescent Psychiatry among medical students, given the shortfall of child and adolescent psychiatrists worldwide.
Notes on Contributors
AHYL analysed and interpreted data. CHJW, together with TJY and JCMW planned and conducted the child psychiatry small group teaching and collected feedback data from the medical students. TJY developed the feedback questionnaire. YSL, together with AHYL, CHJW and TTC planned and wrote the manuscript. All authors read and approved the final manuscript.
Ethical Approval
NHG DSRB reference number 2019/00431 for exemption.
Data Availability
Datasets generated and/or analysed during the current study are available from corresponding author on reasonable request.
Acknowledgements
The authors wish to thank the team from Centre for Healthcare Simulation, Yong Loo Lin School of Medicine, National University of Singapore for the invaluable support in recruiting and training the simulated patients for the CAP teaching program. We appreciate the participation of the simulated patients and medical students in the teaching programme.
Funding
There is no funding for this paper.
Declaration of Interest
The authors do not know of, or foresee, any competing interests, and are not aware of any issues relating to journal policies in submitting this manuscript. All authors have approved the manuscript for submission.
References
Baranne, M. L., & Falissard, B. (2018). Global burden of mental disorders among children aged 5–14 years. Child and Adolescent Psychiatry and Mental Health, 12(1), 19.
Breton, J. J., Plante, M. A., & St-Georges, M. (2005). Challenges facing child psychiatry in Quebec at the dawn of the 21st Century. The Canadian Journal of Psychiatry, 50(4), 203-212.
Costello, E. J., Egger, H., & Angold, A. (2005). 10-year research update review: The epidemiology of child and adolescent psychiatric disorders: I. Methods and public health burden. Journal of the American Academy of Child & Adolescent Psychiatry, 44(10), 972-986.
Dingle, A. D. (2010). Child psychiatry: What are we teaching medical students? Academic Psychiatry, 34(3), 175-182.
Erskine, H. E., Moffitt, T. E., Copeland, W. E., Costello, E. J., Ferrari, A. J., Patton, G., … & Scott, J. G. (2015). A heavy burden on young minds: The global burden of mental and substance use disorders in children and youth. Psychological Medicine, 45(7), 1551-1563.
Hunt, J., Barrett, R., Grapentine, W. L., Liguori, G., & Trivedi, H. K. (2008). Exposure to child and adolescent psychiatry for medical students: Are there optimal “teaching perspectives”?. Academic Psychiatry, 32(5), 357-361.
Kaplan, J. S., & Lake, M. (2008). Exposing medical students to child and adolescent psychiatry: A case-based seminar. Academic Psychiatry, 32(5), 362-365.
Kyu, H. H., Pinho, C., Wagner, J. A., Brown, J. C., Bertozzi-Villa, A., Charlson, F. J., … & Fitzmaurice, C. (2016). Global and national burden of diseases and injuries among children and adolescents between 1990 and 2013: Findings from the global burden of disease 2013 study. JAMA Pediatrics, 170(3), 267-287.
Lim, C. G., Ong, S. H., Chin, C. H., & Fung, D. S. S. (2015). Child and adolescent psychiatry services in Singapore. Child and Adolescent Psychiatry and Mental Health, 9(1), 7.
Malloy, E., Hollar, D., & Lindsey, B. A. (2008). Increasing interest in child and adolescent psychiatry in the third-year clerkship: Results from a post-clerkship survey. Academic Psychiatry, 32(5), 350-356.
Martin, V. L., Bennett, D. S., & Pitale, M. (2005). Medical students’ perceptions of child psychiatry: Pre-and post-psychiatry clerkship. Academic Psychiatry, 29(4), 362-367.
Plan, S. (2002). A Call to Action: Children Need Our Help! American Academy of Child & Adolescent Psychiatry. Retrieved from https://www.aacap.org/app_themes/aacap/docs/resources_for_primary_care/workforce_issues/AACAP_Call_to_Action.pdf
Sawyer, M., & Giesen, F. (2007). Undergraduate teaching of child and adolescent psychiatry in Australia: Survey of current practice. Australian & New Zealand Journal of Psychiatry, 41(8), 675-681.
Thomas, C. R., & Holzer, C. E., 3rd (2006). The continuing shortage of child and adolescent psychiatrists. Journal of the American Academy of Child & Adolescent Psychiatry, 45(9), 1023-1031.
World Health Organization. (2014). Adolescent health epidemiology. Retrieved from http://www.who.int/maternal_child_adolescent/epidemiology/adolescence/en/
*Yit Shiang Lui
1E Kent Ridge Road
Tower Block, Level 9,
Singapore 119228
Tel: 6772 6331
Email address: yit_shiang_lui@nuhs.edu.sg
Submitted: 14 February 2020
Accepted: 1 July 2020
Published online: 5 January, TAPS 2021, 6(1), 40-48
https://doi.org/10.29060/TAPS.2021-6-1/OA2227
Shirley Beng Suat Ooi1,2, Clement Woon Teck Tan3,4 & Janneke M. Frambach5
1Emergency Medicine Department, National University Hospital, National University Health System, Singapore; 2Department of Surgery, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; 3Department of Ophthalmology, National University Hospital, National University Health System, Singapore; 4Yong Loo Lin School of Medicine, National University of Singapore, Singapore; 5School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, The Netherlands
Abstract
Introduction: Almost all published literature on effective clinical teachers is from Western countries, and only two studies compared medical students with residents. Hence, this study aims to explore the perceived characteristics of effective clinical teachers among medical students compared to residents graduating from an Asian medical school, and specifically whether there are differences between cognitive and non-cognitive domain skills, to inform faculty development.
Methods: This qualitative study was conducted at the National University Health System (NUHS), Singapore involving six final year medical students at the National University of Singapore, and six residents from the NUHS Residency programme. Analysis of the semi-structured one-on-one interviews was done using a 3-step approach based on principles of Grounded Theory.
Results: There are differences in the perceptions of effective clinical teachers between medical students and residents. Medical students valued a more didactic, spoon-feeding type of teacher in their earlier clinical years. However, final-year medical students and residents valued feedback and role-modelling in clinical practice. The top two characteristics, approachability and passion for teaching, are in the non-cognitive domains. These seem foundational and lead to the acquisition of effective teaching skills such as the ability to simplify complex concepts and create a conducive learning environment. Being exam-oriented is a new characteristic not previously identified in “Western-dominated” publications.
Conclusion: The results of this study will help to inform educators of the differences in a learner’s needs at different stages of their clinical development and to potentially adapt their teaching styles.
Keywords: Clinical Teachers, Medical Students, Residents, Cognitive/Non-Cognitive, Asian Healthcare, Faculty Development
Practice Highlights
- Approachability and teaching passion are foundational non-cognitive skills in effective clinical teachers.
- These foundational skills are more important for undergraduate than postgraduate teaching.
- Procedural residents can accept less ‘warm’ teachers if they can learn advanced clinical skills.
- Medical students value didactic ‘spoon-feeding’ type of teachers in their earlier clinical years.
- Final year medical students and residents value feedback and role-modelling at clinical practice.
I. INTRODUCTION
“The transformation of our students requires the engagement of innovative and outstanding clinician-teachers who not only supervise students in their development of technical skills and applied knowledge but also serve as role models of the values and attributes of the profession and of the life of a professional” (Sutkin, Wagner, Harris, & Schiffer, 2008). This statement nicely encapsulates the very important role played by outstanding clinical teachers in helping students to ultimately become professionals with the attributes our healthcare system desires. Previous research has extensively investigated characteristics of effective clinical teachers to inform faculty development (e.g. Branch, Osterberg, & Weil, 2015; Hatem et al., 2011; Hillard, 1990; Kernan, Lee, Stone, Freudigman, & O’Connor, 2000; Paukert & Richards, 2000; Singh et al., 2013; Sutkin et al., 2008; White & Anderson, 1995). However, despite the large body of existing research on effective clinical teaching, two issues related to the needs of different groups of learners need further investigation to enable more tailored faculty development.
First, effective clinical teaching may look different in undergraduate as compared with postgraduate education. In many healthcare institutions, clinical teachers are expected to teach across the medical education continuum, i.e., undergraduate medical students, graduate doctors in training, and continuing medical education, and teaching abilities are a necessary prerequisite in an academic environment (Hatem et al., 2011). Based on the conceptual framework of constructivism (Bednar, Cunningham, Duffy, & Perry, 1991), a theory which equates learning with creating meaning from experience or contextual learning, Jonassen (1991) argues that constructive learning environments are most effective for acquiring knowledge in the advanced stage of knowledge, the stage between introductory and expert. According to Jonassen (1991), the initial or introductory stage of knowledge acquisition occurs when learners have very little directly transferable prior knowledge about a skill or content area. In this stage, knowledge is best acquired through more objectivistic approaches, which can be described as ‘spoon-feeding’. Medical students in general fit into this introductory stage, to varying degrees depending on their seniority and individual progress in learning. Jonassen’s (1991) second stage is advanced knowledge acquisition, where the domains are ill-structured and more knowledge-based. This contrasts with his third and final stage, the knowledge acquisition of experts, who require very little instructional support and are able to deal with elaborate structures and schematic patterns, seeing the interconnectedness of knowledge through experience. Junior doctors in training fit into this second, advanced stage of learning. Constructivist teachers help students construct knowledge and become active learners rather than passive recipients of knowledge from teachers or textbooks.
In view of this constructivist framework, it appears logical to postulate that as medical students mature to become practicing doctors, their perceptions of effective clinical teachers may change from one who ‘spoon-feeds’ them with medical knowledge to one who encourages them to actively construct new meaning as they become clinically more experienced and have to deal with complex and ill-defined problems. Low, Khoo, Kuan, and Ooi (2020) showed that although the top four characteristics of effective medical teachers are consistent across all five years of medical school, characteristics that facilitate active learner participation are emphasised in the clinical years, consistent with constructivist learning theory. However, there is a paucity of comparative research on the perceptions of effective clinical teachers among undergraduates as compared to postgraduates; such research is needed to plan more focused faculty development that addresses the attributes learners look for in their clinical teachers.
The second issue relates to potential differences in the clinical teaching role between Asian and Western settings. In Western studies, as noted above, effective clinical teachers are encouraged to stimulate students’ intellectual curiosity, leading to more self-directed learning (Hillard, 1990; Kernan et al., 2000; White & Anderson, 1995). In contrast, feelings of uncertainty about the independence required in self-directed learning, a focus on tradition that respects ‘old ways’, hierarchy expecting ‘truths’ to come from persons of higher status, and an achievement orientation to pass and excel in examinations have been identified as more prominent in non-Western than in Western cultures (Frambach, Driessen, Chan, & van der Vleuten, 2012). This is despite the recent introduction of more student-centred education methods. In Singapore, for example, there is a move in the Yong Loo Lin School of Medicine (YLLSoM) to embed students into healthcare teams (Jacobs & Samarasekera, 2012) and to implement newer methods of learning such as the flipped classroom. However, many teachers still employ traditional methods of lectures and small group tutorials focused on exam preparation. A comprehensive review of 68 articles on effective clinical teaching (Sutkin et al., 2008) included only one article reporting research from a non-Western setting (Elzubeir & Rizk, 2001). That article, originating from the United Arab Emirates, does not discuss whether medical students in Asian countries perceive a role model differently from those in the West (Elzubeir & Rizk, 2001). Another study conducted in Asia showed differences between the perceptions of first-year and fifth-year medical students in Singapore on what makes an effective medical teacher (Kua, Voon, Tan, & Goh, 2006): more first-year students preferred handouts, in contrast to fifth-year students, who were less reliant on ‘spoon-feeding’.
Research on effective clinical teaching is growing in the Asian setting (Ciraj et al., 2013; Haider, Snead, & Bari, 2016; Kikukawa et al., 2013; Mohan & Chia, 2017; Nishiya et al., 2019; Venkataramani et al., 2016). Nevertheless, the literature from Asian settings remains sparse compared with studies conducted in the West, and no study has directly compared medical students with residents.
Another issue that deserves further attention is the role of non-cognitive domain skills in clinical teaching. Sutkin et al.’s (2008) review study described three main categories of characteristics of good clinical teachers: 1) physician characteristics, 2) teacher characteristics, and 3) human characteristics (Table 1). Approximately two-thirds of the characteristics were in non-cognitive domains (such as those involving relationship skills, emotional states, and personality types), and one-third in cognitive domains (such as those involving reasoning, memory, judgment, perception, and procedural skills). The article noted that cognitive abilities can be taught and learned, in contrast to non-cognitive attributes, which are more difficult to develop and teach. Faculty development programmes currently often focus on traditional cognitive skills, such as curriculum design, large-group teaching, and assessment of learners (Searle, Hatem, Perkowski, & Wilkerson, 2006). In contrast, if non-cognitive domains are more important in contributing to outstanding teaching, they might need greater emphasis in the curricula of these workshops. The good news is that, according to Schiffer, Rao, and Fogel (2003), non-cognitive behaviours are both measurable and alterable. Most of them have underlying neural networks which are entering our sphere of understanding. Hence non-cognitive skills, although much more challenging to develop than cognitive skills, have the potential to be developed. It is not clear whether the balance of cognitive and non-cognitive domain skills differs between medical students’ and residents’ perceptions of an effective clinical teacher.
The aim of this qualitative study is to explore the perceived characteristics of an effective clinical teacher among medical students compared to residents graduating from an Asian medical school, and to examine whether there is a difference regarding cognitive and non-cognitive domain skills.
II. METHODS
A. Participants
The participants consisted of final (fifth) year medical students (M5s) from the Yong Loo Lin School of Medicine (YLLSoM), National University of Singapore (NUS), who were posted to the National University Hospital (NUH) for their student internship posting in 2016. To ensure sufficient working experience, the National University Health System (NUHS) residents recruited were those who had graduated from the YLLSoM and had recently completed their intermediate specialty examinations. These were third to fifth year residents in different programmes. Maximal variation sampling of the M5s and residents was done across gender, ethnic group and, for residents, specialty.
B. Design
A pragmatic qualitative research design (Savin-Baden & Howell Major, 2013) was used to have participants reflect on how their own learning journey shaped their perceptions of the qualities that make an effective clinical teacher, from their first exposure to clinical medicine in year 3 (M3) of medical school to final year (M5) for the students, and on to residency for the residents.
C. Data Collection
Semi-structured one-on-one interviews using open-ended questions were conducted. M5s doing their student internship programme in the various departments in NUH were invited via e-mail to participate in this study. To ensure maximal variation sampling, M5s of both genders and, as far as possible, different ethnic groups were recruited. The residents were recruited through the Graduate Medicine Education Office in NUH from those who responded voluntarily to the invitation to participate. To ensure maximal variation sampling, residents of both genders, from different ethnic groups and from different specialties (both procedural and non-procedural) were selected, as residents from procedural specialties may perceive effective clinical teachers differently from those in non-procedural specialties.
Written consent was obtained from the interviewees after they had read the Participant Information Sheet, and each interview was conducted in a quiet room. The interviews were audiotaped and lasted between 30 and 45 minutes.
D. Data Analysis
The audiotaped interviews were transcribed. All 12 interviews were conducted by the principal investigator (PI) (SO). Although coding and formal analysis were done only after all 12 interviews had been transcribed, the PI took note of emerging themes during the interviews and ended recruitment once no substantial new themes emerged.
In the first phase, open coding, initial categories of information on the characteristics of effective clinical teachers were formed by segmenting the information and assigning open codes. In the second coding phase, broader categories were developed by grouping conceptually related ideas. The third phase involved selective coding, where the individual categories were counterchecked against Sutkin et al.’s (2008) categories of teacher, physician and human characteristics and whether they were in the cognitive or non-cognitive domains (Table 1). Further related categories were brought together according to Sutkin et al.’s (2008) classification.
Physician Characteristics

| Code | Characteristic |
|------|----------------|
| P1 | Demonstrates medical/clinical knowledge |
| P2 | Demonstrates clinical and technical skills/competence, clinical reasoning |
| P3 | Shows enthusiasm for medicine |
| P4 | A close doctor-patient relationship |
| P5 | Exhibits professionalism |
| P6 | Is scholarly (does research) |
| P7 | Values teamwork and has collegial skills |
| P8 | Is experienced |
| P9 | Demonstrates skills in leadership and/or administration |
| P10 | Accepts uncertainty in medicine |
| P11 | Others |

Teacher Characteristics

| Code | Characteristic |
|------|----------------|
| T1 | Maintains positive relationships with students and a supportive learning environment |
| T2 | Demonstrates enthusiasm for teaching |
| T3 | Is accessible/available to students |
| T4 | Provides effective explanations, answers to questions, and demonstrations |
| T5 | Provides feedback and formative assessment |
| T6 | Is organized and communicates objectives |
| T7 | Demonstrates knowledge of teaching skills, methods, principles, and their application |
| T8 | Stimulates students’ interest in learning and/or subject |
| T9 | Stimulates or inspires trainees’ thinking |
| T10 | Encourages trainees’ active involvement in clinical work |
| T11 | Provides individual attention to students |
| T12 | Demonstrates commitment to improvement of teaching |
| T13 | Actively involves students |
| T14 | Demonstrates learner assessment/evaluation skills |
| T15 | Uses questioning skills |
| T16 | Stimulates trainees’ reflective practice and assessment |
| T17 | Teaches professionalism |
| T18 | Is dynamic, enthusiastic, and engaging |
| T19 | Emphasizes observation |
| T20 | Others |

Human Characteristics

| Code | Characteristic |
|------|----------------|
| H1 | Communication skills |
| H2 | Acts as role model |
| H3 | Is an enthusiastic person |
| H4 | Is personable |
| H5 | Is compassionate/empathetic |
| H6 | Respects others |
| H7 | Displays honesty |
| H8 | Has wisdom, intelligence, common sense, and good judgement |
| H9 | Appreciates culture and different cultural backgrounds |
| H10 | Considers others’ perspectives |
| H11 | Is patient |
| H12 | Balances professional and personal life |
| H13 | Is perceived as a virtuous person and a globally good person |
| H14 | Maintains health, appearance, and hygiene |
| H15 | Is modest and humble |
| H16 | Has a good sense of humour |
| H17 | Is responsible and conscientious |
| H18 | Is imaginative |
| H19 | Has self-insight, self-knowledge, and is reflective |
| H20 | Is altruistic |
| H21 | Others |
Note: Italics denotes cognitive characteristics; Bold denotes non-cognitive characteristics.
Table 1. Classification of characteristics of outstanding clinical teachers (Sutkin et al., 2008)
E. Trustworthiness
To enhance the credibility of the research, member checking on the accuracy of interview transcription was done. The same transcription was coded by the PI (SO) and a co-researcher (CT), and the themes and differences were discussed and resolved together. The themes were then discussed with another co-researcher (JF), who is an outsider to the research setting. To contribute to the dependability of the data, a reflexivity diary was kept to reflect on the process and the PI’s role and influence on this study. This is because the PI is the person overall in charge of residency training, has vast experience in teaching both undergraduate and postgraduate learners, and has observed undergraduates seemingly valuing a teacher’s willingness to spend time teaching, in contrast to postgraduate learners, who value effective teaching on the job. The PI emphasised to participants that whatever they mentioned in this study would not affect them in any way in their assessments, selection into a residency programme, job selection or career progression. Of note, none of the interviewees mentioned any of the authors by name when describing an effective clinical teacher.
III. RESULTS
A total of six final year medical students from the YLLSoM, three males and three females with a mean age of 23 years, were interviewed. The resident group consisted of four males and two females: two internal medicine year 3 residents, one paediatric year 5 resident, one emergency medicine year 4 resident, one orthopaedic year 3 resident and one urology year 4 resident, with a mean age of 29 years (range 26-33 years). All participants were of Chinese ethnicity.
The characteristics of effective teachers were mapped onto Sutkin et al.’s (2008) review paper (Table 1); while the majority of the characteristics could be mapped, those that could not were considered new characteristics. Referring to the summary of results in Table 2, the top characteristic, identified equally by the medical students and residents, was approachability, in the non-cognitive domain. This was described as being “relatable, personable, forming good rapport, warm, able to remember students’ names, having a sense of humour, sharing personal experience”. Medical student 2 aptly described its importance: “Approachability in being willing to teach is an inborn trait. It acts as a screening tool. It opens the door for a student to decide whether or not this clinical tutor is someone she is likely to approach to learn from.” Interestingly, while both groups unanimously identified the need for a clinical teacher to have a threshold level of clinical competence, followed by warmth, approachability and a passion to teach, this latter requirement was emphasised as particularly important in undergraduate teaching. In contrast, a postgraduate trainee/resident could accept a less warm but skillful clinician from whom to learn advanced surgical skills as, being already in a training programme, they were more able to engage in self-directed learning and could observe and learn.
| Total | MS | R | Characteristics | Teacher | Physician | Human | Cognitive | Non-Cognitive |
|-------|----|----|-----------------|---------|-----------|-------|-----------|---------------|
| 10 | 5 | 5 | Approachability | X (T3) | | X (H4) | | X |
| 9 | 3 | 6 | Passion/enthusiasm in teaching/engaging | X (T2) | | | | X |
| 8 | 5 | 3 | Provide effective explanations, answers to questions, and demonstrations (T4); demonstrate clinical and technical skills/competence, clinical reasoning (P2) | X (T4) | X (P2) | | X | |
| 7 | 3 | 4 | Creates conducive learning environment | X (T1) | | X (H11, H15) | | X |
| 7 | 3 | 4 | Role modeling | X | X | X (H1, H6) | | X |
| 7 | 2 | 5 | Teach at appropriate level/know learning objectives | X (T6) | | | X | |
| 7 | 3 | 4 | Sacrifice time | X | | | X | |
| 6 | 3 | 3 | Realistic/concrete learning | X (T6) | | | X | |
| 6 | 2 | 4 | Feedback, supervision, assessment for learning | X (T5, T19) | | | X | |
| 5 | 2 | 3 | Knowledgeable/up to date/evidence-based | | X (P1) | | X | |
| 5 | 4 | 1 | Exam-oriented | X | | | X | |
| 4 | 2 | 2 | Inspirational to learning | X (T8, T9, T18) | | | | X |
| 4 | 1 | 3 | Clinical thinking/demonstrate to impart/pedagogy | X (T9) | X (P2) | | X | |
| 3 | 2 | 1 | Nurturing/encouraging/compassion for students & team | X (T11) | | X | | X |
| 2 | 0 | 2 | Allows hands-on/encourages trainees’ active involvement in clinical work | X (T10) | | | | X |
| | | | Others: Strict, elocution, fair/moral compass (H13, H7), innovative (T12), directs learners, worldly-wise; empathy (H5), interpersonal skills, humour (H16) | | | | | |
Note: (T), (P) and (H) refer to the specific Sutkin et al.’s (2008) classification as given in Table 1.
Table 2. Characteristics of effective teachers identified by Medical Students (MS) and Residents (R) classified into teacher, physician and human characteristics and cognitive vs non-cognitive domains and mapped onto Sutkin et al.’s (2008) Classification (Table 1)
The second most important characteristic identified was having a passion/enthusiasm in teaching, in the non-cognitive domain. This was described as “engaging, enthusiastic to help residents learn, enthusiasm/infectious attitude rubs off, lively, draws out from learners, takes time to explain to students”. Resident 5 explained: “Passion is actually demonstrated in the knowledge you display. Because when you are interested in something, you can go on to explore the depth. People who display passion are able to depict the subject matter in a very interesting, personal and in a lively way. Passion is also about the desire to learn about things and to contribute to things. So in a sense teaching is not a passive tool for the diffusion of students … it’s also the ability to be able to draw things out from the students …draw contribution or ideas…”. Passion as a characteristic was mentioned by all the residents but not by all of the medical students.
The third most important characteristic identified can be summarised as “providing effective explanations, answers to questions, and demonstrations” (a teacher characteristic) and “demonstrates clinical and technical skills/competence, clinical reasoning” (a physician characteristic) in the cognitive domain. This was described as “being able to break down concepts into digestible chunks; being able to synthesise and teach in understandable way; how to think, synthesise and use information; concise, targeted, clear thinking; headings, subheadings, elaborations; clarity in giving instructions and thought so that everyone is on the same page; demonstrate better way of presenting and more accurate way of physical examination”. This was identified more in the medical student group than in the resident group.
Most of the other characteristics generally coincided with Sutkin et al.’s (2008) paper. Among the cognitive domain skills were “teaching at appropriate level/knowing learning objectives” and a willingness to “sacrifice time”, demonstrating commitment to student education. The teachers who sacrificed their time gave additional teaching sessions and did not rush through; both the medical students and the residents identified this as something they really valued in undergraduate teaching. Another characteristic in the cognitive domain, “realistic/concrete learning”, was described as “bedside teaching; teaching with practical aspect, case-based teaching; use of clinical pictures, electrocardiogram, clinical quiz and learning aids”. This form of learning was identified as effective by the medical students and residents equally. In contrast, “feedback, supervision, assessment for learning”, described as “being able to discuss in detail as physically present; balance between supervision and resisting urge to take over in an operation; good feedback with balance of positive and negative points done in a fun and nice way”, was identified more by the residents than by the medical students.
Being “exam-oriented”, i.e., the teacher being able to prepare students well for exams, was notably a characteristic identified mainly by the medical students, and one not identified at all in Sutkin et al.’s (2008) paper nor in other, more recent references. To quote medical student 1, “I guess especially for medical students, it is whether this tutor prepares us well for the exams and in terms of meeting our academic objectives.” Medical student 5: “He teaches us very exam focused and he synthesises all the information very succinctly for exams.”
The medical students were specifically asked whether the characteristics they valued in their teachers had changed between M3, when they were first introduced to clinical medicine, and M5. The students almost unanimously expressed that in M3, having just been exposed to clinical medicine, they needed to build up their medical knowledge through a content-heavy, didactic style of teaching that could be described as more spoon-feeding than self-directed learning. Medical student 5 said, “Year 3 is more introductory kind of year so we don’t know anything. So what a good tutor to me in year 3 was whoever can teach me approaches, impart didactic teachings like knowledge.” They valued connections back to the basic sciences taught in their first two years of medical school and teachers who taught them how to approach patients. They were open to the gradual introduction of self-directed learning, but it should not hold up the pace of the lesson if the students were unable to answer. In contrast, at the time of interview they were in M5 and had two main aims. Their first aim was to look for good role-models for their upcoming internship and, for some, their choice of residency. Hence, they appreciated bedside teaching with close supervision and feedback on medical knowledge applied to actual clinical care; moreover, bedside examination skills and patient communication cannot be studied at home. At M5, they valued more self-directed learning as they were better equipped to search for information themselves than in M3. Their second aim, which had become more important as their final exams drew near, was exam preparation, the finals involving clinical examination in the form of the Objective Structured Clinical Examination. In this aspect, they valued teachers who could teach them clinical reasoning on how to synthesise information to be applied to the management of actual patients.
This was also expressed by the residents when they recalled what they had looked for in their undergraduate years.
The residents, who were in their third year of residency and beyond, identified the need for more active, self-directed learning. They mentioned the need to ask the ‘why’ questions and to learn evidence-based clinical practice. They appreciated experienced tutors who shared pearls and personal experience with them. They preferred to learn from good teaching during ward rounds and clinics, and mentioned that didactic teaching was less important than in their undergraduate days and their first year of residency, when they still appreciated more spoon-feeding. As more senior residents, they found discussions, greater analysis, questions to identify knowledge gaps, opportunities to present, and testing useful because they already had a fund of medical knowledge.
IV. DISCUSSION
The results of this study suggest that there are differences in the perceived characteristics of an effective clinical teacher among medical students compared to residents. The results support Jonassen’s theory of constructivism (1991) as seen by the medical students at the beginning of their clinical year (M3) wanting more didactic teaching to ‘spoon-feed’ them with medical knowledge. As these students move on to become more senior in M5, and then residency, they start appreciating teachers who help them become more self-directed learners. These more senior learners also value feedback to help them deal with more complex ill-defined problems that they encounter during their daily clinical work. This is supported by more residents than medical students identifying feedback and supervision as well as clinical decision making/thinking as important characteristics of an effective clinical teacher (Table 2).
It is also interesting to note that the top two characteristics, approachability and passion/enthusiasm in teaching, are both in the non-cognitive domains. In fact, they are probably fundamental attributes that turn a good teacher into a great one: they lead to extensive teaching experience which, coupled with feedback from learners, makes teachers good at simplifying and explaining concepts, especially in undergraduate teaching. Beyond a baseline of clinical competence, students value clinical teachers who want to teach over excellent top clinicians who lack the soft skills and approachability that give students the courage to learn effectively from them. In contrast, the residents are willing to accept less ‘warm’ teachers if they are able to learn advanced clinical skills from them, particularly in the procedural specialties.
One of the characteristics not identified in any of the references, including Sutkin et al.’s (2008) review paper, is being exam-oriented. This characteristic was identified by four of the medical students but only by one of the residents, who mentioned it while recalling his undergraduate days. This is not too surprising, because Frambach et al. (2012) found that Asian students tend to strive for success and to rank among the top achievers in an examination. The YLLSoM is Asia’s leading medical school (QS Top Universities, 2015; Times Higher Education, 2015), and hence the crème de la crème of Singapore’s students study there, as seen by both the 10th and 90th percentiles of medical students obtaining all A grades in their Singapore-Cambridge GCE A-level admission scores (National University of Singapore, 2019); this can explain the exam-orientedness of the students. Moreover, Singapore practices meritocracy (Prime Minister’s Office, 2015), and in a small country of only 719.1 km² with a population of 5.35 million (World Bank, 2015) and only three public healthcare clusters, doing well in exams is seen as a tried and tested way of securing a good future. Failing a high-stakes exam such as the final Bachelor of Medicine and Bachelor of Surgery (MBBS) exams delays one’s progression to the next career stage, such as admission to a residency training programme. In a small country like Singapore, where there are perceived to be few opportunities to start afresh, it is not surprising that so much emphasis is placed on doing well in exams and that a teacher who is able to prepare students well for exams is greatly valued.
There are several limitations to this study. Although we had wanted to recruit interviewees of different ethnicities, all 12 who responded to our invitation were Chinese, though they had studied in a multi-cultural and multi-ethnic public school. Another limitation is that this study only explores the perceptions of the learners themselves; a more balanced picture would require the viewpoints of the teachers as well.
V. CONCLUSION
This study suggests that there are differences in the perceptions of an effective clinical teacher between medical students and residents. Medical students valued a more didactic, spoon-feeding type of teacher in their earlier clinical years, whereas final-year medical students and residents valued feedback and role-modelling in clinical practice. The top two characteristics, approachability and passion for teaching, are in the non-cognitive domains. The results of this study will help educators recognise the differences in learners’ needs at different stages of their clinical development and adapt their teaching styles accordingly. In addition, certain non-cognitive domain skills may be developed through recognition of clinical teachers who are role models, showing by example the art of the practice of medicine and the ability to create a conducive, non-threatening learning environment. Faculty development programmes that target how to develop a conducive learning environment already exist.
Notes on Contributors
Shirley Ooi, MBBS(S’pore), FRCSEd(A&E), MHPE(Maastricht), is a senior consultant emergency physician at NUH and associate professor at NUS. She was the Designated Institutional Official of the NUHS Residency programme at the time of the study and is currently the Associate Dean at NUH. This study was her MHPE thesis. She reviewed the literature, designed the study, conducted the interviews, analysed the transcripts and wrote the manuscript.
Clement Tan, MBBS(S’pore), FRCSEd (Ophth), MHPE(Maastricht), is associate professor, senior consultant and head of the Department of Ophthalmology, NUS and NUH. He was the first author’s local MHPE thesis supervisor. He co-analysed the transcripts and approved the final versions of the manuscripts.
Janneke M. Frambach PhD is assistant professor at the School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, the Netherlands. She was the first author’s MHPE thesis supervisor. She supervised the study from the beginning to the final stage of manuscript writing with its revisions.
Ethical Approval
This study was reviewed and approved by the NUS Institutional Review Board (approval no. 3172), which considered the letter of invitation for recruitment of participants, participant information sheet, written informed consent for the audio-recordings of the one-on-one interviews, interview guide and confidentiality of participants.
Acknowledgements
The authors would like to thank the following for their help, advice and support, without which this study would not have been possible:
- Medical student, Gerald Tan, for his help in transcribing many of the interviews.
- The six YLLSoM medical students who had willingly come forward to be interviewed for this study.
- The six NUHS residents who had willingly spared their time to be interviewed for this study.
Funding
No grant nor external funding was received for this study.
Declaration of Interest
The PI, as the interviewer, emphasised to the participants that whatever they mentioned in this study would not affect them in any way in their assessments, selection into a residency programme, job selection or career progression. Moreover, their participation was entirely voluntary. The other two authors had no conflict of interest.
*Shirley Ooi
Emergency Medicine Department,
National University Hospital
9 Lower Kent Ridge Road, Level 4,
National University Centre for Oral Health Building,
Singapore 119085
Tel: (65)6772-2458
Fax: (65)6775-8551
Email: shirley_ooi@nuhs.edu.sg
Submitted: 15 April 2020
Accepted: 5 June 2020
Published online: 5 January, TAPS 2021, 6(1), 49-59
https://doi.org/10.29060/TAPS.2021-6-1/OA2248
Amaya Tharindi Ellawala1, Madawa Chandratilake2 & Nilanthi de Silva2
1Department of Medical Education, Faculty of Medical Sciences, University of Sri Jayewardenepura, Sri Lanka; 2Faculty of Medicine, University of Kelaniya, Sri Lanka
Abstract
Introduction: Professionalism is a context-specific entity, and should be defined in relation to a country’s socio-cultural backdrop. This study aimed to develop a framework of medical professionalism relevant to the Sri Lankan context.
Methods: An online Delphi study was conducted with local stakeholders of healthcare, to achieve consensus on the essential attributes of professionalism for a doctor in Sri Lanka. These were built into a framework of professionalism using qualitative and quantitative methods.
Results: Forty-six attributes of professionalism were identified as essential, based on the Content Validity Index supplemented by Kappa ratings. ‘Possessing adequate knowledge and skills’, ‘displaying a sense of responsibility’ and ‘being compassionate and caring’ emerged as the highest rated items. The proposed framework has three domains: professionalism as an individual; professionalism in interactions with patients and co-workers; and professionalism in fulfilling expectations of the profession and society. It also displays certain characteristics unique to the local context.
Conclusion: This study enabled the development of a culturally relevant, conceptual framework of professionalism as grounded in the views of multiple stakeholders of healthcare in Sri Lanka, and prioritisation of the most essential attributes.
Keywords: Professionalism, Culture, Consensus
Practice Highlights
- Medical professionalism is recognised as a culturally dependent entity.
- This has led to the emergence of definitions unique to socio-cultural settings.
- List-based definitions provide operationalisable means of portraying its meaning.
- A Delphi study was conducted to achieve consensus on locally relevant professionalism attributes.
- Using quantitative and qualitative methods, a conceptual framework of professionalism was developed.
I. INTRODUCTION
There is no single definition of medical professionalism that encompasses its many subtle nuances (Birden et al., 2014). The realisation that professionalism is a dynamic, multi-dimensional entity (Van de Camp, Vernooij-Dassen, Grol, & Bottema, 2004), significantly dependent on context (Van Mook et al., 2009), and cultural backdrop (Chandratilake, Mcaleer, & Gibson, 2012), has led to the emergence of definitions specific to cultures and socio-economic backgrounds.
Many of the current definitions originate from Western societies. Certain Eastern cultures have embraced such definitions, though they are undeniably in conflict with local traditional views (Pan, Norris, Liang, Li, & Ho, 2013). In parallel, however, countries such as Egypt, Saudi Arabia, Japan, China and Taiwan have explored how professionalism is conceptualised within their contexts (Al-Eraky, Chandratilake, Wajid, Donkers, & Van Merrienboer, 2014; Leung, Hsu, & Hui, 2012; Pan et al., 2013). Such studies have portrayed the interplay between cultural, socio-economic and religious factors in shaping perceptions of professionalism, further fuelling the notion that professionalism must be “interpreted in view of local traditions and ethos” (Al-Eraky et al., 2014, p. 14).
Culture is the embodiment of elements such as attitudes, beliefs and values that are shared among individuals of a community and is therefore, an entity that distinguishes one group of people from another (Hofstede, 2011). Various cultural theories provide insight into inter-cultural differences across the globe (Hofstede, n.d.; Schwartz, 1999). The Sri Lankan cultural context, while aligned with those of its closest geographical neighbours in South Asia in some ways, differs from them in other important aspects.
Certain attempts have been made to explore the meaning of professionalism in Sri Lanka. Chandratilake et al. (2012) provided a degree of insight while comparing cultural similarities and dissonances in conceptualising professionalism among doctors of several nations. Monrouxe, Chandratilake, Gosselin, Rees, and Ho (2017) built on this work with their analysis of professionalism as viewed by local medical students. The sole regulatory authority of the medical profession in the country, the Sri Lanka Medical Council (SLMC, 2009) has delineated what it expects in terms of professionalism, by outlining the constituents of ‘good medical practice’, many of which converge with elements of professionalism described in the literature.
While the work mentioned here has shed some light on the topic, to our knowledge, there were no studies that focused solely on the local conceptualisation of professionalism, drawing on the views of diverse stakeholders of healthcare.
There exist two schools of thought on how professionalism can be defined: as a list of desirable attributes (Lesser et al., 2010), or as an over-arching, value-laden entity that transcends such lists (Irby & Hamstra, 2016; Wynia, Papadakis, Sullivan, & Hafferty, 2014). Unlike the latter, a list may not address the “foundational purpose of professionalism” (Wynia et al., 2014, p. 712); however, it will provide a tangible, operationalisable portrayal (Lesser et al., 2010). It is possibly for this reason that many studies have opted for list-based definitions, an approach that is supported in the East (Al-Eraky & Chandratilake, 2012; Al-Eraky et al., 2014; Pan et al., 2013).
The aim of this study was to develop a culturally appropriate conceptual framework of medical professionalism in Sri Lanka using a combination of qualitative and quantitative methods. We envisioned that identifying a list of desirable attributes would be appropriate, providing a definition that could readily be operationalised for teaching/learning, assessment and research purposes (Wilkinson, Wade, & Knock, 2009).
II. METHODS
A. The Approach
We followed a consensus approach, and opted for the Delphi technique as it was imperative to involve a large number of participants not limited by geographical location (Humphrey-Murto et al., 2017). The method offered the further advantage of providing participants with equal opportunity to express their opinions (De Villiers, De Villiers, & Kent, 2005), thereby negating the possible drawbacks of face-to-face interactions and resulting in a ‘process gain’ (Powell, 2003).
B. Participant Panel
The panel comprised nation-wide stakeholders of healthcare (Table 1) from both rural and urban regions, who were presumably exposed to diverse forms of medical services and geographical variations in their distribution.
| Stakeholder group | Description | Number |
| --- | --- | --- |
| Medical teachers | Four Medical Faculties (nation-wide) | 69 (44%) |
| Medical students | Four Medical Faculties (nation-wide) – fourth and final years | 36 (23%) |
| Hospital doctors | Four Teaching Hospitals (nation-wide) | 14 (9%) |
| Healthcare staff | Selected secondary and tertiary hospitals | 5 (3%) |
| General practitioners | Selected GP practices around the country | 2 (1%) |
| Medical administrators | Selected secondary and tertiary hospitals | 5 (3%) |
| Policy makers in healthcare | Ministry of Health, professional associations and regulatory bodies | 2 (1%) |
| General public | Employees of selected private and state banks; non-academic staff of four Medical Faculties; teachers of selected private and state schools | 25 (16%) |
Table 1. Composition of the Delphi panel
1) Delphi Round I: The question posed in the first round was ‘What are the attributes of professionalism you would expect in a doctor working in the Sri Lankan context?’ No limit was placed on the number of answers to this open-ended question. The question was piloted among a group comprising local medical educationists, medical officers and members of the public, and edited based on their feedback. Invitations to participate were emailed and informed consent was obtained through an online link. Participants were then automatically granted access to the online questionnaire. An email reminder was sent to the initial mailing list after one week. The questionnaire was accessible for three weeks from the date of launch. Invitations were emailed to 920 individuals, of whom 158 (17.2%) responded.
To analyse the data of Round I, we used conventional content analysis, which is employed when literature and theory on a phenomenon are limited, thereby allowing themes to emerge from and be grounded in the data itself (Hsieh & Shannon, 2005). Initially, individual responses – considered as meaning units – were listed verbatim, removing exact duplicates. Meaning units varied from single-word responses to longer phrases, and were therefore divided into short and long meaning units. The latter were shortened into condensed meaning units, while preserving the original meaning. Finally, condensed and short meaning units were coded. Similar phrases were assigned the same code. A final scrutiny of the codes allowed the removal of synonymous items and the coupling of items with similar meaning. We followed this process iteratively until the items had been refined to the maximum extent possible.
With two additional experts, we reviewed the appropriateness of items. Four common misconceptions of professionalism (distractors) were added, in order to prevent inattentive responses to the large number of items included in the subsequent round (Meade & Craig, 2012). A search of the literature also revealed a number of evidence-based items that had not emerged in the data. Three of these were agreed to be relevant and important to the local context, and were therefore added to the list, to ensure comprehensive coverage of items.
2) Delphi Round II: The attributes of professionalism were compiled into another online survey and emailed to all individuals initially invited to participate in the study, three weeks after completion of the first round; 118 of the initial sample (dropout rate = 25.3%) participated in Round II. Respondents were asked to rate each item on a five-point Likert scale ranging from ‘not important’ to ‘very important’, according to perceived importance in the local context. An email reminder was sent out after one week. The form was accessible for three weeks.
The aim of this second round was to select the attributes considered most essential. The Content Validity Index (CVI) was chosen for this purpose, over less rigorous methods such as prioritisation by mean. The CVI is the proportion of respondents rating an item as essential (Polit & Beck, 2006). Responses ‘4’ and ‘5’ of the Likert scale were taken to reflect ‘essentialness’. The general acceptance is that in a study with a large number of raters (as in this case), a CVI > 0.78 indicates that an item is essential (Lynn, 1986).
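As a concrete illustration of this calculation (a minimal sketch only; the ratings below are invented, not the study's data), the CVI of an item is simply the share of raters choosing ‘4’ or ‘5’:

```python
def cvi(ratings):
    """Content Validity Index: the proportion of raters who scored an
    item 4 or 5 on the five-point Likert scale (Polit & Beck, 2006)."""
    essential = sum(1 for r in ratings if r >= 4)
    return essential / len(ratings)

# Hypothetical ratings from ten panelists for one attribute:
item_ratings = [5, 4, 5, 3, 4, 5, 4, 2, 5, 4]
print(cvi(item_ratings))  # 8 of 10 raters chose 4 or 5, so CVI = 0.8
```

An item with a CVI of 0.8, as in this invented example, would just exceed the 0.78 cut-off and be retained as essential.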
To avoid the possibility of agreement being due to chance, Kappa statistics – a measure of inter-rater agreement and the probability of chance responses – were computed. K-values can range from -1 to +1; -1 indicating perfect disagreement below chance, +1, perfect agreement above chance and 0, agreement equal to chance (Randolph, n.d.). A K-value ≥0.7 indicates acceptable inter-rater agreement.
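The chance-corrected agreement described above can be sketched as follows, using Randolph's free-marginal multirater kappa; the rating counts are hypothetical, and `q` denotes the number of rating categories:

```python
def free_marginal_kappa(counts, q):
    """Randolph's free-marginal multirater kappa.

    counts: one row per item; each row gives, per category, how many of
            the n raters assigned the item to that category.
    q:      number of rating categories.
    """
    n = sum(counts[0])        # raters per item
    n_items = len(counts)
    # Observed agreement: proportion of agreeing rater pairs, averaged over items.
    p_obs = sum(c * (c - 1) for row in counts for c in row) / (n_items * n * (n - 1))
    p_chance = 1.0 / q        # expected chance agreement with free marginals
    return (p_obs - p_chance) / (1 - p_chance)

# Three items rated 'essential'/'not essential' by 10 raters (hypothetical):
k = free_marginal_kappa([[9, 1], [10, 0], [8, 2]], q=2)  # ≈ 0.63
```

A value of K ≥ 0.7, as reported in this study (K = 0.77), therefore indicates agreement well above what chance alone would produce.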
As the final step, the prioritised list of attributes was emailed to participants requesting further comments; however, none were received. The Delphi study concluded at this stage.
In order to organise the attributes in a more meaningful manner, we attempted to identify the emerging domains of professionalism. Initially, this was performed through an Exploratory Factor Analysis, a method that identifies underpinning, latent ‘factors’ inferred from the variables. Scholars have recommended, however, that quantitative analysis of studies with a social science perspective be complemented with qualitative methods (Tavakol & Sandars, 2014). Therefore, a panel of experts individually sorted the attributes into themes using the constant comparison technique; data were sorted and systematically compared, and the emergence of a theme was acknowledged when many similar items appeared across the data set (Maykut & Morehouse, 1994). The results were compared with those of the Factor Analysis and, by identifying common domains, a final framework of professionalism was formulated. As an additional measure, the internal consistency (Cronbach’s Alpha) within each domain was computed to determine close clustering of items. The framework developed was vetted by a group of reviewers.
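The internal-consistency check can be sketched as follows (a minimal illustration with invented scores, not the study's data): Cronbach's alpha compares the sum of the individual item variances with the variance of each respondent's total score.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for one domain.

    items: one list per attribute, holding each respondent's rating,
           with respondents in the same order across lists.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent total
    item_var = sum(pvariance(col) for col in items)    # sum of item variances
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Three attributes rated by four respondents (hypothetical):
alpha = cronbach_alpha([[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]])  # ≈ 0.82
```

Values of roughly 0.7 and above are conventionally read as acceptable clustering, consistent with the 0.882, 0.918 and 0.755 later reported for the three domains.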
III. RESULTS
A. Profile of Participant Panel
The response rates of the different participant groups are depicted in Table 1. As demographic details were not re-obtained in Round II, the profile of this group could not be determined.
B. Results of Round I
A total of 288 items were initially documented, and condensed to 53 attributes following content analysis. The three evidence-based items and four distractors were added to make a final inventory of 60 items (Table 2).
C. Results of Round II
1) Essential Attributes of Professionalism: Forty-six items achieved a CVI > 0.78 and were therefore labelled as ‘essential’. The attributes are arranged in descending order of importance in Table 2. The Kappa value was 0.77, confirming that rating of items was not due to chance.
| Attribute of professionalism | CVI |
| --- | --- |
| Possessing adequate medical knowledge and skills | 0.99 |
| Displaying a sense of responsibility | 0.98 |
| Being compassionate and caring | 0.97 |
| Managing limited resources for optimal outcome | 0.97 |
| Ensuring confidentiality and patient privacy | 0.97 |
| Being punctual | 0.97 |
| Maintaining standards in professional practice | 0.97 |
| Displaying effective communication skills | 0.97 |
| Displaying honesty and integrity | 0.97 |
| Displaying commitment to work | 0.97 |
| Being empathetic towards patients | 0.96 |
| Being able to work as a member of a team | 0.96 |
| Being reliable | 0.96 |
| Displaying professional behaviour and conduct | 0.96 |
| Being accountable for one’s actions and decisions | 0.96 |
| Being available | 0.95 |
| Being responsive | 0.95 |
| Being clear in documentation | 0.95 |
| Being patient | 0.94 |
| Displaying effective problem-solving skills | 0.94 |
| Understanding limitations in professional competence | 0.94 |
| Being respectful and polite | 0.94 |
| Ability to effectively manage time | 0.93 |
| Being a committed teacher/supervisor | 0.92 |
| Being open to change | 0.92 |
| Commitment to continuing professional development | 0.91 |
| Having scientific thinking and approach | 0.91 |
| Being accurate and meticulous | 0.91 |
| Maintaining work-life balance | 0.91 |
| Displaying self confidence | 0.91 |
| Ability to provide and receive constructive criticism | 0.90 |
| Non-judgmental attitude and ensuring equality | 0.90 |
| Engaging in reflective practice | 0.90 |
| Respecting patient autonomy | 0.90 |
| Being accessible | 0.88 |
| Avoiding substance and alcohol misuse* | 0.86 |
| Working towards a common goal with the health system | 0.85 |
| Providing leadership | 0.84 |
| Being humble | 0.84 |
| Advocating for patients | 0.83 |
| Maintaining professional relationships | 0.83 |
| Adhering to a professional dress code | 0.82 |
| Avoiding conflicts of interest | 0.82 |
| Displaying sensitivity to socio-cultural and religious issues related to patient care | 0.81 |
| Being composed | 0.80 |
| Stands for professional autonomy** | 0.79 |
| Being amiable | 0.77 |
| Displaying sensitivity to socio-cultural and religious issues in dealing with colleagues and students* | 0.76 |
| Being assertive | 0.75 |
| Being creative in work related matters | 0.74 |
| Not money minded | 0.73 |
| Willingness to work in rural areas | 0.72 |
| Respecting professional hierarchy** | 0.69 |
| Possessing knowledge in areas outside of medicine | 0.68 |
| Being altruistic | 0.65 |
| Adhering to socio-cultural norms* | 0.64 |
| Fluency in multiple languages | 0.62 |
| Abiding by religious beliefs | 0.32 |
| Displaying self-importance** | 0.19 |
| Using professional status for personal advantage** | 0.07 |

Note: *Evidence-based items sourced from the literature. **Distractors.
Table 2. Attributes of professionalism arranged in order of perceived importance
The highest rated attributes were, ‘possessing adequate medical knowledge and skills’, followed by ‘displaying a sense of responsibility’ and ‘being compassionate and caring’. Five items were mentioned collectively across the main stakeholder groups:
- Being empathetic towards patients
- Possessing adequate knowledge and skills
- Displaying effective communication skills
- Displaying honesty and integrity
- Being respectful and polite
2) Development of a Professionalism Framework: The main themes of professionalism identified by the expert panel and through exploratory factor analysis are summarised in Table 3.
| Panelist 1 | Panelist 2 | Panelist 3 | Factor Analysis |
| --- | --- | --- | --- |
| Professionalism in interactions with patients (1) | Interpersonal (1,2) | Competency – competency in managing patients and clinical reasoning (3) | Qualities required to effectively work within the healthcare team (2) |
| Professionalism in interactions in the workplace (2) | Intrapersonal (4) | Accountability – taking responsibility for work performed as a doctor in the clinical context and in interactions with co-workers (2) | Clinical competency, excellence and continuous development (3) |
| Professionalism in fulfilling expectations of the profession and society (3) | Societal/public (3) | Attitude – thought process, internal qualities of the doctor (4) | Equal and fair treatment of patients (1) |
| | | Behaviour – external actions of the doctor (1) | Humane qualities in dealing with patients (1) |
Table 3. Themes of professionalism identified quantitatively and qualitatively*
Based on the convergence of these domains, a framework was developed that portrayed professionalism as encompassing three main elements: individual traits, inter-personal interactions, and responsibilities to the profession and community (Figure 1). Cronbach’s Alpha values for the three domains were 0.882, 0.918 and 0.755 respectively, confirming that the constituent items clustered closely within each overarching element.

| Professionalism in interactions with patients and co-workers | Professionalism as an individual | Professionalism in fulfilling expectations of the profession and society |
| --- | --- | --- |
| Ensuring confidentiality and patient privacy | Displaying a sense of responsibility | Managing limited resources for optimal outcome |
| Displaying effective communication skills | Being punctual | Maintaining standards in professional practice |
| Being empathetic towards patients | Displaying honesty and integrity | Displaying professional behaviour and conduct |
| Being able to work as a member of a team | Displaying commitment to work | Working towards a common goal with the health system |
| Being available | Being reliable | Adhering to a professional dress code |
| Being responsive | Being accountable for one’s actions and decisions | Avoiding conflicts of interest |
| Being respectful and polite | Being clear in documentation | Stands for professional autonomy |
| Being a committed teacher/supervisor | Displaying effective problem-solving skills | Possessing adequate medical knowledge and skills |
| Respecting patient autonomy | Understanding limitations in professional competence | Maintaining work-life balance |
| Being accessible | Ability to effectively manage time | Avoiding substance and alcohol misuse |
| Providing leadership | Being open to change | |
| Advocating for patients | Commitment to continuing professional development | |
| Maintaining professional relationships | Having scientific thinking and approach | |
| Displaying sensitivity to socio-cultural and religious issues related to patient care | Being accurate and meticulous | |
| Being compassionate and caring | Displaying self confidence | |
| Being patient | Non-judgemental attitude and ensuring equality | |
| Ability to provide and receive constructive criticism | Engaging in reflective practice | |
| Being humble | | |
| | Being composed | |
Figure 1. A framework of medical professionalism for Sri Lanka
IV. DISCUSSION
A. Framework of Professionalism Attributes
The framework depicts a progressively widening circle, with desirable individual traits at its core, expanding into interactions within the workplace and, finally, responsibilities as a professional in wider society. It thus captures the fundamental areas that must be addressed in aspiring towards professionalism. The three domains are largely congruent with the broad areas of professionalism described by Van de Camp et al. (2004) and Hodges et al. (2011). Although portrayed as distinct entities, the domains should not be interpreted as evolving in sequential stages; professional development should ideally occur in all three areas simultaneously.
Frameworks developed in other Eastern cultures have highlighted significant tenets of local traditions and ethos that have shaped perceptions on professionalism. Confucian values in Taiwan (Ho, Yu, Hirsh, Huang, & Yang, 2011), principles of Bushido in Japan (Nishigori, Harrison, Busari, & Dornan, 2014), and Islamic teachings within Egypt (Al-Eraky et al., 2014), have been shown to be deeply entrenched within such understandings.
Sri Lanka possesses a rich and diverse cultural heritage. British ideologies in particular appear to influence local medical education (Uragoda, 1987), and the conceptualisation of professionalism (Babapulle, 1992; Monrouxe et al., 2017), resulting in a strong emphasis on ethical behaviour. Sri Lanka is widely acknowledged to have a ‘religious’ background. Theravada Buddhism, the religion followed by the majority of Sri Lankans, as well as less widespread religions such as Christianity, Hinduism and Islam, exert a significant influence on local culture (Gildenhuys, 2004). Virtues collectively upheld by these doctrines, such as generosity, impartiality, honesty and peace are thought to be central to the development of professionalism (Keown, 2002). Of these, honesty, impartiality (equality) and peace (composure) were echoed within the theme ‘professionalism as an individual’, as were responsibility, reliability and accountability. These characteristics, built on a foundation of integrity, are fundamental tenets of Sri Lanka’s socio-cultural framework. Thus, we reasoned that ‘professionalism as an individual’ was ideally depicted as central to the local concept of professionalism, highlighting the importance of building a solid foundation of fundamental characteristics.
We also drew on elements of the ‘cultural dimension’ (Hofstede, n.d.) and ‘cultural value’ (Schwartz, 1999) theories in developing the framework. Accordingly, the collectivist nature of local culture provides the basis for qualities that enable harmonious interactions with others, as depicted in the second domain. The hierarchical disposition of local society dictates that the doctor is duty-bound to ensure that responsibilities to the profession and community are met.
B. Essential Attributes of Professionalism
Among the essential items, broad areas encompassing competence, humanism, interpersonal skills and ethics were prioritised. Qualities most consistently mentioned in the literature – accountability, integrity and respect – received high ratings (Van de Camp et al., 2004). Reflective practice, understanding limitations in practice, accepting constructive criticism and continuous professional development – ‘cornerstones’ of the medical profession – were also labelled as significant (Chandratilake et al., 2012; Wynia et al., 2014), in contrast to other Eastern settings (Adkoli, Al-Umran, Al-Sheikh, Deepak, & Al-Rubaish, 2011). The striking omission was altruism, which was intriguingly rated as non-essential. Altruism has been named as one of the most consistently valued attributes of professionalism worldwide (Van de Camp et al., 2004), and would presumably be espoused in the local collectivist culture. Our findings suggest that even qualities accepted as key tenets of professionalism may not be equally valued cross-culturally. However, it has been claimed that altruism is traditionally a Western concept (Nishigori et al., 2014), and the acceptance of altruism as a composite of professionalism has been challenged in recent years, on the premise that selflessness may in fact be causing considerable harm (Harris, 2018; Nishigori, Suzuki, Matsui, Busari, & Dornan, 2019).
Participants rated ‘possessing adequate medical knowledge and skills’ as the most essential professionalism attribute. This coincides with findings from Canada (Brownell & Cote, 2001) and Asia (Leung et al., 2012; Pan et al., 2013), though conflicting with a school of thought that considers competence to be the foundation of professionalism, rather than an integral part of it (Stern, 2006). The primacy afforded to knowledge and skills most likely stems from the significance placed on education, which is upheld in Sri Lanka as the primary means of elevating one’s socio-economic status. The emphasis on responsibility and compassion – the second and third highest rated items – as well as morality and empathy, can be attributed to the deeply religious background of the country. It was unsurprising that respectfulness was prioritised, being a cardinal virtue embraced by Sri Lankans, as in other Eastern settings (Nishigori et al., 2014).
A comparison of professionalism attributes hailed as important in various contexts, with the highest rated qualities locally, revealed a convergence of several items (Table 4). This provides assurance that the local conceptualisation of professionalism reflects the ‘core’ principles of medical professionalism and shows considerable alignment with definitions provided by professional bodies around the world (General Medical Council [GMC], 2013; Medical Professionalism Project, 2002).
| Sri Lanka | USA (American Board of Internal Medicine, 2001) | Western countries (Hilton & Slotnick, 2005) | Canada (Steinert, Cruess, Cruess, Boudreau, & Fuks, 2007) | Taiwan (Ho et al., 2011) | China (Pan et al., 2013) |
| --- | --- | --- | --- | --- | --- |
| Knowledge and skills | | | Competence | Clinical competence | Clinical competence |
| Responsibility | Accountability | Accountability; social responsibility | Responsibility | Accountability | Accountability |
| Compassion and caring | | | | Humanism | Humanism |
| Managing limited resources for optimal outcome | | | | | Economic consideration |
| Confidentiality and patient privacy | | | | | |
| Punctuality | | | | | |
| Maintaining standards in professional practice | Excellence | | | Excellence | Excellence |
| Effective communication skills | | | | Communication | Communication |
| Honesty and integrity | Integrity | | Honesty; integrity | Integrity | |
| Commitment to work | Duty | | Commitment | | |
| | Altruism | | Altruism | Altruism | Altruism |
| | | Respect | Respect | | |
| | | Self-awareness; reflection | Self-regulation | | Self-management |
| | | Teamwork | Teamwork | | Teamwork |
| | | Ethical practice | Ethics | Ethics | Ethics |
| | | | Morality | | Morality |
| | Honour | | | | |
| | | | Autonomy | | |
| | | | | | Health promotion |
Table 4. Comparison of main attributes of professionalism identified locally with those of Western and Eastern contexts
Interestingly, certain items globally recognised as insignificant in terms of professionalism (work-life balance, leadership, professional appearance and composure) (Chandratilake et al., 2012), were highlighted as essential locally. The local expectation that professionals maintain an appearance befitting their social status, and the high power-distance between doctor and patient (Hofstede, n.d.), could have contributed to the emphasis on appearance. Similarly, power distance could explain the significance placed on leadership, a crucial skill required to handle subordinates and patients at the ‘lower end’ of the power spectrum. A promising finding was the importance placed on ‘work-life balance’, complementing the lack of emphasis on altruism and coinciding with recommendations of multiple professional bodies that underscore the value of personal well-being (GMC, 2013). The significance assigned to composure can be attributed to Sri Lanka’s conservative nature (Schwartz, 1999), where cultural norms dictate that public displays of intense emotion be suppressed.
It was intriguing to note that, of the four distractors (which were expected to be rated as non-essential), ‘stands for professional autonomy’ achieved a CVI just above the baseline. In Sri Lanka, political influence is known to permeate the workplace; this attribute can therefore be viewed as the ability to perform one’s duties in the midst of such pressures. The paternalistic nature of the doctor-patient relationship, common to many Eastern cultures, could also underpin the significance afforded to professional autonomy (Ho & Al-Eraky, 2016; Susilo, Marjadi, van Dalen, & Scherpbier, 2019). Incidentally, this item was not corroborated elsewhere in the literature and was therefore unique to this study. Other items exclusive to the Sri Lankan context were clarity in documentation, patience, time management and maintaining professional relationships.
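The content validity index (CVI) used to retain or discard items can be sketched as follows. This is a minimal illustration only, assuming (per Lynn, 1986, and Polit & Beck, 2006, both cited in the reference list) that the item-level CVI is the proportion of panellists who endorse an item; the category labels, the sample ratings and the 0.5 cut-off in the usage line are illustrative assumptions, not this study's actual parameters.

```python
def item_cvi(ratings, endorsing=frozenset({"essential"})):
    """Item-level content validity index: the proportion of panellists
    whose rating endorses the item (Lynn, 1986)."""
    return sum(r in endorsing for r in ratings) / len(ratings)

# Hypothetical panel ratings for one attribute; labels are illustrative.
ratings = ["essential", "essential", "desirable", "essential",
           "non-essential", "essential", "essential", "desirable"]
cvi = item_cvi(ratings)
print(round(cvi, 3))  # 5 of 8 panellists endorsed the item -> 0.625
```

An item would then be kept when its CVI clears whatever baseline the panel agreed on, e.g. `item_cvi(ratings) > 0.5`.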
As a whole, it is evident that the local conceptualisation of professionalism, while including areas unique to the Sri Lankan context, largely coincides with perceptions of professionalism shared by the global medical community.
C. Strengths and Limitations
The study has responded to calls for culture-specific discourse on professionalism (Monrouxe et al., 2017) and for prioritisation of the qualities essential to professionalism (Jha, Bekker, Duffy, & Roberts, 2007). Many studies seeking to define professionalism have drawn on the views of particular stakeholder groups in isolation; few have attempted to collate the views of multiple groups (Ho et al., 2011; Leung et al., 2012; Pan et al., 2013). Scholars have challenged the medical profession to determine who should define professionalism, with the belief that this onus should not be placed solely on doctors (Wear & Kuczewski, 2004). The assimilation of the views of multiple stakeholder groups was therefore a significant strength of this study.
Although the initial list of 920 individuals who were invited to participate in the study was representative of all groups of stakeholders, the majority of those who responded were medical teachers and students. Thus, the study results predominantly reflect the views of these two groups. This may have precluded identification of attributes considered essential by the less represented groups, especially the public.
Another limitation of the study was the exclusive use of English, which though widely used in Sri Lanka, is not the first language of the majority of the population. The decision was justified as all potential participant groups were posited to be adequately fluent in English to participate. However, we recognise that providing the option of Sinhalese and Tamil translations may have increased participation in certain groups (healthcare staff and the public).
Finally, we acknowledge that while this framework reflects current perceptions of medical professionalism, the notion is far from static and will undeniably evolve. We therefore propose that future research involve repeated discussions to inform the evolution of the framework over time, while being mindful of achieving a fair balance of stakeholder representation.
V. CONCLUSION
This study has enabled us, through a consensus-seeking approach, to paint a picture of medical professionalism grounded in the views of the multiple stakeholders of healthcare in Sri Lanka. The conceptual framework that represents these opinions reflects how perceptions of professionalism are shaped by cultural, societal, religious, economic and other factors. Moreover, it has enabled identification of the individual elements of professionalism expected of a doctor in the local context, and prioritisation of those most essential among them.
Notes on Contributors
Amaya Ellawala MBBS, PGDME, MD, is a Lecturer in Medical Education in the Department of Medical Education, Faculty of Medical Sciences, University of Sri Jayewardenepura, Sri Lanka. Amaya Ellawala reviewed the literature, developed the methodological framework for the study, performed data collection, analysis and wrote the manuscript.
Madawa Chandratilake MBBS, MMed, PhD, is a Professor of Medical Education at the Department of Medical Education, Faculty of Medicine, University of Kelaniya, Sri Lanka. Madawa Chandratilake contributed to the development of the methodological framework, data analysis and writing of the manuscript.
Nilanthi de Silva MBBS, MSc, MD, is a Senior Professor in the Department of Parasitology, Faculty of Medicine, University of Kelaniya, Sri Lanka. Nilanthi de Silva contributed to the development of the methodological framework, data analysis and writing of the manuscript.
All authors read and approved the final manuscript.
Ethical Approval
Ethics approval was obtained from the Ethics Review Committee, Faculty of Medicine, University of Kelaniya (P/15/01/2016).
Funding
This study was not funded.
Declaration of Interest
The authors declare that they have no competing interests.
References
Adkoli, B. V., Al-Umran, K. U., Al-Sheikh, M., Deepak, K. K., & Al-Rubaish, A. M. (2011). Medical students’ perception of professionalism: A qualitative study from Saudi Arabia. Medical Teacher, 33(10), 840–845.
Al-Eraky, M. M., & Chandratilake, M. (2012). How medical professionalism is conceptualised in Arabian context: A validation study. Medical Teacher, 34(S1), S90–S95.
Al-Eraky, M. M., Chandratilake, M., Wajid, G., Donkers, J., & Van Merrienboer, J. G. (2014). A Delphi study of medical professionalism in Arabian Countries: The Four-Gates Model. Medical Teacher, 36, S8–S16.
American Board of Internal Medicine. (2001). Project Professionalism. Philadelphia, USA: American Board of Internal Medicine.
Babapulle, C. J. (1992). Teaching of medical ethics in Sri Lanka. Medical Education, 26(3), 185–189.
Birden, H., Glass, N., Wilson, I., Harrison, M., Usherwood, T., & Nass, D. (2014). Defining professionalism in medical education: A systematic review. Medical Teacher, 36(1), 47–61.
Brownell, A. K. W., & Cote, L. (2001). Senior residents’ views on the meaning of professionalism and how they learn about it. Academic Medicine, 76(7), 734–737.
Chandratilake, M., Mcaleer, S., & Gibson, J. (2012). Cultural similarities and differences in medical professionalism: A multi-region study. Medical Education, 46(3), 257–266.
De Villiers, M. R., De Villiers, P. J. T., & Kent, A. P. (2005). The Delphi technique in health sciences education research. Medical Teacher, 27(7), 639–643.
General Medical Council. (2013). Good Medical Practice. London, UK: GMC.
Gildenhuys, J. S. H. (2004). Ethics and Professionalism. Stellenbosch. South Africa: Sun Press.
Harris, J. (2018). Altruism: Should it be included as an attribute of medical professionalism? Health Professions Education, 4, 3–8.
Hilton, S. R., & Slotnick, H. B. (2005). Proto-professionalism: How professionalisation occurs across the continuum of medical education. Medical Education, 39, 58–65.
Ho, M., & Al-Eraky, M. (2016). Professionalism in context: Insights from the United Arab Emirates and beyond. Journal of Graduate Medical Education, 8(2), 268–270.
Ho, M., Yu, K., Hirsh, D., Huang, T., & Yang, P. (2011). Does one size fit all? Building a framework for medical professionalism. Academic Medicine, 86(11), 1407–1414.
Hodges, B. D., Ginsburg, S., Cruess, R., Cruess, S., Delport, R., Hafferty, F., … Wade, W. (2011). Assessment of professionalism: Recommendations from the Ottawa 2010 conference. Medical Teacher, 33(5), 354–363.
Hofstede, G. (n.d.). Cultural Dimensions. Retrieved from http://geerthofstede.com/culture-geert-hofstede-gert-jan-hofstede/6d-model-of-national-culture/
Hofstede, G. (2011). Dimensionalizing cultures: The Hofstede model in context. Online Reading in Psychology and Culture, 2(1), 1–26.
Hsieh, H., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288.
Humphrey-Murto, S., Varpio, L., Wood, T. J., Gonsalves, C., Ufholz, L., Mascioli, K., … Foth, T. (2017). The use of the Delphi and other consensus group methods in medical education research. Academic Medicine, 92(10), 1491–1498.
Irby, D. M., & Hamstra, S. J. (2016). Parting the clouds: Three professionalism frameworks in medical education. Academic Medicine, 91(12), 1606–1611.
Jha, V., Bekker, H. L., Duffy, S. R. G., & Roberts, T. E. (2007). A systematic review of studies assessing and facilitating attitudes towards professionalism in medicine. Medical Education, 41, 822–829.
Keown, D. (2002). Buddhism and medical ethics. Principles of Practice, 7, 39–70.
Lesser, C. S., Lucey, C. R., Egener, B., Braddock, C. H., Linas, S. L., & Levinson, W. (2010). A behavioral and systems view of professionalism. Journal of the American Medical Association, 304(24), 2732–2737.
Leung, D. C., Hsu, E. K., & Hui, E. C. (2012). Perceptions of professional attributes in medicine: A qualitative study in Hong Kong. Hong Kong Medical Journal, 18(4), 318–324.
Lynn, M. R. (1986). Determination and quantification of content validity. Nursing Research, 35, 382–385.
Maykut, P., & Morehouse, R. (1994). Beginning Qualitative Research: A Philosophic and Practical Guide. London, UK: Falmer Press.
Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17(3), 437-455.
Medical Professionalism Project. (2002). Medical professionalism in the new millennium: A physicians’ charter. The Lancet, 359, 520-522.
Monrouxe, L. V., Chandratilake, M., Gosselin, K., Rees, C. E., & Ho, M. J. (2017). Taiwanese and Sri Lankan students’ dimensions and discourses of professionalism. Medical Education, 51(7), 1-14.
Nishigori, H., Harrison, R., Busari, J., & Dornan, T. (2014). Bushido and medical professionalism in Japan. Academic Medicine, 89(4), 560–563.
Nishigori, H., Suzuki, T., Matsui, T., Busari, J., & Dornan, T. (2019). A two-edged sword: Narrative inquiry into Japanese doctors’ intrinsic motivation. The Asia Pacific Scholar, 4(3), 24-32.
Pan, H., Norris, J. L., Liang, Y., Li, J., & Ho, M. (2013). Building a professionalism framework for healthcare providers in China: A Nominal Group technique study. Medical Teacher, 35, e1531–e1536.
Polit, D., & Beck, C. (2006). The content validity index: Are you sure you know what’s being reported? Critique and recommendations. Research in Nursing and Health, 29, 489–497.
Powell, C. (2003). The Delphi Technique: Myths and realities – Methodological issues in nursing research. Journal of Advances in Nursing, 41(4), 376–382.
Randolph, J. (n.d.). Online Kappa Calculator. Retrieved from http://justusrandolph.net/kappa/
Schwartz, S. H. (1999). A theory of cultural values and some implications for work. Applied Psychology: An International Review, 48(1), 23–47.
Sri Lanka Medical Council. (2009). Guidelines on Ethical Conduct for Medical and Dental Practitioners Registered with the Sri Lanka Medical Council. Colombo, Sri Lanka: SLMC.
Steinert, Y., Cruess, R. L., Cruess, S. R., Boudreau, J. D., & Fuks, A. (2007). Faculty development as an instrument of change: A case study on teaching professionalism. Academic Medicine, 82(11), 1057-1064.
Stern, D. T. (Ed.). (2006). Measuring Medical Professionalism. New York, USA: Oxford University Press.
Susilo, A. P., Marjadi, B., van Dalen, J., & Scherpbier, A. (2019). Patients’ decision-making in the informed consent process in a hierarchical and communal culture. The Asia Pacific Scholar, 4(3), 57-66.
Tavakol, M., & Sandars, J. (2014). Quantitative and qualitative methods in medical education research: AMEE Guide No 90: Part II. Medical Teacher, 36(10), 838–848.
Uragoda, C. (1987). A History of Medicine in Sri Lanka – From the Earliest Times to 1948. Colombo, Sri Lanka: Sri Lanka Medical Association.
Van de Camp, K., Vernooij-Dassen, M. J. F. J., Grol, R. P. T. M., & Bottema, B. J. A. M. (2004). How to conceptualize professionalism: A qualitative study. Medical Teacher, 26(8), 696–702.
Van Mook, W. N. K. A., Van Luijk, S. J., O’Sullivan, H., Wass, V., Schuwirth, L. W., & Van Der Vleuten, C. P. M. (2009). General considerations regarding assessment of professional behaviour. European Journal of Internal Medicine, 20(4), e90–e95.
Wear, D., & Kuczewski, M. G. (2004). The professionalism movement: Can we pause? American Journal of Bioethics, 4(2), 1–10.
Wilkinson, T. J., Wade, W. B., & Knock, L. D. (2009). A blueprint to assess professionalism: Results of a systematic review. Academic Medicine, 84(5), 551–558.
Wynia, M. K., Papadakis, M. A., Sullivan, W. M., & Hafferty, F. W. (2014). More than a list of values and desired behaviors: A foundational understanding of medical professionalism. Academic Medicine, 89(5), 712–714.
*Amaya Ellawala
Department of Medical Education,
Faculty of Medical Sciences,
University of Sri Jayewardenepura,
Sri Lanka
Email address: amaya@sjp.ac.lk
Submitted: 17 March 2020
Accepted: 3 June 2020
Published online: 5 January, TAPS 2021, 6(1), 60-69
https://doi.org/10.29060/TAPS.2021-6-1/OA2239
Frank Bate1, Sue Fyfe2, Dylan Griffiths1, Kylie Russell1, Chris Skinner1, Elina Tor1
1University of Notre Dame Australia, Australia; 2Curtin University, Australia
Abstract
Introduction: In 2017, the School of Medicine of the University of Notre Dame Australia implemented a data-informed mentoring program as part of a more substantial shift towards programmatic assessment. Data-informed mentoring, in an educational context, can be challenging, with the boundaries between mentor, coach and assessor roles sometimes blurred. Mentors may be required to concurrently develop trust relationships, guide learning and development, and assess student performance. The place of data-informed mentoring within an overall assessment design can also be ambiguous. This paper is a preliminary evaluation study of the implementation of data-informed mentoring at a medical school, focusing specifically on how students and staff reacted and responded to the initiative.
Methods: Action research framed and guided the conduct of the research. Mixed methods, involving qualitative and quantitative tools, were used with data collected from students through questionnaires and mentors through focus groups.
Results: Both students and mentors appreciated data-informed mentoring and indications are that it is an effective augmentation to the School’s educational program, serving as a useful step towards the implementation of programmatic assessment.
Conclusion: Although data-informed mentoring is valued by students and mentors, more work is required to: better integrate it with assessment policies and practices; stimulate students’ intrinsic motivation; improve task design and feedback processes; develop consistent learner-centred approaches to mentoring; and support data-informed mentoring with appropriate information and communications technologies. The initiative is described using an ecological model that may be useful to organisations considering data-informed mentoring.
Keywords: Data-Informed Mentoring, Mentoring, Programmatic Assessment, E-Portfolio
Practice Highlights
- Students and mentors appreciated the introduction of data-informed mentoring.
- Assessment policies and practices should be integrated with data-informed mentoring.
- Data-informed mentoring presents curriculum challenges in task design and framing feedback.
- The student context informs the data-informed mentoring approach (learner-centred to mentor-directed).
- Data-informed mentoring requires supportive information and communications technologies.
I. INTRODUCTION
An often-cited definition of mentoring highlights the role of experienced and empathetic others in guiding students to re-examine their ideas, learning, and personal and professional development (Standing Committee on Postgraduate Medical and Dental Education, 1998).
Heeneman and de Grave (2017) identify some subtle differences between traditional conceptions of mentoring and the type of mentoring that is required under programmatic assessment, which in this paper we refer to as Data-Informed Mentoring (D-IM). For example, D-IM is embedded in a curriculum in which rich data on student progress arises from student interaction with assessment tasks, informing and enhancing their progress (see Appendix). Further, in programmatic assessment, the learning portfolio is typically the setting in which the mentor-mentee relationship develops. This setting brings together institutional imperatives (e.g. assessable tasks), and personal imperatives such as evidence of competence and personal reflection. Situating mentoring in a curriculum and assessment framework impacts upon the mentoring relationship.
Meeuwissen, Stalmeijer, and Govaerts (2019) propose that a different type of mentoring is required under programmatic assessment. Mentors interpret data and feedback provided by content experts across domains of learning, thus providing an evidence base to facilitate student reflection. They might also take on a variety of roles (e.g. critical friend, coach, assessor) that could influence the mentoring relationship, including the level of trust that is established with the student. These challenges suggest that conventional definitions of mentoring might not capture the essence of D-IM. Whilst the availability of rich information potentially enhances the mentoring experience and personalises learning, mentors and students are challenged to make sense of and act upon this information; students might focus on issues that fall outside the scope of the data provided (e.g. their wellbeing); mentors may also struggle to delineate boundaries between multiple roles, or to draw a line on where their scope of practice, as a mentor, begins and ends.
Mentoring is a social construct and as such is best considered through a holistic lens taking account of societal, institutional and personal factors (Sambunjak, 2015). The current study adopted Sambunjak’s “ecological model” (2015, p. 48) as a framework to help understand the impact of D-IM (Figure 1). Societal, institutional and personal forces are inter-related. For example, a student’s approach to D-IM might be influenced by financial circumstances resulting in the need to work part-time (societal); a medical school’s assessment policy (institutional); or a student’s learning style (personal). The model is presented as a set of cogs where the optimal educational experience is achieved if all elements work in harmony. The study uses the ecological model to help answer the central research question that guided the study: how did students and staff react and respond to D-IM?

Figure 1. An ecological framework for conceptualizing D-IM (modified from Sambunjak, 2015)
This paper shares findings from the study derived from the first two years of data collection. Its focus is on the implementation of D-IM and how students and staff reacted to this implementation (Kirkpatrick & Kirkpatrick, 2006).
II. METHODS
The School of Medicine Fremantle (the School) of the University of Notre Dame Australia introduced D-IM as part of its incremental approach to programmatic assessment. The School offers a four-year doctor of medicine (MD) with around 100 students enrolling each year. The first two years are pre-clinical consisting of problem-based learning supported by lectures and small group learning. The final two years involve clinical rotations mostly located at hospital sites. Each year of the MD constitutes a course that students need to pass in order to progress to the next year. The School’s assessment mix includes knowledge-based examinations (multiple choice/case-based), Objective Structured Clinical Examinations, work-based assessments and rubric-based assessments (e.g. reflections). Examinations are administered mid-year and end-of-year for pre-clinical students and end-of-year for students in the clinical years.
All performance data inform D-IM. Regular feedback from assessors is provided and collated in an e-portfolio (supported by Blackboard) so that students have opportunities to reflect on their progress and plan future learning. Students are allocated a mentor each year, who has access to their e-portfolio.
Mentoring was provided by 26 pre-clinical de-briefing (CD) tutors whose role was to facilitate student reflection on their learning and support and guide their interpretation of the feedback they had received. D-IM was introduced to first year students in 2017 and first and second year students in 2018. Three mentoring meetings were conducted per student per year. CD tutors also have a role in assessing student performance and providing feedback. Each CD tutor has a CD group which is also their mentoring group (8-10 students). However, tasks are assessed and feedback is provided by a different tutor. This means that mentor and assessor functions are separated.
In preparation for the implementation of D-IM, targeted professional development was provided to tutors which unpacked the mentoring role and provided examples of how performance data can be used to underpin mentoring sessions.
The University of Notre Dame Australia Human Research Ethics Committee (HREC) provided ethical approval for the research, and a research team was formed in 2017. Action research guided the conduct of the research, as it aims to understand and influence the change process. Action research is the “systematic collection and analysis of data for the purpose of taking action and making change” (Gillis & Jackson, 2002, p. 264). It involves cycles of planning, implementing, observing and reflecting on the processes and consequences of the action. The subjects of the research have input into cycles and influence changes that are made as a result of feedback and reflection (Kemmis & McTaggart, 2000). Each cycle of the research runs for one year so that planning, action, observation and reflection can inform the next iteration.
Mixed methods research, involving qualitative and quantitative methods, was used. Data were collected each year from student questionnaires and from focus groups which included mentors. Participation in the research was underpinned by a Statement of Informed Consent. For the questionnaire, consent constituted ticking a box on an online form; for the focus groups, a physical form was signed before taking part. The student questionnaire comprised qualitative and quantitative components and posed nine statements on mentoring. The questionnaire was critically appraised by a panel of eight academic staff in May 2017, and it was agreed that it had attained face validity before it was administered in September 2017.
Students were asked to rate each statement of the questionnaire on a Likert-type scale from Strongly Disagree, Disagree, Neutral, Agree to Strongly Agree. For analysis, a numerical value was assigned to each response, from 1 = Strongly Disagree through to 5 = Strongly Agree. Quantitative data were downloaded from SurveyMonkey as Excel files for extraction of descriptive statistics and then imported into SPSS Version 25, in which statistical analysis was undertaken. Two analyses were conducted. The first, a non-parametric median test on students’ perceptions of each aspect of D-IM, is consistent with the purpose of action research to inform future practice. Responses to individual survey items on a Likert-type scale are ordinal in nature, and the distributions were not identical for the two cohorts; a median test was therefore used. This statistic compares the responses of two independent groups to individual survey items, with reference to the overall pooled median rating for the two cohorts combined. More specifically, the median test examines whether the same proportion of responses lies above and below the overall pooled median rating in each of the two cohorts, for each individual item. Second, an aggregate mean score was calculated from students’ responses to the nine statements in each cohort, providing an overall indication of the extent to which respondents were satisfied with the mentoring program. A parametric test (an independent t-test) was used to examine whether there were statistically significant differences in mean scores between the two independent cohorts.
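The 2×2 arithmetic behind the median test can be sketched with the Python standard library alone; the helper name below is ours, and `scipy.stats.median_test` (and `scipy.stats.ttest_ind` for the aggregate-mean comparison) offers an equivalent off-the-shelf route. Feeding in the counts reported for the first item of Table 1 reproduces the published statistic:

```python
from math import erfc, sqrt

def median_test_2x2(above_a, at_below_a, above_b, at_below_b):
    """Mood's median test for two groups, given counts above vs.
    at-or-below the pooled median (no continuity correction)."""
    a, b, c, d = above_a, at_below_a, above_b, at_below_b
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = erfc(sqrt(chi2 / 2))  # chi-square survival function for df = 1
    return chi2, p

# "The mentoring process was well organised" (Table 1):
# Cohort 1 2017: 6 above / 26 at-or-below; Cohort 1 2018: 6 / 45.
chi2, p = median_test_2x2(6, 26, 6, 45)
print(f"chi2={chi2:.3f}, p={p:.3f}")  # chi2=0.776, p=0.378
```

The closed-form conversion from chi-square to p works here because a 2×2 table has one degree of freedom.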
Qualitative data were coded from students’ comments to two open-ended questions in the student questionnaire: (1) Please comment on any aspect of the learning portfolio that you feel were particularly beneficial for your learning journey; and (2) Please comment on any aspect of the learning portfolio that could be improved in the future. Qualitative data from mentors through three focus groups in both 2017 and 2018 were recorded, transcribed and imported into Nvivo12 to help identify patterns across and within data sources. Data saturation was achieved after two focus group iterations. Two researchers independently coded students’ comments and staff transcripts and then met to discuss and resolve differences in interpretation. These codes were then presented to the broader team in which ideas were further unpacked and themes developed using Braun and Clarke’s (2006) thematic approach to analysis.
III. RESULTS
In 2017, 29% of the year 1 student cohort responded to the questionnaire (n=33) and in 2018, the response fraction across both Year 1 and Year 2 was 47% (n=98). The 2017 student cohort is described as Cohort 1 and the 2018 Student Cohort is Cohort 2. The response fraction for Cohort 1 increased from 29% in 2017 to 46% in 2018. In 2017, 21 staff participated in focus groups (7 of whom were mentors). In 2018, 17 staff took part (9 mentors). Tables 1-2 compare student responses to the 9 items on mentoring on the following basis:
- Over time in 2017 and 2018 within Cohort 1 (Table 1);
- For first year students–Cohort 1 2017 and Cohort 2 2018 (Table 2).
For each table, median ratings are shown for each item along with the results of the median test to discern statistically significant differences between or within cohorts. Table 1 compares Cohort 1 responses to D-IM over time.
| Item | Overall pooled median* | Cohort 1 2017 (n=32): n > pooled median | Cohort 1 2017: n ≤ pooled median | Cohort 1 2018 (n=51): n > pooled median | Cohort 1 2018: n ≤ pooled median | Median test |
| --- | --- | --- | --- | --- | --- | --- |
| The mentoring process was well organised | 4 | 6 | 26 | 6 | 45 | χ² = 0.776; df = 1; p = 0.378 |
| My mentor was personally very well organised | 5 | 0 | 32 | 0 | 50 | n/a** |
| There were an appropriate number of mentoring meetings throughout the year | 4 | 2 | 30 | 4 | 47 | χ² = 0.074; df = 1; p = 0.785 |
| My mentor was respectful | 5 | 0 | 32 | 0 | 51 | n/a** |
| My mentor listened to me | 5 | 0 | 32 | 0 | 50 | n/a** |
| My mentor asked thought-provoking questions which helped me to reflect | 4 | 10 | 22 | 12 | 39 | χ² = 0.602; df = 1; p = 0.438 |
| My mentor added value to my learning | 4 | 10 | 22 | 11 | 40 | χ² = 0.975; df = 1; p = 0.323 |
| My mentor helped me to set future goals that were achievable | 4 | 9 | 23 | 11 | 40 | χ² = 0.462; df = 1; p = 0.497 |
| The summaries provided of my performance in the Blackboard Community Site were useful in helping me to reflect on my progress | 3 | 17 | 16 | 14 | 37 | χ² = 4.983; df = 1; p = 0.026*** |
Note. *In the median test, a comparison is made between the median rating in each group to the ‘overall pooled median’ from both groups. **Values are less than or equal to the overall pooled median therefore Median Test could not be performed. ***Significant at p < 0.05 level.
Table 1. Student Perceptions of D-IM within Cohort 1 in 2017 and 2018–Median Tests for Individual Items
The only statistically significant difference noted for Cohort 1 was for the summaries of performance provided in Blackboard that were designed to underpin D-IM. The data provided in these summaries was less valued by students who engaged with D-IM in their second year.
The aggregate mean score in response to the statements on D-IM in the survey was positive in 2017 (M=4.02; SD=0.62; n=32). Mentoring continued to be well perceived by Cohort 1 as they progressed to second year in 2018 (M=3.80; SD=0.67; n=51). The slight difference in aggregate mean scores between 2017 and 2018 is not statistically significant (t=1.571; df=82, p=0.120). Table 2 compares first year students’ perceptions of D-IM.
| Item | Overall pooled median* | Cohort 1 (n=32): n > pooled median | Cohort 1: n ≤ pooled median | Cohort 2 (n=47): n > pooled median | Cohort 2: n ≤ pooled median | Median test |
| --- | --- | --- | --- | --- | --- | --- |
| The mentoring process was well organised | 4 | 6 | 26 | 9 | 37 | χ² = 0.008; df = 1; p = 0.928 |
| My mentor was personally very well organised | 5 | 0 | 32 | 0 | 47 | n/a** |
| There were an appropriate number of mentoring meetings throughout the year | 4 | 2 | 30 | 8 | 39 | χ² = 1.998; df = 1; p = 0.158 |
| My mentor was respectful | 5 | 0 | 32 | 0 | 47 | n/a** |
| My mentor listened to me | 5 | 0 | 32 | 0 | 47 | n/a** |
| My mentor asked thought-provoking questions which helped me to reflect | 4 | 10 | 22 | 18 | 29 | χ² = 0.413; df = 1; p = 0.520 |
| My mentor added value to my learning | 4 | 10 | 22 | 17 | 30 | χ² = 0.205; df = 1; p = 0.651 |
| My mentor helped me to set future goals that were achievable | 4 | 9 | 23 | 17 | 30 | χ² = 0.558; df = 1; p = 0.455 |
| The summaries provided of my performance in the Blackboard Community Site were useful in helping me to reflect on my progress | 3 | 17 | 16 | 17 | 30 | χ² = 1.868; df = 1; p = 0.172 |
Note. *In the median test, a comparison is made between the median rating in each group to the ‘overall pooled median’ from both groups. **Values less than or equal to the overall pooled median therefore Median Test could not be performed.
Table 2. First Year Student Perceptions of D-IM –Median Tests between Cohort 1 and Cohort 2 for Individual Items
No statistically significant differences were noted between cohorts 1 and 2.
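The median tests reported in Table 2 can be reproduced from the published counts alone. A minimal sketch in Python (standard library only) of Mood's median test as a 2×2 chi-square without continuity correction, which matches the reported statistics; the function name is ours, not the authors':

```python
import math

def median_test_2x2(above1, at_or_below1, above2, at_or_below2):
    """Mood's median test on a 2x2 table of counts above vs at/below the
    pooled median; chi-square without continuity correction, df = 1."""
    table = [[above1, above2], [at_or_below1, at_or_below2]]
    row_totals = [sum(row) for row in table]
    col_totals = [table[0][j] + table[1][j] for j in range(2)]
    n = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    # Upper-tail probability of a chi-square with df = 1
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# "The mentoring process was well organised":
# Cohort 1: 6 above / 26 at-or-below; Cohort 2: 9 above / 37 at-or-below
chi2, p = median_test_2x2(6, 26, 9, 37)
print(round(chi2, 3), round(p, 3))  # 0.008 0.928, as reported in Table 2
```

Running this for the first item recovers χ²=0.008 and p=0.928, confirming the table's values were computed without the Yates correction.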
The aggregate mean score in response to the statements on D-IM in the survey was positive for Cohort 1 in 2017 (M=4.02; SD=0.62; n=32). Equally positive responses were noted in Cohort 2 in 2018 (M=3.91; SD=0.79; n=47). The difference between aggregate mean scores for first year students’ perceptions is not statistically significant (t=0.686; df=78, p=0.495).
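The independent-samples t-tests above can be approximated from the published summary statistics. A sketch (Python, standard library; pooled-variance t statistic) — small discrepancies from the reported t and df are expected because the published means and SDs are rounded:

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t statistic with pooled variance, from summary stats."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se, df

# Cohort 1 (2017) vs Cohort 2 (2018) first-year perceptions of D-IM
t, df = pooled_t(4.02, 0.62, 32, 3.91, 0.79, 47)
print(round(t, 2), df)  # ~0.66 on 77 df; the paper reports t=0.686, df=78
```

Either way, the difference is far from the conventional 0.05 threshold, consistent with the authors' conclusion of no significant difference between cohorts.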
Data from Tables 1 and 2 reveal that students were highly satisfied with three aspects of mentoring: the personal organisation of the mentor, along with the mentor’s respectful and listening attributes. Students were also satisfied with the mentoring process, the number of mentoring meetings, the ability of the mentoring to assist in reflection and to add value to their learning, and the propensity of the mentor to assist in action planning. However, the summaries provided in the Blackboard environment were a source of dissatisfaction for students.
As discussed, qualitative data were collected from students through the questionnaire and from staff through focus groups. The research team collated the qualitative data and confirmed that they corroborated the quantitative results, with students and mentors appreciating the introduction of D-IM. For example, “Mentor sessions are important in providing support to students and…are a welcome introduction” (Yr1 Student, 2017); “Mentoring was useful to develop self-directed learning and to check where you were” (Yr2 Student, 2018); “You get to know the students, things were revealed which would not have been otherwise” (Mentor, 2017); and “Mentoring enabled me to facilitate more, listen more. Definitely a difference when you’re one-on-one with somebody” (Mentor, 2018).
In tune with the action research method adopted by the study which seeks to identify and respond to opportunities for improvement, the Research Team identified three concerns from the qualitative data: differing views of the purpose of D-IM and the role of the mentor; the provision of student feedback and information and communications technologies (ICT); and workload.
A. Differing Views of the Purpose of D-IM and the Role of Mentor
Mentors had differing conceptions of the purpose of D-IM and the role of a mentor. Some mentors perceived their primary function to be one of facilitating reflection and being encouraging whilst others were more directive, providing advice or sharing their own experiences. “I was…a sounding board to prompt their thoughts about how their progress was going. Rather than offering ways of solving problems it was more pointing where problems might lie and encouraging them to think of solutions” (Mentor, 2017); “The basic rule is to guide them… guide them properly, maybe to get them to change their study strategies and other things” (Mentor, 2017).
B. Provision of Student Feedback and ICT
Students reported that feedback was inconsistent in timeliness and quality. Feedback often lacked guidance for improvement or arrived too late to help students improve their learning: “More in-depth feedback on work, and returned in a timeframe that allows it to be relevant to our learning” (Yr1 Student, 2018); “Marking seemed thoughtless and halfhearted” (Yr2 Student, 2018). The use of a Blackboard Wiki to collate and present data points was also less than ideal, with students finding the site difficult to navigate and use, although they generally reported that it was safe and secure.
C. Workload
Students understood the role of reflection and appreciated having a mentor, although there was some misunderstanding of the role of the portfolio, with some students seeing it as extra work: “The amount of work required…was disproportionate” (Yr2 Student, 2018). Some students felt that the added stress and anxiety detracted from their study of medicine: “The portfolio actually detracts from spending time learning content that is essential to clinical years” (Yr2 Student, 2018). These concerns needed to be addressed by the School and are discussed in the context of changes that have been and will be made to D-IM for preclinical students in the School.
IV. DISCUSSION
On the whole there was a positive response to D-IM implementation by students and staff. This response is consistent with Frei, Stamm, and Buddeberg-Fischer (2010, p. 1) who found that the “personal student-faculty relationship is important in that it helps students to feel that they are benefiting from individual advice.”
The findings of the research, however, reveal some tensions between the various elements of Sambunjak’s (2015) ecological model that link to the three areas of concern identified in the research. These tensions are shown in Figure 2.

Figure 2. The ecological framework to explore tensions in D-IM
A. Purpose and Role of Mentors and D-IM
The role of the mentor at the School is to support and guide students, and this role was not confused with other functions such as content expert or assessor. In this respect, the role conflict described by Meeuwissen et al. (2019) and Heeneman and de Grave (2017) was not evident at the School. However, mentoring approaches were situated on a continuum between learner-centred and mentor-directed. It is probable that the mentor’s style (empowering, checking or directing; Meeuwissen et al., 2019, p. 605) and their potentially different view of their role influenced how D-IM sessions played out. Three ways of understanding the role of mentor in medical education have been identified: someone who can answer questions and give advice, someone who shares what it means to be a doctor, and someone who listens and stimulates reflection (Stenfors-Hayes, Hult, & Owe Dahlgren, 2011). In a study of mentoring styles of beginning teachers, Richter et al. (2013) found that the mentor’s beliefs about learning have the greatest impact on the quality of the mentoring experience. Although professional development was provided to mentors on their role as facilitators of reflection, and these issues were outlined and discussed, there were differences in interpretation of the role in the D-IM context.
Heeneman and de Grave (2017) argue that students need to be self-directed in order to be effective medical professionals. It is posited that a number of factors can influence the extent to which the mentor directs proceedings, including the mentor’s experience, role clarity, rate of student progress, depth of student reflections and the perceived importance of the data required for assessment purposes.
In this study most students engaged positively with D-IM, albeit with variations in the extent and quality of reflection and action planning. A slight decrease in students’ enthusiasm towards D-IM was noted as they progressed from first to second year. This could be related to the novelty of D-IM diminishing over time, a pattern evident in other educational technology innovations (Kuykendall, Janvier, Kempton, & Brown, 2012). However, students also have a different mentor in each year. According to Sambunjak (2015), mentoring requires commitment sustained over a long period of time. At Maastricht University, for example, Heeneman and de Grave (2017) report that students are allocated the same mentor for a four-year medical course. It is, therefore, likely that in the current study the short timeframe for mentors to establish student relationships, and the introduction of a different mentor each year, contributed to a reduction in student satisfaction.
B. Feedback and ICT Support
D-IM depends on quality data: that is, on tasks that students perceive as valuable and on the feedback they receive on these tasks. Findings suggest that students found some tasks repetitive and feedback belated and superficial. Better task design and feedback practices are required. This finding is consistent with those of Bate, Macnish and Skinner (2016) in a study of task design within a learning portfolio. Findings also indicated dissatisfaction with the Blackboard ICT environment. The portal was not intuitive, and the structure and requirements for use of the template did not stimulate the desired level of reflection.
C. Workload
Students at the School are “time poor” and many work part-time whilst studying. They are graduate entrants used to achieving academic success. Most are millennials comfortable with distilling and manipulating data and using online technologies. These characteristics are consistent with what Waljee, Chopra, and Saint (2018) see as the new breed of medical students: accustomed to distilling information and desirous of rapid career advancement. In these circumstances, it is unsurprising that students valued D-IM as it promoted focused data-driven discussion on their progress. However, it is also unsurprising that students were critical of anything that, in their opinion, did not support the “study of medicine”. Although students were sometimes critical of tasks that fed into D-IM (Bate et al., 2020), the reflective and action-planning components of D-IM were not onerous and were, at any rate, optional.
For most students, grades rather than learning were paramount, and this created a competitive environment which fuelled strategic learning in engaging with the tasks underpinning D-IM. The School’s Assessment Policy has implications here. Progression is determined by passing discrete assessments, which causes students to focus on grades rather than learning. These dispositions play out in D-IM sessions where, for example, goals are sometimes framed around passing examinations rather than addressing deficits in understanding. The School also distinguishes between formative and summative assessment, with the result that formative assessments are less valued by students. Opportunities to test understanding through formative testing are sometimes not taken up, resulting in less information for students and their mentor to gauge learning progress.
Bhat, Burm, Mohan, Chahine, and Goldszmidt (2018, p. 620) identified a set of “threshold concepts” in medicine that are crucial for students transitioning into clinical practice. Among these are self-directed, metacognitive and collaborative dispositions to learning. However, for a student in the preclinical years, these threshold concepts are not perceived to be the important factors that determine their progress through the course and their aim to become a doctor. Thus, a tension exists within the model: students value mentoring but feel that reflecting on their performance through D-IM is time-consuming and unrelated to their course progression.
D. Actions as a Result of the Study
The action research approach of this study meant that in all results the Research Team was looking for ways to improve the system. Some issues could be improved quickly. A refinement of the Blackboard environment and a change to a software solution called SONIA were implemented in 2019 to improve the ICT interface and reduce workload. Continuing professional development (PD) for staff is undertaken and takes the research results into account. Within the mentor PD program, the Research Team saw that mentoring requires mentors to be able to diagnose the readiness and willingness of students to consider their educational journey. This means that, whilst the D-IM program needs a consistent view in which mentors see their role as facilitating reflection, mentors need different mentoring skills and behaviours for different students. PD is also needed for students so that they understand the relationship between their achievement of learning and the role of D-IM in their journey.
Some issues are longer-term or resource dependent. A focus on the role of feedback in the system, especially for student reflection and its timeliness for mentoring sessions and action planning, is critical to making D-IM valued by students. However, it is not always possible for staff to provide feedback in an optimum timeframe, although the quality of the feedback can be improved by clear guidelines, expectations and an intuitive online interface.
Of greater complexity, and more difficult to resolve, is the tension between developing the “threshold concepts” (Bhat et al., 2018), the generic skills which are built on self-reflection and are supported by D-IM, and the ways in which a student progresses through the course. The latter are governed by School-based rules of progression, which produce the framework within which D-IM needs to operate.
V. LIMITATIONS OF THE STUDY
The study was conducted at one University and, although it will ultimately cover a six-year timeframe, findings should be gauged within the context of this setting. Relatively low response rates were noted, and selection bias is a possibility, with students most engaged with D-IM completing the questionnaire. Although professional development was provided to underpin the mentoring role, there was variation in the way tutors interpreted this role. The study was conducted at a time when other changes were occurring at the School (e.g. development of more continuous forms of assessment), and these changes might have impacted on D-IM. The questionnaire used in the study contained nine questions on mentoring. To gain a more nuanced understanding of D-IM at the School, it may be useful to use a comprehensive and validated questionnaire (e.g. Heeneman & de Grave, 2019) capturing the perceptions of mentees and mentors.
VI. CONCLUSION
The School aims to create quality, patient-centred and compassionate doctors who are lifelong learners (Candy, 2006). D-IM is an effective augmentation to the School’s educational program and the paper has demonstrated that it was well received by students and staff. Future directions include consideration of D-IM in clinical mentoring, development of more consistent learner-centred approaches to mentoring; improved task design and feedback; support for D-IM with appropriate ICT; and better integration of D-IM with assessment policies and practices.
Notes on Contributors
Associate Professor Frank Bate completed his PhD at Murdoch University. He is the Director of Medical and Health Professional Education at the School of Medicine Fremantle, University of Notre Dame Australia. He conceptualised and led the research, and was the primary author responsible for developing, reviewing and improving the manuscript.
Professor Sue Fyfe attained her PhD at the University of Western Australia and is an adjunct professor at Curtin University. She assisted in conceptualising the research design, conducted the qualitative data analysis and made a significant contribution to reviewing and improving the manuscript.
Dr Dylan Griffiths has a PhD from the University of Essex and is the Quality Assurance Manager at the School of Medicine Fremantle, University of Notre Dame Australia. He conducted data collection and assisted with preliminary analysis.
Associate Professor Kylie Russell obtained her PhD from the University of Notre Dame Australia. She is currently a Project Officer at the School of Medicine Fremantle, University of Notre Dame Australia. She assisted in the development of the research methodology and made a contribution to reviewing and improving the manuscript.
Associate Professor Chris Skinner completed his PhD at the University of Western Australia. He is Domain Chair of Personal and Professional development at the School of Medicine Fremantle, University of Notre Dame Australia. He assisted in conceptualising the research design and made a contribution to reviewing and improving the manuscript.
Associate Professor Elina Tor completed her PhD at Murdoch University and is the Associate Professor of Psychometrics at the School of Medicine Fremantle, University of Notre Dame Australia. She helped conceptualise the research design, led the quantitative data analysis, and made a significant contribution to reviewing and improving the manuscript.
Ethical Approval
The University of Notre Dame Australia Human Research Ethics Committee (HREC) has provided ethical approval for the research (Approval Number 017066F).
Acknowledgement
The authors acknowledge the thoughtful and insightful feedback provided by staff and students.
Funding
No internal or external funding was sought to conduct this research.
Declaration of Interest
There is no conflict of interest to declare.
References
Bate, F., Fyfe, S., Griffiths, D., Russell, K., Skinner, C., & Tor, E. (2020). Does an incremental approach to implementing programmatic assessment work: Reflections on the change process. MedEdPublish, 9(1), 55. https://doi.org/10.15694/mep.2020.000055.1
Bate, F., Macnish, J., & Skinner, C. (2016). The cart before the horse? Exploring the potential of ePortfolios in a Western Australian medical school. International Journal of ePortfolio, 6 (2), 85-94.
Bhat, C., Burm, S., Mohan, T., Chahine, S., & Goldszmidt, M. (2018). What trainees grapple with: A study of threshold concepts on the medicine ward. Medical Education, 52(6), 620–631. https://doi.org/10.1111/medu.13526
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3, 77-101. https://doi.org/10.1191/1478088706qp063oa
Candy, P. (2006). Promoting lifelong learning: Academic developers and the university as a learning organization. International Journal for Academic Development, 1(1), 7-18. https://doi.org/10.1080/1360144960010102
Frei, E., Stamm, M., & Buddeberg-Fischer, B. (2010). Mentoring programs for medical students: A review of PubMed literature 2000-2008. BMC Medical Education, 10(32), 1-14. https://doi.org/10.1186/1472-6920-10-32
Gillis, A., & Jackson, W. (2002). Research for Nurses: Methods and Interpretation. Philadelphia, PA: F.A. Davis Co.
Heeneman, S., & de Grave, W. (2017). Tensions in mentoring medical students toward self-directed and reflective learning in a longitudinal portfolio-based mentoring system – An activity theory analysis. Medical Teacher, 39(4), 368-376. https://doi.org/10.1080/0142159X.2017.1286308
Heeneman, S., & de Grave, W. (2019). Development and initial validation of a dual-purpose questionnaire capturing mentors’ and mentees’ perceptions and expectations of the mentoring process. BMC Medical Education, 19(133), 1-13. https://doi.org/10.1186/s12909-019-1574-2
Kemmis, S., & McTaggart, R. (2000). Participatory action research. In N. K. Denzin & Y. S. Lincoln (Eds.) Handbook of Qualitative Research (2nd Ed.; pp 567-606). New York: Sage Publications.
Kirkpatrick, D., & Kirkpatrick, J. (2006). Evaluating Training Programs: The Four Levels (3rd ed.). Oakland, CA: Berrett-Koehler Publishers, Inc.
Kuykendall, B., Janvier, M., Kempton, I., & Brown, D. (2012). Interactive whiteboard technology: Promise and reality. In T. Bastiaens & G. Marks (Eds.), Proceedings of E-Learn 2012 – World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 1 (pp. 685-690). Retrieved from Association for the Advancement of Computing in Education (AACE), https://www.learntechlib.org/p/41669
Meeuwissen, N., Stalmeijer, R., & Govaerts, M. (2019). Multiple-role mentoring: Mentors’ conceptualisations, enactments and role conflicts. Medical Education, 53, 605-615. https://doi.org/10.1111/medu.13811
Richter, D., Kunter, M., Lüdtke, O., Klusmann, U., Anders, Y., & Baumert, J. (2013). How different mentoring approaches affect beginning teachers’ development in the first years of practice. Teaching and Teacher Education, 36, 166-177. https://doi.org/10.1016/j.tate.2013.07.012
Sambunjak, D. (2015). Understanding the wider environmental influences on mentoring: Towards an ecological model of mentoring in academic medicine. Acta Medica Academica, 44(1), 47-57. https://doi.org/10.5644/ama2006-124.126
Standing Committee on Postgraduate Medical and Dental Education. (1998). Supporting Doctors and Dentists at Work: An Enquiry into Mentoring. London: SCOPME.
Stenfors-Hayes, T., Hult, H., & Owe Dahlgren, L. (2011). What does it mean to be a mentor in medical education? Medical Teacher, 33(8), 423-428. https://doi.org/10.3109/0142159X.2011.586746
Waljee, J. F., Chopra, V., & Saint, S. (2018). Mentoring Millennials. JAMA, 319(15), 1547-1548. https://doi.org/10.1001/jama.2018.3804
*Frank Bate
Medical and Health Professional Education,
School of Medicine Fremantle,
University of Notre Dame Australia,
PO Box 1225, Fremantle,
Western Australia 6959
Telephone: +66 9433 0944
Email address: frank.bate@nd.edu.au
Submitted: 20 March 2020
Accepted: 28 July 2020
Published online: 5 January, TAPS 2021, 6(1), 70-82
https://doi.org/10.29060/TAPS.2021-6-1/OA2241
Sandra Widaty1, Hardyanto Soebono2, Sunarto3 & Ova Emilia4
1Department of Dermatology and Venereology, Faculty of Medicine, Universitas Indonesia – Dr. Cipto Mangunkusumo Hospital, Indonesia; 2Department of Dermatology and Venereology, Faculty of Medicine, Universitas Gadjah Mada, Indonesia; 3Pediatric Department, Faculty of Medicine, Universitas Gadjah Mada, Indonesia; 4Medical Education Department, Faculty of Medicine, Universitas Gadjah Mada, Indonesia
Abstract
Introduction: Performance assessment of residents should be achieved with evaluation procedures informed by measurable and current educational standards. The present study aimed to develop, test, and evaluate a psychometric instrument for evaluating clinical practice performance among Dermatology and Venereology (DV) residents.
Methods: This was a qualitative and quantitative study conducted from 2014 to 2016. A pilot instrument was developed by 10 expert examiners from five universities to rate four video-recorded clinical performances, previously evaluated as good or bad performances. The instrument was then applied by DV faculty at two universities to evaluate residents.
Results: The instrument comprised 11 components. There was a statistically significant difference (p < 0.001) between good and bad performance. Cronbach’s alpha documented high overall reliability (α = 0.96) and good internal consistency (α = 0.90) for each component. The new instrument correctly evaluated 95.0% of poor performances. The implementation study showed that inter-rater reliability between evaluators ranged from low to high (correlation coefficient r = 0.79, p < 0.001).
Conclusion: The instrument is a reliable and valid tool for assessing the clinical practice performance of DV residents. More studies are required to evaluate the instrument in different situations.
Keywords: Instrument, Clinical Assessment, Performance, Resident, Dermatology-Venereology, Workplace-Based Assessment
Practice Highlights
- Residents’ performance reflects their professionalism and competencies. Furthermore, clinical care provided in the Dermatology and Venereology field is unique; therefore, a standard instrument is needed to assess residents’ performance.
- The Dermatology-Venereology Clinical Practice Performance Examination instrument has proven to be reliable and valid in assessing residents’ clinical performance.
I. INTRODUCTION
Performance assessment in medical clinical practice has been a great concern for medical education programmes worldwide (Holmboe, 2014; Khan & Ramachandran, 2012; Naidoo, Lopes, Patterson, Mead, & MacLeod, 2017). It is an accepted premise that performance may differ according to competency (Cate, 2014; Khan & Ramachandran, 2012). Performance also occurs within a domain; therefore, the assessment of performance should be separated from that of competency. Performance assessment of medical residents should also be informed by existing medical standards and performance criteria (Li, Ding, Zhang, Liu, & Wen, 2017; Naidoo et al., 2017).
Assessment of residents during their training programme is an important issue in postgraduate medical education, which has declared formative evaluation and constructive feedback as priorities (World Federation for Medical Education, 2015). A hallmark of postgraduate medical specialist training is that it occurs in the workplace; therefore, the most appropriate measurement tools are Workplace-Based Assessments (WPBA). In medical education, these assessments emphasise results and professionalism (Boursicot et al., 2011; Joshi, Singh, & Badyal, 2017).
In response to a standardisation programme for postgraduate medical specialist training (PMST), the World Federation for Medical Education (WFME) published guidelines which were adopted by several countries, including Indonesia (Indonesian College of Dermatology and Venereology, 2008; World Federation for Medical Education, 2015). Clinical care provided in the Dermatology and Venereology (DV) field is unique; a brief examination of the patient is often useful before taking a lengthy history (Garg, Levin, & Bernhard, 2012). Privacy is a top priority, especially for venereology patients, patients with communicable diseases, cosmetic dermatology and skin surgery care.
Until now, no standard instrument has been available for performance assessment of PMST in DV; therefore, a variety of assessments are in use, which may cause discrepancies (Jhorar, Waldman, Bordelon, & Whitaker-Worth, 2017). A valid and reliable method of assessment is required that can be used in various facilities and that addresses proficiency in both content and process (Kurtz, Silverman, Benson, & Draper, 2003). Therefore, a study was conducted focusing on the development of a residents’ clinical performance assessment based on established standards and principles such as the WPBA and WFME standards.
II. METHODS
A. Instrument Development
The instrument was developed and tested using qualitative and quantitative study designs. It started with a solicitation of inputs regarding expected performance from a variety of stakeholders in DV: patients, nurses, laboratory staff, newly graduated DV specialists, DV practitioners, and faculty. A literature review was performed, which included various documents such as the educational programme standards for DV residents, and documentation on available assessment tools (Cate, 2014; Hejri et al., 2017; Norcini, 2010). The instrument was developed according to the current standards (Campbell, Lockyer, Laidlaw, & MacLeod, 2007; McKinley, Fraser, van der Vleuten, & Hastings, 2000).
The resulting 11-item instrument was subsequently evaluated by faculty groups from various universities in Indonesia, and repeated revisions were carried out. Psychometric data for the instrument were obtained through independent evaluations of performance videos of the residents, and through comparison of the results of the new instrument (Dermatology-Venereology Clinical Practice Performance Examination, DVP-Ex) with those of the comparison instrument. The design was a validation study in which psychometric data for the instrument were provided. A further step was the assessment of residents’ performance in clinical practice using the instrument, to evaluate instrument reliability and gather feedback. A flowchart of the study process is shown in Appendix A.
B. Setting
The present study was conducted at the Department of Dermatology and Venereology, Dr. Cipto Mangunkusumo Hospital, a teaching hospital of the Faculty of Medicine, Universitas Indonesia, from 2014 to 2016. The study was conducted in four steps. When developing the instrument (Step 1), we included faculty members from five medical faculties in Indonesia that have a DV residency programme (Universitas Indonesia, Universitas Sriwijaya (UNSRI), Universitas Padjajaran (UNPAD), Universitas Gadjah Mada, and Universitas Sam Ratulangi) through in-depth interviews and an expert panel. The study received ethical approval from the Research Ethics Committee of the Faculty of Medicine, Universitas Gadjah Mada (Number KE/FK/238/EC).
The developed instrument was sent to five senior faculty members from three universities (Departments of Dermatology and Venereology, Universitas Indonesia, UNPAD and UNSRI) (Step 2). They were asked to provide their assessments in order to establish face and content validity. As a test of criterion validity, we recruited 10 faculty members of the Faculty of Medicine, Universitas Indonesia, and randomised them into two groups. Randomisation was performed to prevent bias against the instruments being tested. One group used the DVP-Ex and the other used the current instrument. The single inclusion criterion was more than three years of teaching experience. After receiving inputs, a final correction was made and training was provided for the faculty members who would use the instrument.
C. Performance Video
To obtain standardised performance of the residents, video recordings of the residents’ clinical practice were made. Two residents were voluntarily recruited, and a special team recorded their clinical practice performance using scenarios created by the first author (Campbell et al., 2007; McKinley et al., 2000).
There were four videos, each of which showed the clinical practice performance of the residents when presented with a difficult case (dermatomyositis) or a common case (borderline tuberculoid leprosy). Patients had to sign informed consent before being included in this study. A good (first and fourth video clips) and a poor (second and third clips) standard of performance were demonstrated. Activities presented in the scenarios were those associated with patient care (Campbell et al., 2007; Iobst et al., 2010). After the recording session finished, patients were managed accordingly and provided with rewards.
D. Training on the Performance Instrument
An hour-long training session was provided for the 10 faculty members (the examiners). The faculty then practised scoring using the recorded video clips. During the training, we received some input and made the necessary corrections to the rubrics. No training was given for the comparison instrument because the entire faculty was already accustomed to it. Step 3 established the validity, reliability and accuracy of the performance instrument through a comparative study between the two assessment instruments (the performance and control instruments), which evaluated residents’ clinical practice performance as recorded on video.
E. Implementation of Resident Performance Assessment with Performance Instrument
This step aimed to evaluate the reliability of the instrument and the results of its implementation when used to assess residents (Step 4). The sample included residents of the Postgraduate Medical Specialist Training Programme in Dermatology and Venereology, Faculty of Medicine, Universitas Indonesia and Universitas Gadjah Mada, who were at basic level (residents in their 1st semester in a clinical setting), intern level (semesters II-V) or independent level (semester VI or higher).
Sample size: three to four residents per level per Faculty of Medicine (n = 20). The evaluators were five lecturers per Faculty of Medicine (n = 10), and each lecturer evaluated six residents.
F. Data Collection
One week after the training, the instrument was evaluated. Faculty members assessed the performance of the residents in the four video recordings at the same time. Three days later, the groups underwent a rotation to reassess the videos with whichever of the two instruments they had not already used. The examiners were asked to provide feedback and information on the ease of completing the instrument and the clarity of its instructions. For the implementation of resident performance assessment with the instrument, each resident was evaluated by three lecturers simultaneously. The lecturers were grouped randomly; therefore, every lecturer evaluated six of the ten residents from each group being assessed.
G. Data Analysis
The analyses aimed to evaluate validity, reliability, and precision of the instrument for discriminating the performance of the residents as poor, good, or excellent.
H. Validity and Reliability
A reliability test was performed, i.e. internal consistency of responses to the items in each field (Cronbach's alpha coefficient). Face and content validity were assessed by addressing the relevant performance standards and criteria, and by optimising clarity of instruction, specific criteria, acceptable format, gradation of responses, and correct and comprehensive answers (including all assessed variables). The cut-off score of the instrument was determined using receiver operating characteristic (ROC) curve principles, which was then used to evaluate sensitivity, specificity, and positive and negative predictive values. The accuracy of the instrument was determined to evaluate its precision in distinguishing between good and poor performance.
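As a concrete illustration of the ROC-based cut-off selection described above, the following stdlib-only Python sketch sweeps candidate cut-offs and selects the one maximising Youden's J (sensitivity + specificity - 1). The scores and labels are invented for illustration; they are not the study's data.

```python
# Sketch of deriving a cut-off score from ROC principles. The scores and
# labels below are hypothetical, not the study's raw data.

def confusion_at_cutoff(scores, labels, cutoff):
    """Counts (tp, fp, tn, fn), treating score >= cutoff as a predicted pass
    and label True as a genuinely good performance."""
    tp = sum(s >= cutoff and y for s, y in zip(scores, labels))
    fp = sum(s >= cutoff and not y for s, y in zip(scores, labels))
    tn = sum(s < cutoff and not y for s, y in zip(scores, labels))
    fn = sum(s < cutoff and y for s, y in zip(scores, labels))
    return tp, fp, tn, fn

def metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, positive and negative predictive values."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp) if tp + fp else 0.0
    npv = tn / (tn + fn) if tn + fn else 0.0
    return sens, spec, ppv, npv

def best_cutoff(scores, labels, candidates):
    """Picks the candidate cut-off maximising Youden's J = sens + spec - 1."""
    def youden(c):
        sens, spec, _, _ = metrics(*confusion_at_cutoff(scores, labels, c))
        return sens + spec - 1
    return max(candidates, key=youden)

# Hypothetical examiner scores for good (True) and poor (False) performances.
scores = [87, 89, 82, 96, 66, 74, 33, 25, 36, 4, 52, 64]
labels = [True] * 6 + [False] * 6
cutoff = best_cutoff(scores, labels, range(40, 80, 5))
```

On such cleanly separated toy data the chosen cut-off classifies every performance correctly; with real overlapping scores the same sweep trades sensitivity against specificity.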
I. Statistical Analysis
The statistical analysis was performed using SPSS 11.5 software. The total assessment scores of each examiner were analysed using analysis of variance (ANOVA). Internal consistency was determined using Cronbach's α, and Spearman analysis was performed to obtain p values for validity. Accuracy was determined by comparing the passed/failed score results with the video type. To assess intergroup differences, McNemar's test and kappa analysis were carried out. Qualitative analysis was also performed, particularly to evaluate the feedback, through several analytical steps.
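The internal-consistency statistics named above (Cronbach's α and, later in the Results, the corrected item-total correlation) can be sketched in a few lines of stdlib Python. The ratings below are hypothetical, not the study's data.

```python
# Stdlib-only sketch of Cronbach's alpha and the corrected item-total
# correlation, on invented ratings (not the study's data).
from statistics import mean, variance

def cronbach_alpha(items):
    """items: one list of scores per item, all of equal length (one per rater)."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def corrected_item_total(items, i):
    """Correlation of item i with the total of the remaining items."""
    rest = [sum(col) - col[i] for col in zip(*items)]
    return pearson(items[i], rest)

# Three hypothetical competency items rated by five examiners.
items = [
    [4, 5, 3, 5, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
alpha = cronbach_alpha(items)       # high alpha -> good internal consistency
r0 = corrected_item_total(items, 0)
```

This mirrors the "alpha if item deleted" logic of SPSS only in spirit; a full reproduction would also recompute α with each item removed in turn.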
III. RESULTS
A performance instrument was developed with 11 competency components, for which evaluation responses were given on a rubric scale (Appendix B). All 10 faculty members completed an assessment of each of the four videos. Eight examiners had more than 3 years of teaching experience, and five examiners were DV consultants.
A. Validity
Face, content, and construct validity remain solid points of reference for validity evaluation (Colliver, Conlee, & Verhulst, 2012; Johnson & Christensen, 2008). Face and content validity were evaluated by five experts from three universities, and their evaluation was used to improve the instrument. Because the rubric scales describe residents' capacity to perform activities according to the Standard Competency of DV specialists and the domains of physician performance, the instrument was judged to have good face, content, and construct validity.
The results of the assessments made on the performance videos with the DVP-Ex showed that examiners agreed that the performances in the first and fourth videos (the "good" videos) were good (score > 60); conversely, the second and third videos (the "bad" ones) were rated as poor performance by 10 and 9 of the 10 faculty members, respectively (Table 1).
| Video | Mean (Score) | N | Standard Deviation | Median | Minimum | Maximum | Score > 60 | Score < 60 |
|---|---|---|---|---|---|---|---|---|
| 1 | 87.45 | 10 | 12.59 | 89.44 | 56.00 | 100.00 | 90% | 10% |
| 2 | 33.54 | 10 | 15.77 | 35.92 | 4.17 | 51.85 | 0% | 100% |
| 3 | 25.31 | 10 | 16.84 | 25.00 | 3.70 | 64.00 | 10% | 90% |
| 4 | 81.96 | 10 | 9.06 | 84.25 | 66.67 | 96.29 | 100% | 0% |
Note: Chi Square, Kruskal–Wallis p < 0.001
Table 1. Assessment scores for each of the four videos (n = 10)
Faculty members also gave feedback suggesting that the instrument would be useful for assessing residents’ performance. They also commented that the instrument was more objective than the one currently in use, that it was challenging in that they had to read the instrument carefully in order to use it properly, and that the response options allowed several aspects of the residents’ performance to be assessed.
B. Validity and Reliability
Validity of the instrument, measured using Spearman analyses, was significant for all competency components (p < 0.001). Reliability was measured as the correlation between each item score and the total score on all relevant items (Cohen, Manion, & Morrison, 2008). Our analysis revealed good overall reliability, with Cronbach's α = 0.96. All competency components achieved internal reliability scores > 0.95. The correlation between each item score on the competency components and the overall score was excellent (range: 0.64–0.99).
| No | Competency Component | Corrected Item-Total Correlation | α if Item Deleted (overall Cronbach's α = 0.96) |
|---|---|---|---|
| 1 | C1 | 0.76 | 0.96 |
| 2 | C2 | 0.81 | 0.96 |
| 3 | C3 | 0.79 | 0.96 |
| 4 | C4 | 0.76 | 0.96 |
| 5 | C5 | 0.84 | 0.96 |
| 6 | C6 | 0.82 | 0.96 |
| 7 | C7 | 0.88 | 0.96 |
| 8 | C8 | 0.99 | 0.95 |
| 9 | C9 | 0.64 | 0.96 |
| 10 | C10 | 0.90 | 0.95 |
| 11 | C11 | 0.89 | 0.96 |
Note: C1 = history-taking, C2 = effective communication, C3 = physical examination, C4 = workup, C5 = diagnosis/ differential diagnosis, C6 = DV management, C7 = information and/ education, C8 = data documentation on medical record, C9 = multidisciplinary consultation, C10 = self-development/ transfer of knowledge, C11 = introspective, ethical, and professional attitude
Table 2. Analysis of internal consistency for each competency component
| Results from instrument | Good video | Poor video | Total |
|---|---|---|---|
| Passed | 19 | 1 | 20 |
| Failed | 1 | 19 | 20 |
| Total | 20 | 20 | 40 |
Note: McNemar’s test: p = 0.50, Kappa Analysis κ = 0.90, p < 0.001; accuracy = 95%
Table 3. Comparison of the results from the DVP-Ex instrument and video type (n=40)
It can be concluded that the instrument accurately assessed the clinical practice performance demonstrated in the videos (Table 3). The control instrument correctly identified 80% of the insufficient performances, which makes it a valuable tool for assessment during the clinical years (Table 4). Taken together, these data indicate that the DVP-Ex outperformed the control instrument in assessing the videos, with superior accuracy (95% vs 80%) and better inter-rater reliability (κ = 0.90 vs 0.60).
| Results from instrument | Good video | Poor video | Total |
|---|---|---|---|
| Passed | 18 | 6 | 24 |
| Failed | 2 | 14 | 16 |
| Total | 20 | 20 | 40 |
Note: McNemar’s test: p = 0.289, Kappa analysis κ = 0.60, p <0.001, accuracy: 80%
Table 4. Comparison of the results from the control instrument and video type (n=40)
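As a sanity check, the accuracy and Cohen's κ figures reported for Tables 3 and 4 can be re-derived directly from the 2×2 counts; a minimal Python sketch:

```python
# Re-deriving accuracy and Cohen's kappa from the 2x2 counts in Tables 3 and 4
# (rows: passed/failed; columns: good/poor video).

def accuracy(table):
    (a, b), (c, d) = table            # a and d are the correct classifications
    return (a + d) / (a + b + c + d)

def cohen_kappa(table):
    (a, b), (c, d) = table
    n = a + b + c + d
    po = (a + d) / n                                       # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

dvp_ex  = [[19, 1], [1, 19]]   # Table 3: DVP-Ex vs video type
control = [[18, 6], [2, 14]]   # Table 4: control instrument vs video type

acc_dvp, kappa_dvp = accuracy(dvp_ex), cohen_kappa(dvp_ex)    # 0.95, ~0.90
acc_ctl, kappa_ctl = accuracy(control), cohen_kappa(control)  # 0.80, ~0.60
```

The recomputed values match the accuracy and κ reported in the table notes.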
C. Implementation of the Instrument
Using the cut-off score of 60, reliability was tested between pairs of evaluators, i.e. between evaluators I and II, evaluators I and III, and evaluators II and III (Table 5).
| | | Evaluator I | Evaluator II | Evaluator III |
|---|---|---|---|---|
| Evaluator I | Coefficient of correlation | 1.000 | 0.59** | 0.49 |
| | p value | . | 0.01 | 0.07 |
| | N | 20 | 20 | 14 |
| Evaluator II | Coefficient of correlation | 0.59** | 1.00 | 0.79** |
| | p value | 0.006 | . | 0.001 |
| | N | 20 | 20 | 14 |
| Evaluator III | Coefficient of correlation | 0.49 | 0.79** | 1.00 |
| | p value | 0.07 | 0.001 | . |
| | N | 14 | 14 | 20 |
Note: **significant correlation
Table 5. Analysis of reliability on performance instrument with Spearman’s Rho correlation
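The pairwise Spearman's rho values in Table 5 can be computed with a short, stdlib-only sketch (the evaluator scores below are invented for illustration):

```python
# Stdlib-only sketch of Spearman's rho, as used for the pairwise
# inter-evaluator comparisons. The scores are hypothetical.
from statistics import mean

def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of the tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical total scores from two evaluators for the same five residents.
eval_1 = [72, 58, 85, 61, 90]
eval_2 = [70, 55, 80, 62, 88]
rho = spearman_rho(eval_1, eval_2)
```

Because rho depends only on rank order, the two evaluators above agree perfectly (rho = 1) despite never giving identical scores.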
D. Feedback on Assessment with the Performance Instrument
Most feedback concerned the skills and the process of the clinical practice being performed. In contrast to the results of another study suggesting that most feedback addresses communication (Pelgrim, Kramer, Mokkink, & van der Vleuten, 2012), only 5% of examiners' remarks mentioned a need to improve communication skills. Additionally, 20% of examiner comments mentioned the importance of attitude, especially as part of effective communication.
IV. DISCUSSION
The present study was conducted to develop a WPBA instrument for assessing clinical practice performance, and to obtain psychometric data on the instrument. The DVP-Ex can easily be used by faculty members. Early psychometric evaluation has demonstrated promising levels of validity and reliability.
We found that examiners experienced some difficulties in completing the instrument; therefore, repeated training is necessary. Further workup or laboratory examination (C4), multidisciplinary consultation (C9), and knowledge transfer and self-development (C10) were not always scored because they were not observable in every clinical encounter. However, these components (C4, C9, and C10) are important and are not assessed at all by other WPBA instruments (Norcini & Burch, 2007; Norcini, 2010).
Face and content validity were evaluated by the experts, who approved the content and construction of the instrument and its relevance to the competencies and performance of physicians. Moreover, the consistency of the examiners in evaluating the performance videos provides further evidence that the instrument is appropriate for DV residents. Analysis of internal consistency provided ample evidence of the instrument's reliability. Additionally, the DVP-Ex's 95% success rate in categorising poor performance as failing offers another converging piece of evidence of the instrument's validity for identifying residents who are struggling.
During implementation, not all inter-evaluator reliability values were good, which might be caused by the evaluators' unfamiliarity with the performance instrument; more intensive training in how to use the instrument may therefore improve inter-evaluator reliability. The benefit of training evaluators in the use of an instrument for its reliability has been discussed in various studies (Boursicot et al., 2011). A special strategy is required to produce a successful assessment process (Kurtz et al., 2003), and full participation in the assessment process and training, including the provision of feedback, is needed (Norcini & Burch, 2007).
The promising results for this instrument's ability to differentiate poor and good performance could be the basis for further studies assessing the formative functions of the instrument through repeated assessment of the same resident by several examiners. Further studies are also needed to determine whether this instrument can be used as a summative tool. Limitations of the study are that some of the experts were from the same university as the residents, which could bias the assessment, and that no training was provided on the level of questioning. In addition, substantial training and standardisation of the assessors would be needed if this instrument is to be used in a larger population.
V. CONCLUSION
DVP-Ex is a reliable and valid instrument for assessing DV residents' clinical performance. With intensive training for the evaluators, this instrument can correctly classify poor clinical practice performance as a failed performance according to applicable standards, and can thereby improve the DV education programme.
Notes on Contributors
Sandra Widaty is a dermato-venereologist consultant and a fellow of the Asia Academy of Dermatology and Venereology. She is a faculty member in the Dermatology and Venereology Postgraduate Training programme and the Medical Education Department of the Faculty of Medicine, Universitas Indonesia. She is the main investigator of this study.
Hardyanto Soebono is a professor and faculty member in Dermatology and Venereology and the Medical Education Department of the Faculty of Medicine, Universitas Gadjah Mada. He has published extensively in both fields. He contributed to the conceptual development and data analysis, and approved the final manuscript.
Sunarto is a faculty member who teaches residents in the Paediatrics Department. He has conducted extensive research and published in the field of medical education. He contributed to the conceptual development and editing, and approved the final manuscript.
Ova Emilia holds a PhD in Medical Education and teaches in the doctoral programme in Medical Education. Currently, she is the Dean of the Faculty of Medicine, Universitas Gadjah Mada. She contributed to the conceptual development, data analysis, and editing, and approved the final manuscript.
Ethical Approval
Research Ethics Committee of the Faculty of Medicine, Universitas Gadjah Mada, Number KE/FK/238/EC.
Acknowledgement
The authors would like to thank Joedo Prihartono for the statistical calculation and analysis.
Funding
No funding source was required.
Declaration of Interest
All authors declared no conflict of interest.
References
Boursicot, K., Etheridge, L., Setna, Z., Sturrock, A., Ker, J., Smee, S., & Sambandam, E. (2011). Performance in assessment: Consensus statement and recommendations from the Ottawa conference. Medical Teacher, 33(5), 370-383.
Campbell, C., Lockyer, J., Laidlaw, T., & MacLeod, H. (2007). Assessment of a matched-pair instrument to examine doctor – Patient communication skills in practising doctors. Medical Education, 41(2), 123- 129.
Cate, O. T. (2014). Competency-based postgraduate medical education: Past, present and future. GMS Journal for Medical Education, 34(5), 1-13.
Cohen, L., Manion, L., & Morrison, K. (2008). Research Methods in Education (6th ed.). London: Routledge.
Colliver, J. A., Conlee, M. J., & Verhulst, S. J. (2012). From test validity to construct validity and back? Medical Education, 46(4), 366-371.
Garg, A., Levin, N. A., & Bernhard, J. D. (2012). Structure of skin lesions and fundamentals of clinical diagnosis. In: L. A. Goldsmith , S. I. Katz, B. A. Gilchrest, A. S. Paller, D. J Leffel & K. Wolff (Eds), Fitzpatrick’s Dermatology in General Medicine, 8e. New York: McGraw-Hill Medical.
Hejri, S. M., Jalili, M., Shirazi, M., Masoomi, R., Nedjat, S., & Norcini, J. (2017). The utility of mini-clinical evaluation exercise (mini-CEx) in undergraduate and postgraduate medical education: Protocol for a systematic review. Systematic Reviews, 6(1), 146-53.
Holmboe, E. S. (2014). Work-based assessment and co-production in postgraduate medical training. GMS Journal for Medical Education, 34(5), 1-15.
Indonesian College of Dermatology and Venereology. (2008). Standard of Competencies for Dermatologists and Venereologists. Jakarta: Indonesian Collegium Dermatology and Venereology.
Iobst, W. F., Sherbino, J., Cate, O. T., Richardson, D. L., Swing, S. R., Harris, P., … Frank, J. R. (2010). Competency-based medical education in postgraduate medical education. Medical Teacher, 32(8), 651-656.
Jhorar, P., Waldman, R., Bordelon, J., & Whitaker-Worth, D. (2017). Differences in dermatology training abroad: A comparative analysis of dermatology training in the United States and in India. International Journal of Women’s Dermatology, 3(3), 164-169.
Johnson, B., & Christensen, L. (2008). Educational Research, Quantitative, Qualitative and Mixed Approaches (3rd ed.). London: Sage Publications, Thousand Oaks.
Joshi, M. K., Singh, T., & Badyal, D. K. (2017). Acceptability and feasibility of mini-clinical evaluation exercise as a formative assessment tool for workplace based assessment for surgical postgraduate students. Journal of Postgraduate Medicine, 63(2), 100-105.
Khan, K., & Ramachandran, S. (2012). Conceptual framework for performance assessment: Competency, competence and performance in the context of assessments in healthcare – Deciphering the terminology. Medical Teacher, 34(11), 920-928.
Kurtz, S., Silverman, J., Benson, J., & Drapper, J. (2003). Marrying content and process in clinical method teaching: Enhancing the Calgary–Cambridge guide. Academic Medicine, 78(8), 802-809.
Li, H., Ding, N., Zhang, Y., Liu, Y., & Wen, D. (2017). Assessing medical professionalism: A systematic review of instruments and their measurement properties. PLOS One, 12(5), 1-28.
McKinley, R. K., Fraser, R. C., van der Vleuten, C. P., & Hastings, A. M. (2000). Formative assessment of the consultation performance of medical students in the setting of general practice using a modified version of the Leicester Assessment Package. Medical Education, 34(7), 573-579.
Naidoo, S., Lopes, S., Patterson, F., Mead, H. M., & MacLeod, S. (2017). Can colleagues’, patients’ and supervisors’ assessments predict successful completion of postgraduate medical training? Medical Education, 51(4), 423-431.
Norcini, J., & Burch, V. (2007). Workplace-based assessment as an educational tool: AMEE Guide No. 31. Medical Teacher, 29(9), 855-871.
Norcini, J. J. (2010). Workplace based assessment. In: T. Swanwick (Ed), Understanding Medical Education: Evidence, Theory and Practice (1st ed., pp. 232-245). London UK: The Association for the Study of Medical Education.
Pelgrim, E. A., Kramer, A. W., Mokkink, H. G., & van der Vleuten, C. P. (2012). The process of feedback in workplace-based assessment: Organisation, delivery, continuity. Medical Education, 46(6), 604-612.
World Federation for Medical Education. (2015). Postgraduate medical education WFME global standards for quality improvement. University of Copenhagen, Denmark: WFME Office. Retrieved July 20, 2018, from http://wfme.org/publications/wfme-global-standards-for-quality-improvement-pgme-2015/
*Sandra Widaty
Jl. Diponegoro 71,
Central Jakarta,
Jakarta, Indonesia, 10430
Tel: +622131935383
Email: sandra.widaty@gmail.com
Submitted: 16 April 2020
Accepted: 24 June 2020
Published online: 5 January, TAPS 2021, 6(1), 83-92
https://doi.org/10.29060/TAPS.2021-6-1/OA2251
Eng Koon Ong
Division of Supportive and Palliative Care, National Cancer Centre Singapore, Singapore; Assisi Hospice, Singapore
Abstract
Introduction: Physician empathy is declining due to a disproportionate focus on technical knowledge and skills. The medical humanities can counter this by allowing connection with our patients. This pilot study aims to investigate the acceptability, efficacy, and feasibility of a humanities educational intervention to develop physician empathy.
Methods: Junior doctors at the Division of Supportive and Palliative Care at the National Cancer Centre Singapore between July 2018 and June 2019 attended two small-group sessions facilitated by psychologists to learn about empathy using literature and other arts-based materials. Feasibility was defined as a completion rate of at least 80% while acceptability was assessed by a 5-question Likert-scale questionnaire. Empathy was measured pre- and post-intervention using Jefferson’s Scale of Physician Empathy (JSPE) and the modified-CARE (Consultation and Relational Empathy) measure.
Results: Seventeen participants consented, and all completed the programme. Acceptability scores ranged from 18 to 50 out of 50 (mean 38, median 38). There was an increase in JSPE scores (pre-test mean 103.6, SD = 11.0; post-test mean 108.9, SD = 9.9; t(17) = 2.49, p = .02). The modified-CARE score increased from a pre-test mean of 22.9 (SD = 5.8) to a post-test mean of 28.5 (SD = 5.9); t(17) = 5.22, p < .001.
Conclusion: Results indicate that the programme was acceptable, effective, and feasible. The results are limited by the lack of longitudinal follow-up. Future studies that investigate the programme’s effect over time and qualitative analysis can better assess its efficacy and elicit the participants’ experiences for future implementation and refinement.
Keywords: Empathy, Humanities, Literature, Palliative Medicine
Practice Highlights
- The medical humanities can be used to teach empathy by facilitating reflective practice.
- This novel educational programme was acceptable, effective, and feasible.
- Limitations include the lack of longitudinal follow-up and the quantitative nature of assessment.
- Future studies should investigate the programme’s effect over time and include qualitative analysis.
I. INTRODUCTION
Empathy can be defined as having feelings that are more congruent with another's situation than with one's own, achieved by recognising the perspectives of others (Hojat et al., 2002). Higher physician empathy leads to better patient care outcomes and satisfaction (Hall et al., 2002) and has also been associated with lower levels of physician burnout (Lee, Loh, Sng, Tung, & Yeo, 2018). However, studies suggest a worrying trend: empathy levels decline as training progresses for medical students and residents, and decreasing empathy correlates with increasing burnout (Lee et al., 2018). The various reasons for this decline were elicited in a recent systematic review and can be summarised into four main domains (see Table 1) (Neumann et al., 2011).
| Domain of empathy decline | Details |
|---|---|
| 1. Individual variables | Personality traits, upbringing, and experiences during adulthood |
| 2. Individual distress | Burnout, depression, and decreased quality of life are associated with decreased empathy levels. |
| 3. Nature of medical practice | Uncertainties increase the vulnerability of the medical practitioner and lead to negative coping mechanisms like depersonalisation and detachment from patients. |
| 4. Learning environment | Inadequate and inappropriate role models and the hidden curriculum cause moral distress and decrease empathy as a consequence of poor coping mechanisms. |
Table 1. Reasons contributing to decline in empathy
The medical humanities are an inter-disciplinary field where the concepts, content, and methods from art, history, and literature are used to investigate the experience of illness and to understand the professional identity of healthcare providers (Shapiro, Coulehan, Wear, & Montello, 2009). It is hypothesised that experiences and perspectives illustrated by the medical humanities through stories depicted in novels, literature, drama, and poetry can promote the development of empathy by encouraging deep reflection, facilitating meaning-finding and comfort with uncertainty and providing new perspectives (Bleakley, 2015; Bleakley & Marshall, 2014; Dennhardt, Apramian, Lingard, Torabi, & Amtfield, 2016). The medical humanities have the potential to address the factors mentioned in Table 1.
A. Individual Distress:
The medical humanities allow an avenue for physicians to express difficult emotions encountered in clinical practice like anxiety, guilt, and regret. Such emotions may be due to uncertain disease trajectories, ethical dilemmas, and physical exhaustion. The medical humanities allow such emotions to be expressed and discussed, with the intention to support physicians and decrease distress from burnout.
B. Nature of Medical Practice:
The uncertainties of medical practice and the consequent vulnerability of the medical practitioner affect empathy levels. Doctors may develop negative coping mechanisms like depersonalisation that may seemingly help meet the unrealistic expectation that medicine can always cure. To counter this, physicians must be given the time to share their clinical experiences in a safe environment and subsequently support each other by establishing relationships and reducing isolation (Batt-Rawden, Chisolm, Anton, & Flickinger, 2013; Feld & Heyse-Moore, 2006; Wear & Zarconi, 2016). Reflective writings and creative arts are some of the methods that have been used to facilitate such a process (West, Dyrbye, Erwin, & Shanafelt, 2016).
C. Learning Environment:
Palliative medicine has been touted to be able to provide an ideal environment to impart empathetic values in view of its patient-centred philosophy of care (Block & Billings, 1998; Othuis & Dekkers, 2003). This is achieved through the routine use of the humanities to understand the personhood of our patients and develop empathetic connections. History, art, music, and narratives define our patients’ life experiences and influence their responses to disease and treatment. Learning through mentorship and role-modelling of such an approach to patient care allows junior doctors to appreciate the importance of using the humanities to achieve better patient care outcomes.
There is currently no conclusive evidence on the best method of teaching empathy or on who is best placed to teach it. Where educators have tried to teach humanism and empathy in medicine, research on their curricula has been criticised in terms of clinical relevance and methodology (Birden et al., 2013; Ousager & Johannessen, 2010; Perry, Maffulli, Wilson, & Morrissey, 2011; Schwartz et al., 2009; Wear & Zarconi, 2016). However, the impact of the humanities on the factors driving declining physician empathy, as illustrated above, suggests that the medical humanities may be an important tool for teaching empathy. This pilot study takes a first step towards filling this gap by establishing the acceptability and feasibility of a humanities education programme based on established conceptual frameworks. The pilot study had two specific aims: 1) to determine the acceptability and feasibility of the proposed curriculum; and 2) to assess the efficacy of the HAPPE programme. The data collected will inform future studies on whether the humanities can be among the best tools for teaching empathy.
II. METHODS
A. Intervention Design
The Humanistic Aspirations as a Propeller of Palliative medicine Education (HAPPE) programme was conceived to introduce and develop a novel curriculum in empathy for junior doctors undergoing a palliative medicine rotation. The overall goal of the study was to design an effective humanities-based education programme to teach doctors empathy. Our study draws upon Schön's work on reflective practice (see Table 2; Schön, 1987).
| Concepts of Reflective Practice | Planned activities during HAPPE | Expected outcomes |
|---|---|---|
| "Reflection-in-action" – reflecting during an event and acting on a decision "on the spot" | Discussion and awareness of perspectives that trigger powerful emotions and empathetic reflections and responses. | Recalls triggers, leading to empathetic changes in behaviour and decisions in actual practice. |
| "Reflection-on-action" – reflecting after an event to process feelings and experiences and gain new perspectives | Uses rich perspectives of patients, caregivers, and healthcare providers via the humanities, leading to deep reflections. | Reinforces changes in practice "on the ground". |
Table 2. Application of the Theory of Reflective Practice in the design of the HAPPE programme
Based on the theory of reflective practice, the components of the HAPPE programme are elaborated in Table 3. The principles listed are supported by existing research literature (Gibbs, 1988; Shapiro et al., 2009).
| Principles | Components of HAPPE |
|---|---|
| 1. Facilitating factors of reflective practice include a safe environment, conducive settings, and trained facilitators. | (1) The sessions are facilitated by two trained clinical psychologists experienced in conducting support group sessions for both patients and staff. (2) To ensure psychological safety for intimate sharing, all participants provided explicit consent for the study; the project was submitted for institutional review board review but was exempted. (3) The sessions are conducted via small-group discussions, and ground rules are set before the start of each session (see Appendix A). (4) Data collected are blinded to the investigator. |
| 2. Reflective practice is propelled by materials and modalities that provide rich perspectives and trigger strong emotions and empathetic personal inquiry. | (1) Arts-based materials are used to prompt deep reflection by examining multiple perspectives and challenging the expectations and vulnerabilities of junior doctors. (2) The novel The Death of Ivan Ilyich was chosen for its ability to elicit deep reflections about suffering and care; interpreting this ambiguous literary work through the learner's personal beliefs can stimulate personal growth, develop non-judgmental attributes, and improve coping with uncertainty in medical practice. |
Table 3. Components of the HAPPE programme designed according to the theory of reflective practice.
Acceptability was measured with a Likert-scale questionnaire (see Annex 1). Feasibility of the curriculum was defined as a completion rate of at least 80%. The efficacy of the curriculum was measured with the self-reported Jefferson Scale of Physician Empathy (JSPE) (Hojat et al., 2001) as well as the third party-reported modified Consultation and Relational Empathy (CARE) Measure (Mercer, Maxwell, Heaney, & Watt, 2004).
B. Study Design
This was a quantitative study that assessed the acceptability, feasibility, and effectiveness of the HAPPE programme pre- and post-intervention. Participants: All junior doctors who rotated through the department between 1 July 2018 and 30 June 2019 were invited to participate in this study. About 30 junior doctors (residents and medical officers) rotate through the division of palliative medicine yearly as part of their postgraduate training, with varying levels of prior training and exposure to palliative medicine. These junior doctors worked in palliative care teams, each consisting of a consultant, a registrar or resident physician, and a nurse, which assess and manage patients with palliative care needs. The duration of each rotation ranged from 1 to 6 months. An independent research coordinator provided the participants with information about the study and obtained written consent from each participant face-to-face.
C. Intervention
The HAPPE programme consisted of two 1.5-hour small-group discussion sessions held one week apart during office hours at the National Cancer Centre Singapore (NCCS), facilitated by two clinical psychologists. The two facilitators are senior psychologists trained in counselling and group facilitation who regularly encounter complex clinical scenarios in communication and grief. The programme was repeated at regular intervals throughout the year for all new junior doctors rotating into the department. Junior doctors were considered to have completed the curriculum in its entirety when they attended both sessions of the HAPPE programme during their posting.
In the first session, a brief introduction to the novel The Death of Ivan Ilyich was presented by the facilitators (Charlton & Verghese, 2010; Florijn & Kaptein, 2013). Learners did not need to read the entire novel before the session. The sections of the novel used are found in Annex 2.
The learners were asked the following questions, which addressed the tenets of empathy (standing in the patient's shoes, compassionate care, and perspective-taking):
- What was described about Ivan Ilyich’s life preceding his illness that you think was important to know if you were his doctor?
- Why do you think Ivan Ilyich was so distressed before he died?
- How different do you think you would feel if you were Ivan Ilyich?
In the second session, learners were asked to bring along any arts-based material (paintings, literature, music, drama) and share with the class their reflections on why the material was chosen and how appreciation and/or critique of the art piece helped them develop empathy and patient-centred care. The participants brought materials available from the internet like photographs, paintings, illustrations from magazines, and references to non-fiction books that they had previously read.
Prompting questions included:
- Why was the material chosen?
- How did the material trigger reflections on the concept of empathy?
- What were some of the emotions elicited when reflecting on the concepts of empathy using the materials?
The two clinical psychologists employed techniques that encouraged personal sharing in a safe environment. Participants were reassured that their sharing was confidential and that they were free to leave the session at any point if they felt uncomfortable. Sharing was encouraged by picking up themes of similarity and contrast between participants' contributions; asking questions intended to clarify, reflect, and hypothesise; progressing from participant-to-facilitator communication to between-participant communication; and progressing from talking about "Ivan Ilyich", to "themselves if they were Ivan Ilyich, Ivan's doctor, or Ivan's family member or friend", to "themselves".
This study was submitted to the Institutional Review Board for review but was exempted in view of its nature as a medical education project.
D. Outcomes Assessment
To assess acceptability, the junior doctors were asked to complete a questionnaire post-intervention (see Annex 1). Feasibility was defined as at least 80% of junior doctors completing the curriculum in its entirety.
Efficacy was assessed using two scales administered pre- and post-intervention:
- The Jefferson Physician Empathy Scale (JPES) is a validated, self-reported 20-item measure of physician empathy rated on a seven-point Likert scale. It has an alpha coefficient of 0.87 for internal consistency and is the most widely used empathy measure in the literature. There are ten positively worded items and ten negatively worded items; the negatively worded items are reverse scored from 7 (strongly disagree) to 1 (strongly agree). Scores range from 20 to 140, with higher scores indicating greater empathy.
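To make the scoring concrete, the following minimal sketch computes a total JPES score with the negatively worded items reverse scored. The split of responses into two lists is an assumption for illustration; this is not the study's scoring script.

```python
# Illustrative JPES scoring sketch (assumed item split; not the study's code).
# Ten positively worded items are scored 1-7 as answered; ten negatively
# worded items are reverse scored (7 -> 1, ..., 1 -> 7), giving 20-140 total.

def score_jpes(positive_items, negative_items):
    """Return a total JPES score from two lists of ten 1-7 Likert responses."""
    assert len(positive_items) == 10 and len(negative_items) == 10
    assert all(1 <= r <= 7 for r in positive_items + negative_items)
    reversed_neg = [8 - r for r in negative_items]  # 7 becomes 1, 1 becomes 7
    return sum(positive_items) + sum(reversed_neg)

# Maximally empathic responses: 7 on positive items, 1 on negative items.
print(score_jpes([7] * 10, [1] * 10))  # 140, the scale maximum
```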
- As there are currently no validated tools for assessing the empathy of palliative care doctors, the Consultation and Relational Empathy (CARE) Measure was chosen. It is a 10-item patient-rated questionnaire developed and validated to assess a physician's empathy and patient-centred care, with an alpha coefficient of 0.92 for internal consistency. As enrolment of patients for this purpose was not possible in this study, the measure was modified, with permission from its developer, to generate third party-rated outcomes from the junior doctors' team members (consultant, registrar or resident physician, and nurse) (see Annex 3). As the participants work in small teams of not more than three, all of their respective team members were invited to perform the assessment. The team members observed interactions between the junior doctors and their patients during daily work and rated each of the 10 items described in the questionnaire. No prior training was needed.
III. RESULTS
A total of 17 junior doctors agreed to participate in the study and all of them completed the programme and assessments. Out of a full score of 50, the acceptability score ranged from 18 to 50. The median and mean were both 38.
Pre-test, JPES scores ranged from 77 to 123 out of 140, with a mean of 103.6 (SD 11). Post-test, JPES scores ranged from 93 to 132, with a mean of 108.9 (SD 9.9). A paired t-test gave t = 2.49 with a p-value of 0.02.
Pre-test, modified-CARE scores ranged from 12 to 31 out of 50, with a mean of 22.9 (SD 5.8). Post-test, scores ranged from 17 to 37, with a mean of 28.5 (SD 5.9). A paired t-test gave t = 5.22 with a p-value of <0.001.
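The paired t statistic used above compares each doctor's pre- and post-intervention scores. A minimal standard-library sketch, using made-up scores rather than the study's data:

```python
# Paired t-test sketch (standard library only). The scores below are
# hypothetical placeholders, not data from this study.
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic: mean of pairwise differences over its standard error."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Three hypothetical pre/post pairs; the differences are [2, 3, 2].
print(round(paired_t([10, 12, 14], [12, 15, 16]), 2))  # prints 7.0
```

In practice the t statistic would be referred to a t distribution with n - 1 degrees of freedom to obtain the p-value.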
IV. DISCUSSION
This is a quantitative pilot study conducted to investigate the acceptability, efficacy, and feasibility of a novel humanities-based educational intervention to teach empathy to junior doctors in a palliative medicine rotation. It is the first project under the Humanities Initiative Programme (HIP) at the Division of Supportive and Palliative Care (DSPC) at the National Cancer Centre of Singapore (NCCS). The results of this pilot study are encouraging, are consistent with other pilot studies that investigated the efficacy of humanities-based programmes in medical education, and will propel the development of the HIP (Perry et al., 2011). The positive results regarding acceptability and feasibility are important, as they suggest that implementation of such an intervention on a larger scale across disciplines is possible. The increase in empathy scores demonstrates the efficacy of the programme, although further analysis is needed to determine whether the change is attributable to other factors, as the intervention is relatively short and its effects may not be sustained.
There are other limitations to this study. The study is limited by the small number of participants in a single institution and the deficiencies of a self-assessed rating scale (Boud & Falchikov, 1989). Possible reasons for the low enrolment rate include a lack of awareness of the humanities and their role in medical education, and difficulty balancing clinical duties with educational activities. The programme was also novel, and junior doctors may have been hesitant to enrol due to uncertainty about its nature.
The limitations of self-assessment tools are mitigated by the use of a third-party empathy measure that allowed triangulation of results, but caution should remain about the clinical significance of the findings. Inherent biases among fellow team members, and the difficulty of finding adequate time to observe and accurately grade participants in a busy clinical service, may render third-party assessment unreliable. Ideally, an independent party observing participants during their daily work would reduce bias. Covert observation would also avoid both conscious and unconscious changes in behaviour arising from participants' awareness of being observed. Unfortunately, this was not logistically possible in this study.
The use of a modified CARE measure, which has not been validated for use by a doctor's colleagues, may also render the post-intervention increase in scores less reliable and valid.
Lastly, the threshold for programme feasibility was set at 80% at the investigator's discretion, owing to a lack of data on feasibility standards from existing studies of humanities-based educational programmes. Other, more valid measures of feasibility may exist.
Future research will need to address the choice of outcome measures, including assessments of feasibility and empathy. Discrete studies for the design and validation of such measures would lend important rigour to future work in this field.
Studies that utilise qualitative research methodology could also provide rich data that answers questions about the choice of materials and facilitators. Possible methods include thematic analysis (Braun & Clarke, 2006) and narrative inquiry – a developing methodology of investigating lived experiences in the context of place, sociality, and time (Clandinin & Connelly, 2000). This will help the investigator assess the suitability and transferability of the HAPPE programme to other disciplines with varying participant demographics and how further refinement in design and methods can improve efficacy and sustainability.
A. Moving Forward
As this was a pilot study, the investigator chose to focus on quantitative parameters to achieve the aims of the study. It is recognised that qualitative analysis would provide richer data on participants' experiences and further guide the implementation and refinement of humanities-based programmes; ongoing projects within the institution have started to address this gap of the pilot study.
Research on humanities programmes in medicine has commonly been criticised on methodological grounds. A literature review of arts-based interventions in medical education found poor methodological designs (Perry et al., 2011), while a needs assessment noted that only a minority of studies described outcome measures beyond learner satisfaction (Taylor, Lehmann, & Chisolm, 2017). Publications have also been criticised for lacking a conceptual basis in the design of interventions. This pilot study aimed to address some of these challenges by clearly stating the conceptual theory of reflective practice that underpins the intervention. In addition, the Gagne Instructional Plan guided lesson planning through the steps of gaining attention, informing the learner of objectives, stimulating recall, presenting the stimulus, and providing learning guidance (Gagne, Briggs, & Wager, 1988). However, the lack of validated and relevant assessment outcomes remains. Future research should focus on developing suitable assessment tools that achieve their aims without stifling participants' responses. One possible approach is the adoption of formative assessments that focus on feedback, in contrast to summative tools that typically feed into appraisal outcomes (Taras, 2008).
Finally, there is a paucity of studies that employ the humanities as educational resources in the Asia-Pacific region. This is despite the rich multi-cultural nature of the societies in this region, many with deep-rooted and unique practices in the arts. The investigator of this study hopes that this pilot programme will inspire like-minded medical educators in the region to embark on similar projects within their institutions and develop the arts as an educational tool for the benefit of both healthcare professionals and patients.
V. CONCLUSION
This pilot study has produced encouraging results regarding the use of humanities in medical education. The humanities have the potential for multiple functions in medicine and perhaps most importantly serve to bridge the gap between biomedical sciences and the “art of medicine” (Best, 2015; Chew, 2008; Ong & Anantham, 2019). Further research in this field will provide guidance on the development of a robust educational intervention that adheres to the best practices of medical education research.
Note on Contributor
OEK is a consultant at the Division of Supportive and Palliative Care in the National Cancer Centre of Singapore. OEK reviewed the literature, designed the study, engaged the facilitators for the programme, analysed results, and wrote the manuscript.
Ethical Approval
This study was submitted to the institution’s review board but received an exemption due to its nature as an educational intervention (CIRB Ref: 2018/2276).
Acknowledgements
The author would like to acknowledge Ms Tan Yee Pin and Ms Jacinta Phoon from the Division of Psycho-oncology at the National Cancer Centre of Singapore, who facilitated the HAPPE sessions. The author would also like to thank Professor Stewart Mercer for his generosity in sharing the CARE measure as an evaluation tool in this study.
Funding
This study was supported by the Lien Centre of Palliative Care, Singapore, Education Incubator Grant (Reference code: LCPC-EX18-0001).
Declaration of Interest
The author declares no conflict of interest in this study.
References
Batt-Rawden, S. A., Chisolm, M. S., Anton, B., & Flickinger, T. E. (2013). Teaching empathy to medical students: An updated, systematic review. Academic Medicine, 88(8), 1171-1177. https://doi.org/10.1097/acm.0b013e318299f3e3
Best, J. (2015). 22nd Gordon Arthur Ransome oration: Is medicine still an art? Annals of the Academy of Medicine Singapore, 44, 353-357.
Birden, H., Glass, N., Wilson, I., Harrison, M., Usherwood, T., & Nass, D. (2013). Teaching professionalism in medical education: A Best Evidence Medical Education (BEME) systematic review. BEME Guide No. 25. Medical Teacher, 35(7), e1252-e1266. https://doi.org/10.3109/0142159x.2013.789132
Bleakley, A. (2015). Medical Humanities and Medical Education: How the Medical Humanities Can Shape Better Doctors. New York: Routledge. https://doi.org/10.4324/9781315771724
Bleakley, A., & Marshall, R. (2014). Can the science of communication inform the art of the medical humanities? Medical Education, 47(2), 126-133. https://doi.org/10.1111/medu.12056
Block, S., & Billings, A. (1998). Nurturing Humanism through teaching palliative care. Academic Medicine, 73(7), 763-765. https://doi.org/10.1097/00001888-199807000-00012
Boud, D., & Falchikov, N. (1989). Quantitative studies of student self-assessment in higher education: A critical analysis of findings. Higher Education, 18, 529-549. https://doi.org/10.1007/bf00138746
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. https://doi.org/10.1191/1478088706qp063oa
Charlton, B., & Verghese, A. (2010). Caring for Ivan Ilyich. Journal of General Internal Medicine, 25(1), 93-95. https://doi.org/10.1007/s11606-009-1177-4
Chew, C. H. (2008). 5th College of Physicians lecture—A physician’s odyssey: Recollections and reflections. Annals of the Academy of Medicine Singapore, 37, 968-976.
Clandinin, D. J., & Connelly, F. M. (2000). Narrative Inquiry: Experience and Story in Qualitative Research. San Francisco, CA: Jossey-Bass. https://doi.org/10.1016/b978-008043349-3/50013-x
Dennhardt, S., Apramian, T., Lingard, L., Torabi, N., & Amtfield, S. (2016). Rethinking research in the medical humanities: A scoping review and narrative synthesis of quantitative outcome studies. Medical Education, 50, 285-299. https://doi.org/10.1111/medu.12812
Feld, J., & Heyse-Moore, L. (2006). An evaluation of a support group for junior doctors working in palliative medicine. American Journal of Hospice and Palliative Care, 23(4), 287-296. https://doi.org/10.1177/1049909106290717
Florijn, B. W., & Kaptein, A. A. (2013). How Tolstoy and Solzhenitsyn define life and death in cancer: Patient perceptions in oncology. American Journal of Hospice and Palliative Care, 30(5), 507-511. https://doi.org/10.1177/1049909112452626
Gagne, R. M., Briggs, L. J., & Wager, W. W. (1988). Principles of Instructional Design. New York: Holt, Rinehart and Winston Inc.
Gibbs, G. (1988). Learning by Doing: A Guide to Teaching and Learning Methods. Oxford: Further Education Unit, Oxford Polytechnic.
Hall, M. A., Zheng, B., Dugan, E., Camacho, F., Kidd, K. E., Mishra, A., & Balkrishnan, R. (2002). Measuring patients’ trust in their primary care providers. Medical Care Research and Review, 59, 293-318. https://doi.org/10.1177/1077558702059003004
Hojat, M., Gonnella, J. S., Nasca, T. J., Mangione, S., Vergare, M., & Magee, M. (2002). Physician empathy: Definition, components, measurement, and relationship to gender and specialty. American Journal of Psychiatry, 159(9), 1563-1569. https://doi.org/10.1176/appi.ajp.159.9.1563
Hojat, M., Mangione, S., Nasca, T. J., Cohen, M. J. M., Gonnella, J. S., Erdmann, J. B., … Magee, M. (2001). The Jefferson Scale of Physician Empathy: Development and Preliminary Psychometric Data. Educational and Psychological Measurement, 61(2), 349-365. https://doi.org/10.1177/00131640121971158
Lee, P. T., Loh, J., Sng, G., Tung, J., & Yeo, K. K. (2018). Empathy and burnout: A study on residents from a Singapore institution. Singapore Medical Journal, 59(1), 50-54. https://doi.org/10.11622/smedj.2017096
Mercer, S. W., Maxwell, M., Heaney, D., & Watt, G. C. (2004). The consultation and relational empathy (CARE) measure: Development and preliminary validation and reliability of an empathy-based consultation process measure. Family Practice, 21(6), 699-705. https://doi.org/10.1093/fampra/cmh621
Neumann, M., Edelhäuser, F., Tauschel, D., Fischer, M. R., Wirtz, M., Woopen, C., … Scheffer, C. (2011). Empathy decline and its reasons: A systematic review of studies with medical students and residents. Academic Medicine, 86(8), 996-1009. https://doi.org/10.1097/acm.0b013e318221e615
Ong, E. K., & Anantham, D. (2019). The medical humanities: Reconnecting with the soul of medicine. Annals of the Academy of Medicine Singapore, 48(7), 233-237.
Olthuis, G., & Dekkers, W. (2003). Professional competence and palliative care: An ethical perspective. Journal of Palliative Care, 19(3), 192-197. https://doi.org/10.1177/082585970301900308
Ousager, J., & Johannessen, H. (2010). Humanities in undergraduate medical education: A literature review. Academic Medicine, 85, 988-998. https://doi.org/10.1097/acm.0b013e3181dd226b
Perry, M., Maffulli, N., Wilson, S., & Morrissey, D. (2011). The effectiveness of arts-based interventions in medical education: A literature review. Medical Education, 45(2), 141-148. https://doi.org/10.1111/j.1365-2923.2010.03848.x
Schön, D. A. (1987). Educating the Reflective Practitioner: Toward a New Design for Teaching and Learning in the Professions. San Francisco, CA: Jossey‐Bass.
Schwartz, A. W., Abramson, J. S., Wojnowich, I., Accordino, R., Ronan, E. J., & Rifkin, M. R. (2009). Evaluating the impact of the humanities in medical education. Mount Sinai Journal of Medicine, 76, 372-380. https://doi.org/10.1002/msj.20126
Shapiro, J., Coulehan, J., Wear, D., & Montello, M. (2009). Medical humanities and their discontents: Definitions, critiques, and implications. Academic Medicine, 84, 192-198. https://doi.org/10.1097/acm.0b013e3181938bca
Taras, M. (2008). Summative and formative assessment: Perceptions and realities. Active Learning in Higher Education, 9(2), 172-192. https://doi.org/10.1177/1469787408091655
Taylor, A., Lehmann, S., & Chisolm, M. (2017). Integrating humanities curricula in medical education: A needs assessment. MedEdPublish. https://doi.org/10.15694/mep.2017.000090
Wear, D., & Zarconi, J. (2016). Humanism and other acts of faith. Medical Education, 50, 271-281. https://doi.org/10.1111/medu.12974
West, C. P., Dyrbye, L. N., Erwin, P. J., & Shanafelt, T. D. (2016). Interventions to prevent and reduce physician burnout: A systematic review and meta-analysis. The Lancet, 388(10057), 2272-2281. https://doi.org/10.1016/S0140-6736(16)31279-X
*Ong Eng Koon
Division of Supportive and Palliative Care,
National Cancer Centre Singapore
11 Hospital Drive, Singapore 169610
Tel: +6564368462
Email address: ong.eng.koon@singhealth.com.sg
Submitted: 21 February 2020
Accepted: 13 July 2020
Published online: 5 January, TAPS 2021, 6(1), 93-108
https://doi.org/10.29060/TAPS.2021-6-1/OA2229
Kah Wei Tan1, Hwee Kuan Ong2 & Un Sam Mok3
1Ministry of Health Holdings, Singapore; 2Department of Physiotherapy, Singapore General Hospital, Singapore; 3Division of Anaesthesiology and Peri-operative Medicine, Singapore General Hospital, Singapore
Abstract
Introduction: During resuscitations, healthcare professionals (HCPs) find balancing the need for timely resuscitation and adherence to infection prevention (IP) measures difficult. This study explored the effects of an innovative teaching method, using in-situ simulation and inter-professional education, to enhance compliance with IP through better inter-professional collaboration.
Methods: The study was conducted in the Surgical Intensive Care Unit (SICU) in a 1200-beds teaching hospital. HCPs working in the SICU were conveniently allocated to the intervention or control group based on their work roster. The intervention group attended an in-situ simulated scenario on managing cardiac arrest in an infectious patient. The control group completed the standard institution-wide infection control eLearning module. Outcomes measured were: (a) attitudes towards inter-professional teamwork [TeamSTEPPS Teamwork Attitudes Questionnaire (TAQ)], (b) infection prevention knowledge test, (c) self-evaluated confidence in dealing with infectious patients and (d) intensive care unit (ICU) audits on infection prevention compliance during actual resuscitations.
Results: 40 HCPs were recruited. 29 responded (71%) to the pre- and post-workshop questionnaires. There were no significant differences in the TeamSTEPPS TAQ and infection prevention knowledge score between the groups. However, ICU audits demonstrated a 60% improvement in IP compliance for endotracheal tube insertion and 50% improvement in parenteral medication administration. This may be attributed to the debriefing session where IP staff shared useful tips on compliance to IP measures during resuscitation and identified threats that could deter IP compliance in SICU.
Conclusion: Learning infection prevention through simulated inter-professional education (IPE) workshops may lead to increased IP compliance in clinical settings.
Keywords: Inter-Professional Education, Simulation, Infection Control, Resuscitation, Inter-Professional Teamwork
Practice Highlights
- Use of a simulated scenario to improve infection prevention during resuscitation.
- Improving attitudes towards inter-professional collaboration amongst healthcare professionals.
- Evaluating the efficacy of a simulated scenario through clinical audit.
I. INTRODUCTION
Adherence to infection prevention is paramount in the intensive care unit (ICU), as hospital-acquired infections in the critically ill are associated with increased morbidity, mortality, length of stay and healthcare cost (Gandra & Ellison, 2014). However, during resuscitations, healthcare professionals (HCPs) may find it difficult to balance the need for timely resuscitation against adherence to infection prevention guidelines, resulting in suboptimal compliance with basic infection prevention measures (Steinemann et al., 2016). Moreover, resuscitation is a time-critical endeavour that requires good collaboration within a team of HCPs fulfilling different roles with different priorities, and lapses in teamwork may arise (Barr, Koppel, Reeves, Hammick, & Freeth, 2009).
Inter-professional education (IPE) is defined by the Centre for Advancement of Interprofessional Education (CAIPE) as "occasions when two or more professions learn with, from and about each other to improve collaboration and quality of care" (Steinert, 2005). It is known to improve patient safety by improving communication, understanding and knowledge, thereby encouraging active participation from different HCPs (Oandasan, 2007; Wong, Lee, Allen, & Foong, 2020). Research has shown that active collaboration amongst HCPs in the workplace results in improved patient outcomes and provider satisfaction (Wagner, Parker, Mavis, & Smith, 2011). Previous studies that used IPE workshops to teach infection prevention in non-emergency clinical settings or with standardised patients concluded that knowledge of and confidence in infection prevention and inter-professional teamwork improved (Mundell, Kennedy, Szostek, & Cook, 2013).
Currently, infection prevention education in our institute is didactic and web-based. Although this method is effective in disseminating information, there are no opportunities to learn with different HCPs or apply knowledge to real-life scenarios. On the basis that resuscitation is traditionally taught using simulation and has been proven to be highly effective (Perkins, 2007), we developed an IPE simulation workshop on infection prevention during resuscitation.
We hypothesised that the in-situ simulation workshop involving different HCPs will result in improved attitudes towards inter-professional teamwork and improved compliance to infection prevention guidelines, compared to our standard institutional infection control (IC) education.
II. METHODS
A. Study Population
We conducted a non-randomised experimental study amongst HCPs working in the Surgical Intensive Care Unit (SICU) of the Singapore General Hospital (SGH). All HCPs working in the SICU were eligible to participate, and there were no exclusion criteria. Informed consent was obtained from all participating HCPs. Information was collected on participants' profession, the year they obtained their professional qualification, the number of years they had worked in critical care, and prior experience in simulation training. HCPs from Anaesthesiology, Nursing, Physiotherapy, Pharmacy, Speech and Language Therapy, Dietetics and Infection Prevention were involved. A working day was picked to run the workshop: HCPs on duty that day were allocated to the intervention group, while those who were not on duty were assigned to the control group.
B. In-Situ Simulation Workshop
Participants in the intervention group (n=25) underwent a two-hour in-situ simulation workshop in the SICU based on the scenario of a cardiac arrest in an infectious patient (Annex A). The training faculty comprised HCPs from various professions, including Anaesthesiology, Nursing, Physiotherapy, Pharmacy, Speech and Language Therapy, Dietetics and Infection Prevention. Each workshop consisted of HCPs from five to seven different professions. The learning outcomes of the workshop were (i) to practise infection prevention precautions for transmission-based infections during resuscitations and (ii) to improve attitudes towards inter-professional teamwork in a crisis situation.
The workshop was designed based on Kolb's experiential learning theory (Kolb & Fry, 1975), a four-stage learning cycle consisting of (i) concrete experience, (ii) reflective observation, (iii) abstract conceptualisation and (iv) active experimentation. Concrete experience was facilitated through in-situ simulation, where participants experienced the real-life constraints of resuscitating an infectious patient. Following the simulation, the faculty held a debrief that facilitated reflective observation and abstract conceptualisation. Learning points discussed included (i) issues faced in adhering to infection prevention guidelines in a resuscitation setting, (ii) inter-professional teamwork and (iii) threats that could deter IC compliance in the SICU. The final stage, active experimentation, was facilitated through the actual application of learning points distilled from the workshop, and evaluated through real-time ICU audits.
C. Evaluation of Outcomes
Evaluation of the effectiveness and impact of the workshop was done in accordance with Kirkpatrick's evaluation framework (Kirkpatrick, 1994), which emphasises the need to go beyond participants' immediate reactions by assessing four levels: (i) Reaction, (ii) Learning, (iii) Behaviour and (iv) Results. "Reaction" was evaluated through the post-workshop questionnaire and participants' responses in the post-workshop debrief regarding the effectiveness of simulated workshops in improving inter-professional teamwork and encouraging compliance with infection prevention guidelines. The second level, "Learning", was evaluated by discussing the learning points of the workshop in the debrief and having participants note down, in the post-workshop questionnaire, their most important takeaways regarding infection prevention and IPE. "Behaviour" was assessed through a real-time observational study of participants' adherence to proper infection prevention practices during actual resuscitations in the SICU. "Results", the fourth level, was difficult to evaluate due to the small sample size recruited.
D. Outcomes of Study
There are two primary outcomes of this study: (i) attitudes towards inter-professional teamwork and (ii) infection prevention knowledge and practices.
Changes in attitudes towards inter-professional teamwork were assessed using the TeamSTEPPS Teamwork Attitudes Questionnaire (TAQ 1.0), based on scores in the following subcategories: Team Structure, Leadership, Situation Monitoring, Mutual Support and Communication (Appendix 1). Qualitative comments on key learning points with regards to teamwork were also collected during the debriefing sessions.
Changes in infection prevention knowledge and practice were assessed in five ways:
- An infection prevention self-evaluation questionnaire on a 5-point Likert Scale (Appendix 2).
- A multiple-choice quiz developed by the Institutional Infection Prevention Nurse Educators (Appendix 3A and 3B).
- Questionnaire on effectiveness of the simulation (for participants in the intervention group only) (Appendix 4).
- Qualitative feedback on the learning points concerning infection prevention (Appendix 5).
- Clinical audit data that evaluated compliance to infection prevention guidelines during resuscitations in the SICU. Two months after the simulation workshop, the data was collected by a trained hospital-based infection prevention team based on real-time observations. The audit checklist assessed proper use of personal protective equipment (PPE), hand hygiene, administration of parenteral medications and insertion of endotracheal tube (ETT).
Statistical analysis was performed using SPSS for Mac, Version 20.0 (SPSS Inc., Chicago, IL, USA). Continuous variables were analysed using the t-test, and categorical variables using Fisher's exact test and the Chi-square test. A p-value of less than 0.05 was taken to be statistically significant.
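For readers unfamiliar with the categorical test mentioned above, a Pearson Chi-square statistic for a 2x2 contingency table can be sketched as follows. This is an illustrative implementation only, not the study's SPSS analysis, and the example counts are hypothetical.

```python
# Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]].
# A minimal sketch of the categorical test; the counts in the example
# are hypothetical, not data from this study.

def chi_square_2x2(a, b, c, d):
    """Sum of (observed - expected)^2 / expected over the four cells."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected counts under independence: row total * column total / n
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# A perfectly balanced table shows no association (statistic = 0).
print(chi_square_2x2(10, 10, 10, 10))  # prints 0.0
```

The statistic would then be referred to a chi-square distribution with one degree of freedom to obtain the p-value.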
III. RESULTS
The study recruited a total of 40 HCPs, of whom 29 (71%) responded to the pre- and post-workshop TAQ, the self-evaluation of infection prevention knowledge and the infection prevention knowledge quiz. Healthcare professions represented included doctors (31%), nurses (34%), pharmacists (10%), physiotherapists (10%), speech and language therapists (10%) and dieticians (3%).
The intervention and control groups were not significantly different in terms of the number of years post-graduation, years of working experience in critical care, and the number of simulated training sessions they had attended, excluding basic cardiac life support (BCLS) and advanced cardiac life support (ACLS). Similarly, there were no significant differences in self-evaluated infection prevention knowledge and infection prevention scores at baseline. However, we noted significantly higher baseline scores in the control group for Team Structure (mean difference=2.76), Leadership (mean difference=3.8) and Communication (mean difference=2.54; Table 1).
| | Intervention [Range] (n=16) | Control [Range] (n=13) | P-value |
| --- | --- | --- | --- |
| Mean no. of years since graduation | 8.65 [4-23] | 8.67 [5-20] | 0.996 |
| Mean no. of years in critical care | 3.78 [0-10] | 5.13 [0-15] | 0.413 |
| Mean no. of prior simulation training sessions (excluding BCLS/ACLS) | 1.52 [0-15] | 2.33 [0-12] | 0.474 |
| Self-evaluation of infection prevention knowledge | 13.3 [10-20] | 14.6 [8-19] | 0.179 |
| Infection prevention baseline MCQ scores | 5.47 [5-6] | 5.78 [5-6] | 0.103 |
| TeamSTEPPS Teamwork Attitudes Questionnaire 2.0 | | | |
| Team Structure | 23.04 [19-30] | 25.80 [20-30] | 0.046* |
| Leadership | 24.13 [18-29] | 27.93 [21-30] | 0.009* |
| Situation Monitoring | 23.43 [20-30] | 25.73 [21-30] | 0.100 |
| Mutual Support | 18.91 [15-26] | 19.33 [15-27] | 0.678 |
| Communication | 22.39 [19-29] | 24.93 [20-30] | 0.042* |
Table 1. Baseline characteristics of the Intervention and Control groups
A. Inter-Professional Teamwork
Within the intervention group, there were no significant changes between pre- and post-workshop TeamSTEPPS TAQ scores in most subcategories, with the exception of an improvement in post-workshop Mutual Support scores (mean difference=3.21), a 17.0% increase from baseline (Table 2). The lack of a significant change in most subcategories could be due to the already high baseline scores prior to the workshop.
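The percentage increases reported in Table 2 are computed relative to the pre-workshop mean, i.e. (post - pre) / pre * 100. A short sketch reproducing the Mutual Support figure from its reported subscale means:

```python
# Percentage change relative to the pre-workshop mean, as used in Table 2.
# Mutual Support rose from a mean of 18.93 pre-workshop to 22.14 post-workshop.

def percent_change(pre_mean, post_mean):
    """Percentage change of the post-workshop mean relative to baseline."""
    return (post_mean - pre_mean) / pre_mean * 100

print(round(percent_change(18.93, 22.14), 1))  # prints 17.0
```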
| | Pre-workshop mean (SD) [Range] (n=16) | Post-workshop mean (SD) [Range] (n=16) | Mean difference | P-value | Percentage increase/% |
| --- | --- | --- | --- | --- | --- |
| Self-evaluation of infection prevention knowledge | 13.57 (3.32) [10-20] | 14.71 (1.90) [9-19] | 1.14 | 0.230 | 8.4 |
| Infection prevention quiz scores | 5.85 (0.38) [5-6] | 4.85 (0.69) [4-6] | -1.00 | 0.000 | -17.1 |
| Team Structure | 24.14 (2.69) [19-30] | 25.36 (2.50) [21-30] | 1.22 | 0.058 | 5.1 |
| Leadership | 25.64 (2.76) [18-30] | 25.9 (2.55) [24-30] | 0.26 | 0.780 | 1.0 |
| Situation Monitoring | 24.71 (3.29) [18-30] | 25.36 (3.10) [24-30] | 0.65 | 0.272 | 2.6 |
| Mutual Support | 18.93 (2.43) [18-30] | 22.14 (2.03) [18-28] | 3.21 | 0.002 | 17.0 |
| Communication | 23.43 (2.56) [20-30] | 23.86 (2.45) [22-30] | 0.43 | 0.551 | 1.8 |
Table 2. Comparison of pre- and post-workshop scores within the intervention group
The intervention group also had greater percentage increases in the TAQ 2.0 Team Structure, Leadership and Communication sub-categories compared to the control group (Table 3).
| | Intervention (n=16) mean change (SD) | Intervention percentage change/% | Control (n=13) mean change (SD) | Control percentage change/% | P-value |
| --- | --- | --- | --- | --- | --- |
| Change in scores for self-evaluation of infection prevention knowledge | 1.14 (3.39) | 8.4 | 0.78 (0.97) | 5.2 | 0.758 |
| Change in infection prevention scores | -1.00 (0.71) | -17.1 | -1.63 (1.19) | -29.0 | 0.145 |
| Change in scores for Team Structure | 0.42 (1.86) | 5.1 | 0.67 (1.45) | 1.3 | 0.661 |
| Change in scores for Leadership | -0.42 (2.35) | 1.0 | -0.09 (1.38) | -3.1 | 0.693 |
| Change in scores for Situation Monitoring | 1.00 (2.00) | 2.6 | 0.27 (1.35) | 2.6 | 0.323 |
| Change in scores for Mutual Support | 4.33 (2.19) | 17.0 | 3.18 (3.82) | 25.3 | 0.380 |
| Change in scores for Communication | 0.33 (3.17) | 1.8 | 0.45 (2.34) | 1.4 | 0.919 |
Table 3. Comparison of changes in pre- and post-workshop scores between the Intervention and Control groups
The most common learning point for IPE was the importance of understanding the different roles and capabilities of different HCPs and the need to involve other HCPs to ensure an effective resuscitation effort. The learning points listed support changes in perceptions of interprofessional roles that the quantitative scale did not capture.
“The workshop improves knowledge of the roles that other healthcare professionals are able to perform, for example, a physiotherapist being qualified to help in CPR during resuscitation.”
(Nursing participant, ID 16)
B. Infection Prevention Knowledge and Practices
Although there were no statistically significant differences between the groups, better infection prevention scores were noted in the intervention group, which had a 3.2-percentage-point greater increase (8.4% vs 5.2%) in self-evaluated infection prevention knowledge. The questions on infection prevention knowledge were intended to be of similar difficulty, and we avoided repeating the same set of questions, as we did not want participants to discuss or look up the answers. For both groups, infection prevention knowledge scores decreased post-workshop; however, the decrease was smaller in the intervention group (-17.1%) than in the control group (-29.0%) (Table 3). We speculate that this may be because the post-workshop questions were more difficult than the pre-workshop questions. The limited number of questions (n=6) may also have confounded our results.
The participants shared a rich diversity of infection prevention learning points during the debrief session. Examples included the correct steps in donning personal protective equipment, strategies to clean the intravenous (IV) injection hub quickly and effectively, and identification of threats to proper infection prevention compliance during the simulation, such as the lack of a disposable dish on the resuscitation trolley to keep intravenous drugs and intubation equipment clean. The most common learning point was the importance of adhering to infection prevention practices during resuscitation, such as the accurate administration of parenteral medications. The learning points listed support changes in perceptions related to infection prevention that the quantitative scale did not capture.
“The importance of practicing infection prevention measures such as the need for changing soiled gloves in between administering parenteral medications, but yet not compromising on resuscitation.”
(Physiotherapist, ID 8)
The clinical audit conducted after the simulation workshop showed that compliance rates in accurate parenteral medication administration improved by 50%, while compliance rates in ETT insertion improved by 60% post-workshop, compared to pre-workshop performance (Table 4).
| Pre-workshop | | Post-workshop | | Percentage change in compliance rates/% |
| Number of instances of compliance | Number of instances of non-compliance | Number of instances of compliance | Number of instances of non-compliance | |
PPE | 24 (100%) | 0 (0%) | 24 (96%) | 1 (4%) | -4 |
Hand hygiene | 10 (100%) | 0 (0%) | 2 (100%) | 0 (0%) | 0 |
Parenteral medication administration | 6 (50%) | 6 (50%) | 9 (100%) | 0 (0%) | 50 |
ETT insertion | 2 (40%) | 3 (60%) | 10 (100%) | 0 (0%) | 60 |
Table 4. Comparison of compliance rates to infection prevention during real-time resuscitations pre- and post-workshop
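The compliance rates in Table 4 are instance counts converted to percentages, and the audit's "percentage change" is the difference in rates in percentage points. A minimal sketch, using the parenteral medication row from the table:

```python
def compliance_rate(compliant, non_compliant):
    """Share of audited instances that were compliant, as a percentage."""
    return 100 * compliant / (compliant + non_compliant)

# Parenteral medication administration (Table 4): 6 of 12 instances compliant
# pre-workshop, 9 of 9 compliant post-workshop.
pre = compliance_rate(6, 6)
post = compliance_rate(9, 0)
print(post - pre)  # improvement in percentage points
```

The same calculation on the ETT insertion row (2/5 pre, 10/10 post) yields the 60-point improvement reported.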
IV. DISCUSSION
The study was designed, conducted and written before the COVID-19 pandemic. Since the pandemic, there have been changes in infection prevention guidelines for aerosol-generating procedures such as tracheal intubation (Perkins et al., 2020), which are not reflected in our study. Our study highlighted the importance of using simulation and inter-professional collaboration to enhance infection prevention education, points that were also emphasised in many publications on infection prevention during the pandemic (Wong et al., 2020). For example, there have been recommendations to use a buddy system for PPE donning and doffing, and to use high-fidelity simulation to prepare for the COVID-19 crisis (Bricknell, Hodgetts, Beaton, & McCourt, 2016). However, many of these publications were reviews and opinions rather than research studies (Foong et al., 2020; Lim, Wong, Teo, & Ho, 2020).
To our knowledge, there are no publications on the use of in-situ simulation to teach infection prevention during resuscitations in an IPE setting. Current literature evaluating simulated IPE workshops for teaching infection prevention reports mixed results regarding their effectiveness in improving attitudes towards inter-professional teamwork and enhancing compliance with infection prevention practices. In the study by Luctkar-Flude et al. (2016), there was significant improvement in infection prevention knowledge, but little change in inter-professional teamwork. Although knowledge of aseptic technique improved significantly immediately post-workshop, long-term retention was poorer (Wagner et al., 2011).
A. The Utility of Simulation in Improving Infection Prevention
Our results from the clinical audit conducted during actual resuscitations in the SICU demonstrated a large improvement after the workshop in accurate parenteral medication administration and ETT insertion. This finding supports the hypothesis that an inter-professional simulated workshop is more effective than traditional didactic web-based methods in improving adherence to infection prevention practices, which could be due to three added elements present in simulated workshops.
Firstly, simulation provides interaction amongst different HCPs and enables collaborative learning in small groups (Dolmans, Michaelsen, van Merriënboer, & van der Vleuten, 2015). Secondly, the debriefing process promotes reflective learning and provides real-time feedback (Ziv, Wolpe, Small, & Glick, 2003). Thirdly, the learning is contextualized as participants learn infection prevention principles that are embedded in authentic clinical scenarios, and simulated cardiac arrest in an infectious patient is a common scenario that reflects the reality of practice (Morison & Jenkins, 2007).
Simulation also enables participants to discover innovative solutions, enabling optimal adherence to infection prevention protocols while ensuring a timely resuscitation response with limited manpower. One solution discussed was the designation of specific roles during resuscitation, such as assigning one HCP to manage the airway and another to administer intravenous drugs, so as to avoid contamination.
B. Encouraging Inter-Professional Teamwork Through Simulation
Simulated scenarios with a focus on IPE also encourage active engagement and collaboration amongst participants, which has been demonstrated to improve attitudes towards teamwork (Huitt, Killins, & Brooks, 2015). During the debriefing process, study investigators facilitated the discussion so that HCPs from different specialties could give feedback and volunteer information on how they could better contribute to the resuscitation effort and work more cohesively as a team, enabling HCPs to discover more about the capabilities of their colleagues. This discussion helps to create a sense of shared purpose within teams (Freytag, Stroben, Hautz, Eisenmann, & Kammer, 2017), which is a defining characteristic of an effective team (Drinka & Clark, 2000), and reinforces the idea that a team can often achieve what an individual cannot.
In our debriefing, the participants noted the importance of understanding the roles and capabilities of different HCPs and the need to involve other HCPs to ensure an effective resuscitation effort, which can subsequently translate to positive changes in patient care and collaborative practice (Hammick, Freeth, Koppel, Reeves, & Barr, 2007). For example, the nursing and medical participants did not realise that physiotherapists, pharmacists, and speech and language therapists are BCLS-trained and can perform effective chest compressions, and the medical participants did not realise that nurses can perform cardiac defibrillation during a cardiac arrest. Better collaboration and understanding of other HCPs' roles can improve task delegation to fully maximise available manpower and aid crisis resource management.
C. Debriefing–An Essential Component of a Simulated Workshop
According to experiential learning theory (Kolb & Fry, 1975), reflective practice is an integral component that allows learners to fully integrate the learning experience. It allows HCPs to transition from merely experiencing the simulation to deriving critical learning points (Savoldelli et al., 2006), as constructive discussion and feedback help participants to better understand potential areas of improvement and reinforce proper infection prevention practice (Gerolemou et al., 2014). The importance of feedback and discussion is aptly demonstrated in our study: participants noted that accurate parenteral medication administration was one of their main takeaways from the debriefing process, and a subsequent 50% improvement in parenteral medication administration was seen in our observational audit.
An effective debriefing process was facilitated through the creation of a non-threatening environment, using open-ended questions, positive reinforcement, constructive feedback and active engagement of all HCPs present (Fanning & Gaba, 2007). While many debriefing tools are available, such as the Objective Structured Assessment of Debriefing (OSAD) tool (Ahmed et al., 2012) and Advocacy-Inquiry (Gururaja, Yang, Paige, & Chauvin, 2008), we believe that the cornerstone of a successful debrief is an environment that allows participants to freely voice their queries and concerns, with subsequent discussion to tease out relevant learning points that serve as important takeaway messages.
D. Other Observations
Interestingly, we noted significant differences in Team Structure and Leadership between the Intervention and Control groups in the pre-test questionnaire. This could be due to differences in the number of years the two groups had worked in the SICU: the intervention group had worked a mean of 3.78 years compared to the control group's 5.13 years, although this difference was not statistically significant. Studies have shown that HCPs who have worked together for longer periods and on a daily basis are more likely to develop trust and confidence in their teams (Bosch & Mansell, 2015). Less-experienced healthcare staff may feel less comfortable with inter-professional teamwork than more experienced staff, further highlighting the importance of increasing exposure to IPE for younger HCPs.
E. Challenges Encountered in the Implementation of a Simulated Infection Prevention IPE Workshop
The conception and organisation of an infection control workshop that incorporates both simulation and IPE improved our understanding of existing challenges to the development of a coherent curriculum and implementation of simulated workshops (Buckley, et al., 2012). There were numerous challenges encountered in the implementation of such workshops.
Examples include:
- Simulated workshops are resource intensive. Monthly faculty meetings were held for five months before the workshop, and each workshop required the presence of a HCP from at least five different specialties. In addition, beds in the SICU had to be specially set aside for the workshop to take place.
- Learning outcomes had to be crafted carefully to ensure that all HCPs could benefit from an effective IPE session.
- Faculty development was crucial and the faculty was trained to ensure that the post-simulation debriefing could take place effectively and learning outcomes were met.
- Scheduling conflicts were encountered as implementation of the workshop required HCPs with different work schedules to be present at the same time.
The effectiveness of the workshop could be better evaluated with multiple SICU audits of resuscitations pre- and post-workshop, as this would truly demonstrate translation of learning to real-life practice. However, this is logistically difficult to coordinate: whenever a real-time resuscitation occurred in the SICU, our hospital-based infection prevention team had to be mobilised to the SICU within a few minutes, without advance notice, to audit infection prevention compliance during the resuscitation, which proved challenging.
Nevertheless, our study showed that these challenges can be overcome when HCPs are strongly committed to better healthcare (Byakika-Kibwika et al., 2015). We hope that our experience sheds light on the barriers to implementing similar in-situ simulated IPE workshops.
F. Plans for the Future
Our study showed that simulated IPE workshops encouraged mutual support amongst different HCPs and improved infection prevention practices during resuscitation. However, implementation of the workshop was costlier and more labour-intensive than the current online video-based infection prevention education. We now run monthly in-situ simulations in the SICU on crisis resource management, with infection prevention as one of the key learning outcomes.
This workshop was conducted before the COVID-19 pandemic and there was no limitation on the maximum number of participants. In the future, with the SICU roster having changed to shift work, the workshop will only be conducted amongst HCPs within a particular shift to avoid cross-contamination with other shifts. A larger debrief room may be needed to allow for social distancing as well.
G. Limitations of our study
Limitations of our study include the small sample size, which makes it difficult to generalise: the study could not show statistically significant differences between the control and intervention groups even though positive trends were observed. Furthermore, longer follow-up is required to evaluate long-term changes in behaviour, attitudes and retention of knowledge.
V. CONCLUSION
Our study showed that simulated IPE workshops are an innovative way to teach infection prevention and may lead to increased infection prevention compliance in clinical settings, as demonstrated by the clinical audit conducted. In light of the ongoing COVID-19 pandemic, simulated scenarios may help enhance infection prevention practices to limit the transmission of infections. Simulation may also help improve attitudes towards inter-professional teamwork and collaboration, which are crucial in resuscitations.
Notes on Contributors
Kah Wei Tan is a Medical Officer working with Ministry of Health Holdings (MOHH). Kah Wei Tan performed data collection and data analysis, reviewed the literature and wrote the manuscript.
Hwee Kuan Ong is a Senior Principal Physiotherapist in the Department of Physiotherapy, Singapore General Hospital. Hwee Kuan Ong performed data collection and data analysis and wrote the manuscript.
May Un Sam Mok is a Senior Consultant in the Division of Anaesthesiology and Peri-operative Medicine, Singapore General Hospital. May Un Sam Mok developed the methodological framework for the study, designed the study, reviewed the literature and wrote the manuscript.
Ethical Approval
The study was approved by the SingHealth Centralised Institutional Review Board (CIRB reference: 2016/3001).
Acknowledgement
We would like to acknowledge the SICU staff for providing assistance in conducting the survey.
Funding
Funding was obtained from the Academic Medicine Education Institute (AMEI) grant.
Declaration of Interest
There is no conflict of interest to declare.
References
Ahmed, M., Sevdalis, N., Paige, J., Paragi-Gururaja, R., Nestel, D., & Arora, S. (2012). Identifying best practice guidelines for debriefing in surgery: A tri-continental study. American Journal of Surgery, 203(4), 523-529.
Barr, H., Koppel, I., Reeves, S., Hammick, M., & Freeth, D. (2009). Effective Interprofessional Education: Arguments, Assumption & Evidence. Oxford: Blackwell.
Bosch, B., & Mansell, H. (2015). Interprofessional collaboration in health care: Lessons to be learned from competitive sports. Canadian Pharmacists Journal, 148(4), 176-179.
Bricknell, M., Hodgetts, T., Beaton, K., & McCourt, A. (2016). Operation GRITROCK: The defence medical services’ story and emerging lessons from supporting the UK response to the Ebola crisis. Journal of the Royal Army Medical Corps, 162(3), 169-175.
Buckley, S., Hensman, M., Thomas, S., Dudley, R., Nevin, G., & Coleman, J. (2012). Developing interprofessional simulation in the undergraduate setting: Experience with five different professional groups. Journal of Interprofessional Care, 26(5), 362-369.
Byakika-Kibwika, P., Kutesa, A., Baingana, R., Muhumuza, C., Kitutu, F. E., Mwesigwa, C., … Sewankambo, N. K. (2015). A situation analysis of inter‑professional education and practice for ethics and professionalism training at Makerere University College of Health Sciences. BMC Research Notes, 8, 598.
Dolmans, D., Michaelsen, L., van Merriënboer, J., & van der Vleuten, C. (2015). Should we choose between problem-based learning and team-based learning? No, combine the best of both worlds! Medical Teacher, 37(4), 354-359.
Drinka, T. J. K., & Clark, P. G. (2000). Health Care Teamwork: Interdisciplinary Practice and Teaching. Westport, CT: Auburn House.
Fanning, R. M., & Gaba, D. M. (2007). The role of debriefing in simulation-based learning. Simulation in Healthcare: Journal of the Society for Simulation in Healthcare, 2(2), 115-125.
Foong, T. W., Ng, E. S. H., Khoo, C. Y. W., Ashokka, B., Khoo, D., & Agrawal, R. (2020). Rapid training of healthcare staff for protected cardiopulmonary resuscitation in the COVID-19 Pandemic. British Journal of Anaesthesia, 125(2), e257–e259.
Freytag, F., Stroben, F., Hautz, W. E., Eisenmann, D., & Kammer, J. E. (2017). Improving patient safety through better teamwork: How effective are different methods of simulation debriefing? Protocol for a pragmatic, prospective and randomised study. BMJ Open, 7(6), e015977.
Gandra, S., & Ellison, R. T. (2014). Modern trends in infection control practices in intensive care units. Journal of Intensive Care Medicine, 29(6), 311-326.
Gerolemou, L., Fidellaga, A., Rose, K., Cooper, S., Venturanza, M., Aqeel, A., … Khouli, S. (2014). Simulation-based training for nurses in sterile techniques during central vein catheterization. American Journal of Critical Care, 23(1), 40-48.
Gururaja, R. P., Yang, T., Paige, J. T., & Chauvin, S. W. (2008). Examining the Effectiveness of Debriefing at the Point of Care in Simulation-Based Operating Room Team Training. Rockville, MD: Agency for Healthcare Research and Quality (US).
Hammick, M., Freeth, D., Koppel, I., Reeves, S., & Barr, H. (2007). A best evidence systematic review of interprofessional education: BEME Guide no. 9. Medical Teacher, 29(8), 735-751.
Huitt, T. W., Killins, A., & Brooks, W. S. (2015). Team-based learning in the gross anatomy laboratory improves academic performance and students’ attitudes toward teamwork. Anatomical Sciences Education, 8(2), 95-103.
Kirkpatrick, D. L. (1994). Evaluating Training Programs: The Four Levels. San Francisco: Berrett-Koehler.
Kolb, D. A., & Fry, R. E. (1975). Toward an Applied Theory of Experiential Learning. New York: John Wiley & Sons.
Lim, W. Y., Wong, P., Teo, L., & Ho, V. K. (2020). Resuscitation during the COVID-19 pandemic: Lessons learnt from high-fidelity simulation. Resuscitation, 152, 89-90.
Luctkar-Flude, M., Hopkins-Rosseel, D., Jones-Hiscock, C., Pulling, C., Gauthier, J., Knapp, A., … Brown, C. (2016). Interprofessional infection control education using standardized patients for nursing, medical and physiotherapy students. Journal of Interprofessional Education and Practice, 2, 25-31.
Morison, S., & Jenkins, J. (2007). Sustained effects of interprofessional shared learning on student attitudes to communication and team working depend on shared learning opportunities on clinical placement as well as in the classroom. Medical Teacher, 29(5), 464-470.
Mundell, W. C., Kennedy, C. C., Szostek, J. H., & Cook, D. A. (2013). Simulation technology for resuscitation training: A systematic review and meta-analysis. Resuscitation, 84(9), 1174-1183.
Oandasan, I. (2007). Teamwork and healthy workplaces: Strengthening the links for deliberation and action through research and policy. HealthcarePapers, 7, 98-103.
Perkins, G. D. (2007). Simulation in resuscitation training. Resuscitation, 73(2), 202-211.
Perkins, G. D., Morley, P. T., Nolan, J. P., Soar, J., Berg, K., Olasveengen, T., … Neumar, R. (2020). International Liaison Committee on Resuscitation: COVID-19 consensus on science, treatment recommendations and task force insights. Resuscitation, 151, 145-147.
Savoldelli, G. L., Naik, V. N., Park, J., Joo, H. S., Chow, R., & Hamstra, S. J. (2006). Value of debriefing during simulated crisis management: Oral versus video-assisted oral feedback. Anesthesiology, 105(2), 279-285.
Steinemann, S., Kurosawa, G., Wei, A., Ho, N., Lim, E., Suares, G., … Berg, B. (2016). Role confusion and self-assessment in interprofessional trauma teams. The American Journal of Surgery, 211(2), 482-488.
Steinert, Y. (2005). Learning together to teach together: Interprofessional education and faculty development. Journal of Interprofessional Care, 19(Suppl 1), 60-75.
Wagner, P. D., Parker, C. J., Mavis, B. E., & Smith, M. K. (2011). An interdisciplinary infection control education intervention: Necessary but not sufficient. The Journal of Graduate Medical Education, 3(2), 203-210.
Wong, J., Goh, Q. Y., Tan, Z., Lie, S. A., Tay, Y. C., Ng, S. Y., … Soh, C. R. (2020). Preparing for a COVID-19 pandemic: A review of operating room outbreak response measures in a large tertiary hospital in Singapore. Canadian Journal of Anesthesia, 67, 732-745.
Wong, M. L., Lee, T. W. O., Allen, P. F., & Foong, K. W. C. (2020). Dental education in Singapore: A journey of 90 years and beyond. The Asia Pacific Scholar, 5(1), 3-7.
Ziv, A., Wolpe, P. R., Small, S. D., & Glick, S. (2003). Simulation-based medical education: An ethical imperative. Academic Medicine, 78(8), 783-788.
*Kah Wei Tan
1 Maritime Square,
#11-25 HarbourFront Centre,
Singapore 099253
Email address: kahwei.tan@mohh.com.sg
Submitted: 2 April 2020
Accepted: 3 June 2020
Published online: 5 January, TAPS 2021, 6(1), 109-113
https://doi.org/10.29060/TAPS.2021-6-1/SC2243
Wen Hao Chen1, Shairah Radzi1, Li Qi Chiu2, Wai Yee Yeong3, Sreenivasulu Reddy Mogali1
1Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; 2Department of Emergency Medicine, Tan Tock Seng Hospital, Singapore; 3Singapore Centre for 3D Printing, School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore
Abstract
Introduction: Simulation-based training has become a popular tool for chest tube training, but existing training modalities face inherent limitations. Cadaveric and animal models are limited by access and cost, while commercial models are often too costly for widespread use. Hence, medical educators seek a new modality for simulation-based instruction. 3D printing has seen growing applications in medicine, owing to its advantages in recreating anatomical detail using readily available medical images.
Methods: Anonymised computer tomography data of a patient’s thorax was processed using modelling software to create a printable model. Compared to a previous study, 3D printing was applied extensively to this task trainer. A mixture of fused deposition modelling and material jetting technology allowed us to introduce superior haptics while keeping costs low. Given material limitations, the chest wall thickness was reduced to preserve the ease of incision and dissection.
Results: The complete thoracostomy task trainer costs approximately SGD$130 (or USD$97), which is significantly cheaper than the average commercial task trainer. It requires approximately 118 hours of print time. The complete task trainer simulates the consistencies of ribs, intercostal muscles and skin.
Conclusion: By utilising multiple 3D printing technologies, this paper aims to outline an improved methodology to produce a 3D printed chest tube simulator. An accurate evaluation can only be carried out after we improve on the anatomical fidelity of this prototype. A 3D printed task trainer has great potential to provide sustainable simulation-based education in the future.
Keywords: Medical Education, Chest Tube, Thoracostomy, Simulation, 3D Printing
I. INTRODUCTION
Training opportunities in procedures such as chest tube insertion are increasingly limited amidst a growing population of trainees. Yet deliberate practice remains essential for improving proficiency and preventing possible complications such as lung parenchymal damage (Hernandez, El Khatib, Prokop, Zielinski, & Aho, 2018). Hence, many institutions have adopted simulation-based training to provide realistic training opportunities while mitigating harm to patients.
Cadaveric and animal models are limited by access and cost, and raise religious and ethical concerns (Kovacs, Levitan, & Sandeski, 2018). In addition, commercial models tend to be very costly (e.g. Trauma-Man® at USD~$25,000). As such, new modalities are desired.
Three-dimensional (3D) printing can accurately recreate anatomical details from imaging data through precision modelling and a wide range of compatible printing materials (Mogali et al., 2018). Together with its decreasing cost, it has become an attractive technology for creating inexpensive and anatomically accurate simulation modalities.
A previous study from the Federal University of Parana, Brazil (Bettega et al., 2019) outlined the development and evaluation of a low-cost chest tube simulator. The bony structures were 3D printed, while the remainder of the model was manually assembled using silicone sheets, foam pads, and balloons.
They compared two groups of participants, one using a porcine rib model and the other their 3D printed simulator. Both groups showed subjective improvements in confidence and safety, with no difference in objective grades. Hence, they concluded that their 3D printed simulator was equivalent to the animal model for simulating chest tube placement.
However, there exist many other 3D printing technologies and materials, which can potentially be applied to create superior haptics and anatomical detail. Hence, this paper aims to outline a methodology of integrating multiple 3D printing modalities to create a cost-efficient 3D printed chest tube simulator.
II. METHODS
An anonymised computed tomography (CT) scan of a healthy human thorax (2.5 mm slice thickness) in Digital Imaging and Communications in Medicine (DICOM) format was downloaded from the databank provided by 3D Slicer (https://www.slicer.org/, Version 4.10.2). The CT data was freely available for research and educational use at the time of this study.
3D Slicer was employed to segment the thoracic bony structures using a radiodensity-based threshold algorithm, which traces bone based on Hounsfield units. Due to a lack of contrast, possibly from the poor resolution of the CT images, we were not able to segment the respective soft tissue layers using thresholding; the intercostal muscles were therefore drawn manually with the paintbrush function. Intrathoracic organs were all removed to create a central cavity. From initial experimentation, we found that incision and dissection were too difficult to perform if the task trainer was printed at the true thoracic thickness, so we decided to thin out the chest wall. At the 4th and 5th intercostal space in the midaxillary line, the mean chest wall thickness is 39 mm (Laan et al., 2016), but our model measured 18 mm at this anatomical landmark.
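The radiodensity-based threshold step can be illustrated with a short sketch. The 300 HU cut-off and the toy values below are illustrative assumptions, not the exact parameters used in 3D Slicer:

```python
# Toy 3x3 CT "slice" in Hounsfield units (HU); real data would come from the
# DICOM series. Typical HU: air ~ -1000, soft tissue ~ 0-100, bone above a
# few hundred. The 300 HU bone cut-off below is an assumed illustrative value.
ct_slice = [
    [-1000, 40, 60],    # air, soft tissue, soft tissue
    [350, 700, 20],     # bone, bone, soft tissue
    [-200, 500, 1200],  # fat/lung, bone, bone
]

BONE_HU_THRESHOLD = 300  # assumption for illustration only

# Thresholding labels every voxel at or above the cut-off as bone.
bone_mask = [[hu >= BONE_HU_THRESHOLD for hu in row] for row in ct_slice]
bone_voxels = sum(cell for row in bone_mask for cell in row)
print(bone_voxels)  # 4 voxels labelled as bone
```

The binary mask produced this way is what the segmentation exports as the bony surface; soft tissue, lacking such a clean intensity separation, had to be painted manually as described above.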
Further processing was done to smoothen the contours of the model (see Appendix, A). Subsequently, the anatomical structures were saved as stereolithography (STL) file and exported into Materialise Magics (Version 20 by Materialise, Belgium).
In Magics, cut and Boolean techniques were used to create the replaceable component. This space was demarcated by the 5th to 6th intercostal space, between the anterior axillary and midaxillary lines. To create a secure fit for the replaceable piece, a groove was created and reinforced using the cut-and-punch function, which generates teething to maximise friction. The main frame measured 23cm (length) x 19.5cm (width) x 23.5cm (height), while the replaceable part measured 9cm (length) x 8.1cm (width) x 0.8cm (height). The Fix Wizard and Shrink Wrap Part functions were used to repair the surface mesh and eliminate holes and loose shells. The models were then exported using IdeaMaker® (Raise3D, USA) and uploaded to the printer.
The model was printed in two parts. The main frame was printed using fused deposition modelling (FDM), a technology that extrudes a continuous filament of melted thermoplastic layer by layer based on the design coordinates. Bones were printed with polylactic acid (PLA), a rigid material, while the intercostal muscles were printed with thermoplastic urethane (TPU), a flexible material. Support structures were printed using PLA. We utilised a dual-nozzle extrusion printer (Raise3D Pro 2, Raise3D, USA) to print the bony and soft tissue components simultaneously, thereby increasing convenience. The following settings were used: printing speed was reduced to 25 mm/s, retraction of the TPU extrusion head was disabled, nozzle temperatures were set at 200°C, and the build plate temperature at 65°C. Post-print processing was done to remove the support, with subsequent filing and sanding.
The replaceable part was printed using Objet500 Connex 3 (Stratasys Ltd, Eden Prairie, MN), a multi-material printer utilising material jetting technology. This technology drops liquid photopolymers onto the build tray and simultaneously cures the material using UV light. As such, we can mix plastic and rubber to create hybrid consistencies (Mogali et al., 2018) of varying shore hardness. Two materials were selected to achieve the desired haptics: VeroWhite (FullCure, RGD835) was the stiff plastic photopolymer used for bones, while Tango Plus (FullCure, 930) was the rubber photopolymer used for simulating soft tissue. Support resin (FullCure, 706) was also used for printing. Post-printing processing was required to remove the support resin.
Skin coloured silicone sheets of 5 mm thickness were wrapped around the model using generic superglue. The task trainer was cable tied to stainless steel supports and screwed onto a laminated wood baseplate. Cut sponges were wrapped in duct tape to simulate the lung parenchyma and placed into the central cavity created.
III. RESULTS
The completed task trainer is shown in Figure 1. Both the main frame and replaceable piece provided simulation for the ribs, intercostal muscles, and skin.
The 3D thoracostomy task trainer costs approximately SGD$130 (or USD$97), excluding manpower and printer costs (see Appendix B). The baseplate and mount were repurposed and did not add to costs.

Figure 1. Photos of the completed task trainer. Note. A = completed hemithorax main frame using FDM printing; B = replaceable piece; C = task trainer without the replaceable piece.
The main frame required 676g of polylactic acid and 114g of thermoplastic urethane. The replaceable piece required 30g of VeroWhite, 22g of Tango Plus, and 66g of Support706. It took a total of approximately 118 hours to print the entire task trainer.
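As a back-of-envelope illustration of how the material figures above translate into cost, the sketch below multiplies each printed mass by an assumed price per kilogram. The unit prices are hypothetical placeholders for illustration only; they are not the actual prices behind the ~SGD$130 figure reported above.

```python
# Estimate raw-material cost from printed mass.
# All prices per kg are ASSUMED values for illustration,
# not the actual prices paid in this study.
PRICE_PER_KG_SGD = {
    "PLA": 30.0,          # assumed FDM filament price
    "TPU": 45.0,          # assumed FDM filament price
    "VeroWhite": 300.0,   # assumed PolyJet resin price
    "TangoPlus": 350.0,   # assumed PolyJet resin price
    "Support706": 120.0,  # assumed support-resin price
}

# Mass actually printed, in grams (from the figures reported above).
usage_g = {
    "PLA": 676, "TPU": 114,            # main frame (FDM)
    "VeroWhite": 30, "TangoPlus": 22,  # replaceable piece (material jetting)
    "Support706": 66,
}

def material_cost_sgd(usage, prices):
    """Sum (grams / 1000) * price-per-kg across all materials."""
    return sum(g / 1000 * prices[m] for m, g in usage.items())

print(f"Estimated material cost: SGD${material_cost_sgd(usage_g, PRICE_PER_KG_SGD):.2f}")
```

With these assumed prices the raw material comes to roughly SGD$50, consistent with materials being only part of the ~SGD$130 total once silicone, hardware, and consumables are included.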
IV. DISCUSSION
Our methodology addressed several issues with the model outlined by the Brazilian team (Bettega et al., 2019). First, the methodology proposed here required less manual assembly of components, thereby saving time and improving fabrication. By utilising dual-extrusion printing, construction was simplified while an additional material was integrated for varying consistencies. The creation of a replaceable piece also meant long-term savings in the cost of utilising this model. These logistical advantages would make it easier to adopt our proposed task trainer.
Secondly, simple materials such as foam pads and silicone sheets are inferior at simulating human tissue. Our use of material jetting technology with the Objet500 Connex 3 (Stratasys Ltd, Eden Prairie, MN) printer allowed us to blend plastic and rubber materials to better recreate the consistency of human tissue. This technology and blend of materials have been extensively validated in other simulation models (Mogali et al., 2018).
Cost remains an important impediment to the widespread use of simulation in procedural education. We performed a surface-level comparison of our product against an existing commercial model in use by a local hospital in Singapore (LF03770U by Lifeform, NASCO, USA). The task trainer outlined here (~USD$97) is significantly cheaper than the commercial trainer (~USD$1,800). In addition, our material blend provides superior haptics and bony structures in the replaceable component, as compared to the plain silicone insert in the Lifeform model. These should provide improvements in the quality and quantity of simulation opportunities for training physicians.
Unfortunately, we were not able to recreate the anatomical thickness of the thorax given our material limitations at the time of writing. This inaccurate depth of dissection creates a confounding variable when evaluating our task trainer against existing cadaveric or commercial simulators. Hence, an evaluation of this task trainer was withheld to address this limitation in our future prototype. Moving forward, we plan to invite physicians to validate the efficacy of our improved task trainer.
V. CONCLUSION
We have outlined the methodology for creating a 3D printed tube thoracostomy task trainer using a combination of printing technologies. The outlined task trainer could potentially provide superior haptics at a lower cost while improving fabrication. However, an equitable validation against an existing modality of simulation can only be done after we achieve a comparable anatomical fidelity.
In our continued search for sustainable simulation models, 3D printing shows great potential in reproducing anatomical detail with superior cost efficiency. The growing availability of 3D printing infrastructure makes the large-scale adoption of such task trainers ever more realistic. It is therefore worthwhile to invest in the creation of the perfect 3D printed task trainer.
Notes on Contributors
Mr. Wen Hao Chen is an undergraduate medical student with the Lee Kong Chian School of Medicine, Singapore. He was involved in the development of the task trainer, along with co-authoring the submitted manuscript.
Dr. Shairah Radzi is a research fellow with the Lee Kong Chian School of Medicine, Singapore. She was involved in the development of the task trainer, along with co-authoring the submitted manuscript.
Dr. Li Qi Chiu is a consultant physician in the Department of Emergency Medicine in Tan Tock Seng Hospital, Singapore. She was involved in the development of the task trainer, along with co-authoring the submitted manuscript.
Assoc. Prof Wai Yee Yeong is the Associate Chair (Students) of the School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore. She was involved in the development of the task trainer, providing her technical expertise on the 3D printing process, along with co-authoring the submitted manuscript.
Asst. Prof Sreenivasulu Reddy Mogali is the Head of Anatomy and Principal Investigator in Clinical Anatomy and Medical Education at Lee Kong Chian School of Medicine, Singapore. He was involved in the development of the task trainer, along with co-authoring the submitted manuscript. He serves as the principal investigator.
Ethical Approval
Approved by Nanyang Technological University’s Institutional Review Board (2019-07-017). The CT scans used were anonymised and provided free for education and research use by 3D Slicer (https://www.slicer.org/, Version 4.10.2).
Acknowledgement
The authors thank the staff and faculty of the Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore for supporting this research; Singapore Centre for 3D Printing, Nanyang Technological University for their technical support.
Funding
This project was funded by the Ministry of Education Research Start-Up Grant, Lee Kong Chian School of Medicine, Nanyang Technological University Singapore.
Declaration of Interest
All authors declare no conflict of interest. The authors alone are responsible for the content and writing of the article.
References
Bettega, A. L., Brunello, L. F. S., Nazar, G. A., De-Luca, G. Y. E., Sarquis, L. M., Wiederkehr, H. de A., … Pimentel, S. K. (2019). Chest tube simulator: Development of low-cost model for training of physicians and medical students. Revista Do Colégio Brasileiro de Cirurgiões, 46(1). https://doi.org/10.1590/0100-6991e-20192011
Hernandez, M. C., El Khatib, M., Prokop, L., Zielinski, M. D., & Aho, J. M. (2018). Complications in Tube Thoracostomy: Systematic review and Meta-analysis. The Journal of Trauma and Acute Care Surgery, 85(2), 410–416. https://doi.org/10.1097/TA.0000000000001840
Kovacs, G., Levitan, R., & Sandeski, R. (2018). Clinical Cadavers as a Simulation Resource for Procedural Learning. AEM Education and Training, 2(3), 239–247. https://doi.org/10.1002/aet2.10103
Laan, D. V., Vu, T. D. N., Thiels, C. A., Pandian, T. K., Schiller, H. J., Murad, M. H., & Aho, J. M. (2016). Chest Wall Thickness and Decompression Failure: A Systematic Review and Meta-analysis Comparing Anatomic Locations in Needle Thoracostomy. Injury, 47(4), 797–804. https://doi.org/10.1016/j.injury.2015.11.045
Mogali, S. R., Yeong, W. Y., Tan, H. K. J., Tan, G. J. S., Abrahams, P. H., Zary, N., … Ferenczi, M. A. (2018). Evaluation by medical students of the educational value of multi-material and multi-colored three-dimensional printed models of the upper limb for anatomical education. Anatomical Sciences Education, 11(1), 54–64. https://doi.org/10.1002/ase.1703
*Sreenivasulu Reddy Mogali
11 Mandalay Road, Singapore 308232
Lee Kong Chian School of Medicine,
Nanyang Technological University
Email: sreenivasulu.reddy@ntu.edu.sg
Submitted: 17 April 2020
Accepted: 05 August 2020
Published online: 5 January, TAPS 2021, 6(1), 114-118
https://doi.org/10.29060/TAPS.2021-6-1/SC2358
Warren Fong1,3,4, Yu Heng Kwan2, Sungwon Yoon2, Jie Kie Phang1, Julian Thumboo1,2,4 & Swee Cheng Ng1
1Department of Rheumatology and Immunology, Singapore General Hospital, Singapore; 2Programme in Health Services and Systems Research, Duke-NUS Medical School, Singapore; 3Duke-NUS Medical School, Singapore; 4Yong Loo Lin School of Medicine, National University of Singapore, Singapore
Abstract
Introduction: This study aimed to examine the perception of faculty on the relevance, feasibility and comprehensiveness of the Professionalism Mini Evaluation Exercise (P-MEX) in the assessment of medical professionalism in residency programmes in an Asian postgraduate training centre.
Methods: Cross-sectional survey data was collected from faculty in 33 residency programmes. Items were deemed to be relevant to assessment of medical professionalism when at least 80% of the faculty gave a rating of ≥8 on a 0-10 numerical rating scale (0 representing not relevant, 10 representing very relevant). Feedback regarding the feasibility and comprehensiveness of the P-MEX assessment was also collected from the faculty through open-ended questions.
Results: In total, 555 faculty from 33 residency programmes participated in the survey. Of the 21 items in the P-MEX, 17 items were deemed to be relevant. For the remaining four items ‘maintained appropriate appearance’, ‘extended his/herself to meet patient needs’, ‘solicited feedback’, and ‘advocated on behalf of a patient’, the percentage of faculty who gave a rating of ≥8 was 78%, 75%, 74%, and 69% respectively. Of the 333 respondents to the open-ended question on feasibility, 34% (n=113) felt that there were too many questions in the P-MEX. Faculty also reported that assessments about ‘collegiality’ and ‘communication with empathy’ were missing in the current P-MEX.
Conclusion: The P-MEX is relevant and feasible for assessment of medical professionalism. There may be a need for greater emphasis on the assessment of collegiality and empathetic communication in the P-MEX.
Keywords: Professionalism, Singapore, Survey, Assessment
I. INTRODUCTION
Medical professionalism is one of the core Accreditation Council for Graduate Medical Education competencies and forms the basis of medicine’s contract with society. Unprofessional behaviour during the training of junior doctors has been shown to predict future unprofessional behaviour. Assessment of professionalism not only allows for timely feedback to residents to help them improve, but also allows for the development of better curricula to prevent lapses in medical professionalism. The Professionalism Mini-Evaluation Exercise (P-MEX) had previously been identified as a potential observer-based assessment tool (Kwan et al., 2018), but it has not been validated in a multi-ethnic and multi-cultural Asian context such as Singapore. According to the International Ottawa Conference Working Group on the Assessment of Professionalism, professionalism varies across cultural contexts, and cross-cultural validation of an assessment tool for medical professionalism is therefore imperative (Hodges et al., 2011). The current assessment tools adopted in local institutions may not cover the entire continuum of medical professionalism. For example, in the Ministry of Health Holdings (MOHH) C1 form, which is currently used for the 6-monthly assessment of residents, the assessment of professionalism is summative and consists of only three items: (1) accepts responsibility and follows through on tasks, (2) responds to patient’s unique characteristics and needs equitably, and (3) demonstrates integrity and ethical behaviour.
We aimed to (1) examine faculty perception of the relevance of the P-MEX for assessment of medical professionalism in the local context, and (2) determine the feasibility and comprehensiveness of the P-MEX as an assessment tool for medical professionalism in Singapore.
II. METHODS
A. Design and Participants
We invited faculty in the SingHealth residency programmes to participate in the study by completing an online anonymous questionnaire between July 2018 and August 2018. Participants were given one week to complete the survey, with three reminder emails sent at one week, two weeks and one month after the deadline for submission. The SingHealth Centralised Institutional Review Board approved the conduct of this study (Reference Number: 2016/3009). Implied informed consent was provided by participants before completing the online anonymous questionnaire.
B. Survey Questionnaire
The P-MEX consists of four domains (Doctor-patient relationship skills, Reflective skills, Time management and Inter-professional relationship skills) and 21 sub-domains. Faculty were asked to rate the relevance of each item in the P-MEX using a 0-10 numerical rating scale (0 representing not relevant, 10 representing very relevant). The faculty were also asked the following open-ended questions to determine the feasibility and comprehensiveness of the P-MEX: (1) “In your opinion, is a P-MEX form with 21 items too long, making it not feasible for routine use? If so, which items should be removed?” and (2) “In your opinion, are there any missing items (observable actions of a medical professional) that should be included in this form? If so, what new items should be added?” The questionnaire also included additional questions on demographic characteristics (age, gender, specialty and number of years since becoming a specialist).
C. Analysis
Items were deemed to be relevant to the assessment of medical professionalism when at least 80% of the faculty gave a rating of ≥8. This threshold was determined by expert judgement and prior literature (Avouac et al., 2011). For the open-ended questions on feasibility and comprehensiveness, responses were categorised, and the number of respondents who deemed the 21-item P-MEX to be not feasible (too long) or not comprehensive (there were missing items that should be included) is presented.
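The relevance criterion used in the analysis (an item counts as relevant when at least 80% of raters score it ≥8 on the 0-10 scale) can be sketched as follows. The ratings below are made-up toy data for illustration, not study data.

```python
# Relevance rule: an item is "relevant" when the share of raters
# giving it a score >= cutoff meets or exceeds the threshold.
def is_relevant(ratings, cutoff=8, threshold=0.80):
    """Return True if >= threshold of ratings are >= cutoff."""
    share = sum(r >= cutoff for r in ratings) / len(ratings)
    return share >= threshold

# Toy data: 8 of 10 raters score the item >= 8 (80%, exactly at the bar).
toy_ratings = [9, 8, 10, 7, 8, 9, 8, 6, 9, 8]
print(is_relevant(toy_ratings))  # -> True
```

Under this rule, the four items for which only 69-78% of faculty gave a rating of ≥8 fall just short of the 80% bar and are flagged as less relevant.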
III. RESULTS
In total, 555 faculty from 33 residency programmes participated in the survey (response rate 44%). Respondents were 59% male, with a median age of 43 years (range, 30 to 78 years). Specialists from medical and surgical disciplines made up 39% and 27% of the respondents respectively, with the remaining respondents coming from diagnostic radiology/nuclear medicine, anaesthesiology, paediatrics and emergency medicine (12%, 11%, 6% and 5% of the respondents respectively).
A. Relevance
Of the 21 items in P-MEX, 17 items were deemed to be relevant (at least 80% of the faculty gave a rating of ≥8). For the remaining four items ‘maintained appropriate appearance’, ‘extended his/herself to meet patient needs’, ‘solicited feedback’, and ‘advocated on behalf of a patient’, the percentage of faculty who gave a rating of ≥8 was 78%, 75%, 74%, and 69% respectively (Figure 1).

Figure 1: Percentage of faculty (n=555) who rated the item ≥8 on the relevance of the item in assessment of medical professionalism using a 0-10 numerical rating scale (0 representing not relevant, 10 representing very relevant).
B. Feasibility
There were 333 respondents to the question “In your opinion, is a P-MEX form with 21 items too long, making it not feasible for routine use? If so, which items should be removed?”, of which 34% (n=113) felt that there were too many questions in the P-MEX assessment form. The top four items chosen for removal were “solicited feedback” (n=36), “extended his/herself to meet patient needs” (n=27), “advocated on behalf of a patient” (n=25), and “maintained appropriate appearance” (n=23). A further 208 respondents (62%) felt that the number of questions in the P-MEX assessment form was appropriate.
C. Comprehensiveness
There were 307 respondents to the question “In your opinion, are there any missing items (observable actions of a medical professional) that should be included in this form? If so, what new items should be added?”, of which 28% (n=85) faculty felt that there were missing items. The most frequently mentioned missing items were regarding assessment of ‘collegiality’ (n=54) and assessment of ‘communication with empathy’ (n=12).
Examples of ‘collegiality’ provided by faculty included: “Collaboration with other healthcare professionals in the patients’ best interest” and “Demonstration of collaborative behaviour”.
Examples of ‘communication with empathy’ provided by faculty included: “Communicate with empathy and effectively to patient and family, taking into account their level of understanding, education and socioeconomic background” and “Communication skills…should embrace empathy, listening skills, discretion, sensitivity and intelligence… sufficient information, counselling, planning and advice regarding medical condition and options.”
A total of 207 respondents (67%) felt that the P-MEX was comprehensive for the assessment of medical professionalism.
IV. DISCUSSION
This study provides preliminary evidence on the relevance, feasibility and comprehensiveness of the P-MEX in the assessment of medical professionalism in an Asian city state. The current study is part of a larger project to culturally adapt and validate the P-MEX. To our knowledge, this is the first study to explore faculty perceptions of the relevance, feasibility and comprehensiveness of the P-MEX in the assessment of medical professionalism in a multi-cultural and multi-ethnic context.
There were four items that were deemed to be less relevant (extended his/herself to meet patient needs, advocated on behalf of a patient, solicited feedback, maintained appropriate appearance). These findings were also similar in a validation study performed in Canada, where the items ‘extended his/herself to meet patient needs’ and ‘advocated on behalf of a patient’ were also frequently marked as ‘not applicable’, suggesting that the two items may be less relevant (Cruess, McIlroy, Cruess, Ginsburg, & Steinert, 2006). Qualitative methods can be used to explore the reasons why these items were deemed to be less relevant. About one-third of faculty felt that P-MEX was too long. Further study is warranted to evaluate the possibilities for shortening the P-MEX to reduce response burden and enhance routine use of the P-MEX.
In addition, our study revealed a need for greater emphasis on the assessment of collegiality. Some faculty felt that ‘collegiality’ was missing in the P-MEX despite the presence of items such as ‘demonstrated respect for colleagues’ and ‘avoided derogatory language’. This suggests that collegiality may encompass actions other than demonstrating respect and avoiding derogatory language in the local context, and further reinforces the emphasis of interprofessional collaborative practice.
Faculty also felt that there was also a lack of assessment of ‘communication with empathy’ in the P-MEX. The importance of empathetic communication is also supported by a study in Indonesia, a country in the same region, which found that patients considered communication as the most important attribute of medical professionalism (Sari, Prabandari, & Claramita, 2016).
This study has some limitations. The non-response rate raises concern about possible selection bias; non-responders may have been less enthusiastic about the assessment of medical professionalism. Medical professionalism is affected by socio-cultural factors, and the findings from this study may therefore not be entirely generalisable to other socio-cultural contexts. In addition, we were unable to elucidate the reasons for disagreement with the relevance of some of the items in the P-MEX, as many faculty did not provide feedback and comments. Nevertheless, the findings of this study can serve as a basis for future research, especially in countries with similar multicultural backgrounds.
V. CONCLUSION
Faculty agreed that most of the items in the P-MEX were relevant to the assessment of medical professionalism. The majority of faculty also felt that the P-MEX was feasible for routine use in the assessment of medical professionalism. There may be a need for greater emphasis on the assessment of collegiality and communication with empathy in the modified P-MEX.
Notes on Contributors
Warren Fong reviewed the literature, designed the study, collected data, analysed data, and wrote manuscript. Yu Heng Kwan reviewed the literature, designed the study, collected data, analysed data, and wrote manuscript. Sungwon Yoon advised the design of study, analysed data, and gave critical feedback to the writing of manuscript. Jie Kie Phang collected data, analysed data, and wrote manuscript. Julian Thumboo advised the design of study, and gave critical feedback to the writing of manuscript. Swee Cheng Ng advised the design of study, collected data, analysed data, and gave critical feedback to the writing of manuscript. All authors have read and approved the final manuscript.
Ethical Approval
Ethical approval for this was granted by the SingHealth Institutional Review Board (Reference Number: 2016/3009).
Acknowledgement
The authors wish to thank all the study participants for contributing to this work.
Funding
This research was supported by SingHealth Duke-NUS Medicine Academic Clinical Programme Education Support Programme Grant (Reference Number: 03/FY2017/P2/03-A47). Funder was not involved in the design, delivery or submission of the research.
Declaration of Interest
The authors declare that they have no competing interests.
References
Avouac, J., Fransen, J., Walker, U., Riccieri, V., Smith, V., Muller, C., … Matucci-Cerinic, M. (2011). Preliminary criteria for the very early diagnosis of systemic sclerosis: Results of a Delphi Consensus Study from EULAR Scleroderma Trials and Research Group. Annals of the Rheumatic Diseases, 70(3), 476-481. doi:10.1136/ard.2010.136929
Cruess, R., McIlroy, J. H., Cruess, S., Ginsburg, S., & Steinert, Y. (2006). The professionalism mini-evaluation exercise: A preliminary investigation. Academic Medicine, 81(10), S74-S78.
Hodges, B. D., Ginsburg, S., Cruess, R., Cruess, S., Delport, R., Hafferty, F., . . . Ohbu, S. (2011). Assessment of professionalism: Recommendations from the Ottawa 2010 Conference. Medical Teacher, 33(5), 354-363.
Kwan, Y. H., Png, K., Phang, J. K., Leung, Y. Y., Goh, H., Seah, Y., . . . Lie, D. (2018). A systematic review of the quality and utility of observer-based instruments for assessing medical professionalism. Journal of Graduate Medical Education, 10(6), 629-638.
Sari, M. I., Prabandari, Y. S., & Claramita, M. (2016). Physicians’ professionalism at primary care facilities from patients’ perspective: The importance of doctors’ communication skills. Journal of Family Medicine and Primary Care, 5(1), 56-60. https://doi.org/10.4103/2249-4863.184624
*Warren Fong
SingHealth Rheumatology,
Senior Residency Programme,
20 College Road,
Singapore 169856
Tel: +6563214028
Email: warren.fong.w.s@singhealth.com.sg
Submitted: 16 April 2020
Accepted: 21 July 2020
Published online: 5 January, TAPS 2021, 6(1), 119-121
https://doi.org/10.29060/TAPS.2021-6-1/PV2250
Annushkha Sharanya Sinnathamby
Department of Paediatrics, Khoo Teck Puat National University Children’s Medical Institute, National University Hospital, Singapore
I. INTRODUCTION
“To have striven, to have made the effort, to have been true to certain ideals – this alone is worth the struggle.”
William Osler
The word “values” is heard frequently in healthcare. From the moment we step into medical school, we are challenged to reflect what our intrinsic values are, or how we can “add value” to a department during the residency application.
With time, and in going through the system, our definitions of the word “values” may change. To me, values are our sense of what is right and wrong, and of what is important in life. In other words, values include not only what is important to my profession and to being a good doctor, but also what is important to being a good person.
The philosopher Alasdair MacIntyre argues that one should reflect on the following three questions at the heart of moral thinking (Hinchman, 1989):
- Who am I?
- Who ought I to become?
- How ought I to get there?
In the context of understanding our values in healthcare, I wondered if the above can be translated into:
- What are my values?
- Which values should we value?
- How should we value those values?
In this article, I aim to touch on some of my views on values in the healthcare system, from the perspective of a junior doctor.
II. ARE OUR VALUES MISPLACED?
How often do we really ask ourselves what is important, what is good, or what is morally correct?
I asked a few junior doctors what values they think are important to being a good doctor. For some, the first response was classical, including “perseverance”, “compassion”, and “integrity”. However, the first thought of many others was not to be a kind or compassionate doctor, but an efficient or skilful one. I quote some of them verbatim:
“If my seniors don’t have to do anything, because I’ve done it all, then I’ve done my job.”
“No matter how much we value empathy and respect… I feel this doesn’t matter unless you have the competency to treat your patients.”
These doctors are far from unkind, dishonest, or cold. In fact, I know them personally to be some of the most good-hearted residents at work. Despite this, “typical” values such as kindness or integrity are not values which they instinctively identify with.
It is important to distinguish that being a “good” doctor may have more than one definition. “Good” as an adjective can mean being skilled and competent; on the other hand, it also means being morally upright, kind, and compassionate. Of course, it should be no argument that every doctor should be all of the above. Yet, I fear that we may be so increasingly fixated on the former, that we begin to lose sight of the latter.
As a case in point, I challenged some of our contemporaries to see how strongly they held on to an arguably core value—integrity. This value is often tested in a common daily scenario for our junior doctors: bargaining for a scan from our Radiology colleagues, where questionable tactics are sometimes employed to ensure a slot.
I asked every junior doctor working in the department two simple questions:
1) If they had ever lied to get a scan
2) If they had ever augmented the truth to get a scan
I had assumed that not a single doctor would have outright lied to get a scan, but 7.1% admitted to having done so. Furthermore, 67.9% said they would augment the truth to get a scan. This implies a spectrum that runs from exaggeration to outright falsehood.
When asked to elaborate, many retrospectively regretted embellishing the truth. A senior medical officer described in detail his experience of lying to obtain a particular peripherally inserted central catheter as a house officer. Even after four years, he could still recall the shame of lying to a radiologist who could almost certainly see through the lie, and of perhaps depriving another patient, who needed the scan more, of a slot.
Ultimately, I think this boils down to our personal yardstick of our own integrity, and how willing each of us is to allow ends to justify means. Though the change of phrasing in the question I asked led to a big change in statistics, this does not change the fact that for some doctors, “augmenting the truth” strays dangerously far from what the truth really is.
Perhaps, it is then relevant to examine what would make a junior doctor re-order their priorities, and inadvertently compromise their own core values. In an increasingly busy environment, one reason we may lose sight of our core values is burnout. Studies in Singapore have described that between 55.1%-80.7% of residents reported burnout in some form, higher than their US counterparts (Lee, Loh, Sng, Tung, & Yeo, 2018; See et al., 2016). Furthermore, it was postulated that there was a negative correlation between burnout and empathy levels, and that overnight calls and low degrees of respect from colleagues were associated with increased stress levels. Burnout and emotional fatigue may cause us to erroneously weigh our values, and this could be why some junior doctors prioritise efficiency, meticulousness, or even keeping their seniors happy, to the extent of losing sight of their core values.
III. WHAT VALUES SHOULD WE VALUE?
It is no secret that a career in medicine is highly competitive. At every stage of training, medical students face a barrage of rigorous assessments that continues into their professional careers. It is therefore important to examine the criteria we use to measure our doctors. Grading systems increasingly put emphasis on the softer side of medicine, such as compassion and integrity, but more can be done to help our doctors value themselves and their own values more.
I recently filled up a typical grading form for my house officer. For 22 questions about his daily work, there was only one about his values and professionalism. It was a shame, as I strongly believe that an emphasis on our values should be a learning outcome, even if it is not a graded criterion. I was once taught that a patient may never remember your management, but will always remember your kindness—words that resonate with me even today.
On an institutional level, it is also important to have an emphasis on values. The institution I work in advocates the TRICEPS core values, a catchy acronym for Teamwork, Respect, Integrity, Compassion, Excellence, and Patient-Centeredness. While these values were probably established as a guideline to attract like-minded individuals to the institution, I also think these are a good set of values to emulate.
IV. HOW SHOULD WE VALUE OUR VALUES?
A system is only as great as its people. It is difficult to change a huge system, but it is easy to start the change from within ourselves, and those around us. It is also beneficial to ensure junior doctors are mindful of their values. In our daily practice, this means empowering them to self-reflect.
A simple way I do this is to ensure that after every night call, I debrief each member of my on-call team to highlight things I noticed they did well. I try not to focus solely on their medical decisions, but also the small things: staying beyond hours just to let a teenage patient with a chronic condition sleep in before blood taking, sitting with an anxious parent, or sacrificing rest time to offer moral support to a colleague doing a difficult procedure. My hope in doing this is to allow junior doctors to recognise good traits in themselves, so that they can further nurture them along their journey of medicine, and in turn inspire the people around them.
My second suggestion is for each of us to take a minute to remember what values brought us into medicine in the first place. For me, when I am at my most fatigued, feel most apathetic, or when something had gone wrong at work, I read the personal statement I wrote for my medical school application more than 10 years ago, and try to remember that inside me, my core values are still the same as the overly enthusiastic teenager who wrote them—though perhaps more mature, and hopefully slightly wiser too.
After all, it is only if we are certain of what we value, that we can inspire and encourage those around us to value their values too.
Note on Contributor
Annushkha is a Paediatrics Senior Resident. She has an interest in medical education, and is currently in the National University Health System’s Medical Education Residency Programme. She conceptualised and gathered information and drafted the initial manuscript, critically reviewed the manuscript for important intellectual content and revised the manuscript.
Acknowledgements
The author would like to thank A/Prof Marion Aw and A/Prof Quah Thuan Chong for providing her with inspiration and guidance in writing this article.
Funding
There was no funding involved in writing this article.
Declaration of Interest
The author declares no conflicts of interest, including financial, consultant and institutional relationships that might lead to bias or a conflict of interest.
References
Hinchman, L. P. (1989). Virtue or Autonomy: Alasdair MacIntyre’s Critique of Liberal Individualism. Polity, 21(4), 635-654.
Lee, P. T.,Loh, J., Sng, G., Tung, J., & Yeo, K. K. (2018). Empathy and burnout: A study on residents from a Singapore institution. Singapore Medical Journal, 59(1), 50-54.
See, K. C., Lim, T. K., Kua, E. H., Phua, J., Chua, G. S., & Ho, K. Y. (2016). Stress and burnout among physicians: Prevalence and risk factors in a Singaporean internal medicine programme. Annals, Academy of Medicine, Singapore, 45(10), 471-474.
*Annushkha Sharanya Sinnathamby
Address: 1E Kent Ridge Rd,
National University Health System,
Singapore 119228
Email: annushkha_sharanya_sinnathamby@nuhs.edu.sg