Insights for medical education via a mathematical modelling of gamification

Submitted: 28 March 2020
Accepted: 23 September 2020
Published online: 4 May, TAPS 2021, 6(2), 9-24
https://doi.org/10.29060/TAPS.2021-6-2/OA2242

De Zhang Lee1, Jia Yi Choo1, Li Shia Ng2, Chandrika Muthukrishnan1 & Eng Tat Ang1

1Department of Anatomy, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; 2Department of Otolaryngology, National University Hospital, Singapore

Abstract

Introduction: Gamification has been shown to improve academic gains, but the mechanism remains elusive. We aim to understand, using mathematical modelling, how psychological constructs interact and influence medical education.

Methods: We studied a group of medical students (n=100; average age: 20) over a period of 4 years using the Personal Responsibility Orientation to Self-Direction in Learning Scale (PRO-SDLS) survey. Statistical tests (paired t-test) and models (logistic regression) were used to decipher the changes within these psychometric constructs (Motivation, Control, Self-efficacy & Initiative), with gamification as a tool. Students were encouraged to partake in a maze (10 stations) that challenged them to answer anatomical questions using potted human specimens.

Results: We found that the combined effects of the maze and Script Concordance Test (SCT) resulted in a significant improvement in “Self-Efficacy” and “Initiative” (p<0.05). However, the “Motivation” construct was not significantly improved by the maze alone (p>0.05). Interestingly, the “Control” construct was eroded in students not exposed to gamification (p<0.10). These findings were supported by key qualitative comments from the participants, such as “helpful”, “fun” and “knowledge gap” (reflecting self-awareness of their thought processes). Students found gamification reinvigorating and useful in their learning of clinical anatomy.

Conclusion: Gamification could influence some psychometric constructs in medical education and, by extension, the metacognition of the students. This was supported by the improvements shown in the SCT results. It is therefore proposed that gamification be further promoted in medical education. In fact, its usage should be more universal in education.

Keywords: Psychometric Constructs, Medical Education, Motivation, Initiative, Self-efficacy

Practice Highlights

  • Students’ enjoyment of (interest in) the curriculum will determine the eventual academic outcome.
  • Metacognition (defined as the “learning of learning”, “knowing of knowing” and/or the awareness of one’s thought processes) was improved with the SCT and gamification.
  • Gamification is useful as a form of augmentation for didactic teaching but should never replace it.
  • Different types of psychometric scales (e.g., LASSI versus PRO-SDLS) will produce varying results.
  • Gamification is resource intensive and needs extra time to prepare compared to didactic approaches.

I. INTRODUCTION

Psychology is integral to healthcare and education but has often been overshadowed by the other basic disciplines (Choudhry et al., 2019; Pickren, 2007). This is ironic, because the human psyche needs to be properly understood in order to be managed effectively (Wisniewski & Tishelman, 2019). Presently, the study of psychology does not feature prominently in the medical curriculum (Gallagher et al., 2015), with the exception of psychiatry (Douw et al., 2019). This gap needs to be addressed (Paros & Tilburt, 2018). In this research, we seek to understand the constructs underpinning good medical learning via gamification, which has wide-ranging effects (Mullikin et al., 2019). The psychometric constructs analysed were as follows: 1) “Motivation”, defined as the desire to learn out of interest or enjoyment (Yue et al., 2019); 2) “Initiative”, which refers to how proactive a student is in learning (Boyatzis et al., 2000); 3) “Control”, which is how much influence one has over one’s circumstances (Sheikhnezhad Fard & Trappenberg, 2019); and 4) “Self-Efficacy”, which relates to how confident one is in doing what needs to be done (Michael et al., 2019). We believe that these constructs contribute to students’ awareness of their own thought processes (metacognition) in their medical education.

“Gamification” is defined as a process of adding game-like elements to something so as to encourage more participation (Rutledge et al., 2018; Van Nuland et al., 2015). The idea of using games to “lighten up” medical education in the clinical setting was first proposed in 2002 (Howarth-Hockey & Stride, 2002). The authors observed increased engagement and participation during lunchtime medical quizzes in the hospital. They therefore concluded that medical education could be fun, and since then, gamification has been taken seriously by the community (Evans et al., 2015; Nevin et al., 2014). In essence, gamification could be something as simple as having board games (Ang et al., 2018), but importantly, its impact on students’ learning must be evaluated and validated. Most studies in the literature did not fulfil this requirement (Graafland et al., 2012). The impact of games on behavioural and/or psychological outcomes should be studied (Graafland et al., 2017; Graafland et al., 2014).

A PubMed search reveals numerous self-reporting tools, such as the LASSI (Learning and Study Strategies Inventory) (Muis et al., 2007), the MSLQ (Motivated Strategies for Learning Questionnaire) (Villavicencio & Bernardo, 2013), and the SRLPS (Self-Regulated Learning Perception Scale) (Turan et al., 2009). Given the choices, how does one decide which to adopt? In our research, we chose to use the PRO-SDLS survey questions with some modifications. The choice was both serendipitous and practical, as we had previously validated it via Cronbach’s alpha (>0.7). In our earlier work, feedback scores and results yielded inconclusive evidence of enhanced motivation among our students, and it was unclear whether any changes were due to gamification. With the current endeavour, we aim to show via mathematical modelling that there are indeed alterations to the psychometric constructs. Hence, we re-analyse the old data set together with additional new information, using statistical tools such as the logistic regression model, the Wilcoxon test, and the paired t-test.

Medical teaching and learning is a complex endeavour based on an apprenticeship model (Cortez et al., 2019), which may or may not be an ideal arrangement (Sheehan et al., 2010), and decision making is often delegated to the seniors (Chessare, 1998). Conversely, gamification could empower students to take charge of their own learning, including decision making (Shah et al., 2013). Furthermore, one needs to distinguish what works empirically from what is merely assumed (Cote et al., 2017). While our initial research addressed the impact of the games on academic performance, we now sought to further understand their effects on the psychometric dimensions. This will help us to understand the psychology of self-directed (or self-regulated) learning. We hypothesise that the amount of gamification will impact these constructs. In summary, we hope to achieve the following:

Aims:

  • Understanding the role of psychometric constructs and gamification in medical education via suitable mathematical modelling.
  • To decipher the interaction of the different psychometric constructs (Motivation, Self-efficacy, Control and Initiative) in producing desired learner behaviours (metacognition) via the anatomy maze.

II. METHODS

First-year medical students (M1) took part in this retrospective analytical study. Two randomised groups of medical students (n=75 in total; median age: 20 years) consented to the study (Groups 1 & 2). A randomised group of students (n=25) not exposed to gamification (Group 0) served as the control. Every student was required to complete a pre- and post-intervention PRO-SDLS for the research. There were no penalties for withdrawing from the IRB-approved project (see IRB: B-16-205).

Gamification was carried out according to the scheme in Figure 1. Each group was divided into three to four subgroups that would enter the maze with a clue card (see example in Figure 1) linked to a specific potted specimen. They were required to explore the museum for the next clue and had to answer the hidden questions (see examples in Appendix), which would provide further directions. At the conclusion, students were given a competitive pop quiz that had no impact on their summative academic grades.

The main purpose was to assess formative knowledge acquisition. The validated PRO-SDLS includes the following psychometric constructs: “Motivation” (7 questions), “Initiative” (6 questions), “Control” (6 questions) and “Self-Efficacy” (6 questions) (see Sup. Materials). The responses for each construct are then collapsed into an average. A higher score indicates greater agreement with that construct for self-directed learning (Ang et al., 2017). The survey includes reverse-scored items to ensure accuracy. For quantification purposes, we subtracted the pre-feedback from the post-feedback score for each question; an increased score for a particular construct suggests improvement (Cazan & Schiopca, 2014). Furthermore, students in Group 2 were given Script Concordance Test (SCT) quizzes (see Sup. Materials) as part of the gamification (Lubarsky et al., 2013; Lubarsky et al., 2018; Wan et al., 2018). The SCT was meant to enhance clinical reasoning. All data were analysed from two perspectives:

  • (a) The magnitude of the score increase (or decrease) in the post- PRO-SDLS survey responses, with respect to the pre- responses.
  • (b) The odds of a student reporting an increased score in the post- PRO-SDLS survey responses.
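The construct scoring described above (reverse-scored items collapsed into a construct average, then post − pre differences) can be sketched as follows; the item indices, responses, and the set of reverse-scored questions are all hypothetical illustrations, not the actual PRO-SDLS key:

```python
# Hypothetical scoring sketch for one construct of the PRO-SDLS.
# REVERSE_ITEMS and all responses below are made up for illustration.
REVERSE_ITEMS = {2, 5}  # zero-based indices of reverse-scored questions

def construct_average(responses, reverse_items=REVERSE_ITEMS):
    """Collapse one student's item responses (5-point Likert) into a construct
    average, mapping reverse-scored items x -> 6 - x first."""
    scored = [6 - r if i in reverse_items else r for i, r in enumerate(responses)]
    return sum(scored) / len(scored)

pre = construct_average([4, 3, 2, 5, 4, 1])    # pre-intervention responses
post = construct_average([5, 4, 2, 5, 4, 1])   # post-intervention responses
change = post - pre  # a positive change suggests improvement
```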

In (a), the paired differences for each student’s response were studied using a parametric approach (paired t-test). In (b), we studied the odds of an increased score for each construct, and investigated whether grouping affected these odds. More formally, for each construct k and each group g, we define the variable p_{g,k} as the probability of a student from group g showing an increase in score for construct k (and hence 1 − p_{g,k} as the probability that the student’s score decreased or remained unchanged). The value of p_{g,k} can be estimated by dividing the number of students from group g with an increased score for construct k by the total number of students from group g. If the interventions are unsuccessful, we would expect p_{g,k} to be around 0.5, since a student’s score would likely either increase or decrease at random, with equal probability. This can be tested using the t-test.
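Both perspectives can be sketched in a few lines of code; the per-student construct averages below are hypothetical, and in practice the t statistic would be compared against the t distribution with n − 1 degrees of freedom to obtain a p-value:

```python
import math
import statistics

def paired_t_statistic(pre, post):
    """Perspective (a): t statistic of the paired t-test on post - pre differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

def prob_increase(pre, post):
    """Perspective (b): estimate of p, the probability of an increased score."""
    return sum(b > a for a, b in zip(pre, post)) / len(pre)

# hypothetical per-student construct averages (5-point Likert scale)
pre  = [3.2, 3.5, 3.0, 3.8, 3.4, 3.1, 3.6, 3.3]
post = [3.5, 3.7, 3.1, 3.9, 3.6, 3.0, 3.8, 3.6]

t_stat = paired_t_statistic(pre, post)
p_hat  = prob_increase(pre, post)   # 7 of 8 students improved -> 0.875
```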

An alternative approach is to study the odds of success, which can be written as p_{g,k} / (1 − p_{g,k}). A common mathematical model used to study these odds is the logistic regression model. For each construct, the logistic regression model studies the odds of a student from a given group showing an increased or decreased score. The overall significance of the model can be tested using the p-value obtained from the likelihood ratio test, while the significance of the individual odds can be tested using the t-test. For more details on the logistic regression model, we refer the reader to Agresti (2003).
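As a minimal sketch of the quantity logistic regression estimates, the odds for a single group can be computed directly from the counts (an intercept-only logistic regression fitted to one group recovers exactly this log-odds as its intercept); the counts below are hypothetical:

```python
import math

def group_odds(n_increase, n_total):
    """Estimated probability, odds, and log-odds of an increased score for one
    group. The log-odds is what a logistic regression intercept estimates."""
    p = n_increase / n_total
    odds = p / (1 - p)
    return p, odds, math.log(odds)

# hypothetical: 18 of 25 students in a group reported an increased score
p, odds, log_odds = group_odds(18, 25)   # p = 0.72, odds = 18/7
```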

We utilised the open-source software R (R Core Team, 2019) to perform our statistical analyses.

III. RESULTS

The participation rate in the gamification endeavour was consistently 90±5%, there were no withdrawals, and the qualitative comments received were favourable.

A. Studying the Absolute Scores

The average change in scores across all the groups for each construct is given in Table 1. From these scores, we believe that our gamification exercises may have had a positive impact on “Self-Efficacy” and “Initiative”. To visualize the spread of responses, we have prepared box plots of the post – pre scores (available in Supplementary Materials).

Group    Self-Efficacy    Initiative    Motivation    Control
0             0.07           -0.05          0.11         0.03
1             0.13            0.26          0.05         0.01
2             0.13            0.20          0.12         0.07

Table 1: Average post − pre scores

To determine whether the construct scores pre- and post-intervention were different, we used the paired t-test, under the null hypothesis that there is no change. The p-values obtained are summarised in Table 2.

Group    Self-Efficacy    Initiative    Motivation    Control
0             0.46            0.57          0.43         0.79
1             0.07            0.00          0.54         0.86
2             0.01            0.00          0.09         0.14

Table 2: p-values of t-test (to 2 decimal places)

We observe that the null hypothesis of no difference between pre- and post-intervention levels is not rejected (at p=0.05) for any construct in the control group. Both tests also failed to show any significant change for the “Control” construct.

There is strong evidence that the classroom interventions employed in Groups 1 and 2 had an impact on students’ “Initiative” levels, reflected by the small p-values obtained using both tests. The estimated probability of a student reporting an increased “Initiative” score is 0.71 for Group 1 and 0.67 for Group 2, which are similar. Recall that the students in Group 2 participated in the SCT, in addition to the maze, which was common to both groups. This suggests that the SCT had a negligible impact on “Initiative”.

There is also strong evidence (at the 0.05 level) that the games enhanced “Self-Efficacy” among the students: the t-test gives strong evidence of a significant change in Group 2 (p<0.05), and milder evidence for Group 1 (p<0.10). The estimated probabilities of a student reporting an increase in “Self-Efficacy” are 0.63 and 0.56 for Groups 1 and 2, respectively. Again, the differences between the groups are negligible, suggesting that the SCT had little additional impact. Finally, there is mild evidence (p<0.10) of a significant change in “Motivation” for Group 2, but no such evidence for Group 1. The estimated probability of an increased “Motivation” score for Group 2 is 0.55. Here, the SCT might have helped to improve students’ motivation.

B. Studying the Odds of Score Improvement

We will now turn our attention to modelling the odds of a student reporting an increase in construct score. Earlier, we defined p_{g,k} as the probability of a student from group g showing an increase in score for construct k, and explained why we would expect p_{g,k} to be around 0.5 if the games have no impact on the odds of “success”. The t-test was used to test this, under the null hypothesis that p_{g,k} = 0.5 for all groups and constructs. The p-values obtained are summarised in Table 3.

Group    Self-Efficacy    Initiative    Motivation    Control
0             0.56            0.56          0.85         0.07
1             0.08            0.00          0.78         0.25
2             0.57            0.08          0.57         0.85

Table 3: p-values of t-test (to 2 decimal places)

We first notice that the p-values reported by the two tests are almost identical. Interestingly, there is mild evidence (p<0.10) that the probability of an increased “Control” score for Group 0 deviates from 0.5; it is estimated to be 0.32. This means that students in the control group tended to report a drop in “Control” levels.

There is also mild evidence (p<0.10) that the probability of a student from Group 1 reporting an increase in “Self-Efficacy” deviates from 0.5. This probability is estimated to be 0.63, which indicates that the odds of a Group 1 student reporting an increase in “Self-Efficacy” are higher than for the other groups.

Finally, there is evidence that the probability of reporting an increase in “Initiative” levels for students from Groups 1 and 2 deviates significantly from 0.5. The probabilities for Group 1 and Group 2 are 0.71 and 0.67, respectively.

Next, we model our data using logistic regression, fitting four models, one for each construct. For each model, we calculate the odds of a student from a given group showing an increased score. Odds greater than 1 mean that the student is more likely to show an increased score, while odds less than 1 mean the opposite; odds of exactly 1 mean that the student is equally likely to show an increase or a decrease. The statistical significance of the individual odds and the overall model fit for each construct were assessed using the t-test and the likelihood ratio test, respectively. The results are summarised in Table 4, with the statistically significant (at the 0.10 level) odds marked with their respective p-values in brackets.

 

Odds        Self-Efficacy    Initiative       Motivation    Control
Group 0          0.92            0.79             1.27       0.47 (0.08)
Group 1          1.08            2.41 (0.02)      1.67       0.72
Group 2          1.25            1.99 (0.10)      1.26       0.93

            Self-Efficacy    Initiative    Motivation    Control
p-value          0.93            0.01          0.22         0.29

Table 4: (top) Odds for each construct, with statistically significant odds marked by their p-values in brackets; (bottom) p-values assessing logistic regression model fit using the likelihood ratio test

Under the logistic regression model, not rejecting the null hypothesis for a given odds means that we take it to be 1. It should be noted that the individual coefficients should only be examined when the model is found to be significant under the likelihood ratio test, as coefficients obtained under a poor model fit may not be meaningful.

We notice that the significant terms flagged by the t-test (Table 3) largely agree with the significant terms of the logistic regression model, except for the “Self-Efficacy” odds for Group 1. However, the “Self-Efficacy” model was not deemed a good fit under the likelihood ratio test.

The only model deemed a good fit was the one for the “Initiative” construct. The odds for Group 0 are not significant (and hence assumed to be 1), while the odds for Groups 1 and 2 are statistically significant. We can interpret this model as follows:

  1. Since the odds for Group 0 are statistically insignificant under the t-test, we assume them to be 1. In other words, it is equally likely for a student from the control group to show an increase or decrease in score.
  2. The odds for both Groups 1 and 2 are statistically significant. The odds of success for Group 1 are 2.41, which translates to roughly a 7-in-10 chance (probability of 0.71) of a student in this group showing an improved score. A similar interpretation can be made for Group 2, whose odds of 1.99 translate to a slightly lower probability of 0.67 of displaying an improved score.
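The odds-to-probability conversion used above can be checked directly, since probability = odds / (1 + odds):

```python
def odds_to_probability(odds):
    """Convert the odds of 'success' into the corresponding probability."""
    return odds / (1 + odds)

# reproduce the figures quoted in the text for Groups 1 and 2
p_group1 = round(odds_to_probability(2.41), 2)   # 0.71
p_group2 = round(odds_to_probability(1.99), 2)   # 0.67
```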

 

With this, we have presented a logistic regression approach to mathematically modelling these odds. A search on Google Scholar and PubMed yielded no previous work which made use of this modelling approach on PRO-SDLS survey data. With the derived odds, we can compare the degree of success of the various classroom interventions. The logistic regression approach is therefore proposed as a complement to the t-test approach, which is restricted to detecting the presence of statistically significant differences.

C. Qualitative Comments (key words underpinning metacognition)

1) Positive feedback:

  • “The maze games were the most helpful as they helped me to consolidate my learning, and also enables me to ask the tutor any questions that I have from class. They allowed me to learn anatomy in a fun, enjoyable and memorable way”
  • “It allowed me to visualize the things that I was learning and helped with clarifying doubts”
  • “The extra question posted at each station was helpful”
  • “Wanting to be able to identify things in the museum makes me more motivated to prepare beforehand”
  • “Allows me to identify the knowledge gap so that I can work on it”
  • “I like the quiz as it motivates me to study beforehand and shows me the gaps in my knowledge”
  • “The clinically relevant questions made me think a lot”

2) Negative feedback:

  • “I prefer didactic teaching”
  • “We did not interact much with the exhibits”
  • “The maze was more of a mini quiz or test to check if we remember anything”
  • “Perhaps we could go into more complex concepts”
  • “More challenging questions”
  • “Students just follow each other around the anatomy museum and it defeats the purpose of the maze”
  • “The maze could have a competitive element to make it more exciting. Maybe more MCQ questions per model so we can make use of it more”

IV. DISCUSSION

We undertook this research to decipher how gamification as a concept helps medical students learn a basic subject like human anatomy, and to understand how the psychometric constructs interact to produce behavioural changes towards self-directed learning. This was done by analysing the data from the PRO-SDLS via statistical tests. Put simply, one needs to understand that medical education is a very complex process that demands a balance between apprenticeship (fellowship) (Sheehan et al., 2010) and a dose of self-directed learning (van Houten-Schat et al., 2018). Alongside our initial research into the gamification of anatomy education (Ang et al., 2018), other studies suggested similar benefits (Felszeghy et al., 2019; Nicola et al., 2017; Van Nuland et al., 2015). We are therefore convinced that gamification could help to engage students and improve academic gains. However, the notion of gaming can be very broad (virtual reality, board games, digital apps, etc.), so there is a need to understand the underlying psychology. With that in mind, we re-analysed our previous data together with the new data, using proven statistical tools to decipher the learning psychology of these medical students and their awareness of their own thought processes (metacognition).

We earlier hypothesised that gamification would influence these dependent constructs differently, and indeed this was the outcome. In our analysis, we found that the combined effects of the maze and SCT resulted in a significant improvement in “Self-Efficacy” and “Initiative”. While the maze alone did not significantly improve “Motivation”, we saw mild evidence of an improvement in the psychometric scores when the SCT and maze were used in combination. In lay terms, the maze encouraged these students to learn on their own. By extension, one could also argue that gamification will help students in making decisions, since “Motivation” and “Initiative” are key attributes (Vohs et al., 2008). The ability to make a simple clinical judgement, and the courage to act on it, are virtues that we should be imbuing in medical students and junior doctors. Interestingly, there is mild evidence that the “Control” construct eroded in the students not exposed to gamification as the course progressed; this adverse result was not seen in either group exposed to the games. Perhaps the more relaxed classroom setting with gamification helped students to feel more in control of their learning process, which would make sense across the education landscape.

A follow-up question would be: does the qualitative feedback confirm the quantitative results? Recall that in our logistic regression model, students from both non-control groups displayed a statistically significant improvement in “Initiative” levels. This is supported by some of the positive feedback received for our endeavours, such as being “motivated to prepare beforehand”, wanting to “identify the knowledge gap” and work on it, and being made to “think a lot” about the course content. Furthermore, some of the negative feedback, such as requests for more challenging questions, or more questions in general, suggests that the students are taking the initiative to learn more. This adds credence to the findings of our proposed logistic regression model, and highlights the importance of studying both qualitative and quantitative feedback.

There are caveats that one should be aware of when implementing gamification. The formative part of the endeavour can be variable, depending on numerous factors such as the tutor involved and the types of games, interventions, and reporting scales used. In the feedback, 76% of the participants felt that the maze should continue as an adjunct but should not totally replace didactic tutorials. In other words, introducing gaming elements into the curriculum should be done judiciously. With reverse scoring, it was shown that “Self-Efficacy” fell as the level of gamification increased. In lay terms, students might feel that the maze trivialises the learning of the subject. As a countermeasure, and to maintain quality assurance, we could introduce video lectures from previous years to allay these fears. In summary, we have now confirmed that gamification works and influences learning outcomes, as demonstrated by others (Burgess et al., 2018; Goyal et al., 2017; Kollei et al., 2017; Kouwenhoven-Pasmooij et al., 2017; Kurtzman et al., 2018; O’Connor et al., 2018; Patel et al., 2017; Savulich et al., 2017). Separately, there were criticisms as to why the SCT was introduced into the research. We believed that such augmentation would add “fun” for the pre-clinical students in tackling the various clinical scenarios and clinical anatomy.

V. LIMITATIONS OF THE STUDY

Our research necessitated that the students take part in the maze and the SCT. Although participation was not compulsory, no students opted out. Some critics might construe this as a form of forced play; according to Jane McGonigal, gamification should ideally not be mandated (Roepke et al., 2015).

VI. CONCLUSION

Through statistical modelling, we have shown how the “Initiative”, “Motivation”, and “Self-Efficacy” constructs could potentially benefit from gamification. The before-after experimental set-up allowed for powerful comparisons to be made. Studying the odds of construct score improvement, alongside the raw scores, allowed us to examine the data from different perspectives. Through this approach, we discovered that the potential benefits of our gamification exercises outweigh the potential adverse effects. Gamification resulted in improved “Initiative” in these medical students. We believe that their decision-making skills will also be boosted if the existing culture allows for more self-discovery (to improve “Initiative”, “Control” and “Self-efficacy”) and autonomy. If these recommendations are duly considered and implemented thoughtfully, there is little doubt that our future doctors will be better equipped to serve humanity. This may also help to avoid possible burnout in residents (Hale et al., 2019).

The broader conclusions and potential applications are as follows. In a continuum, we started by gamifying anatomy education and showed that academic grades could be improved by the process (Ang et al., 2018). We then asked the fundamental question of how exactly this happened, by carrying out a psychometric analysis of the participants. We discovered that the psychometric constructs are important, as demonstrated in this manuscript. The impact of gamification is now elevated, given the COVID-19 pandemic that has necessitated more online teaching. Moving forward, we believe that gamification should move towards an electronic application that students may access 24/7. This will ensure that medical teaching is fortified and somewhat protected from further disruptions.

Notes on Contributors

Lee De Zhang graduated with a degree in Statistics and Computer Science. He reviewed the literature, analysed the data and wrote part of the manuscript.

Eng Tat Ang, Ph.D., is a senior lecturer in anatomy at the Department of Anatomy at the YLLSoM, NUS. He reviewed the literature, designed the research, collected and analysed the data. He developed the manuscript.

Choo Jiayi, BSc (Hons) graduated with a degree in life sciences. She executed the research and helped to collect the data. She contributed to the development of the manuscript.

M Chandrika, MBBS, DO, MSc is an instructor at the Department of Anatomy at the YLLSoM, NUS. She helped to execute the research and collected the data.

Ng Li Shia, MBBS, Master of Medicine (Otorhinolaryngology), MRCS(Glasg) is a consultant at the Department of Otolaryngology, Head & Neck Surgery (ENT), National University Hospital. She developed the SCT questions.

Ethical Approval

This project has received full IRB and Ethical clearance (NUS IRB: B-16-205).

Acknowledgements

A big thank you to all the students who took part in the research, and to the CDTL, NUS, for providing a teaching enhancement fund to support this research. Appreciation is also due to Dr Patricia Chen (Dept. of Psychology, NUS) for her helpful advice.

Funding

The NUS TEG AY2017/2018 grant was awarded to support Mr De Zhang Lee’s statistical modelling of how gamification drove medical education via a maze.

Declaration of Interest

All authors have no conflict of interest to declare.

References

Agresti, A. (2003). Categorical data analysis. John Wiley & Sons.

Ang, E. T., Abu Talib, S. N., Samarasekera, D., Thong, M., & Charn, T. C. (2017). Using video in medical education: What it takes to succeed. The Asia Pacific Scholar, 2(3), 15-21.

Ang, E. T., Chan, J. M., Gopal, V., & Li Shia, N. (2018). Gamifying anatomy education. Clinical Anatomy, 31(7), 997-1005. https://doi.org/10.1002/ca.23249

Boyatzis, R. E., Murphy, A. J., & Wheeler, J. V. (2000). Philosophy as a missing link between values and behaviour. Psychological Reports, 86(1), 47-64. https://doi.org/10.2466/pr0.2000.86.1.47

Burgess, J., Watt, K., Kimble, R. M., & Cameron, C. M. (2018). Combining Technology and Research to Prevent Scald Injuries (the Cool Runnings Intervention): Randomized Controlled Trial. Journal of Medical Internet Research, 20(10), e10361. http://doi.org/10.2196/10361

Cazan, A. M., & Schiopca, B. A. (2014). Self-directed learning, personality traits and academic achievement. Procedia-Social and Behavioral Sciences (127), 640-644.

Chessare, J. B. (1998). Teaching clinical decision-making to pediatric residents in an era of managed care. Pediatrics, 101(4 Pt 2), 762-766; discussion 766-767. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/9544180

Choudhry, F. R., Ming, L. C., Munawar, K., Zaidi, S. T. R., Patel, R. P., Khan, T. M., & Elmer, S. (2019). Health literacy studies conducted in Australia: A scoping review. International Journal of Environmental Research and Public Health, 16(7). https://doi.org/10.3390/ijerph16071112

Cortez, A. R., Winer, L. K., Kassam, A. F., Hanseman, D. J., Kuethe, J. W., Quillin, R. C., 3rd, & Potts, J. R., 3rd. (2019). See none, do some, teach none: An analysis of the contemporary operative experience as nonprimary surgeon. Journal of Surgical Education, 76(6), e92-e101. https://doi.org/10.1016/j.jsurg.2019.05.007

Cote, L., Rocque, R., & Audetat, M. C. (2017). Content and conceptual frameworks of psychology and social work preceptor feedback related to the educational requests of family medicine residents. Patient Education and Counseling, 100(6), 1194-1202. https://doi.org/10.1016/j.pec.2017.01.012

Douw, L., van Dellen, E., Gouw, A. A., Griffa, A., de Haan, W., van den Heuvel, M., Hillebrand, A., Van Mieghem, P., Nissen, I. A., Otte, W. M., & Reijmer, Y. D. (2019). The road ahead in clinical network neuroscience. Network Neuroscience, 3(4), 969-993. https://doi.org/10.1162/netn_a_00103 

Evans, K. H., Daines, W., Tsui, J., Strehlow, M., Maggio, P., & Shieh, L. (2015). Septris: a novel, mobile, online, simulation game that improves sepsis recognition and management. Academic Medicine, 90(2), 180-184. https://doi.org/10.1097/ACM.0000000000000611

Felszeghy, S., Pasonen-Seppänen, S., Koskela, A., Nieminen, P., Härkönen, K., Paldanius, K. M., Gabbouj, S., Ketola, K., Hiltunen, M., Lundin, M., & Haapaniemi, T.  (2019). Using online game-based platforms to improve student performance and engagement in histology teaching. BMC Medical Education, 19(1), 273. https://doi.org/10.1186/s12909-019-1701-0

Gallagher, S., Wallace, S., Nathan, Y., & McGrath, D. (2015). ‘Soft and fluffy’: medical students’ attitudes towards psychology in medical education. Journal of Health Psychology, 20(1), 91-101. https://doi.org/10.1177/1359105313499780

Goyal, S., Nunn, C. A., Rotondi, M., Couperthwaite, A. B., Reiser, S., Simone, A., Katzman, D. K., Cafazzo, J. A., & Palmert, M. R. (2017). A mobile app for the self-management of Type 1 Diabetes among adolescents: A randomized controlled trial. Journal of Medical Internet Research  mHealth and uHealth, 5(6), e82. https://doi.org/10.2196/mhealth.7336

Graafland, M., Bemelman, W. A., & Schijven, M. P. (2017). Game-based training improves the surgeon’s situational awareness in the operation room: a randomized controlled trial. Surgical Endoscopy, 31(10), 4093-4101. https://doi.org/10.1007/s00464-017-5456-6

Graafland, M., Schraagen, J. M., & Schijven, M. P. (2012). Systematic review of serious games for medical education and surgical skills training. British Journal of Surgery, 99(10), 1322-1330. https://doi.org/10.1002/bjs.8819

Graafland, M., Vollebergh, M. F., Lagarde, S. M., van Haperen, M., Bemelman, W. A., & Schijven, M. P. (2014). A serious game can be a valid method to train clinical decision-making in surgery. World Journal of Surgery, 38(12), 3056-3062. https://doi.org/10.1007/s00268-014-2743-4

Hale, A. J., Ricotta, D. N., Freed, J., Smith, C. C., & Huang, G. C. (2019). Adapting Maslow’s Hierarchy of Needs as a Framework for Resident Wellness. Teaching and Learning in Medicine, 31(1), 109-118. https://doi.org/10.1080/10401334.2018.1456928

Howarth-Hockey, G., & Stride, P. (2002). Can medical education be fun as well as educational? British Medical Journal, 325(7378), 1453-1454.  https://doi.org/10.1136/bmj.325.7378.1453 

Kollei, I., Lukas, C. A., Loeber, S., & Berking, M. (2017). An app-based blended intervention to reduce body dissatisfaction: A randomized controlled pilot study. Journal of Consulting and Clinical Psychology, 85(11), 1104-1108. https://doi.org/10.1037/ccp0000246  

Kouwenhoven-Pasmooij, T. A., Robroek, S. J., Ling, S. W., van Rosmalen, J., van Rossum, E. F., Burdorf, A., & Hunink, M. G. (2017). A blended web-based gaming intervention on changes in physical activity for overweight and obese employees: Influence and usage in an experimental pilot study. Journal of Medical Internet Research   Serious Games, 5(2), e6. https://doi.org/10.2196/games.6421

Kurtzman, G. W., Day, S. C., Small, D. S., Lynch, M., Zhu, J., Wang, W., Rareshide, C. A., & Patel, M. S. (2018). Social incentives and gamification to promote weight loss: The lose it randomized, controlled trial. Journal of General Internal Medicine, 33(10), 1669-1675. https://doi.org/10.1007/s11606-018-4552-1  

Lubarsky, S., Dory, V., Duggan, P., Gagnon, R., & Charlin, B. (2013). Script concordance testing: from theory to practice: AMEE guide no. 75. Medical Teacher, 35(3), 184-193. https://doi.org/10.3109/0142159X.2013.760036

Lubarsky, S., Dory, V., Meterissian, S., Lambert, C., & Gagnon, R. (2018). Examining the effects of gaming and guessing on script concordance test scores. Perspectives on Medical Education, 7(3), 174-181. https://doi.org/10.1007/s40037-018-0435-8  

Michael, K., Dror, M. G., & Karnieli-Miller, O. (2019). Students’ patient-centered-care attitudes: The contribution of self-efficacy, communication, and empathy. Patient Education and Counseling. https://doi.org/10.1016/j.pec.2019.06.004

Muis, K. R., Winne, P. H., & Jamieson-Noel, D. (2007). Using a multitrait-multimethod analysis to examine conceptual similarities of three self-regulated learning inventories. British Journal of Educational Psychology, 77(Pt 1), 177-195. https://doi.org/10.1348/000709905X90876

Mullikin, T. C., Shahi, V., Grbic, D., Pawlina, W., & Hafferty, F. W. (2019). First year medical student peer nominations of professionalism: A methodological detective story about making sense of non-sense. Anatomical Sciences Education, 12(1), 20-31. https://doi.org/10.1002/ase.1782 

Nevin, C. R., Westfall, A. O., Rodriguez, J. M., Dempsey, D. M., Cherrington, A., Roy, B., Patel, M., & Willig, J. H. (2014). Gamification as a tool for enhancing graduate medical education. Postgraduate Medical Journal, 90(1070), 685-693. https://doi.org/10.1136/postgradmedj-2013-132486

Nicola, S., Virag, I., & Stoicu-Tivadar, L. (2017). vr medical gamification for training and education. Studies in Health Technology and Informatics, 236, 97-103. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/28508784 

O’Connor, D., Brennan, L., & Caulfield, B. (2018). The use of neuromuscular electrical stimulation (NMES) for managing the complications of ageing related to reduced exercise participation. Maturitas, 113, 13-20. https://doi.org/10.1016/j.maturitas.2018.04.009

Paros, S., & Tilburt, J. (2018). Navigating conflict and difference in medical education: insights from moral psychology. BMC Medical Education, 18(1), 273. https://doi.org/10.1186/s12909-018-1383-z

Patel, M. S., Benjamin, E. J., Volpp, K. G., Fox, C. S., Small, D. S., Massaro, J. M., Lee, J. J., Hilbert, V., Valentino, M., Taylor, D. H., & Manders, E. S.  (2017). effect of a game-based intervention designed to enhance social incentives to increase physical activity among families: The BE FIT randomized clinical trial. Journal of the American Medical Association Internal Medicine, 177(11), 1586-1593. https://doi.org/10.1001/jamainternmed.2017.3458  

Pickren, W. (2007). Psychology and medical education: A historical perspective from the United States. Indian Journal of Psychiatry, 49(3), 179-181. https://doi.org/10.4103/0019-5545.37318  

Roepke, A. M., Jaffee, S. R., Riffle, O. M., McGonigal, J., Broome, R., & Maxwell, B. (2015). Randomized controlled trial of superbetter, a smartphone-based/internet-based self-help tool to reduce depressive symptoms. Games for Health Journal, 4(3), 235-246. https://doi.org/10.1089/g4h.2014.0046

Rutledge, C., Walsh, C. M., Swinger, N., Auerbach, M., Castro, D., Dewan, M., Khattab, M., Rake, A., Harwayne-Gidansky, I., Raymond, T. T., & Maa, T. (2018). Gamification in action: Theoretical and practical considerations for medical educators. Academic Medicine, 93(7), 1014-1020. https://doi.org/10.1097/ACM.0000000000002183

Savulich, G., Piercy, T., Fox, C., Suckling, J., Rowe, J. B., O’Brien, J. T., & Sahakian, B. J. (2017). Cognitive training using a novel memory game on an ipad in patients with amnestic mild cognitive impairment (aMCI). International Journal of Neuropsychopharmacology, 20(8), 624-633. https://doi.org/10.1093/ijnp/pyx040

Shah, A., Carter, T., Kuwani, T., & Sharpe, R. (2013). Simulation to develop tomorrow’s medical registrar. The Clinical Teacher, 10(1), 42-46. https://doi.org/10.1111/j.1743-498X.2012.00598.x  

Sheehan, D., Bagg, W., de Beer, W., Child, S., Hazell, W., Rudland, J., & Wilkinson, T. J. (2010). The good apprentice in medical education. New Zealand Medical Journal, 123(1308), 89-96. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/20201158

Sheikhnezhad Fard, F., & Trappenberg, T. P. (2019). A novel model for arbitration between planning and habitual control systems. Frontiers in Neurorobotics, 13, 52. https://doi.org/10.3389/fnbot.2019.00052

Team, R. C. (2019). R: A language and environment for statistical computing. R Foundation for Statistical Computing.

Turan, S., Demirel, O., & Sayek, I. (2009). Metacognitive awareness and self-regulated learning skills of medical students in different medical curricula. Medical Teacher, 31(10), e477-483. https://doi.org/10.3109/01421590903193521

van Houten-Schat, M. A., Berkhout, J. J., van Dijk, N., Endedijk, M. D., Jaarsma, A. D. C., & Diemers, A. D. (2018). Self-regulated learning in the clinical context: A systematic review. Medical Education, 52(10), 1008-1015. https://doi.org/10.1111/medu.13615

Van Nuland, S. E., Roach, V. A., Wilson, T. D., & Belliveau, D. J. (2015). Head to head: The role of academic competition in undergraduate anatomical education. Anatomical Sciences Education, 8(5), 404-412. https://doi.org/10.1002/ase.1498

Villavicencio, F. T., & Bernardo, A. B. (2013). Positive academic emotions moderate the relationship between self-regulation and academic achievement. British Journal of Educational Psychology, 83(Pt 2), 329-340. https://doi.org/10.1111/j.2044-8279.2012.02064.x

Vohs, K. D., Baumeister, R. F., Schmeichel, B. J., Twenge, J. M., Nelson, N. M., & Tice, D. M. (2008). Making choices impairs subsequent self-control: A limited-resource account of decision making, self-regulation, and active initiative. Journal of Personality and Social Psychology, 94(5), 883-898. https://doi.org/10.1037/0022-3514.94.5.883

Wan, M. S., Tor, E., & Hudson, J. N. (2018). Improving the validity of script concordance testing by optimising and balancing items. Medical Education, 52(3), 336-346. https://doi.org/10.1111/medu.13495  

Wisniewski, A. B., & Tishelman, A. C. (2019). Psychological perspectives to early surgery in the management of disorders/differences of sex development. Current Opinion in Pediatrics, 31(4), 570-574. https://doi.org/10.1097/MOP.0000000000000784

Yue, P., Zhu, Z., Wang, Y., Xu, Y., Li, J., Lamb, K. V., Xu, Y., & Wu, Y. (2019). Determining the motivations of family members to undertake cardiopulmonary resuscitation training through grounded theory. Journal of Advanced Nursing, 75(4), 834-849. https://doi.org/10.1111/jan.13923

*Ang Eng Tat
Department of Anatomy
Yong Loo Lin School of Medicine
MD10, National University of Singapore
Singapore 117599
Email address: antaet@nus.edu.sg

Submitted: 4 August 2020
Accepted: 14 October 2020
Published online: 4 May, TAPS 2021, 6(2), 1-8
https://doi.org/10.29060/TAPS.2021-6-2/RA2370

Tow Keang Lim

Department of Medicine, National University Hospital, Singapore

Abstract

Introduction: Clinical diagnosis is a pivotal and highly valued skill in medical practice. Most current interventions for teaching and improving diagnostic reasoning are based on the dual process model of cognition. Recent studies which have applied the popular dual process model to improve diagnostic performance by “Cognitive De-biasing” in clinicians have yielded disappointing results. Thus, it may be appropriate to also consider alternative models of cognitive processing in the teaching and practice of clinical reasoning.

Methods: This is a critical-narrative review of the predictive brain model.

Results: The theory of predictive brains is a general, unified and integrated model of cognitive processing based on recent advances in the neurosciences. The predictive brain is characterised as an adaptive, generative, energy-frugal, context-sensitive, action-orientated, probabilistic, predictive engine. It responds only to prediction errors and learns by iterative prediction error management, processing and hierarchical neural coding.

Conclusion: The default cognitive mode of predictive processing may account for the failure of de-biasing, since de-biasing is not thermodynamically frugal and thus may not be sustainable in routine practice. Exploiting predictive brains by employing language to optimise metacognition may be a way forward.

Keywords: Diagnosis, Bias, Dual Process Theory, Predictive Brains

Practice Highlights

  • According to the dual process model of cognition, diagnostic errors are caused by biased reasoning.
  • Interventions to improve diagnosis based on “Cognitive De-biasing” methods report disappointing results.
  • The predictive brain is a unified model of cognition which accounts for diagnostic errors and the failure of “Cognitive De-biasing”, and may point to effective solutions.
  • Using appropriate language, as simple rules of thumb, to fine-tune predictive processing meta-cognitively may be a practical strategy to improve diagnostic problem solving.

I. INTRODUCTION

Clinical diagnostic expertise is a critical, highly valued, and admired skill (Montgomery, 2006). However, diagnostic errors are common and important adverse events which merit research and effective prevention (Gupta et al., 2017; Singh et al., 2014; Skinner et al., 2016). It is now widely acknowledged that concerted efforts are required to improve the research, training and practice of clinical reasoning in order to improve diagnosis (Simpkin et al., 2017; Singh & Graber, 2015; Zwaan et al., 2013). The consensus among practitioners, researchers and preceptors is that most preventable diagnostic errors are associated with biased reasoning during rapid, non-analytical, default cognitive processing of clinical information (Croskerry, 2013). The most widely held theory which accounts for this observation is the dual process model of cognition (B. Djulbegovic et al., 2012; Evans, 2008; Schuwirth, 2017). It posits that most diagnostic errors reside in intuitive, non-analytical or system 1 thinking (Croskerry, 2009). The logical, practical and common-sense implication which follows from this assumption is that we should activate and apply analytical or system 2 thinking to counter-check or “de-bias” system 1 errors (Croskerry, 2009). This is a popular notion, and it has facilitated the emergence of many schools of clinical reasoning based on training methods designed to deliberately understand, recognise, categorise and avoid specific diagnostic errors arising from system 1 thinking or cognitive bias (Reilly et al., 2013; Rencic et al., 2017; Restrepo et al., 2020). However, careful research on the merits of these interventions under controlled conditions shows neither consistent nor clear benefits (G. Norman et al., 2014; G. R. Norman et al., 2017; O’Sullivan & Schofield, 2019; Sherbino et al., 2014; Sibbald et al., 2019; J. N. Walsh et al., 2017). 
Moreover, even the recognition and categorization of these cognitive error events is itself deeply confounded by hindsight bias (Zwaan et al., 2016). At this juncture, it might therefore be appropriate to consider alternative models of cognition based on the advances in multi-disciplinary neuroscience research which have expanded greatly in recent years (Monteiro et al., 2020).

Over the past decade, the theory of predictive brains has emerged as an ambitious, unified, convergent and integrated model of cognitive processing, drawing on research in a large variety of core domains in cognition, including philosophy, metaphysics, cellular physics, thermodynamics, associative learning theory, Bayesian probability theory, information theory, machine learning, artificial intelligence, behavioural science, neuro-cognition, neuro-imaging, constructed emotions and psychiatry (Bar, 2011; Barrett, 2017a; Barrett, 2017b; Clark, 2016; Friston, 2010; Hohwy, 2013; Seligman, 2016; Teufel & Fletcher, 2020). It may have profound and practical implications for how we live, work and learn. However, to my knowledge, there is almost no discussion of this novel proposition in either medical education pedagogy or research. In this review I will therefore survey recent developments in the predictive brain model of cognition, map its key elements which impact on pedagogy and research in medical education, and propose an application in the training of diagnostic reasoning based on it.

An early version of this work was presented as an abstract (Lim & Teoh, 2018).

II. METHODS

This is a critical-narrative review of the predictive brain model, from Friston’s “free energy principle” proposition a decade ago to more recent critical examinations of the emerging supportive evidence from neurophysiological studies over the past five years (Friston, 2010; K. S. Walsh et al., 2020).

III. RESULTS

A. The Brain is a Frugal Predictive Engine

(General references: Bar, 2011; Barrett, 2017a; Barrett, 2017b; Clark, 2013; Clark, 2016; Friston, 2010; Gilbert & Wilson, 2007; Hohwy, 2013; Seligman, 2016; Seth et al., 2011; Sterling, 2012).

In contrast with traditional bottom-up, feed-forward models of cognition, the predictive brain model inverts this process. Perception is characterised as an entirely inferential, rapidly adaptive, generative, energy-frugal, context-sensitive, action-orientated, probabilistic, predictive process (Tschantz et al., 2020). This system is governed by the need to respond rapidly to ever-changing demands from the external environment and from our body’s internal physiological signals (interoception), and yet minimise free energy expenditure (or waste) (Friston, 2010; Kleckner et al., 2017; Sterling, 2012). Thus, it is not passive and reactive to new information, but predictive and continuously proactive. From very early, elemental and sparse cues, it continuously generates predictive representations based on remembered similar experiences in the past, which may include simulations. It performs iterative matching of top-down prior representations with bottom-up signals and cues, in a hierarchy of categories of abstraction and content specificity over scales of space and time (Clark, 2013; Friston & Kiebel, 2009; Spratling, 2017a). This matching process is also sensitive to variations in context and thus enables us to make sense of rapidly changing and complex situations (Clark, 2016).

Cognitive resources, in terms of attention, are allocated only to the management of errors in prediction, i.e. the mismatch between prior representations and new emergent information. The system seeks to minimise prediction errors (PEs), and there is repetitive, recognition-expectation-based signal suppression when this is achieved. Thus, this is a system which responds only to the unfamiliar situation, or to what it considers newsworthy. This is analogous to Claude Shannon’s classic analysis of “surprisals” in information theory (Shannon et al., 1993). Learning is based on the generation and neural coding of new predictive representations in memory. The most direct and powerful evidence for this process comes from optogenetic experiments, with their exquisitely high resolution in the monitoring and manipulation over space-time of neuronal signalling and behaviour in freely foraging rats, which show causal linkages between PEs, dopamine neurons and learning (Nasser et al., 2017; Steinberg et al., 2013).
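Shannon’s notion of “surprisal” quantifies how newsworthy an event is as the negative log of its probability: the rarer the event, the larger the surprise, and the more processing it warrants. A minimal sketch (the function name is illustrative, not from any library):

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon surprisal of an event with probability p, in bits."""
    return -math.log2(p)

# A common, expected presentation carries little information and is
# rapidly suppressed; a rare, unexpected one is highly "newsworthy"
# and commands attention.
print(surprisal_bits(0.5))   # 1.0 bit
print(surprisal_bits(0.01))  # ~6.64 bits
```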

The brain intrinsically generates representations of the world in which it finds itself from past experience, and these are refined by sensory data. New sensory information is represented and inferred in terms of these known causes. Determining which combination of the many possible causes best fits the current sensory data is achieved by minimising the error between the sensory data and the sensory inputs predicted by the expected causes, i.e. the PE. In the service of PE reduction, the brain will also generate motor actions such as saccadic eye movements and foraging behaviour. The prediction arises from a process of “backwards thinking”, an inferential Bayesian best guess or approximation based simultaneously on sensory data and prior experience (Chater & Oaksford, 2008; Kersten et al., 2004; Kwisthout et al., 2017a; Kwisthout et al., 2017b; Ting et al., 2015). It is a hierarchical predictive coding process, reflecting the serial organization of the neuronal architecture of the cerebral cortex; higher levels are abstract, whereas the lowest level amounts to a prediction of the incoming sensory data (Kolossa et al., 2015; Shipp, 2016; Ting et al., 2015). The actual sensory data are compared to the predicted sensory data, and it is the discrepancies, or “errors”, that ascend the hierarchy to refine all higher levels of abstraction in the model. Thus, this is a learning process whereby, with each iteration, the model representations are optimised and encoded in long-term memory as the PEs are minimised (Friston, FitzGerald, Rigoli et al., 2017; Spratling, 2017b).

This system of neural responses is regulated and fine-tuned by varying the gain on the weighting of the reliability (or precision) of the PE estimate itself. In other words, it is the level of confidence (versus uncertainty) in the PE which determines the intensity of attention allocated to it and the strength of coding in memory following its resolution (Clark, 2013; Clark, 2016; Feldman & Friston, 2010; Hohwy, 2013). This regulatory, neuro-modulatory process is shaped by the continuous cascade of action-relevant information, which is sensitive to both external context and internal interoceptive (i.e. from perception of our own physiological responses) and affective signals (Clark, 2016). This metacognitive capacity to effectively manipulate and re-calibrate the precision of the PE itself may be a critical aspect of decision making, problem-solving behaviour and learning (Hohwy, 2013; Picard & Friston, 2014).
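The precision-weighted updating described above can be sketched as a simple Kalman-style Bayesian update, in which the gain applied to the prediction error depends on the relative precision (inverse variance) of prior belief and new evidence. This is a toy illustration under Gaussian assumptions, not the formal treatment from the predictive-coding literature:

```python
def update_belief(prior: float, prior_precision: float,
                  observation: float, obs_precision: float) -> tuple:
    """One precision-weighted prediction-error update.

    The prediction error (observation - prior) is weighted by the
    relative precision of the new evidence: a reliable signal moves
    the belief strongly, a noisy one barely at all.
    """
    gain = obs_precision / (prior_precision + obs_precision)
    posterior = prior + gain * (observation - prior)
    posterior_precision = prior_precision + obs_precision
    return posterior, posterior_precision

# A confident prior (precision 9) is nudged only slightly by a noisy
# observation (precision 1) ...
belief, _ = update_belief(prior=0.0, prior_precision=9.0,
                          observation=10.0, obs_precision=1.0)
print(belief)  # 1.0
# ... but the same observation, when precise, pulls the belief strongly.
belief, _ = update_belief(prior=0.0, prior_precision=1.0,
                          observation=10.0, obs_precision=9.0)
print(belief)  # 9.0
```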

B. Clinical Reasoning is Predictive Error Processing and Learning is Predictive Coding

The core processes of the predictive brain which are engaged during diagnostic reasoning are summarised in Table 1 and Figure 1.

Core features of the predictive brain model | Clinical reasoning features and processes
The frugal brain and free energy principle (Friston, 2010) | Cognitive load in problem solving (Young et al., 2014)
Iterative matching of top-down priors vs bottom-up signals | Inductive foraging (Donner-Banzhoff & Hertwig, 2014; Donner-Banzhoff et al., 2017)
Predictive error processing | Pattern recognition in diagnosis
Recognition-expectation-based signal suppression | Premature closure (Blissett & Sibbald, 2017; Melo et al., 2017)
Hierarchical predictive error coding as learning | Development of illness scripts (Custers, 2014)
Probabilistic-Bayesian inferential approximations | Bayesian inference in clinical reasoning
Context sensitivity | Contextual factors in diagnostic errors (Durning et al., 2010)
Action orientation | Foraging behaviour in clinical diagnosis (Donner-Banzhoff & Hertwig, 2014; Donner-Banzhoff et al., 2017)
Interoception and affect in prediction error management | Gut feel and regret (metacognition)
The precision (reliability/uncertainty) of prediction errors | Clinical uncertainty (metacognition) (Bhise et al., 2017; Simpkin & Schwartzstein, 2016)

Table 1: Core features of the predictive brain model of cognition manifested as clinical reasoning processes

Legend to Figure 1

A summary of the cognitive processes engaged by the predictive brain model during clinical diagnosis

A: Active search for diagnostic clues based on prior experience of similar patients in similar situations.

B: Recognition of key features will activate a series of familiar illness scripts from long-term memory to match with the new case. If this is successful, a diagnosis is made and any prediction error signals are rapidly silenced.

C & D: When the illness scripts do not match the presenting features, cognition slows down, attention is heightened and further searches are made for additional matching clues and illness scripts. This is iterated until a satisfactory match is found or a new illness script is generated to account for the mismatch.

E: A new variation in the presenting features for that disease is then encoded as a new illness script in memory, and is thus a valuable learning moment.

F: The degree of uncertainty or level of confidence in matching key presenting features to a diagnosis is a meta-cognitive skill and a critical expertise in clinical diagnosis. This corresponds to the precision or gain/weighting of prediction errors (metacognition) in the predictive brain model.

Figure 1: A summary of the cognitive processes engaged by the predictive brain model during clinical diagnosis

Thermodynamic frugality is a central feature of the predictive brain model, and in this system the primacy of attending only to surprises, or PEs, is pivotal (Friston, 2010). This might be regarded as an energy-efficient strategy for coping with cognitive load, which has long been recognised as an important consideration in clinical problem solving and learning (Young et al., 2014; Van Merrienboer & Sweller, 2010).

From the first moments of a diagnostic encounter, the clinician is alert to clues which might point to the diagnosis, and begins to generate possible diagnostic scenarios and simulations based upon her prior experience of similar patients and situations (Donner-Banzhoff & Hertwig, 2014). This is iterative and, from a scanty set of presenting features, a plausible diagnosis may be considered within a few seconds to minutes (Donner-Banzhoff & Hertwig, 2014; Donner-Banzhoff et al., 2017). Thus, a familiar illness script is activated from long-term memory to match with the new case (Custers, 2014). If this is successful, a particular diagnosis is recognised and any PE signal is rapidly silenced. Functional MRI studies of clinicians during this process showed that highly salient diagnostic information, by reducing uncertainty about the diagnosis, rapidly decreased monitoring activity in the frontoparietal attentional network, and may contribute to premature diagnostic closure, an important cause of diagnostic errors (Melo et al., 2017). This may be considered a form of diagnosis- or recognition-related PE signal suppression, analogous to the well-known phenomenon of repetition suppression (Blissett & Sibbald, 2017; Bunzeck & Thiel, 2016; Krupat et al., 2017).

In cases where the illness scripts do not match the presenting features, a PE event is encountered: cognition slows down, attention is heightened, and further searches are made for additional matching clues and illness scripts (Custers, 2014). This is iterated until a satisfactory match is found or a new illness script is generated to account for the mismatch. The mismatch is then encoded in memory as a new variation in the presenting features for that disease, and is thus a valuable learning moment. Bayesian inference is a fundamental feature of both clinical diagnostic reasoning and the predictive brain model (Chater & Oaksford, 2008).
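The Bayesian inference common to diagnostic reasoning and the predictive brain model can be made concrete with the odds form of Bayes’ rule, familiar from the bedside use of likelihood ratios: the pre-test (prior) odds are multiplied by the likelihood ratio of a finding to yield the post-test odds. The numbers below are illustrative only, not from the source:

```python
def post_test_probability(prior_prob: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: post-test odds = prior odds x LR,
    then converted back to a probability."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Illustrative numbers: a disease with a 10% pre-test probability and a
# finding with likelihood ratio 9 yields a post-test probability of ~50%.
print(post_test_probability(0.10, 9.0))  # ~0.5
```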

As in the predictive brain model, external contextual factors and internal emotional and physiological responses, such as gut feelings and regret, exert profound effects on clinical decision making (M. Djulbegovic et al., 2015; Durning et al., 2010; Stolper & van de Wiel, 2014; Stolper et al., 2014). Likewise, the active inductive foraging behaviour in searching for diagnostic clues described in experienced primary care physicians is analogous to behaviour directed at reducing PEs (Donner-Banzhoff & Hertwig, 2014; Donner-Banzhoff et al., 2017). The precision or gain/weighting of PEs is manifested metacognitively as uncertainty, or levels of confidence, in clinical reasoning (Sandved-Smith et al., 2020). Metacognition is a critical capacity and expertise in effective decision making (Bhise et al., 2017; Fleming & Frith, 2014; Simpkin & Schwartzstein, 2016).

C. Why Applying the Dual Process Model May Not Improve Clinical Reasoning

Recent studies which have applied the popular dual process model to improve diagnostic performance by “cognitive de-biasing” in clinicians have yielded disappointing results (G. R. Norman et al., 2017). Cognitive processing by the predictive brain as the dominant default mode of operation may account for this setback, since de-biasing is not naturalistic and requires retrospective, “off-line” processing after the monitoring salience network has already shut off (Krupat et al., 2017; Melo et al., 2017). It is not thermodynamically frugal and thus may not be sustainable in routine practice (Friston, 2010; Young et al., 2014). Even Daniel Kahneman admits that, despite decades of research in cognitive bias, he is unable to exert agency in the moment and de-bias himself (Kahneman, 2013). This will be even more so in novice diagnosticians in the training phase, who have scanty illness scripts and limited tolerance of any further cognitive loading (Young et al., 2014). The failure of clinicians to even identify cognitive biases reliably, due to hindsight bias, suggests that this intervention will be among the least effective in improving diagnostic reasoning (Zwaan et al., 2016).

D. Using Words to Fine-Tune the Precision of Diagnostic Prediction Error

Daniel Kahneman, the foremost expert on cognitive bias, cautions that, contrary to what some experts in medical education advise, avoiding bias is ineffective in improving decision making under uncertainty (Restrepo et al., 2020). By contrast, he suggests that we apply simple, common-sense rules of thumb (Kahneman et al., 2016). I hypothesise that instructing clinical trainees to use appropriate words-to-self in the diagnostic setting, during active, naturalistic PE processing before the diagnosis is made, and not as a retrospective counter-check to cognition afterwards, may be a way forward (Betz et al., 2019; Clark, 2016; Lupyan, 2017). In a multi-centre, iterative thematic content analysis of over 2,000 cases of diagnostic errors with a structured taxonomy, Schiff and colleagues identified a limited number of pitfall themes which were overlooked and which predisposed physicians to reasoning errors (Reyes Nieva et al., 2017). These pitfall themes included three of particular interest in relation to naturalistic PE processing, namely: (1) counter diagnostic cues, (2) things that do not fit, and (3) red flags (Reyes Nieva et al., 2017). Thus, we instructed our student interns and internal medicine residents to pay particular attention to these three diagnostic pitfalls during the review of new patients and clinical problems (Lim & Teoh, 2018). They were required to append the following sub-headings to their clerking impression in the patient’s electronic health record (eHR): (a) Counter diagnostic features; (b) Things that do not fit; (c) Red flags. This template was added after the resident had entered his or her enumerated list of diagnoses or issues. “Counter diagnostic features” was defined as symptoms, signs or investigations which were inconsistent with the proposed primary diagnosis. “Things that do not fit” was defined as any finding that could not reasonably be accounted for, taking into account the main and differential diagnoses. “Red flags” were defined as findings which raised the possibility of a more serious underlying illness requiring early diagnosis or intervention. The attending physicians were required, during bedside rounds, to give feedback on these points and to make amendments to the eHR as appropriate. This exercise may give us an opportunity to see whether we can improve diagnostic accuracy by using pivotal words-to-self in the appropriate setting to maintain cognitive openness and flexibility, and thus avoid premature closure (Krupat et al., 2017). It is also a valuable critical, metacognitive thinking habit to inculcate in tyro diagnosticians (Carpenter et al., 2019).
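As an illustration only (the function and example note text are hypothetical, not artefacts from the study), the three-pitfall template appended to the clerking impression could be rendered as follows, with each sub-heading left blank for the trainee to complete:

```python
# The three diagnostic-pitfall sub-headings used in the exercise above
# (after Reyes Nieva et al., 2017).
PITFALL_HEADINGS = (
    "Counter diagnostic features",
    "Things that do not fit",
    "Red flags",
)

def append_pitfall_template(clerking_note: str) -> str:
    """Append the pitfall sub-headings to a trainee's clerking
    impression, each followed by an empty bullet to be filled in."""
    sections = [f"{heading}:\n- " for heading in PITFALL_HEADINGS]
    return clerking_note.rstrip() + "\n\n" + "\n\n".join(sections)

note = append_pitfall_template("Impression:\n1. Community-acquired pneumonia")
print(note)
```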

IV. CONCLUSION

The theory of predictive brains has emerged as a major narrative in the understanding of how our mind works. It may account for the limitations of interventions designed to improve diagnostic problem solving which are based on the dual process theory of cognition. Exploiting predictive brains by employing language to optimise metacognition may be a way forward.

Note on Contributor

Lim designed the paper, reviewed the literature, drafted and revised it.

Ethical Approval

There is no ethical approval associated with this paper.

Funding

No funding sources are associated with this paper. 

Declaration of Interest

No conflicts of interest are associated with this paper. 

References

Bar, M. (2011). Predictions in the brain: Using our past to generate a future. Oxford University Press.

Barrett, L. F. (2017a). How emotions are made: the secret life of the brain. Houghton Mifflin Harcourt.

Barrett, L. F. (2017b). The theory of constructed emotion: An active inference account of interoception and categorization. Social Cognitive and Affective Neuroscience, 12(1), 1-23.  https://doi.org/10.1093/scan/nsw154

Betz, N., Hoemann, K., & Barrett, L. F. (2019). Words are a context for mental inference. Emotion, 19(8), 1463-1477. https://doi.org/10.1037/emo0000510

Bhise, V., Rajan, S. S., Sittig, D. F., Morgan, R. O., Chaudhary, P., & Singh, H. (2017). Defining and measuring diagnostic uncertainty in medicine: A systematic review. Journal of General Internal Medicine, 33, 103–115. https://doi.org/10.1007/s11606-017-4164-1

Blissett, S., & Sibbald, M. (2017). Closing in on premature closure bias. Medical Education, 51(11), 1095-1096. https://doi.org/10.1111/medu.13452

Bunzeck, N., & Thiel, C. (2016). Neurochemical modulation of repetition suppression and novelty signals in the human brain. Cortex, 80, 161-173. https://doi.org/10.1016/j.cortex.2015.10.013

Carpenter, J., Sherman, M. T., Kievit, R. A., Seth, A. K., Lau, H., & Fleming, S. M. (2019). Domain-general enhancements of metacognitive ability through adaptive training. Journal of Experimental Psychology. General, 148(1), 51-64. https://doi.org/10.1037/xge0000505

Chater, N., & Oaksford, M. (2008). The probabilistic mind: Prospects for Bayesian cognitive science. Oxford University Press.

Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. The Behavioral and Brain Sciences, 36(3), 181–204. https://doi.org/10.1017/S0140525X12000477

Clark, A. (2016). Surfing uncertainty: Prediction, action, and the embodied mind. Oxford University Press.

Croskerry, P. (2009). Clinical cognition and diagnostic error: Applications of a dual process model of reasoning. Advances in Health Sciences Education: Theory and Practice, 14(Suppl 1), 27–35. https://doi.org/10.1007/s10459-009-9182-2

Croskerry, P. (2013). From mindless to mindful practice–cognitive bias and clinical decision making. The New England Journal of Medicine, 368(26), 2445–2448. https://doi.org/10.1056/NEJMp1303712

Custers, E. J. (2014). Thirty years of illness scripts: Theoretical origins and practical applications. Medical Teacher, 1-6. https://doi.org/10.3109/0142159X.2014.956052

Djulbegovic, B., Hozo, I., Beckstead, J., Tsalatsanis, A., & Pauker, S. G. (2012). Dual processing model of medical decision-making. BMC Medical Informatics and Decision Making, 12, 94. https://doi.org/10.1186/1472-6947-12-94

Djulbegovic, M., Beckstead, J., Elqayam, S., Reljic, T., Kumar, A., Paidas, C., & Djulbegovic, B. (2015). Thinking styles and regret in physicians. Public Library of Science One, 10(8), e0134038. https://doi.org/10.1371/journal.pone.0134038

Donner-Banzhoff, N., & Hertwig, R. (2014). Inductive foraging: Improving the diagnostic yield of primary care consultations. European Journal of General Practice, 20(1), 69–73. https://doi.org/10.3109/13814788.2013.805197

Donner-Banzhoff, N., Seidel, J., Sikeler, A. M., Bosner, S., Vogelmeier, M., Westram, A., & Gigerenzer, G. (2017). The phenomenology of the diagnostic process: A primary care-based survey. Medical Decision Making, 37(1), 27-34. https://doi.org/10.1177/0272989X16653401

Durning, S. J., Artino, A. R., Jr., Pangaro, L. N., van der Vleuten, C., & Schuwirth, L. (2010). Perspective: redefining context in the clinical encounter: Implications for research and training in medical education. Academic Medicine: Journal of the Association of American Medical Colleges, 85(5), 894–901. https://doi.org/10.1097/ACM.0b013e3181d7427c

Evans, J. S. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59, 255–278. https://doi.org/10.1146/annurev.psych.59.103006.093629

Feldman, H., & Friston, K. J. (2010). Attention, uncertainty, and free-energy. Frontiers in Human Neuroscience, 4, 215. https://doi.org/10.3389/fnhum.2010.00215

Fleming, S. M., & Frith, C. D. (2014). The cognitive neuroscience of metacognition. Springer.

Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews. Neuroscience, 11(2), 127–138. https://doi.org/10.1038/nrn2787

Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., & Pezzulo, G. (2017). Active inference: A process theory. Neural Computation, 29(1), 1–49. https://doi.org/10.1162/NECO_a_00912

Friston, K., & Kiebel, S. (2009). Predictive coding under the free-energy principle. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 364(1521), 1211–1221. https://doi.org/10.1098/rstb.2008.0300

Gilbert, D. T., & Wilson, T. D. (2007). Prospection: Experiencing the future. Science, 317(5843), 1351-1354. https://doi.org/10.1126/science.1144161

Gupta, A., Snyder, A., Kachalia, A., Flanders, S., Saint, S., & Chopra, V. (2017). Malpractice claims related to diagnostic errors in the hospital. BMJ Quality and Safety, 27(1), 53-60. https://doi.org/10.1136/bmjqs-2017-006774

Hohwy, J. (2013). The predictive mind. Oxford University Press.

Kahneman, D. (2013). Thinking, fast and slow (1st pbk. ed.).  Farrar, Straus and Giroux.

Kahneman, D., Rosenfield, A. M., Gandhi, L., & Blaser, T. (2016). Noise: How to overcome the high, hidden cost of inconsistent decision making. Harvard Business Review, 94(10), 38-46.

Kersten, D., Mamassian, P., & Yuille, A. (2004). Object perception as bayesian inference. Annual Review of Psychology, 55, 271–304. https://doi.org/10.1146/annurev.psych.55.090902.142005

Kleckner, I. R., Zhang, J., Touroutoglou, A., Chanes, L., Xia, C., Simmons, W. K., & Feldman Barrett, L. (2017). Evidence for a large-scale brain system supporting allostasis and interoception in humans. Nature Human Behaviour, 1, 0069. https://doi.org/10.1038/s41562-017-0069

Kolossa, A., Kopp, B., & Fingscheidt, T. (2015). A computational analysis of the neural bases of Bayesian inference. Neuroimage, 106, 222-237.  https://doi.org/10.1016/j.neuroimage.2014.11.007

Krupat, E., Wormwood, J., Schwartzstein, R. M., & Richards, J. B. (2017). Avoiding premature closure and reaching diagnostic accuracy: Some key predictive factors. Medical Education, 51(11), 1127-1137. https://doi.org/10.1111/medu.13382

Kwisthout, J., Bekkering, H., & van Rooij, I. (2017a). To be precise, the details don’t matter: On predictive processing, precision, and level of detail of predictions. Brain and Cognition, 112, 84–91. https://doi.org/10.1016/j.bandc.2016.02.008

Kwisthout, J., Phillips, W. A., Seth, A. K., van Rooij, I., & Clark, A. (2017b). Editorial to the special issue on perspectives on human probabilistic inference and the ‘Bayesian brain’. Brain and Cognition, 112, 1-2. https://doi.org/10.1016/j.bandc.2016.12.002

Lim, T. K., & Teoh, C. M. (2018). Exploiting predictive brains for better diagnosis. Diagnosis (Berl), 5(3), eA40. Retrieved from https://www.degruyter.com/view/journals/dx/5/3/article-peA1.xml

Lupyan, G. (2017). Changing what you see by changing what you know: The role of attention. Frontiers in Psychology, 8, 553. https://doi.org/10.3389/fpsyg.2017.00553

Melo, M., Gusso, G. D. F., Levites, M., Amaro, E., Jr., Massad, E., Lotufo, P. A., & Friston, K. J. (2017). How doctors diagnose diseases and prescribe treatments: An fMRI study of diagnostic salience. Scientific Reports, 7(1), 1304. http://observatorio.fm.usp.br/handle/OPI/19951

Monteiro, S., Sherbino, J., Sibbald, M., & Norman, G. (2020). Critical thinking, biases and dual processing: The enduring myth of generalisable skills. Medical Education, 54(1), 66-73. https://doi.org/10.1111/medu.13872

Montgomery, K. (2006). How doctors think: Clinical judgement and the practice of medicine. Oxford University Press.

Nasser, H. M., Calu, D. J., Schoenbaum, G., & Sharpe, M. J. (2017). The dopamine prediction error: Contributions to associative models of reward learning. Frontiers in Psychology, 8, 244. https://doi.org/10.3389/fpsyg.2017.00244

Norman, G., Sherbino, J., Dore, K., Wood, T., Young, M., Gaissmaier, W., & Monteiro, S. (2014). The etiology of diagnostic errors: A controlled trial of system 1 versus system 2 reasoning. Academic Medicine: Journal of the Association of American Medical Colleges, 89(2), 277–284. https://doi.org/10.1097/ACM.0000000000000105

Norman, G. R., Monteiro, S. D., Sherbino, J., Ilgen, J. S., Schmidt, H. G., & Mamede, S. (2017). The causes of errors in clinical reasoning: Cognitive biases, knowledge deficits, and dual process thinking. Academic Medicine: Journal of the Association of American Medical Colleges, 92(1), 23–30. https://doi.org/10.1097/ACM.0000000000001421

O’Sullivan, E. D., & Schofield, S. J. (2019). A cognitive forcing tool to mitigate cognitive bias – A randomised control trial. BMC Medical Education, 19(1), 12. https://doi.org/10.1186/s12909-018-1444-3

Picard, F., & Friston, K. (2014). Predictions, perception, and a sense of self. Neurology, 83(12), 1112-1118. https://doi.org/10.1212/WNL.0000000000000798

Reilly, J. B., Ogdie, A. R., Von Feldt, J. M., & Myers, J. S. (2013). Teaching about how doctors think: A longitudinal curriculum in cognitive bias and diagnostic error for residents. BMJ Quality & Safety, 22(12), 1044–1050. https://doi.org/10.1136/bmjqs-2013-001987

Rencic, J., Trowbridge, R. L., Jr., Fagan, M., Szauter, K., & Durning, S. (2017). Clinical reasoning education at US medical schools: Results from a national survey of internal medicine clerkship directors. Journal of General Internal Medicine, 32(11), 1242–1246. https://doi.org/10.1007/s11606-017-4159-y

Restrepo, D., Armstrong, K. A., & Metlay, J. P. (2020). Annals Clinical Decision Making: Avoiding Cognitive Errors in Clinical Decision Making. Annals of Internal Medicine, 172(11), 747–751. https://doi.org/10.7326/M19-3692

Reyes Nieva, H., V. M., Wright, A., Singh, H., Ruan, E., & Schiff, G. (2017). Diagnostic pitfalls: A new approach to understand and prevent diagnostic error. Diagnosis, 4, eA1. Retrieved from https://www.degruyter.com/view/journals/dx/5/4/article-peA59.xml

Sandved-Smith, L., Hesp, C., Lutz, A., Mattout, J., Friston, K., & Ramstead, M. (2020, June 10). Towards a formal neurophenomenology of metacognition: Modelling meta-awareness, mental action, and attentional control with deep active inference. https://doi.org/10.31234/osf.io/5jh3c

Schuwirth, L. (2017). When I say … dual-processing theory. Medical Education, 51(9), 888–889. https://doi.org/10.1111/medu.13249

Seligman, M. E. P. (2016). Homo prospectus. Oxford University Press.

Seth, A. K., Suzuki, K., & Critchley, H. D. (2011). An interoceptive predictive coding model of conscious presence. Frontiers in Psychology, 2, 395. https://doi.org/10.3389/fpsyg.2011.00395

Shannon, C. E., Sloane, N. J. A., Wyner, A. D., & IEEE Information Theory Society. (1993). Claude Elwood Shannon : Collected Papers. IEEE Press.

Sherbino, J., Kulasegaram, K., Howey, E., & Norman, G. (2014). Ineffectiveness of cognitive forcing strategies to reduce biases in diagnostic reasoning: A controlled trial. Canadian Journal of Emergency Medicine, 16(1), 34–40. https://doi.org/10.2310/8000.2013.130860

Shipp, S. (2016). Neural Elements for Predictive Coding. Frontiers in Psychology, 7, 1792. https://doi.org/10.3389/fpsyg.2016.01792

Sibbald, M., Sherbino, J., Ilgen, J. S., Zwaan, L., Blissett, S., Monteiro, S., & Norman, G. (2019). Debiasing versus knowledge retrieval checklists to reduce diagnostic error in ECG interpretation. Advances in Health Sciences Education: Theory and Practice, 24(3), 427–440. https://doi.org/10.1007/s10459-019-09875-8

Simpkin, A. L., & Schwartzstein, R. M. (2016). Tolerating uncertainty – The next medical revolution? The New England Journal of Medicine, 375(18), 1713–1715. https://doi.org/10.1056/NEJMp1606402

Simpkin, A. L., Vyas, J. M., & Armstrong, K. A. (2017). Diagnostic Reasoning: An endangered competency in internal medicine training. Annals of Internal Medicine, 167(7), 507–508. https://doi.org/10.7326/M17-0163

Singh, H., & Graber, M. L. (2015). Improving diagnosis in health care – The next imperative for patient safety. The New England Journal of Medicine, 373(26), 2493–2495. https://doi.org/10.1056/NEJMp1512241

Singh, H., Meyer, A. N., & Thomas, E. J. (2014). The frequency of diagnostic errors in outpatient care: Estimations from three large observational studies involving US adult populations. BMJ Quality & Safety, 23(9), 727–731. https://doi.org/10.1136/bmjqs-2013-002627

Skinner, T. R., Scott, I. A., & Martin, J. H. (2016). Diagnostic errors in older patients: A systematic review of incidence and potential causes in seven prevalent diseases. International Journal of General Medicine, 9, 137–146. https://doi.org/10.2147/IJGM.S96741

Spratling, M. W. (2017a). A hierarchical predictive coding model of object recognition in natural images. Cognitive Computation, 9(2), 151–167. https://doi.org/10.1007/s12559-016-9445-1

Spratling, M. W. (2017b). A review of predictive coding algorithms. Brain and Cognition, 112, 92–97. https://doi.org/10.1016/j.bandc.2015.11.003

Steinberg, E. E., Keiflin, R., Boivin, J. R., Witten, I. B., Deisseroth, K., & Janak, P. H. (2013). A causal link between prediction errors, dopamine neurons and learning. Nature Neuroscience, 16(7), 966–973. https://doi.org/10.1038/nn.3413

Sterling, P. (2012). Allostasis: A model of predictive regulation. Physiology & Behavior, 106(1), 5–15. https://doi.org/10.1016/j.physbeh.2011.06.004

Stolper, C. F., & van de Wiel, M. W. (2014). EBM and gut feelings. Medical Teacher, 36(1), 87-88. https://doi.org/10.3109/0142159X.2013.835390

Stolper, C. F., Van de Wiel, M. W., Hendriks, R. H., Van Royen, P., Van Bokhoven, M. A., Van der Weijden, T., & Dinant, G. J. (2014). How do gut feelings feature in tutorial dialogues on diagnostic reasoning in GP traineeship? Advances in Health Sciences Education: Theory and Practice, 20(2), 499–513. https://doi.org/10.1007/s10459-014-9543-3

Teufel, C., & Fletcher, P. C. (2020). Forms of prediction in the nervous system. Nature Reviews Neuroscience, 21(4), 231–242. https://doi.org/10.1038/s41583-020-0275-5

Ting, C. C., Yu, C. C., Maloney, L. T., & Wu, S. W. (2015). Neural mechanisms for integrating prior knowledge and likelihood in value-based probabilistic inference. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 35(4), 1792–1805. https://doi.org/10.1523/JNEUROSCI.3161-14.2015

Tschantz, A., Seth, A. K., & Buckley, C. L. (2020). Learning action-oriented models through active inference. PLoS Computational Biology, 16(4), e1007805. https://doi.org/10.1371/journal.pcbi.1007805

Van Merrienboer, J. J., & Sweller, J. (2010). Cognitive load theory in health professional education: Design principles and strategies. Medical Education, 44(1), 85-93. https://doi.org/10.1111/j.1365-2923.2009.03498.x

Walsh, J. N., Knight, M., & Lee, A. J. (2017). Diagnostic errors: Impact of an educational intervention on pediatric primary care. Journal of Pediatric Health Care : Official Publication of National Association of Pediatric Nurse Associates & Practitioners, 32(1), 53–62. https://doi.org/10.1016/j.pedhc.2017.07.004

Walsh, K. S., McGovern, D. P., Clark, A., & O’Connell, R. G. (2020). Evaluating the neurophysiological evidence for predictive processing as a model of perception. Annals of the New York Academy of Sciences, 1464(1), 242–268. https://doi.org/10.1111/nyas.14321

Young, J. Q., Van Merrienboer, J., Durning, S., & Ten Cate, O. (2014). Cognitive load theory: Implications for medical education: AMEE Guide No. 86. Medical Teacher, 36(5), 371-384. https://doi.org/10.3109/0142159X.2014.889290

Zwaan, L., Monteiro, S., Sherbino, J., Ilgen, J., Howey, B., & Norman, G. (2016). Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups. BMJ Quality & Safety, 26(2), 104–110. https://doi.org/10.1136/bmjqs-2015-005014

Zwaan, L., Schiff, G. D., & Singh, H. (2013). Advancing the research agenda for diagnostic error reduction. BMJ Quality & Safety, 22 Suppl 2, ii52-ii57. https://doi.org/10.1136/bmjqs-2012-001624

*Lim Tow Keang
Department of Medicine 
National University Hospital
5 Lower Kent Ridge Rd
Singapore 119074
Email: mdclimtk@nus.edu.sg

Published online: 5 January, TAPS 2021, 6(1), 1-2
https://doi.org/10.29060/TAPS.2021-6-1/EV6N1

In our January 2020 Editorial, we drew the attention of our readers to “Grit in Healthcare Education and Practice”. In particular, we focused on developing the “Grit” of students and trainees; medical students who are well-equipped with the ‘Power of Grit’ will display a “passion for patient well-being and perseverance in the pursuit of that goal [which] become social norms at the individual, team and institutional levels” (Lee & Duckworth, 2018). Never could we have imagined then that such an attribute would become so contextually relevant so soon, as exemplified by the passion and perseverance of healthcare practitioners caring for patients in response to the serious disruptions to individual health (including fatalities) caused by the Covid-19 pandemic.

We are pleased to have this opportunity to share with our readers, once again, the unexpected course of events associated with the Covid-19 pandemic, which brought out the best in many on a global scale. In particular, as the education and training of medical students, residents and allied health students were disrupted by the pandemic, educators, supported by the administration of medical and health professions institutions, designed curricular innovations that incorporated culturally sensitive interventions to develop individual resilience and well-being, in order to support the whole community of learners: students, faculty, the administrators involved and, of course, patients.

The current Covid-19 pandemic served as a catalyst that provided opportunities for educators to rapidly and creatively design safe, effective, novel and innovative solutions to ensure continuity in the education and training of medical and closely allied health professional students (Samarasekera, Goh, & Lau, 2020). Designing such educational strategies as a rapid response to the pandemic required a break from decades of tradition, and both institutional and programme leadership were needed to facilitate the design of creative, yet safe and effective, strategies for the continuation of student learning, a step expected to mitigate the pandemic’s disruptive effects. Educators accordingly leveraged available technology as the preferred mode for delivering instruction, and the learning environment was transformed from one that was predominantly classroom-based to one that is mainly online. It is gratifying that both junior and senior faculty have embraced the use of technology, although some degree of ‘resistance’ to its use in education had been experienced earlier. Perhaps a caveat should be added: student learning through technology over a long period may result in a lack of social interaction among students and, consequently, a lack of preparation for the teamwork that is so critical to healthcare practice in the 21st century.

The Covid-19 pandemic has also exposed wider societal gaps which were seldom evident previously but need to be addressed. It is useful to note that The Lancet Global Independent Commission had already stated in its Report (Frenk et al., 2010) that “Indeed, the use of IT might be the most important driver in transformative learning ….” and that “Advanced information technology is important not only for more efficient education of health professionals; its existence also demands a change in competencies.” The Report also drew attention to the fact that “IT-empowered learning is already a reality for the younger generation in most countries, ….” However, mindful of financial constraints, the Report cautioned that “Not all students, of course, have full access to IT resources” and suggested “A global policy to overcome such unequal distribution of digital resources [referred to as the digital divide] ….” Such inequalities have also been addressed recently by Blundell, Costa Dias, Joyce, and Xu (2020).

A major concern of medical and allied health professional institutions is the well-being of students and staff who ensure the continuation of student education and training disrupted by the Covid-19 pandemic. Many institutions provided strong support to students and staff in such challenging times. Students received financial support and, if required, counselling as well in order to enhance their psycho-social well-being. Students infected by the virus or who were quarantined received special care. Many institutional policies were swiftly revised to match the rapidly changing environment: clear lines of communication were established for staff and students (Ashokka, Ong, Tay, Loh, Gee, & Samarasekera, 2020).

A more resilient community of staff and students has remarkably emerged from the trials and tribulations experienced: students have adapted rapidly to blended and virtual learning environments. Students have also organised their learning engagements around virtual student communities, as most institutions have minimised their face-to-face classroom activities. Faculty responded by designing a more adaptive curriculum that is flexible to the needs of the learner. Pre-clinical and clinical learning activities were further refined and streamlined, with the removal of some content and examinations, a process unthinkable prior to the disruptions of the Covid-19 pandemic, when curricula were strictly controlled by the institution and/or professional and statutory bodies. Within a short period of time, newer course materials and assessment instruments, all aligned to support online, blended or hybrid learning, were developed. The most significant contribution from staff to disrupted student learning, however, was their proactive support in meeting the needs of learners during the crisis triggered by the pandemic. Such efforts were greatly appreciated, and stronger bonds and a closer community spirit between students and staff were soon established.

In conclusion, it can be said that medical and allied health professional educators have learnt much from the disruptive effects of the Covid-19 pandemic on student learning. Instead of wallowing in self-pity and sadness, or simply waiting for the crisis to pass, a determined and focused faculty can mitigate this formidable challenge by responding rapidly to changes in the learning environment, using appropriate technology to deliver instruction to students and thereby ensure the continuation of safe, timely and quality education.

Providing constant support to students by the staff and the institution will help students develop relevant coping strategies that foster their resilience and well-being. Ultimately, a community of learners and practitioners will emerge with the ability to provide and maintain quality healthcare during challenging times like the one we are now experiencing.

 

Dujeepa D. Samarasekera & Matthew C. E. Gwee
Centre for Medical Education (CenMED), NUS Yong Loo Lin School of Medicine,
National University Health System, Singapore

 

 

Ashokka, B., Ong, S. Y., Tay, K. H., Loh, N., Gee, C. F., & Samarasekera, D. D. (2020). Coordinated responses of academic medical centres to pandemics: Sustaining medical education during COVID-19. Medical Teacher, 42(7), 762-771.

Blundell, R., Costa Dias, M., Joyce, R., & Xu, X. (2020). COVID‐19 and Inequalities. Fiscal Studies, 41(2), 291-319.

Frenk, J., Chen, L., Bhutta, Z. A., Cohen, J., Crisp, N., Evans, T., … & Kistnasamy, B. (2010). Health professionals for a new century: Transforming education to strengthen health systems in an interdependent world. The Lancet, 376(9756), 1923-1958.

Lee, T. H., & Duckworth, A. L. (2018). Organizational grit. Harvard Business Review, 96(5), 98-105.

Samarasekera, D. D., Goh, D. L. M., & Lau, T. C. (2020). Medical school approach to manage the current COVID-19 crisis. Academic Medicine, 95(8), 1126-1127.

Submitted: 4 May 2020
Accepted: 3 August 2020
Published online: 5 January, TAPS 2021, 6(1), 3-29
https://doi.org/10.29060/TAPS.2021-6-1/RA2351

Elisha Wan Ying Chia1,2, Huixin Huang1,2, Sherill Goh1,2, Marlyn Tracy Peries1,2, Charlotte Cheuk Yiu Lee2,3, Lorraine Hui En Tan1,2, Michelle Shi Qing Khoo1,2, Kuang Teck Tay1,2, Yun Ting Ong1,2, Wei Qiang Lim1,2, Xiu Hui Tan1,2, Yao Hao Teo1,2, Cheryl Shumin Kow1,2, Annelissa Mien Chew Chin4, Min Chiam5, Jamie Xuelian Zhou2,6,7 & Lalit Kumar Radha Krishna1,2,5,7-10

1Yong Loo Lin School of Medicine, National University of Singapore, Singapore; 2Division of Supportive and Palliative Care, National Cancer Centre Singapore, Singapore; 3Alice Lee Centre for Nursing Studies, National University of Singapore, Singapore; 4Medical Library, National University of Singapore Libraries, National University of Singapore, Singapore; 5Division of Cancer Education, National Cancer Centre Singapore, Singapore; 6Lien Centre of Palliative Care, Duke-NUS Graduate Medical School, Singapore; 7Duke-NUS Graduate Medical School, Singapore; 8Centre for Biomedical Ethics, National University of Singapore, Singapore; 9Palliative Care Institute Liverpool, Academic Palliative & End of Life Care Centre, University of Liverpool; 10PalC, The Palliative Care Centre for Excellence in Research and Education, Singapore

Abstract

Introduction: Whilst the importance of effective communication in facilitating good clinical decision-making and ensuring effective patient- and family-centred outcomes in Intensive Care Units (ICUs) has been underscored amidst the global COVID-19 pandemic, the training and assessment of communication skills for healthcare professionals (HCPs) in ICUs remain unstructured.

Methods: To enhance transparency and reproducibility, a Systematic Scoping Review (SSR) guided by Krishna’s Systematic Evidence-Based Approach (SEBA) was employed to scrutinise what is known about teaching and evaluating communication training programmes for HCPs in the ICU setting. SEBA involves a structured search strategy across eight bibliographic databases; a team of researchers who tabulate and summarise the included articles; two further teams who independently carry out content and thematic analysis of the included articles; comparison of these independent findings; and the construction of a framework for the discussion, overseen by an independent expert team.

Results: 9,532 abstracts were identified, 239 articles were reviewed, and 63 articles were included and analysed. Four similar themes and categories were identified: strategies employed to teach communication, factors affecting communication training, strategies employed to evaluate communication, and outcomes of communication training.

Conclusion: This SEBA-guided SSR suggests that ICU communication training must involve a structured, multimodal approach. This must be accompanied by robust methods of assessment and personalised, timely feedback and support for trainees. Such an approach will equip HCPs with greater confidence and prepare them for a variety of settings, including that of the evolving COVID-19 pandemic.

Keywords: Communication, Intensive Care Unit, Assessment, Skills Training, Evaluation, COVID-19, Medical Education

Practice Highlights

  • The global COVID-19 pandemic has underscored the importance of effective communications in the Intensive Care Unit (ICU).
  • ICU communications training should adopt a longitudinal, structured and multimodal approach.
  • Robust stepwise evaluation of learner outcomes via Kirkpatrick’s Hierarchy is needed.
  • A supportive host organisation and a conducive learning environment are key to successful curricula.

I. INTRODUCTION

    The COVID-19 pandemic has placed immense strain on intensive care units (ICUs), with healthcare teams and resources stretched to meet the suddenly increased demands of critically ill patients. To further complicate the situation, ICU teams are called upon not only to communicate closely with colleagues in a bid to support them, but also to counsel families confronting acute distress and uneasy waits, separated from their loved ones by visiting restrictions imposed to limit the spread of the pandemic (Ministry of Health, 2020; World Health Organization, 2020). From breaking bad news (Blackhall, Erickson, Brashers, Owen, & Thomas, 2014; J. Yuen & Carrington Reid, 2011), to conveying the need for sedation and intubation (Carrillo Izquierdo, Diaz Agea, Jimenez Rodriguez, Leal Costa, & Sanchez Exposito, 2018) and providing progress reports on critically ill patients (Curtis et al., 2005; Curtis, White, Curtis, & White, 2008; Yang et al., 2020), communication skills amongst ICU healthcare professionals (HCPs) are pivotal in reassuring anxious, emotional and stressed patients and families (Ahrens, Yancey, & Kollef, 2003; Foa et al., 2016; Kirchhoff et al., 2002). Good communication in the ICU has also been shown to improve patient-physician relationships (K. G. Anderson & Milic, 2017), patient and family-centred outcomes, quality of care, and patient and family satisfaction (Bloomer, Endacott, Ranse, & Coombs, 2017; Cao et al., 2018; Currey, Oldland, Considine, Glanville, & Story, 2015). Effective communication between HCPs in the ICU also enhances clinical decision-making (Kleinpell, 2014), reduces medication and treatment errors (Clark, Squire, Heyme, Mickle, & Petrie, 2009; Happ et al., 2014; Sandahl et al., 2013), decreases physician burnout (Rachwal et al., 2018), and improves staff retention and satisfaction (Hope et al., 2015).

    With evidence suggesting that poor communication skills (Downar, Knickle, Granton, & Hawryluck, 2012; Foa et al., 2016) and training (Smith, O’Sullivan, Lo, & Chen, 2013) are likely to increase patients’ (Dithole, Sibanda, Moleki, & Thupayagale ‐ Tshweneagae, 2016) and families’ (Curtis et al., 2008) stress, adversely affect care and recovery (Dithole et al., 2016), and increase healthcare costs (Kalocsai et al., 2018), some authors have suggested that effective communication skills are at least as important (Adams, Mannix, & Harrington, 2017; Cicekci et al., 2017; Van Mol, Boeter, Verharen, & Nijkamp, 2014) to good patient care as clinical acumen (Curtis et al., 2001a). Yet despite evidence of the importance of communication skills in ICU, communication skills training remains inconsistent, variable and not evidence-based in most ICU settings (Adams et al., 2017; Berlacher, Arnold, Reitschuler-Cross, Teuteberg, & Teuteberg, 2017; Bloomer et al., 2017; D. A. Boyle et al., 2017; Miller et al., 2018; Sanchez Exposito et al., 2018).

    With this in mind, a systematic scoping review (SSR) is proposed to map current approaches to communications skills training in ICUs (Munn et al., 2018) and potentially guide design of a communications training programme. An SSR allows for systematic extraction and synthesis of actionable and applicable information whilst summarising available literature across a wide range of pedagogies and practice settings employed to understand what is known about teaching and evaluating communication training programmes for HCPs in the ICU setting (Munn et al., 2018).

II. METHODS

    To overcome concerns about the transparency and reproducibility of SSRs, a novel approach called Krishna’s Systematic Evidence-Based Approach (henceforth SEBA) is proposed (Kow et al., 2020; Krishna et al., 2020; Ngiam et al., 2020). This SEBA-guided SSR (henceforth SSR in SEBA) adopts a constructivist perspective to map this complex topic from multiple angles (Popay et al., 2006), whilst a relativist lens helps account for variability in communication skills training (Crotty, 1998; Ford, Downey, Engelberg, Back, & Curtis, 2012; Pring, 2000; Schick-Makaroff, MacDonald, Plummer, Burgess, & Neander, 2016).

    To provide a balanced review, the research team was supported by the medical librarians from the National University of Singapore’s (NUS) Yong Loo Lin School of Medicine (YLLSoM), the National Cancer Centre Singapore (NCCS) and local educational experts and clinicians at the NCCS, the Palliative Care Institute Liverpool, YLLSoM and Duke-NUS Medical School (henceforth the expert team). The research and expert teams adopted an interpretivist approach as they proceeded through the five stages of SEBA (Figure 1).

    Figure 1. The SEBA Process

    A. Stage 1: Systematic Approach

    1) Determining the title and research question: The research and expert teams agreed upon the goals, population, context and concept to be evaluated in this SSR. The two teams then agreed that the primary research question should be “What is known about teaching and evaluating communication training programmes for HCPs in the ICU setting?” The secondary research questions were “How are communication skills taught and assessed in the ICU setting?” and “How effective have such interventions been as described in the published literature?”

    2) Inclusion criteria: A Population, Intervention, Comparison, Outcome, Study Design (PICOS) format was adopted to guide the research process (Peters, Godfrey, Khalil, et al., 2015a; Peters, Godfrey, McInerney, et al., 2015b) (Table 1).

    Population

    Inclusion criteria:

    ·       Undergraduate and postgraduate healthcare providers (e.g. doctors, medical students, nurses, social workers) within the ICU setting

    ·       ICU settings including medical, surgical, cardiology and neurology ICUs

    ·       Communication between healthcare providers and patients in the ICU, or between healthcare providers in the ICU and patients’ families

    ·       Communication between or within healthcare providers’ teams in the ICU

    Exclusion criteria:

    ·       Articles focusing solely on the neonatal/paediatric ICU setting

    ·       Articles focusing solely on speech therapy/physical therapy/occupational therapy

    ·       Non-ICU settings (e.g. general wards, emergency department)

    ·       Non-medical professions (e.g. Science, Veterinary, Dentistry)

    ·       Communication carried out over technological platforms

    Intervention

    Inclusion criteria:

    ·       Need for/importance of interventions to teach communication in the ICU setting

    ·       Facilitators and barriers to teaching communication in the ICU setting

    ·       Recommendations, interventions, methods (e.g. tools, simulations, videos), curriculum content and assessments used for teaching communication in the ICU setting

    Comparison

    Inclusion criteria:

    ·       Comparisons of the various interventions, methods, curricula and evaluation methods used to teach or assess communication in the ICU setting and their impact upon patients, healthcare providers, healthcare, and society

    Outcome

    Inclusion criteria:

    ·       Impact of interventions on patients, healthcare providers, healthcare, and society

    ·       Evaluation methods used to assess the interventions, methods, or curricula used to teach communication

    Study design

    Inclusion criteria:

    ·       Articles in English or translated to English

    ·       All study designs, including:

    o    Mixed methods research, meta-analyses, systematic reviews, randomised controlled trials, cohort studies, case-control studies, cross-sectional studies, and descriptive papers

    o    Case reports and series, ideas, editorials, and perspectives

    ·       Publication dates: 1st January 2000 – 31st December 2019

    ·       Databases: PubMed, ERIC, JSTOR, Embase, CINAHL, Scopus, PsycINFO, Google Scholar

    Table 1. PICOS

    Nine members of the research team carried out independent searches for articles published between 1st January 2000 and 31st December 2019 in eight bibliographic databases (PubMed, ERIC, JSTOR, Embase, CINAHL, Scopus, PsycINFO and Google Scholar). The searches were carried out between 27th January 2020 and 14th February 2020. The PubMed search strategy can be found in Supplementary Material A. An independent hand search was done to identify key articles.

    3) Extracting and charting: Nine members of the research team independently reviewed the titles and abstracts identified and created individual lists of titles to be included, which were discussed online. Consensus on the final list of articles to be included was achieved using Sambunjak, Straus, and Marusic (2010)’s “negotiated consensual validation” approach, through collaborative discussion and negotiation of points of disagreement at online meetings.

    B. Stage 2: Split Approach

    Working in three independent groups, the reviewers analysed the included articles using the ‘split approach’ (Ng et al., 2020). In one group, four researchers independently reviewed and summarised all the included articles in keeping with the recommendations set out in Wong, Greenhalgh, Westhorp, Buckingham, and Pawson (2013)’s “RAMESES publication standards: meta-narrative reviews” and Popay et al. (2006)’s “Guidance on the conduct of narrative synthesis in systematic reviews”. The four research team members then discussed their individual findings at online meetings and employed ‘negotiated consensual validation’ to achieve consensus on the tabulated summaries (Sambunjak et al., 2010). The tabulated summaries served to highlight key points from the included articles.

    The four members of the research team also employed the Medical Education Research Study Quality Instrument (MERSQI) (Reed et al., 2008) and the Consolidated Criteria for Reporting Qualitative Studies (COREQ) (Tong, Sainsbury, & Craig, 2007) to evaluate the quality of the quantitative and qualitative studies included in this review.

    Concurrently, the second group of five researchers analysed all the included articles using Braun and Clarke (2006)’s approach to thematic analysis, then discussed their individual findings at online meetings and employed ‘negotiated consensual validation’ to achieve consensus on the final themes (Sambunjak et al., 2010). The third group of four researchers employed Hsieh and Shannon (2005)’s approach to directed content analysis to independently analyse all the included articles, discussed their independent findings online and employed ‘negotiated consensual validation’ to achieve consensus on the final themes (Sambunjak et al., 2010). This split approach, consisting of the tabulated summaries and concurrent thematic analysis and content analysis, enhances the reliability of the analyses. The tabulated summaries also help ensure that important themes are not lost.

    1) Thematic analysis: Phase 1 of Braun and Clarke (2006)’s approach saw the team ‘actively’ reading the included articles to find meaning and patterns in the data. In phase 2, ‘codes’ were constructed from the ‘surface’ meaning (Braun & Clarke, 2006; Sawatsky, Parekh, Muula, Mbata, & Bui, 2016; Voloch, Judd, & Sakamoto, 2007) and collated into a code book to code and analyse the rest of the articles using an iterative step-by-step process. As new codes emerged, these were associated with previous codes and concepts (Price & Schofield, 2015). In phase 3, the categories were organised into themes that best depict the data. In phase 4, the themes were refined to best represent the whole data set and discussed. In phase 5, the research team discussed the results of their independent analysis online and at reviewer meetings. “Negotiated consensual validation” was used to determine a final list of themes (Sambunjak et al., 2010).

    2) Directed content analysis: Hsieh and Shannon (2005)’s approach to directed content analysis was employed in three stages.

    Using deductive category application (Elo & Kyngäs, 2008; Wagner-Menghin, de Bruin, & van Merriënboer, 2016), the first stage (Mayring, 2004; Wagner-Menghin et al., 2016) saw codes drawn from the article “Enhancing collaborative communication of nurse and physician leadership in two intensive care units” (D. K. Boyle & Kochinda, 2004). Drawing upon Mayring (2004)’s account, each code was defined in the code book, which contained “explicit examples, definitions and rules” drawn from the data. The code book served to guide the subsequent coding process.

    Stage 2 saw the four reviewers using the ‘code book’ to independently extract and code the relevant data from the included articles. Any relevant data not captured by these codes were assigned a new code that was also described in the code book. In keeping with deductive category application (Wagner-Menghin et al., 2016), coding categories and their definitions were revised. The final codes were compared and discussed with the final author to enhance the reliability of the process (Wagner-Menghin et al., 2016). The final author checked the primary data sources to ensure that the codes made sense and were consistently employed. The reviewers and the final author used “negotiated consensual validation” to resolve any differences in the coding (Sambunjak et al., 2010). The final categories were selected (Neal, Neal, Lawlor, Mills, & McAlindon, 2018) based on whether they appeared in more than 70% of the articles reviewed (Curtis et al., 2001b; Humble, 2009).
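    The 70% rule described above is a simple frequency threshold over the coded articles. As a minimal illustrative sketch only (not the authors’ actual tooling; the function name and data structures are hypothetical), the selection step can be expressed as:

    ```python
    # Illustrative sketch, assuming each category is recorded with the set of
    # article IDs in which it was coded. A category is retained only if it
    # appears in more than 70% of all included articles.
    from typing import Dict, List, Set

    def select_final_categories(category_hits: Dict[str, Set[str]],
                                all_articles: Set[str],
                                threshold: float = 0.70) -> List[str]:
        """Return categories coded in more than `threshold` of the articles."""
        n = len(all_articles)
        return sorted(cat for cat, arts in category_hits.items()
                      if len(arts & all_articles) / n > threshold)

    # Hypothetical worked example with 10 included articles.
    articles = {f"A{i}" for i in range(1, 11)}
    hits = {
        "strategies to teach communication": {f"A{i}" for i in range(1, 9)},   # 8/10
        "factors affecting training":        {f"A{i}" for i in range(1, 11)},  # 10/10
        "site-specific logistics":           {"A1", "A2"},                     # 2/10
    }
    print(select_final_categories(hits, articles))
    # → ['factors affecting training', 'strategies to teach communication']
    ```

    Categories below the threshold (here, the hypothetical “site-specific logistics”) are dropped, mirroring how the review retained only categories appearing in more than 70% of the articles reviewed.
    
    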

    The narrative produced was guided by the Best Evidence Medical Education (BEME) Collaboration guide (Haig & Dozier, 2003) and the STORIES (Structured approach to the Reporting In healthcare education of Evidence Synthesis) statement (Gordon & Gibbs, 2014).

    III. RESULTS

    A total of 9532 abstracts were identified from the eight databases, 239 articles were reviewed, and 63 articles were included, as shown in Figure 2 (Moher, Liberati, Tetzlaff, & Altman, 2009).

    Figure 2. PRISMA Flowchart

    3) Comparisons between the summaries of the included articles, thematic analysis and directed content analysis: In keeping with the SEBA approach, the findings of each arm of the split approach were discussed amongst the research and expert teams. The themes identified using Braun and Clarke (2006)’s approach to thematic analysis were how to teach and evaluate communication training in the ICU, and the factors affecting training.

    The categories identified using Hsieh and Shannon (2005)’s approach to directed content analysis were 1) strategies employed to teach communication, 2) factors affecting communication training, 3) strategies employed to evaluate communication, and 4) outcomes of communication training. These categories reflected the major issues identified in the tabulated summaries.

    These findings were reviewed with the expert team, who agreed that, as the themes identified could be encapsulated within the categories identified, the categories and themes would be presented together.

    a) Strategies employed to teach communication in ICU: Sixty-one articles described various interventions used to teach communication in the ICU. Of these, 19 involved ICU physicians, 18 involved ICU nurses, 4 saw participation of both ICU physicians and nurses, 13 included the multidisciplinary team in the ICU, 1 was aimed at medical interns, 2 at medical students, 2 at nursing students, and 2 at both medical and nursing students. Given the overlap between the teaching strategies, topics taught, and assessment methods employed in ICU communication training for nurses, doctors, nursing and medical students and HCPs in the literature, we discuss and generalise the results across HCPs.

    In curriculum design, seven studies (D. K. Boyle & Kochinda, 2004; Hope et al., 2015; Krimshtein et al., 2011; Lorin, Rho, Wisnivesky, & Nierman, 2006; McCallister, Gustin, Wells-Di Gregorio, Way, & Mastronarde, 2015; Miller et al., 2018; Sullivan, Rock, Gadmer, Norwich, & Schwartzstein, 2016) designed a curriculum based on extensive reviews of literature on teaching communication. Brunette and Thibodeau-Jarry (2017) used Kern’s 6-step approach to curriculum development to design a structured curriculum targeted at meeting the needs identified whilst Sullivan et al. (2016) and Lorin et al. (2006) used the authors’ own experiences in tandem with existing literature to guide curriculum design. W. G. Anderson et al. (2017) designed a communication training workshop based on behaviour theories whilst McCallister et al. (2015) based their curriculum on principles of shared decision-making and patient-centred communication. Northam, Hercelinskyj, Grealish, and Mak (2015) conducted a pilot study before implementing their intervention.

    Topics included in the curriculum were categorised into “core” topics, deemed essential to the curriculum, and “advanced” topics, which may be useful to incorporate into the curriculum. Core topics were those most frequently cited in the literature or crucial across a variety of interactions in the ICU setting, such as history taking and relationship skills, as well as common ICU scenarios such as breaking bad news and communicating difficult decisions. “Advanced” topics, though important, were not mentioned as frequently and appeared to be more site-specific, such as sociocultural and ethical issues. These topics are outlined in Table 2 (full table with references found in Supplementary Material B). The methods employed are outlined in Table 3 (full table with references found in Supplementary Material C).

     

    Curriculum

    Core curriculum content

    Communication skills

    –        With families (n=25)

    –        With patients (n=5)

    –        With HCPs (n=12)

    –        General principles

    Breaking bad news

    Understanding/defining goals of care, building therapeutic relationships with families, setting goals and expectations, shared decision making

    Eliciting understanding and providing information about a patient’s clinical status

    Relationship skills

    –        Recognising and dealing with strong emotions

    –        Empathy

    Relationship skills include the “key principles” of esteem, empathy, involvement, sharing, and support

    Problem solving/conflict management/facing challenges

    Frameworks for good communication

    –        Ask-Tell-Ask

    –        “Tell Me More”

    –        “SBAR” – Situation, Background, Assessment, Recommendation: to share information obtained in discussions with patients or family members with other HCPs

    –        “3Ws” – What I see, What I’m concerned about, and What I want

    –        Four-Step Assertive Communication Tool – get attention, state the concern (e.g. “I’m concerned about…” or “I’m uncomfortable with…”), offer a solution, and get resolution by ending with a question (e.g. “Do you agree?”)

    –        “4 C’s” palliative communication model:

    a.      Convening – ensuring necessary communication occurs between the patient, family, and interprofessional team;

    b.      Checking – for understanding;

    c.      Caring – conveying empathy and responding to emotion; and

    d.      Continuing – following up with patients and families after discussions to provide support and clarify information.

    –        ‘‘Communication Strategy of the Week’’ using teaching posters

    –        PACIENTE Interview (Introduce yourself, Listen carefully, Tell you the diagnosis, Advises treatment, Exposes the prognosis, Appoints the bad news introductory phrases, Takes time to comfort empathic, Explains a plan of action involving the family)

    –        Stages of communication (open, clarify, develop, agree, close)

    –        Processes of communication (procedural suggestions, check for understanding)

    –        Explain illness in clear, simple terms

    –        Using a reference manual and pocket reference cards

    –        How HCPs should introduce themselves to patients/family members/other HCPs

    ICU decision making

    –        Survival after CPR

    –        DNR discussions

    –        Prognostication

    –        Legal and ethical issues surrounding life-sustaining treatment decisions

    –        Withdrawing therapies

    Advanced Topics

    Ethics

    –        E.g. offering organ donation

    Cultural/spirituality/religious issues

    Leadership

    Roles and responsibilities in communication with patients and families

    Discussing patient safety incidents

    Integration of 5 common behaviour theories: health belief model, theory of planned behaviour, social cognitive theory, an ecological perspective, and transtheoretical model

    Law

    Table 2. Topics taught

    Methods Employed (Number of Studies)

    ·       Didactic teaching, which may be employed in conjunction with other methods in a structured programme: 20

    ·       Simulated scenarios with family members/standardised patients: 17

    ·       Role-play: 12

    ·       Use of simulation technology such as with mannequins: 6

    ·       Group discussions, group reflections and team-based learning: 7

    ·       Case presentations, case discussions and patient care conferences: 4

    ·       Online videos: 3

    ·       Online PowerPoint slides: 3

    ·       Did not specify: 9

    Table 3. Pedagogy

    b) Factors affecting communication training: Identifying facilitators and barriers is critical to the success of communication programmes. Facilitators and barriers to training may be found in Table 4 (full table with references may be found in Supplementary Material D).

    Facilitators:

    ·       Longitudinal, structured process with horizontal and vertical integration

    ·       Safe learning environment

    ·       Clear programme objectives and programme content

    ·       Funding for training

    ·       Simulated patients

    ·       Protected time for training

    ·       Faculty experts helping to plan and review curricula and implement interventions

    ·       Stakeholders’ engagement to facilitate interprofessional collaboration, as well as debriefing and programme feedback

    ·       Reflective practice

    ·       Timely and appropriate feedback

    ·       Multidisciplinary learning

    ·       Role modelling

    ·       Peer support

    Barriers:

    ·       Lack of time

    ·       Resource constraints

    ·       Poor design and a lack of longitudinal support

    ·       Insecurity and awkwardness during simulations

    ·       Disrupted training

    ·       Programmes that were not pitched at the right level

    ·       Training that is not learner-centred

    ·       Training that lacked feedback or debrief sessions

    ·       Lack of a longitudinal aspect to training

    ·       A lack of a supportive environment in which HCPs can apply the skills learnt

    ·       Discordance between physicians’ and nurses’ communication with families

    Table 4. Facilitators and barriers to training

    c) Strategies employed to evaluate communication training: Thirty-nine articles discussed methods of evaluating communication training. The assessment methods are described in Table 5 (full table with references may be found in Supplementary Material E).

    Method

    1.     Self-assessment: Quantitative and qualitative surveys were administered to learners to assess their knowledge, experience in the programme, and perceived preparedness, comfort and confidence in communicating.

    1.1   Some programmes only used post-intervention assessments

    1.2   Others used a combination of pre- and post-intervention assessments of learners

    1.3   Some programmes adapted existing tools to conduct post-intervention surveys to evaluate learners’ experiences and skills learnt

    2.     Feedback from others: Feedback from patients, family members, peers and simulated patients was obtained through a combination of surveys and interviews that assessed their level of satisfaction with learners’ communication skills.

    3.     Observation: Direct observation of HCPs’ communication skills to ascertain the frequency, quality, success and ease of communication post-intervention. This was done through the use of modified communication tools and feedback forms.

    4.     Debriefing sessions: One study used debriefing sessions to understand the shared experiences of learners.

    Table 5. Assessment Methods

    d) Outcomes of communication training: The outcomes of communication training may be mapped onto the levels of the Adapted Kirkpatrick’s Hierarchy (Jamieson, Palermo, Hay, & Gibson, 2019; Littlewood et al., 2005; Roland, 2015), allowing the outcome measures used to be identified. The majority of programmes achieved Level 2a and Level 2b outcomes, as shown in Table 6 (full table with references may be found in Supplementary Material F). Forty articles described successes and three articles described variable outcomes of teaching communication.

    Adapted Kirkpatrick’s Hierarchy (items evaluated at each level):

    Level 1 (participation)

    –        Experience in the programme

    –        Assessment of programme’s effectiveness

    –        Trainee satisfaction

    –        Programme completion

    Level 2a (attitudes and perception)

    –        Attitudes towards/experience with communication

    –        Self-rated confidence/preparedness in communication

    –        Colleagues’ satisfaction with communication

    –        Trainees’ views on training programme (e.g. satisfaction, perceived effectiveness)

    –        Self-perceived job stress/job satisfaction

    Level 2b (knowledge and skills)

    –        Self-rated skill level using Likert scales

    –        Form asking trainees to list/indicate skills they learnt during the programme

    –        Self-rated knowledge level using Likert scales

    –        Self-evaluation of communication skills using validated tools

    –        Evaluation of trainees’ knowledge by faculty/experts

    –        Evaluation of trainees’ communication skills by faculty/experts

    Level 3 (behavioural change)

    –        Feedback from peers and facilitators on interactions with actors

    –        Records of ICU rounds

    –        Notes from colleagues documenting supportive environment and involvement in communication

    –        Frequency of usage of communication skills taught

    –        Workplace observations

    –        Evaluation of trainees’ communication skills in the clinical setting by patients and colleagues

    Level 4a (increased interprofessional collaboration)

    –        Workplace observations

    Level 4b (patient benefits)

    –        Self-perceived quality of care

    –        Patient and family satisfaction with communication

    –        Family satisfaction with communication

    Table 6. Outcome Measures mapped onto Adapted Kirkpatrick’s Hierarchy
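    The mapping in Table 6 is essentially a lookup from each reported outcome measure to the hierarchy level it evidences. As a hypothetical sketch only (the dictionary keys are shortened labels drawn from Table 6; the helper function is not part of the review’s methodology), this classification step can be expressed as:

    ```python
    # Illustrative sketch: group reported outcome measures by the Adapted
    # Kirkpatrick level they evidence. The level labels follow Table 6;
    # the measure names and helper are hypothetical simplifications.
    from collections import defaultdict
    from typing import Dict, List

    MEASURE_TO_LEVEL = {
        "trainee satisfaction": "Level 1 (participation)",
        "self-rated confidence in communication": "Level 2a (attitudes and perception)",
        "self-rated skill level (Likert)": "Level 2b (knowledge and skills)",
        "workplace observations": "Level 3 (behavioural change)",
        "family satisfaction with communication": "Level 4b (patient benefits)",
    }

    def group_by_level(measures: List[str]) -> Dict[str, List[str]]:
        """Bucket each reported measure under its hierarchy level; flag unknowns."""
        grouped = defaultdict(list)
        for m in measures:
            grouped[MEASURE_TO_LEVEL.get(m, "unmapped")].append(m)
        return dict(grouped)

    print(group_by_level(["trainee satisfaction", "workplace observations"]))
    ```

    Measures not in the mapping fall into an “unmapped” bucket, mirroring how outcome measures must first be matched to a level before a programme’s highest achieved outcome can be claimed.
    
    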

    Three studies compared outcomes with non-intervention arms and reported improved patient satisfaction as well as self-rated and third-party-reported improvements in communication (Awdish et al., 2017; Happ et al., 2014; McCallister et al., 2015).

    C. Stage 3: Jigsaw Perspective

    The jigsaw perspective builds upon Moss and Haertel (2016)’s concept of methodological pluralism and sees data from different methodological approaches as pieces of a jigsaw, each providing a partial picture of the area of interest. The jigsaw perspective brings together data from complementary pieces of the training process to paint a cohesive picture of ICU communication training. As a result, related aspects of the training structure and the working culture were studied together, so as to better understand the influence each has on the other.

    D. Stage 4: An Iterative Process

    Whilst there was consensus on the themes/categories identified, the expert team and stakeholders raised concerns that data from grey literature, which is neither quality-assessed nor necessarily evidence-based, could bias the discussion. To address this concern, the research team separately thematically analysed the data from grey literature and non-research-based pieces such as letters, opinion and perspective pieces, commentaries and editorials drawn from the bibliographic databases, and compared these themes against themes drawn from peer-reviewed, evidence-based data. This analysis revealed the same themes, with one additional tool (the PACIENTE tool) identified in the grey literature to enhance communication with patients’ families (Pabon et al., 2014).

    IV. DISCUSSION

    E. Stage 5: Synthesis of Systematic Scoping Review in SEBA

    This SSR in SEBA reaffirms the importance of communication training in the ICU and suggests that a combination of training techniques is required (Akgun & Siegel, 2012; Chiarchiaro et al., 2015; Happ et al., 2010; Happ et al., 2015; Hope et al., 2015; Lorin et al., 2006; Miller et al., 2018; Roze des Ordons, Doig, Couillard, & Lord, 2017; Sandahl et al., 2013; D. J. Shaw, Davidson, Smilde, Sondoozi, & Agan, 2014).

    A framework for the design of a competency-based approach to ICU communications training (W. G. Anderson et al., 2017; Berkenstadt et al., 2013; D. Boyle et al., 2016; Brown, Durve, Singh, Park, & Clark, 2017; Chiarchiaro et al., 2015; Fins & Solomon, 2001; Happ et al., 2010; Hope et al., 2015; Karlsen, Gabrielsen, Falch, & Stubberud, 2017; Pabon et al., 2014; Roze des Ordons et al., 2017; Tamerius, 2013; J. Yuen & Carrington Reid, 2011) may be found in Figure 3 below.

    Figure 3. Framework for Competency-based Approach to ICU Communication Skills Training

    These findings resonate with Kirkpatrick’s Hierarchy (Jamieson et al., 2019; Littlewood et al., 2005; Roland, 2015), where each level builds upon the one before it and the learner moves from “peripheral participation” to active “doing and internalising” in real clinical practice.

    Such a competency-based programme necessitates a structured approach to holistic and longitudinal assessments of the learner’s progress. Such a structured approach must be horizontally and vertically integrated into other forms of clinical training as cogent communication is a fundamental skillset across all practice and specialties (Akgun & Siegel, 2012; Roze des Ordons et al., 2017).

    Whilst Kirkpatrick’s Hierarchy offers a viable framework for assessing trainees’ progress (Boothby, Gropelli, & Succheralli, 2018; Roze des Ordons et al., 2017), ICU training programmes may also keep in mind the various outcome measures listed previously in Table 6 when designing assessment tools. These tools should conscientiously account for the perspectives offered by trainers, standardised patients and family members involved in the evaluation process, and should consider the benefits and repercussions of trainees’ communication abilities for patients, families and the ICU multidisciplinary team (Aslakson, Randall Curtis, & Nelson, 2014; Awdish et al., 2017; Blackhall et al., 2014; D. A. Boyle et al., 2017; DeMartino, Kelm, Srivali, & Ramar, 2016; Happ et al., 2014; Happ et al., 2015; Hope et al., 2015; Miller et al., 2018; Sanchez Exposito et al., 2018; Sullivan et al., 2016; Turkelson, Aebersold, Redman, & Tschannen, 2017).

    With flexibility within training programmes highlighted as essential (Ernecoff et al., 2016), this flexibility should also extend to remediation and the provision of additional support in areas jointly identified by trainees and trainers as paramount for targeted improvement. Worryingly, no studies thus far have focused on the effects of remediation in ICU communication skills training; given its importance, this should be a critical area for future research (Steinert, 2013).

    Likewise, it is pivotal that trainers undergo rigorous training (Berlacher et al., 2017; Roze des Ordons et al., 2017) and are granted protected time for this undertaking (Boothby et al., 2018; Happ et al., 2010; Roze des Ordons et al., 2017). To ensure that quality, up-to-date skills and knowledge are transferred down the line, it is posited that trainers should also be holistically and longitudinally assessed alongside their charges (Roze des Ordons et al., 2017). Whilst trainers should ideally nurture a safe, collaborative learning environment for all (Hales & Hawryluck, 2008; Milic et al., 2015; Roze des Ordons et al., 2017; Sandahl et al., 2013), it is clear that this can only be achieved through sustained administrative and financial support, according learners and trainers sufficient time and resources to foster cordial relationships open to mutual and honest feedback (Akgun & Siegel, 2012; Miller et al., 2018).

     V. LIMITATIONS

    The SSR in SEBA approach is robust, reproducible and transparent, addressing many of the concerns about inconsistencies in SSR methodology and structure arising from diverse epistemological lenses and a lack of cogency in weaving together context-sensitive medical education programmes. Through an iterative step-by-step process, the hallmark ‘split approach’, which saw concurrent and independent analyses and tabulated summaries by separate teams of researchers, allowed for a holistic picture of prevailing ICU communication training programmes without the loss of any conflicting data. Consultations with experts at every step also significantly curtailed researcher bias and enhanced the accountability and coherency of the data.

    Yet it must be acknowledged that this SSR focused on articles published in English or with English translations. Hence, much of the data comes from North American and European countries, potentially skewing perspectives and raising questions as to the applicability of these findings in other cultural settings. Moreover, whilst the databases used were selected by the expert team and the team utilised independent selection processes, critical papers may still have been unintentionally omitted. Whilst the use of thematic analysis to review the impact of the grey literature greatly improves the transparency of the review, the inclusion of grey-literature-based themes may nonetheless bias results and lend these opinion-based views a ‘veneer of respectability’ despite a lack of evidence to support them.

     VI. CONCLUSION

    In the absence of a standardised, evidence-based communication training programme for HCPs in ICUs, many HCPs are left hoping that clinical experience alone will be sufficient to ensure their proficiency in communication. This SSR provides guidance on how to effectively develop and structure a communication training programme for HCPs in ICUs, and suggests that communication training in the ICU must involve a structured, multimodal approach carried out in a supportive learning environment. This must be accompanied by robust methods of assessment and by personalised, timely feedback and support for trainees. Such an approach will equip HCPs with greater confidence and preparedness in a variety of situations, including the evolving COVID-19 pandemic.

    To effectively institute change in communication training within ICUs, further studies should look into the desired characteristics of trainers and trainees, the context and settings as well as the case scenarios used. The design of an effective tool to evaluate learners’ communication skills longitudinally, holistically, and in different settings should be amongst the primary concerns for future research.

    Notes on Contributors

    Dr EWYC recently graduated from Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.

    Ms HH is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.

    Ms SG is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.

    Ms MTP is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.

    Ms CCYL is a nursing student at Alice Lee Centre for Nursing Studies, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.

    Ms LHET is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.

    Dr MSQK recently graduated from Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.

    Dr KTT recently graduated from Yong Loo Lin School of Medicine, National University of Singapore. He was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.

    Ms YTO is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.

    Mr WQL is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. He was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.

    Ms XHT is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.

    Mr YHT is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. He was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.

    Ms CSK is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.

    Ms AMCC is a senior librarian from Medical Library, National University of Singapore Libraries, National University of Singapore, Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.

    Ms MC is a researcher at the Division of Cancer Education, NCCS. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.

    Dr JXZ is a Consultant at the Division of Supportive and Palliative Care, NCCS. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.

    Professor LKRK is a Senior Consultant at the Division of Supportive and Palliative Care, NCCS. He was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.

    Ethical Approval

    This is a systematic scoping review, which does not require ethical approval.

    Acknowledgement

    This work was carried out as part of the Palliative Medicine Initiative run by the Department of Supportive and Palliative Care at the National Cancer Centre Singapore. The authors would like to dedicate this paper to the late Dr S Radha Krishna whose advice and ideas were integral to the success of this study.

    Funding

    No funding was received for this paper.

    Declaration of Interest

    The authors declare that they have no competing interests.


    *Ong Yun Ting
    1E Kent Ridge Road,
    NUHS Tower Block, Level 11,
    Singapore 119228
    Tel: +65 6227 3737
    Email: e0326040@u.nus.edu

    Submitted: 14 March 2020
    Accepted: 20 July 2020
    Published online: 5 January, TAPS 2021, 6(1), 30-39
    https://doi.org/10.29060/TAPS.2021-6-1/OA2235

    Yit Shiang Lui, Abigail HY Loh, Tji Tjian Chee, Jia Ying Teng, John Chee Meng Wong & Celine Hsia Jia Wong

    Department of Psychological Medicine, National University Health System, Singapore

    Abstract

    Introduction: A good understanding of basic child-and-adolescent psychiatry (CAP) is important for general medical practice. The undergraduate psychiatry teaching programme included various adult and CAP topics within a six-week time frame. A team of psychiatry tutors developed two new teaching formats for CAP and obtained feedback from the students about these teaching activities.

    Methods: Medical students were introduced to CAP via small group teaching in two different modes: the “Clinical Vignettes Tutorial” (CVT) and the “Observed Clinical Interview Tutorial” (OCIT). In the CVT, tutors discussed clinical vignettes of real patients with the students, followed by explanations of theoretical concepts and management strategies. The OCIT involved simulated patients (SPs) who acted as patients presenting with problems related to CAP, or as parents of such patients. At each session, students were given the opportunity to interview “patients” and “parents”, and feedback was given following these interviews. The students then completed surveys about the teaching methods.

    Results: Students gave very positive feedback on the teaching of CAP in small groups. Almost all found these small group sessions enjoyable and felt that they helped them apply what they had learnt. The majority agreed that the OCIT sessions increased their level of confidence in speaking with adolescents and parents. Some students agreed that these sessions had stimulated their interest to know more about CAP.

    Conclusion: Small group teaching in an interactive manner enhanced teaching effectiveness. Participants reported a greater degree of interest in CAP, as well as enhanced confidence in treating youths with mental health issues and in engaging their parents.

    Keywords:           Child Adolescent Psychiatry, Medical Education, Small Group, Teaching

    Practice Highlights

    • Psychiatric disorders are among the most common medical conditions experienced by children and adolescents; the Singapore Mental Health survey conducted in 2010 showed the prevalence of emotional and behavioural problems among Singaporean youth to be 12.5%.
    • Most medical students have limited exposure to Child & Adolescent Psychiatry (CAP) in their medical curriculum because of the small proportion of teaching time and clinical exposure allocated to CAP programmes.
    • This is further compounded by the limited number of child and adolescent psychiatrists involved in teaching at medical schools and supervising clinical postings.
    • This manuscript describes synergistic teaching methods employed in educating medical students in the field of Child & Adolescent Psychiatry and examines the effectiveness and acceptability of CAP teaching using small-group teaching classes.
    • The CAP small group interactive teaching sessions for medical students received good feedback from the majority of participants and translated into applicable, transferable skills.

    I. INTRODUCTION

    Psychiatric disorders are among the most common medical conditions experienced by children and adolescents during their developmental years. Epidemiological data from developed countries demonstrate a transition from acute and infectious diseases to chronic conditions, including mental health problems (Baranne & Falissard, 2018; Kyu et al., 2016; World Health Organization, 2014). Recent global health surveys estimated the median prevalence of psychiatric disorders in children and adolescents at about 12% (Costello, Egger, & Angold, 2005). Data from the Singapore Mental Health survey conducted in 2010 showed the prevalence of emotional and behavioural problems among Singaporean youth to be 12.5%, comparable with global data (Lim, Ong, Chin, & Fung, 2015). Some studies have also demonstrated a growing proportion of disabilities in children and adolescents attributable to mental health disorders, so increasingly more health resources will be needed to meet these demands (Baranne & Falissard, 2018; Erskine et al., 2015). These resources will largely take the form of services focusing on the prevention, identification, and management of child and adolescent psychiatric disorders (Baranne & Falissard, 2018; Costello et al., 2005; Erskine et al., 2015). There is hence a need to fill the gap in escalating mental health needs among children and adolescents: delays in accessing prompt and adequate assessment may incur socio-economic costs and bring about further psychiatric comorbidities.

    Increasing the number of trained child and adolescent psychiatrists may be necessary to meet current and projected needs in youth mental health (Baranne & Falissard, 2018; Breton, Plante, & St-Georges, 2005; Thomas & Holzer, 2006). Globally, as well as in Singapore, the number of such specialists falls short of demand, and increased recruitment is needed to address this workforce shortage (Breton et al., 2005; Lim et al., 2015; Thomas & Holzer, 2006). Hence, there have been moves in recent years to increase exposure to, and interest in, child and adolescent psychiatry (CAP) among medical students (Hunt, Barrett, Grapentine, Liguori, & Trivedi, 2008; Malloy, Hollar, & Lindsey, 2008; Plan, 2002; Thomas & Holzer, 2006). Most medical students have limited exposure to CAP in their medical curriculum because of the small proportion of teaching time and clinical exposure allocated to CAP programmes. This is further compounded by the limited number of child and adolescent psychiatrists involved in teaching at medical schools and supervising clinical postings (Dingle, 2010; Lim et al., 2015; Plan, 2002; Sawyer & Giesen, 2007). It remains important, however, that medical students are taught CAP, given the burden of mental health disorders in our youths today (Dingle, 2010; Hunt et al., 2008; Kaplan & Lake, 2008; Sawyer & Giesen, 2007; Thomas & Holzer, 2006). Other specialist practitioners, such as family medicine specialists and paediatricians, also frequently manage youths with psychiatric problems. An understanding of early childhood development and of critical milestones in childhood and adolescence is essential in any specialty that interacts with and manages children as part of routine practice (Hunt et al., 2008; Plan, 2002). This forms the basis for teaching CAP in medical schools as part of the regular and wider curricula (Dingle, 2010; Hunt et al., 2008; Kaplan & Lake, 2008; Malloy et al., 2008; Plan, 2002; Sawyer & Giesen, 2007).
The current medical school pedagogy may have underestimated the salience of teaching CAP in the undergraduate curriculum, resulting in much less time, attention, and teaching resources being allocated to CAP. Curriculum designers may also have under-appreciated the transferability of the associated skill set, given the inherent challenges in undertaking interviews with children and their parents.

    A. The Curriculum and Teaching Methods

    At the Yong Loo Lin School of Medicine, National University of Singapore, CAP teaching is embedded within a six-week General Psychiatry clerkship for fourth-year medical students. It consists of 20 hours of centralised teaching at the affiliated National University Hospital, together with clinical attachments to outpatient child psychiatry clinics in other restructured hospitals. The 20 hours of teaching include online lectures accessible through the students’ Intranet, didactic lectures delivered in a large-group setting by clinical tutors, and small group teaching classes. In this paper, the authors examined the effectiveness and acceptability of CAP teaching using these small group teaching classes.

    A comprehensive CAP education covers the following domains: emotional symptomatology (e.g. depression, anxiety, enuresis); conduct and disruptive behavioural problems (e.g. attention deficit disorder, conduct disorder, bullying); developmental delays (e.g. specific learning, speech or autistic spectrum disorders); and relationship difficulties, personal habits and injuries (e.g. abuse, suicide, digital overuse). Knowledge covers normal child developmental psychology as well as the assessment and management of common CAP conditions. Practice imparts CAP interviewing skills and the counselling of young parents.

    Small group teaching sessions consisted of several components within a general pedagogic approach. The aim of these sessions was to cover the core knowledge and practices in common CAP cases, as well as the training of interview skills required in communicating with children, adolescents, and their parents. Each session started with a series of lectures on four major domains of CAP: (1) emotional symptoms, (2) conduct and disruptive behavioural problems, (3) developmental delays, and (4) relationship difficulties, personal habits, and injuries. The lectures were followed by both the “Clinical Vignettes Tutorial” (CVT) and the “Observed Clinical Interview Tutorial” (OCIT). The teaching sessions were structured in this way because time constraints in the undergraduate curriculum precluded comprehensive clinical exposure; the combination of didactics and simulated practice was designed to maximise the transfer of the necessary theoretical knowledge and practical skills to the students.

    In the CVT, tutors discussed clinical vignettes derived from real-life patients, and their underpinning theoretical concepts, for about 2½ hours. This teaching activity covered the principles of psychopharmacology in youths, as well as three distinct childhood conditions: a) Adolescent Depression with self-harm behaviour, b) Post-traumatic Stress Disorder in an adolescent, and c) Adjustment Disorder in an adolescent with chronic medical illnesses. The anonymised vignettes were based on actual patient profiles. During each interactive discussion of these clinical presentations, tutors encouraged students to raise critical questions as pertinent portions of the history unfolded, to enhance their analytical thinking about the cases and to help them remember these teachable moments.

    The second teaching activity, the OCIT, took place after a second series of lectures on other CAP conditions had been conducted. During this three-hour-long OCIT, students were provided opportunities to interview simulated patients (SPs). Each group comprised 12 to 18 students led by one clinical tutor.

    The four pre-prepared clinical scenarios comprised an adolescent with Anorexia Nervosa; an adolescent with Social Anxiety Disorder; a parent of a child with Attention Deficit Hyperactivity Disorder; and, finally, a parent of a child with features of Autism Spectrum Disorder. Each scenario included a case template comprising an interesting title, the learning and assessment objectives, the student’s task, and the script for the SP, complete with an opening statement, standard statements, and character presentation (behaviour, affect and mannerisms).

    Students took turns to interview the SPs, aiming to collate accurate and adequate clinical information to arrive at provisional diagnoses. The students were then tasked to discuss the possible differential diagnoses, to provide treatment options, and to formulate prognoses of the conditions with the SPs. The SPs were in turn invited to comment on their interactions with the students. The clinical tutors then conducted follow-up discussions to provide feedback to the students on aspects of their interviewing techniques and their knowledge of the clinical conditions. The discussions also focused on the differential diagnoses and management strategies for the various conditions.

    II. METHODS

    Paper-and-pen self-report surveys for both the CVT and OCIT sessions were used to evaluate the student participants’ learning, experience, and interest in CAP (Appendix A). Student participants were asked to grade responses on a five-point Likert scale (1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree and 5 = Strongly agree) in relation to statements such as “I found the session enjoyable” and “The case scenarios were relevant”. The surveys were completed and submitted anonymously at the end of each teaching session. They also included a free-text segment for open feedback, which asked the student participants to list “The best things about the session” and “Some ways which I think can make the sessions better”. The surveys used for each teaching session differed slightly, reflecting the content of each teaching method, but the questions were largely identical across surveys. Implied informed consent was provided by the participating students through completion of the surveys.

    For the current study, the authors analysed data from the surveys completed by the fourth-year undergraduate medical students who rotated through the six-week Psychiatry clerkship during the five-month period between July and November 2017.

    Descriptive statistics were used to analyse the findings of the survey.
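    The descriptive statistics reported in Tables 1 and 2 are the count and percentage of respondents who chose “Agree” (4) or “Strongly agree” (5) on the five-point Likert scale. A minimal sketch of this computation (not the authors’ actual analysis script; the illustrative response counts are constructed to reproduce Table 1, row 1, where 262 of 289 respondents agreed that the CVT session was enjoyable):

    ```python
    def agree_summary(responses):
        """Return (n_agree, percentage) for Likert responses coded 1-5."""
        # Count responses of "Agree" (4) or "Strongly agree" (5).
        n_agree = sum(1 for r in responses if r >= 4)
        # Percentage of all respondents, rounded to 1 decimal place as in the tables.
        pct = round(100 * n_agree / len(responses), 1)
        return n_agree, pct

    # Illustrative data matching Table 1, row 1: 262 of 289 students rated 4 or 5.
    responses = [5] * 130 + [4] * 132 + [3] * 20 + [2] * 7
    n, pct = agree_summary(responses)
    print(n, pct)  # 262 90.7
    ```

    The same calculation, applied statement by statement, yields every N and % entry in Tables 1 and 2.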

    III. RESULTS

    A total of 289 students completed the survey between July 2017 and November 2017. With regard to the CVT, the majority of students agreed or strongly agreed that the sessions were enjoyable (90.7%) and beneficial to their overall learning (90.7%; Table 1). They provided feedback that the session had helped them to apply what they had learnt (95.8%) and that the case scenarios were relevant (98.2%).

    #   Survey Statement                                                            N     %
    1   “I found the session enjoyable…”                                            262   90.7
    2   “The session helped me to apply what I have learnt…”                        277   95.8
    3   “The case scenarios were relevant…”                                         284   98.2
    4   “My clinical tutor was effective in facilitating the session…”              281   97.2
    5   “The session stimulated my interest in Child and Adolescent Psychiatry…”    247   85.5
    6   “There was sufficient time for each section…”                               272   94.1
    7   “Overall, I found the session beneficial…”                                  262   90.7

    N and % refer to participants who indicated “Agree” or “Strongly agree”.

    Table 1. Survey results for the Clinical Vignettes Tutorial (CVT)

    For the OCIT, most of the survey respondents agreed or strongly agreed that the activity had helped them to learn psychiatric interviewing skills (97.7%) and increased their confidence in speaking with adolescents or parents (95.1%; Table 2). Most of the students who responded to the survey reported that the simulated patients’ performances were realistic (97.7%). A large proportion of the respondents indicated that the teaching session had met their learning objectives (98.5%).

    #   Survey Statement                                                            N     %
    1   “The session helped me to learn psychiatric interviewing skills…”           260   97.7
    2   “The session increased my confidence in speaking to adolescents/parents…”   253   95.1
    3   “The session helped me to apply what I have learnt…”                        248   96.9
    4   “The session stimulated my interest in Child and Adolescent Psychiatry…”    219   83.3
    5   “My clinical tutor provided useful feedback…”                               259   97.3
    6   “The simulated patients’ performances felt realistic…”                      258   97.7
    7   “There was sufficient time for each case…”                                  256   95.9
    8   “Overall, the session met the learning objectives…”                         257   98.5

    N and % refer to participants who indicated “Agree” or “Strongly agree”.

    Table 2. Survey results for the Observed Clinical Interview Tutorial (OCIT)

    Examining the effectiveness of these teaching activities in stimulating the students’ interest towards CAP, 85.5% of the respondents indicated that the CVT had done so, while a slightly lower proportion (83.3%) reported that the OCIT stimulated their interest in CAP.

    The majority of respondents indicated that the clinical tutors were effective in facilitating the CVT (97.2%). Similarly, most of the respondents reported that the clinical tutors provided useful feedback during the OCIT (97.3%).

    Entries in the free-text feedback section about what the students liked best about the CVT and OCIT included comments such as “good for application”, session allowed for “practice of interviewing skills” and “helped consolidate knowledge” (Figure 1). Several students liked the “interactive” nature of the interviews and discussions, as well as “feedback” from tutors, which also helped in their learning.

    Figure. 1. Open comment feedback to the survey question “The best things about the sessions were…”

    In areas indicated for further improvement, students cited a “shorter” duration for each teaching session (Figure 2). This was likely due to the full-day programme of CAP teaching, which could last eight hours with a one-hour lunch break. Others shared that they preferred “smaller” groups, so that students could get more chances to practise interviewing the SPs, and asked for “more time for discussion” to allow more in-depth feedback and discussion of each clinical condition. Some students remarked that Objective Structured Clinical Examination (OSCE)-style marking schemes could enhance their learning experiences, as this method might be more structured than an open discussion.

    Figure 2. Open comment feedback to the survey question “Some ways which I think can make the sessions better are…”

    IV. DISCUSSION

    This study evaluated the effectiveness and acceptability of small group tutorials for CAP conditions, delivered as an integral part of a medical undergraduate psychiatry teaching programme in which the CVT and OCIT are designed to complement each other synergistically. The surveys compiling the medical undergraduates’ responses focused on their learning experience with the CAP curriculum. The effectiveness of the two teaching methods, CVT and OCIT, was determined from the transfer of the requisite knowledge base and clinical skills, as well as the availability of interviewing opportunities for the participants. The survey responses were also used to gauge the performance of the SPs and the usefulness of the clinical tutors. In addition, how impactful the teaching sessions were in generating interest towards CAP was also evaluated.

    The fourth-year medical students gave good feedback on the small group teaching sessions. They reported that the CVT sessions were enjoyable and beneficial, and had allowed them to apply what they had learnt. For the OCIT, most of the respondents indicated that the session had helped them to learn psychiatric interviewing skills, increased their level of confidence in speaking with adolescents and parents, and helped them to apply what they had learnt in clinical scenarios. There is a discernible difference between the feedback for the CVT and the OCIT: the students’ feedback for the CVT affirmed the applicability of the CAP knowledge content, whereas that for the OCIT affirmed the transfer of interviewing skills in terms of confidence.

    In the open feedback segment of the survey, respondents reported that they particularly liked the interactive and hands-on aspects of the sessions and the frequent opportunities for evaluation, feedback, and practice. However, they highlighted that certain factors, such as the group size, the length of the sessions, and the random allocation of conditions, could be improved to further enhance their learning experience. Overall, their feedback still indicated positive experiences in these small group sessions, translating into an increased knowledge base, a heightened level of confidence, and a growing interest in CAP among the student participants.

    This study’s limitations included the inherent challenge of accurately assessing the students’ genuine experiences of and feelings towards the sessions; possible biases (recall and the Hawthorne effect) in responding to questionnaires; and the lack of correlation with actual performance in real-world settings. Furthermore, what remained unanswered was how such sessions might genuinely generate interest, possibly leading to the pursuit of a career in CAP. In addition, it is uncertain whether changing the teaching methods within the curriculum could inspire more medical students and young doctors to consider specialising in this field and raise the number of residency applications. The data from our study did appear to be consistent with findings from other CAP clinical teaching programmes, in which more exposure to CAP and increased clinical opportunities correlated with changed impressions of and appreciation for clinical interactions with children, more positive views of CAP as part of medical practice, and heightened interest in CAP as a field of medical specialty (Dingle, 2010; Kaplan & Lake, 2008; Malloy et al., 2008; Martin, Bennett, & Pitale, 2005).

    In the current undergraduate medical curriculum, the amount of time allocated to teaching CAP is relatively small compared with other topics. Child and adolescent psychiatric cases can be particularly complex, and their management demands sensitive handling, which may pose challenges in real-world practice. Youth patients and their parents may value privacy and sometimes do not allow medical students to be involved in initial assessments and subsequent follow-up consultations. These factors collectively pose unique challenges to teaching and equipping medical students with the skills and knowledge to address child and adolescent mental health disorders. While clinical contact and patient experience would be preferred and desirable for training, they may be impractical given the constraints mentioned above (Kaplan & Lake, 2008). Hence, other creative methods of “exposure” to CAP patients should be incorporated into teaching rotations to offer medical students opportunities to expand their knowledge base, apply that knowledge to practice scenarios, and further their clinical and communication skills. Small group sessions such as the CVTs and OCITs are teaching activities that can be used to overcome some of these challenges.

    Our study showed that small group interactive teaching is effective in helping medical students to apply what they have learnt about CAP, increase their confidence in speaking with adolescents as patients, and learn psychiatric interviewing skills. It also exposes them to a wide range of relevant CAP cases to which they can apply their theoretical knowledge and practise interview and management techniques. Furthermore, we found that all this can be adequately achieved in a tailored environment conducive to learning. The collective constructive feedback has been used to further improve the content and delivery style so as to enhance implementation for future batches. Future scholarly research has also been conceptualised to compare the CVT and OCIT as individual teaching methods.

    V. CONCLUSION

    The CAP small group interactive teaching sessions for medical students received good feedback from the majority of participants. This positive validation spurs the authors on to explore further how this pedagogy could help spark interest in Child and Adolescent Psychiatry among medical students, given the worldwide shortfall of child and adolescent psychiatrists.

    Notes on Contributors

    AHYL analysed and interpreted data. CHJW, together with TJY and JCMW planned and conducted the child psychiatry small group teaching and collected feedback data from the medical students. TJY developed the feedback questionnaire. YSL, together with AHYL, CHJW and TTC planned and wrote the manuscript. All authors read and approved the final manuscript.

    Ethical Approval

    Exemption was granted under NHG DSRB reference number 2019/00431.

    Data Availability

    Datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.

    Acknowledgements

    The authors wish to thank the team from Centre for Healthcare Simulation, Yong Loo Lin School of Medicine, National University of Singapore for the invaluable support in recruiting and training the simulated patients for the CAP teaching program. We appreciate the participation of the simulated patients and medical students in the teaching programme.

    Funding

    There is no funding for this paper.

    Declaration of Interest

    The authors do not know of, or foresee, any future competing interests, and are not aware of any issues relating to journal policies in submitting this manuscript. All authors have approved the manuscript for submission. The authors declare that they have no competing interests.

    References

    Baranne, M. L., & Falissard, B. (2018). Global burden of mental disorders among children aged 5–14 years. Child and Adolescent Psychiatry and Mental Health, 12(1), 19.

    Breton, J. J., Plante, M. A., & St-Georges, M. (2005). Challenges facing child psychiatry in Quebec at the dawn of the 21st Century. The Canadian Journal of Psychiatry, 50(4), 203-212.

    Costello, E. J., Egger, H., & Angold, A. (2005). 10-year research update review: The epidemiology of child and adolescent psychiatric disorders: I. Methods and public health burden. Journal of the American Academy of Child & Adolescent Psychiatry, 44(10), 972-986.

    Dingle, A. D. (2010). Child psychiatry: What are we teaching medical students? Academic Psychiatry, 34(3), 175-182.

    Erskine, H. E., Moffitt, T. E., Copeland, W. E., Costello, E. J., Ferrari, A. J., Patton, G., … & Scott, J. G. (2015). A heavy burden on young minds: The global burden of mental and substance use disorders in children and youth. Psychological Medicine, 45(7), 1551-1563.

    Hunt, J., Barrett, R., Grapentine, W. L., Liguori, G., & Trivedi, H. K. (2008). Exposure to child and adolescent psychiatry for medical students: Are there optimal “teaching perspectives”? Academic Psychiatry, 32(5), 357-361.

    Kaplan, J. S., & Lake, M. (2008). Exposing medical students to child and adolescent psychiatry: A case-based seminar. Academic Psychiatry, 32(5), 362-365.

    Kyu, H. H., Pinho, C., Wagner, J. A., Brown, J. C., Bertozzi-Villa, A., Charlson, F. J., … & Fitzmaurice, C. (2016). Global and national burden of diseases and injuries among children and adolescents between 1990 and 2013: Findings from the global burden of disease 2013 study. JAMA Pediatrics, 170(3), 267-287.

    Lim, C. G., Ong, S. H., Chin, C. H., & Fung, D. S. S. (2015). Child and adolescent psychiatry services in Singapore. Child and Adolescent Psychiatry and Mental Health, 9(1), 7.

    Malloy, E., Hollar, D., & Lindsey, B. A. (2008). Increasing interest in child and adolescent psychiatry in the third-year clerkship: Results from a post-clerkship survey. Academic Psychiatry, 32(5), 350-356.

    Martin, V. L., Bennett, D. S., & Pitale, M. (2005). Medical students’ perceptions of child psychiatry: Pre- and post-psychiatry clerkship. Academic Psychiatry, 29(4), 362-367.

    Plan, S. (2002). A Call to Action: Children Need Our Help! American Academy of Child & Adolescent Psychiatry. Retrieved from https://www.aacap.org/app_themes/aacap/docs/resources_for_primary_care/workforce_issues/AACAP_Call_to_Action.pdf

    Sawyer, M., & Giesen, F. (2007). Undergraduate teaching of child and adolescent psychiatry in Australia: Survey of current practice. Australian & New Zealand Journal of Psychiatry, 41(8), 675-681.

    Thomas, C. R., & Holzer, C. E., 3rd (2006). The continuing shortage of child and adolescent psychiatrists. Journal of the American Academy of Child & Adolescent Psychiatry, 45(9), 1023-1031.

    World Health Organization. (2014). Adolescent health epidemiology. Retrieved from http://www.who.int/maternal_child_adolescent/epidemiology/adolescence/en/

    *Yit Shiang Lui
    1E Kent Ridge Road
    Tower Block, Level 9,
    Singapore 119228
    Tel: 6772 6331
    Email address: yit_shiang_lui@nuhs.edu.sg

    Submitted: 14 February 2020
    Accepted: 1 July 2020
    Published online: 5 January, TAPS 2021, 6(1), 40-48
    https://doi.org/10.29060/TAPS.2021-6-1/OA2227

    Shirley Beng Suat Ooi1,2, Clement Woon Teck Tan3,4 & Janneke M. Frambach5

    1Emergency Medicine Department, National University Hospital, National University Health System, Singapore; 2Department of Surgery, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; 3Department of Ophthalmology, National University Hospital, National University Health System, Singapore; 4Yong Loo Lin School of Medicine, National University of Singapore, Singapore; 5School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, The Netherlands

    Abstract

    Introduction: Almost all published literature on effective clinical teachers is from Western countries, and only two studies compared medical students with residents. Hence, this study aims to explore the perceived characteristics of effective clinical teachers among medical students compared with residents graduating from an Asian medical school, and specifically whether there are differences between cognitive- and non-cognitive-domain skills, to inform faculty development.

    Methods: This qualitative study was conducted at the National University Health System (NUHS), Singapore involving six final year medical students at the National University of Singapore, and six residents from the NUHS Residency programme. Analysis of the semi-structured one-on-one interviews was done using a 3-step approach based on principles of Grounded Theory.

    Results: There are differences in the perceptions of effective clinical teachers between medical students and residents. Medical students valued a more didactic, spoon-feeding type of teacher in their earlier clinical years. However, final year medical students and residents valued feedback and role-modelling in clinical practice. The top two characteristics, approachability and passion for teaching, are in the non-cognitive domains. These seem foundational and lead to the acquisition of effective teaching skills such as the ability to simplify complex concepts and to create a conducive learning environment. Being exam-oriented is a new characteristic not identified before in “Western-dominated” publications.

    Conclusion: The results of this study will help to inform educators of the differences in a learner’s needs at different stages of their clinical development and to potentially adapt their teaching styles.

    Keywords:           Clinical Teachers, Medical Students, Residents, Cognitive/Non-Cognitive, Asian Healthcare, Faculty Development

    Practice Highlights

    • Approachability and teaching passion are foundational non-cognitive skills in effective clinical teachers.
    • These foundational skills are more important for undergraduate than postgraduate teaching.
    • Procedural residents can accept less ‘warm’ teachers if they can learn advanced clinical skills.
    • Medical students value didactic ‘spoon-feeding’ type of teachers in their earlier clinical years.
    • Final year medical students and residents value feedback and role-modelling in clinical practice.

    I. INTRODUCTION

      “The transformation of our students requires the engagement of innovative and outstanding clinician-teachers who not only supervise students in their development of technical skills and applied knowledge but also serve as role models of the values and attributes of the profession and of the life of a professional” (Sutkin, Wagner, Harris, & Schiffer, 2008). This statement nicely encapsulates the very important role played by outstanding clinical teachers in helping students to ultimately become professionals with the attributes our healthcare system desires. Previous research has extensively investigated characteristics of effective clinical teachers to inform faculty development (e.g. Branch, Osterberg, & Weil, 2015; Hatem et al., 2011; Hillard, 1990; Kernan, Lee, Stone, Freudigman, & O’Connor, 2000; Paukert & Richards, 2000; Singh et al., 2013; Sutkin et al., 2008; White & Anderson, 1995). However, despite the large body of existing research on effective clinical teaching, two issues related to the needs of different groups of learners need further investigation to enable more tailored faculty development.

      First, effective clinical teaching may look different in undergraduate as compared with postgraduate education. In many healthcare institutions, clinical teachers are expected to teach across the medical education continuum, i.e., undergraduate medical students, graduate doctors in training, and learners in continuing medical education, and teaching abilities are a necessary prerequisite in an academic environment (Hatem et al., 2011). Based on the conceptual framework of constructivism (Bednar, Cunningham, Duffy, & Perry, 1991), a theory which equates learning with creating meaning from experience or contextual learning, Jonassen (1991) argues that constructive learning environments are most effective for acquiring knowledge in the advanced stage of knowledge, the stage between introductory and expert. According to Jonassen (1991), the initial or introductory stage of knowledge acquisition occurs when learners have very little directly transferable prior knowledge about a skill or content area. In this stage, knowledge is best acquired through more objectivistic approaches, which can be described as ‘spoon-feeding’. Medical students in general would fit into this introductory stage, to varying degrees depending on their seniority and individual progress in learning. Jonassen’s (1991) second stage is advanced knowledge acquisition, where the domains are ill-structured and more knowledge-based. This contrasts with his third or final stage, that of experts, who require very little instructional support and are able to deal with elaborate structures and schematic patterns and to see the interconnectedness of knowledge gained through experience. Junior doctors in training would fit into the second or advanced stage of learning. Constructivist teachers help students construct knowledge to become active learners rather than passive recipients of knowledge from teachers or textbooks.
In view of this constructivist framework, it appears logical to postulate that as medical students mature to become practising doctors, their perceptions of effective clinical teachers may change from one who ‘spoon-feeds’ them with medical knowledge to one who encourages them to actively construct new meaning as they become clinically more experienced and have to deal with complex and ill-defined problems. Low, Khoo, Kuan, and Ooi (2020) showed that although the top four characteristics of effective medical teachers are consistent across all 5 years of medical school, characteristics that facilitate active learner participation are emphasised in the clinical years, consistent with constructivist learning theory. However, there is a paucity of comparative research on how undergraduates and postgraduates perceive effective clinical teachers, research that is needed to plan more focused faculty development addressing the attributes learners look for in their clinical teachers; this warrants further study.

      The second issue relates to potential differences in the clinical teaching role between Asian and Western settings. In Western studies, as noted above, effective clinical teachers are encouraged to stimulate students’ intellectual curiosity, leading to more self-directed learning (Hillard, 1990; Kernan et al., 2000; White & Anderson, 1995). In contrast, feelings of uncertainty about the independence required in self-directed learning, a focus on tradition that respects ‘old ways’, hierarchy expecting ‘truths’ to come from persons of higher status, and an achievement orientation to pass and excel in examinations have been identified as more prominent in non-Western than in Western cultures (Frambach, Driessen, Chan, & van der Vleuten, 2012). This is despite the recent introduction of more student-centred education methods. In Singapore, for example, there is a move in the Yong Loo Lin School of Medicine (YLLSoM) to embed students into healthcare teams (Jacobs & Samarasekera, 2012) and to implement newer methods of learning such as the flipped classroom. However, many teachers still employ traditional methods of lectures and small group tutorials focused on exam preparation. A comprehensive review of 68 articles on effective clinical teaching (Sutkin et al., 2008) included only one article that reported research from a non-Western setting (Elzubeir & Rizk, 2001). That article, originating from the United Arab Emirates, does not discuss whether medical students in Asian countries perceive role models differently from those in the West (Elzubeir & Rizk, 2001). Another study conducted in Asia showed differences in the perceptions of first-year and fifth-year medical students in Singapore on what makes an effective medical teacher (Kua, Voon, Tan, & Goh, 2006): more first-year students preferred handouts, in contrast to fifth-year students, who were less reliant on ‘spoon-feeding’.
Research on effective clinical teaching is growing in the Asian setting (Ciraj et al., 2013; Haider, Snead, & Bari, 2016; Kikukawa et al., 2013; Mohan & Chia, 2017; Nishiya et al., 2019; Venkataramani et al., 2016), though there is still a paucity of literature compared with studies conducted in the West, and none of these studies directly compared medical students with residents.

      Another issue that deserves further attention is the role of non-cognitive domain skills in clinical teaching. Sutkin et al.’s (2008) review described three main categories of characteristics of good clinical teachers: 1) physician characteristics, 2) teacher characteristics, and 3) human characteristics (Table 1). Approximately two-thirds of the characteristics were in non-cognitive domains (such as those involving relationship skills, emotional states, and personality types), and one-third in cognitive domains (such as those involving reasoning, memory, judgment, perception, and procedural skills). The article noted that cognitive abilities can be taught and learned, in contrast to non-cognitive attributes, which are more difficult to develop and teach. Faculty development programmes currently often focus on traditional cognitive skills, such as curriculum design, large-group teaching, and assessment of learners (Searle, Hatem, Perkowski, & Wilkerson, 2006). If non-cognitive domains contribute more to outstanding teaching, however, they might need greater emphasis in the curricula of these workshops. The good news is that, according to Schiffer, Rao, and Fogel (2003), non-cognitive behaviours are both measurable and alterable; most of them have underlying neural networks which are entering our sphere of understanding. Hence non-cognitive skills, although much more challenging to develop than cognitive skills, have the potential to be developed. It is not clear whether the distribution of cognitive and non-cognitive domain skills differs between medical students’ and residents’ perceptions of an effective clinical teacher.

      The aim of this qualitative study is to explore the perceived characteristics of an effective clinical teacher among medical students compared to residents graduating from an Asian medical school and whether there is a difference regarding cognitive and non-cognitive domain skills.

      II. METHODS

      A. Participants

      The participants consisted of final/fifth year medical students (M5s) from the Yong Loo Lin School of Medicine (YLLSoM), National University of Singapore (NUS), who were posted to the National University Hospital (NUH) for their student internship posting in 2016. To ensure sufficient working experience, National University Health System (NUHS) residents who had graduated from the YLLSoM and who had recently completed their intermediate specialty examinations were recruited. These were third to fifth year residents in different programmes. Maximal variation sampling of the M5s and the residents by gender, ethnic group and (for residents only) specialty was done.

      B. Design

      A pragmatic qualitative research design (Savin-Baden & Howell Major, 2013) was used to get the participants to reflect on their own learning journey affecting their perceptions of the qualities that make an effective clinical teacher from the time they were first exposed to clinical medicine in year 3 (M3) of medical school to final year (M5) for the students, and to residency for the residents.

       C. Data Collection

      Semi-structured one-on-one interviews using open-ended questions were conducted. M5s doing their student internship programme in the various departments in NUH were invited via e-mail to participate in this study. To ensure maximal variation sampling, M5s of both genders and, as far as possible, of different ethnic groups were recruited. As for the residents, through the Graduate Medicine Education Office in NUH, residents of both genders, from different ethnic groups and different specialties (both procedural and non-procedural), were selected from those who responded voluntarily to the invitation to participate, as residents from procedural specialties may have different perceptions of effective clinical teachers from those in non-procedural specialties.

      Written consent after reading the Participant Information Sheet was taken from the interviewees before the interview was conducted in a quiet room. The interview was audiotaped and lasted between 30 and 45 minutes.

      D. Data Analysis

      The audiotaped interviews were transcribed. All 12 interviews were conducted by the principal investigator (PI) (SO). Although the coding and formal analysis were done only after all 12 interviews had been transcribed, the PI took note of emerging themes during the interviews and ended data collection once no substantial new themes emerged.

      In the first phase, open coding, initial categories of information on characteristics of effective clinical teachers were formed by segmenting the information and assigning open codes. In the second phase, broader categories were developed by grouping conceptually related ideas. The third phase involved selective coding, where the individual categories were counterchecked against Sutkin et al.’s (2008) categories of teacher, physician and human characteristics and against whether they fell in the cognitive or non-cognitive domains (Table 1). Related categories were then brought together according to Sutkin et al.’s (2008) classification.

      Physician Characteristics

      P1   Demonstrates medical/clinical knowledge
      P2   Demonstrates clinical and technical skills/competence, clinical reasoning
      P3   Shows enthusiasm for medicine
      P4   A close doctor-patient relationship
      P5   Exhibits professionalism
      P6   Is scholarly (does research)
      P7   Values teamwork and has collegial skills
      P8   Is experienced
      P9   Demonstrates skills in leadership and/or administration
      P10  Accepts uncertainty in medicine
      P11  Others

      Teacher Characteristics

      T1   Maintains positive relationships with students and a supportive learning environment
      T2   Demonstrates enthusiasm for teaching
      T3   Is accessible/available to students
      T4   Provides effective explanations, answers to questions, and demonstrations
      T5   Provides feedback and formative assessment
      T6   Is organized and communicates objectives
      T7   Demonstrates knowledge of teaching skills, methods, principles, and their application
      T8   Stimulates students’ interest in learning and/or subject
      T9   Stimulates or inspires trainees’ thinking
      T10  Encourages trainees’ active involvement in clinical work
      T11  Provides individual attention to students
      T12  Demonstrates commitment to improvement of teaching
      T13  Actively involves students
      T14  Demonstrates learner assessment/evaluation skills
      T15  Uses questioning skills
      T16  Stimulates trainees’ reflective practice and assessment
      T17  Teaches professionalism
      T18  Is dynamic, enthusiastic, and engaging
      T19  Emphasizes observation
      T20  Others

      Human Characteristics

      H1   Communication skills
      H2   Acts as role model
      H3   Is an enthusiastic person
      H4   Is personable
      H5   Is compassionate/empathetic
      H6   Respects others
      H7   Displays honesty
      H8   Has wisdom, intelligence, common sense, and good judgement
      H9   Appreciates culture and different cultural backgrounds
      H10  Considers others’ perspectives
      H11  Is patient
      H12  Balances professional and personal life
      H13  Is perceived as a virtuous person and a globally good person
      H14  Maintains health, appearance, and hygiene
      H15  Is modest and humble
      H16  Has a good sense of humour
      H17  Is responsible and conscientious
      H18  Is imaginative
      H19  Has self-insight, self-knowledge, and is reflective
      H20  Is altruistic
      H21  Others

      Note: Italics denotes cognitive characteristics; Bold denotes non-cognitive characteristics.

      Table 1. Classification of characteristics of outstanding clinical teachers (Sutkin et al., 2008)

      E. Trustworthiness

      To enhance the credibility of the research, member checking on the accuracy of the interview transcription was done. The same transcription was coded by the PI (SO) and a co-researcher (CT), and the themes and differences were discussed and resolved together. The themes were then discussed with another co-researcher (JF), who is an outsider to the research setting. To contribute to the dependability of the data, a reflexivity diary was kept to reflect on the process and on the PI’s role and influence on this study. This is because the PI is the person in overall charge of residency training and has vast experience in teaching both undergraduate and postgraduate learners, having observed that undergraduates seem to value a teacher’s willingness to spend time teaching, in contrast to postgraduate learners, who value effective teaching on the job. The PI emphasised to participants that whatever they mentioned in this study would not affect their assessments, selection into a residency programme, job selection or career progression in any way. As a point of note, none of the interviewees mentioned any of the authors by name when describing an effective clinical teacher.

       III. RESULTS

      A total of six final year medical students from the YLLSoM, three males and three females with a mean age of 23 years, were interviewed. The resident group consisted of four males and two females: two internal medicine year 3 residents, one paediatric year 5 resident, one emergency medicine year 4 resident, one orthopaedic year 3 resident and one urology year 4 resident, with a mean age of 29 years (range 26-33 years). All participants were of Chinese ethnicity.

      The characteristics of effective teachers were mapped onto Sutkin et al.’s (2008) review paper (Table 1); while the majority of the characteristics could be mapped, those that could not were considered new characteristics. Referring to the summary of results in Table 2, the top characteristic, identified equally by the medical students and the residents, was approachability, in the non-cognitive domain. This was described as being “relatable, personable, forming good rapport, warm, able to remember students’ names, having a sense of humour, sharing personal experience”. Medical student 2 aptly described its importance: “Approachability in being willing to teach is an inborn trait. It acts as a screening tool. It opens the door for a student to decide whether or not this clinical tutor is someone she is likely to approach to learn from.” Interestingly, while both groups unanimously identified the need for a clinical teacher to have a threshold level of clinical competence, followed by warmth, approachability and a passion to teach, this latter requirement was emphasised as particularly important in undergraduate teaching. In contrast, a postgraduate trainee/resident was able to accept a less warm but skilful clinician to learn advanced surgical skills from, as, being already in a training programme, they were more able to engage in self-directed learning and could observe and learn.

      Total | MS | R | Characteristics | Teacher | Physician | Human | Cognitive | Non-Cognitive
      10 | 5 | 5 | Approachability | X (T3) | | X (H4) | | x
      9 | 3 | 6 | Passion/enthusiasm in teaching/engaging | X (T2) | | | | x
      8 | 5 | 3 | Provide effective explanations, answers to questions, and demonstrations (T4); Demonstrate clinical and technical skills/competence, clinical reasoning (P2) | X (T4) | X (P2) | | x |
      7 | 3 | 4 | Creates conducive learning environment: patient (H11); humble (H15); learning without fear/non-threatening; open to suggestions/questions | X (T1) | | X (H11, H15) | | x
      7 | 3 | 4 | Role modeling: learn art of Medicine; patient interaction, shows respect (H6); shows by example; communication (H1) | x | x | X (H1, H6) | | x
      7 | 2 | 5 | Teach at appropriate level/know learning objectives | X (T6) | | | x |
      7 | 3 | 4 | Sacrifice time | x | | | x |
      6 | 3 | 3 | Realistic/concrete learning | X (T6) | | | x |
      6 | 2 | 4 | Feedback, supervision, assessment for learning | X (T5, T19) | | | x |
      5 | 2 | 3 | Knowledgeable/up to date/evidence-based | | X (P1) | | x |
      5 | 4 | 1 | Exam-oriented | x | | | x |
      4 | 2 | 2 | Inspirational to learning | X (T8, T9, T18) | | | | x
      4 | 1 | 3 | Clinical thinking/Demonstrate to impart/pedagogy | X (T9) | X (P2) | | x |
      3 | 2 | 1 | Nurturing/encouraging/compassion for students & team | X (T11) | | X | | x
      2 | 0 | 2 | Allows hands-on/encourages trainees’ active involvement in clinical work | X (T10) | | | | x
      Others: Strict, elocution, fair/moral compass (H13, H7), innovative (T12), directs learners, worldly-wise; empathy (H5), interpersonal skills, humour (H16)

      Note: (T), (P) and (H) refer to the specific Sutkin et al.’s (2008) classification as given in Table 1.

      Table 2. Characteristics of effective teachers identified by Medical Students (MS) and Residents (R) classified into teacher, physician and human characteristics and cognitive vs non-cognitive domains and mapped onto Sutkin et al.’s (2008) Classification (Table 1)

      The second most important characteristic identified was having a passion/enthusiasm in teaching, in the non-cognitive domain. This was described as “engaging, enthusiastic to help residents learn, enthusiasm/infectious attitude rubs off, lively, draws out from learners, takes time to explain to students”. Resident 5 explained: “Passion is actually demonstrated in the knowledge you display. Because when you are interested in something, you can go on to explore the depth.  People who display passion are able to depict the subject matter in a very interesting, personal and in a lively way. Passion is also about the desire to learn about things and to contribute to things. So in a sense teaching is not a passive tool for the diffusion of students … it’s also the ability to be able to draw things out from the students …draw contribution or ideas…”. Passion as a characteristic was mentioned by all the residents but not by all of the medical students.

      The third most important characteristic identified can be summarised as “providing effective explanations, answers to questions, and demonstrations” (a teacher characteristic) and “demonstrates clinical and technical skills/competence, clinical reasoning” (a physician characteristic) in the cognitive domain. This was described as “being able to break down concepts into digestible chunks; being able to synthesise and teach in understandable way; how to think, synthesise and use information; concise, targeted, clear thinking; headings, subheadings, elaborations; clarity in giving instructions and thought so that everyone is on the same page; demonstrate better way of presenting and more accurate way of physical examination”. This was identified more in the medical student group than in the resident group.

      Most of the other characteristics generally coincided with Sutkin et al.’s (2008) paper. Among the cognitive domain skills were “teaching at appropriate level/knowing learning objectives” as well as a willingness to “sacrifice time”, demonstrating commitment to student education. The teachers who sacrificed their time gave additional teaching sessions and did not rush through. Both the medical students and the residents identified this as something they really valued in undergraduate teaching. Another characteristic in the cognitive domain, “realistic/concrete learning”, was described as “bedside teaching; teaching with practical aspect, case-based teaching; use of clinical pictures, electrocardiogram, clinical quiz and learning aids”. This form of learning was identified as effective by the medical students and residents equally. In contrast, “feedback, supervision, assessment for learning”, described as “being able to discuss in detail as physically present; balance between supervision and resisting urge to take over in an operation; good feedback with balance of positive and negative points done in a fun and nice way”, was identified more by the residents than by the medical students.

      Being “exam-oriented”, i.e., the teacher being able to prepare the students well for exams, was notably a characteristic identified mainly by the medical students, and one not identified at all in Sutkin et al.’s (2008) paper or in other more recent references. To quote medical student 1, “I guess especially for medical students, it is whether this tutor prepares us well for the exams and in terms of meeting our academic objectives.” Medical student 5: “He teaches us very exam focused and he synthesises all the information very succinctly for exams.”

      The medical students were specifically asked whether the characteristics they valued in their teachers differed between when they were first introduced to clinical medicine in M3 and now in M5. The students almost unanimously expressed that in M3, having just been exposed to clinical medicine, they needed to build up their medical knowledge through a more content-heavy, didactic style of teaching that could be described as spoon-feeding rather than self-directed learning. Medical student 5 said, “Year 3 is more introductory kind of year so we don’t know anything. So what a good tutor to me in year 3 was whoever can teach me approaches, impart didactic teachings like knowledge.” They valued connections back to the basic sciences taught in their first two years of medical school and teachers who taught them how to approach patients. They were open to the gradual introduction of self-directed learning, but felt it should not hold up the pace of the lesson if the students were unable to answer. By the time of the interview, in M5, they had two main aims. Their first aim was to look for good role models for their upcoming internship and, for some, choice of residency. Hence, they appreciated bedside teaching with close supervision and feedback on medical knowledge applied to actual clinical care; moreover, bedside examination skills and patient communication cannot be studied at home. At M5, they valued more self-directed learning as they were better equipped to search for information themselves than in M3. Their second aim, which had become more important as their final exams drew near, was preparation for those exams, which would involve clinical examination in the form of the Objective Structured Clinical Examination. In this aspect, they valued teachers who could teach them clinical reasoning on how to synthesise information to be applied to the management of actual patients. This was also expressed by the residents when they recalled what they looked for in their undergraduate years.

      The residents, who were in their third year of residency and beyond, identified the need for more active, self-directed learning. They mentioned the need to ask the ‘why questions’ and to learn evidence-based clinical practice. They appreciated experienced tutors who shared pearls and personal experience with them. They preferred to learn from good teaching during ward rounds and clinics, and mentioned that didactic teaching was less important than in their undergraduate days and their first year of residency, when they still appreciated more spoon-feeding. As more senior residents, they found discussions, greater analysis, asking questions to identify knowledge gaps, opportunities to present, and testing useful because they already had a fund of medical knowledge.

       IV. DISCUSSION

      The results of this study suggest that there are differences in the perceived characteristics of an effective clinical teacher among medical students compared to residents. The results support Jonassen’s (1991) theory of constructivism: medical students at the beginning of their clinical years (M3) wanted more didactic teaching to ‘spoon-feed’ them with medical knowledge. As these students move on to become more senior in M5, and then in residency, they start appreciating teachers who help them become more self-directed learners. These more senior learners also value feedback to help them deal with the more complex, ill-defined problems that they encounter in their daily clinical work. This is supported by more residents than medical students identifying feedback and supervision, as well as clinical decision making/thinking, as important characteristics of an effective clinical teacher (Table 2).

      It is also interesting to note that the top two characteristics, approachability and passion/enthusiasm in teaching, are both in the non-cognitive domains. They are probably the foundational attributes that turn a good teacher into a great one: they lead to more teaching experience which, coupled with feedback from learners, makes teachers good at simplifying and explaining concepts well, especially in undergraduate teaching. Beyond a baseline of clinical competence, students value clinical teachers who want to teach over excellent clinicians who lack the soft skills and approachability that give students the courage to learn effectively from them. In contrast, the residents are willing to accept less ‘warm’ teachers if they are able to learn advanced clinical skills from them, particularly in the procedural specialties.

      One of the characteristics that has not been identified in any of the references, including Sutkin et al.’s (2008) review paper, is that of being exam-oriented. This was a characteristic identified by four of the medical students but by only one of the residents, who mentioned it while recalling his undergraduate days. This is not too surprising, because Frambach et al. (2012) found that Asian students tend to strive for success and to rank among the top achievers in an examination. The fact that the YLLSoM is Asia’s leading medical school (QS Top Universities, 2015; Times Higher Education, 2015), and hence the crème de la crème of Singapore’s students study there, as seen by medical students at both the 10th and 90th percentiles of Singapore-Cambridge GCE A-level admission scores obtaining all A grades (National University of Singapore, 2019), may explain the exam-orientedness of the students. Moreover, Singapore practises meritocracy (Prime Minister’s Office, 2015), and in a small country of only 719.1 km² with a population of 5.35 million (World Bank, 2015) and only three public healthcare clusters, doing well in exams is seen as a tried and tested way of securing a good future. Failing a high-stakes exam such as the final Bachelor of Medicine and Bachelor of Surgery (MBBS) exams will delay one’s progression to the next stage of one’s career, such as admission to a residency training programme. In a small country like Singapore, perceived to offer few opportunities to start afresh, it is not surprising that so much emphasis is placed on doing well in exams and that a teacher who is able to prepare students well for exams is greatly valued.

      There are several limitations to this study. Although we had wanted to recruit interviewees of different ethnicities, all 12 who responded to our invitation were Chinese, even though they were studying in a multi-cultural, multi-ethnic public medical school. Another limitation is that this study explores only the perceptions of the learners themselves; a more balanced picture would also require the viewpoints of the teachers.

      V. CONCLUSION

      This study suggests that medical students and residents perceive an effective clinical teacher differently. Medical students valued a more didactic, spoon-feeding type of teacher in their earlier clinical years, whereas final-year medical students and residents valued feedback and role-modelling in clinical practice. The top two characteristics, approachability and passion for teaching, lie in the non-cognitive domains. The results of this study will help inform educators of the differences in learners’ needs at different stages of their clinical development, so that they can adapt their teaching styles accordingly. In addition, certain non-cognitive skills may be developed by recognising clinical teachers who are role models, showing by example the art of the practice of Medicine and creating a conducive, non-threatening learning environment. Faculty development programmes that target the creation of such an environment already exist.

      Notes on Contributors

      Shirley Ooi, MBBS(S’pore), FRCSEd(A&E), MHPE(Maastricht), is a senior consultant emergency physician at NUH and associate professor at NUS. She was the Designated Institutional Official of the NUHS Residency programme at the time of the study, and is currently the Associate Dean at NUH. This study was her MHPE thesis. She reviewed the literature, designed the study, conducted the interviews, analysed the transcripts and wrote the manuscript.

      Clement Tan, MBBS(S’pore), FRCSEd (Ophth), MHPE(Maastricht), is associate professor, senior consultant and head of the Department of Ophthalmology, NUS and NUH. He was the first author’s local MHPE thesis supervisor. He co-analysed the transcripts and approved the final versions of the manuscripts.  

      Janneke M. Frambach PhD is assistant professor at the School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, the Netherlands. She was the first author’s MHPE thesis supervisor. She supervised the study from the beginning to the final stage of manuscript writing with its revisions.

      Ethical Approval

      This study was reviewed and approved by the NUS Institutional Review Board (approval no. 3172), which considered the letter of invitation for recruitment of participants, participant information sheet, written informed consent for the audio-recordings of the one-on-one interviews, interview guide and confidentiality of participants.

      Acknowledgements

      The authors would like to thank the following for their help, advice and support, without which this study would not have been possible:

      • Medical student, Gerald Tan, for his help in transcribing many of the interviews.
      • The six YLLSoM medical students who had willingly come forward to be interviewed for this study.
      • The six NUHS residents who had willingly spared their time to be interviewed for this study.

      Funding

      No grant or external funding was received for this study.

      Declaration of Interest

      The PI, as the interviewer, emphasised to the participants that whatever they mentioned in this study would not affect them in any way in their assessments, selection into a residency programme, job selection or career progression. Moreover, their participation was entirely voluntary. The other two authors had no conflict of interest.

      References

      Bednar, A. K., Cunningham, D., Duffy, T. M., & Perry, J. D. (1991). Theory into practice: How do we link? In G.J. Anglin (Ed.), Instructional Technology: Past, Present, and Future. Englewood, CO: Libraries Unlimited.

      Branch, W. T., Osterberg, L., & Weil, R. (2015). The highly influential teacher: Recognising our unsung heroes. Medical Education, 49, 1121-1127.

      Ciraj, A., Abraham, R., Pallath, V., Ramnarayan, K., Kamath, A., & Kamath, R. (2013). Exploring attributes of effective teachers-student perspectives from an Indian medical school. South-East Asian Journal of Medical Education, 7(1), 8-13.

      Elzubeir, M. A., & Rizk, D. E. E. (2001). Identifying characteristics that students, interns and residents look for in their role models. Medical Education, 35, 272-277.

      Frambach, J. M., Driessen, E. W., Chan, L. C., & van der Vleuten, C. P. M. (2012). Rethinking the globalization of problem-based learning: How culture challenges self-directed learning. Medical Education, 46, 738-747.

      Haider, S. I., Snead, D. R., & Bari, M. F. (2016). Medical students’ perceptions of clinical teachers as role model. PLoS ONE, 11(3): e0150478. https://doi.org/10.1371/journal.pone.0150478

      Hatem, C. J., Searle, N. S., Gunderman, R., Krane, N. K., Perkowski, L., Schutze, G. E., & Steinert, Y. (2011). The educational attributes and responsibilities of effective medical educators. Academic Medicine, 86(4), 474-480.

      Hillard, R. I. (1990). The good and effective teacher as perceived by paediatric residents and faculty. American Journal of Diseases of Children, 144, 1106-1110.

      Jacobs, J.  L., & Samarasekera, D. D. (2012). How we put into practice the principles of embedding medical students into healthcare teams. Medical Teacher, 34, 1008-1011.

      Jonassen, D. H. (1991). Evaluating constructivistic learning. Educational Technology, 31(9), 28-33.

      Kernan, W. N., Lee, M. Y., Stone, S. L., Freudigman, K. A., & O’Connor, P. G. (2000). Effective teaching for preceptors of ambulatory care: A survey of medical students. American Journal of Medicine, 108(6), 499-502.

      Kikukawa, M., Nabeta, H., Ono, M., Emura, S., Oda, Y., Koizumi, S., & Sakemi, T. (2013). The characteristics of a good clinical teacher as perceived by resident physicians in Japan: A qualitative study. BMC Medical Education, 13(1), 100.

      Kua, E. H., Voon, F., Tan, C. H., & Goh, L. G. (2006). What makes an effective medical teacher? Perceptions of medical students. Medical Teacher, 28(8), 738-741. 

      Low, M. J. W., Khoo, K. S. M., Kuan, W. S., & Ooi, S. B. S. (2020). Cross-sectional study of perceptions of qualities of a good medical teacher among medical students from first to final year. Singapore Medical Journal, 61(1), 28-33. 

      Mohan, N., & Chia, Y. Y. (2017). Our first steps into surgery: The role of inspiring teachers. The Asia-Pacific Scholar, 2(1), 29-30. https://doi.org/10.29060/TAPS.2017-2-1/PV1027

      National University of Singapore. (2019). National University of Singapore Undergraduate Programmes Indicative Grade Profile. Retrieved from http://www.nus.edu.sg/oam/gradeprofile/sprogramme-igp.html

      Nishiya, K., Sekiguchi, S., Yoshimura, H., Takamura, A., Wada, H., Konishi, E., Saiki, T., Tsunekawa, K., Fujisaki, K., & Suzuki, Y. (2019). Good clinical teachers in Paediatrics: The perspective of paediatricians in Japan. Paediatrics International, 62(5), 549-555.

      Prime Minister’s Office. (2010, May 5). “Old and new citizens get equal chance,” says MM Lee. [Press release].

      Paukert, J. L., & Richards, B. F. (2000). How medical students and residents describe the roles and characteristics of their influential clinical teachers. Academic Medicine, 75, 843-845.

      QS Top Universities. (2015). QS World University Rankings 2015-2016. Retrieved from: https://www.topuniversities.com/university-rankings/world-university-rankings/2015

      Savin-Baden, M., & Howell Major, C. (2013). Pragmatic qualitative research. In M. Savin-Baden & C. H. Major (Eds). Qualitative Research. The Essential Guide to Theory and Practice. London: Routledge. 

      Schiffer, R. B., Rao, S. M., & Fogel, B. S. (2003). Neuropsychiatry: A Comprehensive Textbook (2nd ed.). Philadelphia, PA: Lippincott Williams and Wilkins. 

      Searle, N. S., Hatem, C. J., Perkowski, L., & Wilkerson, L. (2006). Why invest in an educational fellowship program? Academic Medicine, 81, 936-940.

      Singh, S., Pai, D. R., Sinha, N. K., Kaur, A., Soe, H. H. K., & Barua, A. (2013). Qualities of an effective teacher: What do medical teachers think? BMC Medical Education, 13(128), 1-7.

      Sutkin, G., Wagner, E., Harris, I., & Schiffer, R. (2008). What makes a good clinical teacher in Medicine? A review of the literature. Academic Medicine, 83(5), 452-466.

      Times Higher Education. (2015). The World University Rankings 2015-16. Retrieved from: https://www.timeshighereducation.com/student/news/best-universities-world-revealed-world-university-rankings-2015-2016

      Venkataramani, P., Krishnaswamy, N., Sugathan, S., Sadanandan, T., Sidhu, M., & Gnanasekaran, A. (2016). Attributes expected of a medical teacher by Malaysian medical students from a private medical school. South-East Asian Journal of Medical Education, 10(2), 39-45.

      White, J. A., & Anderson, P. (1995). Learning by internal medicine residents: Differences and similarities of perceptions by residents and faculty. Journal of General Internal Medicine, 10, 126-132.

      World Bank. (2015). Annual Report 2015. Retrieved from: https://www.worldbank.org/en/about/annual-report-2015

      *Shirley Ooi
      Emergency Medicine Department,
      National University Hospital
      9 Lower Kent Ridge Road, Level 4,
      National University Centre for Oral Health Building,
      Singapore 119085
      Tel: (65)6772-2458
      Fax: (65)6775-8551
      Email: shirley_ooi@nuhs.edu.sg

      Submitted: 15 April 2020
      Accepted: 5 June 2020
      Published online: 5 January, TAPS 2021, 6(1), 49-59
      https://doi.org/10.29060/TAPS.2021-6-1/OA2248

      Amaya Tharindi Ellawala1, Madawa Chandratilake2 & Nilanthi de Silva2

      1Department of Medical Education, Faculty of Medical Sciences, University of Sri Jayewardenepura, Sri Lanka; 2Faculty of Medicine, University of Kelaniya, Sri Lanka

      Abstract

      Introduction: Professionalism is a context-specific entity, and should be defined in relation to a country’s socio-cultural backdrop. This study aimed to develop a framework of medical professionalism relevant to the Sri Lankan context.

      Methods: An online Delphi study was conducted with local stakeholders of healthcare, to achieve consensus on the essential attributes of professionalism for a doctor in Sri Lanka. These were built into a framework of professionalism using qualitative and quantitative methods.

      Results: Forty-six attributes of professionalism were identified as essential, based on Content Validity Index supplemented by Kappa ratings. ‘Possessing adequate knowledge and skills’, ‘displaying a sense of responsibility’ and ‘being compassionate and caring’ emerged as the highest rated items. The proposed framework has three domains: professionalism as an individual, professionalism in interactions with patients and co-workers and professionalism in fulfilling expectations of the profession and society, and displays certain characteristics unique to the local context.

      Conclusion: This study enabled the development of a culturally relevant, conceptual framework of professionalism as grounded in the views of multiple stakeholders of healthcare in Sri Lanka, and prioritisation of the most essential attributes.

      Keywords: Professionalism, Culture, Consensus

      Practice Highlights

      • Medical professionalism is recognised as a culturally dependent entity.
      • This has led to the emergence of definitions unique to socio-cultural settings.
      • List-based definitions provide operationalisable means of portraying its meaning.
      • A Delphi study was conducted to achieve consensus on locally relevant professionalism attributes.
      • Using quantitative and qualitative methods, a conceptual framework of professionalism was developed.

      I. INTRODUCTION

        There is no single definition of medical professionalism that encompasses its many subtle nuances (Birden et al., 2014). The realisation that professionalism is a dynamic, multi-dimensional entity (Van de Camp, Vernooij-Dassen, Grol, & Bottema, 2004), significantly dependent on context (Van Mook et al., 2009) and cultural backdrop (Chandratilake, Mcaleer, & Gibson, 2012), has led to the emergence of definitions specific to cultures and socio-economic backgrounds.

        Many of the current definitions originate from Western societies. Certain Eastern cultures have embraced such definitions, though they are undeniably in conflict with local traditional views (Pan, Norris, Liang, Li, & Ho, 2013). In parallel however, countries such as Egypt, Saudi Arabia, Japan, China and Taiwan have explored how professionalism is conceptualised within their contexts (Al-Eraky, Chandratilake, Wajid, Donkers, & Van Merrienboer, 2014; Leung, Hsu, & Hui, 2012; Pan et al., 2013). Such studies have portrayed the interplay between cultural, socio-economic and religious factors in shaping perceptions on professionalism, further fuelling the notion that professionalism must be “interpreted in view of local traditions and ethos” (Al-Eraky et al., 2014, p. 14).

        Culture is the embodiment of elements such as attitudes, beliefs and values that are shared among individuals of a community and is therefore, an entity that distinguishes one group of people from another (Hofstede, 2011). Various cultural theories provide insight into inter-cultural differences across the globe (Hofstede, n.d.; Schwartz, 1999). The Sri Lankan cultural context, while aligned with those of its closest geographical neighbours in South Asia in some ways, differs from them in other important aspects. 

        Certain attempts have been made to explore the meaning of professionalism in Sri Lanka. Chandratilake et al. (2012) provided a degree of insight while comparing cultural similarities and dissonances in conceptualising professionalism among doctors of several nations. Monrouxe, Chandratilake, Gosselin, Rees, and Ho (2017) built on this work with their analysis of professionalism as viewed by local medical students. The sole regulatory authority of the medical profession in the country, the Sri Lanka Medical Council (SLMC, 2009) has delineated what it expects in terms of professionalism, by outlining the constituents of ‘good medical practice’, many of which converge with elements of professionalism described in the literature.

        While the work mentioned here has shed some light on the topic, to our knowledge, there were no studies that focused solely on the local conceptualisation of professionalism, drawing on the views of diverse stakeholders of healthcare.

        There exist two schools of thought on how professionalism can be defined: as a list of desirable attributes (Lesser et al., 2010), or as an over-arching, value-laden entity that transcends such lists (Irby & Hamstra, 2016; Wynia, Papadakis, Sullivan, & Hafferty, 2014). Unlike the latter, a list may not address the “foundational purpose of professionalism” (Wynia et al., 2014, p. 712); however, it will provide a tangible, operationalisable portrayal (Lesser et al., 2010). It is possibly for this reason that many studies have opted for list-based definitions, an approach that is supported in the East (Al-Eraky & Chandratilake, 2012; Al-Eraky et al., 2014; Pan et al., 2013).

        The aim of this study was to develop a culturally appropriate conceptual framework of medical professionalism in Sri Lanka using a combination of qualitative and quantitative methods. We envisioned that identifying a list of desirable attributes would be appropriate, providing a definition that could readily be operationalised for teaching/learning, assessment and research purposes (Wilkinson, Wade, & Knock, 2009).

        II. METHODS

        A. The Approach

        We followed a consensus approach, and opted for the Delphi technique as it was imperative to involve a large number of participants not limited by geographical location (Humphrey-Murto et al., 2017). The method offered the further advantage of providing participants with equal opportunity to express their opinions (De Villiers, De Villiers, & Kent, 2005), thereby negating the possible drawbacks of face-to-face interactions and resulting in a ‘process gain’ (Powell, 2003).

        B. Participant Panel

        The panel comprised nation-wide stakeholders of healthcare (Table 1), from both rural and urban regions who were presumably exposed to diverse forms of medical services and geographical variations in their distribution.

        Stakeholder group | Description | Number
        Medical teachers | Four Medical Faculties (nation-wide) | 69 (44%)
        Medical students | Four Medical Faculties (nation-wide), fourth and final years | 36 (23%)
        Hospital doctors | Four Teaching Hospitals (nation-wide) | 14 (9%)
        Healthcare staff | Selected secondary and tertiary hospitals | 5 (3%)
        General practitioners | Selected GP practices around the country | 2 (1%)
        Medical administrators | Selected secondary and tertiary hospitals | 5 (3%)
        Policy makers in healthcare | Ministry of Health, professional associations and regulatory bodies | 2 (1%)
        General public | Employees of selected private and state banks; non-academic staff of four Medical Faculties; teachers of selected private and state schools | 25 (16%)

        Table 1. Composition of the Delphi panel

        1) Delphi Round I: The question posed in the first round was ‘What are the attributes of professionalism you would expect in a doctor working in the Sri Lankan context?’. No limit was placed on the number of answers to this open-ended question. The questionnaire was piloted among a group of local medical educationists, medical officers and members of the public, and edited based on their feedback. Invitations to participate were emailed and informed consent was obtained through an online link; participants were then automatically granted access to the online questionnaire. An email reminder was sent to the initial mailing list after one week, and the questionnaire remained accessible for three weeks from the date of launch. Invitations were emailed to 920 individuals, of whom 158 (17.2%) responded.

        To analyse the data of Round I, we used conventional content analysis, which is employed when literature and theory on a phenomenon are limited, thereby allowing themes to emerge from, and be grounded in, the data itself (Hsieh & Shannon, 2005). Initially, individual responses – treated as meaning units – were listed verbatim, with exact duplicates removed. Meaning units varied from single-word responses to longer phrases, and were therefore divided into short and long meaning units. The latter were shortened into condensed meaning units while preserving the original meaning. Finally, condensed and short meaning units were coded; similar phrases were assigned the same code. A final scrutiny of the codes allowed the removal of synonymous items and the coupling of items with similar meaning. We repeated this process iteratively until the items had been refined as far as possible.

        With two additional experts, we reviewed the appropriateness of the items. Four common misconceptions of professionalism (distractors) were added in order to guard against inattentive responses to the large number of items included in the subsequent round (Meade & Craig, 2012). A search of the literature also revealed a number of evidence-based items that had not emerged from the data. Three of these were agreed to be relevant and important to the local context, and were therefore added to the list to ensure comprehensive coverage of items.

        2) Delphi Round II: The attributes of professionalism were compiled into another online survey and emailed to all individuals initially invited to participate in the study, three weeks after completion of the first round; 118 of the initial sample (dropout rate = 25.3%) participated in Round II. Respondents were asked to rate each item on a five-point Likert scale ranging from ‘not important’ to ‘very important’, according to perceived importance in the local context. An email reminder was sent out after one week. The form was accessible for three weeks.

        The aim of this second round was to select the attributes considered most essential. The Content Validity Index (CVI) was chosen for this purpose over less rigorous methods such as prioritisation by mean. The CVI is the proportion of respondents rating an item as essential (Polit & Beck, 2006); responses ‘4’ and ‘5’ on the Likert scale were taken to reflect ‘essentialness’. It is generally accepted that in a study with a large number of raters (as in this case), a CVI > 0.78 indicates that an item is essential (Lynn, 1986).
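        As a concrete sketch of this computation (the ratings below are hypothetical, not the study’s data), the CVI of an item is simply the share of its ratings that fall at 4 or 5 on the five-point scale:

```python
def content_validity_index(ratings, essential_min=4):
    """Proportion of raters scoring an item at or above `essential_min`
    on a 5-point Likert scale (Polit & Beck, 2006)."""
    if not ratings:
        raise ValueError("ratings must be non-empty")
    essential = sum(1 for r in ratings if r >= essential_min)
    return essential / len(ratings)

# Hypothetical panel of 10 raters for one attribute
ratings = [5, 4, 5, 3, 4, 5, 4, 2, 5, 4]
cvi = content_validity_index(ratings)
print(cvi)  # 8 of 10 raters chose 4 or 5 -> 0.8
```

        With a large rater panel, an item such as this one (CVI = 0.80) would clear the 0.78 threshold and be retained as essential.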

        To rule out agreement arising by chance, Kappa statistics – a measure of inter-rater agreement corrected for chance responses – were computed. K-values range from -1 to +1: -1 indicates perfect disagreement below chance; +1, perfect agreement above chance; and 0, agreement equal to chance (Randolph, n.d.). A K-value ≥ 0.7 indicates acceptable inter-rater agreement.
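        The kappa cited here (Randolph, n.d.) is the free-marginal multirater form, which suits raters who are not forced to assign a fixed number of items per category. A minimal sketch under that assumption, with invented counts rather than the study’s data:

```python
def free_marginal_kappa(counts, k):
    """Randolph's free-marginal multirater kappa.
    `counts` lists, per item, how many raters chose each of the k
    categories; e.g. [9, 1] means 9 raters chose category 1 and 1
    chose category 2. Chance agreement is 1/k for k categories."""
    n = sum(counts[0])  # raters per item (assumed constant)
    # Mean observed pairwise agreement across items (Fleiss-style P_i)
    p_o = sum(
        (sum(c * c for c in item) - n) / (n * (n - 1)) for item in counts
    ) / len(counts)
    p_e = 1.0 / k
    return (p_o - p_e) / (1 - p_e)

# Two categories (essential / not essential), 10 raters, 3 hypothetical items
items = [[9, 1], [10, 0], [8, 2]]
kappa = free_marginal_kappa(items, k=2)
```

        For these invented counts the statistic comes out around 0.63, just below the 0.7 threshold; the study’s observed value of 0.77 indicates acceptable agreement beyond chance.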

        As the final step, the prioritised list of attributes was emailed to participants requesting further comments; however, none were received. The Delphi study concluded at this stage.

        In order to organise the attributes in a more meaningful manner, we attempted to identify the emerging domains of professionalism. Initially this was performed through an Exploratory Factor Analysis, a method which allows identification of underpinning, latent ‘factors’ that are inferred from the variables. Scholars have recommended, however, that quantitative analysis in studies with a social science perspective be complemented with qualitative methods (Tavakol & Sandars, 2014). Therefore, a panel of experts individually sorted the attributes into themes using the constant comparison technique: data were sorted and systematically compared, and the emergence of a theme was acknowledged when many similar items appeared across the data set (Maykut & Morehouse, 1994). The results were compared with those of the Factor Analysis and, by identifying common domains, a final framework of professionalism was formulated. As an additional measure, the internal consistency (Cronbach’s Alpha) within each domain was computed to confirm close clustering of items. The framework developed was vetted by a group of reviewers.
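        The internal-consistency check can be sketched as follows; the ratings are hypothetical and `cronbach_alpha` is an illustrative helper, not the study’s code. Alpha rises towards 1 as the items within a domain vary together across respondents:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for `scores`: one row per respondent,
    one column per item in a domain.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(scores[0])  # number of items in the domain

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical Likert ratings: 4 respondents x 3 items of one domain
scores = [[5, 4, 5], [4, 4, 4], [3, 2, 3], [5, 5, 4]]
alpha = cronbach_alpha(scores)
```

        Here alpha is roughly 0.91, comparable to the 0.882 and 0.918 reported for two of the study’s three domains.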

        III. RESULTS

        A. Profile of Participant Panel

        The response rates of the different participant groups are depicted in Table 1. As demographic details were not re-obtained in Round II, the profile of this group could not be determined.

        B. Results of Round I

        A total of 288 items were initially documented, and condensed to 53 attributes following content analysis. The three evidence-based items and four distractors were added to make a final inventory of 60 items (Table 2).

        C. Results of Round II

        1) Essential Attributes of Professionalism: Forty-six items achieved a CVI > 0.78 and were therefore labelled as ‘essential’. The attributes are arranged in descending order of importance in Table 2. The Kappa value was 0.77, confirming that rating of items was not due to chance.

        Attribute of professionalism | CVI
        Possessing adequate medical knowledge and skills | 0.99
        Displaying a sense of responsibility | 0.98
        Being compassionate and caring | 0.97
        Managing limited resources for optimal outcome | 0.97
        Ensuring confidentiality and patient privacy | 0.97
        Being punctual | 0.97
        Maintaining standards in professional practice | 0.97
        Displaying effective communication skills | 0.97
        Displaying honesty and integrity | 0.97
        Displaying commitment to work | 0.97
        Being empathetic towards patients | 0.96
        Being able to work as a member of a team | 0.96
        Being reliable | 0.96
        Displaying professional behaviour and conduct | 0.96
        Being accountable for one’s actions and decisions | 0.96
        Being available | 0.95
        Being responsive | 0.95
        Being clear in documentation | 0.95
        Being patient | 0.94
        Displaying effective problem-solving skills | 0.94
        Understanding limitations in professional competence | 0.94
        Being respectful and polite | 0.94
        Ability to effectively manage time | 0.93
        Being a committed teacher/supervisor | 0.92
        Being open to change | 0.92
        Commitment to continuing professional development | 0.91
        Having scientific thinking and approach | 0.91
        Being accurate and meticulous | 0.91
        Maintaining work-life balance | 0.91
        Displaying self confidence | 0.91
        Ability to provide and receive constructive criticism | 0.90
        Non-judgmental attitude and ensuring equality | 0.90
        Engaging in reflective practice | 0.90
        Respecting patient autonomy | 0.90
        Being accessible | 0.88
        Avoiding substance and alcohol misuse* | 0.86
        Working towards a common goal with the health system | 0.85
        Providing leadership | 0.84
        Being humble | 0.84
        Advocating for patients | 0.83
        Maintaining professional relationships | 0.83
        Adhering to a professional dress code | 0.82
        Avoiding conflicts of interest | 0.82
        Displaying sensitivity to socio-cultural and religious issues related to patient care | 0.81
        Being composed | 0.80
        Stands for professional autonomy** | 0.79
        Being amiable | 0.77
        Displaying sensitivity to socio-cultural and religious issues in dealing with colleagues and students* | 0.76
        Being assertive | 0.75
        Being creative in work related matters | 0.74
        Not money minded | 0.73
        Willingness to work in rural areas | 0.72
        Respecting professional hierarchy** | 0.69
        Possessing knowledge in areas outside of medicine | 0.68
        Being altruistic | 0.65
        Adhering to socio-cultural norms* | 0.64
        Fluency in multiple languages | 0.62
        Abiding by religious beliefs | 0.32
        Displaying self-importance** | 0.19
        Using professional status for personal advantage** | 0.07

        Note: *Evidence-based items sourced from the literature **Distractors

        Table 2. Attributes of professionalism arranged in order of perceived importance

        The highest rated attributes were ‘possessing adequate medical knowledge and skills’, followed by ‘displaying a sense of responsibility’ and ‘being compassionate and caring’. Five items were mentioned collectively across the main stakeholder groups:

        • Being empathetic towards patients
        • Possessing adequate knowledge and skills
        • Displaying effective communication skills
        • Displaying honesty and integrity
        • Being respectful and polite

        2) Development of a Professionalism Framework: The main themes of professionalism identified by the expert panel and through exploratory factor analysis are summarised in Table 3.

        Panelist 1 | Panelist 2 | Panelist 3 | Factor Analysis
        Professionalism in interactions with patients (1) | Interpersonal (1,2) | Competency – competency in managing patients and clinical reasoning (3) | Qualities required to effectively work within the healthcare team (2)
        Professionalism in interactions in the workplace (2) | Intrapersonal (4) | Accountability – taking responsibility for work performed as a doctor in the clinical context and in interactions with co-workers (2) | Clinical competency, excellence and continuous development (3)
        Professionalism in fulfilling expectations of the profession and society (3) | Societal/public (3) | Attitude – thought process, internal qualities of the doctor (4) | Equal and fair treatment of patients (1)
        — | — | Behaviour – external actions of the doctor (1) | Humane qualities in dealing with patients (1)

        Table 3. Themes of professionalism identified quantitatively and qualitatively

        Based on the convergence of these domains, a framework was developed that portrays professionalism as encompassing three main elements: individual traits, inter-personal interactions, and responsibilities to the profession and community (Figure 1). Cronbach’s Alpha values for the three domains were 0.882, 0.918 and 0.755 respectively, supporting the close clustering of the constituent items within each overarching element.

        Professionalism in interactions with patients and co-workers:
        • Ensuring confidentiality and patient privacy
        • Displaying effective communication skills
        • Being empathetic towards patients
        • Being able to work as a member of a team
        • Being available
        • Being responsive
        • Being respectful and polite
        • Being a committed teacher/supervisor
        • Respecting patient autonomy
        • Being accessible
        • Providing leadership
        • Advocating for patients
        • Maintaining professional relationships
        • Displaying sensitivity to socio-cultural and religious issues related to patient care
        • Being compassionate and caring
        • Being patient
        • Ability to provide and receive constructive criticism

        Professionalism as an individual:
        • Displaying a sense of responsibility
        • Being punctual
        • Displaying honesty and integrity
        • Displaying commitment to work
        • Being reliable
        • Being accountable for one’s actions and decisions
        • Being clear in documentation
        • Displaying effective problem-solving skills
        • Understanding limitations in professional competence
        • Ability to effectively manage time
        • Being open to change
        • Commitment to continuing professional development
        • Having scientific thinking and approach
        • Being accurate and meticulous
        • Displaying self confidence
        • Non-judgemental attitude and ensuring equality
        • Engaging in reflective practice
        • Being humble
        • Being composed

        Professionalism in fulfilling expectations of the profession and society:
        • Managing limited resources for optimal outcome
        • Maintaining standards in professional practice
        • Displaying professional behaviour and conduct
        • Working towards a common goal with the health system
        • Adhering to a professional dress code
        • Avoiding conflicts of interest
        • Stands for professional autonomy
        • Possessing adequate medical knowledge and skills
        • Maintaining work-life balance
        • Avoiding substance and alcohol misuse

        Figure 1. A framework of medical professionalism for Sri Lanka

        IV. DISCUSSION

        A. Framework of Professionalism Attributes

        The framework depicts a progressively widening circle, with desirable individual traits at its core, expanding into interactions within the workplace and, finally, responsibilities as a professional in wider society. It thus captures the fundamental areas that must be addressed in aspiring towards professionalism. The three domains are largely congruent with the broad areas of professionalism described by Van de Camp et al. (2004) and Hodges et al. (2011). Though portrayed as distinct entities, we emphasise that the domains should not be interpreted as evolving in sequential stages; professional development should ideally occur in these areas simultaneously.

        Frameworks developed in other Eastern cultures have highlighted significant tenets of local traditions and ethos that have shaped perceptions on professionalism. Confucian values in Taiwan (Ho, Yu, Hirsh, Huang, & Yang, 2011), principles of Bushido in Japan (Nishigori, Harrison, Busari, & Dornan, 2014), and Islamic teachings within Egypt (Al-Eraky et al., 2014), have been shown to be deeply entrenched within such understandings.

        Sri Lanka possesses a rich and diverse cultural heritage. British ideologies in particular appear to influence local medical education (Uragoda, 1987), and the conceptualisation of professionalism (Babapulle, 1992; Monrouxe et al., 2017), resulting in a strong emphasis on ethical behaviour. Sri Lanka is widely acknowledged to have a ‘religious’ background. Theravada Buddhism, the religion followed by the majority of Sri Lankans, as well as less widespread religions such as Christianity, Hinduism and Islam, exert a significant influence on local culture (Gildenhuys, 2004). Virtues collectively upheld by these doctrines, such as generosity, impartiality, honesty and peace are thought to be central to the development of professionalism (Keown, 2002). Of these, honesty, impartiality (equality) and peace (composure) were echoed within the theme ‘professionalism as an individual’, as were responsibility, reliability and accountability. These characteristics, built on a foundation of integrity, are fundamental tenets of Sri Lanka’s socio-cultural framework. Thus, we reasoned that ‘professionalism as an individual’ was ideally depicted as central to the local concept of professionalism, highlighting the importance of building a solid foundation of fundamental characteristics.

        We also drew on elements of the ‘cultural dimension’ (Hofstede, n.d.) and ‘cultural value’ (Schwartz, 1999) theories in developing the framework. Accordingly, the collectivist nature of local culture provides the basis for qualities that enable harmonious interactions with others, as depicted in the second domain. The hierarchical disposition of local society dictates that the doctor is duty-bound to ensure that responsibilities to the profession and community are met.

        B. Essential Attributes of Professionalism

        Among the essential items, broad areas encompassing competence, humanism, interpersonal skills and ethics were prioritised. Qualities most consistently mentioned in literature – accountability, integrity and respect – received high ratings (Van de Camp et al., 2004). Reflective practice, understanding limitations in practice, accepting constructive criticism and continuous professional development – ‘cornerstones’ of the medical profession – were also labelled as significant (Chandratilake et al., 2012; Wynia et al., 2014), in contrast to other Eastern settings (Adkoli, Al-Umran, Al-Sheikh, Deepak, & Al-Rubaish, 2011). The striking omission was altruism, which was intriguingly rated as non-essential. Altruism has been named as one of the most consistently valued attributes of professionalism worldwide (Van de Camp et al., 2004), and would presumably be espoused in the local collectivist culture. Our findings suggest that even qualities accepted as key tenets of professionalism may not be equally valued cross-culturally. However, it has been claimed that altruism is traditionally a Western concept (Nishigori et al., 2014), and the acceptance of altruism as a composite of professionalism has been challenged in recent years, on the premise that selflessness may in fact be causing considerable harm (Harris, 2018; Nishigori, Suzuki, Matsui, Busari, & Dornan, 2019).

        Participants rated ‘possessing adequate medical knowledge and skills’ as the most essential professionalism attribute. This coincides with findings from Canada (Brownell & Cote, 2001) and Asia (Leung et al., 2012; Pan et al., 2013), though conflicting with a school of thought that considers competence to be the foundation of professionalism, rather than an integral part of it (Stern, 2006). The primacy afforded to knowledge and skills most likely stems from the significance placed on education, which is upheld in Sri Lanka as the primary means of elevating one’s socio-economic status. The emphasis on responsibility and compassion – the second and third highest rated items – as well as morality and empathy, can be attributed to the deeply religious background of the country. It was unsurprising that respectfulness was prioritised, being a cardinal virtue embraced by Sri Lankans, as in other Eastern settings (Nishigori et al., 2014).

        A comparison of professionalism attributes hailed as important in various contexts, with the highest rated qualities locally, revealed a convergence of several items (Table 4). This provides assurance that the local conceptualisation of professionalism reflects the ‘core’ principles of medical professionalism and shows considerable alignment with definitions provided by professional bodies around the world (General Medical Council [GMC], 2013; Medical Professionalism Project, 2002).

        Sri Lanka | USA (American Board of Internal Medicine, 2001) | Western countries (Hilton & Slotnick, 2005) | Canada (Steinert, Cruess, Cruess, Boudreau, & Fuks, 2007) | Taiwan (Ho et al., 2011) | China (Pan et al., 2013)
        Knowledge and skills | – | – | Competence | Clinical competence | Clinical competence
        Responsibility; Accountability | Accountability | Social responsibility | Responsibility | Accountability | Accountability
        Compassion and caring | – | – | – | Humanism | Humanism
        Managing limited resources for optimal outcome | – | – | – | – | Economic consideration
        Confidentiality and patient privacy | – | – | – | – | –
        Punctuality | – | – | – | – | –
        Maintaining standards in professional practice | Excellence | – | – | Excellence | Excellence
        Effective communication skills | – | – | – | Communication | Communication
        Honesty and integrity | Integrity | – | Honesty | Integrity | Integrity
        Commitment to work | Duty | – | Commitment | – | –
        – | Altruism | – | Altruism | Altruism | Altruism
        Respect | Respect | – | – | – | –
        Self-awareness | – | Reflection | Self-regulation | – | Self-management
        Teamwork | – | Teamwork | Teamwork | – | –
        Ethical practice | Ethics | Ethics | Ethics | – | –
        Morality | – | Morality | – | Honour | –
        Autonomy | – | – | – | – | –
        – | – | – | – | – | Health promotion

        Table 4. Comparison of main attributes of professionalism identified locally with those of Western and Eastern contexts

        Interestingly, certain items recognised globally as relatively insignificant to professionalism (work-life balance, leadership, professional appearance and composure) (Chandratilake et al., 2012), were highlighted as essential locally. The local expectation that professionals maintain an appearance befitting of their social status, and the high power-distance between doctor and patient (Hofstede, n.d.), could have contributed to the emphasis on appearance. Similarly, power distance could explain the significance placed on leadership, a crucial skill required to handle subordinates and patients at the ‘lower end’ of the power spectrum. A promising finding was the importance placed on ‘work-life balance’, complementing the lack of emphasis on altruism and coinciding with recommendations of multiple professional bodies that underscore the value of personal well-being (GMC, 2013). The significance assigned to composure can be attributed to Sri Lanka’s conservative nature (Schwartz, 1999), where cultural norms dictate that public displays of intense emotion be suppressed.

        It was intriguing to note that of the four distractors—which were expected to be rated as non-essential— ‘stands for professional autonomy’ achieved a CVI just above the baseline. In Sri Lanka, political influence is known to permeate into the workplace; this attribute can therefore be viewed in terms of the ability to perform one’s duties in the midst of such pressures. The paternalistic nature of the doctor-patient relationship common to many Eastern cultures could also underpin the significance afforded to professional autonomy (Ho & Al-Eraky, 2016; Susilo, Marjadi, van Dalen, & Scherpbier, 2019). Incidentally, this item was not corroborated elsewhere in the literature and was therefore unique to this study. Other items that were exclusive to the Sri Lankan context were clarity in documentation, patience, time management and maintaining professional relationships.
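        The content validity index (CVI) referred to above is conventionally computed, following Lynn (1986) and Polit and Beck (2006), as the proportion of raters who judge an item relevant or essential. The sketch below illustrates an item-level CVI with fabricated ratings and an assumed baseline of 0.8; neither the ratings nor the cut-off are taken from this study.

```python
def item_cvi(ratings, essential=(3, 4)):
    """Item-level CVI: the proportion of raters scoring the item as
    relevant (3 or 4 on a 4-point relevance scale)."""
    ratings = list(ratings)
    return sum(r in essential for r in ratings) / len(ratings)

# Fabricated example: 9 of 10 raters judge the item relevant.
ratings = [4, 4, 3, 4, 2, 3, 4, 4, 3, 4]
print(item_cvi(ratings))  # 0.9, above an assumed 0.8 baseline
```

        An item whose CVI falls below the chosen baseline would be treated as non-essential, which is how the distractors in this study were expected to behave.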

        As a whole, it is evident that the local conceptualisation of professionalism—while including areas unique to the Sri Lankan context—greatly coincides with the perceptions representing professionalism shared by the global medical community.

        C. Strengths and Limitations

        The study has responded to calls for culture-specific discourse on professionalism (Monrouxe et al., 2017) and for prioritisation of the qualities most essential to professionalism (Jha, Bekker, Duffy, & Roberts, 2007). Many studies seeking to define professionalism have drawn on the views of particular stakeholder groups in isolation; few have attempted to collate the views of multiple groups (Ho et al., 2011; Leung et al., 2012; Pan et al., 2013). Scholars have challenged the medical profession to determine who should define professionalism, with the belief that this onus should not be placed solely on doctors (Wear & Kuczewski, 2004). The assimilation of views of multiple stakeholder groups was therefore a significant strength of this study.

        Although the initial list of 920 individuals who were invited to participate in the study was representative of all groups of stakeholders, the majority of those who responded were medical teachers and students. Thus, the study results predominantly reflect the views of these two groups. This may have precluded identification of attributes considered essential by the less represented groups, especially the public.

        Another limitation of the study was the exclusive use of English, which though widely used in Sri Lanka, is not the first language of the majority of the population. The decision was justified as all potential participant groups were posited to be adequately fluent in English to participate. However, we recognise that providing the option of Sinhalese and Tamil translations may have increased participation in certain groups (healthcare staff and the public).

        Finally, we acknowledge that while this framework reflects the current perception regarding medical professionalism, this notion is far from static, and will undeniably evolve with time. We therefore propose that future research involve repeated discussions to inform the evolution of the current framework, being mindful of achieving a fair balance of stakeholder representation to this end.

        V. CONCLUSION

        This study has enabled us, through a consensus-seeking approach, to paint a picture of medical professionalism as grounded in the views of the multiple stakeholders of healthcare in Sri Lanka. The conceptual framework that represents these opinions reflects how perceptions of professionalism are shaped by cultural, societal, religious, economic and other factors. Moreover, it has enabled identification of individual elements of professionalism that are expected of a doctor in the local context, and prioritisation of those most essential among them.

        Notes on Contributors

        Amaya Ellawala MBBS, PGDME, MD, is a Lecturer in Medical Education in the Department of Medical Education, Faculty of Medical Sciences, University of Sri Jayewardenepura, Sri Lanka. Amaya Ellawala reviewed the literature, developed the methodological framework for the study, performed data collection, analysis and wrote the manuscript.

        Madawa Chandratilake MBBS, MMed, PhD, is a Professor of Medical Education at the Department of Medical Education, Faculty of Medicine, University of Kelaniya, Sri Lanka. Madawa Chandratilake contributed to the development of the methodological framework, data analysis and writing of the manuscript.

        Nilanthi de Silva MBBS, MSc, MD, is a Senior Professor in the Department of Parasitology, Faculty of Medicine, University of Kelaniya, Sri Lanka. Nilanthi de Silva contributed to the development of the methodological framework, data analysis and writing of the manuscript.

        All authors read and approved the final manuscript.

        Ethical Approval

        Ethics approval was obtained from the Ethics Review Committee, Faculty of Medicine, University of Kelaniya (P/15/01/2016).

        Funding

        This study was not funded.

        Declaration of Interest

        The authors declare that they have no competing interests.

        References

        Adkoli, B. V., Al-Umran, K. U., Al-Sheikh, M., Deepak, K. K., & Al-Rubaish, A. M. (2011). Medical students’ perception of professionalism: A qualitative study from Saudi Arabia. Medical Teacher, 33(10), 840–845.

        Al-Eraky, M. M., & Chandratilake, M. (2012). How medical professionalism is conceptualised in Arabian context: A validation study. Medical Teacher, 34(S1), S90–S95.

        Al-Eraky, M. M., Chandratilake, M., Wajid, G., Donkers, J., & Van Merrienboer, J. G. (2014). A Delphi study of medical professionalism in Arabian Countries: The Four-Gates Model. Medical Teacher, 36, S8–S16.

        American Board of Internal Medicine. (2001). Project Professionalism. Philadelphia, USA: American Board of Internal Medicine.

        Babapulle, C. J. (1992). Teaching of medical ethics in Sri Lanka. Medical Education, 26(3), 185–189.

        Birden, H., Glass, N., Wilson, I., Harrison, M., Usherwood, T., & Nass, D. (2014). Defining professionalism in medical education: A systematic review. Medical Teacher, 36(1), 47–61.

        Brownell, A. K. W., & Cote, L. (2001). Senior residents’ views on the meaning of professionalism and how they learn about it. Academic Medicine, 76(7), 734–737.

        Chandratilake, M., Mcaleer, S., & Gibson, J. (2012). Cultural similarities and differences in medical professionalism: A multi-region study. Medical Education, 46(3), 257–266.

        De Villiers, M. R., De Villiers, P. J. T., & Kent, A. P. (2005). The Delphi technique in health sciences education research. Medical Teacher, 27(7), 639–643.

        General Medical Council. (2013). Good Medical Practice. London, UK: GMC.

        Gildenhuys, J. S. H. (2004). Ethics and Professionalism. Stellenbosch. South Africa: Sun Press.

        Harris, J. (2018). Altruism: Should it be included as an attribute of medical professionalism? Health Professions Education, 4, 3–8.

        Hilton, S. R., & Slotnick, H. B. (2005). Proto-professionalism: How professionalisation occurs across the continuum of medical education. Medical Education, 39, 58–65.

        Ho, M., & Al-Eraky, M. (2016). Professionalism in context: Insights from the United Arab Emirates and beyond. Journal of Graduate Medical Education, 8(2), 268–270.

        Ho, M., Yu, K., Hirsh, D., Huang, T., & Yang, P. (2011). Does one size fit all? Building a framework for medical professionalism. Academic Medicine, 86(11), 1407–1414.

        Hodges, B. D., Ginsburg, S., Cruess, R., Cruess, S., Delport, R., Hafferty, F., … Wade, W. (2011). Assessment of professionalism: Recommendations from the Ottawa 2010 conference. Medical Teacher, 33(5), 354–363.

        Hofstede, G. (n.d.). Cultural Dimensions. Retrieved from http://geerthofstede.com/culture-geert-hofstede-gert-jan-hofstede/6d-model-of-national-culture/

        Hofstede, G. (2011). Dimensionalizing cultures: The Hofstede model in context. Online Reading in Psychology and Culture, 2(1), 1–26.

        Hsieh, H., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288.

        Humphrey-Murto, S., Varpio, L., Wood, T. J., Gonsalves, C., Ufholz, L., Mascioli, K., … Foth, T. (2017). The use of the Delphi and other consensus group methods in medical education research. Academic Medicine, 92(10), 1491–1498.

        Irby, D. M., & Hamstra, S. J. (2016). Parting the clouds: Three professionalism frameworks in medical education. Academic Medicine, 91(12), 1606–1611.

        Jha, V., Bekker, H. L., Duffy, S. R. G., & Roberts, T. E. (2007). A systematic review of studies assessing and facilitating attitudes towards professionalism in medicine. Medical Education, 41, 822–829.

        Keown, D. (2002). Buddhism and medical ethics. Principles of Practice, 7, 39–70.

        Lesser, C. S., Lucey, C. R., Egener, B., Braddock, C. H., Linas, S. L., & Levinson, W. (2010). A behavioral and systems view of professionalism. Journal of the American Medical Association, 304(24), 2732–2737.

        Leung, D. C., Hsu, E. K., & Hui, E. C. (2012). Perceptions of professional attributes in medicine: A qualitative study in Hong Kong. Hong Kong Medical Journal, 18(4), 318–324.

        Lynn, M. R. (1986). Determination and quantification of content validity. Nursing Research, 35, 382–385.

        Maykut, P., & Morehouse, R. (1994). Beginning Qualitative Research: A Philosophic and Practical Guide. London, UK: Falmer Press.

        Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17(3), 437-455.

        Medical Professionalism Project. (2002). Medical professionalism in the new millennium: A physicians’ charter. The Lancet, 359, 520-522.

        Monrouxe, L. V., Chandratilake, M., Gosselin, K., Rees, C. E., & Ho, M. J. (2017). Taiwanese and Sri Lankan students’ dimensions and discourses of professionalism. Medical Education, 51(7), 1-14.

        Nishigori, H., Harrison, R., Busari, J., & Dornan, T. (2014). Bushido and medical professionalism in Japan. Academic Medicine, 89(4), 560–563.

        Nishigori, H., Suzuki, T., Matsui, T., Busari, J., & Dornan, T. (2019). A two-edged sword: Narrative inquiry into Japanese doctors’ intrinsic motivation. The Asia Pacific Scholar, 4(3), 24-32.

        Pan, H., Norris, J. L., Liang, Y., Li, J., & Ho, M. (2013). Building a professionalism framework for healthcare providers in China: A Nominal Group technique study. Medical Teacher, 35, e1531–e1536.

        Polit, D., & Beck, C. (2006). The content validity index: Are you sure you know what’s being reported? Critique and recommendations. Research in Nursing and Health, 29, 489–497.

        Powell, C. (2003). The Delphi Technique: Myths and realities – Methodological issues in nursing research. Journal of Advances in Nursing, 41(4), 376–382.

        Randolph, J. (n.d.). Online Kappa Calculator. Retrieved from http://justusrandolph.net/kappa/

        Schwartz, S. H. (1999). A theory of cultural values and some implications for work. Applied Psychology: An International Review, 48(1), 23–47.

        Sri Lanka Medical Council. (2009). Guidelines on Ethical Conduct for Medical and Dental Practitioners Registered with the Sri Lanka Medical Council. Colombo, Sri Lanka: SLMC.

        Steinert, Y., Cruess, R. L., Cruess, S. R., Boudreau, J. D., & Fuks, A. (2007). Faculty development as an instrument of change: A case study on teaching professionalism. Academic Medicine, 82(11), 1057-1064.

        Stern, A. (2006). Measuring Professionalism. New York, USA: Oxford University Press.

        Susilo, A. P., Marjadi, B., van Dalen, J., & Scherpbier, A. (2019). Patients’ decision-making in the informed consent process in a hierarchical and communal culture. The Asia Pacific Scholar, 4(3), 57-66.

        Tavakol, M., & Sandars, J. (2014). Quantitative and qualitative methods in medical education research: AMEE Guide No 90: Part II. Medical Teacher, 36(10), 838–848.

        Uragoda, C. (1987). A History of Medicine in Sri Lanka – From the Earliest Times to 1948. Colombo, Sri Lanka: Sri Lanka Medical Association.

        Van de Camp, K., Vernooij-Dassen, M. J. F. J., Grol, R. P. T. M., & Bottema, B. J. A. M. (2004). How to conceptualize professionalism: A qualitative study. Medical Teacher, 26(8), 696–702.

        Van Mook, W. N. K. A., Van Luijk, S. J., O’Sullivan, H., Wass, V., Schuwirth, L. W., & Van Der Vleuten, C. P. M. (2009). General considerations regarding assessment of professional behaviour. European Journal of Internal Medicine, 20(4), e90–e95.

        Wear, D., & Kuczewski, M. G. (2004). The professionalism movement: Can we pause? American Journal of Bioethics, 4(2), 1–10.

        Wilkinson, T. J., Wade, W. B., & Knock, L. D. (2009). A blueprint to assess professionalism: Results of a systematic review. Academic Medicine, 84(5), 551–558.

        Wynia, M. K., Papadakis, M. A., Sullivan, W. M., & Hafferty, F. W. (2014). More than a list of values and desired behaviors: A foundational understanding of medical professionalism. Academic Medicine, 89(5), 712–714.

        *Amaya Ellawala
        Department of Medical Education,
        Faculty of Medical Sciences,
        University of Sri Jayewardenepura,
        Sri Lanka
        Email address: amaya@sjp.ac.lk

          Submitted: 17 March 2020
          Accepted: 3 June 2020
          Published online: 5 January, TAPS 2021, 6(1), 60-69
          https://doi.org/10.29060/TAPS.2021-6-1/OA2239

          Frank Bate1, Sue Fyfe2, Dylan Griffiths1, Kylie Russell1, Chris Skinner1, Elina Tor1

          1University of Notre Dame Australia, Australia; 2Curtin University, Australia

          Abstract

          Introduction: In 2017, the School of Medicine of the University of Notre Dame Australia implemented a data-informed mentoring program as part of a more substantial shift towards programmatic assessment. Data-informed mentoring, in an educational context, can be challenging, with boundaries between mentor, coach and assessor roles sometimes blurred. Mentors may be required to concurrently develop trust relationships, guide learning and development, and assess student performance. The place of data-informed mentoring within an overall assessment design can also be ambiguous. This paper is a preliminary evaluation study of the implementation of data-informed mentoring at a medical school, focusing specifically on how students and staff reacted and responded to the initiative.

          Methods: Action research framed and guided the conduct of the research. Mixed methods, involving qualitative and quantitative tools, were used with data collected from students through questionnaires and mentors through focus groups.

          Results: Both students and mentors appreciated data-informed mentoring and indications are that it is an effective augmentation to the School’s educational program, serving as a useful step towards the implementation of programmatic assessment.

          Conclusion: Although data-informed mentoring is valued by students and mentors, more work is required to: better integrate it with assessment policies and practices; stimulate students’ intrinsic motivation; improve task design and feedback processes; develop consistent learner-centred approaches to mentoring; and support data-informed mentoring with appropriate information and communications technologies. The initiative is described using an ecological model that may be useful to organisations considering data-informed mentoring.

          Keywords: Data-Informed Mentoring, Mentoring, Programmatic Assessment, E-Portfolio

          Practice Highlights

          • Students and mentors appreciated the introduction of data-informed mentoring.
          • Assessment policies and practices should be integrated with data-informed mentoring.
          • Data-informed mentoring presents curriculum challenges in task design and framing feedback.
          • The student context informs the data-informed mentoring approach (learner-centred to mentor-directed).
          • Data-informed mentoring requires supportive information and communications technologies.

          I. INTRODUCTION

            An often-cited definition of mentoring highlights the role of experienced and empathetic others in guiding students to re-examine their ideas, learning, and personal and professional development (Standing Committee on Postgraduate Medical and Dental Education, 1998).

            Heeneman and de Grave (2017) identify some subtle differences between traditional conceptions of mentoring and the type of mentoring that is required under programmatic assessment, which in this paper we refer to as Data-Informed Mentoring (D-IM). For example, D-IM is embedded in a curriculum in which rich data on student progress arises from student interaction with assessment tasks, informing and enhancing their progress (see Appendix). Further, in programmatic assessment, the learning portfolio is typically the setting in which the mentor-mentee relationship develops. This setting brings together institutional imperatives (e.g. assessable tasks), and personal imperatives such as evidence of competence and personal reflection. Situating mentoring in a curriculum and assessment framework impacts upon the mentoring relationship.

            Meeuwissen, Stalmeijer, and Govaerts (2019) propose that a different type of mentoring is required under programmatic assessment. Mentors interpret data and feedback provided by content experts across domains of learning, thus providing an evidence base to facilitate student reflection. They might also take on a variety of roles (e.g. critical friend, coach, assessor) that could influence the mentoring relationship, including the level of trust that is established with the student. These challenges suggest that conventional definitions of mentoring might not capture the essence of D-IM. Whilst the availability of rich information potentially enhances the mentoring experience and personalises learning, mentors and students are challenged to make sense of, and act upon, this information; students might focus on issues that fall outside the scope of the data provided (e.g. their wellbeing); and mentors may struggle to delineate boundaries between multiple roles, or to draw a line where their scope of practice as a mentor begins and ends.

            Mentoring is a social construct and as such is best considered through a holistic lens, taking account of societal, institutional and personal factors (Sambunjak, 2015). The current study adopted Sambunjak’s “ecological model” (2015, p. 48) as a framework to help understand the impact of D-IM (Figure 1). Societal, institutional and personal forces are inter-related. For example, a student’s approach to D-IM might be influenced by financial circumstances resulting in the need to work part-time (societal); a medical school’s assessment policy (institutional); or a student’s learning style (personal). The model is presented as a set of cogs, where the optimal educational experience is achieved when all elements work in harmony. The ecological model was used to help answer the central research question: how did students and staff react and respond to D-IM?

            Figure 1. An ecological framework for conceptualizing D-IM (modified from Sambunjak, 2015)

            This paper shares findings from the study derived from the first two years of data collection. Its focus is on the implementation of D-IM and how students and staff reacted to this implementation (Kirkpatrick & Kirkpatrick, 2006).

            II. METHODS

            The School of Medicine Fremantle (the School) of the University of Notre Dame Australia introduced D-IM as part of its incremental approach to programmatic assessment. The School offers a four-year Doctor of Medicine (MD) degree, with around 100 students enrolling each year. The first two years are pre-clinical, consisting of problem-based learning supported by lectures and small-group learning. The final two years involve clinical rotations, mostly located at hospital sites. Each year of the MD constitutes a course that students need to pass in order to progress to the next year. The School’s assessment mix includes knowledge-based examinations (multiple choice/case-based), Objective Structured Clinical Examinations, work-based assessments and rubric-based assessments (e.g. reflections). Examinations are administered mid-year and end-of-year for pre-clinical students, and end-of-year for students in the clinical years.

            All performance data informs D-IM. Regular feedback from assessors is provided and collated in an e-portfolio (supported by Blackboard), so that students have opportunities to reflect on their progress and plan future learning. Each student is allocated a mentor each year, who has access to the student’s e-portfolio.

            Mentoring was provided by 26 pre-clinical de-briefing (CD) tutors, whose role was to facilitate student reflection on their learning and to support and guide their interpretation of the feedback they had received. D-IM was introduced to first-year students in 2017, and to first- and second-year students in 2018. Three mentoring meetings were conducted per student per year. CD tutors also have a role in assessing student performance and providing feedback, and each CD tutor has a CD group (8-10 students) which is also their mentoring group. However, a group’s tasks are assessed, and feedback provided, by a different tutor, so that the mentor and assessor functions are separated for any given student.

            In preparation for the implementation of D-IM, targeted professional development was provided to tutors which unpacked the mentoring role and provided examples of how performance data can be used to underpin mentoring sessions.

            The University of Notre Dame Australia Human Research Ethics Committee (HREC) provided ethical approval for the research, and a research team was formed in 2017. Action research guided the conduct of the research, as it aims to understand and influence the change process. Action research is the “systematic collection and analysis of data for the purpose of taking action and making change” (Gillis & Jackson, 2002, p. 264). It involves cycles of planning, implementing, observing and reflecting on the processes and consequences of the action. The subjects of the research have input into cycles and influence changes that are made as a result of feedback and reflection (Kemmis & McTaggart, 2000). Each cycle of the research runs for one year so that planning, action, observation and reflection can inform the next iteration.

            Mixed methods research, involving both qualitative and quantitative methods, was used. Data were collected each year from student questionnaires and from focus groups which included mentors. Participation in the research was underpinned by a Statement of Informed Consent: for the questionnaire, consent constituted ticking a box on an online form; for the focus groups, a physical consent form was signed before taking part. The student questionnaire comprised qualitative and quantitative components and posed 9 statements on mentoring. The questionnaire was critically appraised by a panel of 8 academic staff in May 2017, and it was agreed that the questionnaire had attained face validity before it was administered in September 2017.

            Students were asked to rate each statement of the questionnaire on a Likert-type scale from Strongly Disagree, Disagree, Neutral, Agree to Strongly Agree. For interpretation, a numerical value was assigned to each response, from 1=Strongly Disagree through to 5=Strongly Agree. Quantitative data were downloaded from SurveyMonkey as Excel files for extraction of descriptive statistics and then imported into SPSS Version 25, in which statistical analysis was undertaken. Two statistical tests were conducted. The first, a non-parametric median test on students’ perceptions of each aspect of D-IM, is consistent with the purpose of action research to inform future practice. Responses to individual survey items on a Likert-type scale are ordinal in nature, and the distributions were not identical for the two cohorts; a median test was therefore used. This statistic compares the responses from two independent groups to individual survey items, with reference to the overall pooled median rating for the two cohorts combined. More specifically, the median test examines whether the same proportion of responses lies above and below the overall pooled median rating in each of the two cohorts, for each individual item. For the second test, an aggregate mean score was calculated from students’ responses to the nine statements in each cohort. The mean score for each cohort provided an overall indication of the extent to which respondents were satisfied with the mentoring program. A parametric test (independent t-test) was used to examine whether there were statistically significant differences in mean scores between the two independent cohorts.
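For each item, the median test reduces to a Pearson chi-square test (without continuity correction) on a 2x2 table of counts above versus at-or-below the pooled median. As a minimal sketch, using Python/SciPy rather than the SPSS procedure used in the study, with the counts reported for the first item of Table 1:

```python
from scipy.stats import chi2_contingency

# Counts for "The mentoring process was well organised" (Table 1).
# Rows: n > pooled median, n <= pooled median; columns: 2017, 2018.
counts = [[6, 6],
          [26, 45]]

# Pearson chi-square without Yates' continuity correction reproduces
# the reported statistic (chi-square = 0.776; df = 1; p = 0.378).
chi2, p, df, expected = chi2_contingency(counts, correction=False)
print(f"chi2 = {chi2:.3f}, df = {df}, p = {p:.3f}")
```

SciPy also offers `median_test`, which performs the same computation directly from raw ratings; only the aggregated counts are available here.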

            Qualitative data were coded from students’ comments to two open-ended questions in the student questionnaire: (1) Please comment on any aspect of the learning portfolio that you feel were particularly beneficial for your learning journey; and (2) Please comment on any aspect of the learning portfolio that could be improved in the future. Qualitative data from mentors through three focus groups in both 2017 and 2018 were recorded, transcribed and imported into Nvivo12 to help identify patterns across and within data sources. Data saturation was achieved after two focus group iterations. Two researchers independently coded students’ comments and staff transcripts and then met to discuss and resolve differences in interpretation. These codes were then presented to the broader team in which ideas were further unpacked and themes developed using Braun and Clarke’s (2006) thematic approach to analysis.

            III. RESULTS

            In 2017, 29% of the year 1 student cohort responded to the questionnaire (n=33) and in 2018, the response fraction across both Year 1 and Year 2 was 47% (n=98). The 2017 student cohort is described as Cohort 1 and the 2018 Student Cohort is Cohort 2. The response fraction for Cohort 1 increased from 29% in 2017 to 46% in 2018. In 2017, 21 staff participated in focus groups (7 of whom were mentors). In 2018, 17 staff took part (9 mentors). Tables 1-2 compare student responses to the 9 items on mentoring on the following basis:

            • Over time in 2017 and 2018 within Cohort 1 (Table 1);
            • For first year students–Cohort 1 2017 and Cohort 2 2018 (Table 2).

            For each table, median ratings are shown for each item along with the results of the median test to discern statistically significant differences between or within cohorts. Table 1 compares Cohort 1 responses to D-IM over time.

| Item | Overall pooled median* | Cohort 1, 2017 (n=32): n > pooled median | n <= pooled median | Cohort 1, 2018 (n=51): n > pooled median | n <= pooled median | Median Test (chi square (χ2); df; p value) |
|---|---|---|---|---|---|---|
| The mentoring process was well organised | 4 | 6 | 26 | 6 | 45 | χ2=0.776; df=1; p=0.378 |
| My mentor was personally very well organised | 5 | 0 | 32 | 0 | 50 | n/a** |
| There were an appropriate number of mentoring meetings throughout the year | 4 | 2 | 30 | 4 | 47 | χ2=0.074; df=1; p=0.785 |
| My mentor was respectful | 5 | 0 | 32 | 0 | 51 | n/a** |
| My mentor listened to me | 5 | 0 | 32 | 0 | 50 | n/a** |
| My mentor asked thought-provoking questions which helped me to reflect | 4 | 10 | 22 | 12 | 39 | χ2=0.602; df=1; p=0.438 |
| My mentor added value to my learning | 4 | 10 | 22 | 11 | 40 | χ2=0.975; df=1; p=0.323 |
| My mentor helped me to set future goals that were achievable | 4 | 9 | 23 | 11 | 40 | χ2=0.462; df=1; p=0.497 |
| The summaries provided of my performance in the Blackboard Community Site were useful in helping me to reflect on my progress | 3 | 17 | 16 | 14 | 37 | χ2=4.983; df=1; p=0.026*** |

            Note. *In the median test, a comparison is made between the median rating in each group to the ‘overall pooled median’ from both groups. **Values are less than or equal to the overall pooled median, therefore the Median Test could not be performed. ***Significant at p < 0.05 level.

            Table 1. Student Perceptions of D-IM within Cohort 1 in 2017 and 2018 – Median Tests for Individual Items

            The only statistically significant difference noted for Cohort 1 was for the summaries of performance provided in Blackboard that were designed to underpin D-IM. The data provided in these summaries were less valued by students who engaged with D-IM in their second year.

            The aggregate mean score in response to the statements on D-IM in the survey was positive in 2017 (M=4.02; SD=0.62; n=32). Mentoring continued to be well perceived by Cohort 1 as they progressed to second year in 2018 (M=3.80; SD=0.67; n=51). The slight difference in aggregate mean scores between 2017 and 2018 is not statistically significant (t=1.571; df=82, p=0.120). Table 2 compares first year students’ perceptions of D-IM.
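The aggregate-score comparison can be re-derived, approximately, from the published summary statistics alone. A sketch using SciPy's `ttest_ind_from_stats` rather than the SPSS procedure used in the study; because the published means and SDs are rounded, the result is close to, but not exactly, the reported t=1.571:

```python
from scipy.stats import ttest_ind_from_stats

# Rounded summary statistics reported for Cohort 1: 2017 vs 2018.
t, p = ttest_ind_from_stats(mean1=4.02, std1=0.62, nobs1=32,
                            mean2=3.80, std2=0.67, nobs2=51,
                            equal_var=True)  # pooled-variance t-test

# p stays above 0.05, matching the paper's "not statistically
# significant" conclusion for the difference between years.
print(f"t = {t:.2f}, p = {p:.3f}")
```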

| Item | Overall pooled median* | Cohort 1 (n=32): n > pooled median | n <= pooled median | Cohort 2 (n=47): n > pooled median | n <= pooled median | Median Test (chi square (χ2); df; p value) |
|---|---|---|---|---|---|---|
| The mentoring process was well organised | 4 | 6 | 26 | 9 | 37 | χ2=0.008; df=1; p=0.928 |
| My mentor was personally very well organised | 5 | 0 | 32 | 0 | 47 | n/a** |
| There were an appropriate number of mentoring meetings throughout the year | 4 | 2 | 30 | 8 | 39 | χ2=0.998; df=1; p=0.158 |
| My mentor was respectful | 5 | 0 | 32 | 0 | 47 | n/a** |
| My mentor listened to me | 5 | 0 | 32 | 0 | 47 | n/a** |
| My mentor asked thought-provoking questions which helped me to reflect | 4 | 10 | 22 | 18 | 29 | χ2=0.413; df=1; p=0.520 |
| My mentor added value to my learning | 4 | 10 | 22 | 17 | 30 | χ2=0.205; df=1; p=0.651 |
| My mentor helped me to set future goals that were achievable | 4 | 9 | 23 | 17 | 30 | χ2=0.558; df=1; p=0.455 |
| The summaries provided of my performance in the Blackboard Community Site were useful in helping me to reflect on my progress | 3 | 17 | 16 | 17 | 30 | χ2=1.868; df=1; p=0.172 |

            Note. *In the median test, a comparison is made between the median rating in each group to the ‘overall pooled median’ from both groups. **Values are less than or equal to the overall pooled median, therefore the Median Test could not be performed.

            Table 2. First Year Student Perceptions of D-IM – Median Tests between Cohort 1 and Cohort 2 for Individual Items

            No statistically significant differences were noted between cohorts 1 and 2.

            The aggregate mean score in response to the statements on D-IM in the survey was positive for Cohort 1 in 2017 (M=4.02; SD=0.62; n=32). Equally positive responses were noted in Cohort 2 in 2018 (M=3.91; SD=0.79; n=47). The difference between aggregate mean scores for first year students’ perceptions is not statistically significant (t=0.686; df=78, p=0.495).

            Data from Tables 1 and 2 reveal that students were highly satisfied with three aspects of mentoring: the personal organisation of the mentor, along with their respectful and listening attributes. Students were also satisfied with the mentoring process, the number of mentoring meetings, the ability of mentoring to assist in reflection and to add value to their learning, and the propensity of the mentor to assist in action-planning. However, the summaries provided in the Blackboard environment were a source of dissatisfaction for students.

            As discussed, qualitative data were collected from students through the questionnaire and staff through focus groups. The research team collated the qualitative data and confirmed that the qualitative data corroborated quantitative results with students and mentors appreciating the introduction of D-IM. For example, “Mentor sessions are important in providing support to students and…are a welcome introduction” (Yr1 Student, 2017); “Mentoring was useful to develop self-directed learning and to check where you were” (Yr2 Student, 2018); “You get to know the students, things were revealed which would not have been otherwise” (Mentor, 2017); and “Mentoring enabled me to facilitate more, listen more. Definitely a difference when you’re one-on-one with somebody” (Mentor, 2018).

            In tune with the action research method adopted by the study which seeks to identify and respond to opportunities for improvement, the Research Team identified three concerns from the qualitative data: differing views of the purpose of D-IM and the role of the mentor; the provision of student feedback and information and communications technologies (ICT); and workload.

            A. Differing Views of the Purpose of D-IM and the Role of Mentor

            Mentors had differing conceptions of the purpose of D-IM and the role of a mentor. Some mentors perceived their primary function to be one of facilitating reflection and being encouraging whilst others were more directive, providing advice or sharing their own experiences. “I was…a sounding board to prompt their thoughts about how their progress was going. Rather than offering ways of solving problems it was more pointing where problems might lie and encouraging them to think of solutions” (Mentor, 2017); “The basic rule is to guide them… guide them properly, maybe to get them to change their study strategies and other things” (Mentor, 2017).

             B. Provision of Student Feedback and ICT

            Students reported that feedback was inconsistent in timeliness and quality. Often feedback lacked guidance for improvement or came too late to help students improve their learning: “More in-depth feedback on work, and returned in a timeframe that allows it to be relevant to our learning” (Yr1 Student, 2018); “Marking seemed thoughtless and halfhearted” (Yr2 Student, 2018). The use of a Blackboard Wiki to collate and present data points was also less than ideal, with students finding the site difficult to navigate and use, although they generally reported that it was safe and secure.

            C. Workload

            Students understood the role of reflection and appreciated having a mentor although there was some misunderstanding of the role of the portfolio with some students seeing it as extra work: “The amount of work required…was disproportionate” (Yr2 Student, 2018). Some students felt that the added stress and anxiety detracted from their study of medicine: “The portfolio actually detracts from spending time learning content that is essential to clinical years” (Yr2 Student, 2018). These concerns needed to be addressed by the School and are discussed in the context of changes that have and will be made to D-IM for preclinical students in the School.

            IV. DISCUSSION

            On the whole there was a positive response to D-IM implementation by students and staff. This response is consistent with Frei, Stamm, and Buddeberg-Fischer (2010, p. 1) who found that the “personal student-faculty relationship is important in that it helps students to feel that they are benefiting from individual advice.”

            The findings of the research, however, reveal some tensions between the various elements of Sambunjak’s (2015) ecological model that link to the three areas of concern identified in the research. These tensions are shown in Figure 2.

            Figure 2. The ecological framework to explore tensions in D-IM

            A. Purpose and Role of Mentors and D-IM

            The role of the mentor at the School is to support and guide students, and this role was not confused with other functions such as content expert or assessor. In this respect, the role conflict described by Meeuwissen et al. (2019) and Heeneman and deGrave (2017) was not evident at the School. However, mentoring approaches were situated on a continuum between learner-centred and mentor-directed. It is probable that the mentor’s style–empowering, checking or directing (Meeuwissen et al., 2019, p.605)–and their potentially different view of their role impacted on how D-IM sessions played out. Three ways of understanding the role of mentor in medical education have been identified: someone who can answer questions and give advice, someone who shares what it means to be a doctor and someone who listens and stimulates reflection (Stenfors-Hayes, Hult, & Owe Dahlgren, 2011). In a study of mentoring styles of beginning teachers, Richter et al. (2013) found that the mentor’s beliefs about learning have the greatest impact on the quality of the mentoring experience. Although professional development was provided to mentors on their role as facilitators of reflection and these issues were outlined and discussed, there were differences in interpretation of the role in the D-IM context.

            Heeneman and de Grave (2017) argue that students need to be self-directed in order to be effective medical professionals. It is posited that a number of factors can influence the extent to which the mentor directs proceedings including the mentor’s experience, role clarity, rate of student progress, depth of student reflections and the perceived importance of the data required for assessment purposes.

            In this study, most students engaged positively with D-IM, albeit with variations in the extent and quality of reflection and action planning. A slight decrease in students’ enthusiasm towards D-IM was noted as they progressed from first to second year. This could be related to the novelty of D-IM diminishing over time, a pattern that has been evident in other educational technology innovations (Kuykendall, Janvier, Kempton, & Brown, 2012). However, students also have a different mentor in each year. According to Sambunjak (2015), mentoring requires commitment sustained over a long period of time. At Maastricht University, for example, Heeneman and de Grave (2017) report that students are allocated the same mentor for a four-year medical course. It is therefore likely that, in the current study, the short timeframe for mentors to establish student relationships, and the introduction of a different mentor each year, contributed to a reduction in student satisfaction.

            B. Feedback and ICT Support

            D-IM is dependent on quality data: that is, on tasks that students perceive as valuable and on the feedback that they receive on these tasks. Findings suggest that students found some tasks repetitive and feedback belated and superficial; better task design and feedback practices are required. This finding is consistent with those of Bate, Macnish and Skinner (2016) in a study of task design within a learning portfolio. Findings also indicated dissatisfaction with the Blackboard ICT environment: the portal was not intuitive, and the structure and requirements for use of the template did not stimulate the desired level of reflection.

             C. Workload

            Students at the School are “time poor” and many work part-time whilst studying. They are graduate entrants used to achieving academic success. Most are millennials comfortable with distilling and manipulating data and using online technologies. These characteristics are consistent with what Waljee, Chopra, and Saint (2018) see as the new breed of medical students: accustomed to distilling information and desirous of rapid career advancement. In these circumstances, it is unsurprising that students valued D-IM, as it promoted focused, data-driven discussion on their progress. However, it is also unsurprising that students were critical of anything that, in their opinion, did not support the “study of medicine”. Although students were sometimes critical of tasks that fed into D-IM (Bate et al., 2020), the reflective and action planning components of D-IM were not onerous and were, at any rate, optional.

            For most students, grades rather than learning were paramount, and this created a competitive environment which fuelled strategic learning in engaging with tasks underpinning D-IM. The School’s Assessment Policy has implications here: progression is determined by passing discrete assessments, which causes students to focus on grades rather than learning. These dispositions play out in D-IM sessions where, for example, goals are sometimes framed around passing examinations rather than addressing deficits in understanding. The School also distinguishes between formative and summative assessment, with the result that formative assessments are less valued by students. Opportunities to test understanding through formative testing are sometimes not taken up, resulting in less information for students and their mentor to gauge learning progress.

            Bhat, Burm, Mohan, Chahine, and Goldszmidt (2018, p. 620) identified a set of “threshold concepts” in medicine that are crucial for students transitioning into clinical practice. Among these are self-directed, metacognitive and collaborative dispositions to learning. However, for a student in the preclinical years, these threshold concepts are not perceived to be the important factors that determine their progress through the course and their aim to become a doctor. Thus a tension arises within the model: students value mentoring but feel that reflecting on their performance through D-IM is time-consuming and unrelated to their course progression.

            D. Actions as a Result of the Study

            The action research approach of this study meant that in all results the Research Team was looking for ways to improve the system. Some issues could be improved quickly. A refinement of the Blackboard environment and a change to a software solution called SONIA was implemented in 2019 to improve the ICT interface and reduce workload. Continuing professional development (PD) for staff is undertaken and takes the research results into account. Within the mentor PD program, the Research Team saw that mentoring requires mentors to be able to diagnose the readiness and willingness of students to consider their educational journey. This means that, whilst the D-IM program needs a consistent view of D-IM in which mentors see their role as facilitating reflection, different mentoring skills and behaviours are needed for different students. PD is also needed for students so that they understand the relationship between their achievement of learning and the role of D-IM in their journey.

            Some issues are longer-term or resource dependent. A focus on the role of feedback in the system, especially for student reflection and its timeliness for mentoring sessions and action planning is critical to making D-IM valued by students. However, it is not always possible for staff to provide feedback in an optimum timeframe although the quality of the feedback can be improved by clear guidelines, expectations and an intuitive online interface.

            Of great complexity and more difficult to resolve is the tension between developing the “threshold concepts” (Bhat et al., 2018), the generic skills which are built on self-reflection and are supported by D-IM, and the ways in which a student progresses through the course. These School-based rules of progression produce a framework within which D-IM needs to operate.

            V. LIMITATIONS OF THE STUDY

            The study was conducted at one University and although it will ultimately cover a six-year timeframe, findings should be gauged within the context of this setting. Relatively low response rates were noted, and selection bias is a possibility with students most engaged with D-IM completing the questionnaire. Although professional development was provided to underpin the mentoring role, there was variation in the way tutors interpreted this role. The study was conducted at a time where other changes were occurring at the School (e.g. development of more continuous forms of assessment) and these changes might have impacted on D-IM. The questionnaire used in the study contained nine questions on mentoring. To gain a more nuanced understanding of D-IM at the School, it may be useful to use a comprehensive and validated questionnaire (e.g. Heeneman & de Grave, 2019) capturing the perceptions of mentees and mentors.

            VI. CONCLUSION

            The School aims to create quality, patient-centred and compassionate doctors who are lifelong learners (Candy, 2006). D-IM is an effective augmentation to the School’s educational program and the paper has demonstrated that it was well received by students and staff. Future directions include consideration of D-IM in clinical mentoring, development of more consistent learner-centred approaches to mentoring; improved task design and feedback; support for D-IM with appropriate ICT; and better integration of D-IM with assessment policies and practices.

            Notes on Contributors

            Associate Professor Frank Bate completed his PhD at Murdoch University. He is the Director of Medical and Health Professional Education at the School of Medicine Fremantle, University of Notre Dame Australia. He conceptualised and led the research, and was the primary author responsible for developing, reviewing and improving the manuscript.

            Professor Sue Fyfe attained her PhD at the University of Western Australia and is an adjunct professor at Curtin University. She assisted in conceptualising the research design, conducted the qualitative data analysis and made a significant contribution to reviewing and improving the manuscript.

            Dr Dylan Griffiths has a PhD from the University of Essex and is the Quality Assurance Manager at the School of Medicine Fremantle, University of Notre Dame Australia. He conducted data collection and assisted with preliminary analysis.

            Associate Professor Kylie Russell obtained her PhD from the University of Notre Dame Australia. She is currently a Project Officer at the School of Medicine Fremantle, University of Notre Dame Australia. She assisted in the development of the research methodology and made a contribution to reviewing and improving the manuscript.

            Associate Professor Chris Skinner completed his PhD at the University of Western Australia. He is Domain Chair of Personal and Professional development at the School of Medicine Fremantle, University of Notre Dame Australia. He assisted in conceptualising the research design and made a contribution to reviewing and improving the manuscript.

            Associate Professor Elina Tor completed her PhD at Murdoch University and is the Associate Professor of Psychometrics at the School of Medicine Fremantle, University of Notre Dame Australia. She helped conceptualise the research design, led the quantitative data analysis, and made a significant contribution to reviewing and improving the manuscript.

            Ethical Approval

            The University of Notre Dame Australia Human Research Ethics Committee (HREC) has provided ethical approval for the research (Approval Number 017066F).

            Acknowledgement

            The authors acknowledge the thoughtful and insightful feedback provided by staff and students.

            Funding

            No internal or external funding was sought to conduct this research.

            Declaration of Interest

            There is no conflict of interest to declare.

            References

            Bate, F., Fyfe, S., Griffiths, D., Russell, K., Skinner, C., & Tor, E. (2020). Does an incremental approach to implementing programmatic assessment work: Reflections on the change process. MedEdPublish, 9(1), 55. https://doi.org/10.15694/mep.2020.000055.1

            Bate, F., Macnish, J., & Skinner, C. (2016). The cart before the horse? Exploring the potential of ePortfolios in a Western Australian medical school. International Journal of ePortfolio, 6 (2), 85-94.

            Bhat, C., Burm, S., Mohan, T., Chahine, S., & Goldszmidt, M. (2018). What trainees grapple with: A study of threshold concepts on the medicine ward. Medical Education, 52(6), 620–631. https://doi.org/10.1111/medu.13526

            Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3, 77-101. https://doi.org/10.1191/1478088706qp063oa

            Candy, P. (2006). Promoting lifelong learning: Academic developers and the university as a learning organization. International Journal for Academic Development, 1(1), 7-18. https://doi.org/10.1080/1360144960010102

            Frei, E., Stamm, M., & Buddeberg-Fischer, B. (2010). Mentoring programs for medical students: A review of PubMed literature 2000-2008, BMC Medical Education, 10(32), 1-14. https://doi.org/10.1186/1472-6920-10-32

            Gillis, A., & Jackson, W. (2002). Research for Nurses: Methods and Interpretation. Philadelphia, PA: F.A. Davis Co.

            Heeneman, S., & de Grave, W. (2017). Tensions in mentoring medical students toward self-directed and reflective learning in a longitudinal portfolio-based mentoring system – An activity theory analysis. Medical Teacher, 39(4), 368-376. https://doi.org/10.1080/0142159X.2017.1286308

            Heeneman, S., & de Grave, W. (2019). Development and initial validation of a dual-purpose questionnaire capturing mentors’ and mentees’ perceptions and expectations of the mentoring process. BMC Medical Education, 19(133), 1-13. https://doi.org/10.1186/s12909-019-1574-2

            Kemmis, S., & McTaggart, R. (2000). Participatory action research. In N. K. Denzin & Y. S. Lincoln (Eds.) Handbook of Qualitative Research (2nd Ed.; pp 567-606). New York: Sage Publications.

            Kirkpatrick, D., & Kirkpatrick, J. (2006). Evaluating Training Programs: The Four Levels (3rd ed.). Oakland, CA: Berrett-Koehler Publishers, Inc.

            Kuykendall, B., Janvier, M., Kempton, I., & Brown, D. (2012). Interactive whiteboard technology: Promise and reality. In T. Bastiaens & G. Marks (Eds.), Proceedings of E-Learn 2012 – World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 1 (pp. 685-690). Retrieved from Association for the Advancement of Computing in Education (AACE), https://www.learntechlib.org/p/41669

            Meeuwissen, N., Stalmeijer, R., & Govaerts, M. (2019). Multiple-role mentoring: Mentors conceptualisations, enactments and role conflicts. Medical Education, 53, 605-615. https://doi.org/10.1111/medu.13811

            Richter, D., Kunter, M., Lüdtke, O., Klusmann, U., Anders, Y., & Baumert, J. (2013). How different mentoring approaches affect beginning teachers’ development in the first years of practice. Teaching and Teacher Education, 36, 166-177. https://doi.org/10.1016/j.tate.2013.07.012

            Sambunjak, D. (2015). Understanding the wider environmental influences on mentoring: Towards an ecological model of mentoring in academic medicine. Acta Medica Academia, 44(1), 47-57. https://doi.org/10.5644/ama2006-124.126

            Standing Committee on Postgraduate Medical and Dental Education. (1998). Supporting Doctors and Dentists at Work: An Enquiry into Mentoring. London: SCOPME.

            Stenfors-Hayes, T., Hult, H., & Owe Dahlgren, L. (2011). What does it mean to be a mentor in medical education? Medical Teacher, 33(8), 423-428. https://doi.org/10.3109/0142159X.2011.586746 

            Waljee, J. F., Chopra, V., & Saint, S. (2018). Mentoring Millennials. JAMA, 319(15), 1547-1548. https://doi.org/10.1001/jama.2018.3804

            *Frank Bate
            Medical and Health Professional Education,
            School of Medicine Fremantle,
            University of Notre Dame Australia,
            PO Box 1225, Fremantle,
            Western Australia 6959
            Telephone: +66 9433 0944
            Email address: frank.bate@nd.edu.au

            Submitted: 20 March 2020
            Accepted: 28 July 2020
            Published online: 5 January, TAPS 2021, 6(1), 70-82
            https://doi.org/10.29060/TAPS.2021-6-1/OA2241

            Sandra Widaty1, Hardyanto Soebono2, Sunarto3 & Ova Emilia4

            1Department of Dermatology and Venereology, Faculty of Medicine, Universitas Indonesia – Dr. Cipto Mangunkusumo Hospital, Indonesia; 2Department of Dermatology and Venereology Faculty of Medicine, Universitas Gadjah Mada, Indonesia; 3Pediatric Department, Faculty of Medicine, Universitas Gadjah Mada Indonesia, Indonesia; 4Medical Education Department, Faculty of Medicine, Universitas Gadjah Mada, Indonesia

            Abstract

            Introduction: Performance assessment of residents should be carried out with evaluation procedures informed by measurable, current educational standards. The present study aimed to develop, test, and evaluate a psychometric instrument for evaluating clinical practice performance among Dermatology and Venereology (DV) residents.

            Methods: This was a qualitative and quantitative study conducted from 2014 to 2016. A pilot instrument was developed by 10 expert examiners from five universities to rate four video-recorded clinical performances, previously classified as good or poor. The instrument was then applied by DV faculty at two universities to evaluate residents.

            Results: The instrument comprised 11 components. There was a statistically significant difference (p < 0.001) between good and poor performance. Cronbach's alpha documented high overall reliability (α = 0.96) and good internal consistency (α = 0.90) for each component. The new instrument correctly classified 95.0% of poor performances. The implementation study showed that inter-rater reliability between evaluators ranged from low to high (up to a correlation coefficient of r = 0.79, p < 0.001).

            Conclusion: The instrument is reliable and valid for assessing the clinical practice performance of DV residents. More studies are required to evaluate the instrument in different situations.

            Keywords:            Instrument, Clinical Assessment, Performance, Resident, Dermatology-Venereology, Workplace-Based Assessment

            Practice Highlights

            • Residents' performance reflects their professionalism and competencies. Furthermore, the clinical care provided in the Dermatology and Venereology field is unique; therefore, a standard instrument is needed to assess their performance.
            • The Dermatology-Venereology Clinical Practice Performance Examination instrument has been shown to be reliable and valid in assessing residents' clinical performance.

            I. INTRODUCTION

              Performance assessment in medical clinical practice has been a great concern for medical education programmes worldwide (Holmboe, 2014; Khan & Ramachandran, 2012; Naidoo, Lopes, Patterson, Mead, & MacLeod, 2017). It is an accepted premise that performance may differ according to competency (Cate, 2014; Khan & Ramachandran, 2012). Performance also occurs within a domain; therefore, the assessment of performance should be separated from that of competency. Performance assessment of medical residents should also be informed by existing medical standards and performance criteria (Li, Ding, Zhang, Liu, & Wen, 2017; Naidoo et al., 2017).

              Assessment of residents during their training programme is an important issue in postgraduate medical education, which has declared formative evaluation and constructive feedback as priorities (World Federation for Medical Education, 2015). A hallmark of postgraduate medical specialist training is that it occurs in the workplace; therefore, the most appropriate measurement tools are Workplace-Based Assessments (WPBA). In medical education, these assessments emphasise results and professionalism (Boursicot et al., 2011; Joshi, Singh, & Badyal, 2017).

              In response to a standardisation programme for postgraduate medical specialist training (PMST), the World Federation for Medical Education (WFME) published guidelines that were adopted by several countries, including Indonesia (Indonesian College of Dermatology and Venereology, 2008; World Federation for Medical Education, 2015). Clinical care provided in the Dermatology and Venereology (DV) field is unique; a brief examination of the patient is often useful before taking a lengthy history (Garg, Levin, & Bernhard, 2012). Privacy is a top priority, especially for venereology patients, patients with communicable diseases, and those receiving cosmetic dermatology and skin surgery care.

              Until now, no standard instrument has been available for performance assessment of PMST in DV; therefore, a variety of assessments are in use, which may cause discrepancies (Jhorar, Waldman, Bordelon, & Whitaker-Worth, 2017). A valid and reliable method of assessment is required that can be used in various facilities and that addresses proficiency in both content and process (Kurtz, Silverman, Benson, & Drapper, 2003). Therefore, a study was conducted to develop a residents' clinical performance assessment based on established standards and principles such as the WPBA and WFME standards.

              II. METHODS

              A. Instrument Development

              The instrument was developed and tested using qualitative and quantitative study designs. It started with a solicitation of inputs regarding expected performance from a variety of stakeholders in DV: patients, nurses, laboratory staff, newly graduated DV specialists, DV practitioners, and faculty. A literature review was performed, which included various documents such as the educational programme standards for DV residents, and documentation on available assessment tools (Cate, 2014; Hejri et al., 2017; Norcini, 2010). The instrument was developed according to the current standards (Campbell, Lockyer, Laidlaw, & MacLeod, 2007; McKinley, Fraser, van der Vleuten, & Hastings, 2000).

              The resulting 11-item instrument was subsequently evaluated by faculty groups from various universities in Indonesia, and repeated revisions were carried out. Psychometric data for the instrument were obtained through independent evaluations of performance videos of the residents and through comparison of the results of the new instrument (the Dermatology-Venereology Clinical Practice Performance Examination, DVP-Ex) with those of the comparison instrument. The design was a validation study in which psychometric data for the instrument were provided. The next step was to assess residents' performance during clinical practice using the instrument, in order to evaluate its reliability and gather feedback. A flowchart of the study process is shown in Appendix A.

              B. Setting

              The present study was conducted at the Department of Dermatology and Venereology, Dr. Cipto Mangunkusumo Hospital, a teaching hospital of the Faculty of Medicine, Universitas Indonesia, from 2014 to 2016. The study was conducted in four steps. When developing the instrument (Step 1), we included faculty members from five medical faculties in Indonesia that have a DV residency programme (Universitas Indonesia, Universitas Sriwijaya (UNSRI), Universitas Padjajaran (UNPAD), Universitas Gadjah Mada, and Universitas Sam Ratulangi) through in-depth interviews and an expert panel. The study received ethical approval from the Research Ethics Committee of the Faculty of Medicine, Universitas Gadjah Mada (Number KE/FK/238/EC).

              The instrument that had been developed was sent to five senior faculty members from three universities (Departments of Dermatology and Venereology, Universitas Indonesia, UNPAD and UNSRI) (Step 2). They were asked to give their assessments in order to establish face and content validity. As a test of criterion validity, we recruited 10 faculty members of the Faculty of Medicine, Universitas Indonesia, and randomised them into two groups. Randomisation was performed to prevent bias against the instruments being tested. One group used the DVP-Ex and the other used the current instrument. The single inclusion criterion was more than three years of teaching experience. After receiving some input, final corrections were made and training was provided for the faculty members who would use the instrument.

              C. Performance Video

              To obtain standardised performances of the residents, video recordings of the residents' clinical practice were made. Two residents were recruited voluntarily, and a special team recorded their clinical practice performance using scenarios created by the first author (Campbell et al., 2007; McKinley et al., 2000).

              There were four videos, each of which showed the clinical practice performance of the residents when they were presented with a difficult case (dermatomyositis) or a common case (borderline tuberculoid leprosy). Patients had to sign informed consent before being included in this study. A good (first and fourth video clips) and a poor (second and third clips) standard of performance were demonstrated. Activities presented in the scenarios were those associated with patient care (Campbell et al., 2007; Iobst et al., 2010). After the recording session was finished, patients were managed accordingly and provided with rewards.

              D. Training on the Performance Instrument

              An hour-long training session was provided for the 10 faculty members (the examiners). The faculty then practised scoring using the recorded video clips. During the training, we received some input and made the necessary corrections to the rubrics. No training was given for the comparison instrument because the entire faculty was already accustomed to it. Step 3 established the validity, reliability and accuracy of the performance instrument through a comparative study between the two assessment instruments (the performance and control instruments), each used to evaluate the residents' clinical practice performance as recorded in the videos.

               E. Implementation of Resident Performance Assessment with Performance Instrument

              This step aimed to evaluate the reliability of the instrument and the results of its implementation when used to assess the residents (Step 4). The sample included residents of the Postgraduate Medical Specialist Training Programme in Dermatology and Venereology, Faculty of Medicine, Universitas Indonesia and Universitas Gadjah Mada (UGM), at the basic level (residents in their 1st semester in a clinical setting), intern level (semesters II-V) and independent level (semester VI or higher).

              Sample size: n = 3-4 residents per level per Faculty of Medicine, giving a total of 20. The evaluators were five lecturers per Faculty of Medicine, giving a total of 10, and each lecturer evaluated six residents.

              F. Data Collection

              One week after the training, the instrument was evaluated. Faculty members assessed the performance of the residents in the four video recordings at the same time. Three days later, the groups underwent a rotation to reassess the videos with whichever of the two instruments they had not already used. The examiners were asked to provide feedback and information on the ease of completing the instrument and the clarity of its instructions. For the implementation of resident performance assessment with the instrument, each resident was evaluated by three lecturers simultaneously. The lecturers were grouped randomly; therefore, every lecturer could evaluate six of the ten residents from each group being assessed.

              G. Data Analysis

              The analyses aimed to evaluate validity, reliability, and precision of the instrument for discriminating the performance of the residents as poor, good, or excellent.

              H. Validity and Reliability

              A reliability test was performed, i.e. internal consistency in the form of responses against items in each field (Cronbach's alpha coefficient). Face and content validity were assessed by addressing the relevant performance standards and criteria, and by optimising clarity of instructions, specific criteria, acceptable format, gradation of responses, and correct and comprehensive answers (including all assessed variables). The cut-off score of the instrument was determined using receiver operating characteristic (ROC) curve principles, and was then used to evaluate sensitivity, specificity, and positive and negative predictive values. The accuracy of the instrument was determined to evaluate its precision in distinguishing between good and poor performance.
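Cronbach's alpha can be computed directly from the raw item scores. The following is an illustrative sketch only (the study itself used SPSS; `cronbach_alpha` is a hypothetical helper, not the authors' code):

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for rows of examinee scores (one column per item).

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(scores[0])                                   # number of items
    item_vars = sum(variance(col) for col in zip(*scores))
    total_var = variance([sum(row) for row in scores])   # variance of examinee totals
    return k / (k - 1) * (1 - item_vars / total_var)
```

When items vary together perfectly the statistic reaches 1.0; uncorrelated items drive it toward 0.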

              I. Statistical Analysis

              The statistical analysis was performed using SPSS 11.5 software. Total assessment scores of each examiner were analysed using analysis of variance (ANOVA). Internal consistency was determined using Cronbach's α, and Spearman analysis was performed to obtain p values for validity. Accuracy was determined by comparing pass/fail score results against the video type. To obtain intergroup differences, McNemar's test and kappa analysis were carried out. Qualitative analysis was also performed, especially to evaluate feedback, through several analytical steps.

              III. RESULTS

              A performance instrument was developed with 11 competency components, for which evaluation responses were given in the form of a rubric scale (Appendix B). All 10 faculty members completed an assessment of each of the four videos. Eight examiners had more than 3 years of teaching experience, and five examiners were DV consultants.

              A. Validity

              Face, content, and construct validity remain solid points of reference for validity evaluation (Colliver, Conlee, & Verhulst, 2012; Johnson & Christensen, 2008). Face and content validity were evaluated by five experts from three universities, and their evaluation was used to improve the instrument. Because the rubric scale described the capacity of residents to perform activities according to the Standard of Competency for DV specialists and the domain of physician performance, the experts judged the instrument to have good face, content and construct validity.

              The results of the assessments made on the performance videos with the DVP-Ex showed that the examiners agreed that the performances in the first and fourth videos (the “good” videos) were good (score >60); conversely, the second and third videos (the “bad” ones) were evaluated as poor performances by 10 and 9 out of 10 faculty members, respectively (Table 1).

              Video | Mean score | N  | Standard deviation | Median | Minimum | Maximum | Score >60 | Score <60
              1     | 87.45      | 10 | 12.59              | 89.44  | 56.00   | 100.00  | 90%       | 10%
              2     | 33.54      | 10 | 15.77              | 35.92  | 4.17    | 51.85   | 0         | 100%
              3     | 25.31      | 10 | 16.84              | 25.00  | 3.70    | 64.00   | 10%       | 90%
              4     | 81.96      | 10 | 9.06               | 84.25  | 66.67   | 96.29   | 100%      | 0

              Note: Chi Square, Kruskal–Wallis p < 0.001

              Table 1. Assessment scores for each of the four videos (n = 10)

              Faculty members also gave feedback suggesting that the instrument would be useful for assessing residents’ performance. They also commented that the instrument was more objective than the one currently in use, that it was challenging in that they had to read the instrument carefully in order to use it properly, and that the response options allowed several aspects of the residents’ performance to be assessed.

              B. Validity and Reliability

              Validity of the instrument was measured using Spearman analyses, which showed significant results for all of the competency components (p < 0.001). Reliability was measured as the correlation between each item score and the total score on all relevant items (Cohen, Manion, & Morrison, 2008). Our analysis revealed good overall reliability, with Cronbach's α = 0.96. All components of competency achieved internal reliability scores >0.95. The correlation between each item score on the competency components and the overall score was excellent (range: 0.64–0.99).

              No | Competency Component | Corrected Item-Total Correlation | Alpha if Item Deleted (Cronbach α = 0.96)
              1  | C1                   | 0.76                             | 0.96
              2  | C2                   | 0.81                             | 0.96
              3  | C3                   | 0.79                             | 0.96
              4  | C4                   | 0.76                             | 0.96
              5  | C5                   | 0.84                             | 0.96
              6  | C6                   | 0.82                             | 0.96
              7  | C7                   | 0.88                             | 0.96
              8  | C8                   | 0.99                             | 0.95
              9  | C9                   | 0.64                             | 0.96
              10 | C10                  | 0.90                             | 0.95
              11 | C11                  | 0.89                             | 0.96

              Note: C1 = history-taking, C2 = effective communication, C3 = physical examination, C4 = workup, C5 = diagnosis/ differential diagnosis, C6 = DV management, C7 = information and/ education, C8 = data documentation on medical record, C9 = multidisciplinary consultation, C10 = self-development/ transfer of knowledge, C11 = introspective, ethical, and professional attitude

              Table 2. Analysis of internal consistency for each competency component
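The corrected item-total correlation reported in Table 2 is the Pearson correlation between one item's scores and the total of the remaining items. A minimal sketch (Python for illustration only; the helper names are hypothetical and not part of the authors' SPSS workflow):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def corrected_item_total(scores, item):
    """Correlate one item's column with the total of the *other* items."""
    col = [row[item] for row in scores]
    rest = [sum(row) - row[item] for row in scores]
    return pearson(col, rest)
```

Excluding the item from its own total ("corrected") avoids the inflation that occurs when an item is correlated with a sum that already contains it.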

              Results from instrument | Good video | Poor video | Amount
              Passed                  | 19         | 1          | 20
              Failed                  | 1          | 19         | 20
              Total                   | 20         | 20         | 40

              Note: McNemar’s test: p = 0.50, Kappa Analysis κ = 0.90, p < 0.001; accuracy = 95%

              Table 3. Comparison of the results from the DVP-Ex instrument and video type (n=40)

              It can be concluded that the instrument was able to accurately assess the clinical practice performance demonstrated in the videos (Table 3). The control instrument accurately identified 80% of the insufficient performances, which makes it a valuable tool for assessment during the clinical years (Table 4). From both sets of data, it can be concluded that the DVP-Ex was better than the control instrument in assessing the videos, with superior accuracy (95% vs. 80%) and better agreement with the video type (κ = 0.90 vs. 0.60).

              Results from instrument | Good video | Poor video | Total
              Passed                  | 18         | 6          | 24
              Failed                  | 2          | 14         | 16
              Total                   | 20         | 20         | 40

              Note: McNemar’s test: p = 0.289, Kappa analysis κ = 0.60, p <0.001, accuracy: 80%

              Table 4. Comparison of the results from the control instrument and video type (n=40)
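The accuracy and kappa values quoted in the notes to Tables 3 and 4 follow directly from the cell counts; a quick arithmetic check (illustrative Python sketch; `kappa_and_accuracy` is a hypothetical helper, not the authors' code):

```python
def kappa_and_accuracy(table):
    """Accuracy and Cohen's kappa for a 2x2 instrument-vs-video-type table."""
    n = sum(map(sum, table))
    po = (table[0][0] + table[1][1]) / n                   # observed agreement = accuracy
    row = [sum(r) for r in table]                          # row totals (passed, failed)
    col = [table[0][j] + table[1][j] for j in range(2)]    # column totals (good, poor)
    pe = sum(row[i] * col[i] for i in range(2)) / n ** 2   # agreement expected by chance
    return po, (po - pe) / (1 - pe)

# Cell counts read off Table 3 (DVP-Ex) and Table 4 (control):
# rows = instrument result (passed, failed), columns = video type (good, poor)
dvp_ex = [[19, 1], [1, 19]]     # -> accuracy 0.95, kappa 0.90
control = [[18, 6], [2, 14]]    # -> accuracy 0.80, kappa 0.60
```

Both results reproduce the reported figures: chance agreement is 0.5 for each table, so κ = (accuracy − 0.5)/0.5.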

              C. Implementation of the Instrument

              Using the cut-off score of 60, a reliability test was performed among the instrument evaluators pairwise, i.e. between evaluators I and II, evaluators I and III, and evaluators II and III. The results are shown in Table 5.

               

               

                            |                            | Evaluator I | Evaluator II | Evaluator III
              Evaluator I   | Coefficient of correlation | 1.00        | 0.59(**)     | 0.49
                            | P value                    | .           | 0.01         | 0.07
                            | N                          | 20          | 20           | 14
              Evaluator II  | Coefficient of correlation | 0.59(**)    | 1.00         | 0.79(**)
                            | P value                    | 0.006       | .            | 0.001
                            | N                          | 20          | 20           | 14
              Evaluator III | Coefficient of correlation | 0.49        | 0.79(**)     | 1.00
                            | P value                    | 0.07        | 0.001        | .
                            | N                          | 14          | 14           | 20

              Note: **significant correlation

              Table 5. Analysis of reliability on performance instrument with Spearman’s Rho correlation
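Spearman's rho, used for the pairwise correlations in Table 5, is the Pearson correlation of the rank-transformed scores, with ties receiving their average rank. A self-contained sketch, for illustration only (not the SPSS procedure used in the study):

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of average ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1                       # extend over a run of tied values
            avg = (i + j) / 2 + 1            # average rank, 1-based
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because only ranks enter the computation, the statistic captures any monotonic relationship between two evaluators' scores, not just a linear one.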

              D. Feedback on Assessment with the Performance Instrument

              Most feedback was about skills and the process of the clinical practice being performed. In contrast to the results of another study suggesting that most feedback addresses communication (Pelgrim, Kramer, Mokkink, & van der Vleuten, 2012), only 5% of examiners' remarks mentioned a need to improve communication skills. Additionally, 20% of examiner comments mentioned the importance of attitude, especially as a part of effective communication.

              IV. DISCUSSION

              The present study was conducted to develop a WPBA instrument to assess clinical practice performance, and to obtain psychometric data on the instrument. The DVP-Ex can easily be used by faculty members, and early psychometric evaluation has demonstrated promising levels of validity and reliability.

              We found that examiners experienced some difficulties in completing the instrument; therefore, repeated training sessions are necessary. Further workup or laboratory examination (C4), multidisciplinary consultation (C9), and knowledge transfer and self-development (C10) were not always scored because they were not observable in every clinical encounter. However, these components (C4, C9, and C10) are important and are not assessed at all by other WPBA instruments (Norcini & Burch, 2007; Norcini, 2010).

              The validity evaluation through face and content validity was performed by the experts, who agreed in their approval of the content and construction of the instrument and its relevance to the competencies and performance of physicians. Moreover, the consistency of the examiners in evaluating the performance videos has provided further evidence that the instrument is appropriate for DV residents. Analysis of internal consistency provided ample evidence of the instrument’s reliability. Additionally, the DVP-Ex’s 95% success rate in categorising poor performance as failing offers yet another converging piece of evidence of the instrument’s validity for identifying residents who are struggling.

              At the implementation step, not all inter-evaluator reliability values were good, which might be caused by the evaluators' unfamiliarity with the performance instrument; more intensive training on how to use the instrument may therefore improve inter-evaluator reliability. The advantage of instrument utilisation for evaluators in relation to instrument reliability has been discussed in various studies (Boursicot et al., 2011). A special strategy is required to produce a successful assessment process (Kurtz et al., 2003). Full participation in the assessment process and training, including the provision of feedback, is needed (Norcini & Burch, 2007).

              The promising results for this instrument's ability to differentiate poor and good performance could be the basis for further studies assessing the formative functions of the instrument through repeated assessment of the same resident by several examiners. In addition, further studies are needed to justify whether this instrument can also be used as a summative tool. Limitations of the study are that some of the experts were from the same university as the residents, which could have biased the assessment, and that no training was provided on the level of questioning. Also, considerable training and standardisation of the assessors would be required if this instrument is to be used in a larger population.

              V. CONCLUSION

              DVP-Ex is a reliable and valid instrument for assessing DV residents’ clinical performance. With intensive training for the evaluator, this instrument can correctly classify a poor clinical practice performance as a failed performance according to applicable standards. Therefore, it can improve the DV education programme. 

              Notes on Contributors

              Sandra Widaty is a dermato-venereologist consultant and a fellow of Asia Academy of Dermatology and Venereology. She is a Faculty member in Dermatology and Venereology Post Graduate Training and Medical Education Department of Faculty of Medicine Universitas Indonesia. She is the main investigator in this study. 

              Hardyanto Soebono is a professor and faculty member in Dermatology and Venereology and in the Medical Education Department of the Faculty of Medicine, Universitas Gadjah Mada. He has published extensively in both fields. He contributed to the conceptual development and data analysis, and approved the final manuscript.

              Sunarto is a faculty member and teaches residents in the Pediatrics Department. He has conducted extensive research and published widely in the field of medical education. He contributed to the conceptual development and editing, and approved the final manuscript.

              Ova Emilia received her PhD in Medical Education and teaches in the doctoral programme in Medical Education. Currently, she is the Dean of the Faculty of Medicine, Universitas Gadjah Mada. She contributed to the conceptual development, data analysis and editing, and approved the final manuscript.

              Ethical Approval

              Research Ethics Committee of the Faculty of Medicine, Universitas Gadjah Mada, Number KE/FK/238/EC.

              Acknowledgement

              The authors would like to thank Joedo Prihartono for the statistical calculation and analysis. 

              Funding

              No funding source was required.

              Declaration of Interest

              All authors declared no conflict of interest. 

              References

              Boursicot, K., Etheridge, L., Setna, Z., Sturrock, A., Ker, J., Smee, S., & Sambandam, E. (2011). Performance in assessment: Consensus statement and recommendations from the Ottawa conference. Medical Teacher, 33(5), 370-383.

              Campbell, C., Lockyer, J., Laidlaw, T., & MacLeod, H. (2007). Assessment of a matched-pair instrument to examine doctor – Patient communication skills in practising doctors. Medical Education, 41(2), 123- 129.

              Cate, O. T. (2014). Competency-based postgraduate medical education: Past, present and future. GMS Journal for Medical Education, 34(5), 1-13.

              Cohen, L., Manion, L., & Morrison, K. (2008). Research Methods in Education (6th ed.). London: Routledge.

              Colliver, J. A., Conlee, M. J., & Verhulst, S. J. (2012). From test validity to construct validity and back? Medical Education, 46(4), 366-371.

              Garg, A., Levin, N. A., & Bernhard, J. D. (2012). Structure of skin lesions and fundamentals of clinical diagnosis.  In: L. A. Goldsmith , S. I. Katz, B. A. Gilchrest, A. S. Paller, D. J Leffel & K. Wolff (Eds), Fitzpatrick’s Dermatology in General Medicine, 8e. New York: McGraw-Hill Medical.

              Hejri, S. M., Jalili, M., Shirazi, M., Masoomi, R., Nedjat, S., & Norcini, J. (2017). The utility of mini-clinical evaluation exercise (mini-CEx) in undergraduate and postgraduate medical education: Protocol for a systematic review. Systematic Reviews, 6(1), 146-153.

              Holmboe, E. S. (2014). Work-based assessment and co-production in postgraduate medical training. GMS Journal for Medical Education, 34(5), 1-15.

              Indonesian College of Dermatology and Venereology. (2008). Standard of Competencies for Dermatologists and Venereologists. Jakarta: Indonesian Collegium Dermatology and Venereology.

              Iobst, W. F., Sherbino, J., Cate, O. T., Richardson, D. L., Swing, S. R., Harris, P., … Frank, J. R. (2010). Competency-based medical education in postgraduate medical education. Medical Teacher, 32(8), 651-656.

              Jhorar, P., Waldman, R., Bordelon, J., & Whitaker-Worth, D. (2017). Differences in dermatology training abroad: A comparative analysis of dermatology training in the United States and in India. International Journal of Women’s Dermatology, 3(3), 164-169.

              Johnson, B., & Christensen, L. (2008). Educational Research, Quantitative, Qualitative and Mixed Approaches (3rd ed.). London: Sage Publications, Thousand Oaks.

              Joshi, M. K., Singh, T., & Badyal, D. K. (2017). Acceptability and feasibility of mini-clinical evaluation exercise as a formative assessment tool for workplace based assessment for surgical postgraduate students. Journal of Postgraduate Medicine, 63(2), 100-105.

              Khan, K., & Ramachandran, S. (2012). Conceptual framework for performance assessment: Competency, competence and performance in the context of assessments in healthcare – Deciphering the terminology. Medical Teacher, 34(11), 920-928.

              Kurtz, S., Silverman, J., Benson, J., & Drapper, J. (2003). Marrying content and process in clinical method teaching: Enhancing the Calgary–Cambridge guide. Academic Medicine, 78(8), 802-809.

              Li, H., Ding, N., Zhang, Y., Liu, Y., & Wen, D. (2017). Assessing medical professionalism: A systematic review of instruments and their measurement properties. PLOS One, 12(5), 1-28.

              McKinley, R. K., Fraser, R. C., van der Vleuten, C. P., & Hastings, A. M. (2000). Formative assessment of the consultation performance of medical students in the setting of general practice using a modified version of the Leicester Assessment Package. Medical Education, 34(7), 573-579.

              Naidoo, S., Lopes, S., Patterson, F., Mead, H. M., & MacLeod, S. (2017). Can colleagues’, patients’ and supervisors’ assessments predict successful completion of postgraduate medical training? Medical Education, 51(4), 423-431.

              Norcini, J., & Burch, V. (2007). Workplace-based assessment as an educational tool: AMEE Guide No. 31. Medical Teacher, 29(9), 855-871.

              Norcini, J. J. (2010). Workplace based assessment. In: T. Swanwick (Ed), Understanding Medical Education: Evidence, Theory and Practice (1st ed., pp. 232-245). London UK: The Association for the Study of Medical Education.

              Pelgrim, E. A., Kramer, A. W., Mokkink, H. G., & van der Vleuten, C. P. (2012). The process of feedback in workplace-based assessment: Organisation, delivery, continuity. Medical Education, 46(6), 604-612.

              World Federation for Medical Education. (2015). Postgraduate medical education: WFME global standards for quality improvement. University of Copenhagen, Denmark: WFME Office. Retrieved July 20, 2018, from http://wfme.org/publications/wfme-global-standards-for-quality-improvement-pgme-2015/


              Submitted: 16 April 2020
              Accepted: 24 June 2020
              Published online: 5 January, TAPS 2021, 6(1), 83-92
              https://doi.org/10.29060/TAPS.2021-6-1/OA2251

              Eng Koon Ong 

              Division of Supportive and Palliative Care, National Cancer Centre Singapore, Singapore; Assisi Hospice, Singapore

              Abstract

Introduction: Physician empathy is declining due to a disproportionate focus on technical knowledge and skills. The medical humanities can counter this by fostering connection with our patients. This pilot study aims to investigate the acceptability, efficacy, and feasibility of a humanities educational intervention to develop physician empathy.

              Methods: Junior doctors at the Division of Supportive and Palliative Care at the National Cancer Centre Singapore between July 2018 and June 2019 attended two small-group sessions facilitated by psychologists to learn about empathy using literature and other arts-based materials. Feasibility was defined as a completion rate of at least 80% while acceptability was assessed by a 5-question Likert-scale questionnaire. Empathy was measured pre- and post-intervention using Jefferson’s Scale of Physician Empathy (JSPE) and the modified-CARE (Consultation and Relational Empathy) measure.

Results: Seventeen participants consented, and all completed the programme. Acceptability scores ranged from 18 to 50 out of 50 (mean 38, median 38). There was an increase in JSPE scores (pre-test mean 103.6, SD = 11.0; post-test mean 108.9, SD = 9.9; t(17) = 2.49, p = 0.02). The modified-CARE score increased from a pre-test mean of 22.9 (SD = 5.8) to a post-test mean of 28.5 (SD = 5.9); t(17) = 5.22, p < 0.001.

              Conclusion: Results indicate that the programme was acceptable, effective, and feasible. The results are limited by the lack of longitudinal follow-up. Future studies that investigate the programme’s effect over time and qualitative analysis can better assess its efficacy and elicit the participants’ experiences for future implementation and refinement.

Keywords: Empathy, Humanities, Literature, Palliative Medicine

              Practice Highlights

              • The medical humanities can be used to teach empathy by facilitating reflective practice.
              • This novel educational programme was acceptable, effective, and feasible.
              • Limitations include the lack of longitudinal follow-up and the quantitative nature of assessment.
              • Future studies should investigate the programme’s effect over time and include qualitative analysis.

              I. INTRODUCTION

Empathy can be defined as having feelings that are more congruent with another's situation than one's own by recognising the perspectives of others (Hojat et al., 2002). Higher physician empathy leads to better patient care outcomes and satisfaction (Hall et al., 2002) and has also been associated with lower levels of physician burnout (Lee, Loh, Sng, Tung, & Yeo, 2018). However, studies suggest a worrying trend: empathy levels decline as training progresses for medical students and residents, and decreasing empathy correlates with increasing burnout (Lee et al., 2018). The various reasons for this decline were elicited in a recent systematic review and can be summarised into four main domains (see Table 1) (Neumann et al., 2011).

1. Individual variables: Personality traits, upbringing, and experiences during adulthood.

2. Individual distress: Burnout, depression, and decreased quality of life are associated with decreased empathy levels.

3. Nature of medical practice: Uncertainties increase the vulnerability of the medical practitioner and lead to negative coping mechanisms like depersonalisation and detachment from patients.

4. Learning environment: Inadequate and inappropriate role models and the hidden curriculum cause moral distress and decrease empathy as a consequence of poor coping mechanisms.

Table 1. Reasons contributing to the decline in empathy

The medical humanities are an inter-disciplinary field in which the concepts, content, and methods of art, history, and literature are used to investigate the experience of illness and to understand the professional identity of healthcare providers (Shapiro, Coulehan, Wear, & Montello, 2009). It is hypothesised that the experiences and perspectives illustrated by the medical humanities, through stories depicted in novels, literature, drama, and poetry, can promote the development of empathy by encouraging deep reflection, facilitating meaning-finding and comfort with uncertainty, and providing new perspectives (Bleakley, 2015; Bleakley & Marshall, 2014; Dennhardt, Apramian, Lingard, Torabi, & Arntfield, 2016). The medical humanities have the potential to address the factors listed in Table 1.

              A. Individual Distress:

              The medical humanities allow an avenue for physicians to express difficult emotions encountered in clinical practice like anxiety, guilt, and regret. Such emotions may be due to uncertain disease trajectories, ethical dilemmas, and physical exhaustion. The medical humanities allow such emotions to be expressed and discussed, with the intention to support physicians and decrease distress from burnout.

               B. Nature of Medical Practice:

              The uncertainties of medical practice and the consequent vulnerability of the medical practitioner affect empathy levels. Doctors may develop negative coping mechanisms like depersonalisation that may seemingly help meet the unrealistic expectation that medicine can always cure. To counter this, physicians must be given the time to share their clinical experiences in a safe environment and subsequently support each other by establishing relationships and reducing isolation (Batt-Rawden, Chisolm, Anton, & Flickinger, 2013; Feld & Heyse-Moore, 2006; Wear & Zarconi, 2016). Reflective writings and creative arts are some of the methods that have been used to facilitate such a process (West, Dyrbye, Erwin, & Shanafelt, 2016).

               C. Learning Environment:

Palliative medicine has been touted as providing an ideal environment for imparting empathetic values, in view of its patient-centred philosophy of care (Block & Billings, 1998; Olthuis & Dekkers, 2003). This is achieved through the routine use of the humanities to understand the personhood of our patients and develop empathetic connections. History, art, music, and narratives define our patients' life experiences and influence their responses to disease and treatment. Learning through mentorship and role-modelling of such an approach to patient care allows junior doctors to appreciate the importance of using the humanities to achieve better patient care outcomes.

There is currently no conclusive evidence on the best method of teaching empathy or on who is best placed to teach it. Where educators have tried to teach humanism and empathy in medicine, research on such curricula has been criticised in terms of clinical relevance and methodology (Birden et al., 2013; Ousager & Johannessen, 2010; Perry, Maffulli, Wilson, & Morrissey, 2011; Schwartz et al., 2009; Wear & Zarconi, 2016). However, the potential impact of the humanities on the factors behind declining physician empathy illustrated above suggests that the medical humanities may be an important tool in teaching empathy. This pilot study takes a first step towards filling this gap by establishing the acceptability and feasibility of a pilot humanities education programme based on established conceptual frameworks. The study had two specific aims: 1) to determine the acceptability and feasibility of the proposed curriculum; and 2) to assess the efficacy of the HAPPE programme. The data collected will inform future studies on whether the humanities can be among the best tools for teaching empathy.

              II. METHODS

              A. Intervention Design

The Humanistic Aspirations as a Propeller of Palliative medicine Education (HAPPE) programme was conceived to introduce and develop a novel curriculum in empathy for junior doctors undergoing a palliative medicine rotation. The overall goal of the study was to design an effective humanities-based education programme to teach doctors empathy. Our study draws upon Schön's work on Reflective Practice (see Table 2; Schön, 1987).

Concept of Reflective Practice: “Reflection-in-action” – reflecting during an event and acting on a decision “on the spot”.
Planned activities during HAPPE: Discussion and awareness of perspectives that trigger powerful emotions and empathetic reflections and responses.
Expected outcome: Recalls triggers, leading to empathetic changes in behaviour and decisions in actual practice.

Concept of Reflective Practice: “Reflection-on-action” – reflecting after an event to process feelings and experiences and gain new perspectives.
Planned activities during HAPPE: Use of the rich perspectives of patients, caregivers, and healthcare providers via the humanities, leading to deep reflections.
Expected outcome: Reinforces changes in practice “on the ground”.

Table 2. Application of the Theory of Reflective Practice in the design of the HAPPE programme

              Based on the theory of reflective practice, the components of the HAPPE programme are elaborated in Table 3.  The principles listed are supported by existing research literature (Gibbs, 1988; Shapiro et al., 2009).

Principle 1: Facilitating factors of reflective practice include a safe environment, conducive settings, and trained facilitators.

Corresponding components of HAPPE:
1. The sessions are facilitated by two trained clinical psychologists who are experienced in conducting support group sessions for both patients and staff.
2. To ensure psychological safety for intimate sharing, all participants provided explicit consent for the study. The project was submitted for institutional review board review but was exempted.
3. The sessions are conducted via small-group discussions, and ground rules are set before the start of each session (see Appendix A).
4. Data collected are blinded to the investigator.

Principle 2: Reflective practice is propelled by materials and modalities that provide rich perspectives and trigger strong emotions and empathetic personal inquiry.

Corresponding components of HAPPE:
1. Arts-based materials are used to prompt deep reflection by examining multiple perspectives and challenging the expectations and vulnerabilities of junior doctors.
2. The novel The Death of Ivan Ilyich was chosen for its ability to elicit deep reflections about suffering and care. The interpretation of this apparently ambiguous work of fiction in light of the learner's personal beliefs can stimulate personal growth, develop non-judgmental attributes, and improve coping with uncertainty in medical practice.

Table 3. Components of the HAPPE programme designed according to the theory of reflective practice.

Acceptability was measured by a Likert-scale questionnaire (see Annex 1). Feasibility of the curriculum was defined as a completion rate of at least 80%. The efficacy of the curriculum was measured by the self-reported Jefferson Scale of Physician Empathy (JSPE) (Hojat et al., 2001) as well as the third party-reported modified-Consultation and Relational Empathy (CARE) Measure (Mercer, Maxwell, Heaney, & Watt, 2004).

              B. Study Design

This was a quantitative study that assessed the acceptability, feasibility, and effectiveness of the HAPPE programme pre- and post-intervention. Participants: All junior doctors who rotated through the department between 1st July 2018 and 30th June 2019 were invited to participate in this study. About 30 junior doctors (residents and medical officers) rotate through the division of palliative medicine as part of their postgraduate training yearly. The junior doctors have varying levels of prior training and exposure to palliative medicine. They worked in palliative care teams, each consisting of a consultant, a registrar or resident physician, and a nurse, which assess and manage patients with palliative care needs. The duration of each rotation ranged from 1 to 6 months. An independent research coordinator provided the participants with information regarding the study and obtained written consent from each participant face-to-face.

              C. Intervention

The HAPPE programme consisted of two 1.5-hour small-group discussion sessions held 1 week apart during office hours at the National Cancer Centre Singapore (NCCS), facilitated by two clinical psychologists. The two facilitators are senior psychologists who are trained in counselling and group facilitation and regularly encounter complex clinical scenarios in communication and grief. The programme was repeated at regular intervals throughout the year for all new junior doctors rotating into the department. Junior doctors were considered to have completed the curriculum in its entirety when they attended both sessions of the HAPPE programme during their posting.

In the first session, a brief introduction to the novel The Death of Ivan Ilyich was presented by the facilitators (Charlton & Verghese, 2010; Florijn & Kaptein, 2013). The learners were not required to read the novel in its entirety before the session. The sections of the novel used are found in Annex 2.

The learners were asked the following questions, which addressed the tenets of empathy (standing in the patient's shoes, compassionate care, and perspective-taking):

              1. What was described about Ivan Ilyich’s life preceding his illness that you think was important to know if you were his doctor?
              2. Why do you think Ivan Ilyich was so distressed before he died?
3. How differently do you think you would feel if you were Ivan Ilyich?

               

In the second session, learners were asked to bring along any arts-based material (paintings, literature, music, drama) and to share with the class their reflections on why the material was chosen and how appreciation and/or critique of the art piece helped them develop empathy and patient-centred care. The participants brought materials available on the internet, such as photographs, paintings, illustrations from magazines, and references to non-fiction books that they had previously read.

              Prompting questions included:

              • Why was the material chosen?
              • How did the material trigger reflections on the concept of empathy?
              • What were some of the emotions elicited when reflecting on the concepts of empathy using the materials?

The two clinical psychologists employed techniques that encouraged personal sharing in a safe environment. Participants were reassured that their sharing was confidential and that they were free to leave the session at any point if they felt uncomfortable. Sharing was encouraged by picking up themes of similarity and contrast between participants' contributions; asking questions intended to clarify, reflect, and hypothesise; progressing from participant-to-facilitator communication to between-participant communication; and progressing from talking about Ivan Ilyich, to themselves as if they were Ivan Ilyich, his doctor, or his family member or friend, and finally to themselves.

This study was submitted to the Institutional Review Board for review but was exempted in view of its nature as a medical education project.

              D. Outcomes Assessment

              To assess acceptability, the junior doctors were asked to complete a questionnaire post-intervention (see Annex 1). Feasibility was defined as at least 80% of junior doctors completing the curriculum in its entirety.

Efficacy was assessed using two scales administered pre- and post-intervention:

1. The Jefferson Scale of Physician Empathy (JSPE) is a self-reported, 20-item empathy measure based on a seven-point Likert scale and designed to assess empathy in physicians. It has been validated, has an alpha coefficient of 0.87 for internal consistency, and is the most widely used measure in the literature. There are ten positively worded items and ten negatively worded items; the negatively worded items are reverse scored, from 7 (strongly disagree) to 1 (strongly agree). Scores can range from 20 to 140, with higher scores indicating greater empathy.

               

2. As there are currently no validated tools for assessing the empathy of palliative care doctors, the Consultation and Relational Empathy (CARE) Measure was chosen. It is a 10-item patient-rated questionnaire developed and validated to assess a physician's level of empathy and patient-centred care, with an alpha coefficient of 0.92 for internal consistency. However, as enrolling patients for this purpose was not possible in this study, the measure was modified, with permission from its developer, to generate third party-rated outcomes from the junior doctors' team members (consultant, registrar or resident physician, and nurse) (see Annex 3). As the participants work in small teams of no more than three, all their respective team members were invited to perform the assessment. The raters observed interactions between the junior doctors and their patients during daily work and rated each of the 10 items described in the questionnaire. No prior training was needed.
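To make the reverse-scoring arithmetic of the JSPE concrete, here is a minimal Python sketch. The item layout is hypothetical (the actual JSPE item wording and ordering are proprietary); only the scoring rule described above is taken from the text.

```python
def jspe_total(responses, negative_items):
    """Total a 20-item JSPE response set (each item rated 1-7).

    Negatively worded items are reverse scored: a raw rating r
    contributes 8 - r, so 7 ('strongly agree') on a negative item
    counts as 1. Totals therefore range from 20 to 140.
    """
    assert len(responses) == 20 and all(1 <= r <= 7 for r in responses)
    return sum(8 - r if i in negative_items else r
               for i, r in enumerate(responses))

# Hypothetical layout: suppose items 10-19 are the negatively worded half.
negative = set(range(10, 20))
print(jspe_total([7] * 10 + [1] * 10, negative))  # 140 (maximum score)
print(jspe_total([1] * 10 + [7] * 10, negative))  # 20 (minimum score)
```

This also makes clear why agreeing strongly with every item does not maximise the score: uniform sevens total 80, not 140, because the negative half is reversed.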

               

              III. RESULTS

                A total of 17 junior doctors agreed to participate in the study and all of them completed the programme and assessments. Out of a full score of 50, the acceptability score ranged from 18 to 50. The median and mean were both 38. 

The JSPE scores pre-test ranged from 77 to 123 out of 140, with a mean of 103.6 (SD 11.0). Post-test, the JSPE scores ranged from 93 to 132, with a mean of 108.9 (SD 9.9). This gave a paired t-statistic of 2.49, with a p-value of 0.02.

The modified-CARE scores pre-test ranged from 12 to 31 out of 50, with a mean of 22.9 (SD 5.8). Post-test, the scores ranged from 17 to 37, with a mean of 28.5 (SD 5.9). This gave a paired t-statistic of 5.22, with a p-value of less than 0.001.
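The paired t-statistics reported here are computed from the per-participant pre/post differences. A minimal sketch of that calculation using only the Python standard library follows; the score lists are synthetic illustrations, not the study data.

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t-statistic: the mean of the per-participant differences
    divided by the standard error of those differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Synthetic illustration with three pre/post pairs (differences 1, 2, 3):
t = paired_t([100, 100, 100], [101, 102, 103])
print(round(t, 4))  # 3.4641, i.e. 2 * sqrt(3)
```

In the study itself the same statistic was computed over the 17 participants' pre- and post-intervention scores for each scale.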

                 IV. DISCUSSION

This quantitative pilot study was conducted to investigate the acceptability, efficacy, and feasibility of a novel educational intervention based on the humanities to teach empathy to junior doctors on a palliative medicine rotation. It is the first project under the Humanities Initiative Programme (HIP) at the Division of Supportive and Palliative Care (DSPC) at the National Cancer Centre Singapore (NCCS). The results of this pilot study are encouraging, are consistent with other similar pilot studies that investigated the efficacy of humanities-based programmes in medical education, and will propel the development of the HIP (Perry et al., 2011). The positive results regarding acceptability and feasibility are important, as they suggest that implementing such an intervention on a larger scale, spanning disciplines, is possible. The increase in empathy scores demonstrates the efficacy of the programme, although further analysis is needed to determine whether the change is attributable to other factors, as the intervention was relatively short and its effects may not be sustained.

There are other limitations to this study. The study is limited by the small number of participants in a single institution and by the deficiencies of a self-assessed rating scale (Boud & Falchikov, 1989). Possible reasons for the low enrolment rate include a lack of awareness of the medical humanities and their role in medical education, and the difficulty of balancing clinical duties with educational activities. The programme was also novel, and junior doctors may have been hesitant to enrol owing to uncertainties about its nature.

The limitations of self-assessment tools were mitigated by the use of a third-party empathy measure that allowed triangulation of results, but caution should remain about the clinical significance of the findings. Inherent biases among fellow team members, and the difficulty of finding adequate time in a busy clinical service to observe and accurately grade the participants, may render third-party assessment unreliable. Ideally, an independent party observing the participants during their daily work would reduce bias. Unobtrusive observation would also avoid both conscious and unconscious changes in behaviour arising from the participants' awareness of being observed. Unfortunately, this was not logistically possible in this study.

The use of a modified-CARE measure, which has not been validated for rating by a doctor's colleagues, may also make the post-intervention increase in scores less reliable and valid.

Lastly, the threshold for programme feasibility was set at 80% at the investigator's discretion, owing to a lack of data on feasibility standards from existing studies of humanities-based educational programmes. It is possible that other measures of feasibility are more valid.

Future research will need to address the choice of outcome measures, including the assessment of feasibility and empathy scores. Discrete studies for the design and validation of such measures will lend important rigour to future studies in this field.

Studies that utilise qualitative research methodology could also provide rich data that answer questions about the choice of materials and facilitators. Possible methods include thematic analysis (Braun & Clarke, 2006) and narrative inquiry, a developing methodology for investigating lived experiences in the context of place, sociality, and time (Clandinin & Connelly, 2000). This will help the investigator assess the suitability and transferability of the HAPPE programme to other disciplines with varying participant demographics, and how further refinement in design and methods can improve efficacy and sustainability.

                A. Moving Forward

As this was a pilot study, the investigator chose to focus only on quantitative parameters to achieve the aims of the study. It is recognised that qualitative analysis would provide richer data on the experiences of the participants and further guide the implementation and refinement of humanities-based programmes. Ongoing projects within the institution have been started to address this gap in the pilot study.

Research on humanities programmes in medicine has commonly been criticised on methodological grounds. A literature review of arts-based interventions in medical education found poorly designed methodologies (Perry et al., 2011), while another study on needs assessment noted that only a minority of studies described outcome measures beyond learner satisfaction (Taylor, Lehmann, & Chisolm, 2017). Publications have also been criticised for the lack of a conceptual basis in the design of interventions. This pilot study aimed to address some of these challenges by clearly stating the conceptual theory of reflective practice that underpins the study design. In addition, the Gagne Instructional Plan guided lesson planning through the steps of gaining attention, informing the learner of objectives, stimulating recall, presenting the stimulus, and providing learning guidance (Gagne, Briggs, & Wager, 1988). However, the lack of validated and relevant assessment outcomes remains. Future research should focus on developing suitable assessment tools that achieve their aims without stifling participants' responses. One possible approach is the adoption of formative assessments that focus on feedback, in contrast to summative tools that typically affect the outcomes of appraisals (Taras, 2008).

                Finally, there is a paucity of studies that employ the humanities as educational resources in the Asia-Pacific region. This is despite the rich multi-cultural nature of the societies in this region, many with deep-rooted and unique practices in the arts. The investigator of this study hopes that this pilot programme will inspire like-minded medical educators in the region to embark on similar projects within their institutions and develop the arts as an educational tool for the benefit of both healthcare professionals and patients.

                V. CONCLUSION

                This pilot study has produced encouraging results regarding the use of humanities in medical education. The humanities have the potential for multiple functions in medicine and perhaps most importantly serve to bridge the gap between biomedical sciences and the “art of medicine” (Best, 2015; Chew, 2008; Ong & Anantham, 2019). Further research in this field will provide guidance on the development of a robust educational intervention that adheres to the best practices of medical education research.

                Note on Contributor

                OEK is a consultant at the Division of Supportive and Palliative Care in the National Cancer Centre of Singapore. OEK reviewed the literature, designed the study, engaged the facilitators for the programme, analysed results, and wrote the manuscript.

                Ethical Approval

                This study was submitted to the institution’s review board but received an exemption due to its nature as an educational intervention (CIRB Ref: 2018/2276).

                Acknowledgements

The author would like to acknowledge Ms Tan Yee Pin and Ms Jacinta Phoon from the Division of Psycho-oncology at the National Cancer Centre Singapore, who facilitated the HAPPE sessions. The author would also like to thank Professor Stewart Mercer for his generosity in sharing the CARE measure as an evaluation tool in this study.

                Funding

                This study was supported by the Lien Centre of Palliative Care, Singapore, Education Incubator Grant (Reference code: LCPC-EX18-0001).

                Declaration of Interest

                The author declares no conflict of interest in this study.

                References

                Batt-Rawden, S. A., Chisolm, M. S., Anton, B., & Flickinger, T. E. (2013). Teaching empathy to medical students: An updated, systematic review. Academic Medicine, 88(8), 1171-1177. https://doi.org/10.1097/acm.0b013e318299f3e3

                Best, J. (2015). 22nd Gordon Arthur Ransome oration: Is medicine still an art? Annals of the Academy of Medicine Singapore, 44, 353-357.

                Birden, H., Glass, N., Wilson, I., Harrison, M., Usherwood, T., & Nass, D. (2013). Teaching professionalism in medical education: A Best Evidence Medical Education (BEME) systematic review. BEME Guide No. 25. Medical Teacher, 35(7), e1252-e1266. https://doi.org/10.3109/0142159x.2013.789132

                Bleakley, A. (2015). Medical Humanities and Medical Education: How the Medical Humanities Can Shape Better Doctors. New York: Routledge. https://doi.org/10.4324/9781315771724

                Bleakley, A., & Marshall, R. (2014). Can the science of communication inform the art of the medical humanities? Medical Education, 47(2), 126-133. https://doi.org/10.1111/medu.12056

                Block, S., & Billings, A. (1998). Nurturing Humanism through teaching palliative care. Academic Medicine, 73(7), 763-765. https://doi.org/10.1097/00001888-199807000-00012

                Boud, D., & Falchikov, N. (1989). Quantitative studies of student self-assessment in higher education: A critical analysis of findings. Higher Education, 18, 529-549. https://doi.org/10.1007/bf00138746

                Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. https://doi.org/10.1191/1478088706qp063oa

                Charlton, B., & Verghese, A. (2010). Caring for Ivan Ilyich. Journal of General Internal Medicine, 25(1), 93-95. https://doi.org/10.1007/s11606-009-1177-4

                Chew, C. H. (2008). 5th College of Physicians lecture—A physician’s odyssey: Recollections and reflections. Annals of the Academy of Medicine Singapore, 37, 968-976.

                Clandinin, D. J., & Connelly, F. M. (2000). Narrative Inquiry: Experience and Story in Qualitative Research. San Francisco, CA: Jossey-Bass. https://doi.org/10.1016/b978-008043349-3/50013-x

Dennhardt, S., Apramian, T., Lingard, L., Torabi, N., & Arntfield, S. (2016). Rethinking research in the medical humanities: A scoping review and narrative synthesis of quantitative outcome studies. Medical Education, 50, 285-299. https://doi.org/10.1111/medu.12812

                *Ong Eng Koon
                Division of Supportive and Palliative Care,
                National Cancer Centre Singapore
                11 Hospital Drive, Singapore 169610
                Tel: +6564368462
                Email address: ong.eng.koon@singhealth.com.sg