Exploring online learning interactions among medical students during a self-initiated enrichment year
Submitted: 30 April 2020
Accepted: 8 September 2020
Published online: 4 May 2021; TAPS 2021, 6(2), 66-77
https://doi.org/10.29060/TAPS.2021-6-2/OA2391
Pauline Luk & Julie Chen
The University of Hong Kong, Hong Kong
Abstract
Introduction: A novel initiative allowed third-year medical students to pursue experiential learning during a year-long Enrichment Year programme as part of the core curriculum. ‘connect*ed’, an online virtual community of learning, was developed to provide learning and social support to students and to help them link their diverse experiences with the common goal of being a doctor. This study examined the nature, pattern, and content of online interactions among medical students within this community of learning to identify features that support learning and personal growth.
Methods: This was a mixed quantitative-qualitative study using platform data analytics, social network analysis, and thematic content analysis to examine the nature and pattern of online interactions. Focus group interviews with faculty mentors and medical students were used to triangulate the results.
Results: Students favoured online interactions focused on sharing and learning from each other rather than structured tasks. Multimedia content, especially images, attracted more attention and stimulated more constructive discussion. We identified five patterns of interaction. Degree centrality and reciprocity did not affect team interactivity, but mutual encouragement by team members and mentors promoted a positive team dynamic.
Conclusion: Online interactions that are less structured, relate to personal interests, and use multimedia appear to generate the most meaningful content, and teams do not necessarily need a leader to be effective. A structured online network that adopts these features can better support learners who are geographically separated and engaged in different learning experiences.
Keywords: Online Learning, Undergraduate, Interaction, Experiential Learning
Practice Highlights
- Image-based messages and less structured online activities focused on experience-sharing engage students and stimulate more constructive discussion.
- The proactivity of students and mentors can foster a positive team dynamic and learning experience.
- A team or group leader is not always necessary to promote group interaction.
I. INTRODUCTION
Increasingly, medical schools are recognising the potential of a holistic, experiential curriculum to nurture the professional development of their students (Kallail et al., 2020). A growing body of evidence supports the benefits of experiential learning. Experiential learning has been associated with increased interest in learning (Kallail et al., 2020), a better understanding of career choice (Lyons, 2017), and higher-order critical thinking skills (Alamodi et al., 2018).
Beginning in 2018-19, the Li Ka Shing Faculty of Medicine of The University of Hong Kong (HKUMed) introduced a mandatory, credit-bearing Enrichment Year for all third-year medical students. This initiative provided opportunities for substantive engagement in a personal area of interest related to research, service or humanitarian work, pursuit of a higher degree, or university exchange anywhere in the world in order to further the professional and personal development of students.
Recognising the difficulties students may encounter when they are off-campus and the need to support student experiential learning, we developed an online virtual community of learning called ‘connect*ed’ to provide learning and social support to students and to help them link their diverse experiences with their common goal of becoming a doctor. The idea of an online virtual learning space is well situated within the social constructivist theoretical framework (Vygotsky, 1978), which views social interaction as the basis for learning. Individuals develop and construct knowledge better when interacting with others rather than unilaterally receiving information, so learning is conceptualised as a collaborative process. Building on this idea, Lave and Wenger described ‘communities of practice’ in which socially supported learning takes place (Lave & Wenger, 1991). In this related theory, social learning occurs within communities of practice: groups who share a common interest or domain and who engage and interact in shared activities, thus developing a relationship. This dialogic interaction among the learner, peers, and tutor evolves over time and can take place, and be captured, in the virtual learning space to support the evolution of work (Greenberg, 2006). In the higher education setting, online discussion forums and web 2.0 technologies such as blogs and wikis draw on the benefits of social learning and communities of practice, giving students time to think, contribute, and give and receive feedback to support their learning.
The aim of this study is to examine the nature, pattern, and content of online interactions among medical students within the virtual community of learning, connect*ed, to identify features that support learning and personal growth. The findings will offer insight into how to further optimise collaborative online learning.
II. METHODS
A. Context
During the Enrichment Year, students were allocated to teams, each with a designated faculty mentor. Team composition was designed to maximise the diversity of learning experiences; hence, each team had at least one student doing research, one doing service or humanitarian work, and one pursuing an exchange opportunity abroad. This allowed students to benefit from the experiences of their teammates. Prior to departure, a Launch Day was convened in June 2018 to foster team cohesion and to familiarise students with the connect*ed objectives, the e-learning platform, their mentor, and their student teammates.
We chose to use the commercially developed e-platform, Workplace by Facebook to house connect*ed after extensive consultation and testing with stakeholders. The interface of Workplace is very similar to Facebook but operates in a closed community only accessible to registered connect*ed users. This helped to address legitimate privacy and confidentiality concerns while providing a user-friendly and familiar platform that students and teachers were willing to use.
Teams were encouraged to share their learning experiences with each other and with their mentor on Workplace. Structured learning modules called “Inquiry Pods” (IP) were released online on a regular basis to facilitate sharing and discussion. The themes of the Inquiry Pods were communicator, ethical decision-maker, and global citizen, based on the six educational aims and learning outcomes of the university and the Bachelor of Medicine and Bachelor of Surgery (MBBS) programme (HKU, 2017). Students completed each IP by posting, commenting on, and reacting to the trigger material provided in the IP or to their own or others’ Enrichment Year experiences. Most posts were photos, videos, text, or online information shared via hyperlinks.
connect*ed is a graded component of the Enrichment Year and students must earn a pass (60%) in order to proceed to the next year of study. Team mentors graded each Inquiry Pod as a formative assessment and, at the end of the year, provided a summative assessment based on overall performance in the IPs, online participation, and a team impact presentation. All assessments were rubric-based (Appendix 1: Grading rubrics).
B. Study Design
This was a mixed methods quantitative-qualitative study that combined analyses of platform analytic data and qualitative information drawn from student work and focus group discussions (FGD) used to provide a richer understanding of online learning interactions among students (Ma, 2012).
C. Subjects
In the academic year 2018-19, 206 students participated in the Enrichment Year, undertaking 302 activities in Hong Kong and in 23 different cities around the world (Appendix 2: Activities undertaken by students in 2018-2019). These students were divided into 33 teams of five to eight students, according to gender, destination, and nature of activities, to ensure the most diversified combination of members.
D. Data Sources, Collection and Analysis
1) Level of activity: At the end of the first academic year, we evaluated the students’ online activity by analysing the usage data collected through the Application Programming Interface (API) of Workplace from June 2018 to May 2019. These showed the frequency of activity in terms of students, mentors and teams who posted, commented, replied, and reacted on the platform.
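The frequency analysis described above can be sketched in a few lines. The record structure below is invented for illustration only and does not reflect the actual Workplace API response schema.

```python
from collections import Counter

# Hypothetical activity records as might be exported from the platform:
# each event has an author role ("mentor" or "student") and an activity
# type ("post", "comment", or "reaction"). Field names are illustrative.
events = [
    {"role": "mentor", "type": "post"},
    {"role": "student", "type": "comment"},
    {"role": "student", "type": "reaction"},
    {"role": "mentor", "type": "comment"},
]

# Tally activity per (role, type) pair, then derive per-person averages
# using the cohort sizes reported in the study (33 mentors, 206 students).
counts = Counter((e["role"], e["type"]) for e in events)
n_members = {"mentor": 33, "student": 206}

for (role, kind), total in sorted(counts.items()):
    print(f"{role} {kind}s: {total} (avg {total / n_members[role]:.2f})")
```

Applied to the full export, this kind of tally yields the per-group totals and averages reported in Table 1.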
2) Social network interaction: Social network analysis is a method for studying the structure of relationships and the effect this social structure has on the attitudes, behaviour, and performance of the individual members of a group (Saqr et al., 2018). We extracted the Workplace data using the Workplace Graph API, which represents data as objects (nodes) joined along edges, and developed a web tool (PHP + Vue.js + jQuery) to export the data from Workplace. We focused our analysis on team members’ positions and roles within their teams. The extracted data were imported into the open-source software Gephi, which generated a graph for social network analysis. The software used nodes and edges to represent the connections between team members and depicted the interactions within the social network in terms of the size, gradient, and direction of the communication (Bastian et al., 2009).
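As a rough illustration of the two network measures used in this analysis, degree centrality and reciprocity, the following pure-Python sketch computes both for a small, made-up set of reply edges. The member names and edges are hypothetical; the study itself used Gephi rather than hand-written code.

```python
from collections import defaultdict

# Hypothetical directed edges (responder -> recipient) for one team;
# "Mentor" and "S1".."S3" are invented labels, not data from the study.
edges = [
    ("Mentor", "S1"), ("Mentor", "S2"), ("Mentor", "S3"),
    ("S1", "Mentor"), ("S2", "Mentor"), ("S1", "S2"),
]

nodes = sorted({n for edge in edges for n in edge})
degree = defaultdict(int)
for src, dst in edges:
    degree[src] += 1
    degree[dst] += 1

# Degree centrality: a member's connections normalised by the
# (n - 1) other members they could possibly be connected to.
centrality = {n: degree[n] / (len(nodes) - 1) for n in nodes}

# Reciprocity: the share of directed edges whose reverse also exists,
# i.e. how much of the communication is a two-way exchange.
edge_set = set(edges)
reciprocity = sum((dst, src) in edge_set for src, dst in edges) / len(edges)

print(centrality)   # the mentor scores highest here, a Pattern 1 shape
print(round(reciprocity, 2))
```

In a mentor-centred team such as this toy example, the mentor's centrality dominates; a team with many mutual student-to-student exchanges would instead show a higher reciprocity value and more evenly spread centrality.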
3) Content of posts: The content of posts by each team was analysed for common themes based on the type of messages posted on the platform. Initial codes were generated based on the purpose of the posts and then categorised to find the essence of each theme. This allowed us to identify how students were using the platform and thereby understand the basic functions of the virtual community of learning.
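The coding step can be caricatured as follows. This is a deliberately simplified, keyword-based toy; the theme names and keyword lists are invented for this sketch, whereas the study's actual codes were developed inductively from the posts themselves.

```python
# Invented codebook mapping candidate themes to trigger keywords;
# not the study's real codebook, which was derived inductively.
codebook = {
    "experience-sharing": ["learned", "observed", "experience"],
    "social-support": ["gathering", "meal", "festival"],
    "professional-discussion": ["patient", "diagnosis", "advice"],
}

def code_post(text):
    """Return every theme whose keywords appear in the post text."""
    lowered = text.lower()
    matches = [theme for theme, kws in codebook.items()
               if any(kw in lowered for kw in kws)]
    return matches or ["uncoded"]

print(code_post("People ask us for medical advice before a diagnosis"))
```

In practice, such automated tagging would only be a first pass; the categorisation into final themes described above requires human judgement about the purpose of each post.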
4) Feedback and focus group discussion: We conducted FGD with students and mentors from March to June 2019. There were 13 FGD involving 30 mentors and three involving nine students. Participation in FGD was voluntary and no monetary incentive was given to students or mentors. For mentors, the FGD formed part of the evaluation, feedback, and engagement effort to encourage their continued involvement in the project, which is why all mentors were invited and most participated. For the students’ sessions, subjects were purposively selected from student volunteers who were keen to share their experience, supplemented by deliberate invitations to those who were comparatively inactive in the project. Each interview session lasted 60 to 90 minutes. A semi-structured interview guide with pre-determined questions was used to focus the conversation on desired themes. The questions for mentors and students were similar and covered participants’ experiences with connect*ed, use of the Workplace platform, challenges, and suggestions for improvement. All FGD were recorded through contemporaneous notes that were organised immediately following each session.
III. RESULTS
A. Level of Activity
In total, there were 815 posts, 8198 comments, and 6250 emoticon reactions: like (5843), love (169), haha (152), wow (71), sad (14), and angry (1), made by 206 students and 33 mentors, as summarised in Table 1.
| | Post (average) | Comment (average) | Reaction (average) |
|---|---|---|---|
| Mentors (N=33) | 539 (16.3) | 1484 (44.9) | 3017 (91.4) |
| Students (N=206) | 276 (1.3) | 6714 (32.6) | 3233 (15.7) |
| Total | 815 | 8198 | 6250 |

Table 1. Summary of online interactions in 2018-19
B. Social Network Interaction
The pattern of interactions was represented visually in a social network analysis generated by Gephi. In each diagram, the red node represents the mentor and the green nodes represent the students. The edges between nodes represent interactions: thicker, darker edges indicate more interaction, and the arrows show the direction of communication.
We categorised the patterns according to the number of responses of mentor and students. By comparing the frequency of responses (posts, comments, and reactions), we found that there were five common patterns of interaction that were reflected in all teams, regardless of their level of activity as summarised in Table 2.
| Pattern | Frequency of Posts | Frequency of Comments | Frequency of Reactions | Team identifier |
|---|---|---|---|---|
| 1 | High (from mentor) | High (from mentor) | High (from mentor) | 1, 9, 10, 11, 17, 21, 25, 26, 33 |
| 2 | High (from mentor) | High (from mentor) | Average/Low (from mentor) | 2, 19, 31 |
| 3 | High (from mentor) | Low (from mentor) | Average/Low (from mentor) | 3, 7, 14, 18, 28 |
| 4 | High (from mentor) | Low (from mentor) | High (from mentor) | 4, 6, 12, 13, 15, 20, 23, 27, 29 |
| 5 | High (from mentor) | Average (from mentor and students) | Any frequency | 5, 8, 16, 22, 24, 30, 32 |

Remark: High = participation above the team average; Average = mentor participation similar to that of students; Low = participation below the team average.

Table 2. Patterns of interactions among teams
In general, all mentors were more active than students as teachers initiated new posts and were often keen to share information with students (Appendix 3). Even when students were encouraged to create new posts, they tended to focus on completing the tasks in the Inquiry Pods.
Diagram 1: Patterns of online interactions by teams
1) Pattern 1: Mentor degree centrality: The number of responses from mentors was much higher than that from students. For example, in Team 1, the mentor made 113 posts, 236 comments, and 227 reactions, while the five students each made between 1-5 posts, 38-54 comments, and 14-43 reactions. Mentors were the centre point and driving force of the interaction. Students interacted with others in response to mentor facilitation, so degree centrality was oriented towards the mentor. The thickness of the edges was evenly distributed, indicating a consistent level of interaction among all team members.
2) Pattern 2: Mentor degree centrality: As in Pattern 1, mentors were active in posting and commenting, but gave far fewer reactions than students. The centre point was the mentor together with the most active students in the team, shown by the two thick edges in the diagram. For instance, in Team 2, the mentor made 43 posts, 127 comments, and 19 reactions, while the seven students each made 1 to 12 posts, 28 to 65 comments, and 8 to 66 reactions.
3) Pattern 3: Student degree centrality: Here the thick arrows point towards students, meaning that the interaction was initiated by students; the mentor took a less prominent role in the conversations and sat outside the interaction centre. For instance, in Team 3, the mentor made 12 posts, 19 comments, and 20 reactions, while the seven students each made 1 to 12 posts, 17 to 68 comments, and 0 to 64 reactions. The degree centrality shifted to students.
4) Pattern 4: Student degree centrality: The dynamics of interaction leaned towards active students, represented by thick edges towards certain students. In this pattern, there were usually multiple centre points that did not include the mentor. For instance, in Team 4, the mentor made 11 posts, 27 comments, and 90 reactions, while the seven students each made 0 to 3 posts, 28 to 78 comments, and 0 to 39 reactions. The degree centrality shifted to multiple students.
5) Pattern 5: Diversified degree centrality: Mentors were active in posting, made a similar number of comments to the students, and gave few reactions. In this pattern, there were multiple conversation nodes, most reflecting interactions between students; these interactions were more student-driven, indicating a multi-centred conversation. For instance, in Team 5, the mentor made 18 posts, 36 comments, and 3 reactions, while the six students each made 0 to 3 posts, 31 to 62 comments, and 4 to 29 reactions. There was more interaction between students, shown by the bi-directional arrows, and degree centrality was low, with diversified centres.
These patterns show that teams could have single-centred interaction (Patterns 1 and 2) or multi-centred interaction (Patterns 3, 4, and 5), with each representing different team interactions. Team activity, not the centredness of the interaction, was associated with the effectiveness of collaboration and the completion of tasks. In addition, most teams demonstrated one-way communication when interacting; that is, the reciprocity of the network was low. In Teams 1 and 2, the interaction dynamics favoured the mentors, while in Teams 3, 4, and 5, the dynamics leaned towards active students. In contrast, Team 5 demonstrated strong reciprocity.
However, after comparing the patterns of all 33 teams, there was no indication that any one pattern was better than the others. There was no significant difference in the on-time completion rate of the IP assignments between the five most active teams (86.9%) and all 33 teams (85.5%).
C. Content of Posts
In connect*ed, students shared their Enrichment Year experience using text, photos, videos, or links to other websites. The interactions were predominantly text-based, as text is the easiest medium for posting and responding. However, image-based messages attracted more attention and stimulated more constructive discussion.
There were three particular areas that generated greater levels of interactivity. Firstly, students were very willing to share and reflect on their personal experiences. Taking the ‘Communicator’ Inquiry Pod as an example, students shared their observations on communication in their respective settings by posting on the team wall:
“There is a huge contrast here, where students actively ask questions even if the setting involves 80+ students. I suppose the background behind the two nationalities have a huge role in it, as Asians tend to be a bit more shy compared to the extrovertiveness commonly shared by Westerners. While we should embrace who we were and are, I think it is also beneficial to observe others and learn from such observations.”
Student A (studied abroad)
This text-based conversation thread compared and contrasted effective classroom communication in different countries. It enabled students to reflect and to draw on their own experiences to benefit all team members.
The second area of interest for students was social support. One of the most popular activities was the posting of photos and videos about their Enrichment Year activities including when they are performing social service missions, cooking a gourmet meal or joining group gatherings during festive occasions. Those posts generated numerous responses and reactions indicating a keen interest in reaching out and maintaining social connectedness.
Thirdly, students were more active online when the information being shared related to medical practice, and they were more willing to discuss their views, as shown in this sequence from Team 3:
“Being a MBBS student, people around may ask us for medical advice. They think we are knowledgeable to make a diagnosis based on their description and believe we are able to help. However, as we are not yet qualified, it is inappropriate for us to give any professional opinion. Sometimes, I would like to share what I have learnt and suggest some possible solutions. Nevertheless, at the same time, I am afraid my opinion would affect their health seeking behaviour, for instance, they might just follow what I share instead of seeing a doctor.”
Student B
“It’s true that we are not knowledgeable enough to give medical advice and it will be misleading to our relatives and friends if they take our opinions as professional advice and decide not to seek proper medical opinions. Thus, we should always remind ourselves of the role as medical students and think about the impacts of our words.”
Student C
“I understand your feeling as my relatives and friends also ask me for medical advice. It will be safer to advise them to seek help from medical professionals for diagnosis or other serious health issues. However, as a medical student, I think it is possible for us to give them some lifestyle advice without causing harmful consequences, for instance, smoking cessation, diet with lower cholesterol content and moderate exercise. Although we are not qualified to make a diagnosis at the moment, we can still use our medical knowledge as a way to promote public health and arise their health conscious.”
Student D
Students expressed their opinions more actively when the topic under discussion related to the profession they were aiming to join.
D. Feedback and Focus Group Interviews
Although there were only nine student interviewees, we found repeated themes and content suggesting data saturation. This may be because connect*ed comprised only 10% of the overall Enrichment Year and students did not pay particular attention to this component, resulting in little variation in responses during the FGD.
The main theme that arose from the FGD with students and mentors was about the most rewarding aspect of the online interaction in connect*ed. Both groups indicated that this was the social connectivity attained through student sharing of day-to-day life during the Enrichment Year.
“The photo and video did not need much effort to share with others, but they are more interesting and can let me know more about how others were doing during their Enrichment Year.”
Student E
“I am very interested in knowing the life of others in other universities. I hope they can share more and we could see others’ videos.”
Student F
“Sharing things we learned with the team could help us to be more socialized.”
Student G
Mentors also enjoyed knowing more about students’ Enrichment Year life and believed that students should enjoy themselves while learning.
“The connect*ed is a good example helping students to bridge their knowledge and core value. The sharing of experience (related to the Enrichment Year) is important, it engages students in the discussion”
Mentor X
“The platform support each other very well. This is a platform for socializing and communicating. I know what students are doing if they posted on the team wall.”
Mentor Y
The value of social connectivity for support was further emphasised by students, suggesting that the platform was more useful for social networking than for learning.
“connect*ed provided a platform for us sharing the struggles and support each other when I was having my Enrichment Year.”
Student I
This view was echoed by mentors who believed that connect*ed provided necessary support for students especially those who were overseas. Mentors used the chat function on Workplace to have personalized communication with their members and to offer advice.
“I used the chat function on the Workplace, which is more personal and can support each other very well…. I can have immediate interaction with students.”
Mentor Z
Students found mentors were motivating and encouraged them to interact in teams which led to some mentor-centric team interactions.
“Our mentor is very motivating and encourages all members to participate in the discussion. She guided us through to complete the inquiry pods.”
Student J
“In my team, there are some inactive members who have demotivated me to interact. If there is an active member, I think it would help.”
Student K
The findings also indicated that proactivity by student members, participation by the mentor, responsiveness, and social/non-academic discussions fostered a positive team dynamic and a positive online learning experience, regardless of whether the team interaction was primarily single-centred or multi-centred.
IV. DISCUSSION
This study examined the nature, pattern, and content of online interactions among medical students within a virtual community of learning among the inaugural cohort of the Enrichment Year to identify features that support learning and personal growth.
We found that students favoured online interactions that were less structured, image-rich, and focused on sharing experiences in order to learn from and support one another. Multimedia content, especially images, attracted more attention and stimulated more constructive discussion. This is consistent with findings in the literature showing that images have a positive influence on learning and engagement (Chan & Unsworth, 2011; Stuijfzand et al., 2016). Sharing personal experiences helped students to reflect on their own experience and to explore how others experienced their Enrichment Year. The results support previous studies suggesting that self-reflection and community building enhance experiential learning (Arnold & Paulus, 2010; Pai, 2016). This builds a virtual community that allows students to share their struggles, which students found to be a crucial aspect of giving and receiving social support. The different modes of communication available on Workplace, including text messaging and voice calls as well as social media posting, provided flexible avenues of support to students. The finding is very similar to the outcomes of a project involving a mobile application for experiential learning activities (Schnepp & Rogers, 2017). We also observed that the number of positive reactions (like, love, haha, or wow) far exceeded the number of negative reactions (sad and angry), a visual form of encouragement from mentors and peers that reflected their interest in engaging with each other.
From the patterns derived from our social network analysis, we found that interaction could be uni-directional or bi-directional, but neither was correlated with team effectiveness in completing tasks on time. As seen in the social network patterns identified, active mentors can drive team interaction. However, in contrast with other findings in the literature, degree centrality and reciprocity did not affect the team interactional dynamic (Jan & Vlachopoulos, 2019). Regardless of the directionality of the predominant interaction, active team members or motivating mentors were the key to generating more interaction and enhancing the online learning experience of students.
Ideally, both mentors and all students should be active, but having at least one or two more active students can raise the team dynamic. Once some students are willing to share their experience and give timely responses, this can stimulate others to join. Continued encouragement from active members and mentors can promote a positive team dynamic. In terms of degree centrality, the observation that no pattern of interaction was superior to the others suggests that a single leader is not always necessary for a team to be effective.
This study suggests that interactions occur most naturally when students are doing what they feel is useful, such as maintaining social support with each other and their mentor. To be accepted, learning initiatives such as linking learning with experiential activities will need to be less formal and to integrate more smoothly with students’ demonstrated desire for social support and interest in sharing experience. In addition, attention to team formation and ensuring opportunities to develop team cohesion would be essential, as students in the FGD commented that having team members they did not know well hindered interaction. As connect*ed is a graded component of the Enrichment Year, we observed that the assessment could serve as an external motivator encouraging students to contribute to the work and support their team. However, it could also have a negative impact if it is perceived as an additional burden, pressuring some students to participate for the sake of participating and to do so in an inauthentic way.
V. CONCLUSION
The online virtual community, connect*ed, developed to support experiential learning for medical students, is still at an early stage. Features of connect*ed that facilitated learning and personal growth included a focus on student support and sharing, especially with multimedia; less structured interactions; and teams with active members and/or an active mentor. It is important to note that interaction does not equate to learning (Jan & Vlachopoulos, 2019), so while an online network that adopts these features may better support learners, its effectiveness in achieving formal learning outcomes should be studied further. We will continue to modify and evaluate the functionality of the connect*ed community to ensure it is fit for purpose in supporting students’ needs and learning.
Notes on Contributors
Pauline Luk and Julie Chen contributed to the design and implementation of the research, the analysis of the results, and the writing of the manuscript. PL drafted the manuscript; JC edited and contributed to the intellectual content of the manuscript and provided overall supervision of the project. Both authors approved the final manuscript.
Ethical Approval
This research received approval from the HKU Institutional Review Board (UW18-121). Consent was obtained from participants for the research study.
Acknowledgements
The authors sincerely appreciate the support from the mentors and students who participated in this study, the collaboration with the Education University of Hong Kong, and the administrative and technical support rendered by Mr. Francis Tsoi and Miss Joyce Tsang throughout the project.
Funding
This project was funded by the Hong Kong University Grants Committee (UGC) Funding Scheme for Teaching and Learning Related Proposals (2016-19 Triennium).
Declaration of Interest
The authors report no conflicts of interest.
*Pauline Luk
5/F William MW Mong Block
21 Sassoon Road,
Pokfulam, Hong Kong
Email: pluk@hku.hk
Submitted: 21 August 2020
Accepted: 12 November 2020
Published online: 4 May, TAPS 2021, 6(2), 57-65
https://doi.org/10.29060/TAPS.2021-6-2/OA2378
Nicholas Beng Hui Ng1,2, Mae Yue Tan1,2, Shuh Shing Lee3, Nasyitah binti Abdul Aziz3, Marion M Aw1,2 & Jeremy Bingyuan Lin1,2
1Khoo Teck Puat-National University Children’s Medical Institute, National University Health System Singapore; 2Department of Paediatrics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; 3Centre for Medical Education (CenMED), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
Abstract
Introduction: The coronavirus disease 2019 (COVID-19) pandemic has brought about additional challenges beyond the usual transitional stresses faced by newly qualified doctors. We aimed to evaluate the impact of COVID-19 on interns’ stress, burnout and emotions, and the implications for their training, while exploring their coping mechanisms and resilience levels.
Methods: Newly graduated doctors interning in a Paediatric department in Singapore, who experienced escalation of the pandemic from January to April 2020, were invited to participate. Participants completed the Perceived Stress Scale (PSS), Maslach’s Burnout Inventory (MBI), and Connor Davidson Resilience Scale 25-item (CD-RISC 25) pre-pandemic and 4 months into COVID-19. Group interviews were conducted to supplement the quantitative responses to achieve study aims.
Results: Response rate was 100% (n=10) for post-exposure questionnaires and group interviews. Despite working through the pandemic, interns’ stress levels did not increase, burnout remained low, and resilience remained high. Four themes emerged from the group interviews: the impact of the pandemic on their psychology, duties and training, as well as protective mechanisms. Their responses, particularly on institutional mechanisms and individual coping strategies, helped explain their unexpectedly low burnout and high resilience despite the pandemic.
Conclusion: This study demonstrated that it is possible to mitigate stress and burnout and to preserve resilience in vulnerable healthcare workers such as interns amidst a pandemic. It also showed that a multifaceted approach targeting the institutional, faculty and individual levels can help ensure the continued wellbeing of healthcare workers even in challenging times.
Keywords: COVID-19, Stress, Burnout, Resilience, Junior Doctor, Intern
Practice Highlights
- Intern doctors face additional and unique challenges in a pandemic, besides the usual stresses of their school-to-work transition.
- Our study shows that a multi-faceted approach targeting the institution, faculty and individual can lead to reduced burnout and preserved resilience in these doctors.
I. INTRODUCTION
With the coronavirus disease 2019 (COVID-19) pandemic, there are new stressors contributing to burnout in healthcare workers. We were particularly interested in evaluating the impact of COVID-19 on newly qualified doctors doing their internship, also known as House Officers or post-graduate year 1 doctors in Singapore. This is a particularly vulnerable group of healthcare workers as the school-to-work transitional year is traditionally a challenging period with high reports of burnout (Low et al., 2019; Sturman et al., 2017).
In Singapore, our first case of COVID-19 was confirmed on 23 January 2020. By February 2020, Singapore had one of the highest numbers of cases outside of China (Chia & Moynihan, 2020). A global pandemic was declared on 12 March 2020. In early April 2020, the government tightened local measures with a ‘Circuit Breaker’, akin to the lockdowns in many countries (Ministry of Health Singapore, 2020).
Newly graduated doctors in Singapore complete a 12-month training period (4-month rotations in 3 different disciplines) prior to full medical registration. The period of January to April 2020 fell during their third block and coincided with the full evolution of the pandemic, which brought multiple unexpected changes in work within the hospital. These included new protocols for personal protection, team segregation and mechanisms to cope with the increase in COVID-19 cases. In our department, interns and residents were divided into active and passive teams rotating fortnightly, where the active team shouldered the responsibility of caring for at-risk or COVID-19-positive paediatric patients, with an intense overnight call duty schedule that differed from the weekly frequency of the non-pandemic setting. In addition to work changes, there was also the cancellation of overseas leave and the cessation of scheduled teaching sessions.
With these changes, we aimed to evaluate the impact of the COVID-19 pandemic on interns in our department, focusing on their psychological well-being in terms of stress and burnout, and impact on clinical training. Our secondary aim was to explore the interns’ resilience, coping mechanisms and identify systemic measures they perceived as helpful during this pandemic.
II. METHODS
A. Study Design and Sample
This was a mixed-methods quantitative and qualitative study involving interns who worked from January to April 2020, in a paediatric department at a tertiary academic hospital that actively admitted COVID-19 patients. Informed consent was obtained from all participants for both the quantitative and qualitative components of the study.
B. Quantitative Data Methodology
Pre-pandemic data on perceived stress, burnout and resilience levels were collected a priori in early January 2020, when the interns first joined the department, as part of a baseline evaluation for a separate study. We employed validated scales: the Perceived Stress Scale (PSS) (Cohen et al., 1983), the Maslach Burnout Inventory (MBI) for Health Services Survey (Maslach & Leiter, 2016), and the Connor-Davidson Resilience Scale 25-item (CD-RISC 25) (Connor & Davidson, 2003) to measure stress, burnout and resilience respectively. The PSS measures the perception of stress and is designed to tap how unpredictable, uncontrollable, and overloaded respondents find their lives. Scores of 0-13, 14-26, and 27-40 indicate low, moderate, and high perceived stress, respectively. The MBI is a 22-item inventory scored in 3 domains of burnout: emotional exhaustion (EE), depersonalization (DP), and low personal accomplishment (PA), based on multiple questions for each subscale. We used a strict definition of burnout as fulfilling the criteria in all 3 domains of the MBI (i.e. high EE ≥ 27, high DP ≥ 10, and low PA ≤ 33). A liberal definition (i.e. high EE ≥ 27 and high DP ≥ 10, with or without a low PA) was also measured, as both definitions are widely adopted in the literature (Rotenstein et al., 2018). The CD-RISC 25-item (English version) is a validated scale to measure resilience. It gives a score ranging from 0 to 100, with higher scores reflecting greater resilience. On completion of the posting at the end of April 2020, the interns repeated the same set of questionnaires.
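As a concrete illustration of the cut-offs described above, the following sketch classifies a single respondent’s scores against the PSS bands and both burnout definitions. The function names (`classify_burnout`, `pss_category`) are illustrative only; the study’s actual analysis was performed in SPSS, not with this code.

```python
def pss_category(score: int) -> str:
    """Band a Perceived Stress Scale total (0-40) into low/moderate/high."""
    if score <= 13:
        return "low"
    if score <= 26:
        return "moderate"
    return "high"


def classify_burnout(ee: int, dp: int, pa: int) -> dict:
    """Apply both MBI burnout definitions to one respondent's subscale scores.

    Cut-offs follow the text: high emotional exhaustion (EE) >= 27,
    high depersonalization (DP) >= 10, low personal accomplishment (PA) <= 33.
    """
    high_ee = ee >= 27
    high_dp = dp >= 10
    low_pa = pa <= 33
    return {
        # Strict definition: criteria met in all three domains
        "strict": high_ee and high_dp and low_pa,
        # Liberal definition: high EE and high DP, with or without low PA
        "liberal": high_ee and high_dp,
    }


# Example: high EE and DP but preserved personal accomplishment
print(classify_burnout(ee=30, dp=12, pa=40))  # → {'strict': False, 'liberal': True}
print(pss_category(17))                        # → moderate
```

Under the liberal definition this hypothetical respondent counts as burnt out, but not under the strict one, which is why the two definitions can yield very different prevalence figures, as seen in the results below.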
C. Qualitative Data Methodology: Group Discussions
We conducted group interviews to further evaluate the responses obtained from the questionnaires and to better understand the impact on the interns. Invitation emails were sent to all interns; participation was voluntary. The questions were designed to explore the interns’ challenges, emotions and psychological states, as well as their reflections on coping mechanisms and supportive measures while working in the pandemic. The questions were developed and refined by the authors through discussion and consensus (Appendix 1). Two group interviews were conducted on separate days by the same interviewer, to maintain team segregation and physical distancing. Each group had 5 participants. The sessions were recorded and subsequently transcribed by an independent party.
D. Data Analysis
Quantitative data from the validated scales were scored according to the corresponding manuals. Descriptive and comparative analyses were done with SPSS, Version 23. For the interviews, thematic analysis was conducted. Two of the authors (SS & NAA) read the transcripts to fully understand the data and generated initial codes independently. Next, codes with consistently similar content were grouped into sub-categories, and similar sub-categories were then combined into categories to form themes. Where there were differing views on a code or theme, the authors re-examined the primary data and discussed further to reach consensus.
III. RESULTS
A. Quantitative Results
We had a 90% response rate (n=9) for the pre-exposure and 100% (n=10) for the post-exposure questionnaires. There was no change in PSS scores among the interns despite the pandemic, with median scores in the moderate stress category both post-exposure (17.5) and pre-exposure (17). No intern reported high perceived stress post-exposure. Using the strict definition of burnout, burnout remained low at 20% post-exposure, compared to 11.1% pre-exposure (Table 1). When the more liberal definition of burnout discussed in the methodology section was used, only 20% of participants were burnt out post-exposure, compared to 66.7% pre-exposure. High resilience levels were maintained, with median scores of 74 pre-exposure and 72.5 post-exposure.
| Measures | Pre-exposure (n=9) | Post-exposure (n=10) | p value |
| Perceived Stress Scale (PSS) | | | |
| Median (SD) | 17 (6.75) | 17.50 (5.70) | N.A. |
| Low stress, n (%) | 4 (44.4%) | 3 (30%) | 0.65 |
| Moderate stress, n (%) | 4 (44.4%) | 7 (70%) | 0.37 |
| High stress, n (%) | 1 (11.1%) | 0 (0%) | 0.474 |
| Maslach Burnout Inventory (MBI) | | | |
| No burnout, n (%) | 3 (33.3%) | 4 (40.0%) | 0.999 |
| Strict definition of burnout, n (%) | 1 (11.1%) | 2 (20.0%) | 0.999 |
| Liberal definition of burnout, n (%) | 6 (66.7%) | 2 (20%) | 0.09 |
Table 1: Quantitative results showing scores on the Perceived Stress Scale and Maslach Burnout Inventory of the interns pre-pandemic, compared with scores post-exposure. (SD= Standard Deviation).
B. Qualitative Results
We had 100% participation in the group interviews (n=10). Four themes emerged from the qualitative analysis: psychological impact (feelings), impact on duties, impact on teaching and learning, and protective measures and support systems. These are summarised in Table 2.
Key Theme 1: Psychological Impact (Feelings)
Sub-themes:
a) Loss of control coping with many changes
b) Emotional exhaustion (fear, burnout, uncertainty, loneliness)
c) Positive feelings
Sample quotations:
- “…throughout the pandemic, there were a lot of unexpected changes and uncertainty among the junior doctors especially the PGY1s (referring to interns)…”
- “…COVID gives people much stress due to the uncertainty in a lot of things…”
- “the thought of COVID patients is scary”
- “…if I really contract this (COVID-19) I wouldn’t have too much concern (but) I was more scared I would pass it on to my family…”
- “…stress stemming from fear”
- “…cannot help but experienced feelings of isolation and loneliness… I avoided my mother, who is immunocompromised as I worry about passing the infection to her even when I am off active COVID-care duty…”
- “feeling of being protected alleviated stress and concerns related to contracting the virus”
- “…months during pandemic (in the posting) were enriching and enjoyable…”
- “working during pandemic is deemed as ‘a badge of honour’”
- “felt the months during pandemic situation was a ‘good learning experience’”

Key Theme 2: Impact on Duties
Sub-themes:
a) Changes in clinical duties
b) Dealing with rapidly changing protocols
Sample quotations:
- “felt that manpower shortage coupled with more frequent on-call duties within two weeks causes early burnout”
- “…I think on the ground level the protocol is always bleak, for example who to swab and when…”
- “delayed updating of protocol online led to a bit of confusion”
- “not getting updated instantaneously and lack of accessible to the information”

Key Theme 3: Impact on Teaching and Learning
Sub-themes:
a) Clinical exposure
b) Changes in teaching approaches
Sample quotations:
- “…in terms of the variety of cases in posting, it is significantly affected due to pandemic that changed demographic of attendees”
- “…there wasn’t much teaching on-going until recently when we got the online platforms which I do feel is more helpful…”
- “due to having lesser patients, feels consultants have more time to teach”
- “while there is no group teaching, there is more teaching of cases on wards”

Key Theme 4: Protective Measures and Support Systems
Sub-themes:
a) Rotation system which ensured sufficient manpower and rest
b) Institutional measures for personal protection against COVID-19 infection
c) Senior, peer and staff support
d) Self-adaptability and resilience
Sample quotations:
- “…we have enough manpower to actually toggle between the rotations for COVID-care and non-COVID services…”
- “…PGY1s (Interns) are protected as we don’t swab the patients and we don’t have to expose ourselves to the possible aerolisation of the secretions, so I think that really protected us and relieved our stress…”
- “…regular meetings (with) seniors that sat down to uncover our worries… seniors were open to taking feedback about rostering and manpower…”
- “…I really think it’s the support that has been given by the department and the institution, and the seniors especially have been very supportive…”
- “…think of the hardships faced by other health professionals, one’s situation will not compare to theirs”
- “…stay strong, persevere, and that everyone will get through it together by supporting each other”
- “…remember that it was a choice and that it is also a privilege to be in medicine…”
Table 2: Summary of key themes and sub-themes as well as verbatim quotations from our interns, from the group interviews.
1) Theme 1 – Psychological Impact (Feelings): Most interns perceived that the pandemic had caused drastic changes in their personal and work lives, with various psychological impacts. They expressed increased emotional exhaustion, such as stress and burnout, mainly related to changes in their clinical duties (Theme 2). The interns also shared concerns about the risk of COVID-19 infection to themselves and especially to family and loved ones, which increased their worry and stress. Interns followed physical distancing measures and team segregation at work, but several also avoided their loved ones at home, especially the elderly and immunocompromised; these interns further reported feelings of isolation and loneliness. Positive emotions such as feeling secure, valued and protected existed simultaneously and were mainly associated with the protective measures and support systems (Theme 4) in the workplace. Some also reported that the posting was still enjoyable and felt proud to be working in the pandemic.
2) Theme 2 – Impact on Duties: The interns highlighted the many changes in institutional work processes and their duties due to the pandemic. With the manpower changes, there were pervasive reports of physical fatigue, although some felt the workload was still manageable. Interns also raised the issue of untimely information and unclear protocols, which often led to confusion and uncertainty in their work.
3) Theme 3 – Impact on Teaching and Learning: Comments here were mixed. As a result of strict physical distancing and team segregation, initially planned teaching sessions on general paediatrics were cancelled and the interns felt they “missed out” on their clinical training. Sessions were subsequently conducted using web-based platforms, which many found helpful. All interns felt that learning was restricted in the pandemic. Although it was beneficial to learn about pandemic response and the management of suspected or affected COVID-19 patients, they felt their exposure to general paediatrics was reduced due to the limited variety of ward cases. However, some felt there was better quality of teaching on the ward rounds, as consultants had more time to teach with fewer elective and non-urgent cases in the non-COVID care rotations.
4) Theme 4 – Protective Measures and Support Systems: Despite the impacts on the interns’ psychology, duties and learning, they also described the various protective measures and support systems they perceived had helped them cope. These were also the main reason for the reported positive feelings of protection and support. Departmental and institutional work processes were implemented to take care of the interns’ physical and psychological welfare, such as a rotational system of team segregation, which they reported provided a strict work-rest cycle as well as respite from COVID-care. In addition, seniors and faculty ensured interns were competent and comfortable dealing with COVID-19 patients prior to taking on high-risk duties such as swabbing patients. Support from multiple levels (seniors, department, institution) helped them through. In particular, the seniors and faculty supported the interns through regular “check-in” meetings where they could share concerns and provide feedback. The interns also shared that, as a result of the strong support received, they were able to develop adaptability, perseverance and resilience, and were even grateful to be in healthcare at this time.
IV. DISCUSSION
According to the demand-control-support model (Thomas, 2004), occupational stress causes burnout when job demands are high, individual autonomy is low, and job stress interferes with home life (Campbell et al., 2001; Linzer et al., 2001). On that note, we hypothesised that with the COVID-19 pandemic, interns would have increased stress and burnout, in addition to their routine difficulties in the transition from student to doctor. The pandemic-related concerns our interns had were similar to those of many healthcare workers globally, including the fear of contracting COVID-19 and, more so, of transmitting it to vulnerable loved ones (Chen et al., 2020). Physical fatigue was also seen in our interns given the more intensive work schedule (Sasangohar et al., 2020). Although the total number of admissions during the period was reduced to 40% of the usual load, the need for team segregation led to a smaller pool of interns covering each clinical area. In addition, each intern had to do more in-house night calls while on active service. Segregation also meant less cross-coverage of duties, so interns received less support from peers who would otherwise have been able to help with the workload on the ground. Another important source of reported stress was the frequent changes in clinical workflows coupled with the lack of timely and reliable information (Wu et al., 2020). Many interns also highlighted concerns regarding compromises to their paediatric internship training (Liang et al., 2020). Despite all these, the interns’ perceived stress was objectively maintained without an increase in burnout.
Burnout is known to be inversely related to resilience, and this pattern is also reflected in our results. Resilience is the process of adapting well in the face of adversity, trauma, tragedy, threats or even significant sources of stress (Southwick et al., 2014). Our interns had high resilience scores, above what has previously been published among physicians (McKinley et al., 2020). One reason for this may be the development of resilience through a time of crisis, a phenomenon well encapsulated by Crisis Theory: during a crisis or disequilibrium such as the current pandemic, people attempt to adapt and seek solutions to restore stability (Brooks et al., 2017; Caplan, 1964). The development of resilience is increasingly emphasised as an integral strategy to combat burnout. Potentially, the mitigating factors, coping mechanisms and support shared by our interns in the interviews could explain their low burnout and high resilience.
Our interns perceived that many systemic measures helped them cope with the pandemic, attesting to the importance of institutional leadership in implementing safeguards for psychological health (Dewey et al., 2020; Wu et al., 2020). Protocols relating to staff protection and the availability of personal protective equipment (Rasmussen et al., 2020) were among the measures common to institutions worldwide. Furthermore, interns, being the most junior members of the team, were spared from performing aerosolising procedures such as intubation, nebulisation and airway suctioning, which were deferred to clinicians with prior experience and training. This allowed interns time to build competency and confidence prior to assuming these responsibilities. The interns were also thankful for the protected work-rest cycles (Wu et al., 2020) and for being allowed to take paid leave, which is essential, more so in a pandemic, to reduce fatigue and allow time for rejuvenation.
Other than institutional support, direct support from seniors and faculty featured prominently in our interns’ responses, underscoring the importance of mentorship (Ramanan et al., 2006). Despite feeling that they might not have reliable and timely access to important updates, the interns felt supported under the direct guidance of seniors who took the lead on the ground. Regular fortnightly ‘check-in’ sessions were conducted to elicit concerns, obtain feedback, and ensure continued wellbeing. This channel of communication was well received by interns: they appreciated the faculty’s concern, had the autonomy to contribute to the care of patients, the opportunity to air grievances confidentially and, importantly, closure on concerns they had raised regarding their rotations and training (Fischer et al., 2019). The enhanced collegiality between interns, support from seniors and improved cooperation among healthcare workers during this time of crisis naturally also contributed to reduced burnout levels, a finding well established in the literature (Li et al., 2013).
In terms of the impact on training, teaching sessions were initially discontinued to maintain physical distancing. Moreover, the interns spent a higher proportion of their time providing COVID-19 care, which meant traditional general paediatric exposure was compromised. However, within 4 weeks of the pandemic, departmental teaching activities were restored via web-based sessions, which interns found useful. The role of faculty in persisting with academic continuity is again important in mitigating the impact of the pandemic on learning; some interns felt they had more teaching on the wards as consultants had more time to teach for each patient.
We believe that the perceived continual institutional and senior support allowed our interns to maintain high personal resilience, which could have mitigated their stress and burnout. In this pandemic, interns demonstrated adaptability to the many changes, the ability to persevere, and gratitude amidst the challenges, while focusing on their goal of helping patients and fighting the pandemic, all of which are known features of resilience (Bird & Pincavage, 2016; Zwack & Schweitzer, 2013).
To our knowledge, this is the first research study in the pandemic to objectively evaluate the impact of COVID-19 on interns’ psychological state, resilience and training. However, we recognise our study’s limitations. The small population made it difficult to derive statistical comparisons between the pre- and post-exposure results. However, we believe the temporal exposure of this group of interns to the pandemic during their posting made the pre- and post-pandemic comparison valid. The results were further supported by qualitative findings from full group interview participation (100%) and in-depth discussion, which provided substantial explanations for the trend of results. We recognise that 2-4 months might be a short duration for negative psychological effects such as stress and burnout to set in. Nonetheless, the unprecedented changes and the intensity of work for the interns involved within this period were undoubtedly high. Another study limitation is the inclusion of paediatric interns only, with possibly lower exposure to COVID-19 compared to their counterparts in adult medicine due to decreased disease morbidity and mortality in children. Although this could potentially result in less impact on the psychological factors studied, we believe other interns are likely to face similar concerns and challenges in the pandemic, given their similar backgrounds and job scopes across most departments and disciplines.
This study elucidated the impact of the pandemic on interns in terms of their stress and burnout, as well as their clinical duties and training. Despite increasing concerns over the psychological well-being of healthcare workers in the pandemic, our study has demonstrated that it is possible to mitigate stress and burnout and preserve resilience, even in vulnerable new medical graduates. Our findings objectively validated the importance and effectiveness of a multi-faceted approach targeting the institutional, faculty and individual levels to build resilience and combat burnout in healthcare providers in this pandemic and beyond.
Notes on Contributors
Nicholas BH Ng contributed to conception and design of study, interpretation of data, drafting and critical revising of the article. Mae Yue Tan contributed to analysis and interpretation of data, drafting and critical revising of the article. Shuh Shing Lee contributed to analysis and interpretation of data, drafting and critical revising of the article. Nasyitah bte Abdul Aziz contributed to analysis and interpretation of data, drafting of the article. Marion M Aw contributed to interpretation of data, drafting and critical revising of the article. Jeremy BY Lin contributed to conception and design, interpretation of data, drafting and critical revising of the article. All authors gave final approval of the version to be published.
Data Availability
The data for this study can be found at https://doi.org/10.6084/m9.figshare.12924029.v1. Access to these datasets is subject to approval by the authors of this article.
Ethical Approval
Ethics approval was obtained from the NHG Domain Specific Review Board (DSRB), reference number 2020/00392.
Acknowledgement
The authors would like to thank the interns who participated in this study.
Funding
Funding for this study was obtained from NUHS Fund Limited – Medical Affairs (Education) Fund.
Declaration of Interest
All authors have no conflicts of interest to declare.
References
Bird, A., & Pincavage, A. (2016). A curriculum to foster resident resilience. MedEdPORTAL, 12, 10439. https://doi.org/10.15766/mep_2374-8265.10439
Brooks, S. K., Dunn, R., Amlôt, R., Rubin, G. J., & Greenberg, N. (2017). Social and occupational factors associated with psychological wellbeing among occupational groups affected by disaster: A systematic review. Journal of Mental Health, 26(4), 373-384. https://doi.org/10.1080/09638237.2017.1294732
Campbell, D. A., Jr., Sonnad, S. S., Eckhauser, F. E., Campbell, K. K., & Greenfield, L. J. (2001). Burnout among American surgeons. Surgery, 130(4), 696-702; discussion 702-705. https://doi.org/10.1067/msy.2001.116676
Caplan, G. (1964). Principles of preventive psychiatry. Basic Books.
Chen, Q., Liang, M., Li, Y., Guo, J., Fei, D., Wang, L., He, L., Sheng, C., Cai, Y., Li, X., Wang, J., & Zhang, Z. (2020). Mental health care for medical staff in China during the COVID-19 outbreak. Lancet Psychiatry, 7(4), e15-e16. https://doi.org/10.1016/s2215-0366(20)30078-x
Chia, R., & Moynihan, Q. (2020, February 20). This alarming map shows where the coronavirus has spread in Singapore, one of the worst-hit areas outside of China Business Insider Singapore. Business Insider. https://www.businessinsider.com/coronavirus-singapore-map-shows-spread-worst-hit-outside-china-2020-2?IR=T.
Cohen, S., Kamarck, T., & Mermelstein, R. (1983). A global measure of perceived stress. Journal of Health and Social Behavior, 24(4), 385-396.
Connor, K. M., & Davidson, J. R. (2003). Development of a new resilience scale: The Connor-Davidson resilience scale (CD-RISC). Depression and Anxiety, 18(2), 76-82. https://doi.org/10.1002/da.10113
Dewey, C., Hingle, S., Goelz, E., & Linzer, M. (2020). Supporting clinicians during the COVID-19 pandemic. Annals of Internal Medicine, 172(11), 752-753. https://doi.org/10.7326/M20-1033
Fischer, J., Alpert, A., & Rao, P. (2019). Promoting intern resilience: Individual chief wellness check-ins. MedEdPORTAL, 15, 10848. https://doi.org/10.15766/mep_2374-8265.10848
Li, B., Bruyneel, L., Sermeus, W., Van den Heede, K., Matawie, K., Aiken, L., & Lesaffre, E. (2013). Group-level impact of work environment dimensions on burnout experiences among nurses: A multivariate multilevel probit model. International Journal of Nursing Studies, 50(2), 281–291. https://doi.org/10.1016/j.ijnurstu.2012.07.001
Liang, Z. C., Ooi, S. B. S., & Wang, W. (2020). Pandemics and their impact on medical training: Lessons From Singapore. Academic Medicine. https://doi.org/10.1097/acm.0000000000003441
Linzer, M., Visser, M. R., Oort, F. J., Smets, E. M., McMurray, J. E., & de Haes, H. C. (2001). Predicting and preventing physician burnout: results from the United States and the Netherlands. The American Journal of Medicine, 111(2), 170-175. https://doi.org/10.1016/s0002-9343(01)00814-2
Low, Z. X., Yeo, K. A., Sharma, V. K., Leung, G. K., McIntyre, R. S., Guerrero, A., Lu, B., Lam, C. C. S. F., Tran, B. X., Nguyen, L. H., Ho, C. S., Tam, W. W., & Ho, R. C. (2019). Prevalence of burnout in medical and surgical residents: A meta-analysis. International Journal of Environmental Research and Public Health, 16(9). https://doi.org/10.3390/ijerph16091479
Maslach, C. J. S., & Leiter, M. P. (2016). Maslach burnout inventory manual. Mind Garden Inc.
McKinley, N., McCain, R. S., Convie, L., Clarke, M., Dempster, M., Campbell, W. J., & Kirk, S. J. (2020). Resilience, burnout and coping mechanisms in UK doctors: A cross-sectional study. British Medical Journal Open, 10(1), e031765. https://doi.org/10.1136/bmjopen-2019-031765
Ministry of Health (MOH), Singapore. (2020). Circuit breaker to minimise further spread of COVID-19. https://www.moh.gov.sg/news-highlights/details/circuit-breaker-to-minimise-further-spread-of-covid-19. (Retrieved April 3, 2020)
Ng, N. B. H (2020). The COVID-19 Pandemic: Impact on Paediatric Postgraduate Year One Doctors [Data set]. Figshare. https://figshare.com/s/74c81ca193638a553ea2
Ramanan, R. A., Taylor, W. C., Davis, R. B., & Phillips, R. S. (2006). Mentoring matters. Mentoring and career preparation in internal medicine residency training. Journal of General Internal Medicine, 21(4), 340-345. https://doi.org/10.1111/j.1525-1497.2006.00346.x
Rasmussen, S., Sperling, P., Poulsen, M. S., Emmersen, J., & Andersen, S. (2020). Medical students for health-care staff shortages during the COVID-19 pandemic. The Lancet, 395(10234), e79-e80. https://doi.org/10.1016/s0140-6736(20)30923-5
Rotenstein, L. S., Torre, M., Ramos, M. A., Rosales, R. C., Guille, C., Sen, S., & Mata, D. A. (2018). Prevalence of burnout among physicians: A systematic review. The Journal of the American Medical Association, 320(11), 1131-1150. https://doi.org/10.1001/jama.2018.12777
Sasangohar, F., Jones, S. L., Masud, F. N., Vahidy, F. S., & Kash, B. A. (2020). Provider burnout and fatigue during the COVID-19 pandemic: Lessons learned from a high-volume intensive care unit. Anesthesia and Analgesia, 131(1), 106–111. https://doi.org/10.1213/ane.0000000000004866
Southwick, S. M., Bonanno, G. A., Masten, A. S., Panter-Brick, C., & Yehuda, R. (2014). Resilience definitions, theory, and challenges: Interdisciplinary perspectives. European Journal of Psychotraumatology, 5,(1), 25338. https://doi.org/10.3402/ejpt.v5.25338
Sturman, N., Tan, Z., & Turner, J. (2017). “A steep learning curve”: Junior doctor perspectives on the transition from medical student to the health-care workplace. BMC Medical Education, 17(1), 92. https://doi.org/10.1186/s12909-017-0931-2
Thomas, N. K. (2004). Resident burnout. The Journal of the American Medical Association, 292(23), 2880-2889. https://doi.org/10.1001/jama.292.23.2880
Wu, P. E., Styra, R., & Gold, W. L. (2020). Mitigating the psychological effects of COVID-19 on health care workers. Canadian Medical Association Journal, 192(17), E459-e460. https://doi.org/10.1503/cmaj.200519
Zwack, J., & Schweitzer, J. (2013). If every fifth physician is affected by burnout, what about the other four? Resilience strategies of experienced physicians. Academic Medicine, 88(3), 382-389. https://doi.org/10.1097/ACM.0b013e318281696b
Submitted: 28 July 2020
Accepted: 18 November 2020
Published online: 4 May, TAPS 2021, 6(2), 48-56
https://doi.org/10.29060/TAPS.2021-6-2/OA2367
Oscar Gilang Purnajati1, Rachmadya Nur Hidayah2 & Gandes Retno Rahayu2
1Faculty of Medicine, Universitas Kristen Duta Wacana, Yogyakarta, Indonesia; 2Department of Medical Education, Faculty of Medicine, Universitas Gadjah Mada, Yogyakarta, Indonesia
Abstract
Introduction: Objective Structured Clinical Examination (OSCE) examiners come from various backgrounds, and this variability may affect the way they score examinees. This study aimed to understand how examiners' background variability influences their score agreement on an OSCE procedural skill.
Methods: A mixed-methods study was conducted with an explanatory sequential design. OSCE examiners (n=64) in the Faculty of Medicine, Universitas Kristen Duta Wacana (FoM-UKDW) assessed two videos of Cardio-Pulmonary Resuscitation (CPR) performance, and their level of agreement was determined using Fleiss' kappa. One video portrayed CPR performed according to the performance guideline; the other portrayed CPR performed contrary to the guideline. Primary survey, CPR procedure, and professional behaviour were assessed. To confirm the assessment results qualitatively, in-depth interviews were also conducted.
Results: Fifty-one examiners (79.7%) completed the assessment forms. Across the 18 background categories, there was good agreement (>60%) in the primary survey (4 groups), CPR procedure (15 groups), and professional behaviour (7 groups). In-depth interviews revealed several personal factors involved in scoring decisions: 1) examiners use different references in assessing the skills; 2) examiners weight competencies differently; 3) first impressions may affect examiners' decisions; and 4) clinical practice experience drives examiners to establish a personal standard.
Conclusion: This study identifies several examiner-background factors that allow better agreement on the procedural section (CPR procedure), which has specific assessment guidelines. The personal factors affecting scoring decisions found in this study should be addressed when preparing faculty members as OSCE examiners.
Keywords: OSCE Score, Background Variability, Agreement, Personal Factor
Practice Highlights
- The examiners’ background variability influences the OSCE scoring agreement results.
- The reasons for inaccuracy in examiners' score agreement remain unclear.
- Assessment instruments that lack specificity provide a loophole for examiners to improvise their scoring.
- Personal factors affecting scoring decisions found in this study should be addressed in preparing OSCE examiners.
I. INTRODUCTION
To assess medical students' competencies in a variety of skills, most medical schools in Indonesia implement the Objective Structured Clinical Examination (OSCE), both as a clinical skills examination at the undergraduate stage and as a national exit exam (Rahayu et al., 2016; Suhoyo et al., 2016). Most OSCE stations test both communication domains and specific clinical skills, assessed against rubrics and scoring checklists that rely on examiners' observations (Setyonugroho et al., 2015). A challenge of the OSCE lies in the complexity of standardising scores, which depend heavily on examiners' perceptions (Pell et al., 2010). In a well-designed OSCE, only the examinee's performance should influence the examinee's score, with minimal effects from other sources of variance (Khan et al., 2013). Research shows that examiners' background variability influences OSCE results even when examiners have been asked to standardise their behaviour (Pell et al., 2010). The decisions and behaviour of OSCE examiners affect the quality of assessment, including pass or fail decisions, given the complexity of knowledge, skills, and attitudes in medical education (Colbert-Getz et al., 2017; Fuller et al., 2017).
Examiners' observations also rely on their clinical practice experience, OSCE examining experience, and gender conformity (Mortsiefer et al., 2017). Even in an OSCE held under the most standardised conditions, examiner factors play the biggest role in scoring inaccuracy (Mortsiefer et al., 2017). The reasons for this inaccuracy remain unclear, raising concerns about examiners' scoring agreement in the OSCE and how results might be affected. There is therefore a need to consider the influence of examiners' background variability (gender, educational level, clinical practice experience, length of clinical practice experience, OSCE experience, and OSCE training experience) when preparing teachers as OSCE examiners. This study aimed to understand background variability as a factor influencing examiners' scoring agreement when assessing students' performance of a procedural skill, as a first step in a faculty development programme to ensure a standard quality of examiners.
II. METHODS
A. Study Design
This mixed-methods study used an explanatory sequential design, which is expected to provide more comprehensive results and better understanding than either method alone (Creswell & Clark, 2018).
The study comprised two sequential phases of data collection and analysis (QUANTITATIVE → qualitative). First, quantitative data were collected in a cross-sectional study of the examiners' strength of agreement, calculated using Fleiss' kappa, while they assessed the clinical skill performance recorded in two videos: one portraying CPR performed according to the performance guideline and the other portraying CPR performed contrary to it. We used these two videos to capture more comprehensively the consistency of examiner agreement on both good and poor clinical skill performance.

Figure 1. Mixed-methods explanatory design
In the second phase, in-depth interviews were used to complement the quantitative results, gaining more information and detailed confirmation about how the scores were decided (Stalmeijer et al., 2014). In this stage, the researchers explored and explained examiners' OSCE experiences, their behaviour when scoring a clinical skill examination, and the influence of their backgrounds on their scoring.
B. Materials and/or Subjects
The strength of agreement on the videos' scores came from 64 OSCE examiners at FoM-UKDW; Mortsiefer et al. (2017) explained that more subjects are better when investigating examiner characteristics associated with inter-examiner reliability. In the second phase, in-depth interviews were conducted with six FoM-UKDW examiners, selected by purposive sampling based on their scores and how they represented their own unique backgrounds (Table 1).
The researcher (OGP) provided all participants with written information about the research, addressed ethical issues in an informed consent form, ensured that participants understood the research protocol, and clarified any questions about the study. Participants who agreed to take part signed the informed consent form prior to data collection.
We held the interviews at FoM-UKDW, each lasting a maximum of 30 minutes. The inclusion criteria were being a full-time faculty member, having examined in more than four OSCEs, and having completed OSCE examiner training, on the expectation that such examiners had sufficient interaction with other faculty members and sufficient exposure to medical doctor education (Park et al., 2015). The exclusion criteria were not answering the research invitation and not completing the assessment form. The main researcher (OGP), who conducted the interviews, was male, a student of the Master of Health Professions Education programme at Universitas Gadjah Mada, and a staff member of FoM-UKDW.
C. Statistics
1) Quantitative data analysis: We grouped examiners into 18 groups based on their backgrounds, namely gender, educational level, clinical practice experience, length of clinical practice experience, OSCE experience, and OSCE training experience, as shown in Table 1. We analysed all gathered data using IBM SPSS Statistics 25 and Microsoft Office Excel 365 (IBM Corp., Armonk, NY). Quantitative data are presented as the strength of agreement in percentages. The strength of agreement was calculated using Fleiss' kappa to determine, within each examiner-background group, whether the CPR performances (primary survey, CPR procedure, and professional behaviour) portrayed in the two videos merited a score of "0", "1", "2", or "3" based on the assessment guideline and rubric criteria (Purnajati, 2020). Based on recent research, agreement above 60% was considered substantial and adequate (Stoyan et al., 2017; Vanbelle, 2019).
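The Fleiss' kappa statistic used in this phase can be illustrated with a short pure-Python sketch. This is a minimal illustration of the formula only, not the study's SPSS workflow, and the rating matrix is hypothetical data invented for the example rather than the study's dataset:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a rating matrix.

    counts[i][j] = number of raters who assigned item i to category j.
    Every row must sum to the same number of raters n.
    """
    N = len(counts)                       # number of rated items
    n = sum(counts[0])                    # raters per item (constant)
    k = len(counts[0])                    # number of score categories
    # Observed agreement per item: P_i = (sum_j n_ij^2 - n) / (n(n-1))
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N                  # mean observed agreement
    # Chance agreement from overall category proportions p_j
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 4 rubric items, each scored by 5 examiners
# on the "0"-"3" scale (columns are the four score categories).
ratings = [
    [5, 0, 0, 0],   # all five examiners gave score "0"
    [0, 4, 1, 0],
    [0, 1, 4, 0],
    [0, 0, 0, 5],
]
kappa = fleiss_kappa(ratings)
```

Kappa corrects the raw percentage of matching scores for the agreement expected by chance, which is why the study reports it rather than simple percent agreement.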
2) Qualitative data analysis: In-depth interviews were analysed using thematic analysis. We prepared a structured list of questions consisting of one key question, "What was your experience in scoring the OSCE?", with additional questions exploring examiners' experiences of OSCE scoring, including the use of other references, differences in assessment weighting, use of their own judgement, the effect of clinical practice experience on decisions, and gender-related decision making. The interviews were audio-recorded, transcribed, read, and categorised into themes wherever related. The transcripts and identified themes were then given to an external coder, after which we reached agreement on each theme. No interviews were repeated.
III. RESULTS
A. Quantitative Data Result
We deposited both the quantitative and qualitative data in an online repository (Purnajati, 2020). The participants in the quantitative phase were 64 OSCE examiners who are full-time faculty members. Twelve were excluded because they did not fulfil the inclusion criteria. The fifty-one examiners (79.7%) who returned completed assessment forms are described in Table 1.
Quantitative Phase Participants

| Background | Groups | Number of Participants (N=51) |
|---|---|---|
| Gender | Male | 22 (43%) |
|  | Female | 29 (57%) |
| Education | Bachelor undergraduate | 19 (37%) |
|  | Master's degree | 16 (31%) |
|  | Doctoral degree | 3 (6%) |
|  | Specialist doctor | 13 (25%) |
| Clinical practice experience | General practitioner | 28 (55%) |
|  | Specialist | 14 (27%) |
|  | No clinical practice | 9 (18%) |
| Duration of clinical practice experience | < 2 years | 9 (18%) |
|  | 2-5 years | 17 (33%) |
|  | > 5 years | 25 (49%) |
| OSCE experience | < 2 years | 9 (18%) |
|  | 2-5 years | 24 (47%) |
|  | > 5 years | 18 (35%) |
| OSCE examiner training | < 3 times | 21 (41%) |
|  | 3-5 times | 17 (33%) |
|  | > 5 times | 13 (25%) |

Qualitative Phase Participants

a Video portrayed CPR according to performance guideline. b Video portrayed CPR not according to performance guidelines.

Table 1. Descriptive characteristics of participants
The assessment rubric was divided into three main competencies: (1) primary survey, (2) CPR procedure, and (3) professional behaviour. The results show the overall agreement on each main competency for each examiner-background group, calculated using Fleiss' kappa. The percentages of agreement are shown in Figures 2, 3, and 4.

Figure 2. Primary Survey percentage of overall agreement (n = 51). Agreement above 60% (*) is considered as a substantial and adequate agreement

Figure 3. CPR Procedure percentage of overall agreement (n=51). Agreement above 60% (*) is considered as a substantial and adequate agreement

Figure 4. Professional Behaviour percentage of overall agreement (n=51). Agreement above 60% (*) is considered as a substantial and adequate agreement
After the CPR competency assessment was completed, all examiner-background groups met the agreement cut-off of 60% for the CPR procedure, except examiners with clinical practice experience of <3 years, OSCE examining experience of <2 years, and OSCE examiner training of >5 times (Figure 3). This indicates a good strength of agreement in assessing the CPR procedure regardless of examiners' backgrounds. However, the 60% cut-off was frequently not reached for the primary survey and professional behaviour (Figures 2 and 4), indicating only fair strength of agreement between examiners for these competencies.
B. Qualitative Data Results
Two theme categories were determined: (1) OSCE experience and (2) specific behaviour in the OSCE. The first theme contains three sub-themes: (1) student performance, (2) examiner background effect, and (3) use of the assessment instrument. The second theme consists of five sub-themes: (1) use of assessment references, (2) score weighting, (3) personal inferences, (4) clinical experience, and (5) gender conformity.
Theme 1: Examiners stated that they understood the differences in student performance when performing clinical skills and could distinguish how coherently students performed the skills according to the checklist.
“Very easy in giving an assessment, because everything is in accordance with the assessment rubric”
(ID 35)
“The plot is clear, well organised”
(ID 26)
“You can compare the inadequacies; it is enough to be compared”
(ID 11)
“The 2 different students are quite striking, so in my opinion it is not too difficult”
(ID 28)
Nevertheless, some examiners had difficulty distinguishing student performance when using only a checklist. Examiners felt their backgrounds did not affect the way they scored clinical skill performance, but some backgrounds, such as clinical practice experience, may have the potential to affect their scoring.
“I am trying to avoid personal interpretations, as much as possible, but of course that cannot be 100 percent. In my opinion, the assessment rubric still gives room for subjectivity”
(ID 28)
In this research, examiners appeared to find the assessment instrument easy to understand when scoring the two videos, and their understanding was good.
Theme 2: Interviews revealed that: 1) Examiners use other references such as their clinical experience in assessing the skills;
“If the assessment guideline is unclear, the students are also unclear, yes I will improvise. Or when the assessment guideline is clear and the students are unclear which criteria are included, yes I will improvise”
(ID 35)
“Maybe yes, because once again the template at the beginning is not very clear”
(ID 23)
2) Examiners weight competencies differently; for example, procedural steps are considered more important than the primary survey;
“For those that I feel have a small weight because the instructions are also short, so I don’t have to look carefully”
(ID 24)
“When I feel that competence is not important, it does not get my emphasis, the more emergency that will get more attention.”
(ID 28)
3) The first impression of examinees might affect their decision in scoring their performance;
“That first impression will affect me in giving value. I will be more critical. I see more, pay more attention to the small things they do”
(ID 24)
4) Clinical practice experience drives examiners to establish a personal standard on how a doctor should be;
“Clinical experience when practice is one of the judgments”
(ID 24)
“The reference is just my instinct because it has been running as a doctor after all these years. Yes, I use my previous knowledge”
(ID 26)
And 5) the gender of examinees does not affect their decisions, whereas professionalism (e.g., showing respect to patients) certainly does.
“I pay more attention especially to politeness and professional behaviour”
(ID 24)
“Students of any gender still have the same standard of evaluation, a score of professionalism which is more influential”
(ID 23)
IV. DISCUSSION
Examiners' agreement in this study was high, in almost all examiner groups, when assessing the CPR procedure, which has a fixed and specific protocol. These results are consistent with previous studies showing that assessment of specific cases provides high inter-examiner agreement (Erdogan et al., 2016). Differences in examiners' backgrounds have little influence on their agreement when assessing a specific case. This was supported by examiners' comments in the in-depth interviews that, for the CPR procedure, the assessment instrument was clear and easy to understand, with a clear procedural flow and easily distinguished performances, making it easier for examiners to differentiate student performance. A specific assessment instrument leaves no loophole for examiners to improvise, minimising the opportunity for them to introduce subjectivity. This could lead to high agreement among examiners on specific competencies, as shown in this study, and clinically discriminating, evidence-based checklist items can increase the reliability of the assessment (Daniels et al., 2014).
In contrast, for the primary survey and professional behaviour, whose assessment guides are not as specific as the CPR procedure's, the percentage of agreement between examiner groups was lower, with only a few groups reaching 60% agreement. The in-depth interviews confirmed reasons for this difference: although the examiners tried to minimise their subjectivity, they reported that gaps in the assessment guide still left room for subjectivity. Some examiners were also dissatisfied with the checklist and therefore used their personal judgement in evaluating students.
According to a recent study, this could be due to the lack of specific instructions in general assessment guidelines, which results in lower inter-examiner reliability compared with more specific assessment guidelines (Mortsiefer et al., 2017). The primary survey and professional behaviour sections also contain aspects of communication, which are judged to be more susceptible to bias than physical examination skills because physical examination is better documented, has clearer instructions, and is more widely accepted by examiners (Chong et al., 2018). The validity and reliability of a clinical skills assessment depend on several factors: the student's performance in the exam, the characteristics of the population, the environment, and even the assessment instrument itself can all affect how examiners carry out the assessment (Brink & Louw, 2012). These phenomena appeared in the in-depth interviews, which revealed particular moments when the student being tested did not match the expectations written in the assessment guide, or when the assessment guide was unclear and so left room for examiner subjectivity. The interviews also revealed that examiners apportioned their attention across competencies according to criteria such as the length of information in the assessment rubric, so that competencies considered unimportant did not receive as much attention.
This finding may be in line with previous research stating that constructs and conceptual definitions that leave room for examiner subjectivity cause shifts in attentional focus and in the weighting of judgements, so that examiners differ on which aspects are important (Schierenbeck & Murphy, 2018; Yeates et al., 2013). These differences can lead examiners to reorganise competency weights: simpler and easier competencies (here, those with clearer and more detailed assessment guidelines) are assessed first, while more complex ones (here, guides with lower rigidity) are assessed later, possibly with more use of narrative (Chahine et al., 2015). This reorganisation can reflect the examiners' decision-making, allowing them to direct their attention to the aspects they consider more important, as the examiners revealed in the in-depth interviews in this research.
Personal factors, such as the references used in assessment, are a potential source of variability in examiners' assessments. Examiners are trained in, and understand, the use of assessment instruments, yet produce varying assessments because they do not apply the assessment criteria as intended; instead they draw on personal best practice, use better-performing examinees as benchmarks, consider patient outcomes (e.g., whether the diagnosis is correct, whether patients understand), or use themselves as a comparison (Gingerich et al., 2014; Kogan et al., 2011; Yeates et al., 2013).
Other personal factors, including first impressions, can occur spontaneously and unconsciously and can be a source of differences in judgement between examiners (Gingerich et al., 2011). First impressions based on observation carry the same decisions and influences as social interactions, so it makes sense that first impressions can influence judgements, can be accurate, and can relate to final assessment results, although this does not occur in examiners in general (Wood, 2014; Wood et al., 2017).
In providing assessments, there is room for examiners to weight competencies differently from one another. Assessing against targets that differ from the competency standards, and comparing against the performance of other examinees, lead examiners to recalibrate their own weighting; this explains the variation in assessments and the differences between examiners in which aspects of examinees' performance they consider important (Gingerich et al., 2018; Yeates et al., 2015; Yeates et al., 2013).
The variability of personal factors between examiners can be conceptualised more as differing emphases on building the doctor-patient relationship and/or on particular medical expertise than as variation in the examiners' backgrounds themselves. The examiners' own understanding can be conceptualised as a combined judgement of whether what the examinees do is good enough and whether it is enough to build a doctor-patient relationship.
This research has some limitations. It used only a specific case (i.e., CPR) to minimise bias from the assessment instrument, so as to reveal more of the bias in the examiners themselves; more complicated cases, such as communication skills and clinical reasoning, are needed to provide a fuller picture of how examiners' scores agree. Generalisability is also a limitation, because the study involved examiners from only one medical education institution; however, the participants sufficiently represented the variability of examiner backgrounds.
V. CONCLUSION
This study identifies several factors of examiner background variability that influence examiners' judgement in terms of inter-examiner agreement. Female examiners, those with bachelor-level education, those with less OSCE experience, and non-clinician examiners showed better agreement on the procedural section (CPR procedure), which has specific assessment guidelines. The competencies with less specific assessment guidelines in this research, the primary survey and professional behaviour, had lower agreement among examiners and must be examined more deeply. We should note that the personal factors of OSCE examiners can contribute to assessment discrepancies; however, the reasons for using these personal factors in scoring OSCE performance might be affected by unknown biases that require further research. Therefore, to improve clinical skills assessments such as the OSCE in undergraduate medical programmes, the personal factors affecting scoring decisions found in this study must be addressed when preparing faculty members as OSCE examiners.
Notes on Contributors
Oscar Gilang Purnajati, MD was a student of the Master of Health Professions Education Study Program, Faculty of Medicine, Universitas Gadjah Mada, Indonesia. He conceptualised the research, reviewed the literature, designed the study, acquired funding, conducted the interviews, analysed the quantitative data and transcripts, and wrote the manuscript.
Rachmadya Nur Hidayah, MD., M.Sc., Ph.D is a lecturer in the Department of Medical Education, Faculty of Medicine, Universitas Gadjah Mada, Yogyakarta, Indonesia. She supervised author Oscar Gilang Purnajati, developed the conceptual framework for the study, critically analysed the data, curated the data, and reviewed the final manuscript.
Prof. Gandes Retno Rahayu, MD., M.Med.Ed, Ph.D is a professor in the Department of Medical Education, Faculty of Medicine, Universitas Gadjah Mada, Yogyakarta, Indonesia. She supervised author Oscar Gilang Purnajati, advised on the design of the study, critically analysed the data, gave critical feedback on the conducted interviews, and reviewed the final manuscript.
All the authors have read and approved the final manuscript.
Ethical Approval
This study was approved by Health Research Ethics Committee Faculty of Medicine Universitas Kristen Duta Wacana (Reference No.1068/C.16/FK/2019).
Data Availability
All data were deposited in an online repository. The data is available at Open Science Framework with DOI: https://doi.org/10.17605/OSF.IO/RDP65
Acknowledgements
The author would like to thank Hikmawati Nurrokhmanti, MD, M.Sc for helping with the process of coding the in-depth interview transcripts. The author would also like to thank the staff of the Faculty of Medicine, Universitas Kristen Duta Wacana for supporting the research.
Funding Statement
This work was supported by the Universitas Kristen Duta Wacana (No. 075/B.03/UKDW/2018) as a part of study scholarship.
Declaration of Interest
No potential conflict of interest relevant to this article was reported.
Abbreviations and specific symbols
OSCE: Objective Structured Clinical Examination.
References
Brink, Y., & Louw, Q. A. (2012). Clinical instruments: Reliability and validity critical appraisal. Journal of Evaluation in Clinical Practice, 18(6), 1126-1132. https://doi.org/10.1111/j.1365-2753.2011.01707.x
Chahine, S., Holmes, B., & Kowalewski, Z. (2015). In the minds of OSCE examiners: Uncovering hidden assumptions. Advances in Health Sciences Education: Theory and Practice, 21(3), 609–625. https://doi.org/10.1007/s10459-015-9655-4
Chong, L., Taylor, S., Haywood, M., Adelstein, B.-A., & Shulruf, B. (2018). Examiner seniority and experience are associated with bias when scoring communication, but not examination, skills in objective structured clinical examinations in Australia. Journal of Educational Evaluation for Health Professions, 15(17). https://doi.org/10.3352/jeehp.2018.15.17
Colbert-Getz, J. M., Ryan, M., Hennessey, E., Lindeman, B., Pitts, B., Rutherford, K. A., Schwengel, D., Sozio, S. M., George, J., & Jung, J. (2017). Measuring assessment quality with an assessment utility rubric for medical education. MedEdPORTAL: The Journal of Teaching and Learning Resources, 13, 1-5. https://doi.org/10.15766/mep_2374-8265.10588
Creswell, J. W., & Clark, V. L. P. (2018). Designing and conducting mixed methods research. SAGE Publications, Inc.
Daniels, V. J., Bordage, G., Gierl, M. J., & Yudkowsky, R. (2014). Effect of clinically discriminating, evidence-based checklist items on the reliability of scores from an internal medicine residency OSCE. Advances in Health Sciences Education: Theory and Practice, 19(4), 497-506. https://doi.org/10.1007/s10459-013-9482-4
Erdogan, A., Dong, Y., Chen, X., Schmickl, C., Berrios, R. A. S., Arguello, L. Y. G., Kashyap, R., Kilickaya, O., Pickering, B., Gajic, O., & O’Horo, J. C. (2016). Development and validation of clinical performance assessment in simulated medical emergencies: An observational study. BMC Emergency Medicine, 16, 4. https://doi.org/10.1186/s12873-015-0066-x
Fuller, R., Homer, M., Pell, G., & Hallam, J. (2017). Managing extremes of assessor judgment within the OSCE. Medical Teacher, 39(1), 58-66. https://doi.org/10.1080/0142159X.2016.1230189
Gingerich, A., Kogan, J., Yeates, P., Govaerts, M., & Holmboe, E. (2014). Seeing the ‘black box’ differently: Assessor cognition from three research perspectives. Medical Education, 48(11), 1055–1068. https://doi.org/10.1111/medu.12546
Gingerich, A., Regehr, G., & Eva, K. W. (2011). Rater-based assessments as social judgments: Rethinking the etiology of rater errors. Academic Medicine: Journal of the Association of American Medical Colleges, 86(10), S1-S7. https://doi.org/10.1097/ACM.0b013e31822a6cf8
Gingerich, A., Schokking, E., & Yeates, P. (2018). Comparatively salient: Examining the influence of preceding performances on assessors’ focus and interpretations in written assessment comments. Advances in Health Sciences Education: Theory and Practice, 23(5), 937-959. https://doi.org/10.1007/s10459-018-9841-2
Khan, K. Z., Ramachandran, S., Gaunt, K., & Pushkar, P. (2013). The Objective Structured Clinical Examination (OSCE): AMEE Guide No. 81. Part I: An historical and theoretical perspective. Medical Teacher, 35(9), 1437-1446. https://doi.org/10.3109/0142159X.2013.818634
Kogan, J. R., Conforti, L., Bernabeo, E., Iobst, W., & Holmboe, E. (2011). Opening the black box of clinical skills assessment via observation: A conceptual model. Medical Education, 45(10), 1048-1060. https://doi.org/10.1111/j.1365-2923.2011.04025.x
Mortsiefer, A., Karger, A., Rotthoff, T., Raski, B., & Pentzek, M. (2017). Examiner characteristics and interrater reliability in a communication OSCE. Patient Education and Counseling, 100(6), 1230-1234. https://doi.org/ 10.1016/j.pec.2017.01.013
Park, S. E., Kim, A., Kristiansen, J., & Karimbux, N. Y. (2015). The Influence of Examiner Type on Dental Students’ OSCE Scores. Journal of Dental Education, 79(1), 89-94.
Pell, G., Fuller, R., Homer, M., & Roberts, T. (2010). How to measure the quality of the OSCE: A review of metrics – AMEE guide no. 49. Medical Teacher, 32(10), 802-811. https://doi.org/ 10.3109/0142159X.2010.507716
Purnajati, O. G. (2020). Does objective structured clinical examination examiners’ backgrounds influence the score agreement? [Data set]. Open Science Framework. https://doi.org/ 10.17605/OSF.IO/RDP65
Rahayu, G. R., Suhoyo, Y., Nurhidayah, R., Hasdianda, M. A., Dewi, S. P., Chaniago, Y., Wikaningrum, R., Hariyanto, T., Wonodirekso, S., & Achmad, T. (2016). Large-scale multi-site OSCEs for national competency examination of medical doctors in Indonesia. Medical Teacher, 38(8), 801-807. https://doi.org /10.3109/0142159X.2015.1078890
Schierenbeck, M. W., & Murphy, J. A. (2018). Interrater reliability and usability of a nurse anesthesia clinical evaluation instrument. Journal of Nursing Education, 57(7), 446-449. https://doi.org/10.3928/01484834-20180618-12
Setyonugroho, W., Kennedy, K. M., & Kropmans, T. J. B. (2015). Reliability and validity of OSCE checklists used to assess the communication skills of undergraduate medical students: A systematic review. Patient Education and Counseling, 98(12), 1482-1491. https://doi.org/ 10.1016/j.pec.2015.06.004
Stalmeijer, R. E., McNaughton, N., & Van Mook, W. N. (2014). Using focus groups in medical education research: AMEE Guide No. 91. Medical Teacher, 36(11), 923-939. https://doi.org/ 10.3109/0142159X.2014.917165
Stoyan, D., Pommerening, A., Hummel, M., & Kopp-Schneider, A. (2017). Multiple-rater kappas for binary data: Models and interpretation. Biometrical Journal, 60(5), 381-394. https://doi.org/ 10.1002/bimj.201600267
Suhoyo, Y., Rahayu, G. R., & Cahyani, N. (2016). A national collaboration to improve OSCE delivery. Medical Education, 50(11), 1150–1151. https://doi.org/ 10.1111/medu.13189
Vanbelle, S. (2019). Asymptotic variability of (multilevel) multirater kappa coefficients. Statistical Methods in Medical Research, 28(10-11), 3012-3026. https://doi.org /10.1177/0962280218794733
Wood, T. J. (2014). Exploring the role of first impressions in rater-based assessments. Advances in Health Sciences Education : Theory and Practice, 19(3), 409-427. https://doi.org/ 10.1007/s10459-013-9453-9
Wood, T. J., Chan, J., Humphrey-Murto, S., Pugh, D., & Touchie, C. (2017). The influence of first impressions on subsequent ratings within an OSCE station. Advances in Health Sciences Education : Theory and Practice, 22(4), 969-983. https://doi.org/10.1007/s10459-016-9736-z
Yeates, P., Moreau, M., & Eva, K. (2015). Are examiners’ judgments in osce-style assessments influenced by contrast effects? Academic Medicine : Journal of the Association of American Medical Colleges, 90(7), 975-980. https://doi.org /10.1097/ACM.0000000000000650
Yeates, P., O’Neill, P., Mann, K., & Eva, K. (2013). Seeing the same thing differently: Mechanisms that contribute to assessor differences in directly-observed performance assessments. Advances in Health Sciences Education : Theory and Practice, 18(3), 325-341. https://doi.org/10.1007/s10459-012-9372-1
*Oscar Gilang Purnajati
Faculty of Medicine,
Universitas Kristen Duta Wacana,
Jl. Dr. Wahidin Sudirohusodo No. 5-25.
Yogyakarta City,
Special Region of Yogyakarta
55224, Indonesia.
Tel: +62-274-563929
Email: oscargilang@staff.ukdw.ac.id
Submitted: 8 July 2020
Accepted: 23 October 2020
Published online: 4 May, TAPS 2021, 6(2), 38-47
https://doi.org/10.29060/TAPS.2021-6-2/OA2338
Enjy Abouzeid1, Rebecca O’Rourke2, Yasser El-Wazir1, Nahla Hassan1, Rabab Abdel Ra’oof1 & Trudie Roberts2
1Faculty of Medicine, Suez Canal University, Ismailia, Egypt; 2LIME, University of Leeds, United Kingdom
Abstract
Introduction: Although several factors have been identified as significant determinants of online learning, the human interactions with those factors and their effect on academic achievement are not fully elucidated. This study aims to determine the effect of self-regulated learning (SRL) on achievement in online learning by exploring the relations and interactions among the conception of learning, online discussion, and the e-learning experience.
Methods: A non-probability convenience sample of 128 learners in an online Health Professions Education program completed three self-reported questionnaires assessing SRL strategies, the conception of learning, and the quality of the e-learning experience and online discussion. A scoring rubric was used to assess online discussion contributions. A path analysis model was developed to examine the effect of self-regulated learning on achievement in online learning by exploring the relations and interactions among the other factors.
Results: Path analysis showed that SRL has a statistically significant relationship with the quality of the e-learning experience and the conception of learning, but no correlation with academic achievement or with online discussion. Academic achievement, however, did show a correlation with online discussion.
Conclusion: The study showed a dynamic interaction between the students’ beliefs and the surrounding environment that can significantly and directly affect their behaviour in online learning. Moreover, online discussion is an essential activity in online learning.
Keywords: Online Learning, Conception of Learning, E-learning Experience, Human-Computer Interface, Self-regulated Learning, Path Analysis
Practice Highlights
- The learner who views learning as a constructive process will show better use of self-regulated learning strategies.
- Learners’ beliefs and perceptions can shape the learning experience.
- Online discussion can directly and significantly affect academic achievement in online learning.
- Self-regulated learning is responsible for a small portion of the change in academic achievement.
- Online discussion may affect self-regulated learning negatively.
I. INTRODUCTION
In just a few years, online learning has become part of mainstream medical education for postgraduates in both developed and developing countries. Online learning may provide solutions to many educational problems, especially for health professions graduates: it can help them achieve their developmental and educational goals despite limited time and overburdened schedules. This raises the need for a better understanding of learning in online contexts.
The training that most schools offer students and instructors on online learning is mainly limited to using technologies that allow learners to interact with instructors and other learners effectively and flexibly. However, learners in online learning face several complex challenges arising from the nature of this context. Online learning is a form of distance learning that represents not only access to learning experiences via technology and the internet, but also relies on connectivity, flexibility, and the capacity to promote varied interactions (Hiltz & Turoff, 2005). It is characterised by autonomy and relative isolation due to the lack of face-to-face support. One of the most important challenges is the need for self-regulation skills, which have been reported to matter more in online learning than in traditional learning (Azevedo et al., 2008).
Self-regulation is defined as the degree to which students are metacognitively, motivationally, and behaviourally active participants in their learning process (Zimmerman, 1986). This definition focuses on students' proactive use of specific behaviours to improve their academic achievement. In short, the ability to regulate one's learning process is a critical skill for achieving personal learning objectives in online courses, given the absence of the support and guidance typically available in face-to-face learning environments (e.g., an instructor setting deadlines and structuring the learning process). Online learners must therefore determine when and how to engage with course material with no support beyond the course content and structure, which can pose a challenge for many learners (Lajoie & Azevedo, 2006).
Hence, it seems reasonable to assume that SRL may be a reliable predictor of academic performance. Self-regulated learners have been shown to be more effective learners (Toering et al., 2012) who attain higher grades in medical education (Lucieer et al., 2016). However, the effect of SRL on academic achievement in online learning remains unclear.
Several factors may interact and affect learning in online contexts. Some have received only limited discussion in the medical education literature, while others have had relatively little empirical testing. Although several studies have investigated the effect of the conception of learning on learners' approaches, efforts, and motivation, its effect on self-regulation remains insufficiently explored. Moreover, students in online learning contexts may show different conceptions of learning, as studies have shown that the conception of learning is a context-dependent construct that may differ according to the domain of study or the surrounding context (Chiu et al., 2016; Tsai & Tsai, 2014). Additionally, SRL processes depend on both the learner and the surrounding environment (Bembenutty, 2006). As a result, we assumed that learners' perception of the quality of the surrounding learning environment might directly affect their behaviour and outcomes. In other words, the quality and interactivity of the learning environment may shape learners' attitudes towards the learning experience and influence their behavioural control (Zhao, 2016).

Figure 1: The study conceptual framework
Therefore, a model was hypothesised to explore the interaction between self-regulated learning, the conception of learning, online discussion, and the e-learning experience in an online environment, and how this interaction may affect academic achievement. This cross-sectional study provides an opportunity to advance our knowledge of the learning process in online learning by addressing the following questions:
- What is the relationship between SRL and academic achievement in online learning?
- What are the interactions between personal characteristics, beliefs, behaviours, and environment in online learning?
- Do these interactions affect academic achievement in online learning?
II. METHODS
A. Type of the Study and Setting
An observational cross-sectional study was performed at the Faculty of Medicine, Suez Canal University, Egypt. The Medical Education Department offers postgraduate online learning programs in Medical Education to graduates of health professions education specialties. The program is one of the first online programs in health professions education in the Arab region. It is a two-year program in which students submit weekly assignments through WordPress/Eleum and receive online feedback on the same Learning Management System (LMS). Students also participate in an online discussion forum through the web-based application Listserv on Google Groups.
B. Participants and Sampling
Out of 231 learners in the online program, a non-probability convenience sample of 128 learners was recruited for the current study; of these, 88 participants had contributed to the online discussion. The subjects were selected from all program fellows based on their consent to be included in the study sample. Participants were invited through a mass email containing a detailed description of the nature and purpose of the study and its relevance to the field of medical education. In all cases, fellows were informed that any information they provided in the questionnaires would be treated confidentially.
C. Data Collection Tools
The instruments were selected because they had been constructed and used in relevant contexts, and the final versions of the questionnaires had been validated using factor, reliability, and test-retest analyses.
1) Measuring learners’ self-regulated learning: The Online Self-Regulated Learning Questionnaire (OSLQ) was used to measure the self-regulated learning behaviours of the fellows (Barnard et al., 2008). The OSLQ consists of six subscale constructs including: environment structuring; goal setting; time management; help seeking; task strategies; and self-evaluation.
2) Measuring learners’ conception of learning: The mental model section of the Inventory of Learning Style (ILS) was used to explore the learners’ conception of learning. The questionnaire was kindly provided by J.D. Vermunt, who originally developed this inventory (Vermunt, 1998). The conception of learning section is composed of 25 items categorised under five scales: construction of knowledge, intake of knowledge, use of knowledge, stimulating education & cooperation of learning.
3) Measuring the quality of the e-learning experience: The e-Learning Experience Questionnaire was used to explore the role of the learning environment (Ginns & Ellis, 2007). The questionnaire consists of subscales reflecting students' perceptions of Good Teaching, Good Resources, Clear Goals and Standards, Appropriate Assessment, Generic Skills, Appropriate Workload, and Student Interaction.
4) Online discussion: The fellows' input in the online discussion was assessed using a scoring rubric included in a framework proposed by Nandi et al. (2009). This framework defines several themes on which qualitative online interaction can be designed and assessed. The scoring rubric comprises three broad categories: content, interaction quality, and participation.
5) Academic achievement: The fellows' final grade, calculated as the mean of the educational units' scores (each unit score being the mean of its assignment scores), was used as an indicator of academic achievement. Academic achievement was categorised into five bands according to the final mean of the units: excellent (means 9-10), very good (means 8), good (means 7), pass (means 6), and fail (means below 6).
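The banding scheme above can be sketched as a small function. This is an illustrative sketch only, assuming each band covers final means from its stated value up to the next band; the article does not specify how boundary values are rounded.

```python
def grade_category(unit_means):
    """Categorise a fellow's academic achievement from unit mean scores.

    The final grade is the mean of the educational units' scores (0-10 scale);
    band boundaries are an assumption based on the values stated in the article.
    """
    final = sum(unit_means) / len(unit_means)
    if final >= 9:
        return "excellent"
    if final >= 8:
        return "very good"
    if final >= 7:
        return "good"
    if final >= 6:
        return "pass"
    return "fail"
```

For example, unit means of 9 and 10 give a final mean of 9.5 and the band "excellent".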
III. RESULTS
Data analysis was conducted using the Statistical Package for the Social Sciences (SPSS®) version 20 and IBM SPSS Amos™ version 20. Out of the 231 learners in the Health Professions Education distance learning program, 128 postgraduate learners were included in the study. The sample comprised 40 male and 88 female learners. They were further divided according to their previous academic rank into two groups (Dr: 69 and Prof: 59 students). Student's t-test revealed no significant difference between males and females in SRL, t(126) = 1.43, conception of learning, t(126) = 0.13, quality of e-learning experience, t(126) = 0.78, online discussion, t(126) = -1.46, or academic achievement, t(126) = -0.79, all p > 0.05.
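The group comparison reported here uses two independent groups (40 males, 88 females), which yields df = 40 + 88 - 2 = 126. A minimal pure-Python sketch of the pooled-variance Student's t-test follows; it is an illustrative reimplementation, not the authors' SPSS analysis.

```python
import math

def independent_t(x, y):
    """Pooled-variance independent-samples t statistic.

    Returns (t, df) with df = n1 + n2 - 2, matching the df of 126
    reported for groups of sizes 40 and 88.
    """
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    # Unbiased sample variances of each group
    s1 = sum((v - m1) ** 2 for v in x) / (n1 - 1)
    s2 = sum((v - m2) ** 2 for v in y) / (n2 - 1)
    # Pooled variance and the t statistic
    sp2 = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2
```

Two groups with identical means give t = 0, i.e. no difference to detect.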

Table 1: Correlation between SRL, quality of e-Learning experience, conception of learning, online discussion and academic achievement using Pearson’s product moment correlation.
Table 1 shows that SRL has a statistically significant relation with the quality of the e-learning experience and the conception of learning, while it showed no correlation with academic achievement or online discussion. However, academic achievement did show a correlation with online discussion.
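Table 1's statistic can be sketched in pure Python. This is an illustrative reimplementation of Pearson's product-moment correlation, not the authors' SPSS analysis.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient, as used in Table 1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Sums of cross-products and squared deviations from the means
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

A perfectly proportional pair of variables gives r = 1; a perfectly inverse pair gives r = -1.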

Figure 2: Path analysis for the relationships between SRL, quality of e-Learning experience, conception of learning, online discussion, and academic achievement1.
_______________________
1 Active: active conception of learning group (Use of knowledge & Construction of knowledge); Passive: passive conception of learning group (Intake of knowledge); Interactive: interactive conception of learning group (Stimulating education & Cooperation); Knowledge: prior academic experience; E-experience: quality of e-learning experience; Online_dis: quality of online discussion; SRL: self-regulated learning; Academic: academic achievement; ***: statistically significant at the p = 0.05 level
Figure 2 illustrates the conceptual path model relating the study variables. The model showed a good fit between the tested model and the data (χ² = 5.84, df = 10, χ²/df = 0.584, Comparative Fit Index (CFI) = 1.00, Normed Fit Index (NFI) = 0.96, Root Mean Square Error of Approximation (RMSEA) = 0.00). Some path coefficients were statistically significant (p < 0.05) and some paths also demonstrated practical significance (β > 0.3).
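The reported fit statistics can be checked against common rule-of-thumb cut-offs. The sketch below uses the values reported in the text; the cut-offs (χ²/df < 2, CFI ≥ 0.95, RMSEA ≤ 0.05) are one widely used convention from the structural equation modelling literature, not values taken from this article.

```python
# Fit statistics reported for the path model
chi2, df = 5.84, 10
cfi, rmsea = 1.00, 0.00

ratio = chi2 / df  # relative chi-square, 0.584 as reported
# One common rule of thumb among several in the SEM literature:
good_fit = ratio < 2 and cfi >= 0.95 and rmsea <= 0.05
print(round(ratio, 3), good_fit)  # 0.584 True
```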
The quality of the e-learning experience is directly affected by the active conception of learning (β = 0.45). SRL is affected directly by the quality of the e-learning experience (β = 0.44) and indirectly by the active conception of learning. Online discussion negatively affected SRL (β = -0.09). Academic achievement is directly influenced by online discussion (β = 0.29) and by prior experience/academic rank (knowledge) (β = 0.22), whereas SRL has only a small effect on academic achievement (0.04).
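The indirect path (active conception → e-learning experience → SRL) can be quantified as the product of the standardised coefficients along the chain, a standard rule in path analysis. This small worked example uses only the betas reported above.

```python
# Standardised path coefficients reported in the article
b_active_to_eexp = 0.45  # active conception of learning -> quality of e-experience
b_eexp_to_srl = 0.44     # quality of e-experience -> SRL

# An indirect effect is the product of the coefficients along the path
indirect_active_to_srl = b_active_to_eexp * b_eexp_to_srl
print(round(indirect_active_to_srl, 3))  # 0.198
```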
IV. DISCUSSION
At this time of transformative change in the use of technology in medical education, it is recommended to study how online learning can be improved in terms of the inter-relationship of conception of learning, self-regulated capacity and learner’s achievement. This study is of high relevance to all medical schools that adopt or plan to incorporate online learning in their curricula. It is noteworthy that many medical schools in the Asia Pacific region are increasingly adopting online learning in their programs as it may solve some medical education challenges in the region (Karunathilake & Samaraskera, 2019).
The results of the path analysis revealed that the conception of learning, the quality of the e-learning experience, and online discussion are significant factors for learning in an online context. Although previous studies have explored the effect of satisfaction on SRL (Liaw & Huang, 2013), the link between conceptions of learning, perception of the e-learning experience, and SRL has been discussed in only a few studies so far (Kassab et al., 2015; Zhao & Chen, 2016).
The developed model confirms that as students' perceptions of the quality of the e-learning experience become more positive, their self-reported degree of self-regulation in online learning also increases. This can be explained by students' positive perception of satisfaction and usefulness across different dimensions of the e-learning experience helping them apply positive behaviours, because they are motivated and enjoy the learning experience. This supports researchers who have concluded that user satisfaction and self-regulation are highly correlated in e-learning environments (Liaw & Huang, 2013).
Additionally, the findings of this study show that only the active conception of learning is positively and significantly related to the quality of the learning experience and SRL. This relation can be traced to the role of conceptions of learning in students' learning approach: students with an active conception of learning adopt deeper approaches, which in turn foster learner-content interaction. This interaction affects student motivation and satisfaction (Barger et al., 2016; Tsai et al., 2011).
These findings indicate that as students' active conception of learning becomes more positive, their self-regulation indirectly improves. This point has been tested by the current COVID-19 pandemic, which revealed that students can take learning into their own hands. Enforced online learning is showing that students can play a much more proactive role in content discovery and assume more responsibility for their own growth as learners. In other words, when students' perception of learning changes, they take the reins of their learning (Ciotti, 2020). This is also supported by the extant research literature: Loyens et al. (2008) found structural positive relations between students' constructive conceptions of learning and their use of deep processing and self-regulation strategies. Moreover, the learning conception 'construction of knowledge' was negatively related to external regulation and lack of regulation.
However, the findings did not show a significant relation between SRL and academic achievement; only a small portion of the variation in learners' performance could be explained by their self-regulated learning skills. This finding may be explained by the importance of introducing SRL skills explicitly in the learning objectives and syllabus, with enough space for learners to develop and apply them during program activities. Self-regulated learning skills need to be taught (Zimmerman, 1989), and learners should be given appropriate instruction to guide them in developing and applying these skills. It might be expected that senior or postgraduate learners can develop these skills on their own, given the correlation between maturity and SRL skills (Premkumar et al., 2013; Reio & Davis, 2005). However, studies have shown that the use of learning strategies is domain-specific, and a learner who is highly self-regulated in one situation may be much less so in a new and unfamiliar context (Fisher et al., 2001). It therefore seems important that learners be trained to extend their metacognitive knowledge base and make it more coherent in both undergraduate and postgraduate learning.
It is interesting to note the statistically significant relation between online discussion and academic achievement. The study program provides an interactive learning environment through the listserv activity, a multi-faceted activity that can foster different types of interaction: learner-learner, learner-instructor, and learner-content. These interactions are assumed to affect learners' behaviours and achievement positively; social interaction may therefore be a crucial element in the formation of online learning communities. As demonstrated by previous studies, these interactions enhance the individual's regulation of cognition, metacognition, behaviour, and motivation, which in turn affects achievement (Alzahrani, 2017; Delaney et al., 2019).
Given this, it is somewhat surprising that online discussion negatively affects online self-regulation. Students need to be deeply involved in online discussion so they can plan, monitor, and reflect upon their interactions with other students (Delen & Liew, 2016). The negative relation between online discussion and SRL suggests that students may not be engaging in deep-level interaction with other students for knowledge creation. Instead, many online students participate minimally in discussions, only enough to meet participation requirements (Hew et al., 2010). In the current study, 42% of the participants were evaluated as satisfactory and only 1% as excellent, while 32% of the participants had no input in the discussion.
Additionally, the design of the online forum, especially the proportion of online interaction required for assessment purposes and how the online discussion is evaluated, may also be a factor in these results. The small contribution of the online discussion evaluation to the final grade in the current study may cause students not to take online interaction with other students seriously. This point was also reported by Cho and Cho (2017), who found that online discussion is often evaluated by the number of posts and accounts for only 10% of total grades.
A. Study Limitation
Although the research design of the current study does not lack rigor, the data must be interpreted with caution. Given the relatively small sample size and the sampling technique, the findings might not hold in a larger population, and the sample may also have affected the interactions in the path analysis. Moreover, the tool used to measure students' self-regulated learning skills was self-reported: some students may have over- or underestimated their skills, which may have influenced the findings.
V. SIGNIFICANCE AND CONCLUSION
This study offers insight into the learning process in online environments; this information can potentially guide future developers of online learning programs in identifying the significant factors that shape students' learning experience and affect the quality of online programs in the region. The study provides evidence that structure and interaction are critical factors in online learning and that student beliefs and interactivity can play an important role in achievement and in the perception of the e-learning experience. Moreover, it confirms the importance of the quality of online discussion in online learning, given its direct and significant relationship with academic achievement.
Notes on Contributors
Enjy Abouzeid reviewed the literature, designed the study, developed the methodological framework of the study, collected the data, analysed the data, and wrote the manuscript. Rebecca O’Rourke advised on the design of the study and gave critical feedback on manuscript drafts. Yasser El-Wazir advised on the design of the study and gave critical feedback on manuscript drafts. Nahla Hassan gave critical feedback on manuscript drafts. Rabab Abdel Ra’oof advised on the design of the study and gave critical feedback on manuscript drafts. Trudie Roberts advised on the design of the study and gave critical feedback on manuscript drafts. All authors have read and approved the final manuscript.
Ethical Approval
All students were involved in the study voluntarily, and the purpose of the study was clearly communicated to them. Informed consent, covering the purpose, terms, and conditions, was obtained from participants. Approval from the Research Ethics Committee, Faculty of Medicine, Suez Canal University (No. 2455) was obtained before data collection began.
Funding
No funding was received for this research.
Declaration of Interest
The authors report no conflicts of interest in this work.
References
Alzahrani, M. (2017). The effect of using online discussion forums on students’ learning. Turkish Online Journal of Educational Technology, 16(1), 164-176.
Azevedo, R., Moos, D. C., Greene, J. A., Winters, F. I., & Cromley, J. C. (2008). Why is externally-regulated learning more effective than self-regulated learning with hypermedia? Educational Technology Research and Development, 56(1), 45-72.
Barger, M. M., Wormington, S. V., Huettel, L. G., & Linnenbrink-Garcia, L. (2016). Developmental changes in college engineering students’ personal epistemology profiles. Learning and Individual Differences, 48, 1-8.
Barnard, L., Paton, V. O., & Lan, W. Y. (2008). Online self-regulatory learning behaviours as a mediator in the relationship between online course perceptions with achievement. International Review of Research in Open and Distance Learning, 9(2), 1-11.
Bembenutty, H. (2006). Self-regulation of learning. Academic Exchange Quarterly, 10(4), 221-248.
Chiu, Y. L., Lin, T. J., & Tsai, C. C. (2016). The conceptions of learning science by laboratory among university science-major students: Qualitative and quantitative analyses. Research in Science & Technological Education, 34(3), 359-377.
Cho, M.-H., & Cho, Y.-J. (2017). Self-regulation in three types of online interaction: A scale development. Distance Education, 38(1), 70-83. https://doi.org/10.1080/01587919.2017.1299563
Ciotti. (2020). Covid-19 is transforming how we think about online learning. Retrieved March 30, 2020, from https://enterprise.press/blackboards/covid-19-transforming-think-online-learning/2020
Delaney, D., Kummer, T.‐F., & Singh, K. (2019). Evaluating the impact of online discussion boards on student engagement with group work. British Journal of Educational Technology, 50(2), 902-920. https://doi.org/10.1111/bjet.12614
Delen, E., & Liew, J. (2016). The use of interactive environments to promote self-regulation in online learning: A literature review. European Journal of Contemporary Education, 15(1), 24-33.
Fisher, M., King, J., & Tague, G. (2001). Development of a self-directed learning readiness scale for nursing education. Nurse Education Today, 21(7), 516-525. https://doi.org/10.1054/nedt.2001.0589
Ginns, P., & Ellis, R. (2007). Quality in blended learning: Exploring the relations between on-line and face-to-face teaching and learning. The Internet and Higher Education, 10(1), 53-64.
Hew, K. F., Cheung, W. S., & Ng, C. S. L. (2010). Student contribution in asynchronous online discussion: A review of the research and empirical exploration. Instructional Science. 38(6), 571-606.
Hiltz, S. R., & Turoff, M. (2005). Education goes digital: The evolution of online learning and the revolution in higher education. Communications of the Association for Computing Machinery. 48(10), 59-64.
Submitted: 24 June 2020
Accepted: 8 September 2020
Published online: 4 May, TAPS 2021, 6(2), 31-37
https://doi.org/10.29060/TAPS.2021-6-2/OA2328
Julie Yun Chen1,2, Weng-Yee Chin1, Agnes Tiwari3, Janet Wong3, Ian C K Wong4, Alan Worsley4, Yibin Feng5, Mai Har Sham6, Joyce Pui Yan Tsang1,2 & Chak Sing Lau7
1Department of Family Medicine and Primary Care, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong; 2Bau Institute of Medical and Health Sciences Education, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong; 3School of Nursing, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong; 4Department of Pharmacology and Pharmacy, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong; 5School of Chinese Medicine, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong; 6School of Biomedical Sciences, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong; 7Department of Medicine, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong
Abstract
Introduction: The demanding nature of medical and health sciences studies can cause stress among students in these disciplines, affecting their wellbeing and academic performance. The Perceived Stress Scale (PSS-10) is a widely used measure of perceived stress among medical students and healthcare professionals that has not yet been validated among medical and health sciences students in Hong Kong. The aim of this study is to establish the construct validity and reliability of the PSS-10 in this context.
Methods: 267 final year medical and health sciences students were surveyed using the PSS-10. The data were analysed using exploratory factor analysis for construct validity and Cronbach’s alpha coefficient and corrected item-total correlations for reliability.
Results: Exploratory factor analysis revealed a two-factor structure for the PSS-10, with Cronbach’s alphas of 0.865 and 0.796, indicating good internal consistency. Corrected item-total correlations were satisfactory for all items with their respective subscales, ranging from 0.539 to 0.748. Both tests supported the PSS-10 as a two-factor scale.
Conclusion: The PSS-10 is a valid measure for assessing perceived stress in Hong Kong medical and health sciences students.
Keywords: Undergraduate Students, Medicine, Nursing, Pharmacy, Health Sciences, Validation, Perceived Stress
Practice Highlights
- It is important to have a valid instrument for early detection of stress in health science students.
- Perceived Stress Scale (PSS-10) has a two-factor structure, a finding that is consistent with most other studies.
- PSS-10 has satisfactory internal consistency and reliability.
- PSS-10 can be used to assess the level of stress in medical and health sciences students.
I. INTRODUCTION
Undertaking studies in healthcare disciplines can be stressful as the programmes are demanding and students are often competing with higher-achieving peers from admission to graduation. Significant stress can lead to psychological distress with negative implications for current and future performance. Medical students have a higher prevalence of distress and poorer mental quality of life than their non-medical peers (Dyrbye et al., 2006; Shin et al., 2016), and also experience sleep deprivation, anxiety, and feelings of social isolation, as revealed in focus group interviews conducted by Henning et al. (2010). There may also be a negative impact on the quality of patient care (Firth-Cozens, 2001) and a higher rate of medical errors (West et al., 2009). High perceived stress correlated with impaired clinical performance in nursing students, including the application of knowledge, clinical skills and communication (Ye et al., 2018). High levels of stress and impaired quality of life were also found in third-year pharmacy students in the United States (Marshall et al., 2008). In a study on pre-medical and health sciences students, higher perceived stress was a predictor of poor academic achievement (Henning et al., 2018).
As in many Asian cultures, Hong Kong students in general are under pressure to perform well in school as education is viewed as a crucial stepping-stone to success (S. Chan, 1999; Tan & Yates, 2011). This pressure may be particularly pronounced in medical students, who manifest a greater degree of psychological distress, including perceived stress, depressive symptoms and anxiety, than other university students (Wong et al., 2005). A survey of medical students from the University of Hong Kong also revealed that the majority screened positive for minor psychiatric disorders and up to 95% of them were burned out (Chau et al., 2019). Many students may be “pushed” into a career path by extrinsic factors such as parental expectation (Sreeramareddy et al., 2007) or as part of a family tradition. Asian medical students may also tend to focus on academic achievement and seek to outperform their peers (Henning et al., 2011). Given these students’ risk of developing high levels of stress, and the particularly intense environment in Hong Kong, it is important to have a valid instrument for early detection of stress so that appropriate strategies may be instituted at an early stage.
The Perceived Stress Scale (PSS-10) (Cohen, 1988) is widely used to measure perceived stress among healthcare students and doctors in different countries (Jones et al., 2015; Wongpakaran & Wongpakaran, 2010), and healthcare workers in Hong Kong (Chua et al., 2004). Healthcare students and healthcare workers may respond differently to a stressful event, as shown in the studies by Chua et al. (2004) and Wong et al. (2004), where the psychological effects of the SARS outbreak were different for healthcare students and workers. The PSS-10 has been translated and validated in various languages, including Spanish, Turkish, Portuguese, Chinese, Thai and Japanese, and among different populations such as patients, students, pregnant women, and adults in the general population (Lee, 2012). These validation studies are fundamentally robust, yet validating the PSS-10 in the specific undergraduate medical and health professions educational context in Hong Kong remains important. Our study population is subject to different cultural, societal and educational influences that affect the perception of stress and the understanding of the items in the instrument, so validation studies done elsewhere may not be applicable to our local context. The aim of this study is, therefore, to establish the construct validity and reliability of the PSS-10 for use in this population.
II. METHODS
A. Participants and Data Collection
All final year students undertaking studies in Li Ka Shing Faculty of Medicine in the University of Hong Kong (HKUMed) in the academic year of 2014-2015 were the target population of this study. A research assistant, who was not involved in teaching and assessment of the students, invited the students to participate in the study during a designated compulsory face-to-face teaching session for each programme. Those who provided written consent completed a written questionnaire in January – February 2015 or June 2015. The specific time for each cohort was chosen to avoid known stressful periods such as exams. The questionnaire included the PSS-10 and demographic information.
B. Measure
The Perceived Stress Scale (PSS-10) (Cohen, 1988) was chosen as the instrument for measuring perceived stress. We considered other often-used instruments, including the Depression Anxiety Stress Scale (DASS) (Lovibond & Lovibond, 1995), which measures depression and anxiety in addition to stress, and the General Health Questionnaire (GHQ) (Goldberg & Hillier, 1979), which measures medical complaints as a reflection of emotional stress, but these look at broader conceptualisations of psychological distress beyond the scope of our study. The PSS-10 was the most fit-for-purpose in measuring stress in terms of respondents’ views about their lives. In addition, we wished to be able to compare the stress of medical and health professions students to other key local comparator populations (e.g. university students, doctors, the general population), and using the same instrument would facilitate this comparison.
PSS-10 is a 10-item instrument that assesses the extent of stress of respondents. PSS-10 is the abbreviated version of the original instrument with 14 items (PSS-14). A brief version with four items (PSS-4) is also available. Among the three versions of PSS, PSS-10 was found to be superior in psychometric properties, in terms of validity and reliability, than the other two versions (Lee, 2012). In the PSS-10, respondents rate statements about how unpredictable, uncontrollable, and overloaded they find their lives on a 5-point Likert scale from “never” to “very often”. Each response is converted to a score of 0 to 4 with the overall PSS score computed as the total score of the 10 items, with four reverse-coded items. The higher the score, the worse the perceived stress, with a maximum score of 40. There is no specific cut-off score that corresponds to high or low stress. We used the original English version of PSS-10 because as an English-medium university, students at HKUMed are taught in English (except during bedside teaching and clinical practicums) and students are proficient in English.
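The scoring rule described above can be sketched in a few lines of Python. This is a minimal illustration of the published scoring convention (items rated 0–4, items 4, 5, 7 and 8 reverse-coded, total 0–40), not the authors' analysis code:

```python
# PSS-10 scoring: each item is rated 0-4 ("never" to "very often");
# the four positively worded items (4, 5, 7, 8) are reverse-coded.
REVERSED = {4, 5, 7, 8}  # 1-based item numbers

def pss10_score(responses):
    """responses: list of ten integers, each 0-4, in item order."""
    if len(responses) != 10 or any(not 0 <= r <= 4 for r in responses):
        raise ValueError("PSS-10 expects ten responses scored 0-4")
    total = 0
    for item_no, r in enumerate(responses, start=1):
        total += (4 - r) if item_no in REVERSED else r
    return total  # 0 (lowest perceived stress) to 40 (highest)

# A respondent answering "sometimes" (2) to every item scores 20.
print(pss10_score([2] * 10))  # -> 20
```

Note that because of the reverse coding, a uniform response of 0 does not yield a total of 0: the four positive items each contribute 4 points.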
C. Data Analysis
To establish the construct validity of the PSS-10, exploratory factor analysis (EFA) was performed on the responses to PSS-10 items by final year medical students, using principal component extraction with varimax rotation and the criterion of eigenvalue greater than 1.00. A Kaiser-Meyer-Olkin (KMO) measure equal to or greater than 0.5 was used to indicate sampling adequacy, while Bartlett’s Test of Sphericity with p<0.001 was used to ensure the appropriateness of the data set for EFA. The cumulative variance explained by the factor structure identified in the EFA model was reported.
Cronbach’s alpha coefficient and corrected item-total correlations were used to examine reliability. Cronbach’s alpha coefficient was calculated to assess the internal consistency of each scale, which was considered acceptable if greater than 0.7 (Nunnally, 1994). Corrected item-total correlations were evaluated by Pearson’s correlation coefficient. A correlation of more than 0.4 was considered satisfactory (Wolfinbarger & Gilly, 2003).
III. RESULTS
A total of 267 students completed the survey, with an overall response rate of 86.5%. 104 (39%) of the respondents were male (Table 1). Female students had significantly higher perceived stress than male students (20.84 vs 18.59; p<0.001). Table 2 shows the descriptive statistics for PSS-10 items and total score by programme of study.
|                    | All (n=267) | Average PSS-10 |
|--------------------|-------------|----------------|
| Age (mean)         | 22.71       | 19.95          |
| Gender             |             |                |
| Male               | 104         | 18.59          |
| Female             | 160         | 20.84          |
| Programme of study |             |                |
| MBBS               | 120         | 18.17          |
| BNurs              | 94          | 21.20          |
| BChinMed           | 13          | 21.77          |
| BPharm             | 28          | 22.39          |
| BBMS               | 10          | 20.20          |

MBBS: Bachelor of Medicine and Bachelor of Surgery; BNurs: Bachelor of Nursing; BChinMed: Bachelor of Chinese Medicine; BPharm: Bachelor of Pharmacy; BBMS: Bachelor of Biomedical Sciences
*Numbers may not add up to the total number of respondents due to missing data

Table 1. Demographics of respondents and average PSS-10 score
| In the last month, how often have you… | All (n=265) | MBBS (n=120) | BNurs (n=94) | BChinMed (n=13) | BPharm (n=28) | BBMS (n=10) |
|---|---|---|---|---|---|---|
| 1. been upset because of something that happened unexpectedly | 2.10 | 1.83 | 2.27 | 2.62 | 2.39 | 2.30 |
| 2. felt that you were unable to control the important things in your life | 2.05 | 1.79 | 2.19 | 2.31 | 2.46 | 2.40 |
| 3. felt nervous and “stressed” | 2.19 | 1.87 | 2.39 | 2.31 | 2.64 | 2.60 |
| 4. felt confident about your ability to handle your personal problems | 2.19 | 2.25 | 2.21 | 2.00 | 1.93 | 2.20 |
| 5. felt that things were going your way | 2.11 | 2.21 | 2.07 | 2.00 | 1.93 | 1.90 |
| 6. found that you could not cope with all the things that you had to do | 1.97 | 1.77 | 2.07 | 2.23 | 2.39 | 2.00 |
| 7. been able to control irritations in your life | 2.19 | 2.25 | 2.12 | 2.15 | 2.14 | 2.40 |
| 8. felt that you were on top of things | 1.79 | 2.00 | 1.66 | 1.46 | 1.46 | 1.90 |
| 9. been angered because of things that were outside of your control | 1.94 | 1.81 | 2.19 | 1.85 | 1.82 | 1.60 |
| 10. felt difficulties were piling up so high that you could not overcome them | 1.96 | 1.77 | 2.15 | 2.08 | 2.14 | 1.70 |
| Total* | 19.93 | 18.17 | 21.20 | 21.77 | 22.39 | 20.20 |

MBBS: Bachelor of Medicine and Bachelor of Surgery; BNurs: Bachelor of Nursing; BChinMed: Bachelor of Chinese Medicine; BPharm: Bachelor of Pharmacy; BBMS: Bachelor of Biomedical Sciences
*Total score is calculated as the sum of the 10 PSS items, with items 4, 5, 7 and 8 reverse-coded.

Table 2. Mean score for PSS-10 items by programme of study
A. Exploratory Factor Analysis on PSS-10
Using the final year medical and health sciences student data for EFA (Table 3), the KMO measure for the PSS-10 was 0.823, indicating sampling adequacy. Bartlett’s Test of Sphericity was significant (p<0.001), confirming that the data set was appropriate for EFA. The factor loadings of the varimax-rotated solution and the eigenvalues of the two factors identified (Perceived Helplessness and Perceived Control) are shown in Table 3. The cumulative variance explained was 61.386%.
B. Reliability
Cronbach’s alpha for the two factors was 0.865 and 0.796 respectively, indicating good internal consistency reliability (Table 4). To determine the robustness of the analysis, each item was deleted in turn from the calculation and the resulting Cronbach’s alpha remained high (0.724-0.859). Corrected item-total correlations were satisfactory for all items with their respective subscales (range 0.539 to 0.748) (Table 4). The items with the highest corrected item-total correlations were item 2 (“felt that you were unable to control the important things in your life”), item 3 (“felt nervous and ‘stressed’”), and item 10 (“felt difficulties were piling up so high that you could not overcome them”). Both tests supported the PSS-10 as a two-factor scale.
| In the last month, how often have you… | Perceived helplessness | Perceived control |
|---|---|---|
| 2. felt that you were unable to control the important things in your life | 0.826 | -0.168 |
| 1. been upset because of something that happened unexpectedly | 0.793 | 0.021 |
| 3. felt nervous and “stressed” | 0.793 | -0.167 |
| 10. felt difficulties were piling up so high that you could not overcome them | 0.782 | -0.154 |
| 9. been angered because of things that were outside of your control | 0.712 | 0.099 |
| 6. found that you could not cope with all the things that you had to do | 0.698 | -0.132 |
| 4. felt confident about your ability to handle your personal problems | -0.017 | 0.815 |
| 5. felt that things were going your way | -0.102 | 0.811 |
| 7. been able to control irritations in your life | -0.100 | 0.774 |
| 8. felt that you were on top of things | -0.086 | 0.732 |
| Eigenvalue | 3.879 | 2.260 |
| % of variance | 38.791 | 22.595 |
| Cumulative % of variance | 61.386 | |

Table 3. Factor loadings by exploratory factor analysis for PSS-10
| In the last month, how often have you… | Corrected Item-Total Correlation | Cronbach’s Alpha if Item Deleted |
|---|---|---|
| Perceived helplessness (Cronbach’s Alpha = 0.865) | | |
| 1. been upset because of something that happened unexpectedly | 0.674 | 0.840 |
| 2. felt that you were unable to control the important things in your life | 0.748 | 0.826 |
| 3. felt nervous and “stressed” | 0.705 | 0.835 |
| 6. found that you could not cope with all the things that you had to do | 0.591 | 0.854 |
| 9. been angered because of things that were outside of your control | 0.562 | 0.859 |
| 10. felt difficulties were piling up so high that you could not overcome them | 0.688 | 0.838 |
| Perceived control (Cronbach’s Alpha = 0.796) | | |
| 4. felt confident about your ability to handle your personal problems | 0.635 | 0.732 |
| 5. felt that things were going your way | 0.652 | 0.724 |
| 7. been able to control irritations in your life | 0.609 | 0.745 |
| 8. felt that you were on top of things | 0.539 | 0.781 |

Cut-offs for item-total correlation: <0.4 indicates poor correlation between item and total score.

Table 4. Corrected Item-Total Correlation
IV. DISCUSSION
A. Exploratory Factor Analysis
Exploratory factor analysis of the PSS-10 revealed a two-factor structure, consistent with the findings of the original study (Cohen, 1988) and other validation studies (Andreou et al., 2011; Chaaya et al., 2010; Lesage et al., 2012; Leung et al., 2010; Örücü & Demir, 2009; Siqueira et al., 2010; Wongpakaran & Wongpakaran, 2010). The two factors identified in our study related to the concept of helplessness, as reflected in the negatively worded items, and the concept of control and ability to cope, as reflected in the positively worded items. The three items that loaded most heavily on the helplessness factor related to a lack of control (item 2), anxiety (item 3) and feeling overwhelmed (item 10).
B. Locus of Control
It was evident that feeling unable to control important things in life (item 2) contributed greatly to students’ perceived stress (Table 4). An external locus of control, where people believe external factors determine success or failure, is associated with higher stress (Linn & Zeppa, 1984), and this is understandable for healthcare students. For example, the teaching timetable is often changed at the last minute because teachers have urgent clinical duties, or students may be expected to do more self-directed learning in which the breadth or depth of the learning is not made clear. The expectations for clinical skills in clinical settings are often different from what was taught in school (Gibbons et al., 2008). Uncertainty about the curriculum, progress and assessment also contributes to stress in healthcare students (Elzubeir et al., 2010). Moreover, as the most junior members of the healthcare team, students have no decision-making capacity and may feel helpless when confronted with situations beyond their expertise or when they observe actions contrary to their personal views (Jennings, 2009).
C. Anxiety
Feeling nervous (item 3) was another contributing factor for perceived stress (Table 4). Medical and health sciences students are required to sit high-stakes examinations in order to be promoted to the next year of study or to graduate, where test anxiety is understandably prevalent (Encandela et al., 2014). Clinical competency exams such as OSCEs are particularly anxiety-provoking (Muldoon et al., 2014). This is especially relevant to the final year students in this study, as the final summative exams in all programmes are intense. In particular, the written final Bachelor of Medicine and Bachelor of Surgery (MBBS) exam covers material from the whole year and all disciplines including medicine, surgery, psychiatry, obstetrics and gynaecology, paediatrics, orthopaedics, and family medicine, and also includes a clinical competency test in each discipline. This final exam constitutes the licensing exam to become a doctor in Hong Kong.
In addition, the vast majority of students admitted to undergraduate healthcare professions studies in Hong Kong are secondary school graduates. Their age-related level of maturity may affect their ability to cope with a strenuous, content-rich curriculum as well as the pressures of clinical practicums and clerkships. Students have raised concerns about exam-induced anxiety and the heavy academic workload; in fact, the most common reason for students to seek counselling support at our institution is academic-related stress or psychological distress.
Working in the clinical environment also produces anxiety, especially when starting a new rotation in a new discipline, when students often lack clinical experience, are unfamiliar with the ward, encounter difficult patients, and fear making mistakes (Sharif & Masoumi, 2005). The hierarchical medical culture is more pronounced in healthcare settings and can be intimidating for undergraduate healthcare professions students, who are seen as the lowest rung on the ladder. Other situations where students are singled out, such as during simulations or when being observed, evaluated or video-recorded, also increase anxiety (Nielsen & Harder, 2013), especially as these teaching sessions are done in small groups. The clinical years also require a more proactive, interactive and self-reliant style of learning. In addition to scheduled bedside teaching with a clinician, students have to seek out patients to clerk in order to hone their clinical skills and gain clinical experience. This may be an adjustment for students used to more traditional classroom and textbook learning.
D. Overwhelmed
The third most important item contributing to high perceived stress was the feeling of being overwhelmed with the workload and difficulties (item 10) (Table 4). Healthcare studies are well-known for being content heavy. Students have a heavy workload including long hours of lectures, tutorials, laboratories and clinical attachments, and are also expected to spend substantial time on independent study. Because most healthcare professions students in Hong Kong are admitted to such programmes directly upon completion of secondary education, higher diploma or associate degree, the curricula are even more packed with basic foundational as well as profession-specific advanced content.
Students in their final year of study have to contend with clinical experiential learning but must also further develop their knowledge base. This entails acquiring a huge volume of factual content as well as applying concepts to clinical scenarios. Students must work more independently in clinical attachments and may have some responsibility for patient care or administrative work. For example, nursing students progress from practicums in small groups to shadowing a practising nurse and, in their senior years, working as a member of the nursing team in the ward.
In addition, clinical teaching settings in Hong Kong can be challenging learning environments, especially the tertiary care teaching hospitals where much of the training takes place. The bustle of routine patient care, already involving a multitude of staff, makes these daunting places for healthcare professions students, who have to compete with each other for the opportunity to clerk patients.
In the clinical environment, students also come face-to-face with difficult situations and experience feelings that they may have difficulty resolving. These may include problems communicating with patients or their families, struggling with ethical dilemmas such as witnessing a medical error, or witnessing the illness experience of patients and feeling helpless at not being able to alleviate their suffering. Medical students can be overwhelmed by the burden of suppressing their own natural emotions when facing the pain and suffering of their patients (Jennings, 2009). Likewise, nursing students have expressed that the workload from clinical work and their own studies exceeded their physical and emotional capacity (C. K. Chan et al., 2009).
E. Limitations
At the time of data collection, no data were collected for other scales of similar or opposite construct. Hence no convergent or divergent validity could be calculated. Also, test-retest reliability could not be done as this was a one-off cross-sectional survey. Despite these limitations our data supported a two-factor structure of the PSS-10, consistent with the original and other previous studies.
V. CONCLUSION
Demonstrating good construct validity and internal consistency, PSS-10 is a valid measure for assessing self-reported stress in medical students as well as in health sciences students. Longitudinal studies on student stress using this measure will help to assess the extent and patterns of stress in a high-risk population in order to develop timely interventions.
Notes on Contributors
JY Chen and JPY Tsang reviewed the literature, designed the study, performed data collection and data analysis, and developed the manuscript. WY Chin, A Tiwari, J Wong, ICK Wong, A Worsley, Y Feng, MH Sham and CS Lau advised on the study design, facilitated data collection and gave critical feedback on the manuscript. All authors have read and approved the final manuscript.
Ethical Approval
Ethical approval of this study was granted by the Institutional Review Board of the University of Hong Kong/Hospital Authority Hong Kong West Cluster (Reference No.: UW 14-472). All participants have given written consent for their data to be used in the research and for publication.
Acknowledgements
We would like to thank the students of HKUMed for participating in the study, and the administrative staff of Li Ka Shing Faculty of Medicine, School of Nursing, School of Chinese Medicine, Department of Pharmacology and Pharmacy, and School of Biomedical Sciences for helping with the logistical arrangement of the questionnaire administrations.
Funding
This work was supported by a Teaching Development Grant funded by the University of Hong Kong.
Declaration of Interest
The authors declare that there is no conflict of interest.
References
Andreou, E., Alexopoulos, E. C., Lionis, C., Varvogli, L., Gnardellis, C., Chrousos, G. P., & Darviri, C. (2011). Perceived stress scale: Reliability and validity study in Greece. International Journal of Environmental Research and Public Health, 8(8), 3287-3298.
Chaaya, M., Osman, H., Naassan, G., & Mahfoud, Z. (2010). Validation of the Arabic version of the Cohen Perceived Stress Scale (PSS-10) among pregnant and postpartum women. BMC Psychiatry, 10(1), 111.
Chan, C. K., So, W. K., & Fong, D. Y. (2009). Hong Kong baccalaureate nursing students’ stress and their coping strategies in clinical practice. Journal of Professional Nursing, 25(5), 307-313.
Chan, S. (1999). The Chinese learner–A question of style. Education & Training, 41(6/7), 294-305.
Chau, S. W., Lewis, T., Ng, R., Chen, J. Y., Farrell, S. M., Molodynski, A., & Bhugra, D. (2019). Wellbeing and mental health amongst medical students from Hong Kong. International Review of Psychiatry, 31(7-8), 626-629.
Chua, S. E., Cheung, V., Cheung, C., McAlonan, G. M., Wong, J. W., Cheung, E. P., Chan, M. T., Wong, M. M., Tang, S. W., Choy, K. M., Wong, M. K., Chu, C. M., & Tsang, K. W. (2004). Psychological effects of the SARS outbreak in Hong Kong on high-risk health care workers. The Canadian Journal of Psychiatry, 49(6), 391-393. https://doi.org/10.1177/070674370404900609
Cohen, S. (1988). Perceived stress in a probability sample of the United States. In S. Spacapan & S. Oskamp (Eds.), The social psychology of health. The Claremont Symposium on applied social psychology (pp. 31-67). Sage.
Dyrbye, L. N., Thomas, M. R., Huntington, J. L., Lawson, K. L., Novotny, P. J., Sloan, J. A., & Shanafelt, T. D. (2006). Personal life events and medical student burnout: A multicenter study. Academic Medicine, 81(4), 374-384. https://doi.org/10.1097/00001888-200604000-00010
Elzubeir, M., Elzubeir, K., & Magzoub, M. (2010). Stress and coping strategies among Arab medical students: Towards a research agenda. Education for Health, 23(1), 355.
Encandela, J., Gibson, C., Angoff, N., Leydon, G., & Green, M. (2014). Characteristics of test anxiety among medical students and congruence of strategies to address it. Medical Education Online, 19(1), 25211.
Firth-Cozens, J. (2001). Interventions to improve physicians’ well-being and patient care. Social Science & Medicine, 52(2), 215-222.
Gibbons, C., Dempster, M., & Moutray, M. (2008). Stress and eustress in nursing students. Journal of Advanced Nursing, 61(3), 282-290.
Goldberg, D. P., & Hillier, V. F. (1979). A scaled version of the General Health Questionnaire. Psychological Medicine, 9(1), 139-145.
Henning, M. A., Hawken, S. J., Krageloh, C., Zhao, Y. P., & Doherty, I. (2011). Asian medical students: Quality of life and motivation to learn. Asia Pacific Education Review, 12(3), 437-445.
Henning, M. A., Krageloh, C., Hawken, S., Zhao, Y., & Doherty, I. (2010). Quality of life and motivation to learn: A study of medical students. Issues in Educational Research, 20(3), 244-256.
Henning, M. A., Krägeloh, C. U., Booth, R., Hill, E. M., Chen, J., & Webster, C. (2018). An exploratory study of the relationships among physical health, competitiveness, stress, motivation, and grade attainment: Pre-medical and health science students. The Asia Pacific Scholar, 3(3), 5-16.
Jennings, M. (2009). Medical student burnout: Interdisciplinary exploration and analysis. Journal of Medical Humanities, 30(4), 253.
Jones, G., Hocine, M., Salomon, J., Dab, W., & Temime, L. (2015). Demographic and occupational predictors of stress and fatigue in French intensive-care registered nurses and nurses’ aides: A cross-sectional study. International Journal of Nursing Studies, 52(1), 250-259.
Lee, E.-H. (2012). Review of the psychometric evidence of the perceived stress scale. Asian Nursing Research, 6(4), 121-127.
Lesage, F.-X., Berjot, S., & Deschamps, F. (2012). Psychometric properties of the French versions of the Perceived Stress Scale. International Journal of Occupational Medicine and Environmental Health, 25(2), 178-184.
Leung, D. Y., Lam, T.-h., & Chan, S. S. (2010). Three versions of Perceived Stress Scale: validation in a sample of Chinese cardiac patients who smoke. BMC Public Health, 10(1), 513.
Linn, B. S., & Zeppa, R. (1984). Stress in junior medical students: Relationship to personality and performance. Journal of Medical Education, 59(1), 7-12.
Lovibond, S. H., & Lovibond, P. F. (1995). Manual for the Depression Anxiety Stress Scales. (2nd. Ed.) Psychology Foundation.
Marshall, L. L., Allison, A., Nykamp, D., & Lanke, S. (2008). Perceived stress and quality of life among doctor of pharmacy students. American Journal of Pharmaceutical Education, 72(6), 137. https://doi.org/10.5688/aj7206137
Muldoon, K., Biesty, L., & Smith, V. (2014). ‘I found the OSCE very stressful’: Student midwives’ attitudes towards an objective structured clinical examination (OSCE). Nurse Education Today, 34(3), 468-473.
Nielsen, B., & Harder, N. (2013). Causes of student anxiety during simulation: What the literature says. Clinical Simulation in Nursing, 9(11), e507-e512.
Nunnally, J. C. (Ed.) (1994). Psychometric Theory (3rd ed.). McGraw Hill.
Örücü, M. Ç., & Demir, A. (2009). Psychometric evaluation of perceived stress scale for Turkish university students. Stress and Health, 25(1), 103-109.
Sharif, F., & Masoumi, S. (2005). A qualitative study of nursing student experiences of clinical practice. BMC Nursing, 4(1), 6.
Shin, H. K., Kang, S. H., Lim, S.-H., Yang, J. H., & Chae, S. (2016). Development of a Modified Korean East Asian Student Stress Inventory by Comparing Stress Levels in Medical Students with Those in Non-Medical Students. Korean Journal of Family Medicine, 37(1), 14-17.
Siqueira, R. R., Ferreira Hino Adriano, A., & Romélio Rodriguez Añez, C. (2010). Perceived stress scale: Reliability and validity study in Brazil. Journal of Health Psychology, 15(1), 107-114.
Sreeramareddy, C. T., Shankar, P. R., Binu, V., Mukhopadhyay, C., Ray, B., & Menezes, R. G. (2007). Psychological morbidity, sources of stress and coping strategies among undergraduate medical students of Nepal. BMC Medical Education, 7(1), 26.
Tan, J. B., & Yates, S. (2011). Academic expectations as sources of stress in Asian students. Social Psychology of Education, 14(3), 389-407.
West, C. P., Tan, A. D., Habermann, T. M., Sloan, J. A., & Shanafelt, T. D. (2009). Association of resident fatigue and distress with perceived medical errors. The Journal of the American Medical Association, 302(12), 1294-1300.
Wolfinbarger, M., & Gilly, M. C. (2003). eTailQ: Dimensionalizing, measuring and predicting etail quality. Journal of Retailing, 79(3), 183-198.
Wong, J. G. W. S., Cheung, E. P., Cheung, V., Cheung, C., Chan, M. T., Chua, S. E., McAlonan, G. M., Tsang, K. W. T., & Ip, M. S. (2004). Psychological responses to the SARS outbreak in healthcare students in Hong Kong. Medical Teacher, 26(7), 657-659.
Wong, J. G. W. S., Patil, N. G., Beh, S. L., Cheung, E. P., Wong, V., Chan, L. C., & Lieh Mak, F. (2005). Cultivating psychological well-being in Hong Kong’s future doctors. Medical Teacher, 27(8), 715-719.
Wongpakaran, N., & Wongpakaran, T. (2010). The Thai version of the PSS-10: An Investigation of its psychometric properties. BioPsychoSocial Medicine, 4(1), 6.
Ye, Y., Hu, R., Ni, Z., Jiang, N., & Jiang, X. (2018). Effects of perceived stress and professional values on clinical performance in practice nursing students: A structural equation modeling approach. Nurse Education Today, 71, 157-162.
*Julie Chen
Department of Family Medicine &
Bau Institute of Medical and Health Sciences Education,
Li Ka Shing Faculty of Medicine,
University of Hong Kong
21 Sassoon Rd, Pok Fu Lam
Hong Kong
Email: juliechen@hku.hk
Submitted: 19 June 2020
Accepted: 21 October 2020
Published online: 4 May, TAPS 2021, 6(2), 25-30
https://doi.org/10.29060/TAPS.2021-6-2/OA2327
Nicola Ngiam1,2 & Chuen-Yee Hor1
1Centre for Healthcare Simulation, National University of Singapore, Singapore; 2Khoo Teck Puat-National University Children’s Medical Institute, National University Hospital, Singapore
Abstract
Introduction: Standardised patients (SPs) have been involved in medical education for the past 50 years. Their role has evolved from assisting in history-taking and communication skills to portraying abnormal physical signs and hybrid simulations. This increases exposure of their physical and psychological domains to the learner. Asian SPs who come from more conservative cultures may be inhibited in some respect. This study aims to explore the attitudes and perspectives of Asian SPs with respect to their role and case portrayal.
Methods: This was a cohort questionnaire study of SPs involved in a high-stakes assessment activity at a university medical school in Singapore.
Results: 66 out of 71 SPs responded. Racial distribution was similar to population norms in Singapore (67% Chinese, 21% Malay, 8% Indian). SPs were very keen to provide feedback to students. A significant number were uncomfortable with portraying mental disorders (26%) or terminal illness (16%) and discussing Human Immunodeficiency Virus/Acquired Immunodeficiency Syndrome (HIV/AIDS, 14%) or Sexually Transmitted Diseases (STDs, 14%). SPs were uncomfortable with intimate examinations involving the front of the chest (46%, excluding breast), and even abdominal examination (35%). SPs perceive that they improve quality of teaching and are cost effective.
Conclusion: The Asian SPs in our institution see themselves as a valuable tool in medical education. Sensitivity to the cultural background of SPs in case writing and the training process is necessary to ensure that SPs are comfortable with their role. Additional training and graded exposure may be necessary for challenging scenarios and physical examination.
Keywords: Standardised Patients, Perspective, Asian, Medical Education, Survey
Practice Highlights
- The Asian SPs in our institution see themselves as a valuable tool in medical education.
- Sensitivity to the cultural background of SPs in case writing and the training process is necessary to ensure that SPs are comfortable with the roles that they portray.
- Additional training and graded exposure for SPs may be necessary for challenging scenarios and physical examination in the Asian context.
I. INTRODUCTION
Standardised patients (SPs) have been involved in medical education since the 1960s (Barrows & Abrahamson, 1964), and SP methodology has been widely used in North America and Europe. By the 1990s, the majority of American medical schools were using SP methodology for teaching clinical skills, for assessment, and for providing feedback to learners (Anderson et al., 1994). SP methodology is presumed to be less widespread in medical education in Asia. It is therefore important to understand the views of Asian SPs so that SP methodology can be fostered.
SPs initially simulated medical symptoms and patient concerns and evaluated medical interviewing skills (Barrows & Abrahamson, 1964; Stillman et al., 1976). Their role has since evolved to demonstrating abnormal physical signs, providing feedback on medical interviewing skills, and taking part in hybrid simulations. This increases their exposure to the medical environment and to medical experiences they may not have encountered before, some of which may cause psychological distress. The SP could be in a vulnerable position, so personal attitudes and beliefs towards illness should be taken into consideration when engaging SPs to portray these roles.
This is particularly true for Asian SPs. For example, patients may be unwilling to discuss mental health issues for fear of social stigma and shame (Kramer et al., 2002). Asian SPs are also likely to be more conservative and modest with regard to physical examination. This can be extrapolated from findings that cultural attitudes toward breast cancer screening tests and modesty are among the reasons Asian women are reluctant to seek out breast cancer screening (Parsa et al., 2006).
In the past, SPs were routinely employed in objective structured clinical examinations at our medical school. They were not required to provide any form of feedback to the learners. We endeavoured to develop a more structured SP training program at our institution. In the initial phase, this study was conducted to survey the attitudes of the SPs who work at our institution towards case portrayal and the value of SP methodology.
II. METHODS
This was an anonymous cohort questionnaire study. An online questionnaire was administered to standardised patients who were recruited to work at a high-stakes objective structured clinical examination at a university medical school. Participants were sent a link to an electronic survey by email after the event. Participation was voluntary. Questions about race, age, gender and years of experience as an SP were asked. The importance of the contribution of an SP to medical education, and their comfort with discussing medical conditions, portraying abnormal signs and undergoing different physical examinations, were evaluated. A Likert scale of 1-5 was used where appropriate. This questionnaire study is covered by the institutional review board approval (Study Reference Number: 09-288) of the standardised patient program in our institution. As an anonymous, voluntary survey, consent was implied when participants completed and returned the survey.
The electronic survey was generated and descriptive statistics were computed using Vovici software version 6 (Vovici Corp, Dulles, Virginia, United States).
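The descriptive statistics here reduce to simple proportions over 5-point Likert responses. A minimal sketch of that computation, using made-up ratings rather than the study data, might look like:

```python
# Hypothetical 5-point Likert ratings for one survey item
# (1 = very uncomfortable ... 5 = very comfortable); illustrative only.
responses = [1, 2, 4, 5, 2, 3, 5, 4, 1, 5]

def percent_in_range(scores, low, high):
    """Percentage of ratings falling within [low, high], inclusive."""
    hits = sum(1 for s in scores if low <= s <= high)
    return round(100 * hits / len(scores), 1)

# Treat ratings of 1-2 as "uncomfortable" and 4-5 as "comfortable".
uncomfortable = percent_in_range(responses, 1, 2)  # -> 40.0
comfortable = percent_in_range(responses, 4, 5)    # -> 50.0
print(uncomfortable, comfortable)
```

The cut-offs collapsing the five points into "uncomfortable"/"comfortable" bands are an assumption for illustration; the study's exact tabulation rules are not reported here.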
III. RESULTS
66 out of 71 SPs (93%) responded. 40% of the SPs were aged 31-40 years (Figure 1) and 72% were female. Racial distribution was similar to population norms in Singapore (67% Chinese, 21% Malay, 8% Indian, 4% others).

Figure 1: SP comfort with portrayal
With regards to their role, 95% of SPs felt it was important for them to be involved in teaching students and providing feedback. A significant number were uncomfortable with portraying mental disorders (26%) or terminal illness (16%) (Figure 1) and discussing HIV/AIDS (14%) or sexually transmitted diseases (14%) (Figure 2). With regards to death and dying, 6% of SPs were uncomfortable discussing this while another 6% were unsure about it. As expected, SPs were uncomfortable with examinations involving the front of the chest (46%, excluding breast examination) and even abdominal examination (35%). Sixty percent of the female SPs surveyed were uncomfortable with breast examination (Figure 3). SPs perceive themselves to improve the quality of teaching (98%) and to be cost effective (98%). The majority of this group of SPs (83%) felt that this was a viable option for sustainable employment.

Figure 2: SP comfort with discussing topic

Figure 3: SP comfort with physical examination
IV. DISCUSSION
The benefits of SP methodology in providing a safe environment for practice and experiential learning are well established. In an effort to expand the use of SP methodology at our institution, information regarding its acceptability and feasibility was required. In the past, SPs were mainly employed in summative assessment activities and did not provide learners with feedback. Before pushing the boundaries of the SP job description, it was important to understand the perspectives of our SPs and which areas of SP work they would feel comfortable or uncomfortable with.
The areas of interest were comfort with portraying roles that involved taboo topics such as mental health issues, sexually transmitted disease, and death and dying. In many Asian cultures, mental illness is stigmatising; it reflects poorly on family lineage and can influence others' beliefs about the suitability of an individual for marriage (Kramer et al., 2002). Many people of Asian descent view people with mental illnesses as dangerous and aggressive (Lauber & Rössler, 2007) and believe that mental illness is a punishment from God (Fogel & Ford, 2005). In China, mental health problems are believed to be a result of weak character, evil spirits, or punishment for not respecting ancestors (Lam et al., 2006). Asian American women avoid seeking treatment for depression and suicidal ideation because of Asian family and community stigma associated with mental health issues (Augsberger et al., 2015). With regards to sexual practices and sexually transmitted disease, the literature shows that Chinese men regard homosexual-related stigma and discrimination as major barriers to HIV testing. Most men were reluctant to obtain an HIV test for fear that their homosexual identity would be exposed, and they sometimes encountered discrimination even from medical personnel (Wei et al., 2014). Living with HIV in an Asian society is fraught with difficulty in the context of fear and disapproval (Ho & Goh, 2017). Death and dying are generally considered taboo in Asian cultures. Open discussions about death are regarded as a bad omen (Hall & Hall, 1976). Even for those who are dying, discussion about death is avoided because it is believed that such talk may hasten the dying process or even cause death prematurely (Xu, 2007). The avoidance of discussion about death and dying in traditional Chinese culture has been found to impede the ability to discuss advance care planning (Cheng, 2018).
These cultural beliefs were reflected in the discomfort expressed by some study participants with portrayal of roles involving mental health issues, HIV or sexually transmitted diseases, terminal illness and death and dying. This is evidence that some of our SPs do have traditional Asian perspectives regarding these sensitive issues but it is encouraging that a larger proportion are comfortable with these issues. This informs us that SPs should be given advanced notice regarding the content of the case that they are expected to portray so that they can make an informed choice when accepting roles. This is especially important when taboo or sensitive content is involved. SPs should also be given an option to withdraw from the assignment if they feel uncomfortable with the content of the case after they have been trained for the case.
In view of the more conservative nature of Asians, the hypothesis was that there could be areas of the body that SPs would not be willing to have examined by students. Asian women appear to be more conservative: only 53% of respondents in one study performed breast self-examination (Sim et al., 2009). Tan et al. (2005) reported that, between 2000 and 2003, 21.5% of women in Singapore presented with stage III or IV breast cancer, which may potentially be due to cultural attitudes toward breast cancer screening tests and modesty that inhibit Asian women from participating in breast cancer screening (Parsa et al., 2006). Spiritual and religious beliefs were found to act as a barrier to breast cancer screening in Singaporean Malay women (Shaw et al., 2018). As expected, more than half of our female SPs were uncomfortable with breast examination. When both genders were considered, examination of the front of the chest (excluding breast) and abdominal examination were also flagged as concerns. This made us aware of the hesitance of some SPs in this area and the need to explore this further while trying to expand the role of the SP. In developing our SP program, consent for physical examination needs to be explained in detail and the comfort of the individual SP with any physical examination must be taken into consideration.
The SPs in our study perceived themselves to be of value in medical education. Standardized patients in a study in Switzerland felt motivated, engaged, and willing to invest effort in their task and did not mind the increasing demands of their work as long as the social environment in SP programs was supportive (Schlegel et al., 2016). This is encouraging for a developing SP program to know as we feel confident to expand the job scope of our SPs as long as adequate explanation and training is provided to support the SPs. With more structured coaching and exposure, we expect that SPs will become more comfortable with more challenging roles and would be willing to push the boundaries of their comfort zone.
One limitation of this study is the large majority of female participants. This was a convenience sample to optimize response rate. Further studies should aim to include a more balanced gender representation. Another limitation would be that only quantitative data was collected. In exploring perspectives, focused interviews with qualitative analysis would have provided a more in-depth understanding of the beliefs and values of the SPs.
V. CONCLUSION
This study provides initial insights into the perspectives of Asian SPs at a university medical school in an Asian country. They see themselves as a valuable tool in medical education and are willing to expand their role in the curriculum. Faculty and trainers need to be sensitive to the cultural background of our SPs in case writing and the training process to ensure that SPs are comfortable with the roles that they portray. This is of particular relevance to SP programs that employ predominantly Asian SPs. There is evidence from this study of discomfort with portraying patients with mental health issues, terminal illness and sexually transmitted diseases. The areas of exposure required in physical examination also need to be carefully considered. Additional training and graded exposure may be necessary for SPs willing to be involved in these scenarios and certain types of physical examination. Concerns about the scenario from the SPs may not be immediately apparent. The results presented here will make SP trainers more aware of the possibility of SP discomfort. Future research will be required on what type of training and what other factors will promote comfort with these scenarios as well as the impact of taking on such roles on the SPs.
Notes on Contributors
Nicola Ngiam conceptualized and designed the study, analyzed the data and interpreted the results, wrote the manuscript draft, revised it, read it and gave final approval of the manuscript.
Hor Chuen-Yee developed the methodological framework for the study, performed data collection and data analysis, revised the manuscript, read it and gave final approval of the manuscript.
Ethical Approval
This study is covered by the institutional review board approval (Study Reference Number: 09-288).
Acknowledgement
We thank Dr Dimple Rajgor for her assistance in editing, formatting, reviewing, and in submitting the manuscript for publication.
Funding
No funding was received for this study.
Declaration of Interest
The authors have no conflicts of interest, including financial, consultant, institutional and other relationships that might lead to bias or a conflict of interest.
References
Anderson, M. B., Stillman, P. L., & Wang, Y. (1994). Growing use of standardized patients in teaching and evaluation in medical education. Teaching and Learning in Medicine: An International Journal, 6(1), 15-22.
Augsberger, A., Yeung, A., Dougher, M., & Hahm, H. C. (2015). Factors influencing the underutilization of mental health services among Asian American women with a history of depression and suicide. BMC Health Services Research, 15, 542-542. https://doi.org/10.1186/s12913-015-1191-7.
Barrows, H. S., & Abrahamson, S. (1964). The programmed patient: A technique for appraising student performance in clinical neurology. Academic Medicine, 39(8), 802-805.
Cheng, H. W. B. (2018). Advance care planning in Chinese seniors: Cultural perspectives. Journal of Palliative Care, 33(4), 242-246.
Fogel, J., & Ford, D. (2005). Stigma beliefs of Asian Americans with depression in an internet sample. Canadian Journal of Psychiatry, 50(8), 470-478.
Hall, E. T., & Hall, E. (1976). How cultures collide. Psychology Today, 10(2), 66-74.
Ho, L. P., & Goh, E. C. L. (2017). How HIV patients construct liveable identities in a shame based culture: The case of Singapore. International Journal of Qualitative Studies on Health and Well-Being, 12(1), 1333899. https://doi.org/10.1080/17482631.2017.1333899
Kramer, E., Kwong, K., Lee, E., & Chung, H. (2002). Cultural factors influencing the mental health of Asian Americans. The Western Journal of Medicine, 176(4), 227-231.
Lam, C. S., Tsang, H., Chan, F., & Corrigan, P. W. (2006). Chinese and American perspectives on stigma. Rehabilitation Education, 20(4), 269-279. https://doi.org/10.1891/088970106805065368
Lauber, C., & Rössler, W. (2007). Stigma towards people with mental illness in developing countries in Asia. International Review of Psychiatry, 19(2), 157-178.
Parsa, P., Kandiah, M., Abdul, H. R., & Zulkefli, N. (2006). Barriers for breast cancer screening among Asian women: A mini literature review. Asian Pacific Journal of Cancer Prevention, 7(4), 509-514.
Schlegel, C., Bonvin, R., Rethans, J., & van der Vleuten, C. (2016). Standardized patients’ perspectives on workplace satisfaction and work-related relationships: A multicenter study. Simulation in Healthcare: Journal of the Society for Simulation in Healthcare, 11(4), 278-285.
Shaw, T., Ishak, D., Lie, D., Menon, S., Courtney, E., Li, S. T., & Ngeow, J. (2018). The influence of Malay cultural beliefs on breast cancer screening and genetic testing: A focus group study. Psycho‐Oncology, 27(12), 2855-2861.
Sim, H., Seah, M., & Tan, S. (2009). Breast cancer knowledge and screening practices: A survey of 1,000 Asian women. Singapore Medical Journal, 50(2), 132-138.
Stillman, P. L., Sabers, D. L., & Redfield, D. L. (1976). The use of paraprofessionals to teach interviewing skills. Pediatrics, 57(5), 769-774.
Tan, E., Wong, H., Ang, B., & Chan, M. (2005). Locally advanced and metastatic breast cancer in a tertiary hospital. Annals of the Academy of Medicine, Singapore, 34(10), 595-601.
Wei, C., Yan, H., Yang, C., Raymond, H., Li, J., Yang, H., Zhao, J., Huan, X., & Stall, R. (2014). Accessing HIV testing and treatment among men who have sex with men in China: A qualitative study. AIDS Care, 26(3), 372-378.
Xu, Y. (2007). Death and dying in the Chinese culture: Implications for health care practice. Home Health Care Management & Practice, 19(5), 412-414.
*Nicola Ngiam
Department of Medicine
National University Health System
1E Kent Ridge Rd,
Singapore 119228
Email address: nicola_ngiam@nuhs.edu.sg
Submitted: 28 March 2020
Accepted: 23 September 2020
Published online: 4 May, TAPS 2021, 6(2), 9-24
https://doi.org/10.29060/TAPS.2021-6-2/OA2242
De Zhang Lee1, Jia Yi Choo1, Li Shia Ng2, Chandrika Muthukrishnan1 & Eng Tat Ang1
1Department of Anatomy, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; 2Department of Otolaryngology, National University Hospital, Singapore
Abstract
Introduction: Gamification has been shown to improve academic gains, but the mechanism remains elusive. We aim to understand how psychological constructs interact and influence medical education, using mathematical modelling.
Methods: We studied a group of medical students (n=100; average age: 20 years) over a period of 4 years using the Personal Responsibility Orientation to Self-Direction in Learning Scale (PRO-SDLS) survey. Statistical tests (paired t-test) and models (logistic regression) were used to decipher the changes within these psychometric constructs (Motivation, Control, Self-Efficacy and Initiative), with gamification as a tool. Students were encouraged to partake in a maze (10 stations) that challenged them to answer anatomical questions using potted human specimens.
Results: We found that the combinatorial effects of the maze and Script Concordance Test (SCT) resulted in a significant improvement for “Self-Efficacy” and “Initiative” (p<0.05). However, the “Motivation” construct was not improved significantly by the maze alone (p>0.05). Interestingly, the “Control” construct was eroded in students not exposed to gamification (p<0.05). All these findings were supported by key qualitative comments from the participants such as “helpful”, “fun” and “knowledge gap” (self-awareness of their thought processes). Students found gamification reinvigorating and useful in their learning of clinical anatomy.
Conclusion: Gamification could influence some psychometric constructs for medical education, and by extension, the metacognition of the students. This was supported by the improvements shown in the SCT results. It is therefore proposed that gamification be further promoted in medical education. In fact, its usage should be more universal in education.
Keywords: Psychometric Constructs, Medical Education, Motivation, Initiative, Self-efficacy
Practice Highlights
- Students’ enjoyment of (interest in) the curriculum will determine the eventual academic outcome.
- Metacognition (defined as the “learning of learning”, “knowing of knowing” and/ or the awareness of one’s thought processes) was improved with SCT and gamification.
- Gamification is useful as a form of augmentation for didactic teaching but should never replace it.
- The type of psychometric scale used (e.g. LASSI versus PRO-SDLS) will produce varying results.
- Gamification is resource intensive and needs extra time to prepare compared to didactic approaches.
I. INTRODUCTION
Psychology is integral to healthcare and education but has often been overshadowed by the other basic disciplines (Choudhry et al., 2019; Pickren, 2007). This is ironic because the human psyche needs to be properly understood in order to manage people effectively (Wisniewski & Tishelman, 2019). Presently, the study of psychology does not feature prominently in the medical curriculum (Gallagher et al., 2015), with the exception of psychiatry (Douw et al., 2019). This gap needs to be addressed (Paros & Tilburt, 2018). In this research, we seek to understand the constructs for good medical learning via gamification, which has wide-ranging effects (Mullikin et al., 2019). The psychometric constructs analysed were as follows: 1) “Motivation”, defined as the desire to learn out of interest or enjoyment (Yue et al., 2019); 2) “Initiative”, how proactive a student is towards learning (Boyatzis et al., 2000); 3) “Control”, how much influence one has over the circumstances (Sheikhnezhad Fard & Trappenberg, 2019); and 4) “Self-Efficacy”, how confident one is to do what needs to be done (Michael et al., 2019). We believe that these constructs contribute to students’ awareness of their own thought processes (metacognition) in their medical education.
“Gamification” is defined as a process of adding game-like elements to an activity so as to encourage more participation (Rutledge et al., 2018; Van Nuland et al., 2015). The idea of using games to “lighten up” medical education in the clinical setting was first proposed in 2002 (Howarth-Hockey & Stride, 2002). The authors observed increased engagement and participation during lunchtime medical quizzes in the hospital. They concluded that medical education could be fun, and since then gamification has been taken seriously by the community (Evans et al., 2015; Nevin et al., 2014). In essence, gamification could be something as simple as playing board games (Ang et al., 2018), but importantly, its impact on students’ learning must be evaluated and validated. Most studies in the literature did not fulfil this requirement (Graafland et al., 2012). The impact of games on behavioural and/or psychological outcomes should be studied (Graafland et al., 2017; Graafland et al., 2014).
A PubMed search reveals numerous self-reporting tools, such as the LASSI (Learning and Study Strategies Inventory; Muis et al., 2007), the MSLQ (Motivated Strategies for Learning Questionnaire; Villavicencio & Bernardo, 2013), and the SRLPS (Self-Regulated Learning Perception Scale; Turan et al., 2009). Given the choices, how does one decide which to adopt? In our research, we chose to use the PRO-SDLS survey questions with some modifications. The choice was both serendipitous and practical, as we had previously validated it via Cronbach’s alpha (>0.7). In our earlier work, feedback scores and results yielded inconclusive evidence to support enhanced motivation among our students, and it was unclear whether any effect was due to gamification. With the current endeavour, we aim to show via mathematical modelling that there are indeed alterations to the psychometric constructs. Hence, we re-analyse the old data set together with additional new information, using statistical tools such as the logistic regression model, Wilcoxon tests, and the paired t-test.
Medical teaching and learning is a complex endeavour based on an apprenticeship model (Cortez et al., 2019), which may or may not be an ideal arrangement (Sheehan et al., 2010), and decision making is often delegated to the seniors (Chessare, 1998). Conversely, gamification could empower students to take charge of their own learning, including decision making (Shah et al., 2013). Furthermore, one needs to establish what works on the basis of empirical evidence (Cote et al., 2017). While our initial research addressed the impact of the games on academic performance, we now sought to further understand their effects on the psychometric dimensions. This will help to elucidate the psychology of self-directed (or self-regulated) learning. We hypothesise that the amount of gamification will impact these constructs. In summary, we hope to achieve the following:
Aims:
- Understanding the role of psychometric constructs and gamification in medical education via suitable mathematical modelling.
- To decipher the interaction of different psychometric constructs (Motivation, Self-efficacy, Control and Initiative) in producing desired learner behaviours (metacognition) via the anatomy maze.
II. METHODS
First-year medical students (M1) took part in this retrospective analytical research. Two randomised groups of medical students (n=75, median age: 20 years) consented to the study (Groups 1 and 2). A randomised group of students (n=25) not exposed to gamification (Group 0) served as the control. Every student was required to complete a pre- and post-intervention PRO-SDLS for the research. There were no penalties for withdrawing from the IRB-approved project (See IRB: B-16-205).
Gamification was carried out according to the scheme in Figure 1. Each group was divided into 3 to 4 subgroups that would enter the maze with a clue card (see example in Figure 1) linked to a specific pot specimen. They were required to explore the museum for the next clue and had to answer the hidden questions (see examples in Appendix) which would provide further directions. At the conclusion, students were given a competitive pop quiz that had no impact on their summative academic grades.

The main purpose was to assess formative knowledge acquisition. The validated PRO-SDLS includes the following psychometric constructs: “Motivation” (7 questions), “Initiative” (6 questions), “Control” (6 questions) and “Self-Efficacy” (6 questions) (See Sup. Materials). The responses for each construct are then collapsed into an average. A higher score indicated more agreeability towards that construct for self-directed learning (Ang et al., 2017). The survey was designed with reverse-scored items to ensure accuracy. For quantification purposes, we subtracted the pre-feedback from the post-feedback scores for each question; an increased score for a particular construct suggests improvement (Cazan & Schiopca, 2014). Furthermore, students in Group 2 were given Script Concordance Test (SCT) quizzes (See Sup. Materials) as part of gamification (Lubarsky et al., 2013; Lubarsky et al., 2018; Wan et al., 2018). The SCT was meant to enhance clinical reasoning. All data were analysed from two perspectives:
- The magnitude of score increase (or decrease) of the post- PRO-SDLS survey responses, with respect to the pre- responses.
- The odds of a student reporting an increased score in the post- PRO-SDLS survey responses.
In (a), the paired differences for each student’s response were studied using a parametric approach (paired t-test). In (b), we studied the odds of an increased score for each construct, and investigated whether grouping affected these odds. More formally, for each construct k (where k is one of the four constructs) and each group g, we define p(g,k) as the probability of a student from group g showing an increase in score for construct k (and hence 1 - p(g,k) as the probability that the student’s score decreased or remained unchanged). The value of p(g,k) can be estimated by dividing the number of students from group g with an increased score for construct k by the total number of students in group g. If the interventions are unsuccessful, we would expect p(g,k) to be around 0.5, since a student’s score would be equally likely to increase or decrease at random. This can be tested using the t-test.
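The paired t-test in (a) operates on each student’s post-minus-pre construct score. A stdlib-only sketch of the statistic, with toy differences rather than the study data, might be:

```python
import math
from statistics import mean, stdev

def paired_t(diffs):
    """t statistic for H0: mean(post - pre) = 0, with n - 1 degrees of freedom."""
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Toy post-minus-pre scores for one construct in one group (illustrative only).
diffs = [0.2, 0.1, 0.3, -0.1, 0.25, 0.15]
t_stat = paired_t(diffs)
print(round(t_stat, 2))  # compare against the t distribution with n - 1 = 5 df
```

The p-values in the paper would then come from comparing this statistic against the t distribution with n - 1 degrees of freedom; the toy numbers here are not the study data.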
An alternative approach would be to study the odds of success, which can be written as $p_{g,k}/(1 - p_{g,k})$. A common mathematical model used to study these odds is the logistic regression model. For each construct, the logistic regression model studies the odds of a student from a given group showing an increased or decreased score. The overall significance of the model can be tested using the p-value obtained from the likelihood ratio test, while the significance of the individual odds can be tested using the t-test. For more details on the logistic regression model, we refer the reader to Agresti (2003).
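The study’s analysis was performed in R; as a language-agnostic sketch in Python with hypothetical counts (the real per-group counts are assumptions here), a logistic regression with one indicator per group reduces to a simple wins/losses odds for each group, and the likelihood ratio test compares the fit against a single-probability null model:

```python
# Hypothetical per-group counts (assumed for illustration only):
# number of students whose construct score increased, out of n.
import numpy as np
from scipy.stats import chi2

n = {0: 30, 1: 30, 2: 30}
wins = {0: 15, 1: 21, 2: 20}

def loglik(k, n_trials, p):
    """Binomial log-likelihood of k successes in n_trials."""
    return k * np.log(p) + (n_trials - k) * np.log(1 - p)

# Saturated (per-group) model: the MLE of a group's odds is wins/losses,
# which is exactly what group-indicator logistic regression estimates.
odds = {g: wins[g] / (n[g] - wins[g]) for g in n}
ll_full = sum(loglik(wins[g], n[g], wins[g] / n[g]) for g in n)

# Null model: one common improvement probability across all groups.
k_tot, n_tot = sum(wins.values()), sum(n.values())
ll_null = loglik(k_tot, n_tot, k_tot / n_tot)

# Likelihood ratio test: 2 extra parameters => chi-squared with df = 2.
lr_p = chi2.sf(2 * (ll_full - ll_null), df=2)
print(odds, round(lr_p, 3))
```

An odds above 1 (e.g. 21/9 ≈ 2.33 for the hypothetical Group 1) means improvement is more likely than not; a small likelihood-ratio p-value would indicate that allowing group-specific odds fits better than a single shared probability.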
We utilised the open-source software R (R Core Team, 2019) to perform our statistical analysis.
III. RESULTS
The participation rate in the gamification endeavour was consistently 90 ± 5%, there were no withdrawals, and the qualitative comments reported were favourable.
A. Studying the Absolute Scores
The average change in scores across all groups for each construct is given in Table 1. From these scores, we believe that our gamification exercises may have had a positive impact on “Self-Efficacy” and “Initiative”. To visualise the spread of responses, we have prepared box plots of the post − pre scores (available in Supplementary Materials).
| Group | Self-Efficacy | Initiative | Motivation | Control |
|-------|---------------|------------|------------|---------|
| 0     | 0.07          | -0.05      | 0.11       | 0.03    |
| 1     | 0.13          | 0.26       | 0.05       | 0.01    |
| 2     | 0.13          | 0.20       | 0.12       | 0.07    |

Table 1: Average post − pre scores for each construct, by group
To determine whether the construct scores pre- and post-intervention differed, we used the paired t-test, under the null hypothesis of no change. The p-values obtained are summarised in Table 2.
| Group | Self-Efficacy | Initiative | Motivation | Control |
|-------|---------------|------------|------------|---------|
| 0     | 0.46          | 0.57       | 0.43       | 0.79    |
| 1     | 0.07          | 0.00       | 0.54       | 0.86    |
| 2     | 0.01          | 0.00       | 0.09       | 0.14    |

Table 2: p-values of the paired t-test (to 2 decimal places)
We observe that the null hypothesis of no difference between pre- and post-intervention levels is not rejected (at p = 0.05) for any construct in the control group. Both tests also failed to show any significant change for the “Control” construct.
There is strong evidence that the classroom interventions employed in Groups 1 and 2 had an impact on students’ “Initiative” levels, reflected by the small p-values obtained using both tests. The average increases in “Initiative” scores for students in Groups 1 and 2 are 0.71 and 0.67, respectively, which are similar. Recall that the students in Group 2 participated in the SCT, in addition to the maze, which was common to both groups. This suggests that the SCT had a negligible impact on “Initiative”.
There is also evidence that the games enhanced “Self-Efficacy” levels among the students: the t-test gives strong evidence (p = 0.05) of a significant change in Group 2, and milder evidence for Group 1 (p = 0.10). The average increases in “Self-Efficacy” levels for Groups 1 and 2 are 0.63 and 0.56, respectively. Again, the differences between groups are negligible, which suggests that the SCT had a negligible impact here. Finally, there is mild evidence (p = 0.10) of a significant change in “Motivation” for Group 2, but no such evidence for Group 1. The average increase in “Motivation” score for Group 2 is 0.55. This time, the SCT might have helped to improve students’ motivation.
B. Studying the Odds of Score Improvement
We will now turn our attention to modelling the odds of a student reporting an increase in construct score. Earlier, we defined $p_{g,k}$ as the probability of a student from group $g$ showing an increase in score for construct $k$, and explained why we would expect $p_{g,k}$ to be around 0.5 if the games have no impact on the odds of “success”. The t-test was used to test this, under the null hypothesis that $p_{g,k} = 0.5$ for all groups and constructs. The p-values obtained are summarised in Table 3.
| Group | Self-Efficacy | Initiative | Motivation | Control |
|-------|---------------|------------|------------|---------|
| 0     | 0.56          | 0.56       | 0.85       | 0.07    |
| 1     | 0.08          | 0.00       | 0.78       | 0.25    |
| 2     | 0.57          | 0.08       | 0.57       | 0.85    |

Table 3: p-values of t-test (to 2 decimal places)
We first notice that the p-values reported using both tests are almost identical. Interestingly, there is mild evidence (p = 0.10) that $p_{g,k}$ for the “Control” construct in Group 0 deviates significantly from 0.5; it is estimated to be 0.32. This means that students in the control group tended to report a drop in “Control” levels.
There is also mild evidence (p = 0.10) that the probability of a student in Group 1 reporting an increase in “Self-Efficacy” deviates significantly from 0.5. This probability is estimated to be 0.63, indicating that the odds of a student from Group 1 reporting an increase in “Self-Efficacy” levels are higher than for the other groups.
Finally, there is evidence that the probability of reporting an increase in “Initiative” levels for students from Groups 1 and 2 deviates significantly from 0.5. The probabilities for Group 1 and Group 2 are 0.71 and 0.67, respectively.
Next, we model our data using the logistic regression model. We fit four models, one for each construct. For each model, we calculate the odds of a student from a given group showing an increased score. An odds greater than 1 means that the student is more likely to show an increased score, while an odds less than 1 means the opposite; an odds of exactly 1 means an increase and a decrease are equally likely. The statistical significance of the individual odds and the overall model fit for each construct were assessed using the t-test and the likelihood ratio test, respectively. The results are summarised in Table 4, with the statistically significant (p = 0.10) odds shown together with their respective p-values in brackets.
| Odds    | Self-Efficacy | Initiative  | Motivation | Control     |
|---------|---------------|-------------|------------|-------------|
| Group 0 | 0.92          | 0.79        | 1.27       | 0.47 (0.08) |
| Group 1 | 1.08          | 2.41 (0.02) | 1.67       | 0.72        |
| Group 2 | 1.25          | 1.99 (0.10) | 1.26       | 0.93        |

| Model fit | Self-Efficacy | Initiative | Motivation | Control |
|-----------|---------------|------------|------------|---------|
| p-value   | 0.93          | 0.01       | 0.22       | 0.29    |

Table 4: (top) Odds for each construct (statistically significant odds shown with their p-values in brackets); (bottom) p-values to assess logistic regression model fit using the likelihood ratio test
Under the logistic regression model, not rejecting the null hypothesis for a given odds means that we assume it takes the value 1. Note that the individual coefficients should only be examined when the model is determined to be significant under the likelihood ratio test, as coefficients obtained under a poor model fit may not be meaningful.
We notice that the significant terms flagged by the t-test (Table 3) largely agree with the significant terms of the logistic regression model, except for the “Self-Efficacy” odds for Group 1. However, the “Self-Efficacy” model was not deemed a good fit by the likelihood ratio test.
The only model deemed a good fit was that for the “Initiative” construct. The odds for Group 0 is insignificant (and hence assumed to be 1), while the odds for Groups 1 and 2 are statistically significant. We can interpret this model as follows:
- Since the odds for Group 0 is statistically insignificant under the t-test, we assume the odds to be 1. In other words, it is equally likely for a student from the control group to show an increase or decrease in score.
- The odds for both Groups 1 and 2 are statistically significant. The odds of success for Group 1 is 2.41, which translates to roughly a 7 in 10 chance (probability of 0.71) of a student in this group showing an improved score. A similar interpretation can be made for Group 2, which showed an odds of 1.99. This translates to a slightly lower probability of 0.67 for a student from Group 2 displaying an improved score.
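The odds-to-probability conversions quoted above follow from the identity p = odds / (1 + odds); a quick check in Python, using the fitted odds from Table 4:

```python
# Convert an odds of "success" into the corresponding probability.
def odds_to_prob(odds: float) -> float:
    return odds / (1 + odds)

# Odds reported for the "Initiative" model (from Table 4)
print(round(odds_to_prob(2.41), 2))  # Group 1 -> 0.71
print(round(odds_to_prob(1.99), 2))  # Group 2 -> 0.67
```

The same identity recovers the 0.32 quoted for the control group’s “Control” construct from its odds of 0.47.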
With this, we have presented a logistic regression approach to mathematically modelling these odds. A search on Google Scholar and PubMed yielded no previous work which made use of this mathematical modelling approach on PRO-SDLS survey data. With the derived odds, we can compare the degree of success of the various classroom interventions. The logistic regression modelling approach is therefore proposed as a complement to the t-test approach, which is restricted to detecting the presence of statistically significant differences.
C. Qualitative Comments (underlined words underpin metacognition)
1) Positive feedback:
- “The maze games were the most helpful as they helped me to consolidate my learning, and also enables me to ask the tutor any questions that I have from class. They allowed me to learn anatomy in a fun, enjoyable and memorable way”
- “It allowed me to visualize the things that I was learning and helped with clarifying doubts”
- “The extra question posted at each station was helpful”
- “Wanting to be able to identify things in the museum makes me more motivated to prepare beforehand”
- “Allows me to identify the knowledge gap so that I can work on it”
- “I like the quiz as it motivates me to study beforehand and shows me the gaps in my knowledge”
- “The clinically relevant questions made me think a lot”
2) Negative feedback:
- “I prefer didactic teaching”
- “We did not interact much with the exhibits”
- “The maze was more of a mini quiz or test to check if we remember anything”
- “Perhaps we could go into more complex concepts”
- “More challenging questions”
- “Students just follow each other around the anatomy museum and it defeats the purpose of the maze”
- “The maze could have a competitive element to make it more exciting. Maybe more MCQ questions per model so we can make use of it more”
IV. DISCUSSION
We undertook this research to decipher how gamification, as a concept, helps medical students learn a basic subject like human anatomy. We also wanted to understand how psychometric constructs interact to produce behavioural changes towards self-directed learning. This was done by analysing the data from the PRO-SDLS via statistical tests. Put simply, one needs to understand that medical education is a complex process that demands a balance between apprenticeship (fellowship) (Sheehan et al., 2010) and a dose of self-directed learning (van Houten-Schat et al., 2018). Alongside our initial research into the gamification of anatomy education (Ang et al., 2018), other studies suggested similar benefits (Felszeghy et al., 2019; Nicola et al., 2017; Van Nuland et al., 2015). We are therefore convinced that gamification could help to engage students and improve academic gains. However, the notion of gaming can be very broad (virtual reality, board games, digital apps, etc.), so there is a need to understand the underlying psychology. With that in mind, we re-analysed our previous data using proven statistical tools to decipher the learning psychology of these medical students and their awareness of their own thought processes (metacognition).
We earlier hypothesised that gamification would influence these dependent constructs differently, and indeed this was the outcome. In our analysis, we found that the combinatorial effects of the maze and SCT resulted in a significant improvement in “Self-Efficacy” and “Initiative”. While the maze alone did not significantly improve “Motivation”, we saw mild evidence of an improvement in psychometric scores when the SCT and maze were used in combination. In lay terms, the maze encouraged these students to learn on their own. By extension, one could also argue that gamification will help the students make decisions, since “Motivation” and “Initiative” are key attributes (Vohs et al., 2008). The ability to make a simple clinical judgement, and the courage to act on it, are virtues that we should be imbuing in medical students and junior doctors. Interestingly, there is mild evidence that the “Control” construct was eroding in the students not exposed to gamification as the course progressed. This adverse result was not seen in either group exposed to the games. Perhaps the more relaxed classroom setting with gamification helped students to feel more in control of their learning process, which is plausible across the education landscape.
A follow-up question would be: does the qualitative feedback confirm the results of our quantitative analysis? Recall that in our logistic regression model, students from both non-control groups displayed a statistically significant improvement in “Initiative” levels. This is supported by some of the positive feedback received for our endeavours, such as being “motivated to prepare beforehand”, wanting to “identify the knowledge gap” and work on it, as well as being helped to “think a lot” about the course content. Furthermore, some of the negative feedback, such as requests for more challenging questions, or more questions in general, suggests that the students are taking the initiative to learn more. This adds credence to the findings of our proposed logistic regression model, and highlights the importance of studying both qualitative and quantitative feedback.
There are caveats that one should be aware of when implementing gamification. The formative part of the endeavour could be variable, depending on numerous factors such as the tutor involved and the type of games, interventions, and reporting scales used. In the feedback, 76% of the participants felt that the maze should continue as an adjunct but not totally replace didactic tutorials. In other words, introducing gaming elements into the curriculum should be done judiciously. With reverse scoring, it was shown that “Self-Efficacy” fell as the level of gamification was increased. In lay terms, students might feel that the maze trivialises the learning of the subject. As a countermeasure, and to maintain quality assurance, we could introduce video lectures from previous years to allay these fears. In summary, we have now confirmed that gamification works, and that it influences learning outcomes, as demonstrated by others (Burgess et al., 2018; Goyal et al., 2017; Kollei et al., 2017; Kouwenhoven-Pasmooij et al., 2017; Kurtzman et al., 2018; O’Connor et al., 2018; Patel et al., 2017; Savulich et al., 2017). Separately, there were criticisms as to why the SCT was introduced into the research. We believed that such augmentation would add “fun” for the pre-clinical students in tackling the various clinical scenarios and clinical anatomy.
V. LIMITATIONS OF THE STUDY
Our research necessitated that the students take part in the maze and the SCT. Although participation was not compulsory, no students opted out. Some critics might construe this as a form of forced play; according to Jane McGonigal, gamification should ideally not be mandated (Roepke et al., 2015).
VI. CONCLUSION
Through statistical modelling, we have shown how the “Initiative”, “Motivation”, and “Self-Efficacy” constructs could potentially benefit from gamification. The before-after experimental set-up allowed for powerful comparisons to be made. Studying the odds of construct score improvement, alongside the raw scores, allowed us to examine the data from different perspectives. Through this approach, we discovered how the potential benefits of our gamification exercises outweigh the potential adverse effects. Gamification resulted in improved “Initiative” in these medical students. We believe that their decision-making skills will also be boosted if the existing culture allows for more self-discovery (to improve “Initiative”, “Control” and “Self-Efficacy”) and autonomy. If these recommendations are duly considered and implemented thoughtfully, there is little doubt that our future doctors will be better equipped to serve humanity. This may also help to avoid possible burnout in residents (Hale et al., 2019).
A stronger conclusion, and the potential applications, are as follows. In a continuum, we started by gamifying anatomy education and showed that academic grades could be improved by the process (Ang et al., 2018). We then asked the fundamental question of how exactly this happened, by carrying out a psychometric analysis of the participants. We discovered that psychometric constructs are important, as demonstrated in this manuscript. The impact of gamification is now elevated, given the COVID-19 pandemic that has necessitated more online teaching. Moving forward, we believe that gamification should move towards an electronic application that students may access 24/7. This will ensure that medical teaching is fortified and somewhat protected from further disruptions.
Notes on Contributors
Lee De Zhang graduated with a degree in Statistics and Computer Science. He reviewed the literature, analysed the data and wrote part of the manuscript.
Eng Tat Ang, Ph.D., is a senior lecturer in anatomy at the Department of Anatomy at the YLLSoM, NUS. He reviewed the literature, designed the research, collected and analysed the data. He developed the manuscript.
Choo Jiayi, BSc (Hons), graduated with a degree in life sciences. She executed the research and helped to collect the data. She contributed to the development of the manuscript.
M Chandrika, MBBS, DO, MSc is an instructor at the Department of Anatomy at the YLLSoM, NUS. She helped to execute the research and collected the data.
Ng Li Shia, MBBS, Master of Medicine (Otorhinolaryngology), MRCS(Glasg) is a consultant at the Department of Otolaryngology, Head & Neck Surgery (ENT), National University Hospital. She developed the SCT questions.
Ethical Approval
This project has received full IRB and Ethical clearance (NUS IRB: B-16-205).
Acknowledgements
A big thank you to all students who took part in the research, and to the CDTL, NUS, for providing teaching enhancement funds to support this research. Appreciation is also due to Dr Patricia Chen (Dept. of Psychology, NUS) for her helpful advice.
Funding
The NUS TEG AY2017/2018 grant was awarded to help the investigators engage Mr De Zhang Lee for the statistical modelling showing that gamification drove medical education via a maze.
Declaration of Interest
All authors have no conflict of interest to declare.
References
Agresti, A. (2003). Categorical data analysis. John Wiley & Sons.
Ang, E. T., Abu Talib, S. N., Samarasekera, D., Thong, M., & Charn, T. C. (2017). Using video in medical education: What it takes to succeed. The Asia Pacific Scholar. 2(3), 15-21.
Ang, E. T., Chan, J. M., Gopal, V., & Li Shia, N. (2018). Gamifying anatomy education. Clinical Anatomy, 31(7), 997-1005. https://doi.org/10.1002/ca.23249
Boyatzis, R. E., Murphy, A. J., & Wheeler, J. V. (2000). Philosophy as a missing link between values and behaviour. Psychological Reports, 86(1), 47-64. https://doi.org/10.2466/pr0.2000.86.1.47
Burgess, J., Watt, K., Kimble, R. M., & Cameron, C. M. (2018). Combining Technology and Research to Prevent Scald Injuries (the Cool Runnings Intervention): Randomized Controlled Trial. Journal of Medical Internet Research, 20(10), e10361. http://doi.org/10.2196/10361
Cazan, A. M., & Schiopca, B. A. (2014). Self-directed learning, personality traits and academic achievement. Procedia-Social and Behavioral Sciences (127), 640-644.
Chessare, J. B. (1998). Teaching clinical decision-making to pediatric residents in an era of managed care. Pediatrics, 101(4 Pt 2), 762-766; discussion 766-767. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/9544180
Choudhry, F. R., Ming, L. C., Munawar, K., Zaidi, S. T. R., Patel, R. P., Khan, T. M., & Elmer, S. (2019). Health literacy studies conducted in Australia: A scoping review. International Journal of Environmental Research and Public Health, 16(7). https://doi.org/10.3390/ijerph16071112
Cortez, A. R., Winer, L. K., Kassam, A. F., Hanseman, D. J., Kuethe, J. W., Quillin, R. C., 3rd, & Potts, J. R., 3rd. (2019). See none, do some, teach none: An analysis of the contemporary operative experience as nonprimary surgeon. Journal of Surgical Education, 76(6), e92-e101. https://doi.org/10.1016/j.jsurg.2019.05.007
Cote, L., Rocque, R., & Audetat, M. C. (2017). Content and conceptual frameworks of psychology and social work preceptor feedback related to the educational requests of family medicine residents. Patient Education and Counseling, 100(6), 1194-1202. https://doi.org/10.1016/j.pec.2017.01.012
Douw, L., van Dellen, E., Gouw, A. A., Griffa, A., de Haan, W., van den Heuvel, M., Hillebrand, A., Van Mieghem, P., Nissen, I. A., Otte, W. M., & Reijmer, Y. D. (2019). The road ahead in clinical network neuroscience. Network Neuroscience, 3(4), 969-993. https://doi.org/10.1162/netn_a_00103
Evans, K. H., Daines, W., Tsui, J., Strehlow, M., Maggio, P., & Shieh, L. (2015). Septris: a novel, mobile, online, simulation game that improves sepsis recognition and management. Academic Medicine, 90(2), 180-184. https://doi.org/10.1097/ACM.0000000000000611
Felszeghy, S., Pasonen-Seppänen, S., Koskela, A., Nieminen, P., Härkönen, K., Paldanius, K. M., Gabbouj, S., Ketola, K., Hiltunen, M., Lundin, M., & Haapaniemi, T. (2019). Using online game-based platforms to improve student performance and engagement in histology teaching. BMC Medical Education, 19(1), 273. https://doi.org/10.1186/s12909-019-1701-0
Gallagher, S., Wallace, S., Nathan, Y., & McGrath, D. (2015). ‘Soft and fluffy’: medical students’ attitudes towards psychology in medical education. Journal of Health Psychology, 20(1), 91-101. https://doi.org/10.1177/1359105313499780
Goyal, S., Nunn, C. A., Rotondi, M., Couperthwaite, A. B., Reiser, S., Simone, A., Katzman, D. K., Cafazzo, J. A., & Palmert, M. R. (2017). A mobile app for the self-management of Type 1 Diabetes among adolescents: A randomized controlled trial. Journal of Medical Internet Research mHealth and uHealth, 5(6), e82. https://doi.org/10.2196/mhealth.7336
Graafland, M., Bemelman, W. A., & Schijven, M. P. (2017). Game-based training improves the surgeon’s situational awareness in the operation room: a randomized controlled trial. Surgical Endoscopy, 31(10), 4093-4101. https://doi.org/10.1007/s00464-017-5456-6
Graafland, M., Schraagen, J. M., & Schijven, M. P. (2012). Systematic review of serious games for medical education and surgical skills training. British Journal of Surgery, 99(10), 1322-1330. https://doi.org/10.1002/bjs.8819
Graafland, M., Vollebergh, M. F., Lagarde, S. M., van Haperen, M., Bemelman, W. A., & Schijven, M. P. (2014). A serious game can be a valid method to train clinical decision-making in surgery. World Journal of Surgery, 38(12), 3056-3062. https://doi.org/10.1007/s00268-014-2743-4
Hale, A. J., Ricotta, D. N., Freed, J., Smith, C. C., & Huang, G. C. (2019). Adapting Maslow’s Hierarchy of Needs as a Framework for Resident Wellness. Teaching and Learning in Medicine, 31(1), 109-118. https://doi.org/10.1080/10401334.2018.1456928
Howarth-Hockey, G., & Stride, P. (2002). Can medical education be fun as well as educational? British Medical Journal, 325(7378), 1453-1454. https://doi.org/10.1136/bmj.325.7378.1453
Kollei, I., Lukas, C. A., Loeber, S., & Berking, M. (2017). An app-based blended intervention to reduce body dissatisfaction: A randomized controlled pilot study. Journal of Consulting and Clinical Psychology, 85(11), 1104-1108. https://doi.org/10.1037/ccp0000246
Kouwenhoven-Pasmooij, T. A., Robroek, S. J., Ling, S. W., van Rosmalen, J., van Rossum, E. F., Burdorf, A., & Hunink, M. G. (2017). A blended web-based gaming intervention on changes in physical activity for overweight and obese employees: Influence and usage in an experimental pilot study. Journal of Medical Internet Research Serious Games, 5(2), e6. https://doi.org/10.2196/games.6421
Kurtzman, G. W., Day, S. C., Small, D. S., Lynch, M., Zhu, J., Wang, W., Rareshide, C. A., & Patel, M. S. (2018). Social incentives and gamification to promote weight loss: The lose it randomized, controlled trial. Journal of General Internal Medicine, 33(10), 1669-1675. https://doi.org/10.1007/s11606-018-4552-1
Lubarsky, S., Dory, V., Duggan, P., Gagnon, R., & Charlin, B. (2013). Script concordance testing: from theory to practice: AMEE guide no. 75. Medical Teacher, 35(3), 184-193. https://doi.org/10.3109/0142159X.2013.760036
Lubarsky, S., Dory, V., Meterissian, S., Lambert, C., & Gagnon, R. (2018). Examining the effects of gaming and guessing on script concordance test scores. Perspectives on Medical Education, 7(3), 174-181. https://doi.org/10.1007/s40037-018-0435-8
Michael, K., Dror, M. G., & Karnieli-Miller, O. (2019). Students’ patient-centered-care attitudes: The contribution of self-efficacy, communication, and empathy. Patient Education and Counseling. https://doi.org/10.1016/j.pec.2019.06.004
Muis, K. R., Winne, P. H., & Jamieson-Noel, D. (2007). Using a multitrait-multimethod analysis to examine conceptual similarities of three self-regulated learning inventories. British Journal of Educational Psychology, 77(Pt 1), 177-195. https://doi.org/10.1348/000709905X90876
Mullikin, T. C., Shahi, V., Grbic, D., Pawlina, W., & Hafferty, F. W. (2019). First year medical student peer nominations of professionalism: A methodological detective story about making sense of non-sense. Anatomical Sciences Education, 12(1), 20-31. https://doi.org/10.1002/ase.1782
Nevin, C. R., Westfall, A. O., Rodriguez, J. M., Dempsey, D. M., Cherrington, A., Roy, B., Patel, M., & Willig, J. H. (2014). Gamification as a tool for enhancing graduate medical education. Postgraduate Medical Journal, 90(1070), 685-693. https://doi.org/10.1136/postgradmedj-2013-132486
Nicola, S., Virag, I., & Stoicu-Tivadar, L. (2017). VR medical gamification for training and education. Studies in Health Technology and Informatics, 236, 97-103. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/28508784
O’Connor, D., Brennan, L., & Caulfield, B. (2018). The use of neuromuscular electrical stimulation (NMES) for managing the complications of ageing related to reduced exercise participation. Maturitas, 113, 13-20. https://doi.org/10.1016/j.maturitas.2018.04.009
Paros, S., & Tilburt, J. (2018). Navigating conflict and difference in medical education: insights from moral psychology. BMC Medical Education, 18(1), 273. https://doi.org/10.1186/s12909-018-1383-z
Patel, M. S., Benjamin, E. J., Volpp, K. G., Fox, C. S., Small, D. S., Massaro, J. M., Lee, J. J., Hilbert, V., Valentino, M., Taylor, D. H., & Manders, E. S. (2017). Effect of a game-based intervention designed to enhance social incentives to increase physical activity among families: The BE FIT randomized clinical trial. Journal of the American Medical Association Internal Medicine, 177(11), 1586-1593. https://doi.org/10.1001/jamainternmed.2017.3458
Pickren, W. (2007). Psychology and medical education: A historical perspective from the United States. Indian Journal of Psychiatry, 49(3), 179-181. https://doi.org/10.4103/0019-5545.37318
Roepke, A. M., Jaffee, S. R., Riffle, O. M., McGonigal, J., Broome, R., & Maxwell, B. (2015). Randomized controlled trial of superbetter, a smartphone-based/internet-based self-help tool to reduce depressive symptoms. Games for Health Journal, 4(3), 235-246. https://doi.org/10.1089/g4h.2014.0046
Rutledge, C., Walsh, C. M., Swinger, N., Auerbach, M., Castro, D., Dewan, M., Khattab, M., Rake, A., Harwayne-Gidansky, I., Raymond, T. T., & Maa, T. (2018). Gamification in action: Theoretical and practical considerations for medical educators. Academic Medicine, 93(7), 1014-1020. https://doi.org/10.1097/ACM.0000000000002183
Savulich, G., Piercy, T., Fox, C., Suckling, J., Rowe, J. B., O’Brien, J. T., & Sahakian, B. J. (2017). Cognitive training using a novel memory game on an ipad in patients with amnestic mild cognitive impairment (aMCI). International Journal of Neuropsychopharmacology, 20(8), 624-633. https://doi.org/10.1093/ijnp/pyx040
Shah, A., Carter, T., Kuwani, T., & Sharpe, R. (2013). Simulation to develop tomorrow’s medical registrar. The Clinical Teacher, 10(1), 42-46. https://doi.org/10.1111/j.1743-498X.2012.00598.x
Sheehan, D., Bagg, W., de Beer, W., Child, S., Hazell, W., Rudland, J., & Wilkinson, T. J. (2010). The good apprentice in medical education. New Zealand Medical Journal, 123(1308), 89-96. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/20201158
Sheikhnezhad Fard, F., & Trappenberg, T. P. (2019). A novel model for arbitration between planning and habitual control systems. Frontiers in Neurorobotics, 13, 52. https://doi.org/10.3389/fnbot.2019.00052
R Core Team. (2019). R: A language and environment for statistical computing. R Foundation for Statistical Computing.
Turan, S., Demirel, O., & Sayek, I. (2009). Metacognitive awareness and self-regulated learning skills of medical students in different medical curricula. Medical Teacher, 31(10), e477-483. https://doi.org/10.3109/01421590903193521
van Houten-Schat, M. A., Berkhout, J. J., van Dijk, N., Endedijk, M. D., Jaarsma, A. D. C., & Diemers, A. D. (2018). Self-regulated learning in the clinical context: A systematic review. Medical Education, 52(10), 1008-1015. https://doi.org/10.1111/medu.13615
Van Nuland, S. E., Roach, V. A., Wilson, T. D., & Belliveau, D. J. (2015). Head to head: The role of academic competition in undergraduate anatomical education. Anatomical Sciences Education, 8(5), 404-412. https://doi.org/10.1002/ase.1498
Villavicencio, F. T., & Bernardo, A. B. (2013). Positive academic emotions moderate the relationship between self-regulation and academic achievement. British Journal of Educational Psychology, 83(Pt 2), 329-340. https://doi.org/10.1111/j.2044-8279.2012.02064.x
Vohs, K. D., Baumeister, R. F., Schmeichel, B. J., Twenge, J. M., Nelson, N. M., & Tice, D. M. (2008). Making choices impairs subsequent self-control: A limited-resource account of decision making, self-regulation, and active initiative. Journal of Personality and Social Psychology, 94(5), 883-898. https://doi.org/10.1037/0022-3514.94.5.883
Wan, M. S., Tor, E., & Hudson, J. N. (2018). Improving the validity of script concordance testing by optimising and balancing items. Medical Education, 52(3), 336-346. https://doi.org/10.1111/medu.13495
Wisniewski, A. B., & Tishelman, A. C. (2019). Psychological perspectives to early surgery in the management of disorders/differences of sex development. Current Opinion in Pediatrics, 31(4), 570-574. https://doi.org/10.1097/MOP.0000000000000784
Yue, P., Zhu, Z., Wang, Y., Xu, Y., Li, J., Lamb, K. V., Xu, Y., & Wu, Y. (2019). Determining the motivations of family members to undertake cardiopulmonary resuscitation training through grounded theory. Journal of Advanced Nursing, 75(4), 834-849. https://doi.org/10.1111/jan.13923
*Ang Eng Tat
Department of Anatomy
Yong Loo Lin School of Medicine
MD10, National University of Singapore
Singapore 117599
Email address: antaet@nus.edu.sg
Submitted: 4 August 2020
Accepted: 14 October 2020
Published online: 4 May, TAPS 2021, 6(2), 1-8
https://doi.org/10.29060/TAPS.2021-6-2/RA2370
Tow Keang Lim
Department of Medicine, National University Hospital, Singapore
Abstract
Introduction: Clinical diagnosis is a pivotal and highly valued skill in medical practice. Most current interventions for teaching and improving diagnostic reasoning are based on the dual process model of cognition. Recent studies which have applied the popular dual process model to improve diagnostic performance by “Cognitive De-biasing” in clinicians have yielded disappointing results. Thus, it may be appropriate to also consider alternative models of cognitive processing in the teaching and practice of clinical reasoning.
Methods: This is a critical-narrative review of the predictive brain model.
Results: The theory of predictive brains is a general, unified and integrated model of cognitive processing based on recent advances in the neurosciences. The predictive brain is characterised as an adaptive, generative, energy-frugal, context-sensitive, action-orientated, probabilistic, predictive engine. It responds only to prediction errors and learns by iterative prediction-error management, processing and hierarchical neural coding.
Conclusion: The default cognitive mode of predictive processing may account for the failure of de-biasing since it is not thermodynamically frugal and thus, may not be sustainable in routine practice. Exploiting predictive brains by employing language to optimise metacognition may be a way forward.
Keywords: Diagnosis, Bias, Dual Process Theory, Predictive Brains
Practice Highlights
- According to the dual process model of cognition, diagnostic errors are caused by biased reasoning.
- Interventions to improve diagnosis based on “Cognitive De-biasing” methods have reported disappointing results.
- The predictive brain is a unified model of cognition which accounts for diagnostic errors and the failure of “Cognitive De-biasing”, and may point to effective solutions.
- Using appropriate language, as simple rules of thumb, to fine-tune predictive processing metacognitively may be a practical strategy to improve diagnostic problem solving.
I. INTRODUCTION
Clinical diagnostic expertise is a critical, highly valued, and admired skill (Montgomery, 2006). However, diagnostic errors are common and important adverse events which merit research and effective prevention (Gupta et al., 2017; Singh et al., 2014; Skinner et al., 2016). It is now widely acknowledged that concerted efforts are required to improve the research, training and practice of clinical reasoning in order to improve diagnosis (Simpkin et al., 2017; Singh & Graber, 2015; Zwaan et al., 2013). The consensus among practitioners, researchers and preceptors is that most preventable diagnostic errors are associated with biased reasoning during rapid, non-analytical, default cognitive processing of clinical information (Croskerry, 2013). The most widely held theory which accounts for this observation is the dual process model of cognition (B. Djulbegovic et al., 2012; Evans, 2008; Schuwirth, 2017). It posits that most diagnostic errors reside in intuitive, non-analytical or system 1 thinking (Croskerry, 2009). Thus, the logical, practical and common-sense implication which follows from this assumption is that we should activate and apply analytical or system 2 thinking to counter-check or “de-bias” system 1 errors (Croskerry, 2009). This popular notion has facilitated the emergence of many schools of clinical reasoning based on training methods designed to deliberately understand, recognise, categorise and avoid specific diagnostic errors arising from system 1 thinking or cognitive bias (Reilly et al., 2013; Rencic et al., 2017; Restrepo et al., 2020). However, careful research on the merits of these interventions under controlled conditions shows neither consistent nor clear benefits (G. Norman et al., 2014; G. R. Norman et al., 2017; O’Sullivan & Schofield, 2019; Sherbino et al., 2014; Sibbald et al., 2019; J. N. Walsh et al., 2017).
Moreover, even the recognition and categorisation of these cognitive error events is itself deeply confounded by hindsight bias (Zwaan et al., 2016). Perhaps, at this juncture, it might be appropriate to consider alternative models of cognition based on advances in multi-disciplinary neuroscience research, which has expanded greatly in recent years (Monteiro et al., 2020).
Over the past decade, the theory of predictive brains has emerged as an ambitious, unified, convergent and integrated model of cognitive processing, drawing on research in a large variety of core domains in cognition which include philosophy, metaphysics, cellular physics, thermodynamics, associative learning theory, Bayesian probability theory, information theory, machine learning, artificial intelligence, behavioural science, neuro-cognition, neuro-imaging, constructed emotions and psychiatry (Bar, 2011; Barrett, 2017a; Barrett, 2017b; Clark, 2016; Friston, 2010; Hohwy, 2013; Seligman, 2016; Teufel & Fletcher, 2020). It may have profound and practical implications for how we live, work and learn. However, to my knowledge, there is almost no discussion of this novel proposition in either medical education pedagogy or research. Thus, in this review I will survey recent developments in the predictive brain model of cognition, map its key elements which impact on pedagogy and research in medical education, and propose an application in the training of diagnostic reasoning based on it.
An early version of this work was presented as an abstract (Lim & Teoh, 2018).
II. METHODS
This is a critical-narrative review of the predictive brain model, from Friston’s “free energy principle” proposition a decade ago to more recent critical examination of the emerging supportive evidence from neurophysiological studies over the past five years (Friston, 2010; K. S. Walsh et al., 2020).
III. RESULTS
A. The Brain is a Frugal Predictive Engine
(General references: Bar, 2011; Barrett, 2017a; Barrett, 2017b; Clark, 2013; Clark, 2016; Friston, 2010; Gilbert & Wilson, 2007; Hohwy, 2013; Seligman, 2016; Seth et al., 2011; Sterling, 2012).
In contrast with traditional bottom-up, feed-forward models of cognition, the predictive brain model inverts this process. Perception is characterised as an entirely inferential, rapidly adaptive, generative, energy-frugal, context-sensitive, action-orientated, probabilistic, predictive process (Tschantz et al., 2020). This system is governed by the need to respond rapidly to ever-changing demands from the external environment and our body’s internal physiological signals (interoception), while minimising free energy expenditure (or waste) (Friston, 2010; Kleckner et al., 2017; Sterling, 2012). Thus, it is not passive and reactive to new information but predictive and continuously proactive. From very early, elemental and sparse cues, it continuously generates predictive representations based on remembered similar experiences from the past, which may include simulations. It performs iterative matching of top-down prior representations with bottom-up signals and cues in a hierarchy of categories of abstraction and content specificity over scales of space and time (Clark, 2013; Friston & Kiebel, 2009; Spratling, 2017a). This matching process is also sensitive to variations in context and thus enables us to make sense of rapidly changing and complex situations (Clark, 2016).
Cognitive resource, in terms of allocating attention, is focused only on the management of errors in prediction, i.e. the mismatch between prior representations and new emergent information. The system seeks to minimise prediction errors (PEs), and there is repetitive, recognition-expectation-based signal suppression when this is achieved. Thus, this is a system which responds only to the unfamiliar situation, or to what it considers newsworthy. This is analogous to Claude Shannon’s classic analysis of “surprisals” in information theory (Shannon et al., 1993). Learning is based on the generation and neural coding of new predictive representations in memory. The most direct and powerful evidence for this process comes from optogenetic experiments, with their exquisitely high degree of resolution in the monitoring and manipulation of neuronal signalling and behaviour over space-time in freely foraging rats, which show causal linkages between PEs, dopamine neurons and learning (Nasser et al., 2017; Steinberg et al., 2013).
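The notion of a “surprisal” can be made concrete with a small illustrative sketch (not from the source): in Shannon’s information theory, an event with probability p carries −log2(p) bits of information, so it is the improbable, unexpected finding that is newsworthy.

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon surprisal of an event with probability p, in bits."""
    return -math.log2(p)

# An expected finding carries little information and is rapidly suppressed;
# a rare, unexpected finding is "newsworthy" and attracts cognitive resource.
common = surprisal_bits(0.9)   # ~0.15 bits
rare = surprisal_bits(0.01)    # ~6.64 bits
```

On this reading, recognition-expectation-based suppression is simply the silencing of low-surprisal signals.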
The brain intrinsically generates representations of the world in which it finds itself from past experience, refined by sensory data. New sensory information is represented and inferred in terms of these known causes. Determining which combination of the many possible causes best fits the current sensory data is achieved through a process of minimising the error between the sensory data and the sensory inputs predicted by the expected causes, i.e. the PE. In the service of PE reduction, the brain will also generate motor actions such as saccadic eye movements and foraging behaviour. The prediction arises from a process of “backwards thinking”, an inferential Bayesian best guess or approximation based simultaneously on sensory data and prior experience (Chater & Oaksford, 2008; Kersten et al., 2004; Kwisthout et al., 2017a; Kwisthout et al., 2017b; Ting et al., 2015). It is a hierarchical predictive coding process, reflecting the serial organisation of the neuronal architecture of the cerebral cortex; higher levels are abstract, whereas the lowest level amounts to a prediction of the incoming sensory data (Kolossa et al., 2015; Shipp, 2016; Ting et al., 2015). The actual sensory data are compared to the predicted sensory data, and it is the discrepancies, or ‘errors’, that ascend the hierarchy to refine all higher levels of abstraction in the model. Thus, this is a learning process whereby, with each iteration, the model representations are optimised and encoded in long-term memory as the PEs are minimised (Friston, FitzGerald, Rigoli et al., 2017; Spratling, 2017b).
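As a hypothetical, single-level sketch of this iterative error-minimising loop (the hierarchy and full Bayesian machinery are omitted), the `gain` parameter stands in for precision weighting; the function name and numbers are my own, for illustration only:

```python
def predictive_coding_step(estimate: float, sensory: float, gain: float) -> tuple:
    """One iteration: compute the prediction error, then refine the estimate.
    `gain` plays the role of precision weighting: confident (high-precision)
    errors drive larger updates to the internal model."""
    error = sensory - estimate   # the prediction error (PE)
    estimate += gain * error     # refine the model representation
    return estimate, error

estimate = 0.0   # prior expectation
sensory = 10.0   # incoming sensory evidence
for _ in range(20):
    estimate, error = predictive_coding_step(estimate, sensory, gain=0.3)
# With each iteration the PE shrinks and the estimate converges on the data.
```

The learning described in the text corresponds to the converged estimate being retained, so that the same input later generates little or no PE.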
This system of neural responses is regulated and fine-tuned by varying the gain, or weighting, of the reliability (precision) of the PE estimate itself. In other words, it is the level of confidence (versus uncertainty) in the PE which determines the intensity of attention allocated to it and the strength of coding in memory following its resolution (Clark, 2013; Clark, 2016; Feldman & Friston, 2010; Hohwy, 2013). This regulatory, neuro-modulatory process is shaped by the continuous cascade of action-relevant information, which is sensitive to both external context and internal interoceptive (i.e. from perception of our own physiological responses) and affective signals (Clark, 2016). This metacognitive capacity to effectively manipulate and re-calibrate the precision of the PE itself may be a critical aspect of decision making, problem-solving behaviour and learning (Hohwy, 2013; Picard & Friston, 2014).
B. Clinical Reasoning is Predictive Error Processing and Learning is Predictive Coding
The core processes of the predictive brain which are engaged during diagnostic reasoning are summarised in Table 1 and Figure 1.
| Core features of the predictive brain model | Clinical reasoning features and processes |
|---|---|
| The frugal brain and the free energy principle (Friston, 2010) | Cognitive load in problem solving (Young et al., 2014) |
| Iterative matching of top-down priors vs bottom-up signals | Inductive foraging (Donner-Banzhoff & Hertwig, 2014; Donner-Banzhoff et al., 2017) |
| Predictive error processing | Pattern recognition in diagnosis |
| Recognition-expectation-based signal suppression | Premature closure (Blissett & Sibbald, 2017; Melo et al., 2017) |
| Hierarchical predictive error coding as learning | Development of illness scripts (Custers, 2014) |
| Probabilistic-Bayesian inferential approximations | Bayesian inference in clinical reasoning |
| Context sensitivity | Contextual factors in diagnostic errors (Durning et al., 2010) |
| Action orientation | Foraging behaviour in clinical diagnosis (Donner-Banzhoff & Hertwig, 2014; Donner-Banzhoff et al., 2017) |
| Interoception and affect in prediction error management | Gut feel and regret (metacognition) |
| The precision (reliability/uncertainty) of prediction errors | Clinical uncertainty (metacognition) (Bhise et al., 2017; Simpkin & Schwartzstein, 2016) |
Table 1: Core features of the predictive brain model of cognition manifested as clinical reasoning processes

Legend to Figure 1
A summary of the cognitive processes engaged by the predictive brain model during clinical diagnosis
A: Active search for diagnostic clues based on prior experience of similar patients in similar situations.
B: Recognition of key features will activate a series of familiar illness scripts from long-term memory to match with the new case. If this is successful, a diagnosis is made and any prediction error signals are rapidly silenced.
C & D: When the illness scripts do not match the presenting features, cognition slows down, attention is heightened and further searches are made for additional matching clues and illness scripts. This is iterated until a satisfactory match is found or a new illness script is generated to account for the mismatch.
E: The new variation in the presenting features for that disease is then encoded in memory as a new illness script, and is thus a valuable learning moment.
F: The degree of uncertainty, or level of confidence, in matching key presenting features to a diagnosis is a metacognitive skill and a critical expertise in clinical diagnosis. This corresponds to the precision or gain/weighting of prediction errors (metacognition) in the predictive brain model.
Figure 1: A summary of the cognitive processes engaged by the predictive brain model during clinical diagnosis
Thermodynamic frugality is a central feature of the predictive brain model, and in this system the primacy of attending only to surprises, or PEs, is pivotal (Friston, 2010). This might be regarded as an energy-efficient strategy for coping with cognitive load, which has long been recognised as an important consideration in clinical problem solving and learning (Young et al., 2014; Van Merrienboer & Sweller, 2010).
From the first moments of a diagnostic encounter, the clinician is alert to clues which might point to the diagnosis and begins to generate possible diagnostic scenarios and simulations based upon her prior experience of similar patients and situations (Donner-Banzhoff & Hertwig, 2014). This is iterative and, from a scanty set of presenting features, a plausible diagnosis may be considered within a few seconds to minutes (Donner-Banzhoff & Hertwig, 2014; Donner-Banzhoff et al., 2017). Thus, a familiar illness script is activated from long-term memory to match with the new case (Custers, 2014). If this is successful, a particular diagnosis is recognised and any PE signal is rapidly silenced. Functional MRI studies of clinicians during this process showed that highly salient diagnostic information, by reducing uncertainty about the diagnosis, rapidly decreased monitoring activity in the frontoparietal attentional network and may contribute to premature diagnostic closure, an important cause of diagnostic errors (Melo et al., 2017). This may be considered a form of diagnosis- or recognition-related PE signal suppression, analogous to the well-known phenomenon of repetition suppression (Blissett & Sibbald, 2017; Bunzeck & Thiel, 2016; Krupat et al., 2017).
In cases where the illness scripts do not match the presenting features, a PE event is encountered: cognition slows down, attention is heightened and further searches are made for additional matching clues and illness scripts (Custers, 2014). This is iterated until a satisfactory match is found or a new illness script is generated to account for the mismatch. This is then encoded in memory as a new variation in the presenting features for that disease, and is thus a valuable learning moment. Bayesian inference is a fundamental feature of both clinical diagnostic reasoning and the predictive brain model (Chater & Oaksford, 2008).
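The Bayesian character of this matching process can be shown with a minimal worked example in the familiar odds form of Bayes’ theorem used at the bedside; the numbers below are invented purely for illustration:

```python
def posttest_probability(pretest_p: float, likelihood_ratio: float) -> float:
    """Bayes' theorem in odds form:
    posterior odds = prior odds x likelihood ratio."""
    prior_odds = pretest_p / (1 - pretest_p)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical numbers: a 20% pretest probability of disease, and a
# positive test result with a likelihood ratio of 10, yield a posttest
# probability of about 0.71.
p = posttest_probability(0.20, 10)
```

Each new clue updates the prior in exactly this way, which is the formal counterpart of the iterative script-matching described above.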
As in the predictive brain model, external contextual factors and internal emotional and physiological responses, such as gut feeling and regret, exert profound effects on clinical decision making (M. Djulbegovic et al., 2015; Durning et al., 2010; Stolper & van de Wiel, 2014; Stolper et al., 2014). Likewise, the active inductive foraging behaviour in searching for diagnostic clues described in experienced primary care physicians is analogous to behaviour directed at reducing PEs (Donner-Banzhoff & Hertwig, 2014; Donner-Banzhoff et al., 2017). The precision, or gain/weighting, of PEs is manifested metacognitively as uncertainty, or level of confidence, in clinical reasoning (Sandved-Smith et al., 2020). Metacognition is a critical capacity and expertise in effective decision making (Bhise et al., 2017; Fleming & Frith, 2014; Simpkin & Schwartzstein, 2016).
C. Why Applying the Dual Process Model May Not Improve Clinical Reasoning
Recent studies which have applied the popular dual process model to improve diagnostic performance by “cognitive de-biasing” in clinicians have yielded disappointing results (G. R. Norman et al., 2017). Cognitive processing by the predictive brain as the dominant, default mode of operation may account for this setback, since de-biasing is not naturalistic and requires retrospective, “off-line” processing after the monitoring salience network has already shut off (Krupat et al., 2017; Melo et al., 2017). It is not thermodynamically frugal and thus may not be sustainable in routine practice (Friston, 2010; Young et al., 2014). Even Daniel Kahneman himself admits that, despite decades of research in cognitive bias, he is unable to exert agency in the moment and de-bias himself (Kahneman, 2013). This will be even more so in novice diagnosticians in the training phase, who have scanty illness scripts and limited tolerance of any further cognitive loading (Young et al., 2014). The failure of clinicians even to identify cognitive biases reliably, owing to hindsight bias itself, suggests that this intervention will be among the least effective in improving diagnostic reasoning (Zwaan et al., 2016).
D. Using Words to Fine Tune the Precision of Diagnostic Prediction Error
Daniel Kahneman, the foremost expert on cognitive bias, cautions that, contrary to what some experts in medical education advise, avoiding bias is ineffective in improving decision making under uncertainty (Restrepo et al., 2020). By contrast, he suggests that we apply simple, common-sense rules of thumb (Kahneman et al., 2016). I hypothesise that instructing clinical trainees to use appropriate words-to-self in the diagnostic setting during active, naturalistic PE processing, before the diagnosis is made and not as a retrospective counter-check to cognition afterwards, may be a way forward (Betz et al., 2019; Clark, 2016; Lupyan, 2017). In a multi-centre, iterative thematic content analysis of over 2,000 cases of diagnostic error using a structured taxonomy, Schiff and colleagues identified a limited number of pitfall themes which were overlooked and predisposed physicians to reasoning errors (Reyes Nieva et al., 2017). These pitfall themes included three of particular interest in relation to naturalistic PE processing, namely: (1) counter-diagnostic cues, (2) things that do not fit and (3) red flags (Reyes Nieva et al., 2017). Thus, we instructed our student interns and internal medicine residents to pay particular attention to these three diagnostic pitfalls during review of new patients and clinical problems (Lim & Teoh, 2018). They were required to append the following sub-headings to their clerking impression in the patient’s electronic health record (eHR): (a) Counter diagnostic features; (b) Things that do not fit; (c) Red flags. This template was added after the resident had entered his or her enumerated list of diagnoses or issues. “Counter diagnostic features” was defined as symptoms, signs or investigations which were inconsistent with the proposed primary diagnosis. “Things that do not fit” was defined as any finding that could not be reasonably accounted for taking into account the main and differential diagnoses.
“Red flags” were defined as findings which raised the possibility of a more serious underlying illness requiring early diagnosis or intervention. The attending physicians were required, during bedside rounds, to give feedback on these points and make amendments to the eHR as appropriate. This exercise may give us an opportunity to see if we can improve diagnostic accuracy by using pivotal words-to-self in the appropriate setting to maintain cognitive openness and flexibility, and thus avoid premature closure (Krupat et al., 2017). It is also a valuable critical, metacognitive thinking habit to inculcate in tyro diagnosticians (Carpenter et al., 2019).
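As a minimal sketch, the three-part clerking template described above could be generated programmatically for insertion into a note; the sub-heading wording follows the text, while the function name and comments are my own:

```python
# Hypothetical sketch of the clerking template appended after the problem list.
TEMPLATE_HEADINGS = (
    "Counter diagnostic features",  # findings inconsistent with the primary diagnosis
    "Things that do not fit",       # findings unexplained by the main/differential diagnoses
    "Red flags",                    # findings suggesting serious illness needing early action
)

def clerking_template() -> str:
    """Render the three sub-headings, each left blank for the resident to complete."""
    return "\n".join(f"{heading}:" for heading in TEMPLATE_HEADINGS)
```

Rendering the headings as explicit prompts is the point of the intervention: the words-to-self are placed in front of the diagnostician during, not after, PE processing.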
IV. CONCLUSION
The theory of predictive brains has emerged as a major narrative in the understanding of how our mind works. It may account for the limitations of interventions designed to improve diagnostic problem solving which are based on the dual process theory of cognition. Exploiting predictive brains by employing language to optimise metacognition may be a way forward.
Note on Contributor
Lim designed the paper, reviewed the literature, drafted and revised it.
Ethical Approval
There is no ethical approval associated with this paper.
Funding
No funding sources are associated with this paper.
Declaration of Interest
No conflicts of interest are associated with this paper.
References
Bar, M. (Ed.). (2011). Predictions in the brain: Using our past to generate a future. Oxford University Press.
Barrett, L. F. (2017a). How emotions are made: the secret life of the brain. Houghton Mifflin Harcourt.
Barrett, L. F. (2017b). The theory of constructed emotion: An active inference account of interoception and categorization. Social Cognitive and Affective Neuroscience, 12(1), 1-23. https://doi.org/10.1093/scan/nsw154
Betz, N., Hoemann, K., & Barrett, L. F. (2019). Words are a context for mental inference. Emotion, 19(8), 1463-1477. https://doi.org/10.1037/emo0000510
Bhise, V., Rajan, S. S., Sittig, D. F., Morgan, R. O., Chaudhary, P., & Singh, H. (2017). Defining and measuring diagnostic uncertainty in medicine: A systematic review. Journal of General Internal Medicine, 33, 103–115. https://doi.org/10.1007/s11606-017-4164-1
Blissett, S., & Sibbald, M. (2017). Closing in on premature closure bias. Medical Education, 51(11), 1095-1096. https://doi.org/10.1111/medu.13452
Bunzeck, N., & Thiel, C. (2016). Neurochemical modulation of repetition suppression and novelty signals in the human brain. Cortex, 80, 161-173. https://doi.org/10.1016/j.cortex.2015.10.013
Carpenter, J., Sherman, M. T., Kievit, R. A., Seth, A. K., Lau, H., & Fleming, S. M. (2019). Domain-general enhancements of metacognitive ability through adaptive training. Journal of Experimental Psychology. General, 148(1), 51-64. https://doi.org/10.1037/xge0000505
Chater, N., & Oaksford, M. (2008). The probabilistic mind: Prospects for Bayesian cognitive science. Oxford University Press.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. The Behavioral and Brain Sciences, 36(3), 181–204. https://doi.org/10.1017/S0140525X12000477
Clark, A. (2016). Surfing uncertainty: Prediction, action, and the embodied mind. Oxford University Press.
Croskerry, P. (2009). Clinical cognition and diagnostic error: Applications of a dual process model of reasoning. Advances in Health Sciences Education : Theory and Practice, 14 Suppl 1, 27–35. https://doi.org/10.1007/s10459-009-9182-2
Croskerry, P. (2013). From mindless to mindful practice–cognitive bias and clinical decision making. The New England Journal of Medicine, 368(26), 2445–2448. https://doi.org/10.1056/NEJMp1303712
Custers, E. J. (2014). Thirty years of illness scripts: Theoretical origins and practical applications. Medical Teacher, 1-6. https://doi.org/10.3109/0142159X.2014.956052
Djulbegovic, B., Hozo, I., Beckstead, J., Tsalatsanis, A., & Pauker, S. G. (2012). Dual processing model of medical decision-making. BMC Medical Informatics and Decision Making, 12, 94. https://doi.org/10.1186/1472-6947-12-94
Djulbegovic, M., Beckstead, J., Elqayam, S., Reljic, T., Kumar, A., Paidas, C., & Djulbegovic, B. (2015). Thinking styles and regret in physicians. Public Library of Science One, 10(8), e0134038. https://doi.org/10.1371/journal.pone.0134038
Donner-Banzhoff, N., & Hertwig, R. (2014). Inductive foraging: Improving the diagnostic yield of primary care consultations. European Journal of General Practice, 20(1), 69–73. https://doi.org/10.3109/13814788.2013.805197
Donner-Banzhoff, N., Seidel, J., Sikeler, A. M., Bosner, S., Vogelmeier, M., Westram, A., & Gigerenzer, G. (2017). The phenomenology of the diagnostic process: A primary care-based survey. Medical Decision Making, 37(1), 27-34. https://doi.org/10.1177/0272989X16653401
Durning, S. J., Artino, A. R., Jr., Pangaro, L. N., van der Vleuten, C., & Schuwirth, L. (2010). Perspective: redefining context in the clinical encounter: Implications for research and training in medical education. Academic Medicine: Journal of the Association of American Medical Colleges, 85(5), 894–901. https://doi.org/10.1097/ACM.0b013e3181d7427c
Evans, J. S. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59, 255–278. https://doi.org/10.1146/annurev.psych.59.103006.093629
Feldman, H., & Friston, K. J. (2010). Attention, uncertainty, and free-energy. Frontiers in Human Neuroscience, 4, 215. https://doi.org/10.3389/fnhum.2010.00215
Fleming, S. M., & Frith, C. D. (2014). The cognitive neuroscience of metacognition. Springer.
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews. Neuroscience, 11(2), 127–138. https://doi.org/10.1038/nrn2787
Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., & Pezzulo, G. (2017). Active inference: A process theory. Neural Computation, 29(1), 1–49. https://doi.org/10.1162/NECO_a_00912
Friston, K., & Kiebel, S. (2009). Predictive coding under the free-energy principle. Philosophical Transactions of the Royal Society of London. Series B, Biological sciences, 364(1521), 1211–1221. https://doi.org/10.1098/rstb.2008.0300
Gilbert, D. T., & Wilson, T. D. (2007). Prospection: Experiencing the future. Science, 317(5843), 1351-1354. https://doi.org/10.1126/science.1144161
Gupta, A., Snyder, A., Kachalia, A., Flanders, S., Saint, S., & Chopra, V. (2017). Malpractice claims related to diagnostic errors in the hospital. BMJ Quality and Safety, 27(1), 53-60. https://doi.org/10.1136/bmjqs-2017-006774
Hohwy, J. (2013). The predictive mind. Oxford University Press.
Kahneman, D. (2013). Thinking, fast and slow (1st pbk. ed.). Farrar, Straus and Giroux.
Kahneman, D., Rosenfield, A. M., Gandhi, L., & Blaser, T. O. M. (2016). Noise: How to overcome the high, hidden cost of inconsistent decision making. Harvard Business Review, 94(10), 38-46.
Kersten, D., Mamassian, P., & Yuille, A. (2004). Object perception as bayesian inference. Annual Review of Psychology, 55, 271–304. https://doi.org/10.1146/annurev.psych.55.090902.142005
Kleckner, I. R., Zhang, J., Touroutoglou, A., Chanes, L., Xia, C., Simmons, W. K., & Feldman Barrett, L. (2017). Evidence for a large-scale brain system supporting allostasis and interoception in humans. Nature Human Behaviour, 1, 0069. https://doi.org/10.1038/s41562-017-0069
Kolossa, A., Kopp, B., & Fingscheidt, T. (2015). A computational analysis of the neural bases of Bayesian inference. Neuroimage, 106, 222-237. https://doi.org/10.1016/j.neuroimage.2014.11.007
Krupat, E., Wormwood, J., Schwartzstein, R. M., & Richards, J. B. (2017). Avoiding premature closure and reaching diagnostic accuracy: Some key predictive factors. Medical Education, 51(11), 1127-1137. https://doi.org/10.1111/medu.13382
Kwisthout, J., Bekkering, H., & van Rooij, I. (2017a). To be precise, the details don’t matter: On predictive processing, precision, and level of detail of predictions. Brain and Cognition, 112, 84–91. https://doi.org/10.1016/j.bandc.2016.02.008
Kwisthout, J., Phillips, W. A., Seth, A. K., van Rooij, I., & Clark, A. (2017b). Editorial to the special issue on perspectives on human probabilistic inference and the ‘Bayesian brain’. Brain and Cognition, 112, 1-2. https://doi.org/10.1016/j.bandc.2016.12.002
Lim, T. K., & Teoh, C. M. (2018). Exploiting predictive brains for better diagnosis. Diagnosis (Berl), 5(3), eA40. Retrieved from https://www.degruyter.com/view/journals/dx/5/3/article-peA1.xml
Lupyan, G. (2017). Changing what you see by changing what you know: The role of attention. Frontiers in Psychology, 8, 553. https://doi.org/10.3389/fpsyg.2017.00553
Melo, M., Gusso, G. D. F., Levites, M., Amaro, E., Jr., Massad, E., Lotufo, P. A., & Friston, K. J. (2017). How doctors diagnose diseases and prescribe treatments: An fMRI study of diagnostic salience. Scientific Reports, 7(1), 1304. http://observatorio.fm.usp.br/handle/OPI/19951
Monteiro, S., Sherbino, J., Sibbald, M., & Norman, G. (2020). Critical thinking, biases and dual processing: The enduring myth of generalisable skills. Medical Education, 54(1), 66-73. https://doi.org/10.1111/medu.13872
Montgomery, K. (2006). How doctors think: Clinical judgement and the practice of medicine. Oxford University Press.
Nasser, H. M., Calu, D. J., Schoenbaum, G., & Sharpe, M. J. (2017). The dopamine prediction error: Contributions to associative models of reward learning. Frontiers in Psychology, 8, 244. https://doi.org/10.3389/fpsyg.2017.00244
Norman, G., Sherbino, J., Dore, K., Wood, T., Young, M., Gaissmaier, W., & Monteiro, S. (2014). The etiology of diagnostic errors: A controlled trial of system 1 versus system 2 reasoning. Academic Medicine: Journal of the Association of American Medical Colleges, 89(2), 277–284. https://doi.org/10.1097/ACM.0000000000000105
Norman, G. R., Monteiro, S. D., Sherbino, J., Ilgen, J. S., Schmidt, H. G., & Mamede, S. (2017). The Causes of Errors in Clinical Reasoning: Cognitive Biases, Knowledge Deficits, and Dual Process Thinking. Academic Medicine: Journal of the Association of American Medical Colleges, 92(1), 23–30. https://doi.org/10.1097/ACM.0000000000001421
O’Sullivan, E. D., & Schofield, S. J. (2019). A cognitive forcing tool to mitigate cognitive bias – A randomised control trial. BMC Medical Education, 19(1), 12. https://doi.org/10.1186/s12909-018-1444-3
Picard, F., & Friston, K. (2014). Predictions, perception, and a sense of self. Neurology, 83(12), 1112-1118. https://doi.org/10.1212/WNL.0000000000000798
Reilly, J. B., Ogdie, A. R., Von Feldt, J. M., & Myers, J. S. (2013). Teaching about how doctors think: A longitudinal curriculum in cognitive bias and diagnostic error for residents. BMJ Quality & Safety, 22(12), 1044–1050. https://doi.org/10.1136/bmjqs-2013-001987
Rencic, J., Trowbridge, R. L., Jr., Fagan, M., Szauter, K., & Durning, S. (2017). Clinical reasoning education at us medical schools: Results from a national survey of internal medicine clerkship directors. Journal of General Internal Medicine, 32(11), 1242–1246. https://doi.org/10.1007/s11606-017-4159-y
Restrepo, D., Armstrong, K. A., & Metlay, J. P. (2020). Annals Clinical Decision Making: Avoiding Cognitive Errors in Clinical Decision Making. Annals of Internal Medicine, 172(11), 747–751. https://doi.org/10.7326/M19-3692
Reyes Nieva, H., Wright, A., Singh, H., Ruan, E., & Schiff, G. (2017). Diagnostic pitfalls: A new approach to understand and prevent diagnostic error. Diagnosis, 4, eA1. https://www.degruyter.com/view/journals/dx/5/4/article-peA59.xml
Sandved-Smith, L., Hesp, C., Lutz, A., Mattout, J., Friston, K., & Ramstead, M. (2020, June 10). Towards a formal neurophenomenology of metacognition: Modelling meta-awareness, mental action, and attentional control with deep active inference. https://doi.org/10.31234/osf.io/5jh3c
Schuwirth, L. (2017). When I say … dual-processing theory. Medical Education, 51(9), 888–889. https://doi.org/10.1111/medu.13249
Seligman, M. E. P. (2016). Homo prospectus. Oxford University Press.
Seth, A. K., Suzuki, K., & Critchley, H. D. (2011). An interoceptive predictive coding model of conscious presence. Frontiers in Psychology, 2, 395. https://doi.org/10.3389/fpsyg.2011.00395
Shannon, C. E., Sloane, N. J. A., Wyner, A. D., & IEEE Information Theory Society. (1993). Claude Elwood Shannon: Collected papers. IEEE Press.
Sherbino, J., Kulasegaram, K., Howey, E., & Norman, G. (2014). Ineffectiveness of cognitive forcing strategies to reduce biases in diagnostic reasoning: A controlled trial. Canadian Journal of Emergency Medicine, 16(1), 34–40. https://doi.org/10.2310/8000.2013.130860
Shipp, S. (2016). Neural Elements for Predictive Coding. Frontiers in Psychology, 7, 1792. https://doi.org/10.3389/fpsyg.2016.01792
Sibbald, M., Sherbino, J., Ilgen, J. S., Zwaan, L., Blissett, S., Monteiro, S., & Norman, G. (2019). Debiasing versus knowledge retrieval checklists to reduce diagnostic error in ECG interpretation. Advances in Health Sciences Education: Theory and Practice, 24(3), 427–440. https://doi.org/10.1007/s10459-019-09875-8
Simpkin, A. L., & Schwartzstein, R. M. (2016). Tolerating uncertainty – The next medical revolution? The New England Journal of Medicine, 375(18), 1713–1715. https://doi.org/10.1056/NEJMp1606402
Simpkin, A. L., Vyas, J. M., & Armstrong, K. A. (2017). Diagnostic Reasoning: An endangered competency in internal medicine training. Annals of Internal Medicine, 167(7), 507–508. https://doi.org/10.7326/M17-0163
Singh, H., & Graber, M. L. (2015). Improving diagnosis in health care- The next imperative for patient safety. The New England Journal of Medicine, 373(26), 2493–2495. https://doi.org/10.1056/NEJMp1512241
Singh, H., Meyer, A. N., & Thomas, E. J. (2014). The frequency of diagnostic errors in outpatient care: Estimations from three large observational studies involving US adult populations. BMJ Quality & Safety, 23(9), 727–731. https://doi.org/10.1136/bmjqs-2013-002627
Skinner, T. R., Scott, I. A., & Martin, J. H. (2016). Diagnostic errors in older patients: A systematic review of incidence and potential causes in seven prevalent diseases. International Journal of General Medicine, 9, 137–146. https://doi.org/10.2147/IJGM.S96741
Spratling, M. W. (2017a). A hierarchical predictive coding model of object recognition in natural images. Cognitive Computation, 9(2), 151–167. https://doi.org/10.1007/s12559-016-9445-1
Spratling, M. W. (2017b). A review of predictive coding algorithms. Brain and Cognition, 112, 92–97. https://doi.org/10.1016/j.bandc.2015.11.003
Steinberg, E. E., Keiflin, R., Boivin, J. R., Witten, I. B., Deisseroth, K., & Janak, P. H. (2013). A causal link between prediction errors, dopamine neurons and learning. Nature Neuroscience, 16(7), 966–973. https://doi.org/10.1038/nn.3413
Sterling, P. (2012). Allostasis: A model of predictive regulation. Physiology & Behavior, 106(1), 5–15. https://doi.org/10.1016/j.physbeh.2011.06.0044
Stolper, C. F., & van de Wiel, M. W. (2014). EBM and gut feelings. Medical Teacher, 36(1), 87-88. https://doi.org/10.3109/0142159X.2013.835390
Stolper, C. F., Van de Wiel, M. W., Hendriks, R. H., Van Royen, P., Van Bokhoven, M. A., Van der Weijden, T., & Dinant, G. J. (2014). How do gut feelings feature in tutorial dialogues on diagnostic reasoning in GP traineeship? Advances in Health Sciences Education: Theory and Practice, 20(2), 499–513. https://doi.org/10.1007/s10459-014-9543-3
Teufel, C., & Fletcher, P. C. (2020). Forms of prediction in the nervous system. Nature Reviews Neuroscience, 21(4), 231–242. https://doi.org/10.1038/s41583-020-0275-5
Ting, C. C., Yu, C. C., Maloney, L. T., & Wu, S. W. (2015). Neural mechanisms for integrating prior knowledge and likelihood in value-based probabilistic inference. The Journal of Neuroscience : The Official Journal of the Society for Neuroscience, 35(4), 1792–1805. https://doi.org/10.1523/JNEUROSCI.3161-14.2015
Tschantz, A., Seth, A. K., & Buckley, C. L. (2020). Learning action-oriented models through active inference. PLoS Computational Biology, 16(4), e1007805. https://doi.org/10.1371/journal.pcbi.1007805
Van Merrienboer, J. J., & Sweller, J. (2010). Cognitive load theory in health professional education: Design principles and strategies. Medical Education, 44(1), 85-93. https://doi.org/10.1111/j.1365-2923.2009.03498.x
Walsh, J. N., Knight, M., & Lee, A. J. (2017). Diagnostic errors: Impact of an educational intervention on pediatric primary care. Journal of Pediatric Health Care : Official Publication of National Association of Pediatric Nurse Associates & Practitioners, 32(1), 53–62. https://doi.org/10.1016/j.pedhc.2017.07.004
Walsh, K. S., McGovern, D. P., Clark, A., & O’Connell, R. G. (2020). Evaluating the neurophysiological evidence for predictive processing as a model of perception. Annals of the New York Academy of Sciences, 1464(1), 242–268. https://doi.org/10.1111/nyas.14321
Young, J. Q., Van Merrienboer, J., Durning, S., & Ten Cate, O. (2014). Cognitive load theory: Implications for medical education: AMEE Guide No. 86. Medical Teacher, 36(5), 371-384. https://doi.org/10.3109/0142159X.2014.889290
Zwaan, L., Monteiro, S., Sherbino, J., Ilgen, J., Howey, B., & Norman, G. (2016). Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups. BMJ Quality & Safety. 26(2), 104–110. https://doi.org/10.1136/bmjqs-2015-005014
Zwaan, L., Schiff, G. D., & Singh, H. (2013). Advancing the research agenda for diagnostic error reduction. BMJ Quality & Safety, 22 Suppl 2, ii52-ii57. https://doi.org/10.1136/bmjqs-2012-001624
*Lim Tow Keang
Department of Medicine
National University Hospital
5 Lower Kent Ridge Rd
Singapore 119074
Email: mdclimtk@nus.edu.sg
Published online: 5 January, TAPS 2021, 6(1), 1-2
https://doi.org/10.29060/TAPS.2021-6-1/EV6N1
In our January 2020 Editorial, we drew the attention of our readers to “Grit in Healthcare Education and Practice”. In particular, we focused on developing the “Grit” of students and trainees; medical students who are well-equipped with the ‘Power of Grit’ will display a “passion for patient well-being and perseverance in the pursuit of that goal [which] become social norms at the individual, team and institutional levels” (Lee & Duckworth, 2018). However, never could we imagine then that such an attribute (i.e. ‘Grit’) would become contextual so soon, as exemplified by the passion and perseverance of healthcare practitioners in patient care in their response to the serious disruptions in individual health (including fatalities) caused by the Covid-19 pandemic!
We are pleased to have this opportunity to share with our readers, once again, the unexpected course of events associated with the Covid-19 pandemic which brought out the best in many on a global scale. In particular, as the education and training of medical students, residents and those in allied health institutions were disrupted by the Covid-19 pandemic, educators, supported by the administration of medical and health professions institutions, designed curricular innovations that incorporated culturally sensitive interventions to develop individual resilience and well-being in order to support the community of learners: students, faculty, administrators and, of course, patients.
The current Covid-19 pandemic served as a catalyst that provided opportunities for educators to rapidly and creatively design safe, yet effective, novel and innovative solutions to ensure continuity in the education and training of medical and closely allied health professional students (Samarasekera, Goh, & Lau, 2020). Responding rapidly to the pandemic required breaking away from decades of tradition in the design of such educational strategies, and both institutional and programme leadership were needed to facilitate the design of creative, yet safe, effective and innovative strategies for the continuation of student learning; such steps were expected to mitigate the disruptive effects of the Covid-19 pandemic! In this context, educators leveraged available technology as the preferred mode for the delivery of instruction to students, and the learning environment was transformed from one that was predominantly classroom-based to one that is mainly online. It is also gratifying that both junior and senior faculty have embraced the use of technology, although some degree of 'resistance' to its use in education had been experienced earlier. Perhaps a caveat should be added: student learning using technology over a long period of time may result in a lack of social interaction among students and, consequently, a lack of preparation for the teamwork that is so critical for healthcare practice in the 21st century.
The Covid-19 pandemic has also exposed wider societal gaps which were seldom evident previously but need to be addressed. It is useful to note that The Lancet Global Independent Commission had already stated in its Report (Frenk et al., 2010) that "Indeed, the use of IT might be the most important driver in transformative learning …." and that "Advanced information technology is important not only for more efficient education of health professionals; its existence also demands a change in competencies." The Report also drew attention to the fact that "IT-empowered learning is already a reality for the younger generation in most countries, …." However, mindful of financial constraints, the Report cautioned that "Not all students, of course, have full access to IT resources" and suggested "A global policy to overcome such unequal distribution of digital resources [referred to as the digital divide] …." Such inequalities have also recently been addressed by Blundell, Costa Dias, Joyce, and Xu (2020).
A major concern of medical and allied health professional institutions is the well-being of students and staff who ensure the continuation of student education and training disrupted by the Covid-19 pandemic. Many institutions provided strong support to students and staff in such challenging times. Students received financial support and, if required, counselling as well in order to enhance their psycho-social well-being. Students infected by the virus or who were quarantined received special care. Many institutional policies were swiftly revised to match the rapidly changing environment: clear lines of communication were established for staff and students (Ashokka, Ong, Tay, Loh, Gee, & Samarasekera, 2020).
A more resilient community of staff and students has remarkably emerged from the trials and tribulations experienced: students have adapted rapidly to blended and virtual learning environments, and have organised their learning engagements around virtual student communities as most institutions have minimised their face-to-face classroom activities. Faculty responded by designing a more adaptive curriculum that is flexible to the needs of the learner. Pre-clinical and clinical learning activities were further refined and streamlined, with the removal of some content and examinations, a process unthinkable prior to the disruptions of the Covid-19 pandemic, when curricula were strictly controlled by the institution and/or professional and statutory bodies. Within a short period of time, newer course materials and assessment instruments, all aligned to support online, blended or hybrid learning requirements, were developed. Perhaps the most significant contribution from staff, however, has been their proactive support in meeting the needs of learners in the crisis triggered by the Covid-19 pandemic! Such actions by the staff were greatly appreciated; stronger bonds and a closer community spirit between students and staff were soon established.
In conclusion, it can be said that medical and allied health professional educators have learnt much from the disruptive effects of the Covid-19 pandemic on student learning. Instead of wallowing in self-pity and sadness, or simply awaiting time-out, a determined and focused faculty can mitigate the formidable challenge posed by the Covid-19 pandemic by responding rapidly to make changes to the learning environment, using appropriate technology to deliver instruction to students, in order to ensure the continuation of safe, timely, and quality education!
Providing constant support to students by the staff and the institution will help students develop relevant coping strategies that foster their resilience and well-being. Ultimately, a community of learners and practitioners will emerge with the ability to provide and maintain quality healthcare during challenging times like the one we are now experiencing.
Dujeepa D. Samarasekera & Matthew C. E. Gwee
Centre for Medical Education (CenMED), NUS Yong Loo Lin School of Medicine,
National University Health System, Singapore
Ashokka, B., Ong, S. Y., Tay, K. H., Loh, N., Gee, C. F., & Samarasekera, D. D. (2020). Coordinated responses of academic medical centres to pandemics: Sustaining medical education during COVID-19. Medical Teacher, 42(7), 762-771.
Blundell, R., Costa Dias, M., Joyce, R., & Xu, X. (2020). COVID‐19 and Inequalities. Fiscal Studies, 41(2), 291-319.
Frenk, J., Chen, L., Bhutta, Z. A., Cohen, J., Crisp, N., Evans, T., … & Kistnasamy, B. (2010). Health professionals for a new century: Transforming education to strengthen health systems in an interdependent world. The Lancet, 376(9756), 1923-1958.
Lee, T. H., & Duckworth, A. L. (2018). Organizational grit. Harvard Business Review, 96(5), 98-105.
Samarasekera, D. D., Goh, D. L. M., & Lau, T. C. (2020). Medical school approach to manage the current COVID-19 crisis. Academic Medicine, 95(8), 1126-1127.
Submitted: 4 May 2020
Accepted: 3 August 2020
Published online: 5 January, TAPS 2021, 6(1), 3-29
https://doi.org/10.29060/TAPS.2021-6-1/RA2351
Elisha Wan Ying Chia1,2, Huixin Huang1,2, Sherill Goh1,2, Marlyn Tracy Peries1,2, Charlotte Cheuk Yiu Lee2,3, Lorraine Hui En Tan1,2, Michelle Shi Qing Khoo1,2, Kuang Teck Tay1,2, Yun Ting Ong1,2, Wei Qiang Lim1,2, Xiu Hui Tan1,2, Yao Hao Teo1,2, Cheryl Shumin Kow1,2, Annelissa Mien Chew Chin4, Min Chiam5, Jamie Xuelian Zhou2,6,7 & Lalit Kumar Radha Krishna1,2,5,7-10
1Yong Loo Lin School of Medicine, National University of Singapore, Singapore; 2Division of Supportive and Palliative Care, National Cancer Centre Singapore, Singapore; 3Alice Lee Centre for Nursing Studies, National University of Singapore, Singapore; 4Medical Library, National University of Singapore Libraries, National University of Singapore, Singapore; 5Division of Cancer Education, National Cancer Centre Singapore, Singapore; 6Lien Centre of Palliative Care, Duke-NUS Graduate Medical School, Singapore; 7Duke-NUS Graduate Medical School, Singapore; 8Centre for Biomedical Ethics, National University of Singapore, Singapore; 9Palliative Care Institute Liverpool, Academic Palliative & End of Life Care Centre, University of Liverpool; 10PalC, The Palliative Care Centre for Excellence in Research and Education, Singapore
Abstract
Introduction: Whilst the importance of effective communications in facilitating good clinical decision-making and ensuring effective patient- and family-centred outcomes in Intensive Care Units (ICUs) has been underscored amidst the global COVID-19 pandemic, training and assessment of communication skills for healthcare professionals (HCPs) in ICUs remain unstructured.
Methods: To enhance transparency and reproducibility, a Systematic Scoping Review (SSR) guided by Krishna's Systematic Evidenced Based Approach (SEBA) is employed to scrutinise what is known about teaching and evaluating communication training programmes for HCPs in the ICU setting. SEBA involves a structured search strategy across eight bibliographic databases; the use of one team of researchers to tabulate and summarise the included articles, with two other teams carrying out content and thematic analysis of them; and the comparison of these independent findings and construction of a framework for the discussion, overseen by an independent expert team.
Results: 9532 abstracts were identified, 239 articles were reviewed, and 63 articles were included and analysed. Four overlapping themes and categories were identified: strategies employed to teach communication, factors affecting communication training, strategies employed to evaluate communication, and outcomes of communication training.
Conclusion: This SEBA guided SSR suggests that ICU communications training must involve a structured, multimodal approach to training. This must be accompanied by robust methods of assessment and personalised timely feedback and support for the trainees. Such an approach will equip HCPs with greater confidence and prepare them for a variety of settings, including that of the evolving COVID-19 pandemic.
Keywords: Communication, Intensive Care Unit, Assessment, Skills Training, Evaluation, COVID-19, Medical Education
Practice Highlights
- The global COVID-19 pandemic has underscored the importance of effective communications in the Intensive Care Unit (ICU).
- ICU communications training should adopt a longitudinal, structured and multimodal approach.
- Robust stepwise evaluation of learner outcomes via Kirkpatrick’s Hierarchy is needed.
- A supportive host organisation and a conducive learning environment are key to successful curricula.
I. INTRODUCTION
The COVID-19 pandemic has placed immense strain on intensive care units (ICUs), with healthcare teams and resources stretched to meet the sudden increase in healthcare demands of critically ill patients. To further complicate the situation, ICU teams are called upon not only to communicate closely with colleagues in a bid to support them, but also to counsel families confronting acute distress and uneasy waits, separated from their loved ones by visiting restrictions imposed to limit the spread of the virus (Ministry of Health, 2020; World Health Organization, 2020). From breaking bad news (Blackhall, Erickson, Brashers, Owen, & Thomas, 2014; J. Yuen & Carrington Reid, 2011), to conveying the need for sedation and intubation (Carrillo Izquierdo, Diaz Agea, Jimenez Rodriguez, Leal Costa, & Sanchez Exposito, 2018) and providing progress reports on critically ill patients (Curtis et al., 2005; Curtis, White, Curtis, & White, 2008; Yang et al., 2020), communication skills amongst ICU healthcare professionals (HCPs) are pivotal in reassuring anxious, emotional and stressed patients and families (Ahrens, Yancey, & Kollef, 2003; Foa et al., 2016; Kirchhoff et al., 2002). Good communication in the ICU has also been shown to improve patient-physician relationships (K. G. Anderson & Milic, 2017), patient and family-centred outcomes, quality of care, and patient and family satisfaction (Bloomer, Endacott, Ranse, & Coombs, 2017; Cao et al., 2018; Currey, Oldland, Considine, Glanville, & Story, 2015). Effective communication between HCPs in the ICU also enhances clinical decision-making (Kleinpell, 2014), reduces medication and treatment errors (Clark, Squire, Heyme, Mickle, & Petrie, 2009; Happ et al., 2014; Sandahl et al., 2013), decreases physician burnout (Rachwal et al., 2018), and improves staff retention and satisfaction (Hope et al., 2015).
With evidence suggesting that poor communication skills (Downar, Knickle, Granton, & Hawryluck, 2012; Foa et al., 2016) and training (Smith, O’Sullivan, Lo, & Chen, 2013) are likely to increase patients’ (Dithole, Sibanda, Moleki, & Thupayagale-Tshweneagae, 2016) and families’ (Curtis et al., 2008) stress, adversely affect care and recovery (Dithole et al., 2016), and increase healthcare costs (Kalocsai et al., 2018), some authors have suggested that effective communication skills are at least as important (Adams, Mannix, & Harrington, 2017; Cicekci et al., 2017; Van Mol, Boeter, Verharen, & Nijkamp, 2014) to good patient care as clinical acumen (Curtis et al., 2001a). Yet despite evidence of the importance of communication skills in ICU, communication skills training remains inconsistent, variable and not evidence-based in most ICU settings (Adams et al., 2017; Berlacher, Arnold, Reitschuler-Cross, Teuteberg, & Teuteberg, 2017; Bloomer et al., 2017; D. A. Boyle et al., 2017; Miller et al., 2018; Sanchez Exposito et al., 2018).
With this in mind, a systematic scoping review (SSR) is proposed to map current approaches to communication skills training in ICUs (Munn et al., 2018) and potentially guide the design of a communications training programme. An SSR allows for the systematic extraction and synthesis of actionable and applicable information whilst summarising available literature across a wide range of pedagogies and practice settings, in order to understand what is known about teaching and evaluating communication training programmes for HCPs in the ICU setting (Munn et al., 2018).
II. METHODS
To overcome concerns about the transparency and reproducibility of SSRs, a novel approach called Krishna’s Systematic Evidenced Based Approach (henceforth SEBA) is proposed (Kow et al., 2020; Krishna et al., 2020; Ngiam et al., 2020). This SEBA-guided SSR (henceforth SSR in SEBA) adopts a constructivist perspective to map this complex topic from multiple angles (Popay et al., 2006), whilst a relativist lens helps account for variability in communication skills training (Crotty, 1998; Ford, Downey, Engelberg, Back, & Curtis, 2012; Pring, 2000; Schick-Makaroff, MacDonald, Plummer, Burgess, & Neander, 2016).
To provide a balanced review, the research team was supported by the medical librarians from the National University of Singapore’s (NUS) Yong Loo Lin School of Medicine (YLLSoM), the National Cancer Centre Singapore (NCCS) and local educational experts and clinicians at the NCCS, the Palliative Care Institute Liverpool, YLLSoM and Duke-NUS Medical School (henceforth the expert team). The research and expert teams adopted an interpretivist approach as they proceeded through the five stages of SEBA (Figure 1).

Figure 1. The SEBA Process
A. Stage 1: Systematic Approach
1) Determining the title and research question: The research and expert teams agreed upon the goals, population, context and concept to be evaluated in this SSR. The two teams then agreed that the primary research question should be “What is known about teaching and evaluating communication training programs for HCPs in the ICU setting?” The secondary research questions were “How are communication skills taught and assessed in the ICU setting?” and “How effective have such interventions been as described in the published literature?”
2) Inclusion criteria: A Population, Intervention, Comparison, Outcome, Study Design (PICOS) format was adopted to guide the research process (Peters, Godfrey, Khalil, et al., 2015a; Peters, Godfrey, McInerney, et al., 2015b) (Table 1).
Population
  Inclusion criteria:
  - Undergraduate and postgraduate healthcare providers (e.g. doctors, medical students, nurses, social workers) within the ICU setting
  - ICU settings including medical, surgical, cardiology and neurology ICUs
  - Communication between healthcare providers and patients in the ICU, or between healthcare providers in the ICU and patients’ families
  - Communication between or within healthcare providers’ teams in the ICU
  Exclusion criteria:
  - Articles focusing solely on the neonatal/paediatric ICU setting
  - Articles focusing solely on speech therapy, physical therapy or occupational therapy
  - Non-ICU settings (e.g. general wards, emergency department)
  - Non-medical professions (e.g. science, veterinary, dentistry)
  - Communication carried out over technological platforms

Intervention
  Inclusion criteria:
  - Need for/importance of interventions to teach communication in the ICU setting
  - Facilitators and barriers to teaching communication in the ICU setting
  - Recommendations, interventions, methods (e.g. tools, simulations, videos), curriculum content and assessments used for teaching communication in the ICU setting

Comparison
  Inclusion criteria:
  - Comparisons of various interventions, methods, curricula and evaluation methods used to teach or assess communication in the ICU setting and their impact upon patients, healthcare providers, healthcare, and society

Outcome
  Inclusion criteria:
  - Impact of interventions on patients, healthcare providers, healthcare, and society
  - Evaluation methods to assess interventions, methods, or curricula used to teach communication

Study design
  Inclusion criteria:
  - Articles in English or translated to English
  - All study designs, including mixed methods research, meta-analyses, systematic reviews, randomised controlled trials, cohort studies, case-control studies, cross-sectional studies, and descriptive papers, as well as case reports and series, ideas, editorials, and perspectives
  - Publication dates: 1st January 2000 – 31st December 2019
  - Databases: PubMed, ERIC, JSTOR, Embase, CINAHL, Scopus, PsycINFO, Google Scholar

Table 1. PICOS
Nine members of the research team carried out independent searches for articles published between 1st January 2000 and 31st December 2019 in eight bibliographic databases (PubMed, ERIC, JSTOR, Embase, CINAHL, Scopus, PsycINFO and Google Scholar). The searches were carried out between 27th January 2020 and 14th February 2020. The PubMed search strategy can be found in Supplementary Material A. An independent hand search was also done to identify key articles.
3) Extracting and charting: Nine members of the research team independently reviewed the titles and abstracts identified and created individual lists of titles to be included, which were discussed online. Consensus on the final list of articles to be included was achieved using Sambunjak, Straus, and Marusic (2010)’s “negotiated consensual validation” approach, through collaborative discussion and negotiation of points of disagreement at online meetings.
B. Stage 2: Split Approach
Working in three independent groups, the reviewers analysed the included articles using the ‘split approach’ (Ng et al., 2020). In one group, four researchers independently reviewed and summarised all the included articles in keeping with the recommendations set out in Wong, Greenhalgh, Westhorp, Buckingham, and Pawson (2013)’s “RAMESES publication standards: meta-narrative reviews” and Popay et al. (2006)’s “Guidance on the conduct of narrative synthesis in systematic reviews”. The four research team members then discussed their individual findings at online meetings and employed ‘negotiated consensual validation’ to achieve consensus on the tabulated summaries (Sambunjak et al., 2010). The tabulated summaries served to highlight key points from the included articles.
The four members of the research team also employed the Medical Education Research Study Quality Instrument (MERSQI) (Reed et al., 2008) and the Consolidated Criteria for Reporting Qualitative Studies (COREQ) (Tong, Sainsbury, & Craig, 2007) to evaluate the quality of the quantitative and qualitative studies included in this review.
Concurrently, the second group of five researchers analysed all the included articles using Braun and Clarke (2006)’s approach to thematic analysis, then discussed their individual findings at online meetings and employed ‘negotiated consensual validation’ to achieve consensus on the final themes (Sambunjak et al., 2010). The third group of four researchers employed Hsieh and Shannon (2005)’s approach to directed content analysis to independently analyse all the included articles, discussed their independent findings online, and employed ‘negotiated consensual validation’ to achieve consensus on the final themes (Sambunjak et al., 2010). This split approach, consisting of the tabulated summaries and concurrent thematic and content analyses, enhances the reliability of the analyses, and the tabulated summaries help ensure that important themes are not lost.
1) Thematic analysis: Phase 1 of Braun and Clarke (2006)’s approach saw the team ‘actively’ reading the included articles to find meaning and patterns in the data. In phase 2, ‘codes’ were constructed from the ‘surface’ meaning (Braun & Clarke, 2006; Sawatsky, Parekh, Muula, Mbata, & Bui, 2016; Voloch, Judd, & Sakamoto, 2007) and collated into a code book to code and analyse the rest of the articles using an iterative step-by-step process. As new codes emerged, these were associated with previous codes and concepts (Price & Schofield, 2015). In phase 3, the categories were organised into themes that best depict the data. In phase 4, the themes were refined to best represent the whole data set and discussed. In phase 5, the research team discussed the results of their independent analysis online and at reviewer meetings. “Negotiated consensual validation” was used to determine a final list of themes (Sambunjak et al., 2010).
2) Directed content analysis: Hsieh and Shannon (2005)’s approach to directed content analysis was employed in three stages.
Using deductive category application (Elo & Kyngäs, 2008; Wagner-Menghin, de Bruin, & van Merriënboer, 2016), the first stage (Mayring, 2004; Wagner-Menghin et al., 2016) saw codes drawn from the article “Enhancing collaborative communication of nurse and physician leadership in two intensive care units” (D. K. Boyle & Kochinda, 2004). Drawing upon Mayring (2004)’s account, each code was defined in a code book that contained “explicit examples, definitions and rules” drawn from the data. The code book served to guide the subsequent coding process.
Stage 2 saw the four reviewers using the ‘code book’ to independently extract and code the relevant data from the included articles. Any relevant data not captured by these codes were assigned a new code that was also described in the code book. In keeping with deductive category application (Wagner-Menghin et al., 2016), coding categories and their definitions were revised. The final codes were compared and discussed with the final author to enhance the reliability of the process (Wagner-Menghin et al., 2016). The final author checked the primary data sources to ensure that the codes made sense and were consistently employed. The reviewers and the final author used “negotiated consensual validation” to resolve any differences in the coding (Sambunjak et al., 2010). The final categories were selected (Neal, Neal, Lawlor, Mills, & McAlindon, 2018) based on whether they appeared in more than 70% of the articles reviewed (Curtis et al., 2001b; Humble, 2009).
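The category-selection step above is, in effect, a simple frequency threshold over the coded articles: a category is retained only if it appears in more than 70% of the articles reviewed. As a minimal sketch (the function name and toy article data below are our own illustrative assumptions, not the review's actual codes or data):

```python
# Sketch of the >70% category-selection rule described above.
# Each article is represented as the set of codes assigned to it;
# a category survives only if it appears in more than `threshold`
# of all articles. All names and data here are illustrative.

def select_categories(article_codes, threshold=0.70):
    """article_codes: list of sets, one set of assigned codes per article."""
    n_articles = len(article_codes)
    counts = {}
    for codes in article_codes:
        for code in codes:
            counts[code] = counts.get(code, 0) + 1
    return {c for c, n in counts.items() if n / n_articles > threshold}

# Toy example with four "articles":
coded = [
    {"teaching_strategies", "evaluation"},
    {"teaching_strategies", "factors"},
    {"teaching_strategies", "evaluation", "outcomes"},
    {"teaching_strategies", "evaluation"},
]
print(sorted(select_categories(coded)))
# → ['evaluation', 'teaching_strategies']  (4/4 = 100% and 3/4 = 75%, both > 70%)
```

Codes appearing in only one of the four toy articles (25%) fall below the threshold and are dropped, mirroring how infrequent codes were excluded from the final categories.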
The narrative produced was guided by the Best Evidence Medical Education (BEME) Collaboration guide (Haig & Dozier, 2003) and the STORIES (Structured approach to the Reporting In healthcare education of Evidence Synthesis) statement (Gordon & Gibbs, 2014).
III. RESULTS
9532 abstracts were identified from ten databases, 239 articles reviewed, and 63 articles were included as shown in Figure 2 (Moher, Liberati, Tetzlaff, & Altman, 2009).

Figure 2. PRISMA Flowchart
3) Comparisons between summaries of the included articles, thematic analysis and directed content analysis: In keeping with the SEBA approach, the findings of each arm of the split approach were discussed amongst the research and expert teams. The themes identified using Braun and Clarke (2006)’s approach to thematic analysis were how to teach and evaluate communication training in the ICU and the factors affecting training.
The categories identified using Hsieh and Shannon (2005)’s approach to directed content analysis were 1) strategies employed to teach communication, 2) factors affecting communication training, 3) strategies employed to evaluate communication, and 4) outcomes of communication training. These categories reflected the major issues identified in the tabulated summaries.
These findings were reviewed with the expert team, who agreed that, as the themes identified could be encapsulated by the categories identified, the categories and themes would be presented together.
a) Strategies employed to teach communication in ICU: 61 articles described various interventions used to teach communication in the ICU. 19 involved ICU physicians, 18 involved ICU nurses, 4 saw participation of ICU physicians and nurses, 13 included the multidisciplinary team in the ICU, 1 was aimed at medical interns, 2 at medical students, 2 at nursing students, and 2 at both medical and nursing students. Given the overlap between teaching strategies, topics taught, and assessment methods employed in ICU communication training for nurses, doctors, nursing and medical students and HCPs in the literature, we discuss and generalise the results across HCPs.
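As a quick sanity check, the participant-group breakdown reported above does account for all 61 intervention articles. The sketch below simply tallies the reported counts (the group labels are ours, paraphrasing the text):

```python
# Tally of the 61 intervention articles by participant group, as reported above.
# Group labels paraphrase the text; counts are those stated in the review.
groups = {
    "ICU physicians": 19,
    "ICU nurses": 18,
    "ICU physicians and nurses": 4,
    "multidisciplinary ICU team": 13,
    "medical interns": 1,
    "medical students": 2,
    "nursing students": 2,
    "medical and nursing students": 2,
}
total = sum(groups.values())
print(total)  # → 61
```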
In curriculum design, seven studies (D. K. Boyle & Kochinda, 2004; Hope et al., 2015; Krimshtein et al., 2011; Lorin, Rho, Wisnivesky, & Nierman, 2006; McCallister, Gustin, Wells-Di Gregorio, Way, & Mastronarde, 2015; Miller et al., 2018; Sullivan, Rock, Gadmer, Norwich, & Schwartzstein, 2016) designed a curriculum based on extensive reviews of literature on teaching communication. Brunette and Thibodeau-Jarry (2017) used Kern’s 6-step approach to curriculum development to design a structured curriculum targeted at meeting the needs identified whilst Sullivan et al. (2016) and Lorin et al. (2006) used the authors’ own experiences in tandem with existing literature to guide curriculum design. W. G. Anderson et al. (2017) designed a communication training workshop based on behaviour theories whilst McCallister et al. (2015) based their curriculum on principles of shared decision-making and patient-centred communication. Northam, Hercelinskyj, Grealish, and Mak (2015) conducted a pilot study before implementing their intervention.
Topics included in the curriculum were categorised into "core" topics, essential to the curriculum, and "advanced" topics, which may be useful to incorporate. Core topics were those most frequently cited in the literature or crucial across a variety of interactions in the ICU setting, such as history taking and relationship skills, as well as common ICU scenarios such as breaking bad news and communicating difficult decisions. Advanced topics, though important, were mentioned less frequently and appeared to be more site-specific, such as sociocultural and ethical issues. These topics are outlined in Table 2 (full table with references found in Supplementary Material B). The methods employed are outlined in Table 3 (full table with references found in Supplementary Material C).
Core curriculum content
- Communication skills: with families (n=25), with patients (n=5), with HCPs (n=12), and general principles
- Breaking bad news
- Understanding/defining goals of care, building therapeutic relationships with families, setting goals and expectations, shared decision making
- Eliciting understanding and providing information about a patient's clinical status
- Relationship skills: recognising and dealing with strong emotions, and empathy; relationship skills include the "key principles" of esteem, empathy, involvement, sharing, and support
- Problem solving, conflict management, and facing challenges
- Frameworks for good communication:
  - Ask-Tell-Ask
  - "Tell Me More"
  - "SBAR" (Situation, Background, Assessment, Recommendation): to share information obtained in discussions with patients or family members with other HCPs
  - "3Ws": What I see, What I'm concerned about, and What I want
  - Four-Step Assertive Communication Tool: get attention, state the concern (e.g. "I'm concerned about…" or "I'm uncomfortable with…"), offer a solution, and get resolution by ending with a question (e.g. "Do you agree?")
  - "4 C's" palliative communication model: (a) Convening: ensuring necessary communication occurs between the patient, family, and interprofessional team; (b) Checking: for understanding; (c) Caring: conveying empathy and responding to emotion; (d) Continuing: following up with patients and families after discussions to provide support and clarify information
  - "Communication Strategy of the Week" using teaching posters
  - PACIENTE interview (introduce yourself, listen carefully, tell the diagnosis, advise on treatment, explain the prognosis, use introductory phrases for bad news, take time to comfort empathically, explain a plan of action involving the family)
  - Stages of communication (open, clarify, develop, agree, close)
  - Processes of communication (procedural suggestions, checking for understanding)
  - Explaining illness in clear, simple terms
  - Using a reference manual and pocket reference cards
  - How HCPs should introduce themselves to patients, family members, and other HCPs
- ICU decision making: survival after CPR, DNR discussions, prognostication, legal and ethical issues surrounding life-sustaining treatment decisions, and withdrawing therapies

Advanced topics
- Ethics (e.g. offering organ donation)
- Cultural, spirituality, and religious issues
- Leadership
- Roles and responsibilities in communication with patients and families
- Discussing patient safety incidents
- Integration of five common behaviour theories: health belief model, theory of planned behaviour, social cognitive theory, an ecological perspective, and transtheoretical model
- Law

Table 2. Topics taught
| Methods employed | Number of studies |
| Didactic teaching, which may be employed in conjunction with other methods in a structured programme | 20 |
| Simulated scenarios with family members/standardised patients | 17 |
| Role-play | 12 |
| Simulation technology, such as mannequins | 6 |
| Group discussions, group reflections, and team-based learning | 7 |
| Case presentations, case discussions, and patient care conferences | 4 |
| Online videos | 3 |
| Online PowerPoint slides | 3 |
| Not specified | 9 |

Table 3. Pedagogy
b) Factors affecting communication training: Identifying facilitators and barriers is critical to the success of communication programmes. The facilitators and barriers to training may be found in Table 4 (full table with references may be found in Supplementary Material D).
| Facilitators | Barriers |
| Longitudinal, structured process with horizontal and vertical integration | Lack of time |
| Safe learning environment | Resource constraints |
| Clear programme objectives and programme content | Poor design and a lack of longitudinal support |
| Funding for training | Insecurity and awkwardness during simulations |
| Simulated patients | Disrupted training |
| Protected time for training | Programmes that were not pitched at the right level |
| Faculty experts helping to plan and review curricula and implement interventions | Training that is not learner-centred |
| Stakeholders' engagement to facilitate interprofessional collaboration, as well as debriefing and programme feedback | Training that lacked feedback or debrief sessions |
| Reflective practice | Lack of a longitudinal aspect to training |
| Timely and appropriate feedback | Lack of a supportive environment in which HCPs can apply the skills learnt |
| Multidisciplinary learning | Discordance between physicians' and nurses' communication with families |
| Role modelling | |
| Peer support | |

Table 4. Facilitators and barriers to training
c) Strategies employed to evaluate communication training: Thirty-nine articles discussed methods of evaluating communication training. The assessment methods used are described in Table 5 (full table with references may be found in Supplementary Material E).
Self-assessment
1. Quantitative and qualitative surveys were administered to learners to assess their knowledge, experience in the programme, and perceived preparedness, comfort, and confidence in communicating.
1.1 Some programmes used only post-intervention assessments.
1.2 Others used a combination of pre- and post-intervention assessments of learners.
1.3 Some programmes adapted existing tools to conduct post-intervention surveys to evaluate learners' experiences and the skills learnt.

Feedback from others
2. Feedback from patients, family members, peers, and simulated patients was obtained through a combination of surveys and interviews that assessed their level of satisfaction with learners' communication skills.

Observation
3. Direct observation of HCPs' communication skills to ascertain the frequency, quality, success, and ease of communication post-intervention, carried out using modified communication tools and feedback forms.

Debriefing sessions
4. One study used debriefing sessions to understand the shared experiences of learners.

Table 5. Assessment Methods
d) Outcomes of communication training: The outcomes of communication training may be mapped to the five levels of the Adapted Kirkpatrick's Hierarchy (Jamieson, Palermo, Hay, & Gibson, 2019; Littlewood et al., 2005; Roland, 2015), which also allowed the outcome measures used to be identified. The majority of the programmes achieved Level 2a and Level 2b outcomes, as shown in Table 6 (full table with references may be found in Supplementary Material F). Forty articles described successes and three described variable outcomes of teaching communication.
| Adapted Kirkpatrick's Hierarchy | Items evaluated |
| Level 1 (participation) | Experience in the programme; assessment of programme's effectiveness; trainee satisfaction; programme completion |
| Level 2a (attitudes and perception) | Attitudes towards/experience with communication; self-rated confidence/preparedness in communication; colleagues' satisfaction with communication; trainees' views on the training programme (e.g. satisfaction, perceived effectiveness); self-perceived job stress/job satisfaction |
| Level 2b (knowledge and skills) | Self-rated skill level using Likert scales; forms asking trainees to list/indicate skills learnt during the programme; self-rated knowledge level using Likert scales; self-evaluation of communication skills using validated tools; evaluation of trainees' knowledge by faculty/experts; evaluation of trainees' communication skills by faculty/experts |
| Level 3 (behavioural change) | Feedback from peers and facilitators on interactions with actors; records of ICU rounds; notes from colleagues documenting a supportive environment and involvement in communication; frequency of usage of communication skills taught; workplace observations; evaluation of trainees' communication skills in the clinical setting by patients and colleagues |
| Level 4a (increased interprofessional collaboration) | Workplace observations |
| Level 4b (patient benefits) | Self-perceived quality of care; patient and family satisfaction with communication; family satisfaction with communication |

Table 6. Outcome Measures mapped onto Adapted Kirkpatrick's Hierarchy
Three studies compared outcomes with non-intervention arms and reported improved patient satisfaction and self-rated and third party reported improvements in communication (Awdish et al., 2017; Happ et al., 2014; McCallister et al., 2015).
C. Stage 3: Jigsaw Perspective
The jigsaw perspective builds upon Moss and Haertel's (2016) concept of methodological pluralism and treats data from different methodological approaches as pieces of a jigsaw, each providing a partial picture of the area of interest. The jigsaw perspective brings these complementary pieces of the training process together to paint a cohesive picture of ICU communication training. As a result, related aspects of the training structure and the working culture were studied together to better understand the influence each has on the other.
D. Stage 4: An Iterative Process
Whilst there was consensus on the themes/categories identified, the expert team and stakeholders raised concerns that data from grey literature, which is neither quality assessed nor necessarily evidence based, could bias the discussion. To address this concern, the research team separately thematically analysed the data from grey literature and non-research-based pieces drawn from the bibliographic databases, such as letters, opinion and perspective pieces, commentaries, and editorials, and compared these themes against themes drawn from peer-reviewed, evidence-based data. This analysis revealed the same themes, with one additional tool (the PACIENTE tool) identified in the grey literature to enhance communication with patients' families (Pabon et al., 2014).
IV. DISCUSSION
E. Stage 5: Synthesis of Systematic Scoping Review in SEBA
This SSR in SEBA reaffirms the importance of communications training in ICU and suggests that a combination of training techniques is required (Akgun & Siegel, 2012; Chiarchiaro et al., 2015; Happ et al., 2010; Happ et al., 2015; Hope et al., 2015; Lorin et al., 2006; Miller et al., 2018; Roze des Ordons, Doig, Couillard, & Lord, 2017; Sandahl, et al., 2013; D. J. Shaw, Davidson, Smilde, Sondoozi, & Agan, 2014).
A framework for the design of a competency-based approach to ICU communications training (W. G. Anderson et al., 2017; Berkenstadt et al., 2013; D. Boyle et al., 2016; Brown, Durve, Singh, Park, & Clark, 2017; Chiarchiaro et al., 2015; Fins & Solomon, 2001; Happ et al., 2010; Hope et al., 2015; Karlsen, Gabrielsen, Falch, & Stubberud, 2017; Pabon et al., 2014; Roze des Ordons et al., 2017; Tamerius, 2013; J. Yuen & Carrington Reid, 2011) may be found in Figure 3 below.

Figure 3. Framework for Competency-based Approach to ICU Communication Skills Training
These findings resonate with Kirkpatrick's Hierarchy (Jamieson et al., 2019; Littlewood et al., 2005; Roland, 2015), where each level builds upon the previous one as the learner moves from "peripheral participation" to active "doing and internalising" in real clinical practice.
Such a competency-based programme necessitates a structured approach to holistic and longitudinal assessments of the learner’s progress. Such a structured approach must be horizontally and vertically integrated into other forms of clinical training as cogent communication is a fundamental skillset across all practice and specialties (Akgun & Siegel, 2012; Roze des Ordons et al., 2017).
Whilst Kirkpatrick's Hierarchy offers a viable framework for assessing trainees' progress (Boothby, Gropelli, & Succheralli, 2018; Roze des Ordons et al., 2017), ICU training programmes may also keep in mind the various outcome measures listed in Table 6 when designing assessment tools. These tools should conscientiously account for the perspectives offered by trainers, standardised patients, and family members involved in the evaluation process, and should consider the benefits and repercussions of trainees' communication abilities for patients, families, and the ICU multidisciplinary team (Aslakson, Randall Curtis, & Nelson, 2014; Awdish et al., 2017; Blackhall et al., 2014; D. A. Boyle et al., 2017; DeMartino, Kelm, Srivali, & Ramar, 2016; Happ et al., 2014; Happ et al., 2015; Hope et al., 2015; Miller et al., 2018; Sanchez Exposito et al., 2018; Sullivan et al., 2016; Turkelson, Aebersold, Redman, & Tschannen, 2017).
With flexibility within training programmes highlighted as essential (Ernecoff et al., 2016), this flexibility should also extend to remediation and the provision of additional support in areas jointly identified and agreed upon by trainees and trainers as paramount for targeted improvement. It is worrying that no studies have focused on the effects of remediation on ICU communication skills training thus far; given its importance, this should be a critical area for future research (Steinert, 2013).
Likewise, it is pivotal that trainers should undergo rigorous training (Berlacher et al., 2017; Roze des Ordons et al., 2017) and are granted protected time for this undertaking (Boothby et al., 2018; Happ et al., 2010; Roze des Ordons et al., 2017). In order to ensure that quality and up-to-date skills and knowledge are transferred down the line, it is posited that trainers should also be holistically and longitudinally assessed alongside their charges (Roze des Ordons et al., 2017). Whilst trainers should ideally nurture a safe, collaborative, learning environment for all (Hales & Hawryluck, 2008; Milic et al., 2015; Roze des Ordons et al., 2017; Sandahl, et al., 2013), it is clear that this can only be achieved through sustained administrative and financial support, according learners and trainers sufficient time and resources to foster cordial relationships open to mutual and honest feedback (Akgun & Siegel, 2012; Miller et al., 2018).
V. LIMITATIONS
The SSR in SEBA approach is robust, reproducible, and transparent, addressing many of the concerns about inconsistencies in SSR methodology and structure arising from diverse epistemological lenses and a lack of cogency in weaving together context-sensitive medical education programmes. Through an iterative, step-by-step process, the hallmark 'Split Approach', which saw concurrent and independent analyses and tabulated summaries by separate teams of researchers, allowed for a holistic picture of prevailing ICU communication training programmes without loss of any conflicting data. Consultations with experts at every step also significantly curtailed researcher bias and enhanced the accountability and coherency of the data.
Yet it must be acknowledged that this SSR focused on articles published in English or with English translations. Hence, much of the data comes from North American and European countries, potentially skewing perspectives and raising questions about the applicability of these findings in other cultural settings. Moreover, whilst the databases used were selected by the expert team and the team utilised independent selection processes, critical papers may still have been unintentionally omitted. Whilst the use of thematic analysis to review the impact of the grey literature greatly improves the transparency of the review, the inclusion of grey literature-based themes may nonetheless bias results and lend these opinion-based views a 'veneer of respectability' despite a lack of evidence to support them.
VI. CONCLUSION
In the absence of a standardised, evidence-based communication training programme for HCPs in ICUs, many HCPs are left to hope that clinical experience alone will be sufficient to ensure their proficiency in communication. This SSR provides guidance on how to effectively develop and structure a communication training programme for HCPs in ICUs and suggests that such training must involve a structured, multimodal approach carried out in a supportive learning environment. This must be accompanied by robust methods of assessment and by personalised, timely feedback and support for trainees. Such an approach will equip HCPs with greater confidence and preparedness in a variety of situations, including that of the evolving COVID-19 pandemic.
To effectively institute change in communication training within ICUs, further studies should look into the desired characteristics of trainers and trainees, the context and settings as well as the case scenarios used. The design of an effective tool to evaluate learners’ communication skills longitudinally, holistically, and in different settings should be amongst the primary concerns for future research.
Notes on Contributors
Dr EWYC recently graduated from Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.
Ms HH is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.
Ms SG is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.
Ms MTP is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.
Ms CCYL is a nursing student at Alice Lee Centre for Nursing Studies, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.
Ms LHET is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.
Dr MSQK recently graduated from Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.
Dr KTT recently graduated from Yong Loo Lin School of Medicine, National University of Singapore. He was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.
Ms YTO is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.
Mr WQL is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. He was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.
Ms XHT is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.
Mr YHT is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. He was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.
Ms CSK is a medical student at Yong Loo Lin School of Medicine, National University of Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.
Ms AMCC is a senior librarian from Medical Library, National University of Singapore Libraries, National University of Singapore, Singapore. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.
Ms MC is a researcher at the Division of Cancer Education, NCCS. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.
Dr JXZ is a Consultant at the Division of Supportive and Palliative Care, NCCS. She was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.
Professor LKRK is a Senior Consultant at the Division of Supportive and Palliative Care, NCCS. He was involved in research design and planning, data collection and processing, data analysis, results synthesis, manuscript writing and review and administrative work for journal submission.
Ethical Approval
This is a systematic scoping review study which does not require ethical approval.
Acknowledgement
This work was carried out as part of the Palliative Medicine Initiative run by the Department of Supportive and Palliative Care at the National Cancer Centre Singapore. The authors would like to dedicate this paper to the late Dr S Radha Krishna whose advice and ideas were integral to the success of this study.
Funding
There is no funding for the paper.
Declaration of Interest
The authors declare that they have no competing interests.
References
Adams, A. M. N., Mannix, T., & Harrington, A. (2017). Nurses’ communication with families in the intensive care unit – A literature review. Nursing in Critical Care, 22(2), 70-80. https://doi.org/10.1111/nicc.12141
Ahrens, T., Yancey, V., & Kollef, M. (2003). Improving family communications at the end of life: implications for length of stay in the intensive care unit and resource use. American Journal of Critical Care, 12(4), 317-323. https://doi.org/10.4037/ajcc2003.12.4.317
Akgun, K. M., & Siegel, M. D. (2012). Using standardized family members to teach end-of-life skills to critical care trainees. Critical Care Medicine, 40(6), 1978-1980. https://doi.org/10.1097/CCM.0b013e3182536cd1
Anderson, K. G., & Milic, M. (2017). Doctor know thyself: Improving patient communication through modeling and self-analysis. Journal of General Internal Medicine, 32(2), S670-S671.
Anderson, W. G., Puntillo, K., Cimino, J., Noort, J., Pearson, D., Boyle, D., . . . Pantilat, S. Z. (2017). Palliative care professional development for critical care nurses: A multicenter program. American Journal of Critical Care, 26(5), 361-371. https://doi.org/10.4037/ajcc2017336
Anstey, M. (2013). Communication training in the ICU: Room for improvement? Critical Care Medicine, 41(12), A179. https://doi.org/10.1097/01.ccm.0000439963.49763.f2
Aslakson, R. A., Curtis, J. R., & Nelson, J. E. (2014). The changing role of palliative care in the ICU. Critical Care Medicine, 42(11), 2418-2428. https://doi.org/10.1097/CCM.0000000000000573
Awdish, R. L., Buick, D., Kokas, M., Berlin, H., Jackman, C., Williamson, C., . . . Chasteen, K. (2017). A communications bundle to improve satisfaction for critically ill patients and their families: A prospective, cohort pilot study. Journal of Pain and Symptom Management, 53(3), 644-649. https://doi.org/10.1016/j.jpainsymman.2016.08.024
Barbour, S., Puntillo, K., Cimino, J., & Anderson, W. (2016). Integrating multidisciplinary palliative care into the ICU (impact-ICU) project: A multi-center nurse education quality improvement initiative. Journal of Pain and Symptom Management, 51(2), 355. https://doi.org/10.1016/j.jpainsymman.2015.12.203
Barth, M., Kaffine, K., Bannon, M., Connelly, E., Tescher, A., Boyle, C., … Ballinger, B. (2013). 827: Goals of care conversations: A collaborative approach to the process in a Surgical/Trauma ICU. Critical Care Medicine, 41(12), A206. https://doi.org/10.1097/01.ccm.0000440065.45453.3f
Berkenstadt, H., Perlson, D., Shalomson, O., Tuval, A., Haviv-Yadid, Y., & Ziv, A. (2013). Simulation-based intervention to improve anesthesiology residents communication with families of critically ill patients–preliminary prospective evaluation. Harefuah, 152(8), 453-456, 500, 499.
Berlacher, K., Arnold, R. M., Reitschuler-Cross, E., Teuteberg, J., & Teuteberg, W. (2017). The Impact of Communication Skills Training on Cardiology Fellows’ and Attending Physicians’ Perceived Comfort with Difficult Conversations. Journal of Palliative Medicine, 20(7), 767-769. https://doi.org/10.1089/jpm.2016.0509
Blackhall, L. J., Erickson, J., Brashers, V., Owen, J., & Thomas, S. (2014). Development and validation of a collaborative behaviors objective assessment tool for end-of-life communication. Journal of Palliative Medicine, 17(1), 68-74. https://doi.org/10.1089/jpm.2013.0262
Bloomer, M. J., Endacott, R., Ranse, K., & Coombs, M. A. (2017). Navigating communication with families during withdrawal of life-sustaining treatment in intensive care: a qualitative descriptive study in Australia and New Zealand. Journal of Clinical Nursing, 26(5-6), 690-697. https://doi.org/10.1111/jocn.13585
Boothby, J., Gropelli, T., & Succheralli, L. (2018). An Innovative Teaching Model Using Intraprofessional Simulations. Nursing Education Perspectives. https://doi.org/10.1097/01.Nep.0000000000000340
Boyle, D., Grywalski, M., Noort, J., Cain, J., Herman, H., & Anderson, W. (2016). Enhancing bedside nurses’ palliative communication skill competency: An exemplar from the University of California academic Hospitals’ qualiy improvement collaborative. Supportive Care in Cancer, 24(1), S25. https://doi.org/10.1007/s00520-016-3209-z
Boyle, D. A., & Anderson, W. G. (2015). Enhancing the communication skills of critical care nurses: Focus on prognosis and goals of care discussions. Journal of Clinical Outcomes Management, 22(12), 543-549.
Boyle, D. A., Barbour, S., Anderson, W., Noort, J., Grywalski, M., Myer, J., & Hermann, H. (2017). Palliative Care Communication in the ICU: Implications for an Oncology-Critical Care Nursing Partnership. Seminars in Oncology Nursing, 33(5), 544-554. https://doi.org/10.1016/j.soncn.2017.10.003
Boyle, D. K., & Kochinda, C. (2004). Enhancing collaborative communication of nurse and physician leadership in two intensive care units. Journal of Nursing Administration, 34(2), 60-70.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. https://doi.org/10.1191/1478088706qp063oa
Brown, S., Durve, M. V., Singh, N., Park, W. H. E., & Clark, B. (2017). Experience of ‘parallel communications’ training, a novel communication skills workshop, in 50 critical care nurses in a UK Hospital. Intensive Care Medicine Experimental, 5(2). https://doi.org/10.1186/s40635-017-0151-4
Brunette, V., & Thibodeau-Jarry, N. (2017). Simulation as a Tool to Ensure Competency and Quality of Care in the Cardiac Critical Care Unit. Canadian Journal of Cardiology, 33(1), 119-127. https://doi.org/10.1016/j.cjca.2016.10.015