A randomised control trial study on the efficacy of high-fidelity simulation in enhancing knowledge

Submitted: 16 May 2022
Accepted: 3 January 2023
Published online: 4 July, TAPS 2023, 8(3), 5-14
https://doi.org/10.29060/TAPS.2023-8-3/OA2813

Bikramjit Pal1, Aung Win Thein2, Sook Vui Chong3, Ava Gwak Mui Tay4, Htoo Htoo Kyaw Soe5 & Sudipta Pal6

1Department of Surgery, Manipal University College Malaysia, Melaka, Malaysia; 2Department of Surgery, Manipal University College Malaysia, Melaka, Malaysia; 3Department of Medicine, Manipal University College Malaysia, Melaka, Malaysia; 4Department of Surgery, Manipal University College Malaysia, Melaka, Malaysia; 5Department of Community Medicine, Manipal University College Malaysia, Melaka, Malaysia; 6Department of Community Medicine, Manipal University College Malaysia, Melaka, Malaysia

Abstract

Introduction: The practice of high-fidelity simulation-based medical education has become a popular small-group teaching modality across all spheres of clinical medicine. High-fidelity simulation (HFS) is now being increasingly used in the context of undergraduate medical education, but its superiority over traditional teaching methods is still not established. The main objective of this study was to analyse the effectiveness of HFS-based teaching over video-assisted lecture (VAL)-based teaching in the enhancement of knowledge for the management of tension pneumothorax among undergraduate medical students.

Methods: A cohort of 111 final-year undergraduate medical students was randomised for this study. The efficacy of HFS-based teaching (intervention group) and VAL-based teaching (control group) on the acquisition of knowledge was assessed by single-best-answer multiple-choice question (MCQ) tests in the first and eighth week of their surgery posting. The mean and standard deviation (SD) of the total MCQ scores were used as outcome measures. ANCOVA was used to determine the difference in post-test MCQ marks between groups. The intragroup comparison of pre-test and post-test MCQ scores was done using a paired t-test. The level of significance was set at 0.05.

Results: The mean post-test MCQ score was significantly higher than the mean pre-test MCQ score in both groups. The mean pre-test and post-test MCQ scores in the intervention group were slightly higher than those of the control group, but the differences were not statistically significant.

Conclusion: There was a statistically significant enhancement of knowledge in both groups, but the difference in knowledge enhancement between the groups was not statistically significant.

Keywords: High-Fidelity Simulation, Video-Assisted Lecture, Simulation-Based Medical Education (SBME), Randomized Controlled Trial (RCT), Medical Education, Pre-test and Post-test Knowledge Assessments

Practice Highlights

  • An RCT study to evaluate the effectiveness of HFS over video-assisted lecture teaching method.
  • HFS does not appear to be superior to VAL-based teaching for knowledge acquisition and retention.
  • HFS may be used judiciously when the objectives are mainly knowledge based.
  • Further research may determine curricular areas where HFS is superior and worth adopting.

I. INTRODUCTION

    High-Fidelity Simulation (HFS) is an innovative healthcare education methodology that uses sophisticated, life-like mannequins to create a realistic patient environment. HFS can be considered an innovative teaching method that helps students translate knowledge and psychomotor skills from the classroom to the actual clinical setting. Kolb’s Experiential Learning Cycle (Kolb, 1984) provides a basis for integrating the active learning of simulation with conventional teaching methods for a comprehensive learning experience in undergraduate medical education. HFS-based education is potentially an efficacious pedagogy that is now available for teaching. Its usefulness has been recognised by the Accreditation Council for Graduate Medical Education (Accreditation Council for Graduate Medical Education [ACGME], 2020). HFS has the added benefit of increasing students’ confidence and their ability to care for patients at the bedside (Kiernan, 2018). HFS-based education and video-assisted lecture-based teaching are both effective in achieving factual learning. Despite the increasing acceptance of HFS, few studies have compared its usefulness with that of conventional teaching methods for factual learning among undergraduate medical students. At present, research has not provided enough evidence to establish the superiority of HFS-based teaching over traditional educational methods in the acquisition and retention of knowledge, and reported outcomes regarding the effectiveness of HFS on student learning are inconsistent and variable (Yang & Liu, 2016). HFS-based education is both time-consuming and resource-intensive, and its long-term merits in retaining knowledge and translating it into enhanced patient care need further research. As educators, we need to systematically evaluate expensive newer teaching-learning modalities such as high-fidelity patient simulation (HFPS) for their effectiveness by using rigorous research methodology and protocols.
This is to ensure that we provide the best learning opportunities conceivable for our students. Previous studies were mostly done in North America; therefore, the generalisability of their results is guarded, and they might not be applicable in the context of Europe and Asia due to many differences in academic and curricular aspects (Davies, 2008). The purpose of this study was to establish the feasibility of using HFS to deliver critical care education to final-year medical students and to determine its efficacy in the enhancement of knowledge when compared to video-assisted lectures. The study compared the effectiveness of the two teaching pedagogies in enhancing knowledge acquisition using pre-test and post-test MCQs. This study was designed to provide insights that may be applied to the future development and improvement of HFS-based education among undergraduate medical students and to the possibility of integrating it into course curricula.

    II. METHODS

    A. Study Design

    Randomized Controlled Trial (RCT) with parallel groups and 1:1 allocation. Please see Appendix 1 for the Flow Chart.

    B. Sample Size

    G*Power software was used to calculate the sample size (Faul et al., 2007). Based on a preliminary RCT conducted at our institute with the same protocol in 2018, the sample size for this study was calculated as 114 with a power of 0.95.
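    For readers wishing to reproduce such a calculation without G*Power, the standard normal-approximation formula for a two-sample t-test can be sketched in Python. The effect size d = 0.68 below is an illustrative assumption; the paper reports only the resulting sample size (114) and the power (0.95), not the effect size used:

```python
# Normal-approximation sample size for a two-sample t-test:
#   n per group = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2
# NOTE: the effect size d = 0.68 is an assumed, illustrative value; the
# paper reports only the resulting total (114) and power (0.95).
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.95):
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = z.inv_cdf(power)           # value for the desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(2 * n_per_group(0.68))  # 114 under these assumed inputs
```

    With these assumed inputs the formula yields 57 participants per group, i.e. 114 in total, matching the reported sample size; G*Power’s exact noncentral-t calculation may differ by one or two participants.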

    C. Inclusion and Exclusion Criteria

    All male and female final-year undergraduate medical (MBBS) students at our institute were recruited after obtaining their written informed consent. All final-year students in the institute consented to the study. The participants were between 22 and 26 years of age.

    A total of 123 participants were recruited, of whom 12 (9.77%) dropped out. Of the 111 participants who completed the study, 61 (54.95%) were female and 50 (45.05%) were male.

    The study was conducted in the Clinical Skills Simulation Lab of Melaka Manipal Medical College (presently known as Manipal University College Malaysia).

    The study period was from March 2019 to February 2020 (12 months).

    D. Interventions

    1) Description of HFPS-based teaching: It was an interactive session using a high-fidelity patient simulator demonstrating the management of tension pneumothorax by performing Needle Decompression on METIman (Pre-Hospital) following the Advanced Trauma Life Support Manual developed by the American College of Surgeons (ATLS Subcommittee et al., 2013).

    2) Description of high-fidelity simulator: The METIman Pre-Hospital High-Fidelity Simulator (MMP-0418) was used for the simulation sessions. It was a fully wireless, adult high-fidelity patient simulator (HFPS) with modelled physiology. It came with extensive clinical features and capabilities designed specifically for learners to practise, gain experience, and develop clinical mastery in a wide range of patient care scenarios.

    3) Description of video-assisted lecture-based teaching: It was a small group interactive session delivered face-to-face to the participants using a recorded video clip demonstrating the management of tension pneumothorax by performing Needle Decompression on METIman (Pre-Hospital) following the Advanced Trauma Life Support Manual developed by the American College of Surgeons (ATLS Subcommittee et al., 2013).

    E. Outcome

    The tool for measurement of knowledge was an identical set of single-best-answer A-type MCQs. These MCQs were used for both the Pre-test and Post-test knowledge assessments. The MCQs were constructed based on the teaching sessions to assess the participants’ learning outcomes.

    The outcome was the efficacy of HFPS-based teaching, compared to video-assisted lecture-based teaching, in the enhancement of knowledge for the management of tension pneumothorax.

    F. Recruitment

    The students were recruited in the study during their final year surgical posting.

    G. Randomisation

    A cohort of 12 to 14 students from each rotation was randomised into intervention (HFPS-based teaching) and control (video-assisted lecture-based teaching) groups following a random sequence generation method.

    A computer-generated random sequence was obtained from randomizer.org. The independent randomiser was a biostatistician who did not participate in the delivery of the interventions. The allocated interventions were then sealed in sequentially numbered, opaque envelopes.

    Block randomisation with a block size of two was used to assign the students into intervention and control groups.
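    The allocation scheme described above can be sketched as follows (a minimal illustration; the group labels and the cohort size of 14 are taken from the text, and the seed is arbitrary):

```python
# Block randomisation with a block size of two: each consecutive pair of
# students receives one HFPS and one VAL assignment in random order,
# keeping the two arms balanced within every block.
import random

def block_randomise(n_students, seed=None):
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_students:
        block = ["HFPS", "VAL"]  # one assignment per arm in each block
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_students]

allocation = block_randomise(14, seed=1)
print(allocation.count("HFPS"), allocation.count("VAL"))  # 7 7
```

    Note that with a block size of two, the second allocation in each block is predictable once the first is known, which is the selection-bias risk acknowledged in the Limitations section.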

    H. Implementation

    A biostatistician generated the allocation sequence. One independent investigator enrolled the participants, and another independent investigator assigned the participants to interventions. The outcome assessor and the biostatistician were kept blinded to the randomisation.

    I. Procedure for Data Collection

    The participants who gave consent were enrolled in the study. Each session was conducted with a group of 12 to 14 participants. On the first day, the participants were briefed about the sessions and expected learning outcomes. As part of the briefing, the confidentiality of the HFPS and video-assisted lecture sessions and the ethical issues involved were explained to them. All the participants were introduced to the high-fidelity patient simulator (METIman) in the clinical lab set-up to make them aware of its functions and to familiarise them with the handling of the mannequin. The students were assured that the training course was not part of the evaluation process for the surgical curriculum. The briefing was followed by the first knowledge assessment (Pre-test MCQ) of all the participants. The Pre-test MCQ was designed to measure initial background knowledge about tension pneumothorax and its management following the ATLS protocol. The module on the aetiology, pathophysiology and clinical presentation of tension pneumothorax and the steps of its management following the ATLS protocol was part of the final-year course curriculum and was taught before the students participated in the study. After the Pre-test MCQ session, the participants were randomised into intervention and control groups consisting of 6 to 7 participants each. For the intervention group, an independent investigator used the high-fidelity simulator (METIman Pre-hospital) to demonstrate the diagnosis and management of tension pneumothorax (Needle Decompression) in an emergency setting. The demonstration lasted 20 minutes and was followed by hands-on training for another 20 minutes. For the control group, a recorded video clip of the identical facilitated simulation session on the diagnosis and management of tension pneumothorax (Needle Decompression) was shown by another investigator. The video demonstration lasted for 20 minutes.
This session was followed by a 20-minute interactive discussion session with the faculty. All the participants in both groups were apprised of the importance of aetiology, pathophysiology and clinical presentation in arriving at the diagnosis and management of tension pneumothorax during these interactive teaching sessions. The participants were encouraged to explore, through discussion, how they would manage the stated clinical situation. The faculty were instructed to emphasise the teaching points related to the outcome of the study. The total duration for both types of teaching was 40 minutes. There were no additional hands-on practice or video-assisted lecture sessions for the participants during the course of the study. In the seventh/eighth week, both the intervention and the control groups participated in the second knowledge assessment (Delayed Post-test MCQ) to assess their gain and retention of knowledge. The delayed Post-test MCQ assessment was intended to minimise recall bias and better assess retained knowledge.

    Both the Pre-test and Post-test knowledge assessments comprised 20 MCQs to be completed in 20 minutes. The single-best-answer A-type MCQs with five answer options were prepared following the guidelines framed by the National Board of Medical Examiners (Case & Swanson, 2001). A score of one point was awarded for each correct response; there was no negative marking for incorrect responses. Based on the learning objectives, the MCQs were constructed by six experts in the fields of Surgery, Medicine and Medical Education who were not part of this research study. The MCQs covered items on the pathophysiology, diagnosis, and management of tension pneumothorax, and assessed knowledge comprehension and knowledge application. The order of the questions was changed between the Pre-test and the Post-test. The MCQ answer sheets were scanned by a Konica Minolta FM (172.17.5.12) scanner and graded using Optical Mark Recognition (OMR) software (Remark Office OMR, version 9.5, 2014; Gravic Inc., USA). Before the main study, a preliminary study involving 56 students was conducted to explore the time management, feasibility, acceptability, and validation of the MCQs (Pal et al., 2021). In the preliminary study, the Pre-test and the Post-test were administered in the first week and the fourth week, respectively, to note the short-term retention of knowledge. The present study is an extension of the preliminary study with a different cohort of students, in which the Pre-test and the Delayed Post-test were administered in the first week and the seventh/eighth week, respectively, to determine the medium-term retention of knowledge. The MCQs were reviewed based on feedback from the preliminary study on the appropriateness of the content, clarity of wording, and difficulty level. The difficulty index and the point-biserial correlation for item discrimination of all MCQs were checked. A difficulty index between 30 and 95 and a point-biserial correlation > 0.2 were chosen as the accepted standards for this study.
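    The two item statistics named above can be computed as follows (a minimal sketch; the response data are synthetic, for illustration only):

```python
# Item analysis for dichotomously scored MCQs: the difficulty index is
# the percentage of students answering the item correctly (accepted
# range 30-95), and the point-biserial correlation relates item scores
# to total test scores (accepted if > 0.2). The data below are
# synthetic, for illustration only.

def difficulty_index(item_scores):
    return 100.0 * sum(item_scores) / len(item_scores)

def point_biserial(item_scores, total_scores):
    n = len(item_scores)
    mi = sum(item_scores) / n
    mt = sum(total_scores) / n
    cov = sum((i - mi) * (t - mt) for i, t in zip(item_scores, total_scores))
    var_i = sum((i - mi) ** 2 for i in item_scores)
    var_t = sum((t - mt) ** 2 for t in total_scores)
    return cov / (var_i * var_t) ** 0.5

item = [1, 1, 0, 1, 0, 1, 1, 0]         # 0/1 responses to one MCQ
total = [15, 17, 9, 14, 8, 16, 12, 10]  # total test scores (max 20)
print(difficulty_index(item))           # 62.5, within the 30-95 range
print(point_biserial(item, total) > 0.2)
```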

    At the end of the study, the participants in the intervention group were provided with access to the identical video-assisted lecture sessions designed for the control group. Similarly, the participants in the control group were provided with access to the same HFS sessions. This was to ensure parity between the groups for their professional development of knowledge.

    J. Statistical Analysis

    SPSS software (version 25) was used for data analysis. Descriptive statistics such as frequency and percentage for categorical data, and the mean and standard deviation for the total assessment scores, were calculated. ANCOVA was used to determine the difference in post-test MCQ marks between the intervention and control groups, with pre-test MCQ marks as a covariate. Intragroup comparison of pre-test and post-test MCQ marks was done using a paired t-test. For the intergroup comparison, the effect size, Partial Eta Squared, was calculated in ANCOVA. Cohen’s dz was calculated for the comparison of dependent means. The level of significance was set at 0.05, and the null hypothesis was rejected when P < 0.05. We measured the scale-level content validity index (SCVI) and item-level content validity index (ICVI) for the validity, and Cronbach’s alpha for the internal consistency (reliability), of the MCQs. The average values of SCVI and ICVI were 0.94 and 0.89, respectively. The value of Cronbach’s alpha was 0.78.
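    As a consistency check, Cohen’s dz for a paired t-test can be recovered from the test statistic as dz = t/√n. Applying this to the t-values and group sizes reported in Table 3 reproduces the published effect sizes:

```python
# For a paired t-test, Cohen's dz equals t / sqrt(n). Recomputing from
# the t-values and group sizes reported in Table 3 reproduces the
# published effect sizes.
from math import sqrt

def cohens_dz(t, n):
    return t / sqrt(n)

print(round(cohens_dz(3.841, 55), 3))  # 0.518 (intervention group)
print(round(cohens_dz(3.998, 56), 3))  # 0.534 (control group)
```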

    III. RESULTS

    The data that support the findings of this RCT study are openly available at https://doi.org/10.6084/m9.figshare.19932053 (Pal et al., 2022).

    A. General Data Analysis

    There was no difference in the highest Pre-test scores achieved by the participants in the intervention and control groups. The lowest scores recorded in the intervention group were better than those in the control group in both the Pre-test and the Post-test. There was a negligible difference between the highest Post-test scores of the control and intervention groups (See Table 1).

    Test score             Intervention      Control
    PRE-TEST
      Mean (SE)            12.31 (0.34)      12.23 (0.36)
      95% CI for Mean      11.64 – 12.98     11.50 – 12.96
      Min – Max            6.0 – 18.0        6.0 – 18.0
    POST-TEST
      Mean (SE)            13.65 (0.27)      13.60 (0.30)
      95% CI for Mean      13.12 – 14.19     12.98 – 14.20
      Min – Max            8.0 – 18.0        7.0 – 17.0

    Table 1. Highest, lowest and unadjusted mean MCQ scores among intervention and control groups

    SE – Standard Error                CI – Confidence Interval

    Min – Minimum                      Max – Maximum

    B. Statistical Data Analysis

    ANCOVA was used to determine the difference in Post-test MCQ scores between the control and intervention groups after adjusting for Pre-test MCQ scores. There was a linear relationship between Pre-test and Post-test MCQ scores for each group, as determined by visual inspection of the scatterplot. Homogeneity of regression slopes was noted, as the interaction term was not statistically significant, F (1, 107) = 0.889, P = 0.348. When assessed by Shapiro-Wilk’s test, standardized residuals were normally distributed (P > 0.05) in the intervention group, but not in the control group (P < 0.05). Both homoscedasticity and homogeneity of variance were noted, as assessed by visual inspection of a scatterplot and Levene’s test of homogeneity of variance (P = 0.531), respectively. Data are presented as mean ± standard error unless otherwise stated. The effect size, Partial Eta Squared (Partial η2), was calculated in ANCOVA. A partial η2 value of 0.01 or less was considered small. For the comparison of dependent means, the effect size, Cohen’s dz, was calculated; an effect size of 0.5-0.8 was considered moderate (Ellis, 2010). The Post-test MCQ score was higher in the intervention group, but after adjustment for Pre-test MCQ scores, there was no statistically significant difference in Post-test MCQ scores between the control and intervention groups. The effect size was small (See Table 2).
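    For a single-degree-of-freedom effect such as the group term here, partial η2 can be recovered from the F statistic as F/(F + df_error). A minimal sketch follows; the F value of 0.011 is an assumed illustration consistent with the reported P = 0.917, and with 111 students, one group term and one covariate, df_error = 108:

```python
# For a single-degree-of-freedom effect, partial eta squared can be
# recovered from the F statistic as F / (F + df_error). With 111
# students, one group term and one covariate, df_error = 111 - 3 = 108.
# NOTE: F = 0.011 is an assumed, illustrative value consistent with the
# reported P = 0.917; the paper does not report F for the group effect.
def partial_eta_squared(f_value, df_error):
    return f_value / (f_value + df_error)

print(round(partial_eta_squared(0.011, 108), 4))  # 0.0001
```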

    Variable        n     Post-test MCQ score,   Mean difference        P-value    Partial η2
                          Mean (SE)              (95% CI)
    Intervention    55    13.65 (0.27)           0.04 (-0.69, 0.77)     0.917      0.0001
    Control         56    13.60 (0.30)

    Table 2. Intergroup comparison of post-test MCQ scores between intervention and control groups after adjusting pre-test MCQ marks (ANCOVA)

    n: number of students

    SE: Standard error

    95% CI: 95% confidence interval

    Partial η2: Partial Eta Squared

    There was a statistically significant difference between pre-test and post-test MCQ scores among the intervention and control groups. The mean of post-test MCQ scores was significantly higher than the mean of pre-test MCQ scores in both intervention and control groups. The effect size was moderate in both groups (See Table 3).

    Variable        n     Pre-test MCQ scores,   Post-test MCQ scores,   Mean difference       t (df)        P-value     dz
                          Mean (SD)              Mean (SD)               (95% CI)
    Intervention    55    12.31 (2.49)           13.65 (1.99)            1.34 (0.64, 2.05)     3.841 (54)    < 0.001*    0.518
    Control         56    12.23 (2.72)           13.60 (2.26)            1.36 (0.68, 2.04)     3.998 (55)    < 0.001*    0.534

    Table 3. Intragroup comparison of pre-test and post-test MCQ scores among intervention and control groups (Paired t-test)

    n: number of students                                                                                * Significant

    SD: Standard deviation

    95% CI: 95% confidence interval

    dz: Cohen’s dz

    IV. DISCUSSION

    Multiple studies have revealed slight to modest enhancement of knowledge in simulation-based medical education (SBME) when compared to other instructional teaching methods (Cook et al., 2012; Gordon et al., 2006; Lo et al., 2011; Ray et al., 2012; Ten Eyck et al., 2009). Notwithstanding the increasing popularity of SBME, there is little evidence to conclude that it is superior to other small-group teaching modalities for the acquisition of knowledge (Alluri et al., 2016). The common perception is that knowledge lies at the lowest level of competence in Miller’s model of clinical acumen (Miller, 1990), but it is also important to note that knowledge is the basic foundation of competence and proficiency (Norman, 2009). Theoretically, SBME is advantageous for the assessment of both knowledge and skills, but few studies have directly evaluated the effectiveness of HFS in the assessment of knowledge (McGaghie et al., 2009; Rogers, 2008).

    The mean scores of both Pre-test and the Post-test were higher in the intervention group in this study. In comparison, our preliminary study demonstrated that the control group had higher mean MCQ marks than the intervention group in Pre-test whereas at Post-test, the intervention group had higher mean MCQ marks than the control group (Pal et al., 2021).

    In our study, there was significant enhancement of knowledge (P < 0.001) with both modes of teaching, which corroborates the findings of Alluri et al. (2016). Their RCT demonstrated that the participants in both the simulation and lecture groups had improved post-test scores (p < 0.05). The comparison of Pre-test and Post-test MCQ scores in our preliminary study also revealed significantly higher mean MCQ scores at Post-test than at Pre-test in both the intervention and control groups (Pal et al., 2021). A study by Couto et al. (2015) showed improved post-test scores with both methods. Similar results were noted in the studies by Chen et al. (2017) and Vijayaraghavan et al. (2019). The findings of a study by Hall (2013) showed a slight increase in post-test scores in both the HFPS and control groups.

    A systematic review by La Cerra et al. (2019) revealed that HFS was superior to other teaching methods in improving knowledge and performance. Significantly higher scores for participants in the HFS group in the studies by Larsen et al. (2020) and Solymos et al. (2015) suggested that HFS may be superior to conventional teaching methods for factual learning. In another study, by Bartlett et al. (2021), HFS showed a significant long-term gain in knowledge over traditional teaching methods, but the short-term knowledge gain was insignificant. Our study revealed that the Post-test MCQ score was higher in the HFS group, but after adjustment for pre-test scores, there was no significant difference in knowledge gain between the control and intervention groups. The findings were similar in our preliminary study, where the intervention group had a higher mean change in MCQ scores than the control group, but the difference was not statistically significant (Pal et al., 2021).

    On the other hand, no significant difference in knowledge improvement between simulation and traditional teaching methods was observed in several studies (Corbridge et al., 2010; Kerr et al., 2013; Moadel et al., 2017). The findings of Alluri et al. (2016) also showed no difference in knowledge gain between simulation- and lecture-based teaching. The studies by Morgan et al. (2002) and Tan et al. (2008) demonstrated equal efficacy between simulation and conventional lectures. The findings of Kerr et al. (2013) demonstrated that SBME was not beneficial in the acquisition and retention of knowledge. There was no significant improvement in knowledge after simulation-based education, as revealed by the findings of three RCTs (Cavaleiro et al., 2009; Cherry et al., 2007; Kim et al., 2002).

    Despite simulation being effective in acquisition of knowledge, it may not be the most efficient modality when compared to other traditional educational methods (Bordage et al., 2009). There is ample evidence that SBME usually leads to enhancement of knowledge and skills among undergraduate students but its superiority over other conventional teaching methods is yet to be defined (Nestel et al., 2015).

    A. Limitations

    Potential biases in design, recruitment, sample populations and data analysis could have influenced the findings. Due to randomisation in blocks of two, the allocation of participants may be predictable, which may result in selection bias. Confounding factors such as communication between the different groups of students prior to the second MCQ assessment, participants’ recall memory, and preparation for the post-test after 7 – 8 weeks need to be considered. As this was a single-centre study that included final-year medical students only, the findings may not be applicable to other settings.

    V. CONCLUSION

    Conventional teaching modalities and HFS, when used in conjunction with bedside teaching, may complement clinical practice, leading to higher retention of knowledge. Our study revealed that high-fidelity simulation-based teaching was not superior to video-assisted lecture-based teaching in terms of knowledge acquisition and retention. The substantially higher cost and maintenance associated with HFS need to be considered before planning a teaching-learning activity. HFS may be used judiciously alongside conventional teaching when the objectives are mainly knowledge-based. More studies are required to measure the efficacy of simulation, to better understand the difference it can make in the acquisition of knowledge, and to further evaluate it as a teaching-learning tool in medical education.

    Notes on Contributors

    Bikramjit Pal was involved in Conceptualization, Formal Analysis, Literature Review, Methodology, Project administration & Supervision, Data Analysis and Writing (original draft & editing).

    Aung Win Thein was involved in Formal Analysis, Literature Review, Methodology, Supervision and Writing (review & editing).

    Sook Vui Chong was involved in Literature Review, Methodology, Supervision and Writing (review & editing).

    Ava Gwak Mui Tay was involved in Formal Analysis, Literature Review, Supervision and Writing (review & editing).

    Htoo Htoo Kyaw Soe was involved in Formal Analysis, Methodology, Data curation, Statistical Analysis and Validation.

    Sudipta Pal was involved in Literature Review, Methodology, Formal Analysis, Data curation and Writing (review & editing).

    Ethical Approval

    Ethical approval was duly obtained from the Ethical Committee / IRB of Manipal University College Malaysia. Informed consent was taken from all the participants. All information about the participants was kept confidential.

    Approval number: MMMC/FOM/Research Ethics Committee – 11/2018.

    Data Availability

    The data that support the findings of this RCT study are openly available at the Figshare repository, https://doi.org/10.6084/m9.figshare.19932053.v2 (Pal et al., 2022).

    Acknowledgement

    The authors would like to acknowledge the final year MBBS students of Manipal University College Malaysia who had participated in this research project, the faculty of the Department of Surgery, the lab assistants and technicians of Clinical Skills Lab and the Management of Manipal University College Malaysia.

    Funding

    The researchers did not receive any funding or benefits from industry or elsewhere to conduct this study.

    Declaration of Interest

    The researchers had no conflicts of interest.

    References

    Accreditation Council for Graduate Medical Education. (2020, July 1). Program requirements for graduate medical education in general surgery. https://www.acgme.org/globalassets/pfassets/programrequirements/440_generalsurgery_2020.pdf.

    Alluri, R. K., Tsing, P., Lee, E., & Napolitano, J. (2016). A randomized controlled trial of high-fidelity simulation versus lecture-based education in preclinical medical students. Medical Teacher, 38(4), 404–409. https://doi.org/10.3109/0142159X.2015.1031734

    ATLS Subcommittee, American College of Surgeons’ Committee on Trauma, & International ATLS working group. (2013). Advanced trauma life support (ATLS®): The ninth edition. Journal of Trauma and Acute Care Surgery, 74(5), 1363–1366. https://doi.org/10.1097/TA.0b013e31828b82f5

    Bartlett, R. S., Bruecker, S., & Eccleston, B. (2021). High-fidelity simulation improves long-term knowledge of clinical swallow evaluation. American Journal of Speech-Language Pathology, 30(2), 673–686. 

    Bordage, G., Carlin, B., Mazmanian, P. E., & American College of Chest Physicians Health and Science Policy Committee (2009). Continuing medical education effect on physician knowledge: Effectiveness of continuing medical education: American College of Chest Physicians evidence-based educational guidelines. Chest, 135(3 Suppl), 29S–36S. https://doi.org/10.1378/chest.08-2515

    Case, S. M., & Swanson, D. B. (2001). Constructing written test questions for the basic and clinical sciences (3rd ed.). National Board of Medical Examiners.

    Cavaleiro, A. P., Guimarães, H., & Calheiros, F. (2009). Training neonatal skills with simulators? Acta Paediatrica, 98(4), 636–639.    

    Chen, T., Stapleton, S., Ledford, M., & Frallicciardi, A. (2017). Comparison of high-fidelity simulation versus case-based discussion on fourth-year medical student performance. Western Journal of Emergency Medicine: Integrating Emergency Care with Population Health, 18(5.1). https://escholarship.org/uc/item/5k73f4qc

    Cherry, R. A., Williams, J., George, J., & Ali, J. (2007). The effectiveness of a human patient simulator in the ATLS shock skills station.  Journal of Surgical Research, 139(2), 229–235. https://doi.org/10.1016/j.jss.2006.08.010

    Cook, D. A., Brydges, R., Hamstra, S. J., Zendejas, B., Szostek, J. H., Wang, A. T., Erwin, P. J., & Hatala, R. (2012). Comparative effectiveness of technology-enhanced simulation versus other instructional methods: A systematic review and meta-analysis. Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare, 7(5), 308–320. https://doi.org/10.1097/SIH.0b013e3182614f95

    Corbridge, S. J., Robinson, F. P., Tiffen, J., & Corbridge, T. C. (2010). Online learning versus simulation for teaching principles of mechanical ventilation to nurse practitioner students. International Journal of Nursing Education Scholarship, 7(1), Article 12. https://doi.org/10.2202/1548-923X.1976

    Couto, T. B., Farhat, S. C., Geis, G. L., Olsen, O., & Schvartsman, C. (2015). High-fidelity simulation versus case-based discussion for teaching medical students in Brazil about pediatric emergencies. Clinics (Sao Paulo, Brazil), 70(6), 393–399. https://doi.org/10.6061/clinics/2015(06)02

    Davies, R. (2008). The Bologna process: the quiet revolution in nursing higher education. Nurse Education Today, 28(8), 935–942. https://doi.org/10.1016/j.nedt.2008.05.008

    Ellis, P. (2010). The essential guide to effect sizes: Statistical power, meta-analysis, and the interpretation of research results. Cambridge University Press. https://doi.org/10.1017/CBO9780511761676

    Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/bf03193146

    Gordon, J. A., Shaffer, D. W., Raemer, D. B., Pawlowski, J., Hurford, W. E., & Cooper, J. B. (2006). A randomized controlled trial of simulation-based teaching versus traditional instruction in medicine: A pilot study among clinical medical students. Advances in Health Sciences Education, 11(1), 33–39. https://doi.org/10.1007/s10459-004-7346-7

    Hall, R. M. (2013). Effects of high fidelity simulation on knowledge acquisition, self-confidence, and satisfaction with baccalaureate nursing students using the Solomon four research design [Doctoral dissertation, East Tennessee State University]. East Tennessee State University Higher Education Commons. https://dc.etsu.edu/etd/2281

    Kerr, B., Hawkins, T. L., Herman, R., Barnes, S., Kaufmann, S., Fraser, K., & Ma, I. W. (2013). Feasibility of scenario-based simulation training versus traditional workshops in continuing medical education: A randomized controlled trial. Medical Education Online, 18(1), Article 21312. https://doi.org/10.3402/meo.v18i0.21312

    Kiernan, L. C. (2018). Evaluating competence and confidence using simulation technology. Nursing, 48(10), 45–52. https://doi.org/10.1097/01.NURSE.0000545022.36908.f3

    Kim, J. H., Kim, W. O., Min, K. T., Yang, J. Y., & Nam, Y. T. (2002). Learning by computer simulation does not lead to better test performance on advanced cardiac life support than textbook study. The Journal of Education in Perioperative Medicine, 4(1), Article E019. 

    Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Prentice Hall. 

    La Cerra, C., Dante, A., Caponnetto, V., Franconi, I., Gaxhja, E., Petrucci, C., Alfes, C. M., & Lancia, L. (2019). Effects of high-fidelity simulation based on life-threatening clinical condition scenarios on learning outcomes of undergraduate and postgraduate nursing students: A systematic review and meta-analysis. BMJ Open, 9(2), Article e025306. https://doi.org/10.1136/bmjopen-2018-025306

    Larsen, T., Jackson, N., & Napolitano, J. (2020). A comparison of simulation-based education and problem-based learning in pre-clinical medical undergraduates. MedEdPublish, 9(1), Article 172.

    Lo, B. M., Devine, A. S., Evans, D. P., Byars, D. V., Lamm, O. Y., Lee, R. J., Lowe, S. M., & Walker, L. L. (2011). Comparison of traditional versus high-fidelity simulation in the retention of ACLS knowledge. Resuscitation, 82(11), 1440–1443. https://doi.org/10.1016/j.resuscitation.2011.06.017

    McGaghie, W. C., Siddall, V. J., Mazmanian, P. E., Myers, J., & American College of Chest Physicians Health and Science Policy Committee (2009). Lessons for continuing medical education from simulation research in undergraduate and graduate medical education: Effectiveness of continuing medical education: American College of Chest Physicians Evidence-Based Educational Guidelines. Chest, 135(3 Suppl), 62S–68S. https://doi.org/10.1378/chest.08-2521

    Miller, G. E. (1990). The assessment of clinical skills/competence/performance. Academic Medicine, 65(9), S63–S67. https://doi.org/10.1097/00001888-199009000-00045

    Moadel, T., Varga, S., & Hile, D. (2017). A prospective randomized controlled trial comparing simulation, lecture and discussion-based education of sepsis to emergency medicine residents. Western Journal of Emergency Medicine: Integrating Emergency Care with Population Health, 18(5.1). https://escholarship.org/uc/item/0132981t

    Morgan, P. J., Cleave-Hogg, D., McIlroy, J., & Devitt, J. H. (2002). Simulation technology: A comparison of experiential and visual learning for undergraduate medical students. Anesthesiology, 96, 10–16. https://doi.org/10.1097/00000542-200201000-00008

    Nestel, D., Harlim, J., Smith, C., Krogh, K., & Bearman, M. (2015). Simulated learning technologies in undergraduate curricula: An evidence check review for HETI.

    Norman, G. (2009). The American College of Chest Physicians evidence-based educational guidelines for continuing medical education interventions: A critical review of evidence-based educational guidelines. Chest, 135(3), 834–837. https://doi.org/10.1378/chest.09-0036

    Pal, B., Chong, S. V., Thein, A. W., Tay, A. G., Soe, H. H., & Pal, S. (2021). Is high-fidelity patient simulation-based teaching superior to video-assisted lecture-based teaching in enhancing knowledge and skills among undergraduate medical students? Journal of Health and Translational Medicine, 24(1), 83-90. https://doi.org/10.22452/jummec.vol24no1.14

    Pal, B., Thein, A. W., Chong, S. V., Tay, A., Htoo, H., & Pal, S. (2022). A randomized controlled trial study to compare the effectiveness of high-fidelity based teaching with video-assisted based lecture teaching in enhancing knowledge [Dataset]. Figshare. https://doi.org/10.6084/m9.figshare.19932053

    Ray, S. M., Wylie, D. R., Shaun Rowe, A., Heidel, E., & Franks, A. S. (2012). Pharmacy student knowledge retention after completing either a simulated or written patient case. American Journal of Pharmaceutical Education, 76(5), 86. https://doi.org/10.5688/ajpe76586

    Rogers, D. A. (2008). The role of simulation in surgical continuing medical education. Seminars in Colon and Rectal Surgery, 19(2), 108-114. https://doi.org/10.1053/j.scrs.2008.02.007

    Solymos, O., O’Kelly, P., & Walshe, C. M. (2015). Pilot study comparing simulation-based and didactic lecture-based critical care teaching for final-year medical students. BMC Anesthesiology, 15, Article 153. https://doi.org/10.1186/s12871-015-0109-6

    Tan, G. M., Ti, L. K., Tan, K., & Lee, T. (2008). A comparison of screen-based simulation and conventional lectures for undergraduate teaching of crisis management. Anaesthesia and Intensive Care, 36(4), 565–569.

    Ten Eyck, R. P., Tews, M., & Ballester, J. M. (2009). Improved medical student satisfaction and test performance with a simulation-based emergency medicine curriculum: A randomized controlled trial. Annals of Emergency Medicine, 54(5), 684–691. https://doi.org/10.1016/j.annemergmed.2009.03.025

    Vijayaraghavan, S., Rishipathak, P., & Hinduja, A. (2019). High-fidelity simulation versus case-based discussion for teaching bradyarrhythmia to emergency medical services students. Journal of Emergencies, Trauma, and Shock, 12(3), 176–178. https://doi.org/10.4103/JETS.JETS_115_18

    Yang, Y., & Liu, H. P. (2016). Systematic evaluation influence of high-fidelity simulation teaching on clinical competence of nursing students. Chinese Nursing Research, 30(7), 809–814. https://caod.oriprobe.com/articles/47628779/Systematic_evaluation_influence_of_high_fidelity_s.htm

    *Bikramjit Pal
    RCSI & UCD Malaysia Campus (RUMC),
    4 Jalan Sepoy Lines,
    Georgetown, Penang, 10450, Malaysia
    +6042171908-1908 (Ext)
    Email: bikramjit.pal@rcsiucd.edu.my
