Volume 8 (2021): Issue 3 (September 2021)
Journal Details
License
Format: Journal
First Published: 30 Mar 2018
Publication timeframe: 4 times per year
Languages: English
Access type: Open Access

Effect of simulation-based teaching on nursing skill performance: a systematic review and meta-analysis

Published Online: 21 Sep 2021
Page range: 193 - 208
Received: 14 Jun 2020
Accepted: 17 Jul 2020
Abstract

Objective

To summarize and produce aggregated evidence on the effect of simulation-based teaching on skill performance in the nursing profession. Simulation is an active learning strategy involving the use of various resources to replicate a real situation. It enables learners to improve their skills and knowledge in a coordinated environment.

Methods

A systematic literature search of original research articles was carried out through the Google Scholar, Medline, Cochrane, and Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases. Studies on simulation-based teaching and skill performance among nursing students or clinical nursing staff, conducted from 2010 to 2019 and published in English, were included. Methodological quality was assessed with the Joanna Briggs Institute (JBI) checklist, and the risk of bias was assessed with the Cochrane risk of bias tool and the risk of bias assessment tool for non-randomized studies (ROBINS-I) checklist.

Results

Initially, 638 titles were obtained from 3 sources, and 24 original studies with 2209 study participants were taken for the final analysis. Of the total studies, 14 (58.3%) used a single-group pre–post design, 7 (29.1%) used a high fidelity simulator (HFS), and 7 (29.1%) used a virtual simulator (VS). Twenty (83.3%) studies reported improved skill performance following simulation-based teaching. Simulation-based teaching improved skill performance across group types (single or double), study regions, and among high fidelity (HF), low fidelity (LF), and standard patient (SP) users, but the effect for virtual and medium fidelity simulators was not statistically significant. Overall, simulation-based teaching improved the skill performance score in the experimental groups (d = 1.01, 95% confidence interval [CI] [0.69–1.33], Z = 6.18, P < 0.01, I2 = 93.9%). Significant heterogeneity and publication bias were observed during the pooled analysis.

Conclusions

Simulation did improve skill performance in the intervention groups, but this conclusion is uncertain due to the significant heterogeneity. The large differences among the original studies necessitate well-defined skill assessment methods and a standardized simulation set-up for the proper assessment of its effects.

Keywords

Introduction

Simulation is an active learning strategy involving the use of various resources to replicate a real situation.1 Moreover, it allows students to practice skills, exercise clinical reasoning, and make patient care decisions in a safe environment.2 It is also ideal for teaching reflective skills and the management of patients in crisis situations.

Bland et al. (2011) summarized the features of simulation as a learning strategy: it encompasses creating a hypothetical opportunity, authentic representation, active participation, integration, repetition, evaluation, and reflection. As a result, it promotes active learning, creative thinking, and high-level problem solving, which can build students' capability for independent work.3

In contrast, the use of simulation also has disadvantages, such as high cost, the need for staff development to run the simulations, limited time for faculty training, and some chance of false transfer due to wrong adjustment of the simulators.4 In addition, greater psychological preparation of students is needed, since many simulation activities make students anxious and frustrated.5

Some of the driving forces behind the current attention to simulation-based teaching are the patient bill of rights, a greater need for high competency, and the shift in teaching approach from passive to experiential learning. Besides, the professional obligation to keep patients safe, the difficulty of finding clinical sites, and the greater need to provide high-quality clinical practice have also influenced current teaching trends.2

In nursing, there has been a lack of high-stakes research with well-organized procedures that can provide strong evidence on the effect of simulation.6 This indicates the need to conduct more investigations and arrive at a consensus on the issue among nurse experts.

Individual studies have reported both negative and positive effects of simulation-based teaching. For example, in medicine, the use of high fidelity (HF) simulation has been criticized for causing overconfidence in students, even hampering their real practice.7 On the other hand, nursing literature has also reported no effect of simulation on knowledge, skill, and confidence.8 As a result, this analysis aimed to narrow this gap by producing pooled evidence about the effect of simulation-based teaching on skill performance in the nursing profession. Moreover, this study considers students and clinical nursing staff as comparison groups to ascertain differences, if any, in skill performance.

Simulation has many advantages and effects for learners as well as for the health care industry as a whole. Studies have reported that simulation helps students acquire knowledge, skill, and confidence in actual patient-based care.9,10,11

Methods
Protocol and registration

To summarize and produce aggregated evidence on the effect of simulation-based teaching on skill performance in the nursing profession, this review followed the guidelines proposed by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).

Eligibility criteria

Original studies published in English that dealt with nursing students or nursing professionals and compared any type of simulation with no simulation or traditional lecture-based teaching were included. Moreover, studies available in full text that measured the effect of simulation on skill performance, published between 2009 and 2019 (a 10-year window), were also included. Qualitative studies, interprofessional studies, non-nursing studies, review studies, studies whose populations were patients, observational studies, and combination training (simulation-based plus another method versus simulation alone) were excluded from the review and analysis.

Participants

Participants were undergraduate nursing students and clinical nursing staff.

Intervention

The intervention was simulation-based teaching (using low fidelity [LF], HF, medium fidelity, standard patient [SP], or virtual-based teaching).

Control

No treatment or other conventional training such as interactive lecture alone or in combination with conventional manikin-based teaching.

Outcomes

The primary outcome was skill performance score after intervention. The term score was used because an inconsistency was observed in separate reporting of acquisition and retention of skill performance. For this review, skill score was used as a general term representing a change in skill performance score following simulation-based teaching. The skill performance score was taken as it was reported by original researchers.

Information sources

Study data were obtained from the Google Scholar, PubMed, Cochrane (CINAHL), and other databases and references.

Studies

Both non-randomized (quasi-experimental) and randomized original trials were included in the review and analysis.

Study selection

In the first instance, literature was retrieved from the original sources and merged using the software package EndNote X8 (reference management software) and an Excel sheet. Thereafter, duplicate records were removed. Titles and abstracts were used for primary screening; the full text was then used if needed. Two authors independently screened each study against the inclusion criteria. Studies were included if they: (1) included undergraduate nursing students and/or clinical nursing staff, (2) measured the effect of simulation-based teaching using various types of simulators, (3) used skill performance score as the primary outcome, (4) were randomized controlled trials (RCTs) or non-RCTs (quasi-experimental), and (5) provided sufficient data for the calculation of effect sizes. At the same time, the following criteria were used to exclude studies from the review process: non-nursing studies, studies that did not assess simulation, interprofessional studies, non-original studies, qualitative studies, results not readily usable (e.g., reported as medians), and different study populations.

Data collection process

The two review authors (AA and NA) independently extracted the data into an Excel sheet as a one-page summary. Accordingly, information on the general overview of the article, study design, country, population, sample size, intervention, comparison, duration of the simulation, outcome, and methodological quality by the Joanna Briggs Institute (JBI) checklist score was entered into the pre-defined Excel sheet.

Risk of bias across studies

The risk of bias was assessed using the Cochrane Collaboration's Risk of Bias Tool for RCTs.12 This tool covers 6 areas for assessing experimental studies, and the authors decided to use it without modification. Each study was scored (1) for a high risk of bias, (2) for unclear statements about specific areas of bias, and (3) for a low risk of bias. The non-randomized trials were evaluated against the Risk of Bias Assessment Tool for Non-randomized Studies (ROBINS-I). ROBINS-I has 5 domains to be scored for individual studies: (1) bias arising from the randomization process, (2) bias due to deviation from intended interventions, (3) bias due to missing outcome data, (4) bias in the measurement of the outcomes, and (5) bias in the selection of the reported result. Each domain is scored as low risk, high risk, or some concern.13

The quality of the included studies was also assessed using the JBI critical appraisal checklist.14 The tool judges a study over 9 areas, and researchers used 4 responses with justification: Yes, No, Unclear, and Not applicable.15 Additionally, publication bias was tested by the trim-and-fill method to assess its effect on effect size.

Summary measures

The composite score of skill performance reflects an overall aggregate score derived from the various tools, designed or adopted by the original researchers, that were used to assess skill ability or performance before and after the experiment. The tools varied in type, content, and the number of points in their rubrics or checklists.

Synthesis of results

The analysis was performed with Comprehensive Meta-Analysis version 2 (CMA) software. A quantitative description of the pooled analysis was planned, with the final discussion of pooled results dictated by the level of heterogeneity obtained. Subsequent subgroup analyses were done for type of study group, level of fidelity, study region, type of participant, and type of outcome variable. Heterogeneity was assessed using the Cochran χ2 test (Q-test) with the alpha level of significance set at 0.10.16 The degree of heterogeneity was also estimated and interpreted using the I2 statistic, following the Cochrane Handbook for Systematic Reviews of Interventions recommendations, with the alpha level of significance set at 0.10,12 which describes the percentage of total variation across studies that results from heterogeneity rather than chance. Finally, based on the final level of heterogeneity, the pooled estimate was reported, discussed, and generalized to the group based on the significance level. The remaining individual studies were included in the systematic review to avoid misleading readers.
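The heterogeneity statistics used here, Cochran's Q and I2, are simple to compute by hand. The sketch below uses made-up per-study effect sizes and variances, purely for illustration, not data from the reviewed studies:

```python
# Hypothetical per-study effect sizes (d) and variances -- illustrative only,
# not the data from the reviewed studies.
d = [1.2, 0.4, 0.9, 1.5, 0.2]
var = [0.05, 0.04, 0.06, 0.08, 0.03]

# Fixed-effect (inverse-variance) weights and pooled mean
w = [1 / v for v in var]
d_fixed = sum(wi * di for wi, di in zip(w, d)) / sum(w)

# Cochran's Q: weighted squared deviations from the fixed-effect mean
Q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, d))
df = len(d) - 1

# I^2: percentage of total variation across studies due to heterogeneity
# rather than chance (floored at 0)
I2 = max(0.0, (Q - df) / Q) * 100
print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.1f}%")
```

Values of I2 above roughly 75% are conventionally read as considerable heterogeneity, which is why a pooled result such as the review's I2 = 93.9% triggers subgroup analysis.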

The final effect size was estimated and reported as a random-effects standardized mean difference (SMD, d) with a respective confidence interval (CI). This estimate is appropriate for effect sizes computed from different studies that measured the outcome variables in different contexts.17
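A random-effects SMD of this kind is commonly pooled with the DerSimonian–Laird estimator of the between-study variance; the sketch below follows that method on hypothetical data (CMA's exact computation may differ):

```python
import math

# Hypothetical study-level SMDs (d) and variances -- illustrative only.
d = [1.2, 0.4, 0.9, 1.5, 0.2]
var = [0.05, 0.04, 0.06, 0.08, 0.03]

# Fixed-effect quantities needed by the DerSimonian-Laird estimator
w = [1 / v for v in var]
d_fe = sum(wi * di for wi, di in zip(w, d)) / sum(w)
Q = sum(wi * (di - d_fe) ** 2 for wi, di in zip(w, d))
df = len(d) - 1

# Between-study variance tau^2 (DerSimonian-Laird, floored at 0)
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects weights fold tau^2 into each study's variance
w_re = [1 / (v + tau2) for v in var]
d_re = sum(wi * di for wi, di in zip(w_re, d)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
lo, hi = d_re - 1.96 * se, d_re + 1.96 * se
z = d_re / se
print(f"d = {d_re:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], Z = {z:.2f}")
```

Because tau^2 inflates every study's variance, the random-effects CI is wider than a fixed-effect CI would be, which is the appropriate choice when heterogeneity is high.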

Risk of bias across studies

Assessment of the quality of studies and the risk of bias at the study level was done with the JBI and Cochrane checklists. Overall publication bias was tested using the trim-and-fill method, which has a high sensitivity for assessing the effect of publication bias on effect size.18

Patient and public involvement

This review had no contact with patients. All information was obtained from published studies and electronic databases.

Results
Study selection

Initially, 638 records were identified from 3 sources, namely, Cochrane (CINAHL), PubMed, and Google Scholar. Then, 40 duplicated articles were removed using the EndNote X8 citation manager19 and an Excel sheet. Next, 502 records were removed due to a focus on other issues (n = 78), non-nursing study (n = 96), out of date (n = 5), not assessing simulation (n = 287), interprofessional study (n = 16), literature review (n = 15), and qualitative study (n = 5). From the remaining 96 studies, another 72 were removed because of results that were not ready for use (n = 9), not the intended outcome (n = 24), populations being patients (n = 11), unclear interventions (n = 5), out of date (n = 7), and non-nursing study (n = 16). Twenty-four studies were used for the final analysis (Figure 1).

Figure 1

Flow diagram showing the process of study identification and selection.

Study characteristics

The included studies varied in terms of their design, the population used, and duration of simulation, type of test used to evaluate outcome variable, type of interventions, learning theory used, and level of fidelity in the simulator.

In total, 2209 subjects participated in the 24 original studies, with sample sizes ranging from a maximum of 36720 to a minimum of 30.21 The proportion of studies that involved clinical nursing staff amounted to 13.4%, while the rest comprised undergraduate nursing students (86.6%). A large proportion of the individual studies came from Turkey (33.3%), followed by the USA (29%); together these constituted more than half of all studies. Moreover, more than three-fourths of the studies were quasi-experimental (n = 20; 83.3%), 29% used HF simulators, 29% used virtual simulators (VSs), and 58.3% used both a control and an experimental group (double group). The total duration spent on the simulation intervention ranged from a maximum of 24 h22 to a minimum of 20 min.23 The simulation duration was not clearly mentioned in 3 studies24,25,26 (Table 1).

Characteristics of included studies.

Study Interventions Study type, duration, sample size Scenario Outcome measures Result Effects
1 Aqel & Ahmed 2014, Jordan,27 RCT Training of participant over simulated case with cardiac arrest scenario and debriefing discussion. HFS, 25!90 CPR Direct observation using Checklist: mock codes were conducted over manikin over floor and evaluation using AHA checklist. The results revealed the existence of a significant difference in the post-test CPR knowledge as well as the CPR skills in favor of participants in the intervention group. Improved
2 Basak et al., 2016, Turkey,28, 29 Quasi, Single pre-post 45 min paper-based drug dose calculation simulation and debriefing session for discussion. LFS, 45!82 Actual physician prescription Rating: Drug dose calculation was evaluated from 100 points immediately after training and 1 month later. The difference between the mean pre-test score and the mean post-test score was statistically significant (t = 8.767, df = 89, P = 0.001) Improved
3 Basak et al., 2019, Turkey,30 RCT, equivalent control group 20 min simulation with 40 min debriefing and self-evaluation for 10 min generally 80 min discussion about teaching skill over SPs. SP, 80!71 Inhaler drug administration Direct Observation using Check list: Teaching skill measured by checklist consisted of 15 procedural steps developed and tested by principal investigators. Total patient teaching skill score for control group was 26.73 ± 5.63 and 39.08 ± 5.49 for SP group which causes a statistically significant difference (P ≤ 0.01) Improved
4 Bogossian et al. 2015, Australia,20 Quasi Single pre-post Interactive e simulation clinical scenario with video recording patient conditions, pop-up task, and respective response. VS, 24!367 Cardiac, shock, and respiratory Virtual skill performance A paired t-test showed a significant improvement in performance between the first and last scenarios (t = −8.037, df = 366, CI 2.05–1.24; P = 0.00). Improved
5 Bowling et al., 2015, USA,31 Quasi, equivalent control group 50 min respiratory distress simulated case training in which participants were required to react to the simulated case. MFS, 50!73 Respiratory distress OSCE with six stations lasting 7 min each and rater-based evaluations There was a significant difference for both groups in knowledge and skill performance (measured with a mini OSCE), but not between the groups Improved
6 Boyde M et al., 2018, Australia,24 Quasi, Single pre-post Innovative teaching of emergency management of patients using HF simulation with Jeffries simulation principles. HFS, Not mentioned, 50 Emergency patient Self-assessment: The self-efficacy in clinical performance scale was used to measure participants' self-assessment and handover practice. The mean change in handover skill from 7.88 ± 1.76 to 8.79 ± 1.22 was statistically significant with t (41) = 3.41, P < 0.01 Improved
7 Chen et al., 2015, Canada,32 Quasi, equivalent control group Auscultation skills training using low and HF training. HFS, 40!54 Pneumothorax and a systolic murmur: Auscultation skills OSCE using Check list: Participants required to correctly identify 20 different sounds on simulators. There was no evidence that the HFS group performed better than the LFS group in clinical skills or in auscultation sounds recognition on HFS. No change
8 Durmaz et al., 2012, Turkey,33 RCT Intervention: Participants received 4 h of computer-based education simulation about pre-operative and post-operative patient management. VS, 4 h, 82 Pre-post case OSCE for pre- and post-operative management and deep breathing and coughing exercises. There was not a significant difference between the students' post-education practical deep breathing and coughing exercise skills (P = 0.867). Improved
9 Ismailoglu et al., 2018, Turkey,25 Quasi, equivalent control group IV training over virtual IV simulator VS, Not clear, 62 Encoded case Direct observation Check list: Intravenous catheterization Skill list performance evaluation. Mean psychomotor skills score of the experimental group 45.18 (33.73 ± 4.22) was higher than that of the control group 20.44 (26.53 ± 4.45) with Z = 5.294, P = 0.000. Improved
10 Jaberi et al., 2019, Iran,34 RCT Abdominal examination skill was tested after teaching students using SP for 45 min. SP, 45!, 87 Physical examination of abdomen OSCE using checklist: A six-station OSCE was used, with one rater assigned to each station to evaluate performance over SPs. The mean score in the intervention group changed from 5.35 ± 1.77 to 15.39 ± 3.2, while it changed from 4.98 ± 2.17 to 14.43 ± 3.93 in the control group. There was a significant difference between the mean pre-test and post-test scores in each group (P < 0.05). Improved
11 Karabacak et al., 2019, Turkey,35 Quasi, Single pre-post A 12 h theory and laboratory-based training using SP on selected fundamental of nursing skills. SP, 12 h, 65 Fundamental of nursing issues Self-assessment: Proficiency self-assessment Form for proper communication with the patient, establishing a safe patient unit, safe patient transfer and act on body mechanics. No significant difference has been found between pre-scenario (7.05 + 9.17) and post-scenario (5.89 + 2.02) scores about self-assessment of safe patient transfer (t = 1.01; P = 0.32). No change
12 Keleekai et al., 2016, USA,36 RCT, equivalent control group Virtual based 3 h training to improve/decrease IV reinsertion VS, 3 h, 58 Peripheral IV securing Direct observation of virtual guided skill performance using Check list: Number of success and reinsertion of IV after demonstrating over IV arm model. Participants evaluated over 28-point check lists. The intervention was effective and resulted in several statistically significant improvements in knowledge, confidence, and skills both within and between study groups over time. Improved
13 Lee et al., 2019, Taiwan, China,37 Quasi, equivalent control group Integrating simulation-based teaching over advanced acute care adult scenario on shock, resuscitations for 90 min. HFS, 90!52 Shock and resuscitations Direct observation at clinical sites using Check list: Evaluated based on predesigned check list for clinical evaluation at actual practical setting. No significant difference in clinical performance was observed among groups. No change
14 Liaw et al., 2015 Singapore,38 RCT, equivalent control group An interactive web-based program with 3 h of training on patient identification, early recognition, vital sign monitoring, and management. VS, 3 h, 67 Deteriorating patients Direct observations using checklist: The simulation performance tool was adapted and modified from the original RAPIDS tool and used to assess specific and global rating scales. Two independent raters evaluated recorded videos of performance. There was a significant change in assessing and managing clinical deterioration in the experimental group: pre-test 18.17 (3.55), post-test 25.83 (4.79); and in reporting clinical deterioration: pre-test 10.09 (2.31), post-test 12.83 (2.41). Improved
15 Lubbers et al., 2016, USA,39 Quasi, Single pre-post 1 h simulation with pre- and post-simulation discussion. HFS, 3 h and 30!58 Not mentioned Self-assessment of knowledge, confidence, and performance. The skill score revealed a significant increase from pre-test 2.25 to post-test 4.13 (t = 21.21, P < 0.001). Improved
16 Meyer et al., 2011, USA,23 Quasi, equivalent control group Replacing 2 weeks (25%) of clinical work or rotation with simulation-based teaching in skill lab. VS, 24 h, 120 Various Direct observation using rating scale Clinical faculty assessment of student performance in clinical work and compared with control group who spent 100% in clinical rotations. Faculty rated students with patient simulation experience higher than those who had not yet attended simulation mean 1.74 (0.75), P = 0.02). Improved
17 Morton et al., 2019, USA,26 Quasi, Single pre-post Training using HFS portraying a patient with cardiac arrest. HFS, Not mentioned, 37 CPR Direct observation using Check list: Mock Code Evaluation Tool basically developed based on AHA (2015) guideline for basic life supports. There is no statistically significant difference in performance obtained following simulation-based training. No change
18 Sarmasogle et al. 2016, Turkey,40 Quasi, equivalent control group SP-based training of Arterial blood pressure and Subcutaneous injection, feedback, and discussion with SP. SP, 4 h, 77 Hypertension and acute pain Direct observation using Check list: Performance assessment using check list for arterial blood pressure measurements and subcutaneous injection by two raters. The mean performance score for the measurement of arterial blood pressure was 76 ± 7.6 for the control group and 83 ± 3.1 for the experimental group (P < 0.001). However, no significant difference was found between the groups’ performance scores on subcutaneous injection administration. Improved
19 Stayt LC, et al., 2015, UK,41 RCT 2 h clinical skill teaching; systematic ABCDE assessment and management process on medium fidelity patient simulator (ALS Simulator, made by Laerdal Medical) using a clinical scenario of an acutely unwell patient who is exhibiting signs of clinical deterioration. SP, 2 h, 98 Deteriorating patient OSCE using check list. The OSCE comprised of a check list of 24 objective performance criteria that evaluated participants’ performance of assessing and managing a deteriorating patient using a patient simulator. The results indicate that students who received simulation training performed a systematic ABCDE assessment and managed the deteriorating patient more effectively than those who received a didactic teaching approach. Improved
20 Sumner et al., 2012, USA,42 Quasi, Single pre-post Participants received the intervention by attending a 4-hour basic arrhythmia program on the second day of nursing orientation. MFS, 4 h, 138 Arrhythmia cases Self-assessment: post simulation self-report of caring and resource utilization in caring of patient with arrythmias patients. Following simulation there was transfer of knowledge to clinical practice. Improved
21 Toubasi S et al., 2015, Jordan,21 Quasi, Single pre-post Step by step simulation and debriefing of cardiac arrest scenario using AHA guidelines. MFS, 8 h, 30 Cardiac arrest Direct observation using Check list: Validated skill scenario testing tool which was developed by the AHA to assess performance according to the AHA 2010 guidelines. There is a significant mean difference of 2.9 in overall skill performance and BLS score after simulation (t = 7.4, df = 29, P < 0.01). Improved
22 Unver et al., 2013, Turkey,43 Quasi, Single pre-post 4 h training using SP SP, 4 h, 85 Medical administration OSCE: OCEF were used. There was a significant difference (30.26) in pre-test (24.02 ± 16.06) to post-test (54.28 ± 14.54) skill performance measurements (P < 0.01; t = 14.35). Improved
23 Vidal VL et al, 2013, Turkey,44 Quasi, equivalent control group Computer-based training with demonstration, return demonstration, and verbal feedback regarding the performance of phlebotomy. VS, 3 h, 73 Phlebotomy Direct observations using checklist: the skill checklist used by the mentors consisted of 21 items addressing the necessary steps for the completion of a phlebotomy procedure and 3 items related to overall performance. There was a significant difference among the groups in mean skill performance score for the pain factor (P = 0.006), hematoma formation (P = 0.000), and number of reinsertions (P = 0.000). Improved
24 Woda et al., 2019, USA,22 Quasi, Single pre-post A 20 min training using HFS and debriefing about the care of a patient with type 1 DM. HFS, 20!233 Type 1 DM Direct observation using checklist: Performance was evaluated using a 10-item evaluation rubric by research assistants on major areas of DM care. Simulation did have a significant positive effect on performance change scores (P < 0.001; r = 0.28). The mean pre-test score on performance items was 0.73 (SD = 0.14), and the mean post-test score on performance items was 0.76 (SD = 0.12) Improved

Note: !: Minute; ABCDE, airway, breathing, circulation, disability, exposure; AHA, American Health Association; CI, confidence interval; CPR, cardiopulmonary resuscitation; df, degree of freedom; DM, diabetes mellitus; HF, high fidelity; HFS, high fidelity simulator; LFS, low fidelity simulator; MFS, medium fidelity simulator; OCEF, objectively constructed evaluation form; OSCE, objective structured clinical examination; RCT, randomized controlled clinical trial; RAPIDS, rescuing A patient in deteriorating situations; SD, standard deviation; SP, standard patient; t, t-distribution statistics; VS, virtual simulator.

The control groups mostly received the conventional or lecture method of teaching, or no intervention, as the comparator. The dominant scenarios used by the individual researchers were acute cases, mainly cardiopulmonary cases (41.6%). The next most common cases were drug dose calculation (8.3%), proper drug administration (8.3%), and securing a peripheral intravenous line catheter and phlebotomy (8.3%) (Table 1).

To measure the effectiveness of the intervention, 12 studies (50%) used direct observation of skill performance with a checklist, 6 (25%) reported the use of OSCE, 4 (16.6%) used self-assessment of skill performance improvement, and 1 (4.2%) reported a rating of documents. In 3 studies, the skill performance evaluation was assisted by VSs. Of these, virtual computer-guided performance was used in 1 (4.2%), 4 (16.7%) used self-assessment, and another 1 (4.2%) used direct, actual patient-based performance evaluations (Table 1).

Types of studies

The majority (n = 20; 83.3%) of the included studies were quasi-experimental. The rest (n = 4; 16.7%)27, 33, 34, 41 were RCTs (Table 1).

Type of scenario

Different types of scenarios were used for the simulation activity across the studies. Almost half of the scenarios were acute cases, such as CPR, resuscitation, arrhythmia, deteriorating patient, pre-post case, and shock. The remaining scenarios were non-acute or cold cases, such as medication administration, phlebotomy, diabetes mellitus (DM), and communication skills.

Quality of individual studies

The risk of bias in the included studies ranged from unclear to high due to issues in the 6 areas of risk of bias assessment for RCTs: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, and selective reporting. Of 7 studies, 5 scored a moderate risk of bias, while the rest scored a high risk of bias. Using the ROBINS-I tool for non-RCTs, we found that 6 studies scored no risk of bias, 7 a low risk of bias, 3 a moderate risk of bias, and 1 a serious risk of bias. Moreover, of the 24 included studies, only 4 (16.7%) were categorized as high-quality research, 2 (8.3%) as low-quality research, and the remaining 18 (75%) ranked as medium-quality studies. In most of the studies, quality issues were related to the lack of a control group, unclear outcome measurements, and failure to clearly state what treatment was given to the study groups.

Meta-analysis
Result of individual studies

Even though individual studies reported additional outcomes as primary and/or secondary objectives, this review considers only the outcome related to skill performance. Of the total of 24 studies, 20 reported positive effects of simulation-based teaching, while the rest reported a lack of evidence to support a positive effect.

Simulation-based teaching improved skill performance in the experimental groups, with an overall random effect size of d = 1.01, 95% CI [0.69–1.33], Z = 6.18, P < 0.01. From this, it is understood that >79% of control-group skill performance falls below the experimental-group mean. However, this finding is uncertain because significant heterogeneity (I2 = 93.9%) was observed during the analysis.
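The ">79%" reading follows from interpreting d through the standard normal distribution: under the usual normality and equal-variance assumptions, the share of the control group falling below the experimental-group mean (Cohen's U3) is Φ(d). A minimal sketch (the u3 helper is our own, for illustration):

```python
import math

def u3(d: float) -> float:
    """Cohen's U3: fraction of the control distribution below the
    experimental-group mean, assuming normality and equal SDs."""
    # Standard normal CDF expressed via the error function
    return 0.5 * (1 + math.erf(d / math.sqrt(2)))

print(f"{u3(1.01):.1%}")  # prints 84.4%, consistent with "more than 79%"
```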

The random effect sizes (d) of the individual studies were dispersed across small (d ≤ 0.2, n = 5, 20.8%), medium (d = 0.2–0.5, n = 4, 16.7%), and large (d ≥ 0.8, n = 15, 62.5%). Moreover, the effect sizes of 5 studies26,33,34,35,37 were statistically insignificant in the analysis (Figure 2; forest plot). This meta-analysis result is consistent with the original reports of the individual articles on the effect of simulation on skill performance.

Figure 2

Forest plot showing the effect size of individual studies.

Note: CI, confidence interval.

Initially, 4 individual studies26, 33, 35, 37 had already reported that simulation produced no statistically significant change in participants' skill performance. The meta-analysis confirmed this by reporting a statistically insignificant effect size for those studies (Figure 3; forest plot).

Figure 3

Forest plot showing sensitivity analysis by one study remove method.

Subgroup analysis

Because of the overall significant heterogeneity (I2 = 93.9%), subgroup analyses with moderator variables were done for type of study design, type of participant, study region, and simulation fidelity. The heterogeneity level remained high despite variation in effect size across the moderator-variable analyses.

Effect of simulator type

Five types of simulation were considered for this analysis. Except for the medium fidelity simulator (MFS), all simulation types scored a large effect size favoring the skill performance score of the experimental group. However, only the low fidelity simulator (LFS) obtained a large and statistically significant effect size with an acceptable level of heterogeneity, d = 1.27 (CI [0.24, 2.29], P = 0.02, I2 = 0%). This group comprised only 3 studies. We are confident that using LFS improved the skill performance of the experimental group (Table 2).

Summary of effect size for subgroup analysis.

Comparison and Groups Numbers of studies Effect size (d) SMD, CI, P value I2, % Z value
All studies Groups 24 1.01 (CI [0.62, 1.41], P < 0.01) 93.9 5.13
  Single group 10 1.02 (CI [0.52, 1.50], P < 0.01) 95 4.46
  Double groups 14 1.00 (CI [0.56, 1.44], P < 0.01) 92.9 4.48
Simulator types
  HF 7 1.23 (CI [0.55, 1.93], P < 0.01) 94.8 3.5
  Medium fidelity 3 0.89 (CI [−0.14, 1.93], P = 0.09) 86.5 1.69
  LF 3 1.27 (CI [0.24, 2.29], P = 0.02) 0 2.4
  SP 5 1.03 (CI [0.23, 1.84], P = 0.01) 96 2.5
  VSs 6 0.69 (CI [−0.04, 1.4], P = 0.06) 95.4 1.85
Types of participants
  Clinical staffs 3 1.08 (CI [0.43, 1.74], P < 0.01) 85.8 3.25
  Nursing students 8 0.98 (CI [0.61, 1.37], P < 0.01) 95 5.11
Regions (country)
  America 8 1.22 (CI [0.62, 1.82], P < 0.01) 94.6 4.02
  Europe 10 0.76 (CI [0.24, 1.29], P = 0.004) 95.3 2.85
  Middle East 6 1.17 (CI [0.48, 1.86], P = 0.001) 88.74 3.34
Design
  Quasi 17 0.96 (CI [0.57, 1.34], P < 0.01) 94.78 4.86
  RCT 7 1.14 (CI [0.54, 1.75], P < 0.01) 91.1 3.7
Types of scenarios
  Acute 12 1.07 (CI [0.73, 1.41], P < 0.01) 88.1 6.18
  Cold 12 0.92 (CI [0.35, 1.49], P < 0.02) 95.16 3.16

Note: CI, confidence interval; HF, high fidelity; LF, low fidelity; RCT, randomized controlled clinical trial; SP, standard patient; VSs, virtual simulators.

Types of group

Subgroup analysis also tested whether individual studies used a single-group pre-post or a double-group pre-post design. Single-group studies scored a large effect size, d = 1.02 (CI [0.52, 1.50], P < 0.01), and double-group studies scored a nearly identical effect size, d = 1.00 (CI [0.56, 1.44], P < 0.01). In both cases, significant heterogeneity was observed. Thus the effect size does not appear to depend on whether a single or double group was used for the experiments (Table 2).

Type of study participants

Only 3 studies involved clinical nursing staff as participants. The effect size for clinical nursing staff was d = 1.08 (CI [0.43, 1.74], P < 0.01, I2 = 85.8%). An almost identical effect size was observed for nursing students, d = 0.98 (CI [0.61, 1.37], P < 0.01, I2 = 95%). Here too, the significant heterogeneity observed during analysis limits our confidence in the pooled result, but the effect sizes were visibly similar and statistically significant (Table 2).

Study design

Whether an RCT or a quasi-experimental design was used made no difference to the effect of simulation on skill performance: in both cases the skill performance score increased among experimental-group participants. The effect size for the 7 RCTs was d = 1.14 (CI [0.54, 1.75], P < 0.01) and for the remaining quasi-experimental studies 0.96 (CI [0.57, 1.34], P < 0.01). In both cases, considerable heterogeneity precludes drawing a firm conclusion or recommending the result (Table 2).

Types of scenario

Another comparison ascertained whether nursing skill performance differed by the category of scenario used. Scenarios were categorized as acute and cold cases. The effect sizes for the two scenario groups were similar, and considerable heterogeneity was observed in both. Thus, in the current study, the type of scenario used for simulation had no effect on nursing skill performance (Table 2).

Sensitivity analysis

The pooled effect size was tested for possible change by the one-study-removed method. Removing individual studies one by one produced no large change in the overall effect size.

The maximum effect size (d = 1.11) was observed when Stayt et al.41 was removed from the analysis, and the minimum (d = 0.97) when Jaberi and Momennasab34 was removed. The overall variation was d = 0.13. Thus, the removal of any single study has no significant effect on the overall effect size (Figure 3).
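The one-study-removed procedure simply re-pools the remaining studies after dropping each study in turn. A simplified sketch with hypothetical effect sizes and variances, using fixed-effect inverse-variance pooling for brevity (the review itself used a random-effects model):

```python
# Hypothetical per-study effect sizes (d) and variances, for illustration only.
effects   = [1.2, 0.9, 1.1, 0.4, 1.5]
variances = [0.04, 0.09, 0.05, 0.08, 0.06]

def pool(ds, vs):
    """Fixed-effect inverse-variance weighted mean of effect sizes."""
    weights = [1.0 / v for v in vs]
    return sum(w * d for w, d in zip(weights, ds)) / sum(weights)

overall = pool(effects, variances)
for i in range(len(effects)):
    d_rest = effects[:i] + effects[i + 1:]
    v_rest = variances[:i] + variances[i + 1:]
    print(f"without study {i + 1}: d = {pool(d_rest, v_rest):.2f}")
```

A stable spread of leave-one-out estimates around the overall value, as reported above, indicates that no single study drives the pooled result.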

Risk of bias

The risk of publication bias was tested using 4 common methods. Except for Egger's regression (intercept = 2.61, P = 0.08), all methods, namely Trim and Fill (adjusted d = 0.62 [0.28, 0.96]), classic Fail-safe N, and the Begg and Mazumdar rank correlation (b = 0.35, P = 0.01), confirmed the presence of publication bias under the random-effects model. The point estimate and 95% CI for the combined studies is 1.01 (0.69, 1.33); using Trim and Fill, the imputed point estimate is 0.62 (0.28, 0.95) (Figure 4).
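Egger's regression tests funnel-plot asymmetry by regressing each study's standardized effect (d/SE) on its precision (1/SE); an intercept far from zero signals small-study effects. A sketch on hypothetical data (not the review's actual studies):

```python
# Hypothetical effect sizes and standard errors, for illustration only.
effects = [1.2, 0.9, 1.1, 0.4, 1.5, 0.7]
ses     = [0.20, 0.30, 0.22, 0.28, 0.25, 0.35]

y = [d / se for d, se in zip(effects, ses)]  # standardized effects
x = [1.0 / se for se in ses]                 # precisions

# Ordinary least squares for slope and intercept.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx
print(f"Egger intercept = {intercept:.2f}")  # |intercept| far from 0 suggests asymmetry
```

In practice the intercept is tested against zero with a t-test; the review's intercept of 2.61 (P = 0.08) fell short of significance, unlike the other three methods.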

Figure 4

Funnel plot showing publication bias among included studies.

Discussion

This review and meta-analysis were intended to present the results of the review and to produce a pooled estimate of the effect of simulation-based teaching on nursing skill performance. Most of the studies were from developed and middle-income countries, and the original studies varied in context, such as the type of scenario used, the number of study participants, the duration of the simulation, and the tools used to measure outcomes. The pooled estimate of the included studies supported the positive effect of simulation-based teaching in improving nursing skill performance. Since significant heterogeneity was observed during analysis, readers should use the pooled result with caution. Agreement among the individual studies on simulation was not complete: some studies26, 33,34,35, 37 still report that simulation-based teaching did not improve nursing skill performance. This leaves researchers with the task of explaining why, and users with the need to continuously assess their success after implementing simulation.

Simulation-based teaching helps learners approximate the complexity of health service delivery and allows repeated practice.10 Moreover, participation in simulation decreases mistakes in actual practice and increases flexibility during practice.45

In the current review, regardless of simulation type, simulation showed a large effect size on skill performance favoring its users, which is consistent with systematic reviews done by others.9, 46,47,48

In contrast with the overall effect size, some individual studies reported results showing a lack of evidence to prefer simulation over traditional teaching methods.26, 33,34,35, 37 This indicates a need for further evidence and a search for the factors that determine the success or failure of this teaching strategy. Another factor may be information contamination between control and experimental groups: a significant number of the studies were not strict about blinding participants and performance evaluators.

This review and meta-analysis found significant heterogeneity in both the overall and moderator analyses. Even though the effect sizes were statistically significant, the large heterogeneity limits our confidence in recommending them. This heterogeneity may stem from combining studies with different scenarios, designs, and assessment tools. Further work is therefore expected from nurse researchers to establish the effect confidently in a well-organized and standardized manner.

A larger proportion of the studies was drawn from developed and middle-income countries, a distribution also reported consistently in various reviews and meta-analyses. This might be associated with a lack of financial support, simulation facilities, and motivation on the part of researchers elsewhere to handle experimental studies, which require strict procedures.

We may think that high fidelity simulators (HFS) are better than LFS,49 but the current review shows the opposite: the estimated effect size was higher for LF, with an acceptable range of heterogeneity. Even in medicine, students prefer LF, focused, and shorter-duration simulation.50 Massoth et al. reported in 2019 that LFS helped improve skill performance compared to HFS, and HF was criticized for giving students overconfidence.7 Another RCT reported that HF had no effect on students’ retention of neonatal resuscitation skills.51

The students’ preference for, and the larger effect size of, LFS-based teaching may be associated with the amount of time spent in simulation and students’ mental adjustment to the simulation environment. Students tend to spend more time in LF simulation. Moreover, the lower level of anxiety during teaching with LF may also favor learning. Another explanation may be the distracting nature of HFS, which draws attention away from basic concept learning by increasing extraneous cognitive load; this has also been given as the reason for impaired learning in HF simulation rooms.52

In contrast with the current study, many reviews of original studies showed a greater advantage of HFS over LFS in neonatal resuscitation,9 identification and management of deteriorating patients,46 and performance of basic life support.53 As a counterpoint, differing fidelity levels have not shown a significant difference in student skill performance across all types of simulation; this suggests not depending on the level of fidelity alone and indicates that a mixed approach may be more advisable.10 It also supports the conclusion that focused training, student handling, and duration of simulation matter more than the type of fidelity used. Thus, upcoming research needs to identify and address the factors that determine success in using simulators beyond changing fidelity.

The use of standardized patients is preferred for noninvasive procedures and skills, such as physical examination, history taking, communication exercises, and building confidence in clinical skill management. This review also identified that using standardized patients as simulators improves participants’ skill performance with large effect sizes. Similar results were reported in different reviews.10 Oh et al. (2015) showed that the use of standardized patients improves communication skills with a large effect size.54

Conclusions

Assisting teaching with simulation did improve nursing skill performance. Again, the use of simulation-based teaching showed a positive effect both for student and clinical nursing staff training. The level of fidelity showed little difference and even LFS produced a greater effect size than others. Along with investing in equipment and teaching aid, equal attention should be given to faculty development to improve the style of teaching, student handling, and facilitation of teaching sessions. Since most studies were done in simulated environments, their application and significance for actual patient care need to be proved with further research.

Strengths and limitations

Analyzing a single outcome of simulation-based teaching yields a focused result and implication. Moreover, concentrating on the most important aspect of nursing education (skill) helps inform the most important aspect of nursing.

The confidence in generalizability and overall recommendation is limited by significant heterogeneity in the pooled analysis. Variety and difference in the type of scenario and outcome measuring tool were the major challenges of these combined studies.

The scope of the literature search was narrow due to subscription challenges, which might reduce its depth. Bias may also have been introduced during searching, screening, and selecting literature, which directly affects the pool of literature for the final analysis. The number and quality of included and excluded studies depended on the critical appraisal ability of the researchers. In addition, this review was not narrowly focused: it considered every study that assessed skill performance, regardless of scenario and research context, which contributed to the significant heterogeneity. The true effects of simulation-based teaching may also be obscured by the restriction to freely available literature.

Figure 1

Flow diagram showing the process of study identification and selection.


Characteristics of included studies.

Study Interventions Study type, duration, sample size Scenario Outcome measures Result Effects
1 Aqel & Ahmed 2014, Jordan,27 RCT Training of participants over a simulated case with cardiac arrest scenario and debriefing discussion. HFS, 25 min, 90 CPR Direct observation using checklist: mock codes were conducted over a manikin on the floor and evaluated using the AHA checklist. The results revealed a significant difference in post-test CPR knowledge as well as CPR skills in favor of participants in the intervention group. Improved
2 Basak et al., 2016, Turkey,28, 29 Quasi, Single pre-post 45 min paper-based drug dose calculation simulation and debriefing session for discussion. LFS, 45 min, 82 Actual physician prescription Rating: drug dose calculation was evaluated out of 100 points immediately after training and 1 month later. The difference between the mean pre-test score and the mean post-test score was statistically significant (t = 8.767, df = 89, P = 0.001). Improved
3 Basak et al., 2019, Turkey,30 RCT, equivalent control group 20 min simulation with 40 min debriefing and 10 min self-evaluation, altogether 80 min of discussion about teaching skills over SPs. SP, 80 min, 71 Inhaler drug administration Direct observation using checklist: teaching skill measured by a checklist of 15 procedural steps developed and tested by the principal investigators. The total patient teaching skill score was 26.73 ± 5.63 for the control group and 39.08 ± 5.49 for the SP group, a statistically significant difference (P ≤ 0.01). Improved
4 Bogossian et al. 2015, Australia,20 Quasi, Single pre-post Interactive e-simulation clinical scenario with video-recorded patient conditions, pop-up tasks, and respective responses. VS, 24 min, 367 Cardiac, shock, and respiratory Virtual skill performance A paired t-test showed a significant improvement in performance between the first and last scenarios (t = −8.037, df = 366, CI 2.05–1.24; P = 0.00). Improved
5 Bowling et al., 2015, USA,31 Quasi, equivalent control group 50 min respiratory distress simulated case training; participants were required to react to the simulated case. MFS, 50 min, 73 Respiratory distress OSCE with six stations lasting 7 min and rater-based evaluations There was a significant difference for both groups in knowledge and skill performance (measured with a mini OSCE), but not between the groups. Improved
6 Boyde M et al., 2018, Australia,24 Quasi, Single pre-post Innovative teaching of emergency management of patients using HF simulation with Jefferies simulation principles. HFS, Not mentioned, 50 Emergency patient Self-assessment: the self-efficacy in clinical performance scale was used to measure participants’ assessment and handover practice. The mean change in handover skill from 7.88 ± 1.76 to 8.79 ± 1.22 was statistically significant with t (41) = 3.41, P < 0.01. Improved
7 Chen et al., 2015, Canada,32 Quasi, equivalent control group Auscultation skills training using low and high fidelity simulators. HFS, 40 min, 54 Pneumothorax and a systolic murmur: auscultation skills OSCE using checklist: participants were required to correctly identify 20 different sounds on simulators. There was no evidence that the HFS group performed better than the LFS group in clinical skills or in auscultation sound recognition on HFS. No change
8 Durmaz et al., 2012, Turkey,33 RCT Intervention: participants received 4 h of computer-based education simulation about pre-operative and post-operative patient management. VS, 4 h, 82 Pre-post case OSCE for pre- and post-operative management and deep breathing and coughing exercise. There was not a significant difference between the students’ post-education practical deep breathing and coughing exercise skills (P = 0.867). No change
9 Ismailoglu et al., 2018, Turkey,25 Quasi, equivalent control group IV training over a virtual IV simulator. VS, Not clear, 62 Encoded case Direct observation using checklist: intravenous catheterization skill performance evaluation. The mean psychomotor skills score of the experimental group, 45.18 (33.73 ± 4.22), was higher than that of the control group, 20.44 (26.53 ± 4.45), with Z = 5.294, P = 0.000. Improved
10 Jaberi et al., 2019, Iran,34 RCT Abdominal examination skill was tested after teaching students using an SP for 45 min. SP, 45 min, 87 Physical examination of abdomen OSCE using checklist: six OSCE stations were used, with one rater assigned per station to evaluate performance over SPs. The mean score in the intervention group changed from 5.35 ± 1.77 to 15.39 ± 3.2, while it changed from 4.98 ± 2.17 to 14.43 ± 3.93 in the control group. There was a significant difference between the mean pre-test and post-test scores in each group (P < 0.05). Improved
11 Karabacak et al., 2019, Turkey,35 Quasi, Single pre-post A 12 h theory and laboratory-based training using SP on selected fundamentals of nursing skills. SP, 12 h, 65 Fundamentals of nursing Self-assessment: proficiency self-assessment form for proper communication with the patient, establishing a safe patient unit, safe patient transfer, and acting on body mechanics. No significant difference was found between pre-scenario (7.05 ± 9.17) and post-scenario (5.89 ± 2.02) scores for self-assessment of safe patient transfer (t = 1.01; P = 0.32). No change
12 Keleekai et al., 2016, USA,36 RCT, equivalent control group Virtual-based 3 h training to improve IV insertion and decrease reinsertion. VS, 3 h, 58 Peripheral IV securing Direct observation of virtually guided skill performance using checklist: number of successes and reinsertions of IV after demonstration over an IV arm model; participants evaluated on a 28-point checklist. The intervention was effective and resulted in several statistically significant improvements in knowledge, confidence, and skills both within and between study groups over time. Improved
13 Lee et al., 2019, Taiwan, China,37 Quasi, equivalent control group Integrating simulation-based teaching over an advanced acute care adult scenario on shock and resuscitation for 90 min. HFS, 90 min, 52 Shock and resuscitation Direct observation at clinical sites using checklist: evaluated with a predesigned checklist for clinical evaluation in an actual practice setting. No significant difference in clinical performance was observed among groups. No change
14 Liaw et al., 2015 Singapore,38 RCT, equivalent control group Interactive web-based 3 h training on patient identification, early recognition, vital sign monitoring, and management. VS, 3 h, 67 Deteriorating patients Direct observation using checklist: the simulation performance tool was adapted and modified from the original RAPIDS tool and used to assess specific and global rating scales. Two independent raters evaluated recorded video of performance. There was a significant change in assessing and managing clinical deterioration in the experimental group (pre-test 18.17 (3.55), post-test 25.83 (4.79)) and in reporting clinical deterioration (pre-test 10.09 (2.31), post-test 12.83 (2.41)). Improved
15 Lubbers et al., 2016, USA,39 Quasi, Single pre-post 1 h simulation with pre- and post-simulation discussion. HFS, 3 h 30 min, 58 Not mentioned Self-assessment of knowledge, confidence, and performance. The skill score revealed significant increases from pre-test 2.25 to post-test 4.13 (t = 21.21, P < 0.001). Improved
16 Meyer et al., 2011, USA,23 Quasi, equivalent control group Replacing 2 weeks (25%) of clinical work or rotation with simulation-based teaching in a skill lab. VS, 24 h, 120 Various Direct observation using rating scale: clinical faculty assessment of student performance in clinical work, compared with a control group who spent 100% of the time in clinical rotations. Faculty rated students with patient simulation experience higher than those who had not yet attended simulation (mean 1.74 (0.75), P = 0.02). Improved
17 Morton et al., 2019, USA,26 Quasi, Single pre-post Training using HFS portraying a patient with cardiac arrest. HFS, Not mentioned, 37 CPR Direct observation using Check list: Mock Code Evaluation Tool basically developed based on AHA (2015) guideline for basic life supports. There is no statistically significant difference in performance obtained following simulation-based training. No change
18 Sarmasogle et al. 2016, Turkey,40 Quasi, equivalent control group SP-based training of Arterial blood pressure and Subcutaneous injection, feedback, and discussion with SP. SP, 4 h, 77 Hypertension and acute pain Direct observation using Check list: Performance assessment using check list for arterial blood pressure measurements and subcutaneous injection by two raters. The mean performance score for the measurement of arterial blood pressure was 76 ± 7.6 for the control group and 83 ± 3.1 for the experimental group (P < 0.001). However, no significant difference was found between the groups’ performance scores on subcutaneous injection administration. Improved
19 Stayt LC, et al., 2015, UK,41 RCT 2 h clinical skill teaching; systematic ABCDE assessment and management process on a medium fidelity patient simulator (ALS Simulator, made by Laerdal Medical) using a clinical scenario of an acutely unwell patient exhibiting signs of clinical deterioration. SP, 2 h, 98 Deteriorating patient OSCE using checklist: the OSCE comprised a checklist of 24 objective performance criteria that evaluated participants’ performance in assessing and managing a deteriorating patient using a patient simulator. The results indicate that students who received simulation training performed a systematic ABCDE assessment and managed the deteriorating patient more effectively than those who received a didactic teaching approach. Improved
20 Sumner et al., 2012, USA,42 Quasi, Single pre-post Participants received the intervention by attending a 4 h basic arrhythmia program on the second day of nursing orientation. MFS, 4 h, 138 Arrhythmia cases Self-assessment: post-simulation self-report of caring and resource utilization in the care of patients with arrhythmias. Following simulation there was transfer of knowledge to clinical practice. Improved
21 Toubasi S et al., 2015, Jordan,21 Quasi, Single pre-post Step by step simulation and debriefing of cardiac arrest scenario using AHA guidelines. MFS, 8 h, 30 Cardiac arrest Direct observation using Check list: Validated skill scenario testing tool which was developed by the AHA to assess performance according to the AHA 2010 guidelines. There is a significant mean difference of 2.9 in overall skill performance and BLS score after simulation (t = 7.4, df = 29, P < 0.01). Improved
22 Unver et al., 2013, Turkey,43 Quasi, Single pre-post 4 h training using SP SP, 4 h, 85 Medication administration OSCE: OCEF were used. There was a significant difference (30.26) from pre-test (24.02 ± 16.06) to post-test (54.28 ± 14.54) skill performance measurements (P < 0.01; t = 14.35). Improved
23 Vidal VL et al, 2013, Turkey,44 Quasi, equivalent control group Computer-based training with demonstration, return demonstration, and verbal feedback regarding performance of phlebotomy. VS, 3 h, 73 Phlebotomy Direct observation using checklist: the skill checklist used by the mentors consisted of 21 items addressing the necessary steps for completing a phlebotomy procedure and 3 items related to overall performance. There was a significant difference among the groups in mean skill performance score for pain factor (P = 0.006), hematoma formation (P = 0.000), and number of reinsertions (P = 0.000). Improved
24 Woda et al., 2019, USA,22 Quasi, Single pre-post A 20 min training using HFS and debriefing about care of a patient with type 1 DM. HFS, 20 min, 233 Type 1 DM Direct observation using checklist: performance evaluated with a 10-item evaluation rubric by research assistants on major areas of DM care. Simulation did have a significant positive effect on performance change scores (P < 0.001; r = 0.28). The mean pre-test score on performance items was 0.73 (SD = 0.14), and the mean post-test score was 0.76 (SD = 0.12). Improved


Moran V, Wunderlich R, Rubbelke C. Simulation in Nursing Education. Simulation: Best Practices in Nursing Education: Springer, Cham; 2018. MoranV WunderlichR RubbelkeC Simulation in Nursing Education. Simulation: Best Practices in Nursing Education Springer Cham 2018 Search in Google Scholar

Dearmon V, Graves RJ, Hayden S, et al. Effectiveness of simulation-based orientation of baccalaureate nursing students preparing for their first clinical experience. J Nurs Educ. 2013; 52:29–38. DearmonV GravesRJ HaydenS Effectiveness of simulation-based orientation of baccalaureate nursing students preparing for their first clinical experience J Nurs Educ 2013 52 29 38 Search in Google Scholar

Bland AJ, Topping A, Wood B. A concept analysis of simulation as a learning strategy in the education of undergraduate nursing students. Nurse Education Today. 2011;31:664–670. BlandAJ ToppingA WoodB A concept analysis of simulation as a learning strategy in the education of undergraduate nursing students Nurse Education Today 2011 31 664 670 Search in Google Scholar

Hicks FD, Coke L, Li S. The Effect of High-Fidelity Simulation on Nursing Students’ Knowledge and Performance: A Pilot Study. National Council of State Boards of Nursing, Inc (NCSBN); 2009. HicksFD CokeL LiS The Effect of High-Fidelity Simulation on Nursing Students’ Knowledge and Performance: A Pilot Study National Council of State Boards of Nursing, Inc (NCSBN) 2009 Search in Google Scholar

Najjar RH, Lyman B, Miehl N. Nursing students’ experiences with high-fidelity simulation. Int J Nurs Educ Scholarsh. 2015;12:27–35. NajjarRH LymanB MiehlN Nursing students’ experiences with high-fidelity simulation Int J Nurs Educ Scholarsh 2015 12 27 35 Search in Google Scholar

Cant RP, Cooper SJ. Use of simulation-based learning in undergraduate nurse education: an umbrella systematic review. Nurse Educ Today. 2017;49:63–71. CantRP CooperSJ Use of simulation-based learning in undergraduate nurse education: an umbrella systematic review Nurse Educ Today 2017 49 63 71 Search in Google Scholar

Massoth C, Röder H, Ohlenburg H, et al. High-fidelity is not superior to low-fidelity simulation but leads to overconfidence in medical students. BMC Med Educ. 2019;19:29. MassothC RöderH OhlenburgH High-fidelity is not superior to low-fidelity simulation but leads to overconfidence in medical students BMC Med Educ 2019 19 29 Search in Google Scholar

Arnold JJ, Tucker SJ, Johnson LM, Chesak SS, Dierkhising RA. Comparison of three simulation-based teaching methodologies for emergency response. Clin Simul Nurs. 2013;9:e85–e93. ArnoldJJ TuckerSJ JohnsonLM ChesakSS DierkhisingRA Comparison of three simulation-based teaching methodologies for emergency response Clin Simul Nurs 2013 9 e85 e93 Search in Google Scholar

Huang J, Tang Y, Tang J, et al. Educational efficacy of high-fidelity simulation in neonatal resuscitation training: a systematic review and metaanalysis. BMC Med Educ. 2019;19:323. HuangJ TangY TangJ Educational efficacy of high-fidelity simulation in neonatal resuscitation training: a systematic review and metaanalysis BMC Med Educ 2019 19 323 Search in Google Scholar

Kim J, Park J-H, Shin S. Effectiveness of simulation-based nursing education depending on fidelity: a meta-analysis. BMC Med Educ. 2016; 16:176–182. KimJ ParkJ-H ShinS Effectiveness of simulation-based nursing education depending on fidelity: a meta-analysis BMC Med Educ 2016 16 176 182 Search in Google Scholar

Shin S, Park J-H, Kim J-H. Effectiveness of patient simulation in nursing education: meta-analysis. Nurse Educ Today. 2015;35:176–182. ShinS ParkJ-H KimJ-H Effectiveness of patient simulation in nursing education: meta-analysis Nurse Educ Today 2015 35 176 182 Search in Google Scholar

Higgins JP, Savović J, Page MJ, et al. Assessing risk of bias in a randomized trial. 2019:205–228. HigginsJP SavovićJ PageMJ Assessing risk of bias in a randomized trial 2019 205 228 Search in Google Scholar

Sterne JAC, Hernán MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomized studies of interventions. Br Med J. 2016;7:355. SterneJAC HernánMA ReevesBC ROBINS-I: a tool for assessing risk of bias in non-randomized studies of interventions Br Med J 2016 7 355 Search in Google Scholar

JBI TJBI. Checklist for Quasi-Experimental Studies (non-randomized experimental studies). Critical Appraisal Checklist for Quasi-Experimental Studies; 2017:7. JBI TJBI Checklist for Quasi-Experimental Studies (non-randomized experimental studies) Critical Appraisal Checklist for Quasi-Experimental Studies 2017 7 Search in Google Scholar

Tufanaru C, Munn Z, Aromataris E, et al. Chapter 3: systematic reviews of effectiveness. In: Aromataris E, Munn Z, eds. Joanna Briggs Institute Reviewer's Manual. The Joanna Briggs Institute; 2017.

Bruce N, Pope D, Stanistreet D. Quantitative Methods for Health Research: A Practical Interactive Guide to Epidemiology and Statistics. UK: John Wiley & Sons Ltd; 2018.

Durlak JA. How to select, calculate, and interpret effect sizes. J Pediatr Psychol. 2009;34:917–928.

Idris NRN. Performance of the trim and fill method in adjusting for the publication bias in meta-analysis of continuous data. Am J Appl Sci. 2012;9:1512.

Brahmi F, Gall C. EndNote® and Reference Manager® citation formats compared to "instructions to authors" in top medical journals. Med Ref Serv Q. 2006;25:49–57.

Bogossian FE, Cooper SJ, Cant R, Porter J, Forbes H; FIRST2ACT™ Research Team. A trial of e-simulation of sudden patient deterioration (FIRST2ACT WEB) on student learning. Nurse Educ Today. 2015;35:e36–e42.

Toubasi S, Alosta MR, Darawad MW, Demeh W. Impact of simulation training on Jordanian nurses' performance of basic life support skills: a pilot study. Nurse Educ Today. 2015;35:999–1003.

Woda A, Hansen J, Dreifuerst KT, et al. The impact of simulation on knowledge and performance gain regarding diabetic patient care. Clin Simul Nurs. 2019;34:6.

Meyer MN, Connors H, Hou Q, Gajewski B. The effect of simulation on clinical performance: a junior nursing student clinical comparison study. Simul Healthc. 2011;6:269–277.

Boyde M, Cooper E, Putland H, et al. Simulation for emergency nurses (SIREN): a quasi-experimental study. Nurse Educ Today. 2018;68:100–104.

Ismailoglu G, Zaybak A. Comparison of the effectiveness of a virtual simulator with a plastic arm model in teaching intravenous catheter insertion skills. Comput Inform Nurs. 2018;36:98–105.

Morton SB, Powers K, Jordan K, et al. The effect of high-fidelity simulation on medical-surgical nurses' mock code performance and self-confidence. MEDSURG Nurs. 2019;28:6.

Aqel AA, Ahmad MM. High-fidelity simulation effects on CPR knowledge, skills, acquisition, and retention in nursing students. Worldviews Evid Based Nurs. 2014;11:394–400.

Basak T, Aslan O, Unver V, Yildiz D. Effectiveness of the training material in drug-dose calculation skills. Jpn J Nurs Sci. 2016;13:324–330.

Zhang YY, Yan M, Li J, et al. Effects of walking exercise on bowel preparation in patients undergoing colonoscopy: evidence from systematic review and meta-analysis. Front Nurs. 2020;7:39–48.

Basak T, Demirtas A, Iyigun E. The effect of simulation based education on patient teaching skills of nursing students: a randomized controlled study. J Prof Nurs. 2019;35:417–424.

Bowling A. The effect of simulation on skill performance: a need for change in pediatric nursing education. J Pediatr Nurs. 2015;30:439–446.

Chen R, Grierson LE, Norman GR. Evaluating the impact of high- and low-fidelity instruction in the development of auscultation skills. Med Educ. 2015;49:276–285.

Durmaz A, Dicle A, Cakan E, Cakir Ş. Effect of screen-based computer simulation on knowledge and skill in nursing students' learning of preoperative and postoperative care management: a randomized controlled study. Comput Inform Nurs. 2012;30:196–203.

Jaberi A, Momennasab M. Effectiveness of standardized patient in abdominal physical examination education: a randomized, controlled trial. Clin Med Res. 2019;17:1–10.

Karabacak U, Unver V, Ugur E, et al. Examining the effect of simulation based learning on self-efficacy and performance of first-year nursing students. Nurse Educ Pract. 2019;36:139–143.

Keleekai NL, Schuster CA, Murray CL, et al. Improving nurses' peripheral intravenous catheter insertion knowledge, confidence, and skills using a simulation-based blended learning program: a randomized trial. Simul Healthc. 2016;11:376–384.

Lee BO, Liang HF, Chu TP, Huang CC. Effects of simulation-based learning on nursing student competences and clinical performance. Nurse Educ Pract. 2019;41:102646.

Liaw SY, Wong LF, Ang SBL, Ho JTY, Siau C, Ang ENK. Strengthening the afferent limb of rapid response systems: an educational intervention using web-based learning for early recognition and responding to deteriorating patients. BMJ Qual Saf. 2016;25:448–456.

Lubbers J, Rossman C. The effects of pediatric community simulation experience on the self-confidence and satisfaction of baccalaureate nursing students: a quasi-experimental study. Nurse Educ Today. 2016;39:93–98.

Sarmasoglu S, Dinc L, Elcin M. Using standardized patients in nursing education: effects on students' psychomotor skill development. Nurse Educ. 2016;41:E1–E5.

Stayt LC, Merriman C, Ricketts B, Morton S, Simpson T. Recognizing and managing a deteriorating patient: a randomized controlled trial investigating the effectiveness of clinical simulation in improving clinical performance in undergraduate nursing students. J Adv Nurs. 2015;71:2563–2574.

Sumner L, Burke SM, Chang LT, McAdams M, Jones DA. Evaluation of basic arrhythmia knowledge retention and clinical application by registered nurses. J Nurs Staff Dev. 2012;28:E5–E9.

Unver V, Basak T, Iyigun E, et al. An evaluation of a course on the rational use of medication in nursing from the perspective of the students. Nurse Educ Today. 2013;33:1362–1368.

Vidal VL, Ohaeri BM, John P, Helen D. Virtual reality and the traditional method for phlebotomy training among college of nursing students in Kuwait: implication for practice. J Infus Nurs. 2013;36:349–355.

Eyikara E, Baykara ZG. The importance of simulation in nursing education. World J Educ Technol. 2017;9:6.

Orique SB, Phillips LJ. The effectiveness of simulation on recognizing and managing clinical deterioration: meta-analyses. West J Nurs Res. 2018;40:582–609.

Hegland PA, Aarlie H, Strømme H, Jamtvedt G. Simulation-based training for nurses: systematic review and meta-analysis. Nurse Educ Today. 2017;54:6–20.

Yuan HB, Williams BA, Fang JB, Ye QH. A systematic review of selected evidence on improving knowledge and skills through high-fidelity simulation. Nurse Educ Today. 2012;32:294–298.

Munshi F, Lababidi H, Alyousef S. Low- versus high-fidelity simulations in teaching and assessing clinical skills. J Taibah Univ Med Sci. 2015;10:12–15.

SimSTAFF. What is the difference between low-fidelity and high-fidelity simulations? 2020. https://simstaff.com/difference-between-low-fidelity-and-high-fidelity-simulations/. Accessed March 1, 2020.

Nimbalkar A, Patel D, Kungwani A, Phatak A, Vasa R, Nimbalkar S. Randomized control trial of high fidelity vs low fidelity simulation for training undergraduate students in neonatal resuscitation. BMC Res Notes. 2015;8:636.

Rischer K. Why less fidelity in simulation may improve student learning. https://www.keithrn.com/2019/03/why-less-fidelity-in-simulation-may-be-better-to-strengthen-student-learning. Accessed December 6, 2020.

Koh JH, Hur HK. Effects of simulation-based training for basic life support utilizing video-assisted debriefing on non-technical and technical skills of nursing students. Korean J Adult Nurs. 2016;28:169–179.

Oh P-J, Jeon KD, Koh MS. The effects of simulation-based learning using standardized patients in nursing students: a meta-analysis. Nurse Educ Today. 2015;35:e6–e15.
