Open access

Development of student simulated patient training and evaluation indicators in a high-fidelity nursing simulation: a Delphi consensus study

30 Apr 2021

Introduction

As a competency-based practice profession, nursing plays a central role in the health care delivery system. The continuous development of the nursing discipline focuses on coping with the complexity of health care and newly developed technologies.1 The aim of nursing education is to provide the professional knowledge, skills, and abilities that enable graduates to think critically, solve problems, and make clinical decisions. Graduates should possess the knowledge and competency to deliver qualified and safe nursing care.2 Learning by doing enhances knowledge acquisition, but it is impractical to use such a method for technical training in actual clinical situations because of patient safety concerns. Simulation, by contrast, provides students with the opportunity to learn clinical skills through repeated practice in a safe learning environment.3 Replacing high-fidelity simulators with real people allows students to communicate with them, so that not only students' knowledge and skills but also their communication skills and professionalism can be assessed.4 However, recruiting and cultivating occupational simulated patients (SPs) is challenged by the required financial and human investment. For this reason, training students as simulated patients is an appropriate alternative in nursing education.5 A student simulated patient (SSP) is a student who is trained under professional guidance and plays the role of a patient according to a case script. SSP quality assurance should ensure and pursue continuous improvement, as this is the key to effective SSP training. However, there are neither unified, scientific, and objective evaluation standards nor reliable measurements for SSP training.5 This study was conducted to construct SSP training content and evaluation indicators, and to design the training in a high-fidelity simulation.

Literature review
Simulation-based learning

Simulations are defined as learning activities designed to help learners acquire insight into the interrelated knowledge structures within a specific context. Simulation-based learning consists of simulated scenarios that use life-size simulators controlled by computer software in a simulated clinical setting.6 High-fidelity simulation (HFS) is experiential learning through interaction with human body manikins that present actual physiological and pharmacological reactions. Knowledge is generated when people's perspective is transformed by the new experience gained through exposure to the situation.7 HFS offers students a particular opportunity to learn through practical experience, reflection, and feedback.6 Students react to the simulated events and gain practical experience. Reflection encourages them to think critically, step by step, through the experience and to integrate the experience with the cognitive domains.8 As a student-centered approach, simulation-based learning promotes participative feedback between teachers and students; develops communication, teamwork, and decision-making skills; and helps students better understand the role of a nurse.9 Simulation allows students not only to identify knowledge gaps and take responsibility for rectifying their shortcomings, but also to increase self-confidence, critical thinking, and psychomotor skills. Students demonstrate increased problem-solving ability, communication competency, cooperation, and delegation skills after simulated learning. Training students as simulated patients (SPs) with an HFS allows for realistic practical experience.10

Simulated patient

A portrayed patient refers to a standardized or simulated patient who is planted in a scenario by health professionals and acts as a proxy for a client with a specific disease. An SP is defined as a person who is trained to precisely and reliably play the role of a client with health concerns.11 SPs realistically imitate the situation of real patients so that students can prepare and improve their professional knowledge and communication skills by working with them.12 SSPs have been used for training students' clinical skills and communication skills,13 as well as their skills in the objective structured clinical examination for comprehensive clinical competency.5 Previous studies reported positive impacts on nursing education, especially in the development of nursing skills and critical nursing care.14 SP simulation is necessary to motivate nursing students' abilities to establish therapeutic relationships with real patients.13 SP simulation provides an active learning environment for students to gain confidence in clinical practicums. Interaction with an SP improves psychological and emotional fidelity to a greater extent than interaction with manikins.13 Based on best practice for SP training, the training principles include safety, quality, professionalism, accountability, and collaboration. SPs should be trained in role portrayal, feedback, and complete assessment in a safe way that minimizes or reduces stakeholder risk.15 Quality assurance and continuous improvement are necessary for the development of SP training and practice. The content and feedback techniques of SP training should be determined, as these are the key to ensuring training quality.16

Methods
Research objectives and design

The purposes of this study were to construct SSP training content and evaluation indicators, and to explore their validity and reliability.

A descriptive design with Delphi expert consultations was used.

Expert panel

The expert panel was recruited through e-mails from the researcher, accompanied by a leaflet explaining the objectives of the consultations. The panel consisted of 20 experts from Beijing, Shanghai, Guangzhou, Sichuan, Yunnan, Zhejiang, Inner Mongolia, and Macao, China (Table 1).

Characteristics of participants in Delphi consultations (n = 20).

Characteristics n % M ± SD
Age (years)
  30–39 3 15
  40–49 8 40
  50–59 9 45
Working area
  Clinical nursing 6 30
  Nursing education 10 50
  Nursing management 4 20
Academic degree
  Bachelor of nursing science 5 25
  Master of nursing science 8 40
  Doctor of philosophy in nursing science 7 35
Job title/classification
  Senior registered nurse/senior lecturer 5 25
  Associate professors 7 35
  Professors 8 40
  Nursing administrator 4 20
  Nursing teacher/clinical preceptor 16 80
  Master supervisor 9 45
  Doctor supervisor 5 25
Research expertise
  Clinical nursing 10 50
  Nursing education 16 80
  Nursing management 4 20
Years of experience in nursing 25.55 ± 9.57
  5–9 2 10
  9–19 3 15
  20–29 7 35
  30–40 8 40
Delphi consultations

The Delphi consultation is a technique for achieving consensus in which an expert panel expresses opinions and comments on a particular issue.17 The Delphi method is a well-developed, completely anonymous group process in which opinions are elicited from experts through a consultation questionnaire.17 The preliminary dimensions and indicators were developed through literature retrieval and collation analysis. The questionnaire included questions on the research purpose, general information about the experts, a self-assessment of expert familiarity and judgment basis, and importance ratings of the first-level and second-level indicators on a five-point scale (from 1 = not important to 5 = very important). In addition, a supplementary opinion column allowed the experts to give their comments.

The questionnaires were sent via e-mail so that the experts could fully express their opinions. Responses to each item were collected and analyzed along with the experts' comments. Items were added, revised, or dropped in subsequent rounds until panel consensus was achieved. After the questionnaires from the first round were collected, the mean score and coefficient of variation (CV) of each item were calculated. A mean importance score above 3.00 and a CV below 0.25 were the inclusion criteria for item screening.18 The questionnaire was revised based on the experts' comments and then sent back to the panel; any modification was reported to the panel. The interval between rounds was 2–3 weeks.
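The item-screening rule above is simple arithmetic. As a rough illustration (the function name and the panel ratings below are hypothetical, not the study's data), it can be sketched as:

```python
# Sketch of the Delphi item-screening rule: keep an item when its mean
# importance exceeds 3.00 and its coefficient of variation (SD / mean)
# is below 0.25. Ratings are illustrative 1-5 importance scores.
from statistics import mean, pstdev

def screen_item(ratings):
    """Return (keep, mean, cv) for one item's five-point importance ratings."""
    m = mean(ratings)
    cv = pstdev(ratings) / m   # coefficient of variation = SD / mean
    return m > 3.00 and cv < 0.25, round(m, 2), round(cv, 2)

# A hypothetical panel of ten experts rating one indicator:
keep, m, cv = screen_item([5, 4, 5, 4, 4, 5, 3, 4, 5, 4])
print(keep, m, cv)
```

Whether to use the population or the sample standard deviation is not stated in the article; the sketch uses the population form for simplicity.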

Data analysis

All quantitative analyses were performed using the Statistical Package for the Social Sciences (SPSS 26.0, IBM Corp). The mean and standard deviation were calculated for the importance of each item. The experts' enthusiasm was measured by the response rate. The authority coefficient (Cr) was calculated as Cr = (Ca + Cs)/2, where Ca is the judgment coefficient and Cs the familiarity coefficient.17 The weights of the judgment bases were assigned as shown in Table 2, and their sum was reported as the judgment coefficient. Experts reported their familiarity with the questionnaire content on a scale of 1.00 = very familiar, 0.80 = familiar, 0.60 = general, 0.40 = unfamiliar, and 0.20 = very unfamiliar; the average was taken as the familiarity coefficient. The degree of expert coordination represents the credibility of the consultation and reflects the consensus among experts. Coordination was analyzed with the coefficient of variation (CV) and Kendall's coefficient of concordance (Kendall's W). The CV reflects the difference in the experts' ratings of the importance of a specific indicator and was calculated as CV = standard deviation/mean score; a CV greater than or equal to 0.25 indicates a large difference between experts.17 Kendall's W was tested with the non-parametric K-related-samples test; a larger W value indicates better coordination. The analytic hierarchy process (AHP) was used to calculate the weight of each item, which shows the relative importance of that indicator in the overall evaluation.17

The weight of judgment basis (Ca).

Judgment basis Large Median Small
Theoretical knowledge 0.30 0.20 0.10
Experience 0.45 0.35 0.20
References 0.20 0.15 0.10
Intuition 0.05 0.05 0.05
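As an illustration of the authority computation described in the Data analysis section, the sketch below combines the Table 2 judgment-basis weights with a self-rated familiarity score. The function name and the sample expert profile are hypothetical; only the weights and the formula Cr = (Ca + Cs)/2 come from the article:

```python
# Authority coefficient Cr = (Ca + Cs) / 2, where Ca sums the
# judgment-basis weights an expert selects (Table 2) and Cs is the
# expert's self-rated familiarity (1.00 very familiar ... 0.20 very unfamiliar).
CA_WEIGHTS = {  # judgment basis -> weights for (large, median, small) influence
    "theoretical_knowledge": (0.30, 0.20, 0.10),
    "experience":            (0.45, 0.35, 0.20),
    "references":            (0.20, 0.15, 0.10),
    "intuition":             (0.05, 0.05, 0.05),
}

def authority(choices, familiarity):
    """choices maps each basis to 'large'/'median'/'small'; familiarity is Cs."""
    idx = {"large": 0, "median": 1, "small": 2}
    ca = sum(CA_WEIGHTS[basis][idx[level]] for basis, level in choices.items())
    return round((ca + familiarity) / 2, 2)

# A hypothetical expert rating every basis 'large' and 'very familiar' (1.00):
print(authority({b: "large" for b in CA_WEIGHTS}, 1.00))
```

Note that the "large" weights sum to exactly 1.00, so the maximum possible Cr with full familiarity is 1.0.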
Results
Expert consultations

The positivity (response) coefficients were 0.952 in the first round and 1.00 in the second round. The judgment coefficient was 0.89 and the familiarity coefficient 0.85, giving an authority coefficient of 0.87. An authority coefficient above 0.80 indicates that the experts had high authority over, and confidence in, the consultation content.17 Table 3 shows that the coordination coefficient in the second round was higher than in the first round, meaning that the consensus of the experts' judgments on the indicators was enhanced.

The description of the coordination of expert consultations.

Consultation and item Coordination coefficient (W) Degrees of freedom χ2 value P value
First round SSP training content
  Dimension 0.377 4 30.159 0.000
  Indicator 0.326 16 104.400 0.000
SSP evaluation
  Dimension 0.447 4 35.731 0.000
  Indicator 0.311 17 105.886 0.000

Second round SSP training content
  Dimension 0.918 3 55.091 0.000
  Indicator 0.548 16 175.351 0.000
SSP evaluation
  Dimension 0.594 4 47.528 0.000
  Indicator 0.530 19 201.555 0.000

After the first round of expert consultation, the average score of each item was over 3.00, so all items were retained. SSP training included the role and responsibility of the SP, script interpretation, plot performance, and training for a rater, with a total of 17 indicators (see Table 4). SSP evaluation included disease knowledge, role portrayal, performance fidelity, and being a rater, with 20 indicators in total (see Table 5). Each indicator was scored as 2 = passed, 1 = needs improvement, or 0 = failed. The accuracy of assessment was calculated as the ratio of the trainee's scoring to the trainer's scoring (2 points for ≥85%, 1 point for 60–84%, 0 points for <60%). Trainees with a total score rate of 85% or over were considered qualified SSPs.
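The scoring rules above can be sketched in a few lines; the function names and example totals below are illustrative, not taken from the study:

```python
# Sketch of the rater-accuracy scoring and the qualification rule:
# the trainee-to-trainer scoring ratio maps to 2 / 1 / 0 points, and a
# trainee qualifies when the total score rate reaches 85%.
def accuracy_points(trainee_total, trainer_total):
    """Map the trainee-to-trainer scoring ratio to 2, 1, or 0 points."""
    ratio = trainee_total / trainer_total
    if ratio >= 0.85:
        return 2       # >= 85% agreement with the trainer
    if ratio >= 0.60:
        return 1       # 60-84% agreement
    return 0           # < 60% agreement

def is_qualified(score, max_score):
    """A trainee is a qualified SSP at a total score rate of 85% or over."""
    return score / max_score >= 0.85

print(accuracy_points(34, 40), is_qualified(35, 40))
```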

The descriptions of SSP training content after expert consultations.

Dimension Indicator Importance Coefficient of variation Weight

Mean SD
1. Role and responsibility of SP 4.25 0.29 0.07 0.239
1.1. SP concept 3.95 0.60 0.15 0.053
1.2. The performance is consistent with the plot of the play 4.80 0.41 0.09 0.064
1.3. Handling the events that may be encountered 4.00 0.46 0.11 0.053
1.4. Each performance is consistent 4.65 0.59 0.13 0.062
1.5. Punctuality and dedication 3.85 0.49 0.13 0.051
2. Script interpretation 4.47 0.35 0.08 0.252
2.1. Disease knowledge: symptoms, signs, treatment 4.05 0.39 0.10 0.054
2.2. The progress of script plot 4.75 0.44 0.09 0.063
2.3. Characteristics of patient 4.85 0.37 0.08 0.065
2.4. Health concerns 4.70 0.57 0.12 0.063
2.5. The relevant physical examinations, and laboratory and imaging examinations 4.00 0.56 0.14 0.053
3. Plot performance 4.55 0.36 0.08 0.257
3.1. Memory: medical history and health concerns 3.75 0.69 0.17 0.049
3.2. Demonstrations of symptoms 4.80 0.41 0.09 0.064
3.3. Demonstrations of signs 4.80 0.52 0.11 0.064
3.4. Demonstrations of psychological and emotional responses 4.85 0.49 0.10 0.065
4. Training for a rater 4.45 0.44 0.10 0.251
4.1. Assessment content 4.80 0.52 0.11 0.064
4.2. Scoring criteria for each indicator 4.70 0.57 0.12 0.063
4.3. Paying attention to students’ performance 3.85 0.59 0.15 0.051

The descriptions of SSP evaluation indicators after expert consultations.

Dimension Indicator Importance Coefficient of variation Weight

Mean SD
1. Disease knowledge 3.87 0.63 0.16 0.211
1.1. Symptoms and signs 3.80 0.77 0.20 0.041
1.2. Treatment and prognosis 3.95 0.69 0.17 0.042
1.3. Physical examinations, laboratory and imaging examinations 3.85 0.75 0.19 0.043
2. Role portrayal 4.63 0.29 0.06 0.253
2.1. Complete presentations of symptoms 4.90 0.31 0.06 0.053
2.2. Complete presentations of signs 4.75 0.44 0.09 0.051
2.3. Clear and fluent expression 4.85 0.37 0.08 0.052
2.4. Consistency with the script plot 4.85 0.37 0.08 0.052
2.5. Accurate and complete memory of plot lines 4.85 0.37 0.08 0.052
2.6. Accurate and complete memory of case data 4.90 0.31 0.06 0.053
2.7. Answer given only if be asked 4.05 0.60 0.15 0.044
2.8. Respond promptly 3.90 0.85 0.22 0.042
3. Performance fidelity 4.83 0.35 0.07 0.264
3.1. Mimicking symptoms 4.95 0.22 0.05 0.054
3.2. Mimicking signs 4.85 0.37 0.08 0.052
3.3. Mimicking tone and intonation 4.75 0.55 0.12 0.051
3.4. Mimicking facial features and expression 4.80 0.52 0.11 0.052
3.5. Mimicking postures and behaviors 4.75 0.55 0.12 0.051
3.6. Realistic psychological and emotional responses 4.85 0.37 0.08 0.052
4. Being a rater 4.95 0.22 0.05 0.271
4.1. Paying attention to students’ performance 4.95 0.22 0.05 0.054
4.2. Completion of assessment 4.95 0.22 0.05 0.054
4.3. Accuracy of assessment 4.95 0.22 0.05 0.054
A pilot study and the validity and reliability of SSP evaluation indicators

Based on the strategic key elements of SSP training (shown in Figure 1), a pilot study was conducted. Best practices for SP training were applied to ensure a safe learning environment, to train role portrayal and rater skills, and to give feedback to trainees during debriefing.15

Figure 1

Strategic key elements of SSP training.

High-fidelity simulators controlled by computer software were used to design the scenarios. For example, one simulated scenario concerned abdominal pain and was set in a simulated surgery department: Mrs. Lu, a 25-year-old woman, was admitted to the surgery department. The training process is shown in Figure 2.

Figure 2

SSP training process and the design of simulated scenario.

The trainers were senior teachers qualified for SP training. Eleven volunteer trainees were recruited from senior baccalaureate nursing students through advertisements on the news board. Their mean age was 22.82 ± 1.25 years. They had completed 24.82 ± 4.85 weeks of clinical practicums and had participated in 12.64 ± 4.18 hours of simulation learning. The 20-hour SSP training covered the role and responsibilities of the SP, the context and format of scenarios, script read-through, education regarding the disease, role demonstration, and training for raters. The SSPs' role portrayal was expected to be consistent and accurate, and they were expected to understand and maintain the confidentiality principles related to the simulated events. Students rehearsed the high-fidelity simulated scripts under the trainers' guidance. Each session was videotaped in order to provide detailed feedback to the trainees. Feedback and debriefing were conducted in each session to help the trainees improve their performance and reduce their anxiety during the simulated experience.

Open-ended questions were used to investigate the trainees' views on SSP training. The students considered the scenario design realistic (100%); they understood the patient's feelings by playing the role of a patient (90.9%), gained a deeper understanding of disease knowledge (81.8%), learned interrogation skills (72.7%) and plot performance (63.6%), and improved their communication skills (54.5%). However, they encountered difficulties in memorizing script lines by heart (63.6%), responding appropriately and in time to plot changes (54.5%), and paying attention to students' performance (36.4%).

The content validity of the SSP evaluation indicators was rated by three experts in nursing education, SP training, and instrument development; the content validity index (CVI) was 0.95. Two qualified tutors observed the trainees' performances simultaneously and scored them against the SSP evaluation indicators. Inter-rater reliability was calculated as the correlation between the two raters' scorings; the Kendall correlation coefficient was 0.866 (P < 0.001). The internal consistency reliability (Cronbach's α) was 0.727.
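As a rough sketch of the internal-consistency check reported above, Cronbach's α can be computed directly from an indicator score matrix. The matrix below is invented for illustration (rows are trainees, columns are evaluation indicators on the 2/1/0 scale) and does not reproduce the study's data:

```python
# Cronbach's alpha from first principles:
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals),
# where k is the number of items (indicators).
from statistics import pvariance

def cronbach_alpha(scores):
    """scores: list of per-trainee lists, one score per indicator."""
    k = len(scores[0])                                       # number of indicators
    item_vars = [pvariance([row[j] for row in scores]) for j in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

scores = [   # hypothetical 2/1/0 indicator scores for five trainees
    [2, 2, 1, 2],
    [1, 2, 1, 1],
    [2, 1, 2, 2],
    [0, 1, 1, 1],
    [2, 2, 2, 2],
]
print(round(cronbach_alpha(scores), 3))
```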

Discussion

The Delphi method enabled experts from diverse locations to be included anonymously. Anonymous participation promoted broad and open expression of comments. The Delphi method avoided the influence of the highest-positioned or highest-ranked expert on reaching consensus, as well as the possibility of an expert adjusting to the group opinion regardless of his or her own.18 Each expert contacted only the investigator during the consultation process. The group opinions were given to the experts in the subsequent round of consultation for reference, which allowed them either to maintain or to change their opinions in view of the panel's responses.19 After two rounds of consultation, the coordination coefficient of the expert opinions increased, indicating that the opinions of the expert panel gradually became consistent. This study reached expert-panel consensus on 17 indicators of training content and 20 evaluation indicators of SSP quality.

The experts had high expectations for training in plot performance, performance consistency, the demonstration of symptoms and signs, and a thorough understanding of the scoring criteria. The importance of these aspects should be considered in plot writing, scenario design, and the arrangement of learning activities. SSP training should ensure that trainees are ready to be SPs through repeated practice and regular supportive feedback. Completing the assessment should be trained, and a variety of learner behaviors should be built into the design of the simulated situation.15 The experts believed that role portrayal, performance fidelity, and being a rater were more important than the assessment of disease knowledge, and they had high expectations for the indicators of role-playing, performance fidelity, and objective assessment. These indicators were the focus of SSP evaluation and reflected the core of SSP training. In this study, the trainees' disease knowledge was assessed by a multiple-choice examination, while SSP performance was assessed by observation. The good inter-rater reliability indicated that the evaluation criteria for each item were specific, objective, and stable. The internal consistency was also good, meaning that the assessment items measured the same content or characteristics. Thus, the SSP evaluation indicators were reliable.

Regarding the pilot study, all trainees had clinical practicum experience, so they could better understand the patient conditions in the simulated script. Performance consistency was very important: to avoid the subjective arbitrariness of SSP performances, the time points and the occurrence of expressions, postures, emotions, tone, and intonation were strictly set in the plot script. The SPs realistically played the role of patients, with distinctive facial expressions and postures as well as psychological and emotional reactions. They kept up regular conversation with the students and cooperated with history inquiry, physical examination, and patient instruction. They also gave feedback as raters, pointing out the students' strengths, omissions, and errors.20,21 SSPs showed students how interventions and behaviors affected the patient's emotional and psychological feelings, the patient's trust in nurses, and the patient's understanding of the guided information. Feedback plays an important educational role in interpersonal and affective learning, and SSPs should give students feedback on communication and clinical skills. Accordingly, SSPs should be trained to use their observations and knowledge to give students feedback on the behaviors that need to be improved or modified.22 Training students to become SPs not only gives the school a reliable source of "cases," but also lets students actively participate in teaching, which embodies the student-centered teaching model.21 In this study, the students believed that by acting the role of a patient they experienced the patient's pain and needs. These experiences enabled the SSPs to understand the patient's feelings while also reflecting on their own gaps in knowledge and skills.
In the process of interacting with nursing students for history inquiry, health assessment, and patient instruction, they talked and interacted with each other and learned how to gain the trust of "patients" and start a therapeutic relationship. Previous studies reported that the SSP experience stimulated learning enthusiasm, enhanced learning interest, and promoted the understanding and application of knowledge.23 The SSP experience also enhanced students' confidence in communicating with real patients.24 SSP training was carried out in a simulated situation so that students could intuitively understand the case information and realistically imitate the patient's condition and psychological and emotional reactions. This helped students enhance their awareness of humanistic care and build the knowledge and competence to deal with real clinical situations.25

Conclusions

Students acting as SPs gain first-hand knowledge and experience within the simulated scenarios. The content and evaluation indicators for SSP training were developed through the Delphi consensus method combined with the analytic hierarchy process. The SSP training content included the role and responsibility of the SSP, script interpretation, plot performance, and training for a rater. SSP evaluation included disease knowledge, role portrayal, performance fidelity, and being a rater. The evaluation indicators were valid and reliable, and provide objective and quantifiable measurements for SP training in nursing.

Limitations

The reliability test has limitations because only a small sample from one nursing school was used, so the reliability needs to be further verified with large samples from different cultural backgrounds. Because of the age difference between students and specific populations, such as children, pregnant women, and the elderly, and the students' limited social experience, it is impossible for them to comprehensively understand the role characteristics of such patients, which might affect their role performance. The design of SSP training for these special populations needs to be further explored in order to enrich case resources.

eISSN:
2544-8994
Language:
English
Frequency:
4 times per year
Journal subjects:
Medicine, Assistive Professions, Nursing