• Research article
  • Open access
  • Published: 08 June 2021

Comparing formative and summative simulation-based assessment in undergraduate nursing students: nursing competency acquisition and clinical simulation satisfaction

  • Oscar Arrogante 1 ,
  • Gracia María González-Romero 1 ,
  • Eva María López-Torre 1 ,
  • Laura Carrión-García 1 &
  • Alberto Polo 1  

BMC Nursing volume 20, Article number: 92 (2021)


Formative and summative evaluation are widely employed in simulation-based assessment. The aims of our study were to evaluate the acquisition of nursing competencies through clinical simulation in undergraduate nursing students and to compare their satisfaction with this methodology under these two evaluation strategies.

Two hundred eighteen undergraduate nursing students participated in a cross-sectional study using a mixed method. MAES© (self-learning methodology in simulated environments) sessions were developed to assess students by formative evaluation, whereas Objective Structured Clinical Examination sessions were conducted to assess students by summative evaluation. Simulated scenarios recreated clinical cases of critical patients. Students’ performance in all simulated scenarios was assessed using checklists. A validated questionnaire was used to evaluate satisfaction with clinical simulation. Quantitative data were analysed using IBM SPSS Statistics version 24.0, whereas qualitative data were analysed using ATLAS.ti version 8.0.

Most nursing students showed adequate clinical competence. Satisfaction with clinical simulation was higher when students were assessed using formative evaluation. Students’ main complaints about summative evaluation were related to the reduced time for performing the simulated scenarios and increased anxiety during their clinical performance.

The best way to reduce students’ complaints about summative evaluation is to orient them to the simulated environment. Combining both evaluation strategies in simulation-based assessment is recommended: providing students with feedback in summative evaluation, and evaluating their achievement of learning outcomes in formative evaluation.

Background

Clinical simulation methodology has grown exponentially over the last few years and has gained acceptance in nursing education. Simulation-based education (SBE) is considered an effective educational methodology for nursing students to achieve the competencies needed for their professional future [ 1 – 5 ]. In addition, simulation-based educational programs have been shown to be more useful than traditional teaching methodologies [ 4 , 6 ]. As a result, most nursing faculties are integrating this methodology into their study plans [ 7 ]. SBE has the potential to shorten the learning curve for students, strengthen the integration of theoretical knowledge and clinical practice, identify students’ deficient areas, develop communication and technical skills, improve patient safety, standardise the curriculum and teaching contents, and offer observations of real-time clinical decision making [ 5 , 6 , 8 , 9 ].

SBE offers an excellent opportunity to perform not only observed competency-based teaching, but also the assessment of these competencies. Simulation-based assessment (SBA) is aimed at evaluating various professional skills, including knowledge, technical and clinical skills, communication, and decision-making, as well as higher-order competencies such as patient safety and teamwork [ 1 – 4 , 10 ]. Compared with traditional assessment methods (i.e., written or oral tests), SBA offers the opportunity to evaluate actual performance in an environment similar to ‘real’ clinical practice, to assess multidimensional professional competencies, and to present standard clinical scenarios to all students [ 1 – 4 , 10 ].

The main SBA strategies are formative and summative evaluation. Formative evaluation is conducted to establish students’ progression during the course [ 11 ]. This evaluation strategy helps educators improve students’ deficient areas and test their knowledge [ 12 ]. Employing this strategy, educators give students feedback about their performance; students then self-reflect to evaluate their learning and determine their deficient areas. In this sense, formative evaluation includes an ideal phase for achieving the purposes of this strategy: the debriefing [ 13 ]. The International Nursing Association for Clinical Simulation and Learning (INACSL) defines debriefing as a reflective process immediately following the simulation-based experience in which ‘participants explore their emotions and question, reflect, and provide feedback to one another’. Its aim is ‘to move toward assimilation and accommodation to transfer learning to future situations’ [ 14 ]. Therefore, debriefing is a basic component for learning to be effective after the simulation [ 15 , 16 ]. Furthermore, MAES© (according to its Spanish initials for self-learning methodology in simulated environments) is a clinical simulation methodology created to perform formative evaluations [ 17 ]. MAES© specifically allows the evaluation of nursing competencies acquired by several nursing students at the same time. It is structured as a union of other active learning methodologies: self-directed learning, problem-based learning, peer education, and simulation-based learning. Specifically, students acquire and develop competencies through self-directed learning, as they voluntarily choose the competencies to learn. Furthermore, this methodology encourages students to be the protagonists of their learning process, since they can choose the case they want to study, design the clinical simulation scenario and, finally, actively participate during the debriefing phase [ 17 ]. This methodology meets all the requirements defined by the INACSL Standards of Best Practice [ 18 ]. Compared to traditional simulation-based learning (where simulated clinical scenarios are designed by the teaching team and led by facilitators), the MAES© methodology (where simulated clinical scenarios are designed and led by students) provides nursing students with a better learning process and clinical performance [ 19 ]. Currently, the MAES© methodology is used in clinical simulation sessions with nursing students in several universities, not only in Spain but also in Norway, Portugal and Brazil [ 20 ].

In contrast, summative evaluation is used to establish the learning outcomes achieved by students at the end of the course [ 11 ]. This evaluation strategy helps educators evaluate students’ learning, the competencies they have acquired, and their academic achievement [ 12 ]. This assessment is essential in the education process to determine readiness and competence for certification and accreditation [ 10 , 21 ]. Accordingly, the Objective Structured Clinical Examination (OSCE) is commonly conducted in SBA as a summative evaluation of students’ clinical competence [ 22 ]. Consequently, the OSCE has been used by educational institutions as a valid and reliable method of assessment. An OSCE most commonly consists of a ‘round-robin’ of multiple short testing stations, in each of which students must demonstrate defined clinical competencies while educators evaluate their performance against predetermined criteria using a standardised marking scheme, such as a checklist. Students rotate through these stations, where educators assess their performance in clinical examination, technical skills, clinical judgment and decision-making skills during the nursing process [ 22 , 23 ]. This strategy of summative evaluation incorporates actors performing as simulated patients; therefore, the OSCE allows educators to assess students’ clinical competence in a realistic simulated clinical environment. After the simulated scenarios, this evaluation strategy provides educators with an opportunity to give students constructive feedback according to the results achieved on the checklist [ 10 , 21 – 23 ].

Although both evaluation strategies are widely employed in SBA, there is scarce evidence about possible differences in satisfaction with clinical simulation when nursing students are assessed using formative versus summative evaluation. Considering the high satisfaction with formative evaluation perceived by our students during the implementation of the MAES© methodology, we questioned whether this satisfaction would be similar when the same simulated clinical scenarios were used for summative evaluation, and we wanted to explore the reasons why satisfaction might differ between the two SBA strategies. Therefore, the aims of our study were to evaluate the acquisition of nursing competencies through clinical simulation methodology in undergraduate nursing students, and to compare their satisfaction with this methodology under the two SBA strategies, formative and summative evaluation. Our research hypothesis was that both SBA strategies are effective for acquiring nursing competencies, but that student satisfaction with formative evaluation is higher than with summative evaluation.

Methods

Study design and setting

A descriptive cross-sectional study was conducted using a mixed method, analysing both quantitative and qualitative data. The study was conducted from September 2018 to May 2019 in a University Centre of Health Sciences in Madrid (Spain). This centre offers Physiotherapy and Nursing Degrees.

Participants

The study included 3rd-year undergraduate students (106 students who participated in MAES© sessions within the subject ‘Nursing care for critical patients’) and 4th-year undergraduate students (112 students who participated in OSCE sessions within the subject ‘Supervised clinical placements – Advanced level’) of the Nursing Degree. It should be noted that the 4th-year undergraduate students had completed all their clinical placements and had to pass the OSCE sessions to achieve their certification.

Clinical simulation sessions

To assess the clinical performance of 3rd-year undergraduate students using formative evaluation, MAES© sessions were conducted. This methodology consists of 6 elements delivered in a minimum of two sessions [ 17 ]: (1) team selection and creation of a group identity (students are grouped into teams and create their own identity); (2) voluntary choice of the subject of study (each team freely chooses a topic that serves as inspiration for the design of a simulation scenario); (3) establishment of a baseline and programming of the skills to be acquired, through brainstorming (each team decides what they know about the subject, what they want to learn from it, and the clinical and non-technical skills they would like to acquire with the case they have chosen); (4) design of a clinical simulation scenario in which the students practise the skills to be acquired (each team commits to designing a scenario in the simulation room); (5) execution of the simulated clinical experience (a team different from the one that designed the case enters the high-fidelity simulation room and undergoes the simulation experience); and (6) debriefing and presentation of the acquired skills (in addition to analysing the performance of the participants in the scenario, the students explain what they learned while designing the case and search for evidence supporting the learning objectives).

Alternatively, OSCE sessions were developed to assess the clinical performance of 4th-year undergraduate students using summative evaluation. Both MAES© and OSCE sessions recreated critically ill patients with diagnoses of exacerbation of Chronic Obstructive Pulmonary Disease (COPD), acute coronary syndrome, postsurgical haemorrhage, and severe traumatic brain injury.

It should be noted that the implementation of all MAES© and OSCE sessions followed the Standards of Best Practice recommended by the INACSL [ 14 , 24 – 26 ]. In this way, all the stages of a high-fidelity simulation session were completed: pre-briefing, briefing, simulated scenario, and debriefing. Specifically, a session with all nursing students was carried out 1 week before the OSCE stations to establish a psychologically safe learning environment and familiarise students with this summative evaluation. In this pre-briefing phase, we implemented several activities for establishing a psychologically safe context, based on the practices recommended by the INACSL Standards Committee [ 24 , 25 ] and by Rudolph, Raemer, and Simon [ 27 ]. Although traditional OSCEs do not usually include a debriefing phase, we decided to include it in all OSCEs carried out in our university centre, since we consider this phase highly relevant to nursing students’ learning process and their imminent professional career.

The critically ill patient’s role was performed by an advanced simulator mannequin (NursingAnne® by Laerdal Medical AS) in all simulated scenarios. A confederate (a health professional who acts in a simulated scenario) performed the role of a registered nurse or a physician who could help students as required; occasionally, this confederate performed the role of a relative of the critically ill patient. Nursing students formed work teams of 2–3 students in all MAES© and OSCE sessions. Each work team formed in MAES© sessions received a brief description of the simulated scenario 2 months in advance, and students had to propose 3 NIC (Nursing Interventions Classification) interventions [ 28 ], with 5 related nursing activities for each of them, to resolve the critical situation. In contrast, work teams in OSCE sessions were presented with the critical situation only 2 min before entering the simulated scenario. During all simulated experiences, professors monitored and controlled the simulation from a dedicated control room using a computer program. All simulated scenarios lasted 10 min.

After each simulated clinical scenario concluded, a debriefing was carried out to give students feedback about their performance. Debriefings in MAES© sessions were conducted according to the Gather, Analyse, and Summarise (GAS) method, a structured debriefing model developed by Phrampus and O’Donnell [ 29 ]. Following this method, the debriefing questions were: What went well during your performance? What did not go so well during your performance? How can you do better next time? Additionally, MAES© includes an expository phase in debriefings, in which the students who performed the simulated scenario present what the scientific evidence contributes to its resolution [ 17 ]. Each debriefing lasted 20 min in MAES© sessions. In contrast, debriefings in OSCE sessions lasted 10 min and were carried out according to the Plus-Delta debriefing tool [ 30 ], a technique recommended when time is limited; consequently, the debriefing questions were reduced to two: What went well during your performance? What did not go so well during your performance? Within these debriefings, professors communicated to students the total score obtained on the corresponding checklist. After all debriefings, students completed the questionnaires evaluating their satisfaction with clinical simulation. In OSCE sessions, students reported their satisfaction only with the scenario they performed, which formed part of a series of clinical stations.

In summary, Table  1 shows the required elements for formative and summative evaluation according to the Standards of Best Practice for participant evaluation recommended by the INACSL [ 18 ]. It should be noted that our MAES© and OSCE sessions met these required elements.

Instruments

Clinical performance

Professors assessed students’ clinical performance using checklists (‘Yes’/‘No’). In MAES© sessions, checklists were based on the 5 most important nursing activities included in the NIC [ 28 ] as selected by nursing students. Table  2 shows the checklist of the most important NIC interventions and their related nursing activities selected by nursing students in the COPD simulated scenario. In contrast, checklists for evaluating OSCE sessions were based on nursing activities selected by consensus among professors, registered nurses, and clinical placement mentors. These nursing activities were divided into 5 categories: nursing assessment, clinical judgment/decision-making, clinical management/nursing care, communication/interpersonal relationships, and teamwork. Table  3 shows the checklist of nursing activities that nursing students had to perform in the COPD simulated scenario. During the execution of all simulated scenarios, professors checked whether or not the participants performed the selected nursing activities.
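
To make the scoring rule concrete, the sketch below tallies a dichotomous (‘Yes’/‘No’) checklist of this kind into a completion percentage. The categories mirror the 5 listed above, but the items and responses are invented for illustration; this is not the study’s actual instrument.

```python
# Hypothetical example: scoring a dichotomous ('Yes'/'No') OSCE-style checklist.
# The five categories mirror those described above; the items are invented.
checklist = {
    "nursing assessment": ["Yes", "Yes", "No"],
    "clinical judgment/decision-making": ["Yes", "No"],
    "clinical management/nursing care": ["Yes", "Yes", "Yes"],
    "communication/interpersonal relationships": ["Yes"],
    "teamwork": ["Yes", "No"],
}

total_items = sum(len(items) for items in checklist.values())
performed = sum(items.count("Yes") for items in checklist.values())
print(f"Activities performed: {performed}/{total_items} "
      f"({100 * performed / total_items:.1f}%)")
```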

Clinical simulation satisfaction

To determine the satisfaction with clinical simulation perceived by nursing students, the Satisfaction Scale Questionnaire with High-Fidelity Clinical Simulation [ 31 ] was administered after each clinical simulation session. This questionnaire consists of 33 items rated on a 5-point Likert scale ranging from ‘strongly disagree’ to ‘totally agree’. The items are divided into 8 scales: simulation utility, characteristics of cases and applications, communication, self-reflection on performance, increased self-confidence, relation between theory and practice, facilities and equipment, and negative aspects of simulation. Cronbach’s α values for each scale ranged from .914 to .918, and the total scale presents satisfactory internal consistency (Cronbach’s α = .920). The questionnaire includes a final open question for any opinion or suggestion that participating students wish to record after the simulation experience.
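
As background for the reliability coefficients quoted above, here is a minimal sketch of how Cronbach’s α is computed from a respondents × items matrix of Likert scores. The data below are synthetic and purely illustrative; the coefficients reported in this section come from the validated questionnaire [ 31 ].

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Synthetic illustration: 218 respondents answering 33 items on a 1-5 scale,
# driven by a shared 'satisfaction' level so that the items correlate.
rng = np.random.default_rng(0)
latent = rng.normal(3.5, 0.6, size=(218, 1))
responses = np.clip(np.rint(latent + rng.normal(0, 0.7, size=(218, 33))), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```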

Data analysis

Quantitative data were analysed using IBM SPSS Statistics version 24.0 for Windows (IBM Corp., Armonk, NY, USA). Descriptive statistics were calculated for demographic data, clinical performance, and satisfaction with clinical simulation. Differences between the two groups in the satisfaction variables were analysed using independent t-tests, and Cohen’s d was calculated to estimate the effect size of these differences. All tests were two-sided, with statistical significance set at α = 0.05. Subsequently, all students’ opinions and comments were analysed using ATLAS.ti version 8.0 (Scientific Software Development GmbH, Berlin, Germany). All the information contained in these qualitative data was stored, managed, classified, and organised with this software, and reiterated words, sentences, and ideas were grouped into themes using thematic analysis [ 32 ]. Students’ opinions and comments are identified by the letter ‘S’ (student) and a numerical label.
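
For readers who want to reproduce this kind of analysis outside SPSS, below is a minimal Python sketch (with invented scores, not the study data) of an independent two-sided t-test together with a pooled-SD Cohen’s d, the two statistics reported in Tables 4 and 5.

```python
import numpy as np
from scipy import stats

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Invented satisfaction scores; only the group sizes match the study.
rng = np.random.default_rng(1)
formative = rng.normal(4.6, 0.4, 106)   # hypothetical MAES group scores
summative = rng.normal(4.2, 0.5, 112)   # hypothetical OSCE group scores

t, p = stats.ttest_ind(formative, summative)  # two-sided by default
print(f"t = {t:.2f}, p = {p:.4f}, d = {cohens_d(formative, summative):.2f}")
```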

Results

A total of 218 nursing students participated in the study (106 students were assessed through MAES© sessions, whereas 112 students were assessed through OSCE sessions). The age of students ranged from 20 to 43 years (mean = 23.28; SD = 4.376). Most students were women (n = 184; 84.4%).

In formative evaluation, professors verified that 93.2% of students adequately selected both the NIC interventions and their related nursing activities for the resolution of the simulated clinical scenario. Subsequently, these professors verified that 85.6% of the students who participated in each simulated scenario performed the nursing activities they had previously selected. In summative evaluation, students obtained total scores ranging from 65 to 95 points (mean = 74.3; SD = 4.08).

Descriptive data for each scale of the satisfaction with clinical simulation questionnaire, t-tests, and effect sizes (d) of the differences between the two evaluation strategies are shown in Table  4 . Statistically significant differences were found between the two evaluation strategies for all scales of the questionnaire. Students’ satisfaction with clinical simulation was higher on all scales when they were assessed using formative evaluation, including the ‘negative aspects of simulation’ scale, where students perceived fewer negative aspects. The effect size of these differences was large (including for the total score of the questionnaire) (Cohen’s d values > .8), except for the ‘facilities and equipment’ scale, whose effect size was medium (Cohen’s d value > .5) [ 33 ].

Table  5 shows the descriptive data, t-tests, and effect sizes (d) of the differences between the two evaluation strategies for each item of the clinical simulation satisfaction questionnaire. Statistically significant differences were found between the two evaluation strategies for all items of the questionnaire, except for the items ‘I have improved communication with the family’, ‘I have improved communication with the patient’, and ‘I lost calm during any of the cases’. Students’ satisfaction with clinical simulation was higher in formative evaluation sessions for most items, except for the item ‘simulation has made me more aware/worried about clinical practice’, for which students reported being more aware and worried in summative evaluation sessions. Most effect sizes of these differences were small or medium (Cohen’s d values ranged from .238 to .709) [ 33 ]. The largest effect sizes were obtained for the items ‘timing for each simulation case has been adequate’ (d = 1.107), ‘overall satisfaction of sessions’ (d = .953), and ‘simulation has made me more aware/worried about clinical practice’ (d = -.947). In contrast, the smallest effect sizes were obtained for the items ‘simulation allows us to plan the patient care effectively’ (d = .238) and ‘the degree of cases difficulty was appropriate to my knowledge’ (d = .257).
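
The size labels used above follow Cohen’s conventional benchmarks [ 33 ], applied to the absolute value of d (small ≥ .2, medium ≥ .5, large ≥ .8). A small helper makes the mapping explicit:

```python
def effect_size_label(d: float) -> str:
    """Classify Cohen's d using Cohen's conventional benchmarks."""
    magnitude = abs(d)  # direction does not affect size; d = -.947 is still large
    if magnitude >= 0.8:
        return "large"
    if magnitude >= 0.5:
        return "medium"
    if magnitude >= 0.2:
        return "small"
    return "negligible"

for d in (1.107, 0.953, -0.947, 0.709, 0.257, 0.238):
    print(f"d = {d:+.3f} -> {effect_size_label(d)}")
```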

In addition, participating students provided 74 opinions or suggestions expressed as short comments. After the thematic analysis, most students’ comments related to 3 main themes: the utility of the clinical simulation methodology (S45: ‘it has been a useful activity and it helped us to recognize our mistakes and fixing knowledge’, S94: ‘to link theory to practice is essential’), the wish to spend more time on this methodology (S113: ‘I would ask for more practices of this type’, S178: ‘I feel very happy, but it should be done more frequently’), and its integration into other subjects (S21: ‘I consider this activity should be implemented in more subjects’, S64: ‘I wish there were more simulations in more subjects’). Finally, students’ comments about the summative evaluation sessions included 2 further themes: the limited time of the simulation experience (S134: ‘time is short’, S197: ‘there is no time to perform activities and assess properly’) and students’ anxiety (S123: ‘I was very nervous because people were evaluating me around’, S187: ‘I was more nervous than in a real situation’).
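
The published analysis relied on ATLAS.ti and human coding; purely as a toy illustration of the grouping step, a naive keyword-based pass over the quoted comments might look like the sketch below. The themes and keywords are simplified from the text above and are not the study’s actual coding scheme.

```python
# Toy illustration of grouping free-text comments into themes by keyword.
# The real analysis used ATLAS.ti version 8.0 with human thematic coding.
comments = {
    "S45": "it has been a useful activity and it helped us to recognize our mistakes",
    "S113": "I would ask for more practices of this type",
    "S123": "I was very nervous because people were evaluating me around",
}
themes = {
    "utility of clinical simulation": ("useful", "helped"),
    "more time on this methodology": ("more practices", "more frequently"),
    "students' anxiety": ("nervous", "anxious"),
}
for sid, text in comments.items():
    matched = [name for name, kws in themes.items() if any(k in text for k in kws)]
    print(sid, "->", matched or ["uncoded"])
```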

Discussion

The most significant results of our study are the acquisition of nursing competencies through clinical simulation by nursing students and the difference in their satisfaction with this methodology depending on the evaluation strategy employed.

Firstly, professors in this study verified that most students acquired the nursing competencies needed to resolve each clinical situation, performing the majority of the nursing activities required for the resolution of each MAES© session and OSCE station. This result confirms the findings of other studies that have demonstrated nursing competency acquisition by nursing students through clinical simulation [ 34 , 35 ], and specifically of nursing competencies related to critical patient management [ 9 , 36 ].

Secondly, students’ satisfaction under both evaluation strategies can be considered high for most items of the questionnaire, given mean scores quite close to the maximum of the response scale. The high level of satisfaction with clinical simulation expressed by nursing students in this study is also congruent with the empirical evidence, which confirms that this methodology is a useful tool for their learning process [ 6 , 31 , 37 – 40 ].

However, satisfaction with clinical simulation was higher when students were assessed using formative evaluation. The main students’ complaints about summative evaluation were related to the reduced time for performing the simulated scenarios and increased anxiety during their clinical performance. Reduced time is a frequent complaint of students in OSCEs [ 23 , 41 ] and in clinical simulation methodology generally [ 5 , 6 , 10 ]. In this study, professors, registered nurses, and clinical placement mentors tested all simulated scenarios and their checklists beforehand and verified that the time was sufficient for their resolution. Another criticism of summative evaluation is increased anxiety; several studies have demonstrated that students’ anxiety increases during clinical simulation [ 42 , 43 ], and anxiety is considered the main disadvantage of clinical simulation [ 1 – 10 ]. In this sense, anxiety may negatively influence students’ learning process [ 42 , 43 ]. Although current simulation methodology can mimic the real clinical environment to a great degree, it remains questionable whether students’ performance in the testing environment truly represents their ability. Test anxiety might increase in an unfamiliar testing environment; difficulty handling unfamiliar technology (i.e., a monitor, defibrillator, or other devices different from those used in the examinee’s specific clinical environment) or even the need to ‘act as if’ in an artificial scenario (i.e., talking to a simulator, or examining a ‘patient’ knowing he/she is an actor or a mannequin) might all compromise examinees’ performance. The best solution to reduce these complaints is the orientation of students to the simulated environment [ 10 , 21 – 23 ].

Nevertheless, it should be noted that the diversity in the satisfaction scores obtained in our study could be explained not by the choice of assessment strategy, but precisely by the different purposes of formative and summative assessment. In this sense, there is a component of anxiety intrinsic to summative assessment, which must certify the acquisition of competencies [ 10 – 12 , 21 ]. In contrast, this aspect is not present in formative assessment, which is intended to help students understand the distance to the expected level of competence, without penalty effects [ 10 – 12 ].

Both SBA strategies allow educators to evaluate students’ knowledge and its application in a clinical setting. However, formative evaluation is identified as ‘assessment for learning’, whereas summative evaluation is ‘assessment of learning’ [ 44 ]. Using formative evaluation, educators’ responsibility is to ensure not only what students are learning in the classroom, but also the outcomes of their learning process [ 45 ]. In this sense, formative assessment by itself is not enough to determine educational outcomes [ 46 ]. Consequently, a checklist for evaluating students’ clinical performance was included in MAES© sessions. Alternatively, educators cannot make any corrections to students’ performance using summative evaluation [ 45 ]. Gavriel [ 44 ] suggests providing students with feedback in this SBA strategy. Therefore, a debriefing phase was included after each OSCE session in our study. The significance of debriefing recognised by nursing students in our study is also congruent with most of the evidence found [ 13 , 15 , 16 , 47 ]. Nursing students appreciate feedback about their performance during the simulation experience and, consequently, consider debriefing the most rewarding phase of clinical simulation [ 5 , 6 , 48 ]. In addition, nursing students in our study expressed that they could learn from their mistakes during debriefing. Learning from error is one of the main advantages of clinical simulation shown in several studies [ 5 , 6 , 49 ], and mistakes should be considered learning opportunities rather than sources of embarrassment or punitive consequences [ 50 ].

Furthermore, nursing students who participated in our study considered the practical utility of clinical simulation another advantage of this teaching methodology. This result is congruent with previous studies [ 5 , 6 ]. Specifically, our students indicated that this methodology is useful for bridging the gap between theory and practice [ 51 , 52 ]. In this sense, clinical simulation has been shown to reduce this gap, shortening the distance between the classroom and clinical practice [ 5 , 6 , 51 , 52 ]. Therefore, as this teaching methodology relates theory to practice, it helps nursing students prepare for their clinical placements and future careers. According to Benner’s model of skill acquisition in nursing [ 53 ], nursing students become competent nurses through this learning process, acquiring a degree of safety and clinical experience before their professional careers [ 54 ]. Although our research indicates that clinical simulation is a useful methodology for acquiring and learning competencies mainly related to the adequate management and nursing care of critically ill patients, this learning process could be extended to most nursing care settings and their required nursing competencies.

Limitations and future research

Although the checklists employed in OSCEs have been criticised for their subjective construction [ 10 , 21 – 23 ], ours were constructed with the expert consensus of nursing professors, registered nurses, and clinical placement mentors. Alternatively, the self-reported questionnaire used to evaluate clinical simulation satisfaction has strong validity. All simulated scenarios were similar in MAES© and OSCE sessions (same clinical situations, patients, actors, and number of participating students), although the debriefing method employed after them differed, owing to the reduced time available in OSCE sessions. Furthermore, it should be pointed out that the two groups of students involved in our study were from different course years and were exposed to different SBA strategies. Future studies should therefore compare nursing students’ satisfaction with both SBA strategies in the same group of students and using the same debriefing method. Finally, future research should combine formative and summative evaluation for assessing the clinical performance of undergraduate nursing students in simulated scenarios.

Conclusions

Students need to be given feedback about their clinical performance when they are assessed using summative evaluation. Furthermore, it is necessary to evaluate whether they achieve the learning outcomes when they are assessed using formative evaluation. Consequently, combining both evaluation strategies in SBA is recommended. Although students expressed high satisfaction with the clinical simulation methodology, they perceived reduced time and increased anxiety when assessed by summative evaluation. The best solution is the orientation of students to the simulated environment.

Availability of data and materials

The datasets analysed during the current study are available from the corresponding author on reasonable request.

References

Martins J, Baptista R, Coutinho V, Fernandes M, Fernandes A. Simulation in nursing and midwifery education. Copenhagen: World Health Organization Regional Office for Europe; 2018.

Cant RP, Cooper SJ. Simulation-based learning in nurse education: systematic review. J Adv Nurs. 2010;66:3–15.

Chernikova O, Heitzmann N, Stadler M, Holzberger D, Seidel T, Fischer F. Simulation-based learning in higher education: a meta-analysis. Rev Educ Res. 2020;90:499–541.

Kim J, Park JH, Shin S. Effectiveness of simulation-based nursing education depending on fidelity: a meta-analysis. BMC Med Educ. 2016;16:152.

Ricketts B. The role of simulation for learning within pre-registration nursing education—a literature review. Nurse Educ Today. 2011;31:650–4.

Shin S, Park JH, Kim JH. Effectiveness of patient simulation in nursing education: meta-analysis. Nurse Educ Today. 2015;35:176–82.

Bagnasco A, Pagnucci N, Tolotti A, Rosa F, Torre G, Sasso L. The role of simulation in developing communication and gestural skills in medical students. BMC Med Educ. 2014;14:106.

Oh PJ, Jeon KD, Koh MS. The effects of simulation-based learning using standardized patients in nursing students: a meta-analysis. Nurse Educ Today. 2015;35:e6–e15.

Stayt LC, Merriman C, Ricketts B, Morton S, Simpson T. Recognizing and managing a deteriorating patient: a randomized controlled trial investigating the effectiveness of clinical simulation in improving clinical performance in undergraduate nursing students. J Adv Nurs. 2015;71:2563–74.

Ryall T, Judd BK, Gordon CJ. Simulation-based assessments in health professional education: a systematic review. J Multidiscip Healthc. 2016;9:69–82.

Billings DM, Halstead JA. Teaching in nursing: a guide for faculty. 4th ed. St. Louis: Elsevier; 2012.

Nichols PD, Meyers JL, Burling KS. A framework for evaluating and planning assessments intended to improve student achievement. Educ Meas Issues Pract. 2009;28:14–23.

Cant RP, Cooper SJ. The benefits of debriefing as formative feedback in nurse education. Aust J Adv Nurs. 2011;29:37–47.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM Simulation Glossary. Clin Simul Nurs. 2016;12:S39–47.

Dufrene C, Young A. Successful debriefing-best methods to achieve positive learning outcomes: a literature review. Nurse Educ Today. 2014;34:372–6.

Levett-Jones T, Lapkin S. A systematic review of the effectiveness of simulation debriefing in health professional education. Nurse Educ Today. 2014;34:e58–63.

Díaz JL, Leal C, García JA, Hernández E, Adánez MG, Sáez A. Self-learning methodology in simulated environments (MAES©): elements and characteristics. Clin Simul Nurs. 2016;12:268–74.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM : Participant Evaluation. Clin Simul Nurs. 2016;12:S26–9.

Díaz Agea JL, Megías Nicolás A, García Méndez JA, Adánez Martínez MG, Leal CC. Improving simulation performance through self-learning methodology in simulated environments (MAES©). Nurse Educ Today. 2019;76:62–7.

Díaz Agea JL, Ramos-Morcillo AJ, Amo Setien FJ, Ruzafa-Martínez M, Hueso-Montoro C, Leal-Costa C. Perceptions about the self-learning methodology in simulated environments in nursing students: a mixed study. Int J Environ Res Public Health. 2019;16:4646.

Oermann MH, Kardong-Edgren S, Rizzolo MA. Summative simulated-based assessment in nursing programs. J Nurs Educ. 2016;55:323–8.

Harden RM, Gleeson FA. Assessment of clinical competence using an objective structured clinical examination (OSCE). Med Educ. 1979;13:41–54.

Mitchell ML, Henderson A, Groves M, Dalton M, Nulty D. The objective structured clinical examination (OSCE): optimising its value in the undergraduate nursing curriculum. Nurse Educ Today. 2009;29:394–404.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM Simulation Design. Clin Simul Nurs. 2016;12:S5–S12.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM Facilitation. Clin Simul Nurs. 2016;12:S16–20.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM Debriefing. Clin Simul Nurs. 2016;12:S21–5.

Rudolph JW, Raemer D, Simon R. Establishing a safe container for learning in simulation: the role of the presimulation briefing. Simul Healthc. 2014;9:339–49.

Butcher HK, Bulechek GM, Dochterman JMM, Wagner C. Nursing Interventions Classification (NIC). 7th ed. St. Louis: Elsevier; 2018.

Phrampus PE, O’Donnell JM. Debriefing using a structured and supported approach. In: Levine AI, DeMaria S Jr, Schwartz AD, Sim AJ, editors. The comprehensive textbook of healthcare simulation. New York: Springer; 2013. p. 73–84.

Decker S, Fey M, Sideras S, Caballero S, Rockstraw L, Boese T, et al. Standards of best practice: simulation standard VI: the debriefing process. Clin Simul Nurs. 2013;9:S26–9.

Alconero-Camarero AR, Gualdrón-Romero A, Sarabia-Cobo CM, Martínez-Arce A. Clinical simulation as a learning tool in undergraduate nursing: validation of a questionnaire. Nurse Educ Today. 2016;39:128–34.

Mayan M. Essentials of qualitative inquiry. Walnut Creek: Left Coast Press, Inc.; 2009.

Cohen L, Manion L, Morrison K. Research methods in education. 7th ed. London: Routledge; 2011.

Lapkin S, Levett-Jones T, Bellchambers H, Fernandez R. Effectiveness of patient simulation manikins in teaching clinical reasoning skills to undergraduate nursing students: a systematic review. Clin Simul Nurs. 2010;6:207–22.

McGaghie WC, Issenberg SB, Petrusa ER, Scalese RJ. Revisiting “a critical review of simulation-based medical education research: 2003-2009”. Med Educ. 2016;50:986–91.

Abelsson A, Bisholt B. Nurse students learning acute care by simulation - focus on observation and debriefing. Nurse Educ Pract. 2017;24:6–13.

Bland AJ, Topping A, Wood BA. Concept analysis of simulation as a learning strategy in the education of undergraduate nursing students. Nurse Educ Today. 2011;31:664–70.

Franklin AE, Burns P, Lee CS. Psychometric testing on the NLN student satisfaction and self-confidence in learning, design scale simulation, and educational practices questionnaire using a sample of pre-licensure novice nurses. Nurse Educ Today. 2014;34:1298–304.

Levett-Jones T, McCoy M, Lapkin S, Noble D, Hoffman K, Dempsey J, et al. The development and psychometric testing of the satisfaction with simulation experience scale. Nurse Educ Today. 2011;31:705–10.

Zapko KA, Ferranto MLG, Blasiman R, Shelestak D. Evaluating best educational practices, student satisfaction, and self-confidence in simulation: a descriptive study. Nurse Educ Today. 2018;60:28–34.

Kelly MA, Mitchell ML, Henderson A, Jeffrey CA, Groves M, Nulty DD, et al. OSCE best practice guidelines-applicability for nursing simulations. Adv Simul. 2016;1:10.

Cantrell ML, Meyer SL, Mosack V. Effects of simulation on nursing student stress: an integrative review. J Nurs Educ. 2017;56:139–44.

Nielsen B, Harder N. Causes of student anxiety during simulation: what the literature says. Clin Simul Nurs. 2013;9:e507–12.

Gavriel J. Assessment for learning: a wider (classroom-researched) perspective is important for formative assessment and self-directed learning in general practice. Educ Prim Care. 2013;24:93–6.

Taras M. Summative and formative assessment. Act Learn High Educ. 2008;9:172–82.

Wunder LL, Glymph DC, Newman J, Gonzalez V, Gonzalez JE, Groom JA. Objective structured clinical examination as an educational initiative for summative simulation competency evaluation of first-year student registered nurse anesthetists’ clinical skills. AANA J. 2014;82:419–25.

Neill MA, Wotton K. High-fidelity simulation debriefing in nursing education: a literature review. Clin Simul Nurs. 2011;7:e161–8.

Norman J. Systematic review of the literature on simulation in nursing education. ABNF J. 2012;23:24–8.

King A, Holder MG Jr, Ahmed RA. Errors as allies: error management training in health professions education. BMJ Qual Saf. 2013;22:516–9.

Higgins M, Ishimaru A, Holcombe R, Fowler A. Examining organizational learning in schools: the role of psychological safety, experimentation, and leadership that reinforces learning. J Educ Change. 2012;13:67–94.

Hope A, Garside J, Prescott S. Rethinking theory and practice: Pre-registration student nurses experiences of simulation teaching and learning in the acquisition of clinical skills in preparation for practice. Nurse Educ Today. 2011;31:711–7.

Lisko SA, O’Dell V. Integration of theory and practice: experiential learning theory and nursing education. Nurs Educ Perspect. 2010;31:106–8.

Benner P. From novice to expert: excellence and power in clinical nursing practice. Menlo Park: Addison-Wesley Publishing; 1984.

Nickless LJ. The use of simulation to address the acute care skills deficit in pre-registration nursing students: a clinical skill perspective. Nurse Educ Pract. 2011;11:199–205.

Acknowledgements

The authors appreciate the collaboration of nursing students who participated in the study.

STROBE statement

All methods were carried out in accordance with the 22-item STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) checklist for reporting cross-sectional studies.

Funding

The authors have no sources of funding to declare.

Author information

Authors and Affiliations

Fundación San Juan de Dios, Centro de Ciencias de la Salud San Rafael, Universidad de Nebrija, Paseo de La Habana, 70, 28036, Madrid, Spain

Oscar Arrogante, Gracia María González-Romero, Eva María López-Torre, Laura Carrión-García & Alberto Polo

Contributions

OA: Conceptualization, Data Collection, Formal Analysis, Writing – Original Draft, Writing - Review & Editing, Supervision; GMGR: Conceptualization, Data Collection, Writing - Review & Editing; EMLT: Conceptualization, Writing - Review & Editing; LCG: Conceptualization, Data Collection, Writing - Review & Editing; AP: Conceptualization, Data Collection, Formal Analysis, Writing - Review & Editing, Supervision. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Oscar Arrogante .

Ethics declarations

Ethics approval and consent to participate

The research committee of the Centro Universitario de Ciencias de la Salud San Rafael-Nebrija approved the study (P_2018_012). In accordance with ethical standards, all participants provided written informed consent and received written information about the study and its goals. Additionally, written informed consent for audio-video recording was obtained from all participants.

Consent for publication

Not applicable.

Competing interests

The authors declare they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Arrogante, O., González-Romero, G.M., López-Torre, E.M. et al. Comparing formative and summative simulation-based assessment in undergraduate nursing students: nursing competency acquisition and clinical simulation satisfaction. BMC Nurs 20 , 92 (2021). https://doi.org/10.1186/s12912-021-00614-2

Received: 09 February 2021

Accepted: 17 May 2021

Published: 08 June 2021

DOI: https://doi.org/10.1186/s12912-021-00614-2


Keywords

  • Clinical competence
  • High Fidelity simulation training
  • Nursing students


Formative Assessment Strategies for Healthcare Educators

Formative assessments are those lower-stakes assessments that are delivered during instruction in some way, or 'along the way' so to speak. As an educator, it was always a challenge to identify what my students were understanding, what skills they had acquired, and if or how I should adjust my teaching strategy to help improve their learning. I’m guessing I am not alone in this. In medical education, the pace is so fast that many instructors feel they do not have the time to spare on assessments ‘along the way’, and would rather focus on teaching everything students need for the higher-stakes exams. With medical education being incredibly intense and fast, this is completely understandable. However, there must be a reason so much research supports the effectiveness of administering formative assessments… along the way.

One reason formative assessments prove so useful is that they provide meaningful feedback, feedback that can be used by both the instructor and the students.

Results from formative assessments should relate directly to the learning objectives established by the instructor, and because of this, the results provide trusted feedback for both the instructor and the student. This is incredibly important. For instructors, it allows them to make immediate adjustments to their teaching strategy; for students, it helps them develop a more reliable self-awareness of their own learning. These two things alone are very useful, but when combined, they can result in an increase in student outcomes.

Here are 5 teaching strategies for delivering formative assessments that provide useful feedback opportunities.  

1. Pre-Assessment:

Provides an assessment of students’ prior knowledge, helps identify prior misconceptions, and allows instructors to adjust their approach or target certain areas

  • When instructors have feedback from student assessments prior to class, it is easier to tailor the lesson to student needs.
  • Posing questions prior to class can help students focus on what the instructor thinks is important.
  • By assessing students before class, it helps ensure students are more prepared for what learning will take place in class.
  • Pre-assessments can provide more ‘in-class’ time flexibility: knowing ahead of time which knowledge gaps students may have allows the instructor to use class time more flexibly, with fewer ‘surprises’.

2. Frequent class assessments:

Provides students with feedback for learning during class and focuses students on important topics, which helps increase learning gains

  • Adding more formative assessments during class increases student retention.
  • Frequent formative assessments help students stay focused by giving them natural ‘breaks’ from either a lecture or the activity.
  • Multiple formative assessments can provide students with a “road-map” to what the instructor feels is important (i.e. what will appear on summative assessments).
  • By using frequent assessments, the instructor can naturally help students with topic or content transitions during a lecture or activity.
  • The data/feedback from the assessments can help instructors better understand which instructional methods are most effective- in other words, what works and what doesn’t.

3. Guided Study assessments (group or tutorial):

Provides students with opportunities to acquire the information needed to complete the assessment, for example through research or group work, and increases students’ self-awareness of their own knowledge (gaps)

  • Assessments where students are expected to engage in research allow them to develop and use higher-level thinking skills.
  • Guided assessments engage students in active learning either independently or through collaboration with a group.
  • Small group assessments encourage students to articulate their thinking and reasoning, and help them develop self-awareness about what they do and do not yet understand.
  • Tutorial assessments can provide the instructor real-time feedback for student misconceptions and overall understanding- allowing them to make important decisions about how to teach particular topics.

4. Take-Home assessments:

Allows students to preview the instructor’s assessment style, is low-stakes and self-paced so students can engage with the material, and provides the instructor with formative feedback

  • Assessments that students can engage in outside of class give them a ‘preview’ of the information they will likely need to retrieve again on a summative exam.
  • When students take an assessment at home, the instructor can receive feedback with enough time to adjust the classroom instruction to address knowledge gaps or misconceptions.
  • Take home assessments can help students develop self-awareness of their own misunderstandings or knowledge gaps.

5. “Bedside” observation:

Informs students in clinical settings of their level of competence and learning, and may improve motivation and participation in clinical activities.

  • Real-time formative assessments can provide students with critical feedback related to the skills that are necessary for practicing medicine.
  • On-the-fly assessments can help clinical instructors learn more about student understanding, as well as any changes they can make in their instruction.
  • Formative assessments in a clinical setting can equip clinical instructors with a valuable tool to help them make informed decisions around their teaching and student learning.
  • Bedside assessments provide a standardized way of formatively assessing students in a very unpredictable learning environment.

The challenge for many instructors is often the “how” of delivering formative assessments. Thankfully, improving teaching and learning through the use of formative assessments (and feedback) can be greatly enhanced with educational technology. DaVinci Education’s Leo platform provides multiple ways in which you can deliver formative assessments. With Leo’s exam feature you can:

  • Assign pre-class, in-class or take-home quizzes
  • Deliver IRATs used during TBL exercises to assess student individual readiness
  • Deliver GRATs used during TBL exercises by using Leo’s digital scratch-off tool to encourage collaboration and assess group readiness
  • Monitor student performance in real-time using Leo’s Monitor Exam feature
  • Customize student feedback options during or following an assessment



Assessment and Evaluation in Nursing Education: A Simulation Perspective

  • First Online: 29 February 2024

  • Loretta Garvey 7 &
  • Debra Kiegaldie 8  

Part of the book series: Comprehensive Healthcare Simulation (CHS)

Assessment and evaluation are used extensively in nursing education. These terms are often used interchangeably, which can create confusion, yet key differences are associated with each.

Assessment in undergraduate nursing education is designed to ascertain whether students have achieved their potential and have acquired the knowledge, skills, and abilities set out within their course. Assessment aims to understand and improve student learning and must be at the forefront of curriculum planning to ensure assessments are well aligned with learning outcomes. In the past, the focus of assessment has often been on a single assessment. However, it is now understood that we must examine the whole system or program of assessment within a course of study to ensure integration and recognition of all assessment elements to holistically achieve overall course aims and objectives. Simulation is emerging as a safe and effective assessment tool that is increasingly used in undergraduate nursing.

Evaluation, however, is more summative in that it evaluates student attainment of course outcomes and their views on the learning process to achieve those outcomes. Program evaluation takes assessment of learning a step further in that it is a systematic method to assess the design, implementation, improvement, or outcomes of a program. According to Frye and Hemmer, student assessments (measurements) can be important to the evaluation process, but evaluation measurements come from various sources (Frye and Hemmer, Med Teach 34:e288–e99, 2012). Essentially, program evaluation is concerned with the utility of its process and results (Alkin and King, Am J Eval 37:568–79, 2016). The evaluation of simulation as a distinct program of learning is an important consideration when designing and implementing simulation in undergraduate nursing. This chapter will examine assessment and program evaluation from the simulation perspective in undergraduate nursing to explain the important principles, components, best practice approaches, and practical applications that must be considered.


References

Masters GN. Reforming education assessment: imperatives, principles, and challenges. Camberwell: ACER Press; 2013.

MacLellan E. Assessment for Learning: the differing perceptions of tutors and students. Assess Eval High Educ. 2001;26(4):307–18.

Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9):S63–7.

Alinier G. Nursing students’ and lecturers’ perspectives of objective structured clinical examination incorporating simulation. Nurse Educ Today. 2003;23(6):419–26.

Norcini J, Anderson MB, Bollela V, Burch V, Costa MJ, Duvivier R, et al. 2018 Consensus framework for good assessment. Med Teach. 2018;40(11):1102–9.

Biggs J. Constructive alignment in university teaching: HERDSA. Rev High Educ. 2014;1:5–22.

Hamdy H. Blueprinting for the assessment of health care professionals. Clin Teach. 2006;3(3):175–9.

Welch S. Program evaluation: a concept analysis. Teach Learn Nurs. 2021;16(1):81–4.

Frye AW, Hemmer PA. Program evaluation models and related theories: AMEE Guide No. 67. Med Teach. 2012;34(5):e288–e99.

Johnston S, Coyer FM, Nash R. Kirkpatrick's evaluation of simulation and debriefing in health care education: a systematic review. J Nurs Educ. 2018;57(7):393–8.

ACGM. Glossary of Terms: Accreditation Council for Graduate Medical Education 2020. https://www.acgme.org/globalassets/pdfs/ab_acgmeglossary.pdf .

Shadish WR, Luellen JK. History of evaluation. In: Mathison S, editor. Encyclopedia of evaluation. Sage; 2005. p. 183–6.

Lewallen LP. Practical strategies for nursing education program evaluation. J Prof Nurs. 2015;31(2):133–40.

Kirkpatrick DL. Evaluation of training. In: Craig RL, Bittel LR, editors. New York: McGraw Hill; 1967.

Cahapay M. Kirkpatrick model: its limitations as used in higher education evaluation. Int J Assess Tools Educ. 2021;8(1):135–44.

Yardley S, Dornan T. Kirkpatrick's levels and education 'evidence'. Med Educ. 2012;46(1):97–106.

Kirkpatrick J, Kirkpatrick W. An introduction to the new world Kirkpatrick model. Kirkpatrick Partners; 2021.

Bhatia M, Stewart AE, Wallace A, Kumar A, Malhotra A. Evaluation of an in-situ neonatal resuscitation simulation program using the new world Kirkpatrick model. Clin Simul Nurs. 2021;50:27–37.

Lippe M, Carter P. Using the CIPP model to assess nursing education program quality and merit. Teach Learn Nurs. 2018;13(1):9–13.

Kardong-Edgren S, Adamson KA, Fitzgerald C. A review of currently published evaluation instruments for human patient simulation. Clin Simul Nurs. 2010;6(1):e25–35.

Solutions S. Reliability and Validity; 2022

Rauta S, Salanterä S, Vahlberg T, Junttila K. The criterion validity, reliability, and feasibility of an instrument for assessing the nursing intensity in perioperative settings. Nurs Res Pract. 2017;2017:1048052.

PubMed   PubMed Central   Google Scholar  

Jeffries PR, Rizzolo MA. Designing and implementing models for the innovative use of simulation to teach nursing care of ill adults and children: a national, multi-site, multi-method study (summary report). Sci Res. 2006;

Unver V, Basak T, Watts P, Gaioso V, Moss J, Tastan S, et al. The reliability and validity of three questionnaires: The Student Satisfaction and Self-Confidence in Learning Scale, Simulation Design Scale, and Educational Practices Questionnaire. Contemp Nurse. 2017;53(1):60–74.

Franklin AE, Burns P, Lee CS. Psychometric testing on the NLN Student Satisfaction and Self-Confidence in Learning, Simulation Design Scale, and Educational Practices Questionnaire using a sample of pre-licensure novice nurses. Nurse Educ Today. 2014;34(10):1298–304.

Guise J-M, Deering SH, Kanki BG, Osterweil P, Li H, Mori M, et al. Validation of a tool to measure and promote clinical teamwork. Simul Healthc. 2008;3(4)

Millward LJ, Jeffries N. The team survey: a tool for health care team development. J Adv Nurs. 2001;35(2):276–87.


Author information

Authors and affiliations

Federation University Australia, University Dr, Mount Helen, VIC, Australia
Loretta Garvey

Holmesglen Institute, Healthscope Hospitals, Monash University, Mount Helen, VIC, Australia
Debra Kiegaldie

Corresponding author: Loretta Garvey

Editor information

Editors and affiliations

Emergency Medicine, Icahn School of Medicine at Mount Sinai, Director of Emergency Medicine Simulation, Mount Sinai Hospital, New York, NY, USA
Jared M. Kutzin

School of Nursing, University of California San Francisco, San Francisco, CA, USA
Perinatal Patient Safety, Kaiser Permanente, Pleasanton, CA, USA
Connie M. Lopez

Eastern Health Clinical School, Faculty of Medicine, Nursing & Health Sciences, Monash University, Melbourne, VIC, Australia


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Garvey, L., Kiegaldie, D. (2023). Assessment and Evaluation in Nursing Education: A Simulation Perspective. In: Kutzin, J.M., Waxman, K., Lopez, C.M., Kiegaldie, D. (eds) Comprehensive Healthcare Simulation: Nursing. Cham: Springer. https://doi.org/10.1007/978-3-031-31090-4_14

Published: 29 February 2024
Print ISBN: 978-3-031-31089-8
Online ISBN: 978-3-031-31090-4



Perceptions, Practices, and Challenges of Formative Assessment in Initial Nursing Education

Halima Lajane, Rachid Gouifrane, Rabia Qaisar, Ghizlane Chemsi and Mohamed Radid

Background:

Formative assessment is a pedagogical practice that improves teaching as well as students' learning, and a large body of research demonstrates interest in this practice in the field of education. Despite its great pedagogical potential, however, teachers integrate this assessment practice poorly, partly because of the tensions between formative and summative assessment: institutions formally require only the latter.

Objective:

The purpose of this research is to explore, first, how nursing teachers conceptualize formative assessment and how they judge its usefulness in the teaching/learning process, and second, to identify the main challenges that could influence the practice of formative assessment in the context of nursing education.

Methods:

The study used a descriptive quantitative research design. The target population was composed of nursing teachers (N = 50) from the Higher Institute of Nursing Professions and Health Techniques of Casablanca (ISPITS).

This population comprises all permanent nursing teachers working at the ISPITS of Casablanca, distributed across its various fields. They are responsible for the initial training and practical supervision of nursing students and health technicians enrolled in the professional licence cycles.

To meet our research objective, we conducted a survey using a questionnaire with 37 items divided into five dimensions, based on Wiliam and Thompson's (2007) model of formative assessment.

Results:

The results revealed that, in teachers' practice, the informal approach to formative assessment takes precedence over formal approaches based on planned assessment tools. In addition, teachers perceive the usefulness of formative assessment primarily in terms of diagnosing students' learning difficulties rather than guiding teaching.

Furthermore, the study showed that the time formative assessment demands and the diversity of activities required of teachers may be obstacles to a broader practice of formative assessment.

Conclusion:

This study offers suggestions that may help teachers facilitate and innovate in the implementation of formative assessment in the field of nursing. Our research perspective is to demonstrate the effect of formative assessment on student learning outcomes through a field experiment conducted in collaboration with nursing teachers.

1. INTRODUCTION

In terms of training, the effects of teaching on students' achievements are often uncertain [1, 2]. Formative assessment, however, is intended as a means by which practitioners can judge the learning attained during teaching sequences [3, 4]. More specifically, this pervasive approach in pedagogical practice provides the teacher and the students with information on the progress of learning [5, 6]. The purpose of formative assessment is to improve students' learning, not to judge their performance [7, 8]. Bloom's initial conception, as part of mastery pedagogy, defined formative assessment as an approach that allows students to remediate their learning difficulties; the expanded concept holds that it is an approach that allows the regulation of both learning and teaching [9].

In the field of education, formative assessment is carried out in both formal and informal manners, based on class interactions [10, 11]. The formal version of formative assessment allows for retroactive regulation of learning difficulties that could not be corrected by the interactive regulations arising from informal approaches [9]. Moreover, through self-assessment and peer-evaluation activities, formal approaches allow for the self-regulation of students' learning and the development of their autonomy [3, 6].

Although formative approaches to evaluation concern both practitioners and managers, there are barriers to a more developed practice. Implementing this type of assessment may be difficult given growing student numbers and the diversity of activities required of teachers, which leave less time for this evaluative practice [12, 13]. Practitioners may also hesitate to implement it because of the tensions between formative and summative evaluation and the fear that formative evaluation consumes too many resources [14]. Unlike formative assessment, summative assessment is mandatory, as it is formally integrated into the planning of teaching [15].

2. LITERATURE REVIEW

2.1. The Role and Importance of Formative Assessment

Formative assessment is the subject of several publications that examine this pedagogical practice and its effects on student learning and the quality of teaching [16-19]. This form of assessment informs students about the quality of their work and how to self-regulate [20]. More specifically, formative assessment helps students develop their learning through feedback provided by teachers [21]. Black and Wiliam (1998) [22] recommend that teachers integrate formative assessment into their classroom teaching practices, given the benefits it offers. Moreover, formative approaches to assessment give rise to three main types of regulation of teaching: interactive, retroactive, and proactive. Interactive regulation is based on classroom questioning and group interactions, and allows continuous adaptation of teaching. Retroactive regulation is carried out at the end of a teaching phase, builds on formal steps of formative assessment, and aims to verify the achievement of learning objectives by all learners. Proactive regulation is driven by a concern for pedagogical differentiation, taking into account the needs of learners [9, 10].

2.2. Practice of Formative Assessment

The concept of formative assessment was first introduced by Scriven as part of the assessment of training programs, to enable adjustments. Bloom subsequently applied the concept to student learning in the mastery pedagogy model [9]. French-language research further broadened Bloom's original view of formative assessment [9]. The main stipulations of this enlargement were: a) integrating formative assessment into all learning situations; b) using various means of data collection; c) regulating teaching; d) active participation of students in the formative assessment; e) differentiating teaching; and f) continuous improvement of teaching. Regarding implementation, several authors [9-11, 23, 24] state that this practice can be formal or informal. Formative assessment is formal when teachers use planned instruments such as exercises, online tests, questionnaires, and self-assessment forms; it is informal when, in the absence of tools, it relies on group exchanges, classroom questioning, and observation during teaching sequences. The formal version allows teachers to propose retroactive regulations for students' learning difficulties [9], while informal formative assessment allows interactive regulation throughout the teaching/learning process [10, 25].

2.3. Reference Framework for the Practice of Formative Assessment

Formative assessment is the subject of several theoretical guidelines and developments. Given the large number of existing models that deal with this concept, we chose to conduct this study using the Wiliam and Thompson model [26], which was derived from the original model of Leahy [27]. This framework develops the main elements of formative assessment identified in the literature: it conceptualizes formative assessment through five key strategies grounded in three teaching/learning processes. Leahy [27] concluded that these strategies are beneficial in all classes and across different fields.

Table 1 outlines the five key strategies by linking them to the three teaching/learning processes.

The five reference strategies are:

1. Clarify and share learning intentions and success criteria: communicate the objectives and assessment criteria to students clearly, while taking into account the requirements of particular disciplines.

2. Organize effective classroom discussions and other learning tasks that demonstrate learners' level of understanding: pedagogical actions that can yield clues for the regulation of teaching.

3. Provide feedback that moves learners forward, focusing on the self-regulation process in which the learner is engaged.

4. Encourage learners to take responsibility for their own learning: a responsibility shared between teacher and learner, with the learner participating in his or her learning through self-assessment against the required evaluation criteria.

5. Encourage students to be resources for their peers: the teacher's job is to offer self-assessment and peer-evaluation activities.

2.4. Barriers to Formative Assessment

Although formative assessment promotes learning and improves teaching, there are nevertheless obstacles to a wider practice of this form of assessment. A number of studies have investigated factors that may influence its implementation in the classroom [28-30]. Quyen's review [31] analysed several studies and concluded that the key factors impeding the implementation of formative assessment were teachers' belief in the practice, student learning and commitment to assessment tasks, the time required for formative assessment activities, and teacher workload; other factors were teacher training and a lack of knowledge of effective formative assessment. The same review showed that, with a large number of students in the classroom, it is difficult for teachers to set up formative assessment activities. Kazman's model, as presented by Fulmer [32], classifies the factors influencing the practice of formative assessment into three levels (micro, meso, and macro). The micro level refers to the context of the class and the individual characteristics of the teacher and the student, such as the number of students per class, commitment to formative assessment tasks, and teachers' evaluative skills; it can also include access to educational materials usable for this practice in its innovative form. The meso level covers institutional factors, including the support provided by the administration and institutional policies on formative assessment. The macro level mainly covers national education policies, which can influence classroom formative assessment practices.

Based on these theoretical models, we built our conceptual framework for the research (Fig. 1).

2.5. Context

In the nursing profession, several studies have explored formative assessment and the interest of this practice for improving nursing students' learning. One study [33] focused on formative assessment in the paramedical field and demonstrated that students participating in formal formative assessment held positive perceptions of the approach at each assessment event. A study conducted by Elliott [34] concluded that self-assessment and peer-assessment strategies increase nursing students' motivation to participate in class project groups. Another study [35] demonstrated that formative assessment with well-planned, quality feedback leads to effective learning and is an essential component of nursing education. In Morocco, there is a lack of literature on the assessment of learning in nursing and health techniques; given the important developments in the discipline, this aspect of nurse training deserves exploration. With this in mind, the present contribution examines one crucial aspect of teaching activity: the formative assessment of learning at the Higher Institute of Nursing Professions and Health Techniques (ISPITS) of Casablanca. This institute is a higher-education establishment outside the universities, following the recent introduction of the Master-Doctorate system in these institutes in 2015. The mission of ISPITS is the initial and continuing training of nurses and health technicians, guaranteeing a quality of training that meets the recommended educational and professional requirements.

3. METHODOLOGY

Our inquiry focused on formative assessment practices as developed by the health-professions teachers at the Institute. Formative assessment plays an important role in the second cycle of the paramedical studies training program (Paramedical Education track), whose curriculum addresses the design of formative assessment and largely illustrates the contributions of this practice to student learning. In addition, the official texts governing the training of nurses in Morocco recognize formative assessment as a method of assessing learning.

Specifically, our research objective was to answer the following questions:

• How do nursing teachers design formative assessments?

• How do they view the usefulness of this practice?

• What are the obstacles associated with the practice of formative assessment?

3.1. Study Design

This research is quantitative and descriptive and was conducted during 2019 at the ISPITS of Casablanca. The target population was composed of the permanent nursing-professions teachers of ISPITS Casablanca (N = 50); the sample was therefore exhaustive, comprising the entire target population.

The permanent teachers, according to their basic training (polyvalent nurse, midwife, neonatology nurse, etc.), are assigned to the different options within the institute (Table 2).

3.2. Materials and/or Subjects

To meet our research objective, we conducted a survey using a questionnaire, a data-collection tool well suited to quantitative studies. The questionnaire was developed according to the theoretical framework of Wiliam and Thompson [26], taking into account the purpose and context of our study. In addition to a section reserved for demographic data, it covers five dimensions: functions of formative assessment and perception of its usefulness; sharing and discussing learning intentions and success criteria; how the formative assessment is implemented; the temporality of formative assessment and regulation; and teacher training and barriers to the practice of formative assessment. The questionnaire consists of 37 statements, each answered on a 5-point scale. Once written, the questionnaire was validated by supervisors and resource persons. Before administering it to participants, we conducted a pre-test with ten nursing teachers to verify the relevance, clarity, and comprehensibility of the items. The internal consistency of the survey was measured to confirm its reliability: Cronbach's α for the 37-item questionnaire was 0.854.
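For readers who wish to reproduce this kind of reliability check, the sketch below shows one way to compute Cronbach's α from raw Likert responses in Python. It is illustrative only, not the authors' analysis script: the data layout, column names, and simulated responses are assumptions.

    # Minimal sketch (assumed data layout): Cronbach's alpha for a
    # questionnaire whose items are the columns of a DataFrame.
    import numpy as np
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """alpha = (k/(k-1)) * (1 - sum of item variances / variance of total score)."""
        items = items.dropna()                        # listwise deletion of incomplete responses
        k = items.shape[1]                            # number of items (37 in this study)
        item_var_sum = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)     # variance of each respondent's total score
        return (k / (k - 1)) * (1 - item_var_sum / total_var)

    # Hypothetical usage: 40 respondents, 37 items, responses coded 1-5.
    rng = np.random.default_rng(1)
    responses = pd.DataFrame(rng.integers(1, 6, size=(40, 37)),
                             columns=[f"item_{i}" for i in range(1, 38)])
    print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")

On random data, α will be near zero; on a coherent instrument such as the one described here, the same computation yields values like the reported 0.854.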


3.3. Statistical Analysis

We analysed the data using Microsoft Office Excel and SPSS version 20. The results are presented as tables and figures, and the analysis is based on descriptions of frequencies, percentages, means, and standard deviations.
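To illustrate this descriptive approach, the following sketch (ours, not the authors' SPSS output; the item names and data are hypothetical) computes per-item frequencies, percentages, means, standard deviations, and the share of 'positive' responses (4 or 5 on the 1-5 scale), which is the form in which the results below are reported.

    # Minimal sketch of the descriptive analysis: frequencies, percentages,
    # mean +/- SD per item, and percentage of positive (4-5) responses.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    responses = pd.DataFrame(
        rng.integers(1, 6, size=(40, 3)),             # 40 teachers, 3 example items
        columns=["identify_strengths", "guide_progress", "increase_autonomy"])

    for item in responses.columns:
        counts = responses[item].value_counts().sort_index()   # frequency of each 1-5 answer
        pct = (100 * counts / len(responses)).round(1)         # percentages
        positive = 100 * (responses[item] >= 4).mean()         # share of positive responses
        print(f"{item}: mean {responses[item].mean():.2f} +/- "
              f"{responses[item].std(ddof=1):.3f}, positive {positive:.1f}%")
        print(pd.DataFrame({"n": counts, "%": pct}))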

3.4. Ethical Statement

Ethical considerations were addressed before the questionnaire was distributed. We obtained an authorization issued jointly by the Regional Directorate of the Ministry of Health of Casablanca and the Directorate of ISPITS of Casablanca; the application included clarification of the objectives of the research and its conduct. We also obtained the consent of the study participants after explaining our commitment to anonymity and data confidentiality.

4. RESULTS

4.1. Demographic Characteristics of the Participants

Forty teachers participated in the study, a response rate of 80% (40 of 50; Table 3).

4.2. Key Results

Table 4 compares the scores obtained for each response. The dimensions with the largest proportions of positive responses were 'identifying the students' strengths and weaknesses' (87.5% positive responses), 'guiding student progress' (85%), and 'increasing the students' autonomy' (75%). The 'directing teaching planning' dimension received only 45% positive responses (mean 3.08 ± 1.559).

Table 5 shows that 85% of teachers reported sharing learning goals with students 'quite often', and 62.5% reported discussing learning goals 'very often'. Regarding the criteria for success, 45% of teachers reported discussing them 'quite often'. The questionnaire results also indicate that 45% of teachers discussed the modalities of the summative assessment with students 'very often', whereas 47.5% 'rarely' discussed the terms of the formative assessment (mean 2.88 ± 1.436).

Regarding the implementation of the formative assessment, Table 6 provides a ranking of the modalities practiced by the teachers interviewed.

Class questioning ranked first among the various modalities: 65% of teachers 'very often' used questioning in class to verify students' understanding (mean 4.53 ± 0.816).

Group discussions ranked second: 25% of teachers proposed this formative approach 'quite often', 25% 'occasionally', and 17.5% 'very often' (mean 3.2 ± 1.224).

Forty percent of those surveyed 'rarely' offered exercises and tests for formative assessment (mean 3.05 ± 1.176), and few teachers said they used the other modalities: self-assessment, peer evaluation, and digital assessment.

Table 7 shows that 52.5% of teachers declared that they 'never' carry out formative assessment after each teaching activity (mean 2.13 ± 1.436), whereas 40% reported performing it 'very often' towards the end of a course session and 40% 'very often' at the end of a course. More than 30% reported using formative assessment either 'often' or 'quite often' before the summative evaluation of a course. The majority of teachers surveyed (over 80%) reported giving feedback to their students, and 47.5% reported giving individual feedback 'quite often' after formative assessment activities. A review of teachers' responses on the types of regulation proposed revealed that 'giving more explanations' was used most often, with 40% reporting doing so 'quite often' (mean 3.9 ± 1.297) (Table 8).

With regard to teacher training, more than 60% of teachers felt that their initial training was not sufficient for carrying out formative assessment. Furthermore, more than 70% believed that ongoing training on this practice would be useful to them (mean 4.50 ± 0.934), and 62.5% of participants expected ongoing training on digital formative assessment (mean 4.10 ± 1.464).


The study also explored the barriers that influence the practice of formative assessment. Fig. (2) shows that, according to teachers, the main barriers are the time required for formative assessment activities (82.5% positive responses), the range of activities required of teachers (72.5%), and the commitment of students (75%). The other two barriers examined, lack of administrative support for formative assessment and the number of students, drew markedly fewer positive responses.

5. DISCUSSION

In this section, we discuss the most striking results of the study.

5.1. Interpretation and Suggestions

As part of our study, we analysed teachers' reported practices in integrating formative assessment while searching for possible obstacles to its implementation. The study showed that teachers integrate formative assessment into their practice, but their knowledge does not fully correspond to the broad theoretical guidelines. Teachers perceive the tool as serving a diagnostic function, useful for identifying students' learning difficulties and guiding their learning, but do not recognize that formative assessment has two inseparable functions: a diagnostic function for identifying learning difficulties and a regulatory function aimed at regulating teaching [22].

Concerning the strategy of sharing and discussing learning intentions and success criteria, this behaviour appears to be shared by more than half of the teachers. However, unlike for the summative assessment, discussion of the modalities of the formative assessment is far from universal. This may be related to the formal place that summative assessment occupies in the modalities of the assessment of learning at ISPITS Casablanca.

Regarding the modalities of formative assessment, the practices of the teachers interviewed also varied. A comparative analysis of the responses clearly demonstrated scant use of formal approaches calling for tools such as exercises and tests; teachers seem content with formative assessment based on classroom questioning and group interactions. Furthermore, modalities that involve the student in regulating their own learning, through self-assessment and peer evaluation, are rarely mobilized. In their review, Black and Wiliam [22] encourage teachers to use classroom questioning and discussion as opportunities to improve students' understanding, but they also stress the value of formal approaches with exercises and tests. This finding is inconsistent with the broader conception of formative assessment advanced by Allal [9], in which innovative approaches must combine interactive regulatory modalities based on informal evaluation with instrumented formal modalities designed for retroactive regulation.

This study revealed that the majority of teachers consider their training insufficient for the practice of formative assessment. Their expectations for continuing education concern, in particular, the practical modalities of formative assessment, the modalities of regulation, and how to conduct digital assessment. These results suggest avenues for continuing education: training programs on the assessment of learning should be set up at the ISPITS of Casablanca; professional reading in the field of formative assessment should be encouraged, with teachers given access to educational and scientific databases specializing in assessment; and teachers should be offered the opportunity to incorporate innovative approaches to educational evaluation, such as the use of new information and communication technologies.

This initial study, conducted to identify teachers' knowledge and perceptions of the practice of formative assessment, revealed potential difficulties of implementation. According to the results, three obstacles are the most significant: the time required for formative assessment activities, the range of activities required of teachers, and the commitment of students. Paramedical teachers in Morocco face many curricular requirements: they act as trainers in the academic environment and are responsible for the supervision of clinical internships, in addition to various tasks involving the organization of internships, teaching planning, and exam supervision.

5.2. Comparison with Previous Studies

Previous research has sought to understand teachers' perceptions and knowledge of the practice of formative assessment. One study [36] found that teachers lack expertise and skills in formative assessment strategies, with negative implications for integrating this form of assessment into their teaching. A study conducted by Fahez [37] demonstrated that teachers use summative evaluation more frequently than formative assessment in the classroom; it also revealed flawed practice of this form of evaluation, with low mobilization of formal formative assessment procedures such as classroom testing, self-assessment, and peer review. In addition, a recent study [38] indicated that teachers view formative assessment in a traditional manner and lack knowledge about its usefulness and use, demonstrating the need to develop teachers' knowledge and skills in formative assessment. On the other hand, another study [39] showed that the teachers interviewed share positive perceptions of formative assessment and its use for improving learning and training; they also believe that classroom assessment is essential for planning teaching and obtaining effective evidence of student progress.

6. LIMITATIONS

It is important to mention the limitations of our study. The data presented are the results of an initial diagnosis conducted as part of a doctoral research project on the use of digital technology for formative assessment in the field of nursing. This diagnosis provided a general picture of the orientations of teachers' formative assessment practice relative to expert theories. The study can serve as a starting point for further research based on observation of classroom practices, since it will be necessary to examine how this assessment is actually put into practice.

7. CONCLUSION

The study shows that teachers incorporate formative assessment into their practice. However, their expertise does not fully match the directions of the Wiliam and Thompson (2007) model. Teachers content themselves with an informal practice of formative assessment, underusing the modalities that support self-regulation of learning, such as self-assessment and peer evaluation. The study also revealed a need for continuing education in this area, as well as challenges to the practice.

Thus, as a part of the continuity of our research project, we will try to:

• Offer nursing students, in a theoretical course, an online formative assessment designed to stimulate their interest in the assessment process.

• Measure the effect of implementing formal formative assessments on students' learning and on their motivation to learn.

ETHICS APPROVAL AND CONSENT TO PARTICIPATE

This study was approved by the local ethics committee of the Higher Institute of Nursing Professions and Health Techniques of Casablanca (ISPITS), Morocco.

HUMAN AND ANIMAL RIGHTS

Not applicable

CONSENT FOR PUBLICATION

Informed consent was obtained from all participants.

AVAILABILITY OF DATA AND MATERIALS

The data supporting the findings of the article are available from the corresponding author [H.L] upon request.

CONFLICT OF INTEREST

The authors declare no conflict of interest, financial or otherwise.

ACKNOWLEDGEMENTS

The authors would like to express their gratitude to all the teaching staff and the management of the ISPITS of Casablanca for providing administrative and technical support.


COPYRIGHT AND LICENSE

© 2020 Lajane et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International Public License (CC-BY 4.0), available at https://creativecommons.org/licenses/by/4.0/legalcode. This license permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Address correspondence to this author at the Laboratory of Physical Chemistry of Materials, Faculty of Sciences Ben M'sik. E-mail: [email protected]


An exploration of student nurses' experiences of formative assessment

Affiliation: University of the West of Scotland, Hamilton Campus, Caird Street, Hamilton, United Kingdom. [email protected]
PMID: 19285761
DOI: 10.1016/j.nedt.2009.02.007

The idea that formative assessment has the potential to prepare students not only to succeed in summative assessments during the course but also in the world beyond the classroom [Melland, H., Volden, C., 1998. Classroom assessment: linking teaching and learning. Journal of Nursing Education 37(6), 275–277] fuelled the desire to explore student nurses' experiences of being assessed formatively. Focus group discussions within a UK higher-education setting captured the holistic, dynamic, and individual experiences that student nurses (n = 14) have of formative assessment. Ethical approval was obtained. Findings from three separate focus group discussions indicate that lecturers do not use the term "formative assessment" in their communication with student nurses; that student preparation and effort are greater when assessment is for summative purposes; that oral feedback is preferable to written feedback, which can at times be illegible and use unfamiliar vocabulary; that lecturer comments are regarded as more valuable than grades; and that student nurses are not being prepared for the critical feedback associated with peer review and may therefore be vulnerable to the process and outcome of peer review. The UK-centric focus of this small qualitative study thus need not detract from its ability to add to the global knowledge base on formative assessment in nursing.

MeSH terms: Education, Nursing, Baccalaureate/methods*; Educational Measurement/methods*; Educational Status; Focus Groups; Health Knowledge, Attitudes, Practice*; Nursing Education Research; Qualitative Research; Students, Nursing*; United Kingdom
