Training Assessment Systems

Medical simulation has proliferated throughout the continuum of healthcare. Historically, simulation has provided students the opportunity to practice largely psychomotor skills in a safe, economical way. As simulation technology advances, training requirements are increasingly focused on developing the cognitive skills that separate novice clinicians from their more experienced counterparts and enable a complete understanding of the patient’s experience. While efforts to develop simulation-based training for these “softer” skills, such as critical thinking, problem-solving, and error anticipation, are underway, there is little empirical evidence to support the effectiveness of these trainers. One barrier to assessing training effectiveness is the lack of established metrics for these and other skills that are crucial to ensuring successful patient outcomes. A contributing factor is the general failure to establish consensus on the variables and determinants of an effective simulation experience.

While individual studies have contributed potential evaluation and measurement schemas for assessing simulation as an experiential learning tool, the correlation with clinical performance, and the novice’s movement through the levels of proficiency, competency, and mastery of procedural skills, have not been fully documented. Research on accelerating maturity and proficiency through simulation remains limited, and the field lacks a general description of the variables and determinants of improved simulation performance, improved decision-making ability, and retention, as well as consensus definitions of proficiency, competency, and mastery. Metrics of performance that best predict positive patient outcomes with minimal adverse events, methodologies for objectively observing those metrics, and valid measurement schemas to apply to those observations are currently absent.

IVIR has focused on establishing such metrics and methodologies, correlating them with clinical outcomes over time, and defining levels of proficiency, competency, and mastery of clinical skills so as to potentially accelerate maturity and experience. The development of these metrics serves the immediate purpose of facilitating the assessment of simulator effectiveness; however, standardizing measures of cognitive skills across training instances provides a number of additional benefits. Performance data could be used to predict an individual student’s success from one course to the next. Persistent student models based on these metrics could provide the basis for adaptive or personalized learning experiences. Instructors could use the data for self-assessment, and resource managers could leverage the information to determine the appropriate number of training hours and modalities required to master a particular skill.
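To make the idea of a persistent, metrics-based student model concrete, the minimal sketch below records standardized per-course scores, computes a naive readiness estimate across courses, and flags students who may need additional training hours. All class names, course labels, scoring scales, and the 0.7 readiness threshold are hypothetical illustrations, not part of any established IVIR system.

```python
# Hypothetical sketch of a persistent student model built on standardized
# cognitive-skill metrics. Names, scores, and thresholds are illustrative.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class StudentModel:
    """Persistent record of a student's standardized scores, keyed by course."""
    student_id: str
    scores: dict = field(default_factory=dict)  # course name -> list of 0-1 scores

    def record(self, course: str, score: float) -> None:
        """Append a standardized (0-1) metric score for one training instance."""
        self.scores.setdefault(course, []).append(score)

    def course_mean(self, course: str) -> float:
        """Average performance within a single course."""
        return mean(self.scores[course])

    def predicted_readiness(self) -> float:
        """Naive predictor: average performance across all completed courses."""
        all_scores = [s for vals in self.scores.values() for s in vals]
        return mean(all_scores) if all_scores else 0.0


def needs_extra_hours(model: StudentModel, threshold: float = 0.7) -> bool:
    """Flag a student for additional training time (illustrative threshold)."""
    return model.predicted_readiness() < threshold


# Usage with made-up course names and scores:
m = StudentModel("S001")
m.record("airway_management", 0.8)
m.record("airway_management", 0.9)
m.record("triage_decision_making", 0.6)
print(round(m.predicted_readiness(), 3))  # prints 0.767
print(needs_extra_hours(m))               # prints False
```

A real predictor would replace the running average with a model validated against clinical outcomes; the point here is only that standardized scores make such cross-course aggregation possible at all.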