A mastery learning module is a collection of knowledge and skill subunits, assembled according to workplace needs. These subunits are termed topics; each topic is linked to learning resources and evaluated with multiple-choice questions and checklists.
Successful completion of a mastery learning module requires:
- 100% performance on all multiple choice questions
- 95% on all checklists
- A rating of 5/5 for entrustability
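As a rough sketch, the pass criteria above can be expressed as a simple gate. The function and score representations below are hypothetical (the source specifies only the thresholds, not how scores are stored):

```python
def meets_mastery_standard(mcq_scores, checklist_scores, entrustability):
    """Return True only if all mastery-learning pass criteria are met.

    Hypothetical helper illustrating the thresholds above:
    100% on every multiple-choice question set, at least 95% on every
    checklist, and a 5/5 entrustability rating.
    """
    return (
        all(score == 100 for score in mcq_scores)        # 100% on all MCQs
        and all(score >= 95 for score in checklist_scores)  # 95% on all checklists
        and entrustability == 5                           # 5/5 entrustability
    )

# Example: a single checklist below 95% fails the whole gate
print(meets_mastery_standard([100, 100], [96, 94], 5))  # False
```

The point of the sketch is that the criteria are conjunctive: a learner who excels on every measure but one has not yet reached mastery and continues training.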
The focus of mastery learning should be on uncomplicated cases.
Questions:
- How are checklists/GRS used during debriefing?
- Are checklists/GRS reviewed with the individual, or the group?
Ideas:
- Develop a team communication checklist
- Have a facilitator simulation guide reference a number of checklists
Stories

TransAsia Flight 235
TransAsia Flight 235 crashed in February 2015 in Taiwan. It sustained a right engine failure during takeoff – normally a survivable event. Unfortunately, the pilot then mistakenly shut off the left engine, and was heard to say “wow, pulled back the wrong side throttle”. The plane crashed shortly thereafter, and 43 people died.
Further analysis showed the pilot had repeatedly failed flight simulator training for the engine failure scenario. After the crash, other TransAsia pilots were urgently tested, and 10/49 also failed emergency proficiency testing for this scenario (NY Times, 2015).
Preventing Crashes hopes to support health care providers in preventing avoidable situations, similar to the one described above, through effective, low-cost, and rewarding strategies for sustained education.
Mastery Learning
“healthcare learners can no longer rely on passive knowledge acquisition at the bedside. Learning should be deliberate, clear, and objective with ample time allowed for mastery by all” (Gonzalez and Kardong-Edgren, 2017).
Mastery learning is based on best practices and theory (Motola et al, 2013).
This section will describe the importance and use of deliberate, spaced practice, coupled with clinical variation, designed to lead to clinical competence and mastery learning (Brown, Roediger, McDaniel, 2014).
Components of mastery learning include:
Baseline testing provides an assessment of knowledge, skill, or application; it gives feedback and guides subsequent learning. Data should be used as a tool, not a weapon (J Barsuk, personal communication).
Learning activities should be linked to clear learning objectives and sequenced as units of increasing difficulty. Methods of learning can include lecture, video, demonstration, and deliberate practice. As with all educational pursuits, activities should be engaging.
Feedback should be provided by facilitators in a manner that is formative, consistent, and immediate, and learning should be revised and continued until performance reaches an appropriate level.
Assessment should again be provided as the first round of learning concludes. If some learners do not yet meet mastery standards (often about 20% of learners, per Barsuk et al, 2016), training should continue until the standards are met. Often this can be accomplished in 15–60 extra minutes of learning (Gonzalez and Kardong-Edgren, 2017).
Assessment may occur through checklists or global rating scales (Walzak et al, 2015; Ilgen et al, 2015). There is debate regarding the appropriateness of these two types of tools, and importantly, competence as judged by checklists may at times be accompanied by incompetence as judged by global rating scales. Appropriate use for learning and evaluation is important.
Overlearning may also be helpful, with additional training beyond what is required to meet a minimum standard (Arthur et al, 1998). This can lead to increased retention and automaticity and therefore reduce cognitive load.
Once the mastery standard is met, certification is appropriate. One group provides a ‘certificate of completion of a mastery learning program’.
With deliberate practice, learners gradually improve a specific aspect of performance.
Resources and References
- McGaghie WC et al. 2015. Mastery Learning With Deliberate Practice in Medical Education. Acad Med.
- Ericsson KA. 2004. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med.
- Ericsson A, Pool R. 2016. Peak: Secrets from the New Science of Expertise.
Flipped Classroom
In a flipped classroom, learners review core content before class, freeing in-class time for application and discussion. This principle is already used in many other training programs and is supported by education experts (eg van der Vleuten CPM, Driessen EW, 2014).
Resources and References
- van der Vleuten CPM, Driessen EW. 2014. What would happen to education if we take education evidence seriously? Perspect Med Educ. 3(3):222–232.
Case-Based Learning
While the term has different meanings for different educators and learners, one synthesis definition describes case-based learning (CBL) as a “learning and teaching approach that aims to prepare students for clinical practice, through the use of authentic clinical cases. These cases link theory to practice, through the application of knowledge to the cases, and encourage the use of inquiry-based learning methods” (Thistlethwaite et al, 2012).
Many current training programs use problem-based learning (PBL) as their main educational framework. CBL is principally distinguished from PBL, which relies on learners' self-directed seeking out of information, in that the essential elements of the case are usually known beforehand. The PBL process is heavily affected by resource preparation and group function, and facilitators are often not content experts (Dolmans et al, 2005). In contrast, case-based learning has a subject expert as facilitator, and learners normally have the required knowledge, either previously learned or readily available (Tärnvik, 2007).
What is Case-Based Learning?
CBL lies between structured and guided learning. Structured learning proceeds along a path largely controlled by the facilitator, with information, questions, and outcomes laid out beforehand. Guided learning begins with a case, leaving students to develop hypotheses, learning methods, and outcomes on their own. CBL may be described as inquiry-based, with cases and outcomes provided by the facilitator, but leaving a range of possible learning methods that could be employed by students (Thistlethwaite et al, 2012).
There is widespread variation in how case-based learning is defined, but a number of common elements may be described.
Comparing CBL and PBL
Supporters of PBL suggest that it is a superior method for encouraging lifelong learning and curiosity, and that CBL may stifle curiosity through increased control by facilitators, with a risk of tutors acting as lecturers. Supporters of CBL suggest PBL may not make full use of expert tutors; they also suggest CBL encourages debate and discussion, as well as the exploration of ambiguity. An excellent comparison of PBL and CBL is provided by Srinivasan et al, 2007.
CBL also benefits from the following (Thistlethwaite et al, 2012):
- bridging the gap between theory and practice
- effective use of limited resources
- flexibility to meet particular learning gaps at particular times
- strong ratings of satisfaction by learners and facilitators
Interestingly, a shift from PBL to CBL in two Californian medical schools showed an overwhelming preference of students (89%) and faculty (84%) for CBL, primarily because of “fewer unfocused tangents, less busy-work, and more opportunities for clinical skills application” (Srinivasan et al, 2007).
The Operation of CBL
There is widespread variation in how CBL may be enacted, as described below:
Group size: CBL is usually delivered in a face-to-face small-group format, with 2–15 participants, but cases may also be addressed individually or in larger groups. A facilitator is normally present; this person may or may not be a content expert, but often is.
Format: Cases may be paper-based or accessed electronically, with a host of formats used.
Timing: The group may meet once and work through the case in its entirety, or may have two or more sessions to cover the case. Sessions may last from 10 minutes to weeks or longer, depending on the level of the learner, the complexity of the case, the included knowledge or skills, and the curriculum design.
Resources and References
- Dolmans DH, De Grave W, Wolfhagen IH, van der Vleuten CP. 2005. Problem-based learning: future challenges for educational practice and research. Med Educ. 39(7):732-41.
- University of Saskatchewan – small group learning resources
- University of Saskatchewan – what is case-based learning?
- University of Ottawa – A Guide to Case-Based Learning.
- Dhaliwal G, Sharpe B. 2009. Twelve tips for presenting a clinical problem solving exercise. Medical Teacher. 31(12):1056-1059.
- Srinivasan M, Wilkes M, Stevenson F, Nguyen T, Slavin S. 2007. Comparing problem-based learning with case-based learning: effects of a major curricular shift at two institutions. Acad Med. 82(1):74-82.
- Tärnvik A. 2007. Revival of the case method: a way to retain student-centred learning in a post-PBL era. Med Teach. 29(1):e32-6.
- Medical Education wiki, University of Saskatchewan
Other Initiatives
Simulation
Simulation is a technique—not a technology—to replace or amplify real experiences with guided experiences that evoke or replicate substantial aspects of the real world in a fully interactive manner (Gaba, 2004).
Different modalities of simulations include:
- role play
- task training or skills training
- standardized patients
- human patient simulators
- virtual simulation
There is a strong body of evidence for best practices in simulation. A recent meta-analysis has identified key principles that improve the efficacy of simulation (Cook et al, 2013), while AMEE (Association for Medical Education in Europe) has produced a Best Evidence Practice Guide (Motola et al, 2013). The benefits of these interventions can be evident at very low cost, using simple techniques and tools, as described above (Norman, Dore, and Grierson, 2012). Descriptions of how to implement simulations within postgraduate curricula have been published from large academic centres (Takayesu et al, 2010; Sam et al, 2012).
Simulation promotes active learning, participation, and reflection.
It can decrease stress (though at times it can also increase anxiety to unhelpful levels).
It also allows:
- deliberate practice
- practice for critical adverse events
- experiences that are otherwise difficult to obtain
- improvements in patient safety and quality
Dr Kim Jentsch (University of Florida) has studied the impact of debriefing training on performance improvement.
INACSL (the International Nursing Association for Clinical Simulation and Learning) has developed Standards of Best Practice: Simulation.
Resources and References
Gaba, D. 2004. The future vision of simulation in health care. Qual Saf Health Care. 13(Suppl 1): i2–i10.
Feedback and Debriefing
Feedback is “information communicated to the learner that is intended to modify his or her thinking or behavior to improve learning” (Shute, 2008). In health care, this usually relates to various aspects of clinical care, but is also relevant for other roles.
Feedback is one of the most powerful influences on learning (Norcini, 2010), though some research has shown that students and residents are not observed frequently enough and that the feedback they do receive is often vague and unhelpful (Bing-You and Trowbridge, 2009). Also, many people’s opinions of feedback can be quite negative. Feedback can consist of authoritative pronouncements by teachers about their students’ performance – what they did right and what they did wrong. This can be unpleasant and sometimes humiliating. The learner may be passive, accepting the teachers’ opinions without question (at least openly).
More recently, learner-centered approaches to medical education have become prominent, where learners and facilitators work together to understand performance, including what went well, and what could be done better. Here, “in an educational context, it is now argued that learning is the key purpose of assessment” (Norcini and Burch, 2007), and “giving feedback is not just to provide a judgement or evaluation. It is to provide insight. Without insight into their own strengths and limitations (trainees) cannot progress or resolve difficulties.” (King, 2004).