Modifying the D3 Clinical Applications in Orthodontics Seminar
by
Mitchell Lipp, DDS
Clinical Professor
Department of Orthodontics
“Broadcasting from the attic” is how I open my seminars these days. Indeed, here I am in Stockton, New Jersey, a small town (population 500) near the Delaware River, far from my home base of New York City. For the past year and a half, I have been working remotely: teaching, coordinating classes, delivering lectures, conducting seminars, attending virtual conferences, continuing education courses, and meetings, and staying involved in research and scholarship. Welcome to my COVID life. Moving forward, things may change a bit, but for the most part they will not, because change has been good for my D3 competency-based course in predoctoral orthodontics.
Circumstances forced me to rethink, redesign, and take risks, hoping to create a better learning experience for my students. Generally speaking, jumping out of a plane without a parachute is not a good idea; that is how I felt trying out new methods and repurposing old ones that had been discarded because they seemed harsh and outdated. I resolved to move past my assumptions and biases and follow the outcomes (performance on assessments and student perceptions from surveys) to determine the best approach moving forward.
Pre-Pandemic
The D3 Clinical Applications in Orthodontics Seminar Course is the curricular equivalent of a clinical experience in orthodontics. In this course, students apply foundational knowledge and develop the thinking skills that general dentists need to manage patients with malocclusion and associated skeletal problems. The D3 class is divided into smaller seminar groups. Through homework and classroom experiences, students analyze seven simulated patients (Figure 1). They then receive written comments and additional feedback in classroom discussions. I recall the joyous noise as students, working in groups, wrestled through the cases to determine problems and management plans while I offered guiding questions. At the final session of the course, I gave students four cases as a summative assessment. Students were required to demonstrate at least one case with zero critical errors based on the evaluation criteria used throughout the course. Assessments were scored 0–4 based on the number of cases with zero critical errors.
FIGURE 1. Clinical Simulation Case: De-identified facial and intraoral photographs and radiographs (panoramic and lateral cephalometric). This is the basis for formative and summative assessments used in the course.
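For concreteness, the scoring rule just described can be expressed as a brief sketch. This is illustrative only: the course did not use software for scoring, and the function names are hypothetical.

```python
# Illustrative sketch of the summative scoring rule described above.
# (Hypothetical helper names; the course itself did not score by code.)

def case_passes(critical_error_count: int) -> bool:
    """A case meets standards only when it has zero critical errors."""
    return critical_error_count == 0

def summative_score(critical_errors_per_case: list[int]) -> int:
    """Score 0-4: the number of the four cases with zero critical errors."""
    assert len(critical_errors_per_case) == 4, "the summative assessment uses four cases"
    return sum(case_passes(n) for n in critical_errors_per_case)

def meets_course_standards(critical_errors_per_case: list[int]) -> bool:
    """Meeting minimal standards requires at least one case with zero critical errors."""
    return summative_score(critical_errors_per_case) >= 1

# Example: critical errors on three cases, none on the second case
print(summative_score([2, 0, 1, 3]))         # 1
print(meets_course_standards([2, 0, 1, 3]))  # True
```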
Pandemic Modifications
The course philosophy is akin to coaching: THE GOAL IS IMPROVEMENT. Grades are deemphasized, and there is no penalty for multiple attempts. Assessment is not just for measuring performance; it is the primary method used to enhance learning and skill acquisition. Students are actively involved in case-based formative and summative assessments. The learning objectives of the course are identical to the prompts on the assessment (Figure 2). In this way, the instructional target (goals, objectives, evaluation criteria) is aligned with the methods of instruction and assessment.
FIGURE 2. Thinking skills assessed in the course. The objectives of the course are identical to the five prompts on the assessment.
- Malocclusion: Identify normal occlusion or the Angle classification of malocclusion and describe the evidence-based rationale for your diagnosis.
- Skeletal Problem: Identify a normal or abnormal skeletal pattern and describe the evidence-based rationale for its clinical significance.
- Treatment Plan: List the treatment/management plan, i.e., the services needed to address the patient's clinical problems, and describe why each service is necessary.
- Space Management (Upper Arch): Describe the space management decision in the upper arch and explain the evidence-based rationale for your decision.
- Space Management (Lower Arch): Describe the space management decision in the lower arch and explain the evidence-based rationale for your decision.
Pandemic Era
In response to the COVID pandemic, I redesigned the course to enhance active, experiential learning in a remote environment. The number of formative case-based assessments more than doubled (to 17) without extending course clock hours. Consciously using both extrinsic motivators (grades) and intrinsic motivators (wanting to be a better doctor), I sought to better prepare students for the summative assessment.
The course was conducted remotely by Zoom. When COVID first hit, I was advised against requiring students to go on camera because of the risk of embarrassment. Over time, staying muted and off-camera became the new normal. Yet without interaction, Zoom classes felt like an empty echo chamber. Consequently, I sought out strategies to encourage active participation.
Through Top Hat, an app available at the College, I was able to record attendance and incorporate in-class quiz questions to engage students. The responses to each question were aggregated and presented graphically, which became the basis for discussion and heightened awareness of gaps that needed attention. The quizzes were relatively low stakes, accounting for 10% of the final grade.
More controversial was the cold-calling method. I remembered seeing the movie The Paper Chase in the 1970s. It depicted first-year students at Harvard Law School congregated in a lecture hall, confronted by a stern, curmudgeonly professor at a podium who called on them to explain, propose, or make evidence-based arguments. That movie defined my mental image of cold calling, an approach that seemed harsh and intimidating. Yet forming an evidence-based argument was precisely what I was trying to achieve. Cold calling compelled active learning while creating opportunities for meaningful assessment. In this course, cold calls were essentially patient-based formative assessments in which students made clinical decisions and presented evidence-based arguments to justify their positions.
Considering the potential for negative effects, I fashioned a friendlier version of cold calling. Starting with the first session, I attempted to shift students' perspective away from courses and grades toward wanting to be a better doctor. I emphasized that the course is entirely patient-based and that the skills involved are essential for clinical practice. In the six-session course, four sessions included cold calling. Students were subdivided into two groups that alternated “on-call” sessions. In the first session, students who did not want to be called could request a one-on-one session outside of the Zoom course; such requests rarely occurred (less than 2%). Cold calls were based on case simulations assigned as homework, so students could prepare in advance. Students unhappy with their performance could request another attempt, with the understanding that it could only improve their score. Cold-call scores accounted for 10% of the final grade.
Although the course goals and criteria for evaluation did not change, the format of assessment now emphasized the key decisions a general dentist makes. The course emphasized thinking skills: analysis, synthesis, evaluation, critical thinking, problem solving, reasoning, recognizing abnormal conditions, developing a problem list, and constructing a management plan. The assessment consisted of five short-answer/essay prompts (Figure 2). I defined “correct” to mean decisions that are reasonable and evidence-based; in addition, a correct response must demonstrate zero critical errors as detailed in the evaluation criteria. As in the former version of the course, students had to demonstrate 100% on at least one of four cases to meet minimal course standards. Students who did not meet standards, or who were unhappy with their performance on the summative assessment, had two more opportunities to take it without grade penalty, understanding that the course grade would be based on the most recent summative assessment.
The assessment was computer-based using ExamSoft and administered remotely by Zoom in a non-proctored environment. To safeguard academic integrity, students signed an honor code, examination time was limited to 70 minutes, backward navigation was disabled, facial recognition was enabled for identification, and there were multiple versions of the assessment (the four cases randomly ordered).
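As an aside, the idea behind the multiple randomly ordered versions can be illustrated with a brief sketch. ExamSoft handles randomization internally; this hypothetical code simply shows the concept of generating several exam versions from the same four cases.

```python
import random

# Illustration only: ExamSoft performs its own randomization internally.
# This hypothetical sketch shows the underlying idea of producing multiple
# exam versions with the same four cases in random order.
CASES = ["Case 1", "Case 2", "Case 3", "Case 4"]

def make_versions(n_versions: int, seed: int = 42) -> list[list[str]]:
    rng = random.Random(seed)  # fixed seed so the versions are reproducible
    return [rng.sample(CASES, k=len(CASES)) for _ in range(n_versions)]

for i, ordering in enumerate(make_versions(4), start=1):
    print(f"Version {i}: {ordering}")
```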
Early Outcomes Data
Pre-COVID v. COVID groups: Performance on summative assessment. Using random cluster sampling, Pre-COVID (2018, N=89) and COVID (2020 and 2021, N=92) groups were compared. All groups took a summative assessment consisting of the same four simulation cases, and the evaluation criteria (the basis for critical errors) were the same. The same faculty member graded all students. Summative assessments were scored 0–4 based on the number of cases that met standards (0 = did not meet standards; 4 = the highest score). The Pre-COVID group constructed problem lists, treatment objectives, and treatment plans for the cases in a proctored classroom. The COVID groups responded to the five prompts, making clinical decisions with evidence-based rationales, on computers, remotely, and without proctors.
The COVID group outperformed the Pre-COVID group both in the percentage passing and in assessment scores (Figure 3).
FIGURE 3. Comparison of performance on the summative assessment: Pre-COVID v. COVID groups.
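The comparison above is reported descriptively. Purely as an illustration, and not the analysis performed in this project, ordinal 0–4 scores from two independent groups could be compared with a rank-based test, and pass rates with an exact test. The scores below are placeholders, not course data.

```python
from scipy.stats import mannwhitneyu, fisher_exact

# Placeholder values only; the real course data are not reproduced here.
# This sketch merely illustrates how one *could* compare the two groups:
# a rank-based test for the ordinal 0-4 scores, and Fisher's exact test
# for pass rates (score >= 1 means standards were met on at least one case).
pre_covid_scores = [0, 1, 1, 2, 2, 3, 3, 4]   # hypothetical
covid_scores     = [1, 2, 2, 3, 3, 4, 4, 4]   # hypothetical

u_stat, p_scores = mannwhitneyu(covid_scores, pre_covid_scores,
                                alternative="greater")
print(f"Mann-Whitney U = {u_stat:.1f}, one-sided p = {p_scores:.3f}")

def pass_fail_counts(scores):
    passed = sum(s >= 1 for s in scores)  # met standards on >= 1 case
    return [passed, len(scores) - passed]

table = [pass_fail_counts(covid_scores), pass_fail_counts(pre_covid_scores)]
odds_ratio, p_pass = fisher_exact(table, alternative="greater")
print(f"Fisher's exact (pass rates): OR = {odds_ratio:.2f}, p = {p_pass:.3f}")
```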
Pre-COVID v. COVID groups: Confidence in abilities and attitude toward the course. Using random cluster sampling, Pre-COVID (2018, N=97) and COVID (2020, N=95) groups were compared. Students completed a five-point Likert-response survey prior to administration of the summative assessment, and three items were extracted from the survey for comparison. The response rate for both groups was 98%. Responses were similar and generally positive for both groups (Figure 4).
FIGURE 4. Survey comparisons of confidence in abilities and attitude toward the course: Pre-COVID v. COVID groups.
Conclusion
Overall, the modifications to the course resulted in improved performance and a continued positive attitude among students. I intend to maintain remote synchronous instruction and cold calling next academic year, along with the assessment format and its focus on clinical thinking skills. Still, despite the precautions taken, I am concerned about examination integrity in a non-proctored environment. In academic year 2021-2022, I therefore plan to administer summative assessments in a proctored classroom and compare performance outcomes across the two years. With the same method of assessment, I will be able to determine whether the performance gains are sustainable. The results of that study will guide decisions concerning the conditions for future assessments.
This project was deemed exempt by the NYU IRB (FY2021-4795, 2021-5418).
ACKNOWLEDGEMENTS
The author gratefully recognizes Kiyoung Cho, PG2 Orthodontics student, and Bilal Chaudry and Jae Jun You, D4 students at NYU College of Dentistry, for data analysis and constructing graphs, and Eileen Bell for organizing primary datasets and transferring secondary de-identified datasets to investigators.