I wanted to highlight recent research in neurology education to raise awareness of it, and also to open up a broader discussion of how this research should shape the way neurology clerkship curricula are designed. I hope that educators outside the field of neurology will also share insights into comparative work that has been done in their own areas of interest, so we can all be better educators. (Phew - goals and objectives discussion done).
The first study to look at is by Dr. Doug Gelb and colleagues at the University of Michigan. His premise for doing the study was based on a recent requirement that students keep a log of their patient experiences while on clinical clerkships, to verify that they have seen a wide breadth of disease as they go through medical school. The idea behind the logs is that students in the third and fourth years are assigned to rotate through various hospital wards. Thus, unlike the first and second years, where the courses are much more structured and the learning content is fairly tightly defined, the majority of the third- and fourth-year experience is reliant on the student seeing patients in the clinical setting. Although there is an effort to assign students to a wide variety of services and clinics, there is no absolute guarantee that a student will be exposed to a patient with any single disorder. A student may be on a neurology service for 3 weeks and never see a patient with painful diabetic peripheral neuropathy, simply because, by chance, no such patients were in the clinics or wards to which the student was assigned. Thus, the idea is that a log of encounters with patients with specific diseases can theoretically be used to assess the curriculum as a whole, and also to assess each learner's activity.
Dr. Gelb and colleagues wanted to challenge the notion that the number of patients a student is exposed to would correlate with that student's clinical competence. The premise here is that students are not limited to learning only from the specific patients they care for. There are lectures and student conferences, as well as readings in the curriculum, that will fill in any gaps left by lack of exposure to a given topic. The idea is also that perhaps you don't need to see a patient with diabetic neuropathy to learn about diabetic neuropathy. You may see a patient with Charcot-Marie-Tooth type 2, and in reading about that patient, you will learn that diabetic neuropathy is in the differential diagnosis. This will then prompt you to look for more information on diabetic neuropathy.
He put this hypothesis to the test by examining each student's log from the neurology clerkship at U of M over the course of one academic year (05-06). The logs recorded every patient seen by each student while on the four-week clerkship, along with chief complaint and comorbidity information. The chief complaints were grouped into sub-categories (i.e., stroke, neuromuscular, etc.). Each student's score on the final examination (a locally prepared 100-item MCQ test) and their clinical evaluation scores from their ward faculty were correlated with the number of patients seen on the log. They also separated out sub-scores of the exam based on the sub-categories of disease. What they found was that there was no correlation between the number of patients seen and student performance measures at the end of the clerkship - either knowledge-based, with the exam, or clinical-skills-based, with the clinical evaluations. In fact, the trend for the test scores was toward the students who saw more patients doing worse than the students who saw fewer patients.
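At its core, the analysis described above is just a correlation between two per-student numbers: patients logged and exam score. As a minimal sketch of that kind of calculation, here is a Pearson correlation on entirely fabricated data (the study's actual data are not reproduced here; a weak negative r mimics the trend the authors reported):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated example: patients logged vs. score on a 100-item MCQ exam.
patients_logged = [20, 35, 25, 40]
exam_scores = [80, 82, 84, 78]

r = pearson_r(patients_logged, exam_scores)
print(f"r = {r:.2f}")  # -> r = -0.42 for this toy data
```

A real analysis would of course also report a p-value and handle many more students; this only shows the shape of the comparison.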
Now, obviously, this is just a starting point for examining whether patient logs are a good thing or a bad thing, but I think it does raise questions about whether our measures of student success are actually measuring something useful.
Gelb, et al. Experience may not be the best teacher: Patient logs do not correlate with clerkship performance. Neurology. 2009 Feb 24;72(8):699-70