Friday, October 19, 2012

I've been thinking lately about one aspect of the 'hidden curriculum'. It came up in reviews of the neurology clerkship: over the last several years, there have been comments about staff and residents making statements, behind the closed doors of the conference room, about the competence of colleagues from other departments and other institutions. I don't think this is unique to our department or to our school of medicine. The question I have is: why does this happen?
I know this is not unique to us, as I encountered these same scenarios as a student myself on all the services I rotated through. Here is a typical scenario: a resident takes a call requesting a consultation from another service. They hang up the phone and break into a tirade (sometimes with expletives included) about how stupid the person or team was for not being able to address the problem themselves. Too often this exchange starts before the phone is even put down, and it can grow into a literal shouting match. I've seen the same pattern after calls with support staff about a lab value, or after calling an on-call tech in to the hospital on the weekend. There's also the easy target of the referring physician from a smaller hospital who called to transfer a patient. Often these comments include jokes about the intelligence of the people on the other end of the phone.
So, why does this happen? Let me discuss one possible reason. From a medical training perspective, I was taught very early to be a critical thinker. Much of clinical reasoning - especially diagnosis and treatment decisions - occurs in a vast grey area between the seemingly sharp lines of the common diseases and syndromes presented in medical school textbooks and lectures. This means you should approach every patient's problems from the beginning and rework the steps to diagnosis to assure yourself of the correct diagnosis and treatment path. Taken in a positive way, if you come to a different opinion than previous providers, you can potentially change the treatment course and make the person better - which is good. Taken in a negative way, every time you do this exercise you find that there are many people who don't think like you do, and you can start to get the idea that you are the only competent provider in the region. This bias towards attributing presumed errors to incompetence is sometimes justified - perhaps the provider is indeed not safe to practice medicine. However, I think this is true far less often than may be grumbled about in the confines of a conference room. First, clinical presentations are often subtle initially, and the simple fact that you are evaluating the patient later makes things clearer. You also already know what didn't work, which usually narrows the differential diagnosis or treatment options. And you have no idea what the context of the day or night was for the provider as they were making those decisions. Again, I'm not saying that every misadventure is justified. I'm saying that as professionals our job is to take care of the patient. Our job is not to jump to conclusions about what happened before we were there.
This behavior then gets passed along to our students, who see it modeled all the way from residents to staff. It's accepted as normal behavior, and like other parts of the hidden curriculum it is passed down from one generation to the next. Please remember this the next time you are tempted to make a disparaging remark. Now, I'm not saying good-natured joking and friendly competition should be outlawed. There are very good jokes out there about neurologists, and I know some good neurosurgeon jokes. Humor can help us all deal with stressful situations, and I'm not for banning it completely. I'm just asking for some thought before making a sarcastic comment about a colleague. Would it be OK for that person to be in the room with you when you made the comment? If yes, then it is likely just banter. If no, it may be time to rethink. Especially with students in the room.
One final thought. The other side of the coin is that we usually hear back from colleagues about things we did well. Rarely do our colleagues report back to us on things we could have done better. Thus, you likely have a reporting bias on your own performance on these types of issues. So, be careful whom you criticize - it may well be yourself.
Friday, September 14, 2012
Medical educators - no degree required?
Higher education has a little secret. Although it is getting better, most of the people responsible for delivering content and designing curriculum don't have a degree in education. I'm not saying I'm the most learned educator, but I do have an undergrad degree in education, and I frankly fall back on that learning theory and curriculum design background daily as a medical educator. Most of my colleagues around the country don't have that. I'm not talking about going to a one- or two-day seminar on teaching skills. I'm talking about a degree from a university or college stating that you have completed coursework in education. I don't see that around these parts much, and I honestly think medical education (and higher education) suffers for it.
I'm not saying that there are not good teachers in medical schools. There are wonderful and devoted teachers in every aspect of medical education. Years of teaching do help refine one's skill, and many medical educators have 'learned on the job' and have a decent knowledge base of educational theory. I'm also not saying that everyone who holds an education degree of some ilk is automatically a great teacher. And I'm not singling out any medical school in particular - this is everywhere.
What I am trying to say is that, just as I'd rather see a cardiologist than a pediatrician for my chest pain, I'd rather have someone with training and background in how learning happens, and in how best to deliver education, be the person teaching our future physicians. There, my rant is done, and I feel better.
Friday, August 10, 2012
Clinical assessment variability - what is really causing it?
There was a recent article in Academic Medicine by Dr. Alexander and colleagues from Brigham and Women's Hospital describing the amount of variability in clerkship grading among US medical schools. They found that, unsurprisingly, the grading systems for the clinical years had essentially no consistency at all. There was inconsistency in the grading scales used (traditional ABCDF, honors/pass/fail, or pass/fail) (table 1), and even among schools using a similar scale, the percentage of students receiving the highest grade was all over the place (table 2). So, the question is: what do we do with this information? I don't think anyone really expected different findings, but now the answer is out there, in print (or on digital reader screens).
I think part of the answer to where we go from here is to decide whether this article was really asking the right question. The authors do start to address this in their discussion section, but I'll try to lay out my thoughts with a little different spin than they gave. I think the real question is: what are we using the assessment of clerkship performance for? What is the essence of what we are trying to measure? Only when there is broad consensus, not only between schools but within the individual courses of each school, will there be any semblance of uniformity in the grading of students. I see at least two competing interests which influence how a clerkship director comes up with a grading system. The first is the idea that students should be measured on how competent they are in the area the clerkship is grading. In other words, when they are on call as a first-year resident or as a 50-year-old physician, do they have the knowledge and skills to assess a patient with a given problem? Second, the clerkship director also wants to be sure that the students at their school have a fair chance to compete for selective residency programs. Thus, there also needs to be a system to distinguish high-achieving from low-achieving students. The first system is more about the individual student, and with it, by definition, everyone should be able to achieve the highest score with enough effort and work. The second system is more about evaluation of the program and of the group; in it, it cannot be possible for everyone to achieve the highest score. Either system can be manipulated to aid students or to put them at a disadvantage. There are benefits and risks to each - as with anything in medicine.
I don't think these interests are necessarily incompatible, but they create a tension which I've seen in national meetings and in local curricular meetings. I also think most clerkship directors are not aware of how this tension affects the grading system they have developed. I say they're not aware because the debates I've heard are usually about tools for assessment or the number of honors awarded. Rarely does the debate get to the level of our ultimate purpose for the assessment. The answer to that question must shape how grades are assessed. Only when we all become very clear about what our goals are for the assessment will we truly be able to have a national dialogue about how to unify the system.
Wednesday, July 11, 2012
Simulation training vs natural history in LP battle royale
I read an article on the use of simulation in teaching lumbar puncture (LP) technique to residents by Dr. Simuni and her medicine colleagues at Northwestern University. I thought it was a really interesting article, and it adds data to the idea that a medical education curriculum which includes deliberate practice and simulation does a really nice job of teaching learners a new skill. It also hints that this kind of deliberate practice, delivered in a logical fashion, is better than the traditional model.
I'm not so sure this paper definitively shows that this practice is superior to the traditional training model. In brief, the article pits the final scores on a mastery checklist of first-year medicine residents, who underwent a three-hour educational session including simulation to teach proper LP technique, against those of neurology residents who were simply asked to do the LP simulation while being graded on the same checklist. The result may stand over time and additional studies, but I have a few problems with it. First, the neurology residents were not shown the checklist. I think this is a big deal. I don't know the proper way to handle that and still have adequate controls, but essentially the medicine intern group was taught to the test. It was deliberately pounded into their heads over the three-hour session that these are the things they would be graded on. That's what deliberate practice is all about - repeating something until you get it right - but to my mind, it is also teaching to the test. The neurology residents weren't given time to familiarize themselves with the simulator (at least the paper doesn't say they were). They also weren't oriented to what they would be evaluated on, so of course they didn't perform as well on the checklist. As they have likely done multiple LPs, it might also have been tempting to skip ahead to inserting the needle in a simulated environment, since the setting feels artificial and needle insertion feels like the ultimate goal. It might have been more useful to go to the bedside of the next LP these residents did and see if the 'real world' performance differed between the PGY-1 group and the neurology residents. I doubt any neurology resident would forget to get informed consent in the 'real world' (but I may be wrong). Maybe the simulation training, in part, trains you how to take the final simulation exam. I'm not saying simulation training was a bad idea - I think it was a good one. I'm also not saying the checklist is invalid or has no practical applications - it does. I'm saying that the PGY-1 group had the deck stacked in its favor.
I would also argue that the way I learned to do LPs was essentially deliberate practice over time with multiple patient experiences. When I was first starting, I had a senior resident or faculty member over my shoulder giving me feedback on my technique. Could this have been improved by adding a simulation session at the beginning of my training? Absolutely! But I don't know that this study proves what they say it proves - that traditional training is inferior to simulation, and that neurology residents can easily be schooled by interns fresh from the simulation lab. Compare the neurology residents with the interns at baseline: just eyeballing the data, the neurology residents were all better. I think if you had put the neurology residents through the training, they too would have achieved a higher level of mastery. *That's a neurologist talking, of course :)
I do want to say that I am a bit concerned about some of the mastery items the neurology residents missed (as were the authors). The anatomy questions would likely have been taken care of by brushing up on the anatomy before the test, though you could argue a senior neurology resident, especially, should know that. The authors were concerned about anesthesia, but that could have been a function of being in a simulator versus the 'real world'. It could also reflect a local practice at one of the schools which differs from national norms. The item on setting up the tubes and manometer in the proper fashion is a bit vague to me, and I'm not sure I'd know what the proper position should be; I wouldn't make that a make-or-break point on this procedure. Not saying how I know this, but one can recover surprisingly well with the one-handed method of unscrewing the caps in a pinch.
So this is a long response to the article, prompted by the accompanying editorial, which tried to make the point that the traditional model is inferior and should potentially be reconsidered. I don't think that is what this study showed. I think it did prove that mastery level is attainable for PGY-1s with a three-hour simulation lab. I'm not sure it proves they are better than neurology residents. They may be, but I don't think this was a fair assessment of that.
Friday, June 29, 2012
Augmented reality for neurological education simulations
I am laying down a challenge for app developers out there who know more about programming than I do. This challenge comes from a day-long IAMSE meeting course I attended over the weekend on state-of-the-art medical simulation tools. What I saw was that some pretty cool simulation is available today to replicate many physical signs and to help train on various procedures. These simulations have come a long way from when all Harvey could do was teach you how to pick up a murmur consistent with mitral stenosis. Now you can check blood pressure, pupillary response, and breath sounds, and the mannequin can even talk to you.
The trouble (from a neurologist's perspective) is that current simulation is great for cardiopulmonary physiology, but it leaves a void for the neurological exam. It can teach laparoscopic surgery, mimic a prostate nodule on DRE, and a lot of other things. But aside from pupils, and having the machine shake to mimic a seizure (which I haven't seen, but from the description it sounds like a very large Tickle-Me-Elmo type of convulsion - i.e., all trunk movement and not much arm or leg movement), the neurological exam is as yet uncovered. I think a lot of that comes from the fact that the neurological exam would require pretty advanced robotic arms and legs to mimic things like fine finger movements and strength testing. Hence, for the most part, current mannequins can really only replicate the exam of an essentially comatose person.
I see an opportunity for augmented reality to step in while robotic simulation takes its time becoming more sophisticated and cheaper. I could imagine using a real person as a simulated patient sitting in a chair, or a simulation mannequin on a gurney, and having the student hold a tablet up to the person so that the view screen is over the torso. An augmented reality protocol could then take the image of the arm from the simulation and overlay a realistic-looking tremor. Or you could overlay realistic ataxia on heel-to-shin testing. Or you could overlay a realistic tongue deviation, tongue fasciculations, or palate deviation. Thus, you could more efficiently create a high-fidelity simulation with neurological deficits. I've asked my bioengineer friend about this, and he said it could probably be done; it'd just take money to get off the ground.
So, there's my challenge. Create an augmented reality neurology exam simulation. I'd be interested to hear if anyone is already developing something like this, or if any app makers would be interested in making it happen.
Friday, June 15, 2012
Use of Tablets in Medical Education - What we can learn from Manchester
I've been watching a series of presentations by students at Manchester Medical School where they discuss their use of the iPad in their studies. This is the end result of a pilot project in which students were issued an iPad at the beginning of this year. I don't think any of the presentations is, in and of itself, ground-breaking. What is very innovative about this program is having the students take the technology and apply it to problems they identify. This is not a top-down approach where an instructor lists apps the students can use and then evaluates whether the students do what they are told.
This is really a problem-solving exercise. It's learner-centered learning at its very core. Give students a tool with over 30,000 apps available plus web capability, and they need to go and figure out how best to use it. First, they all identified problems they had in the past - forgetting important papers at home, having notes highlighted beyond recognition, and the inability to physically lug all those textbooks. They sifted through the app landscape and came up with some remarkable ways to use the technology. This is crowd-sourcing at its best. This is truly the future of technology in medical education. It's not about downloading the coolest toy out there and jamming it into a curriculum to make it do something; it's about finding the right tool to solve the educational problem in front of you. The students found ways around problems with creating and filing notes, saving reading for later, communicating log data to their supervisors, and creating study aids for themselves and their classmates. If all the students at Manchester did this project next year, think of the innovations they could produce. Now think about all the students in the UK, or across nations. Again, crowd-sourcing at its best.
As Prof Freemont explains in his introduction, this program is not about one particular format. They chose the iPad for reasons he outlines, but you could likely accomplish similar feats with an army of portable laptops, Android tablets, iPads, or whatever the next new thing will be. I do appreciate the spirit of their experiment. And I'd be happy to come see what's going on in Manchester in person. Maybe sometime next year during football season.
Friday, June 8, 2012
Twitter Live Meeting Stream as a Self-reflection Tool
I doubt this post will come as a surprise to many social media gurus out there, but it is something I fully realized only last week. I've been posting to Twitter more regularly from meetings lately. It is decidedly a skill I'm still working on mastering. I think it is a very powerful tool to use while in a meeting, both to connect with those in the room with you and to disseminate information to those not at the meeting. However, I didn't really get that it could also be useful to me as a self-reflection tool. I've seen Twitter used intentionally as a self-reflection tool in an education setting, as discussed in a very nice slide presentation posted by Dr. Noeline Wright. I'd also seen Twitter chats compiled using Storify or similar sites. I always thought these things were for the people who weren't in the room and weren't posting live.
Then, at the Pacific Northwest Basal Ganglia Coterie (Parkinson's doctors and scientists) meeting this last weekend, I was sitting next to a fellow conference-goer. He was preparing to jot down some notes and looked at my laptop, which was open to Hootsuite. I had presented my Twitter account to this group before, so he figured out pretty quickly what live-tweeting a meeting would entail. But then he made the assumption that I was doing the tweeting primarily for myself, as a record I could go back and look at later. Again, maybe I'm just a dunderhead, but when I've live-tweeted meeting updates before, I didn't usually think of it as being for me. I was thinking about those who might read my stream and learn from it. I've seen data that reflection improves retention of lecture material, and yet I hadn't put the two together. I went back through my stream at the end of the conference, and hopefully more of the information will stick because of it.
Now that I've figured this out, I plan to go back through my Twitter feed intermittently as a reflection tool. The Twitter feeds from meetings may hold other valuable information to mine, including serving as a way to prove that you were actively, mentally participating in a CME event. It could be used to evaluate the CME, and if a presenter has a rich group of streams to look at, it can give them loads of information about the audience for planning future talks. Who knew all this could come from live-tweeting at a meeting?
Wednesday, June 6, 2012
EBM evaluation tools applied to medical student assessment tools
I remember back to the days when I was a fresh medical student taking those first classes in biochem, anatomy, and cell biology. I learned a ton, and honestly I draw on that knowledge base daily when I'm taking care of patients. I also remember that the assessment methods used during my first year of medical school were not the greatest (in the opinion of a person who was teaching high school physics and chemistry three months before entering med school). The number of assessment options used in med schools has risen over the 15 years since I was an M1. However, with a rise in the number of choices comes the responsibility to make the right choice. Another way to frame this, from a pedagogical standpoint, is to ask whether the assessments are really measuring the outcomes you think they are measuring. To help the medical educator with this dilemma, I came up with the idea of applying a well-known paradigm used to evaluate evidence-based medicine (EBM) to the evaluation of a student assessment. The EBM evaluation method I'm most familiar with is outlined by Straus and colleagues in their book, Evidence-Based Medicine: How to Practice and Teach EBM (2005).
Here's my proposed way to assess assessment:
1) Is the assessment tool valid? By this we mean being sure that our measurement tool is reliable and accurate in measuring what we want it to measure. The standardized (high-stakes) examinations like the MCAT, USMLE, and board certification examinations are expensive not because these companies are rolling in cash, but because it takes people LOTS of time to validate a test. Hence, most home-grown tools are not completely validated (although some have been). To be valid, an assessment has to give similar results when the same learner takes it each time, and it has to accurately categorize the learner's level of proficiency at the task you are measuring.
For example, let's say I have an OSCE to assess whether a learner can counsel a young woman of child-bearing age on her options for migraine prophylactic medications. For my OSCE to be valid, I need to look for accuracy and reliability. Does the OSCE predictably identify learners who do not understand that valproate has teratogenic potential, and who fail to discuss this with a standardized patient? That is accuracy. You also want to know if it is reliable: does your scoring method give similar results when multiple faculty, trained on how to use the tool, score the same student interaction? Truly answering these questions about an assessment takes multiple data points for both raters and learners - hence why it takes time and money, and also why most assessments are not truly validated.
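To make the inter-rater half of that check concrete, here is a minimal sketch in Python. The ratings are invented for illustration (they are not from the post or any real OSCE); the statistic itself, Cohen's kappa, is a standard way to compare the agreement two raters actually show against the agreement chance alone would produce.

```python
# Minimal sketch of an inter-rater reliability check for an OSCE checklist item.
# The two rating lists are hypothetical: pass/fail judgments by two trained
# faculty raters watching the same 12 recorded student interactions.
from collections import Counter

rater_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1]  # 1 = student met the item
rater_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1]

def cohens_kappa(a, b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: probability both raters assign the same category
    # if each rated independently at their own base rates.
    pa, pb = Counter(a), Counter(b)
    expected = sum((pa[c] / n) * (pb[c] / n) for c in set(a) | set(b))
    return (observed - expected) / (1 - expected)

print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # 0.80 for this data
```

By one common rule of thumb, a kappa above roughly 0.8 is read as strong agreement; values near 0 mean the agreement you see is about what chance would produce, which is a signal to retrain the raters or rewrite the ambiguous checklist items before trusting the scores.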
The best way to validate is to measure the assessment against another 'gold standard' assessment: how well does your assessment perform compared with known validated scales? Unfortunately, there aren't many 'gold standard' assessments outside the clinical knowledge domain in medical education (although it is getting better).
2) Is the valid assessment tool important? Here we need to ask whether a difference seen on the assessment is actually a real difference. How big is the gap between those who passed without trouble, those who just barely passed, and those who failed to meet the expected mark? Medical students are all very bright, and sometimes the difference between the very top and the middle is not that great a margin (even if it looks like it on the measures we are using). I think the place where we sometimes trip up is in assuming that Likert scale numbers have a linear relationship. Is a step-wise difference from 3 to 4 to 5 on the scale used in clinical evaluations a reasonable assumption, and is the difference between a 4 and a 5 really important? It might very well be, but it will be different for every scale we set up. I've never been a big fan of converting Likert ratings directly into a percentage-point score unless you can prove to me through your distribution numbers that it is working.
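As a toy illustration of that distribution check (the ratings below are invented, not from any real clerkship), here is how you might tabulate a cohort's clinical-evaluation scores before trusting a linear Likert-to-percentage conversion:

```python
# Toy illustration: inspect the distribution before converting Likert ratings
# to percentage points. The ratings are invented 1-5 clinical-evaluation
# scores for one hypothetical clerkship cohort.
from collections import Counter

ratings = [5, 4, 5, 5, 4, 5, 3, 5, 4, 5, 5, 4, 5, 5, 4, 5, 5, 5, 4, 5]

counts = Counter(ratings)
n = len(ratings)
for level in range(1, 6):
    pct = 100 * counts.get(level, 0) / n
    print(f"rating {level}: {pct:5.1f}%  {'#' * counts.get(level, 0)}")

# A naive linear conversion treats the 4-to-5 step the same as the 1-to-2 step:
mean_score = sum(ratings) / n
print(f"linear percentage score: {100 * mean_score / 5:.1f}%")
# Here 19 of the 20 ratings sit at 4 or 5, so the scale is really functioning
# as a two-point scale: the lower steps carry no information, and the linear
# percentage compresses everyone into a narrow band near the top.
```

If nearly everyone lands at 4 or 5, the resulting 'percentage' mostly measures where each rater draws the 4-versus-5 line, not a linear continuum of performance.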
3) Can this valid, important tool be applied to my learners? I think this involves several steps. First, are you actually measuring what you'd like to measure? A valid, reliable tool for measuring knowledge (the typical MCQ test), unless very artfully crafted, is unlikely to assess clinical reasoning or problem-solving skills. So, if your objective is to teach the learner to identify 'red flags' in a headache patient's history, is that validated MCQ the best assessment tool to use? Is it enough that the learner can pick out 'red flags' from a list of distractors, or is it a different skill set to identify them in a clinical setting? I'm not saying MCQs can never be used in this situation; you just have to think about it first.
Second, if you are utilizing a tool from another source which was not designed for your particular curriculum, is the tool useful for your unique objectives? Most of the time this is OK, and cross-fertilization of educational tools is necessary given the time and effort validation takes. But you have to think about what you are actually doing. In our example of the headache OSCE, let's say you found a colleague at another institution who has an OSCE set up to assess communication of the differential diagnosis and evaluation plan to a person with migraine who is worried they have a brain tumor. You then apply it to your clerkship, but you are more interested in the scenario above about choice of therapy. Will the tool still work when you tweak it? It may or may not, and you just need to be careful.
Hopefully you've survived to the end of this post, learned something about assessment in medical education, and found the EBM-esque approach to evaluating assessments useful. My concern is that, in general, not enough time is spent considering these questions, and more time is spent on developing content than on assessment. I'm guilty of this as well, but I'm trying to get better. Thanks for reading, and feel free to post comments/thoughts below.
Friday, May 25, 2012
Senioritis in medical school - How to motivate the abulic state
It's that time of year when med students throughout the US shake hands with the Dean and pull their tassels in unison from right to left. In the months leading up to the tears and endless photo ops that mark any graduation, the students are finishing the last few rotations of their medical school careers. Although some students stay focused on their swan-song rotations, in every heart there is the lure of looking beyond the present rotation to residency in all its glory. Some have more trouble than others maintaining drive at the end of the final year. Most students post-Match are taking electives or required courses which are not directly aligned with their chosen field of study. Pre-Match, this makes some sense for assembling a Dean's letter and positioning oneself as a desirable residency applicant; post-Match, none of it seems to matter as much. Indeed, there has been discussion among academic educators that the fourth year of medical school is a missed opportunity, based largely on articles like this one from Dr. Lyss-Lerman and colleagues, which outlined residency directors' views of how well the fourth year prepares students for residency.
So, I have a bunch of fourth-year students in my neurology clerkship - in fact, I have only fourth-year students, with the exception of a few third-years who can take neurology as an elective in one block in November. These students are largely not going into neurology. How do I try to keep them engaged in our neurology rotation? (Full disclosure - I'm fully aware I have more to learn about how to do this. I definitely still get some students for whom my little tricks don't work. That is partly why I'm starting this discussion, so that we can all learn from each other.) Here are some of my ideas:
- Have them create their own goals - In orientation, I encourage the students to come up with their own course goals and objectives. I have prepared goals and objectives, and they are held accountable to those, but there may be specific areas they want to focus on as an area of weakness or as an area which is important for their specialty. I tell them that in residency, you will not always have clear objectives which are overtly given to you for every rotation. Thus, I made a habit in residency of picking 2-3 key things I wanted to learn. When I was on cardiology as an intern, I wanted to sharpen my EKG reading skills and my cardiac exam skills. Thus, I had something to focus on while taking care of those patients. Intrinsically created goals are more motivating. I encourage the students to follow that model as they move on in their career.
- Encourage exploration of topics related to their field - This is partly a student-led issue and partly a faculty development issue. Often students will stay engaged if the faculty member recognizes what they are going into and discusses the aspects of a neurological case which are of interest to the student. For example, we had an OMFS fellow rotating through the neurology clerkship, and I took him aside to discuss a case of sialorrhea I was seeing in the setting of neurodegenerative disease. Sure, it's important for him to know how to treat those diseases from a neurologic standpoint, but he's going to be more interested in the salivary issues. This can then be used as a doorway to get him interested in the rest of the disease.
- Try using games - I haven't used this in my clerkship yet, but as a medical student and a resident, we had an attending (Dr. Harold Adams) who would play Neurojeopardy several times during the rotation. Students were put into teams and asked neuro-trivia questions about neuroanatomy, neurological differential diagnosis/treatment, and neurological history. As a student (and a resident) I really enjoyed this. It's a way to get students to want to read up on disease states, etc.
- Scare the bejeezers out of them - I will often also play the card that in only two to three short months, they will be responsible for caring for patients on their own (in a supervised fashion initially). Their signature will mean something, and when someone in their care has a neurological problem, they will likely be the first person to evaluate the situation. Starting on July 1. Most students understand this logic.
These are just a few ideas I've used. Any other thoughts on how to motivate the post-match senior on a required rotation? Leave them in the comments below!
Wednesday, April 18, 2012
AAN annual meeting blog promotion ideas accepted
[Image from the AAN Annual Meeting website referenced in the text]
As I go to the meeting, I'd like not only to learn some things, but also to do some shameless self-promotion for my blog. As my blog is fairly newly established, I don't really feel it is at a stage where I could have put together a poster or abstract about its relative worth to the community of educators. As such, I may have missed an obvious outlet for creating interest in and awareness of my blog. I was wondering if others who have blogs could comment on creative ways to let people at meetings like this know your blog exists, while staying relatively subtle. I'm thinking of the strategy of going to the open mic and asking a question about an unrelated presentation that ends with the statement, "...I'm very interested in this as I'd like to include it in my medical education blog found on neuronerd.com." Are there ways of spreading blog love at meetings? Thanks for the advice.
Thursday, April 12, 2012
Virtual hospitals - The future of medical simulation?
I was flipping through Facebook the other day and saw a video posted by a friend. It was over 17 minutes long, which is an eternity in the world of Facebook videos, but I thought I'd give it a try as it looked interesting. It showed people from 'The Gadget Show' making a simulator which not too long ago would have been pure fantasy. They built a tent with 360-degree video output that also has a 360-degree treadmill, allowing you to move in the virtual world by walking as you would in real life. They also hooked up an Xbox Kinect sensor to pick up other body movements. They threw in a few other cool add-ons and ended up with a truly immersive environment for a first-person shooter game. You can watch the video here.
It got me thinking that this technology is now available and could be used in medical school to train physicians. It's not yet at the level of a holodeck, but it is closer than we've ever been before. I could envision a program where there is an ambulatory office building and each student has their own clinic to run, where simulated patients come in to be interviewed. The physical examination is done through gestures mimicking the real PE, or it could be coupled with a simulation manikin to elicit the physical findings. Then the student has to go back to a virtual staffing room, dictate the encounter, and order testing. They then move on to the next patient. If you had enough of these built (assuming this type of technology gets cheaper in the future), you could envision a 'continuity clinic' set up completely in a simulator. This might include seeing some of your regular patients back as they come through the emergency room for acute conditions, or even going to the OR. It could be as complex as there is time and money to create the scenarios.
I often thought in residency that it would be interesting to have an immersive simulated hospital where you could spend at least some of your time as a medical student or as a junior resident. There you could have freedom to make some truly independent decisions and see what happens. I think the advantages to something like this are obvious and are akin to the flight simulators that pilots use to train. It will never replace time spent on the wards with skilled clinicians giving supervision and feedback. I don't think the technology is there for a completely realistic medical simulation. But it is getting closer.
Monday, March 26, 2012
Life - Do med students have one? Work-life balance across generations
I gave a journal club last week discussing some general ideas about generational differences between the three main groups trying to work together in medical education: Boomers, Gen X, and Gen Y (or whatever your preferred term for that generation is). As I was looking into the topic to prepare for this talk, one of the themes that kept popping up was work-life balance. In general, the common wisdom is that the Boomers value hard work and are willing to sacrifice family life for career advancement. Gen X and Gen Y tend to place less of their primary identity in work and see much more value in maintaining balance between career and home. The purpose of this blog post is not to decide whether this is indeed true or not.
What I'd like to spend a moment discussing is how this generational difference is creating conflict in the halls of medical schools. Medical students are primarily Gen Y (although there are some Gen X in the mix). Faculty who now populate Dean's-office-level positions are primarily Boomers, and course/clerkship directors are Boomers with some Gen X filling the junior ranks. So the Boomers remember a medical school life which was ruled by the Greatest Generation (who placed even more value on work because of their experiences in the Great Depression). The biggest place I've seen this conflict play out is in requests for time off or for changing a test date. The Boomers were given very little room to change their schedules. I've talked to many of them, and the stories were essentially that if you wanted to take a day off during the clinical years for anything other than being near death, there would be severe consequences (like repeating the entire clerkship). Things were a little better for me, but not a lot. I remember friends in medical school who had a lot of trouble getting time off to attend weddings or family reunions. There was minor grumbling, but we all decided it was a transient time and that this was somehow preparing us for the trials of residency. And we kept telling ourselves that things would eventually get better. We also had the usual weeks of vacation around the holidays and spring break for some time off. Everyone also had some lighter rotations, and the fourth year comes with a much more flexible schedule.
Then along comes Gen Y. They are much more vocal about their need for time off, and much more vocal about providing feedback on things they don't agree with. And they are now complaining primarily to the Boomers, who mostly don't want to hear it. I'm not so sure they should be dismissed. Maybe it's my Gen X roots showing, or maybe I'm still close enough to being a student that I remember the bind a completely inflexible schedule puts you in. So I'm wondering if school policies for personal days off should be revisited. I'm thinking most of these policies were set in place for a very different world and haven't changed much in 20-30 years. With the advent of technology, it is possible to make up some assignments which could not have been made up in the past. There's also a different cultural norm emerging (or maybe I just think this should happen): missing a wedding because you are assigned to spend a day in clinic is not an acceptable trade-off.
As a disclaimer - I've been told by several fourth-year students that as a clerkship director, I run a 'tight ship'. To my mind, I'm just doing what the school time-off policy tells me to do. Our school policy is that students have two days off per year, which can be used for attending a professional meeting or if they are ill or have a family emergency. All other time off is at the discretion of the clerkship director and must be made up. I'm not sure I have a perfect answer as to how to change the current policy, but maybe with appropriate representatives from Gen Y, Gen X, and the Boomers we can work together to figure something out.
Tuesday, March 6, 2012
What medical education can learn from "Moneyball"
I've been waiting a bit to write this post, as I'm not sure exactly which way to take it. Let me start by stating that I'm a really big baseball fan and have been since second grade, when my dad first took me on the El in Chicago to see the Cubs play in Wrigley. I still get chills walking into that place. This love of baseball drives me to read the occasional baseball book. So, while I haven't seen the recent movie, I read the Michael Lewis book, "Moneyball," a few years ago. And I really liked it on many levels.
In the realm of medical education, I liked the idea of trying to measure something that is inherently immeasurable. In some respects, trying to pick a good candidate from a pool of medical school applicants or trying to assign a grade to a student on a clinical rotation is not unlike what the old-time scouts in "Moneyball" were doing. They would look at a player batting, pitching, or fielding, and go with an overall gestalt of whether that player was 'big-league material'. They were also basing their decisions on statistics which had been around forever, and no one had ever really questioned whether they worked to predict who is or is not going to be a good performer.
Then Billy Beane and his team of statisticians looked beyond the traditional numbers and redefined what to look for in a prospect, largely ignoring a player's current body habitus or mechanics and focusing solely on the numbers. They also redefined success by finding that the number of runners on base per game correlated with wins more tightly than other statistics. Thus, on-base percentage and slugging percentage (which together capture walks and extra-base hits) were more important measures of how an individual would contribute to the team than total runs batted in or home runs. (Sorry if I just lost the non-baseball fans out there.)
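For the non-baseball fans who stuck around, the two statistics are just simple ratios, and a quick sketch shows why they reward walks and extra-base hits (the season line below is made up purely for illustration):

```python
def on_base_pct(hits, walks, hbp, at_bats, sac_flies):
    # OBP: how often a batter reaches base; walks and hit-by-pitch count.
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

def slugging_pct(singles, doubles, triples, homers, at_bats):
    # SLG: total bases per at-bat, so extra-base hits are weighted more.
    return (singles + 2*doubles + 3*triples + 4*homers) / at_bats

# Made-up season line for illustration only
obp = on_base_pct(hits=150, walks=70, hbp=5, at_bats=500, sac_flies=5)
slg = slugging_pct(singles=90, doubles=35, triples=5, homers=20, at_bats=500)
print(round(obp, 3), round(slg, 3), round(obp + slg, 3))  # OPS = OBP + SLG
```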
This process can have applications in lots of venues. I think medical schools need to take another look at how we are evaluating our students and decide if we need to go through a similar process. Are there statistics available to us now, which may not have been available 20 to 30 years ago, that we could use to identify medical students who are not likely to do well in practice? We're pretty solid at identifying people with knowledge gaps, as our system of standardized testing takes care of that. But is that what really makes a good physician? It's part of it for sure, but it is not all of it. There's a lot more to clinical reasoning and professionalism than knowledge base. Can we find ways to capture those measures, or are we going to be stuck with the old scouting reports, crossing our fingers to see what happens? I don't have any solid answers yet, but I'm willing to help look.
Wednesday, February 22, 2012
Social Media: Are med students SoMe-philes or SoMe-phobes?
As with any question about human behavior, I don't think the question of whether students love or hate social media has a definitive answer. If you talk to students about it, you'll find answers vary from one to the next. I have talked to many students about this, and I have found the continuum of students' thoughts on social media more varied than I originally anticipated.
The main thing that surprised me at first is that not all med students have active social media accounts. I had this vision in my head of these students getting through their college years with the prototypical laptop open: multiple chat windows going, Twitter and Facebook windows chock full of 'lol's and 'rofl's, a soccer game streaming from Sweden, a Skype video chat with a friend at Harvard, and a paper open in Word researched through a Wikipedia page. While some of that might be true for some of them, I've found a healthy percentage (at least 1-2 in 10 in informal talks) do not have any social media accounts, including a Facebook account. This crowd is usually a little sheepish to admit it, but they are a substantial chunk of current medical students. Also, in a class of about 120, I usually find 2-3 students who have a Twitter account, which is lower than I'd expect. I don't think my initial perceptions are unique, as in a recent #meded chat on Twitter this subject came up, and many academic physicians on the chat were surprised by the numbers I just shared with you. (I haven't yet done a formal survey of med students, but that may not be a bad idea...)
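On that parenthetical about a formal survey: informal counts like '2-3 in a class of about 120' come with wide uncertainty, which a quick confidence-interval sketch makes plain (treating my hallway tallies as if they were a real sample, which they are not):

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# 3 of ~120 students with a Twitter account (hallway count, not survey data)
low, high = wilson_interval(3, 120)
print(f"{low:.1%} to {high:.1%}")   # roughly 0.9% to 7.1%
```

In other words, the true proportion could plausibly be anywhere from under one percent to several percent, which is exactly why a real survey would be worth doing.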
It's easy to assume that the Millennials are the 'digital' generation, so they must all be on social media. So why are they not there? I think part of it is that there are genuinely some younger people who still prefer an analog life. I don't mean this in a negative sense, but there are people out there (even young people) who are aware of the technologies available and understand the potential benefits, but don't feel it is worth the time and effort. Some have even tried it out and didn't like the experience.
The second reason I hear is that there are a good number who are scared of its potential harm and feel this risk outweighs the benefit of seeing pictures of their college roommate's baby. You don't have to be in medical school for very long before someone at the front of a lecture hall tells a story of social media gone horribly wrong, and these stories usually end with suspensions and expulsions of students.
Another thing I've picked up in talking with students is that very few of them realize they can use social media as part of their job as a physician. They also don't realize its potential positive impact, so few of them are engaged in it. Many are worried about it. I've even interacted with a few med students on Twitter who have a nice presence, but were seriously weighing whether to include their blog/Twitter profile on their residency application.
What has your experience been at your medical school? Do the confines of your school promote social media friendliness or social media angst?
Thursday, February 16, 2012
Evidence based medicine in second year courses: Too much too soon or just in time?
Once again sitting in the back of the neuroscience and behavior class, I've noticed another interesting phenomenon as the course has gone on. As we cover stroke, the clinical presenters are showing much more clinical trial data than the other clinicians did (including myself, for example, when I gave the presentation on Parkinson's disease). Part of this is due to the veritable glut of evidence from the 'strokologists'. As stroke is very common, it is not hard to put large trials together, and the stroke literature is now quite robust. And, rightfully so, the stroke physicians are proud of their work and want to communicate the data.
This raises a bit of a tension in the second-year neuroscience class: these students were introduced to vascular anatomy yesterday and stroke pathophysiology earlier this morning. So how soon is too soon to talk about EBM? The focus of the course is more on learning the basic science and getting an introduction to differential diagnosis and treatment. Hence, when I talked about the clinical context of Parkinson's disease, I presented a lot about the clinical syndrome, the differential diagnosis, and initial treatment options with pharmacologic information. As I'm a clinician, the pharmacology focused on administration route and side effects. But I didn't really show any primary literature in my slides. The stroke people showed a lot of primary literature - SPARCL, CLOSURE, CREST, ECASS, the NINDS tPA trial - the list of acronyms paraded across the screen.
So the tension lies in the fact that students do need to know there is evidence supporting our clinical decisions. This evidence is often cited when I do ward attending rounds with the residents and fourth-year students. But how much belongs in the first- and second-year coursework? When does it become overload for a student still trying to grasp the basic concepts of pathophysiology? I'm not sure I have answers to these questions. As my lectures tend to show, I'm more for presenting basic data and pivotal trial data in measured doses at this stage of training, allowing learners to delve more deeply into EBM during the clinical years and residency training. What do you think?
Monday, February 6, 2012
Are laptops/tablets connected to WiFi forces for good or evil in the lecture hall?
This post is in response to several things I've read and heard lately about the use of devices to connect to the internet in medical school large group teaching sessions. Essentially these posts or comments have been either strongly in favor of the introduction of these technologies, or strongly against. I haven't found much of a middle ground.
Those against argue from the idea of distraction. The argument is laid out in several recent research studies looking at the effects of multitasking on cognitive performance. The basic idea is summarized pretty well here, in an article from the San Francisco Chronicle. This is the view held by many basic science course directors, who make comments to the effect of, 'if they have their laptops out, they are likely playing solitaire.' I've also seen some people speaking about generational differences in learning styles who state that the Millennial generation has grown up with multiple electronic devices going. The data suggest that, while they feel they have been multitasking for a long time and should be good at it, they are not. As they don't have insight into this potential hazard, we as course directors should act to squash the tendency by telling everyone to turn off their electronic devices. Hence, the common wisdom among these presentations is that it is important to have learners switch off their devices on entering the classroom, for their own good.
On the other hand, there are many potential upsides to having a wired classroom. First, audience response systems using web-based or local networks are becoming more sophisticated and more robust. This goes well beyond the audience clicker system where the audience pushes a button to answer what is usually a multiple choice question (A, B, C, or D). There are systems like Twitter that allow 'back-hallway' discussions or the ability to ask questions which can be answered by the presenter in real time. Newer platforms can collate rich text entries and also collect images. Many of these make it possible to catch when the audience is out of step more efficiently than the traditional method of waiting for someone to raise their hand. Second, there is the ability for the individual learner to go down a 'rabbit hole' right away to pursue a question they may have had. For example, I interjected a clinical example of hemiballism after a lecturer was talking about subthalamic nucleus anatomy. I had not shown a video, as I had just stood up and given the discussion extemporaneously. As soon as I was done talking, a student in front of me had called up a video demonstrating hemiballism. And these are just a few brief examples of the good that can come from online activity during a lecture.
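To illustrate the mechanics rather than any particular product: at their core, these web-based response systems just collect submissions keyed to a question and collate them live. A toy sketch (all names hypothetical) might look like this:

```python
from collections import Counter

class AudienceResponse:
    """Toy back-channel: students submit answers; the presenter polls live."""
    def __init__(self):
        self.answers = {}     # question_id -> list of submitted choices
        self.questions = []   # free-text questions for the presenter

    def submit_answer(self, question_id, choice):
        self.answers.setdefault(question_id, []).append(choice)

    def ask(self, text):
        self.questions.append(text)   # answered by the presenter in real time

    def tally(self, question_id):
        return Counter(self.answers.get(question_id, []))

session = AudienceResponse()
for choice in ["A", "C", "C", "B", "C"]:
    session.submit_answer("q1", choice)
session.ask("Can you go over the subthalamic nucleus connections again?")
print(session.tally("q1"))   # Counter({'C': 3, 'A': 1, 'B': 1})
```

The live tally is what lets a presenter see the audience drifting out of step without waiting for a raised hand.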
The last point I'd like to bring up is that this will likely not be a point of discussion for much longer. Our med school is considering going to a paperless system where all notes are distributed electronically, and we are likely a bit behind the curve on this. The same devices that allow you to read PowerPoint slides and take notes on them also play Angry Birds, and there's currently no good way to facilitate one task while blocking the other. So a lecturer's ability to demand that everyone turn off their devices is about to disappear. This may be analogous to a record company in the late Nineties trying to divert attention from digital devices playing their music and focusing only on CDs. Who's bought a CD recently?
So, where do we go from here? As I look over the neuroscience course this morning, most electronic devices (except mine) are showing slides on epilepsy treatment (which is the lecture we're having today). So most people are using the technology wisely. However, with email only a click away, the temptation for attention to wander is strong. What are your thoughts?
Friday, January 27, 2012
What I learned from the cerebellum yesterday about timeliness
*Note - photo is archived, and not from the lecture mentioned
I accept that part of this is on me as a first-time course director. There were some things in the schedule which we purposely changed in terms of timing to try to improve the flow of the course. Of course, we have had the typical issues of lecturer availability changing the timing of a session here or there. But some weeks we just kept as they were, since they worked last year. This is one of those weeks. I didn't really revert my mind back to med-student mode to realize that putting something as complex as cerebellar anatomy the day before the test was a bad idea. It was a bad idea because basic educational literature supports the idea that concepts are retained better when they are repeated and applied. We really didn't have time to do either with the cerebellum. So, next year we will rectify that problem. This year, the schedule has been set for several months, and there's not much room now to swap things around.
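For what it's worth, this is even checkable at schedule-building time. Here's a toy sketch that flags topics introduced too close to the exam to allow for repetition and application - the three-day threshold is my arbitrary illustration, not a figure from the literature:

```python
from datetime import date

def flag_cramped_topics(schedule, exam_day, min_gap_days=3):
    """Flag topics introduced too close to the exam to be repeated and applied.

    schedule: topic -> date the topic is introduced.
    The 3-day default is an arbitrary illustration, not a published figure.
    """
    return [topic for topic, d in schedule.items()
            if (exam_day - d).days < min_gap_days]

schedule = {
    "cerebellar anatomy": date(2012, 1, 26),   # the day before the test
    "basal ganglia": date(2012, 1, 20),
}
print(flag_cramped_topics(schedule, exam_day=date(2012, 1, 27)))
# -> ['cerebellar anatomy']
```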
My reason for this post isn't about whether this was an ill-fated lecture or not; it is more about the student reaction to the lecture. I was in the back of the room as the lecture ended, and the reaction from those seated throughout the auditorium was best described as anger and frustration. Frustration I can understand. There was a lot of information presented, and that may have been perceived as not being 'fair'. However, there was also an underpinning of anger that, although I understand where it comes from, I find a little troubling. I've seen posts on Twitter by med students after a bad lecture where the venting becomes a personal attack on the lecturer themselves. That is where I think there is a bit of a problem. This also comes out in the narrative evaluations we receive from students for courses. There are plenty of comments which are truly helpful and point out errors which can be corrected. Then there are those that give little rationale for why the lecture was not good or how to improve it for the future, but are just downright mean.

I totally realize that the first and second year of medical school is a time of high pressure and stress. I also understand that processing all that information in the time required is a monumental task, that a poorly organized talk can make things worse, and that medical students are paying a lot of money for this. But I also understand that in even the best educational program, there are going to be times where you try something and it doesn't come off as planned. I also know for sure that my co-course director's intent was to provide more details to clarify the major points he was making. His intent was not to harm, but to help. In the majority of lectures I went to in medical school, the lecturer honestly wanted to help students learn about things they were passionate about. True, it is not always presented with great oratory skill or organization, but I think the number of lecturers who truly despise students and are purposefully trying to mess them up is very small.

So, all I'm saying is that the professionalism we are trying to teach in the medical school curriculum should include how to give reasonable feedback to educators without being judgmental. Yes, the lecture was ill-timed, and changing the slides of a dense lecture the day before the test was a mistake, and that feedback should be given. It's not OK, in frustration, to launch an all-out personal assault. Because, at the end of the day, most medical students still find a way to wade through those messes and learn what needs to be learned. It's not fun, but as I move forward in the 'life-long learning' cohort, most of what I'm presented with is a huge disorganized pile of information, some of it contradictory, and I need to work it out myself because I take a test regularly in the exam room of my clinic. And the theory is that part of the course director's job is to take that reasoned feedback and make changes to improve next year.
Wednesday, January 18, 2012
Teaching second year students with tag teams
In my view from the back of the neuroscience course, I saw some good stuff today. We had a four-hour block of time to introduce muscle, neuromuscular junction, peripheral nerve, and motor neuron physiology and pathophysiology. For each of these lectures, I invited a pathologist and a neurologist to share the lecture time. They had not done this before, so there were a few moments where it wasn't clear who was going to present what. Overall, the pathologists presented the pathological changes in the structure, and then the clinicians presented what that looks like in patients affected by these diseases. What I noticed that I thought was super cool was that in the middle of each talk, the clinician or the pathologist would look over at the other person with a look that said, "am I explaining this right?" The other presenter then usually stepped in and gave a nice presentation of the area that was fuzzy for the first lecturer.
I like this model for several reasons. First, it helps avoid some of the inevitable statements like, "I have no idea what you've been exposed to before about this, but..." or "Have you all seen this before or not?" Second, it allows points which need clarification to be clarified right at that moment. Third, I think it helps emphasize to the students that medicine truly is becoming too complex for one person to feel like they can master everything. Yes, you can still aspire to be a well-rounded physician, but any field of study moves too fast for you to practically stay up on everything. Thus, you need to learn to rely on the knowledge and experience of your colleagues. It also has the practical advantage of having clinicians and more basic-science faculty mingle a little.
I'm wondering if others have experience with a similar model in the basic science curriculum at your medical school. Please share your thoughts and ideas here.