
Friday, June 26, 2015

How might a pure competency-based curriculum change residency interview season?

OHSU is one of several schools that recently received an AMA-funded grant to push innovation in medical education.  Our new curriculum, YourMD (yeah, it has a cool marketable name), is in many ways a test lab for this grant (to be clear, most of what I'm going to discuss here is beyond the scope of the current version being developed for the YourMD curriculum; I'm outlining my personal view of what the model may look like in the future).  One of the primary themes in OHSU's work for this grant is to create a workable competency-based (not time-based) model of medical education.

Multnomah County Hospital residents and interns, circa 1925  
As you can imagine, there have been many questions about the logistical problems with such a system.  One of the issues raised at our institution, as this concept has been discussed at various faculty meetings, is the perceived trouble students in such a system will have in finding a residency program.  After all, the student will have a transcript which looks remarkably different from most current school transcripts.  It will have a bunch of competencies and EPAs (entrustable professional activities).  It may not have any mention of honors.  How is a residency director supposed to choose the best candidate for their program?

I've thought about this a bit, and have a few ideas.  First, if the school is truly competency-based, just the fact that the student has been able to graduate should indicate that:

a) The student understands and applies the knowledge necessary to start as an intern,
b) The student clearly demonstrates the skills necessary to start as an intern, and
c) The student clearly demonstrates the professionalism necessary to start as an intern.
 
To my mind (assuming the system will work as advertised), this is revolutionary.  It means that, as a residency director, you don't have to guess what you are getting.  You don't have to read between the lines for the secret codes hidden in the letters of recommendation.  This person is ready for residency.  End of line.

So, then what do you look for now?  Now, as a program director, you can look more at what other experiences and skills this particular individual has that would help them thrive at your particular institution.  Instead of trying to ensure that the person earned 'honors' in internal medicine, the medicine program director can sort applicants in all manner of ways.  They could decide their program wants people who have above-average skill in quality improvement, or they could decide they want residents who are particularly interested in medical education.  They can rank based on how well applicants operate in a team environment.  They can look for students who have had particular experiences that would benefit them in their environment - say, a lot of rural practice experience or many rotations in an under-served inner city.  Each program director can choose what they'd like to highlight, and I don't see a problem with letting students know what they are looking for in applicants.  This makes the interview sessions even less about figuring out whether this person can operate on the ward successfully, and more about whether this person fits well with our system and our culture.
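
To make that concrete, here is a toy sketch (in Python) of what sorting a competency-based applicant pool by a program's own priorities could look like.  The field names, weights, and records are all invented for illustration and don't correspond to any real transcript or application format.

```python
# Hypothetical applicant records built from a competency-based transcript.
# Field names and values are invented for illustration only.
applicants = [
    {"name": "A", "qi_experience": 4, "meded_interest": 2, "team_rating": 5, "rural_weeks": 12},
    {"name": "B", "qi_experience": 2, "meded_interest": 5, "team_rating": 4, "rural_weeks": 0},
    {"name": "C", "qi_experience": 5, "meded_interest": 3, "team_rating": 3, "rural_weeks": 6},
]

# A program that cares most about quality improvement and teamwork might weight
# those attributes; a different program would simply choose different weights.
weights = {"qi_experience": 2.0, "team_rating": 1.5, "rural_weeks": 0.1}

def fit_score(applicant, weights):
    """Weighted sum of whichever attributes this program has decided to highlight."""
    return sum(applicant.get(attr, 0) * w for attr, w in weights.items())

# Everyone in the pool is already competent to start internship; this ordering
# is only about fit with one program's priorities.
for a in sorted(applicants, key=lambda a: fit_score(a, weights), reverse=True):
    print(a["name"], round(fit_score(a, weights), 1))
```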

If competency-based education works, this may be something residency program directors will need to think about. We're all well on the way to competency-based education. So, program directors, prepare yourselves. I think it'll make interview season more fun actually.

Thursday, February 20, 2014

6 things to make your medical school lectures better

So, I realize that many schools are trying to minimize lecture hours, but the truth is that this modality will not likely ever go away completely.  As such, I've drafted some guidelines for lecturers in the course I co-direct, based on common mistakes I've seen.  Implicit in these guidelines is the fact that I have a lot of rotating lecturers, and some of these issues could be avoided by decreasing the number of presenters.  However, we're stuck in a cycle where I can't easily change that in the next year.  Please share in the comments if you have other things I haven't addressed.

*Note - NSB stands for neuroscience and behavior - a 9-week introductory course on neuroanatomy and neurophysiology, with some clinical neurology, psychiatry, and neurosurgery.

---------------



NSB lecturer guide:
Please adhere to the following guidelines when giving lectures for NSB.  These are general trends I have noticed over the last few years in terms of best practices and things to avoid if possible.  In general, please remember that these are students who have spent the last year and a half going to lots of lectures.  Things which seem trivial to you, since you only give one or two lectures, are agonizing for them because they see the same issues over and over again for two years.

1)            Do look over the slides of lecturers who are talking about topics related to your lecture.
- This helps a lot with keeping continuity through the course.  It would be nice not to have you say, "I'm not sure if you have been over this before or not."  If you are not sure, please look it up or ask.

2)            Don’t ever say, “This will not be on the test.” 
- Sometimes it is a minor point in your lecture, but it is a major point in a prior lecture (again why it helps to talk to others) or a later lecture (maybe even in the next block).  OK to say, “This is a minor point for the purposes of this lecture,” or “For this lecture, this is primarily FYI.”  We tell the students that everything mentioned in class is potentially testable.  If you don’t want us to potentially test it, don’t mention it at all.  If you must say this, talk to the course director to be sure it REALLY will not be on the test.

3)            Try to stay away from disclosure slide jokes.
- The first person who puts up the “I am actively looking for people to give me money so I can disclose it” slide is funny.  The next five are not so much.

4)            Please know how much time your lecture is scheduled for, and try to stick to that.
- Our general lecture block time is 50 minutes.  In general, we plan our lectures to end 10 minutes prior to the next lecture.  So if your lecture is at 10, you should plan to be done by 10:50.  If you bleed over, everyone behind you has to modify their talks.  If you have a lot of slides left and are running short on time, consider stopping where you are and recording the end of the lecture to be posted online.
- If you are the last lecturer of the day, it is OK to keep going (within reason).  However, please announce that it is OK for students who need to leave to be able to get up and go.  Some students have tight commute times to get to community preceptor sites by 1 PM or have to walk across campus to do OSCE testing over the noon hour.  Also keep in mind that there are noon talks which occur periodically in the lecture hall.

5)            If you are not skilled at PowerPoint (PP) or the basic functions of the audio/visual equipment, please learn the minimum functions.
- For PP, you should be able to start and stop your presentation, restart from the middle of a presentation, go backwards and forwards using only the keyboard, understand what the purpose of a right click is, start and stop video presentations, and be able to disengage auto-advancing of slides.
- For A/V, you should be able to turn the overhead projector on and off, mute/unmute the screen, turn the mic on/off, and turn the mic up/down using the front panel controls.
- If you are uncomfortable with this, talk to TSO, and they can help you learn how to do these things.

6)            Do feel free to experiment with audience participation techniques.
- Using clickers for in-lecture quizzing, pausing to let students work through a problem together, and other techniques are great ways to get students involved in the lecture.

Friday, April 19, 2013

Teaching in an ambulatory clinic - how to make routine follow-up unboring

One of my colleagues doing a botulinum toxin injection
I have worked with students in my outpatient botulinum toxin injection clinic for about five years now.  The students who work with me in this clinic are generally first-year students.  When I first signed up to have students in this clinic, it was primarily because it made the most sense from a scheduling standpoint: I often have a fourth-year student from the neurology clerkship working with me in my other afternoon clinic, and the first-year students are in class all morning.  It was only after I had students working with me for a few weeks that I realized I was doing the same thing over and over again.  Injecting botulinum toxin is fun to watch a few times, but after that, it all looks the same.  I've tried to use some strategies, applicable to any clinic, to help students still get value out of the experience even after being with me for 10 weeks.

1)  Discuss communication and encourage empathy:  I spend a lot of time talking about how I try to shift my communication approach with each patient.  We talk a lot about how I approach patients who may have personality quirks or special circumstances.  As many of the patients in my clinic have been seeing me for years, I try to provide background for the students about how other aspects of their care have affected their lives in general, and how I've needed to adjust their dystonia treatment over time.  I can talk about a wide range of conditions, including some very deep discussions I've had with patients about end-of-life issues in the past.  This allows each patient encounter to create a space to talk about more than just "This is another cervical dystonia," and turns it into a richer discussion of how I talked to this patient about cervical dystonia in light of a new cardiac diagnosis and what changes we made.  Those changes don't have to be made that day for it to be a salient discussion point with the student.

2)  Even small amounts of participation are appreciated:  This works better for first-year students who often have little clinical experience, but I think it helps even with more experienced learners.  In the botulinum toxin injection clinic, I have the students only observe for the first day or so.  After that I have them clean the injection area with alcohol swabs and hook up the EMG ground and reference leads.  Although that doesn't sound like much, it helps the student understand that they are being helpful and are a valued part of your team.  In truth, it does make things go faster, as it takes about the same amount of time to wash my hands as it does to prep the patient for the injections.  When possible, I also try to let the students do one injection in a relatively straightforward site by the end of the ten weeks.  It doesn't always work out that a good patient and injection site line up on the last day of the rotation.  But even doing one injection is potentially a big deal for a student.  Not as big a deal if they were in healthcare prior to med school, but most of the students who have worked with me would probably put that 0.25 cc IM injection on their list of highlights for the year.  You just need to put yourself back in the shoes of a first-year student to remember how excited you were to do just about anything back then.  Then let the student do a very low-risk part of the procedure.  Let's be clear, I'm not advocating for the student to do an injection into the iliopsoas (an injection with an EMG needle into the anterior thigh very near the femoral nerve/artery/vein).  But doing something on a small scale is good.

3)  Ask them what they are currently learning and try to find a connection:  This doesn't always work, but if they are learning about microbiology, have them read between patients about Clostridium.  If they are learning about basic physiology, have them read about neuromuscular junction synapse function.  If they are learning about cardiac function, have them look up the anticholinergic effects of botulinum toxin.  Or if a patient you see has A fib, have them look that up and listen to the patient's heart in clinic.  The trick is to make it not feel too constrained, and not feel like you are making something up for them to do, but to make it something they see value in learning more about.  This also applies to the professionalism or clinical skills teaching sessions most students have in the first and second years.

4)  Show them a bit of the business side of medicine:  This again sounds boring to you and me, but most students don't have much exposure to how the billing system works.  At least once or twice during the course of their experience with me, I'll have the students look over my shoulder as I input the billing codes for the patients.  I briefly explain the difference between a CPT code and an E&M code.  I talk about how I put in the prescription for the toxin.  I understand this system will probably change a bit before they are billing, but I again try to put myself in the position of where I was as a first-year student, when I had no clue what that stuff was all about.

These are a few lessons I've learned in the ambulatory setting.  Using these approaches, I recently had a student write on one of my faculty reviews that they were worried once they found out they were in a procedure clinic for 11 weeks, but were amazed how interesting it was each week.  I have also had students requesting to work with me for the past several years.  I'm sure this is not a novel list, and others have thought about this before.

Tuesday, January 29, 2013

Medical students want syllabus 3.0

Over the last two years, I've been getting feedback from students on what they would like to have included in the course materials distributed with our second-year neuroscience course.  I have heard a very clear message in that time: the expectations for what is included in the course materials, and which readings are required, have changed over the last few years.

Let me take you back to the mid-nineties when I took my medical school coursework (and my college courses in the early 90's).  Let's call this syllabus 2.0.  I received copies of all the slides presented (as long as they were in PP; we still had some lecturers who used slide carousels, and they had minimal notes printed - call that syllabus 1.0).  In class we took notes.  If you missed or ditched class, you could look back over what the lecturer talked about by subscribing to a note-taking service run by the students.  Readings were from the required textbooks.  Test questions covered anything in the printed syllabus as well as anything said verbally in lecture (even seemingly off-hand remarks) and anything covered in the textbook.

In this model, the material is presented, but there is an intentional (or unintentional) fire-hose level of information delivered.  It was up to the student to wrestle with this large volume of information, distill it down to essential concepts, and organize it in their brain well enough to pass the test.  It was expected that there would be some test questions which were not covered explicitly in class, and the purpose of those questions was to differentiate the top of a group of very highly motivated students.  The upside of this model is that it forces the student to analyze large volumes of information, some of which is not a core concept, and independently synthesize the important ideas.  This skill is very much part of the skill set required of a doctor in clinic.  The downside is that there is room for the individual student to miss the boat on important concepts which aren't explicitly identified as core.  Also, this model can increase student anxiety during test preparation, as you are not sure until you take an exam whether you are missing the boat.

Let's move to 2013.  Our course syllabus was inherited from the above paradigm, and we have been modifying multiple lectures.  Hence our lectures don't have well-developed outlines or notes by the faculty to accompany the PP presentations.  Students have on several occasions pointed me towards courses at our institution and others where the course materials include extensive annotation by the faculty in addition to the slides.  Students over the last two years have said things like (paraphrased):

"What I want is to have everything I need to know about this lecture written down so I can go learn it."
"I don't want to have links to a whole bunch of useful information about a topic, I want a single link to a very succinct, applicable resource."
"Even if the syllabus for a class is 450 pages, if it is all I need to look at, that's what I'd prefer."

Another way to state this is that the students would like a curated information repository which is finite, organized, and focused on the learning objectives.  This sounds to me like it mirrors discussions about moving from web 2.0 to web 3.0.  In other words, there is a desire to block out noise and focus on what is important to the individual.  Thus, all information in the course materials is honed to efficiently deliver what is necessary to perform well in the course.

The upside of this is that it is very clear what the student is expected to learn.  From a pedagogy standpoint, this is an ideal situation for an educational model based on measuring competence: it is clear what measure to obtain, and all learners can potentially reach this bar.  The downside is that this perhaps does not help in the long run, as this is not how real-life medical decision making occurs.  There is no finite set of combinations of signs and symptoms, so often there is a need to process a cacophony of noise and distill out the important ideas.  There is always more to read or more detail, and part of being a doctor is gaining skill in deciding how to be your own curator.

Which is better?  My view is we should aim in the grey of the middle.  I think it should be clear what is necessary to pass the exam, and a student who has mastered all the material covered in class, small groups, and presentations should be able to pass.  I agree that if there is a picture slide, it is reasonable for a lecturer to include some text to create context for the slide.  I also think it is reasonable to have some way of assessing whether a student can surpass these minimal competency levels.  On an exam, that means asking questions which may not have come specifically from the readings.  It may mean introducing a novel topic and applying the concepts learned in class in a new way.  What are your thoughts on how much detail course materials should contain?

Friday, October 19, 2012

How we teach medical students to view other healthcare providers

I've been thinking about an aspect of the 'hidden curriculum' lately.  It has come up in reviews of the neurology clerkship over the last several years: there have been comments about staff and residents making statements behind the closed doors of the conference room about the competence of colleagues from other departments and other institutions.  I don't think this is unique to our department or to our school of medicine.  The question I have is why does this happen?

I know this is not unique to us, as I encountered these same scenarios as a student myself on all the services I rotated through.  Here is a typical scenario: a resident takes a call with a request for a consultation by another service.  They hang up the phone and break into a tirade (sometimes with expletives included) about how stupid the person or team was for not being able to address this problem by themselves.  Too often this exchange happens before the phone is even put down, and it can grow into a literal shouting match.  I've seen this same pattern after discussions with support staff about a lab value, or after calling an on-call tech in to the hospital on the weekend.  There's also the easy target of the referring physician from a smaller hospital who called to transfer a patient.  Often these comments include jokes about the intelligence of the people on the other end of the phone.

So, why does this happen?  Let me discuss one possible reason.  From a medical training perspective, I was taught very early to be a critical thinker.  Much of clinical reasoning - especially diagnosis and treatment decisions - occurs in a vast grey area between the seemingly sharp lines of common diseases and syndromes seen in medical school textbooks and lectures.  This means you should approach every patient's problems from the beginning and rework the steps to diagnosis to assure yourself of the correct diagnosis and treatment path.  Taken in a positive way, if you come to a different opinion than previous providers, you can potentially change the treatment course and make the person better - which is good.  Taken in a negative way, every time you do this exercise, you find that there are many people who don't think like you do, and you can start to get the idea that you are the only provider in the region who is competent.  This bias towards thinking that presumed errors reflect incompetence is sometimes actually true - perhaps the provider is indeed not safe to practice medicine.  However, I think this is not true as often as may be grumbled about in the confines of a conference room.  First, clinical presentations are often subtle initially, and just the fact that you are evaluating the patient later makes things clearer.  Second, you already know what didn't work, which usually helps narrow the differential diagnosis or treatment options.  Third, you have no idea what the context of the day or night was for the provider as they were making those decisions.  Again, I'm not saying that every misadventure is justified, but as professionals our job is to take care of the patient.  Our job is not to jump to conclusions about what happened before we were there.

This behavior then gets passed along to our students, who see it modeled all the way from residents to staff.  It's accepted as normal behavior, and like other parts of the hidden curriculum it is passed down from one generation to the next.  Please remember this the next time you are tempted to make a disparaging remark.  Now, I'm not saying good-natured joking and friendly competition should be outlawed.  There are very good jokes out there about neurologists, and I know some good neurosurgeon jokes.  Humor can help us all deal with stressful situations.  I'm not for banning it completely.  I'm just asking for some thought before making a sarcastic comment about a colleague.  Would it be OK for that person to be in the room with you when you say it?  If yes, then it is likely just some banter.  If no, it may be time to rethink.  Especially with students in the room.

One final thought.  The other side of the coin is that we usually hear back from colleagues who tell us about things we did well.  Rarely do our colleagues report back to us on things we could have done better.  Thus, you likely have a reporting bias about your own performance on these types of issues.  So, be careful who you are criticizing, as it may well be you on the receiving end of someone else's conference-room commentary.

Friday, September 14, 2012

Medical educators - no degree required?

Higher education has this little secret.  Although it is getting better, most of the people responsible for delivering content and designing curriculum don't have a degree in education.  I'm not saying I'm the most learned educator, but I do have an undergrad degree in education.  I frankly fall back on that learning theory and curriculum design background daily as a medical educator.  However, most of my colleagues around the country don't have that.  I'm not talking about going to a one- or two-day seminar on teaching skills.  I'm talking about a degree from a university or college that states you have completed coursework in education.  I don't see that around these parts much.  And I honestly think medical education (and higher education) suffers for it.

I'm not saying that there are not good teachers in medical schools.  There are wonderful and devoted teachers in every aspect of medical education.  Also, years of teaching does help refine one's skill, and many medical educators have 'learned on the job' and have a decent knowledge base of educational theory.  I'm also not saying that everyone who has an education degree of some ilk is automatically a great teacher.  I'm also not saying that this is at any medical school in particular, but really it's everywhere.

What I am trying to say is that just like I'd rather see a cardiologist for my chest pain than a pediatrician, I'd rather have someone with some training and background in how learning happens and best ways to do education be the person teaching our future physicians.  There, my rant is done, and I feel better.

Friday, August 10, 2012

Clinical assessment variability - what is really causing it?

There was a recent article in Academic Medicine by Dr. Alexander and colleagues from Brigham and Women's Hospital describing the amount of variability in clerkship grading among US medical schools.  They found, unsurprisingly, that the grading systems for the clinical years had really no consistency at all.  There was inconsistency among the grading scales used (traditional ABCDF, honors/pass/fail, or pass/fail) (table 1), and even within the schools which used a similar scale, the percentage of students receiving the highest grade was all over the place (table 2).  So, the question is what do we do with this information?  I think no one really expected different findings, but now the answer is out there, in print (or on digital reader screens).

I think part of the answer to where we go from here is to decide if this article was really asking the right question.  The authors do start to talk about this in the discussion section, but I'll try to lay out my thoughts with a little different spin than they gave their discussion.  I think the real question is what are we using the assessment of clerkship performance for?  What is the essence of what we are trying to measure?  Only when there is broad consensus, not only between schools but within the individual courses of each school, will there be any semblance of uniformity in the grading of students.  I see at least two competing interests which influence how a clerkship director decides to come up with a grading system.  The first is the idea that the students should be measured on how competent they are in the area the clerkship is grading.  In other words, when they are on call as a first-year resident or as a 50-year-old physician, do they have the knowledge and skills to assess a patient with a given problem?  Second, the clerkship director also wants to be sure that the students at their school have a fair chance to compete for selective residency programs.  Thus, there also needs to be a system to distinguish high-achieving from low-achieving students.  The first system is more about the individual student, and with this system, by definition, everyone should be able to achieve the highest score with enough effort and work.  The second system is more about evaluation of the program and the group; in this system, it cannot be possible for everyone to achieve the highest score.  However, either system can be manipulated to aid students or to make things more hazardous for them.  There are benefits and risks to each system - as with anything in medicine.

I don't think these interests are necessarily incompatible, but they create a tension which I've seen in national meetings and in local curricular meetings.  I also think most clerkship directors are not aware of how this tension affects the grading system they have developed.  I think they're not aware because the debates I've heard are usually about tools for assessment or the number of honors grades.  Rarely does the debate get to the level of what our ultimate purpose for the assessment is.  The answer to that question must shape how grades are assessed.  Only when we all become very clear about what our goals are for the assessment will we truly be able to come to a place where we can have a national dialogue about how to unify the system.

Wednesday, July 11, 2012

Simulation training vs natural history in LP battle royale

I read an article on the use of simulation in teaching lumbar puncture (LP) technique to residents by Dr. Simuni and her medicine colleagues at Northwestern University.  I thought it was a really interesting article that helps add data to the idea that a curricular plan in medical education which includes deliberate practice and simulation does a really nice job of teaching learners a new skill.  It hints that this deliberate practice in a logical fashion is better than the traditional model.

I'm not so sure this paper really definitively answers the question of whether this practice is superior to the traditional training model.  In brief, the article pits the final scores on a mastery checklist of first-year medicine residents who underwent a three-hour educational session, including simulation, to teach proper LP technique against those of neurology residents who were asked to simply do the LP simulation while being graded on their performance on the checklist.  I think this result may stand over time and additional studies, but I have a few problems with it.  First, the neurology residents were not shown the checklist.  I think this is a big deal.  I don't know the proper way to handle that to get adequate controls, but essentially the medicine intern group was taught to the test.  It was deliberately pounded into their heads over the three-hour session that these are the things they are going to be graded on.  That's what deliberate practice is all about.  It's about repeating something to get it right.  To my mind, that is teaching to the test.  The neurology residents weren't given time to familiarize themselves with the simulator (at least it didn't say they were).  They also weren't oriented to what they would be evaluated on, so of course they didn't perform as well on the checklist.  As they have likely done multiple LPs, it might also have been tempting to skip ahead to inserting the needle, since a simulated environment feels artificial and the needle insertion feels like the ultimate goal.  It might have been more useful to go to the bedside of the next LP these residents did and see if the 'real world' performance was different between the PGY-1 group and the neurology residents.  I doubt any neurology resident would forget to get informed consent in the 'real world' (but I may be wrong).  Maybe the simulation training, in part, trains you how to take the final simulation exam.  I'm not saying it was a bad idea to do the simulation training; it wasn't.  I'm also not saying the checklist is invalid or has no practical applications; it does.  I'm saying that the PGY-1 group had the deck stacked in its favor.

I would also argue that the way I learned to do LPs was essentially deliberate practice over time with multiple patient experiences.  When I was first starting, I had a senior resident or faculty member over my shoulder giving me feedback on my technique.  Could this have been improved upon by adding a simulation session at the beginning of my training?  Absolutely!  But I don't know that this study really proves what they say it proves, which is that traditional training is inferior to simulation and that neurology residents can easily be schooled by interns fresh from the simulation lab.  Just eyeballing the baseline data, the neurology residents were all better than the interns before the intervention.  I think if you had put the neurology residents through the training, they also would have achieved a higher level of mastery.  *That's a neurologist talking, of course :)

I do want to say that I am a bit concerned about some of the mastery items the neurology residents missed (as were the authors).  The anatomy questions would likely have been taken care of by brushing up on the anatomy before the test, but you could argue that a senior neurology resident, especially, should know that.  The authors were concerned about anesthesia, but that could have been a function of being in a simulator vs the 'real world'.  It could also show how one of the schools has a local practice which is different from national norms.  The setting up of the tubes and manometer in a proper fashion is a bit vague to me, and I'm not sure I'd know what the proper position should be for that.  I wouldn't make that a make-or-break point on this procedure.  Not saying how I know this, but one can recover surprisingly well with the one-handed method of unscrewing the caps in a pinch.

So this is a long response to the article, because the editorial that accompanied it was trying to make the point that the traditional model is inferior and should potentially be reconsidered.  I don't think that is what this study showed.  I think it did prove that mastery level is attainable with a 3-hour simulation lab for PGY-1s.  I'm not sure it really proves they are better than neurology residents.  They may be, but I don't think this was a fair assessment of that.

Friday, June 29, 2012

Augmented reality for neurological education simulations

I am laying down a challenge for app developers out there who know more about programming than I do.  This challenge comes from a day-long IAMSE meeting course I attended over the weekend on state-of-the-art medical simulation tools.  What I saw was that some pretty cool simulation is available today to replicate many physical signs and to help train on various procedures.  These simulations have come a long way from when all Harvey could do was teach you how to pick up a murmur consistent with mitral stenosis.  Now you can check blood pressure, pupillary response, and breath sounds, and the mannequin can even talk to you.

The trouble (from a neurologist's perspective) is that current simulation is great for cardiopulmonary physiology, but it leaves a void for the neurological exam.  It can teach laparoscopic surgery, mimic a prostate nodule on DRE, and a lot of other things.  But aside from pupils and having the machine shake to mimic a seizure (which I haven't seen, but from the description it sounds like a very large Tickle-Me-Elmo type of convulsion - i.e., all trunk movement and not much arm or leg movement), the neurological exam is as yet uncovered.  I think a lot of that comes from the fact that the neurological exam will require pretty advanced robotic arms and legs to mimic things like fine finger movements and strength testing.  Hence, for the most part, current mannequins can only really reproduce the exam of an essentially comatose patient.

I see an opportunity for augmented reality to step in while the robotic simulation takes time to become more sophisticated and cheaper.  I could imagine using a real person as a simulated patient sitting in a chair, or a simulation mannequin on a gurney, and having the student hold a tablet up to the person so that the view screen is over the torso.  Then an augmented reality protocol could take the image of the arm from the simulation and overlay a realistic-looking tremor.  Or you could overlay realistic ataxia with heel-to-shin testing.  Or you could overlay a realistic tongue deviation, tongue fasciculations, or palate deviation.  Thus, you could more efficiently create a high-fidelity simulation with neurological deficits.  I've asked my bioengineer friend about this, and he said it could probably be done; it'd just take money to get off the ground.
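
I'm not that app developer, but as a very rough illustration of the overlay idea, here is a minimal sketch assuming Python with OpenCV and NumPy, with a webcam standing in for the tablet camera.  It simply applies an oscillating horizontal shift to a fixed patch of the live video to mimic a rest tremor; a real version would need arm tracking and far more realistic motion synthesis.

```python
# Minimal proof-of-concept: overlay an oscillating "tremor" on part of a live video feed.
# Assumes Python with opencv-python and numpy installed; press 'q' to quit.
import time
import cv2
import numpy as np

TREMOR_HZ = 5.0                 # roughly the frequency range of a parkinsonian rest tremor
AMPLITUDE_PX = 6                # how far the patch is shifted, in pixels
REGION = (200, 300, 150, 250)   # y1, y2, x1, x2 -- placeholder box standing in for a tracked arm

cap = cv2.VideoCapture(0)
start = time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    y1, y2, x1, x2 = REGION
    # Sinusoidal horizontal offset for this moment in time.
    dx = int(AMPLITUDE_PX * np.sin(2 * np.pi * TREMOR_HZ * (time.time() - start)))
    patch = frame[y1:y2, x1:x2]
    # Shift the patch horizontally (contents wrap at the edges) and paste it back.
    frame[y1:y2, x1:x2] = np.roll(patch, dx, axis=1)
    cv2.imshow("simulated tremor overlay", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```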

So, there's my challenge.  Create an augmented reality neurology exam simulation.  I'd be interested to hear if anyone is already developing something like this, or if any app makers would be interested in making it happen.

Wednesday, June 6, 2012

EBM evaluation tools applied to medical student assessment tools

I remember back to the days when I was a fresh medical student taking those first classes in biochem, anatomy, and cell biology.  I learned a ton, and honestly I draw on this knowledge base daily when I'm taking care of patients.  I also remember that the assessment methods used during my first year of medical school were not the greatest (in the opinion of a person who was teaching high school physics and chemistry 3 months before entering med school).  The number of assessment options used in med schools has risen over the 15 years since I was an M1.  However, with a rise in the number of choices comes the responsibility to choose the right one.  Another way to look at this, from a pedagogical standpoint, is to ask whether the assessments are really measuring the outcomes you think they are measuring.  To attempt to help the medical educator with this dilemma, I came up with the idea that you can apply a well-known paradigm used to evaluate evidence-based medicine (EBM) to evaluate a student assessment.  The EBM evaluation method I'm most familiar with is outlined by Straus and colleagues in their book, Evidence-Based Medicine: How to Practice and Teach EBM, copyright 2005.

Here's my proposed way to assess assessment:

1)  Is the assessment tool valid?  By this we need to be sure that our measurement tool is reliable and accurate in measuring what we want it to measure.  The standardized (high-stakes) examinations like the MCAT, USMLE, and board certification examinations are expensive not because these companies are rolling in cash, but because it takes people LOTS of time to validate a test.  Hence, most home-grown tools are not completely validated (although some have been).  To be validated, an assessment has to give similar results if the same learner takes the test each time.  It also has to accurately categorize the level of proficiency of the learner at the task you are measuring.

For example, let's say I have an OSCE to assess whether a learner can counsel a young woman of child-bearing age on her options for migraine prophylactic medications.  For my OSCE to be valid, I need to look for reliability and accuracy.  Does the OSCE predictably identify learners who do not understand that valproate has teratogenic potential and who fail to discuss this with a standardized patient (accuracy)?  You also want to know if it is reliable; in other words, does your scoring method give similar results when multiple faculty who have been trained on how to use the tool score the same student interaction?  To truly answer these questions about an assessment, it takes multiple data points for both raters and learners - hence why it takes time and money, and also why most assessments are not truly validated.
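
To make the inter-rater piece concrete, here is a small sketch (plain Python; the ratings below are made up purely for illustration) of the kind of check you could run once two trained faculty raters have scored the same set of OSCE encounters: simple percent agreement plus Cohen's kappa, which corrects for the agreement you'd expect by chance.

```python
from collections import Counter

# Hypothetical pass/fail checklist ratings of the same 10 OSCE encounters by two trained raters.
rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass", "pass", "pass"]

def percent_agreement(a, b):
    """Fraction of encounters where the two raters gave the same rating."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Observed agreement corrected for the agreement expected by chance alone."""
    n = len(a)
    p_observed = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    categories = set(a) | set(b)
    p_chance = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

print(f"percent agreement: {percent_agreement(rater_a, rater_b):.2f}")
print(f"Cohen's kappa:     {cohens_kappa(rater_a, rater_b):.2f}")
```

A high percent agreement with a low kappa usually just means both raters are handing nearly everyone the same rating, which is worth knowing before you trust the tool.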

The best way to validate is to measure the assessment against another 'gold standard' assessment: how well does your assessment perform compared with known, validated scales?  Unfortunately, there aren't many 'gold standard' assessments outside of the clinical knowledge domain in medical education (although it is getting better).

2)  Is the valid assessment tool important?  Here we need to talk about whether the difference seen on the assessment is actually a real difference.  How big is the gap between those who passed without trouble, those who just barely passed, and those who failed to meet the expected mark?  Medical students are all very bright, and sometimes the difference between the very top and the middle is not that great a margin (even if it looks like it on the measures we are using).  I think the place where we sometimes trip up here is in assuming that Likert scale numbers have a linear relationship.  Is it reasonable to assume that the steps from 3 to 4 to 5 on the scale used on clinical evaluations are equal, and is the difference between a 4 and a 5 really important?  It might very well be, but it will be different for every scale that we set up.  I've never been a big fan of using Likert rating scores to directly come up with a percentage-point score unless you can prove to me through your distribution numbers that it is working.
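
As a hypothetical example of what I mean by distribution numbers, here is a tiny sketch (the ratings are invented) that tallies how much of a 1-5 clinical evaluation scale is actually being used, and what a naive linear conversion to percentage points would imply.

```python
from collections import Counter

# Invented clinical-evaluation ratings on a 1-5 Likert scale for one rotation.
ratings = [5, 4, 5, 4, 4, 5, 3, 5, 4, 5, 5, 4, 4, 5, 5]

# Distribution check: how much of the scale is actually being used?
counts = Counter(ratings)
for score in range(1, 6):
    print(f"rating {score}: {counts[score]:2d} students")

# A naive linear conversion treats the 4-to-5 step as worth exactly 20 percentage points,
# which assumes the scale is interval-level -- the very assumption being questioned above.
for score in sorted(set(ratings)):
    print(f"Likert {score} -> {score / 5 * 100:.0f}%")
```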

3)  Is this valid, important tool able to be applied to my learners?  I think this involves several steps.  First, are you actually measuring what you'd like to measure?  A valid, reliable tool for measuring knowledge (a typical MCQ test), unless it is very artfully crafted, will not likely assess clinical reasoning skills or problem-solving.  So, if your objective is to teach the learner how to identify 'red flags' in a headache patient's history, is that validated MCQ the best assessment tool to use?  Is it enough that the learner can pick out 'red flags' from a list of distractors, or is it a different skill set to identify them in a clinical setting?  I'm not saying MCQs can never be used in this situation; you just have to think about it first.

Second, if you are using a tool from another source and you did not design it for your particular curriculum, is the tool useful for your unique objectives?  Most of the time this is OK, and cross-fertilization of educational tools is necessary given the time and effort involved in building them.  But you have to think about what you are actually doing.  In our example of the headache OSCE, let's say you found a colleague at another institution who has an OSCE set up to assess communication of the differential diagnosis and evaluation to a person with migraine who is worried they have a brain tumor.  You then apply that to your clerkship, but you are more interested in the earlier scenario about choice of therapy.  Will the tool still work when you tweak it?  It may or may not; you just need to be careful.

Hopefully you've survived reading through to the end of this post, learned something about assessment in medical education, and found the EBM-esque approach to assessment evaluation useful.  My concern is that, in general, not enough time is spent considering these questions, and more time is spent on developing the content than on assessment.  I'm guilty of this as well, but I'm trying to get better.  Thanks for reading, and feel free to post comments/thoughts below.

Friday, May 25, 2012

Senioritis in medical school - How to motivate the abulic state

It's that time of year when med students throughout the US shake hands with the Dean and pull their tassels in unison from the right to the left.  In the months leading up to the tears and endless photo ops that mark any graduation, the students are finishing up the last few rotations of their medical school career.  Although some students retain focus on their swan-song rotations, in every heart there is the lure of looking beyond the present rotation to residency in all its glory.  Some have more trouble than others maintaining drive at the end of the final year.  Most students post-match are taking electives or required courses which are not directly aligned with their chosen field of study.  Pre-match, this makes some sense for getting a Dean's letter together and positioning oneself as a desirable residency applicant; post-match, all of this does not seem to matter as much.  Indeed, there has been discussion among academic educators that there is a missed opportunity in the fourth year of medical school, based largely on articles like this one from Dr. Lyss-Lerman and colleagues which outlined residency directors' views of how well the fourth year is working to prepare students for residency.

So, I have a bunch of fourth-year students in my neurology clerkship; in fact, I have only fourth-year students, with the exception of a few third-years who can take neurology as an elective in one block in November.  These students are largely not going into neurology.  How do I try to keep them engaged in our neurology rotation?  (Full disclosure - I'm fully aware I can learn more about how to do this.  I definitely still get some students for whom my little tricks don't work.  This is partly why I'm starting this discussion, so that we can all learn from each other.)  Here are some of my ideas:

- Have them create their own goals - In orientation, I encourage the students to come up with their own course goals and objectives.  I have prepared goals and objectives, and they are held accountable to those, but there may be specific areas they want to focus on as an area of weakness or as an area which is important for their specialty.  I tell them that in residency, you will not always be given clear, overt objectives for every rotation.  Thus, I made a habit in residency of picking 2-3 key things I wanted to learn.  When I was on cardiology as an intern, I wanted to sharpen my EKG reading skills and my cardiac exam skills, so I had something to focus on while taking care of those patients.  Intrinsically created goals are more motivating.  I encourage the students to follow that model as they move on in their career.

- Encourage exploration of topics related to their field - This is partly a student-led issue, and partly faculty development.  Often students will stay engaged if the faculty member recognizes what field they are going into and discusses aspects of a neurological case which are of interest to the student.  For example, we had an OMFS fellow rotating through the neurology clerkship, and I took him aside to discuss a case of sialorrhea I was seeing in the setting of neurodegenerative disease.  Sure, it's important for him to know how to treat those diseases from a neurologic standpoint, but he's going to be more interested in the salivary issues.  This can then be used as a doorway to get him interested in the rest of the disease.

- Try using games - I haven't used this in my clerkship yet, but as a medical student and a resident, we had an attending (Dr. Harold Adams) who would play Neuro-Jeopardy several times during the rotation.  Students were put into teams and asked neuro-trivia questions about neuroanatomy, neurological differential diagnosis/treatment, and neurological history.  As a student (and a resident) I really enjoyed this.  It's a way to get students to want to read up on disease states, etc.

- Scare the bejeezers out of them - I will often also play the card that in only two to three short months, they will be responsible for caring for patients on their own (in a supervised fashion initially).  Their signature will mean something, and when someone in their care has a neurological problem, they will likely be the first person to evaluate the situation.  Starting on July 1.  Most students understand this logic.

These are just a few ideas I've used.  Any other thoughts on how to motivate the post-match senior on a required rotation?  Leave them in the comments below!

Thursday, April 12, 2012

Virtual hospitals - The future of medical simulation?

I was flipping through Facebook the other day and saw a video posted by a friend.  It was over 17 minutes long, which is eternal in the world of Facebook videos, but I thought I'd give it a try as it looked interesting.  It ended up showing people from 'The Gadget Show' making a simulator which not too long ago would have been pure fantasy.  They built a tent with 360-degree video output capability that also has a 360-degree treadmill to allow you to move in the virtual world by walking as you would in real life.  They also hooked up an Xbox Kinect sensor to pick up other body movements.  They threw in a few other cool add-ons, and they ended up with a truly immersive environment for a first-person shooter game.  You can watch the video here.

I got to thinking that this technology is now available and could be used in medical school to train physicians.  It's not yet at the level of being a holodeck, but it is closer than we've ever been before.  I could envision a program where there is an ambulatory office building, and the student has their own clinic to run where simulated patients come in to be interviewed.  The physical examination is done through gestures mimicking what the real PE would be, or it could be coupled with a simulation manikin to elicit the physical findings.  Then the student has to go back to a virtual staffing room, dictate the encounter, and order testing.  They then move on to the next patient.  If you had enough of these built (assuming this type of technology gets cheaper in the future), you could envision having a 'continuity clinic' set up completely in a simulator.  This might include seeing some of your regular patients back as they come through the emergency room for acute conditions or even go to the OR.  It could be as complex as there is time and money to create the scenarios.

I often thought in residency that it would be interesting to have an immersive simulated hospital where you could spend at least some of your time as a medical student or as a junior resident.  There you could have freedom to make some truly independent decisions and see what happens.  I think the advantages to something like this are obvious and are akin to the flight simulators that pilots use to train.  It will never replace time spent on the wards with skilled clinicians giving supervision and feedback.  I don't think the technology is there for a completely realistic medical simulation.  But it is getting closer. 

Monday, March 26, 2012

Life - Do med students have one? Work-life balance across generations

I gave a journal club last week discussing some general ideas about generational differences between the three main groups trying to work together in medical education: Boomers, Gen X, and Gen Y (or whatever your preferred term for this generation is).  As I was looking into the topic to prepare for this talk, one of the themes that kept popping up was work-life balance.  In general, the common wisdom is that the Boomers value hard work and are willing to sacrifice family life for career advancement.  Gen X and Gen Y tend to have less of a focus on work as a source of primary identity, and see much more value in maintaining balance between career and home.  The purpose of this blog post is not to decide whether this is indeed true or not.

What I'd like to spend a moment discussing is how this generational difference is creating conflict in the halls of medical schools.  Medical students are primarily Gen Y (although there are some Gen X in the mix).  Faculty who now populate Dean's-office-level positions are primarily Boomers, and course/clerkship directors are Boomers with some Gen X filling the junior ranks.  So, what happens is that the Boomers remember their medical school life, which was ruled by the Greatest Generation (who placed even more value on work due to their experiences in the Great Depression).  The biggest place I've seen this conflict play out is in requests for time off or for changing the date of a test.  The Boomers were given very little room to change their schedule.  I've talked to many of them, and the stories were essentially that if you wanted to take a day off during the clinical years for anything other than being near death, there would be severe consequences (like repeating the entire clerkship).  Things were a little better for me, but not a lot.  I remember having friends in medical school who had a lot of trouble getting time off to attend weddings or family reunions.  There was minor grumbling, but we all decided it was a transient time, and this was preparing us somehow for the trials of residency.  And we kept telling ourselves that things would eventually get better.  We also had the usual weeks of vacation around the holidays and spring break for some time off.  Everyone also had some lighter rotations, and the fourth year comes with a much more flexible schedule.

Then along comes Gen Y.  They are much more vocal about their need for time off, and much more vocal about providing feedback on things they do not agree with.  And they are now complaining primarily to the Boomers, who primarily don't want to hear about it.  I'm not so sure they should be dismissed.  Maybe it's my Gen X roots showing, or maybe I'm still close enough to being a student that I remember the bind a completely inflexible schedule puts you in.  So, I'm wondering if school policies for personal days off should be revisited.  I'm thinking most of these policies were set in place for a very different world and haven't been changed much in 20-30 years.  With the advent of technology, it is possible to make up some assignments which could not have been made up in the past.  There's also a different cultural norm emerging (or maybe I just think this should happen), and missing a wedding because you are assigned to spend a day in clinic is no longer an acceptable trade-off.

As a disclaimer - I've been told by several fourth-year students that as a clerkship director, I run a 'tight ship'.  To my mind, I'm just doing what the school time-off policy tells me to do.  Our school policy is that students have 2 days off per year, which can be used for attending a professional meeting or if they are ill or have a family emergency.  All other time off is at the discretion of the clerkship director and must be made up.  I'm not sure I have a perfect answer as to how to change the current policy, but maybe by working with appropriate representatives from Gen Y, Gen X, and the Boomers we can figure something out together.

Tuesday, March 6, 2012

What medical education can learn from "Moneyball"

I've been waiting a bit to write this post, as I'm not sure exactly which way to take it.  Let me start by stating that I'm a really big baseball fan, and have been since second grade when my dad first took me on the El in Chicago to see the Cubs play in Wrigley.  I still get chills walking into that place.  This love of baseball drives me to read the occasional baseball book.  So, while I haven't seen the recent movie, I read the Michael Lewis book, "Moneyball," a few years ago.  And I really liked it on many levels.

In the realm of medical education, I liked the idea of trying to measure something that is inherently immeasurable.  In some respects, trying to pick a good candidate from a pool of medical school applicants or trying to assign a grade to a student on a clinical rotation is not unlike what the old-time scouts in "Moneyball" were doing.  They would look at a player batting, pitching, or fielding, and go with an overall gestalt of whether that player was 'big-league material'.  They were also basing their decisions on statistics which had been around forever, and no one had ever really questioned whether or not they worked to predict who is or is not going to be a good performer.

Then Billy Beane and his team of statisticians looked beyond the traditional numbers and redefined what to look for in a prospect, largely ignoring the player's current body habitus or mechanics and focusing solely on the numbers.  They also redefined what success was by finding that the number of runners on base per game correlated with wins more tightly than other statistics.  Thus, on-base percentage (which gives credit for walks) and slugging percentage (which weights extra-base hits) were more important indicators of how an individual would contribute to the team than total runs batted in or home runs.  (Sorry if I just lost the non-baseball fans out there.)
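
For the non-baseball fans, here is a minimal sketch of how those two statistics are computed; the season line below is made up purely for illustration.

```python
def on_base_percentage(hits, walks, hbp, at_bats, sac_flies):
    """Times on base (hits + walks + hit-by-pitch) per plate appearance counted in OBP."""
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

def slugging_percentage(singles, doubles, triples, home_runs, at_bats):
    """Total bases per at-bat; extra-base hits are weighted more heavily."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats

# Made-up season line for illustration.
obp = on_base_percentage(hits=150, walks=70, hbp=5, at_bats=500, sac_flies=5)
slg = slugging_percentage(singles=95, doubles=35, triples=3, home_runs=17, at_bats=500)
print(f"OBP: {obp:.3f}  SLG: {slg:.3f}  OPS (on-base plus slugging): {obp + slg:.3f}")
```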

This process can have applications in lots of venues.  I think medical schools need to re-examine how we are evaluating our students and decide if we need to go through a similar process.  Are there statistics available to us now which may not have been available 20 to 30 years ago that we could use to identify medical students who are not likely to do well in practice?  We're pretty solid at identifying people with knowledge gaps, as our system of standardized testing takes care of that.  But is that what really makes a good physician?  It's a part of it for sure, but it is not all of it.  There's a lot more to clinical reasoning and professionalism than just knowledge base.  Can we find ways to capture those measures, or are we going to be stuck with the old scouting reports, crossing our fingers to see what happens?  I don't have any solid answers yet, but I'm willing to help look.