
Friday, June 26, 2015

How might a pure competency-based curriculum change residency interview season?

OHSU is one of several schools that recently received an AMA-funded grant to push innovation in medical education.  Our new curriculum, YourMD (yeah, it has a cool, marketable name), is in many ways a test lab for this grant.  (To be clear, most of what I'm going to discuss here is beyond the scope of the current version being developed for the YourMD curriculum; I'm outlining my personal view of what the model may look like in the future.)  One of the primary themes in OHSU's work for this grant is to create a workable competency-based (rather than time-based) model of medical education.

Multnomah County Hospital residents and interns, circa 1925  
As you can imagine, there have been many questions about the logistical problems with such a system.  One issue raised at our institution, as this concept has been discussed at various faculty meetings, is the perceived trouble students in such a system will have in finding a residency program.  After all, such a student will have a transcript that looks remarkably different from most current school transcripts.  It will have a bunch of competencies and EPAs (entrustable professional activities).  It may not have any mention of honors.  How is a residency director supposed to choose the best candidate for their program?

I've thought about this a bit, and have a few ideas.  First, if the school is truly competency-based, just the fact that the student has been able to graduate should indicate that:

a) The student understands and applies the knowledge necessary to start as an intern,
b) The student clearly demonstrates the skills necessary to start as an intern, and
c) The student clearly demonstrates the professionalism necessary to start as an intern.
 
To my mind (assuming the system works as advertised), this is revolutionary.  It means that, as a residency director, you don't have to guess what you are getting.  You don't have to read between the lines for the secret codes hidden in the letters of recommendation.  This person is ready for residency.  End of line.

So, what do you look for now?  As a program director, you can begin to look more at what other experiences and skills a particular individual has that would help them thrive at your institution.  Instead of trying to ensure the person earned 'honors' in internal medicine, the medicine program director can sort applicants in all manner of ways.  They could decide their program wants people with above-average skill in quality improvement, or that they want residents who are particularly interested in medical education.  They can rank applicants on how well they operate in a team environment.  They can look for students who have had particular experiences that would benefit them in their environment - say, a lot of rural practice experience or many rotations in an underserved inner city.  Each program director can choose what they'd like to highlight, and I don't see a problem with letting students know what they are looking for in applicants.  This makes the interview sessions even less about figuring out whether this person can operate successfully on the ward, and more about whether this person fits well with our system and our culture.

If competency-based education works, this may be something residency program directors will need to think about.  We're all well on the way to competency-based education.  So, program directors, prepare yourselves.  I actually think it'll make interview season more fun.

Thursday, February 20, 2014

6 things to make your medical school lectures better

So, I realize that many schools are trying to minimize lecture hours, but the truth is that this modality will likely never go away completely.  As such, I've drafted some guidelines for lecturers in the course I co-direct, based on common mistakes I've seen.  Implicit in these guidelines is the fact that I have a lot of rotating lecturers, and some of these issues could be avoided by decreasing the number of presenters.  However, we're stuck in a cycle where I can't easily change that in the next year.  Please share in the comments if you have other things I haven't addressed.

*Note - NSB stands for neuroscience and behavior - a 9-week introductory course on neuroanatomy and neurophysiology, with some clinical neurology, psychiatry, and neurosurgery.

---------------



NSB lecturer guide:
Please adhere to the following guidelines when giving lectures for NSB.  These are general trends I have noticed over the last few years in terms of best practices and things to avoid if possible.  In general, please remember that these are students who have spent the last year and a half going to lots of lectures.  Things that seem trivial to you, since you only give one or two lectures, are agonizing for them because they see these issues over and over again for two years.

1)            Do look over the slides of lecturers who are covering topics related to your lecture.
- Helps a lot with keeping continuity through the course.  It would be nice not to have you say, “I’m not sure if you have been over this before or not.”  If you are not sure, please look it up or ask.

2)            Don’t ever say, “This will not be on the test.” 
- Sometimes it is a minor point in your lecture, but a major point in a prior lecture (again, why it helps to talk to other lecturers) or a later one (maybe even in the next block).  It's OK to say, “This is a minor point for the purposes of this lecture,” or “For this lecture, this is primarily FYI.”  We tell the students that everything mentioned in class is potentially testable.  If you don’t want us to potentially test it, don’t mention it at all.  If you must say this, talk to the course director to be sure it REALLY will not be on the test.

3)            Try to stay away from disclosure slide jokes.
- The first person who puts up the “I am actively looking for people to give me money so I can disclose it” slide is funny.  The next five are not so much.

4)            Please know how much time your lecture is scheduled for, and try to stick to that.
- Our general lecture block is 50 minutes.  In general, we plan lectures to end 10 minutes before the next one begins, so if your lecture is at 10, you should plan to be done by 10:50.  If you run over, everyone after you has to modify their talk.  If you have a lot of slides left and are running short on time, consider stopping where you are and recording the end of the lecture to be posted online.
- If you are the last lecturer of the day, it is OK to keep going (within reason).  However, please announce that students who need to leave may get up and go.  Some students have tight commutes to reach community preceptor sites by 1 PM, or have to walk across campus for OSCE testing over the noon hour.  Also keep in mind that noon talks occur periodically in the lecture hall.

5)            If you are not skilled at PowerPoint (PP) or the basic functions of the audio/visual equipment, please learn the minimum functions.
- For PP, you should be able to start and stop your presentation, restart from the middle of a presentation, go backwards and forwards using only the keyboard, understand what the purpose of a right click is, start and stop video presentations, and disengage auto-advancing of slides.
- For A/V, you should be able to turn the overhead projector on and off, mute/unmute the screen, turn the mic on and off, and turn the mic up and down using the front panel controls.
- If you are uncomfortable with any of this, talk to TSO, and they can help you learn how to do these things.

6)            Do feel free to experiment with audience participation techniques.
- Using clickers for in-lecture quizzing, pausing so students can work through a problem together, and other techniques are great ways to get students involved in the lecture.

Wednesday, June 6, 2012

EBM evaluation tools applied to medical student assessment tools

I remember back to the days when I was a fresh medical student taking those first classes in biochem, anatomy, and cell biology.  I learned a ton, and honestly I draw on this knowledge base daily when I'm taking care of patients.  I also remember that the assessment methods used during my first year of medical school were not the greatest (in the opinion of a person who was teaching high school physics and chemistry 3 months before entering med school).  The number of assessment options available to med schools has risen in the 15 years since I was an M1.  However, with a rise in the number of choices comes the responsibility to choose the right one.  Another way to look at this from a pedagogical standpoint: are the assessments really measuring the outcomes you think they are measuring?  To help the medical educator with this dilemma, I came up with the idea of applying a well-known paradigm for evaluating evidence-based medicine (EBM) to evaluating a student assessment.  The EBM evaluation method I'm most familiar with is the one outlined by Straus and colleagues in their book, Evidence-Based Medicine: How to Practice and Teach EBM (2005).

Here's my proposed way to assess assessment:

1)  Is the assessment tool valid?  By this we mean that our measurement tool must be reliable and accurate in measuring what we want it to measure.  Standardized (high-stakes) examinations like the MCAT, USMLE, and board certification exams are expensive not because these companies are rolling in cash, but because it takes people LOTS of time to validate a test.  Hence, most home-grown tools are not completely validated (although some have been).  To be reliable, an assessment has to give similar results each time the same learner takes it.  To be accurate, it has to correctly categorize the learner's level of proficiency at the task you are measuring.

For example, let's say I have an OSCE to assess whether a learner can counsel a young woman of child-bearing age on her options for migraine prophylactic medications.  For my OSCE to be valid, I need to look for reliability and accuracy.  Is it accurate - does the OSCE correctly identify learners who do not understand that valproate has teratogenic potential and don't discuss this with a standardized patient?  Is it reliable - does your scoring method give similar results when multiple faculty who have been trained on the tool score the same student interaction?  To truly answer these questions about an assessment, it takes multiple data points for both raters and learners - hence why it takes time and money, and also why most assessments are not truly validated.
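To make the rater side of this concrete, here's a minimal sketch (in Python, with entirely invented ratings - this isn't from any real validation study) of how you might quantify agreement between two trained faculty raters scoring the same ten OSCE encounters, using simple percent agreement and Cohen's kappa:

```python
# A hypothetical sketch, not a real validation study: quantifying how well
# two trained faculty raters agree when scoring the same OSCE encounters.
# 1 = the rater marked "discussed teratogenic risk of valproate" as done.

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' binary scores on the same encounters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a = sum(rater_a) / n  # how often rater A marks "done"
    p_b = sum(rater_b) / n  # how often rater B marks "done"
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # chance agreement
    return (observed - expected) / (1 - expected)

# Invented ratings for ten student encounters.
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]

agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"percent agreement: {agreement:.0%}")                        # 80%
print(f"Cohen's kappa    : {cohens_kappa(rater_a, rater_b):.2f}")   # ~0.52
```

Kappa matters because two raters can agree 80% of the time purely by chance if they both mark nearly everyone as passing; kappa discounts that chance agreement.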

The best way to validate is to measure the assessment against another 'gold standard' assessment: how well does your assessment perform compared with known, validated scales?  Unfortunately, there aren't many 'gold standard' assessments outside of the clinical knowledge domain in medical education (although it is getting better).
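As a toy illustration of that 'gold standard' comparison (again, all numbers invented), you could correlate each learner's score on your home-grown tool with their score on an established, validated scale:

```python
# A hypothetical sketch: checking a home-grown assessment against a
# 'gold standard' by correlating each learner's scores on both instruments.
# All numbers are invented for illustration.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Ten learners scored on the new tool and on an established, validated scale.
new_tool      = [62, 78, 55, 90, 71, 84, 60, 95, 68, 74]
gold_standard = [65, 75, 58, 88, 70, 86, 57, 93, 72, 70]

print(f"correlation with gold standard: r = {pearson_r(new_tool, gold_standard):.2f}")
```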

2)  Is the valid assessment tool important?  Here we need to ask whether the difference seen on the assessment is actually a real difference.  How big is the gap between those who passed without trouble, those who just barely passed, and those who failed to meet the expected mark?  Medical students are all very bright, and sometimes the difference between the very top and the middle is not that great a margin (even if it looks like it on the measures we are using).  I think the place where we sometimes trip up here is in assuming that Likert scale numbers have a linear relationship.  Is a step-wise difference from 3 to 4 to 5 on the scale used on clinical evaluations a reasonable assumption, and is the difference between a 4 and a 5 really important?  It might very well be true, but it will be different for every scale we set up.  I've never been a big fan of using Likert ratings to directly compute a percentage score unless you can prove to me through your distribution numbers that it is working.
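Here's a small, hypothetical illustration of what I mean: converting a mean Likert rating straight into a percentage assumes the steps are evenly spaced, and putting the raw distribution next to that 'percent' shows what the single number hides:

```python
# A hypothetical sketch of the Likert problem: a naive percentage score
# assumes the steps from 3 to 4 to 5 are equal in size. The ratings are
# invented; the point is what the single averaged number hides.

from collections import Counter

ratings = [5, 5, 4, 5, 4, 3, 5, 4, 5, 5, 2, 5, 4, 5, 5]  # one student's evals

mean_rating = sum(ratings) / len(ratings)
naive_percent = mean_rating / 5 * 100  # treats the scale as perfectly linear

print(f"mean rating   : {mean_rating:.2f}")     # 4.40
print(f"naive percent : {naive_percent:.1f}%")  # 88.0%
print("distribution  :", dict(sorted(Counter(ratings).items())))
```

The 'percent' makes this student look like a B+ performer, while the distribution shows a mostly-5 record with two outlier ratings - two very different stories from the same data.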

3)  Is this valid, important tool able to be applied to my learners?  Answering this involves several steps.  First, are you actually measuring what you'd like to measure?  A valid, reliable tool for measuring knowledge (the typical MCQ test) will not likely assess clinical reasoning or problem-solving skills unless it is very artfully crafted.  So, if your objective is to teach the learner to identify 'red flags' in a headache patient's history, is that validated MCQ the best assessment tool to use?  Is it OK that the learner can pick out 'red flags' from a list of distractors, or is it a different skill set to identify them in a clinical setting?  I'm not saying MCQs can never be used in this situation; you just have to think about it first.

Second, if you are utilizing a tool from another source that you did not design for your particular curriculum, is the tool useful for your unique objectives?  Most of the time this is OK, and cross-fertilization of educational tools is necessary given the time and effort involved.  But you have to think about what you are actually doing.  In our example of the headache OSCE, let's say you found a colleague at another institution who has an OSCE set up to assess communication of the differential diagnosis and evaluation plan to a person with migraine who is worried they have a brain tumor.  You then apply it to your clerkship, but you are more interested in the scenario above about choice of therapy.  Will the tool still work when you tweak it?  It may or may not; you just need to be careful.

Hopefully you've survived to the end of this post, learned something about assessment in medical education, and found the EBM-esque approach to assessment evaluation useful.  My concern is that, in general, not enough time is spent considering these questions, and more time is spent on developing content than on assessment.  I'm guilty of this as well, but I'm trying to get better.  Thanks for reading, and feel free to post comments/thoughts below.

Monday, March 26, 2012

Life - Do med students have one? Work-life balance across generations

I gave a journal club last week discussing some general ideas about the generational differences between the three main groups trying to work together in medical education: Boomers, Gen X, and Gen Y (or whatever your preferred term for that generation is).  As I was preparing for this talk, one theme that kept popping up was work-life balance.  The common wisdom is that Boomers value hard work and are willing to sacrifice family life for career advancement, while Gen X and Gen Y tend to place less of their identity in work and see much more value in maintaining balance between career and home.  The purpose of this blog post is not to decide whether this is indeed true.

What I'd like to spend a moment discussing is how this generational difference is creating conflict in the halls of medical schools.  Medical students are primarily Gen Y (although there are some Gen X in the mix).  Faculty who now populate Dean's-office-level positions are primarily Boomers, and course/clerkship directors are Boomers with some Gen X filling the junior ranks.  The Boomers remember a medical school life ruled by the Greatest Generation (who placed even more value on work because of their experiences in the Great Depression).  The biggest place I've seen this conflict play out is in requests for time off or for changing a test date.  The Boomers were given very little room to change their schedules.  I've talked to many of them, and the stories were essentially that if you wanted a day off during the clinical years for anything other than being near death, there would be severe consequences (like repeating the entire clerkship).  Things were a little better for me, but not a lot.  I remember friends in medical school who had a lot of trouble getting time off to attend weddings or family reunions.  There was minor grumbling, but we all decided it was a transient time, that this was somehow preparing us for the trials of residency, and that things would eventually get better.  We also had the usual weeks of vacation around the holidays and spring break, everyone had some lighter rotations, and the fourth year comes with a much more flexible schedule.

Then along comes Gen Y.  They are much more vocal about their need for time off, and much more vocal about providing feedback on things they don't agree with.  And they are now complaining primarily to the Boomers, who primarily don't want to hear it.  I'm not so sure they should be dismissed.  Maybe it's my Gen X roots showing, or maybe I'm still close enough to being a student that I remember the bind a completely inflexible schedule puts you in.  So, I'm wondering if school policies for personal days off should be revisited.  I suspect most of these policies were set in place for a very different world and haven't changed much in 20-30 years.  With the advent of technology, it is now possible to make up some assignments that could not have been made up in the past.  There's also a different cultural norm emerging (or maybe I just think there should be): missing a wedding because you are assigned to spend a day in clinic is not an acceptable trade-off.

As a disclaimer - I've been told by several fourth-year students that, as a clerkship director, I run a 'tight ship'.  To my mind, I'm just doing what the school's time-off policy tells me to do.  Our school policy is that students have 2 days off per year, which can be used to attend a professional meeting or if they are ill or have a family emergency.  All other time off is at the discretion of the clerkship director and must be made up.  I'm not sure I have a perfect answer for how to change the current policy, but maybe by working with appropriate representatives from Gen Y, Gen X, and the Boomers we can figure something out together.