
Friday, January 8, 2016

Is there a 'best' curriculum for medical school?

I've been in many meetings lately where our medical school is grappling with the question of whether our recent/ongoing curriculum transformation has accomplished what we set out to accomplish. Many people are asking if our new way of doing things is 'better' than the old way of doing things. What most people are expecting to see is a single metric (or at most two or three) to say that we are better. These are primarily physicians who can clearly state the literature on risk reduction and NNT for starting aspirin in a patient with stroke compared with clopidogrel plus aspirin. We all like single data points, as they are easy to put into practice.

However, our true measure of whether the curriculum is working well is actually pretty darned hard to capture. We want to know whether we have equipped physicians who are better prepared than the ones who went through the prior curriculum. The potential confounders are enormous, and the outcome data take years to develop. And those are just the problems that are easy to see. There is really no one 'best' number to tell us whether we are accomplishing what we should (yes, in my opinion, this includes the USMLE).

I think there is another inherent issue. Maybe there is no 'best' curriculum. Every system as complex as a medical curriculum will have strengths and weaknesses, shiny parts and rusty closets no one wants to clean, and areas of emphasis alongside areas that are not covered as well. Any endeavor has limits and boundaries. In education, the limit is usually time and effort, and neither students' nor teachers' supply of either is eternal. You have to make choices about what to include and what to exclude. Choices have to be made about how material is delivered. As a result, some curricula perform better in one area (say, medical knowledge) and others perform better in another area (say, reflective thinking). But to get better at one, you need to spend time that might have been spent on another. Hence, without unlimited time, there may be no perfect system.

There may be no 'best'.

And actually that's OK. You just need to decide what the main goals for your curriculum are. It doesn't matter if you are creating a new medical school curriculum or a curriculum for a single learner who is in your clinic with you for one week. You pick what you want to accomplish, and that will help you determine if you have the 'best' curriculum for you and your learners. And then go measure whatever you can to see if it is working. We may not have a single best way to measure whether our system is working, but if we know what we'd like to measure, it's far easier to get meaningful data.

Friday, September 11, 2015

In which I discuss educational philosophy heresy

Educational reform always provokes controversy and arguments. I honestly started engaging in healthy discussions (read: arguments) about educational reform when I was in my first education theory class in college. Most education reform arguments come down to whether the current paradigm is really broken, and whether the new paradigm is enough better to be worth the trouble of replacing the old one. Most of these arguments have fuzzy data at best to show for either side.

In my experience, these arguments tend to ride heavily on the past educational experiences of those involved. The problem with this approach is that it assumes the two (or three or four) argue-ers are all equivalent learners. It assumes that all learners will thrive in the environment in which the argue-er thrived. It assumes that all learners have a mental intake system which acquires and stores information in a similar manner. It assumes that all learners are motivated to learn by the same motivations that drove the argue-er. No wonder these arguments are almost never resolved with one party spontaneously saying, "Wow, you're right, and I was wrong all along. Thank you!"

Why is this? I think it is because we often make the assumption that all learners are equivalent in every aspect of acquiring, storing, retrieving, and applying knowledge. That assumption makes it easier to generate what little data we have on the effectiveness of educational models. A p-value is not so useful if the entire cohort you are studying is an ill-defined mass of goo. Unfortunately, that is exactly what we have as our substrate: an ill-defined mass of goo.

What do I mean by this? Take neuroscience education in medical school as an example. First, there are obvious background differences: people with advanced degrees in neuroscience mingle in the class with people who have no idea what the frontal lobe is all about. Second, the way people learn is different. When I was a resident, I liked to see a few patients, and then take time right then to look up a bunch of stuff about those patients. I had friends who would rather be slammed with as many patients in a shift as they could find, as they felt they learned better in the doing. Some people like learning large concepts and then going into details, and others like learning the details first and piecing them together later into a larger whole. Some people like to focus on one system or organ at a time, and some people like to have multiple courses running concurrently, so there is more time to absorb the information from each course. Some people love concept maps. Personally, I've never been able to get my head around why they are so great. I'm more of an outline guy. With these differences, we are trying to measure and argue over a substrate that is an ill-defined mass of goo.

I'm not saying there are no basic learning theory principles that can be universal. I am saying the application of those basic learning theories is sometimes more wibbly wobbly than the ed-heads like to let on in their arguments. It could be that this multiple choice test on whether education reform is needed is not really a multiple choice test. It's an essay test. And there are multiple right answers, as long as you can justify your answer. And everybody hated those tests...


Tuesday, January 29, 2013

Medical students want syllabus 3.0

I've been getting feedback from students over the last two years on what they would like included in the course materials distributed with our second-year neuroscience course.  The message has been very clear over the two years I've been teaching the course: expectations for what is included in the course materials, and for which readings are required, have changed over the last few years.

Let me take you back to the mid-nineties when I took my medical school course work (and my college experience in the early 90's).  Let's call this syllabus 2.0.  I received copies of all the slides presented (as long as they were in PowerPoint; we still had some lecturers who used slide carousels, and they had minimal notes printed - call that syllabus 1.0).  In class we took notes.  If you missed or ditched class, you could look back over what the lecturer talked about by subscribing to a note-taking service run by the students.  Readings were from the required textbooks.  Test questions covered anything in the printed syllabus, anything said verbally in lecture (even seemingly off-hand remarks), and anything covered in the textbook.

In this model, the material is presented, but there is an intentional (or unintentional) fire-hose level of information delivered.  It was up to the student to wrestle with this large volume of information, distill it down to essential concepts, and organize it in their brain well enough to pass the test.  It was expected that there would be some test questions which were not covered explicitly in class, and the purpose of those questions was to differentiate the top of a group of very highly motivated students.  The upside of this model is that it forces the student to analyze large volumes of information, some of which is not a core concept, and independently synthesize the important concepts.  This skill is not outside the required skill set for being a doctor in a clinic.  The downside is that there is room for the individual student to miss the boat on important concepts which aren't explicitly identified as core.  Also, this model can increase student anxiety during test preparation, as you are not sure until you take an exam whether you are missing the boat.

Let's move to 2013.  Our course syllabus was inherited from the above paradigm, and we have been modifying multiple lectures.  Hence our lectures don't have well-developed outlines or notes by the faculty to accompany the PowerPoint presentations.  Students have on several occasions pointed me towards courses at our institution and others where the course materials include extensive annotation by the faculty in addition to the slides.  Students over the last two years have said things like (paraphrased):

"What I want is to have everything I need to know about this lecture written down so I can go learn it."
"I don't want to have links to a whole bunch of useful information about a topic, I want a single link to a very succinct, applicable resource."
"Even if the syllabus for a class is 450 pages, if it is all I need to look at, that's what I'd prefer."

Another way to state this is that the students would like a curated information repository which is finite, organized, and focused on the learning objectives.  This sounds to me like it mirrors discussions about moving from web 2.0 to web 3.0.  In other words, there is a desire to block out noise and focus on what is important to the individual.  Thus, all information in the course materials is honed to efficiently deliver what is necessary to perform well in the course.

The upside of this is that it is very clear what the student is expected to learn.  From a pedagogy standpoint, this is an ideal situation for an educational model based on measuring competence: it is clear what measure to obtain, and all learners can potentially reach the bar.  The downside is that this perhaps does not help in the long run, as this is not how real-life medical decision making occurs.  There is no finite set of combinations of signs and symptoms, so often there is a need to process a cacophony of noise and distill out the important ideas.  There is always more to read and more detail, and part of being a doctor is gaining skill in deciding how to be your own curator.

Which is better?  My view is we should aim in the grey of the middle.  I think it should be clear what is necessary to pass the exam, and a student who masters all material covered in class, small groups, and presentations should be able to pass.  I agree that if there is a picture slide, it is reasonable for the lecturer to include some text to create context for the slide.  I do think it is also reasonable to have some way of assessing whether a student can surpass these minimal competency levels.  On an exam, that means asking questions which may not have come specifically from the readings, or which introduce a novel topic and apply the concepts learned in class in a new way.  What are your thoughts on how much detail course materials should contain?

Friday, June 15, 2012

Use of Tablets in Medical Education - What we can learn from Manchester

I've been watching a series of presentations by students at Manchester Medical School where they discuss their use of the iPad in their studies.  This is the end result of a project where students were issued an iPad in a pilot project at the beginning of this year.  I don't think any of the presentations in and of itself is ground-breaking.  What I think is very innovative about this program is that it has the students take the technology and apply it to problems they identify.  This is not a top-down approach where an instructor lists apps the students can use, and then evaluates whether the students do what they are told.

This is really a problem-solving exercise.  It's learner-centered learning at its very core.  Give a student a tool which has over 30,000 apps available plus web capability, and the students need to go and figure out how best to use it.  First, they all identified problems they had in the past: forgetting important papers at home, having notes highlighted beyond recognition, and the inability to physically lug all those textbooks.  They sifted through the app landscape and came up with some remarkable ways to use the technology.  This is crowd-sourcing at its best.  This is truly the future of technology in medical education.  It's not about downloading the coolest toy out there and jamming it into a curriculum to make it do something; it's about finding the right tool to solve the educational problem in front of you.  The students found ways to get around problems with creating and filing notes, filing reading to do later, communicating log data to their supervisors, and creating study aids for themselves and their classmates.  If all the students at Manchester did this project next year, think of the innovations they could produce.  Now think about all the students in the UK, or across nations.  Again, crowd-sourcing at its best.

As Prof Freemont explains in his introduction to the program, this program is not about one particular format.  They chose the iPad for reasons he outlines, but you could likely accomplish similar feats with an army of portable laptops, Android tablets, iPads, or whatever the next new thing will be.  I do appreciate the spirit of their experiment.  And I'd be happy to come personally see what's going on in Manchester.  Maybe sometime next year during football season.

Wednesday, June 6, 2012

EBM evaluation tools applied to medical student assessment tools

I remember back to the days when I was a fresh medical student taking those first classes in biochem, anatomy, and cell biology.  I learned a ton, and honestly I draw on this knowledge base daily when I'm taking care of patients.  I also remember that the assessment methods used during my first year of medical school were not the greatest (in the opinion of a person who was teaching high school physics and chemistry 3 months before entering med school).  The number of assessments used in med schools has risen over the 15 years since I was an M1.  However, with a rise in the number of choices comes a responsibility to make the right choice.  Another way to look at this from a pedagogical standpoint: are the assessments really measuring the outcomes you think they are measuring?  To attempt to help the medical educator with this dilemma, I came up with the idea that you can apply a well-known paradigm used to evaluate evidence-based medicine (EBM) to evaluate a student assessment.  The EBM evaluation method I'm most familiar with is outlined by Straus and colleagues in their book, Evidence-Based Medicine: How to Practice and Teach EBM, copyright 2005.

Here's my proposed way to assess assessment:

1)  Is the assessment tool valid?  By this we mean that our measurement tool must be reliable and accurate in measuring what we want it to measure.  The standardized (high-stakes) examinations like the MCAT, USMLE, and board certification examinations are expensive not because these companies are rolling in cash, but because it takes people LOTS of time to validate a test.  Hence, most home-grown tools are not completely validated (although some have been).  To be validated, an assessment has to give similar results each time the same learner takes the test.  It also has to accurately categorize the level of proficiency of the learner at the task you are measuring.

For example, let's say I have an OSCE to assess whether a learner can counsel a young woman of child-bearing age on her options for migraine prophylactic medications.  For my OSCE to be valid, I need to look for reliability and accuracy.  Does the OSCE predictably identify learners who do not understand that valproate has teratogenic potential and don't discuss this with the standardized patient?  You also want to know if it is accurate; in other words, does your scoring method give similar results when multiple faculty who have been trained on how to use the tool score the same student interaction?  To truly answer these questions on an assessment, it takes multiple data points for both raters and learners - hence why it takes time and money, and also why most assessments are not truly validated.
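To make the rater side of this concrete, here is a rough sketch in Python of how you might quantify agreement between two trained faculty scoring the same set of encounters, using Cohen's kappa.  The scores below are entirely made up for illustration (they are not from any actual OSCE); a real validation would involve many more encounters, raters, and checklist items.

from collections import Counter

def cohen_kappa(rater_a, rater_b):
    # Chance-corrected agreement between two raters scoring the same encounters
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Hypothetical checklist scores (0 = item missed, 1 = partial, 2 = done well)
# from two trained faculty watching the same 12 student encounters
faculty_1 = [2, 2, 1, 0, 2, 1, 2, 0, 1, 2, 2, 1]
faculty_2 = [2, 1, 1, 0, 2, 1, 2, 0, 2, 2, 2, 1]
print(f"Cohen's kappa: {cohen_kappa(faculty_1, faculty_2):.2f}")

A kappa near 1 suggests the raters agree well beyond chance; a kappa near 0 suggests the checklist (or the rater training) needs work before the scores mean much.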

The best way to validate is to measure the assessment against another 'gold standard' assessment.  How well does your assessment perform compared with known validated scales?  Unfortunately, there aren't as many 'gold standard' assessments outside of the clinical knowledge domain in medical education (although it is getting better).
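If you do have access to a validated scale, the comparison itself can be as simple as correlating the two sets of scores for the same learners.  Here is a minimal sketch, again with invented numbers, just to show the shape of the exercise:

from math import sqrt

def pearson_r(x, y):
    # Simple Pearson correlation between two paired score lists
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

# Hypothetical paired scores: a home-grown OSCE vs. an established validated scale
home_grown = [72, 85, 64, 90, 78, 81, 69, 88]
gold_standard = [70, 88, 60, 93, 75, 84, 65, 90]
print(f"Correlation with gold standard: {pearson_r(home_grown, gold_standard):.2f}")

A high correlation doesn't prove validity on its own, but a low one is a pretty good hint that your tool isn't measuring what the gold standard measures.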

2)  Is the valid assessment tool important?  Here we need to ask whether the difference seen on the assessment is actually a real difference.  How big is the gap between those who passed without trouble, those who just barely passed, and those who failed to meet the expected mark?  Medical students are all very bright, and sometimes the difference between the very top and the middle is not that great a margin (even if it looks like it on the measures we are using).  I think the place where we sometimes trip up here is in assuming that Likert scale numbers have a linear relationship.  Is a step-wise difference from 3 to 4 to 5 on the scale set up on the clinical evaluations a reasonable assumption, and is the difference between a 4 and a 5 really important?  It might very well be true, but it will be different for every scale that we set up.  I've never been a big fan of using Likert rating scores to directly come up with a percentage point score unless you can prove to me through your distribution numbers that it is working.
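To show what I mean by proving it through your distribution numbers, here is a quick sketch of the kind of sanity check I'd want to see before anyone converts Likert ratings into a percentage.  The ratings below are invented; in practice you'd pull a clerkship's actual evaluation data.

from collections import Counter

# Hypothetical clinical evaluation ratings on a 1-5 Likert scale
ratings = [3, 4, 4, 5, 4, 5, 5, 4, 3, 5, 4, 4, 5, 5, 4, 3, 4, 5, 5, 4]
counts = Counter(ratings)
n = len(ratings)

print("Distribution of ratings:")
for score in range(1, 6):
    pct = 100 * counts.get(score, 0) / n
    print(f"  {score}: {counts.get(score, 0):2d} ratings ({pct:4.1f}%)")

# A naive linear conversion treats the 3-to-4 gap and the 4-to-5 gap as equal.
# If nearly everyone sits at 4 or 5, one step swings the 'percentage' enormously.
mean_rating = sum(ratings) / n
print(f"Mean rating: {mean_rating:.2f} -> naive percentage: {100 * mean_rating / 5:.0f}%")

If the whole class clusters at 4 and 5, that 'percentage' is really just measuring which side of a single step each student landed on, which is exactly the linearity assumption I'm skeptical of.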

3)  Is this valid, important tool able to be applied to my learners?  I think this involves several steps.  First, are you actually measuring what you'd like to measure?  A valid, reliable tool for measuring knowledge (a typical MCQ test), unless it is very artfully crafted, will not likely assess clinical reasoning or problem-solving skills.  So, if your objective is to teach the learner how to identify 'red flags' in a headache patient's history, is that validated MCQ the best assessment tool to use?  Is it OK that the learner can pick out 'red flags' from a list of distractors, or is it a different skill set to identify them in a clinical setting?  I'm not saying MCQs can never be used in this situation; you just have to think about it first.

Second, if you are utilizing a tool from another source which you did not design for your particular curriculum, is the tool useful for your unique objectives?  Most of the time this is OK, and cross-fertilization of educational tools is necessary due to the time and effort bit.  But you have to think about what you are actually doing.  In our example of the headache OSCE, let's say you found a colleague at another institution who has an OSCE set up to assess communication of the differential diagnosis and evaluation to a person with migraine who is worried they have a brain tumor.  You then apply that to your clerkship, but you are more interested in the above scenario about choice of therapy.  Will the tool still work when you tweak it?  It may or may not, and you just need to be careful.

Hopefully you've survived to read through to the end of this post.  Hopefully you learned something about assessment in medical education, and you found the EBM-esque approach to assessment evaluation useful.  My concern is that, in general, not enough time is spent considering these questions, and more time is spent on developing the content than on assessment.  I'm guilty of this as well, but I'm trying to get better.  Thanks for reading, and feel free to post comments/thoughts below.

Thursday, December 8, 2011

Paper vs Pixel - Use of online or traditional books in medical education

I was at a team meeting yesterday to orient faculty to the neuroscience course I'm co-directing next year.  We were going through the section where I was relaying our required texts for the course.  One of the faculty (who happens to be a physiologist) asked which physiology text we were using for the course.  On the list we have a nice neuroanatomy text, a brain atlas, a psychiatry text, a pathology text, and Harrison's.  He felt the neurophysiology discussions in our clinically minded neuroanatomy text were lacking, and the other faculty in the room agreed with him.

So this left me with a dilemma.  Do I switch from our current neuroanatomy text, with its definite clinical foundation, to a more comprehensive text with neurophysiology covered more completely?  Such a text was used in the past and was felt to be too dense for the needs of medical students.  Do we have them buy a text that focuses only on neurophysiology in addition to the neuroanatomy text?  I think this would likely just lead to them not buying this text, as we'd only need a few chapters, and I'm not sure it would be very useful for them in the future, honestly.  It would likely be more dense than the book we already rejected.  Do we just give them the lecture notes to study from?  Or do we search for/develop online references for them to use?

This last point led me to the rebuttal I had to requiring the students to buy another textbook.  My impression from talking to fourth-year students is that the majority of them do not buy textbooks any more.  I can actually see good reasons for that.  First, textbooks are and always have been expensive.  Textbooks are also notoriously slow to adapt to new information (new editions come out every few years and take a year or so to develop, so they are at best a year out of date, and at worst 2-3 years out of date, when they are read).  Compare that to most online resources, which are free (or available for free through institutional subscriptions).  Online resources aren't guaranteed to be updated frequently, but at least the possibility is there.  Also, with the advent of more interactive pages, there is a chance for things to be updated through crowd-sourcing as new information develops.  Hence my conclusion that asking students to buy another text is foolish, as I'm not convinced they're all going to buy the first 3 books that are already on my list.