Friday, March 18, 2016

360 degrees of neurological amazingness

It was an ordinary day, and I was looking at my Facebook feed, and a friend had posted a video he made while snowboarding. Being on vacation, and a little bored, I clicked on it expecting to see an epic snowboarding move or an epic fall or something. What I saw was the guy's mitten and the trail ahead. Not that cool, I thought, until I touched the screen and panned around. He had captured a 360-degree video that the viewer can pan around in. I could look at the sky, the trees' reflection in his goggles, his board, or the trail ahead. It was very cool.

Why am I talking about this on a medical education blog? Because I looked, and these cameras are now affordable. The one he bought is a little over $300. From a quick web search, there are now a bunch of camera options for shooting in 360 degrees. Couple that with virtual reality headsets, which are also dramatically decreasing in price, and there are a lot of opportunities to develop medical education materials in ways never possible before.

Here are a couple of examples: Mythbusters diving with sharks, and USA Today flying with the Blue Angels. You may have already seen videos like this where you can pan and zoom, or, if you have VR goggles, you can just look around.

Imagine showing patient videos to students where the student actually feels like they are in the room with you as you examine a patient. In movement disorders, we frequently share videos of unusual tremors and movements. Imagine being able to look at a life-size patient the way you would in an actual patient visit. Imagine coupling this with augmented reality programs that allow learners to interact with the patients, and we are not that far from simulation scenarios that truly do simulate real life.

Affordable 360 cameras are now on the market. Couple them with increasingly affordable VR headsets, and the possibilities for VR or augmented reality to teach medical students have opened way up. Now it's up to you all to go out there, buy some toys, and start making cool stuff.

Friday, March 4, 2016

Using a Lego to explain the difference between competencies and EPA's

People in medical education often have trouble figuring out the difference between competencies and EPA's (entrustable professional activities). There is a pretty big philosophical difference: the competencies are definitions of observable behaviors, and the EPA's are about observing a learner do a specific work task. Here is a recent article from Carraccio and others that tries to tie the concepts together.

I was in a meeting yesterday where we were discussing the differences between EPA's and competencies. The group was trying to determine whether you are obligated to assess one first. We have 43 competencies in our new curriculum and 13 EPA's. The question that came up: if EPA 2 is going to be assessed in a student, and it is identified as requiring multiple competencies, do I need to measure those competencies first before I can assess the EPA? The reverse question: if I am found entrustable at a level acceptable for graduation on EPA 2, does that automatically make me entrustable on all the related competencies?

While this discussion was going on, my mind wandered to Legos. I've been building Lego sets for years. My son and daughters now have large tubs of Legos in our house. It's really cool how you can make all sorts of wonderful things with the simple building blocks that are Legos. You can think of competencies as the individual building blocks. These are the behaviors necessary to build cool stuff. If you don't have the basic building blocks, you can't really make many cool sets (EPA's). The blocks come in lots of different shapes. Think of each shape as a competency. There are long flat short pieces and long flat long pieces. There are two-by-four bricks and two-by-eight bricks. There are all sorts of bricks. The bricks also come in different colors, which can represent that a competency must be demonstrated in many different environments before saying for sure that it has been achieved. In other words, you may be good at applying medical knowledge in a pediatrics outpatient clinic, but not in an inpatient ICU with a critically ill patient. So, to check out on any given competency, the student may need a green two-by-four brick (applying medical knowledge in peds clinic) and a red two-by-four brick (applying medical knowledge in an ICU).

EPA's, then, are like the ability to build the sets. An EPA would be like taking all those Lego bricks and putting them together to make a car or a boat or a house. The act of making the car or boat or house means that you not only have the bricks needed to make the set, you can use them appropriately. So entering an order in the ICU would be like making a house. The learner needs some two-by-four red bricks to make the house, but will also need roof pieces (say, an informatics competency) as well as other pieces. And they all need to be the right color to make a house in the ICU setting. Having a red two-by-four brick does not mean a student can build a house (they need specific skills to put it all together, and other pieces), and building a house in the ICU does not mean you can build a house in the peds clinic (you need green pieces for that).
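If you think in code, here's one way to see the analogy. Below is a minimal sketch in Python; the competency and context names are hypothetical, made up for illustration. It models a brick as a competency-context pair and an EPA as a set of required bricks, and it checks only whether the bricks are present. As in the analogy, having the bricks is necessary but not sufficient; the assembly itself still has to be observed.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Brick:
        """One competency demonstrated in one context (shape + color)."""
        competency: str  # the 'shape', e.g. applying medical knowledge
        context: str     # the 'color', e.g. peds clinic or ICU

    @dataclass
    class EPA:
        """A work task requiring specific bricks, assembled together."""
        name: str
        required_bricks: frozenset  # of Brick

    def has_bricks_for(student_bucket: set, epa: EPA) -> bool:
        """Necessary-but-not-sufficient check: are all required bricks present?"""
        return epa.required_bricks <= student_bucket

    # Hypothetical example: entering orders in the ICU.
    icu_orders = EPA(
        name="Enter and discuss orders (ICU)",
        required_bricks=frozenset({
            Brick("applies medical knowledge", "ICU"),
            Brick("uses informatics tools", "ICU"),
        }),
    )

    bucket = {Brick("applies medical knowledge", "peds clinic")}
    print(has_bricks_for(bucket, icu_orders))  # False: right shape, wrong color
    bucket.add(Brick("applies medical knowledge", "ICU"))
    bucket.add(Brick("uses informatics tools", "ICU"))
    print(has_bricks_for(bucket, icu_orders))  # True: bricks present; now watch them build

Note that the check runs one way only: passing it says nothing about whether the student can assemble a different set in a different color, which is exactly why the next paragraph argues for assessing both in parallel.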

So, in other words, the EPA's and competencies are dependent on each other. But both need to be assessed in parallel, to assure that students' Lego buckets are full of lots of cool and useful pieces, but also to assure that they can actually use the cool pieces to make stuff. Let me know if this helps you understand how EPA's and competencies work together, and what you think of this analogy, in the comments below.

Friday, January 8, 2016

Is there a 'best' curriculum for medical school?

I've been in many meetings lately where our medical school is grappling with the question of whether our recent/ongoing curriculum transformation has accomplished what we set out to accomplish. Many people are asking if our new way of doing things is 'better' than the old way of doing things. What most people are expecting to see is a single metric (or at most two to three) to say that we are better. These are primarily physicians who can clearly state the literature on risk reduction and NNT if I start aspirin in a patient with stroke compared with clopidogrel and aspirin. We all like single data points, as they are easy to put into practice.

However, our true measure of whether the curriculum is working well is actually pretty darned hard to measure. We want to know if we have equipped physicians who are better prepared than the ones who went through the prior curriculum. The potential confounders are enormous, and the outcome data take years to develop. And those are just the easy problems to see. There is really no one 'best' number to see if we are accomplishing what we should (yes, in my opinion, this includes the USMLE).

I think there is another inherent issue. Maybe there is no 'best' curriculum. Every system as complex as a medical curriculum will have strengths and weaknesses, shiny parts and rusty closets no one wants to clean, and areas of emphasis and areas that are not covered as well. Any endeavor has limits and boundaries. In education, the limits are usually time and effort, and neither students' nor teachers' supply of either is eternal. You have to make choices on what to include and what to exclude. Choices have to be made on how material is delivered. As a result, some curricula perform better in one area (say, medical knowledge) and others perform better in another (say, reflective thinking). But to get better at one, you need to spend time that might be spent on another. Hence, without unlimited time, there may be no perfect system.

There may be no 'best'.

And actually, that's OK. You just need to decide what the main goals for your curriculum are. It doesn't matter if you are creating a new medical school curriculum or a curriculum for a single learner in your clinic with you for one week. You pick what you want to accomplish, and that will help you determine if you have the 'best' curriculum for you and your learners. And then go measure whatever you can to see if it is working. We may not have a single best way to measure whether our system is working, but if we know what we'd like to measure, it's far easier to get meaningful data.

Friday, September 11, 2015

In which I discuss educational philosophy heresy

Educational reform always provokes controversy and arguments. I honestly started engaging in healthy discussions (read: arguments) about educational reform in my first education theory class in college. Most education reform arguments come down to whether the current paradigm is really broken, and whether the new paradigm is enough of an improvement to be worth the trouble of replacing the old one. Most of these arguments have fuzzy data at best to show for either side.

In my experience, these arguments tend to ride heavily on the past educational experiences of those involved. The problem with this approach is that it assumes the two (or three or four) arguers are all equivalent learners. It assumes that all learners will thrive in the environment in which the arguer thrived. It assumes that all learners have a mental intake system which acquires and stores information in a similar manner. It assumes that all learners are motivated to learn by the same motivations that drove the arguer. No wonder these arguments are typically never resolved with one party spontaneously saying, "Wow, you're right, and I was wrong all along. Thank you!"

Why is this? I think it is because we often assume that all learners are equivalent in every aspect of acquiring, storing, retrieving, and applying knowledge. That assumption makes it easier to generate what little data we have on the effectiveness of educational models. A p-value is not so useful if the entire cohort you are studying is an ill-defined mass of goo. Unfortunately, that is exactly what we have as our substrate: an ill-defined mass of goo.

What do I mean by this? Take neuroscience education in medical school as an example. First, there are obvious background differences - people with advanced degrees in neuroscience mingle in the class with those who have no idea what the frontal lobe is all about. Second, the way people learn is different. When I was a resident, I liked to see a few patients, and then take time right then to look up a bunch of stuff about those patients. I had friends who would rather be slammed with as many patients in a shift as they could find, as they felt they learned better in the doing. Some people like learning large concepts, and then going into details, and others like learning the details first, and then piecing them together later into a larger whole. Some people like to focus on one system or organ at a time, and some people like to have multiple courses concurrently running, so there is more time to absorb the information from each course. Some people love concept maps. Personally, I've never been able to get my head around why they are so great. I'm more of an outline guy. With these differences, we are trying to measure and argue over substrate that is an ill-defined mass of goo.

I'm not saying there are no basic learning theory principles which can be universal. I am saying the application of those basic learning theories is sometimes more wibbly wobbly than the ed-heads like to let on in their arguments. It could be that this multiple-choice test on whether education reform is needed is not really a multiple-choice test. It's an essay test. And there are multiple right answers, as long as you can justify your answer. And everybody hated those tests...


Friday, July 31, 2015

Is there an upside to the noise of seemingly irrelevant content in medical education?

As I have worked on planning curriculum in medical school, either as a course/clerkship director or on various school-level committees, a common question keeps coming up: how much is too much information? Or, conversely, how little do you need to know about basic science to be able to competently practice medicine?

The age of the Google search, with tools like Epocrates and PubMed capable of answering a seemingly endless stream of factoids instantly, makes this question more difficult. How much do I need to know, versus how much do I just need to know how to look up?

I don't think there is a perfect answer to this question, but let's use GI histology as an example. As a neurologist, if you asked me how much I have used my medical school knowledge of GI histology in the past six months, I would probably laugh. Most neurologists would probably laugh, because what comes to mind immediately is the day in the med school microscopy lab where we looked at slides of intestine and identified the cells of the villi. I don't do that anymore - like, ever.

However, in the last six months I have taken care of Parkinson's patients with gut motility problems and constipation, and I've also taken care of people for whom we were considering gluten sensitivity in the differential diagnosis. We also know that carbidopa/levodopa competes with protein for absorption in the small intestine. How much of my ability to understand these basic problems, which occur daily in my clinic, is founded in part on my original knowledge of GI histology? What I think is critical when considering what level of detail of GI histology is important to physician training is to consider what implicit knowledge allows me to solve problems. This means looking beyond the typical response to any given topic, where a practicing physician says, "I never use that." (Biochemistry, anyone?) It means spending time unpacking the implicit framework knowledge on which you have built much more complex concepts. On the flip side, there are some things I learned in med school which I really don't ever seem to use now, as much as I rack my brain to figure out if I do.

I'm not sure of the best way to puzzle this question out. I'm a little worried about running the grand experiment of simply no longer teaching med students all the tiny details we have taught in the past, without first pausing to understand the repercussions. There is not likely to be a firm line in the sand where a given topic becomes relevant or irrelevant; it will more likely look like a large sandy smudge. However, every teacher of med students has to draw their line somewhere, and it would be good to have some alignment within a med school system.

Friday, June 26, 2015

How might a pure competency-based curriculum change residency interview season?

OHSU is one of several schools that recently received an AMA-funded grant to push medical educational innovation.  Our new curriculum, YourMD (yeah it has a cool marketable name), is in many ways a test lab for this grant (to be clear, most of what I'm going to discuss here is beyond the scope of the current version being developed for the YourMD curriculum, and I'm outlining my personal view of what the model may look like in the future).  One of the primary themes in OHSU's work for this grant is to create a workable competency-based (not time-based) model of medical education.

Multnomah County Hospital residents and interns, circa 1925  
As you can imagine, there have been many questions about the logistical problems with such a system.  One of the issues raised at our institution, as this concept has been discussed at various faculty meetings, is the perceived trouble students in such a system will have in finding a residency program.  After all, the student will have a transcript which looks remarkably different from most current school transcripts.  It will have a bunch of competencies and EPA's.  It may not have any mention of honors.  How is a residency director to choose the best candidate for their program?

I've thought about this a bit, and have a few ideas.  First, if the school is truly competency-based, just the fact that the student has been able to graduate should indicate that:

a) The student understands and applies the knowledge necessary to start as an intern,
b) The student clearly demonstrates the skills necessary to start as an intern, and
c) The student clearly demonstrates the professionalism necessary to start as an intern.
 
To my mind (assuming the system will work as advertised), this is revolutionary.  This means you don't have to guess as a residency director what you are getting.  You don't have to read between the lines for the secret codes hidden in the letters of recommendation.  This person is ready for residency.  End of line.

So, then what do you look for now?  Now, as a program director, you can begin to look more at what other experiences and skills this particular individual has that would help them thrive at your particular institution. Instead of trying to confirm that the person had 'honors' in internal medicine, the medicine program director can sort applicants in all manner of ways. They could decide their program wants people who have above-average skill in quality improvement, or they could decide they want residents who are particularly interested in medical education. They can rank based on how well applicants operate in a team environment. They can look for students who have had particular experiences that would benefit them in their environment - say, a lot of rural practice experience, or many rotations in an underserved inner city.  Each program director can choose what they'd like to highlight, and I don't see a problem with letting students know what they are looking for in applicants. This makes the interview sessions even less about figuring out whether this person can operate on the ward successfully, and more about whether this person fits well with our system and our culture.
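To make that concrete, here is a minimal sketch in Python; every applicant field and weight here is hypothetical, invented purely for illustration. Once entrustability is a given for all applicants, ranking reduces to each program scoring the attributes it happens to value.

    # Hypothetical applicant records; entrustability is already assured for all.
    applicants = [
        {"name": "A", "qi_skill": 4, "med_ed_interest": 2, "rural_rotations": 3},
        {"name": "B", "qi_skill": 2, "med_ed_interest": 5, "rural_rotations": 0},
        {"name": "C", "qi_skill": 5, "med_ed_interest": 3, "rural_rotations": 1},
    ]

    # Each program chooses its own weights; these are illustrative only.
    program_weights = {"qi_skill": 2.0, "med_ed_interest": 0.5, "rural_rotations": 1.0}

    def program_fit(applicant: dict) -> float:
        """Weighted sum over the attributes this program cares about."""
        return sum(w * applicant.get(attr, 0) for attr, w in program_weights.items())

    # Rank applicants by fit to this particular program's priorities.
    for a in sorted(applicants, key=program_fit, reverse=True):
        print(a["name"], program_fit(a))

A different program would simply swap in different weights, which is the whole point: the sorting key becomes fit with the program's culture and goals rather than a chase for 'honors'.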

If competency-based education works, this may be something residency program directors will need to think about. We're all well on the way to competency-based education. So, program directors, prepare yourselves. I think it'll make interview season more fun actually.

Friday, March 13, 2015

Robert's rules and the digital age (or my one day as chair of the curriculum committee)



Oregon Medical School Admission and Advanced Standing Committee, 1950s
Sometimes things happen to you without much thought or planning.  In February, I was sitting in what I thought was an inconspicuous corner of the room during the monthly curriculum committee meeting.  Paul Gorman, the committee chair, was paged, and somehow found me even though I was behind him, and motioned for me to take over the meeting.  I've been chair of the clerkship directors subcommittee for over a year now, so chairing is not completely foreign to me.  However, the curriculum committee is bigger and sometimes contentious, so my pulse quickened just a tad.  My reign ended approximately 3 minutes / 2 comments later.  I thought I was done, and slunk back into my usual spot against the wall, only to be re-jolted as Paul said he would be gone for the next meeting, and Robert's rules required that he could not appoint an alternate chair; the committee must appoint one.  A voice said, "Well, he did a good job."  Moments later, I was voted curriculum committee chair for a day.
I was impressed that Paul knew there was a Robert's rule pertaining to the absence of the chair, and as the newly-appointed curriculum committee chair-of-the-day, I thought I should review Robert's rules myself.  The OHSU school of medicine website on committees led me to this document.  On the one hand, it's a very tedious set of rules.  On the other hand, it represents a time-tested means for a group to come to decisions on matters important to an organization.  However, I noticed a few areas where this document may need some updating.

First, it specifically mentions papers in a lot of places; here are three examples:

[three screenshots of the rule text]

This language should probably be cleaned up to say "document" at the least.  Also, what's the deal with not being able to write on the papers?  Should we extend this rule to PDF-markup apps like Notability?  It seems a titch silly to me to have this laid out, but perhaps it is for maintaining the integrity of the first draft.

Next, there is this bit about everybody needing to sign the document physically:

[screenshot of the signature rule]

Can we amend this to say we can 'sign' by a form of electronic signature?  In many cases now, a reply from a personal email account with a signature block is all that is required.

The last bit I found that needs updating pertains to remotely logging into a meeting (I found this in an FAQ about parliamentary procedure):

[screenshot of the FAQ answer]

I'm thinking that as time goes along, remote login to meetings via web or phone will become more the norm than the exception.  I haven't been to a curriculum committee meeting in over a year where someone wasn't logging in remotely.  I think this rule should be changed so that, by default, anyone logged in remotely in real time, by any means, is considered present and able to vote.  Being absent and voting only by email is probably along the lines of mail-in votes, and probably should be prohibited.  As the rules don't mention video or VoIP logins at all, I think this rule needs updating as well.

As I'm not a parliamentary expert, I'm not sure if these issues have already been addressed; I'm just going by what my school of medicine references as the rules we abide by.  It's overall a good system, it just needs a bit of a nudge into the twenty-first century.  Please leave your thoughts, or links to any updated parliamentary rules you are aware of, below.