Friday, March 18, 2016

360 degrees of neurological amazingness

It was an ordinary day, and I was looking at my Facebook feed, and a friend had posted a video he made while snowboarding. Being on vacation, and a little bored, I clicked on it expecting to see an epic snowboarding move or an epic fall or something. What I saw was the guy's mitten and the trail ahead. Not that cool, I thought, until I touched the screen and panned around. He had captured a 360 video that the viewer can pan around in. I could look at the sky, the trees' reflection in his goggles, his board, or the trail ahead. It was very cool.

Why am I talking about this on a medical education blog? Because these cameras are now affordable. The one he bought is a little over $300, and from a quick web search, there are now a bunch of camera options for shooting in 360. Couple that with virtual reality headsets, which are also dramatically dropping in price, and there are a lot of opportunities to develop medical educational materials in ways never possible before.

Here are a couple of examples: Mythbusters diving with sharks, and USA Today flying with the Blue Angels. You may have already seen videos like this where you can pan and zoom, or if you have VR goggles, you can just look around.

Imagine showing patient videos to students where the student actually feels like they are in the room with you as you examine a patient. In movement disorders, we frequently share videos of unusual tremors and movements. Imagine being able to look at a life-size patient the way you would in an actual patient visit. Imagine coupling this with augmented reality programs that allow learners to interact with the patients, and we are not that far from simulations that truly do simulate real life.

Affordable 360 cameras are now on the market. Couple them with more affordable VR headsets, and the possibilities for VR or augmented reality to teach medical students have opened way up. Now it's up to you all to go out there and buy some toys, and start making cool stuff.

Friday, March 4, 2016

Using Legos to explain the difference between competencies and EPA's

People in medical education often have trouble figuring out the difference between competencies and EPA's (entrustable professional activities). There is a pretty big philosophical difference: competencies are definitions of observable behaviors, while EPA's are about observing a learner do a specific work task. Here is a recent article from Carraccio and others that tries to tie the concepts together.

I was in a meeting yesterday where we were discussing the differences between EPA's and competencies. The group was trying to determine whether you are obligated to assess one first. We have 43 competencies in our new curriculum and 13 EPA's. The question that came up was: if EPA 2 is going to be assessed in a student, and it requires multiple competencies, do those competencies need to be measured first before the EPA can be assessed? The reverse question is: if a student is found entrustable at a level acceptable for graduation on EPA 2, does that automatically make them entrustable on all the related competencies?

While this discussion was going on, my mind wandered to Legos. I've been building Lego sets for years, and my son and daughters now have large tubs of Legos in our house. It's really cool how you can make all sorts of wonderful things with the simple building blocks that are Legos. You can think of competencies as the individual building blocks. These are the behaviors necessary to build cool stuff. If you don't have the basic building blocks, you can't really make many cool sets (EPA's). The blocks come in lots of different shapes; think of each shape as a competency. There are short flat pieces and long flat pieces. There are two by four bricks and two by eight bricks. There are all sorts of bricks. The bricks also come in different colors, which can represent the fact that a competency must be demonstrated in many different environments before you can say for sure it has been achieved. In other words, you may be good at applying medical knowledge in a pediatrics outpatient clinic, but not in an inpatient ICU with a critically ill patient. So, to check off any given competency, the student may need a green two by four brick (applying medical knowledge in the peds clinic) and a red two by four brick (applying medical knowledge in an ICU).

EPA's, then, are like the ability to build the sets. An EPA would be like taking all those Lego bricks and putting them together to make a car or a boat or a house. The act of making the car or boat or house means that you not only have the bricks needed for the set, you can use them appropriately. So entering an order in the ICU would be like building a house. The learner needs some red two by four bricks to make the house, but will also need roof pieces (say, an informatics competency) as well as other pieces, and they all need to be the right color for a house in the ICU setting. Having a red two by four brick does not mean a student can build a house (they need other pieces and the specific skills to put it all together), and building a house in the ICU does not mean you can build a house in the peds clinic (you need green pieces for that).
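For anyone who likes to see an analogy made concrete, here's a minimal sketch in Python of how the bricks and sets relate. The competency names, environments, and EPA definition below are made up for illustration, not taken from our actual curriculum:

```python
from dataclasses import dataclass, field

# A "brick": one competency demonstrated in one environment.
# The shape is the competency; the color is the setting.
Brick = tuple[str, str]  # (competency, environment)

@dataclass
class EPA:
    """A Lego 'set': a work task that needs specific bricks, assembled correctly."""
    name: str
    required_bricks: set[Brick]

@dataclass
class Student:
    """A student's Lego bucket: the bricks they have demonstrated so far."""
    bucket: set[Brick] = field(default_factory=set)

def missing_bricks(student: Student, epa: EPA) -> set[Brick]:
    """Which required competency/environment pairs hasn't the student shown yet?"""
    return epa.required_bricks - student.bucket

def can_be_entrusted(student: Student, epa: EPA, observed_building_it: bool) -> bool:
    """Having every brick is necessary but not sufficient: the learner must
    also be observed putting the set together (doing the actual work task)."""
    return not missing_bricks(student, epa) and observed_building_it

# Hypothetical example: entering orders in the ICU as the 'house' to build.
epa2 = EPA(
    name="Enter orders in the ICU",
    required_bricks={
        ("applying medical knowledge", "ICU"),  # red two by four brick
        ("informatics", "ICU"),                 # red roof piece
    },
)

# This student has only shown medical knowledge in the peds clinic (a green brick).
student = Student(bucket={("applying medical knowledge", "peds clinic")})
print(missing_bricks(student, epa2))                                # both ICU bricks
print(can_be_entrusted(student, epa2, observed_building_it=False))  # False
```

The point the sketch tries to capture is that the check works in both directions: you can ask which bricks a student is still missing for a given EPA, but having every brick still isn't the same as being observed building the set.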

So, in other words, EPA's and competencies depend on each other. But both need to be assessed in parallel, to assure that students' Lego buckets are full of lots of cool and useful pieces, and also to assure that they can actually use those pieces to make stuff. Let me know if this helps you understand how EPA's and competencies work together, and what you think of this analogy, in the comments below.

Friday, January 8, 2016

Is there a 'best' curriculum for medical school?

I've been in many meetings lately where our medical school is grappling with the question of whether our recent and ongoing curriculum transformation has accomplished what we set out to accomplish. Many people are asking if our new way of doing things is 'better' than the old way of doing things. What most people are expecting to see is a single metric (or at most two to three) to say that we are better. These are primarily physicians who can clearly cite the literature on risk reduction and number needed to treat (NNT) for starting aspirin in a stroke patient compared with clopidogrel plus aspirin. We all like single data points, as they are easy to put into practice.

However, our true measure of whether the curriculum is working well is actually pretty darned hard to capture. We want to know if we are graduating physicians who are better prepared than the ones who went through the prior curriculum. The potential confounders are enormous, and the outcome data take years to develop. And those are just the obvious problems. There is really no single 'best' number to tell us if we are accomplishing what we should (yes, in my opinion, this includes the USMLE).

I think there is another inherent issue: maybe there is no 'best' curriculum. Every system as complex as a medical curriculum will have strengths and weaknesses, shiny parts and rusty closets no one wants to clean, and areas of emphasis and areas that are not covered as well. Any endeavor has limits and boundaries. In education, the limiting resource is often time and effort, and neither students' nor teachers' time is eternal. You have to make choices on what to include and what to exclude. Choices have to be made on how material is delivered. As a result, some curricula perform better in one area (say, medical knowledge) and others perform better in another (say, reflective thinking). But to get better at one, you need to spend time that might be spent on another. Hence, without unlimited time, there may be no perfect system.

There may be no 'best'.

And actually that's OK. You just need to decide what the main goals for your curriculum are. It doesn't matter if you are creating a new medical school curriculum or a curriculum for a single learner spending one week with you in your clinic. You pick what you want to accomplish, and that will help you determine if you have the 'best' curriculum for you and your learners. And then go measure whatever you can to see if it is working. We may not have a single best way to measure whether our system is working, but if we know what we'd like to measure, it's far easier to get meaningful data.