Tuesday, March 6, 2012

What medical education can learn from "Moneyball"

I've been waiting a bit to write this post, as I'm not sure exactly which way to take it.  Let me start by stating that I'm a really big baseball fan, and have been since second grade when my dad first took me on the El in Chicago to see the Cubs play in Wrigley.  I still get chills walking into that place.  This love of baseball drives me to read the occasional baseball book.  So, while I haven't seen the recent movie, I read the Michael Lewis book, "Moneyball," a few years ago.  And I really liked it on many levels.

In the realm of medical education, I liked the idea of trying to measure something that is inherently immeasurable.  In some respects, trying to pick a good candidate from a pool of medical school applicants or trying to assign a grade to a student on a clinical rotation is not unlike what the old-time scouts in "Moneyball" were doing.  They would look at a player batting, pitching, or fielding, and go with an overall gestalt of whether that player was 'big-league material'.  They were also basing their decisions on statistics that had been around forever, and no one had ever really questioned whether those statistics actually predicted who would or would not be a good performer.

Then, Billy Beane and his team of statisticians looked beyond the traditional numbers and redefined what to look for in a player prospect, largely ignoring a player's current body habitus or mechanics and focusing solely on the numbers.  They also redefined what success was by finding that the number of runners on base per game correlated with wins more tightly than other statistics.  Thus, on-base percentage (which counts walks) and slugging percentage (which weights extra-base hits) were more important measures of how an individual would contribute to the team than total runs batted in or home runs.  (Sorry if I just lost the non-baseball fans out there).
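For the curious, those two statistics are simple arithmetic.  Here's a quick sketch in Python using the standard formulas (the season line at the bottom is entirely made up, just to show the calculation):

```python
def on_base_percentage(hits, walks, hit_by_pitch, at_bats, sacrifice_flies):
    """OBP: how often a batter reaches base -- walks count."""
    return (hits + walks + hit_by_pitch) / (
        at_bats + walks + hit_by_pitch + sacrifice_flies
    )

def slugging_percentage(singles, doubles, triples, home_runs, at_bats):
    """SLG: total bases per at-bat -- extra-base hits count extra."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats

# A hypothetical season line:
obp = on_base_percentage(hits=150, walks=70, hit_by_pitch=5,
                         at_bats=500, sacrifice_flies=5)
slg = slugging_percentage(singles=90, doubles=35, triples=5,
                          home_runs=20, at_bats=500)
print(round(obp, 3), round(slg, 3))  # OPS, the stat "Moneyball" made famous, is obp + slg
```

Note that a batter who walks a lot helps the first number without touching a traditional stat like batting average, which is exactly the kind of hidden value Beane's team was hunting for.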

This process can have applications in lots of venues.  I think medical schools need to re-examine how we evaluate our students and decide whether we need to go through a similar process.  Are there statistics available to us now, which may not have been available 20 to 30 years ago, that we could use to identify medical students who are not likely to do well in practice?  We're pretty solid at identifying people with knowledge gaps, as our system of standardized testing takes care of that.  But is that what really makes a good physician?  It's part of it for sure, but it is not all of it.  There's a lot more to clinical reasoning and professionalism than just knowledge base.  Can we find ways to capture those measures, or are we going to be stuck with the old scouting reports, crossing our fingers to see what happens?  I don't have any solid answers yet, but I'm willing to help look.
