As Lean Healthcare practitioners, have you been asked by your leadership to “spread the best practice”? In the words of Dr. Phil, “Well, how is that working for you?”

I have the privilege of belonging to a journal club that explores healthcare policy.  Our last topic was about best practice and spread, or lack thereof.  This is critically important at many levels, from national policy all the way to the physician’s office.  For Lean Healthcare practitioners, you are well positioned to be successful with spread because you fundamentally understand the value of performing a current state analysis at the beginning of an improvement journey.  This gives you the context and knowledge needed to understand which aspects of others’ successes are relevant for your situation and which ones are not.

If you want to explore this more deeply based on the discussion summary below, click through the references at the end.

Summary of journal club discussions
Let’s talk about spreading “Best Practice”.  This is wonk-ese for, “Hey, if it works in one place, it should work everywhere else, right?”  The answer from hard experience is, not even close.  Despite promising pilots and articles, taking things from one site to many sites successfully turns out to be really, really difficult.  From disease management to care management to patient-centered medical home (and perhaps ACOs next), getting to “just the way we do things” is pretty difficult.

Why is this?  We don’t often recognize that when we say, “your mileage may vary,” we mean it.  That is, there are a host of variables in the real world that can prevent successful models from being replicated.

First, what we think matters may be wrong.  We often try to abstract what we think are the essential elements of a successful intervention and then export them.  But what really were the essential success factors?  Was it the care manager?  Was it the clinic leadership?  Was it the financial incentive?  Some things you can bring to a new setting; some you can’t.  Some spread efforts fail because what we think causes success might be wrong, or may at least be insufficient on its own.  At times we vastly overestimate our ability to judge what creates success.

Then there is the matter of host factors.  Variables that may have been immaterial in the pilot setting may be very material in the spread site.  These variables may be patient characteristics, provider characteristics, physical plant, organizational structure, or others.  (For a more comprehensive list, see the Congressional Budget Office Issue Brief, January 2012: Lessons from Medicare’s Demonstration Projects on Disease Management, Care Coordination, and Value-Based Payment.)  The point is that context and culture matter.  Just as treatments vary by our characteristics as patients, whether a reform measure works in a setting depends as much on the context as on the intervention. Just as germ theory is incomplete without host factors, so is our understanding of change incomplete if we view it through the lens of interventions without considering settings.

The implications of this are startling.  It means that we cannot assume that successful interventions can be spread. The existing federal approach of experimenting in the states and then, once an approach proves successful, making it the law of the land for the nation (as the Center for Medicare & Medicaid Innovation (CMMI) has the power to do) may be fundamentally flawed.

Instead, the best we can hope for is to be able to abstract guiding principles from successes without assuming that those successes guarantee the same in a different setting. In essence, each new installation of an intervention is a de novo experiment with an uncertain outcome. Our best shot is to recognize those factors that truly are guiding principles—the immutable yin—rather than the methods we use to reflect those principles—the changeable yang.

But there can’t be multiple best practices, can there?  Yes, there can, depending on the setting and context in which you are trying to apply them.  What is best in one place isn’t necessarily best in all.  Yes, we must be consistent, but only within a setting, not necessarily between settings, and not locked in stone over time.  In practice, this is affected by the degree of complexity of the process we are trying to re-engineer.  For us lucky westerners, think of In-N-Out Burger’s approach. They use a highly standardized and regulated production process, a limited menu, and consistent training.  However, their menu and proven best practice would not apply if they were planning an expansion to India, where eating beef is not terribly popular.  Much of the modern world, including our particular corner of it, has become too complex to be run by an instruction manual printed on hard copy.

Further, the more complex the intervention, the more its implementation must be an iterative process.  (In retrospect, this resonates strongly with students of complexity theory.)  Are research and improvement then at odds?  No.  Research can be nimble and iterative, the product of curious minds in a supportive culture. But research can also be rigid and dogmatic, in which case it becomes a barrier to improvement, both by holding to things that aren’t universally true and, through that rigidity, by failing to ask the second and third questions about why things work in one segment of the population and not another.

One of the failures of our current research construct is that we are simultaneously invested in declaring our way to be flawless (and therefore demanding fidelity to the model from others) and incompletely conclusive (thus requiring another cycle of grant funding to “fine-tune” the flawless model, a cycle that only we can perform).  Just as we have a medical-industrial complex dedicated to its own perpetuation, we just as assuredly have a research-industrial complex with massive investments in the randomized clinical trial machine, one that is loath to condone less rigid experimentation that, coincidentally, doesn’t require its infrastructure.  Sure, it works in practice, but what about in theory?

So assuming the current model of innovation is wrong, what should something like CMMI do to foster real and usable innovation?  In addition to respecting context and culture, the other critical variable is leadership.  Without leadership articulating mission and the unwavering need for change, nothing good happens.  Perhaps CMMI should be trying to grow leaders.  The Robert Wood Johnson Foundation has already been doing this for years in their clinical scholars program.  Leaders have a disproportionate responsibility for creating culture that rewards curiosity and trialing to further their organizations’ missions.

Finally, if leaders are a decisive factor, why don’t we have more good ones?  Maybe we, as their superiors and followers, too often don’t want them.  Leaders of this variety create change, relentlessly trying to yank away the comfortable security blanket of the status quo.  We don’t much like that.  The reason we don’t have more good leaders may have as much to do with bad sponsorship and followership as with bad leadership itself.


  1. Perla, Rocco. Health Care Reform and the Trap of the “Iron Law.” Health Affairs Blog, April 22, 2015.
  2. Congressional Budget Office Issue Brief, January 2012: Lessons from Medicare’s Demonstration Projects on Disease Management, Care Coordination, and Value-Based Payment.
  3. Damschroder, Laura J., David C. Aron, Rosalind E. Keith, Susan R. Kirsh, Jeffery A. Alexander, and Julie C. Lowery. Fostering Implementation of Health Services Research Findings into Practice: A Consolidated Framework for Advancing Implementation Science. Implementation Science, 2009.

Today’s blog was written by Dave Munch, M.D., HPP senior vice president and chief clinical officer.

Dave oversees all of HPP’s clinical and Lean Healthcare engagements. He plays a lead role in new services development and HPP’s continuous adaptation to the healthcare industry’s ever-changing needs.  Dave previously served at Exempla Lutheran Medical Center as their Chief Clinical and Quality Officer.

Dave has served on the Agency for Healthcare Research and Quality’s High Reliability Advisory Group and has an extensive background in hospital operations, health plan governance, physician organization governance, and clinical practice in internal medicine. He is on the faculty of the Institute for Healthcare Improvement (IHI).

Dave received his M.D. from the University of Colorado’s Health Sciences Center. He is a faculty member for the Belmont University Lean Healthcare Certificate Program.
