What’s all this brouhaha about evidence-based practice?

Posted by Marshall Swenson

Jun 17, 2014 8:56:00 AM

Evidence-based practice, evidence-informed practice, practice-based evidence . . . to the average Joe, these all sound so much alike that most folks just quit listening and do what is easiest. I am referring to how communities decide which services to deliver to children and families, and how they determine whether something actually works or, more importantly, does not. If the information were clear and better options were available, wouldn’t most people choose what works and stop choosing what does not? Unfortunately, this is not necessarily the case.

Full disclosure first. I work for one of the top evidence-based practices in the United States (and 14 other countries)—Multisystemic Therapy (MST). Some might ask, doesn’t that skew my perceptions? To which I would answer, maybe, but it also gives me an inside look at the problems and solutions offered by these treatments. I see close up their efficacy. I see them work. And that’s what this article is all about.

I recently attended a conference put on by Blueprints for Healthy Youth Development at which Del Elliott made some cogent suggestions. For one thing, he maintains our industry should adopt common standards for defining Evidence-Based Practices (EBP), and he presented an adaptation from the Working Group for the Federal Collaboration on What Works, 2004. This federal committee proposed the following elements:

  1. The study should use an experimental design—a Randomized Controlled Trial (RCT).
  2. The treatment effects should be sustained for at least one year after the intervention.
  3. The RCT should be replicated independently at least once.
  4. RCTs must adequately address threats to internal validity.
  5. There should be no known side effects that compromise health.

Dr. Elliott went on to suggest a Hierarchical Program Classification that would show:

  1. Model Program: meets all the above standards
  2. Effective Program: Lacks independent RCT replication
  3. Promising Program: Lacks an RCT replication
  4. Inconclusive Program: Contradictory findings or non-sustainable effects
  5. Ineffective Program: Meets all standards, but with no statistically significant effects
  6. Harmful Program: Meets all standards, but with negative main effects or serious side effects
  7. Insufficient Evidence: All other programs

Check out Dr. Elliott’s review of the various EBP lists and the degree to which each meets the above criteria. The reality is quite troubling: very few of the lists meet even one or two of the measures. Here's a link to his slide presentation at Blueprints and a link to his actual presentation on YouTube.

It’s time for a change, and the time is right. President Obama launched an initiative in February aimed at minority men, called My Brother's Keeper. It included an internal administration effort to evaluate more rigorously which programs work best.

Our communities and our policy makers need help and guidance in ascertaining the best course to follow for the good of the community—and the juvenile offenders. We embrace the notion “First, do no harm,” but we need to know what that means through scientific study. Once we see what works, we can direct the financial resources of our nation there. And on the other side of the coin, once we see that something either doesn’t work or causes harm, then we must STOP DOING THAT. It is the only ethical, reasonable and rational path.

Topics: Blueprints for healthy families, Blueprints
