It’s tempting to believe that when a patient or athlete improves after we’ve worked with them, we caused the improvement.
But what if the story isn’t so simple?
In this post, we’ll explore why clinical and performance outcomes do not always reflect the true effects of our interventions.
We’ll break it down in three parts:
- Why outcomes ≠ effects
- How cognitive biases reinforce our assumptions
- Why individual variability makes it dangerous to assume cause and effect
Let’s dive in.
1. Outcomes Reflect Change, Not Causation
Over the past few decades, outcome measures have become a staple in healthcare and performance.
Tools like pain scales, function tests, and patient satisfaction scores are used to justify care, monitor progress, and reflect program success.
But here’s the problem: outcome measures tell us what changed, but not why it changed.
A person may improve because of:
- Natural recovery
- Regression to the mean
- Hawthorne effect or placebo
- Your intervention (maybe)
- Or any combination of the above
Just because change follows treatment doesn’t mean the treatment caused it.
This is the post hoc fallacy (“after this, therefore because of this”): mistaking sequence for consequence.
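To make that concrete, here’s a tiny simulation of regression to the mean (all numbers invented for illustration). People whose pain fluctuates around a stable baseline tend to seek care during a flare-up, so their follow-up scores improve even when nothing at all is done:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch: each person's pain fluctuates around a stable
# baseline, and people seek care when a random flare pushes them high.
n = 10_000
baseline = rng.normal(5.0, 1.0, n)          # long-run average pain (0-10 scale)
visit_1 = baseline + rng.normal(0, 1.5, n)  # pain on the day they seek care
visit_2 = baseline + rng.normal(0, 1.5, n)  # pain at follow-up (no treatment!)

seeks_care = visit_1 >= 7.0                 # only the flare-ups walk in the door
print(f"Pain at first visit: {visit_1[seeks_care].mean():.2f}")
print(f"Pain at follow-up:   {visit_2[seeks_care].mean():.2f}")
# Roughly 7.9 at intake vs 5.9 at follow-up: a two-point "improvement"
# with zero intervention, purely from regression to the mean.
```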
That’s why randomized controlled trials (RCTs) exist: to isolate the true effect of an intervention by controlling for confounders.
In contrast, real-world outcome tracking lacks those controls and can easily mislead us.
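Here’s a toy version of that logic. In this made-up example, everyone improves by about two points on their own and the treatment adds a true effect of half a point; only the randomized comparison recovers that half point:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch: natural recovery and the true effect size are
# invented numbers, chosen only to show why the control arm matters.
n = 500
natural_recovery = 2.0
true_effect = 0.5

treated = natural_recovery + true_effect + rng.normal(0, 1.0, n)
control = natural_recovery + rng.normal(0, 1.0, n)

print(f"Improvement, treated arm: {treated.mean():.2f}")   # ~2.5
print(f"Improvement, control arm: {control.mean():.2f}")   # ~2.0
print(f"Estimated effect: {treated.mean() - control.mean():.2f}")
# Without the control arm, you'd credit the treatment with all 2.5 points
# of improvement; randomization shows only ~0.5 of it is the treatment.
```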
So when we say “It worked!”, all we can honestly claim is “They got better.” Those are two different statements.
2. Cognitive Biases Fuel the Story We Want to Believe
Even when we know outcomes don’t prove causation, our brains still want to connect the dots and protect our egos.
Here are some of the most common cognitive traps we fall into:
Confirmation Bias
We notice and remember the cases where our methods worked—and forget or rationalize the ones that didn’t.
Outcome Bias
We judge the quality of the decision based on the result, not the process. If the athlete improved, the treatment must have been correct.
Survivorship Bias
We highlight the success stories in our heads, on our websites, and in our case studies, yet we ignore those who dropped out, didn’t get better, or tried something else.
Dunning-Kruger Effect
Early in our careers, we may overestimate our ability to discern what’s effective because we lack the experience or knowledge to detect the complexity involved.
These biases aren’t signs of poor character; they’re just part of being human.
But they do shape how we interpret outcomes, especially in emotionally or professionally charged environments.
3. Individual Responses Are Not Universal
Even if a treatment or program works for one person, that doesn’t mean it will work for everyone.
Here’s why:
- People vary biologically (genetics, training age, injury history)
- People vary psychologically (motivation, belief systems)
- People vary environmentally (lifestyle, stress, resources)
So when a client or athlete responds positively, that’s their outcome.
We can’t assume it will generalize to others, even if the intervention was consistent.
This is why single-subject improvements shouldn’t become global prescriptions.
“It worked for my last ACL athlete” does not mean “This is the best way to rehab all ACL injuries.”
Even more important: your non-responders hold at least as much insight as your success stories.
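A quick illustrative simulation (numbers invented) shows why. Even when the average treatment effect is positive, individual responses can scatter widely around it:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sketch: person-level true effects vary widely around a
# modest positive mean (mean and spread are made up for illustration).
n = 200
individual_effects = rng.normal(0.5, 1.5, n)

print(f"Mean effect:             {individual_effects.mean():+.2f}")
print(f"Share who got worse:     {(individual_effects < 0).mean():.0%}")
print(f"Share with big response: {(individual_effects > 2.0).mean():.0%}")
# A positive average hides that roughly a third of people respond poorly.
# One athlete's great result says little about the next athlete's.
```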
Takeaways for Coaches and Clinicians
It’s okay to celebrate wins.
But if we want to improve our coaching, rehabbing, or leadership, we have to admit what we don’t know.
And the idea that “they got better, so it must’ve been me” is one of the most seductive (and dangerous) stories we tell ourselves.
Instead, let's acknowledge the gaps in what we do. Here's how:
- Outcomes matter, but they don’t prove effectiveness.
- Use outcomes to monitor change, not justify treatment.
- RCTs are the gold standard for controlling confounding variables and determining what causes change.
- Biases skew our perceptions of what works.
- Expect individual variability, and be careful with generalizations.
- Good science plus good judgment equals great coaching.
- Stay curious, humble, and skeptical.
Stay evidence-informed. Default to the literature.
And remember: not everything that works is working for the reason you think.