
Statistics for clinicians. Finally understandable.

Interactive lessons that teach you to read papers critically, choose the right analysis, and interpret results without a degree in statistics.

📚

Education

6 modules to statistical fluency

01 Available

Reading a Paper Without Faking It

Study design, hierarchy of evidence, and spotting red flags before you get fooled.

5 lessons · 45 min
02 Available

Choosing the Right Test

Variable types, study design, and the decision tree for test selection.

5 lessons · 45 min
03 Available

Understanding Your Output

P-values, confidence intervals, effect sizes, and how to smell a fragile result.

4 lessons · 35 min
04 Coming Soon

Survival Analysis for Surgeons

Kaplan-Meier, Cox regression, competing risks, and why your amputation data needs special handling.

5 lessons · 55 min
05 Coming Soon

Regression Without Crying

Linear, logistic, ordinal. When to use each, how to interpret coefficients, and why your 15-variable model is garbage.

6 lessons · 60 min
06 Coming Soon

Sample Size and Power

Why your retrospective study is underpowered, how to calculate what you need, and when to admit defeat.

4 lessons · 40 min
📰

Journal Club

Landmark trials, dissected

CREST-2 New

Stenting wins, CEA... doesn't?

Stenting beat medical therapy. CEA didn't reach significance. The trial that's reshaping how we think about asymptomatic carotid disease.

SWEDEPAD New

One program, two trials, same answer

3,500 patients across two parallel Swedish trials. No amputation benefit in CLTI, no QoL benefit in claudication, and a 5-year mortality signal. The case against routine paclitaxel-coated devices.

AQUATIC New

Aspirin + anticoagulation: friend or foe?

Chronic coronary syndrome patients on oral anticoagulation. When standard of care is scary.

🛠️

Tools

Quick utilities for your research

📄

Is This Paper Worth My Time?

Answer 7 quick questions about a study to get a read/skim/skip verdict with specific red flags and strengths.

Scan a paper
🔬

Which Test Do I Need?

Walk through your study design step-by-step to get the right statistical test, assumptions to check, and what to report.

Find your test
🎤

Prep Your Journal Club

Walk through a 10-step framework and leave with a presentation ready to deliver in under 10 minutes.

Start prepping
Module 01

Reading a Paper Without Faking It

Study design, hierarchy of evidence, and spotting red flags before you get fooled.

0/5 lessons complete
1
Study Design Hierarchy
RCT, cohort, case-control, and more
Foundation for evaluating any study's conclusions
Available
2
Bias and Confounding
Selection bias, information bias, confounders
Why associations aren't always causal
Available
3
Red Flags in Methods
P-hacking, HARKing, cherry-picking
Spot manipulation before you're fooled
Available
4
Reading Results Tables
Table 1, baseline characteristics, outcome tables
Extract what matters without getting lost
Available
5
The 5-Minute Paper Scan
A systematic approach to rapid appraisal
Decide if a paper is worth your time
Available
Module 03

Understanding Your Output

P-values, confidence intervals, effect sizes, and how to smell a fragile result.

0/4 lessons complete
1
Effect Size vs P-Value
Statistical vs clinical significance
Why p=0.001 can be meaningless and p=0.09 can matter
Available
2
Confidence Intervals
What CIs tell you that p-values don't
Precision, plausible range, and clinical interpretation
Available
3
Effect Sizes That Matter
NNT, absolute vs relative risk, MCID
Translate statistics into clinical decisions
Available
4
Smelling Fragile Results
Underpowered studies, wide CIs, barely significant
Know when a result won't replicate
Available
Module 02

Choosing the Right Test

Variable types, study design, and the decision tree for test selection.

0/5 lessons complete
1
Variable Types
Continuous, categorical, ordinal, time-to-event
Test selection depends entirely on this
Available
2
Outcome vs Predictor
Which variable are you trying to explain?
Determines the direction of analysis
Available
3
Paired vs Independent
Same subjects or different subjects?
Wrong choice = invalid test
Available
4
Number of Groups
One, two, or more?
Determines t-test vs ANOVA vs other
Available
5
The Decision Tree
Pull it all together
Interactive: answer questions → get the right test
Available
Module 03 · Lesson 1
Introduction

Effect Size vs P-Value

Why statistical significance isn't the same as clinical importance

What You Were Taught

p < 0.05 = significant = real finding = publish
p ≥ 0.05 = not significant = no effect = move on

What We're Going to Show You

That framework is incomplete at best, misleading at worst. By the end of this lesson, you'll understand why a p=0.001 result can be meaningless, and why a p=0.09 result might be the one that should change your practice.

Consider These Two Trials

Both tested Drug X for claudication. Look at the results:

Trial A

Patients
10,000
6-Min Walk Improvement
4 meters
P-Value
0.001

Trial B

Patients
80
6-Min Walk Improvement
85 meters
P-Value
0.09

Which drug works better? Which result should change your practice?

Hold that thought. We'll come back to it.

What the P-Value Actually Tells You

1

The definition

The p-value is the probability of seeing this result (or something more extreme) if the null hypothesis were true.

In other words: "If there's actually no difference, how often would we see data like this by chance?"

2

What it does NOT tell you

The p-value does not tell you:

• The probability that your finding is true
• The size of the effect
• Whether the effect matters clinically

3

The question it answers

"Is this likely due to chance?"

That's it. Nothing about importance. Nothing about magnitude.

The p-value answers the wrong question. You want to know if the effect matters. The p-value only tells you if it's probably real.
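That definition becomes concrete with a short simulation: build a world where the null hypothesis is true, and count how often chance alone produces a difference at least as large as the one observed. A minimal sketch; the group size, SD, and observed difference are illustrative assumptions, not data from any real trial.

```python
import random
import statistics

random.seed(42)  # reproducible

def simulated_p_value(observed_diff, n_per_group, sd, n_sims=10_000):
    """Two-sided p-value by simulation: the fraction of null-world
    experiments whose group difference is at least as extreme as observed."""
    extreme = 0
    for _ in range(n_sims):
        # Null is true: both groups drawn from the same distribution
        a = [random.gauss(0, sd) for _ in range(n_per_group)]
        b = [random.gauss(0, sd) for _ in range(n_per_group)]
        if abs(statistics.mean(a) - statistics.mean(b)) >= abs(observed_diff):
            extreme += 1
    return extreme / n_sims

# A 10-unit difference, small groups, noisy outcome:
p = simulated_p_value(observed_diff=10, n_per_group=20, sd=25)
print(f"simulated p = {p:.3f}")
```

Notice what the number is: how often chance mimics your data. Nothing in it says how much the effect matters.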

The Large-N Trap

With enough patients, any difference becomes statistically significant.

Same effect (2 mg/dL LDL reduction), different sample sizes:

N = 50
2 mg/dL drop
p = 0.40
N = 500
2 mg/dL drop
p = 0.08
N = 5,000
2 mg/dL drop
p = 0.003
N = 50,000
2 mg/dL drop
p < 0.001

The effect didn't change. Your certainty about a trivial effect increased.

The p-value is a function of sample size. Effect size is not.
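You can watch the trap happen with a z-test on the same fixed effect. The between-patient SD here (30 mg/dL) is an assumed, illustrative value, so these p-values won't match the table exactly; the monotone shrinkage is the point.

```python
import math

def two_sided_p(effect, sd, n_per_group):
    """Two-sided p-value from a z-test comparing two group means."""
    se = sd * math.sqrt(2 / n_per_group)  # standard error of the difference
    z = effect / se
    return math.erfc(z / math.sqrt(2))    # two-sided normal tail area

effect = 2.0  # mg/dL LDL reduction -- never changes
sd = 30.0     # assumed between-patient SD

for n in (50, 500, 5_000, 50_000):
    print(f"N = {n:>6} per group -> p = {two_sided_p(effect, sd, n):.4f}")
```

The effect stays at 2 mg/dL on every line; only N moves, and p collapses anyway.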

Effect Size Is the Clinical Question

Effect size answers "how much?"—the question you actually care about.

1

Common effect size measures

• Absolute difference (e.g., 4 meters vs 85 meters)
• Relative risk or hazard ratio
• Number needed to treat (NNT)
• Odds ratio

2

Minimum Clinically Important Difference (MCID)

Below this threshold, who cares if it's significant?

For 6-minute walk distance in PAD: the MCID is roughly 30-50 meters.

A 4-meter improvement? Statistically significant noise.

The question to always ask:

"Is this difference big enough to change what I do for my patient?"

If the answer is no, the p-value is irrelevant.
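That question can be encoded as a one-line check against the MCID. A sketch, using the 30 m lower bound of the 6-minute-walk MCID range quoted above:

```python
MCID_6MWD = 30.0  # meters; lower bound of the 30-50 m MCID range

def clinically_meaningful(effect_m, mcid=MCID_6MWD):
    """Is the improvement big enough to change what you do for the patient?"""
    return abs(effect_m) >= mcid

print(clinically_meaningful(4))   # the 4 m improvement -> False
print(clinically_meaningful(85))  # an 85 m improvement -> True
```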

Confidence Intervals Give You Everything

The 95% CI is more informative than the p-value alone.

1

Point estimate = effect size

The middle of the CI is your best guess at the true effect.

2

Width = precision

Narrow CI = large sample, precise estimate.
Wide CI = small sample, uncertain estimate.

3

Crossing the null = not significant

If the 95% CI for a difference includes zero (or 1.0 for ratios), p > 0.05.

If the entire CI falls within a clinically meaningless range, the study is clinically negative, even if p < 0.05.

Example

A study shows: Effect: 2 mg/dL, 95% CI: 1.5 to 2.5, p < 0.001

The entire confidence interval is below any meaningful LDL reduction. This is a confident null—we're certain the effect is too small to matter.
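All three readings come off a single interval. A sketch of the LDL example, where the standard error (~0.255) is back-calculated to reproduce the 1.5-2.5 CI above, and the 10 mg/dL "meaningful reduction" threshold is an assumption for illustration:

```python
Z95 = 1.959964  # two-sided 95% critical value of the normal distribution

def ci95(estimate, se):
    """95% confidence interval for an estimate with standard error se."""
    return estimate - Z95 * se, estimate + Z95 * se

lo, hi = ci95(estimate=2.0, se=0.255)
print(f"95% CI: {lo:.2f} to {hi:.2f} mg/dL")

crosses_null = lo <= 0.0 <= hi        # includes zero -> p > 0.05
MEANINGFUL_LDL = 10.0                 # assumed clinical threshold, illustrative
confident_null = hi < MEANINGFUL_LDL  # entire CI below what matters
print(f"significant: {not crosses_null}, confident null: {confident_null}")
```

One interval answers significance (does it cross the null?), precision (how wide?), and importance (where does it sit relative to the threshold that matters?).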

Exercise: Significant or Important?

Classify each scenario into one of four categories.

Significant AND Important
Significant but NOT Important
Not Significant but Potentially Important
Neither
8 questions

Back to Our Two Trials

Trial A

Patients
10,000
6-Min Walk Improvement
4 meters
P-Value
0.001

Trial B

Patients
80
6-Min Walk Improvement
85 meters
P-Value
0.09

Now you can answer:

Trial A is statistically significant but not clinically important. A 4-meter improvement is far below the MCID of 30-50 meters. With 10,000 patients, you've achieved high confidence in a trivial effect.

Trial B is not statistically significant but potentially important. An 85-meter improvement is clinically meaningful—nearly double the MCID. The p=0.09 reflects an underpowered study, not an absent effect.

Trial B should influence your thinking more. It suggests a meaningful effect that deserves a larger study.
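The full two-by-two logic, the same grid as the exercise, can be sketched as a small function using this lesson's MCID and the conventional alpha of 0.05:

```python
ALPHA = 0.05
MCID_6MWD = 30.0  # meters; lower bound of the 6-minute-walk MCID range

def classify(effect_m, p):
    """Place a result in the significance-by-importance grid."""
    significant = p < ALPHA
    important = abs(effect_m) >= MCID_6MWD
    if significant and important:
        return "significant AND important"
    if significant:
        return "significant but NOT important"
    if important:
        return "not significant but potentially important"
    return "neither"

print("Trial A:", classify(effect_m=4, p=0.001))
print("Trial B:", classify(effect_m=85, p=0.09))
```

Trial A lands in "significant but NOT important"; Trial B in "not significant but potentially important", matching the reading above.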

The Bottom Line

  • P-value tells you about chance, not importance. A small p-value means you're confident the effect isn't zero—not that it matters.
  • Always ask: "How big is the effect? Is that big enough to matter?" If the effect is below the MCID, significance is irrelevant.
  • A confident null is more informative than a shaky significant result. A tight CI around zero tells you more than a wide CI that barely excludes it.
  • Look at the confidence interval. It gives you effect size, precision, and significance in one measure.

You Now Know

How to look past p-values and evaluate what a study actually found. You won't be fooled by significant-but-trivial results, and you'll recognize potentially important findings that failed to reach significance due to sample size.