Why Confidence Intervals Are Better Than P-values


Most journals report a point estimate of whatever the study is trying to measure, along with a p-value. The p-value gives you an idea of whether that point estimate is statistically significant. Increasingly, however, journals are requiring confidence intervals instead of p-values, because CIs give you additional information:

  • First, they give you an idea of the precision of the point estimate, because you are given a range. A huge range is worse than a nice, tight, narrow one.
  • Second, they give you an idea of whether the estimate is clinically significant. If the range includes values that are clinically important, then the study may have found a clinically significant difference.
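The precision point above can be sketched in code. This is a hypothetical illustration (the samples and the helper name `mean_ci_95` are made up, and the normal critical value 1.96 is used as a simple approximation): the same CI recipe applied to tight data and to scattered data yields a narrow interval in the first case and a wide one in the second.

```python
import statistics
from math import sqrt

def mean_ci_95(sample):
    """Approximate 95% CI for a sample mean, using the normal
    critical value 1.96 (a rough sketch, not a t-based interval)."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / sqrt(len(sample))
    return (m - 1.96 * se, m + 1.96 * se)

tight = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 5.1, 4.9]   # low-variance data
noisy = [2.0, 8.5, 4.0, 7.0, 1.5, 9.0, 3.5, 6.5]   # high-variance data

lo1, hi1 = mean_ci_95(tight)
lo2, hi2 = mean_ci_95(noisy)
print(f"tight sample CI: ({lo1:.2f}, {hi1:.2f}), width {hi1 - lo1:.2f}")
print(f"noisy sample CI: ({lo2:.2f}, {hi2:.2f}), width {hi2 - lo2:.2f}")
```

Both samples have similar means, but only the interval width tells you how much to trust each estimate.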

Reporting only p-values and point estimates discards this extra (and very useful) information.

The other point covered in the video is using the point of no difference to determine statistical significance. There are two ways to compare two numbers: subtraction and division.

  • In subtraction-type comparisons, the point of no difference is zero: one thing minus the same thing equals zero. If the confidence interval includes zero, the result is not statistically significant. You can recognize subtraction-type comparisons by words such as “difference” or “reduction” (e.g., “risk reduction”).
  • In division-type comparisons, the point of no difference is one: one thing divided by the same thing equals one. If the confidence interval includes one, the result is not statistically significant. You can recognize division-type comparisons by the word “ratio” (e.g., “odds ratio” or “risk ratio”).
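The two rules above reduce to a single check: does the CI contain the point of no difference? A minimal sketch (the function name and the example intervals are illustrative, not from any real study):

```python
def is_significant(ci_low, ci_high, comparison):
    """Statistically significant when the CI excludes the point of
    no difference: 0 for subtraction-type comparisons (differences,
    reductions), 1 for division-type comparisons (ratios)."""
    null_value = 0.0 if comparison == "difference" else 1.0
    return not (ci_low <= null_value <= ci_high)

print(is_significant(-0.02, 0.10, "difference"))  # CI includes 0 -> False
print(is_significant(0.03, 0.12, "difference"))   # CI excludes 0 -> True
print(is_significant(0.85, 1.20, "ratio"))        # CI includes 1 -> False
print(is_significant(1.10, 1.45, "ratio"))        # CI excludes 1 -> True
```

Note that a risk reduction CI of (0.85, 1.20) would be significant (it excludes 0), while the same interval as a risk ratio would not (it includes 1): the comparison type matters, not just the numbers.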
