When reporting scientific results, the most common measure of the significance of a result is the p-value, which represents how likely it is that results at least as extreme as those observed would occur by random chance alone if there were no true effect. But the “statistical significance” represented by a small p-value doesn’t always indicate that a result is meaningful.
For instance, a drug could lead to only a minuscule improvement in symptoms, but if it were tested with a large number of patients, the p-value could still look impressive. The opposite could also be true: a drug that meaningfully improves symptoms could have an unimpressive p-value when tested with few patients. To better understand and communicate the relevance of new results, researchers and clinicians need measures that capture scientific relevance, not just statistical significance.
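The sample-size effect described above can be checked directly with a simple two-sided z-test. The effect sizes and sample sizes below are illustrative assumptions, not figures from the study: a tiny effect measured in a huge trial produces a tiny p-value, while a large effect measured in a small trial does not.

```python
import math

def z_test_p_value(effect_sd: float, n: int) -> float:
    """Two-sided p-value for a one-sample z-test of a mean shift of
    `effect_sd` standard deviations, measured on n subjects.
    z = effect * sqrt(n); p = P(|Z| >= |z|) under the null."""
    z = effect_sd * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical scenarios (illustrative numbers only):
# a negligible 0.03-SD improvement tested on a million patients...
p_trivial_big_n = z_test_p_value(0.03, 1_000_000)
# ...versus a substantial 0.5-SD improvement tested on ten patients.
p_meaningful_small_n = z_test_p_value(0.5, 10)

print(p_trivial_big_n)       # far below 0.05 despite a trivial effect
print(p_meaningful_small_n)  # above 0.05 despite a meaningful effect
```

The trivial effect comes out “highly significant” and the meaningful one does not, which is exactly the mismatch between statistical significance and practical relevance that the article describes.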
A statistical measure called the second-generation p-value (SGPV) could better represent the relevance of new scientific findings. New research led by University of Utah researcher Jonathan Chipman, PhD, lays out a framework that uses the SGPV to determine whether a study has collected enough data to establish scientific relevance, while protecting against false discoveries.
This framework has the potential to improve one of the basic facets of how science is done and shift researchers’ and clinicians’ perspectives from “significant” effects to ones that are legitimately meaningful.
The research is published in The American Statistician.