
The 17 Ways Headlines Lie About Studies

A headline claims '30% risk reduction.' The study: 200 mice, risk dropped from 1.0% to 0.7%. The headline isn't wrong—it tells you nothing. Decode the distortion.

Hylē Editorial

A headline says 'Red wine reduces heart attack risk by 30%.' The actual study: 200 mice, 8 weeks, risk falling from 1.0% to 0.7%. That is a 30% relative reduction, but only 0.3 percentage points in absolute terms. The headline is not wrong. It tells you nothing.

In 2015, the World Health Organization's International Agency for Research on Cancer classified processed meat as a Group 1 carcinogen—the same category as tobacco. Headlines screamed: 'Bacon Gives You Cancer.' What they didn't mention: the absolute risk increase was approximately 1 additional case of colorectal cancer per 1,000 people who eat 50g of processed meat daily. Smoking, by contrast, increases lung cancer risk by 2,000-3,000%. Both 'Group 1 carcinogens.' Completely different realities.

The gap between what studies find and what headlines report isn't accidental—it's structural. Scientists need funding and publications. Journalists need clicks and deadlines. You need the truth. These incentives are not aligned.

The Mathematical Illusions: Numbers That Mislead

1. Relative Risk vs. Absolute Risk

This is the single most common distortion in science reporting. When a headline claims 'X increases risk by 50%,' your brain imagines something terrifying. Here's what it actually means:

Scenario: A study finds that a certain medication increases blood clot risk from 2 in 10,000 to 3 in 10,000.

  • Absolute risk increase: 0.01% (1 additional case per 10,000 people)
  • Relative risk increase: 50% (3 ÷ 2 = 1.5, a 50% increase)

The headline writes itself: 'Drug Increases Clot Risk by 50%.' Technically accurate. Practically meaningless.

[!INSIGHT] Relative risk ratios without baseline context are statistically valid but communicatively deceptive. A 100% increase from 0.001% to 0.002% is still 0.002%—affecting 1 additional person per 100,000.

The Formula:

$$\text{Relative Risk} = \frac{P(\text{event}|\text{exposed})}{P(\text{event}|\text{unexposed})}$$

$$\text{Absolute Risk Reduction} = P(\text{control}) - P(\text{treatment})$$

Number Needed to Treat (NNT) reveals the practical reality:

$$\text{NNT} = \frac{1}{\text{Absolute Risk Reduction}}$$

If a statin reduces heart attack risk from 5% to 4% over 5 years, the NNT = 1/(0.05-0.04) = 100. You need to treat 100 people for 5 years to prevent one heart attack.
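Those formulas take three lines to check. A minimal Python sketch using the statin numbers above (the function and variable names are ours, not from any source):

```python
def risk_summary(p_control: float, p_treatment: float):
    """Relative risk, absolute risk reduction, and number needed to treat."""
    rr = p_treatment / p_control     # relative risk
    arr = p_control - p_treatment    # absolute risk reduction
    nnt = 1 / arr                    # number needed to treat
    return rr, arr, nnt

# Statin example from above: risk falls from 5% to 4% over five years.
rr, arr, nnt = risk_summary(0.05, 0.04)
print(f"RR = {rr:.2f}, ARR = {arr:.3f}, NNT = {nnt:.0f}")
# RR = 0.80, ARR = 0.010, NNT = 100
# The same data supports both '20% risk reduction!' and 'treat 100 people to help one'.
```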

2. The Baseline Trap: 'Risk Doubles'

'Doubling your risk' sounds catastrophic until you ask: doubling from what?

| Headline Claim | Baseline Risk | Actual Risk After 'Doubling' |
| --- | --- | --- |
| Risk doubles | 0.005% | 0.01% |
| Risk doubles | 15% | 30% |
| Risk doubles | 80% | 160% (impossible for probabilities) |

A 2023 meta-analysis made headlines claiming a '96% increased risk' of a rare neurological condition after a common procedure. The baseline incidence: 0.0003%. The nearly doubled risk: 0.0006%. Roughly 3 additional cases per million procedures.
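The same arithmetic exposes any such claim; a small helper, using the numbers from this section:

```python
def extra_cases_per_million(baseline: float, relative_increase: float) -> float:
    """Additional affected people per million, from a baseline probability
    and a relative increase (0.96 means a '96% increased risk')."""
    return baseline * relative_increase * 1_000_000

# The meta-analysis above: a 96% increase on a 0.0003% (3-in-a-million) baseline.
print(f"{extra_cases_per_million(0.000003, 0.96):.1f}")   # 2.9 extra cases per million
# 'Risk doubles' on a 15% baseline is a different universe:
print(f"{extra_cases_per_million(0.15, 1.0):,.0f}")       # 150,000 extra cases per million
```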

3. Confidence Intervals Hidden in Point Estimates

Studies report ranges. Headlines report single numbers.

A treatment might show '25% improvement' with a 95% confidence interval of 2% to 53%. The headline: 'New Treatment 25% More Effective.' The reality: we're 95% confident the true effect is somewhere between 'barely noticeable' and 'quite impressive.'
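Here is a sketch of where such a range comes from, assuming the effect is a ratio estimated on a log scale, with a standard error back-solved from the interval in the example rather than taken from any real study:

```python
import math

# A '25% improvement': ratio estimate 1.25, standard error on the log scale
# back-solved from the example interval (illustrative, not from a real trial).
estimate, se_log = 1.25, 0.104
z = 1.96                                   # two-sided 95% normal quantile

lo = math.exp(math.log(estimate) - z * se_log)
hi = math.exp(math.log(estimate) + z * se_log)
print(f"point estimate: +25%, 95% CI: +{(lo - 1):.0%} to +{(hi - 1):.0%}")
# point estimate: +25%, 95% CI: +2% to +53%
```

One number for the headline; an interval spanning 'barely noticeable' to 'quite impressive' for anyone who reads the study.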

The Translation Distortions: From Lab to Headline

4. Animal Models Sold as Human Findings

'Compound X reverses aging in mice' becomes 'Scientists Discover Cure for Aging.'

Mouse metabolism differs from human metabolism in fundamental ways. A 2020 analysis of promising cancer drugs that worked in mice found that only 5-8% succeeded in human trials. The failure rate for neurology drugs moving from mice to humans exceeds 99%.

Why the gap?

  • Mice have 2-year lifespans; effects observed over months represent proportionally more time
  • Mice are genetically homogeneous; humans are not
  • Mice don't have comorbidities, lifestyle factors, or varied diets
  • Doses in mouse studies often far exceed safe human equivalents
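The dose problem, at least, is quantifiable. The FDA's standard body-surface-area conversion shows how little of a mouse dose survives translation (the Km factors below are the published values; the example dose is invented):

```python
# FDA body-surface-area scaling: HED = animal_dose * (animal_Km / human_Km)
KM = {"mouse": 3, "rat": 6, "human": 37}   # published Km conversion factors

def human_equivalent_dose(animal_dose_mg_per_kg: float, species: str) -> float:
    return animal_dose_mg_per_kg * KM[species] / KM["human"]

# A 100 mg/kg mouse dose -- common in preclinical work -- maps to:
print(f"{human_equivalent_dose(100, 'mouse'):.1f} mg/kg in humans")  # ~8.1 mg/kg
# For a 70 kg adult that's ~570 mg total, not the 7,000 mg naive weight scaling suggests.
```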

5. Observational Studies Reported as Causal

The hierarchy of evidence places randomized controlled trials (RCTs) above observational studies for a reason. Observational studies can only show correlation, not causation.

Yet headlines routinely blur this distinction:

  • Study: 'People who drink coffee have 15% lower rates of liver disease'
  • Headline: 'Coffee Prevents Liver Disease'
  • Possible reality: Coffee drinkers might be wealthier, have better healthcare access, or share other protective factors

The Bradford Hill criteria for causation require consistency, a dose-response relationship, biological plausibility, temporality, and more. Most observational studies behind such headlines satisfy few of these adequately.
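A toy simulation makes the coffee example concrete: let wealth drive both coffee drinking and liver disease, give coffee zero causal effect, and watch a protective association appear anyway (all rates are invented for illustration):

```python
import random

random.seed(42)
coffee_cases = coffee_n = other_cases = other_n = 0

for _ in range(200_000):
    wealthy = random.random() < 0.5
    # Wealth drives BOTH coffee drinking and (via healthcare, diet) liver health.
    drinks_coffee = random.random() < (0.7 if wealthy else 0.4)
    liver_disease = random.random() < (0.02 if wealthy else 0.04)  # coffee plays no role
    if drinks_coffee:
        coffee_n += 1
        coffee_cases += liver_disease
    else:
        other_n += 1
        other_cases += liver_disease

rr = (coffee_cases / coffee_n) / (other_cases / other_n)
print(f"relative risk among coffee drinkers: {rr:.2f}")
# ~0.82: an apparent ~18% lower rate of liver disease, with zero causal effect
```

Condition on wealth and the association vanishes; that is exactly the adjustment observational headlines rarely mention.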

6. P-Hacking and Selective Reporting

Researchers face pressure to publish 'significant' results. This creates perverse incentives:

  • Outcome switching: Measuring 20 variables, finding 1 significant result, reporting only that one
  • Subgroup analysis: 'The drug didn't work overall, but it worked in women over 50 with brown hair'
  • P-hacking: Running analyses different ways until p < 0.05 emerges

A 2015 investigation in PLOS ONE found that in psychology studies with p-values just below 0.05, effect sizes were systematically inflated by an average of 60%.
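The arithmetic behind outcome switching is unforgiving: with 20 independent null outcomes, the chance that at least one crosses p < 0.05 is 1 − 0.95^20 ≈ 64%. A simulation sketch, standard library only:

```python
import math
import random
import statistics

def p_two_sample(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return math.erfc(z / math.sqrt(2))

random.seed(1)
experiments, outcomes, n = 1_000, 20, 50
hits = 0
for _ in range(experiments):
    # Twenty outcome variables, no real effect anywhere.
    ps = [p_two_sample([random.gauss(0, 1) for _ in range(n)],
                       [random.gauss(0, 1) for _ in range(n)])
          for _ in range(outcomes)]
    hits += min(ps) < 0.05   # report only the 'best' outcome

print(f"experiments yielding a publishable 'finding': {hits / experiments:.0%}")  # ~64%
```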

"The difference between significant and non-significant is not itself significant.
Andrew Gelman, Statistician, Columbia University

The Incentive Structure: Why This Keeps Happening

7-12. The Six Translation Filters

Each step between study and headline introduces potential distortion:

| Stage | Actor | Incentive | Typical Distortion |
| --- | --- | --- | --- |
| 7. Research design | Principal investigator | Needs significant results to publish | Underpowered studies, flexible analysis |
| 8. Peer review | Journal | Impact factor, citations | Preference for positive results |
| 9. Press release | University PR office | Media coverage, institutional prestige | Exaggerated claims, omitted caveats |
| 10. Reporting | Science journalist | Deadlines, clicks | Simplification, removed uncertainty |
| 11. Headline | Editor | CTR, engagement | Maximal claim, minimal nuance |
| 12. Social sharing | Public | Signal virtue/awareness | Further simplification |

13. The Press Release Multiplier

A 2014 study in The BMJ analyzed 462 press releases from 20 leading UK universities. It found:

  • 40% of press releases inflated advice beyond what the study supported
  • 33% exaggerated causal claims from observational studies
  • 36% overstated the human relevance of animal research

These exaggerations directly predicted more exaggerated news stories. The distortion originates upstream of journalism.

14. Conflict of Interest Omission

'New study shows supplement X improves memory' — funded by the supplement manufacturer.

A 2017 analysis found that industry-sponsored nutrition studies were 8 times more likely to favor the sponsor's product than independently funded studies. This doesn't make the studies wrong, but it makes non-reporting of funding sources deeply problematic.

The Remaining Distortions

15. Single Study Syndrome

Science progresses through replication and meta-analysis. Headlines progress through novelty.

The single-study problem: one study with n=47 participants showing an effect becomes 'Scientists Discover...' while 12 subsequent studies with n=2,000 each showing no effect receive zero coverage.
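A fixed-effect meta-analysis weights each study by the inverse of its variance, which grows with sample size; the pooling sketch below, with invented effect sizes and a deliberately simplified variance formula, shows why the replications should dominate:

```python
# Fixed-effect meta-analysis: weight each study by the inverse of its variance.
# One small positive study plus twelve large null replications (invented numbers).
studies = [(0.45, 47)] + [(0.0, 2_000)] * 12   # (effect size d, total n)
weights = [n / 4 for _, n in studies]           # var(d) ~ 4/n, a rough simplification
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
print(f"pooled effect: {pooled:.3f}")           # ~0.001: the headline study barely registers
```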

16. Effect Size Invisibility

Statistical significance (p < 0.05) tells you the data would be unlikely if there were no effect at all. It tells you nothing about whether the effect is large enough to matter.

A study might find that a teaching intervention improves test scores with p < 0.001 (highly significant). The effect size: 0.02 points on a 100-point scale. Statistically real. Practically worthless.
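Sample size alone can manufacture that kind of p-value. A sketch, with numbers invented to match the example above:

```python
import math

# A 0.02-point gain on a 100-point test: sd = 2 points, half a million students per arm.
diff, sd, n = 0.02, 2.0, 500_000
se = sd * math.sqrt(2 / n)                 # standard error of the difference in means
z = diff / se
p = math.erfc(z / math.sqrt(2))            # two-sided p-value, normal approximation
cohens_d = diff / sd                       # standardized effect size

print(f"z = {z:.2f}, p = {p:.1e}, d = {cohens_d}")
# z = 5.00, p = 5.7e-07, d = 0.01 -- 'highly significant', educationally invisible
```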

17. Generalization Beyond Population

A study on 18-22 year-old American college students (psychology's favorite subjects) gets reported as 'Humans behave this way.'

The WEIRD problem: Western, Educated, Industrialized, Rich, and Democratic societies represent about 12% of the world's population but roughly 96% of psychology study participants. Generalizations beyond this population are often unjustified.

What This Means For You

[!NOTE] The solution isn't to ignore science journalism entirely. Peer-reviewed research, even with its flaws, remains our best tool for understanding the world. The solution is to read beyond headlines and ask specific questions.

When you see a health or science headline, ask:

  1. What was the baseline risk? (Absolute vs. relative)
  2. What was the sample size and population?
  3. Was this an RCT or observational study?
  4. Is this a single study or a replication?
  5. What does the confidence interval look like?
  6. Who funded this research?
  7. Did the headline use causal language ('prevents,' 'causes') for correlational findings?
  8. Were the subjects human or animal?

The gap between scientific finding and public understanding isn't inevitable—it's engineered by misaligned incentives at every stage of the translation chain. Understanding these 17 distortions doesn't make you cynical. It makes you literate in the actual language of evidence.

Key Takeaway: Headlines aren't lying—they're speaking a different language than the studies they reference. Relative risk without absolute baseline, animal models generalized to humans, and observational correlations reported as causal mechanisms are features of the science journalism ecosystem, not bugs. The only defense is learning to read the underlying evidence yourself, or finding sources that translate faithfully rather than sensationally.

Sources: Ioannidis JPA (2005). 'Why Most Published Research Findings Are False.' PLOS Medicine. Sumner P et al. (2014). 'The association between exaggeration in health related science news and academic press releases: retrospective observational study.' BMJ. Gelman A (2019). 'Don't Calculate Post-Hoc Power Using Observed Estimate of Effect Size.' Annals of Surgery. Henrich J et al. (2010). 'The weirdest people in the world?' Behavioral and Brain Sciences.
