Series
Bias and causal associations in observational research
Internal and external validity
Analogous to a laboratory test, a study should have internal validity—ie, the ability to measure what it sets out to measure.2 The inference from participants in a study should be accurate. In other words, a research study should avoid bias or systematic error.3 Internal validity is the sine qua non of clinical research; extrapolation of invalid results to the broader population is not only worthless but potentially dangerous.
A second important concern is external validity: can results from the study be extrapolated to the broader population?
Bias
Bias undermines the internal validity of research. Unlike the conventional meaning of bias—ie, prejudice—bias in research denotes deviation from the truth. All observational studies (and, regrettably, many badly done randomised controlled trials)9, 10 have built-in bias; the challenge for investigators, editors, and readers is to ferret these out and judge how they might have affected results. A simple checklist, such as that shown in panel 1, can be helpful.11, 12, 13, 14
Several taxonomies of bias have been proposed; a simple approach groups biases into three categories: selection bias, information bias, and confounding.
Are the groups similar in all important respects?
Selection bias stems from an absence of comparability between groups being studied. For example, in a cohort study, the exposed and unexposed groups differ in some important respect aside from the exposure. Membership bias is a type of selection bias: people who choose to be members of a group—eg, joggers—might differ in important respects from others. For instance, both cohort and case-control studies initially suggested that jogging after myocardial infarction prevented repeat infarction; the apparent benefit, however, probably reflected membership bias, since those able and willing to jog presumably had better prognoses than those who did not.
Has information been gathered in the same way?
Information bias, also known as observation, classification, or measurement bias, results from incorrect determination of exposure or outcome, or both. In a cohort study or randomised controlled trial, information about outcomes should be obtained the same way for those exposed and unexposed. In a case-control study, information about exposure should be gathered in the same way for cases and controls.
Information bias can arise in many ways. Some use the term ascertainment bias to describe differential gathering of information about exposure or outcome between the groups compared.
Is an extraneous factor blurring the effect?
Confounding is a mixing or blurring of effects. A researcher attempts to relate an exposure to an outcome, but actually measures the effect of a third factor, termed a confounding variable. A confounding variable is associated with the exposure and it affects the outcome, but it is not an intermediate link in the chain of causation between exposure and outcome.27, 28 More simply, confounding is a methodological fly in the ointment. Confounding is often easier to understand from examples than from definitions.
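The blurring of effects can be made concrete with a simulation. In the sketch below, the scenario (coffee drinking, smoking, and myocardial infarction) and all the probabilities are illustrative assumptions, not data from any study: smoking raises both the chance of drinking coffee and the risk of infarction, while coffee itself has no effect on risk. The crude comparison nonetheless suggests that coffee roughly doubles the risk.

```python
import random

random.seed(1)
n = 100_000

coffee_mi = coffee_n = other_mi = other_n = 0
for _ in range(n):
    smoker = random.random() < 0.30
    # Smoking makes coffee drinking more likely (confounder-exposure link)
    coffee = random.random() < (0.80 if smoker else 0.30)
    # Only smoking affects infarction risk; coffee has no causal effect
    mi = random.random() < (0.04 if smoker else 0.01)
    if coffee:
        coffee_n += 1
        coffee_mi += mi
    else:
        other_n += 1
        other_mi += mi

crude_rr = (coffee_mi / coffee_n) / (other_mi / other_n)
print(f"crude risk ratio: {crude_rr:.2f}")  # close to 2, despite no true effect
```

The inflated crude risk ratio is entirely the work of the confounder: coffee drinkers are disproportionately smokers, and smokers have the higher infarction risk.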
Control for confounding
When selection bias or information bias exists in a study, irreparable damage results. Internal validity is doomed. By contrast, when confounding is present, this bias can be corrected, provided that confounding was anticipated and the requisite information gathered. Confounding can be controlled for before or after a study is done. The purpose of these approaches is to achieve homogeneity between study groups.
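One such after-the-fact approach, stratification, can be sketched on simulated data of the same illustrative kind (coffee, smoking, and infarction, with assumed probabilities): examining the coffee-infarction association separately within smokers and within non-smokers makes the spurious effect vanish.

```python
import random

random.seed(2)
n = 100_000

# counts[stratum][exposed] = [events, total]; stratum 0 = non-smoker, 1 = smoker
counts = [[[0, 0], [0, 0]], [[0, 0], [0, 0]]]
for _ in range(n):
    smoker = random.random() < 0.30
    coffee = random.random() < (0.80 if smoker else 0.30)
    mi = random.random() < (0.04 if smoker else 0.01)  # risk depends on smoking only
    cell = counts[smoker][coffee]
    cell[0] += mi
    cell[1] += 1

for stratum, name in ((0, "non-smokers"), (1, "smokers")):
    (e0, t0), (e1, t1) = counts[stratum]
    rr = (e1 / t1) / (e0 / t0)
    print(f"risk ratio among {name}: {rr:.2f}")  # both near 1.0
```

Within each stratum the confounder is held constant, so the stratum-specific risk ratios hover around 1.0: homogeneity between study groups has been restored by analysis rather than by design.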
Chance
If a reader cannot explain results on the basis of selection, information, or confounding bias, then chance might be another explanation. The reason for examination of bias before chance is that biases can easily cause highly significant (though bogus) results. Regrettably, many readers use the p value as the arbiter of validity, without considering these other, more important, factors.
The venerable p value measures chance. It advises the reader of the probability of a false-positive conclusion: the probability of observing an association as strong as (or stronger than) the one found if, in truth, no association exists.
Bogus, indirect, or real?
When statistical associations emerge from clinical research, the next step is to judge what type of association exists. Statistical associations do not necessarily imply causal associations.17 Although several classifications are available,28 a simple approach includes just three types: spurious, indirect, and causal. Spurious associations are the result of selection bias, information bias, and chance. By contrast, indirect associations (which stem from confounding) are real but not causal.
Conclusion
Studies need to have both internal and external validity: the results should be both correct and capable of extrapolation to the population. A simple checklist for bias (selection, information, and confounding) then chance can help readers decipher research reports. When a statistical association appears in research, guidelines for judgment of associations can help a reader decide whether the association is bogus, indirect, or real.
References (46)
- et al. Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? Lancet (1998)
- Bias in analytic research. J Chronic Dis (1979)
- et al. The intrauterine device and pelvic inflammatory disease: the Women's Health Study reanalyzed. J Clin Epidemiol (1991)
- et al. The intrafamilial transmission of rheumatoid arthritis: 3, the lack of support for a genetic hypothesis. J Chronic Dis (1969)
- et al. Estimates of the risk of cardiovascular death attributable to low-dose oral contraceptives in the United States. Am J Obstet Gynecol (1999)
- Cigarette smoking, use of oral contraceptives, and myocardial infarction. Am J Obstet Gynecol (1976)
- Cancer of the breast and reproductive tract in relation to use of oral contraceptives. Contraception (1989)
- et al. Primary prevention of gynecologic cancers. Am J Obstet Gynecol (1995)
- Technology follies: the uncritical acceptance of medical innovation. JAMA (1993)