
Does Lack of Health Insurance Kill?

Written by Linda Gorman

The results from the Oregon Experiment, published in the New England Journal of Medicine on May 2, show that extending Medicaid to low-income adults did not improve basic clinical measures of health. Given that, it is a bit hard to see how being uninsured can cause 45,000 premature deaths every year — a figure rivaling the number of Americans killed in the Vietnam War. That's the number Physicians for a National Health Program say die prematurely in America due to a lack of health insurance.

The Oregon study results probably did not surprise those who have been paying attention to the serious academic literature, however. In independent empirical papers, Richard Kronick and David Card and his colleagues find little evidence that health insurance coverage significantly reduces mortality. Former Director of the Congressional Budget Office June O’Neill and her husband Dave also conclude that lack of insurance has little or no impact on mortality. See the discussion at this blog here, here and here.

One person who ordinarily pays scrupulous attention to the quality of research is Austin Frakt. Yet in a surprisingly irate blog post he makes this claim: the fact that health insurance improves health and reduces “mortality risk” is “well established” and “as close to an incontrovertible truth as one can find in social science.”

In another post he asserts that Megan McArdle “distorts the scientific record” in an Atlantic article in which she concluded that there was little evidence to support the claim that people die because they do not have health insurance. He accused her of cherry picking, of “misrepresent[ing] a body of work in support of that conclusion and further mislead[ing] readers that such work does not exist.”

Professor Frakt owes Ms. McArdle an apology.

Let’s look at the references that Professor Frakt uses to support his claim. He refers readers to a number of links. They include an article by Stan Dorn of the Urban Institute on Ezra Klein’s blog; an article in the New Republic by Harold Pollack, a professor of Social Service Administration at the University of Chicago; and a blog post by J. Michael McWilliams, assistant professor of health care policy and medicine at Harvard Medical School. Each of these articles cites other articles. They add up to an impressive total, but a number of them shed very little light on the question at hand.

The citations rely heavily on the 2002 and 2009 Institute of Medicine (IOM) reports. Mr. Dorn refers readers to Table 3-3 in the 2009 IOM report America’s Uninsured Crisis: Consequences for Health and Health Care, which provides study counts, and to testimony from John Ayanian, a professor at Harvard Medical School. Professor Ayanian summarizes the 2009 report’s conclusions. Mr. Dorn’s work for the Urban Institute is also cited. It determined the number of deaths from lack of health insurance by accepting the IOM conclusions and extrapolating from them.

The problem with relying on the IOM reports is that they were not particularly scrupulous about determining whether mortality rates were caused by lack of health insurance or by behavioral differences for which being uninsured is a marker.

As previously reported on this blog, Appendix D of the 2002 Institute of Medicine report cites only two studies on deaths from lack of health insurance, Franks et al. (1993) and Sorlie et al. (1994). It adopts, without explanation, the Franks estimate of 1.25 deaths among those without health insurance for every death among those with health insurance. The problem is that Franks’ sample assumed that baseline insurance status remained unchanged for 19 years, an unrealistic assumption, and it excluded everyone covered by government programs. The 95% confidence interval for the 1.25 hazard ratio was 1.00 to 1.55.

Wilper et al. (2009) would seem to support the IOM conclusions. The authors compared deaths through 2000 for the insured and uninsured based on their self-reported insurance status in NHANES III, a survey conducted between 1988 and 1994. The uninsured were 40 percent more likely to have died; the 95 percent confidence interval for the estimated hazard ratio was 1.06 to 1.80.

Unfortunately, almost 30 percent of the Wilper sample was excluded due to missing data. Insurance status was self-reported, and the paper notes that 7 to 11 percent of the uninsured may be incorrectly classified. The study had no information about the duration of insurance coverage, did not correct for income, and excluded everyone with “public insurance,” including those on Medicaid or Medicare, in the military, or in the VA system.
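For readers who want to see why those confidence intervals matter, here is a minimal sketch in Python using only the figures quoted above; the helper function is illustrative and not drawn from either paper. The usual reading is that a 95 percent interval whose lower bound does not exceed 1.0 cannot rule out "no effect" at the conventional 5 percent level.

```python
import math

def summarize_hazard_ratio(label, hr, ci_low, ci_high):
    """Report whether a 95% confidence interval for a hazard ratio excludes 1.0."""
    # Back out the approximate standard error on the log scale, assuming the
    # usual symmetric normal interval for log(HR).
    se_log_hr = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    excludes_no_effect = ci_low > 1.0  # an interval touching 1.0 cannot rule out "no effect"
    print(f"{label}: HR = {hr}, 95% CI [{ci_low}, {ci_high}], "
          f"approx. SE(log HR) = {se_log_hr:.3f}, excludes 1.0: {excludes_no_effect}")

# Figures quoted above
summarize_hazard_ratio("Franks et al. (1993)", 1.25, 1.00, 1.55)
summarize_hazard_ratio("Wilper et al. (2009)", 1.40, 1.06, 1.80)
```

On that reading, the Franks interval is consistent with no mortality effect at all, while the Wilper estimate only just clears the bar.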

Kronick (2009) corrected for income and for other variables that are known to be correlated with mortality rates. He compared death rates as of 2002 for the insured and uninsured interviewed for the National Health Interview Survey between 1986 and 2000. About 20 percent of the sample reported not having insurance at baseline, and by almost every characteristic measured, they were in higher health risk groups. Kronick found that “adjusted for demographic, health status, and health behavior characteristics, the risk of subsequent mortality is no different for uninsured respondents than for those covered by employer-sponsored group insurance at base line.” He concluded that “the Institute of Medicine’s estimate that lack of insurance leads to 18,000 excess deaths each year is almost certainly incorrect.”

Several suggested articles shed little light because they look at the effect of disrupted health care on small samples of people who are sick and poor. The results of involuntary cancellation for these people probably do not generalize to a larger, mostly healthy population in which many people choose to be uninsured. Studies from the 1980s by Lurie et al., “Termination from Medi-Cal — Does it Affect Health” (1984) and “Termination of Medi-Cal benefits. A follow-up study one year later” (1986), reportedly tracked 164 indigent adults who had been attending UCLA clinics and who had their care transferred from California Medicaid to county health facilities.

Fihn and Wicher (1988) compared 157 Seattle Veterans Administration Medical Center patients whose outpatient care was terminated due to budget cuts with 74 patients whose care was retained. The health of those whose care was involuntarily terminated deteriorated relative to those who kept it. The study concluded that “administrative criteria did not accurately identify medically stable patients,” that “federal health care programs are important to many indigent patients,” and that “withdrawing services may have deleterious consequences.”

Carlson et al. (2006) show that people who were dropped from Oregon Medicaid were less likely to have had a primary care visit and more likely to have unmet needs than those who were not dropped. The response rate to their random sample request was 34 percent.

Another group of references examines what happens to care when payments for care differ. Mr. Dorn’s article links to a seemingly random segment of the 2009 IOM report. Given that he discusses a study showing that the uninsured who are in severe auto accidents receive 20 percent less care than the insured, and die at rates that are 39 percent higher, let’s assume that he intended to refer the reader to Doyle’s (2005) empirical study of the effect of health insurance coverage on the amount of hospital care received following an automobile accident.

The study included 80 percent of all crash-related hospitalizations in Wisconsin from 1992 to 1997. Demographic differences were controlled for using the characteristics of the victim’s ZIP code of residence.

Professor Doyle found that the uninsured received fewer spinal fusions, less skeletal traction, fewer operations on the brain, kidney, bladder, chest, large intestine, and vessels, and less plastic surgery. They received more sutures and more alcohol and drug rehabilitation and detoxification. The uninsured went to hospitals with fewer resources, and they had a mortality rate of 5.3 percent rather than the 3.8 percent enjoyed by the insured.

Based on the results, he concludes that a 10 percent increase in facility charges reduces mortality by 1.1 percent. This suggests that when higher payments for care result in more care, the extra care saves lives, at least for trauma patients. According to Doyle’s calculations, the estimated difference in the survival rates for the insured and uninsured translated into a 0.01 percent increase in the annual risk of death from an auto accident for the uninsured.
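As a rough illustration of how a 1.5-percentage-point gap in in-hospital mortality can translate into an increase of roughly 0.01 percent in annual risk, here is a back-of-envelope sketch. The annual probability of a crash-related hospitalization used below is a hypothetical round number chosen for illustration; it is not a figure from Doyle's paper.

```python
# Mortality gap among hospitalized crash victims, from the figures quoted above
in_hospital_gap = 0.053 - 0.038          # 1.5 percentage points

# Hypothetical annual probability of being hospitalized after a crash
# (illustrative assumption only; not a figure from Doyle's paper)
assumed_annual_crash_hosp_rate = 0.0067  # roughly 0.7 percent per year

extra_annual_risk = in_hospital_gap * assumed_annual_crash_hosp_rate
print(f"Implied extra annual risk of death for the uninsured: {extra_annual_risk:.4%}")
# Prints roughly 0.01%, the order of magnitude Doyle reports
```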

Card et al. (2009) measured how the health of seriously ill people, those for whom treatment could not be deferred and who were admitted through the emergency department, was affected by becoming eligible for Medicare at age 65. Their sample is a subset of the people aged 60 to 70 who were discharged from California hospitals from 1992 to 2002. As expected, the rate of these urgent admissions showed no jump at age 65, and predicted mortality rates rose smoothly with age.

Card et al. found “modest” increases in treatment intensity at age 65, on the order of 3 percent when measured by length of stay, list charges, and number of procedures. The data suggest that the increase in intensity is much larger for specific “procedure-intensive” diagnoses such as acute myocardial infarction, but the samples were too small to permit a definitive conclusion.

In accord with Doyle, the increase in treatment intensity observed by Card et al. seemed to produce a decrease in mortality for the seriously ill. As treatment intensity increased, the probability of death fell by 0.7 to 1.0 percentage points. Seven-day death rates dropped from almost 5 percent just before age 65 to almost 4 percent just after age 65. Death rates a year after treatment dropped from roughly 23 percent to roughly 22 percent.

But nondeferrable admissions make up only 12 percent of the overall patient population. When Card et al. estimated the effect of turning age 65 on the entire set of discharged patients, they found that 28-day mortality fell by a small, and only marginally significant, 0.13 percent.

They conclude that the reduction in mortality that they observed was too large “to be driven solely by changes among the 8% of the patient population who move from no health insurance coverage to Medicare when they reach age 65.” They discuss several variables that may operate to reduce mortality, including the possibility that Medicare places fewer restrictions on care than private insurance or Medicaid “leading to more (and possibly higher-quality) services to patients over 65 than to patients under 65,” but conclude that the exact cause remains unclear.
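A back-of-envelope version of that "too large" argument, using only the round numbers quoted above (a sketch, not the paper's own decomposition), looks like this:

```python
# Approximate fall in seven-day mortality at age 65 for the nondeferrable
# sample, in percentage points (from the figures quoted above)
overall_drop_pp = 1.0

# Share of the patient population moving from no coverage to Medicare at 65
share_gaining_coverage = 0.08

# If the entire drop were driven by that 8 percent, mortality for each newly
# covered patient would have to fall by this many percentage points:
implied_drop_for_switchers_pp = overall_drop_pp / share_gaining_coverage
print(f"Implied drop for the newly insured alone: {implied_drop_for_switchers_pp:.1f} pp")
# About 12.5 percentage points, far more than the roughly 5 percent baseline
# seven-day death rate, which is why coverage gains alone cannot explain the drop.
```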

Finally, they emphasize that their analysis “illustrates an important lesson for future research. Any plausible effect of insurance on health status in the general population will likely be small and easily confounded by selection effects in observational settings. Indeed, the only randomized health insurance experiment ever mounted found insignificant impacts of insurance on the health status of the overall population (Newhouse, 1993).”

Volpp et al. (2003) examined “The Effect of Cuts in Medicare Reimbursement on Hospital Mortality.” They found that when New Jersey reduced subsidies to hospitals that treated the uninsured, the hospitals performed fewer cardiac catheterizations and did less mechanical revascularization on uninsured patients admitted with heart attacks.

Meyers et al. (2006) surveyed 25 physicians working in Washington, D.C., who completed a questionnaire on each of 409 patients seen in two half-day sessions. The physicians reported making changes in clinical management in response to insurance coverage, with more changes made for the uninsured than for the privately insured.

Papers by McWilliams and “colleagues” are mentioned but not specifically cited; likely candidates are McWilliams et al. (2009), “Medicare Spending for Previously Uninsured,” and McWilliams et al. (2007), “Health of Previously Uninsured After Acquiring Medicare Coverage.” The 2007 paper analyzed data from the Health and Retirement Study, which enrolled people aged 51 to 61 in 1992. Subjects were questioned about self-reported health and health insurance status every two years through 2004. Because 15.1 percent of the study sample died and 14.9 percent dropped out before 2004, results for these subjects were inferred. The authors did not control for demographic variables other than age.

Before age 65, summary health scores worsened at a greater rate for the uninsured than for the insured. After age 65, when the majority of both groups were covered by Medicare, health worsened for the previously insured while the health of the previously uninsured was relatively stable. Improvements were concentrated among those with cardiovascular disease or diabetes.

In “The Health Effects of Medicare for the Near-Elderly Uninsured,” Polsky et al. (2009) use data from the same Health and Retirement Study to estimate health state transitions over each two-year period. In contrast to McWilliams, they find that gaining Medicare coverage had little effect on health. Their primary outcome measure is self-reported health status combined with mortality. Control variables include sex, age, education, ethnicity, race, and census region. The change in health trajectory for the previously uninsured when they qualify for Medicare is small and “not statistically significant.” Specifically, for every 100 people in the previously uninsured group, the effect of joining Medicare by age 73, relative to the previously insured group, is that 0.6 fewer are in excellent or very good health, 0.3 more are in good health, 2.5 fewer are in fair or poor health, and 2.8 more are dead.
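A quick bookkeeping check on those per-100-person figures (simple arithmetic on the numbers quoted above, not a re-analysis of the study): because every person ends up in exactly one of the four categories, the changes should net to zero, and they do.

```python
# Changes per 100 previously uninsured people by age 73, relative to the
# previously insured group, as quoted above from Polsky et al. (2009)
changes_per_100 = {
    "excellent or very good health": -0.6,
    "good health":                   +0.3,
    "fair or poor health":           -2.5,
    "dead":                          +2.8,
}

net_change = sum(changes_per_100.values())
print(f"Net change across all categories: {net_change:+.1f} per 100")  # +0.0 per 100
```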

McWilliams et al. (2010) suggested that including deaths represented a potential source of bias “because previously uninsured adults were sicker than previously insured adults, and sicker adults were more likely to die.” By including deaths, Polsky et al. “implicitly assumed that the study design and statistical model were equally appropriate for both types of outcomes — health and mortality.”

Polsky et al. (2010) respond that death is “an important aspect of one’s health trajectory” and that their model allows hazard ratios to change as people spend more years on Medicare. In an appendix to their 2009 article, they conclude that the differences are driven by the fact that the previously uninsured who die after age 65 are more likely to have been in excellent or very good health, and that health status comparisons are highly sensitive to how the different character of deaths in the insured and uninsured groups is accounted for.

Professor Frakt’s references refer the reader to a final group of studies that are something of a hodge-podge. Most have little to do with health insurance or mortality. McGlynn et al. (2003) survey adults in a sample of U.S. cities and find that, on average, people receive about half of recommended care. Decker and Remler (2004) compare the income gradient of self-reported health from surveys in Canada and the United States. They find that people below median income in the United States are 7.5 percent more likely to report being in poor or fair health than similar people in Canada.

Though they devote two paragraphs to the fact that the gradient grows, flattens, and shrinks in other countries at the same ages at which the U.S.-Canada gradient does, they nevertheless conclude that universal health care reduces differences in health by income and that universal health care in the United States would reduce inequality “quite a bit.”

Finally, the RAND Health Insurance Experiment (HIE) is dragged in. It compared results for people with different kinds of health insurance, none of whom were uninsured. Professor Pollack directs readers to a 1983 abstract (Brook et al.) to support the claim that the RAND HIE predicted that low-income patients enrolled in a high-deductible health plan would have a 38 percent higher mortality rate than those enrolled in a free plan due to differences in hypertension. However, the abstract merely says that diastolic blood pressure was 3 mm Hg lower for those with free care.

In the definitive book on the RAND experiment by Joseph P. Newhouse and the Insurance Experiment Group, Free for All?, the effect of the lower blood pressure is said to reduce predicted mortality rates by about 10 percent (p. 339). Furthermore, Newhouse et al. concluded that “virtually all of the improvement in blood pressure control brought about by free care occurred as a result of better identification of hypertensives…Control, once the person was diagnosed, was not measurably affected by cost sharing” (p. 352). The RAND HIE researchers concluded that for most of the American population “free medical care in an ‘unmanaged’ fee-for-service system is not worth its costs. The burden on the poor and on persons (particularly the poor) with chronic conditions is a separate issue and should be dealt with as such” (p. 357).
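For a sense of how far apart those figures are, here is a small sketch that converts the claimed 38 percent higher mortality rate under cost sharing into the equivalent predicted mortality reduction from free care and sets it beside the roughly 10 percent figure from Free for All?. This is simple arithmetic on the numbers quoted above, not a re-derivation of the RAND predictions.

```python
# Pollack's claim: low-income patients with cost sharing would have 38% higher
# mortality than those with free care
claimed_relative_mortality = 1.38

# Expressed the other way around: the mortality reduction free care would
# have to deliver to produce that gap
implied_reduction_from_free_care = 1 - 1 / claimed_relative_mortality
print(f"Implied mortality reduction from free care: {implied_reduction_from_free_care:.1%}")
# About 27.5%, versus the roughly 10% predicted reduction reported in Free for All? (p. 339)
```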

---Linda Gorman is a Senior Fellow and Director of the Health Care Policy Institute at the Independence Institute, a state-based free market think tank in Golden, Colorado.

A former academic economist, she has written extensively about the problems created by government interference in health care decisions and the promise of consumer directed health care. Her articles on minimum wages, education, and discrimination appear in the Concise Encyclopedia of Economics.

A frequent contributor to John Goodman's Health Policy blog, she is also a member of the Galen Institute's Health Policy Consensus Group and was appointed to the Colorado Blue Ribbon Commission for Healthcare Reform where she co-authored one of the Commission's minority reports. She holds a Ph.D. in economics.

 
