A widely reported finding that the risk of divorce increases when wives fall ill — but not when husbands do — is invalid: a single line of mistaken code negates the original conclusions, which were published in the March issue of the Journal of Health and Social Behavior.

The paper, “In Sickness and in Health? Physical Illness as a Risk Factor for Marital Dissolution in Later Life,” garnered coverage in many news outlets, including The Washington Post, New York magazine’s The Science of Us blog, The Huffington Post, and the UK’s Daily Mail.

But an error in a single line of the coding that analyzed the data means the conclusions in the paper — and all the news stories about those conclusions — are “more nuanced,” according to first author Amelia Karraker, an assistant professor at Iowa State University.

  • ArbitraryValue · 3 months ago

    Note that the retraction happened in 2015. I had heard of the original study but not the retraction. (I expect that I would have heard of neither the study nor the retraction if the study wasn’t about a politically charged topic).

    People who left the study were actually miscoded as getting divorced.

    At least it was a stupid mistake rather than poor study design.

    What we find in the corrected analysis is we still see evidence that when wives become sick marriages are at an elevated risk of divorce … in a very specific case, which is in the onset of heart problems. So basically it’s a more nuanced finding. The finding is not quite as strong.

    This on the other hand… I haven’t read the corrected study but I suspect this does not account for the fact that four different classes of illness were looked at, both because that’s a common mistake and because it makes no sense to me that men would divorce women with heart disease but not with cancer, stroke, or lung disease.

    (The probability that at least one of four independent tests would show a result significant at the 95% level purely by chance is 1 - 0.95^4 = 0.18549375.)

    Edit: Now I’m scared that I didn’t do the math correctly. That tends to happen when I try to be pedantic. Also there were eight categories, not four. (They also looked at women divorcing men.)
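    The family-wise error arithmetic in the two comments above can be checked with a short script (assuming independent tests at the conventional alpha = 0.05; the function name is mine, not from the thread):

    ```python
    # Probability that at least one of k independent tests at level alpha
    # shows a "significant" result purely by chance (family-wise error rate).
    def family_wise_error(k, alpha=0.05):
        return 1 - (1 - alpha) ** k

    print(family_wise_error(4))  # 0.18549375, matching the figure above
    print(family_wise_error(8))  # roughly 0.34 for the eight categories
    ```

    So with eight categories instead of four, the chance of at least one spurious "significant" result rises to about one in three.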

    • originalfrozenbanana@lemm.ee · 3 months ago

      In theory, for multiple comparisons the tests “share” the significance level, so a result adjusted for four comparisons is evaluated against a P-value threshold of 0.05/4 = 0.0125. This correction (called the Bonferroni correction) is the most restrictive method for controlling the family-wise error rate. Most researchers would adjust P using a less restrictive method, which is not necessarily wrong to do. https://en.m.wikipedia.org/wiki/Multiple_comparisons_problem

      Otherwise I agree with your logic
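      The Bonferroni adjustment described above is just a division of the threshold by the number of comparisons; a minimal sketch (function name is illustrative, not a library API):

      ```python
      # Bonferroni correction: shrink the per-test significance threshold so
      # the family-wise error rate stays at or below the original alpha.
      def bonferroni_threshold(alpha, n_comparisons):
          return alpha / n_comparisons

      print(bonferroni_threshold(0.05, 4))  # 0.0125, as in the comment above
      print(bonferroni_threshold(0.05, 8))  # 0.00625 for eight categories
      ```

      Less restrictive alternatives (Holm, Benjamini-Hochberg) adjust the thresholds test by test rather than applying one uniform cut.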

    • otp · 3 months ago

      At least it was a stupid mistake rather than poor study design.

      And one that kind of makes sense how it’d happen, too.

      “We don’t have any more data on these couples after a few sessions. What does that mean?”

      “Oh, well we don’t follow up with divorced couples, so we wouldn’t have more data after the divorce date. Tag them as divorced.”

      Disclaimer: Hypothetical scenario I’ve imagined to explain the error. Not based in reality.