• Gsus4@mander.xyz
    4 days ago

    Couldn’t you have researchers who specialize in finding “bugs” in published papers (yes, researchers already do this to each other), like we have QA testers or bounties for finding exploits? Is this too aggressive an approach for science? Should work for hard sciences, though.

    • deathbird@mander.xyz
      4 days ago

This is a great idea, I think, but part of the problem is that science and programming are just very different social environments, so the expectations, norms, and demands on each differ accordingly.

Drilling down a little bit more: most research is done by people with PhDs or other advanced degrees (or people pursuing them) in academic jobs, and one of the conditions of attaining or maintaining such a job is publishing. And these are the same people doing the peer reviews.

I think what this creates, even aside from the overwhelming volume and complexity of the work, is a certain amount of grace amongst academics. That is, I think a fair number of peer reviewers are not only failing to rigorously grapple with the material they review, but, because of the small social milieux and shared incentives, are actively incentivized not to be very rigorous in many cases.

Not saying peer review is without value, but how harshly would you want to challenge or critique the work of someone you may work alongside, or under, in the future?

    • Pup Biru@aussie.zone
      4 days ago

i heard about a woman a while back who did exactly that: she read papers across disciplines looking for doctored results etc… she’d found something like 10 papers with fabricated data