AI-screened eye pics diagnose childhood autism with 100% accuracy

  • eggymachus · 20 points · 1 year ago

    What they’re saying, as far as I can tell, is that after training the model on 85% of the dataset, it predicted whether a participant had an ASD diagnosis (as a binary choice) correctly for 100% of the remaining 15%. I don’t think this is unheard of, but I’ll agree that a replication would be nice to rule out systematic errors. If the images from the ASD and TD sets were taken with different cameras, for instance, that could introduce an invisible difference between the datasets that an AI could converge on. I would expect them to control for stuff like that, though.
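
    For illustration, here’s a minimal sketch of the 85/15 holdout evaluation being described, assuming an sklearn-style workflow; the features, labels, and model are placeholders, not the study’s actual pipeline:

    ```python
    # Sketch of an 85/15 holdout split (placeholder data and model,
    # not the paper's actual pipeline).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))    # stand-in for image features
    y = rng.integers(0, 2, size=200)  # stand-in for ASD/TD labels

    # Hold out 15% of participants, stratified so both labels appear in the test set
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.15, stratify=y, random_state=0
    )

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
    ```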

    • dragontamer@lemmy.world · 13 points · edited · 1 year ago

      > I would expect them to control for stuff like that, though.

      What was the problem with that male-vs-female deep-learning test a few years ago?

      Wasn’t it that all the male photos were taken earlier in the day, so the sun in the background came from one angle, while all the female photos were taken later, with the sun at a different angle? It turned out the deep-learning model had effectively been trained on the window in the background.

      100% accuracy almost certainly means an effect like that is in play. No classifier gets a perfect score; any credible test should come out at least a little bit shoddy.
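
      One cheap probe for that kind of leakage (my suggestion, not something the study reports) is to blur away the anatomy entirely and check whether the label is still predictable from whatever survives, such as lighting and camera artifacts:

      ```python
      # Leakage probe sketch: heavily blur the images so the clinically
      # relevant detail is gone, then see if a simple classifier can still
      # predict the label. Near-chance accuracy is reassuring; high accuracy
      # hints at background/camera effects like the sun-angle story above.
      import numpy as np
      from scipy.ndimage import gaussian_filter
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      def leakage_probe(images: np.ndarray, labels: np.ndarray) -> float:
          """images: (n, h, w) grayscale array; labels: (n,) binary array."""
          blurred = np.stack([gaussian_filter(img, sigma=10) for img in images])
          X = blurred.reshape(len(images), -1)
          return cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
      ```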

      • eggymachus · 2 points · 1 year ago

        Definitely possible, but we’ll have to wait for some sort of replication (or lack thereof) to see, I guess.

    • BreadstickNinja@lemmy.world · 8 up, 1 down · 1 year ago

      Yeah, exactly. They’re reporting findings. Saying that it worked in 100% of the cases they tested is not a claim that it will work in 100% of all cases ever. If they had 30 test images and it classified all 30 correctly, then that’s 100%.

      The article headline is what’s misleading. First, it’s poorly written: “AI-screened eye PICS DIAGNOSE childhood autism.” The pics do not diagnose the autism, so the subject of the verb is wrong. But even rephrased, stating that the AI system itself diagnoses autism is a stretch. What the AI system did was correctly identify, from eye pictures, individuals who had previously been diagnosed with autism.

      This is an interesting but limited finding that suggests AI systems may be capable of serving as one diagnostic tool for autism, based on one experiment in which they performed well. Anything more than that is overstating the findings of the study.

    • Bgugi@lemmy.world · 3 points · 1 year ago

      They do talk about how the images were collected: the two populations of images were collected separately. That’s probably not the whole difference, but it might have been enough to push the result up to 100%.

      • Trainguyrom@reddthat.com · 3 points · 1 year ago

        You mean like the infamous skin-cancer detection model that, it turned out, was simply checking whether there was a ruler in the photo? In all of the data fed into it, the skin-cancer photos had rulers and the control images did not.

    • dirtdigger@lemmy.world · 5 up, 2 down · 1 year ago

      You need to report two numbers for a classifier, though. I can create a classifier that catches all cases of autism just by saying that everybody has autism. Alongside that sensitivity, you also need the false positive rate.
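
      As a toy illustration (made-up numbers, not from the study), the “everybody has autism” classifier hits 100% sensitivity while the false positive rate gives it away:

      ```python
      # Toy demo: predicting "ASD" for everyone catches every true case
      # (100% sensitivity) but also flags every control (100% false positive rate).
      import numpy as np
      from sklearn.metrics import confusion_matrix

      y_true = np.array([1] * 15 + [0] * 15)  # 15 ASD and 15 TD participants (made up)
      y_pred = np.ones_like(y_true)           # the "everybody has autism" classifier

      tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
      print(f"sensitivity = {tp / (tp + fn):.0%}")          # -> 100%
      print(f"false positive rate = {fp / (fp + tn):.0%}")  # -> 100%
      ```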