• FierySpectre@lemmy.world

      Using AI for anomaly detection is nothing new, though. I haven’t read the article about this specific ‘discovery’, but this kind of work usually uses a completely different technique than the AI that comes to mind when people think of AI these days.

      • Johanno@feddit.org

        That’s why I hate the term AI. Say it’s a predictive LLM or a pattern recognition model.

        • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org

          Say it’s a predictive LLM

          According to the paper cited by the article OP posted, there is no LLM in the model. If I read it correctly, the paper says it uses PyTorch’s implementation of ResNet18, a deep convolutional neural network designed for images, not text. So that term would be inaccurate.
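
          For the curious, here’s a minimal sketch of what that means in code. This assumes a current torchvision API (the paper used a much older PyTorch); the point is just that ResNet18 consumes image tensors, not token sequences:

          ```python
          import torch
          from torchvision.models import resnet18

          # ResNet18 is an image classifier: it takes a batch of images
          # shaped (batch, channels, height, width), not text tokens.
          model = resnet18(weights=None)  # randomly initialized, 1000 ImageNet classes
          model.eval()

          dummy_images = torch.randn(1, 3, 224, 224)
          with torch.no_grad():
              logits = model(dummy_images)
          print(logits.shape)  # torch.Size([1, 1000])
          ```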

          or a pattern recognition model.

          Much better term IMO, especially since it uses a convolutional network. But the article is a news publication, not a serious academic paper: the author knows the term “AI” gets clicks and positive impressions (which is what their job actually is), and if they’d used a more precise term, we probably wouldn’t be here talking about it.

          • FierySpectre@lemmy.world

            Well, this is very much an application of AI… Having more examples of recent AI development that aren’t ‘ChatGPT’ (transformer-based) is probably a good thing.

            • wewbull@feddit.uk

              OP is not saying this isn’t using the techniques associated with the term AI. They’re saying that the term AI is misleading, broad, and generally not desirable in a technical publication.

              • FierySpectre@lemmy.world

                OP is not saying this isn’t using the techniques associated with the term AI.

                Correct, but that’s also not what I was replying about. I said that using AI in the headline here is very much correct. It is, after all, a paper using AI to detect something.

        • 0laura@lemmy.world

          It’s a good term; it refers to lots of things. There are many terms like that.

            • 0laura@lemmy.world

              The word “program” refers to even more things, and no one says it’s a bad word.

            • GetOffMyLan@programming.dev

              It’s literally the name of the field of study. Chances are this uses the same thing as LLMs: a neural network, one of the oldest AI techniques around.

              It refers to anything that simulates intelligence. They are using the correct word. People just misunderstand it.

              • wewbull@feddit.uk

                If people consistently misunderstand it, it’s a bad term for communicating the concept.

                • GetOffMyLan@programming.dev

                  It’s the correct term though.

                  It’s like when people get confused about what a scientific theory is. We still call it the theory of gravity.

          • Ephera@lemmy.ml

            The problem is that it refers to so many (and constantly changing) things that it doesn’t refer to anything specific in the end. You can replace the word “AI” in any sentence with the word “magic” and it basically says the same thing…

      • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org

        I haven’t read the article about this specific ‘discovery’, but this kind of work usually uses a completely different technique than the AI that comes to mind when people think of AI these days.

        From the conclusion of the actual paper:

        Deep learning models that use full-field mammograms yield substantially improved risk discrimination compared with the Tyrer-Cuzick (version 8) model.

        If I read this paper correctly, the novelty is the model itself: a deep learning model that works on mammogram images plus traditional risk factors.

        • FierySpectre@lemmy.world

          For the image-only DL model, we implemented a deep convolutional neural network (ResNet18 [13]) with PyTorch (version 0.31; pytorch.org). Given a 1664 × 2048 pixel view of a breast, the DL model was trained to predict whether or not that breast would develop breast cancer within 5 years.

          The only “innovation” here is feeding full-view mammograms to a ResNet18 (a 2016 model). The traditional risk-factor regression is nothing special (barely machine learning). They don’t go in depth about how they combine the two for the hybrid model, so it’s probably safe to assume it is something simple (merely combining the results, so nothing special in the training step). Edit: I stand corrected; a commenter below pointed out the appendix, and the regression does in fact come into play in the training step.
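
          As a rough sketch of what “feeding full-view mammograms to a ResNet18” could look like (the single-channel stem, two-class head, and hyperparameters here are my assumptions, not details from the paper):

          ```python
          import torch
          import torch.nn as nn
          from torchvision.models import resnet18

          # Image-only model sketch: ResNet18 repurposed as a binary
          # "will this breast develop cancer within 5 years?" classifier.
          model = resnet18(weights=None)
          # Mammograms are grayscale, so swap the stock 3-channel stem (assumption).
          model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
          model.fc = nn.Linear(model.fc.in_features, 2)  # yes/no within 5 years

          criterion = nn.CrossEntropyLoss()
          optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

          # One training step on a dummy batch of 1664x2048 views, as in the paper.
          images = torch.randn(2, 1, 1664, 2048)
          labels = torch.tensor([0, 1])

          optimizer.zero_grad()
          loss = criterion(model(images), labels)
          loss.backward()
          optimizer.step()
          ```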

          As a different commenter mentioned, the data collection is largely the interesting part here.

          I’ll admit I was wrong about my first guess as to the network topology, though. I was thinking they used something like autoencoders (but those are mostly used in cases where examples of bad samples are rare).
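
          For reference, the autoencoder approach I was guessing at looks roughly like this generic sketch (nothing from this paper): train on normal samples only, then flag anything that reconstructs poorly.

          ```python
          import torch
          import torch.nn as nn

          # Generic autoencoder anomaly detector for flattened inputs of size 784.
          class AutoEncoder(nn.Module):
              def __init__(self, dim=784, latent=32):
                  super().__init__()
                  self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                               nn.Linear(128, latent))
                  self.decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                               nn.Linear(128, dim))

              def forward(self, x):
                  return self.decoder(self.encoder(x))

          model = AutoEncoder()
          optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

          normal_data = torch.randn(256, 784)  # stand-in for "healthy" samples only
          for _ in range(100):
              optimizer.zero_grad()
              loss = nn.functional.mse_loss(model(normal_data), normal_data)
              loss.backward()
              optimizer.step()

          # At inference time, high reconstruction error suggests an anomaly.
          sample = torch.randn(1, 784)
          error = nn.functional.mse_loss(model(sample), sample).item()
          threshold = 1.0  # arbitrary; normally calibrated on held-out normal data
          print("anomalous" if error > threshold else "normal")
          ```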

          • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org

            They don’t go in depth about how they combine the two for the hybrid model

            Actually they did; it’s in Appendix E (PDF warning). A GitHub repo would have been nice, but I think there’s enough info to replicate this if we had the data.
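
            If anyone wants the gist without opening the PDF: from my reading it amounts to concatenating the CNN’s image features with the traditional risk factors before a shared head. A rough sketch of that idea (the feature sizes and head are my guesses; Appendix E has the real wiring):

            ```python
            import torch
            import torch.nn as nn
            from torchvision.models import resnet18

            # Hybrid model sketch: ResNet18 image features concatenated with
            # traditional risk factors (e.g. Tyrer-Cuzick inputs), then a shared
            # classification head trained end to end. Details are assumptions.
            class HybridModel(nn.Module):
                def __init__(self, n_risk_factors=10):
                    super().__init__()
                    backbone = resnet18(weights=None)
                    backbone.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)
                    backbone.fc = nn.Identity()  # expose the 512-dim feature vector
                    self.backbone = backbone
                    self.head = nn.Sequential(
                        nn.Linear(512 + n_risk_factors, 128), nn.ReLU(),
                        nn.Linear(128, 2),  # 5-year risk: yes/no
                    )

                def forward(self, image, risk_factors):
                    features = self.backbone(image)
                    return self.head(torch.cat([features, risk_factors], dim=1))

            model = HybridModel()
            logits = model(torch.randn(2, 1, 256, 256), torch.randn(2, 10))
            print(logits.shape)  # torch.Size([2, 2])
            ```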

            Yeah it’s not the most interesting paper in the world. But it’s still a cool use IMO even if it might not be novel enough to deserve a news article.

          • errer@lemmy.world

            ResNet18 is ancient and tiny… I don’t understand why they didn’t go with a deeper network. ResNet50 is usually the smallest I’ll use.

        • llothar@lemmy.ml

          I skimmed the paper. As you said, they made an ML model that takes images and traditional risk factors (TCv8).

          I would love to see a comparison against risk factors + human image evaluation.

          Nevertheless, this is the AI that will really help humanity.

    • SomeGuy69@lemmy.world

      It’s really difficult to clean that data. In another case, the markings were kept on the training data, and the images of patients who had cancer carried a doctor’s signature, so the AI could always tell the cancer images from the non-cancer ones just by the presence or absence of a signature. However, these people are also getting smarter about picking their training data, so it’s not impossible for this to work properly at some point.

    • earmuff@lemmy.dbzer0.com

      That’s the thing about machine learning: it sees nothing but whatever correlates. That’s why data science is such a complex topic; you don’t spot errors like this easily. Testing a model is still very underrated, and usually there is no time to test one properly.
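
      A toy sketch of that failure mode (entirely made-up data, just for illustration): a shortcut feature that perfectly predicts the label in training, like the signature in the comment above, makes the model look great until you test on held-out data where the shortcut is absent.

      ```python
      import torch
      import torch.nn as nn

      torch.manual_seed(0)

      # Column 0 is a "signature" shortcut: it equals the label in the
      # training set but is absent (all zeros) in the held-out set.
      # The genuine (noisy) signal lives in column 1.
      def make_data(n, leaky):
          x = torch.randn(n, 8)
          y = (x[:, 1] + 0.5 * torch.randn(n) > 0).long()
          x[:, 0] = y.float() if leaky else 0.0
          return x, y

      train_x, train_y = make_data(1000, leaky=True)
      test_x, test_y = make_data(1000, leaky=False)

      model = nn.Linear(8, 2)
      optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
      for _ in range(200):
          optimizer.zero_grad()
          loss = nn.functional.cross_entropy(model(train_x), train_y)
          loss.backward()
          optimizer.step()

      def accuracy(x, y):
          return (model(x).argmax(dim=1) == y).float().mean().item()

      print(f"train accuracy: {accuracy(train_x, train_y):.2f}")  # near-perfect
      print(f"test accuracy:  {accuracy(test_x, test_y):.2f}")    # noticeably worse
      ```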