and as always, the culprit is ChatGPT. Stack Overflow Inc. won’t let their mods take down AI-generated content

  • Hyperz@beehaw.org · 27 points · 1 year ago

    It seems to me like StackOverflow is really shooting themselves in the foot by allowing AI-generated answers. Even if we assume that all AI-generated answers are “correct”, doesn’t that completely defeat the purpose of the site? If I were seeking an answer to some Python-related problem, why wouldn’t I just go straight to ChatGPT or a similar language model? That way I also wouldn’t have to deal with some of the other issues that plague StackOverflow, such as “this question is a duplicate of <insert unrelated question> - closed!”.

    • OrangeSlice@lemmy.ml · 12 points · 1 year ago

      I think what sites have been running into is that it’s difficult to tell what is and is not AI-generated, so enforcing a ban is difficult. Some would say that it’s better to have an AI-generated response out in the open, where it can be verified and prioritized appropriately through user feedback. If there’s a human-written response that’s higher-quality, then that should win anyway, right? (Idk tbh)

      • Hyperz@beehaw.org · 6 points · 1 year ago

        Yeah, that’s a good point. I have no idea how you’d go about solving that problem. Right now you can still sometimes sort of tell when something was AI-generated. But if we extrapolate the past few years of advances in LLMs, say, 10 years into the future… there will be no telling what’s AI and what’s not. Where does that leave sites like StackOverflow, or indeed many other types of sites?

        This then also makes me wonder how these models are going to be trained in the future. What happens when, for example, half of the training data is the output of previous models? How do you possibly steer/align future models and prevent compounding errors and bias? Strange times ahead.
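The compounding-error worry can be sketched with a toy experiment (everything here is made up purely for illustration, not how real LLM training works): treat a “model” as nothing more than the empirical word distribution of its training data, then train each generation only on the previous generation’s output. Rare words can disappear but can never reappear, so the distribution’s tail erodes over generations:

```python
import random
from collections import Counter

def train(corpus):
    """Toy 'model': just the empirical word distribution of its training data."""
    counts = Counter(corpus)
    total = sum(counts.values())
    words = list(counts)
    weights = [counts[w] / total for w in words]
    return words, weights

def generate(model, n):
    """Sample n words from the model's distribution."""
    words, weights = model
    return random.choices(words, weights=weights, k=n)

random.seed(42)
# Hypothetical "human" corpus: a few common words plus a long tail of rare ones
corpus = ["the"] * 50 + ["code"] * 30 + [f"rare{i}" for i in range(20)]

data = corpus
for generation in range(10):
    # Each generation trains only on the previous generation's output
    data = generate(train(data), len(corpus))

print(len(set(corpus)), "distinct words in the original corpus")
print(len(set(data)), "distinct words after 10 model generations")
```

Since each generation can only emit words the previous one produced, the vocabulary can only shrink; the same one-way loss of diversity is the intuition behind the compounding-error concern.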

        • cavemeat@beehaw.org · 8 points · 1 year ago

          My guess is the internet is gonna go through a trial by fire with AI: some stuff is gonna be obscenely incorrect or difficult to detect before it all straightens out.

          • DM_Gold@beehaw.org · 5 points · 1 year ago

            At the end of the day, AI should be classified as what it is: a tool. We can embrace this tool and use it to our advantage, or we can fight it all the way, even as more folks start to use it.

            • Pigeon@beehaw.org · 9 points · edited · 1 year ago

              Its threat to jobs wouldn’t be anywhere near so much of an issue if people just… had medical care and food and housing regardless of employment status.

              As is, it’s primarily a tool for the ultra wealthy to boost productivity while cutting costs (i.e., human workers). The resulting profit and power will just further line the pockets of the 1%.

              I’d have no issue with AI… If and only if we fixed the deeper societal problems first. As is, it’s salt in the wounds and can’t just be ignored.

              • sazey@kbin.social · 2 points · 1 year ago

                Almost every innovation in human history has been used by the elite to advance themselves first. That just happens to be the nature of power and wealth: it affords you opportunities that wouldn’t be available to plebs.

                We would still be sitting around waiting for the wheel to become commonplace if the criterion for adoption were fixing all societal problems before letting it spread through society.

        • OrangeSlice@lemmy.ml · 8 points · 1 year ago

          “This then also makes me wonder how these models are going to be trained in the future. What happens when for example half of the training data is the output from previous models? How do you possibly steer/align future models and prevent compounding errors and bias? Strange times ahead.”

          Between this and the “deep fake” tech I’m kinda hoping for a light Butlerian jihad that gets everyone to log tf off and exist in the real world, but that’s kind of a hot take

          • Hyperz@beehaw.org · 5 points · 1 year ago

            But then they’d have to break up with their AI girlfriends/boyfriends 🤔.

            Spoiler: I wish I was joking.

      • salarua@sopuli.xyz (OP) · 4 points · 1 year ago

        There are some pretty good AI-generated text detectors out there, like GPTZero. I wouldn’t be surprised if mods used them to screen comments.
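For what it’s worth, detectors like GPTZero reportedly score text on how statistically predictable it looks to a language model (its perplexity). Here’s a heavily simplified toy sketch of that idea, using a smoothed unigram model instead of a real LM; the reference corpus and example sentences are made up for illustration, and this is nowhere near an actual detector:

```python
import math
from collections import Counter

def avg_logprob(text, counts, total):
    """Average per-word log-probability under a smoothed unigram 'language model'.

    Higher (less negative) scores mean more statistically predictable text,
    which is roughly the signal perplexity-based detectors look at.
    """
    vocab = len(counts)
    score = 0.0
    words = text.lower().split()
    for w in words:
        # add-one smoothing so unseen words get a small nonzero probability
        p = (counts.get(w, 0) + 1) / (total + vocab + 1)
        score += math.log(p)
    return score / max(len(words), 1)

# Hypothetical reference corpus standing in for the detector's language model
reference = "the quick brown fox jumps over the lazy dog the fox is quick".split()
counts = Counter(reference)
total = len(reference)

predictable = "the quick fox is quick"
unusual = "zygomorphic quasars percolate"
print(avg_logprob(predictable, counts, total))  # closer to zero = more predictable
print(avg_logprob(unusual, counts, total))
```

The “predictable” sentence scores higher because all its words appear in the reference corpus. The weakness is also visible here: plenty of human writing is statistically predictable too, which is one reason such detectors produce false positives.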

        • OrangeSlice@lemmy.ml · 12 points · 1 year ago

          My understanding was that they’re very unreliable in their current state, but I’m definitely not up to speed.

          • Pigeon@beehaw.org · 12 points · 1 year ago

            I’ve been seeing so many stories about student work getting falsely flagged as AI-generated. It really feels bad to be accused of that, I think. So I can see why, for now, it would be better to avoid trying to determine one way or the other whether something is AI-generated.

            All that matters for an answer is whether it’s right, partly right, completely dead wrong, and so on, right? And that can still be judged regardless of whether it’s AI-generated.

            AI absolutely shouldn’t be outright invited either, though.