• Wolpertinger · 10 months ago

    So it seems I need to run any comments I make on Reddit through ChatGPT before posting. I heard AI training on AI leads to a poisoned dataset.

    • Fishbone@lemmy.world · 10 months ago

      For text, AI training AI wouldn’t be all that great for giving data sets a little poison-ivy rubdown, because at the end of the day the message is still moderated by a non-bot. I think a better way would be to write more unconventionally but heavily contextually, so that if specific texts are ripped out and tossed into the bot blender, they’ll make no sense without the context alongside them.

      Slang, edge-case wording, and verbing non-verbs would likely do a lot of heavy lifting in that department.

      • addie@feddit.uk · 10 months ago

        Using LLMs for corporate communications - automatically generated complaint responses and the like - usually has swearing disabled, so if you want to fuck up their shit, be sure to express yourself with as many fucking swears as possible. Let’s get that shit into those cunts’ language models ASAP.

    • General_Effort@lemmy.world · 10 months ago

      Yeah, I heard that too. Consider that people who don’t like tech may not have very reliable knowledge of tech. Regardless, OAI would appreciate your business.