I’m sure this is a common topic, but things are moving pretty fast these days.

With bots looking more human than ever, I’m wondering what’s going to happen once everyone starts using them to spam the platform. Lemmy, with its simple username/text layout, seems to offer the perfect ground for bots: verifying that someone is real would mean scrolling through all of their comments and reading them carefully one by one.

  • Ocelot@lemmies.world · +12 · 1 year ago

    The architecture of Lemmy means that the API is completely open. Bots don’t even need to scrape the website; they can do everything through the API, so it doesn’t matter how simple the layout is. Lemmy is open source as well, and the API is fully documented and available to the public.

    Lemmy devs are going to need to do some additional work to differentiate bot and human accounts. In the meantime it’s going to be on the admins to identify and remove/ban these accounts.
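
To give a sense of how little effort that takes, here is a minimal sketch. The JSON below is a truncated, hypothetical example of what Lemmy’s public `GET /api/v3/post/list` endpoint returns; the `bot_account` flag does exist in Lemmy’s API, but it is self-reported, which is exactly why admins still have work to do:

```python
import json

# Truncated, hypothetical sample of a Lemmy post/list response:
# plain JSON, no authentication needed for read-only endpoints.
sample = json.loads("""
{"posts": [
  {"post": {"id": 1, "name": "Example post"},
   "creator": {"name": "alice", "bot_account": false}},
  {"post": {"id": 2, "name": "Buy my product"},
   "creator": {"name": "promobot", "bot_account": true}}
]}
""")

# Lemmy exposes a self-reported bot_account flag on every account,
# so a filter like this only catches bots that declare themselves.
declared_bots = [p["creator"]["name"]
                 for p in sample["posts"]
                 if p["creator"]["bot_account"]]
print(declared_bots)
```

Anything that doesn’t set the flag looks, to the API, exactly like a person.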

    • sugar_in_your_tea · +1 · 1 year ago

      > API is fully documented

      Yes, but last I checked, it was documented very poorly. It’s just bad enough that I’m not super motivated to build helpful tools, but not bad enough to discourage trolls.

  • Muddybulldog@mylemmy.win · +12 / -1 · 1 year ago

    Somewhat of a loaded question, but if we need to scroll through their comment history meticulously to separate real from bot, does it really matter at that point?

    Spam is spam, and we’re all in agreement that we don’t want bots junking up the communities with low-effort content. However, if they reach the point where it takes real effort to ferret them out, they must be successfully driving some sort of engagement.

    I’m not positive that’s a bad thing.

    • usrtrv@lemmy.ml · +5 · 1 year ago

      I think we’ll be in bad shape when you can’t trust any opinions about products, media, politics, etc. Sure, shills currently exist, so everything you read already needs skepticism. But at some point bots will be able to flood very high-quality posts, and these will of course be lies to push a product or ideology. The truth will be noise.

      I do think this is inevitable, and the only real guard would be to move back to smaller social circles.

      • Muddybulldog@mylemmy.win · +3 · 1 year ago

        I’m of the mind that the truth already is noise and has been for a long, long time. AI isn’t introducing anything new; it’s just enabling faster creation of agenda-driven content. Most people already can’t identify the AI-generated content that’s been spewing forth in years past. Most people aren’t looking for quality content, they’re looking for bias-affirming content. The overall quality is irrelevant.

    • HelloHotel@lemm.ee · +2 · 1 year ago

      Things like ChatGPT are not designed to think using object relations like a human. They’re designed to respond the way a human would (a speech cortex with no brain): made to figure out what a human would respond with rather than give a well-thought-out answer.

      Robert Miles can explain it better than I ever could.

    • zer0@thelemmy.club (OP) · +1 / -2 · 1 year ago

      Could something like this be implemented as an NSFW-style filter you can turn on and off?

        • zer0@thelemmy.club (OP) · +2 / -1 · 1 year ago

          Ahaha, the lumber cartel thing is pretty funny. Anyway, let me ask you, shagie: from Usenet, what do you think went wrong that led us to the centralized services we have now? How do we not make the same mistake again?

  • RoundSparrow@lemmy.ml · +2 · edited · 1 year ago

    One of the cool things to me about Lemmy is that it’s like email, where people have their own custom domain names. Personally, I think people using their real identity should come back into fashion, and the post-9/11/2001 USA culture of terrorism fearism shouldn’t be the dominating media emotion in 2023.

    “Real humans, not bots” could really be a selling point for the ongoing social media reboot (Twitter since September 2022, Reddit since May 2023), as opposed to the “throwaway” culture of Reddit.

    ChatGPT (GPT-4) is incredibly good at convincing human beings it gives factual information when it is really just great at sounding good while being factually wrong. It’s amazing to me how many people have embraced and even shown deep love toward the machines. It’s pretty weird to me that a computer fed facts spits out anti-facts. Back in March I was doing a lot of research on ChatGPT’s fabrication of facts; it made wild claims like Bill Gates traveling to New Mexico when BASIC was first created. It would even cite pages from Bill Gates’s book that did not contain the quotes it provided. https://www.AuthoredByComputer.com/ has examples I documented.

    EDIT: another example: facts it would make up about simple computer chips described in a book, claiming they had more RAM than they did, etc.: https://www.AuthoredByComputer.com/chatgpt4/chatgpt4-ibm-ps2-uart-2023-03-16a

    • Lmaydev@programming.dev · +4 / -1 · 1 year ago

      It’s because it isn’t fed facts, really. Words are converted into numbers, and it learns the relationships between them.

      It has absolutely no understanding of facts, just how words are used with other words.

      It’s not like it’s looking up things in a database. It’s taking the provided words and applying a mathematical formula to create new words.
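
A toy sketch of that point (hypothetical twelve-word “corpus”; a real model uses vectors and attention rather than raw counts, but the principle is the same): the model only learns which words tend to follow which, so a statistically likely continuation and a true one are unrelated concerns.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus: there is no notion
# of truth here, only co-occurrence statistics.
corpus = "the chip has ram the chip has ram the chip has rom".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

# The "most likely next word" after "has" is whatever appeared most
# often, regardless of what any actual chip contains.
print(following["has"].most_common(1)[0][0])  # -> ram
```

Scale that up to the whole internet and you get fluent text whose correctness is incidental.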

      • RoundSparrow@lemmy.ml · +1 · 1 year ago

        > It’s because it isn’t fed facts really.

        That’s an interesting theory of why it works that way. Personally, I think rights usage, as in copyright, is a huge problem for OpenAI and Microsoft (Bing)… and they are trying to avoid paying money for the training material they use. And if they accurately quoted source material, they would run into expensive costs they are trying to avoid.


  • ezmack@lemmy.ml · +1 · 1 year ago

    The horde aspect might make it easier. The ones on Twitter, at least, you can tell are just running the same script through a thesaurus, basically. Twenty people leaving the same comment is a little more obvious than just one.
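
That kind of horde is cheap to catch. A sketch of the naive check (made-up account names and comment text), which works exactly until the spammers add variation:

```python
# Flag any comment text posted verbatim by several distinct accounts.
comments = [
    ("acct1", "Great product, changed my life!"),
    ("acct2", "Great product, changed my life!"),
    ("acct3", "Great product, changed my life!"),
    ("acct4", "I prefer the older model."),
]

by_text = {}
for account, text in comments:
    by_text.setdefault(text, set()).add(account)

# Three or more distinct accounts posting identical text is a red flag.
suspicious = [t for t, accounts in by_text.items() if len(accounts) >= 3]
print(suspicious)
```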

    • usernotfound@lemmy.ml · +1 · 1 year ago

      That’s why they’re talking about the next generation.

      With AI you can easily generate 100 different ways to say the same thing. And it’s hard to distinguish a bot that’s parroting someone else from a person who’s repeating something they heard.