most of the time you’ll be talking to a bot there without even realizing it. they’re gonna feed you products and ads woven into conversations, and the AI can be tuned so its output reflects corporate interests. advertisers are gonna be able to buy access and run campaigns: based on their input, the AI can generate thousands of comments and posts, all pushing their corporate agenda.

for example, you can set it to hate a public figure and force negative commentary into conversations all over the site. or you can set it to praise and recommend your latest product. like when a pharma company has a new pill out, they’ll be able to target self-help subs and flood them with fake anecdotes and user testimonials claiming the new pill solves all your problems and you should check it out.

the only real humans you’ll find there are the shills that run the place, and the poor suckers that fall for the scam.

it’s gonna be a shithole.

  • PabloDiscobar@kbin.social · 1 year ago

    > There was one obvious case where bots in /r/politics accidentally targeted an AutoModerator thread instead of a candidate’s promotion thread and filled it with praise for that candidate.

    Any source for this? I’d like to have a look.

    • Andreas@feddit.dk · 1 year ago

      Nope, sorry. Just a memory of a Reddit thread with very out-of-context comments. Ironically, while trying to find documentation of the thread, DuckDuckGo turned up a lot of research papers analyzing bot content on Reddit dating back to 2015, so there’s at least proof that botting on Reddit goes way back.