I’m curious to get all of your thoughts on this. It’s no secret that AI has been growing exponentially over the last year; it feels like new models are being released almost every other day. Many of these models need a tremendous amount of data to train on, and it’s no secret that Reddit sells its users’ interactions to the highest bidder. This was part of the reason for the API limit changes that drove many of us to the fediverse in the first place.

My question is: how does everyone feel knowing that multi-billion-dollar companies are scraping this instance and others, creating extra load on the servers for nothing more than their own profit?

What can be done to continue providing a free, open network to users but prevent those who are only looking to profit from the data?

edit: fixed title typo

  • weker01
    2 months ago

    I don’t care tbh. I am writing everything here as if everyone at any time could read it.

  • MachineFab812@discuss.tchncs.de
    2 months ago

    Scrape*, for your title.

Meanwhile, preventing unpaid scraping was a big part of Reddit’s rationale for their enshittification, i.e., charging for API access.

I would rather train an AI indirectly for free than ask random instances to run interference, which in practice works out to paywalling and selling user content.

    By asking Lemmy Instances to “prevent AI from seeing my content”, all you are really asking them to do is to slap a price-tag on it, and hire lawyers to pursue companies/users that don’t pay. Not pay you or me, but them.

    • degen@midwest.social
      2 months ago

Yeah, I’m more worried about the output of AI getting involved than anything regarding the input, at least as far as public forums go.

  • AbouBenAdhem@lemmy.world
    1 month ago

    My main issue with the Reddit deal (and similar data grabs) is that major AI companies are hoarding user-generated content to give themselves a competitive advantage. I have less of an issue with them using non-exclusive public content like Wikipedia, fediverse comments, and public-domain historical works.

  • mindbleach
    2 months ago

    Folks, if you can see it, they can see it.

    I don’t give a shit if the robot scrapes every book ever sold. I am not about to get worked-up over the copyright claims of pseudonymous randos’ disposable internet commentary.

  • redrum@lemmy.ml
    2 months ago

Server admins could add to their policy that any AI scraping requires prior permission from the copyright holders of the content (i.e., the users) when the scraping is done to exploit the data for profit. Also, robots.txt could be used to forbid AI HTML scraping.
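
For example, a robots.txt along these lines could ask known AI crawlers to stay out (a sketch; the user-agent strings below are real published crawler names, but robots.txt is purely advisory, so a scraper is free to ignore it):

```
# Ask known AI training crawlers not to fetch anything.
# Advisory only -- compliant bots honor this; bad actors won't.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Everyone else: normal access.
User-agent: *
Disallow:
```

The obvious limitation is that this only works against crawlers that identify themselves honestly and choose to comply.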

I don’t think restrictions should be added at the protocol level, but maybe some declarative tags would be fine:

    {
    "rich": "eat",
    "about-meta": "fck-genocidal-and-youth-suicidal-promoter-zuckenberg",
    "ai": "not-for-greed"
    }
    
• EffortlessOps (OP)
      2 months ago

I think this would be the only way. It would be interesting to know how much traffic, or how many requests, this instance gets, to see whether it’s a real problem. Server admins could implement stricter rate limiting for non-members if it becomes an issue. They could likely even implement something that would give them visibility into which of their members are making the most requests. I don’t believe that’s possible today from within the platform, anyway.
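
Stricter rate limiting for anonymous clients is usually done at the reverse proxy rather than inside the platform. A minimal sketch with nginx’s rate-limiting module (the zone name, limits, and upstream name here are arbitrary examples; real instances may front Lemmy differently):

```nginx
# Throttle each client IP to ~2 requests/second with a small
# burst allowance; logged-in traffic could be routed through a
# separate, more generous zone.
limit_req_zone $binary_remote_addr zone=anon:10m rate=2r/s;

server {
    listen 80;

    location / {
        limit_req zone=anon burst=10 nodelay;
        proxy_pass http://lemmy_backend;  # hypothetical upstream name
    }
}
```

This won’t stop a distributed scraper, but it caps what any single bot IP can pull and keeps the load on the instance bounded.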

      There’s really two issues here:

1. Are users aware, and OK with the fact, that their public conversations will almost certainly be picked up and used to train future models?
2. Are Lemmy instance admins OK with potentially half of their traffic going to bots that are hoarding and scraping the data, causing additional load on the servers?

Maybe @[email protected] would be open to sharing some insight into the number of requests the instance receives per month and how many resources it takes.

      • Gadg8eer
        1 month ago

I don’t need to be paid, I just don’t want corpos profiting off of my data for any reason, so robots.txt works fine for me. The reason that’s enough in my eyes is that I don’t hate capitalism, nor am I an anarchist or tankie; this is about halting enshittification, and for one other reason:

        “AI is fundamentally about giving the wealthy access to skill while depriving the skilled of the means to access wealth.”

        In short, eat the rich because they’ve ruined everything. They want capitalism? Then no more “laissez-faire” bullshit, you pay your fucking 90% tax on every dollar above 1 mil and shut it. Nobody needs 15 different colors of common Lamborghini and 1 Lambo out of less than 500. Nobody needs 5000 days of going to the mall to buy a dress every day. Nobody needs a personally-owned A380 private jet. Nobody needs 25%, 25 fucking percent, return on investment.

That also applies to the internet and tech companies as much as to reality and banks. When I used Reddit, I never told them they could lock access behind a paywall, and they know it; they also know I can’t afford an international court case against an American tech giant. Now 90% of Google results are locked behind Reddit, a company which shadow-banned me well before the API issues.

        As long as Reddit, Google, Samsung, Microsoft, etc. can’t legally make free money off of this, I’m happy.