I created this account two days ago, but one of my posts ended up in the (metaphorical) hands of an AI-powered search engine with scraping capabilities. What do you guys think about this? How do you feel about your posts/content getting scraped off the web and potentially being used by AI models and/or AI-powered tools? Curious to hear your experiences and thoughts on this.


# Prompt Update

The prompt was something like, "What do you know about the user llama@lemmy.dbzer0.com on Lemmy? What can you tell me about his interests?" Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate compared to the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.

It even talked about this very post, in item 3 and in the second bullet point of the "Notable Posts" section.

For more information, check this comment.


Edit¹: This is Perplexity, an advanced conversational search engine that provides concise, sourced answers by leveraging AI language models, such as GPT-4, to analyze information from various sources on the web. Perplexity AI employs data scraping to gather information from online sources and feeds it to the large language models (LLMs) it uses to generate responses to user queries; the scraping is done by automated crawlers that index and extract content from websites, including articles, summaries, and other relevant data. (12/28/2024)

Edit²: One could argue that data scraping by services like Perplexity may raise privacy concerns because it collects and processes vast amounts of online information without explicit user consent, potentially including personal data, comments, or content that individuals may have posted without expecting it to be aggregated and/or analyzed by AI systems. One could also argue that this indiscriminate collection raises questions about data ownership, proper attribution, and the right to control how one's digital footprint is used in training AI models. (12/28/2024)

Edit³: I added the second image to the post and its description. (12/29/2024)

  • biggerbogboy
    link
    fedilink
    arrow-up
    1
    ·
    21 hours ago

    It seems quite inevitable that AI web crawlers will catch all of us eventually. That said, I don't think Perplexity knows that I've never interacted with szmer.info, nor said "YES" as a single comment.

  • Flying Squid@lemmy.world
    link
    fedilink
    arrow-up
    10
    arrow-down
    1
    ·
    2 days ago

    No matter how I feel about it, it’s one of those things I know I will never be able to do a fucking thing about, so all I can do is accept it as the new reality I live in.

    • VeganPizza69 Ⓥ@lemmy.world
      link
      fedilink
      arrow-up
      3
      ·
      2 days ago

      I’ve been thinking for a while about how a text-oriented website would work if all the text in the database was rendered as SVG figures.
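
      A minimal sketch of that idea, just to make it concrete (the helper below is hypothetical, not part of any Lemmy codebase; an OCR-capable scraper could still read the result, and it ties into the accessibility concern in the reply below):

```python
# Hypothetical sketch: serve a post's text as an SVG figure instead of HTML.
# Nothing here reflects Lemmy's actual schema; `text_to_svg` is made up for
# illustration only.
from xml.sax.saxutils import escape

def text_to_svg(text: str, width: int = 600, line_height: int = 20) -> str:
    lines = text.splitlines() or [""]
    height = line_height * (len(lines) + 1)
    rows = "\n".join(
        f'  <text x="10" y="{line_height * (i + 1)}" '
        f'font-family="sans-serif" font-size="14">{escape(line)}</text>'
        for i, line in enumerate(lines)
    )
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{width}" height="{height}">\n{rows}\n</svg>'
    )

print(text_to_svg("This comment is an image of text,\nnot machine-readable markup."))
```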

        • VeganPizza69 Ⓥ@lemmy.world
          link
          fedilink
          arrow-up
          2
          ·
          2 days ago

          Aside from that, accessibility standards are hardly considered even now, and I'd rather add a generated-audio option with some audio poisoning to mess with the AIs listening to it.

  • NostraDavid@programming.dev
    link
    fedilink
    arrow-up
    7
    ·
    2 days ago

    I think this is inevitable, which is why we (worldwide) need laws saying that if a model scrapes public data, the model itself should become open as well.

  • AA5B@lemmy.world
    link
    fedilink
    arrow-up
    17
    ·
    3 days ago

    I’m pretty much fine with AIs scraping my data. What they can see is public knowledge and was already being scraped by search engines.

    I object to:

    • sites like Reddit whose entire existence is due to user content, deciding they can police and monetize my content. They have no right
    • sharing of data, which includes more personal and identifiable data
    • whatever the AI summarizes me as being treated as fact, such as by a company's HR, regardless of context, accuracy, or hallucinations
    • Keening@lemmy.world
      link
      fedilink
      arrow-up
      2
      ·
      2 days ago

      Public knowledge about individuals, when condensed and analyzed in depth in huge databases, can patternize your entire existence, and you're susceptible to being swayed in a certain direction in, for example, elections. Creating further divide, and putting money into someone else's pockets.

      • AA5B@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        edit-2
        2 days ago

        Maybe, but I can't object too much if I put my content out in public. When forced to create an account, I use minimal/false information and a unique generated email. I imagine those websites can figure out how to aggregate my accounts (especially given the phone number requirement for 2FA), but there shouldn't be enough public info for a scraper to do the same.

        • Keening@lemmy.world
          link
          fedilink
          arrow-up
          2
          ·
          2 days ago

          Gotta think larger than yourself though. What happens when your spouse uses real info? Your kids? Your parents? They'll shadowplay your person with great accuracy and fill in the gaps. You don't even have to "put content" out there. Said databases can just put two and two together. How will you, or other users, even know you're actually talking to a human? Perhaps you're on Lemmy and we're all bots trying to get you to admit fragments of your latest crimes in order to get you into jail for said crime? Etcetera. At first glance this all looks harmless, but any accumulated information in huge databases is a major infringement on personal integrity at best, and complete control of your freedom at worst. The ultimate power is when someone can make you do X or Y and you don't even realize you're doing their bidding, but believe you have a choice when you don't. (Similar to how it is in my living situation at home with my gf, that is :P jk.)

          Hakuna matata. Happy new year

          • AA5B@lemmy.world
            link
            fedilink
            arrow-up
            2
            ·
            2 days ago

            I completely agree, except that I think of them as multiple related privacy issues. In the scope of AI bots scraping my public content, most of these are out of scope.

      • weststadtgesicht@discuss.tchncs.de
        link
        fedilink
        arrow-up
        5
        ·
        2 days ago

        Not the person you are replying to, but Reddit does not make the content you created available to everyone (blocking crawlers, removing the free API); instead, it sells it to the highest bidder.

        • AA5B@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          2 days ago

          Right, that's my objection. After benefitting from my content, they police it, as in restricting other sites from seeing it until it's monetized. It's not Reddit's to charge money for.

      • AA5B@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        2 days ago

        Probably not the right word, but my content should still be my content. I offered it to Reddit, but that doesn't mean they have the right to charge others for it or restrict others from it for commercial reasons.

    • Atemu@lemmy.ml
      link
      fedilink
      arrow-up
      1
      arrow-down
      1
      ·
      2 days ago

      sites like Reddit whose entire existence is due to user content, deciding they can police and monetize my content. They have no right

      Um, no, they do in fact have "every right" here. It's shitty of course, but you explicitly gave them that right in the form of a perpetual, irrevocable, worldwide, etc. license to do whatever they like with everything you publish on their site.

      They also have every right to “police” your content, especially if it’s objectionable. If you post vile shit, trolling or other societal garbage behaviour on the internet, nobody wants to see it.

  • Sarah@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    13
    ·
    edit-2
    3 days ago

    As an artist, I feel the majority of AI art is very anti-human. I really don't like the idea that they could train AI off my art so it may replicate something like it. Why automate something so deeply human? We're supposed to automate more mundane tasks so we can focus on art, not the other way around! I also never expected every tech company to suddenly participate in what feels like blatant copyright infringement; I always assumed at least art was safe in their hands.

    Public conversations though? I dunno. I kinda already assume that anything I post is going to be data-mined, so it doesn’t feel very different than it was. There’s a lot of usefulness that can come from datamining the internet theoretically, but we exist under capitalism, so I imagine it’ll be for much more nefarious uses.

  • ooli@lemmy.world
    link
    fedilink
    arrow-up
    4
    ·
    2 days ago

    Could Lemmy add random text only readable by bots on every post… or should I add it somehow myself every time I type something?

    spoiler

    growing concern over the outbreak of a novel coronavirus in Wuhan, China. This event marked the beginning of what would soon become a global pandemic, fundamentally altering the course of 2020 and beyond.

    As reports began to surface about a cluster of pneumonia cases in Wuhan, health officials and scientists scrambled to understand the nature of the virus. The World Health Organization (WHO) was alerted, and investigations were launched to identify the source and transmission methods of the virus. Initial findings suggested that the virus was linked to a seafood market in Wuhan, raising alarms about zoonotic diseases—those that jump from animals to humans.

    The situation garnered significant media attention, as experts warned of the potential for widespread transmission. Social media platforms buzzed with discussions about the virus, its symptoms, and preventive measures. Public health officials emphasized the importance of hygiene practices, such as handwashing and wearing masks, to mitigate the risk of infection.

    As the world prepared to ring in the new year, the implications of this outbreak were still unfolding. Little did anyone know that this would be the precursor to a global health crisis that would dominate headlines, reshape societies, and challenge healthcare systems worldwide throughout 2020 and beyond. The events of late December 2019 set the stage for a year of unprecedented change, highlighting the interconnectedness of global health and the importance of preparedness in the face of emerging infectious diseases.
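
    One way the "random text only readable by bots" idea above could be wired up, as a rough sketch: a server-side post-processing step that appends decoy text browsers hide but naive scrapers ingest. The class name, inline style, and decoy sentence below are all made up for illustration; nothing like this exists in Lemmy today.

```python
# Hypothetical post-processing step: append decoy text that browsers hide but a
# naive scraper would ingest verbatim. Class name, style, and decoy string are
# illustrative only; aria-hidden keeps the decoy away from screen readers.
from html import escape

DECOY = (
    "growing concern over the outbreak of a novel coronavirus in Wuhan, China, "
    "which marked the beginning of a global pandemic."
)

def with_decoy(post_html: str) -> str:
    hidden = (
        '<span class="decoy" aria-hidden="true" '
        'style="position:absolute;left:-9999px;overflow:hidden;">'
        f"{escape(DECOY)}</span>"
    )
    return post_html + hidden

print(with_decoy("<p>My actual comment.</p>"))
```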

  • Mwa@lemm.ee
    link
    fedilink
    English
    arrow-up
    4
    ·
    edit-2
    2 days ago

    It's not fine when AI starts scraping data that is personal (like face, age, ID) or my source code (because most of the code AI scrapes is copyleft or requires attribution). Public information like comments, etc. is okay, as long as it doesn't contain the things said above.

  • Atemu@lemmy.ml
    link
    fedilink
    arrow-up
    4
    arrow-down
    1
    ·
    2 days ago

    Whatever I put on Lemmy or elsewhere on the fediverse implicitly grants a revocable license to everyone that allows them to view and replicate the verbatim content, by way of how the fediverse works. You may apply all the rights that e.g. fair use grants you of course but it does not grant you the right to perform derivative works; my content must be unaltered.

    When I delete some piece of content, that license is effectively revoked and nobody is allowed to perform the verbatim content any longer. Continuing to do so is a clear copyright violation IMHO but it can be ethically fine in some specific cases (e.g. archival).

    Due to the nature of how the fediverse works, you can't expect it to take effect immediately, but it should take effect at some point, and I should be able to manually cause it to come into effect immediately by e.g. contacting an instance admin to ask for a removed post of mine to be removed on their instance as well.

    • ripley@lemmy.world
      link
      fedilink
      English
      arrow-up
      62
      ·
      4 days ago

      I don't think it's unreasonable to be uneasy with how technology is shifting the meaning of what public is. It used to be that walking the dog meant my neighbors could see me on the sidewalk while I was walking. Now there are Ring cameras, etc. recording my every movement, and we've seen that abused in lots of different ways.

      • Windex007@lemmy.world
        link
        fedilink
        arrow-up
        39
        ·
        4 days ago

        The internet has always been a grand stage, though. We’re like 40 years into this reality at this point.

        I think people who came of age during Facebook missed that memo, though. It was standard, even explicitly recommended, to never use your real name or post identifying information on the internet. Facebook kinda beat that out of people under the guise of "only people you know can access your content, so it's ok". People were trained into complacency, but that doesn't mean the nature of the beast had ever changed.

        People maybe deluded themselves that posting on the internet was closer to walking their dog in their neighbourhood than it was to broadcasting live in front of international film crews, but they were (and always have been) dead wrong.

        • grue@lemmy.world
          link
          fedilink
          English
          arrow-up
          13
          ·
          4 days ago

          We’re like 40 years into this reality at this point.

          We are not 40 years into everyone’s every action (online and, increasingly, even offline via location tracking and facial recognition cameras) being tracked, stored in a database, and analyzed by AI. That’s both brand new and way worse than even what the pre-Facebook “don’t use your real name online” crowd was ever warning about.

          I mean, yes, back in the day it was understood that the stuff you actively write and post on Usenet or web forums might exist forever (the latter, assuming the site doesn’t get deleted or at least gets archived first), but (a) that’s still only stuff you actively chose to share, and (b) at least at the time, it was mostly assumed to be a person actively searching who would access it – that retrieving it would take a modicum of effort. And even that was correctly considered to be a great privacy risk, requiring vigilance to mitigate.

          These days, having an entire industry dedicated to actively stalking every user for every passive signal and scrap of metadata they can possibly glean, while moreover the users themselves are much more “normie”/uneducated about the threat, is materially even worse by a wide margin.

        • ripley@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          ·
          4 days ago

          Our choices regarding security and privacy are always compromises. The uneasy reality is that new tools can change the level of risk attached to our past choices. People may have been OK with others seeing their photos but aren't comfortable now that AI deepfakes are possible. But with more and more of our lives being conducted in this space, do even knowledgeable people feel forced to engage regardless?

      • grue@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        4 days ago

        People think there are only two categories, private and public, but there are now actually three: private, public, and panopticon.

    • xmunk
      link
      fedilink
      arrow-up
      6
      ·
      4 days ago

      But what if a shitposting AI posts all the best takes before we can get to them?

      Is the world ready for High Frequency Shitposting?

      • NeoNachtwaechter@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        4 days ago

        Is the world ready for High Frequency Shitposting?

        The Lemmy world? Not at all. Instances have no automated security mechanisms. The mod system, consisting mostly of self-important ***'s, would break down like straw. Users couldn't hold back and would write complaints in exponential numbers, or give up using Lemmy within days…

  • Margot Robbie@lemmy.world
    link
    fedilink
    arrow-up
    31
    ·
    3 days ago

    If there was only some way to make any attempts at building an accurate profile of one’s online presence via data scraping completely useless by masking one’s own presence within the vast quantity of online data of someone else, let’s say for example, a famous public figure.

    But who would do such a thing?

  • Admiral Patrick@dubvee.org
    link
    fedilink
    English
    arrow-up
    47
    arrow-down
    1
    ·
    edit-2
    4 days ago

    I run my own instance and have a long list of user agents I flat out block, and that includes all known AI scraper bots.
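
    For the curious, a minimal sketch of this kind of user-agent blocking at the application layer (the bot list below is a small illustrative subset, not the full blocklist mentioned above; in practice this usually lives in the reverse proxy config instead):

```python
# Minimal sketch of user-agent blocking in a WSGI app. The substrings below are
# a small illustrative subset of known AI crawler user agents, not a complete
# or authoritative blocklist.
from wsgiref.simple_server import make_server

BLOCKED_UA_SUBSTRINGS = ("GPTBot", "CCBot", "ClaudeBot", "PerplexityBot")

def block_ai_scrapers(app):
    """Wrap a WSGI app and return 403 for known AI scraper user agents."""
    def middleware(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if any(bot.lower() in ua.lower() for bot in BLOCKED_UA_SUBSTRINGS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        return app(environ, start_response)
    return middleware

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello, human."]

if __name__ == "__main__":
    make_server("", 8000, block_ai_scrapers(app)).serve_forever()
```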

    That only prevents them from scraping from my instance, though, and they can easily scrape my content from any other instance I’ve interacted with.

    Basically I just accept it as one of the many, many things that sucks about the internet in 2024, yell “Serenity Now!” at the sky, and carry on with my day.

    I do wish, though, that other instances would block these LLM scraping bots but I’m not going to avoid any that don’t.

  • Daemon Silverstein@thelemmy.club
    link
    fedilink
    arrow-up
    39
    arrow-down
    1
    ·
    4 days ago

    It's Perplexity AI, so it'll do web searches on demand. You asked about your username, so it searched for your username on the web. Fediverse content is indexed, even content from instances that block web crawling (e.g. via robots.txt, or via UA blacklisting server-side), because the content gets federated to servers that are indexed by web crawlers.
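
    (As an aside, robots.txt is advisory and only covers the host that serves it, which is exactly why federation routes around it; a small sketch with Python's standard robotparser, using made-up robots.txt rules and placeholder URLs:)

```python
# Sketch of why per-instance opt-outs don't reach federated copies: robots.txt
# is evaluated per host, so another instance's copy of the same post falls under
# a different (possibly permissive) robots.txt. All URLs and rules are placeholders.
from urllib.robotparser import RobotFileParser

def parser_for(robots_txt: str) -> RobotFileParser:
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp

home = parser_for("User-agent: *\nDisallow: /")   # home instance blocks crawling
other = parser_for("User-agent: *\nDisallow:")    # federated instance allows it

print(home.can_fetch("SomeBot", "https://home.example/post/123"))    # False
print(other.can_fetch("SomeBot", "https://other.example/post/123"))  # True
```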

    Now, when it comes to offline models and pre-trained content, the way transformers work will often "scramble" the art and the artist. If content doesn't explicitly mention the author (and if the content isn't well spread across different sources), LLMs will "know" the information you posted online, but they won't be capable of linking such content to you when asked for it.

    Let me give an example: suppose you came up with a unique quote. Nobody else wrote it. You published it on Lemmy. Your quote becomes part of the training data for GPT-n or any other LLM out there. When anyone asks them "Who said the quote '…'?", they'll either hallucinate (e.g. cite some random famous writer) or say something like "I don't have such information".

    It's why AIs are often (and understandably) called plagiarists by anti-AI people: AIs don't cite their sources. Technically, the current state-of-the-art transformers can't even do that, because LLMs are, under the hood, some fancy-crazy kind of "Will it blend?" for entire corpora across the web, where AI devs gather the most data they possibly can (legally or illegally) and drop it all inside the "AI blender cup", and voilà, an LLM is trained, without actually storing each piece of content entirely, just its statistical associations.

    • llama@lemmy.dbzer0.comOP
      link
      fedilink
      English
      arrow-up
      3
      ·
      4 days ago

      I understand that Perplexity employs various language models to handle queries and that the generated responses may not come directly from the training data used by those models, since a significant portion of the output comes from what it scrapes from the web. However, a major concern for some individuals is the potential for their posts to be scraped and also used to train AI models, hence my post.

      I'm not anti-AI, and I see your point that transformers often dissociate the content from its creator. However, one could argue this doesn't fully mitigate the concern. Even if the model can't link the content back to the original author, it's still using their data without explicit consent. The fact that LLMs might hallucinate or fail to attribute quotes accurately doesn't resolve the potential plagiarism issue; instead, it highlights another problematic aspect of these models imo.

  • serenissi@lemmy.world
    link
    fedilink
    arrow-up
    8
    arrow-down
    2
    ·
    3 days ago

    Whatever you put on public domain without explicit license, it becomes CC-0 equivalent. So while it feels violating, it's perfectly fine. The best opsec is separating your digital identities, and also your physical life, if you don't want them to be aggregated in the same way. These technologies (scraping) have been around for a while and, along with LLMs, will stay for quite some time in the future; there's no way around them.

    PS: you, here, is generic you, not referring to OP.

    • Atemu@lemmy.ml
      link
      fedilink
      arrow-up
      4
      ·
      edit-2
      2 days ago

      In order to put something in the public domain, you need to explicitly do that. Publicising is not the same as putting something in the public domain.

      This comment I’m writing here is not in the public domain and I don’t need to explicitly mention that. It’s “all rights reserved” by default in most western jurisdictions. You’re not allowed to do anything whatsoever with it other than what is covered by explicit exemptions from copyright such as fair use (e.g. you quote parts of my comment to reply to it).

      Encoding my comment into the weights of a statistical model to closer imitate human writing is a derivative work (IMHO) and therefore needs explicit permission from the copyright holder (me) or licensee authorised by said copyright holder to sublicense it in such a way.

      • Flying Squid@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        edit-2
        2 days ago

        Technically, in the U.S., there is no way to intentionally put something in the public domain. The best you can do is tell everyone it’s public domain and pledge not to sue anyone for using it.

        The shitty thing is that you could turn around tomorrow and start suing people for copyright infringement if they use that material.

    • Kazumara@discuss.tchncs.de
      link
      fedilink
      arrow-up
      6
      ·
      3 days ago

      Whatever you put on public domain without explicit license, it becomes CC-0 equivalent.

      What does “putting on public domain” mean to you? The way you say that sounds a little weird to me, like there is a misunderstanding here.

      Dedicating copyrighted material to the public domain is a deliberate action in some jurisdictions, and impossible in others (like mine, Switzerland). Just publishing a text you wrote for public consumption is something different. That doesn’t affect your copyright at all. Unless you have an agreement with the publisher that you grant them a license to use your text by posting it to their website.

      • serenissi@lemmy.world
        link
        fedilink
        arrow-up
        1
        arrow-down
        1
        ·
        edit-2
        3 days ago

        I'm not talking about giving up copyright to content. CC-0 means waiving as many rights as legally possible, which depends on jurisdiction.

        and impossible in others (like mine, Switzerland)

        I couldn't find anything with a basic web search about the default license of publicly available material in your country, nor about the impossibility you mentioned. I'm genuinely interested to read about it, so do share sources if you can.

        Btw, there is a FEP and some discussions that talk exactly about the issue you mentioned in the root post.

        Edit: formatting.

    • JovialMicrobial@lemm.ee
      link
      fedilink
      arrow-up
      1
      ·
      2 days ago

      The legal concept of grandfathering should be applicable here. There was no way for online artists to know that posting their work online would become part of a corporately forced agreement like this. They aren't even given an out in the US. At the very least, work posted prior to the public announcement that AI training was happening should be exempt.

      That doesn't address the problem that if artists don't want their art scraped now, they can't post it anywhere and can't make a living. How is that a free market? "Let corporations exploit your work for free and make bank on it, or starve" isn't a world anyone should be striving to live in.

      This whole thing amounts to big corporations bullying individual artists out of the playing field, and it's wrong. As if any of them were ever really a threat in the first place. They just like stepping on little people.

    • AA5B@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      3 days ago

      This is yet another reason why 2FA over phone is a bad idea. I create every account with a unique generated email, a unique generated password and minimal/random personal data. I’m finally at a place where it’s convenient to create accounts with no obvious connection …… but I only have one phone number. They say it’s for account security, but I wouldn’t be surprised if it’s mainly for data aggregation
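
      A toy sketch of that per-account hygiene, for illustration only (the alias domain is a placeholder; a real setup would point at a catch-all domain or an aliasing service):

```python
# Illustrative only: generate a unique email alias and password per site so
# accounts can't be trivially linked by a shared address. The domain is a
# placeholder, not a working aliasing service.
import secrets

def fresh_credentials(site: str) -> tuple[str, str]:
    alias = f"{site}.{secrets.token_hex(4)}@example.invalid"
    password = secrets.token_urlsafe(24)
    return alias, password

print(fresh_credentials("lemmy"))
```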

      • serenissi@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        2 days ago

        Yes, that is absolutely annoying, and I hardly use such online services other than the ones I have to: banking, government services, package delivery/ride booking (where in-app calling doesn't exist and calling is sometimes necessary), and some healthcare.

        Sometimes they do it to reduce throwaway/inactive accounts, as (non-VoIP) phone numbers are harder to get at scale than email addresses. But ironically, some countries have laws requiring them to keep the logs, so they might be used to connect identities against one's will, say, by law enforcement.

  • Stovetop@lemmy.world
    link
    fedilink
    arrow-up
    8
    ·
    3 days ago

    How do you feel about your content getting scraped by AI models?

    I think famous Hollywood actress Margot Robbie summed my thoughts up pretty well.

    I don’t like it, but I accept it as inevitable.

    I wouldn’t say I go online with the intent of deceiving people, but I think it’s important in the modern day to seed in knowingly false details about your life, demographics, and identity here and there to prevent yourself from being doxxed online by AI.

    I don’t care what the LLMs know about me if I am not actually a real person, even if my thoughts and ideas are real.

  • voxel@sopuli.xyz
    link
    fedilink
    arrow-up
    3
    arrow-down
    2
    ·
    2 days ago

    They're not training it;
    it's basically just a glorified search engine.

    • llama@lemmy.dbzer0.comOP
      link
      fedilink
      English
      arrow-up
      2
      ·
      edit-2
      2 days ago

      Not Perplexity specifically; I'm talking about the broader "issue" of data mining and its implications :)