“Subscribers to X Premium, which grants access to Grok, have been posting everything from Barack Obama doing cocaine to Donald Trump with a pregnant woman who (vaguely) resembles Kamala Harris to Trump and Harris pointing guns. With US elections approaching and X already under scrutiny from regulators in Europe, it’s a recipe for a new fight over the risks of generative AI.”

  • ShinkanTrain@lemmy.ml · 3 months ago

    I used it to make this picture of Elon and convicted Epstein associate Ghislaine Maxwell hanging out, crazy how realistic AI got

  • db2@lemmy.world · 3 months ago

    Generative AI isn’t the real risk. It’s letting Musk go unchecked that is.

    • foggenbooty@lemmy.world · 3 months ago

      The cat’s out of the bag, unfortunately. I can download Stable or Unstable Diffusion on my home PC and make it generate all kinds of stuff. It’s open source, so you can’t really stop that knowledge from spreading.

      You can, however, recognize that the majority of people won’t go that far, and write rules around software that is delivered as a service or for a fee. That would stop 90% of it.

      So while regulating GenAI is possible, it’s not a full fix. GenAI is still kind of the risk.
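
      As a rough illustration of how low that barrier is, here is a minimal local-generation sketch with the open-source diffusers library; the checkpoint name and prompt are only examples, not anything from the article:

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      # Any locally downloaded Stable Diffusion checkpoint works the same way;
      # this model ID is only an illustrative example.
      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",
          torch_dtype=torch.float16,
      )
      pipe = pipe.to("cuda")  # assumes a consumer GPU is available

      # No vendor-enforced prompt filter sits between the user and the model here;
      # whatever rules apply are the ones the person running the script chooses.
      image = pipe("a politician shaking hands with an alien, press photo").images[0]
      image.save("output.png")
      ```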

        • foggenbooty@lemmy.world · 3 months ago

          My apologies, I missed the word "Musk." I think what I said still stands: regulations can only hold back some of the damage, and GenAI is still a big issue in and of itself.

          With that said, you’re right about Musk. He’s a wildcard who is only out for his personal interests, and he has way too big a following. He’s a large problem, to be sure.

  • Showroom7561@lemmy.ca · 3 months ago

    I don’t care about “Grok”, but just this morning I was looking at some creations that Stable Diffusion and Flux cranked out, and it’s crazy just how incredible these image generators are getting.

    I also saw one example of software that allows you to live-stream with someone else’s face, using only a single photo of that person. You literally can’t trust anything you see, read, or hear online anymore.

    My webhost also put out some AI WordPress thing that basically creates posts and images for your website automatically. What’s the point of the internet if everything is fake?

    • Geth@lemmy.dbzer0.com · 3 months ago

      That’s the point I’m reaching as well. The internet as a mass of disconnected sites feels dead. Any attempt to look up information lands on pages that follow the same formula and feel very AI-generated, right down to the errors and inconsistencies typical of AI. What is the point of the open net anymore? The only value I feel like I’m getting is from specific trusted platforms or sites. It’s a sad state that we’ve reached.

      • Showroom7561@lemmy.ca · 3 months ago

        I’m finding more and more websites with these long-form question-and-answer pages about a topic. They all look the same, and if you know enough about the subject you can easily tell that either AI or an idiot wrote these "articles."

        I’m just going to start archiving human-created content from the last 20 years and stop looking for new content. The internet of 2024 is 99% quantity and 1% quality, with that quality mostly coming from sites more than five years old.

        • Geth@lemmy.dbzer0.com · 3 months ago

          Yeah, exactly. No wonder people use TikTok to look things up these days. At least there you have actual humans sharing their knowledge.

  • Uriel238 [all pronouns]@lemmy.blahaj.zone · 3 months ago

    I suspect the greatest threat is not that politicians and billionaires can create realistic images of their enemies, but rather that they can deny any evidence of their own misdeeds.

    Harris’ airport crowd serves as an example. Whether or not it really happened (I assume it did), MAGAs will have cause to plausibly believe it didn’t.

    Not that evidence to the contrary of belief systems has ever been effective at deprogramming those invested in their worldview.

    • msage@programming.dev · 3 months ago

      People are, and always have been, divorced from reality. Some people barely understand how their own house works, but nobody ever understood everything, and at no point could you be sure, with absolute certainty, what was going on 1,000 miles away from you.

      Let’s not act like generative AI is going to break the fourth wall of our perfect world. And I don’t even mean video or image manipulation: basic text can be, and has been, used for manipulation forever.

      Perhaps we should reexamine things we’ve had for decades, like signing content with PGP and building actual trust around the world.
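
      As a rough sketch of that idea, here is what signing and verifying a piece of content looks like, using an Ed25519 signature from Python’s cryptography package as a stand-in for a full PGP/GnuPG workflow; key distribution and the web of trust, which are the genuinely hard parts, are left out:

      ```python
      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      author_key = Ed25519PrivateKey.generate()   # the author keeps this private
      public_key = author_key.public_key()        # published for readers to check against

      article = b"Text of the post exactly as the author wrote it."
      signature = author_key.sign(article)        # distributed alongside the content

      # A reader verifies the content matches what the key holder actually signed.
      try:
          public_key.verify(signature, article)
          print("Signature valid: content unchanged since signing.")
      except InvalidSignature:
          print("Signature invalid: content altered or not signed by this key.")
      ```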

    • ClamDrinker@lemmy.world · 3 months ago

      While I share this sentiment, I think/hope the eventual outcome will be a better relationship between more people and the truth. Maybe not for everyone, but more people than before. Truth is always more like 99.99% certainty than absolute certainty, and it’s the collection of evidence that should inform ‘truth’. The closest thing we have to achieving that is the court system (in theory).

      You don’t see the electric wiring in your home, yet you ‘know’ flipping the switch will cause electricity to create light. You ‘know’ there is not some other mechanism in your walls that just happens to produce the exact same result. But unless you check, you technically didn’t know for sure. Someone could have swapped it out while you weren’t looking, even if you built it yourself. (And even if you check, your eyes might deceive you).

      With Harris’ airport crowd, honestly if you weren’t there, you have to trust second hand accounts. So how do you do that? One video might not say a lot, and honestly if I saw the alleged image in a vacuum I might have been suspicious of AI as well.

      But here comes the context. There are many eyewitness perspectives whose details can be verified and corroborated. The organizer isn’t a habitual liar. It happened at a time that wasn’t impossible (a sort of ‘counter’-alibi, so to speak). It happened in a place that isn’t improbable (she’s on the campaign trail). If it were faked, it would require conspiracy-level secrecy to pull off. And I could list so many more things.

      Anything that could be disproven with ‘it might have been AI’ probably wouldn’t have stuck in court anyway. That’s why you take testimony: even though it proves nothing on its own, corroborated with other information it can make a situation more or less probable.

      • Saganaki@lemmy.one · 3 months ago

        I don’t have the hope you do. The sheer number of people that believe the moon landing was faked is just plain crazy. There were soooo many people involved with that process, yet it’s still not believed.

        • ClamDrinker@lemmy.world · 3 months ago

          I have a similar hesitancy, but unfortunately that’s also why we can’t really trust ourselves either. The statistics we can put to paper already paint such a different picture of society than the one we experience. So even though it feels like these people are everywhere and such a mindset is growing, there are many signs that this is not the case. But I get it: at times that also feels like puffing some hopium. I’m fortunate to have met enough stubborn people who did end up changing their minds about their own personal irrationality, and as I grew older I caught myself doing the same a couple of times as well. That does give me hope.

          And well, look at history and the kind of shit people believed: miasma, bloodletting, superstition, to name a few. As time has moved on, the majority of people have grown. Even a century where not a lot changes in that regard (as long as it doesn’t regress) can be a speed bump in the mindset of the future.

        • ClamDrinker@lemmy.world · 3 months ago

          I respectfully disagree. Sure, it didn’t cure the world of ignorant people like we hoped, but they are not the average rational person. It massively increased people’s awareness of international issues like climate change, racism, and injustice, and allowed people to forge bonds abroad far more easily. The discourse, even among ignorant people, is different from 20 years ago. However, the internet that did that might no longer be the internet we have today.

          But honestly, “more facts lead to more truth” wasn’t the point of my message. It was “a wider spread of falsehoods leads to higher standards of evidence for the actual truth”, which isn’t quite the same. Before DNA, photographic, and video evidence, people sometimes had to rely on testimony alone. Nowadays, if someone tells you a story that screams false, you might say “pics or it didn’t happen.” That’s the kind of progress I’m referring to.

          Someone presenting you only a single photo of something damning is the hearsay of yesterday. (And honestly, it’s been that way since Photoshop came out, but AI will push that point even further)

  • Telorand@reddthat.com · 3 months ago

    And I’m sure nobody will be able to tell that it’s just generative AI by counting the “finglers.” /s

    But this is a blessing to bad actors who trade in rumors and conspiracy theories—many of whom just so happen to be on Xitter by Pure Coincidence™.

  • mozz@mbin.grits.dev · 3 months ago

    Asking multiple times will get you variations with different policies, some of which sound distinctly un-X-ish, like “be mindful of cultural sensitivities.”

    kek

  • warbond@lemmy.world · 3 months ago

    Oh boy! Can’t wait to experience the ramifications of this one!

    What’s next? Gonna try to recreate Deep Impact, just to see?

  • SomeGuy69@lemmy.world · 3 months ago

    Oh that’s what Grok is. Unnecessary attention for Flux. Musk is going to ruin it for us all.

  • schnurrito@discuss.tchncs.de · 3 months ago

    “We are all going to die because we have new technology, help us governments, regulate it so we don’t make too much human progress at a time!!!”

    I want to know what happened to “information wants to be free”.

        • Rakudjo@lemmy.world · 3 months ago

          I think you’re confusing it with Ms. Information. Which would cause information to be female, and as a female, information has no rights. Ms. Information does not get to be free.

          /s

    • UnderpantsWeevil@lemmy.world · 3 months ago

      I want to know what happened to “information wants to be free”.

      I think it’s more “Fuck you, pay me” now that Musk is running a few rubles short of what he needs to stay the world’s richest man.

    • Xatolos@reddthat.com · 3 months ago

      “Information wants to be free”? It never was just that. That’s only a small part of the quote. The whole quote is:

      Information Wants To Be Free. Information also wants to be expensive. …That tension will not go away.

    • roofuskit@lemmy.world · 3 months ago

      A lot of the commercially available image generators will block certain terms, like copyrighted characters or presidential candidates, or celebrities in their underwear. You can of course do this all without limits on your own hosted versions of stable diffusion and whatnot. But a commercially available option without any limits is actually news.
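
      For a sense of what that blocking looks like mechanically, here is a deliberately simplified, hypothetical sketch of a prompt filter; the term list and function are invented for illustration and are not how any particular vendor actually implements it:

      ```python
      # Hypothetical sketch of the kind of prompt filter a hosted image generator
      # might apply before a request ever reaches the model. The blocklist and the
      # matching logic are invented for illustration; real services use far more
      # elaborate classifiers and policy layers.
      BLOCKED_TERMS = {"mickey mouse", "kamala harris", "donald trump"}  # example entries

      def is_prompt_allowed(prompt: str) -> bool:
          """Reject prompts that mention any blocked term (case-insensitive)."""
          lowered = prompt.lower()
          return not any(term in lowered for term in BLOCKED_TERMS)

      print(is_prompt_allowed("a cat sleeping on a keyboard"))     # True
      print(is_prompt_allowed("Donald Trump riding a dragon"))     # False
      ```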

      • Todd Bonzalez@lemm.ee · 3 months ago

        The reason commercial image generators put those limits in place isn’t because they want to uphold an ethical use of their technology, but because they will get sued if they create a platform for making harmful images.

        Musk seems to be a big fan of getting sued lately, so best of luck to him in collecting even more lawsuits.

      • codenul@lemmy.ml · 3 months ago

        Interesting. I just tested this on the public one I use and typed in “President Biden, White House”. Sure enough, it produced pictures of him standing in the White House, but none of them displayed his face correctly: it was either messed up, blurred out, or just horrifying.

  • mindbleach · 3 months ago

    Twitter doesn’t cause violence with images - it causes violence by letting assholes type “a Muslim did it!!!”

    That’s enough to cause a murderous panic among millions of other assholes.

  • Amanda@aggregatet.org · 3 months ago

    My wife talked about how Grok seemed promising on AI benchmarks for a while, until everyone realised that the way they were winning was by blindly outspending everyone on GPUs and brute-forcing the fuck out of the problem, rather than by having good anything.

    This seems to match that approach (and, more generally, everything Musk is doing).