Fake babies, real horror: Deepfakes from the Gaza war increase fears about AI’s power to mislead::Among images of the bombed out homes and ravaged streets of Gaza, some stood out for the utter horror: Bloodied, abandoned infants.

  • JackGreenEarth@lemm.ee · 1 year ago

    There will be no way to watermark all AI images, as someone could just mod stable diffusion to remove the watermark. The best we can do is to doubt any photographic evidence we see.

    • Doorbook@lemmy.world · 1 year ago

      They intentionally sabotaged and killed journalists, defunded public media and privatized the rest, and bought out and censored social media. Now it's hard to tell which images are real and which aren't.

      The only option, in my opinion, is for camera manufacturers to include a cryptographic hash that can be passed to an algorithm to authenticate a photograph's metadata.
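The idea might look roughly like this minimal Python sketch. Everything here is illustrative, not real camera firmware: `CAMERA_SECRET`, the metadata fields, and the use of HMAC are stand-ins (a real design would keep a per-device asymmetric key in a secure element rather than a shared secret):

```python
import hashlib
import hmac
import json

# Illustrative stand-in for a per-device key provisioned at the factory.
CAMERA_SECRET = b"per-device key burned in at the factory"

def sign_photo(image_bytes: bytes, metadata: dict) -> str:
    """Hash the image bytes together with canonicalized metadata and sign the result."""
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(CAMERA_SECRET, payload, hashlib.sha256).hexdigest()

def verify_photo(image_bytes: bytes, metadata: dict, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_photo(image_bytes, metadata), signature)

photo = b"\xff\xd8 raw JPEG bytes"
meta = {"gps": "31.50,34.45", "time": "2023-12-03T10:00:00Z"}
sig = sign_photo(photo, meta)
print(verify_photo(photo, meta, sig))         # True
print(verify_photo(photo + b"!", meta, sig))  # False: any tampering breaks it
```

Tampering with either the pixels or the metadata invalidates the signature, which is the property the authentication scheme would rely on.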

      • JackGreenEarth@lemm.ee · 1 year ago

        That could very easily be abused as some sort of DRM or vendor lock-in for photos. I'd rather not.

        • bobgusford@lemmy.world · 1 year ago

          Well, not necessarily. How about just embedding the following in the EXIF tags: a digital signature from the original camera; a digital hash of the original image; and digital signatures for the publisher and the article where the pics will appear.

          Any additional processing by a “social media content creator” - for example, adding captions to make a meme out of it - will also include the prior chain of digital sigs and hashes.
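That chain-of-custody idea can be sketched roughly as follows. All actor names, actions, and fields are made up for illustration, and plain SHA-256 linking stands in for real digital signatures; the point is only that each step commits to everything before it:

```python
import hashlib
import json

def add_link(chain: list, actor: str, action: str, content: bytes) -> list:
    """Append a provenance record that hashes the previous record, linking the chain."""
    prev = chain[-1]["digest"] if chain else ""
    record = {
        "actor": actor,
        "action": action,
        "content_hash": hashlib.sha256(content).hexdigest(),
        "prev": prev,
    }
    # The record's own digest covers the link to its predecessor.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return chain + [record]

chain = add_link([], "AhmedMohammed", "photograph", b"original jpeg bytes")
chain = add_link(chain, "AP", "story", b"original jpeg bytes")
chain = add_link(chain, "MemeCreator", "caption", b"jpeg with caption added")
print(" -> ".join(f'{r["actor"]}[{r["action"]}]' for r in chain))
# AhmedMohammed[photograph] -> AP[story] -> MemeCreator[caption]
```

Rewriting any earlier record changes its digest, which breaks every later link, so the meme still carries an auditable trail back to the original photographer.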

          Now when it pops up on social media sites/apps, there can be little info bubbles that link to the original pic or article, or provide info on ownership of the camera along with date and timestamps of the pics.

          Garbage will always exist on social media, but at least we can have these little tools to verify authentic images.

          • 15Redstones · 1 year ago

            How would they be made secure against faking?

            If the cryptographic key itself was extractable, it’d be easy to sign fake images with just a bit of custom software.

            If it isn’t, there’s still workarounds. Buy a professional photography camera, disassemble it, extract the chip that does the signature, feed it fake GPS and image data, and you have a modified image signed as legit. A country’s intelligence agency could easily do that.

            Even if the camera was made completely unmodifiable, you could put it in a Faraday cage, feed it a spoofed GPS signal for fake date/time/location data, and take a picture of a high resolution screen showing your photoshopped image.

            Building a system where end users are told “this image is cryptographically confirmed to be legit” just makes it easier to convince users that your fake images are legit.

            • bobgusford@lemmy.world · 1 year ago

              Oh no. No social media site should ever claim that a post, story, or image is legit.

              For some viral pics/posts, it should probably show a warning that the image has no signature, an invalid signature, or a revoked signature. Otherwise, it just shows the verified signature chain, for example: BleedingHeartInfluencer*[edited]* → NyTimes*[edited]* → AP*[story]* → AhmedMohammed*[photographer,2023-12-03]*.

              We can always assume nation states and other powerful people will know how to fake images, GPS, reality, etc. We can also always assume fakes will still be shared by many people without any proper authentication.

              The main goal here would just be to reduce proliferation.

              • 15Redstones · 1 year ago

                In this case you’d still need a way to know who the photographer is and whether they can be trusted. The photographer at the beginning of the chain can sign anything, regardless of whether it’s a real photograph or an edited one (or a real photograph of a staged scene with fake location/time data). The cryptographic system could only tell you that the image originates from the person or organisation associated with a specific key.
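That limitation takes only a few lines to demonstrate. Here HMAC with a made-up key stands in for a real asymmetric signature, purely for illustration: a fabricated image signed by the legitimate keyholder verifies exactly as cleanly as a genuine one.

```python
import hashlib
import hmac

# Illustrative key: imagine this belongs to a real, trusted photographer.
PHOTOGRAPHER_KEY = b"perfectly legitimate signing key"

def sign(data: bytes) -> str:
    return hmac.new(PHOTOGRAPHER_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(data), signature)

fabricated = b"photoshopped or staged scene"
sig = sign(fabricated)
# The signature is valid, yet it says nothing about whether the scene was real.
print(verify(fabricated, sig))  # True
```

Verification binds bytes to a key, nothing more; trust in the keyholder has to come from outside the cryptography.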