• zeet@lemmy.world

    “Hi there! The name’s Clippy, James Clippy. Looks like you’re trying to take over the world. Do you need any help with that?”

    • ᴇᴍᴘᴇʀᴏʀ 帝@feddit.uk (OP)

      “I will now enrage world leaders”

      “Enslave! Enslave world leaders!!”

      “Can you first identify which, if any, are llamas in these pictures?”

      “None of them! Llamas don’t wear glasses and I have no idea what that blob in the corner is.”

  • izzent@lemmy.world

    Why the hell was this even considered… AI sensationalism is always so unimaginably dumb.

    • ᴇᴍᴘᴇʀᴏʀ 帝@feddit.uk (OP)

      Because somewhere there’s a team whose job it is to come up with ideas conforming to “X but with AI”, and then a salesman makes a pitch.

      I suppose the problem with spying is that a lot of it involves sifting huge amounts of electronic data looking for patterns. However, I imagine the smart folks at GCHQ probably have some fancy algorithms for doing this and are able to advise their bosses that, currently, throwing AI into the mix is just the Emperor’s New Clothes. And you still need boots on the ground doing the footwork needed to generate this data.

      • GreatAlbatross@feddit.uk (mod)

        Also, verifying the conclusions an AI reaches can be tricky, as it’s often difficult to show the workings.

        “This guy is going to bomb something soon”
        ‘OK, can you give us the proof for that conclusion?’
        vomits entire training file

        • ᴇᴍᴘᴇʀᴏʀ 帝@feddit.uk (OP)

          Very true. The best description I’ve heard of AI output is that it’s a hallucination: it just has to look plausible.

          So it is a worry when it’s used to detect “cheating” in university essays, and it is horrifying when it could be used to order a drone strike on someone’s house. A lot of people don’t know enough to treat its results as, at best, a first-pass filter, and just rely on it because it’s a computer and it has the word “intelligence” in there (or similarly stupid reasons).
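
          Something like this rough sketch is what I mean by a first-pass filter: the model only flags things for a person to look at, and it never acts on its own. The names here (Item, model_score and so on) are made up for illustration, not from any real system.

          ```python
          # Toy "first-pass filter": a model score only queues items for human
          # review; nothing is acted on from the model's output alone.
          from dataclasses import dataclass


          @dataclass
          class Item:
              id: str
              text: str


          def model_score(item: Item) -> float:
              """Stand-in for whatever classifier returns a 0..1 relevance score."""
              return 0.5  # placeholder value


          def first_pass_filter(items: list[Item], threshold: float = 0.8) -> list[Item]:
              """Return only the items the model flags; these go to a human review queue."""
              return [item for item in items if model_score(item) >= threshold]


          if __name__ == "__main__":
              queue = first_pass_filter([Item("a1", "example message")], threshold=0.4)
              for item in queue:
                  print(f"Flagged for human review: {item.id}")  # a person makes the final call
          ```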

          • Afghaniscran@feddit.uk

            I use AI a lot at work because my English grammar is poor, and English is my first language. I read over everything it gives me to make sure it’s factually correct and edit it to sound more human, but other than that it does most of the heavy lifting. What used to take about an hour now takes 15 minutes with the right prompt.

  • Immersive_Matthew

    Like as if some basic AI over the years has not already and is presently being used to help with spying. When AGI comes and is a better spy than all human spies, what choice do you have other than to also have your own AGI super spy? Such weird statements that are truly hallow.