I think AI is neat.

  • Meowoem · 10 months ago

    Ha ha, yeah, humans sure are great at not being convinced by the opinions of other people; that’s why religion and politics are so simple and society is so sane and reasonable.

    Helen Keller would believe you if you told her it was purple.

    If humans didn’t have eyes, they wouldn’t know the colour of the sky. If you give an AI a colour video feed of the outside, it’ll be able to tell you exactly what colour the sky is, using a whole range of very accurate metrics.
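
    A minimal sketch of that idea, assuming OpenCV is available and a hypothetical frame.jpg grabbed from the feed: sample the top of the frame and report the average hue.

    ```python
    import cv2

    # Hypothetical capture from the video feed; "frame.jpg" is an assumed filename.
    frame = cv2.imread("frame.jpg")

    # Sample the top third of the frame, where the sky usually is.
    sky = frame[: frame.shape[0] // 3]

    # Convert to HSV and average the channels across all sampled pixels.
    hsv = cv2.cvtColor(sky, cv2.COLOR_BGR2HSV)
    mean_h, mean_s, mean_v = hsv.reshape(-1, 3).mean(axis=0)

    # OpenCV hue runs 0-179; roughly 90-130 covers the blues.
    label = "blue" if 90 <= mean_h <= 130 else "not blue"
    print(f"mean hue {mean_h:.0f} -> the sky looks {label}")
    ```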

    • rambaroo@lemmy.world · 10 months ago (edited)

      This is one of the worst rebuttals I’ve seen today because you aren’t addressing the fact that the LLM has zero awareness of anything. It’s not an intelligence and never will be without additional technologies built on top of it.

      • Meowoem · 10 months ago

        Why would I rebut that? I’m simply arguing that they don’t need to be ‘intelligent’ to accurately determine the colour of the sky, and that if you expect an intelligence to know the colour of the sky without ever having seen it, you’re being absurd.

        The comment I responded to was written in a way that doesn’t correspond to reality, and that’s what I addressed.

        Again, as I said in other comments, you’re arguing that an LLM is not Will Smith in I, Robot or Scarlett Johansson playing the role of a USB stick, but that’s not what anyone sane is suggesting.

        A fork isn’t great for eating soup, and a knife isn’t required at all, but that doesn’t mean they’re not incredibly useful eating utensils.

        Try thinking of an LLM as a natural language processing (NLP) tool that lets computers take normal human text as input to perform a range of tasks. It’s hugely useful and unlocks a vast amount of potential, but it’s not going to slap anyone for joking about its wife.
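
        As a minimal sketch of that framing (assuming the openai Python client, an API key in the environment, and an assumed model name), plain human text goes in and a narrow NLP result comes out:

        ```python
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        # Ordinary human text in, a narrow NLP task out: sentiment classification.
        review = "The soup was cold but the fork was outstanding."
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; any chat model would do
            messages=[
                {"role": "system", "content": "Reply with one word: positive, negative, or mixed."},
                {"role": "user", "content": review},
            ],
        )
        print(resp.choices[0].message.content)  # e.g. "mixed"
        ```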

    • Kecessa · 10 months ago

      How come all LLMs keep inventing facts and giving false information, then?

      • Meowoem · 10 months ago

        People do that too; actually, we do it a lot more than we realise. Studies of memory, for example, have shown that we invent details we expect to be there to fill in blanks, and that we convince ourselves we remember them even when presented with evidence that refutes them.

        A lot of the newer implementations use more complex methods of fact verification, it’s not easy to explain but essentially it comes down to the weight you give different layers. GPT 5 is already training and likely to be out around October but even before that we’re seeing pipelines using LLM to code task based processes - an LLM is bad at chess but could easily install stockfish in a VM and beat you every time.