• BitSound@lemmy.world · 1 year ago

    > Assuming we made an AGI that could predict every word I said perfectly, that would simply prove there is no free will, not that a computer has intelligence.

    But why? Also, “has free will” is exactly equivalent to “I cannot predict the behavior of this object”. This is a whole separate essay, but “free will” is relative to an observer. Nobody thinks a rock has free will. Some people think cats have free will. Lots of people think humans have free will. That lines up exactly with how hard each one’s behavior is to predict. You don’t have free will to an omniscient observer, but that observer must have above-human-level intelligence. If that observer happens to be built out of silicon, it doesn’t really make a difference.

    > Fundamentally, AI produced in the current style cannot be intelligent because it cannot create new things it has not seen before.

    But it can. It uses its prior experience to produce novel output, much like humans do. Hell, I’d say most humans wouldn’t pass your test for intelligence, and in fact they’re just 3 LLMs in a trenchcoat.

    > https://en.m.wikipedia.org/wiki/Chinese_room

    Yeah, the reality is that we’ve built a Chinese room. And saying “well, it doesn’t really understand” isn’t sufficient anymore. In a few years, are you going to be saying “we’re not really being oppressed by our robot overlords!”?

  • Brocken40 · 1 year ago

      I’m saying that if there is anyone, including an omnipotent observer, who can predict a human’s actions perfectly, that is proof that free will doesn’t exist at all.