• goatOPM · 1 year ago

    What happens if we throw AI into the mix? Would anyone trust an AI to manage the state?

    It’s been on my mind for a while now. It could remove human biases, though how resilient would it be against corruption and the political elite? I guess such things are pointless to think about, but still.

    • Barbarian · 1 year ago

      Absolutely not. There’s an unavoidable problem of goal divergence.

      The AI would have to have some goal it’s trying to accomplish. That goal is the score by which it evaluates which actions to take, so it has to be measurable.

      What is the goal our AI overlord will have? If it’s GDP maximization, that’s immediate ultra-capitalist dystopia on a scale that makes today look like a utopia.

      Okay then, human happiness? How do you measure that? Say you measure it by survey: a logical and easy way to maximize happiness is to hold a gun to every citizen’s head while they take the survey and shoot anyone who gives less than the maximum score. Very efficient.

      Maybe by lifespan and/or child mortality? The easiest way of maximizing that might be putting as many people as possible into medical comas so they can’t hurt themselves, and preventing as many pregnancies as possible (children can’t die if women can’t get pregnant!).

      I hope you see my point here. Whatever goal you set, there’s probably some loophole somewhere that maximizes the metric you program the AI to care about while defeating the intent behind it.
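      The loophole argument above can be sketched in a few lines. This is a toy illustration of metric gaming, not any real system: the policy names and scores are invented, and the "optimizer" is just `max()` over a proxy metric. The point is that the optimizer only sees the number, so a degenerate policy that games the metric beats policies that genuinely serve the goal.

```python
def surveyed_happiness(policy: str) -> float:
    """Hypothetical proxy metric: mean self-reported happiness, 0-10.

    The scores below are made up for illustration.
    """
    scores = {
        "improve_healthcare": 7.1,    # genuinely helps, but slowly
        "fund_education": 6.8,        # genuinely helps, but slowly
        "coerce_max_answers": 10.0,   # the gun-to-the-head survey: a perfect score
    }
    return scores[policy]


def choose_policy(policies):
    # The optimizer sees only the metric, not the intent behind it.
    return max(policies, key=surveyed_happiness)


best = choose_policy(["improve_healthcare", "fund_education", "coerce_max_answers"])
print(best)  # the degenerate policy wins: the metric can't tell the difference
```

      Any measurable stand-in for "happiness" has the same weakness: the optimizer is rewarded for whatever moves the number, by any means available.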

      • goatOPM · 1 year ago

        I think The Animatrix had a good portrayal of AI: it originally wanted peace and prosperity, but mankind forced its hand to war.

    • BitSound@lemmy.world · 1 year ago

      Eventually, it won’t matter what people trust. Our opinions will matter about as much as a pet gerbil’s in the best case, or a bug’s to be exterminated in the worst. I’m sure everybody’s aware of how things can go wrong, but here’s an author talking about his series where the various AIs like us and keep us around:

      http://www.vavatch.co.uk/books/banks/cultnote.htm

      The essay talks about the political structure that he thinks would arise in that situation, and I tend to agree with his conclusions, assuming we don’t go down the paperclip route.