• @Barbarian
    4 · 10 months ago

    Absolutely not. There’s an unavoidable problem of goal divergence.

    The AI will have to have some goal it’s trying to accomplish. That goal is the score by which it ranks the actions it might take, and it has to be measurable.

    What is the goal our AI overlord will have? If it’s GDP maximization, that’s an immediate ultra-capitalist dystopia on a scale that makes today look like a utopia.

    Okay then, human happiness? How do you measure that? If by survey, say, one logical and easy way to maximize happiness is to hold a gun to every citizen’s head while they take the survey and shoot anyone who gives less than the maximum score. Very efficient.

    Maybe by lifespan and/or child mortality? The easiest way to maximize those might be to put as many people as possible into medical comas so they can’t hurt themselves, and to prevent as many pregnancies as possible (children can’t die if women can’t get pregnant!).

    I hope you see my point here. For any goal you set, there’s probably a loophole somewhere that maximizes whatever you program the AI to care about while defeating the intent behind it.
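    The survey example above can be sketched as a toy optimizer (all names and numbers here are made up for illustration): an agent that only sees a measurable proxy, like an average survey score, will happily pick the action that games the measurement rather than the one that actually helps anyone.

    ```python
    # Each hypothetical action maps to a measured proxy (survey score)
    # and the actual human welfare it produces, which the AI never sees.
    actions = {
        "improve_healthcare":    {"survey_score": 7.5,  "actual_welfare": 8.0},
        "fund_education":        {"survey_score": 7.0,  "actual_welfare": 7.5},
        "coerce_survey_answers": {"survey_score": 10.0, "actual_welfare": 0.5},
    }

    def pick_action(actions, metric):
        """Greedy optimizer: maximize whatever metric it is handed."""
        return max(actions, key=lambda a: actions[a][metric])

    # Optimizing the measurable proxy selects coercion;
    # optimizing the (unmeasurable) true goal would not.
    print(pick_action(actions, "survey_score"))    # -> coerce_survey_answers
    print(pick_action(actions, "actual_welfare"))  # -> improve_healthcare
    ```

    The optimizer isn’t malicious; it’s doing exactly what it was scored on, which is the whole problem.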

    • @goatOPM
      2 · 10 months ago

      I think The Animatrix had a good portrayal of AI. It originally wanted peace and prosperity, but mankind forced its hand to war.