? (I hit the title character limit)

  • @[email protected]
    33 points · 9 months ago

    Run simulations on what the best system of governance would be. You’d want to test across different cultures/countries/technological eras to get an idea of which would be the most resilient; maybe you’d get different results depending on what you were testing. Even the definition of “best system” would need a lot of clarification.

    • Riskable
      12 points · 9 months ago

      An AI would decide that an AI-driven dictatorship would be most effective at implementing whatever goals you gave it.

      • @[email protected]
        3 points · 9 months ago

        You’d obviously need to give it constraints such as “administrable by humans” and if you’re looking at different technological eras, AI wouldn’t be available to something like 99% of humanity.

      • JGrffn
        1 point · 9 months ago

        It wouldn’t be the worst idea to come out of it, to be honest.

    • @[email protected]
      3 points · 9 months ago

      Why bother with simulations of governance systems and not governance itself at that point?

      I do understand “the risk” of putting AI behind the steering wheel, but if you’re already going to be trusting it this far, the last step probably doesn’t actually matter.

    • OpenStars
      2 points · 9 months ago

      That leaves too much room for subjective interpretation - like ultimately the answer as to what system of governance will last the longest in a steady state will ofc be to kill all humans (bc that lasts for infinite time, and you can’t beat that kind of longevity!), while if you add the constraint that at least some must remain alive, it would be to enslave all humans (bc otherwise they’ll find some way to mess everything up), and if there is something added in there about being “happy” (more or less) then it becomes The Matrix (trick them into thinking they are happy, bc they cannot handle any real responsibility).

      Admittedly, watching the USA election cycle (or substitute that with most other nations lately; most corporate decisions work just as well for this) has made me biased against human decision making :-P. Like objectively speaking, Trump proved himself to be the “better” candidate than Hillary Clinton a few years ago (empirically I mean, you know, by actually winning), then he lost to Biden, but now there’s a real chance that Trump may win again, if Biden continues to forget which group he is addressing and thus makes it easy to spin the thought that he is so old as to be irrelevant himself and that a vote for him is in reality one for Kamala Harris (remember, facts such as Trump’s own age would only be relevant for liberals, but conservatives do not base their decisions on such trifling matters, it’s all about “gut feelings” and instincts there, so Biden is “old” while Trump is “not” - capiche?). Or in corporate politics, Reddit likewise “won” the protests.

      Such experiments are going on constantly, and have been for billions of years, and we are what came out of that :-D. Experiments with such socioeconomic systems have only gone on for a few thousand years, but it will be interesting to see what survives.