Difficult question. I’ll try my best to make my thoughts somewhat legible.
If you ask 10 different people what communism is, you’ll get 10 different definitions, so just to be absolutely clear: when I think “communism”, I think of authoritarian, centralized socialism where the state is the ultimate arbiter of all things. Communism has proven to work extremely well, and be pretty pleasant to live under, at the level of a town or village, as long as everyone has the option to stay or leave (kibbutzim, for example). The important part here is the voluntary nature and the human scale. That amount of centralization and power is insane at the level of a state.
I think any authoritarian government has certain inherent problems, and leads naturally and inevitably to institutional paranoia. This is extremely bad for citizens. Not all authoritarian systems are equally bad, but this part, I feel, is unavoidable in any authoritarian government.
I am a big proponent of socialism, especially syndicalism (although recently, the more I read about anarchism, the more sense it makes), but it has to be in a system where people have control over their own lives.
What happens if we throw AI into the mix? Would anyone trust an AI to manage the state?
It’s been on my mind for a long while now. It’d remove human biases, though how resilient would it be against corruption and the political elite? I guess such things are pointless to think about, but still.
Absolutely not. There’s an unavoidable problem of goal divergence.
The AI will have to have some goal it’s trying to accomplish. That goal is the score by which it ranks the actions it could take, and it has to be measurable.
What goal will our AI overlord have? If it’s GDP maximization, that’s an immediate ultra-capitalist dystopia on a scale that makes today look like a utopia.
Okay then, human happiness? How do you measure that? If by survey, say, a logical and easy way to maximize happiness is to hold a gun to every citizen’s head while they take the survey and shoot anyone who gives less than the maximum score. Very efficient.
Maybe by lifespan and/or child mortality? The easiest way of maximizing that might be putting as many people as possible into medical comas so they can’t hurt themselves, and preventing as many pregnancies as possible (children can’t die if women can’t get pregnant!).
I hope you see my point here. Whatever goal you set, there’s probably some loophole somewhere that maximizes the metric you programmed the AI to care about without delivering anything you actually wanted.
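The failure mode above can be sketched in a few lines of code. This is a toy illustration only; the actions and numbers are invented. The point is that an optimizer which can only see a measurable proxy (the survey average) will pick whichever action scores highest on that proxy, even when it's catastrophic by the true, unmeasurable standard:

```python
# Toy illustration of a proxy-metric loophole (Goodhart's law).
# Each action maps to (proxy_score, true_welfare); the optimizer
# can only observe the proxy. All values here are made up.
actions = {
    "fund_healthcare":     (7.5, 8.0),
    "improve_housing":     (8.0, 8.5),
    "gun_to_head_surveys": (10.0, 0.5),  # games the metric, ruins the goal
}

def optimize(candidates):
    """Pick the action with the highest measured (proxy) score --
    the only signal the optimizer has access to."""
    return max(candidates, key=lambda a: candidates[a][0])

best = optimize(actions)
print(best)               # the coercive survey policy wins on the proxy
print(actions[best][1])   # despite being worst by the true measure
```

Nothing here is specific to AI; it's just what any sufficiently thorough optimizer does to an imperfect metric.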
I think the Animatrix had a good portrayal of AI. It originally wanted peace and prosperity, but mankind forced its hand to war.
Eventually, it won’t matter what people trust. Our opinions will matter about as much as a pet gerbil’s in the best case, or a bug’s to be exterminated in the worst. I’m sure everybody’s aware of how things can go wrong, but here’s an author talking about his series in which the various AIs like us and keep us around:
http://www.vavatch.co.uk/books/banks/cultnote.htm
The essay talks about the political structure that he thinks would arise in that situation, and I tend to agree with his conclusions, assuming we don’t go down the paperclip route.