Why are the people in power always biased toward the most brain-dead decisions? Does this not sound wrong to any of them?
Risks of the AI going amok aside, there’s something appealing about not risking human lives during conflicts, karmically or economically speaking. You also remove the limitations of human bodies: suddenly your airframe can give 100% of its capabilities without turning the pilot into a milkshake.
Removing humans from your side of the war lowers the cost of going to war and allows for even more centralized power. It’s a lot easier to do morally bankrupt acts if you don’t need to convince a group of human soldiers to do it. Clearly you often can anyway, but going with robots is a lot cheaper and less risky.
It’s pretty obvious why powerful people would want this, and why it would be terrible for the rest of us, even without worrying about a hypothetical Skynet future.
It’s a lot easier to murder with robots, but they’re going to need new models if they want to replace all the raping, pillaging, bureaucratic corruption, oppression, etc…
There’s plenty of training material for those around, I understand…
Military intelligence is to intelligence what military music is to music.
It only comes in one variety and it’s not very good?
It only sounds problematic to someone who cares about the lives of ordinary people. But that’s not the sort of person who dedicates their whole life to increasing their own wealth and power by any means necessary. Those are the people who get to make the decisions.
Ah, my mistake.
I mean, by broad definitions of AI, beyond-visual-range air-to-air missiles already have AI making the decision to kill people.
And under even broader definitions of AI, there are landmines. Those have been around for a long time already. Their “AI” is just monumentally stupid; I wouldn’t mind having them be a bit more discriminating about whose limb to blow off.
Definitely! Reliable discrimination between combatants and non-combatants, and a chain of responsibility for that decision, is much more important than just having a human in the loop.