Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this)
The job site decided to recommend me an article calling for the removal of most human oversight from military AI on grounds of inefficiency, which is a pressing issue since apparently we’re already living in the Culture.
The Strategic Liability of Human Oversight in AI-Driven Military Operations
“Oh unknowable genie of the sketchily curated datasets, Claude, come up with an optimal ratio of civilian to enemy combatant deaths that will allow us to bomb that building with the giant red cross that you labeled an enemy stronghold.”

This is awful for sure, but thankfully low impact. Turns out that this Terminator Enjoyer is an unemployed idea guy. Maybe he’s wrangling for an IDF contract?
Also, the image is perfect. I especially like the Joe Kucan-looking general embedded in the star trek tactical station. The Technology of Peace ain’t what it used to be, is it?
human oversigh :(
and daiquiri nominations :)
Eliminating Mothman is our prime strattgic priority
Private Bbailcy! I see you back there! Cut it out with the oversighing, you’re dragging down morale KPIs for this quarter!
Is that a screenshot from Command&Conquer 4?
what if C&C4, but in the metaverse?
So, ethics and legality are strategic liabilities? Jesus fucking Christ, that’s not even sneer-worthy. This guy is completely fucking insane.
If you’ve convinced yourself that you’ll mostly be fighting the AIs of a rival always-chaotic-evil alien species or their outgroup equivalent, you probably think they are.
Otherwise, I hope shooting first and asking questions later will continue to be frowned upon in polite society, even if it’s automated agents doing the shooting.
This is straight-up Hague material right there; all he wants is plausible deniability.
Computer said so 🥺
e: that’s a shit take for several reasons, and we have autonomous killers already. it’s called air defense (in some modes), because how many civilians are going at mach fuck with an RCS of 0.1 m²? that’s no civilian, that’s a ballistic missile. also lmao at “speed of decision”
perun video on this topic: https://m.youtube.com/watch?v=tou8ahLZvP4
Honestly the most surprising and interesting part of that episode of Power(projection)Points with Perun was the idea of simple land mines as autonomous lethal systems.
Once again, the concept isn’t as new as they want you to think, moral and regulatory frameworks already exist, and the biggest contribution of the AI component is doing more complicated things than existing mechanisms can, but doing them badly.