Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
The once and future king + ol’ muskrat give their most sensible total nuclear annihilation takes. Fellas, are we cooked?
This has to be hands down the absolute dumbest take I’ve seen from Musk ever. Dude has the mental capacity of a boiled pear.
Considering they were saying this while having trouble doing internet radio at scale, a problem basically solved 20 years ago, I’m not sure we should listen to them.
Related to Musk, Trump and all the other fools: PrimalPoly revealing just how shallow and culture-war-brainwormed a thinker he is.
Image Description.
Musk: Happy to host Kamala on an 𝕏 Spaces too

PrimalPoly: Suggested questions for Kamala:

- How do crypto blockchains work, & why are so many Americans skeptical of Central Bank Digital Currencies?
- How would you stop the US gov’t from colluding with Big Tech social media companies to censor Americans?
- What is the main cause of inflation?
- What is a woman?

Description ends. Question I have for anybody with a screenreader: does this spoiler method work? And does the screenreader properly handle the letter X as used on twitter, namely 𝕏?
- They don’t & yall are “skeptical” of ID cards because it’s the Mark of the Beast, so go figure
- Easy, regulate Big Tech to the ground until there’s only Small-to-Medium Tech left
- Air, most of the time.
- Your mom.
Can I be a VP or at least Chief of Staff now
You have all my votes!
5.) Why do people keep calling us weird?
“Well, if you had read the paper on evolutionary psychology I did while looking at sex workers, accusations of weirdness are actually a sign of …”
Spoilers are an HTML element, so they should work everywhere; Mastodon just shows the text without the spoiler or CW. Letters in a different typeface specified by Unicode are announced the same as regular letters, since the typeface is treated as mere emphasis, to the dismay of mathematicians who would want “double-struck X” to be announced as such.
I cannot get over the fact that this man-child who is so concerned with “the future of humanity” is both outright trying to buy the presidency and downplaying the very real weapons that can easily wipe out 70% of the Earth’s population in 2 hours. Remember y’all, the cost of microwaving the world is negligible compared to the power of spicy autocomplete.
They are both stupid men who repeat stuff they hear to make themselves look good. So the question is: who, this time, are the “very smart people” telling numbnuts like these two that nuclear war is survivable - and by extension winnable? Because if that’s the US defense establishment, then yeah, we might be cooked.
Alex Jones for one is convinced USA would “win” a nuclear war, so
Watching this election has been amazing! LIKE WOAH, what a fucking obviously self-destructive end to delusion. Can I be optimistic and hope that, with EA leaning explicitly heavier into the hard-right Trump position, when it collapses and Harris takes it, maybe some of them will self-reflect on what the hell they think “Effective” means anyway?
The Bismarck Analysis crew were sneering at Sagan being a filthy peace activist so I would hazard that the era of ‘survivable nuclear war’ rides again.
I found the most HN comment of all time:
What sort of mating strategy are you optimizing for?
Optimizing mating strategies? But I’m terrible at chess!
please tell me the topic was repopulating endangered species of animals
oh who the fuck am I kidding, it’s the orange site
we’re talking about the endangered species of white cishet males here please don’t demean this important topic with jokes
Edit: I wish I were joking, but a luser with a classical Greek-ish handle replies
Women are optimizing 80% of men out of the gene pool.
What the men do is irrelevant.
What the men do is irrelevant.
What most of the orange site frequenting men do is indeed irrelevant, though for different reasons than they think.
Some nerds found the r/K selection strategy wikipedia pages.
That’s the state of the job market these days: even the pick-up artists have STEM degrees.
It’s amazing to watch them flock together like this, nature is beautiful 😍
This is what happens in the absence of a natural predator.
No joke but actually yes?
Looks like an opportunity for the DoT to introduce a breeding colony of traffic cones to the area!
I used to wonder if they had thought about deadlock/livelock re self driving cars. Thanks to modern technology I no longer have to wonder. Thanks!
Oh, this sounds like a dog I used to have as a kid! She needed more enrichment during the day or else she’d bark into the void all night and get super excited when another dog barked back.
Have they tried taking the waymos out for walkies?
saw a video of this yesterday, that “honk” title extremely understates how fucking dumb the problem is
in the video I saw, those dumb-ass things are literally crawling forward and back in the parking lot, because the one in front of it is also doing it, because…
yes, a multi-car movement deadlock, with a visually clear solution (which any human driver would be able to implement in seconds) that nonetheless still doesn’t happen because….? I guess waymo didn’t code in inter-car communication or something
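what the video shows is, in effect, a textbook circular wait: every car’s only rule is “don’t move until the space ahead clears”, and the cars happen to be arranged in a loop. a toy sketch of why that never resolves on its own (purely illustrative, nothing to do with Waymo’s actual code):

```python
def anyone_can_move(blocked_by):
    """blocked_by maps each waiting car to the car occupying its path.
    A car can move only if its blocker isn't itself a car stuck waiting."""
    return any(blocker not in blocked_by for blocker in blocked_by.values())

# Four cars in the lot, each waiting on the next - a cycle, so nobody ever moves.
lot = {"A": "B", "B": "C", "C": "D", "D": "A"}
anyone_can_move(lot)  # False, forever, until something outside the loop intervenes
```

any single car backing fully out of the cycle breaks the wait, which is exactly the “visually clear solution” any human driver would implement in seconds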
seriously, find a copy and watch. it’ll give a lovely kicker to your day :>
This came up in a podcast I listen to:
WaPo: "OpenAI illegally barred staff from airing safety risks, whistleblowers say"
archive link https://archive.is/E3M2p
OpenAI whistleblowers have filed a complaint with the Securities and Exchange Commission alleging the artificial intelligence company illegally prohibited its employees from warning regulators about the grave risks its technology may pose to humanity, calling for an investigation.
While I’m not prepared to defend OpenAI here I suspect this is just to shut up the most hysterical employees who still actually believe they’re building the P(doom) machine.
I mean, if you play up the doom to hype yourself, dealing with employees who take that seriously feels like a deserved outcome.
Short story: it’s smoke and mirrors.
Longer story: this is how software releases work now, I guess. A lot is riding on OpenAI’s anticipated release of GPT-5. They have to keep promising enormous leaps in capability because everyone else has caught up and there’s no more training data. So the next trick is that for their next batch of models they have “solved” various problems that people say you can’t solve with LLMs, and the models are going to be massively better without needing more data.
But, as someone with insider info, it’s all smoke and mirrors.
The model that “solved” structured data is empirically worse at other tasks as a result, and I imagine the solution basically just looks like polling multiple responses until the parser validates on the other end (so basically it’s a price optimization, afaik).
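The “polling until the parser validates” trick is simple enough to sketch; this is my guess at the general shape of it (illustrative only, not anyone’s actual backend code):

```python
import json

def structured_output(model, prompt, max_tries=5):
    """Sample completions until one parses as JSON.
    No new model capability - you just pay for extra completions."""
    for _ in range(max_tries):
        text = model(prompt)  # one (billed) sampled completion
        try:
            return json.loads(text)  # the "parser validating on the other end"
        except json.JSONDecodeError:
            continue  # reroll and hope
    raise ValueError(f"no parseable output in {max_tries} tries")
```

Hence “price optimization”: the only cleverness is in how few rerolls you can get away with.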
The next large model launching with the new Q* change tomorrow is “approaching agi because it can now reliably count letters”, but actually it’s still just agents (Q* looks to be just a cost optimization of agents on the backend, that’s basically it), because the only way it can count letters is by invoking agents and tool use to write a python program and feed the text into that. Basically, it is all the things that already exist independently, wrapped up together. Interestingly, they’re so confident in this model that they don’t run the resulting python themselves. It’s still up to you or one of those LLM wrapper companies to execute the (likely broken from time to time) code to, um… checks notes, count the number of letters in a sentence.
But, by rearranging what already exists and claiming it solved the fundamental issues, OpenAI can claim exponential progress, terrify investors into blowing more money into the ecosystem, and make true believers lose their mind.
Expect more of this around GPT-5 which they promise “Is so scary they can’t release it until after the elections”. My guess? It’s nothing different, but they have to create a story so that true believers will see it as something different.
Yeah, I’m not in any doubt that the C-level and marketing team are goosing the numbers like crazy to keep the bubble from bursting, but I also think they’re the ones most cognizant of the fact that ChatGPT is definitely not the Doom Machine. But I also believe they have employees who they cannot fire because they would spread a hella lot of doomspeak if they did, who are True Believers.
I also believe they have employees who they cannot fire because they would spread a hella lot of doomspeak if they did, who are True Believers.
Part of me suspects they probably also aren’t the sharpest knives in OpenAI’s drawer.
It can be both. Like, OpenAI is probably kind of hoping that this story spreads wide and is taken seriously, and has no problem suggesting, implicitly and explicitly, that their employees’ stocks are tied to how scared everyone is.
Remember when Altman almost got outed and people got pressured not to walk? That their options were at risk?
Strange hysteria like this doesn’t need just one reason. It just needs an input dependency and ambiguity; the rest takes care of itself.
Well, it’s now yesterday’s tomorrow and while there’s an update I’m not seeing a Q* announcement.
Q*
My understanding is that it was renamed or rebranded to Strawberry, which is itself nebulous marketing: maybe it’s the new larger model, or maybe it’s GPT-5, or maybe…
it’s all smoke and mirrors. I think my point is, they made some cost optimizations and mostly moved around things that existed, and they’ll keep doing that.
OH
I first saw this then later saw the “openai employees tweeted 🍓” and thought the latter was them being cheeky dipshits about the former. admittedly I didn’t look deeper (because ugh)
but this is even more hilarious and dumb
I’m not seeing a Strawberry announcement either.
Post from July, tweet from today:
It’s easy to forget that Scottstar Codex just makes shit up, but what the fuck “dynamic” is he talking about? He’s describing this like a recurring pattern and not an addled fever dream
There’s a dynamic in gun control debates, where the anti-gun side says “YOU NEED TO BAN THE BAD ASSAULT GUNS, YOU KNOW, THE ONES THAT COMMIT ALL THE SCHOOL SHOOTINGS”. Then Congress wants to look tough, so they ban some poorly-defined set of guns. Then the Supreme Court strikes it down, which Congress could easily have predicted but they were so fixated on looking tough that they didn’t bother double-checking it was constitutional. Then they pass some much weaker bill, and a hobbyist discovers that if you add such-and-such a 3D printed part to a legal gun, it becomes exactly like whatever category of guns they banned. Then someone commits another school shooting, and the anti-gun people come back with “WHY DIDN’T YOU BAN THE BAD ASSAULT GUNS? I THOUGHT WE TOLD YOU TO BE TOUGH! WHY CAN’T ANYONE EVER BE TOUGH ON GUNS?”
Embarrassing to be this uninformed about such a high-profile issue, no less one you’re choosing to write about derisively.
Surely this is 3 or 4 different anti-gun control tropes all smashed together.
let’s see how things are going on twitter
friend looked at this and said “ah race science geoguesser guy”
Genomeguesser
“Inexplicable Cimmerian Vibes” is the name of my next band.
Bonus points if this turns out to be the output of an LLM trained by phrenologists.
“Inexplicable Cimmerian Vibes”
But all the band members have the Homer Simpson bodytype. Sadly Okilly Dokilly stopped.
omg, next time my wife asks me how she looks, I’m definitely dropping that “legible magyar admixture”
Edit: Didn’t work. She started talking about how in the old country, the Hungarians chased her family out of the village for being religious minorities. I give this approach 0 bags of popcorn and a magen david.
“Babe, you’re looking Haplogroup I-M437 tonight. No. Not M-437. Damn, girl, you’re an M-438.”
When you really want to confuse your astronomy post-doc partner.
EDIT: I’ve been reliably informed that that’s too many Messier objects.
That’s certainly one approach to commenting on someone’s picture. Pretty sure it’s better to stick with the standard “Wow! 😍😍😍” but this certainly sticks out from the crowd?
the original is one of the new approach from the Dimes Square nazis, isn’t it?
In another forum I’m in, someone posted this article and asked if someone, anyone could understand it. I kept schtum.
In particular she felt like Anna, whom she’d been closest with, was being dishonest about what they hoped to achieve with this whole project. Sanje further alleged that Anna’s good standing largely stemmed from her incomprehensibility, because people don’t have a clue what this is actually all about. Possibly Anna doesn’t, either.
most straightforward hegelian
jfc what did I just spend 20m reading
I got as far as “Dimes Square bohemians” in the fourth sentence before realizing that everything in that article I recognized, I would regret.
Haela Hunt-Hendrix, the singer from the black metal band Litvrgy, was one of the principal organizers of this “symposium.” […] Besides making music, she seems to be interested in esoteric religious themes, numerology, and Orthodox iconography. In any case, Hunt-Hendrix claimed that Anna was stealing her ideas and twisting them in a “cryptofascist” manner.
Oh no! How could she!
Oh my god I only now notice the title is a reference to Tiqqun’s Preliminary Materials For a Theory of the Young-Girl, this is like a supercollision of dumbfuck cryptoreactionary nonsense I’ve obsessed over.
Hegelian e-girls’ VIP “symposium”
Excuse the rather formal philosophical Latin, but qvid in fvck?
I tried looking through the post to find out what possibly they could have to do with Hegel and found
Thankfully, Matthew shared the Googledoc the e-girls had sent him with their prepared remarks. My commentary over the next several paragraphs will only make sense if you read over them (they’re mercifully short), so I’d urge everyone to open up the hyperlink and give it a quick look.
Okay, first of all, it’s like 5 pages, “mercifully short” lol, go take a hike. Second,
Concrete philosophizing means applying insight to the alchemical transformation of everyday life.
This is in the first paragraph. I feel like reading this would make me devolve into an entire day of incoherent screaming and I have enough respect for my coworkers and loved ones to not subject them to that
The wikipedia page for TESCREAL is “disputed”, always a good sign when the online right launches skirmish actions against front-line trenches
‘TESCREAL’ refers to a nonsense conspiracy theory that disparages people such as Nick Bostrom without citing any sources that are credible on the question of whether Nick Bostrom is an ‘evil eugenicist’ or whatever.
WP:LOL. WP:LMAO even.
I’m ok with this because everytime Nick Bostrom’s name is used publicly to defend anything, and then I show people what Nick Bostrom believes and writes, I robustly get a, “What the fuck is this shit? And these people are associated with him? Fuck that.”
A credible source on whether Nick Bostrom is a weirdo is Nick Bostrom cited verbatim
WP:YEAHOK, WP:IWILLALLOWIT
Picked up an oddly good sneer from a gen-AI CEO, of all people (thanks to @ai_shame for catching it):
jesus, that’s telling. and I can 100% see that sentence forming in the heads of the types of people who fall over themselves to create something like these tools. so caught up in the math and the technical cool, they can’t appreciate other beauty
Not a sneer, but something that’ll inspire plenty of schadenfreude:
Brian Merchant: The artists fighting to save their jobs and their work from AI are gaining ground
Brian’s done plenty of good sneers on AI, I’d recommend checking him out
Can AI companies legally ingest copyrighted materials found on the internet to train their models, and use them to pump out commercial products that they then profit from? Or, as the tech companies claim, does generative AI output constitute fair use?
This is kind of the central issue to me honestly. I’m not a lawyer, just a (non-professional) artist, but it seems to me like “using artistic works without permission of the original creators in order to create commercial content that directly competes with and destroys the market for the original work” is extremely not fair use. In fact it’s kind of a prototypically unfair use.
Meanwhile Midjourney and OpenAI are over here like “uhh, no copyright infringement intended!!!” as though “fair use” is a magic word you say that makes the thing you’re doing suddenly okay. They don’t seem to have very solid arguments justifying them other than “AI learns like a person!” (false) and “well google books did something that’s not really the same at all that one time”.
I dunno, I know that legally we don’t know which way this is going to go, because the ai people presumably have very good lawyers, but something about the way everyone seems to frame this as “oh, both sides have good points! who will turn out to be right in the end!” really bugs me for some reason. Like, it seems to me that there’s a notable asymmetry here!
I dunno, I know that legally we don’t know which way this is going to go, because the ai people presumably have very good lawyers
You’re not wrong on the AI corps having good lawyers, but I suspect those lawyers don’t have much to work with:
- Pretty much every AI corp has been caught stealing from basically everyone (with basically everyone caught scraping without people’s knowledge or consent, and OpenAI, Perplexity, and Anthropic all caught scraping against people’s explicit wishes)
- Said data was used to create products which, either implicitly or [explicitly](https://archive.is/jNhpN), produce counterfeits of the stolen artists’ work
- Said counterfeits are, in turn, destroying the artists’ ability to profit from their original work and discouraging them from sharing it freely
- And to cap things off, there’s solid evidence pointing to the defendants being completely unrepentant in their actions, whether that be Microsoft’s AI boss treating such theft as entirely acceptable or Mira Murati treating the job losses as an afterthought
If I were a betting man, I’d put my money on the trial being a bloodbath in the artists’ favour, and the resulting legal precedent being one which will likely kill generative AI as we know it.
God, that would be the dream, huh? Absolutely crossing my fingers it all shakes out this way.
Stranger things have happened. But in either case, we should commit to supporting every effort. If one punch doesn’t work, throw another. Death by a million cuts.
Like, it seems to me that there’s a notable asymmetry here!
I think that’s a great framing here.
The link seems b0rked, do you have another one?
Fixed the link - thanks for catching it.
ah, jeez, AI bros are trying to make deepfakes even fucking worse:
Deep-Live-Cam is trending #1 on github. It enables anyone to convert a single image into a LIVE stream deepfake, instant and immediately
Most of the replies are openly lambasting this shit like it deserves, thankfully
“help artists with tasks such as animating a custom character or using the character as a model for clothing etc”
The “deepfake” and “(uncensored)” in the repo description have me questioning that ever so slightly
Who had Trump accusing the Harris campaign of using AI to inflate crowd size photos on their Election ‘24 bingo card? Anyway, I’m sure that being associated with fraud and fakes is Good For AI.
it going co-evolution on the same path as cybercoins did is chefskiss.bmp
Babe, new AI doom vector just dropped: AGI will corrupt knitting sites so crafters make Langford visual hack patterns![1]
https://www.zdnet.com/article/how-ai-scams-are-infiltrating-the-knitting-and-crochet-world/
[1] doom scenario is my interpretation, not actually included in ZDnet article.
Sadly, Langford hacks seem to have never achieved memetic takeoff. Having an internet legally enforced, on pain of death, to be text-only would probably be a good thing.
Jimmy Buffet fans in shambles.
yall might want to take notice of this thing https://discuss.tchncs.de/post/20460779
https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2024-08-14/Recent_research
STORM: AI agents role-play as “Wikipedia editors” and “experts” to create Wikipedia-like articles, a more sophisticated effort than previous auto-generation systems
ai slop in extruded text form, now longer and worse! and burns extra square kilometers of rainforest
People out there acting like “research” using LLMs is ethical
LLM, tell me the most obviously persuasive sort of science devoid of context. Historically, that’s been super helpful so let’s do more of that.
literally why would you do this. you can research anything you stupid bastards why would you make this
we propose the STORM paradigm for the Synthesis of Topic Outlines through Retrieval and Multi-perspective Question Asking
oh come the fuck on
The authors hail from Monica S. Lam’s group at Stanford, which has also published several other papers involving LLMs and Wikimedia projects since 2023 (see our previous coverage: WikiChat, “the first few-shot LLM-based chatbot that almost never hallucinates” – a paper that received the Wikimedia Foundation’s “Research Award of the Year” some weeks ago).
from the same minds as STOTRMPQA comes: we constructed this LLM so it won’t generate a response unless similar text appears in the Wikipedia corpus and now it almost never entirely fucks up. award-winning!
this will probably become a NotAwfulTech post after I explore a bit more, but here’s a quick follow-up to my post last stubsack looking for language learning apps:
the open source apps for the learning system I want to use do exist! that system is essentially an automation around reading an interesting text in Spanish (or any other language), marking and translating terms and phrases with a translation dictionary, and generating flash cards/training materials for those marked terms and phrases. there’s no good name for the apps that implement this idea as a whole so I’m gonna call them the LWT family for reasons that will become clear.
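the core loop all of these apps automate can be sketched in a few lines (a hypothetical simplification; the real LWT-family apps also track per-term learning status and scheduling):

```python
def make_cards(text, glossary):
    """Find marked terms from a translation glossary in a text and
    emit (front, back) flash cards for later spaced-repetition review."""
    return [(term, translation)
            for term, translation in glossary.items()
            if term in text]

make_cards("me gusta leer libros", {"leer": "to read", "libros": "books"})
# → [('leer', 'to read'), ('libros', 'books')]
```

everything else (the reader UI, the dictionaries, the Anki export) is plumbing around that loop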
briefly, the LWT family apps I’ve discovered so far are:
- LWT (Learning With Texts) is the original open source system that implemented the learning system I described above (though LWT itself originated as an open source clone of LingQ with some ideas from other learning systems). the Hugo Fara fork is the most recently-maintained version of LWT, but it’s generally considered finished (and extraordinarily difficult to modify) software. I need to look into LWT more since it’s still in active use; I believe it uses an Anki exporter for spaced repetition training. it doesn’t seem to have a mobile UI, which might be a dealbreaker since I’ll probably be doing a lot of learning from my phone
- Lute (Learning Using Texts) is a modernized LWT remake. this one is being developed for stability, so it’s missing features but the ones that exist are reputedly pretty solid. it does have a workable mobile UI, but it lacks any training framework at all (it may have an extremely early Anki plugin to generate flash cards)
- LinguaCafe is a completely reworked LWT with a modern UI. it’s got a bunch of features, but it’s a bit janky overall. this is the one I’m using and liking so far! installing it is a fucking nightmare (you have to use their docker-compose file only, with docker not podman, and absolutely slaughter the permissions on your bind mounts, and no you can’t fire it up native) but the UI’s very modern, it works well on mobile (other than jank), and it has its own spaced repetition training framework as well as (currently essentially useless) Anki export. it supports a variety of freely available translation dictionaries (which it keeps in its own storage so they’re local and very fast) and utterly optional DeepL support I haven’t felt the need to enable. in spite of my nitpicks, I really am enjoying this one so far (but I’m only a couple days in)
you have to use their docker-compose file only, with docker not podman, and absolutely slaughter the permissions on your bind mounts, and no you can’t fire it up native
yeah I have no idea what any of these words mean
singe marks from where the curses landed
speaking of
one of my endeavours over the last few days (although heavily split into pieces between migraines and other downtime) was to figure out how to segment containers into vlan splits (bc reasons), and to do this on podman
the docs will (by omission or directly) lie to you so much. the execution boundaries of root vs rootless cause absolutely hilarious failure modes. things that are required for operation are “Recommended” packages (in the apt/dpkg sense). utter and complete clownshow bullshit. it does my head in to think how much human time has been wasted on falling arse-over-face to get in on this shit purely after docker ran a multi-year vc-funded pr campaign. and even more to see, at every fucking interaction with this shit, just how absolutely infantile the implementations of any of the ideas and tooling are
our entire industry will regret using Docker in the relatively near term, but nobody will learn a damn thing from the mistake
handy to know about :)
Witnessed an AI doomer freaking out over a16z trying to deep-six SB1047.
Seems like the “AI doom” criti-hype is starting to become a bit of an albatross around the industry’s neck.
They shriek, endlessly, about their outgroups being cults while blocking anybody who disagrees with them (the actual cultiest thing you can do)
blocking anybody who disagrees with them (the actual cultiest thing you can do)
(the actual cultiest thing you can do)
the texture of a blade of grass is slightly rough. slightly ragged. impossible to describe to someone who has never experienced it
What’s SB1047 about? The post doesn’t clarify that in any way.
https://digitaldemocracy.calmatters.org/bills/ca_202320240sb1047
Have an AI regulation committee, and also give the committee their own hardware so that they can use that hardware to regulate the other hardware. Maybe.
What makes me laugh (well, chuckle silently) is how this cult member obviously has not gotten the memo that all legislation is useless and the only thing preventing non-aligned hostile AI is long-ass LW posts.
It’s some proposed California legislation.