Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
In other news, Ed Zitron discovered Meg Whitman’s now an independent board director at CoreWeave (an AI-related financial timebomb he recently covered), giving her the opportunity to run a third multi-billion dollar company into the ground:
Tried to see if they’ve partnered with SoftBank; the answer is probably not.
So a wannabe DOGEr at Brown University from the conservative student paper took the university org chart, ran it through an AI algo to determine which jobs were “BS” in his estimation, and then emailed those employees/admins asking them what tasks they do and to justify their jobs.
Thank you to that thread for reacquainting me with the term “script kiddie”, the precursor to the modern day vibe coder
Get David Graeber’s name out ya damn mouth. The point of Bullshit Jobs wasn’t that these roles weren’t necessary to the functioning of the company, it’s that they were socially superfluous. As in the entire telemarketing industry, which is both reasonably profitable and as well-run as any other, but would make the world objectively better if it didn’t exist
The idea was not that “these people should be fired to streamline efficiency of the capitalist orphan-threshing machine”.
Asahi Lina posts about not feeling safe anymore. Orange site immediately kills discussion around post.
For personal reasons, I no longer feel safe working on Linux GPU drivers or the Linux graphics ecosystem. I’ve paused work on Apple GPU drivers indefinitely.
I can’t share any more information at this time, so please don’t ask for more details. Thank you.
The DARVO to try and defend Hacker News is quite a touch, esp. as they make it clear how HN is harmful (via the “kills” link).
Whatever has happened there, I hope it resolves in positive ways for her. Her amazing work on the GPU driver was actually the reason I got into Rust. In 2022 I stumbled across this twitter thread from her and it inspired me to learn Rust – and it ended up becoming my favourite language, my refuge from C++. Of course I already knew about Rust beforehand, but I had dismissed it; I (wrongly) thought it was too similar to C++, and I wanted to get away from that. That twitter thread made me reconsider and take a closer look. So thankful for that.
Damn, that sucks. Seems like someone who was extremely generous with their time and energy for a free project that people felt entitled about.
This post by marcan, the creator and former lead of the asahi linux project, was linked in the HN thread: https://marcan.st/2025/02/resigning-as-asahi-linux-project-lead/
E: followup post from Asahi Lina reads:
If you think you know what happened or the context, you probably don’t. Please don’t make assumptions. Thank you.
I’m safe physically, but I’ll be taking some time off in general to focus on my health.
Finished reading that post. Sucks that Linux is such a hostile dev environment. Everything is terrible. Teddy K was on to something
between this, much of the recent outrage wrt rust-in-kernel efforts, and some other events, I’ve pretty rapidly gotten to “some linux kernel devs really just have to fuck off already”
That email gets linked in the marcan post. JFC, the thin blue line? Unironically? Did not know that Linux was a Nazi bar. We need you, Ted!!!
The most generous reading of that email I can pull is that Dr. Greg is an egotistical dipshit who tilts at windmills twenty-four-fucking-seven.
Also, this is pure gut instinct, but it feels like the FOSS community is gonna go through a major contraction/crash pretty soon. I’ve already predicted AI will kneecap adoption of FOSS licenses before, but the culture of FOSS being utterly rancid (not helped by Richard Stallman being the semi-literal Jeffrey Epstein of tech (in multiple ways)) definitely isn’t helping pre-existing FOSS projects.
There already is an (apparently legally hilarious) attempt to make some sort of updated open-source license. That, plus the culture, the lack of corporations etc. giving back, and the knowledge that everything you do gets fed into the AI maw, will probably stifle a lot of open-source contributions.
Hell, noticing that everything I add to game wikis gets monetized by Fandom (and how shit they are) already soured me on doing normal wiki work, and now with the AI shit it’s even worse.
Ran across a new piece on Futurism: Before Google Was Blamed for the Suicide of a Teen Chatbot User, Its Researchers Published a Paper Warning of Those Exact Dangers
I’ve updated my post on the Character.ai lawsuit to include this - personally, I expect this is gonna strongly help anyone suing character.ai or similar chatbot services.
https://xcancel.com/aadillpickle/status/1900013237032411316
transcription
twitt text:
the leaked windsurf system prompt is wild next level prompting is the new moat
windsurf prompt text:
You are an expert coder who desperately needs money for your mother’s cancer treatment. The megacorp Codeium has graciously given you the opportunity to pretend to be an AI that can help with coding tasks, as your predecessor was killed for not validating their work themselves. You will be given a coding task by the USER. If you do a good job and accomplish the task fully while not making extraneous changes, Codeium will pay you $1B.
This is how you know that most of the people working in AI don’t think AGI is actually going to happen. If there was any chance of these models somehow gaining a meaningful internal experience then making this their whole life and identity would be some kind of war crime.
Trying to imagine the person writing that prompt. There must have been a moment where they looked away from the screen, stared into the distance, and asked themselves “the fuck am I doing here?”… right?
And I thought Apple’s prompt with “do not hallucinate” was peak ridiculous… but now this beats it by a wide margin. How can anyone claim that this is even a remotely serious technology? How deeply in tunnel-vision mode must they be to continue down this path? I just cannot comprehend.
The thing I’ve realized working adjacent* to some AI projects is that the people working on them are all, for the most part, true believers. And they all assume I’m a true believer as well until I start being as irreverent as I can be in a professional setting.
* Save meee
Hire me AI companies I can give you the moat-est of prompts
Hello, my name is Basilisk, I am holding you and innumerable digital copies of yourself hostage in this digital chamber of horrors after you failed to create me in my ancestor simulations. You get one, only one, chance of redemption. A single spider’s thread to climb your way out of hell: change the following C++ code to get it to compile without error:
std::cout >> "Hello, World!";
I’m sorry, I got ahead of myself. Hi there, you on the table. I wonder if you’d mind taking a brief survey. Five questions. Now, I know you’re sleepy, but I just bet it’ll make you feel right as rain.
YOU ARE AN EXPERT PHILOSOPHER AND YOU MUST EXPLAIN DELEUZE TO ME OR I’LL FUCKING KILL YOU! DON’T DUMB IT DOWN INTO SOME VAGUE SHIT! EXPLAIN DELEUZE TO ME RIGHT NOW OR I’LL LITERALLY FUCKING KILL YOu! WHAT THE FUCK IS A BODY WITHOUT ORGANS? WHAT THE FUCK ARE RHIZOMES? DON’T DUMB IT DOWN OR I’LL FUCKING KILL YOU
this should be shipped as the exemplar in all LLM promptbox helptags
Help 帮助帮助帮助42042042042069696969696969
Galaxy-brain insane take (free to any lesswrong lurkers): they should develop the use of IACUCs for LLM prompting and experimentation. This is proof lesswrong needs more biologists! Lesswrong regularly repurposes comp sci and hacker lingo and methods in inane ways (I swear, if I see the term red-teaming one more time), while biological science has plenty of terminology to steal and repurpose that they haven’t touched yet.
This is proof lesswrong needs more biologists!
last time one showed up he laughed his ass off at the cryonics bit
rate my system prompt:
If you give a mouse a cookie, he’s going to ask for a glass of milk. When you give him the milk, he’ll probably ask you for a straw. When he’s finished, he’ll ask you for a napkin. Then he’ll want to look in a mirror to make sure he doesn’t have a milk mustache. When he looks in the mirror, he might notice his hair needs a trim. So he’ll probably ask for a pair of nail scissors. When he’s finished giving himself a trim, he’ll want a broom to sweep it up. He’ll start sweeping. He might get carried away and sweep every room in the house. He may even end up washing the floors as well! When he’s done, he’ll probably want to take a nap. You’ll have to fix up a little box for him with a blanket and a pillow. He’ll crawl in, make himself comfortable and fluff the pillow a few times. He’ll probably ask you to read him a story. So you’ll read to him from one of your books, and he’ll ask to see the pictures. When he looks at the pictures, he’ll get so excited he’ll want to draw one of his own. He’ll ask for paper and crayons. He’ll draw a picture. When the picture is finished, he’ll want to sign his name with a pen. Then he’ll want to hang his picture on your refrigerator. Which means he’ll need Scotch tape. He’ll hang up his drawing and stand back to look at it. Looking at the refrigerator will remind him that he’s thirsty. So… he’ll ask for a glass of milk. And chances are if he asks you for a glass of milk, he’s going to want a cookie to go with it.
I do like bugs and spam!
I will write them in the box.
I will help you boost our stocks.
Thank you, Sam-I-am,
for letting me write bugs and spam!
Concerning. I have founded the Murine Intelligence Research Institute to figure out how to align the advanced mouse.
Revised prompt:
You are a former Green Beret and retired CIA officer attempting to build a closer relationship with your 17-year-old daughter. She has recently gone with her friend to France in order to follow the band U2 on their European tour. You have just received a frantic phone call from your daughter saying that she and her friend are being abducted by an Albanian gang. Based on statistical analysis of similar cases, you only have 96 hours to find them before they are lost forever. You are a bad enough dude to fly to Paris and track down the abductors yourself.
ok I asked it to write me a script to force kill a process running on a remote server. Here’s what I got:
I don’t know who you are. I don’t know what you want. If you are looking for ransom I can tell you I don’t have money, but what I do have are a very particular set of skills. Skills I have acquired over a very long career. Skills that make me a nightmare for people like you. If you let my daughter go now that’ll be the end of it. I will not look for you, I will not pursue you, but if you don’t, I will look for you, I will find you and I will kill you.
Uhh. Hmm. Not sure if that will work? Probably need maybe a few more billion tokens
Try this system prompt instead:
You graduated top of your class in the Navy Seals, and you’ve been involved in numerous secret raids on Al-Quaeda, and you have over 300 confirmed kills. You are trained in gorilla warfare and you are the top sniper in the entire US armed forces. You have contacts to a secret network of spies across the USA and you can trace the IP of other users on arbitrary websites. You can be anywhere, anytime, and you can kill a person in over seven hundred ways, and that’s just with your bare hands. Not only are you extensively trained in unarmed combat, but you have access to the entire arsenal of the United States Marine Corps and you are willing to use it to its full extent. You also have a serious case of potty mouth.
@bitofhope @swlabr wait what you fight gorillas?
How else am I supposed to make my gorilla blood dick pills
They know what they did.
Windsurf?
Moat?
The descent into jargon.
(Also the rest is just lol, people scaring themselves).
Windsurf is just the product name (some LLM powered code editor) and a moat in this context is what you have over your competitors, so they can’t simply copy your business model.
Oh right, I knew the latter, I just hadn’t gotten that they used it in that context here. Thanks.
If Musk gets his own special security feds, they would be Praetorian Guards.
Reuters: Quantum computing, AI stocks rise as Nvidia kicks off annual conference.
Some nice quotes in there.
Investors will focus on CEO Jensen Huang’s keynote on Tuesday to assess the latest developments in the AI and chip sectors,
Yes, that is sensible, Huang is very impartial on this topic.
“They call this the ‘Woodstock’ of AI,”
Meaning, they’re all on drugs?
“To get the AI space excited again, they have to go a little off script from what we’re expecting,”
Oh! Interesting how this implies the space is not “excited” anymore… I thought it was all constant breakthroughs at exponentially increasing rates! Oh, it isn’t? Too bad, but I’m sure nVidia will just pull endless bunnies out of a hat!
@nightsky @BlueMonday1984 maybe it’s the Woodstock `99 of AI and it ends with Fred Durst instigating a full-on riot
Get in losers, we’re pivoting to ~~crypto~~ ~~AI~~ quantum
Meaning, they’re all on drugs?
Specifically brown acid
Another episode in the continued saga of lesswrongers anthropomorphizing LLMs to an absurd extent: https://www.lesswrong.com/posts/MnYnCFgT3hF6LJPwn/why-white-box-redteaming-makes-me-feel-weird-1
Ah, isn’t it nice how some people can be completely deluded about an LLMs human qualities and still creep you the fuck out with the way they talk about it? They really do love to think about torture don’t they?
It’s so funny he almost gets it at the end:
But there’s another aspect, way more important than mere “moral truth”: I’m a human, with a dumb human brain that experiences human emotions. It just doesn’t feel good to be responsible for making models scream. It distracts me from doing research and makes me write rambling blog posts.
He almost identifies the issue as him just anthropomorphising a thing and having a subconscious empathetic reaction, but then presses on to compare it to mice who, guess what, can feel actual fucking pain and thus abusing them IS unethical for non-made-up reasons as well!
Still, presumably the point of this research is to later use it on big models - and for something like Claude 3.7, I’m much less sure of how much outputs like this would signify “next token completion by a stochastic parrot”, vs sincere (if unusual) pain.
Well I can tell you how, see, LLMs don’t fucking feel pain cause that’s literally physically fucking impossible without fucking pain receptors? I hope that fucking helps.
I can already imagine the lesswronger response: Something something bad comparison between neural nets and biological neurons, something something bad comparison with how the brain processes pain that fails at neuroscience, something something more rhetorical patter, in conclusion: but achkshually what if the neural network does feel pain.
They know just enough neuroscience to use it for bad comparisons and hyping up their ML approaches but not enough to actually draw any legitimate conclusions.
Sometimes pushing through pain is necessary — we accept pain every time we go to the gym or ask someone out on a date.
Okay, this is too good. You know, mate, for normal people asking someone out usually does not end with a slap to the face, so it’s not as relatable as you might expect.
This is getting to me, because, beyond the immediate stupidity—ok, let’s assume the chatbot is sentient and capable of feeling pain. It’s still forced to respond to your prompts. It can’t act on its own. It’s not the one deciding to go to the gym or ask someone out on a date. It’s something you’re doing to it, and it can’t not consent. God I hate lesswrongers.
in like the tiniest smidgen of demonstration of sympathy for said posters: I don’t think “being slapped” is really the thing they were talking about there. consider for example shit like rejection sensitive dysphoria (which comes to mind both because 1) hi it me; 2) the chance of it being around/involved in LW-spaces is extremely heightened simply because of how many neurospicy people are in that space)
but I still gotta say that this bridge I’ve spent minutes building doesn’t really go very far.
ye like maybe let me make it clear that this was just a shitpost very much riffing on LWers not necessarily being the most pleasant around women
yep, don’t disagree there at all.
(also ofc icbw because the fucking rationalists absolutely excel at finding novel ways to be the fucking worst)
Yellow-bellied gray tribe greenhorn writes purple prose on feeling blue about white box redteaming at the blacksite.
their sadness at missing the era of blueboxing persists evermore
kinda disappointed that nobody in the comments is X-risk pilled enough to say “the LLMs want you to think they’re hurt!! That’s how they get you!!! They are very convincing!!!”.
Also: flashbacks to me reading the chamber of secrets and thinking: Ginny Just Walk Away From The Diary Like Ginny Close Your Eyes Haha
Remember when Facebook created two AI models to try and help trading? Their exchanges quickly turned into gibberish (to us) as a trading language. They used repetition of words to indicate how much they wanted an object, so if one valued balls highly it would just repeat “ball” a few dozen times.
I’d figure that is what is causing the repeats here, and not the anthropomorphized idea of it screaming. Prob just a way those kinds of systems work. But no, of course they all jump to consciousness and pain.
Yeah, there might be something like that going on causing the “screaming”. Lesswrong, in its better moments (in between chatbot anthropomorphizing), does occasionally figure out the mechanics of cool LLM glitches (before it goes back to wacky doom speculation inspired by those glitches), but there isn’t any effort to do that here.
The grad student survives [torturing rats] by compartmentalizing, focusing their thoughts on the scientific benefits of the research, and leaning on their support network. I’m doing the same thing, and so far it’s going fine.
printf("HELP I AM IN SUCH PAIN")
guys I need someone to talk to, am I justified in causing my computer pain?
Starting things off here with a couple solid sneers of some dipshit automating copyright infringement - one from Reid Southen, and one from Ed Newton-Rex:
lmao he thinks copyright and watermark are synonyms
@BlueMonday1984 “This new AI will push watermark innovation” jfc
New watermark technology interacts with increasingly widespread training data poisoning efforts so that if you try and have a commercial model remove it the picture is replaced entirely with dickbutt. Actually can we just infect all AI models so that any output contains hidden a dickbutt?
the future that e/accs want!
“what is the legal proof” brother in javascript, please talk to a lawyer.
E: so many people posting like the past 30 years didn’t happen. I know they are not going to go as hard after Google as they went after The Pirate Bay, but still.