- cross-posted to:
- [email protected]
“[Generative AI] is the leaded gasoline of tech, where the boost to engine performance didn’t outweigh the horrific health impacts it inflicted.”
I love Ed so much.
This is the thought I’ve had too. This stuff takes a ridiculous amount of energy, and energy costs money. Ah, but they’re going to build big nuclear plants! But those cost money to build, and you have to pay nuclear engineers a lot of money to run them, and buy the uranium… it’s going to cost a lot of money.
They have to figure out how to get it to 1/1000 of its current cost to make any money off of it. They’ll need to cut so many corners that it probably won’t be much good at anything.
It’ll probably just be used by big “data driven” corporations to analyze our data and try to figure out how to sell products that barely anyone can afford.
For good or for bad, there are smaller, lighter LLMs than OpenAI’s.
Stellar read. So OpenAI et al keep going at the current rate, and there’s never any profit—when does it burst? How spectacular does it burst? Or will we simply have laid off a significant portion of the tech workforce and then it fizzles away?
The money in AI is going to be the wages of the people it replaces. Those tech billionaires call it a revolution because getting labor without wages is the promised land for cunt billionaires.
The AI revolution is here because that’s what the owners want. If you think they’ll wait until the AI is as good as humans to replace humans I’ll point out that self-checkout already exists.
The big problem here is that it’s simply not reliable enough to replace a worker. To actually replace anyone, you have to have the AI running unsupervised, and that can’t be done with the current technology. Something that can actually replace workers is not coming in the next 5 years or so, and I’m being generous.
Thing is, it doesn’t replace workers. And it won’t for the foreseeable future. Even Microsoft itself had to admit that its studies show AI-assisted coding is bad and makes developers worse.
There is hardly any market where these systems can reasonably compete with exploited humans. It’s just that the tech bros have nothing left to invest in. The same idiots that pushed crypto, NFTs and the Metaverse are now pushing AI. There is hardly any innovation anymore, so the only ways to make the line go up are rent seeking and investing in bubbles in the desperate hope that something might stick.
It seems like we have a problem where there’s too much money at the top of society chasing returns that can’t exist, because there isn’t enough money at the bottom to buy products, so it just gets invested in bad ways. This will probably continue until they waste enough of their own money on bullshit that they no longer have it.
You’re right in your analysis, but the prediction is wrong, I’m afraid.
The next “big thing” is taking over the government. See Musk and his gang. He’s not alone and the US isn’t the only country this is happening in. Corporations inject themselves into each and every transaction, every aspect of life and politics. That way they have essentially infinite money at their hands.
Eventually the only way rich people can get richer is by stealing from other rich people because the economy is so heavily weighted toward the rich that there’s just no money to extract from the poor. I think that’s related to what we’re seeing here … And probably everywhere else as well.
Not actually to discredit you, but I really would love to send some studies to some people I work with; by any chance, do you have links to the Microsoft studies?
Searching for AI and Microsoft is an absolute shit show, btw. There was also a study about the code quality, but I can’t find it among all the marketing bullshit.
Edit: just found another one https://futurism.com/openai-researchers-coding-fail
Translators and junior level devs.
Translation on a level an AI could do is already pretty cheap, nobody’s gonna throw a nuanced legal document at an AI and rely on it.
Junior devs are much smarter than any current AI, because they know what they want to achieve and why. There’s a reason why all the demos are toy examples. Actual code is messy and full of quirks because of weird requirements.
“Junior devs are much smarter than any current AI, because they know what they want to achieve and why”
Oh sweet summer child
See, acting like a condescending asshole without any substance is a task that AI may never take from us.
Are you a junior developer by any chance?
That’s exactly what I mean. Condescending, arrogant, and confidently incorrect.
bring back 00’s google search that shit was smarter than any of these ais
Have you tried one of the google search alternatives?
- Kagi
- Qwant
- Ecosia
- DuckDuckGo
Of course, they cannot compensate for the shithole the whole internet has become.
Google Now and Google Inbox did stuff that’s beyond what AI can achieve in practice now. Both were shut down for being unprofitable XD
Aw I miss Google inbox 😢 time to put a flower on the google graveyard 🪦😋
OpenAI loses money on every single paying customer, just like with its free users. Increasing paid subscribers also, somehow, increases OpenAI’s burn rate. This is not a real company.
🔥
Data centers and electrical companies will be the new Rockefellers.
It’s almost as if the “aRtIfiCiAl iNtElLiGenCe” is as big a cult as blockchain is, isn’t it?
And I already have the next bubble ready: https://youtu.be/wSHmygPQukQ
There could be an AI revolution. Send Elon and the rest of the billionaires into orbit, and they’ll revolve around the Earth; they’re all artificially intelligent, after all.
I’d prefer that revolution be a decaying orbit around the sun.
Hitting the sun is surprisingly difficult
Launching them into deep space knowing that they’ll never enter another star system, on the other hand…
Look launching our billionaires into deep space is no better than interstellar littering.
We should be better than that.
I bet we could launch them into Jupiter or Saturn no problems.
Even the Moon will do
I mean, I never said anything about space suits or capsules, so the destination could be wherever!
The revolution will not be monetized.
Local models do minor witchcraft. Grifters and morons keep trying to coerce absolute truth out of them, which is the opposite of how they work - but they do work. They have demonstrable utility. Spicy autocomplete is useful, actually. Code in a language I’ve never learned doesn’t have to be flawless to be helpful. Bad summaries of complex articles probably still beat skimming them. Being able to hum a song into existence is just plain cool. Yes, I could do all that work better, if I did it myself, but getting a half-assed job from no talent and no effort is obviously a desirable option. Lamenting the blow to bespoke artisanal whatever versus mechanization is what old sounds like.
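For anyone who hasn’t tried it, here’s roughly all it takes to get that half-assed-but-free utility on your own machine. A minimal sketch, assuming the Hugging Face transformers library; the model name is just one example of the tiny-instruct-model class, swap in whatever fits your RAM.

```python
# Minimal sketch, not a product: a small local model doing "spicy autocomplete".
# Assumes the Hugging Face `transformers` library; the model name is only an
# example of the tiny-instruct-model class -- swap in whatever fits your RAM.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # ~1 GB of weights, runs fine on CPU
)

prompt = "Write a Python one-liner that reverses the words in a sentence."
result = generator(prompt, max_new_tokens=80, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```

Half-assed output, no talent, no effort, no subscription. That’s the whole pitch.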
OpenAI is fucked because they bet on mainframes. Mainframes have never been the right approach. There was a brief window circa 1970 where it was a plausible option and the plausible option, but local power has been the better choice since 1977 at the absolute latest. Even battery-powered pocket computers didn’t make remote computing sensible. Siri and Alexa only did it to spy on you. OpenAI did it because maximum scale meant minimum competition.
If it genuinely took zillion-parameter models to deliver Asimov-grade results, they wanted to get there first. But that’s not what zillion-parameter models deliver… and you don’t need a zillion parameters for what they do deliver.
The future of this tech is programs that spin up your GPU for a couple minutes when you ask for weird shit. You want a sketch inked? Brrrr here you go. You want a draft outline for the paper you just wrote? A transcript of an eight-hour security video? A ringtone that’s kinda like these three other ringtones? Fetish pornography of your Sonic fan character? Brrrrrrrr. Computer just does what you ask, in mostly plain English, to the best of its abilities.
That’s not gonna be a $200/mo subscription - for reasons unrelated to the economic value of that silly bullshit. You can do it yourself with $200 of RAM. The gap between what you’ll get and what they claim will be smaller than the gap between what they claim and what they actually give.
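To put numbers on “do it yourself”: a minimal sketch of local text-to-image, assuming the Hugging Face diffusers library and a consumer GPU. The checkpoint name is only an example.

```python
# Minimal sketch: local text-to-image, no subscription. Assumes the Hugging Face
# `diffusers` library and a consumer GPU; the checkpoint is only an example.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo",      # small, fast checkpoint; swap as you like
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="ink sketch of a knight stuffing an elf's cottage full of cabbages",
    num_inference_steps=4,         # turbo-style models need very few steps
    guidance_scale=0.0,
).images[0]

image.save("sketch.png")
```

Brrrr for a couple minutes, then the GPU goes back to sleep.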
But it will be a revolution.
Games are the obvious use case everyone suggested, back when GPT-3’s output was more dream sequence than fanfiction. This tech allows an elf to muster a reaction when you fill their house with cabbages. Or kill a dragon with a bucket. Or walk in wearing nothing but the emperor’s crown. You can have a whole-ass conversation about the stupid shit you’re doing, and it won’t pass the Turing test, but it can stay relevant to any wacky nonsense that happens. And related tech can voice this dialog with about the same performance that a human actor gave. Not necessarily ‘in that actor’s voice,’ because why the fuck would every NPC sound like a specific human actor? If the tech can make plain text sound like Nolan North, it can make Nolan North sound like a thousand distinctly different people.
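A made-up illustration of how the cabbage elf could be wired up - the model, the event log, and the character are all assumptions for the sketch, not any shipping game’s API:

```python
# Made-up illustration of the "elf reacts to cabbages" idea: dump recent game
# events into a small local model and ask for one in-character line. The model,
# the event log, and the character are assumptions, not any real game's API.
from transformers import pipeline

npc_brain = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

events = [
    "The player walked in wearing nothing but the emperor's crown.",
    "The player has placed 137 cabbages on the floor.",
]

prompt = (
    "You are Faelar, a fussy wood elf shopkeeper. React in one short line of "
    "dialogue to what just happened:\n- " + "\n- ".join(events) + "\nFaelar:"
)

reply = npc_brain(prompt, max_new_tokens=40, return_full_text=False)
print(reply[0]["generated_text"].strip())
```

It won’t be Shakespeare, but it will be about the cabbages, which is the point.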
Video’s the drum I keep banging, because it will dissolve Hollywood at its foundations. How many celebrated animators can’t get their badass pet projects greenlit? They won’t need other people’s money, if they can draw keyframes, and the computer “tweens” like it’s told to. They may not need to ink or color every frame. They can still have complete control, as surely as leaning over a junior animator’s shoulder. We’re not talking about typing “genndy tartakovsky movie 2025 five stars” and posting whatever comes out. This is a force multiplier for artists - and it treats shocking photorealism as just another art style.
This cool shit was not possible before. We didn’t know how to make computers do this. We’d been trying! Deep neural networks do shit we barely understand, and some of it is science fiction. Lots of ‘only a human could–’ went out the window. It still didn’t get us metal men walkin’ around all impudent, but dishwashers and spreadsheets weren’t that either, and they still absorbed a bunch of labor that used to require real thinking people.
Ed ignores this. Yeah yeah yeah, he always hand-waves there must be some users and some uses, but ev-er-y fucking post is “AI is a complete scam!!!” and then talking about businesses instead of tech. Namedropping the tech and then condemning the business is dishonest. And then he says shit like “generative AI is OpenAI,” when Stable Diffusion is right fuckin’ there, and DeepSeek dropped, what, last week? OpenAI could collapse overnight (inshallah) and very little would change. For tech, anyway. The tech would continue apace, along with the applications it actually works for, and we’d get more and more tricks where your computer just does that.
Key point: your computer.
Those GPUs to run the AI? Still cheaper than wages! And I’ve met plenty of people in my life that were far dumber than a small LLM.
No, they are not. Why do you think there’s not a single AI company that’s making a profit from an actual product/service? The only ones with a real business plan are nvidia and other shovel merchants.