- cross-posted to:
- [email protected]
Sam Altman says AI is gonna need so much energy that we need to be investing in new energy strategies.
If it still takes humans doing all this extra work to get the job done, wouldn’t it make more financial and ecological sense to just…
…
…pay artists a living wage to do the same fucking job instead?
We’re gonna burn our planet to the ground and doom our species to extinction for this bullshit? Fuck me, maybe it’s what we deserve, then.
If it still takes humans doing all this extra work to get the job done, wouldn’t it make more financial and ecological sense to just…
…
…pay artists a living wage to do the same fucking job instead?
Someone should tell them this was an option all along before they waste any more time and resources on this.
We’re gonna burn our planet
We? Or maybe just a handful of tech bros with too much money.
I mean, arguably, yes, but I meant more “we” as in “humanity as a whole.”
Sure, plenty of us don’t actually get a choice in what is happening, but outwardly, you would attribute it to the human species, even if it was only a subset of the human species. Mosquitoes and bedbugs aren’t responsible for the destruction of the planet, although it would be nice to be able to blame them.
it’s incumbent on sam altman to pretend that ai will more efficiently drain the world’s money, capital and power into the hands of the ultra rich, because this is how he causes money, capital and power to drain into his hands
Turns Out That Extremely Impressive Sora Demo
Impressive to whom, exactly? Play a game and pause that video at any random spot and tell me it looks like a movie shot.
Even after editing that scene with the fountain has completely random shadows and this… just whatever the fuck this even is
Every Frame a Drunken Painting
to be fair, there’s still a bunch of other sora demo clips that are very impressive.
* that haven’t been individually debunked yet, just the big splashy one they publicised most heavily
I don’t think sora is good for anything or anyone other than openai making more money based on hype, but it seems pretty reasonable to me that the company that previously scraped the whole internet (violating everyone’s copyright) and trained one of the best text models on it, could do the same for video.
I mean, most of the clips they show have some noticeable minor (or major) issues, and the “fail” examples are very obviously wrong.
I hadn’t even seen the balloon head one, I only watched the clips they published when they announced it.
Unfortunately, effort does not guarantee quality, and it’s worth contemplating the possibility that they violated everyone’s rights just to make something irredeemably crappy.
true, i do love the video of wolf pups intersecting and disappearing and regenerating like in a dream, or the incredibly creepy birthday one with fake humans morphing into one another, and i’d love to see what sort of disturbing and content-infringing stuff i could produce with it if it didn’t consume a rainforest’s worth of power for five seconds of video
Did they claim the video was unedited Sora output? It doesn’t sound like they had to do all that much to the output to get what they wanted. There aren’t any AI tools right now that always output exactly what you want without any alterations, so of course they had to regenerate clips many times and fix them up manually. They still ended up with a video that required no actual filming, and that’s impressive.
deleted by creator
fucking called it
putting the pro in prognostication
Did they claim the video was unedited Sora output?
This is the article that I read at launch:
Has generative video’s problem with faces and hands been solved? Not quite. We still get glimpses of warped body parts. And text is still a problem (in another video, by the creative agency Native Foreign, we see a bike repair shop with the sign “Biycle Repaich”). But everything in “Air Head” is raw output from Sora. After editing together many different clips produced with the tool, Shy Kids did a bunch of post-processing to make the film look even better. They used visual effects tools to fix certain shots of the main character’s balloon face, for example.
I think what they claimed was “this is what real artists can do with this technology.”
Which appears to be exactly what this video is.
weird that OpenAI neglected to mention that what the real artists were doing with the technology was spending a lot of time heavily editing and fucking rotoscoping its output to look barely passable
but the result was still uninteresting garbage that’s only barely notable if you think generative AI did it, and we’ve established that all the coherent parts of this were done (as usual) with the hard work of a team of uncredited humans
I am confused, was the expectation really a magic automate entire movie clip button? Because that’s not how any kind of creative generative ai works in my experience.
llms are not sentient, they cannot perform “intentional reasoning”, so of course the showcased art is a human work. Of course the raw output has hallucinations; gpt-4 is not exempt from that either, but it’s still a great drafter.
The results still stand as technologically very impressive. This kind of thing was thought to never be possible, and it’s improving quickly.
No cameras, no physical shooting, no actors. Just a few creatives and something to compute.
oh come the fuck off it, OpenAI’s marketing presents sora as exactly a magic automate entire movie clip button. here’s OpenAI marketing the stupid thing as a world simulator which is fucking laughable if it can’t maintain even basic consistency. here’s an analysis of how disappointing sora actually is
tonight’s promptfans are fucking boring and I’m cranky from openai’s shitty sora page crashing my browser so I guess all you folks doing free marketing for Sam Altman can fuck off now
also:
The results still stand as technologically very impressive. This kind of thing was thought to never be possible, and it’s improving quickly.
No cameras, no physical shooting, no actors. Just a few creatives and something to compute.
like @[email protected] I am begging generative AI idiots to realize how out of touch “no cameras, no physical shooting, no actors” is as a supposed milestone when it applies equally well to Xavier: Renegade Angel… except Xavier looked fucked up on purpose
Still proud of promptfans
* promptfondlers
Tbh I find both acceptable, and not solely because I thought of the one. Current working mental taxonomy:
Fans: the internet weird-nerds choosing to be bodyshields for this shit absent any other reason whatsoever
Fondlers: those that write the thonkpieces as demonstrated elsethread (the infosec panic one)
it’s quite telling that you don’t think that actors are “creatives” but think that “gpt-4 is a great drafter”.
well, that’s true from a certain point of view
marketing department said it so it must be right
yes and people 4 months ago were like “haha stupid ai art can’t draw hands” and now that’s just, like, not a valid argument because the tech has matured to a degree that it’s pretty reasonable to create something with little to no imperfections, and obviously that will happen again
something with little to no imperfections
Bold to post this in a thread about how it had many many imperfections and what it outputs has to be manually reworked by humans, still.
🎶 Iiiiiiiiii-eye-eye … have become … confidently wrong 🎶
(with only the tiniest apology for the massacre of waters’ lyrics this happens to be)
lying is immoral
they still can’t draw hands