That’d be the only sensible reading of Hollywood imagining the text-to-images technology will save their sequence-of-images business from the tyranny of paying people for text.
But I’m not sure “sensible” applies. The same zeal was applied to multiple stages of crypto bullshit. Some tech-bros just latch onto the newest thing, as a cargo cult, and sell people on what they imagine it could do.
Meanwhile I’m kinda hyped for AI because I’ve seen all the weird shit it can do, and I am excited for clever applications of that weird shit. I’d been kicking around ways to make animation as approachable as comics. Motion-vector continuation for complex details that don’t need constant repainting. Image-space wireframe manipulation. Deferred pipelines for smooth rainbow-colored doodles that take on detail and lighting automatically. I haven’t touched any of it in two years. What’s the point? I can’t know which parts will go from jaw-dropping to underwhelming within months.
That said, it’s not like cutting-edge AI companies are doing things sensibly. Sora should not be spitting out jump-cuts. What you want are whole takes. Leave the editing to humans, because it’s a crucial part of conveying meaning through the footage. And the fact that it’s limited to short clips anyway means they’re still spitting out the whole damn thing at once, instead of generating the next frame from previous frames, or tweening frames out of adjacent frames. Will that limited-scope approach have issues? Sure. But going from thirty seconds of footage to forty won’t require a new generation of video cards.
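By “tweening” I mean filling in-between frames from their neighbors. At its dumbest that’s just a crossfade; a real tool would push pixels along motion vectors instead. But even the toy version makes the scope point: it only ever touches two frames at a time, never the whole clip. A sketch (naive linear blend, my illustration, not anything Sora actually does):

```python
import numpy as np

def naive_tween(frame_a, frame_b, t):
    """Crudest possible in-between frame: a linear blend of two
    adjacent frames at blend factor t in [0, 1]. A real tweener
    would move pixels along motion vectors instead of crossfading,
    but either way the work stays local to a pair of frames."""
    blended = (1.0 - t) * frame_a.astype(np.float64) + t * frame_b.astype(np.float64)
    return blended.astype(frame_a.dtype)
```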
The middle paragraph is hand-waving some software I either never published or never wrote. One concept involved drawing a map for copying pixels from each frame to the next, preserving details as they’re added. Another involved drawing fake 3D boxes over an image to then warp parts of it in ways that move like 3D models. Another involved having people animate “normal passes” to allow later projection of patterns, texture, shading, and shadows onto parts of the animation.
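To make that slightly less hand-wavy: the pixel-copying concept reduces to something like this toy numpy sketch (hand-authored per-pixel motion map, pixels forward-copied, holes left black for repainting). This is my illustration of the idea, not the actual prototype:

```python
import numpy as np

def advect_frame(prev_frame, motion_map):
    """Copy each pixel of prev_frame to where the motion map says it goes.

    prev_frame: (H, W, 3) uint8 image.
    motion_map: (H, W, 2) int array; motion_map[y, x] = (dy, dx) is how
        far the pixel at (y, x) moves between this frame and the next.
    Pixels nothing lands on stay black, flagging holes to repaint by hand.
    Colliding pixels simply overwrite each other (last write wins)."""
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ny = np.clip(ys + motion_map[..., 0], 0, h - 1)
    nx = np.clip(xs + motion_map[..., 1], 0, w - 1)
    next_frame = np.zeros_like(prev_frame)
    next_frame[ny, nx] = prev_frame
    return next_frame
```

And the normal-pass concept is basically deferred Lambertian shading: you animate an RGB-encoded normal map by hand, and the tool projects lighting onto the flat colors afterward. Again a toy version, with a made-up light direction:

```python
import numpy as np

def relight(flat_colors, normal_pass, light_dir=(0.0, -0.5, 1.0)):
    """Lambertian shading of a flat-colored doodle via a drawn normal pass.

    flat_colors: (H, W, 3) floats in [0, 1], the smooth rainbow doodle.
    normal_pass: (H, W, 3) floats in [0, 1], RGB-encoded normals as in
        game dev, remapped to [-1, 1] before shading."""
    normals = normal_pass * 2.0 - 1.0
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-8
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    diffuse = np.clip(normals @ light, 0.0, 1.0)  # (H, W) shading term
    return flat_colors * diffuse[..., None]
```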
Pursuing any of these to the point of being a tool usable by normal people (and artists) would take years. I had been toying with prototypes and avoiding commitment until suddenly some neural-network curiosities from Two Minute Papers became a tool for instant pornography with almost the right number of fingers. Those acts of wizardry still suck at movement, but Sora proves we’re headed in the right direction in a hurry. There will be tools that let you sketch at 1 FPS and get a television-quality cartoon. Or not-quite-CGI. Or shockingly plausible “live action.” The timeline for that is months, not decades. I am looking forward to all the weird shit people can do with it. Hollywood should not be. They are in deep shit.
… oh, I did miss the word “seconds.” Lemme fix that.
wait do you understand what this post means? genuinely it reads like a satirical stringing together of catchwords to me
sorry i meant OPs post