A new paper suggests diminishing returns from larger and larger generative AI models. Dr Mike Pound discusses.

The Paper (No “Zero-Shot” Without Exponential Data): https://arxiv.org/abs/2404.04125

  • @mindbleach
    24 days ago

    Fuck no. Video’s just getting started - and is liable to destroy Hollywood. Those empty suits think they can fire the writers and artists to shave a fraction off their billion-dollar budget. Those creators, the ones who make the end product worth watching, will just cobble together their stories as whole-ass movies by describing them into existence.

    I spent a few years looking to make animation as easy as drawing a comic. 2023 scuttled most of my what-ifs as adorably limited, and by this time next year, photorealistic high-def video of dead actors will probably be easier than drawing a comic.

    > A new paper suggests diminishing returns from larger and larger generative AI models.

    Yeah no shit, training the bejeezus out of smaller models keeps working better. See AlphaZero. We don’t need a network that can fit a feature-length movie all at once. We need a network that can figure out what the next frame looks like, and a network that can figure out what happened between two frames.
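
    Toy version of what I mean, in PyTorch. Every module, layer count, and shape here is made up for illustration; it is not the paper’s method or anyone’s real pipeline. One tiny net guesses the next frame from the current one, another fills in the frame between two frames:

    ```python
    # Hypothetical sketch only: tiny stand-ins for a "next frame"
    # network and an "in-between" network. Nothing here comes from
    # the linked paper or any production video model.
    import torch
    import torch.nn as nn

    class NextFrame(nn.Module):
        """Guesses frame t+1 from frame t."""
        def __init__(self, ch=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(ch, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, ch, 3, padding=1),
            )
        def forward(self, frame):
            return self.net(frame)

    class InBetween(nn.Module):
        """Guesses the frame between two given frames."""
        def __init__(self, ch=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2 * ch, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, ch, 3, padding=1),
            )
        def forward(self, a, b):
            return self.net(torch.cat([a, b], dim=1))

    frame = torch.rand(1, 3, 64, 64)      # one RGB frame
    nxt = NextFrame()(frame)              # frame t+1
    mid = InBetween()(frame, nxt)         # frame between t and t+1
    print(nxt.shape, mid.shape)           # both torch.Size([1, 3, 64, 64])
    ```

    Neither of those has to hold a whole movie in its head at once; they only ever look at a frame or two.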

    It’s kind of silly how wibbly StableDiffusion animations are. It’s a denoiser. Every frame being independent is like every pixel being independent. Just… look at the frames on either side. Even if the results are sludge, they should be smooth sludge, not A Scanner Darkly.
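
    For the flicker, a back-of-the-envelope sketch of “look at the frames on either side”: blend each frame’s latent with its neighbors before decoding, so independently denoised frames at least drift together. The averaging and the blend weight are my assumptions, not how any real StableDiffusion animation pipeline does it.

    ```python
    # Assumed approach, not an existing API: smooth per-frame latents
    # with their temporal neighbors so independent denoiser outputs
    # stop jittering frame to frame.
    import torch

    def smooth_latents(latents: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
        """latents: (T, C, H, W), one latent per frame.
        Each interior frame is blended with the mean of its neighbors."""
        smoothed = latents.clone()
        for t in range(1, latents.shape[0] - 1):
            neighbors = 0.5 * (latents[t - 1] + latents[t + 1])
            smoothed[t] = alpha * latents[t] + (1 - alpha) * neighbors
        return smoothed

    clip = torch.randn(8, 4, 64, 64)    # 8 frames of 4-channel latents
    print(smooth_latents(clip).shape)   # torch.Size([8, 4, 64, 64])
    ```

    Even a naive pass like that turns independent sludge into smooth sludge; a real fix would presumably condition the denoiser on the neighboring frames rather than patching latents after the fact.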