Tyler Perry Puts $800M Studio Expansion On Hold After Seeing OpenAI’s Sora: “Jobs Are Going to Be Lost”

Tyler Perry is raising the alarm about the impact of OpenAI’s Sora on Hollywood.

  • Bob Robertson IX @discuss.tchncs.de · +113/−1 · 9 months ago

    Jobs are going to be lost

    And it is now a self-fulfilling prophecy, because a lot of people just lost out on working on his studio expansion project.

    • grabyourmotherskeys@lemmy.world · +76/−2 · 9 months ago

      This announcement is performative. Probably just a “good” reason to back out of something they didn’t want to do anymore anyway. Otherwise there’d be paper on the deal and they wouldn’t be able to just back out.

      • KevonLooney@lemm.ee · +18/−1 · 9 months ago

        That’s true. If he really wanted to expand, he would just expand into AI filmmaking too. This sounds like a money problem, not a technology issue.

    • Unruffled [he/him]@lemmy.dbzer0.com · +22/−1 · 9 months ago

      Yep - those jobs are going to be lost because of Tyler Perry seeing an opportunity to reduce production costs, not because of AI. Nobody is forcing him to use AI. It’s just classic corporate greed where he is happy to trade [other people’s] jobs for more profit. Sure, AI is enabling him to do this, but it’s greed that’s guiding his decision making here.

  • andros_rex@lemmy.world · +45 · 9 months ago

    To be fair, most Tyler Perry movies could be replaced by minute long clips of stock footage with some filters stuck on them.

  • TheOneCurly@lemm.ee · +35/−1 · 9 months ago

    Sora can sometimes produce one-minute clips that mostly look OK as long as you don’t pay too close attention. We are incredibly far away from coherent, feature-length narratives, and even those aren’t likely to be thematically interesting or engaging.

    • kescusay@lemmy.world · +34/−2 · 9 months ago

      Yep. I watched their demo clips, and the “good” ones are full of errors, have lots of thematically incoherent content, and - this is the biggie - can’t be fixed.

      Say you’re a 3D animator and build an animation with thousands of different assets and individual, alterable elements. Your editor comes to you and says, “This furry guy over here is looking in the wrong direction, he should be looking at the kangaroo king over there, but it looks like he’s just glaring at his own hand.”

      So you just fix it. You go in, tweak the furry guy’s animation, and now he’s looking in the right direction.

      Now say you made that animation with Sora. You have no manipulable assets, just a set of generated frames in which the furry guy looks in the wrong direction.

      So you fire up Sora and try to fine-tune its instructions, and it generates a completely new animation that shares none of the elements of the previous one, and has all sorts of new, similarly unfixable errors.

      If I use an AI assistant while coding, I can correct its coding errors. But you can’t just “correct” frames of video it has created. If you try, you’re looking at painstakingly hand-painting every frame where there’s an error. You’ll spend more time trying to fix an AI-generated animation that’s 90% good and 10% wrong than you will just doing the animation with 3D assets from scratch.
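
      For what it’s worth, here’s a toy sketch of that difference in plain Python. Every name in it is hypothetical (it’s no real animation package’s API); the point is just that an asset-based scene gives you a handle to grab, and a pile of generated frames doesn’t.

      ```python
      from dataclasses import dataclass, field

      @dataclass
      class Node:
          name: str
          look_target: str | None = None  # which node this character faces

      @dataclass
      class Scene:
          nodes: dict[str, Node] = field(default_factory=dict)

          def add(self, node: Node) -> None:
              self.nodes[node.name] = node

          def retarget_gaze(self, character: str, target: str) -> None:
              # The editor's note becomes a one-line, local fix.
              self.nodes[character].look_target = target

      scene = Scene()
      scene.add(Node("furry_guy", look_target="own_hand"))
      scene.add(Node("kangaroo_king"))

      # "He should be looking at the kangaroo king over there."
      scene.retarget_gaze("furry_guy", "kangaroo_king")

      # Generated video gives you nothing equivalent: just pixels per
      # frame, with no retarget_gaze() to call.
      frames: list[bytes] = []
      ```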

      • Buelldozer@lemmy.today · +10/−1 · edited · 9 months ago

        Now say you made that animation with Sora. You have no manipulable assets, just a set of generated frames in which the furry guy looks in the wrong direction.

        “Sora, regenerate $Scene153 with $Character looking at $OtherCharacter. Same Style.”

        Or “Sora, regenerate $Scene153 from time mark X to time mark Y with $Character looking at $OtherCharacter. Same Style.”

        It’s a new model; you won’t work with frames anymore, you’ll work with scenes, and when the tools get a bit smarter you’ll be working with scene layers.

        “Sora, regenerate $Scene153 with $Character in Layer1 looking at $OtherCharacter in Layer2. Same Style, both layers.”

        I give it 36 months or less before that’s the norm.
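
        If I had to guess at the shape of it, it’d be something like this. To be clear, this is pure speculation; no such Sora API exists today, and every name below is invented:

        ```python
        from dataclasses import dataclass

        # Hypothetical request object for the scene/layer workflow
        # imagined above -- nothing like this exists in any public API.
        @dataclass
        class RegenRequest:
            scene: str
            instruction: str
            layers: tuple[str, ...] = ()  # empty tuple = whole scene
            time_range: tuple[float, float] | None = None  # seconds; None = all
            keep_style: bool = True

        req = RegenRequest(
            scene="scene_153",
            instruction="character in layer 1 looks at character in layer 2",
            layers=("layer_1", "layer_2"),
            time_range=(12.0, 19.5),
        )
        ```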

        • Sprucie@feddit.uk · +6 · 9 months ago

          I agree, I don’t think people realise how early into this tech we are at the moment. There are going to be huge leaps over the next few years.

        • kescusay@lemmy.world · +2 · 9 months ago

          This seems like a fundamental misunderstanding of how generative AI works. To accomplish what you’re describing, you’d need:

          • An instance of generative AI running for each asset.
          • An enclosing instance of generative AI running for each scene.
          • A means for each AI instance to discard its own output and recreate exactly the same asset, tweaked in precisely the manner requested, then immediately reincorporate it into the model for subsequent generation (the closest existing analogue is sketched below).
          • A coordinating AI instance to keep it all working together, performing actions such as mediating asset collisions.

          The whole system would need to be able to rewind to specific trouble spots, correct them, and still generate everything that comes after unchanged. We’re talking orders of magnitude more complexity and difficulty.
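
          The closest existing analogue to that third bullet is reusing the same seed with an edited prompt, and it shows why this is hard. A minimal sketch with Stable Diffusion via the diffusers library (standing in for Sora, which has no public API):

          ```python
          import torch
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          ).to("cuda")

          # Same latent noise both times, so the compositions are related...
          gen = torch.Generator("cuda").manual_seed(1234)
          before = pipe("a furry creature glaring at his own hand",
                        generator=gen).images[0]

          gen = torch.Generator("cuda").manual_seed(1234)
          after = pipe("a furry creature looking at a kangaroo king",
                       generator=gen).images[0]

          # ...but only loosely: there is no per-asset handle, so everything
          # else in the frame is free to drift between the two outputs.
          ```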

          And in the meantime, artists creating 3D assets the regular way would suddenly look a lot less expensive and a lot less difficult.

          If all you have is a hammer, everything looks like a nail. Right now, generative AI is everyone’s really attractive hammer. But I don’t see it working here in 36 months. Or 48. Or even 60.

          The first 90% is easy. The last 10% is really fucking hard.

        • conciselyverbose · +2 · 9 months ago

          Or just “take the frame and replace the head with the same face pointed a different way”.

      • mindbleach · +1 · edited · 9 months ago

        But you can’t just “correct” frames of video it has created.

        Yeah you can.

        Same way you can correct parts of a generated image, and have the generator go back and smooth it over again. Denoiser-based networks identify parts that don’t look right and nudge them toward the expectations of the model. Sora clearly has decent expectations for how things look and move. I would bet anything that pasting a static image of a guy’s head, facing the desired direction, will result in an equally-plausible shot with that guy facing the right way.
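
        The single-image version of that bet is already routine inpainting. A rough sketch with the diffusers library (a stand-in, since Sora has no public API; the filenames are hypothetical): paste the corrected head, mask it, and let the denoiser smooth the region back toward plausibility.

        ```python
        import torch
        from diffusers import StableDiffusionInpaintPipeline
        from PIL import Image

        pipe = StableDiffusionInpaintPipeline.from_pretrained(
            "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
        ).to("cuda")

        frame = Image.open("frame_with_pasted_head.png")  # hypothetical input
        mask = Image.open("mask_over_pasted_head.png")    # white = regenerate

        fixed = pipe(
            prompt="a man looking to the left",
            image=frame,
            mask_image=mask,
        ).images[0]
        fixed.save("frame_fixed.png")
        ```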

        There have been image-generator demos where elements can be moved in real-time. Yeah, it has wider effects on the whole image, because this technology is a pile of hacks - but it’s not gonna turn red wallpaper into a green forest, or shift the whole camera angle. You’re just negotiating with a model that expects, if this guy’s facing this way now, his hands must go over here. Goofy? Yes. Ruinous? Nope.

        And at the end of the day you can still have human artists modify the output, as surely as they can modify actual film of actual people. That process is not quick or cheap. But if your video was spat out by a robot, requiring no actors, sets, or animators, manual visual effects might be your entire budget.

        Really - the studios that do paint-overs for Hollywood could be the first to make this tech work. They’d only need a few extra people to start from the first 90% of a movie.

    • Random Dent@lemmy.ml · +10 · 9 months ago

      And ironically when we do get to the point where an AI can string together a semi-coherent narrative, the first things it’ll start to produce will probably be exactly the sort of mid-level dross that Tyler Perry likes to make.

    • Ghostalmedia@lemmy.world · +6 · 9 months ago

      This won’t get used for key narrative content. This will be used for a lot of b-roll and the quick cuts that audiences don’t examine closely. A lot of a movie is content like that, and since the dawn of the effects industry, editors and effects artists have known that they can get away with janky stuff in certain places. The audience won’t know it’s there because they’re not watching the film frame by frame.

    • FoxBJK@midwest.social · +5/−1 · 9 months ago

      It seems pretty good with backgrounds though, and it’s only going to get better. I think the threats of job losses are a lot more imminent than people are ready to admit.

    • mindbleach · +1 · 9 months ago

      If you expect to type in “comedy movie oscar bait five stars” and have it spit out a finished MP4, then sure, that’s not happening any time soon.

      But movies are composed of shots. Most shots are shorter than one minute. Narratives are constructed in the edit. Actors talking to one another don’t have to be in the same room… or alive at the same time. One composite wide shot and a bunch of jump cuts will stop the audience from even thinking about it.

      This is going to be used for short films before summer, the same way image generators were used for comics. Both generally terrible - but mostly because the people leaping into it are boring, impatient, and just want to go ‘look what I made!’ while pointing at the parts they absolutely did not make. It’s the fancy version of saying ‘my characters look so cool!’ when your webcomic is made from stolen Mega Man sprites.

      But considering we’re about eighteen months removed from 256x256 blobs that vaguely resemble an avocado chair, and Sora slaps down a variety of pessimistic timelines, it seems incomprehensible to bet against using this for worthwhile storytelling. Sora spits out half-decent shots from text alone. Video-to-video style transfer has been in research papers for like five years now - and unless this is a completely novel form of generative network, that means you can probably insert your own footage halfway into the process.

      Some of these networks are denoisers. They remove the parts of the input that don’t look like the prompt. Starting from random noise is only the laziest way to get a finished output. Any blurry approximation of what you want, any blob-colored animatics, any 1 FPS storyboard, should guide the network to produce matching results.
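
      That “start from an approximation” idea is just image-to-image generation, which already works for stills. A small sketch with the diffusers library (the single-image analogue; Sora itself isn’t public, and the filenames are made up), where strength controls how much of the rough input survives:

      ```python
      import torch
      from diffusers import StableDiffusionImg2ImgPipeline
      from PIL import Image

      pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      rough = Image.open("blob_colored_animatic.png")  # hypothetical input

      shot = pipe(
          prompt="a spaceship hangar at dawn, cinematic lighting",
          image=rough,
          strength=0.6,  # lower keeps more of the sketch, higher ignores it
      ).images[0]
      shot.save("finished_shot.png")
      ```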

      What that does for Tyler Perry, I have no friggin’ idea. I was under the impression most of his movies could have been filmed at his house. (Alright damn, A Jazzman’s Blues must have taken some money.) We are not decades away from twenty-minute OVAs of sci-fi bullshit that would otherwise cost a fortune. It will be a matter of months.

      Narrative will come first and foremost, because this technology frees writers and directors from needing studios… not vice-versa.

    • UNWILLING_PARTICIPANT · +8 · 9 months ago

      People will be laid off
      Spoils will be enjoyed
      This great economy… it will endure
      The working class will survive!

  • realitista@lemmy.world · +8 · 9 months ago

    It will be really interesting to see how long it actually takes before this can be done accurately enough to execute a director’s vision, and at high enough quality to actually make a film from. It could be anything from a few months to decades; it’s hard to know how precisely we’ll actually be able to control these models to get them to do what we really want.

    • Buelldozer@lemmy.today · +7/−1 · 9 months ago

      It will be really interesting to see how long it actually takes …

      We went from Will Smith eating spaghetti to Sora in just 12 months. Whatever the time period turns out to be, I think it’s safe to say that it will be far shorter than most people would like.

      • realitista@lemmy.world · +8 · 9 months ago

        You’re probably right, but so far no one has demonstrated a generative model that can be controlled with these kinds of tight adjustments, and it may be something the technology is simply never able to do. We might have to wait for a whole new generation of technology for this.

        • Buelldozer@lemmy.today · +4/−1 · edited · 9 months ago

          We might have to wait for a whole new generation of technology for this.

          We may, but this tech is advancing at a pace we haven’t experienced since the 90s. If you weren’t around back then, I can tell you that “next generation” literally happened every 12 months or less.

      • WarlordSdocy@lemmy.world · +1 · 9 months ago

        I mean, most new technologies have a period of explosive growth, followed eventually by a slowdown into gradual gains. So it really just depends on whether we’re near the end of that development or whether there are more explosive changes to come.

  • AutoTL;DR@lemmings.world [bot] · +4 · 9 months ago

    This is the best summary I could come up with:

    Over the past four years, Tyler Perry had been planning an $800 million expansion of his studio in Atlanta, which would have added 12 soundstages to the 330-acre property.

    Now, however, those ambitions are on hold — thanks to the rapid developments he’s seeing in the realm of artificial intelligence, including OpenAI’s text-to-video model Sora, which debuted Feb. 15 and stunned observers with its cinematic video outputs.

    As a business owner, Perry sees the opportunity in these developments, but as an employer, fellow actor and filmmaker, he also wants to raise the alarm.

    In an interview between shoots on Thursday, Perry explained his concerns about the technology’s impact on labor and why he wants the industry to come together to tackle AI: “There’s got to be some sort of regulations in order to protect us.”

    After seeing Sora, what are your current feelings about how fast AI technology is moving and how it might affect entertainment in the near term?

    I was in the middle of, and have been planning for the last four years, an $800 million expansion at the studio, which would’ve increased the backlot by a tremendous size; we were adding 12 more soundstages.

    The original article contains 1,226 words, the summary contains 198 words. Saved 84%. I’m a bot and I’m open source!

  • Imgonnatrythis · +1/−2 · 9 months ago
    9 months ago

    Does this hopefully mean he’s throwing in the towel? Orrr, is he just going to save money by being an early adopter? (screaming noises)

  • Grimy@lemmy.world · +2/−6 · edited · 9 months ago

    This is a Hollywood killer and I’m all for it. I’m tired of them dumping millions into the same drab script while hogging all the profits.

    Every job lost is a potential new indie company. Fuck Hollywood.

    • nxdefiant@startrek.website · +6/−1 · 9 months ago

      The money will be dumped into AI.

      The new scripts will be derivative mashups of the old scripts.

      An independent will create a successful film.

      The new scripts will be derivative mashups of that script.

      • Grimy@lemmy.world · +2/−2 · 9 months ago

        It’s a step up from what we have right now, which is basically no independents and every script being a derivative mashup.