James Cameron has reportedly revealed that an anti-AI title card will open Avatar 3, officially titled Avatar: Fire and Ash. The Oscar-winning director shared the news during a Q&A session in New Zealand attended by Twitter user Josh Harding.

Sharing a picture of Cameron at the event, they wrote: “Such an incredible talk. Also, James Cameron revealed that Avatar: Fire and Ash will begin with a title card after the 20th Century and Lightstorm logos that ‘no generative A.I. was used in the making of this movie’.”

Cameron has been vocal in the past about his feelings on artificial intelligence, speaking to CTV News in 2023 about AI-written scripts. “I just don’t personally believe that a disembodied mind that’s just regurgitating what other embodied minds have said – about the life that they’ve had, about love, about lying, about fear, about mortality – and just put it all together into a word salad and then regurgitate it,” he told the publication. “I don’t believe that’s ever going to have something that’s going to move an audience. You have to be human to write that. I don’t know anyone that’s even thinking about having AI write a screenplay.”

  • chemical_cutthroat@lemmy.world · 20 hours ago

    Only when we can accurately point to any one idea that a human has had that hasn’t been a product of previous information.

    • oce 🐆@jlai.lu · 19 hours ago

      With historians’ work, I think it’s possible to say an idea appeared at about this point in time and space, even if it was refined by many previous minds. For example, you can tell roughly when an engineering invention or an art style appeared. Of course there will always be a specialists’ debate about who the actual pioneer was (often influenced by patriotism), but I guess we can at least reach a consensus on when it started to actually impact society.
      Also, maybe we could have an algorithm to determine whether a generated result was part of the learning corpus or not.
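 
      For instance, a minimal sketch of one naive version of that idea (assuming the learning corpus is available as plain text, and taking verbatim n-gram overlap as a crude proxy for “was part of the corpus”; real membership-inference methods are far more involved):

      ```python
      # Crude novelty check: what fraction of a generated text's n-grams
      # already appears verbatim in the corpus?
      def ngrams(text, n=5):
          words = text.lower().split()
          return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

      def novelty_score(generated, corpus, n=5):
          gen = ngrams(generated, n)
          if not gen:
              return 0.0
          # 1.0 = every n-gram is new; 0.0 = pure regurgitation
          return len(gen - ngrams(corpus, n)) / len(gen)

      corpus = "the wheel started as a rock that rolled down a hill"
      print(novelty_score("a rock that rolled down a hill", corpus))  # 0.0
      print(novelty_score("a wheel that rolled up a hill", corpus))   # 1.0
      ```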

      • chemical_cutthroat@lemmy.world · 19 hours ago

        But the idea is never original. The wheel likely wasn’t invented randomly; it started as a rock that rolled down a hill. Fire likely wasn’t started by a caveman with sticks; it was a natural fire that was copied. Expressionism wasn’t a new style of art; it was an evolution influenced by previous generations. Nothing is purely original. The genesis of everything is in the existence of something else. When we talk about originality, we mean that these things haven’t been put together in this exact way before, and thus it is new.

        • oce 🐆@jlai.lu · 17 hours ago

          I don’t disagree with your definition, but I’m not sure what it changes about the point that current LLMs lack human creativity. Do you think there is nothing more than probabilistic regurgitation in human creativity, so that LLMs have already overcome human creativity and it’s just a matter of recognizing it?

          • stray@pawb.social · 14 hours ago

            I agree that humans are just flesh computers, but I don’t know whether we can say LLMs have overcome human creativity because I think the definition is open to interpretation.

            Is the kind of intentionality that only metacognition makes possible a requirement for something to be art? If not, then we, AI, and spiders making webs are all doing the same “creativity”, regardless of our ability to consider ourselves and our actions.

            If yes, then is the AI (or the spider) capable of metacognition? I know of no means to answer that except that ChatGPT can be observed engaging in what appears to be metacognition. And that leaves me with the additional question: What is the difference between pretending to think something and actually thinking it?

            In terms of specifically “overcoming” creativity, I don’t think that kind of value judgement has any real meaning. How do you determine whether artist A or B is more creative? Is it more errors in reproduction leading to more original compositions?

            • oce 🐆@jlai.lu · 11 hours ago

              As I suggested above, I would say creating a coherent idea, or a link between ideas, that was not learned. I guess it could be possible to create an algorithm to estimate whether that link was already present in the learning corpus of an ML model.
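
              As a toy version of that estimate (reducing a “link between ideas” to two terms co-occurring in at least one document of the corpus, which is of course a huge simplification):

              ```python
              # Toy check: was a link between two ideas already present in the
              # corpus, in the sense that the two terms ever co-occurred?
              def link_is_novel(term_a, term_b, documents):
                  pair = {term_a.lower(), term_b.lower()}
                  return not any(pair <= set(doc.lower().split()) for doc in documents)

              corpus = ["rocks roll down hills", "fire spreads through dry grass"]
              print(link_is_novel("rocks", "fire", corpus))   # True: never linked
              print(link_is_novel("rocks", "hills", corpus))  # False: already linked
              ```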

              • stray@pawb.social · 9 hours ago

                I’m not sure how humans go about creating ideas, and therefore cannot be sure that the resulting ideas aren’t a combination of learned things. There have been people in history who did things like guess that everything is made up of tiny particles long before we could ever test the idea, but probably they got the idea from observing various forms of matter, right? Like seeing how rocks can crumble into sand and grain can be ground to flour. I don’t think they would have been able to come up with the idea in a vacuum. I think anything we’re capable of creating must be based on things which we’ve already learned about, but I don’t know that I can prove that.

          • chemical_cutthroat@lemmy.world · 16 hours ago

            Human creativity, at its core, is not original. We smush things together, package it as something new, and in our hubris call it “original” because we are human, and thus infallible originators. Our minds are just electrical impulses that fire off in response to stimuli. There is no divine spark; that’s hogwash. From a truly scientific standpoint, we are machines built with organic matter. Our ones and zeros are the same as the machines we create, we just can’t deal with the fact that we aren’t as special as we like to think. We derive meaning from our individuality, and to lose that would mean that we aren’t individual. However, we are deterministic.

            If you woke up this morning and relived the same day that you already have, and had no prior knowledge of what had happened the previous time you experienced it, and no other changes were made to your environment, you would do the same thing that you did the first time, without fail. If you painted, you would paint the same image. If you ate breakfast, you would eat the same breakfast. How do we know this? Because you’ve already done it. Why does it work this way? Because nothing had changed, and your ones and zeros flipped in the same sequences. There is no “chaos”. There is no “random”. Nothing is original because everything is the way it is because of everything else. When you look at it from that bird’s eye perspective, you see that a human mind making “art” is no different than an LLM, or some form of generative AI. Stimulus is our prompt, and our output is what our machine minds create from that prompt.

            Our “black box” may be more obscure and complex than the current technology behind AI, but that doesn’t make it different, any more than a modern sports car is different from a Model T. Both serve the same function.
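
            The machine half of that analogy is at least easy to demonstrate; a minimal sketch of a deterministic system replayed from an identical starting state (whether brains actually work this way is the very thing in dispute):

            ```python
            import random

            def relive_the_day(seed):
                rng = random.Random(seed)  # identical starting state: "nothing had changed"
                return [rng.choice(["paint", "eat breakfast", "walk"]) for _ in range(5)]

            print(relive_the_day(42))  # the same "day"...
            print(relive_the_day(42))  # ...replays identically, without fail
            print(relive_the_day(43))  # only a changed input changes the output
            ```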

            • oce 🐆@jlai.lu · 12 hours ago

              From a truly scientific standpoint, we are machines built with organic matter. Our ones and zeros are the same as the machines we create, we just can’t deal with the fact that we aren’t as special as we like to think. We derive meaning from our individuality, and to lose that would mean that we aren’t individual. However, we are deterministic.

              Would you have some scientific sources for the claim that we think in binary and that we are deterministic?

              I think you may be conflating your philosophical point of view with science.

              • barsoap@lemm.ee · 9 hours ago

                All Turing-complete modes of computation are isomorphic, so binary or not is irrelevant. Both silicon computers and human brains are Turing-complete; both can compute all computable functions (given enough time and scratch paper).

                If non-determinism even exists in the real world (it clashes with cause and effect in a rather fundamental manner), then the architecture of brains, nay, the life we know in general, actively works towards minimising its impact. Like, copying the genome has a quite high error rate at first; then error correction is applied, which brings the error rate down to practically zero; then randomness is introduced in strategic places, influenced by environmental factors. When the finch genome sees that an individual does not get enough food, it throws dice at the beak shape, not at mitochondrial DNA.

                It’s actually quite obvious in AI models: the reason we can quantise them, essentially rounding every weight of the model so we can run it with lower-precision maths, faster and with less memory, is that the architecture is ludicrously resistant to noise, and rounding every number is equivalent to adding noise, from the perspective of the model.
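
                A toy illustration of that rounding-is-noise point (not any particular model or quantisation scheme): round every weight of a random linear layer onto a coarse grid and see how little the output moves.

                ```python
                import numpy as np

                rng = np.random.default_rng(0)
                W = rng.normal(size=(64, 64))   # weights of a toy linear layer
                x = rng.normal(size=64)         # an arbitrary input

                def quantise(w, levels=256):
                    # Round each weight onto a uniform grid: from the model's
                    # perspective, this is just bounded rounding noise.
                    scale = np.abs(w).max() / (levels // 2)
                    return np.round(w / scale) * scale

                rel_err = np.linalg.norm(W @ x - quantise(W) @ x) / np.linalg.norm(W @ x)
                print(f"relative output change: {rel_err:.4%}")  # tiny, despite rounding every weight
                ```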

              • chemical_cutthroat@lemmy.world · 11 hours ago

                The deterministic universe is a theory as much as the Big Bang is. We can’t prove it, but all of the evidence is there. Thinking in binary was me making a point about how our minds interact with the world. If you break down any interaction to its smallest parts, it becomes a simple yes/no, or on/off; we just process it much faster than we think about it in those terms.

                • oce 🐆@jlai.lu · 10 hours ago

                  There are various independent, reproducible measurements that give weight to the hot Big Bang theory as opposed to other cosmological theories. Are there any for the deterministic nature of humans?
                  Quantum physics is not deterministic, for example. While quantum decoherence explains why macro physical systems are deterministic, can we really say it couldn’t play a role in our neurons?
                  On a slightly different point, quantum bits are not binary; they can represent a continuous superposition of multiple states. Why would our mind be closer to binary computing than to quantum computing?
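
                  To make “not binary” concrete: a qubit is a pair of complex amplitudes, a continuous state that only collapses to 0 or 1 when measured. A minimal sketch, no quantum library assumed:

                  ```python
                  import numpy as np

                  theta = 0.3  # any angle works: a continuum of valid states
                  state = np.array([np.cos(theta), np.sin(theta) * 1j])  # alpha|0> + beta|1>
                  probs = np.abs(state) ** 2  # Born rule: measurement probabilities, sums to 1
                  print(probs)                # ~[0.913, 0.087]

                  rng = np.random.default_rng(1)
                  print(rng.choice([0, 1], p=probs))  # measurement: binary at last
                  ```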

                  • chemical_cutthroat@lemmy.world · 9 hours ago

                    The comparison between human cognition and binary isn’t meant to be taken literally as “humans think in 1s and 0s” but rather as an analogy for how deterministic processes work. Even quantum computing, which operates on superposition, ultimately collapses to definite states when observed—the underlying physics differs, but the principle remains: given identical initial conditions, identical outcomes follow.

                    Regarding empirical evidence for human determinism, we can look to neuroscience. Studies consistently show that neural activity precedes conscious awareness of decisions (Libet’s experiments and their modern successors), suggesting our sense of “choosing” comes after the brain has already initiated action. While quantum effects theoretically could influence neural firing, there’s no evidence these effects propagate meaningfully to macro-scale cognition—our neural architecture actively dampens random fluctuations through redundancy.

                    The question isn’t whether humans operate on binary code but whether the system as a whole follows deterministic principles. Even if quantum indeterminacy exists at the micro level, emergence creates effectively deterministic systems at the macro level. This is why weather patterns, while chaotic, remain theoretically deterministic—we just lack perfect information about initial conditions.
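
                    The weather example can be made concrete with the logistic map, a textbook system that is fully deterministic yet chaotic: identical inputs always reproduce the same output, while a tiny perturbation diverges completely. That is a lack of information about initial conditions, not a lack of determinism.

                    ```python
                    def logistic(x0, r=4.0, steps=50):
                        x = x0
                        for _ in range(steps):
                            x = r * x * (1 - x)  # fully deterministic update rule
                        return x

                    print(logistic(0.2) == logistic(0.2))       # True: same conditions, same outcome
                    print(logistic(0.2), logistic(0.2000001))   # tiny perturbation, wildly different result
                    ```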

                    My position isn’t merely philosophical—it’s the most parsimonious explanation given current scientific understanding of causality, neuroscience, and complex systems. The alternative requires proposing special exemptions for human cognition that aren’t supported by evidence.