• Heavybell@lemmy.world
    · 11 months ago

    Yeah, this doesn’t shock me. Generative AI is gonna be trained on the best art possible, so of course you’re gonna get good-looking output… until you realise the thing that created it doesn’t actually understand 3D space, or you find other imperfections that reveal it for the thorough cargo-cult copy it is.

      • Heavybell@lemmy.world
        · 11 months ago

        Will give that a read in the morning, thanks. I’m only talking about the generated art I’ve seen, which often shows a clear lack of understanding of 3D space. When I see generated art that demonstrates that understanding, I’ll be impressed.

        • kromem@lemmy.world
          · 11 months ago

          Ah, you mean diffusion models (which are different from the transformer models used for text).

          There are recent advances in that as well: you might not have seen Stability’s preview announcement of their offering here, and big players like Nvidia and dedicated startups are focused on it too. Expect that application of the tech to move quickly over the next 18 months.