Human favoritism, not AI aversion: People’s perceptions (and bias) toward generative AI, human experts, and human–GAI collaboration in persuasive content generation - Volume 18
Yeah this doesn’t shock me. Generative AI is gonna be trained on the best art possible, so of course you’re gonna get good looking output… until you realise the thing that created it doesn’t actually understand 3D space, or find other imperfections that reveal it for the thorough cargo-copy it is.
You might find the following paper interesting as the reality is a fair bit more nuanced than you might think:
Language Models Represent Space and Time
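The linked paper's core technique is fitting linear probes from a model's hidden activations to real-world coordinates. Here is a toy sketch of that idea; the "activations" are synthetic stand-ins (random features with planted linear structure), not outputs of a real language model, so this only illustrates the probing method, not the paper's result.

```python
# Toy linear-probe sketch: map hidden activations -> (lat, lon) coordinates.
# The activations are SYNTHETIC (a linear image of the coordinates plus
# noise), mimicking the claim that location is linearly decodable.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_places = 256, 500

# Pretend (lat, lon) coordinates for n_places place names.
coords = rng.uniform(-90, 90, size=(n_places, 2))

# Synthetic "activations" with planted linear structure.
W_true = rng.normal(size=(2, d_model))
acts = coords @ W_true + rng.normal(scale=0.1, size=(n_places, d_model))

# Train/test split, then fit the probe with ordinary least squares.
train, test = slice(0, 400), slice(400, None)
probe, *_ = np.linalg.lstsq(acts[train], coords[train], rcond=None)
pred = acts[test] @ probe

# R^2 on held-out places; a high value means location is linearly decodable.
ss_res = ((pred - coords[test]) ** 2).sum()
ss_tot = ((coords[test] - coords[test].mean(0)) ** 2).sum()
r2 = 1 - ss_res / ss_tot
print(f"probe R^2: {r2:.3f}")
```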
Will give that a read in the morning, thanks. I am only talking about the generated art I’ve seen, which often features a clear lack of understanding of 3D space. When I see generated art that shows understanding, I’ll be impressed.
Ah, you mean diffusion models (which are different from transformer models for text).
There are recent advances in that area as well. You might not have seen Stability’s preview announcement of their offering here, and big players like Nvidia and dedicated startups are focused on it too. Expect that application of the tech to move quickly over the next 18 months.
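For the distinction being drawn here: diffusion models generate images by corrupting data with scheduled Gaussian noise and then learning to predict that noise, rather than predicting the next token the way text transformers do. A minimal numeric sketch of the closed-form forward process, using an "oracle" noise prediction (the true eps) in place of the trained network a real model would use:

```python
# Toy diffusion sketch: forward noising in closed form, then recovery via
# noise prediction. A real model trains a network to predict eps from
# (x_t, t); here we reuse the true eps as an oracle stand-in.
import numpy as np

rng = np.random.default_rng(1)
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # noise schedule
alphas_bar = np.cumprod(1.0 - betas)      # cumulative signal fraction

x0 = rng.normal(size=(8, 8))              # stand-in "image"
t = 600                                   # a fairly noisy timestep
eps = rng.normal(size=x0.shape)

# Forward (noising) process: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps
x_t = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

# With a perfect eps prediction, the original is recovered exactly:
x0_hat = (x_t - np.sqrt(1.0 - alphas_bar[t]) * eps) / np.sqrt(alphas_bar[t])
print(np.allclose(x0_hat, x0))  # True
```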
Yeah, I didn’t think LLMs did art generation.
Actually, the transformer approach was just used with some neat success for efficient 3D model generation:
https://nihalsid.github.io/mesh-gpt/
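The autoregressive idea behind this kind of system can be sketched roughly: a mesh is flattened into a sequence of discrete tokens (quantized vertex coordinates, nine per triangle), and a model predicts the next token. The "model" below is a uniform random sampler, purely a stand-in for the trained transformer, and the tokenization is a simplification of what MeshGPT actually does:

```python
# Rough sketch of autoregressive mesh generation: decode a token sequence,
# then de-quantize it into triangles. dummy_next_token is a stand-in for a
# trained transformer's next-token distribution.
import numpy as np

rng = np.random.default_rng(2)
N_BINS = 128            # each coordinate quantized into 128 bins
TOKENS_PER_TRI = 9      # 3 vertices x 3 coordinates per triangle

def dummy_next_token(seq):
    """Stand-in for a trained model; samples a token uniformly."""
    return int(rng.integers(0, N_BINS))

def generate_mesh(n_triangles):
    seq = []
    for _ in range(n_triangles * TOKENS_PER_TRI):
        seq.append(dummy_next_token(seq))
    # De-quantize to coordinates in (0, 1), grouped as (tri, vertex, xyz).
    coords = (np.array(seq) + 0.5) / N_BINS
    return coords.reshape(n_triangles, 3, 3)

tris = generate_mesh(4)
print(tris.shape)  # (4, 3, 3)
```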
That looks cool :)