- cross-posted to:
- [email protected]
It would require the “AI” to understand what was being asked instead of mashing together whatever has been fed into it. Since plain backgrounds and solid colors are probably barely represented in its training data, and it isn’t intelligent, it includes whatever it has associated with the color white.
ChatGPT is also not intelligent and is designed to respond, so of course it has trouble not responding.
This is part of my main issue with these models, and I believe the key is mathematics.
I bet that the moment these models can generate perfect mathematical geometry, they will also be able to understand a request for a background and nothing else.
Well, yeah. It wasn’t trained on many solid-color 300-byte PNGs. MS Paint bucket fill won’t let you do Jackson Pollock dribbling, either, and it’s terrible at landscapes.
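For scale, here is a minimal sketch of how cheap that kind of image is to produce without any model at all (assuming Pillow is installed; the size and filename are just illustrative):

```python
from PIL import Image

# One flat white 512x512 background; PNG compresses this to a few hundred bytes.
img = Image.new("RGB", (512, 512), color="white")
img.save("plain_white.png", optimize=True)
```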
I was going to say ChatGPT is incapable of inaction, since it just guesses the most likely next token, and something always has to be most likely… but it does know when to return control for another prompt (by emitting a stop token), so it could theoretically do that.
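A minimal sketch of that idea, with greedy picking: the model always chooses some most likely next token, and “returning control” just means that token happens to be the end-of-sequence marker. The `generate` helper and the toy `next_token_fn` are illustrative, not any real model’s API, and real systems usually sample rather than take a strict argmax.

```python
def generate(next_token_fn, prompt_tokens, eos_id, max_new_tokens=100):
    """next_token_fn(tokens) -> most likely next token id (argmax over the vocabulary)."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_id = next_token_fn(tokens)
        if next_id == eos_id:   # the "stop responding" choice
            break               # hand control back for the next prompt
        tokens.append(next_id)
    return tokens

# Toy stand-in for a model: emits token 42 until the sequence is 5 long, then EOS.
print(generate(lambda t: 42 if len(t) < 5 else 0, [1, 2], eos_id=0))
```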
Image-generating AI is presently struggling far more with spelling (rendering text inside images) than with something as low-demand as a plain, solid-color image.