I’m finding it harder and harder to tell whether an image has been generated or not (the main giveaways are disappearing). This is probably going to become a big problem in like half a year’s time. Does anyone know of any proof-of-legitimacy projects that are gaining traction? I can imagine news orgs being the first to be hit by this problem. Are they working on anything?
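In case it helps frame what those projects would actually do: the core mechanism behind most provenance proposals (C2PA / the Content Authenticity Initiative is the main effort I’m aware of) is capture-time signing, where a trusted device or organization signs the image bytes and anyone can later check them against a published public key. Here’s a minimal sketch in Python using the cryptography package; the key handling and payload are illustrative assumptions, not any real standard’s format:

    # Toy capture-time provenance: a trusted signer (camera, newsroom)
    # signs the image bytes; anyone holding the public key can verify the
    # file is byte-identical to what was signed.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()  # stays on the trusted device
    public_key = signing_key.public_key()       # published for verifiers

    image_bytes = b"...raw image file contents..."  # placeholder payload
    signature = signing_key.sign(image_bytes)

    def verify(image: bytes, sig: bytes) -> bool:
        """True iff the image matches what the trusted key signed."""
        try:
            public_key.verify(sig, image)
            return True
        except InvalidSignature:
            return False

    assert verify(image_bytes, signature)
    assert not verify(image_bytes + b"edited", signature)

Note this only proves who signed the file and that it hasn’t been altered since; it says nothing about whether the pixels were honest in the first place, which is the hard part.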
My understanding is that the generator would have to make each frame from scratch and also keep track of the progress of the drawing.
They may have trained on a few timelapse drawings, but that dataset is much smaller than the photographs and finished artworks the models were trained on.
I’m sure it could happen, but I’m not sure there will be enough demand to bother.
Universal Basic Income is really the only answer, so we can make art for fun instead of as a means of survival.
The problem is Goodhart’s Law: “Every measure which becomes a target becomes a bad measure.” Implementing a verification system that depends on video evidence creates both an incentive to forge such videos and a set of labeled training data that grows more readily available as the system sees more use. A generative adversarial network is literally designed to evolve better scams in response to a similarly-evolving scam detector; there’s no computational way around the need to have people involved to make sure things are what they’re claimed to be.
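To make that feedback loop concrete, here’s a toy GAN training loop in PyTorch: the detector (discriminator) learns to separate real from generated samples, and the generator is then trained directly against the freshly-improved detector, so every gain in detection immediately becomes gradient signal for better fakes. The 2-D toy data and layer sizes are arbitrary stand-ins for images, not anything from a real system:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def real_batch(n=64):
        # "Real" samples: points from a fixed Gaussian, standing in for
        # real photographs.
        return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

    gen = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
    disc = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
    g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        # 1) Train the detector to tell real from fake.
        fake = gen(torch.randn(64, 8)).detach()
        d_loss = (bce(disc(real_batch()), torch.ones(64, 1))
                  + bce(disc(fake), torch.zeros(64, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # 2) Train the generator to fool the detector it just sharpened.
        fake = gen(torch.randn(64, 8))
        g_loss = bce(disc(fake), torch.ones(64, 1))  # want fakes labeled real
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

The point isn’t the specific architecture; it’s that the detector’s output is the generator’s loss function, so deploying a better detector is, structurally, handing the forger a better training signal.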
Universal Basic Income would be a good start, but the fundamental problem is money as the primary organizing force of society.