• nucleative@lemmy.world · 9 months ago

    The article asks:

    What will happen when voters can’t separate truth from lies? And what are the stakes?

    Regrettably, this has been true about many issues since the beginning of time, and it is not something legislation can ever hope to change.

    I think we’ve all seen that the abuse of deepfakes is coming at society like a tidal wave. But I don’t think legislating away the technology that makes them possible is even remotely the right direction. That cat is already out of the bag, so to speak.

    What needs to be legislated, however, is personal responsibility for creating (with some limitations) or distributing sexual content that is designed to harm. As a society we already believe this when it comes to revenge porn. But I don’t think it’s that simple.

    Creation is not necessarily a crime, but the intent or positioning during distribution may turn something innocent into a crime. Perhaps libel laws already cover that. If a picture is true and serves the public interest, it probably doesn’t qualify as libel, even if it could harm the target. Obviously, an appropriate venue is still necessary, because seeing graphic adult content can be harmful in its own right, which is why we confine porn to dedicated websites.

    I guess to sum up my ideas… There’s not much we can do to prevent somebody from generating AI pictures depicting AOC, or your high school crush, in some sort of abusive, sexualized graphic state. But we can, and should, impose a high penalty for distributing that material as truth. And there should also be a social penalty for distributing that material even if it’s labeled as fake or imaginary from the beginning.