A mother and her 14-year-old daughter are advocating for better protections for victims after AI-generated nude images of the teen and other female classmates were circulated at a high school in New Jersey.

Meanwhile, on the other side of the country, officials are investigating an incident involving a teenage boy who allegedly used artificial intelligence to create and distribute similar images of other students, also teen girls, who attend a high school in suburban Seattle, Washington.

The disturbing cases have put a spotlight yet again on explicit AI-generated material that overwhelmingly harms women and children and is booming online at an unprecedented rate. According to an analysis by independent researcher Genevieve Oh that was shared with The Associated Press, more than 143,000 new deepfake videos were posted online this year, which surpasses every other year combined.

      • @Chakravanti
        7 months ago

        According to what logic? Like I’m ever going to trust some lying asshole to hide his instructions for fucking anything that’s MINE. News Alert: “Your” computer ain’t yours.

        • @[email protected]
          7 months ago

          People have been trying to circumvent ChatGPT’s filters; they’ll do the exact same with open-source AI. But it’ll be worse because it’s open source, so any built-in feature to prevent abuse could just get removed and recompiled by whoever.

          And that’s all even assuming there ever ends up being open source AI.

          • @Chakravanti
            7 months ago

            Your logic is bass ackwards. Having the source open to the public means the shit gets fixed faster. Closed source just doesn’t get fixed 99% of the time because there’s only one motherfucker to do the fixing, and usually they just don’t do it.

            • @[email protected]
              7 months ago

              You can’t fix it with open source. All it takes is one guy making a fork and removing the safeguards because they believe in free speech or something. You can’t have safeguards against misuse of a tool in an open source environment.

              I agree that closed source AI is bad. But open source doesn’t magically solve the problem.

              • @Chakravanti
                7 months ago

                Forks are productive. You’re just wrong about it. I’ll take FOSS over closed source. I’ll trust the masses reviewing FOSS over the one asshole doing, or rather not doing, exactly that.

                • @[email protected]
                  7 months ago

                  The masses can review said asshole all they like, but it doesn’t mean anything, because nobody can stop them from removing the safeguards.

                  Then all of a sudden you have an AI that anybody can use that will happily generate even the most morally bankrupt things with ease and speed.

                  I love FOSS, but AI is inherently unethical. At best it steals people’s work. At worst it makes CP/nonconsensual porn.

                  • @Chakravanti
                    7 months ago

                    The user can. You don’t understand anything and I ain’t on a laptop to school you. Gimme a few hours ’til I get home and I’ll teach you the best I can, if you care to learn.

                    AI may be unethical, but Terminator’s AS was the best description of a FOSS AI.