I wanted to maybe start making PeerTube videos, but then I realized I’ve never had to consider my voice as part of my threat model. The consequence that immediately comes to mind is having your voice used to train an AI voice clone, but I’m not (currently) in a position where others would find it desirable to do so. Potentially in the future?

I’d like to know how your threat model handles your personal voice. And as a bonus: how do voice modulators factor in? Would they make your voice more flexible within your threat model, or keep it from being tied to you? Thanks!

  • AnAmericanPotato@programming.dev
    8 days ago

    I’m not (currently) in a position where others would find it desirable to do so. Potentially in the future?

    It’s hard to imagine a scenario where someone would want to clone your voice and your voice would not otherwise be available. For example, if you went into politics, you’d be a target, but you’d already be speaking in public all the time. It only takes a few seconds of a voice sample to clone a voice nowadays, and it’ll only get easier from here.

    Maybe just make a point to educate your family and friends on the risk of voice cloning so they don’t fall for phone scams.

    • utopiah@lemmy.ml
      8 days ago

      educate your family and friends on the risk of voice cloning so they don’t fall for phone scams.

      Absolutely. In fact, you can easily do it yourself as a harmless prank, just to gauge how they’ll react: clone your own voice, create a new email address like [email protected], and attach a recording where you ask for a Netflix/Apple/whatever gift card.
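
      If you’re curious how low the bar is, here’s a minimal sketch using the open-source Coqui TTS library (XTTS v2 clones a voice from a reference clip of just a few seconds). The file names and the message text are placeholders for illustration:

      ```python
      # pip install TTS  (Coqui TTS)
      from TTS.api import TTS

      # XTTS v2 is a multilingual model that supports zero-shot voice cloning
      # from a short reference recording.
      tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

      # Synthesize the prank message in the cloned voice.
      tts.tts_to_file(
          text="Hey, it's me. I'm stuck without my wallet. Could you grab me a gift card?",
          speaker_wav="my_voice_sample.wav",  # a few seconds of your own voice (placeholder path)
          language="en",
          file_path="cloned_prank.wav",
      )
      ```

      Hearing a relative’s cloned voice ask for a gift card tends to make the risk concrete in a way a warning alone doesn’t.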