How will the upcoming AI legislation around the world, like voice-cloning prevention and disclosure requirements for technical details of models, affect open-source or self-hosted models?

  • Kissaki@beehaw.org · 3 days ago

    If you’re selling or publishing a voice in a way that impersonates another person without their consent, that may be identifiable and prosecutable. “Generate with X’s voice.” “Talk to X.” Etc. Exact wording is not necessary if the intent is evident from pictures or from evasive descriptions that make an obvious implication.

    If the prosecution can find evidence of cloning or training on the person’s voice, that can also serve as a basis.

    In these ways, it doesn’t have to be about the similarity or quality of the produced voice, or about sound-alike people, at all.

    • Riskable@programming.dev · 2 days ago

      You make AI voice generation sound like it’s a one-step process, “clone voice X.” While you can do that, here’s where it’s heading in reality:

      “Generate a voice that sounds like a male version of Scarlett Johansson.”

      “That sounds good, but I want it to sound smoother.”

      “Ooh that’s close! Make it slightly higher pitch.”
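
      For concreteness, here’s a minimal sketch of the client-side loop that kind of workflow implies. Everything in it is hypothetical: `VoiceGenClient`, `generate`, and `refine` are stand-ins for whatever a real text-to-voice service would expose, not any actual product’s API.

      ```python
      # Hypothetical iterative voice-design loop; not any real service's API.
      from dataclasses import dataclass

      @dataclass
      class Voice:
          voice_id: str     # opaque handle an (imaginary) service would return
          description: str  # accumulated prompt history that produced it

      class VoiceGenClient:
          """Stand-in for a text-prompted voice generation service."""

          def generate(self, prompt: str) -> Voice:
              # Imagine an API call returning a brand-new synthetic voice.
              return Voice(voice_id="v0", description=prompt)

          def refine(self, voice: Voice, feedback: str) -> Voice:
              # Each refinement nudges the previous voice; the end result can be
              # many steps removed from any prompt that ever named a person.
              return Voice(voice_id=voice.voice_id + "+",
                           description=voice.description + " | " + feedback)

      client = VoiceGenClient()
      v = client.generate("a voice like a male version of Scarlett Johansson's")
      v = client.refine(v, "smoother delivery")
      v = client.refine(v, "slightly higher pitch")
      print(v.description)  # the prompt trail a court might end up examining
      ```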

      In a process like that, do you think Scarlett Johansson would have legal standing to sue?

      What if you started by cloning your own voice, but after many tweaks the result ends up sounding similar to Taylor Swift? Does she have standing?

      In court, you’d have expert witnesses saying they don’t sound the same. “They don’t even have the same inflection or accent!” You’d have voice analysis experts saying their voice patterns don’t match. Not even a little bit.

      But about half the jury would be like, “yeah, that does sound similar.” And you could convict a completely innocent person.
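
      On what “voice patterns don’t match” can mean in practice: speaker-verification tooling reduces each voice to a fixed-length embedding and compares them. Here’s a minimal sketch using the resemblyzer library (the .wav paths are placeholders, and where the same-speaker threshold sits is itself a judgment call):

      ```python
      # Sketch: compare two voices via speaker embeddings (resemblyzer).
      # File paths are placeholders; install with `pip install resemblyzer`.
      from pathlib import Path

      import numpy as np
      from resemblyzer import VoiceEncoder, preprocess_wav

      encoder = VoiceEncoder()  # pretrained speaker-verification model

      wav_a = preprocess_wav(Path("original_speaker.wav"))
      wav_b = preprocess_wav(Path("generated_voice.wav"))

      emb_a = encoder.embed_utterance(wav_a)  # unit-length 256-dim embedding
      emb_b = encoder.embed_utterance(wav_b)

      # Embeddings are L2-normalized, so a dot product is cosine similarity.
      print(f"cosine similarity: {float(np.dot(emb_a, emb_b)):.3f}")
      ```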

      • Kissaki@beehaw.org · 18 hours ago

        You mean my second point does? Would you agree that my first point is independent of the process and act of such a creation?

        The same applies to the creation or training process. If they trained with voice samples, or keep a collection of voice samples for matching, those could serve as evidence or at least indications.
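
        As a toy illustration of that kind of evidence: if investigators hold known recordings of the target voice, exact copies inside a seized sample collection can be flagged with ordinary content hashing. The directory names below are placeholders, and real forensics would also need fuzzy audio matching, since any re-encoding changes the hash.

        ```python
        # Sketch: flag exact copies of known recordings in a sample collection.
        # Directory names are placeholders; hashing only catches byte-for-byte
        # copies, so a re-encoded file would slip through.
        import hashlib
        from pathlib import Path

        def sha256_of(path: Path) -> str:
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        known = {sha256_of(p): p for p in Path("known_recordings").glob("*.wav")}

        for p in Path("seized_samples").rglob("*.wav"):
            digest = sha256_of(p)
            if digest in known:
                print(f"{p} is an exact copy of {known[digest]}")
        ```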

        • Riskable@programming.dev · 16 hours ago

          Like I said initially, how do we legally define “cloning”? I don’t think it’s possible to write a law that prevents it without also creating vastly more unintended consequences (and problems).

          Let’s take a step back for a moment to consider a more fundamental question: do people even have the right to NOT have their voice cloned? To me, that is impersonation, which is perfectly legal (in the US) as long as you don’t claim it’s the actual person. That is, if you impersonate someone, you can’t claim it’s actually that person, because that would be fraud.

          In the US—as far as I know—it’s perfectly legal to clone someone’s voice and use it however TF you want. What you can’t do is claim that it’s actually that person because that would be akin to a false endorsement.

          Realistically—from what I know about human voices—this is probably fine. Voice clones aren’t that good. The most effective method is to clone a voice and use it in a voice changer, with a voice actor who can mimic the original person’s accent and inflection. But even that has flaws that a trained ear will pick up.
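
          “Inflection” is partly measurable as a pitch contour over time. A rough sketch with librosa (file paths are placeholders, and fundamental frequency is only one of many cues a trained listener uses):

          ```python
          # Sketch: compare pitch (F0) statistics of two recordings via librosa.
          # File paths are placeholders; install with `pip install librosa`.
          import librosa
          import numpy as np

          def voiced_f0(path: str) -> np.ndarray:
              y, sr = librosa.load(path, sr=16000)
              f0, voiced_flag, _ = librosa.pyin(
                  y, fmin=librosa.note_to_hz("C2"),
                  fmax=librosa.note_to_hz("C6"), sr=sr)
              return f0[voiced_flag]  # keep only voiced frames

          a = voiced_f0("original.wav")
          b = voiced_f0("mimic.wav")

          # Crude summary statistics; a real analysis would time-align the
          # contours and compare them frame by frame.
          print(f"median F0: {np.median(a):.1f} Hz vs {np.median(b):.1f} Hz")
          print(f"F0 spread: {np.std(a):.1f} Hz vs {np.std(b):.1f} Hz")
          ```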

          Ethically speaking, there’s really nothing wrong with cloning a voice in itself. From an ethics standpoint, the act alone has no impact; it’s just a different way of speaking or singing.

          It feels like it might be bad to sing a song using something like Taylor Swift’s voice, but in reality it’ll have no impact on her or her music-related business.