Every single Voice-AI company has an ethics page that people looking to create deep-fakes should read, if only to keep themselves out of jail. "I didn't know" is not going to be a legal defense.
"ElevenLabs claims it "always had the ability to trace any generated audio clip back to a specific user." Next week it will release a tool that will allow anyone to confirm that a clip was generated using its technology and report it.
The company says the malicious content was created by "free anonymous accounts," so it will add a new layer of identity verification, make Voice Lab available only on paid tiers, and immediately remove the free version from its site. ElevenLabs is currently tracking and banning any account that creates harmful content in violation of its policies.
Microsoft's VALL-E (voice) is similar to ElevenLabs, and Microsoft likewise has the ability to tell whether a voice recording is AI-generated. Anyone accused of saying something they didn't will be able to prove it.
Regardless of the specifics, we are fast approaching – or have already reached – the point of not being able to believe our eyes and ears unless we witnessed something live ourselves. The funny thing is, studies over the years have asked people to describe or recall something they just witnessed (like a car accident), and the researchers have often received as many different answers as the number of people asked.
Finally, as with everything technology-based, it can be used for good and for bad. Bad actors will always find a way, and many people will be harmed by them, with or without technology IMO. Anyone who has lived through the games/music/social-media/internet/wifi/2G-5G/porn-is-bad moral panics sparked by a few bad actors on the fringes knows that AI/ML is just another opportunity for "fear-mongering" by those with an agenda to try and take control of us. It's the human dilemma.