A brand-new AI voice tool is already being abused to deepfake celebrity audio clips
Just a few days ago, speech AI startup ElevenLabs launched a beta version of its platform that lets users create entirely new synthetic voices for text-to-speech audio or clone someone’s voice. Well, it only took the internet a few days to start using the latter for vile purposes. The company has revealed on Twitter that it’s seeing an “increasing number of voice cloning misuse cases” and that it’s thinking of a way to address the problem by “implementing additional safeguards.”
While ElevenLabs didn’t elaborate on what it meant by “misuse cases,” Motherboard found 4chan posts with clips featuring generated voices that sound like celebrities reading or saying something questionable. One clip, for instance, reportedly featured a voice that sounded like Emma Watson reading a part of Mein Kampf. Users also posted voice clips featuring homophobic, transphobic, violent and racist sentiments. It isn’t entirely clear whether all of the clips used ElevenLabs’ technology, but a post containing a wide collection of the voice files on 4chan included a link to the startup’s platform.
Perhaps this emergence of “deepfake” audio clips shouldn’t come as a surprise, seeing as a few years ago, we saw a similar phenomenon take place. Advances in AI and machine learning had led to a rise in deepfake videos, specifically deepfake pornography, wherein existing pornographic materials are altered to use the faces of celebrities. And, yes, people used Emma Watson’s face for some of those videos.
ElevenLabs is now gathering feedback on how to prevent users from abusing its technology. At the moment, its current ideas include adding more layers to its account verification to enable voice cloning, such as requiring users to enter payment information or an ID. It’s also considering having users verify copyright ownership of the voice they want to clone, such as by getting them to submit a sample with prompted text. Finally, the company is thinking of dropping its Voice Lab tool altogether and having users submit voice cloning requests that it has to manually verify.