A new AI voice tool is already being abused to deepfake celebrity audio clips

A few days ago, speech AI startup ElevenLabs launched a beta version of its platform that gives users the ability to create entirely new synthetic voices for text-to-speech audio or to clone anyone's voice. Well, it only took the internet a few days to start using the latter for vile purposes. The company revealed on Twitter that it's seeing an "increasing number of voice cloning misuse cases" and that it's thinking of a way to address the problem by "implementing additional safeguards."

While ElevenLabs didn't elaborate on what it meant by "misuse cases," Motherboard found 4chan posts with clips featuring generated voices that sound like celebrities reading or saying something questionable. One clip, for instance, reportedly featured a voice that sounded like Emma Watson reading a part of Mein Kampf. Users also posted voice clips featuring homophobic, transphobic, violent and racist sentiments. It's not entirely clear whether all the clips used ElevenLabs' technology, but a post with a wide collection of the voice files on 4chan included a link to the startup's platform.

Perhaps this emergence of "deepfake" audio clips shouldn't come as a surprise, seeing as a few years ago, we saw a similar phenomenon take place. Advances in AI and machine learning had led to a rise in deepfake videos, specifically deepfake pornography, wherein existing pornographic materials are altered to use the faces of celebrities. And, yes, people used Emma Watson's face for some of those videos.

ElevenLabs is now gathering feedback on how to prevent users from abusing its technology. At the moment, its ideas include adding more layers to its account verification to enable voice cloning, such as requiring users to enter payment information or an ID. It's also considering having users verify copyright ownership of the voice they want to clone, such as by having them submit a sample with prompted text. Finally, the company is thinking of dropping its Voice Lab tool altogether and having users submit voice cloning requests that it would manually verify.
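For illustration, here is a minimal Python sketch of what a prompted-text check could look like. Everything in it is an assumption rather than anything ElevenLabs has described: the function names, the word pool, and the 0.75 similarity threshold are hypothetical, and the embedding function is a toy stand-in for a real speaker-verification model.

```python
import secrets
import numpy as np

# Word pool for a one-time phrase the user must read aloud. A freshly
# generated prompt makes it hard to replay an old recording of the target.
WORDS = ["orchid", "granite", "seventeen", "lantern", "voyage", "copper"]

def make_prompt(n_words: int = 4) -> str:
    """Build a random phrase for the user to read aloud."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def speaker_embedding(audio: np.ndarray) -> np.ndarray:
    """Toy stand-in for a real speaker-verification model.

    A production system would run something like an x-vector model here;
    this version just summarizes the waveform so the sketch runs end to end.
    """
    frames = audio[: len(audio) // 64 * 64].reshape(-1, 64)
    emb = frames.std(axis=0)  # crude per-band energy signature
    return emb / (np.linalg.norm(emb) + 1e-9)

def same_speaker(reference: np.ndarray, prompted: np.ndarray,
                 threshold: float = 0.75) -> bool:
    """Approve cloning only if both recordings appear to match.

    `reference` is the sample the user wants to clone; `prompted` is their
    recording of the generated phrase. The threshold is arbitrary.
    """
    a, b = speaker_embedding(reference), speaker_embedding(prompted)
    return float(np.dot(a, b)) >= threshold

# Example: issue a prompt, then compare two (fake, random) recordings.
print(f"Please record yourself saying: '{make_prompt()}'")
rng = np.random.default_rng(0)
ref, rec = rng.standard_normal(16000), rng.standard_normal(16000)
print("same speaker:", same_speaker(ref, rec))
```

The idea behind such a check is that a freshly generated phrase can't be satisfied by replaying an existing recording of the target, which is what would make prompted text harder to game than a static uploaded sample.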
