
How Big of a Threat Is AI Voice Cloning to the Enterprise?
www.informationweek.com
In March, several YouTube content creators seemed to receive a private video from the platform's CEO, Neal Mohan. It turned out that it was not Mohan in the video, but rather an AI-generated version of him created by scammers out to steal credentials and install malware. This may stir memories of other recent, high-profile AI-powered scams. Last year, robocalls featuring the voice of President Joe Biden urged people not to vote in the primaries. The calls used AI to mimic Biden's voice, AP News reports.

Examples of these kinds of deepfakes -- video and audio -- are popping up in the news frequently. The nonprofit Consumer Reports reviewed six voice cloning apps and found that four of them have no significant guardrails preventing users from cloning someone's voice without their consent.

Executives are often the public faces and voices of their companies; audio and video of CEOs, CIOs, and other C-suite members are readily available online. How concerned should CIOs and other enterprise tech leaders be about voice cloning and other deepfakes?

A Lack of Guardrails

ElevenLabs, Lovo, PlayHT, and Speechify -- four of the apps Consumer Reports evaluated -- ask users to check a box confirming that they have the legal right to use the apps' voice cloning capabilities. Descript and Resemble AI take consent a step further by asking users to read and record a consent statement, according to Consumer Reports.

Barriers to prevent misuse of these apps are quite low. Even the apps that require users to read a statement could potentially be manipulated with audio created by a non-consensual voice clone on another platform, the Consumer Reports review notes.

Not only can users employ many readily available apps to clone someone's voice without their consent, they don't need technical skills to do so.

"No CS background, no master's degree, no need to program. Literally go on to your app store on your phone or to Google and type in 'voice clone' or 'deepfake face generator,' and there's thousands of tools for fraudsters to cause harm," says Ben Colman, co-founder and CEO of deepfake detection company Reality Defender.

Colman also notes that compute costs have dropped dramatically within the past few months. "A year ago you needed cloud compute. Now, you can do it on a commodity laptop or phone," he adds.

The issue of AI regulation is still very much up in the air. Could there be more guardrails for these kinds of apps in the future? Colman is confident that there will be. He gave testimony before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on the dangers of election deepfakes.

"The challenges and risks created by generative AI are a truly bipartisan concern," Colman tells InformationWeek. "We're very optimistic about near-term guardrails."

The Risks of Voice Cloning

While more guardrails may be forthcoming, whether via regulation or another impetus, enterprise leaders have to contend with the risks of voice cloning and other deepfakes today.

"The barrier to entry is so low right now that AI voices could essentially bypass outdated authentication systems, and that's going to leave you with multiple risks, whether there's data breaches, reputational concerns, financial fraud," says Justice Erolin, CTO of BairesDev, a software outsourcing company. "And because there's no industry safeguards, it leaves most companies at risk."

Safeguarding Against Fraud

The obvious frontline defense against voice cloning is to limit sharing personal data, like your voice print. The harder it is to find audio featuring your voice, the harder it is to clone it.

"They should not share either personal data or voice or face, but it's challenging for CEOs. For example, I'm on YouTube. I'm on the news. It's just a cost of doing business," says Colman.

CIOs must operate in the realities of a digital world, knowing that enterprise leaders are going to have publicly available audio that scammers can attempt to voice clone and use for nefarious ends.

"AI voice cloning is not a futuristic risk. It's a risk that's here today. I would treat it like any other cyber threat: with robust authentication," says Erolin.

Given the risks of voice cloning, relying on audio alone for authentication is risky. Adopting multifactor authentication can mitigate that risk. Requiring passwords, PINs, or biometrics along with audio can help ensure you are speaking to the person you think you are, not someone who has cloned their voice or likeness.
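To make that layering concrete, here is a minimal sketch of what gating a voice check behind a second factor might look like. The function names, the 0.85 similarity threshold, and the choice of a time-based one-time password (TOTP) as the second factor are illustrative assumptions, not any vendor's reference implementation.

```python
# Minimal sketch: voice alone is not trusted; a second factor is required.
# verify_voice() is a hypothetical stand-in for whatever speaker-verification
# service an enterprise already uses; the threshold below is illustrative.

import pyotp  # library for time-based one-time passwords (TOTP)

VOICE_MATCH_THRESHOLD = 0.85  # illustrative; tune per vendor guidance

def verify_voice(audio_sample: bytes, enrolled_voiceprint: bytes) -> float:
    """Hypothetical speaker-verification call; returns a similarity score in [0, 1]."""
    raise NotImplementedError("replace with your speaker-verification vendor's API")

def authenticate_caller(audio_sample: bytes,
                        enrolled_voiceprint: bytes,
                        totp_secret: str,
                        submitted_code: str) -> bool:
    # Factor 1: voice similarity -- necessary but no longer sufficient,
    # since cloned audio can score well against a genuine voiceprint.
    if verify_voice(audio_sample, enrolled_voiceprint) < VOICE_MATCH_THRESHOLD:
        return False

    # Factor 2: something the caller *has* (an authenticator app),
    # which a voice clone alone cannot reproduce.
    return pyotp.TOTP(totp_secret).verify(submitted_code)
```

The design point is simply that the voice check gates nothing on its own: a cloned voice that clears the similarity threshold still fails without the one-time code.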
The Outlook for Detection

Detection is an essential tool in the fight against voice cloning. Colman likens the development of deepfake detection tools to the development of antivirus scanning, which is done locally, in real time, on devices.

"I'd say deepfake detection [has] the exact same growth story," Colman explains. "Last year, it was pick files you want to scan, and this year, it's pick a certain location, scan everything. And we're expecting within the next year, we will move completely on-device."

Detection tools could be integrated onto devices, like phones and computers, and into video conferencing platforms to detect when audio and video have been generated or manipulated by AI. Reality Defender is working on pilots of its tool with banks, for example, initially integrating with call centers and interactive voice response (IVR) technology. (A sketch of what such a call-center integration hook might look like appears at the end of this article.)

"I think we're going to look back on this period in a few years, just like antivirus, and say, 'Can you imagine a world where we didn't check for generative AI?'" says Colman.

Like any other cybersecurity concern, there will be a tug of war between escalating deepfake capabilities in the hands of threat actors and detection capabilities in the hands of defenders. CIOs and other security leaders will be challenged to implement safeguards and evaluate those capabilities against those of fraudsters.
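As promised above, here is a minimal sketch of how a call center might hook a deepfake detector into an IVR flow. The detector interface, score range, and routing threshold are all assumptions for illustration; this is not Reality Defender's API or any specific vendor's product.

```python
# Hypothetical IVR hook: score each incoming audio chunk with a deepfake
# detector and escalate suspicious calls to a human agent. All names and
# thresholds here are illustrative assumptions, not a vendor's API.

from dataclasses import dataclass
from typing import Iterable

ESCALATION_THRESHOLD = 0.7  # illustrative: likelihood the audio is synthetic

@dataclass
class DetectionResult:
    synthetic_score: float  # 0.0 (likely genuine) .. 1.0 (likely AI-generated)

def score_chunk(audio_chunk: bytes) -> DetectionResult:
    """Hypothetical stand-in for a detection vendor's scoring call."""
    raise NotImplementedError("replace with your detection vendor's SDK")

def handle_call(audio_chunks: Iterable[bytes]) -> str:
    """Route the call based on a running check of each audio chunk."""
    for chunk in audio_chunks:
        if score_chunk(chunk).synthetic_score >= ESCALATION_THRESHOLD:
            # Don't auto-deny: flag and hand off so a human agent can
            # re-verify the caller through other factors.
            return "escalate_to_agent"
    return "continue_ivr"
```

The salient design choice is that detection augments, rather than replaces, the IVR's existing authentication: a high synthetic score triggers escalation and re-verification, not an automatic rejection.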