thenextweb.com
Deepfakes have become alarmingly difficult to detect. So difficult, in fact, that only 0.1% of people today can identify them.

That's according to iProov, a British biometric authentication firm. The company tested the public's AI detective skills by showing 2,000 UK and US consumers a collection of both genuine and synthetic content. Sadly, the budding sleuths overwhelmingly failed in their investigations. A woeful 99.9% of them couldn't distinguish between the real and the deepfake.

Think you can do better, Sherlock? You're not the only one. In iProov's study, over 60% of the participants were confident in their AI detection skills, regardless of the accuracy of their guesses. Still trust your nose for digital clues? Well, you can test it for yourself in a deepfake quiz released alongside the study results.

The quiz arrives amid a surge in headline-grabbing deepfake attacks. In January, for instance, the tabloids were enraptured by one that targeted a French woman called Anne. Scammers swindled her out of €830,000 after using deepfakes of the actor to pose as Brad Pitt. The fraudsters also sent her footage of an AI-generated TV anchor revealing the Hollywood star's exclusive relationship with one special individual who goes by the name of Anne. Poor Anne was roundly mocked for her naivety, but she's far from alone in falling for a deepfake.

Deepfakes on the rise

Last year, a deepfake attack happened every five minutes, according to ID verification firm Onfido. The content is frequently weaponised for fraud.
A recent study estimated that AI drives almost half (43%) of all fraud attempts. Andrew Bud, the founder and CEO of iProov, attributes the escalation to three converging trends:

- The rapid evolution of AI and its ability to produce realistic deepfakes
- The growth of Crime-as-a-Service (CaaS) networks that offer cheaper access to sophisticated, purpose-built attack technologies
- The vulnerability of traditional ID verification practices

Bud also pointed to the lower barriers to entry for deepfakes. Attackers have progressed from simple "cheapfakes" to powerful tools that create convincing synthetic media within minutes.

"Deepfaking has become commoditised," Bud told TNW via email. "The tools to create deepfake content are widely accessible, very affordable, and produce results undetectable to the human eye. It's creating a perfect storm of cybercrime, as most organisations lack adequate defences to counter these attacks.

"Traditional solutions and manual processes like video identification simply can't keep up. Organisations must adopt science-based biometric systems combined with AI-powered defences that can detect, evolve with, and prevent these attacks."

Story by Thomas Macaulay, managing editor. Thomas is the managing editor of TNW. He leads our coverage of European tech and oversees our talented team of writers. Away from work, he enjoys playing chess (badly) and the guitar (even worse).