AI-Powered Social Engineering: Reinvented Threats
Feb 07, 2025 · The Hacker News · Artificial Intelligence / Cybercrime

The foundations of social engineering attacks, manipulating humans, might not have changed much over the years. It's the vectors, how these techniques are deployed, that are evolving. And like most industries these days, AI is accelerating that evolution. This article explores how these changes are impacting business, and how cybersecurity leaders can respond.

Impersonation attacks: using a trusted identity

Traditional forms of defense were already struggling to solve social engineering, the 'cause of most data breaches' according to Thomson Reuters. The next generation of threat actors, powered by AI, can now launch these attacks with unprecedented speed, scale, and realism.

The old way: Silicone masks

By impersonating a French government minister, two fraudsters were able to extract over €55 million from multiple victims. During video calls, one would wear a silicone mask of Jean-Yves Le Drian. To add a layer of believability, they also sat in a recreation of his ministerial office with photos of then-President François Hollande.

Over 150 prominent figures were reportedly contacted and asked for money to fund ransom payments or anti-terror operations. The biggest transfer made was €47 million, when the target was urged to act because of two journalists held in Syria.

The new way: Video deepfakes

Many of the requests for money failed. After all, silicone masks can't fully replicate the look and movement of skin on a person. AI video technology is offering a new way to step up this form of attack. We saw this last year in Hong Kong, where attackers created a video deepfake of a CFO to carry out a $25 million scam. They invited a colleague to a videoconference call, where the deepfake CFO persuaded the employee to make the multi-million transfer to the fraudsters' account.

Live calls: voice phishing

Voice phishing, often known as vishing, uses live audio to build on the power of traditional phishing, where people are persuaded to give information that compromises their organization.

The old way: Fraudulent phone calls

The attacker impersonates someone, perhaps an authoritative figure or someone from another trustworthy background, and makes a phone call to a target. They add a sense of urgency to the conversation, requesting that a payment be made immediately to avoid negative outcomes such as losing access to an account or missing a deadline. Victims lost a median of $1,400 to this form of attack in 2022.

The new way: Voice cloning

Traditional vishing defense recommendations include asking people not to click on links that come with requests, and calling the person back on an official phone number. It's similar to the Zero Trust approach of Never Trust, Always Verify. Of course, when the voice comes from someone the person knows, it's natural for trust to bypass any verification concerns. That's the big challenge with AI: attackers are now using voice cloning technology, often trained on just a few seconds of a target speaking. A mother received a call from someone who'd cloned her daughter's voice, saying she'd been kidnapped and that the attackers wanted a $50,000 ransom.
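That callback discipline can be encoded in tooling as well as policy. Below is a minimal sketch of the idea in Python; the directory contents and the confirmation step are hypothetical stand-ins, not anything described in the article.

```python
# Minimal "never trust, always verify" sketch for sensitive voice requests.
# OFFICIAL_DIRECTORY and request_callback_confirmation are hypothetical
# stand-ins for a real employee directory and a human-performed callback.

OFFICIAL_DIRECTORY = {
    # Numbers come from HR records, never from the inbound call itself.
    "cfo@example.com": "+1-555-0100",
}

def request_callback_confirmation(official_number: str) -> bool:
    # Placeholder: a human dials the directory number and verbally
    # confirms the request before anything is approved.
    print(f"Dial {official_number} and confirm the request before acting.")
    return False  # default-deny until the callback succeeds

def verify_sensitive_request(claimed_identity: str, inbound_number: str) -> bool:
    """Approve a sensitive request only after an out-of-band callback."""
    official_number = OFFICIAL_DIRECTORY.get(claimed_identity)
    if official_number is None:
        return False  # unknown identity: reject outright
    # Caller ID is spoofable and a cloned voice proves nothing, so the
    # inbound number is ignored even when it matches the directory entry.
    return request_callback_confirmation(official_number)

# Example: a caller claiming to be the CFO asks for an urgent transfer.
print(verify_sensitive_request("cfo@example.com", "+1-555-0100"))  # False until confirmed
```

The design choice worth noting is the default-deny: nothing the inbound call supplies, including a matching caller ID or a familiar voice, is treated as proof of identity.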
Phishing emails

Most people with an email address have been a lottery winner. At least, they've received an email telling them that they've won millions, perhaps with a reference to a king or prince who needs help releasing the funds in return for an upfront fee.

The old way: Spray and pray

Over time these phishing attempts have become far less effective, for multiple reasons. They're sent in bulk with little personalization and lots of grammatical errors, and people are more aware of '419 scams' with their requests to use specific money transfer services. Other versions, such as fake login pages for banks, can often be blocked using web browsing protection and spam filters, along with educating people to check the URL closely (a check simple enough to automate, as the sketch after this section shows).

However, phishing remains the biggest form of cybercrime. The FBI's Internet Crime Report 2023 found phishing/spoofing was the source of 298,878 complaints. To give that some context, the second-highest category (personal data breach) registered 55,851 complaints.

The new way: Realistic conversations at scale

AI is allowing threat actors to craft word-perfect messages by harnessing LLMs, instead of relying on basic translations. They can also use AI to launch these campaigns against multiple recipients at scale, with customization allowing for the more targeted form of spear phishing. What's more, they can use these tools in multiple languages, opening the doors to a wider number of regions, where targets may not be as aware of traditional phishing techniques and what to check. The Harvard Business Review warns that 'the entire phishing process can be automated using LLMs, which reduces the costs of phishing attacks by more than 95% while achieving equal or greater success rates.'
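The manual URL check mentioned above maps directly onto an automated filter rule. Here is a minimal sketch; the suspicious HTML sample and the domain-comparison heuristic are purely illustrative.

```python
# Sketch: flag HTML email links whose visible text shows one domain
# while the href points somewhere else, a classic phishing tell.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collects (visible_text, href) pairs where the domains disagree."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = "".join(self._text).strip()
            href_domain = urlparse(self._href).netloc.lower()
            # Only compare when the visible text itself looks like a URL or
            # domain; real filters would also normalize "www." prefixes.
            if "." in shown and " " not in shown:
                shown_domain = urlparse(shown if "//" in shown else "//" + shown).netloc.lower()
                if shown_domain and href_domain and shown_domain != href_domain:
                    self.mismatches.append((shown, self._href))
            self._href = None

# Illustrative lure: the text claims a bank's site, the href goes elsewhere.
auditor = LinkAuditor()
auditor.feed('<a href="https://login.example-fraud.net">www.yourbank.com</a>')
print(auditor.mismatches)  # [('www.yourbank.com', 'https://login.example-fraud.net')]
```

A mismatch alone doesn't prove fraud (legitimate newsletters often route links through tracking domains), so in practice a check like this would raise a spam-filter score rather than block outright.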
Reinvented threats mean reinventing defenses

Cybersecurity has always been an arms race between defense and attack, but AI has added a different dimension. Now, targets have no way of knowing what's real and what's fake when an attacker is trying to manipulate their:

- Trust, by impersonating a colleague and asking an employee to bypass security protocols for sensitive information
- Respect for authority, by pretending to be an employee's CFO and ordering them to complete an urgent financial transaction
- Fear, by creating a sense of urgency and panic so the employee doesn't stop to consider whether the person they're speaking to is genuine

These are essential parts of human nature and instinct that have evolved over thousands of years. Naturally, this isn't something that can evolve at the same speed as malicious actors' methods or the progress of AI. Traditional forms of awareness training, with online courses and questions and answers, aren't built for this AI-powered reality.

That's why part of the answer, especially while technical protections are still catching up, is to make your workforce experience simulated social engineering attacks. Your employees might not remember what you say about defending against a cyber attack when it occurs, but they will remember how a simulation made them feel, so that when a real attack happens, they're aware of how to respond.
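The article doesn't prescribe how to run such simulations, but a common pattern is to send benign lure emails carrying per-employee tracking links and route anyone who clicks into immediate training. A minimal sketch follows; the employee list, send_training_email, and the landing-page hook are all hypothetical.

```python
# Sketch of per-employee click tracking for a simulated phishing exercise.
# A real program would run on a dedicated simulation platform, with HR
# sign-off, and treat results as training data rather than blame.
import secrets

employees = ["alice@example.com", "bob@example.com"]  # illustrative list
tokens = {secrets.token_urlsafe(16): addr for addr in employees}

def send_training_email(addr: str, link: str) -> None:
    # Placeholder for an SMTP or mail-API call.
    print(f"to={addr} link={link}")

for token, addr in tokens.items():
    # Each recipient gets a unique link so clicks can be attributed.
    send_training_email(addr, f"https://training.example.com/lure?id={token}")

def record_click(token: str) -> None:
    """Called by the lure landing page; enrolls the clicker in follow-up training."""
    addr = tokens.get(token)
    if addr:
        print(f"{addr} clicked: enroll in the follow-up awareness module")
```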