That weird call or text from a senator is probably an AI scam
The AI clones establish a rapport and then try to trick the target into clicking malicious links.
Image: Moor Studio/Getty Images
If you recently received a voice message from an unusual number claiming to be your local congressperson, it’s probably a scam. The FBI’s Internet Crime Complaint Center issued a warning this week about a new scheme in which bad actors use text messages and AI-generated voice clones to impersonate government officials. The scammers try to build a sense of connection with their target and eventually convince them to click on a malicious link that steals valuable login credentials. This scam is just the latest in a series of evolving attacks that use convincing generative AI to trick people.
“If you receive a message claiming to be from a senior US official, do not assume it is authentic,” the FBI crime alert reads.
How the scam works
Government officials say the scam began around April of this year. Attackers either send text messages or use AI-generated voice-cloning technology to impersonate government employees. Many of the targets, the alert notes, have been officials or close contacts of government officials. AI technology has improved rapidly in recent years, to the point where some systems can generate convincing fakes after analyzing just a few minutes, or even seconds, of recorded audio of a person’s voice. Public officials, many of whom frequently give speeches or statements, are particularly vulnerable to voice cloning.
Though the FBI notice is sparse on details, it says scammers typically use the supposed government official’s identity to create a sense of familiarity or urgency with their target. From there, they often ask the target to click a link to continue the conversation on a different messaging platform. In reality, that link is a trap designed to steal sensitive credentials like usernames and passwords. The FBI warns that this type of attack could also be used to target other individuals in government positions. If scammers gain access to a victim’s contacts, they could use that information to target additional officials. The stolen contact details could later be used to impersonate others in attempts to steal or transfer funds.
“Access to personal or official accounts operated by US officials could be used to target other government officials, or their associates and contacts, by using trusted contact information they obtain,” the FBI notes.
Related: Scammers are leaning into AI deepfakes
Increasingly convincing AI-generated audio and video are making phishing scams more effective. A 2024 report from cybersecurity company Zscaler found that phishing attempts increased by 58 percent in 2023, a surge attributed in part to AI deepfakes. While these scams can target anyone, seniors are often disproportionately impacted. In 2023, FBI data showed that scammers stole $3.4 billion from senior citizens through various financial schemes. AI, the agency notes, is worsening the problem by making scams appear more believable, tricking people who might otherwise recognize them as fraud.
Some of these attacks can be shockingly targeted. Over the past two years, there have been numerous reports of attackers using voice cloning technology to trick parents into believing their child has been kidnapped. In a state of panic, the victims transfer large sums of money, only to later discover their loved one was never in danger. Voice clones are also being used in the political arena.
Last year, voters in New Hampshire received a robocall featuring what sounded like former President Joe Biden urging them not to vote in the state’s primary. The “Biden” voice was actually generated by AI. The audio was reportedly created by political consultant Steve Kramer, who was working with then-Democratic presidential primary challenger Dean Phillips. Kramer was eventually fined $6 million by the FCC and is facing criminal charges for alleged voter suppression.
The FBI urges people to exercise extreme caution if they receive a communication claiming to come directly from a government official. If that happens, individuals should independently verify the person’s identity by calling a known, trusted phone number associated with them. The alert also advises the public to inspect email addresses and URLs for typos or other irregularities that could indicate a phishing attempt. In the case of deepfakes, the notice recommends listening for awkward pauses, unusual intonation, or other oddities that may be telltale signs of an AI-generated voice. That’s easier said than done: today’s most sophisticated tools can produce manipulated content that is virtually indistinguishable to the average human observer.