Microsoft warns AI is making it faster and easier to create online scams
In brief: One profession that has truly embraced generative AI is the cybercriminal's. Microsoft warns that the technology has evolved to the point where creating an online scam can now take minutes rather than days or weeks, and requires little technical knowledge.

In the latest edition of its Cyber Signals report, Microsoft writes that AI has started to lower the technical bar for fraud and cybercrime actors looking for their own productivity tools.

The range of cyber scams AI can be used for is extensive. The tools can, for example, help create social engineering lures by scanning and scraping the web to build detailed profiles of employees or other targets. There are also complex fraud schemes that use AI-enhanced product reviews and AI-generated storefronts, with scammers creating entire sham websites and fake e-commerce brands, complete with fabricated business histories and customer testimonials. Scammers can even use AI to run customer service chatbots that lie about unexplained charges and other anomalies.

It has long been reported that advancing deepfake technology is making video impersonation a popular tool for scammers. We've seen it used to create fake celebrity endorsements, impersonate friends and family members, and, as Microsoft notes, to conduct job interviews – on both the hiring and applying sides – via video calls. The company notes that lip-syncing delays, robotic speech, and odd facial expressions are giveaway signs that the person on the other end of a video call might be a deepfake.

Microsoft recommends that consumers be wary of limited-time deals, countdown timers, and suspicious reviews. They should also cross-check domain names and reviews before making purchases, and avoid payment methods that lack fraud protections, such as direct bank transfers and cryptocurrency payments.

Tech support scams are also on the rise. While AI doesn't always play a part in these incidents, tech support scammers often pretend to be legitimate IT support from well-known companies and use social engineering tactics to gain the trust of their targets. The Windows Quick Assist tool, which lets someone view a screen or take it over through a remote connection to fix problems, is regularly used in these scams. In response, Microsoft is adding warnings to Quick Assist and now requires users to check a box acknowledging the security implications of sharing their screen. The company also recommends using Remote Help instead of Quick Assist for internal tech support.

While the post focuses on the dangers of AI scams, it also notes that Microsoft continues to protect its platforms and customers from cybercriminals. Between April 2024 and April 2025, Microsoft stopped $4 billion worth of fraud attempts, rejected 49,000 fraudulent partnership enrollments, and blocked about 1.6 million bot signup attempts per hour.