Bots Now Dominate the Web, and That's a Problem
www.technewsworld.com
By John P. Mello Jr. | February 4, 2025 5:00 AM PT

Nearly half the traffic on the internet is generated by automated entities called bots, and a large portion of them pose threats to consumers and businesses on the web.

"[B]ots can help in creating phishing scams by gaining users' trust and exploiting it for scammers. These scams can have serious implications for the victim, some of which include financial loss, identity theft, and the spread of malware," Christoph C. Cemper, founder of AIPRM, an AI prompt engineering and management company in Wilmington, Del., said in a statement provided to TechNewsWorld.

"Unfortunately, this is not the only security threat posed by bots," he continued. "They can also damage brand reputations, especially for brands and businesses with popular social media profiles and high engagement rates. By associating a brand with fraudulent and unethical practices, bots can tarnish a brand's reputation and reduce consumer loyalty."

According to the Imperva 2024 Bad Bot Report, bad bot traffic levels have risen for the fifth consecutive year, an alarming trend. It noted the increase is partly driven by the growing popularity of artificial intelligence (AI) and large language models (LLMs).

In 2023, bad bots accounted for 32% of all internet traffic, up 1.8 percentage points from 2022, the report explained. The share of good bot traffic also increased, albeit less significantly, from 17.3% of all internet traffic in 2022 to 17.6% in 2023. Combined, 49.6% of all internet traffic in 2023 wasn't human, as human traffic declined to 50.4% of the total.

Good bots help index the web for search engines, automate cybersecurity monitoring, and assist customer service through chatbots, explained James McQuiggan, a security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.

"They assist with detecting vulnerabilities, improving IT workflows, and streamlining procedures online," he told TechNewsWorld. "The trick is knowing what's valuable automation and what's nefarious activity."

Ticket Scalping at Scale

Automation and success are driving the growth trends for botnet traffic, explained Thomas Richards, network and red team practice director at Black Duck Software, an application security company in Burlington, Mass.

"Being able to scale up allows malicious actors to achieve their goals," he told TechNewsWorld. "AI is having an impact by allowing these malicious actors to act more human and automate coding and other tasks. Google, for example, has revealed that Gemini has been used to create malicious things."

"We see this in other everyday experiences as well," he continued, "like the struggle in recent years to get concert tickets to popular events. Scalpers find ways to create users or use compromised accounts to buy tickets faster than a human ever could. They make money by reselling the tickets at a much higher price."

It's easy and profitable to deploy automated attacks, added Stephen Kowski, field CTO at SlashNext, a computer and network security company in Pleasanton, Calif. "Criminals are using sophisticated tools to bypass traditional security measures," he told TechNewsWorld.
"AI-powered systems make bots more convincing and harder to detect, enabling them to mimic human behavior better and adapt to defensive measures."

"The combination of readily available AI tools and the increasing value of stolen data creates perfect conditions for even more advanced bot attacks in the future," he said.

Why Bad Bots Are a Serious Threat

David Brauchler, technical director and head of AI and ML security at the NCC Group, a global cybersecurity consultancy, expects non-human internet traffic to continue to grow.

"As more devices become internet-connected, SaaS platforms add interconnected functionality, and new vulnerable devices enter the scene, bot-related traffic has had the opportunity to continue increasing its share of network bandwidth," he told TechNewsWorld.

Brauchler added that bad bots are capable of causing great harm. "Bots have been used to trigger mass outages by overwhelming network resources to deny access to systems and services," he said.

"With the advent of generative AI, bots can also be used to impersonate realistic user activity on online platforms, increasing spam risk and fraud," he explained. "They can also scan for and exploit security vulnerabilities in computer systems."

He contended that the biggest risk from AI is the proliferation of spam. "There's no strong technical solution to identifying and blocking this type of content online," he explained. "Users have taken to calling this phenomenon 'AI slop,' and it risks drowning out the signal of legitimate online interactions in the noise of artificial content."

He cautioned, however, that the industry should be very careful when it considers the best solution to this problem. "Many potential remedies can create more harm, especially those that risk attacking online privacy," he said.

How to Identify Malicious Bots

Brauchler acknowledged that it can be difficult for humans to detect a malicious bot. "The overwhelming majority of bots don't operate in any fashion that humans can detect," he said. "They contact internet-exposed systems directly, querying for data or interacting with services."

"The category of bot that most humans are concerned with are autonomous AI agents that can masquerade as humans in an attempt to defraud people online," he continued.

"Many AI chatbots use predictable speech patterns that users can learn to recognize by interacting with AI text generators online. Similarly, AI-generated imagery has a number of tells that users can learn to look for, including broken patterns, such as hands and clocks being misaligned, edges of objects melting into other objects, and muddled backgrounds," he said.

"AI voices also have unusual inflections and expressions of tone that users can learn to pick up on," he added.

Malicious bots are often used on social media platforms to gain trusted access to individuals or groups. "Watch for telltale signs like unusual patterns in friend requests, generic or stolen profile pictures, and accounts that post at inhuman speeds or frequencies," Kowski cautioned.

He also advised being wary of profiles with limited personal information, suspicious engagement patterns, or pushing specific agendas through automated responses.

In the enterprise, he continued, real-time behavioral analysis can spot automated actions that don't match natural human patterns, such as impossibly fast clicks or form fills.
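To make that idea concrete, here is a minimal sketch of the kind of timing checks such behavioral analysis could start from. It assumes a server-side log of per-client events; the class name, thresholds, and one-minute window are illustrative assumptions for this example, not details from Kowski, SlashNext, or any other vendor quoted in this article.

```python
import time
from collections import defaultdict, deque

# Hypothetical thresholds: real systems tune these against observed human baselines.
MIN_FORM_FILL_SECONDS = 2.0   # humans rarely complete a form in under ~2 seconds
MAX_EVENTS_PER_MINUTE = 60    # sustained click/post rates above this are suspect

class BehaviorMonitor:
    """Tracks per-client event timing to flag inhumanly fast activity."""

    def __init__(self):
        self.events = defaultdict(deque)   # client_id -> recent event timestamps
        self.form_served = {}              # client_id -> when the form was served

    def record_form_served(self, client_id: str) -> None:
        self.form_served[client_id] = time.monotonic()

    def is_suspicious(self, client_id: str) -> bool:
        now = time.monotonic()

        # Check 1: an impossibly fast form fill.
        served = self.form_served.get(client_id)
        if served is not None and now - served < MIN_FORM_FILL_SECONDS:
            return True

        # Check 2: an event rate beyond plausible human speed,
        # measured over a one-minute sliding window.
        window = self.events[client_id]
        window.append(now)
        while window and now - window[0] > 60:
            window.popleft()
        return len(window) > MAX_EVENTS_PER_MINUTE
```

A production system would weigh many such signals together, such as mouse movement, navigation order, and session history, rather than relying on any single threshold.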
Threat to Businesses

Malicious bots can be a significant threat to enterprises, noted Ken Dunham, director of the threat research unit at Qualys, a provider of cloud-based IT, security, and compliance solutions in Foster City, Calif.

"Once amassed by a threat actor, they can be weaponized," he told TechNewsWorld. "Bots have incredible resources and capabilities to perform anonymous, distributed, asynchronous attacks against targets of choice, such as brute-force credential attacks, distributed denial-of-service attacks, vulnerability scans, attempted exploitation, and more."

Malicious bots can also target login portals, API endpoints, and public-facing systems, which creates risk for organizations as bad actors probe for weaknesses that could give them access to internal infrastructure and data, added McQuiggan.

"Without bot mitigation strategies, companies can be vulnerable to automated threats," he said.

To mitigate threats from bad bots, he recommended deploying multi-factor authentication, technological bot detection solutions, and monitoring traffic for anomalies.

He also recommended blocking old user agents, using CAPTCHAs, and limiting interactions, where possible, to reduce success rates, as sketched below.

"Through security awareness education and human risk management, an employee's knowledge of bot-driven phishing and fraud attempts can ensure a healthy security culture and reduce the risk of a successful bot attack," he advised.
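As a rough illustration of two of those recommendations, the sketch below rejects requests carrying outdated or automation-flavored user-agent strings and rate-limits interactions per client. The pattern list, request cap, and function name are hypothetical choices for this example, not McQuiggan's or KnowBe4's actual configuration.

```python
import re
import time
from collections import defaultdict

# Hypothetical denylist: user-agent fragments for long-obsolete browsers and
# bare HTTP clients that legitimate visitors almost never present.
OUTDATED_AGENT_PATTERN = re.compile(
    r"MSIE [1-9]\.|python-requests|curl/|wget/", re.IGNORECASE
)

REQUESTS_PER_MINUTE = 100          # illustrative cap; tune per endpoint
_request_log = defaultdict(list)   # client IP -> recent request timestamps

def should_block(client_ip: str, user_agent: str) -> bool:
    """Apply two mitigations from the article: drop traffic with old or
    missing user agents, and limit interactions per client."""
    # Mitigation 1: block old or automation-flavored user agents.
    if not user_agent or OUTDATED_AGENT_PATTERN.search(user_agent):
        return True

    # Mitigation 2: rate-limit using a one-minute sliding window.
    now = time.time()
    recent = [t for t in _request_log[client_ip] if now - t < 60]
    recent.append(now)
    _request_log[client_ip] = recent
    return len(recent) > REQUESTS_PER_MINUTE
```

Checks like these raise the cost of crude automation; they complement, rather than replace, the CAPTCHA and anomaly-monitoring measures mentioned above.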
John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data, and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net, and Government Security News.