

Let the AI Security War Games Begin

In February 2024, CNN reported, “A finance worker at a multinational firm was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call.” In Europe, a second firm experienced a multimillion-dollar fraud when a deepfake emulated a board member in a video, allegedly approving a fraudulent transfer of funds.

“Banks and financial institutions are particularly at risk,” said The Hack Academy. “A study by Deloitte found that over 50% of senior executives expect deepfake scams to target their organizations soon. These attacks can undermine trust and lead to significant financial loss.” Hack Academy went on to say that AI-inspired security attacks weren’t confined to deepfakes. These attacks were also beginning to occur with increased regularity in the form of corporate espionage and misinformation campaigns.

AI brings new, more dangerous tactics to traditional attack methods like phishing, social engineering and the insertion of malware into systems. For CIOs, enterprise AI system developers, data scientists and IT network professionals, AI changes the rules and the tactics of security, given AI’s potential for both good and bad. This is forcing a reset in how IT thinks about defending against malicious actors and intruders.

What exactly is IT up against? The AI tools available on the dark web and in public cyber marketplaces give perpetrators a wide choice of AI weaponry. IoT and edge networks also present much broader enterprise attack surfaces. Security threats can arrive through videos, phone calls, social media sites, corporate systems and networks, vendor clouds, IoT devices, network endpoints, and virtually any entry point into a corporate IT environment that electronic communications can penetrate.
Here are some of the AI-embellished security attacks that companies are currently seeing:

- Convincing deepfake videos of corporate executives and stakeholders, intended to dupe companies into pursuing certain actions or transferring certain assets or funds. This deepfaking also extends to voice simulations of key personnel that are left as voicemails in corporate phone systems.
- Phishing and spear-phishing attacks that send convincing emails (some with malicious attachments) to employees, who mistakenly open them because they think the sender is their boss, the CEO or someone else they perceive as trusted. AI supercharges these attacks because it can automate and send out a large volume of emails that hit many employee email accounts, and it continues to “learn” with the help of machine learning so it can discover new trusted-sender candidates for future attacks.
- Adaptive messaging that uses generative AI to craft messages with corrected grammar, and that “learns” from corporate communication styles so the messages more closely emulate legitimate corporate communications.
- Mutating code that uses AI to change malware signatures on the fly so antivirus detection mechanisms can be evaded.
- Data poisoning, which occurs when a corporate or cloud provider’s AI data repository is injected with malware that alters (“poisons”) the data so it produces erroneous and misleading results.

Fighting Back With Tech

To combat these supercharged AI-based security threats, IT has a number of tools, techniques and strategies it can consider.

Fighting deepfakes. Deepfakes can come in the form of videos, voicemails and photos. Since deepfakes are unstructured data objects that can’t be parsed in their native forms the way structured data can, new tools on the market convert these objects into graphical representations that can be analyzed to evaluate whether there is something in an object that should or shouldn’t be there. The goal is to confirm authenticity.
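Whether the impersonation arrives as a deepfake video call or a spoofed email, it usually hinges on a lookalike identity. As a small illustration (the domain names and threshold below are hypothetical, not drawn from any specific product), a common first-pass control flags sender domains that nearly match the corporate domain — close enough to fool a reader, but not an exact match:

```python
# Minimal sketch: flag email sender domains that are near-misses of the
# corporate domain, a classic sign of impersonation ("CEO fraud") phishing.
# The domain names and similarity threshold are illustrative assumptions.
from difflib import SequenceMatcher

CORPORATE_DOMAIN = "example.com"

def is_suspicious_domain(sender: str, threshold: float = 0.8) -> bool:
    """Return True if the sender's domain closely resembles, but does not
    exactly match, the corporate domain."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain == CORPORATE_DOMAIN:
        return False  # legitimate internal sender
    similarity = SequenceMatcher(None, domain, CORPORATE_DOMAIN).ratio()
    return similarity >= threshold

print(is_suspicious_domain("ceo@examp1e.com"))   # lookalike: True
print(is_suspicious_domain("bob@example.com"))   # exact match: False
print(is_suspicious_domain("news@vendor.org"))   # unrelated: False
```

A check like this is only one layer; real secure-email gateways combine it with authentication signals such as SPF, DKIM and DMARC results.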
Fighting phishing and spear phishing. A combination of policy and practice works best to combat phishing and spear-phishing attacks. Both are predicated on users being tricked into opening an email attachment they believe is from a trusted sender, so the first line of defense is educating (and repeat-educating) users on how to handle their email. For instance, a user who receives an email that seems unusual or unexpected should notify IT and never open it.

IT should also review its current security tools. Is it still using older security monitoring software that doesn’t include more modern technologies like observability, which can check for security intrusions or malware at more atomic levels? Is IT still using identity access management (IAM) software to track user identities and activities at a top level in the cloud and at top and atomic levels on premises, or has it also added cloud identity entitlements management (CIEM), which gives it an atomic-level view of user accesses and activities in the cloud? Better yet, has IT moved to identity governance administration (IGA), which can serve as an overarching umbrella for IAM and CIEM plugins, plus provide detailed audit reports and automated compliance across all platforms?

Fighting embedded malware code. Malware can lie dormant in systems for months, giving a bad actor the option to activate it whenever the timing is right. That is all the more reason for IT to augment its security staff with new skillsets, such as that of the “threat hunter,” whose job is to examine networks, data and systems daily, hunting down malware that might be lurking within and destroying it before it activates.

Fighting with zero-trust networks. Internet of Things (IoT) devices come into companies with little or no security because IoT suppliers don’t pay much attention to it, and there is a general expectation that corporate IT will configure devices to the appropriate security settings. The problem is, IT often forgets to do this.
There are also times when users purchase their own IoT gear and IT doesn’t know about it. Zero-trust networks help manage this because they detect and report on everything that is added, removed or modified on the network, giving IT visibility into potential new breach points. A second step is to formalize IT procedures so that no IoT device is deployed without its security first being set to corporate standards.

Fighting AI data poisoning. AI models, systems and data should be continuously monitored for accuracy. As soon as they show lowered levels of accuracy or produce unusual conclusions, the data repository, inflows and outflows should be examined for data quality and bias. If contamination is found, the system should be taken down, the data sanitized, and the sources of the contamination traced, tracked and disabled.

Fighting AI with AI. Almost every security tool on the market today contains AI functionality to detect anomalies, abnormal data patterns and unusual user activities. Additionally, forensics AI can dissect a security breach that does occur, isolating how it happened, where it originated and what caused it. Since most sites don’t have on-staff forensics experts, IT will have to train staff in forensics skills.

Fighting with regular audits and vulnerability testing. At minimum, IT vulnerability testing should be performed quarterly, and full security audits annually. Sites that use cloud providers should request each provider’s latest security audit for review. An outside auditor can also help sites prepare for future AI-driven security threats, because auditors stay on top of the industry, visit many different companies and see many different situations. Advance knowledge of looming threats helps sites prepare for new battles.

Summary

AI technology is moving faster than legal rulings and regulations.
This leaves most IT departments “on their own” to develop security defenses against bad actors who use AI against them. The good news is that IT already has insights into how bad actors intend to use AI, and there are tools on the market that can help defensive efforts. What’s been missing is a proactive and aggressive battle plan from IT. That has to start now.
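As one concrete starting point for that battle plan, the continuous accuracy monitoring described above for fighting AI data poisoning can be sketched in a few lines. The model, data, baseline figure and drop threshold below are hypothetical placeholders, not a reference to any particular product:

```python
# Sketch of continuous accuracy monitoring to surface possible data poisoning:
# periodically score the model on a held-out, known-clean validation set and
# flag any drop well below the established baseline. Thresholds are
# illustrative assumptions.

def check_for_poisoning(model, clean_inputs, clean_labels,
                        baseline_accuracy: float, max_drop: float = 0.05):
    """Return (accuracy, degraded). A degraded result is a signal to
    quarantine the system and audit recent data inflows and outflows."""
    correct = sum(
        1 for x, y in zip(clean_inputs, clean_labels) if model(x) == y
    )
    accuracy = correct / len(clean_labels)
    degraded = accuracy < baseline_accuracy - max_drop
    return accuracy, degraded

# Usage with a stand-in "model" (a plain function) and toy data:
model = lambda x: x % 2            # pretend classifier: parity of input
inputs = [1, 2, 3, 4, 5, 6]
labels = [1, 0, 1, 0, 1, 1]        # last label disagrees -> 5/6 correct
acc, degraded = check_for_poisoning(model, inputs, labels,
                                    baseline_accuracy=0.95)
print(acc, degraded)               # ~0.833, True -> investigate
```

In production the held-out set must itself be protected (versioned and checksummed), since a poisoned validation set would mask exactly the drift this check is meant to catch.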
WWW.INFORMATIONWEEK.COM