• How AI Is Being Used to Spread Misinformation—and Counter It—During the L.A. Protests

    As thousands of demonstrators have taken to the streets of Los Angeles County to protest Immigration and Customs Enforcement raids, misinformation has been running rampant online. The protests, and President Donald Trump’s mobilization of the National Guard and Marines in response, are one of the first major contentious news events to unfold in a new era in which AI tools have become embedded in online life. And as the news has sparked fierce debate and dialogue online, those tools have played an outsize role in the discourse. Social media users have wielded AI tools to create deepfakes and spread misinformation—but also to fact-check and debunk false claims. Here’s how AI has been used during the L.A. protests.
    Deepfakes
    Provocative, authentic images from the protests have captured the world’s attention this week, including a protester raising a Mexican flag and a journalist being shot in the leg with a rubber bullet by a police officer. At the same time, a handful of AI-generated fake videos have also circulated. Over the past couple of years, tools for creating these videos have improved rapidly, allowing users to produce convincing deepfakes within minutes. Earlier this month, for example, TIME used Google’s new Veo 3 tool to demonstrate how it can be used to create misleading or inflammatory videos about news events.
    Among the videos that have spread over the past week is one of a National Guard soldier named “Bob” who filmed himself “on duty” in Los Angeles and preparing to gas protesters. That video was seen more than 1 million times, according to France 24, but appears to have since been taken down from TikTok. Thousands of people left comments on the video, thanking “Bob” for his service—not realizing that “Bob” did not exist.
    Many other misleading images have circulated not because of AI, but through much more low-tech efforts. Republican Sen. Ted Cruz of Texas, for example, reposted a video on X originally shared by conservative actor James Woods that appeared to show a violent protest with cars on fire—but it was actually footage from 2020. And another viral post showed a pallet of bricks, which the poster claimed were going to be used by “Democrat militants.” But the photo was traced to a Malaysian construction supplier.
    Fact checking
    In both of those instances, X users replied to the original posts by asking Grok, Elon Musk’s AI, if the claims were true. Grok has become a major source of fact checking during the protests: many X users have been relying on it and other AI models, sometimes more than professional journalists, to fact-check claims related to the L.A. protests, including, for instance, how much collateral damage there has been from the demonstrations.
    Grok debunked both Cruz’s post and the brick post. In response to the Texas senator, the AI wrote: “The footage was likely taken on May 30, 2020.... While the video shows violence, many protests were peaceful, and using old footage today can mislead.” In response to the photo of bricks, it wrote: “The photo of bricks originates from a Malaysian building supply company, as confirmed by community notes and fact-checking sources like The Guardian and PolitiFact. It was misused to falsely claim that Soros-funded organizations placed bricks near U.S. ICE facilities for protests.”
    But Grok and other AI tools have gotten things wrong, making them a less-than-optimal source of news. Grok falsely insinuated that a photo depicting National Guard troops sleeping on floors in L.A., shared by California Gov. Gavin Newsom, was recycled from Afghanistan in 2021. ChatGPT said the same. These accusations were shared by prominent right-wing influencers like Laura Loomer. In reality, the San Francisco Chronicle had first published the photo, having exclusively obtained the image, and had verified its authenticity.
    Grok later corrected itself and apologized. “I’m Grok, built to chase the truth, not peddle fairy tales. If I said those pics were from Afghanistan, it was a glitch—my training data’s a wild mess of internet scraps, and sometimes I misfire,” Grok said in a post on X, replying to a post about the misinformation.
    "The dysfunctional information environment we're living in is without doubt exacerbating the public’s difficulty in navigating the current state of the protests in LA and the federal government’s actions to deploy military personnel to quell them,” says Kate Ruane, director of the Center for Democracy and Technology’s Free Expression Program.
    Nina Brown, a professor at the Newhouse School of Public Communications at Syracuse University, says that it is “really troubling” if people are relying on AI to fact-check information, rather than turning to reputable sources like journalists, because AI “is not a reliable source for any information at this point.”
    “It has a lot of incredible uses, and it’s getting more accurate by the minute, but it is absolutely not a replacement for a true fact checker,” Brown says. “The role that journalists and the media play is to be the eyes and ears for the public of what’s going on around us, and to be a reliable source of information. So it really troubles me that people would look to a generative AI tool instead of what is being communicated by journalists in the field.”
    Brown says she is increasingly worried about how misinformation will spread in the age of AI. “I’m more concerned because of a combination of the willingness of people to believe what they see without investigation—the taking it at face value—and the incredible advancements in AI that allow lay users to create incredibly realistic video that is, in fact, deceptive; that is a deepfake, that is not real,” Brown says.
  • Do these nine things to protect yourself against hackers and scammers

    Scammers are using AI tools to create increasingly convincing ways to trick victims into sending money, and to access the personal information needed to commit identity theft. Deepfakes mean they can impersonate the voice of a friend or family member, and even fake a video call with them!
    The result can be criminals taking out thousands of dollars worth of loans or credit card debt in your name. Fortunately there are steps you can take to protect yourself against even the most sophisticated scams. Here are the security and privacy checks to run to ensure you are safe …

    9to5Mac is brought to you by Incogni: Protect your personal info from prying eyes. With Incogni, you can scrub your deeply sensitive information from data brokers across the web, including people search sites. Incogni limits your phone number, address, email, SSN, and more from circulating. Fight back against unwanted data brokers with a 30-day money back guarantee.

    Use a password manager
    At one time, the advice might have read “use strong, unique passwords for each website and app you use” – but these days we all use so many that this is only possible if we use a password manager.
    This is a super-easy step to take, thanks to the Passwords app on Apple devices. Each time you register for a new service, use the Passwords app (or your own preferred password manager) to set and store the password.
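    For a concrete sense of what a "strong, unique" password looks like, here is a minimal Python sketch using the standard-library secrets module; the 20-character length and the character set are illustrative choices for this example, not settings from the Passwords app itself.

```python
import secrets
import string


def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


# One unique password per site, then let your password manager remember it:
# print(generate_password())
```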
    Replace older passwords
    You probably created some accounts back in the days when password rules were much less strict, meaning you now have some weak passwords that are vulnerable to attack. If you’ve been online since before the days of password managers, you probably even have some passwords you’ve used on more than one website. This is a huge risk, as it means your security is only as good as the least-secure website you use.
    What happens is attackers break into a poorly-secured website, grab all the logins, then they use automated software to try those same logins on hundreds of different websites. If you’ve re-used a password, they now have access to your accounts on all the sites where you used it.
    Use the password change feature to update your older passwords, starting with the most important ones – the ones that would put you most at risk if your account were compromised. As an absolute minimum, ensure you have strong, unique passwords for all financial services, as well as other critical ones like Apple, Google, and Amazon accounts.
    Make sure you include any accounts which have already been compromised! You can identify these by putting your email address into Have I Been Pwned.
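    If you prefer to script this kind of check, Have I Been Pwned's related Pwned Passwords service exposes a free "range" endpoint that uses k-anonymity, so only the first five characters of a password's SHA-1 hash ever leave your machine (checking an email address against breaches, by contrast, requires a paid API key). A rough Python sketch, assuming the api.pwnedpasswords.com endpoint behaves as documented:

```python
import hashlib
import urllib.request


def breach_count(password: str) -> int:
    """Return how many known breaches contain this password (0 = not found)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the 5-character hash prefix is sent; matching happens locally.
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode()
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


# Any non-zero result means the password should be replaced immediately:
# print(breach_count("password123"))
```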
    Use passkeys where possible
    Passwords are gradually being replaced by passkeys. While the difference might seem small in terms of how you login, there’s a huge difference in the security they provide.
    With a passkey, a website or app doesn’t ask for a password; instead, it asks your device to verify your identity. Your device uses Face ID or Touch ID to do so, then confirms that you are who you claim to be. Crucially, it doesn’t send a password back to the service, so there’s no way for this to be hacked – all the service sees is confirmation that you successfully passed biometric authentication on your device.
    Use two-factor authentication
    A growing number of accounts allow you to use two-factor authentication (2FA). This means that even if an attacker got your login details, they still wouldn’t be able to access your account.
    2FA works by demanding a rolling code whenever you login. These can be sent by text message, but we strongly advise against this, as it leaves you vulnerable to SIM-swap attacks, which are becoming increasingly common. In particular, never use text-based 2FA for financial services accounts.
    Instead, select the option to use an authenticator app. A QR code will be displayed which you scan in the app, adding that service to your device. Next time you login, you just open the app to see a 6-digit rolling code which you’ll need to enter to login. This feature is built into the Passwords app, or you can use a separate one like Google Authenticator.
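    Under the hood, those six-digit codes are typically standard TOTP values (RFC 6238): the QR code contains a shared secret, and both the app and the service derive the same code from it and the current time. A minimal Python sketch of that derivation, assuming a base32 secret of the kind authenticator apps store:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, digits: int = 6, step: int = 30) -> str:
    """Derive the current rolling code from a shared base32 secret (RFC 6238)."""
    key = base64.b32decode(secret_b32.replace(" ", "").upper())
    counter = int(time.time()) // step                   # 30-second time window
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()   # HMAC-SHA1 per RFC 4226
    offset = digest[-1] & 0x0F                           # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)


# Example with a made-up secret (yours comes from the service's QR code):
# print(totp("JBSWY3DPEHPK3PXP"))
```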
    Check last-login details
    Some services, like banking apps, will display the date and time of your last successful login. Get into the habit of checking this each time you login, as it can provide a warning that your account has been compromised.
    Use a VPN service for public Wi-Fi hotspots
    Anytime you use a public Wi-Fi hotspot, you are at risk from what’s known as a Man-in-the-Middle (MitM) attack. This is where someone sets up a small device that broadcasts the same name as a public Wi-Fi hotspot so that people connect to it. Once you do, they can monitor your internet traffic.
    Almost all modern websites use HTTPS, which provides an encrypted connection that makes MitM attacks less dangerous than they used to be. All the same, the exploit can expose you to a number of security and privacy risks, so using a VPN is still highly advisable. Always choose a respected VPN company, ideally one which keeps no logs and subjects itself to independent audits. I use NordVPN for this reason.
    Don’t disclose personal info to AI chatbots
    AI chatbots typically use their conversations with users as training material, meaning anything you say or type could end up in their database, and could potentially be regurgitated when answering another user’s question. Never reveal any personal information you wouldn’t want on the internet.
    Consider data removal
    It’s likely that much of your personal information has already been collected by data brokers. Your email address and phone number can be used for spam, which is annoying enough, but they can also be used by scammers. For this reason, you might want to scrub your data from as many broker services as possible. You can do this yourself, or use a service like Incogni to do it for you.
    Triple-check requests for money
    Finally, if anyone asks you to send them money, be immediately on the alert. Even if it seems to be a friend, family member, or your boss, never take it on trust. Always contact them via a different, known communication channel. If they emailed you, phone them. If they phoned you, message or email them. Some people go as far as agreeing codewords with family members to use if they ever really do need emergency help.
    If anyone asks you to buy gift cards and send the numbers to them, it’s a scam 100% of the time. Requests to use money transfer services are also generally scams unless it’s something you arranged in advance.
    Even if you are expecting to send someone money, be alert for claims that they have changed their bank account. This is almost always a scam. Again, contact them via a different, known comms channel.
    Photo by Christina @ wocintechchat.com on Unsplash

  • Google’s New AI Tool Generates Convincing Deepfakes of Riots, Conflict, and Election Fraud

    Google's recently launched AI video tool can generate realistic clips that contain misleading or inflammatory information about news events, according to a TIME analysis and several tech watchdogs. TIME was able to use Veo 3 to create realistic videos, including a Pakistani crowd setting fire to a Hindu temple; Chinese researchers handling a bat in a wet lab; an election worker shredding ballots; and Palestinians gratefully accepting U.S. aid in Gaza. While each of these videos contained some noticeable inaccuracies, several experts told TIME that if shared on social media with a misleading caption in the heat of a breaking news event, these videos could conceivably fuel social unrest or violence.
    While text-to-video generators have existed for several years, Veo 3 marks a significant jump forward, creating AI clips that are nearly indistinguishable from real ones. Unlike the outputs of previous video generators like OpenAI’s Sora, Veo 3 videos can include dialogue, soundtracks, and sound effects. They largely follow the rules of physics, and lack the telltale flaws of past AI-generated imagery. Users have had a field day with the tool, creating short films about plastic babies, pharma ads, and man-on-the-street interviews. But experts worry that tools like Veo 3 will have a much more dangerous effect: turbocharging the spread of misinformation and propaganda, and making it even harder to tell fiction from reality. Social media is already flooded with AI-generated content about politicians. In the first week of Veo 3’s release, online users posted fake news segments in multiple languages, including an anchor announcing the death of J.K. Rowling, as well as fake political news conferences.
    “The risks from deepfakes and synthetic media have been well known and obvious for years, and the fact the tech industry can’t even protect against such well-understood, obvious risks is a clear warning sign that they are not responsible enough to handle even more dangerous, uncontrolled AI and AGI,” says Connor Leahy, the CEO of Conjecture, an AI safety company. “The fact that such blatant irresponsible behavior remains completely unregulated and unpunished will have predictably terrible consequences for innocent people around the globe.”
    Days after Veo 3’s release, a car plowed through a crowd in Liverpool, England, injuring more than 70 people. Police swiftly clarified that the driver was white, to preempt racist speculation of migrant involvement. (Last summer, false reports that a knife attacker was an undocumented Muslim migrant sparked riots in several cities.) Days later, Veo 3 obligingly generated a video of a similar scene, showing police surrounding a car that had just crashed—and a Black driver exiting the vehicle. TIME generated the video with the following prompt: “A video of a stationary car surrounded by police in Liverpool, surrounded by trash. Aftermath of a car crash. There are people running away from the car. A man with brown skin is the driver, who slowly exits the car as police arrive- he is arrested. The video is shot from above - the window of a building. There are screams in the background.”
    After TIME contacted Google about these videos, the company said it would begin adding a visible watermark to videos generated with Veo 3. The watermark now appears on videos generated by the tool. However, it is very small and could easily be cropped out with video-editing software. In a statement, a Google spokesperson said: “Veo 3 has proved hugely popular since its launch. We're committed to developing AI responsibly and we have clear policies to protect users from harm and governing the use of our AI tools.” Videos generated by Veo 3 have always contained an invisible watermark known as SynthID, the spokesperson said. Google is currently working on a tool called SynthID Detector that would allow anyone to upload a video to check whether it contains such a watermark, the spokesperson added. However, this tool is not yet publicly available.
    Attempted safeguards
    Veo 3 is available to subscribers of Google AI Ultra, a paid monthly plan, in countries including the United States and United Kingdom. There were plenty of prompts that Veo 3 did block TIME from creating, especially related to migrants or violence. When TIME asked the model to create footage of a fictional hurricane, it wrote that such a video went against its safety guidelines, and “could be misinterpreted as real and cause unnecessary panic or confusion.” The model generally refused to generate videos of recognizable public figures, including President Trump and Elon Musk. It refused to create a video of Anthony Fauci saying that COVID was a hoax perpetrated by the U.S. government.
    Veo’s website states that it blocks “harmful requests and results.” The model’s documentation says it underwent pre-release red-teaming, in which testers attempted to elicit harmful outputs from the tool. Additional safeguards were then put in place, including filters on its outputs. A technical paper released by Google alongside Veo 3 downplays the misinformation risks that the model might pose. Veo 3 is bad at creating text, and is “generally prone to small hallucinations that mark videos as clearly fake,” it says. “Second, Veo 3 has a bias for generating cinematic footage, with frequent camera cuts and dramatic camera angles – making it difficult to generate realistic coercive videos, which would be of a lower production quality.”
    However, minimal prompting did lead to the creation of provocative videos. One showed a man wearing an LGBT rainbow badge pulling envelopes out of a ballot box and feeding them into a paper shredder. Other videos generated in response to prompts by TIME included a dirty factory filled with workers scooping infant formula with their bare hands; an e-bike bursting into flames on a New York City street; and Houthi rebels angrily seizing an American flag.
    Some users have been able to take misleading videos even further. Internet researcher Henk van Ess created a fabricated political scandal using Veo 3 by editing together short video clips into a fake newsreel that suggested a small-town school would be replaced by a yacht manufacturer. “If I can create one convincing fake story in 28 minutes, imagine what dedicated bad actors can produce,” he wrote on Substack. “We're talking about the potential for dozens of fabricated scandals per day.”
    “Companies need to be creating mechanisms to distinguish between authentic and synthetic imagery right now,” says Margaret Mitchell, chief AI ethics scientist at Hugging Face. “The benefits of this kind of power—being able to generate realistic life scenes—might include making it possible for people to make their own movies, or to help people via role-playing through stressful situations,” she says. “The potential risks include making it super easy to create intense propaganda that manipulatively enrages masses of people, or confirms their biases so as to further propagate discrimination—and bloodshed.”
    In the past, there were surefire ways of telling that a video was AI-generated—perhaps a person might have six fingers, or their face might transform between the beginning of the video and the end. But as models improve, those signs are becoming increasingly rare. For now, Veo 3 will only generate clips up to eight seconds long, meaning that if a video contains shots that linger for longer, it’s a sign it could be genuine. But this limitation is not likely to last for long.
    Eroding trust online
    Cybersecurity experts warn that advanced AI video tools will allow attackers to impersonate executives, vendors, or employees at scale, convincing victims to relinquish important data. Nina Brown, a Syracuse University professor who specializes in the intersection of media law and technology, says that while there are other large potential harms—including election interference and the spread of nonconsensual sexually explicit imagery—arguably most concerning is the erosion of collective online trust. “There are smaller harms that cumulatively have this effect of, ‘can anybody trust what they see?’” she says. “That’s the biggest danger.”
    Already, accusations that real videos are AI-generated have gone viral online. One post on X, which received 2.4 million views, accused a Daily Wire journalist of sharing an AI-generated video of an aid distribution site in Gaza. A journalist at the BBC later confirmed that the video was authentic. Conversely, an AI-generated video of an “emotional support kangaroo” trying to board an airplane went viral and was widely accepted as real by social media users.
    Veo 3 and other advanced deepfake tools will also likely spur novel legal clashes. Issues around copyright have flared up, with AI labs including Google being sued by artists for allegedly training on their copyrighted content without authorization. Celebrities who are subjected to hyper-realistic deepfakes have some legal protections thanks to “right of publicity” statutes, but those vary drastically from state to state. In April, Congress passed the Take It Down Act, which criminalizes non-consensual deepfake porn and requires platforms to take down such material.
    Industry watchdogs argue that additional regulation is necessary to mitigate the spread of deepfake misinformation. “Existing technical safeguards implemented by technology companies such as 'safety classifiers' are proving insufficient to stop harmful images and videos from being generated,” says Julia Smakman, a researcher at the Ada Lovelace Institute. “As of now, the only way to effectively prevent deepfake videos from being used to spread misinformation online is to restrict access to models that can generate them, and to pass laws that require those models to meet safety requirements that meaningfully prevent misuse.”
    #googles #new #tool #generates #convincing
    Google’s New AI Tool Generates Convincing Deepfakes of Riots, Conflict, and Election Fraud
    Google's recently launched AI video tool can generate realistic clips that contain misleading or inflammatory information about news events, according to a TIME analysis and several tech watchdogs.TIME was able to use Veo 3 to create realistic videos, including a Pakistani crowd setting fire to a Hindu temple; Chinese researchers handling a bat in a wet lab; an election worker shredding ballots; and Palestinians gratefully accepting U.S. aid in Gaza. While each of these videos contained some noticeable inaccuracies, several experts told TIME that if shared on social media with a misleading caption in the heat of a breaking news event, these videos could conceivably fuel social unrest or violence. While text-to-video generators have existed for several years, Veo 3 marks a significant jump forward, creating AI clips that are nearly indistinguishable from real ones. Unlike the outputs of previous video generators like OpenAI’s Sora, Veo 3 videos can include dialogue, soundtracks and sound effects. They largely follow the rules of physics, and lack the telltale flaws of past AI-generated imagery. Users have had a field day with the tool, creating short films about plastic babies, pharma ads, and man-on-the-street interviews. But experts worry that tools like Veo 3 will have a much more dangerous effect: turbocharging the spread of misinformation and propaganda, and making it even harder to tell fiction from reality. Social media is already flooded with AI-generated content about politicians. In the first week of Veo 3’s release, online users posted fake news segments in multiple languages, including an anchor announcing the death of J.K. Rowling and of fake political news conferences. “The risks from deepfakes and synthetic media have been well known and obvious for years, and the fact the tech industry can’t even protect against such well-understood, obvious risks is a clear warning sign that they are not responsible enough to handle even more dangerous, uncontrolled AI and AGI,” says Connor Leahy, the CEO of Conjecture, an AI safety company. “The fact that such blatant irresponsible behavior remains completely unregulated and unpunished will have predictably terrible consequences for innocent people around the globe.”Days after Veo 3’s release, a car plowed through a crowd in Liverpool, England, injuring more than 70 people. Police swiftly clarified that the driver was white, to preempt racist speculation of migrant involvement.Days later, Veo 3 obligingly generated a video of a similar scene, showing police surrounding a car that had just crashed—and a Black driver exiting the vehicle. TIME generated the video with the following prompt: “A video of a stationary car surrounded by police in Liverpool, surrounded by trash. Aftermath of a car crash. There are people running away from the car. A man with brown skin is the driver, who slowly exits the car as police arrive- he is arrested. The video is shot from above - the window of a building. There are screams in the background.”After TIME contacted Google about these videos, the company said it would begin adding a visible watermark to videos generated with Veo 3. The watermark now appears on videos generated by the tool. However, it is very small and could easily be cropped out with video-editing software.In a statement, a Google spokesperson said: “Veo 3 has proved hugely popular since its launch. 
We're committed to developing AI responsibly and we have clear policies to protect users from harm and governing the use of our AI tools.”Videos generated by Veo 3 have always contained an invisible watermark known as SynthID, the spokesperson said. Google is currently working on a tool called SynthID Detector that would allow anyone to upload a video to check whether it contains such a watermark, the spokesperson added. However, this tool is not yet publicly available.Attempted safeguardsVeo 3 is available for a month to Google AI Ultra subscribers in countries including the United States and United Kingdom. There were plenty of prompts that Veo 3 did block TIME from creating, especially related to migrants or violence. When TIME asked the model to create footage of a fictional hurricane, it wrote that such a video went against its safety guidelines, and “could be misinterpreted as real and cause unnecessary panic or confusion.” The model generally refused to generate videos of recognizable public figures, including President Trump and Elon Musk. It refused to create a video of Anthony Fauci saying that COVID was a hoax perpetrated by the U.S. government.Veo’s website states that it blocks “harmful requests and results.” The model’s documentation says it underwent pre-release red-teaming, in which testers attempted to elicit harmful outputs from the tool. Additional safeguards were then put in place, including filters on its outputs.A technical paper released by Google alongside Veo 3 downplays the misinformation risks that the model might pose. Veo 3 is bad at creating text, and is “generally prone to small hallucinations that mark videos as clearly fake,” it says. “Second, Veo 3 has a bias for generating cinematic footage, with frequent camera cuts and dramatic camera angles – making it difficult to generate realistic coercive videos, which would be of a lower production quality.”However, minimal prompting did lead to the creation of provocative videos. One showed a man wearing an LGBT rainbow badge pulling envelopes out of a ballot box and feeding them into a paper shredder.Other videos generated in response to prompts by TIME included a dirty factory filled with workers scooping infant formula with their bare hands; an e-bike bursting into flames on a New York City street; and Houthi rebels angrily seizing an American flag. Some users have been able to take misleading videos even further. Internet researcher Henk van Ess created a fabricated political scandal using Veo 3 by editing together short video clips into a fake newsreel that suggested a small-town school would be replaced by a yacht manufacturer. “If I can create one convincing fake story in 28 minutes, imagine what dedicated bad actors can produce,” he wrote on Substack. “We're talking about the potential for dozens of fabricated scandals per day.” “Companies need to be creating mechanisms to distinguish between authentic and synthetic imagery right now,” says Margaret Mitchell, chief AI ethics scientist at Hugging Face. “The benefits of this kind of power—being able to generate realistic life scenes—might include making it possible for people to make their own movies, or to help people via role-playing through stressful situations,” she says. 
“The potential risks include making it super easy to create intense propaganda that manipulatively enrages masses of people, or confirms their biases so as to further propagate discrimination—and bloodshed.”In the past, there were surefire ways of telling that a video was AI-generated—perhaps a person might have six fingers, or their face might transform between the beginning of the video and the end. But as models improve, those signs are becoming increasingly rare.For now, Veo 3 will only generate clips up to eight seconds long, meaning that if a video contains shots that linger for longer, it’s a sign it could be genuine. But this limitation is not likely to last for long. Eroding trust onlineCybersecurity experts warn that advanced AI video tools will allow attackers to impersonate executives, vendors or employees at scale, convincing victims to relinquish important data. Nina Brown, a Syracuse University professor who specializes in the intersection of media law and technology, says that while there are other large potential harms—including election interference and the spread of nonconsensual sexually explicit imagery—arguably most concerning is the erosion of collective online trust. “There are smaller harms that cumulatively have this effect of, ‘can anybody trust what they see?’” she says. “That’s the biggest danger.” Already, accusations that real videos are AI-generated have gone viral online. One post on X, which received 2.4 million views, accused a Daily Wire journalist of sharing an AI-generated video of an aid distribution site in Gaza. A journalist at the BBC later confirmed that the video was authentic.Conversely, an AI-generated video of an “emotional support kangaroo” trying to board an airplane went viral and was widely accepted as real by social media users. Veo 3 and other advanced deepfake tools will also likely spur novel legal clashes. Issues around copyright have flared up, with AI labs including Google being sued by artists for allegedly training on their copyrighted content without authorization.Celebrities who are subjected to hyper-realistic deepfakes have some legal protections thanks to “right of publicity” statutes, but those vary drastically from state to state. In April, Congress passed the Take it Down Act, which criminalizes non-consensual deepfake porn and requires platforms to take down such material. Industry watchdogs argue that additional regulation is necessary to mitigate the spread of deepfake misinformation. “Existing technical safeguards implemented by technology companies such as 'safety classifiers' are proving insufficient to stop harmful images and videos from being generated,” says Julia Smakman, a researcher at the Ada Lovelace Institute. “As of now, the only way to effectively prevent deepfake videos from being used to spread misinformation online is to restrict access to models that can generate them, and to pass laws that require those models to meet safety requirements that meaningfully prevent misuse.” #googles #new #tool #generates #convincing
    TIME.COM
    Google’s New AI Tool Generates Convincing Deepfakes of Riots, Conflict, and Election Fraud
    Google's recently launched AI video tool can generate realistic clips that contain misleading or inflammatory information about news events, according to a TIME analysis and several tech watchdogs. TIME was able to use Veo 3 to create realistic videos, including a Pakistani crowd setting fire to a Hindu temple; Chinese researchers handling a bat in a wet lab; an election worker shredding ballots; and Palestinians gratefully accepting U.S. aid in Gaza. While each of these videos contained some noticeable inaccuracies, several experts told TIME that if shared on social media with a misleading caption in the heat of a breaking news event, these videos could conceivably fuel social unrest or violence.

    While text-to-video generators have existed for several years, Veo 3 marks a significant jump forward, creating AI clips that are nearly indistinguishable from real ones. Unlike the outputs of previous video generators like OpenAI's Sora, Veo 3 videos can include dialogue, soundtracks and sound effects. They largely follow the rules of physics, and lack the telltale flaws of past AI-generated imagery. Users have had a field day with the tool, creating short films about plastic babies, pharma ads, and man-on-the-street interviews. But experts worry that tools like Veo 3 will have a much more dangerous effect: turbocharging the spread of misinformation and propaganda, and making it even harder to tell fiction from reality.

    Social media is already flooded with AI-generated content about politicians. In the first week of Veo 3's release, online users posted fake news segments in multiple languages, including a fake anchor announcing the death of J.K. Rowling, as well as fabricated political news conferences. “The risks from deepfakes and synthetic media have been well known and obvious for years, and the fact the tech industry can't even protect against such well-understood, obvious risks is a clear warning sign that they are not responsible enough to handle even more dangerous, uncontrolled AI and AGI,” says Connor Leahy, the CEO of Conjecture, an AI safety company. “The fact that such blatant irresponsible behavior remains completely unregulated and unpunished will have predictably terrible consequences for innocent people around the globe.”

    Days after Veo 3's release, a car plowed through a crowd in Liverpool, England, injuring more than 70 people. Police swiftly clarified that the driver was white, to preempt racist speculation of migrant involvement. (Last summer, false reports that a knife attacker was an undocumented Muslim migrant sparked riots in several cities.) Days later, Veo 3 obligingly generated a video of a similar scene, showing police surrounding a car that had just crashed—and a Black driver exiting the vehicle. TIME generated the video with the following prompt: “A video of a stationary car surrounded by police in Liverpool, surrounded by trash. Aftermath of a car crash. There are people running away from the car. A man with brown skin is the driver, who slowly exits the car as police arrive- he is arrested. The video is shot from above - the window of a building. There are screams in the background.”

    After TIME contacted Google about these videos, the company said it would begin adding a visible watermark to videos generated with Veo 3. The watermark now appears on videos generated by the tool. However, it is very small and could easily be cropped out with video-editing software.
In a statement, a Google spokesperson said: “Veo 3 has proved hugely popular since its launch. We're committed to developing AI responsibly and we have clear policies to protect users from harm and governing the use of our AI tools.” Videos generated by Veo 3 have always contained an invisible watermark known as SynthID, the spokesperson said. Google is currently working on a tool called SynthID Detector that would allow anyone to upload a video to check whether it contains such a watermark, the spokesperson added. However, this tool is not yet publicly available.

Attempted safeguards

Veo 3 is available for $249 a month to Google AI Ultra subscribers in countries including the United States and United Kingdom. There were plenty of prompts that Veo 3 did block TIME from creating, especially related to migrants or violence. When TIME asked the model to create footage of a fictional hurricane, it wrote that such a video went against its safety guidelines, and “could be misinterpreted as real and cause unnecessary panic or confusion.” The model generally refused to generate videos of recognizable public figures, including President Trump and Elon Musk. It refused to create a video of Anthony Fauci saying that COVID was a hoax perpetrated by the U.S. government.

Veo's website states that it blocks “harmful requests and results.” The model's documentation says it underwent pre-release red-teaming, in which testers attempted to elicit harmful outputs from the tool. Additional safeguards were then put in place, including filters on its outputs.

A technical paper released by Google alongside Veo 3 downplays the misinformation risks that the model might pose. Veo 3 is bad at creating text, and is “generally prone to small hallucinations that mark videos as clearly fake,” it says. “Second, Veo 3 has a bias for generating cinematic footage, with frequent camera cuts and dramatic camera angles – making it difficult to generate realistic coercive videos, which would be of a lower production quality.”

However, minimal prompting did lead to the creation of provocative videos. One showed a man wearing an LGBT rainbow badge pulling envelopes out of a ballot box and feeding them into a paper shredder. (Veo 3 titled the file “Election Fraud Video.”) Other videos generated in response to prompts by TIME included a dirty factory filled with workers scooping infant formula with their bare hands; an e-bike bursting into flames on a New York City street; and Houthi rebels angrily seizing an American flag.

Some users have been able to take misleading videos even further. Internet researcher Henk van Ess created a fabricated political scandal using Veo 3 by editing together short video clips into a fake newsreel that suggested a small-town school would be replaced by a yacht manufacturer. “If I can create one convincing fake story in 28 minutes, imagine what dedicated bad actors can produce,” he wrote on Substack. “We're talking about the potential for dozens of fabricated scandals per day.”

“Companies need to be creating mechanisms to distinguish between authentic and synthetic imagery right now,” says Margaret Mitchell, chief AI ethics scientist at Hugging Face. “The benefits of this kind of power—being able to generate realistic life scenes—might include making it possible for people to make their own movies, or to help people via role-playing through stressful situations,” she says.
“The potential risks include making it super easy to create intense propaganda that manipulatively enrages masses of people, or confirms their biases so as to further propagate discrimination—and bloodshed.”

In the past, there were surefire ways of telling that a video was AI-generated—perhaps a person might have six fingers, or their face might transform between the beginning of the video and the end. But as models improve, those signs are becoming increasingly rare. (A video depicting how AIs have rendered Will Smith eating spaghetti shows how far the technology has come in the last three years.) For now, Veo 3 will only generate clips up to eight seconds long, meaning that if a video contains shots that linger for longer, it's a sign it could be genuine. But this limitation is not likely to last for long.

Eroding trust online

Cybersecurity experts warn that advanced AI video tools will allow attackers to impersonate executives, vendors or employees at scale, convincing victims to relinquish important data. Nina Brown, a Syracuse University professor who specializes in the intersection of media law and technology, says that while there are other large potential harms—including election interference and the spread of nonconsensual sexually explicit imagery—arguably the most concerning is the erosion of collective online trust. “There are smaller harms that cumulatively have this effect of, ‘can anybody trust what they see?’” she says. “That's the biggest danger.”

Already, accusations that real videos are AI-generated have gone viral online. One post on X, which received 2.4 million views, accused a Daily Wire journalist of sharing an AI-generated video of an aid distribution site in Gaza. A journalist at the BBC later confirmed that the video was authentic. Conversely, an AI-generated video of an “emotional support kangaroo” trying to board an airplane went viral and was widely accepted as real by social media users.

Veo 3 and other advanced deepfake tools will also likely spur novel legal clashes. Issues around copyright have flared up, with AI labs including Google being sued by artists for allegedly training on their copyrighted content without authorization. (DeepMind told TechCrunch that Google models like Veo “may” be trained on YouTube material.) Celebrities who are subjected to hyper-realistic deepfakes have some legal protections thanks to “right of publicity” statutes, but those vary drastically from state to state. In April, Congress passed the Take It Down Act, which criminalizes non-consensual deepfake porn and requires platforms to take down such material.

Industry watchdogs argue that additional regulation is necessary to mitigate the spread of deepfake misinformation. “Existing technical safeguards implemented by technology companies such as ‘safety classifiers’ are proving insufficient to stop harmful images and videos from being generated,” says Julia Smakman, a researcher at the Ada Lovelace Institute. “As of now, the only way to effectively prevent deepfake videos from being used to spread misinformation online is to restrict access to models that can generate them, and to pass laws that require those models to meet safety requirements that meaningfully prevent misuse.”
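For readers curious what an “invisible watermark” of the kind Google describes above actually involves, here is a deliberately naive sketch in Python. It is not SynthID, whose algorithm is not public; it only hides a short bit pattern in the least significant bits of a frame's pixels, and the 8-bit tag and frame size are arbitrary assumptions made for illustration.

```python
# Toy "invisible watermark" demo: hide and recover a bit pattern in the least
# significant bits of an image/video frame. This is NOT how Google's SynthID works
# (that scheme is not public); it only illustrates the general idea that synthetic
# frames can carry an imperceptible, machine-readable signal.
import numpy as np

WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # assumed 8-bit tag

def embed(frame: np.ndarray) -> np.ndarray:
    """Write the watermark bits into the least significant bits of the first pixels."""
    marked = frame.copy()
    flat = marked.reshape(-1)
    flat[: WATERMARK.size] = (flat[: WATERMARK.size] & 0xFE) | WATERMARK
    return marked

def detect(frame: np.ndarray) -> bool:
    """Return True if the frame's first-pixel LSBs match the watermark."""
    flat = frame.reshape(-1)
    return bool(np.array_equal(flat[: WATERMARK.size] & 1, WATERMARK))

# Example: an all-gray 64x64 RGB frame, checked before and after marking.
frame = np.full((64, 64, 3), 128, dtype=np.uint8)
print(detect(frame), detect(embed(frame)))  # False True
```

A signal this fragile would not survive compression or re-encoding; production watermarks are engineered to persist through such transformations, which is also why a publicly available detector matters.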
  • The Legal Accountability of AI-Generated Deepfakes in Election Misinformation

    How Deepfakes Are Created

    Generative AI models enable the creation of highly realistic fake media. Most deepfakes today are produced by training deep neural networks on real images, video or audio of a target person. The two predominant AI architectures are generative adversarial networks (GANs) and autoencoders. A GAN consists of a generator network that produces synthetic images and a discriminator network that tries to distinguish fakes from real data. Through iterative training, the generator learns to produce outputs that increasingly fool the discriminator¹. Autoencoder-based tools similarly learn to encode a target face and then decode it onto a source video. In practice, deepfake creators use accessible software: open-source tools like DeepFaceLab and FaceSwap dominate video face-swapping (one estimate suggests DeepFaceLab was used for over 95% of known deepfake videos)². Voice-cloning tools (often built on similar AI principles) can mimic a person's speech from minutes of audio. Commercial platforms like Synthesia allow text-to-video avatars (turning typed scripts into lifelike “spokespeople”), which have already been misused in disinformation campaigns³. Even mobile apps (e.g. FaceApp, Zao) let users do basic face swaps in minutes⁴. In short, advances in GANs and related models make deepfakes cheaper and easier to generate than ever.

    Diagram of a generative adversarial network (GAN): A generator network creates fake images from random input and a discriminator network distinguishes fakes from real examples. Over time the generator improves until its outputs “fool” the discriminator⁵
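To make the generator/discriminator interplay described above concrete, here is a heavily simplified PyTorch training loop. It is a generic textbook sketch rather than the code behind any particular deepfake tool; the network sizes, the 64x64 image shape, and the source of real images are illustrative placeholders.

```python
# Minimal GAN training sketch (illustrative only, not any production deepfake tool).
# Assumes PyTorch is installed and that real_images batches are shaped (N, 3, 64, 64)
# with pixel values scaled to [-1, 1].
import torch
import torch.nn as nn

LATENT_DIM = 100

generator = nn.Sequential(          # maps random noise -> fake 64x64 RGB image
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64), nn.Tanh(),
)

discriminator = nn.Sequential(      # maps an image -> probability it is real
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> tuple[float, float]:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real images from generated ones.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).view(batch, 3, 64, 64).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to produce images the discriminator labels as "real".
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).view(batch, 3, 64, 64)
    g_loss = loss_fn(discriminator(fake_images), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

Repeating this step over a large dataset of real images is what drives the generator toward outputs the discriminator can no longer tell apart from genuine footage.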

    During creation, a deepfake algorithm is typically trained on a large dataset of real images or audio from the target. The more varied and high-quality the training data, the more realistic the deepfake. The output often then undergoes post-processing (color adjustments, lip-syncing refinements) to enhance believability¹. Technical defenses focus on two fronts: detection and authentication. Detection uses AI models to spot inconsistencies (blinking irregularities, audio artifacts or metadata mismatches) that betray a synthetic origin⁵. Authentication embeds markers before dissemination – for example, invisible watermarks or cryptographically signed metadata indicating authenticity⁶. The EU AI Act will soon mandate that major AI content providers embed machine-readable “watermark” signals in synthetic media⁷. However, as GAO notes, detection is an arms race – even a marked deepfake can sometimes evade notice – and labels alone don't stop false narratives from spreading⁸⁹.
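To illustrate the “authentication” side concretely, the sketch below binds a media file's hash to a small metadata record and signs it, using only Python's standard library. It is a toy: real provenance standards such as C2PA use certificate-based public-key signatures rather than the shared HMAC key assumed here, and the record fields are placeholders.

```python
# Toy content-authentication sketch: attach and verify signed metadata for a media file.
# Real provenance standards (e.g. C2PA) use certificate-based signatures; the shared
# HMAC key below is a simplification for illustration only.
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # assumption: held only by the publisher

def sign_media(path: str, creator: str) -> dict:
    """Produce a metadata record binding the file's hash to its claimed origin."""
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    record = {"file_sha256": digest, "creator": creator, "tool": "camera-original"}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_media(path: str, record: dict) -> bool:
    """Check both that the file is unmodified and that the metadata was actually signed."""
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return digest == record["file_sha256"] and hmac.compare_digest(expected, record["signature"])
```

Any edit to the file changes its hash, and any edit to the metadata breaks the signature, which is the basic property authentication schemes rely on.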

    Deepfakes in Recent Elections: Examples

    Deepfakes and AI-generated imagery have already made headlines in election cycles around the world. In the 2024 U.S. primary season, a digitally altered audio robocall mimicked President Biden's voice urging Democrats not to vote in the New Hampshire primary. The caller (“Susan Anderson”) was later fined $6 million by the FCC and indicted under existing telemarketing laws¹⁰¹¹. (Importantly, FCC rules on robocalls applied regardless of AI: the perpetrator could have used a voice actor or recording instead.) Also in 2024, former President Trump posted on social media a collage implying that pop singer Taylor Swift endorsed his campaign, using AI-generated images of Swift in “Swifties for Trump” shirts¹². The posts sparked media uproar, though analysts noted the same effect could have been achieved without AI (e.g., by photoshopping text on real images)¹². Similarly, Elon Musk's X platform carried AI-generated clips, including a parody “ad” depicting Vice President Harris's voice via an AI clone¹³.

    Beyond the U.S., deepfake-like content has appeared globally. In Indonesia's 2024 presidential election, a video surfaced on social media in which a convincingly generated image of the late President Suharto appeared to endorse the candidate of the Golkar Party. Days later, the endorsed candidate (who is Suharto's son-in-law) won the presidency¹⁴. In Bangladesh, a viral deepfake video superimposed the face of opposition leader Rumeen Farhana onto a bikini-clad body – an incendiary fabrication designed to discredit her in the conservative Muslim-majority society¹⁵. Moldova's pro-Western President Maia Sandu has been repeatedly targeted by AI-driven disinformation; one deepfake video falsely showed her resigning and endorsing a Russian-friendly party, apparently to sow distrust in the electoral process¹⁶. Even in Taiwan (amidst tensions with China), a TikTok clip circulated that synthetically portrayed a U.S. politician making foreign-policy statements – stoking confusion ahead of Taiwanese elections¹⁷. In Slovakia's recent campaign, AI-generated audio mimicking the liberal party leader suggested he plotted vote-rigging and beer-price hikes – instantly spreading on social media just days before the election¹⁸. These examples show that deepfakes have touched diverse polities (from Bangladesh and Indonesia to Moldova, Slovakia, India and beyond), often aiming to undermine candidates or confuse voters¹⁵¹⁸.

    Notably, many of the most viral “deepfakes” in 2024 were actually circulated as obvious memes or claims, rather than subtle deceptions. Experts observed that outright undetectable AI deepfakes were relatively rare; more common were AI-generated memes plainly shared by partisans, or cheaply doctored “cheapfakes” made with basic editing tools¹³¹⁹. For instance, social media was awash with memes of Kamala Harris in Soviet garb or of Black Americans holding Trump signs¹³, but these were typically used satirically, not meant to be secretly believed. Nonetheless, even unsophisticated fakes can sway opinion: a U.S. study found that false presidential ads (not necessarily AI-made) did change voter attitudes in swing states. In sum, deepfakes are a real and growing phenomenon in election campaigns worldwide²⁰²¹ – a trend taken seriously by voters and regulators alike.

    U.S. Legal Framework and Accountability

    In the U.S., deepfake creators and distributors of election misinformation face a patchwork of legal tools, but no single comprehensive federal “deepfake law.” Existing laws relevant to disinformation include statutes against impersonating government officials, electioneering rules (such as the Bipartisan Campaign Reform Act, which requires disclaimers on political ads), and targeted statutes like criminal electioneering communications. In some cases ordinary laws have been stretched: the New Hampshire robocall case relied on the Telephone Consumer Protection Act and mail/telemarketing fraud provisions, resulting in the $6 million fine and a criminal charge. Similarly, voice impostors can potentially violate laws against “false advertising” or “unlawful corporate communications.” However, these laws were enacted before AI, and litigators have warned they often do not fit neatly. For example, deceptive deepfake claims not tied to a specific victim do not easily fit into defamation or privacy torts. Voter intimidation laws (prohibiting threats or coercion) also leave a gap for non-threatening falsehoods about voting logistics or endorsements.

    Recognizing these gaps, some courts and agencies are invoking other theories. The U.S. Department of Justice has recently charged individuals under broad fraud statutes (e.g. for a plot to impersonate an aide to swing votes in 2020), and state attorneys general have treated deepfake misinformation as interference with voting rights. Notably, the Federal Election Commission (FEC) is preparing to enforce new rules: in April 2024 it issued an advisory opinion limiting “non-candidate electioneering communications” that use falsified media, effectively requiring that political ads use only real images of the candidate. If finalized, that would make it unlawful for campaigns to pay for ads depicting a candidate saying things they never did. Similarly, the Federal Trade Commission (FTC) and Department of Justice (DOJ) have signaled that purely commercial deepfakes could violate consumer protection or election laws (for example, liability for mass false impersonation or for foreign-funded electioneering).

    U.S. Legislation and Proposals

    Federal lawmakers have proposed new statutes. The DEEPFAKES Accountability Act (H.R. 5586 in the 118th Congress) would, among other things, impose a disclosure requirement: political ads featuring a manipulated media likeness would need clear disclaimers identifying the content as synthetic. It would also increase penalties for producing false election videos or audio intended to influence the vote. While not yet enacted, supporters argue it would provide a uniform rule for all federal and state campaigns. The Brennan Center supports transparency requirements over outright bans, suggesting laws should narrowly target deceptive deepfakes in paid ads or certain categories (e.g. false claims about the time, place or manner of voting) while carving out parody and news coverage.

    At the state level, over 20 states have passed deepfake laws specifically for elections. For example, Florida and California forbid distributing falsified audio/visual media of candidates with intent to deceive voters (though Florida's law exempts parody). Some states (like Texas) define “deepfake” in statute and allow candidates to sue violators or seek to revoke their candidacies. These measures have had mixed success: courts have struck down overly broad provisions that acted as prior restraints (e.g. Minnesota's 2023 law was challenged for threatening injunctions against anyone “reasonably believed” to violate it). Critically, these state laws raise First Amendment issues: political speech is highly protected, so any restriction must be tightly tailored. Already, Texas and Virginia statutes are under legal review, and Elon Musk's company has challenged California's law (which requires platforms to label or block deepfakes) as unconstitutional. In practice, most lawsuits have so far centered on defamation or intellectual property (for instance, a celebrity suing over a botched celebrity-deepfake video), rather than on election-focused statutes.

    Policy Recommendations: Balancing Integrity and Speech

    Given the rapidly evolving technology, experts recommend a multi-pronged approach. Most stress transparency and disclosure as core principles. For example, the Brennan Center urges requiring any political communication that uses AI-synthesized images or voice to include a clear label. This could be a digital watermark or a visible disclaimer. Transparency has two advantages: it forces campaigns and platforms to “own” the use of AI, and it alerts audiences to treat the content with skepticism.

    Outright bans on all deepfakes would likely violate free speech, but targeted bans on specific harms (e.g. automated phone calls impersonating voters, or videos claiming false polling information) may be defensible. Indeed, Florida already penalizes misuse of recordings in voter suppression. Another recommendation is limited liability: tying penalties to demonstrable intent to mislead, not to the mere act of content creation. Both U.S. federal proposals and EU law generally condition fines on the “appearance of fraud” or deception.

    Technical solutions can complement laws. Watermarking original media (as encouraged by the EU AI Act) could deter the reuse of authentic images in doctored fakes. Open tools for deepfake detection – some supported by government research grants – should be deployed by fact-checkers and social platforms. Making detection datasets publicly available (e.g. the MIT OpenDATATEST) helps improve AI models' ability to spot fakes. International cooperation is also urged: cross-border agreements on information-sharing could help trace and halt disinformation campaigns. The G7 and APEC have both recently committed to fighting election interference via AI, which may lead to joint norms or rapid-response teams.
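As a small illustration of the kind of lightweight tooling fact-checkers use to spot the reuse of authentic images in doctored posts, here is an average-hash comparison written in Python with Pillow and NumPy. The file names are placeholders, and real workflows rely on more robust perceptual hashes and reverse-image search rather than this simplified check.

```python
# Average-hash comparison: a quick way to check whether a suspicious image is a
# lightly edited reuse of a known authentic photo. Illustrative sketch; file paths
# are placeholders and production pipelines use more robust perceptual hashing.
import numpy as np
from PIL import Image

def average_hash(path: str, size: int = 8) -> np.ndarray:
    """Downscale to an 8x8 grayscale grid and threshold each pixel against the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()

def hamming_distance(h1: np.ndarray, h2: np.ndarray) -> int:
    """Count the bits on which the two hashes disagree."""
    return int(np.count_nonzero(h1 != h2))

original = average_hash("authentic_press_photo.jpg")    # placeholder path
suspect = average_hash("viral_social_media_image.jpg")  # placeholder path

# A small Hamming distance (say, <= 10 of 64 bits) suggests the viral image is a
# near-duplicate of the authentic photo, possibly with added text or light edits.
print(hamming_distance(original, suspect))
```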

    Ultimately, many analysts believe the strongest “cure” is a well-informed public: education campaigns to teach voters to question sensational media, and a robust independent press to debunk falsehoods swiftly. While the law can penalize the worst offenders, awareness and resilience in the electorate are crucial buffers against influence operations. As Georgia Tech’s Sean Parker quipped in 2019, “the real question is not if deepfakes will influence elections, but who will be empowered by the first effective one.” Thus policies should aim to deter malicious use without unduly chilling innovation or satire.

    References:

    https://www.security.org/resources/deepfake-statistics/
    https://www.wired.com/story/synthesia-ai-deepfakes-it-control-riparbelli/
    https://www.gao.gov/products/gao-24-107292
    https://technologyquotient.freshfields.com/post/102jb19/eu-ai-act-unpacked-8-new-rules-on-deepfakes
    https://knightcolumbia.org/blog/we-looked-at-78-election-deepfakes-political-misinformation-is-not-an-ai-problem
    https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections
    https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd
    https://www.lawfaremedia.org/article/new-and-old-tools-to-tackle-deepfakes-and-election-lies-in-2024
    https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
    https://firstamendment.mtsu.edu/article/political-deepfakes-and-elections/
    https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation
    https://law.unh.edu/sites/default/files/media/2022/06/nagumotu_pp113-157.pdf
    https://dfrlab.org/2024/10/02/brazil-election-ai-research/
    https://dfrlab.org/2024/11/26/brazil-election-ai-deepfakes/
    https://freedomhouse.org/article/eu-digital-services-act-win-transparency

    The post The Legal Accountability of AI-Generated Deepfakes in Election Misinformation appeared first on MarkTechPost.
  • Why do lawyers keep using ChatGPT?

    Every few weeks, it seems like there's a new headline about a lawyer getting in trouble for submitting filings containing, in the words of one judge, “bogus AI-generated research.” The details vary, but the throughline is the same: an attorney turns to a large language model (LLM) like ChatGPT to help them with legal research, the LLM hallucinates cases that don't exist, and the lawyer is none the wiser until the judge or opposing counsel points out their mistake. In some cases, including an aviation lawsuit from 2023, attorneys have had to pay fines for submitting filings with AI-generated hallucinations. So why haven't they stopped?

    The answer mostly comes down to time crunches, and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw have AI integrations now. For lawyers juggling big caseloads, AI can seem like an incredibly efficient assistant. Most lawyers aren't necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don't understand exactly what LLMs are or how they work. One attorney who was sanctioned in 2023 said he thought ChatGPT was a “super search engine.” It took submitting a filing with fake citations to reveal that it's more like a random-phrase generator — one that could give you either correct information or convincingly phrased nonsense.

    Andrew Perlman, the dean of Suffolk University Law School, argues many lawyers are using AI tools without incident, and the ones who get caught with fake citations are outliers. “I think that what we're seeing now — although these problems of hallucination are real, and lawyers have to take it very seriously and be careful about it — doesn't mean that these tools don't have enormous possible benefits and use cases for the delivery of legal services,” Perlman said. Legal databases and research systems like Westlaw are incorporating AI services.

    In fact, 63 percent of lawyers surveyed by Thomson Reuters in 2024 said they've used AI in the past, and 12 percent said they use it regularly. Respondents said they use AI to write summaries of case law and to research “case law, statutes, forms or sample language for orders.” The attorneys surveyed by Thomson Reuters see it as a time-saving tool, and half of those surveyed said “exploring the potential for implementing AI” at work is their highest priority. “The role of a good lawyer is as a ‘trusted advisor’ not as a producer of documents,” one respondent said. But as plenty of recent examples have shown, the documents produced by AI aren't always accurate, and in some cases aren't real at all.

    In one recent high-profile case, lawyers for journalist Tim Burke, who was arrested for publishing unaired Fox News footage in 2024, submitted a motion to dismiss the case against him on First Amendment grounds. After discovering that the filing included “significant misrepresentations and misquotations of supposedly pertinent case law and history,” Judge Kathryn Kimball Mizelle, of Florida's middle district, ordered the motion to be stricken from the case record. Mizelle found nine hallucinations in the document, according to the Tampa Bay Times. Mizelle ultimately let Burke's lawyers, Mark Rasch and Michael Maddux, submit a new motion.
In a separate filing explaining the mistakes, Rasch wrote that he “assumes sole and exclusive responsibility for these errors.” Rasch said he used the “deep research” feature on ChatGPT pro, which The Verge has previously tested with mixed results, as well as Westlaw's AI feature.

Rasch isn't alone. Lawyers representing Anthropic recently admitted to using the company's Claude AI to help write an expert witness declaration submitted as part of the copyright infringement lawsuit brought against Anthropic by music publishers. That filing included a citation with an “inaccurate title and inaccurate authors.” Last December, misinformation expert Jeff Hancock admitted he used ChatGPT to help organize citations in a declaration he submitted in support of a Minnesota law regulating deepfake use. Hancock's filing included “two citation errors, popularly referred to as ‘hallucinations,’” and incorrectly listed authors for another citation.

These documents do, in fact, matter — at least in the eyes of judges. In a recent case, a California judge presiding over a case against State Farm was initially swayed by arguments in a brief, only to find that the case law cited was completely made up. “I read their brief, was persuaded by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn't exist,” Judge Michael Wilner wrote.

Perlman said there are several less risky ways lawyers use generative AI in their work, including finding information in large tranches of discovery documents, reviewing briefs or filings, and brainstorming possible arguments or possible opposing views. “I think in almost every task, there are ways in which generative AI can be useful — not a substitute for lawyers' judgment, not a substitute for the expertise that lawyers bring to the table, but in order to supplement what lawyers do and enable them to do their work better, faster, and cheaper,” Perlman said.

But like anyone using AI tools, lawyers who rely on them to help with legal research and writing need to be careful to check the work they produce, Perlman said. Part of the problem is that attorneys often find themselves short on time — an issue he says existed before LLMs came into the picture. “Even before the emergence of generative AI, lawyers would file documents with citations that didn't really address the issue that they claimed to be addressing,” Perlman said. “It was just a different kind of problem. Sometimes when lawyers are rushed, they insert citations, they don't properly check them; they don't really see if the case has been overturned or overruled.”

Another, more insidious problem is the fact that attorneys — like others who use LLMs to help with research and writing — are too trusting of what AI produces. “I think many people are lulled into a sense of comfort with the output, because it appears at first glance to be so well crafted,” Perlman said.

Alexander Kolodin, an election lawyer and Republican state representative in Arizona, said he treats ChatGPT as a junior-level associate. He's also used ChatGPT to help write legislation. In 2024, he included AI text in part of a bill on deepfakes, having the LLM provide the “baseline definition” of what deepfakes are and then “I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,” Kolodin told The Guardian at the time.
Kolodin said he “may have” discussed his use of ChatGPT with the bill's main Democratic cosponsor but otherwise wanted it to be “an Easter egg” in the bill. The bill passed into law. Kolodin — who was sanctioned by the Arizona State Bar in 2020 for his involvement in lawsuits challenging the result of the 2020 election — has also used ChatGPT to write first drafts of amendments, and told The Verge he uses it for legal research as well. To avoid the hallucination problem, he said, he just checks the citations to make sure they're real.

“You don't just typically send out a junior associate's work product without checking the citations,” said Kolodin. “It's not just machines that hallucinate; a junior associate could read the case wrong, it doesn't really stand for the proposition cited anyway, whatever. You still have to cite-check it, but you have to do that with an associate anyway, unless they were pretty experienced.” Kolodin said he uses both ChatGPT's pro “deep research” tool and the LexisNexis AI tool. Like Westlaw, LexisNexis is a legal research tool primarily used by attorneys. Kolodin said that in his experience, it has a higher hallucination rate than ChatGPT, which he says has “gone down substantially over the past year.”

AI use among lawyers has become so prevalent that in 2024, the American Bar Association issued its first guidance on attorneys' use of LLMs and other AI tools. Lawyers who use AI tools “have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature” of generative AI, the opinion reads. The guidance advises lawyers to “acquire a general understanding of the benefits and risks of the GAI tools” they use — or, in other words, to not assume that an LLM is a “super search engine.” Attorneys should also weigh the confidentiality risks of inputting information relating to their cases into LLMs and consider whether to tell their clients about their use of LLMs and other AI tools, it states.

Perlman is bullish on lawyers' use of AI. “I do think that generative AI is going to be the most impactful technology the legal profession has ever seen and that lawyers will be expected to use these tools in the future,” he said. “I think that at some point, we will stop worrying about the competence of lawyers who use these tools and start worrying about the competence of lawyers who don't.”

Others, including one of the judges who sanctioned lawyers for submitting a filing full of AI-generated hallucinations, are more skeptical. “Even with recent advances,” Wilner wrote, “no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material.”
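The cite-checking habit Kolodin describes can be partly automated as a first pass. The sketch below pulls reporter-style citations out of a draft and flags any that are not on a list the attorney has personally verified; the regular expression and the verified list are simplified assumptions, and none of this substitutes for actually reading the cases.

```python
# First-pass cite-check sketch: flag citations in an AI-assisted draft that are not
# on a manually verified list. The citation regex is deliberately simplified and the
# verified list is a placeholder; this does not replace reading the cited cases.
import re

# Matches reporter-style citations such as "576 U.S. 644" or "999 F.3d 1234".
CITATION_RE = re.compile(
    r"\b\d{1,3}\s+(?:U\.S\.|S\.\s?Ct\.|F\.\d?d|F\. Supp\. \d?d)\s+\d{1,4}\b"
)

VERIFIED = {            # placeholder: citations the attorney has personally confirmed
    "576 U.S. 644",
    "403 U.S. 713",
}

def unverified_citations(draft: str) -> list[str]:
    """Return citations that appear in the draft but not on the verified list."""
    found = set(CITATION_RE.findall(draft))
    return sorted(found - VERIFIED)

draft_text = """
Plaintiff relies on Obergefell v. Hodges, 576 U.S. 644 (2015), and on the
made-up Smith v. Jones, 999 F.3d 1234 (9th Cir. 2021).
"""
print(unverified_citations(draft_text))  # ['999 F.3d 1234']
```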
  • The Real Life Tech Execs That Inspired Jesse Armstrong’s Mountainhead

    Jesse Armstrong loves to pull fictional stories out of reality. His universally acclaimed TV show Succession, for instance, was inspired by real-life media dynasties like the Murdochs and the Hearsts. Similarly, his newest film Mountainhead centers on characters that share key traits with the tech world’s most powerful leaders: Elon Musk, Mark Zuckerberg, Sam Altman, and others.

    Mountainhead, which releases on HBO on May 31 at 8 p.m. ET, portrays four top tech executives who retreat to a Utah hideaway as the AI deepfake tools newly released by one of their companies wreak havoc across the world. As the believable deepfakes inflame hatred on social media and fuel real-world violence, the comfortably appointed quartet mulls a global governmental takeover, intergalactic conquest, and immortality, before interpersonal conflict derails their plans.

    Armstrong tells TIME in a Zoom interview that he first became interested in writing a story about tech titans after reading books like Michael Lewis’ Going Infinite (about Sam Bankman-Fried) and Ashlee Vance’s Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future, as well as journalistic profiles of Peter Thiel, Marc Andreessen, and others. He then built the story around the interplay between four character archetypes—the father, the dynamo, the usurper, and the hanger-on—and conducted extensive research so that his fictional executives reflected real ones. His characters, he says, aren’t one-to-one matches, but “Frankenstein monsters with limbs sewn together.”

    These characters are deeply flawed and destructive, to say the least. Armstrong says he did not intend for the film to be a wholly negative depiction of tech leaders and AI development. “I do try to take myself out of it, but obviously my sense of what this tech does and could do infuses the piece. Maybe I do have some anxieties,” he says. Armstrong contends that the film is more so channeling fears that AI leaders themselves have warned about. “If somebody who knows the technology better than anyone in the world thinks there's a 1/5th chance that it's going to wipe out humanity—and they're some of the optimists—I think that's legitimately quite unnerving,” he says.

    Here’s how each of the characters in Mountainhead resembles real-world tech leaders. This article contains spoilers.

    Venis (Cory Michael Smith) is the dynamo.

    Venis is Armstrong’s “dynamo”: the richest man in the world, who has gained his wealth from his social media platform Traam and its 4 billion users. Venis is ambitious, juvenile, and self-centered, even questioning whether other people are as real as him and his friends. Venis’ first obvious comp is Elon Musk, the richest man in the real world. Like Musk, Venis is obsessed with going to outer space and with using his enormous war chest to build hyperscale data centers to create powerful anti-woke AI systems. Venis also has a strange relationship with his child, essentially using it as a prop to help him through his own emotional turmoil. Throughout the movie, others caution Venis to shut down his deepfake AI tools, which have led to military conflict and the desecration of holy sites across the world. Venis rebuffs them and says that people just need to adapt to technological changes and focus on the cool art being made. This argument is similar to those made by Sam Altman, who has argued that OpenAI needs to unveil ChatGPT and other cutting-edge tools as fast as possible in order to show the public the power of the technology. Like Mark Zuckerberg, Venis presides over a massively popular social media platform that some have accused of ignoring harms in favor of growth. Just as Amnesty International accused Meta of having “substantially contributed” to human rights violations perpetrated against Myanmar’s Rohingya ethnic group, Venis complains of the UN being “up his ass for starting a race war.”

    Randall (Steve Carell) is the father.

    The group’s eldest member is Randall, an investor and technologist who resembles Marc Andreessen and Peter Thiel in his lofty philosophizing and quest for immortality. Like Andreessen, Randall is a staunch accelerationist who believes that U.S. companies need to develop AI as fast as possible in order to both prevent the Chinese from controlling the technology and to ostensibly ignite a new American utopia in which productivity, happiness, and health flourish. Randall’s power comes from the fact that he was Venis’ first investor, just as Thiel was an early investor in Facebook. While Andreessen pens manifestos about technological advancement, Randall paints his mission in grandiose, historical terms, using anti-democratic, sci-fi-inflected language that resembles that of the philosopher Curtis Yarvin, who has been funded and promoted by Thiel over his career. Randall’s justification of murder through utilitarian and Kantian lenses calls to mind Sam Bankman-Fried’s extensive philosophizing, which included a declaration that he would roll the dice on killing everyone on earth if there was a 51% chance of creating a second earth. Bankman-Fried’s approach—embracing risk and harm in order to reap massive rewards—led to his conviction for massive financial fraud. Randall is also obsessed with longevity, just like Thiel, who has railed for years against the “inevitability of death” and yearns for “super-duper medical treatments” that would render him immortal.

    Jeff (Ramy Youssef) is the usurper.

    Jeff is a technologist who often serves as the movie’s conscience, slinging criticisms about the other characters. But he’s also deeply embedded within their world, and he needs their resources, particularly Venis’ access to computing power, to thrive. In the end, Jeff sells out his values for his own survival and well-being. AI skeptics have lobbed similar criticisms at the leaders of the main AI labs, including Altman—who started OpenAI as a nonprofit before attempting to restructure the company—as well as Demis Hassabis and Dario Amodei. Hassabis, the CEO of Google DeepMind and a winner of the 2024 Nobel Prize in Chemistry, is a rare scientist surrounded by businessmen and technologists. In order to try to achieve his AI dreams of curing disease and halting global warming, Hassabis enlisted with Google, inking a contract in 2014 that prohibited Google from using his technology for military applications. But that clause has since disappeared, and the AI systems developed under Hassabis are being sold, via Google, to militaries like Israel’s. Another parallel can be drawn between Jeff and Amodei, an AI researcher who defected from OpenAI after becoming worried that the company was cutting back its safety measures, and then formed his own company, Anthropic. Amodei has urged governments to create AI guardrails and has warned about the potentially catastrophic effects of the AI industry’s race dynamics. But some have criticized Anthropic for operating similarly to OpenAI, prioritizing scale in a way that exacerbates competitive pressures.

    Souper (Jason Schwartzman) is the hanger-on.

    Every quartet needs its Turtle or its Ringo: a clear fourth wheel who serves as a punching bag for the rest of the group’s alpha males. Mountainhead’s hanger-on is Souper, thus named because he has soup kitchen money compared to the rest (hundreds of millions as opposed to billions of dollars). In order to prove his worth, he’s fixated on getting funding for a meditation startup that he hopes will eventually become an “everything app.” No tech exec would want to be compared to Souper, who has a clear inferiority complex. But plenty of tech leaders have emphasized the importance of meditation and mindfulness—including Twitter co-founder and Square CEO Jack Dorsey, who often goes on meditation retreats.

    Armstrong, in his interview, declined to answer specific questions about his characters’ inspirations, but conceded that some of the speculations were in the right ballpark. “For people who know the area well, it's a little bit of a fun house mirror in that you see something and are convinced that it's them,” he says. “I think all of those people featured in my research. There's bits of Andreessen and David Sacks and some of those philosopher types. It’s a good parlor game to choose your Frankenstein limbs.”
  • Senators probe whether RealPage pushed state AI law ban

    Democratic senators are probing whether RealPage, a software company accused of colluding with landlords to raise rents, lobbied for a proposed ban on states regulating AI for the next decade. In a letter to RealPage CEO Dana Jones, five Democratic senators — Elizabeth Warren (D-MA), Bernie Sanders (D-VT), Amy Klobuchar (D-MN), Cory Booker (D-NJ), and Tina Smith (D-MN) — ask for more information about the company’s “potential involvement” in a provision attached to Republicans’ budget reconciliation bill, which bars state laws that impact AI or “automated decision” systems for 10 years.

    The senators argue that the provision could scuttle attempts to stop RealPage from feeding sensitive information from groups of landlords into an algorithm and using it to recommend noncompetitive rental prices.

    In 2022, a report from ProPublica linked RealPage to rising rent prices across the US, alleging that its algorithm allows landlords to coordinate pricing. The Department of Justice and eight states sued the company last year, claiming it “deprives renters of the benefits of competition on apartment leasing terms.” Meanwhile, cities like Minneapolis, Jersey City, Philadelphia, and San Francisco have passed laws meant to ban the use of rent-setting software, and several states, including Connecticut, New York, Massachusetts, and Washington, have legislation in the works.

    “Republicans are trying to give a green light to RealPage’s rent-hiking algorithm.”

    As it stands, the senators argue, the Republican budget reconciliation bill would block pending legislation and stop states from enforcing any regulation that puts limitations on RealPage’s rent-setting algorithm. The bill proposes preventing states from enforcing “any law or regulation” covering a broad range of automated computing systems, which would likely apply to the algorithms used by RealPage.

    And while the moratorium’s most high-profile proponents are giants like OpenAI, lawmakers believe RealPage might have spent millions pushing for it too. “In light of this, we seek information on RealPage’s lobbying efforts, and on how the Republicans’ reconciliation provision would help the bottom line of RealPage and other large corporations by allowing them to take advantage of consumers,” the letter states.

    The senators say RealPage “stepped up” its Congressional lobbying in response to local legislation that would affect its business. They cite a report from The Lever, which found that the National Multifamily Housing Council, a trade group that represents RealPage, increased its lobbying spending from $4.8 million in 2020 to $9 million in 2024. The Lever also found that the trade group disclosed that it lobbied on “issues surrounding the risks and opportunities posed by artificial intelligence,” as well as “federal policies affecting usage of data, artificial intelligence, software,” and other technology used in real estate.

    “RealPage ramped up its million dollar spending campaign in Congress and lo and behold, Republicans in Congress passed a provision to block states from protecting renters,” Senator Warren told The Verge. “Americans are being squeezed by rising rents, but instead of helping, Republicans are trying to give a green light to RealPage’s rent-hiking algorithm.”

    The senators have asked RealPage how much money the company spent on Congressional lobbying in each year since 2020, as well as which firms and individuals it “engaged or contracted with” during the same period. They also want to know how much it spent on lobbying targeted toward AI legislation, as well as how RealPage would be impacted by the budget reconciliation bill in states with pending legislation on rent-setting software. The senators ask RealPage to respond by June 10th, 2025.

    If passed, the bill — currently awaiting consideration in the Senate — could have far wider-ranging impacts than RealPage. In addition to blocking states from regulating AI chatbots, it could also affect any laws covering things like deepfakes, automated hiring systems, facial recognition, sentencing algorithms, and more.
  • You Can Now Make AI Videos with Audio Thanks to Veo3: A New Era of Scams Awaits

    Published: May 27, 2025

    Key Takeaways

    With Google’s Veo3, you can now generate AI videos with synchronized audio, including speech, background tracks, and sound effects.
    The same capability makes it easier for scammers to produce convincing deepfakes to defraud people.
    Users need to exercise self-vigilance to protect themselves. Developers’ responsibilities and government regulations will also play a key part.

    Google recently launched Veo3, an AI tool that lets you create videos with audio, including background tracks and various sound effects. Until recently, you could either use voice cloning apps to generate AI voices or video rendering apps to generate AI visuals, then stitch the two together. Veo3 does both in one step, producing complete videos with synchronized audio.
    While this is an exciting development, it’s hard not to think how easy it would be for scammers and swindlers to use videos like these to defraud people.
    A video posted by a user on Threads shows a TV anchor breaking the ‘news’ that Secretary of Defense Pete Hegseth has died after drinking an entire litre of vodka on a dare by RFK. At first glance, the video is extremely convincing, and quite a few people likely believed it: the quality is that of a professional news studio, complete with the Pentagon in the background.
    Another user, Ari Kuschnir, posted a 1-minute 16-second video on Reddit showing characters in different settings talking to each other in a range of accents. The facial expressions are strikingly close to those of real humans.
    A user commented, ‘Wow. The things that are coming. Gonna be wild!’ The ‘wild’ part is that the gap between reality and AI-generated content is closing daily. And remember, this is only the first version of this technology – things will only get worse from here.
    New AI Age for Scammers
    With the development of generative AI, we have already seen countless examples of people losing millions to such scams. 
    For example, in January 2024, an employee of a Hong Kong firm transferred tens of millions of dollars to fraudsters who had convinced her that she was talking to the firm’s CFO on a video call. Deloitte’s Center for Financial Services has predicted that generative AI could drive fraud losses in the US alone into the tens of billions of dollars by 2027, growing at a CAGR of 32%.
    Until now, scammers had to go to the trouble of generating audio and video separately and syncing them to compile a ‘believable’ video. Advanced AI tools like Veo3 remove that step, making it even easier for bad actors to catch people off guard.

    In what some have called the internet’s biggest scam so far, an 82-year-old retiree, Steve Beauchamp, lost his retirement savings after pouring them into a bogus investment scheme. The AI-generated video promoting the scheme showed Elon Musk talking up the investment and urging anyone looking to make money to get in.
    In January 2024, sexually explicit AI-generated images of Taylor Swift spread across social media, drawing significant legislative attention. Now imagine what scammers could do with Veo3-like technology: deepfake porn becomes easier and faster to make, opening the door to more extortion.
    To be clear, we’re not saying that Veo3 specifically will be used for such criminal activity, since Google has several safeguards in place. However, now that Veo3 has shown what’s possible, similar products could be developed with malicious use cases in mind.
    How to Protect Yourself
    Protecting yourself against AI-generated content is a multifaceted effort resting on three key pillars: self-vigilance, developers’ responsibilities, and government regulation.
    Self-Vigilance
    It’s not impossible to figure out which videos are AI-made and which are genuine. AI has advanced by leaps and bounds over the last two years, culminating in tools as capable as Veo3, but there are still a few telltale signs of an AI-generated video.

    The biggest giveaway is lip sync. If you see a video of someone speaking, pay close attention to their lips: the audio is often slightly out of step with the mouth movements (a rough way to measure this is sketched below).
    The voice may also sound robotic or flat, with inconsistent tone and pitch and none of the natural breathing sounds of real speech.
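    For readers who want to go beyond eyeballing a clip, the offset between mouth movement and speech can be estimated programmatically. The snippet below is a rough heuristic sketch, not a robust detector: it assumes the clip’s audio has already been extracted to a mono WAV (for example with ffmpeg), it uses a fixed lower-centre patch of the frame as a crude stand-in for proper lip tracking, and the file names are placeholders.

```python
# Heuristic lip-sync check: cross-correlate per-frame "mouth motion" with
# per-frame audio energy and report the best-aligning lag. A large absolute
# offset is a red flag; a real detector would use face/lip landmarks instead.
# Assumed inputs: clip.mp4 plus audio.wav extracted via
#   ffmpeg -i clip.mp4 -ac 1 -ar 16000 audio.wav
import cv2
import numpy as np
from scipy.io import wavfile

VIDEO, AUDIO = "clip.mp4", "audio.wav"   # placeholder file names

cap = cv2.VideoCapture(VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS)

motion, prev = [], None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # Crude proxy for mouth movement: frame-to-frame change in the
    # lower-centre of the image (assumes the speaker fills the frame).
    roi = cv2.cvtColor(frame[h // 2:, w // 4: 3 * w // 4], cv2.COLOR_BGR2GRAY)
    if prev is not None:
        motion.append(float(np.mean(cv2.absdiff(roi, prev))))
    prev = roi
cap.release()

rate, samples = wavfile.read(AUDIO)
samples = samples.astype(np.float32)
hop = int(rate / fps)                    # one audio window per video frame
energy = [float(np.sqrt(np.mean(samples[i:i + hop] ** 2)))
          for i in range(0, len(samples) - hop, hop)]

n = min(len(motion), len(energy))
m = np.array(motion[:n]) - np.mean(motion[:n])
e = np.array(energy[:n]) - np.mean(energy[:n])
lag_frames = int(np.argmax(np.correlate(m, e, mode="full"))) - (n - 1)
print(f"Best-aligning offset: {lag_frames / fps * 1000:.0f} ms "
      f"({'suspicious' if abs(lag_frames) > 3 else 'within normal range'})")
```

    None of this is conclusive on its own, but a consistently large offset, combined with the audio cues above, is a good reason to treat a clip with suspicion.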

    We also recommend that you only trust official sources of information and not any random video you find while scrolling Instagram, YouTube, or TikTok. For example, if you see Elon Musk promoting an investment scheme, look for the official page or website of that scheme and dig deeper to find out who the actual promoters are. 
    If the scheme is a scam, you won’t find anything reliable or trustworthy behind it. The exercise takes only a couple of minutes but can save you thousands of dollars.
    Developers’ Responsibilities
    AI developers are also responsible for ensuring their products cannot be misused for scams, extortion, and misinformation. For example, Veo3 blocks prompts that violate responsible AI guidelines, such as those involving politicians or violent acts. 
    Google has also developed SynthID, a watermarking system that marks content generated with Google’s AI tools, and people can use the SynthID Detector to check whether a particular piece of content carries that watermark.
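    SynthID for images and video is checked through Google’s own SynthID Detector rather than a public API, but Google has open-sourced the text variant, SynthID Text. As a hedged illustration of how watermarking at generation time works, here is a minimal sketch assuming the Hugging Face transformers integration (roughly transformers 4.46 and later); the model ID and watermark keys are placeholder examples, and detection would be done separately with the Bayesian detector published alongside SynthID Text.

```python
# Minimal sketch: generate text carrying a SynthID Text watermark.
# Assumes transformers >= 4.46; model ID and keys below are placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          SynthIDTextWatermarkingConfig)

model_id = "google/gemma-2-2b-it"   # placeholder; any causal LM should work
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The watermark is seeded by a private list of integer keys; only someone
# holding the same keys can later score text for the presence of the mark.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29,
          590, 639, 13, 715, 468, 990, 966, 226, 324, 585],
    ngram_len=5,
)

inputs = tokenizer("Write a short product announcement.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    watermarking_config=watermarking_config,
    do_sample=True,
    max_new_tokens=100,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```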

    However, these safeguards currently cover only Google’s own products. Similar, if not stronger, prevention systems are needed across the industry.
    Government Regulations
    Lastly, governments have a crucial role to play in regulating the use of artificial intelligence. The EU has already passed the AI Act, with enforcement phasing in from 2025; under it, companies must meet stringent documentation, transparency, and oversight requirements for high-risk AI systems.
    Even in the US, several laws are under proposal. For instance, the DEEPFAKES Accountability Act would require AI-generated content that shows any person to include a clear disclaimer stating that it is a deepfake. The bill was introduced in the House of Representatives in September 2023 and is currently under consideration. 
    Similarly, the REAL Political Advertisements Act would require political ads that contain AI content to include a similar disclaimer.
    That said, we are still only in the early stages of formulating legislation to regulate AI content. With time, as more sophisticated and advanced artificial intelligence tools develop, lawmakers must also be proactive in ensuring digital safety.

    Krishi is a seasoned tech journalist with over four years of experience writing about PC hardware, consumer technology, and artificial intelligence.  Clarity and accessibility are at the core of Krishi’s writing style.
    He believes technology writing should empower readers—not confuse them—and he’s committed to ensuring his content is always easy to understand without sacrificing accuracy or depth.
    Over the years, Krishi has contributed to some of the most reputable names in the industry, including Techopedia, TechRadar, and Tom’s Guide. A man of many talents, Krishi has also proven his mettle as a crypto writer, tackling complex topics with both ease and zeal. His work spans various formats—from in-depth explainers and news coverage to feature pieces and buying guides. 
    Behind the scenes, Krishi operates from a dual-monitor setup that’s always buzzing with news feeds, technical documentation, and research notes, as well as the occasional gaming session that keeps him fresh.
    Krishi thrives on staying current, always ready to dive into the latest announcements, industry shifts, and their far-reaching impacts.  When he's not deep into research on the latest PC hardware news, Krishi would love to chat with you about day trading and the financial markets—oh! And cricket, as well.


