• What a disgrace! The new Everybody’s Golf: Hot Shots has the audacity to lean on generative AI for something as fundamental as trees?! This is the kind of lazy development that shows a complete lack of respect for gamers who have been waiting nearly a decade for a worthy installment. Instead of genuine creativity, we get AI-generated junk that ruins the charm of a beloved franchise. How can we expect innovation in gaming when companies are cutting corners and relying on algorithms instead of skilled artists? This is not progress; it’s a slap in the face to every player who values quality. Stand up, gamers! We deserve better!

    #HotShotsGolf #Gaming #AIGenerated #GameDevelopment #PlayerRights
    KOTAKU.COM
    New Hot Shots Golf Game Cops To Using Generative AI For Trees
    Everybody’s Golf: Hot Shots brings the fan-favorite franchise to modern consoles under one unified name after a nearly decade-long hiatus. Unfortunately, its simple three-button shot mechanics will arrive alongside some AI-generated junk.
  • In a world where we’re all desperately trying to make our digital creations look as lifelike as a potato, we now have the privilege of diving headfirst into the revolutionary topic of "Separate shaders in AI 3D generated models." Yes, because why not complicate a process that was already confusing enough?

    Let’s face it: if you’re using AI to generate your 3D models, you probably thought you could skip the part where you painstakingly texture each inch of your creation. But alas! Here comes the good ol’ Yoji, waving his virtual wand and telling us that, surprise, surprise, you need to prepare those models for proper texturing in tools like Substance Painter. Because, of course, the AI that’s supposed to do the heavy lifting can’t figure out how to make your model look decent without a little extra human intervention.
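
    What that preparation looks like in practice varies by pipeline, but here is a minimal sketch of the general idea (not Yoji's actual workflow), using Blender's Python API: split the generated mesh by material so each shader becomes its own texture set once the model lands in Substance Painter. The object name is a hypothetical placeholder.

    ```python
    # Minimal sketch (not Yoji's actual script): split an AI-generated mesh
    # into one object per material, so each shader becomes its own texture
    # set when the model is brought into Substance Painter.
    # "GeneratedModel" is a hypothetical object name.
    import bpy

    obj = bpy.data.objects["GeneratedModel"]
    obj.select_set(True)
    bpy.context.view_layer.objects.active = obj

    # Separate the mesh by material slot.
    bpy.ops.object.mode_set(mode="EDIT")
    bpy.ops.mesh.select_all(action="SELECT")
    bpy.ops.mesh.separate(type="MATERIAL")
    bpy.ops.object.mode_set(mode="OBJECT")

    # Each resulting object now carries a single material, which Substance
    # Painter treats as a separate texture set on import.
    for o in bpy.context.selected_objects:
        print(o.name, [m.name for m in o.data.materials if m])
    ```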

    But don’t worry! Yoji has got your back with his meticulous “how-to” on separating shaders. Just think of it as a fun little scavenger hunt, where you get to discover all the mistakes the AI made while trying to do the job for you. Who knew that a model could look so… special? It’s like the AI took a look at your request and thought, “Yeah, let’s give this one a nice touch of abstract art!” Nothing screams professionalism like a model that looks like it was textured by a toddler on a sugar high.

    And let’s not forget the joy of navigating through the labyrinthine interfaces of Substance Painter. Ah, yes! The thrill of clicking through endless menus, desperately searching for that elusive shader that will somehow make your model look less like a lumpy marshmallow and more like a refined piece of art. It’s a bit like being in a relationship, really. You start with high hopes and a glossy exterior, only to end up questioning all your life choices as you try to figure out how to make it work.

    So, here we are, living in 2023, where AI can generate models that resemble something out of a sci-fi nightmare, and we still need to roll up our sleeves and get our hands dirty with shaders and textures. Who knew that the future would come with so many manual adjustments? Isn’t technology just delightful?

    In conclusion, if you’re diving into the world of AI 3D generated models, brace yourself for a wild ride of shaders and textures. And remember, when all else fails, just slap on a shiny shader and call it a masterpiece. After all, art is subjective, right?

    #3DModels #AIGenerated #SubstancePainter #Shaders #DigitalArt
    Separate shaders in AI 3D generated models
    Yoji shows how to prepare generated models for proper texturing in tools like Substance Painter. Source
  • The Legal Accountability of AI-Generated Deepfakes in Election Misinformation

    How Deepfakes Are Created

    Generative AI models enable the creation of highly realistic fake media. Most deepfakes today are produced by training deep neural networks on real images, video or audio of a target person. The two predominant AI architectures are generative adversarial networks (GANs) and autoencoders. A GAN consists of a generator network that produces synthetic images and a discriminator network that tries to distinguish fakes from real data. Through iterative training, the generator learns to produce outputs that increasingly fool the discriminator¹. Autoencoder-based tools similarly learn to encode a target face and then decode it onto a source video. In practice, deepfake creators use accessible software: open-source tools like DeepFaceLab and FaceSwap dominate video face-swapping (one estimate suggests DeepFaceLab was used for over 95% of known deepfake videos)². Voice-cloning tools (often built on similar AI principles) can mimic a person’s speech from minutes of audio. Commercial platforms like Synthesia allow text-to-video avatars (turning typed scripts into lifelike “spokespeople”), which have already been misused in disinformation campaigns³. Even mobile apps (e.g. FaceApp, Zao) let users do basic face swaps in minutes⁴. In short, advances in GANs and related models make deepfakes cheaper and easier to generate than ever.
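
    For the curious, the adversarial loop fits in a few lines. Below is a toy PyTorch sketch on synthetic data, not a deepfake pipeline; it only shows how the generator and discriminator push against each other.

    ```python
    # Toy PyTorch sketch of the adversarial loop: a generator learns to fool
    # a discriminator on synthetic 2-D data. Real deepfake models are vastly
    # larger; this only illustrates the training objective.
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # noise -> fake sample
    D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> real/fake logit
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(1000):
        real = torch.randn(32, 2) * 0.5 + 2.0  # stand-in "real" data cluster
        fake = G(torch.randn(32, 16))

        # Discriminator step: label real samples 1, generated samples 0.
        d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: push the discriminator to call fakes real.
        g_loss = bce(D(fake), torch.ones(32, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    ```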

    Diagram of a generative adversarial network (GAN): A generator network creates fake images from random input and a discriminator network distinguishes fakes from real examples. Over time the generator improves until its outputs “fool” the discriminator⁵.

    During creation, a deepfake algorithm is typically trained on a large dataset of real images or audio from the target. The more varied and high-quality the training data, the more realistic the deepfake. The output often then undergoes post-processing (color adjustments, lip-syncing refinements) to enhance believability¹. Technical defenses focus on two fronts: detection and authentication. Detection uses AI models to spot inconsistencies (blinking irregularities, audio artifacts or metadata mismatches) that betray a synthetic origin⁵. Authentication embeds markers before dissemination – for example, invisible watermarks or cryptographically signed metadata indicating authenticity⁶. The EU AI Act will soon mandate that major AI content providers embed machine-readable “watermark” signals in synthetic media⁷. However, as GAO notes, detection is an arms race – even a marked deepfake can sometimes evade notice – and labels alone don’t stop false narratives from spreading⁸⁹.
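
    As a concrete, deliberately naive illustration of the authentication side, the sketch below hides and then verifies a least-significant-bit watermark with NumPy and Pillow. Production provenance schemes (signed metadata, the machine-readable marks the EU AI Act envisions) are far more robust than this.

    ```python
    # Deliberately naive illustration of the "authentication" idea: embed an
    # invisible watermark in an image's least-significant bits, then verify
    # it later. Real provenance systems are far more robust than this.
    import numpy as np
    from PIL import Image

    MARK = np.unpackbits(np.frombuffer(b"SYNTHETIC", dtype=np.uint8))

    def embed(path_in: str, path_out: str) -> None:
        px = np.array(Image.open(path_in).convert("RGB"))
        flat = px.reshape(-1)
        flat[:MARK.size] = (flat[:MARK.size] & 0xFE) | MARK  # overwrite LSBs
        Image.fromarray(flat.reshape(px.shape)).save(path_out, format="PNG")  # lossless

    def verify(path: str) -> bool:
        flat = np.array(Image.open(path).convert("RGB")).reshape(-1)
        return bool(np.array_equal(flat[:MARK.size] & 1, MARK))

    # embed("frame.png", "frame_marked.png"); verify("frame_marked.png") -> True
    ```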

    Deepfakes in Recent Elections: Examples

    Deepfakes and AI-generated imagery have already made headlines in election cycles around the world. In the 2024 U.S. primary season, a digitally altered audio robocall mimicked President Biden’s voice urging Democrats not to vote in the New Hampshire primary. The caller was later fined $6 million by the FCC and indicted under existing telemarketing laws¹⁰¹¹. (Importantly, FCC rules on robocalls applied regardless of AI: the perpetrator could have used a voice actor or recording instead.) Also in 2024, former President Trump posted on social media a collage implying that pop singer Taylor Swift endorsed his campaign, using AI-generated images of Swift in “Swifties for Trump” shirts¹². The posts sparked media uproar, though analysts noted the same effect could have been achieved without AI (e.g., by photoshopping text onto real images)¹². Similarly, Elon Musk’s X platform carried AI-generated clips, including a parody “ad” depicting Vice President Harris’s voice via an AI clone¹³.

    Beyond the U.S., deepfake-like content has appeared globally. In Indonesia’s 2024 presidential election, a video surfaced on social media in which a convincingly generated image of the late President Suharto appeared to endorse the candidate of the Golkar Party. Days later, the endorsed candidate (Suharto’s son-in-law) won the presidency¹⁴. In Bangladesh, a viral deepfake video superimposed the face of opposition leader Rumeen Farhana onto a bikini-clad body – an incendiary fabrication designed to discredit her in the conservative Muslim-majority society¹⁵. Moldova’s pro-Western President Maia Sandu has been repeatedly targeted by AI-driven disinformation; one deepfake video falsely showed her resigning and endorsing a Russian-friendly party, apparently to sow distrust in the electoral process¹⁶. Even in Taiwan (amid tensions with China), a TikTok clip circulated that synthetically portrayed a U.S. politician making foreign-policy statements – stoking confusion ahead of Taiwanese elections¹⁷. In Slovakia’s recent campaign, AI-generated audio mimicking the liberal party leader suggested he plotted vote-rigging and beer-price hikes – spreading on social media just days before the election¹⁸. These examples show that deepfakes have touched diverse polities, often aiming to undermine candidates or confuse voters¹⁵¹⁸.

    Notably, many of the most viral “deepfakes” in 2024 were actually circulated as obvious memes or claims, rather than subtle deceptions. Experts observed that outright undetectable AI deepfakes were relatively rare; more common were AI-generated memes openly shared by partisans, or cheaply doctored “cheapfakes” made with basic editing tools¹³¹⁹. For instance, social media was awash with memes of Kamala Harris in Soviet garb or of Black Americans holding Trump signs¹³, but these were typically used satirically, not meant to be secretly believed. Nonetheless, even unsophisticated fakes can sway opinion: a U.S. study found that false presidential ads (not necessarily AI-made) did change voter attitudes in swing states. In sum, deepfakes are a real and growing phenomenon in election campaigns worldwide²⁰²¹ – a trend taken seriously by voters and regulators alike.

    U.S. Legal Framework and Accountability

    In the U.S., deepfake creators and distributors of election misinformation face a patchwork of legal tools, but no single comprehensive federal “deepfake law.” Existing laws relevant to disinformation include statutes against impersonating government officials, electioneering rules (such as the Bipartisan Campaign Reform Act, which requires disclaimers on political ads), and targeted statutes like criminal electioneering communications. In some cases ordinary laws have been stretched: the New Hampshire robocall was prosecuted under the Telephone Consumer Protection Act and mail/telemarketing fraud provisions, resulting in the $6M fine and a criminal charge. Similarly, voice impostors can potentially violate laws against “false advertising” or “unlawful corporate communications.” However, these laws were enacted before AI, and litigators have warned they often do not fit neatly. For example, deceptive deepfake claims not tied to a specific victim do not easily fit into defamation or privacy torts. Voter intimidation laws (prohibiting threats or coercion) also leave a gap for non-threatening falsehoods about voting logistics or endorsements.

    Recognizing these gaps, some courts and agencies are invoking other theories. The U.S. Department of Justice has recently charged individuals under broad fraud statutes, and state attorneys general have treated deepfake misinformation as interference with voting rights. Notably, the Federal Election Commission (FEC) is preparing to enforce new rules: in April 2024 it issued an advisory opinion limiting “non-candidate electioneering communications” that use falsified media, effectively requiring that political ads use only real images of the candidate. If finalized, that would make it unlawful for campaigns to pay for ads depicting a candidate saying things they never did. Similarly, the Federal Trade Commission (FTC) and Department of Justice (DOJ) have signaled that purely commercial deepfakes could violate consumer protection or election laws (for example, liability for mass false impersonation or for foreign-funded electioneering).

    U.S. Legislation and Proposals

    Federal lawmakers have proposed new statutes. The DEEPFAKES Accountability Act (H.R. 5586 in the 118th Congress) would, among other things, impose a disclosure requirement: political ads featuring a manipulated media likeness would need clear disclaimers identifying the content as synthetic. It would also increase penalties for producing false election videos or audio intended to influence the vote. While not yet enacted, supporters argue it would provide a uniform rule for all federal and state campaigns. The Brennan Center supports transparency requirements over outright bans, suggesting laws should narrowly target deceptive deepfakes in paid ads or certain categories (e.g. false claims about the time, place or manner of voting) while carving out parody and news coverage.

    At the state level, over 20 states have passed deepfake laws specifically for elections. For example, Florida and California forbid distributing falsified audio/visual media of candidates with intent to deceive voters (though Florida’s law exempts parody). Some states (like Texas) define “deepfake” in statute and allow candidates to sue violators or have their candidacies revoked. These measures have had mixed success: courts have struck down overly broad provisions that acted as prior restraints (e.g. Minnesota’s 2023 law was challenged for threatening injunctions against anyone “reasonably believed” to violate it). Critically, these state laws raise First Amendment issues: political speech is highly protected, so any restriction must be tightly tailored. Already, Texas and Virginia statutes are under legal review, and Elon Musk’s company has sued to challenge California’s law (which requires platforms to label or block deepfakes) as unconstitutional. In practice, most lawsuits so far have centered on defamation or intellectual property rather than election-focused statutes.

    Policy Recommendations: Balancing Integrity and Speech

    Given the rapidly evolving technology, experts recommend a multi-pronged approach. Most stress transparency and disclosure as core principles. For example, the Brennan Center urges requiring any political communication that uses AI-synthesized images or voice to include a clear label. This could be a digital watermark or a visible disclaimer. Transparency has two advantages: it forces campaigns and platforms to “own” the use of AI, and it alerts audiences to treat the content with skepticism.

    Outright bans on all deepfakes would likely violate free speech, but targeted bans on specific harms (e.g. automated phone calls impersonating voters, or videos claiming false polling information) may be defensible. Indeed, Florida already penalizes misuse of recordings in voter suppression. Another recommendation is limited liability: tying penalties to demonstrable intent to mislead, not to the mere act of content creation. Both U.S. federal proposals and EU law generally condition fines on the “appearance of fraud” or deception.

    Technical solutions can complement laws. Watermarking original media (as encouraged by the EU AI Act) could deter the reuse of authentic images in doctored fakes. Open tools for deepfake detection – some supported by government research grants – should be deployed by fact-checkers and social platforms. Making detection datasets publicly available helps improve AI models that spot fakes. International cooperation is also urged: cross-border agreements on information-sharing could help trace and halt disinformation campaigns. The G7 and APEC have both recently committed to fighting election interference via AI, which may lead to joint norms or rapid-response teams.
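
    On the detection front, many open tools reduce to a binary classifier trained on labeled real and synthetic media. A hedged sketch, assuming a hypothetical folder layout and an off-the-shelf torchvision backbone:

    ```python
    # Hedged sketch of an open detection tool: fine-tune a small CNN as a
    # binary real-vs-fake classifier. The "data/real" and "data/fake" folder
    # layout is hypothetical; a public detection dataset would slot in the
    # same way.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    ds = datasets.ImageFolder("data", transform=tf)  # subfolders: fake/, real/
    loader = DataLoader(ds, batch_size=32, shuffle=True)

    model = models.resnet18(weights="IMAGENET1K_V1")  # start from ImageNet features
    model.fc = nn.Linear(model.fc.in_features, 2)     # two classes: fake, real
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(3):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    ```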

    Ultimately, many analysts believe the strongest “cure” is a well-informed public: education campaigns that teach voters to question sensational media, and a robust independent press that debunks falsehoods swiftly. While the law can penalize the worst offenders, awareness and resilience in the electorate are crucial buffers against influence operations. As one observer quipped in 2019, “the real question is not if deepfakes will influence elections, but who will be empowered by the first effective one.” Thus policies should aim to deter malicious use without unduly chilling innovation or satire.

    References:

    1. https://www.security.org/resources/deepfake-statistics/
    2. https://www.wired.com/story/synthesia-ai-deepfakes-it-control-riparbelli/
    3. https://www.gao.gov/products/gao-24-107292
    4. https://technologyquotient.freshfields.com/post/102jb19/eu-ai-act-unpacked-8-new-rules-on-deepfakes
    5. https://knightcolumbia.org/blog/we-looked-at-78-election-deepfakes-political-misinformation-is-not-an-ai-problem
    6. https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections
    7. https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd
    8. https://www.lawfaremedia.org/article/new-and-old-tools-to-tackle-deepfakes-and-election-lies-in-2024
    9. https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
    10. https://firstamendment.mtsu.edu/article/political-deepfakes-and-elections/
    11. https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation
    12. https://law.unh.edu/sites/default/files/media/2022/06/nagumotu_pp113-157.pdf
    13. https://dfrlab.org/2024/10/02/brazil-election-ai-research/
    14. https://dfrlab.org/2024/11/26/brazil-election-ai-deepfakes/
    15. https://freedomhouse.org/article/eu-digital-services-act-win-transparency

    #legal #accountability #aigenerated #deepfakes #election
    WWW.MARKTECHPOST.COM
    The Legal Accountability of AI-Generated Deepfakes in Election Misinformation
  • This movie is fully AI-generated and has a fully SAG-AFTRA cast – here’s 3 things you need to know about Echo Hunter

    Echo Hunter is a new fully AI-generated film pairing synthetic production with human union talent.
    #this #movie #fully #aigenerated #has
    WWW.TECHRADAR.COM
    This movie is fully AI-generated and has a fully SAG-AFTRA cast – here’s 3 things you need to know about Echo Hunter
    Echo Hunter is a new fully AI-generated film pairing synthetic production with human union talent.
  • This AI-generated Fortnite video is a bleak glimpse at our future

    Earlier this week, Google unveiled Flow, a tool that can be used to generate AI video with ease. Users can submit text prompts or give Veo, the AI model that Flow uses, the digital equivalent of a mood board in exchange for eight-second clips. From there, users can direct Flow to patch together different clips to form a longer stream of footage, potentially allowing for the creation of entire films. Immediately, people experimented with asking the AI to generate gameplay footage — and the tools are shockingly good at looking like games that you might recognize.

    Already, one video has amassed millions of views as onlookers are in awe over how easily the AI footage could be mistaken for actual Fortnite gameplay. According to Matt Shumer, who originally generated the footage, the prompt he entered to produce this content never mentioned Fortnite by name. What he apparently wrote was, “Streamer getting a victory royale with just his pickaxe.”

    Uhhh… I don't think Veo 3 is supposed to be generating Fortnite gameplay pic.twitter.com/bWKruQ5Nox — Matt Shumer (@mattshumer_) May 21, 2025

    Google did not respond to a request for comment over whether or not Veo should be generating footage that mimics copyrighted material. However, this does not appear to be an isolated incident. Another user got Veo to spit out something based on the idea of GTA 6. The result is probably a far cry from the realistic graphics GTA 6 has displayed in trailers thus far, but the gameplay still successfully replicates the aesthetic Rockstar is known for:

    We got Veo 3 playing GTA 6 before we got GTA 6! (what impresses me here is two distinct throughlines of audio: the guy, the game – prompt was 'a twitch streamer playing grand theft auto 6') pic.twitter.com/OM63yf0CKK — Sherveen Mashayekhi (@Sherveen) May 20, 2025

    Though there are limitations — eight seconds is a short period of time, especially compared to the hours of material that human streamers generate — it’s undoubtedly an impressive piece of technology that augurs a specific pathway for the future of livestreams. We’ve already got AI-powered Twitch streamers like Neuro-sama, which hooks up a large language model to a text-to-speech program that allows the chibi influencer to speak to her viewers. Neuro-sama learns from other actual Twitch streamers, which makes her personality as malleable as it is chaotic.
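
    The plumbing behind such streamers, as described here, is conceptually simple: chat goes into a language model and the reply comes out through text-to-speech. A minimal sketch, with a hypothetical generate_reply() standing in for the LLM and pyttsx3 as one readily available TTS engine:

    ```python
    # Conceptual sketch of the LLM-to-TTS wiring described above. The
    # generate_reply() stub is a hypothetical stand-in for whatever language
    # model a real AI streamer uses; pyttsx3 is one off-the-shelf TTS engine.
    import pyttsx3

    def generate_reply(chat_message: str) -> str:
        # Placeholder: a real setup would query an LLM here.
        return f"Thanks for the message! You said: {chat_message}"

    engine = pyttsx3.init()

    def respond(chat_message: str) -> None:
        reply = generate_reply(chat_message)
        engine.say(reply)    # queue synthesized speech
        engine.runAndWait()  # block until playback finishes

    respond("hello streamer!")
    ```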

    Imagine, for a moment, if an AI streamer didn’t need to rely on an actual game to endlessly entertain its viewers. Most games have a distinct beginning and end, and even live service games cannot endlessly produce new material. The combination of endless entertainment hosted by a personality who never needs to eat or sleep is a powerful if not terrifying combo, no? In January, Neuro-sama briefly became one of the top ten most-subscribed Twitch channels, according to stats website Twitch Tracker.

    That, and, an AI personality can sidestep many of the issues that are inherent to parasocial relationships. An AI cannot be harassed, swatted, or stalked by traditional means. An AI can still offend its viewers, but blame and responsibility in such instances are hazy concepts. AI-on-AI content — meaning, an AI streamer showing off AI footage — seems like the natural end point for the trends we’re seeing on platforms like Twitch.

    Twitch, for its part, already has a category for AI content. Its policies do not address the use of AI content beyond banning deepfake porn, but sexually explicit content of that nature wouldn’t be allowed regardless of source.

    “This topic is very much on our radar, and we are always monitoring emerging behaviors to ensure our policies remain relevant to what’s happening on our service,” a Twitch blog post from 2023 on deepfakes states. In 2024, ex-Twitch CEO Dan Clancy — who has a PhD in artificial intelligence — seemed confident about the opportunities that AI might afford Twitch streamers when Business Insider asked him about it. Clancy called AI a “boon” for Twitch that could potentially generate “endless” stimuli to react to.

    Would the general populace really be receptive to AI-on-AI content, though? Slurs aside, Fortnite’s AI Darth Vader seemed to be a hit. At the same time, nearly all generative models tend to spawn humans who have an unsettling aura. Everyone is laughing, yet no one looks happy. The cheer is forced in a way where you can practically imagine someone off-frame, menacingly holding a gun to the AI’s head. Like a dream where the more people smile, the closer things get to a nightmare. Everything is as perfect as it is hollow.

    Until the technology improves, any potential entertainer molded in the image of stock photography risks repulsing its viewers. Yet the internet is already slipping away from serving the needs of real human beings. Millions of bots roam about Twitch, dutifully inflating the views of streamers. Human beings will always crave the company of other people, sure. Much like mass production did for artisanal crafts, a future where our feeds are taken over by AI might just exponentially raise the value of authenticity and the human touch.

    But 2025 was the first year in history that internet traffic was determined to come more from bots than from people. It’s already a bot’s world out there. We’re just breathing in it.
    #this #aigenerated #fortnite #video #bleak
    WWW.POLYGON.COM
    This AI-generated Fortnite video is a bleak glimpse at our future
  • This AI App Is Using an AI-Generated Ad to Show How Easy It Is to Generate AI App Slop

    Back in my day, the phrase used to be “there’s an app for that,” and that’s still the case, though with one major amendment: now, it’s “there’s an AI app for that.” In fact, there’s even an AI app for making apps—buckle up, kiddos, things are about to get meta. Let me explain: Rork, which I stumbled across while scrolling X, is—if we are to drink the Kool-Aid—the app to end all apps. The font from which all other apps may flow. The cold fusion of coding. Alright, I’m exaggerating, but it’s exactly what I alluded to: an app that makes apps, which is like a hat on a hat if the first hat actually made the second hat. To make things even more meta, Rork used an AI ad with Google’s new Veo 3 video generator to promote its tool. Is your head spinning yet? Mine kind of is. When I say Rork makes apps, I mean it really makes the damn thing (at least I think it does, since I wouldn’t know a functional piece of code if it sat on my chest and suffocated me like a sleep paralysis demon). But on the surface, it does the whole thing. I went to the web version of Rork to try it out (there’s no mobile app that I’m aware of), and it seemingly took my text prompt, “I want to make an app that matches me with similar-sized people in my area to fight. Like Tinder but for fisticuffs,” and ran with it.

    Once I punched the prompt in (pun intended), Rork got to work (thinking for a while as AI does) and then used its corresponding large language model (Anthropic’s Claude 4 model) to start drawing everything up. And I mean everything—colors, features, parameters, basically every aspect of an app that you might need to launch. And the conjuring doesn’t stop there. Once everything is devised, Rork’s interface splits everything off into packages if you want to look at the code (that is, if you’re capable of reading it, unlike me), and then it does my favorite part—it generates a usable preview that you can test on your phone or another device. After the AI had coded everything, I was able to scan a QR code and generate a preview using Expo Go, a tool that lets you deploy code in a preview mode. So, without further ado, ladies and gentlemen, I present to you: FightMatch, Tinder for kicking ass.

    © Rork / Screenshot by Gizmodo

    It’s worth noting that I tried to make this even more meta by prompting Rork to make an app that uses generative AI to make images or video—an AI app that generates AI—but it ran into some issues that I wasn’t able to fully wrap my head around. Per Rork, they were “critical errors,” and even when I clicked the “fix” button, it wouldn’t budge. No AI app inception today, folks, sorry. On one hand, as someone with no coding experience, I’m impressed. Rork, as promised, was able to take my very simple text prompt (Tinder for fighting) and write up all the code to make it happen in about a minute or so. Again, a coder I am not, but that feels pretty extraordinary from a sheer idea-to-preview perspective. I’m fairly certain whatever Rork and Claude generated wouldn’t be enough to push to an app store right away, both from a technical and aesthetic perspective, but as a first draft, it’s at least serviceable, if very far from perfect. Also, if I’m being honest, I was looking for more of a Fight Club-type app over MMA, but I suppose Claude played this one safe.

    There’s obviously vast potential here to expedite app creation, but just like every generative tool of this kind, there’s also potential for something less exciting—slop. Like I wrote earlier this week, tools like Google’s Veo 3 and Flow are impressive technical feats, but they also feel primed to further bloat an already overwhelming bucket of AI slop. There’s always that question: do we need more apps or do we need better apps? I’m a proponent of the latter philosophy, but if there’s one thing I’ve come to expect in the tech world, it’s more. But hey, if I get rich quick with FightMatch, I can’t really complain, can I? And if you disagree, swipe right, and let’s settle this the old-fashioned way.
    GIZMODO.COM
  • These AI-Generated TikTok Videos Are Tricking People Into Installing Malware

    In recent years, TikTok has become a prime target for scammers and cyber attackers spreading various forms of malware, and the latest shady campaign promotes instructional videos that trick users into downloading infostealers to their devices via ClickFix attacks. The scheme, identified by Trend Micro and reported by Bleeping Computer, instructs users to execute commands to activate Windows and Microsoft Office or premium features in CapCut and Spotify. One video is captioned "Boost Your Spotify Experience Instantly — Here's How!" and has nearly half a million views. These videos seem to be AI-generated and, while the software they discuss is legitimate, the activation steps they outline are not, and will ultimately lead users to infect their devices with Vidar and StealC malware.

    TikTok's engagement algorithm makes it easy for such malicious videos to spread. In the past, cybercriminals have used TikTok's trending "Invisible Challenge" to spread WASP Stealer malware, which can steal Discord accounts, passwords, credit cards, and crypto wallets. Fake cryptocurrency giveaways posted on TikTok used deepfakes of Elon Musk (and themes around SpaceX and Tesla) to scam users into paying "activation" deposits using Bitcoin.

    How TikTok ClickFix attacks work

    ClickFix is a social engineering tactic that uses fake error messages or CAPTCHA prompts to trick users into executing a command with malicious code. Users will see a pop-up notification about a technical problem with instructions to copy and run a command (commonly a PowerShell script) to "fix" the issue. The attack most often targets Windows users, but it has been employed on macOS and Linux too.

    In the current TikTok campaign, the instructional videos prompt users to run a PowerShell command that installs Vidar or StealC information-stealing malware. The former can take desktop screenshots and harvest data ranging from login credentials and cookies to credit cards and crypto wallets. The latter targets web browsers and crypto wallets. Once run, the script will download a second PowerShell script allowing it to launch automatically upon device startup. It also saves in a hidden directory and deletes temporary folders so it can evade detection.

    How to spot malicious TikTok videos

    Be wary of following instructional videos you're served on TikTok (as well as unsolicited technical content in general). Check the source, and only engage with those that are legitimate, like from the developer itself. You should also look for signs of AI-generated content, which may be used to spread malware widely and rapidly. There's no malicious code actually embedded in or delivered by these instructional videos—the scheme is dependent on social engineering via verbal directions—making the threat technically harder to detect.
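    The advice above is about prevention, but if you worry a command like this has already been run on a machine, one reasonable first step is inspecting the Windows autorun locations this kind of malware reportedly abuses for startup persistence. Below is a minimal defensive sketch, assuming Python 3 on Windows; it is not from the article, and its substring heuristics are illustrative assumptions rather than a real detection tool. It lists Run-key entries and flags ones that invoke PowerShell with download or hidden-window switches:

        import winreg  # Windows-only standard library module

        # Autorun registry keys commonly abused for persistence (per-user and machine-wide).
        RUN_KEYS = [
            (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
            (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
        ]

        # Illustrative red flags: PowerShell invoked with download or stealth switches.
        RED_FLAGS = ("invoke-webrequest", "iwr ", "downloadstring", "-enc", "-windowstyle hidden")

        for hive, path in RUN_KEYS:
            try:
                with winreg.OpenKey(hive, path) as key:
                    index = 0
                    while True:
                        try:
                            name, value, _ = winreg.EnumValue(key, index)
                        except OSError:  # raised once there are no more values
                            break
                        lowered = str(value).lower()
                        if "powershell" in lowered and any(flag in lowered for flag in RED_FLAGS):
                            print(f"Review this autorun entry: {name!r} -> {value!r}")
                        index += 1
            except OSError:
                pass  # key absent or access denied; skip it

    Anything the script flags deserves a manual look, and a full scan with reputable antivirus software remains the safer follow-up.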
    LIFEHACKER.COM
  • Microsoft is now testing AI-generated text in Windows Notepad

    As of yesterday, Microsoft has begun rolling out a new update to Windows 11 Insiders on the Dev and Canary Channels. This update brings new AI features to Notepad, Paint, and the Snipping Tool.
    Notepad now has the ability to write text from scratch using generative AI, which is meant to aid you by quickly producing drafts based on your prompts and instructions. To use AI text generation, simply right-click anywhere in the document and select Write. Type in your instructions, then either click Keep Text or Discard on the results. You’ll need a Microsoft account and AI credits to use Write in Notepad.
    Meanwhile, Paint now has a new AI-generated sticker feature as well as an AI-assisted smart selection tool for isolating and editing elements in an image, and Snipping Tool has a new AI-powered “perfect screenshot” feature for capturing your screen without the need to crop or resize afterwards. Paint’s new AI features only work on Copilot+ PCs, while Snipping Tool’s features work on all computers.
    All of this builds on Microsoft’s strategy to bring more AI experiences to Notepad, Paint, and other Windows apps.
    WWW.PCWORLD.COM