• Cloning in polo? Seriously? This is an absolute disgrace! How low can we go when we start cloning horses, turning the sport into a money-making circus? A single clone worth $800,000? This isn’t innovation; it’s a betrayal of the very principles of sportsmanship and the love of the game! What’s next? A factory line for athletes? The line between skill and synthetic replication is blurred, and it’s infuriating to see greed overshadow talent. Let’s not allow this to become the norm—polo deserves better than this twisted tech trend.

    #Polo #Cloning #Sportsmanship #InnovationGoneWrong #EthicsInSports
    Cloning Came to Polo. Then Things Got Truly Uncivilized
    A polo legend and a businessman joined forces to copy the player’s greatest horse. But with a single clone worth $800,000, some technologies are a breeding ground for betrayal.
  • The Legal Accountability of AI-Generated Deepfakes in Election Misinformation

    How Deepfakes Are Created

    Generative AI models enable the creation of highly realistic fake media. Most deepfakes today are produced by training deep neural networks on real images, video, or audio of a target person. The two predominant AI architectures are generative adversarial networks (GANs) and autoencoders. A GAN consists of a generator network that produces synthetic images and a discriminator network that tries to distinguish fakes from real data. Through iterative training, the generator learns to produce outputs that increasingly fool the discriminator¹. Autoencoder-based tools similarly learn to encode a target face and then decode it onto a source video. In practice, deepfake creators use accessible software: open-source tools like DeepFaceLab and FaceSwap dominate video face-swapping (one estimate suggests DeepFaceLab was used for over 95% of known deepfake videos)². Voice-cloning tools (often built on similar AI principles) can mimic a person’s speech from minutes of audio. Commercial platforms like Synthesia allow text-to-video avatars (turning typed scripts into lifelike “spokespeople”), which have already been misused in disinformation campaigns³. Even mobile apps (e.g. FaceApp, Zao) let users do basic face swaps in minutes⁴. In short, advances in GANs and related models make deepfakes cheaper and easier to generate than ever.

    Diagram of a generative adversarial network (GAN): a generator network creates fake images from random input, and a discriminator network distinguishes fakes from real examples. Over time the generator improves until its outputs “fool” the discriminator⁵.
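    To make the adversarial training loop concrete, here is a minimal, hypothetical sketch in PyTorch. The layer sizes, optimizer settings, and flat 28×28-pixel image format are illustrative assumptions for the example, not the architecture of any real deepfake tool.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator for flat 28x28 grayscale images (784 values).
# All sizes and hyperparameters below are illustrative assumptions.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor):
    """One adversarial round; real_batch is a (B, 784) tensor of real images."""
    b = real_batch.size(0)
    noise = torch.randn(b, 64)

    # Discriminator update: push real images toward label 1, fakes toward 0.
    fake = G(noise).detach()  # detach so this step trains D only
    loss_d = bce(D(real_batch), torch.ones(b, 1)) + bce(D(fake), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: try to make D score fresh fakes as real.
    loss_g = bce(D(G(noise)), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

    Iterating this step is exactly what the diagram describes: as the discriminator gets better at spotting fakes, the generator is forced to produce ever more realistic outputs.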

    During creation, a deepfake algorithm is typically trained on a large dataset of real images or audio from the target. The more varied and high-quality the training data, the more realistic the deepfake. The output often then undergoes post-processing (color adjustments, lip-syncing refinements) to enhance believability¹. Technical defenses focus on two fronts: detection and authentication. Detection uses AI models to spot inconsistencies (blinking irregularities, audio artifacts, or metadata mismatches) that betray a synthetic origin⁵. Authentication embeds markers before dissemination – for example, invisible watermarks or cryptographically signed metadata indicating authenticity⁶. The EU AI Act will soon mandate that major AI content providers embed machine-readable “watermark” signals in synthetic media⁷. However, as the GAO notes, detection is an arms race – even a marked deepfake can sometimes evade notice – and labels alone don’t stop false narratives from spreading⁸⁹.
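    As a toy illustration of the authentication idea, the sketch below binds provenance metadata to the exact media bytes with a keyed signature, so any post-hoc edit invalidates it. Real provenance standards (such as C2PA) use public-key certificates rather than a shared secret, and the key and metadata fields here are invented for the example.

```python
import hashlib, hmac, json

SECRET_KEY = b"publisher-signing-key"  # hypothetical secret held by the publisher

def sign_media(media_bytes: bytes, metadata: dict) -> dict:
    """Bind metadata to the exact media bytes with an HMAC signature."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    payload = json.dumps({**metadata, "sha256": digest}, sort_keys=True)
    tag = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}

def verify_media(media_bytes: bytes, provenance: dict) -> bool:
    """Recompute the signature; any edit to pixels or metadata breaks it."""
    expected = hmac.new(SECRET_KEY, provenance["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, provenance["signature"]):
        return False
    claimed = json.loads(provenance["payload"])["sha256"]
    return claimed == hashlib.sha256(media_bytes).hexdigest()

# Usage: the provenance record travels with the file; verification fails
# for any tampered copy, which is the property watermark laws rely on.
stamp = sign_media(b"<video bytes>", {"source": "Campaign HQ", "ai_generated": False})
assert verify_media(b"<video bytes>", stamp)
assert not verify_media(b"<doctored video bytes>", stamp)
```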

    Deepfakes in Recent Elections: Examples

    Deepfakes and AI-generated imagery have already made headlines in election cycles around the world. In the 2024 U.S. primary season, a digitally altered audio robocall mimicked President Biden’s voice urging Democrats not to vote in the New Hampshire primary. The caller (“Susan Anderson”) was later fined $6 million by the FCC and indicted under existing telemarketing laws¹⁰¹¹. (Importantly, FCC rules on robocalls applied regardless of AI: the perpetrator could have used a voice actor or a recording instead.) Also in 2024, former President Trump posted on social media a collage implying that pop singer Taylor Swift endorsed his campaign, using AI-generated images of Swift in “Swifties for Trump” shirts¹². The posts sparked media uproar, though analysts noted the same effect could have been achieved without AI (e.g., by photoshopping text onto real images)¹². Similarly, Elon Musk’s X platform carried AI-generated clips, including a parody “ad” depicting Vice President Harris’s voice via an AI clone¹³.

    Beyond the U.S., deepfake-like content has appeared globally. In Indonesia’s 2024 presidential election, a video surfaced on social media in which a convincingly generated image of the late President Suharto appeared to endorse the candidate of the Golkar Party. Days later, the endorsed candidate (who is Suharto’s son-in-law) won the presidency¹⁴. In Bangladesh, a viral deepfake video superimposed the face of opposition leader Rumeen Farhana onto a bikini-clad body – an incendiary fabrication designed to discredit her in the conservative Muslim-majority society¹⁵. Moldova’s pro-Western President Maia Sandu has been repeatedly targeted by AI-driven disinformation; one deepfake video falsely showed her resigning and endorsing a Russian-friendly party, apparently to sow distrust in the electoral process¹⁶. Even in Taiwan (amidst tensions with China), a TikTok clip circulated that synthetically portrayed a U.S. politician making foreign-policy statements – stoking confusion ahead of Taiwanese elections¹⁷. In Slovakia’s recent campaign, AI-generated audio mimicking the liberal party leader suggested he plotted vote-rigging and beer-price hikes – instantly spreading on social media just days before the election¹⁸. These examples show that deepfakes have touched diverse polities (from Bangladesh and Indonesia to Moldova, Slovakia, India and beyond), often aiming to undermine candidates or confuse voters¹⁵¹⁸.

    Notably, many of the most viral “deepfakes” in 2024 circulated as obvious memes or claims rather than subtle deceptions. Experts observed that outright undetectable AI deepfakes were relatively rare; more common were AI-generated memes plainly shared by partisans, or cheaply doctored “cheapfakes” made with basic editing tools¹³¹⁹. For instance, social media was awash with memes of Kamala Harris in Soviet garb or of Black Americans holding Trump signs¹³, but these were typically used satirically, not meant to be secretly believed. Nonetheless, even unsophisticated fakes can sway opinion: a U.S. study found that false presidential ads (not necessarily AI-made) did change voter attitudes in swing states. In sum, deepfakes are a real and growing phenomenon in election campaigns worldwide²⁰²¹ – a trend taken seriously by voters and regulators alike.

    U.S. Legal Framework and Accountability

    In the U.S., deepfake creators and distributors of election misinformation face a patchwork of tools, but no single comprehensive federal “deepfake law.” Existing laws relevant to disinformation include statutes against impersonating government officials, electioneering rules (such as the Bipartisan Campaign Reform Act, which requires disclaimers on political ads), and targeted statutes like criminal electioneering communications. In some cases ordinary laws have been stretched: the NH robocall case used the Telephone Consumer Protection Act and mail/telemarketing fraud provisions, resulting in the $6M fine and a criminal charge. Similarly, voice impostors can potentially violate laws against “false advertising” or “unlawful corporate communications.” However, these laws were enacted before AI, and litigators have warned they often do not fit neatly. For example, deceptive deepfake claims not tied to a specific victim do not easily fit into defamation or privacy torts. Voter intimidation laws (prohibiting threats or coercion) also leave a gap for non-threatening falsehoods about voting logistics or endorsements.

    Recognizing these gaps, some courts and agencies are invoking other theories. The U.S. Department of Justice has recently charged individuals under broad fraud statutes (e.g. for a plot to impersonate an aide to swing votes in 2020), and state attorneys general have considered treating deepfake misinformation as interference with voting rights. Notably, the Federal Election Commission (FEC) is preparing to enforce new rules: in April 2024 it issued an advisory opinion limiting “non-candidate electioneering communications” that use falsified media, effectively requiring that political ads use only real images of the candidate. If finalized, that would make it unlawful for campaigns to pay for ads depicting a candidate saying things they never did. Similarly, the Federal Trade Commission (FTC) and Department of Justice (DOJ) have signaled that purely commercial deepfakes could violate consumer protection or election laws (for example, liability for mass false impersonation or for foreign-funded electioneering).

    U.S. Legislation and Proposals

    Federal lawmakers have proposed new statutes. The DEEPFAKES Accountability Act (H.R. 5586 in the 118th Congress) would, among other things, impose a disclosure requirement: political ads featuring a manipulated media likeness would need clear disclaimers identifying the content as synthetic. It would also increase penalties for producing false election videos or audio intended to influence the vote. While not yet enacted, supporters argue it would provide a uniform rule for all federal and state campaigns. The Brennan Center supports transparency requirements over outright bans, suggesting laws should narrowly target deceptive deepfakes in paid ads or certain categories (e.g. false claims about the time, place, or manner of voting) while carving out parody and news coverage.

    At the state level, over 20 states have passed deepfake laws specifically for elections. For example, Florida and California forbid distributing falsified audio/visual media of candidates with intent to deceive voters (though Florida’s law exempts parody). Some states (like Texas) define “deepfake” in statute and allow candidates to sue violators or seek revocation of their candidacies. These measures have had mixed success: courts have struck down overly broad provisions that acted as prior restraints (e.g. Minnesota’s 2023 law was challenged for threatening injunctions against anyone “reasonably believed” to violate it). Critically, these state laws raise First Amendment issues: political speech is highly protected, so any restriction must be tightly tailored. Already, Texas and Virginia statutes are under legal review, and Elon Musk’s company has sued to have California’s law (which requires platforms to label or block deepfakes) declared unconstitutional. In practice, most lawsuits have so far centered on defamation or intellectual property (for instance, a celebrity suing over a botched celebrity-deepfake video), rather than election-focused statutes.

    Policy Recommendations: Balancing Integrity and Speech

    Given the rapidly evolving technology, experts recommend a multi-pronged approach. Most stress transparency and disclosure as core principles. For example, the Brennan Center urges requiring any political communication that uses AI-synthesized images or voice to include a clear label. This could be a digital watermark or a visible disclaimer. Transparency has two advantages: it forces campaigns and platforms to “own” the use of AI, and it alerts audiences to treat the content with skepticism.

    Outright bans on all deepfakes would likely violate free speech, but targeted bans on specific harms (e.g. automated phone calls impersonating voters, or videos claiming false polling information) may be defensible. Indeed, Florida already penalizes misuse of recordings in voter suppression. Another recommendation is limited liability: tying penalties to demonstrable intent to mislead, not to the mere act of content creation. Both U.S. federal proposals and EU law generally condition fines on the “appearance of fraud” or deception.

    Technical solutions can complement laws. Watermarking original media (as encouraged by the EU AI Act) could deter the reuse of authentic images in doctored fakes. Open tools for deepfake detection – some supported by government research grants – should be deployed by fact-checkers and social platforms. Making detection datasets publicly available (e.g. the MIT OpenDATATEST) helps improve the AI models that spot fakes. International cooperation is also urged: cross-border agreements on information-sharing could help trace and halt disinformation campaigns. The G7 and APEC have both recently committed to fighting election interference via AI, which may lead to joint norms or rapid-response teams.
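    As a toy version of tracing image reuse, a perceptual hash can flag when a viral image appears to be pixel-derived from an authentic original – far weaker than cryptographic watermarking, but cheap to run at scale. The file names and the distance threshold of 10 are assumptions of this sketch:

```python
import numpy as np
from PIL import Image

def average_hash(path: str, size: int = 8) -> np.ndarray:
    """64-bit perceptual fingerprint: downscale, grayscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    px = np.asarray(img, dtype=np.float32)
    return (px > px.mean()).flatten()

def hamming(h1: np.ndarray, h2: np.ndarray) -> int:
    """Number of differing bits; small distances suggest a derived image."""
    return int(np.count_nonzero(h1 != h2))

# Hypothetical files: compare a known-authentic photo against a viral copy.
original = average_hash("campaign_photo.jpg")
suspect = average_hash("viral_post.jpg")
print("likely derived" if hamming(original, suspect) <= 10 else "probably unrelated")
```

    Because the hash survives resizing and mild edits but not wholesale replacement, it suits triage by fact-checkers rather than courtroom-grade attribution.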

    Ultimately, many analysts believe the strongest “cure” is a well-informed public: education campaigns to teach voters to question sensational media, and a robust independent press to debunk falsehoods swiftly. While the law can penalize the worst offenders, awareness and resilience in the electorate are crucial buffers against influence operations. As Georgia Tech’s Sean Parker quipped in 2019, “the real question is not if deepfakes will influence elections, but who will be empowered by the first effective one.” Thus policies should aim to deter malicious use without unduly chilling innovation or satire.

    References:

    https://www.security.org/resources/deepfake-statistics/
    https://www.wired.com/story/synthesia-ai-deepfakes-it-control-riparbelli/
    https://www.gao.gov/products/gao-24-107292
    https://technologyquotient.freshfields.com/post/102jb19/eu-ai-act-unpacked-8-new-rules-on-deepfakes
    https://knightcolumbia.org/blog/we-looked-at-78-election-deepfakes-political-misinformation-is-not-an-ai-problem
    https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections
    https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd
    https://www.lawfaremedia.org/article/new-and-old-tools-to-tackle-deepfakes-and-election-lies-in-2024
    https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
    https://firstamendment.mtsu.edu/article/political-deepfakes-and-elections/
    https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation
    https://law.unh.edu/sites/default/files/media/2022/06/nagumotu_pp113-157.pdf
    https://dfrlab.org/2024/10/02/brazil-election-ai-research/
    https://dfrlab.org/2024/11/26/brazil-election-ai-deepfakes/
    https://freedomhouse.org/article/eu-digital-services-act-win-transparency

  • You Can Now Make AI Videos with Audio Thanks to Veo3: A New Era of Scams Await

    Published: May 27, 2025

    Key Takeaways

    With Google’s Veo3, you can now render AI videos, audio, and background sounds.
    This would also make it easy for scammers to design deepfake scams to defraud innocent citizens.
    Users need to exercise self-vigilance to protect themselves. Developers’ responsibilities and government regulations will also play a key part.

    Google recently launched Veo3, an AI tool that lets you create videos with audio, including background tracks and various sound effects. Until recently, you could either use voice cloning apps to build AI voices or video rendering apps to generate AI videos. However, thanks to Veo3, folks can now create entire videos with audio.
    While this is an exciting development, we can’t help but think how easy it would be for scammers and swindlers to use Veo3’s videos to scam people.
    A video posted by a user on Threads shows a TV anchor breaking the news that ‘Secretary of Defence Pete Hegseth has died after drinking an entire litre of vodka on a dare by RFK.’ At first glance, the video is extremely convincing, and chances are that quite a few people might have believed it. After all, the quality is that of a professional news studio with a background of the Pentagon.
    Another user named Ari Kuschnir posted a 1-minute 16-second video on Reddit showing various characters in different settings talking to each other in various accents. The facial expressions are very close to those of a real human.
    A user commented, ‘Wow. The things that are coming. Gonna be wild!’ The ‘wild’ part is that the gap between reality and AI-generated content is closing daily. And remember, this is only the first version of this brand-new technology – things will only get worse from here.
    New AI Age for Scammers
    With the development of generative AI, we have already seen countless examples of people losing millions to such scams. 
    For example, in January 2024, an employee of a Hong Kong firm sent $25M to fraudsters who convinced her that she was talking to the firm’s CFO on a video call. Deloitte’s Center for Financial Services has predicted that generative AI could drive fraud losses of $40B in the US alone by 2027, growing at a CAGR of 32%.
    Until now, scammers had to go to the effort of generating audio and video separately and syncing them to compile a ‘believable’ video. Advanced AI tools like Veo3 make it far easier for bad actors to catch innocent people off guard.

    In what has been called the internet’s biggest scam so far, an 82-year-old retiree, Steve Beauchamp, lost $690,000 after he invested his retirement savings in an investment scheme. The AI-generated video showed Elon Musk talking about this investment and how everyone looking to make money should invest in the scheme.
    In January 2024, sexually explicit images of Taylor Swift were spread on social media, drawing a lot of legislative attention to the matter. Now, imagine what these scammers can do with Veo3-like technology. Making deepfake porn would become easier and faster, leading to a lot of extortion cases.
    It’s worth noting, though, that we’re not saying Veo3 specifically will be used for such criminal activities, since it has several safeguards in place. However, now that Veo3 has shown the path, similar products might be developed for malicious use cases.
    How to Protect Yourself
    Protecting yourself against AI-generated content requires a multifaceted approach built on three key pillars: self-vigilance, developers’ responsibilities, and government regulations.
    Self-Vigilance
    Well, it’s not entirely impossible to figure out which video is made via AI and which is genuine. Sure, AI has grown leaps and bounds in the last two years, and we have something as advanced as Veo3. However, there are still a few telltale signs of an AI-generated video. 

    The biggest giveaway is the lip sync. If you see a video of someone speaking, pay close attention to their lips: in most cases the audio will be out of sync by a few milliseconds (the sketch after this list shows one rough way to measure this).
    The voice will, in most cases, also sound robotic or flat, and the tone and pitch may be inconsistent, with none of the natural breathing sounds of real speech.
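    A rough, hypothetical way to quantify the lip-sync cue: cross-correlate mouth-region motion with the audio loudness envelope, and treat a large lag at the correlation peak as a red flag for dubbed or synthetic audio. The fixed lower-center crop (standing in for a real face detector) and ffmpeg-backed audio extraction via librosa are assumptions of this sketch; real forensic tools are far more sophisticated.

```python
import cv2
import numpy as np
import librosa

def mouth_motion(video_path: str):
    """Frame-to-frame pixel change in the lower-center of each frame,
    a crude stand-in for lip movement (no face detector in this sketch)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    energies, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        roi = cv2.cvtColor(frame[2 * h // 3:, w // 3:2 * w // 3], cv2.COLOR_BGR2GRAY)
        if prev is not None:
            energies.append(float(np.mean(cv2.absdiff(roi, prev))))
        prev = roi
    cap.release()
    return np.asarray(energies), fps

def av_offset_ms(video_path: str) -> float:
    """Lag (in ms) that best aligns mouth motion with audio loudness."""
    motion, fps = mouth_motion(video_path)
    audio, sr = librosa.load(video_path, sr=None, mono=True)  # assumes ffmpeg backend
    rms = librosa.feature.rms(y=audio, hop_length=int(sr / fps))[0]  # ~1 value/frame
    n = min(len(motion), len(rms))
    m = (motion[:n] - motion[:n].mean()) / (motion[:n].std() + 1e-8)
    r = (rms[:n] - rms[:n].mean()) / (rms[:n].std() + 1e-8)
    corr = np.correlate(m, r, mode="full")
    lag_frames = int(np.argmax(corr)) - (n - 1)
    return 1000.0 * lag_frames / fps  # near 0 suggests plausible sync

# Hypothetical usage:
# print(f"estimated A/V offset: {av_offset_ms('suspect_clip.mp4'):.0f} ms")
```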

    We also recommend that you only trust official sources of information and not any random video you find while scrolling Instagram, YouTube, or TikTok. For example, if you see Elon Musk promoting an investment scheme, look for the official page or website of that scheme and dig deeper to find out who the actual promoters are. 
    If the scheme is a scam, you will not find anything reliable or trustworthy in the process. This exercise takes only a couple of minutes but can end up saving thousands of dollars.
    Developers’ Responsibilities
    AI developers are also responsible for ensuring their products cannot be misused for scams, extortion, and misinformation. For example, Veo3 blocks prompts that violate responsible AI guidelines, such as those involving politicians or violent acts. 
    Google has also developed its SynthID watermarking system, which watermarks content generated using Google’s AI tools. People can use the SynthID Detector to verify whether a piece of content was generated with those tools.

    However, these safeguards are limited to Google’s products for now. There’s a need for similar, if not better, prevention systems across the industry.
    Government Regulations
    Lastly, governments need to play a crucial role in regulating the use of artificial intelligence. For example, the EU has already passed the AI Act, with enforcement beginning in 2025. Under it, companies must meet stringent documentation, transparency, and oversight standards for all high-risk AI systems.
    Even in the US, several laws are under proposal. For instance, the DEEPFAKES Accountability Act would require AI-generated content that shows any person to include a clear disclaimer stating that it is a deepfake. The bill was introduced in the House of Representatives in September 2023 and is currently under consideration. 
    Similarly, the REAL Political Advertisements Act would require political ads that contain AI content to include a similar disclaimer.
    That said, we are still only in the early stages of formulating legislation to regulate AI content. With time, as more sophisticated and advanced artificial intelligence tools develop, lawmakers must also be proactive in ensuring digital safety.

    #you #can #now #make #videos
    You Can Now Make AI Videos with Audio Thanks to Veo3: A New Era of Scams Await
    Home You Can Now Make AI Videos with Audio Thanks to Veo3: A New Era of Scams Await News You Can Now Make AI Videos with Audio Thanks to Veo3: A New Era of Scams Await 6 min read Published: May 27, 2025 Key Takeaways With Google’s Veo3, you can now render AI videos, audio, and background sounds. This would also make it easy for scammers to design deepfake scams to defraud innocent citizens. Users need to exercise self-vigilance to protect themselves. Developers’ responsibilities and government regulations will also play a key part. Google recently launched Veo3, an AI tool that lets you create videos with audio, including background tracks and various sound effects. Until recently, you could either use voice cloning apps to build AI voices or video rendering apps to generate AI videos. However, thanks to Veo3, folks can now create entire videos with audio. While this is an exciting development, we can’t help but think how easy it would be for scammers and swindlers to use Veo3’s videos to scam people. A video posted by a user on Threads shows a TV anchor breaking the news that ‘Secretary of Defence Pete Hegseth has died after drinking an entire litre of vodka on a dare by RFK.’ At first glance, the video is extremely convincing, and chances are that quite a few people might have believed it. After all, the quality is that of a professional news studio with a background of the Pentagon. Another user named Ari Kuschnir posted a 1-minute 16-second video on Reddit showing various characters in different settings talking to each other in various accents. The facial expressions are very close to those of a real human. A user commented, ‘Wow. The things that are coming. Gonna be wild!’ The ‘wild’ part is that the gap between reality and AI-generated content is closing daily. And remember, this is only the first version of this brand-new technology – things will only get worsefrom here. New AI Age for Scammers With the development of generative AI, we have already seen countless examples of people losing millions to such scams.  For example, in January 2024, an employee of a Hong Kong firm sent M to fraudsters who convinced the employee that she was talking to the CFO of the firm on a video call. Deloitte’s Center for Financial Services has predicted that generative AI could lead to a loss of B in the US alone by 2027, growing at a CAGR of 32%. Until now, scammers also had to take the effort of generating audio and video separately and syncing them to compile a ‘believable’ video. However, advanced AI tools like Veo3 make it easier for bad actors to catch innocent people off guard. In what is called the internet’s biggest scam so far, an 82-year-old retiree, Steve Beauchamp, lost after he invested his retirement savings in an investment scheme. The AI-generated video showed Elon Musk talking about this investment and how everyone looking to make money should invest in the scheme. In January 2024, sexually explicit images of Taylor Swift were spread on social media, drawing a lot of legislative attention to the matter. Now, imagine what these scammers can do with Veo3-like technology. Making deepfake porn would become easier and faster, leading to a lot of extortion cases. It’s worth noting, though, that we’re not saying that Veo3 specifically will be used for such criminal activities because they have several safeguards in place. However, now that Veo3 has shown the path, other similar products might be developed for malicious use cases. 
How to Protect Yourself Protection against AI-generated content is a multifaceted approach involving three key pillars: self-vigilance, developers’ responsibilities, and government regulations. Self Vigilance Well, it’s not entirely impossible to figure out which video is made via AI and which is genuine. Sure, AI has grown leaps and bounds in the last two years, and we have something as advanced as Veo3. However, there are still a few telltale signs of an AI-generated video.  The biggest giveaway is the lip sync. If you see a video of someone speaking, pay close attention to their lips. The audio in most cases will be out of sync by a few milliseconds. The voice, in most cases, will also sound robotic or flat. The tone and pitch might be inconsistent without any natural breathing sounds. We also recommend that you only trust official sources of information and not any random video you find while scrolling Instagram, YouTube, or TikTok. For example, if you see Elon Musk promoting an investment scheme, look for the official page or website of that scheme and dig deeper to find out who the actual promoters are.  You will not find anything reliable or trustworthy in the process. This exercise takes only a couple of minutes but can end up saving thousands of dollars. Developer’s Responsibilities AI developers are also responsible for ensuring their products cannot be misused for scams, extortion, and misinformation. For example, Veo3 blocks prompts that violate responsible AI guidelines, such as those involving politicians or violent acts.  Google has also developed its SynthID watermarking system, which watermarks content generated using Google’s AI tools. People can use the SynthID Detector to verify if a particular content was generated using AI. However, these safeguards are currently limited to Google’s products as of now. There’s a need for similar, if not better, prevention systems moving forward. Government Regulations Lastly, the government needs to play a crucial role in regulating the use of artificial intelligence. For example, the EU has already passed the AI Act, with enforcement beginning in 2025. Under this, companies must undergo stringent documentation, transparency, and oversight standards for all high-risk AI systems.  Even in the US, several laws are under proposal. For instance, the DEEPFAKES Accountability Act would require AI-generated content that shows any person to include a clear disclaimer stating that it is a deepfake. The bill was introduced in the House of Representatives in September 2023 and is currently under consideration.  Similarly, the REAL Political Advertisements Act would require political ads that contain AI content to include a similar disclaimer. That said, we are still only in the early stages of formulating legislation to regulate AI content. With time, as more sophisticated and advanced artificial intelligence tools develop, lawmakers must also be proactive in ensuring digital safety. Krishi is a seasoned tech journalist with over four years of experience writing about PC hardware, consumer technology, and artificial intelligence.  Clarity and accessibility are at the core of Krishi’s writing style. He believes technology writing should empower readers—not confuse them—and he’s committed to ensuring his content is always easy to understand without sacrificing accuracy or depth. Over the years, Krishi has contributed to some of the most reputable names in the industry, including Techopedia, TechRadar, and Tom’s Guide. 
A man of many talents, Krishi has also proven his mettle as a crypto writer, tackling complex topics with both ease and zeal. His work spans various formats—from in-depth explainers and news coverage to feature pieces and buying guides.  Behind the scenes, Krishi operates from a dual-monitor setupthat’s always buzzing with news feeds, technical documentation, and research notes, as well as the occasional gaming sessions that keep him fresh.  Krishi thrives on staying current, always ready to dive into the latest announcements, industry shifts, and their far-reaching impacts.  When he's not deep into research on the latest PC hardware news, Krishi would love to chat with you about day trading and the financial markets—oh! And cricket, as well. View all articles by Krishi Chowdhary Our editorial process The Tech Report editorial policy is centered on providing helpful, accurate content that offers real value to our readers. We only work with experienced writers who have specific knowledge in the topics they cover, including latest developments in technology, online privacy, cryptocurrencies, software, and more. Our editorial policy ensures that each topic is researched and curated by our in-house editors. We maintain rigorous journalistic standards, and every article is 100% written by real authors. More from News View all News OpenAI Academy – A New Beginning in AI Learning Krishi Chowdhary 44 minutes ago View all #you #can #now #make #videos
    TECHREPORT.COM
    You Can Now Make AI Videos with Audio Thanks to Veo3: A New Era of Scams Await
    Home You Can Now Make AI Videos with Audio Thanks to Veo3: A New Era of Scams Await News You Can Now Make AI Videos with Audio Thanks to Veo3: A New Era of Scams Await 6 min read Published: May 27, 2025 Key Takeaways With Google’s Veo3, you can now render AI videos, audio, and background sounds. This would also make it easy for scammers to design deepfake scams to defraud innocent citizens. Users need to exercise self-vigilance to protect themselves. Developers’ responsibilities and government regulations will also play a key part. Google recently launched Veo3, an AI tool that lets you create videos with audio, including background tracks and various sound effects. Until recently, you could either use voice cloning apps to build AI voices or video rendering apps to generate AI videos. However, thanks to Veo3, folks can now create entire videos with audio. While this is an exciting development, we can’t help but think how easy it would be for scammers and swindlers to use Veo3’s videos to scam people. A video posted by a user on Threads shows a TV anchor breaking the news that ‘Secretary of Defence Pete Hegseth has died after drinking an entire litre of vodka on a dare by RFK.’ At first glance, the video is extremely convincing, and chances are that quite a few people might have believed it. After all, the quality is that of a professional news studio with a background of the Pentagon. Another user named Ari Kuschnir posted a 1-minute 16-second video on Reddit showing various characters in different settings talking to each other in various accents. The facial expressions are very close to those of a real human. A user commented, ‘Wow. The things that are coming. Gonna be wild!’ The ‘wild’ part is that the gap between reality and AI-generated content is closing daily. And remember, this is only the first version of this brand-new technology – things will only get worse (worse) from here. New AI Age for Scammers With the development of generative AI, we have already seen countless examples of people losing millions to such scams.  For example, in January 2024, an employee of a Hong Kong firm sent $25M to fraudsters who convinced the employee that she was talking to the CFO of the firm on a video call. Deloitte’s Center for Financial Services has predicted that generative AI could lead to a loss of $40B in the US alone by 2027, growing at a CAGR of 32%. Until now, scammers also had to take the effort of generating audio and video separately and syncing them to compile a ‘believable’ video. However, advanced AI tools like Veo3 make it easier for bad actors to catch innocent people off guard. In what is called the internet’s biggest scam so far, an 82-year-old retiree, Steve Beauchamp, lost $690,000 after he invested his retirement savings in an investment scheme. The AI-generated video showed Elon Musk talking about this investment and how everyone looking to make money should invest in the scheme. In January 2024, sexually explicit images of Taylor Swift were spread on social media, drawing a lot of legislative attention to the matter. Now, imagine what these scammers can do with Veo3-like technology. Making deepfake porn would become easier and faster, leading to a lot of extortion cases. It’s worth noting, though, that we’re not saying that Veo3 specifically will be used for such criminal activities because they have several safeguards in place. However, now that Veo3 has shown the path, other similar products might be developed for malicious use cases. 
How to Protect Yourself

Protecting yourself against AI-generated content is a multifaceted effort resting on three pillars: self-vigilance, developers’ responsibilities, and government regulations.

Self-Vigilance

It’s not impossible to figure out which video was made with AI and which is genuine. Sure, AI has grown leaps and bounds in the last two years, and we now have something as advanced as Veo3. However, there are still a few telltale signs of an AI-generated video.

The biggest giveaway is the lip sync. If you see a video of someone speaking, pay close attention to their lips: the audio will often be slightly out of sync. The voice may also sound robotic or flat, with inconsistent tone and pitch and no natural breathing sounds.

We also recommend that you only trust official sources of information and not any random video you find while scrolling Instagram, YouTube, or TikTok. For example, if you see Elon Musk promoting an investment scheme, look for the official page or website of that scheme and dig deeper to find out who the actual promoters are. If the scheme is a scam, you will not find anything reliable or trustworthy in the process. This exercise takes only a couple of minutes but can end up saving thousands of dollars.

Developers’ Responsibilities

AI developers are also responsible for ensuring their products cannot be misused for scams, extortion, and misinformation. For example, Veo3 blocks prompts that violate responsible AI guidelines, such as those involving politicians or violent acts.

Google has also developed its SynthID watermarking system, which marks content generated using Google’s AI tools; people can then use the SynthID Detector to verify whether a particular piece of content was generated using AI. However, these safeguards are currently limited to Google’s products. There’s a need for similar, if not better, prevention systems across the industry.

Government Regulations

Lastly, governments need to play a crucial role in regulating the use of artificial intelligence. For example, the EU has already passed the AI Act, with enforcement beginning in 2025. Under it, companies must meet stringent documentation, transparency, and oversight standards for all high-risk AI systems.

In the US, several laws are under proposal. For instance, the DEEPFAKES Accountability Act would require AI-generated content that shows any person to include a clear disclaimer stating that it is a deepfake. The bill was introduced in the House of Representatives in September 2023 and is currently under consideration. Similarly, the REAL Political Advertisements Act would require political ads that contain AI content to include a similar disclaimer.

That said, we are still only in the early stages of formulating legislation to regulate AI content. With time, as more sophisticated and advanced artificial intelligence tools develop, lawmakers must be proactive in ensuring digital safety.
  • Rick and Morty team didn’t worry about the lore ‘we owe’ in season 8 — only Rick’s baggage

Rick and Morty remains a staggering work of chaotic creativity. Previewing a handful of episodes from season 8, which premieres Sunday, May 25 with a Matrix-themed story inspired by phone charger theft, I still had that brain-melty “How do they think of this stuff?” feeling from when the show premiered more than a decade ago. The characters aren’t all the same as they were back in 2013 (voice actors aside): Morty has an edge from being around the galactic block a few hundred times, and Rick, while still a maniac, seems to carry the weight of cloning his daughter Beth that one time.

But the sheer amount of wackadoo sci-fi comedy that creator Dan Harmon, showrunner Scott Marder, and their team of writers pack into each half-hour hasn’t lost the awe. This season, that includes everything from a body-horror spin on the Easter Bunny to a “spiritual sequel” (Harmon’s words) to season 3’s beloved Citadel episode “The Ricklantis Mixup.”

    So where does writing yet another season of Rick and Morty begin? And what does a new season need to accomplish at this point? Polygon talked to Harmon and Marder, who wrote seasons 8, 9, and 10 all in one go, about the tall-order task of reapproaching the Adult Swim series with so much madcap history behind them.

    Polygon: Where do you even start writing a new episode, when your show can zip in any fantastical direction, or go completely ham on its own mythology?

    Scott Marder: You might be surprised that we never start off a season with “What’s the canon we owe?” That’s the heavy lifting, and not necessarily how we want to start a season off. There are always people on staff that are hyper-aware of where we are in that central arc that’s going across the whole series, but it’s like any writers room — people are coming in with ideas they’re excited about. You can just see it on their faces. You can feel their energy and just spit it out, and people just start firing off things they’re excited about. We don’t try to have any rules or any setup. Sometimes there are seasons where we owe something from the previous season. In season 8, we didn’t, and that was luxurious.

    Dan Harmon: I always reference the Dexter season where they tried to save the revelation that a Fight Club was happening for the end, and after the first episode, all of Reddit had decoded it. I marked that moment as sort of “We are now in post-payoff TV.” As TV writers, we have to use what the audience doesn’t have, which is a TV writers room. That isn’t 10 people sitting around planning a funhouse, because they’re not going to plan as good a funhouse as a million people can plan for free by crowdsourcing. 

    But we can mix chocolate with giant machines that people can’t afford and don’t have in their kitchen. We can use resources and things to make something that’s delicious to watch. So that becomes the obligation when we sit down for seasons. We never go, “What’s going to be the big payoff? What’s going to be the big old twist? What are we going to reveal?” I think that that’s a non-starter for the modern audience. You just have to hope that the thing that ends up making headlines is a “How is it still good?” kind of thing — that’s the only narrative you can blow people’s minds with.

    Even if “lore” isn’t the genesis of a new season, Rick & Morty still exists in an interesting middle ground between episodic and serialized storytelling. Do you need the show to have one or the other when you want a season to have impact?

Harmon: It’s less episodic than Hercules or Xena. It’s not Small Wonder or something where canon would defeat their own purpose. But it is way more episodic than Yellowjackets — I walked in on Cody [Heller, Harmon’s partner] watching season 2 of [Yellowjackets], and literally there wasn’t a single line of dialogue that made sense to me, and that was how she liked it. They were all talking about whatever happened in season 1.

Referencing The Pitt, I think, is the new perfect example of how you can’t shake your cane at serialization. In a post-streaming marketplace, The Pitt represents a new opportunity for old showrunners, new viewers to do things you couldn’t do before, that you can now do with serialization, and eschewing the time-slot-driven narrative model. Our show needs to be Doctor Who or Deep Space Nine. It comes from a tradition of, you need to be able to eat one piece of chocolate out of the box, but the characters need to, more so than a Saved by the Bell character, grow and change and have things about them that get revealed over time that don’t then get retconned.

    Marder: Ideally, the show’s evergreen, generally episodic. But we’re keeping an eye on serialized stuff, moments across each season that keep everyone engaged. I know people care about all that stuff. I think all of that combined makes for a perfect Rick and Morty season.

    How reactive is writing a new season of Rick and Morty? Does season 8 feel very 2025 to you, or is the goal timelessness?

    Harmon: The show has seen such a turbulent decade, and one of the cultural things that has happened is, TV is now always being watched by the entire planet. So people often ask “Is there anything that you’re afraid to do or can’t do?” The answer to that is “No.” But then at the same time, I don’t think the show has an edge that it needs to push, or would profit from pushing. It’s almost the opposite, in that the difficult thing is figuring out how to keep Rick from being Flanderized as a character that was a nihilist 10 years ago, where across an epoch of culture and TV, Rick was simply the guy saying, “By the way, God doesn’t exist” and having a cash register “Cha-ching!” from him saying that. 

    How do you keep House from not becoming pathetic on the 10th season of House if House has made people go, “I trust House because he’s such a crab-ass and he doesn’t care about your feelings when he diagnoses you!” I mean, you need to very delicately cultivate a House. So if you do care about the character, and value its outside perspective, it needs to be delicately changed to balance a changing ecosystem. 

    What a weird rambling answer to that question. But yeah, with Rick, it’s now like, “What if you’re kind of post-achievement? What if your nihilism isn’t going to pay the rent, as far as emotional relationships?” It’s not going to blow anyone’s mind, least of all his own. Where does that leave him? A new set of challenges. He’s still cynical, he’s still a nihilist. He’s still self-loathing, and filled with self-damage. Those things are wired into him. And yet he’s also acknowledged that other people are arbitrarily important to him. And so I guess we start there — that’s the only thing we can do to challenge ourselves. 

    Marder: I would say, just yes-anding Harmon, that’s sort of the light arc that runs through the season. Just kind of Rick living in a “retirement state.” What does he do now that this vendetta is over? He’s dealing with the family now, dealing with the Beths. That’s some of the stuff that we touch on lightly through it. 

    Which characters were you excited to see grow this season?

    Marder: I don’t think anyone had an agenda. It just kind of happened that we ended up finding a really neat Beth arc once Beth got split in two. It made her a way more intriguing character. One part of you literally gets to live the road less traveled, and this season really explores whether either of them are leading a happier life. Rick has to deal with being at the root of all that. 

When we stumble onto something like a Jerry episode, like the Easter [one], that’s a treat, or Summer and the phone charger. She’s such an awesome character. It’s cool to see how she and Morty are evolving and becoming better at being the sidekick and handling themselves. It was cool watching her become a powerful CEO, then step back into her old life. We are very lucky that we’ve got a strong cast.

    Are there any concepts in season 8 you’ve tried to get in the show for years and only now found a way?

Harmon: My frustrating answer to that question is that the answer to that question is one that happens in season 9! [A thing] I’ve actually been wanting to do in television or in movies forever, and we figured out how to do it.

    There are definitely things in every episode, but it’s hard to tell which ones. We have a shoebox of “Oh, this idea can’t be done now,” but it’s like a cow’s digestive system. Ideas for seasons just keep getting passed down.

    Marder: There are a few that are magnetic that we can’t crack, and that we kind of leave on the board, hoping that maybe a new guy will come in and see it comedically. I feel like every season, a new person will come in and see that we have “time loop” up on the board, and they’ll crack their knuckles and be like, “I’m going to break the time loop.” And then we all spend three days trying to break “time loop.” Then it goes back on the board, and we’re reminded why we don’t do time loops. 

    Harmon: That is so funny. That is the reality, and it’s funny how mythical it is. It’s like an island on a pre-Columbian map in a ship’s galley, and some new deckhand comes in going, “What’s the Galapagos?” And we’re like, “Yarr, you little piece of shit, sit down and I’ll tell you a tale!” And they’ll either be successfully warned off, or they’ll go, “I’m going to take it.”

    Marder: It’s always like, “I can’t remember why that one made it back on the board… I can’t remember why we couldn’t crack it…” And then three days later, you’re like, “I remember why we couldn’t crack it.” Now an eager young writer is seasoned and grizzled. “It was a mistake to go to the time loop.”
  • Level up your code with game programming patterns

If you have experience with object-oriented programming languages, then you’ve likely heard of the SOLID principles, MVP, singleton, factory, and observer patterns. Our new e-book highlights best practices for using these principles and patterns to create scalable game code architecture in your Unity project.

For every software design issue you encounter, a thousand developers have been there before. Though you can’t always ask them directly for advice, you can learn from their decisions through design patterns. By implementing common game programming design patterns in your Unity project, you can efficiently build and maintain a clean, organized, and readable codebase, which in turn creates a solid foundation for scaling your game, development team, and business.

In our community, we often hear that it can be intimidating to learn how to incorporate design patterns and principles, such as SOLID and KISS, into daily development. That’s why our free e-book, Level up your code with game programming patterns, explains well-known design patterns and shares practical examples for using them in your Unity project. Written by internal and external Unity experts, the e-book is a resource that can help expand your developer’s toolbox and accelerate your project’s success. Read on for a preview of what the guide entails.

Design patterns are general solutions to common problems found in software engineering. These aren’t finished solutions you can copy and paste into your code, but extra tools that can help you build larger, scalable applications when used correctly. By integrating patterns consistently into your project, you can improve code readability and make your codebase cleaner. Design patterns not only reduce refactoring and the time spent testing, they speed up onboarding and development processes. However, every design pattern comes with tradeoffs, whether that means additional structures to maintain or more setup at the beginning. You’ll need to do a cost-benefit assessment to determine if the advantage justifies the extra work required. Of course, this assessment will vary based on your project.

KISS stands for “keep it simple, stupid.” The aim of this principle is to avoid unnecessary complexity in a system, as simplicity helps drive greater levels of user acceptance and interaction. Note that “simple” does not equate to “easy.” Making something simple means making it focused. While you can create the same functionality without the patterns (and often more quickly), something fast and easy doesn’t necessarily result in something simple. If you’re unsure whether a pattern applies to your particular issue, you might hold off until it feels like a more natural fit. Don’t use a pattern because it’s new or novel to you. Use it when you need it. It’s in this spirit that the e-book was created. Keep the guide handy as a source of inspiration for new ways of organizing your code – not as a strict set of rules for you to follow.

Now, let’s turn to some of the key software design principles. SOLID is a mnemonic acronym for five core fundamentals of software design. You can think of them as five basic rules to keep in mind while coding, to ensure that object-oriented designs remain flexible and maintainable. The SOLID principles were first introduced by Robert C. Martin in the paper Design Principles and Design Patterns.
First published in 2000, the principles described are still applicable today, including to C# scripting in Unity (a short sketch of two of them follows the list):

- Single responsibility states that each module, class, or function is responsible for one thing and encapsulates only that part of the logic.
- Open-closed states that classes must be open for extension but closed for modification; that means structuring your classes to create new behavior without modifying the original code.
- Liskov substitution states that derived classes must be substitutable for their base class when using inheritance.
- Interface segregation states that no client should be forced to depend on methods it does not use. Clients should only implement what they need.
- Dependency inversion states that high-level modules should not import anything directly from low-level modules. Both should depend on abstractions.
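To make two of these concrete, here is a minimal, hypothetical C# sketch (the class and interface names are ours, not from the e-book): one component owns exactly one job, and a high-level Bullet depends on an IDamageable abstraction rather than on concrete enemy classes.

```csharp
using UnityEngine;

// Single responsibility: this component only plays audio; movement and
// health logic would live in their own components.
public class PlayerAudio : MonoBehaviour
{
    [SerializeField] private AudioSource source;

    public void PlayFootstep(AudioClip clip) => source.PlayOneShot(clip);
}

// Dependency inversion: high-level code depends on an abstraction...
public interface IDamageable
{
    void TakeDamage(int amount);
}

// ...so this Bullet works with any IDamageable (enemy, crate, boss)
// without referencing their concrete classes.
public class Bullet : MonoBehaviour
{
    [SerializeField] private int damage = 10;

    private void OnCollisionEnter(Collision collision)
    {
        if (collision.gameObject.TryGetComponent<IDamageable>(out var target))
            target.TakeDamage(damage);
    }
}
```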
In the e-book, we provide illustrated examples of each principle with clear explanations for using them in Unity. In some cases, adhering to SOLID can result in additional work up front. You may need to refactor some of your functionality into abstractions or interfaces, but there is often a payoff in long-term savings. The principles have dominated software design for nearly two decades at the enterprise level because they’re so well-suited to large applications that scale. If you’re unsure about how to use them, refer back to the KISS principle. Keep it simple, and don’t try to force the principles into your scripts just for the sake of doing so. Let them organically work themselves into place through necessity. If you’re interested in learning more, check out the SOLID presentation from Unite Austin 2017 by Dan Sagmiller of Productive Edge.

What’s the difference between a design principle and a design pattern? One way to answer that question is to consider SOLID as a framework for, or a foundational approach to, writing object-oriented code. While design patterns are solutions or tools you can implement to avoid everyday software problems, remember that they’re not off-the-shelf recipes – or for that matter, algorithms with specific steps for achieving specific results. A design pattern can be thought of as a blueprint. It’s a general plan that leaves the actual construction up to you. For instance, two programs can follow the same pattern but involve very different code.

When developers encounter the same problem in the wild, many of them will inevitably come up with similar solutions. Once a solution is repeated enough times, someone might “discover” a pattern and formally give it a name. Many of today’s software design patterns stem from the seminal work Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. This book unpacks 23 such patterns identified in a variety of day-to-day applications. The original authors are often referred to as the “Gang of Four” (GoF), and you’ll also hear the original patterns dubbed the GoF patterns. While the examples cited are mostly in C++ (and Smalltalk), you can apply their ideas to any object-oriented language, such as C#. Since the Gang of Four originally published Design Patterns in 1994, developers have established dozens more object-oriented patterns in a variety of fields, including game development.

While you can work as a game programmer without studying design patterns, learning them will help you become a better developer. After all, design patterns are labeled as such because they’re common solutions to well-known problems. Software engineers rediscover them all the time in the normal course of development. You may have already implemented some of these patterns unwittingly. Train yourself to look for them. Doing this can help you:

- Learn object-oriented programming: Design patterns aren’t secrets buried in an esoteric StackOverflow post. They are common ways to overcome everyday hurdles in development. They can inform you of how many other developers have approached the same issue – remember, even if you’re not using patterns, someone else is.
- Talk to other developers: Patterns can serve as a shorthand when trying to communicate as a team. Mention the “command pattern” or “object pool” and experienced Unity developers will know what you’re trying to implement.
- Explore new frameworks: When you import a built-in package or something from the Asset Store, inevitably you’ll stumble onto one or more patterns discussed here. Recognizing design patterns will help you understand how a new framework operates, as well as the thought process involved in its creation.

As indicated earlier, not all design patterns apply to every game application. Don’t go looking for them with Maslow’s hammer; otherwise, you might only find nails. Like any other tool, a design pattern’s usefulness depends on context. Each one provides a benefit in certain situations and also comes with its share of drawbacks. Every decision in software development comes with compromises. Are you generating a lot of GameObjects on the fly? Does it impact your performance? Can restructuring your code fix that? Be aware of these design patterns, and when the time is right, pull them from your gamedev bag of tricks to solve the problem at hand.

In addition to the Gang of Four’s Design Patterns, Game Programming Patterns by Robert Nystrom is another standout resource, currently available for free as a web-based edition. The author details a variety of software patterns in a no-nonsense manner. In our new e-book, you can dive into the sections that explain common design patterns, such as factory, object pool, singleton, command, state, and observer patterns, plus the Model View Presenter, among others. Each section explains the pattern along with its pros and cons, and provides an example of how to implement it in Unity so you can optimize its usage in your project.
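As a taste of one of those patterns, here is a minimal observer sketch using plain C# events; Health and HealthBarUI are illustrative names of ours, not examples taken from the e-book.

```csharp
using System;
using UnityEngine;

// Observer pattern via C# events: the subject broadcasts, observers subscribe.
public class Health : MonoBehaviour
{
    public event Action<int> OnHealthChanged; // the subject's notification hook

    private int current = 100;

    public void TakeDamage(int amount)
    {
        current = Mathf.Max(0, current - amount);
        OnHealthChanged?.Invoke(current); // notify every subscriber
    }
}

// An observer: it reacts to changes without Health ever knowing it exists.
public class HealthBarUI : MonoBehaviour
{
    [SerializeField] private Health health;

    private void OnEnable()  => health.OnHealthChanged += HandleChanged;
    private void OnDisable() => health.OnHealthChanged -= HandleChanged;

    private void HandleChanged(int value) => Debug.Log($"Health bar: {value}");
}
```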

Unity already implements several established gamedev patterns, saving you the trouble of writing them yourself. These include:

Game loop: At the core of all games is an infinite loop that must function independently of clock speed, since the hardware that powers a game application can vary greatly. To account for computers of different speeds, game developers often need to use a fixed timestep and a variable timestep where the engine measures how much time has passed since the previous frame. Unity takes care of this, so you don’t have to implement it yourself. You only need to manage gameplay using MonoBehaviour methods like Update, LateUpdate, and FixedUpdate.
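For instance, a typical split of gameplay code across those callbacks might look like the following sketch (PlayerController and its fields are our names, not from the e-book):

```csharp
using UnityEngine;

public class PlayerController : MonoBehaviour
{
    [SerializeField] private Rigidbody body;
    [SerializeField] private float moveForce = 10f;

    private Vector3 input;

    private void Update()
    {
        // Variable timestep: poll input once per rendered frame.
        input = new Vector3(Input.GetAxis("Horizontal"), 0f,
                            Input.GetAxis("Vertical"));
    }

    private void FixedUpdate()
    {
        // Fixed timestep: apply physics at a stable, frame-rate-independent rate.
        body.AddForce(input * moveForce);
    }

    private void LateUpdate()
    {
        // Runs after all Update calls: a common spot for camera-follow logic.
    }
}
```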
Update: In your game application, you’ll often update each object’s behavior one frame at a time. While you can manually recreate this in Unity, the MonoBehaviour class does this automatically. Use the appropriate Update, LateUpdate, or FixedUpdate methods to modify your GameObjects and components to one tick of the game clock.

Prototype: Often you need to copy objects without affecting the original. This creational pattern solves the problem of duplicating and cloning an object to make other objects similar to itself. This way you avoid defining a separate class to spawn every type of object in your game.

    Unity’s Prefab system implements a form of prototyping for GameObjects. This allows you to duplicate a template object complete with its components. Override specific properties to create Prefab Variants or nest Prefabs inside other Prefabs to create hierarchies. Use a special Prefab editing mode to edit Prefabs in isolation or in context.
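At runtime, cloning that template is a one-liner; a minimal, hypothetical spawner might look like this (EnemySpawner and its field are our names):

```csharp
using UnityEngine;

public class EnemySpawner : MonoBehaviour
{
    // The prefab is the prototype: a template object duplicated at runtime.
    [SerializeField] private GameObject enemyPrefab;

    public GameObject Spawn(Vector3 position)
    {
        // Instantiate clones the template, components and all.
        return Instantiate(enemyPrefab, position, Quaternion.identity);
    }
}
```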
Component: Most people working in Unity know this pattern. Instead of creating large classes with multiple responsibilities, build smaller components that each do one thing. If you use composition to pick and choose components, you can combine them for complex behavior. Add Rigidbody and Collider components for physics, or a MeshFilter and MeshRenderer for 3D geometry. Each GameObject is only as rich and unique as its collection of components.

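Composition is usually assembled in the Editor, but the same idea expressed in code makes the pattern visible (a small demonstration sketch; CrateFactory is our name):

```csharp
using UnityEngine;

public class CrateFactory : MonoBehaviour
{
    public GameObject CreateCrate()
    {
        // Behavior comes from the mix of small components, not one big class.
        // CreatePrimitive already adds MeshFilter, MeshRenderer, and a Collider.
        var crate = GameObject.CreatePrimitive(PrimitiveType.Cube);
        crate.AddComponent<Rigidbody>();   // opt in to physics
        crate.AddComponent<AudioSource>(); // opt in to sound
        return crate;
    }
}
```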
Both the e-book and a sample project on the use of design patterns are available now to download for free. Review the examples and decide which design pattern best suits your project. As you gain experience with them, you’ll recognize how and when they can enhance your development process. As always, we encourage you to visit the forum thread and let us know what you think of the e-book and sample.
    #level #your #code #with #game
    Level up your code with game programming patterns
    If you have experience with object-oriented programming languages, then you’ve likely heard of the SOLID principles, MVP, singleton, factory, and observer patterns. Our new e-book highlights best practices for using these principles and patterns to create scalable game code architecture in your Unity project.For every software design issue you encounter, a thousand developers have been there before. Though you can’t always ask them directly for advice, you can learn from their decisions through design patterns.By implementing common, game programming design patterns in your Unity project, you can efficiently build and maintain a clean, organized, and readable codebase, which in turn, creates a solid foundation for scaling your game, development team, and business.In our community, we often hear that it can be intimidating to learn how to incorporate design patterns and principles, such as SOLID and KISS, into daily development. That’s why our free e-book, Level up your code with game programming patterns, explains well-known design patterns and shares practical examples for using them in your Unity project.Written by internal and external Unity experts, the e-book is a resource that can help expand your developer’s toolbox and accelerate your project’s success. Read on for a preview of what the guide entails.Design patterns are general solutions to common problems found in software engineering. These aren’t finished solutions you can copy and paste into your code, but extra tools that can help you build larger, scalable applications when used correctly.By integrating patterns consistently into your project, you can improve code readability and make your codebase cleaner. Design patterns not only reduce refactoring and the time spent testing, they speed up onboarding and development processes.However, every design pattern comes with tradeoffs, whether that means additional structures to maintain or more setup at the beginning. You’ll need to do a cost-benefit assessment to determine if the advantage justifies the extra work required. Of course, this assessment will vary based on your project.KISS stands for “keep it simple, stupid.” The aim of this principle is to avoid unnecessary complexity in a system, as simplicity helps drive greater levels of user acceptance and interaction.Note that “simple” does not equate to “easy.” Making something simple means making it focused. While you can create the same functionality without the patterns, something fast and easy doesn’t necessarily result in something simple.If you’re unsure whether a pattern applies to your particular issue, you might hold off until it feels like a more natural fit. Don’t use a pattern because it’s new or novel to you. Use it when you need it.It’s in this spirit that the e-book was created. Keep the guide handy as a source of inspiration for new ways of organizing your code – not as a strict set of rules for you to follow.Now, let’s turn to some of the key software design principles.SOLID is a mnemonic acronym for five core fundamentals of software design. You can think of them as five basic rules to keep in mind while coding, to ensure that object-oriented designs remain flexible and maintainable.The SOLID principles were first introduced by Robert C. Martin in the paper, Design Principles and Design Patterns. 
First published in 2000, the principles described are still applicable today, and to C# scripting in Unity:Single responsibility states that each module, class, or function is responsible for one thing and encapsulates only that part of the logic.Open-closed states that classes must be open for extension but closed for modification; that means structuring your classes to create new behavior without modifying the original code.Liskov substitution states that derived classes must be substitutable for their base class when using inheritance.Interface segregation states that no client should be forced to depend on methods it does not use. Clients should only implement what they need.Dependency inversion states that high-level modules should not import anything directly from low-level modules. Both should depend on abstractions.In the e-book, we provide illustrated examples of each principle with clear explanations for using them in Unity. In some cases, adhering to SOLID can result in additional work up front. You may need to refactor some of your functionality into abstractions or interfaces, but there is often a payoff in long-term savings.The principles have dominated software design for nearly two decades at the enterprise level because they’re so well-suited to large applications that scale. If you’re unsure about how to use them, refer back to the KISS principle. Keep it simple, and don’t try to force the principles into your scripts just for the sake of doing so. Let them organically work themselves into place through necessity.If you’re interested in learning more, check out the SOLID presentation from Unite Austin 2017 by Dan Sagmiller of Productive Edge.What’s the difference between a design principle and a design pattern? One way to answer that question is to consider SOLID as a framework for, or a foundational approach to, writing object-oriented code. While design patterns are solutions or tools you can implement to avoid everyday software problems, remember that they’re not off-the-shelf recipes – or for that matter, algorithms with specific steps for achieving specific results.A design pattern can be thought of as a blueprint. It’s a general plan that leaves the actual construction up to you. For instance, two programs can follow the same pattern but involve very different code.When developers encounter the same problem in the wild, many of them will inevitably come up with similar solutions. Once a solution is repeated enough times, someone might “discover” a pattern and formally give it a name.Many of today’s software design patterns stem from the seminal work, Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. This book unpacks 23 such patterns identified in a variety of day-to-day applications.The original authors are often referred to as the “Gang of Four”,and you’ll also hear the original patterns dubbed the GoF patterns. While the examples cited are mostly in C++, you can apply their ideas to any object-oriented language, such as C#.Since the Gang of Four originally published Design Patterns in 1994, developers have since established dozens more object-oriented patterns in a variety of fields, including game development.While you can work as a game programmer without studying design patterns, learning them will help you become a better developer. 
After all, design patterns are labeled as such because they’re common solutions to well-known problems.Software engineers rediscover them all the time in the normal course of development. You may have already implemented some of these patterns unwittingly.Train yourself to look for them. Doing this can help you:Learn object-oriented programming: Design patterns aren’t secrets buried in an esoteric StackOverflow post. They are common ways to overcome everyday hurdles in development. They can inform you of how many other developers have approached the same issue – remember, even if you’re not using patterns, someone else is.Talk to other developers: Patterns can serve as a shorthand when trying to communicate as a team. Mention the “command pattern” or “object pool” and experienced Unity developers will know what you’re trying to implement.Explore new frameworks:When you import a built-in package or something from the Asset Store, inevitably you’ll stumble onto one or more patterns discussed here. Recognizing design patterns will help you understand how a new framework operates, as well as the thought process involved in its creation.As indicated earlier, not all design patterns apply to every game application. Don’t go looking for them with Maslow’s hammer; otherwise, you might only find nails.Like any other tool, a design pattern’s usefulness depends on context. Each one provides a benefit in certain situations and also comes with its share of drawbacks. Every decision in software development comes with compromises.Are you generating a lot of GameObjects on the fly? Does it impact your performance? Can restructuring your code fix that? Be aware of these design patterns, and when the time is right, pull them from your gamedev bag of tricks to solve the problem at hand.In addition to the Gang of Four’s Design Patterns, Game Programming Patterns by Robert Nystrom is another standout resource, currently available for free as a web-based edition. The author details a variety of software patterns in a no-nonsense manner.In our new e-book, you can dive into the sections that explain common design patterns, such as factory, object pool, singleton, command, state, and observer patterns, plus the Model View Presenter, among others. Each section explains the pattern along with its pros and cons, and provides an example of how to implement it in Unity so you can optimize its usage in your project.Unity already implements several established gamedev patterns, saving you the trouble of writing them yourself. These include:Game loop: At the core of all games is an infinite loop that must function independently of clock speed, since the hardware that powers a game application can vary greatly. To account for computers of different speeds, game developers often need to use a fixed timestepand a variable timestep where the engine measures how much time has passed since the previous frame. Unity takes care of this, so you don’t have to implement it yourself. You only need to manage gameplay using MonoBehaviour methods like Update, LateUpdate, and FixedUpdate. Update: In your game application, you’ll often update each object’s behavior one frame at a time. While you can manually recreate this in Unity, the MonoBehaviour class does this automatically. Use the appropriate Update, LateUpdate, or FixedUpdate methods to modify your GameObjects and components to one tick of the game clock.Prototype: Often you need to copy objects without affecting the original. 
This creational pattern solves the problem of duplicating and cloning an object to make other objects similar to itself. This way you avoid defining a separate class to spawn every type of object in your game. Unity’s Prefab system implements a form of prototyping for GameObjects. This allows you to duplicate a template object complete with its components. Override specific properties to create Prefab Variants or nest Prefabs inside other Prefabs to create hierarchies. Use a special Prefab editing mode to edit Prefabs in isolation or in context. Component:Most people working in Unity know this pattern. Instead of creating large classes with multiple responsibilities, build smaller components that each do one thing. If you use composition to pick and choose components, you can combine them for complex behavior. Add Rigidbody and Collider components for physics, or a MeshFilter and MeshRenderer for 3D geometry. Each GameObject is only as rich and unique as its collection of components.Both the e-book and a sample project on the use of design patterns are available now to download for free. Review the examples and decide which design pattern best suits your project. As you gain experience with them, you’ll recognize how and when they can enhance your development process. As always, we encourage you to visit the forum thread and let us know what you think of the e-book and sample. #level #your #code #with #game
Both the e-book and a sample project on the use of design patterns are available now to download for free. Review the examples and decide which design pattern best suits your project. As you gain experience with them, you’ll recognize how and when they can enhance your development process. As always, we encourage you to visit the forum thread and let us know what you think of the e-book and sample.
    UNITY.COM
    Level up your code with game programming patterns
  • What I learned from my first few months with a Bambu Lab A1 3D printer, part 1

    to 3d or not to 3d

    What I learned from my first few months with a Bambu Lab A1 3D printer, part 1

    One neophyte's first steps into the wide world of 3D printing.

    Andrew Cunningham



    May 22, 2025 7:30 am


    The hotend on my Bambu Lab A1 3D printer. Credit: Andrew Cunningham


    For a couple of years now, I've been trying to find an excuse to buy a decent 3D printer.
    Friends and fellow Ars staffers who had them would gush about them at every opportunity, talking about how useful they can be and how much can be printed once you get used to the idea of being able to create real, tangible objects with a little time and a few bucks' worth of plastic filament.
    But I could never quite imagine myself using one consistently enough to buy one. Then, this past Christmas, my wife forced the issue by getting me a Bambu Lab A1 as a present.
    Since then, I've been tinkering with the thing nearly daily, learning more about what I've gotten myself into and continuing to find fun and useful things to print. I've gathered a bunch of thoughts about my learning process here, not because I think I'm breaking new ground but to serve as a blueprint for anyone who has been on the fence about Getting Into 3D Printing. "Hyperfixating on new hobbies" is one of my go-to coping mechanisms during times of stress and anxiety, and 3D printing has turned out to be the perfect combination of fun, practical, and time-consuming.
    Getting to know my printer
    My wife settled on the Bambu A1 because it's a larger version of the A1 Mini, Wirecutter's main 3D printer pick at the time (she also noted it was "hella on sale"). Other reviews she read noted that it's beginner-friendly, easy to use, and fun to tinker with, and it has a pretty active community for answering questions, all assessments I agree with so far.
    Note that this research was done some months before Bambu earned bad headlines because of firmware updates that some users believe will lead to a more locked-down ecosystem. This is a controversy I understand—3D printers are still primarily the realm of DIYers and tinkerers, people who are especially sensitive to the closing of open ecosystems. But as a beginner, I'm already leaning mostly on the first-party tools and built-in functionality to get everything going, so I'm not really experiencing the sense of having "lost" features I was relying on, and any concerns I did have are mostly addressed by Bambu's update about its update.

    I hadn't really updated my preconceived notions of what home 3D printing was since its primordial days, something Ars has been around long enough to have covered in some depth. I was wary of getting into yet another hobby where, like building your own gaming PC, fiddling with and maintaining the equipment is part of the hobby. Bambu's printers (and those like them) are capable of turning out fairly high-quality prints with minimal fuss, and nothing will draw you into the hobby faster than a few successful prints.

    Basic terminology

    Extrusion-based 3D printers (also sometimes called "FDM," for "fused deposition modeling") work by depositing multiple thin layers of melted plastic filament on a heated bed. Credit: Andrew Cunningham

    First things first: The A1 is what’s called an “extrusion” printer, meaning that it functions by melting a long, slim thread of plastic (filament) and then depositing this plastic onto a build plate seated on top of a heated bed in tens, hundreds, or even thousands of thin layers. In the manufacturing world, this is also called “fused deposition modeling,” or FDM. This layer-based extrusion gives 3D-printed objects their distinct ridged look and feel and is also why a 3D-printed piece of plastic is less detailed-looking and weaker than an injection-molded piece of plastic like a Lego brick.
    The other readily available home 3D printing technology takes liquid resin and uses UV light to harden it into a plastic structure, using a process called “stereolithography” (SLA). You can get inexpensive resin printers in the same price range as the best cheap extrusion printers, and the SLA process can create much more detailed, smooth-looking, and watertight 3D prints (it’s popular for making figurines for tabletop games). Some downsides are that the print beds in these printers are smaller, resin is a bit fussier than filament, and multi-color printing isn’t possible.
    There are two main types of home extrusion printers. The Bambu A1 is a Cartesian printer, or in more evocative and colloquial terms, a "bed slinger." In these, the head of the printer can move up and down on one or two rails and from side to side on another rail. But the print bed itself has to move forward and backward to "move" the print head on the Y axis.

    More expensive home 3D printers, including higher-end Bambu models in the P- and X-series, are "CoreXY" printers, which include a third rail or set of rails (and more Z-axis rails) that allow the print head to travel in all three directions.
    The A1 is also an "open-bed" printer, which means that it ships without an enclosure. Closed-bed printers are more expensive, but they can maintain a more consistent temperature inside and help contain the fumes from the melted plastic. They can also reduce the amount of noise coming from your printer.
    Together, the downsides of a bed-slinger (more wobble for tall prints, and more opportunities for parts of your print to come loose from the plate) and an open-bed printer (worse temperature, fume, and dust control) mainly just mean that the A1 isn't well-suited for printing certain types of plastic and has more potential points of failure for large or delicate prints. My experience with the A1 has been mostly positive now that I know about those limitations, but the printer you buy could easily change based on what kinds of things you want to print with it.
    Setting up
    Overall, the setup process was reasonably simple, at least for someone who has been building PCs and repairing small electronics for years now. It's not quite the same as the "take it out of the box, remove all the plastic film, and plug it in" process of setting up a 2D printer, but the directions in the start guide are well-illustrated and clearly written; if you can put together prefab IKEA furniture, that's roughly the level of complexity we're talking about here. The fact that delicate electronics are involved might still make it more intimidating for the non-technical, but figuring out what goes where is fairly simple.

    The only mistake I made while setting the printer up involved the surface I initially tried to put it on. I used a spare end table, but as I discovered during the printer's calibration process, the herky-jerky movement of the bed and print head was way too much for a little table to handle. "Stable enough to put a lamp on" is not the same as "stable enough to put a constantly wobbling contraption" on—obvious in retrospect, but my being new to this is why this article exists.
    After some office rearrangement, I was able to move the printer to my sturdy L-desk full of cables and other doodads to serve as ballast. This surface was more than sturdy enough to let the printer complete its calibration process—and sturdy enough not to transfer the printer's every motion to our kid's room below, a boon for when I'm trying to print something after he has gone to bed.
    The first-party Bambu apps for sending files to the printer are Bambu Handy (for iOS/Android, with no native iPad version) and Bambu Studio (for Windows, macOS, and Linux). Handy works OK for sending ready-made models from MakerWorld (a mostly community-driven but Bambu-developed repository for 3D-printable files) and for monitoring prints once they've started. But I'll mostly be relaying my experience with Bambu Studio, a much more fully featured app. Neither app requires sign-in, at least not yet, but the path of least resistance is to sign into your printer and apps with the same account to enable easy communication and syncing.

    Bambu Studio: A primer
    Bambu Studio is what's known in the hobby as a "slicer," software that takes existing 3D models output by common CAD programs (Tinkercad, FreeCAD, SolidWorks, Autodesk Fusion, and others) and converts them into a set of specific movement instructions that the printer can follow. Bambu Studio allows you to do some basic modification of existing models—cloning parts, resizing them, adding supports for overhanging bits that would otherwise droop down, and a few other functions—but it's primarily there for opening files, choosing a few settings, and sending them off to the printer to become tangible objects.

    Bambu Studio isn't the most approachable application, but if you've made it this far, it shouldn't be totally beyond your comprehension. For first-time setup, you'll choose your model of printer (all Bambu models and a healthy selection of third-party printers are officially supported), leave the filament settings as they are, and sign in if you want to use Bambu's cloud services. These sync printer settings and keep track of the models you save and download from MakerWorld, but a non-cloud LAN mode is available for the Bambu skeptics and privacy-conscious.
    For any newbie, pretty much all you need to do is connect your printer, open a .3MF or .STL file you've downloaded from MakerWorld or elsewhere, select your filament from the drop-down menu, click "slice plate," and then click "print." Things like the default 0.4 mm nozzle size and Bambu's included Textured PEI Build Plate are generally already factored in, though you may need to double-check these selections when you open a file for the first time.
    When you slice your build plate for the first time, the app will spit a pile of numbers back at you. There are two important ones for 3D printing neophytes to track. One is the "total filament" figure, which tells you how many grams of filament the printer will use to make your model (filament typically comes in 1 kg spools, and the printer generally won't track usage for you, so if you want to avoid running out in the middle of the job, you may want to keep track of what you're using). The second is the "total time" figure, which tells you how long the entire print will take from the first calibration steps to the end of the job.
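    For rough budgeting, those two numbers are all you need. As an illustration (hypothetical figures, not from this article): if a sliced model reports 52 g of total filament and the 1 kg spool cost $20, the material cost works out to 52/1000 × $20, or about $1.04, and the job consumes a bit over 5 percent of the spool.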

    Image gallery (credit: Andrew Cunningham):

    • Selecting your filament and/or temperature presets. If you have the Automatic Material System (AMS), this is also where you'll manage multicolor printing.
    • The main way to tweak print quality is to adjust the height of the layers that the A1 lays down.
    • Adding some additional infill can add some strength to prints, though 15 percent usually gives a decent amount of strength without overusing filament.
    • For some prints, scaling them up or down a bit can make them fit your needs better.
    • For items that are small enough, you can print a few at once using the clone function. For filaments with a gradient, this also makes the gradient effect more pronounced.
    • Bambu Studio estimates the amount of filament you'll use and the amount of time a print will take. Filament usually comes in 1 kg spools.

    When selecting filament, people who stick to Bambu's first-party spools will have the easiest time, since optimal settings are already programmed into the app. But I've had almost zero trouble with the "generic" presets and the spools of generic Inland-branded filament I've bought from our local Micro Center, at least when sticking to PLA (polylactic acid, the most common and generally the easiest-to-print of the different kinds of filament you can buy). But we'll dive deeper into plastics in part 2 of this series.

    I won't pretend I'm skilled enough to do a deep dive on every single setting that Bambu Studio gives you access to, but here are a few of the odds and ends I've found most useful:

    The "clone" function, accessed by right-clicking an object and clicking "clone." Useful if you'd like to fit several copies of an object on the build plate at once, especially if you're using a filament with a color gradient and you'd like to make the gradient effect more pronounced by spreading it out over a bunch of prints.
    The "arrange all objects" function, the fourth button from the left under the "prepare" tab. Did you just clone a bunch of objects? Did you delete an individual object from a model because you didn't need to print that part? Bambu Studio will arrange everything on your build plate to optimize the use of space.
    Layer height, located in the sidebar directly beneath "Process" (which is directly underneath the area where you select your filament). For many functional parts, the standard 0.2 mm layer height is fine. Going with thinner layer heights adds to the printing time but can preserve more detail on prints that have a lot of it and slightly reduce the visible layer lines that give 3D-printed objects their distinct look (for better or worse). Thicker layer heights do the opposite, slightly reducing the amount of time a model takes to print but preserving less detail.
    Infill percentage and wall loops, located in the Strength tab beneath the "Process" sidebar item. For most everyday prints, you don't need to worry about messing with these settings much; the infill percentage determines the amount of your print's interior that's plastic and the part that's empty space (15 percent is a good happy medium most of the time between maintaining rigidity and overusing plastic). The number of wall loops determines how many layers the printer uses for the outside surface of the print, with more walls using more plastic but also adding a bit of extra strength and rigidity to functional prints that need it (think hooks, hangers, shelves and brackets, and other things that will be asked to bear some weight). A rough worked example follows this list.
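    As a rough mental model for those two settings (a simplification with hypothetical numbers; the slicer reports real totals when you re-slice): filament spent on infill scales roughly linearly with the infill percentage, and each extra wall loop adds about one more perimeter of plastic per layer. So if a 40 g print owed roughly 10 g to its 15 percent infill, re-slicing at 30 percent infill would land somewhere near 50 g total, with print time growing to match.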

    My first prints

    A humble start: My very first print was a wall bracket for the remote for my office's ceiling fan. Credit: Andrew Cunningham

    When given the opportunity to use a 3D printer, my mind went first to aggressively practical stuff—prints for organizing the odds and ends that eternally float around my office or desk.
    When we moved into our current house, only one of the bedrooms had a ceiling fan installed. I put up remote-controlled ceiling fans in all the other bedrooms myself. And all those fans, except one, came with a wall-mounted caddy to hold the remote control. The first thing I decided to print was a wall-mounted holder for that remote control.
    MakerWorld is just one of several resources for ready-made 3D-printable files, but the ease with which I found a Hampton Bay Ceiling Fan Remote Wall Mount is pretty representative of my experience so far. At this point in the life cycle of home 3D printing, if you can think about it and it's not a terrible idea, you can usually find someone out there who has made something close to what you're looking for.
    I loaded up my black roll of PLA plastic—generally the cheapest, easiest-to-buy, easiest-to-work-with kind of 3D printer filament, though not always the best for prints that need more structural integrity—into the basic roll-holder that comes with the A1, downloaded that 3MF file, opened it in Bambu Studio, sliced the file, and hit print. It felt like there should have been extra steps in there somewhere. But that's all it took to kick the printer into action.
    After a few minutes of warmup—by default, the A1 has a thorough pre-print setup process where it checks the levelness of the bed and tests the flow rate of your filament for a few minutes before it begins printing anything—the nozzle started laying plastic down on my build plate, and inside of an hour or so, I had my first 3D-printed object.

    Print No. 2 was another wall bracket, this time for my gaming PC's gamepad and headset. Credit: Andrew Cunningham

    It wears off a bit after you successfully execute a print, but I still haven't quite lost the feeling of magic of printing out a fully 3D object that comes off the plate and then just exists in space along with me and all the store-bought objects in my office.
    The remote holder was, as I'd learn, a fairly simple print made under near-ideal conditions. But it was an easy success to start off with, and that success can help embolden you and draw you in, inviting more printing and more experimentation. And the more you experiment, the more you inevitably learn.
    This time, I talked about what I learned about basic terminology and the different kinds of plastics most commonly used by home 3D printers. Next time, I'll talk about some of the pitfalls I ran into after my initial successes, what I learned about using Bambu Studio, what I've learned about fine-tuning settings to get good results, and a whole bunch of 3D-printable upgrades and mods available for the A1.

    Andrew Cunningham
    Senior Technology Reporter


    Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

    ARSTECHNICA.COM
    What I learned from my first few months with a Bambu Lab A1 3D printer, part 1
    to 3d or not to 3d What I learned from my first few months with a Bambu Lab A1 3D printer, part 1 One neophyte's first steps into the wide world of 3D printing. Andrew Cunningham – May 22, 2025 7:30 am | 21 The hotend on my Bambu Lab A1 3D printer. Credit: Andrew Cunningham The hotend on my Bambu Lab A1 3D printer. Credit: Andrew Cunningham Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only   Learn more For a couple of years now, I've been trying to find an excuse to buy a decent 3D printer. Friends and fellow Ars staffers who had them would gush about them at every opportunity, talking about how useful they can be and how much can be printed once you get used to the idea of being able to create real, tangible objects with a little time and a few bucks' worth of plastic filament. But I could never quite imagine myself using one consistently enough to buy one. Then, this past Christmas, my wife forced the issue by getting me a Bambu Lab A1 as a present. Since then, I've been tinkering with the thing nearly daily, learning more about what I've gotten myself into and continuing to find fun and useful things to print. I've gathered a bunch of thoughts about my learning process here, not because I think I'm breaking new ground but to serve as a blueprint for anyone who has been on the fence about Getting Into 3D Printing. "Hyperfixating on new hobbies" is one of my go-to coping mechanisms during times of stress and anxiety, and 3D printing has turned out to be the perfect combination of fun, practical, and time-consuming. Getting to know my printer My wife settled on the Bambu A1 because it's a larger version of the A1 Mini, Wirecutter's main 3D printer pick at the time (she also noted it was "hella on sale"). Other reviews she read noted that it's beginner-friendly, easy to use, and fun to tinker with, and it has a pretty active community for answering questions, all assessments I agree with so far. Note that this research was done some months before Bambu earned bad headlines because of firmware updates that some users believe will lead to a more locked-down ecosystem. This is a controversy I understand—3D printers are still primarily the realm of DIYers and tinkerers, people who are especially sensitive to the closing of open ecosystems. But as a beginner, I'm already leaning mostly on the first-party tools and built-in functionality to get everything going, so I'm not really experiencing the sense of having "lost" features I was relying on, and any concerns I did have are mostly addressed by Bambu's update about its update. I hadn't really updated my preconceived notions of what home 3D printing was since its primordial days, something Ars has been around long enough to have covered in some depth. I was wary of getting into yet another hobby where, like building your own gaming PC, fiddling with and maintaining the equipment is part of the hobby. Bambu's printers (and those like them) are capable of turning out fairly high-quality prints with minimal fuss, and nothing will draw you into the hobby faster than a few successful prints. Basic terminology Extrusion-based 3D printers (also sometimes called "FDM," for "fused deposition modeling") work by depositing multiple thin layers of melted plastic filament on a heated bed. 
Credit: Andrew Cunningham First things first: The A1 is what’s called an “extrusion” printer, meaning that it functions by melting a long, slim thread of plastic (filament) and then depositing this plastic onto a build plate seated on top of a heated bed in tens, hundreds, or even thousands of thin layers. In the manufacturing world, this is also called “fused deposition modeling,” or FDM. This layer-based extrusion gives 3D-printed objects their distinct ridged look and feel and is also why a 3D printed piece of plastic is less detailed-looking and weaker than an injection-molded piece of plastic like a Lego brick. The other readily available home 3D printing technology takes liquid resin and uses UV light to harden it into a plastic structure, using a process called “stereolithography” (SLA). You can get inexpensive resin printers in the same price range as the best cheap extrusion printers, and the SLA process can create much more detailed, smooth-looking, and watertight 3D prints (it’s popular for making figurines for tabletop games). Some downsides are that the print beds in these printers are smaller, resin is a bit fussier than filament, and multi-color printing isn’t possible. There are two main types of home extrusion printers. The Bambu A1 is a Cartesian printer, or in more evocative and colloquial terms, a "bed slinger." In these, the head of the printer can move up and down on one or two rails and from side to side on another rail. But the print bed itself has to move forward and backward to "move" the print head on the Y axis. More expensive home 3D printers, including higher-end Bambu models in the P- and X-series, are "CoreXY" printers, which include a third rail or set of rails (and more Z-axis rails) that allow the print head to travel in all three directions. The A1 is also an "open-bed" printer, which means that it ships without an enclosure. Closed-bed printers are more expensive, but they can maintain a more consistent temperature inside and help contain the fumes from the melted plastic. They can also reduce the amount of noise coming from your printer. Together, the downsides of a bed-slinger (introducing more wobble for tall prints, more opportunities for parts of your print to come loose from the plate) and an open-bed printer (worse temperature, fume, and dust control) mainly just mean that the A1 isn't well-suited for printing certain types of plastic and has more potential points of failure for large or delicate prints. My experience with the A1 has been mostly positive now that I know about those limitations, but the printer you buy could easily change based on what kinds of things you want to print with it. Setting up Overall, the setup process was reasonably simple, at least for someone who has been building PCs and repairing small electronics for years now. It's not quite the same as the "take it out of the box, remove all the plastic film, and plug it in" process of setting up a 2D printer, but the directions in the start guide are well-illustrated and clearly written; if you can put together prefab IKEA furniture, that's roughly the level of complexity we're talking about here. The fact that delicate electronics are involved might still make it more intimidating for the non-technical, but figuring out what goes where is fairly simple. The only mistake I made while setting the printer up involved the surface I initially tried to put it on. 
I used a spare end table, but as I discovered during the printer's calibration process, the herky-jerky movement of the bed and print head was way too much for a little table to handle. "Stable enough to put a lamp on" is not the same as "stable enough to put a constantly wobbling contraption" on—obvious in retrospect, but my being new to this is why this article exists. After some office rearrangement, I was able to move the printer to my sturdy L-desk full of cables and other doodads to serve as ballast. This surface was more than sturdy enough to let the printer complete its calibration process—and sturdy enough not to transfer the printer's every motion to our kid's room below, a boon for when I'm trying to print something after he has gone to bed. The first-party Bambu apps for sending files to the printer are Bambu Handy (for iOS/Android, with no native iPad version) and Bambu Studio (for Windows, macOS, and Linux). Handy works OK for sending ready-made models from MakerWorld (a mostly community-driven but Bambu-developer repository for 3D printable files) and for monitoring prints once they've started. But I'll mostly be relaying my experience with Bambu Studio, a much more fully featured app. Neither app requires sign-in, at least not yet, but the path of least resistance is to sign into your printer and apps with the same account to enable easy communication and syncing. Bambu Studio: A primer Bambu Studio is what's known in the hobby as a "slicer," software that takes existing 3D models output by common CAD programs (Tinkercad, FreeCAD, SolidWorks, Autodesk Fusion, others) and converts them into a set of specific movement instructions that the printer can follow. Bambu Studio allows you to do some basic modification of existing models—cloning parts, resizing them, adding supports for overhanging bits that would otherwise droop down, and a few other functions—but it's primarily there for opening files, choosing a few settings, and sending them off to the printer to become tangible objects. Bambu Studio isn't the most approachable application, but if you've made it this far, it shouldn't be totally beyond your comprehension. For first-time setup, you'll choose your model of printer (all Bambu models and a healthy selection of third-party printers are officially supported), leave the filament settings as they are, and sign in if you want to use Bambu's cloud services. These sync printer settings and keep track of the models you save and download from MakerWorld, but a non-cloud LAN mode is available for the Bambu skeptics and privacy-conscious. For any newbie, pretty much all you need to do is connect your printer, open a .3MF or .STL file you've downloaded from MakerWorld or elsewhere, select your filament from the drop-down menu, click "slice plate," and then click "print." Things like the default 0.4 mm nozzle size and Bambu's included Textured PEI Build Plate are generally already factored in, though you may need to double-check these selections when you open a file for the first time. When you slice your build plate for the first time, the app will spit a pile of numbers back at you. There are two important ones for 3D printing neophytes to track. One is the "total filament" figure, which tells you how many grams of filament the printer will use to make your model (filament typically comes in 1 kg spools, and the printer generally won't track usage for you, so if you want to avoid running out in the middle of the job, you may want to keep track of what you're using). 
The second is the "total time" figure, which tells you how long the entire print will take from the first calibration steps to the end of the job. Selecting your filament and/or temperature presets. If you have the Automatic Material System (AMS), this is also where you'll manage multicolor printing. Andrew Cunningham Selecting your filament and/or temperature presets. If you have the Automatic Material System (AMS), this is also where you'll manage multicolor printing. Andrew Cunningham The main way to tweak print quality is to adjust the height of the layers that the A1 lays down. Andrew Cunningham The main way to tweak print quality is to adjust the height of the layers that the A1 lays down. Andrew Cunningham Adding some additional infill can add some strength to prints, though 15 percent usually gives a decent amount of strength without overusing filament. Andrew Cunningham Adding some additional infill can add some strength to prints, though 15 percent usually gives a decent amount of strength without overusing filament. Andrew Cunningham The main way to tweak print quality is to adjust the height of the layers that the A1 lays down. Andrew Cunningham Adding some additional infill can add some strength to prints, though 15 percent usually gives a decent amount of strength without overusing filament. Andrew Cunningham For some prints, scaling them up or down a bit can make them fit your needs better. Andrew Cunningham For items that are small enough, you can print a few at once using the clone function. For filaments with a gradient, this also makes the gradient effect more pronounced. Andrew Cunningham Bambu Studio estimates the amount of filament you'll use and the amount of time a print will take. Filament usually comes in 1 kg spools. Andrew Cunningham When selecting filament, people who stick to Bambu's first-party spools will have the easiest time, since optimal settings are already programmed into the app. But I've had almost zero trouble with the "generic" presets and the spools of generic Inland-branded filament I've bought from our local Micro Center, at least when sticking to PLA (polylactic acid, the most common and generally the easiest-to-print of the different kinds of filament you can buy). But we'll dive deeper into plastics in part 2 of this series. I won't pretend I'm skilled enough to do a deep dive on every single setting that Bambu Studio gives you access to, but here are a few of the odds and ends I've found most useful: The "clone" function, accessed by right-clicking an object and clicking "clone." Useful if you'd like to fit several copies of an object on the build plate at once, especially if you're using a filament with a color gradient and you'd like to make the gradient effect more pronounced by spreading it out over a bunch of prints. The "arrange all objects" function, the fourth button from the left under the "prepare" tab. Did you just clone a bunch of objects? Did you delete an individual object from a model because you didn't need to print that part? Bambu Studio will arrange everything on your build plate to optimize the use of space. Layer height, located in the sidebar directly beneath "Process" (which is directly underneath the area where you select your filament. For many functional parts, the standard 0.2 mm layer height is fine. 
Layer height, located in the sidebar directly beneath "Process" (which is directly underneath the area where you select your filament). For many functional parts, the standard 0.2 mm layer height is fine. Going with thinner layer heights adds to the printing time but can preserve more detail on prints that have a lot of it and slightly reduce the visible layer lines that give 3D-printed objects their distinct look (for better or worse). Thicker layer heights do the opposite, slightly reducing the amount of time a model takes to print but preserving less detail.

Infill percentage and wall loops, located in the Strength tab beneath the "Process" sidebar item. For most everyday prints, you don't need to worry about messing with these settings much; the infill percentage determines how much of your print's interior is plastic and how much is empty space (15 percent is usually a happy medium between maintaining rigidity and overusing plastic). The number of wall loops determines how many layers the printer uses for the outside surface of the print, with more walls using more plastic but also adding a bit of extra strength and rigidity to functional prints that need it (think hooks, hangers, shelves and brackets, and other things that will be asked to bear some weight).
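Both the slicer's filament estimate and the layer-height tradeoff come down to arithmetic you can sanity-check yourself. Here's a minimal Python sketch—nothing Bambu Studio itself produces, and every figure in it is a hypothetical example—showing how you might track a 1 kg spool across prints:

```python
# Minimal spool-tracking sketch. The gram figures stand in for the
# "total filament" estimates Bambu Studio reports after slicing;
# all values here are hypothetical examples.

SPOOL_GRAMS = 1000.0  # a standard 1 kg spool

def grams_remaining(print_jobs, spool=SPOOL_GRAMS):
    """Subtract each sliced job's filament estimate from the spool."""
    return spool - sum(print_jobs)

jobs = [42.0, 87.0, 15.0]  # grams used by three finished prints
left = grams_remaining(jobs)
print(f"{left:.0f} g left on the spool")

next_job = 210.0  # the slicer's estimate for the next print
if next_job > left:
    print("Warning: this print could run the spool dry.")

# The layer-height tradeoff is just division: a 40 mm tall model
# needs 200 layers at 0.2 mm but 400 at 0.1 mm, roughly doubling
# the time the printer spends laying down plastic.
for layer_height_mm in (0.28, 0.2, 0.1):
    print(f"{layer_height_mm} mm layers -> {round(40 / layer_height_mm)} layers")
```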
My first prints

A humble start: My very first print was a wall bracket for the remote for my office's ceiling fan. Credit: Andrew Cunningham

When given the opportunity to use a 3D printer, my mind went first to aggressively practical stuff—prints for organizing the odds and ends that eternally float around my office or desk. When we moved into our current house, only one of the bedrooms had a ceiling fan installed. I put up remote-controlled ceiling fans in all the other bedrooms myself. And all those fans, except one, came with a wall-mounted caddy to hold the remote control. The first thing I decided to print was a wall-mounted holder for that remote control.

MakerWorld is just one of several resources for ready-made 3D-printable files, but the ease with which I found a Hampton Bay Ceiling Fan Remote Wall Mount is pretty representative of my experience so far. At this point in the life cycle of home 3D printing, if you can think of it and it's not a terrible idea, you can usually find someone out there who has made something close to what you're looking for.

I loaded my black roll of PLA plastic—generally the cheapest, easiest-to-buy, easiest-to-work-with kind of 3D printer filament, though not always the best for prints that need more structural integrity—onto the basic roll-holder that comes with the A1, downloaded that 3MF file, opened it in Bambu Studio, sliced the file, and hit print. It felt like there should have been extra steps in there somewhere. But that's all it took to kick the printer into action. After a few minutes of warmup—by default, the A1 has a thorough pre-print setup process where it checks the levelness of the bed and tests the flow rate of your filament before it begins printing anything—the nozzle started laying plastic down on my build plate, and inside of an hour or so, I had my first 3D-printed object.

Print No. 2 was another wall bracket, this time for my gaming PC's gamepad and headset. Credit: Andrew Cunningham

It wears off a bit after you successfully execute a print, but I still haven't quite lost the feeling of magic of printing a fully 3D object that comes off the plate and then just exists in space along with me and all the store-bought objects in my office. The remote holder was, as I'd learn, a fairly simple print made under near-ideal conditions. But it was an easy success to start off with, and that success can help embolden you and draw you in, inviting more printing and more experimentation. And the more you experiment, the more you inevitably learn.

This time, I talked about what I learned about basic terminology and the different kinds of plastics most commonly used by home 3D printers. Next time, I'll talk about some of the pitfalls I ran into after my initial successes, what I learned about using Bambu Studio and fine-tuning settings to get good results, and a whole bunch of 3D-printable upgrades and mods available for the A1.

Andrew Cunningham, Senior Technology Reporter: Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech, including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.
  • This Add-On Module Turns the Flipper Zero Into a Pocket UV Radiation Scanner

    The Flipper Zero has always felt like a digital Swiss Army knife for hardware hackers – part toy, part toolkit, and a whole lot of trouble for the devices around it. Out of the box, it could spoof RFID badges, copy remotes, analyze sub-GHz signals, and poke into wireless protocols with the ease of a gadget built by a Bond villain’s favorite engineer. But where it really opened up was at the top, with those GPIO pins that transformed it from a multi-tool into a platform.
    Enter Michael Baisch’s UV Meter module. It’s a thumbnail-sized add-on that plugs directly into the Flipper’s GPIO header and does one thing extremely well: it measures ultraviolet radiation. UVA, UVB, and even UVC – rays most of us can’t see, don’t think about, but probably should. It’s like giving your Flipper Zero sunscreen awareness, which feels oddly poetic for a device usually associated with penetration testing and hacking.
    Designer: Michael Baisch

    The core of the module is the AS7331 sensor, a surprisingly sophisticated little chip that Baisch had to wrestle into submission. Reading UV data isn’t just about pulling values from registers. The AS7331 speaks in wavelengths and photodiode voltages, not clean numerical outputs. So Baisch wrote his own library to interface with it via I²C, making the sensor’s data digestible for the Flipper’s tiny 128×64 pixel screen.
    What you get is live UV index readings broken into the A, B, and C spectrums. UVA is the long-wave stuff that penetrates deep into skin, causing wrinkles and damage over time. UVB is what gives you sunburns. UVC is mostly filtered by the atmosphere, but if you’re anywhere near artificial UV sources like sterilization lamps, knowing it’s there matters.
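    For a concrete sense of what driving the AS7331 over I²C involves, here is a rough Python sketch of the same idea on a Raspberry Pi-class board, using the smbus2 library rather than Baisch's Flipper code. The I²C address, register addresses, and OSR value are stated from memory and should be treated as assumptions to verify against the AS7331 datasheet.

```python
# Rough sketch: reading the AS7331's three UV channels over I2C with
# smbus2. This is NOT Michael Baisch's library; the register addresses
# and OSR value below are assumptions -- verify against the datasheet.
from smbus2 import SMBus

AS7331_ADDR = 0x74   # default I2C address (strap pins can change it)
REG_OSR   = 0x00     # operational state register (assumed)
REG_MRES1 = 0x02     # UVA measurement result (assumed)
REG_MRES2 = 0x03     # UVB measurement result (assumed)
REG_MRES3 = 0x04     # UVC measurement result (assumed)

def read_u16(bus, reg):
    """Read one 16-bit little-endian result register."""
    lo, hi = bus.read_i2c_block_data(AS7331_ADDR, reg, 2)
    return lo | (hi << 8)

with SMBus(1) as bus:
    # Switch the chip into measurement mode (value is an assumption).
    bus.write_byte_data(AS7331_ADDR, REG_OSR, 0x83)
    counts = {
        "UVA": read_u16(bus, REG_MRES1),
        "UVB": read_u16(bus, REG_MRES2),
        "UVC": read_u16(bus, REG_MRES3),
    }
    # Raw counts become irradiance via a gain- and integration-time-
    # dependent factor from the datasheet, left symbolic here.
    print(counts)
```

    From there, getting to a UV index is a matter of applying the datasheet's scale factors and the standard definition of the index as erythemally weighted irradiance divided by 25 mW/m² – presumably the kind of conversion the app performs before drawing anything to the Flipper's screen.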

    And the module doesn’t just spit out raw numbers. Baisch paid attention to the UX, cramming a clean, usable interface into the Flipper’s minimal display. There’s even a wiring guide built into the app for DIYers who’d rather solder their own sensor than buy the full PCB. But for the rest of us, he’s made the hardware files public, so you can 3D print, fab, or Tindie your way into having a plug-and-play UV monitor on your Flipper.
    Sure, it’s not as headline-grabbing as cloning key fobs or brute-forcing infrared signals, but it taps into something arguably more relevant. UV exposure is tied to everything from cancer risk to sleep patterns. Being able to measure it on a whim (especially using a tool originally designed for digital mischief) is a compelling narrative shift.

    It’s another reminder that the Flipper Zero is an absolute hardware chameleon capable of so much more than we know. With every new module, it finds fresh territory to explore. Today it’s the sun. Tomorrow? Who knows. Maybe something in the air, the soil, the human body. The post This Add-On Module Turns the Flipper Zero Into a Pocket UV Radiation Scanner first appeared on Yanko Design.
  • Hazy Hawk Exploits DNS Records to Hijack CDC, Corporate Domains for Malware Delivery

    A threat actor known as Hazy Hawk has been observed hijacking abandoned cloud resources of high-profile organizations, including Amazon S3 buckets and Microsoft Azure endpoints, by leveraging misconfigurations in the Domain Name System (DNS) records.
    The hijacked domains are then used to host URLs that direct users to scams and malware via traffic distribution systems (TDSes), according to Infoblox. Some of the other resources usurped by the threat actor include those hosted on Akamai, Bunny CDN, Cloudflare CDN, GitHub, and Netlify.
    The DNS threat intelligence firm said it first discovered the threat actor after it gained control of several sub-domains associated with the U.S. Center for Disease Control (CDC) in February 2025.
    It has since been determined that other government agencies across the globe, prominent universities, and international corporations such as Deloitte, PricewaterhouseCoopers, and Ernst & Young have been victimized by the same threat actor since at least December 2023.

    "Perhaps the most remarkable thing about Hazy Hawk is that these hard-to-discover, vulnerable domains with ties to esteemed organizations are not being used for espionage or 'highbrow' cybercrime," Infoblox's Jacques Portal and Renée Burton said in a report shared with The Hacker News.
    "Instead, they feed into the seedy underworld of adtech, whisking victims to a wide range of scams and fake applications, and using browser notifications to trigger processes that will have a lingering impact."
    What makes Hazy Hawk's operations noteworthy is the hijacking of trusted and reputable domains belonging to legitimate organizations, thus boosting their credibility in search results when they are being used to serve malicious and spammy content. But even more concerningly, the approach enables the threat actors to bypass detection.
    Underpinning the operation is the ability of the attackers to seize control of abandoned domains with dangling DNS CNAME records, a technique previously exposed by Guardio in early 2024 as being exploited by bad actors for spam proliferation and click monetization. All a threat actor needs to do is register the missing resource to hijack the domain.
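    The defensive flip side of that is easy to script. Below is a hedged sketch using the dnspython library that flags CNAME records whose targets no longer resolve; the hostnames are hypothetical, and note that it only catches NXDOMAIN-style dangling – some cloud services answer wildcard DNS even for unclaimed resources, so a real audit also needs service-specific checks.

```python
# Flag dangling CNAMEs: records whose targets no longer resolve, the
# state that lets an attacker register the missing resource. Sketch
# using dnspython; the hostnames below are hypothetical examples.
import dns.resolver

def dangling_cnames(hostnames):
    findings = []
    for name in hostnames:
        try:
            answer = dns.resolver.resolve(name, "CNAME")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            continue  # no CNAME record here, so nothing can dangle
        target = str(answer[0].target).rstrip(".")
        try:
            dns.resolver.resolve(target, "A")
        except dns.resolver.NXDOMAIN:
            # The CNAME points at a name that no longer exists --
            # exactly the condition Hazy Hawk hunts for.
            findings.append((name, target))
    return findings

if __name__ == "__main__":
    # Substitute the subdomains from your own zone file.
    for name, target in dangling_cnames(["old-app.example.org"]):
        print(f"DANGLING: {name} -> {target}")
```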

    Hazy Hawk goes a step further by finding abandoned cloud resources and then commandeering them for malicious purposes. In some cases, the threat actor employs URL redirection techniques to conceal which cloud resource was hijacked.
    "We use the name Hazy Hawk for this actor because of how they find and hijack cloud resources that have dangling DNS CNAME records and then use them in malicious URL distribution," Infoblox said. "It's possible that the domain hijacking component is provided as a service and is used by a group of actors."
    The attack chains often involve cloning the content of legitimate sites for their initial site hosted on the hijacked domains, while luring victims into visiting them with pornographic or pirated content. The site visitors are then funneled via a TDS to determine where they land next.

    "Hazy Hawk is one of the dozens of threat actors we track within the advertising affiliate world," the company said. "Threat actors who belong to affiliate advertising programs drive users into tailored malicious content and are incentivized to include requests to allow push notifications from 'websites' along the redirection path."
    In doing so, the idea is to flood a victim's device with push notifications and deliver an endless torrent of malicious content, with each notification leading to different scams, scareware, and fake surveys, and accompanied by requests to allow more push notifications.
    To prevent and protect against Hazy Hawk activity, domain owners should remove a DNS CNAME record as soon as the resource it points to is shut down. End users, on the other hand, are advised to deny notification requests from websites they don't know.
    "While operators like Hazy Hawk are responsible for the initial lure, the user who clicks is led into a labyrinth of sketchy and outright malicious adtech. The fact that Hazy Hawk puts considerable effort into locating vulnerable domains and then using them for scam operations shows that these advertising affiliate programs are successful enough to pay well," Infoblox said.
