The Legal Accountability of AI-Generated Deepfakes in Election Misinformation

    How Deepfakes Are Created

    Generative AI models enable the creation of highly realistic fake media. Most deepfakes today are produced by training deep neural networks on real images, video or audio of a target person. The two predominant AI architectures are generative adversarial networks (GANs) and autoencoders. A GAN consists of a generator network that produces synthetic images and a discriminator network that tries to distinguish fakes from real data. Through iterative training, the generator learns to produce outputs that increasingly fool the discriminator¹. Autoencoder-based tools similarly learn to encode a target face and then decode it onto a source video. In practice, deepfake creators use accessible software: open-source tools like DeepFaceLab and FaceSwap dominate video face-swapping (one estimate suggests DeepFaceLab was used for over 95% of known deepfake videos)². Voice-cloning tools (often built on similar AI principles) can mimic a person’s speech from minutes of audio. Commercial platforms like Synthesia allow text-to-video avatars (turning typed scripts into lifelike “spokespeople”), which have already been misused in disinformation campaigns³. Even mobile apps (e.g. FaceApp, Zao) let users do basic face swaps in minutes⁴. In short, advances in GANs and related models make deepfakes cheaper and easier to generate than ever.

    Diagram of a generative adversarial network (GAN): A generator network creates fake images from random input and a discriminator network distinguishes fakes from real examples. Over time the generator improves until its outputs “fool” the discriminator⁵.
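    To make the adversarial loop concrete, below is a minimal GAN training-loop sketch in PyTorch. It is illustrative only: toy dimensions, random stand-in data, and none of the stabilization tricks that production face-swap tools rely on.

```python
# Minimal GAN training loop (PyTorch). Toy dimensions and random stand-in
# data; real deepfake tools add many architectural and training refinements.
import torch
import torch.nn as nn

LATENT, DATA = 64, 784  # e.g. 28x28 images, flattened

generator = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, DATA), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(DATA, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

loss = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, DATA) * 2 - 1          # stand-in for a batch of real images
    fake = generator(torch.randn(32, LATENT))    # synthetic batch from random input

    # Discriminator: learn to label real as 1 and fake as 0.
    d_loss = loss(discriminator(real), torch.ones(32, 1)) + \
             loss(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator call fakes real.
    g_loss = loss(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```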

    During creation, a deepfake algorithm is typically trained on a large dataset of real images or audio from the target. The more varied and high-quality the training data, the more realistic the deepfake. The output often then undergoes post-processing (color adjustments, lip-syncing refinements) to enhance believability¹. Technical defenses focus on two fronts: detection and authentication. Detection uses AI models to spot inconsistencies (blinking irregularities, audio artifacts or metadata mismatches) that betray a synthetic origin⁵. Authentication embeds markers before dissemination – for example, invisible watermarks or cryptographically signed metadata indicating authenticity⁶. The EU AI Act will soon mandate that major AI content providers embed machine-readable “watermark” signals in synthetic media⁷. However, as GAO notes, detection is an arms race – even a marked deepfake can sometimes evade notice – and labels alone don’t stop false narratives from spreading⁸⁹.
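    The authentication idea can be illustrated in a few lines of Python. This is a simplified sketch: real provenance schemes (such as C2PA content credentials) use certificate-based signatures rather than a shared HMAC key, and the key and metadata fields below are hypothetical.

```python
# Sketch: sign media bytes plus provenance metadata so any later edit
# breaks verification. HMAC with a shared key is a simplification of
# the certificate-based signing real provenance standards use.
import hmac, hashlib, json

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the publisher

def sign_media(media: bytes, metadata: dict) -> str:
    payload = media + json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_media(media: bytes, metadata: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_media(media, metadata), signature)

original = b"...video bytes..."
meta = {"source": "Campaign HQ", "captured": "2024-01-15T09:30:00Z"}
sig = sign_media(original, meta)

assert verify_media(original, meta, sig)             # untouched media verifies
assert not verify_media(original + b"x", meta, sig)  # any alteration fails
```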

    Deepfakes in Recent Elections: Examples

    Deepfakes and AI-generated imagery have already made headlines in election cycles around the world. In the 2024 U.S. primary season, a digitally altered audio robocall mimicked President Biden’s voice urging Democrats not to vote in the New Hampshire primary. The caller was later fined $6 million by the FCC and indicted under existing telemarketing laws¹⁰¹¹. (Importantly, FCC rules on robocalls applied regardless of AI: the perpetrator could have used a voice actor or a recording instead.) Also in 2024, former President Trump posted on social media a collage implying that pop singer Taylor Swift endorsed his campaign, using AI-generated images of Swift in “Swifties for Trump” shirts¹². The posts sparked a media uproar, though analysts noted the same effect could have been achieved without AI (e.g., by photoshopping text onto real images)¹². Similarly, Elon Musk’s X platform carried AI-generated clips, including a parody “ad” depicting Vice President Harris’s voice via an AI clone¹³.

    Beyond the U.S., deepfake-like content has appeared globally. In Indonesia’s 2024 presidential election, a video surfaced on social media in which a convincingly generated likeness of the late President Suharto appeared to endorse the candidate of the Golkar Party. Days later, the endorsed candidate (Suharto’s former son-in-law) won the presidency¹⁴. In Bangladesh, a viral deepfake video superimposed the face of opposition leader Rumeen Farhana onto a bikini-clad body – an incendiary fabrication designed to discredit her in the conservative Muslim-majority society¹⁵. Moldova’s pro-Western President Maia Sandu has been repeatedly targeted by AI-driven disinformation; one deepfake video falsely showed her resigning and endorsing a Russian-friendly party, apparently to sow distrust in the electoral process¹⁶. Even in Taiwan (amid tensions with China), a TikTok clip circulated that synthetically portrayed a U.S. politician making foreign-policy statements – stoking confusion ahead of Taiwanese elections¹⁷. In Slovakia’s recent campaign, AI-generated audio mimicking the liberal party leader suggested he plotted vote-rigging and beer-price hikes – instantly spreading on social media just days before the election¹⁸. These examples show that deepfakes have touched diverse polities (from Bangladesh and Indonesia to Moldova, Slovakia, India and beyond), often aiming to undermine candidates or confuse voters¹⁵¹⁸.

    Notably, many of the most viral “deepfakes” in 2024 actually circulated as obvious memes or claims, rather than as subtle deceptions. Experts observed that outright undetectable AI deepfakes were relatively rare; more common were AI-generated memes plainly shared by partisans, or cheaply doctored “cheapfakes” made with basic editing tools¹³¹⁹. For instance, social media was awash with memes of Kamala Harris in Soviet garb or of Black Americans holding Trump signs¹³, but these were typically used satirically, not meant to be secretly believed. Nonetheless, even unsophisticated fakes can sway opinion: a U.S. study found that false presidential ads (not necessarily AI-made) did change voter attitudes in swing states. In sum, deepfakes are a real and growing phenomenon in election campaigns worldwide²⁰²¹ – a trend taken seriously by voters and regulators alike.

    U.S. Legal Framework and Accountability

    In the U.S., deepfake creators and distributors of election misinformation face a patchwork of legal tools, but no single comprehensive federal “deepfake law.” Existing laws relevant to disinformation include statutes against impersonating government officials, electioneering rules (such as the Bipartisan Campaign Reform Act, which requires disclaimers on political ads), and targeted statutes like criminal electioneering communications. In some cases ordinary laws have been stretched: the NH robocall case was prosecuted under the Telephone Consumer Protection Act and mail/telemarketing fraud provisions, resulting in the $6M fine and a criminal charge. Similarly, voice impostors can potentially violate laws against “false advertising” or “unlawful corporate communications.” However, these laws were enacted before AI, and litigators have warned they often do not fit neatly. For example, deceptive deepfake claims not tied to a specific victim do not easily fit into defamation or privacy torts. Voter intimidation laws (prohibiting threats or coercion) also leave a gap for non-threatening falsehoods about voting logistics or endorsements.

    Recognizing these gaps, some courts and agencies are invoking other theories. The U.S. Department of Justice has recently charged individuals under broad fraud statutes (e.g. for a plot to impersonate an aide to swing votes in 2020), and state attorneys general have considered treating deepfake misinformation as interference with voting rights. Notably, the Federal Election Commission (FEC) is preparing to enforce new rules: in April 2024 it issued an advisory opinion limiting “non-candidate electioneering communications” that use falsified media, effectively requiring that political ads use only real images of the candidate. If finalized, that would make it unlawful for campaigns to pay for ads depicting a candidate saying things they never did. Similarly, the Federal Trade Commission (FTC) and Department of Justice (DOJ) have signaled that purely commercial deepfakes could violate consumer protection or election laws (for example, liability for mass false impersonation or for foreign-funded electioneering).

    U.S. Legislation and Proposals

    Federal lawmakers have proposed new statutes. The DEEPFAKES Accountability Act (H.R. 5586 in the 118th Congress) would, among other things, impose a disclosure requirement: political ads featuring a manipulated media likeness would need clear disclaimers identifying the content as synthetic. It would also increase penalties for producing false election videos or audio intended to influence the vote. While not yet enacted, supporters argue it would provide a uniform rule for all federal and state campaigns. The Brennan Center supports transparency requirements over outright bans, suggesting laws should narrowly target deceptive deepfakes in paid ads or certain categories (e.g. false claims about the time, place, or manner of voting) while carving out parody and news coverage.

    At the state level, over 20 states have passed deepfake laws specifically for elections. For example, Florida and California forbid distributing falsified audio/visual media of candidates with intent to deceive voters (though Florida’s law exempts parody). Some states (like Texas) define “deepfake” in statute and allow candidates to sue violators or seek to revoke their candidacies. These measures have had mixed success: courts have struck down overly broad provisions that acted as prior restraints (e.g. Minnesota’s 2023 law was challenged for threatening injunctions against anyone “reasonably believed” to have violated it). Critically, these state laws raise First Amendment issues: political speech is highly protected, so any restriction must be tightly tailored. Already, Texas and Virginia statutes are under legal review, and Elon Musk’s X has sued to block California’s law (which requires platforms to label or block deepfakes), arguing it is unconstitutional. In practice, most lawsuits have so far centered on defamation or intellectual property (for instance, a celebrity suing over a botched deepfake video), rather than on election-focused statutes.

    Policy Recommendations: Balancing Integrity and Speech

    Given the rapidly evolving technology, experts recommend a multi-pronged approach. Most stress transparency and disclosure as core principles. For example, the Brennan Center urges requiring any political communication that uses AI-synthesized images or voice to include a clear label. This could be a digital watermark or a visible disclaimer. Transparency has two advantages: it forces campaigns and platforms to “own” the use of AI, and it alerts audiences to treat the content with skepticism.

    Outright bans on all deepfakes would likely violate free speech, but targeted bans on specific harms (e.g. automated phone calls impersonating candidates, or videos conveying false polling information) may be defensible. Indeed, Florida already penalizes misuse of recordings in voter suppression. Another recommendation is limited liability: tying penalties to demonstrable intent to mislead, not to the mere act of content creation. Both U.S. federal proposals and EU law generally condition fines on the “appearance of fraud” or deception.

    Technical solutions can complement laws. Watermarking original media (as encouraged by the EU AI Act) could deter the reuse of authentic images in doctored fakes. Open tools for deepfake detection – some supported by government research grants – should be deployed by fact-checkers and social platforms. Making detection datasets publicly available helps improve the AI models that spot fakes. International cooperation is also urged: cross-border agreements on information-sharing could help trace and halt disinformation campaigns. The G7 and APEC have both recently committed to fighting election interference via AI, which may lead to joint norms or rapid-response teams.
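    One lightweight building block for spotting reuse of authentic images is perceptual hashing. The sketch below implements a basic average hash with Pillow; the filenames and distance threshold are hypothetical, and real fact-checking pipelines use far more robust hashes and ML classifiers.

```python
# Sketch: average-hash ("aHash") comparison to flag a doctored image that
# reuses an authentic photo. Assumes Pillow is installed; threshold and
# filenames are illustrative only.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:                      # 1 bit per pixel: above/below average
        bits = (bits << 1) | (p > avg)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Hypothetical files: a known-authentic photo and a suspect social post.
if hamming(average_hash("official_photo.jpg"), average_hash("suspect_post.jpg")) <= 10:
    print("Suspect image likely derived from the authentic photo")
```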

    Ultimately, many analysts believe the strongest “cure” is a well-informed public: education campaigns to teach voters to question sensational media, and a robust independent press to debunk falsehoods swiftly. While the law can penalize the worst offenders, awareness and resilience in the electorate are crucial buffers against influence operations. As Georgia Tech’s Sean Parker quipped in 2019, “the real question is not if deepfakes will influence elections, but who will be empowered by the first effective one.” Thus policies should aim to deter malicious use without unduly chilling innovation or satire.

    References:

    https://www.security.org/resources/deepfake-statistics/
    https://www.wired.com/story/synthesia-ai-deepfakes-it-control-riparbelli/
    https://www.gao.gov/products/gao-24-107292
    https://technologyquotient.freshfields.com/post/102jb19/eu-ai-act-unpacked-8-new-rules-on-deepfakes
    https://knightcolumbia.org/blog/we-looked-at-78-election-deepfakes-political-misinformation-is-not-an-ai-problem
    https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections
    https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd
    https://www.lawfaremedia.org/article/new-and-old-tools-to-tackle-deepfakes-and-election-lies-in-2024
    https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
    https://firstamendment.mtsu.edu/article/political-deepfakes-and-elections/
    https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation
    https://law.unh.edu/sites/default/files/media/2022/06/nagumotu_pp113-157.pdf
    https://dfrlab.org/2024/10/02/brazil-election-ai-research/
    https://dfrlab.org/2024/11/26/brazil-election-ai-deepfakes/
    https://freedomhouse.org/article/eu-digital-services-act-win-transparency

China-Linked Hackers Exploit SAP and SQL Server Flaws in Attacks Across Asia and Brazil

    May 30, 2025Ravie LakshmananVulnerability / Threat Intelligence

    The China-linked threat actor behind the recent in-the-wild exploitation of a critical security flaw in SAP NetWeaver has been attributed to a broader set of attacks targeting organizations in Brazil, India, and Southeast Asia since 2023.
    "The threat actor mainly targets the SQL injection vulnerabilities discovered on web applications to access the SQL servers of targeted organizations," Trend Micro security researcher Joseph C Chen said in an analysis published this week. "The actor also takes advantage of various known vulnerabilities to exploit public-facing servers."
    Some of the other prominent targets of the adversarial collective include Indonesia, Malaysia, the Philippines, Thailand, and Vietnam.
    The cybersecurity company is tracking the activity under the moniker Earth Lamia, stating the activity shares some degree of overlap with threat clusters documented by Elastic Security Labs as REF0657, Sophos as STAC6451, and Palo Alto Networks Unit 42 as CL-STA-0048.

    Each of these attacks has targeted organizations spanning multiple sectors in South Asia, often leveraging internet-exposed Microsoft SQL Servers and other instances to conduct reconnaissance, deploy post-exploitation tools like Cobalt Strike and Supershell, and establish proxy tunnels to the victim networks using Rakshasa and Stowaway.
    Also used are privilege escalation tools like GodPotato and JuicyPotato; network scanning utilities such as Fscan and Kscan; and legitimate programs like wevtutil.exe to clean Windows Application, System, and Security event logs.
    Select intrusions aimed at Indian entities have also attempted to deploy Mimic ransomware binaries to encrypt victim files, although the efforts were largely unsuccessful.
    "While the actors were seen staging the Mimic ransomware binaries in all observed incidents, the ransomware often did not successfully execute, and in several instances, the actors were seen attempting to delete the binaries after being deployed," Sophos noted in an analysis published in August 2024.
    Then earlier this month, EclecticIQ disclosed that CL-STA-0048 was one among the many China-nexus cyber espionage groups to exploit CVE-2025-31324, a critical unauthenticated file upload vulnerability in SAP NetWeaver to establish a reverse shell to infrastructure under its control.

    Besides CVE-2025-31324, the hacking crew is said to have weaponized as many as eight different vulnerabilities to breach public-facing servers -

    CVE-2017-9805 - Apache Struts2 remote code execution vulnerability
    CVE-2021-22205 - GitLab remote code execution vulnerability
    CVE-2024-9047 - WordPress File Upload plugin arbitrary file access vulnerability
    CVE-2024-27198 - JetBrains TeamCity authentication bypass vulnerability
    CVE-2024-27199 - JetBrains TeamCity path traversal vulnerability
    CVE-2024-51378 - CyberPanel remote code execution vulnerability
    CVE-2024-51567 - CyberPanel remote code execution vulnerability
    CVE-2024-56145 - Craft CMS remote code execution vulnerability

    Describing it as "highly active," Trend Micro noted that the threat actor has shifted its focus from financial services to logistics and online retail, and most recently, to IT companies, universities, and government organizations.

    "In early 2024 and prior, we observed that most of their targets were organizations within the financial industry, specifically related to securities and brokerage," the company said. "In the second half of 2024, they shifted their targets to organizations mainly in the logistics and online retail industries. Recently, we noticed that their targets have shifted again to IT companies, universities, and government organizations."
    A noteworthy technique adopted by Earth Lamia is to launch its custom backdoors like PULSEPACK via DLL side-loading, an approach widely embraced by Chinese hacking groups. A modular .NET-based implant, PULSEPACK communicates with a remote server to retrieve various plugins to carry out its functions.
    Trend Micro said it observed in March 2025 an updated version of the backdoor that changes the command-and-control (C2) communication method from TCP to WebSocket, indicating active ongoing development of the malware.
    "Earth Lamia is conducting its operations across multiple countries and industries with aggressive intentions," it concluded. "At the same time, the threat actor continuously refines their attack tactics by developing custom hacking tools and new backdoors."

From LLMs to hallucinations, here’s a simple guide to common AI terms

    Artificial intelligence is a deep and convoluted world. The scientists who work in this field often rely on jargon and lingo to explain what they’re working on. As a result, we frequently have to use those technical terms in our coverage of the artificial intelligence industry. That’s why we thought it would be helpful to put together a glossary with definitions of some of the most important words and phrases that we use in our articles.
    We will regularly update this glossary to add new entries as researchers continually uncover novel methods to push the frontier of artificial intelligence while identifying emerging safety risks.

    AGI
    Artificial general intelligence, or AGI, is a nebulous term. But it generally refers to AI that’s more capable than the average human at many, if not most, tasks. OpenAI CEO Sam Altman recently described AGI as the “equivalent of a median human that you could hire as a co-worker.” Meanwhile, OpenAI’s charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.” Google DeepMind’s understanding differs slightly from these two definitions; the lab views AGI as “AI that’s at least as capable as humans at most cognitive tasks.” Confused? Not to worry — so are experts at the forefront of AI research.
    AI agent
    An AI agent refers to a tool that uses AI technologies to perform a series of tasks on your behalf — beyond what a more basic AI chatbot could do — such as filing expenses, booking tickets or a table at a restaurant, or even writing and maintaining code. However, as we’ve explained before, there are lots of moving pieces in this emergent space, so “AI agent” might mean different things to different people. Infrastructure is also still being built out to deliver on its envisaged capabilities. But the basic concept implies an autonomous system that may draw on multiple AI systems to carry out multistep tasks.
    Chain of thought
    Given a simple question, a human brain can answer without even thinking too much about it — things like “which animal is taller, a giraffe or a cat?” But in many cases, you often need a pen and paper to come up with the right answer because there are intermediary steps. For instance, if a farmer has chickens and cows, and together they have 40 heads and 120 legs, you might need to write down a simple equation to come up with the answer.
    In an AI context, chain-of-thought reasoning for large language models means breaking down a problem into smaller, intermediate steps to improve the quality of the end result. It usually takes longer to get an answer, but the answer is more likely to be correct, especially in a logic or coding context. Reasoning models are developed from traditional large language models and optimized for chain-of-thought thinking thanks to reinforcement learning.
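    A minimal sketch of the idea in practice, reusing the farmer puzzle above: the second prompt asks the model to externalize its intermediate steps. The `ask` function is a placeholder assumption – any LLM provider call could sit behind it.

```python
# Chain-of-thought prompting, sketched with the farmer puzzle from above.
# `ask` is a stub: plug in any LLM API call.

direct_prompt = (
    "A farmer has chickens and cows with 40 heads and 120 legs in total. "
    "How many cows are there? Answer with a number only."
)

cot_prompt = (
    "A farmer has chickens and cows with 40 heads and 120 legs in total. "
    "How many cows are there? Think step by step: write the equations "
    "for heads and legs, solve them, then state the answer."
    # Expected intermediate steps: h + c = 40 and 2h + 4c = 120,
    # so 2(40 - c) + 4c = 120  ->  2c = 40  ->  c = 20 cows.
)

def ask(prompt: str) -> str:
    raise NotImplementedError("call your LLM provider here")
```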


    Deep learning
    A subset of self-improving machine learning in which AI algorithms are designed with a multi-layered, artificial neural network (ANN) structure. This allows them to make more complex correlations compared to simpler machine learning-based systems, such as linear models or decision trees. The structure of deep learning algorithms draws inspiration from the interconnected pathways of neurons in the human brain.
    Deep learning AI models are able to identify important characteristics in data themselves, rather than requiring human engineers to define these features. The structure also supports algorithms that can learn from errors and, through a process of repetition and adjustment, improve their own outputs. However, deep learning systems require a lot of data points to yield good results. They also typically take longer to train compared to simpler machine learning algorithms – so development costs tend to be higher.
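    As a concrete illustration, here is a tiny multi-layered network in PyTorch learning a toy target through the repetition-and-adjustment loop described above; dimensions and data are arbitrary stand-ins.

```python
# Minimal multi-layered network: stacked linear layers with nonlinearities,
# trained by repeatedly measuring error and adjusting weights (backprop).
import torch
import torch.nn as nn

model = nn.Sequential(          # three layers = "deep" in miniature
    nn.Linear(4, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

X = torch.randn(64, 4)
y = X.sum(dim=1, keepdim=True)  # toy pattern the network must discover

for _ in range(200):            # repetition-and-adjustment loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()             # learn from the error signal
    opt.step()
```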
    Diffusion
    Diffusion is the tech at the heart of many art-, music-, and text-generating AI models. Inspired by physics, diffusion systems slowly “destroy” the structure of data – e.g. photos, songs, and so on – by adding noise until there’s nothing left. In physics, diffusion is spontaneous and irreversible – sugar diffused in coffee can’t be restored to cube form. But diffusion systems in AI aim to learn a sort of “reverse diffusion” process to restore the destroyed data, gaining the ability to recover the data from noise.
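    The forward, “destroy” half of diffusion has a standard closed form, sketched below in PyTorch with a typical noise schedule; a trained model would learn to reverse this process. Values here are illustrative defaults, not tuned settings.

```python
# Forward diffusion: blend data toward pure noise over T steps.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # per-step noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal retention

def noisy_at_step(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Closed form: x_t = sqrt(a_bar)*x0 + sqrt(1 - a_bar)*noise."""
    a = alphas_bar[t]
    return a.sqrt() * x0 + (1 - a).sqrt() * torch.randn_like(x0)

x0 = torch.randn(3, 64, 64)       # stand-in for an image tensor
x_mid = noisy_at_step(x0, 500)    # partially destroyed
x_end = noisy_at_step(x0, T - 1)  # nearly pure noise; a reverse model starts here
```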
    Distillation
    Distillation is a technique used to extract knowledge from a large AI model with a ‘teacher-student’ model. Developers send requests to a teacher model and record the outputs. Answers are sometimes compared with a dataset to see how accurate they are. These outputs are then used to train the student model, which is trained to approximate the teacher’s behavior.
    Distillation can be used to create a smaller, more efficient model based on a larger model with a minimal distillation loss. This is likely how OpenAI developed GPT-4 Turbo, a faster version of GPT-4.
    While all AI companies use distillation internally, it may also have been used by some AI companies to catch up with frontier models. Distillation from a competitor usually violates the terms of service of AI APIs and chat assistants.
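    The core of the teacher-student setup is often a soft-label loss. The sketch below shows one common formulation (an illustration, not any particular company’s recipe): the student is trained to match the teacher’s temperature-softened output distribution.

```python
# Distillation loss: pull the student's output distribution toward the
# teacher's, using temperature-softened probabilities and KL divergence.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence measures how far the student is from the teacher.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature**2

student_logits = torch.randn(8, 100, requires_grad=True)  # stand-in batch
teacher_logits = torch.randn(8, 100)                      # recorded teacher outputs
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients flow only into the student
```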
    Fine-tuning
    This refers to the further training of an AI model to optimize performance for a more specific task or area than was previously a focal point of its training — typically by feeding in new, specialized (i.e., task-oriented) data.
    Many AI startups are taking large language models as a starting point to build a commercial product but are vying to amp up utility for a target sector or task by supplementing earlier training cycles with fine-tuning based on their own domain-specific knowledge and expertise. (See: Large language model [LLM])
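    The workflow can be sketched in a few lines (Python with NumPy, synthetic data): pretrained weights are taken as the starting point and nudged with a short extra round of training on a small, domain-specific dataset.

        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.normal(size=(4, 1))            # stand-in for pretrained weights

        domain_x = rng.normal(size=(64, 4))    # small, specialized dataset
        true_W   = np.array([[1.0], [0.0], [2.0], [0.0]])
        domain_y = domain_x @ true_W           # the domain-specific task

        for _ in range(300):                   # brief additional training pass
            W -= 0.1 * domain_x.T @ (domain_x @ W - domain_y) / len(domain_x)
        print(np.round(W.T, 2))                # weights now reflect the new domain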
    GAN
    A GAN, or Generative Adversarial Network, is a type of machine learning framework that underpins some important developments in generative AI when it comes to producing realistic data – including (but not only) deepfake tools. GANs involve the use of a pair of neural networks, one of which draws on its training data to generate an output that is passed to the other model to evaluate. This second, discriminator model thus plays the role of a classifier on the generator’s output – enabling it to improve over time.
    The GAN structure is set up as a competition (hence “adversarial”) – with the two models essentially programmed to try to outdo each other: the generator is trying to get its output past the discriminator, while the discriminator is working to spot artificially generated data. This structured contest can optimize AI outputs to be more realistic without the need for additional human intervention. Though GANs work best for narrower applications (such as producing realistic photos or videos), rather than general purpose AI.
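    The adversarial loop can be shown with a deliberately tiny example (Python with NumPy). Here the “generator” is just a learned shift applied to noise and the “discriminator” is a logistic classifier (a real GAN uses deep networks for both); the alternating updates are what the sketch is meant to show.

        import numpy as np

        rng = np.random.default_rng(0)
        sigmoid = lambda s: 1 / (1 + np.exp(-s))
        b = 0.0            # generator parameter: shifts noise toward the data
        w, c = 0.0, 0.0    # discriminator parameters
        lr = 0.05

        for _ in range(3000):
            real = rng.normal(3.0, 1.0, size=64)   # "real" samples from N(3, 1)
            fake = rng.normal(size=64) + b
            # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
            d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
            w -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
            c -= lr * np.mean(-(1 - d_real) + d_fake)
            # Generator step: adjust b so the discriminator scores fakes as real.
            d_fake = sigmoid(w * (rng.normal(size=64) + b) + c)
            b -= lr * np.mean(-(1 - d_fake) * w)

        print(f"learned shift b = {b:.2f}")  # drifts toward the real mean of 3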
    Hallucination
    Hallucination is the AI industry’s preferred term for AI models making stuff up – literally generating information that is incorrect. Obviously, it’s a huge problem for AI quality. 
    Hallucinations produce GenAI outputs that can be misleading and could even lead to real-life risks — with potentially dangerous consequences (think of a health query that returns harmful medical advice). This is why most GenAI tools’ small print now warns users to verify AI-generated answers, even though such disclaimers are usually far less prominent than the information the tools dispense at the touch of a button.
    The problem of AIs fabricating information is thought to arise as a consequence of gaps in training data. For general purpose GenAI especially — also sometimes known as foundation models — this looks difficult to resolve. There is simply not enough data in existence to train AI models to comprehensively resolve all the questions we could possibly ask. TL;DR: we haven’t invented God (yet).
    Hallucinations are contributing to a push towards increasingly specialized and/or vertical AI models — i.e. domain-specific AIs that require narrower expertise – as a way to reduce the likelihood of knowledge gaps and shrink disinformation risks.
    Inference
    Inference is the process of running an AI model. It’s setting a model loose to make predictions or draw conclusions from previously-seen data. To be clear, inference can’t happen without training; a model must learn patterns in a set of data before it can effectively extrapolate from this training data.
    Many types of hardware can perform inference, ranging from smartphone processors to beefy GPUs to custom-designed AI accelerators. But not all of them can run models equally well. Very large models would take ages to make predictions on, say, a laptop versus a cloud server with high-end AI chips. [See: Training]
    Large language model (LLM)
    Large language models, or LLMs, are the AI models used by popular AI assistants, such as ChatGPT, Claude, Google’s Gemini, Meta’s AI Llama, Microsoft Copilot, or Mistral’s Le Chat. When you chat with an AI assistant, you interact with a large language model that processes your request directly or with the help of different available tools, such as web browsing or code interpreters.
    AI assistants and LLMs can have different names. For instance, GPT is OpenAI’s large language model and ChatGPT is the AI assistant product.
    LLMs are deep neural networks made of billions of numerical parameters (or weights, see below) that learn the relationships between words and phrases and create a representation of language, a sort of multidimensional map of words.
    These models are created from encoding the patterns they find in billions of books, articles, and transcripts. When you prompt an LLM, the model generates the most likely pattern that fits the prompt. It then evaluates the most probable next word after the last one based on what was said before. Repeat, repeat, and repeat. (See: Neural network)
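    That repeat-the-next-word loop can be written out in miniature (Python with NumPy). The “model” below is a hand-built table of bigram probabilities, a stand-in for the billions of learned parameters, but the generation loop has the same shape.

        import numpy as np

        rng = np.random.default_rng(0)
        vocab = ["the", "cat", "sat", "on", "mat", "."]
        probs = {                        # P(next word | current word), invented
            "the": [0.0, 0.6, 0.0, 0.0, 0.4, 0.0],
            "cat": [0.0, 0.0, 0.9, 0.0, 0.0, 0.1],
            "sat": [0.0, 0.0, 0.0, 1.0, 0.0, 0.0],
            "on":  [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
            "mat": [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
            ".":   [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        }

        word, sentence = "the", ["the"]
        while word != "." and len(sentence) < 10:
            word = rng.choice(vocab, p=probs[word])  # sample the next word
            sentence.append(word)
        print(" ".join(sentence))        # e.g. "the cat sat on the mat ."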
    Neural network
    A neural network refers to the multi-layered algorithmic structure that underpins deep learning — and, more broadly, the whole boom in generative AI tools following the emergence of large language models.
    Although the idea of taking inspiration from the densely interconnected pathways of the human brain as a design structure for data processing algorithms dates all the way back to the 1940s, it was the much more recent rise of graphical processing hardware (GPUs) — via the video game industry — that really unlocked the power of this theory. These chips proved well suited to training algorithms with many more layers than was possible in earlier epochs — enabling neural network-based AI systems to achieve far better performance across many domains, including voice recognition, autonomous navigation, and drug discovery. (See: Large language model [LLM])
    Training
    Developing machine learning AIs involves a process known as training. In simple terms, this refers to data being fed into the model so that it can learn from patterns and generate useful outputs.
    Things can get a bit philosophical at this point in the AI stack — since, pre-training, the mathematical structure that’s used as the starting point for developing a learning system is just a bunch of layers and random numbers. It’s only through training that the AI model really takes shape. Essentially, it’s the process of the system responding to characteristics in the data that enables it to adapt outputs towards a sought-for goal — whether that’s identifying images of cats or producing a haiku on demand.
    It’s important to note that not all AI requires training. Rules-based AIs that are programmed to follow manually predefined instructions — linear chatbots, for example — don’t need to undergo training. However, such AI systems are likely to be more constrained than (well-trained) self-learning systems.
    Still, training can be expensive because it requires lots of inputs — and, typically, the volumes of inputs required for such models have been trending upwards.
    Hybrid approaches can sometimes be used to shortcut model development and help manage costs, such as doing data-driven fine-tuning of a rules-based AI — meaning development requires less data, compute, energy, and algorithmic complexity than if the developer had started building from scratch. [See: Inference]
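    A minimal illustration of random numbers taking shape (Python with NumPy, synthetic data): the loss falls as repetition and adjustment pull randomly initialized weights toward the pattern hidden in the data.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(128, 3))
        y = X @ np.array([2.0, -1.0, 0.5])   # the pattern hidden in the data
        W = rng.normal(size=3)               # pre-training: just random numbers

        for epoch in range(5):
            for _ in range(50):              # repetition and adjustment
                W -= 0.05 * X.T @ (X @ W - y) / len(X)
            print(f"epoch {epoch}: loss {np.mean((X @ W - y) ** 2):.8f}")
        # The loss shrinks each epoch; W ends up close to [2, -1, 0.5].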
    Transfer learning
    A technique where a previously trained AI model is used as the starting point for developing a new model for a different but typically related task – allowing knowledge gained in previous training cycles to be reapplied.
    Transfer learning can drive efficiency savings by shortcutting model development. It can also be useful when data for the task that the model is being developed for is somewhat limited. But it’s important to note that the approach has limitations. Models that rely on transfer learning to gain generalized capabilities will likely require training on additional data in order to perform well in their domain of focus. (See: Fine-tuning)
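    The head start is easy to demonstrate (Python with NumPy, synthetic tasks): weights trained for one task become the starting point for a related one, which then needs only a short top-up.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(128, 3))

        def train(W, y, steps):              # plain gradient descent on MSE
            for _ in range(steps):
                W = W - 0.05 * X.T @ (X @ W - y) / len(X)
            return W

        task_a = X @ np.array([2.0, -1.0, 0.5])   # original task
        task_b = X @ np.array([2.0, -1.0, 0.8])   # related task, slightly shifted

        W_pretrained  = train(rng.normal(size=3), task_a, 500)
        W_transferred = train(W_pretrained, task_b, 50)    # short top-up only
        print(np.mean((X @ W_transferred - task_b) ** 2))  # already near zero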
    Weights
    Weights are core to AI training, as they determine how much importance (or weight) is given to different features (or input variables) in the data used for training the system — thereby shaping the AI model’s output.
    Put another way, weights are numerical parameters that define what’s most salient in a dataset for the given training task. They achieve their function by applying multiplication to inputs. Model training typically begins with weights that are randomly assigned, but as the process unfolds, the weights adjust as the model seeks to arrive at an output that more closely matches the target.
    For example, an AI model for predicting housing prices that’s trained on historical real estate data for a target location could include weights for features such as the number of bedrooms and bathrooms, whether a property is detached or semi-detached, whether it has parking, a garage, and so on. 
    Ultimately, the weights the model attaches to each of these inputs reflect how much they influence the value of a property, based on the given dataset.
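    The housing example reduces to a weighted sum, sketched below in plain Python; every number is invented for illustration, and a trained model would arrive at its weights through the adjustment process described above.

        weights = {
            "bedrooms":  31_000.0,
            "bathrooms": 12_000.0,
            "detached":  45_000.0,   # 1 if detached, 0 if semi-detached
            "parking":    9_000.0,   # 1 if the property has parking
        }
        house = {"bedrooms": 3, "bathrooms": 2, "detached": 1, "parking": 1}

        price = sum(weights[f] * house[f] for f in weights)
        print(f"predicted price: {price:,.0f}")   # -> 171,000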

  • What Zen And The Art Of Motorcycle Maintenance Can Teach Us About Web Design

    I think we, as engineers and designers, have a lot to gain by stepping outside of our worlds. That’s why in previous pieces I’ve been drawn towards architecture, newspapers, and the occasional polymath. Today, we stumble blindly into the world of philosophy. Bear with me. I think there’s something to it.
    In 1974, the American philosopher Robert M. Pirsig published a book called Zen and the Art of Motorcycle Maintenance. A flowing blend of autobiography, road trip diary, and philosophical musings, the book’s ‘chautauqua’ is an interplay between art, science, and self. Its outlook on life has stuck with me since I read it.
    The book often feels prescient, at times surreal to read given it’s now 50 years old. Pirsig’s reflections on arts vs. sciences, subjective vs. objective, and systems vs. people translate seamlessly to the digital age. There are lessons there that I think are useful when trying to navigate — and build — the web. Those lessons are what this piece is about.
    I feel obliged at this point to echo Pirsig and say that what follows should in no way be associated with the great body of factual information about Zen Buddhist practice. It’s not very factual in terms of web development, either.
    Buddha In The Machine
    Zen is written in stages. It sets a scene before making its central case. That backdrop is important, so I will mirror it here. The book opens with the start of a motorcycle road trip undertaken by Pirsig and his son. It’s a winding journey that takes them most of the way across the United States.
    Despite the trip being in part characterized as a flight from the machine, from the industrial ‘death force’, Pirsig takes great pains to emphasize that technology is not inherently bad or destructive. Treating it as such actually prevents us from finding ways in which machinery and nature can be harmonious.
    Granted, at its worst, the technological world does feel like a death force. In the book’s 1970s backdrop, it manifests as things like efficiency, profit, optimization, automation, growth — the kinds of words that, when we read them listed together, a part of our soul wants to curl up in the fetal position.
    In modern tech, those same forces apply. We might add things like engagement and tracking to them. Taken to the extreme, these forces contribute to the web feeling like a deeply inhuman place. Something cold, calculating, and relentless, yet without a fire in its belly. Impersonal, mechanical, inhuman.
    Faced with these forces, the impulse is often to recoil. To shut our laptops and wander into the woods. However, there is a big difference between clearing one’s head and burying it in the sand. Pirsig argues that “Flight from and hatred of technology is self-defeating.” To throw our hands up and step away from tech is to concede to the power of its more sinister forces.
    “The Buddha, the Godhead, resides quite as comfortably in the circuits of a digital computer or the gears of a cycle transmission as he does at the top of a mountain or in the petals of a flower. To think otherwise is to demean the Buddha — which is to demean oneself.”— Robert M. Pirsig

    Before we can concern ourselves with questions about what we might do, we must try our best to marshal how we might be. We take our heads and hearts with us wherever we go. If we characterize ourselves as powerless pawns, then that is what we will be.

    Where design and development are concerned, that means residing in the technology without losing our sense of self — or power. Technology is only as good or evil, as useful or as futile, as the people shaping it. Be it the internet or artificial intelligence, to direct blame or ire at the technology itself is to absolve ourselves of the responsibility to use it better. It is better not to demean oneself, I think.
    So, with the Godhead in mind, to business.
    Classical And Romantic
    A core concern of Zen and the Art of Motorcycle Maintenance is the tension between the arts and sciences. The two worlds have a long, rich history of squabbling and dysfunction. There is often mutual distrust, suspicion, and even hostility. This, again, is self-defeating. Hatred of technology is a symptom of it.
    “A classical understanding sees the world primarily as the underlying form itself. A romantic understanding sees it primarily in terms of immediate appearance.”— Robert M. Pirsig

    If we were to characterize the two as bickering siblings, familiar adjectives might start to appear:

    Classical      Romantic
    Dull           Frivolous
    Awkward        Irrational
    Ugly           Erratic
    Mechanical     Untrustworthy
    Cold           Fleeting

    Anyone in the world of web design and development will have come up against these kinds of standoffs. Tensions arise between testing and intuition, best practices and innovation, structure and fluidity. Is design about following rules or breaking them?
    Treating such questions as binary is a fallacy. In doing so, we place ourselves in adversarial positions, whatever we consider ourselves to be. The best work comes from these worlds working together — from recognising they are bound.
    Steve Jobs was a famous advocate of this.
    “Technology alone is not enough — it’s technology married with liberal arts, married with the humanities, that yields us the result that makes our heart sing.”— Steve Jobs

    Whatever you may feel about Jobs himself, I think this sentiment is watertight. No one field holds all the keys. Leonardo da Vinci was a shining example of doing away with this needless siloing of worlds. He was a student of light, anatomy, art, architecture, everything and anything that interested him. And they complemented each other. Excellence is a question of harmony.
    Is a motorcycle a romantic or classical artifact? Is it a machine or a symbol? A series of parts or a whole? It’s all these things and more. To say otherwise does a disservice to the motorcycle and deprives us of its full beauty.

    Just by reframing the relationship in this way, the kinds of adjectives that come to mind naturally shift toward more harmonious territory.

    Classical      Romantic
    Organized      Vibrant
    Scalable       Evocative
    Reliable       Playful
    Efficient      Fun
    Replicable     Expressive

    And, of course, when we try thinking this way, the distinction itself starts feeling fuzzier. There is so much that they share.
    Pirsig posits that the division between the subjective and objective is one of the great missteps of the Greeks, one that has been embraced wholeheartedly by the West in the millennia since. That doesn’t have to be the lens, though. Perhaps monism, not dualism, is the way.
    In a sense, technology marks the ultimate interplay between the arts and the sciences, the classical and the romantic. It is the human condition brought to you with ones and zeros. To separate those parts of it is to tear apart the thing itself.

    The same is true of the web. Is it romantic or classical? Art or science? Structured or anarchic? It is all those things and more. Engineering at its best is where all these apparent contradictions meet and become one.
    What is this place? Well, that brings us to a core concept of Pirsig’s book: Quality.
    Quality
    The central concern of Zen and the Art of Motorcycle Maintenance is the ‘Metaphysics of Quality’. Pirsig argues that ‘Quality’ is where subjective and objective experience meet. Quality is at the knife edge of experience.
    “Quality is the continuing stimulus which our environment puts upon us to create the world in which we live. All of it. Every last bit of it.”— Robert M. Pirsig

    Pirsig’s writings overlap a lot with Taoism and Eastern philosophy, to the extent that he likens Quality to the Tao. Quality is similarly undefinable, with Pirsig himself making a point of not defining it. Like the Tao, Plato’s Form of the Good, or the ‘good taste’ to which GitHub cofounder Scott Chacon recently attributed the platform’s success, it simply is.

    Despite its nebulous nature, Quality is something we recognise when we see it. Any given problem or question has an infinite number of potential solutions, but we are drawn to the best ones as water flows toward the sea. When in a hostile environment, we withdraw from it, responding to a lack of Quality around us.
    We are drawn to Quality, to the point at which subjective and objective, romantic and classical, meet. There is no map, there isn’t a bullet point list of instructions for finding it, but we know it when we’re there.
    A Quality Web
    So, what does all this look like in a web context? How can we recognize and pursue Quality for its own sake and resist the forces that pull us away from it?
    There are a lot of ways in which the web is not what we’d call a Quality environment. When we use social media sites with algorithms designed around provocation rather than communication, when we’re assailed with ads to such an extent that content feels secondary, and when AI-generated slop replaces artisanal craft, something feels off. We feel the absence of Quality.
    Here are a few habits that I think work in the service of more Quality on the web.
    Seek To Understand How Things Work
    I’m more guilty than anyone of diving into projects without taking time to step back and assess what I’m actually dealing with. As you can probably guess from the title, a decent amount of time in Zen and the Art of Motorcycle Maintenance is spent with the author as he tinkers with his motorcycle. Keeping it tuned up and in good repair makes it work better, of course, but the practice has deeper, more understated value, too. It lends itself to understanding.
    To maintain a motorcycle, one must have some idea of how it works. To take an engine apart and put it back together, one must know what each piece does and how it connects. For Pirsig, this process becomes almost meditative, offering perspective and clarity. The same is true of code. Rushing to the quick fix, be it due to deadlines or lethargy, will, at best, lead to a shoddy result and, in all likelihood, make things worse.
    “Black boxes” are as much a choice not to learn as they are something innately mysterious or unknowable. One of the reasons the web feels so ominous at times is that we don’t know how it works. Why am I being recommended this? Why are ads about ivory backscratchers following me everywhere? The inner workings of web tracking or AI models may not always be available, but just about any concept can be understood in principle.
    So, in concrete terms:

    Read the documentation, for the love of god. Sometimes we don’t understand how things work because the manual’s bad; more often, it’s because we haven’t looked at it.
    Follow pipelines from their start to their finish. How does data get from point A to point Z? What functions does it pass through, and how do they work?
    Do health work. Changing the oil in a motorcycle and bumping project dependencies amount to the same thing: a caring and long-term outlook. Shiny new gizmos are cool, but old ones that still run like a dream are beautiful.
    Always be studying. We are all works in progress, and clinging on to the way things were won’t make the brave new world go away. Be open to things you don’t know, and try not to treat those areas with suspicion.

    Bound up with this is nurturing a love for what might easily be mischaracterized as the ‘boring’ bits. Motorcycles are for road trips, and code powers products and services, but understanding how they work and tending to their inner workings will bring greater benefits in the long run.
    Reframe The Questions
    Much of the time, our work is understandably organized in terms of goals. OKRs, metrics, milestones, and the like help keep things organized and stuff happening. We shouldn’t get too hung up on them, though. Looking at the things we do in terms of Quality helps us reframe the process.
    The highest Quality solution isn’t always the same as the solution that performed best in A/B tests. The Dark Side of the Moon doesn’t exist because of focus groups. The test screenings for Se7en were dreadful. Reducing any given task to a single metric — or even a handful of metrics — hamstrings the entire process.
    Rory Sutherland suggests much the same thing in Are We Too Impatient to Be Intelligent? when he talks about looking at things as open-ended questions rather than reducing them to binary metrics to be optimized. Instead of fixating on making trains faster, wouldn’t it be more useful to ask, How do we improve their Quality?
    Challenge metrics. Good ones — which is to say, Quality ones — can handle the scrutiny. The bad ones deserve to crumble. Either way, you’re doing the world a service. With any given action you take on a website — from button design to database choices — ask yourself, Does this improve the Quality of what I’m working on? Not the bottom line. Not the conversion rate. Not egos. The Quality. Quality pulls us away from dark patterns and towards the delightful.
    The will to Quality is itself a paradigm shift. Aspiring to Quality removes a lot of noise from what is often a deafening environment. It may make things that once seemed big appear small.
    Seek To Wed Art With Science
    None of the above is to say that rules, best practices, conventions, and the like don’t have their place or are antithetical to Quality. They aren’t. To think otherwise is to slip into the kind of dualities Pirsig rails against in Zen.
    In a lot of ways, the main underlying theme in my What X Can Teach Us About Web Design pieces over the years has been how connected seemingly disparate worlds are. Yes, Vitruvius’s 1st-century tenets about architecture are useful to web design. Yes, newspapers can teach us much about grid systems and organising content. And yes, a piece of philosophical fiction from the 1970s holds many lessons about how to meet the challenges of artificial intelligence.
    Do not close your work off from atypical companions. Stuck on a highly technical problem? Perhaps a piece of children’s literature will help you to make the complicated simple. Designing a new homepage for your website? Look at some architecture.
    The best outcomes are harmonies of seemingly disparate worlds. Cling to nothing and throw nothing away.
    Make Time For Doing Nothing
    Here’s the rub. Just as Quality itself cannot be defined, the way to attain it is also not reducible to a neat bullet point list. Neither waterfall, agile, nor any other management framework holds the keys.
    If we are serious about putting Buddha in the machine, then we must allow ourselves time and space to not do things. Distancing ourselves from the myriad distractions of modern life puts us in states where the drift toward Quality is almost inevitable. In the absence of distracting forces, that’s where we head.

    Get away from the screen. We all have those moments where the solution to a problem appears as if out of nowhere. We may be on a walk or doing chores, then pop!
    Work on side projects. I’m not naive. I know some work environments are hostile to anything that doesn’t look like relentless delivery. Pet projects are ideal spaces for you to breathe. They’re yours, and you don’t have to justify them to anyone.

    As I go into more detail in “An Ode to Side Project Time,” there is immense good in non-doing, in letting the water clear. There is so much urgency, so much of the time. Stepping away from that is vital not just for well-being, but actually leads to better quality work too.
    From time to time, let go of your sense of urgency.
    Spirit Of Play
    Despite appearances, the web remains a deeply human experiment. The very best and very worst of our souls spill out into this place. It only makes sense, therefore, to think of the web — and how we shape it — in spiritual terms. We can’t leave those questions at the door.
    Zen and the Art of Motorcycle Maintenance has a lot to offer the modern web. It’s not a manifesto or a way of life, but it articulates an outlook on technology, art, and the self that many of us recognise on a deep, fundamental level. For anyone even vaguely intrigued by what’s been written here, I suggest reading the book. It’s much better than this article.
    Be inspired. So much of the web is beautiful. The highest-rated Awwwards profiles are just a fraction of the amazing things being made every day. Allow yourself to be delighted. Aspire to be delightful. Find things you care about and make them the highest form of themselves you can. And always do so in a spirit of play.
    We can carry those sentiments to the web. Do away with artificial divides between arts and science and bring out the best in both. Nurture a taste for Quality and let it guide the things you design and engineer. Allow yourself space for the water to clear in defiance of the myriad forces that would have you do otherwise.
    The Buddha, the Godhead, resides quite as comfortably in a social media feed or the inner machinations of cloud computing as at the top of a mountain or in the petals of a flower. To think otherwise is to demean the Buddha, which is to demean oneself.
    Other Resources

    Zen and the Art of Motorcycle Maintenance by Robert M. Pirsig
    The Beauty of Everyday Things by Soetsu Yanagi
    Tao Te Ching
    “The Creative Act” by Rick Rubin
    “Robert Pirsig & His Metaphysics of Quality” by Anthony McWatt
    “Dark Patterns in UX: How to Identify and Avoid Unethical Design Practices” by Daria Zaytseva

    #what #zen #art #motorcycle #maintenance
    What Zen And The Art Of Motorcycle Maintenance Can Teach Us About Web Design
    I think we, as engineers and designers, have a lot to gain by stepping outside of our worlds. That’s why in previous pieces I’ve been drawn towards architecture, newspapers, and the occasional polymath. Today, we stumble blindly into the world of philosophy. Bear with me. I think there’s something to it. In 1974, the American philosopher Robert M. Pirsig published a book called Zen and the Art of Motorcycle Maintenance. A flowing blend of autobiography, road trip diary, and philosophical musings, the book’s ‘chautauqua’ is an interplay between art, science, and self. Its outlook on life has stuck with me since I read it. The book often feels prescient, at times surreal to read given it’s now 50 years old. Pirsig’s reflections on arts vs. sciences, subjective vs. objective, and systems vs. people translate seamlessly to the digital age. There are lessons there that I think are useful when trying to navigate — and build — the web. Those lessons are what this piece is about. I feel obliged at this point to echo Pirsig and say that what follows should in no way be associated with the great body of factual information about Zen Buddhist practice. It’s not very factual in terms of web development, either. Buddha In The Machine Zen is written in stages. It sets a scene before making its central case. That backdrop is important, so I will mirror it here. The book opens with the start of a motorcycle road trip undertaken by Pirsig and his son. It’s a winding journey that takes them most of the way across the United States. Despite the trip being in part characterized as a flight from the machine, from the industrial ‘death force’, Pirsig takes great pains to emphasize that technology is not inherently bad or destructive. Treating it as such actually prevents us from finding ways in which machinery and nature can be harmonious. Granted, at its worst, the technological world does feel like a death force. In the book’s 1970s backdrop, it manifests as things like efficiency, profit, optimization, automation, growth — the kinds of words that, when we read them listed together, a part of our soul wants to curl up in the fetal position. In modern tech, those same forces apply. We might add things like engagement and tracking to them. Taken to the extreme, these forces contribute to the web feeling like a deeply inhuman place. Something cold, calculating, and relentless, yet without a fire in its belly. Impersonal, mechanical, inhuman. Faced with these forces, the impulse is often to recoil. To shut our laptops and wander into the woods. However, there is a big difference between clearing one’s head and burying it in the sand. Pirsig argues that “Flight from and hatred of technology is self-defeating.” To throw our hands up and step away from tech is to concede to the power of its more sinister forces. “The Buddha, the Godhead, resides quite as comfortably in the circuits of a digital computer or the gears of a cycle transmission as he does at the top of a mountain or in the petals of a flower. To think otherwise is to demean the Buddha — which is to demean oneself.”— Robert M. Pirsig Before we can concern ourselves with questions about what we might do, we must try our best to marshal how we might be. We take our heads and hearts with us wherever we go. If we characterize ourselves as powerless pawns, then that is what we will be. Where design and development are concerned, that means residing in the technology without losing our sense of self — or power. 
Technology is only as good or evil, as useful or as futile, as the people shaping it. Be it the internet or artificial intelligence, to direct blame or ire at the technology itself is to absolve ourselves of the responsibility to use it better. It is better not to demean oneself, I think. So, with the Godhead in mind, to business.

Classical And Romantic

A core concern of Zen and the Art of Motorcycle Maintenance is the tension between the arts and sciences. The two worlds have a long, rich history of squabbling and dysfunction. There is often mutual distrust, suspicion, and even hostility. This, again, is self-defeating. Hatred of technology is a symptom of it.

“A classical understanding sees the world primarily as the underlying form itself. A romantic understanding sees it primarily in terms of immediate appearance.” — Robert M. Pirsig

If we were to characterize the two as bickering siblings, familiar adjectives might start to appear:

Classical: dull, awkward, ugly, mechanical, cold
Romantic: frivolous, irrational, erratic, untrustworthy, fleeting

Anyone in the world of web design and development will have come up against these kinds of standoffs. Tensions arise between testing and intuition, best practices and innovation, structure and fluidity. Is design about following rules or breaking them? Treating such questions as binary is a fallacy. In doing so, we place ourselves in adversarial positions, whatever we consider ourselves to be. The best work comes from these worlds working together — from recognising they are bound. Steve Jobs was a famous advocate of this.

“Technology alone is not enough — it’s technology married with liberal arts, married with the humanities, that yields us the result that makes our heart sing.” — Steve Jobs

Whatever you may feel about Jobs himself, I think this sentiment is watertight. No one field holds all the keys. Leonardo da Vinci was a shining example of doing away with this needless siloing of worlds. He was a student of light, anatomy, art, architecture — everything and anything that interested him. And they complemented each other. Excellence is a question of harmony.

Is a motorcycle a romantic or classical artifact? Is it a machine or a symbol? A series of parts or a whole? It’s all these things and more. To say otherwise does a disservice to the motorcycle and deprives us of its full beauty. Just by reframing the relationship in this way, the kinds of adjectives that come to mind naturally shift toward more harmonious territory:

Classical: organized, scalable, reliable, efficient, replicable
Romantic: vibrant, evocative, playful, fun, expressive

And, of course, when we try thinking this way, the distinction itself starts feeling fuzzier. There is so much that they share. Pirsig posits that the division between the subjective and objective is one of the great missteps of the Greeks, one that has been embraced wholeheartedly by the West in the millennia since. That doesn’t have to be the lens, though. Perhaps monism, not dualism, is the way.

In a sense, technology marks the ultimate interplay between the arts and the sciences, the classical and the romantic. It is the human condition brought to you with ones and zeros. To separate those parts of it is to tear apart the thing itself. The same is true of the web. Is it romantic or classical? Art or science? Structured or anarchic? It is all those things and more. Engineering at its best is where all these apparent contradictions meet and become one. What is this place?
Well, that brings us to a core concept of Pirsig’s book: Quality.

Quality

The central concern of Zen and the Art of Motorcycle Maintenance is the ‘Metaphysics of Quality’. Pirsig argues that ‘Quality’ is where subjective and objective experience meet. Quality is at the knife edge of experience.

“Quality is the continuing stimulus which our environment puts upon us to create the world in which we live. All of it. Every last bit of it.” — Robert M. Pirsig

Pirsig’s writings overlap a lot with Taoism and Eastern philosophy, to the extent that he likens Quality to the Tao. Quality is similarly undefinable, with Pirsig himself making a point of not defining it. Like the Tao, Plato’s Form of the Good, or the ‘good taste’ to which GitHub cofounder Scott Chacon recently attributed the platform’s success, it simply is. Despite its nebulous nature, Quality is something we recognise when we see it. Any given problem or question has an infinite number of potential solutions, but we are drawn to the best ones as water flows toward the sea. When in a hostile environment, we withdraw from it, responding to a lack of Quality around us. We are drawn to Quality, to the point at which subjective and objective, romantic and classical, meet. There is no map, and there is no bullet point list of instructions for finding it, but we know it when we’re there.

A Quality Web

So, what does all this look like in a web context? How can we recognize and pursue Quality for its own sake and resist the forces that pull us away from it? There are a lot of ways in which the web is not what we’d call a Quality environment. When we use social media sites with algorithms designed around provocation rather than communication, when we’re assailed with ads to such an extent that content feels (and often is) secondary, and when AI-generated slop replaces artisanal craft, something feels off. We feel the absence of Quality. Here are a few habits that I think work in the service of more Quality on the web.

Seek To Understand How Things Work

I’m more guilty than anyone of diving into projects without taking time to step back and assess what I’m actually dealing with. As you can probably guess from the title, a decent amount of time in Zen and the Art of Motorcycle Maintenance is spent with the author as he tinkers with his motorcycle. Keeping it tuned up and in good repair makes it work better, of course, but the practice has deeper, more understated value, too. It lends itself to understanding. To maintain a motorcycle, one must have some idea of how it works. To take an engine apart and put it back together, one must know what each piece does and how it connects. For Pirsig, this process becomes almost meditative, offering perspective and clarity.

The same is true of code. Rushing to the quick fix, be it due to deadlines or lethargy, will, at best, lead to a shoddy result and, in all likelihood, make things worse. “Black boxes” are as much a choice not to learn as they are something innately mysterious or unknowable. One of the reasons the web feels so ominous at times is that we don’t know how it works. Why am I being recommended this? Why are ads about ivory backscratchers following me everywhere? The inner workings of web tracking or AI models may not always be available, but just about any concept can be understood in principle. So, in concrete terms:

Read the documentation, for the love of god. Sometimes we don’t understand how things work because the manual’s bad; more often, it’s because we haven’t looked at it.

Follow pipelines from their start to their finish. How does data get from point A to point Z? What functions does it pass through, and how do they work?

Do health work. Changing the oil in a motorcycle and bumping project dependencies amount to the same thing: a caring and long-term outlook. Shiny new gizmos are cool, but old ones that still run like a dream are beautiful.

Always be studying. We are all works in progress, and clinging on to the way things were won’t make the brave new world go away. Be open to things you don’t know, and try not to treat those areas with suspicion.

Bound up with this is nurturing a love for what might easily be mischaracterized as the ‘boring’ bits. Motorcycles are for road trips, and code powers products and services, but understanding how they work and tending to their inner workings will bring greater benefits in the long run.

Reframe The Questions

Much of the time, our work is understandably organized in terms of goals. OKRs, metrics, milestones, and the like help keep things organized and stuff happening. We shouldn’t get too hung up on them, though. Looking at the things we do in terms of Quality helps us reframe the process. The highest Quality solution isn’t always the same as the solution that performed best in A/B tests. The Dark Side of the Moon doesn’t exist because of focus groups. The test screenings for Se7en were dreadful. Reducing any given task to a single metric — or even a handful of metrics — hamstrings the entire process. Rory Sutherland suggests much the same thing in Are We Too Impatient to Be Intelligent? when he talks about looking at things as open-ended questions rather than reducing them to binary metrics to be optimized. Instead of fixating on making trains faster, wouldn’t it be more useful to ask, how do we improve their Quality?

Challenge metrics. Good ones — which is to say, Quality ones — can handle the scrutiny. The bad ones deserve to crumble. Either way, you’re doing the world a service. With any given action you take on a website — from button design to database choices — ask yourself, does this improve the Quality of what I’m working on? Not the bottom line. Not the conversion rate. Not egos. The Quality. Quality pulls us away from dark patterns and towards the delightful. The will to Quality is itself a paradigm shift. Aspiring to Quality removes a lot of noise from what is often a deafening environment. It may make things that once seemed big appear small.

Seek To Wed Art With Science (And Whatever Else Fits The Bill)

None of the above is to say that rules, best practices, conventions, and the like don’t have their place or are antithetical to Quality. They aren’t. To think otherwise is to slip into the kind of dualities Pirsig rails against in Zen. In a lot of ways, the main underlying theme in my What X Can Teach Us About Web Design pieces over the years has been how connected seemingly disparate worlds are. Yes, Vitruvius’s 1st-century tenets about architecture are useful to web design. Yes, newspapers can teach us much about grid systems and organising content. And yes, a piece of philosophical fiction from the 1970s holds many lessons about how to meet the challenges of artificial intelligence.

Do not close your work off from atypical companions. Stuck on a highly technical problem? Perhaps a piece of children’s literature will help you to make the complicated simple. Designing a new homepage for your website? Look at some architecture. The best outcomes are harmonies of seemingly disparate worlds. Cling to nothing and throw nothing away.
Make Time For Doing Nothing

Here’s the rub. Just as Quality itself cannot be defined, the way to attain it is also not reducible to a neat bullet point list. Neither waterfall, agile, nor any other management framework holds the keys. If we are serious about putting Buddha in the machine, then we must allow ourselves time and space to not do things. Distancing ourselves from the myriad distractions of modern life puts us in states where the drift toward Quality is almost inevitable. In the absence of distracting forces, that’s where we head.

Get away from the screen. We all have those moments where the solution to a problem appears as if out of nowhere. We may be on a walk or doing chores, then pop!

Work on side projects. I’m not naive. I know some work environments are hostile to anything that doesn’t look like relentless delivery. Pet projects are ideal spaces for you to breathe. They’re yours, and you don’t have to justify them to anyone.

As I go into more detail in “An Ode to Side Project Time,” there is immense good in non-doing, in letting the water clear. There is so much urgency, so much of the time. Stepping away from that is vital not just for well-being; it actually leads to better quality work too. From time to time, let go of your sense of urgency.

Spirit Of Play

Despite appearances, the web remains a deeply human experiment. The very best and very worst of our souls spill out into this place. It only makes sense, therefore, to think of the web — and how we shape it — in spiritual terms. We can’t leave those questions at the door. Zen and the Art of Motorcycle Maintenance has a lot to offer the modern web. It’s not a manifesto or a way of life, but it articulates an outlook on technology, art, and the self that many of us recognise on a deep, fundamental level. For anyone even vaguely intrigued by what’s been written here, I suggest reading the book. It’s much better than this article.

Be inspired. So much of the web is beautiful. The highest-rated Awwwards profiles are just a fraction of the amazing things being made every day. Allow yourself to be delighted. Aspire to be delightful. Find things you care about and make them the highest form of themselves you can. And always do so in a spirit of play. We can carry those sentiments to the web. Do away with artificial divides between arts and science and bring out the best in both. Nurture a taste for Quality and let it guide the things you design and engineer. Allow yourself space for the water to clear in defiance of the myriad forces that would have you do otherwise.

The Buddha, the Godhead, resides quite as comfortably in a social media feed or the inner machinations of cloud computing as at the top of a mountain or in the petals of a flower. To think otherwise is to demean the Buddha, which is to demean oneself.

Other Resources

Zen and the Art of Motorcycle Maintenance by Robert M. Pirsig
The Beauty of Everyday Things by Soetsu Yanagi
Tao Te Ching
“The Creative Act” by Rick Rubin
“Robert Pirsig & His Metaphysics of Quality” by Anthony McWatt
“Dark Patterns in UX: How to Identify and Avoid Unethical Design Practices” by Daria Zaytseva
  • How to thrive with AI agents — tips from an HP strategist

    The rapid rise of AI agents is sparking both excitement and alarm.
    Their power lies in their ability to complete tasks with increasing autonomy. Many can already pursue multi-step goals, make decisions, and interact with external systems — all with minimal human input. Teams of AI agents are beginning to collaborate, each handling a specialised role. As their autonomy increases, they’re poised to reshape countless business processes.
    Tech giants are heralding them as the future of the web. At Microsoft’s Build conference this week, the company declared that we have entered “the era of AI agents.” OpenAI CEO Sam Altman joined the event, proclaiming his lab’s new Codex tool as “a real agentic coding experience.” He called it “one of the biggest changes to programming that I’ve ever seen.”
    Beyond the hype, practical applications are rapidly emerging. AI agents are already assisting with various tasks, from code generation and cyber threat detection to customer service enquiries, shopping, and marketing campaigns.
    Before long, they could become comprehensive executive assistants — managing your emails, calendar, and projects. But to harness the opportunities, people need to prepare now.
    Cihangir Kocak is helping them do just that. A principal business and AI strategist at HP, Kocak guides organisations through digital transformation. He believes AI agents will unleash a new wave of opportunities.
    “We are going to a future where everyone will have an AI agent as an assistant,” he says.
    At TNW Conference this summer, Kocak will host two sessions on AI agents. On June 19, he’ll deliver a keynote on their rise. The next day, he’ll join Joost Bos, Senior AI Engineer at Deloitte, for a masterclass titled “Agentic AI: Architecting the Future of Business.”
    Ahead of the event, he shared a few of his tips.
    1. Understand what AI agents can do
    AI agents evolve large language models (LLMs) from passive responders into active problem-solvers. With tools, memory, and defined goals, they can complete complex tasks on their own.
    “Large language models act as the brains and AI agents as the hands, which means they can also act,” Kocak says. “They can do things for you autonomously.”
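    In code, that “brains and hands” split looks something like the loop below: the model proposes, the program acts, and the result is fed back in until the goal is met. This is a minimal sketch of the general pattern rather than any vendor’s API; the scripted call_llm stand-in exists only to make the control flow runnable.

```python
# Minimal agent loop: the LLM is the "brain", tool calls are the "hands".
# call_llm() is a scripted fake standing in for a real model endpoint,
# included only so the control flow runs end to end.

def call_llm(messages):
    """Fake model: first request a tool call, then produce a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "search_web", "arg": messages[0]["content"]}
    return {"content": "Done: " + messages[-1]["content"]}

def search_web(query):
    """Stand-in tool the agent can act with."""
    return f"(pretend search results for: {query})"

def run_agent(goal, max_steps=5):
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):              # cap autonomy: no endless loops
        reply = call_llm(messages)
        if "tool" not in reply:             # the "brain" has finished
            return reply["content"]
        result = search_web(reply["arg"])   # the "hands" act in the world
        messages.append({"role": "tool", "content": result})
    return "Stopped: step budget exhausted."

print(run_agent("Find a venue for a 20-person workshop"))
```

    The step budget is the important design choice: autonomy is bounded by the loop, which is also where oversight hooks naturally attach.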
    Agents can also collaborate. One might source products, another handle logistics, a third build your website, and a fourth write the marketing copy. In future, businesses may need their own agents to interact with others. Your AI assistant could collaborate with them to book the best service for your needs.
    Free courses from the likes of Hugging Face, Salesforce, and Microsoft are good starting points to explore the possibilities.
    After getting an understanding of the basics, you can put them into practice. 
    2. Start experimenting
    Kocak expects AI agents to rapidly reshape workplaces. “I believe that within five years, everything will be changed because of AI agents,” he says. “It might be even much less than five years — maybe two to three years.”
    Many companies are already shifting numerous tasks from humans to AI. In the near future, the people they do recruit may be expected to have experience working with AI agents.
    “Soon, a lot of these companies will ask for people who can work with AI agents,” says Kocak. His advice? “Get your hands dirty. Play with it, experiment with it — but do it consciously.”
    One tool he recommends is LM Studio, a desktop app for running LLMs locally. But his key recommendation is simply getting started.
    “Just do something to get a feel of it. Once you have that, it’s time for the next step.”
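    A first experiment along those lines can be a few lines of Python. The sketch below assumes LM Studio is running with a model loaded and its local server enabled on the default port, where it exposes an OpenAI-compatible API; the model name is a placeholder to swap for whatever you have loaded.

```python
# A minimal local-LLM "hello world" against LM Studio's built-in server.
# Assumes: LM Studio is running, a model is loaded, and the local server
# is enabled on its default port (1234). Requires: pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's OpenAI-compatible endpoint
    api_key="lm-studio",                  # any non-empty string works locally
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; use the identifier shown in LM Studio
    messages=[{"role": "user", "content": "In one line: what is an AI agent?"}],
)
print(response.choices[0].message.content)
```

    Because the endpoint speaks the same protocol as hosted APIs, code written against it can later be pointed at a cloud model by changing only the base URL.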
    3. Find use cases
    After testing some tools, Kocak suggests identifying where they can add value. He advises looking for tasks where AI can free up your time — and start small.
    “What costs you the most time? What don’t you like to do? When you figure out those things, you can look at how AI agents can help you.”
    Kocak uses local LLMs for privacy-sensitive tasks, and ChatGPT for public ones — like drafting LinkedIn posts in his own voice.
    “It saves at least half of my time,” he says.
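    That routing habit, sensitive work to a local model and public drafting to a hosted one, is simple to make explicit in code. Here is a sketch, with the sensitivity check and both model calls as deliberately naive placeholders:

```python
# Route prompts by sensitivity: private material stays on a local model,
# public drafting can go to a hosted one. The keyword check and the two
# model stubs are placeholders; real deployments need proper classification.
SENSITIVE_MARKERS = ("salary", "contract", "medical", "customer data")

def ask_local_model(prompt: str) -> str:
    return f"[local model handles] {prompt}"   # stand-in for an on-device LLM

def ask_cloud_model(prompt: str) -> str:
    return f"[cloud model handles] {prompt}"   # stand-in for a hosted API

def route(prompt: str) -> str:
    if any(marker in prompt.lower() for marker in SENSITIVE_MARKERS):
        return ask_local_model(prompt)         # sensitive: keep it on your machine
    return ask_cloud_model(prompt)             # public: convenience wins

print(route("Draft a LinkedIn post about our conference talk"))
print(route("Summarise this contract before the salary negotiation"))
```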
    4. Focus on the data
    The real magic of AI agents emerges when they’re personalised with your own data. Generic tools like ChatGPT can handle broad tasks, but if you want something tailored, agents grounded in data you choose can offer sharper performance.
    That internal knowledge can turn a generic agent into a bespoke powerhouse. “What makes an AI solution special is when you feed it with your own data,” says Kocak. “Then you will have a solution that can operate differently than anything else.”
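    Short of training anything, the simplest way to feed an agent your own data is to retrieve relevant snippets and put them in the prompt. Below is a toy sketch of that idea, with a keyword-overlap scorer standing in for a real embedding or vector search:

```python
# Toy retrieval-augmented prompt: ground the model in your own notes.
# The word-overlap scorer is a stand-in for embeddings / vector search.
NOTES = [
    "Our refund policy allows returns within 30 days.",
    "Support hours are 9:00-17:00 CET on weekdays.",
    "Enterprise plans include a dedicated account manager.",
]

def top_snippet(question: str) -> str:
    """Pick the note sharing the most words with the question."""
    words = set(question.lower().split())
    return max(NOTES, key=lambda doc: len(words & set(doc.lower().split())))

def build_prompt(question: str) -> str:
    return (
        "Answer using only this company context:\n"
        f"{top_snippet(question)}\n\n"
        f"Question: {question}"
    )

print(build_prompt("What are your support hours?"))
```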
    5. Maintain human oversight
    Although AI agents can act autonomously, human oversight remains vital. Agents are powerful, but not flawless. Giving them too much freedom is risky.
    “It’s wise to have a human in the room,” he says. “The future will be AI agents plus humans — that will be the most beneficial combination.”
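    Keeping a human in the room can be as lightweight as an approval gate between what an agent proposes and any action with side effects. A minimal sketch, with send_email as an illustrative stand-in for the real action:

```python
# Human-in-the-loop gate: the agent proposes, a person approves.
def approve(action: str) -> bool:
    return input(f"Agent wants to: {action}. Allow? [y/N] ").strip().lower() == "y"

def send_email(to: str, body: str) -> None:
    print(f"(email sent to {to}: {body})")  # stand-in for the real side effect

proposed = ("send_email", "client@example.com", "Here is the draft contract.")
if approve(f"{proposed[0]} -> {proposed[1]}"):
    send_email(proposed[1], proposed[2])
else:
    print("Action blocked pending human review.")
```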
    6. Stay secure
    As AI tools become more accessible, security concerns are mounting. Among the threats are data leaks, adversarial attacks, and agents going off the rails. There’s also the risk of losing a competitive edge. 
    “External parties can take your data and send it to their servers,” says Kocak. “They can then use all sensitive data in your conversations to optimise their models.”
    Many risks can be reduced by deploying open-source, local models — especially for sensitive data and use cases.
    “If you really want a competitive advantage, you need to run and own your AI. That sets you apart,” says Kocak.
    He adds that people shouldn’t be fearful, but conscious. Closed-source, cloud-based tools such as ChatGPT remain useful — but sensitive data and tasks may require more secure alternatives.
    “Just be aware of what information you enter. And remember there is another, better option, of running your large language model locally.”
    7. Embrace the future
    As the industrial revolution and factory automation did before them, AI agents will transform jobs. Some roles will disappear — but new ones will emerge.
    A welder could become an operator of robotic welders. A data entry clerk might oversee AI agents. Kocak is optimistic about the possibilities.
    “Our core capabilities as humans — like being creative, finding solutions out of the box, and empathy — will come to the forefront.”
    These tips are just a glimpse of what Kocak will provide at TNW Conference. If you want to check out his sessions — or anything else on the event agenda — we have a special offer for you. Use the code TNWXMEDIA2025 at the ticket checkout to get 30% off.

    Story by

    Thomas Macaulay

    Managing editor

    Thomas is the managing editor of TNW. He leads our coverage of European tech and oversees our talented team of writers. Away from work, he enjoys playing chess (badly) and the guitar (even worse).

  • Why Trump’s ‘Golden Dome’ Won’t Shield the U.S. from Nuclear Strikes

    May 21, 2025 | 10 min read

Why Some Experts Call Trump’s ‘Golden Dome’ Missile Shield a Dangerous Fantasy

The White House’s -billion plan to protect the U.S. from nuclear annihilation will probably cost much more — and deliver far less — than has been claimed, says nuclear arms expert Jeffrey Lewis

By Lee Billings

Photo: U.S. President Donald Trump speaks in the Oval Office of the White House on May 20, 2025, during a briefing announcing his administration’s plan for the “Golden Dome” missile defense shield. Jim Watson/AFP via Getty Images

During a briefing from the Oval Office this week, President Donald Trump revealed his administration’s plan for “Golden Dome” — an ambitious high-tech system meant to shield the U.S. from ballistic, cruise and hypersonic missile attacks launched by foreign adversaries. Flanked by senior officials, including Secretary of Defense Pete Hegseth and the project’s newly selected leader, Gen. Michael Guetlein of the U.S. Space Force, Trump announced that Golden Dome will be completed within three years at a cost of billion.

The program, which was among Trump’s campaign promises, derives its name from the Iron Dome missile defense system of Israel — a nation that’s geographically 400 times smaller than the U.S. Protecting the vastness of the U.S. demands very different capabilities than those of Iron Dome, which has successfully shot down rockets and missiles using ground-based interceptors. Most notably, Trump’s Golden Dome would need to expand into space — making it a successor to the Strategic Defense Initiative (SDI) pursued by the Reagan administration in the 1980s. Better known by the mocking nickname “Star Wars,” SDI sought to neutralize the threat from the Soviet Union’s nuclear-warhead-tipped intercontinental ballistic missiles by using space-based interceptors that could shoot them down midflight. But fearsome technical challenges kept SDI from getting anywhere close to that goal, despite tens of billions of dollars of federal expenditures.

“We will truly be completing the job that President Reagan started 40 years ago, forever ending the missile threat to the American homeland,” Trump said during the briefing. Although the announcement was short on technical details, Trump also said Golden Dome “will deploy next-generation technologies across the land, sea and space, including space-based sensors and interceptors.” The program, which Guetlein has compared to the scale of the Manhattan Project in past remarks, has been allotted billion in a Republican spending bill that has yet to pass in Congress. But Golden Dome may ultimately cost much more than Trump’s staggering -billion sum. An independent assessment by the Congressional Budget Office estimates its price tag could be as high as billion, and the program has drawn domestic and international outcries that it risks sparking a new, globe-destabilizing arms race and weaponizing Earth’s fragile orbital environment.
To get a better sense of what’s at stake — and whether Golden Dome has a better chance of success than its failed forebears — Scientific American spoke with Jeffrey Lewis, an expert on the geopolitics of nuclear weaponry at the James Martin Center for Nonproliferation Studies at the Middlebury Institute of International Studies.

It’s been a while, but when last I checked, most experts considered this sort of plan a nonstarter because the U.S. is simply too big of a target. Has something changed?

Well, yes and no. The killer argument against space-based interceptors in the 1980s was that it would take thousands of them, and there was just no way to put up that many satellites. Today that’s no longer true. SpaceX alone has put up more than 7,000 Starlink satellites. Launch costs are much cheaper now, and there are more launch vehicles available. So, for the first time, you can say, “Oh, well, I could have a 7,000-satellite constellation. Do I want to do that?” Whereas, when the Reagan administration was talking about this, it was just la-la land. But let’s be clear: this does not solve all the other problems with the general idea — or the Golden Dome version in particular.

What are some of those other problems?

Just talking about space-based interceptors, there are a couple my colleagues and I have pointed out. We ran some numbers using the old SDI-era calculation from Ed Teller and Greg Canavan — so we couldn’t be accused of using some hippie version of the calculation, right? And what this and other independent assessments show is that the number of interceptors you need is super-duper sensitive to lots of things. For instance, it’s not like this is a “one satellite to one missile” situation — because the physics demands that these satellites ... have to be in low-Earth orbit, and that means they’re going to be constantly moving over different parts of the planet. So if you want to defend against just one missile, you still need a whole constellation. And if you want to defend against two missiles, then you basically need twice as many interceptors, and so on.

You probably have to shoot down missiles during the boost phase, when the warheads are still attached. For SDI, the U.S. was dealing with Soviet liquid-fueled missiles that would boost, or burn, for about four minutes. Well, modern ones burn for less than three — that’s a whole minute that you no longer have. This is actually much worse than it sounds because you’re probably unable to shoot for the first minute or so. Even with modern detectors, much better than what we had in the 1980s, you may not see the missile until it rises above the clouds. And once it does, your sensors, your computers, still have to say, “Aha! That is a missile!” And then you have to ensure that you’re not shooting down some ordinary space launch — so the system says, “I see a missile. May I shoot at it, please?” And someone or something has to give the go-ahead. So let’s just say you’ll have a good minute to shoot it down; this means your space-based interceptor has to be right there, ready to go, right? But by the time you’re getting permission to shoot, the satellite that was overhead to do that is now too far away, and so the next satellite has to be coming there. This scales up really, really fast.

Presumably artificial intelligence and other technologies could be leveraged to make that sort of command and control more agile and responsive. But clearly there are still limits here — AI can’t be some sort of panacea.

Sure, that’s right. But technological progress overall hasn’t made the threat environment better. Instead it’s gotten much worse. Let’s get back to the sheer physics-induced numbers for a moment, which AI can’t really do much about. That daunting scaling I mentioned also depends on the quality of your interceptors, your kill vehicles — which, by the way, are still going to be grotesquely expensive even if launch costs are low. If your interceptors can rapidly accelerate to eight or 10 kilometers per second, your constellation can be smaller. If they only reach 4 km/s, your constellation has to be huge. The point is: any claim that you can do this with relatively low numbers — let’s say 2,000 interceptors — assumes a series of improbable miracles occurring in quick succession to deliver the very best outcome that could possibly happen. So it’s not going to happen that way, even if, in principle, it could.

So you’re telling me there’s a chance! No, seriously, I see what you mean. The arguments in favor of this working seem rather contrived. No system is perfect, and just one missile getting through can still have catastrophic results. And we haven’t even talked about adversarial countermeasures yet.

There’s a joke that’s sometimes made about this: “We play chess, and they don’t move their pieces.” That seems to be the operative assumption here: that other nations will sit idly by as we build a complex, vulnerable system to nullify any strategic nuclear capability they have. And of course, it’s not valid at all. Why do you think the Chinese are building massive fields of missile silos? It’s to counteract or overwhelm this sort of thing. Why do you think the Russians are making moves to put a nuclear weapon in orbit? It’s to mass-kill any satellite constellation that would shoot down their missiles. Golden Dome proponents may say, “Oh, we’ll shoot that down, too, before it goes off.” Well, good luck. You put a high-yield nuclear weapon on a booster, and the split second it gets above the clouds, sure, you might see it — but now it sees you, too, before you can shoot. All it has to do at that point is detonate to blow a giant hole in your defenses, and that’s game over. And by the way, this rosy scenario assumes your adversaries don’t interfere with all your satellites passing over their territory in peacetime. We know that won’t be the case — they’ll light them up with sensor-dazzling lasers, at minimum!

You’ve compared any feasible space-based system to Starlink and noted that, similar to Starlink, these interceptors will need to be in low-Earth orbit. That means their orbits will rapidly decay from atmospheric drag, so just like Starlink’s satellites, they’d need to be constantly replaced, too, right?

Ha, yes, that’s right. With Starlink, you’re looking at a three-to-five-year life cycle, which means annually replacing one third to one fifth of a constellation. So let’s say Golden Dome is 10,000 satellites; this would mean the best-case scenario is that you’re replacing 2,000 per year. Now, let’s just go along with what the Trump administration is saying, that they can get these things really cheap. I’m going to guess a “really cheap” mass-produced kill vehicle would still run you million a pop, easily. Just multiply million by 2,000, and your answer is billion. So under these assumptions, we’d be spending billion per year just to maintain the constellation.
That’s not even factoring in operations.And that’s not to mention associated indirect costs from potentially nasty effects on the upper atmosphere and the orbital environment from all the launches and reentries.That, yes—among many other costly things.I have to ask: If fundamental physics makes this extremely expensive idea blatantly incapable of delivering on its promises, what’s really going on when the U.S. president and the secretary of defense announce their intention to pump billion into it for a three-year crash program? Some critics claim this kind of thing is really about transferring taxpayer dollars to a few big aerospace companies and other defense contractors.Well, I wouldn’t say it’s quite that simple.Ballistic missile defense is incredibly appealing to some people for reasons besides money. In technical terms, it’s an elegant solution to the problem of nuclear annihilation—even though it’s not really feasible. For some people, it’s just cool, right? And at a deeper level, many people just don’t like the concept of deterrence—mutual assured destruction and all that—because, remember, the status quo is this: If Russia launches 1,000 nuclear weapons at us—or 100 or 10 or even just one—then we are going to murder every single person in Russia with an immediate nuclear counterattack. That’s how deterrence works. We’re not going to wait for those missiles to land so we can count up our dead to calibrate a more nuanced response. That’s official U.S. policy, and I don’t think anyone wants it to be this way forever. But it’s arguably what’s prevented any nuclear exchange from occurring to date.But not everyone believes in the power of deterrence, and so they’re looking for some kind of technological escape. I don’t think this fantasy is that different from Elon Musk thinking he’s going to go live on Mars when climate change ruins Earth: In both cases, instead of doing the really hard things that seem necessary to actually make this planet better, we’re talking about people who think they can just buy their way out of the problem. A lot of people—a lot of men, especially—really hate vulnerability, and this idea that you can just tech your way out of it is very appealing to them. You know, “Oh, what vulnerability? Yeah, there’s an app for that.”You’re saying this isn’t about money?Well, I imagine this is going to be good for at least a couple of SpaceX Falcon Heavy or Starship launches per year for Elon Musk. And you don’t have to do too many of those launches for the value proposition to work out: You build and run Starlink, you put up another constellation of space-based missile defense interceptors, and suddenly you’ve got a viable business model for these fancy huge rockets that can also take you to Mars, right?Given your knowledge of science history—of how dispassionate physics keeps showing space-based ballistic missile defense is essentially unworkable, yet the idea just keeps coming back—how does this latest resurgence make you feel?When I was younger, I would have been frustrated, but now I just accept human beings don’t learn. We make the same mistakes over and over again. You have to laugh at human folly because I do think most of these people are sincere, you know. They’re trying to get rich, sure, but they’re also trying to protect the country, and they’re doing it through ways they think about the world—which admittedly are stupid. But, hey, they’re trying. 
It’s very disappointing, but if you just laugh at them, they’re quite amusing.I think most people would have trouble laughing about something as devastating as nuclear war—or about an ultraexpensive plan to protect against it that’s doomed to failure and could spark a new arms race.I guess if you’re looking for a hopeful thought, it’s that we’ve tried this before, and it didn’t really work, and that’s likely to happen again.So how do you think it will actually play out this time around?I think this will be a gigantic waste of money that collapses under its own weight.They’ll put up a couple of interceptors, and they’ll test those against a boosting ballistic missile, and they’ll eventually get a hit. And they’ll use that to justify putting up more, and they’ll probably even manage to make a thin constellation—with the downside, of course, being that the Russians and the Chinese and the North Koreans and everybody else will make corresponding investments in ways to kill this system.And then it will start to really feel expensive, in part because it will be complicating and compromising things like Starlink and other commercial satellite constellations—which, I’d like to point out, are almost certainly uninsured in orbit because you can’t insure against acts of war. So think about that: if the Russians or anyone else detonate a nuclear weapon in orbit because of something like Golden Dome, Elon Musk’s entire constellation is dead, and he’s probably just out the cash.The fact is: these days we rely on space-based assets much more than most people realize, yet Earth orbit is such a fragile environment that we could muck it up in many different ways that carry really nasty long-term consequences. I worry about that a lot. Space used to be a benign environment, even throughout the entire cold war, but having an arms race there will make it malign. So Golden Dome is probably going to make everyone’s life a little bit more dangerous—at least until we, hopefully, come to our senses and decide to try something different.
    #why #trumps #golden #dome #wont
    Why Trump’s ‘Golden Dome’ Won’t Shield the U.S. from Nuclear Strikes
    May 21, 202510 min readWhy Some Experts Call Trump’s ‘Golden Dome’ Missile Shield a Dangerous FantasyThe White House’s -billion plan to protect the U.S. from nuclear annihilation will probably cost much more—and deliver far less—than has been claimed, says nuclear arms expert Jeffrey LewisBy Lee Billings U.S. President Donald Trump speaks in the Oval Office of the White House on May 20, 2025, during a briefing announcing his administration’s plan for the “Golden Dome” missile defense shield. Jim Watson/AFP via Getty ImagesDuring a briefing from the Oval Office this week, President Donald Trump revealed his administration’s plan for “Golden Dome”—an ambitious high-tech system meant to shield the U.S. from ballistic, cruise and hypersonic missile attacks launched by foreign adversaries. Flanked by senior officials, including Secretary of Defense Pete Hegseth and the project’s newly selected leader, Gen. Michael Guetlein of the U.S. Space Force, Trump announced that Golden Dome will be completed within three years at a cost of billion.The program, which was among Trump’s campaign promises, derives its name from the Iron Dome missile defense system of Israel—a nation that’s geographically 400 times smaller than the U.S. Protecting the vastness of the U.S. demands very different capabilities than those of Iron Dome, which has successfully shot down rockets and missiles using ground-based interceptors. Most notably, Trump’s Golden Dome would need to expand into space—making it a successor to the Strategic Defense Initiativepursued by the Reagan administration in the 1980s. Better known by the mocking nickname “Star Wars,” SDI sought to neutralize the threat from the Soviet Union’s nuclear-warhead-tipped intercontinental ballistic missiles by using space-based interceptors that could shoot them down midflight. But fearsome technical challenges kept SDI from getting anywhere close to that goal, despite tens of billions of dollars of federal expenditures.“We will truly be completing the job that President Reagan started 40 years ago, forever ending the missile threat to the American homeland,” Trump said during the briefing. Although the announcement was short on technical details, Trump also said Golden Dome “will deploy next-generation technologies across the land, sea and space, including space-based sensors and interceptors.” The program, which Guetlein has compared to the scale of the Manhattan Project in past remarks, has been allotted billion in a Republican spending bill that has yet to pass in Congress. But Golden Dome may ultimately cost much more than Trump’s staggering -billion sum. An independent assessment by the Congressional Budget Office estimates its price tag could be as high as billion, and the program has drawn domestic and international outcries that it risks sparking a new, globe-destabilizing arms race and weaponizing Earth’s fragile orbital environment.On supporting science journalismIf you're enjoying this article, consider supporting our award-winning journalism by subscribing. 
By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.To get a better sense of what’s at stake—and whether Golden Dome has a better chance of success than its failed forebears—Scientific American spoke with Jeffrey Lewis, an expert on the geopolitics of nuclear weaponry at the James Martin Center for Nonproliferation Studies at the Middlebury Institute of International Studies.It’s been a while, but when last I checked, most experts considered this sort of plan a nonstarter because the U.S. is simply too big of a target. Has something changed?Well, yes and no. The killer argument against space-based interceptors in the 1980s was that it would take thousands of them, and there was just no way to put up that many satellites. Today that’s no longer true. SpaceX alone has put up more than 7,000 Starlink satellites. Launch costs are much cheaper now, and there are more launch vehicles available. So, for the first time, you can say, “Oh, well, I could have a 7,000-satellite constellation. Do I want to do that?” Whereas, when the Reagan administration was talking about this, it was just la-la land.But let’s be clear: this does not solve all the other problems with the general idea—or the Golden Dome version in particular.What are some of those other problems?Just talking about space-based interceptors, there are a couplemy colleagues and I have pointed out. We ran some numbers using the old SDI-era calculation fromEd Teller and Greg Canavan—so we couldn’t be accused of using some hippie version of the calculation, right? And what this and other independent assessments show is that the number of interceptors you need is super-duper sensitive to lots of things. For instance, it’s not like this is a “one satellite to one missile” situation—because the physics demands that these satellites ... have to be in low-Earth orbit, and that means they’re going to be constantly moving over different parts of the planet.So if you want to defend against just one missile, you still need a whole constellation. And if you want to defend against two missiles, then you basically need twice as many interceptors, and so on.You probably have to shoot down missiles during the boost phase, when the warheads are still attached. For SDI, the U.S. was dealing with Soviet liquid-fueled missiles that would boost, or burn, for about four minutes. Well, modern ones burn for less than three—that’s a whole minute that you no longer have. This is actually much worse than it sounds because you’re probably unable to shoot for the first minute or so. Even with modern detectorsmuch better thanwe had in the 1980s, you may not see the missile until it rises above the clouds. And once it does, your sensors, your computers, still have to say, “Aha! That is a missile!” And then you have to ensure that you’re not shooting down some ordinary space launch—so the system says, “I see a missile. May I shoot at it, please?” And someone or something has to give the go-ahead. So let’s just say you’ll have a good minute to shoot it down; this means your space-based interceptor has to be right there, ready to go, right? But by the time you’re getting permission to shoot, the satellite that was overhead to do that is now too far away, and so the next satellite has to be coming there. This scales up really, really fast.Presumably artificial intelligence and other technologies could be leveraged to make that sort of command and control more agile and responsive. 
But clearly there are still limits here—AI can’t be some sort of panacea.Sure, that’s right. But technological progress overall hasn’t made the threat environment better. Instead it’s gotten much worse.Let’s get back to the sheer physics-induced numbers for a moment, which AI can’t really do much about. That daunting scaling I mentioned also depends on the quality of your interceptors, your kill vehicles—which, by the way, are still going to be grotesquely expensive even if launch costs are low. If your interceptors can rapidly accelerate to eight or 10 kilometers per second, your constellation can be smaller. If they only reach 4 km/s, your constellation has to be huge.The point is: any claim that you can do this with relatively low numbers—let’s say 2,000 interceptors—assumes a series of improbable miracles occurring in quick succession to deliver the very best outcome that could possibly happen. So it’s not going to happen that way, even if, in principle, it could.So you’re telling me there’s a chance! No, seriously, I see what you mean. The arguments in favor of this working seem rather contrived. No system is perfect, and just one missile getting through can still have catastrophic results. And we haven’t even talked about adversarial countermeasures yet.There’s a joke that’s sometimes made about this: “We play chess, and they don’t move their pieces.” That seems to be the operative assumption here: that other nations will sit idly by as we build a complex, vulnerable system to nullify any strategic nuclear capability they have. And of course, it’s not valid at all. Why do you think the Chinese are building massive fields of missile silos? It’s to counteract or overwhelm this sort of thing. Why do you think the Russians are making moves to put a nuclear weapon in orbit? It’s to mass kill any satellite constellation that would shoot down their missiles.Golden Dome proponents may say, “Oh, we’ll shoot that down, too, before it goes off.” Well, good luck. You put a high-yield nuclear weapon on a booster, and the split second it gets above the clouds, sure, you might see it—but now it sees you, too, before you can shoot. All it has to do at that point is detonate to blow a giant hole in your defenses, and that’s game over. And by the way, this rosy scenario assumes your adversaries don’t interfere with all your satellites passing over their territory in peacetime. We know that won’t be the case—they’ll light them up with sensor-dazzling lasers, at minimum!You’ve compared any feasible space-based system to Starlink and noted that, similar to Starlink, these interceptors will need to be in low-Earth orbit. That means their orbits will rapidly decay from atmospheric drag, so just like Starlink’s satellites, they’d need to be constantly replaced, too, right?Ha, yes, that’s right. With Starlink, you’re looking at a three-to-five-year life cycle, which means annually replacing one third to one fifth of a constellation.So let’s say Golden Dome is 10,000 satellites; this would mean the best-case scenario is that you’re replacing 2,000 per year. Now, let’s just go along with what the Trump administration is saying, that they can get these things really cheap. I’m going to guess a “really cheap” mass-produced kill vehicle would still run you million a pop, easily. Just multiply million by 2,000, and your answer is billion. So under these assumptions, we’d be spending billion per year just to maintain the constellation. 
That’s not even factoring in operations.And that’s not to mention associated indirect costs from potentially nasty effects on the upper atmosphere and the orbital environment from all the launches and reentries.That, yes—among many other costly things.I have to ask: If fundamental physics makes this extremely expensive idea blatantly incapable of delivering on its promises, what’s really going on when the U.S. president and the secretary of defense announce their intention to pump billion into it for a three-year crash program? Some critics claim this kind of thing is really about transferring taxpayer dollars to a few big aerospace companies and other defense contractors.Well, I wouldn’t say it’s quite that simple.Ballistic missile defense is incredibly appealing to some people for reasons besides money. In technical terms, it’s an elegant solution to the problem of nuclear annihilation—even though it’s not really feasible. For some people, it’s just cool, right? And at a deeper level, many people just don’t like the concept of deterrence—mutual assured destruction and all that—because, remember, the status quo is this: If Russia launches 1,000 nuclear weapons at us—or 100 or 10 or even just one—then we are going to murder every single person in Russia with an immediate nuclear counterattack. That’s how deterrence works. We’re not going to wait for those missiles to land so we can count up our dead to calibrate a more nuanced response. That’s official U.S. policy, and I don’t think anyone wants it to be this way forever. But it’s arguably what’s prevented any nuclear exchange from occurring to date.But not everyone believes in the power of deterrence, and so they’re looking for some kind of technological escape. I don’t think this fantasy is that different from Elon Musk thinking he’s going to go live on Mars when climate change ruins Earth: In both cases, instead of doing the really hard things that seem necessary to actually make this planet better, we’re talking about people who think they can just buy their way out of the problem. A lot of people—a lot of men, especially—really hate vulnerability, and this idea that you can just tech your way out of it is very appealing to them. You know, “Oh, what vulnerability? Yeah, there’s an app for that.”You’re saying this isn’t about money?Well, I imagine this is going to be good for at least a couple of SpaceX Falcon Heavy or Starship launches per year for Elon Musk. And you don’t have to do too many of those launches for the value proposition to work out: You build and run Starlink, you put up another constellation of space-based missile defense interceptors, and suddenly you’ve got a viable business model for these fancy huge rockets that can also take you to Mars, right?Given your knowledge of science history—of how dispassionate physics keeps showing space-based ballistic missile defense is essentially unworkable, yet the idea just keeps coming back—how does this latest resurgence make you feel?When I was younger, I would have been frustrated, but now I just accept human beings don’t learn. We make the same mistakes over and over again. You have to laugh at human folly because I do think most of these people are sincere, you know. They’re trying to get rich, sure, but they’re also trying to protect the country, and they’re doing it through ways they think about the world—which admittedly are stupid. But, hey, they’re trying. 
It’s very disappointing, but if you just laugh at them, they’re quite amusing.I think most people would have trouble laughing about something as devastating as nuclear war—or about an ultraexpensive plan to protect against it that’s doomed to failure and could spark a new arms race.I guess if you’re looking for a hopeful thought, it’s that we’ve tried this before, and it didn’t really work, and that’s likely to happen again.So how do you think it will actually play out this time around?I think this will be a gigantic waste of money that collapses under its own weight.They’ll put up a couple of interceptors, and they’ll test those against a boosting ballistic missile, and they’ll eventually get a hit. And they’ll use that to justify putting up more, and they’ll probably even manage to make a thin constellation—with the downside, of course, being that the Russians and the Chinese and the North Koreans and everybody else will make corresponding investments in ways to kill this system.And then it will start to really feel expensive, in part because it will be complicating and compromising things like Starlink and other commercial satellite constellations—which, I’d like to point out, are almost certainly uninsured in orbit because you can’t insure against acts of war. So think about that: if the Russians or anyone else detonate a nuclear weapon in orbit because of something like Golden Dome, Elon Musk’s entire constellation is dead, and he’s probably just out the cash.The fact is: these days we rely on space-based assets much more than most people realize, yet Earth orbit is such a fragile environment that we could muck it up in many different ways that carry really nasty long-term consequences. I worry about that a lot. Space used to be a benign environment, even throughout the entire cold war, but having an arms race there will make it malign. So Golden Dome is probably going to make everyone’s life a little bit more dangerous—at least until we, hopefully, come to our senses and decide to try something different. #why #trumps #golden #dome #wont
    WWW.SCIENTIFICAMERICAN.COM
    Why Trump’s ‘Golden Dome’ Won’t Shield the U.S. from Nuclear Strikes
    May 21, 2025 | 10 min read
    Why Some Experts Call Trump’s ‘Golden Dome’ Missile Shield a Dangerous Fantasy
    The White House’s $175-billion plan to protect the U.S. from nuclear annihilation will probably cost much more—and deliver far less—than has been claimed, says nuclear arms expert Jeffrey Lewis
    By Lee Billings
    U.S. President Donald Trump speaks in the Oval Office of the White House on May 20, 2025, during a briefing announcing his administration’s plan for the “Golden Dome” missile defense shield. Jim Watson/AFP via Getty Images
    During a briefing from the Oval Office this week, President Donald Trump revealed his administration’s plan for “Golden Dome”—an ambitious high-tech system meant to shield the U.S. from ballistic, cruise and hypersonic missile attacks launched by foreign adversaries. Flanked by senior officials, including Secretary of Defense Pete Hegseth and the project’s newly selected leader, Gen. Michael Guetlein of the U.S. Space Force, Trump announced that Golden Dome will be completed within three years at a cost of $175 billion.
    The program, which was among Trump’s campaign promises, derives its name from the Iron Dome missile defense system of Israel—a nation that’s geographically 400 times smaller than the U.S. Protecting the vastness of the U.S. demands very different capabilities than those of Iron Dome, which has successfully shot down rockets and missiles using ground-based interceptors. Most notably, Trump’s Golden Dome would need to expand into space—making it a successor to the Strategic Defense Initiative (SDI) pursued by the Reagan administration in the 1980s. Better known by the mocking nickname “Star Wars,” SDI sought to neutralize the threat from the Soviet Union’s nuclear-warhead-tipped intercontinental ballistic missiles by using space-based interceptors that could shoot them down midflight. But fearsome technical challenges kept SDI from getting anywhere close to that goal, despite tens of billions of dollars of federal expenditures.
    “We will truly be completing the job that President Reagan started 40 years ago, forever ending the missile threat to the American homeland,” Trump said during the briefing. Although the announcement was short on technical details, Trump also said Golden Dome “will deploy next-generation technologies across the land, sea and space, including space-based sensors and interceptors.” The program, which Guetlein has compared to the scale of the Manhattan Project in past remarks, has been allotted $25 billion in a Republican spending bill that has yet to pass in Congress. But Golden Dome may ultimately cost much more than Trump’s staggering $175-billion sum. An independent assessment by the Congressional Budget Office estimates its price tag could be as high as $542 billion, and the program has drawn domestic and international outcries that it risks sparking a new, globe-destabilizing arms race and weaponizing Earth’s fragile orbital environment.
    To get a better sense of what’s at stake—and whether Golden Dome has a better chance of success than its failed forebears—Scientific American spoke with Jeffrey Lewis, an expert on the geopolitics of nuclear weaponry at the James Martin Center for Nonproliferation Studies at the Middlebury Institute of International Studies.
    [An edited transcript of the interview follows.]
    It’s been a while, but when last I checked, most experts considered this sort of plan a nonstarter because the U.S. is simply too big of a target. Has something changed?
    Well, yes and no. The killer argument against space-based interceptors in the 1980s was that it would take thousands of them, and there was just no way to put up that many satellites. Today that’s no longer true. SpaceX alone has put up more than 7,000 Starlink satellites. Launch costs are much cheaper now, and there are more launch vehicles available. So, for the first time, you can say, “Oh, well, I could have a 7,000-satellite constellation. Do I want to do that?” Whereas, when the Reagan administration was talking about this, it was just la-la land.
    But let’s be clear: this does not solve all the other problems with the general idea—or the Golden Dome version in particular.
    What are some of those other problems?
    Just talking about space-based interceptors, there are a couple [of issues that] my colleagues and I have pointed out. We ran some numbers using the old SDI-era calculation from [SDI physicists] Ed Teller and Greg Canavan—so we couldn’t be accused of using some hippie version of the calculation, right? And what this and other independent assessments show is that the number of interceptors you need is super-duper sensitive to lots of things. For instance, it’s not like this is a “one satellite to one missile” situation—because the physics demands that these satellites ... have to be in low-Earth orbit, and that means they’re going to be constantly moving over different parts of the planet. So if you want to defend against just one missile, you still need a whole constellation. And if you want to defend against two missiles, then you basically need twice as many interceptors, and so on.
    You probably have to shoot down missiles during the boost phase, when the warheads are still attached. For SDI, the U.S. was dealing with Soviet liquid-fueled missiles that would boost, or burn, for about four minutes. Well, modern ones burn for less than three—that’s a whole minute that you no longer have. This is actually much worse than it sounds because you’re probably unable to shoot for the first minute or so. Even with modern detectors [that are] much better than [those] we had in the 1980s, you may not see the missile until it rises above the clouds. And once it does, your sensors, your computers, still have to say, “Aha! That is a missile!” And then you have to ensure that you’re not shooting down some ordinary space launch—so the system says, “I see a missile. May I shoot at it, please?” And someone or something has to give the go-ahead. So let’s just say you’ll have a good minute to shoot it down; this means your space-based interceptor has to be right there, ready to go, right? But by the time you’re getting permission to shoot, the satellite that was overhead to do that is now too far away, and so the next satellite has to be coming there. This scales up really, really fast.
    Presumably artificial intelligence and other technologies could be leveraged to make that sort of command and control more agile and responsive. But clearly there are still limits here—AI can’t be some sort of panacea.
    Sure, that’s right. But technological progress overall hasn’t made the threat environment better. Instead it’s gotten much worse.
    Let’s get back to the sheer physics-induced numbers for a moment, which AI can’t really do much about. That daunting scaling I mentioned also depends on the quality of your interceptors, your kill vehicles—which, by the way, are still going to be grotesquely expensive even if launch costs are low. If your interceptors can rapidly accelerate to eight or 10 kilometers per second (km/s), your constellation can be smaller. If they only reach 4 km/s, your constellation has to be huge. The point is: any claim that you can do this with relatively low numbers—let’s say 2,000 interceptors—assumes a series of improbable miracles occurring in quick succession to deliver the very best outcome that could possibly happen. So it’s not going to happen that way, even if, in principle, it could.
    So you’re telling me there’s a chance! No, seriously, I see what you mean. The arguments in favor of this working seem rather contrived. No system is perfect, and just one missile getting through can still have catastrophic results. And we haven’t even talked about adversarial countermeasures yet.
    There’s a joke that’s sometimes made about this: “We play chess, and they don’t move their pieces.” That seems to be the operative assumption here: that other nations will sit idly by as we build a complex, vulnerable system to nullify any strategic nuclear capability they have. And of course, it’s not valid at all. Why do you think the Chinese are building massive fields of missile silos? It’s to counteract or overwhelm this sort of thing. Why do you think the Russians are making moves to put a nuclear weapon in orbit? It’s to mass kill any satellite constellation that would shoot down their missiles.
    Golden Dome proponents may say, “Oh, we’ll shoot that down, too, before it goes off.” Well, good luck. You put a high-yield nuclear weapon on a booster, and the split second it gets above the clouds, sure, you might see it—but now it sees you, too, before you can shoot. All it has to do at that point is detonate to blow a giant hole in your defenses, and that’s game over. And by the way, this rosy scenario assumes your adversaries don’t interfere with all your satellites passing over their territory in peacetime. We know that won’t be the case—they’ll light them up with sensor-dazzling lasers, at minimum!
    You’ve compared any feasible space-based system to Starlink and noted that, similar to Starlink, these interceptors will need to be in low-Earth orbit. That means their orbits will rapidly decay from atmospheric drag, so just like Starlink’s satellites, they’d need to be constantly replaced, too, right?
    Ha, yes, that’s right. With Starlink, you’re looking at a three-to-five-year life cycle, which means annually replacing one third to one fifth of a constellation. So let’s say Golden Dome is 10,000 satellites; this would mean the best-case scenario is that you’re replacing 2,000 per year. Now, let’s just go along with what the Trump administration is saying, that they can get these things really cheap. I’m going to guess a “really cheap” mass-produced kill vehicle would still run you $20 million a pop, easily. Just multiply $20 million by 2,000, and your answer is $40 billion. So under these assumptions, we’d be spending $40 billion per year just to maintain the constellation. That’s not even factoring in operations.
    And that’s not to mention associated indirect costs from potentially nasty effects on the upper atmosphere and the orbital environment from all the launches and reentries.
    That, yes—among many other costly things.
    I have to ask: If fundamental physics makes this extremely expensive idea blatantly incapable of delivering on its promises, what’s really going on when the U.S. president and the secretary of defense announce their intention to pump $175 billion into it for a three-year crash program? Some critics claim this kind of thing is really about transferring taxpayer dollars to a few big aerospace companies and other defense contractors.
    Well, I wouldn’t say it’s quite that simple. Ballistic missile defense is incredibly appealing to some people for reasons besides money. In technical terms, it’s an elegant solution to the problem of nuclear annihilation—even though it’s not really feasible. For some people, it’s just cool, right? And at a deeper level, many people just don’t like the concept of deterrence—mutual assured destruction and all that—because, remember, the status quo is this: If Russia launches 1,000 nuclear weapons at us—or 100 or 10 or even just one—then we are going to murder every single person in Russia with an immediate nuclear counterattack. That’s how deterrence works. We’re not going to wait for those missiles to land so we can count up our dead to calibrate a more nuanced response. That’s official U.S. policy, and I don’t think anyone wants it to be this way forever. But it’s arguably what’s prevented any nuclear exchange from occurring to date.
    But not everyone believes in the power of deterrence, and so they’re looking for some kind of technological escape. I don’t think this fantasy is that different from Elon Musk thinking he’s going to go live on Mars when climate change ruins Earth: In both cases, instead of doing the really hard things that seem necessary to actually make this planet better, we’re talking about people who think they can just buy their way out of the problem. A lot of people—a lot of men, especially—really hate vulnerability, and this idea that you can just tech your way out of it is very appealing to them. You know, “Oh, what vulnerability? Yeah, there’s an app for that.”
    You’re saying this isn’t about money?
    Well, I imagine this is going to be good for at least a couple of SpaceX Falcon Heavy or Starship launches per year for Elon Musk. And you don’t have to do too many of those launches for the value proposition to work out: You build and run Starlink, you put up another constellation of space-based missile defense interceptors, and suddenly you’ve got a viable business model for these fancy huge rockets that can also take you to Mars, right?
    Given your knowledge of science history—of how dispassionate physics keeps showing space-based ballistic missile defense is essentially unworkable, yet the idea just keeps coming back—how does this latest resurgence make you feel?
    When I was younger, I would have been frustrated, but now I just accept human beings don’t learn. We make the same mistakes over and over again. You have to laugh at human folly because I do think most of these people are sincere, you know. They’re trying to get rich, sure, but they’re also trying to protect the country, and they’re doing it through ways they think about the world—which admittedly are stupid. But, hey, they’re trying. It’s very disappointing, but if you just laugh at them, they’re quite amusing.
    I think most people would have trouble laughing about something as devastating as nuclear war—or about an ultraexpensive plan to protect against it that’s doomed to failure and could spark a new arms race.
    I guess if you’re looking for a hopeful thought, it’s that we’ve tried this before, and it didn’t really work, and that’s likely to happen again.
    So how do you think it will actually play out this time around?
    I think this will be a gigantic waste of money that collapses under its own weight. They’ll put up a couple of interceptors, and they’ll test those against a boosting ballistic missile, and they’ll eventually get a hit. And they’ll use that to justify putting up more, and they’ll probably even manage to make a thin constellation—with the downside, of course, being that the Russians and the Chinese and the North Koreans and everybody else will make corresponding investments in ways to kill this system.
    And then it will start to really feel expensive, in part because it will be complicating and compromising things like Starlink and other commercial satellite constellations—which, I’d like to point out, are almost certainly uninsured in orbit because you can’t insure against acts of war. So think about that: if the Russians or anyone else detonate a nuclear weapon in orbit because of something like Golden Dome, Elon Musk’s entire constellation is dead, and he’s probably just out the cash.
    The fact is: these days we rely on space-based assets much more than most people realize, yet Earth orbit is such a fragile environment that we could muck it up in many different ways that carry really nasty long-term consequences. I worry about that a lot. Space used to be a benign environment, even throughout the entire cold war, but having an arms race there will make it malign. So Golden Dome is probably going to make everyone’s life a little bit more dangerous—at least until we, hopefully, come to our senses and decide to try something different.
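    Lewis's upkeep arithmetic is easy to reproduce. Below is a minimal back-of-envelope sketch in Python that uses only the interview's illustrative figures (a hypothetical 10,000-satellite constellation, a Starlink-like three-to-five-year life cycle, and an assumed $20-million mass-produced kill vehicle); none of these are official program numbers.

        # Back-of-envelope constellation upkeep, using the interview's
        # illustrative assumptions only (not official Golden Dome figures).
        CONSTELLATION_SIZE = 10_000      # hypothetical satellite count
        COST_PER_INTERCEPTOR = 20e6      # assumed "really cheap" kill vehicle, USD

        for lifetime_years in (5, 3):    # Starlink-like 3-to-5-year life cycle
            replaced_per_year = CONSTELLATION_SIZE / lifetime_years
            annual_cost = replaced_per_year * COST_PER_INTERCEPTOR
            print(f"{lifetime_years}-year lifetime: "
                  f"~{replaced_per_year:,.0f} replacements/year, "
                  f"~${annual_cost / 1e9:.0f} billion/year")

        # Output:
        # 5-year lifetime: ~2,000 replacements/year, ~$40 billion/year
        # 3-year lifetime: ~3,333 replacements/year, ~$67 billion/year

    On these assumptions, upkeep alone lands between roughly $40 billion and $67 billion per year before any operations costs, which is the point Lewis is making.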
  • The Crowded Battle: Key Insights from the 2025 State of Pentesting Report

    May 20, 2025The Hacker NewsPenetration Testing / Risk Management

    In the newly released 2025 State of Pentesting Report, Pentera surveyed 500 CISOs from global enterprises (200 from within the USA) to understand the strategies, tactics, and tools they use to cope with the thousands of security alerts, the persisting breaches and the growing cyber risks they have to handle. The findings reveal a complex picture of progress, challenges, and a shifting mindset about how enterprises approach security testing.
    More Tools, More Data, More Protection… No Guarantees
    Over the past year, 45% of enterprises expanded their security technology stacks, with organizations now managing an average of 75 different security solutions.
    Yet despite these layers of security tools, 67% of U.S. enterprises experienced a breach in the past 24 months. The growing number of deployed tools has a few effects on the daily operation and the overall cyber posture of the organization.
    Although it seems obvious, the findings tell a clear story: more security tools do mean better security posture. However, there is no silver bullet. Among organizations with fewer than 50 security tools, 93% reported a breach. That percentage steadily declines as stack size increases, dropping to 61% among those using more than 100 tools.
    Alert Fatigue Is Real
    The flip side of larger security stacks is that CISOs and their teams must contend with a much larger influx of information. Enterprises managing over 75 security solutions now face an average of 2,000 alerts per week — double the volume compared to organizations with smaller stacks, and those with over 100 tools receive over 3,000 (3x the alerts).
    This, in turn, puts much more emphasis on effective prioritization; otherwise, critical threats may get buried in a sea of alerts. In this environment, where alert volumes are high and time to triage is short, organizations benefit most when they can frequently test for exploitable gaps, so they know which issues truly matter before threat actors find them first.
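    As a toy illustration of that exploitability-first triage, the sketch below ranks alerts so that gaps validated as exploitable outrank raw severity; all field names and sample values are hypothetical, not drawn from the report.

        # Rank alerts so validated, exploitable gaps outrank raw severity.
        # All field names and sample data are hypothetical.
        alerts = [
            {"id": "ALR-101", "severity": 9, "exploit_validated": False},
            {"id": "ALR-102", "severity": 6, "exploit_validated": True},
            {"id": "ALR-103", "severity": 4, "exploit_validated": False},
        ]

        def priority(alert):
            # A gap proven exploitable by testing matters more than a
            # high severity score that no attack path actually reaches.
            return (alert["exploit_validated"], alert["severity"])

        for alert in sorted(alerts, key=priority, reverse=True):
            status = "validated" if alert["exploit_validated"] else "unvalidated"
            print(alert["id"], status, "severity", alert["severity"])
        # ALR-102 triages first despite its mid-range severity score.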
    Software-Based Pentesting Gains Ground
    Trust in software-based security testing is growing rapidly. Only 5-10 years ago, many enterprises would never have permitted automated tools to run pentests in their environments for fear of causing outages, but sentiment is changing.
    As CISOs continue to recognize the advantages of software in scaling adversarial testing and keeping pace with constantly changing IT environments, software-based pentesting is becoming the standard. Over half of enterprises now use these tools to support in-house testing, driven by trust in their reliability and the need for scalable, continuous validation strategies. Today, 50% of CISOs cite software-based pentesting solutions as their primary method for uncovering exploitable gaps.
    Insurance Providers Become Unexpected Influencers
    Beyond internal management and Boards of Directors, a surprising new force is shaping security strategy: cyber insurance providers. 59% of CISOs admitted that, as a result of their cyber insurers, they have implemented at least one cybersecurity solution they were not previously considering. It's a clear sign that insurers aren't just pricing risk; they're actively prescribing how to reduce it, and reshaping enterprise security priorities in the process.
    Low Confidence in Government Support
    While governmental agencies like CISA (in the US) and ENISA (in the EU) play an important role in threat visibility and coordination, confidence in government cybersecurity support is surprisingly low.
    Only 14% of CISOs believe the government is adequately supporting the private sector's cyber challenges, while 64% feel that government efforts, though acknowledged, are insufficient. A further 22% believe that they cannot rely on the government at all for cybersecurity help.
    To benchmark your organization's pentesting practices, budgets, and priorities against other global enterprises, register for the webinar on May 27, 2025, where senior security analysts will discuss the key findings. Alternatively, get the full 2025 State of Pentesting Report and see all the insights for yourself! Note: This article was written and contributed by Jay Mar Tang, Field CISO at Pentera.

  • Agentic AI in Financial Services: IBM’s Whitepaper Maps Opportunities, Risks, and Responsible Integration

    As autonomous AI agents move from theory into implementation, their impact on the financial services sector is becoming tangible. A recent whitepaper from IBM Consulting, titled “Agentic AI in Financial Services: Opportunities, Risks, and Responsible Implementation”, outlines how these AI systems—designed for autonomous decision-making and long-term planning—can fundamentally reshape how financial institutions operate. The paper presents a balanced framework that identifies where Agentic AI can add value, the risks it introduces, and how institutions can implement these systems responsibly.
    Understanding Agentic AI
    AI agents, in this context, are software entities that interact with their environments to accomplish tasks with a high degree of autonomy. Unlike traditional automation or even LLM-powered chatbots, Agentic AI incorporates planning, memory, and reasoning to execute dynamic tasks across systems. IBM categorizes them into Principal, Service, and Task agents, which collaborate in orchestrated systems. These systems enable the agents to autonomously process information, select tools, and interact with human users or enterprise systems in a closed loop of goal pursuit and reflection.
    The whitepaper describes the evolution from rule-based automation to multi-agent orchestration, emphasizing how LLMs now serve as the reasoning engine that drives agent behavior in real-time. Crucially, these agents can adapt to evolving conditions and handle complex, cross-domain tasks, making them ideal for the intricacies of financial services.
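    As a rough sketch of what that Principal/Service/Task layering could look like in practice, the Python below wires the three tiers together; the class names, methods, and the reduced "planning" step are illustrative assumptions, not IBM's API.

        # Illustrative Principal -> Service -> Task orchestration.
        # Names are hypothetical; planning is reduced to a domain lookup.
        from dataclasses import dataclass, field

        @dataclass
        class TaskAgent:
            name: str
            def run(self, subtask: str) -> str:
                # In practice: a narrow tool call or tightly scoped LLM prompt.
                return f"{self.name} completed: {subtask}"

        @dataclass
        class ServiceAgent:
            name: str
            tasks: list = field(default_factory=list)
            def handle(self, request: str) -> list:
                # Decompose the request and delegate to task agents.
                return [t.run(f"{request} / step {i}")
                        for i, t in enumerate(self.tasks, 1)]

        @dataclass
        class PrincipalAgent:
            services: dict = field(default_factory=dict)
            def pursue(self, goal: str, domain: str) -> list:
                # Plan (here: route by domain), act, and return results
                # for a reflection step a real system would add.
                return self.services[domain].handle(goal)

        onboarding = ServiceAgent(
            "kyc-service", [TaskAgent("doc-check"), TaskAgent("sanctions-screen")])
        principal = PrincipalAgent({"onboarding": onboarding})
        print(principal.pursue("open retail account", "onboarding"))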
    Key Opportunities in Finance
    IBM identifies three primary use case patterns where Agentic AI can unlock significant value:

    Customer Engagement & Personalization: Agents can streamline onboarding, personalize services through real-time behavioral data, and drive KYC/AML processes using tiered agent hierarchies that reduce manual oversight.
    Operational Excellence & Governance: Agents improve internal efficiencies by automating risk management, compliance verification, and anomaly detection, while maintaining auditability and traceability.
    Technology & Software Development: They support IT teams with automated testing, predictive maintenance, and infrastructure optimization—redefining DevOps through dynamic, self-improving workflows.
    These systems promise to replace fragmented interfaces and human handoffs with integrated, persona-driven agent experiences grounded in high-quality, governed data products.
    Risk Landscape and Mitigation Strategies
    Autonomy in AI brings unique risks. The IBM paper categorizes them under the system’s core components—goal misalignment, tool misuse, and dynamic deception being among the most critical. For instance, a wealth management agent might misinterpret a client’s risk appetite due to goal drift, or bypass controls by chaining permissible actions in unintended ways.
    Key mitigation strategies include:

    Goal Guardrails: Explicitly defined objectives, real-time monitoring, and value alignment feedback loops.
    Access Controls: Least-privilege design for tool/API access, combined with dynamic rate-limiting and auditing (see the sketch after this list).
    Persona Calibration: Regularly reviewing agents’ behavior to avoid biased or unethical actions.
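    To make the access-control bullet concrete, here is a minimal sketch of least-privilege tool access combined with a per-agent rate limit and an audit trail. It assumes a hypothetical agent runtime of our own design; nothing here is specified by the whitepaper.

```python
import time
from collections import defaultdict

# Illustrative-only gateway: agents may call only the tools explicitly
# granted to them, within a rolling per-minute rate limit, and every
# decision is recorded for audit.

class ToolGateway:
    def __init__(self, permissions: dict[str, set[str]], max_calls_per_min: int = 10):
        self.permissions = permissions            # agent name -> allowed tools
        self.max_calls = max_calls_per_min
        self.calls: dict[str, list[float]] = defaultdict(list)
        self.audit_log: list[tuple[float, str, str, str]] = []

    def invoke(self, agent: str, tool: str, payload: str) -> str:
        now = time.time()
        # Least privilege: deny any tool not explicitly granted to this agent.
        if tool not in self.permissions.get(agent, set()):
            self._audit(now, agent, tool, "DENIED: not permitted")
            raise PermissionError(f"{agent} may not call {tool}")
        # Dynamic rate limit: keep only call timestamps from the last 60s.
        window = [t for t in self.calls[agent] if now - t < 60]
        if len(window) >= self.max_calls:
            self._audit(now, agent, tool, "DENIED: rate limit")
            raise RuntimeError(f"{agent} exceeded {self.max_calls} calls/min")
        self.calls[agent] = window + [now]
        self._audit(now, agent, tool, "ALLOWED")
        return f"{tool}({payload})"               # stand-in for the real tool call

    def _audit(self, ts: float, agent: str, tool: str, outcome: str) -> None:
        self.audit_log.append((ts, agent, tool, outcome))

gw = ToolGateway({"kyc-agent": {"sanctions_screen", "id_verify"}})
gw.invoke("kyc-agent", "id_verify", "customer-123")   # allowed and logged
# gw.invoke("kyc-agent", "send_wire", "...")          # would raise PermissionError
```

    The same gateway pattern yields the audit trail needed for the auditability and traceability goals mentioned earlier.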

    The whitepaper also emphasizes agent persistence and system drift as long-term governance challenges. Persistent memory, while enabling learning, can cause agents to act on outdated assumptions. IBM proposes memory reset protocols and periodic recalibrations to counteract drift and ensure continued alignment with organizational values.
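    A memory reset protocol could be as simple as attaching a time-to-live to every stored fact, so the agent cannot keep acting on stale assumptions. The sketch below is our own illustration of that idea, not code from the whitepaper:

```python
import time

# Hypothetical "expiring memory": entries older than the TTL are dropped on
# recall, and reset() supports a hard recalibration after policy changes.

class ExpiringMemory:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.entries: list[tuple[float, str]] = []

    def remember(self, fact: str) -> None:
        self.entries.append((time.time(), fact))

    def recall(self) -> list[str]:
        # Periodic recalibration: silently drop anything older than the TTL.
        cutoff = time.time() - self.ttl
        self.entries = [(ts, f) for ts, f in self.entries if ts >= cutoff]
        return [f for _, f in self.entries]

    def reset(self) -> None:
        # Hard reset, e.g. after a model update or a drift review.
        self.entries.clear()

mem = ExpiringMemory(ttl_seconds=7 * 24 * 3600)   # one-week assumption horizon
mem.remember("client risk appetite: conservative")
print(mem.recall())
```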
    Regulatory Readiness and Ethical Design
    IBM outlines regulatory developments in jurisdictions like the EU and Australia, where agentic systems are increasingly considered “high-risk.” These systems must comply with emerging mandates for transparency, explainability, and continuous human oversight. In the EU’s AI Act, for example, agents influencing access to financial services may fall under stricter obligations due to their autonomous and adaptive behavior.
    The paper recommends proactive alignment with ethical AI principles even in the absence of regulation—asking not just can we, but should we. This includes auditing agents for deceptive behavior, embedding human-in-the-loop structures, and maintaining transparency through natural language decision narratives and visualized reasoning paths.
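    A natural language decision narrative can be implemented as a structured, human-readable trace emitted alongside each agent action. The following sketch is purely illustrative; the field names are our assumptions, not a schema from IBM's paper:

```python
import json
import datetime

# Hypothetical decision-narrative record: each step pairs the action taken
# with a plain-English rationale, producing an audit-ready reasoning path.

def narrate(step: int, action: str, rationale: str) -> dict:
    return {
        "step": step,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "rationale": rationale,
    }

trace = [
    narrate(1, "fetch_credit_report", "Loan decision requires current credit data."),
    narrate(2, "escalate_to_human", "Score below threshold; policy mandates review."),
]
print(json.dumps(trace, indent=2))   # replayable by auditors or regulators
```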
    Conclusion
    Agentic AI stands at the frontier of enterprise automation. For financial services firms, the promise lies in enhanced personalization, operational agility, and AI-driven governance. Yet these benefits are closely linked to how responsibly these systems are designed and deployed. IBM’s whitepaper serves as a practical guide—advocating for a phased, risk-aware adoption strategy that includes governance frameworks, codified controls, and cross-functional accountability.

    Check out the White Paper. All credit for this research goes to the researchers of this project.

  • Why CTEM is the Winning Bet for CISOs in 2025

    May 19, 2025The Hacker NewsRisk Management / Threat Detection

    Continuous Threat Exposure Management (CTEM) has moved from concept to cornerstone, solidifying its role as a strategic enabler for CISOs. No longer a theoretical framework, CTEM now anchors today's cybersecurity programs by continuously aligning security efforts with real-world risk.
    At the heart of CTEM is the integration of Adversarial Exposure Validation (AEV), an advanced, offensive methodology powered by proactive security tools including External Attack Surface Management (ASM), autonomous penetration testing and red teaming, and Breach and Attack Simulation (BAS). Together, these AEV tools transform how enterprises proactively identify, validate, and reduce risks, turning threat exposure into a manageable business metric.
    CTEM reflects a broader evolution in how security leaders measure effectiveness and allocate resources. As board expectations grow and cyber risk becomes inseparable from business risk, CISOs are leveraging CTEM to drive measurable, outcome-based security initiatives. Early adopters report improved risk visibility, faster validation and remediation cycles, and tighter alignment between security investments and business priorities.1 With tools like ASM and autonomous pentesting delivering real-time insights into exposure, CTEM empowers CISOs to adopt a continuous, adaptive model that keeps pace with attacker techniques and the evolving threat landscape.
    CTEM's Moment Has Arrived
    CTEM introduces a continuous, iterative process encompassing three pillars: Adversarial Exposure Validation (AEV), Exposure Assessment Platforms (EAP), and Exposure Management (EM). These methodologies ensure enterprises can dynamically assess and respond to threats, aligning security efforts with business objectives.1 Gartner underscores the significance of CTEM, predicting that by 2026, organizations prioritizing security investments based on a CTEM program will be three times less likely to suffer a breach.2
    Adversarial Exposure Validation (AEV): Simulating Real-World Threats
    AEV strengthens CTEM by continuously validating the effectiveness of security controls through the simulated exploitation of assets using real-world attacker behaviors. This often involves the use of automation, AI, and machine learning to replicate the tactics, techniques, and procedures (TTPs) used by adversaries, helping enterprises proactively identify exploitable exposures before they can be leveraged in an actual attack. This proactive approach is crucial for understanding weaknesses and refining defenses more effectively.
    Attack Surface Management (ASM): Expanding Visibility
    ASM complements CTEM by providing comprehensive visibility into an enterprise's digital footprint. By continuously discovering, prioritizing, and monitoring assets, ASM enables security teams to identify potential vulnerabilities and exposures promptly. This expanded visibility is essential for effective threat exposure management, ensuring that no asset remains unmonitored. AEV transforms ASM from a map into a mission plan, and enterprises need it urgently.
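    Conceptually, the continuous-discovery loop at the heart of ASM reduces to diffing the currently observable attack surface against a known inventory. Here is a toy Python sketch, with a stubbed discover() function and hypothetical hostnames standing in for a real enumeration pipeline:

```python
# Illustrative-only ASM loop: compare freshly discovered assets against the
# known inventory and flag anything new (unmonitored exposure) or missing
# (decommissioned, or possibly hijacked). Hostnames are invented examples.

def discover() -> set[str]:
    # A real implementation would enumerate DNS, TLS certificates,
    # cloud provider APIs, IP ranges, and so on.
    return {"app.example.com", "api.example.com", "legacy.example.com"}

def diff_inventory(known: set[str]) -> tuple[set[str], set[str]]:
    current = discover()
    new_assets = current - known     # prioritize these for validation/scanning
    vanished = known - current       # investigate: retired or tampered with?
    return new_assets, vanished

known_assets = {"app.example.com", "api.example.com"}
new, gone = diff_inventory(known_assets)
print("newly exposed:", new)   # {'legacy.example.com'}
print("disappeared:", gone)    # set()
```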
    Autonomous Penetration Testing and Red Teaming: Improving Scalability
    The integration of autonomous penetration testing and red teaming into CTEM frameworks marks a significant advancement in cybersecurity practices. Autonomous pentesting, for example, delivers real-time, scalable, and actionable insights, unlike periodic assessments. This shift enhances operational efficiency while proactively identifying and mitigating vulnerabilities in real time. While regulatory compliance remains important, it is no longer the sole driver – modern mandates increasingly emphasize continuous, proactive security testing.
    Breach and Attack Simulation (BAS): Continuous Security Validation
    BAS tools also play a role in CTEM by automating the simulation of known attack techniques across the kill chain – ranging from phishing and lateral movement to data exfiltration. Unlike autonomous pentesting, which actively exploits vulnerabilities, BAS focuses on continuously validating the effectiveness of security controls without causing disruption. These simulated attacks help uncover blind spots, misconfigurations, and detection and response gaps across endpoints, networks, and cloud environments. By aligning results with threat intelligence and frameworks like MITRE ATT&CK, BAS enables security teams to prioritize remediation based on real exposure and risk, helping CISOs ensure their defenses are not only in place, but operationally effective.
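    The ATT&CK alignment described above can be pictured as a simple coverage calculation over simulated techniques. Below is an illustrative sketch, not any vendor's BAS engine; the technique IDs are real MITRE ATT&CK identifiers, but the detection results are invented:

```python
# Toy BAS-style scoring: each simulated technique maps to a MITRE ATT&CK ID
# and is marked detected or missed, yielding a coverage metric that helps
# prioritize remediation of the gaps.

SIMULATED_RUNS = [
    {"technique": "T1566", "name": "Phishing", "detected": True},
    {"technique": "T1021", "name": "Remote Services (lateral movement)", "detected": False},
    {"technique": "T1048", "name": "Exfiltration Over Alternative Protocol", "detected": True},
]

def detection_coverage(runs: list[dict]) -> float:
    return sum(r["detected"] for r in runs) / len(runs)

print(f"coverage: {detection_coverage(SIMULATED_RUNS):.0%}")   # 67%
for gap in (r for r in SIMULATED_RUNS if not r["detected"]):
    print(f"remediation priority: {gap['technique']} ({gap['name']})")
```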
    The Impetus Behind CTEM's Rise
    The rapid adoption of CTEM in 2025 is no coincidence. As cyber risks grow more complex and dynamic, enterprises are embracing CTEM not just as a framework, but as an effective cyber strategy that yields measurable results. Several converging trends, ranging from evolving threat tactics to regulatory pressure and expanding digital footprints, are driving security leaders to prioritize continuous validation, real-time visibility, and operational efficiency across the attack surface. Several factors contribute to the widespread adoption of CTEM:

    Scalability: The rapid shift to cloud-native architectures, growing supply chain, and interconnected systems has expanded the attack surface. CTEM delivers the visibility and control needed to manage this complexity at scale.
    Operational Efficiency: By integrating tools and automating threat validation, CTEM reduces redundancy, streamlines workflows, and accelerates response times.
    Measurable Outcomes: CTEM enables CISOs to shift from abstract risk discussions to data-driven decisions by providing clear metrics on exposure, control effectiveness, and remediation progress, supporting better alignment with business objectives and board-level reporting.
    Regulatory Compliance: With rising enforcement of cybersecurity regulations like NIS2, DORA, and SEC reporting mandates, CTEM's continuous validation and visibility help enterprises stay compliant and audit ready.

    Conclusion
    Cybersecurity cannot evolve by standing still, and neither can security leaders and their organizations. The shift toward a proactive, measurable, and continuous approach to threat exposure is not only necessary but achievable. In fact, it's the only viable path forward. CTEM isn't just another framework; it's a blueprint for transforming security into a business-aligned, data-driven discipline. By embracing real-time validation, prioritizing exposures that matter, and proving effectiveness with metrics that resonate beyond the SOC, CISOs are moving the industry beyond checkboxes toward true resilience. Today, the enterprises that lead in cybersecurity will be the ones that measure it and manage it, continuously.
    About BreachLock:
    BreachLock is a leader in offensive security, delivering scalable and continuous security testing. Trusted by global enterprises, BreachLock provides human-led and AI-assisted attack surface management, penetration testing services, red teaming, and Adversarial Exposure Validation (AEV) services that help security teams stay ahead of adversaries. With a mission to make proactive security the new standard, BreachLock is shaping the future of cybersecurity through automation, data-driven intelligence, and expert-driven execution.

    References:

    1. Hacking Reviews. (n.d.). How attack surface management supports continuous threat exposure management. Retrieved April 30, 2025, from https://www.hacking.reviews/2023/05/how-attack-surface-management-supports.html
    2. Gartner. (n.d.). How to Manage Cybersecurity Threats, Not Episodes. Retrieved April 30, 2025, from https://www.gartner.com/en/articles/how-to-manage-cybersecurity-threats-not-episodes
