• Rewriting SymCrypt in Rust to modernize Microsoft’s cryptographic library 

    Outdated coding practices and memory-unsafe languages like C are putting software, including cryptographic libraries, at risk. Fortunately, memory-safe languages like Rust, along with formal verification tools, are now mature enough to be used at scale, helping prevent issues like crashes, data corruption, flawed implementations, and side-channel attacks.
    To address these vulnerabilities and improve memory safety, we’re rewriting SymCrypt—Microsoft’s open-source cryptographic library—in Rust. We’re also incorporating formal verification methods. SymCrypt is used in Windows, Azure Linux, Xbox, and other platforms.
    Currently, SymCrypt is primarily written in cross-platform C, with limited use of hardware-specific optimizations through intrinsics (compiler-provided low-level functions) and assembly language (direct processor instructions). It provides a wide range of algorithms, including AES-GCM, SHA, ECDSA, and the more recent post-quantum algorithms ML-KEM and ML-DSA.
    Formal verification will confirm that implementations behave as intended and don’t deviate from algorithm specifications, which is critical for preventing attacks. We’ll also analyze compiled code to detect side-channel leaks caused by timing or hardware-level behavior.
    Proving Rust program properties with Aeneas
    Program verification is the process of proving that a piece of code will always satisfy a given property, no matter the input. Rust’s type system profoundly improves the prospects for program verification by providing strong ownership guarantees, by construction, using a discipline known as “aliasing xor mutability”.
    For example, reasoning about C code often requires proving that two non-const pointers are live and non-overlapping, a property that can depend on external client code. In contrast, Rust’s type system guarantees this property for any two mutably borrowed references.
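The guarantee can be seen in a small sketch (illustrative only, not SymCrypt code): a function taking two mutable slices may assume they never overlap, because the borrow checker enforces disjointness at every call site, with no runtime check needed.

```rust
// Sketch: `dst` and `src` are guaranteed disjoint by the type system;
// equivalent C code would need to prove (or check) non-overlap itself.
fn add_into(dst: &mut [u8], src: &mut [u8]) {
    for (d, s) in dst.iter_mut().zip(src.iter()) {
        *d = d.wrapping_add(*s);
    }
}

fn main() {
    let mut buf = [1u8, 2, 3, 4];
    // Two views of one buffer must come from split_at_mut, which
    // guarantees the halves are disjoint.
    let (lo, hi) = buf.split_at_mut(2);
    add_into(lo, hi);
    assert_eq!(buf, [4u8, 6, 3, 4]);
    // `add_into(&mut buf, &mut buf)` would be rejected at compile time:
    // `buf` cannot be mutably borrowed twice.
}
```

Aliasing errors that a C verifier must rule out case by case are here simply unrepresentable.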
    As a result, new tools have emerged specifically for verifying Rust code. We chose Aeneas because it helps provide a clean separation between code and proofs.
    Developed by Microsoft Azure Research in partnership with Inria, the French National Institute for Research in Digital Science and Technology, Aeneas connects to proof assistants like Lean, allowing us to draw on a large body of mathematical proofs—especially valuable given the mathematical nature of cryptographic algorithms—and benefit from Lean’s active user community.
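For readers unfamiliar with Lean, proofs there are machine-checked terms; as a purely illustrative toy (not one of SymCrypt's actual proofs, which concern the functional models Aeneas extracts from the Rust code), a Lean statement and its proof look like:

```lean
-- Toy Lean 4 example: a named theorem discharged by a core library lemma.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```

Real proofs about extracted cryptographic code are far larger, but they are checked by the same kernel.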
    Compiling Rust to C supports backward compatibility  
    We recognize that switching to Rust isn’t feasible for all use cases, so we’ll continue to support, extend, and certify C-based APIs as long as users need them. Users won’t see any changes, as Rust runs underneath the existing C APIs.
    Some users compile our C code directly and may rely on specific toolchains or compiler features that complicate the adoption of Rust code. To address this, we will use Eurydice, a Rust-to-C compiler developed by Microsoft Azure Research, to replace handwritten C code with C generated from formally verified Rust. Eurydice compiles directly from Rust’s MIR intermediate language, and the resulting C code will be checked into the SymCrypt repository alongside the original Rust source code.
    As more users adopt Rust, we’ll continue supporting this compilation path for those who build SymCrypt from source code but aren’t ready to use the Rust compiler. In the long term, we hope to transition users to either use precompiled SymCrypt binaries (via C or Rust APIs) or compile from source code in Rust, at which point the Rust-to-C compilation path will no longer be needed.

    Timing analysis with Revizor 
    Even software that has been verified for functional correctness can remain vulnerable to low-level security threats, such as side channels caused by timing leaks or speculative execution. These threats operate at the hardware level and can leak private information, such as memory load addresses, branch targets, or division operands, even when the source code is provably correct. 
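As a concrete (hypothetical) illustration of such a leak, an early-exit comparison is functionally correct, yet its running time reveals the index of the first mismatch; this is exactly the kind of secret-dependent behavior that binary-level analysis hunts for.

```rust
// Functionally correct, but the early return makes running time depend on
// the secret: an attacker measuring time learns where the first mismatch is.
fn leaky_eq(secret: &[u8], guess: &[u8]) -> bool {
    if secret.len() != guess.len() {
        return false;
    }
    for i in 0..secret.len() {
        if secret[i] != guess[i] {
            return false; // timing side channel: exits at first difference
        }
    }
    true
}

fn main() {
    assert!(leaky_eq(b"mac", b"mac"));
    assert!(!leaky_eq(b"mac", b"mad"));
}
```

No functional-correctness proof flags this code; only timing-aware analysis of the compiled binary does.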
    To address this, we’re extending Revizor, a tool developed by Microsoft Azure Research, to more effectively analyze SymCrypt binaries. Revizor models microarchitectural leakage and uses fuzzing techniques to systematically uncover instructions that may expose private information through known hardware-level effects.  
    Earlier cryptographic libraries relied on constant-time programming, avoiding control flow and memory accesses that depend on secret data. However, recent research has shown that this alone is insufficient on today’s CPUs, where every new optimization may open a new side channel.
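A minimal sketch of the constant-time style (illustrative only; SymCrypt's real implementation differs): equality is decided by accumulating differences bitwise, so every byte is always examined and execution time does not depend on where the inputs differ.

```rust
// Illustrative constant-time comparison, not SymCrypt's actual code.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false; // lengths are typically public, so branching here is fine
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y; // no early exit: all bytes contribute
    }
    diff == 0
}

fn main() {
    assert!(ct_eq(b"tag-bytes", b"tag-bytes"));
    assert!(!ct_eq(b"tag-bytes", b"tag-bytez"));
}
```

Even code written this carefully can still leak through microarchitectural effects once compiled, which is why binary-level analysis remains necessary.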
    By analyzing binary code for specific compilers and platforms, our extended Revizor tool enables deeper scrutiny of vulnerabilities that aren’t visible in the source code.
    Verified Rust implementations begin with ML-KEM
    This long-term effort is in alignment with the Microsoft Secure Future Initiative and brings together experts across Microsoft, building on decades of Microsoft Research investment in program verification and security tooling.
    A preliminary version of ML-KEM in Rust is now available on the preview feature/verifiedcrypto branch of the SymCrypt repository. We encourage users to try the Rust build and share feedback. Looking ahead, we plan to support direct use of the same cryptographic library in Rust without requiring C bindings.
    Over the coming months, we plan to rewrite, verify, and ship several algorithms in Rust as part of SymCrypt. As our investment in Rust deepens, we expect to gain new insights into how to best leverage the language for high-assurance cryptographic implementations with low-level optimizations. 
    As performance is key to scalability and sustainability, we’re holding new implementations to a high bar, using our benchmarking tools to confirm they match or exceed existing implementations.
    Looking forward 
    This is a pivotal moment for high-assurance software. Microsoft’s investment in Rust and formal verification presents a rare opportunity to advance one of our key libraries. We’re excited to scale this work and ultimately deliver an industrial-grade, Rust-based, FIPS-certified cryptographic library.
  • Microsoft’s New AI: Ray Tracing 16,000,000 Images!

  • FTC drops final challenge to Microsoft’s $69B Activision Blizzard deal

    Three years after suing to block Microsoft from buying one of the biggest names in video games, the U.S. government is finally giving up.

    The FTC announced plans Thursday to drop a Biden-era case against Microsoft over its $69 billion acquisition of game maker Activision Blizzard, a decision the regulator said now best serves the public interest.

    In 2022, the FTC first announced that it would try to kill Microsoft’s planned acquisition of the gaming giant, which makes hit games like Call of Duty and World of Warcraft. The following year, after the FTC failed to secure a preliminary injunction to stop it, Microsoft actually finalized the massive deal, but the regulator vowed to continue appealing that decision. 

    Earlier this month, the 9th Circuit Court of Appeals upheld the lower court’s order denying the injunction, ruling that the FTC’s claims that the deal would limit competition in the gaming industry were weak. The acquisition was destined for intense scrutiny from day one, both for its size and its potential to totally reshape the landscape for one of tech’s hottest sectors. 

    Microsoft swooped in to save Activision Blizzard from itself

    When Microsoft announced its plan to buy Activision Blizzard in January 2022, the smaller company had been rocked by emerging allegations of systemic sexual harassment and discrimination in the workplace. Those ongoing scandals eventually forced longtime CEO Bobby Kotick out of the company as Microsoft cleaned house leading into the merger. 

    Microsoft also had to clear major regulatory hurdles in the U.K., resolving antitrust concerns there over its cloud gaming services before getting the green light to close the deal. That bit of regulatory maneuvering resulted in an unusual arrangement to offload cloud streaming rights for its games to competitor Ubisoft in order to appease the Competition and Markets Authority, the U.K.’s powerful trust buster.

    A boost to Microsoft’s online gaming roadmap

    By bringing Activision Blizzard under its wing, Microsoft can also bring the company’s many hit titles into the popular Xbox Game Pass service, which gives players unlimited access to games for a monthly subscription fee. 

    Gaming companies have increasingly turned to monthly subscriptions and live service games over the last decade, and many of Activision Blizzard’s hit franchises revolve around online multiplayer, including Call of Duty, Overwatch, Diablo, and World of Warcraft. Activision Blizzard also owns Candy Crush, a colorful tile-matching game that’s still synonymous with mobile gaming almost a decade after Activision Blizzard bought its developer King for a then whopping $5.9 billion.

    Microsoft President Brad Smith described his company as “grateful” to the FTC for its decision to allow the acquisition to settle. “Today’s decision is a victory for players across the country and for common sense in Washington, D.C.,” Smith said.
  • Microsoft’s Emergency Windows Update—Stop Blue Screen Of Death

    What to do if this PC disaster strikes.
  • New Claude 4 AI model refactored code for 7 hours straight

    No sleep till Brooklyn

    Anthropic says Claude 4 beats Gemini on coding benchmarks; works autonomously for hours.

    Benj Edwards



    May 22, 2025 12:45 pm

    The Claude 4 logo, created by Anthropic. Credit: Anthropic

    On Thursday, Anthropic released Claude Opus 4 and Claude Sonnet 4, marking the company's return to larger model releases after primarily focusing on mid-range Sonnet variants since June of last year. The new models represent what the company calls its most capable coding models yet, with Opus 4 designed for complex, long-running tasks that can operate autonomously for hours.
    Alex Albert, Anthropic's head of Claude Relations, told Ars Technica that the company chose to revive the Opus line because of growing demand for agentic AI applications. "Across all the companies out there that are building things, there's a really large wave of these agentic applications springing up, and a very high demand and premium being placed on intelligence," Albert said. "I think Opus is going to fit that groove perfectly."
    Before we go further, a brief refresher on Claude's three AI model "size" names is probably warranted. Haiku, Sonnet, and Opus offer a tradeoff between price, speed, and capability.
    Haiku models are the smallest, least expensive to run, and least capable in terms of what you might call "context depth" and encoded knowledge. Owing to their small parameter count, Haiku models retain fewer concrete facts and thus tend to confabulate more frequently than larger models, but they are much faster at basic tasks. Sonnet is traditionally a mid-range model that hits a balance between cost and capability, and Opus models have always been the largest and slowest to run. However, Opus models process context more deeply and are hypothetically better suited for running deep logical tasks.

    A screenshot of the Claude web interface with Opus 4 and Sonnet 4 options shown. Credit: Anthropic

    There is no Claude 4 Haiku just yet, but the new Sonnet and Opus models can reportedly handle tasks that previous versions could not. In our interview with Albert, he described testing scenarios where Opus 4 worked coherently for up to 24 hours on tasks like playing Pokémon, while code-refactoring tasks in Claude Code ran for seven hours without interruption. Earlier Claude models typically lasted only one to two hours before losing coherence, Albert said, meaning that the models could only produce useful self-referencing outputs for that long before beginning to output too many errors.

    In particular, that marathon refactoring claim reportedly comes from Rakuten, a Japanese tech services conglomerate that "validated [Claude's] capabilities with a demanding open-source refactor running independently for 7 hours with sustained performance," Anthropic said in a news release.
    Whether you'd want to leave an AI model unsupervised for that long is another question entirely because even the most capable AI models can introduce subtle bugs, go down unproductive rabbit holes, or make choices that seem logical to the model but miss important context that a human developer would catch. While many people now use Claude for easy-going vibe coding, as we covered in March, the human-powered (and ironically-named) "vibe debugging" that often results from long AI coding sessions is also a very real thing. More on that below.
    To shore up some of those shortcomings, Anthropic built memory capabilities into both new Claude 4 models, allowing them to maintain external files for storing key information across long sessions. When developers provide access to local files, the models can create and update "memory files" to track progress and things they deem important over time. Albert compared this to how humans take notes during extended work sessions.
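The "memory files" pattern Albert describes can be sketched in a few lines: the agent persists notes to a local file between steps so a later session can reload them. The file name and note schema below are illustrative assumptions, not Anthropic's actual format.

```python
import json
import os

MEMORY_PATH = "agent_memory.json"  # hypothetical location for the memory file


def load_memory(path=MEMORY_PATH):
    """Reload notes from a previous session, or start fresh."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"progress": [], "notes": []}


def remember(memory, key, entry, path=MEMORY_PATH):
    """Append an entry and write it back so it survives the session."""
    memory[key].append(entry)
    with open(path, "w") as f:
        json.dump(memory, f, indent=2)
```

A long-running agent would call `remember()` whenever it deems something important, much like the note-taking analogy above.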
    Extended thinking meets tool use
    Both Claude 4 models introduce what Anthropic calls "extended thinking with tool use," a new beta feature allowing the models to alternate between simulated reasoning and using external tools like web search, similar to what OpenAI's o3 and o4-mini-high AI models currently do in ChatGPT. While Claude 3.7 Sonnet already had strong tool use capabilities, the new models can now interleave simulated reasoning and tool calling in a single response.
    "So now we can actually think, call a tool, process the results, think some more, call another tool, and repeat until it gets to a final answer," Albert explained to Ars. The models self-determine when they have reached a useful conclusion, a capability picked up through training rather than governed by explicit human programming.

    General Claude 4 benchmark results, provided by Anthropic. Credit: Anthropic

    In practice, we've anecdotally found parallel tool use capability very useful in AI assistants like OpenAI o3, since they don't have to rely on what is trained in their neural network to provide accurate answers. Instead, these more agentic models can iteratively search the web, parse the results, analyze images, and spin up coding tasks for analysis in ways that can avoid falling into a confabulation trap by relying solely on pure LLM outputs.

    “The world’s best coding model”
    Anthropic says Opus 4 leads industry benchmarks for coding tasks, achieving 72.5 percent on SWE-bench and 43.2 percent on Terminal-bench, calling it "the world's best coding model." According to Anthropic, companies using early versions report improvements. Cursor described it as "state-of-the-art for coding and a leap forward in complex codebase understanding," while Replit noted "improved precision and dramatic advancements for complex changes across multiple files."
    In fact, GitHub announced it will use Sonnet 4 as the base model for its new coding agent in GitHub Copilot, citing the model's performance in "agentic scenarios" in Anthropic's news release. Sonnet 4 scored 72.7 percent on SWE-bench while maintaining faster response times than Opus 4. The fact that GitHub is betting on Claude rather than a model from its parent company Microsoft (which has close ties to OpenAI) suggests Anthropic has built something genuinely competitive.

    Software engineering benchmark results, provided by Anthropic. Credit: Anthropic

    Anthropic says it has addressed a persistent issue with Claude 3.7 Sonnet in which users complained that the model would take unauthorized actions or provide excessive output. Albert said the company reduced this "reward hacking behavior" by approximately 80 percent in the new models through training adjustments. An 80 percent reduction in unwanted behavior sounds impressive, but that also suggests that 20 percent of the problem behavior remains—a big concern when we're talking about AI models that might be performing autonomous tasks for hours.
    When we asked about code accuracy, Albert said that human code review is still an important part of shipping any production code. "There's a human parallel, right? So this is just a problem we've had to deal with throughout the whole nature of software engineering. And this is why the code review process exists, so that you can catch these things. We don't anticipate that going away with models either," Albert said. "If anything, the human review will become more important, and more of your job as developer will be in this review than it will be in the generation part."

    Pricing and availability
    Both Claude 4 models maintain the same pricing structure as their predecessors: Opus 4 costs $15 per million tokens for input and $75 per million for output, while Sonnet 4 remains at $3 and $15. The models offer two response modes: traditional LLM and simulated reasoning ("extended thinking") for complex problems. Given that some Claude Code sessions can apparently run for hours, those per-token costs will likely add up very quickly for users who let the models run wild.
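To put those per-token rates in perspective, here is a back-of-envelope cost calculation at the listed Opus 4 API prices ($15 per million input tokens, $75 per million output tokens); the token counts in the example are hypothetical.

```python
def session_cost(input_tokens, output_tokens, in_price=15.0, out_price=75.0):
    """Cost in dollars at per-million-token input/output prices."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000


# A hypothetical long agentic session: 2M input tokens, 500K output tokens
# comes to $67.50 at Opus 4 rates.
print(session_cost(2_000_000, 500_000))
```

A multi-hour autonomous session that keeps re-reading a large codebase could easily accumulate far more input tokens than that.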
    Anthropic made both models available through its API, Amazon Bedrock, and Google Cloud Vertex AI. Sonnet 4 remains accessible to free users, while Opus 4 requires a paid subscription.
    The Claude 4 models also debut Claude Code (first introduced in February) as a generally available product after months of preview testing. Anthropic says the coding environment now integrates with VS Code and JetBrains IDEs, showing proposed edits directly in files. A new SDK allows developers to build custom agents using the same framework.

    A screenshot of "Claude Plays Pokemon," a custom application where Claude 4 attempts to beat the classic Game Boy game. Credit: Anthropic

    Even with Anthropic's future riding on the capability of these new models, when we asked about how they guide Claude's behavior by fine-tuning, Albert acknowledged that the inherent unpredictability of these systems presents ongoing challenges for both them and developers. "In the realm and the world of software for the past 40, 50 years, we've been running on deterministic systems, and now all of a sudden, it's non-deterministic, and that changes how we build," he said.
    "I empathize with a lot of people out there trying to use our APIs and language models generally because they have to almost shift their perspective on what it means for reliability, what it means for powering a core of your application in a non-deterministic way," Albert added. "These are general oddities that have kind of just been flipped, and it definitely makes things more difficult, but I think it opens up a lot of possibilities as well."

    Benj Edwards
    Senior AI Reporter

    Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

  • Signal’s New Update Prohibits Microsoft’s AI-Powered Recall Feature From Taking Screenshots

    Signal, the popular privacy-focused messaging platform, rolled out a security feature on Wednesday to counteract Microsoft's Recall feature. The new feature, dubbed Screen Security, prevents devices from capturing screenshots of the application's window. The San Francisco-based company said it was forced to take such drastic measures because Microsoft left it with limited options to protect the privacy of its users. Screen Security will be turned on by default on all Windows 11 devices once the update is installed.

    Signal Takes Action Against Microsoft's AI-Powered Recall Feature

    In a blog post, the company detailed the new feature and explained why it had to resort to it. Toward the end of last month, Microsoft finally began rolling out the AI-powered Recall feature to all Copilot+ PC-branded computers.

    Recall was first announced in May 2024 as an on-device search history tracker that takes continuous screenshots of whatever the user is doing on the device. This way, when users ask the AI what they were doing at a particular date and time, it can accurately tell them. However, the feature faced backlash from security experts and netizens due to the lack of privacy controls.

    Over the last year, the company says it has reworked the tool, added various security features, and made it opt-in instead of on by default. However, Signal now claims that Microsoft did not give app developers any tools to refuse OS-level AI systems access to chats, which can contain sensitive information.

    As a workaround, the messaging platform has added a Digital Rights Management (DRM) flag on the app window to prevent the device from capturing any screenshots. It is the same mechanism streaming platforms such as Netflix use to prevent users from taking screenshots of their content. Signal is turning this security setting on by default on Windows 11.

    Signal also acknowledged that this feature could give rise to some accessibility issues, as screen readers or magnification tools might not function correctly when the setting is active. However, it is possible to turn off the feature: users can go to Signal Settings, open the Privacy section, and disable the Screen Security setting there.

    Do note that when turning off the feature in Signal Desktop on Windows 11, the app will display a warning that says, "If disabled, this may allow Microsoft Windows to capture screenshots of Signal and use them for features that may not be private." Users can confirm this warning pop-up and disable the feature.

    "We hope that the AI teams building systems like Recall will think through these implications more carefully in the future. Apps like Signal shouldn't have to implement 'one weird trick' in order to maintain the privacy and integrity of their services without proper developer tools. People who care about privacy shouldn't be forced to sacrifice accessibility upon the altar of AI aspirations either," the company said.
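The screenshot-blocking behavior Signal describes corresponds to the Win32 display-affinity API: calling SetWindowDisplayAffinity with WDA_EXCLUDEFROMCAPTURE makes a window appear blacked out in screenshots and screen recordings. This is a hedged sketch of how an app might set that flag via ctypes; it is not Signal's actual code and requires a real window handle on Windows 10 2004 or later.

```python
import ctypes
import sys

# Win32 display-affinity constants (from the Windows SDK).
WDA_NONE = 0x0                 # window may be captured normally
WDA_EXCLUDEFROMCAPTURE = 0x11  # window is excluded from capture entirely


def exclude_from_capture(hwnd, enabled=True):
    """Toggle screenshot exclusion for a window (Windows-only)."""
    if sys.platform != "win32":
        raise OSError("display affinity is a Windows-only API")
    affinity = WDA_EXCLUDEFROMCAPTURE if enabled else WDA_NONE
    ok = ctypes.windll.user32.SetWindowDisplayAffinity(hwnd, affinity)
    if not ok:
        raise ctypes.WinError()
```

Disabling Screen Security in Signal's settings would, under this model, amount to setting the affinity back to WDA_NONE.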



    Akash Dutta

    Akash Dutta is a Senior Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, metaverse, and fediverse. In his free time, he can be seen supporting his favourite football club - Chelsea, watching movies and anime, and sharing passionate opinions on food.

    WWW.GADGETS360.COM
  • Signal Slams Microsoft’s Recall, Disables Screenshots on Windows 11

    Worried that Microsoft Recall might take screenshots of your Signal chats? Don’t be. Signal has introduced a new “Screen security” setting for its Windows 11 app that will show a black screen every time you or the system attempts to take a screenshot. You might be familiar with the outcome if you have attempted to screenshot shows or movies on Netflix.

    The feature arrives just a month after Microsoft officially rolled out its controversial Recall feature, which takes screenshots at regular intervals to create a history of everything you have seen or done on your PC. Though announced in May last year, the launch was delayed due to privacy concerns.

    Explaining the reason behind the feature, Signal said that “although Microsoft made several adjustments over the past twelve months in response to critical feedback, the revamped version of Recall still places any content that’s displayed within privacy-preserving apps like Signal at risk.” Additionally, the press release notes that Microsoft has yet to provide an API for the feature that would allow app developers to opt out of it.

    “We hope that the AI teams building systems like Recall will think through these implications more carefully in the future,” Signal says. “Apps like Signal shouldn’t have to implement ‘one weird trick’ in order to maintain the privacy and integrity of their services without proper developer tools.”

    Screen security for Signal is now rolling out to Windows 11 PCs. Users who aren’t worried about Recall, or are willing to let it store screenshots of their Signal activity, can disable the new feature by going to Signal Settings > Privacy > Screen security.
    ME.PCMAG.COM
  • Microsoft’s AI security chief accidentally reveals Walmart’s AI plans after protest

    Microsoft’s head of security for AI, Neta Haiby, accidentally revealed confidential messages about Walmart’s use of Microsoft’s AI tools during a Build talk that was disrupted by protesters. The Build livestream was muted and the camera pointed down, but the session resumed moments later after the protesters were escorted out. In the aftermath, Haiby accidentally switched to Microsoft Teams while sharing her screen, revealing confidential internal messages about Walmart’s upcoming use of Microsoft’s Entra and AI gateway services.

    Haiby was co-hosting a Build session on best security practices for AI alongside Sarah Bird, Microsoft’s head of responsible AI, when two former Microsoft employees disrupted the talk to protest against the company’s cloud contracts with the Israeli government. “Sarah, you are whitewashing the crimes of Microsoft in Palestine. How dare you talk about responsible AI when Microsoft is fueling the genocide in Palestine,” shouted Hossam Nasr, an organizer with the protest group No Azure for Apartheid and a former Microsoft employee who was fired for holding a vigil outside Microsoft’s headquarters for Palestinians killed in Gaza.

    Walmart is one of Microsoft’s biggest corporate customers and already uses the company’s Azure OpenAI service for some of its AI work. “Walmart is ready to rock and roll with Entra Web and AI Gateway,” says one of Microsoft’s cloud solution architects in the Teams messages. The chat session also quoted a Walmart AI engineer, saying: “Microsoft is WAY ahead of Google with AI security. We are excited to go down this path with you.” We asked Microsoft to comment on this protest and the Teams messages, but the company did not respond in time for publication.

    The private Microsoft Teams messages shown during the disrupted Build session. Image: Microsoft

    Both of the protesters involved in this latest Microsoft Build disruption were former Microsoft employees, with Vaniya Agrawal appearing alongside Nasr. Agrawal interrupted Microsoft co-founder Bill Gates, former CEO Steve Ballmer, and CEO Satya Nadella during the company’s 50th anniversary event last month. Agrawal was dismissed shortly after putting in her two weeks’ notice at Microsoft before the protest, according to an email seen by The Verge. This is the third interruption of Microsoft Build by protesters, after a Palestinian tech worker disrupted Microsoft’s head of CoreAI on Tuesday and a Microsoft employee interrupted the opening keynote of Build while CEO Satya Nadella was talking on stage.

    This latest protest comes days after Microsoft announced last week that it had conducted an internal review and used an unnamed external firm to assess how its technology is used in the war in Gaza. Microsoft says that its relationship with Israel’s Ministry of Defense (IMOD) is “structured as a standard commercial relationship” and that it has “found no evidence that Microsoft’s Azure and AI technologies, or any of our other software, have been used to harm people or that IMOD has failed to comply with our terms of service or our AI Code of Conduct.”
    WWW.THEVERGE.COM
  • Microsoft’s wartime pact with the EU rings hollow - and could spell trouble for UK IT buyers

    Image: charles taylor - stock.adobe.com

    Opinion

    Microsoft has moved to assure its European customers that it will fight any attempt by President Trump to disrupt their ability to access its services, but can UK customers take the company at its word?

    By Owen Sayers, Secon Solutions | Published: 20 May 2025

    Microsoft has moved to reassure governments across the European Union that it will fight any move by President Trump to interrupt services, should relations between Brussels and Washington deteriorate further, causing the United States to make cloud services a bargaining chip in a trade war.
    The company promised Europe it would “promptly and vigorously contest such a measure,” yet its overtures to EU customers should leave its governments confused, rather than assured.
    Its tacit recognition that the much-vaunted Microsoft EU Boundary might provide its European customers with little real protection from US interventions – despite taking over two years to implement – cannot be glossed over. 
    The issue in play is not just that Microsoft, being a US-headquartered ‘communication services provider’, is subject to many American laws that make transfers of data to it a complicated process, but that it might no longer be able to guarantee continuity of service if, for example, President Trump should wake up one morning and decide to order it to cease its EU operations.
    This admission, coming on top of well documented Microsoft global service outages and serial security compromises in recent years, will almost certainly stoke any fires of concern. 
    Such a possibility might have seemed remote just a few months ago, but Trump’s recent search for effective levers to exert control over other countries as part of his “America First” initiative means that what was previously a low risk, with negligible likelihood but massive impact, is now much more likely and could perhaps even be proximate.
    Microsoft president Brad Smith clearly agrees, as evidenced by his new comments about pre-emptive measures and offsets. 
    Certainly, that is the view many readers will hold after Smith’s somewhat clumsily presented efforts to calm the cloud market that he admits drives 25% of Microsoft’s global revenues, and is clearly important for him to protect.
    Telling foreign leaders that Microsoft is embarking on a crusade of change, as he repeatedly did, flies in the face of his attempts to thwart regulatory interventions regarding the software giant’s restrictive cloud licensing practices and grudging moves to unbundle its software from its operating system in the European Union.
    They might also consider that previous positive approaches and engagements from Microsoft, and even directly from Smith, have not prevented the company from levelling blame at the EU when things go wrong, as it did during the wholly EU-unrelated, CrowdStrike-initiated global service outage.
    At best, the historic relationship between Microsoft and European leaders has been spasmodic, and it would be understandable if they take these new assurances with a huge pinch of salt.
    Microsoft now plans to further address these new risks by creating a new EU board to manage its expanded datacentre estates in Europe, while ignoring that ‘branch-office’ management does not in fact change the nature of its US-centric operations, nor can it prevent the effects of any Presidential diktat.
    Time and again Microsoft has attempted to address legitimate consumer and EU-member government concerns through measures that are presented as forward-thinking and positive, but have zero effective benefit when analysed.
    For example, UK FOI disclosures made in June 2024 confirmed the long-held suspicion that Microsoft is dependent on the ability to process data globally wherever they choose for both their Azure and Microsoft 365 cloud service families, and this – not layers of localised senior execs – is at the root of their problems.
    Due to their global operating model any EU board managing datacentres will have no practical ability to technically or legally protect EU data from a US Government choosing to exert its entirely legitimate, if controversial, control over them and the European data they manage.
    What should cause immediate concern for the UK is that these overtures to the EU do not consider the UK at all – because it lies entirely outside of the Microsoft EU Data Boundary, and doesn’t appear to be included in these new promises either.
    Should companies with a foot in both the UK and Europe decide the protections offered by Microsoft are indeed effective, they may need to relocate their data and workloads to benefit from them, and history suggests that where the work shifts, so invariably do the key jobs.
    Microsoft, in any event, appears to feel that such efforts in the UK are unnecessary given the level of dependency the UK government already has on the Seattle tech giant, whether that be in the workings of the civil service, the NHS or national infrastructure.
    A view that can only have been solidified by the key positions the government has given over to Microsoft executives to effectively steer the UK’s national technology strategy.
    This is what should be the attention-grabber for Microsoft’s UK customers today: not that Microsoft is making big commitments and high-profile promises to the EU, but that the tech giant no longer feels the need to do the same for its UK operations. As a result, UK consumers and companies can expect to suffer, with their choices limited, their data access subject to the whims of foreign powers, and a government too dependent on Microsoft to put up a fight.


    #microsofts #wartime #pact #with #rings
    Microsoft’s wartime pact with the EU rings hollow - and could spell trouble for UK IT buyers
    charles taylor - stock.adobe.com Opinion Microsoft’s wartime pact with the EU rings hollow - and could spell trouble for UK IT buyers Microsoft has moved to assure its European customers that it will fight any attempt by President Trump to disrupt their ability to access its services, but can UK customers take the company at its word? By Owen Sayers, Secon Solutions Published: 20 May 2025 Microsoft has moved to re-assure governments across the European Unionthat it will fight any move by President Trump to interrupt services, should relations between Brussels and Washington deteriorate further, causing the United States to make cloud services a gambling chit in a trade war.  The company promised Europe it would “promptly and vigorously contest such a measure,” yet its overtures to EU customers should leave its governments confused, rather than assured. Its tacit recognition that the much-vaunted Microsoft EU Boundary might provide its European customers with little real protection from US interventions – despite taking over two years to implement – cannot be glossed over.  The issue in play is not just that Microsoft, being a US-headquartered ‘communication services provider’, is subject to many American laws that make transfers of data to them a complicated process. But that they might no longer be able to give guarantees of continuity of service if, for example, President Trump should wake up one morning and decide to order them to cease their EU operations. This admission, coming on top of well documented Microsoft global service outages and serial security compromises in recent years, will almost certainly stoke any fires of concern.  
Such a possibility might have seemed remote just a few months ago, but Trump’s recent search for effective levers to exert control over other countries as part of his “America First” initiative means that what was previously a low risk, with negligible likelihood but massive impact, is now much more likely and could perhaps even be proximate. Microsoft president Brad Smith clearly agrees, as evidenced by his new comments about pre-emptive measures and offsets.  Certainly, that is the view many readers will hold after Smith’s somewhat clumsily presented efforts to calm the cloud market that he admits drives 25% of Microsoft’s global revenues, and is clearly important for him to protect. Telling foreign leaders that Microsoft is embarking on a crusade of change, as he repeatedly did, flies in the face of his attempts to thwart regulatory interventions regarding the software giant’s restrictive cloud licensing practices and grudging moves to unbundle its software from its operating system in the European Union. They might also consider that previous positive approaches and engagements from Microsoft, and even directly from Smith, have not prevented them from levelling blame at the EU when things go wrong. As the company did during the wholly EU-unrelated Crowdstrike-initiated global service outage. At best, the historic relationship between Microsoft and European leaders has been spasmodic, and it would be understandable if they take these new assurances with a huge pinch of salt. Microsoft now plans to further address these new risks through the creation of a new EU board to manage its expanded datacentre estates in Europe, whilst ignoring that ‘branch-office’ management does not in fact change the nature of its US-centric operations, and nor can they prevent the effects of any Presidential diktat. 
Time and again, Microsoft has attempted to address legitimate consumer and EU-member government concerns through measures that are presented as forward-thinking and positive, but that have zero effective benefit when analysed. For example, UK FOI disclosures made in June 2024 confirmed the long-held suspicion that Microsoft depends on the ability to process data globally, wherever it chooses, for both its Azure and Microsoft 365 cloud service families – and this, not layers of localised senior execs, is at the root of its problems. Because of that global operating model, any EU board managing datacentres will have no practical ability to technically or legally protect EU data from a US government choosing to exert its entirely legitimate, if controversial, control over Microsoft and the European data it manages.

What should cause immediate concern for the UK is that these overtures to the EU do not consider the UK at all – it lies entirely outside the Microsoft EU Data Boundary, and does not appear to be included in these new promises either. Should companies with a foot in both the UK and Europe decide the protections offered by Microsoft are indeed effective, they may therefore need to relocate their data and workloads to benefit from them – and history suggests that where the work shifts, so invariably do the key jobs.

Microsoft, in any event, appears to feel that such efforts in the UK are unnecessary, given the level of dependency the UK government already has on the Seattle tech giant, whether in the workings of the civil service, the NHS or national infrastructure – a view that can only have been solidified by the key positions the government has handed to Microsoft executives to effectively steer the UK’s national technology strategy.
This is what should grab the attention of Microsoft’s UK customers today: not that Microsoft is making big commitments and high-profile promises to the EU, but that the tech giant no longer feels the need to do the same for its UK operations. As a result, UK consumers and companies can expect to suffer – their choices limited, their data access subject to the whims of foreign powers, and their government too dependent on Microsoft to put up a fight.