How Microsoft wants AI agents to use your PC for you
Imagine a chatbot using your PC for you. That’s one of many visions Microsoft showed off at its developer-focused Build 2025 conference on Monday. And the vision, it would seem, is already almost here.
Microsoft’s concept revolves around the Model Context Protocol, which was created by Anthropic last year. That’s an open-source protocol that AI apps can use to talk to other apps and web services. Soon, Microsoft says, you’ll be able to let a chatbot — or “AI agent” — connect to apps running on your PC and manipulate them on your behalf.
That’s not all for the future of Windows, either: Microsoft also announced it’s getting serious about local AI apps on Windows. They’re not just being used to push Copilot+ PCs anymore.
Want more Windows PC knowledge? Sign up for my free Windows Intelligence newsletter. I’ll send you in-depth Windows Field Guides as an instant welcome bonus!
The Windows chatbot basics
So what might Microsoft’s bot-centric Windows vision actually look like? While using an AI chatbot like Copilot, ChatGPT, or Perplexity, you could say commands like “Open Photoshop and … for me,” “Generate a playlist and put it together for me on Spotify,” “Get information from …,” or “Search my PC for ….” The chatbot would request permission to connect to the app on your PC, and it could then connect and perform those actions for you.
At least, it could — if the associated applications support it. I’m speculating about what would be possible in the future if, say, Adobe and Spotify actually chose to support this system in their respective Windows apps.
If this all sounds extreme, keep these qualifying points from Microsoft’s demo in mind:
The Settings app will have a page where you can turn the feature on and off for each app on your system.
Your chatbot of choice will always request access to apps, even when they’ve made themselves available to AI experiences. You have to give permission.
This will all only work with apps where the developer has actively chosen to support it.
And none of this would happen automatically in the background — only when you explicitly ask a chatbot for something.
Also worth noting is that this week’s demo was mostly aimed at developers. In its examples, Microsoft showed off using Copilot to install the Fedora Linux distribution in the Windows Subsystem for Linux and then install software inside it.
In other words, here’s the pitch: AI can discover the right commands to run to install and configure a Linux system, and it can do that work for you. You don’t have to copy-paste the commands.
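To make that pitch concrete, here’s the kind of command sequence an agent might assemble and run for that demo, sketched in Python. The Fedora distro identifier and the packages are assumptions for illustration only; in Microsoft’s demo, Copilot works out the right commands itself.

```python
# Sketch of the Fedora-in-WSL steps an agent might run on your behalf.
# The distro identifier and package names are assumptions for illustration.
import subprocess

steps = [
    ["wsl", "--install", "-d", "FedoraLinux-42"],      # install Fedora under WSL (assumed distro name)
    ["wsl", "-d", "FedoraLinux-42", "--",
     "sudo", "dnf", "install", "-y", "git", "gcc"],    # install software inside it (packages picked arbitrarily)
]

for cmd in steps:
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # stop if any step fails
```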
Diving deeper into Microsoft’s AI agent approach
Compared to what Microsoft is proposing, past “agentic” AI solutions that promised to use your computer for you aren’t quite as compelling. They’ve relied on looking at your computer’s screen and using that input to determine what to click and type. This new setup, in contrast, is neat — if it works as promised — because it lets an AI chatbot interact directly with any old traditional Windows PC app.
But the Model Context Protocol solution is even more advanced and streamlined than that. Rather than a chatbot having to put together a Spotify playlist by dragging and dropping songs the old-fashioned way, MCP lets the AI send instructions to the Spotify app in a simpler, more structured form.
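Here’s roughly what that looks like from the app’s side: the application exposes “tools” that a connected model can call, instead of the model pushing pixels around. This is only a sketch, assuming the open-source MCP Python SDK and its FastMCP helper; the playlist tool is a hypothetical stand-in, not Spotify’s actual integration.

```python
# Minimal MCP server sketch: one "tool" an AI chatbot could discover and call.
# Assumes the open-source MCP Python SDK (pip install mcp); the playlist tool
# is a hypothetical example, not any real app's API.
from mcp.server.fastmcp import FastMCP

server = FastMCP("demo-music-app")

@server.tool()
def create_playlist(name: str, genre: str) -> str:
    """Create a playlist in an imaginary local music app."""
    # A real application would call into its own playlist code here.
    return f"Created playlist '{name}' with some {genre} tracks."

if __name__ == "__main__":
    # Serve over stdio so an MCP-aware client (the chatbot) can connect.
    server.run()
```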
On a more technical level, Microsoft will let application developers make their applications function as MCP servers — a fancy way of saying they’d act as a bridge between AI models and the tasks they perform. There will be an MCP registry for Windows that apps like Copilot, ChatGPT, and Perplexity could use to discover available MCP servers running on your PC, then connect to them and run actions within them.
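On the chatbot side, connecting to one of those servers could look something like the sketch below, again using the open-source MCP Python SDK. The server command and tool name are hypothetical placeholders, and on Windows the new MCP registry would handle discovery rather than a hard-coded command.

```python
# Client-side sketch: connect to a local MCP server, list its tools, call one.
# Uses the open-source MCP Python SDK; the server command and tool name are
# hypothetical placeholders for whatever a real Windows app would register.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    server = StdioServerParameters(command="demo-music-app")  # hypothetical server
    async with stdio_client(server) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()   # discover what the app offers
            print("Tools:", [tool.name for tool in tools.tools])
            result = await session.call_tool(    # ask the app to do the work
                "create_playlist", {"name": "Focus", "genre": "ambient"}
            )
            print(result)

asyncio.run(main())
```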
MCP is already in use on the web, but Microsoft’s idea to open Windows applications up to this same sort of use case is a whole new kind of thinking.
The AI-PC security factor
It’s all well and good if an AI agent has to get permission before using apps on your PC, but what happens after you give it that permission and it’s working on a particular request? Currently, if an agent responds by recommending you run commands or perform specific tasks, you’re still the one typing them. If the AI is doing the work itself, how can you tell it isn’t doing anything dangerous — or making any mistakes?
Microsoft’s answer comes in a detailed blog post about MCP security, which takes a wide variety of security threats into account. For example, the AI has to be clear about what it did so you can audit it. That’s great, but it still requires you to let the AI agent take actions on its own and then look back on them after the fact.
A new permissions page will let you control which applications expose themselves as options to AI apps.
Microsoft
While it’s good to know Microsoft is taking security into account, it’s tough to say yet what the real-world security and data threats will actually look like. Microsoft is providing an early preview of the software only to developers, so the company appears to be moving slowly rather than shipping the feature imminently.
That’s a big change from last year: At Microsoft Build 2024, Microsoft announced the Windows Recall feature to much controversy, with numerous security experts excoriating it for data privacy and security risks. While Microsoft made lots of changes, the feature was delayed and didn’t make it onto Copilot+ PCs until nearly a year after its announcement.
AI features beyond the NPU
Last week at PCWorld, I called for Microsoft to stop limiting local Windows AI features to lightweight laptops with neural processing units. The company’s “Copilot+ PC” approach and the associated Copilot Runtime prevented Windows from taking advantage of powerful GPUs (and ordinary CPUs), encouraging developers to ignore AI features that only a handful of new laptops could run.
At Build 2025, Microsoft did just that. I’m not taking credit: Restricting AI features to NPUs was obviously unsustainable if Microsoft wanted developers to build AI apps that could take advantage of a variety of hardware types. Microsoft made the smart move.
Specifically, Microsoft announced Windows ML — a runtime designed to run AI software on a wide range of Windows PCs, whether that’s on a low-power NPU, a high-performance GPU, or a standard CPU. It’s a single framework developers can target so they don’t have to write separately for hardware from NVIDIA, AMD, Intel, and Qualcomm, as they have to today.
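Windows ML’s exact developer API wasn’t the focus of the keynote, and it isn’t shown here. But the underlying idea, one runtime that picks whatever execution hardware is available and falls back to the CPU, is familiar from ONNX Runtime, which Microsoft’s Windows AI tooling has long built on. A rough sketch of that pattern, with “model.onnx” as a placeholder path:

```python
# Illustrative only: one runtime, a preference-ordered list of hardware
# backends, and a CPU fallback. "model.onnx" is a placeholder model path.
import onnxruntime as ort

print("Available providers:", ort.get_available_providers())

session = ort.InferenceSession(
    "model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],  # GPU via DirectML if present, else CPU
)
print("Actually running on:", session.get_providers())
```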
In other words, Windows will provide better ways for application developers to build AI features that run on your computer rather than in the cloud. That’s a win for privacy.
Microsoft also announced the Windows AI Foundry, calling it “an evolution of the Windows Copilot Runtime.” While Microsoft didn’t say so directly during the keynote, it sounds like AI features designed for Copilot+ PCs may come to a wider variety of Windows PCs in the future.
Windows will soon run AI applications on any hardware — not just an NPU on a new laptop.
Microsoft
So that’s the future of Windows Microsoft is selling right now: deep integration for AI agents to use apps on your PC, if you want, and more ways for app developers to create AI features that use your PC’s hardware rather than cloud servers. Now all that’s left is to see how it all shapes up in reality.
Want to stay on top of Windows? Sign up for my free Windows Intelligence newsletter. I’ll send you three new things to know and try each Friday.