• Apple WWDC 2025: News and analysis

    Apple’s Worldwide Developers Conference 2025 saw a range of announcements that offered a glimpse into the future of Apple’s software design and artificial intelligence (AI) strategy, highlighted by a new design language called Liquid Glass and by Apple Intelligence news.

    Liquid Glass is designed to add translucency and dynamic movement to Apple’s user interface across iPhones, iPads, Macs, Apple Watches, and Apple TVs. The overhaul aims to make interactions with elements like buttons and sidebars adapt contextually.

    However, the real news of WWDC could be what we didn’t see. Analysts had high expectations for Apple’s AI strategy, and while Apple Intelligence was talked about, many market watchers reported that it lacked the innovation that has come from Google’s and Microsoft’s generative AI (genAI) rollouts.

    The question of whether Apple is playing catch-up lingered at WWDC 2025, and comments from Apple execs about delays to a significant AI overhaul for Siri were apparently interpreted as a setback by investors, leading to a negative reaction and drop in stock price.

    Follow this page for Computerworld‘s coverage of WWDC25.

    WWDC25 news and analysis

    Apple’s AI Revolution: Insights from WWDC

    June 13, 2025: At Apple’s big developer event, developers were served a feast of AI-related updates, including APIs that let them use Apple Intelligence in their apps and ChatGPT augmentation from within Xcode. As a development platform, Apple has secured its future, with Macs among the most computationally performant systems you can affordably buy for the job.

    For developers, Apple’s tools get a lot better for AI

    June 12, 2025: Apple announced one important AI update at WWDC this week, the introduction of support for third-party large language models such as ChatGPT from within Xcode. It’s a big step that should benefit developers, accelerating app development.

    WWDC 25: What’s new for Apple and the enterprise?

    June 11, 2025: Beyond its new Liquid Glass UI and other major improvements across its operating systems, Apple introduced a horde of changes, tweaks, and enhancements for IT admins at WWDC 2025.

    What we know so far about Apple’s Liquid Glass UI

    June 10, 2025: What Apple has tried to achieve with Liquid Glass is to bring together the optical quality of glass and the fluidity of liquid to emphasize transparency and lighting when using your devices. 

    WWDC first look: How Apple is improving its ecosystem

    June 9, 2025: While the new user interface design Apple execs highlighted at this year’s Worldwide Developers Conference might have been a bit of an eye-candy distraction, Apple’s enterprise users were not forgotten.

    Apple infuses AI into the Vision Pro

    June 8, 2025: Sluggish sales of Apple’s Vision Pro mixed reality headset haven’t dampened the company’s enthusiasm for advancing the device’s 3D computing experience, which now incorporates AI to deliver richer context and experiences.

    WWDC: Apple is about to unlock international business

    June 4, 2025: One of the more exciting pre-WWDC rumors is that Apple is preparing to make language problems go away by implementing focused artificial intelligence in Messages, which will apparently be able to translate incoming and outgoing messages on the fly. 
  • Microsoft 365 Word gets SharePoint eSignature, now you can ditch third-party signing tools


    Paul Hill

    Neowin
    @ziks_99 ·

    Jun 6, 2025 03:02 EDT

    Microsoft has just announced that it will be rolling out an extremely convenient feature throughout this year for Microsoft 365 customers who use Word. The Redmond giant said that you’ll now be able to use SharePoint’s native eSignature service directly in Microsoft Word.
    The new feature allows customers to request electronic signatures without converting the documents to a PDF or leaving the Word interface, significantly speeding up workflows.
    Microsoft’s integration of eSignatures also allows you to create eSignature templates which will speed up document approvals, eliminate physical signing steps, and help with compliance and security in the Microsoft 365 environment.

    This change has the potential to significantly improve quality of life for workers who find themselves adding lots of signatures to documents, as they will no longer have to export PDFs from Word and apply the signature elsewhere. It’s also key to point out that this feature is integrated natively and is not an extension.
    The move is quite clever on Microsoft’s part: if businesses were using third-party tools to sign their documents, they would no longer need them, as it’s easier to do it in Word. Not only does it reduce reliance on other tools, it also makes Microsoft’s products more competitive against other office suites such as Google Workspace.
    Streamlined, secure, and compliant
    The new eSignature feature is tightly integrated into Word. It lets you insert signature fields seamlessly into documents and request other people’s signatures, all while remaining in Word. The eSignature feature can be accessed in Word by going to the Insert ribbon.
    When you send a signature request to someone from Word, the recipient will get an automatically generated PDF copy of the Word document to sign. The signed PDF will then be kept in the same SharePoint location as the original Word file. To ensure end-to-end security and compliance, the document never leaves the Microsoft 365 trust boundary.
    For anyone with a repetitive signing process, this integration allows you to turn Word documents into eSignature templates so they can be reused.
    Another feature that Microsoft has built in is audit trails and notifications. Both senders and signers will get email notifications throughout the entire signing process. Additionally, you can view the activity history in the signed PDF to check who signed it and when.
    Finally, Microsoft said that administrators will be able to control how the feature is used in Word throughout the organization. They can decide to enable it for specific users via an Office group policy or limit it to particular SharePoint sites. The company said that SharePoint eSignature also lets admins log activities in the Purview Audit log.
    A key security measure included by Microsoft, which was mentioned above, was the Microsoft 365 trust boundary. By keeping documents in this boundary, Microsoft ensures that all organizations can use this feature without worry.
    The inclusion of automatic PDF creation is all a huge benefit to users as it will cut out the step of manual PDF creation. While creating a PDF isn’t complicated, it can be time consuming.
    The eSignature feature looks like a win-win-win for organizations that rely on digital signatures. Not only does it speed things along and remain secure, but it’s also packed with features like tracking, making it really useful and comprehensive.
    When and how your organization gets it
    SharePoint eSignature has started rolling out to Word on the M365 Beta and Current Channels in the United States, Canada, the United Kingdom, Europe, and Australia-Pacific. This phase of the rollout is expected to be completed by early July.
    People in the rest of the world will also be gaining this time-saving feature but it will not reach everyone right away, though Microsoft promises to reach everybody by the end of the year.
    To use the feature, it will need to be enabled by administrators. If you’re an admin who needs to enable this, go to the M365 Admin Center and enable SharePoint eSignature, ensuring the Word checkbox is selected. Once the service is enabled, apply the “Allow the use of SharePoint eSignature for Microsoft Word” policy. The policy can be enabled via Intune, Group Policy Manager, or the Cloud Policy service for Microsoft 365.
    Assuming the admins have given permission to use the feature, users will be able to access SharePoint eSignatures on Word Desktop using the Microsoft 365 Current Channel or Beta Channel.
    The main caveats are that the rollout is phased, so you might not get it right away, and that it requires IT admins to enable the feature, meaning in some organizations it may never get enabled at all.
    Overall, this feature stands to benefit users who sign documents frequently, as it can save huge amounts of time cumulatively. It’s also good for Microsoft, as it increases organizations’ dependence on Word.

  • Understanding the Relationship Between Security Gateways and DMARC

    Email authentication protocols like SPF, DKIM, and DMARC play a critical role in protecting domains from spoofing and phishing. However, when secure email gateways (SEGs) are introduced into the email path, the interaction with these protocols becomes more complex.
    Security gateways are a core part of many organizations’ email infrastructure. They act as intermediaries between the public internet and internal mail systems, inspecting, filtering, and routing messages.
    This blog examines how security gateways handle SPF, DKIM, and DMARC, with real-world examples from popular gateways such as Proofpoint, Mimecast, and Avanan. We’ll also cover best practices for maintaining authentication integrity and avoiding misconfigurations that can compromise email authentication or lead to false DMARC failures.
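    As background for what follows, a domain’s DMARC policy is published as a DNS TXT record at _dmarc.&lt;domain&gt;, expressed as a series of tag=value pairs. A minimal sketch of parsing such a record (the record string below is a made-up example, not any real domain’s policy):

```python
# Minimal sketch: parse a published DMARC TXT record into tag/value pairs.
# The record string is a hypothetical example for illustration; in practice
# the record is fetched from DNS at _dmarc.<domain>.
def parse_dmarc(record: str) -> dict:
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

policy = parse_dmarc("v=DMARC1; p=quarantine; sp=reject; rua=mailto:reports@example.com")
print(policy["p"])  # quarantine
```

    The `p` tag is the policy a receiver is asked to apply on failure (none, quarantine, or reject), which is exactly the knob the gateways discussed below can honor or override.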
    Security gateways often sit at the boundary between your organization and the internet, managing both inbound and outbound email traffic. Their role affects how email authentication protocols behave.
    An inbound SEG examines emails coming into your organization. It checks SPF, DKIM, and DMARC to determine if the message is authentic and safe before passing it to your internal mail servers.
    An outbound SEG handles emails sent from your domain. It may modify headers, rewrite envelope addresses, or even apply DKIM signing. All of these can impact SPF, DKIM, or DMARC validation on the recipient’s side.

    Understanding how SEGs influence these flows is crucial to maintaining proper authentication and avoiding unexpected DMARC failures.
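    To make those failure modes concrete, here is a simplified sketch of the DMARC evaluation a receiver performs: a message passes if either SPF or DKIM passes and that mechanism’s domain aligns with the visible From domain. This sketch uses relaxed alignment with a naive organizational-domain heuristic; real validators consult the Public Suffix List.

```python
def org_domain(domain: str) -> str:
    # Naive organizational-domain extraction (last two labels).
    # Real implementations use the Public Suffix List.
    return ".".join(domain.lower().split(".")[-2:])

def dmarc_passes(from_domain, spf_pass, spf_domain, dkim_pass, dkim_domain):
    # DMARC passes if at least one mechanism passes AND its domain aligns
    # (relaxed alignment) with the RFC5322.From domain.
    spf_aligned = spf_pass and org_domain(spf_domain) == org_domain(from_domain)
    dkim_aligned = dkim_pass and org_domain(dkim_domain) == org_domain(from_domain)
    return spf_aligned or dkim_aligned

# An outbound gateway that rewrites the envelope sender to its own domain
# breaks SPF alignment, but the message still passes DMARC if the original
# DKIM signature (d=example.com) survives intact:
print(dmarc_passes("example.com", True, "gateway-vendor.net", True, "example.com"))  # True
```

    This is why preserving the original DKIM signature through an outbound gateway matters so much: once envelope rewriting costs you SPF alignment, an intact DKIM signature is the only remaining path to a DMARC pass.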
    Inbound Handling of SPF, DKIM, and DMARC by Common Security Gateways
    When an email comes into your organization, your security gateway is the first to inspect it. It checks whether the message is real, trustworthy, and properly authenticated. Let’s look at how different SEGs handle these checks.
    Avanan
    SPF: Avanan verifies whether the sending server is authorized to send emails for the domain by checking the SPF record.
    DKIM: It verifies if the message was signed by the sending domain and if that signature is valid.
    DMARC: It uses the results of the SPF and DKIM check to evaluate DMARC. However, final enforcement usually depends on how DMARC is handled by Microsoft 365 or Gmail, as Avanan integrates directly with them.

    Avanan offers two methods of integration:
    1. API integration: Avanan connects via APIs, with no change to MX records, usually in Monitor or Detect modes.
    2. Inline integration: Avanan is placed inline in the mail flow, actively blocking or remediating threats.
    Proofpoint Email Protection

    SPF: Proofpoint checks SPF to confirm the sender’s IP is authorized to send on behalf of the domain. You can set custom rules.
    DKIM: It verifies DKIM signatures and shows clear pass/fail results in logs.
    DMARC: It fully evaluates DMARC by combining SPF and DKIM results with alignment checks. Administrators can configure how to handle messages that fail DMARC, such as rejecting, quarantining, or delivering them. Additionally, Proofpoint allows whitelisting specific senders you trust, even if their emails fail authentication checks.

    Integration Methods

    Inline Mode: In this traditional deployment, Proofpoint is positioned directly in the email flow by modifying MX records. Emails are routed through Proofpoint’s infrastructure, allowing it to inspect and filter messages before they reach the recipient’s inbox. This mode provides pre-delivery protection and is commonly used in on-premises or hybrid environments.
    API-Based Mode: Proofpoint offers API-based integration, particularly with cloud email platforms like Microsoft 365 and Google Workspace. In this mode, Proofpoint connects to the email platform via APIs, enabling it to monitor and remediate threats post-delivery without altering the email flow. This approach allows for rapid deployment and seamless integration with existing cloud email services.

    Mimecast

    SPF: Mimecast performs SPF checks to verify whether the sending server is authorized by the domain’s SPF record. Administrators can configure actions for SPF failures, including block, quarantine, permit, or tag with a warning. This gives flexibility in balancing security with business needs.
    DKIM: It validates DKIM signatures by checking that the message was correctly signed by the sending domain and that the content hasn’t been tampered with. If the signature fails, Mimecast can take actions based on your configured policies.
    DMARC: It fully evaluates DMARC by combining the results of SPF and DKIM with domain alignment checks. You can choose to honor the sending domain’s DMARC policy or apply custom rules, for example, quarantining or tagging messages that fail DMARC regardless of the published policy. This allows more granular control for businesses that want to override external domain policies based on specific contexts.
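    The override behavior described above can be pictured as a small decision function. This is a hypothetical sketch of the general pattern, not Mimecast’s actual rule engine:

```python
def gateway_action(dmarc_pass, published_policy, override=None):
    # Hypothetical decision helper: deliver on a DMARC pass; on failure,
    # apply a locally configured override if one is set, otherwise honor
    # the sender's published policy (p=none / quarantine / reject).
    if dmarc_pass:
        return "deliver"
    if override is not None:
        return override
    return {"none": "deliver", "quarantine": "quarantine", "reject": "reject"}.get(published_policy, "deliver")

# A failing message from a domain publishing p=none would normally be
# delivered, but a local override can quarantine it anyway:
print(gateway_action(False, "none", override="quarantine"))  # quarantine
```

    The key point is the precedence: local configuration wins over the sender’s published policy, which is what lets a gateway quarantine mail that the sending domain would have allowed through.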

    Integration Methods

    Inline Deployment: Mimecast is typically deployed as a cloud-based secure email gateway. Organizations update their domain’s MX records to point to Mimecast, so all inbound emails pass through it first. This allows Mimecast to inspect, filter, and process emails before delivery, providing robust protection.
    API Integrations: Mimecast also offers API-based services through its Mimecast API platform, primarily for management, archival, continuity, and threat intelligence purposes. However, API-only email protection is not Mimecast’s core model. Instead, the APIs are used to enhance the inline deployment, not replace it.

    Barracuda Email Security Gateway
    SPF: Barracuda checks the sender’s IP against the domain’s published SPF record. If the check fails, you can configure the system to block, quarantine, tag, or allow the message, depending on your policy preferences.
    DKIM: It validates whether the incoming message includes a valid DKIM signature. The outcome is logged and used to inform further policy decisions or DMARC evaluations.
    DMARC: It combines SPF and DKIM results, checks for domain alignment, and applies the DMARC policy defined by the sender. Administrators can also choose to override the DMARC policy, allowing messages to pass or be treated differently based on organizational needs.
    Integration Methods

    Inline mode: Barracuda Email Security Gateway is commonly deployed inline by updating your domain’s MX records to point to Barracuda’s cloud or on-premises gateway. This ensures that all inbound emails pass through Barracuda first for filtering and SPF, DKIM, and DMARC validation before being delivered to your mail servers.
    Deployment Behind the Corporate Firewall: Alternatively, Barracuda can be deployed in transparent or bridge mode without modifying MX records. In this setup, the gateway is placed inline at the network level, such as behind a firewall, and intercepts mail traffic transparently. This method is typically used in complex on-premises environments where changing DNS records is not feasible.

    Cisco Secure Email

    Cisco Secure Email acts as an inline gateway for inbound email, usually requiring your domain’s MX records to point to the Cisco Email Security Appliance or cloud service.
    SPF: Cisco Secure Email verifies whether the sending server is authorized in the sender domain’s SPF record. Administrators can set detailed policies on how to handle SPF failures.
    DKIM: It validates the DKIM signature on incoming emails and logs whether the signature is valid or has failed.
    DMARC: It evaluates DMARC by combining SPF and DKIM results along with domain alignment checks. Admins can configure specific actions, such as quarantine, reject, or tag, based on different failure scenarios or trusted sender exceptions.
    Integration methods

    On-premises Email Security Appliance: You deploy Cisco’s hardware or virtual appliance inline, updating MX records to route mail through it for filtering.
    Cisco Cloud Email Security: Cisco offers a cloud-based email security service where MX records are pointed to Cisco’s cloud infrastructure, which filters and processes inbound mail.

    Cisco Secure Email also offers advanced, rule-based filtering capabilities and integrates with Cisco’s broader threat protection ecosystem, enabling comprehensive inbound email security.
    Outbound Handling of SPF, DKIM, and DMARC by Common Security Gateways
    When your organization sends emails, security gateways can play an active role in processing and authenticating those messages. Depending on the configuration, a gateway might rewrite headers, re-sign messages, or route them through different IPs – all actions that can help or hurt the authentication process. Let’s look at how major SEGs handle outbound email flow.
    Avanan – Outbound Handling and Integration Methods
    Outbound Logic
    Avanan analyzes outbound emails primarily to detect data loss, malware, and policy violations. In API-based integration, emails are sent directly by the original mail server, so SPF and DKIM signatures remain intact. Avanan does not alter the message or reroute traffic, which helps maintain full DMARC alignment and domain reputation.
    Integration Methods
    1. API Integration: Connects to Microsoft 365 or Google Workspace via API. No MX changes are needed. Emails are scanned after they are sent, with no modification to SPF, DKIM, or the delivery path. 

    How it works: Microsoft Graph API or Google Workspace APIs are used to monitor and intervene in outbound emails.
    Protection level: Despite no MX changes, it can offer inline-like protection, meaning it can block, quarantine, or encrypt emails before they are delivered externally.
    SPF/DKIM/DMARC impact: Preserves original headers and signatures since mail is sent directly from Microsoft/Google servers.

    2. Inline Integration: Requires changing MX records to route email through Avanan. In this mode, Avanan can intercept and inspect outbound emails before delivery. Depending on the configuration, this may affect SPF or DKIM if not properly handled.

    How it works: Outbound mail is routed through Avanan’s infrastructure (via MX or routing changes) so it can be inspected before delivery.
    Protection level: Traditional inline security with full visibility and control, including encryption, DLP, policy enforcement, and advanced threat protection.
    SPF/DKIM/DMARC impact: Add Avanan’s include mechanism to the sending domain’s SPF record. The DKIM signature of the original sending source is preserved.
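    The SPF change in the last bullet is mechanical enough to script. A small sketch that inserts a gateway’s include mechanism into an existing SPF TXT record if it is not already present (the include hostname used here is a placeholder, not Avanan’s actual one):

```python
def add_spf_include(record: str, include_host: str) -> str:
    """Insert `include:<host>` before the terminal `all` mechanism
    of an SPF record, leaving the record unchanged if already present."""
    mech = f"include:{include_host}"
    parts = record.split()
    if mech in parts:
        return record
    # The all mechanism (with its optional +/-/~/? qualifier) must stay last.
    if parts and parts[-1].lstrip("+-~?") == "all":
        return " ".join(parts[:-1] + [mech, parts[-1]])
    return " ".join(parts + [mech])

print(add_spf_include("v=spf1 include:_spf.google.com ~all",
                      "spf.gateway.example"))
# v=spf1 include:_spf.google.com include:spf.gateway.example ~all
```

    Keeping the `all` mechanism last matters: SPF evaluates mechanisms left to right, so an include placed after `-all` would never be reached.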

    For configurations, you can refer to the steps in this blog.
    Proofpoint – Outbound Handling and Integration Methods
    Outbound Logic
    Proofpoint analyzes outbound emails to detect and prevent data loss, to identify advanced threats originating from compromised internal accounts, and to ensure compliance. Their API integration provides crucial visibility and powerful remediation capabilities, while their traditional gateway deployment delivers true inline, pre-delivery blocking for outbound traffic.
    Integration methods
    1. API Integration: No MX record changes are required for this deployment method. Integration is done with Microsoft 365 or Google Workspace.

    How it works: Through its API integration, Proofpoint gains deep visibility into outbound emails and provides layered security and response features, including:

    Detect and alert: Identifies sensitive content, malicious attachments, or suspicious links in outbound emails.
    Post-delivery remediation: A key capability of the API model is Threat Response Auto-Pull, which enables Proofpoint to automatically recall, quarantine, or delete emails after delivery. This is particularly useful for internally sent messages or those forwarded to other users.
    Enhanced visibility: Aggregates message metadata and logs into Proofpoint’s threat intelligence platform, giving security teams a centralized view of outbound risks and user behavior.

    Protection level: API-based integration provides strong post-delivery detection and response, as well as visibility into DLP incidents and suspicious behavior. 
    SPF/DKIM/DMARC impact: Proofpoint does not alter SPF, DKIM, or DMARC because emails are sent directly through Microsoft or Google servers. Since Proofpoint’s servers are not involved in the actual sending process, the original authentication headers remain intact.
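    Post-delivery remediation over an API boils down to locating the delivered message and deleting or quarantining it in the mailbox. A minimal sketch of the underlying Microsoft Graph call, assuming an OAuth token with application-level Mail.ReadWrite permission is already in hand (token acquisition, error handling, and Proofpoint’s actual implementation are all elided; this is illustrative, not Proofpoint’s code):

```python
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def remediation_url(user_id: str, message_id: str) -> str:
    """Build the Graph endpoint for a delivered message in a user's mailbox."""
    return f"{GRAPH}/users/{user_id}/messages/{message_id}"

def pull_message(token: str, user_id: str, message_id: str) -> int:
    """Issue DELETE against the message; Graph returns 204 on success."""
    req = urllib.request.Request(
        remediation_url(user_id, message_id),
        method="DELETE",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

    Because the message is removed from the mailbox after delivery, this model complements rather than replaces pre-delivery filtering.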

    2. Gateway Integration: This method requires updating MX records or routing outbound mail through Proofpoint via a smart host.

    How it works: Proofpoint acts as an inline gateway, inspecting emails before delivery. Inbound mail is filtered via MX changes; outbound mail is relayed through Proofpoint’s servers.
    Threat and DLP filtering: Scans outbound messages for sensitive content, malware, and policy violations.
    Real-time enforcement: Blocks, encrypts, or quarantines emails before they’re delivered.
    Policy controls: Applies rules based on content, recipient, or behavior.
    Protection level: Provides strong, real-time protection for outbound traffic with pre-delivery enforcement, DLP, and encryption.
    SPF/DKIM/DMARC impact: Proofpoint becomes the sending server:

    SPF: Your domain’s SPF record must authorize Proofpoint’s sending infrastructure (its include mechanism).
    DKIM: Proofpoint can sign messages; this requires DKIM setup.
    DMARC: Passes once SPF and DKIM are correctly configured and aligned.
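    The “Proofpoint becomes the sending server” point is exactly why SPF breaks without the record update: receivers evaluate SPF against the connecting relay’s IP, not the origin server’s. A toy evaluator illustrating the effect (all IP ranges are made up; real SPF evaluation also resolves include:, a:, and mx: mechanisms):

```python
import ipaddress

def spf_ip_check(record: str, sending_ip: str) -> bool:
    """Check a sending IP against the ip4:/ip6: mechanisms of an SPF
    record (includes and macros are ignored in this toy evaluator)."""
    ip = ipaddress.ip_address(sending_ip)
    for mech in record.split():
        if mech.startswith(("ip4:", "ip6:")):
            net = ipaddress.ip_network(mech.split(":", 1)[1], strict=False)
            if ip in net:
                return True
    return False

# Before relaying through the gateway: only the origin server is listed.
direct = "v=spf1 ip4:203.0.113.10 -all"
# After: the gateway's (hypothetical) relay range must be added too.
relayed = "v=spf1 ip4:203.0.113.10 ip4:198.51.100.0/24 -all"

print(spf_ip_check(direct, "198.51.100.25"))    # False: relay IP fails SPF
print(spf_ip_check(relayed, "198.51.100.25"))   # True
```

    The same reasoning explains why API-based deployments leave SPF untouched: mail still leaves from the original provider’s IPs.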

    Please refer to this article to configure SPF and DKIM for Proofpoint.
    Mimecast – Outbound Handling and Integration Methods
    Outbound Logic
    Mimecast inspects outbound emails to prevent data loss, detect internal threats such as malware and impersonation, and ensure regulatory compliance. It primarily functions as a Secure Email Gateway, meaning it sits directly in the outbound email flow. While Mimecast offers APIs, its core outbound protection is built around this inline gateway model.
    Integration Methods
    1. Gateway Integration: This is Mimecast’s primary method for outbound email protection. Organizations route their outbound traffic through Mimecast by configuring their email server to use Mimecast as a smart host. This enables Mimecast to inspect and enforce policies on all outgoing emails in real time.

    How it works:
    Updating outbound routing in your email system, or
    Using Mimecast SMTP relay to direct messages through their infrastructure.
    Mimecast then scans, filters, and applies policies before the email reaches the final recipient.
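    On a Postfix server, the smart-host routing described above is a one-line change (the relay hostname is illustrative; Mimecast assigns region-specific hostnames during onboarding):

```
# /etc/postfix/main.cf: relay all outbound mail through the gateway
relayhost = [us-smtp-outbound-1.mimecast.com]:25
```

    The square brackets tell Postfix to connect to that host directly rather than performing an MX lookup on it.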

    Protection level:
    Advanced DLP: Identifies and prevents sensitive data leaks.
    Impersonation and Threat Protection: Blocks malware, phishing, and abuse from compromised internal accounts.
    Email Encryption and Secure Messaging: Applies encryption policies or routes messages via secure portals.

    Regulatory Compliance: Enforces outbound compliance rules based on content, recipient, or metadata.
    SPF/DKIM/DMARC impact:

    SPF: Your SPF record must include Mimecast’s SPF mechanism based on your region to avoid SPF failures.
    DKIM: A new DKIM record should be configured to make sure your emails are DKIM signed when routing through Mimecast.
    DMARC: With correct SPF and DKIM setup, Mimecast ensures DMARC alignment, maintaining your domain’s sending reputation. Please refer to the steps in this detailed article to set up SPF and DKIM for Mimecast.
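    As a hypothetical illustration of those two DNS changes (the exact include hostname depends on your Mimecast region, and the DKIM selector and public key come from the Mimecast console, so treat every value below as a placeholder):

```
; SPF: authorize Mimecast's region-specific netblocks
example.com.                      IN TXT "v=spf1 include:us._netblocks.mimecast.com ~all"
; DKIM: publish the public key generated for your chosen selector
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBg..."
```

    With both records in place, mail relayed through Mimecast can pass SPF and carry a valid DKIM signature, keeping DMARC aligned.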

    2. API Integration: Mimecast’s APIs complement the main gateway by providing automation, reporting, and management tools rather than handling live outbound mail flow. They allow you to manage policies, export logs, search archived emails, and sync users.
    APIs enhance visibility and operational tasks but do not provide real-time filtering or blocking of outbound messages. Since APIs don’t process live mail, they have no direct effect on SPF, DKIM, or DMARC; those depend on your gateway setup.
    Barracuda – Outbound Handling and Integration Methods
    Outbound Logic
    Barracuda analyzes outbound emails to prevent data loss, block malware, stop phishing/impersonation attempts from compromised internal accounts, and ensure compliance. Barracuda offers flexible deployment options, including both traditional gateway and API-based integrations. While both contribute to outbound security, their roles are distinct.
    Integration Methods
    1. Gateway Integration – Primary Inline Security

    How it works: All outbound emails pass through Barracuda’s security stack for real-time inspection, threat blocking, and policy enforcement before delivery.
    Protection level:

    Comprehensive DLP 
    Outbound spam and virus filtering 
    Enforcement of compliance and content policies

    This approach offers a high level of control and immediate threat mitigation on outbound mail flow.

    SPF/DKIM/DMARC impact:

    SPF: Update SPF records to include Barracuda’s sending IPs or SPF include mechanism.
    DKIM: Currently, no explicit setup is needed; DKIM of the main sending source is preserved.

    Refer to this article for more comprehensive guidance on Barracuda SEG configuration.
    2. API Integration

    How it works: The API accesses cloud email environments to analyze historical and real-time data, learning normal communication patterns to detect anomalies in outbound emails. It also supports post-delivery remediation, enabling the removal of malicious emails from internal mailboxes after sending.
    Protection level: Advanced AI-driven detection and near real-time blocking of outbound threats, plus strong post-delivery cleanup capabilities.
    SPF/DKIM/DMARC impact: Since mail is sent directly by the original mail server, SPF and DKIM signatures remain intact, preserving DMARC alignment and domain reputation.

    Cisco Secure Email – Outbound Handling and Integration Methods
    Outbound Logic
    Cisco Secure Email protects outbound email by preventing data loss, blocking spam and malware from internal accounts, stopping business email compromise (BEC) and impersonation attacks, and ensuring compliance. Cisco provides both traditional gateway appliances/cloud gateways and modern API-based solutions for layered outbound security.
    Integration Methods
    1. Gateway Integration – Cisco Secure Email Gateway

    How it works: Organizations update MX records to route mail through the Cisco Secure Email Gateway or configure their mail server to smart host outbound email via the gateway. All outbound mail is inspected and policies enforced before delivery.
    Protection level:

    Granular DLP
    Outbound spam and malware filtering to protect IP reputation
    Email encryption for sensitive outbound messages
    Comprehensive content and attachment policy enforcement

    SPF: Check this article for comprehensive guidance on Cisco SPF settings.
    DKIM: Refer to this article for detailed guidance on Cisco DKIM settings.

    2. API Integration – Cisco Secure Email Threat Defense

    How it works: Integrates directly via API with Microsoft 365, continuously monitoring email metadata, content, and user behavior across inbound, outbound, and internal messages. Leverages Cisco’s threat intelligence and AI to detect anomalous outbound activity linked to BEC, account takeover, and phishing.
    Post-Delivery Remediation: Automates the removal or quarantine of malicious or policy-violating emails from mailboxes even after sending.
    Protection level: Advanced, AI-driven detection of sophisticated outbound threats with real-time monitoring and automated remediation. Complements gateway filtering by adding cloud-native visibility and swift post-send action.
    SPF/DKIM/DMARC impact: Since emails are sent directly by the original mail server, SPF and DKIM signatures remain intact, preserving DMARC alignment and domain reputation.

    If you have any questions or need assistance, feel free to reach out to EasyDMARC technical support.
    #understanding #relationship #between #security #gateways
    Understanding the Relationship Between Security Gateways and DMARC
    Email authentication protocols like SPF, DKIM, and DMARC play a critical role in protecting domains from spoofing and phishing. However, when SEGs are introduced into the email path, the interaction with these protocols becomes more complex. Security gatewaysare a core part of many organizations’ email infrastructure. They act as intermediaries between the public internet and internal mail systems, inspecting, filtering, and routing messages. This blog examines how security gateways handle SPF, DKIM, and DMARC, with real-world examples from popular gateways such as Proofpoint, Mimecast, and Avanan. We’ll also cover best practices for maintaining authentication integrity and avoiding misconfigurations that can compromise email authentication or lead to false DMARC failures. Security gateways often sit at the boundary between your organization and the internet, managing both inbound and outbound email traffic. Their role affects how email authentication protocols behave. An inbound SEG examines emails coming into your organization. It checks SPF, DKIM, and DMARC to determine if the message is authentic and safe before passing it to your internal mail servers. An outbound SEG handles emails sent from your domain. It may modify headers, rewrite envelope addresses, or even apply DKIM signing. All of these can impact SPF,  DKIM, or DMARC validation on the recipient’s side. Understanding how SEGs influence these flows is crucial to maintaining proper authentication and avoiding unexpected DMARC failures. Inbound Handling of SPF, DKIM, and DMARC by Common Security Gateways When an email comes into your organization, your security gateway is the first to inspect it. It checks whether the message is real, trustworthy, and properly authenticated. Let’s look at how different SEGs handle these checks. AvananSPF: Avanan verifies whether the sending server is authorized to send emails for the domain by checking the SPF record. 
DKIM: It verifies if the message was signed by the sending domain and if that signature is valid. DMARC: It uses the results of the SPF and DKIM check to evaluate DMARC. However, final enforcement usually depends on how DMARC is handled by Microsoft 365 or Gmail, as Avanan integrates directly with them. Avanan offers two methods of integration:1. API integration: Avanan connects via APIs, no change in MX, usually Monitor or Detect modes.2. Inline integration: Avanan is placed inline in the mail flow, actively blocking or remediating threats. Proofpoint Email Protection SPF: Proofpoint checks SPF to confirm the sender’s IP is authorized to send on behalf of the domain. You can set custom rules. DKIM: It verifies DKIM signatures and shows clear pass/fail results in logs. DMARC: It fully evaluates DMARC by combining SPF and DKIM results with alignment checks. Administrators can configure how to handle messages that fail DMARC, such as rejecting, quarantining, or delivering them. Additionally, Proofpoint allows whitelisting specific senders you trust, even if their emails fail authentication checks. Integration Methods Inline Mode: In this traditional deployment, Proofpoint is positioned directly in the email flow by modifying MX records. Emails are routed through Proofpoint’s infrastructure, allowing it to inspect and filter messages before they reach the recipient’s inbox. This mode provides pre-delivery protection and is commonly used in on-premises or hybrid environments. API-BasedMode: Proofpoint offers API-based integration, particularly with cloud email platforms like Microsoft 365 and Google Workspace. In this mode, Proofpoint connects to the email platform via APIs, enabling it to monitor and remediate threats post-delivery without altering the email flow. This approach allows for rapid deployment and seamless integration with existing cloud email services. 
Mimecast SPF: Mimecast performs SPF checks to verify whether the sending server is authorized by the domain’s SPF record. Administrators can configure actions for SPF failures, including block, quarantine, permit, or tag with a warning. This gives flexibility in balancing security with business needs. DKIM: It validates DKIM signatures by checking that the message was correctly signed by the sending domain and that the content hasn’t been tampered with. If the signature fails, Mimecast can take actions based on your configured policies. DMARC: It fully evaluates DMARC by combining the results of SPF and DKIM with domain alignment checks. You can choose to honor the sending domain’s DMARC policyor apply custom rules, for example, quarantining or tagging messages that fail DMARC regardless of the published policy. This allows more granular control for businesses that want to override external domain policies based on specific contexts. Integration Methods Inline Deployment: Mimecast is typically deployed as a cloud-based secure email gateway. Organizations update their domain’s MX records to point to Mimecast, so all inboundemails pass through it first. This allows Mimecast to inspect, filter, and process emails before delivery, providing robust protection. API Integrations: Mimecast also offers API-based services through its Mimecast API platform, primarily for management, archival, continuity, and threat intelligence purposes. However, API-only email protection is not Mimecast’s core model. Instead, the APIs are used to enhance the inline deployment, not replace it. Barracuda Email Security Gateway SPF: Barracuda checks the sender’s IP against the domain’s published SPF record. If the check fails, you can configure the system to block, quarantine, tag, or allow the message, depending on your policy preferences. DKIM: It validates whether the incoming message includes a valid DKIM signature. 
The outcome is logged and used to inform further policy decisions or DMARC evaluations. DMARC: It combines SPF and DKIM results, checks for domain alignment, and applies the DMARC policy defined by the sender. Administrators can also choose to override the DMARC policy, allowing messages to pass or be treated differently based on organizational needs. Integration Methods Inline mode: Barracuda Email Security Gateway is commonly deployed inline by updating your domain’s MX records to point to Barracuda’s cloud or on-premises gateway. This ensures that all inbound emails pass through Barracuda first for filtering and SPF, DKIM, and DMARC validation before being delivered to your mail servers. Deployment Behind the Corporate Firewall: Alternatively, Barracuda can be deployed in transparent or bridge mode without modifying MX records. In this setup, the gateway is placed inline at the network level, such as behind a firewall, and intercepts mail traffic transparently. This method is typically used in complex on-premises environments where changing DNS records is not feasible. Cisco Secure EmailCisco Secure Email acts as an inline gateway for inbound email, usually requiring your domain’s MX records to point to the Cisco Email Security Appliance or cloud service. SPF: Cisco Secure Email verifies whether the sending server is authorized in the sender domain’s SPF record. Administrators can set detailed policies on how to handle SPF failures. DKIM: It validates the DKIM signature on incoming emails and logs whether the signature is valid or has failed. DMARC: It evaluates DMARC by combining SPF and DKIM results along with domain alignment checks. Admins can configure specific actions, such as quarantine, reject, or tag, based on different failure scenarios or trusted sender exceptions. Integration methods On-premises Email Security Appliance: You deploy Cisco’s hardware or virtual appliance inline, updating MX records to route mail through it for filtering. 
Cisco Cloud Email Security: Cisco offers a cloud-based email security service where MX records are pointed to Cisco’s cloud infrastructure, which filters and processes inbound mail. Cisco Secure Email also offers advanced, rule-based filtering capabilities and integrates with Cisco’s broader threat protection ecosystem, enabling comprehensive inbound email security. Outbound Handling of SPF, DKIM, and DMARC by Common Security Gateways When your organization sends emails, security gateways can play an active role in processing and authenticating those messages. Depending on the configuration, a gateway might rewrite headers, re-sign messages, or route them through different IPs – all actions that can help or hurt the authentication process. Let’s look at how major SEGs handle outbound email flow. Avanan – Outbound Handling and Integration Methods Outbound Logic Avanan analyzes outbound emails primarily to detect data loss, malware, and policy violations. In API-based integration, emails are sent directly by the original mail server, so SPF and DKIM signatures remain intact. Avanan does not alter the message or reroute traffic, which helps maintain full DMARC alignment and domain reputation. Integration Methods 1. API Integration: Connects to Microsoft 365 or Google Workspace via API. No MX changes are needed. Emails are scanned after they are sent, with no modification to SPF, DKIM, or the delivery path.  How it works: Microsoft Graph API or Google Workspace APIs are used to monitor and intervene in outbound emails. Protection level: Despite no MX changes, it can offer inline-like protection, meaning it can block, quarantine, or encrypt emails before they are delivered externally. SPF/DKIM/DMARC impact: Preserves original headers and signatures since mail is sent directly from Microsoft/Google servers. 2. Inline Integration: Requires changing MX records to route email through Avanan. In this mode, Avanan can intercept and inspect outbound emails before delivery. 
Depending on the configuration, this may affect SPF or DKIM if not properly handled. How it works: Requires adding Avanan’s Protection level: Traditional inline security with full visibility and control, including encryption, DLP, policy enforcement, and advanced threat protection. SPF/DKIM/DMARC impact: SPF configuration is needed by adding Avanan’s include mechanism to the sending domain’s SPF record. The DKIM record of the original sending source is preserved. For configurations, you can refer to the steps in this blog. Proofpoint – Outbound Handling and Integration Methods Outbound Logic Proofpoint analyzes outbound emails to detect and prevent data loss, to identify advanced threatsoriginating from compromised internal accounts, and to ensure compliance. Their API integration provides crucial visibility and powerful remediation capabilities, while their traditional gatewaydeployment delivers true inline, pre-delivery blocking for outbound traffic. Integration methods 1. API Integration: No MX record changes are required for this deployment method. Integration is done with Microsoft 365 or Google Workspace. How it works: Through its API integration, Proofpoint gains deep visibility into outbound emails and provides layered security and response features, including: Detect and alert: Identifies sensitive content, malicious attachments, or suspicious links in outbound emails. Post-delivery remediation: A key capability of the API model is Threat Response Auto-Pull, which enables Proofpoint to automatically recall, quarantine, or delete emails after delivery. This is particularly useful for internally sent messages or those forwarded to other users. Enhanced visibility: Aggregates message metadata and logs into Proofpoint’s threat intelligence platform, giving security teams a centralized view of outbound risks and user behavior. 
Protection level: API-based integration provides strong post-delivery detection and response, as well as visibility into DLP incidents and suspicious behavior.  SPF/DKIM/DMARC impact: Proofpoint does not alter SPF, DKIM, or DMARC because emails are sent directly through Microsoft or Google servers. Since Proofpoint’s servers are not involved in the actual sending process, the original authentication headers remain intact. 2. Gateway Integration: This method requires updating MX records or routing outbound mail through Proofpoint via a smart host. How it works: Proofpoint acts as an inline gateway, inspecting emails before delivery. Inbound mail is filtered via MX changes; outbound mail is relayed through Proofpoint’s servers. Threat and DLP filtering: Scans outbound messages for sensitive content, malware, and policy violations. Real-time enforcement: Blocks, encrypts, or quarantines emails before they’re delivered. Policy controls: Applies rules based on content, recipient, or behavior. Protection level: Provides strong, real-time protection for outbound traffic with pre-delivery enforcement, DLP, and encryption. SPF/DKIM/DMARC impact: Proofpoint becomes the sending server: SPF: You need to configure ProofPoint’s SPF. DKIM: Can sign messages; requires DKIM setup. DMARC: DMARC passes if SPF and DKIM are set up properly. Please refer to this article to configure SPF and DKIM for ProofPoint. Mimecast – Outbound Handling and Integration Methods Outbound Logic Mimecast inspects outbound emails to prevent data loss, detect internal threats such as malware and impersonation, and ensure regulatory compliance. It primarily functions as a Secure Email Gateway, meaning it sits directly in the outbound email flow. While Mimecast offers APIs, its core outbound protection is built around this inline gateway model. Integration Methods 1. Gateway IntegrationThis is Mimecast’s primary method for outbound email protection. 
Organizations route their outbound traffic through Mimecast by configuring their email serverto use Mimecast as a smart host. This enables Mimecast to inspect and enforce policies on all outgoing emails in real time. How it works: Updating outbound routing in your email system, or Using Mimecast SMTP relay to direct messages through their infrastructure. Mimecast then scans, filters, and applies policies before the email reaches the final recipient. Protection level: Advanced DLP: Identifies and prevents sensitive data leaks. Impersonation and Threat Protection: Blocks malware, phishing, and abuse from compromised internal accounts. Email Encryption and Secure Messaging: Applies encryption policies or routes messages via secure portals. Regulatory Compliance: Enforces outbound compliance rules based on content, recipient, or metadata. SPF/DKIM/DMARC impact: SPF: Your SPF record must include Mimecast’s SPF mechanism based on your region to avoid SPF failures. DKIM: A new DKIM record should be configured to make sure your emails are DKIM signed when routing through Mimecast. DMARC: With correct SPF and DKIM setup, Mimecast ensures DMARC alignment, maintaining your domain’s sending reputation. Please refer to the steps in this detailed article to set up SPF and DKIM for Mimecast. 2. API IntegrationMimecast’s APIs complement the main gateway by providing automation, reporting, and management tools rather than handling live outbound mail flow. They allow you to manage policies, export logs, search archived emails, and sync users. APIs enhance visibility and operational tasks but do not provide real-time filtering or blocking of outbound messages. Since APIs don’t process live mail, they have no direct effect on SPF, DKIM, or DMARC; those depend on your gatewaysetup. 
    Understanding the Relationship Between Security Gateways and DMARC
    easydmarc.com
    Email authentication protocols like SPF, DKIM, and DMARC play a critical role in protecting domains from spoofing and phishing. Security gateways (SEGs) are a core part of many organizations’ email infrastructure: they act as intermediaries between the public internet and internal mail systems, inspecting, filtering, and routing messages. However, when SEGs are introduced into the email path, the interaction with these protocols becomes more complex.

    This blog examines how security gateways handle SPF, DKIM, and DMARC, with real-world examples from popular gateways such as Proofpoint, Mimecast, and Avanan. We’ll also cover best practices for maintaining authentication integrity and avoiding misconfigurations that can compromise email authentication or lead to false DMARC failures.

    Security gateways often sit at the boundary between your organization and the internet, managing both inbound and outbound email traffic, and their role affects how email authentication protocols behave. An inbound SEG examines emails coming into your organization: it checks SPF, DKIM, and DMARC to determine whether a message is authentic and safe before passing it to your internal mail servers. An outbound SEG handles emails sent from your domain: it may modify headers, rewrite envelope addresses, or even apply DKIM signing, all of which can affect SPF, DKIM, or DMARC validation on the recipient’s side. Understanding how SEGs influence these flows is crucial to maintaining proper authentication and avoiding unexpected DMARC failures.

    Inbound Handling of SPF, DKIM, and DMARC by Common Security Gateways

    When an email comes into your organization, your security gateway is the first to inspect it. It checks whether the message is real, trustworthy, and properly authenticated. Let’s look at how different SEGs handle these checks.

    Avanan (by Check Point)

    SPF: Avanan verifies whether the sending server is authorized to send emails for the domain by checking the SPF record.
    DKIM: It verifies whether the message was signed by the sending domain and whether that signature is valid.

    DMARC: It uses the results of the SPF and DKIM checks to evaluate DMARC. However, final enforcement usually depends on how DMARC is handled by Microsoft 365 or Gmail, as Avanan integrates directly with them.

    Avanan offers two methods of integration:
    1. API integration: Avanan connects via APIs, with no change to MX records, usually in Monitor or Detect mode.
    2. Inline integration: Avanan is placed inline in the mail flow (MX records changed), actively blocking or remediating threats.

    Proofpoint Email Protection

    SPF: Proofpoint checks SPF to confirm the sender’s IP is authorized to send on behalf of the domain. You can set custom rules (e.g., treat “softfail” as “fail”).

    DKIM: It verifies DKIM signatures and shows clear pass/fail results in logs.

    DMARC: It fully evaluates DMARC by combining SPF and DKIM results with alignment checks. Administrators can configure how to handle messages that fail DMARC, such as rejecting, quarantining, or delivering them. Additionally, Proofpoint allows whitelisting specific senders you trust, even if their emails fail authentication checks.

    Integration Methods

    Inline Mode: In this traditional deployment, Proofpoint is positioned directly in the email flow by modifying MX records. Emails are routed through Proofpoint’s infrastructure, allowing it to inspect and filter messages before they reach the recipient’s inbox. This mode provides pre-delivery protection and is commonly used in on-premises or hybrid environments.

    API-Based (Integrated Cloud Email Security, ICES) Mode: Proofpoint offers API-based integration, particularly with cloud email platforms like Microsoft 365 and Google Workspace. In this mode, Proofpoint connects to the email platform via APIs, enabling it to monitor and remediate threats post-delivery without altering the email flow. This approach allows for rapid deployment and seamless integration with existing cloud email services.
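Inbound gateways like these typically record their verdicts in an Authentication-Results header (RFC 8601) that downstream filters and administrators can read. As a rough sketch, not a full RFC 8601 parser (real headers carry more properties and comments than shown here), the three verdicts can be pulled out with Python's standard library:

```python
import re

def parse_authentication_results(header: str) -> dict:
    """Pull the spf/dkim/dmarc verdicts out of an Authentication-Results header."""
    verdicts = {}
    for mechanism in ("spf", "dkim", "dmarc"):
        # Each mechanism reports as e.g. "spf=pass", "dkim=fail", "dmarc=none".
        match = re.search(rf"\b{mechanism}=([a-z]+)", header)
        if match:
            verdicts[mechanism] = match.group(1)
    return verdicts

# Simplified example header (hostnames are illustrative).
header = ("mx.example.com; spf=pass smtp.mailfrom=newsletter.example.com; "
          "dkim=pass header.d=example.com; dmarc=pass header.from=example.com")
print(parse_authentication_results(header))  # {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'pass'}
```

The same approach applies to a partial header: a gateway that only evaluated SPF would yield just that key.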
    Mimecast

    SPF: Mimecast performs SPF checks to verify whether the sending server is authorized by the domain’s SPF record. Administrators can configure actions for SPF failures, including block, quarantine, permit, or tag with a warning. This gives flexibility in balancing security with business needs.

    DKIM: It validates DKIM signatures by checking that the message was correctly signed by the sending domain and that the content hasn’t been tampered with. If the signature fails, Mimecast can take action based on your configured policies.

    DMARC: It fully evaluates DMARC by combining the results of SPF and DKIM with domain alignment checks. You can choose to honor the sending domain’s DMARC policy (none, quarantine, reject) or apply custom rules, for example, quarantining or tagging messages that fail DMARC regardless of the published policy. This allows more granular control for businesses that want to override external domain policies based on specific contexts.

    Integration Methods

    Inline Deployment: Mimecast is typically deployed as a cloud-based secure email gateway. Organizations update their domain’s MX records to point to Mimecast, so all inbound (and optionally outbound) emails pass through it first. This allows Mimecast to inspect, filter, and process emails before delivery, providing robust protection.

    API Integrations: Mimecast also offers API-based services through its Mimecast API platform, primarily for management, archival, continuity, and threat intelligence purposes. However, API-only email protection is not Mimecast’s core model; the APIs are used to enhance the inline deployment, not replace it.

    Barracuda Email Security Gateway

    SPF: Barracuda checks the sender’s IP against the domain’s published SPF record. If the check fails, you can configure the system to block, quarantine, tag, or allow the message, depending on your policy preferences.

    DKIM: It validates whether the incoming message includes a valid DKIM signature.
    The outcome is logged and used to inform further policy decisions or DMARC evaluations.

    DMARC: It combines SPF and DKIM results, checks for domain alignment, and applies the DMARC policy defined by the sender. Administrators can also choose to override the DMARC policy, allowing messages to pass or be treated differently based on organizational needs (e.g., trusted senders or internal exceptions).

    Integration Methods

    Inline Mode (more common and straightforward): Barracuda Email Security Gateway is commonly deployed inline by updating your domain’s MX records to point to Barracuda’s cloud or on-premises gateway. This ensures that all inbound emails pass through Barracuda first for filtering and SPF, DKIM, and DMARC validation before being delivered to your mail servers.

    Deployment Behind the Corporate Firewall: Alternatively, Barracuda can be deployed in transparent or bridge mode without modifying MX records. In this setup, the gateway is placed inline at the network level, such as behind a firewall, and intercepts mail traffic transparently. This method is typically used in complex on-premises environments where changing DNS records is not feasible.

    Cisco Secure Email (formerly IronPort)

    Cisco Secure Email acts as an inline gateway for inbound email, usually requiring your domain’s MX records to point to the Cisco Email Security Appliance or cloud service.

    SPF: Cisco Secure Email verifies whether the sending server is authorized in the sender domain’s SPF record. Administrators can set detailed policies on how to handle SPF failures.

    DKIM: It validates the DKIM signature on incoming emails and logs whether the signature is valid or has failed.

    DMARC: It evaluates DMARC by combining SPF and DKIM results along with domain alignment checks. Admins can configure specific actions, such as quarantine, reject, or tag, based on different failure scenarios or trusted sender exceptions.
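The DMARC evaluation these gateways perform, combining the SPF and DKIM results with domain alignment against the From: address, can be sketched in a few lines of Python. This is a simplified illustration: the `org_domain` helper below uses a naive two-label rule, whereas real validators consult the Public Suffix List:

```python
def org_domain(domain: str) -> str:
    # Naive organizational-domain rule: keep the last two labels.
    # Production validators use the Public Suffix List (e.g. example.co.uk).
    return ".".join(domain.lower().rstrip(".").split(".")[-2:])

def dmarc_passes(from_domain: str,
                 spf_result: str, spf_domain: str,
                 dkim_result: str, dkim_domain: str,
                 alignment: str = "relaxed") -> bool:
    """DMARC passes if SPF or DKIM passes AND that identifier aligns with
    the From: domain (relaxed = same org domain, strict = exact match)."""
    def aligned(auth_domain: str) -> bool:
        if alignment == "strict":
            return auth_domain.lower() == from_domain.lower()
        return org_domain(auth_domain) == org_domain(from_domain)
    return ((spf_result == "pass" and aligned(spf_domain)) or
            (dkim_result == "pass" and aligned(dkim_domain)))

# Mail relayed by a gateway that doesn't re-sign: SPF passes, but for the
# gateway's own domain, so nothing aligns and DMARC fails.
print(dmarc_passes("example.com", "pass", "seg-vendor.example", "fail", ""))  # False
# Mail sent directly by the original server: the SPF domain aligns.
print(dmarc_passes("example.com", "pass", "mail.example.com", "fail", ""))    # True
```

The first example is exactly the misconfiguration the outbound sections below warn about: a relay that changes the envelope sender without adding its own aligned DKIM signature breaks DMARC even though SPF technically passes.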
    Integration Methods

    On-premises Email Security Appliance (ESA): You deploy Cisco’s hardware or virtual appliance inline, updating MX records to route mail through it for filtering.

    Cisco Cloud Email Security: Cisco offers a cloud-based email security service where MX records are pointed to Cisco’s cloud infrastructure, which filters and processes inbound mail.

    Cisco Secure Email also offers advanced, rule-based filtering capabilities and integrates with Cisco’s broader threat protection ecosystem, enabling comprehensive inbound email security.

    Outbound Handling of SPF, DKIM, and DMARC by Common Security Gateways

    When your organization sends emails, security gateways can play an active role in processing and authenticating those messages. Depending on the configuration, a gateway might rewrite headers, re-sign messages, or route them through different IPs, all actions that can help or hurt the authentication process. Let’s look at how major SEGs handle outbound email flow.

    Avanan – Outbound Handling and Integration Methods

    Outbound Logic: Avanan analyzes outbound emails primarily to detect data loss, malware, and policy violations. In API-based integration, emails are sent directly by the original mail server (e.g., Microsoft 365 or Google Workspace), so SPF and DKIM signatures remain intact. Avanan does not alter the message or reroute traffic, which helps maintain full DMARC alignment and domain reputation.

    Integration Methods

    1. API Integration: Connects to Microsoft 365 or Google Workspace via API. No MX changes are needed. Emails are scanned after they are sent, with no modification to SPF, DKIM, or the delivery path.

    How it works: Microsoft Graph API or Google Workspace APIs are used to monitor and intervene in outbound emails.

    Protection level: Despite no MX changes, it can offer inline-like protection, meaning it can block, quarantine, or encrypt emails before they are delivered externally.
    SPF/DKIM/DMARC impact: Preserves original headers and signatures, since mail is sent directly from Microsoft/Google servers.

    2. Inline Integration: Requires changing MX records to route email through Avanan. In this mode, Avanan can intercept and inspect outbound emails before delivery. Depending on the configuration, this may affect SPF or DKIM if not properly handled.

    How it works: Outbound mail is routed through Avanan’s infrastructure before delivery.

    Protection level: Traditional inline security with full visibility and control, including encryption, DLP, policy enforcement, and advanced threat protection.

    SPF/DKIM/DMARC impact: SPF configuration is needed by adding Avanan’s include mechanism to the sending domain’s SPF record. The DKIM record of the original sending source is preserved. For configuration steps, you can refer to this blog.

    Proofpoint – Outbound Handling and Integration Methods

    Outbound Logic: Proofpoint analyzes outbound emails to detect and prevent data loss (DLP), to identify advanced threats (malware, phishing, BEC) originating from compromised internal accounts, and to ensure compliance. Its API integration provides crucial visibility and powerful remediation capabilities, while its traditional gateway (MX record) deployment delivers true inline, pre-delivery blocking for outbound traffic.

    Integration Methods

    1. API Integration: No MX record changes are required for this deployment method. Integration is done with Microsoft 365 or Google Workspace.

    How it works: Through its API integration, Proofpoint gains deep visibility into outbound emails and provides layered security and response features, including:

    Detect and alert: Identifies sensitive content (Data Loss Prevention violations), malicious attachments, or suspicious links in outbound emails.

    Post-delivery remediation (TRAP): A key capability of the API model is Threat Response Auto-Pull (TRAP), which enables Proofpoint to automatically recall, quarantine, or delete emails after delivery.
    This is particularly useful for internally sent messages or those forwarded to other users.

    Enhanced visibility: Aggregates message metadata and logs into Proofpoint’s threat intelligence platform, giving security teams a centralized view of outbound risks and user behavior.

    Protection level: API-based integration provides strong post-delivery detection and response, as well as visibility into DLP incidents and suspicious behavior.

    SPF/DKIM/DMARC impact: Proofpoint does not alter SPF, DKIM, or DMARC because emails are sent directly through Microsoft or Google servers. Since Proofpoint’s servers are not involved in the actual sending process, the original authentication headers remain intact.

    2. Gateway Integration (MX Record/Smart Host): This method requires updating MX records or routing outbound mail through Proofpoint via a smart host.

    How it works: Proofpoint acts as an inline gateway, inspecting emails before delivery. Inbound mail is filtered via MX changes; outbound mail is relayed through Proofpoint’s servers.

    Threat and DLP filtering: Scans outbound messages for sensitive content, malware, and policy violations.
    Real-time enforcement: Blocks, encrypts, or quarantines emails before they’re delivered.
    Policy controls: Applies rules based on content, recipient, or behavior.

    Protection level: Provides strong, real-time protection for outbound traffic with pre-delivery enforcement, DLP, and encryption.

    SPF/DKIM/DMARC impact: Proofpoint becomes the sending server.
    SPF: You need to configure Proofpoint’s SPF.
    DKIM: Proofpoint can sign messages; this requires DKIM setup.
    DMARC: DMARC passes if SPF and DKIM are set up properly.
    Please refer to this article to configure SPF and DKIM for Proofpoint.

    Mimecast – Outbound Handling and Integration Methods

    Outbound Logic: Mimecast inspects outbound emails to prevent data loss (DLP), detect internal threats such as malware and impersonation, and ensure regulatory compliance.
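When a gateway becomes the sending server, as in the gateway deployments above, the domain's SPF record must list that gateway or recipients will see SPF failures. A small sketch of checking and extending an SPF record string (the `_spf.gateway.example` include host is hypothetical; use the mechanism your vendor documents for your region):

```python
def has_include(spf_record: str, include_host: str) -> bool:
    """True if the SPF record already lists the given include mechanism."""
    return f"include:{include_host}" in spf_record.split()

def add_include(spf_record: str, include_host: str) -> str:
    """Insert an include mechanism just before the trailing 'all' term,
    leaving the record unchanged if the include is already present."""
    terms = spf_record.split()
    if f"include:{include_host}" in terms:
        return spf_record
    return " ".join(terms[:-1] + [f"include:{include_host}", terms[-1]])

# Hypothetical record and gateway include host, for illustration only.
record = "v=spf1 include:spf.protection.outlook.com -all"
print(add_include(record, "_spf.gateway.example"))
# v=spf1 include:spf.protection.outlook.com include:_spf.gateway.example -all
```

Keep in mind that real SPF records are also bound by the 10-DNS-lookup limit (RFC 7208), so each added include should be weighed against that budget.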
    It primarily functions as a Secure Email Gateway (SEG), meaning it sits directly in the outbound email flow. While Mimecast offers APIs, its core outbound protection is built around this inline gateway model.

    Integration Methods

    1. Gateway Integration (MX record change required): This is Mimecast’s primary method for outbound email protection. Organizations route their outbound traffic through Mimecast by configuring their email server (e.g., Microsoft 365, Google Workspace) to use Mimecast as a smart host. This enables Mimecast to inspect and enforce policies on all outgoing emails in real time.

    How it works: By updating outbound routing in your email system (smart host settings), or by using Mimecast SMTP relay to direct messages through their infrastructure. Mimecast then scans, filters, and applies policies before the email reaches the final recipient.

    Protection level:
    Advanced DLP: Identifies and prevents sensitive data leaks.
    Impersonation and Threat Protection: Blocks malware, phishing, and abuse from compromised internal accounts.
    Email Encryption and Secure Messaging: Applies encryption policies or routes messages via secure portals.
    Regulatory Compliance: Enforces outbound compliance rules based on content, recipient, or metadata.

    SPF/DKIM/DMARC impact:
    SPF: Your SPF record must include Mimecast’s SPF mechanism for your region to avoid SPF failures.
    DKIM: A new DKIM record should be configured to make sure your emails are DKIM-signed when routing through Mimecast.
    DMARC: With correct SPF and DKIM setup, Mimecast ensures DMARC alignment, maintaining your domain’s sending reputation.
    Please refer to the steps in this detailed article to set up SPF and DKIM for Mimecast.

    2. API Integration (complementary to the gateway): Mimecast’s APIs complement the main gateway by providing automation, reporting, and management tools rather than handling live outbound mail flow. They allow you to manage policies, export logs, search archived emails, and sync users.
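Smart-host routing as described above simply means the mail server hands outbound messages to the gateway over SMTP instead of delivering them directly to each recipient's MX. A minimal Python sketch using the standard library (the smart-host hostname and credentials below are placeholders, not real gateway endpoints):

```python
import smtplib
from email.message import EmailMessage

def build_message(sender: str, recipient: str, subject: str, body: str) -> EmailMessage:
    """Compose a simple outbound message."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_via_smarthost(msg: EmailMessage,
                       host: str = "smarthost.gateway.example",  # placeholder host
                       port: int = 587,
                       user: str = "", password: str = "") -> None:
    # Instead of resolving the recipient's MX and delivering directly,
    # hand the message to the gateway, which scans it and relays it on.
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()
        if user and password:
            smtp.login(user, password)
        smtp.send_message(msg)

msg = build_message("alice@example.com", "bob@example.org",
                    "Quarterly report", "Figures attached.")
# send_via_smarthost(msg)  # would require a reachable smart host
print(msg["Subject"])  # Quarterly report
```

Because the gateway, not the original server, now performs the final delivery, this is precisely the setup where the SPF and DKIM changes described in the vendor sections become necessary.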
    The APIs enhance visibility and operational tasks but do not provide real-time filtering or blocking of outbound messages. Since they don’t process live mail, they have no direct effect on SPF, DKIM, or DMARC; those depend on your gateway (smart host) setup.

    Barracuda – Outbound Handling and Integration Methods

    Outbound Logic: Barracuda analyzes outbound emails to prevent data loss (DLP), block malware, stop phishing/impersonation attempts from compromised internal accounts, and ensure compliance. Barracuda offers flexible deployment options, including both traditional gateway (MX record) and API-based integrations. While both contribute to outbound security, their roles are distinct.

    Integration Methods

    1. Gateway Integration (MX Record / Smart Host) – Primary Inline Security

    How it works: All outbound emails pass through Barracuda’s security stack for real-time inspection, threat blocking, and policy enforcement before delivery.

    Protection level:
    Comprehensive DLP (blocking, encrypting, or quarantining sensitive content)
    Outbound spam and virus filtering
    Enforcement of compliance and content policies
    This approach offers a high level of control and immediate threat mitigation on outbound mail flow.

    SPF/DKIM/DMARC impact:
    SPF: Update SPF records to include Barracuda’s sending IPs or SPF include mechanism.
    DKIM: Currently, no explicit setup is needed; the DKIM signature of the main sending source is preserved.
    Refer to this article for more comprehensive guidance on Barracuda SEG configuration.

    2. API Integration (Complementary and Advanced Threat Focus)

    How it works: The API accesses cloud email environments to analyze historical and real-time data, learning normal communication patterns to detect anomalies in outbound emails. It also supports post-delivery remediation, enabling the removal of malicious emails from internal mailboxes after sending.
    Protection level: Advanced AI-driven detection and near real-time blocking of outbound threats, plus strong post-delivery cleanup capabilities.

    SPF/DKIM/DMARC impact: Since mail is sent directly by the original mail server (e.g., Microsoft 365), SPF and DKIM signatures remain intact, preserving DMARC alignment and domain reputation.

    Cisco Secure Email (formerly IronPort) – Outbound Handling and Integration Methods

    Outbound Logic: Cisco Secure Email protects outbound email by preventing data loss (DLP), blocking spam and malware from internal accounts, stopping business email compromise (BEC) and impersonation attacks, and ensuring compliance. Cisco provides both traditional gateway appliances/cloud gateways and modern API-based solutions for layered outbound security.

    Integration Methods

    1. Gateway Integration (MX Record / Smart Host) – Cisco Secure Email Gateway (ESA)

    How it works: Organizations update MX records to route mail through the Cisco Secure Email Gateway or configure their mail server (e.g., Microsoft 365, Exchange) to smart host outbound email via the gateway. All outbound mail is inspected and policies are enforced before delivery.

    Protection level:
    Granular DLP (blocking, encrypting, quarantining sensitive content)
    Outbound spam and malware filtering to protect IP reputation
    Email encryption for sensitive outbound messages
    Comprehensive content and attachment policy enforcement

    SPF: Check this article for comprehensive guidance on Cisco SPF settings.
    DKIM: Refer to this article for detailed guidance on Cisco DKIM settings.

    2. API Integration – Cisco Secure Email Threat Defense

    How it works: Integrates directly via API with Microsoft 365 (and potentially Google Workspace), continuously monitoring email metadata, content, and user behavior across inbound, outbound, and internal messages. Leverages Cisco’s threat intelligence and AI to detect anomalous outbound activity linked to BEC, account takeover, and phishing.
    Post-delivery remediation: Automates the removal or quarantine of malicious or policy-violating emails from mailboxes even after sending.

    Protection level: Advanced, AI-driven detection of sophisticated outbound threats with real-time monitoring and automated remediation. Complements gateway filtering by adding cloud-native visibility and swift post-send action.

    SPF/DKIM/DMARC impact: Since emails are sent directly by the original mail server, SPF and DKIM signatures remain intact, preserving DMARC alignment and domain reputation.

    If you have any questions or need assistance, feel free to reach out to EasyDMARC technical support.
  • HMRC phishing breach wholly avoidable, but hard to stop

    A significant cyber breach at His Majesty’s Revenue and Customs (HMRC) that saw scammers cheat the public purse out of approximately £47m has been met with dismay from security experts thanks to the sheer simplicity of the attack, which originated via account takeover attempts on legitimate taxpayers.
    HMRC disclosed the breach to a Treasury Select Committee this week, revealing that hackers accessed the online accounts of about 100,000 people via phishing attacks and managed to claim a significant amount of money in tax rebates before being stopped.
    It is understood that those individuals affected have been contacted by HMRC – they have not personally lost any money and are not themselves in any trouble. Arrests in the case have already been made.
    During proceedings, HMRC also came in for criticism from the committee’s chair Meg Hillier, who had learned about the breach via an earlier news report on the matter, over the length of time taken to come clean over the incident.

    With phishing emails sent to unwitting taxpayers identified as the initial attack vector for the scammers, HMRC might feel relieved that it has dodged full blame for the incident.
    But according to Will Richmond-Coggan, a partner specialising in data and cyber disputes at law firm Freeths, even though the tax office had been at pains to stress its own systems were never actually compromised, the incident underscored just how widespread the consequences of cyber attacks can be – snowballing from simple origins into a multimillion-pound loss.
    “It is clear from HMRC's explanation that the crime against HMRC was only possible because of earlier data breaches and cyber attacks,” said Richmond-Coggan.
    “Those earlier attacks put personal data in the hands of the criminals which enabled them to impersonate tax payers and apply successfully to claim back tax.”

    Meanwhile, Gerasim Hovhannisyan, CEO of EasyDMARC, an email security provider, pointed out that phishing against both private individuals and businesses and other organisations had long ago moved beyond the domain of scammers chancing their luck.
    While this type of scattergun fraud remains a potent threat, particularly to consumers who may not be informed about cyber security matters, the scale of the HMRC phish surely suggests a targeted operation, likely using carefully crafted emails purporting to come from HMRC itself, designed to lure self-assessment taxpayers into handing over their accounts.
    Not only that, but generative artificial intelligence (GenAI) means targeted phishing operations have become exponentially more dangerous in a very short space of time, added Hovhannisyan.
    “[It] has made [phishing] scalable, polished, and dangerously convincing, often indistinguishable from legitimate communication. And while many organisations have strengthened their security perimeters, email remains the most consistently exploited and underestimated attack vector,” he said.
    “These scams exploit human trust, using urgency, authority, and increasingly realistic impersonation tactics. If HMRC can be phished, anyone can.”
    Added Hovhannisyan: “What’s more alarming is that the Treasury Select Committee only learned of the breach through the news. When £47m is stolen through impersonation, institutions can’t afford to stay quiet. Delayed disclosure erodes trust, stalls response, and gives attackers room to manoeuvre.”

    Once again a service’s end-users have turned out to be the source of a cyber attack and as such, whether they are internal or – as in this case – external, are often considered an organisation’s first line of defence.
    However, it is not always wise to take this approach, and for an organisation like HMRC, daily engaging with members of the public, it is also not really possible. Security education is a difficult proposition at the best of times and although the UK’s National Cyber Security Centre (NCSC) provides extensive advice and guidance on spotting and dealing with phishing emails for consumers – it also operates a phishing reporting service that as of April 2025 has received over 41 million scam reports – bodies like HMRC cannot rely on everybody having visited the NCSC’s website.
    As such, Mike Britton, chief information officer at Abnormal AI, a specialist in phishing, social engineering and account takeover prevention, argued that HMRC could and should have done more from a technical perspective.
    “Governments will always be a high tier target for cyber criminals due to the valuable information they hold. In fact, attacks against this sector are rising,” he said.
    “In this case, it looks like criminals utilised account takeover to conduct fraud. To combat this, multifactor authentication is key, but as attacks grow more sophisticated, further steps must be taken.”
    Britton said organisations like HMRC really needed to consider adopting more layered security strategies, not only including MFA but also incorporating wider visibility and unified controls across its IT systems.
    Account takeover attacks such as the ones seen in this incident can unfold quickly, he added, so its cyber function should also be equipped with the tools to identify and remediate compromised accounts on the fly.

    HMRC phishing breach wholly avoidable, but hard to stop
    www.computerweekly.com
    A significant cyber breach at His Majesty’s Revenue and Customs (HMRC) that saw scammers cheat the public purse out of approximately £47m has been met with dismay from security experts thanks to the sheer simplicity of the attack, which originated via account takeover attempts on legitimate taxpayers.

    HMRC disclosed the breach to a Treasury Select Committee this week, revealing that hackers accessed the online accounts of about 100,000 people via phishing attacks and managed to claim a significant amount of money in tax rebates before being stopped. It is understood that those individuals affected have been contacted by HMRC – they have not personally lost any money and are not themselves in any trouble. Arrests in the case have already been made.

    During proceedings, HMRC also came in for criticism from the committee’s chair, Meg Hillier, who had learned about the breach via an earlier news report on the matter, over the length of time taken to come clean over the incident.

    With phishing emails sent to unwitting taxpayers identified as the initial attack vector for the scammers, HMRC might feel relieved that it has dodged full blame for the incident. But according to Will Richmond-Coggan, a partner specialising in data and cyber disputes at law firm Freeths, even though the tax office had gone to pains to stress its own systems were never actually compromised, the incident underscored just how widespread the consequences of cyber attacks can be – snowballing from simple origins into a multimillion-pound loss.

    “It is clear from HMRC's explanation that the crime against HMRC was only possible because of earlier data breaches and cyber attacks,” said Richmond-Coggan. “Those earlier attacks put personal data in the hands of the criminals which enabled them to impersonate tax payers and apply successfully to claim back tax.”

    Meanwhile, Gerasim Hovhannisyan, CEO of EasyDMARC, an email security provider, pointed out that phishing against private individuals, businesses and other organisations had long ago moved beyond the domain of scammers chancing their luck. While this type of scattergun fraud remains a potent threat – particularly to consumers who may not be informed about cyber security matters – the scale of the HMRC phish surely suggests a targeted operation, likely using carefully crafted emails purporting to represent HMRC itself, designed to lure self-assessment taxpayers into handing over their accounts.

    Not only that, but generative artificial intelligence (GenAI) means targeted phishing operations have become exponentially more dangerous in a very short space of time, added Hovhannisyan. “[It] has made [phishing] scalable, polished, and dangerously convincing, often indistinguishable from legitimate communication. And while many organisations have strengthened their security perimeters, email remains the most consistently exploited and underestimated attack vector,” he said. “These scams exploit human trust, using urgency, authority, and increasingly realistic impersonation tactics. If HMRC can be phished, anyone can.”

    Added Hovhannisyan: “What’s more alarming is that the Treasury Select Committee only learned of the breach through the news. When £47m is stolen through impersonation, institutions can’t afford to stay quiet. Delayed disclosure erodes trust, stalls response, and gives attackers room to manoeuvre.”

    Once again, a service’s end-users have turned out to be the source of a cyber attack, and as such, whether they are internal or – as in this case – external, are often considered an organisation’s first line of defence. However, it is not always wise to take this approach, and for an organisation like HMRC, daily engaging with members of the public, it is also not really possible.

    Security education is a difficult proposition at the best of times, and although the UK’s National Cyber Security Centre (NCSC) provides extensive advice and guidance on spotting and dealing with phishing emails for consumers – it also operates a phishing reporting service that as of April 2025 has received over 41 million scam reports – bodies like HMRC cannot rely on everybody having visited the NCSC’s website.

    As such, Mike Britton, chief information officer (CIO) at Abnormal AI, a specialist in phishing, social engineering and account takeover prevention, argued that HMRC could and should have done more from a technical perspective. “Governments will always be a high-tier target for cyber criminals due to the valuable information they hold. In fact, attacks against this sector are rising,” he said. “In this case, it looks like criminals utilised account takeover to conduct fraud. To combat this, multifactor authentication (MFA) is key, but as attacks grow more sophisticated, further steps must be taken.”

    Britton said organisations like HMRC really needed to consider adopting more layered security strategies, not only including MFA but also incorporating wider visibility and unified controls across their IT systems. Account takeover attacks such as the ones seen in this incident can unfold quickly, he added, so its cyber function should also be equipped with the tools to identify and remediate compromised accounts on the fly.

    Read more about trends in phishing

    Quishing, meaning QR code phishing, is an off-putting term for an on-the-rise attack method. Learn how to defend against it.

    A healthy dose of judicious skepticism is crucial to preventing phishing attacks, said David Fine, supervisory special agent at the FBI, during a presentation at a HIMSS event.

    Exchange admins got a boost from Microsoft when it improved how it handles DMARC authentication failures, helping organisations fight back against email-based attacks on their users.
  • JAMF puts AI inside Apple device management

    When it comes to Apple, all eyes are on AI. Generative AI (genAI) is the most disruptive technology we’ve seen in years; it is weaving itself into all parts of life – so why should IT management be left unscathed? It won’t be, and the latest AI-powered IT management features within the Jamf platform will soon be the kind of tools IT expects.

    Jamf is a leading Apple-in-the-enterprise device management and security vendor (one that recently began offering enterprise support for Android devices). The company has been working away on AI features to support its solutions for some time, and has at last introduced some of these at its Jamf Nation Live event. The tools are designed to boost efficiency and support better decision-making when it comes to handling your fleets.

    Of course, you’d expect anyone fielding genAI solutions to say something like that, so what do these tools do?

    Introducing Jamf AI Assistant

    Available as a beta, AI Assistant is designed to support tech support! That means it will help IT admins find what they need and help them understand how and why devices they do find are configured. Jamf splits these two paths into two categories: Search skill and Explain skill.

    Search skill lets admins perform natural language inventory queries across their managed fleets, enabling them to quickly find devices within their flotilla that meet the search parameters. The goal is to make it quicker and easier to audit managed devices for compliance, and to troubleshoot when things go wrong.
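    The idea behind Search skill can be pictured as a translation step: the natural-language request is compiled into structured search parameters, which are then matched against device inventory records. The sketch below is purely illustrative – the field names and the `search_inventory` helper are invented for the example and are not Jamf's actual API:

```python
# Illustrative sketch only: how a parsed natural-language query ("find
# unencrypted Macs") might be applied to inventory records as simple
# attribute criteria. Field names here are invented, not Jamf's schema.

def search_inventory(devices, **criteria):
    """Return the devices whose attributes match every supplied criterion."""
    return [d for d in devices if all(d.get(k) == v for k, v in criteria.items())]

fleet = [
    {"name": "mac-001", "os": "macOS 15.5", "encrypted": True},
    {"name": "mac-002", "os": "macOS 14.7", "encrypted": False},
]

# A query like "find unencrypted Macs" might compile down to:
unencrypted = search_inventory(fleet, encrypted=False)
```

    The real product presumably layers an LLM on top of this kind of structured filter, so admins never have to write the query syntax themselves.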

    Explain skill caters to another facet of an IT admin’s daily challenges. As Jamf explains, it means the genAI can translate complex configurations and policies into clear, easy-to-understand language. This helps admins make informed decisions, streamline troubleshooting and manage policies more confidently, says Jamf.

    While these new Jamf tools don’t automate much of the workload facing IT, it’s not hard to see how once the AI can understand what’s happening on a Mac and identify those devices that meet a set of parameters, the only missing piece is to automate some of the workflow in between.

    This, of course, is the direction of travel and will likely ripple across IT and every platform. Who knows, it might even make the cost of supporting Windows fleets almost as affordable as that of managing fleets of Apple devices. (Though I doubt it.)

    Beyond AI

    Jamf also made a handful of announcements outside of AI, including the general availability of Blueprints, a set of tools the company announced at JNUC last year. Blueprints builds on Apple’s Declarative Device Management framework and is designed to simplify and accelerate device configuration by consolidating policies, profiles and restrictions into a single, unified workflow.
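    As a rough mental model of that consolidation (a hypothetical sketch, not Jamf's or Apple's real declaration schema), a blueprint can be thought of as folding separate policy, profile and restriction fragments into one declarative document:

```python
# Hypothetical sketch: consolidating separate policy, profile and restriction
# fragments into one "blueprint". The structure is invented for illustration
# and does not follow Apple's Declarative Device Management schema.

def build_blueprint(name, *fragment_groups):
    blueprint = {"name": name, "declarations": []}
    for group in fragment_groups:
        blueprint["declarations"].extend(group)
    return blueprint

policies = [{"type": "passcode", "minLength": 8}]
profiles = [{"type": "wifi", "ssid": "Campus"}]
restrictions = [{"type": "restriction", "allowCamera": False}]

bp = build_blueprint("student-macs", policies, profiles, restrictions)
```

    The appeal is that a single document like this can be versioned, audited and applied as one unit, rather than juggling three separate workflows.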

    This makes a lot of sense on a road map to further AI deployment, as well as for anyone attempting to manage and deploy large Apple fleets. I imagine admins preparing for mammoth college- or school-wide deployments will have some optimism that Blueprints could help save time. Don’t forget that education IT teams are often expected to deploy thousands of devices within a few weeks, so these tools should be significant to them.

    Jamf continues working on Blueprints, and has introduced a beta release of Configuration Profiles within Blueprints. This consists of a new dynamic framework for MDM key delivery, designed to help teams manage devices at scale.

    Ticket to ride

    Jamf has offered a Self Service+ portal since earlier this year. Aimed at end users, the system lets users request, download and update apps, as well as monitor their device security. Those features have been expanded with identity management tools, so users can view their accounts, change passwords, and request things like temporary admin access.

    The beauty of Self Service+ is that it enables users to do these things autonomously while keeping their devices fully auditable and compliant. The idea is that it’s a lot better to focus the expensive tech support teams on the big problems, rather than seeing them bogged down in small, transient (albeit important) challenges.

    The company also introduced Compliance Benchmarks. Based on Apple’s macOS Security Compliance Project (mSCP), this system helps IT automate the process of securing their Apple devices.

    Jamf has also added malware detection to its App Installers module, which means every application made available through that system is scanned to maintain security confidence. That’s really important to companies attempting to provision apps to employees, particularly if they want to avoid accidental installs of hacked, malware-laden copies posing as the original app.

    You can follow me on social media! Join me on BlueSky,  LinkedIn, and Mastodon.
    JAMF puts AI inside Apple device management
    www.computerworld.com
  • OpenAI Brings ChatGPT Record Mode on MacOS, Adds Tool to Connect to Gmail and Outlook

    Photo Credit: Unsplash/Solen Feyissa

    The Connectors feature on ChatGPT is not available in European Union countries, China, and the UK

    Highlights

    With Record Mode, ChatGPT can transcribe and summarise meetings
    ChatGPT’s Connector feature works only with Deep Research
    It is available to all the paid subscribers of ChatGPT


    OpenAI released two new utility features for ChatGPT users on Wednesday. The artificial intelligence (AI) app on macOS now has a Record Mode that can capture meetings, brainstorming sessions, and voice notes, and transcribe and summarise the main discussion points. This feature is currently only available to ChatGPT Team subscribers. Additionally, the San Francisco-based AI firm is also introducing Connectors, a tool that lets the chatbot connect to the user's internal cloud-based data sources such as Gmail, Outlook, Google Drive, and more.

    ChatGPT Can Now Record Your Meetings

    In a series of posts on X (formerly known as Twitter), the official handle of OpenAI announced the new ChatGPT features. The company also hosted a live stream on YouTube to demonstrate these business-focused features. Both features are exclusive to the company's paid subscribers; however, Record Mode is only aimed at Team users. Additionally, Record Mode is not available in the European Economic Area (EEA), China, and the UK.

    Record Mode is a new capability available on ChatGPT's macOS desktop app. Team users can now tap the new Record button at the bottom of any chat. Once the user has given permission to use the microphone, the chatbot will begin capturing the meeting. It can also record voice notes. Once the session has ended, it can provide an editable summary of the conversation as well as its recording.

    OpenAI says users will be able to search for past meetings, reference them during conversations, and bring in relevant context. The transcripts of the meetings also get saved as a canvas in the user's chat history. This transcript can also be rewritten as an email, project plan, or code scaffold. Notably, the tool can record up to 120 minutes per session.

    Separately, the AI firm also released Connectors. The tool allows ChatGPT to connect to third-party internal data sources and retrieve information in real time. The feature works with Outlook, Teams, Google Drive, Gmail, Linear, and more. Team, Enterprise, and Edu subscribers can also connect to SharePoint, Dropbox, and Box. Connectors will only work when using Deep Research.

    OpenAI is also letting workspace admins build custom Deep Research Connectors using Model Context Protocol (MCP) in beta.
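    Conceptually, a connector is a small service that answers retrieval requests from the model against an internal data source. The toy handler below illustrates the shape of that interaction; it is not the actual MCP SDK or wire format, and the method and field names are invented:

```python
# Toy illustration of a connector-style handler answering a search request
# against an internal data source. This is NOT the real Model Context
# Protocol SDK or wire format; method and field names are invented.

DOCS = {
    "q3-plan": "Q3 roadmap and hiring plan",
    "dmarc-runbook": "Steps for triaging DMARC failures",
}

def handle_request(request):
    """Answer a 'search' request with the IDs of matching documents."""
    if request.get("method") != "search":
        return {"error": "unsupported method"}
    query = request["params"]["query"].lower()
    hits = [doc_id for doc_id, text in DOCS.items() if query in text.lower()]
    return {"results": hits}

response = handle_request({"method": "search", "params": {"query": "dmarc"}})
```

    In a real deployment, the model issues requests like this during a Deep Research run and weaves the returned documents into its answer.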

    For the latest tech news and reviews, follow Gadgets 360 on X, Facebook, WhatsApp, Threads and Google News. For the latest videos on gadgets and tech, subscribe to our YouTube channel. If you want to know everything about top influencers, follow our in-house Who'sThat360 on Instagram and YouTube.

    Further reading:
    OpenAI, ChatGPT, AI, Artificial Intelligence, Apps

    Akash Dutta

    Akash Dutta is a Senior Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, metaverse, and fediverse. In his free time, he can be seen supporting his favourite football club - Chelsea, watching movies and anime, and sharing passionate opinions on food.

    OpenAI Brings ChatGPT Record Mode on MacOS, Adds Tool to Connect to Gmail and Outlook
    www.gadgets360.com
  • Coming soon to enterprises: One Windows Update to rule them all

    Microsoft is giving its Windows Update software stack more power, and the tool will soon be able to update other software and drivers within Windows systems.

    The company is establishing the capability for system administrators to wrangle all software updates into a one-click experience, Microsoft said in a blog post on Wednesday.

    Sysadmins today have to run Windows Update to keep the OS updated, and separately patch individual pieces of software, which can be a lot of work.

    “To solve this, we’re building a vision for a unified, intelligent update orchestration platform capable of supporting any update to be orchestrated alongside Windows updates,” Microsoft said.

    Typically, system administrators deploy patch management tools to update Windows and related enterprise software, but Microsoft wants to bring it all to a Windows Update-style deployment. Potential benefits include more streamlined and lower-cost deployment of updates, the company said. A unified patch management system also reduces computing requirements.
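    The orchestration model Microsoft describes can be sketched in miniature: individual update providers register with one scheduler, which then drives every check in a single pass instead of each product polling on its own. This is a conceptual sketch only; the class and method names are invented and do not reflect Microsoft's actual Windows Runtime APIs:

```python
# Conceptual sketch of unified update orchestration: providers register once,
# and a single scheduler drives every check in one pass. Names are invented;
# this does not mirror Microsoft's actual orchestration APIs.

class UpdateOrchestrator:
    def __init__(self):
        self.providers = []

    def register(self, name, check_fn):
        """Enroll a product's update check with the central scheduler."""
        self.providers.append((name, check_fn))

    def run(self):
        """Run every registered check and report the outcome per product."""
        return {name: check_fn() for name, check_fn in self.providers}

orchestrator = UpdateOrchestrator()
orchestrator.register("os", lambda: "up to date")
orchestrator.register("third-party-app", lambda: "updated to 2.1")
report = orchestrator.run()
```

    The win for admins is a single schedule and a single report, rather than one updater per product each running on its own timetable.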

    The current process for doing updates to Windows systems is a hodgepodge of different tools and techniques, said Jack Gold, principal analyst at J. Gold Associates.

    “I applaud Microsoft for finally trying to bring all of this under one umbrella but wonder why it took them so long to do this,” Gold said.

    In addition to Windows, Windows Update today updates Microsoft’s development tools such as .NET and Defender, and also updates system drivers. With ARM-based PCs, it also delivers system BIOS and firmware so users don’t have to download it from the PC maker’s website.

    But how quickly companies adopt this new way of doing things will depend on how easy Microsoft makes it to adopt the new service, Gold said.

    Microsoft is providing a tool for software providers to put their software updates into its orchestration platform. So far, the company has only published guidance on how developers can test it with their applications, with further details to follow.

    Developers with access to the Windows Runtime environment can try the platform out and implement it; APIs are also available for testing the system.

    Microsoft separately announced that Windows Backup for Organizations, a data backup feature announced last year, is now in public preview.

    The product will allow for a smooth transition to Windows 11 from Windows 10 for enterprises, the company said. Windows 10 support ends in October 2025.

    “This capability helps reduce migration overhead, minimize user disruption, and strengthen device resilience against incidents,” Microsoft wrote in a blog entry.

    Microsoft’s Entra identity authentication is a key component of such transitions via Windows Backup for Organizations, Microsoft said.

    Further reading:

    How to handle Windows 10 and 11 updates

    Windows 10: A guide to the updates

    Windows 11: A guide to the updates

    How to preview and deploy Windows 10 and 11 updates

    How to troubleshoot and reset Windows Update

    How to keep your apps up to date in Windows 10 and 11
  • Gemini will now automatically summarize your long emails unless you opt out

    Google’s AI assistant, Gemini, is gaining a more prominent place in your inbox with the launch of email summary cards, which will appear at the top of your emails. The company announced Thursday that users would no longer have to tap an option to summarize an email with AI. Instead, the AI will now automatically summarize the content when needed, without requiring user interaction.
    When Gemini launched in the side panel of Gmail last year, one of the features allowed users to summarize their long email threads, along with other tools like those to draft email messages or see suggested responses, among other things.
    Now, Google is putting the AI to work on your inbox, whether or not it’s something you want to use.
    The update is another example of how AI is quickly infiltrating the software and services people use the most, even though AI summaries aren’t always reliable. When Apple rolled out AI summaries for app push notifications, for example, the BBC found the feature made repeated mistakes when summarizing news headlines. Apple ended up pausing the AI summaries for news apps.
    Google’s own AI Overviews feature for Search has also repeatedly made mistakes, offering poor quality and inaccurate information at times.
    With the new email summary cards, Gemini will list a longer email’s key points and will then continue to update that synopsis as replies arrive.

    The feature won’t replace the option to manually click a button to summarize an email, Google notes. That will still appear as a chip at the top of the email and in Gmail’s Gemini side panel.


    The feature is initially available only for emails in English.
    Depending on your region, the summary cards may be turned on or off by default. (For instance, smart features are turned off in the EU, the UK, Switzerland, and Japan, Google’s help documentation notes.) Others can choose to enable or disable the feature from Gmail’s Settings under “Smart features.” Workspace admins can also opt to disable the personalization settings for users from the Admin console.
  • Microsoft Debuts Windows Update Orchestration Platform For Updating All Apps From A Single Place


    Sarfraz Khan •
    May 30, 2025 at 10:42am EDT

    To simplify app updates, Microsoft has rolled out a new orchestration platform that can help developers update their apps through Windows Update.
    Microsoft Says the Windows Update Orchestration Platform Will Give Users a Simplified Update Process and Help Developers Manage Their Apps More Conveniently
    Windows' built-in update feature usually updates the OS components, but Microsoft's line-of-business (LOB) apps as well as third-party apps are still managed independently. Apart from aiming to manage its LOB apps in a single place for easier updates, Microsoft also wants its latest platform to manage third-party apps.
    Microsoft calls it the Windows Update Orchestration Platform, a new feature that will allow Windows users to download the latest app updates from a single place instead of having to download them independently. Microsoft has released a private preview for developers, who can now sign up to explore this unified approach and reduce the hassle of managing app updates independently.

    Microsoft cited various reasons for releasing the orchestration platform, such as CPU and bandwidth spikes, confusing and conflicting notifications, and added support costs. With the new Windows Update stack, users will be able to download all their updates from one place, and app developers will benefit as well.
    IT admins will particularly benefit from the orchestration, as today they have to rely on independent update mechanisms. Each tool brings its own logic for scanning, downloading, installing, and notifying users, which results in a fragmented experience. The new platform offers developers multiple benefits, including eco-efficient scheduling (which helps reduce the impact on productivity and energy consumption), consistent notifications via native Windows Update notifications, centralized update history, unified troubleshooting tools, and more.
    The Windows Update orchestration platform lets developers integrate their apps through the preview via a set of Windows Runtime APIs and PowerShell commands. Once onboard, developers can control how their apps behave in the platform: registration, defining updates, custom update logic, managed scheduling, and status reporting.
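The real integration surface is Windows Runtime APIs and PowerShell, but the lifecycle those steps describe can be sketched abstractly. The Python below is purely a conceptual model, not Microsoft's API: every class and method name is invented for illustration. It shows the shape of the idea — registered apps plug their own scan/install logic into one coordinator, which keeps the centralized update history.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class UpdateProvider:
    """One registered app: supplies its own scan and install logic."""
    name: str
    scan: Callable[[], Optional[str]]   # returns available version, or None
    install: Callable[[str], bool]      # returns True on success

@dataclass
class Orchestrator:
    """Central coordinator: runs every provider, keeps unified history."""
    providers: list = field(default_factory=list)
    history: list = field(default_factory=list)

    def register(self, provider: UpdateProvider) -> None:
        self.providers.append(provider)

    def run_cycle(self) -> list:
        """Scan all providers, install what's available, record status."""
        results = []
        for p in self.providers:
            version = p.scan()
            if version is None:
                status = "up-to-date"
            else:
                status = "installed" if p.install(version) else "failed"
            entry = (p.name, version or "-", status)
            self.history.append(entry)   # centralized update history
            results.append(entry)
        return results

# Hypothetical apps plugging into the orchestrator:
orch = Orchestrator()
orch.register(UpdateProvider("LOBApp", scan=lambda: "2.1", install=lambda v: True))
orch.register(UpdateProvider("Driver", scan=lambda: None, install=lambda v: True))
print(orch.run_cycle())
```

The design choice the platform promises is exactly this inversion: instead of each app shipping its own scanner, downloader, and notifier, apps register callbacks and one scheduler decides when everything runs.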
    News Source: Microsoft

  • AI for network admins

    There are few industries these days that are not touched by artificial intelligence, and networking is certainly one that is. It is hard to imagine a network of any reasonable size – from an office local area network or home router to a global telecoms infrastructure – that could not “just” be improved by AI.
    Just take the words of Swisscom’s chief technical officer, Mark Düsener, about his company’s partnership with Cisco-owned Outshift to deploy agentic AI – of which more later – through his organisation. “The goal of getting into an agentic AI world, operating networks and connectivity is all about reducing the impact of service changes, reducing the risk of downtime and costs – therefore levelling up our customer experience.” 
    In other words, the implementation of AI results in operational efficiencies, increased reliability and user benefits. Seems simple, yes? But as we know, nothing in life is simple, and to guarantee such gains, AI can’t be “just” switched on. And perhaps most importantly, the benefits of AI in networking can’t be realised fully without considering networking for AI.

    It seems logical that any investigation of AI and networking – or indeed, AI and anything – should start with Nvidia, a company that has played a pivotal role in developing the AI tech ecosystem, and is set to do so further.
    Speaking in 2024 at a tech conference about how AI has established itself as an intrinsic part of business, Nvidia founder and CEO Jensen Huang observed that the era of generative AI is here and that enterprises must engage with “the single most consequential technology in history”. He told the audience that what was happening was the greatest fundamental computing platform transformation in 60 years: the shift from general-purpose computing to accelerated computing. 
    “We’re sitting on a mountain of data. All of us. We’ve been collecting it in our businesses for a long time. But until now, we haven’t had the ability to refine that, then discover insight and codify it automatically into our company’s natural experience, our digital intelligence. Every company is going to be an intelligence manufacturer. Every company is built on domain-specific intelligence. For the very first time, we can now digitise that intelligence and turn it into our AI – the corporate AI,” he said.
    “AI is a lifecycle that lives forever. What we are looking to do is turn our corporate intelligence into digital intelligence. Once we do that, we connect our data and our AI flywheel so that we collect more data, harvest more insight and create better intelligence. This allows us to provide better services or to be more productive, run faster, be more efficient and do things at a larger scale.” 
    Concluding his keynote, Huang stressed that enterprises must now engage with the “single most consequential technology in history” to translate and condense a company’s intelligence into digital intelligence.
    This is precisely what Swisscom is aiming to achieve. The company is Switzerland’s largest telecoms provider with more than six million mobile customers and 10,000 mobile antenna sites that have to be managed effectively. When its network engineers make changes to the infrastructure, they face a common challenge: how to update systems that serve millions of customers without disrupting the service.
    The solution was partnering with Outshift to develop practical applications of AI agents in network operations to “redefine” customer experiences. That is, using Outshift’s Internet of Agents to deliver meaningful results for the telco, while also meeting customer needs through AI innovation.
    But these advantages are not the preserve of large enterprises such as telcos. Indeed, from a networking perspective, AI can enable small- and medium-sized businesses to gain access to enterprise-level technology that can allow them to focus on growth and eliminate the costs and infrastructure challenges that arise when managing complex IT infrastructures. 

    From a broader perspective, Swisscom and Outshift have also shown that making AI work effectively requires something new: an infrastructure that lets businesses communicate and work together securely. And this is where the two sides of AI and networking come into play.
    At the event where Nvidia’s Huang outlined his vision, David Hughes, chief product officer of HPE Aruba Networking, said there were pressing issues about the use of AI in enterprise networks, in particular around harnessing the benefits that GenAI can offer. Regarding “AI for networking” and “networking for AI”, Hughes suggested there are subtle but fundamental differences between the two. 
    “AI for networking is where we spend time from an engineering and data science point of view. It’s really about how we use AI technology to turn IT admins into super-admins so that they can handle their escalating workloads independent of GenAI, which is kind of a load on top of everything else, such as escalating cyber threats and concerns about privacy. The business is asking IT to do new things, deploy new apps all the time, but they’re the same number of people,” he observed. 

    What we are starting to see, and expect more of, is AI computing increasingly taking place at the edge to eliminate the distance between the prompt and the process

    Bastien Aerni, GTT

    “Networking for AI is about building out, first and foremost, the kind of switching infrastructure that’s needed to interconnect GPU clusters. And then a little bit beyond that, thinking about the impact of collecting telemetry on a network and the changes in the way people might want to build out their network.” 
    And impact there is. A lot of firms currently investigating AI within their businesses find themselves asking how to manage the mass adoption of AI in relation to networking and data flows, such as the kind of bandwidth and capacity required to facilitate AI-generated output such as text, image and video content.
    This, says Bastien Aerni, vice-president of strategy and technology adoption at global networking and security-as-a-service firm GTT, is causing companies to rethink the speed and scale of their networking needs. 
    “To achieve the return on investment of AI initiatives, they have to be able to secure and process large amounts of data quickly, and to this end, their network architecture must be configured to support this kind of workload. Utilising a platform embedded in a Tier 1 IP [internet protocol] backbone here ensures low latency, high bandwidth and direct internet access globally,” he remarks.
    “What we are starting to see, and expect more of, is AI computing increasingly taking place at the edge to eliminate the distance between the prompt and the process. Leveraging software-defined wide area network [SD-WAN] services built in the right platform to efficiently route AI data traffic can reduce latency and security risk, and provide more control over data.”
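    Aerni’s “distance between the prompt and the process” is, at bottom, physics. As a rough, hypothetical illustration (the distances and figures below are our assumptions, not from GTT), the fixed propagation cost of a prompt travelling to a remote datacentre and back can be sketched in a few lines of Python:

```python
# Back-of-envelope illustration (not from the article): propagation delay
# alone, ignoring queuing, serialisation and model inference time.
FIBRE_SPEED_KM_S = 200_000  # light in optical fibre travels at roughly 2/3 c

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds for one prompt/response."""
    return 2 * distance_km / FIBRE_SPEED_KM_S * 1000

# Hypothetical distances: a distant cloud region versus a metro edge site.
for label, km in [("remote cloud (3,000 km)", 3_000), ("metro edge (50 km)", 50)]:
    print(f"{label}: {round_trip_ms(km):.2f} ms per round trip")
```

    Queuing and inference time usually dominate in practice, but the propagation component is the one part no amount of compute can remove, which is precisely what moving AI computing to the edge addresses.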

    At the end of 2023, BT revealed that its networks had come under huge strain after the simultaneous online broadcast of six Premier League football matches and downloads of popular games, with the update of Call of Duty Modern Warfare particularly cited. AI promises to add to this headache. 
    Speaking at Mobile World Congress 2025, BT Business chief technology officer Colin Bannon said that in the new, reshaped world of work, a robust and reliable network is a fundamental prerequisite for AI to work, and that it requires effort to stay relevant to meet ongoing challenges faced by the customers BT serves, mainly international business, governments and multinationals. The bottom line is that network performance to support the AI-enabled world is crucial in a world where “slow is the new down”.
    Bannon added that Global Fabric, BT’s network-as-a-service product, was constructed before AI “blew up” and that BT was thinking of how to deal with a hyper-distributed set of workloads on a network and to be able to make it fully programmable.
    Looking at the challenges ahead and how the new network will resolve them, he said: “[AI] just makes distributed and more complex workflows even bigger, which makes the need for a fabric-type network even more important. You need a network that can burst, and that is programmable, and that you can [call on] bandwidth on demand as well. All of this programmability [is something we] have never had before. I would argue that the network is the computer, and the network is a prerequisite for AI to work.”
    The upshot is the need to construct enterprise networks that can cope with the massive strain AI places on utilisation, especially in terms of what is needed for training models. Bannon said there were three key network challenges and conditions to deal with AI: training requirements, inference requirements and general requirements.
    He stated that the dynamic nature of AI workloads means networks need to be scalable and agile, with visibility tools that offer real-time monitoring, issue detection and troubleshooting. As regards specific training requirements, dealing with AI necessitates the movement of large datasets across the network, thus demanding high-bandwidth networks.
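    To see why moving training datasets demands high-bandwidth links, a back-of-envelope calculation helps. The dataset size and link speeds below are hypothetical, chosen only to illustrate the scale of the problem:

```python
def transfer_hours(dataset_tb: float, link_gbps: float) -> float:
    """Hours to move a dataset of `dataset_tb` terabytes over a link of
    `link_gbps` gigabits per second, assuming full, sustained utilisation."""
    bits = dataset_tb * 8e12          # 1 TB = 8 * 10^12 bits (decimal units)
    return bits / (link_gbps * 1e9) / 3600

# Hypothetical 500 TB training corpus over common enterprise link speeds.
for gbps in (10, 100, 400):
    print(f"{gbps:>3} Gbps: {transfer_hours(500, gbps):.1f} h")
```

    At 10 Gbps the move takes over 100 hours even under ideal conditions; real links shared with other traffic do worse, which is why training pipelines push network upgrades.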
    He also described “elephant” flows of data – that is, continuous transmission over time and training over days. He warned that network inconsistencies could affect the accuracy and training time of AI models, and that tail latency could impact job completion time significantly. This means robust congestion management is needed to detect potential congestion and redistribute network traffic. 
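    Why tail latency, rather than average latency, drives job completion time is easiest to see in a small simulation (the worker count and delay figures below are illustrative assumptions): in synchronous data-parallel training, each step waits for the slowest of many parallel gradient exchanges, so even a rare slow flow delays the whole job.

```python
import random

def step_time_ms(worker_times):
    """A synchronous training step finishes only when the slowest worker's
    gradient exchange completes -- the max, not the mean."""
    return max(worker_times)

random.seed(42)
# Hypothetical: 64 workers, exchanges normally take ~10 ms, but 2% of them
# suffer a 100 ms tail-latency event (congestion, retransmission).
steps = []
for _ in range(1000):
    times = [100.0 if random.random() < 0.02 else 10.0 for _ in range(64)]
    steps.append(step_time_ms(times))

mean_worker = 10.0 * 0.98 + 100.0 * 0.02  # average time seen by one worker
print(f"average worker time: {mean_worker:.1f} ms")
print(f"average step time:   {sum(steps) / len(steps):.1f} ms")
```

    With 64 workers, a straggler appears in most steps, so the average step time lands far closer to the 100 ms tail than to the ~12 ms single-worker average: exactly the behaviour Bannon’s congestion-management point is aimed at.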
    But AI training models generally spell network trouble. And now the conversation is turning from the use of generic large language models to application- and industry-dedicated small language models.

    NTT Data has created and deployed a small language model called Tsuzumi, described as an ultra-lightweight model designed to reduce learning and inference costs. According to NTT’s UK and Ireland CTO, Tom Winstanley, the reason for developing this model has principally been to support edge use cases.
    “[It means] literally deployment at the edge of the network to avoid flooding of the network, also addressing privacy concerns, also addressing sustainability concerns around some of these very large language models, being very specific in creating domain context,” he says.
    “Examples of that can be used in video analytics, media analytics, and in capturing conversations in real time, but locally, and not deploying it out to flood the network. That said, the flip side of this was there was immense power sitting in some of these central hyper-scale models and capacities, and you also therefore need to find out more [about] what’s the right network background, and what’s the right balance of your network infrastructure. For example, if you want to do real-time media streaming from a [venue] and do all of the edits on-site, or remotely so [as] not to have to deploy to every single location, then you need a different backbone, too.”
    Winstanley notes that his company is part of a wider group that in media use cases could offer hyper-directional sound systems supported by AI. “This is looking like a really interesting area of technology that is relevant for supporter experience in a stadium – dampening, sound targeting. And then we’re back to the connection to the edge of the AI story. And that’s exciting for us. That is the frontier.” 
    But coming back from the frontier of technology to bread-and-butter business operations, even if the IT and comms community is confident that it can address any technological issues that arise regarding AI and networking, businesses themselves may not be so sure. 

    Research published by managed network-as-a-service provider Expereo in April 2025 revealed that despite 88% of UK business leaders regarding AI as becoming important to fulfilling business priorities in the next 12 months, there are a number of major roadblocks to AI plans by UK businesses. These include [resistance] from employees and unreasonable demands, as well as poor existing infrastructure.
    Worryingly, among the key findings of Expereo’s Enterprise horizons 2025 study was the general feeling from a lot of UK technology leaders that expectations within their organisation of what AI can do are growing faster than their ability to meet them. While 47% of UK organisations noted that their network/connectivity infrastructure was not ready to support new technology initiatives, such as AI, in general, a further 49% reported that their network performance was preventing or limiting their ability to support large data and AI projects. 
    Assessing the key trends revealed in the study, Expereo CEO Ben Elms says that as global businesses embrace AI to transform employee and customer experience, setting realistic goals and aligning expectations will be critical to ensuring that AI delivers long-term value, rather than being viewed as a quick fix.
    “While the potential of AI is immense, its successful integration requires careful planning. Technology leaders must recognise the need for robust networks and connectivity infrastructure to support AI at scale, while also ensuring consistent performance across these networks,” he says. 
    Summing up the state of the industry, Elms states that business is currently at a pivotal moment where strategic investments in technology and IT infrastructure are necessary to meet both current and future demands. In short, this reflects Düsener’s point about Swisscom’s aim to reduce the impact of service changes, reduce the risk of downtime and costs, and improve customer services.
    Simply switching on any AI system and believing that any answer is “out there” just won’t do. Your network could very well tell you otherwise.

    Through its core Catia platform and its SolidWorks subsidiary, engineering software company Dassault Systèmes sees artificial intelligence as now fundamental to its design and manufacturing work in virtually all production industries.
    Speaking to Computer Weekly in February 2025, the company’s senior vice-president, Gian Paolo Bassi, said the conversation in its sector has evolved from Industry 4.0, which was focused on automation, productivity and innovation without taking into account the effect of technological changes on society.
    “The industry has decided that it’s time for an evolution,” he said. “It’s called Industry 5.0. At the intersection of the experience economy, there is a new, compelling necessity to be sustainable, to create a circular economy. So then, at the intersection, [comes] the generative economy.”
    Yet in aiming to generate gains in sustainability through Industry 5.0, there is a danger that the increased use of AI could potentially see increased power usage, as well as the need to invest in much more robust and responsive connected network infrastructure to support the rise in AI-based workloads. 
    Dassault first revealed it was working with generative AI design principles in 2024. As the practice has evolved, Bassi said it now captures two fundamental concepts. The first is the ability of AI to create new and original content based on language models that comprise details of processes, business models, designs of parts assemblies, specifications and manufacturing practices. These models, he stressed, would not be traditional, generic, compute-intensive models such as ChatGPT. Instead, they would be vertical, industry-specific, and trained on engineering content and technical documentation. 
    “We can now build large models of everything, which is a virtual twin, and we can get to a level of sophistication where new ideas can come in, be tested, and much more knowledge can be put into the innovation process. This is a tipping point,” he remarked. “It’s not a technological change. It’s a technological expansion – a very important one – because we are going to improve, to increase our portfolio with AI agents, with virtual companions and also content, because generative AI can generate content, and can generate, more importantly, know-how and knowledge that can be put to use by our customers immediately.”
    This tipping point means the software provider can bring knowledge and know-how to a new level because, in Bassi’s belief, this is what AI is best at: exploiting the large models of industrial practices. The most important benefit is addressing customer needs as the capabilities of AI are translated into the industrial world, offering a pathway for engineers to save precious time in research and spend more time being creative in design, without massive, network-intensive models.
    “Right now, there is this rush to create larger and more comprehensive models. However, it may be a temporary limitation of the technology,” Bassi suggested. “In fact, it is indeed possible that you don’t need the huge models to do specific tasks.”
    #network #admins
    AI for network admins
    There are few industries these days that are not touched by artificial intelligence. Networking is very much one that is touched. It is barely conceivable that any network of any reasonable size – from an office local area network or home router to a global telecoms infrastructure – could not “just” be improved by AI. Just take the words of Swisscom’s chief technical officer, Mark Düsener, about his company’s partnership with Cisco-owned Outshift to deploy agentic AI – of which more later – through his organisation. “The goal of getting into an agentic AI world, operating networks and connectivity is all about reducing the impact of service changes, reducing the risk of downtime and costs – therefore levelling up our customer experience.”  In other words, the implementation of AI results in operational efficiencies, increased reliability and user benefits. Seems simple, yes? But as we know, nothing in life is simple, and to guarantee such gains, AI can’t be “just” switched on. And perhaps most importantly, the benefits of AI in networking can’t be realised fully without considering networking for AI. It seems logical that any investigation of AI and networking – or indeed, AI and anything – should start with Nvidia, a company that has played a pivotal role in developing the AI tech ecosystem, and is set to do so further. Speaking in 2024 at a tech conference about how AI has established itself as an intrinsic part of business, Nvidia founder and CEO Jensen Huang observed that the era of generative AIis here and that enterprises must engage with “the single most consequential technology in history”. He told the audience that what was happening was the greatest fundamental computing platform transformation in 60 years, encompassing general-purpose computing to accelerated computing.  “We’re sitting on a mountain of data. All of us. We’ve been collecting it in our businesses for a long time. 
But until now, we haven’t had the ability to refine that, then discover insight and codify it automatically into our company’s natural experience, our digital intelligence. Every company is going to be an intelligence manufacturer. Every company is built on domain-specific intelligence. For the very first time, we can now digitise that intelligence and turn it into our AI – the corporate AI,” he said. “AI is a lifecycle that lives forever. What we are looking to do is turn our corporate intelligence into digital intelligence. Once we do that, we connect our data and our AI flywheel so that we collect more data, harvest more insight and create better intelligence. This allows us to provide better services or to be more productive, run faster, be more efficient and do things at a larger scale.”  Concluding his keynote, Huang stressed that enterprises must now engage with the “single most consequential technology in history” to translate and condense a company’s intelligence into digital intelligence. This is precisely what Swisscom is aiming to achieve. The company is Switzerland’s largest telecoms provider with more than six million mobile customers and 10,000 mobile antenna sites that have to be managed effectively. When its network engineers make changes to the infrastructure, they face a common challenge: how to update systems that serve millions of customers without disrupting the service. The solution was partnering with Outshift to develop practical applications of AI agents in network operations to “redefine” customer experiences. That is, using Outshift’s Internet of Agents to deliver meaningful results for the telco, while also meeting customer needs through AI innovation. But these advantages are not the preserve of large enterprises such as telcos. 
Indeed, from a networking perspective, AI can enable small- and medium-sized businesses to gain access to enterprise-level technology that can allow them to focus on growth and eliminate the costs and infrastructure challenges that arise when managing complex IT infrastructures.  From a broader perspective, Swisscom and Outshift have also shown that making AI work effectively requires something new: an infrastructure that lets businesses communicate and work together securely. And this is where the two sides of AI and networking come into play. At the event where Nvidia’s Huang outlined his vision, David Hughes, chief product officer of HPE Aruba Networking, said there were pressing issues about the use of AI in enterprise networks, in particular around harnessing the benefits that GenAI can offer. Regarding “AI for networking” and “networking for AI”, Hughes suggested there are subtle but fundamental differences between the two.  “AI for networking is where we spend time from an engineering and data science point of view. It’s really abouthow we use AI technology to turn IT admins into super-admins so that they can handle their escalating workloads independent of GenAI, which is kind of a load on top of everything else, such as escalating cyber threats and concerns about privacy. The business is asking IT to do new things, deploy new apps all the time, but they’rethe same number of people,” he observed.  What we are starting to see, and expect more of, is AI computing increasingly taking place at the edge to eliminate the distance between the prompt and the process Bastien Aerni, GTT “Networking for AI is about building out, first and foremost, the kind of switching infrastructure that’s needed to interconnect GPUclusters. And then a little bit beyond that, thinking about the impact of collecting telemetry on a network and the changes in the way people might want to build out their network.”  And impact there is. 
A lot of firms currently investigating AI within their businesses find themselves asking how to manage the mass adoption of AI in relation to networking and data flows, such as the kind of bandwidth and capacity required to facilitate AI-generated output such as text, image and video content. This, says Bastien Aerni, vice-president of strategy and technology adoption at global networking and security-as-a-service firm GTT, is causing companies to rethink the speed and scale of their networking needs.  “To achieve the return on investment of AI initiatives, they have to be able to secure and process large amounts of data quickly, and to this end, their network architecture must be configured to support this kind of workload. Utilising a platform embedded in a Tier 1 IPbackbone here ensures low latency, high bandwidth and direct internet access globally,” he remarks.   “What we are starting to see, and expect more of, is AI computing increasingly taking place at the edge to eliminate the distance between the prompt and the process. Leveraging software-defined wide area networkservices built in the right platform to efficiently route AI data traffic can reduce latency and security risk, and provide more control over data.”  At the end of 2023, BT revealed that its networks had come under huge strain after the simultaneous online broadcast of six Premier League football matches and downloads of popular games, with the update of Call of Duty Modern Warfare particularly cited. AI promises to add to this headache.  Speaking at Mobile World Congress 2025, BT Business chief technology officerColin Bannon said that in the new, reshaped world of work, a robust and reliable network is a fundamental prerequisite for AI to work, and that it requires effort to stay relevant to meet ongoing challenges faced by the customers BT serves, mainly international business, governments and multinationals. 
The bottom line is that network performance to support the AI-enabled world is crucial in a world where “slow is the new down”.  Bannon added that Global Fabric, BT’s network-as-a-service product, was constructed before AI “blew up” and that BT was thinking of how to deal with a hyper-distributed set of workloads on a network and to be able to make it fully programmable. Looking at the challenges ahead and how the new network will resolve them, he said: “just makes distributed and more complex workflows even bigger, which makes the need for a fabric-type network even more important. You need a network that canburst, and that is programmable, and that you canbandwidth on demand as well. All of this programmabilityhave never had before. I would argue that the network is the computer, and the network is a prerequisite for AI to work.”  The result would be constructing enterprise networks that can cope with the massive strain placed on utilisation from AI, especially in terms of what is needed for training models. Bannon said there were three key network challenges and conditions to deal with AI: training requirements, inference requirements and general requirements.   He stated that the dynamic nature of AI workloads means networks need to be scalable and agile, with visibility tools that offer real-time monitoring, issue detection and troubleshooting. As regards specific training requirements, dealing with AI necessitates the movement of large datasets across the network, thus demanding high-bandwidth networks. He also described “elephant” flows of data – that is, continuous transmission over time and training over days. He warned that network inconsistencies could affect the accuracy and training time of AI models, and that tail latency could impact job completion time significantly. This means robust congestion management is needed to detect potential congestion and redistribute network traffic.  But AI training models generally spell network trouble. 
And now the conversation is turning from the use of generic large language modelsto application/industry-dedicated small language models. articles about AI for networking How network engineers can prepare for the future with AI: The rapid rise of AI has left some professionals feeling unprepared. GenAI is beneficial to networks, but engineers must have the proper tools to adapt to this new change. Cisco Live EMEA – network supplier tightens AI embrace: At its annual EMEA show, Cisco tech leaders unveiled a raft of new products, services and features designed to help customers do more with artificial intelligence. NTT Data has created and deployed a small language model called Tsuzumi, described as an ultra-lightweight model designed to reduce learning and inference costs. According to NTT’s UK and Ireland CTO, Tom Winstanley, the reason for developing this model has principally been to support edge use cases. “literally deployment at the edge of the network to avoid flooding of the network, also addressing privacy concerns, also addressing sustainability concerns around some of these very large language models being very specific in creating domain context,” he says.   “Examples of that can be used in video analytics, media analytics, and in capturing conversations in real time, but locally, and not deploying it out to flood the network. That said, the flip side of this was there was immense power sitting in some of these central hyper-scale models and capacities, and you also therefore need to find out morewhat’s the right network background, and what’s the right balance of your network infrastructure. For example, if you want to do real-time media streaming from aand do all of the edits on-site, or remotely so not to have to deployto every single location, then you need a different backbone, too.”  Winstanley notes that his company is part of a wider group that in media use cases could offer hyper-directional sound systems supported by AI. 
“This is looking like a really interesting area of technology that is relevant for supporter experience in a stadium – dampening, sound targeting. And then we’re back to the connection to the edge of the AI story. And that’s exciting for us. That is the frontier.”  But coming back from the frontier of technology to bread-and-butter business operations, even if the IT and comms community is confident that it can address any technological issues that arise regarding AI and networking, businesses themselves may not be so sure.  Research published by managed network-as-a-service provider Expereo in April 2025 revealed that despite 88% of UK business leaders regarding AI as becoming important to fulfilling business priorities in the next 12 months, there are a number of major roadblocks to AI plans by UK businesses. These include from employees and unreasonable demands, as well as poor existing infrastructure.   Worryingly, among the key findings of Expereo’s Enterprise horizons 2025 study was the general feeling from a lot of UK technology leaders that expectations within their organisation of what AI can do are growing faster than their ability to meet them. While 47% of UK organisations noted that their network/connectivity infrastructure was not ready to support new technology initiatives, such as AI, in general, a further 49% reported that their network performance was preventing or limiting their ability to support large data and AI projects.  Assessing the key trends revealed in the study, Expereo CEO Ben Elms says that as global businesses embrace AI to transform employee and customer experience, setting realistic goals and aligning expectations will be critical to ensuring that AI delivers long-term value, rather than being viewed as a quick fix. “While the potential of AI is immense, its successful integration requires careful planning. 
Technology leaders must recognise the need for robust networks and connectivity infrastructure to support AI at scale, while also ensuring consistent performance across these networks,” he says.  Summing up the state of the industry, Elms states that business is currently at a pivotal moment where strategic investments in technology and IT infrastructure are necessary to meet both current and future demands. In short, reflecting Düsener’s point about Swisscom’s aim to reduce the impact of service changes, reduce the risk of downtime and costs, and improve customer services. Just switching on any AI system and believing that any answer is “out there” just won’t do. Your network could very well tell you otherwise.  Through its core Catia platform and its SolidWorks subsidiary, engineering software company Dassault Systèmes sees artificial intelligenceas now fundamental to its design and manufacturing work in virtually all production industries. Speaking to Computer Weekly in February 2025, the company’s senior vice-president, Gian Paolo Bassi, said the conversation of its sector has evolved from Industry 4.0, which was focused on automation, productivity and innovation without taking into account the effect of technological changes in society.   “The industry has decided that it’s time for an evolution,” he said. “It’s called Industry 5.0. At the intersection of the experience economy, there is a new, compelling necessity to be sustainable, to create a circular economy. So then, at the intersection,the generativeeconomy.” Yet in aiming to generate gains in sustainability through Industry 5.0, there is a danger that the increased use of AI could potentially see increased power usage, as well as the need to invest in much more robust and responsive connected network infrastructure to support the rise in AI-based workloads.  Dassault first revealed it was working with generative AI design principles in 2024. 
As the practice has evolved, Bassi said it now captures two fundamental concepts. The first is the ability of AI to create new and original content based on language models that comprise details of processes, business models, designs of parts assemblies, specifications and manufacturing practices. These models, he stressed, would not be traditional, generic, compute-intensive models such as ChatGPT. Instead, they would be vertical, industry-specific, and trained on engineering content and technical documentation.  “We can now build large models of everything, which is a virtual twin, and we can get to a level of sophistication where new ideas can come in, be tested, and much more knowledge can be put into the innovation process. This is a tipping point,” he remarked. “It’s not a technological change. It’s a technological expansion – a very important one – because we are going to improve, to increase our portfolio with AI agents, with virtual companions and also content, because generative AI can generate content, and can generate, more importantly, know-how and knowledge that can be put to use by our customers immediately.” This tipping point means the software provider can bring knowledge and know-how to a new level because, in Bassi’s belief, this is what AI is best at: exploiting the large models of industrial practices. And with the most important benefit of addressing customer needs as the capabilities of AI are translated into the industrial world, offering a pathway for engineers to save precious time in research and spend more time on being creative in design, without massive, network-intensive models. “Right now, there is this rush to create larger and more comprehensive models. However, it maybe a temporary limitation of the technology,” Bassi suggested. “In fact, it is indeed possible that you don’t need the huge models to do specific tasks.”  #network #admins
    AI for network admins
    www.computerweekly.com
    There are few industries these days that are not touched by artificial intelligence (AI). Networking is very much one that is touched. It is barely conceivable that any network of any reasonable size – from an office local area network or home router to a global telecoms infrastructure – could not “just” be improved by AI. Just take the words of Swisscom’s chief technical officer, Mark Düsener, about his company’s partnership with Cisco-owned Outshift to deploy agentic AI – of which more later – through his organisation. “The goal of getting into an agentic AI world, operating networks and connectivity is all about reducing the impact of service changes, reducing the risk of downtime and costs – therefore levelling up our customer experience.”  In other words, the implementation of AI results in operational efficiencies, increased reliability and user benefits. Seems simple, yes? But as we know, nothing in life is simple, and to guarantee such gains, AI can’t be “just” switched on. And perhaps most importantly, the benefits of AI in networking can’t be realised fully without considering networking for AI. It seems logical that any investigation of AI and networking – or indeed, AI and anything – should start with Nvidia, a company that has played a pivotal role in developing the AI tech ecosystem, and is set to do so further. Speaking in 2024 at a tech conference about how AI has established itself as an intrinsic part of business, Nvidia founder and CEO Jensen Huang observed that the era of generative AI (GenAI) is here and that enterprises must engage with “the single most consequential technology in history”. He told the audience that what was happening was the greatest fundamental computing platform transformation in 60 years, encompassing general-purpose computing to accelerated computing.  “We’re sitting on a mountain of data. All of us. We’ve been collecting it in our businesses for a long time. 
But until now, we haven’t had the ability to refine that, then discover insight and codify it automatically into our company’s natural experience, our digital intelligence. Every company is going to be an intelligence manufacturer. Every company is built on domain-specific intelligence. For the very first time, we can now digitise that intelligence and turn it into our AI – the corporate AI,” he said. “AI is a lifecycle that lives forever. What we are looking to do is turn our corporate intelligence into digital intelligence. Once we do that, we connect our data and our AI flywheel so that we collect more data, harvest more insight and create better intelligence. This allows us to provide better services or to be more productive, run faster, be more efficient and do things at a larger scale.”  Concluding his keynote, Huang stressed that enterprises must now engage with the “single most consequential technology in history” to translate and condense a company’s intelligence into digital intelligence. This is precisely what Swisscom is aiming to achieve. The company is Switzerland’s largest telecoms provider with more than six million mobile customers and 10,000 mobile antenna sites that have to be managed effectively. When its network engineers make changes to the infrastructure, they face a common challenge: how to update systems that serve millions of customers without disrupting the service. The solution was partnering with Outshift to develop practical applications of AI agents in network operations to “redefine” customer experiences. That is, using Outshift’s Internet of Agents to deliver meaningful results for the telco, while also meeting customer needs through AI innovation. But these advantages are not the preserve of large enterprises such as telcos. 
Indeed, from a networking perspective, AI can give small and medium-sized businesses access to enterprise-grade technology, allowing them to focus on growth and eliminating the costs and infrastructure challenges that arise when managing complex IT estates.

From a broader perspective, Swisscom and Outshift have also shown that making AI work effectively requires something new: an infrastructure that lets businesses communicate and work together securely. And this is where the two sides of AI and networking come into play.

At the event where Nvidia’s Huang outlined his vision, David Hughes, chief product officer of HPE Aruba Networking, said there were pressing issues around the use of AI in enterprise networks, in particular around harnessing the benefits that GenAI can offer. Regarding “AI for networking” and “networking for AI”, Hughes suggested there are subtle but fundamental differences between the two.

“AI for networking is where we spend time from an engineering and data science point of view. It’s really about [questioning] how we use AI technology to turn IT admins into super-admins so that they can handle their escalating workloads independent of GenAI, which is kind of a load on top of everything else, such as escalating cyber threats and concerns about privacy. The business is asking IT to do new things, deploy new apps all the time, but they’re [asking this of] the same number of people,” he observed.

“Networking for AI is about building out, first and foremost, the kind of switching infrastructure that’s needed to interconnect GPU [graphics processing unit] clusters.
And then a little bit beyond that, thinking about the impact of collecting telemetry on a network and the changes in the way people might want to build out their network.”

And impact there is. A lot of firms currently investigating AI find themselves asking how to manage its mass adoption in relation to networking and data flows – for instance, the kind of bandwidth and capacity required to carry AI-generated output such as text, image and video content. This, says Bastien Aerni, vice-president of strategy and technology adoption at global networking and security-as-a-service firm GTT, is causing companies to rethink the speed and scale of their networking needs.

“To achieve the return on investment of AI initiatives, they have to be able to secure and process large amounts of data quickly, and to this end, their network architecture must be configured to support this kind of workload. Utilising a platform embedded in a Tier 1 IP [internet protocol] backbone here ensures low latency, high bandwidth and direct internet access globally,” he remarks.

“What we are starting to see, and expect more of, is AI computing increasingly taking place at the edge to eliminate the distance between the prompt and the process. Leveraging software-defined wide area network [SD-WAN] services built in the right platform to efficiently route AI data traffic can reduce latency and security risk, and provide more control over data.”

At the end of 2023, BT revealed that its networks had come under huge strain after the simultaneous online broadcast of six Premier League football matches and downloads of popular games, with an update to Call of Duty: Modern Warfare particularly cited. AI promises to add to this headache.
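Aerni’s argument about shrinking “the distance between the prompt and the process” is, at heart, a latency-budget calculation. As a minimal illustrative sketch (the site names, latencies and costs below are hypothetical assumptions, not figures from GTT), an inference request can be steered to the cheapest site that still fits the end-to-end budget:

```python
# Minimal sketch: choose an inference site by total latency budget.
# All numbers are illustrative; real placement would use live telemetry.

def pick_site(sites, budget_ms):
    """Return the cheapest site whose network RTT plus model
    processing time fits within the latency budget, or None."""
    viable = [s for s in sites
              if s["rtt_ms"] + s["processing_ms"] <= budget_ms]
    return min(viable, key=lambda s: s["cost_per_1k_req"]) if viable else None

sites = [
    {"name": "edge-pop",    "rtt_ms": 8,  "processing_ms": 40, "cost_per_1k_req": 0.90},
    {"name": "regional-dc", "rtt_ms": 25, "processing_ms": 30, "cost_per_1k_req": 0.60},
    {"name": "central-dc",  "rtt_ms": 70, "processing_ms": 20, "cost_per_1k_req": 0.35},
]

print(pick_site(sites, budget_ms=50)["name"])   # tight budget: only the edge site fits
print(pick_site(sites, budget_ms=100)["name"])  # loose budget: cheapest central site wins
```

The trade-off mirrors Aerni’s point: a tight interactive budget forces the workload to the edge even though central compute is cheaper per request.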
Speaking at Mobile World Congress 2025, BT Business chief technology officer (CTO) Colin Bannon said that in the new, reshaped world of work, a robust and reliable network is a fundamental prerequisite for AI to work, and that it takes effort to stay relevant and meet the ongoing challenges faced by the customers BT serves – mainly international businesses, governments and multinationals. The bottom line is that network performance to support the AI-enabled world is crucial in a world where “slow is the new down”.

Bannon added that Global Fabric, BT’s network-as-a-service product, was built before AI “blew up”, and that BT had been thinking about how to handle a hyper-distributed set of workloads on a network and make it fully programmable. Looking at the challenges ahead and how the new network will resolve them, he said: “[AI] just makes distributed and more complex workflows even bigger, which makes the need for a fabric-type network even more important. You need a network that can [handle data] burst, and that is programmable, and that you can [control] bandwidth on demand as well. All of this programmability [is something businesses] have never had before. I would argue that the network is the computer, and the network is a prerequisite for AI to work.”

The result would be enterprise networks that can cope with the massive strain AI places on utilisation, especially for model training. Bannon said there were three key network challenges and conditions to deal with in AI: training requirements, inference requirements and general requirements.

He stated that the dynamic nature of AI workloads means networks need to be scalable and agile, with visibility tools that offer real-time monitoring, issue detection and troubleshooting. As regards specific training requirements, dealing with AI necessitates the movement of large datasets across the network, thus demanding high-bandwidth networks.
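The scale of that bandwidth demand is easy to make concrete. As a back-of-the-envelope sketch (the dataset size, link speeds and efficiency factor are illustrative assumptions, not BT figures):

```python
# Back-of-the-envelope: time to move a training dataset across
# links of different speeds. All figures are illustrative.

def transfer_hours(dataset_tb, link_gbps, efficiency=0.8):
    """Hours to move `dataset_tb` terabytes over a `link_gbps` link,
    assuming only a fraction `efficiency` of line rate is usable."""
    bits = dataset_tb * 8e12               # terabytes -> bits
    usable_bps = link_gbps * 1e9 * efficiency
    return bits / usable_bps / 3600

for gbps in (10, 100, 400):
    print(f"{gbps:>3} Gbit/s: {transfer_hours(50, gbps):5.1f} h")
```

A 50 TB dataset takes roughly 14 hours at 10 Gbit/s but well under half an hour at 400 Gbit/s – which is why training traffic drives the push for high-bandwidth fabrics.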
He also described “elephant” flows of data – that is, continuous transmission over time, with training runs lasting days. He warned that network inconsistencies could affect the accuracy and training time of AI models, and that tail latency could significantly impact job completion time. This means robust congestion management is needed to detect potential congestion and redistribute network traffic.

AI training models, in short, generally spell network trouble. And now the conversation is turning from the use of generic large language models (see Preparing networks for Industry 5.0 box) to application- and industry-dedicated small language models.

NTT Data has created and deployed a small language model called Tsuzumi, described as an ultra-lightweight model designed to reduce learning and inference costs. According to NTT’s UK and Ireland CTO, Tom Winstanley, the reason for developing this model has principally been to support edge use cases. “[That is] literally deployment at the edge of the network to avoid flooding of the network, also addressing privacy concerns, also addressing sustainability concerns around some of these very large language models being very specific in creating domain context,” he says.

“Examples of that can be used in video analytics, media analytics, and in capturing conversations in real time, but locally, and not deploying it out to flood the network.
That said, the flip side of this was there was immense power sitting in some of these central hyperscale models and capacities, and you also therefore need to find out more [about] what’s the right network background, and what’s the right balance of your network infrastructure. For example, if you want to do real-time media streaming from a [sports stadium] and do all of the edits on-site, or remotely so as not to have to deploy [facilities] to every single location, then you need a different backbone, too.”

Winstanley notes that his company is part of a wider group that, in media use cases, could offer hyper-directional sound systems supported by AI. “This is looking like a really interesting area of technology that is relevant for supporter experience in a stadium – dampening, sound targeting. And then we’re back to the connection to the edge of the AI story. And that’s exciting for us. That is the frontier.”

But coming back from the frontier of technology to bread-and-butter business operations: even if the IT and comms community is confident it can address any technological issues that arise regarding AI and networking, businesses themselves may not be so sure.

Research published by managed network-as-a-service provider Expereo in April 2025 revealed that although 88% of UK business leaders regard AI as important to fulfilling business priorities in the next 12 months, there are a number of major roadblocks to UK businesses’ AI plans. These include unreasonable demands from employees, as well as poor existing infrastructure.

Worryingly, among the key findings of Expereo’s Enterprise horizons 2025 study was a general feeling among UK technology leaders that expectations within their organisation of what AI can do are growing faster than their ability to meet them.
While 47% of UK organisations noted that their network/connectivity infrastructure was not ready to support new technology initiatives such as AI, a further 49% reported that their network performance was preventing or limiting their ability to support large data and AI projects.

Assessing the key trends revealed in the study, Expereo CEO Ben Elms says that as global businesses embrace AI to transform employee and customer experience, setting realistic goals and aligning expectations will be critical to ensuring that AI delivers long-term value, rather than being viewed as a quick fix. “While the potential of AI is immense, its successful integration requires careful planning. Technology leaders must recognise the need for robust networks and connectivity infrastructure to support AI at scale, while also ensuring consistent performance across these networks,” he says.

Summing up the state of the industry, Elms states that business is at a pivotal moment where strategic investments in technology and IT infrastructure are necessary to meet both current and future demands – in short, reflecting Düsener’s point about Swisscom’s aim to reduce the impact of service changes, reduce the risk of downtime and costs, and improve customer services. Just switching on any AI system and believing that any answer is “out there” won’t do. Your network could very well tell you otherwise.

Preparing networks for Industry 5.0

Through its core Catia platform and its SolidWorks subsidiary, engineering software company Dassault Systèmes sees artificial intelligence (AI) as now fundamental to its design and manufacturing work in virtually all production industries. Speaking to Computer Weekly in February 2025, the company’s senior vice-president, Gian Paolo Bassi, said the conversation in its sector has evolved beyond Industry 4.0, which focused on automation, productivity and innovation without taking into account the effect of technological changes on society.
“The industry has decided that it’s time for an evolution,” he said. “It’s called Industry 5.0. At the intersection of the experience economy, there is a new, compelling necessity to be sustainable, to create a circular economy. So then, at the intersection, [we have] the generative [AI] economy.”

Yet in aiming to generate sustainability gains through Industry 5.0, there is a danger that increased use of AI could drive up power usage, as well as the need to invest in much more robust and responsive connected network infrastructure to support the rise in AI-based workloads.

Dassault first revealed it was working with generative AI design principles in 2024. As the practice has evolved, Bassi said it now captures two fundamental concepts. The first is the ability of AI to create new and original content based on language models that comprise details of processes, business models, designs of parts and assemblies, specifications and manufacturing practices. These models, he stressed, would not be traditional, generic, compute-intensive models such as ChatGPT. Instead, they would be vertical, industry-specific, and trained on engineering content and technical documentation.

“We can now build large models of everything, which is a virtual twin, and we can get to a level of sophistication where new ideas can come in, be tested, and much more knowledge can be put into the innovation process. This is a tipping point,” he remarked. “It’s not a technological change.
It’s a technological expansion – a very important one – because we are going to improve, to increase our portfolio with AI agents, with virtual companions and also content, because generative AI can generate content and can generate, more importantly, know-how and knowledge that can be put to use by our customers immediately.”

This tipping point means the software provider can bring knowledge and know-how to a new level because, in Bassi’s view, this is what AI is best at: exploiting large models of industrial practices. The most important benefit is addressing customer needs as the capabilities of AI are translated into the industrial world, offering engineers a pathway to spend less precious time on research and more of it being creative in design – without massive, network-intensive models.

“Right now, there is this rush to create larger and more comprehensive models. However, it may [just] be a temporary limitation of the technology,” Bassi suggested. “In fact, it is indeed possible that you don’t need the huge models to do specific tasks.”
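Bassi’s closing point has a simple arithmetic dimension: weight memory alone makes vertical, domain-specific models far easier to host close to the data. A rough sketch (the parameter counts and precision below are illustrative assumptions, not Dassault or NTT figures):

```python
# Rough sketch: memory needed just to hold model weights at
# inference time. Parameter counts below are illustrative.

def weight_memory_gb(params_billion, bytes_per_param=2):
    """GB of memory for the weights alone (fp16 = 2 bytes/param)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

models = [("domain-specific SLM", 0.6),
          ("mid-size open LLM", 7),
          ("frontier-scale LLM", 500)]

for name, size_b in models:
    print(f"{name:>20}: ~{weight_memory_gb(size_b):6.1f} GB of weights (fp16)")
```

On these assumed sizes, the small model fits on a single edge device, while the largest needs a multi-GPU cluster – and a high-bandwidth network to feed it.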