• DSPM, DLP, data security, cybersecurity, data protection, innovation, compliance, data threats, security strategy

    ---

    In the relentless battlefield of data security, it's infuriating to witness the ongoing debate about whether to prioritize Data Security Posture Management (DSPM) or Data Loss Prevention (DLP). Let's get one thing straight: **you need both**. This ridiculous notion that one can substitute the other is not just naive; it’s downright reckless! If you're in charge of safeguarding ...
    No more excuses: You need both DSPM and DLP to secure your data
  • New Court Order in Stratasys v. Bambu Lab Lawsuit

    There has been a new update to the ongoing Stratasys v. Bambu Lab patent infringement lawsuit. 
Both parties have agreed to consolidate the lead and member cases (2:24-CV-00644-JRG and 2:24-CV-00645-JRG) into a single case under Case No. 2:25-cv-00465-JRG. 
Industrial 3D printing OEM Stratasys filed the request late last month. According to an official court document, Shenzhen-based Bambu Lab did not oppose the motion. Stratasys argued that this non-opposition amounted to the defendants waiving their right to challenge the request under U.S. patent law 35 U.S.C. § 299(a).
    On June 2, the U.S. District Court for the Eastern District of Texas, Marshall Division, ordered Bambu Lab to confirm in writing whether it agreed to the proposed case consolidation. The court took this step out of an “abundance of caution” to ensure both parties consented to the procedure before moving forward.
Bambu Lab submitted its response on June 12, agreeing to the consolidation. The company, along with co-defendants Shenzhen Tuozhu Technology Co., Ltd., Shanghai Lunkuo Technology Co., Ltd., and Tuozhu Technology Limited, waived its rights under 35 U.S.C. § 299(a). The court will now decide whether to merge the cases.
    This followed U.S. District Judge Rodney Gilstrap’s decision last month to deny Bambu Lab’s motion to dismiss the lawsuits. 
    The Chinese desktop 3D printer manufacturer filed the motion in February 2025, arguing the cases were invalid because its US-based subsidiary, Bambu Lab USA, was not named in the original litigation. However, it agreed that the lawsuit could continue in the Austin division of the Western District of Texas, where a parallel case was filed last year. 
    Judge Gilstrap denied the motion, ruling that the cases properly target the named defendants. He concluded that Bambu Lab USA isn’t essential to the dispute, and that any misnaming should be addressed in summary judgment, not dismissal.       
A Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing Industry.
    Another twist in the Stratasys v. Bambu Lab lawsuit 
    Stratasys filed the two lawsuits against Bambu Lab in the Eastern District of Texas, Marshall Division, in August 2024. The company claims that Bambu Lab’s X1C, X1E, P1S, P1P, A1, and A1 mini 3D printers violate ten of its patents. These patents cover common 3D printing features, including purge towers, heated build plates, tool head force detection, and networking capabilities.
    Stratasys has requested a jury trial. It is seeking a ruling that Bambu Lab infringed its patents, along with financial damages and an injunction to stop Bambu from selling the allegedly infringing 3D printers.
    Last October, Stratasys dropped charges against two of the originally named defendants in the dispute. Court documents showed that Beijing Tiertime Technology Co., Ltd. and Beijing Yinhua Laser Rapid Prototyping and Mould Technology Co., Ltd were removed. Both defendants represent the company Tiertime, China’s first 3D printer manufacturer. The District Court accepted the dismissal, with all claims dropped without prejudice.
    It’s unclear why Stratasys named Beijing-based Tiertime as a defendant in the first place, given the lack of an obvious connection to Bambu Lab. 
    Tiertime and Stratasys have a history of legal disputes over patent issues. In 2013, Stratasys sued Afinia, Tiertime’s U.S. distributor and partner, for patent infringement. Afinia responded by suing uCRobotics, the Chinese distributor of MakerBot 3D printers, also alleging patent violations. Stratasys acquired MakerBot in June 2013. The company later merged with Ultimaker in 2022.
    In February 2025, Bambu Lab filed a motion to dismiss the original lawsuits. The company argued that Stratasys’ claims, focused on the sale, importation, and distribution of 3D printers in the United States, do not apply to the Shenzhen-based parent company. Bambu Lab contended that the allegations concern its American subsidiary, Bambu Lab USA, which was not named in the complaint filed in the Eastern District of Texas.
    Bambu Lab filed a motion to dismiss, claiming the case is invalid under Federal Rule of Civil Procedure 19. It argued that any party considered a “primary participant” in the allegations must be included as a defendant.   
    The court denied the motion on May 29, 2025. In the ruling, Judge Gilstrap explained that Stratasys’ allegations focus on the actions of the named defendants, not Bambu Lab USA. As a result, the official court document called Bambu Lab’s argument “unavailing.” Additionally, the Judge stated that, since Bambu Lab USA and Bambu Lab are both owned by Shenzhen Tuozhu, “the interest of these two entities align,” meaning the original cases are valid.  
    In the official court document, Judge Gilstrap emphasized that Stratasys can win or lose the lawsuits based solely on the actions of the current defendants, regardless of Bambu Lab USA’s involvement. He added that any potential risk to Bambu Lab USA’s business is too vague or hypothetical to justify making it a required party.
Finally, the court noted that even if Stratasys named the wrong defendant, this does not justify dismissal under Rule 12(b)(7). Instead, the judge stated it would be more appropriate for the defendants to raise that argument in a motion for summary judgment.
    The Bambu Lab X1C 3D printer. Image via Bambu Lab.
    3D printing patent battles 
    The 3D printing industry has seen its fair share of patent infringement disputes over recent months. In May 2025, 3D printer hotend developer Slice Engineering reached an agreement with Creality over a patent non-infringement lawsuit. 
    The Chinese 3D printer OEM filed the lawsuit in July 2024 in the U.S. District Court for the Northern District of Florida, Gainesville Division. The company claimed that Slice Engineering had falsely accused it of infringing two hotend patents, U.S. Patent Nos. 10,875,244 and 11,660,810. These cover mechanical and thermal features of Slice’s Mosquito 3D printer hotend. Creality requested a jury trial and sought a ruling confirming it had not infringed either patent.
Court documents show that Slice Engineering filed a countersuit in December 2024. The Gainesville-based company maintained that Creality “has infringed and continues to infringe” on both patents. In the filing, the company also denied allegations that it had harassed Creality’s partners, distributors, and customers, and claimed that Creality had refused to negotiate a resolution.
    The Creality v. Slice Engineering lawsuit has since been dropped following a mutual resolution. Court documents show that both parties have permanently dismissed all claims and counterclaims, agreeing to cover their own legal fees and costs. 
    In other news, large-format resin 3D printer manufacturer Intrepid Automation sued 3D Systems over alleged patent infringement. The lawsuit, filed in February 2025, accused 3D Systems of using patented technology in its PSLA 270 industrial resin 3D printer. The filing called the PSLA 270 a “blatant knock off” of Intrepid’s DLP multi-projection “Range” 3D printer.  
    San Diego-based Intrepid Automation called this alleged infringement the “latest chapter of 3DS’s brazen, anticompetitive scheme to drive a smaller competitor with more advanced technology out of the marketplace.” The lawsuit also accused 3D Systems of corporate espionage, claiming one of its employees stole confidential trade secrets that were later used to develop the PSLA 270 printer.
    3D Systems denied the allegations and filed a motion to dismiss the case. The company called the lawsuit “a desperate attempt” by Intrepid to distract from its own alleged theft of 3D Systems’ trade secrets.
    Who won the 2024 3D Printing Industry Awards?
Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news. You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content. Featured image shows a Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing Industry.
  • New Zealand’s Email Security Requirements for Government Organizations: What You Need to Know

The Secure Government Email Common Implementation Framework
    New Zealand’s government is introducing a comprehensive email security framework designed to protect official communications from phishing and domain spoofing. This new framework, which will be mandatory for all government agencies by October 2025, establishes clear technical standards to enhance email security and retire the outdated SEEMail service. 
    Key Takeaways

    All NZ government agencies must comply with new email security requirements by October 2025.
    The new framework strengthens trust and security in government communications by preventing spoofing and phishing.
    The framework mandates TLS 1.2+, SPF, DKIM, DMARC with p=reject, MTA-STS, and DLP controls.
    EasyDMARC simplifies compliance with our guided setup, monitoring, and automated reporting.


    What is the Secure Government Email Common Implementation Framework?
The Secure Government Email Common Implementation Framework is a new government-led initiative in New Zealand designed to standardize email security across all government agencies. Its main goal is to secure external email communication, reduce domain spoofing in phishing attacks, and replace the legacy SEEMail service.
    Why is New Zealand Implementing New Government Email Security Standards?
The framework was developed by New Zealand’s Department of Internal Affairs as part of its role in managing ICT Common Capabilities. It leverages modern email security controls via the Domain Name System (DNS) to enable the retirement of the legacy SEEMail service and provide:

    Encryption for transmission security
    Digital signing for message integrity
Basic non-repudiation
Domain spoofing protection

    These improvements apply to all emails, not just those routed through SEEMail, offering broader protection across agency communications.
    What Email Security Technologies Are Required by the New NZ SGE Framework?
    The SGE Framework outlines the following key technologies that agencies must implement:

    TLS 1.2 or higher with implicit TLS enforced
TLS-RPT
SPF
DKIM
DMARC with reporting
MTA-STS
Data Loss Prevention (DLP) controls

    These technologies work together to ensure encrypted email transmission, validate sender identity, prevent unauthorized use of domains, and reduce the risk of sensitive data leaks.


    When Do NZ Government Agencies Need to Comply with this Framework?
All New Zealand government agencies are expected to fully implement the Secure Government Email Common Implementation Framework by October 2025. Agencies should begin their planning and deployment now to ensure full compliance by the deadline.
    The All of Government Secure Email Common Implementation Framework v1.0
    What are the Mandated Requirements for Domains?
    Below are the exact requirements for all email-enabled domains under the new framework.
| Control | Exact Requirement |
| --- | --- |
| TLS | Minimum TLS 1.2. TLS 1.1, 1.0, SSL, or clear-text not permitted. |
| TLS-RPT | All email-sending domains must have TLS reporting enabled. |
| SPF | Must exist and end with -all. |
| DKIM | All outbound email from every sending service must be DKIM-signed at the final hop. |
| DMARC | Policy of p=reject on all email-enabled domains. adkim=s is recommended when not bulk-sending. |
| MTA-STS | Enabled and set to enforce. |
| Implicit TLS | Must be configured and enforced for every connection. |
| Data Loss Prevention | Enforce in line with the New Zealand Information Security Manual (NZISM) and Protective Security Requirements. |
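Taken together, the mandated controls translate into a handful of DNS records. The sketch below shows what they might look like for a hypothetical agency domain — example.govt.nz, the selector name, and the report mailboxes are illustrative placeholders, not part of the framework:

```
; SPF - must end with a hard fail (-all)
example.govt.nz.                   TXT "v=spf1 include:_spf.example.govt.nz -all"

; DKIM - one record per selector/sending service (key supplied by the provider)
sel1._domainkey.example.govt.nz.   TXT "v=DKIM1; k=rsa; p=<public-key>"

; DMARC - p=reject, with strict DKIM alignment where appropriate
_dmarc.example.govt.nz.            TXT "v=DMARC1; p=reject; adkim=s; rua=mailto:dmarc@example.govt.nz"

; TLS-RPT - where TLS failure reports should be sent
_smtp._tls.example.govt.nz.        TXT "v=TLSRPTv1; rua=mailto:tlsrpt@example.govt.nz"

; MTA-STS - the id must change whenever the policy file changes
_mta-sts.example.govt.nz.          TXT "v=STSv1; id=20251001T000000"
```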
    Compliance Monitoring and Reporting
The All of Government Service Delivery (AoGSD) team will be monitoring compliance with the framework. Monitoring will initially cover SPF, DMARC, and MTA-STS settings and will be expanded to include DKIM. Changes to these settings will be monitored, enabling reporting on email security compliance across all government agencies. Ongoing monitoring will highlight changes to domains, ensure new domains are set up with security in place, and track the implementation of future email security technologies. 
    Should compliance changes occur, such as an agency’s SPF record being changed from -all to ~all, this will be captured so that the AoGSD Security Team can investigate. They will then communicate directly with the agency to determine if an issue exists or if an error has occurred, reviewing each case individually.
    Deployment Checklist for NZ Government Compliance

    Enforce TLS 1.2 minimum, implicit TLS, MTA-STS & TLS-RPT
    SPF with -all
    DKIM on all outbound email
    DMARC p=reject 
    adkim=s where suitable
    For non-email/parked domains: SPF -all, empty DKIM, DMARC reject strict
    Compliance dashboard
    Inbound DMARC evaluation enforced
    DLP aligned with NZISM
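A first pass over this checklist can be automated. The sketch below is a minimal example, not an official tool: the function names are hypothetical, and fetching the actual TXT records is left to your DNS library of choice. It flags the two failures the framework's monitoring watches for first: an SPF record that does not hard-fail, and a DMARC policy weaker than p=reject.

```python
def spf_compliant(record: str) -> bool:
    """SGE requirement: the SPF record must exist and end with '-all'."""
    return record.startswith("v=spf1") and record.rstrip().endswith("-all")


def dmarc_compliant(record: str) -> bool:
    """SGE requirement: the DMARC policy must be p=reject."""
    # Parse 'tag=value' pairs separated by semicolons.
    tags = dict(
        part.strip().split("=", 1)
        for part in record.split(";")
        if "=" in part
    )
    return tags.get("v") == "DMARC1" and tags.get("p") == "reject"


# A softened record (~all instead of -all) is exactly the kind of drift
# the AoGSD monitoring described above is meant to catch.
assert spf_compliant("v=spf1 include:_spf.example.govt.nz -all")
assert not spf_compliant("v=spf1 include:_spf.example.govt.nz ~all")
assert dmarc_compliant("v=DMARC1; p=reject; adkim=s; rua=mailto:d@example.govt.nz")
assert not dmarc_compliant("v=DMARC1; p=none; rua=mailto:d@example.govt.nz")
```

Running such checks on a schedule, and diffing the results, approximates the change-detection behaviour the framework's compliance monitoring describes.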


    How EasyDMARC Can Help Government Agencies Comply
    EasyDMARC provides a comprehensive email security solution that simplifies the deployment and ongoing management of DNS-based email security protocols like SPF, DKIM, and DMARC with reporting. Our platform offers automated checks, real-time monitoring, and a guided setup to help government organizations quickly reach compliance.
    1. TLS-RPT / MTA-STS audit
EasyDMARC lets you enable the Managed MTA-STS and TLS-RPT option with a single click. We provide the required DNS records and continuously monitor them for issues, delivering reports on TLS negotiation problems. This helps agencies ensure secure email transmission and quickly detect delivery or encryption failures.

    Note: In this screenshot, you can see how to deploy MTA-STS and TLS Reporting by adding just three CNAME records provided by EasyDMARC. It’s recommended to start in “testing” mode, evaluate the TLS-RPT reports, and then gradually switch your MTA-STS policy to “enforce”. The process is simple and takes just a few clicks.

    As shown above, EasyDMARC parses incoming TLS reports into a centralized dashboard, giving you clear visibility into delivery and encryption issues across all sending sources.
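    For readers unfamiliar with the underlying records, here is a minimal sketch of what MTA-STS and TLS-RPT publish in DNS. The domain, policy id, and reporting address are placeholders, and the CNAME-based records EasyDMARC provisions will differ:

```python
# Minimal sketch of the DNS records behind MTA-STS (RFC 8461) and TLS-RPT
# (RFC 8460). Domain, policy id, and reporting address are placeholders.

def mta_sts_dns_records(domain: str, policy_id: str, rua: str) -> dict:
    """TXT records a domain publishes to advertise MTA-STS and TLS-RPT."""
    return {
        f"_mta-sts.{domain}": f"v=STSv1; id={policy_id}",
        f"_smtp._tls.{domain}": f"v=TLSRPTv1; rua=mailto:{rua}",
    }

def mta_sts_policy(mode: str, mx_hosts: list) -> str:
    """Policy file served at https://mta-sts.<domain>/.well-known/mta-sts.txt."""
    lines = ["version: STSv1", f"mode: {mode}"]   # "testing" first, then "enforce"
    lines += [f"mx: {mx}" for mx in mx_hosts]
    lines.append("max_age: 86400")
    return "\n".join(lines)

records = mta_sts_dns_records("example.govt.nz", "20251001",
                              "tls-reports@example.govt.nz")
print(mta_sts_policy("testing", ["mail.example.govt.nz"]))
```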
    2. SPF with “-all”
    In the EasyDMARC platform, you can run the SPF Record Generator to create a compliant record. Publish your v=spf1 record with “-all” to enforce a hard fail for unauthorized senders and prevent spoofed emails from passing SPF checks. This strengthens your domain’s protection against impersonation.

    Note: It is highly recommended to start adjusting your SPF record only after you begin receiving DMARC reports and identifying your legitimate email sources. As we’ll explain in more detail below, both SPF and DKIM should be adjusted after you gain visibility through reports.
    Making changes without proper visibility can lead to false positives, misconfigurations, and potential loss of legitimate emails. That’s why the first step should always be setting DMARC to p=none, receiving reports, analyzing them, and then gradually fixing any SPF or DKIM issues.
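    One practical pitfall when assembling an SPF record is RFC 7208’s limit of 10 DNS-querying mechanisms; exceeding it yields a permerror. Below is a simplified sketch of that pre-publish check; it does not recurse into included records, and the record shown is illustrative:

```python
# Simplified pre-publish SPF sanity check: the record must end in "-all"
# and stay within RFC 7208's limit of 10 DNS-querying mechanisms. This
# sketch does not recurse into includes; the record is illustrative.

def count_dns_lookups(spf: str) -> int:
    count = 0
    for term in spf.split()[1:]:          # skip the "v=spf1" version tag
        t = term.lstrip("+-~?")           # strip the qualifier, if any
        if t.startswith(("include:", "exists:", "redirect=")):
            count += 1
        elif t in ("a", "mx") or t.startswith(("a:", "mx:", "ptr")):
            count += 1
    return count

spf = "v=spf1 include:_spf.google.com include:spf.protection.outlook.com mx -all"
assert spf.rstrip().endswith("-all")      # SGE requirement: hard fail
assert count_dns_lookups(spf) <= 10       # within the RFC 7208 lookup limit
```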
    3. DKIM on all outbound email
    DKIM must be configured for all email sources sending emails on behalf of your domain. This is critical, as DKIM plays a bigger role than SPF when it comes to building domain reputation, surviving auto-forwarding, mailing lists, and other edge cases.
    As mentioned above, DMARC reports provide visibility into your email sources, allowing you to implement DKIM accordingly. If you’re using third-party services like Google Workspace, Microsoft 365, or Mimecast, you’ll need to retrieve the public DKIM key from your provider’s admin interface.
    EasyDMARC maintains a backend directory of over 1,400 email sources. We also give you detailed guidance on how to configure SPF and DKIM correctly for major ESPs. 
    Note: At the end of this article, you’ll find configuration links for well-known ESPs like Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid – helping you avoid common misconfigurations and get aligned with SGE requirements.
    If you’re using a dedicated MTA (e.g., Postfix), DKIM must be implemented manually. EasyDMARC’s DKIM Record Generator lets you generate both public and private keys for your server. The private key is stored on your MTA, while the public key must be published in your DNS.
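    Whichever route you take, the result is the same kind of DNS record: the public key published as TXT under `<selector>._domainkey.<domain>`. A minimal sketch of that shape follows; the selector, domain, and key bytes are placeholders, not a real key.

```python
# Sketch of where a DKIM public key lives in DNS and the shape of the TXT
# value. Selector, domain, and key bytes are placeholders, not a real key.
import base64

def dkim_txt_record(selector: str, domain: str, public_key_der: bytes):
    name = f"{selector}._domainkey.{domain}"
    value = "v=DKIM1; k=rsa; p=" + base64.b64encode(public_key_der).decode("ascii")
    return name, value

name, value = dkim_txt_record("s1", "example.govt.nz", b"placeholder-key-bytes")
print(name)   # s1._domainkey.example.govt.nz
print(value)
```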

    4. DMARC p=reject rollout
    As mentioned in previous points, DMARC reporting is the first and most important step on your DMARC enforcement journey. Always start with a p=none policy and configure RUA reports to be sent to EasyDMARC. Use the report insights to identify and fix SPF and DKIM alignment issues, then gradually move to p=quarantine and finally p=reject once all legitimate email sources have been authenticated. 
    This phased approach ensures full protection against domain spoofing without risking legitimate email delivery.
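    The phased rollout above maps to three successive `_dmarc` TXT records. A sketch, with a placeholder reporting address (EasyDMARC supplies its own RUA endpoint):

```python
# Sketch of the phased DMARC rollout described above. The reporting
# address is a placeholder; EasyDMARC supplies its own RUA endpoint.

def dmarc_record(policy: str, rua: str, pct: int = 100) -> str:
    record = f"v=DMARC1; p={policy}; rua=mailto:{rua}"
    if pct < 100:
        record += f"; pct={pct}"          # apply the policy to a sample first
    return record

rollout = [
    dmarc_record("none", "reports@example.govt.nz"),               # 1. monitor
    dmarc_record("quarantine", "reports@example.govt.nz", pct=25), # 2. partial
    dmarc_record("reject", "reports@example.govt.nz"),             # 3. enforce
]
for step in rollout:
    print(step)
```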

    5. adkim Strict Alignment Check
    This strict alignment check is not always applicable, especially if you’re using third-party bulk ESPs, such as SendGrid, that require you to set DKIM at the subdomain level. You can set adkim=s in your DMARC TXT record, or simply enable strict mode in EasyDMARC’s Managed DMARC settings. This ensures that only emails with a DKIM signature that exactly matches your domain pass alignment, adding an extra layer of protection against domain spoofing. Only do this if you are NOT a bulk sender.
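    The difference between relaxed and strict DKIM alignment can be sketched as follows. This is a simplification: real DMARC implementations derive the organizational domain via the Public Suffix List, and the domains here are placeholders.

```python
# Simplified sketch of DKIM alignment under DMARC. Real implementations
# derive the organizational domain via the Public Suffix List; here a
# plain suffix check stands in for it. Domains are placeholders.

def dkim_aligned(from_domain: str, dkim_d: str, adkim: str = "r") -> bool:
    if adkim == "s":                       # strict: exact match required
        return dkim_d == from_domain
    # relaxed: the DKIM d= domain may be a subdomain of the From: domain
    return dkim_d == from_domain or dkim_d.endswith("." + from_domain)

# A bulk ESP signing from a subdomain passes relaxed but fails strict:
assert dkim_aligned("example.govt.nz", "em123.example.govt.nz", adkim="r")
assert not dkim_aligned("example.govt.nz", "em123.example.govt.nz", adkim="s")
```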

    6. Securing Non-Email Enabled Domains
    The purpose of deploying email security to non-email-enabled domains, or parked domains, is to prevent messages from being spoofed from those domains. This requirement remains even if the root-level domain has sp=reject set within its DMARC record.
    Under this new framework, you must bulk import and mark parked domains as “Parked.” Crucially, this requires adjusting SPF settings to an empty record, setting DMARC to p=reject, and ensuring an empty DKIM record is in place:
    • SPF record: “v=spf1 -all”.
    • Wildcard DKIM record with empty public key.
    • DMARC record: “v=DMARC1;p=reject;adkim=s;aspf=s;rua=mailto:…”.
    EasyDMARC allows you to add and label parked domains for free. This is important because it helps you monitor any activity from these domains and ensure they remain protected with a strict DMARC policy of p=reject.
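    The lockdown records above can be expressed as a small helper for bulk-generating parked-domain DNS entries. This is a sketch: the domain is illustrative, and the rua= tag is omitted because its address is elided in the source.

```python
# Sketch of the parked-domain lockdown records listed above. The domain is
# illustrative; the rua= tag is omitted because its address is elided in
# the source.

def parked_domain_records(domain: str) -> dict:
    return {
        domain: "v=spf1 -all",                    # SPF: no senders authorized
        f"*._domainkey.{domain}": "v=DKIM1; p=",  # wildcard DKIM, empty key
        f"_dmarc.{domain}": "v=DMARC1;p=reject;adkim=s;aspf=s",
    }

for name, value in parked_domain_records("parked.example.govt.nz").items():
    print(name, "TXT", value)
```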
    7. Compliance Dashboard
    Use EasyDMARC’s Domain Scanner to assess the security posture of each domain with a clear compliance score and risk level. The dashboard highlights configuration gaps and guides remediation steps, helping government agencies stay on track toward full compliance with the SGE Framework.

    8. Inbound DMARC Evaluation Enforced
    You don’t need to apply any changes if you’re using Google Workspace, Microsoft 365, or other major mailbox providers. Most of them already enforce DMARC evaluation on incoming emails.
    However, some legacy Microsoft 365 setups may still quarantine emails that fail DMARC checks, even when the sending domain has a p=reject policy, instead of rejecting them. This behavior can be adjusted directly from your Microsoft Defender portal. Read more about this in our step-by-step guide on how to set up SPF, DKIM, and DMARC from Microsoft Defender.
    If you’re using a third-party mail provider that doesn’t enforce having a DMARC policy for incoming emails, which is rare, you’ll need to contact their support to request a configuration change.
    9. Data Loss Prevention Aligned with NZISM
    The New Zealand Information Security Manual (NZISM) is the New Zealand Government’s manual on information assurance and information systems security. It includes guidance on data loss prevention (DLP), which must be followed to align with the SGE Framework.
    Need Help Setting up SPF and DKIM for your Email Provider?
    Setting up SPF and DKIM for different ESPs often requires specific configurations. Some providers require you to publish SPF and DKIM on a subdomain, while others only require DKIM, or have different formatting rules. We’ve simplified all these steps to help you avoid misconfigurations that could delay your DMARC enforcement, or worse, block legitimate emails from reaching your recipients.
    Below you’ll find comprehensive setup guides for Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid. You can also explore our full blog section that covers setup instructions for many other well-known ESPs.
    Remember, all this information is reflected in your DMARC aggregate reports. These reports give you live visibility into your outgoing email ecosystem, helping you analyze and fix any issues specific to a given provider.
    Here are our step-by-step guides for the most common platforms:

    Google Workspace

    Microsoft 365

    These guides will help ensure your DNS records are configured correctly as part of the Secure Government Email (SGE) Framework rollout.
    Meet New Government Email Security Standards With EasyDMARC
    New Zealand’s SGE Framework sets a clear path for government agencies to enhance their email security by October 2025. With EasyDMARC, you can meet these technical requirements efficiently and with confidence. From protocol setup to continuous monitoring and compliance tracking, EasyDMARC streamlines the entire process, ensuring strong protection against spoofing, phishing, and data loss while simplifying your transition from SEEMail.
    #new #zealands #email #security #requirements
    New Zealand’s Email Security Requirements for Government Organizations: What You Need to Know
    The Secure Government EmailCommon Implementation Framework New Zealand’s government is introducing a comprehensive email security framework designed to protect official communications from phishing and domain spoofing. This new framework, which will be mandatory for all government agencies by October 2025, establishes clear technical standards to enhance email security and retire the outdated SEEMail service.  Key Takeaways All NZ government agencies must comply with new email security requirements by October 2025. The new framework strengthens trust and security in government communications by preventing spoofing and phishing. The framework mandates TLS 1.2+, SPF, DKIM, DMARC with p=reject, MTA-STS, and DLP controls. EasyDMARC simplifies compliance with our guided setup, monitoring, and automated reporting. Start a Free Trial What is the Secure Government Email Common Implementation Framework? The Secure Government EmailCommon Implementation Framework is a new government-led initiative in New Zealand designed to standardize email security across all government agencies. Its main goal is to secure external email communication, reduce domain spoofing in phishing attacks, and replace the legacy SEEMail service. Why is New Zealand Implementing New Government Email Security Standards? The framework was developed by New Zealand’s Department of Internal Affairsas part of its role in managing ICT Common Capabilities. It leverages modern email security controls via the Domain Name Systemto enable the retirement of the legacy SEEMail service and provide: Encryption for transmission security Digital signing for message integrity Basic non-repudiationDomain spoofing protection These improvements apply to all emails, not just those routed through SEEMail, offering broader protection across agency communications. What Email Security Technologies Are Required by the New NZ SGE Framework? 
The SGE Framework outlines the following key technologies that agencies must implement: TLS 1.2 or higher with implicit TLS enforced TLS-RPTSPFDKIMDMARCwith reporting MTA-STSData Loss Prevention controls These technologies work together to ensure encrypted email transmission, validate sender identity, prevent unauthorized use of domains, and reduce the risk of sensitive data leaks. Get in touch When Do NZ Government Agencies Need to Comply with this Framework? All New Zealand government agencies are expected to fully implement the Secure Government EmailCommon Implementation Framework by October 2025. Agencies should begin their planning and deployment now to ensure full compliance by the deadline. The All of Government Secure Email Common Implementation Framework v1.0 What are the Mandated Requirements for Domains? Below are the exact requirements for all email-enabled domains under the new framework. ControlExact RequirementTLSMinimum TLS 1.2. TLS 1.1, 1.0, SSL, or clear-text not permitted.TLS-RPTAll email-sending domains must have TLS reporting enabled.SPFMust exist and end with -all.DKIMAll outbound email from every sending service must be DKIM-signed at the final hop.DMARCPolicy of p=reject on all email-enabled domains. adkim=s is recommended when not bulk-sending.MTA-STSEnabled and set to enforce.Implicit TLSMust be configured and enforced for every connection.Data Loss PreventionEnforce in line with the New Zealand Information Security Manualand Protective Security Requirements. Compliance Monitoring and Reporting The All of Government Service Deliveryteam will be monitoring compliance with the framework. Monitoring will initially cover SPF, DMARC, and MTA-STS settings and will be expanded to include DKIM. Changes to these settings will be monitored, enabling reporting on email security compliance across all government agencies. 
Ongoing monitoring will highlight changes to domains, ensure new domains are set up with security in place, and monitor the implementation of future email security technologies.  Should compliance changes occur, such as an agency’s SPF record being changed from -all to ~all, this will be captured so that the AoGSD Security Team can investigate. They will then communicate directly with the agency to determine if an issue exists or if an error has occurred, reviewing each case individually. Deployment Checklist for NZ Government Compliance Enforce TLS 1.2 minimum, implicit TLS, MTA-STS & TLS-RPT SPF with -all DKIM on all outbound email DMARC p=reject  adkim=s where suitable For non-email/parked domains: SPF -all, empty DKIM, DMARC reject strict Compliance dashboard Inbound DMARC evaluation enforced DLP aligned with NZISM Start a Free Trial How EasyDMARC Can Help Government Agencies Comply EasyDMARC provides a comprehensive email security solution that simplifies the deployment and ongoing management of DNS-based email security protocols like SPF, DKIM, and DMARC with reporting. Our platform offers automated checks, real-time monitoring, and a guided setup to help government organizations quickly reach compliance. 1. TLS-RPT / MTA-STS audit EasyDMARC enables you to enable the Managed MTA-STS and TLS-RPT option with a single click. We provide the required DNS records and continuously monitor them for issues, delivering reports on TLS negotiation problems. This helps agencies ensure secure email transmission and quickly detect delivery or encryption failures. Note: In this screenshot, you can see how to deploy MTA-STS and TLS Reporting by adding just three CNAME records provided by EasyDMARC. It’s recommended to start in “testing” mode, evaluate the TLS-RPT reports, and then gradually switch your MTA-STS policy to “enforce”. The process is simple and takes just a few clicks. 
As shown above, EasyDMARC parses incoming TLS reports into a centralized dashboard, giving you clear visibility into delivery and encryption issues across all sending sources. 2. SPF with “-all”In the EasyDARC platform, you can run the SPF Record Generator to create a compliant record. Publish your v=spf1 record with “-all” to enforce a hard fail for unauthorized senders and prevent spoofed emails from passing SPF checks. This strengthens your domain’s protection against impersonation. Note: It is highly recommended to start adjusting your SPF record only after you begin receiving DMARC reports and identifying your legitimate email sources. As we’ll explain in more detail below, both SPF and DKIM should be adjusted after you gain visibility through reports. Making changes without proper visibility can lead to false positives, misconfigurations, and potential loss of legitimate emails. That’s why the first step should always be setting DMARC to p=none, receiving reports, analyzing them, and then gradually fixing any SPF or DKIM issues. 3. DKIM on all outbound email DKIM must be configured for all email sources sending emails on behalf of your domain. This is critical, as DKIM plays a bigger role than SPF when it comes to building domain reputation, surviving auto-forwarding, mailing lists, and other edge cases. As mentioned above, DMARC reports provide visibility into your email sources, allowing you to implement DKIM accordingly. If you’re using third-party services like Google Workspace, Microsoft 365, or Mimecast, you’ll need to retrieve the public DKIM key from your provider’s admin interface. EasyDMARC maintains a backend directory of over 1,400 email sources. We also give you detailed guidance on how to configure SPF and DKIM correctly for major ESPs.  
Note: At the end of this article, you’ll find configuration links for well-known ESPs like Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid – helping you avoid common misconfigurations and get aligned with SGE requirements. If you’re using a dedicated MTA, DKIM must be implemented manually. EasyDMARC’s DKIM Record Generator lets you generate both public and private keys for your server. The private key is stored on your MTA, while the public key must be published in your DNS. 4. DMARC p=reject rollout As mentioned in previous points, DMARC reporting is the first and most important step on your DMARC enforcement journey. Always start with a p=none policy and configure RUA reports to be sent to EasyDMARC. Use the report insights to identify and fix SPF and DKIM alignment issues, then gradually move to p=quarantine and finally p=reject once all legitimate email sources have been authenticated.  This phased approach ensures full protection against domain spoofing without risking legitimate email delivery. 5. adkim Strict Alignment Check This strict alignment check is not always applicable, especially if you’re using third-party bulk ESPs, such as Sendgrid, that require you to set DKIM on a subdomain level. You can set adkim=s in your DMARC TXT record, or simply enable strict mode in EasyDMARC’s Managed DMARC settings. This ensures that only emails with a DKIM signature that exactly match your domain pass alignment, adding an extra layer of protection against domain spoofing. But only do this if you are NOT a bulk sender. 6. Securing Non-Email Enabled Domains The purpose of deploying email security to non-email-enabled domains, or parked domains, is to prevent messages being spoofed from that domain. This requirement remains even if the root-level domain has SP=reject set within its DMARC record. 
Under this new framework, you must bulk import and mark parked domains as “Parked.” Crucially, this requires adjusting SPF settings to an empty record, setting DMARC to p=reject, and ensuring an empty DKIM record is in place: • SPF record: “v=spf1 -all”. • Wildcard DKIM record with empty public key.• DMARC record: “v=DMARC1;p=reject;adkim=s;aspf=s;rua=mailto:…”. EasyDMARC allows you to add and label parked domains for free. This is important because it helps you monitor any activity from these domains and ensure they remain protected with a strict DMARC policy of p=reject. 7. Compliance Dashboard Use EasyDMARC’s Domain Scanner to assess the security posture of each domain with a clear compliance score and risk level. The dashboard highlights configuration gaps and guides remediation steps, helping government agencies stay on track toward full compliance with the SGE Framework. 8. Inbound DMARC Evaluation Enforced You don’t need to apply any changes if you’re using Google Workspace, Microsoft 365, or other major mailbox providers. Most of them already enforce DMARC evaluation on incoming emails. However, some legacy Microsoft 365 setups may still quarantine emails that fail DMARC checks, even when the sending domain has a p=reject policy, instead of rejecting them. This behavior can be adjusted directly from your Microsoft Defender portal. about this in our step-by-step guide on how to set up SPF, DKIM, and DMARC from Microsoft Defender. If you’re using a third-party mail provider that doesn’t enforce having a DMARC policy for incoming emails, which is rare, you’ll need to contact their support to request a configuration change. 9. Data Loss Prevention Aligned with NZISM The New Zealand Information Security Manualis the New Zealand Government’s manual on information assurance and information systems security. It includes guidance on data loss prevention, which must be followed to be aligned with the SEG. Need Help Setting up SPF and DKIM for your Email Provider? 
Setting up SPF and DKIM for different ESPs often requires specific configurations. Some providers require you to publish SPF and DKIM on a subdomain, while others only require DKIM, or have different formatting rules. We’ve simplified all these steps to help you avoid misconfigurations that could delay your DMARC enforcement, or worse, block legitimate emails from reaching your recipients. Below you’ll find comprehensive setup guides for Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid. You can also explore our full blog section that covers setup instructions for many other well-known ESPs. Remember, all this information is reflected in your DMARC aggregate reports. These reports give you live visibility into your outgoing email ecosystem, helping you analyze and fix any issues specific to a given provider. Here are our step-by-step guides for the most common platforms: Google Workspace Microsoft 365 These guides will help ensure your DNS records are configured correctly as part of the Secure Government EmailFramework rollout. Meet New Government Email Security Standards With EasyDMARC New Zealand’s SEG Framework sets a clear path for government agencies to enhance their email security by October 2025. With EasyDMARC, you can meet these technical requirements efficiently and with confidence. From protocol setup to continuous monitoring and compliance tracking, EasyDMARC streamlines the entire process, ensuring strong protection against spoofing, phishing, and data loss while simplifying your transition from SEEMail. #new #zealands #email #security #requirements
    EASYDMARC.COM
    New Zealand’s Email Security Requirements for Government Organizations: What You Need to Know
    The Secure Government Email (SGE) Common Implementation Framework New Zealand’s government is introducing a comprehensive email security framework designed to protect official communications from phishing and domain spoofing. This new framework, which will be mandatory for all government agencies by October 2025, establishes clear technical standards to enhance email security and retire the outdated SEEMail service.  Key Takeaways All NZ government agencies must comply with new email security requirements by October 2025. The new framework strengthens trust and security in government communications by preventing spoofing and phishing. The framework mandates TLS 1.2+, SPF, DKIM, DMARC with p=reject, MTA-STS, and DLP controls. EasyDMARC simplifies compliance with our guided setup, monitoring, and automated reporting. Start a Free Trial What is the Secure Government Email Common Implementation Framework? The Secure Government Email (SGE) Common Implementation Framework is a new government-led initiative in New Zealand designed to standardize email security across all government agencies. Its main goal is to secure external email communication, reduce domain spoofing in phishing attacks, and replace the legacy SEEMail service. Why is New Zealand Implementing New Government Email Security Standards? The framework was developed by New Zealand’s Department of Internal Affairs (DIA) as part of its role in managing ICT Common Capabilities. It leverages modern email security controls via the Domain Name System (DNS) to enable the retirement of the legacy SEEMail service and provide: Encryption for transmission security Digital signing for message integrity Basic non-repudiation (by allowing only authorized senders) Domain spoofing protection These improvements apply to all emails, not just those routed through SEEMail, offering broader protection across agency communications. What Email Security Technologies Are Required by the New NZ SGE Framework? 
The SGE Framework outlines the following key technologies that agencies must implement: TLS 1.2 or higher with implicit TLS enforced TLS-RPT (TLS Reporting) SPF (Sender Policy Framework) DKIM (DomainKeys Identified Mail) DMARC (Domain-based Message Authentication, Reporting, and Conformance) with reporting MTA-STS (Mail Transfer Agent Strict Transport Security) Data Loss Prevention controls These technologies work together to ensure encrypted email transmission, validate sender identity, prevent unauthorized use of domains, and reduce the risk of sensitive data leaks. Get in touch When Do NZ Government Agencies Need to Comply with this Framework? All New Zealand government agencies are expected to fully implement the Secure Government Email (SGE) Common Implementation Framework by October 2025. Agencies should begin their planning and deployment now to ensure full compliance by the deadline. The All of Government Secure Email Common Implementation Framework v1.0 What are the Mandated Requirements for Domains? Below are the exact requirements for all email-enabled domains under the new framework. ControlExact RequirementTLSMinimum TLS 1.2. TLS 1.1, 1.0, SSL, or clear-text not permitted.TLS-RPTAll email-sending domains must have TLS reporting enabled.SPFMust exist and end with -all.DKIMAll outbound email from every sending service must be DKIM-signed at the final hop.DMARCPolicy of p=reject on all email-enabled domains. adkim=s is recommended when not bulk-sending.MTA-STSEnabled and set to enforce.Implicit TLSMust be configured and enforced for every connection.Data Loss PreventionEnforce in line with the New Zealand Information Security Manual (NZISM) and Protective Security Requirements (PSR). Compliance Monitoring and Reporting The All of Government Service Delivery (AoGSD) team will be monitoring compliance with the framework. Monitoring will initially cover SPF, DMARC, and MTA-STS settings and will be expanded to include DKIM. 
Changes to these settings will be monitored, enabling reporting on email security compliance across all government agencies. Ongoing monitoring will highlight changes to domains, ensure new domains are set up with security in place, and monitor the implementation of future email security technologies.  Should compliance changes occur, such as an agency’s SPF record being changed from -all to ~all, this will be captured so that the AoGSD Security Team can investigate. They will then communicate directly with the agency to determine if an issue exists or if an error has occurred, reviewing each case individually. Deployment Checklist for NZ Government Compliance Enforce TLS 1.2 minimum, implicit TLS, MTA-STS & TLS-RPT SPF with -all DKIM on all outbound email DMARC p=reject  adkim=s where suitable For non-email/parked domains: SPF -all, empty DKIM, DMARC reject strict Compliance dashboard Inbound DMARC evaluation enforced DLP aligned with NZISM Start a Free Trial How EasyDMARC Can Help Government Agencies Comply EasyDMARC provides a comprehensive email security solution that simplifies the deployment and ongoing management of DNS-based email security protocols like SPF, DKIM, and DMARC with reporting. Our platform offers automated checks, real-time monitoring, and a guided setup to help government organizations quickly reach compliance. 1. TLS-RPT / MTA-STS audit EasyDMARC enables you to enable the Managed MTA-STS and TLS-RPT option with a single click. We provide the required DNS records and continuously monitor them for issues, delivering reports on TLS negotiation problems. This helps agencies ensure secure email transmission and quickly detect delivery or encryption failures. Note: In this screenshot, you can see how to deploy MTA-STS and TLS Reporting by adding just three CNAME records provided by EasyDMARC. It’s recommended to start in “testing” mode, evaluate the TLS-RPT reports, and then gradually switch your MTA-STS policy to “enforce”. 
The process is simple and takes just a few clicks. As shown above, EasyDMARC parses incoming TLS reports into a centralized dashboard, giving you clear visibility into delivery and encryption issues across all sending sources. 2. SPF with “-all”In the EasyDARC platform, you can run the SPF Record Generator to create a compliant record. Publish your v=spf1 record with “-all” to enforce a hard fail for unauthorized senders and prevent spoofed emails from passing SPF checks. This strengthens your domain’s protection against impersonation. Note: It is highly recommended to start adjusting your SPF record only after you begin receiving DMARC reports and identifying your legitimate email sources. As we’ll explain in more detail below, both SPF and DKIM should be adjusted after you gain visibility through reports. Making changes without proper visibility can lead to false positives, misconfigurations, and potential loss of legitimate emails. That’s why the first step should always be setting DMARC to p=none, receiving reports, analyzing them, and then gradually fixing any SPF or DKIM issues. 3. DKIM on all outbound email DKIM must be configured for all email sources sending emails on behalf of your domain. This is critical, as DKIM plays a bigger role than SPF when it comes to building domain reputation, surviving auto-forwarding, mailing lists, and other edge cases. As mentioned above, DMARC reports provide visibility into your email sources, allowing you to implement DKIM accordingly (see first screenshot). If you’re using third-party services like Google Workspace, Microsoft 365, or Mimecast, you’ll need to retrieve the public DKIM key from your provider’s admin interface (see second screenshot). EasyDMARC maintains a backend directory of over 1,400 email sources. We also give you detailed guidance on how to configure SPF and DKIM correctly for major ESPs.  
Note: At the end of this article, you’ll find configuration links for well-known ESPs like Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid – helping you avoid common misconfigurations and get aligned with SGE requirements. If you’re using a dedicated MTA (e.g., Postfix), DKIM must be implemented manually. EasyDMARC’s DKIM Record Generator lets you generate both public and private keys for your server. The private key is stored on your MTA, while the public key must be published in your DNS (see third and fourth screenshots). 4. DMARC p=reject rollout As mentioned in previous points, DMARC reporting is the first and most important step on your DMARC enforcement journey. Always start with a p=none policy and configure RUA reports to be sent to EasyDMARC. Use the report insights to identify and fix SPF and DKIM alignment issues, then gradually move to p=quarantine and finally p=reject once all legitimate email sources have been authenticated.  This phased approach ensures full protection against domain spoofing without risking legitimate email delivery. 5. adkim Strict Alignment Check This strict alignment check is not always applicable, especially if you’re using third-party bulk ESPs, such as Sendgrid, that require you to set DKIM on a subdomain level. You can set adkim=s in your DMARC TXT record, or simply enable strict mode in EasyDMARC’s Managed DMARC settings. This ensures that only emails with a DKIM signature that exactly match your domain pass alignment, adding an extra layer of protection against domain spoofing. But only do this if you are NOT a bulk sender. 6. Securing Non-Email Enabled Domains The purpose of deploying email security to non-email-enabled domains, or parked domains, is to prevent messages being spoofed from that domain. This requirement remains even if the root-level domain has SP=reject set within its DMARC record. 
Under this new framework, you must bulk import and mark parked domains as “Parked.” Crucially, this requires adjusting SPF settings to an empty record, setting DMARC to p=reject, and ensuring an empty DKIM record is in place: • SPF record: “v=spf1 -all”. • Wildcard DKIM record with empty public key.• DMARC record: “v=DMARC1;p=reject;adkim=s;aspf=s;rua=mailto:…”. EasyDMARC allows you to add and label parked domains for free. This is important because it helps you monitor any activity from these domains and ensure they remain protected with a strict DMARC policy of p=reject. 7. Compliance Dashboard Use EasyDMARC’s Domain Scanner to assess the security posture of each domain with a clear compliance score and risk level. The dashboard highlights configuration gaps and guides remediation steps, helping government agencies stay on track toward full compliance with the SGE Framework. 8. Inbound DMARC Evaluation Enforced You don’t need to apply any changes if you’re using Google Workspace, Microsoft 365, or other major mailbox providers. Most of them already enforce DMARC evaluation on incoming emails. However, some legacy Microsoft 365 setups may still quarantine emails that fail DMARC checks, even when the sending domain has a p=reject policy, instead of rejecting them. This behavior can be adjusted directly from your Microsoft Defender portal. Read more about this in our step-by-step guide on how to set up SPF, DKIM, and DMARC from Microsoft Defender. If you’re using a third-party mail provider that doesn’t enforce having a DMARC policy for incoming emails, which is rare, you’ll need to contact their support to request a configuration change. 9. Data Loss Prevention Aligned with NZISM The New Zealand Information Security Manual (NZISM) is the New Zealand Government’s manual on information assurance and information systems security. It includes guidance on data loss prevention (DLP), which must be followed to be aligned with the SEG. 
Need Help Setting up SPF and DKIM for Your Email Provider?
Setting up SPF and DKIM for different ESPs often requires specific configurations. Some providers require you to publish SPF and DKIM on a subdomain, while others only require DKIM, or have different formatting rules. We’ve simplified all these steps to help you avoid misconfigurations that could delay your DMARC enforcement, or worse, block legitimate emails from reaching your recipients.
Below you’ll find comprehensive setup guides for Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid. You can also explore our full blog section that covers setup instructions for many other well-known ESPs.
Remember, all this information is reflected in your DMARC aggregate reports. These reports give you live visibility into your outgoing email ecosystem, helping you analyze and fix any issues specific to a given provider.
Here are our step-by-step guides for the most common platforms:
Google Workspace
Microsoft 365
These guides will help ensure your DNS records are configured correctly as part of the Secure Government Email (SGE) Framework rollout.
Meet New Government Email Security Standards With EasyDMARC
New Zealand’s SGE Framework sets a clear path for government agencies to enhance their email security by October 2025. With EasyDMARC, you can meet these technical requirements efficiently and with confidence. From protocol setup to continuous monitoring and compliance tracking, EasyDMARC streamlines the entire process, ensuring strong protection against spoofing, phishing, and data loss while simplifying your transition from SEEMail.
Understanding the Relationship Between Security Gateways and DMARC

Email authentication protocols like SPF, DKIM, and DMARC play a critical role in protecting domains from spoofing and phishing. However, when secure email gateways (SEGs) are introduced into the email path, the interaction with these protocols becomes more complex.
Security gateways are a core part of many organizations’ email infrastructure. They act as intermediaries between the public internet and internal mail systems, inspecting, filtering, and routing messages.
    This blog examines how security gateways handle SPF, DKIM, and DMARC, with real-world examples from popular gateways such as Proofpoint, Mimecast, and Avanan. We’ll also cover best practices for maintaining authentication integrity and avoiding misconfigurations that can compromise email authentication or lead to false DMARC failures.
    Security gateways often sit at the boundary between your organization and the internet, managing both inbound and outbound email traffic. Their role affects how email authentication protocols behave.
    An inbound SEG examines emails coming into your organization. It checks SPF, DKIM, and DMARC to determine if the message is authentic and safe before passing it to your internal mail servers.
An outbound SEG handles emails sent from your domain. It may modify headers, rewrite envelope addresses, or even apply DKIM signing. All of these can impact SPF, DKIM, or DMARC validation on the recipient’s side.

    Understanding how SEGs influence these flows is crucial to maintaining proper authentication and avoiding unexpected DMARC failures.
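Most of the "unexpected DMARC failures" mentioned above come down to alignment: the domain in the visible From header must match the domain that passed SPF or DKIM. The sketch below is a simplified illustration (it naively treats the last two labels as the organizational domain; real validators consult the Public Suffix List):

```python
# Simplified illustration of DMARC identifier alignment.
def org_domain(domain: str) -> str:
    """Naively collapse a domain to its registrable part, e.g. mail.example.com -> example.com."""
    return ".".join(domain.lower().split(".")[-2:])

def dmarc_aligned(from_domain: str, auth_domain: str, strict: bool = False) -> bool:
    """Strict alignment requires an exact match; relaxed only the organizational domain."""
    if strict:
        return from_domain.lower() == auth_domain.lower()
    return org_domain(from_domain) == org_domain(auth_domain)

# Relaxed mode tolerates the subdomain a gateway may sign or send from...
print(dmarc_aligned("example.com", "mail.example.com"))               # True
# ...strict mode (adkim=s / aspf=s) does not.
print(dmarc_aligned("example.com", "mail.example.com", strict=True))  # False
```

This is why a gateway that re-signs or relays mail from its own subdomain can still pass relaxed alignment, yet break strict alignment.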
    Inbound Handling of SPF, DKIM, and DMARC by Common Security Gateways
    When an email comes into your organization, your security gateway is the first to inspect it. It checks whether the message is real, trustworthy, and properly authenticated. Let’s look at how different SEGs handle these checks.
Avanan
SPF: Avanan verifies whether the sending server is authorized to send emails for the domain by checking the SPF record.
    DKIM: It verifies if the message was signed by the sending domain and if that signature is valid.
DMARC: It uses the results of the SPF and DKIM checks to evaluate DMARC. However, final enforcement usually depends on how DMARC is handled by Microsoft 365 or Gmail, as Avanan integrates directly with them.

Avanan offers two methods of integration:
1. API integration: Avanan connects via APIs, no change in MX, usually Monitor or Detect modes.
2. Inline integration: Avanan is placed inline in the mail flow, actively blocking or remediating threats.
    Proofpoint Email Protection

    SPF: Proofpoint checks SPF to confirm the sender’s IP is authorized to send on behalf of the domain. You can set custom rules.
    DKIM: It verifies DKIM signatures and shows clear pass/fail results in logs.
    DMARC: It fully evaluates DMARC by combining SPF and DKIM results with alignment checks. Administrators can configure how to handle messages that fail DMARC, such as rejecting, quarantining, or delivering them. Additionally, Proofpoint allows whitelisting specific senders you trust, even if their emails fail authentication checks.

    Integration Methods

    Inline Mode: In this traditional deployment, Proofpoint is positioned directly in the email flow by modifying MX records. Emails are routed through Proofpoint’s infrastructure, allowing it to inspect and filter messages before they reach the recipient’s inbox. This mode provides pre-delivery protection and is commonly used in on-premises or hybrid environments.
API-Based Mode: Proofpoint offers API-based integration, particularly with cloud email platforms like Microsoft 365 and Google Workspace. In this mode, Proofpoint connects to the email platform via APIs, enabling it to monitor and remediate threats post-delivery without altering the email flow. This approach allows for rapid deployment and seamless integration with existing cloud email services.

    Mimecast

    SPF: Mimecast performs SPF checks to verify whether the sending server is authorized by the domain’s SPF record. Administrators can configure actions for SPF failures, including block, quarantine, permit, or tag with a warning. This gives flexibility in balancing security with business needs.
    DKIM: It validates DKIM signatures by checking that the message was correctly signed by the sending domain and that the content hasn’t been tampered with. If the signature fails, Mimecast can take actions based on your configured policies.
    DMARC: It fully evaluates DMARC by combining the results of SPF and DKIM with domain alignment checks. You can choose to honor the sending domain’s DMARC policyor apply custom rules, for example, quarantining or tagging messages that fail DMARC regardless of the published policy. This allows more granular control for businesses that want to override external domain policies based on specific contexts.

    Integration Methods

Inline Deployment: Mimecast is typically deployed as a cloud-based secure email gateway. Organizations update their domain’s MX records to point to Mimecast, so all inbound emails pass through it first. This allows Mimecast to inspect, filter, and process emails before delivery, providing robust protection.
    API Integrations: Mimecast also offers API-based services through its Mimecast API platform, primarily for management, archival, continuity, and threat intelligence purposes. However, API-only email protection is not Mimecast’s core model. Instead, the APIs are used to enhance the inline deployment, not replace it.

    Barracuda Email Security Gateway
    SPF: Barracuda checks the sender’s IP against the domain’s published SPF record. If the check fails, you can configure the system to block, quarantine, tag, or allow the message, depending on your policy preferences.
    DKIM: It validates whether the incoming message includes a valid DKIM signature. The outcome is logged and used to inform further policy decisions or DMARC evaluations.
    DMARC: It combines SPF and DKIM results, checks for domain alignment, and applies the DMARC policy defined by the sender. Administrators can also choose to override the DMARC policy, allowing messages to pass or be treated differently based on organizational needs.
    Integration Methods

    Inline mode: Barracuda Email Security Gateway is commonly deployed inline by updating your domain’s MX records to point to Barracuda’s cloud or on-premises gateway. This ensures that all inbound emails pass through Barracuda first for filtering and SPF, DKIM, and DMARC validation before being delivered to your mail servers.
    Deployment Behind the Corporate Firewall: Alternatively, Barracuda can be deployed in transparent or bridge mode without modifying MX records. In this setup, the gateway is placed inline at the network level, such as behind a firewall, and intercepts mail traffic transparently. This method is typically used in complex on-premises environments where changing DNS records is not feasible.

Cisco Secure Email
Cisco Secure Email acts as an inline gateway for inbound email, usually requiring your domain’s MX records to point to the Cisco Email Security Appliance or cloud service.
    SPF: Cisco Secure Email verifies whether the sending server is authorized in the sender domain’s SPF record. Administrators can set detailed policies on how to handle SPF failures.
    DKIM: It validates the DKIM signature on incoming emails and logs whether the signature is valid or has failed.
    DMARC: It evaluates DMARC by combining SPF and DKIM results along with domain alignment checks. Admins can configure specific actions, such as quarantine, reject, or tag, based on different failure scenarios or trusted sender exceptions.
    Integration methods

    On-premises Email Security Appliance: You deploy Cisco’s hardware or virtual appliance inline, updating MX records to route mail through it for filtering.
    Cisco Cloud Email Security: Cisco offers a cloud-based email security service where MX records are pointed to Cisco’s cloud infrastructure, which filters and processes inbound mail.

    Cisco Secure Email also offers advanced, rule-based filtering capabilities and integrates with Cisco’s broader threat protection ecosystem, enabling comprehensive inbound email security.
    Outbound Handling of SPF, DKIM, and DMARC by Common Security Gateways
    When your organization sends emails, security gateways can play an active role in processing and authenticating those messages. Depending on the configuration, a gateway might rewrite headers, re-sign messages, or route them through different IPs – all actions that can help or hurt the authentication process. Let’s look at how major SEGs handle outbound email flow.
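One concrete way a gateway "hurts" authentication: DKIM signs a hash of the message body (the bh= tag), so any body modification made after signing invalidates the signature. A toy demonstration with the standard library (not a full DKIM implementation; the disclaimer text is invented):

```python
import base64
import hashlib

# The bh= tag of a DKIM-Signature carries a hash of the canonicalized body.
def body_hash(body: bytes) -> str:
    return base64.b64encode(hashlib.sha256(body).digest()).decode()

signed_body = b"Quarterly report attached.\r\n"
bh_at_signing = body_hash(signed_body)

# An outbound gateway appends a disclaimer AFTER the message was signed...
modified_body = signed_body + b"--\r\nScanned by ExampleGateway\r\n"
bh_at_verification = body_hash(modified_body)

# ...so the receiver's recomputed hash no longer matches bh=, and DKIM fails.
print(bh_at_signing == bh_at_verification)  # False
```

This is why disclaimers, banners, and link rewriting must happen before DKIM signing, or the gateway must re-sign the message itself.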
    Avanan – Outbound Handling and Integration Methods
    Outbound Logic
    Avanan analyzes outbound emails primarily to detect data loss, malware, and policy violations. In API-based integration, emails are sent directly by the original mail server, so SPF and DKIM signatures remain intact. Avanan does not alter the message or reroute traffic, which helps maintain full DMARC alignment and domain reputation.
    Integration Methods
    1. API Integration: Connects to Microsoft 365 or Google Workspace via API. No MX changes are needed. Emails are scanned after they are sent, with no modification to SPF, DKIM, or the delivery path. 

    How it works: Microsoft Graph API or Google Workspace APIs are used to monitor and intervene in outbound emails.
    Protection level: Despite no MX changes, it can offer inline-like protection, meaning it can block, quarantine, or encrypt emails before they are delivered externally.
    SPF/DKIM/DMARC impact: Preserves original headers and signatures since mail is sent directly from Microsoft/Google servers.

    2. Inline Integration: Requires changing MX records to route email through Avanan. In this mode, Avanan can intercept and inspect outbound emails before delivery. Depending on the configuration, this may affect SPF or DKIM if not properly handled.

    How it works: Requires adding Avanan’s
    Protection level: Traditional inline security with full visibility and control, including encryption, DLP, policy enforcement, and advanced threat protection.
SPF/DKIM/DMARC impact: SPF must be configured by adding Avanan’s include mechanism to the sending domain’s SPF record. The DKIM record of the original sending source is preserved.

    For configurations, you can refer to the steps in this blog.
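Adding a gateway's include mechanism, as described above, has one ordering constraint: the "all" mechanism must stay last in the SPF record. A small illustrative sketch (the include hostname is a placeholder, not Avanan's actual mechanism):

```python
# Hypothetical helper: insert a gateway's include term into an existing
# SPF record, keeping it ahead of the terminal "all" qualifier.
def add_spf_include(record: str, include: str) -> str:
    terms = record.split()
    if f"include:{include}" in terms:
        return record  # already present; avoid duplicate DNS lookups
    # The "all" mechanism (with any qualifier: -all, ~all, ?all) must stay last.
    for i, term in enumerate(terms):
        if term.lstrip("+-~?") == "all":
            return " ".join(terms[:i] + [f"include:{include}"] + terms[i:])
    return record + f" include:{include}"

print(add_spf_include("v=spf1 include:_spf.google.com -all",
                      "spf.gateway.example"))
# v=spf1 include:_spf.google.com include:spf.gateway.example -all
```

Keep in mind the 10-DNS-lookup limit for SPF: every extra include consumes lookups, which is one reason gateways preserving the original sending path (API mode) are simpler to authenticate.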
    Proofpoint – Outbound Handling and Integration Methods
    Outbound Logic
Proofpoint analyzes outbound emails to detect and prevent data loss, to identify advanced threats originating from compromised internal accounts, and to ensure compliance. Their API integration provides crucial visibility and powerful remediation capabilities, while their traditional gateway deployment delivers true inline, pre-delivery blocking for outbound traffic.
    Integration methods
    1. API Integration: No MX record changes are required for this deployment method. Integration is done with Microsoft 365 or Google Workspace.

    How it works: Through its API integration, Proofpoint gains deep visibility into outbound emails and provides layered security and response features, including:

    Detect and alert: Identifies sensitive content, malicious attachments, or suspicious links in outbound emails.
    Post-delivery remediation: A key capability of the API model is Threat Response Auto-Pull, which enables Proofpoint to automatically recall, quarantine, or delete emails after delivery. This is particularly useful for internally sent messages or those forwarded to other users.
    Enhanced visibility: Aggregates message metadata and logs into Proofpoint’s threat intelligence platform, giving security teams a centralized view of outbound risks and user behavior.

    Protection level: API-based integration provides strong post-delivery detection and response, as well as visibility into DLP incidents and suspicious behavior. 
    SPF/DKIM/DMARC impact: Proofpoint does not alter SPF, DKIM, or DMARC because emails are sent directly through Microsoft or Google servers. Since Proofpoint’s servers are not involved in the actual sending process, the original authentication headers remain intact.

    2. Gateway Integration: This method requires updating MX records or routing outbound mail through Proofpoint via a smart host.

    How it works: Proofpoint acts as an inline gateway, inspecting emails before delivery. Inbound mail is filtered via MX changes; outbound mail is relayed through Proofpoint’s servers.
    Threat and DLP filtering: Scans outbound messages for sensitive content, malware, and policy violations.
    Real-time enforcement: Blocks, encrypts, or quarantines emails before they’re delivered.
    Policy controls: Applies rules based on content, recipient, or behavior.
    Protection level: Provides strong, real-time protection for outbound traffic with pre-delivery enforcement, DLP, and encryption.
    SPF/DKIM/DMARC impact: Proofpoint becomes the sending server:

SPF: You need to configure Proofpoint’s SPF.
    DKIM: Can sign messages; requires DKIM setup.
    DMARC: DMARC passes if SPF and DKIM are set up properly.

Please refer to this article to configure SPF and DKIM for Proofpoint.
    Mimecast – Outbound Handling and Integration Methods
    Outbound Logic
    Mimecast inspects outbound emails to prevent data loss, detect internal threats such as malware and impersonation, and ensure regulatory compliance. It primarily functions as a Secure Email Gateway, meaning it sits directly in the outbound email flow. While Mimecast offers APIs, its core outbound protection is built around this inline gateway model.
    Integration Methods
1. Gateway Integration
This is Mimecast’s primary method for outbound email protection. Organizations route their outbound traffic through Mimecast by configuring their email server to use Mimecast as a smart host. This enables Mimecast to inspect and enforce policies on all outgoing emails in real time.

    How it works:
    Updating outbound routing in your email system, or
    Using Mimecast SMTP relay to direct messages through their infrastructure.
    Mimecast then scans, filters, and applies policies before the email reaches the final recipient.

    Protection level:
    Advanced DLP: Identifies and prevents sensitive data leaks.
    Impersonation and Threat Protection: Blocks malware, phishing, and abuse from compromised internal accounts.
    Email Encryption and Secure Messaging: Applies encryption policies or routes messages via secure portals.

    Regulatory Compliance: Enforces outbound compliance rules based on content, recipient, or metadata.
    SPF/DKIM/DMARC impact:

    SPF: Your SPF record must include Mimecast’s SPF mechanism based on your region to avoid SPF failures.
    DKIM: A new DKIM record should be configured to make sure your emails are DKIM signed when routing through Mimecast.
    DMARC: With correct SPF and DKIM setup, Mimecast ensures DMARC alignment, maintaining your domain’s sending reputation. Please refer to the steps in this detailed article to set up SPF and DKIM for Mimecast.

2. API Integration
Mimecast’s APIs complement the main gateway by providing automation, reporting, and management tools rather than handling live outbound mail flow. They allow you to manage policies, export logs, search archived emails, and sync users.
APIs enhance visibility and operational tasks but do not provide real-time filtering or blocking of outbound messages. Since APIs don’t process live mail, they have no direct effect on SPF, DKIM, or DMARC; those depend on your gateway setup.
    Barracuda – Outbound Handling and Integration Methods
    Outbound Logic
Barracuda analyzes outbound emails to prevent data loss, block malware, stop phishing/impersonation attempts from compromised internal accounts, and ensure compliance. Barracuda offers flexible deployment options, including both traditional gateway and API-based integrations. While both contribute to outbound security, their roles are distinct.
    Integration Methods
1. Gateway Integration – Primary Inline Security

    How it works: All outbound emails pass through Barracuda’s security stack for real-time inspection, threat blocking, and policy enforcement before delivery.
    Protection level:

    Comprehensive DLP 
    Outbound spam and virus filtering 
    Enforcement of compliance and content policies

    This approach offers a high level of control and immediate threat mitigation on outbound mail flow.

    SPF/DKIM/DMARC impact:

    SPF: Update SPF records to include Barracuda’s sending IPs or SPF include mechanism.
    DKIM: Currently, no explicit setup is needed; DKIM of the main sending source is preserved.

    Refer to this article for more comprehensive guidance on Barracuda SEG configuration.
2. API Integration
How it works: The API accesses cloud email environments to analyze historical and real-time data, learning normal communication patterns to detect anomalies in outbound emails. It also supports post-delivery remediation, enabling the removal of malicious emails from internal mailboxes after sending.
    Protection level: Advanced AI-driven detection and near real-time blocking of outbound threats, plus strong post-delivery cleanup capabilities.
    SPF/DKIM/DMARC impact: Since mail is sent directly by the original mail server, SPF and DKIM signatures remain intact, preserving DMARC alignment and domain reputation.

Cisco Secure Email – Outbound Handling and Integration Methods
    Outbound Logic
Cisco Secure Email protects outbound email by preventing data loss, blocking spam and malware from internal accounts, stopping business email compromise (BEC) and impersonation attacks, and ensuring compliance. Cisco provides both traditional gateway appliances/cloud gateways and modern API-based solutions for layered outbound security.
    Integration Methods
1. Gateway Integration – Cisco Secure Email Gateway
How it works: Organizations update MX records to route mail through the Cisco Secure Email Gateway or configure their mail server to smart host outbound email via the gateway. All outbound mail is inspected and policies enforced before delivery.
    Protection level:

Granular DLP
Outbound spam and malware filtering to protect IP reputation
    Email encryption for sensitive outbound messages
    Comprehensive content and attachment policy enforcement

    SPF: Check this article for comprehensive guidance on Cisco SPF settings.
    DKIM: Refer to this article for detailed guidance on Cisco DKIM settings.

    2. API Integration – Cisco Secure Email Threat Defense

    How it works: Integrates directly via API with Microsoft 365, continuously monitoring email metadata, content, and user behavior across inbound, outbound, and internal messages. Leverages Cisco’s threat intelligence and AI to detect anomalous outbound activity linked to BEC, account takeover, and phishing.
    Post-Delivery Remediation: Automates the removal or quarantine of malicious or policy-violating emails from mailboxes even after sending.
    Protection level: Advanced, AI-driven detection of sophisticated outbound threats with real-time monitoring and automated remediation. Complements gateway filtering by adding cloud-native visibility and swift post-send action.
    SPF/DKIM/DMARC impact: Since emails are sent directly by the original mail server, SPF and DKIM signatures remain intact, preserving DMARC alignment and domain reputation.

    If you have any questions or need assistance, feel free to reach out to EasyDMARC technical support.
Barracuda – Outbound Handling and Integration Methods Outbound Logic Barracuda analyzes outbound emails to prevent data loss, block malware, stop phishing/impersonation attempts from compromised internal accounts, and ensure compliance. Barracuda offers flexible deployment options, including both traditional gatewayand API-based integrations. While both contribute to outbound security, their roles are distinct. Integration Methods 1. Gateway Integration— Primary Inline Security How it works: All outbound emails pass through Barracuda’s security stack for real-time inspection, threat blocking, and policy enforcement before delivery. Protection level: Comprehensive DLP  Outbound spam and virus filtering  Enforcement of compliance and content policies This approach offers a high level of control and immediate threat mitigation on outbound mail flow. SPF/DKIM/DMARC impact: SPF: Update SPF records to include Barracuda’s sending IPs or SPF include mechanism. DKIM: Currently, no explicit setup is needed; DKIM of the main sending source is preserved. Refer to this article for more comprehensive guidance on Barracuda SEG configuration. 2. API IntegrationHow it works: The API accesses cloud email environments to analyze historical and real-time data, learning normal communication patterns to detect anomalies in outbound emails. It also supports post-delivery remediation, enabling the removal of malicious emails from internal mailboxes after sending. Protection level: Advanced AI-driven detection and near real-time blocking of outbound threats, plus strong post-delivery cleanup capabilities. SPF/DKIM/DMARC impact: Since mail is sent directly by the original mail server, SPF and DKIM signatures remain intact, preserving DMARC alignment and domain reputation. 
Cisco Secure Email– Outbound Handling and Integration Methods Outbound Logic Cisco Secure Email protects outbound email by preventing data loss, blocking spam and malware from internal accounts, stopping business email compromiseand impersonation attacks, and ensuring compliance. Cisco provides both traditional gateway appliances/cloud gateways and modern API-based solutions for layered outbound security. Integration Methods 1. Gateway Integration– Cisco Secure Email GatewayHow it works: Organizations update MX records to route mail through the Cisco Secure Email Gateway or configure their mail serverto smart host outbound email via the gateway. All outbound mail is inspected and policies enforced before delivery. Protection level: Granular DLPOutbound spam and malware filtering to protect IP reputation Email encryption for sensitive outbound messages Comprehensive content and attachment policy enforcement SPF: Check this article for comprehensive guidance on Cisco SPF settings. DKIM: Refer to this article for detailed guidance on Cisco DKIM settings. 2. API Integration – Cisco Secure Email Threat Defense How it works: Integrates directly via API with Microsoft 365, continuously monitoring email metadata, content, and user behavior across inbound, outbound, and internal messages. Leverages Cisco’s threat intelligence and AI to detect anomalous outbound activity linked to BEC, account takeover, and phishing. Post-Delivery Remediation: Automates the removal or quarantine of malicious or policy-violating emails from mailboxes even after sending. Protection level: Advanced, AI-driven detection of sophisticated outbound threats with real-time monitoring and automated remediation. Complements gateway filtering by adding cloud-native visibility and swift post-send action. SPF/DKIM/DMARC impact: Since emails are sent directly by the original mail server, SPF and DKIM signatures remain intact, preserving DMARC alignment and domain reputation. 
If you have any questions or need assistance, feel free to reach out to EasyDMARC technical support. #understanding #relationship #between #security #gateways
    EASYDMARC.COM
    Understanding the Relationship Between Security Gateways and DMARC
Email authentication protocols like SPF, DKIM, and DMARC play a critical role in protecting domains from spoofing and phishing. However, when SEGs are introduced into the email path, the interaction with these protocols becomes more complex. Security gateways (SEGs) are a core part of many organizations’ email infrastructure. They act as intermediaries between the public internet and internal mail systems, inspecting, filtering, and routing messages.

This blog examines how security gateways handle SPF, DKIM, and DMARC, with real-world examples from popular gateways such as Proofpoint, Mimecast, and Avanan. We’ll also cover best practices for maintaining authentication integrity and avoiding misconfigurations that can compromise email authentication or lead to false DMARC failures.

Security gateways often sit at the boundary between your organization and the internet, managing both inbound and outbound email traffic. Their role affects how email authentication protocols behave. An inbound SEG examines emails coming into your organization. It checks SPF, DKIM, and DMARC to determine if the message is authentic and safe before passing it to your internal mail servers. An outbound SEG handles emails sent from your domain. It may modify headers, rewrite envelope addresses, or even apply DKIM signing. All of these can impact SPF, DKIM, or DMARC validation on the recipient’s side. Understanding how SEGs influence these flows is crucial to maintaining proper authentication and avoiding unexpected DMARC failures.

Inbound Handling of SPF, DKIM, and DMARC by Common Security Gateways

When an email comes into your organization, your security gateway is the first to inspect it. It checks whether the message is real, trustworthy, and properly authenticated. Let’s look at how different SEGs handle these checks.

Avanan (by Check Point)

SPF: Avanan verifies whether the sending server is authorized to send emails for the domain by checking the SPF record.
DKIM: It verifies if the message was signed by the sending domain and if that signature is valid.
DMARC: It uses the results of the SPF and DKIM checks to evaluate DMARC. However, final enforcement usually depends on how DMARC is handled by Microsoft 365 or Gmail, as Avanan integrates directly with them.

Avanan offers two methods of integration:
1. API integration: Avanan connects via APIs, with no change to MX records, usually in Monitor or Detect modes.
2. Inline integration: Avanan is placed inline in the mail flow (MX records changed), actively blocking or remediating threats.

Proofpoint Email Protection

SPF: Proofpoint checks SPF to confirm the sender’s IP is authorized to send on behalf of the domain. You can set custom rules (e.g., treat “softfail” as “fail”).
DKIM: It verifies DKIM signatures and shows clear pass/fail results in logs.
DMARC: It fully evaluates DMARC by combining SPF and DKIM results with alignment checks. Administrators can configure how to handle messages that fail DMARC, such as rejecting, quarantining, or delivering them. Additionally, Proofpoint allows whitelisting specific senders you trust, even if their emails fail authentication checks.

Integration Methods

Inline Mode: In this traditional deployment, Proofpoint is positioned directly in the email flow by modifying MX records. Emails are routed through Proofpoint’s infrastructure, allowing it to inspect and filter messages before they reach the recipient’s inbox. This mode provides pre-delivery protection and is commonly used in on-premises or hybrid environments.
API-Based (Integrated Cloud Email Security – ICES) Mode: Proofpoint offers API-based integration, particularly with cloud email platforms like Microsoft 365 and Google Workspace. In this mode, Proofpoint connects to the email platform via APIs, enabling it to monitor and remediate threats post-delivery without altering the email flow. This approach allows for rapid deployment and seamless integration with existing cloud email services.
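
The DMARC evaluation these gateways perform — combining SPF and DKIM results with identifier alignment — can be sketched roughly as follows. This is a simplified illustration, not any vendor’s actual implementation; in particular, the registrable-domain extraction here is naive (real validators use the Public Suffix List):

```python
def org_domain(domain: str) -> str:
    # Naive registrable-domain extraction; real implementations use the
    # Public Suffix List to handle cases like example.co.uk correctly.
    return ".".join(domain.lower().rsplit(".", 2)[-2:])

def evaluate_dmarc(from_domain, spf_result, spf_domain,
                   dkim_result, dkim_domain, alignment="relaxed"):
    """DMARC passes if either SPF or DKIM passes with an aligned domain."""
    def aligned(auth_domain):
        if alignment == "strict":
            return auth_domain.lower() == from_domain.lower()
        # Relaxed alignment: organizational domains must match.
        return org_domain(auth_domain) == org_domain(from_domain)

    spf_ok = spf_result == "pass" and aligned(spf_domain)
    dkim_ok = dkim_result == "pass" and aligned(dkim_domain)
    return "pass" if spf_ok or dkim_ok else "fail"
```

For example, SPF passing for mail.example.com on a message with a From: domain of example.com passes under relaxed alignment but fails under strict alignment — which is why a gateway that changes the sending host can break strict DMARC even when SPF itself still passes.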
Mimecast

SPF: Mimecast performs SPF checks to verify whether the sending server is authorized by the domain’s SPF record. Administrators can configure actions for SPF failures, including block, quarantine, permit, or tag with a warning. This gives flexibility in balancing security with business needs.
DKIM: It validates DKIM signatures by checking that the message was correctly signed by the sending domain and that the content hasn’t been tampered with. If the signature fails, Mimecast can take actions based on your configured policies.
DMARC: It fully evaluates DMARC by combining the results of SPF and DKIM with domain alignment checks. You can choose to honor the sending domain’s DMARC policy (none, quarantine, reject) or apply custom rules, for example, quarantining or tagging messages that fail DMARC regardless of the published policy. This allows more granular control for businesses that want to override external domain policies based on specific contexts.

Integration Methods

Inline Deployment: Mimecast is typically deployed as a cloud-based secure email gateway. Organizations update their domain’s MX records to point to Mimecast, so all inbound (and optionally outbound) emails pass through it first. This allows Mimecast to inspect, filter, and process emails before delivery, providing robust protection.
API Integrations: Mimecast also offers API-based services through its Mimecast API platform, primarily for management, archival, continuity, and threat intelligence purposes. However, API-only email protection is not Mimecast’s core model. Instead, the APIs are used to enhance the inline deployment, not replace it.

Barracuda Email Security Gateway

SPF: Barracuda checks the sender’s IP against the domain’s published SPF record. If the check fails, you can configure the system to block, quarantine, tag, or allow the message, depending on your policy preferences.
DKIM: It validates whether the incoming message includes a valid DKIM signature.
The outcome is logged and used to inform further policy decisions or DMARC evaluations.
DMARC: It combines SPF and DKIM results, checks for domain alignment, and applies the DMARC policy defined by the sender. Administrators can also choose to override the DMARC policy, allowing messages to pass or be treated differently based on organizational needs (e.g., trusted senders or internal exceptions).

Integration Methods

Inline mode (more common and straightforward): Barracuda Email Security Gateway is commonly deployed inline by updating your domain’s MX records to point to Barracuda’s cloud or on-premises gateway. This ensures that all inbound emails pass through Barracuda first for filtering and SPF, DKIM, and DMARC validation before being delivered to your mail servers.
Deployment Behind the Corporate Firewall: Alternatively, Barracuda can be deployed in transparent or bridge mode without modifying MX records. In this setup, the gateway is placed inline at the network level, such as behind a firewall, and intercepts mail traffic transparently. This method is typically used in complex on-premises environments where changing DNS records is not feasible.

Cisco Secure Email (formerly IronPort)

Cisco Secure Email acts as an inline gateway for inbound email, usually requiring your domain’s MX records to point to the Cisco Email Security Appliance or cloud service.
SPF: Cisco Secure Email verifies whether the sending server is authorized in the sender domain’s SPF record. Administrators can set detailed policies on how to handle SPF failures.
DKIM: It validates the DKIM signature on incoming emails and logs whether the signature is valid or has failed.
DMARC: It evaluates DMARC by combining SPF and DKIM results along with domain alignment checks. Admins can configure specific actions, such as quarantine, reject, or tag, based on different failure scenarios or trusted sender exceptions.
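
A recurring theme above is that each gateway lets administrators map an authentication verdict to a configurable disposition (reject, quarantine, tag, or deliver), with overrides for trusted senders. A minimal sketch of such a policy table — illustrative only; actual products expose this through their admin consoles, not code:

```python
# Hypothetical verdict -> action table. Real gateways let admins tune
# this per sender, per domain, or per failure type.
DEFAULT_POLICY = {
    "dmarc_fail_reject": "reject",
    "dmarc_fail_quarantine": "quarantine",
    "spf_softfail": "tag",
    "dkim_fail": "quarantine",
}

def decide_action(dmarc_result, dmarc_policy, spf_result, dkim_result,
                  policy=DEFAULT_POLICY, trusted_sender=False):
    """Pick a disposition for an inbound message, honoring overrides."""
    if trusted_sender:  # whitelisted senders bypass enforcement
        return "deliver"
    if dmarc_result == "fail":
        if dmarc_policy == "reject":
            return policy["dmarc_fail_reject"]
        if dmarc_policy == "quarantine":
            return policy["dmarc_fail_quarantine"]
        # p=none: deliver, but the individual checks below may still tag
    if spf_result == "softfail":
        return policy["spf_softfail"]
    if dkim_result == "fail":
        return policy["dkim_fail"]
    return "deliver"
```

Note how the trusted-sender override sits first: this mirrors the whitelisting behavior described for Proofpoint and Barracuda, where a trusted source is delivered even when authentication fails.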
Integration methods

On-premises Email Security Appliance (ESA): You deploy Cisco’s hardware or virtual appliance inline, updating MX records to route mail through it for filtering.
Cisco Cloud Email Security: Cisco offers a cloud-based email security service where MX records are pointed to Cisco’s cloud infrastructure, which filters and processes inbound mail.

Cisco Secure Email also offers advanced, rule-based filtering capabilities and integrates with Cisco’s broader threat protection ecosystem, enabling comprehensive inbound email security.

Outbound Handling of SPF, DKIM, and DMARC by Common Security Gateways

When your organization sends emails, security gateways can play an active role in processing and authenticating those messages. Depending on the configuration, a gateway might rewrite headers, re-sign messages, or route them through different IPs – all actions that can help or hurt the authentication process. Let’s look at how major SEGs handle outbound email flow.

Avanan – Outbound Handling and Integration Methods

Outbound Logic
Avanan analyzes outbound emails primarily to detect data loss, malware, and policy violations. In API-based integration, emails are sent directly by the original mail server (e.g., Microsoft 365 or Google Workspace), so SPF and DKIM signatures remain intact. Avanan does not alter the message or reroute traffic, which helps maintain full DMARC alignment and domain reputation.

Integration Methods

1. API Integration: Connects to Microsoft 365 or Google Workspace via API. No MX changes are needed. Emails are scanned after they are sent, with no modification to SPF, DKIM, or the delivery path.
How it works: Microsoft Graph API or Google Workspace APIs are used to monitor and intervene in outbound emails.
Protection level: Despite no MX changes, it can offer inline-like protection, meaning it can block, quarantine, or encrypt emails before they are delivered externally.
SPF/DKIM/DMARC impact: Preserves original headers and signatures, since mail is sent directly from Microsoft/Google servers.

2. Inline Integration: Requires changing MX records to route email through Avanan. In this mode, Avanan can intercept and inspect outbound emails before delivery. Depending on the configuration, this may affect SPF or DKIM if not properly handled.
How it works: Mail is routed through Avanan’s infrastructure (via the MX change), so messages are inspected before delivery.
Protection level: Traditional inline security with full visibility and control, including encryption, DLP, policy enforcement, and advanced threat protection.
SPF/DKIM/DMARC impact: SPF configuration is needed: add Avanan’s include mechanism to the sending domain’s SPF record. The DKIM record of the original sending source is preserved. For configurations, you can refer to the steps in this blog.

Proofpoint – Outbound Handling and Integration Methods

Outbound Logic
Proofpoint analyzes outbound emails to detect and prevent data loss (DLP), to identify advanced threats (malware, phishing, BEC) originating from compromised internal accounts, and to ensure compliance. Their API integration provides crucial visibility and powerful remediation capabilities, while their traditional gateway (MX record) deployment delivers true inline, pre-delivery blocking for outbound traffic.

Integration methods

1. API Integration: No MX record changes are required for this deployment method. Integration is done with Microsoft 365 or Google Workspace.
How it works: Through its API integration, Proofpoint gains deep visibility into outbound emails and provides layered security and response features, including:
Detect and alert: Identifies sensitive content (Data Loss Prevention violations), malicious attachments, or suspicious links in outbound emails.
Post-delivery remediation (TRAP): A key capability of the API model is Threat Response Auto-Pull (TRAP), which enables Proofpoint to automatically recall, quarantine, or delete emails after delivery.
This is particularly useful for internally sent messages or those forwarded to other users.
Enhanced visibility: Aggregates message metadata and logs into Proofpoint’s threat intelligence platform, giving security teams a centralized view of outbound risks and user behavior.
Protection level: API-based integration provides strong post-delivery detection and response, as well as visibility into DLP incidents and suspicious behavior.
SPF/DKIM/DMARC impact: Proofpoint does not alter SPF, DKIM, or DMARC because emails are sent directly through Microsoft or Google servers. Since Proofpoint’s servers are not involved in the actual sending process, the original authentication headers remain intact.

2. Gateway Integration (MX Record/Smart Host): This method requires updating MX records or routing outbound mail through Proofpoint via a smart host.
How it works: Proofpoint acts as an inline gateway, inspecting emails before delivery. Inbound mail is filtered via MX changes; outbound mail is relayed through Proofpoint’s servers.
Threat and DLP filtering: Scans outbound messages for sensitive content, malware, and policy violations.
Real-time enforcement: Blocks, encrypts, or quarantines emails before they’re delivered.
Policy controls: Applies rules based on content, recipient, or behavior.
Protection level: Provides strong, real-time protection for outbound traffic with pre-delivery enforcement, DLP, and encryption.
SPF/DKIM/DMARC impact: Proofpoint becomes the sending server:
SPF: You need to configure Proofpoint’s SPF.
DKIM: Can sign messages; requires DKIM setup.
DMARC: DMARC passes if SPF and DKIM are set up properly.
Please refer to this article to configure SPF and DKIM for Proofpoint.

Mimecast – Outbound Handling and Integration Methods

Outbound Logic
Mimecast inspects outbound emails to prevent data loss (DLP), detect internal threats such as malware and impersonation, and ensure regulatory compliance.
It primarily functions as a Secure Email Gateway (SEG), meaning it sits directly in the outbound email flow. While Mimecast offers APIs, its core outbound protection is built around this inline gateway model.

Integration Methods

1. Gateway Integration (MX record change required)
This is Mimecast’s primary method for outbound email protection. Organizations route their outbound traffic through Mimecast by configuring their email server (e.g., Microsoft 365, Google Workspace, etc.) to use Mimecast as a smart host. This enables Mimecast to inspect and enforce policies on all outgoing emails in real time.
How it works: Updating outbound routing in your email system (smart host settings), or using Mimecast SMTP relay to direct messages through their infrastructure. Mimecast then scans, filters, and applies policies before the email reaches the final recipient.
Protection level:
Advanced DLP: Identifies and prevents sensitive data leaks.
Impersonation and Threat Protection: Blocks malware, phishing, and abuse from compromised internal accounts.
Email Encryption and Secure Messaging: Applies encryption policies or routes messages via secure portals.
Regulatory Compliance: Enforces outbound compliance rules based on content, recipient, or metadata.
SPF/DKIM/DMARC impact:
SPF: Your SPF record must include Mimecast’s SPF mechanism based on your region to avoid SPF failures.
DKIM: A new DKIM record should be configured to make sure your emails are DKIM signed when routing through Mimecast.
DMARC: With correct SPF and DKIM setup, Mimecast ensures DMARC alignment, maintaining your domain’s sending reputation.
Please refer to the steps in this detailed article to set up SPF and DKIM for Mimecast.

2. API Integration (complementary to gateway)
Mimecast’s APIs complement the main gateway by providing automation, reporting, and management tools rather than handling live outbound mail flow. They allow you to manage policies, export logs, search archived emails, and sync users.
APIs enhance visibility and operational tasks but do not provide real-time filtering or blocking of outbound messages. Since APIs don’t process live mail, they have no direct effect on SPF, DKIM, or DMARC; those depend on your gateway (smart host) setup.

Barracuda – Outbound Handling and Integration Methods

Outbound Logic
Barracuda analyzes outbound emails to prevent data loss (DLP), block malware, stop phishing/impersonation attempts from compromised internal accounts, and ensure compliance. Barracuda offers flexible deployment options, including both traditional gateway (MX record) and API-based integrations. While both contribute to outbound security, their roles are distinct.

Integration Methods

1. Gateway Integration (MX Record / Smart Host) — Primary Inline Security
How it works: All outbound emails pass through Barracuda’s security stack for real-time inspection, threat blocking, and policy enforcement before delivery.
Protection level:
Comprehensive DLP (blocking, encrypting, or quarantining sensitive content)
Outbound spam and virus filtering
Enforcement of compliance and content policies
This approach offers a high level of control and immediate threat mitigation on outbound mail flow.
SPF/DKIM/DMARC impact:
SPF: Update SPF records to include Barracuda’s sending IPs or SPF include mechanism.
DKIM: Currently, no explicit setup is needed; DKIM of the main sending source is preserved.
Refer to this article for more comprehensive guidance on Barracuda SEG configuration.

2. API Integration (Complementary & Advanced Threat Focus)
How it works: The API accesses cloud email environments to analyze historical and real-time data, learning normal communication patterns to detect anomalies in outbound emails. It also supports post-delivery remediation, enabling the removal of malicious emails from internal mailboxes after sending.
Protection level: Advanced AI-driven detection and near real-time blocking of outbound threats, plus strong post-delivery cleanup capabilities.
SPF/DKIM/DMARC impact: Since mail is sent directly by the original mail server (e.g., Microsoft 365), SPF and DKIM signatures remain intact, preserving DMARC alignment and domain reputation.

Cisco Secure Email (formerly IronPort) – Outbound Handling and Integration Methods

Outbound Logic
Cisco Secure Email protects outbound email by preventing data loss (DLP), blocking spam and malware from internal accounts, stopping business email compromise (BEC) and impersonation attacks, and ensuring compliance. Cisco provides both traditional gateway appliances/cloud gateways and modern API-based solutions for layered outbound security.

Integration Methods

1. Gateway Integration (MX Record / Smart Host) – Cisco Secure Email Gateway (ESA)
How it works: Organizations update MX records to route mail through the Cisco Secure Email Gateway or configure their mail server (e.g., Microsoft 365, Exchange) to smart host outbound email via the gateway. All outbound mail is inspected and policies enforced before delivery.
Protection level:
Granular DLP (blocking, encrypting, quarantining sensitive content)
Outbound spam and malware filtering to protect IP reputation
Email encryption for sensitive outbound messages
Comprehensive content and attachment policy enforcement
SPF: Check this article for comprehensive guidance on Cisco SPF settings.
DKIM: Refer to this article for detailed guidance on Cisco DKIM settings.

2. API Integration – Cisco Secure Email Threat Defense
How it works: Integrates directly via API with Microsoft 365 (and potentially Google Workspace), continuously monitoring email metadata, content, and user behavior across inbound, outbound, and internal messages. Leverages Cisco’s threat intelligence and AI to detect anomalous outbound activity linked to BEC, account takeover, and phishing.
Post-Delivery Remediation: Automates the removal or quarantine of malicious or policy-violating emails from mailboxes even after sending.
Protection level: Advanced, AI-driven detection of sophisticated outbound threats with real-time monitoring and automated remediation. Complements gateway filtering by adding cloud-native visibility and swift post-send action.
SPF/DKIM/DMARC impact: Since emails are sent directly by the original mail server, SPF and DKIM signatures remain intact, preserving DMARC alignment and domain reputation.

If you have any questions or need assistance, feel free to reach out to EasyDMARC technical support.
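
As a concrete illustration of the DNS changes several of these gateways require, here is what adding a gateway to SPF and publishing a DKIM key can look like. The gateway hostname, selector, and key below are purely illustrative placeholders, not any vendor’s actual values — always use the include mechanism and DKIM record that your gateway’s documentation specifies:

```
; Before: only Microsoft 365 is authorized to send for the domain
example.com.                  TXT  "v=spf1 include:spf.protection.outlook.com -all"

; After: the gateway's include mechanism is added (placeholder hostname)
example.com.                  TXT  "v=spf1 include:_spf.gateway.example.net include:spf.protection.outlook.com -all"

; DKIM public key published under a selector (placeholder selector and key)
seg1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3..."
```

With both records in place, mail relayed through the gateway can still produce an aligned SPF or DKIM pass, which is what keeps DMARC passing after the routing change.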
    Cloud Security Best Practices: Protecting Business Data in a Multi-Cloud World

    The cloud has changed everything. It’s faster, cheaper, and easier to scale than traditional infrastructure. Initially, most companies chose a single cloud provider. That’s no longer enough. Now, nearly 86% of businesses use more than one cloud.
    This approach—called multi-cloud—lets teams choose the best features from each provider. But it also opens the door to new security risks. When apps, data, and tools are scattered across platforms, managing security gets harder. And in today's world of constant cyber threats, ignoring cloud security is not an option.
    Let’s walk through real-world challenges and the best ways to protect business data in a multi-cloud environment.

    1. Know What You’re Working With
    Start with visibility. Make a full inventory of the cloud platforms, apps, and storage your business uses. Ask every department—marketing, finance, HR—what tools they’ve signed up for. Many use services without informing IT. This is shadow IT, and it’s risky.
    Once you have the list, figure out what data lives where. Some workloads are low-risk. Others involve customer records, credit card data, or legal files. Prioritize those.

    2. Build a Unified Security Strategy
    One of the biggest mistakes companies make is treating each cloud provider as a separate system. Every provider has its own rules, tools, and settings. If your security strategy is broken up, gaps will appear.
    Instead, aim for a single, connected approach. Use the same access rules, encryption standards, and monitoring tools across all clouds. You don’t want different policies on AWS and Azure—it just invites trouble.
    Tools like centralized dashboards, SIEM, and SOAR help you keep everything in one place.

    3. Enforce Strict Access Controls
    In a multi-cloud world, identity and access control are one of the hardest things to get right. Every platform has its own login system. Without proper integration, mistakes happen. Someone might get more access than they need, or never lose access when they leave the company.
    Stick to these practices:

    Use role-based access control.
    Limit permissions to the bare minimum.
    Turn on multi-factor authentication.
    Link logins across platforms using identity federation.

    The more consistent your access rules are, the easier it is to control who gets in and what they can do.
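
The access-control practices above can be sketched as a tiny role-based model. This is illustrative only — in practice these rules live in your identity provider and cloud IAM policies, not in application code; the roles and permission names are invented for the example:

```python
# Hypothetical role -> permission mapping; real systems pull this
# from an identity provider (IdP) via identity federation.
ROLES = {
    "analyst":  {"logs:read"},
    "engineer": {"logs:read", "vm:restart"},
    "admin":    {"logs:read", "vm:restart", "iam:edit"},
}

def is_allowed(role: str, permission: str, mfa_verified: bool) -> bool:
    """Least privilege: deny by default, and always require MFA."""
    if not mfa_verified:
        return False
    return permission in ROLES.get(role, set())
```

Note the two defaults: an unknown role gets an empty permission set (deny by default), and no permission is granted without MFA — the same principles the list above describes.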

    4. Use the Zero Trust Model
    Zero Trust means never assume anything is safe. Every user, device, and app must prove itself—every time. Even if a user is on your network, don’t trust them by default.
    This model reduces risk. It checks each request. It verifies users. And it looks for signs of abnormal behavior, like someone logging in from a new device or country.
    Zero Trust works well with automation and real-time monitoring. It also forces teams to rethink how data is shared and accessed.
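
The per-request verification at the heart of Zero Trust can be sketched as follows. This is a simplified illustration with invented fields — real deployments combine many more signals (device posture, token claims, behavioral baselines) from dedicated policy engines:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    token_valid: bool
    device_trusted: bool
    country: str

# Hypothetical record of where each user normally logs in from.
USUAL_COUNTRY = {"alice": "DE", "bob": "US"}

def zero_trust_decision(req: Request) -> str:
    """Verify every request; never trust by default."""
    if not req.token_valid or not req.device_trusted:
        return "deny"
    # An abnormal location is a signal: step up verification
    # (e.g., force re-authentication with MFA) instead of allowing.
    if USUAL_COUNTRY.get(req.user) != req.country:
        return "challenge"
    return "allow"
```

The key design choice is that every request is evaluated — being inside the network, or having passed a check yesterday, earns nothing.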

    5. Encrypt Data—Always
    Encryption is a basic but powerful layer of defense. It protects data whether it’s sitting in storage or moving between systems. If attackers get in, encrypted data is useless without the keys.
    Most cloud platforms offer built-in encryption. But don’t rely only on that. You can manage your own keys with tools like AWS KMS or Azure Key Vault. That gives you more control.
    To stay safe:

    Encrypt both at rest and in transit.
    Avoid default settings.
    Rotate encryption keys regularly.
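
The key-rotation item above can be automated as a simple policy check. A minimal sketch under invented assumptions — with AWS KMS or Azure Key Vault, the key metadata would come from the provider’s API (and KMS can rotate keys for you) rather than from a hardcoded list:

```python
from datetime import date, timedelta

ROTATION_PERIOD = timedelta(days=90)

# Hypothetical key inventory; in practice, fetched from your KMS.
KEYS = [
    {"id": "key-2023", "created": date(2023, 1, 10), "current": False},
    {"id": "key-2024", "created": date(2024, 6, 1),  "current": True},
]

def keys_due_for_rotation(keys, today):
    """Return IDs of active keys older than the rotation period."""
    return [k["id"] for k in keys
            if k["current"] and today - k["created"] > ROTATION_PERIOD]
```

Run on a schedule, a check like this turns "rotate keys regularly" from a good intention into an enforced policy.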

    6. Monitor in Real Time
    Security is not a one-time task. You need to watch your systems around the clock. Set alerts for things like large file downloads, unusual logins, or traffic spikes.
    Centralized monitoring helps a lot. It pulls logs from all your platforms and tools into one place. That way, your security team isn’t flipping between dashboards when something goes wrong.
    Also, use automation to filter out noise and surface real threats faster.
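
The alerting described above — large downloads, unusual logins — boils down to scanning a unified event stream against thresholds. A minimal sketch with invented event fields (real pipelines consume provider audit logs such as CloudTrail or Azure Activity Logs):

```python
DOWNLOAD_ALERT_BYTES = 5 * 1024**3  # flag downloads above 5 GB

def review_events(events, known_login_countries):
    """Scan a batch of events from all clouds and surface alerts."""
    alerts = []
    for e in events:
        if e["type"] == "download" and e["bytes"] > DOWNLOAD_ALERT_BYTES:
            alerts.append(("large_download", e["user"]))
        if e["type"] == "login" and e["country"] not in known_login_countries:
            alerts.append(("unusual_login", e["user"]))
    return alerts
```

Because the events are pulled into one place first, the same rules apply uniformly no matter which cloud produced the log line.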

    7. Set Up Regular Audits and Compliance Checks
    Multi-cloud setups are great for flexibility, but complex when it comes to compliance. Each platform has its own set of controls and certifications. Managing them all can be overwhelming.
    That’s why audits matter.
    Run security checks on a regular schedule—monthly, quarterly, or after every major change. Look for misconfigured permissions, missing patches, or unsecured data. And document everything.
    Also, make sure your tools help meet regulations like GDPR, HIPAA, or PCI DSS. Automated compliance scans can help stay on top of this.
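A scheduled scan can be as simple as iterating resources against a checklist. The resource fields and rules below are illustrative, not the specific controls of GDPR, HIPAA, or PCI DSS:

```python
# Sketch of an automated misconfiguration scan across cloud resources.
# Resource shape and checks are illustrative assumptions.
def scan(resources):
    findings = []
    for r in resources:
        if r.get("public") and r.get("contains_pii"):
            findings.append((r["name"], "public bucket holds PII"))
        if not r.get("encrypted", False):
            findings.append((r["name"], "encryption at rest disabled"))
        if r.get("last_patched_days", 0) > 30:
            findings.append((r["name"], "patching overdue"))
    return findings
```

Running a scan like this monthly, and documenting the findings, covers the "look for misconfigured permissions, missing patches, or unsecured data" step above.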

    8. Prevent Data Loss with Smart Policies
    Sensitive data is always at risk. Employees might share it by mistake. Attackers might try to steal it. That’s where Data Loss Prevention (DLP) comes in.
    DLP tools block unauthorized sharing of personal data, financial records, or internal files. You can create rules like “Don’t send customer SSNs over email” or “Block uploads of credit card data to personal drives.”
    DLP also supports compliance and helps avoid lawsuits or fines when accidents happen.
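Rules like the two quoted above boil down to pattern checks on outgoing content. A simplified sketch follows; the regexes are deliberately naive, and production DLP tools use validated detectors:

```python
# Sketch of DLP content rules: flag text containing SSN-like or
# credit-card-like patterns. Regexes are simplified for illustration.
import re

RULES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def violations(text):
    # Return the names of every rule the text trips.
    return [name for name, rx in RULES.items() if rx.search(text)]

def allow_send(text):
    # Block the message if any rule matched.
    return not violations(text)
```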

    9. Automate Where You Can
    Manual work slows things down, and mistakes happen. That’s why automation is key in cloud security.
    Automate things like:

    Patch management
    Access reviews
    Backup schedules
    Security alerts

    Automation speeds up your response time. It also frees your security team to focus on serious issues, not routine tasks.
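For instance, an automated access review can be a short script run on a schedule. The account fields and the 90-day cutoff below are assumptions:

```python
# Sketch of an automated access review: flag accounts idle beyond a cutoff.
# Account shape and the idle window are illustrative assumptions.
from datetime import date, timedelta

def stale_accounts(accounts, today, max_idle_days=90):
    cutoff = today - timedelta(days=max_idle_days)
    return [a["user"] for a in accounts if a["last_login"] < cutoff]
```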

    10. Centralized Security Control
    One major downside of multi-cloud is a lack of visibility. If you’re jumping between different tools for each cloud, you miss things.
    Instead, use a centralized security management system. It collects data from all clouds, shows risk levels, flags issues, and helps you fix them from one place.
    This unified view makes a huge difference. It helps you react faster and stay ahead of threats.
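Conceptually, the unified view is just a merge-and-rank over per-cloud findings. A sketch, where the severity scale and field names are illustrative:

```python
# Sketch of a centralized view: merge findings from each cloud into
# one list ranked by severity (higher = worse). Fields are illustrative.
def unified_view(per_cloud):
    merged = [
        {"cloud": cloud, **finding}
        for cloud, findings in per_cloud.items()
        for finding in findings
    ]
    return sorted(merged, key=lambda f: f["severity"], reverse=True)
```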

    Final Thought
    Cloud providers have made data storage and computing easier than ever. But with great power comes risk. Using multiple clouds gives more choice, but also more responsibility.
    Most businesses today are not ready. Only 15% have a mature multi-cloud security plan, says the 2023 Cisco Cybersecurity Readiness Index. That means many are exposed.
    The good news? You can fix this. Start with simple steps. Know what you use. Lock it down. Watch it closely. Keep improving. And above all, treat cloud security not as a technical box to check, but as something critical to your business.
    Because in today’s world, a single breach can shut you down. And that’s too big a risk to ignore.
    Source: JUSTTOTALTECH.COM
  • Multicolor DLP 3D printing breakthrough enables dissolvable supports for complex freestanding structures

    Researchers at the University of Texas at Austin have developed a novel resin system for multicolor digital light processing (DLP) 3D printing that enables rapid fabrication of freestanding and non-assembly structures using dissolvable supports. The work, led by Zachariah A. Page and published in ACS Central Science, combines UV- and visible-light-responsive chemistries to produce materials with distinct solubility profiles, significantly streamlining post-processing.
    Current DLP workflows are often limited by the need for manually removed support structures, especially when fabricating components with overhangs or internal joints. These limitations constrain automation and increase production time and cost. To overcome this, the team designed wavelength-selective photopolymer resins that form either an insoluble thermoset or a readily dissolvable thermoplastic, depending on the light color used during printing.
    In practical terms, this allows supports to be printed in one material and rapidly dissolved using ethyl acetate, an environmentally friendly solvent, without affecting the primary structure. The supports dissolve in under 10 minutes at room temperature, eliminating the need for time-consuming sanding or cutting.
    Illustration comparing traditional DLP 3D printing with manual support removal (A) and the new multicolor DLP process with dissolvable supports (B). Image via University of Texas at Austin.
    The research was supported by the U.S. Army Research Office, the National Science Foundation, and the Robert A. Welch Foundation. The authors also acknowledge collaboration with MonoPrinter and Lawrence Livermore National Laboratory.
    High-resolution multimaterial printing
    The research showcases how multicolor DLP can serve as a precise multimaterial platform, achieving sub-100 μm feature resolution with layer heights as low as 50 μm. By tuning the photoinitiator and photoacid systems to respond selectively to ultraviolet (365 nm), violet (405 nm), or blue (460 nm) light, the team spatially controlled polymer network formation in a single vat. This enabled the production of complex, freestanding structures such as chainmail, hooks with unsupported overhangs, and fully enclosed joints, which traditionally require extensive post-processing or multi-step assembly.
    The supports, printed in a visible-light-cured thermoplastic, demonstrated sufficient mechanical integrity during the build, with tensile moduli around 160–200 MPa. Yet, upon immersion in ethyl acetate, they dissolved within 10 minutes, leaving the UV-cured thermoset structure intact. Surface profilometry confirmed that including a single interface layer of the dissolvable material between the support and the final object significantly improved surface finish, lowering roughness to under 5 μm without polishing. Computed tomography scans validated geometric fidelity, with dimensional deviations from CAD files as low as 126 μm, reinforcing the method’s capability for high-precision, solvent-cleared multimaterial printing.
    Comparison of dissolvable and traditional supports in DLP 3D printing. (A) Disk printed with soluble supports using violet light, with rapid dissolution in ethyl acetate. (B) Gravimetric analysis showing selective mass loss. (C) Mechanical properties of support and structural materials. (D) Manual support removal steps. (E) Surface roughness comparison across methods. (F) High-resolution test print demonstrating feature fidelity. Image via University of Texas at Austin.
    Towards scalable automation
    This work marks a significant step toward automated vat photopolymerization workflows. By removing manual support removal and achieving clean surface finishes with minimal roughness, the method could benefit applications in medical devices, robotics, and consumer products.
    The authors suggest that future work may involve refining resin formulations to enhance performance and print speed, possibly incorporating new reactive diluents and opaquing agents for improved resolution.
    Examples of printed freestanding and non-assembly structures, including a retainer, hook with overhangs, interlocked chains, and revolute joints, before and after dissolvable support removal. Image via University of Texas at Austin.
    Dissolvable materials as post-processing solutions
    Dissolvable supports have been a focal point in additive manufacturing, particularly for enhancing the efficiency of post-processing. In Fused Deposition Modeling (FDM), materials like Stratasys’ SR-30 have been effectively removed using specialized cleaning agents such as Oryx Additive’s SRC1, which dissolves supports at twice the speed of traditional solutions. For resin-based printing, systems like Xioneer’s Vortex EZ employ heat and fluid agitation to streamline the removal of soluble supports. In metal additive manufacturing, innovations have led to the development of chemical processes that selectively dissolve support structures without compromising the integrity of the main part. These advancements underscore the industry’s commitment to reducing manual intervention and improving the overall efficiency of 3D printing workflows.
    Read the full article in ACS Publications.
    Featured image shows: Hook geometry printed using multicolor DLP with dissolvable supports. Image via University of Texas at Austin.
    #multicolor #dlp #printing #breakthrough #enables
    Multicolor DLP 3D printing breakthrough enables dissolvable supports for complex freestanding structures
    Researchers at the University of Texas at Austin have developed a novel resin system for multicolor digital light processing3D printing that enables rapid fabrication of freestanding and non-assembly structures using dissolvable supports. The work, led by Zachariah A. Page and published in ACS Central Science, combines UV- and visible-light-responsive chemistries to produce materials with distinct solubility profiles, significantly streamlining post-processing. Current DLP workflows are often limited by the need for manually removed support structures, especially when fabricating components with overhangs or internal joints. These limitations constrain automation and increase production time and cost. To overcome this, the team designed wavelength-selective photopolymer resins that form either an insoluble thermoset or a readily dissolvable thermoplastic, depending on the light color used during printing. In practical terms, this allows supports to be printed in one material and rapidly dissolved using ethyl acetate, an environmentally friendly solvent, without affecting the primary structure. The supports dissolve in under 10 minutes at room temperature, eliminating the need for time-consuming sanding or cutting. Illustration comparing traditional DLP 3D printing with manual support removaland the new multicolor DLP process with dissolvable supports. Image via University of Texas at Austin. The research was supported by the U.S. Army Research Office, the National Science Foundation, and the Robert A. Welch Foundation. The authors also acknowledge collaboration with MonoPrinter and Lawrence Livermore National Laboratory. High-resolution multimaterial printing The research showcases how multicolor DLP can serve as a precise multimaterial platform, achieving sub-100 μm feature resolution with layer heights as low as 50 μm. 
By tuning the photoinitiator and photoacid systems to respond selectively to ultraviolet, violet, or bluelight, the team spatially controlled polymer network formation in a single vat. This enabled the production of complex, freestanding structures such as chainmail, hooks with unsupported overhangs, and fully enclosed joints, which traditionally require extensive post-processing or multi-step assembly. The supports, printed in a visible-light-cured thermoplastic, demonstrated sufficient mechanical integrity during the build, with tensile moduli around 160–200 MPa. Yet, upon immersion in ethyl acetate, they dissolved within 10 minutes, leaving the UV-cured thermoset structure intact. Surface profilometry confirmed that including a single interface layer of the dissolvable material between the support and the final object significantly improved surface finish, lowering roughness to under 5 μm without polishing. Computed tomography scans validated geometric fidelity, with dimensional deviations from CAD files as low as 126 μm, reinforcing the method’s capability for high-precision, solvent-cleared multimaterial printing. Comparison of dissolvable and traditional supports in DLP 3D printing.Disk printed with soluble supports using violet light, with rapid dissolution in ethyl acetate.Gravimetric analysis showing selective mass loss.Mechanical properties of support and structural materials.Manual support removal steps.Surface roughness comparison across methods.High-resolution test print demonstrating feature fidelity. Image via University of Texas at Austin. Towards scalable automation This work marks a significant step toward automated vat photopolymerization workflows. By removing manual support removal and achieving clean surface finishes with minimal roughness, the method could benefit applications in medical devices, robotics, and consumer products. 
The authors suggest that future work may involve refining resin formulations to enhance performance and print speed, possibly incorporating new reactive diluents and opaquing agents for improved resolution. Examples of printed freestanding and non-assembly structures, including a retainer, hook with overhangs, interlocked chains, and revolute joints, before and after dissolvable support removal. Image via University of Texas at Austin. Dissolvable materials as post-processing solutions Dissolvable supports have been a focal point in additive manufacturing, particularly for enhancing the efficiency of post-processing. In Fused Deposition Modeling, materials like Stratasys’ SR-30 have been effectively removed using specialized cleaning agents such as Oryx Additive‘s SRC1, which dissolves supports at twice the speed of traditional solutions. For resin-based printing, systems like Xioneer‘s Vortex EZ employ heat and fluid agitation to streamline the removal of soluble supports . In metal additive manufacturing, innovations have led to the development of chemical processes that selectively dissolve support structures without compromising the integrity of the main part . These advancements underscore the industry’s commitment to reducing manual intervention and improving the overall efficiency of 3D printing workflows. Read the full article in ACS Publications. Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news. You can also follow us onLinkedIn and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content. At 3DPI, our mission is to deliver high-quality journalism, technical insight, and industry intelligence to professionals across the AM ecosystem.Help us shape the future of 3D printing industry news with our2025 reader survey. Featured image shows: Hook geometry printed using multicolor DLP with dissolvable supports. Image via University of Texas at Austin. 
#multicolor #dlp #printing #breakthrough #enables
    3DPRINTINGINDUSTRY.COM
    Multicolor DLP 3D printing breakthrough enables dissolvable supports for complex freestanding structures
    Researchers at the University of Texas at Austin have developed a novel resin system for multicolor digital light processing (DLP) 3D printing that enables rapid fabrication of freestanding and non-assembly structures using dissolvable supports. The work, led by Zachariah A. Page and published in ACS Central Science, combines UV- and visible-light-responsive chemistries to produce materials with distinct solubility profiles, significantly streamlining post-processing. Current DLP workflows are often limited by the need for manually removed support structures, especially when fabricating components with overhangs or internal joints. These limitations constrain automation and increase production time and cost. To overcome this, the team designed wavelength-selective photopolymer resins that form either an insoluble thermoset or a readily dissolvable thermoplastic, depending on the light color used during printing. In practical terms, this allows supports to be printed in one material and rapidly dissolved using ethyl acetate, an environmentally friendly solvent, without affecting the primary structure. The supports dissolve in under 10 minutes at room temperature, eliminating the need for time-consuming sanding or cutting. Illustration comparing traditional DLP 3D printing with manual support removal (A) and the new multicolor DLP process with dissolvable supports (B). Image via University of Texas at Austin. The research was supported by the U.S. Army Research Office, the National Science Foundation, and the Robert A. Welch Foundation. The authors also acknowledge collaboration with MonoPrinter and Lawrence Livermore National Laboratory. High-resolution multimaterial printing The research showcases how multicolor DLP can serve as a precise multimaterial platform, achieving sub-100 μm feature resolution with layer heights as low as 50 μm. 
By tuning the photoinitiator and photoacid systems to respond selectively to ultraviolet (365 nm), violet (405 nm), or blue (460 nm) light, the team spatially controlled polymer network formation in a single vat. This enabled the production of complex, freestanding structures such as chainmail, hooks with unsupported overhangs, and fully enclosed joints, which traditionally require extensive post-processing or multi-step assembly. The supports, printed in a visible-light-cured thermoplastic, demonstrated sufficient mechanical integrity during the build, with tensile moduli around 160–200 MPa. Yet, upon immersion in ethyl acetate, they dissolved within 10 minutes, leaving the UV-cured thermoset structure intact. Surface profilometry confirmed that including a single interface layer of the dissolvable material between the support and the final object significantly improved surface finish, lowering roughness to under 5 μm without polishing. Computed tomography scans validated geometric fidelity, with dimensional deviations from CAD files as low as 126 μm, reinforcing the method’s capability for high-precision, solvent-cleared multimaterial printing. Comparison of dissolvable and traditional supports in DLP 3D printing. (A) Disk printed with soluble supports using violet light, with rapid dissolution in ethyl acetate. (B) Gravimetric analysis showing selective mass loss. (C) Mechanical properties of support and structural materials. (D) Manual support removal steps. (E) Surface roughness comparison across methods. (F) High-resolution test print demonstrating feature fidelity. Image via University of Texas at Austin. Towards scalable automation This work marks a significant step toward automated vat photopolymerization workflows. By removing manual support removal and achieving clean surface finishes with minimal roughness, the method could benefit applications in medical devices, robotics, and consumer products. 
The authors suggest that future work may involve refining resin formulations to enhance performance and print speed, possibly incorporating new reactive diluents and opaquing agents for improved resolution. Examples of printed freestanding and non-assembly structures, including a retainer, hook with overhangs, interlocked chains, and revolute joints, before and after dissolvable support removal. Image via University of Texas at Austin. Dissolvable materials as post-processing solutions Dissolvable supports have been a focal point in additive manufacturing, particularly for enhancing the efficiency of post-processing. In Fused Deposition Modeling (FDM), materials like Stratasys’ SR-30 have been effectively removed using specialized cleaning agents such as Oryx Additive‘s SRC1, which dissolves supports at twice the speed of traditional solutions. For resin-based printing, systems like Xioneer‘s Vortex EZ employ heat and fluid agitation to streamline the removal of soluble supports . In metal additive manufacturing, innovations have led to the development of chemical processes that selectively dissolve support structures without compromising the integrity of the main part . These advancements underscore the industry’s commitment to reducing manual intervention and improving the overall efficiency of 3D printing workflows. Read the full article in ACS Publications. Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news. You can also follow us onLinkedIn and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content. At 3DPI, our mission is to deliver high-quality journalism, technical insight, and industry intelligence to professionals across the AM ecosystem.Help us shape the future of 3D printing industry news with our2025 reader survey. Featured image shows: Hook geometry printed using multicolor DLP with dissolvable supports. Image via University of Texas at Austin.
  • Mapping the Expanding Role of 3D Printing in Micro and Nano Device Fabrication

    A new review by researchers from the Beijing University of Posts and Telecommunications, CETC 54 (54th Research Institute of Electronics Technology Group Corporation), Sun Yat-sen University, Shenzhen University, and the University of Electronic Science and Technology of China surveys the latest developments in 3D printing for microelectronic and microfluidic applications. The paper, released on Springer Nature Link, highlights how additive manufacturing methods have reached sub-micron precision, allowing the production of devices previously limited to traditional cleanroom fabrication.
    High-resolution techniques like two-photon polymerization (2PP), electrohydrodynamic jet printing, and computed axial lithography (CAL) are now being used to create structures with feature sizes down to 100 nanometers. These capabilities have broad implications for biomedical sensors, flexible electronics, and microfluidic systems used in diagnostics and environmental monitoring.
    Overview of 3D printing applications for microelectronic and microfluidic device fabrication. Image via Springer Nature.
    Classification of High-Precision Additive Processes
    Seven categories of additive manufacturing, as defined by the American Society for Testing and Materials (ASTM), serve as the foundation for modern 3D printing workflows: binder jetting, directed energy deposition (DED), material extrusion (MEX), material jetting, powder bed fusion (PBF), sheet lamination (SHL), and vat photopolymerization (VP).
    Among these, 2PP provides the finest resolution, enabling the fabrication of nanoscale features for optical communication components and MEMS support structures. Inkjet-based material jetting and direct ink writing (DIW) allow patterned deposition of conductive or biological materials, including stretchable gels and ionic polymers. Binder jetting, which operates by spraying adhesives onto powdered substrates, is particularly suited for large-volume structures using metals or ceramics with minimal thermal stress.
    Fused deposition modeling, a form of material extrusion, continues to be widely used for its low cost and compatibility with thermoplastics. Although limited in resolution, it remains practical for building mechanical supports or sacrificial molds in soft lithography.
    Various micro-scale 3D printing strategies. Image via Springer Nature.
    3D Printing in Microelectronics, MEMS, and Sensing
    Additive manufacturing is now routinely used to fabricate microsensors, microelectromechanical system (MEMS) actuators, and flexible electronics. Compared to traditional lithographic processes, 3D printing reduces material waste and bypasses the need for masks or etching steps.
    In one example cited by the review, flexible multi-directional sensors were printed directly onto skin-like substrates using a customized FDM platform. Another case involved a cantilever support for a micro-accelerometer produced via 2PP and coated with conductive materials through evaporation. These examples show how additive techniques can fabricate both support and functional layers with high geometric complexity.
    MEMS actuators fabricated with additive methods often combine printed scaffolds with conventional micromachining. A 2PP-printed spiral structure was used to house liquid metal in an electrothermal actuator. Separately, FDM was used to print a MEMS switch, combining conductive PLA and polyvinyl alcohol as the sacrificial layer. However, achieving the mechanical precision needed for switching elements remains a barrier for fully integrated use.
    3D printing material and preparation methods. Image via Springer Nature.
    Development of Functional Inks and Composite Materials
    Microelectronic applications depend on the availability of printable materials with specific electrical, mechanical, or chemical properties. MXene-based conductive inks, metal particle suspensions, and piezoelectric composites are being optimized for use in DIW, inkjet, and light-curing platforms.
    Researchers have fabricated planar asymmetric micro-supercapacitors using ink composed of nickel sulfide on nitrogen-doped MXene. These devices demonstrate increased voltage windows (up to 1.5 V) and volumetric capacitance, meeting the demands of compact power systems. Other work involves composite hydrogels with ionic conductivity and high tensile stretch, used in flexible biosensing applications.
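Why a wider voltage window matters follows from the standard capacitor energy relation E = ½CV²: stored energy grows with the square of the voltage. A quick illustrative calculation (the capacitance value below is a placeholder for the sake of the ratio, not a figure from the review):

```python
# Illustrative only: supercapacitor stored energy scales as E = 1/2 * C * V^2,
# so widening the voltage window from 1.0 V to 1.5 V multiplies the energy
# by (1.5 / 1.0)^2 = 2.25 at the same capacitance.
def stored_energy_joules(capacitance_farads: float, voltage_v: float) -> float:
    """Energy stored in a capacitor charged to a given voltage."""
    return 0.5 * capacitance_farads * voltage_v ** 2

c = 0.1  # hypothetical 100 mF device (placeholder value)
e_low = stored_energy_joules(c, 1.0)   # 1.0 V window
e_high = stored_energy_joules(c, 1.5)  # 1.5 V window
print(f"Energy gain from wider window: {e_high / e_low:.2f}x")
```

The quadratic dependence is why voltage-window gains are often more valuable than proportional capacitance gains in compact power systems.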
    PEDOT:PSS, a common conductive polymer, has been formulated into a high-resolution ink using lyophilization and re-dispersion in photocurable matrices. These formulations are used to create electrode arrays for neural probes and flexible circuits. Multiphoton lithography has also been applied to print complex 3D structures from organic semiconductor resins.
    Bioelectronic applications are driving the need for biocompatible inks that can perform reliably in wet and dynamic environments. One group incorporated graphene nanoplatelets and carbon nanotubes into ink for multi-jet fusion, producing pressure sensors with high mechanical durability and signal sensitivity.
    3D printed electronics achieved through the integration of active initiators into printing materials. Image via Springer Nature.
    Microfluidic Devices Fabricated via Direct and Indirect Methods
    Microfluidic systems have traditionally relied on soft lithography techniques using polydimethylsiloxane (PDMS). Additive manufacturing now offers alternatives through both direct printing of fluidic chips and indirect fabrication using 3D printed molds.
    Direct fabrication using SLA, DLP, or inkjet-based systems allows the rapid prototyping of chips with integrated reservoirs and channels. However, achieving sub-100 µm channels requires careful calibration. One group demonstrated channels as small as 18 µm × 20 µm using a customized DLP printer.
    Indirect fabrication relies on printing sacrificial or reusable molds, followed by casting and demolding. PLA, ABS, and resin-based molds are commonly used, depending on whether water-soluble or solvent-dissolvable materials are preferred. These techniques are compatible with PDMS and reduce reliance on photolithography equipment.
    Surface roughness and optical transparency remain concerns. FDM-printed molds often introduce layer artifacts, while uncured resin in SLA methods can leach toxins or inhibit PDMS curing. Some teams address these issues by polishing surfaces post-print or chemically treating molds to improve release characteristics.
    Integration and Future Directions for Microdevices
    3D printed microfluidic devices in biology and chemistry. Image via Springer Nature.
    3D printing is increasingly enabling the integration of structural, electrical, and sensing components into single build processes. Multi-material printers are beginning to produce substrates, conductive paths, and dielectric layers in tandem, although component embedding still requires manual intervention.
    Applications in wearable electronics, flexible sensors, and soft robotics continue to expand. Stretchable conductors printed onto elastomeric backings are being used to simulate mechanoreceptors and thermoreceptors for electronic skin systems. Piezoelectric materials such as BaTiO₃-PVDF composites are under investigation for printed actuators and energy harvesters.
    MEMS fabrication remains constrained by the mechanical limitations of printable materials. Silicon continues to dominate high-performance actuators due to its stiffness and precision. Additive methods are currently better suited for producing packaging, connectors, and sacrificial scaffolds within MEMS systems.
    Multi-photon and light-assisted processes are being explored for producing active devices like microcapacitors and accelerometers. Recent work demonstrated the use of 2PP to fabricate nitrogen-vacancy center–based quantum sensors, capable of detecting thermal and magnetic fluctuations in microscopic environments.
    As materials, resolution, and system integration improve, 3D printing is poised to shift from peripheral use to a central role in microsystem design and production. 
    3D printing micro-nano devices. Image via Springer Nature.
    Ready to discover who won the 2024 3D Printing Industry Awards?
    Subscribe to the 3D Printing Industry newsletter to stay updated with the latest news and insights.
    Take the 3DPI Reader Survey — shape the future of AM reporting in under 5 minutes.
    Featured image shows an overview of 3D printing applications for microelectronic and microfluidic device fabrication. Image via Springer Nature.

    Anyer Tenorio Lara
    Anyer Tenorio Lara is an emerging tech journalist passionate about uncovering the latest advances in technology and innovation. With a sharp eye for detail and a talent for storytelling, Anyer has quickly made a name for himself in the tech community. Anyer's articles aim to make complex subjects accessible and engaging for a broad audience. In addition to his writing, Anyer enjoys participating in industry events and discussions, eager to learn and share knowledge in the dynamic world of technology.
  • Researchers develop automatic exposure system for volumetric 3D printing

    A team from the National Research Council Canada and the University of Victoria has developed a fully automatic exposure control system for tomographic volumetric additive manufacturing (VAM), a technique that fabricates entire objects at once using projected light patterns inside a rotating resin vat, significantly improving the process’s accuracy and repeatability. The results, shared in a non-peer-reviewed preprint on arXiv, show that the technique enables hands-free printing with comparable or better feature resolution than commercial SLA and DLP printers, while printing parts up to ten times faster.
    Dubbed AE-VAM, the new system uses real-time monitoring of light scattering inside the resin to automatically terminate exposure during printing. This eliminates the need for manual adjustments, which previously limited the consistency and commercial viability of VAM.
    The researchers demonstrated their system by printing 25 iterations of the widely used 3DBenchy model. AE-VAM achieved an average RMS surface deviation of 0.100 mm and inter-print variation of 0.053 mm. Notably, all fine features, including blind holes, chimneys, and underside text, were successfully reproduced, outperforming prints from commercial SLA and DLP systems in some key respects.
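For readers unfamiliar with the metric, an RMS surface deviation figure like the 0.100 mm above is the root-mean-square of point-wise deviations between the scanned print and the reference model. A minimal sketch (the sample values are invented for illustration, not AE-VAM data):

```python
import math

def rms_deviation(deviations_mm):
    """Root-mean-square of signed point-wise surface deviations (mm)."""
    return math.sqrt(sum(d * d for d in deviations_mm) / len(deviations_mm))

# Hypothetical signed deviations (mm) sampled over a scanned surface.
sample = [0.08, -0.12, 0.10, -0.09, 0.11]
print(f"RMS deviation: {rms_deviation(sample):.3f} mm")
```

Because deviations are squared, the RMS figure penalizes occasional large errors more than a plain average of absolute deviations would.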
    Schematic of the AE-VAM system, which uses light scattering measurements to determine the optimal exposure endpoint in real-time. Image via Antony Orth et al., National Research Council Canada / University of Victoria.
    From lab to potential production
    Tomographic VAM differs from conventional 3D printing in that it exposes the entire resin volume at once, rather than layer by layer. While this allows for faster printing and the elimination of support structures, previous implementations suffered from unpredictable exposure levels due to resin reuse and light diffusion, often requiring experienced operators and frequent recalibration.
    AE-VAM addresses this by using a simple optical feedback system that measures scattered red light during curing. When the measured signal reaches a calibrated threshold, the UV exposure is halted automatically. According to the authors, this makes the process “insensitive to geometry” and viable for multi-part assembly printing, such as gear systems and threaded components.
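The threshold-based termination described above can be sketched as a simple polling loop. The sensor and UV-projector hooks below are hypothetical stand-ins, not the authors' implementation:

```python
# Hypothetical sketch of the closed-loop idea behind AE-VAM: poll a
# scatter-signal sensor and halt UV exposure once a calibrated threshold
# is crossed. `read_scatter_signal` and `stop_uv` are stand-in callables.
def run_exposure(read_scatter_signal, stop_uv, threshold, max_steps=10_000):
    """Expose until the scattering signal reaches `threshold`, then halt."""
    for step in range(max_steps):
        if read_scatter_signal() >= threshold:
            stop_uv()
            return step  # exposure terminated automatically
    stop_uv()  # safety stop if the threshold is never reached
    return max_steps

# Toy simulation: the signal grows monotonically as the resin gels.
signal = iter(range(100))
halted = []
steps = run_exposure(lambda: next(signal), lambda: halted.append(True), threshold=42)
print(steps, halted)
```

The appeal of this kind of feedback loop is that the stop condition depends only on the measured optical state of the resin, not on the printed geometry, which matches the authors' "insensitive to geometry" claim.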
    A step toward commercial VAM
    The team benchmarked AE-VAM against the Formlabs Form 2, Form 4, and Asiga PRO4K. While the Form 2 achieved slightly higher accuracy, AE-VAM outperformed on small feature reproduction and consistency, especially as resin was re-used. The system printed the same 3DBenchy model in under a minute, compared to over 8 minutes on the fastest SLA system.
    “AE-VAM has repeatability and accuracy specifications that are within the range measured for commercial systems,” the authors wrote, noting that it also enables resin reuse up to five times with minimal degradation. They anticipate that broader testing of AE-VAM with different resins could bring the technology closer to commercialization. The team notes the approach is computationally lightweight and suitable for general-purpose use with minimal operator training.
    The work has been funded by the National Research Council of Canada’s Ideation program. Several authors are listed as inventors on provisional patents related to the system.
    AE-VAM-printed mechanical components: a functional ¼-20 screw and nut, and a gear assembly with 50 μm tolerances. Parts could also be mated with standard metal hardware. Image via Antony Orth et al., National Research Council Canada / University of Victoria.
    Volumetric 3D printing gains momentum across research and industry
    Volumetric additive manufacturing (VAM) has garnered increasing attention in recent years as a fast, support-free alternative to conventional layer-based 3D printing. Previous VAM advancements include Manifest Technologies’ launch of a high-speed P-VAM evaluation kit aimed at commercial adoption, and EPFL’s demonstration of opaque resin printing using volumetric techniques. Meanwhile, researchers at Utrecht University have leveraged volumetric bioprinting to fabricate miniature liver models for regenerative medicine, and University College London explored rapid drug-loaded tablet fabrication. More recently, a holographic variant of tomographic VAM showed promise in reducing print times and improving light efficiency. These developments underscore the broad applicability and accelerating pace of innovation in volumetric 3D printing technologies.
    Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.
    You can also follow us on LinkedIn and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content. At 3DPI, our mission is to deliver high-quality journalism, technical insight, and industry intelligence to professionals across the AM ecosystem. Help us shape the future of 3D printing industry news with our 2025 reader survey.
    Feature image shows comparison of 3DBenchy models printed with VAM, SLA and DLP. Antony Orth et al., National Research Council Canada / University of Victoria.
  • Top 10 Best Practices for Effective Data Protection

    May 16, 2025The Hacker NewsZero Trust / Data Protection

    Data is the lifeblood of productivity, and protecting sensitive data is more critical than ever. With cyber threats evolving rapidly and data privacy regulations tightening, organizations must stay vigilant and proactive to safeguard their most valuable assets. But how do you build an effective data protection framework?
    In this article, we'll explore data protection best practices from meeting compliance requirements to streamlining day-to-day operations. Whether you're securing a small business or a large enterprise, these top strategies will help you build a strong defense against breaches and keep your sensitive data safe.
    1. Define your data goals
    When tackling any data protection project, the first step is always to understand the outcome you want.
    First, understand what data you need to protect. Identify your crown jewel data, and where you THINK it lives. (It's probably more distributed than you expect, but this is a key step to help you define your protection focus.) Work with business owners to find any data outside the typical scope that you need to secure.
    This is all to answer the question: "What data would hurt the company if it were breached?"
    Second, work with the C-suite and board of directors to define what your data protection program will look like. Understand your budget, your risk tolerance to data loss, and what resources you have (or may need). Define how aggressive your protection program will be so you can balance risk and productivity. All organizations need to strike a balance between the two.
    2. Automate data classification
    Next, begin your data classification journey—that is, find your data and catalog it. This is often the most difficult step in the journey, as organizations create new data all the time.
    Your first instinct may be to try to keep up with all your data, but this may be a fool's errand. The key to success is to have classification capabilities everywhere data moves (endpoint, inline, cloud), and rely on your DLP policy to jump in when risk arises. (More on this later.) Automation in data classification is becoming a lifesaver thanks to the power of AI. AI-powered classification can be faster and more accurate than traditional ways of classifying data with DLP. Ensure any solution you are evaluating can use AI to instantly uncover and discover data without human input.
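For a sense of the baseline that AI-powered classification improves on, traditional DLP classification is largely pattern matching. The sketch below flags text containing a plausible credit card number, using a generic regex plus a Luhn checksum to cut false positives; it is an illustrative toy, not any vendor's engine.

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to weed out false-positive card numbers."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

# Runs of 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def classify(text: str) -> str:
    """Label text 'sensitive' if it contains a checksum-valid card number."""
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            return "sensitive"
    return "public"
```

Real engines layer many such detectors (and, increasingly, ML models) and weigh them together, but the core idea of pattern-plus-validation is the same.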
    3. Focus on zero trust security for access control
    Adopting a zero trust architecture is crucial for modern data protection strategies to be effective. Based on the maxim "never trust, always verify," zero trust assumes security threats can come from inside or outside your network. Every access request is authenticated and authorized, greatly reducing the risk of unauthorized access and data breaches.
    Look for a zero trust solution that emphasizes the importance of least-privileged access control between users and apps. With this approach, users never access the network, reducing the ability for threats to move laterally and propagate to other entities and data on the network. The principle of least privilege ensures that users have only the access they need for their roles, reducing the attack surface.
    4. Centralize DLP for consistent alerting
    Data loss prevention (DLP) technology is the core of any data protection program. That said, keep in mind that DLP is only a subset of a larger data protection solution. DLP enables the classification of data (along with AI) to ensure you can accurately find sensitive data. Ensure your DLP engine can consistently alert correctly on the same piece of data across devices, networks, and clouds.
    The best way to ensure this is to embrace a centralized DLP engine that can cover all channels at once. Avoid point products that bring their own DLP engine (endpoint, network, CASB), as this can lead to multiple alerts on one piece of moving data, slowing down incident management and response.
    Look to embrace Gartner's security service edge approach, which delivers DLP from a centralized cloud service. Focus on vendors that support the most channels so that, as your program grows, you can easily add protection across devices, inline, and cloud.
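To make the duplicate-alert problem concrete, here is a toy sketch of channel-agnostic alerting: the incident key is derived from the data itself via an exact content hash (a stand-in for the far more robust fingerprinting real DLP engines use), so the same file seen on email and endpoint yields one incident, not two.

```python
import hashlib

def incident_key(content: bytes, policy: str) -> str:
    """One key per (data, policy) pair, regardless of channel."""
    return f"{policy}:{hashlib.sha256(content).hexdigest()[:16]}"

seen = set()

def report(channel: str, content: bytes, policy: str) -> bool:
    """Return True only for the first alert on this data/policy pair."""
    key = incident_key(content, policy)
    if key in seen:
        return False  # same data already alerted on another channel; suppress
    seen.add(key)
    return True
```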
    5. Ensure blocking across key loss channels
    Once you have a centralized DLP, focus on the most important data loss channels to your organization. (You'll need to add more channels as you grow, so ensure your platform can accommodate all of them and grow with you.) The most important channels can vary, but every organization focuses on certain common ones:

    Web/Email: The most common ways users accidentally send sensitive data outside the organization.
    SaaS data (CASB): Another common loss vector, as users can easily share data externally.
    Endpoint: A key focus for many organizations looking to lock down USB, printing, and network shares.
    Unmanaged devices/BYOD: If you have a large BYOD footprint, browser isolation is an innovative way to secure data headed to these devices without an agent or VDI. Devices are placed in an isolated browser, which enforces DLP inspection and prevents cut, paste, download, or print. (More on this later.)
    SaaS posture control (SSPM/supply chain): SaaS platforms like Microsoft 365 can often be misconfigured. Continuously scanning for gaps and risky third-party integrations is key to minimizing data breaches.
    IaaS posture control (DSPM): Most companies have a lot of sensitive data across AWS, Azure, or Google Cloud. Finding it all, and closing risky misconfigurations that expose it, is the driver behind data security posture management (DSPM).

    6. Understand and maintain compliance
    Getting a handle on compliance is a key step for great data protection. You may need to keep up with many different regulations, depending on your industry. These rules are there to make sure personal data is safe and organizations are handling it the right way. Stay informed on the latest mandates to avoid fines and protect your brand, all while building trust with your customers and partners.
    To keep on top of compliance, strong data governance practices are a must. This means regular security audits, keeping good records, and making sure your team is well-trained. Embrace technological approaches that help drive better compliance, such as data encryption and monitoring tools. By making compliance part of your routine, you can stay ahead of risks and ensure your data protection is both effective and in line with requirements.
    7. Strategize for BYOD
    Although not a concern for every organization, unmanaged devices present a unique challenge for data protection. Your organization doesn't own or have agents on these devices, so you can't ensure their security posture or patch level, wipe them remotely, and so on. Yet their users often have legitimate reasons to access your critical data.
    You don't want sensitive data to land on a BYOD endpoint and vanish from your sight. Until now, solutions to secure BYOD have revolved around CASB reverse proxies and VDI approaches.
    Browser isolation provides an effective and elegant way to secure data without the cost and complexity of those approaches. By placing BYOD endpoints in an isolated browser, you can enforce great data protection without an endpoint agent. Data is streamed to the device as pixels, allowing interaction with the data but preventing download and cut/paste. You can also apply DLP inspection to the session and data based on your policy.
    8. Control your cloud posture with SSPM and DSPM
    Cloud posture is one of the most commonly overlooked aspects of data hygiene. SaaS platforms and public clouds have many settings that DevOps teams without security expertise can easily overlook. The resulting misconfigurations can lead to dangerous gaps that expose sensitive data. Many of the largest data breaches in history have happened because such gaps let adversaries walk right in.
    SaaS security posture management (SSPM) and data security posture management (DSPM) are designed to uncover and help remediate these risks. By leveraging API access, SSPM and DSPM can continuously scan your cloud deployment, locate sensitive data, identify misconfigurations, and remediate exposures. Some SSPM approaches also feature integrated compliance with frameworks like NIST, ISO, and SOC 2.
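As a rough illustration of the scanning loop such tools run, the sketch below checks toy resource records against two hypothetical rules. The field names and rule set are invented for the example, not any SSPM/DSPM product's schema; real scanners pull these records from cloud provider APIs.

```python
# Each rule is (name, predicate-that-returns-True-when-the-check-FAILS).
RULES = [
    ("public_bucket", lambda r: r.get("public_access") is True),
    ("no_encryption", lambda r: not r.get("encrypted", False)),
]

def scan(resources):
    """Return (resource_id, rule_name) for every failed check."""
    findings = []
    for res in resources:
        for name, failed in RULES:
            if failed(res):
                findings.append((res["id"], name))
    return findings
```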
    9. Don't forget about data security training
    Data security training is often where data protection programs fall apart. If users don't understand or support your data protection goals, dissent can build across your teams and derail your program. Spend time building a training program that highlights your objectives and the value data protection will bring the organization. Ensure upper management supports and sponsors your data security training initiatives.
    Some solutions offer built-in user coaching with incident management workflows. This valuable feature allows you to notify users about incidents via Slack or email for justification, education, and policy adjustment if needed. Involving users in their incidents helps promote awareness of data protection practices as well as how to identify and safely handle sensitive content.
    10. Automate incident management and workflows
    Lastly, no data protection program would be complete without day-to-day operations. Ensuring your team can efficiently manage and quickly respond to incidents is critical. One way to ensure streamlined processes is to embrace a solution that enables workflow automation.
    Designed to automate common incident management and response tasks, this feature can be a lifesaver for IT teams. By saving time and money while improving response times, IT teams can do more with less. Look for solutions that have a strong workflow automation offering integrated into the SSE to make incident management efficient and centralized.
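A minimal sketch of what such an automated workflow might look like, routing each incident to an action by severity; the severity levels and action names here are illustrative, and a real SSE platform drives this through its own automation engine.

```python
def route(incident: dict) -> str:
    """Map an incident to an automated response action by severity."""
    sev = incident.get("severity", "low")
    if sev == "critical":
        return "page-oncall"   # immediate human response
    if sev == "high":
        return "open-ticket"   # tracked remediation
    return "coach-user"        # notify the user via Slack/email for education
```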
    Bringing it all together
    Data protection is not a one-time project; it's an ongoing commitment. Staying informed of data protection best practices will help you build a resilient defense against evolving threats and ensure your organization's long-term success.
    Remember: investing in data protection is not just about mitigating risks and preventing data breaches. It's also about building trust, maintaining your reputation, and unlocking new opportunities for growth.
    Learn more at zscaler.com/security

    Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Twitter and LinkedIn to read more exclusive content we post.

    #top #best #practices #effective #data
    Top 10 Best Practices for Effective Data Protection
    May 16, 2025The Hacker NewsZero Trust / Data Protection Data is the lifeblood of productivity, and protecting sensitive data is more critical than ever. With cyber threats evolving rapidly and data privacy regulations tightening, organizations must stay vigilant and proactive to safeguard their most valuable assets. But how do you build an effective data protection framework? In this article, we'll explore data protection best practices from meeting compliance requirements to streamlining day-to-day operations. Whether you're securing a small business or a large enterprise, these top strategies will help you build a strong defense against breaches and keep your sensitive data safe. 1. Define your data goals When tackling any data protection project, the first step is always to understand the outcome you want. First, understand what data you need to protect. Identify your crown jewel data, and where you THINK it lives.Work with business owners to find any data outside the typical scope that you need to secure. This is all to answer the question: "What data would hurt the company if it were breached?" Second, work with the C-suit and board of directors to define what your data protection program will look like. Understand your budget, your risk tolerance to data loss, and what resources you have. Define how aggressive your protection program will be so you can balance risk and productivity. All organizations need to strike a balance between the two. 2. Automate data classification Next, begin your data classification journey—that is, find your data and catalog it. This is often the most difficult step in the journey, as organizations create new data all the time. Your first instinct may be to try to keep up with all your data, but this may be a fool's errand. 
The key to success is to have classification capabilities everywhere data moves, and rely on your DLP policy to jump in when risk arises.Automation in data classification is becoming a lifesaver thanks to the power of AI. AI-powered classification can be faster and more accurate than traditional ways of classifying data with DLP. Ensure any solution you are evaluating can use AI to instantly uncover and discover data without human input. 3. Focus on zero trust security for access control Adopting a zero trust architecture is crucial for modern data protection strategies to be effective. Based on the maxim "never trust, always verify," zero trust assumes security threats can come from inside or outside your network. Every access request is authenticated and authorized, greatly reducing the risk of unauthorized access and data breaches. Look for a zero trust solution that emphasizes the importance of least-privileged access control between users and apps. With this approach, users never access the network, reducing the ability for threats to move laterally and propagate to other entities and data on the network. The principle of least privilege ensures that users have only the access they need for their roles, reducing the attack surface. 4. Centralize DLP for consistent alerting Data loss preventiontechnology is the core of any data protection program. That said, keep in mind that DLP is only a subset of a larger data protection solution. DLP enables the classification of datato ensure you can accurately find sensitive data. Ensure your DLP engine can consistently alert correctly on the same piece of data across devices, networks, and clouds. The best way to ensure this is to embrace a centralized DLP engine that can cover all channels at once. Avoid point products that bring their own DLP engine, as this can lead to multiple alerts on one piece of moving data, slowing down incident management and response. 
Look to embrace Gartner's security service edge approach, which delivers DLP from a centralized cloud service. Focus on vendors that support the most channels so that, as your program grows, you can easily add protection across devices, inline, and cloud. 5. Ensure blocking across key loss channels Once you have a centralized DLP, focus on the most important data loss channels to your organization.The most important channels can vary, but every organization focuses on certain common ones: Web/Email: The most common ways users accidentally send sensitive data outside the organization. SaaS data: Another common loss vector, as users can easily share data externally. Endpoint: A key focus for many organizations looking to lock down USB, printing, and network shares. Unmanaged devices/BYOD: If you have a large BYOD footprint, browser isolation is an innovative way to secure data headed to these devices without an agent or VDI. Devices are placed in an isolated browser, which enforces DLP inspection and prevents cut, paste, download, or print.SaaS posture control: SaaS platforms like Microsoft 365 can often be misconfigured. Continuously scanning for gaps and risky third-party integrations is key to minimizing data breaches. IaaS posture control: Most companies have a lot of sensitive data across AWS, Azure, or Google Cloud. Finding it all, and closing risky misconfigurations that expose it, is the driver behind data security posture management. 6. Understand and maintain compliance Getting a handle on compliance is a key step for great data protection. You may need to keep up with many different regulations, depending on your industry. These rules are there to make sure personal data is safe and organizations are handling it the right way. Stay informed on the latest mandates to avoid fines and protect your brand, all while building trust with your customers and partners. To keep on top of compliance, strong data governance practices are a must. 
This means regular security audits, keeping good records, and making sure your team is well-trained. Embrace technological approaches that help drive better compliance, such as data encryption and monitoring tools. By making compliance part of your routine, you can stay ahead of risks and ensure your data protection is both effective and in line with requirements. 7. Strategize for BYOD Although not a concern for every organization, unmanaged devices present a unique challenge for data protection. Your organization doesn't own or have agents on these devices, so you can't ensure their security posture or patch level, wipe them remotely, and so on. Yet their usersoften have legitimate reasons to access your critical data. You don't want sensitive data to land on a BYOD endpoint and vanish from your sight. Until now, solutions to secure BYOD have revolved around CASB reverse proxiesand VDI approaches. Browser isolation provides an effective and eloquent way to secure data without the cost and complexity of those approaches. By placing BYOD endpoints in an isolated browser, you can enforce great data protection without an endpoint agent. Data is streamed to the device as pixels, allowing interaction with the data but preventing download and cut/paste. You can also apply DLP inspection to the session and data based on your policy. 8. Control your cloud posture with SSPM and DSPM Cloud posture is one of the most commonly overlooked aspects of data hygiene. SaaS platforms and public clouds have many settings that DevOps teams without security expertise can easily overlook. The resulting misconfigurations can lead to dangerous gaps that expose sensitive data. Many of the largest data breaches in history have happened because such gaps let adversaries walk right in. SaaS security posture managementand data security posture managementare designed to uncover and help remediate these risks. 
By leveraging API access, SSPM and DSPM can continuously scan your cloud deployment, locate sensitive data, identify misconfigurations, and remediate exposures. Some SSPM approaches also feature integrated compliance with frameworks like NIST, ISO, and SOC 2. 9. Don't forget about data security training Data security training is often where data protection programs fall apart. If users don't understand or support your data protection goals, dissent can build across your teams and derail your program. Spend time building a training program that highlights your objectives and the value data protection will bring the organization. Ensure upper management supports and sponsors your data security training initiatives. Some solutions offer built-in user coaching with incident management workflows. This valuable feature allows you to notify users about incidents via Slack or email for justification, education, and policy adjustment if needed. Involving users in their incidents helps promote awareness of data protection practices as well as how to identify and safely handle sensitive content. 10. Automate incident management and workflows Lastly, no data protection program would be complete without day-to-day operations. Ensuring your team can efficiently manage and quickly respond to incidents is critical. One way to ensure streamlined processes is to embrace a solution that enables workflow automation. Designed to automate common incident management and response tasks, this feature can be a lifesaver for IT teams. By saving time and money while improving response times, IT teams can do more with less. Look for solutions that have a strong workflow automation offering integrated into the SSE to make incident management efficient and centralized. Bringing it all together Data protection is not a one-time project; it's an ongoing commitment. 
Staying informed of data protection best practices will help you build a resilient defense against evolving threats and ensure your organization's long-term success. Remember: investing in data protection is not just about mitigating risks and preventing data breaches. It's also about building trust, maintaining your reputation, and unlocking new opportunities for growth. Learn more at zscaler.com/security Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Twitter  and LinkedIn to read more exclusive content we post. SHARE     #top #best #practices #effective #data
    THEHACKERNEWS.COM
    Top 10 Best Practices for Effective Data Protection
    May 16, 2025The Hacker NewsZero Trust / Data Protection Data is the lifeblood of productivity, and protecting sensitive data is more critical than ever. With cyber threats evolving rapidly and data privacy regulations tightening, organizations must stay vigilant and proactive to safeguard their most valuable assets. But how do you build an effective data protection framework? In this article, we'll explore data protection best practices from meeting compliance requirements to streamlining day-to-day operations. Whether you're securing a small business or a large enterprise, these top strategies will help you build a strong defense against breaches and keep your sensitive data safe. 1. Define your data goals When tackling any data protection project, the first step is always to understand the outcome you want. First, understand what data you need to protect. Identify your crown jewel data, and where you THINK it lives. (It's probably more distributed than you expect, but this is a key step to help you define your protection focus.) Work with business owners to find any data outside the typical scope that you need to secure. This is all to answer the question: "What data would hurt the company if it were breached?" Second, work with the C-suit and board of directors to define what your data protection program will look like. Understand your budget, your risk tolerance to data loss, and what resources you have (or may need). Define how aggressive your protection program will be so you can balance risk and productivity. All organizations need to strike a balance between the two. 2. Automate data classification Next, begin your data classification journey—that is, find your data and catalog it. This is often the most difficult step in the journey, as organizations create new data all the time. Your first instinct may be to try to keep up with all your data, but this may be a fool's errand. 
The key to success is to have classification capabilities everywhere data moves (endpoint, inline, cloud), and rely on your DLP policy to step in when risk arises. (More on this later.) Automation in data classification is becoming a lifesaver thanks to the power of AI. AI-powered classification can be faster and more accurate than traditional ways of classifying data with DLP. Ensure any solution you are evaluating can use AI to instantly discover and classify data without human input.

3. Focus on zero trust security for access control

Adopting a zero trust architecture is crucial for modern data protection strategies to be effective. Based on the maxim "never trust, always verify," zero trust assumes security threats can come from inside or outside your network. Every access request is authenticated and authorized, greatly reducing the risk of unauthorized access and data breaches.

Look for a zero trust solution that emphasizes least-privileged access control between users and apps. With this approach, users never access the network directly, reducing the ability for threats to move laterally and propagate to other entities and data on the network. The principle of least privilege ensures that users have only the access they need for their roles, reducing the attack surface.

4. Centralize DLP for consistent alerting

Data loss prevention (DLP) technology is the core of any data protection program. That said, keep in mind that DLP is only a subset of a larger data protection solution. DLP enables the classification of data (along with AI) to ensure you can accurately find sensitive data. Ensure your DLP engine can alert consistently and correctly on the same piece of data across devices, networks, and clouds. The best way to ensure this is to embrace a centralized DLP engine that can cover all channels at once.
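The value of a centralized engine can be illustrated with a toy sketch: one shared detection function that every channel connector calls, so the same content always produces the same verdict. All names and rules below are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Verdict:
    sensitive: bool
    rule: Optional[str] = None

# One shared rule set: every channel consults the same engine,
# so a given payload triggers identical alerts everywhere.
RULES = {
    "us_ssn": lambda text: "ssn" in text.lower(),      # toy stand-ins for real matchers
    "api_key": lambda text: "api_key=" in text.lower(),
}

def inspect(text: str) -> Verdict:
    """Central DLP inspection shared by all channels."""
    for rule, match in RULES.items():
        if match(text):
            return Verdict(sensitive=True, rule=rule)
    return Verdict(sensitive=False)

# Endpoint, web proxy, and CASB connectors all delegate to inspect():
for channel in ("endpoint", "web", "casb"):
    v = inspect("password reset with api_key=abc123")
    print(channel, v.sensitive, v.rule)  # same verdict on every channel
```

Point products with their own engines amount to duplicating RULES per channel, which is exactly where inconsistent alerting creeps in.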
Avoid point products that bring their own DLP engine (endpoint, network, CASB), as this can lead to multiple alerts on one piece of moving data, slowing down incident management and response. Look to embrace Gartner's security service edge (SSE) approach, which delivers DLP from a centralized cloud service. Focus on vendors that support the most channels so that, as your program grows, you can easily add protection across devices, inline, and cloud.

5. Ensure blocking across key loss channels

Once you have a centralized DLP, focus on the data loss channels most important to your organization. (You'll need to add more channels as you grow, so ensure your platform can accommodate all of them and grow with you.) The most important channels can vary, but every organization focuses on certain common ones:

- Web/Email: The most common ways users accidentally send sensitive data outside the organization.
- SaaS data (CASB): Another common loss vector, as users can easily share data externally.
- Endpoint: A key focus for many organizations looking to lock down USB, printing, and network shares.
- Unmanaged devices/BYOD: If you have a large BYOD footprint, browser isolation is an innovative way to secure data headed to these devices without an agent or VDI. Devices are placed in an isolated browser, which enforces DLP inspection and prevents cut, paste, download, or print. (More on this later.)
- SaaS posture control (SSPM/supply chain): SaaS platforms like Microsoft 365 can often be misconfigured. Continuously scanning for gaps and risky third-party integrations is key to minimizing data breaches.
- IaaS posture control (DSPM): Most companies have a lot of sensitive data across AWS, Azure, or Google Cloud. Finding it all, and closing risky misconfigurations that expose it, is the driver behind data security posture management (DSPM).

6. Understand and maintain compliance

Getting a handle on compliance is a key step toward great data protection.
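Returning to the loss channels above: the per-channel enforcement choices can be modeled as a simple policy table consulted after detection. The channels and actions here are illustrative assumptions:

```python
# Hypothetical policy: what to do when sensitive data is detected on each channel.
CHANNEL_POLICY = {
    "web": "block",
    "email": "block",
    "saas_share": "revoke_share",
    "usb": "block",
    "print": "block",
    "byod": "isolate_browser",  # stream pixels only; no download or copy/paste
}

def enforce(channel: str, is_sensitive: bool) -> str:
    """Return the enforcement action for a detection on a given channel."""
    if not is_sensitive:
        return "allow"
    return CHANNEL_POLICY.get(channel, "alert_only")  # unknown channel: alert, don't block

print(enforce("byod", True))   # isolate_browser
print(enforce("web", False))   # allow
```

Starting new channels in "alert_only" mode and promoting them to blocking once alerts look clean is one common way to grow coverage without disrupting users.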
You may need to keep up with many different regulations depending on your industry (GDPR, PCI DSS, HIPAA, etc.). These rules exist to make sure personal data is safe and organizations are handling it the right way. Stay informed on the latest mandates to avoid fines and protect your brand, all while building trust with your customers and partners.

To keep on top of compliance, strong data governance practices are a must. This means regular security audits, keeping good records, and making sure your team is well-trained. Embrace technological approaches that help drive better compliance, such as data encryption and monitoring tools. By making compliance part of your routine, you can stay ahead of risks and ensure your data protection is both effective and in line with requirements.

7. Strategize for BYOD

Although not a concern for every organization, unmanaged devices present a unique challenge for data protection. Your organization doesn't own or have agents on these devices, so you can't ensure their security posture or patch level, wipe them remotely, and so on. Yet their users (like partners or contractors) often have legitimate reasons to access your critical data. You don't want sensitive data to land on a BYOD endpoint and vanish from your sight.

Until now, solutions to secure BYOD have revolved around CASB reverse proxies (problematic) and VDI approaches (expensive). Browser isolation provides an effective and elegant way to secure data without the cost and complexity of those approaches. By placing BYOD endpoints in an isolated browser (part of the security service edge), you can enforce strong data protection without an endpoint agent. Data is streamed to the device as pixels, allowing interaction with the data but preventing download and cut/paste. You can also apply DLP inspection to the session and data based on your policy.

8.
Control your cloud posture with SSPM and DSPM

Cloud posture is one of the most commonly overlooked aspects of data hygiene. SaaS platforms and public clouds have many settings that DevOps teams without security expertise can easily overlook. The resulting misconfigurations can lead to dangerous gaps that expose sensitive data. Many of the largest data breaches in history have happened because such gaps let adversaries walk right in.

SaaS security posture management (SSPM) and data security posture management (DSPM, for IaaS) are designed to uncover and help remediate these risks. By leveraging API access, SSPM and DSPM can continuously scan your cloud deployment, locate sensitive data, identify misconfigurations, and remediate exposures. Some SSPM approaches also feature integrated compliance with frameworks like NIST, ISO, and SOC 2.

9. Don't forget about data security training

Data security training is often where data protection programs fall apart. If users don't understand or support your data protection goals, dissent can build across your teams and derail your program. Spend time building a training program that highlights your objectives and the value data protection will bring to the organization. Ensure upper management supports and sponsors your data security training initiatives.

Some solutions offer built-in user coaching with incident management workflows. This valuable feature allows you to notify users about incidents via Slack or email for justification, education, and policy adjustment if needed. Involving users in their incidents helps promote awareness of data protection practices as well as how to identify and safely handle sensitive content.

10. Automate incident management and workflows

Lastly, no data protection program would be complete without day-to-day operations. Ensuring your team can efficiently manage and quickly respond to incidents is critical.
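As a deliberately simplified illustration of such workflow automation, the sketch below routes an incident by severity and, for mid-severity cases, triggers the user-coaching notification flow mentioned in step 9. Field names and thresholds are assumptions, not any product's schema:

```python
from dataclasses import dataclass, field

@dataclass
class Incident:
    user: str
    channel: str
    rule: str
    severity: int  # 1 (low) .. 5 (critical)
    actions: list[str] = field(default_factory=list)

def triage(incident: Incident) -> Incident:
    """Apply automated workflow steps based on severity."""
    if incident.severity >= 4:
        # Critical: stop the transfer and page the on-call responder.
        incident.actions += ["block_transfer", "page_oncall"]
    elif incident.severity >= 2:
        # User-coaching flow: ask the user to justify or self-correct via Slack/email.
        incident.actions.append(f"notify_user:{incident.user}")
    else:
        incident.actions.append("log_only")
    return incident

inc = triage(Incident(user="alice", channel="email", rule="us_ssn", severity=3))
print(inc.actions)  # ['notify_user:alice']
```

Codifying the routing this way is what lets a platform handle the bulk of incidents without an analyst touching each one.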
One way to ensure streamlined processes is to embrace a solution that enables workflow automation. Designed to automate common incident management and response tasks, this feature can be a lifesaver for IT teams, saving time and money while improving response times so teams can do more with less. Look for solutions with a strong workflow automation offering integrated into the SSE to make incident management efficient and centralized.

Bringing it all together

Data protection is not a one-time project; it's an ongoing commitment. Staying informed of data protection best practices will help you build a resilient defense against evolving threats and ensure your organization's long-term success. Remember: investing in data protection is not just about mitigating risks and preventing data breaches. It's also about building trust, maintaining your reputation, and unlocking new opportunities for growth.

Learn more at zscaler.com/security

This article is a contributed piece from one of our valued partners.