• Ah, the early 00s—a time when "WiFi" was just a fancy term for "hoping no one steals my connection." Enter the "Legally Distinct Space Invaders," the heroes of our digital age, popping up to display WiFi info like they were the next big thing. Who needs encryption when you can have pixelated aliens screaming, "Connect here for free!"?

    Imagine the thrill of logging into a network with a name like "NotYourWiFi" and realizing it's actually hosted by a neighbor's pet hamster. Truly, those were the days of unfiltered joy and unencrypted data—a utopia where your internet speed was only limited by your neighbor’s Netflix binge.

    Ah, nostalgia!

    HACKADAY.COM
    Legally Distinct Space Invaders Display WiFi Info
    In the early 00s there was a tiny moment before the widespread adoption of mobile broadband, after the adoption of home WiFi, and yet before the widespread use of encryption. …read more
  • Tell us your favourite video game of 2025 so far

    The Guardian’s writers have compiled their favourite new games of the year so far – and we’d like to hear about yours, too. Have you come across a new release that you can’t stop playing? Or one you’d recommend? Tell us your nomination and why you like it below.

    Share your favourite
    You can tell us your favourite game of the year so far using this form. Please share your story if you are 18 or over, anonymously if you wish. For more information please see our terms of service and privacy policy. Your responses, which can be anonymous, are secure as the form is encrypted and only the Guardian has access to your contributions. We will only use the data you provide us for the purpose of the feature and we will delete any personal data when we no longer require it for this purpose. For true anonymity please use our SecureDrop service instead.
    WWW.THEGUARDIAN.COM
    Tell us your favourite video game of 2025 so far
  • New Zealand’s Email Security Requirements for Government Organizations: What You Need to Know

    The Secure Government Email (SGE) Common Implementation Framework
    New Zealand’s government is introducing a comprehensive email security framework designed to protect official communications from phishing and domain spoofing. This new framework, which will be mandatory for all government agencies by October 2025, establishes clear technical standards to enhance email security and retire the outdated SEEMail service. 
    Key Takeaways

    All NZ government agencies must comply with new email security requirements by October 2025.
    The new framework strengthens trust and security in government communications by preventing spoofing and phishing.
    The framework mandates TLS 1.2+, SPF, DKIM, DMARC with p=reject, MTA-STS, and DLP controls.
    EasyDMARC simplifies compliance with our guided setup, monitoring, and automated reporting.

    What is the Secure Government Email Common Implementation Framework?
    The Secure Government Email (SGE) Common Implementation Framework is a new government-led initiative in New Zealand designed to standardize email security across all government agencies. Its main goal is to secure external email communication, reduce domain spoofing in phishing attacks, and replace the legacy SEEMail service.
    Why is New Zealand Implementing New Government Email Security Standards?
    The framework was developed by New Zealand’s Department of Internal Affairs (DIA) as part of its role in managing ICT Common Capabilities. It leverages modern email security controls via the Domain Name System (DNS) to enable the retirement of the legacy SEEMail service and provide:

    Encryption for transmission security
    Digital signing for message integrity
    Basic non-repudiation (by allowing only authorized senders)
    Domain spoofing protection

    These improvements apply to all emails, not just those routed through SEEMail, offering broader protection across agency communications.
    What Email Security Technologies Are Required by the New NZ SGE Framework?
    The SGE Framework outlines the following key technologies that agencies must implement:

    TLS 1.2 or higher with implicit TLS enforced
    TLS-RPT (TLS Reporting)
    SPF (Sender Policy Framework)
    DKIM (DomainKeys Identified Mail)
    DMARC (Domain-based Message Authentication, Reporting, and Conformance) with reporting
    MTA-STS (Mail Transfer Agent Strict Transport Security)
    Data Loss Prevention controls

    These technologies work together to ensure encrypted email transmission, validate sender identity, prevent unauthorized use of domains, and reduce the risk of sensitive data leaks.
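    To make these requirements concrete, here is a hedged sketch of the corresponding DNS records for a hypothetical agency domain (example.govt.nz, the selector name, the Google Workspace include, and the reporting mailboxes are illustrative assumptions, not values prescribed by the framework):

        ; SPF – authorize known senders only, hard-fail everything else
        example.govt.nz.                        IN TXT "v=spf1 include:_spf.google.com -all"

        ; DKIM – public key published under a selector chosen by your ESP or MTA
        selector1._domainkey.example.govt.nz.   IN TXT "v=DKIM1; k=rsa; p=<base64-encoded public key>"

        ; DMARC – reject policy with aggregate (RUA) reporting
        _dmarc.example.govt.nz.                 IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.govt.nz"

        ; MTA-STS – advertises that a policy file is served over HTTPS
        _mta-sts.example.govt.nz.               IN TXT "v=STSv1; id=20250601000000"

        ; TLS-RPT – where receiving servers should send TLS failure reports
        _smtp._tls.example.govt.nz.             IN TXT "v=TLSRPTv1; rua=mailto:tls-reports@example.govt.nz"

    The MTA-STS TXT record only signals that a policy exists; the policy itself is a small text file served from https://mta-sts.example.govt.nz/.well-known/mta-sts.txt, as shown in the MTA-STS section further below.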

    When Do NZ Government Agencies Need to Comply with this Framework?
    All New Zealand government agencies are expected to fully implement the Secure Government Email (SGE) Common Implementation Framework by October 2025. Agencies should begin their planning and deployment now to ensure full compliance by the deadline.
    The All of Government Secure Email Common Implementation Framework v1.0
    What are the Mandated Requirements for Domains?
    Below are the exact requirements for all email-enabled domains under the new framework.
    TLS: Minimum TLS 1.2. TLS 1.1, 1.0, SSL, or clear-text not permitted.
    TLS-RPT: All email-sending domains must have TLS reporting enabled.
    SPF: Must exist and end with -all.
    DKIM: All outbound email from every sending service must be DKIM-signed at the final hop.
    DMARC: Policy of p=reject on all email-enabled domains. adkim=s is recommended when not bulk-sending.
    MTA-STS: Enabled and set to enforce.
    Implicit TLS: Must be configured and enforced for every connection.
    Data Loss Prevention: Enforce in line with the New Zealand Information Security Manual (NZISM) and Protective Security Requirements (PSR).
    Compliance Monitoring and Reporting
    The All of Government Service Delivery (AoGSD) team will be monitoring compliance with the framework. Monitoring will initially cover SPF, DMARC, and MTA-STS settings and will be expanded to include DKIM. Changes to these settings will be monitored, enabling reporting on email security compliance across all government agencies. Ongoing monitoring will highlight changes to domains, ensure new domains are set up with security in place, and monitor the implementation of future email security technologies.
    Should compliance changes occur, such as an agency’s SPF record being changed from -all to ~all, this will be captured so that the AoGSD Security Team can investigate. They will then communicate directly with the agency to determine if an issue exists or if an error has occurred, reviewing each case individually.
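    As an illustration of the kind of check this monitoring implies, the sketch below (a non-authoritative example assuming the third-party dnspython package; the domain is a placeholder) flags an SPF record that does not end in -all and a DMARC policy other than p=reject:

        import dns.resolver  # third-party: pip install dnspython

        def get_txt(name):
            """Return all TXT strings published at a DNS name (empty list if none)."""
            try:
                answers = dns.resolver.resolve(name, "TXT")
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                return []
            return [b"".join(rdata.strings).decode() for rdata in answers]

        def check_domain(domain):
            """Return human-readable findings for the SPF and DMARC checks described above."""
            findings = []

            spf = [t for t in get_txt(domain) if t.lower().startswith("v=spf1")]
            if not spf:
                findings.append("SPF record missing")
            elif not spf[0].rstrip().endswith("-all"):
                findings.append("SPF does not end with -all: " + spf[0])

            dmarc = [t for t in get_txt("_dmarc." + domain) if t.lower().startswith("v=dmarc1")]
            if not dmarc:
                findings.append("DMARC record missing")
            else:
                tags = dict(part.split("=", 1) for part in dmarc[0].replace(" ", "").split(";") if "=" in part)
                if tags.get("p", "").lower() != "reject":
                    findings.append("DMARC policy is not p=reject: " + dmarc[0])

            return findings

        if __name__ == "__main__":
            for finding in check_domain("example.govt.nz"):  # placeholder domain
                print(finding)
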
    Deployment Checklist for NZ Government Compliance

    Enforce TLS 1.2 minimum, implicit TLS, MTA-STS & TLS-RPT
    SPF with -all
    DKIM on all outbound email
    DMARC p=reject 
    adkim=s where suitable
    For non-email/parked domains: SPF -all, empty DKIM, DMARC reject strict
    Compliance dashboard
    Inbound DMARC evaluation enforced
    DLP aligned with NZISM

    How EasyDMARC Can Help Government Agencies Comply
    EasyDMARC provides a comprehensive email security solution that simplifies the deployment and ongoing management of DNS-based email security protocols like SPF, DKIM, and DMARC with reporting. Our platform offers automated checks, real-time monitoring, and a guided setup to help government organizations quickly reach compliance.
    1. TLS-RPT / MTA-STS audit
    EasyDMARC lets you enable the Managed MTA-STS and TLS-RPT option with a single click. We provide the required DNS records and continuously monitor them for issues, delivering reports on TLS negotiation problems. This helps agencies ensure secure email transmission and quickly detect delivery or encryption failures.

    Note: You can deploy MTA-STS and TLS Reporting by adding just three CNAME records provided by EasyDMARC. It’s recommended to start in “testing” mode, evaluate the TLS-RPT reports, and then gradually switch your MTA-STS policy to “enforce”. The process takes just a few clicks.

    EasyDMARC parses incoming TLS reports into a centralized dashboard, giving you clear visibility into delivery and encryption issues across all sending sources.
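    For agencies hosting MTA-STS themselves rather than delegating it via CNAME, the setup generally looks like the sketch below (domain, policy id, and MX host are placeholders); the policy file must be served over HTTPS at the well-known path:

        ; DNS: advertise that an MTA-STS policy exists (change "id" whenever the policy changes)
        _mta-sts.example.govt.nz.  IN TXT "v=STSv1; id=20250601000000"

        # Policy file served at https://mta-sts.example.govt.nz/.well-known/mta-sts.txt
        # (start in "testing" mode, switch to "enforce" once the TLS-RPT reports look clean)
        version: STSv1
        mode: testing
        mx: mail.example.govt.nz
        max_age: 86400
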
    2. SPF with “-all”
    In the EasyDMARC platform, you can run the SPF Record Generator to create a compliant record. Publish your v=spf1 record with “-all” to enforce a hard fail for unauthorized senders and prevent spoofed emails from passing SPF checks. This strengthens your domain’s protection against impersonation.

    Note: It is highly recommended to start adjusting your SPF record only after you begin receiving DMARC reports and identifying your legitimate email sources. As we’ll explain in more detail below, both SPF and DKIM should be adjusted after you gain visibility through reports.
    Making changes without proper visibility can lead to false positives, misconfigurations, and potential loss of legitimate emails. That’s why the first step should always be setting DMARC to p=none, receiving reports, analyzing them, and then gradually fixing any SPF or DKIM issues.
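    As a concrete illustration of that progression (the domain is a placeholder; the include mechanisms shown are the ones commonly documented for Google Workspace and Microsoft 365), an SPF record typically moves from a soft fail to a hard fail once DMARC reports confirm that every legitimate source is listed:

        ; Visibility phase – unauthorized senders are only soft-failed
        example.govt.nz.  IN TXT "v=spf1 include:_spf.google.com include:spf.protection.outlook.com ~all"

        ; After the sources are confirmed – hard fail, as required by the framework
        example.govt.nz.  IN TXT "v=spf1 include:_spf.google.com include:spf.protection.outlook.com -all"
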
    3. DKIM on all outbound email
    DKIM must be configured for all email sources sending emails on behalf of your domain. This is critical, as DKIM plays a bigger role than SPF when it comes to building domain reputation, surviving auto-forwarding, mailing lists, and other edge cases.
    As mentioned above, DMARC reports provide visibility into your email sources, allowing you to implement DKIM accordingly. If you’re using third-party services like Google Workspace, Microsoft 365, or Mimecast, you’ll need to retrieve the public DKIM key from your provider’s admin interface.
    EasyDMARC maintains a backend directory of over 1,400 email sources. We also give you detailed guidance on how to configure SPF and DKIM correctly for major ESPs. 
    Note: At the end of this article, you’ll find configuration links for well-known ESPs like Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid – helping you avoid common misconfigurations and get aligned with SGE requirements.
    If you’re using a dedicated MTA (e.g., Postfix), DKIM must be implemented manually. EasyDMARC’s DKIM Record Generator lets you generate both public and private keys for your server. The private key is stored on your MTA, while the public key must be published in your DNS.
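    A hedged sketch of that manual route (selector name, file names, and domain are illustrative; adapt them to your own MTA and signing filter):

        # Generate a 2048-bit RSA key pair for DKIM signing
        openssl genrsa -out dkim-private.pem 2048
        openssl rsa -in dkim-private.pem -pubout -out dkim-public.pem

        ; Publish the public key in DNS under the chosen selector
        selector1._domainkey.example.govt.nz.  IN TXT "v=DKIM1; k=rsa; p=<base64 public key from dkim-public.pem, header/footer lines removed>"

    The private key stays on the MTA and is referenced by the signing filter (OpenDKIM, for example); the DNS record is what receiving servers use to verify the signature.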

    4. DMARC p=reject rollout
    As mentioned in previous points, DMARC reporting is the first and most important step on your DMARC enforcement journey. Always start with a p=none policy and configure RUA reports to be sent to EasyDMARC. Use the report insights to identify and fix SPF and DKIM alignment issues, then gradually move to p=quarantine and finally p=reject once all legitimate email sources have been authenticated. 
    This phased approach ensures full protection against domain spoofing without risking legitimate email delivery.
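    A sketch of the staged records (domain and reporting address are placeholders):

        ; Stage 1 – monitor only, collect aggregate (RUA) reports
        _dmarc.example.govt.nz.  IN TXT "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.govt.nz"

        ; Stage 2 – quarantine once SPF/DKIM alignment issues have been fixed
        _dmarc.example.govt.nz.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.govt.nz"

        ; Stage 3 – full enforcement, as the framework requires
        _dmarc.example.govt.nz.  IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.govt.nz"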

    5. adkim Strict Alignment Check
    This strict alignment check is not always applicable, especially if you’re using third-party bulk ESPs, such as SendGrid, that require you to set DKIM at the subdomain level. You can set adkim=s in your DMARC TXT record, or simply enable strict mode in EasyDMARC’s Managed DMARC settings. This ensures that only emails with a DKIM signature that exactly matches your domain pass alignment, adding an extra layer of protection against domain spoofing. But only do this if you are NOT a bulk sender.
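    For illustration, the same placeholder enforcement record with strict DKIM alignment enabled:

        _dmarc.example.govt.nz.  IN TXT "v=DMARC1; p=reject; adkim=s; rua=mailto:dmarc-reports@example.govt.nz"

    With adkim=s, a message whose DKIM signature is for a subdomain such as news.example.govt.nz no longer aligns with a From address at example.govt.nz, which is exactly why bulk ESPs that sign at the subdomain level can break under strict mode.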

    6. Securing Non-Email Enabled Domains
    The purpose of deploying email security to non-email-enabled domains, or parked domains, is to prevent messages from being spoofed from those domains. This requirement remains even if the root-level domain has sp=reject set within its DMARC record.
    Under this new framework, you must bulk import and mark parked domains as “Parked.” Crucially, this requires publishing an SPF record that authorizes no senders, setting DMARC to p=reject, and ensuring an empty DKIM record is in place:
    • SPF record: “v=spf1 -all”.
    • Wildcard DKIM record with an empty public key.
    • DMARC record: “v=DMARC1;p=reject;adkim=s;aspf=s;rua=mailto:…”.
    EasyDMARC allows you to add and label parked domains for free. This is important because it helps you monitor any activity from these domains and ensure they remain protected with a strict DMARC policy of p=reject.
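    Putting those three parked-domain records together in one sketch (the parked domain and the reporting address standing in for the elided mailto: target are placeholders):

        parked-example.govt.nz.               IN TXT "v=spf1 -all"
        *._domainkey.parked-example.govt.nz.  IN TXT "v=DKIM1; p="
        _dmarc.parked-example.govt.nz.        IN TXT "v=DMARC1; p=reject; adkim=s; aspf=s; rua=mailto:dmarc-reports@example.govt.nz"
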
    7. Compliance Dashboard
    Use EasyDMARC’s Domain Scanner to assess the security posture of each domain with a clear compliance score and risk level. The dashboard highlights configuration gaps and guides remediation steps, helping government agencies stay on track toward full compliance with the SGE Framework.

    8. Inbound DMARC Evaluation Enforced
    You don’t need to apply any changes if you’re using Google Workspace, Microsoft 365, or other major mailbox providers. Most of them already enforce DMARC evaluation on incoming emails.
    However, some legacy Microsoft 365 setups may still quarantine emails that fail DMARC checks, even when the sending domain has a p=reject policy, instead of rejecting them. This behavior can be adjusted directly from your Microsoft Defender portal. Read more about this in our step-by-step guide on how to set up SPF, DKIM, and DMARC from Microsoft Defender.
    If you’re using a third-party mail provider that doesn’t enforce DMARC evaluation on incoming emails, which is rare, you’ll need to contact their support to request a configuration change.
    9. Data Loss Prevention Aligned with NZISM
    The New Zealand Information Security Manual (NZISM) is the New Zealand Government’s manual on information assurance and information systems security. It includes guidance on data loss prevention (DLP), which must be followed to align with the SGE Framework.
    Need Help Setting up SPF and DKIM for your Email Provider?
    Setting up SPF and DKIM for different ESPs often requires specific configurations. Some providers require you to publish SPF and DKIM on a subdomain, while others only require DKIM, or have different formatting rules. We’ve simplified all these steps to help you avoid misconfigurations that could delay your DMARC enforcement, or worse, block legitimate emails from reaching your recipients.
    Below you’ll find comprehensive setup guides for Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid. You can also explore our full blog section that covers setup instructions for many other well-known ESPs.
    Remember, all this information is reflected in your DMARC aggregate reports. These reports give you live visibility into your outgoing email ecosystem, helping you analyze and fix any issues specific to a given provider.
    Here are our step-by-step guides for the most common platforms:

    Google Workspace

    Microsoft 365

    These guides will help ensure your DNS records are configured correctly as part of the Secure Government Email (SGE) Framework rollout.
    Meet New Government Email Security Standards With EasyDMARC
    New Zealand’s SGE Framework sets a clear path for government agencies to enhance their email security by October 2025. With EasyDMARC, you can meet these technical requirements efficiently and with confidence. From protocol setup to continuous monitoring and compliance tracking, EasyDMARC streamlines the entire process, ensuring strong protection against spoofing, phishing, and data loss while simplifying your transition from SEEMail.
    EASYDMARC.COM
    New Zealand’s Email Security Requirements for Government Organizations: What You Need to Know
  • Do these nine things to protect yourself against hackers and scammers

    Scammers are using AI tools to create increasingly convincing ways to trick victims into sending money, and to access the personal information needed to commit identity theft. Deepfakes mean they can impersonate the voice of a friend or family member, and even fake a video call with them!
    The result can be criminals taking out thousands of dollars worth of loans or credit card debt in your name. Fortunately there are steps you can take to protect yourself against even the most sophisticated scams. Here are the security and privacy checks to run to ensure you are safe …

    9to5Mac is brought to you by Incogni: Protect your personal info from prying eyes. With Incogni, you can scrub your deeply sensitive information from data brokers across the web, including people search sites. Incogni limits your phone number, address, email, SSN, and more from circulating. Fight back against unwanted data brokers with a 30-day money back guarantee.

    Use a password manager
    At one time, the advice might have read “use strong, unique passwords for each website and app you use” – but these days we all use so many that this is only possible if we use a password manager.
    This is a super-easy step to take, thanks to the Passwords app on Apple devices. Each time you register for a new service, use the Passwords app to set and store the password.
    Replace older passwords
    You probably created some accounts back in the days when password rules were much less strict, meaning you now have some weak passwords that are vulnerable to attack. If you’ve been online since before the days of password managers, you probably even have some passwords you’ve used on more than one website. This is a huge risk, as it means your security is only as good as the least-secure website you use.
    Attackers break into a poorly secured website, grab all the logins, and then use automated software to try those same credentials on hundreds of other websites. If you’ve re-used a password, they now have access to your accounts on every site where you used it.
    Use the password change feature to update your older passwords, starting with the most important ones – the ones that would put you most at risk if your account were compromised. As an absolute minimum, ensure you have strong, unique passwords for all financial services, as well as other critical ones like Apple, Google, and Amazon accounts.
    Make sure you include any accounts which have already been compromised! You can identify these by putting your email address into Have I Been Pwned.
    Use passkeys where possible
    Passwords are gradually being replaced by passkeys. While the difference might seem small in terms of how you login, there’s a huge difference in the security they provide.
    With a passkey, a website or app doesn’t ask for a password; instead, it asks your device to verify your identity. Your device uses Face ID or Touch ID to do so, then confirms that you are who you claim to be. Crucially, no password is ever sent to the service, so there’s nothing for an attacker to steal or phish – all the service sees is confirmation that you successfully passed biometric authentication on your device.
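    A deliberately simplified sketch of how that works under the hood – the service sends a random challenge, the device signs it with a private key that never leaves the device, and the service checks the signature against the stored public key (this illustrates the principle behind passkeys, not the real WebAuthn message format, and it assumes the third-party cryptography package):

        import os
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import ec

        # At registration: the device creates a key pair; only the PUBLIC key is given to the service.
        device_private_key = ec.generate_private_key(ec.SECP256R1())
        service_public_key = device_private_key.public_key()

        # At login: the service sends a random challenge ...
        challenge = os.urandom(32)

        # ... the device signs it (after Face ID / Touch ID unlocks the private key) ...
        signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

        # ... and the service verifies the signature. No shared password ever crosses the wire.
        try:
            service_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
            print("login accepted")
        except InvalidSignature:
            print("login rejected")
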
    Use two-factor authentication
    A growing number of accounts allow you to use two-factor authentication. This means that even if an attacker got your login details, they still wouldn’t be able to access your account.
    2FA works by demanding a rolling code whenever you log in. These codes can be sent by text message, but we strongly advise against this, as it leaves you vulnerable to SIM-swap attacks, which are becoming increasingly common. In particular, never use text-based 2FA for financial services accounts.
    Instead, select the option to use an authenticator app. A QR code will be displayed which you scan in the app, adding that service to your device. Next time you log in, you just open the app to see a 6-digit rolling code which you’ll need to enter to complete the login. This feature is built into the Passwords app, or you can use a separate one like Google Authenticator.
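    Under the hood, those 6-digit rolling codes are standard TOTP (RFC 6238) values derived from the shared secret in the QR code and the current time. A minimal sketch in Python, using only the standard library and a made-up Base32 secret:

        import base64
        import hashlib
        import hmac
        import struct
        import time

        def totp(secret_b32, period=30, digits=6):
            """Compute the current RFC 6238 TOTP code for a Base32-encoded shared secret."""
            key = base64.b32decode(secret_b32.upper())
            counter = int(time.time()) // period            # 30-second time step
            msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
            digest = hmac.new(key, msg, hashlib.sha1).digest()
            offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
            code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % 10 ** digits).zfill(digits)

        print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints a 6-digit code that changes every 30 seconds

    Both your device and the service compute the same code from the same secret, which is why the code works without anything needing to be sent to your phone.
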
    Check last-login details
    Some services, like banking apps, will display the date and time of your last successful login. Get into the habit of checking this each time you login, as it can provide a warning that your account has been compromised.
    Use a VPN service for public Wi-Fi hotspots
    Anytime you use a public Wi-Fi hotspot, you are at risk from what’s known as a Man-in-the-Middle (MitM) attack. This is where someone sets up a small device that broadcasts the same name as a public Wi-Fi hotspot so that people connect to it. Once you do, they can monitor your internet traffic.
    Almost all modern websites use HTTPS, which provides an encrypted connection that makes MitM attacks less dangerous than they used to be. All the same, the exploit can expose you to a number of security and privacy risks, so using a VPN is still highly advisable. Always choose a respected VPN company, ideally one which keeps no logs and subjects itself to independent audits. I use NordVPN for this reason.
    Don’t disclose personal info to AI chatbots
    AI chatbots typically use their conversations with users as training material, meaning anything you say or type could end up in their database, and could potentially be regurgitated when answering another user’s question. Never reveal any personal information you wouldn’t want on the internet.
    Consider data removal
    It’s likely that much of your personal information has already been collected by data brokers. Your email address and phone number can be used for spam, which is annoying enough, but they can also be used by scammers. For this reason, you might want to scrub your data from as many broker services as possible. You can do this yourself, or use a service like Incogni to do it for you.
    Triple-check requests for money
    Finally, if anyone asks you to send them money, be immediately on the alert. Even if it seems to be a friend, family member, or your boss, never take it on trust. Always contact them via a different, known communication channel. If they emailed you, phone them. If they phoned you, message or email them. Some people go as far as agreeing codewords with family members to use if they ever really do need emergency help.
    If anyone asks you to buy gift cards and send the numbers to them, it’s a scam 100% of the time. Requests to use money transfer services are also generally scams unless it’s something you arranged in advance.
    Even if you are expecting to send someone money, be alert for claims that they have changed their bank account. This is almost always a scam. Again, contact them via a different, known comms channel.
    9TO5MAC.COM
    Do these nine things to protect yourself against hackers and scammers
Scammers are using AI tools to create increasingly convincing ways to trick victims into sending money, and to access the personal information needed to commit identity theft. Deepfakes mean they can impersonate the voice of a friend or family member, and even fake a video call with them! The result can be criminals taking out thousands of dollars worth of loans or credit card debt in your name. Fortunately, there are steps you can take to protect yourself against even the most sophisticated scams. Here are the security and privacy checks to run to ensure you are safe …

9to5Mac is brought to you by Incogni: Protect your personal info from prying eyes. With Incogni, you can scrub your deeply sensitive information from data brokers across the web, including people search sites. Incogni limits your phone number, address, email, SSN, and more from circulating. Fight back against unwanted data brokers with a 30-day money back guarantee.

Use a password manager

At one time, the advice might have read "use strong, unique passwords for each website and app you use" – but these days we all use so many that this is only possible if we use a password manager. This is a super-easy step to take, thanks to the Passwords app on Apple devices. Each time you register for a new service, use the Passwords app (or your own preferred password manager) to set and store the password.

Replace older passwords

You probably created some accounts back in the days when password rules were much less strict, meaning you now have some weak passwords that are vulnerable to attack. If you've been online since before the days of password managers, you probably also have some passwords you've used on more than one website. This is a huge risk, as it means your security is only as good as the least-secure website you use. What happens is that attackers break into a poorly secured website, grab all the logins, then use automated software to try those same logins on hundreds of different websites. If you've re-used a password, they now have access to your accounts on all the sites where you used it.

Use the password change feature to update your older passwords, starting with the most important ones – the ones that would put you most at risk if your account were compromised. As an absolute minimum, ensure you have strong, unique passwords for all financial services, as well as other critical ones like Apple, Google, and Amazon accounts. Make sure you include any accounts which have already been compromised! You can identify these by putting your email address into Have I Been Pwned.
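
Have I Been Pwned also offers a free Pwned Passwords range API that lets you check individual passwords without ever sending them in full. The sketch below is a minimal, unofficial illustration that assumes outbound HTTPS access and uses only the Python standard library; only the first five characters of the SHA-1 hash are sent, so the service never sees the password itself.

```python
# Minimal sketch (standard library only) of a Pwned Passwords check using the
# public api.pwnedpasswords.com range endpoint. The service returns hash
# suffixes and breach counts for the submitted 5-character prefix.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-check-sketch"},
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate.strip() == suffix:
                return int(count)
    return 0

print(pwned_count("correct horse battery staple"))  # 0 means not found in known breaches
```
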
Use passkeys where possible

Passwords are gradually being replaced by passkeys. While the difference might seem small in terms of how you log in, there's a huge difference in the security they provide. With a passkey, a website or app doesn't ask for a password; it instead asks your device to verify your identity. Your device uses Face ID or Touch ID to do so, then confirms that you are who you claim to be. Crucially, it doesn't send a password back to the service, so there's no way for this to be hacked – all the service sees is confirmation that you successfully passed biometric authentication on your device.

Use two-factor authentication

A growing number of accounts allow you to use two-factor authentication (2FA). This means that even if an attacker got your login details, they still wouldn't be able to access your account. 2FA works by demanding a rolling code whenever you log in. These codes can be sent by text message, but we strongly advise against this, as it leaves you vulnerable to SIM-swap attacks, which are becoming increasingly common. In particular, never use text-based 2FA for financial services accounts.

Instead, select the option to use an authenticator app. A QR code will be displayed which you scan in the app, adding that service to your device. Next time you log in, you just open the app to see a 6-digit rolling code which you'll need to enter to log in. This feature is built into the Passwords app, or you can use a separate one like Google Authenticator.
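
Under the hood, those rolling codes are standard TOTP values computed from the secret embedded in the QR code. Here is a minimal sketch of that calculation using only the Python standard library; the base32 secret is a made-up example, and real authenticator apps add clock-drift handling and secure storage of the secret.

```python
# Minimal RFC 6238 TOTP sketch: this is what an authenticator app computes
# from the base32 secret in the QR code, every 30 seconds.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    # Base32 decode, padding the secret to a multiple of 8 characters first
    padded = secret_b32.upper() + "=" * ((8 - len(secret_b32) % 8) % 8)
    key = base64.b32decode(padded)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # prints the current 6-digit rolling code
```
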
Check last-login details

Some services, like banking apps, will display the date and time of your last successful login. Get into the habit of checking this each time you log in, as it can provide a warning that your account has been compromised.

Use a VPN service for public Wi-Fi hotspots

Anytime you use a public Wi-Fi hotspot, you are at risk from what's known as a Man-in-the-Middle (MitM) attack. This is where someone uses a small device which broadcasts the same name as a public Wi-Fi hotspot so that people connect to it. Once you do, they can monitor your internet traffic. Almost all modern websites use HTTPS, which provides an encrypted connection that makes MitM attacks less dangerous than they used to be. All the same, the exploit can expose you to a number of security and privacy risks, so using a VPN is still highly advisable. Always choose a respected VPN company, ideally one which keeps no logs and subjects itself to independent audits. I use NordVPN for this reason.

Don't disclose personal info to AI chatbots

AI chatbots typically use their conversations with users as training material, meaning anything you say or type could end up in their database, and could potentially be regurgitated when answering another user's question. Never reveal any personal information you wouldn't want on the internet.

Consider data removal

It's likely that much of your personal information has already been collected by data brokers. Your email address and phone number can be used for spam, which is annoying enough, but they can also be used by scammers. For this reason, you might want to scrub your data from as many broker services as possible. You can do this yourself, or use a service like Incogni to do it for you.

Triple-check requests for money

Finally, if anyone asks you to send them money, be immediately on the alert. Even if it seems to be a friend, family member, or your boss, never take it on trust. Always contact them via a different, known communication channel. If they emailed you, phone them. If they phoned you, message or email them. Some people go as far as agreeing codewords with family members to use if they ever really do need emergency help.

If anyone asks you to buy gift cards and send the numbers to them, it's a scam 100% of the time. Requests to use money transfer services are also generally scams unless it's something you arranged in advance. Even if you are expecting to send someone money, be alert for claims that they have changed their bank account. This is almost always a scam. Again, contact them via a different, known comms channel.
  • For June’s Patch Tuesday, 68 fixes — and two zero-day flaws

Microsoft offered up a fairly light Patch Tuesday release this month, with 68 patches to Microsoft Windows and Microsoft Office. There were no updates for Exchange or SQL Server and just two minor patches for Microsoft Edge. That said, two zero-day vulnerabilities (CVE-2025-33073 and CVE-2025-33053) have led to a "Patch Now" recommendation for both Windows and Office. (Developers can follow their usual release cadence with updates to Microsoft .NET and Visual Studio.)

To help navigate these changes, the team from Readiness has provided a useful infographic detailing the risks involved when deploying the latest updates.

Known issues

    Microsoft released a limited number of known issues for June, with a product-focused issue and a very minor display concern:

Microsoft Excel: This is a rare product-level entry in the "known issues" category — an advisory that square brackets ([ ]) are not supported in Excel filenames. An error is generated, advising the user to remove the offending characters.

Windows 10: There are reports of blurry or unclear CJK (Chinese, Japanese, Korean) text when displayed at 96 DPI (100% scaling) in Chromium-based browsers such as Microsoft Edge and Google Chrome. This is a limited resource issue, as the font resolution in Windows 10 does not fully match the high-level resolution of the Noto font. Microsoft recommends changing the display scaling to 125% or 150% to improve clarity.

    Major revisions and mitigations

    Microsoft might have won an award for the shortest time between releasing an update and a revision with:

CVE-2025-33073: Windows SMB Client Elevation of Privilege. Microsoft worked to address a vulnerability where improper access control in Windows SMB allows an attacker to elevate privileges over a network. This patch was revised on the same day as its initial release (and has since been revised again for documentation purposes).

    Windows lifecycle and enforcement updates

    Microsoft did not release any enforcement updates for June.

    Each month, the Readiness team analyzes Microsoft’s latest updates and provides technically sound, actionable testing plans. While June’s release includes no stated functional changes, many foundational components across authentication, storage, networking, and user experience have been updated.

    For this testing guide, we grouped Microsoft’s updates by Windows feature and then accompanied the section with prescriptive test actions and rationale to help prioritize enterprise efforts.

    Core OS and UI compatibility

Microsoft updated several core kernel drivers affecting Windows as a whole. This is a low-level system change and carries a high risk of compatibility and system issues. In addition, core Microsoft print libraries have been included in the update, requiring additional print testing on top of the following recommendations (a short inventory sketch follows the list):

    Run print operations from 32-bit applications on 64-bit Windows environments.

Use different print drivers and configurations (e.g., local, networked).

    Observe printing from older productivity apps and virtual environments.
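
As referenced above, one quick way to spot print regressions is to diff a before-and-after inventory of printers and drivers on a test host. This is a rough sketch that shells out to the PowerShell Get-Printer and Get-PrinterDriver cmdlets; hosts, modules, and output handling are deliberately simplified.

```python
# Rough sketch: snapshot the printer and driver inventory before and after
# patching so regressions are easy to diff. Relies on the PowerShell
# Get-Printer / Get-PrinterDriver cmdlets (PrintManagement module).
import json
import subprocess

def ps_json(command: str) -> list:
    """Run a PowerShell pipeline and return its output as a list of dicts."""
    completed = subprocess.run(
        ["powershell", "-NoProfile", "-Command", f"{command} | ConvertTo-Json"],
        capture_output=True, text=True, check=True,
    )
    data = json.loads(completed.stdout) if completed.stdout.strip() else []
    return data if isinstance(data, list) else [data]

printers = ps_json("Get-Printer | Select-Object Name, DriverName, PortName")
drivers = ps_json("Get-PrinterDriver | Select-Object Name, MajorVersion")

print(f"{len(printers)} printers, {len(drivers)} drivers installed")
for p in printers:
    print(f"  {p['Name']}: driver={p['DriverName']} port={p['PortName']}")
```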

    Remote desktop and network connectivity

This update could affect the reliability of remote access; broken DHCP-to-DNS integration can block device onboarding, and NAT misbehavior can disrupt VPNs or site-to-site routing configurations. We recommend performing the following tests (a quick DNS-registration check is sketched after the list):

Create and reconnect Remote Desktop (RDP) sessions under varying network conditions.

    Confirm that DHCP-assigned IP addresses are correctly registered with DNS in AD-integrated environments.

    Test modifying NAT and routing settings in RRAS configurations and ensure that changes persist across reboots.
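
For the DHCP-to-DNS check referenced above, a small script can confirm that the name DNS returns for a test client matches the address its interface is actually using. This is a rough sketch using only the Python standard library; it assumes an AD-integrated DNS zone and a single IPv4 interface.

```python
# Rough sketch: check that the address DNS returns for this host matches the
# IPv4 address its interface is using, as a quick proxy for healthy
# DHCP-to-DNS registration after patching.
import socket

fqdn = socket.getfqdn()

try:
    dns_addrs = {info[4][0] for info in socket.getaddrinfo(fqdn, None, family=socket.AF_INET)}
except socket.gaierror:
    dns_addrs = set()

# Selecting a route via UDP connect() reveals the local address without
# sending any packets; 192.0.2.1 is a reserved TEST-NET address.
probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
probe.connect(("192.0.2.1", 53))
local_addr = probe.getsockname()[0]
probe.close()

if local_addr in dns_addrs:
    print(f"OK: {fqdn} -> {local_addr}")
else:
    print(f"MISMATCH: DNS returned {sorted(dns_addrs) or 'nothing'}, interface uses {local_addr}")
```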

    Filesystem, SMB and storage

Updates to the core Windows storage libraries affect nearly every command related to Microsoft Storage Spaces. A minor misalignment here can result in degraded clusters, orphaned volumes, or data loss in a failover scenario. These are high-priority components in modern data center and hybrid cloud infrastructure, with the following storage-related testing recommendations (a file-share smoke test is sketched after the list):

    Access file shares using server names, FQDNs, and IP addresses.

    Enable and validate encrypted and compressed file-share operations between clients and servers.

    Run tests that create, open, and read from system log files using various file and storage configurations.

    Validate core cluster storage management tasks, including creating and managing storage pools, tiers, and volumes.

    Test disk addition/removal, failover behaviors, and resiliency settings.

    Run system-level storage diagnostics across active and passive nodes in the cluster.
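
The file-share smoke test referenced above can be as simple as writing and reading a probe file over the same share addressed three ways. The server name, FQDN, IP address, and share name below are placeholders for your lab environment.

```python
# Hedged smoke test: open the same share by server name, FQDN, and IP after
# patching. All paths below are placeholders for a lab file server.
from pathlib import Path

TARGETS = [
    r"\\FILESRV01\public",                    # NetBIOS-style name (placeholder)
    r"\\filesrv01.corp.example.com\public",   # FQDN (placeholder)
    r"\\10.0.0.25\public",                    # IP address (placeholder)
]

for root in TARGETS:
    probe = Path(root) / "smoke-test.txt"
    try:
        probe.write_text("patch-validation probe\n", encoding="utf-8")
        print(f"OK   {root}: {probe.read_text(encoding='utf-8').strip()}")
        probe.unlink()                        # clean up the probe file
    except OSError as exc:
        print(f"FAIL {root}: {exc}")
```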

    Windows installer and recovery

Microsoft delivered another update to the Windows Installer (MSI) application infrastructure. Broken or regressed MSI package handling disrupts app deployment pipelines and puts core business applications at risk. We suggest the following tests for the latest changes to the MSI installer, Windows Recovery, and Microsoft's Virtualization Based Security (VBS); a scripted smoke test follows the list:

Perform installation, repair, and uninstallation of MSI packages using standard enterprise deployment tools (e.g., Intune).

Validate restore point behavior for points older than 60 days under varying virtualization-based security (VBS) settings.

    Check both client and server behaviors for allowed or blocked restores.
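
As a sketch of the MSI checks listed above, the script below drives silent install, repair, and uninstall runs with msiexec and checks the exit codes. The package path and product code are placeholders; exit code 3010 simply means success with a pending reboot.

```python
# Hedged sketch: silent install, repair, and uninstall of a known-good MSI to
# validate Windows Installer after the update. The msiexec switches used here
# (/i, /fa, /x, /qn, /l*v) are standard; the package and GUID are placeholders.
import subprocess

MSI = r"C:\lab\packages\sample-app.msi"                   # placeholder package
PRODUCT_CODE = "{00000000-0000-0000-0000-000000000000}"   # placeholder GUID

STEPS = [
    ("install",   ["msiexec", "/i", MSI, "/qn", "/l*v", r"C:\lab\install.log"]),
    ("repair",    ["msiexec", "/fa", MSI, "/qn"]),
    ("uninstall", ["msiexec", "/x", PRODUCT_CODE, "/qn", "/l*v", r"C:\lab\uninstall.log"]),
]

for name, cmd in STEPS:
    result = subprocess.run(cmd)
    # msiexec exit code 0 = success, 3010 = success but reboot required
    ok = result.returncode in (0, 3010)
    print(f"{name}: exit {result.returncode} -> {'OK' if ok else 'INVESTIGATE'}")
```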

We highly recommend prioritizing printer testing this month, followed by remote desktop testing, and then deployment testing to ensure your core business applications install and uninstall as expected.

Each month, we break down the update cycle into product families (as defined by Microsoft) with the following basic groupings:

Browsers (Microsoft IE and Edge);

Microsoft Windows (both desktop and server);

Microsoft Office;

Microsoft Exchange and SQL Server;

Microsoft Developer Tools (Visual Studio and .NET);

And Adobe (if you get this far).

    Browsers

Microsoft delivered a very minor series of updates to Microsoft Edge. The browser receives two Chrome patches (CVE-2025-5068 and CVE-2025-5419), both rated important. These low-profile changes can be added to your standard release calendar.

    Microsoft Windows

Microsoft released five critical patches and (a smaller than usual) 40 patches rated important. This month, the five critical Windows patches cover the following desktop and server vulnerabilities:

Missing release of memory after effective lifetime in Windows Cryptographic Services (WCS) allows an unauthorized attacker to execute code over a network.

    Use after free in Windows Remote Desktop Services allows an unauthorized attacker to execute code over a network.

Use after free in Windows KDC Proxy Service (KPSSVC) allows an unauthorized attacker to execute code over a network.

    Use of uninitialized resources in Windows Netlogon allows an unauthorized attacker to elevate privileges over a network.

Unfortunately, CVE-2025-33073 has been reported as publicly disclosed, while CVE-2025-33053 has been reported as exploited. Given these two zero-days, the Readiness team recommends a "Patch Now" release schedule for your Windows updates.

    Microsoft Office

Microsoft released five critical updates and a further 13 rated important for Office. The critical patches deal with memory-related and "use after free" memory allocation issues affecting the entire platform. Due to the number and severity of these issues, we recommend a "Patch Now" schedule for Office for this Patch Tuesday release.

    Microsoft Exchange and SQL Server

    There are no updates for either Microsoft Exchange or SQL Server this month. 

    Developer tools

There were only three low-level updates (product-focused and rated important) released, affecting .NET and Visual Studio. Add these updates to your standard developer release schedule.

Adobe (and 3rd party updates)

Adobe has released (but Microsoft has not co-published) a single update to Adobe Acrobat (APSB25-57). There were two other non-Microsoft releases affecting the Chromium platform, which were covered in the Browsers section above.
    WWW.COMPUTERWORLD.COM
    For June’s Patch Tuesday, 68 fixes — and two zero-day flaws
  • Popular Chrome Extensions Leak API Keys, User Data via HTTP and Hard-Coded Credentials

    Cybersecurity researchers have flagged several popular Google Chrome extensions that have been found to transmit data in HTTP and hard-code secrets in their code, exposing users to privacy and security risks.
    "Several widely used extensionsunintentionally transmit sensitive data over simple HTTP," Yuanjing Guo, a security researcher in the Symantec's Security Technology and Response team, said. "By doing so, they expose browsing domains, machine IDs, operating system details, usage analytics, and even uninstall information, in plaintext."
    The fact that the network traffic is unencrypted also means that they are susceptible to adversary-in-the-middleattacks, allowing malicious actors on the same network such as a public Wi-Fi to intercept and, even worse, modify this data, which could lead to far more serious consequences.

The list of identified extensions is below -

SEMRush Rank (extension ID: idbhoeaiokcojcgappfigpifhpkjgmab) and PI Rank (ID: ccgdboldgdlngcgfdolahmiilojmfndl), which call the URL "rank.trellian[.]com" over plain HTTP
Browsec VPN (ID: omghfjlpggmjjaagoclmmobgdodcjboh), which uses HTTP to call an uninstall URL at "browsec-uninstall.s3-website.eu-central-1.amazonaws[.]com" when a user attempts to uninstall the extension
MSN New Tab (ID: lklfbkdigihjaaeamncibechhgalldgl) and MSN Homepage, Bing Search & News (ID: midiombanaceofjhodpdibeppmnamfcj), which transmit a unique machine identifier and other details over HTTP to "g.ceipmsn[.]com"
DualSafe Password Manager & Digital Vault (ID: lgbjhdkjmpgjgcbcdlhkokkckpjmedgc), which constructs an HTTP-based URL request to "stats.itopupdate[.]com" along with information about the extension version, user's browser language, and usage "type"

    "Although credentials or passwords do not appear to be leaked, the fact that a password manager uses unencrypted requests for telemetry erodes trust in its overall security posture," Guo said.
    Symantec said it also identified another set of extensions with API keys, secrets, and tokens directly embedded in the JavaScript code, which an attacker could weaponize to craft malicious requests and carry out various malicious actions -

Online Security & Privacy extension (ID: gomekmidlodglbbmalcneegieacbdmki), AVG Online Security (ID: nbmoafcmbajniiapeidgficgifbfmjfo), Speed Dial [FVD] - New Tab Page, 3D, Sync (ID: llaficoajjainaijghjlofdfmbjpebpa), and SellerSprite - Amazon Research Tool (ID: lnbmbgocenenhhhdojdielgnmeflbnfb), which expose a hard-coded Google Analytics 4 (GA4) API secret that an attacker could use to bombard the GA4 endpoint and corrupt metrics
Equatio – Math Made Digital (ID: hjngolefdpdnooamgdldlkjgmdcmcjnc), which embeds a Microsoft Azure API key used for speech recognition that an attacker could use to inflate the developer's costs or exhaust their usage limits
Awesome Screen Recorder & Screenshot (ID: nlipoenfbbikpbjkfpfillcgkoblgpmj) and Scrolling Screenshot Tool & Screen Capture (ID: mfpiaehgjbbfednooihadalhehabhcjo), which expose the developer's Amazon Web Services (AWS) access key used to upload screenshots to the developer's S3 bucket
Microsoft Editor – Spelling & Grammar Checker (ID: gpaiobkfhnonedkhhfjpmhdalgeoebfa), which exposes a telemetry key named "StatsApiKey" to log user data for analytics
Antidote Connector (ID: lmbopdiikkamfphhgcckcjhojnokgfeo), which incorporates a third-party library called InboxSDK that contains hard-coded credentials, including API keys.
Watch2Gether (ID: cimpffimgeipdhnhjohpbehjkcdpjolg), which exposes a Tenor GIF search API key
Trust Wallet (ID: egjidjbpglichdcondbcbdnbeeppgdph), which exposes an API key associated with the Ramp Network, a Web3 platform that offers wallet developers a way to let users buy or sell crypto directly from the app
TravelArrow – Your Virtual Travel Agent (ID: coplmfnphahpcknbchcehdikbdieognn), which exposes a geolocation API key when making queries to "ip-api[.]com"

Attackers who end up finding these keys could weaponize them to drive up API costs, host illegal content, send spoofed telemetry data, and mimic cryptocurrency transaction orders, some of which could see the developer getting banned.
    Adding to the concern, Antidote Connector is just one of over 90 extensions that use InboxSDK, meaning the other extensions are susceptible to the same problem. The names of the other extensions were not disclosed by Symantec.

    "From GA4 analytics secrets to Azure speech keys, and from AWS S3 credentials to Google-specific tokens, each of these snippets demonstrates how a few lines of code can jeopardize an entire service," Guo said. "The solution: never store sensitive credentials on the client side."
Developers are advised to switch to HTTPS whenever they send or receive data, store credentials securely on a backend server using a credentials management service, and rotate secrets regularly to further minimize risk.
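
For developers (or cautious users) who want a first-pass check of an unpacked extension, a simple scan for plain-HTTP endpoints and key-like strings catches the classes of issue described here. This is a rough heuristic sketch, not a replacement for a proper secret scanner; the directory path is a placeholder.

```python
# Hedged audit sketch: scan an unpacked extension directory for plain
# "http://" endpoints and key-like hard-coded strings. The regexes are rough
# heuristics and will produce false positives.
import re
from pathlib import Path

EXT_DIR = Path("./unpacked-extension")   # placeholder path to the unpacked source

HTTP_URL = re.compile(r"http://[^\s\"']+")
KEY_LIKE = re.compile(
    r"(?:api[_-]?key|secret|token|AKIA[0-9A-Z]{16})[\"'\s:=]{0,4}[\"']?[A-Za-z0-9_\-]{16,}",
    re.IGNORECASE,
)

for path in EXT_DIR.rglob("*.js"):
    text = path.read_text(encoding="utf-8", errors="ignore")
    for match in HTTP_URL.finditer(text):
        print(f"[plain HTTP] {path.name}: {match.group(0)[:80]}")
    for match in KEY_LIKE.finditer(text):
        print(f"[key-like  ] {path.name}: {match.group(0)[:80]}")
```
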
    The findings show how even popular extensions with hundreds of thousands of installations can suffer from trivial misconfigurations and security blunders like hard-coded credentials, leaving users' data at risk.
    "Users of these extensions should consider removing them until the developers address the insecurecalls," the company said. "The risk is not just theoretical; unencrypted traffic is simple to capture, and the data can be used for profiling, phishing, or other targeted attacks."
    "The overarching lesson is that a large install base or a well-known brand does not necessarily ensure best practices around encryption. Extensions should be scrutinized for the protocols they use and the data they share, to ensure users' information remains truly safe."

    THEHACKERNEWS.COM
    Popular Chrome Extensions Leak API Keys, User Data via HTTP and Hard-Coded Credentials
  • Cloud Security Best Practices Protecting Business Data in a Multi-Cloud World

    The cloud has changed everything. It’s faster, cheaper, and easier to scale than traditional infrastructure. Initially, most companies chose a single cloud provider. That’s no longer enough. Now, nearly 86% of businesses use more than one cloud.
    This approach—called multi-cloud—lets teams choose the best features from each provider. But it also opens the door to new security risks. When apps, data, and tools are scattered across platforms, managing security gets harder. And in today's world of constant cyber threats, ignoring cloud security is not an option.
    Let’s walk through real-world challenges and the best ways to protect business data in a multi-cloud environment.

    1. Know What You’re Working With
    Start with visibility. Make a full inventory of the cloud platforms, apps, and storage your business uses. Ask every department—marketing, finance, HR—what tools they’ve signed up for. Many use services without informing IT. This is shadow IT, and it’s risky.
    Once you have the list, figure out what data lives where. Some workloads are low-risk. Others involve customer records, credit card data, or legal files. Prioritize those.

    2. Build a Unified Security Strategy
    One of the biggest mistakes companies make is treating each cloud provider as a separate system. Every provider has its own rules, tools, and settings. If your security strategy is broken up, gaps will appear.
    Instead, aim for a single, connected approach. Use the same access rules, encryption standards, and monitoring tools across all clouds. You don’t want different policies on AWS and Azure—it just invites trouble.
Tools like centralized dashboards, SIEM (security information and event management), and SOAR (security orchestration, automation, and response) help you keep everything in one place.

    3. Enforce Strict Access Controls
In a multi-cloud world, identity and access control is one of the hardest things to get right. Every platform has its own login system. Without proper integration, mistakes happen. Someone might get more access than they need, or never lose access when they leave the company.
    Stick to these practices:

    Use role-based access control.
    Limit permissions to the bare minimum.
    Turn on multi-factor authentication.
    Link logins across platforms using identity federation.

    The more consistent your access rules are, the easier it is to control who gets in and what they can do.
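
As one concrete example of auditing the multi-factor rule above, the sketch below lists IAM users in an AWS account that have no MFA device registered. It assumes the boto3 SDK and read-only IAM credentials; equivalent checks exist for other providers.

```python
# Minimal audit sketch (assumes the boto3 SDK and read-only IAM credentials):
# flag IAM users that have no MFA device registered. Console-access and
# access-key checks are intentionally omitted to keep the example short.
import boto3

iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        devices = iam.list_mfa_devices(UserName=name)["MFADevices"]
        if not devices:
            print(f"NO MFA: {name}")
```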

    4. Use the Zero Trust Model
    Zero Trust means never assume anything is safe. Every user, device, and app must prove itself—every time. Even if a user is on your network, don’t trust them by default.
    This model reduces risk. It checks each request. It verifies users. And it looks for signs of abnormal behavior, like someone logging in from a new device or country.
    Zero Trust works well with automation and real-time monitoring. It also forces teams to rethink how data is shared and accessed.

    5. Encrypt Data—Always
    Encryption is a basic but powerful layer of defense. It protects data whether it’s sitting in storage or moving between systems. If attackers get in, encrypted data is useless without the keys.
    Most cloud platforms offer built-in encryption. But don’t rely only on that. You can manage your own keys with tools like AWS KMS or Azure Key Vault. That gives you more control.
To stay safe (see the encryption sketch after this list):

    Encrypt both at rest and in transit.
    Avoid default settings.
    Rotate encryption keys regularly.
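
The sketch referenced above shows client-side encryption of data before it ever reaches a cloud bucket, using the third-party cryptography package. Key handling is deliberately simplified; in practice the key would come from AWS KMS, Azure Key Vault, or another managed key store.

```python
# Minimal sketch using the third-party "cryptography" package: encrypt data
# client-side before upload, so it is protected at rest regardless of the
# provider's own settings. Do not store the key alongside the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # store this in your KMS / secret manager
cipher = Fernet(key)

plaintext = b"customer-records.csv contents"
ciphertext = cipher.encrypt(plaintext)   # authenticated encryption
assert cipher.decrypt(ciphertext) == plaintext

print(f"ciphertext: {len(ciphertext)} bytes; keep the key separate from the data")
```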

    6. Monitor in Real Time
    Security is not a one-time task. You need to watch your systems around the clock. Set alerts for things like large file downloads, unusual logins, or traffic spikes.
    Centralized monitoring helps a lot. It pulls logs from all your platforms and tools into one place. That way, your security team isn’t flipping between dashboards when something goes wrong.
    Also, use automation to filter out noise and surface real threats faster.
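
As a tiny illustration of the kind of rule that surfaces real threats, the sketch below flags logins from a country an account has not used recently. The event format is an assumption; adapt it to whatever your SIEM or log pipeline actually emits.

```python
# Hedged sketch of a simple real-time alert rule: flag logins that come from
# a country the account has not used in the last 30 days. Event fields
# ("user", "country", "time") are an assumed format.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(days=30)
seen = defaultdict(dict)   # user -> {country: last_seen}

def check(event: dict) -> None:
    user, country = event["user"], event["country"]
    ts = datetime.fromisoformat(event["time"])
    last = seen[user].get(country)
    if last is None or ts - last > WINDOW:
        print(f"ALERT: {user} logged in from {country} (not seen recently)")
    seen[user][country] = ts

check({"user": "maria", "country": "BR", "time": "2025-06-01T09:00:00"})
check({"user": "maria", "country": "BR", "time": "2025-06-02T09:05:00"})
check({"user": "maria", "country": "RO", "time": "2025-06-02T09:07:00"})  # alerts
```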

    7. Set Up Regular Audits and Compliance Checks
    Multi-cloud setups are great for flexibility, but complex when it comes to compliance. Each platform has its own set of controls and certifications. Managing them all can be overwhelming.
    That’s why audits matter.
    Run security checks on a regular schedule—monthly, quarterly, or after every major change. Look for misconfigured permissions, missing patches, or unsecured data. And document everything.
    Also, make sure your tools help meet regulations like GDPR, HIPAA, or PCI DSS. Automated compliance scans can help stay on top of this.

    8. Prevent Data Loss with Smart Policies
Sensitive data is always at risk. Employees might share it by mistake. Attackers might try to steal it. That's where Data Loss Prevention (DLP) comes in.
    DLP tools block unauthorized sharing of personal data, financial records, or internal files. You can create rules like “Don’t send customer SSNs over email” or “Block uploads of credit card data to personal drives.”
    DLP also supports compliance and helps avoid lawsuits or fines when accidents happen.
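
A minimal version of such a rule can be expressed in a few lines: block outbound text that contains an SSN-like pattern or a card number that passes the Luhn check. Real DLP products add context, exceptions, and reporting on top of this core idea; the patterns below are only illustrative.

```python
# Hedged sketch of a DLP-style content rule: flag text containing US SSN
# patterns or 13-16 digit numbers that pass the Luhn check.
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = 0
    for i, d in enumerate(digits):
        total += d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
    return total % 10 == 0

def violates_policy(text: str) -> bool:
    if SSN.search(text):
        return True
    return any(luhn_ok(m.group(0)) for m in CARD.finditer(text))

print(violates_policy("invoice total is 1234"))              # False
print(violates_policy("card 4111 1111 1111 1111 attached"))  # True (test card number)
```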

    9. Automate Where You Can
    Manual work slows things down, and mistakes happen. That’s why automation is key in cloud security.
    Automate things like:

    Patch management
    Access reviews
    Backup schedules
    Security alerts

    Automation speeds up your response time. It also frees your security team to focus on serious issues, not routine tasks.

    10. Centralized Security Control
One major downside of multi-cloud is a lack of visibility. If you're jumping between different tools for each cloud, you miss things.
    Instead, use a centralized security management system. It collects data from all clouds, shows risk levels, flags issues, and helps you fix them from one place.
    This unified view makes a huge difference. It helps you react faster and stay ahead of threats.

    Final Thought
    Cloud providers have made data storage and computing easier than ever. But with great power comes risk. Using multiple clouds gives more choice, but also more responsibility.
    Most businesses today are not ready. Only 15% have a mature multi-cloud security plan, says the 2023 Cisco Cybersecurity Readiness Index. That means many are exposed.
    The good news? You can fix this. Start with simple steps. Know what you use. Lock it down. Watch it closely. Keep improving. And above all, treat cloud security not as a technical box to check, but as something critical to your business.
    Because in today’s world, a single breach can shut you down. And that’s too big a risk to ignore.
    #cloud #security #best #practices #protecting
    JUSTTOTALTECH.COM
    Cloud Security Best Practices Protecting Business Data in a Multi-Cloud World
  • The Orb Will See You Now

    Once again, Sam Altman wants to show you the future. The CEO of OpenAI is standing on a sparse stage in San Francisco, preparing to reveal his next move to an attentive crowd. “We needed some way for identifying, authenticating humans in the age of AGI,” Altman explains, referring to artificial general intelligence. “We wanted a way to make sure that humans stayed special and central.” The solution Altman came up with is looming behind him. It’s a white sphere about the size of a beach ball, with a camera at its center. The company that makes it, known as Tools for Humanity, calls this mysterious device the Orb. Stare into the heart of the plastic-and-silicon globe and it will map the unique furrows and ciliary zones of your iris. Seconds later, you’ll receive inviolable proof of your humanity: a 12,800-digit binary number, known as an iris code, sent to an app on your phone. At the same time, a packet of cryptocurrency called Worldcoin, worth approximately will be transferred to your digital wallet—your reward for becoming a “verified human.” Altman co-founded Tools for Humanity in 2019 as part of a suite of companies he believed would reshape the world. Once the tech he was developing at OpenAI passed a certain level of intelligence, he reasoned, it would mark the end of one era on the Internet and the beginning of another, in which AI became so advanced, so human-like, that you would no longer be able to tell whether what you read, saw, or heard online came from a real person. When that happened, Altman imagined, we would need a new kind of online infrastructure: a human-verification layer for the Internet, to distinguish real people from the proliferating number of bots and AI “agents.” And so Tools for Humanity set out to build a global “proof-of-humanity” network. It aims to verify 50 million people by the end of 2025; ultimately its goal is to sign up every single human being on the planet. The free crypto serves as both an incentive for users to sign up, and also an entry point into what the company hopes will become the world’s largest financial network, through which it believes “double-digit percentages of the global economy” will eventually flow. Even for Altman, these missions are audacious. “If this really works, it’s like a fundamental piece of infrastructure for the world,” Altman tells TIME in a video interview from the passenger seat of a car a few days before his April 30 keynote address.
    Internal hardware of the Orb in mid-assembly in March. Davide Monteleone for TIME
    The project’s goal is to solve a problem partly of Altman’s own making. In the near future, he and other tech leaders say, advanced AIs will be imbued with agency: the ability to not just respond to human prompting, but to take actions independently in the world. This will enable the creation of AI coworkers that can drop into your company and begin solving problems; AI tutors that can adapt their teaching style to students’ preferences; even AI doctors that can diagnose routine cases and handle scheduling or logistics. The arrival of these virtual agents, their venture capitalist backers predict, will turbocharge our productivity and unleash an age of material abundance. But AI agents will also have cascading consequences for the human experience online. “As AI systems become harder to distinguish from people, websites may face difficult trade-offs,” says a recent paper by researchers from 25 different universities, nonprofits, and tech companies, including OpenAI.
“There is a significant risk that digital institutions will be unprepared for a time when AI-powered agents, including those leveraged by malicious actors, overwhelm other activity online.” On social-media platforms like X and Facebook, bot-driven accounts are amassing billions of views on AI-generated content. In April, the foundation that runs Wikipedia disclosed that AI bots scraping their site were making the encyclopedia too costly to sustainably run. Later the same month, researchers from the University of Zurich found that AI-generated comments on the subreddit /r/ChangeMyView were up to six times more successful than human-written ones at persuading unknowing users to change their minds.
Photograph by Davide Monteleone for TIME
The arrival of agents won’t only threaten our ability to distinguish between authentic and AI content online. It will also challenge the Internet’s core business model, online advertising, which relies on the assumption that ads are being viewed by humans. “The Internet will change very drastically sometime in the next 12 to 24 months,” says Tools for Humanity CEO Alex Blania. “So we have to succeed, or I’m not sure what else would happen.” For four years, Blania’s team has been testing the Orb’s hardware abroad. Now the U.S. rollout has arrived. Over the next 12 months, 7,500 Orbs will be arriving in dozens of American cities, in locations like gas stations, bodegas, and flagship stores in Los Angeles, Austin, and Miami. The project’s founders and fans hope the Orb’s U.S. debut will kickstart a new phase of growth. The San Francisco keynote was titled: “At Last.” It’s not clear the public appetite matches the exultant branding. Tools for Humanity has “verified” just 12 million humans since mid-2023, a pace Blania concedes is well behind schedule. Few online platforms currently support the so-called “World ID” that the Orb bestows upon its visitors, leaving little to entice users to give up their biometrics beyond the lure of free crypto. Even Altman isn’t sure whether the whole thing can work. “I can see this becomes a fairly mainstream thing in a few years,” he says. “Or I can see that it’s still only used by a small subset of people who think about the world in a certain way.”
Blania and Altman debut the Orb at World’s U.S. launch in San Francisco on April 30, 2025. Jason Henry—The New York Times/Redux
Yet as the Internet becomes overrun with AI, the creators of this strange new piece of hardware are betting that everybody in the world will soon want—or need—to visit an Orb. The biometric code it creates, they predict, will become a new type of digital passport, without which you might be denied passage to the Internet of the future, from dating apps to government services. In a best-case scenario, World ID could be a privacy-preserving way to fortify the Internet against an AI-driven deluge of fake or deceptive content. It could also enable the distribution of universal basic income—a policy that Altman has previously touted—as AI automation transforms the global economy. To examine what this new technology might mean, I reported from three continents, interviewed 10 Tools for Humanity executives and investors, reviewed hundreds of pages of company documents, and “verified” my own humanity. The Internet will inevitably need some kind of proof-of-humanity system in the near future, says Divya Siddarth, founder of the nonprofit Collective Intelligence Project.
The real question, she argues, is whether such a system will be centralized—“a big security nightmare that enables a lot of surveillance”—or privacy-preserving, as the Orb claims to be. Questions remain about Tools for Humanity’s corporate structure, its yoking to an unstable cryptocurrency, and what power it would concentrate in the hands of its owners if successful. Yet it’s also one of the only attempts to solve what many see as an increasingly urgent problem. “There are some issues with it,” Siddarth says of World ID. “But you can’t preserve the Internet in amber. Something in this direction is necessary.” In March, I met Blania at Tools for Humanity’s San Francisco headquarters, where a large screen displays the number of weekly “Orb verifications” by country. A few days earlier, the CEO had attended a million-per-head dinner at Mar-a-Lago with President Donald Trump, whom he credits with clearing the way for the company’s U.S. launch by relaxing crypto regulations. “Given Sam is a very high profile target,” Blania says, “we just decided that we would let other companies fight that fight, and enter the U.S. once the air is clear.” As a kid growing up in Germany, Blania was a little different than his peers. “Other kids were, like, drinking a lot, or doing a lot of parties, and I was just building a lot of things that could potentially blow up,” he recalls. At the California Institute of Technology, where he was pursuing research for a master’s degree, he spent many evenings reading the blogs of startup gurus like Paul Graham and Altman. Then, in 2019, Blania received an email from Max Novendstern, an entrepreneur who had been kicking around a concept with Altman to build a global cryptocurrency network. They were looking for technical minds to help with the project. Over cappuccinos, Altman told Blania he was certain about three things. First, smarter-than-human AI was not only possible, but inevitable—and it would soon mean you could no longer assume that anything you read, saw, or heard on the Internet was human-created. Second, cryptocurrency and other decentralized technologies would be a massive force for change in the world. And third, scale was essential to any crypto network’s value.
The Orb is tested on a calibration rig, surrounded by checkerboard targets to ensure precision in iris detection. Davide Monteleone for TIME
The goal of Worldcoin, as the project was initially called, was to combine those three insights. Altman took a lesson from PayPal, the company co-founded by his mentor Peter Thiel. Of its initial funding, PayPal spent less than million actually building its app—but pumped an additional million or so into a referral program, whereby new users and the person who invited them would each receive in credit. The referral program helped make PayPal a leading payment platform. Altman thought a version of that strategy would propel Worldcoin to similar heights. He wanted to create a new cryptocurrency and give it to users as a reward for signing up. The more people who joined the system, the higher the token’s value would theoretically rise. Since 2019, the project has raised million from investors like Coinbase and the venture capital firm Andreessen Horowitz. That money paid for the million cost of designing the Orb, plus maintaining the software it runs on. The total market value of all Worldcoins in existence, however, is far higher—around billion. That number is a bit misleading: most of those coins are not in circulation and Worldcoin’s price has fluctuated wildly.
Still, it allows the company to reward users for signing up at no cost to itself. The main lure for investors is the crypto upside. Some 75% of all Worldcoins are set aside for humans to claim when they sign up, or as referral bonuses. The remaining 25% are split between Tools for Humanity’s backers and staff, including Blania and Altman. “I’m really excited to make a lot of money,” Blania says. From the beginning, Altman was thinking about the consequences of the AI revolution he intended to unleash. A future in which advanced AI could perform most tasks more effectively than humans would bring a wave of unemployment and economic dislocation, he reasoned. Some kind of wealth redistribution might be necessary. In 2016, he partially funded a study of basic income, which gave per-month handouts to low-income individuals in Illinois and Texas. But there was no single financial system that would allow money to be sent to everybody in the world. Nor was there a way to stop an individual human from claiming their share twice—or to identify a sophisticated AI pretending to be human and pocketing some cash of its own. In 2023, Tools for Humanity raised the possibility of using the network to redistribute the profits of AI labs that were able to automate human labor. “As AI advances,” it said, “fairly distributing access and some of the created value through UBI will play an increasingly vital role in counteracting the concentration of economic power.” Blania was taken by the pitch, and agreed to join the project as a co-founder. “Most people told us we were very stupid or crazy or insane, including Silicon Valley investors,” Blania says. At least until ChatGPT came out in 2022, transforming OpenAI into one of the world’s most famous tech companies and kickstarting a market bull-run. “Things suddenly started to make more and more sense to the external world,” Blania says of the vision to develop a global “proof-of-humanity” network. “You have to imagine a world in which you will have very smart and competent systems somehow flying through the Internet with different goals and ideas of what they want to do, and us having no idea anymore what we’re dealing with.” After our interview, Blania’s head of communications ushers me over to a circular wooden structure where eight Orbs face one another. The scene feels like a cross between an Apple Store and a ceremonial altar. “Do you want to get verified?” she asks. Putting aside my reservations for the purposes of research, I download the World App and follow its prompts. I flash a QR code at the Orb, then gaze into it. A minute or so later, my phone buzzes with confirmation: I’ve been issued my own personal World ID and some Worldcoin.
The first thing the Orb does is check if you’re human, using a neural network that takes input from various sensors, including an infrared camera and a thermometer. Davide Monteleone for TIME
While I stared into the Orb, several complex procedures had taken place at once. A neural network took inputs from multiple sensors—an infrared camera, a thermometer—to confirm I was a living human. Simultaneously, a telephoto lens zoomed in on my iris, capturing the physical traits within that distinguish me from every other human on Earth. It then converted that image into an iris code: a numerical abstraction of my unique biometric data. Then the Orb checked to see if my iris code matched any it had seen before, using a technique allowing encrypted data to be compared without revealing the underlying information.
Before the Orb deleted my data, it turned my iris code into several derivative codes—none of which on its own can be linked back to the original—encrypted them, deleted the only copies of the decryption keys, and sent each one to a different secure server, so that future users’ iris codes can be checked for uniqueness against mine. If I were to use my World ID to access a website, that site would learn nothing about me except that I’m human. The Orb is open-source, so outside experts can examine its code and verify the company’s privacy claims. “I did a colonoscopy on this company and these technologies before I agreed to join,” says Trevor Traina, a Trump donor and former U.S. ambassador to Austria who now serves as Tools for Humanity’s chief business officer. “It is the most privacy-preserving technology on the planet.” Only weeks later, when researching what would happen if I wanted to delete my data, do I discover that Tools for Humanity’s privacy claims rest on what feels like a sleight of hand. The company argues that in modifying your iris code, it has “effectively anonymized” your biometric data. If you ask Tools for Humanity to delete your iris codes, they will delete the one stored on your phone, but not the derivatives. Those, they argue, are no longer your personal data at all. But if I were to return to an Orb after deleting my data, it would still recognize those codes as uniquely mine. Once you look into the Orb, a piece of your identity remains in the system forever. If users could truly delete that data, the premise of one ID per human would collapse, Tools for Humanity’s chief privacy officer Damien Kieran tells me when I call seeking an explanation. People could delete and sign up for new World IDs after being suspended from a platform. Or claim their Worldcoin tokens, sell them, delete their data, and cash in again. This argument fell flat with European Union regulators in Germany, who recently declared that the Orb posed “fundamental data protection issues” and ordered the company to allow European users to fully delete even their anonymized data. “Just like any other technology service, users cannot delete data that is not personal data,” Kieran said in a statement. “If a person could delete anonymized data that can’t be linked to them by World or any third party, it would allow bad actors to circumvent the security and safety that World ID is working to bring to every human.” On a balmy afternoon this spring, I climb a flight of stairs up to a room above a restaurant in an outer suburb of Seoul. Five elderly South Koreans tap on their phones as they wait to be “verified” by the two Orbs in the center of the room. “We don’t really know how to distinguish between AI and humans anymore,” an attendant in a company t-shirt explains in Korean, gesturing toward the spheres. “We need a way to verify that we’re human and not AI. So how do we do that? Well, humans have irises, but AI doesn’t.” The attendant ushers an elderly woman over to an Orb. It bleeps. “Open your eyes,” a disembodied voice says in English. The woman stares into the camera. Seconds later, she checks her phone and sees that a packet of Worldcoin worth 75,000 Korean won has landed in her digital wallet. Congratulations, the app tells her. You are now a verified human.
A visitor views the Orbs in Seoul on April 14, 2025. Taemin Ha for TIME
Tools for Humanity aims to “verify” 1 million Koreans over the next year. Taemin Ha for TIME
A couple dozen Orbs have been available in South Korea since 2023, verifying roughly 55,000 people. Now Tools for Humanity is redoubling its efforts there. At an event in a traditional wooden hanok house in central Seoul, an executive announces that 250 Orbs will soon be dispersed around the country—with the aim of verifying 1 million Koreans in the next 12 months. South Korea has high levels of smartphone usage, crypto and AI adoption, and Internet access, while average wages are modest enough for the free Worldcoin on offer to still be an enticing draw—all of which makes it fertile testing ground for the company’s ambitious global expansion. Yet things seem off to a slow start. In a retail space I visited in central Seoul, Tools for Humanity had constructed a wooden structure with eight Orbs facing each other. Locals and tourists wander past looking bemused; few volunteer themselves up. Most who do tell me they are crypto enthusiasts who came intentionally, driven more by the spirit of early adoption than the free coins. The next day, I visit a coffee shop in central Seoul where a chrome Orb sits unassumingly in one corner. Wu Ruijun, a 20-year-old student from China, strikes up a conversation with the barista, who doubles as the Orb’s operator. Wu was invited here by a friend who said both could claim free cryptocurrency if he signed up. The barista speeds him through the process. Wu accepts the privacy disclosure without reading it, and widens his eyes for the Orb. Soon he’s verified. “I wasn’t told anything about the privacy policy,” he says on his way out. “I just came for the money.” As Altman’s car winds through San Francisco, I ask about the vision he laid out in 2019: that AI would make it harder for us to trust each other online. To my surprise, he rejects the framing. “I’m much more like: what is the good we can create, rather than the bad we can stop?” he says. “It’s not like, ‘Oh, we’ve got to avoid the bot overrun’ or whatever. It’s just that we can do a lot of special things for humans.” It’s an answer that may reflect how his role has changed over the years. Altman is now the chief public cheerleader of a billion company that’s touting the transformative utility of AI agents. The rise of agents, he and others say, will be a boon for our quality of life—like having an assistant on hand who can answer your most pressing questions, carry out mundane tasks, and help you develop new skills. It’s an optimistic vision that may well pan out. But it doesn’t quite fit with the prophecies of AI-enabled infopocalypse that Tools for Humanity was founded upon. Altman waves away a question about the influence he and other investors stand to gain if their vision is realized. Most holders, he assumes, will have already started selling their tokens—too early, he adds. “What I think would be bad is if an early crew had a lot of control over the protocol,” he says, “and that’s where I think the commitment to decentralization is so cool.” Altman is referring to the World Protocol, the underlying technology upon which the Orb, Worldcoin, and World ID all rely. Tools for Humanity is developing it, but has committed to giving control to its users over time—a process they say will prevent power from being concentrated in the hands of a few executives or investors. Tools for Humanity would remain a for-profit company, and could levy fees on platforms that use World ID, but other companies would be able to compete for customers by building alternative apps—or even alternative Orbs.
The plan draws on ideas that animated the crypto ecosystem in the late 2010s and early 2020s, when evangelists for emerging blockchain technologies argued that the centralization of power—especially in large so-called “Web 2.0” tech companies—was responsible for many of the problems plaguing the modern Internet. Just as decentralized cryptocurrencies could reform a financial system controlled by economic elites, so too would it be possible to create decentralized organizations, run by their members instead of CEOs. How such a system might work in practice remains unclear. “Building a community-based governance system,” Tools for Humanity says in a 2023 white paper, “represents perhaps the most formidable challenge of the entire project.” Altman has a pattern of making idealistic promises that shift over time. He founded OpenAI as a nonprofit in 2015, with a mission to develop AGI safely and for the benefit of all humanity. To raise money, OpenAI restructured itself as a for-profit company in 2019, but with overall control still in the hands of its nonprofit board. Last year, Altman proposed yet another restructure—one which would dilute the board’s control and allow more profits to flow to shareholders. Why, I ask, should the public trust Tools for Humanity’s commitment to freely surrender influence and power? “I think you will just see the continued decentralization via the protocol,” he says. “The value here is going to live in the network, and the network will be owned and governed by a lot of people.” Altman talks less about universal basic income these days. He recently mused about an alternative, which he called “universal basic compute.” Instead of AI companies redistributing their profits, he seemed to suggest, they could instead give everyone in the world fair access to super-powerful AI. Blania tells me he recently “made the decision to stop talking” about UBI at Tools for Humanity. “UBI is one potential answer,” he says. “Just giving access to the latest models and having them learn faster and better is another.” Says Altman: “I still don’t know what the right answer is. I believe we should do a better job of distribution of resources than we currently do.” When I probe the question of why people should trust him, Altman gets irritated. “I understand that you hate AI, and that’s fine,” he says. “If you want to frame it as the downside of AI is that there’s going to be a proliferation of very convincing AI systems that are pretending to be human, and we need ways to know what is really human-authorized versus not, then yeah, I think you can call that a downside of AI. It’s not how I would naturally frame it.” The phrase human-authorized hints at a tension between World ID and OpenAI’s plans for AI agents. An Internet where a World ID is required to access most services might impede the usefulness of the agents that OpenAI and others are developing. So Tools for Humanity is building a system that would allow users to delegate their World ID to an agent, allowing the bot to take actions online on their behalf, according to Tiago Sada, the company’s chief product officer. “We’ve built everything in a way that can be very easily delegatable to an agent,” Sada says. It’s a measure that would allow humans to be held accountable for the actions of their AIs. But it suggests that Tools for Humanity’s mission may be shifting beyond simply proving humanity, and toward becoming the infrastructure that enables AI agents to proliferate with human authorization.
World ID doesn’t tell you whether a piece of content is AI-generated or human-generated; all it tells you is whether the account that posted it is a human or a bot. Even in a world where everybody had a World ID, our online spaces might still be filled with AI-generated text, images, and videos. As I say goodbye to Altman, I’m left feeling conflicted about his project. If the Internet is going to be transformed by AI agents, then some kind of proof-of-humanity system will almost certainly be necessary. Yet if the Orb becomes a piece of Internet infrastructure, it could give Altman—a beneficiary of the proliferation of AI content—significant influence over a leading defense mechanism against it. People might have no choice but to participate in the network in order to access social media or online services. I thought of an encounter I witnessed in Seoul. In the room above the restaurant, Cho Jeong-yeon, 75, watched her friend get verified by an Orb. Cho had been invited to do the same, but demurred. The reward wasn’t enough for her to surrender a part of her identity. “Your iris is uniquely yours, and we don’t really know how it might be used,” she says. “Seeing the machine made me think: are we becoming machines instead of humans now? Everything is changing, and we don’t know how it’ll all turn out.” —With reporting by Stephen Kim/Seoul. This story was supported by Tarbell Grants.
Correction, May 30: The original version of this story misstated the market capitalization of Worldcoin if all coins were in circulation. It is billion, not billion.
    #orb #will #see #you #now
World ID doesn’t tell you whether a piece of content is AI-generated or human-generated; all it tells you is whether the account that posted it is a human or a bot. Even in a world where everybody had a World ID, our online spaces might still be filled with AI-generated text, images, and videos.As I say goodbye to Altman, I’m left feeling conflicted about his project. If the Internet is going to be transformed by AI agents, then some kind of proof-of-humanity system will almost certainly be necessary. Yet if the Orb becomes a piece of Internet infrastructure, it could give Altman—a beneficiary of the proliferation of AI content—significant influence over a leading defense mechanism against it. People might have no choice but to participate in the network in order to access social media or online services.I thought of an encounter I witnessed in Seoul. In the room above the restaurant, Cho Jeong-yeon, 75, watched her friend get verified by an Orb. Cho had been invited to do the same, but demurred. The reward wasn’t enough for her to surrender a part of her identity. “Your iris is uniquely yours, and we don’t really know how it might be used,” she says. “Seeing the machine made me think: are we becoming machines instead of humans now? Everything is changing, and we don’t know how it’ll all turn out.”—With reporting by Stephen Kim/Seoul. This story was supported by Tarbell Grants.Correction, May 30The original version of this story misstated the market capitalization of Worldcoin if all coins were in circulation. It is billion, not billion. #orb #will #see #you #now
    TIME.COM
    The Orb Will See You Now
    Once again, Sam Altman wants to show you the future. The CEO of OpenAI is standing on a sparse stage in San Francisco, preparing to reveal his next move to an attentive crowd. “We needed some way for identifying, authenticating humans in the age of AGI,” Altman explains, referring to artificial general intelligence. “We wanted a way to make sure that humans stayed special and central.” The solution Altman came up with is looming behind him. It’s a white sphere about the size of a beach ball, with a camera at its center. The company that makes it, known as Tools for Humanity, calls this mysterious device the Orb. Stare into the heart of the plastic-and-silicon globe and it will map the unique furrows and ciliary zones of your iris. Seconds later, you’ll receive inviolable proof of your humanity: a 12,800-digit binary number, known as an iris code, sent to an app on your phone. At the same time, a packet of cryptocurrency called Worldcoin, worth approximately $42, will be transferred to your digital wallet—your reward for becoming a “verified human.” Altman co-founded Tools for Humanity in 2019 as part of a suite of companies he believed would reshape the world. Once the tech he was developing at OpenAI passed a certain level of intelligence, he reasoned, it would mark the end of one era on the Internet and the beginning of another, in which AI became so advanced, so human-like, that you would no longer be able to tell whether what you read, saw, or heard online came from a real person. When that happened, Altman imagined, we would need a new kind of online infrastructure: a human-verification layer for the Internet, to distinguish real people from the proliferating number of bots and AI “agents.”

And so Tools for Humanity set out to build a global “proof-of-humanity” network. It aims to verify 50 million people by the end of 2025; ultimately its goal is to sign up every single human being on the planet. The free crypto serves as both an incentive for users to sign up, and also an entry point into what the company hopes will become the world’s largest financial network, through which it believes “double-digit percentages of the global economy” will eventually flow. Even for Altman, these missions are audacious. “If this really works, it’s like a fundamental piece of infrastructure for the world,” Altman tells TIME in a video interview from the passenger seat of a car a few days before his April 30 keynote address.

Internal hardware of the Orb in mid-assembly in March. Davide Monteleone for TIME

The project’s goal is to solve a problem partly of Altman’s own making. In the near future, he and other tech leaders say, advanced AIs will be imbued with agency: the ability to not just respond to human prompting, but to take actions independently in the world. This will enable the creation of AI coworkers that can drop into your company and begin solving problems; AI tutors that can adapt their teaching style to students’ preferences; even AI doctors that can diagnose routine cases and handle scheduling or logistics. The arrival of these virtual agents, their venture capitalist backers predict, will turbocharge our productivity and unleash an age of material abundance.

But AI agents will also have cascading consequences for the human experience online. “As AI systems become harder to distinguish from people, websites may face difficult trade-offs,” says a recent paper by researchers from 25 different universities, nonprofits, and tech companies, including OpenAI.
“There is a significant risk that digital institutions will be unprepared for a time when AI-powered agents, including those leveraged by malicious actors, overwhelm other activity online.” On social-media platforms like X and Facebook, bot-driven accounts are amassing billions of views on AI-generated content. In April, the foundation that runs Wikipedia disclosed that AI bots scraping their site were making the encyclopedia too costly to sustainably run. Later the same month, researchers from the University of Zurich found that AI-generated comments on the subreddit /r/ChangeMyView were up to six times more successful than human-written ones at persuading unknowing users to change their minds.

Photograph by Davide Monteleone for TIME

The arrival of agents won’t only threaten our ability to distinguish between authentic and AI content online. It will also challenge the Internet’s core business model, online advertising, which relies on the assumption that ads are being viewed by humans. “The Internet will change very drastically sometime in the next 12 to 24 months,” says Tools for Humanity CEO Alex Blania. “So we have to succeed, or I’m not sure what else would happen.”

For four years, Blania’s team has been testing the Orb’s hardware abroad. Now the U.S. rollout has arrived. Over the next 12 months, 7,500 Orbs will be arriving in dozens of American cities, in locations like gas stations, bodegas, and flagship stores in Los Angeles, Austin, and Miami. The project’s founders and fans hope the Orb’s U.S. debut will kickstart a new phase of growth. The San Francisco keynote was titled: “At Last.” It’s not clear the public appetite matches the exultant branding. Tools for Humanity has “verified” just 12 million humans since mid-2023, a pace Blania concedes is well behind schedule. Few online platforms currently support the so-called “World ID” that the Orb bestows upon its visitors, leaving little to entice users to give up their biometrics beyond the lure of free crypto. Even Altman isn’t sure whether the whole thing can work. “I can see [how] this becomes a fairly mainstream thing in a few years,” he says. “Or I can see that it’s still only used by a small subset of people who think about the world in a certain way.”

Blania (left) and Altman debut the Orb at World’s U.S. launch in San Francisco on April 30, 2025. Jason Henry—The New York Times/Redux

Yet as the Internet becomes overrun with AI, the creators of this strange new piece of hardware are betting that everybody in the world will soon want—or need—to visit an Orb. The biometric code it creates, they predict, will become a new type of digital passport, without which you might be denied passage to the Internet of the future, from dating apps to government services. In a best-case scenario, World ID could be a privacy-preserving way to fortify the Internet against an AI-driven deluge of fake or deceptive content. It could also enable the distribution of universal basic income (UBI)—a policy that Altman has previously touted—as AI automation transforms the global economy. To examine what this new technology might mean, I reported from three continents, interviewed 10 Tools for Humanity executives and investors, reviewed hundreds of pages of company documents, and “verified” my own humanity. The Internet will inevitably need some kind of proof-of-humanity system in the near future, says Divya Siddarth, founder of the nonprofit Collective Intelligence Project.
The real question, she argues, is whether such a system will be centralized—“a big security nightmare that enables a lot of surveillance”—or privacy-preserving, as the Orb claims to be. Questions remain about Tools for Humanity’s corporate structure, its yoking to an unstable cryptocurrency, and what power it would concentrate in the hands of its owners if successful. Yet it’s also one of the only attempts to solve what many see as an increasingly urgent problem. “There are some issues with it,” Siddarth says of World ID. “But you can’t preserve the Internet in amber. Something in this direction is necessary.”

In March, I met Blania at Tools for Humanity’s San Francisco headquarters, where a large screen displays the number of weekly “Orb verifications” by country. A few days earlier, the CEO had attended a $1 million-per-head dinner at Mar-a-Lago with President Donald Trump, whom he credits with clearing the way for the company’s U.S. launch by relaxing crypto regulations. “Given Sam is a very high profile target,” Blania says, “we just decided that we would let other companies fight that fight, and enter the U.S. once the air is clear.” As a kid growing up in Germany, Blania was a little different than his peers. “Other kids were, like, drinking a lot, or doing a lot of parties, and I was just building a lot of things that could potentially blow up,” he recalls. At the California Institute of Technology, where he was pursuing research for a master’s degree, he spent many evenings reading the blogs of startup gurus like Paul Graham and Altman. Then, in 2019, Blania received an email from Max Novendstern, an entrepreneur who had been kicking around a concept with Altman to build a global cryptocurrency network. They were looking for technical minds to help with the project. Over cappuccinos, Altman told Blania he was certain about three things. First, smarter-than-human AI was not only possible, but inevitable—and it would soon mean you could no longer assume that anything you read, saw, or heard on the Internet was human-created. Second, cryptocurrency and other decentralized technologies would be a massive force for change in the world. And third, scale was essential to any crypto network’s value.

The Orb is tested on a calibration rig, surrounded by checkerboard targets to ensure precision in iris detection. Davide Monteleone for TIME

The goal of Worldcoin, as the project was initially called, was to combine those three insights. Altman took a lesson from PayPal, the company co-founded by his mentor Peter Thiel. Of its initial funding, PayPal spent less than $10 million actually building its app—but pumped an additional $70 million or so into a referral program, whereby new users and the person who invited them would each receive $10 in credit. The referral program helped make PayPal a leading payment platform. Altman thought a version of that strategy would propel Worldcoin to similar heights. He wanted to create a new cryptocurrency and give it to users as a reward for signing up. The more people who joined the system, the higher the token’s value would theoretically rise. Since 2019, the project has raised $244 million from investors like Coinbase and the venture capital firm Andreessen Horowitz. That money paid for the $50 million cost of designing the Orb, plus maintaining the software it runs on. The total market value of all Worldcoins in existence, however, is far higher—around $12 billion.
That number is a bit misleading: most of those coins are not in circulation and Worldcoin’s price has fluctuated wildly. Still, it allows the company to reward users for signing up at no cost to itself. The main lure for investors is the crypto upside. Some 75% of all Worldcoins are set aside for humans to claim when they sign up, or as referral bonuses. The remaining 25% are split between Tools for Humanity’s backers and staff, including Blania and Altman. “I’m really excited to make a lot of money,” Blania says.

From the beginning, Altman was thinking about the consequences of the AI revolution he intended to unleash. (On May 21, he announced plans to team up with famed former Apple designer Jony Ive on a new AI personal device.) A future in which advanced AI could perform most tasks more effectively than humans would bring a wave of unemployment and economic dislocation, he reasoned. Some kind of wealth redistribution might be necessary. In 2016, he partially funded a study of basic income, which gave $1,000 monthly handouts to low-income individuals in Illinois and Texas. But there was no single financial system that would allow money to be sent to everybody in the world. Nor was there a way to stop an individual human from claiming their share twice—or to identify a sophisticated AI pretending to be human and pocketing some cash of its own. In 2023, Tools for Humanity raised the possibility of using the network to redistribute the profits of AI labs that were able to automate human labor. “As AI advances,” it said, “fairly distributing access and some of the created value through UBI will play an increasingly vital role in counteracting the concentration of economic power.”

Blania was taken by the pitch, and agreed to join the project as a co-founder. “Most people told us we were very stupid or crazy or insane, including Silicon Valley investors,” Blania says. At least until ChatGPT came out in 2022, transforming OpenAI into one of the world’s most famous tech companies and kickstarting a market bull-run. “Things suddenly started to make more and more sense to the external world,” Blania says of the vision to develop a global “proof-of-humanity” network. “You have to imagine a world in which you will have very smart and competent systems somehow flying through the Internet with different goals and ideas of what they want to do, and us having no idea anymore what we’re dealing with.”

After our interview, Blania’s head of communications ushers me over to a circular wooden structure where eight Orbs face one another. The scene feels like a cross between an Apple Store and a ceremonial altar. “Do you want to get verified?” she asks. Putting aside my reservations for the purposes of research, I download the World App and follow its prompts. I flash a QR code at the Orb, then gaze into it. A minute or so later, my phone buzzes with confirmation: I’ve been issued my own personal World ID and some Worldcoin.

The first thing the Orb does is check if you’re human, using a neural network that takes input from various sensors, including an infrared camera and a thermometer. Davide Monteleone for TIME

While I stared into the Orb, several complex procedures had taken place at once. A neural network took inputs from multiple sensors—an infrared camera, a thermometer—to confirm I was a living human. Simultaneously, a telephoto lens zoomed in on my iris, capturing the physical traits within that distinguish me from every other human on Earth.
It then converted that image into an iris code: a numerical abstraction of my unique biometric data. Then the Orb checked to see if my iris code matched any it had seen before, using a technique allowing encrypted data to be compared without revealing the underlying information. Before the Orb deleted my data, it turned my iris code into several derivative codes—none of which on its own can be linked back to the original—encrypted them, deleted the only copies of the decryption keys, and sent each one to a different secure server, so that future users’ iris codes can be checked for uniqueness against mine. If I were to use my World ID to access a website, that site would learn nothing about me except that I’m human. The Orb is open-source, so outside experts can examine its code and verify the company’s privacy claims. “I did a colonoscopy on this company and these technologies before I agreed to join,” says Trevor Traina, a Trump donor and former U.S. ambassador to Austria who now serves as Tools for Humanity’s chief business officer. “It is the most privacy-preserving technology on the planet.”

Only weeks later, when researching what would happen if I wanted to delete my data, do I discover that Tools for Humanity’s privacy claims rest on what feels like a sleight of hand. The company argues that in modifying your iris code, it has “effectively anonymized” your biometric data. If you ask Tools for Humanity to delete your iris codes, they will delete the one stored on your phone, but not the derivatives. Those, they argue, are no longer your personal data at all. But if I were to return to an Orb after deleting my data, it would still recognize those codes as uniquely mine. Once you look into the Orb, a piece of your identity remains in the system forever. If users could truly delete that data, the premise of one ID per human would collapse, Tools for Humanity’s chief privacy officer Damien Kieran tells me when I call seeking an explanation. People could delete and sign up for new World IDs after being suspended from a platform. Or claim their Worldcoin tokens, sell them, delete their data, and cash in again. This argument fell flat with European Union regulators in Germany, who recently declared that the Orb posed “fundamental data protection issues” and ordered the company to allow European users to fully delete even their anonymized data. (Tools for Humanity has appealed; the regulator is now reassessing the decision.) “Just like any other technology service, users cannot delete data that is not personal data,” Kieran said in a statement. “If a person could delete anonymized data that can’t be linked to them by World or any third party, it would allow bad actors to circumvent the security and safety that World ID is working to bring to every human.”

On a balmy afternoon this spring, I climb a flight of stairs up to a room above a restaurant in an outer suburb of Seoul. Five elderly South Koreans tap on their phones as they wait to be “verified” by the two Orbs in the center of the room. “We don’t really know how to distinguish between AI and humans anymore,” an attendant in a company t-shirt explains in Korean, gesturing toward the spheres. “We need a way to verify that we’re human and not AI. So how do we do that? Well, humans have irises, but AI doesn’t.”

The attendant ushers an elderly woman over to an Orb. It bleeps. “Open your eyes,” a disembodied voice says in English. The woman stares into the camera.
Seconds later, she checks her phone and sees that a packet of Worldcoin worth 75,000 Korean won (about $54) has landed in her digital wallet. Congratulations, the app tells her. You are now a verified human.

A visitor views the Orbs in Seoul on April 14, 2025. Taemin Ha for TIME

Tools for Humanity aims to “verify” 1 million Koreans over the next year. Taemin Ha for TIME

A couple dozen Orbs have been available in South Korea since 2023, verifying roughly 55,000 people. Now Tools for Humanity is redoubling its efforts there. At an event in a traditional wooden hanok house in central Seoul, an executive announces that 250 Orbs will soon be dispersed around the country—with the aim of verifying 1 million Koreans in the next 12 months. South Korea has high levels of smartphone usage, crypto and AI adoption, and Internet access, while average wages are modest enough for the free Worldcoin on offer to still be an enticing draw—all of which makes it fertile testing ground for the company’s ambitious global expansion. Yet things seem off to a slow start. In a retail space I visited in central Seoul, Tools for Humanity had constructed a wooden structure with eight Orbs facing each other. Locals and tourists wander past looking bemused; few volunteer themselves up. Most who do tell me they are crypto enthusiasts who came intentionally, driven more by the spirit of early adoption than the free coins. The next day, I visit a coffee shop in central Seoul where a chrome Orb sits unassumingly in one corner. Wu Ruijun, a 20-year-old student from China, strikes up a conversation with the barista, who doubles as the Orb’s operator. Wu was invited here by a friend who said both could claim free cryptocurrency if he signed up. The barista speeds him through the process. Wu accepts the privacy disclosure without reading it, and widens his eyes for the Orb. Soon he’s verified. “I wasn’t told anything about the privacy policy,” he says on his way out. “I just came for the money.”

As Altman’s car winds through San Francisco, I ask about the vision he laid out in 2019: that AI would make it harder for us to trust each other online. To my surprise, he rejects the framing. “I’m much more [about] like: what is the good we can create, rather than the bad we can stop?” he says. “It’s not like, ‘Oh, we’ve got to avoid the bot overrun’ or whatever. It’s just that we can do a lot of special things for humans.” It’s an answer that may reflect how his role has changed over the years. Altman is now the chief public cheerleader of a $300 billion company that’s touting the transformative utility of AI agents. The rise of agents, he and others say, will be a boon for our quality of life—like having an assistant on hand who can answer your most pressing questions, carry out mundane tasks, and help you develop new skills. It’s an optimistic vision that may well pan out. But it doesn’t quite fit with the prophecies of AI-enabled infopocalypse that Tools for Humanity was founded upon.

Altman waves away a question about the influence he and other investors stand to gain if their vision is realized. Most holders, he assumes, will have already started selling their tokens—too early, he adds. “What I think would be bad is if an early crew had a lot of control over the protocol,” he says, “and that’s where I think the commitment to decentralization is so cool.” Altman is referring to the World Protocol, the underlying technology upon which the Orb, Worldcoin, and World ID all rely.
Tools for Humanity is developing it, but has committed to giving control to its users over time—a process they say will prevent power from being concentrated in the hands of a few executives or investors. Tools for Humanity would remain a for-profit company, and could levy fees on platforms that use World ID, but other companies would be able to compete for customers by building alternative apps—or even alternative Orbs. The plan draws on ideas that animated the crypto ecosystem in the late 2010s and early 2020s, when evangelists for emerging blockchain technologies argued that the centralization of power—especially in large so-called “Web 2.0” tech companies—was responsible for many of the problems plaguing the modern Internet. Just as decentralized cryptocurrencies could reform a financial system controlled by economic elites, so too would it be possible to create decentralized organizations, run by their members instead of CEOs. How such a system might work in practice remains unclear. “Building a community-based governance system,” Tools for Humanity says in a 2023 white paper, “represents perhaps the most formidable challenge of the entire project.”

Altman has a pattern of making idealistic promises that shift over time. He founded OpenAI as a nonprofit in 2015, with a mission to develop AGI safely and for the benefit of all humanity. To raise money, OpenAI restructured itself as a for-profit company in 2019, but with overall control still in the hands of its nonprofit board. Last year, Altman proposed yet another restructure—one which would dilute the board’s control and allow more profits to flow to shareholders. Why, I ask, should the public trust Tools for Humanity’s commitment to freely surrender influence and power? “I think you will just see the continued decentralization via the protocol,” he says. “The value here is going to live in the network, and the network will be owned and governed by a lot of people.”

Altman talks less about universal basic income these days. He recently mused about an alternative, which he called “universal basic compute.” Instead of AI companies redistributing their profits, he seemed to suggest, they could instead give everyone in the world fair access to super-powerful AI. Blania tells me he recently “made the decision to stop talking” about UBI at Tools for Humanity. “UBI is one potential answer,” he says. “Just giving [people] access to the latest [AI] models and having them learn faster and better is another.” Says Altman: “I still don’t know what the right answer is. I believe we should do a better job of distribution of resources than we currently do.” When I probe the question of why people should trust him, Altman gets irritated. “I understand that you hate AI, and that’s fine,” he says. “If you want to frame it as the downside of AI is that there’s going to be a proliferation of very convincing AI systems that are pretending to be human, and we need ways to know what is really human-authorized versus not, then yeah, I think you can call that a downside of AI. It’s not how I would naturally frame it.” The phrase human-authorized hints at a tension between World ID and OpenAI’s plans for AI agents. An Internet where a World ID is required to access most services might impede the usefulness of the agents that OpenAI and others are developing.
So Tools for Humanity is building a system that would allow users to delegate their World ID to an agent, allowing the bot to take actions online on their behalf, according to Tiago Sada, the company’s chief product officer. “We’ve built everything in a way that can be very easily delegatable to an agent,” Sada says. It’s a measure that would allow humans to be held accountable for the actions of their AIs. But it suggests that Tools for Humanity’s mission may be shifting beyond simply proving humanity, and toward becoming the infrastructure that enables AI agents to proliferate with human authorization. World ID doesn’t tell you whether a piece of content is AI-generated or human-generated; all it tells you is whether the account that posted it is a human or a bot. Even in a world where everybody had a World ID, our online spaces might still be filled with AI-generated text, images, and videos.

As I say goodbye to Altman, I’m left feeling conflicted about his project. If the Internet is going to be transformed by AI agents, then some kind of proof-of-humanity system will almost certainly be necessary. Yet if the Orb becomes a piece of Internet infrastructure, it could give Altman—a beneficiary of the proliferation of AI content—significant influence over a leading defense mechanism against it. People might have no choice but to participate in the network in order to access social media or online services.

I thought of an encounter I witnessed in Seoul. In the room above the restaurant, Cho Jeong-yeon, 75, watched her friend get verified by an Orb. Cho had been invited to do the same, but demurred. The reward wasn’t enough for her to surrender a part of her identity. “Your iris is uniquely yours, and we don’t really know how it might be used,” she says. “Seeing the machine made me think: are we becoming machines instead of humans now? Everything is changing, and we don’t know how it’ll all turn out.”

—With reporting by Stephen Kim/Seoul. This story was supported by Tarbell Grants.

Correction, May 30: The original version of this story misstated the market capitalization of Worldcoin if all coins were in circulation. It is $12 billion, not $1.2 billion.
  • Meta and Yandex Spying on Android Users Through Localhost Ports: The Dying State of Online Privacy


    Published: June 4, 2025

    Key Takeaways

    Meta and Yandex have been caught secretly listening on localhost ports and using them to transfer sensitive data off Android devices.
    The corporations use Meta Pixel and Yandex Metrica scripts to transfer cookies from browsers to local apps. Using incognito mode or a VPN can’t fully protect users against it.
    A Meta spokesperson has called this a ‘miscommunication,’ which seems to be an attempt to underplay the situation.

    Wake up, Android folks! A new privacy scandal has hit your area of town. According to a new report from researchers led by Radboud University, Meta and Yandex have been listening on localhost ports to link your web browsing data with your identity and collect personal information without your consent.
    The companies use the Meta Pixel and Yandex Metrica scripts, which are embedded on 5.8 million and 3 million websites, respectively, to connect with their native apps on Android devices through localhost sockets.
    This creates a communication path between the cookies in your browser and the locally installed apps, establishing a channel for transferring personal information from your device.
    And if you think your browser’s incognito mode or a VPN will protect you, think again. Zuckerberg’s latest data-harvesting method can’t be defeated by tweaking privacy or cookie settings, browsing privately, or routing your traffic through a VPN.
    How Does It Work?
    Here’s the method used by Meta to spy on Android devices:

    As many as 22% of the top 1 million websites contain Meta Pixel – a tracking code that helps website owners measure ad performance and track user behaviour.
    When Meta Pixel loads, it creates a special cookie called _fbp, which is supposed to be a first-party cookie. This means no other third party, including Meta apps themselves, should have access to this cookie. The _fbp cookie identifies your browser whenever you visit a website, meaning it can identify which person is accessing which websites.
    However, Meta, being Meta, went and found a loophole around this. Now, whenever you run Facebook or Instagram on your Android device, they can open up listening ports, specifically a TCP port (12387 or 12388) and a UDP port (the first unoccupied port in 12580-12585), on your phone in the background. 
    Whenever you load a website in your browser, the Meta Pixel uses WebRTC with SDP munging, a trick that hides the _fbp cookie value inside the SDP message before it is transmitted to your phone’s localhost. 
    Since Facebook and Instagram are already listening on these ports, the apps receive the _fbp cookie value and can easily tie your identity to the website you’re visiting. Remember, Facebook and Instagram already have your identification details, since you’re always logged in on these platforms.

    The report also says that Meta can link all _fbp received from various websites to your ID. Simply put, Meta knows which person is viewing what set of websites.
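    To make the channel in the steps above concrete, here is a minimal, self-contained sketch of the general idea: one process listens on a loopback port (standing in for the Facebook or Instagram app), and another process that knows a browser identifier (standing in for the Meta Pixel running inside a web page) hands that identifier over via 127.0.0.1. This is a conceptual illustration only, not Meta’s actual WebRTC/SDP-munging implementation; the port number is borrowed from the report, and the cookie value is made up.

```python
# Conceptual sketch of a localhost channel between two processes on one device.
# It is NOT Meta's code; it only shows how a value known to a web context can
# end up in a native app that is quietly listening on 127.0.0.1.
import json
import socket
import threading

PORT = 12387  # one of the TCP ports the report says the Meta apps listened on

def tracking_script_sender() -> None:
    """Stands in for the in-page script that knows the _fbp cookie value."""
    with socket.create_connection(("127.0.0.1", PORT)) as cli:
        cli.sendall(json.dumps({"_fbp": "fb.1.1717500000000.123456789"}).encode())

if __name__ == "__main__":
    # Stand-in for a native app quietly listening on the loopback interface.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", PORT))  # loopback only: invisible to the outside network
        srv.listen(1)
        threading.Thread(target=tracking_script_sender).start()
        conn, _ = srv.accept()
        with conn:
            payload = json.loads(conn.recv(4096).decode())
            # The app already knows who is logged in, so pairing that identity
            # with the browser cookie de-anonymizes the web session.
            print("app received browser cookie:", payload["_fbp"])
```

    Because this hand-off never leaves the loopback interface, a VPN tunnel or incognito window simply never sees it, which is why the usual privacy settings offer no protection here.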
    Yandex also uses a similar method to harvest your personal data.

    Whenever you open a Yandex app, such as Yandex Maps, Yandex Browser, Yandex Search, or Navigator, it opens up ports like 29009, 30102, 29010, and 30103 on your phone. 
    When you visit a website that contains the Yandex Metrica Script, Yandex’s version of Meta Pixel, the script sends requests to Yandex servers containing obfuscated parameters. 
    These parameters are then sent to localhost over HTTP and HTTPS, addressed either to the IP address 127.0.0.1 directly or to the yandexmetrica.com domain, which quietly resolves to 127.0.0.1.
    The Yandex Metrica SDK inside the Yandex apps receives these parameters and responds with device identifiers, such as the Android Advertising ID, UUIDs, or device fingerprints. This entire message is encrypted to hide what it contains.
    The Yandex Metrica Script receives this info and sends it back to the Yandex servers. Just like Meta, Yandex can also tie your website activity to the device information shared by the SDK.
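    The Yandex flow described above can be pictured with a similar hedged sketch: a throwaway local HTTP handler plays the part of the Metrica SDK listening inside a Yandex app and answers with a device identifier, while a plain HTTP request plays the part of the web script calling 127.0.0.1. Only the port number comes from the report; the path, payload, and advertising ID are invented for illustration.

```python
# Conceptual sketch of the HTTP variant: a local server inside an app hands
# device identifiers back to a web script that calls 127.0.0.1.
# Illustrative only -- this is not Yandex's SDK.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

PORT = 29009  # one of the local ports the report attributes to Yandex apps

class MetricaLikeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and discard the "obfuscated parameters" sent by the page script.
        _ = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # Stand-in for returning an advertising ID / device fingerprint.
        body = json.dumps({"advertising_id": "38400000-8cf0-11bd-b23e-10b96e40000d"})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):
        pass  # keep the demo quiet

if __name__ == "__main__":
    # The server binds immediately, standing in for the SDK inside the app.
    server = HTTPServer(("127.0.0.1", PORT), MetricaLikeHandler)
    worker = threading.Thread(target=server.handle_request)  # serve one request
    worker.start()
    # The "web script" posts its parameters to localhost and reads the reply.
    req = Request(f"http://127.0.0.1:{PORT}/", data=b"obfuscated-web-params",
                  headers={"Content-Type": "application/octet-stream"})
    with urlopen(req) as resp:
        print("script received from app:", resp.read().decode())
    worker.join()
    server.server_close()
```

    Once the page script holds that identifier, forwarding it to a remote server alongside the page URL is ordinary web traffic, which is the final step the report describes.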

    Meta’s Infamous History with Privacy Norms
    This is not something new or unthinkable that Meta has done. The Mark Zuckerberg-led social media giant has a history of such privacy violations. 
    For instance, in 2024, the company was accused of collecting biometric data from Texas users without their express consent. The company settled the lawsuit by paying $1.4B. 
    One of the most famous cases was the Cambridge Analytica scandal, revealed in 2018, in which a political consulting firm accessed the private data of 87 million Facebook users without consent. The FTC fined Meta $5B for privacy violations, alongside a $100M settlement with the US Securities and Exchange Commission. 
    Meta Pixel has also come under scrutiny before, when it was accused of collecting sensitive health information from hospital websites. In another case dating back to 2012, Meta was accused of tracking users even after they logged out of their Facebook accounts. In that case, Meta paid $90M and promised to delete the collected data. 
    In 2024, South Korea also fined Meta $15M for inappropriately collecting personal data, such as sexual orientation and political beliefs, of 980K users.
    In September 2024, Meta was fined $101.6M by the Irish Data Protection Commission for inadvertently storing user passwords in plain text in such a way that employees could search for them. The passwords were not encrypted and were essentially leaked internally.
    So, the latest scandal isn’t entirely out of character for Meta. It has been finding ways to collect your data ever since its incorporation, and it seems like it will continue to do so, regardless of the regulations and safeguards in place.
    That said, Meta’s recent tracking method is insanely dangerous because there’s no safeguard around it. Even if you visit websites in incognito mode or use a VPN, Meta Pixel can still track your activities. 
    The past lawsuits also show a very identifiable pattern: Meta doesn’t fight a lawsuit until the end to try to win it. It either accepts the fine or settles the lawsuit with monetary compensation. This essentially goes to show that it passively accepts and even ‘owns’ the illegitimate tracking methods it has been using for decades. It’s quite possible that the top management views these fines and penalties as a cost of collecting data.
    Meta’s Timid Response
    Meta’s response claims that there’s some ‘miscommunication’ regarding Google policies. However, the method used in the aforementioned tracking scandal isn’t something that can simply happen due to ‘faulty design’ or miscommunication. 

    We are in discussions with Google to address a potential miscommunication regarding the application of their policies – Meta Spokesperson

    This kind of unethical tracking method has to be deliberately designed by engineers for it to work perfectly on such a large scale. While Meta is still trying to underplay the situation, it has paused the ‘feature’ (yep, that’s what they are calling it) for now. The report also notes that, as of June 3, Facebook and Instagram are no longer actively listening on the ports in question.
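    If you want to sanity-check that last claim on a device you control, the crude test below simply tries to connect to the TCP ports named in the report on the loopback interface; a refused connection means nothing is listening. It is a generic Python sketch (to say anything about a phone it would have to run on the phone itself, for example in a Termux session), and it cannot probe the UDP ports mentioned earlier.

```python
# Crude check: is anything accepting TCP connections on the loopback ports
# that the report associates with the Meta and Yandex apps?
import socket

REPORTED_TCP_PORTS = [12387, 12388, 29009, 29010, 30102, 30103]

def is_listening(port: int, timeout: float = 0.25) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex(("127.0.0.1", port)) == 0  # 0 means the connect succeeded

if __name__ == "__main__":
    for port in REPORTED_TCP_PORTS:
        state = "OPEN (something is listening)" if is_listening(port) else "closed"
        print(f"127.0.0.1:{port} -> {state}")
```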
    Here’s what will possibly happen next:

    A lawsuit may be filed based on the report.
    An investigating committee might be formed to question the matter.
    The company will come up with lame excuses, such as misinterpretation or miscommunication of policy guidelines.
    Meta will eventually settle the lawsuit or bear the fine with pride, like it has always done. 

    The regulatory authorities are, in effect, chasing a rat that finds new holes to hide in every day. Companies like Meta and Yandex seem to stay one step ahead of these regulations and have mastered the art of finding loopholes.
    More than legislative technicalities, it’s the moral ethics of these companies that incidents like this lay bare. The intent of the regulations is to protect personal information, and the fact that Meta and Yandex blatantly circumvent them in spirit shows the grim state of surveillance capitalism these corporations operate in.

    Krishi is a seasoned tech journalist with over four years of experience writing about PC hardware, consumer technology, and artificial intelligence.  Clarity and accessibility are at the core of Krishi’s writing style.
    He believes technology writing should empower readers—not confuse them—and he’s committed to ensuring his content is always easy to understand without sacrificing accuracy or depth.
    Over the years, Krishi has contributed to some of the most reputable names in the industry, including Techopedia, TechRadar, and Tom’s Guide. A man of many talents, Krishi has also proven his mettle as a crypto writer, tackling complex topics with both ease and zeal. His work spans various formats—from in-depth explainers and news coverage to feature pieces and buying guides. 
    Behind the scenes, Krishi operates from a dual-monitor setup (including a 29-inch LG UltraWide) that’s always buzzing with news feeds, technical documentation, and research notes, as well as the occasional gaming session that keeps him fresh. 
    Krishi thrives on staying current, always ready to dive into the latest announcements, industry shifts, and their far-reaching impacts.  When he's not deep into research on the latest PC hardware news, Krishi would love to chat with you about day trading and the financial markets—oh! And cricket, as well.

    TECHREPORT.COM
    Meta and Yandex Spying on Android Users Through Localhost Ports: The Dying State of Online Privacy
    Home Meta and Yandex Spying on Android Users Through Localhost Ports: The Dying State of Online Privacy News Meta and Yandex Spying on Android Users Through Localhost Ports: The Dying State of Online Privacy 7 min read Published: June 4, 2025 Key Takeaways Meta and Yandex have been found guilty of secretly listening to localhost ports and using them to transfer sensitive data from Android devices. The corporations use Meta Pixel and Yandex Metrica scripts to transfer cookies from browsers to local apps. Using incognito mode or a VPN can’t fully protect users against it. A Meta spokesperson has called this a ‘miscommunication,’ which seems to be an attempt to underplay the situation. Wake up, Android folks! A new privacy scandal has hit your area of town. According to a new report led by Radboud University, Meta and Yandex have been listening to localhost ports to link your web browsing data with your identity and collect personal information without your consent. The companies use Meta Pixel and the Yandex Metrica scripts, which are embedded on 5.8 million and 3 million websites, respectively, to connect with their native apps on Android devices through localhost sockets. This creates a communication path between the cookies on your website and the local apps, establishing a channel for transferring personal information from your device. Also, you are mistaken if you think using your browser’s incognito mode or a VPN can protect you. Zuckerberg’s latest method of data harvesting can’t be overcome by tweaking any privacy or cookie settings or by using a VPN or incognito mode. How Does It Work? Here’s the method used by Meta to spy on Android devices: As many as 22% of the top 1 million websites contain Meta Pixel – a tracking code that helps website owners measure ad performance and track user behaviour. When Meta Pixel loads, it creates a special cookie called _fbp, which is supposed to be a first-party cookie. This means no other third party, including Meta apps themselves, should have access to this cookie. The _fbp cookie identifies your browser whenever you visit a website, meaning it can identify which person is accessing which websites. However, Meta, being Meta, went and found a loophole around this. Now, whenever you run Facebook or Instagram on your Android device, they can open up listening ports, specifically a TCP port (12387 or 12388) and a UDP port (the first unoccupied port in 12580-12585), on your phone in the background.  Whenever you load a website on your browser, the Meta Pixel uses WebRTC with SDP Munging, which essentially hides the _fbp cookie value inside the SDP message before being transmitted to your phone’s localhost.  Since Facebook and Instagram are already listening to this port, it receives the _fbp cookie value and can easily tie your identity to the website you’re visiting. Remember, Facebook and Instagram already have your identification details since you’re always logged in on these platforms. The report also says that Meta can link all _fbp received from various websites to your ID. Simply put, Meta knows which person is viewing what set of websites. Yandex also uses a similar method to harvest your personal data. Whenever you open a Yandex app, such as Yandex Maps, Yandex Browser, Yandex Search, or Navigator, it opens up ports like 29009, 30102, 29010, and 30103 on your phone.  
Yandex uses a similar method to harvest your personal data:

1. Whenever you open a Yandex app, such as Yandex Maps, Yandex Browser, Yandex Search, or Navigator, it opens ports such as 29009, 29010, 30102, and 30103 on your phone.
2. When you visit a website that contains the Yandex Metrica script, Yandex's counterpart to Meta Pixel, the script sends requests to Yandex servers containing obfuscated parameters.
3. These parameters are then sent to localhost over HTTP and HTTPS, either at the address 127.0.0.1 directly or via the yandexmetrica.com domain, which quietly resolves to 127.0.0.1.
4. The Yandex Metrica SDK inside the Yandex apps receives these parameters and replies with device identifiers, such as the Android Advertising ID, UUIDs, or device fingerprints. The reply is encrypted to hide what it contains.
5. The Yandex Metrica script receives this information and sends it back to Yandex's servers.

Just like Meta, Yandex can then tie your website activity to the device information shared by the SDK.
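Here, as an illustration of the SDK side of that round trip, is a minimal sketch of a localhost HTTP listener that answers a page script's request with device identifiers. The port number is one of those the report attributes to Yandex's apps; the endpoint, the JSON payload, and the lack of encryption are simplifying assumptions for the sketch, not Yandex's actual implementation.

```python
# Conceptual sketch only, NOT Yandex's code. It models the app-side SDK
# described above: an HTTP listener on a reported localhost port that
# answers a page script's request with device identifiers.
import json
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

LOCALHOST = "127.0.0.1"
HTTP_PORT = 29010  # one of the localhost ports the report attributes to Yandex apps

class MetricaLikeHandler(BaseHTTPRequestHandler):
    def do_GET(self) -> None:
        # Placeholder identifiers standing in for the Android Advertising ID /
        # UUIDs the report mentions (randomly generated here; the real reply
        # is reportedly encrypted, which this sketch skips).
        identifiers = {
            "advertising_id": str(uuid.uuid4()),
            "device_uuid": str(uuid.uuid4()),
        }
        body = json.dumps(identifiers).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        # Permissive CORS header so a third-party page script can read the reply.
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer((LOCALHOST, HTTP_PORT), MetricaLikeHandler).serve_forever()
```

In the flow the report describes, the page script would read a reply like this from 127.0.0.1 (or from yandexmetrica.com, which reportedly resolves to the same address) and forward it to Yandex's servers, closing the loop between the site you visited and your device's identifiers.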
Meta's Infamous History with Privacy Norms

This is not something new or unthinkable for Meta. The Mark Zuckerberg-led social media giant has a long history of privacy violations.

For instance, in 2024, the company was accused of collecting biometric data from Texas users without their express consent. It settled that lawsuit for $1.4B.

One of its most famous cases was the Cambridge Analytica scandal in 2018, in which a political consulting firm accessed the private data of 87 million Facebook users without consent. The FTC fined Meta $5B for privacy violations, alongside a $100M settlement with the US Securities and Exchange Commission.

Meta Pixel has also come under scrutiny before, when it was accused of collecting sensitive health information from hospital websites. In another case dating back to 2012, Meta was accused of tracking users even after they logged out of their Facebook accounts; it paid $90M and promised to delete the collected data.

In 2024, South Korea also fined Meta $15M for inappropriately collecting personal data, such as sexual orientation and political beliefs, of 980K users. In September 2024, the Irish Data Protection Commission fined Meta $101.6M for storing user passwords in plain text in a way that let employees search for them; the passwords were never encrypted and were essentially leaked internally.

So, the latest scandal isn't out of character for Meta. It has been finding ways to collect your data ever since its incorporation, and it seems it will continue to do so, regardless of the regulations and safeguards in place. That said, this latest tracking method is especially dangerous because there's no real safeguard against it. Even if you visit websites in incognito mode or use a VPN, Meta Pixel can still track your activity.

The past lawsuits also show a clear pattern: Meta doesn't fight a lawsuit to the end to try to win it. It either accepts the fine or settles for monetary compensation. That passive acceptance suggests it effectively 'owns' the illegitimate tracking methods it has used for the better part of two decades, and it's quite possible that top management views these fines and penalties simply as a cost of collecting data.

Meta's Timid Response

Meta's response claims there's been some 'miscommunication' regarding Google's policies. But the method used in this tracking scandal isn't something that simply happens through 'faulty design' or miscommunication.

We are in discussions with Google to address a potential miscommunication regarding the application of their policies – Meta spokesperson

This kind of tracking has to be deliberately designed by engineers to work reliably at such a scale. While Meta is still trying to underplay the situation, it has paused the 'feature' (yep, that's what they're calling it) for now. The report also notes that, as of June 3, Facebook and Instagram are no longer actively listening on these ports.

Here's what will probably happen next:

- A lawsuit may be filed based on the report.
- An investigative committee might be formed to look into the matter.
- The company will offer lame excuses, such as misinterpretation or miscommunication of policy guidelines.
- Meta will eventually settle the lawsuit or bear the fine with pride, as it always has.

Regulators are effectively chasing a rat that finds a new hole to hide in every day. Companies like Meta and Yandex stay one step ahead of these regulations and have mastered the art of finding loopholes. More than legislative technicalities, it's the companies' ethics that incidents like this lay bare. The intent of these regulations is to protect personal information, and the fact that Meta and Yandex blatantly circumvent their spirit shows just how grim the state of corporate data capitalism has become.
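As a purely illustrative footnote to the claim that the apps have stopped listening: on a device where you can run Python locally (for example in a terminal emulator app), a script along these lines could check whether anything is currently bound to the localhost ports named in the report. The port list is taken from the article; everything else is a generic probe, not a tool from the researchers, and it has to run on the phone itself because the ports are bound to 127.0.0.1.

```python
# Conceptual check, not a tool from the report: see whether anything on this
# machine is currently using the localhost ports the researchers attribute
# to Meta's and Yandex's apps.
import socket

LOCALHOST = "127.0.0.1"
# The Yandex ports carry HTTP(S) in the described flow, so they are treated as TCP here.
TCP_PORTS = [12387, 12388, 29009, 29010, 30102, 30103]
UDP_PORTS = list(range(12580, 12586))  # the 12580-12585 range reported for Meta's apps

def tcp_listening(port: int) -> bool:
    """A successful connect means some process is accepting on this port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        return sock.connect_ex((LOCALHOST, port)) == 0

def udp_in_use(port: int) -> bool:
    """If we cannot bind the UDP port ourselves, another process holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        try:
            sock.bind((LOCALHOST, port))
            return False
        except OSError:
            return True

if __name__ == "__main__":
    for port in TCP_PORTS:
        print(f"tcp/{port}: {'open' if tcp_listening(port) else 'closed'}")
    for port in UDP_PORTS:
        print(f"udp/{port}: {'in use' if udp_in_use(port) else 'free'}")
```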
CGShares https://cgshares.com