• Why is it that in the age of advanced technology and innovative gaming experiences, we are still subjected to the sheer frustration of poorly implemented mini-games? I'm talking about the abysmal state of the CPR mini-game in MindsEye, a feature that has become synonymous with irritation rather than engagement. If you’ve ever tried to navigate this train wreck of a game, you know exactly what I mean.

    Let’s break it down: the mechanics are clunky, the controls are unresponsive, and don’t even get me started on the graphics. It’s 2025; we should expect seamless integration and fluid gameplay. Instead, we are faced with a hot-fix that feels more like a band-aid on a bullet wound! How is it acceptable that players have to endure such a frustrating experience, waiting for a fix to a problem that should never have existed in the first place?

    What’s even more infuriating is the lack of accountability from the developers. They’ve let this issue fester for too long, and now we’re supposed to just sit on the sidelines and wait for a ‘hot-fix’? How about some transparency? How about acknowledging that you dropped the ball on this one? Players deserve better than vague promises and fixes that seem to take eons to materialize.

    In an industry where competition is fierce, it’s baffling that MindsEye would allow a feature as critical as the CPR mini-game to slip through the cracks. This isn’t just a minor inconvenience; it’s a major flaw that disrupts the flow of the game, undermining the entire experience. Players are losing interest, and rightfully so! Why invest time and energy into something that’s clearly half-baked?

    And let’s talk about the community feedback. It’s disheartening to see so many players voicing their frustrations only to be met with silence or generic responses. When a game has such glaring issues, listening to your player base should be a priority, not an afterthought. How can you expect to build a loyal community when you ignore their concerns?

    At this point, it’s clear that MindsEye needs to step up its game. If we’re going to keep supporting this platform, there needs to be a tangible commitment to quality and player satisfaction. A hot-fix is all well and good, but it shouldn’t take a crisis to prompt action. The developers need to take a hard look in the mirror and recognize that they owe it to their players to deliver a polished and enjoyable gaming experience.

    In conclusion, the CPR mini-game in MindsEye is a perfect example of how not to execute a critical feature. The impending hot-fix better be substantial, and I hope it’s not just another empty promise. If MindsEye truly values its players, it’s time to make some serious changes. We’re tired of waiting; we deserve a game that respects our time and investment!

    #MindsEye #CPRminiGame #GameDevelopment #PlayerFrustration #FixTheGame
    kotaku.com
  • Graduate Student Develops an A.I.-Based Approach to Restore Time-Damaged Artwork to Its Former Glory

    The method could help bring countless old paintings, currently stored in the back rooms of galleries with limited conservation budgets, to light

    Scans of the painting retouched with a new technique during various stages in the process. On the right is the restored painting with the applied laminate mask.
    Courtesy of the researchers via MIT

    In a contest for jobs requiring the most patience, art restoration might take first place. Traditionally, conservators restore paintings by recreating the artwork’s exact colors to fill in the damage, one spot at a time. Even with the help of X-ray imaging and pigment analyses, several parts of the expensive process, such as the cleaning and retouching, are done by hand, as noted by Artnet’s Jo Lawson-Tancred.
    Now, a mechanical engineering graduate student at MIT has developed an artificial intelligence-based approach that can achieve a faithful restoration in just hours—instead of months of work.
    In a paper published Wednesday in the journal Nature, Alex Kachkine describes a new method that applies digital restorations to paintings by placing a thin film on top. If the approach becomes widespread, it could make art restoration more accessible and help bring countless damaged paintings, currently stored in the back rooms of galleries with limited conservation budgets, back to light.
    The new technique “is a restoration process that saves a lot of time and money, while also being reversible, which some people feel is really important to preserving the underlying character of a piece,” Kachkine tells Nature’s Amanda Heidt.

    Meet the engineer who invented an AI-powered way to restore art

    While filling in damaged areas of a painting would seem like a logical solution to many people, direct retouching raises ethical concerns for modern conservators. That’s because an artwork’s damage is part of its history, and retouching might detract from the painter’s original vision. “For example, instead of removing flaking paint and retouching the painting, a conservator might try to fix the loose paint particles to their original places,” writes Hartmut Kutzke, a chemist at the University of Oslo’s Museum of Cultural History, for Nature News and Views. If retouching is absolutely necessary, he adds, it should be reversible.
    As such, some institutions have started restoring artwork virtually and presenting the restoration next to the untouched, physical version. Many art lovers might argue, however, that a digital restoration printed out or displayed on a screen doesn’t quite compare to seeing the original painting in its full glory.
    That’s where Kachkine, who is also an art collector and amateur conservator, comes in. The MIT student has developed a way to apply digital restorations onto a damaged painting. In short, the approach involves using pre-existing A.I. tools to create a digital version of what the freshly painted artwork would have looked like. Based on this reconstruction, Kachkine’s new software assembles a map of the retouches, and their exact colors, necessary to fill the gaps present in the painting today.
    The map is then printed onto two layers of thin, transparent polymer film—one with colored retouches and one with the same pattern in white—that attach to the painting with conventional varnish. This “mask” aligns the retouches with the gaps while leaving the rest of the artwork visible.
    “In order to fully reproduce color, you need both white and color ink to get the full spectrum,” Kachkine explains in an MIT statement. “If those two layers are misaligned, that’s very easy to see. So, I also developed a few computational tools, based on what we know of human color perception, to determine how small of a region we can practically align and restore.”
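    The pipeline described above (compare the damaged scan with the A.I. reconstruction, keep only the damaged regions, and split the result into a color layer plus a matching white under-layer for printing) can be sketched in a few lines. The snippet below is an illustrative approximation rather than the study’s actual software; the file names, the Pillow/NumPy tooling, and the difference threshold are assumptions.

```python
# Rough sketch (not Kachkine's actual code) of the two-layer retouch-map idea:
# compare a damaged scan against an AI reconstruction, keep only the damaged
# pixels, and emit a colour layer plus a matching white under-layer for printing.
# Assumes both scans are the same size and already registered (aligned).
import numpy as np
from PIL import Image

THRESHOLD = 30  # per-channel difference (0-255) treated as "damage"; assumed value

damaged = np.asarray(Image.open("damaged_scan.png").convert("RGB"), dtype=np.int16)
restored = np.asarray(Image.open("ai_reconstruction.png").convert("RGB"), dtype=np.int16)

# Damage mask: pixels where the reconstruction differs noticeably from the scan.
damage_mask = np.abs(restored - damaged).max(axis=2) > THRESHOLD

# Colour layer: reconstruction colours only where damage was detected, transparent elsewhere.
rgba = np.zeros((*damage_mask.shape, 4), dtype=np.uint8)
rgba[..., :3] = restored.astype(np.uint8)
rgba[..., 3] = damage_mask * 255

# White under-layer: the same pattern printed in white, so colours render on clear film.
white = np.zeros_like(rgba)
white[..., :3] = 255
white[..., 3] = damage_mask * 255

Image.fromarray(rgba, mode="RGBA").save("retouch_color_layer.png")
Image.fromarray(white, mode="RGBA").save("retouch_white_layer.png")
print(f"{int(damage_mask.sum())} damaged pixels mapped for retouching")
```

    The real system also has to handle registration, varnish interaction, and the perceptual alignment limits Kachkine mentions, none of which this toy example attempts.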
    The method’s magic lies in the fact that the mask is removable, and the digital file provides a record of the modifications for future conservators to study.
    Kachkine demonstrated the approach on a 15th-century oil painting in dire need of restoration, by a Dutch artist whose name is now unknown. The retouches were generated by matching the surrounding color, replicating similar patterns visible elsewhere in the painting or copying the artist’s style in other paintings, per Nature News and Views. Overall, the painting’s 5,612 damaged regions were filled with 57,314 different colors in 3.5 hours—66 hours faster than traditional methods would have likely taken.

    Overview of Physically-Applied Digital Restoration

    “It followed years of effort to try to get the method working,” Kachkine tells the Guardian’s Ian Sample. “There was a fair bit of relief that finally this method was able to reconstruct and stitch together the surviving parts of the painting.”
    The new process still poses ethical considerations, such as whether the applied film disrupts the viewing experience or whether A.I.-generated corrections to the painting are accurate. Additionally, Kutzke writes for Nature News and Views that the effect of the varnish on the painting should be studied more deeply.
    Still, Kachkine says this technique could help address the large number of damaged artworks that live in storage rooms. “This approach grants greatly increased foresight and flexibility to conservators,” per the study, “enabling the restoration of countless damaged paintings deemed unworthy of high conservation budgets.”

    #graduate #student #develops #aibased #approach
    www.smithsonianmag.com
  • For June’s Patch Tuesday, 68 fixes — and two zero-day flaws

    Microsoft offered up a fairly light Patch Tuesday release this month, with 68 patches to Microsoft Windows and Microsoft Office. There were no updates for Exchange or SQL Server and just two minor patches for Microsoft Edge. That said, two zero-day vulnerabilities (CVE-2025-33073 and CVE-2025-33053) have led to a “Patch Now” recommendation for both Windows and Office. (Developers can follow their usual release cadence with updates to Microsoft .NET and Visual Studio.)

    To help navigate these changes, the team from Readiness has provided a useful infographic detailing the risks involved when deploying the latest updates. (More information about recent Patch Tuesday releases is available here.)

    Known issues

    Microsoft released a limited number of known issues for June, with a product-focused issue and a very minor display concern:

    Microsoft Excel: This is a rare product-level entry in the “known issues” category — an advisory that square brackets ([ ]) are not supported in Excel filenames. An error is generated, advising the user to remove the offending characters.

    Windows 10: There are reports of blurry or unclear CJK (Chinese, Japanese, Korean) text when displayed at 96 DPI (100% scaling) in Chromium-based browsers such as Microsoft Edge and Google Chrome. This is a limited resource issue, as the font resolution in Windows 10 does not fully match the high-level resolution of the Noto font. Microsoft recommends changing the display scaling to 125% or 150% to improve clarity.

    Major revisions and mitigations

    Microsoft might have won an award for the shortest time between releasing an update and a revision with:

    CVE-2025-33073: Windows SMB Client Elevation of Privilege. Microsoft worked to address a vulnerability where improper access control in Windows SMB allows an attacker to elevate privileges over a network. This patch was revised on the same day as its initial release (and has been revised again for documentation purposes).

    Windows lifecycle and enforcement updates

    Microsoft did not release any enforcement updates for June.

    Each month, the Readiness team analyzes Microsoft’s latest updates and provides technically sound, actionable testing plans. While June’s release includes no stated functional changes, many foundational components across authentication, storage, networking, and user experience have been updated.

    For this testing guide, we grouped Microsoft’s updates by Windows feature and then accompanied the section with prescriptive test actions and rationale to help prioritize enterprise efforts.

    Core OS and UI compatibility

    Microsoft updated several core kernel drivers affecting Windows as a whole. This is a low-level system change and carries a high risk of compatibility and system issues. In addition, core Microsoft print libraries have been included in the update, which warrants extra print testing on top of the following recommendations (a scripted smoke-test sketch follows the list):

    Run print operations from 32-bit applications on 64-bit Windows environments.

    Use different print drivers and configurations (e.g., local, networked).

    Observe printing from older productivity apps and virtual environments.
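
    To make the print checks repeatable across a fleet, a small script can enumerate installed printers and push a test job to each. The sketch below assumes the pywin32 package on Windows; the RAW text payload is a placeholder, and many modern drivers ignore plain-text RAW data, so substitute a representative document for real testing.

```python
# Sketch: enumerate installed printers and queue a tiny RAW test job on each.
# Assumes the pywin32 package (win32print) on Windows; payload is illustrative.
import win32print

PAYLOAD = b"June Patch Tuesday print smoke test\r\n\f"  # form feed ejects the page on text-capable printers

printers = win32print.EnumPrinters(
    win32print.PRINTER_ENUM_LOCAL | win32print.PRINTER_ENUM_CONNECTIONS, None, 1
)

for _flags, _description, name, _comment in printers:
    try:
        handle = win32print.OpenPrinter(name)
    except Exception as exc:
        print(f"FAIL {name}: cannot open printer ({exc})")
        continue
    try:
        job_id = win32print.StartDocPrinter(handle, 1, ("Patch smoke test", None, "RAW"))
        win32print.StartPagePrinter(handle)
        win32print.WritePrinter(handle, PAYLOAD)
        win32print.EndPagePrinter(handle)
        win32print.EndDocPrinter(handle)
        print(f"OK   {name}: job {job_id} queued")
    except Exception as exc:
        print(f"FAIL {name}: {exc}")
    finally:
        win32print.ClosePrinter(handle)
```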

    Remote desktop and network connectivity

    This update could affect the reliability of remote access. Broken DHCP-to-DNS integration can block device onboarding, and NAT misbehavior can disrupt VPNs or site-to-site routing configurations. We recommend the following tests be performed (a small DNS verification sketch follows the list):

    Create and reconnect Remote Desktop (RDP) sessions under varying network conditions.

    Confirm that DHCP-assigned IP addresses are correctly registered with DNS in AD-integrated environments.

    Test modifying NAT and routing settings in RRAS configurations and ensure that changes persist across reboots.
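
    For the DHCP-to-DNS check, a quick forward/reverse lookup comparison catches most registration failures. The hostnames below are placeholders for machines you expect DHCP to have registered in your AD-integrated DNS; this is a spot check, not a full DHCP audit.

```python
# Sketch: forward/reverse DNS consistency check for hosts DHCP should have registered.
# Hostnames are placeholders for your own AD-integrated environment.
import socket

HOSTS = ["client01.corp.example.com", "client02.corp.example.com"]

for host in HOSTS:
    try:
        addresses = sorted({info[4][0] for info in socket.getaddrinfo(host, None)})
    except socket.gaierror as exc:
        print(f"FAIL {host}: forward lookup failed ({exc})")
        continue
    for address in addresses:
        try:
            ptr_name, _aliases, _addrs = socket.gethostbyaddr(address)
        except socket.herror as exc:
            print(f"FAIL {host}: no PTR record for {address} ({exc})")
            continue
        # The PTR record should point back at the same machine (compare short hostnames).
        matches = ptr_name.split(".")[0].lower() == host.split(".")[0].lower()
        print(f"{'OK  ' if matches else 'WARN'} {host} -> {address} -> {ptr_name}")
```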

    Filesystem, SMB and storage

    Updates to the core Windows storage libraries affect nearly every command related to Microsoft Storage Spaces. A minor misalignment here can result in degraded clusters, orphaned volumes, or data loss in a failover scenario. These are high-priority components in modern data center and hybrid cloud infrastructure, and we recommend the following storage-related tests (a share-reachability sketch follows the list):

    Access file shares using server names, FQDNs, and IP addresses.

    Enable and validate encrypted and compressed file-share operations between clients and servers.

    Run tests that create, open, and read from system log files using various file and storage configurations.

    Validate core cluster storage management tasks, including creating and managing storage pools, tiers, and volumes.

    Test disk addition/removal, failover behaviors, and resiliency settings.

    Run system-level storage diagnostics across active and passive nodes in the cluster.
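
    The first storage item (reaching the same share by server name, FQDN, and IP address) is easy to script as a reachability smoke test. Server, share, and address values below are placeholders for your environment.

```python
# Sketch: confirm the same share is reachable via NetBIOS name, FQDN, and raw IP.
# Server, share, and address values are placeholders.
import os

PATHS = [
    r"\\FILESRV01\patchtest",                   # NetBIOS-style server name
    r"\\filesrv01.corp.example.com\patchtest",  # fully qualified domain name
    r"\\10.0.0.25\patchtest",                   # raw IP address
]

for path in PATHS:
    try:
        entries = os.listdir(path)
        print(f"OK   {path}: {len(entries)} entries visible")
    except OSError as exc:
        print(f"FAIL {path}: {exc}")
```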

    Windows installer and recovery

    Microsoft delivered another update to the Windows Installer (MSI) application infrastructure. Broken or regressed MSI package handling disrupts app deployment pipelines and puts core business applications at risk. We suggest the following tests for the latest changes to the MSI installer, Windows Recovery, and Microsoft’s Virtualization Based Security (VBS); an unattended install/repair/uninstall sketch follows the list:

    Perform installation, repair, and uninstallation of MSI Installer packages using standard enterprise deployment tools (e.g., Intune).

    Validate restore point behavior for points older than 60 days under varying virtualization-based security (VBS) settings.

    Check both client and server behaviors for allowed or blocked restores.
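
    For the MSI checks, an unattended install/repair/uninstall cycle driven by msiexec exposes regressions quickly. The package path and product code below are placeholders; in production you would normally drive this through your deployment tooling rather than a bare script run from an elevated prompt.

```python
# Sketch: unattended install / repair / uninstall cycle using standard msiexec switches.
# The package path and product code are placeholders; run from an elevated prompt.
import subprocess

MSI_PATH = r"C:\packages\ContosoApp.msi"
PRODUCT_CODE = "{12345678-90AB-CDEF-1234-567890ABCDEF}"

STEPS = [
    ("install",   ["msiexec", "/i", MSI_PATH, "/qn", "/l*v", r"C:\logs\install.log"]),
    ("repair",    ["msiexec", "/fa", PRODUCT_CODE, "/qn", "/l*v", r"C:\logs\repair.log"]),
    ("uninstall", ["msiexec", "/x", PRODUCT_CODE, "/qn", "/l*v", r"C:\logs\uninstall.log"]),
]

for step, command in STEPS:
    result = subprocess.run(command, check=False)
    # 0 = success, 3010 = success but a reboot is required; anything else needs investigation.
    ok = result.returncode in (0, 3010)
    print(f"{'OK  ' if ok else 'FAIL'} {step}: msiexec exit code {result.returncode}")
```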

    We highly recommend prioritizing printer testing this month, followed by remote desktop testing, and then confirming that your core business applications install and uninstall as expected.

    Each month, we break down the update cycle into product families (as defined by Microsoft) with the following basic groupings:

    Browsers (Microsoft IE and Edge);

    Microsoft Windows (both desktop and server);

    Microsoft Office;

    Microsoft Exchange and SQL Server; 

    Microsoft Developer Tools (Visual Studio and .NET);

    And Adobe (if you get this far).

    Browsers

    Microsoft delivered a very minor series of updates to Microsoft Edge. The browser receives two Chrome patches (CVE-2025-5068 and CVE-2025-5419), both rated important. These low-profile changes can be added to your standard release calendar.

    Microsoft Windows

    Microsoft released five critical patches and (a smaller than usual) 40 patches rated important. This month, the five critical Windows patches cover the following desktop and server vulnerabilities:

    Missing release of memory after effective lifetime in Windows Cryptographic Services (WCS) allows an unauthorized attacker to execute code over a network.

    Use after free in Windows Remote Desktop Services allows an unauthorized attacker to execute code over a network.

    Use after free in Windows KDC Proxy Service (KPSSVC) allows an unauthorized attacker to execute code over a network.

    Use of uninitialized resources in Windows Netlogon allows an unauthorized attacker to elevate privileges over a network.

    Unfortunately, CVE-2025-33073 has been reported as publicly disclosed, while CVE-2025-33053 has been reported as exploited. Given these two zero-days, the Readiness team recommends a “Patch Now” release schedule for your Windows updates.

    Microsoft Office

    Microsoft released five critical updates and a further 13 rated important for Office. The critical patches deal with memory-related and “use after free” memory allocation issues affecting the entire platform. Due to the number and severity of these issues, we recommend a “Patch Now” schedule for Office for this Patch Tuesday release.

    Microsoft Exchange and SQL Server

    There are no updates for either Microsoft Exchange or SQL Server this month. 

    Developer tools

    There were only three low-level updates (product focused and rated important) released, affecting .NET and Visual Studio. Add these updates to your standard developer release schedule.

    Adobe (and 3rd party updates)

    Adobe has released (but Microsoft has not co-published) a single update to Adobe Acrobat (APSB25-57). There were two other non-Microsoft releases affecting the Chromium platform, which were covered in the Browsers section above.
    #junes #patch #tuesday #fixes #two
    www.computerworld.com
  • Huawei Supernode 384 disrupts Nvidia’s AI market hold

    Huawei’s AI capabilities have made a breakthrough in the form of the company’s Supernode 384 architecture, marking an important moment in the global processor wars amid US-China tech tensions. The Chinese tech giant’s latest innovation emerged from last Friday’s Kunpeng Ascend Developer Conference in Shenzhen, where company executives demonstrated how the computing framework directly challenges Nvidia’s long-standing market dominance, even as the company continues to operate under severe US-led trade restrictions.

    Architectural innovation born from necessity

    Zhang Dixuan, president of Huawei’s Ascend computing business, articulated the fundamental problem driving the innovation during his conference keynote: “As the scale of parallel processing grows, cross-machine bandwidth in traditional server architectures has become a critical bottleneck for training.”

    The Supernode 384 abandons Von Neumann computing principles in favour of a peer-to-peer architecture engineered specifically for modern AI workloads. The change proves especially powerful for Mixture-of-Experts models (machine-learning systems that use multiple specialised sub-networks to solve complex computational challenges).

    Huawei’s CloudMatrix 384 implementation showcases impressive technical specifications: 384 Ascend AI processors spanning 12 computing cabinets and four bus cabinets, generating 300 petaflops of raw computational power paired with 48 terabytes of high-bandwidth memory, representing a leap in integrated AI computing infrastructure.

    Performance metrics challenge industry leaders

    Real-world benchmark testing reveals the system’s competitive positioning against established solutions. Dense AI models like Meta’s LLaMA 3 achieved 132 tokens per second per card on the Supernode 384 – 2.5 times the performance of traditional cluster architectures.

    Communications-intensive applications demonstrate even more dramatic improvements. Models from Alibaba’s Qwen and DeepSeek families reached 600 to 750 tokens per second per card, revealing the architecture’s optimisation for next-generation AI workloads.

    The performance gains stem from fundamental infrastructure redesigns. Huawei replaced conventional Ethernet interconnects with high-speed bus connections, improving communications bandwidth by 15 times while reducing single-hop latency from 2 microseconds to 200 nanoseconds – a tenfold improvement.

    Geopolitical strategy drives technical innovation

    The Supernode 384’s development cannot be divorced from broader US-China technological competition. American sanctions have systematically restricted Huawei’s access to cutting-edge semiconductor technologies, forcing the company to maximise performance within existing constraints.

    Industry analysis from SemiAnalysis suggests the CloudMatrix 384 uses Huawei’s latest Ascend 910C AI processor; the assessment acknowledges inherent performance limitations but highlights architectural advantages: “Huawei is a generation behind in chips, but its scale-up solution is arguably a generation ahead of Nvidia and AMD’s current products in the market.”

    The assessment reveals how Huawei’s AI computing strategy has evolved beyond traditional hardware specifications toward system-level optimisation and architectural innovation.

    Market implications and deployment reality

    Beyond laboratory demonstrations, Huawei has operationalised CloudMatrix 384 systems in multiple Chinese data centres in Anhui Province, Inner Mongolia, and Guizhou Province. Such practical deployments validate the architecture’s viability and establish an infrastructure framework for broader market adoption.

    The system’s scalability potential – supporting tens of thousands of linked processors – positions it as a compelling platform for training increasingly sophisticated AI models. The capability addresses growing industry demands for massive-scale AI implementation in diverse sectors.

    Industry disruption and future considerations

    Huawei’s architectural breakthrough introduces both opportunities and complications for the global AI ecosystem. While providing a viable alternative to Nvidia’s market-leading solutions, it simultaneously accelerates the fragmentation of international technology infrastructure along geopolitical lines.

    The success of Huawei’s AI computing initiatives will depend on developer ecosystem adoption and sustained performance validation. The company’s aggressive developer conference outreach indicated a recognition that technical innovation alone cannot guarantee market acceptance.

    For organisations evaluating AI infrastructure investments, the Supernode 384 represents a new option that combines competitive performance with independence from US-controlled supply chains. However, long-term viability remains contingent on continued innovation cycles and improved geopolitical stability.

    See also: Oracle plans B Nvidia chip deal for AI facility in Texas
    #huawei #supernode #disrupts #nvidias #market
    www.artificialintelligence-news.com
    Huawei’s AI capabilities have made a breakthrough in the form of the company’s Supernode 384 architecture, marking an important moment in the global processor wars amid US-China tech tensions. The Chinese tech giant’s latest innovation emerged from last Friday’s Kunpeng Ascend Developer Conference in Shenzhen, where company executives demonstrated how the computing framework directly challenges Nvidia’s long-standing market dominance, even as the company continues to operate under severe US-led trade restrictions.
    Architectural innovation born from necessity
    Zhang Dixuan, president of Huawei’s Ascend computing business, articulated the fundamental problem driving the innovation during his conference keynote: “As the scale of parallel processing grows, cross-machine bandwidth in traditional server architectures has become a critical bottleneck for training.” The Supernode 384 abandons Von Neumann computing principles in favour of a peer-to-peer architecture engineered specifically for modern AI workloads. The change proves especially powerful for Mixture-of-Experts models (machine-learning systems using multiple specialised sub-networks to solve complex computational challenges). Huawei’s CloudMatrix 384 implementation showcases impressive technical specifications: 384 Ascend AI processors spanning 12 computing cabinets and four bus cabinets, generating 300 petaflops of raw computational power paired with 48 terabytes of high-bandwidth memory, representing a leap in integrated AI computing infrastructure.
    Performance metrics challenge industry leaders
    Real-world benchmark testing reveals the system’s competitive positioning against established solutions. Dense AI models like Meta’s LLaMA 3 achieved 132 tokens per second per card on the Supernode 384 – 2.5 times the performance of traditional cluster architectures. Communications-intensive applications demonstrate even more dramatic improvements: models from Alibaba’s Qwen and DeepSeek families reached 600 to 750 tokens per second per card, revealing the architecture’s optimisation for next-generation AI workloads. The gains stem from fundamental infrastructure redesigns: Huawei replaced conventional Ethernet interconnects with high-speed bus connections, improving communications bandwidth by 15 times while reducing single-hop latency from 2 microseconds to 200 nanoseconds – a tenfold improvement.
    Geopolitical strategy drives technical innovation
    The Supernode 384’s development cannot be divorced from broader US-China technological competition. American sanctions have systematically restricted Huawei’s access to cutting-edge semiconductor technologies, forcing the company to maximise performance within existing constraints. Industry analysis from SemiAnalysis suggests the CloudMatrix 384 uses Huawei’s latest Ascend 910C AI processor; the analysis acknowledges inherent performance limitations but highlights architectural advantages: “Huawei is a generation behind in chips, but its scale-up solution is arguably a generation ahead of Nvidia and AMD’s current products in the market.” The assessment shows how Huawei’s AI computing strategy has evolved beyond traditional hardware specifications toward system-level optimisation and architectural innovation.
    Market implications and deployment reality
    Beyond laboratory demonstrations, Huawei has operationalised CloudMatrix 384 systems in multiple Chinese data centres in Anhui Province, Inner Mongolia, and Guizhou Province.
    Such practical deployments validate the architecture’s viability and establish an infrastructure framework for broader market adoption. The system’s scalability potential – supporting tens of thousands of linked processors – positions it as a compelling platform for training increasingly sophisticated AI models, and it addresses growing industry demand for massive-scale AI deployments across diverse sectors.
    Industry disruption and future considerations
    Huawei’s architectural breakthrough introduces both opportunities and complications for the global AI ecosystem. While providing a viable alternative to Nvidia’s market-leading solutions, it simultaneously accelerates the fragmentation of international technology infrastructure along geopolitical lines. The success of Huawei’s AI computing initiatives will depend on developer ecosystem adoption and sustained performance validation; the company’s aggressive developer conference outreach signals a recognition that technical innovation alone cannot guarantee market acceptance. For organisations evaluating AI infrastructure investments, the Supernode 384 represents a new option that combines competitive performance with independence from US-controlled supply chains. However, long-term viability remains contingent on continued innovation cycles and improved geopolitical stability.
  • This Detailed Map of a Human Cell Could Help Us Understand How Cancer Develops

    www.discovermagazine.com
    It’s been more than two decades since scientists finished sequencing the human genome, providing a comprehensive map of human biology that has since accelerated progress in disease research and personalized medicine. Thanks to that endeavor, we know that each of us has about 20,000 protein-coding genes, which serve as blueprints for the diverse protein molecules that give shape to our cells and keep them functioning properly. Yet we know relatively little about how those proteins are organized within cells and how they interact with each other, says Trey Ideker, a professor of medicine and bioengineering at the University of California San Diego. Without that knowledge, he says, trying to study and treat disease is “like trying to understand how to fix your car without the shop manual.”
    Mapping the Human Cell
    In a recent paper in the journal Nature, Ideker and his colleagues presented their latest attempt to fill this information gap: a fine-grained map of a human cell, showing the locations of more than 5,000 proteins and how they assemble into larger and larger structures. The researchers also created an interactive version of the map. It goes far beyond the simplified diagrams you may recall from high school biology class. Familiar objects like the nucleus appear at the highest level, but zooming in, you find the nucleoplasm, then the chromatin factors, then the transcription factor IID complex, which is home to five individual proteins better left nameless. This subcellular metropolis is unintelligible to non-specialists, but it offers a look at the extraordinary complexity within us all.
    Surprising Cell Features
    Even for specialists, there are some surprises. The team identified 275 protein assemblies, ranging in scale from large charismatic organelles like mitochondria, to smaller features like microtubules and ribosomes, down to the tiny protein complexes that constitute “the basic machinery” of the cell, as Ideker put it. “Across all that,” he says, “about half of it was known, and about half of it, believe it or not, wasn't known.” In other words, 50 percent of the structures they found “just simply don't map to anything in the cell biology textbook.”
    Multimodal Process for Cell Mapping
    They achieved this level of detail by taking a “multimodal” approach. First, to figure out which molecules interact with each other, the researchers would line a tube with a particular protein, called the “bait” protein; then they would pour a blended mixture of other proteins through the tube to see what stuck, revealing which ones were neighbors. Next, to get precise coordinates for the location of these proteins, they lit up individual molecules within a cell using glowing antibodies, the cellular defenders produced by the immune system to bind to and neutralize specific substances (often foreign invaders like viruses and bacteria, but in this case homegrown proteins). Once an antibody found its target, the illuminated protein could be visualized under a microscope and placed on the map.
    Enhancing Cancer Research
    There are many human cell types, and the one Ideker’s team chose for this study is called the U2OS cell. It’s commonly associated with pediatric bone tumors. Indeed, the researchers identified about 100 mutated proteins that are linked to this childhood cancer, enhancing our understanding of how the disease develops. Better yet, they located the assemblies those proteins belong to.
    Typically, Ideker says, cancer research focuses on individual mutations, whereas it’s often more useful to think about the larger systems that cancer disrupts. Returning to the car analogy, he notes that a vehicle’s braking system can fail in various ways: You can tamper with the pedal, the calipers, the discs, or the brake fluid, and all of these failures produce the same outcome. Similarly, cancer can cause a biological system to malfunction in various ways, and Ideker argues that comprehensive cell maps provide an effective way to study those diverse mechanisms of disease. “We've only understood the tip of the iceberg in terms of what gets mutated in cancer,” he says. “The problem is that we're not looking at the machines that actually matter, we're looking at the nuts and bolts.”
    Mapping Cells for the Future
    Beyond cancer, the researchers hope their map will serve as a model for scientists attempting to chart other kinds of cells. This map took more than three years to create, but technology and methodological improvements could speed up the process — as they did for genome sequencing throughout the late 20th century — allowing medical treatments to be tailored to a person’s unique protein profile. “We're going to have to turn Moore's law on this,” Ideker says, “to really scale it up and understand differences in cell biology […] between individuals.”
    This article is not offering medical advice and should be used for informational purposes only.
    Cody Cottier is a contributing writer at Discover who loves exploring big questions about the universe and our home planet, the nature of consciousness, the ethical implications of science and more. He holds a bachelor's degree in journalism and media production from Washington State University.
  • Reliably Detecting Third-Party Cookie Blocking In 2025

    The web is beginning to part ways with third-party cookies, a technology it once heavily relied on. Introduced in 1994 by Netscape to support features like virtual shopping carts, cookies have long been a staple of web functionality. However, concerns over privacy and security have led to a concerted effort to eliminate them. The World Wide Web Consortium Technical Architecture Group (W3C TAG) has been vocal in advocating for the complete removal of third-party cookies from the web platform.
    Major browsers (Chrome, Safari, Firefox, and Edge) are responding by phasing them out, though the transition is gradual. While this shift enhances user privacy, it also disrupts legitimate functionalities that rely on third-party cookies, such as single sign-on (SSO), fraud prevention, and embedded services. And because there is still no universal ban in place and many essential web features continue to depend on these cookies, developers must detect when third-party cookies are blocked so that applications can respond gracefully.
    Don’t Let Silent Failures Win: Why Cookie Detection Still Matters
    Yes, the ideal solution is to move away from third-party cookies altogether and redesign our integrations using privacy-first, purpose-built alternatives as soon as possible. But in reality, that migration can take months or even years, especially for legacy systems or third-party vendors. Meanwhile, users are already browsing with third-party cookies disabled and often have no idea that anything is missing.
    Imagine a travel booking platform that embeds an iframe from a third-party partner to display live train or flight schedules. This embedded service uses a cookie on its own domain to authenticate the user and personalize content, like showing saved trips or loyalty rewards. But when the browser blocks third-party cookies, the iframe cannot access that data. Instead of a seamless experience, the user sees an error, a blank screen, or a login prompt that doesn’t work.
    And while your team is still planning a long-term integration overhaul, this is already happening to real users. They don’t see a cookie policy; they just see a broken booking flow.
    Detecting third-party cookie blocking isn’t just good technical hygiene but a frontline defense for user experience.
    Why It’s Hard To Tell If Third-Party Cookies Are Blocked
    Detecting whether third-party cookies are supported isn’t as simple as calling navigator.cookieEnabled. Even a well-intentioned check like this one may look safe, but it still won’t tell you what you actually need to know:

    // DOES NOT detect third-party cookie blocking
    function areCookiesEnabled() {
      if (navigator.cookieEnabled === false) {
        return false;
      }

      try {
        document.cookie = "test_cookie=1; SameSite=None; Secure";
        const hasCookie = document.cookie.includes("test_cookie=1");
        document.cookie = "test_cookie=; Max-Age=0; SameSite=None; Secure";

        return hasCookie;
      } catch (e) {
        return false;
      }
    }

    This function only confirms that cookies work in the current (first-party) context. It says nothing about third-party scenarios, like an iframe on another domain. Worse, it’s misleading: in some browsers, navigator.cookieEnabled may still return true inside a third-party iframe even when cookies are blocked. Others might behave differently, leading to inconsistent and unreliable detection.
    These cross-browser inconsistencies — combined with the limitations of document.cookie — make it clear that there is no shortcut for detection. To truly detect third-party cookie blocking, we need to understand how different browsers actually behave in embedded third-party contexts.
    How Modern Browsers Handle Third-Party Cookies
    The behavior of modern browsers directly affects which detection methods will work and which ones silently fail.
    Safari: Full Third-Party Cookie Blocking
    Since version 13.1, Safari blocks all third-party cookies by default, with no exceptions, even if the user previously interacted with the embedded domain. This policy is part of Intelligent Tracking Prevention (ITP).
    For embedded content that requires cookie access, Safari exposes the Storage Access API, which requires a user gesture to grant storage permission. As a result, a test for third-party cookie support will nearly always fail in Safari unless the iframe explicitly requests access via this API.
    Firefox: Cookie Partitioning By Design
    Firefox’s Total Cookie Protection isolates cookies on a per-site basis. Third-party cookies can still be set and read, but they are partitioned by the top-level site, meaning a cookie set by the same third-party on siteA.com and siteB.com is stored separately and cannot be shared.
    As of Firefox 102, this behavior is enabled by default in the Standard mode of Enhanced Tracking Protection. Unlike the Strict mode — which blocks third-party cookies entirely, similar to Safari — the Standard mode does not block them outright. Instead, it neutralizes their tracking capability by isolating them per site.
    As a result, even if a test shows that a third-party cookie was successfully set, it may be useless for cross-site logins or shared sessions due to this partitioning. Detection logic needs to account for that.
    Chrome: From Deprecation Plans To Privacy Sandbox
    Chromium-based browsers still allow third-party cookies by default — but the story is changing. Starting with Chrome 80, third-party cookies must be explicitly marked with SameSite=None; Secure, or they will be rejected.
    In January 2020, Google announced their intention to phase out third-party cookies by 2022. However, the timeline was updated multiple times, first in June 2021 when the company pushed the rollout to begin in mid-2023 and conclude by the end of that year. Additional postponements followed in July 2022, December 2023, and April 2024.
    In July 2024, Google clarified that there is no plan to unilaterally deprecate third-party cookies or force users into a new model without consent. Instead, Chrome is shifting to a user-choice interface that will allow individuals to decide whether to block or allow third-party cookies globally.
    This change was influenced in part by substantial pushback from the advertising industry, as well as ongoing regulatory oversight, including scrutiny by the UK Competition and Markets Authority (CMA) into Google’s Privacy Sandbox initiative. The CMA confirmed in a 2025 update that there is no intention to force a deprecation or trigger automatic prompts for cookie blocking.
    For now, third-party cookies remain enabled by default in Chrome. The new user-facing controls and the broader Privacy Sandbox ecosystem are still in various stages of experimentation and limited rollout.
    Edge: Tracker-Focused Blocking With User Configurability
    Edge shares Chrome’s handling of third-party cookies, including the SameSite=None; Secure requirement. Additionally, Edge introduces Tracking Prevention modes: Basic, Balanced, and Strict. In Balanced mode, it blocks known third-party trackers using Microsoft’s maintained list but allows many third-party cookies that are not classified as trackers. Strict mode blocks more resource loads than Balanced, which may result in some websites not behaving as expected.
    Other Browsers: What About Them?
    Privacy-focused browsers, like Brave, block third-party cookies by default as part of their strong anti-tracking stance.
    Internet Explorer 11 allowed third-party cookies depending on user privacy settings and the presence of Platform for Privacy Preferences (P3P) headers. However, IE usage is now negligible. Notably, the default “Medium” privacy setting in IE could block third-party cookies unless a valid P3P policy was present.
    Older versions of Safari had partial third-party cookie restrictions, but, as mentioned before, this was replaced with full blocking via ITP.
    As of 2025, all major browsers either block or isolate third-party cookies by default, with the exception of Chrome, which still allows them in standard browsing mode pending the rollout of its new user-choice model.
    To account for these variations, your detection strategy must be grounded in real-world testing — specifically by reproducing a genuine third-party context such as loading your script within an iframe on a cross-origin domain — rather than relying on browser names or versions.
    Overview Of Detection Techniques
    Over the years, many techniques have been used to detect third-party cookie blocking. Most are unreliable or obsolete. Here’s a quick walkthrough of what doesn’t work and what does.
    Basic JavaScript API Checks
    As mentioned earlier, checking navigator.cookieEnabled or setting document.cookie on the main page doesn’t reflect cross-site cookie status:

    In third-party iframes, navigator.cookieEnabled often returns true even when cookies are blocked.
    Setting document.cookie in the parent doesn’t test the third-party context.

    These checks are first-party only. Avoid using them for detection.
    Storage Hacks Via localStorage
    Previously, some developers inferred cookie support by checking whether window.localStorage worked inside a third-party iframe — an approach that was especially useful against older Safari versions that blocked all third-party storage.
    Modern browsers often allow localStorage even when cookies are blocked. This leads to false positives and is no longer reliable.
    Server-Assisted Cookie Probe
    One classic method involves setting a cookie from a third-party domain via HTTP and then checking if it comes back:

    Load a script/image from a third-party server that sets a cookie.
    Immediately load another resource, and the server checks whether the cookie was sent.

    This works, but it:

    Requires custom server-side logic,
    Depends on HTTP caching, response headers, and cookie attributes, and
    Adds development and infrastructure complexity.

    While this is technically valid, it is not suitable for a front-end-only approach, which is our focus here.
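    If you do control a third-party domain and want to see the shape of this approach, here is a minimal sketch assuming a small Node/Express service; the endpoint paths and cookie name are illustrative, and the check request would additionally need CORS with credentials configured:

    const express = require("express");
    const app = express();

    // First request: try to set a cookie on the third-party domain.
    app.get("/probe/set", (req, res) => {
      res.setHeader("Set-Cookie", "probe=1; Path=/; Secure; HttpOnly; SameSite=None");
      res.setHeader("Cache-Control", "no-store");
      res.sendStatus(204);
    });

    // Second request: report whether the browser sent the cookie back.
    app.get("/probe/check", (req, res) => {
      const received = (req.headers.cookie || "").includes("probe=1");
      res.setHeader("Cache-Control", "no-store");
      res.json({ thirdPartyCookiesEnabled: received });
    });

    app.listen(3000);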
    Storage Access API
    The document.hasStorageAccess() method allows embedded third-party content to check whether it has access to unpartitioned cookies:

    Chrome: Supports hasStorageAccess() and requestStorageAccess() starting from version 119. Additionally, hasUnpartitionedCookieAccess() is available as an alias for hasStorageAccess() from version 125 onwards.
    Firefox: Supports both the hasStorageAccess() and requestStorageAccess() methods.
    Safari: Supports the Storage Access API. However, access must always be triggered by a user interaction. For example, even calling requestStorageAccess() without a direct user gesture is ignored.

    In Chrome and Firefox, the API may also work automatically, based on browser heuristics or site engagement, rather than requiring an explicit prompt.
    This API is particularly useful for detecting scenarios where cookies are present but partitioned, as it helps determine if the iframe has unrestricted cookie access. But for now, it’s still best used as a supplemental signal, rather than a standalone check.
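    The check itself is small. A minimal sketch (the helper name is ours; treat the result as one signal among several, alongside the iframe test described next):

    // Returns true/false when the API is available, or null when it is not.
    async function probeStorageAccessSignal() {
      if (typeof document.hasStorageAccess !== "function") {
        return null; // API unavailable; rely on other signals
      }
      try {
        return await document.hasStorageAccess(); // true = unpartitioned cookie access
      } catch (e) {
        return false;
      }
    }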
    iFrame + postMessage
    Despite the existence of the Storage Access API, at the time of writing, this remains the most reliable and browser-compatible method:

    Embed a hidden iframe from a third-party domain.
    Inside the iframe, attempt to set a test cookie.
    Use window.postMessage to report success or failure to the parent.

    This approach works across all major browsers, requires no server, and simulates a real-world third-party scenario.
    We’ll implement this step-by-step next.
    Bonus: Sec-Fetch-Storage-Access
    Chrome is introducing Sec-Fetch-Storage-Access, an HTTP request header sent with cross-site requests to indicate whether the iframe has access to unpartitioned cookies. This header is only visible to servers and cannot be accessed via JavaScript. It’s useful for back-end analytics but not applicable for client-side cookie detection.
    As of May 2025, this feature is only implemented in Chrome and is not supported by other browsers. However, it’s still good to know that it’s part of the evolving ecosystem.
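    If you want to capture it for back-end analytics anyway, reading the header is straightforward. A minimal sketch, assuming an Express-style app; only Chrome sends the header at the time of writing, and its values may evolve, so treat it as an optional signal:

    const express = require("express");
    const app = express();

    // Log the hint when present; absence simply means the browser didn't send it.
    app.use((req, res, next) => {
      const storageAccessHint = req.headers["sec-fetch-storage-access"];
      if (storageAccessHint) {
        console.log("Sec-Fetch-Storage-Access:", storageAccessHint);
      }
      next();
    });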
    Step-by-Step: Detecting Third-Party Cookies Via iFrame
    So, what did I mean when I said that the last method we looked at “requires no server”? While this method doesn’t require any back-end logic, it does require access to a separate domain — or at least a cross-site subdomain — to simulate a third-party environment. This means the following:

    You must serve the test page from a different domain or public subdomain, e.g., example.com and cookietest.example.com,
    The domain needs HTTPS, and
    You’ll need to host a simple static file, even if no server code is involved.

    Once that’s set up, the rest of the logic is fully client-side.
    Step 1: Create A Cookie Test Page
    Minimal version:

    <!DOCTYPE html>
    <html>
    <body>
    <script>
      // Try to set a cookie in this (third-party) context.
      document.cookie = "thirdparty_test=1; SameSite=None; Secure; Path=/;";
      const cookieFound = document.cookie.includes("thirdparty_test=1");

      // Message values are illustrative; keep them in sync with the parent page.
      const sendResult = (result) => window.parent?.postMessage(result, "*");

      if (document.hasStorageAccess instanceof Function) {
        // Refine the result with the Storage Access API where it is available.
        document.hasStorageAccess()
          .then((hasAccess) => sendResult(cookieFound && hasAccess ? "TP_COOKIES_SUPPORTED" : "TP_COOKIES_BLOCKED"))
          .catch(() => sendResult("TP_COOKIES_BLOCKED"));
      } else {
        sendResult(cookieFound ? "TP_COOKIES_SUPPORTED" : "TP_COOKIES_BLOCKED");
      }
    </script>
    </body>
    </html>

    Make sure the page is served over HTTPS, and the cookie uses SameSite=None; Secure. Without these attributes, modern browsers will silently reject it.
    Step 2: Embed The iFrame And Listen For The Result
    On your main page:

    function checkThirdPartyCookies() {
      return new Promise((resolve) => {
        const iframe = document.createElement("iframe");
        iframe.style.display = "none";
        iframe.src = "https://cookietest.example.com/cookie-test.html"; // your cross-site subdomain (illustrative URL)
        document.body.appendChild(iframe);

        let resolved = false;
        const cleanup = (result) => {
          if (resolved) return;
          resolved = true;
          window.removeEventListener("message", onMessage);
          iframe.remove();
          resolve(result);
        };

        const onMessage = (event) => {
          if (["TP_COOKIES_SUPPORTED", "TP_COOKIES_BLOCKED"].includes(event.data)) {
            cleanup({ supported: event.data === "TP_COOKIES_SUPPORTED", timedOut: false });
          }
        };

        window.addEventListener("message", onMessage);

        // Fail closed if the iframe never reports back.
        setTimeout(() => cleanup({ supported: false, timedOut: true }), 1000);
      });
    }

    Example usage:

    checkThirdPartyCookies().then(({ supported, timedOut }) => {
      if (!supported) {
        someCookiesBlockedCallback(); // Third-party cookies are blocked.
        if (timedOut) {
          // No response received (iframe possibly blocked).
          // Optional fallback UX goes here.
          someCookiesBlockedTimeoutCallback();
        }
      }
    });

    Step 3: Enhance Detection With The Storage Access API
    In Safari, even when third-party cookies are blocked, users can manually grant access through the Storage Access API — but only in response to a user gesture.
    Here’s how you could implement that in your iframe test page:

    <button id="enable-cookies">This embedded content requires cookie access. Click below to continue.</button>

    <script>
      document.getElementById("enable-cookies")?.addEventListener("click", async () => {
        if (document.requestStorageAccess instanceof Function) {
          try {
            // Must be called from a user gesture (the click above).
            const granted = await document.requestStorageAccess();
            // Message values are illustrative; keep them in sync with the parent page.
            if (granted !== false) {
              window.parent.postMessage("STORAGE_ACCESS_GRANTED", "*");
            } else {
              window.parent.postMessage("STORAGE_ACCESS_DENIED", "*");
            }
          } catch (e) {
            // The user declined, or the browser rejected the request.
            window.parent.postMessage("STORAGE_ACCESS_FAILED", "*");
          }
        }
      });
    </script>

    Then, on the parent page, you can listen for this message and retry detection if needed:

    // Inside the same onMessage listener from before:
    if (event.data === "STORAGE_ACCESS_GRANTED") {
      // Optionally: retry the cookie test, or reload iframe logic
      checkThirdPartyCookies().then((result) => {
        // Update your UI or state with the fresh result.
      });
    }

    A Purely Client-Side Fallback
    In some situations, you might not have access to a second domain or can’t host third-party content under your control. That makes the iframe method unfeasible.
    When that’s the case, your best option is to combine multiple signals — basic cookie checks, hasStorageAccess, localStorage fallbacks, and maybe even passive indicators like load failures or timeouts — to infer whether third-party cookies are likely blocked.
    The important caveat: This will never be 100% accurate. But, in constrained environments, “better something than nothing” may still improve the UX.
    Here’s a basic example:

    async function inferCookieSupportFallback() {
      let hasCookieAPI = navigator.cookieEnabled;
      let canSetCookie = false;
      let hasStorageAccess = false;

      try {
        document.cookie = "test_fallback=1; SameSite=None; Secure; Path=/;";
        canSetCookie = document.cookie.includes("test_fallback=1");

        document.cookie = "test_fallback=; Max-Age=0; Path=/;";
      } catch (e) {
        canSetCookie = false;
      }

      if (document.hasStorageAccess instanceof Function) {
        try {
          hasStorageAccess = await document.hasStorageAccess();
        } catch (e) {}
      }

      return {
        inferredThirdPartyCookies: hasCookieAPI && canSetCookie && hasStorageAccess,
        raw: { hasCookieAPI, canSetCookie, hasStorageAccess }
      };
    }

    Example usage:

    inferCookieSupportFallback().then(({ inferredThirdPartyCookies, raw }) => {
      if (inferredThirdPartyCookies) {
        console.log("Third-party cookies look usable:", raw);
      } else {
        console.warn("Third-party cookies are likely blocked or partitioned:", raw);
        // You could inform the user or adjust behavior accordingly
      }
    });

    Use this fallback when:

    You’re building a JavaScript-only widget embedded on unknown sites,
    You don’t control a second domain, or
    You just need some visibility into user-side behavior.

    Don’t rely on it for security-critical logic! But it may help tailor the user experience, surface warnings, or decide whether to attempt a fallback SSO flow. Again, it’s better to have something rather than nothing.
    Fallback Strategies When Third-Party Cookies Are Blocked
    Detecting blocked cookies is only half the battle. Once you know they’re unavailable, what can you do? Here are some practical options that might be useful for you:
    Redirect-Based Flows
    For auth-related flows, switch from embedded iframes to top-level redirects. Let the user authenticate directly on the identity provider's site, then redirect back. It works in all browsers, but the UX might be less seamless.
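    A minimal sketch of that pattern, assuming an OAuth-style identity provider; the URL and parameters are illustrative placeholders rather than any specific provider’s API:

    function redirectToIdentityProvider() {
      // Send the user to the IdP in a top-level (first-party) context.
      const returnTo = encodeURIComponent(window.location.href);
      window.location.href =
        "https://idp.example.com/authorize" +
        "?client_id=YOUR_CLIENT_ID" +
        "&redirect_uri=" + returnTo;
    }

    // After login, the IdP redirects back to your site, where the session
    // can be established first-party, with no third-party cookies required.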
    Request Storage Access
    Prompt the user using requestStorageAccess() after a clear UI gesture. Use this to re-enable cookies without leaving the page.
    Token-Based Communication
    Pass session info directly from parent to iframe via:

    postMessage, or
    Query params.

    This avoids reliance on cookies entirely but requires coordination between both sides:

    // Parent
    const iframe = document.getElementById("embedded-widget"); // id is illustrative

    iframe.onload = () => {
      const token = getAccessTokenSomehow(); // JWT or anything else
      // Send the token only to the iframe's origin, never "*".
      iframe.contentWindow.postMessage(
        { type: "AUTH_TOKEN", token },
        "https://widget.example.com"
      );
    };

    // iframe
    window.addEventListener("message", (event) => {
      // Accept messages only from the expected parent origin.
      if (event.origin !== "https://parent.example.com") return;

      const { type, token } = event.data;

      if (type === "AUTH_TOKEN") {
        validateAndUseToken(token); // process JWT, init session, etc
      }
    });

    Partitioned Cookies
    Chrome and other Chromium-based browsers now support cookies with the Partitioned attribute, allowing per-top-site cookie isolation. This is useful for widgets like chat or embedded forms where cross-site identity isn’t needed.
    Note: Firefox and Safari don’t support the Partitioned cookie attribute. Firefox enforces cookie partitioning by default using a different mechanism, while Safari blocks third-party cookies entirely.

    But be careful, as partitioned cookies are treated as “blocked” by basic detection. Refine your logic if needed.
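    For completeness, a minimal sketch of opting in on the server, assuming an Express handler; the cookie name and value are illustrative:

    const express = require("express");
    const app = express();

    app.get("/widget-session", (req, res) => {
      // Partitioned keys the cookie to the embedding top-level site.
      res.setHeader(
        "Set-Cookie",
        "__Host-widget_session=abc123; Path=/; Secure; HttpOnly; SameSite=None; Partitioned"
      );
      res.sendStatus(204);
    });

    app.listen(3000);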
    Final Thought: Transparency, Transition, And The Path Forward
    Third-party cookies are disappearing, albeit gradually and unevenly. Until the transition is complete, your job as a developer is to bridge the gap between technical limitations and real-world user experience. That means:

    Keep an eye on the standards. APIs like FedCM and Privacy Sandbox features are reshaping how we handle identity and analytics without relying on cross-site cookies.
    Combine detection with graceful fallback. Whether it’s offering a redirect flow, using requestStorageAccess, or falling back to token-based messaging — every small UX improvement adds up.
    Inform your users. Users shouldn’t be left wondering why something worked in one browser but silently broke in another. Don’t let them feel like they did something wrong — just help them move forward. A clear, friendly message can prevent this confusion.

    The good news? You don’t need a perfect solution today, just a resilient one. By detecting issues early and handling them thoughtfully, you protect both your users and your future architecture, one cookie-less browser at a time.
    And as seen with Chrome’s pivot away from automatic deprecation, the transition is not always linear. Industry feedback, regulatory oversight, and evolving technical realities continue to shape the timeline and the solutions.
    And don’t forget: having something is better than nothing.
    #reliably #detectingthirdparty #cookie #blockingin
    Reliably Detecting Third-Party Cookie Blocking In 2025
    The web is beginning to part ways with third-party cookies, a technology it once heavily relied on. Introduced in 1994 by Netscape to support features like virtual shopping carts, cookies have long been a staple of web functionality. However, concerns over privacy and security have led to a concerted effort to eliminate them. The World Wide Web Consortium Technical Architecture Grouphas been vocal in advocating for the complete removal of third-party cookies from the web platform. Major browsersare responding by phasing them out, though the transition is gradual. While this shift enhances user privacy, it also disrupts legitimate functionalities that rely on third-party cookies, such as single sign-on, fraud prevention, and embedded services. And because there is still no universal ban in place and many essential web features continue to depend on these cookies, developers must detect when third-party cookies are blocked so that applications can respond gracefully. Don’t Let Silent Failures Win: Why Cookie Detection Still Matters Yes, the ideal solution is to move away from third-party cookies altogether and redesign our integrations using privacy-first, purpose-built alternatives as soon as possible. But in reality, that migration can take months or even years, especially for legacy systems or third-party vendors. Meanwhile, users are already browsing with third-party cookies disabled and often have no idea that anything is missing. Imagine a travel booking platform that embeds an iframe from a third-party partner to display live train or flight schedules. This embedded service uses a cookie on its own domain to authenticate the user and personalize content, like showing saved trips or loyalty rewards. But when the browser blocks third-party cookies, the iframe cannot access that data. Instead of a seamless experience, the user sees an error, a blank screen, or a login prompt that doesn’t work. And while your team is still planning a long-term integration overhaul, this is already happening to real users. They don’t see a cookie policy; they just see a broken booking flow. Detecting third-party cookie blocking isn’t just good technical hygiene but a frontline defense for user experience. Why It’s Hard To Tell If Third-Party Cookies Are Blocked Detecting whether third-party cookies are supported isn’t as simple as calling navigator.cookieEnabled. Even a well-intentioned check like this one may look safe, but it still won’t tell you what you actually need to know: // DOES NOT detect third-party cookie blocking function areCookiesEnabled{ if{ return false; } try { document.cookie = "test_cookie=1; SameSite=None; Secure"; const hasCookie = document.cookie.includes; document.cookie = "test_cookie=; Max-Age=0; SameSite=None; Secure"; return hasCookie; } catch{ return false; } } This function only confirms that cookies work in the currentcontext. It says nothing about third-party scenarios, like an iframe on another domain. Worse, it’s misleading: in some browsers, navigator.cookieEnabled may still return true inside a third-party iframe even when cookies are blocked. Others might behave differently, leading to inconsistent and unreliable detection. These cross-browser inconsistencies — combined with the limitations of document.cookie — make it clear that there is no shortcut for detection. To truly detect third-party cookie blocking, we need to understand how different browsers actually behave in embedded third-party contexts. 
How Modern Browsers Handle Third-Party Cookies The behavior of modern browsers directly affects which detection methods will work and which ones silently fail. Safari: Full Third-Party Cookie Blocking Since version 13.1, Safari blocks all third-party cookies by default, with no exceptions, even if the user previously interacted with the embedded domain. This policy is part of Intelligent Tracking Prevention. For embedded contentthat requires cookie access, Safari exposes the Storage Access API, which requires a user gesture to grant storage permission. As a result, a test for third-party cookie support will nearly always fail in Safari unless the iframe explicitly requests access via this API. Firefox: Cookie Partitioning By Design Firefox’s Total Cookie Protection isolates cookies on a per-site basis. Third-party cookies can still be set and read, but they are partitioned by the top-level site, meaning a cookie set by the same third-party on siteA.com and siteB.com is stored separately and cannot be shared. As of Firefox 102, this behavior is enabled by default in the Standardmode of Enhanced Tracking Protection. Unlike the Strict mode — which blocks third-party cookies entirely, similar to Safari — the Standard mode does not block them outright. Instead, it neutralizes their tracking capability by isolating them per site. As a result, even if a test shows that a third-party cookie was successfully set, it may be useless for cross-site logins or shared sessions due to this partitioning. Detection logic needs to account for that. Chrome: From Deprecation Plans To Privacy SandboxChromium-based browsers still allow third-party cookies by default — but the story is changing. Starting with Chrome 80, third-party cookies must be explicitly marked with SameSite=None; Secure, or they will be rejected. In January 2020, Google announced their intention to phase out third-party cookies by 2022. However, the timeline was updated multiple times, first in June 2021 when the company pushed the rollout to begin in mid-2023 and conclude by the end of that year. Additional postponements followed in July 2022, December 2023, and April 2024. In July 2024, Google has clarified that there is no plan to unilaterally deprecate third-party cookies or force users into a new model without consent. Instead, Chrome is shifting to a user-choice interface that will allow individuals to decide whether to block or allow third-party cookies globally. This change was influenced in part by substantial pushback from the advertising industry, as well as ongoing regulatory oversight, including scrutiny by the UK Competition and Markets Authorityinto Google’s Privacy Sandbox initiative. The CMA confirmed in a 2025 update that there is no intention to force a deprecation or trigger automatic prompts for cookie blocking. As for now, third-party cookies remain enabled by default in Chrome. The new user-facing controls and the broader Privacy Sandbox ecosystem are still in various stages of experimentation and limited rollout. Edge: Tracker-Focused Blocking With User Configurability Edgeshares Chrome’s handling of third-party cookies, including the SameSite=None; Secure requirement. Additionally, Edge introduces Tracking Prevention modes: Basic, Balanced, and Strict. In Balanced mode, it blocks known third-party trackers using Microsoft’s maintained list but allows many third-party cookies that are not classified as trackers. 
Strict mode blocks more resource loads than Balanced, which may result in some websites not behaving as expected. Other Browsers: What About Them? Privacy-focused browsers, like Brave, block third-party cookies by default as part of their strong anti-tracking stance. Internet Explorer11 allowed third-party cookies depending on user privacy settings and the presence of Platform for Privacy Preferencesheaders. However, IE usage is now negligible. Notably, the default “Medium” privacy setting in IE could block third-party cookies unless a valid P3P policy was present. Older versions of Safari had partial third-party cookie restrictions, but, as mentioned before, this was replaced with full blocking via ITP. As of 2025, all major browsers either block or isolate third-party cookies by default, with the exception of Chrome, which still allows them in standard browsing mode pending the rollout of its new user-choice model. To account for these variations, your detection strategy must be grounded in real-world testing — specifically by reproducing a genuine third-party context such as loading your script within an iframe on a cross-origin domain — rather than relying on browser names or versions. Overview Of Detection Techniques Over the years, many techniques have been used to detect third-party cookie blocking. Most are unreliable or obsolete. Here’s a quick walkthrough of what doesn’t workand what does. Basic JavaScript API ChecksAs mentioned earlier, the navigator.cookieEnabled or setting document.cookie on the main page doesn’t reflect cross-site cookie status: In third-party iframes, navigator.cookieEnabled often returns true even when cookies are blocked. Setting document.cookie in the parent doesn’t test the third-party context. These checks are first-party only. Avoid using them for detection. Storage Hacks Via localStoragePreviously, some developers inferred cookie support by checking if window.localStorage worked inside a third-party iframe — which is especially useful against older Safari versions that blocked all third-party storage. Modern browsers often allow localStorage even when cookies are blocked. This leads to false positives and is no longer reliable. Server-Assisted Cookie ProbeOne classic method involves setting a cookie from a third-party domain via HTTP and then checking if it comes back: Load a script/image from a third-party server that sets a cookie. Immediately load another resource, and the server checks whether the cookie was sent. This works, but it: Requires custom server-side logic, Depends on HTTP caching, response headers, and cookie attributes, and Adds development and infrastructure complexity. While this is technically valid, it is not suitable for a front-end-only approach, which is our focus here. Storage Access APIThe document.hasStorageAccessmethod allows embedded third-party content to check if it has access to unpartitioned cookies: ChromeSupports hasStorageAccessand requestStorageAccessstarting from version 119. Additionally, hasUnpartitionedCookieAccessis available as an alias for hasStorageAccessfrom version 125 onwards. FirefoxSupports both hasStorageAccessand requestStorageAccessmethods. SafariSupports the Storage Access API. However, access must always be triggered by a user interaction. For example, even calling requestStorageAccesswithout a direct user gestureis ignored. Chrome and Firefox also support the API, and in those browsers, it may work automatically or based on browser heuristics or site engagement. 
This API is particularly useful for detecting scenarios where cookies are present but partitioned, as it helps determine if the iframe has unrestricted cookie access. But for now, it’s still best used as a supplemental signal, rather than a standalone check. iFrame + postMessageDespite the existence of the Storage Access API, at the time of writing, this remains the most reliable and browser-compatible method: Embed a hidden iframe from a third-party domain. Inside the iframe, attempt to set a test cookie. Use window.postMessage to report success or failure to the parent. This approach works across all major browsers, requires no server, and simulates a real-world third-party scenario. We’ll implement this step-by-step next. Bonus: Sec-Fetch-Storage-Access Chromeis introducing Sec-Fetch-Storage-Access, an HTTP request header sent with cross-site requests to indicate whether the iframe has access to unpartitioned cookies. This header is only visible to servers and cannot be accessed via JavaScript. It’s useful for back-end analytics but not applicable for client-side cookie detection. As of May 2025, this feature is only implemented in Chrome and is not supported by other browsers. However, it’s still good to know that it’s part of the evolving ecosystem. Step-by-Step: Detecting Third-Party Cookies Via iFrame So, what did I mean when I said that the last method we looked at “requires no server”? While this method doesn’t require any back-end logic, it does require access to a separate domain — or at least a cross-site subdomain — to simulate a third-party environment. This means the following: You must serve the test page from a different domain or public subdomain, e.g., example.com and cookietest.example.com, The domain needs HTTPS, and You’ll need to host a simple static file, even if no server code is involved. Once that’s set up, the rest of the logic is fully client-side. Step 1: Create A Cookie Test PageMinimal version: <!DOCTYPE html> <html> <body> <script> document.cookie = "thirdparty_test=1; SameSite=None; Secure; Path=/;"; const cookieFound = document.cookie.includes; const sendResult ==> window.parent?.postMessage; if{ document.hasStorageAccess.then=> { sendResult; }).catch=> sendResult); } else { sendResult; } </script> </body> </html> Make sure the page is served over HTTPS, and the cookie uses SameSite=None; Secure. Without these attributes, modern browsers will silently reject it. Step 2: Embed The iFrame And Listen For The Result On your main page: function checkThirdPartyCookies{ return new Promise=> { const iframe = document.createElement; iframe.style.display = 'none'; iframe.src = ";; // your subdomain document.body.appendChild; let resolved = false; const cleanup ==> { ifreturn; resolved = true; window.removeEventListener; iframe.remove; resolve; }; const onMessage ==> { if) { cleanup; } }; window.addEventListener; setTimeout=> cleanup, 1000); }); } Example usage: checkThirdPartyCookies.then=> { if{ someCookiesBlockedCallback; // Third-party cookies are blocked. if{ // No response received. // Optional fallback UX goes here. someCookiesBlockedTimeoutCallback; }; } }); Step 3: Enhance Detection With The Storage Access API In Safari, even when third-party cookies are blocked, users can manually grant access through the Storage Access API — but only in response to a user gesture. Here’s how you could implement that in your iframe test page: <button id="enable-cookies">This embedded content requires cookie access. 
Click below to continue.</button> <script> document.getElementById?.addEventListener=> { if{ try { const granted = await document.requestStorageAccess; if{ window.parent.postMessage; } else { window.parent.postMessage; } } catch{ window.parent.postMessage; } } }); </script> Then, on the parent page, you can listen for this message and retry detection if needed: // Inside the same onMessage listener from before: if{ // Optionally: retry the cookie test, or reload iframe logic checkThirdPartyCookies.then; }A Purely Client-Side FallbackIn some situations, you might not have access to a second domain or can’t host third-party content under your control. That makes the iframe method unfeasible. When that’s the case, your best option is to combine multiple signals — basic cookie checks, hasStorageAccess, localStorage fallbacks, and maybe even passive indicators like load failures or timeouts — to infer whether third-party cookies are likely blocked. The important caveat: This will never be 100% accurate. But, in constrained environments, “better something than nothing” may still improve the UX. Here’s a basic example: async function inferCookieSupportFallback{ let hasCookieAPI = navigator.cookieEnabled; let canSetCookie = false; let hasStorageAccess = false; try { document.cookie = "testfallback=1; SameSite=None; Secure; Path=/;"; canSetCookie = document.cookie.includes; document.cookie = "test_fallback=; Max-Age=0; Path=/;"; } catch{ canSetCookie = false; } if{ try { hasStorageAccess = await document.hasStorageAccess; } catch{} } return { inferredThirdPartyCookies: hasCookieAPI && canSetCookie && hasStorageAccess, raw: { hasCookieAPI, canSetCookie, hasStorageAccess } }; } Example usage: inferCookieSupportFallback.then=> { if{ console.log; } else { console.warn; // You could inform the user or adjust behavior accordingly } }); Use this fallback when: You’re building a JavaScript-only widget embedded on unknown sites, You don’t control a second domain, or You just need some visibility into user-side behavior. Don’t rely on it for security-critical logic! But it may help tailor the user experience, surface warnings, or decide whether to attempt a fallback SSO flow. Again, it’s better to have something rather than nothing. Fallback Strategies When Third-Party Cookies Are Blocked Detecting blocked cookies is only half the battle. Once you know they’re unavailable, what can you do? Here are some practical options that might be useful for you: Redirect-Based Flows For auth-related flows, switch from embedded iframes to top-level redirects. Let the user authenticate directly on the identity provider's site, then redirect back. It works in all browsers, but the UX might be less seamless. Request Storage Access Prompt the user using requestStorageAccessafter a clear UI gesture. Use this to re-enable cookies without leaving the page. Token-Based Communication Pass session info directly from parent to iframe via: postMessage; Query params. This avoids reliance on cookies entirely but requires coordination between both sides: // Parent const iframe = document.getElementById; iframe.onload ==> { const token = getAccessTokenSomehow; // JWT or anything else iframe.contentWindow.postMessage; }; // iframe window.addEventListener=> { ifreturn; const { type, token } = event.data; if{ validateAndUseToken; // process JWT, init session, etc } }); Partitioned CookiesChromeand other Chromium-based browsers now support cookies with the Partitioned attribute, allowing per-top-site cookie isolation. 
This is useful for widgets like chat or embedded forms where cross-site identity isn’t needed. Note: Firefox and Safari don’t support the Partitioned cookie attribute. Firefox enforces cookie partitioning by default using a different mechanism (Total Cookie Protection), while Safari blocks third-party cookies entirely. But be careful, as they are treated as “blocked” by basic detection. Refine your logic if needed.

Final Thought: Transparency, Transition, And The Path Forward

Third-party cookies are disappearing, albeit gradually and unevenly. Until the transition is complete, your job as a developer is to bridge the gap between technical limitations and real-world user experience. That means: Keep an eye on the standards. APIs like FedCM and Privacy Sandbox features (Topics, Attribution Reporting, Fenced Frames) are reshaping how we handle identity and analytics without relying on cross-site cookies. Combine detection with graceful fallback. Whether it’s offering a redirect flow, using requestStorageAccess(), or falling back to token-based messaging — every small UX improvement adds up. Inform your users. Users shouldn't be left wondering why something worked in one browser but silently broke in another. Don’t let them feel like they did something wrong — just help them move forward. A clear, friendly message can prevent this confusion.

The good news? You don’t need a perfect solution today, just a resilient one. By detecting issues early and handling them thoughtfully, you protect both your users and your future architecture, one cookie-less browser at a time. And as seen with Chrome’s pivot away from automatic deprecation, the transition is not always linear. Industry feedback, regulatory oversight, and evolving technical realities continue to shape the time and the solutions. And don’t forget: having something is better than nothing.

#reliably #detectingthirdparty #cookie #blockingin
    Reliably Detecting Third-Party Cookie Blocking In 2025
    smashingmagazine.com
    The web is beginning to part ways with third-party cookies, a technology it once heavily relied on. Introduced in 1994 by Netscape to support features like virtual shopping carts, cookies have long been a staple of web functionality. However, concerns over privacy and security have led to a concerted effort to eliminate them. The World Wide Web Consortium Technical Architecture Group (W3C TAG) has been vocal in advocating for the complete removal of third-party cookies from the web platform. Major browsers (Chrome, Safari, Firefox, and Edge) are responding by phasing them out, though the transition is gradual. While this shift enhances user privacy, it also disrupts legitimate functionalities that rely on third-party cookies, such as single sign-on (SSO), fraud prevention, and embedded services. And because there is still no universal ban in place and many essential web features continue to depend on these cookies, developers must detect when third-party cookies are blocked so that applications can respond gracefully. Don’t Let Silent Failures Win: Why Cookie Detection Still Matters Yes, the ideal solution is to move away from third-party cookies altogether and redesign our integrations using privacy-first, purpose-built alternatives as soon as possible. But in reality, that migration can take months or even years, especially for legacy systems or third-party vendors. Meanwhile, users are already browsing with third-party cookies disabled and often have no idea that anything is missing. Imagine a travel booking platform that embeds an iframe from a third-party partner to display live train or flight schedules. This embedded service uses a cookie on its own domain to authenticate the user and personalize content, like showing saved trips or loyalty rewards. But when the browser blocks third-party cookies, the iframe cannot access that data. Instead of a seamless experience, the user sees an error, a blank screen, or a login prompt that doesn’t work. And while your team is still planning a long-term integration overhaul, this is already happening to real users. They don’t see a cookie policy; they just see a broken booking flow. Detecting third-party cookie blocking isn’t just good technical hygiene but a frontline defense for user experience. Why It’s Hard To Tell If Third-Party Cookies Are Blocked Detecting whether third-party cookies are supported isn’t as simple as calling navigator.cookieEnabled. Even a well-intentioned check like this one may look safe, but it still won’t tell you what you actually need to know: // DOES NOT detect third-party cookie blocking function areCookiesEnabled() { if (navigator.cookieEnabled === false) { return false; } try { document.cookie = "test_cookie=1; SameSite=None; Secure"; const hasCookie = document.cookie.includes("test_cookie=1"); document.cookie = "test_cookie=; Max-Age=0; SameSite=None; Secure"; return hasCookie; } catch (e) { return false; } } This function only confirms that cookies work in the current (first-party) context. It says nothing about third-party scenarios, like an iframe on another domain. Worse, it’s misleading: in some browsers, navigator.cookieEnabled may still return true inside a third-party iframe even when cookies are blocked. Others might behave differently, leading to inconsistent and unreliable detection. These cross-browser inconsistencies — combined with the limitations of document.cookie — make it clear that there is no shortcut for detection. 
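To make the pitfall concrete, here is a small diagnostic snippet you could drop into a page loaded inside a cross-site iframe (a sketch with an illustrative cookie name, not code from the article; exact behavior varies by browser and settings). In several browsers the first value can read true even though the cookie write below is silently ignored:

// Run inside an embedded, cross-site iframe to compare the two signals.
console.log("navigator.cookieEnabled:", navigator.cookieEnabled);

// Attempt a write that would be legal for a third-party context.
document.cookie = "iframe_probe=1; SameSite=None; Secure; Path=/;";
const probeVisible = document.cookie.includes("iframe_probe=1");
console.log("test cookie visible after write:", probeVisible);

// Clean up in case the write did succeed.
document.cookie = "iframe_probe=; Max-Age=0; SameSite=None; Secure; Path=/;";

If the two values disagree, you are seeing exactly the inconsistency described above, which is why reliable detection has to happen in a real third-party context.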
To truly detect third-party cookie blocking, we need to understand how different browsers actually behave in embedded third-party contexts. How Modern Browsers Handle Third-Party Cookies The behavior of modern browsers directly affects which detection methods will work and which ones silently fail. Safari: Full Third-Party Cookie Blocking Since version 13.1, Safari blocks all third-party cookies by default, with no exceptions, even if the user previously interacted with the embedded domain. This policy is part of Intelligent Tracking Prevention (ITP). For embedded content (such as an SSO iframe) that requires cookie access, Safari exposes the Storage Access API, which requires a user gesture to grant storage permission. As a result, a test for third-party cookie support will nearly always fail in Safari unless the iframe explicitly requests access via this API. Firefox: Cookie Partitioning By Design Firefox’s Total Cookie Protection isolates cookies on a per-site basis. Third-party cookies can still be set and read, but they are partitioned by the top-level site, meaning a cookie set by the same third-party on siteA.com and siteB.com is stored separately and cannot be shared. As of Firefox 102, this behavior is enabled by default in the Standard (default) mode of Enhanced Tracking Protection. Unlike the Strict mode — which blocks third-party cookies entirely, similar to Safari — the Standard mode does not block them outright. Instead, it neutralizes their tracking capability by isolating them per site. As a result, even if a test shows that a third-party cookie was successfully set, it may be useless for cross-site logins or shared sessions due to this partitioning. Detection logic needs to account for that. Chrome: From Deprecation Plans To Privacy Sandbox (And Industry Pushback) Chromium-based browsers still allow third-party cookies by default — but the story is changing. Starting with Chrome 80, third-party cookies must be explicitly marked with SameSite=None; Secure, or they will be rejected. In January 2020, Google announced their intention to phase out third-party cookies by 2022. However, the timeline was updated multiple times, first in June 2021 when the company pushed the rollout to begin in mid-2023 and conclude by the end of that year. Additional postponements followed in July 2022, December 2023, and April 2024. In July 2024, Google has clarified that there is no plan to unilaterally deprecate third-party cookies or force users into a new model without consent. Instead, Chrome is shifting to a user-choice interface that will allow individuals to decide whether to block or allow third-party cookies globally. This change was influenced in part by substantial pushback from the advertising industry, as well as ongoing regulatory oversight, including scrutiny by the UK Competition and Markets Authority (CMA) into Google’s Privacy Sandbox initiative. The CMA confirmed in a 2025 update that there is no intention to force a deprecation or trigger automatic prompts for cookie blocking. As for now, third-party cookies remain enabled by default in Chrome. The new user-facing controls and the broader Privacy Sandbox ecosystem are still in various stages of experimentation and limited rollout. Edge (Chromium-Based): Tracker-Focused Blocking With User Configurability Edge (which is a Chromium-based browser) shares Chrome’s handling of third-party cookies, including the SameSite=None; Secure requirement. 
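For reference, a cross-site cookie that satisfies this requirement can be declared roughly like this (a minimal sketch with an illustrative cookie name, not code from the article):

// Set-Cookie response header sent by the third-party service:
// Set-Cookie: widget_session=abc123; SameSite=None; Secure; Path=/
// Equivalent write from script running in the third-party context:
document.cookie = "widget_session=abc123; SameSite=None; Secure; Path=/;";

// Without SameSite=None the cookie is treated as Lax by default and withheld
// from cross-site requests; SameSite=None without Secure is rejected outright.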
Additionally, Edge introduces Tracking Prevention modes: Basic, Balanced (default), and Strict. In Balanced mode, it blocks known third-party trackers using Microsoft’s maintained list but allows many third-party cookies that are not classified as trackers. Strict mode blocks more resource loads than Balanced, which may result in some websites not behaving as expected. Other Browsers: What About Them? Privacy-focused browsers, like Brave, block third-party cookies by default as part of their strong anti-tracking stance. Internet Explorer (IE) 11 allowed third-party cookies depending on user privacy settings and the presence of Platform for Privacy Preferences (P3P) headers. However, IE usage is now negligible. Notably, the default “Medium” privacy setting in IE could block third-party cookies unless a valid P3P policy was present. Older versions of Safari had partial third-party cookie restrictions (such as “Allow from websites I visit”), but, as mentioned before, this was replaced with full blocking via ITP. As of 2025, all major browsers either block or isolate third-party cookies by default, with the exception of Chrome, which still allows them in standard browsing mode pending the rollout of its new user-choice model. To account for these variations, your detection strategy must be grounded in real-world testing — specifically by reproducing a genuine third-party context such as loading your script within an iframe on a cross-origin domain — rather than relying on browser names or versions. Overview Of Detection Techniques Over the years, many techniques have been used to detect third-party cookie blocking. Most are unreliable or obsolete. Here’s a quick walkthrough of what doesn’t work (and why) and what does. Basic JavaScript API Checks (Misleading) As mentioned earlier, the navigator.cookieEnabled or setting document.cookie on the main page doesn’t reflect cross-site cookie status: In third-party iframes, navigator.cookieEnabled often returns true even when cookies are blocked. Setting document.cookie in the parent doesn’t test the third-party context. These checks are first-party only. Avoid using them for detection. Storage Hacks Via localStorage (Obsolete) Previously, some developers inferred cookie support by checking if window.localStorage worked inside a third-party iframe — which is especially useful against older Safari versions that blocked all third-party storage. Modern browsers often allow localStorage even when cookies are blocked. This leads to false positives and is no longer reliable. Server-Assisted Cookie Probe (Heavyweight) One classic method involves setting a cookie from a third-party domain via HTTP and then checking if it comes back: Load a script/image from a third-party server that sets a cookie. Immediately load another resource, and the server checks whether the cookie was sent. This works, but it: Requires custom server-side logic, Depends on HTTP caching, response headers, and cookie attributes (SameSite=None; Secure), and Adds development and infrastructure complexity. While this is technically valid, it is not suitable for a front-end-only approach, which is our focus here. Storage Access API (Supplemental Signal) The document.hasStorageAccess() method allows embedded third-party content to check if it has access to unpartitioned cookies: ChromeSupports hasStorageAccess() and requestStorageAccess() starting from version 119. Additionally, hasUnpartitionedCookieAccess() is available as an alias for hasStorageAccess() from version 125 onwards. 
FirefoxSupports both hasStorageAccess() and requestStorageAccess() methods. SafariSupports the Storage Access API. However, access must always be triggered by a user interaction. For example, even calling requestStorageAccess() without a direct user gesture (like a click) is ignored. Chrome and Firefox also support the API, and in those browsers, it may work automatically or based on browser heuristics or site engagement. This API is particularly useful for detecting scenarios where cookies are present but partitioned (e.g., Firefox’s Total Cookie Protection), as it helps determine if the iframe has unrestricted cookie access. But for now, it’s still best used as a supplemental signal, rather than a standalone check. iFrame + postMessage (Best Practice) Despite the existence of the Storage Access API, at the time of writing, this remains the most reliable and browser-compatible method: Embed a hidden iframe from a third-party domain. Inside the iframe, attempt to set a test cookie. Use window.postMessage to report success or failure to the parent. This approach works across all major browsers (when properly configured), requires no server (kind of, more on that next), and simulates a real-world third-party scenario. We’ll implement this step-by-step next. Bonus: Sec-Fetch-Storage-Access Chrome (starting in version 133) is introducing Sec-Fetch-Storage-Access, an HTTP request header sent with cross-site requests to indicate whether the iframe has access to unpartitioned cookies. This header is only visible to servers and cannot be accessed via JavaScript. It’s useful for back-end analytics but not applicable for client-side cookie detection. As of May 2025, this feature is only implemented in Chrome and is not supported by other browsers. However, it’s still good to know that it’s part of the evolving ecosystem. Step-by-Step: Detecting Third-Party Cookies Via iFrame So, what did I mean when I said that the last method we looked at “requires no server”? While this method doesn’t require any back-end logic (like server-set cookies or response inspection), it does require access to a separate domain — or at least a cross-site subdomain — to simulate a third-party environment. This means the following: You must serve the test page from a different domain or public subdomain, e.g., example.com and cookietest.example.com, The domain needs HTTPS (for SameSite=None; Secure cookies to work), and You’ll need to host a simple static file (the test page), even if no server code is involved. Once that’s set up, the rest of the logic is fully client-side. Step 1: Create A Cookie Test Page (On A Third-Party Domain) Minimal version (e.g., https://cookietest.example.com/cookie-check.html): <!DOCTYPE html> <html> <body> <script> document.cookie = "thirdparty_test=1; SameSite=None; Secure; Path=/;"; const cookieFound = document.cookie.includes("thirdparty_test=1"); const sendResult = (status) => window.parent?.postMessage(status, "*"); if (cookieFound && document.hasStorageAccess instanceof Function) { document.hasStorageAccess().then((hasAccess) => { sendResult(hasAccess ? "TP_COOKIE_SUPPORTED" : "TP_COOKIE_BLOCKED"); }).catch(() => sendResult("TP_COOKIE_BLOCKED")); } else { sendResult(cookieFound ? "TP_COOKIE_SUPPORTED" : "TP_COOKIE_BLOCKED"); } </script> </body> </html> Make sure the page is served over HTTPS, and the cookie uses SameSite=None; Secure. Without these attributes, modern browsers will silently reject it. 
Step 2: Embed The iFrame And Listen For The Result On your main page: function checkThirdPartyCookies() { return new Promise((resolve) => { const iframe = document.createElement('iframe'); iframe.style.display = 'none'; iframe.src = "https://cookietest.example.com/cookie-check.html"; // your subdomain document.body.appendChild(iframe); let resolved = false; const cleanup = (result, timedOut = false) => { if (resolved) return; resolved = true; window.removeEventListener('message', onMessage); iframe.remove(); resolve({ thirdPartyCookiesEnabled: result, timedOut }); }; const onMessage = (event) => { if (["TP_COOKIE_SUPPORTED", "TP_COOKIE_BLOCKED"].includes(event.data)) { cleanup(event.data === "TP_COOKIE_SUPPORTED", false); } }; window.addEventListener('message', onMessage); setTimeout(() => cleanup(false, true), 1000); }); } Example usage: checkThirdPartyCookies().then(({ thirdPartyCookiesEnabled, timedOut }) => { if (!thirdPartyCookiesEnabled) { someCookiesBlockedCallback(); // Third-party cookies are blocked. if (timedOut) { // No response received (iframe possibly blocked). // Optional fallback UX goes here. someCookiesBlockedTimeoutCallback(); }; } }); Step 3: Enhance Detection With The Storage Access API In Safari, even when third-party cookies are blocked, users can manually grant access through the Storage Access API — but only in response to a user gesture. Here’s how you could implement that in your iframe test page: <button id="enable-cookies">This embedded content requires cookie access. Click below to continue.</button> <script> document.getElementById('enable-cookies')?.addEventListener('click', async () => { if (document.requestStorageAccess && typeof document.requestStorageAccess === 'function') { try { const granted = await document.requestStorageAccess(); if (granted !== false) { window.parent.postMessage("TP_STORAGE_ACCESS_GRANTED", "*"); } else { window.parent.postMessage("TP_STORAGE_ACCESS_DENIED", "*"); } } catch (e) { window.parent.postMessage("TP_STORAGE_ACCESS_FAILED", "*"); } } }); </script> Then, on the parent page, you can listen for this message and retry detection if needed: // Inside the same onMessage listener from before: if (event.data === "TP_STORAGE_ACCESS_GRANTED") { // Optionally: retry the cookie test, or reload iframe logic checkThirdPartyCookies().then(handleResultAgain); } (Bonus) A Purely Client-Side Fallback (Not Perfect, But Sometimes Necessary) In some situations, you might not have access to a second domain or can’t host third-party content under your control. That makes the iframe method unfeasible. When that’s the case, your best option is to combine multiple signals — basic cookie checks, hasStorageAccess(), localStorage fallbacks, and maybe even passive indicators like load failures or timeouts — to infer whether third-party cookies are likely blocked. The important caveat: This will never be 100% accurate. But, in constrained environments, “better something than nothing” may still improve the UX. 
Here’s a basic example: async function inferCookieSupportFallback() { let hasCookieAPI = navigator.cookieEnabled; let canSetCookie = false; let hasStorageAccess = false; try { document.cookie = "test_fallback=1; SameSite=None; Secure; Path=/;"; canSetCookie = document.cookie.includes("test_fallback=1"); document.cookie = "test_fallback=; Max-Age=0; Path=/;"; } catch (_) { canSetCookie = false; } if (typeof document.hasStorageAccess === "function") { try { hasStorageAccess = await document.hasStorageAccess(); } catch (_) {} } return { inferredThirdPartyCookies: hasCookieAPI && canSetCookie && hasStorageAccess, raw: { hasCookieAPI, canSetCookie, hasStorageAccess } }; } Example usage: inferCookieSupportFallback().then(({ inferredThirdPartyCookies }) => { if (inferredThirdPartyCookies) { console.log("Cookies likely supported. Likely, yes."); } else { console.warn("Cookies may be blocked or partitioned."); // You could inform the user or adjust behavior accordingly } }); Use this fallback when: You’re building a JavaScript-only widget embedded on unknown sites, You don’t control a second domain (or the team refuses to add one), or You just need some visibility into user-side behavior (e.g., debugging UX issues). Don’t rely on it for security-critical logic (e.g., auth gating)! But it may help tailor the user experience, surface warnings, or decide whether to attempt a fallback SSO flow. Again, it’s better to have something rather than nothing. Fallback Strategies When Third-Party Cookies Are Blocked Detecting blocked cookies is only half the battle. Once you know they’re unavailable, what can you do? Here are some practical options that might be useful for you: Redirect-Based Flows For auth-related flows, switch from embedded iframes to top-level redirects. Let the user authenticate directly on the identity provider's site, then redirect back. It works in all browsers, but the UX might be less seamless. Request Storage Access Prompt the user using requestStorageAccess() after a clear UI gesture (Safari requires this). Use this to re-enable cookies without leaving the page. Token-Based Communication Pass session info directly from parent to iframe via: postMessage (with required origin); Query params (e.g., signed JWT in iframe URL). This avoids reliance on cookies entirely but requires coordination between both sides: // Parent const iframe = document.getElementById('my-iframe'); iframe.onload = () => { const token = getAccessTokenSomehow(); // JWT or anything else iframe.contentWindow.postMessage( { type: 'AUTH_TOKEN', token }, 'https://iframe.example.com' // Set the correct origin! ); }; // iframe window.addEventListener('message', (event) => { if (event.origin !== 'https://parent.example.com') return; const { type, token } = event.data; if (type === 'AUTH_TOKEN') { validateAndUseToken(token); // process JWT, init session, etc } }); Partitioned Cookies (CHIPS) Chrome (since version 114) and other Chromium-based browsers now support cookies with the Partitioned attribute (known as CHIPS), allowing per-top-site cookie isolation. This is useful for widgets like chat or embedded forms where cross-site identity isn’t needed. Note: Firefox and Safari don’t support the Partitioned cookie attribute. Firefox enforces cookie partitioning by default using a different mechanism (Total Cookie Protection), while Safari blocks third-party cookies entirely. But be careful, as they are treated as “blocked” by basic detection. Refine your logic if needed. 
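To make the CHIPS option more concrete, here is a minimal sketch of what a partitioned cookie declaration might look like, assuming a Chromium-based browser and an illustrative cookie name (not code from the article):

// Set-Cookie response header from the embedded third-party service:
// Set-Cookie: __Host-widget_state=xyz; SameSite=None; Secure; Path=/; Partitioned
// Or, set from script inside the embedded frame:
document.cookie = "__Host-widget_state=xyz; SameSite=None; Secure; Path=/; Partitioned;";

// The cookie is keyed to the top-level site, so the same widget embedded on
// siteA.com and siteB.com sees two independent copies rather than one shared cookie.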
Final Thought: Transparency, Transition, And The Path Forward Third-party cookies are disappearing, albeit gradually and unevenly. Until the transition is complete, your job as a developer is to bridge the gap between technical limitations and real-world user experience. That means: Keep an eye on the standards.APIs like FedCM and Privacy Sandbox features (Topics, Attribution Reporting, Fenced Frames) are reshaping how we handle identity and analytics without relying on cross-site cookies. Combine detection with graceful fallback.Whether it’s offering a redirect flow, using requestStorageAccess(), or falling back to token-based messaging — every small UX improvement adds up. Inform your users.Users shouldn't be left wondering why something worked in one browser but silently broke in another. Don’t let them feel like they did something wrong — just help them move forward. A clear, friendly message can prevent this confusion. The good news? You don’t need a perfect solution today, just a resilient one. By detecting issues early and handling them thoughtfully, you protect both your users and your future architecture, one cookie-less browser at a time. And as seen with Chrome’s pivot away from automatic deprecation, the transition is not always linear. Industry feedback, regulatory oversight, and evolving technical realities continue to shape the time and the solutions. And don’t forget: having something is better than nothing.
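As a closing sketch, here is one way the detection helper and the fallback strategies above might be wired together. It reuses the checkThirdPartyCookies() function from Step 2; startEmbeddedFlow() and redirectToIdentityProvider() are hypothetical placeholders for your real integration:

async function initIntegration() {
  const { thirdPartyCookiesEnabled, timedOut } = await checkThirdPartyCookies();

  if (thirdPartyCookiesEnabled) {
    // The embedded context can use its cookies: keep the seamless iframe flow.
    startEmbeddedFlow(); // hypothetical app-specific function
    return;
  }

  if (timedOut) {
    // No message came back from the probe iframe; treat cookies as blocked.
    console.warn("Cookie probe timed out; assuming third-party cookies are blocked.");
  }

  // Fall back to a top-level redirect so authentication happens first-party.
  redirectToIdentityProvider({ returnTo: window.location.href }); // hypothetical
}

initIntegration();

None of this replaces the article's advice; it simply shows the decision point where detection feeds into a graceful fallback.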
  • NVIDIA and SAP Bring AI Agents to the Physical World

    As robots increasingly make their way to the largest enterprises’ manufacturing plants and warehouses, the need for access to critical business and operational data has never been more crucial.
    At its Sapphire conference, SAP announced it is collaborating with NEURA Robotics and NVIDIA to enable its SAP Joule agents to connect enterprise data and processes with NEURA’s advanced cognitive robots.
    The integration will enable robots to support tasks including adaptive manufacturing, autonomous replenishment, compliance monitoring and predictive maintenance. Using the Mega NVIDIA Omniverse Blueprint, SAP customers will be able to simulate and validate large robotic fleets in digital twins before deploying them in real-world facilities.
    Virtual Assistants Become Physical Helpers
    AI agents are traditionally confined to the digital world, which means they’re unable to take actionable steps for physical tasks and do real-world work in warehouses, factories and other industrial workplaces.
    SAP’s collaboration with NVIDIA and NEURA Robotics shows how enterprises will be able to use Joule to plan and simulate complex and dynamic scenarios that include physical AI and autonomous humanoid robots to address critical planning, safety and project requirements, streamline operations and embody business intelligence in the physical world.
    Revolutionizing Supply Chain Management: NVIDIA, SAP Tackle Complex Challenges
Today’s supply chains have become more fragile and complex with ever-evolving constraints around consumer, economic, political and environmental dimensions. The partnership between NVIDIA and SAP aims to enhance supply chain planning capabilities by integrating NVIDIA cuOpt technology — for real-time route optimization — with SAP Integrated Business Planning (IBP) to enable customers to plan and simulate the most complex and dynamic scenarios for more rapid and confident decision making.
    By extending SAP IBP with NVIDIA’s highly scalable, GPU-powered platform, customers can accelerate time-to-value by facilitating the management of larger and more intricate models without sacrificing runtime. This groundbreaking collaboration will empower businesses to address unique planning requirements, streamline operations and drive better business outcomes.
    Supporting Supply Chains With AI and Humanoid Robots
    SAP Chief Technology Officer Philipp Herzig offered a preview of the technology integration in a keynote demonstration at SAP Sapphire in Orlando, showing the transformative potential of physical AI for businesses: how Joule — SAP’s generative AI co-pilot — works with real-world data in combination with humanoid robots.
    “Robots and autonomous agents are at the heart of the next wave of industrial AI, seamlessly connecting people, data, and processes to unlock new levels of efficiency and innovation,” said Herzig. “SAP, NVIDIA and NEURA Robotics share a vision for uniting AI and robotics to improve safety, efficiency and productivity across industries.”
    With the integration of the Mega NVIDIA Omniverse Blueprint, enterprises can harness physical AI and digital twins to enable new AI agents that can deliver real-time, contextual insights and automate routine tasks.
    The collaboration between the three companies allows NEURA robots to see, learn and adapt in real time — whether restocking shelves, inspecting infrastructure or resolving supply chain disruptions.
    SAP Joule AI Agents are deployed directly onto NEURA’s cognitive robots, enabling real-time decision-making and the execution of physical tasks such as inventory audits or equipment repairs. The robots learn continuously in physically accurate digital twins, powered by NVIDIA Omniverse libraries and technologies, and using business-critical data from SAP applications, while the Mega NVIDIA Omniverse Blueprint helps evaluate and validate deployment options and large-scale task interactions.
    Showcasing Joule’s Data-Driven Insights on NEURA Robots
    The technology demonstration showcases the abilities of NEURA robots to handle tasks guided by Joule’s data-driven insights.
    Businesses bridging simulation-to-reality deployments can use the integrated technology stack to provide zero-shot navigation and inference to address defect rates and production line inefficiencies without interrupting operations.
    Herzig showed how Joule, powered by AI agents, can direct a NEURA 4NE1 robot to inspect a machine in the showroom, scanning equipment and triggering an SAP Asset Performance Management alert before a failure disrupts operations.
    It’s but a glimpse into the future of what’s possible with AI agents and robotics.
    #nvidia #sap #bring #agents #physical
    NVIDIA and SAP Bring AI Agents to the Physical World
    blogs.nvidia.com
    As robots increasingly make their way to the largest enterprises’ manufacturing plants and warehouses, the need for access to critical business and operational data has never been more crucial. At its Sapphire conference, SAP announced it is collaborating with NEURA Robotics and NVIDIA to enable its SAP Joule agents to connect enterprise data and processes with NEURA’s advanced cognitive robots. The integration will enable robots to support tasks including adaptive manufacturing, autonomous replenishment, compliance monitoring and predictive maintenance. Using the Mega NVIDIA Omniverse Blueprint, SAP customers will be able to simulate and validate large robotic fleets in digital twins before deploying them in real-world facilities. Virtual Assistants Become Physical Helpers AI agents are traditionally confined to the digital world, which means they’re unable to take actionable steps for physical tasks and do real-world work in warehouses, factories and other industrial workplaces. SAP’s collaboration with NVIDIA and NEURA Robotics shows how enterprises will be able to use Joule to plan and simulate complex and dynamic scenarios that include physical AI and autonomous humanoid robots to address critical planning, safety and project requirements, streamline operations and embody business intelligence in the physical world. Revolutionizing Supply Chain Management: NVIDIA, SAP Tackle Complex Challenges Today’s supply chains have become more fragile and complex with ever-evolving constraints around consumer, economic, political and environmental dimensions. The partnership between NVIDIA and SAP aims to enhance supply chain planning capabilities by integrating NVIDIA cuOpt technology — for real-time route optimization — with SAP Integrated Business Planning (IBP) to enable customers to plan and simulate the most complex and dynamic scenarios for more rapid and confident decision making. By extending SAP IBP with NVIDIA’s highly scalable, GPU-powered platform, customers can accelerate time-to-value by facilitating the management of larger and more intricate models without sacrificing runtime. This groundbreaking collaboration will empower businesses to address unique planning requirements, streamline operations and drive better business outcomes. Supporting Supply Chains With AI and Humanoid Robots SAP Chief Technology Officer Philipp Herzig offered a preview of the technology integration in a keynote demonstration at SAP Sapphire in Orlando, showing the transformative potential for physical AI in businesses in how Joule — SAP’s generative AI co-pilot — works with data from the real world in combination with humanoid robots. “Robots and autonomous agents are at the heart of the next wave of industrial AI, seamlessly connecting people, data, and processes to unlock new levels of efficiency and innovation,” said Herzig. “SAP, NVIDIA and NEURA Robotics share a vision for uniting AI and robotics to improve safety, efficiency and productivity across industries.” With the integration of the Mega NVIDIA Omniverse Blueprint, enterprises can harness physical AI and digital twins to enable new AI agents that can deliver real-time, contextual insights and automate routine tasks. The collaboration between the three companies allows NEURA robots to see, learn and adapt in real time — whether restocking shelves, inspecting infrastructure or resolving supply chain disruptions. 
SAP Joule AI Agents are deployed directly onto NEURA’s cognitive robots, enabling real-time decision-making and the execution of physical tasks such as inventory audits or equipment repairs. The robots learn continuously in physically accurate digital twins, powered by NVIDIA Omniverse libraries and technologies,  and using business-critical data from SAP applications, while the Mega NVIDIA Omniverse Blueprint helps evaluate and validate deployment options and large-scale task interactions. Showcasing Joule’s Data-Driven Insights on NEURA Robots The technology demonstration showcases the abilities of NEURA robots to handle tasks guided by Joule’s data-driven insights. Businesses bridging simulation-to-reality deployments can use the integrated technology stack to provide zero-shot navigation and inference to address defect rates and production line inefficiencies without interrupting operations. Herzig showed how Joule can direct a NEURA 4NE1 robot to inspect a machine in the showroom, powered by AI agents, scanning equipment and triggering an SAP Asset Performance management alert before a failure disrupts operations. It’s but a glimpse into the future of what’s possible with AI agents and robotics.