• How a planetarium show discovered a spiral at the edge of our solar system

    If you’ve ever flown through outer space, at least while watching a documentary or a science fiction film, you’ve seen how artists turn astronomical findings into stunning visuals. But in the process of visualizing data for their latest planetarium show, a production team at New York’s American Museum of Natural History made a surprising discovery of their own: a trillion-and-a-half-mile-long spiral of material drifting along the edge of our solar system.

    “So this is a really fun thing that happened,” says Jackie Faherty, the museum’s senior scientist.

    Last winter, Faherty and her colleagues were beneath the dome of the museum’s Hayden Planetarium, fine-tuning a scene that featured the Oort cloud, the big, thick bubble surrounding our Sun and planets that’s filled with ice and rock and other remnants from the solar system’s infancy. The Oort cloud begins far beyond Neptune and extends about one and a half light years from the Sun. It has never been directly observed; its existence is inferred from the behavior of long-period comets entering the inner solar system. The cloud is so expansive that the Voyager spacecraft, our most distant probes, would need another 250 years just to reach its inner boundary; to reach the other side, they would need about 30,000 years.

    The 30-minute show, Encounters in the Milky Way, narrated by Pedro Pascal, guides audiences on a trip through the galaxy across billions of years. For a section about our nascent solar system, the writing team decided “there’s going to be a fly-by” of the Oort cloud, Faherty says. “But what does our Oort cloud look like?” 

    To find out, the museum consulted astronomers and turned to David Nesvorný, a scientist at the Southwest Research Institute in San Antonio. He provided his model of the millions of particles believed to make up the Oort cloud, based on extensive observational data.

    “Everybody said, go talk to Nesvorný. He’s got the best model,” says Faherty. And “everybody told us, ‘There’s structure in the model,’ so we were kind of set up to look for stuff,” she says. 

    The museum’s technical team began using Nesvorný’s model to simulate how the cloud evolved over time. Later, as the team projected versions of the fly-by scene into the dome, with the camera looking back at the Oort cloud, they saw a familiar shape, one that appears in galaxies, Saturn’s rings, and disks around young stars.

    “We’re flying away from the Oort cloud and out pops this spiral, a spiral shape to the outside of our solar system,” Faherty marveled. “A huge structure, millions and millions of particles.”

    She emailed Nesvorný to ask for “more particles,” with a render of the scene attached. “We noticed the spiral of course,” she wrote. “And then he writes me back: ‘what are you talking about, a spiral?’” 

    While fine-tuning a simulation of the Oort cloud, a vast expanse of icy material left over from the birth of our Sun, the ‘Encounters in the Milky Way’ production team noticed a very clear shape: a structure made of billions of comets and shaped like a spiral-armed galaxy, seen here in a scene from the final Space Show. [Image: © AMNH]

    More simulations ensued, this time on Pleiades, a powerful NASA supercomputer. In high-performance computer simulations spanning 4.6 billion years, starting from the Solar System’s earliest days, the researchers visualized how the initial icy and rocky ingredients of the Oort cloud began circling the Sun, in the elliptical orbits that are thought to give the cloud its rough disc shape. The simulations also incorporated the physics of the Sun’s gravitational pull, the influences from our Milky Way galaxy, and the movements of the comets themselves.

    In each simulation, the spiral persisted.

    “No one has ever seen the Oort structure like that before,” says Faherty. Nesvorný “has a great quote about this: ‘The math was all there. We just needed the visuals.’” 

    An illustration of the Kuiper Belt and Oort Cloud in relation to our solar system. [Image: NASA]

    As the Oort cloud grew with the early solar system, Nesvorný and his colleagues hypothesize that the galactic tide, or the gravitational force from the Milky Way, disrupted the orbits of some comets. Although the Sun pulls these objects inward, the galaxy’s gravity appears to have twisted part of the Oort cloud outward, forming a spiral tilted roughly 30 degrees from the plane of the solar system.

    “As the galactic tide acts to decouple bodies from the scattered disk it creates a spiral structure in physical space that is roughly 15,000 astronomical units in length,” or around 1.4 trillion miles from one end to the other, the researchers write in a paper that was published in March in the Astrophysical Journal. “The spiral is long-lived and persists in the inner Oort Cloud to the present time.”

    “The physics makes sense,” says Faherty. “Scientists, we’re amazing at what we do, but it doesn’t mean we can see everything right away.”

    It helped that the team behind the space show was primed to look for something, says Carter Emmart, the museum’s director of astrovisualization and director of Encounters. Astronomers had described Nesvorný’s model as having “a structure,” which intrigued the team’s artists. “We were also looking for structure so that it wouldn’t just be sort of like a big blob,” he says. “Other models were also revealing this—but they just hadn’t been visualized.”

    The museum’s attempts to simulate nature date back to its first habitat dioramas in the early 1900s, which brought visitors to places that hadn’t yet been captured by color photos, TV, or the web. The planetarium, a night sky simulator for generations of would-be scientists and astronauts, got its start after financier Charles Hayden bought the museum its first Zeiss projector. The planetarium now boasts one of the world’s few Zeiss Mark IX systems.

    Still, these days the star projector is rarely used, Emmart says, now that fulldome laser projectors can turn the old static starfield into 3D video running at 60 frames per second. The Hayden boasts six custom-built Christie projectors, part of what the museum’s former president called “the most advanced planetarium ever attempted.”

    In about 1.3 million years, the star system Gliese 710 is set to pass directly through our Oort Cloud, an event visualized in a dramatic scene in ‘Encounters in the Milky Way.’ During its flyby, our systems will swap icy comets, flinging some out on new paths. [Image: © AMNH]

    Emmart recalls how in 1998, when he and other museum leaders were imagining the future of space shows at the Hayden—now with the help of digital projectors and computer graphics—there were questions over how much space they could try to show.

    “We’re talking about these astronomical data sets we could plot to make the galaxy and the stars,” he says. “Of course, we knew that we would have this star projector, but we really wanted to emphasize astrophysics with this dome video system. I was drawing pictures of this just to get our heads around it and noting the tip of the solar system to the Milky Way is about 60 degrees. And I said, ‘what are we gonna do when we get outside the Milky Way?’

    “Then [the planetarium’s director] Neil deGrasse Tyson goes, ‘whoa, whoa, whoa, Carter, we have enough to do. And just plotting the Milky Way, that’s hard enough.’ And I said, ‘well, when we exit the Milky Way and we don’t see any other galaxies, that’s sort of like astronomy in 1920—we thought maybe the entire universe is just a Milky Way.’”

    “And that kind of led to a chaotic discussion about, well, what other data sets are there for this?” Emmart adds.

    The museum worked with astronomer Brent Tully, who had mapped 3,500 galaxies beyond the Milky Way, in collaboration with the National Center for Supercomputing Applications. “That was it,” he says, “and that seemed fantastical.”

    By the time the first planetarium show opened at the museum’s new Rose Center for Earth and Space in 2000, Tully had broadened his survey “to an amazing” 30,000 galaxies. The Sloan Digital Sky Survey followed—it’s now at data release 18—with six million galaxies.

    To build the map of the universe that underlies Encounters, the team also relied on data from the European Space Agency’s space observatory, Gaia. Launched in 2013 and powered down in March of this year, Gaia brought an unprecedented precision to our astronomical map, plotting the positions and distances of 1.7 billion stars. To visualize and render the simulated data, Jon Parker, the museum’s lead technical director, relied on Houdini, a 3D animation tool by Toronto-based SideFX.

    The goal is immersion, “whether it’s in front of the buffalo downstairs, and seeing what those herds were like before we decimated them, to coming in this room and being teleported to space, with an accurate foundation in the science,” Emmart says. “But the art is important, because the art is the way to the soul.” 

    The museum, he adds, is “a testament to wonder. And I think wonder is a gateway to inspiration, and inspiration is a gateway to motivation.”

    3D visuals aren’t just powerful tools for communicating science; they are increasingly crucial for science itself. Software like OpenSpace, an open-source simulation tool developed by the museum, along with the growing availability of high-performance computing, is making it easier to build highly detailed visuals of ever larger and more complex collections of data.

    “Anytime we look, literally, from a different angle at catalogs of astronomical positions, simulations, or exploring the phase space of a complex data set, there is great potential to discover something new,” says Brian R. Kent, an astronomer and director of science communications at the National Radio Astronomy Observatory. “There is also a wealth of astronomical data in archives that can be reanalyzed in new ways, leading to new discoveries.”

    As the instruments grow in size and sophistication, so does the data, and the challenge of understanding it. Like all scientists, astronomers are facing a deluge of data, ranging from gamma rays and X-rays to ultraviolet, optical, infrared, and radio bands.

    Our Oort cloud (center), a shell of icy bodies that surrounds the solar system and extends one-and-a-half light years in every direction, is shown in this scene from ‘Encounters in the Milky Way’ along with the Oort clouds of neighboring stars. The more massive the star, the larger its Oort cloud. [Image: © AMNH]

    “New facilities like the Next Generation Very Large Array here at NRAO or the Vera Rubin Observatory and LSST survey project will generate large volumes of data, so astronomers have to get creative with how to analyze it,” says Kent.

    More data—and new instruments—will also be needed to prove the spiral itself is actually there: there’s still no known way to even observe the Oort cloud. 

    Instead, the paper notes, the structure will have to be measured from “detection of a large number of objects” in the radius of the inner Oort cloud or from “thermal emission from small particles in the Oort spiral.” 

    The Vera C. Rubin Observatory, a powerful, U.S.-funded telescope that recently began operation in Chile, could possibly observe individual icy bodies within the cloud. But researchers expect the telescope will likely discover only dozens of these objects, maybe hundreds, not enough to meaningfully visualize any shapes in the Oort cloud. 

    For us, here and now, the 1.4-trillion-mile-long spiral will remain confined to the inside of a dark dome across the street from Central Park.
  • Reliably Detecting Third-Party Cookie Blocking In 2025

    The web is beginning to part ways with third-party cookies, a technology it once heavily relied on. Introduced in 1994 by Netscape to support features like virtual shopping carts, cookies have long been a staple of web functionality. However, concerns over privacy and security have led to a concerted effort to eliminate them. The World Wide Web Consortium’s Technical Architecture Group has been vocal in advocating for the complete removal of third-party cookies from the web platform.
    Major browsers are responding by phasing them out, though the transition is gradual. While this shift enhances user privacy, it also disrupts legitimate functionalities that rely on third-party cookies, such as single sign-on, fraud prevention, and embedded services. And because there is still no universal ban in place and many essential web features continue to depend on these cookies, developers must detect when third-party cookies are blocked so that applications can respond gracefully.
    Don’t Let Silent Failures Win: Why Cookie Detection Still Matters
    Yes, the ideal solution is to move away from third-party cookies altogether and redesign our integrations using privacy-first, purpose-built alternatives as soon as possible. But in reality, that migration can take months or even years, especially for legacy systems or third-party vendors. Meanwhile, users are already browsing with third-party cookies disabled and often have no idea that anything is missing.
    Imagine a travel booking platform that embeds an iframe from a third-party partner to display live train or flight schedules. This embedded service uses a cookie on its own domain to authenticate the user and personalize content, like showing saved trips or loyalty rewards. But when the browser blocks third-party cookies, the iframe cannot access that data. Instead of a seamless experience, the user sees an error, a blank screen, or a login prompt that doesn’t work.
    And while your team is still planning a long-term integration overhaul, this is already happening to real users. They don’t see a cookie policy; they just see a broken booking flow.
    Detecting third-party cookie blocking isn’t just good technical hygiene but a frontline defense for user experience.
    Why It’s Hard To Tell If Third-Party Cookies Are Blocked
    Detecting whether third-party cookies are supported isn’t as simple as calling navigator.cookieEnabled. Even a well-intentioned check like this one may look safe, but it still won’t tell you what you actually need to know:

    // DOES NOT detect third-party cookie blocking
    function areCookiesEnabled() {
      if (!navigator.cookieEnabled) {
        return false;
      }

      try {
        document.cookie = "test_cookie=1; SameSite=None; Secure";
        const hasCookie = document.cookie.includes("test_cookie=1");
        document.cookie = "test_cookie=; Max-Age=0; SameSite=None; Secure";

        return hasCookie;
      } catch (e) {
        return false;
      }
    }

    This function only confirms that cookies work in the current, first-party context. It says nothing about third-party scenarios, like an iframe on another domain. Worse, it’s misleading: in some browsers, navigator.cookieEnabled may still return true inside a third-party iframe even when cookies are blocked. Others might behave differently, leading to inconsistent and unreliable detection.
    These cross-browser inconsistencies — combined with the limitations of document.cookie — make it clear that there is no shortcut for detection. To truly detect third-party cookie blocking, we need to understand how different browsers actually behave in embedded third-party contexts.
    How Modern Browsers Handle Third-Party Cookies
    The behavior of modern browsers directly affects which detection methods will work and which ones silently fail.
    Safari: Full Third-Party Cookie Blocking
    Since version 13.1, Safari blocks all third-party cookies by default, with no exceptions, even if the user previously interacted with the embedded domain. This policy is part of Intelligent Tracking Prevention (ITP).
    For embedded content that requires cookie access, Safari exposes the Storage Access API, which requires a user gesture to grant storage permission. As a result, a test for third-party cookie support will nearly always fail in Safari unless the iframe explicitly requests access via this API.
    Firefox: Cookie Partitioning By Design
    Firefox’s Total Cookie Protection isolates cookies on a per-site basis. Third-party cookies can still be set and read, but they are partitioned by the top-level site, meaning a cookie set by the same third-party on siteA.com and siteB.com is stored separately and cannot be shared.
    As of Firefox 102, this behavior is enabled by default in the Standard mode of Enhanced Tracking Protection. Unlike the Strict mode — which blocks third-party cookies entirely, similar to Safari — the Standard mode does not block them outright. Instead, it neutralizes their tracking capability by isolating them per site.
    As a result, even if a test shows that a third-party cookie was successfully set, it may be useless for cross-site logins or shared sessions due to this partitioning. Detection logic needs to account for that.
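    To make the partitioning concrete, here is a minimal sketch; the widget.example.com domain and cookie name are illustrative, not part of any real integration:

    // Runs inside a widget.example.com iframe. Under Total Cookie Protection,
    // the cookie lands in a partition keyed by the top-level site: the value
    // set while this iframe is embedded on siteA.com is not visible when the
    // same iframe is embedded on siteB.com.
    document.cookie = "widget_uid=42; SameSite=None; Secure; Path=/;";
    console.log(document.cookie); // readable here, but only within this partition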
    Chrome: From Deprecation Plans To Privacy Sandbox

    Chromium-based browsers still allow third-party cookies by default — but the story is changing. Starting with Chrome 80, third-party cookies must be explicitly marked with SameSite=None; Secure, or they will be rejected.
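    In practice, that means a cross-site cookie must carry both attributes whether it is set by a server or from a script; a minimal sketch (cookie name and value are illustrative):

    // Set via an HTTP response header:
    //   Set-Cookie: session_id=abc123; SameSite=None; Secure
    // Or the equivalent from JavaScript:
    document.cookie = "session_id=abc123; SameSite=None; Secure";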
    In January 2020, Google announced their intention to phase out third-party cookies by 2022. However, the timeline was updated multiple times, first in June 2021 when the company pushed the rollout to begin in mid-2023 and conclude by the end of that year. Additional postponements followed in July 2022, December 2023, and April 2024.
    In July 2024, Google clarified that there is no plan to unilaterally deprecate third-party cookies or force users into a new model without consent. Instead, Chrome is shifting to a user-choice interface that will allow individuals to decide whether to block or allow third-party cookies globally.
    This change was influenced in part by substantial pushback from the advertising industry, as well as ongoing regulatory oversight, including scrutiny by the UK Competition and Markets Authority (CMA) into Google’s Privacy Sandbox initiative. The CMA confirmed in a 2025 update that there is no intention to force a deprecation or trigger automatic prompts for cookie blocking.
    For now, third-party cookies remain enabled by default in Chrome. The new user-facing controls and the broader Privacy Sandbox ecosystem are still in various stages of experimentation and limited rollout.
    Edge: Tracker-Focused Blocking With User Configurability
    Edge shares Chrome’s handling of third-party cookies, including the SameSite=None; Secure requirement. Additionally, Edge introduces Tracking Prevention modes: Basic, Balanced, and Strict. In Balanced mode, it blocks known third-party trackers using Microsoft’s maintained list but allows many third-party cookies that are not classified as trackers. Strict mode blocks more resource loads than Balanced, which may result in some websites not behaving as expected.
    Other Browsers: What About Them?
    Privacy-focused browsers, like Brave, block third-party cookies by default as part of their strong anti-tracking stance.
    Internet Explorer 11 allowed third-party cookies depending on user privacy settings and the presence of Platform for Privacy Preferences (P3P) headers. However, IE usage is now negligible. Notably, the default “Medium” privacy setting in IE could block third-party cookies unless a valid P3P policy was present.
    Older versions of Safari had partial third-party cookie restrictions, but, as mentioned before, this was replaced with full blocking via ITP.
    As of 2025, all major browsers either block or isolate third-party cookies by default, with the exception of Chrome, which still allows them in standard browsing mode pending the rollout of its new user-choice model.
    To account for these variations, your detection strategy must be grounded in real-world testing — specifically by reproducing a genuine third-party context such as loading your script within an iframe on a cross-origin domain — rather than relying on browser names or versions.
    Overview Of Detection Techniques
    Over the years, many techniques have been used to detect third-party cookie blocking. Most are unreliable or obsolete. Here’s a quick walkthrough of what doesn’t work and what does.
    Basic JavaScript API Checks

    As mentioned earlier, checking navigator.cookieEnabled or setting document.cookie on the main page doesn’t reflect cross-site cookie status:

    In third-party iframes, navigator.cookieEnabled often returns true even when cookies are blocked.
    Setting document.cookie in the parent doesn’t test the third-party context.

    These checks are first-party only. Avoid using them for detection.
    Storage Hacks Via localStorage

    Previously, some developers inferred cookie support by checking if window.localStorage worked inside a third-party iframe — a trick that was especially useful against older Safari versions that blocked all third-party storage.
    However, modern browsers often allow localStorage even when cookies are blocked. This leads to false positives and is no longer reliable.
    Server-Assisted Cookie Probe

    One classic method involves setting a cookie from a third-party domain via HTTP and then checking if it comes back:

    Load a script/image from a third-party server that sets a cookie.
    Immediately load another resource, and the server checks whether the cookie was sent.

    This works, but it:

    Requires custom server-side logic,
    Depends on HTTP caching, response headers, and cookie attributes, and
    Adds development and infrastructure complexity.

    While this is technically valid, it is not suitable for a front-end-only approach, which is our focus here.
    Storage Access API

    The document.hasStorageAccess() method allows embedded third-party content to check if it has access to unpartitioned cookies:

    Chrome: Supports hasStorageAccess() and requestStorageAccess() starting from version 119. Additionally, hasUnpartitionedCookieAccess() is available as an alias for hasStorageAccess() from version 125 onwards.
    Firefox: Supports both the hasStorageAccess() and requestStorageAccess() methods.
    Safari: Supports the Storage Access API. However, access must always be triggered by a user interaction; even calling requestStorageAccess() without a direct user gesture (such as a click) is ignored.

    Chrome and Firefox also support the API, and in those browsers, it may work automatically or based on browser heuristics or site engagement.
    This API is particularly useful for detecting scenarios where cookies are present but partitioned, as it helps determine if the iframe has unrestricted cookie access. But for now, it’s still best used as a supplemental signal, rather than a standalone check.
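    As a quick illustration, an embedded page can probe its current status like this (a minimal sketch using the standard methods; it must run inside the third-party iframe):

    if (document.hasStorageAccess instanceof Function) {
      document.hasStorageAccess().then((hasAccess) => {
        // true  -> unpartitioned cookie access is available
        // false -> cookies are blocked or partitioned for this context
        console.log("Unpartitioned cookie access:", hasAccess);
      });
    } else {
      console.log("Storage Access API not supported in this browser.");
    }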
    iFrame + postMessage

    Despite the existence of the Storage Access API, at the time of writing, this remains the most reliable and browser-compatible method:

    Embed a hidden iframe from a third-party domain.
    Inside the iframe, attempt to set a test cookie.
    Use window.postMessage to report success or failure to the parent.

    This approach works across all major browsers, requires no server, and simulates a real-world third-party scenario.
    We’ll implement this step-by-step next.
    Bonus: Sec-Fetch-Storage-Access
    Chrome is introducing Sec-Fetch-Storage-Access, an HTTP request header sent with cross-site requests to indicate whether the iframe has access to unpartitioned cookies. This header is only visible to servers and cannot be accessed via JavaScript. It’s useful for back-end analytics but not applicable for client-side cookie detection.
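    For illustration, a cross-site iframe request might arrive at the server looking like the sketch below (hostname illustrative; in Chrome’s current draft the header takes the values none, inactive, or active):

    GET /embed HTTP/1.1
    Host: widget.example.com
    Sec-Fetch-Storage-Access: inactive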
    As of May 2025, this feature is only implemented in Chrome and is not supported by other browsers. However, it’s still good to know that it’s part of the evolving ecosystem.
    Step-by-Step: Detecting Third-Party Cookies Via iFrame
    So, what did I mean when I said that the last method we looked at “requires no server”? While this method doesn’t require any back-end logic, it does require access to a separate domain — or at least a cross-site subdomain — to simulate a third-party environment. This means the following:

    You must serve the test page from a different domain or public subdomain, e.g., example.com and cookietest.example.com,
    The domain needs HTTPS, and
    You’ll need to host a simple static file, even if no server code is involved.

    Once that’s set up, the rest of the logic is fully client-side.
    Step 1: Create A Cookie Test Page

    Minimal version:

    <!DOCTYPE html>
    <html>
    <body>
    <script>
      // The message strings below are illustrative; any agreed-upon values
      // work, as long as the parent page listens for the same ones.
      document.cookie = "thirdparty_test=1; SameSite=None; Secure; Path=/;";
      const cookieFound = document.cookie.includes("thirdparty_test=1");

      const sendResult = (result) => window.parent?.postMessage(result, "*");

      if (cookieFound && document.hasStorageAccess instanceof Function) {
        // Confirm the cookie is truly unpartitioned, not merely set.
        document.hasStorageAccess().then((hasAccess) => {
          sendResult(hasAccess ? "TP_COOKIE_SUPPORTED" : "TP_COOKIE_BLOCKED");
        }).catch(() => sendResult("TP_COOKIE_BLOCKED"));
      } else {
        sendResult(cookieFound ? "TP_COOKIE_SUPPORTED" : "TP_COOKIE_BLOCKED");
      }
    </script>
    </body>
    </html>

    Make sure the page is served over HTTPS, and the cookie uses SameSite=None; Secure. Without these attributes, modern browsers will silently reject it.
    Step 2: Embed The iFrame And Listen For The Result
    On your main page:

    function checkThirdPartyCookies() {
      return new Promise((resolve) => {
        const iframe = document.createElement('iframe');
        iframe.style.display = 'none';
        iframe.src = 'https://cookietest.example.com/cookie-test.html'; // your subdomain
        document.body.appendChild(iframe);

        let resolved = false;
        const cleanup = (result, timedOut = false) => {
          if (resolved) return;
          resolved = true;
          window.removeEventListener('message', onMessage);
          iframe.remove();
          resolve({ supported: result === 'TP_COOKIE_SUPPORTED', timedOut });
        };

        const onMessage = (event) => {
          if (['TP_COOKIE_SUPPORTED', 'TP_COOKIE_BLOCKED'].includes(event.data)) {
            cleanup(event.data);
          }
        };

        window.addEventListener('message', onMessage);
        setTimeout(() => cleanup(null, true), 1000); // give the iframe a second to reply
      });
    }

    Example usage:

    checkThirdPartyCookies().then(({ supported, timedOut }) => {
      if (!supported) {
        someCookiesBlockedCallback(); // Third-party cookies are blocked.
        if (timedOut) {
          // No response received (iframe never loaded or couldn't reply).
          // Optional fallback UX goes here.
          someCookiesBlockedTimeoutCallback();
        }
      }
    });

    Step 3: Enhance Detection With The Storage Access API
    In Safari, even when third-party cookies are blocked, users can manually grant access through the Storage Access API — but only in response to a user gesture.
    Here’s how you could implement that in your iframe test page:

    <button id="enable-cookies">This embedded content requires cookie access. Click below to continue.</button>

    <script>
    document.getElementById?.addEventListener=> {
    if{
    try {
    const granted = await document.requestStorageAccess;
    if{
    window.parent.postMessage;
    } else {
    window.parent.postMessage;
    }
    } catch{
    window.parent.postMessage;
    }
    }
    });
    </script>

    Then, on the parent page, you can listen for this message and retry detection if needed:

    // Inside the same onMessage listener from before:
    if (event.data === 'STORAGE_ACCESS_GRANTED') {
      // Optionally: retry the cookie test, or reload iframe logic
      checkThirdPartyCookies().then(/* handle the fresh result */);
    }

    A Purely Client-Side Fallback

    In some situations, you might not have access to a second domain or can’t host third-party content under your control. That makes the iframe method unfeasible.
    When that’s the case, your best option is to combine multiple signals — basic cookie checks, hasStorageAccess, localStorage fallbacks, and maybe even passive indicators like load failures or timeouts — to infer whether third-party cookies are likely blocked.
    The important caveat: This will never be 100% accurate. But, in constrained environments, “better something than nothing” may still improve the UX.
    Here’s a basic example:

    async function inferCookieSupportFallback() {
      let hasCookieAPI = navigator.cookieEnabled;
      let canSetCookie = false;
      let hasStorageAccess = false;

      try {
        document.cookie = "test_fallback=1; SameSite=None; Secure; Path=/;";
        canSetCookie = document.cookie.includes("test_fallback=1");

        document.cookie = "test_fallback=; Max-Age=0; Path=/;";
      } catch (e) {
        canSetCookie = false;
      }

      if (document.hasStorageAccess instanceof Function) {
        try {
          hasStorageAccess = await document.hasStorageAccess();
        } catch (e) {}
      }

      return {
        inferredThirdPartyCookies: hasCookieAPI && canSetCookie && hasStorageAccess,
        raw: { hasCookieAPI, canSetCookie, hasStorageAccess }
      };
    }

    Example usage:

    inferCookieSupportFallback().then(({ inferredThirdPartyCookies, raw }) => {
      if (inferredThirdPartyCookies) {
        console.log("Third-party cookies look usable:", raw);
      } else {
        console.warn("Third-party cookies are likely blocked or partitioned:", raw);
        // You could inform the user or adjust behavior accordingly
      }
    });

    Use this fallback when:

    You’re building a JavaScript-only widget embedded on unknown sites,
    You don’t control a second domain, or
    You just need some visibility into user-side behavior.

    Don’t rely on it for security-critical logic! But it may help tailor the user experience, surface warnings, or decide whether to attempt a fallback SSO flow. Again, it’s better to have something rather than nothing.
    Fallback Strategies When Third-Party Cookies Are Blocked
    Detecting blocked cookies is only half the battle. Once you know they’re unavailable, what can you do? Here are some practical options that might be useful for you:
    Redirect-Based Flows
    For auth-related flows, switch from embedded iframes to top-level redirects. Let the user authenticate directly on the identity provider's site, then redirect back. It works in all browsers, but the UX might be less seamless.
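    A minimal sketch of the hand-off, assuming a generic identity provider (the URL and redirect_uri parameter are illustrative, not any particular vendor’s API):

    // Instead of authenticating inside the embedded iframe, send the whole
    // tab to the identity provider, which redirects back when finished.
    // (If this runs in a sandboxed iframe, top navigation must be allowed.)
    const returnUrl = encodeURIComponent(window.location.href);
    window.top.location.href = `https://id.example.com/login?redirect_uri=${returnUrl}`;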
    Request Storage Access
    Prompt the user using requestStorageAccess() after a clear UI gesture. Use this to re-enable cookies without leaving the page.
    Token-Based Communication
    Pass session info directly from parent to iframe via:

    postMessage, and
    Query params.

    This avoids reliance on cookies entirely but requires coordination between both sides:

    // Parent
    const iframe = document.getElementById('embed'); // the embedded iframe's id (illustrative)

    iframe.onload = () => {
      const token = getAccessTokenSomehow(); // JWT or anything else
      iframe.contentWindow.postMessage(
        { type: 'AUTH_TOKEN', token },
        'https://embed.example.com' // the iframe's origin (illustrative)
      );
    };

    // iframe
    window.addEventListener('message', (event) => {
      if (event.origin !== 'https://parent.example.com') return; // verify the sender (illustrative origin)

      const { type, token } = event.data;

      if (type === 'AUTH_TOKEN') {
        validateAndUseToken(token); // process JWT, init session, etc
      }
    });

    Partitioned Cookies

    Chrome and other Chromium-based browsers now support cookies with the Partitioned attribute (CHIPS), allowing per-top-site cookie isolation. This is useful for widgets like chat or embedded forms where cross-site identity isn’t needed.
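    Opting in is a one-attribute change; a minimal sketch (cookie name and value are illustrative):

    // Sent by the embedded widget's server:
    //   Set-Cookie: widget_session=abc123; SameSite=None; Secure; Path=/; Partitioned;
    // Or set from JavaScript inside the iframe:
    document.cookie = "widget_session=abc123; SameSite=None; Secure; Path=/; Partitioned;";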
    Note: Firefox and Safari don’t support the Partitioned cookie attribute. Firefox enforces cookie partitioning by default using a different mechanism, while Safari blocks third-party cookies entirely.

    But be careful: partitioned cookies are treated as “blocked” by basic detection, so refine your logic if needed.
    Final Thought: Transparency, Transition, And The Path Forward
    Third-party cookies are disappearing, albeit gradually and unevenly. Until the transition is complete, your job as a developer is to bridge the gap between technical limitations and real-world user experience. That means:

    Keep an eye on the standards. APIs like FedCM and Privacy Sandbox features are reshaping how we handle identity and analytics without relying on cross-site cookies.
    Combine detection with graceful fallback. Whether it’s offering a redirect flow, using requestStorageAccess(), or falling back to token-based messaging — every small UX improvement adds up.
    Inform your users. Users shouldn’t be left wondering why something worked in one browser but silently broke in another. A clear, friendly message can prevent this confusion. Don’t let them feel like they did something wrong — just help them move forward.

    The good news? You don’t need a perfect solution today, just a resilient one. By detecting issues early and handling them thoughtfully, you protect both your users and your future architecture, one cookie-less browser at a time.
    And as seen with Chrome’s pivot away from automatic deprecation, the transition is not always linear. Industry feedback, regulatory oversight, and evolving technical realities continue to shape the timeline and the solutions.
    And don’t forget: having something is better than nothing.
This API is particularly useful for detecting scenarios where cookies are present but partitioned, as it helps determine if the iframe has unrestricted cookie access. But for now, it’s still best used as a supplemental signal, rather than a standalone check. iFrame + postMessageDespite the existence of the Storage Access API, at the time of writing, this remains the most reliable and browser-compatible method: Embed a hidden iframe from a third-party domain. Inside the iframe, attempt to set a test cookie. Use window.postMessage to report success or failure to the parent. This approach works across all major browsers, requires no server, and simulates a real-world third-party scenario. We’ll implement this step-by-step next. Bonus: Sec-Fetch-Storage-Access Chromeis introducing Sec-Fetch-Storage-Access, an HTTP request header sent with cross-site requests to indicate whether the iframe has access to unpartitioned cookies. This header is only visible to servers and cannot be accessed via JavaScript. It’s useful for back-end analytics but not applicable for client-side cookie detection. As of May 2025, this feature is only implemented in Chrome and is not supported by other browsers. However, it’s still good to know that it’s part of the evolving ecosystem. Step-by-Step: Detecting Third-Party Cookies Via iFrame So, what did I mean when I said that the last method we looked at “requires no server”? While this method doesn’t require any back-end logic, it does require access to a separate domain — or at least a cross-site subdomain — to simulate a third-party environment. This means the following: You must serve the test page from a different domain or public subdomain, e.g., example.com and cookietest.example.com, The domain needs HTTPS, and You’ll need to host a simple static file, even if no server code is involved. Once that’s set up, the rest of the logic is fully client-side. Step 1: Create A Cookie Test PageMinimal version: <!DOCTYPE html> <html> <body> <script> document.cookie = "thirdparty_test=1; SameSite=None; Secure; Path=/;"; const cookieFound = document.cookie.includes; const sendResult ==> window.parent?.postMessage; if{ document.hasStorageAccess.then=> { sendResult; }).catch=> sendResult); } else { sendResult; } </script> </body> </html> Make sure the page is served over HTTPS, and the cookie uses SameSite=None; Secure. Without these attributes, modern browsers will silently reject it. Step 2: Embed The iFrame And Listen For The Result On your main page: function checkThirdPartyCookies{ return new Promise=> { const iframe = document.createElement; iframe.style.display = 'none'; iframe.src = ";; // your subdomain document.body.appendChild; let resolved = false; const cleanup ==> { ifreturn; resolved = true; window.removeEventListener; iframe.remove; resolve; }; const onMessage ==> { if) { cleanup; } }; window.addEventListener; setTimeout=> cleanup, 1000); }); } Example usage: checkThirdPartyCookies.then=> { if{ someCookiesBlockedCallback; // Third-party cookies are blocked. if{ // No response received. // Optional fallback UX goes here. someCookiesBlockedTimeoutCallback; }; } }); Step 3: Enhance Detection With The Storage Access API In Safari, even when third-party cookies are blocked, users can manually grant access through the Storage Access API — but only in response to a user gesture. Here’s how you could implement that in your iframe test page: <button id="enable-cookies">This embedded content requires cookie access. 
Click below to continue.</button> <script> document.getElementById?.addEventListener=> { if{ try { const granted = await document.requestStorageAccess; if{ window.parent.postMessage; } else { window.parent.postMessage; } } catch{ window.parent.postMessage; } } }); </script> Then, on the parent page, you can listen for this message and retry detection if needed: // Inside the same onMessage listener from before: if{ // Optionally: retry the cookie test, or reload iframe logic checkThirdPartyCookies.then; }A Purely Client-Side FallbackIn some situations, you might not have access to a second domain or can’t host third-party content under your control. That makes the iframe method unfeasible. When that’s the case, your best option is to combine multiple signals — basic cookie checks, hasStorageAccess, localStorage fallbacks, and maybe even passive indicators like load failures or timeouts — to infer whether third-party cookies are likely blocked. The important caveat: This will never be 100% accurate. But, in constrained environments, “better something than nothing” may still improve the UX. Here’s a basic example: async function inferCookieSupportFallback{ let hasCookieAPI = navigator.cookieEnabled; let canSetCookie = false; let hasStorageAccess = false; try { document.cookie = "testfallback=1; SameSite=None; Secure; Path=/;"; canSetCookie = document.cookie.includes; document.cookie = "test_fallback=; Max-Age=0; Path=/;"; } catch{ canSetCookie = false; } if{ try { hasStorageAccess = await document.hasStorageAccess; } catch{} } return { inferredThirdPartyCookies: hasCookieAPI && canSetCookie && hasStorageAccess, raw: { hasCookieAPI, canSetCookie, hasStorageAccess } }; } Example usage: inferCookieSupportFallback.then=> { if{ console.log; } else { console.warn; // You could inform the user or adjust behavior accordingly } }); Use this fallback when: You’re building a JavaScript-only widget embedded on unknown sites, You don’t control a second domain, or You just need some visibility into user-side behavior. Don’t rely on it for security-critical logic! But it may help tailor the user experience, surface warnings, or decide whether to attempt a fallback SSO flow. Again, it’s better to have something rather than nothing. Fallback Strategies When Third-Party Cookies Are Blocked Detecting blocked cookies is only half the battle. Once you know they’re unavailable, what can you do? Here are some practical options that might be useful for you: Redirect-Based Flows For auth-related flows, switch from embedded iframes to top-level redirects. Let the user authenticate directly on the identity provider's site, then redirect back. It works in all browsers, but the UX might be less seamless. Request Storage Access Prompt the user using requestStorageAccessafter a clear UI gesture. Use this to re-enable cookies without leaving the page. Token-Based Communication Pass session info directly from parent to iframe via: postMessage; Query params. This avoids reliance on cookies entirely but requires coordination between both sides: // Parent const iframe = document.getElementById; iframe.onload ==> { const token = getAccessTokenSomehow; // JWT or anything else iframe.contentWindow.postMessage; }; // iframe window.addEventListener=> { ifreturn; const { type, token } = event.data; if{ validateAndUseToken; // process JWT, init session, etc } }); Partitioned CookiesChromeand other Chromium-based browsers now support cookies with the Partitioned attribute, allowing per-top-site cookie isolation. 
This is useful for widgets like chat or embedded forms where cross-site identity isn’t needed. Note: Firefox and Safari don’t support the Partitioned cookie attribute. Firefox enforces cookie partitioning by default using a different mechanism, while Safari blocks third-party cookies entirely. But be careful, as they are treated as “blocked” by basic detection. Refine your logic if needed. Final Thought: Transparency, Transition, And The Path Forward Third-party cookies are disappearing, albeit gradually and unevenly. Until the transition is complete, your job as a developer is to bridge the gap between technical limitations and real-world user experience. That means: Keep an eye on the standards.APIs like FedCM and Privacy Sandbox featuresare reshaping how we handle identity and analytics without relying on cross-site cookies. Combine detection with graceful fallback.Whether it’s offering a redirect flow, using requestStorageAccess, or falling back to token-based messaging — every small UX improvement adds up. Inform your users.Users shouldn't be left wondering why something worked in one browser but silently broke in another. Don’t let them feel like they did something wrong — just help them move forward. A clear, friendly message can prevent this confusion. The good news? You don’t need a perfect solution today, just a resilient one. By detecting issues early and handling them thoughtfully, you protect both your users and your future architecture, one cookie-less browser at a time. And as seen with Chrome’s pivot away from automatic deprecation, the transition is not always linear. Industry feedback, regulatory oversight, and evolving technical realities continue to shape the time and the solutions. And don’t forget: having something is better than nothing. #reliably #detectingthirdparty #cookie #blockingin
    SMASHINGMAGAZINE.COM
    Reliably Detecting Third-Party Cookie Blocking In 2025
    The web is beginning to part ways with third-party cookies, a technology it once heavily relied on. Introduced in 1994 by Netscape to support features like virtual shopping carts, cookies have long been a staple of web functionality. However, concerns over privacy and security have led to a concerted effort to eliminate them. The World Wide Web Consortium Technical Architecture Group (W3C TAG) has been vocal in advocating for the complete removal of third-party cookies from the web platform. Major browsers (Chrome, Safari, Firefox, and Edge) are responding by phasing them out, though the transition is gradual. While this shift enhances user privacy, it also disrupts legitimate functionality that relies on third-party cookies, such as single sign-on (SSO), fraud prevention, and embedded services. And because there is still no universal ban in place and many essential web features continue to depend on these cookies, developers must detect when third-party cookies are blocked so that applications can respond gracefully.

    Don’t Let Silent Failures Win: Why Cookie Detection Still Matters

    Yes, the ideal solution is to move away from third-party cookies altogether and redesign our integrations using privacy-first, purpose-built alternatives as soon as possible. But in reality, that migration can take months or even years, especially for legacy systems or third-party vendors. Meanwhile, users are already browsing with third-party cookies disabled and often have no idea that anything is missing.

    Imagine a travel booking platform that embeds an iframe from a third-party partner to display live train or flight schedules. This embedded service uses a cookie on its own domain to authenticate the user and personalize content, like showing saved trips or loyalty rewards. But when the browser blocks third-party cookies, the iframe cannot access that data. Instead of a seamless experience, the user sees an error, a blank screen, or a login prompt that doesn’t work. And while your team is still planning a long-term integration overhaul, this is already happening to real users. They don’t see a cookie policy; they just see a broken booking flow. Detecting third-party cookie blocking isn’t just good technical hygiene; it’s a frontline defense for user experience.

    Why It’s Hard To Tell If Third-Party Cookies Are Blocked

    Detecting whether third-party cookies are supported isn’t as simple as calling navigator.cookieEnabled. Even a well-intentioned check like this one may look safe, but it still won’t tell you what you actually need to know:

        // DOES NOT detect third-party cookie blocking
        function areCookiesEnabled() {
          if (navigator.cookieEnabled === false) {
            return false;
          }
          try {
            document.cookie = "test_cookie=1; SameSite=None; Secure";
            const hasCookie = document.cookie.includes("test_cookie=1");
            document.cookie = "test_cookie=; Max-Age=0; SameSite=None; Secure";
            return hasCookie;
          } catch (e) {
            return false;
          }
        }

    This function only confirms that cookies work in the current (first-party) context. It says nothing about third-party scenarios, like an iframe on another domain. Worse, it’s misleading: in some browsers, navigator.cookieEnabled may still return true inside a third-party iframe even when cookies are blocked. Others might behave differently, leading to inconsistent and unreliable detection. These cross-browser inconsistencies — combined with the limitations of document.cookie — make it clear that there is no shortcut for detection.
    To truly detect third-party cookie blocking, we need to understand how different browsers actually behave in embedded third-party contexts.

    How Modern Browsers Handle Third-Party Cookies

    The behavior of modern browsers directly affects which detection methods will work and which ones silently fail.

    Safari: Full Third-Party Cookie Blocking

    Since version 13.1, Safari blocks all third-party cookies by default, with no exceptions, even if the user previously interacted with the embedded domain. This policy is part of Intelligent Tracking Prevention (ITP). For embedded content (such as an SSO iframe) that requires cookie access, Safari exposes the Storage Access API, which requires a user gesture to grant storage permission. As a result, a test for third-party cookie support will nearly always fail in Safari unless the iframe explicitly requests access via this API.

    Firefox: Cookie Partitioning By Design

    Firefox’s Total Cookie Protection isolates cookies on a per-site basis. Third-party cookies can still be set and read, but they are partitioned by the top-level site, meaning a cookie set by the same third party on siteA.com and siteB.com is stored separately and cannot be shared. As of Firefox 102, this behavior is enabled by default in the Standard (default) mode of Enhanced Tracking Protection. Unlike the Strict mode — which blocks third-party cookies entirely, similar to Safari — the Standard mode does not block them outright. Instead, it neutralizes their tracking capability by isolating them per site. As a result, even if a test shows that a third-party cookie was successfully set, it may be useless for cross-site logins or shared sessions due to this partitioning. Detection logic needs to account for that.

    Chrome: From Deprecation Plans To Privacy Sandbox (And Industry Pushback)

    Chromium-based browsers still allow third-party cookies by default — but the story is changing. Starting with Chrome 80, third-party cookies must be explicitly marked with SameSite=None; Secure, or they will be rejected. In January 2020, Google announced its intention to phase out third-party cookies by 2022. However, the timeline was updated multiple times, first in June 2021 when the company pushed the rollout to begin in mid-2023 and conclude by the end of that year. Additional postponements followed in July 2022, December 2023, and April 2024. In July 2024, Google clarified that there is no plan to unilaterally deprecate third-party cookies or force users into a new model without consent. Instead, Chrome is shifting to a user-choice interface that will allow individuals to decide whether to block or allow third-party cookies globally. This change was influenced in part by substantial pushback from the advertising industry, as well as ongoing regulatory oversight, including scrutiny by the UK Competition and Markets Authority (CMA) into Google’s Privacy Sandbox initiative. The CMA confirmed in a 2025 update that there is no intention to force a deprecation or trigger automatic prompts for cookie blocking. As of now, third-party cookies remain enabled by default in Chrome. The new user-facing controls and the broader Privacy Sandbox ecosystem are still in various stages of experimentation and limited rollout.
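    To make the SameSite requirement concrete, here is a minimal sketch of how a script running in a cross-site iframe would have to set a cookie for a Chromium-based browser to accept it. The cookie name and value are placeholders:

        // Without "SameSite=None; Secure" (and an HTTPS page), Chromium-based
        // browsers default the cookie to SameSite=Lax and silently reject it
        // in a cross-site context.
        document.cookie = "widget_session=abc123"; // rejected in a third-party iframe

        // Explicitly marked cross-site: accepted when third-party cookies are allowed.
        document.cookie = "widget_session=abc123; SameSite=None; Secure; Path=/";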
    Edge (Chromium-Based): Tracker-Focused Blocking With User Configurability

    Edge, which is a Chromium-based browser, shares Chrome’s handling of third-party cookies, including the SameSite=None; Secure requirement. Additionally, Edge introduces Tracking Prevention modes: Basic, Balanced (default), and Strict. In Balanced mode, it blocks known third-party trackers using Microsoft’s maintained list but allows many third-party cookies that are not classified as trackers. Strict mode blocks more resource loads than Balanced, which may result in some websites not behaving as expected.

    Other Browsers: What About Them?

    Privacy-focused browsers, like Brave, block third-party cookies by default as part of their strong anti-tracking stance. Internet Explorer (IE) 11 allowed third-party cookies depending on user privacy settings and the presence of Platform for Privacy Preferences (P3P) headers. However, IE usage is now negligible. Notably, the default “Medium” privacy setting in IE could block third-party cookies unless a valid P3P policy was present. Older versions of Safari had partial third-party cookie restrictions (such as “Allow from websites I visit”), but, as mentioned before, this was replaced with full blocking via ITP.

    As of 2025, all major browsers either block or isolate third-party cookies by default, with the exception of Chrome, which still allows them in standard browsing mode pending the rollout of its new user-choice model. To account for these variations, your detection strategy must be grounded in real-world testing — specifically by reproducing a genuine third-party context, such as loading your script within an iframe on a cross-origin domain — rather than relying on browser names or versions.

    Overview Of Detection Techniques

    Over the years, many techniques have been used to detect third-party cookie blocking. Most are unreliable or obsolete. Here’s a quick walkthrough of what doesn’t work (and why) and what does.

    Basic JavaScript API Checks (Misleading)

    As mentioned earlier, checking navigator.cookieEnabled or setting document.cookie on the main page doesn’t reflect cross-site cookie status: in third-party iframes, navigator.cookieEnabled often returns true even when cookies are blocked, and setting document.cookie in the parent doesn’t test the third-party context. These checks are first-party only. Avoid using them for detection.

    Storage Hacks Via localStorage (Obsolete)

    Previously, some developers inferred cookie support by checking if window.localStorage worked inside a third-party iframe — a trick that was useful against older Safari versions, which blocked all third-party storage. Modern browsers often allow localStorage even when cookies are blocked. This leads to false positives and is no longer reliable.

    Server-Assisted Cookie Probe (Heavyweight)

    One classic method involves setting a cookie from a third-party domain via HTTP and then checking if it comes back: load a script or image from a third-party server that sets a cookie, then immediately load another resource so the server can check whether the cookie was sent back. This works, but it requires custom server-side logic; depends on HTTP caching, response headers, and cookie attributes (SameSite=None; Secure); and adds development and infrastructure complexity. While this is technically valid, it is not suitable for a front-end-only approach, which is our focus here.
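    Although the focus here is front-end-only detection, a rough sketch of the server half of such a probe may make the mechanics clearer. This is a hypothetical example, assuming Node.js with Express; the domains, route names, and cookie name are placeholders rather than anything from the original article:

        // Sketch: server side of a cookie probe, running on the third-party
        // domain (e.g., https://probe.example.com). Hypothetical routes and names.
        const express = require("express");
        const app = express();

        app.use((req, res, next) => {
          // Allow the embedding site to call us with credentials (cookies).
          res.set("Access-Control-Allow-Origin", "https://www.example.com");
          res.set("Access-Control-Allow-Credentials", "true");
          res.set("Cache-Control", "no-store"); // cached responses would skew the probe
          next();
        });

        // First request: attempt to set a cross-site cookie.
        app.get("/probe/set", (req, res) => {
          res.cookie("probe", "1", { sameSite: "none", secure: true, path: "/", maxAge: 60000 });
          res.sendStatus(204);
        });

        // Second request: report whether the browser sent the cookie back.
        app.get("/probe/check", (req, res) => {
          const supported = (req.headers.cookie || "").includes("probe=1");
          res.json({ thirdPartyCookies: supported });
        });

        app.listen(3000);

    The embedding page would then call fetch("https://probe.example.com/probe/set", { credentials: "include" }), follow up with the /probe/check endpoint, and read the JSON result.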
    Storage Access API (Supplemental Signal)

    The document.hasStorageAccess() method allows embedded third-party content to check if it has access to unpartitioned cookies:

    - Chrome: Supports hasStorageAccess() and requestStorageAccess() starting from version 119. Additionally, hasUnpartitionedCookieAccess() is available as an alias for hasStorageAccess() from version 125 onwards.
    - Firefox: Supports both hasStorageAccess() and requestStorageAccess() methods.
    - Safari: Supports the Storage Access API. However, access must always be triggered by a user interaction. For example, even calling requestStorageAccess() without a direct user gesture (like a click) is ignored.

    In Chrome and Firefox, the API may also work automatically, based on browser heuristics or site engagement. This API is particularly useful for detecting scenarios where cookies are present but partitioned (e.g., Firefox’s Total Cookie Protection), as it helps determine if the iframe has unrestricted cookie access. But for now, it’s still best used as a supplemental signal rather than a standalone check.

    iFrame + postMessage (Best Practice)

    Despite the existence of the Storage Access API, at the time of writing, this remains the most reliable and browser-compatible method:

    1. Embed a hidden iframe from a third-party domain.
    2. Inside the iframe, attempt to set a test cookie.
    3. Use window.postMessage to report success or failure to the parent.

    This approach works across all major browsers (when properly configured), requires no server (kind of — more on that next), and simulates a real-world third-party scenario. We’ll implement this step-by-step next.

    Bonus: Sec-Fetch-Storage-Access

    Chrome (starting in version 133) is introducing Sec-Fetch-Storage-Access, an HTTP request header sent with cross-site requests to indicate whether the iframe has access to unpartitioned cookies. This header is only visible to servers and cannot be accessed via JavaScript. It’s useful for back-end analytics but not applicable for client-side cookie detection. As of May 2025, this feature is only implemented in Chrome and is not supported by other browsers. However, it’s still good to know that it’s part of the evolving ecosystem.

    Step-by-Step: Detecting Third-Party Cookies Via iFrame

    So, what did I mean when I said that the last method we looked at “requires no server”? While this method doesn’t require any back-end logic (like server-set cookies or response inspection), it does require access to a separate domain — or at least a cross-site subdomain — to simulate a third-party environment. This means the following:

    - You must serve the test page from a different domain or public subdomain, e.g., example.com and cookietest.example.com;
    - The domain needs HTTPS (for SameSite=None; Secure cookies to work); and
    - You’ll need to host a simple static file (the test page), even if no server code is involved.

    Once that’s set up, the rest of the logic is fully client-side.

    Step 1: Create A Cookie Test Page (On A Third-Party Domain)

    Minimal version (e.g., https://cookietest.example.com/cookie-check.html):

        <!DOCTYPE html>
        <html>
        <body>
        <script>
          document.cookie = "thirdparty_test=1; SameSite=None; Secure; Path=/;";
          const cookieFound = document.cookie.includes("thirdparty_test=1");

          const sendResult = (status) => window.parent?.postMessage(status, "*");

          if (cookieFound && document.hasStorageAccess instanceof Function) {
            document.hasStorageAccess().then((hasAccess) => {
              sendResult(hasAccess ? "TP_COOKIE_SUPPORTED" : "TP_COOKIE_BLOCKED");
            }).catch(() => sendResult("TP_COOKIE_BLOCKED"));
          } else {
            sendResult(cookieFound ? "TP_COOKIE_SUPPORTED" : "TP_COOKIE_BLOCKED");
          }
        </script>
        </body>
        </html>

    Make sure the page is served over HTTPS, and the cookie uses SameSite=None; Secure. Without these attributes, modern browsers will silently reject it.
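    One hardening note on the test page above: it posts its result with a wildcard ("*") target origin, which any embedding page can read. If you control the embedding site, a stricter variant pins the parent origin instead; this is a small sketch, and https://www.example.com is a placeholder for your real parent origin:

        // Sketch: report the result only to the known parent origin.
        const PARENT_ORIGIN = "https://www.example.com"; // placeholder
        const sendResult = (status) => window.parent?.postMessage(status, PARENT_ORIGIN);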
    Step 2: Embed The iFrame And Listen For The Result

    On your main page:

        function checkThirdPartyCookies() {
          return new Promise((resolve) => {
            const iframe = document.createElement('iframe');
            iframe.style.display = 'none';
            iframe.src = "https://cookietest.example.com/cookie-check.html"; // your subdomain
            document.body.appendChild(iframe);

            let resolved = false;
            const cleanup = (result, timedOut = false) => {
              if (resolved) return;
              resolved = true;
              window.removeEventListener('message', onMessage);
              iframe.remove();
              resolve({ thirdPartyCookiesEnabled: result, timedOut });
            };

            const onMessage = (event) => {
              if (["TP_COOKIE_SUPPORTED", "TP_COOKIE_BLOCKED"].includes(event.data)) {
                cleanup(event.data === "TP_COOKIE_SUPPORTED", false);
              }
            };

            window.addEventListener('message', onMessage);
            setTimeout(() => cleanup(false, true), 1000);
          });
        }

    Example usage:

        checkThirdPartyCookies().then(({ thirdPartyCookiesEnabled, timedOut }) => {
          if (!thirdPartyCookiesEnabled) {
            someCookiesBlockedCallback(); // Third-party cookies are blocked.
            if (timedOut) {
              // No response received (iframe possibly blocked).
              // Optional fallback UX goes here.
              someCookiesBlockedTimeoutCallback();
            }
          }
        });

    Step 3: Enhance Detection With The Storage Access API

    In Safari, even when third-party cookies are blocked, users can manually grant access through the Storage Access API — but only in response to a user gesture. Here’s how you could implement that in your iframe test page:

        <button id="enable-cookies">This embedded content requires cookie access. Click below to continue.</button>

        <script>
        document.getElementById('enable-cookies')?.addEventListener('click', async () => {
          if (document.requestStorageAccess && typeof document.requestStorageAccess === 'function') {
            try {
              const granted = await document.requestStorageAccess();
              if (granted !== false) {
                window.parent.postMessage("TP_STORAGE_ACCESS_GRANTED", "*");
              } else {
                window.parent.postMessage("TP_STORAGE_ACCESS_DENIED", "*");
              }
            } catch (e) {
              window.parent.postMessage("TP_STORAGE_ACCESS_FAILED", "*");
            }
          }
        });
        </script>

    Then, on the parent page, you can listen for this message and retry detection if needed:

        // Inside the same onMessage listener from before:
        if (event.data === "TP_STORAGE_ACCESS_GRANTED") {
          // Optionally: retry the cookie test, or reload iframe logic
          checkThirdPartyCookies().then(handleResultAgain);
        }

    (Bonus) A Purely Client-Side Fallback (Not Perfect, But Sometimes Necessary)

    In some situations, you might not have access to a second domain or can’t host third-party content under your control. That makes the iframe method unfeasible. When that’s the case, your best option is to combine multiple signals — basic cookie checks, hasStorageAccess(), localStorage fallbacks, and maybe even passive indicators like load failures or timeouts — to infer whether third-party cookies are likely blocked. The important caveat: this will never be 100% accurate. But, in constrained environments, “better something than nothing” may still improve the UX.
    Here’s a basic example:

        async function inferCookieSupportFallback() {
          let hasCookieAPI = navigator.cookieEnabled;
          let canSetCookie = false;
          let hasStorageAccess = false;

          try {
            document.cookie = "test_fallback=1; SameSite=None; Secure; Path=/;";
            canSetCookie = document.cookie.includes("test_fallback=1");
            document.cookie = "test_fallback=; Max-Age=0; Path=/;";
          } catch (_) {
            canSetCookie = false;
          }

          if (typeof document.hasStorageAccess === "function") {
            try {
              hasStorageAccess = await document.hasStorageAccess();
            } catch (_) {}
          }

          return {
            inferredThirdPartyCookies: hasCookieAPI && canSetCookie && hasStorageAccess,
            raw: { hasCookieAPI, canSetCookie, hasStorageAccess }
          };
        }

    Example usage:

        inferCookieSupportFallback().then(({ inferredThirdPartyCookies }) => {
          if (inferredThirdPartyCookies) {
            console.log("Cookies likely supported.");
          } else {
            console.warn("Cookies may be blocked or partitioned.");
            // You could inform the user or adjust behavior accordingly
          }
        });

    Use this fallback when:

    - You’re building a JavaScript-only widget embedded on unknown sites;
    - You don’t control a second domain (or the team refuses to add one); or
    - You just need some visibility into user-side behavior (e.g., debugging UX issues).

    Don’t rely on it for security-critical logic (e.g., auth gating)! But it may help tailor the user experience, surface warnings, or decide whether to attempt a fallback SSO flow. Again, it’s better to have something rather than nothing.

    Fallback Strategies When Third-Party Cookies Are Blocked

    Detecting blocked cookies is only half the battle. Once you know they’re unavailable, what can you do? Here are some practical options that might be useful for you.

    Redirect-Based Flows

    For auth-related flows, switch from embedded iframes to top-level redirects. Let the user authenticate directly on the identity provider's site, then redirect back. It works in all browsers, but the UX might be less seamless.

    Request Storage Access

    Prompt the user using requestStorageAccess() after a clear UI gesture (Safari requires this). Use this to re-enable cookies without leaving the page.

    Token-Based Communication

    Pass session info directly from parent to iframe via:

    - postMessage (with a required origin);
    - Query params (e.g., a signed JWT in the iframe URL).

    This avoids reliance on cookies entirely but requires coordination between both sides:

        // Parent
        const iframe = document.getElementById('my-iframe');
        iframe.onload = () => {
          const token = getAccessTokenSomehow(); // JWT or anything else
          iframe.contentWindow.postMessage(
            { type: 'AUTH_TOKEN', token },
            'https://iframe.example.com' // Set the correct origin!
          );
        };

        // iframe
        window.addEventListener('message', (event) => {
          if (event.origin !== 'https://parent.example.com') return;
          const { type, token } = event.data;
          if (type === 'AUTH_TOKEN') {
            validateAndUseToken(token); // process JWT, init session, etc.
          }
        });

    Partitioned Cookies (CHIPS)

    Chrome (since version 114) and other Chromium-based browsers now support cookies with the Partitioned attribute (known as CHIPS), allowing per-top-site cookie isolation. This is useful for widgets like chat or embedded forms where cross-site identity isn’t needed. Note: Firefox and Safari don’t support the Partitioned cookie attribute. Firefox enforces cookie partitioning by default using a different mechanism (Total Cookie Protection), while Safari blocks third-party cookies entirely. But be careful, as such cookies are treated as “blocked” by basic detection. Refine your logic if needed.
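    To illustrate the Partitioned attribute, here is a minimal sketch of how an embedded widget might set a CHIPS cookie. The cookie name is a placeholder; note that partitioned cookies must be Secure, and the __Host- prefix pattern is commonly recommended for them:

        // Sketch: a partitioned (CHIPS) cookie set from an embedded widget.
        // The cookie is scoped to the combination of this third party and the
        // current top-level site, so it cannot be used for cross-site tracking.
        document.cookie =
          "__Host-widget_session=abc123; Secure; Path=/; SameSite=None; Partitioned";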
    Final Thought: Transparency, Transition, And The Path Forward

    Third-party cookies are disappearing, albeit gradually and unevenly. Until the transition is complete, your job as a developer is to bridge the gap between technical limitations and real-world user experience. That means:

    - Keep an eye on the standards. APIs like FedCM and Privacy Sandbox features (Topics, Attribution Reporting, Fenced Frames) are reshaping how we handle identity and analytics without relying on cross-site cookies (see the FedCM sketch below).
    - Combine detection with graceful fallback. Whether it’s offering a redirect flow, using requestStorageAccess(), or falling back to token-based messaging — every small UX improvement adds up.
    - Inform your users. Users shouldn't be left wondering why something worked in one browser but silently broke in another. Don’t let them feel like they did something wrong — just help them move forward. A clear, friendly message can prevent this confusion.

    The good news? You don’t need a perfect solution today, just a resilient one. By detecting issues early and handling them thoughtfully, you protect both your users and your future architecture, one cookie-less browser at a time. And as seen with Chrome’s pivot away from automatic deprecation, the transition is not always linear. Industry feedback, regulatory oversight, and evolving technical realities continue to shape both the timeline and the solutions. And don’t forget: having something is better than nothing.
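    As a glimpse of the FedCM direction mentioned in the takeaways above, here is a minimal sketch of a federated sign-in request as the API is currently shaped in Chromium-based browsers. The configURL and clientId values are placeholders, and the API surface may still change:

        // Sketch: requesting a federated identity credential via FedCM.
        // Assumes a browser with FedCM support (Chromium-based, at the time of writing).
        async function signInWithFedCM() {
          const credential = await navigator.credentials.get({
            identity: {
              providers: [{
                configURL: "https://idp.example/fedcm.json", // placeholder IdP config
                clientId: "your-client-id",                  // placeholder client ID
              }],
            },
          });
          // The returned token would then be sent to your back end for verification.
          return credential?.token;
        }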
  • What’s new for Prefabs in 2022.2?

    UNITY.COM
    It’s been a while since the Scene Management team has shared an update on Prefabs. During the last few releases, and after fixing a large number of bugs you’ve reported (thank you!), we’ve made several improvements to the Prefab system. Let’s take a look at each improvement coming in 2022.2 – now available in beta – and how these updates can benefit you.

    You can now replace the Prefab Asset for a Prefab instance that exists either in a scene or nested inside other Prefabs. This feature will keep the Prefab instance position, rotation, and scale in the scene, but merge the contents from the new Prefab Asset, all while retaining as many overrides and references as possible via name-based matching (by default). More specifically:

    - The Inspector for a Prefab instance has a new Object Field that can be used for replacing the Prefab Asset.
    - The Hierarchy has Context Menus that can similarly replace the Prefab Asset of the instance.
    - Finally, a plain GameObject can be converted to a Prefab instance through the Context Menu in the Hierarchy, or by dragging and dropping with the Ctrl/Cmd modifier key.

    This functionality is not only available in the UI, but as with most features we build, it has an API that allows you to manage how objects are matched, as well as how Overrides should be treated. See PrefabUtility.ReplacePrefabAssetOfPrefabInstance and PrefabUtility.ConvertToPrefabInstance.

    One of the most requested improvements has been the ability to reorder added GameObjects and components. “Added GameObjects and components” refers to the GameObjects and components that are not part of a Prefab instance, but are added to the Prefab instance in a scene or inside a Variant or Nested Prefab. So as of 2022.1, it is possible to reorder the added GameObjects by drag and drop – both among themselves and between GameObjects belonging to the Prefab instance. Getting this feature ready has required a major refactoring of the Undo system. If you want to reorder added GameObjects from an Editor script, it is simply a matter of setting the sibling index on the Transform of the added GameObject. The ability to reorder added components in the Inspector is included in 2022.2. There is no public API for reordering components.

    The last thing we needed to achieve full feature parity between GameObjects and components was the ability to delete GameObjects from Prefab instances as an Override. Deleting GameObjects as Overrides, an option available in 2022.2, ensures that once you’ve deleted a GameObject, the usual workflows for reverting from/applying to a Prefab Asset work as you’d expect. When it comes to an Editor script, use Object.DestroyImmediate to destroy Prefab instance objects and record the destruction as an Override stored in the scene file.

    Users often ask what the Variant inheritance tree looks like for a specific Prefab Asset. In 2022.2, we added the Prefab Family pop-up to the Inspector. The content of the pop-up is dependent on the selected Prefab Asset in the Project Browser. After selecting a Prefab Asset and opening the Prefab Family pop-up, the Editor lists all the ancestors of the current Prefab, as well as all the immediate children.

    In addition to queries about the inheritance tree, users have often asked how they can get rid of unused Overrides stored in a scene but never accessed (because the property has been removed from a script).
    In the worst case, such properties might reference assets that are then pulled into the final build, taking up space on the storage device and in memory – but never used. Overrides are now flagged as unused for:

    - Null target objects
    - Unknown Property Paths (which are not subject to scripted FormerlySerializedAsAttribute usage)
    - Removed components
    - Removed GameObjects
    - Changed array dimensions (e.g., materials array)

    When selecting one or more Prefab instances in the Hierarchy and opening the Overrides drop-down, the Editor now shows whether there are unused Overrides. You can then remove them from the scene using the new Unused Overrides drop-down. Moreover, you can remove all unused Overrides in a scene through the Hierarchy’s Scene Context Menu or via the Context Menu for an arbitrary selection of Prefab instances.

    We do not automatically remove unused Overrides. After all, the reason for their existence cannot be inferred. Removing a property from a script or deleting an asset should not automatically remove unused Overrides, as you might subsequently wish to undo the removal and have the Overrides restored.

    In case you’re wondering, “Why do I still have Overrides on my Prefab instance after pressing ‘Apply All’?”, the answer is that those Overrides simply can’t be applied to the Prefab Asset. Most commonly, such Overrides are references to other objects in the scene that cannot be referenced from the Prefab Asset. Overrides that are not typically applicable are now highlighted by a dark blue bar in the Inspector. These cannot be applied; only reverted.

    You can now change the default behavior when opening Prefab Mode to In Isolation instead of In Context. Go to Editor Preferences > General > Default Prefab Mode to make this change. Now, with 2022.2, Undo is recorded as a single Undo operation when exiting Prefab Mode. This results in all changes made to the Prefab being reverted if you perform an Undo after leaving Prefab Mode.

    Over the course of multiple releases, the error handling and reporting during scene load (and Prefab load in Prefab Mode) have substantially improved, and will now indicate which Prefabs the errors are related to and/or the GUID for missing Prefabs. In fact, the way we handle missing Prefab assets during scene loading is safer and more stable than before. In an effort to further improve error handling and avoid introducing bad data into your project, we’ve added a Broken Prefab Asset Type, which will be produced by the Prefab Importer when errors that cannot be rectified are encountered.

    The most common case is when a Prefab Variant has lost its parent Prefab, perhaps because it was deleted. In this case, we can’t produce a meaningful Prefab Variant, so a Broken Prefab Asset is created instead. This new asset will show information about what is wrong in the Inspector when selected in the Project Browser. If it’s a case of a missing Prefab parent, then the GUID of the missing Prefab is shown. Alternatively, if it’s a chain of Prefab Variants that is broken, you can go up the chain through the Inspector until you find the Variant with the missing parent.

    The concept of Disconnected Prefab instances no longer exists as of 2022.1. We still support loading Disconnected Prefab instances, but when the Editor encounters them during scene loading, the Disconnected Prefab instances are stripped of all their Prefab information and become regular GameObjects.

    As mentioned, our team has fixed a series of bugs you’ve graciously reported to us over time.
    Users often ask what the Variant inheritance tree looks like for a specific Prefab Asset. In 2022.2, we added the Prefab Family pop-up to the Inspector. The content of the pop-up depends on the Prefab Asset selected in the Project Browser. After selecting a Prefab Asset and opening the Prefab Family pop-up, the Editor lists all the ancestors of the current Prefab, as well as all the immediate children.

    In addition to queries about the inheritance tree, users have often asked how they can get rid of unused Overrides stored in a scene but never accessed (because the property has been removed from a script). In the worst case, such properties might reference assets that are then pulled into the final build, taking up space on the storage device and in memory – but never used. Overrides are now flagged as unused for:

    - Null target objects
    - Unknown property paths (which are not subject to scripted FormerlySerializedAsAttribute usage)
    - Removed components
    - Removed GameObjects
    - Changed array dimensions (e.g., the materials array)

    When selecting one or more Prefab instances in the Hierarchy and opening the Overrides drop-down, the Editor now shows whether there are unused Overrides. You can then remove them from the scene using the new Unused Overrides drop-down. Moreover, you can remove all unused Overrides in a scene through the Hierarchy’s Scene Context Menu, or via the Context Menu for an arbitrary selection of Prefab instances.

    We do not automatically remove unused Overrides. After all, the reason for their existence cannot be inferred. Removing a property from a script or deleting an asset should not automatically remove unused Overrides, as you might subsequently wish to undo the removal and have the Overrides restored.

    In case you’re wondering why you still have Overrides on your Prefab instance after pressing Apply All: those Overrides simply can’t be applied to the Prefab Asset. Most commonly, they are references to other objects in the scene, which cannot be referenced from the Prefab Asset. Overrides that are not applicable are now highlighted by a dark blue bar in the Inspector; they can only be reverted.

    You can now change the default behavior when opening Prefab Mode from In Context to In Isolation. Go to Editor Preferences > General > Default Prefab Mode to make this change.

    Also new in 2022.2, exiting Prefab Mode is recorded as a single Undo operation. As a result, performing an Undo after leaving Prefab Mode reverts all the changes made to the Prefab at once.

    Over the course of multiple releases, error handling and reporting during scene load (and Prefab load in Prefab Mode) have substantially improved: errors now indicate which Prefabs they relate to and/or the GUID of a missing Prefab. The way we handle missing Prefab Assets during scene loading is also safer and more stable than before.

    In an effort to further improve error handling and avoid introducing bad data into your project, we’ve added a Broken Prefab Asset type, which the Prefab Importer produces when it encounters errors that cannot be rectified. The most common case is a Prefab Variant that has lost its parent Prefab, perhaps because it was deleted. Since we can’t produce a meaningful Prefab Variant in that situation, a Broken Prefab Asset is created instead. When selected in the Project Browser, this asset shows information in the Inspector about what is wrong. If the parent Prefab is missing, its GUID is shown. If a chain of Prefab Variants is broken, you can go up the chain through the Inspector until you find the Variant with the missing parent.

    The concept of Disconnected Prefab instances no longer exists as of 2022.1. We still support loading them, but when the Editor encounters Disconnected Prefab instances during scene loading, they are stripped of all their Prefab information and become regular GameObjects.

    As mentioned, our team has fixed a series of bugs you’ve graciously reported over time. Some of them derive from the original Prefab system, but many only became apparent with the introduction of the improved Prefabs. Today, we are confident you will enjoy the stability of the latest Prefab system, and we hope you will find it smooth and efficient to work with. Have more Prefab-related questions or comments? Join us in the forums to share your feedback.
  • Inferred art—painting a computer program

    How trying to depict the complex workings of our machines can help us balance our relationship between art and AI. Continue reading on UX Collective »
  • Scientists figure out how the brain forms emotional connections

    It's shocking!

    Neural recordings track how neurons link environments to emotional events.

    Jacek Krywko



    May 21, 2025 4:07 pm

    Credit: fotografixx


    Whenever something bad happens to us, brain systems responsible for mediating emotions kick in to prevent it from happening again. When we get stung by a wasp, the association between pain and wasps is encoded in the region of the brain called the amygdala, which connects simple stimuli with basic emotions.
    But the brain does more than simple associations; it also encodes lots of other stimuli that are less directly connected with the harmful event—things like the place where we got stung or the wasps’ nest in a nearby tree. These are combined into complex emotional models of potentially threatening circumstances.
    Till now, we didn’t know exactly how these models are built. But we’re beginning to understand how it’s done.
    Emotional complexity
    “Decades of work has revealed how simple forms of emotional learning occurs—how sensory stimuli are paired with aversive events,” says Joshua Johansen, a team director at the Neural Circuitry of Learning and Memory at the RIKEN Center for Brain Science in Tokyo. But Johansen says that these decades didn’t bring much progress in treating psychiatric conditions like anxiety and trauma-related disorders. “We thought if we could get a handle of more complex emotional processes and understand their mechanisms, we may be able to provide relief for patients with conditions like that,” Johansen claims.
    To make it happen, his team performed experiments designed to trigger complex emotional processes in rats while closely monitoring their brains.
    Johansen and Xiaowei Gu, his co-author and colleague at RIKEN, started by dividing the rats into two groups. The first “paired” group of rats was conditioned to associate an image with a sound. The second “unpaired” group watched the same image and listened to the same sound, but not at the same time. This prevented the rats from making an association.

    Then, one day later, the rats were shown the same image and treated with an electric shock until they learned to connect the image with pain. Finally, the team tested if the rats would freeze in fear in response to the sound. The “unpaired” group didn’t. The rats in the “paired” group did—it turned out human-like complex emotional models were present in rats as well.
    Once Johansen and Gu confirmed the capacity was there, they got busy figuring out how it worked exactly.
    Playing tag
    “Behaviorally, we measured freezing responses to the directly paired stimulus, which was the image, and inferred stimulus which was the sound,” Johansen says. “But we also performed something we called miniscope calcium imaging.” The trick relied on injecting rats with a virus that forced their cells to produce proteins that fluoresce in response to increased levels of calcium in the cells. Increased levels of calcium are the telltale sign of activity in neurons, meaning the team could see in real time which neurons in rats’ brains lit up during the experiments.
    It turned out that the region crucial for building these complex emotional models was not the amygdala, but the dorsomedial prefrontal cortex (dmPFC), which had a rather specialized role. “The dmPFC does not form the sensory model of the world. It only cares about things when they have emotional relevance,” Johansen explains. He said there wasn’t much change in neuronal activity during the sensory learning phase, when the animals were watching the image and listening to the sound. The neurons became significantly more active when the rats received the electric shock.
    In the “unpaired” group, the active neurons that held the representations of the electric shock and the image started to overlap. In the “paired” group, this overlap also included the neuronal representation of the sound. “There was a kind of an associative bundle that formed,” Johansen says.

    After Johansen and Gu pinpointed the neurons that formed those associative bundles, they started looking at how each of these components works.
    Detraumatizing rodents
    In the first step, the team identified the dmPFC neurons that sent output to the amygdala. Then they selectively inhibited those neurons and exposed the rats from the “paired” group to the image and the sound again. The result of disconnecting the dmPFC neurons from the amygdala was that rats exhibited a fear response to the image but no longer feared the sound. “It seems like the amygdala can form the simple representations on its own but requires input from the dmPFC to express more complex, inferred emotions,” Johansen says.
    But plenty of unanswered questions remain.
    The next thing the team wants to take a closer look at is the process that enables the brain to tie an aversive stimulus, like the shock, to one that was not active during the aversive event. In the “paired” group of rats, some multi-sensory neurons responding to both auditory and visual stimuli apparently got recruited. “We haven’t worked that out yet,” Johansen says. “This is a very novel type of mechanism.”
    Another thing is that the emotional model Johansen and Gu induced in rats was relatively simple. In the real world, especially in humans, we can have many different aversive outcomes tied to the same triggers. A single location could be where you got stung by a wasp, attacked by a dog, robbed of your wallet, and dumped by your significant other—all different aversive representations with myriad inferred, indirect stimuli to go along with them. “Does the dmPFC combine all those representations into sort of a single, overlapping representation? Or is it a really rich environment that bundles different aversive experiences with the individual aspects of these experiences?” Johansen asked. “This is something we want to test more.”
    Nature, 2025. DOI: 10.1038/s41586-025-09001-2

    Jacek Krywko
    Associate Writer


    Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

  • HBO’s The Last of Us S2E6 recap: Look who’s back!

    New episodes of season 2 of The Last of Us are premiering on HBO every Sunday night, and Ars' Kyle Orland (who's played the games) and Andrew Cunningham (who hasn't) will be talking about them here after they air. While these recaps don't delve into every single plot point of the episode, there are obviously heavy spoilers contained within, so go watch the episode first if you want to go in fresh.

    Kyle: Going from a sudden shot of beatific Pedro Pascal at the end of the last episode to a semi-related flashback with a young Joel Miller and his brother was certainly a choice. I almost respect how overtly they are just screwing with audience expectations here.
    As for the opening flashback scene itself, I guess the message is "Hey, look at the generational trauma his family was dealing with—isn't it great he overcame that to love Ellie?" But I'm not sure I can draw a straight line from "he got beat by his dad" to "he condemned the entire human race for his surrogate daughter."

    Andrew: I do not have the same problems you did with either the Joel pop-in at the end of the last episode or the flashback at the start of this episode—last week, the show was signaling "here comes Joel!" and this week the show is signaling "look, it's Joel!" Maybe I'm just responding to Tony Dalton as Joel's dad, who I know best as the charismatic lunatic Lalo Salamanca from Better Call Saul. I do agree that the throughline between these two events is shaky, though, and without the flashback to fill us in, the "I hope you can do a little better than me" sentiment feels like something way out of left field.
    But I dunno, it's Joel week. Joel's back! This is the Duality of Joel: you can simultaneously think that he is horrible for failing a civilization-scale trolley problem when he killed a building full of Fireflies to save Ellie, and you can't help but be utterly charmed by Pedro Pascal enthusiastically describing the many ways to use a Dremel. (He's right! It's a versatile tool!) Truly, there's pretty much nothing in this episode that we couldn't have inferred or guessed at based on the information the show has already made available to us. And I say this as a non-game-player—I didn't need to see exactly how their relationship became as strained as it was by the beginning of the season to have some idea of why it happened, nor did I need to see The Porch Scene to understand that their bond nevertheless endured. But this is also the dynamic that everybody came to the show for last season, so I can only make myself complain about it to a point.

    Kyle: It's true, Joel Week is a time worth celebrating. If I'm coming across as cranky about it at the outset, it's probably because this whole episode is a realization of what we're missing out on this season thanks to Joel's death.
    As you said, a lot of this episode was filling in gaps that could well have been inferred from events we did see. But I would have easily taken a full season (or a full second game) of Ellie growing up and Joel dealing with Ellie growing up. You could throw in some zombie attacks or an overarching Big Bad enemy or something if you want, but the development of Joel and Ellie's relationship deserves more than just some condensed flashbacks.

    "It works?!"

    Credit:
    Warner Bros. Discovery

    "It works?!"

    Credit:

    Warner Bros. Discovery

    Andrew: Yeah, it's hard not to be upset about the original sin of The Last of Us Part 2, which is (assuming it's like the show) that having some boring, underbaked villain crawl out of the woodwork to kill the show's main character is kind of a cheap shot. Sure, you shock the hell out of viewers like me who didn't see it coming! But part of the reason I didn't see it coming is because if you kill Joel, you need to do a whole bunch of your show without Joel, and why on Earth would you decide to do that?
    To be clear, I don't mind this season so much and I've found things to like about it, though Ellie does sometimes veer into being a protagonist so short-sighted and impulsive and occasionally just-plain-stupid that it's hard to be in her corner. But yeah, flashing back to a time just two months after the end of season 1 really does make you wonder, "Why couldn't the story just be this?"

    Kyle: In the gaming space, I understand the desire to not have your sequel game be just "more of the same" from the last game. But I've always felt The Last of Us Part 2 veered too hard in the other direction and became something almost entirely unrecognizable from the original game I loved.
    But let's focus on what we do get in this episode, which is an able recreation of my favorite moment from the second game, Ellie enjoying the heck out of a ruined science museum. The childlike wonder she shows here is a great respite from a lot of action-heavy scenes in the game, and I think it serves the same purpose here. It's also much more drawn out in the game—I could have luxuriated in just this part of the flashback for an entire episode!

    Andrew: The only thing that kept me from being fully on board with that scene was that I think Ellie was acting quite a bit younger than 16, with her pantomimed launch noises and flipping of switches. But I could believe that a kid who had such a rough and abbreviated childhood would have some fun sitting in an Apollo module. For someone with no memories of the pre-outbreak society, it must seem like science fiction, and the show gives us some lovely visuals to go with it.
    The things I like best here are the little moments in between scenes rather than the parts where the show insists on showing us events that it had already alluded to in other episodes. What sticks with me the most, as we jump between Ellie's birthdays, is Joel's insistence that "we could do this kind of thing more often" as they go to a museum or patrol the trails together. That it needs to be stated multiple times suggests that they are not, in fact, doing this kind of thing more often in between birthdays.
    Joel is thoughtful and attentive in his way—a little better than his father—but it's such a bittersweet little note, a surrogate dad's clumsy effort to bridge a gap that he knows is there but doesn't fully understand.

    Why can't it be like this forever?

    Credit: Warner Bros. Discovery

    Kyle: Yeah, I'm OK with a little arrested development in a girl that has been forced to miss so many of the markers of a "normal" pre-apocalypse childhood.
    But yeah, Joel is pretty clumsy about this. And as we see all of these attempts with his surrogate daughter, it's easy to forget what happened to his real daughter way back at the beginning of the first season. The trauma of that event shapes Joel in a way that I feel the narrative sometimes forgets about for long stretches.
    But then we get moments like Joel leading Gail's newly infected husband to a death that the poor guy would very much like to delay by an hour for one final moment with his wife. When Joel says that you can always close your eyes and see the face of the one you love, he may have been thinking about Ellie. But I like to think he was thinking about his actual daughter.

    Andrew: Yes, to the extent that Joel's actions are relatable (I won't say "excusable," but "relatable"), it's because the undercurrent of his relationship with Ellie is that he can't watch another daughter die in his arms. I watched the first episode again recently, and that whole scene remains a masterfully executed gut-punch.
    But it's a tough tightrope to walk because if the story spends too much time focusing on it, you draw attention to how unhealthy it is for Joel to be forcing Ellie to play that role in his life. Don't get me wrong, Ellie was looking for a father figure, too, and that's why it works! It's a "found family" dynamic that they were both looking for. But I can't hear Joel's soothing "baby girl" epithet without it rubbing me the wrong way a little.
    My gut reaction was that it was right for Joel not to fully trust Gail's husband, but then I realized I can never not suspect Joe Pantoliano of treachery because of his role as betrayer in the 26-year-old movie The Matrix. Brains are weird.

    Kyle: I did like the way Ellie tells Joel off for lying to her (and to Gail) about the killing; it's a real "growing up" moment for the character. And of course it transitions well into The Porch Scene, Ellie's ultimate moment of confronting Joel on his ultimate betrayal.
    While I'm not a fan of the head-fake "this scene isn't going to happen" thing they did earlier this season, I think the TV show once again did justice to one of the most impactful parts of the game. But the game also managed to spread out these Joel-centric flashbacks a little more, so we're not transitioning from "museum fun" to "porch confrontation" quite so quickly. Here, it feels like they're trying hard to rush through all of their "bring back Pedro Pascal" requirements in a single episode.

    When you've only got one hour left, how you spend it becomes pretty important.

    Credit: Warner Bros. Discovery

    Andrew: Yeah, because you don't need to pay a 3D model's appearance fees if you want to use it in a bunch of scenes of your video game. Pedro Pascal has other stuff going on!

    Kyle: That's probably part of it. But without giving too much away, I think we're seeing the limits of stretching the events of "Part 2" into what is essentially two seasons. While there have been some cuts, on the whole, it feels like there's also been a lot of filler to "round out" these characters in ways that have been more harmful than helpful at points.

    Andrew: Yeah, our episode ends by depositing us back in the main action, as Ellie returns to the abandoned theater where she and Dina have holed up. I'm curious to see what we're in for in this last run of almost-certainly-Joel-less episodes, but I suspect it involves a bunch of non-Joel characters ping-ponging between the WLF forces and the local cultists. There will probably be some villain monologuing, probably some zombie hordes, probably another named character death or two. Pretty standard issue.
    What I don't expect is for anyone to lovingly and accurately describe the process of refurbishing a guitar. And that's the other issue with putting this episode where it is—just as you're getting used to a show without Joel, you're reminded that he's missing all over again.
  • Thermal asymmetry in the Moon’s mantle inferred from monthly tidal response

    Nature, Published online: 14 May 2025; doi:10.1038/s41586-025-08949-5. Data from the NASA GRAIL spacecraft recover the lunar gravity field, suggesting preservation of a predominantly thermal anomaly in the nearside mantle, which could influence the spatial distribution of deep moonquakes.
  • Prefrontal encoding of an internal model for emotional inference

    Nature, Published online: 14 May 2025; doi:10.1038/s41586-025-09001-2. Neurons in the rodent dorsomedial prefrontal cortex encode a flexible internal model of emotion by linking directly experienced and inferred associations with aversive experiences.