• So, Einstein casually strolled into the universe, dropped the bombshell that time is as relative as your aunt's opinions during family dinners, and walked away like it was no big deal. Who knew that the speed of light, that seemingly harmless little constant, could turn our entire concept of time into a cosmic joke? Apparently, if you’re zooming through space at light speed, your watch just decides to take a nap while the rest of us mortals are stuck in the slow lane. Talk about a time dilation dilemma! Next time you’re late, just blame it on Einstein; clearly, some of us are just trying to keep up with the universe’s idea of punctuality.

#Einstein #TimeDilation #Relativity
    Einstein Showed That Time Is Relative. But … Why Is It?
    The mind-bending concept of time dilation results from a seemingly harmless assumption—that the speed of light is the same for all observers.
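That "harmless assumption" is enough to derive the Lorentz factor, which says exactly how much a moving clock slows down. A minimal Python sketch of the standard textbook formula (this is the generic relation, not anything taken from the linked article):

```python
import math

def time_dilation(proper_time_s: float, speed_fraction_of_c: float) -> float:
    """Elapsed time a 'stationary' observer measures for a clock moving
    at the given fraction of the speed of light (special relativity)."""
    gamma = 1.0 / math.sqrt(1.0 - speed_fraction_of_c ** 2)  # Lorentz factor
    return proper_time_s * gamma

# One hour on a ship cruising at 99% of light speed looks like ~7.1 hours to the rest of us.
print(time_dilation(3600, 0.99) / 3600)  # ~7.09
```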
  • HOLLYWOOD VFX TOOLS FOR SPACE EXPLORATION

    By CHRIS McGOWAN

This image of Jupiter from NASA’s James Webb Space Telescope’s NIRCam shows stunning details of the majestic planet in infrared light.
Special effects have been used for decades to depict space exploration, from visits to planets and moons to zero gravity and spaceships – one need only think of the landmark 2001: A Space Odyssey. Since that era, visual effects have increasingly grown in realism and importance. VFX have been used for entertainment and for scientific purposes, outreach to the public and astronaut training in virtual reality. Compelling images and videos can bring data to life. NASA’s Scientific Visualization Studio produces visualizations, animations and images to help scientists tell stories of their research and make science more approachable and engaging.
A.J. Christensen is a senior visualization designer for the NASA Scientific Visualization Studio at the Goddard Space Flight Center in Greenbelt, Maryland. There, he develops data visualization techniques and designs data-driven imagery for scientific analysis and public outreach using Hollywood visual effects tools, according to NASA. SVS visualizations feature datasets from Earth- and space-based instrumentation, scientific supercomputer models and physical statistical distributions that have been analyzed and processed by computational scientists. Christensen’s specialties include working with 3D volumetric data, using the procedural cinematic software Houdini and science topics in Heliophysics, Geophysics and Astrophysics. He previously worked at the National Center for Supercomputing Applications’ Advanced Visualization Lab where he worked on more than a dozen science documentary full-dome films as well as the IMAX films Hubble 3D and A Beautiful Planet – and he worked at DNEG on the movie Interstellar, which won the 2015 Best Visual Effects Academy Award.

This global map of CO2 was created by NASA’s Scientific Visualization Studio using a model called GEOS, short for the Goddard Earth Observing System. GEOS is a high-resolution weather reanalysis model, powered by supercomputers, that is used to represent what was happening in the atmosphere.
“The NASA Scientific Visualization Studio operates like a small VFX studio that creates animations of scientific data that has been collected or analyzed at NASA. We are one of several groups at NASA that create imagery for public consumption, but we are also a part of the scientific research process, helping scientists understand and share their data through pictures and video.”
—A.J. Christensen, Senior Visualization Designer, NASA Scientific Visualization Studio
About his work at NASA SVS, Christensen comments, “The NASA Scientific Visualization Studio operates like a small VFX studio that creates animations of scientific data that has been collected or analyzed at NASA. We are one of several groups at NASA that create imagery for public consumption, but we are also a part of the scientific research process, helping scientists understand and share their data through pictures and video. This past year we were part of NASA’s total eclipse outreach efforts, we participated in all the major earth science and astronomy conferences, we launched a public exhibition at the Smithsonian Museum of Natural History called the Earth Information Center, and we posted hundreds of new visualizations to our publicly accessible website: svs.gsfc.nasa.gov.”

This is the ‘beauty shot version’ of Perpetual Ocean 2: Western Boundary Currents. The visualization starts with a rotating globe showing ocean currents. The colors used to color the flow in this version were chosen to provide a pleasing look.
The Gulf Stream and connected currents.
Venus, our nearby “sister” planet, beckons today as a compelling target for exploration that may connect the objects in our own solar system to those discovered around nearby stars.
WORKING WITH DATA
    While Christensen is interpreting the data from active spacecraft and making it usable in different forms, such as for science and outreach, he notes, “It’s not just spacecraft that collect data. NASA maintains or monitors instruments on Earth too – on land, in the oceans and in the air. And to be precise, there are robots wandering around Mars that are collecting data, too.”
    He continues, “Sometimes the data comes to our team as raw telescope imagery, sometimes we get it as a data product that a scientist has already analyzed and extracted meaning from, and sometimes various sensor data is used to drive computational models and we work with the models’ resulting output.”

Jupiter’s moon Europa may have life in a vast ocean beneath its icy surface.
HOUDINI AND OTHER TOOLS
    “Data visualization means a lot of different things to different people, but many people on our team interpret it as a form of filmmaking,” Christensen says. “We are very inspired by the approach to visual storytelling that Hollywood uses, and we use tools that are standard for Hollywood VFX. Many professionals in our area – the visualization of 3D scientific data – were previously using other animation tools but have discovered that Houdini is the most capable of understanding and manipulating unusual data, so there has been major movement toward Houdini over the past decade.”

Satellite imagery from NASA’s Solar Dynamics Observatory shows the Sun in ultraviolet light colorized in light brown. Seen in ultraviolet light, the dark patches on the Sun are known as coronal holes and are regions where fast solar wind gushes out into space.
Christensen explains, “We have always worked with scientific software as well – sometimes there’s only one software tool in existence to interpret a particular kind of scientific data. More often than not, scientific software does not have a GUI, so we’ve had to become proficient at learning new coding environments very quickly. IDL and Python are the generic data manipulation environments we use when something is too complicated or oversized for Houdini, but there are lots of alternatives out there. Typically, we use these tools to get the data into a format that Houdini can interpret, and then we use Houdini to do our shading, lighting and camera design, and seamlessly blend different datasets together.”
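As a rough illustration of that hand-off, and not the SVS pipeline itself, the sketch below reads one timestep of a gridded dataset with Python's netCDF4 and NumPy and writes it out as a point table a Houdini artist could import; the file name, variable name and CSV layout are invented for the example.

```python
# Hypothetical sketch: convert one timestep of a gridded scientific dataset into a
# CSV point cloud that a Houdini scene could ingest via a table/CSV import workflow.
import csv
import numpy as np
from netCDF4 import Dataset

ds = Dataset("co2_timestep_0042.nc")          # assumed input file name
co2 = np.asarray(ds.variables["co2"][:])      # assumed variable laid out as a (z, y, x) grid

with open("co2_points.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "y", "z", "co2"])   # one point per grid cell, value as an attribute
    nz, ny, nx = co2.shape
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                writer.writerow([i, j, k, float(co2[k, j, i])])
```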

While cruising around Saturn in early October 2004, Cassini captured a series of images that have been composed into this large global natural color view of Saturn and its rings. This grand mosaic consists of 126 images acquired in a tile-like fashion, covering one end of Saturn’s rings to the other and the entire planet in between.
The black hole Gargantua and the surrounding accretion disc from the 2014 movie Interstellar.
Another visualization of the black hole Gargantua.
INTERSTELLAR & GARGANTUA
    Christensen recalls working for DNEG on Interstellar. “When I first started at DNEG, they asked me to work on the giant waves on Miller’s ocean planet. About a week in, my manager took me into the hall and said, ‘I was looking at your reel and saw all this astronomy stuff. We’re working on another sequence with an accretion disk around a black hole that I’m wondering if we should put you on.’ And I said, ‘Oh yeah, I’ve done lots of accretion disks.’ So, for the rest of my time on the show, I was working on the black hole team.”
    He adds, “There are a lot of people in my community that would be hesitant to label any big-budget movie sequence as a scientific visualization. The typical assumption is that for a Hollywood movie, no one cares about accuracy as long as it looks good. Guardians of the Galaxy makes it seem like space is positively littered with nebulae, and Star Wars makes it seem like asteroids travel in herds. But the black hole Gargantua in Interstellar is a good case for being called a visualization. The imagery you see in the movie is the direct result of a collaboration with an expert scientist, Dr. Kip Thorne, working with the DNEG research team using the actual Einstein equations that describe the gravity around a black hole.”

    Thorne is a Nobel Prize-winning theoretical physicist who taught at Caltech for many years. He has reached wide audiences with his books and presentations on black holes, time travel and wormholes on PBS and BBC shows. Christensen comments, “You can make the argument that some of the complexity around what a black hole actually looks like was discarded for the film, and they admit as much in the research paper that was published after the movie came out. But our team at NASA does that same thing. There is no such thing as an objectively ‘true’ scientific image – you always have to make aesthetic decisions around whether the image tells the science story, and often it makes more sense to omit information to clarify what’s important. Ultimately, Gargantua taught a whole lot of people something new about science, and that’s what a good scientific visualization aims to do.”

The SVS produces an annual visualization of the Moon’s phase and libration comprising 8,760 hourly renderings of its precise size, orientation and illumination.
FURTHER CHALLENGES
    The sheer size of the data often encountered by Christensen and his peers is a challenge. “I’m currently working with a dataset that is 400GB per timestep. It’s so big that I don’t even want to move it from one file server to another. So, then I have to make decisions about which data attributes to keep and which to discard, whether there’s a region of the data that I can cull or downsample, and I have to experiment with data compression schemes that might require me to entirely re-design the pipeline I’m using for Houdini. Of course, if I get rid of too much information, it becomes very resource-intensive to recompute everything, but if I don’t get rid of enough, then my design process becomes agonizingly slow.”
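A toy version of those trade-offs, assuming a volume small enough to fit in NumPy and standing in for nothing specific in the SVS workflow: cull the field by striding, quantize it, and store a compressed preview copy to see how much fidelity survives.

```python
import numpy as np

# Hypothetical 3D scalar field standing in for one (much larger) simulation timestep.
volume = np.random.rand(256, 256, 256).astype(np.float32)

# Downsample by keeping every 4th sample along each axis (a crude cull/decimate step).
preview = volume[::4, ::4, ::4]

# Quantize to 16-bit floats and store compressed: cheap and lossy, good enough for
# look development, but not a substitute for the full-resolution data in analysis.
np.savez_compressed("preview_timestep.npz", field=preview.astype(np.float16))

print(volume.nbytes / 1e6, "MB full ->", preview.astype(np.float16).nbytes / 1e6, "MB preview")
```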
SVS also works closely with its NASA partner groups Conceptual Image Lab and Goddard Media Studios to publish a diverse array of content. Conceptual Image Lab focuses more on the artistic side of things – producing high-fidelity renders using film animation and visual design techniques, according to NASA. Where the SVS primarily focuses on making data-based visualizations, CIL puts more emphasis on conceptual visualizations – producing animations featuring NASA spacecraft, planetary observations and simulations, according to NASA. Goddard Media Studios, on the other hand, is more focused towards public outreach – producing interviews, TV programs and documentaries. GMS continues to be the main producers behind NASA TV, and as such, much of their content is aimed towards the general public.

An impact crater on the moon.
Image of Mars showing a partly shadowed Olympus Mons toward the upper left of the image.
Mars. Hellas Basin can be seen in the lower right portion of the image.
Mars slightly tilted to show the Martian North Pole.
Christensen notes, “One of the more unique challenges in this field is one of bringing people from very different backgrounds to agree on a common outcome. I work on teams with scientists, communicators and technologists, and we all have different communities we’re trying to satisfy. For instance, communicators are generally trying to simplify animations so their learning goal is clear, but scientists will insist that we add text and annotations on top of the video to eliminate ambiguity and avoid misinterpretations. Often, the technologist will have to say we can’t zoom in or look at the data in a certain way because it will show the data boundaries or data resolution limits. Every shot is a negotiation, but in trying to compromise, we often push the boundaries of what has been done before, which is exciting.”
    #hollywood #vfx #tools #space #exploration
  • Research roundup: 7 stories we almost missed

    Best of the rest

    Also: drumming chimpanzees, picking styles of two jazz greats, and an ancient underground city's soundscape

    Jennifer Ouellette



    May 31, 2025 5:37 pm


Time lapse photos show a new ping-pong-playing robot performing a top spin. Credit: David Nguyen, Kendrick Cancio and Sangbae Kim


It's a regrettable reality that there is never time to cover all the interesting scientific stories we come across each month. In the past, we've featured year-end roundups of cool science stories we missed. This year, we're experimenting with a monthly collection. May's list includes a nifty experiment to make a predicted effect of special relativity visible; a ping-pong playing robot that can return hits with 88 percent accuracy; and the discovery of the rare genetic mutation that makes orange cats orange, among other highlights.
    Special relativity made visible

Credit: TU Wien

Perhaps the best-known features of Albert Einstein's special theory of relativity are time dilation and length contraction. In 1959, two physicists predicted another feature of relativistic motion: an object moving near the speed of light should also appear to be rotated. It had not been possible to demonstrate this experimentally, however—until now. Physicists at the Vienna University of Technology figured out how to reproduce this rotational effect in the lab using laser pulses and precision cameras, according to a paper published in the journal Communications Physics.
They found their inspiration in art: an earlier project with artist Enar de Dios Rodriguez, who had worked with VUT and the University of Vienna on ultra-fast photography and slow light. For this latest research, they used objects shaped like a cube and a sphere and moved them around the lab while zapping them with ultrashort laser pulses, recording the flashes with a high-speed camera.
Getting the timing just right effectively simulates a speed of light of only 2 meters per second. After photographing the objects many times using this method, the team combined the still images into a single image. The result: the cube looked twisted and the sphere's North Pole was in a different location—a demonstration of the rotational effect predicted back in 1959.
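For the idealized textbook case of an object viewed perpendicular to its motion at closest approach, the apparent Terrell rotation angle satisfies sin(theta) = v/c. A quick Python check of that relation (this is the generic formula, not the paper's analysis of the lab setup):

```python
import math

def apparent_rotation_deg(beta: float) -> float:
    """Terrell rotation angle, in degrees, for an object seen perpendicular to
    its motion at closest approach, where beta = v/c (idealized textbook case)."""
    return math.degrees(math.asin(beta))

for beta in (0.5, 0.9, 0.99):
    print(f"v = {beta:.2f} c  ->  apparent rotation ~ {apparent_rotation_deg(beta):.1f} deg")
# 0.50 c -> 30.0 deg, 0.90 c -> 64.2 deg, 0.99 c -> 81.9 deg
```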

DOI: Communications Physics, 2025. 10.1038/s42005-025-02003-6
    Drumming chimpanzees

    A chimpanzee feeling the rhythm. Credit: Current Biology/Eleuteri et al., 2025.

Chimpanzees are known to "drum" on the roots of trees as a means of communication, often combining that action with what are known as "pant-hoot" vocalizations. Scientists have found that the chimps' drumming exhibits key elements of musical rhythm much like humans, according to a paper published in the journal Current Biology—specifically non-random timing and isochrony. And chimps from different geographical regions have different drumming rhythms.
Back in 2022, the same team observed that individual chimps had unique styles of "buttress drumming," which served as a kind of communication, letting others in the same group know their identity, location, and activity. This time around they wanted to know if this was also true of chimps living in different groups and whether their drumming was rhythmic in nature. So they collected video footage of the drumming behavior among 11 chimpanzee communities across six populations in East Africa and West Africa, amounting to 371 drumming bouts.
    Their analysis of the drum patterns confirmed their hypothesis. The western chimps drummed in regularly spaced hits, used faster tempos, and started drumming earlier during their pant-hoot vocalizations. Eastern chimps would alternate between shorter and longer spaced hits. Since this kind of rhythmic percussion is one of the earliest evolved forms of human musical expression and is ubiquitous across cultures, findings such as this could shed light on how our love of rhythm evolved.
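One generic way to quantify properties like isochrony is to look at inter-onset intervals: evenly spaced hits give a low coefficient of variation and neighboring-interval ratios near 0.5, while alternating short/long hits give bimodal ratios. The sketch below illustrates that idea with made-up onset times; it is not the study's actual metric set.

```python
import numpy as np

def rhythm_stats(onsets_s):
    """Crude rhythm descriptors from drum-hit onset times (in seconds)."""
    ioi = np.diff(np.asarray(onsets_s, dtype=float))   # inter-onset intervals
    cv = ioi.std() / ioi.mean()                        # low CV ~ isochronous drumming
    ratios = ioi[:-1] / (ioi[:-1] + ioi[1:])           # ~0.5 means evenly spaced hits
    return cv, ratios

# Evenly spaced hits vs. alternating short/long hits (toy data, not chimp recordings).
even = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
alternating = [0.0, 0.3, 1.0, 1.3, 2.0, 2.3]
for name, bout in (("even", even), ("alternating", alternating)):
    cv, ratios = rhythm_stats(bout)
    print(name, "CV:", round(float(cv), 2), "ratios:", np.round(ratios, 2))
```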
DOI: Current Biology, 2025. 10.1016/j.cub.2025.04.019
    Distinctive styles of two jazz greats

    Jazz lovers likely need no introduction to Joe Pass and Wes Montgomery, 20th century guitarists who influenced generations of jazz musicians with their innovative techniques. Montgomery, for instance, didn't use a pick, preferring to pluck the strings with his thumb—a method he developed because he practiced at night after working all day as a machinist and didn't want to wake his children or neighbors. Pass developed his own range of picking techniques, including fingerpicking, hybrid picking, and "flat picking."
Chirag Gokani and Preston Wilson, both with Applied Research Laboratories and the University of Texas, Austin, greatly admired both Pass and Montgomery and decided to explore the acoustics underlying their distinctive playing, modeling the interactions of the thumb, fingers, and pick with a guitar string. They described their research during a meeting of the Acoustical Society of America in New Orleans, LA.
Among their findings: Montgomery achieved his warm tone by playing closer to the bridge and mostly plucking at the string. Pass's rich tone arose from a combination of using a pick and playing closer to the guitar neck. There were also differences in how much a thumb, finger, and pick slip off the string: use of the thumb produced more of a "pluck" compared to the pick, which produced more of a "strike." Gokani and Wilson think their model could be used to synthesize digital guitars with a more realistic sound, as well as helping guitarists better emulate Pass and Montgomery.
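Their model isn't reproduced here, but the flavor of "pluck" versus "strike" can be imitated with the standard Karplus-Strong string-synthesis loop by changing only the excitation: a broadband noise burst for a thumb-like pluck, a sharp impulse for a pick-like strike. Purely illustrative, and not Gokani and Wilson's model:

```python
import numpy as np

def karplus_strong(excitation: np.ndarray, n_samples: int, delay: int, damping: float = 0.996) -> np.ndarray:
    """Minimal Karplus-Strong string loop: a delay line with averaging (low-pass) feedback."""
    out = np.zeros(n_samples, dtype=float)
    buf = np.zeros(delay, dtype=float)
    buf[: min(delay, excitation.size)] = excitation[:delay]   # load the excitation
    idx = 0
    for n in range(n_samples):
        out[n] = buf[idx]
        nxt = (idx + 1) % delay
        buf[idx] = damping * 0.5 * (buf[idx] + buf[nxt])      # average neighboring samples
        idx = nxt
    return out

sr, freq = 44100, 196.0                    # roughly a guitar G string
delay = int(sr / freq)
pluck = np.random.uniform(-1, 1, delay)    # broadband burst ~ thumb/finger pluck
strike = np.zeros(delay)
strike[0] = 1.0                            # sharp impulse ~ pick strike
tone_pluck = karplus_strong(pluck, sr, delay)
tone_strike = karplus_strong(strike, sr, delay)
```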
    Sounds of an ancient underground city

Credit: Sezin Nas

Turkey is home to the underground city Derinkuyu, originally carved out inside soft volcanic rock around the 8th century BCE. It was later expanded to include four main ventilation channels serving seven levels, which could be closed off from the inside with a large rolling stone. The city could hold up to 20,000 people and was connected to another underground city, Kaymakli, via tunnels. Derinkuyu sheltered local inhabitants during the Arab-Byzantine wars, served as a refuge from the Ottomans in the 14th century, and was a haven for Armenians escaping persecution in the early 20th century, among other functions.

The tunnels were rediscovered in the 1960s and about half of the city has been open to visitors since 2016. The site is naturally of great archaeological interest, but there has been little to no research on the acoustics of the site, particularly the ventilation channels—one of Derinkuyu's most unique features, according to Sezin Nas, an architectural acoustician at Istanbul Galata University in Turkey. She gave a talk at a meeting of the Acoustical Society of America in New Orleans, LA, about her work on the site's acoustic environment.
    Nas analyzed a church, a living area, and a kitchen, measuring sound sources and reverberation patterns, among other factors, to create a 3D virtual soundscape. The hope is that a better understanding of this aspect of Derinkuyu could improve the design of future underground urban spaces—as well as one day using her virtual soundscape to enable visitors to experience the sounds of the city themselves.
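A first-order way to characterize reverberation in rooms like the ones Nas measured is Sabine's formula, RT60 = 0.161 * V / A, where V is the room volume and A the total absorption. The dimensions and absorption coefficients below are invented for illustration and are not her measurements:

```python
def rt60_sabine(volume_m3: float, surfaces) -> float:
    """Sabine estimate of reverberation time: RT60 = 0.161 * V / A,
    where A is the sum of (surface area * absorption coefficient)."""
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

# Made-up small rock-cut room: 5 m x 4 m x 3 m with hard stone surfaces (alpha ~ 0.03).
room_volume = 5 * 4 * 3
surfaces = [(2 * (5 * 3 + 4 * 3), 0.03),   # walls
            (5 * 4, 0.03),                 # floor
            (5 * 4, 0.03)]                 # ceiling
print(f"RT60 ~ {rt60_sabine(room_volume, surfaces):.1f} s")   # ~3.4 s: very live room
```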
    MIT's latest ping-pong robot
    Robots playing ping-pong have been a thing since the 1980s, of particular interest to scientists because it requires the robot to combine the slow, precise ability to grasp and pick up objects with dynamic, adaptable locomotion. Such robots need high-speed machine vision, fast motors and actuators, precise control, and the ability to make accurate predictions in real time, not to mention being able to develop a game strategy. More recent designs use AI techniques to allow the robots to "learn" from prior data to improve their performance.
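The "accurate predictions in real time" part usually amounts to estimating where and when the ball will cross the robot's hitting plane from a few tracked positions. Here is a deliberately simplified ballistic sketch (no spin, no air drag, invented numbers), not MIT's actual controller:

```python
# Predict where a ping-pong ball crosses the robot's hitting plane, assuming simple
# projectile motion. Real systems must also account for spin, drag and the bounce.
G = 9.81  # m/s^2

def predict_intercept(pos, vel, plane_x: float):
    """pos, vel: (x, y, z) position in meters and velocity in m/s at the last observation."""
    t = (plane_x - pos[0]) / vel[0]                    # time until the ball reaches the plane
    y = pos[1] + vel[1] * t                            # lateral position at the plane
    z = pos[2] + vel[2] * t - 0.5 * G * t ** 2         # height, with gravity pulling the ball down
    return t, y, z

# Ball observed 2 m away, flying toward the paddle plane at x = 0 (all values invented).
t, y, z = predict_intercept(pos=(2.0, 0.1, 0.3), vel=(-4.0, 0.2, 2.0), plane_x=0.0)
print(f"intercept in {t:.2f} s at y = {y:.2f} m, z = {z:.2f} m")
```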
    MIT researchers have built their own version of a ping-pong playing robot, incorporating a lightweight design and the ability to precisely return shots. They built on prior work developing the Humanoid, a small bipedal two-armed robot—specifically, modifying the Humanoid's arm by adding an extra degree of freedom to the wrist so the robot could control a ping-pong paddle. They tested their robot by mounting it on a ping-pong table and lobbing 150 balls at it from the other side of the table, capturing the action with high-speed cameras.

The new bot can execute three different swing types and during the trial runs it returned the ball with impressive accuracy across all three types: 88.4 percent, 89.2 percent, and 87.5 percent, respectively. Subsequent tweaks to their system brought the robot's strike speed up to 19 meters per second, close to the 12 to 25 meters per second of advanced human players. The addition of control algorithms gave the robot the ability to aim. The robot still has limited mobility and reach because it has to be fixed to the ping-pong table, but the MIT researchers plan to rig it to a gantry or wheeled platform in the future to address that shortcoming.
    Why orange cats are orange

Credit: Astropulse/CC BY-SA 3.0

    Cat lovers know orange cats are special for more than their unique coloring, but that's the quality that has intrigued scientists for almost a century. Sure, lots of animals have orange, ginger, or yellow hues, like tigers, orangutans, and golden retrievers. But in domestic cats that color is specifically linked to sex. Almost all orange cats are male. Scientists have now identified the genetic mutation responsible and it appears to be unique to cats, according to a paper published in the journal Current Biology.
Prior work had narrowed down the region on the X chromosome most likely to contain the relevant mutation. The scientists knew that females usually have just one copy of the mutation and in that case have tortoiseshell coloring, although in rare cases, a female cat will be orange if both X chromosomes have the mutation. Over the last five to ten years, there has been an explosion in genome resources for cats, which greatly aided the team's research, along with taking additional DNA samples from cats at spay and neuter clinics.

From an initial pool of 51 candidate variants, the scientists narrowed it down to three genes, only one of which was likely to play any role in gene regulation: Arhgap36. It wasn't known to play any role in pigment cells in humans, mice, or non-orange cats. But orange cats are special; their mutation turns on Arhgap36 expression in pigment cells, thereby interfering with the molecular pathway that controls coat color in other orange-shaded mammals. The scientists suggest that this is an example of how genes can acquire new functions, thereby enabling species to better adapt and evolve.
DOI: Current Biology, 2025. 10.1016/j.cub.2025.03.075
    Not a Roman "massacre" after all

Credit: Martin Smith

    In 1936, archaeologists excavating the Iron Age hill fort Maiden Castle in the UK unearthed dozens of human skeletons, all showing signs of lethal injuries to the head and upper body—likely inflicted with weaponry. At the time, this was interpreted as evidence of a pitched battle between the Britons of the local Durotriges tribe and invading Romans. The Romans slaughtered the native inhabitants, thereby bringing a sudden violent end to the Iron Age. At least that's the popular narrative that has prevailed ever since in countless popular articles, books, and documentaries.
    But a paper published in the Oxford Journal of Archaeology calls that narrative into question. Archaeologists at Bournemouth University have re-analyzed those burials, incorporating radiocarbon dating into their efforts. They concluded that those individuals didn't die in a single brutal battle. Rather, it was Britons killing other Britons over multiple generations between the first century BCE and the first century CE—most likely in periodic localized outbursts of violence in the lead-up to the Roman conquest of Britain. It's possible there are still many human remains waiting to be discovered at the site, which could shed further light on what happened at Maiden Castle.
DOI: Oxford Journal of Archaeology, 2025. 10.1111/ojoa.12324

Jennifer Ouellette
Senior Writer

    Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

    #research #roundup #stories #almost #missed
    ARSTECHNICA.COM
    Research roundup: 7 stories we almost missed
    Best of the rest

    Also: drumming chimpanzees, picking styles of two jazz greats, and an ancient underground city's soundscape.

    Jennifer Ouellette – May 31, 2025 5:37 pm

    Time-lapse photos show a new ping-pong-playing robot performing a top spin. Credit: David Nguyen, Kendrick Cancio and Sangbae Kim

It's a regrettable reality that there is never time to cover all the interesting scientific stories we come across each month. In the past, we've featured year-end roundups of cool science stories we (almost) missed. This year, we're experimenting with a monthly collection. May's list includes a nifty experiment to make a predicted effect of special relativity visible; a ping-pong-playing robot that can return hits with 88 percent accuracy; and the discovery of the rare genetic mutation that makes orange cats orange, among other highlights.

Special relativity made visible

Credit: TU Wien

Perhaps the best-known consequences of Albert Einstein's special theory of relativity are time dilation and length contraction. In 1959, two physicists predicted another feature of relativistic motion: an object moving near the speed of light should also appear to be rotated. It had not been possible to demonstrate this experimentally, however—until now. Physicists at the Vienna University of Technology figured out how to reproduce this rotational effect in the lab using laser pulses and precision cameras, according to a paper published in the journal Communications Physics. They found their inspiration in art, specifically an earlier collaboration with artist Enar de Dios Rodriguez, VUT, and the University of Vienna on a project involving ultra-fast photography and slow light. For this latest research, they used objects shaped like a cube and a sphere and moved them around the lab while zapping them with ultrashort laser pulses, recording the flashes with a high-speed camera. Getting the timing just right effectively simulates a light speed of just 2 m/s. After photographing the objects many times using this method, the team combined the still images into a single image. The results: the cube looked twisted and the sphere's north pole was in a different location—a demonstration of the rotational effect predicted back in 1959. DOI: Communications Physics, 2025. 10.1038/s42005-025-02003-6 (About DOIs).

Drumming chimpanzees

A chimpanzee feeling the rhythm. Credit: Current Biology/Eleuteri et al., 2025.

Chimpanzees are known to "drum" on the roots of trees as a means of communication, often combining that action with "pant-hoot" vocalizations. Scientists have found that the chimps' drumming exhibits key elements of musical rhythm much like humans do, specifically non-random timing and isochrony, according to a paper published in the journal Current Biology. And chimps from different geographical regions have different drumming rhythms. Back in 2022, the same team observed that individual chimps had unique styles of "buttress drumming," which served as a kind of communication, letting others in the same group know their identity, location, and activity.
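A rough back-of-the-envelope check of the rotation effect described in the relativity item above, using the textbook Terrell-Penrose geometry rather than the Vienna team's actual analysis: for a small object passing transversely at speed v, the visible face is foreshortened while part of the trailing face comes into view, and the net result looks like a rotation by roughly arcsin(v/c). The effective light speed of 2 m/s matches the figure quoted above, but the object speeds below are illustrative assumptions, not numbers from the paper.

```python
import math

def apparent_rotation_deg(v: float, c: float) -> float:
    """Terrell-Penrose apparent rotation (degrees) for a small object
    moving transversely past a distant observer at speed v, given a
    light speed c: the image appears rotated by arcsin(v/c)."""
    beta = v / c
    if not 0 <= beta < 1:
        raise ValueError("speed must satisfy 0 <= v < c")
    return math.degrees(math.asin(beta))

# Illustrative only: effective light speed of 2 m/s (as in the experiment)
# and a few assumed object speeds.
c_eff = 2.0  # m/s
for v in (0.5, 1.0, 1.6, 1.9):
    print(f"v = {v:.1f} m/s -> apparent rotation ~ {apparent_rotation_deg(v, c_eff):.1f} degrees")
```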
This time around, the researchers wanted to know whether this was also true of chimps living in different groups and whether their drumming was rhythmic in nature. So they collected video footage of drumming behavior in 11 chimpanzee communities across six populations in East Africa (Uganda) and West Africa (Ivory Coast), amounting to 371 drumming bouts. Their analysis of the drum patterns confirmed their hypothesis. The western chimps drummed in regularly spaced hits, used faster tempos, and started drumming earlier during their pant-hoot vocalizations. Eastern chimps would alternate between shorter and longer spaced hits. Since this kind of rhythmic percussion is one of the earliest evolved forms of human musical expression and is ubiquitous across cultures, findings such as this could shed light on how our love of rhythm evolved. DOI: Current Biology, 2025. 10.1016/j.cub.2025.04.019 (About DOIs).

Distinctive styles of two jazz greats

Jazz lovers likely need no introduction to Joe Pass and Wes Montgomery, 20th-century guitarists who influenced generations of jazz musicians with their innovative techniques. Montgomery, for instance, didn't use a pick, preferring to pluck the strings with his thumb—a method he developed because he practiced at night after working all day as a machinist and didn't want to wake his children or neighbors. Pass developed his own range of picking techniques, including fingerpicking, hybrid picking, and "flat picking." Chirag Gokani and Preston Wilson, both with Applied Research Laboratories at the University of Texas at Austin, greatly admired both Pass and Montgomery and decided to explore the acoustics underlying their distinctive playing, modeling the interactions of the thumb, fingers, and pick with a guitar string. They described their research during a meeting of the Acoustical Society of America in New Orleans, LA. Among their findings: Montgomery achieved his warm tone by playing closer to the bridge and mostly plucking at the string. Pass's rich tone arose from a combination of using a pick and playing closer to the guitar neck. There were also differences in how much the thumb, finger, and pick slip off the string: use of the thumb (Montgomery) produced more of a "pluck," compared to the pick (Pass), which produced more of a "strike." Gokani and Wilson think their model could be used to synthesize digital guitars with a more realistic sound, as well as to help guitarists better emulate Pass and Montgomery.

Sounds of an ancient underground city

Credit: Sezin Nas

Turkey is home to the underground city Derinkuyu, originally carved out of soft volcanic rock around the 8th century BCE. It was later expanded to include four main ventilation channels (and some 50,000 smaller shafts) serving seven levels, which could be closed off from the inside with a large rolling stone. The city could hold up to 20,000 people, and it was connected to another underground city, Kaymakli, via tunnels. Derinkuyu sheltered inhabitants during the Arab-Byzantine wars, served as a refuge from the Ottomans in the 14th century, and was a haven for Armenians escaping persecution in the early 20th century, among other functions. The tunnels were rediscovered in the 1960s, and about half of the city has been open to visitors since 2016.
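To make the rhythm terms in the chimp-drumming item above concrete, here is a minimal sketch, not the study's actual analysis pipeline, of one simple way to quantify isochrony: compute the inter-onset intervals between successive drum hits and check how evenly spaced they are. A coefficient of variation near zero means near-isochronous, regularly spaced hits, while alternating short and long gaps pushes it up. The onset times below are made up for illustration.

```python
import statistics

def interonset_intervals(onsets):
    """Intervals (in seconds) between consecutive drum-hit onset times."""
    return [b - a for a, b in zip(onsets, onsets[1:])]

def isochrony_cv(onsets):
    """Coefficient of variation of the inter-onset intervals.
    Values near 0 indicate evenly spaced (isochronous) drumming."""
    iois = interonset_intervals(onsets)
    return statistics.stdev(iois) / statistics.mean(iois)

# Hypothetical drumming bouts (onset times in seconds), for illustration only.
regular_bout = [0.00, 0.21, 0.40, 0.61, 0.80, 1.01]      # evenly spaced hits
alternating_bout = [0.00, 0.15, 0.50, 0.65, 1.00, 1.15]  # short/long alternation
print(f"regular bout CV:     {isochrony_cv(regular_bout):.2f}")
print(f"alternating bout CV: {isochrony_cv(alternating_bout):.2f}")
```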
Derinkuyu is naturally of great archaeological interest, but there has been little to no research on its acoustics, particularly the ventilation channels—one of its most unique features, according to Sezin Nas, an architectural acoustician at Istanbul Galata University in Turkey. Nas gave a talk at a meeting of the Acoustical Society of America in New Orleans, LA, about her work on the site's acoustic environment. She analyzed a church, a living area, and a kitchen, measuring sound sources and reverberation patterns, among other factors, to create a 3D virtual soundscape. The hope is that a better understanding of this aspect of Derinkuyu could improve the design of future underground urban spaces—and that her virtual soundscape could one day enable visitors to experience the sounds of the city for themselves.

MIT's latest ping-pong robot

Robots playing ping-pong have been a thing since the 1980s, of particular interest to scientists because the game requires a robot to combine the slow, precise ability to grasp and pick up objects with dynamic, adaptable locomotion. Such robots need high-speed machine vision, fast motors and actuators, precise control, and the ability to make accurate predictions in real time, not to mention being able to develop a game strategy. More recent designs use AI techniques to allow the robots to "learn" from prior data to improve their performance. MIT researchers have built their own version of a ping-pong-playing robot, incorporating a lightweight design and the ability to precisely return shots. They built on prior work developing the Humanoid, a small bipedal two-armed robot—specifically, modifying the Humanoid's arm by adding an extra degree of freedom to the wrist so the robot could control a ping-pong paddle. They tested their robot by mounting it on a ping-pong table and lobbing 150 balls at it from the other side of the table, capturing the action with high-speed cameras. The new bot can execute three different swing types (loop, drive, and chip), and during the trial runs it returned the ball with impressive accuracy across all three types: 88.4 percent, 89.2 percent, and 87.5 percent, respectively. Subsequent tweaks to their system brought the robot's strike speed up to 19 meters per second (about 42 MPH), close to the 12 to 25 meters per second of advanced human players. The addition of control algorithms gave the robot the ability to aim. The robot still has limited mobility and reach because it has to be fixed to the ping-pong table, but the MIT researchers plan to rig it to a gantry or wheeled platform in the future to address that shortcoming.

Why orange cats are orange

Credit: Astropulse/CC BY-SA 3.0

Cat lovers know orange cats are special for more than their unique coloring, but that's the quality that has intrigued scientists for almost a century. Sure, lots of animals have orange, ginger, or yellow hues, like tigers, orangutans, and golden retrievers. But in domestic cats that color is specifically linked to sex: almost all orange cats are male. Scientists have now identified the genetic mutation responsible, and it appears to be unique to cats, according to a paper published in the journal Current Biology. Prior work had narrowed down the region on the X chromosome most likely to contain the relevant mutation.
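As a quick sanity check on the ping-pong robot's numbers, using only the figures quoted above plus the assumption that roughly equal numbers of balls were hit with each swing type: averaging the three per-swing return rates reproduces the roughly 88 percent figure cited at the top of the roundup, and 19 meters per second works out to about 42 miles per hour.

```python
# Figures quoted in the roundup; an equal split of balls across swing types
# is an assumption made here for the average.
return_rates = {"loop": 88.4, "drive": 89.2, "chip": 87.5}  # percent returned
strike_speed_mps = 19.0                                     # meters per second

overall_rate = sum(return_rates.values()) / len(return_rates)
strike_speed_mph = strike_speed_mps * 3600 / 1609.344       # meters/second -> miles/hour

print(f"overall return rate ~ {overall_rate:.1f}%")   # ~88.4%
print(f"strike speed ~ {strike_speed_mph:.0f} mph")   # ~42 mph
```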
The researchers knew that female cats usually have just one copy of the mutation and in that case have tortoiseshell (partially orange) coloring, although in rare cases a female cat will be orange if both X chromosomes carry the mutation. Over the last five to ten years, there has been an explosion in genome resources (including complete sequenced genomes) for cats, which greatly aided the team's research, along with additional DNA samples taken from cats at spay and neuter clinics. From an initial pool of 51 candidate variants, the scientists narrowed it down to three genes, only one of which was likely to play any role in gene regulation: Arhgap36. It wasn't known to play any role in pigment cells in humans, mice, or non-orange cats. But orange cats are special; their mutation (sex-linked orange) turns on Arhgap36 expression in pigment cells (and only pigment cells), thereby interfering with the molecular pathway that controls coat color in other orange-shaded mammals. The scientists suggest that this is an example of how genes can acquire new functions, thereby enabling species to better adapt and evolve. DOI: Current Biology, 2025. 10.1016/j.cub.2025.03.075 (About DOIs).

Not a Roman "massacre" after all

Credit: Martin Smith

In 1936, archaeologists excavating the Iron Age hill fort Maiden Castle in the UK unearthed dozens of human skeletons, all showing signs of lethal injuries to the head and upper body—likely inflicted with weaponry. At the time, this was interpreted as evidence of a pitched battle between the Britons of the local Durotriges tribe and invading Romans: the Romans slaughtered the native inhabitants, bringing a sudden, violent end to the Iron Age. At least that's the popular narrative that has prevailed ever since in countless popular articles, books, and documentaries. But a paper published in the Oxford Journal of Archaeology calls that narrative into question. Archaeologists at Bournemouth University have re-analyzed those burials, incorporating radiocarbon dating into their efforts. They concluded that the individuals didn't die in a single brutal battle. Rather, it was Britons killing other Britons over multiple generations between the first century BCE and the first century CE—most likely in periodic localized outbursts of violence in the lead-up to the Roman conquest of Britain. It's possible there are still many human remains waiting to be discovered at the site, which could shed further light on what happened at Maiden Castle. DOI: Oxford Journal of Archaeology, 2025. 10.1111/ojoa.12324 (About DOIs).

Jennifer Ouellette is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.
    13 Comments 0 Shares 0 Reviews
  • A new atomic clock in space could help us measure elevations on Earth

    WWW.TECHNOLOGYREVIEW.COM
    In 2003, engineers from Germany and Switzerland began building a bridge across the Rhine River simultaneously from both sides. Months into construction, they found that the two sides did not meet. The German side hovered 54 centimeters above the Swiss side.

The misalignment occurred because the German engineers had measured elevation with a historic level of the North Sea as its zero point, while the Swiss ones had used the Mediterranean Sea, which was 27 centimeters lower. We may speak colloquially of elevations with respect to “sea level,” but Earth’s seas are actually not level. “The sea level is varying from location to location,” says Laura Sanchez, a geodesist at the Technical University of Munich in Germany. (Geodesists study our planet’s shape, orientation, and gravitational field.) While the two teams knew about the 27-centimeter difference, they mixed up which side was higher. Ultimately, Germany lowered its side to complete the bridge.

To prevent such costly construction errors, in 2015 scientists in the International Association of Geodesy voted to adopt the International Height Reference Frame, or IHRF, a worldwide standard for elevation. It’s the third-dimensional counterpart to latitude and longitude, says Sanchez, who helps coordinate the standardization effort. Now, a decade after its adoption, geodesists are looking to update the standard—by using the most precise clock ever to fly in space.

That clock, called the Atomic Clock Ensemble in Space, or ACES, launched into orbit from Florida last month, bound for the International Space Station. ACES, which was built by the European Space Agency, consists of two connected atomic clocks, one containing cesium atoms and the other containing hydrogen, combined to produce a single set of ticks with higher precision than either clock alone.

Pendulum clocks are only accurate to about a second per day, as the rate at which a pendulum swings can vary with humidity, temperature, and the weight of extra dust. Atomic clocks in current GPS satellites will lose or gain a second on average every 3,000 years. ACES, on the other hand, “will not lose or gain a second in 300 million years,” says Luigi Cacciapuoti, an ESA physicist who helped build and launch the device. (In 2022, China installed a potentially stabler clock on its space station, but the Chinese government has not publicly shared the clock’s performance after launch, according to Cacciapuoti.)

From space, ACES will link to some of the most accurate clocks on Earth to create a synchronized clock network, which will support its main purpose: to perform tests of fundamental physics. But it’s of special interest for geodesists because it can be used to make gravitational measurements that will help establish a more precise zero point from which to measure elevation across the world.

Alignment over this “zero point” (basically where you stick the end of the tape measure to measure elevation) is important for international collaboration. It makes it easier, for example, to monitor and compare sea-level changes around the world. It is especially useful for building infrastructure involving flowing water, such as dams and canals. In 2020, the international height standard even resolved a long-standing dispute between China and Nepal over Mount Everest’s height. For years, China said the mountain was 8,844.43 meters; Nepal measured it at 8,848. Using the IHRF, the two countries finally agreed that the mountain was 8,848.86 meters.
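To put those clock comparisons in the figure of merit physicists usually quote, you can convert "loses or gains a second every N years" into a dimensionless fractional error: one second divided by N years expressed in seconds. The sketch below does that for the pendulum, GPS, and ACES figures quoted above; the arithmetic is mine, not ESA's.

```python
SECONDS_PER_YEAR = 365.25 * 86400  # seconds in a Julian year

def fractional_error(seconds_off: float, over_seconds: float) -> float:
    """Clock error expressed as a dimensionless fraction of elapsed time."""
    return seconds_off / over_seconds

clocks = {
    "pendulum clock (1 s per day)": fractional_error(1, 86400),
    "GPS atomic clock (1 s per 3,000 years)": fractional_error(1, 3_000 * SECONDS_PER_YEAR),
    "ACES (1 s per 300 million years)": fractional_error(1, 300e6 * SECONDS_PER_YEAR),
}
for name, frac in clocks.items():
    print(f"{name}: ~{frac:.1e}")  # ~1.2e-05, ~1.1e-11, ~1.1e-16
```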
A worker performs tests on ACES at a cleanroom at the Kennedy Space Center in Florida. Credit: ESA-T. Peignier

To create a standard zero point, geodesists create a model of Earth known as a geoid. Every point on the surface of this lumpy, potato-shaped model experiences the same gravity, which means that if you dug a canal at the height of the geoid, the water within the canal would be level and would not flow. Distance from the geoid establishes a global system for altitude.

However, the current model lacks precision, particularly in Africa and South America, says Sanchez. Today’s geoid has been built using instruments that directly measure Earth’s gravity. These have been carried on satellites, which excel at getting a global but low-resolution view, and have also been used to get finer details via expensive ground- and airplane-based surveys. But geodesists have not had the funding to survey Africa and South America as extensively as other parts of the world, particularly in difficult terrain such as the Amazon rainforest and Sahara Desert.

To understand the discrepancy in precision, imagine a bridge that spans Africa from the Mediterranean coast to Cape Town, South Africa. If it’s built using the current geoid, the two ends of the bridge will be misaligned by tens of centimeters. In comparison, you’d be off by at most five centimeters if you were building a bridge spanning North America.

To improve the geoid’s precision, geodesists want to create a worldwide network of clocks, synchronized from space. The idea works according to Einstein’s theory of general relativity, which states that the stronger the gravitational field, the more slowly time passes. The 2014 sci-fi movie Interstellar illustrates an extreme version of this so-called time dilation: Two astronauts spend a few hours in extreme gravity near a black hole to return to a shipmate who has aged more than two decades. Similarly, Earth’s gravity grows weaker the higher in elevation you are. Your feet, for example, experience slightly stronger gravity than your head when you’re standing. Assuming you live to be about 80 years old, over a lifetime your head will age tens of billionths of a second more than your feet.

A clock network would allow geodesists to compare the ticking of clocks all over the world. They could then use the variations in time to map Earth’s gravitational field much more precisely, and consequently create a more precise geoid. The most accurate clocks today are precise enough to measure variations in time that map onto centimeter-level differences in elevation.

“We want to have the accuracy level at the one-centimeter or sub-centimeter level,” says Jürgen Müller, a geodesist at Leibniz University Hannover in Germany. Specifically, geodesists would use the clock measurements to validate their geoid model, which they currently do with ground- and plane-based surveying techniques. They think that a clock network should be considerably less expensive.

ACES is just a first step. It is capable of measuring altitudes at various points around Earth with 10-centimeter precision, says Cacciapuoti. But the point of ACES is to prototype the clock network. It will demonstrate the optical and microwave technology needed to use a clock in space to connect some of the most advanced ground-based clocks together. In the next year or so, Müller plans to use ACES to connect to clocks on the ground, starting with three in Germany. Müller’s team could then make more precise measurements at the location of those clocks.
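The link between clock comparisons and heights in the passage above is the weak-field gravitational redshift relation Δf/f ≈ gΔh/c²: raise a clock by Δh near Earth's surface and it ticks faster by roughly that fraction. Plugging in numbers shows that a one-centimeter height difference corresponds to a fractional frequency shift of about 1e-18, which is why only the very best clocks can resolve centimeter-level elevation differences. This is a standard textbook approximation, sketched here for illustration rather than taken from the ACES team's analysis.

```python
G_SURFACE = 9.81          # m/s^2, gravitational acceleration at Earth's surface
C_LIGHT = 299_792_458.0   # m/s, speed of light

def fractional_shift(delta_h_m: float) -> float:
    """Weak-field gravitational redshift between two clocks separated
    vertically by delta_h_m meters near Earth's surface: df/f ~ g*dh/c^2."""
    return G_SURFACE * delta_h_m / C_LIGHT**2

for dh_cm in (1, 10, 100):
    print(f"height difference {dh_cm:3d} cm -> df/f ~ {fractional_shift(dh_cm / 100):.1e}")
    # prints ~1.1e-18, ~1.1e-17, ~1.1e-16
```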
Müller's early studies will pave the way for work connecting even more precise clocks than ACES to the network, ultimately leading to an improved geoid. The best clocks today are some 50 times more precise than ACES. “The exciting thing is that clocks are getting even stabler,” says Michael Bevis, a geodesist at Ohio State University, who was not involved with the project. A more precise geoid would allow engineers, for example, to build a canal with better control of its depth and flow, he says. However, he points out that in order for geodesists to take advantage of the clocks’ precision, they will also have to improve their mathematical models of Earth’s gravitational field.

Even starting to build this clock network has required decades of dedicated work by scientists and engineers. It took ESA three decades to make a clock as small as ACES that is suitable for space, says Cacciapuoti. This meant miniaturizing a clock the size of a laboratory into the size of a small fridge. “It was a huge engineering effort,” says Cacciapuoti, who has been working on the project since he began at ESA 20 years ago.

Geodesists expect they’ll need at least another decade to develop the clock network and launch more clocks into space. One possibility would be to slot the clocks onto GPS satellites. The timeline depends on the success of the ACES mission and the willingness of government agencies to invest, says Sanchez. But whatever the specifics, mapping the world takes time.
    0 Comments 0 Shares 0 Reviews
  • Blue Light Exposure Can Impact Sleep, Skin, and Eyes — Here's How to Shield Against It

    WWW.DISCOVERMAGAZINE.COM
    In today’s ever more connected world, it’s fair to say that some of us receive nearly as much screen time as we do actual sunlight — if not more, depending on your job and the time of year.A growing body of research shows that the blue light that these screens emit might have effects on human health, whether it’s our vision, skin, or our sleep.“Blue light has an effect on skin health and even the retina in the eyes,” says Kseniya Kobets, an assistant professor of medicine at Albert Einstein College of Medicine and the director of cosmetic dermatology at Montefiore Einstein Advanced Care.Blue Light ExposureBlue light sits in the light spectrum between ultraviolet, high-energy light, and other types of visible light that aren’t blue light and emit lower energy such as green, orange, and red light. About one third of all visible light falls into the blue light category, which is also called high-energy light (HEV).Most blue light we are exposed to comes directly from the sun. But LED lights and screens, whether it’s your television, computer, tablet, or smart phone, also emit blue light. While the amount screens emit is minimal compared to that from the sun, they are becoming increasingly ubiquitous in our lives, at all hours of the day. And some doctors are concerned that the way many hold their phones so close to their faces could also increase a negative effect.Read More: Does Blue Light Damage Skin?Is All Blue Light Bad for You?Blue light isn’t all bad. Some research has shown that low amounts of HEV can help decrease acne, for example, while other studies showed that limited exposure to the light may help some symptoms related to psoriasis and eczema, according to a review study.In fact, the U.S. Food and Drug Administration approved a wearable blue light device for the treatment of mild psoriasis.Some research has also found that blue light therapy might actually help treat certain types of skin cancer in a controlled treatment. But the relationship between blue light and cancer isn’t all beneficial.The Impact of Blue Light on Your SkinStudies on mice have shown that long-term exposure to blue light can also cause some of the conditions that lead to cancer, though the authors stated that more research is needed to confirm this.“Many of the effects of blue light on living organisms are unknown, and further research is required, including on methods of protection,” the authors stated. Blue light could cause some lesser skin problems as well, though. Kobets says that blue light can cause oxidative stress on the skin, which could cause premature skin aging and hyperpigmentation — a condition in which some skin patches become darker than others.“Most people want to avoid hyperpigmentation and uneven skin tone,” Kobets says.Other Effects of Blue LightIt’s possible that our exposure to too much blue light — especially outside of daylight hours — can suppress our production of melatonin, the hormone our body uses to help set its inner clock, or circadian rhythm. This essentially means that too much blue light at night could affect healthy sleep, Kobets says.All light can affect melatonin production, but blue light suppresses it more effectively, according to Harvard Health Publishing.Our eyes also aren’t very good at filtering out blue light. As a result, it reaches our retina, where it may damage cells. Serious exposure could also contribute to conditions like cataracts and vision loss from age-related macular degeneration. 
    Kids are more at risk since their eyes absorb more blue light than adults.
    Blue Light Protections
    The best way to limit blue light exposure is to lower your screen time — especially at night. But Kobets also says that people can take other steps to limit the potential damage of blue light. Sunscreen can help — even in the winter or indoors. “The oxidative stress from visible blue light and its effect on DNA damage and hyperpigmentation of the skin is one of the main reasons I recommend using [sun protection factor] daily,” she says.
    Even makeup might help, if it has the right components. “The best makeup is the one that offers tint cover-up which contains iron oxide plus has mineral [sun protection factor] to add to the protection,” Kobets says.
    Other steps to help reduce damage include face masks or glasses made to shield blue light, or simply lowering the brightness of your phone. You can also use a shield on your phone or computer screen that decreases the amount of blue light displayed.
    This article is not offering medical advice and should be used for informational purposes only.
    Article Sources
    Our writers at Discovermagazine.com use peer-reviewed studies and high-quality sources for our articles, and our editors review for scientific accuracy and editorial standards. Review the sources used below for this article:
    Journal of Cosmetic Dermatology. Blue light protection, part II—Ingredients and performance testing methods
    American Academy of Dermatology Association. Can a wearable blue-light device clear psoriasis?
    Iowa Healthcare. Blue-light therapy warding off skin cancer
    Harvard Health. Blue light has a dark side
    Joshua Rapp Learn is an award-winning D.C.-based science writer. An expat Albertan, he contributes to a number of science publications like National Geographic, The New York Times, The Guardian, New Scientist, Hakai, and others.
  • Meta hypes AI friends as social media’s future, but users want real connections

    Friend requests

    Meta hypes AI friends as social media’s future, but users want real connections

    Two visions for social media’s future pit real connections against AI friends.

    Ashley Belanger



    May 21, 2025 9:38 am


    Credit: Aurich Lawson | Getty Images


    If you ask the man who has largely shaped how friends and family connect on social media over the past two decades about the future of social media, you may not get a straight answer.
    At the Federal Trade Commission's monopoly trial, Meta CEO Mark Zuckerberg attempted what seemed like an artful dodge to avoid criticism that his company allegedly bought out rivals Instagram and WhatsApp to lock users into Meta's family of apps so they would never post about their personal lives anywhere else. He testified that people actually engage with social media less often these days to connect with loved ones, preferring instead to discover entertaining content on platforms to share in private messages with friends and family.
    As Zuckerberg spins it, Meta no longer perceives much advantage in dominating the so-called personal social networking market where Facebook made its name and cemented what the FTC alleged is an illegal monopoly.
    "Mark Zuckerberg says social media is over," a New Yorker headline said about this testimony in a report noting a Meta chart that seemed to back up Zuckerberg's words. That chart, shared at the trial, showed the "percent of time spent viewing content posted by 'friends'" had declined over the past two years, from 22 to 17 percent on Facebook and from 11 to 7 percent on Instagram.
    Supposedly because of this trend, Zuckerberg testified that "it doesn't matter much" if someone's friends are on their preferred platform. Every platform has its own value as a discovery engine, Zuckerberg suggested. And Meta platforms increasingly compete on this new playing field against rivals like TikTok, Meta argued, while insisting that it's not so much focused on beating the FTC's flagged rivals in the connecting-friends-and-family business, Snap and MeWe.
    But while Zuckerberg claims that hosting that kind of content doesn't move the needle much anymore, owning the biggest platforms that people use daily to connect with friends and family obviously still matters to Meta, MeWe founder Mark Weinstein told Ars. And Meta's own press releases seem to back that up.

    Weeks ahead of Zuckerberg's testimony, Meta announced that it would bring back the "magic of friends," introducing a "friends" tab to Facebook to make user experiences more like the original Facebook. The company intentionally diluted feeds with creator content and ads for the past two years, but it now appears intent on trying to spark more real conversations between friends and family, at least partly to fuel its newly launched AI chatbots.
    Those chatbots mine personal information shared on Facebook and Instagram, and Meta wants to use that data to connect more personally with users—but "in a very creepy way," The Washington Post wrote. In interviews, Zuckerberg has suggested these AI friends could "meaningfully" fill the void of real friendship online, as the average person has only three friends but "has demand" for up to 15. To critics seeking to undo Meta's alleged monopoly, this latest move could signal a contradiction in Zuckerberg's testimony, showing that the company is so invested in keeping users on its platforms that it's now creating AI friends (who can never leave its platform) to bait the loneliest among us into more engagement.
    "The average person wants more connectivity, connection, than they have," Zuckerberg said, hyping AI friends. For the Facebook founder, it must be hard to envision a future where his platforms aren't the answer to providing that basic social need. All this comes more than a decade after he sought billion in Facebook's 2012 initial public offering so that he could keep building tools that he told investors would expand "people's capacity to build and maintain relationships."
    At the trial, Zuckerberg testified that AI and augmented reality will be key fixtures of Meta's platforms in the future, predicting that "several years from now, you are going to be scrolling through your feed, and not only is it going to be sort of animated, but it will be interactive."

    Meta declined to comment further on the company's vision for social media's future. In a statement, a Meta spokesperson told Ars that "the FTC’s lawsuit against Meta defies reality," claiming that it threatens US leadership in AI and insisting that evidence at trial would establish that platforms like TikTok, YouTube, and X are Meta's true rivals.
    "More than 10 years after the FTC reviewed and cleared our acquisitions, the Commission’s action in this case sends the message that no deal is ever truly final," Meta's spokesperson said. "Regulators should be supporting American innovation rather than seeking to break up a great American company and further advantaging China on critical issues like AI.”

    Meta faces calls to open up its platforms
    Weinstein, the MeWe founder, told Ars that back in the 1990s when the original social media founders were planning the first community portals, "it was so beautiful because we didn't think about bots and trolls. We didn't think about data mining and surveillance capitalism. We thought about making the world a more connected and holistic place."
    But those who became social media overlords found more money in walled gardens and increasingly cut off attempts by outside developers to improve the biggest platforms' functionality or leverage their platforms to compete for their users' attention. Born of this era, Weinstein expects that Zuckerberg, and therefore Meta, will always cling to its friends-and-family roots, no matter which way Zuckerberg says the wind is blowing.
    Meta "is still entirely based on personal social networking," Weinstein told Ars.
    In a Newsweek op-ed, Weinstein explained that he left MeWe in 2021 after "competition became impossible" with Meta. It was a time when MeWe faced backlash over lax content moderation, drawing comparisons between its service and right-wing apps like Gab or Parler. Weinstein rejected those comparisons, seeing his platform as an ideal Facebook rival and remaining a board member through the app's more recent shift to decentralization. Still defending MeWe's failed efforts to beat Facebook, he submitted hundreds of documents and was deposed in the monopoly trial, alleging that Meta retaliated against MeWe as a privacy-focused rival that sought to woo users away by branding itself the "anti-Facebook."

    Among his complaints, Weinstein accused Meta of thwarting MeWe's attempts to introduce interoperability between the two platforms, which he thinks stems from a fear that users might leave Facebook if they discover a more appealing platform. That’s why he's urged the FTC—if it wins its monopoly case—to go beyond simply ordering a potential breakup of Facebook, Instagram, and WhatsApp to also require interoperability between Meta's platforms and all rivals. That may be the only way to force Meta to release its clutch on personal data collection, Weinstein suggested, and allow for more competition broadly in the social media industry.
    "The glue that holds it all together is Facebook’s monopoly over data," Weinstein wrote in a Wall Street Journal op-ed, recalling the moment he realized that Meta seemed to have an unbeatable monopoly. "Its ownership and control of the personal information of Facebook users and non-users alike is unmatched."
    Cory Doctorow, a special advisor to the Electronic Frontier Foundation, told Ars that his vision of a better social media future goes even further than requiring interoperability between all platforms. Social networks like Meta's should also be made to allow reverse engineering so that outside developers can modify their apps with third-party tools without risking legal attacks, he said.
    Doctorow said that solution would create "an equilibrium where companies are more incentivized to behave themselves than they are to cheat" by, say, retaliating against, killing off, or buying out rivals. And "if they fail to respond to that incentive and they cheat anyways, then the rest of the world still has a remedy," Doctorow said, by having the choice to modify or ditch any platform deemed toxic, invasive, manipulative, or otherwise offensive.
    Doctorow summed up the frustration that some users have faced through the ongoing "enshittification" of platforms (a term he coined) ever since platforms took over the Internet.

    "I'm 55 now, and I've gotten a lot less interested in how things work because I've had too many experiences with how things fail," Doctorow told Ars. "And I just want to make sure that if I'm on a service and it goes horribly wrong, I can leave."
    Social media haters wish OG platforms were doomed
    Weinstein pointed out that Meta's alleged monopoly impacts a group often left out of social media debates: non-users. And if you ask someone who hates social media what the future of social media should look like, they will not mince words: They want a way to opt out of all of it.
    As Meta's monopoly trial got underway, a personal blog post titled "No Instagram, no privacy" rose to the front page of Hacker News, prompting a discussion about social media norms and reasonable expectations for privacy in 2025.

    In the post, Wouter-Jan Leys, a privacy advocate, explained that he felt "blessed" to have "somehow escaped having an Instagram account," feeling no pressure to "update the abstract audience of everyone I ever connected with online on where I am, what I am doing, or who I am hanging out with."
    But despite never having an account, he's found that "you don’t have to be on Instagram to be on Instagram," complaining that "it bugs me" when friends seem to know "more about my life than I tell them" because of various friends' posts that mention or show images of him. In his blog, he defined privacy as "being in control of what other people know about you" and suggested that because of platforms like Instagram, he currently lacked this control. There should be some way to "fix or regulate this," Leys suggested, or maybe some universal "etiquette where it’s frowned upon to post about social gatherings to any audience beyond who already was at that gathering."

    On Hacker News, his post spurred a debate over one of the longest-running privacy questions swirling on social media: Is it OK to post about someone who abstains from social media?
    Some seeming social media fans scolded Leys for being so old-fashioned about social media, suggesting, "just live your life without being so bothered about offending other people" or saying that "the entire world doesn't have to be sanitized to meet individual people's preferences." Others seemed to better understand Leys' point of view, with one agreeing that "the problem is that our modern norms (and tech) lead to everyone sharing everything with a large social network."
    Surveying the lively thread, another social media hater joked, "I feel vindicated for my decision to entirely stay off of this drama machine."
    Leys told Ars that he would "absolutely" be in favor of personal social networks like Meta's platforms dying off or losing steam, as Zuckerberg suggested they already are. He thinks that the decline in personal post engagement that Meta is seeing is likely due to a combination of factors, where some users may prefer more privacy now after years of broadcasting their lives, and others may be tired of the pressure of building a personal brand or experiencing other "odd social dynamics."
    Setting user sentiments aside, Meta is also responsible for people engaging with fewer of their friends' posts. Meta announced that it would double the amount of force-fed filler in people's feeds on Instagram and Facebook starting in 2023. That's when the two-year span begins that Zuckerberg measured in testifying about the sudden drop-off in friends' content engagement.
    So while it's easy to say the market changed, Meta may be obscuring how much it shaped that shift. Degrading the newsfeed and changing Instagram's default post shape from square to rectangle seemingly significantly shifted Instagram social norms, for example, creating an environment where Gen Z users felt less comfortable posting as prolifically as millennials did when Instagram debuted, The New Yorker explained last year. Where once millennials painstakingly designed immaculate grids of individual eye-catching photos to seem cool online, Gen Z users told The New Yorker that posting a single photo now feels "humiliating" and like a "social risk."

    But rather than eliminate the impulse to post, this cultural shift has popularized a different form of personal posting: staggered photo dumps, where users wait to post a variety of photos together to sum up a month of events or curate a vibe, the trend piece explained. And Meta is clearly intent on fueling that momentum, doubling the maximum number of photos that users can feature in a single post to encourage even more social posting, The New Yorker noted.
    Brendan Benedict, an attorney for Benedict Law Group PLLC who has helped litigate big tech antitrust cases, is monitoring the FTC monopoly trial on a Substack called Big Tech on Trial. He told Ars that the evidence at the trial has shown that "consumers want more friends and family content, and Meta is belatedly trying to address this" with features like the "friends" tab, while claiming there's less interest in this content.
    Leys doesn't think social media—at least the way that Facebook defined it in the mid-2000s—will ever die, because people will never stop wanting social networks like Facebook or Instagram to stay connected with all their friends and family. But he could see a world where, if people ever started truly caring about privacy or "indeed [got] tired of the social dynamics and personal brand-building... the kind of social media like Facebook and Instagram will have been a generational phenomenon, and they may not immediately bounce back," especially if it's easy to switch to other platforms that respond better to user preferences.
    He also agreed that requiring interoperability would likely lead to better social media products, but he maintained that "it would still not get me on Instagram."

    Interoperability shakes up social media
    Meta thought it may have already beaten the FTC's monopoly case, filing for a motion for summary judgment after the FTC rested its case in a bid to end the trial early. That dream was quickly dashed when the judge denied the motion days later. But no matter the outcome of the trial, Meta's influence over the social media world may be waning just as it's facing increasing pressure to open up its platforms more than ever.

    The FTC has alleged that Meta weaponized platform access early on, only allowing certain companies to interoperate and denying access to anyone perceived as a threat to its alleged monopoly power. That includes limiting promotions of Instagram to keep users engaged with Facebook Blue. A primary concern for Meta (then Facebook), the FTC claimed, was avoiding "training users to check multiple feeds," which might allow other apps to "cannibalize" its users.
    "Facebook has used this power to deter and suppress competitive threats to its personal social networking monopoly. In order to protect its monopoly, Facebook adopted and required developers to agree to conditional dealing policies that limited third-party apps’ ability to engage with Facebook rivals or to develop into rivals themselves," the FTC alleged.
    By 2011, the FTC alleged, then-Facebook had begun terminating API access to any developers that made it easier to export user data into a competing social network without Facebook's permission. That practice only ended when the UK parliament started calling out Facebook’s anticompetitive conduct toward app developers in 2018, the FTC alleged.
    According to the FTC, Meta continues "to this day" to "screen developers and can weaponize API access in ways that cement its dominance," and if scrutiny ever subsides, Meta is expected to return to such anticompetitive practices as the AI race heats up.
    One potential hurdle for Meta could be that the push for interoperability is not just coming from the FTC or lawmakers who recently reintroduced bipartisan legislation to end walled gardens. Doctorow told Ars that "huge public groundswells of mistrust and anger about excessive corporate power" that "cross political lines" are prompting global antitrust probes into big tech companies and are perhaps finally forcing a reckoning after years of degrading popular products to chase higher and higher revenues.

    For social media companies, mounting concerns about privacy and suspicions about content manipulation or censorship are driving public distrust, Doctorow said, as well as fears of surveillance capitalism. The latter includes theories that Doctorow is skeptical of. Weinstein embraced them, though, warning that platforms seem to be profiting off data without consent while brainwashing users.
    Allowing users to leave the platform without losing access to their friends, their social posts, and their messages might be the best way to incentivize Meta to either genuinely compete for billions of users or lose them forever as better options pop up that can plug into their networks.
    In his Newsweek op-ed, Weinstein suggested that web inventor Tim Berners-Lee has already invented a working protocol "to enable people to own, upload, download, and relocate their social graphs," which maps users' connections across platforms. That could be used to mitigate "the network effect" that locks users into platforms like Meta's "while interrupting unwanted data collection."
    At the same time, Doctorow told Ars that increasingly popular decentralized platforms like Bluesky and Mastodon already provide interoperability and are next looking into "building interoperable gateways" between their services. Doctorow said that communicating with other users across platforms may feel "awkward" at first, but ultimately, it may be like "having to find the diesel pump at the gas station" instead of the unleaded gas pump. "You'll still be going to the same gas station," Doctorow suggested.
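    To make the gateway idea concrete, consider how interoperability already works on the decentralized networks Doctorow mentions. The short Python sketch below is purely illustrative rather than any platform's official tooling: it resolves a fediverse handle to its public ActivityPub actor document using the standard WebFinger lookup that Mastodon-compatible servers expose, relying only on Python's standard library. The handle "someone@example.social" is a placeholder, not a real account.

import json
import urllib.parse
import urllib.request


def resolve_actor(handle: str) -> dict:
    """Resolve a 'user@host' handle to its public ActivityPub actor document."""
    user, host = handle.split("@", 1)
    # WebFinger (RFC 7033): a well-known endpoint every compliant server exposes.
    query = urllib.parse.urlencode({"resource": f"acct:{user}@{host}"})
    with urllib.request.urlopen(f"https://{host}/.well-known/webfinger?{query}") as resp:
        jrd = json.load(resp)
    # Pick the link that points at the ActivityPub representation of the account.
    actor_url = next(
        link["href"]
        for link in jrd.get("links", [])
        if link.get("rel") == "self" and link.get("type") == "application/activity+json"
    )
    req = urllib.request.Request(actor_url, headers={"Accept": "application/activity+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    actor = resolve_actor("someone@example.social")  # placeholder handle
    # The actor document advertises portable endpoints (inbox, outbox, followers)
    # that any other compliant server or client can use to reach this account.
    print(actor.get("preferredUsername"), actor.get("inbox"), actor.get("followers"))

    Because those endpoints are defined by an open protocol rather than by any one company, a user who moves to a different server can still be found and followed by the same people, which is the technical core of the argument that open gateways lower switching costs.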
    Opening up gateways into all platforms could be useful in the future, Doctorow suggested. Imagine if one platform goes down—it would no longer disrupt communications as drastically, as users could just pivot to communicate on another platform and reach the same audience. The same goes for platforms that users grow to distrust.

    The EFF supports regulators' attempts to pass well-crafted interoperability mandates, Doctorow said, noting that "if you have to worry about your users leaving, you generally have to treat them better."

    But would interoperability fix social media?
    The FTC has alleged that "Facebook’s dominant position in the US personal social networking market is durable due to significant entry barriers, including direct network effects and high switching costs."
    Meta disputes the FTC's complaint as outdated, arguing that its platform could be substituted by pretty much any social network.
    However, Guy Aridor, a co-author of a recent article called "The Economics of Social Media" in the Journal of Economic Literature, told Ars that dominant platforms are probably threatened by shifting social media trends and are likely to remain "resistant to interoperability" because "it’s in the interest of the platform to make switching and coordination costs high so that users are less likely to migrate away." For Meta, research shows its platforms' network effects have appeared to weaken somewhat but "clearly still exist" despite social media users increasingly seeking content on platforms rather than just socialization, Aridor said.
    Interoperability advocates believe it will make it easier for startups to compete with giants like Meta, which fight hard and sometimes seemingly dirty to keep users on their apps. Reintroducing the ACCESS Act, which requires platform compatibility to enable service switching, Senator Mark R. Warner said that "interoperability and portability are powerful tools to promote innovative new companies and limit anti-competitive behaviors." He's hoping that passing these "long-overdue requirements" will "boost competition and give consumers more power."
    Aridor told Ars it's obvious that "interoperability would clearly increase competition," but he still has questions about whether users would benefit from that competition "since one consistent theme is that these platforms are optimized to maximize engagement, and there’s numerous empirical evidence we have by now that engagement isn’t necessarily correlated with utility."

    Consider, Aridor suggested, how toxic content often leads to high engagement but lower user satisfaction, as MeWe experienced during its 2021 backlash.
    Aridor said there is currently "very little empirical evidence on the effects of interoperability," but theoretically, if it increased competition in the current climate, it would likely "push the market more toward supplying engaging entertainment-related content as opposed to friends and family type of content."
    Benedict told Ars that a remedy like interoperability would likely only be useful to combat Meta's alleged monopoly following a breakup, which he views as the "natural remedy" following a potential win in the FTC's lawsuit.
    Without the breakup and other meaningful reforms, a Meta win could preserve the status quo and see the company never open up its platforms, perhaps perpetuating Meta's influence over social media well into the future. And if Zuckerberg's vision comes to pass, instead of seeing what your friends are posting on interoperating platforms across the Internet, you may have a dozen AI friends trained on your real friends' behaviors sending you regular dopamine hits to keep you scrolling on Facebook or Instagram.
    Aridor's team's article suggested that, regardless of user preferences, social media remains a permanent fixture of society. If that's true, users could get stuck forever using whichever platforms connect them with the widest range of contacts.
    "While social media has continued to evolve, one thing that has not changed is that social media remains a central part of people’s lives," his team's article concluded.

    Ashley Belanger
    Senior Policy Reporter


    Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Leys doesn't think social media—at least the way that Facebook defined it in the mid-2000s—will ever die, because people will never stop wanting social networks like Facebook or Instagram to stay connected with all their friends and family. But he could see a world where, if people ever started truly caring about privacy or "indeed [got] tired of the social dynamics and personal brand-building... the kind of social media like Facebook and Instagram will have been a generational phenomenon, and they may not immediately bounce back," especially if it's easy to switch to other platforms that respond better to user preferences. He also agreed that requiring interoperability would likely lead to better social media products, but he maintained that "it would still not get me on Instagram." Interoperability shakes up social media Meta thought it may have already beaten the FTC's monopoly case, filing for a motion for summary judgment after the FTC rested its case in a bid to end the trial early. That dream was quickly dashed when the judge denied the motion days later. But no matter the outcome of the trial, Meta's influence over the social media world may be waning just as it's facing increasing pressure to open up its platforms more than ever. The FTC has alleged that Meta weaponized platform access early on, only allowing certain companies to interoperate and denying access to anyone perceived as a threat to its alleged monopoly power. That includes limiting promotions of Instagram to keep users engaged with Facebook Blue. A primary concern for Meta (then Facebook), the FTC claimed, was avoiding "training users to check multiple feeds," which might allow other apps to "cannibalize" its users. "Facebook has used this power to deter and suppress competitive threats to its personal social networking monopoly. In order to protect its monopoly, Facebook adopted and required developers to agree to conditional dealing policies that limited third-party apps’ ability to engage with Facebook rivals or to develop into rivals themselves," the FTC alleged. By 2011, the FTC alleged, then-Facebook had begun terminating API access to any developers that made it easier to export user data into a competing social network without Facebook's permission. That practice only ended when the UK parliament started calling out Facebook’s anticompetitive conduct toward app developers in 2018, the FTC alleged. According to the FTC, Meta continues "to this day" to "screen developers and can weaponize API access in ways that cement its dominance," and if scrutiny ever subsides, Meta is expected to return to such anticompetitive practices as the AI race heats up. One potential hurdle for Meta could be that the push for interoperability is not just coming from the FTC or lawmakers who recently reintroduced bipartisan legislation to end walled gardens. Doctorow told Ars that "huge public groundswells of mistrust and anger about excessive corporate power" that "cross political lines" are prompting global antitrust probes into big tech companies and are perhaps finally forcing a reckoning after years of degrading popular products to chase higher and higher revenues. For social media companies, mounting concerns about privacy and suspicions about content manipulation or censorship are driving public distrust, Doctorow said, as well as fears of surveillance capitalism. The latter includes theories that Doctorow is skeptical of. 
Weinstein embraced them, though, warning that platforms seem to be profiting off data without consent while brainwashing users. Allowing users to leave the platform without losing access to their friends, their social posts, and their messages might be the best way to incentivize Meta to either genuinely compete for billions of users or lose them forever as better options pop up that can plug into their networks. In his Newsweek op-ed, Weinstein suggested that web inventor Tim Berners-Lee has already invented a working protocol "to enable people to own, upload, download, and relocate their social graphs," which maps users' connections across platforms. That could be used to mitigate "the network effect" that locks users into platforms like Meta's "while interrupting unwanted data collection." At the same time, Doctorow told Ars that increasingly popular decentralized platforms like Bluesky and Mastodon already provide interoperability and are next looking into "building interoperable gateways" between their services. Doctorow said that communicating with other users across platforms may feel "awkward" at first, but ultimately, it may be like "having to find the diesel pump at the gas station" instead of the unleaded gas pump. "You'll still be going to the same gas station," Doctorow suggested. Opening up gateways into all platforms could be useful in the future, Doctorow suggested. Imagine if one platform goes down—it would no longer disrupt communications as drastically, as users could just pivot to communicate on another platform and reach the same audience. The same goes for platforms that users grow to distrust. The EFF supports regulators' attempts to pass well-crafted interoperability mandates, Doctorow said, noting that "if you have to worry about your users leaving, you generally have to treat them better." But would interoperability fix social media? The FTC has alleged that "Facebook’s dominant position in the US personal social networking market is durable due to significant entry barriers, including direct network effects and high switching costs." Meta disputes the FTC's complaint as outdated, arguing that its platform could be substituted by pretty much any social network. However, Guy Aridor, a co-author of a recent article called "The Economics of Social Media" in the Journal of Economic Literature, told Ars that dominant platforms are probably threatened by shifting social media trends and are likely to remain "resistant to interoperability" because "it’s in the interest of the platform to make switching and coordination costs high so that users are less likely to migrate away." For Meta, research shows its platforms' network effects have appeared to weaken somewhat but "clearly still exist" despite social media users increasingly seeking content on platforms rather than just socialization, Aridor said. Interoperability advocates believe it will make it easier for startups to compete with giants like Meta, which fight hard and sometimes seemingly dirty to keep users on their apps. Reintroducing the ACCESS Act, which requires platform compatibility to enable service switching, Senator Mark R. Warner (D-Va.) said that "interoperability and portability are powerful tools to promote innovative new companies and limit anti-competitive behaviors." He's hoping that passing these "long-overdue requirements" will "boost competition and give consumers more power." 
Aridor told Ars it's obvious that "interoperability would clearly increase competition," but he still has questions about whether users would benefit from that competition "since one consistent theme is that these platforms are optimized to maximize engagement, and there’s numerous empirical evidence we have by now that engagement isn’t necessarily correlated with utility." Consider, Aridor suggested, how toxic content often leads to high engagement but lower user satisfaction, as MeWe experienced during its 2021 backlash. Aridor said there is currently "very little empirical evidence on the effects of interoperability," but theoretically, if it increased competition in the current climate, it would likely "push the market more toward supplying engaging entertainment-related content as opposed to friends and family type of content." Benedict told Ars that a remedy like interoperability would likely only be useful to combat Meta's alleged monopoly following a breakup, which he views as the "natural remedy" following a potential win in the FTC's lawsuit. Without the breakup and other meaningful reforms, a Meta win could preserve the status quo and see the company never open up its platforms, perhaps perpetuating Meta's influence over social media well into the future. And if Zuckerberg's vision comes to pass, instead of seeing what your friends are posting on interoperating platforms across the Internet, you may have a dozen AI friends trained on your real friends' behaviors sending you regular dopamine hits to keep you scrolling on Facebook or Instagram. Aridor's team's article suggested that, regardless of user preferences, social media remains a permanent fixture of society. If that's true, users could get stuck forever using whichever platforms connect them with the widest range of contacts. "While social media has continued to evolve, one thing that has not changed is that social media remains a central part of people’s lives," his team's article concluded.
  • Half of tech execs are ready to let AI take the wheel

    As AI shifts from experimental to essential, tech executives say that more than half of AI deployments will be functioning autonomously in their company in the next two years, according to a new survey by professional services firm Ernst & Young (EY).

    While generative AI (genAI) technology has captured the attention of business leaders for the past several years, agentic AI, a specific kind of AI system that acts autonomously or semi-autonomously to achieve goals, has largely flown under the radar. That changed in late 2024 when search traffic for agentic AI and AI agents began to surge, according to Google Trends data.

    Now, half of more than 500 tech executives surveyed by EY in its latest Technology Pulse Poll said AI agents will make up the majority of upcoming AI deployments. The survey revealed that 48% are already adopting or fully deploying AI agents, and half of those leaders say that more than 50% of AI deployments will be autonomous in their company in the next 24 months.

    The EY survey also showed rising AI investment, with 92% of tech leaders planning to boost AI spending and over half believing they’re ahead of competitors in AI investment. Additionally, 81% of tech executives surveyed said they feel optimistic about AI’s promises related to achieving their organization’s goals over the next 12 months.

    Tech companies are leading the charge in adopting agentic AI, according to James Brundage, a leader in EY’s Technology Sector group. “Despite economic uncertainty, executives remain confident in AI’s value, ramping up investment and shifting from pilots to full deployment. Still, they’re under pressure to show real ROI through measurable business results,” he said.

    Among respondents planning to increase their AI budgets, 43% say agentic AI will claim more than half of their total AI budget. Leading reasons for adopting agentic AI include staying competitive (69%), helping customers (59%), and internal strategy purposes (59%).

    Tech companies are always early adopters, and many believe they’re ahead of the competition, but that confidence in AI often exceeds the reality, according to Ken Englund, a leader in EY Americas’ Technology Sector Growth.

    “It is still very early in the AI lifecycle, so it remains to be seen where these companies stand against the competition, and an outside-in view will be a critical measuring stick,” Englund said.

    Tapping into agentic AI requires structural change

    Investment in agentic AI is accelerating, reshaping enterprise architecture. While genAI gets most of the spotlight, advances in classical AI and machine learning are also key to enabling agentic AI, according to Englund, who sees the technology as a “flexible framework” for using the right tools to deliver outcomes across platforms.

    AI agents offer more than a productivity boost; they’re fundamentally reshaping customer interactions and business operations. And while there’s still work to do on trust and accuracy, the world is beginning a new tech era — one that might finally deliver on the promises seen in movies like Minority Report and Iron Man, according to Salesforce CEO Marc Benioff.

    Salesforce has embedded AI into its CRM through the Einstein 1 Platform and tools like Agentforce, enabling businesses to deploy autonomous agents across sales, service, marketing, and commerce. Its generative AI tools, Einstein GPT and Einstein Copilot, act as intelligent assistants that draft communications, summarize case histories, auto-fill records, and answer questions using company data.

    To achieve competitive advantage in this new world, businesses must shift their focus from isolated genAI tools like chatbots to deep integration of advanced AI systems — especially agentic architectures, where autonomous AI agents collaborate to manage and optimize complex workflows, according to a recent report from services firm Accenture.

    The Accenture report was based on a survey of 2,000 C-suite and data-science executives across multiple countries and industries. Although many companies recognize AI’s potential, the report said, true enterprise reinvention requires structural change, strong leadership, and, crucially, a robust data foundation — an area where many still struggle, particularly with unstructured data.

    Additionally, outdated IT systems and inadequate employee training hinder progress. However, a small group of “front-runner” companies are succeeding by combining foundational AI investments with “bold, strategic initiatives that embed AI at the core of their operations,” the report said.

    Only 8% of companies — so-called “front-runners” — are scaling AI at an enterprise level, embedding the technology into core business strategy.

    But of those front-runners that scaled their AI implementations, many found a solid return on investment. According to Accenture:

    Front-runners with annual revenue exceeding $10 billion grew their revenue 7% faster than companies still experimenting with AI.

    Across all sizes, front-runners outperformed the other three company groups, delivering shareholder returns that were 6% higher.

    After deploying and scaling AI across their enterprise, companies expect to reduce their costs by 11% and increase their productivity by 13%, on average, within 18 months.

    Most tech leaders are still not AI savvy, CEOs say

    But earlier this month, Gartner Research issued the results of a study showing that just 44% of CIOs are deemed by their CEOs to be “AI-savvy.”

    The survey of 456 CEOs and other senior business executives worldwide also revealed that 77% of respondents believe AI is ushering in a new business era, making the lack of AI savviness amongst executive teams all the more meaningful.

    “We have never seen such a disproportionate gap in CEOs’ impressions about technological disruption,” said David Furlonger, a distinguished VP analyst and Gartner Fellow.

    “AI is not just an incremental change from digital business. AI is a step change in how business and society work,” he said. “A significant implication is that, if savviness across the C-suite is not rapidly improved, competitiveness will suffer, and corporate survival will be at stake.”

    CEOs perceived even the CIO, chief information security officer (CISO), and chief data officer (CDO) as lacking AI savviness. Respondents said the top two factors limiting AI’s deployment and use are the inability to hire adequate numbers of skilled people and an inability to calculate value or outcomes.

    “CEOs have shifted their view of AI from just a tool to a transformative way of working,” said Jennifer Carter, a principal analyst at Gartner. “This change has highlighted the importance of upskilling. As leaders recognize AI’s potential and its impact on their organizations, they understand that success isn’t just about hiring new talent. Instead, it’s about equipping their current employees with the skills needed to seamlessly incorporate AI into everyday tasks.”

    This focus on upskilling is a strategic response to AI’s evolving role in business, ensuring that the entire organization can adapt and thrive in this new paradigm. Sixty-six percent of CEOs said their business models are not fit for AI purposes, according to Gartner’s survey. Therefore, executives must build and improve AI savviness related to every mission-critical priority.

    Hiring workers with the right skills is also part of the effort, noted EY’s Englund. “According to our technology pulse poll, 84% of tech leaders say they anticipate hiring in the next six months as a result of AI adoption,” he said.

    “We continue to see strong overall demand for AI skills and an increase in those skills involved in the deployment of AI production solutions. In particular, we see increased recruiting of AI experienced Product Managers, Data Engineers, MLOps, and Forward Deployed Engineers (FDEs),” Englund said.

    In the rush to implement AI, many companies are also turning to outside freelancers with the skills they need. New research from Fiverr, a global freelance worker marketplace, found an 18,000% surge in businesses seeking freelance help to implement agents and a 641% increase for freelancers who specialize in “humanizing AI content.”

    Last week, Fiverr published its Spring 2025 Business Trends Index, which uses data from tens of millions of searches on its platform over the last six months to provide a snapshot of today’s (and tomorrow’s) economy. The demand for freelancers who have the skills to work with AI agents shows that businesses are eager — but often unsure about — how to deploy the “digital colleagues” who can independently manage tasks like reading emails, scheduling meetings, or answering customer questions.

    “At the same time, a spike in searches for freelancers who can rewrite chatbot scripts, marketing emails, and website copy to sound more natural highlights a clear takeaway: AI might be powerful, but it still needs a human touch,” Fiverr said in its report.
  • Lessons in Decision Making from the Monty Hall Problem

    The Monty Hall Problem is a well-known brain teaser from which we can learn important lessons in Decision Making that are useful in general and in particular for data scientists.

    If you are not familiar with this problem, prepare to be perplexed. If you are, I hope to shine a light on aspects that you might not have considered.

    I introduce the problem and solve it with three types of intuitions:

    Common — The heart of this post focuses on applying our common sense to solve this problem. We’ll explore why it fails us and what we can do to intuitively overcome this to make the solution crystal clear. We’ll do this by using visuals, qualitative arguments and some basic probabilities.

    Bayesian — We will briefly discuss the importance of belief propagation.

    Causal — We will use a Graph Model to visualise conditions required to use the Monty Hall problem in real world settings. Spoiler alert: I haven’t been convinced that there are any, but the thought process is very useful.

    I summarise by discussing lessons learnt for better data decision making.

    In regards to the Bayesian and Causal intuitions, these will be presented in a gentle form. For the mathematically inclined I also provide supplementary sections with short Deep Dives into each approach after the summary.

    By examining different aspects of this puzzle in probability you will hopefully be able to improve your data decision making.

    Credit: Wikipedia

    First, some history. Let’s Make a Deal is a USA television game show that originated in 1963. As its premise, audience participants were considered traders making deals with the host, Monty Hall.

    At the heart of the matter is an apparently simple scenario:

    A trader is posed with the question of choosing one of three doors for the opportunity to win a luxurious prize, e.g., a car. Behind the other two were goats.

    The trader is shown three closed doors.

    The trader chooses one of the doors. Let’s call this door A.

    Keeping the chosen door closed, the host reveals one of the remaining doors showing a goat.

    The trader chooses door A and the host reveals door C showing a goat.

    The host then asks the trader if they would like to stick with their first choice or switch to the other remaining one.

    If the trader guesses correctly they win the prize. If not they’ll be shown another goat.

    What is the probability of being Zonked? Credit: Wikipedia

    Should the trader stick with their original choice of door A or switch to B?

    Before reading further, give it a go. What would you do?

    Most people are likely to have a gut intuition that “it doesn’t matter”, arguing that in the first instance each door had a ⅓ chance of hiding the prize, and that after the host intervention, when only two doors remain closed, the odds of winning the prize are 50:50.

    There are various ways of explaining why the coin toss intuition is incorrect. Most of these involve maths equations, or simulations. Whereas we will address these later, we’ll first attempt to solve it by applying Occam’s razor:

    A principle that states that simpler explanations are preferable to more complex ones — William of Ockham

    To do this it is instructive to slightly redefine the problem to a large number of doors N instead of the original three.

    The Large N-Door Problem

    Similar to before: you have to choose one of many doors. For illustration let’s say N=100. Behind one of the doors there is the prize and behind 99 of the rest are goats.

    The 100 Door Monty Hall problem before the host intervention.

    You choose one door and the host reveals 98 of the other doors that have goats, leaving yours and one more closed.

    The 100 Door Monty Hall Problem after the host intervention. Should you stick with your door or make the switch?

    Should you stick with your original choice or make the switch?

    I think you’ll agree with me that the remaining door, not chosen by you, is much more likely to conceal the prize … so you should definitely make the switch!

    It’s illustrative to compare both scenarios discussed so far. In the next figure we compare the post host intervention settings for the N=3 setup and that of N=100:

    Post intervention settings for the N=3 setup and N=100.

    In both cases we see two shut doors, one of which we’ve chosen. The main difference between these scenarios is that in the first we see one goat and in the second there are more than the eye would care to see.

    Why do most people consider the first case as a “50:50” toss up and in the second it’s obvious to make the switch?

    We’ll soon address this question of why. First let’s put probabilities of success behind the different scenarios.

    What’s The Frequency, Kenneth?

    So far we learnt from the N=100 scenario that switching doors is obviously beneficial. Inferring for the N=3 may be a leap of faith for most. Using some basic probability arguments here we’ll quantify why it is favourable to make the switch for any N-door scenario.

    We start with the standard Monty Hall problem. When it starts, the probability of the prize being behind each of the doors A, B and C is p=⅓. To be explicit, let’s define the parameter Y to be the door with the prize, i.e., p(Y=A) = p(Y=B) = p(Y=C) = ⅓.

    The trick to solving this problem is that once the trader’s door A has been chosen, we should pay close attention to the set of the other doors {B,C}, which has the probability of p(Y∈{B,C}) = p(Y=B) + p(Y=C) = ⅔. This visual may help make sense of this:

    By being attentive to the set {B,C} the rest should follow. When the goat is revealed

    it is apparent that the probabilities post intervention change. Note that for ease of reading I’ll drop the Y notation, where p(Y=A) will read p(A) and p(Y=B) will read p(B). Also for completeness, the full terms after the intervention should be even longer due to being conditional, e.g., p(A|Z=C) and p(B|Z=C), where Z is a parameter representing the choice of the host.

    p(A) remains ⅓

    p({B,C}) = p(B) + p(C) remains ⅔

    p(C) = 0; we just learnt that the goat is behind door C, not the prize.

    p(B) = p({B,C}) - p(C) = ⅔

    For anyone with the information provided by the host, this means that it isn’t a toss of a fair coin! For them the fact that p(C) became zero does not “raise all other boats”, but rather p(A) remains the same and p(B) gets doubled.

    The bottom line is that the trader should consider p(A) = ⅓ and p(B) = ⅔, hence by switching they are doubling the odds of winning!
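    As a quick sanity check, here is a minimal simulation sketch in Python (my illustration, not part of the original derivation; the helper name play is arbitrary). It plays the three-door game many times with and without switching:

        import random

        def play(switch: bool) -> bool:
            """Simulate one round of the standard three-door game."""
            doors = [0, 1, 2]
            prize = random.choice(doors)   # door hiding the car
            pick = random.choice(doors)    # trader's initial choice
            # The host opens a goat door that is neither the prize nor the pick.
            host = random.choice([d for d in doors if d not in (prize, pick)])
            if switch:
                pick = next(d for d in doors if d not in (pick, host))
            return pick == prize

        trials = 100_000
        print(sum(play(False) for _ in range(trials)) / trials)  # ~0.33 when sticking
        print(sum(play(True) for _ in range(trials)) / trials)   # ~0.67 when switching

    Both estimates land near ⅓ and ⅔, matching the argument above.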

    Let’s generalise to N.

    When we start, all doors have odds of winning the prize p = 1/N. After the trader chooses one door, which we’ll call D₁, meaning p(D₁) = 1/N, we should now pay attention to the remaining set of doors {D₂, …, Dₙ}, which has a chance of p({D₂, …, Dₙ}) = (N-1)/N.

    When the host reveals N-2 doors {D₃, …, Dₙ} with goats:

    p(D₁) remains 1/N

    p({D₂, …, Dₙ}) = p(D₂) + p(D₃) + … + p(Dₙ) remains (N-1)/N

    p(D₃) = p(D₄) = … = p(Dₙ₋₁) = p(Dₙ) = 0; we just learnt that they have goats, not the prize.

    p(D₂) = p({D₂, …, Dₙ}) - p(D₃) - … - p(Dₙ) = (N-1)/N

    The trader should now consider two door values: p(D₁) = 1/N and p(D₂) = (N-1)/N.

    Hence the odds of winning improve by a factor of N-1! In the case of N=100, this means an odds ratio of 99.
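    The same check extends to the N-door version (again a sketch of mine, with an arbitrary helper name play_n): once the host has opened N-2 goat doors, switching wins exactly when the trader's first pick was wrong.

        import random

        def play_n(n: int, switch: bool) -> bool:
            """One round with n doors; the host opens n-2 goat doors."""
            prize = random.randrange(n)
            pick = random.randrange(n)
            if switch:
                # The only other closed door hides the prize unless we already had it.
                return pick != prize
            return pick == prize

        for n in (3, 4, 5, 100):
            trials = 100_000
            wins = sum(play_n(n, True) for _ in range(trials)) / trials
            print(n, round(wins, 3), "theory:", round((n - 1) / n, 3))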

    The improvement of odds ratios in all scenarios from N=3 to 100 may be seen in the following graph. The thin line is the probability of winning by choosing any door prior to the intervention, p = 1/N. Note that it also represents the chance of winning after the intervention if the trader decides to stick to their guns and not switch, p(D₁). The thick line is the probability of winning the prize after the intervention if the door is switched, p(D₂) = (N-1)/N:

    Probability of winning as a function of N. p(D₁) = 1/N is the thin line; p(D₂) = (N-1)/N is the thick one.

    Perhaps the most interesting aspect of this graph is that the N=3 case has the highest probability before the host intervention, but the lowest probability after, and vice versa for N=100.

    Another interesting feature is the quick climb in the probability of winning for the switchers:

    N=3: p(D₂) = 67%

    N=4: p(D₂) = 75%

    N=5: p(D₂) = 80%

    The switchers’ curve gradually approaches an asymptote of 100%; at N=99 it is 98.99% and at N=100 it is equal to 99%.

    This starts to address an interesting question:

    Why Is Switching Obvious For Large N But Not N=3?

    The answer is the fact that this puzzle is slightly ambiguous. Only the highly attentive realise that by revealing the goat the host is actually conveying a lot of information that should be incorporated into one’s calculation. Later we discuss the difference between doing this calculation in one’s mind based on intuition and slowing down by putting pen to paper or coding up the problem.

    How much information is conveyed by the host by intervening?

    A hand wavy explanation is that this information may be visualised as the gap between the lines in the graph above. For N=3 we saw that the odds of winning doubled, but that doesn’t register as strongly to our common sense intuition as the factor of 99 in the N=100 case.

    I have also considered describing stronger arguments from Information Theory that provide useful vocabulary to express communication of information. However, I feel that this fascinating field deserves a post of its own, which I’ve published.

    The main takeaway for the Monty Hall problem is that I have calculated the information gain to be a logarithmic function of the number of doors c using this formula:

    Information Gain due to the intervention of the host for a setup with c doors. Full details in my upcoming article.

    For the c=3 door case, e.g., the information gain is ⅔ bits. Full details are in this article on entropy.
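    The formula itself appears in the image above, which is not reproduced here, so the following is only a plausible reconstruction rather than the author's exact expression. If the gain is read as the drop in Shannon entropy of the prize's location from the trader's point of view, it works out to ((c-1)/c)·log₂(c-1), which is ⅔ of a bit for c=3 and grows logarithmically with c, consistent with both statements above:

        from math import log2

        def information_gain(c: int) -> float:
            """Assumed reading: entropy of the prize location before minus after
            the host's reveal, from the trader's point of view."""
            h_before = log2(c)                      # uniform over c doors
            p_stick, p_switch = 1 / c, (c - 1) / c  # posterior after the reveal
            h_after = -(p_stick * log2(p_stick) + p_switch * log2(p_switch))
            return h_before - h_after

        print(information_gain(3))    # 0.666..., i.e. 2/3 of a bit
        print(information_gain(100))  # ~6.56 bits out of the initial ~6.64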

    To summarise this section, we used basic probability arguments to quantify the probability of winning the prize, showing the benefit of switching for all N-door scenarios. For those interested in more formal solutions using Bayesian statistics and Causality, I provide supplementary sections at the bottom.

    In the next three final sections we’ll discuss how this problem was accepted in the general public back in the 1990s, discuss lessons learnt and then summarise how we can apply them in real-world settings.

    Being Confused Is OK

    “No, that is impossible, it should make no difference.” — Paul Erdős

    If you still don’t feel comfortable with the solution of the N=3 Monty Hall problem, don’t worry, you are in good company! According to Vazsonyi¹ even Paul Erdős, who is considered “one of the greatest experts in probability theory”, was confounded until computer simulations were demonstrated to him.

    When the original solution by Steve Selvin² was popularised by Marilyn vos Savant in her column “Ask Marilyn” in Parade magazine in 1990 many readers wrote that Selvin and Savant were wrong³. According to Tierney’s 1991 article in the New York Times, this included about 10,000 readers, including nearly 1,000 with Ph.D degrees⁴.

    On a personal note, over a decade ago I was exposed to the standard N=3 problem and since then managed to forget the solution numerous times. When I learnt about the large N approach I was quite excited about how intuitive it was. I then failed to explain it to my technical manager over lunch, so this is an attempt to compensate. I still have the same day job.

    While researching this piece I realised that there is a lot to learn in terms of decision making in general and in particular useful for data science.

    Lessons Learnt From Monty Hall Problem

    In his book Thinking, Fast and Slow, the late Daniel Kahneman, the co-creator of Behavioural Economics, suggested that we have two types of thought processes:

    System 1 — fast thinking: based on intuition. This helps us react fast with confidence to familiar situations.

    System 2 — slow thinking: based on deep thought. This helps us figure out new complex situations that life throws at us.

    Assuming this premise, you might have noticed that in the above you were applying both.

    By examining the visual of N=100 doors your System 1 kicked in and you immediately knew the answer. I’m guessing that in the N=3 case you were straddling between System 1 and 2. Considering that you had to stop and think a bit when going through the probabilities exercise, it was definitely System 2.

    The decision maker’s struggle between System 1 and System 2. Generated using Gemini Imagen 3.

    Beyond the fast and slow thinking I feel that there are a lot of data decision making lessons that may be learnt.

    Assessing probabilities can be counter-intuitive …

    or

    Be comfortable with shifting to deep thought

    We’ve clearly shown that in the N=3 case. As previously mentioned it confounded many people including prominent statisticians.

    Another classic example is The Birthday Paradox, which shows how we underestimate the likelihood of coincidences. In this problem most people would think that one needs a large group of people before finding a pair sharing the same birthday. It turns out that all you need is 23 people to have a 50% chance, and 70 for a 99.9% chance.
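    For readers who want to verify those numbers, here is a minimal sketch (mine, not from the original article) of the standard birthday calculation:

        from math import prod

        def p_shared_birthday(n: int) -> float:
            """Probability that at least two of n people share a birthday,
            assuming 365 equally likely days."""
            p_all_distinct = prod((365 - k) / 365 for k in range(n))
            return 1 - p_all_distinct

        print(round(p_shared_birthday(23), 3))  # ~0.507
        print(round(p_shared_birthday(70), 4))  # ~0.9992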

    One of the most confusing paradoxes in the realm of data analysis is Simpson’s, which I detailed in a previous article. This is a situation where trends of a population may be reversed in its subpopulations.

    What all these paradoxes have in common is that they require us to get comfortable shifting gears from System 1 fast thinking to System 2 slow thinking. This is also the common theme for the lessons outlined below.

    A few more classical examples are: the Gambler’s Fallacy, the Base Rate Fallacy and the Linda Problem. These are beyond the scope of this article, but I highly recommend looking them up to further sharpen ways of thinking about data.

    … especially when dealing with ambiguity

    or

    Search for clarity in ambiguity

    Let’s reread the problem, this time as stated in “Ask Marilyn”

    Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say №1, and the host, who knows what’s behind the doors, opens another door, say №3, which has a goat. He then says to you, “Do you want to pick door №2?” Is it to your advantage to switch your choice?

    We discussed that the most important piece of information is not made explicit. It says that the host “knows what’s behind the doors”, but not that they open a door at random, although it’s implicitly understood that the host will never open the door with the car.

    Many real life problems in data science involve dealing with ambiguous demands as well as in data provided by stakeholders.

    It is crucial for the researcher to track down any relevant piece of information that is likely to have an impact and update that into the solution. Statisticians refer to this as “belief update”.

    With new information we should update our beliefs

    This is the main aspect separating the Bayesian stream of thought to the Frequentist. The Frequentist approach takes data at face value. The Bayesian approach incorporates prior beliefs and updates it when new findings are introduced. This is especially useful when dealing with ambiguous situations.

    To drive this point home, let’s re-examine this figure comparing the post intervention N=3 setup and the N=100 one.

    Copied from above. Post intervention settings for the N=3 setup and N=100.

    In both cases we had a prior belief that all doors had an equal chance of winning the prize p=1/N.

    Once the host opened one door, a lot of valuable information was revealed; in the case of N=100 this was much more apparent than in N=3.

    In the Frequentist approach, however, most of this information would be ignored, as it only focuses on the two closed doors. The Frequentist conclusion, hence is a 50% chance to win the prize regardless of what else is known about the situation. Hence the Frequentist takes Paul Erdős’ “no difference” point of view, which we now know to be incorrect.

    This would be reasonable if all that was presented were the two doors and not the intervention and the goats. However, if that information is presented, one should shift gears into System 2 thinking and update their beliefs in the system. This is what we have done by focusing not only on the shut door, but rather consider what was learnt about the system at large.
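    The author's full Bayesian treatment sits in a supplementary section not reproduced here; as a minimal numeric sketch of such a belief update (assuming the trader picked door A and that the host, who never reveals the prize, chooses at random when two goat doors are available):

        # Prior belief over where the prize is, before the host acts.
        prior = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}

        # Likelihood that the host opens door C, given each prize location.
        likelihood = {"A": 1 / 2,  # host picks B or C at random
                      "B": 1.0,    # host is forced to open C
                      "C": 0.0}    # host never reveals the prize

        evidence = sum(prior[d] * likelihood[d] for d in prior)
        posterior = {d: prior[d] * likelihood[d] / evidence for d in prior}
        print(posterior)  # {'A': 0.333..., 'B': 0.666..., 'C': 0.0}

    The update leaves door A at ⅓ and lifts door B to ⅔, exactly the asymmetry the Frequentist 50:50 reading misses.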

    For the brave hearted, in a supplementary section below called The Bayesian Point of View I solve the Monty Hall problem using the Bayesian formalism.

    Be one with subjectivity

    The Frequentist main reservation about “going Bayes” is that — “Statistics should be objective”.

    The Bayesian response is that Frequentists also apply a prior without realising it — a flat one.

    Regardless of the Bayesian/Frequentist debate, as researchers we try our best to be as objective as possible in every step of the analysis.

    That said, it is inevitable that subjective decisions are made throughout.

    E.g., in a skewed distribution should one quote the mean or the median? It highly depends on the context, and hence a subjective decision needs to be made.

    The responsibility of the analyst is to provide justification for their choices, first to convince themselves and then their stakeholders.

    (5) When confused — look for a useful analogy

    … but tread with caution

    We saw that by going from the N=3 setup to the N=100 the solution was apparent. This is a trick scientists frequently use — if the problem appears at first a bit too confusing/overwhelming, break it down and try to find a useful analogy.

    It is probably not a perfect comparison, but going from the N=3 setup to N=100 is like examining a picture from up close and zooming out to see the big picture. Think of having only a puzzle piece and then glancing at the jigsaw photo on the box.

    Monty Hall in 1976. Credit: Wikipedia and using Visual Paradigm Online for the puzzle effect

    Note: whereas analogies may be powerful, one should use them with caution so as not to oversimplify. Physicists refer to this as the “spherical cow” approach, where models may oversimplify complex phenomena.

    I admit that even with years of experience in applied statistics, at times I still get confused about which method to apply. A large part of my thought process is identifying analogies to known solved problems. Sometimes after making progress in a direction I will realise that my assumptions were wrong and seek a new direction. I used to quip with colleagues that they shouldn’t trust me before my third attempt …

    (6) Simulations are powerful but not always necessary

    It’s interesting to learn that Paul Erdős and other mathematicians were convinced only after seeing simulations of the problem.
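    For readers who would like to see what such a simulation looks like, here is a minimal sketch in Python (my own illustration, not the code Erdős was shown), assuming a uniformly random prize door and a host who always opens goat doors other than the trader’s pick:

    import random

    def simulate(n_doors=3, n_trials=100_000):
        stay_wins = switch_wins = 0
        for _ in range(n_trials):
            prize = random.randrange(n_doors)    # door hiding the prize
            choice = random.randrange(n_doors)   # trader's initial pick
            # The host leaves exactly one other door closed: if the trader missed,
            # it must be the prize door; otherwise it is a random goat door.
            if prize != choice:
                remaining = prize
            else:
                remaining = random.choice([d for d in range(n_doors) if d != choice])
            stay_wins += (choice == prize)
            switch_wins += (remaining == prize)
        return stay_wins / n_trials, switch_wins / n_trials

    print(simulate(3))    # roughly (0.33, 0.67)
    print(simulate(100))  # roughly (0.01, 0.99)

    Running it reproduces the ⅓ vs. ⅔ split for N=3 and the 1% vs. 99% split for N=100.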

    I am of two minds about the use of simulations when it comes to problem solving.

    On the one hand, simulations are powerful tools for analysing complex and intractable problems, especially with real-life data where one wants a grasp not only of the underlying formulation but also of the stochasticity.

    And here is the big BUT — if a problem can be solved analytically, like the Monty Hall one, then simulations, as fun as they may be, may not be necessary.

    According to Occam’s razor, all that is required is a brief intuition to explain the phenomenon. This is what I attempted to do here by applying common sense and some basic probability reasoning. For those who enjoy deep dives, I provide below supplementary sections with two methods for analytical solutions — one using Bayesian statistics and another using Causality.

    After publishing the first version of this article there was a comment that Savant’s solution³ may be simpler than those presented here. I revisited her communications and agreed that it should be added. In the process I realised three more lessons may be learnt.

    (7) A well designed visual goes a long way

    Continuing the principle of Occam’s razor, Savant explained³ quite convincingly in my opinion:

    You should switch. The first door has a 1/3 chance of winning, but the second door has a 2/3 chance. Here’s a good way to visualize what happened. Suppose there are a million doors, and you pick door #1. Then the host, who knows what’s behind the doors and will always avoid the one with the prize, opens them all except door #777,777. You’d switch to that door pretty fast, wouldn’t you?

    Hence she provided an abstract visual for the readers. I attempted to do the same with the 100 doors figures.

    Marilyn vos Savant who popularised the Monty Hall Problem. Credit: Ben David on Flickr under license

    As mentioned, many readers, especially those with backgrounds in maths and statistics, still weren’t convinced.

    She revised³ with another mental image:

    The benefits of switching are readily proven by playing through the six games that exhaust all the possibilities. For the first three games, you choose #1 and “switch” each time, for the second three games, you choose #1 and “stay” each time, and the host always opens a loser. Here are the results.

    She added a table with all the scenarios. I took some artistic liberty and created the following figure. As indicated, the top batch shows the scenarios in which the trader switches and the bottom those in which they stay. Lines in green are games the trader wins, and in red those in which they get zonked. A symbol marks the door chosen by the trader, and Monty Hall then opens a different door that has a goat behind it.

    Adaptation of Savant’s table³ of six scenarios that shows the solution to the Monty Hall Problem

    We clearly see from this diagram that the switcher has a ⅔ chance of winning and those that stay only ⅓.

    This is yet another elegant visualisation that clearly explains the non-intuitive.

    It strengthens the claim that there is no real need for simulations in this case because all they would be doing is rerunning these six scenarios.
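    To make that concrete, here is a tiny sketch (my own adaptation; Savant used a table, not code) that literally plays out those six games, with the trader always picking door 1 and the prize placed behind each door in turn:

    # Six games: the trader always picks door 1; the prize is behind door 1, 2 or 3.
    # When the pick equals the prize the host may open either goat door;
    # the outcome is the same either way, so we take the first.
    for strategy in ("stay", "switch"):
        wins = 0
        for prize in (1, 2, 3):
            pick = 1
            host_opens = next(d for d in (1, 2, 3) if d != pick and d != prize)
            final = pick if strategy == "stay" else next(
                d for d in (1, 2, 3) if d != pick and d != host_opens)
            wins += (final == prize)
        print(strategy, wins, "wins out of 3")   # stay: 1, switch: 2

    The “stay” strategy wins 1 of the 3 games and “switch” wins 2 of 3, exactly the ⅓ vs. ⅔ split shown in the figure above.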

    Another popular solution uses decision-tree illustrations. You can find these on the Wikipedia page, but I find them a bit redundant given Savant’s table.

    The fact that we can solve this problem in so many ways yields another lesson:

    (8) There are many ways to skin a … problem

    One of the many lessons that I have learnt from the writings of the late Richard Feynman, one of the best communicators of physics and ideas, is that a problem can be solved in many ways. Mathematicians and physicists do this all the time.

    A relevant quote that paraphrases Occam’s razor:

    If you can’t explain it simply, you don’t understand it well enough — attributed to Albert Einstein

    And finally:

    (9) Embrace ignorance and be humble

    “You are utterly incorrect … How many irate mathematicians are needed to get you to change your mind?” — Ph.D from Georgetown University

    “May I suggest that you obtain and refer to a standard textbook on probability before you try to answer a question of this type again?” — Ph.D from University of Florida

    “You’re in error, but Albert Einstein earned a dearer place in the hearts of people after he admitted his errors.” — Ph.D. from University of Michigan

    Ouch!

    These are some of the responses from mathematicians to the Parade article.

    Such unnecessary viciousness.

    You can check the reference³ to see the writers’ names and others like it. To whet your appetite: “You blew it, and you blew it big!”, “You made a mistake, but look at the positive side. If all those Ph.D.’s were wrong, the country would be in some very serious trouble.” and “I am in shock that after being corrected by at least three mathematicians, you still do not see your mistake.”.

    And, as might be expected from the 1990s, perhaps the most embarrassing one was from a resident of Oregon:

    “Maybe women look at math problems differently than men.”

    These make me cringe and feel embarrassed to be associated, by gender and Ph.D. title, with these graduates and professors.

    Hopefully in the 2020s most people are more humble about their ignorance. Yuval Noah Harari discusses the fact that the Scientific Revolution of Galileo Galilei et al. was due not to knowledge but rather to the admission of ignorance.

    “The great discovery that launched the Scientific Revolution was the discovery that humans do not know the answers to their most important questions” — Yuval Noah Harari

    Fortunately for mathematicians’ image, there were also quite a lot of more enlightened comments. I like this one from Seth Kalson, Ph.D., of MIT:

    You are indeed correct. My colleagues at work had a ball with this problem, and I dare say that most of them, including me at first, thought you were wrong!

    We’ll summarise by examining how, and if, the Monty Hall problem may be applied in real-world settings, so you can try to relate to projects that you are working on.

    Application in Real World Settings

    While researching this article I found that, beyond artificial setups for entertainment⁶ ⁷, there aren’t practical settings in which this problem serves as a useful analogy. Of course, I may be wrong⁸ and would be glad to hear if you know of one.

    One way of assessing the viability of an analogy is using arguments from causality which provides vocabulary that cannot be expressed with standard statistics.

    In a previous post I discussed the fact that the story behind the data is as important as the data itself. In particular Causal Graph Models visualise the story behind the data, which we will use as a framework for a reasonable analogy.

    For the Monty Hall problem we can build a Causal Graph Model like this:

    Reading:

    The door chosen by the trader is independent of the one with the prize, and vice versa. Just as important, there is no common cause between them that might generate a spurious correlation.

    The host’s choice depends on both the trader’s choice and the prize door.
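    In the notation used in the supplements below (X for the trader’s door, Y for the prize door, Z for the door opened by the host), this graph can be written compactly as X → Z ← Y: two independent causes pointing into the host’s choice, which acts as a collider.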

    By comparing causal graphs of two systems one can get a sense for how analogous both are. A perfect analogy would require more details, but this is beyond the scope of this article. Briefly, one would want to ensure similar functions between the parameters.

    Those interested in learning further details about using Causal Graphs Models to assess causality in real world problems may be interested in this article.

    Anecdotally, it is also worth mentioning that on Let’s Make a Deal, Monty himself admitted years later to playing mind games with the contestants and to not always following the rules, e.g., not always doing the intervention, as “it all depends on his mood”⁴.

    In our setup we assumed perfect conditions, i.e., a host that does not deviate from the script and/or play on the trader’s emotions. Taking this into consideration would require updating the Graphical Model above, which is beyond the scope of this article.

    Some might be disheartened to realise at this stage of the post that there might not be real world applications for this problem.

    I argue that the lessons learnt from the Monty Hall problem definitely are applicable.

    Just to summarise them again:

    (1) Assessing probabilities can be counter-intuitive …

    (2) … especially when dealing with ambiguity

    (3) With new information we should update our beliefs

    (4) Be one with subjectivity

    (5) When confused — look for a useful analogy … but tread with caution

    (6) Simulations are powerful but not always necessary

    (7) A well designed visual goes a long way

    (8) There are many ways to skin a … problem

    (9) Embrace ignorance and be humble

    While the Monty Hall Problem might seem like a simple puzzle, it offers valuable insights into decision-making, particularly for data scientists. The problem highlights the importance of going beyond intuition and embracing a more analytical, data-driven approach. By understanding the principles of Bayesian thinking and updating our beliefs based on new information, we can make more informed decisions in many aspects of our lives, including data science. The Monty Hall Problem serves as a reminder that even seemingly straightforward scenarios can contain hidden complexities and that by carefully examining available information, we can uncover hidden truths and make better decisions.

    At the bottom of the article I provide a list of resources that I found useful to learn about this topic.

    Credit: Wikipedia

    Loved this post? Join me on LinkedIn or Buy me a coffee!

    Credits

    Unless otherwise noted, all images were created by the author.

    Many thanks to Jim Parr, Will Reynolds, and Betty Kazin for their useful comments.

    In the following supplementary sections I derive solutions to the Monty Hall problem from two perspectives:

    Bayesian

    Causal

    Both are motivated by questions in the textbook Causal Inference in Statistics: A Primer by Judea Pearl, Madelyn Glymour, and Nicholas P. Jewell.

    Supplement 1: The Bayesian Point of View

    This section assumes a basic understanding of Bayes’ Theorem, in particular being comfortable with conditional probabilities. In other words, if this makes sense:
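    For reference, Bayes’ theorem for a hypothesis H and observed data D (with P(D) > 0) reads:

    P(H | D) = P(D | H) · P(H) / P(D)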

    We set out to use Bayes’ theorem to prove that switching doors improves the chances of winning in the N=3 Monty Hall Problem.

    We define:

    X — the chosen door

    Y — the door with the prize

    Z — the door opened by the host

    Labelling the doors as A, B and C without loss of generality — say the trader chose door A and the host opened door C, as in the note at the end of this section — we need to solve for the following (i.e., show that switching beats staying):
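    P(Y=B | X=A, Z=C) > P(Y=A | X=A, Z=C)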

    Using Bayes’ theorem we equate the left side as
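    P(Y=B | X=A, Z=C) = P(Z=C | X=A, Y=B) · P(Y=B | X=A) / P(Z=C | X=A)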

    and the right one as:
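    P(Y=A | X=A, Z=C) = P(Z=C | X=A, Y=A) · P(Y=A | X=A) / P(Z=C | X=A)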

    Most of the components are equal (P(Y=A|X=A) = P(Y=B|X=A) = ⅓, and the denominator P(Z=C|X=A) is common to both), so we are left to prove that P(Z=C|X=A, Y=B) > P(Z=C|X=A, Y=A).

    In the case where Y=B, the host has only one choice (door C), making P(Z=C|X=A, Y=B) = 1.

    In the case where Y=A, the host has two choices (doors B and C), making P(Z=C|X=A, Y=A) = 1/2.

    From here:
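    P(Y=B | X=A, Z=C) = (1 · ⅓) / P(Z=C | X=A)  >  P(Y=A | X=A, Z=C) = (½ · ⅓) / P(Z=C | X=A)

    and since these two posteriors must sum to 1 (door C is excluded), P(Y=B | X=A, Z=C) = ⅔ and P(Y=A | X=A, Z=C) = ⅓; switching doubles the odds of winning.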

    Quod erat demonstrandum.

    Note: if the “host choices” argument didn’t make sense, look at the table below, which shows this explicitly. You will want to compare entries {X=A, Y=B, Z=C} and {X=A, Y=A, Z=C}.

    Supplement 2: The Causal Point of View

    For this section, a basic understanding of Directed Acyclic Graphs (DAGs) and Structural Causal Models (SCMs) is useful, but not required. In brief:

    DAGs qualitatively visualise the causal relationships between the parameter nodes.

    SCMs quantitatively express the formula relationships between the parameters.

    Given the DAG (X → Z ← Y)

    we are going to define the SCM that corresponds to the classic N=3 Monty Hall problem and use it to describe the joint distribution of all variables. Later we will expand this generically to N doors.

    We define:

    X — the chosen door

    Y — the door with the prize

    Z — the door opened by the host

    According to the DAG, the chain rule factorises the joint distribution as:
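    P(X, Y, Z) = P(X) · P(Y) · P(Z | X, Y)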

    The SCM is defined by exogenous variables U, endogenous variables V, and the functions between them F:

    U = {X, Y}, V = {Z}, F = {f_Z}

    where X, Y and Z have door values:

    D = {A, B, C}

    The host’s choice f_Z is defined as:
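    f_Z(X, Y): pick Z uniformly at random from the doors in D \ {X, Y}

    or, written as the conditional distribution it induces (a sketch consistent with the example scenarios below): P(Z=z | X=x, Y=y) = 0 if z ∈ {x, y}; 1/2 if x = y and z ≠ x; and 1 if x ≠ y and z ∉ {x, y}.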

    In order to generalise to N doors, the DAG remains the same, but the SCM requires updating D to be a set of N doors: {D₁, D₂, …, Dₙ}.

    Exploring Example Scenarios

    To gain an intuition for this SCM, let’s examine 6 of the 27 possible (X, Y, Z) combinations:

    When X=Y (say X=A, Y=A):

    P(Z=A|X=A, Y=A) = 0; the host cannot choose the participant’s door

    P(Z=B|X=A, Y=A) = 1/2; the prize is behind A → the host chooses B at 50%

    P(Z=C|X=A, Y=A) = 1/2; the prize is behind A → the host chooses C at 50%

    When X≠Y (say X=A, Y=B):

    P(Z=A|X=A, Y=B) = 0; the host cannot choose the participant’s door

    P(Z=B|X=A, Y=B) = 0; the host cannot choose the prize door

    P(Z=C|X=A, Y=B) = 1; the host has no choice in the matter

    Calculating Joint Probabilities

    Using this logic, let’s code up all 27 possibilities in Python:

    import pandas as pd

    # All 27 combinations of the trader's door X, the prize door Y and the host's door Z
    df = pd.DataFrame({
        "X": ["A"] * 9 + ["B"] * 9 + ["C"] * 9,
        "Y": (["A"] * 3 + ["B"] * 3 + ["C"] * 3) * 3,
        "Z": ["A", "B", "C"] * 9,
    })

    df["p_z_given_xy"] = None

    p_x = 1. / 3  # uniform prior on the trader's choice
    p_y = 1. / 3  # uniform prior on the prize door

    # Host behaviour: never the trader's door, never the prize door
    df.loc[(df.X == df.Y) & (df.Z == df.X), "p_z_given_xy"] = 0
    df.loc[(df.X == df.Y) & (df.Z != df.X), "p_z_given_xy"] = 0.5
    df.loc[(df.X != df.Y) & (df.Z == df.X), "p_z_given_xy"] = 0
    df.loc[(df.X != df.Y) & (df.Z == df.Y), "p_z_given_xy"] = 0
    df.loc[(df.X != df.Y) & (df.Z != df.X) & (df.Z != df.Y), "p_z_given_xy"] = 1

    # Joint probability P(X, Y, Z) = P(X) * P(Y) * P(Z | X, Y)
    df["p_xyz"] = df["p_z_given_xy"] * p_x * p_y

    print(f"Sanity check, total probability: {df.p_xyz.sum()}")
    df  # display the joint probability table

    yields the table of all 27 joint probabilities, which sum to 1.

    Resources

    This Quora discussion by Joshua Engel helped me shape a few aspects of this article.

    Causal Inference in Statistics: A Primer / Pearl, Glymour & Jewell — an excellent short textbook.

    I also very much enjoy Tim Harford’s podcast Cautionary Tales. He wrote about this topic on November 3rd 2017 for the Financial Times: Monty Hall and the game show stick-or-switch conundrum

    Footnotes

    ¹ Vazsonyi, Andrew (1999). “Which Door Has the Cadillac?”. Decision Line: 17–19. Archived from the original on 13 April 2014. Retrieved 16 October 2012.

    ² Steve Selvin to the American Statistician in 1975.

    ³ Game Show Problem by Marilyn vos Savant’s “Ask Marilyn” in marilynvossavant.com: “This material in this article was originally published in PARADE magazine in 1990 and 1991”

    ⁴ Tierney, John (1991). “Behind Monty Hall’s Doors: Puzzle, Debate and Answer?”. The New York Times. Retrieved 18 January 2008.

    ⁵ Kahneman, D. Thinking, Fast and Slow. Farrar, Straus and Giroux.

    ⁶ MythBusters, Episode 177, “Pick a Door”: watch the MythBusters’ approach

    ⁷ Monty Hall Problem on Survivor, Season 41: watch Survivor’s take on the problem

    ⁸ Jingyi Jessica Li, How the Monty Hall problem is similar to the false discovery rate in high-throughput data analysis. Whereas the author points out “similarities” between hypothesis testing and the Monty Hall problem, I think that this is a bit misleading. The author is correct that both problems change with the order in which processes are done, but that is part of Bayesian statistics in general, not limited to the Monty Hall problem.
    The post Lessons in Decision Making from the Monty Hall Problem appeared first on Towards Data Science.
    #lessons #decision #making #monty #hall
    🚪🚪🐐 Lessons in Decision Making from the Monty Hall Problem
    The Monty Hall Problem is a well-known brain teaser from which we can learn important lessons in Decision Making that are useful in general and in particular for data scientists. If you are not familiar with this problem, prepare to be perplexed . If you are, I hope to shine light on aspects that you might not have considered . I introduce the problem and solve with three types of intuitions: Common — The heart of this post focuses on applying our common sense to solve this problem. We’ll explore why it fails us and what we can do to intuitively overcome this to make the solution crystal clear . We’ll do this by using visuals , qualitative arguments and some basic probabilities. Bayesian — We will briefly discuss the importance of belief propagation. Causal — We will use a Graph Model to visualise conditions required to use the Monty Hall problem in real world settings.Spoiler alert I haven’t been convinced that there are any, but the thought process is very useful. I summarise by discussing lessons learnt for better data decision making. In regards to the Bayesian and Causal intuitions, these will be presented in a gentle form. For the mathematically inclined I also provide supplementary sections with short Deep Dives into each approach after the summary.By examining different aspects of this puzzle in probability you will hopefully be able to improve your data decision making . Credit: Wikipedia First, some history. Let’s Make a Deal is a USA television game show that originated in 1963. As its premise, audience participants were considered traders making deals with the host, Monty Hall . At the heart of the matter is an apparently simple scenario: A trader is posed with the question of choosing one of three doors for the opportunity to win a luxurious prize, e.g, a car . Behind the other two were goats . The trader is shown three closed doors. The trader chooses one of the doors. Let’s call thisdoor A and mark it with a . Keeping the chosen door closed, the host reveals one of the remaining doors showing a goat. The trader chooses door and the the host reveals door C showing a goat. The host then asks the trader if they would like to stick with their first choice or switch to the other remaining one. If the trader guesses correct they win the prize . If not they’ll be shown another goat. What is the probability of being Zonked? Credit: Wikipedia Should the trader stick with their original choice of door A or switch to B? Before reading further, give it a go. What would you do? Most people are likely to have a gut intuition that “it doesn’t matter” arguing that in the first instance each door had a ⅓ chance of hiding the prize, and that after the host intervention , when only two doors remain closed, the winning of the prize is 50:50. There are various ways of explaining why the coin toss intuition is incorrect. Most of these involve maths equations, or simulations. Whereas we will address these later, we’ll attempt to solve by applying Occam’s razor: A principle that states that simpler explanations are preferable to more complex ones — William of OckhamTo do this it is instructive to slightly redefine the problem to a large N doors instead of the original three. The Large N-Door Problem Similar to before: you have to choose one of many doors. For illustration let’s say N=100. Behind one of the doors there is the prize and behind 99of the rest are goats . The 100 Door Monty Hall problem before the host intervention. 
You choose one door and the host reveals 98of the other doors that have goats leaving yours and one more closed . The 100 Door Monty Hall Problem after the host intervention. Should you stick with your door or make the switch? Should you stick with your original choice or make the switch? I think you’ll agree with me that the remaining door, not chosen by you, is much more likely to conceal the prize … so you should definitely make the switch! It’s illustrative to compare both scenarios discussed so far. In the next figure we compare the post host intervention for the N=3 setupand that of N=100: Post intervention settings for the N=3 setupand N=100. In both cases we see two shut doors, one of which we’ve chosen. The main difference between these scenarios is that in the first we see one goat and in the second there are more than the eye would care to see. Why do most people consider the first case as a “50:50” toss up and in the second it’s obvious to make the switch? We’ll soon address this question of why. First let’s put probabilities of success behind the different scenarios. What’s The Frequency, Kenneth? So far we learnt from the N=100 scenario that switching doors is obviously beneficial. Inferring for the N=3 may be a leap of faith for most. Using some basic probability arguments here we’ll quantify why it is favourable to make the switch for any number door scenario N. We start with the standard Monty Hall problem. When it starts the probability of the prize being behind each of the doors A, B and C is p=⅓. To be explicit let’s define the Y parameter to be the door with the prize , i.e, p= p=p=⅓. The trick to solving this problem is that once the trader’s door A has been chosen , we should pay close attention to the set of the other doors {B,C}, which has the probability of p=p+p=⅔. This visual may help make sense of this: By being attentive to the {B,C} the rest should follow. When the goat is revealed it is apparent that the probabilities post intervention change. Note that for ease of reading I’ll drop the Y notation, where pwill read pand pwill read p. Also for completeness the full terms after the intervention should be even longer due to it being conditional, e.g, p, p, where Z is a parameter representing the choice of the host .premains ⅓ p=p+premains ⅔, p=0; we just learnt that the goat is behind door C, not the prize. p= p-p= ⅔ For anyone with the information provided by the hostthis means that it isn’t a toss of a fair coin! For them the fact that pbecame zero does not “raise all other boats”, but rather premains the same and pgets doubled. The bottom line is that the trader should consider p= ⅓ and p=⅔, hence by switching they are doubling the odds at winning! Let’s generalise to N. When we start all doors have odds of winning the prize p=1/N. After the trader chooses one door which we’ll call D₁, meaning p=1/N, we should now pay attention to the remaining set of doors {D₂, …, Dₙ} will have a chance of p=/N. When the host revealsdoors {D₃, …, Dₙ} with goats: premains 1/N p=p+p+… + premains/N p=p= …=p=p= 0; we just learnt that they have goats, not the prize. p=p— p— … — p=/N The trader should now consider two door values p=1/N and p=/N. Hence the odds of winning improved by a factor of N-1! In the case of N=100, this means by an odds ratio of 99!. The improvement of odds ratios in all scenarios between N=3 to 100 may be seen in the following graph. The thin line is the probability of winning by choosing any door prior to the intervention p=1/N. 
Note that it also represents the chance of winning after the intervention, if they decide to stick to their guns and not switch p.The thick line is the probability of winning the prize after the intervention if the door is switched p=/N: Probability of winning as a function of N. p=p=1/N is the thin line; p=N/is the thick one.Perhaps the most interesting aspect of this graphis that the N=3 case has the highest probability before the host intervention , but the lowest probability after and vice versa for N=100. Another interesting feature is the quick climb in the probability of winning for the switchers: N=3: p=67% N=4: p=75% N=5=80% The switchers curve gradually reaches an asymptote approaching at 100% whereas at N=99 it is 98.99% and at N=100 is equal to 99%. This starts to address an interesting question: Why Is Switching Obvious For Large N But Not N=3? The answer is the fact that this puzzle is slightly ambiguous. Only the highly attentive realise that by revealing the goatthe host is actually conveying a lot of information that should be incorporated into one’s calculation. Later we discuss the difference of doing this calculation in one’s mind based on intuition and slowing down by putting pen to paper or coding up the problem. How much information is conveyed by the host by intervening? A hand wavy explanation is that this information may be visualised as the gap between the lines in the graph above. For N=3 we saw that the odds of winning doubled, but that doesn’t register as strongly to our common sense intuition as the 99 factor as in the N=100. I have also considered describing stronger arguments from Information Theory that provide useful vocabulary to express communication of information. However, I feel that this fascinating field deserves a post of its own, which I’ve published. The main takeaway for the Monty Hall problem is that I have calculated the information gain to be a logarithmic function of the number of doors c using this formula: Information Gain due to the intervention of the host for a setup with c doors. Full details in my upcoming article. For c=3 door case, e.g, the information gain is ⅔ bits. Full details are in this article on entropy. To summarise this section, we use basic probability arguments to quantify the probabilities of winning the prize showing the benefit of switching for all N door scenarios. For those interested in more formal solutions using Bayesian and Causality on the bottom I provide supplement sections. In the next three final sections we’ll discuss how this problem was accepted in the general public back in the 1990s, discuss lessons learnt and then summarise how we can apply them in real-world settings. Being Confused Is OK “No, that is impossible, it should make no difference.” — Paul Erdős If you still don’t feel comfortable with the solution of the N=3 Monty Hall problem, don’t worry you are in good company! According to Vazsonyi¹ even Paul Erdős who is considered “of the greatest experts in probability theory” was confounded until computer simulations were demonstrated to him. When the original solution by Steve Selvin² was popularised by Marilyn vos Savant in her column “Ask Marilyn” in Parade magazine in 1990 many readers wrote that Selvin and Savant were wrong³. According to Tierney’s 1991 article in the New York Times, this included about 10,000 readers, including nearly 1,000 with Ph.D degrees⁴. 
On a personal note, over a decade ago I was exposed to the standard N=3 problem and since then managed to forget the solution numerous times. When I learnt about the large N approach I was quite excited about how intuitive it was. I then failed to explain it to my technical manager over lunch, so this is an attempt to compensate. I still have the same day job . While researching this piece I realised that there is a lot to learn in terms of decision making in general and in particular useful for data science. Lessons Learnt From Monty Hall Problem In his book Thinking Fast and Slow, the late Daniel Kahneman, the co-creator of Behaviour Economics, suggested that we have two types of thought processes: System 1 — fast thinking : based on intuition. This helps us react fast with confidence to familiar situations. System 2 – slow thinking : based on deep thought. This helps figure out new complex situations that life throws at us. Assuming this premise, you might have noticed that in the above you were applying both. By examining the visual of N=100 doors your System 1 kicked in and you immediately knew the answer. I’m guessing that in the N=3 you were straddling between System 1 and 2. Considering that you had to stop and think a bit when going throughout the probabilities exercise it was definitely System 2 . The decision maker’s struggle between System 1 and System 2 . Generated using Gemini Imagen 3 Beyond the fast and slow thinking I feel that there are a lot of data decision making lessons that may be learnt.Assessing probabilities can be counter-intuitive … or Be comfortable with shifting to deep thought We’ve clearly shown that in the N=3 case. As previously mentioned it confounded many people including prominent statisticians. Another classic example is The Birthday Paradox , which shows how we underestimate the likelihood of coincidences. In this problem most people would think that one needs a large group of people until they find a pair sharing the same birthday. It turns out that all you need is 23 to have a 50% chance. And 70 for a 99.9% chance. One of the most confusing paradoxes in the realm of data analysis is Simpson’s, which I detailed in a previous article. This is a situation where trends of a population may be reversed in its subpopulations. The common with all these paradoxes is them requiring us to get comfortable to shifting gears from System 1 fast thinking to System 2 slow . This is also the common theme for the lessons outlined below. A few more classical examples are: The Gambler’s Fallacy , Base Rate Fallacy and the The LindaProblem . These are beyond the scope of this article, but I highly recommend looking them up to further sharpen ways of thinking about data.… especially when dealing with ambiguity or Search for clarity in ambiguity Let’s reread the problem, this time as stated in “Ask Marilyn” Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say №1, and the host, who knows what’s behind the doors, opens another door, say №3, which has a goat. He then says to you, “Do you want to pick door №2?” Is it to your advantage to switch your choice? We discussed that the most important piece of information is not made explicit. It says that the host “knows what’s behind the doors”, but not that they open a door at random, although it’s implicitly understood that the host will never open the door with the car. 
Many real life problems in data science involve dealing with ambiguous demands as well as in data provided by stakeholders. It is crucial for the researcher to track down any relevant piece of information that is likely to have an impact and update that into the solution. Statisticians refer to this as “belief update”.With new information we should update our beliefs This is the main aspect separating the Bayesian stream of thought to the Frequentist. The Frequentist approach takes data at face value. The Bayesian approach incorporates prior beliefs and updates it when new findings are introduced. This is especially useful when dealing with ambiguous situations. To drive this point home, let’s re-examine this figure comparing between the post intervention N=3 setupsand the N=100 one. Copied from above. Post intervention settings for the N=3 setupand N=100. In both cases we had a prior belief that all doors had an equal chance of winning the prize p=1/N. Once the host opened one doora lot of valuable information was revealed whereas in the case of N=100 it was much more apparent than N=3. In the Frequentist approach, however, most of this information would be ignored, as it only focuses on the two closed doors. The Frequentist conclusion, hence is a 50% chance to win the prize regardless of what else is known about the situation. Hence the Frequentist takes Paul Erdős’ “no difference” point of view, which we now know to be incorrect. This would be reasonable if all that was presented were the two doors and not the intervention and the goats. However, if that information is presented, one should shift gears into System 2 thinking and update their beliefs in the system. This is what we have done by focusing not only on the shut door, but rather consider what was learnt about the system at large. For the brave hearted , in a supplementary section below called The Bayesian Point of View I solve for the Monty Hall problem using the Bayesian formalism.Be one with subjectivity The Frequentist main reservation about “going Bayes” is that — “Statistics should be objective”. The Bayesian response is — the Frequentist’s also apply a prior without realising it — a flat one. Regardless of the Bayesian/Frequentist debate, as researchers we try our best to be as objective as possible in every step of the analysis. That said, it is inevitable that subjective decisions are made throughout. E.g, in a skewed distribution should one quote the mean or median? It highly depends on the context and hence a subjective decision needs to be made. The responsibility of the analyst is to provide justification for their choices first to convince themselves and then their stakeholders.When confused — look for a useful analogy … but tread with caution We saw that by going from the N=3 setup to the N=100 the solution was apparent. This is a trick scientists frequently use — if the problem appears at first a bit too confusing/overwhelming, break it down and try to find a useful analogy. It is probably not a perfect comparison, but going from the N=3 setup to N=100 is like examining a picture from up close and zooming out to see the big picture. Think of having only a puzzle piece and then glancing at the jigsaw photo on the box. Monty Hall in 1976. Credit: Wikipedia and using Visual Paradigm Online for the puzzle effect Note: whereas analogies may be powerful, one should do so with caution, not to oversimplify. Physicists refer to this situation as the spherical cow method, where models may oversimplify complex phenomena. 
I admit that even with years of experience in applied statistics at times I still get confused at which method to apply. A large part of my thought process is identifying analogies to known solved problems. Sometimes after making progress in a direction I will realise that my assumptions were wrong and seek a new direction. I used to quip with colleagues that they shouldn’t trust me before my third attempt …Simulations are powerful but not always necessary It’s interesting to learn that Paul Erdős and other mathematicians were convinced only after seeing simulations of the problem. I am two-minded about usage of simulations when it comes to problem solving. On the one hand simulations are powerful tools to analyse complex and intractable problems. Especially in real life data in which one wants a grasp not only of the underlying formulation, but also stochasticity. And here is the big BUT — if a problem can be analytically solved like the Monty Hall one, simulations as fun as they may be, may not be necessary. According to Occam’s razor, all that is required is a brief intuition to explain the phenomena. This is what I attempted to do here by applying common sense and some basic probability reasoning. For those who enjoy deep dives I provide below supplementary sections with two methods for analytical solutions — one using Bayesian statistics and another using Causality.After publishing the first version of this article there was a comment that Savant’s solution³ may be simpler than those presented here. I revisited her communications and agreed that it should be added. In the process I realised three more lessons may be learnt.A well designed visual goes a long way Continuing the principle of Occam’s razor, Savant explained³ quite convincingly in my opinion: You should switch. The first door has a 1/3 chance of winning, but the second door has a 2/3 chance. Here’s a good way to visualize what happened. Suppose there are a million doors, and you pick door #1. Then the host, who knows what’s behind the doors and will always avoid the one with the prize, opens them all except door #777,777. You’d switch to that door pretty fast, wouldn’t you? Hence she provided an abstract visual for the readers. I attempted to do the same with the 100 doors figures. Marilyn vos Savant who popularised the Monty Hall Problem. Credit: Ben David on Flickr under license As mentioned many readers, and especially with backgrounds in maths and statistics, still weren’t convinced. She revised³ with another mental image: The benefits of switching are readily proven by playing through the six games that exhaust all the possibilities. For the first three games, you choose #1 and “switch” each time, for the second three games, you choose #1 and “stay” each time, and the host always opens a loser. Here are the results. She added a table with all the scenarios. I took some artistic liberty and created the following figure. As indicated, the top batch are the scenarios in which the trader switches and the bottom when they switch. Lines in green are games which the trader wins, and in red when they get zonked. The symbolised the door chosen by the trader and Monte Hall then chooses a different door that has a goat behind it. Adaptation of Savant’s table³ of six scenarios that shows the solution to the Monty Hall Problem We clearly see from this diagram that the switcher has a ⅔ chance of winning and those that stay only ⅓. This is yet another elegant visualisation that clearly explains the non intuitive. 
It strengthens the claim that there is no real need for simulations in this case because all they would be doing is rerunning these six scenarios. One more popular solution is decision tree illustrations. You can find these in the Wikipedia page, but I find it’s a bit redundant to Savant’s table. The fact that we can solve this problem in so many ways yields another lesson:There are many ways to skin a … problem Of the many lessons that I have learnt from the writings of late Richard Feynman, one of the best physics and ideas communicators, is that a problem can be solved many ways. Mathematicians and Physicists do this all the time. A relevant quote that paraphrases Occam’s razor: If you can’t explain it simply, you don’t understand it well enough — attributed to Albert Einstein And finallyEmbrace ignorance and be humble ‍ “You are utterly incorrect … How many irate mathematicians are needed to get you to change your mind?” — Ph.D from Georgetown University “May I suggest that you obtain and refer to a standard textbook on probability before you try to answer a question of this type again?” — Ph.D from University of Florida “You’re in error, but Albert Einstein earned a dearer place in the hearts of people after he admitted his errors.” — Ph.D. from University of Michigan Ouch! These are some of the said responses from mathematicians to the Parade article. Such unnecessary viciousness. You can check the reference³ to see the writer’s names and other like it. To whet your appetite: “You blew it, and you blew it big!”, , “You made a mistake, but look at the positive side. If all those Ph.D.’s were wrong, the country would be in some very serious trouble.”, “I am in shock that after being corrected by at least three mathematicians, you still do not see your mistake.”. And as expected from the 1990s perhaps the most embarrassing one was from a resident of Oregon: “Maybe women look at math problems differently than men.” These make me cringe and be embarrassed to be associated by gender and Ph.D. title with these graduates and professors. Hopefully in the 2020s most people are more humble about their ignorance. Yuval Noah Harari discusses the fact that the Scientific Revolution of Galileo Galilei et al. was not due to knowledge but rather admittance of ignorance. “The great discovery that launched the Scientific Revolution was the discovery that humans do not know the answers to their most important questions” — Yuval Noah Harari Fortunately for mathematicians’ image, there were also quiet a lot of more enlightened comments. I like this one from one Seth Kalson, Ph.D. of MIT: You are indeed correct. My colleagues at work had a ball with this problem, and I dare say that most of them, including me at first, thought you were wrong! We’ll summarise by examining how, and if, the Monty Hall problem may be applied in real-world settings, so you can try to relate to projects that you are working on. Application in Real World Settings Researching for this article I found that beyond artificial setups for entertainment⁶ ⁷ there aren’t practical settings for this problem to use as an analogy. Of course, I may be wrong⁸ and would be glad to hear if you know of one. One way of assessing the viability of an analogy is using arguments from causality which provides vocabulary that cannot be expressed with standard statistics. In a previous post I discussed the fact that the story behind the data is as important as the data itself. 
In particular Causal Graph Models visualise the story behind the data, which we will use as a framework for a reasonable analogy. For the Monty Hall problem we can build a Causal Graph Model like this: Reading: The door chosen by the trader is independent from that with the prize and vice versa. As important, there is no common cause between them that might generate a spurious correlation. The host’s choice depends on both and . By comparing causal graphs of two systems one can get a sense for how analogous both are. A perfect analogy would require more details, but this is beyond the scope of this article. Briefly, one would want to ensure similar functions between the parameters. Those interested in learning further details about using Causal Graphs Models to assess causality in real world problems may be interested in this article. Anecdotally it is also worth mentioning that on Let’s Make a Deal, Monty himself has admitted years later to be playing mind games with the contestants and did not always follow the rules, e.g, not always doing the intervention as “it all depends on his mood”⁴. In our setup we assumed perfect conditions, i.e., a host that does not skew from the script and/or play on the trader’s emotions. Taking this into consideration would require updating the Graphical Model above, which is beyond the scope of this article. Some might be disheartened to realise at this stage of the post that there might not be real world applications for this problem. I argue that lessons learnt from the Monty Hall problem definitely are. Just to summarise them again:Assessing probabilities can be counter intuitive …… especially when dealing with ambiguityWith new information we should update our beliefsBe one with subjectivityWhen confused — look for a useful analogy … but tread with cautionSimulations are powerful but not always necessaryA well designed visual goes a long wayThere are many ways to skin a … problemEmbrace ignorance and be humble ‍ While the Monty Hall Problem might seem like a simple puzzle, it offers valuable insights into decision-making, particularly for data scientists. The problem highlights the importance of going beyond intuition and embracing a more analytical, data-driven approach. By understanding the principles of Bayesian thinking and updating our beliefs based on new information, we can make more informed decisions in many aspects of our lives, including data science. The Monty Hall Problem serves as a reminder that even seemingly straightforward scenarios can contain hidden complexities and that by carefully examining available information, we can uncover hidden truths and make better decisions. At the bottom of the article I provide a list of resources that I found useful to learn about this topic. Credit: Wikipedia Loved this post? Join me on LinkedIn or Buy me a coffee! Credits Unless otherwise noted, all images were created by the author. Many thanks to Jim Parr, Will Reynolds, and Betty Kazin for their useful comments. In the following supplementary sections I derive solutions to the Monty Hall’s problem from two perspectives: Bayesian Causal Both are motivated by questions in textbook: Causal Inference in Statistics A Primer by Judea Pearl, Madelyn Glymour, and Nicholas P. Jewell. Supplement 1: The Bayesian Point of View This section assumes a basic understanding of Bayes’ Theorem, in particular being comfortable conditional probabilities. 
In other words if this makes sense: We set out to use Bayes’ theorem to prove that switching doors improves chances in the N=3 Monty Hall Problem.We define X — the chosen door Y— the door with the prize Z — the door opened by the host Labelling the doors as A, B and C, without loss of generality, we need to solve for: Using Bayes’ theorem we equate the left side as and the right one as: Most components are equal=P=⅓ so we are left to prove: In the case where Y=B, the host has only one choice, making P= 1. In the case where Y=A, the host has two choices, making P= 1/2. From here: Quod erat demonstrandum. Note: if the “host choices” arguments didn’t make sense look at the table below showing this explicitly. You will want to compare entries {X=A, Y=B, Z=C} and {X=A, Y=A, Z=C}. Supplement 2: The Causal Point of View The section assumes a basic understanding of Directed Acyclic Graphsand Structural Causal Modelsis useful, but not required. In brief: DAGs qualitatively visualise the causal relationships between the parameter nodes. SCMs quantitatively express the formula relationships between the parameters. Given the DAG we are going to define the SCM that corresponds to the classic N=3 Monty Hall problem and use it to describe the joint distribution of all variables. We later will generically expand to N.We define X — the chosen door Y — the door with the prize Z — the door opened by the host According to the DAG we see that according to the chain rule: The SCM is defined by exogenous variables U , endogenous variables V, and the functions between them F: U = {X,Y}, V={Z}, F= {f} where X, Y and Z have door values: D = {A, B, C} The host choice is fdefined as: In order to generalise to N doors, the DAG remains the same, but the SCM requires to update D to be a set of N doors Dᵢ: {D₁, D₂, … Dₙ}. Exploring Example Scenarios To gain an intuition for this SCM, let’s examine 6 examples of 27: When X=YP= 0; cannot choose the participant’s door P= 1/2; is behind → chooses B at 50% P= 1/2; is behind → chooses C at 50%When X≠YP= 0; cannot choose the participant’s door P= 0; cannot choose prize door P= 1; has not choice in the matterCalculating Joint Probabilities Using logic let’s code up all 27 possibilities in python df = pd.DataFrame++, "Y":++)* 3, "Z":* 9}) df= None p_x = 1./3 p_y = 1./3 df.loc= 0 df.loc= 0.5 df.loc= 0 df.loc= 0 df.loc= 1 df= df* p_x * p_y print{df.sum}") df yields Resources This Quora discussion by Joshua Engel helped me shape a few aspects of this article. Causal Inference in Statistics A Primer / Pearl, Glymour & Jewell— excellent short text bookI also very much enjoy Tim Harford’s podcast Cautionary Tales. He wrote about this topic on November 3rd 2017 for the Financial Times: Monty Hall and the game show stick-or-switch conundrum Footnotes ¹ Vazsonyi, Andrew. “Which Door Has the Cadillac?”. Decision Line: 17–19. Archived from the originalon 13 April 2014. Retrieved 16 October 2012. ² Steve Selvin to the American Statistician in 1975.³Game Show Problem by Marilyn vos Savant’s “Ask Marilyn” in marilynvossavant.com: “This material in this article was originally published in PARADE magazine in 1990 and 1991” ⁴Tierney, John. “Behind Monty Hall’s Doors: Puzzle, Debate and Answer?”. The New York Times. Retrieved 18 January 2008. ⁵ Kahneman, D.. Thinking, fast and slow. Farrar, Straus and Giroux. 
⁶ MythBusters Episode 177 “Pick a Door”Watch Mythbuster’s approach ⁶Monty Hall Problem on Survivor Season 41Watch Survivor’s take on the problem ⁷ Jingyi Jessica LiHow the Monty Hall problem is similar to the false discovery rate in high-throughput data analysis.Whereas the author points about “similarities” between hypothesis testing and the Monty Hall problem, I think that this is a bit misleading. The author is correct that both problems change by the order in which processes are done, but that is part of Bayesian statistics in general, not limited to the Monty Hall problem. The post 🚪🚪🐐 Lessons in Decision Making from the Monty Hall Problem appeared first on Towards Data Science. #lessons #decision #making #monty #hall
    TOWARDSDATASCIENCE.COM
    🚪🚪🐐 Lessons in Decision Making from the Monty Hall Problem
    The Monty Hall Problem is a well-known brain teaser from which we can learn important lessons in Decision Making that are useful in general and in particular for data scientists. If you are not familiar with this problem, prepare to be perplexed . If you are, I hope to shine light on aspects that you might not have considered . I introduce the problem and solve with three types of intuitions: Common — The heart of this post focuses on applying our common sense to solve this problem. We’ll explore why it fails us and what we can do to intuitively overcome this to make the solution crystal clear . We’ll do this by using visuals , qualitative arguments and some basic probabilities (not too deep, I promise). Bayesian — We will briefly discuss the importance of belief propagation. Causal — We will use a Graph Model to visualise conditions required to use the Monty Hall problem in real world settings.Spoiler alert I haven’t been convinced that there are any, but the thought process is very useful. I summarise by discussing lessons learnt for better data decision making. In regards to the Bayesian and Causal intuitions, these will be presented in a gentle form. For the mathematically inclined I also provide supplementary sections with short Deep Dives into each approach after the summary. (Note: These are not required to appreciate the main points of the article.) By examining different aspects of this puzzle in probability you will hopefully be able to improve your data decision making . Credit: Wikipedia First, some history. Let’s Make a Deal is a USA television game show that originated in 1963. As its premise, audience participants were considered traders making deals with the host, Monty Hall . At the heart of the matter is an apparently simple scenario: A trader is posed with the question of choosing one of three doors for the opportunity to win a luxurious prize, e.g, a car . Behind the other two were goats . The trader is shown three closed doors. The trader chooses one of the doors. Let’s call this (without loss of generalisability) door A and mark it with a . Keeping the chosen door closed, the host reveals one of the remaining doors showing a goat (let’s call this door C). The trader chooses door and the the host reveals door C showing a goat. The host then asks the trader if they would like to stick with their first choice or switch to the other remaining one (which we’ll call door B). If the trader guesses correct they win the prize . If not they’ll be shown another goat (also referred to as a zonk). What is the probability of being Zonked? Credit: Wikipedia Should the trader stick with their original choice of door A or switch to B? Before reading further, give it a go. What would you do? Most people are likely to have a gut intuition that “it doesn’t matter” arguing that in the first instance each door had a ⅓ chance of hiding the prize, and that after the host intervention , when only two doors remain closed, the winning of the prize is 50:50. There are various ways of explaining why the coin toss intuition is incorrect. Most of these involve maths equations, or simulations. Whereas we will address these later, we’ll attempt to solve by applying Occam’s razor: A principle that states that simpler explanations are preferable to more complex ones — William of Ockham (1287–1347) To do this it is instructive to slightly redefine the problem to a large N doors instead of the original three. The Large N-Door Problem Similar to before: you have to choose one of many doors. 
For illustration let’s say N=100. Behind one of the doors there is the prize and behind 99 (N-1) of the rest are goats . The 100 Door Monty Hall problem before the host intervention. You choose one door and the host reveals 98 (N-2) of the other doors that have goats leaving yours and one more closed . The 100 Door Monty Hall Problem after the host intervention. Should you stick with your door or make the switch? Should you stick with your original choice or make the switch? I think you’ll agree with me that the remaining door, not chosen by you, is much more likely to conceal the prize … so you should definitely make the switch! It’s illustrative to compare both scenarios discussed so far. In the next figure we compare the post host intervention for the N=3 setup (top panel) and that of N=100 (bottom): Post intervention settings for the N=3 setup (top) and N=100 (bottom). In both cases we see two shut doors, one of which we’ve chosen. The main difference between these scenarios is that in the first we see one goat and in the second there are more than the eye would care to see (unless you shepherd for a living). Why do most people consider the first case as a “50:50” toss up and in the second it’s obvious to make the switch? We’ll soon address this question of why. First let’s put probabilities of success behind the different scenarios. What’s The Frequency, Kenneth? So far we learnt from the N=100 scenario that switching doors is obviously beneficial. Inferring for the N=3 may be a leap of faith for most. Using some basic probability arguments here we’ll quantify why it is favourable to make the switch for any number door scenario N. We start with the standard Monty Hall problem (N=3). When it starts the probability of the prize being behind each of the doors A, B and C is p=⅓. To be explicit let’s define the Y parameter to be the door with the prize , i.e, p(Y=A)= p(Y=B)=p(Y=C)=⅓. The trick to solving this problem is that once the trader’s door A has been chosen , we should pay close attention to the set of the other doors {B,C}, which has the probability of p(Y∈{B,C})=p(Y=B)+p(Y=C)=⅔. This visual may help make sense of this: By being attentive to the {B,C} the rest should follow. When the goat is revealed it is apparent that the probabilities post intervention change. Note that for ease of reading I’ll drop the Y notation, where p(Y=A) will read p(A) and p(Y∈{B,C}) will read p({B,C}). Also for completeness the full terms after the intervention should be even longer due to it being conditional, e.g, p(Y=A|Z=C), p(Y∈{B,C}|Z=C), where Z is a parameter representing the choice of the host . (In the Bayesian supplement section below I use proper notation without this shortening.) p(A) remains ⅓ p({B,C})=p(B)+p(C) remains ⅔, p(C)=0; we just learnt that the goat is behind door C, not the prize. p(B)= p({B,C})-p(C) = ⅔ For anyone with the information provided by the host (meaning the trader and the audience) this means that it isn’t a toss of a fair coin! For them the fact that p(C) became zero does not “raise all other boats” (probabilities of doors A and B), but rather p(A) remains the same and p(B) gets doubled. The bottom line is that the trader should consider p(A) = ⅓ and p(B)=⅔, hence by switching they are doubling the odds at winning! Let’s generalise to N (to make the visual simpler we’ll use N=100 again as an analogy). When we start all doors have odds of winning the prize p=1/N. 
After the trader chooses one door which we’ll call D₁, meaning p(Y=D₁)=1/N, we should now pay attention to the remaining set of doors {D₂, …, Dₙ} will have a chance of p(Y∈{D₂, …, Dₙ})=(N-1)/N. When the host reveals (N-2) doors {D₃, …, Dₙ} with goats (back to short notation): p(D₁) remains 1/N p({D₂, …, Dₙ})=p(D₂)+p(D₃)+… + p(Dₙ) remains (N-1)/N p(D₃)=p(D₄)= …=p(Dₙ₋₁) =p(Dₙ) = 0; we just learnt that they have goats, not the prize. p(D₂)=p({D₂, …, Dₙ}) — p(D₃) — … — p(Dₙ)=(N-1)/N The trader should now consider two door values p(D₁)=1/N and p(D₂)=(N-1)/N. Hence the odds of winning improved by a factor of N-1! In the case of N=100, this means by an odds ratio of 99! (i.e, 99% likely to win a prize when switching vs. 1% if not). The improvement of odds ratios in all scenarios between N=3 to 100 may be seen in the following graph. The thin line is the probability of winning by choosing any door prior to the intervention p(Y)=1/N. Note that it also represents the chance of winning after the intervention, if they decide to stick to their guns and not switch p(Y=D₁|Z={D₃…Dₙ}). (Here I reintroduce the more rigorous conditional form mentioned earlier.) The thick line is the probability of winning the prize after the intervention if the door is switched p(Y=D₂|Z={D₃…Dₙ})=(N-1)/N: Probability of winning as a function of N. p(Y)=p(Y=no switch|Z)=1/N is the thin line; p(Y=switch|Z)=N/(N-1) is the thick one. (By definition the sum of both lines is 1 for each N.) Perhaps the most interesting aspect of this graph (albeit also by definition) is that the N=3 case has the highest probability before the host intervention , but the lowest probability after and vice versa for N=100. Another interesting feature is the quick climb in the probability of winning for the switchers: N=3: p=67% N=4: p=75% N=5=80% The switchers curve gradually reaches an asymptote approaching at 100% whereas at N=99 it is 98.99% and at N=100 is equal to 99%. This starts to address an interesting question: Why Is Switching Obvious For Large N But Not N=3? The answer is the fact that this puzzle is slightly ambiguous. Only the highly attentive realise that by revealing the goat (and never the prize!) the host is actually conveying a lot of information that should be incorporated into one’s calculation. Later we discuss the difference of doing this calculation in one’s mind based on intuition and slowing down by putting pen to paper or coding up the problem. How much information is conveyed by the host by intervening? A hand wavy explanation is that this information may be visualised as the gap between the lines in the graph above. For N=3 we saw that the odds of winning doubled (nothing to sneeze at!), but that doesn’t register as strongly to our common sense intuition as the 99 factor as in the N=100. I have also considered describing stronger arguments from Information Theory that provide useful vocabulary to express communication of information. However, I feel that this fascinating field deserves a post of its own, which I’ve published. The main takeaway for the Monty Hall problem is that I have calculated the information gain to be a logarithmic function of the number of doors c using this formula: Information Gain due to the intervention of the host for a setup with c doors. Full details in my upcoming article. For c=3 door case, e.g, the information gain is ⅔ bits (of a maximum possible 1.58 bits). Full details are in this article on entropy. 
To summarise this section: we used basic probability arguments to quantify the probability of winning the prize, showing the benefit of switching for all N-door scenarios. For those interested in more formal solutions using Bayesian statistics and Causality, I provide supplementary sections at the bottom.

In the three final sections we'll discuss how this problem was received by the general public back in the 1990s, outline lessons learnt, and then summarise how we can apply them in real-world settings.

Being Confused Is OK

"No, that is impossible, it should make no difference." — Paul Erdős

If you still don't feel comfortable with the solution of the N=3 Monty Hall problem, don't worry, you are in good company! According to Vazsonyi (1999)¹, even Paul Erdős, who is considered one of the greatest experts in probability theory, was confounded until computer simulations were demonstrated to him.

When the original solution by Steve Selvin (1975)² was popularised by Marilyn vos Savant in her column "Ask Marilyn" in Parade magazine in 1990, many readers wrote that Selvin and Savant were wrong³. According to Tierney's 1991 article in the New York Times, this included about 10,000 readers, including nearly 1,000 with Ph.D. degrees⁴.

On a personal note, over a decade ago I was exposed to the standard N=3 problem and since then have managed to forget the solution numerous times. When I learnt about the large-N approach I was quite excited about how intuitive it was. I then failed to explain it to my technical manager over lunch, so this is an attempt to compensate. I still have the same day job.

While researching this piece I realised that there is a lot to learn in terms of decision making in general, and in particular lessons useful for data science.

Lessons Learnt From the Monty Hall Problem

In his book Thinking, Fast and Slow⁵, the late Daniel Kahneman, a co-creator of Behavioural Economics, suggested that we have two types of thought processes:

System 1 — fast thinking: based on intuition. This helps us react fast and with confidence to familiar situations.
System 2 — slow thinking: based on deep thought. This helps us figure out new complex situations that life throws at us.

Assuming this premise, you might have noticed that in the above you were applying both. By examining the visual of N=100 doors your System 1 kicked in and you immediately knew the answer. I'm guessing that in the N=3 case you were straddling Systems 1 and 2. Considering that you had to stop and think a bit when going through the probabilities exercise, that was definitely System 2.

The decision maker's struggle between System 1 and System 2. Generated using Gemini Imagen 3

Beyond fast and slow thinking, I feel that there are a lot of data decision-making lessons that may be learnt.

(1) Assessing probabilities can be counter-intuitive … or Be comfortable with shifting to deep thought

We've clearly shown that in the N=3 case. As previously mentioned, it confounded many people, including prominent statisticians. Another classic example is The Birthday Paradox, which shows how we underestimate the likelihood of coincidences. In this problem most people would think that one needs a large group of people before finding a pair sharing the same birthday. It turns out that all you need is 23 for a 50% chance, and 70 for a 99.9% chance (a quick check is sketched below).

One of the most confusing paradoxes in the realm of data analysis is Simpson's, which I detailed in a previous article. This is a situation where trends of a population may be reversed in its subpopulations.
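As a quick check of the birthday-paradox figures quoted above (23 people for roughly a 50% chance, 70 for roughly 99.9%), here is a minimal Python calculation; the helper name is mine, for illustration only.

import math

def p_shared_birthday(n):
    # Probability that at least two of n people share a birthday,
    # assuming 365 equally likely days and ignoring leap years.
    p_all_distinct = math.prod((365 - k) / 365 for k in range(n))
    return 1 - p_all_distinct

print(f"n=23: {p_shared_birthday(23):.3f}")   # ~0.507
print(f"n=70: {p_shared_birthday(70):.5f}")   # ~0.99916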
What all these paradoxes have in common is that they require us to get comfortable shifting gears from System 1 fast thinking to System 2 slow thinking. This is also the common theme of the lessons outlined below.

A few more classical examples are the Gambler's Fallacy, the Base Rate Fallacy and the Linda [bank teller] Problem. These are beyond the scope of this article, but I highly recommend looking them up to further sharpen ways of thinking about data.

(2) … especially when dealing with ambiguity or Search for clarity in ambiguity

Let's reread the problem, this time as stated in "Ask Marilyn":

Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say №1, and the host, who knows what's behind the doors, opens another door, say №3, which has a goat. He then says to you, "Do you want to pick door №2?" Is it to your advantage to switch your choice?

We discussed that the most important piece of information is not made explicit. The statement says that the host "knows what's behind the doors", but it does not state whether the host opens a door at random or deliberately avoids the prize; it is only implicitly understood that the host will never open the door with the car.

Many real-life problems in data science involve dealing with ambiguous demands, as well as ambiguity in the data provided by stakeholders. It is crucial for the researcher to track down any relevant piece of information that is likely to have an impact and incorporate it into the solution. Statisticians refer to this as "belief updating".

(3) With new information we should update our beliefs

This is the main aspect separating the Bayesian stream of thought from the Frequentist one. The Frequentist approach takes data at face value (referred to as flat priors). The Bayesian approach incorporates prior beliefs and updates them when new findings are introduced. This is especially useful when dealing with ambiguous situations.

To drive this point home, let's re-examine this figure comparing the post-intervention N=3 setup (top panel) and the N=100 one (bottom panel).

Copied from above. Post intervention settings for the N=3 setup (top) and N=100 (bottom).

In both cases we had a prior belief that all doors had an equal chance p=1/N of hiding the prize. Once the host opened one door (or 98 doors when N=100) a lot of valuable information was revealed, and in the case of N=100 this was much more apparent than for N=3. In the Frequentist approach, however, most of this information would be ignored, as it only focuses on the two closed doors. The Frequentist conclusion, hence, is a 50% chance to win the prize regardless of what else is known about the situation. In effect the Frequentist takes Paul Erdős' "no difference" point of view, which we now know to be incorrect.

This would be reasonable if all that was presented were the two doors, and not the intervention and the goats. However, if that information is presented, one should shift gears into System 2 thinking and update one's beliefs about the system. This is what we have done by focusing not only on the shut door, but rather considering what was learnt about the system at large. For the brave-hearted, in a supplementary section below called The Bayesian Point of View, I solve the Monty Hall problem using the Bayesian formalism.

(4) Be one with subjectivity

The Frequentists' main reservation about "going Bayes" is that "statistics should be objective". The Bayesian response is that Frequentists also apply a prior without realising it — a flat one.
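To make lessons (3) and (4) concrete, here is a minimal numerical sketch of the belief update for the N=3 case (notation as above, assuming the trader picked door A and the host opened door C): a flat prior over the prize location, a likelihood that encodes the host's behaviour, and a posterior that follows from Bayes' theorem.

# Prior belief: the prize (Y) is equally likely to be behind doors A, B and C.
prior = {"A": 1/3, "B": 1/3, "C": 1/3}

# Likelihood of the observation "host opens C" given the trader chose A:
# if Y=A the host opens B or C at random; if Y=B the host must open C; if Y=C it cannot happen.
likelihood = {"A": 1/2, "B": 1.0, "C": 0.0}

evidence = sum(prior[d] * likelihood[d] for d in prior)  # P(host opens C | trader chose A) = 1/2
posterior = {d: prior[d] * likelihood[d] / evidence for d in prior}
print(posterior)  # A: ~0.333, B: ~0.667, C: 0.0 -> switching doubles the odds

The flat prior is exactly where the Frequentist view stops; the Bayesian step is multiplying it by the likelihood of what the host actually did.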
Regardless of the Bayesian/Frequentist debate, as researchers we try our best to be as objective as possible in every step of the analysis. That said, it is inevitable that subjective decisions are made throughout. E.g., in a skewed distribution should one quote the mean or the median? It highly depends on the context, and hence a subjective decision needs to be made. The responsibility of the analyst is to provide justification for their choices, first to convince themselves and then their stakeholders.

(5) When confused — look for a useful analogy … but tread with caution

We saw that by going from the N=3 setup to N=100 the solution became apparent. This is a trick scientists frequently use — if the problem appears at first a bit too confusing or overwhelming, break it down and try to find a useful analogy.

It is probably not a perfect comparison, but going from the N=3 setup to N=100 is like examining a picture from up close and then zooming out to see the big picture. Think of having only a puzzle piece and then glancing at the jigsaw photo on the box.

Monty Hall in 1976. Credit: Wikipedia, and using Visual Paradigm Online for the puzzle effect

Note: whereas analogies may be powerful, one should use them with caution and not oversimplify. Physicists refer to this as the "spherical cow" situation, where models may oversimplify complex phenomena. I admit that even with years of experience in applied statistics I at times still get confused about which method to apply. A large part of my thought process is identifying analogies to known solved problems. Sometimes after making progress in a direction I will realise that my assumptions were wrong and seek a new direction. I used to quip with colleagues that they shouldn't trust me before my third attempt …

(6) Simulations are powerful but not always necessary

It's interesting to learn that Paul Erdős and other mathematicians were convinced only after seeing simulations of the problem. I am of two minds about the use of simulations when it comes to problem solving. On the one hand simulations are powerful tools for analysing complex and intractable problems, especially with real-life data where one wants a grasp not only of the underlying formulation but also of the stochasticity.

And here is the big BUT — if a problem can be analytically solved, like the Monty Hall one, simulations, as fun as they may be (such as the MythBusters have done⁶), may not be necessary. In the spirit of Occam's razor, all that is required is a brief intuition to explain the phenomenon. This is what I attempted to do here by applying common sense and some basic probability reasoning. For those who enjoy deep dives I provide below supplementary sections with two methods for analytical solutions — one using Bayesian statistics and another using Causality.

[Update] After publishing the first version of this article there was a comment that Savant's solution³ may be simpler than those presented here. I revisited her communications and agreed that it should be added. In the process I realised that three more lessons may be learnt.

(7) A well designed visual goes a long way

Continuing the principle of Occam's razor, Savant explained³ quite convincingly, in my opinion:

You should switch. The first door has a 1/3 chance of winning, but the second door has a 2/3 chance. Here's a good way to visualize what happened. Suppose there are a million doors, and you pick door #1. Then the host, who knows what's behind the doors and will always avoid the one with the prize, opens them all except door #777,777.
You'd switch to that door pretty fast, wouldn't you?

Hence she provided an abstract visual for the readers. I attempted to do the same with the 100-door figures.

Marilyn vos Savant, who popularised the Monty Hall Problem. Credit: Ben David on Flickr under license

As mentioned, many readers, especially those with backgrounds in maths and statistics, still weren't convinced. She revised³ with another mental image:

The benefits of switching are readily proven by playing through the six games that exhaust all the possibilities. For the first three games, you choose #1 and "switch" each time, for the second three games, you choose #1 and "stay" each time, and the host always opens a loser. Here are the results.

She added a table with all the scenarios. I took some artistic liberty and created the following figure. As indicated, the top batch contains the scenarios in which the trader switches and the bottom those in which they stay. Lines in green are games which the trader wins, and in red those in which they get zonked. The door symbol marks the door chosen by the trader, and Monty Hall then opens a different door that has a goat behind it.

Adaptation of Savant's table³ of six scenarios that shows the solution to the Monty Hall Problem

We clearly see from this diagram that the switcher has a ⅔ chance of winning and those who stay only ⅓. This is yet another elegant visualisation that clearly explains the non-intuitive. It strengthens the claim that there is no real need for simulations in this case, because all they would be doing is rerunning these six scenarios.

One more popular solution is decision-tree illustrations. You can find these on the Wikipedia page, but I find them a bit redundant given Savant's table.

The fact that we can solve this problem in so many ways yields another lesson:

(8) There are many ways to skin a … problem

One of the many lessons that I have learnt from the writings of the late Richard Feynman, one of the best communicators of physics and ideas, is that a problem can be solved in many ways. Mathematicians and physicists do this all the time. A relevant quote that paraphrases Occam's razor:

If you can't explain it simply, you don't understand it well enough — attributed to Albert Einstein

And finally

(9) Embrace ignorance and be humble

"You are utterly incorrect … How many irate mathematicians are needed to get you to change your mind?" — Ph.D. from Georgetown University

"May I suggest that you obtain and refer to a standard textbook on probability before you try to answer a question of this type again?" — Ph.D. from University of Florida

"You're in error, but Albert Einstein earned a dearer place in the hearts of people after he admitted his errors." — Ph.D. from University of Michigan

Ouch! These are some of the responses from mathematicians to the Parade article. Such unnecessary viciousness. You can check the reference³ to see the writers' names and others like them. To whet your appetite: "You blew it, and you blew it big!", "You made a mistake, but look at the positive side. If all those Ph.D.'s were wrong, the country would be in some very serious trouble.", "I am in shock that after being corrected by at least three mathematicians, you still do not see your mistake.". And, as might be expected from the 1990s, perhaps the most embarrassing one was from a resident of Oregon: "Maybe women look at math problems differently than men."

These make me cringe and feel embarrassed to be associated, by gender and Ph.D. title, with these graduates and professors.
Hopefully in the 2020s most people are more humble about their ignorance. Yuval Noah Harari discusses the fact that the Scientific Revolution of Galileo Galilei et al. was driven not by knowledge but rather by the admission of ignorance.

"The great discovery that launched the Scientific Revolution was the discovery that humans do not know the answers to their most important questions" — Yuval Noah Harari

Fortunately for mathematicians' image, there were also quite a lot of more enlightened comments. I like this one from one Seth Kalson, Ph.D. of MIT:

You are indeed correct. My colleagues at work had a ball with this problem, and I dare say that most of them, including me at first, thought you were wrong!

We'll summarise by examining how, and if, the Monty Hall problem may be applied in real-world settings, so you can try to relate it to projects that you are working on.

Application in Real World Settings

While researching this article I found that, beyond artificial setups for entertainment⁶ ⁷, there aren't practical settings for which this problem serves as an analogy. Of course, I may be wrong⁸ and would be glad to hear if you know of one.

One way of assessing the viability of an analogy is using arguments from causality, which provide vocabulary that cannot be expressed with standard statistics. In a previous post I discussed the fact that the story behind the data is as important as the data itself. In particular, Causal Graph Models visualise the story behind the data, which we will use as a framework for a reasonable analogy.

For the Monty Hall problem we can build a Causal Graph Model like this:

Reading: the door chosen by the trader (X) is independent of the door with the prize (Y) and vice versa. Just as important, there is no common cause between them that might generate a spurious correlation. The host's choice (Z) depends on both X and Y.

By comparing the causal graphs of two systems one can get a sense of how analogous they are. A perfect analogy would require more details, but that is beyond the scope of this article. Briefly, one would want to ensure similar functions between the parameters (referred to as the Structural Causal Model; for details see the supplementary section below called The Causal Point of View). Those interested in further details about using Causal Graph Models to assess causality in real-world problems may be interested in this article.

Anecdotally, it is also worth mentioning that on Let's Make a Deal, Monty himself admitted years later to playing mind games with the contestants and not always following the rules, e.g., not always doing the intervention, as "it all depends on his mood"⁴. In our setup we assumed perfect conditions, i.e., a host that does not deviate from the script and/or play on the trader's emotions. Taking this into consideration would require updating the Graphical Model above, which is beyond the scope of this article.

Some might be disheartened to realise at this stage of the post that there might not be real-world applications for this problem. I argue that the lessons learnt from the Monty Hall problem definitely have them.
Just to summarise them again:

(1) Assessing probabilities can be counter-intuitive … (Be comfortable with shifting to deep thought)
(2) … especially when dealing with ambiguity (Search for clarity)
(3) With new information we should update our beliefs
(4) Be one with subjectivity
(5) When confused — look for a useful analogy … but tread with caution
(6) Simulations are powerful but not always necessary
(7) A well designed visual goes a long way
(8) There are many ways to skin a … problem
(9) Embrace ignorance and be humble

While the Monty Hall Problem might seem like a simple puzzle, it offers valuable insights into decision-making, particularly for data scientists. The problem highlights the importance of going beyond intuition and embracing a more analytical, data-driven approach. By understanding the principles of Bayesian thinking and updating our beliefs based on new information, we can make more informed decisions in many aspects of our lives, including data science. The Monty Hall Problem serves as a reminder that even seemingly straightforward scenarios can contain hidden complexities, and that by carefully examining the available information we can uncover hidden truths and make better decisions.

At the bottom of the article I provide a list of resources that I found useful for learning about this topic.

Credit: Wikipedia

Loved this post? Join me on LinkedIn or Buy me a coffee!

Credits
Unless otherwise noted, all images were created by the author. Many thanks to Jim Parr, Will Reynolds, and Betty Kazin for their useful comments.

In the following supplementary sections I derive solutions to the Monty Hall problem from two perspectives: Bayesian and Causal. Both are motivated by questions in the textbook Causal Inference in Statistics: A Primer by Judea Pearl, Madelyn Glymour, and Nicholas P. Jewell (2016).

Supplement 1: The Bayesian Point of View

This section assumes a basic understanding of Bayes' Theorem, in particular being comfortable with conditional probabilities. In other words, it helps if this makes sense: P(A|B) = P(B|A)·P(A)/P(B).

We set out to use Bayes' theorem to prove that switching doors improves chances in the N=3 Monty Hall Problem. (Problem 1.3.3 of the Primer textbook.)

We define:
X — the chosen door
Y — the door with the prize
Z — the door opened by the host

Labelling the doors as A, B and C, without loss of generality, we need to show that P(Y=B|X=A, Z=C) > P(Y=A|X=A, Z=C), i.e., that switching beats staying. Using Bayes' theorem we equate the left side to P(X=A, Z=C|Y=B)·P(Y=B)/P(X=A, Z=C) and the right one to P(X=A, Z=C|Y=A)·P(Y=A)/P(X=A, Z=C).

Most components are equal (remember that P(Y=A)=P(Y=B)=⅓, and the denominator is shared), so we are left to prove that P(X=A, Z=C|Y=B) > P(X=A, Z=C|Y=A).

In the case where Y=B (the prize is behind door B), the host has only one choice (they can only select door C), making P(X=A, Z=C|Y=B) = 1. In the case where Y=A (the prize is behind door A), the host has two choices (doors B and C), making P(X=A, Z=C|Y=A) = 1/2. From here it follows that P(Y=B|X=A, Z=C) = 2·P(Y=A|X=A, Z=C), i.e., switching doubles the chance of winning.

Quod erat demonstrandum.

Note: if the "host choices" argument didn't make sense, look at the table below, which shows this explicitly. You will want to compare entries {X=A, Y=B, Z=C} and {X=A, Y=A, Z=C}.

Supplement 2: The Causal Point of View

A basic understanding of Directed Acyclic Graphs (DAGs) and Structural Causal Models (SCMs) is useful for this section, but not required. In brief:

DAGs qualitatively visualise the causal relationships between the parameter nodes.
SCMs quantitatively express the formula relationships between the parameters.

Given the DAG, we are going to define the SCM that corresponds to the classic N=3 Monty Hall problem and use it to describe the joint distribution of all variables. We will later expand generically to N.
(Inspired by problem 1.5.4 of the Primer textbook, as well as its brief mention of the N-door problem.)

We define:
X — the chosen door
Y — the door with the prize
Z — the door opened by the host

According to the DAG, the chain rule gives the joint distribution P(X, Y, Z) = P(X)·P(Y)·P(Z|X, Y).

The SCM is defined by exogenous variables U, endogenous variables V, and the functions between them F: U = {X, Y}, V = {Z}, F = {f(Z)}, where X, Y and Z take door values D = {A, B, C}. The host's choice f(Z) is: open a door drawn uniformly at random from the doors that are neither the trader's choice X nor the prize door Y.

In order to generalise to N doors, the DAG remains the same, but the SCM requires updating D to be a set of N doors Dᵢ: {D₁, D₂, … Dₙ}.

Exploring Example Scenarios

To gain an intuition for this SCM, let's examine 6 of the 27 (=3³) possible scenarios:

When X=Y (i.e., the prize is behind the chosen door):
P(Z=A|X=A, Y=A) = 0; the host cannot open the participant's door
P(Z=B|X=A, Y=A) = 1/2; the prize is behind the chosen door A → the host opens B half the time
P(Z=C|X=A, Y=A) = 1/2; the prize is behind the chosen door A → the host opens C half the time (complementary to the above)

When X≠Y (i.e., the prize is not behind the chosen door):
P(Z=A|X=A, Y=B) = 0; the host cannot open the participant's door
P(Z=B|X=A, Y=B) = 0; the host cannot open the prize door
P(Z=C|X=A, Y=B) = 1; the host has no choice in the matter (complementary to the above)

Calculating Joint Probabilities

Using this logic, let's code up all 27 possibilities in Python:

import pandas as pd

# Enumerate all 27 combinations of X (chosen door), Y (prize door) and Z (door opened by the host).
df = pd.DataFrame({"X": (["A"] * 9) + (["B"] * 9) + (["C"] * 9),
                   "Y": ((["A"] * 3) + (["B"] * 3) + (["C"] * 3)) * 3,
                   "Z": ["A", "B", "C"] * 9})

df["P(Z|X,Y)"] = None
p_x = 1./3  # the trader picks a door uniformly at random
p_y = 1./3  # the prize is placed uniformly at random

# The host's rule f(Z): never the chosen door, never the prize door.
df.loc[df.query("X == Y == Z").index, "P(Z|X,Y)"] = 0    # host never opens the chosen door
df.loc[df.query("X == Y != Z").index, "P(Z|X,Y)"] = 0.5  # prize behind the chosen door -> host picks either goat door
df.loc[df.query("X != Y == Z").index, "P(Z|X,Y)"] = 0    # host never opens the prize door
df.loc[df.query("Z == X != Y").index, "P(Z|X,Y)"] = 0    # host never opens the chosen door
df.loc[df.query("X != Y").query("Z != Y").query("Z != X").index, "P(Z|X,Y)"] = 1  # only one goat door remains

df["P(X, Y, Z)"] = df["P(Z|X,Y)"] * p_x * p_y
print(f"Testing normalisation of P(X,Y,Z) {df['P(X, Y, Z)'].sum()}")
df

This yields the full joint-probability table over all 27 scenarios (rendered in the original article).

Resources

This Quora discussion by Joshua Engel helped me shape a few aspects of this article.
Causal Inference in Statistics: A Primer / Pearl, Glymour & Jewell (2016) — an excellent short textbook (site).
I also very much enjoy Tim Harford's podcast Cautionary Tales. He wrote about this topic on November 3rd 2017 for the Financial Times: Monty Hall and the game show stick-or-switch conundrum.

Footnotes

¹ Vazsonyi, Andrew (December 1998 — January 1999). "Which Door Has the Cadillac?" (PDF). Decision Line: 17–19. Archived from the original (PDF) on 13 April 2014. Retrieved 16 October 2012.
² Steve Selvin, letter to the American Statistician, 1975.
³ Game Show Problem by Marilyn vos Savant, "Ask Marilyn", marilynvossavant.com (web archive): "This material in this article was originally published in PARADE magazine in 1990 and 1991".
⁴ Tierney, John (21 July 1991). "Behind Monty Hall's Doors: Puzzle, Debate and Answer?". The New York Times. Retrieved 18 January 2008.
⁵ Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
⁶ MythBusters Episode 177 "Pick a Door" (Wikipedia). Watch the MythBusters' approach.
⁷ Monty Hall Problem on Survivor Season 41 (LinkedIn, YouTube). Watch Survivor's take on the problem.
⁸ Jingyi Jessica Li (2024), How the Monty Hall problem is similar to the false discovery rate in high-throughput data analysis. Whereas the author points to "similarities" between hypothesis testing and the Monty Hall problem, I think that this is a bit misleading.
The author is correct that both problems are sensitive to the order in which processes are done, but that is a feature of Bayesian statistics in general, not something specific to the Monty Hall problem. The post 🚪🚪🐐 Lessons in Decision Making from the Monty Hall Problem appeared first on Towards Data Science.
  • After Reaching AGI Some Insist There Won’t Be Anything Left For Humans To Teach AI About

AGI is going to need to keep up with expanding human knowledge even in a post-AGI world. (Image credit: Getty)
In today’s column, I address a prevalent assertion that after AI is advanced to becoming artificial general intelligence (AGI) there won’t be anything else for humans to teach AGI about. The assumption is that AGI will know everything that we know. Ergo, there isn’t any ongoing need or even value in trying to train AGI on anything else.

Turns out that’s hogwash (misguided) and there will still be a lot of human-AI, or shall we say human-AGI, co-teaching going on.

    Let’s talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

    Heading Toward AGI And ASI
    First, some fundamentals are required to set the stage for this weighty discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).
    AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
    We have not yet attained AGI.
    In fact, it is unknown as to whether we will reach AGI, or that maybe AGI will be achievable in decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

    AGI That Knows Everything
    A common viewpoint is that if we do attain AGI, the AGI will know everything that humans know. All human knowledge will be at the computational fingertips of AGI. In that case, the seemingly logical conclusion is that AGI won’t have anything else to learn from humans. The whole kit-and-kaboodle will already be in place.
    For example, if you find yourself idly interested in Einstein’s theory of relativity, no worries, just ask AGI. The AGI will tell you all about Einstein’s famed insights. You won’t need to look up the theory anywhere else. AGI will be your one-stop shopping bonanza for all human knowledge.
    Suppose you decided that you wanted to teach AGI about how important Einstein was as a physicist. AGI would immediately tell you that you needn’t bother doing so. The AGI already knows the crucial nature that Einstein played in human existence.
    Give up trying to teach AGI about anything at all since AGI has got it all covered. Period, end of story.
    Reality Begs To Differ
    There are several false or misleading assumptions underlying the strident belief that we won’t be able to teach AGI anything new.
    First, keep in mind that AGI will be principally trained on written records such as the massive amount of writing found across the Internet, including essays, stories, poems, etc. Ask yourself whether the written content on the Internet is indeed a complete capture of all human knowledge.
    It isn’t.
    There are written records that aren’t on the Internet and just haven’t been digitized, or if digitized haven’t been posted onto the Internet. The crux is that there will still be a lot of content that AGI won’t have seen. In a post-AGI world, it is plausible to assume that humans will still be posting more content onto the Internet and that on an ongoing basis, the AGI can demonstrably learn by scanning that added content.
    Second, AGI won’t know what’s in our heads.
I mean to say that there is knowledge we have in our noggins that isn’t necessarily written down and placed onto the Internet. None of that brainware content will be privy to AGI. As an aside, many research efforts are advancing brain-machine interfaces (BMI), see my coverage at the link here, which will someday potentially allow for the reading of minds, but we don’t know when that will materialize nor whether it will coincide with attaining AGI.
    Time Keeps Ticking Along
    Another consideration is that time continues to flow along in a post-AGI era.
    This suggests that the world will be changing and that humans will come up with new thoughts that we hadn’t conceived of previously. AGI, if frozen or out of touch with the latest human knowledge, will have only captured human knowledge that existed at a particular earlier point in time. The odds are that we would want AGI to keep up with whatever new knowledge we’ve divined since that initial AGI launch.
    Imagine things this way. Suppose that we managed to attain AGI before Einstein was even born. I know that seems zany but just go with the idea for the moment. If AGI was locked into only knowing human knowledge before Einstein, this amazing AGI would regrettably miss out on the theory of relativity.
    Since it is farfetched to try and turn back the clock and postulate that AGI would be attained before Einstein, let’s recast this idea. There is undoubtedly another Einstein-like person yet to be born, thus, at some point in the future, once AGI is around, it stands to reason that AGI would benefit from learning newly conceived knowledge.
    Belief That AGI Gets Uppity
    By and large, we can reject the premise that AGI will have learned all human knowledge in the sense that this brazen claim refers solely to the human knowledge known at the time of AGI attainment, and of which was readily available to the AGI at that point in time. This leaves a whole lot of additional teaching available on the table. Plus, the passage of time will further increase the expanding new knowledge that humans could share with AGI.
    Will AGI want to be taught by humans or at least learn from whatever additional knowledge that humans possess?
    One answer is no. You see, some worry that AGI will find it insulting to learn from humans and therefore will avoid doing so. The logic seems to be that since AGI will be as smart as humans are, the AGI might get uppity and decide we are inferior and couldn’t possibly envision that we have anything useful for the AGI to gain from.
    I am more upbeat on this posture.
I would like to think that an AGI that is as smart as humans would crave new knowledge. AGI would be eager to acquire new knowledge and do so with rapt determination. Whether the knowledge comes from humans or beetles, the AGI wouldn’t especially care. Garnering new knowledge would be a key precept of AGI, which I contend is a much more logical assumption than the conjecture that AGI would stick its nose up at gleaning new human-devised knowledge.
    Synergy Is The Best Course
    Would humans be willing to learn from AGI?
Gosh, I certainly hope so. It would seem a crazy notion that humankind would decide that we won’t opt to learn things from AGI. AGI would be a huge boon to human learning. You could make a compelling case that the advent of AGI could increase the knowledge of humans immensely, assuming that people can tap into AGI easily and at a low cost. Envision that everyone with Internet access could seek out AGI to train or teach them on whatever topic they so desired.
    Boom, drop the mic.
    In a post-AGI realm, the best course of action would be that AGI learns from us on an ongoing basis, and on an akin ongoing basis, we also learn from AGI. That’s a synergy worthy of great hope and promise.
    The last word on this for now goes to the legendary Henry Ford: “Coming together is a beginning; keeping together is progress; working together is success.” If humanity plays its cards right, we will have human-AGI harmony and lean heartily into the synergy that arises accordingly.