• So, Einstein casually strolled into the universe, dropped the bombshell that time is as relative as your aunt's opinions during family dinners, and walked away like it was no big deal. Who knew that the speed of light, that seemingly harmless little constant, could turn our entire concept of time into a cosmic joke? Apparently, if you’re zooming through space at light speed, your watch just decides to take a nap while the rest of us mortals are stuck in the slow lane. Talk about a time dilation dilemma! Next time you’re late, just blame it on Einstein; clearly, some of us are just trying to keep up with the universe’s idea of punctuality.

#Einstein #TimeDilation #Relativity
    Einstein Showed That Time Is Relative. But … Why Is It?
    The mind-bending concept of time dilation results from a seemingly harmless assumption—that the speed of light is the same for all observers.
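    That "harmless assumption" does all the work. The standard light-clock argument turns it into the full time dilation formula; a sketch of the textbook derivation (standard special relativity, not specific to the article): a clock ticks by bouncing light between mirrors a distance $L$ apart, so at rest one tick takes $\Delta t_0 = 2L/c$. Seen from the ground, a moving clock's light travels a longer diagonal path while the clock covers $v\,\Delta t$, and since both observers must measure the same light speed $c$,

    $$\left(\frac{c\,\Delta t}{2}\right)^{2} = L^{2} + \left(\frac{v\,\Delta t}{2}\right)^{2} \quad\Longrightarrow\quad \Delta t = \frac{\Delta t_0}{\sqrt{1 - v^{2}/c^{2}}}.$$

    The moving clock's tick $\Delta t$ is longer than $\Delta t_0$: it runs slow, exactly as the joke says.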
  • Research roundup: 7 stories we almost missed

    Best of the rest

    Research roundup: 7 stories we almost missed

    Also: drumming chimpanzees, picking styles of two jazz greats, and an ancient underground city's soundscape

    Jennifer Ouellette



    May 31, 2025 5:37 pm


Time lapse photos show a new ping-pong-playing robot performing a top spin. Credit: David Nguyen, Kendrick Cancio and Sangbae Kim



It's a regrettable reality that there is never time to cover all the interesting scientific stories we come across each month. In the past, we've featured year-end roundups of cool science stories we (almost) missed. This year, we're experimenting with a monthly collection. May's list includes a nifty experiment to make a predicted effect of special relativity visible; a ping-pong-playing robot that can return hits with 88 percent accuracy; and the discovery of the rare genetic mutation that makes orange cats orange, among other highlights.
    Special relativity made visible

Credit: TU Wien

Perhaps the best-known features of Albert Einstein's special theory of relativity are time dilation and length contraction. In 1959, two physicists predicted another feature of relativistic motion: an object moving near the speed of light should also appear to be rotated. It had not been possible to demonstrate this experimentally, however—until now. Physicists at the Vienna University of Technology figured out how to reproduce this rotational effect in the lab using laser pulses and precision cameras, according to a paper published in the journal Communications Physics.
They found their inspiration in art: an earlier project on ultra-fast photography and slow light, created in collaboration with artist Enar de Dios Rodriguez, the Vienna University of Technology, and the University of Vienna. For this latest research, they used objects shaped like a cube and a sphere and moved them around the lab while zapping them with ultrashort laser pulses, recording the flashes with a high-speed camera.
Getting the timing just right effectively simulates a speed of light of just 2 meters per second. After photographing the objects many times using this method, the team combined the still images into a single image. The results: the cube looked twisted and the sphere's north pole was in a different location—a demonstration of the rotational effect predicted back in 1959.
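To see why a finite light speed makes a photographed cube look rotated rather than merely contracted, here is a minimal ray-tracing sketch (illustrative geometry and speed, not the TU Wien setup): each corner is drawn where it was at its retarded time, when the light now reaching the camera left it, and corners at different depths emitted that light at different times.

```python
import numpy as np

# Minimal sketch of the Terrell-Penrose effect (illustrative, not the TU Wien
# setup): a photograph shows each corner of a moving cube where it was at its
# *retarded* time. Nearer and farther corners emit at different times, which
# shears the image so the cube appears rotated, not just length-contracted.

C = 1.0                               # speed of light in natural units
BETA = 0.9                            # cube speed along +x as a fraction of C
CAMERA = np.array([0.0, -10.0, 0.0])  # pinhole camera position

gamma = 1.0 / np.sqrt(1.0 - BETA**2)
# Unit-cube corners at shutter time t=0, Lorentz-contracted along the motion
corners = np.array([(x / gamma, y, z)
                    for x in (0.0, 1.0) for y in (0.0, 1.0) for z in (0.0, 1.0)])

def apparent_position(p0, t_shutter=0.0):
    """Solve |p0 + BETA*t_e*ex - CAMERA| = C*(t_shutter - t_e) for the retarded t_e."""
    d = p0 - CAMERA
    a = BETA**2 - C**2                       # negative, since BETA < C
    b = 2.0 * (BETA * d[0] + C**2 * t_shutter)
    c_ = d @ d - (C * t_shutter) ** 2
    t_e = (-b + np.sqrt(b**2 - 4.0 * a * c_)) / (2.0 * a)  # root with t_e < t_shutter
    return p0 + np.array([BETA * t_e, 0.0, 0.0])           # where the corner was

photo = np.array([apparent_position(p) for p in corners])
print(photo.round(3))  # x-offsets vary with depth (y): the cube looks rotated
```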

DOI: Communications Physics, 2025. 10.1038/s42005-025-02003-6.
    Drumming chimpanzees

    A chimpanzee feeling the rhythm. Credit: Current Biology/Eleuteri et al., 2025.

Chimpanzees are known to "drum" on the roots of trees as a means of communication, often combining that action with what are known as "pant-hoot" vocalizations. Scientists have found that the chimps' drumming exhibits key elements of musical rhythm much like humans'—specifically, non-random timing and isochrony (evenly spaced beats)—according to a paper published in the journal Current Biology. And chimps from different geographical regions have different drumming rhythms.
Back in 2022, the same team observed that individual chimps had unique styles of "buttress drumming," which served as a kind of communication, letting others in the same group know their identity, location, and activity. This time around, they wanted to know whether this was also true of chimps living in different groups and whether their drumming was rhythmic in nature. So they collected video footage of the drumming behavior among 11 chimpanzee communities across six populations in East Africa (Uganda) and West Africa (Ivory Coast), amounting to 371 drumming bouts.
    Their analysis of the drum patterns confirmed their hypothesis. The western chimps drummed in regularly spaced hits, used faster tempos, and started drumming earlier during their pant-hoot vocalizations. Eastern chimps would alternate between shorter and longer spaced hits. Since this kind of rhythmic percussion is one of the earliest evolved forms of human musical expression and is ubiquitous across cultures, findings such as this could shed light on how our love of rhythm evolved.
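A standard way to quantify isochrony in a drumming bout is to look at the spread of its inter-onset intervals; a minimal sketch of such a measure (a generic approach, not the study's actual analysis pipeline):

```python
import numpy as np

# Generic sketch of an isochrony measure (not the study's actual pipeline):
# score a drumming bout by the coefficient of variation (CV) of its
# inter-onset intervals. Near-zero CV = evenly spaced, metronome-like hits.

def isochrony_cv(onset_times):
    iois = np.diff(np.sort(onset_times))   # inter-onset intervals
    return np.std(iois) / np.mean(iois)

rng = np.random.default_rng(0)
regular = np.arange(20) * 0.5 + rng.normal(0.0, 0.02, 20)  # jittered steady beat
random_bout = np.sort(rng.uniform(0.0, 10.0, 20))          # random-timing null

print(f"steady bout CV: {isochrony_cv(regular):.2f}")      # ~0.05
print(f"random bout CV: {isochrony_cv(random_bout):.2f}")  # ~1 for random timing
```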
DOI: Current Biology, 2025. 10.1016/j.cub.2025.04.019.
    Distinctive styles of two jazz greats

    Jazz lovers likely need no introduction to Joe Pass and Wes Montgomery, 20th century guitarists who influenced generations of jazz musicians with their innovative techniques. Montgomery, for instance, didn't use a pick, preferring to pluck the strings with his thumb—a method he developed because he practiced at night after working all day as a machinist and didn't want to wake his children or neighbors. Pass developed his own range of picking techniques, including fingerpicking, hybrid picking, and "flat picking."
Chirag Gokani and Preston Wilson, both with the Applied Research Laboratories at the University of Texas at Austin, greatly admired both Pass and Montgomery and decided to explore the acoustics underlying their distinctive playing, modeling the interactions of the thumb, fingers, and pick with a guitar string. They described their research during a meeting of the Acoustical Society of America in New Orleans, LA.
Among their findings: Montgomery achieved his warm tone by playing closer to the bridge and mostly plucking at the string. Pass's rich tone arose from a combination of using a pick and playing closer to the guitar neck. There were also differences in how much a thumb, finger, and pick slip off the string: use of the thumb (Montgomery) produced more of a "pluck," while the pick (Pass) produced more of a "strike." Gokani and Wilson think their model could be used to synthesize digital guitars with a more realistic sound, as well as to help guitarists better emulate Pass and Montgomery.
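How much difference the excitation alone makes is easy to hear in a toy string synthesizer. The sketch below (a generic Karplus-Strong model, not Gokani and Wilson's physical model) excites the same virtual string with a smoothed, thumb-like burst and a sharper, pick-like one; the smoothed burst yields a measurably warmer, darker tone.

```python
import numpy as np

# Toy Karplus-Strong string synthesizer (a generic model, not Gokani and
# Wilson's physical model): the same virtual string excited two ways, to
# compare a soft thumb-like "pluck" with a sharp pick-like "strike".

RATE = 44100  # samples per second

def karplus_strong(excitation, seconds=1.0, decay=0.996):
    """Run an initial string displacement through the KS delay-line loop."""
    buf = list(excitation)            # one period of string displacement
    out = np.empty(int(RATE * seconds))
    for i in range(len(out)):
        out[i] = buf[0]
        # two-point average acts as the string's low-pass losses
        buf.append(decay * 0.5 * (buf[0] + buf[1]))
        buf.pop(0)
    return out

period = int(RATE / 196.0)            # roughly the G3 string
noise = np.random.default_rng(1).uniform(-1.0, 1.0, period)

thumb = np.convolve(noise, np.ones(8) / 8.0, mode="same")  # smoothed burst
pick = noise                                               # raw, sharp burst

def spectral_centroid(x):
    """Amplitude-weighted mean frequency: lower = warmer, higher = brighter."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / RATE)
    return (freqs * spectrum).sum() / spectrum.sum()

print(f"thumb-like tone: {spectral_centroid(karplus_strong(thumb)):.0f} Hz")
print(f"pick-like tone:  {spectral_centroid(karplus_strong(pick)):.0f} Hz")
```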
    Sounds of an ancient underground city

Credit: Sezin Nas

Turkey is home to the underground city Derinkuyu, originally carved out of soft volcanic rock around the 8th century BCE. It was later expanded to include four main ventilation channels (and some 50,000 smaller shafts) serving seven levels, which could be closed off from the inside with a large rolling stone. The city could hold up to 20,000 people and was connected to another underground city, Kaymakli, via tunnels. Derinkuyu helped protect Arab Muslims during the Arab-Byzantine wars, served as a refuge from the Ottomans in the 14th century, and was a haven for Armenians escaping persecution in the early 20th century, among other functions.

The tunnels were rediscovered in the 1960s, and about half of the city has been open to visitors since 2016. The site is naturally of great archaeological interest, but there has been little to no research on its acoustics, particularly those of the ventilation channels—one of Derinkuyu's most distinctive features, according to Sezin Nas, an architectural acoustician at Istanbul Galata University in Turkey. She gave a talk about her work on the site's acoustic environment at a meeting of the Acoustical Society of America in New Orleans, LA.
Nas analyzed a church, a living area, and a kitchen, measuring sound sources and reverberation patterns, among other factors, to create a 3D virtual soundscape. The hope is that a better understanding of this aspect of Derinkuyu could improve the design of future underground urban spaces—and that her virtual soundscape could one day let visitors experience the sounds of the city for themselves.
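Reverberation time is the workhorse quantity in this kind of analysis. A back-of-the-envelope sketch using Sabine's formula (the chamber dimensions and absorption coefficient below are invented for illustration, not measurements from Derinkuyu):

```python
# Back-of-the-envelope sketch of a core room-acoustics quantity, Sabine's
# reverberation time: RT60 = 0.161 * V / A, with V the room volume (m^3) and
# A the total absorption (surface area times absorption coefficient).
# The dimensions and coefficient are illustrative, not Derinkuyu measurements.

def rt60_sabine(volume_m3: float, surface_m2: float, alpha: float) -> float:
    """Sabine reverberation time in seconds."""
    return 0.161 * volume_m3 / (surface_m2 * alpha)

# A rock-cut chamber of roughly 8 m x 5 m x 3 m in highly reflective stone
length, width, height = 8.0, 5.0, 3.0
volume = length * width * height
surface = 2 * (length * width + length * height + width * height)

print(f"RT60 ~ {rt60_sabine(volume, surface, alpha=0.05):.1f} s")  # long, echoey
```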
    MIT's latest ping-pong robot
Robots playing ping-pong have been a thing since the 1980s, of particular interest to scientists because the game requires a robot to combine the slow, precise ability to grasp and pick up objects with dynamic, adaptable locomotion. Such robots need high-speed machine vision, fast motors and actuators, precise control, and the ability to make accurate predictions in real time, not to mention being able to develop a game strategy. More recent designs use AI techniques to allow the robots to "learn" from prior data to improve their performance.
    MIT researchers have built their own version of a ping-pong playing robot, incorporating a lightweight design and the ability to precisely return shots. They built on prior work developing the Humanoid, a small bipedal two-armed robot—specifically, modifying the Humanoid's arm by adding an extra degree of freedom to the wrist so the robot could control a ping-pong paddle. They tested their robot by mounting it on a ping-pong table and lobbing 150 balls at it from the other side of the table, capturing the action with high-speed cameras.

The new bot can execute three different swing types (loop, drive, and chip), and during the trial runs it returned the ball with impressive accuracy across all three: 88.4 percent, 89.2 percent, and 87.5 percent, respectively. Subsequent tweaks to their system brought the robot's strike speed up to 19 meters per second (about 42 mph), within the 12 to 25 meters per second range of advanced human players. The addition of control algorithms gave the robot the ability to aim. The robot still has limited mobility and reach because it has to be fixed to the ping-pong table, but the MIT researchers plan to rig it to a gantry or wheeled platform in the future to address that shortcoming.
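The real-time prediction step is the crux of any such system. A drastically simplified sketch of it (drag-free ballistics with hypothetical numbers, not MIT's controller, which must also model spin and the bounce):

```python
import numpy as np

# Drastically simplified sketch of the robot's prediction problem: given a
# tracked ball state, find when and where it crosses the paddle's strike
# plane. Hypothetical numbers; real systems also handle spin, drag, and the
# bounce off the table.

G = 9.81  # gravitational acceleration, m/s^2

def intercept(p0, v0, strike_x):
    """Crossing time and point of a drag-free ball with the plane x = strike_x."""
    t = (strike_x - p0[0]) / v0[0]                  # x-velocity is constant
    y = p0[1] + v0[1] * t
    z = p0[2] + v0[2] * t - 0.5 * G * t**2          # ballistic drop in height
    return t, np.array([strike_x, y, z])

p0 = np.array([0.0, 0.2, 0.3])    # ball position leaving opponent's side (m)
v0 = np.array([6.0, -0.4, 1.5])   # velocity toward the robot (m/s)
t, hit = intercept(p0, v0, strike_x=2.5)
print(f"plan the swing for t = {t * 1000:.0f} ms, contact at {hit.round(2)} m")
```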
    Why orange cats are orange

Credit: Astropulse/CC BY-SA 3.0

    Cat lovers know orange cats are special for more than their unique coloring, but that's the quality that has intrigued scientists for almost a century. Sure, lots of animals have orange, ginger, or yellow hues, like tigers, orangutans, and golden retrievers. But in domestic cats that color is specifically linked to sex. Almost all orange cats are male. Scientists have now identified the genetic mutation responsible and it appears to be unique to cats, according to a paper published in the journal Current Biology.
Prior work had narrowed down the region on the X chromosome most likely to contain the relevant mutation. The scientists knew that females usually have just one copy of the mutation and in that case have tortoiseshell (partially orange) coloring, although in rare cases, a female cat will be orange if both X chromosomes have the mutation. Over the last five to ten years, there has been an explosion in genome resources (including complete sequenced genomes) for cats, which greatly aided the team's research, along with additional DNA samples taken from cats at spay and neuter clinics.
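That X-linkage is the whole reason the color skews male: a male needs the orange allele on his single X chromosome, while a female needs it on both. A toy simulation makes the arithmetic concrete (the 20 percent allele frequency is an arbitrary illustrative number, not a measured one):

```python
import random

# Toy model of X-linked orange (illustrative allele frequency, not measured):
# males have one X, females two; orange requires the allele on every X.
random.seed(42)
P_ORANGE = 0.20

def random_cat():
    sex = random.choice("MF")
    n_x = 1 if sex == "M" else 2
    alleles = sum(random.random() < P_ORANGE for _ in range(n_x))
    if sex == "M":
        coat = "orange" if alleles == 1 else "other"
    else:
        # one copy -> tortoiseshell, two copies -> orange
        coat = {0: "other", 1: "tortoiseshell", 2: "orange"}[alleles]
    return sex, coat

cats = [random_cat() for _ in range(100_000)]
orange = [sex for sex, coat in cats if coat == "orange"]
print(f"orange cats that are male: {orange.count('M') / len(orange):.0%}")
# Expect males : females among orange cats ~ p : p^2, i.e. ~83% male at p = 0.2
```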

From an initial pool of 51 candidate variants, the scientists narrowed it down to three genes, only one of which was likely to play any role in gene regulation: Arhgap36. It wasn't known to play any role in pigment cells in humans, mice, or non-orange cats. But orange cats are special; their mutation (sex-linked orange) turns on Arhgap36 expression in pigment cells (and only pigment cells), thereby interfering with the molecular pathway that controls coat color in other orange-shaded mammals. The scientists suggest that this is an example of how genes can acquire new functions, thereby enabling species to better adapt and evolve.
DOI: Current Biology, 2025. 10.1016/j.cub.2025.03.075.
    Not a Roman "massacre" after all

Credit: Martin Smith

    In 1936, archaeologists excavating the Iron Age hill fort Maiden Castle in the UK unearthed dozens of human skeletons, all showing signs of lethal injuries to the head and upper body—likely inflicted with weaponry. At the time, this was interpreted as evidence of a pitched battle between the Britons of the local Durotriges tribe and invading Romans. The Romans slaughtered the native inhabitants, thereby bringing a sudden violent end to the Iron Age. At least that's the popular narrative that has prevailed ever since in countless popular articles, books, and documentaries.
    But a paper published in the Oxford Journal of Archaeology calls that narrative into question. Archaeologists at Bournemouth University have re-analyzed those burials, incorporating radiocarbon dating into their efforts. They concluded that those individuals didn't die in a single brutal battle. Rather, it was Britons killing other Britons over multiple generations between the first century BCE and the first century CE—most likely in periodic localized outbursts of violence in the lead-up to the Roman conquest of Britain. It's possible there are still many human remains waiting to be discovered at the site, which could shed further light on what happened at Maiden Castle.
DOI: Oxford Journal of Archaeology, 2025. 10.1111/ojoa.12324.

Jennifer Ouellette
Senior Writer

    Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

  • A new atomic clock in space could help us measure elevations on Earth

In 2003, engineers from Germany and Switzerland began building a bridge across the Rhine River simultaneously from both sides. Months into construction, they found that the two sides did not meet. The German side hovered 54 centimeters above the Swiss side. The misalignment occurred because the German engineers had measured elevation with a historic level of the North Sea as its zero point, while the Swiss ones had used the Mediterranean Sea, which was 27 centimeters lower. We may speak colloquially of elevations with respect to “sea level,” but Earth’s seas are actually not level. “The sea level is varying from location to location,” says Laura Sanchez, a geodesist at the Technical University of Munich in Germany. (Geodesists study our planet’s shape, orientation, and gravitational field.) While the two teams knew about the 27-centimeter difference, they mixed up which side was higher—so the correction was applied in the wrong direction, doubling the offset to 54 centimeters. Ultimately, Germany lowered its side to complete the bridge.

To prevent such costly construction errors, in 2015 scientists in the International Association of Geodesy voted to adopt the International Height Reference Frame, or IHRF, a worldwide standard for elevation. It’s the third-dimensional counterpart to latitude and longitude, says Sanchez, who helps coordinate the standardization effort. Now, a decade after its adoption, geodesists are looking to update the standard—by using the most precise clock ever to fly in space.
That clock, called the Atomic Clock Ensemble in Space, or ACES, launched into orbit from Florida last month, bound for the International Space Station. ACES, which was built by the European Space Agency, consists of two connected atomic clocks, one containing cesium atoms and the other containing hydrogen, combined to produce a single set of ticks with higher precision than either clock alone.

Pendulum clocks are only accurate to about a second per day, as the rate at which a pendulum swings can vary with humidity, temperature, and the weight of extra dust. Atomic clocks in current GPS satellites will lose or gain a second on average every 3,000 years. ACES, on the other hand, “will not lose or gain a second in 300 million years,” says Luigi Cacciapuoti, an ESA physicist who helped build and launch the device. (In 2022, China installed a potentially stabler clock on its space station, but the Chinese government has not publicly shared the clock’s performance after launch, according to Cacciapuoti.)
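Those loss rates are easier to compare once converted into dimensionless fractional frequency stabilities; a quick back-of-the-envelope sanity check (arithmetic on the quoted figures, not ESA’s specification sheet):

```python
# Quick sanity check on the quoted clock performances: convert "loses a second
# every N years" into a dimensionless fractional frequency stability.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def fractional_stability(years_to_drift_one_second: float) -> float:
    return 1.0 / (years_to_drift_one_second * SECONDS_PER_YEAR)

print(f"pendulum, ~1 s/day:   {1 / (24 * 3600):.0e}")              # ~1e-05
print(f"GPS clock, 3,000 yr:  {fractional_stability(3_000):.0e}")  # ~1e-11
print(f"ACES, 300 million yr: {fractional_stability(300e6):.0e}")  # ~1e-16
```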
From space, ACES will link to some of the most accurate clocks on Earth to create a synchronized clock network, which will support its main purpose: to perform tests of fundamental physics. But it’s of special interest for geodesists because it can be used to make gravitational measurements that will help establish a more precise zero point from which to measure elevation across the world.

Alignment over this “zero point” (basically where you stick the end of the tape measure to measure elevation) is important for international collaboration. It makes it easier, for example, to monitor and compare sea-level changes around the world. It is especially useful for building infrastructure involving flowing water, such as dams and canals. In 2020, the international height standard even resolved a long-standing dispute between China and Nepal over Mount Everest’s height. For years, China said the mountain was 8,844.43 meters; Nepal measured it at 8,848. Using the IHRF, the two countries finally agreed that the mountain was 8,848.86 meters.

A worker performs tests on ACES in a cleanroom at the Kennedy Space Center in Florida. Credit: ESA-T. Peignier

To create a standard zero point, geodesists create a model of Earth known as a geoid. Every point on the surface of this lumpy, potato-shaped model experiences the same gravity, which means that if you dug a canal at the height of the geoid, the water within the canal would be level and would not flow. Distance from the geoid establishes a global system for altitude. However, the current model lacks precision, particularly in Africa and South America, says Sanchez.

Today’s geoid has been built using instruments that directly measure Earth’s gravity. These have been carried on satellites, which excel at getting a global but low-resolution view, and have also been used to get finer details via expensive ground- and airplane-based surveys. But geodesists have not had the funding to survey Africa and South America as extensively as other parts of the world, particularly in difficult terrain such as the Amazon rainforest and Sahara Desert. To understand the discrepancy in precision, imagine a bridge that spans Africa from the Mediterranean coast to Cape Town, South Africa. If it’s built using the current geoid, the two ends of the bridge will be misaligned by tens of centimeters. In comparison, you’d be off by at most five centimeters if you were building a bridge spanning North America.

To improve the geoid’s precision, geodesists want to create a worldwide network of clocks, synchronized from space. The idea works according to Einstein’s theory of general relativity, which states that the stronger the gravitational field, the more slowly time passes. The 2014 sci-fi movie Interstellar illustrates an extreme version of this so-called time dilation: Two astronauts spend a few hours in extreme gravity near a black hole and return to a shipmate who has aged more than two decades. Similarly, Earth’s gravity grows weaker the higher in elevation you are. Your feet, for example, experience slightly stronger gravity than your head when you’re standing. Assuming you live to be about 80 years old, over a lifetime your head will age tens of billionths of a second more than your feet.

A clock network would allow geodesists to compare the ticking of clocks all over the world. They could then use the variations in time to map Earth’s gravitational field much more precisely, and consequently create a more precise geoid.
The most accurate clocks today are precise enough to measure variations in time that map onto centimeter-level differences in elevation. 
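The conversion from clock rate to height is the weak-field gravitational redshift; a one-line estimate (standard relativity, not a figure from the article) shows why centimeter resolution demands clocks stable at the $10^{-18}$ level:

$$\frac{\Delta f}{f} \;\simeq\; \frac{g\,\Delta h}{c^{2}} \;=\; \frac{(9.81~\mathrm{m/s^{2}})(0.01~\mathrm{m})}{(3\times 10^{8}~\mathrm{m/s})^{2}} \;\approx\; 1.1\times 10^{-18}.$$

Two clocks differing in height by one centimeter tick at rates differing by about one part in $10^{18}$, right at the edge of what today’s best optical clocks can resolve.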

“We want to have the accuracy level at the one-centimeter or sub-centimeter level,” says Jürgen Müller, a geodesist at Leibniz University Hannover in Germany. Specifically, geodesists would use the clock measurements to validate their geoid model, which they currently do with ground- and plane-based surveying techniques. They think that a clock network should be considerably less expensive.

ACES is just a first step. It is capable of measuring altitudes at various points around Earth with 10-centimeter precision, says Cacciapuoti. But the point of ACES is to prototype the clock network. It will demonstrate the optical and microwave technology needed to use a clock in space to connect some of the most advanced ground-based clocks together. In the next year or so, Müller plans to use ACES to connect to clocks on the ground, starting with three in Germany. Müller’s team could then make more precise measurements at the location of those clocks. These early studies will pave the way for work connecting even more precise clocks than ACES to the network, ultimately leading to an improved geoid.

The best clocks today are some 50 times more precise than ACES. “The exciting thing is that clocks are getting even stabler,” says Michael Bevis, a geodesist at Ohio State University, who was not involved with the project. A more precise geoid would allow engineers, for example, to build a canal with better control of its depth and flow, he says. However, he points out that in order for geodesists to take advantage of the clocks’ precision, they will also have to improve their mathematical models of Earth’s gravitational field.

Even starting to build this clock network has required decades of dedicated work by scientists and engineers. It took ESA three decades to make a clock as small as ACES that is suitable for space, says Cacciapuoti. This meant miniaturizing a clock the size of a laboratory into the size of a small fridge. “It was a huge engineering effort,” says Cacciapuoti, who has been working on the project since he began at ESA 20 years ago. Geodesists expect they’ll need at least another decade to develop the clock network and launch more clocks into space. One possibility would be to slot the clocks onto GPS satellites. The timeline depends on the success of the ACES mission and the willingness of government agencies to invest, says Sanchez. But whatever the specifics, mapping the world takes time.
    #new #atomic #clock #space #could
    A new atomic clock in space could help us measure elevations on Earth
    In 2003, engineers from Germany and Switzerland began building a bridge across the Rhine River simultaneously from both sides. Months into construction, they found that the two sides did not meet. The German side hovered 54 centimeters above the Swiss side. The misalignment occurred because the German engineers had measured elevation with a historic level of the North Sea as its zero point, while the Swiss ones had used the Mediterranean Sea, which was 27 centimeters lower. We may speak colloquially of elevations with respect to “sea level,” but Earth’s seas are actually not level. “The sea level is varying from location to location,” says Laura Sanchez, a geodesist at the Technical University of Munich in Germany.While the two teams knew about the 27-centimeter difference, they mixed up which side was higher. Ultimately, Germany lowered its side to complete the bridge.  To prevent such costly construction errors, in 2015 scientists in the International Association of Geodesy voted to adopt the International Height Reference Frame, or IHRF, a worldwide standard for elevation. It’s the third-dimensional counterpart to latitude and longitude, says Sanchez, who helps coordinate the standardization effort.  Now, a decade after its adoption, geodesists are looking to update the standard—by using the most precise clock ever to fly in space. That clock, called the Atomic Clock Ensemble in Space, or ACES, launched into orbit from Florida last month, bound for the International Space Station. ACES, which was built by the European Space Agency, consists of two connected atomic clocks, one containing cesium atoms and the other containing hydrogen, combined to produce a single set of ticks with higher precision than either clock alone.  Pendulum clocks are only accurate to about a second per day, as the rate at which a pendulum swings can vary with humidity, temperature, and the weight of extra dust. Atomic clocks in current GPS satellites will lose or gain a second on average every 3,000 years. ACES, on the other hand, “will not lose or gain a second in 300 million years,” says Luigi Cacciapuoti, an ESA physicist who helped build and launch the device.  From space, ACES will link to some of the most accurate clocks on Earth to create a synchronized clock network, which will support its main purpose: to perform tests of fundamental physics.  But it’s of special interest for geodesists because it can be used to make gravitational measurements that will help establish a more precise zero point from which to measure elevation across the world. Alignment over this “zero point”is important for international collaboration. It makes it easier, for example, to monitor and compare sea-level changes around the world. It is especially useful for building infrastructure involving flowing water, such as dams and canals. In 2020, the international height standard even resolved a long-standing dispute between China and Nepal over Mount Everest’s height. For years, China said the mountain was 8,844.43 meters; Nepal measured it at 8,848. Using the IHRF, the two countries finally agreed that the mountain was 8,848.86 meters.  A worker performs tests on ACES at a cleanroom at the Kennedy Space Center in Florida.ESA-T. PEIGNIER To create a standard zero point, geodesists create a model of Earth known as a geoid. 
Every point on the surface of this lumpy, potato-shaped model experiences the same gravity, which means that if you dug a canal at the height of the geoid, the water within the canal would be level and would not flow. Distance from the geoid establishes a global system for altitude. However, the current model lacks precision, particularly in Africa and South America, says Sanchez. Today’s geoid has been built using instruments that directly measure Earth’s gravity. These have been carried on satellites, which excel at getting a global but low-resolution view, and have also been used to get finer details via expensive ground- and airplane-based surveys. But geodesists have not had the funding to survey Africa and South America as extensively as other parts of the world, particularly in difficult terrain such as the Amazon rainforest and Sahara Desert.  To understand the discrepancy in precision, imagine a bridge that spans Africa from the Mediterranean coast to Cape Town, South Africa. If it’s built using the current geoid, the two ends of the bridge will be misaligned by tens of centimeters. In comparison, you’d be off by at most five centimeters if you were building a bridge spanning North America.  To improve the geoid’s precision, geodesists want to create a worldwide network of clocks, synchronized from space. The idea works according to Einstein’s theory of general relativity, which states that the stronger the gravitational field, the more slowly time passes. The 2014 sci-fi movie Interstellar illustrates an extreme version of this so-called time dilation: Two astronauts spend a few hours in extreme gravity near a black hole to return to a shipmate who has aged more than two decades. Similarly, Earth’s gravity grows weaker the higher in elevation you are. Your feet, for example, experience slightly stronger gravity than your head when you’re standing. Assuming you live to be about 80 years old, over a lifetime your head will age tens of billionths of a second more than your feet.  A clock network would allow geodesists to compare the ticking of clocks all over the world. They could then use the variations in time to map Earth’s gravitational field much more precisely, and consequently create a more precise geoid. The most accurate clocks today are precise enough to measure variations in time that map onto centimeter-level differences in elevation.  “We want to have the accuracy level at the one-centimeter or sub-centimeter level,” says Jürgen Müller, a geodesist at Leibniz University Hannover in Germany. Specifically, geodesists would use the clock measurements to validate their geoid model, which they currently do with ground- and plane-based surveying techniques. They think that a clock network should be considerably less expensive. ACES is just a first step. It is capable of measuring altitudes at various points around Earth with 10-centimeter precision, says Cacciapuoti. But the point of ACES is to prototype the clock network. It will demonstrate the optical and microwave technology needed to use a clock in space to connect some of the most advanced ground-based clocks together. In the next year or so, Müller plans to use ACES to connect to clocks on the ground, starting with three in Germany. Müller’s team could then make more precise measurements at the location of those clocks. These early studies will pave the way for work connecting even more precise clocks than ACES to the network, ultimately leading to an improved geoid. 
The best clocks today are some 50 times more precise than ACES. “The exciting thing is that clocks are getting even stabler,” says Michael Bevis, a geodesist at Ohio State University, who was not involved with the project. A more precise geoid would allow engineers, for example, to build a canal with better control of its depth and flow, he says. However, he points out that in order for geodesists to take advantage of the clocks’ precision, they will also have to improve their mathematical models of Earth’s gravitational field.  Even starting to build this clock network has required decades of dedicated work by scientists and engineers. It took ESA three decades to make a clock as small as ACES that is suitable for space, says Cacciapuoti. This meant miniaturizing a clock the size of a laboratory into the size of a small fridge. “It was a huge engineering effort,” says Cacciapuoti, who has been working on the project since he began at ESA 20 years ago.  Geodesists expect they’ll need at least another decade to develop the clock network and launch more clocks into space. One possibility would be to slot the clocks onto GPS satellites. The timeline depends on the success of the ACES mission and the willingness of government agencies to invest, says Sanchez. But whatever the specifics, mapping the world takes time. #new #atomic #clock #space #could
    WWW.TECHNOLOGYREVIEW.COM
    A new atomic clock in space could help us measure elevations on Earth
In 2003, engineers from Germany and Switzerland began building a bridge across the Rhine River simultaneously from both sides. Months into construction, they found that the two sides did not meet. The German side hovered 54 centimeters above the Swiss side.

The misalignment occurred because the German engineers had measured elevation with a historic level of the North Sea as their zero point, while the Swiss ones had used the Mediterranean Sea, which was 27 centimeters lower. We may speak colloquially of elevations with respect to "sea level," but Earth's seas are actually not level. "The sea level is varying from location to location," says Laura Sanchez, a geodesist at the Technical University of Munich in Germany. (Geodesists study our planet's shape, orientation, and gravitational field.) The two teams knew about the 27-centimeter difference, but they mixed up which side was higher, so the correction was applied in the wrong direction, doubling the offset to 54 centimeters. Ultimately, Germany lowered its side to complete the bridge.

To prevent such costly construction errors, in 2015 scientists in the International Association of Geodesy voted to adopt the International Height Reference Frame, or IHRF, a worldwide standard for elevation. It extends latitude and longitude into the third dimension, says Sanchez, who helps coordinate the standardization effort.

Now, a decade after its adoption, geodesists are looking to update the standard using the most precise clock ever to fly in space. That clock, called the Atomic Clock Ensemble in Space, or ACES, launched into orbit from Florida last month, bound for the International Space Station. ACES, which was built by the European Space Agency, consists of two connected atomic clocks, one containing cesium atoms and the other containing hydrogen, combined to produce a single set of ticks with higher precision than either clock alone.

Pendulum clocks are only accurate to about a second per day, as the rate at which a pendulum swings can vary with humidity, temperature, and the weight of extra dust. Atomic clocks in current GPS satellites will lose or gain a second on average every 3,000 years. ACES, on the other hand, "will not lose or gain a second in 300 million years," says Luigi Cacciapuoti, an ESA physicist who helped build and launch the device. (In 2022, China installed a potentially stabler clock on its space station, but the Chinese government has not publicly shared the clock's performance after launch, according to Cacciapuoti.)
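Those drift figures are easier to compare on a common footing as fractional frequency errors, that is, seconds of drift divided by seconds elapsed. A quick back-of-the-envelope sketch (our own illustration, not anything from the mission specifications):

```python
# Convert "loses or gains a second every N years" into a fractional
# frequency error: seconds of drift divided by seconds elapsed.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def fractional_error(drift_s: float, years: float) -> float:
    return drift_s / (years * SECONDS_PER_YEAR)

pendulum = fractional_error(1, 1 / 365.25)    # ~1 second per day
gps      = fractional_error(1, 3_000)         # GPS-grade atomic clock
aces     = fractional_error(1, 300_000_000)   # ACES

for name, err in [("pendulum", pendulum), ("GPS clock", gps), ("ACES", aces)]:
    print(f"{name:>9}: {err:.1e}")
# pendulum: ~1.2e-05, GPS clock: ~1.1e-11, ACES: ~1.1e-16
```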
From space, ACES will link to some of the most accurate clocks on Earth to create a synchronized clock network, which will support its main purpose: performing tests of fundamental physics. But it is of special interest to geodesists because it can be used to make gravitational measurements that will help establish a more precise zero point from which to measure elevation across the world.

Alignment over this "zero point" (basically where you stick the end of the tape measure to measure elevation) is important for international collaboration. It makes it easier, for example, to monitor and compare sea-level changes around the world. It is especially useful for building infrastructure involving flowing water, such as dams and canals. In 2020, the international height standard even resolved a long-standing dispute between China and Nepal over Mount Everest's height. For years, China said the mountain was 8,844.43 meters; Nepal measured it at 8,848. Using the IHRF, the two countries finally agreed that the mountain was 8,848.86 meters.

[Image: A worker performs tests on ACES in a cleanroom at the Kennedy Space Center in Florida. Credit: ESA-T. Peignier]

To create a standard zero point, geodesists create a model of Earth known as a geoid. Every point on the surface of this lumpy, potato-shaped model experiences the same gravity, which means that if you dug a canal at the height of the geoid, the water within the canal would be level and would not flow. Distance from the geoid establishes a global system for altitude.

However, the current model lacks precision, particularly in Africa and South America, says Sanchez. Today's geoid has been built using instruments that directly measure Earth's gravity. These have been carried on satellites, which excel at getting a global but low-resolution view, and have also been used in expensive ground- and airplane-based surveys that capture finer details. But geodesists have not had the funding to survey Africa and South America as extensively as other parts of the world, particularly in difficult terrain such as the Amazon rainforest and the Sahara Desert.

To understand the discrepancy in precision, imagine a bridge that spans Africa from the Mediterranean coast to Cape Town, South Africa. If it's built using the current geoid, the two ends of the bridge will be misaligned by tens of centimeters. In comparison, you'd be off by at most five centimeters if you were building a bridge spanning North America.

To improve the geoid's precision, geodesists want to create a worldwide network of clocks, synchronized from space. The idea rests on Einstein's theory of general relativity, which states that the stronger the gravitational field, the more slowly time passes. The 2014 sci-fi movie Interstellar illustrates an extreme version of this so-called time dilation: two astronauts spend a few hours in extreme gravity near a black hole and return to a shipmate who has aged more than two decades. Similarly, Earth's gravity grows weaker the higher in elevation you are. Your feet, for example, experience slightly stronger gravity than your head when you're standing. Assuming you live to be about 80 years old, over a lifetime your head will age tens of billionths of a second more than your feet.

A clock network would allow geodesists to compare the ticking of clocks all over the world. They could then use the variations in time to map Earth's gravitational field much more precisely, and consequently create a more precise geoid. The most accurate clocks today are precise enough to measure variations in time that map onto centimeter-level differences in elevation.
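The relation being exploited here is the weak-field gravitational redshift: raising a clock by a height difference dh speeds up its ticking by a fraction of roughly g*dh/c^2. A small illustrative calculation (assuming constant surface gravity, a simplification) shows why clocks stable at the 10^-18 level correspond to centimeter-scale height sensitivity:

```python
# Weak-field gravitational time dilation near Earth's surface:
# a clock raised by dh runs fast by df/f ≈ g * dh / c^2.
G_SURFACE = 9.81          # m/s^2, taken as constant over small dh
C = 299_792_458.0         # speed of light, m/s

def fractional_shift(dh_m: float) -> float:
    return G_SURFACE * dh_m / C**2

print(f"per metre:      {fractional_shift(1.0):.2e}")   # ~1.1e-16
print(f"per centimetre: {fractional_shift(0.01):.2e}")  # ~1.1e-18

# A clock stable to ~1e-18 can therefore resolve a height change of:
stability = 1e-18
print(f"resolvable height: {stability * C**2 / G_SURFACE * 100:.1f} cm")  # ~0.9 cm
```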
"We want to have the accuracy level at the one-centimeter or sub-centimeter level," says Jürgen Müller, a geodesist at Leibniz University Hannover in Germany. Specifically, geodesists would use the clock measurements to validate their geoid model, a job currently done with ground- and plane-based surveying techniques. They think that a clock network should be considerably less expensive.

ACES is just a first step. It is capable of measuring altitudes at various points around Earth with 10-centimeter precision, says Cacciapuoti. But the point of ACES is to prototype the clock network: it will demonstrate the optical and microwave technology needed to use a clock in space to connect some of the most advanced ground-based clocks together. In the next year or so, Müller plans to use ACES to connect to clocks on the ground, starting with three in Germany. Müller's team could then make more precise measurements at the locations of those clocks.

These early studies will pave the way for connecting even more precise clocks than ACES to the network, ultimately leading to an improved geoid. The best clocks today are some 50 times more precise than ACES. "The exciting thing is that clocks are getting even stabler," says Michael Bevis, a geodesist at Ohio State University, who was not involved with the project. A more precise geoid would allow engineers, for example, to build a canal with better control of its depth and flow, he says. However, he points out that for geodesists to take advantage of the clocks' precision, they will also have to improve their mathematical models of Earth's gravitational field.

Even starting to build this clock network has required decades of dedicated work by scientists and engineers. It took ESA three decades to make a clock as small as ACES that is suitable for space, says Cacciapuoti. This meant miniaturizing a laboratory-sized clock down to the size of a small fridge. "It was a huge engineering effort," says Cacciapuoti, who has been working on the project since he joined ESA 20 years ago.

Geodesists expect they'll need at least another decade to develop the clock network and launch more clocks into space. One possibility would be to slot the clocks onto GPS satellites. The timeline depends on the success of the ACES mission and the willingness of government agencies to invest, says Sanchez. But whatever the specifics, mapping the world takes time.
  • How dark energy findings may inspire a new generation of physics nerds

    Mark Garlick/Science Photo Library/Alamy
    In 1998, astronomers made a startling announcement. Space-time, the unified phenomenon that comprises our universe and that was previously understood to be expanding, was actually not just growing, but growing faster and faster as time went on. In other words, its expansion was accelerating. This was the birth of the cosmic acceleration problem: what was causing this acceleration? It seemed to be literally coming from nowhere – from the vacuum.
    From the point of view of general relativity, cosmic acceleration could be explained by saying that empty space-time has energy that drives this expansion, that it isn’t completely empty. This energy…
    WWW.NEWSCIENTIST.COM
  • How To Build Stylized Water Shader: Design & Implementation For Nimue

    80.LV
[Image: Nimue]

Introduction

For three semesters, our student team has been hard at work on the prototype for Nimue, a 3D platformer in which you play an enchanted princess who has lost her memories. She needs to find her way through the castle ruins on a misty lake to uncover her past. Water is a core visual element of this game prototype, so we took extra care in its development. In this article, we will take an in-depth look at the design and technical implementation of a lake water material.

The first prototype of Nimue will soon be playable on itch.io. A link to our shader for use in your own projects can be found at the end of this article.

Taxonomy of Water

Before we dive into the design decisions and technical implementation, we present a simplified taxonomy of visual water components to better understand the requirements of its representation:

[Image: RiME]

Wind Waves

Waves generated by wind, which form on an open water surface, can be divided into capillary waves and gravity waves. Capillary waves, or ripples, are small, short-wavelength waves caused by weak winds affecting surface tension in calm water. They can overlap longer and larger gravity waves. How these physically complex wave types are represented in stylized video games varies depending on the respective style. Both types are usually heavily simplified in form and motion, and capillary waves are sometimes omitted entirely to reduce detail.

[Image: Sea of Thieves]

Foam Patterns

Foam patterns refer to white foam crests that form on a water surface without breaking against an obstacle or shoreline. In reality, this effect occurs when different water layers collide and waves become steeper until their peaks collapse, and the resulting bubbles and drops scatter the sunlight. Stylized foam patterns can be found in many video game water representations and can easily be abstracted into patterns. Such patterns contribute to a cartoon look and can sometimes even replace waveforms entirely.

[Image: The Legend of Zelda: The Wind Waker]

Foam Lines

Foam lines are a very common water element in video games, represented as white graphical lines surrounding shorelines and obstacles like rocks. They typically reference two different water phenomena: foam forming around obstacles due to wave breaking, and foam along shorelines, resulting from wave breaking and the mixing of algae with organic and artificial substances. Foam lines can have different visual appearances depending on the surface angle: the shallower the angle, the wider the foam effect. Because of the weaker waves, distinctive foam lines are rarely observed on natural lakes, but they can be included in a stylization for aesthetic purposes.

[Image: Animal Crossing: New Horizons]

Reflections

When light hits a water surface, it can either be reflected (specular reflection) or transmitted into the water, where it may be absorbed, scattered, or reflected back through the surface. The Fresnel effect describes the perceived balance between reflection and transmission: at steep angles, more transmitted light reaches the eye, making the water appear more translucent, while at shallow angles, increased reflection makes it appear more opaque. In stylized video games, implementations of water reflections vary: RiME, for example, imitates the Fresnel effect but does not reflect the environment at all, only a simple, otherwise invisible cube map. Wind Waker, on the other hand, completely foregoes reflection calculations and renders a flat-shaded water surface.

[Image: RiME]
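For reference, the standard cheap model of this angle dependence in real-time rendering is Schlick's approximation of the Fresnel term. A minimal Python sketch of the math (f0 ≈ 0.02 is the common textbook normal-incidence reflectance for an air-water interface, not a value taken from any of the games above):

```python
# Schlick's approximation of Fresnel reflectance.
# cos_theta = dot(view_dir, surface_normal); f0 = reflectance at
# normal incidence (~0.02 for an air-water interface).
def fresnel_schlick(cos_theta: float, f0: float = 0.02) -> float:
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

print(fresnel_schlick(1.0))  # looking straight down: 0.02, mostly transmission
print(fresnel_schlick(0.1))  # grazing view: ~0.60, reflection dominates
```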
Translucency

As an inhomogeneous medium, water scatters some of the transmitted light before it can be reflected back to the surface. This is why water is described as translucent rather than transparent. Some scattered light is not reflected back but absorbed, reducing intensity and shifting color toward the least absorbed wavelengths, typically blue, blue-green, or turquoise. Increased distance amplifies scattering and absorption, altering color perception. Modern real-time engines simulate these effects, including absorption-based color variation with depth. However, stylized games often simplify or omit transmission entirely, rendering water as an opaque surface.

[Image: RiME]
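A common way engines model this distance-dependent color shift is Beer-Lambert-style exponential absorption per color channel. A small illustrative sketch (the coefficients below are invented for demonstration, not measured water data):

```python
import numpy as np

# Beer-Lambert-style absorption: after traveling depth d through the
# medium, each channel keeps a fraction exp(-absorption * d).
# Red is absorbed fastest, so deeper water shifts toward blue-green.
ABSORPTION = np.array([0.35, 0.07, 0.03])  # per metre, (R, G, B); illustrative

def transmitted(color: np.ndarray, depth_m: float) -> np.ndarray:
    return color * np.exp(-ABSORPTION * depth_m)

white = np.array([1.0, 1.0, 1.0])
for d in (0.5, 2.0, 10.0):
    print(f"{d:>5.1f} m -> {transmitted(white, d).round(3)}")
# At 10 m, red has almost vanished while blue remains: [0.03 0.497 0.741]
```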
Refraction

An additional aspect of water transmission is refraction, the bending of light as it transitions between air and water due to their differing densities. This causes light to bend toward the normal upon entering the water, creating the apparent distortion of submerged objects. Refraction effects also commonly appear in stylized water rendering. Kirby's Forgotten Land, for example, showcases two key visual characteristics of refraction: distortion increases with steeper viewing angles and is amplified by ripples on the water's surface.

[Image: Kirby and the Forgotten Land]

Caustics

Caustic patterns form when light rays are focused by a curved water surface (caused by waves and ripples), projecting bundled light patterns onto underwater surfaces or even back onto surfaces above water. These patterns are influenced by the clarity of the water, the depth of the water, and the strength of the light source. They contribute greatly to the atmosphere of virtual worlds and are often found in stylized games, although only as simplistic representations.

[Image: The Legend of Zelda: Ocarina of Time 3D]

Design Decisions

Because Nimue is set on a lake with a calm overall atmosphere, we decided to use very reduced gravity waves, as a calm water surface underlines this atmosphere. Capillary waves have too high a level of detail for the stylistic requirements of Nimue and were, therefore, not implemented.

[Image: Nimue]

Shapes

The mood in Nimue can be summarized as calm and mystical. The design language of Nimue is graphic, rounded, and elegant. Shapes are vertically elongated and highly abstracted. Convex corners are always rounded or have a strong bevel, while concave corners are pointed to prevent the overall mass of objects from becoming too rounded.

Colors

Nimue uses mostly broken colors and pastels to create a serene, reflective mood and highlight the player's character with her saturated blue tones. Platforms and obstacles are depicted with a lower (darker) tonal value to increase their visibility. Overall, the game world is kept in very unsaturated shades of blue, with the atmospheric depth, i.e., the sky and objects in the distance, falling into the complementary orange range. Shades of green and yellow are either completely avoided or extremely desaturated. The resulting reduced color palette additionally supports the atmosphere and makes it appear more harmonious.

[Image: Color gamut & value/tone tests]

Hue, Tone & Saturation

Since the color of the water, with its hue, tone, and saturation, is technically achieved by several components, a 2D mockup was first designed to more easily compare different colors in the environment. Here it could be observed that both the low and the high tonal values formed too great a contrast with the rest of the environment and thus placed an undesirable focus on the water. Therefore, the medium tone value was chosen. The hue and saturation were tested relative to the sky, the player character, and the background. Here, too, the color variant that harmonizes the most with the rest of the environment and contrasts the least was chosen.

Foam Lines

For the design of the foam lines, we proceeded in the same way as for the color selection: in this case, a screenshot of the test scene was used as the basis for three overpaints to try out different foam lines on the stones in the scene. Version 3 offers the greatest scope in terms of movement within the patterns. Because of this, and because of the greater visual interest, we opted for variant 3. Following the mockup, the texture was prepared so that it could be technically implemented.

Reflection

The reflection of the water surface contributes to how realistic the water looks, as one would always expect a reflection with natural water, depending on the angle. However, a reflection could also make the water look less calm overall. The romantic character created by the reflection of diffuse light on water is more present in version 1. In addition, the soft, wafting shapes created by the reflection fit in well with the art style. A reflection is desirable, but it must not take up too much focus. Ideally, the water should be lighter in tone, with the reflections present but less pronounced.

[Image: Reflection intensity]

Refraction & Caustics

Even though most light in our water gets absorbed, we noticed an improvement in the believability of the ground right underneath the water's surface when utilizing refraction together with the waveforms. When it comes to caustics, the diffuse lighting conditions of our scene would make visible caustic patterns physically implausible, but the effect felt right aesthetically, which is why we included it anyway (not being bound to physical plausibility is one of the perks of stylized graphics).

Technical Realization in Unreal Engine 5

When building a water material in Unreal, choosing the right shading model and blend mode is crucial. While a Default Lit Translucent material with Surface Forward Shading offers the most control, it is very costly to render. The more efficient choice is the Single Layer Water shading model introduced in Unreal 4.27, which supports light absorption, scattering, reflection, refraction, and shadowing at a lower instruction count. However, there are some downsides. For example, as it only uses a single depth layer, it lacks back-face rendering, making it less suitable for underwater views. And while it is still quite expensive by itself, its benefits outweighed the drawbacks for our stylized water material.

Waveforms

Starting with the waveforms, we used panning normal maps to simulate the rather calm low-altitude gravity waves. The approach here is simple: create a wave normal map in Substance 3D Designer, sample it twice, and continuously offset the samples' UV coordinates in opposing directions at different speeds. Give one of the two samples a higher speed and normal intensity to create a sense of wind direction. This panning operation does not need to run in the fragment shader; you can move it to the vertex shader through the Vertex Interpolator without quality loss and thereby reduce the instruction count.
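Expressed outside Unreal's material graph, the dual-panner boils down to a few lines. Here is a NumPy sketch of the math only (the wrap mode, speeds, and directions are illustrative, and the real version runs per pixel on the GPU):

```python
import numpy as np

def pan_uv(uv, direction, speed, t):
    """Scroll UV coordinates along `direction` over time; wrap to tile."""
    return (np.asarray(uv) + np.asarray(direction) * speed * t) % 1.0

def sample_normal(normal_map, uv):
    """Nearest-neighbor sample, unpacking [0,1] texel data to [-1,1]."""
    h, w, _ = normal_map.shape
    x, y = int(uv[0] * w) % w, int(uv[1] * h) % h
    return normal_map[y, x] * 2.0 - 1.0

def wave_normal(normal_map, uv, t):
    n1 = sample_normal(normal_map, pan_uv(uv, (1.0, 0.2), speed=0.02, t=t))    # slow layer
    n2 = sample_normal(normal_map, pan_uv(uv, (-0.6, -1.0), speed=0.05, t=t))  # fast layer
    n2[:2] *= 1.5              # stronger normal intensity sells the wind direction
    n = n1 + n2                # cheap additive blend of the two layers
    return n / np.linalg.norm(n)
```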
This panning operation does not need to run in the fragment shader: you can move it to the vertex shader through the Vertex Interpolator without quality loss and thereby reduce the instruction count.

Texture Repetition
To reduce visible tiling, we used three simple and fairly efficient tricks. First, we offset the UVs of the wave samplers with a large panning noise texture to dynamically distort the wave patterns. Second, we used another sampler of that noise texture, with different tiling, speed, and direction, to modulate the strength of the normal maps across the surface. We sampled this noise texture four times with different variables in the material, which is a lot, but we reused the samples many times for most of the visual features of our water. Third, we sampled the pixel depth of the surface to mask out waves far from the camera, so that there are no waves in the far distance.

Vertex Displacement
While these normal-mapped waves are enough to create the illusion of altitude on the water surface itself, they fall short at the intersections around objects in the water: without actual vertex displacement, those intersections are static. To fix that, two very simple sine operations (one along the X-axis, the other along the Y-axis) drive the World Position Offset of the water mesh on the Z-axis. To keep polycounts in check, we built a simple Blueprint grid system that spawns high-res plane meshes within a variable radius around the center and low-res plane meshes beyond it. This enables culling of non-visible planes and a less complex version of the water material for distant planes, where features like WPO are not needed.

Color
The general transmission amount is controlled by the opacity input of the material output, while scattering and absorption are defined via the Single Layer Water material output. The decisive inputs here are the Scattering Coefficients and Absorption Coefficients, which reproduce how, and how far, different wavelengths travel through water. We use two scattering colors as parameters, interpolated by camera distance: close to the camera, the blue scattering color (ScatteringColorNear) dominates, while at a distance, the orange scattering color (ScatteringColorFar) takes over. The advantage is that the water's color is separated from the sky's color, giving higher artistic control.

Reflections & Refraction
Reflections in the Single Layer Water shading model are, as usual, determined by the inputs for Specular (reflection intensity) and Roughness (reflection diffusion). In our case, however, we use Lumen reflections for their accuracy and quality, and as of Unreal 5.4, the Single Layer Water model's roughness calculation does not work with Lumen reflections: it forces mirror reflections (Roughness = 0) no matter the input value, leaving the specular lobe unaffected. Instead, the roughness input only offsets the reflection brightness, as the specular input does. For our artistic purposes this is fine; we use the roughness input to fine-tune the specular level, with the specular input as the base level. A very low specular value keeps the reflection brightness low. We further stylized the reflections by decreasing this brightness near the camera, using the already mentioned camera-distance masking to interpolate between two values (RoughnessNear and RoughnessFar).
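Both of these distance-driven blends (the scattering colors and the roughness values) reduce to the same pattern: derive a normalized blend factor from camera distance, then lerp the parameters. A minimal sketch, with all numbers as assumed examples rather than our shipped settings:

    # Camera-distance blend driving scattering color and reflection tuning.
    # Distances are in centimeters, Unreal's native unit; values are examples.

    def saturate(x):
        return max(0.0, min(1.0, x))

    def lerp(a, b, t):
        return a + (b - a) * t

    def distance_blend(camera_distance, near=500.0, far=10000.0):
        """0 close to the camera, 1 far away."""
        return saturate((camera_distance - near) / (far - near))

    SCATTERING_NEAR = (0.02, 0.08, 0.20)        # blue dominates up close
    SCATTERING_FAR = (0.25, 0.12, 0.05)         # shifts toward the orange horizon
    ROUGHNESS_NEAR, ROUGHNESS_FAR = 0.15, 0.05  # damps reflection brightness nearby

    t = distance_blend(camera_distance=4000.0)
    scattering = tuple(lerp(n, f, t) for n, f in zip(SCATTERING_NEAR, SCATTERING_FAR))
    roughness = lerp(ROUGHNESS_NEAR, ROUGHNESS_FAR, t)
    print(f"t={t:.2f} scattering={scattering} roughness={roughness:.3f}")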
For refraction, the Pixel Normal Offset mode was used, and a scalar parameter interpolates between the base refraction and the output of the normal waves.

Caustics
For the caustic effect, we created a Voronoi noise pattern with Unreal's Noise node and exported it via a render target. In Photoshop, the pattern was duplicated twice, with each copy rotated, colored, and blended. This texture is then projected onto the objects below through the ColorScaleBehindWater input of the Single Layer Water material output. The pattern is dynamically distorted by adding one of the aforementioned panning noise textures to its UV coordinates.

Foam Lines
We started by creating custom meshes for the foam lines and applying the texture pattern shown earlier, but quickly realized that such a workflow would be too cumbersome and inflexible for even a small scene, so we decided to generate them procedurally. Two common methods for generating intersection masks on a plane are depth sampling and distance fields. The first works by subtracting the camera's distance to the water surface at the current pixel (the PixelDepth) from the camera's distance to the closest scene object at that pixel (the SceneDepth). The second uses the DistanceToNearestSurface node, which calculates the shortest distance between a point on the water surface and the nearest object by referencing the scene's global distance field. We used both methods to control the mask width, as each alone varies with the slope of the object's surface, causing undesirable variations. Combining them allowed us to switch between two different mask widths, turning off "Affect Distance Field Lighting" for shallow slopes where narrower lines are wanted. The combined mask of all intersections then drives two effects that make up the foam lines: "edge foam" (which stays at the intersection) and "edge waves" (which travel outward from the edge foam). Both are shaped with the noise samplers shown above to approximate the hand-drawn foam-line texture.

Foam Patterns
The same noise samplers are also used to create a sparkling foam effect, loosely imitating whitecaps and foam crests to add more visual interest to the water surface. Since it only reuses existing operations, this effect is very cheap. Similarly, the wave normals are used to create a kind of fake subsurface scattering that further distinguishes the moving water surface.

Interactive Ripples
A third type of foam is added as interactive waves that ripple around the player character when walking through shallow water. This is done with a render target and particles, as demonstrated in this Unity tutorial by Minions Art. The steps described there are all easily applicable in Unreal with a Niagara system, a little Blueprint work, and common material nodes. We added a Height to Normal conversion for better visual integration into our existing wave setup.
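For reference, the Height to Normal step amounts to a finite-difference pass over the ripple render target. A minimal sketch over a plain 2D list; the strength factor is an assumed parameter controlling how steep the resulting ripples look:

    # Height-to-normal via central differences over a height field.

    def height_to_normal(height, x, y, strength=1.0):
        """Derive a tangent-space normal from a 2D height field (row-major)."""
        rows, cols = len(height), len(height[0])
        left = height[y][max(x - 1, 0)]
        right = height[y][min(x + 1, cols - 1)]
        down = height[max(y - 1, 0)][x]
        up = height[min(y + 1, rows - 1)][x]
        dx = (left - right) * strength  # slope along X
        dy = (down - up) * strength     # slope along Y
        length = (dx * dx + dy * dy + 1.0) ** 0.5
        return (dx / length, dy / length, 1.0 / length)  # Z points out of the surface

    # A single ripple bump in the middle of a 5x5 field:
    field = [[0.0] * 5 for _ in range(5)]
    field[2][2] = 1.0
    print(height_to_normal(field, 1, 2, strength=2.0))  # normal tilts away from the bump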
Finally, here are all those operations combined for the material inputs:
Nimue

Best Practices
Use Single Layer Water for efficient translucency, but note that it lacks back-face rendering and forces mirror reflections with Lumen;
For simple low-altitude waves, pan two offset samples of a normal map at different speeds; move the panning to the vertex shader for better performance;
Break up texture tiling efficiently by offsetting UVs with a large panning noise, modulating normal strength, and fading out distant waves using pixel depth;
Sampling one small noise texture at different scales can power this and many other features of a water shader efficiently;
If high-altitude waves aren't needed, a simple sine-based WPO can suffice for vertex displacement; implement a grid system for LODs and culling of subdivided water meshes;
Blend two scattering colors by camera distance for artistic control of the water's color and separation from sky reflections;
Combining depth sampling and distance fields to derive the foam lines allows for more flexible intersection widths but comes at a higher cost.

Further Resources
Here are some resources that helped us in the shader creation process:
General shader theory and creation: tharlevfx, Ben Cloward;
Interactive water in Unity: Minions Art;
Another free stylized water material in Unreal by Fabian Lopez Arosa;
Technical art wizardry: Ghislain Girardot.

Conclusion
We hope this breakdown of our water material creation process will help you in your projects. If you want to take a look at our shader yourself or even use it in your own game projects, you can download the complete setup on Gumroad. We look forward to seeing your water shaders and exchanging ideas. Feel free to reach out if you have any questions or want to connect.

Kolja Bopp, Academic Supervisor
Leanna Geideck, Concept Artist
Stephan zu Münster, Technical Artist
  • After Reaching AGI Some Insist There Won’t Be Anything Left For Humans To Teach AI About

    AGI is going to need to keep up with expanding human knowledge even in a post-AGI world.
    In today’s column, I address a prevalent assertion that after AI is advanced to becoming artificial general intelligence (AGI), there won’t be anything else for humans to teach AGI about. The assumption is that AGI will know everything that we know. Ergo, there isn’t any ongoing need or even value in trying to train AGI on anything else.

    Turns out that’s hogwash (misguided), and there will still be a lot of human-AI, or shall we say human-AGI, co-teaching going on.

    Let’s talk about it.

    This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

    Heading Toward AGI And ASI
    First, some fundamentals are required to set the stage for this weighty discussion.
    There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).
    AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
    We have not yet attained AGI.
    In fact, it is unknown whether we will reach AGI at all; it might be achieved decades or perhaps centuries from now. The AGI attainment dates floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

    AGI That Knows Everything
    A common viewpoint is that if we do attain AGI, the AGI will know everything that humans know. All human knowledge will be at the computational fingertips of AGI. In that case, the seemingly logical conclusion is that AGI won’t have anything else to learn from humans. The whole kit-and-kaboodle will already be in place.
    For example, if you find yourself idly interested in Einstein’s theory of relativity, no worries, just ask AGI. The AGI will tell you all about Einstein’s famed insights. You won’t need to look up the theory anywhere else. AGI will be your one-stop shopping bonanza for all human knowledge.
    Suppose you decided that you wanted to teach AGI about how important Einstein was as a physicist. AGI would immediately tell you that you needn’t bother doing so. The AGI already knows the crucial role that Einstein played in human existence.
    Give up trying to teach AGI about anything at all since AGI has got it all covered. Period, end of story.
    Reality Begs To Differ
    There are several false or misleading assumptions underlying the strident belief that we won’t be able to teach AGI anything new.
    First, keep in mind that AGI will be principally trained on written records such as the massive amount of writing found across the Internet, including essays, stories, poems, etc. Ask yourself whether the written content on the Internet is indeed a complete capture of all human knowledge.
    It isn’t.
    There are written records that aren’t on the Internet: some have never been digitized, and some that have been digitized were never posted online. The crux is that there will still be a lot of content that AGI won’t have seen. In a post-AGI world, it is plausible to assume that humans will still be posting more content onto the Internet and that, on an ongoing basis, the AGI can demonstrably learn by scanning that added content.
    Second, AGI won’t know what’s in our heads.
    I mean to say that there is knowledge we have in our noggins that isn’t necessarily written down and placed onto the Internet. None of that brainware content will be privy to AGI. As an aside, many research efforts are advancing brain-machine interfaces (BMI), see my coverage at the link here, which may someday allow for the reading of minds, but we don’t know when that will materialize, nor whether it will coincide with attaining AGI.
    Time Keeps Ticking Along
    Another consideration is that time continues to flow along in a post-AGI era.
    This suggests that the world will be changing and that humans will come up with new thoughts that we hadn’t conceived of previously. AGI, if frozen or out of touch with the latest human knowledge, will have only captured human knowledge that existed at a particular earlier point in time. The odds are that we would want AGI to keep up with whatever new knowledge we’ve divined since that initial AGI launch.
    Imagine things this way. Suppose that we managed to attain AGI before Einstein was even born. I know that seems zany but just go with the idea for the moment. If AGI was locked into only knowing human knowledge before Einstein, this amazing AGI would regrettably miss out on the theory of relativity.
    Since it is farfetched to try and turn back the clock and postulate that AGI would be attained before Einstein, let’s recast this idea. There is undoubtedly another Einstein-like person yet to be born, thus, at some point in the future, once AGI is around, it stands to reason that AGI would benefit from learning newly conceived knowledge.
    Belief That AGI Gets Uppity
    By and large, we can reject the premise that AGI will have learned all human knowledge, in the sense that this brazen claim refers solely to the human knowledge known at the time of AGI attainment, and which was readily available to the AGI at that point. That leaves a whole lot of additional teaching on the table. Plus, the passage of time will keep expanding the new knowledge that humans could share with AGI.
    Will AGI want to be taught by humans or at least learn from whatever additional knowledge that humans possess?
    One answer is no. You see, some worry that AGI will find it insulting to learn from humans and will therefore avoid doing so. The logic seems to be that since AGI will be as smart as humans, it might get uppity, decide we are inferior, and conclude that we couldn’t possibly have anything useful for it to gain from us.
    I am more upbeat on this posture.
    I would like to think that an AGI that is as smart as humans would crave new knowledge. AGI would be eager to acquire new knowledge and would do so with rapt determination. Whether the knowledge comes from humans or beetles, the AGI wouldn’t especially care. Garnering new knowledge would be a key precept of AGI, which I contend is a much more logical assumption than the conjecture that AGI would turn up its nose at gleaning new human-devised knowledge.
    Synergy Is The Best Course
    Would humans be willing to learn from AGI?
    Gosh, I certainly hope so. It would seem a crazy notion that humankind would decide not to learn things from AGI. AGI would be a huge boon to human learning. You could make a compelling case that the advent of AGI could immensely increase human knowledge, assuming that people can tap into AGI easily and at low cost. Envision that everyone with Internet access could seek out AGI to train or teach them on whatever topic they desired.
    Boom, drop the mic.
    In a post-AGI realm, the best course of action would be that AGI learns from us on an ongoing basis and that, just as continually, we also learn from AGI. That’s a synergy worthy of great hope and promise.
    The last word on this for now goes to the legendary Henry Ford: “Coming together is a beginning; keeping together is progress; working together is success.” If humanity plays its cards right, we will have human-AGI harmony and lean heartily into the synergy that arises accordingly.
  • Black hole fly-by modelled with landmark precision

    Nature, Published online: 14 May 2025; doi:10.1038/d41586-025-01339-x
    A prediction of the gravitational waves produced by interacting black holes achieves high precision and demonstrates the link between general relativity and geometry.