• The “online monkey torture video” arrests just keep coming

monkey abuse

    Authorities continue the slow crackdown.

    Nate Anderson



    Jun 14, 2025 7:00 am

    Credit:

    Getty Images

    Today's monkey torture videos are the products of a digitally connected world. People who enjoy watching baby animals probed, snipped, and mutilated in horrible ways often have difficulty finding local collaborators, but online communities like "million tears"—now thankfully shuttered—can help them forge connections.
    Once they do meet other like-minded souls, communication takes place through chat apps like Telegram and Signal, often using encryption.
Money is pooled through various phone apps, then sent to videographers in countries where wages are low and monkeys are plentiful. (The cases I have seen usually involve Indonesia; read my feature from last year to learn more about how these groups work.) There, monkeys are tortured by a local subcontractor—sometimes a child—working to Western specs. Smartphone video of the torture is sent back to the commissioning sadists, who share it with more viewers using the same online communities in which they met.
    The unfortunate pattern was again on display this week in an indictment the US government unsealed against several more Americans said to have commissioned these videos. The accused used online handles like "Bitchy" and "DemonSwordSoulCrusher," and they hail from all over: Tennessee, North Carolina, Ohio, Pennsylvania, and Massachusetts.
They relied on an Indonesian videographer to create the content, which was surprisingly affordable—it cost a mere $40 to commission video of a "burning hot screwdriver" being shoved into a baby monkey's orifice. After the money was transferred, the requested video was shot and shared through a "phone-based messaging program," but the Americans were deeply disappointed in its quality. Instead of full-on impalement, the videographer had heated a screwdriver on a burner and merely touched it against the monkey a few times.
    "So lame," one of the Americans allegedly complained to another. "Live and learn," was the response.

So the group tried again. "Million tears" had been booted by its host, but the group reconstituted on another platform and renamed itself "the trail of trillion tears." They reached out to another Indonesian videographer and asked for a more graphic version of the same video. But this version, more sadistic than the last, still didn't satisfy. As one of the Americans allegedly said to another, "honey that's not what you asked for. Thats the village idiot version. But I'm talking with someone about getting a good vo [videographer] to do it."
    Arrests continue
In 2021, someone leaked communications from the "million tears" group to animal rights organizations like Lady Freethinker and Action for Primates, which handed them over to authorities. Still, it took several years to arrest and prosecute the torture group's leaders.
    In 2024, one of these leaders—Ronald Bedra of Ohio—pled guilty to commissioning the videos and to mailing "a thumb drive containing 64 videos of monkey torture to a co-conspirator in Wisconsin." His mother, in a sentencing letter to the judge, said that her son must "have been undergoing some mental crisis when he decided to create the website." As a boy, he had loved all of the family pets, she said, even providing a funeral for a fish.
    Bedra was sentenced late last year to 54 months in prison. According to letters from family members, he has also lost his job, his wife, and his kids.
    In April 2025, two more alleged co-conspirators were indicted and subsequently arrested; their cases were unsealed only this week. Two other co-conspirators from this group still appear to be uncharged.
    In May 2025, 11 other Americans were indicted for their participation in monkey torture groups, though they appear to come from a different network. This group allegedly "paid a minor in Indonesia to commit the requested acts on camera."
    As for the Indonesian side of this equation, arrests have been happening there, too. Following complaints from animal rights groups, police in Indonesia have arrested multiple videographers over the last two years.

Nate Anderson
Deputy Editor
    Nate is the deputy editor at Ars Technica. His most recent book is In Emergency, Break Glass: What Nietzsche Can Teach Us About Joyful Living in a Tech-Saturated World, which is much funnier than it sounds.

  • How a US agriculture agency became key in the fight against bird flu

A dangerous strain of bird flu is spreading in US livestock. Credit: MediaMedium/Alamy
    Since Donald Trump assumed office in January, the leading US public health agency has pulled back preparations for a potential bird flu pandemic. But as it steps back, another government agency is stepping up.

While the US Department of Health and Human Services (HHS) previously held regular briefings on its efforts to prevent a wider outbreak of a deadly bird flu virus called H5N1 in people, it largely stopped once Trump took office. It has also cancelled funding for a vaccine that would have targeted the virus. In contrast, the US Department of Agriculture (USDA) has escalated its fight against H5N1’s spread in poultry flocks and dairy herds, including by funding the development of livestock vaccines.
    This particular virus – a strain of avian influenza called H5N1 – poses a significant threat to humans, having killed about half of the roughly 1000 people worldwide who tested positive for it since 2003. While the pathogen spreads rapidly in birds, it is poorly adapted to infecting humans and isn’t known to transmit between people. But that could change if it acquires mutations that allow it to spread more easily among mammals – a risk that increases with each mammalian infection.
    The possibility of H5N1 evolving to become more dangerous to people has grown significantly since March 2024, when the virus jumped from migratory birds to dairy cows in Texas. More than 1,070 herds across 17 states have been affected since then.
    H5N1 also infects poultry, placing the virus in closer proximity to people. Since 2022, nearly 175 million domestic birds have been culled in the US due to H5N1, and almost all of the 71 people who have tested positive for it had direct contact with livestock.


“We need to take this seriously because when [H5N1] constantly is spreading, it’s constantly spilling over into humans,” says Seema Lakdawala at Emory University in Georgia. The virus has already killed a person in the US and a child in Mexico this year.
    Still, cases have declined under Trump. The last recorded human case was in February, and the number of affected poultry flocks fell 95 per cent between then and June. Outbreaks in dairy herds have also stabilised.
It isn’t clear what is behind the decline. Lakdawala believes it is partly due to a lull in bird migration, which reduces opportunities for the virus to spread from wild birds to livestock. It may also reflect efforts by the USDA to contain outbreaks on farms. In February, the USDA unveiled a $1 billion plan for tackling H5N1, including strengthening farmers’ defences against the virus, such as through free biosecurity assessments. Of the 150 facilities that have undergone assessment, only one has experienced an H5N1 outbreak.
    Under Trump, the USDA also continued its National Milk Testing Strategy, which mandates farms provide raw milk samples for influenza testing. If a farm is positive for H5N1, it must allow the USDA to monitor livestock and implement measures to contain the virus. The USDA launched the programme in December and has since ramped up participation to 45 states.
    “The National Milk Testing Strategy is a fantastic system,” says Erin Sorrell at Johns Hopkins University in Maryland. Along with the USDA’s efforts to improve biosecurity measures on farms, milk testing is crucial for containing the outbreak, says Sorrell.

    But while the USDA has bolstered its efforts against H5N1, the HHS doesn’t appear to have followed suit. In fact, the recent drop in human cases may reflect decreased surveillance due to workforce cuts, says Sorrell. In April, the HHS laid off about 10,000 employees, including 90 per cent of staff at the National Institute for Occupational Safety and Health, an office that helps investigate H5N1 outbreaks in farm workers.
“There is an old saying that if you don’t test for something, you can’t find it,” says Sorrell. Yet a spokesperson for the US Centers for Disease Control and Prevention (CDC) says its guidance and surveillance efforts have not changed. “State and local health departments continue to monitor for illness in persons exposed to sick animals,” they told New Scientist. “CDC remains committed to rapidly communicating information as needed about H5N1.”
The USDA and HHS also diverge on vaccination. While the USDA has allocated $100 million toward developing vaccines and other solutions for preventing H5N1’s spread in livestock, the HHS cancelled $776 million in contracts for influenza vaccine development. The contracts – terminated on 28 May – were with the pharmaceutical company Moderna to develop vaccines targeting flu subtypes, including H5N1, that could cause future pandemics. The news came the same day Moderna reported nearly 98 per cent of the roughly 300 participants who received two doses of the H5 vaccine in a clinical trial had antibody levels believed to be protective against the virus.
    The US has about five million H5N1 vaccine doses stockpiled, but these are made using eggs and cultured cells, which take longer to produce than mRNA-based vaccines like Moderna’s. The Moderna vaccine would have modernised the stockpile and enabled the government to rapidly produce vaccines in the event of a pandemic, says Sorrell. “It seems like a very effective platform and would have positioned the US and others to be on good footing if and when we needed a vaccine for our general public,” she says.

    The HHS cancelled the contracts due to concerns about mRNA vaccines, which Robert F Kennedy Jr – the country’s highest-ranking public health official – has previously cast doubt on. “The reality is that mRNA technology remains under-tested, and we are not going to spend taxpayer dollars repeating the mistakes of the last administration,” said HHS communications director Andrew Nixon in a statement to New Scientist.
    However, mRNA technology isn’t new. It has been in development for more than half a century and numerous clinical trials have shown mRNA vaccines are safe. While they do carry the risk of side effects – the majority of which are mild – this is true of almost every medical treatment. In a press release, Moderna said it would explore alternative funding paths for the programme.
    “My stance is that we should not be looking to take anything off the table, and that includes any type of vaccine regimen,” says Lakdawala.
“Vaccines are the most effective way to counter an infectious disease,” says Sorrell. “And so having that in your arsenal and ready to go just gives you more options.”
    Topics:
    #how #agriculture #agency #became #key
    How a US agriculture agency became key in the fight against bird flu
    A dangerous strain of bird flu is spreading in US livestockMediaMedium/Alamy Since Donald Trump assumed office in January, the leading US public health agency has pulled back preparations for a potential bird flu pandemic. But as it steps back, another government agency is stepping up. While the US Department of Health and Human Servicespreviously held regular briefings on its efforts to prevent a wider outbreak of a deadly bird flu virus called H5N1 in people, it largely stopped once Trump took office. It has also cancelled funding for a vaccine that would have targeted the virus. In contrast, the US Department of Agriculturehas escalated its fight against H5N1’s spread in poultry flocks and dairy herds, including by funding the development of livestock vaccines. This particular virus – a strain of avian influenza called H5N1 – poses a significant threat to humans, having killed about half of the roughly 1000 people worldwide who tested positive for it since 2003. While the pathogen spreads rapidly in birds, it is poorly adapted to infecting humans and isn’t known to transmit between people. But that could change if it acquires mutations that allow it to spread more easily among mammals – a risk that increases with each mammalian infection. The possibility of H5N1 evolving to become more dangerous to people has grown significantly since March 2024, when the virus jumped from migratory birds to dairy cows in Texas. More than 1,070 herds across 17 states have been affected since then. H5N1 also infects poultry, placing the virus in closer proximity to people. Since 2022, nearly 175 million domestic birds have been culled in the US due to H5N1, and almost all of the 71 people who have tested positive for it had direct contact with livestock. Get the most essential health and fitness news in your inbox every Saturday. 
Sign up to newsletter “We need to take this seriously because whenconstantly is spreading, it’s constantly spilling over into humans,” says Seema Lakdawala at Emory University in Georgia. The virus has already killed a person in the US and a child in Mexico this year. Still, cases have declined under Trump. The last recorded human case was in February, and the number of affected poultry flocks fell 95 per cent between then and June. Outbreaks in dairy herds have also stabilised. It isn’t clear what is behind the decline. Lakdawala believes it is partly due to a lull in bird migration, which reduces opportunities for the virus to spread from wild birds to livestock. It may also reflect efforts by the USDA to contain outbreaks on farms. In February, the USDA unveiled a billion plan for tackling H5N1, including strengthening farmers’ defences against the virus, such as through free biosecurity assessments. Of the 150 facilities that have undergone assessment, only one has experienced an H5N1 outbreak. Under Trump, the USDA also continued its National Milk Testing Strategy, which mandates farms provide raw milk samples for influenza testing. If a farm is positive for H5N1, it must allow the USDA to monitor livestock and implement measures to contain the virus. The USDA launched the programme in December and has since ramped up participation to 45 states. “The National Milk Testing Strategy is a fantastic system,” says Erin Sorrell at Johns Hopkins University in Maryland. Along with the USDA’s efforts to improve biosecurity measures on farms, milk testing is crucial for containing the outbreak, says Sorrell. But while the USDA has bolstered its efforts against H5N1, the HHS doesn’t appear to have followed suit. In fact, the recent drop in human cases may reflect decreased surveillance due to workforce cuts, says Sorrell. 
In April, the HHS laid off about 10,000 employees, including 90 per cent of staff at the National Institute for Occupational Safety and Health, an office that helps investigate H5N1 outbreaks in farm workers. “There is an old saying that if you don’t test for something, you can’t find it,” says Sorrell. Yet a spokesperson for the US Centers for Disease Control and Preventionsays its guidance and surveillance efforts have not changed. “State and local health departments continue to monitor for illness in persons exposed to sick animals,” they told New Scientist. “CDC remains committed to rapidly communicating information as needed about H5N1.” The USDA and HHS also diverge on vaccination. While the USDA has allocated million toward developing vaccines and other solutions for preventing H5N1’s spread in livestock, the HHS cancelled million in contracts for influenza vaccine development. The contracts – terminated on 28 May – were with the pharmaceutical company Moderna to develop vaccines targeting flu subtypes, including H5N1, that could cause future pandemics. The news came the same day Moderna reported nearly 98 per cent of the roughly 300 participants who received two doses of the H5 vaccine in a clinical trial had antibody levels believed to be protective against the virus. The US has about five million H5N1 vaccine doses stockpiled, but these are made using eggs and cultured cells, which take longer to produce than mRNA-based vaccines like Moderna’s. The Moderna vaccine would have modernised the stockpile and enabled the government to rapidly produce vaccines in the event of a pandemic, says Sorrell. “It seems like a very effective platform and would have positioned the US and others to be on good footing if and when we needed a vaccine for our general public,” she says. The HHS cancelled the contracts due to concerns about mRNA vaccines, which Robert F Kennedy Jr – the country’s highest-ranking public health official – has previously cast doubt on. 
“The reality is that mRNA technology remains under-tested, and we are not going to spend taxpayer dollars repeating the mistakes of the last administration,” said HHS communications director Andrew Nixon in a statement to New Scientist. However, mRNA technology isn’t new. It has been in development for more than half a century and numerous clinical trials have shown mRNA vaccines are safe. While they do carry the risk of side effects – the majority of which are mild – this is true of almost every medical treatment. In a press release, Moderna said it would explore alternative funding paths for the programme. “My stance is that we should not be looking to take anything off the table, and that includes any type of vaccine regimen,” says Lakdawala. “Vaccines are the most effective way to counter an infectious disease,” says Sorrell. “And so having that in your arsenal and ready to go just give you more options.” Topics: #how #agriculture #agency #became #key
    WWW.NEWSCIENTIST.COM
    How a US agriculture agency became key in the fight against bird flu
    A dangerous strain of bird flu is spreading in US livestockMediaMedium/Alamy Since Donald Trump assumed office in January, the leading US public health agency has pulled back preparations for a potential bird flu pandemic. But as it steps back, another government agency is stepping up. While the US Department of Health and Human Services (HHS) previously held regular briefings on its efforts to prevent a wider outbreak of a deadly bird flu virus called H5N1 in people, it largely stopped once Trump took office. It has also cancelled funding for a vaccine that would have targeted the virus. In contrast, the US Department of Agriculture (USDA) has escalated its fight against H5N1’s spread in poultry flocks and dairy herds, including by funding the development of livestock vaccines. This particular virus – a strain of avian influenza called H5N1 – poses a significant threat to humans, having killed about half of the roughly 1000 people worldwide who tested positive for it since 2003. While the pathogen spreads rapidly in birds, it is poorly adapted to infecting humans and isn’t known to transmit between people. But that could change if it acquires mutations that allow it to spread more easily among mammals – a risk that increases with each mammalian infection. The possibility of H5N1 evolving to become more dangerous to people has grown significantly since March 2024, when the virus jumped from migratory birds to dairy cows in Texas. More than 1,070 herds across 17 states have been affected since then. H5N1 also infects poultry, placing the virus in closer proximity to people. Since 2022, nearly 175 million domestic birds have been culled in the US due to H5N1, and almost all of the 71 people who have tested positive for it had direct contact with livestock. Get the most essential health and fitness news in your inbox every Saturday. 
Sign up to newsletter “We need to take this seriously because when [H5N1] constantly is spreading, it’s constantly spilling over into humans,” says Seema Lakdawala at Emory University in Georgia. The virus has already killed a person in the US and a child in Mexico this year. Still, cases have declined under Trump. The last recorded human case was in February, and the number of affected poultry flocks fell 95 per cent between then and June. Outbreaks in dairy herds have also stabilised. It isn’t clear what is behind the decline. Lakdawala believes it is partly due to a lull in bird migration, which reduces opportunities for the virus to spread from wild birds to livestock. It may also reflect efforts by the USDA to contain outbreaks on farms. In February, the USDA unveiled a $1 billion plan for tackling H5N1, including strengthening farmers’ defences against the virus, such as through free biosecurity assessments. Of the 150 facilities that have undergone assessment, only one has experienced an H5N1 outbreak. Under Trump, the USDA also continued its National Milk Testing Strategy, which mandates farms provide raw milk samples for influenza testing. If a farm is positive for H5N1, it must allow the USDA to monitor livestock and implement measures to contain the virus. The USDA launched the programme in December and has since ramped up participation to 45 states. “The National Milk Testing Strategy is a fantastic system,” says Erin Sorrell at Johns Hopkins University in Maryland. Along with the USDA’s efforts to improve biosecurity measures on farms, milk testing is crucial for containing the outbreak, says Sorrell. But while the USDA has bolstered its efforts against H5N1, the HHS doesn’t appear to have followed suit. In fact, the recent drop in human cases may reflect decreased surveillance due to workforce cuts, says Sorrell. 
In April, the HHS laid off about 10,000 employees, including 90 per cent of staff at the National Institute for Occupational Safety and Health, an office that helps investigate H5N1 outbreaks in farm workers. “There is an old saying that if you don’t test for something, you can’t find it,” says Sorrell. Yet a spokesperson for the US Centers for Disease Control and Prevention (CDC) says its guidance and surveillance efforts have not changed. “State and local health departments continue to monitor for illness in persons exposed to sick animals,” they told New Scientist. “CDC remains committed to rapidly communicating information as needed about H5N1.” The USDA and HHS also diverge on vaccination. While the USDA has allocated $100 million toward developing vaccines and other solutions for preventing H5N1’s spread in livestock, the HHS cancelled $776 million in contracts for influenza vaccine development. The contracts – terminated on 28 May – were with the pharmaceutical company Moderna to develop vaccines targeting flu subtypes, including H5N1, that could cause future pandemics. The news came the same day Moderna reported nearly 98 per cent of the roughly 300 participants who received two doses of the H5 vaccine in a clinical trial had antibody levels believed to be protective against the virus. The US has about five million H5N1 vaccine doses stockpiled, but these are made using eggs and cultured cells, which take longer to produce than mRNA-based vaccines like Moderna’s. The Moderna vaccine would have modernised the stockpile and enabled the government to rapidly produce vaccines in the event of a pandemic, says Sorrell. “It seems like a very effective platform and would have positioned the US and others to be on good footing if and when we needed a vaccine for our general public,” she says. 
The HHS cancelled the contracts due to concerns about mRNA vaccines, which Robert F Kennedy Jr – the country’s highest-ranking public health official – has previously cast doubt on. “The reality is that mRNA technology remains under-tested, and we are not going to spend taxpayer dollars repeating the mistakes of the last administration,” said HHS communications director Andrew Nixon in a statement to New Scientist.

However, mRNA technology isn’t new. It has been in development for more than half a century, and numerous clinical trials have shown mRNA vaccines are safe. While they do carry the risk of side effects – the majority of which are mild – this is true of almost every medical treatment. In a press release, Moderna said it would explore alternative funding paths for the programme.

“My stance is that we should not be looking to take anything off the table, and that includes any type of vaccine regimen,” says Lakdawala. “Vaccines are the most effective way to counter an infectious disease,” says Sorrell. “And so having that in your arsenal and ready to go just gives you more options.”
  • Tutankhamun's Iconic Gold Death Mask Is Getting a New Home Near the Pyramids of Giza

    Soon, the elaborately decorated artifact will be transferred to the brand new Grand Egyptian Museum, joining more than 5,000 other items from the boy king’s tomb

    Tutankhamun's gold funerary mask has been on display at the Egyptian Museum for nearly a century.

    Mostafa Elshemy / Anadolu Agency / Getty Images

    For nearly a century, visitors have flocked to the Egyptian Museum on Cairo’s Tahrir Square to admire Tutankhamun’s funerary mask, the intricately decorated artifact designed to cover the mummified pharaoh’s face.
    Starting this summer, they’ll be able to see the mask in its new home, the $1 billion Grand Egyptian Museum located in nearby Giza. Officials will soon transfer the mask to the massive new venue, where it will join more than 5,000 artifacts from the boy king’s tomb.
    “Only 26 objects from the Tutankhamun collection, including the golden mask and two coffins, remain here [at the Egyptian Museum] in Tahrir,” says Ali Abdel Halim, director of the Egyptian Museum, to the Agence France-Presse (AFP). “All are set to be moved soon.”
    Halim didn’t say when the death mask will be transferred, but the new Grand Egyptian Museum is scheduled to fully open to the public in early July after years of delays.
    Some portions of the Grand Egyptian Museum have been open since November 2023, with an additional 12 exhibit halls opening last October. All told, the museum complex spans more than 5 million square feet and houses more than 100,000 artifacts, which makes it the largest museum in the world focused on a single civilization.
    “We spent all this money to build the greatest museum in the world,” said Zahi Hawass, an Egyptologist who has twice served as Egypt's tourism and antiquities minister, to NBC News’ Keir Simmons, Charlene Gubash and Mithil Aggarwal in October. “You will see the objects for the first time in an incredible way.”

    The Untold Secrets of King Tut's Tomb

    Tutankhamun's mask has been housed at the Egyptian Museum in Cairo since 1934, 12 years after British archaeologist Howard Carter discovered the pharaoh’s tomb. However, the 123-year-old Beaux Arts venue is small and starting to show its age, so officials decided to relocate Tutankhamun's treasures to the enormous, high-tech Grand Egyptian Museum.
    At the new facility, the Tutankhamun artifacts will have their own dedicated, climate-controlled wing—one that’s large enough to display all of them together for the first time.
    Last month, officials carefully transferred 163 Tutankhamun treasures to the new museum, according to an announcement from the Egyptian Ministry of Tourism and Antiquities. That delivery included the pharaoh’s elaborately decorated ceremonial chair, various pieces of jewelry and the canopic chest that held the jars containing Tutankhamun’s organs, reports Artnet’s Sarah Cascone.
    The Egyptian Museum in Cairo, meanwhile, is not closing. Though it has lost Tutankhamun’s treasures and more than 20 mummies, it still has roughly 170,000 artifacts in its collection, per the AFP. Curators say they plan to replace the Tutankhamun artifacts with a new exhibition, though they haven’t shared many details.

    WWW.SMITHSONIANMAG.COM
  • Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’

    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One.
    By Jay Stobie
    Visual effects supervisor John Knoll confers with modelmakers Kim Smith and John Goodson over the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact.
    Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact and Rogue One: A Star Wars Story propelled their respective franchises to new heights. While Star Trek Generations welcomed Captain Jean-Luc Picard’s crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk. Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope, it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story, The Mandalorian, Andor, Ahsoka, The Acolyte, and more.
    The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Now ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif.
    A final frame from the Battle of Scarif in Rogue One: A Star Wars Story.
    A Context for Conflict
    In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design.
    On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon their space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. The unsanctioned mission to Scarif with Jyn Erso and Cassian Andor, and the sudden need to take down the planet’s shield gate, propel the Rebel Alliance fleet into rushing to the rescue with everything from the flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival.
    From Physical to Digital
    By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models for its features was gradually giving way to innovative computer graphics models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001.
    Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, the studio still considered filming physical miniatures for certain ship-related shots in the latter film. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com.
    However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.”
    John Knoll confers with Kim Smith and John Goodson over the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact.
    Legendary Lineages
    In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. “We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.”
    Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet.
    While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.”
    The U.S.S. Enterprise-E in Star Trek: First Contact.
    Familiar Foes
    To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation and Star Trek: Deep Space Nine, creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin.
    As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. It was as accurate as it was possible to be as a reproduction of the original model.”
    Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back, respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.”
    A final frame from Rogue One: A Star Wars Story.
    Forming Up the Fleets
    In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics.
    Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs, live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples. These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’s personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography…
    Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized.
    Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story.
    Tough Little Ships
    The Federation and Rebel Alliance each deployed “tough little ships” in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001!
    Exploration and Hope
    The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire.
    The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope?

    Jay Stobie is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
“We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.” Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet. While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got fromVER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.” The U.S.S. Enterprise-E in Star Trek: First Contact. 
Familiar Foes To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generationand Star Trek: Deep Space Nine, creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin. As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. 
It was as accurate as it was possible to be as a reproduction of the original model.” Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back, respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.” A final frame from Rogue One: A Star Wars Story. Forming Up the Fleets In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. 
Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics. Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs, live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples. These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’spersonal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography… Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized. 
Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story. Tough Little Ships The Federation and Rebel Alliance each deployed “tough little ships”in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001! Exploration and Hope The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire. 
The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope? – Jay Stobieis a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy. #looking #back #two #classics #ilm
    WWW.ILM.COM
    Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’
    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One.

By Jay Stobie

Visual effects supervisor John Knoll (right) confers with modelmakers Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).

Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact (1996) and Rogue One: A Star Wars Story (2016) propelled their respective franchises to new heights. While Star Trek Generations (1994) welcomed Captain Jean-Luc Picard’s (Patrick Stewart) crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk (William Shatner). Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope (1977), it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story (2018), The Mandalorian (2019-23), Andor (2022-25), Ahsoka (2023), The Acolyte (2024), and more. The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Now ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. 
With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif. A final frame from the Battle of Scarif in Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).

A Context for Conflict

In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design. On the surface, the situations could not seem more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon their space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. 
The unsanctioned mission to Scarif with Jyn Erso (Felicity Jones) and Cassian Andor (Diego Luna) and the sudden need to take down the planet’s shield gate propels the Rebel Alliance fleet into rushing to their rescue with everything from their flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival.

From Physical to Digital

By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models (many of which were built by ILM) for its features was gradually giving way to innovative computer graphics (CG) models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001. Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, the studio still considered filming physical miniatures for certain ship-related shots in the latter film. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. 
So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com. However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.” John Knoll (second from left) confers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).

Legendary Lineages

In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. 
“We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.” Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet. While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from [equipment vendor] VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.” The U.S.S. Enterprise-E in Star Trek: First Contact (Credit: Paramount). 
Familiar Foes

To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation (1987) and Star Trek: Deep Space Nine (1993), creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin. As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. 
It was as accurate as it was possible to be as a reproduction of the original model.” Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back (1980), respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.” A final frame from Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).

Forming Up the Fleets

In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. 
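The reprojection Knoll describes for the Death Star – unwrapping archival photographs into texture space on a sphere and building up coverage from many pictures – reduces to two steps: give every surface point a stable 2D texture address, then average overlapping photo pixels into a shared map. A minimal Python sketch of that idea, with an assumed lat-long parameterization and illustrative function names (not ILM's actual tools):

```python
import math

def sphere_to_uv(x, y, z):
    """Map a point on a unit sphere to lat-long (u, v) texture coordinates.

    This is the 'unwrap into texture space' step: every surface point
    gets a stable 2D address, so pixels projected from many different
    photographs can land in the same texture map.
    """
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)  # longitude mapped to [0, 1]
    v = 0.5 - math.asin(y) / math.pi              # latitude mapped to [0, 1]
    return u, v

def accumulate(texture, counts, u, v, color, size=1024):
    """Splat one projected photo pixel into a shared texture map.

    Overlapping coverage from multiple photos is averaged, which is how
    'enough pictures' build up complete coverage of the model.
    """
    px = min(int(u * size), size - 1)
    py = min(int(v * size), size - 1)
    key = (px, py)
    n = counts.get(key, 0)
    r, g, b = texture.get(key, (0.0, 0.0, 0.0))
    cr, cg, cb = color
    texture[key] = ((r * n + cr) / (n + 1),
                    (g * n + cg) / (n + 1),
                    (b * n + cb) / (n + 1))
    counts[key] = n + 1
```

In practice this would run once per archival photo: each pixel is ray-cast onto the sphere model, converted with sphere_to_uv, and splatted via accumulate; features like the trench would need their own local parameterization.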
Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics. Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs (the MC75 cruiser Profundity and U-wings), live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples (Nebulon-B frigates, X-wings, Y-wings, and more). These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’s (Carrie Fisher and Ingvild Deila) personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography… Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. 
Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized. Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).

Tough Little Ships

The Federation and Rebel Alliance each deployed “tough little ships” (an endearing description Commander William T. Riker [Jonathan Frakes] bestowed upon the U.S.S. Defiant in First Contact) in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001!

Exploration and Hope

The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. 
The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire. The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope? – Jay Stobie (he/him) is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
  • Climate Change Is Ruining Cheese, Scientists and Farmers Warn

    Climate change is making everything worse — including apparently threatening the dairy that makes our precious cheese.

    In interviews with Science News, veterinary researchers and dairy farmers alike warned that changes to the climate that affect cows impact not only the nutritional value of the cheeses produced from their milk, but also their color, texture, and even taste.

    Researchers from the Université Clermont Auvergne, located in the mountainous central France region that produces a delicious firm cheese known as Cantal, explained in a new paper for the Journal of Dairy Science that grass shortages caused by climate change can greatly affect how cows' milk, and the subsequent cheese created from it, tastes. At regular intervals throughout a five-month testing period in 2021, the scientists sampled milk from two groups of cows, each containing 20 cows from two different breeds, that were either allowed to graze on grass as normal or allowed to graze only part-time while being fed a supplemental diet that featured corn and other concentrated foods.

    As the researchers found, the corn-fed cohort consistently produced the same amount of milk and less methane than their grass-fed counterparts — but the taste of the resulting milk products was less savory and rich than that of the grass-fed bovines. Moreover, the milk from the grass-fed cows contained more omega-3 fatty acids, which are good for the heart, and lactic acids, which act as probiotics.

    "Farmers are looking for feed with better yields than grass or that are more resilient to droughts," explained Matthieu Bouchon, the fittingly-named lead author of the study. Still, those same farmers want to know how supplementing their cows' feed will change the nutritional value and taste, Bouchon said — and one farmer who spoke to Science News affirmed that, anecdotally, this effect is bearing out in other parts of the world, too.

    "We were having lots of problems with milk protein and fat content due to the heat," Gustavo Abijaodi, a dairy farmer in Brazil, told the website. "If we can stabilize heat effects, the cattle will respond with better and more nutritious milk."

    The heat also seems to be affecting the way cows eat and behave. "Cows produce heat to digest food — so if they are already feeling hot, they’ll eat less to lower their temperature," noted Marina Danes, a dairy scientist at Brazil's Federal University of Lavras. "This process spirals into immunosuppression, leaving the animal vulnerable to disease."

    Whether it's the food quality or the heat affecting the cows, the effects are palpable — or, in this case, edible. "If climate change progresses the way it’s going, we’ll feel it in our cheese," remarked Bouchon, the French researcher.

    More on cattle science: Brazilian "Supercows" Reportedly Close to Achieving World Domination
    FUTURISM.COM
  • Creating The “Moving Highlight” Navigation Bar With JavaScript And CSS

    I recently came across an old jQuery tutorial demonstrating a “moving highlight” navigation bar and decided the concept was due for a modern upgrade. With this pattern, the border around the active navigation item animates directly from one element to another as the user clicks on menu items. In 2025, we have much better tools to manipulate the DOM via vanilla JavaScript. New features like the View Transition API make progressive enhancement more easily achievable and handle a lot of the animation minutiae.

    In this tutorial, I will demonstrate two methods of creating the “moving highlight” navigation bar using plain JavaScript and CSS. The first example uses the getBoundingClientRect method to explicitly animate the border between navigation bar items when they are clicked. The second example achieves the same functionality using the new View Transition API.
    The Initial Markup
    Let’s assume that we have a single-page application where content changes without the page being reloaded. The starting HTML and CSS are your standard navigation bar with an additional div element containing an id of #highlight. We give the first navigation item a class of .active.
    See the Pen Moving Highlight Navbar Starting Markup by Blake Lundquist.
    For this version, we will position the #highlight element around the element with the .active class to create a border. We can utilize absolute positioning and animate the element across the navigation bar to create the desired effect. We’ll hide it off-screen initially by adding left: -200px and include transition styles for all properties so that any changes in the position and size of the element will happen gradually.
    #highlight {
      z-index: 0;
      position: absolute;
      height: 100%;
      width: 100px;
      left: -200px;
      border: 2px solid green;
      box-sizing: border-box;
      transition: all 0.2s ease;
    }

    Add A Boilerplate Event Handler For Click Interactions
    We want the highlight element to animate when a user changes the .active navigation item. Let’s add a click event handler to the nav element, then filter for events caused only by elements matching our desired selector. In this case, we only want to change the .active nav item if the user clicks on a link that does not already have the .active class.
    Initially, we can call console.log to ensure the handler fires only when expected:

    const navbar = document.querySelector('nav');

    navbar.addEventListener('click', function (event) {
      // return if the clicked element doesn't have the correct selector
      if (!event.target.matches('nav a:not(.active)')) {
        return;
      }

      console.log('click');
    });

    Open your browser console and try clicking different items in the navigation bar. You should only see "click" being logged when you select a new item in the navigation bar.
    Now that we know our event handler is working on the correct elements, let’s add code to move the .active class to the navigation item that was clicked. We can use the object passed into the event handler to find the element that initialized the event and give that element a class of .active after removing it from the previously active item.

    const navbar = document.querySelector('nav');

    navbar.addEventListener('click', function (event) {
      // return if the clicked element doesn't have the correct selector
      if (!event.target.matches('nav a:not(.active)')) {
        return;
      }

    - console.log('click');
    + document.querySelector('nav a.active').classList.remove('active');
    + event.target.classList.add('active');

    });

    Our #highlight element needs to move across the navigation bar and position itself around the active item. Let’s write a function to calculate a new position and width. Since the #highlight selector has transition styles applied, it will move gradually when its position changes.
    Using getBoundingClientRect, we can get information about the position and size of an element. We calculate the width of the active navigation item and its offset from the left boundary of the parent element. Then, we assign styles to the highlight element so that its size and position match.

    // handler for moving the highlight
    const moveHighlight = () => {
      const activeNavItem = document.querySelector('a.active');
      const highlighterElement = document.querySelector('#highlight');

      const width = activeNavItem.offsetWidth;

      const itemPos = activeNavItem.getBoundingClientRect();
      const navbarPos = navbar.getBoundingClientRect();
      const relativePosX = itemPos.left - navbarPos.left;

      const styles = {
        left: `${relativePosX}px`,
        width: `${width}px`,
      };

      Object.assign(highlighterElement.style, styles);
    }
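The offset arithmetic here can be sanity-checked in isolation. Below is a small sketch using plain objects with hypothetical numbers in place of live DOMRect values (the rects and their figures are invented for illustration):

```javascript
// Hypothetical rects standing in for getBoundingClientRect() results.
const navbarRect = { left: 40 };
const itemRect = { left: 190, width: 100 };

// The highlight is absolutely positioned inside the nav, so its `left`
// must be relative to the navbar's left edge, not the viewport's.
const relativePosX = itemRect.left - navbarRect.left;

const styles = {
  left: `${relativePosX}px`,
  width: `${itemRect.width}px`,
};

console.log(styles); // { left: '150px', width: '100px' }
```

Subtracting the navbar's own viewport offset is what keeps the highlight aligned even when the page scrolls horizontally or the nav is not flush with the viewport edge.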

    Let’s call our new function when the click event fires:

    navbar.addEventListener('click', function (event) {
      // return if the clicked element doesn't have the correct selector
      if (!event.target.matches('nav a:not(.active)')) {
        return;
      }

      document.querySelector('nav a.active').classList.remove('active');
      event.target.classList.add('active');

    + moveHighlight();
    });

    Finally, let’s also call the function immediately so that the border moves behind our initial active item when the page first loads:
    // handler for moving the highlight
    const moveHighlight = () => {
      // ...
    }

    // display the highlight when the page loads
    moveHighlight();

    Now, the border moves across the navigation bar when a new item is selected. Try clicking the different navigation links to animate the navigation bar.
    See the Pen Moving Highlight Navbar by Blake Lundquist.
    That only took a few lines of vanilla JavaScript and could easily be extended to account for other interactions, like mouseover events. In the next section, we will explore refactoring this feature using the View Transition API.
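As a sketch of the mouseover extension mentioned above: the `registerHighlightEvents` helper and the stubbed nav object below are hypothetical, used only so the delegation wiring can be exercised without a browser.

```javascript
// Register the same delegated handler for several event types.
// `onMove` is a stand-in for moveHighlight-style logic.
const registerHighlightEvents = (nav, onMove) => {
  for (const type of ['click', 'mouseover']) {
    nav.addEventListener(type, (event) => {
      // same guard as before: ignore anything that isn't a nav link
      if (!event.target.isNavLink) return;
      onMove(event.target);
    });
  }
};

// Stub nav that records listeners so we can dispatch events by hand:
const listeners = {};
const stubNav = { addEventListener: (type, fn) => { listeners[type] = fn; } };

let moves = 0;
registerHighlightEvents(stubNav, () => { moves += 1; });

listeners.mouseover({ target: { isNavLink: true } });  // counts
listeners.click({ target: { isNavLink: false } });     // ignored
console.log(moves); // 1
```

In a real page, the guard would stay as `event.target.matches('nav a:not(.active)')`; the stub simply makes the branch observable.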
    Using The View Transition API
    The View Transition API provides functionality to create animated transitions between website views. Under the hood, the API creates snapshots of “before” and “after” views and then handles transitioning between them. View transitions are useful for creating animations between documents, providing the native-app-like user experience featured in frameworks like Astro. However, the API also provides handlers meant for SPA-style applications. We will use it to reduce the JavaScript needed in our implementation and more easily create fallback functionality.
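One way to sketch that fallback before wiring it into a real handler: the `withViewTransition` helper and the stub objects below are illustrative, not part of the API itself.

```javascript
// startViewTransition is not yet available in every browser, so guard it:
// run the DOM update through the API when present, or directly otherwise.
const withViewTransition = (doc, updateFn) => {
  if (typeof doc.startViewTransition === 'function') {
    return doc.startViewTransition(updateFn);
  }
  updateFn(); // fallback: apply the change with no animation
};

// Stubbed "documents" show both paths without a browser:
let plain = false;
withViewTransition({}, () => { plain = true; });

let animated = false;
withViewTransition({ startViewTransition: (fn) => fn() }, () => { animated = true; });

console.log(plain, animated); // true true
```

Either way, the same update function runs; only the presence of the animation differs, which is exactly the progressive-enhancement behavior we want.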
    For this approach, we no longer need a separate #highlight element. Instead, we can style the .active navigation item directly using pseudo-selectors and let the View Transition API handle the animation between the before-and-after UI states when a new navigation item is clicked.
    We’ll start by getting rid of the #highlight element and its associated CSS and replacing it with styles for the nav a::after pseudo-selector:
    <nav>
    - <div id="highlight"></div>
    <a href="#" class="active">Home</a>
    <a href="#services">Services</a>
    <a href="#about">About</a>
    <a href="#contact">Contact</a>
    </nav>

    - #highlight {
    - z-index: 0;
    - position: absolute;
    - height: 100%;
    - width: 0;
    - left: 0;
    - box-sizing: border-box;
    - transition: all 0.2s ease;
    - }

    + nav a::after {
    + content: " ";
    + position: absolute;
    + left: 0;
    + top: 0;
    + width: 100%;
    + height: 100%;
    + border: none;
    + box-sizing: border-box;
    + }

    For the .active class, we include the view-transition-name property, thus unlocking the magic of the View Transition API. Once we trigger the view transition and change the location of the .active navigation item in the DOM, “before” and “after” snapshots will be taken, and the browser will animate the border across the bar. We’ll give our view transition the name of highlight, but we could theoretically give it any name.
    nav a.active::after {
      border: 2px solid green;
      view-transition-name: highlight;
    }

    Once we have a selector that contains a view-transition-name property, the only remaining step is to trigger the transition using the startViewTransition method and pass in a callback function.

    const navbar = document.querySelector('nav');

    // Change the active nav item on click
    navbar.addEventListener('click', function (event) {
      if (!event.target.matches('nav a:not(.active)')) {
        return;
      }

      document.startViewTransition(() => {
        document.querySelector('nav a.active').classList.remove('active');
        event.target.classList.add('active');
      });
    });

    Above is a revised version of the click handler. Instead of doing all the calculations for the size and position of the moving border ourselves, the View Transition API handles all of it for us. We only need to call document.startViewTransition and pass in a callback function to change the item that has the .active class!
    Adjusting The View Transition
    At this point, when clicking on a navigation link, you’ll notice that the transition works, but some strange sizing issues are visible. This sizing inconsistency is caused by aspect ratio changes during the view transition. We won’t go into detail here, but Jake Archibald has a detailed explanation you can read for more information. In short, to ensure the height of the border stays uniform throughout the transition, we need to declare an explicit height for the ::view-transition-old and ::view-transition-new pseudo-selectors, which represent static snapshots of the old and new views, respectively.
    ::view-transition-old(highlight) {
      height: 100%;
    }

    ::view-transition-new(highlight) {
      height: 100%;
    }

    Let’s do some final refactoring to tidy up our code by moving the callback to a separate function and adding a fallback for when view transitions aren’t supported:

    const navbar = document.querySelector('nav');

    // change the item that has the .active class applied
    const setActiveElement = (elem) => {
      document.querySelector('nav a.active').classList.remove('active');
      elem.classList.add('active');
    }

    // Start view transition and pass in a callback on click
    navbar.addEventListener('click', function (event) {
      if (!event.target.matches('nav a:not(.active)')) {
        return;
      }

      // Fallback for browsers that don't support View Transitions:
      if (!document.startViewTransition) {
        setActiveElement(event.target);
        return;
      }

      document.startViewTransition(() => setActiveElement(event.target));
    });

    Here’s our view transition-powered navigation bar! Observe the smooth transition when you click on the different links.
    See the Pen Moving Highlight Navbar with View Transition by Blake Lundquist.
    Conclusion
    Animations and transitions between website UI states used to require many kilobytes of external libraries, along with verbose, confusing, and error-prone code, but vanilla JavaScript and CSS have since incorporated features to achieve native-app-like interactions without breaking the bank. We demonstrated this by implementing the “moving highlight” navigation pattern using two approaches: CSS transitions combined with the getBoundingClientRect method, and the View Transition API.
    Resources

    getBoundingClientRect method documentation
    View Transition API documentation
    “View Transitions: Handling Aspect Ratio Changes” by Jake Archibald
    SMASHINGMAGAZINE.COM
    Creating The “Moving Highlight” Navigation Bar With JavaScript And CSS
    I recently came across an old jQuery tutorial demonstrating a “moving highlight” navigation bar and decided the concept was due for a modern upgrade. With this pattern, the border around the active navigation item animates directly from one element to another as the user clicks on menu items. In 2025, we have much better tools to manipulate the DOM via vanilla JavaScript. New features like the View Transition API make progressive enhancement more easily achievable and handle a lot of the animation minutiae. (Large preview) In this tutorial, I will demonstrate two methods of creating the “moving highlight” navigation bar using plain JavaScript and CSS. The first example uses the getBoundingClientRect method to explicitly animate the border between navigation bar items when they are clicked. The second example achieves the same functionality using the new View Transition API. The Initial Markup Let’s assume that we have a single-page application where content changes without the page being reloaded. The starting HTML and CSS are your standard navigation bar with an additional div element containing an id of #highlight. We give the first navigation item a class of .active. See the Pen Moving Highlight Navbar Starting Markup [forked] by Blake Lundquist. For this version, we will position the #highlight element around the element with the .active class to create a border. We can utilize absolute positioning and animate the element across the navigation bar to create the desired effect. We’ll hide it off-screen initially by adding left: -200px and include transition styles for all properties so that any changes in the position and size of the element will happen gradually. 
#highlight { z-index: 0; position: absolute; height: 100%; width: 100px; left: -200px; border: 2px solid green; box-sizing: border-box; transition: all 0.2s ease; } Add A Boilerplate Event Handler For Click Interactions We want the highlight element to animate when a user changes the .active navigation item. Let’s add a click event handler to the nav element, then filter for events caused only by elements matching our desired selector. In this case, we only want to change the .active nav item if the user clicks on a link that does not already have the .active class. Initially, we can call console.log to ensure the handler fires only when expected: const navbar = document.querySelector('nav'); navbar.addEventListener('click', function (event) { // return if the clicked element doesn't have the correct selector if (!event.target.matches('nav a:not(active)')) { return; } console.log('click'); }); Open your browser console and try clicking different items in the navigation bar. You should only see "click" being logged when you select a new item in the navigation bar. Now that we know our event handler is working on the correct elements let’s add code to move the .active class to the navigation item that was clicked. We can use the object passed into the event handler to find the element that initialized the event and give that element a class of .active after removing it from the previously active item. const navbar = document.querySelector('nav'); navbar.addEventListener('click', function (event) { // return if the clicked element doesn't have the correct selector if (!event.target.matches('nav a:not(active)')) { return; } - console.log('click'); + document.querySelector('nav a.active').classList.remove('active'); + event.target.classList.add('active'); }); Our #highlight element needs to move across the navigation bar and position itself around the active item. Let’s write a function to calculate a new position and width. 
Since the #highlight selector has transition styles applied, it will move gradually when its position changes. Using getBoundingClientRect, we can get information about the position and size of an element. We calculate the width of the active navigation item and its offset from the left boundary of the parent element. Then, we assign styles to the highlight element so that its size and position match.

```javascript
// handler for moving the highlight
const moveHighlight = () => {
  const activeNavItem = document.querySelector('a.active');
  const highlighterElement = document.querySelector('#highlight');

  const width = activeNavItem.offsetWidth;

  const itemPos = activeNavItem.getBoundingClientRect();
  const navbarPos = navbar.getBoundingClientRect();
  const relativePosX = itemPos.left - navbarPos.left;

  const styles = {
    left: `${relativePosX}px`,
    width: `${width}px`,
  };

  Object.assign(highlighterElement.style, styles);
};
```

Let’s call our new function when the click event fires:

```diff
  navbar.addEventListener('click', function (event) {
    // return if the clicked element doesn't have the correct selector
    if (!event.target.matches('nav a:not(.active)')) {
      return;
    }

    document.querySelector('nav a.active').classList.remove('active');
    event.target.classList.add('active');
+   moveHighlight();
  });
```

Finally, let’s also call the function immediately so that the border moves behind our initial active item when the page first loads:

```javascript
// handler for moving the highlight
const moveHighlight = () => {
  // ...
};

// display the highlight when the page loads
moveHighlight();
```

Now, the border moves across the navigation bar when a new item is selected. Try clicking the different navigation links to animate the navigation bar.

See the Pen Moving Highlight Navbar [forked] by Blake Lundquist.

That only took a few lines of vanilla JavaScript and could easily be extended to account for other interactions, like mouseover events. In the next section, we will explore refactoring this feature using the View Transition API.
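As a sketch of that mouseover extension (assuming the same markup; `computeHighlightStyles` is a hypothetical helper, factored out of `moveHighlight` so the position math can run outside a browser):

```javascript
// Hypothetical helper: compute the inline styles for the #highlight
// element from the hovered item's rect, the navbar's rect, and the
// item's width. Pure function, so it also runs outside the browser.
function computeHighlightStyles(itemRect, navRect, width) {
  return {
    left: `${itemRect.left - navRect.left}px`,
    width: `${width}px`,
  };
}

// Browser-only wiring: follow the pointer on mouseover.
if (typeof document !== 'undefined' && document.querySelector('nav')) {
  const nav = document.querySelector('nav');
  const highlight = document.querySelector('#highlight');

  nav.addEventListener('mouseover', (event) => {
    if (!event.target.matches('nav a')) return;
    Object.assign(
      highlight.style,
      computeHighlightStyles(
        event.target.getBoundingClientRect(),
        nav.getBoundingClientRect(),
        event.target.offsetWidth,
      ),
    );
  });
}
```

On mouseleave, you could call the original moveHighlight() to snap the border back to the .active item.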
Using The View Transition API

The View Transition API provides functionality to create animated transitions between website views. Under the hood, the API creates snapshots of “before” and “after” views and then handles transitioning between them. View transitions are useful for creating animations between documents, providing the native-app-like user experience featured in frameworks like Astro. However, the API also provides handlers meant for SPA-style applications. We will use it to reduce the JavaScript needed in our implementation and more easily create fallback functionality.

For this approach, we no longer need a separate #highlight element. Instead, we can style the .active navigation item directly using the ::after pseudo-element and let the View Transition API handle the animation between the before-and-after UI states when a new navigation item is clicked.

We’ll start by getting rid of the #highlight element and its associated CSS and replacing it with styles for the nav a::after pseudo-element:

```diff
  <nav>
-   <div id="highlight"></div>
    <a href="#" class="active">Home</a>
    <a href="#services">Services</a>
    <a href="#about">About</a>
    <a href="#contact">Contact</a>
  </nav>
```

```diff
- #highlight {
-   z-index: 0;
-   position: absolute;
-   height: 100%;
-   width: 0;
-   left: 0;
-   box-sizing: border-box;
-   transition: all 0.2s ease;
- }
+ nav a::after {
+   content: " ";
+   position: absolute;
+   left: 0;
+   top: 0;
+   width: 100%;
+   height: 100%;
+   border: none;
+   box-sizing: border-box;
+ }
```

For the .active class, we include the view-transition-name property, thus unlocking the magic of the View Transition API. Once we trigger the view transition and change the location of the .active navigation item in the DOM, “before” and “after” snapshots will be taken, and the browser will animate the border across the bar. We’ll give our view transition the name of highlight, but we could theoretically give it any name.
```css
nav a.active::after {
  border: 2px solid green;
  view-transition-name: highlight;
}
```

Once we have a selector that contains a view-transition-name property, the only remaining step is to trigger the transition using the startViewTransition method and pass in a callback function.

```javascript
const navbar = document.querySelector('nav');

// Change the active nav item on click
navbar.addEventListener('click', async function (event) {
  if (!event.target.matches('nav a:not(.active)')) {
    return;
  }

  document.startViewTransition(() => {
    document.querySelector('nav a.active').classList.remove('active');
    event.target.classList.add('active');
  });
});
```

Above is a revised version of the click handler. Instead of doing all the calculations for the size and position of the moving border ourselves, the View Transition API handles all of it for us. We only need to call document.startViewTransition and pass in a callback function to change the item that has the .active class!

Adjusting The View Transition

At this point, when clicking on a navigation link, you’ll notice that the transition works, but some strange sizing issues are visible. This sizing inconsistency is caused by aspect ratio changes during the course of the view transition. We won’t go into detail here, but Jake Archibald has a detailed explanation you can read for more information. In short, to ensure the height of the border stays uniform throughout the transition, we need to declare an explicit height for the ::view-transition-old and ::view-transition-new pseudo-elements, which represent static snapshots of the old and new view, respectively.
```css
::view-transition-old(highlight) {
  height: 100%;
}

::view-transition-new(highlight) {
  height: 100%;
}
```

Let’s do some final refactoring to tidy up our code by moving the callback to a separate function and adding a fallback for when view transitions aren’t supported:

```javascript
const navbar = document.querySelector('nav');

// change the item that has the .active class applied
const setActiveElement = (elem) => {
  document.querySelector('nav a.active').classList.remove('active');
  elem.classList.add('active');
};

// Start view transition and pass in a callback on click
navbar.addEventListener('click', async function (event) {
  if (!event.target.matches('nav a:not(.active)')) {
    return;
  }

  // Fallback for browsers that don't support View Transitions:
  if (!document.startViewTransition) {
    setActiveElement(event.target);
    return;
  }

  document.startViewTransition(() => setActiveElement(event.target));
});
```

Here’s our view transition-powered navigation bar! Observe the smooth transition when you click on the different links.

See the Pen Moving Highlight Navbar with View Transition [forked] by Blake Lundquist.

Conclusion

Animations and transitions between website UI states used to require many kilobytes of external libraries, along with verbose, confusing, and error-prone code, but vanilla JavaScript and CSS have since incorporated features to achieve native-app-like interactions without breaking the bank. We demonstrated this by implementing the “moving highlight” navigation pattern using two approaches: CSS transitions combined with the getBoundingClientRect() method and the View Transition API.

Resources

- getBoundingClientRect() method documentation
- View Transition API documentation
- “View Transitions: Handling Aspect Ratio Changes” by Jake Archibald
  • PC Gaming ERA | June 2025 - The Pink Pony Steam Community Group

    PC Gaming Threads - To Be Updated. Looking for suggestions.

    Broken Arrow - Uzzy - Out June 19th

    Broken Arrow is a large-scale real-time modern warfare tactics game. The base game features both the American and Russian factions, each containing 5 unique sub-factions like the marines, armoured, airborne, and more. Broken Arrow brings the...
    WWW.RESETERA.COM
  • Is the Dog Room the New Luxury Must-Have?

    Every item on this page was chosen by an ELLE Decor editor. We may earn commission on some of the items you choose to buy.

    When Corey Moriarty moved into a new home in Palm Beach, Florida with his four dogs—Maverick and Bauer (Golden Retrievers) and Blue and Titan (Siberian Huskies)—he found himself wondering what to do with his spare bedroom: “We had an extra room just sitting there, and instead of turning it into an office or a guest room no one ever uses, we thought, ‘Why not make it a space entirely for them?’”

    What started as a lark quickly turned delightfully over-the-top. Moriarty outfitted the room with custom bunk beds, a Murphy bed, and a wall lined with glass jars filled with the dogs' favorite snacks. There’s a ball pit, a full TV setup for nightly Bluey viewings, and a closet containing all of their outfits. Moriarty has been documenting the room’s evolution on TikTok, where his latest post racked up more than 24 million views.

    Corey Moriarty’s dogs have their own bona fide bedroom, complete with bunk beds, a TV area, and a treat wall. (Photo: Corey Moriarty)

    Pet ownership is booming in the U.S. In 2024, 59.8 million households had dogs, and 42.2 million had cats, according to the American Veterinary Medical Association. And people aren’t just adding pets to their families—they’re investing in them. In 2023, Americans spent more than $147 billion on their pets, per the American Pet Products Association, with an increasing chunk of that going toward pet-focused home upgrades. These aren’t mere afterthoughts—they’re carefully crafted extensions of the home that call for thoughtful planning and, often, the expertise of an interior designer. In fact, the dog room has truly become the newest status symbol.

    A dog room's scale can range from a small nook under the stairs to a full-on suite complete with built-in feeding stations, toy storage, grooming areas, and plush four-poster beds. Some include tiled dog showers, temperature-controlled flooring, and built-in cabinetry. Others have more indulgent luxuries—like a TV with DOGTV, a streaming channel with programming designed specifically for canine attention spans. Think: dogs playing in fields, soothing nature sounds, and friendly humans doing relaxing things with pets. It’s ambient TV, but for your hyperactive schnauzer.

    For Moriarty, the trend taps into a bigger cultural shift. “There’s a continuing movement toward including pets more fully in people’s lives—as real family members,” he says. “Social media has poured gas on the fire. Everyone’s showing off these amazing pet spaces, and it’s inspiring others to level up.” The result is a growing “barkitecture” trend, where design for pets isn’t an afterthought—it’s part of the floor plan from day one. “We’re in the process of finding or building a more permanent home,” he adds, “and a huge part of that decision is based on what the dogs need—a pool, a yard, a room of their own, space to add a dog wash station.”

    Ken Fulk’s three cream golden retrievers found a home in the curry-colored library of his Provincetown home, overlooking the harbor in an antique captain’s daybed. (Photo: Ken Fulk)

    Interior designers are seeing a rise in the trend, too. And some are even participating themselves. Ken Fulk, who shares his Provincetown home with four dogs—three English cream golden retrievers and a wirehaired Dachshund named Wiggy—says one room evolved into their dedicated canine space, though it wasn’t premeditated. “Our often-photographed curry-colored library became a de facto nursery,” he says. “Soon, no one would come upstairs with us to bed. They preferred their perch overlooking the harbor in an antique captain’s daybed.”

    Ken Fulk’s L.A. shop sells wicker dog beds. (Photo: Matt McWalters)

    And for those not ready to sacrifice a spare room? You don’t have to ditch your home office to make your pets feel like part of the design. Fulk says more clients are requesting pet-focused features, like custom dog beds, built-in food stations, and dog-wash areas in stylish mudrooms. At his new shop in Los Angeles, Fulk even offers wicker dog beds upholstered in outdoor fabric, including his own Designer Dogs print for Pierre Frey, as well as an Air Blue and Indigo Stripe.

    In a world where dogs are living better than their owners, what's next? "I got some very positive feedback on my idea of our doggy hotel called DEN," Fulk laughs. "It was dreamed up as an April Fool’s joke, but there just might be something there."

    Rachel Silva, Associate Digital Editor: Rachel Silva is the associate digital editor at ELLE DECOR, where she covers all things design, architecture, and lifestyle. She also oversees the publication’s feature article coverage, and is, at any moment, knee-deep in an investigation on everything from the best spa gifts to the best faux florals on the internet right now. She has more than 16 years of experience in editorial, working as a photo assignment editor at Time and acting as the president of Women in Media in NYC. She went to Columbia Journalism School, and her work has been nominated for awards from ASME, the Society of Publication Designers, and World Press Photo.
    WWW.ELLEDECOR.COM
  • BenchmarkQED: Automated benchmarking of RAG systems

    One of the key use cases for generative AI involves answering questions over private datasets, with retrieval-augmented generation as the go-to framework. As new RAG techniques emerge, there’s a growing need to benchmark their performance across diverse datasets and metrics. 
    To meet this need, we’re introducing BenchmarkQED, a new suite of tools that automates RAG benchmarking at scale, available on GitHub. It includes components for query generation, evaluation, and dataset preparation, each designed to support rigorous, reproducible testing.  
    BenchmarkQED complements the RAG methods in our open-source GraphRAG library, enabling users to run a GraphRAG-style evaluation across models, metrics, and datasets. GraphRAG uses a large language model to generate and summarize entity-based knowledge graphs, producing more comprehensive and diverse answers than standard RAG for large-scale tasks. 
    In this post, we walk through the core components of BenchmarkQED that contribute to the overall benchmarking process. We also share some of the latest benchmark results comparing our LazyGraphRAG system to competing methods, including a vector-based RAG with a 1M-token context window, where the leading LazyGraphRAG configuration showed significant win rates across all combinations of quality metrics and query classes.
    In the paper, we distinguish between local queries, where answers are found in a small number of text regions (sometimes even a single region), and global queries, which require reasoning over large portions of, or even the entire, dataset.
    Conventional vector-based RAG excels at local queries because the regions containing the answer to the query resemble the query itself and can be retrieved as the nearest neighbor in the vector space of text embeddings. However, it struggles with global questions, such as, “What are the main themes of the dataset?” which require understanding dataset qualities not explicitly stated in the text.  
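To make the local-query case concrete, here is a minimal sketch of nearest-neighbor retrieval (the three-dimensional vectors and chunk texts are toy values invented for illustration; real systems use high-dimensional, model-generated embeddings):

```javascript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a, b) {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Return the chunk whose embedding is nearest the query embedding.
function nearestChunk(queryVec, chunks) {
  return chunks.reduce((best, c) =>
    cosineSimilarity(queryVec, c.vec) > cosineSimilarity(queryVec, best.vec)
      ? c
      : best,
  );
}

// Toy corpus: a query embedding close to a chunk's embedding retrieves it.
const chunks = [
  { text: 'Budget figures for 2023', vec: [0.9, 0.1, 0.0] },
  { text: 'Hospital staffing report', vec: [0.1, 0.9, 0.2] },
];
```

A global question has no single nearby chunk, which is exactly where this scheme breaks down.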
    AutoQ: Automated query synthesis
    This limitation motivated the development of GraphRAG, a system designed to answer global queries. GraphRAG’s evaluation requirements subsequently led to the creation of AutoQ, a method for synthesizing these global queries for any dataset.
    AutoQ extends this approach by generating synthetic queries across the full spectrum, from local to global. It defines four distinct classes based on the source and scope of the query, forming a logical progression along the spectrum.
    Figure 1. Construction of a 2×2 design space for synthetic query generation with AutoQ, showing how the four resulting query classes map onto the local-global query spectrum. 
    AutoQ can be configured to generate any number and distribution of synthetic queries along these classes, enabling consistent benchmarking across datasets without requiring user customization. Figure 2 shows the synthesis process and sample queries from each class, using an AP News dataset.
    Figure 2. Synthesis process and example query for each of the four AutoQ query classes. 

    AutoE: Automated evaluation framework 
    Our evaluation of GraphRAG focused on analyzing key qualities of answers to global questions. The following qualities were used for the current evaluation:

    Comprehensiveness: Does the answer address all relevant aspects of the question? 
    Diversity: Does it present varied perspectives or insights? 
    Empowerment: Does it help the reader understand and make informed judgments? 
    Relevance: Does it address what the question is specifically asking?  

    The AutoE component scales evaluation of these qualities using the LLM-as-a-Judge method. It presents pairs of answers to an LLM, along with the query and target metric, in counterbalanced order. The model determines whether the first answer wins, loses, or ties with the second. Over a set of queries, whether from AutoQ or elsewhere, this produces win rates between competing methods. When ground truth is available, AutoE can also score answers on correctness, completeness, and related metrics.
    An illustrative evaluation is shown in Figure 3. Using a dataset of 1,397 AP News articles on health and healthcare, AutoQ generated 50 queries per class. AutoE then compared LazyGraphRAG to a range of competing RAG methods, running six trials per query across four metrics, using GPT-4.1 as a judge.
    These trial-level results were aggregated using metric-based win rates, where each trial is scored 1 for a win, 0.5 for a tie, and 0 for a loss, and then averaged to calculate the overall win rate for each RAG method.
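That scoring scheme can be sketched in a few lines (`winRate` is an illustrative name, not a BenchmarkQED API):

```javascript
// Score each trial outcome (1 = win, 0.5 = tie, 0 = loss) and average
// over all trials to produce the overall win rate.
function winRate(trials) {
  const score = { win: 1, tie: 0.5, loss: 0 };
  return trials.reduce((sum, t) => sum + score[t], 0) / trials.length;
}

// Example: 2 wins, 1 tie, and 1 loss over 4 trials.
const rate = winRate(['win', 'win', 'tie', 'loss']); // → 0.625
```

A bar above 50% in Figure 3 corresponds to a win rate greater than 0.5 under this scoring.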
    Figure 3. Win rates of four LazyGraphRAG configurations across methods, broken down by the AutoQ query class and averaged across AutoE’s four metrics: comprehensiveness, diversity, empowerment, and relevance. LazyGraphRAG outperforms comparison conditions where the bar is above 50%.
    The four LazyGraphRAG conditions differ by query budget and chunk size. All used GPT-4o mini for relevance tests and GPT-4o for query expansion and answer generation, except for LGR_b200_c200_mini, which used GPT-4o mini throughout.
    Comparison systems were GraphRAG, Vector RAG with 8k- and 120k-token windows, and three published methods: LightRAG, RAPTOR, and TREX. All methods were limited to the same 8k tokens for answer generation. GraphRAG Global Search used level 2 of the community hierarchy.
    LazyGraphRAG outperformed every comparison condition using the same generative model, winning all 96 comparisons, with all but one reaching statistical significance. The best overall performance came from the larger-budget, smaller-chunk configuration. For DataLocal queries, the smaller budget performed slightly better, likely because fewer chunks were relevant. For ActivityLocal queries, the larger chunk size had a slight edge, likely because longer chunks provide more coherent context.
    Competing methods performed relatively better on the query classes for which they were designed: GraphRAG Global for global queries, Vector RAG for local queries, and GraphRAG Drift Search, which combines both strategies, posed the strongest challenge overall.
    Increasing Vector RAG’s context window from 8k to 120k tokens did not improve its performance compared to LazyGraphRAG. This raised the question of how LazyGraphRAG would perform against Vector RAG with a 1M-token context window containing most of the dataset.
    Figure 4 shows the follow-up experiment that enabled this comparison, with both LazyGraphRAG and Vector RAG using GPT-4.1. Even against the 1M-token window, LazyGraphRAG achieved higher win rates across all comparisons, failing to reach significance only for the relevance of answers to DataLocal queries. These queries tend to benefit most from Vector RAG’s ranking of directly relevant chunks, making it hard for LazyGraphRAG to generate answers that have greater relevance to the query, even though these answers may be dramatically more comprehensive, diverse, and empowering overall.
    Figure 4. Win rates of LazyGraphRAG over Vector RAG across different context window sizes, broken down by the four AutoQ query classes and four AutoE metrics: comprehensiveness, diversity, empowerment, and relevance. Bars above 50% indicate that LazyGraphRAG outperformed the comparison condition. 
    AutoD: Automated data sampling and summarization
    Text datasets have an underlying topical structure, but the depth, breadth, and connectivity of that structure can vary widely. This variability makes it difficult to evaluate RAG systems consistently, as results may reflect the idiosyncrasies of the dataset rather than the system’s general capabilities.
    The AutoD component addresses this by sampling datasets to meet a target specification, defined by the number of topic clusters (breadth) and the number of samples per cluster (depth). This creates consistency across datasets, enabling more meaningful comparisons, as structurally aligned datasets lead to comparable AutoQ queries, which in turn support consistent AutoE evaluations.
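    The sampling step can be pictured as follows. This is not the actual AutoD implementation, just a minimal sketch; it assumes topic assignments come from an upstream clustering step (e.g., embedding the documents and running k-means), and the helper name and toy data are hypothetical:

```python
import random
from collections import defaultdict

def sample_to_spec(docs, n_clusters, n_per_cluster, seed=0):
    """Sample a dataset to a target spec: n_clusters topic clusters (breadth),
    n_per_cluster documents from each (depth). `docs` is a list of
    (topic_id, text) pairs; topic assignment is assumed to happen upstream."""
    by_topic = defaultdict(list)
    for topic, text in docs:
        by_topic[topic].append(text)
    # Keep the n_clusters largest topics so every cluster can meet the depth target.
    eligible = sorted(by_topic, key=lambda t: len(by_topic[t]), reverse=True)[:n_clusters]
    rng = random.Random(seed)  # seeded for reproducible benchmarks
    sample = {}
    for topic in eligible:
        texts = by_topic[topic]
        if len(texts) < n_per_cluster:
            raise ValueError(f"topic {topic!r} has only {len(texts)} docs")
        sample[topic] = rng.sample(texts, n_per_cluster)
    return sample

docs = ([("health", f"h{i}") for i in range(10)]
        + [("economy", f"e{i}") for i in range(8)]
        + [("sports", f"s{i}") for i in range(3)])
spec = sample_to_spec(docs, n_clusters=2, n_per_cluster=5)
print(sorted(spec), [len(v) for v in spec.values()])  # ['economy', 'health'] [5, 5]
```

    Two datasets sampled to the same (breadth, depth) spec then yield structurally comparable inputs for query synthesis.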
    AutoD also includes tools for summarizing input or output datasets in a way that reflects their topical coverage. These summaries play an important role in the AutoQ query synthesis process, but they can also be used more broadly, such as in prompts where context space is limited.
    Since the release of the GraphRAG paper, we’ve received many requests to share the dataset of the Behind the Tech podcast transcripts we used in our evaluation. An updated version of this dataset is now available in the BenchmarkQED repository, alongside the AP News dataset containing 1,397 health-related articles, licensed for open release.  
    We hope these datasets, together with the BenchmarkQED tools, help accelerate benchmark-driven development of RAG systems and AI question-answering. We invite the community to try them on GitHub. 
  • New Multi-Axis Tool from Virginia Tech Boosts Fiber-Reinforced 3D Printing

    Researchers from the Department of Mechanical Engineering at Virginia Tech have introduced a continuous fiber reinforcement (CFR) deposition tool designed for multi-axis 3D printing, significantly enhancing mechanical performance in composite structures. Led by Kieran D. Beaumont, Joseph R. Kubalak, and Christopher B. Williams, and published in Springer Nature Link, the study demonstrates an 820% improvement in maximum load capacity compared to conventional planar short carbon fiber 3D printing methods. The tool integrates three key functions: reliable fiber cutting and re-feeding, in situ fiber volume fraction control, and a slender collision volume to support complex multi-axis toolpaths.
    The newly developed deposition tool addresses critical challenges in CFR additive manufacturing. It is capable of cutting and re-feeding continuous fibers during travel movements, a function required to create complex geometries without material tearing or print failure. In situ control of fiber volume fraction is also achieved by adjusting the polymer extrusion rate. A slender geometry minimizes collisions between the tool and the printed part during multi-axis movements.
    The researchers designed the tool to co-extrude a thermoplastic polymer matrix with a continuous carbon fiber towpreg. This approach allowed reliable fiber re-feeding after each cut and enabled printing with variable fiber content within a single part. The tool’s slender collision volume supports an increased range of motion for the robotic arm used in the experiments, allowing alignment of fibers with three-dimensional load paths in complex structures.
    The six Degree-of-Freedom Robotic Arm printing a multi-axis geometry from a CFR polymer composite. Photo via Springer Nature Link.
    Mechanical Testing Confirms Load-Bearing Improvements
    Mechanical tests evaluated the impact of continuous fiber reinforcement on polylactic acid (PLA) parts. In tensile tests, samples reinforced with continuous carbon fibers achieved a tensile strength of 190.76 MPa and a tensile modulus of 9.98 GPa in the fiber direction. These values compare to 60.31 MPa and 3.01 GPa for neat PLA, and 56.92 MPa and 4.30 GPa for parts containing short carbon fibers. Additional tests assessed intra-layer and inter-layer performance, revealing that the continuous fiber–reinforced material had reduced mechanical properties in these orientations. Compared to neat PLA, intra-layer tensile strength and modulus dropped by 66% and 63%, respectively, and inter-layer strength and modulus decreased by 86% and 60%.
    Researchers printed curved tensile bar geometries using three methods to evaluate performance in parts with three-dimensional load paths: planar short carbon fiber–reinforced PLA, multi-axis short fiber–reinforced samples, and multi-axis continuous fiber–reinforced composites. The multi-axis short fiber–reinforced parts showed a 41.6% increase in maximum load compared to their planar counterparts. Meanwhile, multi-axis continuous fiber–reinforced parts absorbed loads 8.2 times higher than the planar short fiber–reinforced specimens. Scanning electron microscopy (SEM) images of fracture surfaces revealed fiber pull-out and limited fiber-matrix bonding, particularly in samples with continuous fibers.
    Schematic illustration of common continuous fiber reinforcement–material extrusion (CFR-MEX) modalities: in situ impregnation, towpreg extrusion, and co-extrusion with towpreg. Photo via Springer Nature Link.
    To verify the tool’s fiber cutting and re-feeding capability, the researchers printed a 100 × 150 × 3 mm rectangular plaque that required 426 cutting and re-feeding operations across six layers. The deposition tool achieved a 100% success rate, demonstrating reliable cutting and re-feeding without fiber clogging. This reliability is critical for manufacturing complex structures that require frequent travel movements between deposition paths.
    In situ fiber volume fraction control was validated through printing a rectangular prism sample with varying polymer feed rates, road widths, and layer heights. The fiber volume fractions achieved in different sections of the part were 6.51%, 8.00%, and 9.86%, as measured by cross-sectional microscopy and image analysis. Although lower than some literature reports, the researchers attributed this to the specific combination of tool geometry, polymer-fiber interaction time, and print speed.
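    As a rough sanity check on what drives these values: in co-extrusion, the part-level fiber volume fraction is set by the ratio of the towpreg's fiber cross-section to the deposited road's cross-section. A back-of-the-envelope sketch using the towpreg figures reported in the study (0.35 mm diameter, 57% internal fiber fraction); the road width and layer height below are hypothetical, not values from the paper:

```python
import math

def fiber_volume_fraction(tow_diameter_mm, tow_fiber_fraction,
                          road_width_mm, layer_height_mm):
    """Estimate part-level fiber volume fraction for co-extrusion:
    fiber area within the towpreg divided by the deposited road's
    cross-section, idealized here as a rectangle (a simplification)."""
    fiber_area = tow_fiber_fraction * math.pi * (tow_diameter_mm / 2) ** 2
    road_area = road_width_mm * layer_height_mm
    return fiber_area / road_area

# Towpreg values from the study; road dimensions are hypothetical.
vf = fiber_volume_fraction(0.35, 0.57, 1.2, 0.45)
print(f"{vf:.1%}")  # 10.2%
```

    Widening the road or thickening the layer (i.e., extruding more polymer per unit length of fiber) dilutes the fiber, which is how the tool achieves in situ volume fraction control.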
    The tool uses Anisoprint’s CCF towpreg, a pre-impregnated continuous carbon fiber product with a fiber volume fraction of 57% and a diameter of 0.35 mm. 3DXTECH’s black PLA and SCF-PLA filaments were selected to ensure consistent matrix properties and avoid the influence of pigment variations on mechanical testing. The experiments were conducted using an ABB IRB 4600–40/2.55 robotic arm equipped with a tool changer for switching between the CFR-MEX deposition tool and a standard MEX tool with an elongated nozzle for planar prints.
    Deposition Tool CAD and Assembly. Photo via Springer Nature Link.
    Context Within Existing Research and Future Directions
    Continuous fiber reinforcement in additive manufacturing has previously demonstrated significant improvements in part performance, with some studies reporting tensile strengths of up to 650 MPa for PLA composites reinforced with continuous carbon fibers. However, traditional three-axis printing methods restrict fiber orientation to planar directions, limiting these gains to within the XY-plane. Multi-axis 3D printing approaches have demonstrated improved load-bearing capacity in short-fiber reinforced parts. For example, multi-axis printed samples have shown failure loads several times higher than planar-printed counterparts in pressure cap and curved geometry applications.
    Virginia Tech’s tool integrates multiple functionalities that previous tools in literature could not achieve simultaneously. It combines a polymer feeder based on a dual drive extruder, a fiber cutter and re-feeder assembly, and a co-extrusion hotend with adjustable interaction time for fiber-polymer bonding. A needle-like geometry and external pneumatic cooling pipes reduce the risk of collision with the printed part during multi-axis reorientation. Measured collision volume angles were 56.2° for the full tool and 41.6° for the hotend assembly.
    Load-extension performance graphs for curved tensile bars. Photo via Springer Nature Link.
    Despite these advances, the researchers identified challenges related to weak bonding between the fiber and the polymer matrix. SEM images showed limited impregnation of the polymer into the fiber towpreg, with the fiber-matrix interface remaining a key area for future work. The study highlights that optimizing fiber tow sizing and improving the fiber-polymer interaction time during printing could enhance inter-layer and intra-layer performance. The results also suggest that advanced toolpath planning algorithms could further leverage the tool’s ability to align fiber deposition along three-dimensional load paths, improving mechanical performance in functional parts.
    The publication in Springer Nature Link documents the full design, validation experiments, and mechanical characterization of the CFR-MEX tool. The work adds to a growing body of research on multi-axis additive manufacturing, particularly in combining continuous fiber reinforcement with complex geometries.
    Take the 3DPI Reader Survey — shape the future of AM reporting in under 5 minutes.
    Ready to discover who won the 2024 3D Printing Industry Awards?
    Subscribe to the 3D Printing Industry newsletter to stay updated with the latest news and insights.
    Featured photo shows the six Degree-of-Freedom Robotic Arm printing a multi-axis geometry. Photo via Springer Nature Link.

    Anyer Tenorio Lara
    Anyer Tenorio Lara is an emerging tech journalist passionate about uncovering the latest advances in technology and innovation. With a sharp eye for detail and a talent for storytelling, Anyer has quickly made a name for himself in the tech community. Anyer's articles aim to make complex subjects accessible and engaging for a broad audience. In addition to his writing, Anyer enjoys participating in industry events and discussions, eager to learn and share knowledge in the dynamic world of technology.
    #new #multiaxis #tool #virginia #tech
    New Multi-Axis Tool from Virginia Tech Boosts Fiber-Reinforced 3D Printing
    Researchers from the Department of Mechanical Engineering at Virginia Tech have introduced a continuous fiber reinforcementdeposition tool designed for multi-axis 3D printing, significantly enhancing mechanical performance in composite structures. Led by Kieran D. Beaumont, Joseph R. Kubalak, and Christopher B. Williams, and published in Springer Nature Link, the study demonstrates an 820% improvement in maximum load capacity compared to conventional planar short carbon fiber3D printing methods. This tool integrates three key functions: reliable fiber cutting and re-feeding, in situ fiber volume fraction control, and a slender collision volume to support complex multi-axis toolpaths. The newly developed deposition tool addresses critical challenges in CFR additive manufacturing. It is capable of cutting and re-feeding continuous fibers during travel movements, a function required to create complex geometries without material tearing or print failure. In situ control of fiber volume fraction is also achieved by adjusting the polymer extrusion rate. A slender geometry minimizes collisions between the tool and the printed part during multi-axis movements. The researchers designed the tool to co-extrude a thermoplastic polymer matrix with a continuous carbon fibertowpreg. This approach allowed reliable fiber re-feeding after each cut and enabled printing with variable fiber content within a single part. The tool’s slender collision volume supports increased range of motion for the robotic arm used in the experiments, allowing alignment of fibers with three-dimensional load paths in complex structures. The six Degree-of-Freedom Robotic Arm printing a multi-axis geometry from a CFR polymer composite. Photo via Springer Nature Link. Mechanical Testing Confirms Load-Bearing Improvements Mechanical tests evaluated the impact of continuous fiber reinforcement on polylactic acidparts. 
In tensile tests, samples reinforced with continuous carbon fibers achieved a tensile strength of 190.76 MPa and a tensile modulus of 9.98 GPa in the fiber direction. These values compare to 60.31 MPa and 3.01 GPa for neat PLA, and 56.92 MPa and 4.30 GPa for parts containing short carbon fibers. Additional tests assessed intra-layer and inter-layer performance, revealing that the continuous fiber–reinforced material had reduced mechanical properties in these orientations. Compared to neat PLA, intra-layer tensile strength and modulus dropped by 66% and 63%, respectively, and inter-layer strength and modulus decreased by 86% and 60%. Researchers printed curved tensile bar geometries using three methods to evaluate performance in parts with three-dimensional load paths: planar short carbon fiber–reinforced PLA, multi-axis short fiber–reinforced samples, and multi-axis continuous fiber–reinforced composites. The multi-axis short fiber–reinforced parts showed a 41.6% increase in maximum load compared to their planar counterparts. Meanwhile, multi-axis continuous fiber–reinforced parts absorbed loads 8.2 times higher than the planar short fiber–reinforced specimens. Scanning electron microscopyimages of fracture surfaces revealed fiber pull-out and limited fiber-matrix bonding, particularly in samples with continuous fibers. Schematic illustration of common continuous fiber reinforcement–material extrusionmodalities: in situ impregnation, towpreg extrusion, and co-extrusion with towpreg. Photo via Springer Nature Link. To verify the tool’s fiber cutting and re-feeding capability, the researchers printed a 100 × 150 × 3 mm rectangular plaque that required 426 cutting and re-feeding operations across six layers. The deposition tool achieved a 100% success rate, demonstrating reliable cutting and re-feeding without fiber clogging. This reliability is critical for manufacturing complex structures that require frequent travel movements between deposition paths. 
In situ fiber volume fraction control was validated by printing a rectangular prism sample with varying polymer feed rates, road widths, and layer heights. The fiber volume fractions achieved in different sections of the part were 6.51%, 8.00%, and 9.86%, as measured by cross-sectional microscopy and image analysis. Although these values are lower than some literature reports, the researchers attributed this to the specific combination of tool geometry, polymer-fiber interaction time, and print speed.

The tool uses Anisoprint's CCF towpreg, a pre-impregnated continuous carbon fiber product with a fiber volume fraction of 57% and a diameter of 0.35 mm. 3DXTECH's black PLA and SCF-PLA filaments were selected to ensure consistent matrix properties and avoid the influence of pigment variations on mechanical testing. The experiments were conducted using an ABB IRB 4600–40/2.55 robotic arm equipped with a tool changer for switching between the CFR-MEX deposition tool and a standard MEX tool with an elongated nozzle for planar prints.

Deposition tool CAD and assembly. Photo via Springer Nature Link.

Context Within Existing Research and Future Directions

Continuous fiber reinforcement in additive manufacturing has previously demonstrated significant improvements in part performance, with some studies reporting tensile strengths of up to 650 MPa for PLA composites reinforced with continuous carbon fibers. However, traditional three-axis printing methods restrict fiber orientation to planar directions, limiting these gains to the XY-plane. Multi-axis 3D printing approaches have demonstrated improved load-bearing capacity in short-fiber reinforced parts; for example, multi-axis printed samples have shown failure loads several times higher than planar-printed counterparts in pressure cap and curved geometry applications. Virginia Tech's tool integrates multiple functionalities that previous tools in the literature could not achieve simultaneously.
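Given the towpreg specifications above, the in-situ fiber volume fraction follows from simple geometry: the fiber cross-section carried by the towpreg is fixed, while the road cross-section grows with the polymer extrusion rate. The sketch below uses the published towpreg figures but assumed example road dimensions (not values from the paper), and models the road as a simple rectangle:

```python
import math

# Fiber cross-section fixed by Anisoprint's CCF towpreg:
# 0.35 mm diameter pre-preg with 57% internal fiber volume fraction.
tow_diameter_mm = 0.35
tow_fiber_fraction = 0.57
fiber_area = tow_fiber_fraction * math.pi * (tow_diameter_mm / 2) ** 2  # ~0.055 mm^2

def road_fiber_volume_fraction(road_width_mm, layer_height_mm):
    """Approximate Vf of a deposited road, modeling its cross-section
    as a width x height rectangle (a deliberate simplification)."""
    return fiber_area / (road_width_mm * layer_height_mm)

# Assumed road geometries: more polymer per unit length (wider/taller
# roads) means a lower fiber volume fraction.
for w, h in [(1.2, 0.7), (1.0, 0.7), (0.9, 0.6)]:
    print(f"{w} x {h} mm road -> Vf = {road_fiber_volume_fraction(w, h):.1%}")
```

With road dimensions in this range, the model lands in the single-digit-percent regime, consistent with the 6.51–9.86% the researchers measured by microscopy.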
It combines a polymer feeder based on a dual-drive extruder, a fiber cutter and re-feeder assembly, and a co-extrusion hotend with adjustable interaction time for fiber-polymer bonding. A needle-like geometry and external pneumatic cooling pipes reduce the risk of collision with the printed part during multi-axis reorientation. Measured collision volume angles were 56.2° for the full tool and 41.6° for the hotend assembly.

Load-extension performance graphs for curved tensile bars. Photo via Springer Nature Link.

Despite these advances, the researchers identified challenges related to weak bonding between the fiber and the polymer matrix. SEM images showed limited impregnation of the polymer into the fiber towpreg, leaving the fiber-matrix interface a key area for future work. The study highlights that optimizing fiber tow sizing and improving the fiber-polymer interaction time during printing could enhance inter-layer and intra-layer performance. The results also suggest that advanced toolpath planning algorithms could further leverage the tool's ability to align fiber deposition along three-dimensional load paths, improving mechanical performance in functional parts.

The publication in Springer Nature Link documents the full design, validation experiments, and mechanical characterization of the CFR-MEX tool. The work adds to a growing body of research on multi-axis additive manufacturing, particularly in combining continuous fiber reinforcement with complex geometries.

Featured photo shows the six-degree-of-freedom robotic arm printing a multi-axis geometry. Photo via Springer Nature Link.
Anyer Tenorio Lara

Anyer Tenorio Lara is an emerging tech journalist passionate about uncovering the latest advances in technology and innovation. With a sharp eye for detail and a talent for storytelling, Anyer has quickly made a name for himself in the tech community. Anyer's articles aim to make complex subjects accessible and engaging for a broad audience. In addition to his writing, Anyer enjoys participating in industry events and discussions, eager to learn and share knowledge in the dynamic world of technology.
    3DPRINTINGINDUSTRY.COM
    New Multi-Axis Tool from Virginia Tech Boosts Fiber-Reinforced 3D Printing