• Dune: Awakening Helicopters Are 'Goomba Stomping' Players, Devs Are Working On A Fix

    In a crowded field of online survival sims, Dune: Awakening is kicking up a storm. The adaptation of Frank Herbert’s sci-fi novels lets players build bases, ride sandworms, and smash Ornithopters into one another. That last part has become a problem, and the developers are already looking into a fix.


    Dune’s Ornithopters are helicopters shaped like dragonflies. In Dune: Awakening, they’re one of the many vehicles players can build, serving as both a resource and an end goal of sorts. They require a lot of equipment and resources to craft if you’re playing solo, which is why most of them belong to players working in groups. It turns out they’re also nearly indestructible, making them lethal ramming weapons in PVP. Reddit user Bombe18 shared a run-in with Dune: Awakening’s man-made scourge in a recent clip that blew up on the subreddit, showing him repeatedly being accosted by multiple Ornithopters. Shooting at them does nothing. They’re unscathed by constantly smashing into the ground on top of him. At one point, he tries to wall-jump off a ledge and stab one.

    “Yeah sorry about this,” wrote game director Joel Bylos. “We have people working on fixing the goomba stomping ASAP.”

    Players have been debating the role of Ornithopters in Dune: Awakening since its beta tests last year. On the one hand, they’re a lot of fun and a cool reward for players to build toward. On the other, they somewhat trivialize traveling around the desert and surviving, the two things the game is supposed to be about. They can also shoot missiles, completely dominating the ground game. Now that’s real desert power.

    As for stopping players from griefing one another with Ornithopters, there are a few different suggestions. Some players want the vehicles to be unusable as weapons entirely. Others want them confined to specific PVP areas. Another solution is to make them easier to destroy. “Seems like they should just make guns deal more damage to them,” wrote one player. “They’d think twice about doing this if their orni could get wrecked by gunfire.” Another wrote, “Make Deep Desert crashes do significant damage. Two crashes or something past a certain physics threshold should disable the vehicle.”

    However the developers decide to address the recent outbreak of Ornithopter “goomba stomping,” Dune: Awakening is having a great launch so far. Out earlier this week on PC, it’s nearing a 90 percent positive rating on Steam with almost 20,000 reviews. The concurrent player count is very healthy too, peaking at just under 150,000 heading into the weekend. Unfortunately, console players will have to wait a bit to build Ornithopters of their own. A PlayStation 5 and Xbox Series X/S release isn’t planned until sometime in 2026.
    KOTAKU.COM
  • IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029


    By John P. Mello Jr.
    June 11, 2025 5:00 AM PT

    IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system. (Image Credit: IBM)

    IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible.
    The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillion (10⁴⁸) of the world’s most powerful supercomputers to represent.
    “IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.”
    IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del.
    “They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. “However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.”
    A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time.
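    The physical-to-logical trade-off can be illustrated with the simplest classical analogy, a repetition code: encode one bit as n noisy copies and recover it by majority vote. This is a toy sketch, not IBM's actual error-correction scheme, but it shows why a cluster of unreliable physical units can act as one far more reliable logical unit:

    ```python
    import random

    def logical_error_rate(p, n_physical, trials=100_000):
        """Estimate how often a majority vote over n_physical noisy copies
        misreports the encoded bit, given per-copy flip probability p."""
        failures = 0
        for _ in range(trials):
            flips = sum(random.random() < p for _ in range(n_physical))
            if flips > n_physical // 2:  # majority corrupted -> logical error
                failures += 1
        return failures / trials

    random.seed(0)
    for n in (1, 3, 5, 9):
        print(n, logical_error_rate(0.05, n))
    ```

    With a 5% per-copy flip rate, the failure rate of the voted "logical" bit drops steeply as n grows. Quantum codes buy the same qualitative effect (by very different means, since qubits cannot simply be copied), at the cost of many physical qubits per logical qubit.
    
    
    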
    Realistic Roadmap
    Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld.
    “Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany.
    “Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.”
    Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. This is a fast-moving industry, and success is certainly possible.”
    “IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.”
    “IBM has demonstrated consistent progress, has committed $30 billion over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology, in Saint-Laurent, Quebec, Canada.
    “That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.”
    Solving the Quantum Error Correction Puzzle
    To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits.
    “Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.”
    IBM explained in its announcement that creating increasing numbers of logical qubits capable of executing quantum circuits with as few physical qubits as possible is critical to quantum computing at scale. Until today, a clear path to building such a fault-tolerant system without unrealistic engineering overhead has not been published.

    Alternative and previous gold-standard, error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices.
    In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer.
    One paper outlines the use of quantum low-density parity check (qLDPC) codes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing.
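    Parity-check codes, classical or quantum, share one mechanic: the decoder never reads the data directly, it measures a syndrome of failed checks and infers where the error sits. As a hedged illustration (this uses the classical Hamming(7,4) parity-check matrix, not an actual qLDPC stabilizer code), here is how a syndrome pinpoints a single flipped bit:

    ```python
    import numpy as np

    # Parity-check matrix H of the classical Hamming(7,4) code:
    # column j (1-indexed) is the binary representation of j,
    # so the syndrome of a single flip reads out its position.
    H = np.array([
        [1, 0, 1, 0, 1, 0, 1],
        [0, 1, 1, 0, 0, 1, 1],
        [0, 0, 0, 1, 1, 1, 1],
    ])

    def syndrome(word):
        """Return which parity checks fail (mod 2)."""
        return H @ word % 2

    codeword = np.zeros(7, dtype=int)   # the all-zeros codeword
    noisy = codeword.copy()
    noisy[4] = 1                        # flip bit at index 4 (position 5)

    s = syndrome(noisy)
    print(s)  # -> [1 0 1], binary 101 = position 5: the error is located
    ```

    Real-time decoding at scale means running this kind of inference (over far larger, sparser check matrices, against a stream of syndrome measurements) fast enough on conventional hardware to steer the quantum computation as it happens.
    
    
    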
    According to IBM, a practical fault-tolerant quantum architecture must:

    Suppress enough errors for useful algorithms to succeed
    Prepare and measure logical qubits during computation
    Apply universal instructions to logical qubits
    Decode measurements from logical qubits in real time and guide subsequent operations
    Scale modularly across hundreds or thousands of logical qubits
    Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources

    Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained.
    “Only certain computing workloads, such as random circuit sampling (RCS), can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. “However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.”
    Q-Day Approaching Faster Than Expected
    For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated.
    “This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif.
    “IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.”

    “Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said.
    Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing computational breakthroughs expected within a few years.
    “It leads to the question of whether the U.S. government’s original post-quantum cryptography (PQC) preparation date of 2030 is still a safe date,” he told TechNewsWorld.
    “It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EO that relaxed PQC preparation rules as compared to Biden’s last EO on PQC standards, which told U.S. agencies to transition to PQC ASAP.”
    “Most US companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.”
    “It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S. companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.”

    John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.

    WWW.TECHNEWSWORLD.COM
    IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029
    IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029 By John P. Mello Jr. June 11, 2025 5:00 AM PT IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system. (Image Credit: IBM) ADVERTISEMENT Enterprise IT Lead Generation Services Fuel Your Pipeline. Close More Deals. Our full-service marketing programs deliver sales-ready leads. 100% Satisfaction Guarantee! Learn more. IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible. The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillion (10⁴⁸) of the world’s most powerful supercomputers to represent. “IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.” IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del. “They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. 
“However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.” A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time. Realistic Roadmap Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld. “Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany. “Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.” Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. 
This is a fast-moving industry, and success is certainly possible.”

“IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.”

“IBM has demonstrated consistent progress, has committed $30 billion over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology in Saint-Laurent, Quebec, Canada. “That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.”

Solving the Quantum Error Correction Puzzle

To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits.

“Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.”

IBM explained in its announcement that creating increasing numbers of logical qubits capable of executing quantum circuits with as few physical qubits as possible is critical to quantum computing at scale. Until now, a clear path to building such a fault-tolerant system without unrealistic engineering overhead had not been published.

Alternative and previous gold-standard error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices.
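The physical-versus-logical clustering described above can be made concrete with the simplest classical analogue: a repetition code. The sketch below is illustrative only — it is not IBM's scheme, and the `surface_code_overhead` function at the end uses rough textbook surface-code approximations, not Starling's actual parameters.

```python
# Illustrative sketch: a classical 3-bit repetition code showing the core
# idea behind logical qubits -- several unreliable physical units encode one
# more-reliable logical unit. Real quantum codes (surface, qLDPC) are far
# more involved; all numbers here are textbook approximations, not IBM's.
import random

def encode(bit):
    """One logical bit -> three physical copies."""
    return [bit, bit, bit]

def noisy_channel(bits, flip_prob):
    """Each physical bit flips independently with probability flip_prob."""
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits):
    """Majority vote recovers the logical bit if at most one copy flipped."""
    return 1 if sum(bits) >= 2 else 0

def logical_error_rate(flip_prob, trials=100_000):
    """Estimate how often the *logical* bit is corrupted despite correction."""
    errors = sum(
        decode(noisy_channel(encode(0), flip_prob)) != 0 for _ in range(trials)
    )
    return errors / trials

# With a 5% physical error rate, the logical rate falls to roughly
# 3*p^2 (about 0.7%) -- redundancy suppresses errors while p stays small.
random.seed(42)
print(f"logical error rate: {logical_error_rate(0.05):.4f}")

# Rough overhead of the older gold-standard approach: a distance-d surface
# code needs on the order of 2*d^2 physical qubits per logical qubit.
def surface_code_overhead(logical_qubits, distance):
    return logical_qubits * 2 * distance ** 2

print(surface_code_overhead(200, 25))  # 200 logical qubits -> 250000 physical
```

Run at a higher physical error rate and the advantage inverts, which is why qubit fidelity, not just qubit count, gates the 2029 target; the steep `2*d^2` overhead is also the problem IBM's qLDPC approach is meant to shrink.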
In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer. One paper outlines the use of quantum low-density parity check (qLDPC) codes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing.

According to IBM, a practical fault-tolerant quantum architecture must:

  • Suppress enough errors for useful algorithms to succeed
  • Prepare and measure logical qubits during computation
  • Apply universal instructions to logical qubits
  • Decode measurements from logical qubits in real time and guide subsequent operations
  • Scale modularly across hundreds or thousands of logical qubits
  • Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources

Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained.

“Only certain computing workloads, such as random circuit sampling [RCS], can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. “However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.”

Q-Day Approaching Faster Than Expected

For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated.
“This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif. “IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.”

“Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said.

Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing quickly forthcoming computational breakthroughs within a few years. “It leads to the question of whether the U.S. government’s original PQC [post-quantum cryptography] preparation date of 2030 is still a safe date,” he told TechNewsWorld. “It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EO [Executive Order] that relaxed PQC preparation rules as compared to Biden’s last EO PQC standard order, which told U.S. agencies to transition to PQC ASAP.”

“Most U.S. companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.”

“It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S.
companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.”

John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.
  • Punctured Photographs by Yael Martínez Illuminate the Daily Ruptures of Systemic Violence

    “El Hombre y la Montaña” (December 31, 2020). All images courtesy of This Book Is True, shared with permission
    June 13, 2025
    Grace Ebert

    The Mexican state of Guerrero lies on the southern Pacific coast and is home to the popular tourist destination of Acapulco. It’s also one of the nation’s most violent areas due to drug trafficking and cartel presence, and is one of six states that account for nearly half of the country’s total homicides.
    For artist and photographer Yael Martínez, the reality of organized crime became more pronounced when, in 2013, three of his family members disappeared. He began to speak with others in his community who had experienced similar traumas and to connect threads across the borders of Mexico to Honduras, Brazil, and the United States.
    “Itzel at home,” Guerrero, Mexico
    Luciérnagas, which translates to fireflies, comes from Martínez’s meditation on this extreme brutality that “infiltrates daily life and transforms the spirit of a place,” a statement says. Now published in a volume by This Book Is True, the poetic series punctures dark, nighttime photographs with minuscule holes. When backlit, the images bear a dazzling constellation of light that distorts scenes in which violence isn’t depicted but rather felt.
    In one work, for example, a man holding a firework stands in a poppy field, a perforated cloud of smoke enveloping his figure. He’s performing an annual ritual on the sacred hill of La Garza, and the setting exemplifies a poignant contradiction between ancestral cultures and a crop that has been subsumed by capitalism and is essential to cartel power. A statement elaborates:

    We don’t see death in Luciérnaga, but its omnipresence is felt throughout, lingering in the shadows of each photograph. Each image painfully underwritten by the result of a calculated violence that visited unseen and undetected, leaving behind the immense void of a vanished loved one. And yet there is always a sense of hope that informs the making of this work.

    Luciérnagas is available from This Book Is True. Find more from Martínez on Instagram.
    “Toro” (2018), Guerrero, Mexico
    “Abuelo-Estrella” (December 21, 2020), Cochoapa El Grande, Guerrero, Mexico
    “Levantada de Cruz” (2021)
    “El Río de la Memoria y Mis Hijas” (2022)
  • AN EXPLOSIVE MIX OF SFX AND VFX IGNITES FINAL DESTINATION BLOODLINES

    By CHRIS McGOWAN

    Images courtesy of Warner Bros. Pictures.

    Final Destination Bloodlines, the sixth installment in the graphic horror series, kicks off with the film’s biggest challenge – deploying an elaborate, large-scale set piece involving the 400-foot-high Skyview Tower restaurant. While there in 1968, young Iris Campbell has a premonition about the Skyview burning, cracking, crumbling and collapsing. Then, when she sees these events actually starting to happen around her, she intervenes and causes an evacuation of the tower, thus thwarting death’s design and saving many lives. Years later, her granddaughter, Stefani Reyes, inherits the vision of the destruction that could have occurred and realizes death is still coming for the survivors.

    “I knew we couldn’t put the whole thing on fire, but Tony tried and put as much fire as he could safely, and then we just built off that and added a lot more. Even when it’s just a little bit of real fire, the lighting and interaction can’t be simulated, so I think it was a success in terms of blending that practical with the visual.”
    —Nordin Rahhali, VFX Supervisor

    The film opens with an elaborate, large-scale set piece involving the 400-foot-high Skyview Tower restaurant – and its collapse. Drone footage was digitized to create a 3D asset for the LED wall so the time of day could be changed as needed.

    “The set that the directors wanted was very large,” says Nordin Rahhali, VFX Supervisor. “We had limited space options in stages given the scale and the footprint of the actual restaurant that they wanted. It was the first set piece, the first big thing we shot, so we had to get it all ready and going right off the bat. We built a bigger volume for our needs, including an LED wall that we built the assets for.”

    “We were outside Vancouver at Bridge Studios in Burnaby. The custom-built LED volume was a little over 200 feet in length,” states Christian Sebaldt, ASC, the movie’s DP. The volume was 98 feet in diameter and 24 feet tall. Rahhali explains, “Pixomondo was the vendor that we contracted to come in and build the volume. They also built the asset that went on the LED wall, so they were part of our filming team and production shoot. Subsequently, they were also the main vendor doing post, which was by design. By having them design and take care of the asset during production, we were able to leverage their assets, tools and builds for some of the post VFX.” Rahhali adds, “It was really important to make sure we had days with the volume team and with Christian and his camera team ahead of the shoot so we could dial it in.”

    Built at Bridge Studios in Burnaby outside Vancouver, the custom-built LED volume for events at the Skyview restaurant was over 200 feet long, 98 feet wide and 24 feet tall. Extensive previs with Digital Domain was done to advance key shots.

    Zach Lipovsky and Adam Stein directed Final Destination Bloodlines, a New Line film distributed by Warner Bros., in which chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated death at some point. Pixomondo was the lead VFX vendor, followed by FOLKS VFX. Picture Shop also contributed. There were around 800 VFX shots. Tony Lazarowich was the Special Effects Supervisor.

    “The Skyview restaurant involved building a massive set that was fire retardant, which meant the construction took longer than normal because they had to build it with certain materials and coat it with certain things because, obviously, it serves for the set piece. As it’s falling into chaos, a lot of that fire was practical. I really jived with what Christian and the directors wanted and how Tony likes to work – to augment as much real practical stuff as possible,” Rahhali remarks. “I knew we couldn’t put the whole thing on fire, but Tony tried and put as much fire as he could safely, and then we just built off that and added a lot more. Even when it’s just a little bit of real fire, the lighting and interaction can’t be simulated, so I think it was a success in terms of blending that practical with the visual.”

    The Skyview restaurant required building a massive set that was fire retardant. Construction on the set took longer because it had to be built and coated with special materials. As the Skyview restaurant falls into chaos, much of the fire was practical.

    “We got all the Vancouver skyline so we could rebuild our version of the city, which was based a little on the Vancouver footprint. So, we used all that to build a digital recreation of a city that was in line with what the directors wanted, which was a coastal city somewhere in the States that doesn’t necessarily have to be Vancouver or Seattle, but it looks a little like the Pacific Northwest.”
    —Christian Sebaldt, ASC, Director of Photography

    For drone shots, the team utilized a custom heavy-lift drone with three RED Komodo Digital Cinema cameras “giving us almost 180 degrees with overlap that we would then stitch in post and have a ridiculous amount of resolution off these three cameras,” Sebaldt states. “The other drone we used was a DJI Inspire 3, which was also very good. And we flew these drones up at the height. We flew them at different times of day. We flew full 360s, and we also used them for photogrammetry. We got all the Vancouver skyline so we could rebuild our version of the city, which was based a little on the Vancouver footprint. So, we used all that to build a digital recreation of a city that was in line with what the directors wanted, which was a coastal city somewhere in the States that doesn’t necessarily have to be Vancouver or Seattle, but it looks a little like the Pacific Northwest.” Rahhali adds, “All of this allowed us to figure out what we were going to shoot. We had the stage build, and we had the drone footage that we then digitized and created a 3D asset to go on the wall so we could change the times of day.”

    Pixomondo built the volume and the asset that went on the LED wall for the Skyview sequence. They were also the main vendor during post. FOLKS VFX and Picture Shop contributed.

    “We did extensive previs with Digital Domain,” Rahhali explains. “That was important because we knew the key shots that the directors wanted. With a combination of those key shots, we then kind of reverse-engineered while we did techvis off the previs and worked with Christian and the art department so we would have proper flexibility with the set to be able to pull off some of these shots. Some of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paul as he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.”

    Some shots required the Skyview’s ceiling to be lifted and partially removed to get a crane to shoot Paul Campbell as he’s about to fall.

    The character Iris lived in a fortified house, isolating herself methodically to avoid the Grim Reaper. Rahhali comments, “That was a beautiful location in the GVRD, very cold. It was a long, hard shoot, because it was all nights. It was just this beautiful pocket out in the middle of the mountains. We in visual effects didn’t do a ton other than a couple of clean-ups of the big establishing shots when you see them pull up to the compound. We had to clean up small roads we wanted to make look like one road and make the road look like dirt.” There were flames involved. Sebaldt says, “The explosion was unbelievably big. We had eight cameras on it at night and shot it at high speed, and we’re all going ‘Whoa.’” Rahhali notes, “There was some clean-up, but the explosion was 100% practical. Our Special Effects Supervisor, Tony, went to town on that. He blew up the whole house, and it looked spectacular.”

    The tattoo shop piercing scene is one of the most talked-about sequences in the movie, where a dangling chain from a ceiling fan attaches itself to the septum nose piercing of Erik Campbell and drags him toward a raging fire. Rahhali observes, “That was very Final Destination and a great Rube Goldberg build-up event. Richard was great. He was tied up on a stunt line for most of it, balancing on top of furniture. All of that was him doing it for real with a stunt line.” Some effects solutions can be surprisingly simple. Rahhali continues, “Our producer came up with a great gag for the septum ring.” Richard’s nose was connected with just a nose plug that went inside his nostrils. “All that tugging and everything that you’re seeing was real. For weeks and weeks, we were all trying to figure out how to do it without it being a big visual effects thing. ‘How are we gonna pull his nose for real?’ Craig said, ‘I have these things I use to help me open up my nose and you can’t really see them.’ They built it off of that, and it looked great.”

    Filmmakers spent weeks figuring out how to execute the harrowing tattoo shop scene. A dangling chain from a ceiling fan attaches itself to the septum nose ring of Erik Campbell, with the actor’s nose being tugged by the chain connected to a nose plug that went inside his nostrils.

    “Some of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paul as he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.”
    —Nordin Rahhali, VFX Supervisor

    Most of the fire in the tattoo parlor was practical. “There are some fire bars and stuff that you’re seeing in there from SFX and the big pool of fire on the wide shots.” Sebaldt adds, “That was a lot of fun to shoot because it’s so insane when he’s dancing and balancing on all this stuff – we were laughing and laughing. We were convinced that this was going to be the best scene in the movie up to that moment.” Rahhali says, “They used the scene wholesale for the trailer. It went viral – people were taking out their septum rings.” Erik survives the parlor blaze only to meet his fate in a hospital when he is pulled by a wheelchair into an out-of-control MRI machine at its highest magnetic level. Rahhali comments, “That is a good combination of a bunch of different departments. Our Stunt Coordinator, Simon Burnett, came up with this hard pull-wire line when Erik flies and hits the MRI. That’s a real stunt with a double, and he hit hard. All the other shots are all CG wheelchairs because the directors wanted to art-direct how the crumpling metal was snapping and bending to show pressure on him as his body starts going into the MRI.”

    To augment the believability that comes with reality, the directors aimed to capture as much practically as possible, then VFX Supervisor Nordin Rahhali and his team built on that result.

    A train derailment concludes the film after Stefani and her brother, Charlie, realize they are still on death’s list. A train goes off the tracks, and logs from one of the cars fly through the air and kill them. “That one was special because it’s a hard sequence and was also shot quite late, so we didn’t have a lot of time. We went back to Vancouver and shot the actual street, and we shot our actors performing. They fell onto stunt pads, and the moment they get touched by the logs, it turns into CG as it was the only way to pull that off, and the train, of course. We had to add all that. The destruction of the houses and everything was done in visual effects.”

    Erik survives the tattoo parlor blaze only to meet his fate in a hospital when he is crushed by a wheelchair while being pulled into an out-of-control MRI machine.

    Erik appears about to be run over by a delivery truck at the corner of 21A Ave. and 132A St., but he’s not – at least not then. The truck is actually on the opposite side of the road, and the person being run over is Howard.

    A rolling penny plays a major part in the catastrophic chain reactions and seems to be a character itself. “The magic penny was a mix from two vendors, Pixomondo and FOLKS; both had penny shots,” Rahhali says. “All the bouncing pennies you see going through the vents and hitting the fan blade are all FOLKS. The bouncing penny at the end as a lady takes it out of her purse, that goes down the ramp and into the rail – that’s FOLKS. The big explosion shots in the Skyview with the penny slowing down after the kid throws it are all Pixomondo shots. It was a mix. We took a little time to find that balance between readability and believability.”

    Approximately 800 VFX shots were required for Final Destination Bloodlines. Chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated Death at some point in the Final Destination films.

    From left: Kaitlyn Santa Juana as Stefani Reyes, director Adam Stein, director Zach Lipovsky and Gabrielle Rose as Iris.

    Rahhali adds, “The film is a great collaboration of departments. Good visual effects are always a good combination of special effects, makeup effects and cinematography; it’s all the planning of all the pieces coming together. For a film of this size, I’m really proud of the work. I think we punched above our weight class, and it looks quite good.”
We had eight cameras on it at night and shot it at high speed, and we’re all going ‘Whoa.’” Rahhali notes, “There was some clean-up, but the explosion was 100% practical. Our Special Effects Supervisor, Tony, went to town on that. He blew up the whole house, and it looked spectacular.” The tattoo shop piercing scene is one of the most talked-about sequences in the movie, where a dangling chain from a ceiling fan attaches itself to the septum nose piercing of Erik Campbelland drags him toward a raging fire. Rahhali observes, “That was very Final Destination and a great Rube Goldberg build-up event. Richard was great. He was tied up on a stunt line for most of it, balancing on top of furniture. All of that was him doing it for real with a stunt line.” Some effects solutions can be surprisingly extremely simple. Rahhali continues, “Our producercame up with a great gagseptum ring.” Richard’s nose was connected with just a nose plug that went inside his nostrils. “All that tugging and everything that you’re seeing was real. For weeks and weeks, we were all trying to figure out how to do it without it being a big visual effects thing. ‘How are we gonna pull his nose for real?’ Craig said, ‘I have these things I use to help me open up my nose and you can’t really see them.’ They built it off of that, and it looked great.” Filmmakers spent weeks figuring out how to execute the harrowing tattoo shop scene. A dangling chain from a ceiling fan attaches itself to the septum nose ring of Erik Campbell– with the actor’s nose being tugged by the chain connected to a nose plug that went inside his nostrils. “ome of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paulas he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. 
We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.” —Nordin Rahhali, VFX Supervisor Most of the fire in the tattoo parlor was practical. “There are some fire bars and stuff that you’re seeing in there from SFX and the big pool of fire on the wide shots.” Sebaldt adds, “That was a lot of fun to shoot because it’s so insane when he’s dancing and balancing on all this stuff – we were laughing and laughing. We were convinced that this was going to be the best scene in the movie up to that moment.” Rahhali says, “They used the scene wholesale for the trailer. It went viral – people were taking out their septum rings.” Erik survives the parlor blaze only to meet his fate in a hospital when he is pulled by a wheelchair into an out-of-control MRI machine at its highest magnetic level. Rahhali comments, “That is a good combination of a bunch of different departments. Our Stunt Coordinator, Simon Burnett, came up with this hard pull-wire linewhen Erik flies and hits the MRI. That’s a real stunt with a double, and he hit hard. All the other shots are all CG wheelchairs because the directors wanted to art-direct how the crumpling metal was snapping and bending to show pressure on him as his body starts going into the MRI.” To augment the believability that comes with reality, the directors aimed to capture as much practically as possible, then VFX Supervisor Nordin Rahhali and his team built on that result.A train derailment concludes the film after Stefani and her brother, Charlie, realize they are still on death’s list. A train goes off the tracks, and logs from one of the cars fly though the air and kills them. “That one was special because it’s a hard sequence and was also shot quite late, so we didn’t have a lot of time. We went back to Vancouver and shot the actual street, and we shot our actors performing. 
They fell onto stunt pads, and the moment they get touched by the logs, it turns into CG as it was the only way to pull that off and the train of course. We had to add all that. The destruction of the houses and everything was done in visual effects.” Erik survives the tattoo parlor blaze only to meet his fate in a hospital when he is crushed by a wheelchair while being pulled into an out-of-control MRI machine. Erikappears about to be run over by a delivery truck at the corner of 21A Ave. and 132A St., but he’s not – at least not then. The truck is actually on the opposite side of the road, and the person being run over is Howard. A rolling penny plays a major part in the catastrophic chain reactions and seems to be a character itself. “The magic penny was a mix from two vendors, Pixomondo and FOLKS; both had penny shots,” Rahhali says. “All the bouncing pennies you see going through the vents and hitting the fan blade are all FOLKS. The bouncing penny at the end as a lady takes it out of her purse, that goes down the ramp and into the rail – that’s FOLKS. The big explosion shots in the Skyview with the penny slowing down after the kid throws itare all Pixomondo shots. It was a mix. We took a little time to find that balance between readability and believability.” Approximately 800 VFX shots were required for Final Destination Bloodlines.Chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated Death at some point in the Final Destination films. From left: Kaitlyn Santa Juana as Stefani Reyes, director Adam Stein, director Zach Lipovsky and Gabrielle Rose as Iris.Rahhali adds, “The film is a great collaboration of departments. Good visual effects are always a good combination of special effects, makeup effects and cinematography; it’s all the planning of all the pieces coming together. For a film of this size, I’m really proud of the work. 
I think we punched above our weight class, and it looks quite good.” #explosive #mix #sfx #vfx #ignites
    WWW.VFXVOICE.COM
    AN EXPLOSIVE MIX OF SFX AND VFX IGNITES FINAL DESTINATION BLOODLINES
    By CHRIS McGOWAN Images courtesy of Warner Bros. Pictures. Final Destination Bloodlines, the sixth installment in the graphic horror series, kicks off with the film’s biggest challenge – deploying an elaborate, large-scale set piece involving the 400-foot-high Skyview Tower restaurant. While there in 1968, young Iris Campbell (Brec Bassinger) has a premonition about the Skyview burning, cracking, crumbling and collapsing. Then, when she sees these events actually starting to happen around her, she intervenes and causes an evacuation of the tower, thus thwarting death’s design and saving many lives. Years later, her granddaughter, Stefani Reyes (Kaitlyn Santa Juana), inherits the vision of the destruction that could have occurred and realizes death is still coming for the survivors. “I knew we couldn’t put the whole [Skyview restaurant] on fire, but Tony [Lazarowich, Special Effects Supervisor] tried and put as much fire as he could safely and then we just built off that [in VFX] and added a lot more. Even when it’s just a little bit of real fire, the lighting and interaction can’t be simulated, so I think it was a success in terms of blending that practical with the visual.” —Nordin Rahhali, VFX Supervisor The film opens with an elaborate, large-scale set piece involving the 400-foot-high Skyview Tower restaurant – and its collapse. Drone footage was digitized to create a 3D asset for the LED wall so the time of day could be changed as needed. “The set that the directors wanted was very large,” says Nordin Rahhali, VFX Supervisor. “We had limited space options in stages given the scale and the footprint of the actual restaurant that they wanted. It was the first set piece, the first big thing we shot, so we had to get it all ready and going right off the bat. We built a bigger volume for our needs, including an LED wall that we built the assets for.” “We were outside Vancouver at Bridge Studios in Burnaby. 
The custom-built LED volume was a little over 200 feet in length,” states Christian Sebaldt, ASC, the movie’s DP. The volume was 98 feet in diameter and 24 feet tall. Rahhali explains, “Pixomondo was the vendor that we contracted to come in and build the volume. They also built the asset that went on the LED wall, so they were part of our filming team and production shoot. Subsequently, they were also the main vendor doing post, which was by design. By having them design and take care of the asset during production, we were able to leverage their assets, tools and builds for some of the post VFX.” Rahhali adds, “It was really important to make sure we had days with the volume team and with Christian and his camera team ahead of the shoot so we could dial it in.” Built at Bridge Studios in Burnaby outside Vancouver, the custom-built LED volume for events at the Skyview restaurant was over 200 feet long, 98 feet wide and 24 feet tall. Extensive previs with Digital Domain was done to advance key shots. (Photo: Eric Milner) Zach Lipovsky and Adam Stein directed Final Destination Bloodlines for New Line Cinema, distributed by Warner Bros., in which chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated death at some point. Pixomondo was the lead VFX vendor, followed by FOLKS VFX. Picture Shop also contributed. There were around 800 VFX shots. Tony Lazarowich was the Special Effects Supervisor. “The Skyview restaurant involved building a massive set [that] was fire retardant, which meant the construction took longer than normal because they had to build it with certain materials and coat it with certain things because, obviously, it serves for the set piece. As it’s falling into chaos, a lot of that fire was practical. I really jived with what Christian and directors wanted and how Tony likes to work – to augment as much real practical stuff as possible,” Rahhali remarks. 
“I knew we couldn’t put the whole thing on fire, but Tony tried and put as much fire as he could safely, and then we just built off that [in VFX] and added a lot more. Even when it’s just a little bit of real fire, the lighting and interaction can’t be simulated, so I think it was a success in terms of blending that practical with the visual.” The Skyview restaurant required building a massive set that was fire retardant. Construction on the set took longer because it had to be built and coated with special materials. As the Skyview restaurant falls into chaos, much of the fire was practical. (Photo: Eric Milner) “We got all the Vancouver skyline [with drones] so we could rebuild our version of the city, which was based a little on the Vancouver footprint. So, we used all that to build a digital recreation of a city that was in line with what the directors wanted, which was a coastal city somewhere in the States that doesn’t necessarily have to be Vancouver or Seattle, but it looks a little like the Pacific Northwest.” —Christian Sebaldt, ASC, Director of Photography For drone shots, the team utilized a custom heavy-lift drone with three RED Komodo Digital Cinema cameras “giving us almost 180 degrees with overlap that we would then stitch in post and have a ridiculous amount of resolution off these three cameras,” Sebaldt states. “The other drone we used was a DJI Inspire 3, which was also very good. And we flew these drones up at the height [we needed]. We flew them at different times of day. We flew full 360s, and we also used them for photogrammetry. We got all the Vancouver skyline so we could rebuild our version of the city, which was based a little on the Vancouver footprint. 
So, we used all that to build a digital recreation of a city that was in line with what the directors wanted, which was a coastal city somewhere in the States that doesn’t necessarily have to be Vancouver or Seattle, but it looks a little like the Pacific Northwest.” Rahhali adds, “All of this allowed us to figure out what we were going to shoot. We had the stage build, and we had the drone footage that we then digitized and created a 3D asset to go on the wall [so] we could change the times of day.” Pixomondo built the volume and the asset that went on the LED wall for the Skyview sequence. They were also the main vendor during post. FOLKS VFX and Picture Shop contributed. (Photo: Eric Milner) “We did extensive previs with Digital Domain,” Rahhali explains. “That was important because we knew the key shots that the directors wanted. With a combination of those key shots, we then kind of reverse-engineered [them] while we did techvis off the previs and worked with Christian and the art department so we would have proper flexibility with the set to be able to pull off some of these shots. [For example,] some of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paul [Max Lloyd-Jones] as he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.” Some shots required the Skyview’s ceiling to be lifted and partially removed to get a crane to shoot Paul Campbell (Max Lloyd-Jones) as he’s about to fall. The character Iris lived in a fortified house, isolating herself methodically to avoid the Grim Reaper. 
Rahhali comments, “That was a beautiful location [in] GVRD [Greater Vancouver], very cold. It was a long, hard shoot, because it was all nights. It was just this beautiful pocket out in the middle of the mountains. We in visual effects didn’t do a ton other than a couple of clean-ups of the big establishing shots when you see them pull up to the compound. We had to clean up small roads we wanted to make look like one road and make the road look like dirt.” There were flames involved. Sebaldt says, “The explosion [of Iris’s home] was unbelievably big. We had eight cameras on it at night and shot it at high speed, and we’re all going ‘Whoa.’” Rahhali notes, “There was some clean-up, but the explosion was 100% practical. Our Special Effects Supervisor, Tony, went to town on that. He blew up the whole house, and it looked spectacular.” The tattoo shop piercing scene is one of the most talked-about sequences in the movie, where a dangling chain from a ceiling fan attaches itself to the septum nose piercing of Erik Campbell (Richard Harmon) and drags him toward a raging fire. Rahhali observes, “That was very Final Destination and a great Rube Goldberg build-up event. Richard was great. He was tied up on a stunt line for most of it, balancing on top of furniture. All of that was him doing it for real with a stunt line.” Some effects solutions can be surprisingly simple. Rahhali continues, “Our producer [Craig Perry] came up with a great gag [for the] septum ring.” Richard’s nose was connected with just a nose plug that went inside his nostrils. “All that tugging and everything that you’re seeing was real. For weeks and weeks, we were all trying to figure out how to do it without it being a big visual effects thing. 
‘How are we gonna pull his nose for real?’ Craig said, ‘I have these things I use to help me open up my nose and you can’t really see them.’ They built it off of that, and it looked great.” Filmmakers spent weeks figuring out how to execute the harrowing tattoo shop scene. A dangling chain from a ceiling fan attaches itself to the septum nose ring of Erik Campbell (Richard Harmon) – with the actor’s nose being tugged by the chain connected to a nose plug that went inside his nostrils. “[S]ome of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paul [Campbell] as he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.” —Nordin Rahhali, VFX Supervisor Most of the fire in the tattoo parlor was practical. “There are some fire bars and stuff that you’re seeing in there from SFX and the big pool of fire on the wide shots.” Sebaldt adds, “That was a lot of fun to shoot because it’s so insane when he’s dancing and balancing on all this stuff – we were laughing and laughing. We were convinced that this was going to be the best scene in the movie up to that moment.” Rahhali says, “They used the scene wholesale for the trailer. It went viral – people were taking out their septum rings.” Erik survives the parlor blaze only to meet his fate in a hospital when he is pulled by a wheelchair into an out-of-control MRI machine at its highest magnetic level. Rahhali comments, “That is a good combination of a bunch of different departments. Our Stunt Coordinator, Simon Burnett, came up with this hard pull-wire line [for] when Erik flies and hits the MRI. 
That’s a real stunt with a double, and he hit hard. All the other shots are all CG wheelchairs because the directors wanted to art-direct how the crumpling metal was snapping and bending to show pressure on him as his body starts going into the MRI.” To augment the believability that comes with reality, the directors aimed to capture as much practically as possible, then VFX Supervisor Nordin Rahhali and his team built on that result. (Photo: Eric Milner) A train derailment concludes the film after Stefani and her brother, Charlie, realize they are still on death’s list. A train goes off the tracks, and logs from one of the cars fly through the air and kill them. “That one was special because it’s a hard sequence and was also shot quite late, so we didn’t have a lot of time. We went back to Vancouver and shot the actual street, and we shot our actors performing. They fell onto stunt pads, and the moment they get touched by the logs, it turns into CG as it was the only way to pull that off and the train of course. We had to add all that. The destruction of the houses and everything was done in visual effects.” Erik survives the tattoo parlor blaze only to meet his fate in a hospital when he is crushed by a wheelchair while being pulled into an out-of-control MRI machine. Erik (Richard Harmon) appears about to be run over by a delivery truck at the corner of 21A Ave. and 132A St., but he’s not – at least not then. The truck is actually on the opposite side of the road, and the person being run over is Howard. A rolling penny plays a major part in the catastrophic chain reactions and seems to be a character itself. “The magic penny was a mix from two vendors, Pixomondo and FOLKS; both had penny shots,” Rahhali says. “All the bouncing pennies you see going through the vents and hitting the fan blade are all FOLKS. The bouncing penny at the end as a lady takes it out of her purse, that goes down the ramp and into the rail – that’s FOLKS. 
The big explosion shots in the Skyview with the penny slowing down after the kid throws it [off the deck] are all Pixomondo shots. It was a mix. We took a little time to find that balance between readability and believability.” Approximately 800 VFX shots were required for Final Destination Bloodlines. (Photo: Eric Milner) Chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated Death at some point in the Final Destination films. From left: Kaitlyn Santa Juana as Stefani Reyes, director Adam Stein, director Zach Lipovsky and Gabrielle Rose as Iris. (Photo: Eric Milner) Rahhali adds, “The film is a great collaboration of departments. Good visual effects are always a good combination of special effects, makeup effects and cinematography; it’s all the planning of all the pieces coming together. For a film of this size, I’m really proud of the work. I think we punched above our weight class, and it looks quite good.”
  • NASA orbiter saw something astonishing peek through Martian clouds

    NASA's Mars Odyssey orbiter captured the first horizon view of Arsia Mons, an enormous volcano on the Red Planet.
    Credit: NASA / JPL-Caltech / ASU

    NASA’s longest-running Mars mission has sent back an unprecedented side view of a massive volcano rising above the Red Planet, just before dawn. On May 2, as sunlight crept over the Martian horizon, the Odyssey spacecraft captured Arsia Mons, a towering, long-extinct volcano, puncturing a glowing band of greenish haze in the planet’s upper atmosphere. The 12-mile-high volcano — nearly twice the height of Mauna Loa in Hawaii — punctures a veil of fog, emerging like a monument to the planet's ancient past. The space snapshot is both visually arresting and scientifically enlightening. "We picked Arsia Mons hoping we would see the summit poke above the early morning clouds," said Jonathon Hill, who leads Odyssey's camera operations at Arizona State University, in a statement, "and it didn't disappoint."

    Arsia Mons sits at the southern end of a towering trio of volcanoes called the Tharsis Montes.
    Credit: NASA / JPL-Caltech

    To get this view, Odyssey had to do something it wasn’t originally built for. The orbiter, which has been flying around Mars since 2001, usually points its camera straight down to map the planet’s surface. But over the past two years, scientists have begun rotating the spacecraft 90 degrees to look toward the horizon. That adjustment allows NASA to study how dust and ice clouds change over the seasons.

    Though the image is still an aerial view, the vantage point is of the horizon, similar to how astronauts can see Earth's horizon 250 miles above the planet on the International Space Station. From that altitude, Earth doesn’t fill their entire view — there’s enough distance and perspective for them to see the planet's curved edge meeting the blackness of space. Odyssey flies above Mars at about the same altitude. Arsia Mons sits at the southern end of a towering trio of volcanoes called the Tharsis Montes. The Tharsis region is home to the largest volcanoes in the solar system. The lack of plate tectonics on the Red Planet allowed them to grow many times larger than those anywhere on Earth. Together, they dominate the Martian landscape and are sometimes covered in clouds, especially in the early hours. But not just any clouds — these are made of water ice, a different breed than the planet’s more common carbon dioxide clouds. Arsia Mons is the cloudiest of the three.

    Scientists have recently studied a particular, localized cloud formation that occurs over the mountain, dubbed the Arsia Mons Elongated Cloud. The transient feature, streaking 1,100 miles over southern Mars, lasts only about three hours in the morning during spring before vanishing in the warm sunlight. It's formed by strong winds being forced up the mountainside.  

    The cloudy canopy on display in Odyssey's new image, according to NASA, is called the aphelion cloud belt. This widespread seasonal system drapes across the planet's equator when Mars is farthest from the sun. This is Odyssey's fourth side image since 2023, and it is the first to show a volcano breaking through the clouds. "We're seeing some really significant seasonal differences in these horizon images," said Michael D. Smith, a NASA planetary scientist, in a statement. "It’s giving us new clues to how Mars' atmosphere evolves over time."


    Elisha Sauers

    Elisha Sauers writes about space for Mashable, taking deep dives into NASA's moon and Mars missions, chatting up astronauts and history-making discoverers, and jetting above the clouds. Through 17 years of reporting, she's covered a variety of topics, including health, business, and government, with a penchant for public records requests. She previously worked for The Virginian-Pilot in Norfolk, Virginia, and The Capital in Annapolis, Maryland. Her work has earned numerous state awards, including the Virginia Press Association's top honor, Best in Show, and national recognition for narrative storytelling. For each year she has covered space, Sauers has won National Headliner Awards, including first place for her Sex in Space series. Send space tips and story ideas to [email protected] or text 443-684-2489. Follow her on X at @elishasauers.
    MASHABLE.COM
    NASA orbiter saw something astonishing peek through Martian clouds
    NASA's Mars Odyssey orbiter captured the first horizon view of Arsia Mons, an enormous volcano on the Red Planet. Credit: NASA / JPL-Caltech / ASU NASA’s longest-running Mars mission has sent back an unprecedented side view of a massive volcano rising above the Red Planet, just before dawn.On May 2, as sunlight crept over the Martian horizon, the Odyssey spacecraft captured Arsia Mons, a towering, long-extinct volcano, puncturing a glowing band of greenish haze in the planet’s upper atmosphere. The 12-mile-high volcano — nearly twice the height of Mauna Loa in Hawaii — punctures a veil of fog, emerging like a monument to the planet's ancient past. The space snapshot is both visually arresting and scientifically enlightening."We picked Arsia Mons hoping we would see the summit poke above the early morning clouds," said Jonathon Hill, who leads Odyssey's camera operations at Arizona State University, in a statement, "and it didn't disappoint."   Arsia Mons sits at the southern end of a towering trio of volcanoes called the Tharsis Montes. Credit: NASA / JPL-Caltech To get this view, Odyssey had to do something it wasn’t originally built for. The orbiter, which has been flying around Mars since 2001, usually points its camera straight down to map the planet’s surface. But over the past two years, scientists have begun rotating the spacecraft 90 degrees to look toward the horizon. That adjustment allows NASA to study how dust and ice clouds change over the seasons. Mashable Light Speed Want more out-of-this world tech, space and science stories? Sign up for Mashable's weekly Light Speed newsletter. By clicking Sign Me Up, you confirm you are 16+ and agree to our Terms of Use and Privacy Policy. Thanks for signing up! Though the image is still an aerial view, the vantage point is of the horizon, similar to how astronauts can see Earth's horizon 250 miles above the planet on the International Space Station. 
From that altitude, Earth doesn’t fill their entire view — there’s enough distance and perspective for them to see the planet's curved edge meeting the blackness of space. Odyssey flies above Mars at about the same altitude. Arsia Mons sits at the southern end of a towering trio of volcanoes called the Tharsis Montes. The Tharsis region is home to the largest volcanoes in the solar system; the lack of plate tectonics on the Red Planet allowed them to grow many times larger than those anywhere on Earth. Together, they dominate the Martian landscape and are sometimes covered in clouds, especially in the early hours. But not just any clouds — these are made of water ice, a different breed from the planet’s more common carbon dioxide clouds. Arsia Mons is the cloudiest of the three. Scientists have recently studied a particular, localized cloud formation that occurs over the mountain, dubbed the Arsia Mons Elongated Cloud. The transient feature, streaking 1,100 miles over southern Mars, lasts only about three hours in the morning during spring before vanishing in the warm sunlight. It's formed by strong winds being forced up the mountainside. The cloudy canopy on display in Odyssey's new image, according to NASA, is called the aphelion cloud belt. This widespread seasonal system drapes across the planet's equator when Mars is farthest from the sun. This is Odyssey's fourth side image since 2023, and it is the first to show a volcano breaking through the clouds. "We're seeing some really significant seasonal differences in these horizon images," said Michael D. Smith, a NASA planetary scientist, in a statement. "It’s giving us new clues to how Mars' atmosphere evolves over time." By Elisha Sauers
  • Proposed Federal Budget Would Devastate U.S. Space Science

    June 3, 2025 · 8 min read

    White House Budget Plan Would Devastate U.S. Space Science

    Scientists are rallying to reverse ruinous proposed cuts to both NASA and the National Science Foundation

    By Nadia Drake, edited by Lee Billings

    Fog shrouds the iconic Vehicle Assembly Building at NASA’s Kennedy Space Center in Florida in this photograph from February 25, 2025. Gregg Newton/AFP via Getty

    Late last week the Trump administration released its detailed budget request for fiscal year 2026 — a request that, if enacted, would be the equivalent of carpet-bombing the national scientific enterprise. “This is a profound, generational threat to scientific leadership in the United States,” says Casey Dreier, chief of space policy at the Planetary Society, a science advocacy group. “If implemented, it would fundamentally undermine and potentially devastate the most unique capabilities that the U.S. has built up over a half-century.” The Trump administration’s proposal, which still needs to be approved by Congress, is sure to ignite fierce resistance from scientists and senators alike. Among other agencies, the budget deals staggering blows to NASA and the National Science Foundation (NSF), which together fund the majority of U.S. research in astronomy, astrophysics, planetary science, heliophysics and Earth science — all space-related sciences that have typically mustered hearty bipartisan support. The NSF supports ground-based astronomy, including such facilities as the Nobel Prize–winning gravitational-wave detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO), globe-spanning arrays of radio telescopes, and cutting-edge observatories that stretch from Hawaii to the South Pole. 
The agency faces a lethal 57 percent reduction to its $9-billion budget, with deep cuts to every program except those in President Trump’s priority areas, which include artificial intelligence and quantum information science. NASA, which funds space-based observatories, faces a 25 percent reduction, dropping the agency’s $24.9-billion budget to $18.8 billion. The proposal beefs up efforts to send humans to the moon and to Mars, but the agency’s Science Mission Directorate — home to Mars rovers, the Voyager interstellar probes, the James Webb Space Telescope (JWST), the Hubble Space Telescope, and much more — is looking at a nearly 50 percent reduction, with dozens of missions canceled, turned off or operating on a starvation diet. “It’s an end-game scenario for science at NASA,” says Joel Parriott, director of external affairs and public policy at the American Astronomical Society. “It’s not just the facilities. You’re punching a generation-size hole, maybe a multigenerational hole, in the scientific and technical workforce. You don’t just Cryovac these people and pull them out when the money comes back. People are going to move on.” Adding to the chaos, on Saturday President Trump announced that billionaire entrepreneur and private astronaut Jared Isaacman was no longer his pick for NASA administrator — just days before the Senate was set to confirm Isaacman’s nomination. Initial reports — which have now been disputed — explained the president’s decision as stemming from his discovery that Isaacman recently donated money to Democratic candidates. Regardless of the true reason, the decision leaves both NASA and the NSF, whose director abruptly resigned in April, with placeholder “acting” leaders at the top. That leadership vacuum significantly weakens the agencies’ ability to fight the proposed budget cuts and advocate for themselves. 
“What’s more inefficient than a rudderless agency without an empowered leadership?” Dreier asks.

Actions versus Words

During his second administration, President Trump has repeatedly celebrated U.S. leadership in space. When he nominated Isaacman last December, Trump noted “NASA’s mission of discovery and inspiration” and looked to a future of “groundbreaking achievements in space science, technology and exploration.” More recently, while celebrating Hubble’s 35th anniversary in April, Trump called the telescope “a symbol of America’s unmatched exploratory might” and declared that NASA would “continue to lead the way in fueling the pursuit of space discovery and exploration.” The administration’s budgetary actions speak louder than Trump’s words, however. Instead of ushering in a new golden age of space exploration — or even setting up the U.S. to stay atop the podium — the president’s budget “narrows down what the cosmos is to moon and Mars and pretty much nothing else,” Dreier says. “And the cosmos is a lot bigger, and there’s a lot more to learn out there.” Dreier notes that when corrected for inflation, the overall NASA budget would be the lowest it’s been since 1961. But in April of that year, the Soviet Union launched the first human into orbit, igniting a space race that swelled NASA’s budget and led to the Apollo program putting American astronauts on the moon. Today China’s rapid progress and enormous ambitions in space would make the moment ripe for a 21st-century version of this competition, with the U.S. generously funding its own efforts to maintain pole position. Instead the White House’s budget would do the exact opposite. “The seesaw is sort of unbalanced,” says Tony Beasley, director of the NSF-funded National Radio Astronomy Observatory (NRAO). 
“On the one side, we’re saying, ‘Well, China’s kicking our ass, and we need to do something about that.’ But then we’re not going to give any money to anything that might actually do that.” How NASA will achieve a crewed return to the moon and send astronauts to Mars — goals that the agency now considers part of “winning the second space race” — while also maintaining its leadership in science is unclear. “This is Russ Vought’s budget,” Dreier says, referring to the director of the White House’s Office of Management and Budget, an unelected bureaucrat who has become notorious for his efforts to reshape the U.S. government by weaponizing federal funding. “This isn’t even Trump’s budget. Trump’s budget would be good for space. This one undermines the president’s own claims and ambitions when it comes to space.”

“Low Expectations” at the High Frontier

Rumors began swirling about the demise of NASA science in April, when a leaked OMB document described some of the proposed cuts and cancellations. Those included both the beleaguered, bloated Mars Sample Return (MSR) program and the on-time, on-budget Nancy Grace Roman Space Telescope, the next astrophysics flagship mission. The top-line numbers in the more fleshed-out proposal are consistent with that document, and MSR would still be canceled. But Roman would be granted a stay of execution: rather than being zeroed out, it would be put on life support. “It’s a reprieve from outright termination, but it’s still a cut for functionally no reason,” Dreier says. “In some ways, it’s slightly better than I was expecting. But I had very low expectations.” In the proposal, many of the deepest cuts would be made to NASA science, which would sink from roughly $7.3 billion to $3.9 billion. Earth science missions focused on carbon monitoring and climate change, as well as programs aimed at education and workforce diversity, would be effectively erased by the cuts. 
But a slew of high-profile planetary science projects would suffer, too, with cancellations proposed for two future Venus missions, the Juno mission that is currently surveilling Jupiter, the New Horizons mission that flew by Pluto and two Mars orbiters. NASA’s international partnerships in planetary science fare poorly, too, as the budget rescinds the agency’s involvement with multiple European-led projects, including a Venus mission and a Mars rover. The proposal is even worse for NASA astrophysics — the study of our cosmic home — which “really takes it to the chin,” Dreier says, with a drop of roughly a billion dollars, leaving just a few hundred million. In the president’s proposal, only three big astrophysics missions would survive: the soon-to-launch Roman and the already-operational Hubble and JWST. The rest of NASA’s active astrophysics missions, which include the Chandra X-ray Observatory, the Fermi Gamma-Ray Space Telescope and the Transiting Exoplanet Survey Satellite, would be severely pared back or zeroed out. Additionally, the budget would nix NASA’s contributions to large European missions, such as a future space-based gravitational-wave observatory. “This is the most powerful fleet of missions in the history of the study of astrophysics from space,” says John O’Meara, chief scientist at the W. M. Keck Observatory in Hawaii and co-chair of a recent senior review panel that evaluated NASA’s astrophysics missions. The report found that each reviewed mission “continues to be capable of producing important, impactful science.” This fleet, O’Meara adds, is more than the sum of its parts, with much of its power emerging from synergies among multiple telescopes that study the cosmos in many different types, or wavelengths, of light. By hollowing out NASA’s science to ruthlessly focus on crewed missions, the White House budget might be charitably viewed as seeking to rekindle a heroic age of spaceflight — with China’s burgeoning space program as the new archrival. 
But even for these supposedly high-priority initiatives, the proposed funding levels appear too anemic to give the U.S. any competitive edge. For example, the budget directs about $1 billion to new technology investments to support crewed Mars missions, while conservative estimates have projected that such voyages would cost hundreds of billions of dollars more. “It cedes U.S. leadership in space science at a time when other nations, particularly China, are increasing their ambitions,” Dreier says. “It completely flies in the face of the president’s own stated goals for American leadership in space.”

Undermining the Foundation

The NSF’s situation, which one senior space scientist predicted would be “diabolical” when the NASA numbers leaked back in April, is also unsurprisingly dire. Unlike NASA, which is focused on space science and exploration, the NSF’s programs span the sweep of scientific disciplines, meaning that even small, isolated cuts — let alone the enormous ones that the budget has proposed — can have shockingly large effects on certain research domains. “Across the different parts of the NSF, the programs that are upvoted are the president’s strategic initiatives, but then everything else gets hit,” Beasley says. Several large-scale NSF-funded projects would escape more or less intact. Among these are the panoramic Vera C. Rubin Observatory, scheduled to unveil its first science images later this month, and the Atacama Large Millimeter/submillimeter Array (ALMA) radio telescope. The budget also moves the Giant Magellan Telescope, which would boast starlight-gathering mirrors totaling more than 25 meters across, into a final design phase. All three of those facilities take advantage of Chile’s pristine dark skies. 
Other large NSF-funded projects that would survive include the proposed Next Generation Very Large Array of radio telescopes in New Mexico and several facilities at the South Pole, such as the IceCube Neutrino Observatory. If this budget is enacted, however, NSF officials anticipate funding only a measly 7 percent of research proposals overall rather than 25 percent; the number of graduate research fellowships awarded would be cleaved in half, and postdoctoral fellowships in the physical sciences would drop to zero. NRAO’s Green Bank Observatory — home to the largest steerable single-dish radio telescope on the planet — would likely shut down. So would other, smaller observatories in Arizona and Chile. The Thirty Meter Telescope, a humongous, perennially embattled project with no clear site selection, would be canceled. And the budget proposes closing one of the two gravitational-wave detectors used by the LIGO collaboration — whose observations of colliding black holes earned the 2017 Nobel Prize in Physics — even though both detectors need to be online for LIGO’s experiment to work. Even factoring in other operational detectors, such as Virgo in Europe and the Kamioka Gravitational Wave Detector (KAGRA) in Japan, shutting down half of LIGO would leave a gaping blind spot in humanity’s gravitational-wave view of the heavens. “The consequences of this budget are that key scientific priorities, on the ground and in space, will take at least a decade longer — or not be realized at all,” O’Meara says. “The universe is telling its story at all wavelengths. It doesn’t care what you build, but if you want to hear that story, you must build many things.” Dreier, Parriott and others are anticipating fierce battles on Capitol Hill. And already both Democratic and Republican legislators have issued statements signaling that they won’t support the budget request as is. 
“This sick joke of a budget is a nonstarter,” said Representative Zoe Lofgren of California, ranking member of the House Committee on Science, Space, and Technology, in a recent statement. And in an earlier statement, Senator Susan Collins of Maine, chair of the powerful Senate Committee on Appropriations, cautioned that “the President’s Budget Request is simply one step in the annual budget process.”The Trump administration has “thrown a huge punch here, and there will be a certain back-reaction, and we’ll end up in the middle somewhere,” Beasley says. “The mistake you can make right now is to assume that this represents finalized decisions and the future—because it doesn’t.”
    WWW.SCIENTIFICAMERICAN.COM
    Proposed Federal Budget Would Devastate U.S. Space Science
    June 3, 20258 min readWhite House Budget Plan Would Devastate U.S. Space ScienceScientists are rallying to reverse ruinous proposed cuts to both NASA and the National Science FoundationBy Nadia Drake edited by Lee BillingsFog shrouds the iconic Vehicle Assembly Building at NASA’s Kennedy Space Center in Florida in this photograph from February 25, 2025. Gregg Newton/AFP via GettyLate last week the Trump Administration released its detailed budget request for fiscal year 2026 —a request that, if enacted, would be the equivalent of carpet-bombing the national scientific enterprise.“This is a profound, generational threat to scientific leadership in the United States,” says Casey Dreier, chief of space policy at the Planetary Society, a science advocacy group. “If implemented, it would fundamentally undermine and potentially devastate the most unique capabilities that the U.S. has built up over a half-century.”The Trump administration’s proposal, which still needs to be approved by Congress, is sure to ignite fierce resistance from scientists and senators alike. Among other agencies, the budget deals staggering blows to NASA and the National Science Foundation (NSF), which together fund the majority of U.S. research in astronomy, astrophysics, planetary science, heliophysics and Earth science —all space-related sciences that have typically mustered hearty bipartisan support.On supporting science journalismIf you're enjoying this article, consider supporting our award-winning journalism by subscribing. By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.The NSF supports ground-based astronomy, including such facilities as the Nobel Prize–winning gravitational-wave detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO), globe-spanning arrays of radio telescopes, and cutting-edge observatories that stretch from Hawaii to the South Pole. 
The agency faces a lethal 57 percent reduction to its $9-billion budget, with deep cuts to every program except those in President Trump’s priority areas, which include artificial intelligence and quantum information science. NASA, which funds space-based observatories, faces a 25 percent reduction, dropping the agency’s $24.9-billion budget to $18.8 billion. The proposal beefs up efforts to send humans to the moon and to Mars, but the agency’s Science Mission Directorate — home to Mars rovers, the Voyager interstellar probes, the James Webb Space Telescope (JWST), the Hubble Space Telescope, and much more — is looking at a nearly 50 percent reduction, with dozens of missions canceled, turned off or operating on a starvation diet.

    “It’s an end-game scenario for science at NASA,” says Joel Parriott, director of external affairs and public policy at the American Astronomical Society. “It’s not just the facilities. You’re punching a generation-size hole, maybe a multigenerational hole, in the scientific and technical workforce. You don’t just Cryovac these people and pull them out when the money comes back. People are going to move on.”

    Adding to the chaos, on Saturday President Trump announced that billionaire entrepreneur and private astronaut Jared Isaacman was no longer his pick for NASA administrator — just days before the Senate was set to confirm Isaacman’s nomination. Initial reports — which have now been disputed — explained the president’s decision as stemming from his discovery that Isaacman recently donated money to Democratic candidates. Regardless of the true reason, the decision leaves both NASA and the NSF, whose director abruptly resigned in April, with respective placeholder “acting” leaders at the top. That leadership vacuum significantly weakens the agencies’ ability to fight the proposed budget cuts and advocate for themselves.
“What’s more inefficient than a rudderless agency without an empowered leadership?” Dreier asks.

    Actions versus Words

    During his second administration, President Trump has repeatedly celebrated U.S. leadership in space. When he nominated Isaacman last December, Trump noted “NASA’s mission of discovery and inspiration” and looked to a future of “groundbreaking achievements in space science, technology and exploration.” More recently, while celebrating Hubble’s 35th anniversary in April, Trump called the telescope “a symbol of America’s unmatched exploratory might” and declared that NASA would “continue to lead the way in fueling the pursuit of space discovery and exploration.”

    The administration’s budgetary actions speak louder than Trump’s words, however. Instead of ushering in a new golden age of space exploration — or even setting up the U.S. to stay atop the podium — the president’s budget “narrows down what the cosmos is to moon and Mars and pretty much nothing else,” Dreier says. “And the cosmos is a lot bigger, and there’s a lot more to learn out there.”

    Dreier notes that when corrected for inflation, the overall NASA budget would be the lowest it’s been since 1961. But in April of that year, the Soviet Union launched the first human into orbit, igniting a space race that swelled NASA’s budget and led to the Apollo program putting American astronauts on the moon. Today China’s rapid progress and enormous ambitions in space would make the moment ripe for a 21st-century version of this competition, with the U.S. generously funding its own efforts to maintain pole position. Instead the White House’s budget would do the exact opposite.

    “The seesaw is sort of unbalanced,” says Tony Beasley, director of the NSF-funded National Radio Astronomy Observatory (NRAO).
“On the one side, we’re saying, ‘Well, China’s kicking our ass, and we need to do something about that.’ But then we’re not going to give any money to anything that might actually do that.”

    How NASA will achieve a crewed return to the moon and send astronauts to Mars — goals that the agency now considers part of “winning the second space race” — while also maintaining its leadership in science is unclear.

    “This is Russ Vought’s budget,” Dreier says, referring to the director of the White House’s Office of Management and Budget (OMB), an unelected bureaucrat who has been notorious for his efforts to reshape the U.S. government by weaponizing federal funding. “This isn’t even Trump’s budget. Trump’s budget would be good for space. This one undermines the president’s own claims and ambitions when it comes to space.”

    “Low Expectations” at the High Frontier

    Rumors began swirling about the demise of NASA science in April, when a leaked OMB document described some of the proposed cuts and cancellations. Those included both the beleaguered, bloated Mars Sample Return (MSR) program and the on-time, on-budget Nancy Grace Roman Space Telescope, the next astrophysics flagship mission.

    The top-line numbers in the more fleshed-out proposal are consistent with that document, and MSR would still be canceled. But Roman would be granted a stay of execution: rather than being zeroed out, it would be put on life support.

    “It’s a reprieve from outright termination, but it’s still a cut for functionally no reason,” Dreier says. “In some ways, [the budget] is slightly better than I was expecting. But I had very low expectations.”

    In the proposal, many of the deepest cuts would be made to NASA science, which would sink from $7.3 billion to $3.9 billion. Earth science missions focused on carbon monitoring and climate change, as well as programs aimed at education and workforce diversity, would be effectively erased by the cuts.
But a slew of high-profile planetary science projects would suffer, too, with cancellations proposed for two future Venus missions, the Juno mission that is currently surveilling Jupiter, the New Horizons mission that flew by Pluto and two Mars orbiters. (The Dragonfly mission to Saturn’s moon Titan would survive, as would the flagship Europa Clipper spacecraft, which launched last October.) NASA’s international partnerships in planetary science fare poorly, too, as the budget rescinds the agency’s involvement with multiple European-led projects, including a Venus mission and a Mars rover.

    The proposal is even worse for NASA astrophysics — the study of our cosmic home — which “really takes it to the chin,” Dreier says, with a roughly $1-billion drop to just $523 million. In the president’s proposal, only three big astrophysics missions would survive: the soon-to-launch Roman and the already-operational Hubble and JWST. The rest of NASA’s active astrophysics missions, which include the Chandra X-ray Observatory, the Fermi Gamma-Ray Space Telescope and the Transiting Exoplanet Survey Satellite (TESS), would be severely pared back or zeroed out. Additionally, the budget would nix NASA’s contributions to large European missions, such as a future space-based gravitational-wave observatory.

    “This is the most powerful fleet of missions in the history of the study of astrophysics from space,” says John O’Meara, chief scientist at the W. M. Keck Observatory in Hawaii and co-chair of a recent senior review panel that evaluated NASA’s astrophysics missions.
The report found that each reviewed mission “continues to be capable of producing important, impactful science.” This fleet, O’Meara adds, is more than the sum of its parts, with much of its power emerging from synergies among multiple telescopes that study the cosmos in many different types, or wavelengths, of light.

    By hollowing out NASA’s science to ruthlessly focus on crewed missions, the White House budget might be charitably viewed as seeking to rekindle a heroic age of spaceflight — with China’s burgeoning space program as the new archrival. But even for these supposedly high-priority initiatives, the proposed funding levels appear too anemic to give the U.S. any competitive edge. For example, the budget directs about $1 billion to new technology investments to support crewed Mars missions, while conservative estimates have projected that such voyages would cost hundreds of billions of dollars more.

    “It cedes U.S. leadership in space science at a time when other nations, particularly China, are increasing their ambitions,” Dreier says. “It completely flies in the face of the president’s own stated goals for American leadership in space.”

    Undermining the Foundation

    The NSF’s situation, which one senior space scientist predicted would be “diabolical” when the NASA numbers leaked back in April, is also unsurprisingly dire. Unlike NASA, which is focused on space science and exploration, the NSF’s programs span the sweep of scientific disciplines, meaning that even small, isolated cuts — let alone the enormous ones that the budget has proposed — can have shockingly large effects on certain research domains.

    “Across the different parts of the NSF, the programs that are upvoted are the president’s strategic initiatives, but then everything else gets hit,” Beasley says.

    Several large-scale NSF-funded projects would escape more or less intact. Among these are the panoramic Vera C.
Rubin Observatory, scheduled to unveil its first science images later this month, and the Atacama Large Millimeter/submillimeter Array (ALMA) radio telescope. The budget also moves the Giant Magellan Telescope, which would boast starlight-gathering mirrors totaling more than 25 meters across, into a final design phase. All three of those facilities take advantage of Chile’s pristine dark skies. Other large NSF-funded projects that would survive include the proposed Next Generation Very Large Array of radio telescopes in New Mexico and several facilities at the South Pole, such as the IceCube Neutrino Observatory.

    If this budget is enacted, however, NSF officials anticipate funding only a measly 7 percent of research proposals overall rather than 25 percent; the number of graduate research fellowships awarded would be cleaved in half, and postdoctoral fellowships in the physical sciences would drop to zero. NRAO’s Green Bank Observatory — home to the largest steerable single-dish radio telescope on the planet — would likely shut down. So would other, smaller observatories in Arizona and Chile. The Thirty Meter Telescope, a humongous, perennially embattled project with no clear site selection, would be canceled. And the budget proposes closing one of the two gravitational-wave detectors used by the LIGO collaboration — whose observations of colliding black holes earned the 2017 Nobel Prize in Physics — even though both detectors need to be online for LIGO’s experiment to work. Even factoring in other operational detectors, such as Virgo in Europe and the Kamioka Gravitational Wave Detector (KAGRA) in Japan, shutting down half of LIGO would leave a gaping blind spot in humanity’s gravitational-wave view of the heavens.

    “The consequences of this budget are that key scientific priorities, on the ground and in space, will take at least a decade longer — or not be realized at all,” O’Meara says. “The universe is telling its story at all wavelengths.
It doesn’t care what you build, but if you want to hear that story, you must build many things.”

    Dreier, Parriott and others are anticipating fierce battles on Capitol Hill. Already, both Democratic and Republican legislators have issued statements signaling that they won’t support the budget request as is. “This sick joke of a budget is a nonstarter,” said Representative Zoe Lofgren of California, ranking member of the House Committee on Science, Space, and Technology, in a recent statement. And in an earlier statement, Senator Susan Collins of Maine, chair of the powerful Senate Committee on Appropriations, cautioned that “the President’s Budget Request is simply one step in the annual budget process.”

    The Trump administration has “thrown a huge punch here, and there will be a certain back-reaction, and we’ll end up in the middle somewhere,” Beasley says. “The mistake you can make right now is to assume that this represents finalized decisions and the future — because it doesn’t.”
  • Smashing Animations Part 4: Optimising SVGs

    SVG animations take me back to the Hanna-Barbera cartoons I watched as a kid. Shows like Wacky Races, The Perils of Penelope Pitstop, and, of course, Yogi Bear. They inspired me to lovingly recreate some classic Toon Titles using CSS, SVG, and SMIL animations.
    But getting animations to load quickly and work smoothly needs more than nostalgia. It takes clean design, lean code, and a process that makes complex SVGs easier to animate. Here’s how I do it.

    Start Clean And Design With Optimisation In Mind
    Keeping things simple is key to making SVGs that are optimised and ready to animate. Tools like Adobe Illustrator convert bitmap images to vectors, but the output often contains too many extraneous groups, layers, and masks. Instead, I start cleaning in Sketch, work from a reference image, and use the Pen tool to create paths.
    Tip: Affinity Designer (UK) and Sketch (Netherlands) are alternatives to Adobe Illustrator and Figma. Both are independent and based in Europe. Sketch has been my default design app since Adobe killed Fireworks.

    Beginning With Outlines
    For these Toon Titles illustrations, I first use the Pen tool to draw black outlines with as few anchor points as possible. The more points a shape has, the bigger a file becomes, so simplifying paths and reducing the number of points makes an SVG much smaller, often with no discernible visual difference.
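    Point reduction can also be done programmatically. The sketch below is a plain Ramer–Douglas–Peucker reduction on a polyline — just an illustration of the principle that near-collinear points can be dropped without changing a shape's look, not what Sketch or Illustrator do internally:

    ```javascript
    // Ramer–Douglas–Peucker simplification: drop any point whose removal
    // moves the path by less than `epsilon`.
    function perpendicularDistance([px, py], [ax, ay], [bx, by]) {
      const dx = bx - ax;
      const dy = by - ay;
      const length = Math.hypot(dx, dy);
      if (length === 0) return Math.hypot(px - ax, py - ay);
      return Math.abs(dy * px - dx * py + bx * ay - by * ax) / length;
    }

    function simplify(points, epsilon) {
      if (points.length < 3) return points;
      const first = points[0];
      const last = points[points.length - 1];
      let maxDistance = 0;
      let index = 0;
      for (let i = 1; i < points.length - 1; i++) {
        const d = perpendicularDistance(points[i], first, last);
        if (d > maxDistance) {
          maxDistance = d;
          index = i;
        }
      }
      // Everything between the endpoints is close enough to a straight line.
      if (maxDistance <= epsilon) return [first, last];
      // Otherwise keep the farthest point and recurse on both halves.
      return simplify(points.slice(0, index + 1), epsilon)
        .slice(0, -1)
        .concat(simplify(points.slice(index), epsilon));
    }

    // A nearly straight three-point line collapses to its two endpoints:
    console.log(simplify([[0, 0], [1, 0.01], [2, 0]], 0.1));
    ```

    Optimisers such as SVGO apply reductions in this spirit to path data, which is where much of the byte saving comes from.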

    Bearing in mind that parts of this Yogi illustration will ultimately be animated, I keep outlines for this Bewitched Bear’s body, head, collar, and tie separate so that I can move them independently. The head might nod, the tie could flap, and, like in those classic cartoons, Yogi’s collar will hide the joins between them.

    Drawing Simple Background Shapes
    With the outlines in place, I use the Pen tool again to draw new shapes, which fill the areas with colour. These colours sit behind the outlines, so they don’t need to match them exactly. The fewer anchor points, the smaller the file size.

    Sadly, neither Affinity Designer nor Sketch has tools that can simplify paths, but if you have it, using Adobe Illustrator can shave a few extra kilobytes off these background shapes.

    Optimising The Code
    It’s not just metadata that makes an SVG bulky. The way you export from your design app also affects file size.

    Exporting just those simple background shapes from Adobe Illustrator includes unnecessary groups, masks, and bloated path data by default. Sketch’s code is barely any better, and there’s plenty of room for improvement, even in its SVGO Compressor code. I rely on Jake Archibald’s SVGOMG, which uses SVGO v3 and consistently delivers the best optimised SVGs.

    Layering SVG Elements
    My process for preparing SVGs for animation goes well beyond drawing vectors and optimising paths — it also includes how I structure the code itself. When every visual element is crammed into a single SVG file, even optimised code can be a nightmare to navigate. Locating a specific path or group often feels like searching for a needle in a haystack.

    That’s why I develop my SVGs in layers, exporting and optimising one set of elements at a time — always in the order they’ll appear in the final file. This lets me build the master SVG gradually by pasting in each cleaned-up section. For example, I start with backgrounds like this gradient and title graphic.

    Instead of facing a wall of SVG code, I can now easily identify the background gradient’s path and its associated linearGradient, and see the group containing the title graphic. I take this opportunity to add a comment to the code, which will make editing and adding animations to it easier in the future:
    <svg ...>
    <defs>
    <!-- ... -->
    </defs>
    <path fill="url(#grad)" d="…"/>
    <!-- TITLE GRAPHIC -->
    <g>
    <path … />
    <!-- ... -->
    </g>
    </svg>

    Next, I add the blurred trail from Yogi’s airborne broom. This includes defining a Gaussian Blur filter and placing its path between the background and title layers:
    <svg ...>
    <defs>
    <linearGradient id="grad" …>…</linearGradient>
    <filter id="trail" …>…</filter>
    </defs>
    <!-- GRADIENT -->
    <!-- TRAIL -->
    <path filter="url(#trail)" …/>
    <!-- TITLE GRAPHIC -->
    </svg>

    Then come the magical stars, added in the same sequential fashion:
    <svg ...>
    <!-- GRADIENT -->
    <!-- TRAIL -->
    <!-- STARS -->
    <!-- TITLE GRAPHIC -->
    </svg>

    To keep everything organised and animation-ready, I create an empty group that will hold all the parts of Yogi:
    <g id="yogi">...</g>

    Then I build Yogi from the ground up — starting with background props, like his broom:
    <g id="broom">...</g>

    Followed by grouped elements for his body, head, collar, and tie:
    <g id="yogi">
    <g id="broom">…</g>
    <g id="body">…</g>
    <g id="head">…</g>
    <g id="collar">…</g>
    <g id="tie">…</g>
    </g>

    Since I export each layer from the same-sized artboard, I don’t need to worry about alignment or positioning issues later on — they’ll all slot into place automatically. I keep my code clean, readable, and ordered logically by layering elements this way. It also makes animating smoother, as each component is easier to identify.
    Reusing Elements With <use>
    When duplicate shapes get reused repeatedly, SVG files can get bulky fast. My recreation of the “Bewitched Bear” title card contains 80 stars in three sizes. Combining all those shapes into one optimised path would bring the file size down to 3KB. But I want to animate individual stars, which would almost double that to 5KB:
    <g id="stars">
    <path class="star-small" fill="#eae3da" d="..."/>
    <path class="star-medium" fill="#eae3da" d="..."/>
    <path class="star-large" fill="#eae3da" d="..."/>
    <!-- ... -->
    </g>

    Moving the stars’ fill attribute values to their parent group reduces the overall weight a little:
    <g id="stars" fill="#eae3da">
    <path class="star-small" d="…"/>
    <path class="star-medium" d="…"/>
    <path class="star-large" d="…"/>
    <!-- ... -->
    </g>
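    A quick way to sanity-check a saving like this is to compare the byte counts of the two fragments. A sketch with placeholder path data (the real `d` values are far longer, so the real saving scales with the star count):

    ```javascript
    // Compare per-path fills against one fill hoisted to the parent group.
    // The d="" values are placeholders, not the real star geometry.
    const perPathFills = `<g id="stars">
    <path class="star-small" fill="#eae3da" d="M0 0L5 9"/>
    <path class="star-medium" fill="#eae3da" d="M0 0L7 12"/>
    <path class="star-large" fill="#eae3da" d="M0 0L9 15"/>
    </g>`;

    const hoistedFill = `<g id="stars" fill="#eae3da">
    <path class="star-small" d="M0 0L5 9"/>
    <path class="star-medium" d="M0 0L7 12"/>
    <path class="star-large" d="M0 0L9 15"/>
    </g>`;

    // Each moved attribute saves ` fill="#eae3da"` (15 bytes) per path,
    // at the cost of one copy on the group.
    console.log(perPathFills.length - hoistedFill.length); // 30 bytes saved here
    ```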

    But a more efficient and manageable option is to define each star size as a reusable template:

    <defs>
    <path id="star-large" fill="#eae3da" fill-rule="evenodd" d="…"/>
    <path id="star-medium" fill="#eae3da" fill-rule="evenodd" d="…"/>
    <path id="star-small" fill="#eae3da" fill-rule="evenodd" d="…"/>
    </defs>

    With this setup, changing a star’s design only means updating its template once, and every instance updates automatically. Then, I reference each one using <use> and position them with x and y attributes:
    <g id="stars">
    <!-- Large stars -->
    <use href="#star-large" x="1575" y="495"/>
    <!-- ... -->
    <!-- Medium stars -->
    <use href="#star-medium" x="1453" y="696"/>
    <!-- ... -->
    <!-- Small stars -->
    <use href="#star-small" x="1287" y="741"/>
    <!-- ... -->
    </g>

    This approach makes the SVG easier to manage, lighter to load, and faster to iterate on, especially when working with dozens of repeating elements. Best of all, it keeps the markup clean without compromising on flexibility or performance.
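    With that many near-identical lines, the `<use>` instances could even be generated from a coordinate list instead of written by hand. A sketch using the sample positions above (the full 80-star list is omitted):

    ```javascript
    // Build <use> markup from a list of star sizes and positions.
    // Only three sample stars are listed; the real card has 80.
    const stars = [
      { size: "large", x: 1575, y: 495 },
      { size: "medium", x: 1453, y: 696 },
      { size: "small", x: 1287, y: 741 },
    ];

    const starMarkup = stars
      .map(({ size, x, y }) => `<use href="#star-${size}" x="${x}" y="${y}"/>`)
      .join("\n");

    console.log(`<g id="stars">\n${starMarkup}\n</g>`);
    ```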
    Adding Animations
    The stars trailing behind Yogi’s stolen broom bring so much personality to the animation. I wanted them to sparkle in a seemingly random pattern against the dark blue background, so I started by defining a keyframe animation that cycles through different opacity levels:
    @keyframes sparkle {
    0%, 100% { opacity: .1; }
    50% { opacity: 1; }
    }

    Next, I applied this looping animation to every use element inside my stars group:
    #stars use {
    animation: sparkle 10s ease-in-out infinite;
    }

    The secret to creating a convincing twinkle lies in variation. I staggered animation delays and durations across the stars using nth-child selectors, starting with the quickest and most frequent sparkle effects:
    /* Fast, frequent */
    #stars use:nth-child(…):nth-child(…) {
    animation-delay: .1s;
    animation-duration: 2s;
    }

    From there, I layered in additional timings to mix things up. Some stars sparkle slowly and dramatically, others more randomly, with a variety of rhythms and pauses:
    /* Medium */
    #stars use:nth-child(…):nth-child(…) { ... }

    /* Slow, dramatic */
    #stars use:nth-child(…):nth-child(…) { ... }

    /* Random */
    #stars use:nth-child(…) { ... }

    /* Alternating */
    #stars use:nth-child(…) { ... }

    /* Scattered */
    #stars use:nth-child(…) { ... }

    By thoughtfully structuring the SVG and reusing elements, I can build complex-looking animations without bloated code, making even a simple effect like changing opacity sparkle.
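    Hand-picking every delay works for one title card, but the same kind of variation could be generated. A sketch with made-up timing formulas (the values in my final CSS were tuned by eye):

    ```javascript
    // Emit staggered nth-child rules so each group of stars twinkles on
    // its own rhythm. The 5n offsets and timing formulas are illustrative.
    const rules = [];
    for (let i = 1; i <= 5; i++) {
      const delay = (i * 0.7).toFixed(1); // stagger the starts
      const duration = (2 + i * 1.5).toFixed(1); // vary the speeds
      rules.push(
        `#stars use:nth-child(5n + ${i}) {\n` +
          `  animation-delay: ${delay}s;\n` +
          `  animation-duration: ${duration}s;\n` +
          `}`
      );
    }
    console.log(rules.join("\n\n"));
    ```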

    Then, for added realism, I make Yogi’s head wobble:

    @keyframes headWobble {
    0% { transform: rotate(…) translateY(…); }
    100% { transform: rotate(…) translateY(…); }
    }

    #head {
    animation: headWobble 0.8s cubic-bezier(…) infinite alternate;
    }

    His tie waves:

    @keyframes tieWave {
    0%, 100% { transform: rotateZ(…) rotateY(…) scaleX(…); }
    33% { transform: rotateZ(…) rotateY(…) scaleX(…); }
    66% { transform: rotateZ(…) rotateY(…) scaleX(…); }
    }

    #tie {
    transform-style: preserve-3d;
    animation: tieWave 10s cubic-bezier(…) infinite;
    }

    His broom swings:

    @keyframes broomSwing {
    0%, 20% { transform: rotate(…); }
    30% { transform: rotate(…); }
    50%, 70% { transform: rotate(…); }
    80% { transform: rotate(…); }
    100% { transform: rotate(…); }
    }

    #broom {
    animation: broomSwing 4s cubic-bezier(…) infinite;
    }

    And, finally, Yogi himself gently rotates as he flies on his magical broom:

    @keyframes yogiWobble {
    0% { transform: rotate(…) translateY(…) scale(…); }
    30% { transform: rotate(…) translateY(…); }
    100% { transform: rotate(…) translateY(…) scale(…); }
    }

    #yogi {
    animation: yogiWobble 3.5s cubic-bezier(…) infinite alternate;
    }

    All these subtle movements bring Yogi to life. By developing structured SVGs, I can create animations that feel full of character without writing a single line of JavaScript.
    Try this yourself:
    See the Pen Bewitched Bear CSS/SVG animation by Andy Clarke.
    Conclusion
    Whether you’re recreating a classic title card or animating icons for an interface, the principles are the same:

    Start clean,
    Optimise early, and
    Structure everything with animation in mind.

    SVGs offer incredible creative freedom, but only if kept lean and manageable. When you plan your process like a production cel — layer by layer, element by element — you’ll spend less time untangling code and more time bringing your work to life.
    SMASHINGMAGAZINE.COM
Sadly, neither Affinity Designer nor Sketch has tools that can simplify paths, but if you have it, using Adobe Illustrator can shave a few extra kilobytes off these background shapes. Optimising The Code It’s not just metadata that makes SVGs bulky. The way you export from your design app also affects file size. Exporting just those simple background shapes from Adobe Illustrator includes unnecessary groups, masks, and bloated path data by default. Sketch’s code is barely any better, and there’s plenty of room for improvement, even in its SVGO Compressor code. I rely on Jake Archibald’s SVGOMG, which uses SVGO v3 and consistently delivers the best optimised SVGs. Layering SVG Elements My process for preparing SVGs for animation goes well beyond drawing vectors and optimising paths — it also includes how I structure the code itself. When every visual element is crammed into a single SVG file, even optimised code can be a nightmare to navigate. Locating a specific path or group often feels like searching for a needle in a haystack. That’s why I develop my SVGs in layers, exporting and optimising one set of elements at a time — always in the order they’ll appear in the final file. This lets me build the master SVG gradually by pasting in each cleaned-up section. For example, I start with backgrounds like this gradient and title graphic. Instead of facing a wall of SVG code, I can now easily identify the background gradient’s path and its associated linearGradient, and see the group containing the title graphic. I take this opportunity to add a comment to the code, which will make editing and adding animations to it easier in the future: <svg ...> <defs> <!-- ... --> </defs> <path fill="url(#grad)" d="…"/> <!-- TITLE GRAPHIC --> <g> <path … /> <!-- ... --> </g> </svg> Next, I add the blurred trail from Yogi’s airborne broom. 
This includes defining a Gaussian Blur filter and placing its path between the background and title layers: <svg ...> <defs> <linearGradient id="grad" …>…</linearGradient> <filter id="trail" …>…</filter> </defs> <!-- GRADIENT --> <!-- TRAIL --> <path filter="url(#trail)" …/> <!-- TITLE GRAPHIC --> </svg> Then come the magical stars, added in the same sequential fashion: <svg ...> <!-- GRADIENT --> <!-- TRAIL --> <!-- STARS --> <!-- TITLE GRAPHIC --> </svg> To keep everything organised and animation-ready, I create an empty group that will hold all the parts of Yogi: <g id="yogi">...</g> Then I build Yogi from the ground up — starting with background props, like his broom: <g id="broom">...</g> Followed by grouped elements for his body, head, collar, and tie: <g id="yogi"> <g id="broom">…</g> <g id="body">…</g> <g id="head">…</g> <g id="collar">…</g> <g id="tie">…</g> </g> Since I export each layer from the same-sized artboard, I don’t need to worry about alignment or positioning issues later on — they’ll all slot into place automatically. I keep my code clean, readable, and ordered logically by layering elements this way. It also makes animating smoother, as each component is easier to identify. Reusing Elements With <use> When duplicate shapes get reused repeatedly, SVG files can get bulky fast. My recreation of the “Bewitched Bear” title card contains 80 stars in three sizes. Combining all those shapes into one optimised path would bring the file size down to 3KB. But I want to animate individual stars, which would almost double that to 5KB: <g id="stars"> <path class="star-small" fill="#eae3da" d="..."/> <path class="star-medium" fill="#eae3da" d="..."/> <path class="star-large" fill="#eae3da" d="..."/> <!-- ... --> </g> Moving the stars’ fill attribute values to their parent group reduces the overall weight a little: <g id="stars" fill="#eae3da"> <path class="star-small" d="…"/> <path class="star-medium" d="…"/> <path class="star-large" d="…"/> <!-- ... 
--> </g> But a more efficient and manageable option is to define each star size as a reusable template: <defs> <path id="star-large" fill="#eae3da" fill-rule="evenodd" d="…"/> <path id="star-medium" fill="#eae3da" fill-rule="evenodd" d="…"/> <path id="star-small" fill="#eae3da" fill-rule="evenodd" d="…"/> </defs> With this setup, changing a star’s design only means updating its template once, and every instance updates automatically. Then, I reference each one using <use> and position them with x and y attributes: <g id="stars"> <!-- Large stars --> <use href="#star-large" x="1575" y="495"/> <!-- ... --> <!-- Medium stars --> <use href="#star-medium" x="1453" y="696"/> <!-- ... --> <!-- Small stars --> <use href="#star-small" x="1287" y="741"/> <!-- ... --> </g> This approach makes the SVG easier to manage, lighter to load, and faster to iterate on, especially when working with dozens of repeating elements. Best of all, it keeps the markup clean without compromising on flexibility or performance. Adding Animations The stars trailing behind Yogi’s stolen broom bring so much personality to the animation. I wanted them to sparkle in a seemingly random pattern against the dark blue background, so I started by defining a keyframe animation that cycles through different opacity levels: @keyframes sparkle { 0%, 100% { opacity: .1; } 50% { opacity: 1; } } Next, I applied this looping animation to every use element inside my stars group: #stars use { animation: sparkle 10s ease-in-out infinite; } The secret to creating a convincing twinkle lies in variation. I staggered animation delays and durations across the stars using nth-child selectors, starting with the quickest and most frequent sparkle effects: /* Fast, frequent */ #stars use:nth-child(n + 1):nth-child(-n + 10) { animation-delay: .1s; animation-duration: 2s; } From there, I layered in additional timings to mix things up. 
Some stars sparkle slowly and dramatically, others more randomly, with a variety of rhythms and pauses: /* Medium */ #stars use:nth-child(n + 11):nth-child(-n + 20) { ... } /* Slow, dramatic */ #stars use:nth-child(n + 21):nth-child(-n + 30) { ... } /* Random */ #stars use:nth-child(3n + 2) { ... } /* Alternating */ #stars use:nth-child(4n + 1) { ... } /* Scattered */ #stars use:nth-child(n + 31) { ... } By thoughtfully structuring the SVG and reusing elements, I can build complex-looking animations without bloated code, making even a simple effect like changing opacity sparkle. Then, for added realism, I make Yogi’s head wobble: @keyframes headWobble { 0% { transform: rotate(-0.8deg) translateY(-0.5px); } 100% { transform: rotate(0.9deg) translateY(0.3px); } } #head { animation: headWobble 0.8s cubic-bezier(0.5, 0.15, 0.5, 0.85) infinite alternate; } His tie waves: @keyframes tieWave { 0%, 100% { transform: rotateZ(-4deg) rotateY(15deg) scaleX(0.96); } 33% { transform: rotateZ(5deg) rotateY(-10deg) scaleX(1.05); } 66% { transform: rotateZ(-2deg) rotateY(5deg) scaleX(0.98); } } #tie { transform-style: preserve-3d; animation: tieWave 10s cubic-bezier(0.68, -0.55, 0.27, 1.55) infinite; } His broom swings: @keyframes broomSwing { 0%, 20% { transform: rotate(-5deg); } 30% { transform: rotate(-4deg); } 50%, 70% { transform: rotate(5deg); } 80% { transform: rotate(4deg); } 100% { transform: rotate(-5deg); } } #broom { animation: broomSwing 4s cubic-bezier(0.5, 0.05, 0.5, 0.95) infinite; } And, finally, Yogi himself gently rotates as he flies on his magical broom: @keyframes yogiWobble { 0% { transform: rotate(-2.8deg) translateY(-0.8px) scale(0.998); } 30% { transform: rotate(1.5deg) translateY(0.3px); } 100% { transform: rotate(3.2deg) translateY(1.2px) scale(1.002); } } #yogi { animation: yogiWobble 3.5s cubic-bezier(.37, .14, .3, .86) infinite alternate; } All these subtle movements bring Yogi to life. 
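The repeated <use> markup and the staggered nth-child timings above follow a mechanical pattern, so they lend themselves to being generated. As an illustrative sketch (the positions, counts, and timing ranges below are invented for the example, not the title card's actual data), a short Python script could emit both the star instances and their CSS:

```python
import random

# Illustrative star positions; the real title card places 80 stars by hand.
STARS = {
    "star-large": [(1575, 495), (1320, 610)],
    "star-medium": [(1453, 696), (1210, 540)],
    "star-small": [(1287, 741), (1100, 620)],
}

def star_uses(stars):
    """Emit <use> elements grouped by template id, matching the article's pattern."""
    lines = ['<g id="stars">']
    for template, positions in stars.items():
        for x, y in positions:
            lines.append(f'  <use href="#{template}" x="{x}" y="{y}"/>')
    lines.append("</g>")
    return "\n".join(lines)

def sparkle_css(count, seed=1):
    """Give each star a varied delay and duration so the twinkle feels random."""
    rng = random.Random(seed)
    rules = []
    for i in range(1, count + 1):
        delay = round(rng.uniform(0, 3), 1)
        duration = round(rng.uniform(2, 8), 1)
        rules.append(
            f"#stars use:nth-child({i}) {{ "
            f"animation-delay: {delay}s; animation-duration: {duration}s; }}"
        )
    return "\n".join(rules)

print(star_uses(STARS))
print(sparkle_css(6))
```

Seeding the random generator keeps the "random" twinkle reproducible between builds, so the animation looks the same on every export.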
By developing structured SVGs, I can create animations that feel full of character without writing a single line of JavaScript. Try this yourself: See the Pen Bewitched Bear CSS/SVG animation [forked] by Andy Clarke. Conclusion Whether you’re recreating a classic title card or animating icons for an interface, the principles are the same: Start clean, Optimise early, and Structure everything with animation in mind. SVGs offer incredible creative freedom, but only if kept lean and manageable. When you plan your process like a production cell — layer by layer, element by element — you’ll spend less time untangling code and more time bringing your work to life.
  • Researchers genetically altered fruit flies to crave cocaine

    Fruit flies don't naturally enjoy the taste of cocaine. Credit: Deposit Photos


    In a world first, scientists at the University of Utah have engineered fruit flies susceptible to cocaine addiction. But as strange as it sounds, there are potentially life-saving reasons for genetically altering the insects to crave the drug. The novel biological model could aid the development of addiction treatment therapies and expedite research timelines. The findings are detailed in the Journal of Neuroscience.
    As surprising as it may sound, humans have a lot in common with fruit flies. In fact, we share around 70–75 percent of the same genes responsible for various diseases, as well as many of the same vital organs. Researchers have relied on the insects for genetic studies for years, especially for investigating the biological roots of certain addictions like cocaine abuse. This is due in large part to the fruit fly’s quick life cycle and its comparatively simple genetic makeup. But while scientists have administered the drug to the bugs in the past, there’s always been a small problem.
    “Flies don’t like cocaine one bit,” Adrian Rothenfluh, the study’s senior author and an associate professor of psychiatry, said in a statement.
    Even when previously introduced to cocaine, Rothenfluh’s team noted that the insects routinely opted for pure sugar water over sugar water laced with cocaine. Study first author Travis Philyaw theorized that the reason may reside in a fly’s sense of taste, which is located on its legs.
    “Insects are evolutionarily primed to avoid plant toxins, and cocaine is a plant toxin,” Philyaw explained. “They have taste receptors on their ‘arms’—their tarsal segments—so they can put their hand in something before it goes in their mouth, and decide, ‘I’m not going to touch that.'”
    After confirming that cocaine activates a fruit fly’s bitter-sensing taste receptors, Rothenfluh and Philyaw switched off those nerves. Once deactivated, there was little to stop the flies from developing a cocaine habit. These modified flies were subsequently introduced to sugar water infused with a low concentration of cocaine. Within 16 hours, the insects indicated a preference for the drug-laced drink.
    “At low doses, they start running around, just like people,” said Rothenfluh. “At very high doses, they get incapacitated, which is also true in people.”
    Now that researchers know how to breed the modified fruit flies, they can more easily study how cocaine addiction evolves in the body. Not only that, but they can do so on a much faster timeline by analyzing hundreds of genes at a time.
    “We can scale research so quickly in flies,” said Philyaw. “We can identify risk genes that might be difficult to uncover in more complex organisms, and then we pass that information to researchers who work with mammalian models.”
    From there, scientists can identify treatment targets that help link to human therapy options.
    “We can really start to understand the mechanisms of cocaine choice, and the more you understand about the mechanism, the more you have a chance to find a therapeutic that might act on that mechanism,” explained Rothenfluh.
    WWW.POPSCI.COM
  • How (and Why) I Use Smart Cameras to Monitor My Garden

    We may earn a commission from links on this page. While most people think of smart cameras as just a part of their security system, they’re also a good way to monitor the things growing in your yard. In most cases, the cameras you already have set up for security can be doing double duty as a tool to keep track of what's happening in your garden. I believe we’re on the cusp of smart cameras becoming a much bigger part of the gardening experience. For the last few years, smart bird houses have exploded in popularity. One of those companies, Bird Buddy, has launched an entirely new line of cameras specifically for micro-viewing experiences in the garden. Their Petal cameras, expected to be available next year, should be positioned closer to the ground than most security cameras, and are meant to capture bees, insects, and butterflies, as well as the growth of your plants. Using AI (as a subscription service), the camera will allow you to assign names to your plants and even communicate with them. Still, there is a lot you can do with security cameras already on the market. Remote monitoring

    Credit: Amanda Blum

    In an ideal world, you could pack up for vacation and your yard would take care of itself—but a smart camera can allow you to remotely keep an eye on what’s happening and monitor for any damage. What’s impressive to me is how well my solar-powered cameras maintain their connection, even during low temperatures and freezing rain. 

    I've been impressed at how much detail I can get from small plants through my cameras.
    Credit: Amanda Blum

    Cameras allow you to keep an active watch on your yard. Not only will your security camera let you know if your trusted waterer drops by while you're gone as promised, but you can actually see how your plants are doing and if additional help is needed. I’m always impressed at how good the zoom is on the cameras I use around my yard; I can actually tell if a tomato is ripe or if broccoli is ready to be picked. Last year, when I couldn’t get outside because of a sprained ankle and had someone helping in the garden, being able to see what they were doing and communicate with them via my security camera was invaluable. It’s much more effective than trying to describe what you need or want. Catch pests
    Garden pests are frustrating for a wealth of reasons. To start with, you often don’t know what kind of pest you’re dealing with, and it’s nearly impossible to catch them in the act. Smart cameras are perfect for this, because they give you a fly-on-the-wall ability to passively watch. Motion detection does most of the work for you. My security camera let me know I had raccoons in my yard last winter. They weren’t doing any damage, but it helped influence how I design my garden and chicken coops. The cameras identified the cat that had chosen my garden to use as a litter box, checking in each night around 1 a.m. I’ve been chasing down a rat for the last two weeks, and the cameras do a spectacular job of catching his activity, which tells me where to add traps and what I may be doing that is enabling him. Other uses for smart cameras in your yard

    Credit: Amanda Blum

    The most invaluable service I’ve gotten from my cameras is using them to monitor backyard pets. I could not figure out how my newly adopted doberman was escaping from the yard, so I installed security cameras, and discovered she was climbing a five-foot-tall chain-link fence. I’ve got three cameras installed in my chicken coop, and they tell me when there are eggs to be grabbed, if a chicken is becoming broody, if everyone got into the coop at night, and if that pesky rat has cracked into the chicken food. When I first got my chickens, I couldn’t figure out which bird was laying which color egg, but the cameras helped. And now that I have a beehive, being able to see the activity going in and out of the hive is a helpful way to monitor the health of the hive, and if a rodent of any kind tries to get in, I’ll know immediately. How to choose a camera for your yard
    I’ve tried smart cameras from almost every major brand, and I’ve figured out some things. First, in almost all cases, I want a PTZ (pan-tilt-zoom) camera. These allow you to use your phone as a remote control and move the camera around, often almost 360 degrees, to zoom in on what you want. This is far superior to a fixed-range camera. It’s simply annoying to have something going on just outside of the range of your camera and not be able to do anything to adjust it remotely. Additionally, I look for an app that makes it easy to watch clips. While I think Reolink cameras are affordable and functional, their app forces you to watch a horizontal clip on a vertical screen, so details are incredibly small. The Ring app has a lot of bloat, bringing neighborhood alert notifications to your phone. I enjoy the Aqara, Switchbot, and Eufy apps for getting to the video quickly and easily. Lastly, as you add cameras to your collection, being able to remain free from subscription costs is a real bonus. For that reason, I have largely switched over to Eufy cameras, which—if connected to a Home Base—don’t need a subscription. 
What I use in my yard:

    I replaced all my floodlights with this camera for overhead views

    Eufy Wired Floodlight Cam


    I place these wireless cams anyplace I want a 360 view of what's happening in my yard.

    Eufy Solar Powered Wireless Camera


    I have this epoxied into three spots in my chicken coop.

    Eufy Indoor PTZ Camera


    Just added this to monitor my beehive.

    Eufycam S3 Pro

    All my cameras sync to the homebase so I don't need a subscription.

    Eufy HomeBase


    Where to place your camera

    Credit: Amanda Blum

    All security cameras are either hardwired or wireless. You might already have exterior floodlights on your home, and wired security cameras can use those connections, replacing the lights. In this case, your connection is likely high up, and can’t be moved easily. So long as it’s high up, you likely have a good field of vision of your yard, but make sure to consider plants that grow in the summer, and whether they’ll block your view. If you don’t have these connections available and don’t want to pay an electrician to create them, you need wireless cameras. But I actually prefer my wireless cameras. First, the solar power on most of them is astounding. I live in the Pacific Northwest, a place with seven months of gloom, and my cameras always stay powered. Second, being wireless means you can move your camera around to find the perfect spot. Usually all you need to do is screw the base into the spot you want the camera. Don’t be afraid to try different spots: when I was chasing down how my dog escaped, I had to keep moving the camera. I attached the camera to a 2x4, and moved the wood around the yard, leaning it against whatever was near until I found the right range of vision. 
    LIFEHACKER.COM
    How (and Why) I Use Smart Cameras to Monitor My Garden
We may earn a commission from links on this page.

While most people think of smart cameras as just a part of their security system, they're also a good way to monitor the things growing in your yard. In most cases, the cameras you already have set up for security can be doing double duty as a tool to keep track of what's happening in your garden.

I believe we're on the cusp of smart cameras becoming a much bigger part of the gardening experience. For the last few years, smart bird houses have exploded in popularity. One of those companies, Bird Buddy, has launched an entirely new line of cameras specifically for micro-viewing experiences in the garden. Its Petal cameras, expected to be available next year, should be positioned closer to the ground than most security cameras, and are meant to capture bees, insects, and butterflies, as well as the growth of your plants. Using AI (as a subscription service), the camera will allow you to assign names to your plants and even communicate with them. Still, there is a lot you can do with security cameras already on the market.

Remote monitoring
Credit: Amanda Blum
In an ideal world, you could pack up for vacation and your yard would take care of itself—but a smart camera can allow you to remotely keep an eye on what's happening and monitor for any damage. What's impressive to me is how well my solar-powered cameras maintain their connection, even during low temperatures and freezing rain.

I've been impressed at how much detail I can get from small plants through my cameras. Credit: Amanda Blum

Cameras allow you to keep an active watch on your yard. Not only will your security camera let you know if your trusted waterer drops by as promised while you're gone, but you can actually see how your plants are doing and whether additional help is needed. I'm always impressed at how good the zoom is on the cameras I use around my yard; I can actually tell if a tomato is ripe or if broccoli is ready to be picked.
Last year, when I couldn't get outside because of a sprained ankle and had someone helping in the garden, being able to see what they were doing and communicate with them via my security camera was invaluable. It's much more effective than trying to describe what you need or want.

Catch pests
Garden pests are frustrating for a wealth of reasons. To start with, you often don't know what kind of pest you're dealing with, and it's nearly impossible to catch them in the act. Smart cameras are perfect for this, because they give you a fly-on-the-wall ability to passively watch. Motion detection does most of the work for you. My security camera let me know I had raccoons in my yard last winter. They weren't doing any damage (yet), but it helped influence how I design my garden and chicken coops. The cameras identified the cat that had chosen my garden to use as a litter box, checking in each night around 1 a.m. I've been chasing down a rat for the last two weeks, and the cameras do a spectacular job of catching his activity, which tells me where to add traps and what I may be doing that is enabling him.

Other uses for smart cameras in your yard
Credit: Amanda Blum
The most invaluable service I've gotten from my cameras is how I use them to monitor backyard pets. I could not figure out how my newly adopted doberman was escaping from the yard, so I installed security cameras and discovered she was climbing a five-foot-tall chain-link fence. I've got three cameras installed in my chicken coop, and they tell me when there are eggs to be grabbed, if a chicken is becoming broody, if everyone got into the coop at night, and if that pesky rat has cracked into the chicken food. When I first got my chickens, I couldn't figure out which bird was laying which color egg, but the cameras helped.
And now that I have a beehive, being able to see the activity going in and out is helpful for monitoring the hive's health, and if a rodent of any kind tries to get in, I'll know immediately.

How to choose a camera for your yard
I've tried smart cameras from almost every major brand, and I've figured out some things. First, in almost all cases, I want a PTZ (pan, tilt, zoom) camera. These allow you to use your phone as a remote control and move the camera around, often almost 360 degrees, to zoom in on what you want. This is far superior to a fixed-range camera. It's simply annoying to have something going on just outside the range of your camera and not be able to adjust it remotely. Additionally, I look for an app that makes it easy to watch clips. While I think Reolink cameras are affordable and functional, their app forces you to watch a horizontal clip on a vertical screen, so details are incredibly small. The Ring app has a lot of bloat, bringing neighborhood alert notifications to your phone. I enjoy the Aqara, Switchbot, and Eufy apps for getting to the video quickly and easily. Lastly, as you add cameras to your collection, being able to remain free from subscription costs is a real bonus. For that reason, I have largely switched over to Eufy cameras, which—if connected to a HomeBase—don't need a subscription.

What I use in my yard:
Eufy Wired Floodlight Cam ($199.99 at Amazon): I replaced all my floodlights with this camera for overhead views.
Eufy Solar Powered Wireless Camera ($259.99 at Amazon): I place these wireless cams anyplace I want a 360-degree view of what's happening in my yard.
Eufy Indoor PTZ Camera ($34.88 at Amazon): I have this epoxied into three spots in my chicken coop.
Eufycam S3 Pro ($439.99 at Amazon): Just added this to monitor my beehive.
Eufy HomeBase ($149.99 at Amazon): All my cameras sync to the HomeBase so I don't need a subscription.

Where to place your camera
Credit: Amanda Blum
All security cameras are either hardwired or wireless. You might already have exterior floodlights on your home, and wired security cameras can use those connections, replacing the lights (many units come with floodlights). In this case, your connection is likely high up and can't be moved easily. So long as it's high up, you likely have a good field of vision of your yard, but make sure to consider plants that grow in the summer, and whether they'll block your view. If you don't have these connections available and don't want to pay an electrician to create them, you need wireless cameras. But I actually prefer my wireless cameras. First, the solar power on most of them is astounding. I live in the Pacific Northwest, a place with seven months of gloom, and my cameras always stay powered. Second, being wireless means you can move your camera around to find the perfect spot. Usually all you need is to screw the base into the spot you want the camera. Don't be afraid to try different spots; when I was chasing down how my dog escaped, I had to keep moving the camera. I attached the camera to a 2x4 and moved the wood around the yard, leaning it against whatever was near until I found the right range of vision.
  • From Private Parts to Peckham's Medusa: Inside Anna Ginsburg's animated world

    When Anna Ginsburg opened her talk at OFFF Barcelona with her showreel, it landed like a punch to the heart and gut all at once. Immense, emotional, awesome. That three-word review wasn't just for the reel – it set the tone for a talk that was unflinchingly honest, joyously weird, and brimming with creative intensity.
    Anna began her career making music videos, which she admitted were a kind of creative scaffolding: "I didn't yet know what I wanted to say about the world, so I used music as a skeleton to hang visuals on."
    It gave her the freedom to experiment visually and technically with rotoscoping, stop motion and shooting live-action. It was an opportunity to be playful and have fun until she had something pressing to say. Then, Anna began to move into more meaningful territory, blending narrative and aesthetic experimentation.
    Alongside music videos, she became increasingly drawn to animated documentaries. "It's a powerful and overlooked genre," she explained. "When it's just voice recordings and not video, people are more candid. You're protecting your subject, so they're more honest."

    Talking genitals and creative liberation: The making of Private Parts
    A formative moment in Anna's personal and creative life occurred when she saw the artwork 'The Great Wall of Vagina' by Jamie McCartney at the age of 19. It followed an awkward teenage discovery years earlier when, after finally achieving her first orgasm (post-Cruel Intentions viewing), she proudly shared the news with friends and was met with horror. "Boys got high-fived. Girls got shamed."
    That gap between female pleasure and cultural discomfort became the starting point for Private Parts, her now-famous animated short about masturbation and sexual equality. It began as a personal experiment, sketching vulvas in her studio, imagining what their facial expressions might be. Then, she started interviewing friends about their experiences and animating vulvas to match their voices.
    When It's Nice That and Channel 4 emailed her looking for submissions for a late-night slot, Anna shared a clip of two vulvas in casual conversation, and they were immediately sold. With a shoestring budget of £2,000 and a five-week deadline, she rallied 11 illustrators to help bring the film to life. "I set up a Dropbox, and talking genitals started flooding in from the four corners of the world while I was sitting in my bedroom at my mum's," she laughed.
    One standout moment came from an Amsterdam-based designer who created a CGI Rubik's Cube vagina, then took two weeks off work to spray paint 100 versions of it. The result of what started as a passion project is an iconic, hilarious, and touching film that still resonates ten years on.

    From humour to heartbreak: What Is Beauty
    The talk shifted gear when Anna began to speak about her younger sister's anorexia. In 2017, during her sister's third hospitalisation, Anna found herself questioning the roots of beauty ideals, particularly in Western culture. Witnessing her sister's pain reframed how she saw her own body.
    This sparked a deep dive into beauty through the ages, from the Venus of Willendorf, a 28,000-year-old fertility goddess, to the Versace supermodels of the 1990s and the surgically sculpted Kardashians of today.
    "You realise the pace of the change in beauty ideals," she says. "If you revisit the skeletal female bodies which defined the super skinny era of the 2000s and compare it to the enhanced curves of today, you realise that trying to keep up is not only futile; it's extremely dangerous."
    She also explored the disturbing trend of dismemberment in advertising – shots taken where the heads are intentionally out of frame – and the impact this has on self-perception. Her response was What Is Beauty, released in 2018 on International Women's Day and her sister's birthday. The short film went viral, amassing over 20 million views.
    "It was a love letter to her," Anna said. "Because it didn't have English dialogue, it travelled globally. The simplicity made it resonate." And despite its runaway success, it brought her zero income. "Then I made the worst advert for a bank the world has ever seen," she joked. "I made money, but it broke my creative spirit."

    Enter the Hag: Animation, myth and millennial angst
    OFFF attendees were also treated to the world-exclusive first look at Hag, Anna's new animated short, three years in the making. It's her most ambitious and most personal project yet. Made with the support of the BFI, which awards National Lottery funding, Hag is a 16-minute fantasy set in a surreal version of Peckham. The main character is a childless, single, disillusioned woman with snakes for hair.
    "I had just broken up with a lockdown boyfriend after struggling with doubts for nearly two years," she reveals. "The next day, I was at a baby shower surrounded by friends with rings and babies who recoiled at my touch. I was surrounded by flies, and a dog was doing a poo right next to me. I just felt like a hag."
    Drawing on Greek mythology, Anna reimagines Medusa not as a jealous monster but as a feminist figure of rage, autonomy and misinterpretation. "I didn't know she was a rape victim until I started researching," she told me after the talk. "The story of Athena cursing her out of jealousy is such a tired trope. What if it was solidarity? What if the snakes were power?"
    In Hag, the character initially fights with her snakes – violently clipping them back in shame and battling with them – but by the end, they align. She embraces her monstrous self. "It's a metaphor for learning to love the parts of yourself you've been told are wrong," Anna said. "That journey is universal."

    Making the personal political (and funny)
    Telling a story so autobiographical wasn't easy. "It's exposing," Anna admitted. "My past work dealt with issues in the world. This one is about how I feel in the world." Even her ex-boyfriend plays himself. "Luckily, he's funny and cool about it. Otherwise, it would've been a disaster."
    She did worry about dramatising the baby shower scene too much. "None of those women were horrible in real life, but for the film, we needed to crank up the emotional tension," she says. "I just wanted to show that societal pressures make women feel monstrous whether they decide to conform or not. This is not a battle between hags and non-hags. These feelings affect us all."
    Co-writing the script with her dear friend and writer Miranda Latimer really helped. "It felt less exposing as we'd both lived versions of the same thing. Collaboration is liberating and makes me feel safer when being so honest," Anna explains.

    Sisterhood, generations and the pressure to conform
    It was very clear from our chat that Anna's younger sisters are a recurring thread throughout her work. "They've helped me understand the world through a Gen Z lens," she said. "Stalking my youngest sister on Instagram was how I noticed the way girls crop their faces or hide behind scribbles. It's dehumanising."
    That intergenerational awareness fuels many of her ideas. "I definitely wouldn't have made What Is Beauty without Maya. Seeing what she was going through just unlocked something."
    She's also keenly aware of the gender gap in healthcare. "So many women I know are living with pain, going years without a diagnosis. It's infuriating. If I get asked to work on anything to do with women's health, I'll say yes."

    Medusa, millennials, and the meaning of self-love
    One of Hag's most biting commentaries is about millennial self-care culture. "There's a scene in the character's bedroom – it's got a faded Dumbledore poster, self-help books, a flashing 'Namaste' sign. It's a shrine to the broken millennial."
    She laughs: "Self-love became a commodity. An expensive candle, a jade roller, and an oil burner from Muji. Like, really? That's it?" Her film pokes at the performativity of wellness while still holding space for genuine vulnerability.
    This same self-awareness informs her reflections on generational shifts. "Gen Z is going through the same thing, just with a different flavour. It's all about skincare routines now – 11 steps for a 14-year-old. It's wild."

    Feminism with fangs (and a sense of humour)
    Anna's feminism is open, intersectional, and laced with humour. "My mum's a lesbian and a Child Protection lawyer who helped to make rape within marriage illegal in the UK," she shared. "She sometimes jokes that my work is a bit basic. But I'm OK with that – I think there's space for approachable feminism, too."
    Importantly, she wants to bring everyone into the conversation. "It means so much when men come up to me after talks. I don't want to alienate anyone. These stories are about people, not just women."
    What's Next?
    Hag will officially premiere later this year, and it's likely to resonate far and wide. It's raw, mythic, funny and furious – and thoroughly modern.
    As Anna put it: "I've been experiencing external pressure and internal longing while making this film. So I'm basically becoming a hag while making Hag."
    As far as metamorphoses go, that's one we'll happily watch unfold.
    WWW.CREATIVEBOOM.COM
    From Private Parts to Peckham's Medusa: Inside Anna Ginsburg's animated world
    When Anna Ginsburg opened her talk at OFFF Barcelona with her showreel, it landed like a punch to the heart and gut all at once. Immense, emotional, awesome. That three-word review wasn't just for the reel – it set the tone for a talk that was unflinchingly honest, joyously weird, and brimming with creative intensity. Anna began her career making music videos, which she admitted were a kind of creative scaffolding: "I didn't yet know what I wanted to say about the world, so I used music as a skeleton to hang visuals on." It gave her the freedom to experiment visually and technically with rotoscoping, stop motion and shooting live-action. It was an opportunity to be playful and have fun until she had something pressing to say. Then, Anna began to move into more meaningful territory, blending narrative and aesthetic experimentation. Alongside music videos, she became increasingly drawn to animated documentaries. "It's a powerful and overlooked genre," she explained. "When it's just voice recordings and not video, people are more candid. You're protecting your subject, so they're more honest." Talking genitals and creative liberation: The making of Private Parts A formative moment in Anna's personal and creative life occurred when she saw the artwork 'The Great Wall of Vagina' by Jamie McCartney at the age of 19. It followed an awkward teenage discovery years earlier when, after finally achieving her first orgasm (post-Cruel Intentions viewing), she proudly shared the news with friends and was met with horror. "Boys got high-fived. Girls got shamed." That gap between female pleasure and cultural discomfort became the starting point for Private Parts, her now-famous animated short about masturbation and sexual equality. It began as a personal experiment, sketching vulvas in her studio, imagining what their facial expressions might be. Then, she started interviewing friends about their experiences and animating vulvas to match their voices. 
When It's Nice That and Channel 4 emailed her looking for submissions for a late-night slot, Anna shared a clip of two vulvas in casual conversation, and they were immediately sold. With a shoestring budget of £2,000 and a five-week deadline, she rallied 11 illustrators to help bring the film to life. "I set up a Dropbox, and talking genitals started flooding in from the four corners of the world while I was sitting in my bedroom at my mum's," she laughed. One standout moment came from an Amsterdam-based designer who created a CGI Rubik's Cube vagina, then took two weeks off work to spray paint 100 versions of it. The result of what started as a passion project is an iconic, hilarious, and touching film that still resonates ten years on. From humour to heartbreak: What Is Beauty The talk shifted gear when Anna began to speak about her younger sister's anorexia. In 2017, during her sister's third hospitalisation, Anna found herself questioning the roots of beauty ideals, particularly in Western culture. Witnessing her sister's pain reframed how she saw her own body. This sparked a deep dive into beauty through the ages, from the Venus of Willendorf, a 28,000-year-old fertility goddess, to the Versace supermodels of the 1990s and the surgically sculpted Kardashians of today. "You realise the pace of the change in beauty ideals," she says. "If you revisit the skeletal female bodies which defined the super skinny era of the 2000s and compare it to the enhanced curves of today, you realise that trying to keep up is not only futile; it's extremely dangerous." She also explored the disturbing trend of dismemberment in advertising – shots taken where the heads are intentionally out of frame – and the impact this has on self-perception. Her response was What Is Beauty, released in 2018 on International Women's Day and her sister's birthday. The short film went viral, amassing over 20 million views. "It was a love letter to her," Anna said. 
"Because it didn't have English dialogue, it travelled globally. The simplicity made it resonate." And despite its runaway success, it brought her zero income. "Then I made the worst advert for a bank the world has ever seen," she joked. "I made money, but it broke my creative spirit." Enter the Hag: Animation, myth and millennial angst OFFF attendees were also treated to the world-exclusive first look at Hag, Anna's new animated short, three years in the making. It's her most ambitious and most personal project yet. Made with the support of the BFI, awarding National Lottery funding, Has is a 16-minute fantasy set in a surreal version of Peckham. The main character is a childless, single, disillusioned woman with snakes for hair. "I had just broken up with a lockdown boyfriend after struggling with doubts for nearly 2 years,"' she reveals. "The next day, I was at a baby shower surrounded by friends with rings and babies who recoiled at my touch. I was surrounded by flies, and a dog was doing a poo right next to me. I just felt like a hag." Drawing on Greek mythology, Anna reimagines Medusa not as a jealous monster but as a feminist figure of rage, autonomy and misinterpretation. "I didn't know she was a rape victim until I started researching," she told me after the talk. "The story of Athena cursing her out of jealousy is such a tired trope. What if it was solidarity? What if the snakes were power?" In Hag, the character initially fights with her snakes – violently clipping them back in shame and battling with them – but by the end, they align. She embraces her monstrous self. "It's a metaphor for learning to love the parts of yourself you've been told are wrong," Anna said. "That journey is universal." Making the personal political (and funny) Telling a story so autobiographical wasn't easy. "It's exposing," Anna admitted. "My past work dealt with issues in the world. This one is about how I feel in the world." Even her ex-boyfriend plays himself. 
"Luckily, he's funny and cool about it. Otherwise, it would've been a disaster." She did worry about dramatising the baby shower scene too much. "None of those women were horrible in real life, but for the film, we needed to crank up the emotional tension," she says. "I just wanted to show that societal pressures make women feel monstrous whether they decide to conform or not. This is not a battle between hags and non-hags. These feelings affect us all." Co-writing the script with her dear friend and writer Miranda Latimer really helped. "It felt less exposing as we'd both lived versions of the same thing. Collaboration is liberating and makes me feel safer when being so honest," Anna explains.

Sisterhood, generations and the pressure to conform

It was very clear from our chat that Anna's younger sisters are a recurring thread throughout her work. "They've helped me understand the world through a Gen Z lens," she said. "Stalking my youngest sister on Instagram was how I noticed the way girls crop their faces or hide behind scribbles. It's dehumanising." That intergenerational awareness fuels many of her ideas. "I definitely wouldn't have made What Is Beauty without Maya. Seeing what she was going through just unlocked something." She's also keenly aware of the gender gap in healthcare. "So many women I know are living with pain, going years without a diagnosis. It's infuriating. If I get asked to work on anything to do with women's health, I'll say yes."

Medusa, millennials, and the meaning of self-love

One of Hag's most biting commentaries is about millennial self-care culture. "There's a scene in the character's bedroom – it's got a faded Dumbledore poster, self-help books, a flashing 'Namaste' sign. It's a shrine to the broken millennial." She laughs: "Self-love became a commodity. An expensive candle, a jade roller, and an oil burner from Muji. Like, really? That's it?"
Her film pokes at the performativity of wellness culture while still holding space for genuine vulnerability. This same self-awareness informs her reflections on generational shifts. "Gen Z is going through the same thing, just with a different flavour. It's all about skincare routines now – 11 steps for a 14-year-old. It's wild."

Feminism with fangs (and a sense of humour)

Anna's feminism is open, intersectional, and laced with humour. "My mum's a lesbian and a Child Protection lawyer who helped to make rape within marriage illegal in the UK," she shared. "She sometimes jokes that my work is a bit basic. But I'm OK with that – I think there's space for approachable feminism, too." Importantly, she wants to bring everyone into the conversation. "It means so much when men come up to me after talks. I don't want to alienate anyone. These stories are about people, not just women."

What's next?

Hag will officially premiere later this year, and it's likely to resonate far and wide. It's raw, mythic, funny and furious – and thoroughly modern. As Anna put it: "I've been experiencing external pressure and internal longing while making this film. So I'm basically becoming a hag while making Hag." As far as metamorphoses go, that's one we'll happily watch unfold.