FUTURISM.COM
In Leaked Text, Elon Musk Harangued Woman to Have as Many of His Babies as Possible

Alarming new details about billionaire Elon Musk's attempts to buy the silence of the mother of yet another of his secret children are coming to light.

As the Wall Street Journal reports, 26-year-old conservative influencer Ashley St. Clair was offered $15 million, plus $100,000 a month in support, to keep her from publicizing the fact that she had given birth to his child. St. Clair ultimately turned down the offer and has since been on a public crusade, suing the billionaire for custody of the child.

Before things turned sour, their relationship was anything but conventional. According to the WSJ, Musk told her that he wanted to bring in other women to have more of their children faster. "To reach legion-level before the apocalypse, we will need to use surrogates," he reportedly texted her.

The word "legion" alone demonstrates Musk's baffling obsession with having as many children as possible: the term refers to a unit of several thousand men in the ancient Roman army (so it shouldn't come as a surprise that St. Clair named her son Romulus, after the founder and first king of Rome).

Musk also tried to get her to deliver the baby via caesarean section, after publicly claiming that vaginal births limit brain size, a disputed theory. He even told her she should have as many as ten babies. St. Clair didn't agree to play by Musk's rules, however, deciding against a C-section. She also notably refused to sign a gag-order NDA. "I don't want my son to feel like he's a secret," she told Musk's longtime fixer Jared Birchall in December, as quoted by the WSJ.

St. Clair isn't the only woman Musk has seemingly attempted to coerce into having children. According to the newspaper, he has personally reached out to other women via DMs, offering to let them have his babies. Musk has also had four young children with Shivon Zilis, an executive at his brain-computer interface company Neuralink. In total, he has had 14 children that we know about, with four different women, and sources close to Musk told the WSJ that the real number could be significantly higher.

To Musk, it's allegedly part of an attempt to boost humanity's chances of survival. He has argued on many occasions that falling birth rates are our biggest existential threat, a belief that experts have long refuted. But according to the billionaire, women play only an insignificant, childbearing role in that fight. "In all of history, there has never been a competitive army composed of women," he texted St. Clair while canvassing for Trump in Pennsylvania, as quoted by the WSJ. "Not even once." "Men are made for war," he added. "Real men, anyway."
-
THEHACKERNEWS.COM
New BPFDoor Controller Enables Stealthy Lateral Movement in Linux Server Attacks
Apr 16, 2025 | Ravie Lakshmanan | Cyber Espionage / Network Security

Cybersecurity researchers have unearthed a new controller component associated with a known backdoor called BPFDoor as part of cyber attacks targeting the telecommunications, finance, and retail sectors in South Korea, Hong Kong, Myanmar, Malaysia, and Egypt in 2024.

"The controller could open a reverse shell," Trend Micro researcher Fernando Mercês said in a technical report published earlier in the week. "This could allow lateral movement, enabling attackers to enter deeper into compromised networks, allowing them to control more systems or gain access to sensitive data."

The campaign has been attributed with medium confidence to a threat group Trend Micro tracks as Earth Bluecrow, which is also known as DecisiveArchitect, Red Dev 18, and Red Menshen. The lower confidence level boils down to the fact that the BPFDoor malware source code was leaked in 2022, meaning it could also have been adopted by other hacking groups.

BPFDoor is a Linux backdoor that first came to light in 2022, with the malware positioned as a long-term espionage tool for use in attacks targeting entities in Asia and the Middle East at least a year prior to public disclosure. The most distinctive aspect of the malware is that it creates a persistent yet covert channel for threat actors to control compromised workstations and access sensitive data over extended periods of time.

The malware gets its name from its use of the Berkeley Packet Filter (BPF), a technology that allows programs to attach network filters to an open socket in order to inspect incoming network packets and watch for a specific magic byte sequence before springing into action.

"Because of how BPF is implemented in the targeted operating system, the magic packet triggers the backdoor despite being blocked by a firewall," Mercês said. "As the packet reaches the kernel's BPF engine, it activates the resident backdoor. While these features are common in rootkits, they are not typically found in backdoors."

The latest analysis from Trend Micro has found that the targeted Linux servers have also been infected by a previously undocumented malware controller that's used to access other affected hosts in the same network after lateral movement.

"Before sending one of the 'magic packets' checked by the BPF filter inserted by BPFDoor malware, the controller asks its user for a password that will also be checked on the BPFDoor side," Mercês explained.

In the next step, the controller directs the compromised machine to perform one of the following actions based on the password provided and the command-line options used:

- Open a reverse shell
- Redirect new connections to a shell on a specific port, or
- Confirm the backdoor is active

It's worth pointing out that the password sent by the controller must match one of the hard-coded values in the BPFDoor sample. The controller, besides supporting TCP, UDP, and ICMP protocols to commandeer the infected hosts, can also enable an optional encrypted mode for secure communication. Furthermore, the controller supports what's called a direct mode that enables the attackers to connect directly to an infected machine and obtain a shell for remote access, but only when the right password is provided.

"BPF opens a new window of unexplored possibilities for malware authors to exploit," Mercês said. "As threat researchers, it is a must to be equipped for future developments by analyzing BPF code, which will help protect organizations against BPF-powered threats."
-
SCREENCRUSH.COM
Everything New on Disney+ in May 2025

May means May the 4th, which always means new Star Wars stuff on Disney+. This year that includes a new anthology series called Star Wars: Tales of the Underworld, about some of the less savory members of the galaxy far, far away. There are also some new specials about Star Wars: Galaxy's Edge at Disneyland, including one on its great Rise of the Resistance attraction.

But I'm burying the lede here; the main new Star Wars addition is the remainder of Andor Season 2 (which is also the remainder of the series; these episodes conclude the prequel).

If Star Wars isn't your thing, there are new episodes of Doctor Who. And if you really dislike science fiction, well, there's the 100th episode of Big City Greens. Hopefully that does something for you.

Here's the full lineup for May 2025 on Disney+...

Thursday, May 1
- Rise Up, Sing Out (Shorts) (S2, 7 episodes)
- New to Disney+: Spider-Man: Across the Spider-Verse

Friday, May 2
- Genghis Khan: The Secret History of the Mongols (S1, 6 episodes)

Saturday, May 3
- New to Disney+: Doctor Who (Season 2) - Episode 4

Sunday, May 4
- Disney+ Original: Star Wars: Tales of the Underworld - Premiere, All Episodes Streaming
- New to Disney+: Star Wars: Galaxy's Edge | Disneyland Resort - Premiere
- New to Disney+: Star Wars: Rise of the Resistance | Disneyland Resort - Premiere

Tuesday, May 6
- Disney+ Original: Andor (Season 2) - Three New Episodes at 6pm PT

Wednesday, May 7
- Broken Karaoke (S3, 2 episodes)
- Firebuds (S2, 2 episodes)
- Hamster & Gretel (S2, 12 episodes)
- New to Disney+: Big City Greens (S4, 1 episode) - 100th Episode

Friday, May 9
- History's Greatest of All Time with Peyton Manning (S1, 8 episodes)
- The Toys That Built America (S3, 12 episodes)
- The UnXplained (S7, 6 episodes)
- WWE Rivals (S2, 10 episodes)
- WWE Rivals (S4, 6 episodes)

Saturday, May 10
- New to Disney+: Doctor Who (Season 2) - Episode 5

Tuesday, May 13
- Disney+ Original: Andor (Season 2) - Season Finale at 6pm PT

Saturday, May 17
- New to Disney+: Doctor Who (Season 2) - Episode 6

Monday, May 19
- New to Disney+: Tucci in Italy - Premiere, All Episodes Streaming

Tuesday, May 20
- New to Disney+: Minnie's Bow-Toons: Pet Hotel - Premiere, New Short-Form Series

Saturday, May 24
- New to Disney+: Doctor Who (Season 2) - Episode 7

Wednesday, May 28
- Me & Winnie the Pooh (S2, 6 episodes)
- Playdate with Winnie the Pooh (S2, 5 episodes)

Saturday, May 31
- How Not to Draw (S3, 4 episodes)
- New to Disney+: Doctor Who (Season 2) - Season Finale at 11am PT
-
WWW.TECHNOLOGYREVIEW.COM
AI is coming for music, too

Artificial intelligence was barely a term in 1956, when top scientists from the field of computing arrived at Dartmouth College for a summer conference. The computer scientist John McCarthy had coined the phrase in the funding proposal for the event, a gathering to work through how to build machines that could use language, solve problems like humans, and improve themselves. But it was a good choice, one that captured the organizers' founding premise: Any feature of human intelligence could "in principle be so precisely described that a machine can be made to simulate it."

In their proposal, the group had listed several "aspects of the artificial intelligence problem." The last item on their list, and in hindsight perhaps the most difficult, was building a machine that could exhibit creativity and originality.

At the time, psychologists were grappling with how to define and measure creativity in humans. The prevailing theory—that creativity was a product of intelligence and high IQ—was fading, but psychologists weren't sure what to replace it with. The Dartmouth organizers had one of their own. "The difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness," they wrote, adding that such randomness "must be guided by intuition to be efficient."

Nearly 70 years later, following a number of boom-and-bust cycles in the field, we now have AI models that more or less follow that recipe. While large language models that generate text have exploded in the last three years, a different type of AI, based on what are called diffusion models, is having an unprecedented impact on creative domains. By transforming random noise into coherent patterns, diffusion models can generate new images, videos, or speech, guided by text prompts or other input data. The best ones can create outputs indistinguishable from the work of people, as well as bizarre, surreal results that feel distinctly nonhuman.

Now these models are marching into a creative field that is arguably more vulnerable to disruption than any other: music. AI-generated creative works—from orchestra performances to heavy metal—are poised to suffuse our lives more thoroughly than any other product of AI has done yet. The songs are likely to blend into our streaming platforms, party and wedding playlists, soundtracks, and more, whether or not we notice who (or what) made them.

For years, diffusion models have stirred debate in the visual-art world about whether what they produce reflects true creation or mere replication. Now this debate has come for music, an art form that is deeply embedded in our experiences, memories, and social lives. Music models can now create songs capable of eliciting real emotional responses, presenting a stark example of how difficult it's becoming to define authorship and originality in the age of AI.

The courts are actively grappling with this murky territory. Major record labels are suing the top AI music generators, alleging that diffusion models do little more than replicate human art without compensation to artists. The model makers counter that their tools are made to assist in human creation. In deciding who is right, we're forced to think hard about our own human creativity.
Is creativity, whether in artificial neural networks or biological ones, merely the result of vast statistical learning and drawn connections, with a sprinkling of randomness? If so, then authorship is a slippery concept. If not—if there is some distinctly human element to creativity—what is it? What does it mean to be moved by something without a human creator? I had to wrestle with these questions the first time I heard an AI-generated song that was genuinely fantastic—it was unsettling to know that someone merely wrote a prompt and clicked "Generate." That predicament is coming soon for you, too.

Making connections

After the Dartmouth conference, its participants went off in different research directions to create the foundational technologies of AI. At the same time, cognitive scientists were following a 1950 call from J.P. Guilford, president of the American Psychological Association, to tackle the question of creativity in human beings. They came to a definition, first formalized in 1953 by the psychologist Morris Stein in the Journal of Psychology: Creative works are both novel, meaning they present something new, and useful, meaning they serve some purpose to someone. Some have called for "useful" to be replaced by "satisfying," and others have pushed for a third criterion: that creative things are also surprising.

Later, in the 1990s, the rise of functional magnetic resonance imaging made it possible to study more of the neural mechanisms underlying creativity in many fields, including music. Computational methods in the past few years have also made it easier to map out the role that memory and associative thinking play in creative decisions. What has emerged is less a grand unified theory of how a creative idea originates and unfolds in the brain and more an ever-growing list of powerful observations.

We can first divide the human creative process into phases, including an ideation or proposal step, followed by a more critical and evaluative step that looks for merit in ideas. A leading theory on what guides these two phases is called the associative theory of creativity, which posits that the most creative people can form novel connections between distant concepts.

"It could be like spreading activation," says Roger Beaty, a researcher who leads the Cognitive Neuroscience of Creativity Laboratory at Penn State. "You think of one thing; it just kind of activates related concepts to whatever that one concept is."

These connections often hinge specifically on semantic memory, which stores concepts and facts, as opposed to episodic memory, which stores memories from a particular time and place. Recently, more sophisticated computational models have been used to study how people make connections between concepts across great "semantic distances." For example, the word apocalypse is more closely related to nuclear power than to celebration. Studies have shown that highly creative people may perceive very semantically distinct concepts as close together. Artists have been found to generate word associations across greater distances than non-artists. Other research has supported the idea that creative people have "leaky" attention—that is, they often notice information that might not be particularly relevant to their immediate task.

Neuroscientific methods for evaluating these processes do not suggest that creativity unfolds in a particular area of the brain.
"Nothing in the brain produces creativity like a gland secretes a hormone," Dean Keith Simonton, a leader in creativity research, wrote in the Cambridge Handbook of the Neuroscience of Creativity. The evidence instead points to a few dispersed networks of activity during creative thought, Beaty says—one to support the initial generation of ideas through associative thinking, another involved in identifying promising ideas, and another for evaluation and modification. A new study, led by researchers at Harvard Medical School and published in February, suggests that creativity might even involve the suppression of particular brain networks, like ones involved in self-censorship.

So far, machine creativity—if you can call it that—looks quite different. Though at the time of the Dartmouth conference AI researchers were interested in machines inspired by human brains, that focus had shifted by the time diffusion models were invented, about a decade ago. The best clue to how they work is in the name. If you dip a paintbrush loaded with red ink into a glass jar of water, the ink will diffuse and swirl into the water seemingly at random, eventually yielding a pale pink liquid. Diffusion models simulate this process in reverse, reconstructing legible forms from randomness.

For a sense of how this works for images, picture a photo of an elephant. To train the model, you make a copy of the photo, adding a layer of random black-and-white static on top. Make a second copy and add a bit more, and so on hundreds of times until the last image is pure static, with no elephant in sight. For each image in between, a statistical model predicts how much of the image is noise and how much is really the elephant. It compares its guesses with the right answers and learns from its mistakes. Over millions of these examples, the model gets better at "de-noising" the images and connecting these patterns to descriptions like "male Borneo elephant in an open field."

Now that it's been trained, generating a new image means reversing this process. If you give the model a prompt, like "a happy orangutan in a mossy forest," it generates an image of random white noise and works backward, using its statistical model to remove bits of noise step by step. At first, rough shapes and colors appear. Details come after, and finally (if it works) an orangutan emerges, all without the model "knowing" what an orangutan is.
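That recipe, training a model to predict the noise that was added and then generating by repeatedly subtracting predicted noise from random static, can be sketched in a few lines. The code below is a schematic toy under stated assumptions rather than a working generator: it uses a standard linear noise schedule, and the learned noise predictor is replaced by an untrained stub where a neural network would normally go.

```python
# Schematic sketch of a diffusion model's two halves (a toy, not a trained model).
# Forward: progressively mix clean data with Gaussian noise over T steps.
# Reverse: start from pure noise and repeatedly remove the *predicted* noise.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # noise schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)           # cumulative signal-retention factor

def forward_noise(x0, t, rng):
    """Make the noisy version x_t of clean data x0 at step t (used for training)."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return x_t, eps                      # the network learns to predict eps from x_t

def predict_noise(x_t, t):
    """Stub standing in for a trained neural network that predicts the added noise."""
    return np.zeros_like(x_t)            # untrained placeholder

def sample(shape, rng):
    """Reverse process: turn random static into a sample, one de-noising step at a time."""
    x = rng.standard_normal(shape)       # start from pure noise
    for t in reversed(range(T)):
        eps_hat = predict_noise(x, t)
        # Remove this step's estimated noise contribution...
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:                        # ...then re-inject a little fresh randomness
            x = x + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

rng = np.random.default_rng(0)
x0 = np.ones((8, 8))                           # stand-in for a tiny training "image"
x_noisy, eps = forward_noise(x0, T - 1, rng)   # almost pure static by the last step
new_image = sample((8, 8), rng)                # with a real predictor: a new image
print(x_noisy.shape, new_image.shape)
```

With a trained predictor in place of the stub, and a text encoder conditioning that predictor on a prompt, the same loop is what turns "a happy orangutan in a mossy forest" into pixels and, as described next, a waveform or spectrogram into a song.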
Musical images

The approach works much the same way for music. A diffusion model does not "compose" a song the way a band might, starting with piano chords and adding vocals and drums. Instead, all the elements are generated at once. The process hinges on the fact that the many complexities of a song can be depicted visually in a single waveform, representing the amplitude of a sound wave plotted against time.

Think of a record player. By traveling along a groove in a piece of vinyl, a needle mirrors the path of the sound waves engraved in the material and transmits it into a signal for the speaker. The speaker simply pushes out air in these patterns, generating sound waves that convey the whole song. From a distance, a waveform might look as if it just follows a song's volume. But if you were to zoom in closely enough, you could see patterns in the spikes and valleys, like the 49 waves per second for a bass guitar playing a low G. A waveform contains the summation of the frequencies of all different instruments and textures.

"You see certain shapes start taking place," says David Ding, cofounder of the AI music company Udio, "and that kind of corresponds to the broad melodic sense."

Since waveforms, or similar charts called spectrograms, can be treated like images, you can create a diffusion model out of them. A model is fed millions of clips of existing songs, each labeled with a description. To generate a new song, it starts with pure random noise and works backward to create a new waveform. The path it takes to do so is shaped by what words someone puts into the prompt.

Ding worked at Google DeepMind for five years as a senior research engineer on diffusion models for images and videos, but he left to found Udio, based in New York, in 2023. The company and its competitor Suno, based in Cambridge, Massachusetts, are now leading the race for music generation models. Both aim to build AI tools that enable nonmusicians to make music. Suno is larger, claiming more than 12 million users, and raised a $125 million funding round in May 2024. The company has partnered with artists including Timbaland. Udio raised a seed funding round of $10 million in April 2024 from prominent investors like Andreessen Horowitz as well as musicians Will.i.am and Common.

The results of Udio and Suno so far suggest there's a sizable audience of people who may not care whether the music they listen to is made by humans or machines. Suno has artist pages for creators, some with large followings, who generate songs entirely with AI, often accompanied by AI-generated images of the artist. These creators are not musicians in the conventional sense but skilled prompters, creating work that can't be attributed to a single composer or singer. In this emerging space, our normal definitions of authorship—and our lines between creation and replication—all but dissolve.

The music industry is pushing back. Both companies were sued by major record labels in June 2024, and the lawsuits are ongoing. The labels, including Universal and Sony, allege that the AI models have been trained on copyrighted music "at an almost unimaginable scale" and generate songs that "imitate the qualities of genuine human sound recordings" (the case against Suno cites one ABBA-adjacent song called "Prancing Queen," for example).

Suno did not respond to requests for comment on the litigation, but in a statement responding to the case posted on Suno's blog in August, CEO Mikey Shulman said the company trains on music found on the open internet, which "indeed contains copyrighted materials." But, he argued, "learning is not infringing." A representative from Udio said the company would not comment on pending litigation. At the time of the lawsuit, Udio released a statement mentioning that its model has filters to ensure that it "does not reproduce copyrighted works or artists' voices."

Complicating matters even further is guidance from the US Copyright Office, released in January, that says AI-generated works can be copyrighted if they involve a considerable amount of human input. A month later, an artist in New York received what might be the first copyright for a piece of visual art made with the help of AI. The first song could be next.

Novelty and mimicry

These legal cases wade into a gray area similar to one explored by other court battles unfolding in AI.
At issue here is whether training AI models on copyrighted content is allowed, and whether generated songs unfairly copy a human artist's style. But AI music is likely to proliferate in some form regardless of these court decisions; YouTube has reportedly been in talks with major labels to license their music for AI training, and Meta's recent expansion of its agreements with Universal Music Group suggests that licensing for AI-generated music might be on the table.

If AI music is here to stay, will any of it be any good? Consider three factors: the training data, the diffusion model itself, and the prompting. The model can only be as good as the library of music it learns from and the descriptions of that music, which must be complex to capture it well. A model's architecture then determines how well it can use what's been learned to generate songs. And the prompt you feed into the model—as well as the extent to which the model "understands" what you mean by "turn down that saxophone," for example—is pivotal too.

Arguably the most important issue is the first: How extensive and diverse is the training data, and how well is it labeled? Neither Suno nor Udio has disclosed what music has gone into its training set, though these details will likely have to be disclosed during the lawsuits.

Udio says the way those songs are labeled is essential to the model. "An area of active research for us is: How do we get more and more refined descriptions of music?" Ding says. A basic description would identify the genre, but then you could also say whether a song is moody, uplifting, or calm. More technical descriptions might mention a two-five-one chord progression or a specific scale. Udio says it does this through a combination of machine and human labeling. "Since we want to target a broad range of target users, that also means that we need a broad range of music annotators," he says. "Not just people with music PhDs who can describe the music on a very technical level, but also music enthusiasts who have their own informal vocabulary for describing music."

Competitive AI music generators must also learn from a constant supply of new songs made by people, or else their outputs will be stuck in time, sounding stale and dated. For this, today's AI-generated music relies on human-generated art. In the future, though, AI music models may train on their own outputs, an approach being experimented with in other AI domains.

Because models start with a random sampling of noise, they are nondeterministic; giving the same AI model the same prompt will result in a new song each time. That's also because many makers of diffusion models, including Udio, inject additional randomness through the process—essentially taking the waveform generated at each step and distorting it ever so slightly in hopes of adding imperfections that serve to make the output more interesting or real. The organizers of the Dartmouth conference themselves recommended such a tactic back in 1956.

According to Udio cofounder and chief operating officer Andrew Sanchez, it's this randomness inherent in generative AI programs that comes as a shock to many people. For the past 70 years, computers have executed deterministic programs: Give the software an input and receive the same response every time. "Many of our artist partners will be like, 'Well, why does it do this?'" he says.
"We're like, well, we don't really know." The generative era requires a new mindset, even for the companies creating it: that AI programs can be messy and inscrutable.

Is the result creation or simply replication of the training data? Fans of AI music told me we could ask the same question about human creativity. As we listen to music through our youth, neural mechanisms for learning are weighted by these inputs, and memories of these songs influence our creative outputs. In a recent study, Anthony Brandt, a composer and professor of music at Rice University, pointed out that both humans and large language models use past experiences to evaluate possible future scenarios and make better choices.

Indeed, much of human art, especially in music, is borrowed. This often results in litigation, with artists alleging that a song was copied or sampled without permission. Some artists suggest that diffusion models should be made more transparent, so we could know that a given song's inspiration is three parts David Bowie and one part Lou Reed. Udio says there is ongoing research to achieve this, but right now, no one can do it reliably.

For great artists, "there is that combination of novelty and influence that is at play," Sanchez says. "And I think that that's something that is also at play in these technologies." But there are lots of areas where attempts to equate human neural networks with artificial ones quickly fall apart under scrutiny. Brandt carves out one domain where he sees human creativity clearly soar above its machine-made counterparts: what he calls "amplifying the anomaly." AI models operate in the realm of statistical sampling. They do not work by emphasizing the exceptional but, rather, by reducing errors and finding probable patterns. Humans, on the other hand, are intrigued by quirks. "Rather than being treated as oddball events or 'one-offs,'" Brandt writes, the quirk "permeates the creative product."

He cites Beethoven's decision to add a jarring off-key note in the last movement of his Symphony no. 8. "Beethoven could have left it at that," Brandt says. "But rather than treating it as a one-off, Beethoven continues to reference this incongruous event in various ways. In doing so, the composer takes a momentary aberration and magnifies its impact." One could look to similar anomalies in the backward loop sampling of late Beatles recordings, pitched-up vocals from Frank Ocean, or the incorporation of "found sounds," like recordings of a crosswalk signal or a door closing, favored by artists like Charlie Puth and by Billie Eilish's producer Finneas O'Connell.

If a creative output is indeed defined as one that's both novel and useful, Brandt's interpretation suggests that the machines may have us matched on the second criterion while humans reign supreme on the first. To explore whether that is true, I spent a few days playing around with Udio's model. It takes a minute or two to generate a 30-second sample, but if you have paid versions of the model you can generate whole songs. I decided to pick 12 genres, generate a song sample for each, and then find similar songs made by people. I built a quiz to see if people in our newsroom could spot which songs were made by AI.

The average score was 46%. And for a few genres, especially instrumental ones, listeners were wrong more often than not.
When I watched people do the test in front of me, I noticed that the qualities they confidently flagged as a sign of composition by AI—a fake-sounding instrument, a weird lyric—rarely proved them right. Predictably, people did worse in genres they were less familiar with; some did okay on country or soul, but many stood no chance against jazz, classical piano, or pop. Beaty, the creativity researcher, scored 66%, while Brandt, the composer, finished at 50% (though he answered correctly on the orchestral and piano sonata tests).

Remember that the model doesn't deserve all the credit here; these outputs could not have been created without the work of human artists whose work was in the training data. But with just a few prompts, the model generated songs that few people would pick out as machine-made. A few could easily have been played at a party without raising objections, and I found two I genuinely loved, even as a lifelong musician and generally picky music person.

But sounding real is not the same thing as sounding original. The songs did not feel driven by oddities or anomalies—certainly not on the level of Beethoven's "jump scare." Nor did they seem to bend genres or cover great leaps between themes. In my test, people sometimes struggled to decide whether a song was AI-generated or simply bad.

How much will this matter in the end? The courts will play a role in deciding whether AI music models serve up replications or new creations—and how artists are compensated in the process—but we, as listeners, will decide their cultural value. To appreciate a song, do we need to picture a human artist behind it—someone with experience, ambitions, opinions? Is a great song no longer great if we find out it's the product of AI?

Sanchez says people may wonder who is behind the music. But "at the end of the day, however much AI component, however much human component, it's going to be art," he says. "And people are going to react to it on the quality of its aesthetic merits."

In my experiment, though, I saw that the question really mattered to people—and some vehemently resisted the idea of enjoying music made by a computer model. When one of my test subjects instinctively started bobbing her head to an electro-pop song on the quiz, her face expressed doubt. It was almost as if she was trying her best to picture a human rather than a machine as the song's composer.

"Man," she said, "I really hope this isn't AI."

It was.
-
IMAGE-ENGINE.COM
Dune: Prophecy Case Study

Set 10,000 years before the birth of Paul Atreides, the HBO prequel Dune: Prophecy follows two Harkonnen sisters as they combat forces that threaten the future of humankind and establish the fabled sect that will become known as the Bene Gesserit. Image Engine created visual effects for 208 shots across episodes 1 and 3-6 of this series, bringing the world of Arrakis to life. From the awe-inspiring desert landscapes to the intricate, towering structures of the Sisterhood's complex, our VFX work captured the beauty and aesthetic of the Dune universe.

This case study shares the challenges and successes of our VFX work, illustrating the technical artistry that made Dune: Prophecy an unforgettable extension of the Dune legacy, with our crew:

- Cara Davies, visual effects executive producer
- Martyn Culpitt, visual effects supervisor
- Viktoria Rucker, visual effects producer
- Jeremy Mesana, animation supervisor
- Adrien Vallecilla, CG supervisor
- Xander Kennedy, CG supervisor
- Daniel Bigaj, compositing supervisor
- Francisco Palomares, compositing supervisor
- Mariusz Wesierski, FX TD
- Rob Richardson, head of FX
- Daniel James Cox, concept artist
- David Bocquillon, concept artist
- Dan Herlihy, art director at Territory Studio

Sand dreams

In the opening episode of Dune: Prophecy, our team created a series of complex sand FX simulations during a haunting vision in Raquella's nightmare, involving the mighty sandworm of Arrakis, Shai-Hulud. The sequence shows the immense power of the sandworm as it devours the Sisterhood complex, built from sand, amid the desert.

"The surreal nature of this sequence posed multiple challenges," says Martyn Culpitt, VFX supervisor at Image Engine. "We had to recreate the Sisterhood structures entirely from sand and ensure they collapsed in a way that balanced dreamlike fluidity with realism."

"Capturing the scale and movement of the sandworm and the desert was critical," notes Viktoria Rucker, VFX producer. "Achieving the immense scale and fluidity of the sand took innovative approaches and required pushing our particle system to simulate thousands of sand grains in motion."

Each sand pass was meticulously crafted to show the sandworm's devastating force, annihilating the complex with dust and debris swirling into the desert landscape. Extensive iterations of simulations in sand movement, lighting, and volumetric dust were necessary to create a dynamic sequence true to the Dune aesthetic. The sandworm's scale had to be flawlessly integrated into the environment.

"Even though this was a dream, our FX team had to ensure the scale between the worm, the complex, and the desert felt plausible," explains Rob Richardson, head of FX. "We used several simulations for the collapse, and certain simulations were even used as inputs for other simulations to achieve the visual complexity required."

"There was a lot of preparation required for the incoming 3D assets to ensure the relative scales between the worm, the Sisterhood complex and the surrounding desert made sense and that the velocity of the worm was physically plausible."

The complex had to be fractured to reflect its inevitable destruction under the worm's force. "We fractured the Sisterhood complex geometry into hundreds of pieces, then created an RBD simulation for the larger building components. Once we had the timing and composition right, the transform data from the RBD simulation was accessed from inside a grain particle solver," adds Rob.
When simulating sand interacting with the sandworm, the team used art direction to ensure the sand moved naturally and seamlessly. Rob elaborates: "We needed to make sure that the sand collided with the internal structures of the worm—its hairs or teeth—so we developed techniques to control where and how the sand flowed between them."

This data was used to animate the sand and its neighbour constraints: when the calculated torque applied to a constraint went above a threshold, the sand was allowed to break apart. The solver then created attributes so that the team could emit even more grains of sand in another post-simulation of smaller-scale sand.

"For rendering purposes, we needed to ensure that we had a volume of particles, rather than just an outside coating, for sub-surface scattering, but not so many points that it became too memory intensive to render," says Rob. "We created an up-res technique that took the simulations as an input, so we could dial up and down the number of points until we found the sweet spot of having enough density and being able to fit the data into RAM."

"The volumetric layers of dust were also generated from the several layers of sand simulations, and in many cases we split those layers down even further by partitioning them spatially and wedging the domains, then re-combining them later for more efficient rendering," explains Rob.
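Away from Houdini, the break-when-the-load-exceeds-a-threshold idea Rob describes, along with the up-res dial used for rendering density, can be sketched generically. Everything in the snippet below (the load values, the threshold, the data layout, the jitter) is a simplified stand-in for the studio's actual solver setup, intended only to show the shape of the logic.

```python
# Simplified, generic stand-ins for two ideas described above (not the studio's
# actual Houdini setup): breaking grain constraints above a load threshold, and
# an "up-res" dial that scatters extra render points around each simulated grain.
import numpy as np

rng = np.random.default_rng(1)

n_grains = 1000
positions = rng.uniform(0.0, 1.0, size=(n_grains, 3))

# Constraints: pairs of neighbouring grain indices plus the torque-like load
# each pair currently carries (random numbers here, solver outputs in practice).
pairs = rng.integers(0, n_grains, size=(5000, 2))
loads = rng.exponential(scale=1.0, size=len(pairs))

BREAK_THRESHOLD = 2.5
keep = loads < BREAK_THRESHOLD               # constraints that survive this step
pairs = pairs[keep]                          # broken pairs are free to separate
print("constraints broken this step:", int(np.count_nonzero(~keep)))

def up_res(points, copies_per_point, jitter=0.01):
    """Scatter extra points around each grain so renders have interior density."""
    offsets = rng.normal(scale=jitter, size=(len(points), copies_per_point, 3))
    return (points[:, None, :] + offsets).reshape(-1, 3)

dense = up_res(positions, copies_per_point=8)  # dial up or down against memory
print("render points:", len(dense))
```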
The sandworm had a pre-existing look, well defined in the films, that our team was asked to match as closely as possible. When referencing the worm pushing through the sand, the team noticed that the sand would bulge up before breaking apart and generating dust. We created procedural deformers and forces so that we could art direct the amount of bulge depending on the shot composition, as well as artistically defining the breakup area and manipulating velocity fields for the volumetric dust solvers. There were multiple simulations depending on which part of the worm was interacting with the sand, be it the front of the worm or the sides, or whether the worm was submerged or above ground.

"The worm would swallow much of the sand as it moved through the desert," Rob notes. "This required additional simulation work that was dependent on animated velocity fields that would funnel the sand inwards as well as collide with the mouth hairs and teeth of the worm."

In another surreal vision, a cascade of sand descends from a shrinking hole in the ceiling. The brief for the FX team was again to create something which had an otherworldly feel to it while remaining somewhat grounded in physics. "The hole in the ceiling revealed an upside-down environment containing a pool room," shares Rob. "But instead of water falling, it was sand cascading like a waterfall. This required extensive FX sand simulations to get the weight and movement of the sand just right."

Creating the flowing sand was no small feat. The FX team ran multiple simulations, carefully adjusting the interaction between the grains of sand and the play of light and shadow. The goal was to evoke both the natural flow of falling sand and the eerie, surreal aesthetic of the vision. "The scene contained two plates stitched together and animated to move apart," Rob explains. "The set geometry from the plates was blended to match the stitch, creating the illusion of a seamless environment."

Once the foundational look was approved, the team fine-tuned the details. "We started by spacing the emitters and dialing in the turbulence for the grain and volumetric solvers on a static frame," Rob continues. "But as the hole shrank and rose, the next challenge was to have the sand and dust fall gracefully without appearing visually jarring."

Bringing the desert into the war room

In Dune: Prophecy, the holotable serves as a vital strategic tool, used by key characters to analyze scenarios, track events across planets, and explore possible futures based on historical and genetic data. In this sequence, Emperor Corrino is woken from a bad dream and enters the war room to activate the large, interactive holotable. He watches a projection in which Desmond Hart survives an attack, only to be devoured by a giant sandworm in the Arrakis desert.

This sequence required the seamless blending of two distinct environments: the fully lit, expansive desert and the dimly lit, enclosed war room. Achieving this balance meant making the hologram appear semi-transparent and slightly distorted while still integrated into the scene. The team essentially "rebuilt" the desert environment through the lens of a projector, deconstructing rendered images of the desert and adding dimensionality to ensure the hologram had visual depth and accuracy.

"I'm really proud of the details from the hologram we created," states Xander Kennedy, CG supervisor. "The hues of the highlights and the semi-transparent shadows provide enough detail to be a hologram, but enough room to see through to the effects that build up the surrounding image. The team did a pretty outstanding job across all departments. The level of communication and collaboration that was necessary to pull this off was a feat of its own."

The hologram table itself was a technical marvel, featuring hundreds of independent lights and projectors. Precise coordination between departments was essential to maintain consistency across all shots, from wide angles to close-ups. "One of the biggest challenges was maintaining two worlds—the war room where the projection is taking place and the actual Arrakis desert, which was hundreds of times larger," explains Xander. "The concept of the overall look meant that the Arrakis desert had to be fully lit as if it were actually photographed independently of the war room."

Martyn further elaborates on the intricate process: "The multi-scene scale integration and cross-pollination of FX, lighting, and compositing to create a projected hologram look was complex. These departmental elements had to be carefully considered and work perfectly in sync to maintain a correct scene/scale for connection from table to hologram to the final look."

Specific passes for the holotable

"On the FX side, we had two main parts that contributed to the look: a field of volumetric lights, which would react to the scene with geometry GOBOs that were generated from terrain details," describes Mariusz Wesierski, FX TD. "The other part was the reverse: holographic streaks coming from the geometry and shooting towards the light sources. To keep RAM usage low and get sharp lines without having to generate high-resolution volumes, we used motion blur in a creative way. The table was divided into overlapping circular grids projected from the camera onto the scene in 'table space'. Each grid would have its velocity vectors pointed at its light sources. The beauty passes from the desert would then be used as a texture and rendered with motion blur to get pixel-perfect streaks without the use of volumes. Arnold's shutter curve was also used to control how the rays fade off."
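Stripped of the renderer, the streak trick Mariusz describes amounts to smearing each pixel of a beauty pass along a direction that points at its light source, so that highlights stretch into rays without any volume being simulated. The sketch below does that smear directly on a small placeholder image; the image, the light position, and the sample count are illustrative assumptions, not production values.

```python
# Toy version of "streaks via motion blur": smear a beauty image along per-pixel
# directions that point at a light source. Image, light position and sample
# count are placeholders; production used the renderer's motion blur instead.
import numpy as np

rng = np.random.default_rng(2)
beauty = rng.random((64, 64))            # stand-in for a rendered beauty pass
light = np.array([32.0, 32.0])           # light source position in pixel coords

def streak(image, light_pos, n_samples=16, length=12.0):
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            direction = light_pos - np.array([y, x], dtype=float)
            norm = np.linalg.norm(direction)
            if norm > 1e-6:
                direction /= norm
            acc = 0.0
            for s in range(n_samples):   # average samples taken along the direction,
                p = np.array([y, x], dtype=float) + direction * (length * s / n_samples)
                py = int(np.clip(p[0], 0, h - 1))
                px = int(np.clip(p[1], 0, w - 1))
                acc += image[py, px]     # which acts like motion blur along a velocity
            out[y, x] = acc / n_samples
    return out

streaked = streak(beauty, light)
print(streaked.shape)
```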
Also featured in the hologram table projection is the spice harvester, a large, heavy, mobile vehicle designed to harvest melange, a fictional psychedelic drug in the Dune universe. These machines would harvest and process the spice from the sand of the desert floor.

"For the spice harvester redesign, we sought to give the vehicle an updated design, fitting the show's timeline that predates the events depicted in the films," says Daniel Cox, concept artist. "Research played a key role in this process, as we examined the original movie vehicle and learned that it had been inspired by NASA's space shuttle transport carriers."

"A series of small thumbnail silhouettes were created to provide a wide range of options for the client. After a review, the design that resonated most was one where the harvester's main chassis was angled at roughly 20 degrees upwards. This choice was intentional, to give the vehicle a 'hot rod' feel, suggesting a modified, upgraded version of its predecessors. The angle also reinforced the notion that it expels sand at higher velocities, creating a dynamic, more aggressive harvesting process. Concepts were also done for the damage to the rear of the vehicle, complete with embers and a smoke plume."

The chaos of war

Episode 1 also showcases a massive battle featuring a towering battlefield robot—a four-legged mechanical giant—facing off against an advancing army. This sequence included full CG shots and detailed set extensions to transform the battlefield into a sprawling, rubble-filled warzone.

"For the battlefield robot, the goal was to create a machine of war that was both imposing and practical," explains Daniel Cox, concept artist. "The client initially referred to it as a 'mech,' which inspired us to craft a heavily armoured design with a unique, standout look. We wanted something that felt like a tank but with more flexibility and agility."

The robot's four-legged design drew inspiration from an elephant's gait, but in an enhanced, mechanized form. "We gave each leg ball joints, which allowed for incredible movement and maneuverability," Daniel shares. "This gave the machine a dynamic presence—something that could navigate the battlefield with both strength and precision."

While the main structure featured a hard-surface, armoured chassis, advanced "nanotechnology" was suggested in the design of its arms and legs through geometric patterns. "We wanted to balance its massive power with a sense of high-tech sophistication, embedding subtle details," says Daniel.

The robot's primary weapon was a laser emitter integrated into its "eye". "The eye laser was a crucial design element," Daniel notes. "It needed to reinforce its nature as a formidable opponent, cutting through the battlefield with precision and power." The final design balanced function and intimidation, bringing to life a machine designed for the kind of high-stakes combat emblematic of the Dune universe.

"At the beginning of this sequence, one of the Atreides soldiers throws a grenade at the four-legged robot," recalls David Bocquillon, concept artist.
"The goal was to create a grenade design with a clear and simple silhouette—something that could be visually read from a distance—while still embodying the technical complexity of a weapon capable of unleashing an electromagnetic pulse strong enough to completely paralyze an enormous robot."

David continues, "For animation, we wanted the EMP grenade to have a tactile, mechanical feel, so we incorporated the idea of it being magnetized to the robot. This concept was inspired by Dune: Part Two, where a landmine is shown snapping onto the surface of a harvester. That detail added an extra layer of realism and tension to the grenade's behaviour in this sequence."

As soldiers charged toward the battlefield robot, it retaliated with a devastating laser blast, hurling them back in a chaotic eruption of smoke, dust, and debris. Each element of the scene, from the fiery explosions to the dense clouds of shrapnel, added to the raw energy of battle. "There was a lot of back-and-forth between departments to make sure the final look was unified," recalls Martyn. "The challenge wasn't just in creating the explosions or the robot, but in blending everything seamlessly into a single, believable environment."

The FX team meticulously crafted the explosion effects, adding layers of detail such as dynamic debris trajectories and realistic smoke simulations. These elements worked in tandem to heighten the realism of the war-torn landscape, ensuring that the mech felt like an integral part of the action.

"This battle takes place in a junkyard, and it was a unique challenge because it was designed entirely as a full CG environment, without any prior conceptual direction from the client," explains Daniel. "We aimed to create a desolate, atmospheric setting filled with discarded robot parts. The composition was deliberately expansive—a vast canvas of industrial detritus—highlighting the utilitarian and unforgiving nature of the junkyard."

He continues, "At the heart of the environment was the melting plant, where mech parts were melted down. It was a central visual element, surrounded by a sense of looming industry and decay. We used backlighting to cast silhouettes, emphasizing the harsh industrial processes. Atmospheric fog swirling through the junkyard added an extra layer of mystery and foreboding, underscoring the bleak nature of the scene."

The mechanical lizard

During the engagement party in episode 1, a little boy pulls out a small, seemingly harmless toy ball from his pocket. Suddenly, the ball transforms into a mechanical lizard that darts around the room, shocking the royal guests.

"The challenge here was to create this animated mechanical lizard that was part toy and part companion for the prince, but also represented the forbidden AI technology," explains Jeremy Mesana, animation supervisor. "We needed it to feel alive and aware, with the unpredictable energy of a real creature, but also retain the stop-and-start staccato movements of a mechanical toy."

The transformation from ball to lizard involved meticulous attention to detail. Each metallic panel of the lizard's shell was individually animated to unfurl smoothly, creating a sense of mechanical intricacy. "Having to incorporate elements of the lizard state into the ball state and vice versa, so that the transformation could flow between both end states believably, was tricky," Jeremy reflects.
"As was convincingly hiding other parts that existed only in one state but not both."

The resulting movements combined precision with a touch of abruptness, amplifying the lizard's unique identity as both lifelike and artificial. When the lizard is stabbed later in the scene, its movements shift, becoming erratic and jerky as it malfunctions. Despite its injury, the lizard retains a sense of life and consciousness. The team leaned into a more staccato, mechanical movement once the lizard was stabbed through the body, to show that, despite being run through with a knife, it was still functioning, only now more obviously in its robotic nature.

Illuminating the genetic archive

In Dune: Prophecy, a genetic thinking machine named Anirul is revealed to be a secret, high-functioning computer that survived the war. Located within a cavern, this immense data center houses the empire's vast genetic archive, serving as a living repository of its bloodlines and histories, spanning countless generations. When activated, intricate data streams pulse with light, forming holographic trees—dynamic representations of the flow of genetic information—illuminating the environment with a breathtaking organic quality. As the Bene Gesserit sisters explore this archive, holographic family trees and bloodline data materialize, adding to the visual experience.

"The Anirul environment was designed to feel alive, like a character in the episode," says Adrien Vallecilla, CG supervisor. "Each asset and graphic had to pulse, fade, and move with purpose, all while interacting with realistic reflections and lighting. Organizing these elements to tell the story elegantly and seamlessly was one of the most challenging but rewarding aspects of the work."

"With such strong concept work successfully translated into the asset, compositing played a crucial role in bringing the final look to life," says Dan Bigaj, compositing supervisor. "Collaborating closely with the lighting and asset teams, we iterated through multiple rounds of look development to ensure fidelity to the original concept and the showrunner's vision. Once we achieved a look that the client loved, we optimized the workflow by integrating as much of the compositing work as possible into the asset and lighting setup. This approach streamlined the process, allowing us to complete the final compositing work with maximum efficiency."

To maintain consistency across all of the Anirul sequences, Dan developed a comprehensive Nuke template that consolidated key elements and compositing treatments. "This not only ensured a cohesive visual style but also enabled our small compositing team to meet the tight production deadlines without compromising on quality," he explains.

One of the key challenges in look development was determining how to handle the atmosphere within Anirul's vast cavern. "Though subtle in the final images, this atmospheric layer played a significant role in enhancing the sense of mystery surrounding the immense thinking machine," says Dan. "A fully CG approach proved too costly to render for such a large environment, so we implemented a Nuke particle system to dynamically generate atmospheric sprites. This solution allowed us to create a responsive, volumetric effect that was both efficient to render and required minimal manual intervention in each shot."

Another successful application of Nuke's particle system was the addition of dust motes to the scene.
"Though a minor detail, these particles subtly enhanced the interplay of light in the foreground, helping to sell the abundance of light sources in a natural and immersive way," notes Dan.

Finally, to ground the sequence in realism, Dan and his compositing team focused on ensuring that all animated light play—from the trees and servers to the graphical elements—interacted believably with the lens and plate photography. "We developed a dynamic, lightweight flare system driven by luminance, mimicking the anamorphic flare characteristics captured in the original footage," he says. "This final touch really brought Anirul to life, seamlessly integrating the CG elements into the final composite and delivering a visually compelling result."
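A luminance-driven flare of the kind Dan mentions can be approximated very simply: isolate pixels above a brightness threshold, smear them with a wide horizontal kernel to get the stretched, anamorphic-style streak, and add the result back over the frame. The snippet below is a generic approximation of that idea, not Image Engine's Nuke setup; the frame, threshold, and kernel width are arbitrary choices for the example.

```python
# Generic approximation of a luminance-driven anamorphic-style flare (not the
# studio's Nuke setup): bright pixels are smeared horizontally and added back.
import numpy as np

rng = np.random.default_rng(3)
frame = rng.random((64, 64)) ** 4        # mostly dark frame with a few hot pixels

THRESHOLD = 0.6                          # luminance level that starts flaring
KERNEL_WIDTH = 21                        # wide and horizontal-only = "anamorphic" look

bright = np.where(frame > THRESHOLD, frame, 0.0)       # luminance-driven source
kernel = np.hanning(KERNEL_WIDTH)
kernel /= kernel.sum()

# Convolve each row with the horizontal kernel to stretch highlights sideways.
flare = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, mode="same"), axis=1, arr=bright
)

result = np.clip(frame + 0.8 * flare, 0.0, 1.0)        # add the flare back over the frame
print(result.shape)
```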
Concept art of the ring

"The ring plays an important part in the Bene Gesserit sisters' cavern, with many pivotal scenes revolving around it," explains David Bocquillon, concept artist. "Originally, it was a practical prop on set, but we ended up re-designing it and replacing it with a CG asset. We needed to elevate its visual impact while maintaining continuity with the original concept and shape."

He continues, "We focused on refining the ring's structure, especially the continuity of the band of light along the interior, which played an important role in the visual storytelling. Additional details, such as intricate ornaments and old Bene Gesserit inscriptions, were incorporated because we wanted to make it feel deeply rooted in the lore of Dune. These elements catch the light beautifully and give the design a sense of complexity and believability, fitting seamlessly into the world."

Once Territory Studio had delivered the initial designs and graphics elements for Anirul's visual identity, Image Engine built upon their work to bring the environment to life. Territory developed intricate concept art and a graphical language for the archival systems, which served as the foundation for the sequence. From there, Image Engine's team extended and integrated these elements into a fully realized 3D environment, refining the designs and adding storytelling layers to ensure they aligned seamlessly with the show's narrative and aesthetic.

"The client emphasized that the machine should feel like an intelligent system—a library of bloodlines and dynasties," explains Dan Herlihy, art director at Territory Studio. "Our goal was to create a design that felt like an archival network with a synaptic quality, balancing both beauty and the capacity to house thousands of years of data."

Taking these initial concepts, Image Engine elevated the visual language, transforming the graphics into a fully dynamic and immersive environment. The glowing holograms and pulse-like animations were designed with storytelling at the forefront, ensuring the visual complexity of Anirul wasn't just a backdrop but an integral part of the narrative.

"In order to deliver this sequence on a tight schedule, we decided to build attributes for each holographic asset, which allowed our animation team greater control during production," says Adrien. "The goal was for the animators to bring the trees to life early in the production stage. Choosing this workflow, instead of a more traditional downstream approach for the holograms, allowed us to approve the storytelling of the sequence early in the production process, giving us time at the end to focus on the artistic finalization of the sequence."

Image Engine also took the lead in conceptualizing the servers and providing ideas on how to incorporate them into the surrounding rock formations. "The servers were a critical part of the environment, acting as nodes in the archival network," Adrien explains. "The goal was to create something visually unique that explores communication through technology and nature. We decided on a mixture of pulsing calligraphy on the tanks and flowing particle effects across the branches, merging technology with the organic texture of the stone to create a final look that feels both ancient and advanced."

"The cavern sequence posed a particularly complex challenge," explains Daniel, "as it was essential to visually represent the movement of data and information within a mysterious and potentially ancient subterranean environment. The cavern itself was intended to function not just as a physical space but as a conduit for the transmission of information, making it a key narrative device. The challenge was to effectively communicate this flow of data while still preserving the haunting, otherworldly atmosphere of the cavern."

"Initially, the concepts for how the data would manifest within the cavern leaned toward a more organic design, with flowing, curving lines that mimicked natural forms," Daniel continues. "This idea was grounded in the concept of information as a living, breathing entity, almost like the lifeblood of the cavern itself."

"However, as the design process evolved, the client favoured a shift towards a more structured, mechanical aesthetic. This was in keeping with the established visual motif of the show, where the pervasive blue light served as a constant thematic anchor. The decision was made to focus on geometric shapes and sharp, angular data lines that would interact with the glowing blue light, making the data flow seem more mechanical and systematic. The shapes were placed around the blue light in a deliberate arrangement, creating a visual harmony between the data and the environment and reinforcing the show's overarching technological tone."

"Ultimately," Daniel concludes, "the cavern sequence became a blend of organic inspiration and mechanical precision, perfectly aligning with the show's visual language. The data, no longer just a passive element, became an active participant in the narrative, adding to the visual complexity of the cavern and reinforcing the themes of control and hidden knowledge that run throughout Dune: Prophecy."

The claw and the truth revealed

In the final episode of Dune: Prophecy, a chilling revelation demonstrates the calculated precision of a Thinking Machine's robotic claw. This imposing mech performs a disturbing microsurgery to attach advanced technology to a human eye. "Compositing played a major role in bringing this creepy concept to life on screen," says Daniel Bigaj, compositing supervisor. "This was an incredibly fun mini-sequence to work on—a compositor's dream (pun intended)—because it demanded a high level of creativity to successfully execute such a unique visual experience."

A key aspect of this scene is that it unfolds from the perspective of the patient whose eye is being operated on.
“That perspective was a major driving factor in the creative decisions I made while crafting the final image,” Dan explains. “I wanted to truly put the audience in the patient’s position—to give them an image as close as possible to what they might experience if they were lying on the operating table themselves.”

To achieve this, Dan studied how human vision reacts in extreme conditions. “I spent a lot of time staring at nothing, adjusting my focus, blinking, squinting, and straining my eyes—really observing how these natural movements affect our perception,” he says. “From the strong vignette to the eye blinks, blurry vision, chromatic aberration, shallow depth of field, flaring, and subtle eye vibrations—every element was carefully crafted to heighten the viewer’s immersion.”

However, replicating the real human experience too accurately wasn’t the goal. “Our eye movements are far too rapid to translate well to the screen, so I had to find a balance—something that felt real but remained visually readable,” notes Dan.

One final detail that helped reinforce the intimate scale of the scene was the addition of floating dust motes. “I made sure they were true to scale, drifting in and out of focus in sync with the patient’s panicked eye movements,” he explains. “It’s a subtle touch, but it adds another layer of realism and tension to the sequence.”

The final result was so unsettling that even showrunner Alison Schapker had chills while watching it. “That was the best reaction we could have hoped for,” Dan says with a smile.
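For readers curious how a patient-POV treatment like this might be assembled mechanically, here is a minimal, hypothetical sketch in plain NumPy rather than a compositing package: it layers a simple chromatic aberration, a radial vignette and a small frame jitter over an input plate. The function name, channel offsets and falloff values are illustrative assumptions, not production settings from the show.

```python
import numpy as np

def patient_pov(frame: np.ndarray, shift_px: int = 2,
                vignette_strength: float = 0.6, jitter_px: int = 1) -> np.ndarray:
    """Toy 'patient's-eye' grade for an RGB float image of shape (H, W, 3)."""
    h, w, _ = frame.shape
    out = frame.astype(np.float32).copy()

    # Chromatic aberration: slide the red and blue channels in opposite directions.
    out[..., 0] = np.roll(out[..., 0], -shift_px, axis=1)
    out[..., 2] = np.roll(out[..., 2], shift_px, axis=1)

    # Radial vignette: full brightness at the centre, darker toward the corners.
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2)) / np.sqrt(2)
    out *= (1.0 - vignette_strength * np.clip(r, 0.0, 1.0) ** 2)[..., None]

    # Subtle 'eye vibration': a tiny random translation, re-rolled every frame.
    dy, dx = np.random.randint(-jitter_px, jitter_px + 1, size=2)
    out = np.roll(out, (dy, dx), axis=(0, 1))

    return np.clip(out, 0.0, 1.0)

# Usage on a synthetic mid-grey plate.
plate = np.full((540, 960, 3), 0.5, dtype=np.float32)
graded = patient_pov(plate)
```

In production each of these effects would of course be its own animatable layer in the comp; the point here is only the order of operations.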
Developing the concept for the Thinking Machine in this sequence was particularly challenging due to its complex nature and unique function in the narrative. “This massive mechanical construct was more than just a machine; it needed to symbolize a disturbing and clinical purpose,” explains Daniel Cox. The concept art team began with over 20 thumbnail sketches to explore its potential shapes, focusing on wide side views to capture the mech’s sheer scale and silhouette. The brainstorming helped pinpoint the most promising direction for the design. Once the client selected a preferred design, the focus shifted to refining the intricate details, ensuring its anatomy was tailored for its specialized purpose.

A critical element of the design was the integration of multiple arms equipped with surgical tools. “Given its role in surgery, the tools needed to strike a balance between plausibility and futurism,” says Daniel. Some instruments were inspired by real-world surgical equipment, while others were entirely fictional, pushing the boundaries of the show’s technological vision.

The silhouette played a vital role in the Thinking Machine’s visual impact since the sequence was heavily backlit with the show’s signature glowing blue light. “The shape had to stand out against this lighting and appear imposing and mysterious,” notes Daniel. As the sequence progresses, the camera introduces a warm glow in the distance. “This was intentional,” Daniel shares, “it’s meant to suggest the presence of a Guild member, whose silhouette was partially revealed. The warm light contrasted with the cool, sterile environment, adding another layer of mystery and hinting at a greater force at play in the narrative.”

Explore the evolution of this intricate sequence—from initial concept art to final CG renders—in the images below.

A nightmare on thin ice

In the season finale, Valya finds herself trapped in a chilling nightmare—a frozen lake where she is battling fierce winds and a relentless blizzard. Reliving a traumatic memory, she struggles across cracking ice, determined to save her brother Griffin from drowning in the ice hole.

The ice lake sequence is as much a psychological battle as a visual spectacle, with the environment reflecting Valya’s turmoil. “The amount of turbulence in the snowstorm was designed to be a reaction to Valya’s emotional state,” explains Rob. “As the sequence progresses, the ice beneath her feet fractures, and black snow is introduced as a manifestation of her internal chaos.”

To enable nuanced control of this dynamic environment, the FX team developed multiple layers of passes for the comp artists, including random ID passes, rest-position noises at various scales, depth from the camera, velocity and proximity to Valya. “We weren’t certain how much artistic flexibility we’d need over the snow’s colour, so we ensured the compositing team could adjust the balance between the black and white snow on top of the levels directly coming from FX,” Rob elaborates.

The trickiest elements to manipulate were the volume layers, where the black and white values would mix very quickly and turn gray. The team experimented with NDC techniques to project particle colours on the volume render passes, but achieved more fluid results by re-rendering the volume layers separately after the compositing team dialled in the distribution of colours.
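As a rough illustration of how utility passes like the ones Rob describes can drive that black-and-white balance downstream, the sketch below mixes two hypothetical snow renders using a rest-space noise pass and a proximity-to-character pass. The pass names, the blend formula and the "chaos" dial are assumptions made for the example; they are not Image Engine's actual setup.

```python
import numpy as np

def mix_snow(white_rgb, black_rgb, rest_noise, proximity, chaos=0.5):
    """Blend white/black snow renders (H, W, 3) using FX utility passes (H, W).

    rest_noise : noise sampled in rest space, 0-1, stable from frame to frame
    proximity  : normalized distance to the character, 0 = right at Valya
    chaos      : global dial for how much black snow is allowed to bleed in
    """
    # More black snow where the noise is high and close to the character.
    mask = np.clip(chaos * rest_noise * (1.0 - proximity), 0.0, 1.0)[..., None]
    return white_rgb * (1.0 - mask) + black_rgb * mask

# Hypothetical inputs, for illustration only.
h, w = 540, 960
white = np.full((h, w, 3), 0.9)
black = np.full((h, w, 3), 0.05)
noise = np.random.rand(h, w)
prox = np.tile(np.linspace(0.0, 1.0, w), (h, 1))
comp = mix_snow(white, black, noise, prox, chaos=0.7)
```

In practice this kind of rebalance would live as merges and grades inside the compositing package rather than a script, but the underlying logic, a mask built from FX data keyed to the character, is the same.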
The FX team also developed procedural techniques to simulate ice cracks forming beneath Valya’s hands as she crawled forward to save Griffin. “The curves for the cracks were extruded and rendered with the ice lake geometry,” Rob adds, “while shaders used ray-depth lookups to the FX geometry to achieve the effect of cracks fracturing and spreading across the ice surface.”

To switch between shots dynamically, the team used proprietary tools inside Houdini, which helped maintain continuity and allowed many shots to be submitted at once. This also gave the lighting and compositing teams consistent data caches between shots.

For the concept art team, conceptualizing the frozen lake environment was equally intricate. Daniel reflects on the evolution of the sequence: “It was originally envisioned with a giant worm erupting from the ice, but the final version took a more atmospheric and psychological approach. The focus shifted to the eerie, unstable nature of the ice itself and the escalating tension.”

One of the most striking designs was a wide, top-down shot showing the vastness of the lake, with cracks in the ice forming the shape of Sh’alud’s mouth—a chilling metaphor that suggested the ice was alive, a thin, fragile boundary between Valya and the lurking unknown dangers below. “The cracks became a symbol of her vulnerability, amplifying the sense of imminent peril,” Daniel notes.

The challenge of making the ice appear natural yet treacherous was met by carefully designing its texture to look jagged, uneven, and fragile. Daniel explains, “To make the environment appear cold, it was conveyed not just through the icy surfaces, but also through the use of lighting and atmosphere. The play of light on the frozen landscape helped highlight the severity of the surroundings, making the viewer feel the biting chill as her footsteps creaked across the surface.”

In its final form, the combination of lighting, ice textures, and the Sh’alud-shaped cracks reinforced the atmosphere of isolation and terror. The icy lake, in the end, became not just a physical environment but a manifestation of the character’s internal terror, making it a haunting and memorable visual in the show.

FX Passes/Layers for Snow

At the end of the sequence, Valya herself begins to dissipate and blend into the storm, metaphorically letting the storm pass through her. This transformation was a technical challenge for the team. “We had some 3D geometry representing Valya,” shares Rob, “but the details of her hair and the folds in her clothing were too complicated to replicate in 3D.” To overcome this, the FX team utilized NDC projection techniques, matching the live-action plate and emitting particles directly from Valya’s body and hair. “Once the particles reached a certain distance, they began to inherit the turbulent forces driving the storm,” Rob adds.
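The NDC-projection idea itself is simple to sketch: push each particle's world position through the shot camera into normalized device coordinates, then read the live-action plate at that screen location to colour the particle. The version below uses a generic 4x4 view-projection matrix and nearest-neighbour plate sampling as stand-ins for the tracked production camera and the filtered reads a real pipeline would use; the array names and the identity camera are placeholders.

```python
import numpy as np

def project_to_ndc(points, view_proj):
    """Project world-space points (N, 3) through a 4x4 matrix to NDC in [0, 1]."""
    homo = np.hstack([points, np.ones((len(points), 1))])   # (N, 4) homogeneous coords
    clip = homo @ view_proj.T                                # camera + projection transform
    ndc = clip[:, :2] / clip[:, 3:4]                         # perspective divide -> [-1, 1]
    return ndc * 0.5 + 0.5                                   # remap to [0, 1]

def sample_plate(plate, ndc):
    """Nearest-neighbour sample of an (H, W, 3) plate at NDC coordinates."""
    h, w, _ = plate.shape
    x = np.clip((ndc[:, 0] * (w - 1)).astype(int), 0, w - 1)
    y = np.clip(((1.0 - ndc[:, 1]) * (h - 1)).astype(int), 0, h - 1)  # flip v into a row index
    return plate[y, x]

# Placeholder inputs: particle birth positions on the character, the shot plate,
# and the tracked camera's view-projection matrix (identity used as a stand-in).
particles = np.random.uniform(-0.5, 0.5, size=(1000, 3))
plate = np.random.rand(1080, 1920, 3).astype(np.float32)
view_proj = np.eye(4)

colours = sample_plate(plate, project_to_ndc(particles, view_proj))  # per-particle RGB
```

Once each particle carries the plate colour it was born under, it can be handed to the simulation and advected into the storm without the projection needing to be repeated.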
The emotional weight of this key moment was further heightened by the interplay of lighting and mood. Drawing from the client’s vision, the team developed a series of keyframe concepts to explore different lighting setups and atmospheric effects that would reflect Valya’s internal battle and transformation. “The goal was to emphasize the strength of the winds and the contrast between the coldness of Lankiveil and the warmth of Arrakis in the background,” David says. This duality of the environment reinforced Valya’s internal conflict while hinting at her deep connection to the larger forces at play. “Her dissolution was a powerful metaphor, showing that she isn’t just battling the storm—she is part of it,” David explains.

By combining the detailed FX work with carefully crafted lighting and symbolic visuals, this final shot captures the haunting beauty and emotional depth that defines the ice lake sequence.

Witness to vengeance

In Dune: Prophecy, the bull is a significant symbol associated with House Atreides, representing their traditions and the dangers they face. In episode 3, “Sisterhood Above All,” Tula Harkonnen infiltrates a bull-hunting event with a sinister agenda of vengeance. Amidst the tension, a Salusan bull appears atop a rocky cliff. Its piercing gaze seems to lock onto Tula, bearing witness to her act. This moment sets an ominous tone, with the bull’s presence foreshadowing the calamity that lies ahead.

Concept artist Daniel Cox explains: “Building on its natural animalistic qualities, we focused on enhancing its size and musculature, adding a level of ‘natural’ armour. The armour design extended from the bull’s horns, down its back and spine, creating a more rugged and defensive profile. This added layer of armour not only increased its survivability but also aligned with the show’s thematic elements of survival in a hostile and dangerous world. The variations in the design were intended to reflect its status as a creature of both beauty and brutality.”

A testament to collaboration and craft

Dune: Prophecy showcases how thoughtful collaboration and technical expertise can bring a legendary universe to life. From Anirul’s intricate holographic library to the chaos of battle, every detail was carefully crafted to serve the story.

“Our goal was to create visuals that felt both grounded in the Dune universe and unique to the Bene Gesserit’s story,” says Martyn Culpitt, VFX Supervisor. “Every scene had to feel massive and intimate at the same time, and that duality is what makes this show so compelling. It was an exciting challenge.”

As the series transports viewers to an era before Paul Atreides’ rise, Image Engine is proud to have contributed to the visual effects of the first season of Dune: Prophecy.
-
WWW.CNET.COM
Your Favorite Patreon Creators Will Soon Be Able to Livestream from the Platform
The new tool lets artists go live, chat, and share content all within Patreon.
-
WWW.SCIENTIFICAMERICAN.COM
The Dire Wolf Hoopla Hides the Real Story: How to Save Red Wolves
Opinion | April 16, 2025 | 4 min read
Rather than resurrect extinct species, cloning technology could save those at risk of dying out, like the red wolf, but only with solid conservation efforts and habitat protections
By Dan Vergano
Image: Colossal Biosciences has cloned four red wolf pups from living red wolves. The technology could aid conservation efforts in saving the species. Credit: Colossal Biosciences

Four cloned red wolves that fell out of the spotlight—hidden amid “de-extinction” hoopla about vanished dire wolves—tell the real secret to saving threatened species. And the answer isn’t a magical cloning technology.

This month Colossal Laboratories & Biosciences announced the birth late last year of three gray wolf puppies with 15 gene variants that belonged to dire wolves (Aenocyon dirus), a large carnivore that went extinct about 13,000 years ago. Colossal, based in Dallas, touted the (adorable, white) genetically engineered pups as a step toward “Making Extinction a Thing of the Past” on its website.

There was pushback. “This is a designer dog. This is a genetically modified gray wolf,” Jacquelyn Gill, a paleoecologist at the University of Maine, told Scientific American. Colossal’s chief science officer Beth Shapiro, also an evolutionary biologist, later called such criticism “fair points” on X but defended the “de-extinction” claim. The company’s dire wolf claims were released before a peer-reviewed study describing them was published (because the New Yorker released results ahead of schedule, according to Shapiro), which has also added to scientific skepticism.

Colossal simultaneously announced it had cloned four red wolf pups from three living adult red wolves. Red wolves (Canis rufus) once ranged from Texas to the Carolinas, but the species was declared extinct in the wild in 1980. Fewer than 20 remain alive today in captivity, all tracing their genes to just 12 founding wolves. “Adding Colossal’s red wolves to the captive breeding population would increase the number of founding lineages by 25 [percent],” the company said in promotional material.

That’s swell. But history shows there is a lot more than genetics needed to save red wolves. Red wolf numbers plummeted in the 1970s because of lost habitat and hunting, as well as inbreeding with coyotes. More than fresh genes, red wolves need habitat and careful conservation, or else they’ll die off again. We know because we tried before.

Starting in 1973, a U.S. Fish and Wildlife Service captive-breeding program built up their population to more than 120 red wolves, and later tried to reintroduce them into the wild in North Carolina. But when that program ended during the first Trump administration, their numbers plummeted to only seven wolves. People driving to and from the Outer Banks on I-64 kept killing the wolves, while hunters shot them as coyotes. The FWS program was restarted in 2021 but now has only about 20 wolves. (On top of everything else, the Trump administration in February fired about 5 percent of the agency’s workforce.)
Red wolves face some big challenges. “I think that gene modification is a reasonable subject of basic scientific investigation that could someday offer assistance to red wolf (and other) conservation,” says Benjamin Sacks of the University of California, Davis, an expert on red wolf genetics. “But [it] is not likely to be a particularly useful tool for red wolf conservation at present.”

Colossal’s four red wolves derive their genetics from cloning-amenable blood cells that serve as the “progenitors” to blood vessel linings, called endothelial progenitor cells. Along Louisiana’s Gulf Coast, researchers collected these progenitor cells from blood samples of “ghost wolves,” coyotes with large amounts of red wolf ancestry. In a genuine scientific feat, Shapiro and her team placed the nucleus of some of these progenitor cells inside donor dog eggs in a cloning process known as somatic cell nuclear transfer; they then implanted the resulting embryos into surrogate dog mothers that gave birth to the pups, which were clones of the blood-drawn ghost wolves.

“By cloning the red wolves from Louisiana while also leaving these individuals in the wild, their DNA both persists in the wild population and is brought into a research setting where they have the potential to contribute new red wolf ancestry to the captive population,” Shapiro tells Scientific American by e-mail. Colossal is currently working with wildlife agencies, she adds, “to find a path that allows this.”

All of that seems a little elaborate, compared with just breeding the Louisiana wolves with the FWS program wolves the old-fashioned way, and then letting them go free afterward. “We have little or no idea what unseen effects the cloning process has on offspring and future generations, so why risk it unnecessarily?” says Sacks.

More fundamentally, scientists currently don’t even have a good picture of red wolf genetics. While they estimate that 80 percent of ghost wolf genes are from red wolf ancestors, and not coyotes, they don’t know for sure which ones are which. Canada’s so-called “eastern” wolves might be a closer match to red wolves, for example, and thus better sources of genetic diversity for their revival, again through old-fashioned breeding.

The danger of thinking that genetic technology can replace the hard work of conserving habitat and rebuilding healthy populations of threatened animals soon became clear with the dire wolf news, when the new U.S. Department of the Interior secretary, Doug Burgum, who oversees the FWS, pointed to the genetically engineered “dire wolf” pups to undermine the Endangered Species Act. “If we’re going to be in anguish about losing a species, now we have an opportunity to bring them back,” he told Interior Department employees, according to the Washington Post. “Pick your favorite species and call up Colossal.”

Sure. Let’s kill off every endangered creature. Billionaire businessmen like Burgum will just order up a new one from the gene factory for their private amusement. Shades of Jurassic Park. In the meantime, his boss just signed an executive order calling for “increased timber production” on the federal lands that once housed this vanished wildlife. What a wonderful future. Colossal did itself no favors by touting Burgum’s endorsement on its website.

“One of my concerns about all the hype about ‘de-extinction’ is the drive to hurry into the spotlight before enough is understood, which risks making Colossal mistakes, and to distract attention and funding from the more pressing priorities,” says Sacks, by e-mail.
“I also get nervous because the science is being done behind closed doors for profit.”

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
-
WWW.EUROGAMER.NET
Xbox's 'stream your owned games to console' feature now available to all via Game Pass Ultimate
As part of April Update.
Image credit: Microsoft
News by Matt Wales, News Reporter. Published on April 16, 2025

Following months of Xbox Insider testing, all Xbox Game Pass Ultimate subscribers can now stream a "select" number of their owned games to Xbox Series X/S or Xbox One consoles, without needing to install them first. It's one of several Xbox features launching in April.

Microsoft first discussed giving Xbox players the ability to stream their owned games back in 2019, but the feature suffered multiple delays. It finally arrived last November, but was only available through a limited number of platforms - namely TVs and via browsers on supported devices such as tablets, smartphones, and Meta Quest headsets. A month later, streaming to Xbox consoles was introduced to Insider testing, and it's now available to everyone - provided they have an Xbox Game Pass Ultimate subscription, at least.

The feature's arrival means all Ultimate members can now stream games outside of those included in the Game Pass library, but only if they appear on Microsoft's list of supported titles. At present, "100+" games are compatible with the feature, and more are set to be added over time. Recent additions include Lost Records: Bloom & Rage Tape 2, Wanderstop, and Disney Epic Mickey: Rebrushed - with the full list available on the Xbox website.

To begin using the feature, Game Pass Ultimate subscribers should fire up their console and navigate to My games & apps > Full library > Owned Games. All supported titles include a cloud badge on their game page, and streaming is started by selecting the game and choosing the Play with Cloud Gaming option. It's also possible to begin streaming directly from the Store app after purchasing a compatible game.

Alongside owned game streaming, April's Xbox update brings a number of other improvements and additions to console. Xbox's 'Free up space' screen, for instance, now highlights duplicate copies of games and games that players no longer have access to, and Microsoft has introduced new Game hubs to Xbox consoles too.

Additionally, Xbox remote play has seen changes, with Microsoft removing the feature from the Xbox app on mobile and instead requiring players to access it from their mobile device's browser, which the company insists will "make it easier for our teams to optimise the streaming experience and build new features". The change will also "soon" bring remote play to more devices, including Samsung Smart TVs, Amazon Fire TV devices, and Meta Quest headsets.

And finally, Microsoft is updating its Xbox app on iOS and Android so players can buy games and add-on content, join Game Pass, and redeem Perks directly in-app. The changes will initially be available to beta users before launching for everyone via the Google Play Store on Android devices and the Apple Store on iOS.

More details can be found in Microsoft's blog post.
-
WWW.VIDEOGAMER.COM
Counter Strike 2 modder creates Lego Mirage map that can be completely destroyed while playing

Have you ever faced an annoying peeker in Counter Strike 2 and wished you could just blast through the wall to take them out? Well, one amazing CS2 modder has provided just that experience with a completely destructible LEGO version of De_Mirage.

Created by modder Lillykyu, available to download here, the new LEGO Mirage map recreates the iconic Counter Strike: Global Offensive map with physically accurate LEGO bricks. Currently only available to play with the 2v2 Wingman game mode, LEGO Mirage makes it so that every section of the map can be knocked over or destroyed, offering a brand-new layer of strategy to Valve's shooter.

With the map, you can shoot out sections of walls to take out enemies hiding behind cover, or simply open up a way for you to take foes out from new angles you simply couldn't before. Additionally, if you're a fan of chaos, you can lob a big fat grenade around a corner and watch the map crumble into bits.

The LEGO Mirage map looks like a tonne of fun to play in Counter Strike 2, but this is far from the only awesome map made by the creator. In the past, Lillykyu has also created a functional Jetpack Joyride within the serious shooter which is as hilarious as it is cursed.

For more Valve coverage, read about how one Deadlock artist claims that Half-Life 3 is actually, really real this time. Additionally, check out the recent leak from renowned leaker Gabe_Follower that suggests it actually might be.

Counter Strike 2 | Platform(s): Linux, PC | Genre(s): First-Person Shooter, Shooter